Many developers have determined that BigDecimal is the only way to deal with money. Often they cite that by replacing double with BigDecimal they fixed a bug or ten. What I find unconvincing about this is that perhaps they could have fixed the bug in the handling of double, and avoided the extra overhead of using BigDecimal.

By comparison, when asked to improve the performance of a financial application, I know that at some point we will be removing BigDecimal if it is there. (It is usually not the biggest source of delays, but as we fix the system it moves up to become the worst offender.)

### BigDecimal is not an improvement

BigDecimal has many problems, so take your pick, but an ugly syntax is perhaps the worst sin.

- BigDecimal syntax is unnatural
- BigDecimal uses more memory
- BigDecimal creates garbage
- BigDecimal is much slower for most operations (there are exceptions)

```java
mp[i] = round6((ap[i] + bp[i]) / 2);
```

The same operation using BigDecimal is not only longer, but there is a lot of boilerplate code to navigate:

```java
mp2[i] = ap2[i].add(bp2[i])
        .divide(BigDecimal.valueOf(2), 6, BigDecimal.ROUND_HALF_UP);
```

Does this give you different results? double has 15 digits of accuracy and these numbers have far fewer than 15 digits. If the prices had 17 digits, BigDecimal would make a difference, but it would not work for the poor humans who also have to comprehend the prices (i.e. prices never get incredibly long).

### Performance

If you have to incur coding overhead, it is usually done for performance reasons, but that doesn't make sense here.

```
Benchmark                            Mode  Samples       Score  Score error  Units
o.s.MyBenchmark.bigDecimalMidPrice  thrpt       20   23638.568      590.094  ops/s
o.s.MyBenchmark.doubleMidPrice      thrpt       20  123208.083     2109.738  ops/s
```

### Conclusion

If you don't know how to use rounding with double, or your project mandates BigDecimal, then use BigDecimal. But if you have the choice, don't just assume that BigDecimal is the right way to go.

### The code

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

import java.math.BigDecimal;
import java.util.Random;

@State(Scope.Thread)
public class MyBenchmark {
    static final int SIZE = 1024;
    final double[] ap = new double[SIZE];
    final double[] bp = new double[SIZE];
    final double[] mp = new double[SIZE];
    final BigDecimal[] ap2 = new BigDecimal[SIZE];
    final BigDecimal[] bp2 = new BigDecimal[SIZE];
    final BigDecimal[] mp2 = new BigDecimal[SIZE];

    public MyBenchmark() {
        Random rand = new Random(1);
        for (int i = 0; i < SIZE; i++) {
            int x = rand.nextInt(200000), y = rand.nextInt(10000);
            ap2[i] = BigDecimal.valueOf(ap[i] = x / 1e5);
            bp2[i] = BigDecimal.valueOf(bp[i] = (x + y) / 1e5);
        }
        doubleMidPrice();
        bigDecimalMidPrice();
        for (int i = 0; i < SIZE; i++) {
            if (mp[i] != mp2[i].doubleValue())
                throw new AssertionError(mp[i] + " " + mp2[i]);
        }
    }

    @Benchmark
    public void doubleMidPrice() {
        for (int i = 0; i < SIZE; i++)
            mp[i] = round6((ap[i] + bp[i]) / 2);
    }

    static double round6(double x) {
        final double factor = 1e6;
        return (long) (x * factor + 0.5) / factor;
    }

    @Benchmark
    public void bigDecimalMidPrice() {
        for (int i = 0; i < SIZE; i++)
            mp2[i] = ap2[i].add(bp2[i])
                    .divide(BigDecimal.valueOf(2), 6, BigDecimal.ROUND_HALF_UP);
    }

    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(".*" + MyBenchmark.class.getSimpleName() + ".*")
                .forks(1)
                .build();
        new Runner(opt).run();
    }
}
```

## Comments

## Alexander Lee replied on Mon, 2014/07/07 - 7:31am

While it is true that BigDecimal is far from perfect, I think this article misses the point.

BigDecimal is used as it represents a precise decimal number, which double and float do not, as they represent floating-point numbers which are approximations of real numbers.

http://en.wikipedia.org/wiki/Floating-point_number

No amount or type of rounding using floating-point numbers will give you the precise decimal result every time (e.g. for money) because of the way they are represented internally. In addition, the way floating point numbers are represented and stored on databases means that you can persist a floating-point number, and get a slightly different result (decimal precision wise) when you read it back.
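For illustration (not part of the original comment), a minimal check of this point:

```java
import java.math.BigDecimal;

public class ExactDecimalCheck {
    public static void main(String[] args) {
        // binary floating point cannot represent 0.1 or 0.2 exactly,
        // so the sum is not exactly 0.3
        System.out.println(0.1 + 0.2 == 0.3); // false

        // BigDecimal built from strings stores the decimal digits exactly
        BigDecimal sum = new BigDecimal("0.1").add(new BigDecimal("0.2"));
        System.out.println(sum.compareTo(new BigDecimal("0.3")) == 0); // true
    }
}
```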

As such, BigDecimal (or decimal types in general) is the only correct way to represent precise decimal numbers such as money. Even small decimal precision differences get magnified if you multiply large monetary amounts.

If you are not concerned about being precise, and a decimal approximation will do, then double or float can be used.

http://stackoverflow.com/questions/3730019/why-not-use-double-or-float-to-represent-currency

http://www.javapractices.com/topic/TopicAction.do?Id=213

## John J. Franey replied on Mon, 2014/07/07 - 11:43am in response to: Alexander Lee

+1

Alex Trebek: Peter, you have the board.

Peter: Thanks Alex. I'll take 'Optimum Performance Limitations' for $400.00.

Alex: And the answer: BigDecimal. <ding>

Alexander Lee: How do I represent real numbers in Java programs?

Alex Trebek: Correct. Alexander, you now have the board.

## Tim Garrett replied on Mon, 2014/07/07 - 12:53pm

I'm not a stellar mathematician, but wouldn't fractional holdings of mutual fund shares priced to 4 decimal places be enough to start causing money to be lost in conversion?

## Peter Lawrey replied on Mon, 2014/07/07 - 2:33pm in response to: Alexander Lee

This is all true for amounts greater than $70 trillion if you need cent accuracy. However, if you are confident you will always have smaller values, you could be introducing a real problem, such as GC pauses, to avoid a problem which never actually happens.
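Peter's $70 trillion threshold can be checked with `Math.ulp`, which reports the spacing between adjacent double values at a given magnitude (a small sketch, not from the original thread):

```java
public class UlpCheck {
    public static void main(String[] args) {
        // spacing between adjacent doubles near $70 trillion: still under a cent
        System.out.println(Math.ulp(70_000_000_000_000.0)); // 0.0078125
        // an order of magnitude higher, the spacing exceeds a cent
        System.out.println(Math.ulp(700_000_000_000_000.0)); // 0.125
    }
}
```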

The link you provided got 206 votes, but failed to provide one specific example where rounding doesn't fix the problem, even when asked for one.

## Peter Lawrey replied on Mon, 2014/07/07 - 2:40pm in response to: Tim Garrett

If you have money with 4 decimal places, you will start losing (or gaining) 0.01 of a cent at $170 billion. If you have more than 7 quintillion shares worth a cent each (i.e. $70 trillion worth, which is more than the US national debt) you could get a one cent rounding error, but if the price changes by 0.01 of a cent you could gain or lose $700 million, and possibly a cent in rounding error.

## Peter Lawrey replied on Mon, 2014/07/07 - 2:55pm in response to: Alexander Lee

Rounding is still required in many cases. This is a real example where BigDecimal was used: <pre>(43 / 7) * 7 == 43</pre> is false unless you use additional rounding.
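A sketch of that example in code: at a fixed 6-decimal scale the round trip fails, and only an explicit final rounding restores the identity:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundTrip {
    public static void main(String[] args) {
        BigDecimal seven = BigDecimal.valueOf(7);
        // 43 / 7 rounded to 6 decimal places: 6.142857
        BigDecimal q = BigDecimal.valueOf(43).divide(seven, 6, RoundingMode.HALF_UP);
        BigDecimal back = q.multiply(seven);
        System.out.println(back); // 42.999999, not 43
        // a final rounding step is still needed to recover 43
        System.out.println(back.setScale(2, RoundingMode.HALF_UP)); // 43.00
    }
}
```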

## Loren Kratzke replied on Mon, 2014/07/07 - 3:27pm in response to: Peter Lawrey

First, I am a big fan of your writings on performance, Peter, but I must take the other side on this one for practicality reasons.

To realize performance gains on this topic, you would need to publish an API that does all of the marshaling between Strings and all numeric data types and then publish a disclaimer that the API only produces accurate results under certain circumstances. And by the time you published this API it would probably need to look something like BigDecimal.

The reason you would need to publish an API and implementation is because it would only be a matter of time before some poor developer working on a large code base that used rounded floating point values forgot to round a value up/down/whatever and a bug emerges. No longer does an advertised price match the amount paid and a customer is stuck in a hopeless loop. Or perhaps telemetry controls thrash because the fuel valve is set too high, too low, too high, now too low again. It is a train wreck (or plane wreck, or ship wreck) waiting to happen. But it would be a fast train at least. Here I give you credit because you know Java performance like nobody's business. Just take my advice. Don't do this. Leave it on the drawing board. Keep it as an academic discussion.

## Peter Lawrey replied on Mon, 2014/07/07 - 3:58pm in response to: Loren Kratzke

Thank you for the feedback.

The problem I have with BigDecimal is the assumption that it fixed all these problems, when it doesn't. You have the same issues where fractions (or the result of a division) cannot be represented by BigDecimal (or long), but they are rarer, which means they can be more surprising. For the projects using BigDecimal, there were no fewer errors, just less performance.

e.g.

```java
// reference comparison, not value comparison - a common mistake
if (bd1 == bd2) // or if (bd1 != bd2)

BigDecimal bd1 = BigDecimal.valueOf(43).divide(BigDecimal.valueOf(7), 6, BigDecimal.ROUND_HALF_UP);

if (bd1.multiply(BigDecimal.valueOf(7)).compareTo(BigDecimal.valueOf(43)) == 0)
```

That's a lot of code and you still get a representation error.

## Alexander Lee replied on Mon, 2014/07/07 - 8:57pm

Peter. I agree that BigDecimal is a pain to use, to the extent that I use a DecimalUtils class to make it usable. It's also memory hungry, and in some cases can't be used, such as where you need to hold millions of decimals in memory at the same time. I've even removed it from a few systems and projects, where it was possible to do so and where accuracy was not important.

But I'm afraid in general, systems that deal in precise, complex, or large monetary values just aren't one of those you can remove it from without consequence (short of rolling your own decimal). It's not just a case of "knowing how to use rounding correctly", as being smarter with double/float will not make up for the fact that it generally cannot represent decimal values accurately. If you aren't already running into issues using double/float for monetary values in a financial system, then the worry is that the issues are hidden or haven't been caught yet, and you will run into them eventually, or the guy after you will.

In your $70tn/$170bn posts above, the example is too simplistic, and in the real world the result of one calculation goes into another, compounding/magnifying inaccuracies. For instance, a real world example I've come across time and time again is where FX rates are stored and manipulated as double/float. FX rates are generally stored and used to a precision of 2-6 decimal places and rounded to such (this is where rounding can actually hurt), so if you store then retrieve an FX rate as a double/float, the value you get back may be inaccurate by 0.01 or by 0.000001, or somewhere in between. Now, if you just multiply that by a small-sized number such as $1mn you get enough of an error, let alone $70tn/$170bn. But it gets worse, as in many cases FX rates are multiplied with each other to get a cross FX rate, and if both the inputs have errors the cross FX rate can be even further off, which is then multiplied by the $70tn/$170bn. And that's not even the last calculation that needs to happen, as the inaccurate or potentially incorrect result just obtained goes into an equation for calculating something else, such as Risk Exposure, PnL, etc. Another example of how even small inaccuracies can build up is computing compounded daily interest on a 25-year mortgage, where any inaccuracies enter a magnifying feedback loop.

Decimal types (e.g. BigDecimal) DO solve this problem for monetary values, but not because they can represent all fractions exactly, but because everyone agrees (by unspoken convention) not to use those monetary values that can't be represented exactly as a decimal. Yes, decimals can still give a representation error, as you still can't represent 1/3 exactly using decimal, but when was the last time you heard someone say "that will be 1/3 of a dollar please"? You don't, they either price it at 33 cents exactly, or the unspoken but agreed rounding convention is half-up, also making it 33 cents exactly. However no one agrees that you shouldn't use 0.1 because it can't be exactly represented as a double/float. So for monetary values, decimals are an exact representation including for all fractions that are used, whereas double/float is not.

As such, using a decimal type (e.g. BigDecimal) to store decimal values such as monetary values means no loss of accuracy: if you store 0.1 in a decimal you get exactly 0.1 back. However, if you store 0.1 in a double/float, you get 0.100000001490116119384765625 back. Now as long as the same decimal numbers to the same accuracy are used, a counterparty and I can use two different but equivalent equations to come to exactly the same decimal answer. However, if double/float numbers are used by both sides, the end result could be significantly different depending on how the inaccuracies have compounded.
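To see what is actually stored, the `BigDecimal(double)` constructor prints the binary value exactly (a quick sketch; the 0.100000001490116119384765625 quoted above is the float representation, while double carries more digits):

```java
import java.math.BigDecimal;

public class StoredValue {
    public static void main(String[] args) {
        // what a float actually holds for 0.1 (widened to double, printed exactly)
        System.out.println(new BigDecimal(0.1f)); // 0.100000001490116119384765625
        // what a double actually holds for 0.1
        System.out.println(new BigDecimal(0.1));  // 0.1000000000000000055511151231257827021181583404541015625
        // a BigDecimal built from the string is exactly 0.1
        System.out.println(new BigDecimal("0.1")); // 0.1
    }
}
```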

## Alexander Lee replied on Mon, 2014/07/07 - 8:38pm

Example: Say $1,000,000 is borrowed at an interest rate of 5% per day, compounded every day.

Say the interest rates are used and stored to a precision of 10 decimal places, because there are some rates that are that sensitive.

Interest rate represented as decimal: 1.0500000000000000

Interest rate represented as double: 1.0499999523162842

Interest rate as decimal rounded to 10dp: 1.05

Interest rate as a double rounded to 10dp: 1.0499999523

After 5 days using decimal interest rate: $1276281.5625

After 5 days using double interest rate: $1276281.2726

Difference = 0.2899 ($0.29)

After 10 days using decimal interest rate: $1628894.6268

After 10 days using double interest rate: $1628893.8868

Difference = 0.7400 ($0.74)

What do you think it would be after 365 days?

How could rounding fix this without losing precision?
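A sketch reproducing roughly these figures, assuming the double rate quoted above came from storing 1.05 as a float:

```java
import java.math.BigDecimal;

public class CompoundDiff {
    public static void main(String[] args) {
        // exact decimal compounding: 1,000,000 * 1.05^5
        BigDecimal exact = new BigDecimal("1000000")
                .multiply(new BigDecimal("1.05").pow(5)); // 1276281.5625 exactly
        // the rate as stored in a float is 1.0499999523162842
        double approx = 1_000_000 * Math.pow((double) 1.05f, 5);
        System.out.println(exact.toPlainString());
        System.out.printf("%.4f%n", approx);
        // about 0.29 after only 5 days
        System.out.printf("difference = %.4f%n", exact.doubleValue() - approx);
    }
}
```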

## Peter Lawrey replied on Tue, 2014/07/08 - 1:47am in response to: Alexander Lee

That's a good example. Using some actual code and realism you can illustrate the dilemma. Let's assume no one would accept 5% interest per day; even in the UK the highest interest rate is 4270% per year (insane, I know).

Let's assume it is 5% per year. How long does it take before you get the inevitable 1 cent rounding error? As you see, a rounding error of 1 cent. But this is where a touch of realism makes all the difference. Say you are a bank and you have a client owing $287 million who has missed every one of 116 repayment periods in a row. Would your primary concern be that a) there is a 1 cent rounding error, or b) it doesn't look like the client is going to pay this debt? This is around the amount which brought down Barings Bank.

Let's try a smaller, more realistic interest rate like 0.0005% per business day (252 days per year). This is about 12.67% per year.

After nine years of no repayments, the amount is so large you get a 1 cent error. Or you could be more worried that your client owes you $75 billion and, more likely than not, it's going to cost you more than 1 cent to get it back, possibly without recovering even a portion of it.

## Peter Lawrey replied on Tue, 2014/07/08 - 1:57am in response to: Peter Lawrey

But let us consider we have used BigDecimal, with the assumption that we don't need to worry about additional rounding because we use BigDecimal. How long does it take to get an error?

We find that BigDecimal fails after just 21 iterations. How can this be, if BigDecimal is the answer? Because you still need sensible rounding, and if you still need sensible rounding and your use case is a realistic one, double is highly likely to do what you need.

## Alexander Lee replied on Tue, 2014/07/08 - 4:49am

Sorry Peter, I'm about to give up and leave you to it. But just a few things.

You can't pick and choose which examples your rounding works with; it has to work across the board or it doesn't work. Okay, perhaps there are edge cases, but the example I gave above, while not totally realistic, is not totally off the mark either. Central banks lend to each other at daily rates on much larger sums, so if you reduce the rate to 0.5% and increase the amount to $1bn, you get pretty much the same issue.

Your example above is failing because you aren't using BigDecimal correctly. In fact you aren't even doing the calculation correctly, as you are rounding with every iteration and setting the scale to 2 ??? You shouldn't be rounding, especially to 2dp, in the middle of a calculation, especially when that calculation is a compounding or iterative one, as you're bound to get rounding error.

Also, I don't see you using a MathContext anywhere. If you're using BigDecimal to manipulate large numbers, then you need to set the precision high enough (larger than you need by at least one digit) so that it doesn't round when executing arithmetic operations, except in the case where it hits a recurring number, and even then the rounding will have almost no effect. I tend to use a default MathContext set to a precision of 20 for all intermediate calculations that involve BigDecimal.

e.g.

```java
MathContext mc = new MathContext(20, RoundingMode.HALF_UP);

bigDecimalA.multiply(bigDecimalB, mc);
```

## Peter Lawrey replied on Tue, 2014/07/08 - 5:08am in response to: Alexander Lee

When you say that I shouldn't be rounding to 2 decimal places, can you explain what a fraction of a cent really is? You don't have a fraction of a cent in the real world.

If you say the BigDecimal calculation is incorrect "because you aren't using BigDecimal correctly", then I can say your double calculation is incorrect "because you aren't using double correctly".

> I tend to use a default MathContext set to a precision of 20 for all intermediate calculations that involve BigDecimal.

On this, I agree, you must have sensible rounding in your calculations, without it you will get errors.

## Peter Lawrey replied on Tue, 2014/07/08 - 5:14am in response to: Alexander Lee

> so if you want reduce the rate to 0.5% and increase the amount to $1bn, and you get pretty much the same issue.

You are not going to have compound interest on a billion dollar loan. You will have a repayment period and amounts you need to pay over that period. The sort of flexible loans which can be any amount don't exist when it comes to big contracts.

The point I am trying to make is that real programs based on real business problems don't have an issue anywhere near as often as developers like to believe they might in theory. The issues double has are also issues which face BigDecimal: less often, but also less obviously.

## Alexander Lee replied on Tue, 2014/07/08 - 7:37am in response to: Peter Lawrey

There is no representation error if you use BigDecimal correctly:
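The code sample appears to have been lost from this comment; presumably it showed something along these lines, where a wide-enough MathContext makes the earlier 43/7 round trip exact:

```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class WithMathContext {
    public static void main(String[] args) {
        MathContext mc = new MathContext(20, RoundingMode.HALF_UP);
        BigDecimal seven = BigDecimal.valueOf(7);
        // divide keeps 20 significant digits: 6.1428571428571428571
        BigDecimal q = BigDecimal.valueOf(43).divide(seven, mc);
        // multiplying back, rounded to the same precision, recovers 43 exactly
        BigDecimal back = q.multiply(seven, mc);
        System.out.println(back.compareTo(BigDecimal.valueOf(43)) == 0); // true
    }
}
```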

## Erin Garlock replied on Tue, 2014/07/08 - 7:37am

The function round6 is cute, except it fails for values < 0.0, though I am sure it could be made smarter.

```
 2 : 0.99
 1 : 0.0
 0 : -0.989999
-1 : -1.979999
-2 : -2.969999
-3 : -2.969999
-2 : -1.979999
-1 : -0.989999
 0 : 0.0
 1 : 0.99
```
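For what it's worth, a negative-safe variant is a small change (a sketch; `Math.round` breaks ties toward positive infinity, which differs slightly from the original's half-up behaviour at exact half-cases):

```java
public class Round6 {
    // original version: the (long) cast truncates toward zero,
    // which biases negative values
    static double round6Broken(double x) {
        final double factor = 1e6;
        return (long) (x * factor + 0.5) / factor;
    }

    // Math.round rounds to the nearest long, handling negative inputs
    static double round6(double x) {
        final double factor = 1e6;
        return Math.round(x * factor) / factor;
    }

    public static void main(String[] args) {
        System.out.println(round6Broken(-1.1)); // -1.099999, off by one step
        System.out.println(round6(-1.1));       // -1.1
    }
}
```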

## Erin Garlock replied on Tue, 2014/07/08 - 9:24am in response to: Peter Lawrey

A fraction of a cent may not really be a real problem initially because payouts are in whole pennies, but investments that operate on a DRIP (Dividend Reinvestment Plan) are subject to the multiplicity of rounding errors, if not done correctly, as are other types of decimal based calculations. e.g. I own 100 shares of XYZ stock, at $28.45/share, and the dividend is $.08/share. With the DRIP, I get 3.55625 shares. So now I have 103.55625 shares - feel free to round however you like. Repeat every month for a couple years.

## John Davies replied on Tue, 2014/07/08 - 12:03pm

Interesting post, thank you. I would always advocate double over BigDecimal, but sadly the precision is not so much "needed" as mandated in many banking situations. There are very specific rules that dictate how interest and other similar "sums" are calculated, and IEEE double simply does not carry the precision required. If you were to introduce a cent or penny difference in your calculations compared to your counterparty then you'd create a nightmare in reconciliation. While it's true many of these could seemingly be "fixed" with the use of rounding, it would also mean that the counterparty would need to be using the same set of rules, which they often do, but not with doubles. What happens is that a not-insignificant percentage of trades fail to match, and by fail, 0.01 currency units is enough to trigger a failed match, which itself would cost many thousands.

Having spent some time looking at BigDecimal, I totally agree that it's a bloody awful implementation. What we've done in many trading systems is to replace it with longs: double*100 or double*1000, depending on the number of decimal places of the currency. It doesn't solve the maths but it does solve the sizeof(BigDecimal) in memory or on disk. You may have also noticed that BigDecimal.doubleValue() is horribly inefficient: it creates a String, then does a Double(string) and then a doubleValue(), basically filling your JVM with crap every time you call it.
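A sketch of the scaled-long representation John describes (the class and helper names are illustrative, not from any library):

```java
public class LongCents {
    static final long SCALE = 100; // 2 decimal places; use 1000 for 3dp currencies

    // convert a price to scaled integer units, rounding once at the boundary
    static long toCents(double price) {
        return Math.round(price * SCALE);
    }

    static double fromCents(long cents) {
        return (double) cents / SCALE;
    }

    public static void main(String[] args) {
        long a = toCents(10.05);
        long b = toCents(10.07);
        // pure integer arithmetic in between: no representation error,
        // and any rounding on divide is explicit
        long mid = (a + b) / 2;
        System.out.println(fromCents(mid)); // 10.06
    }
}
```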

Few people understand the internals of such things though; a typical solution is to use BigDecimal everywhere and just buy another few machines to compensate for the loss in performance and the extra memory needed.

So, I totally agree with you in avoiding BigDecimal whenever you can, but sadly there are a few occasions where it's that or write your own implementation.

-John Davies-