2010.08.13

A recent Java project (replacing a very slow Excel spreadsheet) required me to do some calculations involving money. The issues involved in using floating point arithmetic to handle monetary calculations are well documented (Google it if you don’t believe me) so, as any smart Java programmer would do, I started writing my program using BigDecimal to represent the money.
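For anyone who hasn’t run into it, here’s a minimal sketch of the kind of surprise that pushes people toward BigDecimal in the first place (the numbers are just illustrative, not from my project):

```java
import java.math.BigDecimal;

public class WhyNotDouble {
    public static void main(String[] args) {
        // The classic floating point surprise: 0.1 + 0.2 is not exactly 0.3.
        System.out.println(0.1 + 0.2); // 0.30000000000000004

        // BigDecimal built from Strings keeps the decimal values exact.
        BigDecimal a = new BigDecimal("0.10");
        BigDecimal b = new BigDecimal("0.20");
        System.out.println(a.add(b));  // 0.30
    }
}
```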

Unfortunately, when I started checking my results, I found that my answers wouldn’t match up with Excel’s!

Worse, my answers didn’t match the ones computed directly in the data source that both Excel and my program were pulling their numbers from!

And even worse, Excel’s answers exactly matched those from the data source.

I was astonished. How, I asked myself, could floating point arithmetic beat arbitrary precision so badly?

Then, suddenly (and by suddenly I mean a half-hour later), it hit me.

BigDecimal has a notion of something called “scale.” The scale is the number of digits to the right of the decimal point, and operations like division round their results to a scale. So the scale effectively determines how accurate your BigDecimal calculations will be: set it too low and digits get silently thrown away.
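Here’s a small sketch of what that looks like in practice (the values are made up; the point is the scale of the results):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class ScaleDemo {
    public static void main(String[] args) {
        // A number parsed from a source with four decimal places has scale 4.
        BigDecimal amount = new BigDecimal("1234.5678");
        System.out.println(amount.scale()); // 4

        // divide(divisor, roundingMode) keeps the dividend's scale (4 here),
        // so the quotient is rounded to four decimal places.
        System.out.println(amount.divide(new BigDecimal("7"), RoundingMode.HALF_UP));
        // 176.3668

        // Asking for a larger scale explicitly keeps more of the true quotient.
        System.out.println(amount.divide(new BigDecimal("7"), 10, RoundingMode.HALF_UP));
        // 176.3668285714
    }
}
```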

As it turned out, the scale on all of my BigDecimals was a measly four digits (I was following good practice by using BigDecimal’s String constructor, and my data source returned numbers with four decimal places). For comparison, printing out my double at any given point would give me ~10 digits. As soon as I set the scale properly, the calculations became accurate and my BigDecimal formulas started working the way I expected them to. :)
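The fix amounted to widening the scale before doing the math rather than letting it default to whatever came out of the data source. Roughly like this (the rate/balance numbers are invented for illustration, and 16 is just a comfortably large scale, not a magic value):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class ScaleFix {
    public static void main(String[] args) {
        // Values arrive from the data source with four decimal places (scale 4).
        BigDecimal annualRate = new BigDecimal("0.0425");

        // Left at scale 4, the division collapses to almost nothing.
        System.out.println(annualRate.divide(new BigDecimal("365"), RoundingMode.HALF_UP));
        // 0.0001

        // Widening the scale up front means intermediate results keep enough digits.
        BigDecimal dailyRate = annualRate.setScale(16, RoundingMode.HALF_UP)
                                         .divide(new BigDecimal("365"), RoundingMode.HALF_UP);
        System.out.println(dailyRate);
        // 0.0001164383561644
    }
}
```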