Accuracy of math operations on decimal numbers


Is there a way, or perhaps an add-on, to ensure that math operations on decimals avoid the well-known problems/intricacies JavaScript has with decimal numbers?
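For anyone unfamiliar with the problem being asked about, here is a minimal sketch of the classic IEEE 754 surprise in plain JavaScript:

```javascript
// 0.1 and 0.2 have no exact binary floating-point representation,
// so their sum picks up a tiny error.
const sum = 0.1 + 0.2;
console.log(sum);            // 0.30000000000000004
console.log(sum === 0.3);    // false

// Rounding to a fixed precision hides the error for display,
// but does not remove it from subsequent arithmetic.
console.log(sum.toFixed(2)); // "0.30"
```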


We’ve used BigNumber.js in one of our apps for that sort of thing. Not sure if there are better libs out there or not.


What we do in the embedded world is avoid them altogether and use integer math. We take the domain of values for a given problem and divvy the range up into units of 1. For example, if we are making a voltage sensor and need accuracy down to 1 millivolt over a range of 30 volts, we will use an integer field large enough to hold values from 0 to 30,000. For storage, transmission, and computation, we use the integer representation. We only marshal the decimal notation for the UX. Not a JS-specific answer, but I find myself using habits learned on the embedded side over here in web-ville.
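A minimal sketch of that approach in JavaScript, using the hypothetical voltage-sensor numbers from above (names like `formatVolts` are illustrative, not from any library):

```javascript
// Store and compute in whole millivolts; no fractions ever enter the math.
const MV_PER_VOLT = 1000;

const readingA = 12345;              // 12.345 V
const readingB = 4200;               // 4.2 V
const total = readingA + readingB;   // exact integer addition: 16545 mV

// Marshal to decimal notation only at the UI boundary.
function formatVolts(mv) {
  return (mv / MV_PER_VOLT).toFixed(3) + " V";
}

console.log(formatVolts(total)); // "16.545 V"
```

Because every stored value is an integer, addition, subtraction, and comparison are exact; the division happens once, for display only.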


Any drawbacks to this approach of using integers and integer math in place of decimals?


Yes, the biggest is that a value, as it travels through your system, carries the burden of an explanation for future consumers and maintainers of the code. Because it is an unnatural pattern on non-embedded systems, it can be a stumbling block. In the embedded world, it is expected.

Then there is the issue of marshalling across the integer-to-decimal boundary for human viewing. That code needs to exist, and it offers an opening for the very error you are trying to avoid to come right back into the system. What you hope is that any error introduced there is far below the threshold of significance for your problem domain.
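To make that boundary concern concrete, here is a sketch (the `parseVoltsToMv` name is hypothetical) of parsing a decimal string back into integer millivolts. The float multiply can land slightly off the integer grid, so you round at the boundary before the value re-enters the system:

```javascript
function parseVoltsToMv(text) {
  // parseFloat(text) * 1000 may be fractionally off due to binary
  // floating point; Math.round snaps it back onto the integer grid.
  return Math.round(parseFloat(text) * 1000);
}

console.log(parseVoltsToMv("16.545")); // 16545
```

If you forget the rounding step here, the float error survives the conversion and the integer-math guarantee is lost.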

Those type conversions can also add extra cycles that hurt performance. That said, in our web/JavaScript/API world, everything flows through a text translation at some point to travel over HTTP, so that sort of marshalling happens all over the place anyway.