# SSEQotW: Double Trouble

Some of you may remember that last time on Stupid Software Engineering Question of the Week, we concluded that default constructors are evil. This time, I’m going to attempt to prove that doubles are just as bad…

I want to write tests that precisely define the limits of my objects. So let’s say that I’ve a *Lat(double lat)* object that I can construct with a double (yes, it’s another geospatial example…) but that it’s only valid from -90 to +90 inclusive. What do I do? Something like…

```java
@Test
public void testLatValid() {
    new Lat(90);
    new Lat(-90);
}
```

…seems like a good start. But now I want

```java
@Test(expected = IllegalArgumentException.class)
public void testLatTooBig() {
    new Lat(90_and_a_bit);
}

@Test(expected = IllegalArgumentException.class)
public void testLatTooSmall() {
    new Lat(-90_and_a_bit);
}
```

That seems pretty reasonable, right?

The only problem is, what’s *90_and_a_bit*? What’s the smallest double that’s *just* greater than 90? Does adding on *Double.MIN_VALUE* guarantee incrementing up to the next representable double? (Assuming that you’re not at Infinity, of course.)

I actually wrote some code, although I’ve no idea if any of this is guaranteed behaviour, or just platform/compiler/jvm specific… In Java (or, at least in the Java that I’m writing for my current project), the following evaluates to true:

*Double.MIN_VALUE < (2 * Double.MIN_VALUE)*

Nothing too surprising there; Double.MIN_VALUE may be the smallest positive number representable by a double, but you’d still expect two of them to be bigger than just the one. However, the following also evaluates to true:

*90 + Double.MIN_VALUE == 90 + (2 * Double.MIN_VALUE)*

Eh? Surely the left-hand side is smaller than the right – it’s one *min_value* smaller.

The problem is, doubles aren’t “doubles”, they’re “floating-point doubles”, and the “floating” part means that, in order to represent the widest range of numbers possible, floats are a constant trade-off between precision and magnitude. So, in order to store that 90, we lose the bits required to store the *min_value* alongside it – and with them, we lose the ability to make simple assertions about our code.

So, what does this mean? What about…

- does 90 == 90 + Double.MIN_VALUE?
- …and does 90 + Double.MIN_VALUE == 90 + Double.MIN_VALUE + Double.MIN_VALUE?
- …and does it matter whether it adds the min_values together first, or adds one of them onto 90 first?
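As it happens, these can all be checked directly. Here’s a minimal sketch – and note that `Math.nextUp` and `Math.ulp` (in the standard library since Java 6) are arguably the real answer to what *90_and_a_bit* should be:

```java
public class NinetyAndABit {
    public static void main(String[] args) {
        // Double.MIN_VALUE (~4.9e-324) is far below the gap between
        // adjacent doubles at 90 (~1.4e-14), so adding it rounds away:
        System.out.println(90 == 90 + Double.MIN_VALUE);                        // true
        System.out.println(90 + Double.MIN_VALUE
                == 90 + Double.MIN_VALUE + Double.MIN_VALUE);                   // true
        // Grouping the min_values together first makes no difference either:
        System.out.println(90 == 90 + (Double.MIN_VALUE + Double.MIN_VALUE));  // true

        // The smallest double strictly greater than 90:
        System.out.println(Math.nextUp(90.0));
        // The gap between adjacent doubles at magnitude 90:
        System.out.println(Math.ulp(90.0));
    }
}
```

So *90_and_a_bit* is spelled `Math.nextUp(90.0)`, and it sits one *ulp* (about 1.4 × 10⁻¹⁴) above 90 – a gap some 300 orders of magnitude larger than `Double.MIN_VALUE`.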

I don’t know; I haven’t tried them out. But I do know that the fact that there are so many questions indicates either:

- I’ve thought about it too much, or
- Double isn’t appropriate for my Lat constructor.

What I really want is something that’s quantised nicely, like Integer. Which is no good for Lat() because integer degrees are relatively huge, and I want something like millionth-of-a-degree accuracy. So internally each Lat() can hold an integer that’s in millionths of a degree, and when someone asks a Lat for its value it can return ((double) intLat) / 1000000. Which is fine.

But I don’t really want the constructor to look like Lat(Integer millionthsOfADegree). What if it turns out that millionths of a degree is too accurate? Or not accurate enough? If I change it to Lat(Integer hundredthousandthsOfADegree) then all my existing clients are going to find their Lats out by a factor of 10.
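One way out of the unit problem – a hedged sketch, not a definitive design, and `fromDegrees` is a name I’ve made up – is to hide the internal scale behind a named factory method, so clients always speak in degrees and only the private representation knows about millionths:

```java
public final class Lat {
    private static final long SCALE = 1_000_000L; // millionths of a degree (an assumption)
    private final long microDegrees;

    private Lat(long microDegrees) {
        if (microDegrees < -90 * SCALE || microDegrees > 90 * SCALE) {
            throw new IllegalArgumentException("latitude out of range: " + microDegrees);
        }
        this.microDegrees = microDegrees;
    }

    // Clients speak in degrees; SCALE can change without breaking them.
    public static Lat fromDegrees(double degrees) {
        return new Lat(Math.round(degrees * SCALE));
    }

    public double toDegrees() {
        return (double) microDegrees / SCALE;
    }

    public static void main(String[] args) {
        System.out.println(Lat.fromDegrees(51.5074).toDegrees());
    }
}
```

If millionths later turn out to be too coarse or too fine, only `SCALE` and the private constructor change – the degrees-based factory keeps its meaning.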

Bleugh.

I’m not even sure Java gives me fixed-point numbers, and I’m not even sure that they’d help. Maybe someone else can tell me what I *do* want…

I agree.

Computers struggle to store floating-point decimals in 32 or 64 bits, and I can see how this could be considered The Trouble With Doubles. In Java:

`System.out.println(0.1 + 0.1 + 0.1);`

will output:

`0.30000000000000004`

This sucks.

Currently, a big problem is that not enough people understand why and therefore don’t write their code aware of these things.

Perhaps the language shouldn’t allow you to write code with floats at all?

In Java you can turn a double into the quantised value so you can Compare With Confidence using:

```java
Double.doubleToLongBits(Double.MIN_VALUE);      // == 1
Double.doubleToLongBits(2 * Double.MIN_VALUE);  // == 2
```

But there are of course the exact same problems if not used properly:

```java
Double.doubleToLongBits(90 + Double.MIN_VALUE);        // == 4636033603912859648
Double.doubleToLongBits(90 + (2 * Double.MIN_VALUE));  // == 4636033603912859648
```

Another approach would just be to use BigDecimal for all your calculations etc. but make sure you construct appropriately (constructing with doubles will just persist your problem):

```java
BigDecimal bd1 = new BigDecimal(0.1 + 0.1 + 0.1);
BigDecimal bd2 = new BigDecimal("0.3");

System.out.println(Double.doubleToLongBits(bd1.doubleValue()));
System.out.println(Double.doubleToLongBits(bd2.doubleValue()));
```

outputs:

```
4599075939470750516
4599075939470750515
```

As for your current problem, you shouldn’t be comparing doubles with == and the like; you should only compare them at the level of precision you require (just like JUnit does when asserting two doubles are equal by mandating a delta). Perhaps your constructor should be Lat(double lat, double precision)? That way it’s the client that decides how accurate is accurate enough, and the compareTo method deals with comparing Lats of different precision.
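A minimal sketch of that delta-style comparison – `nearlyEqual` is a made-up helper, not a JUnit API, though JUnit’s own `assertEquals(expected, actual, delta)` works the same way:

```java
public class DeltaCompare {
    // Hypothetical helper: "equal" means within the caller's chosen tolerance.
    static boolean nearlyEqual(double a, double b, double delta) {
        return Math.abs(a - b) <= delta;
    }

    public static void main(String[] args) {
        double sum = 0.1 + 0.1 + 0.1;                      // 0.30000000000000004
        System.out.println(sum == 0.3);                    // false: exact comparison fails
        System.out.println(nearlyEqual(sum, 0.3, 1e-9));   // true: close enough
    }
}
```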

Hmm.

The Trouble With Doubles, or Double Trouble. I like them both. But which one’s better? There’s only one way to find out… So, for once, I agree with most of what you’ve said.

Well, some of it, anyway.

The nice thing about BigDecimal is that it really is a decimal, i.e. for decimal fractions, it stores rational numbers where the denominator is a power of 10 (well, actually, it stores the numerator and the power, but you get my meaning).

People seem to understand that there’s no decimal fraction that can exactly represent one-third, but for some reason they find it much harder to realise (or, at least, remember) that binary fractions have just as much trouble storing numbers like one-tenth.

What do you mean my computer can’t store one-tenth? It’s just 0.1, right? How difficult can it be?! Anyway, one way around it is to pander to the masses, and store your data as a decimal fraction, like BigDecimal or decimal64. At least then the limitations of your fractions will be more intuitive to the user.
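Incidentally, BigDecimal is a handy way to *see* what the double 0.1 really holds – the double constructor preserves the binary value exactly, while the String constructor stores exactly one-tenth:

```java
import java.math.BigDecimal;

public class OneTenth {
    public static void main(String[] args) {
        // The double literal 0.1 is only the nearest binary fraction to one-tenth:
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625

        // The String constructor stores one-tenth as an exact decimal:
        System.out.println(new BigDecimal("0.1"));  // prints 0.1
    }
}
```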

Although none of this addresses the problem of variable precision in floating-point numbers, and the magnitude/precision trade-off.

So, really, I’ve just said what you said, but in a nicer way. What I really need to do is point out where you’re wrong.

But you’ll have to wait for my next comment for that…

It looks like CHG has something to say about it, too.

…and no, that wasn’t “my next comment” where I was going to point out where matt went wrong. And this comment isn’t it either. Yes, I know I’ve been slacking in blog-land. Maybe tomorrow…

Good, because that wasn’t controversial and I want a full-on argument about something more interesting.