Origin Version (Select Help-->About Origin): 7.5885
Operating System: Windows XP
Hello,
I recently discovered a problem when calculating simple differences of floating-point numbers. To simplify, I reduced the code to a single line:
void test()
{
    float d = 1.2 - 0.3;
}
When I check this in the debugger, it shows d = 0.8999999761581421. The same code with double precision gives d = 0.8999999999999999.
Why doesn't it calculate the exact value of 0.9?
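For reference, here is a minimal sketch of the same effect in plain C (I believe Origin C behaves the same way, since both use IEEE 754 arithmetic; the printed values below match what my debugger showed):

#include <stdio.h>

int main(void)
{
    float  f = 1.2 - 0.3;   /* double subtraction, result narrowed to float */
    double d = 1.2 - 0.3;   /* double subtraction */

    /* Neither 1.2 nor 0.3 is exactly representable in binary,
       so the difference cannot come out as exactly 0.9. */
    printf("float : %.16g\n", f);   /* prints 0.8999999761581421 */
    printf("double: %.16g\n", d);   /* prints 0.8999999999999999 */
    return 0;
}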
The problem became apparent when I tried to calculate something like this:
int x = (int)((1.2 - 0.3) / 0.1 + 1);
where I wanted to use a plain typecast to convert to integer, because I know the result should always be an integer. But in the case shown, the result is 9 instead of 10: the expression evaluates to 9.999... and the cast then truncates. So I always have to use round() to get the right result. I can live with that, but it really surprised me. Is it really meant to work this way? Shouldn't it give the exact result? Or is there a compiler switch or something similar?
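In case it helps, here is a minimal sketch of the rounding workaround I ended up with (round_to_int is just a helper name I made up; floor(v + 0.5) is fine for non-negative values like these, though not a fully general round):

#include <math.h>

/* Hypothetical helper: round to the nearest integer instead of
   truncating toward zero, as a plain (int) cast does. */
int round_to_int(double v)
{
    return (int)floor(v + 0.5);
}

void test2()
{
    /* (1.2 - 0.3) / 0.1 + 1 evaluates to 9.999...; rounding gives 10 */
    int x = round_to_int((1.2 - 0.3) / 0.1 + 1);
}

Thanks in advance.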