Computers have limited abilities when it comes to storing numbers. In particular, the 'double' format limits precision to about 15 or 16 significant decimal digits. When you give a decimal value such as 14.7 to a computer, that value is converted to the binary representation used to store a double. For many decimal values this conversion is not exact: the stored binary number is as close to the true value as the format allows, but one bit more or one bit less would be even further away. This is a fact of computer life that programmers should be aware of and handle in their code.
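
A minimal C sketch illustrates the point, assuming an ordinary IEEE 754 'double' (the usual case, though not stated above); the printed digits shown in the comments are what one would typically see on such a system:

    #include <stdio.h>

    int main(void)
    {
        double x = 14.7;   /* the compiler stores the nearest representable double, not 14.7 exactly */

        /* With enough digits, the stored approximation becomes visible. */
        printf("%.20f\n", x);              /* typically prints 14.69999999999999928946 */

        /* A well-known consequence: decimal arithmetic that is exact on paper
           need not compare equal once each value has been rounded to binary. */
        printf("%d\n", 0.1 + 0.2 == 0.3);  /* typically prints 0 */

        return 0;
    }

Because of this, comparing doubles for exact equality is usually replaced by checking whether two values agree within a small tolerance appropriate to the problem.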