I want to understand how small numerical errors propagate through a computation and may drastically change the final result. To this end, I consider a toy world in which every number is limited to exactly 5 significant digits.
The output of
$MaxPrecision = 5;  (* cap the precision of arbitrary-precision results at 5 digits *)
$MinPrecision = 5;  (* and never let it drop below 5 digits *)
x = SetPrecision[10^(-10), 5];  (* a tiny deviation, carried with 5 significant digits *)
x // FullForm
y = 1 + x;
y // FullForm
y - 1 // FullForm
is
1.`5.*^-10
1.`5.
1.`5.*^-10
The last line puzzles me. The FullForm of y shows that a Precision of 5 significant digits is not sufficient to resolve the small deviation x from 1. However, y - 1 still yields the full deviation instead of 0.
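To check whether y carries more information than its displayed five digits, one could query it directly. This is only a sketch of a possible probe (Precision, Accuracy and SetPrecision are built-in functions), not part of the computation above:
Precision[y]          (* nominal precision of y *)
Accuracy[y]           (* digits known to the right of the decimal point *)
SetPrecision[y, 20]   (* raising the precision exposes extra digits from the internal binary representation, if any are stored *)
y == 1                (* equality test under significance arithmetic *)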
If one instead writes the displayed value explicitly,
1.`5. - 1 // FullForm
this yields, of course,
0
as it should.
Does this mean that FullForm[y] hides something?
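As a further probe (again only a sketch of what one might try, not something established above), one could ask for more decimal digits than the nominal precision:
RealDigits[y, 10, 15]   (* request 15 decimal digits of y, beyond its nominal 5 *)
y // InputForm          (* InputForm displays the number together with its precision mark *)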