RSRP-273440 Discussion

Hi.

The entry can be found here: RSRP-273440.
To sum it up, it is about the warning that a comparison of a floating-point value with 0.0f should be changed to use an epsilon. In general this inspection makes sense, because a value that is mathematically 2 may be stored as something like 1.9999999 in a float, so the comparison can be false.

However, for me and my colleagues the comparison with 0.0f, which we use quite often, does not need this check and produces too much noise...
The RSRP entry was closed as "won't fix", and I just want to discuss why it will not be changed for the comparison with 0.0f.
Is there a technical reason why the 0 could actually be a 0.0000000000000000000000000000000000000000000001 or something like that?

I'm interested in reducing unnecessary warnings. If it is a valid concern, that's fine, but I think it is not in the case of 0.

Regards,
Sven

6 comments

There are a few concerns:

1) Due to arithmetic operations, the result could be 0.0...00001 instead of 0.0 (for example, something like this: 2.0/3.0 - 1.0/3.0 - 1.0/3.0)
2) It is forbidden to divide by 0.0, but dividing by 0.0000...00001 will automatically lead to overflow
3) Depending on the user domain, figures closer to each other than epsilon usually should be counted as equal


Hello Evgeny,

thank you for your reply and this information.
I see why you won't change it as it could be a problem indeed. :)

What epsilon value you would suggest for testing a float with 0.0f? Using 0.0f as epsilon looks a little bit scarry^^

Regards,
Sven


The exact epsilon value depends on the user domain and the semantics behind the figures :)
For example, if you're developing cartography software, 1e-2 is enough (nobody wants accuracy better than 1 cm). If you're developing a quantum physics app, then 1e-20 could be appropriate...


I see. Thanks again. :)


There's nothing wrong with dividing floating-point values by 0. IEEE 754/854 has very well-defined behavior in this regard, which is to return +infinity in this case. The result is the same when dividing by any denormalized number, which again will saturate the result to infinity (positive or negative). If your code is designed to properly handle infinite values (for example, I use this for very clean and efficient arithmetic handling of 2D and 3D bounding boxes), then saturating to infinity is a wonderful thing. Do not fear the zero denominator (or dividing by a denormal).


Keep in mind that there may be cases where you do not know what the user domain is. This is the case when you're writing a mathematical function, a library, or a general-purpose application. In this situation, you need to manage the finite precision of floating-point arithmetic itself rather than design around some fixed physical model. Indeed, if your floating-point values are tied to some fixed scale (like meters, with 1 cm accuracy), then it would seem you really should be using a fixed-point representation rather than floating point. If you're dealing with mathematical code, library code, or general application code, then Bruce Dawson's excellent article http://www.cygnus-software.com/papers/comparingfloats/Comparing%20floating%20point%20numbers.htm is a great way to learn how to properly compare floating-point values at arbitrary scale.

