Compare Floats in Unity

The nearest float to 16.67 is 16.6700000762939453125.

The nearest float to 100.02 is 100.01999664306640625.

Adding the former to itself five times does not produce exactly the latter, so the two values will not compare equal with ==.

In this particular case, comparing with a tolerance on the order of 1e-6 is probably the way to go.
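
For instance, here is a minimal sketch of that approach (the class name and the exact 1e-5f tolerance are illustrative, not part of the original answer):

using UnityEngine;

public class SumToleranceExample : MonoBehaviour
{
    void Start()
    {
        // Accumulate 16.67f six times; each step may pick up rounding error,
        // and the result is not guaranteed to be bit-identical to 100.02f.
        float sum = 0f;
        for (int i = 0; i < 6; i++)
        {
            sum += 16.67f;
        }

        // Exact equality is fragile...
        Debug.Log(sum == 100.02f);

        // ...so compare against a small tolerance instead.
        Debug.Log(Mathf.Abs(sum - 100.02f) < 1e-5f);
    }
}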

Why is comparing float values so difficult?

Okay, I found a solution that works like a charm.

I need to use an array and check the boxes in two for loops: the first one moves the boxes, and the second one checks whether a box went off screen, like below.

public GameObject[] box;
float boundary = -5.5f;
float boxDistance = 1.1f;
float speed = -0.1f;

// Update is called once per frame
void Update ()
{
    // First loop: move every box down by 'speed' units each frame.
    for (int i = 0; i < box.Length; i++)
    {
        box[i].transform.position = box[i].transform.position + new Vector3(0, speed, 0);
    }

    // Second loop: if a box went below the boundary, place it above the next box in the array.
    for (int i = 0; i < box.Length; i++)
    {
        if (box[i].transform.position.y < boundary)
        {
            int topIndex = (i + 1) % box.Length;

            box[i].transform.position = new Vector3(
                box[i].transform.position.x,
                box[topIndex].transform.position.y + boxDistance,
                box[i].transform.position.z);
            break;
        }
    }
}

I attached this script to the MainCamera.

How can I check if a float variable value is equal to 1.08?

Not all decimal values are exactly representable as floating point numbers. When you add or subtract, you may be adding a little bit more (or less) than you think you are. The value you compare against may not be exactly representable either.

You need to compare to an approximation of that value. Typically this is done by selecting an epsilon value representing a tolerance that is "close enough" and checking that the absolute difference between the actual value and the target value is less than that tolerance. For example:

const float tolerance = 0.00001f;
if (Mathf.Abs(1.08f - value) < tolerance)
{
    transform.position = teleporters[1].transform.position;
}

Alternatively, you can make use of Unity's Mathf.Approximately:

if (Mathf.Approximately(1.08f, value))
{
    transform.position = teleporters[1].transform.position;
}

Approximately tests whether the value is within a tolerance based on Mathf.Epsilon, which is the smallest representable difference between two floating point numbers. Sometimes this tolerance may be too small. I suggest defining an epsilon yourself, as in the first example.
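
For example, a small helper along those lines might look like this (the extension-method name and the 1e-5f default are illustrative, not a Unity API):

using UnityEngine;

public static class FloatComparison
{
    // Returns true when the two values differ by less than the given tolerance.
    public static bool Approx(this float a, float b, float tolerance = 1e-5f)
    {
        return Mathf.Abs(a - b) < tolerance;
    }
}

With that in place, the check above becomes if (value.Approx(1.08f)) { ... }.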

C# - Is Comparing 2 floats an expensive operation?

Comparison itself is fast, but if statements can be slow, because processors execute multiple instructions at the same time in a pipeline. When there is a conditional jump, such as an if statement, the processor has to start executing one of the branches before the branch direction is actually determined. If it turns out that the predicted branch was the wrong one, all of its instructions need to be flushed; after the flush the other branch is executed, which takes much more time than predicting the branch correctly in the first place.

On the other hand, this optimization is quite pointless, because the draw call is most likely made on every frame anyway. It is easiest to just clear the whole frame with the background color and draw everything again. The draw call certainly has to be made every frame if there is any transparency in the UI, because the transparent UI must be drawn last, on top of everything else.

Also, if this optimization were actually useful, it is quite likely already implemented in Unity's built-in draw functions for the UI slider.

Branchless float comparison in HLSL

You can use a lerp for this purpose; in your example it would be value = lerp(1.0f, 0.0f, x > y). This is, by the way, exactly what the shader compiler will do automatically when flattening a branch. Whether or not a branch gets flattened is another topic; see the attributes [flatten] and [branch].

Floating point comparison functions for C#

Writing a useful general-purpose floating point IsEqual is very, very hard, if not outright impossible. Your current code will fail badly for a==0. How the method should behave for such cases is really a matter of definition, and arguably the code would best be tailored for the specific domain use case.

For this kind of thing, you really, really need a good test suite. That's how I did it for The Floating-Point Guide; this is what I came up with in the end (Java code, should be easy enough to translate):

public static boolean nearlyEqual(float a, float b, float epsilon) {
    final float absA = Math.abs(a);
    final float absB = Math.abs(b);
    final float diff = Math.abs(a - b);

    if (a == b) { // shortcut, handles infinities
        return true;
    } else if (a == 0 || b == 0 || absA + absB < Float.MIN_NORMAL) {
        // a or b is zero or both are extremely close to it
        // relative error is less meaningful here
        return diff < (epsilon * Float.MIN_NORMAL);
    } else { // use relative error
        return diff / (absA + absB) < epsilon;
    }
}

You can also find the test suite on the site.

Appendix: the same code in C# for doubles (as asked in the question)

public static bool NearlyEqual(double a, double b, double epsilon)
{
    const double MinNormal = 2.2250738585072014E-308d;
    double absA = Math.Abs(a);
    double absB = Math.Abs(b);
    double diff = Math.Abs(a - b);

    if (a.Equals(b))
    { // shortcut, handles infinities
        return true;
    }
    else if (a == 0 || b == 0 || absA + absB < MinNormal)
    {
        // a or b is zero or both are extremely close to it
        // relative error is less meaningful here
        return diff < (epsilon * MinNormal);
    }
    else
    { // use relative error
        return diff / (absA + absB) < epsilon;
    }
}
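
A quick usage sketch, assuming using System; and the NearlyEqual method above are in scope (the inputs and tolerances are illustrative, not from the original answer):

// 0.1 + 0.2 is not exactly 0.3 in binary, but the relative difference is tiny.
Console.WriteLine(NearlyEqual(0.1 + 0.2, 0.3, 1e-12));   // true

// 1.0 and 1.001 differ by a relative error of roughly 5e-4, far above the tolerance.
Console.WriteLine(NearlyEqual(1.0, 1.001, 1e-12));       // false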

Is Mathf.Approximately(0.0f, float.Epsilon) == true the correct behavior?

Here is the decompiled code of Unity's public static bool Mathf.Approximately(float a, float b);
You can see the * 8.0f at the end, so it is a truly badly documented method indeed.

/// <summary>
/// <para>Compares two floating point values if they are similar.</para>
/// </summary>
/// <param name="a"></param>
/// <param name="b"></param>
public static bool Approximately(float a, float b)
{
    return (double) Mathf.Abs(b - a) < (double) Mathf.Max(1E-06f * Mathf.Max(Mathf.Abs(a),
        Mathf.Abs(b)), Mathf.Epsilon * 8.0f);
}
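
Plugging in a = 0.0f and b = float.Epsilon shows why the call in the question returns true (a sketch assuming Mathf.Epsilon equals float.Epsilon on the platform in question, which Unity does not guarantee everywhere):

// Mathf.Abs(b - a)                               == float.Epsilon
// 1E-06f * Mathf.Max(Mathf.Abs(a), Mathf.Abs(b)) == 0 (1e-6 * float.Epsilon underflows to zero)
// Mathf.Epsilon * 8.0f                           == 8 * float.Epsilon
// threshold = Mathf.Max(0, 8 * float.Epsilon)    == 8 * float.Epsilon
// float.Epsilon < 8 * float.Epsilon              => true
Debug.Log(Mathf.Approximately(0.0f, float.Epsilon));   // prints True, as in the question title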

Comparing the sum of floats

From this answer:

2.) Floating point intermediate results often use 80 bit precision in register, but only 64 bit in memory.

I believe that sum = a + b generates an instruction to store the result in memory, as a float with a maximum of 64 bits.

Due to a compiler optimization, the machine code for (a + b) >= 1f does not seem to narrow the result to the limited float type and apparently uses a higher bit depth, where it can be observed that the numbers do not add up to 1.

We can force the narrowing (memory storage) with a cast: (float)(a + b).

From Enigmativity's comment:

[...] the output is different if you have compiler optimisation turned on or not. When it's on I get true and true. When it is off I get false and true.
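
To illustrate the cast described above, here is a sketch with values chosen for illustration (they are not from the original question; as the quoted comment notes, what you observe depends on the compiler, runtime, and optimisation settings):

using System;

class SumComparisonSketch
{
    static void Main()
    {
        // Two floats whose exact (real-number) sum is just below 1,
        // but whose sum rounded to float precision is exactly 1.0f.
        float a = 0.75f;
        float b = 0.2499999851f;   // nearest float to 0.25 - 2^-26

        // May be evaluated at higher-than-float precision, depending on the runtime.
        Console.WriteLine((a + b) >= 1f);

        // The cast forces the sum to be rounded to float precision first.
        Console.WriteLine((float)(a + b) >= 1f);
    }
}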

Is floating point math more precise for values close to unity?


Is floating point math more precise for values close to unity?

Not really.

In general, floating point math preserves relative precision well for *, /, and sqrt() over the lion's share of the floating point range. + and - are subject to significant loss of relative precision (in the result) due to subtraction of nearby values.

Overall, for normal numbers there is little difference in relative precision. It varies over (0.5 to 1.0] * 2^-53.

The absolute precision changes in steps at powers of 2.

Floating point numbers in [0.5...1.0) have the same absolute precision. For double: 2^-54.

Floating point numbers in [1.0...2.0) have the same absolute precision. For double: 2^-53.

Floating point numbers in [2.0...4.0) have the same absolute precision. For double: 2^-52.

Floating point numbers in [4.0...8.0) have the same absolute precision. For double: 2^-51.

etc.
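
A small C# sketch of those steps for double (illustrative, assuming standard IEEE-754 double evaluation without extended-precision intermediates):

using System;

class PrecisionSteps
{
    static void Main()
    {
        double eps = 1.0 / (1L << 53);   // 2^-53, half a ULP in [1.0, 2.0)

        // In [1.0, 2.0), an addend of 2^-53 can be lost entirely...
        Console.WriteLine(1.0 + eps == 1.0);        // True (rounds back to 1.0)
        // ...while 2^-52 (one ULP) survives.
        Console.WriteLine(1.0 + 2 * eps == 1.0);    // False

        // In [2.0, 4.0) the step doubles: 2^-52 is now lost.
        Console.WriteLine(2.0 + 2 * eps == 2.0);    // True
        Console.WriteLine(2.0 + 4 * eps == 2.0);    // False
    }
}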

"Floating point arithmetic has the greatest precision if the numbers operated on are close to 1.0 (or sometimes 0.1)." Is there any truth to this?

Values just under a power of 2 have higher absolute precision (by about 2x) than values just above a power of 2.

With tiny subnormal values, precision is lost one bit per power of 2 until 0.0 is reached.


Advanced: trig functions have a special concern when their arguments are large in magnitude. A high quality sin(1e10) does an internal extended high precision argument reduction to the primary [-pi ... pi] range, and not all trig function implementations handle this step well. So for radian arguments, starting in the primary range is useful to maintain precision. For degree arguments, fmod(deg, 360.0) is a simple and precise range reduction.
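
In C#, that degree reduction could look like the following sketch (the % operator on doubles behaves like fmod here; the angle value is just an example):

using System;

class DegreeReduction
{
    static void Main()
    {
        double degrees = 36000000045.0;     // a large angle in degrees
        double reduced = degrees % 360.0;   // exact remainder, like fmod(deg, 360.0); 45.0 here

        // Convert the reduced angle to radians before calling the trig function.
        Console.WriteLine(Math.Sin(reduced * Math.PI / 180.0));
    }
}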


