Float Precision in Ruby

Ruby float precision

Part of the problem is that 0.33 does not have an exact representation in the underlying binary format, because it cannot be expressed as a finite sum of 1/2^n terms. So, when it is multiplied by 10, the number actually being multiplied is slightly different from 0.33.

For that matter, 3.3 does not have an exact representation either.
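A quick way to see this is to print the values with more digits than to_s normally shows. A sketch, assuming standard IEEE 754 doubles:

```ruby
# Neither 0.33 nor 3.3 is stored exactly in binary floating point.
puts "%.20f" % 0.33     # slightly above 0.33
puts "%.20f" % 3.3      # slightly below 3.3
puts 0.33 * 10          # 3.3000000000000003
puts 0.33 * 10 == 3.3   # false
```

Since 0.33 is stored slightly high and 3.3 slightly low, the product lands on a different double than the literal 3.3, which is why the comparison fails.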

Part One

When numbers don't have an exact base-2 representation, there will be a remainder when converting the least significant digit for which there was information in the mantissa. This remainder propagates to the right, possibly forever, but it's largely meaningless. The apparent randomness of this error has the same cause as the apparently inconsistent rounding you and Matchu noticed. That's in part two.

Part Two

And this information (the right-most bits) is not aligned neatly with the information conveyed by a single decimal digit, so the decimal digit will typically be somewhat smaller than its value would have been if the original precision had been greater.

This is why a conversion might round up to 1 at 15 digits but show 0.x at 16 digits: the longer conversion has no information for the bits beyond the end of the mantissa.
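The effect is easy to reproduce with 0.1, whose stored double value is slightly above 0.1. A sketch: at 16 decimal places the result still rounds cleanly, while one more digit exposes the residue at the end of the mantissa.

```ruby
# 0.1 as a double is really 0.1000000000000000055511151231257827...
puts "%.16f" % 0.1   # "0.1000000000000000"
puts "%.17f" % 0.1   # "0.10000000000000001"
```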

Float precision in Ruby

This is an inherent limitation of floating point numbers (even 0.01 doesn't have an exact binary floating point representation). You can use the technique provided by Aleksey or, if you want perfect precision, use the BigDecimal class bundled with Ruby. It's more verbose, and slower, but it will give the right results:

require 'bigdecimal'
=> true
1.9.3p194 :003 > BigDecimal.new("113") * BigDecimal("0.01")
=> #<BigDecimal:26cefd8,'0.113E1',18(36)>
1.9.3p194 :004 > BigDecimal.new("113") * BigDecimal("0.01") == BigDecimal("1.13")
=> true
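Note that on current Rubies BigDecimal.new has been removed; use the BigDecimal() conversion method instead (and bigdecimal/util if you want the #to_d shorthand). A sketch of the same calculation on a modern Ruby:

```ruby
require 'bigdecimal'
require 'bigdecimal/util'  # adds #to_d to String, Integer, Float, Rational

product = BigDecimal("113") * BigDecimal("0.01")
puts product.to_s("F")              # "1.13" in plain decimal notation
puts product == BigDecimal("1.13")  # true
puts ("0.01".to_d * 113).to_s("F")  # same result via to_d
```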

Decimal values are truncated with to_f


Can I know the reason why to_f automatically reduces the precision of the decimal?

The reason is that to_f methods convert objects to Floats, which are standard 64-bit double-precision floating point numbers. The precision of these numbers is limited, so the precision of the original object must be reduced during the conversion to make it fit in a Float. All extra precision is lost.

It looks like you are using the BigDecimal class. The BigDecimal#to_f method will convert the arbitrary precision floating point decimal object into a Float. Naturally, information will be lost during this conversion should the big decimal be more precise than what Floats allow. This conversion can actually overflow or underflow if limits are exceeded.
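A short sketch of that loss, using an arbitrary high-precision value for illustration:

```ruby
require 'bigdecimal'

d = BigDecimal("1.23456789012345678901234567890")
f = d.to_f                     # a Float keeps only ~15-17 significant digits
puts f                         # digits beyond the Float's mantissa are gone
puts BigDecimal(f.to_s) == d   # false: precision was lost in the round trip
```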

I was just thinking about some truncate with some limit

There is a truncate method if you'd like explicit control over the precision of the result. No rounding of any kind will occur; there is a separate method for that.

  • BigDecimal#truncate

    Deletes the entire fractional part of the number, leaving only an integer.

    BigDecimal('3.14159').truncate #=> 3
  • BigDecimal#truncate(n)

    Keeps n digits after the decimal point, deletes the rest.

    BigDecimal('3.14159').truncate(3) #=> 3.141
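A quick sketch of truncate next to its rounding counterpart, to show the difference:

```ruby
require 'bigdecimal'

pi = BigDecimal('3.14159')
p pi.truncate               #=> 3 (an Integer)
p pi.truncate(3).to_s("F")  #=> "3.141" (digits simply dropped)
p pi.round(3).to_s("F")     #=> "3.142" (rounded instead)
```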

Ruby: counting digits in a float number


# Number of digits

12345.23.to_s.split("").size -1 #=> 7

# The fractional part

("." + 12345.23.to_s.split(".")[1]).to_f #=> 0.23

# I would rather used
# 12345.23 - 12345.23.to_i
# but this gives 0.22999999999563
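Wrapping the string approach in helpers avoids the binary subtraction error shown in the comment above. The helper names here are hypothetical; a sketch:

```ruby
# Count the digits, ignoring the sign and the decimal point
def digit_count(f)
  f.to_s.delete("-.").size
end

# Recover the fractional part as a string, so it stays exact
def fractional_part(f)
  "0." + f.to_s.split(".").last
end

p digit_count(12345.23)      #=> 7
p fractional_part(12345.23)  #=> "0.23"
```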

Ruby Floating Point Math - Issue with Precision in Sum Calc

If accuracy is important to you, you should not be using floating point values, which, by definition, are not accurate. Ruby has some precision data types for doing arithmetic where accuracy is important. They are, off the top of my head, BigDecimal, Rational and Complex, depending on what you actually need to calculate.

It seems that in your case, what you're looking for is BigDecimal, which stores the decimal digits you give it exactly, with arbitrary precision (in contrast to a binary floating point, which can only approximate most decimal fractions).

When you read from Excel and deliberately cast those strings like "0.9987" to floating points, you're immediately losing the accurate value that is contained in the string.

require "bigdecimal"
BigDecimal("0.9987")

That value is precise. It is 0.9987. Not 0.998732109, or anything close to it, but 0.9987. You may use all the usual arithmetic operations on it. Provided you don't mix floating points into the arithmetic operations, the return values will remain precise.

If your array contains the raw strings you got from Excel (i.e. you haven't #to_f'd them), then this will give you a BigDecimal that is the difference between the sum of them and 1.

1 - array.map{|v| BigDecimal(v)}.reduce(:+)
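For instance, with hypothetical raw strings as they might come out of Excel (using the 0.9987 total mentioned above):

```ruby
require 'bigdecimal'

array = ["0.1", "0.2", "0.3", "0.3987"]  # hypothetical values; they sum to 0.9987
diff = 1 - array.map { |v| BigDecimal(v) }.reduce(:+)
puts diff.to_s("F")   # "0.0013", exactly
```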

How to force a Float to display with full precision w/o scientific notation and not as a string?

The string representation and the actual value of a float are two different things.

What you see on screen/print-out is always a string representation, be it in scientific notation or "normal" notation. A float is converted to its string representation by to_s, puts, "%.10f" % and others.

The float value itself is independent of that. So your last sentence does not make much sense. The output is always a string.

To enforce a certain float format in Rails' to_json you can overwrite Float#encode_json, e.g.

class ::Float
  def encode_json(opts = nil)
    "%.10f" % self
  end
end

Put this before your code above. Note that -- depending on your actual values -- you might need more sophisticated logic to produce reasonable strings.
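Outside Rails, the same formatting trick works anywhere a Float would otherwise print in scientific notation. A sketch:

```ruby
f = 1.0e-07
puts f.to_s        # "1.0e-07" (scientific notation)
puts "%.10f" % f   # "0.0000001000" (fixed notation, 10 decimal places)
```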

How do I find out of what size a Float in Ruby is?

The size of single/double-precision floats isn't related to whether you're running a 64-bit or 32-bit build of Ruby, so your implementation will return the wrong answer on any 32-bit build of Ruby.

Float defines constants such as Float::MANT_DIG and Float::MAX_EXP from which you can derive the amount of storage used by a Float. It will be pretty uncommon for it not to be an IEEE 754 double precision value, though (53-bit mantissa, of which 52 are stored, 1 sign bit, 11 exponent bits).
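On any platform using IEEE 754 doubles (which is virtually all of them), those constants come out as follows:

```ruby
p Float::MANT_DIG   #=> 53 (mantissa bits, including the implicit leading bit)
p Float::MAX_EXP    #=> 1024
p Float::MIN_EXP    #=> -1021

# 52 stored mantissa bits + 11 exponent bits + 1 sign bit = 64 bits total
p (Float::MANT_DIG - 1) + Math.log2(Float::MAX_EXP).to_i + 1 + 1  #=> 64
```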

How do you round a float to 2 decimal places in JRuby?

Float#round can take a parameter in Ruby 1.9, but not in Ruby 1.8. JRuby defaults to 1.8 mode, but it is capable of running in 1.9 mode.
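In 1.9 mode (or any modern Ruby), round takes a digit count directly; in 1.8 mode you can fall back to scaling. A sketch:

```ruby
# Ruby >= 1.9 (JRuby started with --1.9):
p 3.14159.round(2)               #=> 3.14

# Ruby 1.8 fallback: scale up, round to an integer, scale back down
p (3.14159 * 100).round / 100.0  #=> 3.14
```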


