Storing Money in a Decimal Column - What Precision and Scale

If you are looking for a one-size-fits-all answer, DECIMAL(19, 4) is a popular choice (a quick Google search bears this out). I think this originates from the old VBA/Access/Jet Currency data type, which was the first fixed-point decimal type in the language; Decimal only arrived in 'version 1.0' style (i.e. not fully implemented) in VB6/VBA6/Jet 4.0.

The rule of thumb for storage of fixed point decimal values is to store at least one more decimal place than you actually require to allow for rounding. One of the reasons for mapping the old Currency type in the front end to DECIMAL(19, 4) type in the back end was that Currency exhibited bankers' rounding by nature, whereas DECIMAL(p, s) rounded by truncation.

An extra decimal place in storage for DECIMAL allows a custom rounding algorithm to be implemented rather than taking the vendor's default (and bankers' rounding is alarming, to say the least, for a designer expecting all values ending in .5 to round away from zero).
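To make the rule of thumb concrete, here's a minimal T-SQL sketch (the value and scales are hypothetical): keep one place more in storage than you report, and the final rounding rule stays in your hands rather than the vendor's.

-- Store at scale 5 so the rounding to 4 places can follow whatever
-- rule the business requires.
DECLARE @stored DECIMAL(19,5) = 12.34565;

-- Vendor default: T-SQL ROUND() rounds halves away from zero
SELECT ROUND(@stored, 4) AS round_half_away;  -- 12.34570

-- Alternative rule, possible only because the extra place survived:
-- a non-zero third argument makes ROUND() truncate instead
SELECT ROUND(@stored, 4, 1) AS truncated;     -- 12.34560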

Yes, DECIMAL(24, 8) sounds like overkill to me. Most currencies are quoted to four or five decimal places. I know of situations where a decimal scale of 8 (or more) is required but this is where a 'normal' monetary amount (say four decimal places) has been pro rata'd, implying the decimal precision should be reduced accordingly (also consider a floating point type in such circumstances). And no one has that much money nowadays to require a decimal precision of 24 :)

However, rather than a one-size-fits-all approach, some research may be in order. Ask your designer or domain expert about accounting rules which may be applicable: GAAP, EU, etc. I vaguely recall some EU intra-state transfers with explicit rules for rounding to five decimal places, therefore using DECIMAL(p, 6) for storage. Accountants generally seem to favour four decimal places.


PS Avoid SQL Server's MONEY data type because it has serious issues with accuracy when rounding, among other considerations such as portability etc. See Aaron Bertrand's blog.


Microsoft and language designers chose banker's rounding because hardware designers chose it [citation?]. It is enshrined in the Institute of Electrical and Electronics Engineers (IEEE) standards, for example. And hardware designers chose it because mathematicians prefer it. See Wikipedia; to paraphrase: The 1906 edition of Probability and Theory of Errors called this 'the computer's rule' ("computers" meaning humans who perform computations).
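T-SQL has no built-in round-half-to-even, so if you want to see the behaviour for yourself, here is a minimal sketch of it for whole-number rounding (an illustration, not production code):

-- Banker's rounding (round half to even), 0 decimal places.
-- ROUND(@v, 0, 1) truncates toward zero; plain ROUND() rounds halves
-- away from zero, so only exact halves sitting next to an even
-- integer need special handling.
DECLARE @v DECIMAL(19,4) = 2.5;

SELECT CASE
           WHEN ABS(@v - ROUND(@v, 0, 1)) = 0.5
                AND CAST(ROUND(@v, 0, 1) AS BIGINT) % 2 = 0
           THEN ROUND(@v, 0, 1)  -- exactly halfway, even neighbour: keep it
           ELSE ROUND(@v, 0)     -- everything else: default rounding
       END AS bankers_rounded;   -- 2.5 -> 2, 3.5 -> 4, -2.5 -> -2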

Understanding the MONEY datatype's precision and scale

Precision is the number of digits in a number. Scale is the number of digits to the right of the decimal point in a number.

Therefore:

MONEY has a precision of 19 and a scale of 4

SMALLMONEY has a precision of 10 and a scale of 4

The precision and scale of the numeric data types other than DECIMAL are fixed; the scale of the MONEY data type is always 4 (four decimal digits).
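You can check these figures yourself: SQL_VARIANT_PROPERTY reports the precision and scale of any value at runtime.

DECLARE @m MONEY = 1, @sm SMALLMONEY = 1;

SELECT SQL_VARIANT_PROPERTY(@m,  'Precision') AS money_precision,      -- 19
       SQL_VARIANT_PROPERTY(@m,  'Scale')     AS money_scale,          -- 4
       SQL_VARIANT_PROPERTY(@sm, 'Precision') AS smallmoney_precision, -- 10
       SQL_VARIANT_PROPERTY(@sm, 'Scale')     AS smallmoney_scale;     -- 4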

Should you choose the MONEY or DECIMAL(x,y) datatypes in SQL Server?

Never ever should you use money. It is not precise, and it is pure garbage; always use decimal/numeric.

Run this to see what I mean:

DECLARE
    @mon1 MONEY,
    @mon2 MONEY,
    @mon3 MONEY,
    @mon4 MONEY,
    @num1 DECIMAL(19,4),
    @num2 DECIMAL(19,4),
    @num3 DECIMAL(19,4),
    @num4 DECIMAL(19,4)

SELECT
    @mon1 = 100, @mon2 = 339, @mon3 = 10000,
    @num1 = 100, @num2 = 339, @num3 = 10000

-- MONEY truncates the intermediate quotient to 4 decimal places;
-- DECIMAL division carries a much wider intermediate scale
SET @mon4 = @mon1/@mon2*@mon3
SET @num4 = @num1/@num2*@num3

SELECT @mon4 AS moneyresult,
       @num4 AS numericresult

Output:

moneyresult  numericresult
-----------  -------------
2949.0000    2949.8525
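The divergence comes from the intermediate quotient. MONEY carries only four decimal places at every step, so 100/339 is truncated before the multiplication, while the DECIMAL division keeps a much wider intermediate scale. A quick way to see it:

DECLARE @mon1 MONEY = 100, @mon2 MONEY = 339;

SELECT @mon1 / @mon2                    AS money_intermediate,   -- 0.2949
       CAST(100 AS DECIMAL(19,4)) / 339 AS decimal_intermediate; -- 0.294985250737463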

To some of the people who said that you don't divide money by money:

Here is one of my queries to calculate correlations, and changing that to money gives wrong results.

-- Pearson correlation of returns: E[xy] - E[x]E[y] over the product
-- of the standard deviations
select t1.index_id, t2.index_id,
       (avg(t1.monret * t2.monret) - (avg(t1.monret) * avg(t2.monret)))
       / ((sqrt(avg(square(t1.monret)) - square(avg(t1.monret))))
        * (sqrt(avg(square(t2.monret)) - square(avg(t2.monret))))),
       current_timestamp, @MaxDate
from Table1 t1
join Table1 t2 on t1.Date = traDate
group by t1.index_id, t2.index_id

How do I interpret precision and scale of a number in a database?

Numeric precision refers to the maximum number of digits that are present in the number.

e.g. 1234567.89 has a precision of 9

Numeric scale refers to the maximum number of digits to the right of the decimal point.

e.g. 123456.789 has a scale of 3

Thus the maximum allowed value for decimal(5,2) is 999.99
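A quick way to convince yourself (SQL Server syntax):

DECLARE @ok DECIMAL(5,2) = 999.99;  -- fits: 5 digits total, 2 after the point
SELECT @ok;

-- One more cent and the integer part would need a fourth digit:
-- DECLARE @overflow DECIMAL(5,2) = 1000.00;
-- raises an arithmetic overflow error converting numeric to numeric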

Decimal(19,4) or Decimal(19,2) - which should I use?

First off - you are receiving some incorrect advice from other answers. Observe the following (64-bit OS on 64-bit architecture):

declare @op1 decimal(18,2) = 0.01
,@op2 decimal(18,2) = 0.01;

select result = @op1 * @op2;

result
---------.---------.---------.---------
0.0001

(1 row(s) affected)

Note the number of dashes underneath the title - 39 in all. (I changed every tenth to a period to aid counting.) That is exactly the display width of the result type: 37 digits plus a sign and a decimal point. Although both operands were declared as decimal(18,2), the calculation was performed, and reported, in the decimal(37,4) datatype: SQL Server's documented rules give a multiplication result a precision of p1 + p2 + 1 and a scale of s1 + s2, capped at 38. (I am running SQL 2012; display details may vary by client.)

Therefore, it is clear that no precision is being lost. On the contrary, only overflow can occur, not precision loss. This is a direct consequence of all calculations on decimal operands being performed as integer arithmetic. You will occasionally see artifacts of this in IntelliSense when the type of an intermediate decimal field is reported as int instead.

Consider the example above. The two operands are both of type decimal(18,2) and are stored as integers of value 1, with a scale of 2. When multiplied, the product is still 1, but the scale is evaluated by adding the scales of the operands, giving a result of integer value 1 and scale 4 - that is, the value 0.0001, of type decimal(37,4), stored as an integer with value 1 and scale 4.

Read that last paragraph again.

Rinse and repeat once more.

In practice, this is stored and carried forward as decimal(37,4): the intermediate calculations are done at full precision because, on modern hardware, the extra bits are essentially free.
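If you would rather not count dashes, SQL_VARIANT_PROPERTY reports the type of the product directly:

DECLARE @op1 DECIMAL(18,2) = 0.01,
        @op2 DECIMAL(18,2) = 0.01;

-- Multiplication result: precision = p1 + p2 + 1, scale = s1 + s2
SELECT SQL_VARIANT_PROPERTY(@op1 * @op2, 'BaseType')  AS base_type,        -- decimal
       SQL_VARIANT_PROPERTY(@op1 * @op2, 'Precision') AS result_precision, -- 37
       SQL_VARIANT_PROPERTY(@op1 * @op2, 'Scale')     AS result_scale;     -- 4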

To return to your question - all major currencies of the world (that I am aware of) only require 2 decimal places, but there are a handful where 4 are required, and there are financial transactions such as currency transactions and bond sales where 4 decimal places are mandated by law. When devising the money datatype, Microsoft appears to have opted for the maximum scale that might be required rather than the scale normally required. Given how few transactions, and corporations, actually require precision greater than 19 digits, this seems eminently sensible.

If you have:

  1. A high expectation of only dealing with major currencies (which at the current time only require 2 digits of scale); and
  2. No expectation of dealing with transactions that are mandated by law to require 4 digits of scale

then you would be safe to use type decimal with scale 2 (such as decimal(19,2) or decimal(18,2) or decimal(38,2)) instead of money. This will ease some of your conversions and, given the assumptions above, have no cost. A typical case where these assumptions are met is a GL or subledger accounting system tracking transactions to the penny. However, a stock- or bond-trading system would not meet these assumptions because 4 digits of scale are mandated by law in those cases.

A way to distinguish the two cases is whether transactions are reported in cents or percents, which only require 2 digits of scale, or in basis points which require 4 digits of scale.
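As a sketch of the two cases (table and column names are hypothetical):

-- GL/subledger tracking to the penny: 2 digits of scale suffice
CREATE TABLE GeneralLedgerEntry (
    Amount DECIMAL(19,2) NOT NULL
);

-- Bond prices quoted in basis points: 4 digits of scale are required
CREATE TABLE BondTrade (
    Price DECIMAL(19,4) NOT NULL
);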

If you are at all unsure as to which case applies to your programming circumstance, consult your Controller or Director of Finance as to the legal and GAAP requirements for your application. (S)he will be able to give you definitive advice.

Best data type to store money values in MySQL

Since money needs an exact representation, don't use data types that are only approximate, like float. You can use a fixed-point numeric data type for that, such as

decimal(15,2)
  • 15 is the precision (the total number of digits, including those after the decimal point)
  • 2 is the number of digits after the decimal point

See MySQL Numeric Types:

These types are used when it is important to preserve exact precision, for example with monetary data.
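For example (a hypothetical MySQL table; the names are illustrative):

CREATE TABLE payments (
    id     INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    -- exact fixed point: up to 13 digits before the point, 2 after
    amount DECIMAL(15,2) NOT NULL
);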

Money, Decimal or Numeric for Currency Columns

First of all, Decimal and Numeric are functionally identical (see the MSDN documentation on the subject).

To answer the new question money VS decimal, there is already a Stackoverflow question about it: Should you choose the MONEY or DECIMAL(x,y) datatypes in SQL Server? - the short answer was:

Never ever should you use money. It is not precise, and it is pure garbage; always use decimal/numeric.

by SQLMenace

What is the best way to store a money value in the database?

Decimal and money ought to be pretty reliable. What I can assure you of (from painful personal experience with inherited applications) is: DO NOT use float!
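If you want to see why, here is a minimal T-SQL demonstration of the drift (the exact float digits shown may vary by client):

DECLARE @f FLOAT         = 0,
        @d DECIMAL(19,4) = 0,
        @i INT           = 0;

-- Add a tenth, ten times; the sum should be exactly 1
WHILE @i < 10
BEGIN
    SET @f = @f + 0.1;
    SET @d = @d + 0.1;
    SET @i = @i + 1;
END

SELECT @f AS float_sum,   -- 0.9999999999999999 (not exactly 1)
       @d AS decimal_sum; -- 1.0000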

Decimal precision and scale in EF Code First

The answer from Dave Van den Eynde is now out of date. There are two important changes: from EF 4.1 onwards the ModelBuilder class is now DbModelBuilder, and there is now a DecimalPropertyConfiguration.HasPrecision method with the signature:

public DecimalPropertyConfiguration HasPrecision(
    byte precision,
    byte scale)

where precision is the total number of digits the db will store, regardless of where the decimal point falls, and scale is the number of decimal places it will store.

Therefore there is no need to iterate through properties as shown; the method can simply be called for each property from OnModelCreating:

public class EFDbContext : DbContext
{
    protected override void OnModelCreating(System.Data.Entity.DbModelBuilder modelBuilder)
    {
        // 'object' is a reserved word in C# and cannot be used as a lambda
        // parameter; the entity type must also be given explicitly.
        // MyEntity and MyProperty are placeholder names.
        modelBuilder.Entity<MyEntity>().Property(e => e.MyProperty).HasPrecision(12, 10);

        base.OnModelCreating(modelBuilder);
    }
}

