Difference Between Decimal and Numeric

Difference between numeric, float and decimal in SQL Server

Use the float or real data types only if the precision provided by decimal (up to 38 digits) is insufficient.

  • Approximate numeric data types (see table 3.3) do not store the exact values specified for many numbers; they store an extremely close approximation of the value. (Technet)

  • Avoid using float or real columns in WHERE clause search conditions, especially the = and <> operators. It is best to limit float and real columns to > or < comparisons. (Technet)

So, generally, choosing decimal as your data type is the best bet if:

  • your number can fit in it (decimal precision goes up to 38 digits)
  • the smaller storage space (and possibly faster calculation) of float is not important to you
  • exact numeric behavior is required, such as in financial applications, in operations involving rounding, or in equality checks (Technet); the sketch below illustrates this last point
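To see why, here is a minimal Python sketch (my own illustration, not from the training kit): plain floats behave like SQL's approximate types, while decimal.Decimal behaves like an exact type.

from decimal import Decimal

# Approximate behaviour: the stored value is only extremely close to what
# you wrote, so equality checks can fail in surprising ways.
print(0.1 + 0.2 == 0.3)                                    # False

# Exact behaviour: the value is stored exactly as specified.
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))   # True

This is the same pitfall the Technet guidance above warns about for = and <> comparisons on float and real columns.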


  1. Exact Numeric Data Types decimal and numeric - MSDN
  • numeric = decimal (5 to 17 bytes)
    • will map to Decimal in .NET
    • both have (18, 0) as the default (precision, scale) in SQL Server
    • scale = the maximum number of decimal digits that can be stored to the right of the decimal point
    • money (8 bytes) and smallmoney (4 bytes) are also exact data types; they map to Decimal in .NET and have 4 decimal digits (MSDN)

  2. Approximate Numeric Data Types float and real - MSDN
  • real (4 byte)
    • will map to Single in .NET
    • The ISO synonym for real is float(24)
  • float (8 byte)
    • will map to Double in .NET
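
Python has no native 4-byte float, but the standard struct module can round-trip a value through one, which gives a rough feel for real (Single) versus float (Double). This is my analogy, not anything from MSDN:

import struct

x = 0.1
# Squeeze the value through a 4-byte IEEE 754 single -- roughly SQL's real / .NET's Single.
as_real = struct.unpack('f', struct.pack('f', x))[0]
print(as_real)  # 0.10000000149011612 -- only ~7 significant digits survive
print(x)        # 0.1 -- an 8-byte double keeps ~15-16 significant digits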

(Tables omitted: Exact Numeric Data Types; Approximate Numeric Data Types.)

  • All exact numeric types always produce the same result, regardless of the processor architecture in use or the magnitude of the numbers.
  • The parameter supplied to the float data type defines the number of bits used to store the mantissa of the floating-point number (quantified in the sketch after this list).
  • Approximate numeric data types usually use less storage and are faster (up to 20x), but you should also consider how they are converted in .NET:
  • What is the difference between Decimal, Float and Double in C#
  • Decimal vs Double Speed
  • SQL Server - .NET Data Type Mappings (From MSDN)
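
The mantissa-bits point is easy to quantify: n mantissa bits give roughly n * log10(2) decimal digits. A quick back-of-the-envelope check (my arithmetic, using the bit counts from the MSDN page):

import math

# float(1..24) is stored as real (4 bytes); float(25..53) as float (8 bytes).
for decl, bits in [('float(24) / real', 24), ('float(53) / float', 53)]:
    print(decl, '->', math.floor(bits * math.log10(2)), 'decimal digits')
# float(24) / real -> 7 decimal digits
# float(53) / float -> 15 decimal digits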

Main source: MCTS Self-Paced Training Kit (Exam 70-433): Microsoft® SQL Server® 2008 Database Development, Chapter 3 (Tables, Data Types, and Declarative Data Integrity), Lesson 1: Choosing Data Types (Guidelines), page 93.

Is there any difference between DECIMAL and NUMERIC in SQL Server?

They are the same. Numeric is functionally equivalent to decimal.

MSDN: decimal and numeric

What is the difference between decimal and numeric in Postgres?

According to the manual, they are the same.

The types decimal and numeric are equivalent. Both types are part of
the SQL standard.

https://www.postgresql.org/docs/current/static/datatype-numeric.html

The difference lies in the SQL standard, which allows for different behaviour:

NUMERIC must be exactly as precise as it is defined — so if you define 4 decimal places, the DB must always store 4 decimal places.

DECIMAL must be at least as precise as it is defined, so the database may actually store more digits than specified (because the underlying storage has room for extra digits). The database might then store 1.00005 instead of 1.0000, which can affect future calculations.
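
If you want to play with the distinction, Python's decimal module can mimic the strict NUMERIC reading of the standard: quantize forces a value to exactly the declared number of decimal places. An analogy of mine, not Postgres code:

from decimal import Decimal

value = Decimal('1.00005')
# NUMERIC-style scale-4 semantics: exactly 4 decimal places must be stored.
print(value.quantize(Decimal('0.0001')))  # 1.0000 (banker's rounding by default)
# The looser DECIMAL reading would permit the database to keep 1.00005 as-is.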

Difference between DECIMAL and NUMERIC

What is the difference between NUMERIC and FLOAT in BigQuery?

I like the current answers. I want to add this as a proof of why NUMERIC is necessary:

SELECT 
4.35 * 100 a_float
, CAST(4.35 AS NUMERIC) * 100 a_numeric

(Result: a_float = 434.99999999999994, a_numeric = 435)

This is not a bug; it is exactly how IEEE 754 says floats should be handled. NUMERIC, meanwhile, exhibits behavior closer to what humans expect.

For another proof of NUMERIC usefulness, this answer shows how NUMERIC can handle numbers too big for JavaScript to normally handle.
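
That JavaScript limitation exists because JS numbers are IEEE 754 doubles, which cannot represent every integer above 2**53. Python makes the effect easy to see (my example):

# The first integer a 64-bit double cannot represent exactly:
big = 2**53 + 1
print(big)         # 9007199254740993
print(float(big))  # 9007199254740992.0 -- the +1 silently disappears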

Before you blame BigQuery for this problem, you can check that most other programming languages will do the same. Python, for example:

>>> 4.35 * 100
434.99999999999994
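
And just as CAST(... AS NUMERIC) rescues the result in BigQuery, Python's decimal.Decimal (its closest analogue to an exact numeric type) gives the expected answer:

>>> from decimal import Decimal
>>> Decimal('4.35') * 100
Decimal('435.00')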

What's the difference between str.isdigit(), isnumeric() and isdecimal() in Python?

It's mostly about Unicode classifications. Here are some examples showing the discrepancies:

>>> def spam(s):
...     for attr in 'isnumeric', 'isdecimal', 'isdigit':
...         print(attr, getattr(s, attr)())
...
>>> spam('½')
isnumeric True
isdecimal False
isdigit False
>>> spam('³')
isnumeric True
isdecimal False
isdigit True

The specific behaviour is spelled out in the official docs.
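
The split tracks each character's Unicode numeric properties, which the unicodedata module exposes. A quick check on the two examples above (my illustration):

>>> import unicodedata
>>> unicodedata.numeric('½')                 # has a numeric value -> isnumeric() True
0.5
>>> unicodedata.digit('³')                   # has a digit value -> isdigit() True
3
>>> print(unicodedata.decimal('³', None))    # but no decimal value -> isdecimal() False
None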

A script to find all of the characters where the three methods disagree:

import sys
import unicodedata
from collections import defaultdict

d = defaultdict(list)
for i in range(sys.maxunicode + 1):
    s = chr(i)
    t = s.isnumeric(), s.isdecimal(), s.isdigit()
    if len(set(t)) == 2:  # the three methods disagree for this character
        try:
            name = unicodedata.name(s)
        except ValueError:  # some codepoints have no name
            name = f'codepoint{i}'
        print(s, name)
        d[t].append(s)  # bucket by (isnumeric, isdecimal, isdigit) signature

