How to Turn Off Implicit Type Conversion in SQL Server

Is there a way to turn off implicit type conversion in SQL Server?

There is no way to disable it.

It has been requested, though: see the proposed SET OPTION STRICT ON request on MS Connect, which comes from Erland Sommarskog.

However, it is utterly predictable according to the data type precedence rules.
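
For example (a minimal sketch; the values are just illustrative), the operand with the lower-precedence type is the one that gets converted, and you can inspect the resulting type of a mixed expression with SQL_VARIANT_PROPERTY:

DECLARE @i int = 3;
IF @i > '2' PRINT 'the varchar literal was implicitly converted to int';  -- int outranks varchar

SELECT SQL_VARIANT_PROPERTY(1 + '2', 'BaseType');  -- int: the string side is converted, not the number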

Your example of a foreign key is interesting because an actual FOREIGN KEY constraint requires the same datatype, length and collation.
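
As a minimal sketch (table and column names here are hypothetical), the second statement below should be rejected, because the referencing column's data type does not match the referenced key:

CREATE TABLE dbo.Parent (Id int NOT NULL PRIMARY KEY);

CREATE TABLE dbo.Child
(
    ChildId int NOT NULL PRIMARY KEY,
    ParentId varchar(10) NOT NULL,
    CONSTRAINT FK_Child_Parent FOREIGN KEY (ParentId) REFERENCES dbo.Parent (Id)  -- rejected: varchar(10) vs int
);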

removing implicit conversion in SQL Server

The warning is about your explicit conversion, not an implicit conversion. The tooltip you show doesn't mention CONVERT_IMPLICIT.

CAST(ba.UpdatedById AS INT) shows up in the plan as CONVERT(int,ba.UpdatedById,0) and it is warning you about that (it prevents an index seek on ba.UpdatedById).

To stop seeing this warning, you would need to fix your schema so that you are joining on columns of the same data type.
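
A minimal sketch of that kind of fix, assuming (hypothetically) that ba aliases a table whose UpdatedById column is stored as varchar while the column it joins to is int:

-- hypothetical table name; align the column's type with the column it is joined against
ALTER TABLE dbo.BusinessAudit
    ALTER COLUMN UpdatedById int NOT NULL;

Once both sides of the join are int, no CONVERT or CONVERT_IMPLICIT is needed and an index seek on the column becomes possible again.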

T-SQL :: how to remove CONVERT_IMPLICIT

As @Larnu wrote in the comment:

If the data types are different, then one of them will be implicitly converted using data type precedence. There is no way round that. It is by design, and intended. If you don't want implicit conversion, don't use different data types for data that will interact with each other.

Why does implicit conversion from some numeric values work but not others?

It tries to convert your string to a numeric(3,2) because that's the type on the right of the multiplication [1]. If you can force your value to be a larger numeric type:

select '185.37' * (102.00 - 100.00)

Then it works fine (produces 370.74) because it'll now attempt to convert it to a numeric(5,2).
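
For contrast, the failing form (the original query isn't shown here, so this is an assumption about it) forces the string into a numeric(3,2), which cannot hold 185.37:

select '185.37' * 2.00  -- 2.00 is numeric(3,2), so the string must be converted to numeric(3,2) and overflows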

Rather than doing it by a trick, however, I'd pick an appropriate numeric type for this and explicitly perform the required conversion, to better document what you want to occur:

select CONVERT(numeric(5,2),'185.37') * 2.00

[1] And it has a higher precedence.

EDIT (by Gordon Linoff):

SQL Server's use of type precedence for this purpose is explicitly stated in the documentation:

For comparison operators or other expressions, the resulting data type will depend on the rules of data type precedence.

There might be just a little confusion because the documentation is not clear that the scale and precision of numerics are explicitly part of the type (that is, what gets converted is numeric(3, 2) rather than numeric(?, ?) with appropriate scale and precision).
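
You can confirm that the precision and scale are part of the literal's type with SQL_VARIANT_PROPERTY (a small illustrative check, not from the original answer):

SELECT SQL_VARIANT_PROPERTY(2.00, 'BaseType')  AS BaseType,   -- numeric
       SQL_VARIANT_PROPERTY(2.00, 'Precision') AS Prec,       -- 3
       SQL_VARIANT_PROPERTY(2.00, 'Scale')     AS Scale;      -- 2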

SQL Server and implicit conversion of types

This is the list you are after: Data Type Precedence.

In your examples:

WHERE quantity > '3'

'3' is cast to int, matching quantity

WHERE quantityTest > 3

No casting required

WHERE date = 20120101

20120101 as a number is being cast to datetime, where it is interpreted as a number of days from the base date 1900-01-01, which is far too large. e.g.

select cast(20120101 as datetime)

This is different from

WHERE date = '20120101'

Where the date as a string can be cast.
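
For instance, this succeeds, because the string form of the date can be converted to datetime:

select cast('20120101' as datetime)  -- 2012-01-01 00:00:00.000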

If you go about a third of the way down the CAST and CONVERT reference, to the section Implicit Conversions, there is a table of the implicit conversions that are allowed. Just because a conversion is allowed doesn't mean it will work, such as 20120101 -> datetime.

SQL Server :: implicit conversion of Data Type in query from Microsoft Access?

and I discovered with disgust that data types in Access are not the same as in SQL Server:

You find that FoxPro data types are different.
You find that Excel sheet data types are different.
You find that SharePoint list data types are different.
(All of the above are Microsoft products.)

You find that MySQL data types are different.
You find that Oracle data types are different.
And so on. So data types "are" often different.

The result?
You find that quite a few, if not most, data systems have to work with somewhat different data types. So the ODBC, OLE DB, or (these days) JDBC drivers handle the required conversions between the server and the client software. And this can result in some data types not being supported at all.

do the data types undergo an implicit conversion when they get translated from Access to SQL Server?

Yes, you are correct.
In fact it is the ODBC driver that does this. SQL Server does not "know" whether the client query request comes from Access, FoxPro, VB6, VB.NET, or even some fancy ASP.NET web site.

In all cases, the data query and the pull of data occur at the same rate.

SQL Server does not decide out of the blue that some query from Access, or some SQL query from an ASP.NET web site, is to run slower or faster.

The automatic data type conversions that ODBC drivers (or the now seldom-used OLE DB drivers) perform have never been a significant cost or overhead in regards to pulling data.

So the speed at which Access, or say an ASP.NET site, pulls data is the same. A query from Access should not be any slower than a query sent from ASP.NET or some fancy C# client program; they all run at the same speed, and the data type translations are a normal part of all ODBC (or other) drivers used by client software to pull such data.

So if I use Access, FoxPro, VB6, C#, or some ASP.NET web site to connect and pull data from SQL Server? In all of these cases, the data type conversions that are compatible with, say, .NET will occur in the driver(s) and connection stack used. This data type conversion really never factors in any significant way in terms of performance.

So a query submitted from Access or from .NET should run the same. However, one feature and ability that Access has (and .NET and most other connection technologies do not have) is that Access can join between different connection objects. So I can have a local table, one linked to FoxPro, and another linked to SQL Server, and in Access I can perform joins and SQL queries between those different data source tables. In .NET, for example, a query is limited to one connection object.

However, this also means that a query that attempts a relational join between two data source tables (even from the same database) can be processed client side, because Access can do this and most systems lack the ability. So while there is little if any speed difference between Access and, say, ASP.NET for a plain SELECT pulling data, when a relational join is involved Access can cause that join and its work to occur client side as opposed to server side, and in those cases such a query will run very slowly. You can force the query (and the join) to occur server side by several approaches.

Best option:

Use/create a view and link to that view from the Access client. This is the best option. The reason is that you get the same performance as a pass-through query, and the same performance as a stored procedure, but there is no code or extra work to do this. Once done, you once again find the query pull speed in the Access client to be the same as any other client software (.NET, ASP.NET, C#, etc.).
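
A minimal sketch of that approach (table, column, and view names here are hypothetical): create the view on SQL Server so the join is resolved server side, then link to the view from the Access client just like a linked table.

CREATE VIEW dbo.vCustomerOrders
AS
SELECT c.CustomerID, c.CustomerName, o.OrderID, o.OrderDate
FROM dbo.Customers AS c
JOIN dbo.Orders AS o
    ON o.CustomerID = c.CustomerID;  -- the join runs on SQL Server, not in the Access client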

And once again, any consideration of data type translation by the drivers involved is a MOOT point from a performance point of view.

In place of the very little effort and work of a linked view, you can consider a pass-through query. This again would be raw T-SQL statements sent from the Access client, and again the data type issues are largely moot, since it is T-SQL syntax being sent to SQL Server, and it is thus SQL Server that takes the SQL statement and does the data type conversions from the (Unicode) string into numbers, date types, and so on. But then again, such conversion occurs for any SQL statement you write that has values expressed in such a string.
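
As a small illustration (hypothetical table and column names), this is the kind of raw T-SQL a pass-through query would send; SQL Server itself converts the quoted literal to the column's date type:

SELECT InvoiceID, InvoiceDate
FROM dbo.Invoices
WHERE InvoiceDate >= '2024-01-01';  -- the string literal is converted by SQL Server, not by the client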

So, be it .NET, FoxPro, Access, or ASP.NET client software, they all will and have to do data type conversion between the data and the client software. For example, .NET has several data types that you can define in code that Access, FoxPro, or even VB6 (or even C++, for that matter) does not have. So every client system is constantly converting from the native variable and data types in that software to the data types used on SQL Server.

So such data conversions occur for all client software, and this converting is no more of a performance factor in Access than it is when writing code in C++ or even assembler. The speed of all these systems when pulling a query sent to SQL Server is the same.

Disable implicit conversion of quoted values to integer

No, you cannot disable implicit conversion of quoted literals to any target type. PostgreSQL considers such literals to be of unknown type unless overridden by a cast or literal type-specifier, and will convert from unknown to any type. There is no cast from unknown to a type in pg_cast; it's implicit. So you can't drop it.

As far as I know, PostgreSQL is following the SQL spec by accepting quoted literals as integers.

To PostgreSQL's type engine, 1 is an integer, and '1' is an unknown that's type-inferred to an integer if passed to an integer function, operator, or field. You cannot disable type inference from unknown or force unknown to be treated as text without hacking the parser / query planner directly.
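
A couple of small illustrative statements (not from the original question) showing that inference at work:

SELECT 1 = '1';    -- true: the quoted literal is of unknown type and is inferred as integer
SELECT 1 = 'one';  -- fails: 'one' cannot be parsed as an integer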

What you should be doing is using parameterised statements instead of substituting literals into SQL. You won't have this issue if you do so, because the client-side type is known or can be specified. I originally said that this certainly works with Python (psycopg2) and Ruby (Pg gem), but it doesn't work how I thought for psycopg2; see the update below.


Update after question clarification: In the narrow case being described here, psycopg2's client-side parameterised statements, while correct, do not produce the result the original poster desires. Running the demo in the update shows that psycopg2 isn't using PostgreSQL's v3 bind/execute protocol; it's using the simple query protocol and doing parameter substitution locally. So while you're using parameterised statements in Python, you're not using parameterised statements in PostgreSQL. I was mistaken above in saying that parameterised statements in psycopg2 would resolve this issue.

The demo runs this SQL, from the PostgreSQL logs:

< 2014-07-07 18:17:24.450 WST >LOG:  statement: INSERT INTO foo (val) VALUES (1), ('2')
< 2014-07-07 18:17:24.451 WST >LOG: statement: SELECT * FROM foo WHERE id='1'

Note the lack of placeholder parameters; they're substituted client side.

So if you want psycopg2 to be stricter, you'll have to adapt the client side framework.

psycopg2 is extensible, so that should be pretty practical - you need to override the type handlers for str, unicode and int (or, in Python 3, bytes, str and int) using psycopg2.extensions, per adapting new types. There's even an FAQ entry about overriding psycopg2's handling of float as an example: http://initd.org/psycopg/docs/faq.html#faq-float

The naïve approach won't work though, because of infinite recursion:

def adapt_str_strict(thestr):
    return psycopg2.extensions.AsIs('TEXT ' + psycopg2.extensions.adapt(thestr))

psycopg2.extensions.register_adapter(str, adapt_str_strict)

so you need to bypass type adapter registration to call the original underlying adapter for str. This will work, though it's ugly:

def adapt_str_strict(thestr):
    return psycopg2.extensions.AsIs('TEXT ' + str(psycopg2.extensions.QuotedString(thestr)))

psycopg2.extensions.register_adapter(str, adapt_str_strict)

Run your demo with that and you get:

psycopg2.ProgrammingError: parameter $1 of type text cannot be coerced to the expected type integer
HINT: You will need to rewrite or cast the expression.

(BTW, using server-side PREPARE and EXECUTE won't work, because you'll just suffer the same typing issues when passing values to EXECUTE via psycopg2).


