Bcp Returns No Errors, But Also Doesn't Copy Any Rows


The bcp command typically needs a switch to specify the format mode of the bcp file.

  • -c specifies character (plain-text) mode
  • -n specifies native mode
  • -w specifies Unicode character mode

In your test case, the file you created is plain text, so you should specify -c in your bcp in command:

bcp MyDB.dbo.mincase in data.csv -c -T -S .\SQLEXPRESS

Microsoft recommends using -n (native mode) for imports and exports to avoid issues with field delimiters appearing within column values (see the section on character-mode and native-mode best practices).
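
For example, a native-mode round trip between SQL Server instances might look like the following sketch, reusing the table and server names from the test case above (the file name data.dat is just a placeholder; adjust to your environment):

bcp MyDB.dbo.mincase out data.dat -n -T -S .\SQLEXPRESS
bcp MyDB.dbo.mincase in data.dat -n -T -S .\SQLEXPRESS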

Copy in failure issue with BCP into SQL Server (seems unrelated to data overflow)?

Found the reason: the SQL Server table was designed so that no fields could be null, which apparently caused bcp to just quit when attempting to insert a record that did have nulls. Once I changed the table design so that the appropriate fields could be null, the bcp process completed as expected.

* Not sure how I would have detected this from the bcp output or the SQL trace, so if anyone has better debugging tips, or knows how I could have caught this error earlier, please do let me know.
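
If you suspect the same issue, here is a minimal sketch of what to check and change (the table and column names are hypothetical):

-- list which columns currently disallow NULLs
SELECT name, is_nullable FROM sys.columns WHERE object_id = OBJECT_ID('dbo.MyTable');

-- allow NULLs in the column(s) the data file leaves empty
ALTER TABLE dbo.MyTable ALTER COLUMN SomeField varchar(255) NULL;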

BCP not copying all rows

This can happen when the table has an auto-increment (identity) ID column. The approach I use is:

  1. Create a view without the ID field of the table
  2. Insert the data through the view

For example:

CREATE TABLE DIM_Vitals (
    QueryType varchar(255) DEFAULT NULL,
    QueryDate varchar(255) DEFAULT NULL,
    APUID varchar(255) DEFAULT NULL,
    VitalID varchar(255) DEFAULT NULL,
    VitalName varchar(255) DEFAULT NULL,
    LoadDate varchar(225) DEFAULT NULL,
    id BIGINT IDENTITY(1,1) NOT NULL,
    PRIMARY KEY (id),
    CONSTRAINT dim_v UNIQUE (VitalID, VitalName)
);

Create a view for the above table:

CREATE VIEW DIM_Vitals_view
AS
SELECT
    QueryType,
    QueryDate,
    APUID,
    VitalID,
    VitalName,
    LoadDate
FROM DIM_Vitals;

Now insert the data into the view (DIM_Vitals_view is the view name):

bcp DIM_Vitals_view  IN DIM_Vitals_final.dat -f DIM_Vitals.Fmt -S <ServerIP> -U <User> -P <Pwd> -F2

This should solve the problem.

Make sure the view does not include the id field.
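
As a quick sanity check afterwards (a sketch, assuming the import above completed), you can confirm that rows arrived and that the id values were generated by the IDENTITY column in the base table:

-- rows loaded through the view; id values are generated by the base table
SELECT COUNT(*) AS rows_loaded FROM DIM_Vitals;
SELECT TOP 5 id, VitalID, VitalName FROM DIM_Vitals ORDER BY id;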

What does it mean when BCP fails (BCP copy in failed), but no -e error log contents are generated?

The error log created when you use the -e option is meant to capture errors regarding the data itself. So the error log will contain errors when there is an overflow of data (too many bytes in a field destined for a column with too few).

Execution errors, or errors with the BCP application itself, are not captured in the error file created by the -e option.

In an automated environment, if you want to capture or log such errors, you will need to redirect the output of the BCP command to a file for viewing later, or even for loading into a log table in SQL Server.
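
For example, in a Windows batch job you might combine the -e error file with plain output redirection (the file names here are just placeholders):

bcp MyDB.dbo.mincase in data.csv -c -T -S .\SQLEXPRESS -e data.bcperror.log > bcp_output.log 2>&1

The -e file then holds the per-row data errors, while bcp_output.log holds whatever bcp itself prints (rows copied, connection or syntax failures, and so on).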

BCP copy in failed on TSV

Finally found the solution, and it appears to have little to do with BCP (though I'm leaving the question title as is, since BCP is where the problem came to light, and so may be how others find this post).

TLDR:

Datetime values in the TSV were overflowing the destination field in the destination table, because that table's schema expected a smalldatetime type, while at the source end we were treating the data as if the destination could handle a datetime type in that column.

Moral of the story:

When debugging ETL / data-transfer issues, make sure to check whether any questionable data type conflicts between source and destination may be involved.

Long version:

Using a binary-search debugging method on the actual dataset files that were causing the problems (e.g. using the first half of the data, seeing what happens, halving that, trying again, etc.), I was able to avoid the "BCP copy failed" error, but then saw that, even though no failure message was thrown, 0 rows were being copied (so I really don't know why the "BCP copy failed" errors no longer popped up). Adding the -e option to this minimal example to get an error log of the bcp copy attempt (e.g. -e filename.bcperror.log), I saw the error

#@ Row 1, Column 150: Datetime field overflow. Fractional second precision exceeds the scale specified in the parameter binding. @#

repeated for every row of the generated error file (column 150 being the last column in a row, e.g. 2018-08-29 11:34:14).

Looking at the table that BCP was trying to copy to in MSSQL Server, I noticed that (unlike in the tables that BCP was successfully writing to) the final field (column 150) was set to the smalldatetime type, whereas the other tables used datetime. After changing this field to datetime as well, BCP was able to copy the problem TSVs into the table without issue.
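
In other words, the fix was simply widening the destination column's type, along the lines of the following sketch (the table and column names are placeholders for the actual ones):

-- change the overflow-prone column from smalldatetime to datetime
ALTER TABLE dbo.MyDestinationTable ALTER COLUMN Column150 datetime NULL;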


