Cannot bulk load. The file "C:\data.txt" does not exist

Cannot bulk load. The file c:\data.txt does not exist

That's run on the server, so it's looking for C:\data.txt on the server's C: drive.

Also ensure the login you're using has read permissions on C:\.
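For illustration, a minimal sketch of the pattern; the dbo.DataImport table and the file layout here are assumptions, and the point is that the path is resolved on the SQL Server machine, not on your workstation:

-- Hypothetical example: the file must exist on the server's own C: drive.
BULK INSERT dbo.DataImport
FROM 'C:\data.txt'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);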

SQL Bulk Insert -- File does not exist

Look at the error message:
Cannot bulk load. The file "c:\data.txt" does not exist

Is that file on the SQL Server's C:\ drive?

SQL BULK INSERT and the like only work with drives local to the SQL Server machine. Your SQL Server cannot reach onto your own local drive.

You need to put the file onto the SQL Server's C:\ drive and try again.

Cannot bulk load because the file "…" could not be opened. Operating system error code 5 (Access is denied.)

Thank you so much for taking the time to respond; I really appreciate it.

It just took some time to gather more information on this issue. As it turned out, it is a Kerberos double-hop issue.

We dug into the machine logs and found entries that exactly coincide with the random errors we are facing. The log entry looks like this:

The delegated TGT for the user (username@domainname) has expired. A renewal was attempted and failed with error 0xc0000001. The server logon session (0x9:8748fa4a) has stopped delegating the user's credential. For future unconstrained delegation to succeed, the user needs to authenticate again to the server.

The username in the error is that of the service account which runs the web service, which in turn calls the failing stored procedure.

So this establishes it to be a Kerberos double-hop issue. We are still not sure why it only happens intermittently, but it is the double hop nonetheless.
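As a side note beyond the original thread: you can confirm from the SQL Server side whether a session actually authenticated with Kerberos (rather than falling back to NTLM) by checking sys.dm_exec_connections:

-- auth_scheme is 'KERBEROS' or 'NTLM' for the current session;
-- NTLM on a hop that needs delegation is the signature of the double-hop problem.
SELECT auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;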

Azure Blob to Azure SQL Database: Cannot bulk load because the file xxxx.csv could not be opened. Operating system error code 5 (Access is denied.)

I think this error message is misleading.

I created the same test as you and encountered the same error.

But after I edited bs140513_032310-demo.csv and File1.fmt, it worked.

  1. I changed the bs140513_032310-demo.csv like this:
    (screenshot of the edited CSV file)

  2. I changed the File1.fmt like this: I changed the customer column length from 7 to 100 and the age column length from 100 to 7:

14.0
3
1  SQLCHAR  0  100  ","      1  customer  ""
2  SQLCHAR  0  7    ","      2  age       SQL_Latin1_General_CP1_CI_AS
3  SQLCHAR  0  100  "\r\n"   3  gender    ""

  3. I used the following statement to query:
SELECT *
FROM OPENROWSET(
    BULK 'bs140513_032310-demo.csv',
    DATA_SOURCE = 'MyAzureInvoices',
    FORMAT = 'CSV',
    FORMATFILE = 'File1.fmt',
    FORMATFILE_DATA_SOURCE = 'MyAzureInvoices'
) AS DataFile;

The result shows:

(screenshot of the query result)


Don't BULK INSERT into your real tables directly.
  • I would always insert into a staging table ext.customer_Staging (without the IDENTITY column) from the CSV file
  • possibly edit / clean up / manipulate your imported data
  • and then copy the data across to the real table with a T-SQL statement like:
INSERT INTO ext.customer_Staging WITH (TABLOCK) (customer, age, gender)
SELECT * FROM OPENROWSET(
    BULK 'bs140513_032310-demo.csv',
    DATA_SOURCE = 'MyAzureInvoices',
    FORMAT = 'CSV',
    FORMATFILE = 'File1.fmt',
    FORMATFILE_DATA_SOURCE = 'MyAzureInvoices'
) AS DataFile;
GO

INSERT INTO ext.customer (customer, age, gender)
SELECT customer, age, gender
FROM ext.customer_Staging;
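For completeness, a possible shape for the staging table; the column types here are assumptions based on the format file above, so adjust them to match your real ext.customer table:

-- Hypothetical staging table matching the format file above;
-- no IDENTITY column, so the bulk load maps columns 1:1.
CREATE TABLE ext.customer_Staging
(
    customer NVARCHAR(100) NULL,
    age      NVARCHAR(7)   NULL,  -- loaded as text; cast/validate later
    gender   NVARCHAR(100) NULL
);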

BULK INSERT error code 3: The system cannot find the path specified

"I am trying to bulk insert a local file into a remote MS_SQL database"

Your approach is not working because the file specification 'C:\\Users\\userName\\Desktop\\Folder\\Book1.csv' is only a valid path on the workstation that is running your Python code, but the BULK INSERT documentation explains that

data_file must specify a valid path from the server on which SQL Server is running. If data_file is a remote file, specify the Universal Naming Convention (UNC) name.

(emphasis mine). That is, the BULK INSERT statement runs on the server, so a file specification on some other machine (like your workstation) is actually a "remote file" as far as the server is concerned. In other words, SQL Server goes looking for a file named C:\Users\userName\Desktop\Folder\Book1.csv on the server itself, and when that fails it raises the "cannot find the path" error.

In order to use BULK INSERT you would need to either

  1. put the file on a network share that the SQL Server can "see", and then supply the UNC path to that file, or

  2. upload the file to a local folder on the SQL Server and then supply the local (server) path to the file (both alternatives are sketched below).
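As a rough sketch, assuming a hypothetical dbo.Book1 table and a hypothetical \\fileserver\imports share that the SQL Server service account can read:

-- Alternative 1: a UNC path to a share the server can reach
-- (\\fileserver\imports is an assumed name).
BULK INSERT dbo.Book1
FROM '\\fileserver\imports\Book1.csv'
WITH (FORMAT = 'CSV', FIRSTROW = 2);

-- Alternative 2: a folder local to the SQL Server machine itself.
BULK INSERT dbo.Book1
FROM 'C:\Imports\Book1.csv'
WITH (FORMAT = 'CSV', FIRSTROW = 2);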

If neither of those alternatives is feasible then your other option from Python is to use the subprocess module to invoke SQL Server's bcp utility to upload the data from your local file into the SQL Server database.


