How to Read a Text File Using T-SQL

How to read a Text file using T-SQL?

You could do a BULK INSERT into a temp table and then run another INSERT that joins in the extra data you want to add. Here is an example:

CREATE TABLE #TEXTFILE_1 (
    FIELD1 varchar(100),
    FIELD2 varchar(100),
    FIELD3 varchar(100),
    FIELD4 varchar(100));

BULK INSERT #TEXTFILE_1 FROM 'C:\STUFF.TXT'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n');

/*You now have your bulk data*/

insert into yourtable (field1, field2, field3, field4, field5, field6)
select txt.FIELD1, txt.FIELD2, txt.FIELD3, txt.FIELD4, 'something else1', 'something else2'
from #TEXTFILE_1 txt;

drop table #TEXTFILE_1;

Does this not do what you'd like?

Reading a text file with SQL Server

What does your text file look like? Is each line a record?

You'll have to check out the BULK INSERT statement - that should look something like:

BULK INSERT dbo.YourTableName
FROM 'D:\directory\YourFileName.csv'
WITH
(
CODEPAGE = '1252',
FIELDTERMINATOR = ';',
CHECK_CONSTRAINTS
)

Here, in my case, I'm importing a CSV file - but you should be able to import a text file just as well.

From the MSDN docs - here's a sample that hopefully works for a text file with one field per row:

BULK INSERT dbo.temp 
FROM 'c:\temp\file.txt'
WITH
(
ROWTERMINATOR ='\n'
)

Seems to work just fine in my test environment :-)

How to import data from .txt file to populate a table in SQL Server

Using OPENROWSET

You can read text files using the OPENROWSET option (first you have to enable ad hoc distributed queries).
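Enabling ad hoc distributed queries is a one-time server configuration; a minimal sketch (requires sysadmin-level permissions):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;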

Using Microsoft Text Driver

SELECT * FROM OPENROWSET('MSDASQL',
'Driver={Microsoft Text Driver (*.txt; *.csv)};
DefaultDir=C:\Docs\csv\;',
'SELECT * FROM PPE.txt')

Using OLEDB provider

SELECT *
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
    'Text;Database=C:\Docs\csv\;IMEX=1;',
    'SELECT * FROM PPE.txt') t

Using BULK INSERT

You can import the text file data into a staging table and then update your target table from it:

BULK INSERT dbo.StagingTable
FROM 'C:\PPE.txt'
WITH
(
FIELDTERMINATOR = ';',
ROWTERMINATOR = '\n'
)
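Once the staging table is loaded, updating the target table from it might look like this; a minimal sketch, assuming hypothetical tables dbo.TargetTable and dbo.StagingTable that share an ID key and a SomeColumn column:

UPDATE t
SET t.SomeColumn = s.SomeColumn
FROM dbo.TargetTable AS t
INNER JOIN dbo.StagingTable AS s
    ON t.ID = s.ID;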

How to read text file containing SQL code and execute it

I'd do database creation via a SQL script which checks for the existence of tables/views/SPs/etc. before creating them, then I'd execute it in the VB application via ADO.NET. I'd ship it with the application in a subdirectory. It's not a big deal to read text files, or to execute a SQL string via ADO.NET.

I'd have a VERSION table in the database that identifies what DB schema version is installed, and when I shipped upgrade scripts which modified the DB, I would have them update the VERSION table. The first version you ship is 1.0, increment as appropriate thereafter.

All the SQL object creation/detection/versioning logic would be in SQL. That's by far the simplest way to do it on the client, it's the simplest thing to develop and to test before shipping (MS SQL Management Studio is a godsend), it's the simplest thing to diff against the previous version, store in source control, etc.
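The existence checks and VERSION table described above can be sketched in T-SQL like this (table and column names are illustrative assumptions, not a prescribed schema):

IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = 'VERSION')
BEGIN
    CREATE TABLE dbo.VERSION (SchemaVersion varchar(20) NOT NULL);
    INSERT INTO dbo.VERSION (SchemaVersion) VALUES ('1.0');
END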

Incidentally, I would also have my application interact with the database strictly via stored procedures, and I would absolutely never, ever feed SQL any concatenated strings. All parameters going to SQL should be delivered via ADO.NET's SqlParameter mechanism, which is very cool because it makes for clean, readable code, and sanitizes all of your parameters for you. Ever use a DB application that crashed on apostrophes? They didn't sanitize their parameters.
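The same principle applies to dynamic SQL on the server side: pass values through sp_executesql parameters rather than concatenating them into the query string. A minimal sketch, with an illustrative dbo.Customers table:

DECLARE @name nvarchar(100) = N'O''Brien';  -- the apostrophe is handled safely

EXEC sp_executesql
    N'SELECT * FROM dbo.Customers WHERE LastName = @LastName',
    N'@LastName nvarchar(100)',
    @LastName = @name;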

Reading Data from .txt file in SQL

You could try OPENROWSET instead of BULK INSERT.

SELECT * into temptable
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
'Excel 8.0;Database=C:\Documents and Settings\....\example.xls;IMEX=1',
'SELECT * FROM [Sheet1$]')

Other Options

SSIS package to import Excel file

You can also generate a package like this on the fly from SSMS by right-clicking the database, going to Tasks > Import Data, and setting up Excel as the source file.

Do you have permission to create a linked server?

If you have the data in Excel (or, I suppose, even in the txt file) you can also create a linked server. You will have to open SSMS as administrator to do this! Also, the file will have to be on a path accessible, I believe, to both yourself and your SQL Agent, e.g. a local path or a file share. Once you open SSMS, create the Excel linked server and then query it.

Here is one tool I use to generate the linked server statement and add it:

/* !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

IN ORDER TO QUERY EXCEL YOU MUST RUN SSMS AS ADMINISTRATOR!!!!!!!

This doesn't seem to affect import jobs run under the SQL Agent, but because of the way SSMS
handles permissions to folders, it is not privileged when accessing the ACE 12.0 OLEDB provider.
I have tried all of the in-process settings, granting direct permissions, etc., and only
running SSMS as Administrator seems to work.
*/

DECLARE @RC int
DECLARE @server nvarchar(128)
DECLARE @srvproduct nvarchar(128)
DECLARE @provider nvarchar(128)
DECLARE @datasrc nvarchar(4000)
DECLARE @location nvarchar(4000)
DECLARE @provstr nvarchar(4000)
DECLARE @catalog nvarchar(128)
-- Set parameter values
SET @server = N'XLSERVER'
SET @srvproduct = N'Excel'
SET @provider = N'Microsoft.ACE.OLEDB.12.0'
--SET @provider = N'Microsoft.ACE.OLEDB.15.0'
SET @datasrc = N'FULLFILEPATH'
--SET @provstr = N'Excel 12.0; HDR=Yes' ---without IMEX
SET @provstr = N'Excel 12.0;IMEX=1;HDR=YES;' ----Office 2007+
--SET @provstr = N'Excel 8.0;IMEX=1;HDR=YES;' ----Office 97-2003 Uses Jet 4.0 instead of ACE 12.0

IF EXISTS(SELECT * FROM sys.servers WHERE name = @server)
BEGIN
--Drop The Current Server
EXEC master.dbo.sp_dropserver @server, @droplogins='droplogins'
END

EXEC @RC = [master].[dbo].[sp_addlinkedserver] @server, @srvproduct, @provider,
@datasrc, @location, @provstr, @catalog

And here is how you can select from the data once you have created the linked server. Note that the file cannot be open while you query it!

SELECT *
FROM
XLSERVER...Sheet1$

SQL Read Where IN (Long List from .TXT file)

You have a few options, of which one option is my recommended one.

Option 1

Create a table in your database like so:

create table ID_Comparer (
ID int primary key
);

With a programming language of your choice, empty out the table and then load the 5,000+ IDs that you eventually want to query into this table.
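Since the question mentions the IDs come from a .txt file, you could also skip the client program and bulk load them directly; a minimal sketch, assuming one ID per line in a hypothetical C:\ids.txt:

TRUNCATE TABLE ID_Comparer;

BULK INSERT ID_Comparer
FROM 'C:\ids.txt'
WITH (ROWTERMINATOR = '\n');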

Then, write one of these queries to extract the data you want:

select *
from main_table m
where exists (
select 1 from ID_Comparer where ID = m.ID
)

or

select *
from main_table m
inner join ID_Comparer c on m.ID = c.ID

Since ID_Comparer's ID and (assuming that) main_table's ID are indexed/keyed, matching should be relatively fast.

Option 1 modified

This option is just like the one above but helps a bit with concurrency. That means, if application 1 wants to compare 2000 IDs while application 2 wants to compare 5000 IDs against your main table at the same time, you would not want to delete data from the comparer table. So, change the table a bit.

create table ID_Comparer (
ID int primary key,
token char(32), -- index this
entered date default current_date() -- use the syntax of your DB
);

Then, use your favorite programming language to create a GUID. Load all the IDs and the same GUID into the table like so:

1, 7089e5eced2f408eac8b390d2e891df5
2, 7089e5eced2f408eac8b390d2e891df5
...

Another process doing the same thing will be loading its own IDs with its own GUID:

2412, 96d9d6aa6b8d49ada44af5a99e6edf56
9434, 96d9d6aa6b8d49ada44af5a99e6edf56
...
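Loading the IDs together with the session's token might look like this in T-SQL (values taken from the example rows above; the token column holds the GUID):

INSERT INTO ID_Comparer (ID, token)
VALUES (1, '7089e5eced2f408eac8b390d2e891df5'),
       (2, '7089e5eced2f408eac8b390d2e891df5');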

Now, your select:

select *
from main_table m
where exists (
select 1 from ID_Comparer where ID = m.ID and token = '<your guid>'
)

OR

select *
from main_table m
inner join ID_Comparer c on m.ID = c.ID and token = '<your guid>'

After you receive your data, be sure to run delete from ID_Comparer where token = '<your guid>' - that is just nice cleanup.

You could create a nightly task to remove all data that's more than 2 days old or some such for additional housekeeping.

Since ID_Comparer's ID and (assuming that) main_table's ID are indexed/keyed, matching should be relatively fast even when the GUID is an additional keyed lookup.

Option 2

Instead of creating a table, you could create a large SQL query like so:

select * from main_table where id = <first id>
union select * from main_table where id = <second id>
union select * from main_table where id = <third id>
...

OR

select * from main_table where id IN (<first 5 ids>)
union select * from main_table where id IN (<next 5 ids>)
union select * from main_table where id IN (<next 5 ids>)
...

If the performance is acceptable and if creating a new table like in option 1 doesn't feel right to you, you could try one of these methods.

Assuming that main_table's ID is indexed/keyed, individual matching might result in a faster query than matching against one long list of comma-separated values. That is speculation, though; you will have to look at the query plan and run it against a test case.

Which option to choose?

Testing these options should be fast. I'd recommend trying all these options with your database engine and the size of your table and see which one suits your use-case the most.

Read contents of a text file into a varchar WITHOUT using BULK

Can you get them to allow Ad Hoc Distributed Queries? Then you can use OpenRowset or OpenDatasource.

SELECT *
FROM OPENROWSET('MSDASQL',
'Driver={Microsoft Text Driver (*.txt; *.csv)};DefaultDir=c:\users\graham\desktop;',
'SELECT * FROM [data.txt];')
Here's the reconfiguring code, if you need it:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;
go

This is a laborious technique, though -- you sure you can't use client code? Even, I dunno, VBA in Excel or something?

g.

How to load a text file as a complete string for an SQL query?

In MySQL you can do:

UPDATE table_names
SET name = LOAD_FILE('D:\\ttt.txt')
WHERE ID=3;

see: LOAD_FILE
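In SQL Server, a rough equivalent (assuming the same table and file as the MySQL example) is OPENROWSET with the BULK option, which reads the whole file as a single value:

UPDATE table_names
SET name = (SELECT BulkColumn
            FROM OPENROWSET(BULK 'D:\ttt.txt', SINGLE_CLOB) AS f)
WHERE ID = 3;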


