Using Input from a Text File for Where Clause

Log parser: Using a text file as an input in WHERE clause

Q1: Instead of filtering the data with Log Parser, use findstr with the /g:file (and optionally /v) switches to filter the input files or the output lines, depending on the case.

Q2: Strings are not timestamps. Use

BETWEEN TO_TIMESTAMP('%date_1%','yyyy-MM-dd') AND TO_TIMESTAMP('%date_2%','yyyy-MM-dd')

Using a text file in the where clause of postgreSQL query

You could definitely do this natively. If going that route, I would use the Postgres COPY command along with a temp table in your query, discarding the temp table when finished. If I'm not mistaken, this requires that the file live in a folder the Postgres server can read, such as your data directory, wherever you have Postgres installed.
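The COPY-into-a-temp-table route can be sketched as follows. To keep the example self-contained and runnable, sqlite3 stands in for Postgres here; in real Postgres you would replace the simulated load with a server-side `COPY ids FROM '/path/to/file.txt'`. Table names and file contents are illustrative.

```python
import sqlite3

# In Postgres you would run: COPY ids FROM '/path/to/file.txt';
# sqlite3 has no COPY, so we simulate the load from the file's lines.
file_lines = ["555", "123", "567"]  # stands in for the text file's contents

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TEMP TABLE ids (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO ids VALUES (?)",
                 [(int(line),) for line in file_lines])

# A stand-in for the real table you want to filter.
conn.execute("CREATE TABLE users (userID INTEGER)")
conn.execute("INSERT INTO users VALUES (123)")

# The temp table now drives the WHERE clause.
rows = conn.execute(
    "SELECT userID FROM users WHERE userID IN (SELECT id FROM ids)"
).fetchall()
print(rows)  # [(123,)]
```

The temp table disappears when the session ends, which takes care of the cleanup the answer mentions.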

However, for a cleaner look, I would prefer employing PL/R for something like this. You can quickly read the file and return an array of values to use in your query. I'm sure you can substitute PL/Python, or whatever else you prefer that has methods for accessing external files, for PL/R.

CREATE FUNCTION file_vals()
RETURNS integer[] AS
$BODY$
    -- readLines() returns character data, so coerce to integer
    return (as.integer(readLines('C:/path/to/your/file.txt')))
$BODY$
LANGUAGE plr IMMUTABLE;

Your file.txt looks like:

555
123
567

Then call from your function (I put sample data in a subquery to simplify):

WITH users AS (
    SELECT 123 AS userID
)
SELECT userID
FROM users
WHERE userID = ANY (file_vals())

Edit: As DanielVérité pointed out in the comments, it's important to note that this only works if you have admin privileges over your database. PL/R and any other language extension that gives you external file access will inherently be an untrusted language, which means only admins can create functions in those languages.

It's also important to note that the file you're reading from must be accessible directly from the Postgres server. If you're executing these queries via remote client, you'll need to get that file over to the server first.

Use terms from text file in SQL WHERE ... IN clause

I am not that familiar with MySQL, but it looks like you can load a text file into a table, as you suggested in your question:

LOAD DATA INFILE 'file.txt'
INTO TABLE t1
(column1, column2, column3);

and then use joins to get the data.

See the MySQL documentation on LOAD DATA INFILE for details.

Microsoft SQL Query where value from text file

The method I use is to right-click the database and import the emails as a new table.

I then use the LIKE operator to compare the two tables, like so:

USE Database

SELECT Emails.Email
FROM Emails
JOIN EmailsToLookFor ON Emails.Email LIKE EmailsToLookFor.Email + '%'

In this example, EmailsToLookFor is the new table you imported and Emails is the table to search in; in both tables, Email is the field containing the email addresses. If you wish to return more information relating to the found email addresses, add the columns like so:

SELECT  Email, Username, Name etc. FROM Emails

Hope this helps...
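The LIKE join above can be exercised end-to-end like this; sqlite3 stands in for SQL Server to keep the example runnable (note that sqlite concatenates strings with || where T-SQL uses +), and the sample addresses are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Emails (Email TEXT)")
conn.execute("CREATE TABLE EmailsToLookFor (Email TEXT)")
conn.executemany("INSERT INTO Emails VALUES (?)",
                 [("alice@example.com",), ("bob@example.com",)])
# The imported lookup table may hold prefixes rather than full addresses,
# which is why the answer matches with LIKE ... + '%'.
conn.execute("INSERT INTO EmailsToLookFor VALUES ('alice')")

rows = conn.execute("""
    SELECT e.Email
    FROM Emails e
    JOIN EmailsToLookFor f ON e.Email LIKE f.Email || '%'
""").fetchall()
print(rows)  # [('alice@example.com',)]
```

Note that a leading-anchored LIKE can still use an index on Email in many engines, whereas '%' on both sides cannot.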

SQL Read Where IN (Long List from .TXT file)

You have a few options, of which one option is my recommended one.

Option 1

Create a table in your database like so:

create table ID_Comparer (
    ID int primary key
);

With a programming language of your choice, empty out the table and then load the 5000+ IDs that you want to eventually query in this table.

Then, write one of these queries to extract the data you want:

select *
from main_table m
where exists (
    select 1 from ID_Comparer where ID = m.ID
)

or

select *
from main_table m
inner join ID_Comparer c on m.ID = c.ID

Since ID_Comparer's ID is the primary key and (assuming) main_table's ID is indexed or keyed, matching should be relatively fast.
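Option 1 can be sketched in Python like this, with sqlite3 standing in for your database; the table names follow the answer, while the sample data and the ids.txt path are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ID_Comparer (ID INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE main_table (ID INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO main_table VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "c"), (4, "d")])

# Empty the comparer table, then bulk-load the IDs from the text file
# (one ID per line); in practice:
#   ids = [int(line) for line in open("ids.txt")]
ids = [2, 4]
conn.execute("DELETE FROM ID_Comparer")
conn.executemany("INSERT INTO ID_Comparer VALUES (?)", [(i,) for i in ids])

# The EXISTS form from the answer; the join form works the same way.
rows = conn.execute("""
    SELECT m.ID, m.name FROM main_table m
    WHERE EXISTS (SELECT 1 FROM ID_Comparer c WHERE c.ID = m.ID)
""").fetchall()
print(sorted(rows))  # [(2, 'b'), (4, 'd')]
```

executemany batches the 5000+ inserts in one round trip, which matters more over a network connection than it does here.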

Option 1 modified

This option is just like the one above but handles concurrency a bit better. That is, if application 1 wants to compare 2,000 IDs while application 2 wants to compare 5,000 IDs against your main table at the same time, you don't want one to delete the other's data from the comparer table. So, change the table a bit.

create table ID_Comparer (
    ID int,
    token char(32),
    entered date default current_date(), -- use the syntax of your DB
    primary key (ID, token) -- composite: two processes may load the same ID under different tokens
);

Then, use your favorite programming language to create a GUID, and load all the IDs with that same GUID into the table, like so:

1, 7089e5eced2f408eac8b390d2e891df5
2, 7089e5eced2f408eac8b390d2e891df5
...

Another process doing the same thing will load its own IDs with its own GUID:

2412, 96d9d6aa6b8d49ada44af5a99e6edf56
9434, 96d9d6aa6b8d49ada44af5a99e6edf56
...

Now, your select:

select *
from main_table m
where exists (
    select 1 from ID_Comparer where ID = m.ID and token = '<your guid>'
)

OR

select *
from main_table m
inner join ID_Comparer c on m.ID = c.ID and token = '<your guid>'

After you receive your data, be sure to run delete from ID_Comparer where token = '<your guid>' - that's just nice cleanup.

You could create a nightly task to remove all data that's more than 2 days old or some such for additional housekeeping.

Since ID_Comparer's key and (assuming) main_table's ID are indexed or keyed, matching should be relatively fast even with the GUID as an additional keyed lookup.
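The token variant above can be sketched like this; sqlite3 again stands in for the real database, and the helper function name is illustrative. Each caller loads its IDs under a fresh GUID, queries, and cleans up only its own rows.

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ID_Comparer (
    ID INTEGER,
    token CHAR(32),
    PRIMARY KEY (ID, token)
)""")
conn.execute("CREATE TABLE main_table (ID INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO main_table VALUES (?)",
                 [(i,) for i in range(1, 10)])

def compare_ids(conn, ids):
    """Load this caller's IDs under a fresh token, query, then clean up."""
    token = uuid.uuid4().hex  # a 32-char GUID, as in the answer's sample data
    conn.executemany("INSERT INTO ID_Comparer VALUES (?, ?)",
                     [(i, token) for i in ids])
    rows = conn.execute("""
        SELECT m.ID FROM main_table m
        JOIN ID_Comparer c ON m.ID = c.ID AND c.token = ?
    """, (token,)).fetchall()
    # the cleanup step the answer recommends: delete only our token's rows
    conn.execute("DELETE FROM ID_Comparer WHERE token = ?", (token,))
    return sorted(r[0] for r in rows)

print(compare_ids(conn, [2, 5]))   # [2, 5]
print(compare_ids(conn, [3, 99]))  # [3]  (99 has no match in main_table)
```

Two concurrent callers never see each other's rows because every read and delete is scoped by token.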

Option 2

Instead of creating a table, you could create a large SQL query like so:

select * from main_table where id = <first id>
union select * from main_table where id = <second id>
union select * from main_table where id = <third id>
...

OR

select * from main_table where id IN (<first 5 ids>)
union select * from main_table where id IN (<next 5 ids>)
union select * from main_table where id IN (<next 5 ids>)
...

If the performance is acceptable, and creating a new table as in option 1 doesn't feel right to you, you could try one of these methods.

Assuming main_table's ID is indexed or keyed, individual matching might produce a faster query than matching against one long list of comma-separated values. That's speculation; you'll have to inspect the query plan and run it against a test case.
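Option 2's chunked approach can be sketched like this; the chunk size and table name are illustrative, sqlite3 stands in for the real database, and parameter placeholders are used rather than pasting the IDs into the SQL string.

```python
import sqlite3

def chunked(seq, size):
    """Yield consecutive slices of seq, each at most `size` long."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE main_table (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO main_table VALUES (?)",
                 [(i,) for i in range(100)])

ids = [3, 17, 42, 56, 77, 91, 99]  # in practice, read from the .TXT file

# Build "SELECT ... WHERE id IN (?,?,...)" per chunk, joined with UNION.
parts = []
for chunk in chunked(ids, 5):
    placeholders = ",".join("?" * len(chunk))
    parts.append(f"SELECT * FROM main_table WHERE id IN ({placeholders})")
sql = " UNION ".join(parts)

rows = conn.execute(sql, ids).fetchall()
print(sorted(r[0] for r in rows))  # [3, 17, 42, 56, 77, 91, 99]
```

Chunking also keeps you under per-statement parameter limits (e.g., some engines cap the number of bound parameters per query), which is a practical reason to split a 5000-ID list even when one giant IN would parse.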

Which option to choose?

Testing these options should be fast. I'd recommend trying all these options with your database engine and the size of your table and see which one suits your use-case the most.


