How to Run Multiple SQL Queries

How to run multiple queries simultaneously?

In SQL Server Management Studio (SSMS), I can have multiple queries written out, hit F5, and SSMS will run them one at a time, with multiple result tables, one for each query.

OK, I think you mean that you want to kick off execution of every query in one go and get the results back, not actually run all 200 concurrently.

You have two options. You can just hit F5 (or use the equivalent toolbar button), and all of the statements will run, with the output shown below in a script panel.


OR

You can select all of the statements and then press Ctrl+Enter, and you'll get exactly what SSMS did (as you described above): each result set gets its own data grid.


Suggested way to run multiple SQL statements in Python?

I would create a stored procedure:

DROP PROCEDURE IF EXISTS CopyTable;
DELIMITER $$
CREATE PROCEDURE CopyTable(IN _mytable VARCHAR(64), IN _table_name VARCHAR(64))
BEGIN
    -- Drop the target table if it already exists (FK checks off so the drop cannot fail on references)
    SET FOREIGN_KEY_CHECKS = 0;
    SET @stmt = CONCAT('DROP TABLE IF EXISTS ', _table_name);
    PREPARE stmt1 FROM @stmt;
    EXECUTE stmt1;
    SET FOREIGN_KEY_CHECKS = 1;
    -- Recreate the target table as a copy of the source table
    SET @stmt = CONCAT('CREATE TABLE ', _table_name, ' AS SELECT * FROM ', _mytable);
    PREPARE stmt1 FROM @stmt;
    EXECUTE stmt1;
    DEALLOCATE PREPARE stmt1;
END$$
DELIMITER ;

and then just run:

args = ['mytable', 'table_name']  # source table, destination table
cursor.callproc('CopyTable', args)

This keeps it simple and modular. Of course, you should do some kind of error checking, and you could even have the stored procedure return a code to indicate success or failure.
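For example, a minimal error-handling wrapper might look like this (a sketch, assuming a PyMySQL connection; the copy_table name is made up for illustration):

import pymysql

def copy_table(conn, source, destination):
    # Call the CopyTable procedure and report success or failure.
    try:
        with conn.cursor() as cursor:
            cursor.callproc('CopyTable', (source, destination))
        conn.commit()
        return True
    except pymysql.MySQLError as exc:
        # pymysql.MySQLError is the base class for PyMySQL's database errors.
        print(f"CopyTable failed: {exc}")
        conn.rollback()
        return False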

How can I run multiple SQL statements in a query in Python?

Since you're implementing the migration yourself, why not define a function that does it in terms of well-defined steps:

from typing import List

def executemany(conn, statements: List[str]):
    # Execute each statement in order on a single connection, then commit once.
    with conn.cursor() as c:
        for statement in statements:
            c.execute(statement)
    conn.commit()
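For example (a sketch, assuming a PyMySQL connection; the table and statements are invented for illustration):

import pymysql

conn = pymysql.connect(host="localhost", user="root", password="secret", database="mydb")
executemany(conn, [
    "CREATE TABLE IF NOT EXISTS t (id INT PRIMARY KEY)",
    "INSERT INTO t (id) VALUES (1)",
])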

Another option is to use a dedicated migration tool. I'm partial to goose.

How to execute multiple SQL commands at once in pd.read_sql_query?

The issues you face are:

  1. You need to pass the MULTI_STATEMENTS flag to PyMySQL, and
  2. read_sql_query assumes that the first result set contains the data for the DataFrame, and that may not be true for an anonymous code block.

You can create your own PyMySQL connection and retrieve the data like this:

import pandas as pd
import pymysql
from pymysql.constants import CLIENT

conn_info = {
    "host": "localhost",
    "port": 3307,
    "user": "root",
    "password": "toot",
    "database": "mydb",
    "client_flag": CLIENT.MULTI_STATEMENTS,
}

cnxn = pymysql.connect(**conn_info)
crsr = cnxn.cursor()

sql = """\
CREATE TEMPORARY TABLE tmp (id int primary key, txt varchar(20))
ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
INSERT INTO tmp (id, txt) VALUES (1, 'foo'), (2, 'ΟΠΑ!');
SELECT id, txt FROM tmp;
"""
crsr.execute(sql)

# Skip over the empty result sets (from the DDL and INSERT statements)
# until we find one that actually contains rows.
num_tries = 5
result = None
for i in range(num_tries):
    result = crsr.fetchall()
    if result:
        break
    crsr.nextset()

if not result:
    print(f"(no result found after {num_tries} attempts)")
else:
    df = pd.DataFrame(result, columns=[x[0] for x in crsr.description])
    print(df)
"""console output:
   id   txt
0   1   foo
1   2  ΟΠΑ!
"""

Additional notes:

Note 1: As mentioned in another answer, you can use the connect_args argument to SQLAlchemy's create_engine method to pass the MULTI_STATEMENTS flag. If you need a SQLAlchemy Engine object for other things (e.g., for to_sql), then that might be preferable to creating your own PyMySQL connection directly.
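A minimal sketch of that approach, reusing the connection details from the example above:

from sqlalchemy import create_engine
from pymysql.constants import CLIENT

# Pass the MULTI_STATEMENTS flag through to the underlying PyMySQL connection.
engine = create_engine(
    "mysql+pymysql://root:toot@localhost:3307/mydb",
    connect_args={"client_flag": CLIENT.MULTI_STATEMENTS},
)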

Note 2: num_tries can be arbitrarily large; it is simply a way of avoiding an endless loop. If we need to skip the first n empty result sets then we need to call nextset that many times regardless, and once we've found the non-empty result set we break out of the loop.


