How to Test My Ad-Hoc SQL with Parameters in Postgres Query Window

I configure Postgres to log all commands, copy the command of interest from the log file (where the parameters have already been replaced with their values), and test that command in the query window.

It may not be the best approach, but it is easy and works for me.
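
As a minimal sketch, assuming you can edit postgresql.conf, the relevant logging settings could look like this (log_statement = 'all' logs every statement; for prepared statements the bound parameter values appear in an accompanying DETAIL log line):

# postgresql.conf -- assumed excerpt
log_statement = 'all'              # log every statement
log_min_duration_statement = 0     # optional: also log each statement's duration

A reload is enough to apply these settings (SELECT pg_reload_conf(); or pg_ctl reload); no restart is needed.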

Query a parameter (postgresql.conf setting) like max_connections

You can use SHOW:

SHOW max_connections;

This returns the currently effective setting. Be aware that it can differ from the setting in postgresql.conf, as there are multiple ways to set run-time parameters in PostgreSQL. To reset the "original" setting from postgresql.conf in your current session:

RESET max_connections;

However, that is not applicable to this particular setting. The manual:

This parameter can only be set at server start.

To see all settings:

SHOW ALL;

There is also pg_settings:

The view pg_settings provides access to run-time parameters of the
server. It is essentially an alternative interface to the SHOW and
SET commands. It also provides access to some facts about each
parameter that are not directly available from SHOW, such as minimum
and maximum values.

For your original request:

SELECT *
FROM pg_settings
WHERE name = 'max_connections';

Finally, there is current_setting(), which can be nested in DML statements:

SELECT current_setting('max_connections');
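
For example, nesting it in a query gives a rough check of how many connection slots are left (not accounting for superuser_reserved_connections):

SELECT current_setting('max_connections')::int
     - count(*) AS connections_left
FROM   pg_stat_activity;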

Related:

  • How to test my ad-hoc SQL with parameters in Postgres query window

User defined variables in PostgreSQL

Postgres does not normally use variables in plain SQL. But you can do that, too:

SET foo.test = 'SELECT bar FROM baz';

SELECT current_setting('foo.test');

Read about Customized Options in the manual.

In PostgreSQL 9.1 or earlier you needed to declare custom_variable_classes before you could use that.

However, you cannot EXECUTE dynamic SQL without a PL (procedural language). You would use a DO command for executing ad-hoc statements (but you cannot return data from it). Or use CREATE FUNCTION to create a function that executes dynamic SQL (and can return data in any fashion imaginable).

Be sure to safeguard against SQL injection when using dynamic SQL.
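
A minimal sketch, assuming a hypothetical table baz with columns bar and id: a DO block executing dynamic SQL built with format(), which quotes identifiers (%I) and literals (%L) safely:

DO
$do$
BEGIN
   -- %I quotes identifiers, %L quotes literals, guarding against SQL injection
   EXECUTE format('UPDATE %I SET %I = %L WHERE id = 1'
                , 'baz', 'bar', 'injection-safe value');
END
$do$;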

Related:

  • Is there a way to define a named constant in a PostgreSQL query?

Passing user id to PostgreSQL triggers

Options include:

  • When you open a connection, CREATE TEMPORARY TABLE current_app_user(username text); INSERT INTO current_app_user(username) VALUES ('the_user');. Then in your trigger, SELECT username FROM current_app_user to get the current username, possibly as a subquery.

  • In postgresql.conf create an entry for a custom GUC like my_app.username = 'unknown';. Whenever you create a connection run SET my_app.username = 'the_user';. Then in triggers, use the current_setting('my_app.username') function to obtain the value. Effectively, you're abusing the GUC machinery to provide session variables. Read the documentation appropriate to your server version, as custom GUCs changed in 9.2.

  • Adjust your application so that it has database roles for every application user. SET ROLE to that user before doing work. This not only lets you use the built-in current_user variable-like function to SELECT current_user;, it also allows you to enforce security in the database. See this question. You could log in directly as the user instead of using SET ROLE, but that tends to make connection pooling hard.

In all three cases, if you're connection pooling, you must be careful to DISCARD ALL; when you return a connection to the pool. (Though it is not documented as doing so, DISCARD ALL does a RESET ROLE.)

Common setup for demos:

CREATE TABLE tg_demo(blah text);
INSERT INTO tg_demo(blah) VALUES ('spam'),('eggs');

-- Placeholder; will be replaced by demo functions
CREATE OR REPLACE FUNCTION get_app_user() RETURNS text AS $$
    SELECT 'unknown';
$$ LANGUAGE sql;

CREATE OR REPLACE FUNCTION tg_demo_trigger() RETURNS trigger AS $$
BEGIN
    RAISE NOTICE 'Current user is: %', get_app_user();
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tg_demo_tg
AFTER INSERT OR UPDATE OR DELETE ON tg_demo
FOR EACH ROW EXECUTE PROCEDURE tg_demo_trigger();

Using a GUC:

  • In the CUSTOMIZED OPTIONS section of postgresql.conf, add a line like myapp.username = 'unknown_user'. On PostgreSQL versions older than 9.2 you also have to set custom_variable_classes = 'myapp'.
  • Restart PostgreSQL. You will now be able to SHOW myapp.username and get the value unknown_user.

Now you can use SET myapp.username = 'the_user'; when you establish a connection, or alternately SET LOCAL myapp.username = 'the_user'; after BEGINning a transaction if you want it to be transaction-local, which is convenient for pooled connections.

The get_app_user function definition:

CREATE OR REPLACE FUNCTION get_app_user() RETURNS text AS $$
    SELECT current_setting('myapp.username');
$$ LANGUAGE sql;

Demo using SET LOCAL for transaction-local current username:

regress=> BEGIN;
BEGIN
regress=> SET LOCAL myapp.username = 'test_user';
SET
regress=> INSERT INTO tg_demo(blah) VALUES ('42');
NOTICE: Current user is: test_user
INSERT 0 1
regress=> COMMIT;
COMMIT
regress=> SHOW myapp.username;
myapp.username
----------------
unknown_user
(1 row)

If you use SET instead of SET LOCAL the setting won't get reverted at commit/rollback time, so it's persistent across the session. It is still reset by DISCARD ALL:

regress=> SET myapp.username = 'test';
SET
regress=> SHOW myapp.username;
myapp.username
----------------
test
(1 row)

regress=> DISCARD ALL;
DISCARD ALL
regress=> SHOW myapp.username;
myapp.username
----------------
unknown_user
(1 row)

Also, note that you can't use SET or SET LOCAL with server-side bind parameters. If you want to use bind parameters ("prepared statements"), consider using the function form set_config(...). See the system administration functions in the manual.
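
For example (a sketch; the third argument true makes the setting transaction-local, like SET LOCAL):

SELECT set_config('myapp.username', 'the_user', true);
-- from application code, the value can be a bind parameter (placeholder syntax varies by driver):
-- SELECT set_config('myapp.username', $1, true);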

Using a temporary table

This approach requires a trigger (or, preferably, a helper function called by the trigger) that tries to read a value from a temporary table that every session should have. If the temporary table cannot be found, a default value is supplied. This is likely to be somewhat slow. Test carefully.

The get_app_user() definition:

CREATE OR REPLACE FUNCTION get_app_user() RETURNS text AS $$
DECLARE
    cur_user text;
BEGIN
    BEGIN
        cur_user := (SELECT username FROM current_app_user);
    EXCEPTION WHEN undefined_table THEN
        cur_user := 'unknown_user';
    END;
    RETURN cur_user;
END;
$$ LANGUAGE plpgsql VOLATILE;

Demo:

regress=> CREATE TEMPORARY TABLE current_app_user(username text);
CREATE TABLE
regress=> INSERT INTO current_app_user(username) VALUES ('testuser');
INSERT 0 1
regress=> INSERT INTO tg_demo(blah) VALUES ('42');
NOTICE: Current user is: testuser
INSERT 0 1
regress=> DISCARD ALL;
DISCARD ALL
regress=> INSERT INTO tg_demo(blah) VALUES ('42');
NOTICE: Current user is: unknown_user
INSERT 0 1

Secure session variables

There's also a proposal to add "secure session variables" to PostgreSQL. These are a bit like package variables. As of PostgreSQL 12 the feature has not been included, but keep an eye out and speak up on the hackers list if this is something you need.

Advanced: your own extension with shared memory area

For advanced uses you can even have your own C extension register a shared memory area and communicate between backends using C function calls that read/write values in a DSA segment. See the PostgreSQL programming examples for details. You'll need C knowledge, time, and patience.

PostgreSQL loops outside functions. Is that possible?

You cannot DECLARE (global) variables (there are workarounds) nor loop with plain SQL - with the exception of recursive CTEs as provided by @bma (which is actually iterating over rows, not looping, strictly speaking).

However, there is the DO statement for such ad-hoc procedural code. Introduced with Postgres 9.0. It works like a one-time function, but does not take any parameters and does not return anything. You can RAISE notices et al, so your example would just work fine:

DO
$do$
DECLARE
    _counter int := 0;
BEGIN
    WHILE _counter < 10
    LOOP
        _counter := _counter + 1;
        RAISE NOTICE 'The counter is %', _counter;  -- coerced to text automatically
    END LOOP;
END
$do$;

If not specified otherwise, the language in the body defaults to plpgsql. You can use any registered procedural language, though, if you declare it (like LANGUAGE plpython3u).

Postgres also offers generate_series() to generate sets ad-hoc, which may obviate the need for looping in many cases. Try a search here on SO for examples.
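
For example, the loop above can be replaced by a plain query (a sketch):

SELECT 'The counter is ' || g AS notice
FROM   generate_series(1, 10) AS g;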

Also, you can use the WHERE clause in a data-modifying CTE in plain SQL to fork cases and emulate IF .. THEN .. ELSE .. END ...
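
A rough sketch of that idea, reusing the tg_demo table from above: update matching rows, and insert a fallback row only if the update found nothing, emulating IF .. THEN .. ELSE:

WITH upd AS (
   UPDATE tg_demo
   SET    blah = 'spam - updated'
   WHERE  blah = 'spam'                      -- "IF" condition
   RETURNING blah
   )
INSERT INTO tg_demo (blah)
SELECT 'spam'
WHERE  NOT EXISTS (SELECT 1 FROM upd);       -- "ELSE" branch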

Is there a way to define a named constant in a PostgreSQL query?

This question has been asked before (How do you use script variables in PostgreSQL?). However, there is a trick that I use for queries sometimes:

with const as (
select 1 as val
)
select . . .
from const cross join
<more tables>

That is, I define a CTE called const that has the constants defined there. I can then cross join this into my query, any number of times at any level. I have found this particularly useful when I'm dealing with dates, and need to handle date constants across many subqueries.
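
For example, with a date constant shared across the query (orders and created_at are hypothetical names):

WITH const AS (
   SELECT DATE '2024-01-01' AS start_date
   )
SELECT o.*
FROM   orders o
CROSS  JOIN const c
WHERE  o.created_at >= c.start_date;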

How to declare a variable in a PostgreSQL query

There is no such feature in PostgreSQL. You can do it only in pl/PgSQL (or other pl/*), but not in plain SQL.

An exception is the WITH query, which can work as a variable, or even a tuple of variables, since it lets you return a table of temporary values.

WITH master_user AS (
   SELECT login,
          registration_date
   FROM   users
   WHERE  ...
)
SELECT *
FROM   users
WHERE  master_login = (SELECT login FROM master_user)
AND    (SELECT registration_date FROM master_user) > ...;

Insert text with single quotes in PostgreSQL

String literals

Escaping single quotes ' by doubling them up → '' is the standard way and works of course:

'user's log'     -- incorrect syntax (unbalanced quote)
'user''s log'

Plain single quotes (ASCII / UTF-8 code 39), mind you, not backticks `, which have no special purpose in Postgres (unlike certain other RDBMS) and not double-quotes ", used for identifiers.

In old versions, or if you still run with standard_conforming_strings = off, or, generally, if you prepend your string with E to declare POSIX escape string syntax, you can also escape with the backslash \:

E'user\'s log'

Backslash itself is escaped with another backslash. But that's generally not preferable.

If you have to deal with many single quotes or multiple layers of escaping, you can avoid quoting hell in PostgreSQL with dollar-quoted strings:

'escape '' with '''''
$$escape ' with ''$$

To further avoid confusion among dollar-quotes, add a unique token to each pair:

$token$escape ' with ''$token$

Which can be nested any number of levels:

$token2$Inner string: $token1$escape ' with ''$token1$ is nested$token2$

Pay attention if the $ character should have special meaning in your client software. You may have to escape it in addition. This is not the case with standard PostgreSQL clients like psql or pgAdmin.

That is all very useful for writing plpgsql functions or ad-hoc SQL commands. It cannot alleviate the need to use prepared statements or some other method to safeguard against SQL injection in your application when user input is possible, though. @Craig's answer has more on that. More details:

  • SQL injection in Postgres functions vs prepared queries

Values inside Postgres

When dealing with values inside the database, there are a couple of useful functions to quote strings properly:

  • quote_literal() or quote_nullable() - the latter outputs the string NULL for null input.

    There is also quote_ident() to double-quote strings where needed to get valid SQL identifiers.
  • format() with the format specifier %L is equivalent to quote_nullable().

    Like: format('%L', string_var)
  • concat() or concat_ws() are typically no good for this purpose as those do not escape nested single quotes and backslashes.
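
A few quick illustrations of these functions (arbitrary example strings):

SELECT quote_literal($$user's log$$);   -- 'user''s log'
SELECT quote_nullable(NULL);            -- NULL  (returned as the unquoted word)
SELECT quote_ident('weird "name"');     -- "weird ""name"""
SELECT format('%L', $$user's log$$);    -- 'user''s log' (same as quote_nullable())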

SQL Server: Query fast, but slow from procedure

I found the problem. Here are the scripts of the slow and fast versions of the stored procedure:

dbo.ViewOpener__RenamedForCruachan__Slow.PRC

SET QUOTED_IDENTIFIER OFF 
GO
SET ANSI_NULLS OFF
GO

CREATE PROCEDURE dbo.ViewOpener_RenamedForCruachan_Slow
@SessionGUID uniqueidentifier
AS

SELECT *
FROM Report_Opener_RenamedForCruachan
WHERE SessionGUID = @SessionGUID
ORDER BY CurrencyTypeOrder, Rank
GO

SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO

dbo.ViewOpener__RenamedForCruachan__Fast.PRC

SET QUOTED_IDENTIFIER OFF 
GO
SET ANSI_NULLS ON
GO

CREATE PROCEDURE dbo.ViewOpener_RenamedForCruachan_Fast
@SessionGUID uniqueidentifier
AS

SELECT *
FROM Report_Opener_RenamedForCruachan
WHERE SessionGUID = @SessionGUID
ORDER BY CurrencyTypeOrder, Rank
GO

SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO

If you didn't spot the difference, I don't blame you. The difference is not in the stored procedure at all. The difference that turns a fast, 0.5-cost query into one that does an eager spool of 6 million rows is:

Slow: SET ANSI_NULLS OFF

Fast: SET ANSI_NULLS ON


This also makes sense, since the view does have a join clause that says:

(table.column IS NOT NULL)

So there are some NULLs involved.
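
As a quick illustration of what the setting changes (standard SQL Server behavior, not specific to this view), comparisons to NULL with = evaluate differently:

SET ANSI_NULLS OFF;
SELECT CASE WHEN NULL = NULL THEN 'equal' ELSE 'not equal' END;   -- 'equal'

SET ANSI_NULLS ON;
SELECT CASE WHEN NULL = NULL THEN 'equal' ELSE 'not equal' END;   -- 'not equal' (the comparison is UNKNOWN)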


The explanation is further confirmed by returning to Query Analyzer and running:

SET ANSI_NULLS OFF


DECLARE @SessionGUID uniqueidentifier
SET @SessionGUID = 'BCBA333C-B6A1-4155-9833-C495F22EA908'


SELECT *
FROM Report_Opener_RenamedForCruachan
WHERE SessionGUID = @SessionGUID
ORDER BY CurrencyTypeOrder, Rank

And the query is slow.


So the problem isn't because the query is being run from a stored procedure. The problem is that Enterprise Manager's connection default option is ANSI_NULLS off, rather than ANSI_NULLS on, which is QA's default.

Microsoft acknowledges this fact in KB296769 (BUG: Cannot use SQL Enterprise Manager to create stored procedures containing linked server objects). The workaround is to include the ANSI_NULLS option in the stored procedure dialog:

Set ANSI_NULLS ON
Go
Create Proc spXXXX as
....

