Get Execution Time of a PostgreSQL Query

Postgres Query execution time

Use \timing as explained by "How can I time SQL-queries using psql?".

See also the manual for psql.

If you want server-side execution times that don't include the time to transfer the result to the client, you can set log_min_duration_statement = 0 in the configuration, then SET client_min_messages = log so you get the log info in the console.
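For example, a per-session sketch of that approach (requires superuser; the query is just a stand-in):

SET log_min_duration_statement = 0;  -- log the duration of every completed statement
SET client_min_messages = log;       -- echo LOG-level messages back to this client

SELECT count(*) FROM pg_class;       -- a LOG line with the duration follows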

You can also use EXPLAIN ANALYZE to get detailed execution timings. This adds some timing overhead, unless you use EXPLAIN (ANALYZE TRUE, TIMING FALSE), which is only available in newer versions and disables per-node timing, giving only an aggregate execution time instead.
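For example (the query itself is just a stand-in):

EXPLAIN (ANALYZE TRUE, TIMING FALSE)
SELECT count(*) FROM pg_class;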

PgBadger, especially when combined with the auto_explain module, can provide useful aggregate statistics from log analysis.

Finally, there's pg_stat_statements, which can collect handy aggregate information on the running system.
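For instance, once the extension is installed (it must be listed in shared_preload_libraries), a query like the following shows the most expensive statements. Note the timing columns are named total_exec_time / mean_exec_time in PostgreSQL 13 and later, and total_time / mean_time before that:

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

SELECT query, calls, total_exec_time, mean_exec_time
FROM   pg_stat_statements
ORDER  BY total_exec_time DESC
LIMIT  10;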

Get execution time of PostgreSQL query

There are various ways to measure execution time, each with its pros and cons. But whatever you do, some degree of the observer effect applies; i.e., the measurement itself may distort the result.

1. EXPLAIN ANALYZE

You can prepend EXPLAIN ANALYZE, which reports the whole query plan with estimated costs plus actually measured times. The query is actually executed (with all side effects, if any!). This works for all DML commands and some others, but not for DDL. See:

  • EXPLAIN ANALYZE not working with ALTER TABLE

To check whether my adapted version of your query is, in fact, faster:

EXPLAIN ANALYZE
SELECT DISTINCT b.name
FROM   born_on b
WHERE  date '2012-01-30' - b.dob <= (
          SELECT max(d1.dod - b1.dob)
          FROM   born_on b1
          JOIN   died_on d1 USING (name)  -- name must be unique!
          )
AND    NOT EXISTS (
          SELECT FROM died_on d2
          WHERE  d2.name = b.name
          );

Execute it a couple of times to get more comparable times with a warm cache. Several options are available to adjust the level of detail.

If you are mainly interested in the total execution time, use:

EXPLAIN (ANALYZE, COSTS OFF, TIMING OFF)

Mostly, TIMING matters - the manual:

TIMING

Include actual startup time and time spent in each node in the output.
The overhead of repeatedly reading the system clock can slow down the
query significantly on some systems, so it may be useful to set this
parameter to FALSE when only actual row counts, and not exact times,
are needed. Run time of the entire statement is always measured, even
when node-level timing is turned off with this option. [...]

EXPLAIN ANALYZE measures on the server, using server time from the server OS, excluding network latency. But EXPLAIN adds some overhead to also output the query plan.

2. psql with \timing

Or use \timing in psql, like Peter demonstrates.

The manual:

\timing [ on | off ]

With a parameter, turns displaying of how long each SQL statement
takes on or off. Without a parameter, toggles the display between on
and off. The display is in milliseconds; intervals longer than 1
second are also shown in minutes:seconds format, with hours and days
fields added if needed.

Important difference: psql measures on the client, using local time from the local OS, so the time includes network latency. That difference can be negligible or huge, depending on the connection and the volume of returned data.
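An illustrative session (output and timing value are made up):

\timing on
Timing is on.
SELECT count(*) FROM pg_class;
 count
-------
   412
(1 row)

Time: 1.234 ms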

3. Enable log_duration

This has probably the least overhead per measurement and produces the least distorted timings. But it's a little heavy-handed: you have to be a superuser, you have to adjust the server configuration, you cannot target just the execution of a single query, and you have to read the server logs (unless you redirect to stdout).

The manual:

log_duration (boolean)

Causes the duration of every completed statement to be logged. The
default is off. Only superusers can change this setting.

For clients using extended query protocol, durations of the Parse,
Bind, and Execute steps are logged independently.

There are related settings like log_min_duration_statement.
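A per-session sketch (superuser required; the threshold is illustrative):

SET log_duration = on;                 -- log the duration of every statement
SET log_min_duration_statement = 250;  -- alternative: log only statements slower than 250 ms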

4. Precise manual measurement with clock_timestamp()

The manual:

clock_timestamp() returns the actual current time, and therefore its value changes even within a single SQL command.

filiprem provided a great way to get execution times for ad-hoc queries as exact as possible. On modern hardware, timing overhead should be insignificant, but depending on the host OS it can vary wildly. Find out with the server application pg_test_timing.

Otherwise, you can mostly filter out the overhead like this:

DO
$do$
DECLARE
   _timing1  timestamptz;
   _start_ts timestamptz;
   _end_ts   timestamptz;
   _overhead numeric;  -- in ms
BEGIN
   _timing1  := clock_timestamp();
   _start_ts := clock_timestamp();
   _end_ts   := clock_timestamp();
   -- take minimum duration as conservative estimate
   _overhead := 1000 * extract(epoch FROM LEAST(_start_ts - _timing1
                                              , _end_ts   - _start_ts));

   _start_ts := clock_timestamp();
   PERFORM 1;  -- your query here, replacing the outer SELECT with PERFORM
   _end_ts   := clock_timestamp();

   -- RAISE NOTICE 'Timing overhead in ms = %', _overhead;
   RAISE NOTICE 'Execution time in ms = %', 1000 * extract(epoch FROM _end_ts - _start_ts) - _overhead;
END
$do$;

Take the time repeatedly (doing the bare minimum with 3 timestamps here) and pick the minimum interval as a conservative estimate for timing overhead. Also, executing the function clock_timestamp() a couple of times should warm it up (in case that matters for your OS).

After measuring the execution time of the payload query, subtract that estimated overhead to get closer to the actual time.

Of course, it's more meaningful for cheap queries to loop 100000 times or execute it on a table with 100000 rows if you can, to make distracting noise insignificant.
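For example, one way to amortize the noise for a cheap expression is to evaluate it across many generated rows (a sketch; sum(g * 2) is a stand-in for your own expression):

EXPLAIN ANALYZE
SELECT sum(g * 2)  -- stand-in for your cheap expression
FROM   generate_series(1, 100000) g;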

Measure the time it takes to execute a PostgreSQL query

Coming from an MSSQL background myself and now more often working in Postgres I feel your pain =)

The "trouble" with Postgres is that it supports only 'basic' SQL commands (SELECT, INSERT, UPDATE, CREATE, ALTER, etc...) but the moment you want to add logic (IF THEN, WHILE, variables, etc.) you need to switch to pl/pgsql which you can only use inside functions (AFAIK). From a TSQL POV there are quite some limitations and in fact, some things suddenly don't work anymore (or need to be done differently.. e.g. SELECT * INTO TEMPORARY TABLE tempTable FROM someTable will not work but CREATE TABLE tempTable AS SELECT * FROM someTable will)

Something I learned the hard way too is that CURRENT_TIMESTAMP (or now()) returns the same value throughout a transaction. And since everything inside a function runs inside a transaction, this means you have to use clock_timestamp().
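A quick way to see this for yourself (the one-second sleep is just for illustration):

BEGIN;
SELECT now(), clock_timestamp();
SELECT pg_sleep(1);
SELECT now(), clock_timestamp();  -- now() is unchanged; clock_timestamp() has advanced
COMMIT;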

Anyway, to answer your question, I think this should get you going:

CREATE OR REPLACE FUNCTION fn_test(nbrOfIterations int)
  RETURNS TABLE (iterations int, totalTime interval, secondsPerIteration numeric)  -- numeric, not int, to keep sub-second precision
AS $$
DECLARE
   i         int;
   startTime timestamptz;
   endTime   timestamptz;
BEGIN
   i := 1;
   startTime := clock_timestamp();

   WHILE i <= nbrOfIterations LOOP

      -- your query here
      -- (note: use PERFORM rather than SELECT; a SELECT that returns data raises an error here)

      -- example:
      PERFORM pg_sleep(1);

      i := i + 1;

   END LOOP;

   endTime := clock_timestamp();

   iterations          := nbrOfIterations;
   totalTime           := endTime - startTime;
   secondsPerIteration := EXTRACT(EPOCH FROM endTime - startTime) / iterations;

   RETURN NEXT;
END;
$$ LANGUAGE plpgsql;


SELECT * FROM fn_test(5);

How to get execution time in Postgres

You can compare clock_timestamp() before and after the query:

do $$
declare
  t timestamptz := clock_timestamp();
begin
  perform pg_sleep(random());
  raise notice 'time spent=%', clock_timestamp() - t;
end
$$ language plpgsql;

Sample result:

NOTICE: time spent=00:00:00.59173

Why EXPLAIN ANALYZE and the actual query execution time are different

That is because you are using pgAdmin or a similar client tool that takes a long time to render 30038 rows.

Can I log query execution time in PostgreSQL 8.4?

If you set

log_min_duration_statement = 0
log_statement = all

in your postgresql.conf, then you will see all statements being logged into the Postgres logfile.

If you enable

log_duration

that will also print the time taken for each statement. This is off by default.

Using the log_statement parameter you can control which types of statements you want to log (DDL, DML, ...).
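The accepted values are none, ddl, mod, and all; for example:

SET log_statement = 'mod';  -- log DDL plus data-modifying statements, but not plain SELECTs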

This will produce an output like this in the logfile:


2012-10-01 13:00:43 CEST postgres LOG: statement: select count(*) from pg_class;
2012-10-01 13:00:43 CEST postgres LOG: duration: 47.000 ms

More details in the manual:

  • http://www.postgresql.org/docs/8.4/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHEN
  • http://www.postgresql.org/docs/8.4/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT

If you want a daily list, you probably want to configure the logfile to rotate on a daily basis. Again this is described in the manual.
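A postgresql.conf sketch for daily rotation (values are illustrative):

logging_collector = on
log_filename = 'postgresql-%a.log'   # one file per weekday, recycled weekly
log_truncate_on_rotation = on
log_rotation_age = 1d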


