Postgres Next/Previous Row SQL Query

Q1: Use FIRST_VALUE/LAST_VALUE as the fallback when LEAD/LAG runs off the end (see the COALESCE calls below)

Q2: Add PARTITION BY i.userid (as Roman Pekar already suggested)

SEE FIDDLE HERE

SELECT DISTINCT
    i.id AS id,
    i.userid AS userid,
    i.itemname AS itemname,
    COALESCE(LEAD(i.id) OVER (ORDER BY i.created DESC),
             FIRST_VALUE(i.id) OVER (ORDER BY i.created DESC
                                     ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)) AS nextitemid,
    COALESCE(LAG(i.id) OVER (ORDER BY i.created DESC),
             LAST_VALUE(i.id) OVER (ORDER BY i.created DESC
                                    ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)) AS previtemid,
    COALESCE(LEAD(i.id) OVER (PARTITION BY i.userid ORDER BY i.created DESC),
             FIRST_VALUE(i.id) OVER (PARTITION BY i.userid ORDER BY i.created DESC
                                     ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)) AS nextuseritemid,
    COALESCE(LAG(i.id) OVER (PARTITION BY i.userid ORDER BY i.created DESC),
             LAST_VALUE(i.id) OVER (PARTITION BY i.userid ORDER BY i.created DESC
                                    ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)) AS prevuseritemid,
    i.created AS created
FROM items i
LEFT JOIN users u ON i.userid = u.id
ORDER BY i.created DESC;

Is it possible to look at the output of the previous row of a PostgreSQL query?

You have to formulate a recursive subquery. I posted a simplified version of this question over at the DBA SE and got the answer there. The answer to that question can be found here and can be expanded to cover this more complicated question, though I would wager that no one will ever have the interest to do that.
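To illustrate the idea, here is a minimal sketch of such a recursive query; the table t and the columns id and delta are hypothetical, made up for the example, and this is not the linked DBA SE answer. The point is that each step of the recursion can read the value computed for the previous row, which a plain window function cannot do once the result feeds back into itself:

WITH RECURSIVE walk AS (
    -- anchor: start from the first row (assumes ascending, gapless ids)
    SELECT id, delta, delta AS running
    FROM t
    WHERE id = (SELECT min(id) FROM t)

    UNION ALL

    -- recursive step: the previous row's computed "running" value is available here
    SELECT t.id,
           t.delta,
           CASE WHEN walk.running + t.delta < 0 THEN 0   -- example of logic that needs the previous *result*
                ELSE walk.running + t.delta
           END
    FROM walk
    JOIN t ON t.id = walk.id + 1                         -- adapt to your real ordering
)
SELECT * FROM walk ORDER BY id;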

Postgres Query Based on Previous and Next Rows

I am not sure if I understood your problem correctly, but getting values from other rows can be done with window functions (https://www.postgresql.org/docs/current/static/tutorial-window.html):

demo: db<>fiddle

SELECT
    id,
    lag("to") OVER (ORDER BY id) AS prev_to,
    "from",
    "to",
    lead("from") OVER (ORDER BY id) AS next_from
FROM bustimes;

The lag function brings the value from the previous row into the current one; the lead function does the same with the next row. So you are able to calculate, for example, the difference between the last arrival and the current departure.

Result:

 id | prev_to | from  | to    | next_from
----+---------+-------+-------+----------
 18 |         | 33000 | 33300 | 33300
 19 | 33300   | 33300 | 33600 | 33900
 20 | 33600   | 33900 | 34200 | 34200
 21 | 34200   | 34200 | 34800 | 36000
 22 | 34800   | 36000 | 36300 |

Please notice that "from" and "to" are reserved words in PostgreSQL; it would be better to choose other names.
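As mentioned above, lag/lead make it easy to derive something like the wait between the previous arrival and the current departure. A minimal sketch against the bustimes table from this example (the column names are taken from the query above; that the values are seconds is an assumption):

-- wait between the previous arrival ("to") and the current departure ("from");
-- NULL for the first row, which has no previous arrival
SELECT
    id,
    "from" - lag("to") OVER (ORDER BY id) AS wait_since_prev_arrival
FROM bustimes;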

How to compare the current row with next and previous row in PostgreSQL?

This is my solution using window functions. I used the lag and lead functions. Both return a value from a column of a row at a given offset from the current row: lag looks at preceding rows, lead at following rows.

SELECT tokcat.text
FROM (
    SELECT text,
           category,
           chartype,
           lag(category, 1)  OVER w AS previousCategory,
           lead(category, 1) OVER w AS nextCategory
    FROM token t, textBlockHasToken tb
    WHERE tb.tokenId = t.id
    WINDOW w AS (
        PARTITION BY textBlockId, sentence
        ORDER BY textBlockId, sentence, position
    )
) tokcat
WHERE 'NAME' = ANY(previousCategory)
  AND 'NAME' = ANY(nextCategory)
  AND 'NAME' <> ANY(category)

Simplified version:

SELECT text
FROM (
    SELECT text,
           category,
           lag(category)  OVER w AS previous_cat,
           lead(category) OVER w AS next_cat
    FROM token t
    JOIN textblockhastoken tb ON tb.tokenid = t.id
    WINDOW w AS (PARTITION BY textblockid, sentence ORDER BY position)
) tokcat
WHERE category <> 'NAME'
  AND previous_cat = 'NAME'
  AND next_cat = 'NAME';

Major points

  • = ANY() is not needed; the window function returns a single value
  • some redundant fields in the subquery were removed
  • no need to ORDER BY columns that you PARTITION BY - the ORDER BY applies within partitions
  • Don't use mixed-case identifiers without quoting; it only leads to confusion. (Better yet: don't use mixed-case identifiers in PostgreSQL at all - see the short sketch after this list.)
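A short sketch of the mixed-case pitfall (the table demo is hypothetical): PostgreSQL folds unquoted identifiers to lower case, so unquoted mixed-case names silently refer to the lower-case column, while a quoted mixed-case name does not match it at all.

CREATE TABLE demo (textBlockId int);   -- unquoted, so the column is actually stored as textblockid
SELECT textBlockId FROM demo;          -- works: folded to textblockid
SELECT textblockid FROM demo;          -- works: the same column
-- SELECT "textBlockId" FROM demo;     -- would fail: no case-sensitive column of that name exists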

In PostgreSQL, how to select the previous row value to compute the current row value?

This will do it. It's not especially efficient, but for a one-off it should do.

select x.id, sum(coalesce(y.value1,y.id,0))
from sample x
left outer join sample y on x.id >= y.id
group by x.id
order by x.id

Basically, it goes through every record in the table and sums up the values (falling back to the id where value1 is null) of all the records up to and including that record. The COALESCE is the special logic that handles the initial value you've got on id=1.

SQL Fiddle at http://sqlfiddle.com/#!9/b8f25/1

Another option is to use a window function.

select x.id, sum(coalesce(x.value1,x.id,0)) over (order by x.id)
from sample x

SQL Fiddle at http://sqlfiddle.com/#!17/b8f25/2
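To see what both queries above do with concrete numbers, here is a hedged sketch; the sample table and its columns come from the queries, but the actual values are assumptions made up for illustration (value1 is set only on id=1, matching the "initial value" mentioned above):

-- hypothetical data for the sample table
CREATE TABLE sample (id int primary key, value1 int);
INSERT INTO sample VALUES (1, 10), (2, NULL), (3, NULL), (4, NULL);

-- self-join version: 10, 10+2=12, 12+3=15, 15+4=19
-- (value1 is used where present, otherwise the row's id)
select x.id, sum(coalesce(y.value1, y.id, 0)) as running_total
from sample x
left outer join sample y on x.id >= y.id
group by x.id
order by x.id;

-- window-function version: same running totals, no join needed
select x.id, sum(coalesce(x.value1, x.id, 0)) over (order by x.id) as running_total
from sample x;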

Dynamically add previous row value to current row value in SQL(Postgres)

You just need the analytic function sum() used as a window function, as follows:

select col1, col2,
       sum(col1 - col2) over (order by id) as col3
from t1;
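For illustration, a minimal sketch with made-up numbers (the table name t1 and the column names come from the query above; the values are assumptions), showing how the window sum accumulates the per-row difference:

-- hypothetical sample data
CREATE TABLE t1 (id int primary key, col1 int, col2 int);
INSERT INTO t1 VALUES (1, 10, 4), (2, 8, 3), (3, 6, 6);

-- running sum of (col1 - col2), ordered by id:
-- id 1 -> 6, id 2 -> 6 + 5 = 11, id 3 -> 11 + 0 = 11
select col1, col2,
       sum(col1 - col2) over (order by id) as col3
from t1;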

Update data based on the previous row in postgresql

You can use a recursive CTE for this type of query:

WITH RECURSIVE rec(name, col, row, min_item_col, x, y, rn) AS (
    SELECT *
    FROM xyz
    WHERE rn = 1

    UNION ALL

    (
        SELECT next.name,
               next.cols,
               next.rows,
               next.minitemcols,
               CASE
                   WHEN cur.col + cur.x = 12 THEN 0
                   WHEN cur.col + cur.x = next.minitemcols THEN 0
                   /* this 3rd condition seems pointless given the last one */
                   WHEN cur.col + cur.x < 12 AND cur.x + cur.col >= next.minitemcols THEN cur.col + cur.x
                   WHEN cur.col + cur.x < 12 THEN cur.col + cur.x
               END AS x,
               CASE
                   WHEN cur.col + cur.x = 12 THEN cur.row + cur.y
                   WHEN cur.col + cur.x < 12 THEN cur.y
               END AS y,
               next.rn
        FROM rec AS cur
        JOIN xyz AS next ON next.rn > cur.rn
        ORDER BY rn
        LIMIT 1
    )
)
UPDATE xyz
SET x = rec.x, y = rec.y
FROM rec
WHERE xyz.rn = rec.rn;

This updates the table to:

+----+----+----+-----------+----+----+--+
|name|cols|rows|minitemcols|x   |y   |rn|
+----+----+----+-----------+----+----+--+
|A   |12  |19  |8          |0   |0   |1 |
|B   |12  |11  |12         |0   |19  |2 |
|C   |5   |6   |4          |0   |30  |3 |
|D   |3   |6   |3          |5   |30  |4 |
|E   |4   |6   |3          |8   |30  |5 |
|F   |4   |6   |4          |0   |36  |6 |
|G   |3   |6   |3          |4   |36  |7 |
|H   |5   |6   |5          |7   |36  |8 |
|I   |7   |6   |7          |0   |42  |9 |
|J   |12  |9   |12         |7   |42  |10|
|K   |12  |24  |8          |NULL|NULL|11|
|L   |5   |16  |3          |NULL|NULL|12|
|M   |7   |16  |7          |NULL|NULL|13|
+----+----+----+-----------+----+----+--+

Note that the results are slightly different from your example from row "K" onward. I think "row J".x should be 7, not 0 (from the last else-if condition), which changes the rest of the rows too. Still, it should be fairly easy to adapt the code to get the results you want.

In SQL how to select previous rows based on the current row values?

As is well known, every table in Postgres has a primary key. Or at least it should. It would be great if you had a primary key defining the expected order of rows.

Example data:

create table msg (
    id int primary key,
    from_person text,
    to_person text,
    ts timestamp without time zone
);

insert into msg values
    (1, 'nancy', 'charlie', '2016-02-01 01:00:00'),
    (2, 'bob', 'charlie', '2016-02-01 01:00:00'),
    (3, 'charlie', 'nancy', '2016-02-01 01:00:01'),
    (4, 'mary', 'charlie', '2016-02-01 01:02:00');

The query:

select m1.id, count(m2)
from msg m1
left join msg m2
    on m2.id < m1.id
    and m2.to_person = m1.to_person
    and m2.ts >= m1.ts - '3m'::interval
group by 1
order by 1;

 id | count
----+-------
  1 |     0
  2 |     1
  3 |     0
  4 |     2
(4 rows)

In the absence of a primary key you can use the function row_number(), for example:

with msg_with_rn as (
    select *, row_number() over (order by ts, from_person desc) as rn
    from msg
)
select m1.id, count(m2)
from msg_with_rn m1
left join msg_with_rn m2
    on m2.rn < m1.rn
    and m2.to_person = m1.to_person
    and m2.ts >= m1.ts - '3m'::interval
group by 1
order by 1;

Note that I have used row_number() over (order by ts, from_person desc) to get the sequence of rows as you have presented it in the question. Of course, you should decide for yourself how to resolve ambiguities arising from the same values of the column ts (as in the first two rows).

Get previous and next row from rows selected with (WHERE) conditions

You didn't specify your DBMS, so the following is ANSI SQL:

select prev_word, word, next_word
from (
    select id,
           lag(word)  over (order by id) as prev_word,
           word,
           lead(word) over (order by id) as next_word
    from words
) as t
where word = 'name';

SQLFiddle: http://sqlfiddle.com/#!12/7639e/1


