Joining Two Separate Queries in a PostgreSQL Query (Possible or Not Possible)

How can I combine two SELECT queries on the same table horizontally in PostgreSQL?

You can use the window function ROW_NUMBER() in a subquery to rank each product's records by increasing and decreasing sales, and then do conditional aggregation, so that each product ends up on a single row with its largest and smallest sale side by side:

SELECT
    prod AS product,
    MAX(CASE WHEN rn2 = 1 THEN quant END) AS max_quant,
    MAX(CASE WHEN rn2 = 1 THEN cust END) AS max_cust,
    MAX(CASE WHEN rn2 = 1 THEN TO_DATE(year || '-' || month || '-' || day, 'YYYY-MM-DD') END) AS max_date,
    MAX(CASE WHEN rn2 = 1 THEN state END) AS max_state,
    MAX(CASE WHEN rn1 = 1 THEN quant END) AS min_quant,
    MAX(CASE WHEN rn1 = 1 THEN cust END) AS min_cust,
    MAX(CASE WHEN rn1 = 1 THEN TO_DATE(year || '-' || month || '-' || day, 'YYYY-MM-DD') END) AS min_date,
    MAX(CASE WHEN rn1 = 1 THEN state END) AS min_state,
    avg_quant
FROM (
    SELECT
        s.*,
        ROW_NUMBER() OVER (PARTITION BY prod ORDER BY quant) AS rn1,
        ROW_NUMBER() OVER (PARTITION BY prod ORDER BY quant DESC) AS rn2,
        AVG(quant) OVER (PARTITION BY prod) AS avg_quant
    FROM sales s
) x
WHERE rn1 = 1 OR rn2 = 1
GROUP BY prod, avg_quant;

Joining Results from Two Separate Databases

According to http://wiki.postgresql.org/wiki/FAQ:

There is no way to query a database other than the current one.
Because PostgreSQL loads database-specific system catalogs, it is
uncertain how a cross-database query should even behave.
contrib/dblink allows cross-database queries using function calls. Of
course, a client can also make simultaneous connections to different
databases and merge the results on the client side.

EDIT: 3 years later (March 2014), this FAQ entry has been revised and is more helpful:

How do I perform queries using multiple databases?

There is no way to directly query a database other than the current
one. Because PostgreSQL loads database-specific system catalogs, it is
uncertain how a cross-database query should even behave.

The SQL/MED support in PostgreSQL allows a "foreign data wrapper" to
be created, linking tables in a remote database to the local database.
The remote database might be another database on the same PostgreSQL
instance, or a database half way around the world, it doesn't matter.
postgres_fdw is built-in to PostgreSQL 9.3 and includes read/write
support; a read-only version for 9.2 can be compiled and installed as
a contrib module.

contrib/dblink allows cross-database queries using function calls and
is available for much older PostgreSQL versions. Unlike postgres_fdw
it can't "push down" conditions to the remote server, so it'll often
land up fetching a lot more data than you need.

Of course, a client can also make simultaneous connections to
different databases and merge the results on the client side.

Possible to perform cross-database queries with PostgreSQL?

Note: As the original asker implied, if you are setting up two databases on the same machine you probably want to make two schemas instead - in that case you don't need anything special to query across them.
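
For example, with two schemas in the same database you only need to schema-qualify the table names (the schema, table, and column names below are made up for illustration):

SELECT o.id, c.name
FROM app_a.orders o
JOIN app_b.customers c ON c.id = o.customer_id;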

postgres_fdw

Use postgres_fdw (foreign data wrapper) to connect to tables in any Postgres database - local or remote.

Note that there are foreign data wrappers for other popular data sources. At this time, only postgres_fdw and file_fdw are part of the official Postgres distribution.
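
A minimal postgres_fdw setup sketch, assuming a second database reachable on localhost; the server, role, table, and column names are placeholders:

CREATE EXTENSION postgres_fdw;

-- Describe how to reach the other database (it may just as well be remote).
CREATE SERVER other_db
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'other_db', port '5432');

-- Map the local role to credentials valid on the other database.
CREATE USER MAPPING FOR CURRENT_USER
    SERVER other_db
    OPTIONS (user 'app_user', password 'secret');

-- Local stand-in for the remote table; the column definitions must match.
CREATE FOREIGN TABLE remote_sales (
    prod  text,
    quant integer
)
    SERVER other_db
    OPTIONS (schema_name 'public', table_name 'sales');

-- The foreign table can now be queried and joined like any local table.
SELECT prod, sum(quant) FROM remote_sales GROUP BY prod;

On 9.5 and later, IMPORT FOREIGN SCHEMA can create the foreign table definitions for you instead of writing each CREATE FOREIGN TABLE by hand.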

For Postgres versions before 9.3

Versions this old are no longer supported, but if you need to do this in a pre-2013 Postgres installation, there is a contrib module called dblink.

I've never used it, but it is maintained and distributed with the rest of PostgreSQL. If you're using the version of PostgreSQL that came with your Linux distro, you might need to install a package called postgresql-contrib.
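
A rough dblink sketch; the connection string, table, and column names are placeholders, and the column list after AS must match what the remote query returns:

CREATE EXTENSION dblink;

-- Run a query in the other database and expose its result as a local row set.
SELECT *
FROM dblink('dbname=other_db host=localhost',
            'SELECT prod, quant FROM sales')
     AS t(prod text, quant integer);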

Combine two SELECT queries in PostgreSQL

Use a CTE to reuse the result from a subquery in more than one SELECT.

WITH cte AS (
    SELECT carto_id_key
    FROM table1
    WHERE tag_id = 16
)
SELECT carto_id_key
FROM cte

UNION ALL

SELECT t2.some_other_id_key
FROM cte
JOIN table2 t2 ON t2.carto_id_key = cte.carto_id_key;

You most probably want UNION ALL instead of UNION: it doesn't try to remove duplicates, which also makes it faster.

Joining 2 SELECT queries from the same table

You can use a FULL JOIN, coalescing the join keys because rows present in only one of the two results have NULLs on the other side:

select
    coalesce(a.start_date, b.start_date) as start_date,
    coalesce(a.end_date, b.end_date) as end_date,
    a.new,
    b.deleted
from (
    -- query #1 here; exclude the ORDER BY clause
) a
full join (
    -- query #2 here; exclude the ORDER BY clause
) b on b.start_date = a.start_date and b.end_date = a.end_date
order by coalesce(a.end_date, b.end_date) ASC

How to transform two separate queries into one long subselect

Using a simple CTE instead of a temporary table should work (not tested):

with stat1(products, numberOfAtributes) as (
    select k.product_id as pp, count(ap.atribut_id) as numberOfAtributes
    from productAtributes as ap
    JOIN cart k ON k.product_id = ap.product_id
    GROUP BY k.product_id
)
select count(numberOfAtributes), numberOfAtributes
from stat1
group by numberOfAtributes;

Multi-tenancy JOIN or multiple SELECT queries in Postgres

It's good practice to stick to the ORM abstraction if possible, while minimising how much and how often data is transferred to and from the database. Sequelize is able to construct an equivalent to that query for you, with the necessary joins and filters on the IDs. Something along the lines of:

Books.findAll({
    where: { book_id: '0eokdpz0l' },
    include: [{
        model: Genre,
        where: { tenant_id: jwtToken.tenant_id }
    }]
}).then(books => {
    /* ... */
});

Running multiple queries in sequence not only adds latency through extra round trips to and from the database (and possibly connection setup, if you're not pooling or holding connections open), it also moves more bytes of data around than necessary. A tenant_id mismatch detected on the database side sends back a short message with an empty result, whereas checking it on the client side means downloading data even when you'll have to discard it.
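
Conceptually, the single Sequelize call above boils down to one joined query, roughly like the following (the table and column names are guesses based on the model names, not the actual generated SQL):

SELECT b.*
FROM books b
JOIN genres g ON g.id = b.genre_id
WHERE b.book_id = '0eokdpz0l'
  AND g.tenant_id = :tenant_id;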


