Psql: Fatal: Too Many Connections for Role

psql: FATAL: too many connections for role

From inside any DB of the cluster:
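
For example, something along these lines counts the connections currently held by the role in question ('user_name' is a placeholder for the affected role):

SELECT count(*)
FROM pg_stat_activity
WHERE usename = 'user_name';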

Catch 22: you need to be connected to a database first. Maybe you can connect as another user? (By default, some connections are reserved for superusers with the superuser_reserved_connections setting.)
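
Once you do manage to connect, something like this shows the relevant limits ('user_name' is again a placeholder; a rolconnlimit of -1 means no per-role limit is set):

SHOW max_connections;                  -- cluster-wide limit
SHOW superuser_reserved_connections;   -- slots reserved for superusers

SELECT rolname, rolconnlimit           -- per-role limit, set via ALTER ROLE ... CONNECTION LIMIT
FROM pg_roles
WHERE rolname = 'user_name';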

To get detailed information for each connection by this user:

SELECT *
FROM pg_stat_activity
WHERE usename = 'user_name';

As the same user or as superuser you can cancel all (other) connections of a user:

SELECT pg_cancel_backend(pid)     -- (SIGINT)
-- pg_terminate_backend(pid) -- the less patient alternative (SIGTERM)
FROM pg_stat_activity
WHERE usename = 'user_name'
AND pid <> pg_backend_pid();

Better be sure it's ok to do so. You don't want to terminate important queries (or connections) that way.
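
If you want to be on the safer side, one possible refinement (just a sketch, same 'user_name' placeholder) is to terminate only sessions that are sitting idle:

SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE usename = 'user_name'
AND state = 'idle'                -- leave sessions with running queries alone
AND pid <> pg_backend_pid();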

pg_cancel_backend() and pg_terminate_backend() in the manual.

From a Linux shell

Did you start those other connections yourself? Maybe a hanging script of yours? You should be able to kill those (if you are sure it's ok to do so).

You can investigate with ps to see which processes might be at fault:

ps aux
ps aux | grep psql

If you identify a process to kill (better be sure, you do not want to kill the server):

kill  123457689 # pid of process here.

Or with SIGKILL instead of SIGTERM:

kill -9 123457689

org.postgresql.util.PSQLException: FATAL: sorry, too many clients already

We don't know what that server.properties file is, nor do we know what SimocoPoolSize means (do you?).

Let's guess you are using some custom pool of database connections. Then, the problem is probably that your pool is configured to open 100 or 120 connections, but your PostgreSQL server is configured to accept MaxConnections=90. These settings conflict. Try increasing MaxConnections to 120.

But you should first understand your DB layer infrastructure: know which pool you are using, whether you really need so many open connections in the pool, and, especially, whether you are gracefully returning the opened connections to the pool.
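
A quick way to see who is actually holding connections (and to spot a pool that never returns them) is to group pg_stat_activity; a sketch:

SELECT usename, application_name, state, count(*)
FROM pg_stat_activity
GROUP BY usename, application_name, state
ORDER BY count(*) DESC;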

pgAdmin - Too many connections for the role 'my username'

To increase the connection limit permanently, you have to configure the max_connections parameter.

You can change this in the postgresql.conf file if Postgres is hosted locally.
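
If the server is recent enough, you can also set it from SQL instead of editing the file by hand; a sketch (the value 120 is only an example, and max_connections only takes effect after a server restart):

ALTER SYSTEM SET max_connections = 120;   -- written to postgresql.auto.conf
-- then restart the server, e.g. pg_ctl restart or sudo systemctl restart postgresql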

Reference for max_connections

Reference for setting parameter

Odoo: psycopg2.OperationalError: FATAL: too many connections for role p_my_oerp_master_11234451

After more investigation, it seems that crons ("Server Actions" in Odoo) with a recurrence set to None, but in fact running every couple of seconds, were the original cause of this "Too many connections..." error and of the session disconnections every couple of seconds. On the server side, these crons correspond to processes that use the odoo-rpc library (external API).

This hypothesis is confirmed by comparing the running processes during the issue (which lasted 5 days) and after the cron processes were killed (somehow):

DURING ISSUE: running processes (ps -ax)

my_oerp_master_11234451 [production/v13.0]:~$ ps -ax
  PID TTY   STAT TIME COMMAND
    1 ?     Ss   2:29 ODOO.SH: [my_oerp_master_11234451 / production / 13.0]
  106 ?     SNsl 0:56 python3 /usr/local/bin/odoo-rpc --database=my_oerp_master_11234451
 4807 ?     Ssl  8:44 python3 /home/odoo/src/odoo/odoo-bin --database=my_oerp_master_11234451 --logfile=/home/odoo/logs/odoo.log
 5899 ?     RNs  0:01 /opt/odoo.sh/jupyterlab/bin/python3 /opt/odoo.sh/jupyterlab/bin/odoo-webshell --ip=0.0.0.0 --port=8889 -
 5903 pts/0 SNs  0:00 /bin/bash -l
 8730 ?     SNsl 0:53 python3 /usr/local/bin/odoo-rpc --db_maxconn=2 --database=my_oerp_master_11234451
10738 ?     SNsl 0:56 python3 /usr/local/bin/odoo-rpc --db_maxconn=2 --database=my_oerp_master_11234451
11263 ?     SNsl 0:56 python3 /usr/local/bin/odoo-rpc --db_maxconn=2 --database=my_oerp_master_11234451
11334 pts/3 SNs+ 0:00 /bin/bash
12134 ?     SNsl 0:56 python3 /usr/local/bin/odoo-rpc --database=my_oerp_master_11234451
12864 ?     SNsl 0:56 python3 /usr/local/bin/odoo-rpc --db_maxconn=2 --database=my_oerp_master_11234451
13384 ?     SNsl 0:49 python3 /usr/local/bin/odoo-rpc --db_maxconn=2 --database=my_oerp_master_11234451
13584 pts/0 RN+  0:00 ps -ax
16281 ?     SNsl 0:48 python3 /usr/local/bin/odoo-rpc --db_maxconn=2 --database=my_oerp_master_11234451
17130 ?     SNsl 0:51 python3 /usr/local/bin/odoo-rpc --db_maxconn=2 --database=my_oerp_master_11234451
17882 ?     SNsl 0:51 python3 /usr/local/bin/odoo-rpc --db_maxconn=2 --database=my_oerp_master_11234451
20516 ?     SNsl 0:09 /opt/odoo.sh/jupyterlab/bin/python3 /opt/odoo.sh/jupyterlab/bin/jupyter-lab --debug --ip=0.0.0.0 --port=8888
29069 pts/1 SNs  0:00 /bin/bash
29079 pts/1 SN+  0:02 less +F /home/odoo/logs/odoo.log
29094 pts/2 SNs+ 0:00 /bin/bash
30181 ?     SNsl 0:56 python3 /usr/local/bin/odoo-rpc --db_maxconn=2 --database=my_oerp_master_11234451
my_oerp_master_11234451 [production/v13.0]:~$

AFTER ISSUE: running processes (ps -ax)

my_oerp_master_11234451 [production/v13.0]:~$ ps -ax
  PID TTY   STAT TIME COMMAND
    1 ?     Ss   2:29 ODOO.SH: [my_oerp_master_11234451 / production / 13.0]
 8855 ?     Ssl  8:44 python3 /home/odoo/src/odoo/odoo-bin --database=my_oerp_master_11234451 --logfile=/home/odoo/logs/odoo.log
10597 pts/0 SNs  0:00 /bin/bash -l
10601 ?     SNsl 0:09 /opt/odoo.sh/jupyterlab/bin/python3 /opt/odoo.sh/jupyterlab/bin/jupyter-lab --debug --ip=0.0.0.0 --port=8888
11023 pts/1 RN+  0:00 ps ax
23954 pts/2 SNs+ 0:00 /bin/bash

This hypothesis is also confirmed by the Odoo documentation concerning the psycopg2 error related to "the maximum connections parameter": https://odoo-development.readthedocs.io/en/latest/admin/db_maxconn.html

db_maxconn – specifies the maximum number of physical connections to PostgreSQL per Odoo process, but for all databases

To resolve it you need to configure the following parameters:

In Odoo

  • db_maxconn
  • workers
  • max_cron_threads

In PostgreSQL

  • max_connections

Those parameters must satisfy the following condition:

(1 + workers + max_cron_threads) * db_maxconn < max_connections

For example, if you have the following values:

workers = 1 (minimal value to make longpolling work)
max_cron_threads = 2 (default)
db_maxconn = 64 (default)
max_connections = 100 (default)

then (1 + 1 + 2) * 64 = 256 > 100, i.e. the condition is not satisfied and such a deployment may face the error described above.
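
To check how close you actually are to the limit, a query along these lines shows how many connections each database currently holds, which you can compare against max_connections:

SELECT datname, count(*) AS connections
FROM pg_stat_activity
GROUP BY datname
ORDER BY connections DESC;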

Ok, but which values are good for specific server and load conditions?

PostgreSQL’s max_connections should be set higher than db_maxconn * number_of_processes. You may need to tweak the kernel sysctl if you need max_connections higher than 1-2k.

For multi-processing mode, each HTTP worker handles a single request at a time, so theoretically db_maxconn=2 could work (some requests need 2 cursors, hence 2 db connections). However for multi-tenant this is not optimal because each request will need to reopen a new connection to a different db - setting it a bit higher is better. With lots of workers, 32 is a good trade-off, as 64 could make you reach kernel limits. Also keep in mind that the limit applies to the longpolling worker too, and you don’t want to delay chat messages too much because of a full connection pool, so don’t set it too low no matter what. Keeping the value in the 32-64 range usually seems a good choice.

For multi-thread mode, since there is only 1 process, this is the size of the global connection pool. To prevent errors, it should be set between 1x and 2x the expected number of concurrent requests. It can be estimated based on the number of databases and the expected activity. Having a single process handle more than 20 requests at a time on a single core (remember that multi-threading depends on the GIL) is unlikely to give good performance, so again, a setting in the 32-64 range will most likely work for a normal load.

Why does PostgreSQL say FATAL: sorry, too many clients already when I am nowhere close to the maximum connections?

This is caused by how Spark reads/writes data using JDBC. Spark tries to open several concurrent connections to the database in order to read/write multiple partitions of data in parallel.

I couldn't find it in the docs, but I think by default the number of connections is equal to the number of partitions in the dataframe you want to write into the DB table. This explains the intermittency you've noticed.

However, you can control this number by setting the numPartitions option:

The maximum number of partitions that can be used for parallelism in table reading and writing. This also determines the maximum number of concurrent JDBC connections. If the number of partitions to write exceeds this limit, we decrease it to this limit by calling coalesce(numPartitions) before writing.

Example:

(spark.read.format("jdbc")
    .option("numPartitions", "20")
    # ... remaining JDBC options such as "url" and "dbtable", then .load()
)
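
If you want to confirm this from the database side while the job runs, you can count the connections opened by the PostgreSQL JDBC driver (it reports itself under the application_name 'PostgreSQL JDBC Driver' by default; adjust the filter if your job sets its own name):

SELECT application_name, state, count(*)
FROM pg_stat_activity
WHERE application_name = 'PostgreSQL JDBC Driver'
GROUP BY application_name, state;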

FATAL: too many connections for role: Heroku/django, only while using ASGI

I found a solution that seems to be working fine.
The solution is PgBouncer. Install PgBouncer on Heroku by executing the following commands:

heroku buildpacks:add https://github.com/heroku/heroku-buildpack-pgbouncer

heroku buildpacks:add heroku/python

After that, if you deploy to Heroku, you might see the following error in the Heroku logs:

heroku server does not support SSL, but SSL was required

A very simple working solution to that is given by LtKvasir here.

I still have db.connections.close_all() written everywhere in my code but I think that's really not necessary now.

Prisma: Error querying the database: db error: FATAL: too many connections

You could try instantiating PrismaClient as shown here so that a new connection isn't created on every reload.


