org.postgresql.util.PSQLException: FATAL: sorry, too many clients already

org.postgresql.util.PSQLException: FATAL: sorry, too many clients already

We don't know what that server.properties file is, and we don't know what SimocoPoolSize means either (do you?)

Let's guess you are using some custom pool of database connections. Then, I guess the problem is that your pool is configured to open 100 or 120 connections, but your PostgreSQL server is configured to accept MaxConnections=90. These settings conflict. Try increasing MaxConnections to 120.

But first you should understand your DB layer infrastructure: know which pool you are using, whether you really need so many open connections in the pool, and, especially, whether you are gracefully returning opened connections to the pool.
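
As a quick sanity check, compare the server's limit with what is actually open. A minimal sketch, assuming you can reach the server with psql or any SQL client:

-- Server-side limit on concurrent connections
SHOW max_connections;

-- Connections currently open, grouped by state (active, idle, ...)
SELECT state, count(*) FROM pg_stat_activity GROUP BY state;

If the open count keeps creeping toward the limit while the application is idle, connections are probably not being returned to the pool.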

Why does PostgreSQL say FATAL: sorry, too many clients already when I am nowhere close to the maximum connections?

This is caused by how Spark reads/writes data using JDBC. Spark tries to open several concurrent connections to the database in order to read/write multiple partitions of data in parallel.

I couldn't find it in the docs, but I think by default the number of connections equals the number of partitions in the dataframe you want to write into the DB table. This explains the intermittency you've noticed.

However, you can control this number by setting the numPartitions option:

The maximum number of partitions that can be used for parallelism in
table reading and writing. This also determines the maximum number of
concurrent JDBC connections. If the number of partitions to write
exceeds this limit, we decrease it to this limit by calling
coalesce(numPartitions) before writing.

Example (the JDBC URL and table name below are placeholder values):

df = spark.read.format("jdbc") \
    .option("url", "jdbc:postgresql://dbhost:5432/mydb") \
    .option("dbtable", "my_table") \
    .option("numPartitions", "20") \
    .load()
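
The same option caps the connection count on the write path as well; a hedged sketch, again with placeholder URL and table name:

df.write.format("jdbc") \
    .option("url", "jdbc:postgresql://dbhost:5432/mydb") \
    .option("dbtable", "target_table") \
    .option("numPartitions", "20") \
    .mode("append") \
    .save()

If df has more than 20 partitions, Spark coalesces it down to 20 before writing, as the documentation quoted above describes.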

spring boot postgres: FATAL: sorry, too many clients already

Compare the max_connections parameter in the postgresql.conf file with the total number of connections configured in application.yml.

ALTER SYSTEM SET max_connections = '150';

Then restart your instance. Note that max_connections requires a full server restart; reloading the configuration with

select pg_reload_conf();

applies only reloadable settings, so it is not enough here.

Note: the total includes both active and idle connections; setting max_connections much higher than you need wastes server resources, so don't overdo it.
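
For the application side mentioned above, the pool limit lives in application.yml. A minimal sketch, assuming HikariCP (Spring Boot 2's default pool):

spring:
  datasource:
    hikari:
      # keep (pool size × number of app instances) below the server's max_connections
      maximum-pool-size: 10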

Spring Boot PSQLException: FATAL: sorry, too many clients already when running tests

Since no answer has been suggested yet, I am posting my solution.
Short version: decrease the connection pool size in test properties:

spring.datasource.hikari.maximum-pool-size=2

Longer version: Spring Boot 2 uses HikariCP by default for connection pooling, and HikariCP's default pool size is 10 (as of Jan 2019). When running a lot of integration tests, the Spring context is created multiple times, and each context acquires 10 connections from the database. As far as I've observed, tests allocate connections faster than they are released. Therefore, the max_connections limit allowed by the database server (typically 100 by default) is reached at some point, which leads to that "too many clients" error.
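
For example, ten live test contexts × 10 connections each = 100 connections, which is exactly the typical default max_connections.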

By limiting the connection pool size to 2 in test properties I was able to fix that problem.

PostgreSQL Evolutions: PSQLException: FATAL: sorry, too many clients already

You can reduce the number of connections used by your application. I had the same errors on a Mac install. As shown in the official documentation:

db.default.partitionCount=2

# The number of connections to create per partition. Setting this to
# 5 with 3 partitions means you will have 15 unique connections to the
# database. Note that BoneCP will not create all these connections in
# one go but rather start off with minConnectionsPerPartition and
# gradually increase connections as required.
db.default.maxConnectionsPerPartition=5

# The number of initial connections, per partition.
db.default.minConnectionsPerPartition=5
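
With the settings above, the pool tops out at 2 partitions × 5 connections = 10 connections per application instance.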

PSQLException: FATAL: sorry, too many clients already error in integration tests with jOOQ & Spring Boot

Since your question seems not to be about the generally best way to work with PostgreSQL connections / data sources, I'll answer the part about jOOQ and using its DataSourceConnectionProvider:

Using DataSourceConnectionProvider

There is no better alternative in general. In order to understand DataSourceConnectionProvider (the implementation), you have to understand ConnectionProvider (its specification). It is an SPI that jOOQ uses for two things:

  • to acquire() a connection prior to running a statement or a transaction
  • to release() a connection after running a statement (and possibly, fetching results) or a transaction

The DataSourceConnectionProvider does so by acquiring a connection from your DataSource through DataSource.getConnection() and by releasing it through Connection.close(). This is the most common way to interact with data sources, in order to let the DataSource implementation handle transaction and/or pooling semantics.

Whether this is a good idea in your case may depend on individual configurations that you have made. It generally is a good idea because you usually don't want to manually manage connection lifecycles.
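
For illustration, a minimal sketch of that wiring, assuming a pooled DataSource named ds is configured elsewhere:

// Wrap the pool in jOOQ's ConnectionProvider SPI
ConnectionProvider provider = new DataSourceConnectionProvider(ds);
DSLContext ctx = DSL.using(provider, SQLDialect.POSTGRES);

// acquire() runs before the statement, release() runs after the results
// are fetched; for this provider, release() simply calls Connection.close(),
// which returns the connection to the pool
ctx.selectOne().fetch();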

Using DefaultConnectionProvider

This can certainly be done instead, in which case jOOQ does not close() your connection for you; you'll do that yourself. I expect this to have no effect in your particular case, as you'd be implementing the DataSourceConnectionProvider semantics manually, e.g.

try (Connection c = ds.getConnection()) {

    // Implicitly using a DefaultConnectionProvider
    DSL.using(c).select(...).fetch();

    // Implicit call to c.close()
}

In other words: this is likely not a problem related to jOOQ, but to your data source.

Arval SQLException: FATAL: sorry, too many clients already in postgres

You can increase max_connections in Postgres, but that is not the real solution. You have resource leaks. It could be anything: a connection not closed, a result set not closed. Go back and check the code.
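
For example, try-with-resources guarantees every handle is closed even when an exception is thrown. A minimal sketch, assuming a dataSource variable is in scope:

try (Connection con = dataSource.getConnection();
     PreparedStatement ps = con.prepareStatement("SELECT 1");
     ResultSet rs = ps.executeQuery()) {
    // all three resources are closed automatically, in reverse order
    while (rs.next()) {
        System.out.println(rs.getInt(1));
    }
}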

Consider using a connection pooling library like c3p0/BoneCP.

A general discussion on connection pooling is here
(Thanks to @sinisa229 mihajlovski)


