Arval SQLException: FATAL: sorry, too many clients already in postgres


You can increase max_connections in Postgres, but that is not the real solution: you have a resource leak. It could be anything - a connection not closed, a result set not closed. Go back and check the code.
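If you are on Java 7 or later, the simplest fix is try-with-resources, which guarantees that the result set, statement, and connection are closed even when the query throws. A minimal sketch (the URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LeakFreeQuery {
    private static final String URL = "jdbc:postgresql://localhost:5432/mydb"; // placeholder

    public static void main(String[] args) throws SQLException {
        // Resources are closed in reverse order of declaration,
        // even if executeQuery() or rs.next() throws.
        try (Connection conn = DriverManager.getConnection(URL, "user", "password");
             PreparedStatement ps = conn.prepareStatement("SELECT 1");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}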

Consider using a connection pooling library such as c3p0 or BoneCP.

A general discussion on connection pooling is here
(Thanks to @sinisa229 mihajlovski)
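As an illustration, a minimal c3p0 setup might look like this (assuming c3p0 and the PostgreSQL JDBC driver are on the classpath; the URL, credentials, and pool sizes are placeholders, not recommendations):

import java.sql.Connection;
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PooledConnections {
    public static void main(String[] args) throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("org.postgresql.Driver");
        ds.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder
        ds.setUser("user");
        ds.setPassword("password");
        ds.setMinPoolSize(3);   // keep a few connections warm
        ds.setMaxPoolSize(20);  // hard cap, well below max_connections

        // close() on a pooled connection returns it to the pool rather than
        // disconnecting, so Postgres never sees more than maxPoolSize connections.
        try (Connection conn = ds.getConnection()) {
            System.out.println(conn.isValid(2));
        }
        ds.close(); // shut the pool down when the application exits
    }
}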

Too many connections created in Postgres when creating a dashboard in Pentaho

From the comments thread on the original question, it seems you're using SQL over JDBC connections on your dashboard. This creates a separate database connection for each query that needs to run, and if the queries are somewhat slow you may reach the limit on the number of concurrent connections.

Instead, you should set up a JNDI data source: in the datasource management window, add a new connection and set up the correct credentials. Under advanced options, set up a connection pool and give it a meaningful name. From that point on, refer to that name in your dashboard queries and use SQL over JNDI instead of SQL over JDBC. This way each SQL query borrows a connection from the pool, and the database sees only a small, bounded set of connections, no matter how many queries run.
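Under the hood, SQL over JNDI amounts to looking the pool up by name and borrowing a connection from it, roughly like this Java sketch (the name jdbc/pentaho_pg is hypothetical; use whatever name you gave the pool):

import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class JndiLookup {
    public static Connection borrow() throws Exception {
        InitialContext ctx = new InitialContext();
        // Resolve the pooled data source registered by the server.
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/pentaho_pg");
        // close() on this connection returns it to the pool.
        return ds.getConnection();
    }
}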

How to increase max_connections in Postgres?

Just increasing max_connections is a bad idea. You need to increase shared_buffers and kernel.shmmax as well.


Considerations

max_connections determines the maximum number of concurrent connections to the database server. The default is typically 100 connections.
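You can check the configured limit and how close you are to it from psql:

SHOW max_connections;                   -- the configured limit
SELECT count(*) FROM pg_stat_activity;  -- connections in use right now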

Increasing your connection count may also require scaling up your deployment. Before doing either, consider whether you really need an increased connection limit at all.

Each PostgreSQL connection consumes RAM to manage the connection and the client using it. The more connections you have, the more RAM goes to connection overhead instead of to running the database itself.

A well-written app typically doesn't need a large number of connections. If you have an app that does, consider using a tool such as PgBouncer, which can pool connections for you. Since each connection consumes RAM, you should be looking to minimize their use.
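For reference, a minimal pgbouncer.ini sketch (the database name, paths, and pool sizes are placeholders, not recommendations):

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000   ; clients PgBouncer will accept
default_pool_size = 20   ; actual connections opened to Postgres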


How to increase max connections

1. Increase max_connections and shared_buffers

In /var/lib/pgsql/{version_number}/data/postgresql.conf

change

max_connections = 100
shared_buffers = 24MB

to

max_connections = 300
shared_buffers = 80MB

The shared_buffers configuration parameter determines how much memory is dedicated to PostgreSQL to use for caching data.

  • If you have a system with 1GB or more of RAM, a reasonable starting
    value for shared_buffers is 1/4 of the memory in your system.
  • It's unlikely that using more than 40% of RAM will work better than a
    smaller amount (like 25%).
  • Be aware that if your system or PostgreSQL build is 32-bit, it might
    not be practical to set shared_buffers above 2 to 2.5GB.
  • Note that on Windows, large values for shared_buffers aren't as
    effective, and you may find better results keeping it relatively low
    and using the OS cache more instead. On Windows the useful range is
    64MB to 512MB.
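Note that both max_connections and shared_buffers take effect only after a full server restart, not a reload. For example (the service name and data directory vary by distribution):

sudo systemctl restart postgresql
# or, without systemd:
pg_ctl -D /var/lib/pgsql/{version_number}/data restart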

2. Change kernel.shmmax

You will need to increase the kernel's maximum shared memory segment size
to be slightly larger than shared_buffers.

In file /etc/sysctl.conf, set the parameter as shown below (the following line sets the kernel maximum shared memory segment size to 96MB):

kernel.shmmax=100663296
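Settings in /etc/sysctl.conf are applied at boot. To apply the change immediately and verify it:

sudo sysctl -p          # reload /etc/sysctl.conf without rebooting
sysctl kernel.shmmax    # confirm the new value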

References

Postgres Max Connections And Shared Buffers

Tuning Your PostgreSQL Server


