Philippe Pepiot <ph@itsalwaysdns.eu> [Tue, 31 Mar 2020 18:22:05 +0200] rev 12966
[server] Make connection pooler configurable and set better default values
Drop the connections-pool-size configuration and add new configuration options:
* connections-pool-min-size. Set to 0 by default, so we open connections only
when needed. This avoids opening min-size * processes connections at startup,
which is, I think, a good default.
* connections-pool-max-size. Set to 0 (unlimited) by default, so we move the
bottleneck to PostgreSQL.
* connections-idle-timeout. Set to 10 minutes. I don't have a strong argument
for this value except that it is the default in pgbouncer.
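A minimal sketch of how these defaults fit together, using a hypothetical `PoolSettings` holder rather than CubicWeb's real configuration machinery (the option names are the ones above; the Python attribute names are assumptions):

```python
from dataclasses import dataclass

@dataclass
class PoolSettings:
    # connections-pool-min-size: 0 means connections are opened only on demand,
    # so startup no longer pays for min-size * processes connections.
    min_size: int = 0
    # connections-pool-max-size: 0 means unlimited; the bottleneck moves to PostgreSQL.
    max_size: int = 0
    # connections-idle-timeout: 10 minutes, the same default as pgbouncer (seconds here).
    idle_timeout: float = 10 * 60


settings = PoolSettings()
assert (settings.min_size, settings.max_size, settings.idle_timeout) == (0, 0, 600)
```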
Philippe Pepiot <ph@itsalwaysdns.eu> [Tue, 31 Mar 2020 18:12:20 +0200] rev 12965
[server] Enhance connections-pooler-enabled documentation
Philippe Pepiot <ph@itsalwaysdns.eu> [Tue, 31 Mar 2020 16:17:14 +0200] rev 12964
[server] move connection pooler initialization logic to get_cnxset()
This avoids complex logic in Repository initialization.
Philippe Pepiot <ph@itsalwaysdns.eu> [Mon, 30 Mar 2020 15:46:12 +0200] rev 12963
[server] dynamically close idle database connections
When the pool has not been empty for `idle_timeout`, start closing idle connections.
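A rough sketch of the idea only, not the actual _CnxSetPool code; the `_queue` and `_last_empty` attributes are assumptions standing in for however the pool tracks its state:

```python
import queue
import time

def close_idle_connections(pool, idle_timeout, min_size=0):
    # Sketch: assumes `pool` keeps a LIFO queue of connection sets in
    # `pool._queue` and records in `pool._last_empty` the last time the
    # queue ran empty (i.e. every connection was in use).
    if time.time() - pool._last_empty < idle_timeout:
        return  # all connections were needed recently, keep them open
    # close surplus connections, keeping at most `min_size` warm ones
    while pool._queue.qsize() > min_size:
        try:
            cnxset = pool._queue.get_nowait()
        except queue.Empty:
            break
        cnxset.close()
```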
Philippe Pepiot <ph@itsalwaysdns.eu> [Mon, 30 Mar 2020 15:45:40 +0200] rev 12962
[server] implement dynamic database pooler
Opening too many database connections has a cost at startup, and PostgreSQL also
has a maximum number of connections (100 by default).
This gets worse when starting multiple WSGI processes, since each process has
its own database pool.
Instead of opening `connections-pool-size` connections to the database at
startup, open just one and open more only when needed.
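A sketch of the idea only, not CubicWeb's _CnxSetPool: start with a single connection and grow lazily, up to `max_size` (0 meaning unlimited); locking around the size counter is omitted for brevity, and the `open_connection` callable is an assumption:

```python
import queue

class DynamicPool:
    def __init__(self, open_connection, max_size=0):
        self._open = open_connection   # callable returning a new connection
        self._max_size = max_size
        self._size = 1
        self._queue = queue.LifoQueue()
        self._queue.put(self._open())  # open a single connection at startup

    def get(self, timeout=None):
        try:
            return self._queue.get_nowait()
        except queue.Empty:
            if self._max_size == 0 or self._size < self._max_size:
                self._size += 1
                return self._open()    # grow the pool on demand
            return self._queue.get(timeout=timeout)  # wait for a release

    def release(self, cnx):
        self._queue.put(cnx)
```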
Philippe Pepiot <ph@itsalwaysdns.eu> [Mon, 30 Mar 2020 15:30:02 +0200] rev 12961
[server] use a LifoQueue in _CnxSetPool
In PostgreSQL, some caches are attached to the connection. Using a LifoQueue
(last-in, first-out) makes a few connections take most of the load, which gives
the best performance.
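A tiny illustration of the switch: `queue.LifoQueue` has the same `put`/`get` interface as `queue.Queue`, but hands back the most recently released item first, so a few connections stay hot:

```python
import queue

fifo = queue.Queue()      # previous behaviour: connections are rotated evenly
lifo = queue.LifoQueue()  # new behaviour: the last released connection is reused first

for name in ("cnx1", "cnx2", "cnx3"):
    fifo.put(name)
    lifo.put(name)

print(fifo.get())  # cnx1 -> load spreads over every connection
print(lifo.get())  # cnx3 -> the hottest connection (and its server-side caches) is reused
```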
Philippe Pepiot <ph@itsalwaysdns.eu> [Mon, 30 Mar 2020 15:23:56 +0200] rev 12960
[server] extract creation of a new cnxset into a _new_cnxset() helper
So we can move logic specific to _CnxSetPool there.
Philippe Pepiot <ph@itsalwaysdns.eu> [Mon, 30 Mar 2020 15:19:23 +0200] rev 12959
[server] avoid a possible race condition on _CnxSetPool.close()
The pool could become empty between the time of check and the time of use.
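The usual fix for such a time-of-check/time-of-use race is to stop pre-checking and handle the empty case instead; a generic sketch, not the actual _CnxSetPool code:

```python
import queue

def close_all(pool_queue):
    # Racy version: another thread may drain the queue between empty() and get():
    #   while not pool_queue.empty():
    #       pool_queue.get().close()
    # Safe version: attempt the get and handle queue.Empty.
    while True:
        try:
            cnxset = pool_queue.get_nowait()
        except queue.Empty:
            break
        cnxset.close()
```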
Philippe Pepiot <ph@itsalwaysdns.eu> [Mon, 30 Mar 2020 15:17:10 +0200] rev 12958
[server] extract "no pooler" CnxSet class to a _BaseCnxSet class
So we get rid of the "if self._queue is None" check in each method of _CnxSetPool.
Also add a get_cnxset(source, size) helper to instantiate the correct pooler class.
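Roughly, the helper picks the pooling class from its arguments; a hedged sketch only, where the class names come from the commit messages above and the exact selection condition is an assumption:

```python
def get_cnxset(source, size):
    # Sketch of the selection logic: without pooling, hand out plain
    # connection sets; otherwise wrap them in the pooled implementation.
    if not size:
        return _BaseCnxSet(source)    # "no pooler" behaviour
    return _CnxSetPool(source, size)  # pooled connection sets
```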
Philippe Pepiot <ph@itsalwaysdns.eu> [Tue, 31 Mar 2020 19:15:03 +0200] rev 12957
[server] prevent returning closed cursor to the database pool
Since c8c6ad8, init_repository uses repo.internal_cnx() instead of
repo.system_source.get_connection(), so it uses the pool and we should not close
cursors from the pool before returning it. Otherwise we may get a
"connection already closed" error.
This bug only triggers when connections-pool-size = 1. Since we are moving to a
dynamic pooler, we need to get this fixed.
This does not occur with sqlite, since its connection wrapper instantiates a new
cursor every time, but it does occur with other databases.