SQLAlchemy ships with a connection pooling framework that integrates with the Engine system and can also be used on its own to manage plain DB-API connections.
At the base of any database helper library is a system for efficiently acquiring connections to the database. Since the establishment of a database connection is typically a somewhat expensive operation, an application needs a way to get at database connections repeatedly without incurring the full overhead each time. Particularly for server-side web applications, a connection pool is the standard way to maintain a group or “pool” of active database connections which are reused from request to request in a single server process.
The Engine returned by the create_engine() function in most cases has a QueuePool integrated, pre-configured with reasonable pooling defaults. If you're reading this section simply to enable pooling, congratulations! You're already done.
The most common QueuePool tuning parameters can be passed directly to create_engine() as keyword arguments: pool_size, max_overflow, pool_recycle and pool_timeout. For example:
engine = create_engine('postgresql://me@localhost/mydb',
                       pool_size=20, max_overflow=0)
In the case of SQLite, a SingletonThreadPool is provided instead, to provide compatibility with SQLite’s restricted threading model.
Pool instances may be created directly for your own use or to supply to sqlalchemy.create_engine() via the pool= keyword argument.
Constructing your own pool requires supplying a callable function the Pool can use to create new connections. The function will be called with no arguments.
Through this method, custom connection schemes can be made, such as using connections from another library's pool, or making a new connection that automatically executes some initialization commands:
import sqlalchemy.pool as pool
import psycopg2
def getconn():
    c = psycopg2.connect(user='ed', host='127.0.0.1', dbname='test')
    # execute an initialization function on the connection before returning it
    c.cursor().execute("select setup_encodings()")
    return c

p = pool.QueuePool(getconn, max_overflow=10, pool_size=5)
Or with SingletonThreadPool:
import sqlalchemy.pool as pool
import sqlite3
p = pool.SingletonThreadPool(lambda: sqlite3.connect('myfile.db'))
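Once constructed, a pool is used by checking connections out with its connect() method and returning them with close(); it can also be handed to create_engine() via the pool= keyword argument mentioned above. A rough sketch, continuing from the SingletonThreadPool just built (the URL below supplies only the dialect, since the pool's creator already knows how to connect):
from sqlalchemy import create_engine

# check a connection out of the pool; close() returns it to the pool
# rather than closing the underlying DB-API connection
conn = p.connect()
cur = conn.cursor()
cur.execute("select 1")
conn.close()

# or let an Engine draw its connections from this pool
engine = create_engine('sqlite://', pool=p)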
class sqlalchemy.pool.AssertionPool
Bases: sqlalchemy.pool.Pool
A Pool that allows at most one checked out connection at any given time.
This will raise an exception if more than one connection is checked out at a time. Useful for debugging code that is using more connections than desired.
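As a debugging aid, the class can be handed to create_engine() through the poolclass keyword argument; a brief sketch (the URL is illustrative):
from sqlalchemy import create_engine
from sqlalchemy.pool import AssertionPool

# a second simultaneous checkout raises immediately, exposing code
# paths that leak or hold more connections than intended
engine = create_engine('postgresql://me@localhost/mydb',
                       poolclass=AssertionPool)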
class sqlalchemy.pool.NullPool
Bases: sqlalchemy.pool.Pool
A Pool which does not pool connections.
Instead, it literally opens and closes the underlying DB-API connection on each connection open/close.
Reconnect-related functions such as recycle and connection invalidation are not supported by this Pool implementation, since no connections are held persistently.
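To disable pooling for an Engine altogether, NullPool can likewise be supplied via the poolclass keyword argument; a minimal sketch:
from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# each checkout opens a fresh DB-API connection, and returning it to
# the "pool" closes it for real
engine = create_engine('postgresql://me@localhost/mydb', poolclass=NullPool)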
class sqlalchemy.pool.Pool
Bases: sqlalchemy.log.Identified
Abstract base class for connection pools.
Construct a Pool.
add_listener(listener)
Add a PoolListener-like object to this pool.
listener may be an object that implements some or all of PoolListener, or a dictionary of callables containing implementations of some or all of the named methods in PoolListener.
dispose()
Dispose of this pool.
This method leaves the possibility of checked-out connections remaining open. It is advised not to reuse the pool once dispose() is called; instead, use a new pool constructed by the recreate() method.
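A short sketch of that pattern, given any already-constructed Pool instance p such as one of those built above:
# build a replacement pool with the same creator and options, then
# retire the old one; connections still checked out of the old pool
# remain open until their holders close them
new_pool = p.recreate()
p.dispose()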
class sqlalchemy.pool.QueuePool
Bases: sqlalchemy.pool.Pool
A Pool that imposes a limit on the number of open connections.
Construct a QueuePool.
class sqlalchemy.pool.SingletonThreadPool
Bases: sqlalchemy.pool.Pool
A Pool that maintains one connection per thread.
Maintains one connection per thread, never moving a connection to a thread other than the one in which it was created.
This is used for SQLite, which does not handle multithreading by default and also requires a singleton connection if a :memory: database is being used.
Options are the same as those of Pool, as well as:
Parameters:
pool_size – The number of threads in which to maintain connections at once. Defaults to five.
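For example, a brief sketch raising the per-thread connection count (the file name is illustrative):
import sqlalchemy.pool as pool
import sqlite3

# keep connections alive for up to ten distinct threads at once
p = pool.SingletonThreadPool(lambda: sqlite3.connect('myfile.db'),
                             pool_size=10)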
class sqlalchemy.pool.StaticPool
Bases: sqlalchemy.pool.Pool
A Pool of exactly one connection, used for all requests.
Reconnect-related functions such as recycle and connection invalidation (which is also used to support auto-reconnect) are not currently supported by this Pool implementation but may be implemented in a future release.
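One place this pool is useful is sharing a single SQLite :memory: database across an application; a rough sketch, assuming the standard sqlite3 DB-API module (check_same_thread is a sqlite3 connect argument, needed only if the connection may be used from more than one thread):
from sqlalchemy import create_engine
from sqlalchemy.pool import StaticPool

# every checkout returns the very same connection, so the contents of
# the :memory: database stay visible to all users of the engine
engine = create_engine('sqlite://',
                       connect_args={'check_same_thread': False},
                       poolclass=StaticPool)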
Any PEP 249 DB-API module can be “proxied” through the connection pool transparently. Usage of the DB-API is exactly as before, except the connect() method will consult the pool. Below we illustrate this with psycopg2:
import sqlalchemy.pool as pool
import psycopg2 as psycopg
psycopg = pool.manage(psycopg)
# then connect normally
connection = psycopg.connect(database='test', user='scott',
                             password='tiger')
This produces a _DBProxy object which supports the same connect() function as the original DB-API module. Upon connection, a connection proxy object is returned, which delegates its calls to a real DB-API connection object. This connection object is stored persistently within a connection pool (an instance of Pool) that corresponds to the exact connection arguments sent to the connect() function.
The connection proxy supports all of the methods on the original connection object, most of which are proxied via __getattr__(). The close() method will return the connection to the pool, and the cursor() method will return a proxied cursor object. Both the connection proxy and the cursor proxy will also return the underlying connection to the pool after they have both been garbage collected, which is detected via weakref callbacks (__del__ is not used).
Additionally, when connections are returned to the pool, a rollback() is issued on the connection unconditionally. This is to release any locks still held by the connection that may have resulted from normal activity.
By default, the connect() method will return the same connection that is already checked out in the current thread. This allows a particular connection to be used in a given thread without needing to pass it around between functions. To disable this behavior, specify use_threadlocal=False to the manage() function.
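For example, a brief sketch of managing psycopg2 without the thread-local behavior:
import sqlalchemy.pool as pool
import psycopg2

# with use_threadlocal=False, each connect() call checks out a distinct
# connection instead of returning the one already held by this thread
psycopg = pool.manage(psycopg2, use_threadlocal=False)

c1 = psycopg.connect(database='test', user='scott', password='tiger')
c2 = psycopg.connect(database='test', user='scott', password='tiger')  # a second, separate connection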
sqlalchemy.pool.manage(module, **params)
Return a proxy for a DB-API module that automatically pools connections.
Given a DB-API 2.0 module and pool management parameters, returns a proxy for the module that will automatically pool connections, creating new connection pools for each distinct set of connection arguments sent to the decorated module’s connect() function.
sqlalchemy.pool.clear_managers()
Remove all current DB-API 2.0 managers.
All pools and connections are disposed.
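A one-line sketch:
import sqlalchemy.pool as pool

# dispose every pool created through manage() and forget the proxies
pool.clear_managers()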