This section describes the connection pool module of SQLAlchemy. The Pool object it provides is normally embedded within an Engine instance. For most cases, explicit access to the pool module is not required. However, the Pool object can be used on its own, without the rest of SA, to manage DBAPI connections; this section describes that usage. It also describes in more detail how to customize the pooling strategy used by an Engine.
At the base of any database helper library is a system of efficiently acquiring connections to the database. Since the establishment of a database connection is typically a somewhat expensive operation, an application needs a way to get at database connections repeatedly without incurring the full overhead each time. Particularly for server-side web applications, a connection pool is the standard way to maintain a "pool" of database connections which are used over and over again among many requests. Connection pools typically are configured to maintain a certain "size", which represents how many connections can be used simultaneously without resorting to creating more newly-established connections.
Any DBAPI module can be "proxied" through the connection pool using the following technique (note that the usage of 'psycopg2' is just an example; substitute whatever DBAPI module you'd like):
import sqlalchemy.pool as pool
import psycopg2 as psycopg

psycopg = pool.manage(psycopg)

# then connect normally
connection = psycopg.connect(database='test', user='scott', password='tiger')
This produces a sqlalchemy.pool.DBProxy object which supports the same connect() function as the original DBAPI module. Upon connection, a connection proxy object is returned, which delegates its calls to a real DBAPI connection object. This connection object is stored persistently within a connection pool (an instance of sqlalchemy.pool.Pool) that corresponds to the exact connection arguments sent to the connect() function.
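For example, continuing the proxy set up above (the connection arguments are only illustrative), connect() calls made with identical arguments are served by a single pool, while differing arguments produce a separate pool:

# both calls specify the same connection arguments, so both are
# served by the same underlying sqlalchemy.pool.Pool instance
c1 = psycopg.connect(database='test', user='scott', password='tiger')
c2 = psycopg.connect(database='test', user='scott', password='tiger')

# connecting with different arguments creates and uses a separate pool
c3 = psycopg.connect(database='other', user='scott', password='tiger')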
The connection proxy supports all of the methods on the original connection object, most of which are proxied via __getattr__(). The close() method will return the connection to the pool, and the cursor() method will return a proxied cursor object. Both the connection proxy and the cursor proxy will also return the underlying connection to the pool after they have both been garbage collected, which is detected via the __del__() method.
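A brief sketch of that lifecycle, continuing the same proxied module (the query is only illustrative):

# check out a proxied connection from the pool
connection = psycopg.connect(database='test', user='scott', password='tiger')

# cursor() returns a proxied cursor whose methods pass through
# to the real DBAPI cursor
cursor = connection.cursor()
cursor.execute("select 1")
print(cursor.fetchall())

# close() does not close the underlying DBAPI connection;
# it returns it to the pool for reuse
connection.close()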
Additionally, when connections are returned to the pool, a rollback() is issued on the connection unconditionally. This is to release any locks still held by the connection that may have resulted from normal activity.
By default, the connect() method will return the same connection that is already checked out in the current thread. This allows a particular connection to be used in a given thread without needing to pass it around between functions. To disable this behavior, specify use_threadlocal=False to the manage() function.
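For example (a minimal sketch, reusing the same illustrative connection arguments as above):

import sqlalchemy.pool as pool
import psycopg2

# keyword arguments to manage() are passed along to the underlying pool
psycopg = pool.manage(psycopg2, use_threadlocal=False)

# with thread-local behavior disabled, each call checks out
# its own distinct connection from the pool
c1 = psycopg.connect(database='test', user='scott', password='tiger')
c2 = psycopg.connect(database='test', user='scott', password='tiger')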
The options are generally the same for all types of Pool construction, whether via the "transparent proxy" described in the previous section, via an Engine created with create_engine(), or by instantiating a pool class directly. Additional options may be available based on the specific subclass of Pool being used.
For a description of all pool classes, see the generated documentation.
Common options include:

echo_pool : if set to True, connections being checked out from and returned to the pool will be logged, as well as pool sizing information.

use_threadlocal : if set to True, repeated calls to connect() within the same application thread will return the same connection that is already checked out there. This is the flag used by create_engine(), corresponding to the "plain" or "threadlocal" connection strategy.

QueuePool options include:

pool_size=5 : the number of connections to keep open inside the pool.

max_overflow=10 : the number of additional connections allowed beyond pool_size during periods of high demand.
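For illustration, the same options can also be passed to create_engine(), which forwards them to the pool it constructs (the database URL here is only a placeholder):

from sqlalchemy import create_engine

# pool-related keyword arguments are forwarded to the underlying QueuePool
engine = create_engine('postgresql://scott:tiger@localhost/test',
                       pool_size=5, max_overflow=10, echo_pool=True)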
Besides using the transparent proxy, instances of sqlalchemy.pool.Pool can be created directly. Constructing your own pool involves passing a callable that is used to create a connection. Through this method, custom connection schemes can be made, such as a connection that automatically executes initialization commands when it is first created.
import sqlalchemy.pool as pool
import psycopg2

def getconn():
    c = psycopg2.connect(user='ed', host='127.0.0.1', dbname='test')
    # execute an initialization function on the connection before returning it
    cursor = c.cursor()
    cursor.execute("select setup_encodings()")
    cursor.close()
    return c

p = pool.QueuePool(getconn, max_overflow=10, pool_size=5, use_threadlocal=True)
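Connections can then be checked out from this pool directly; calling close() on the returned proxy gives the connection back to the pool rather than discarding it (a brief sketch):

# check out a connection; getconn() is invoked whenever the pool
# needs to establish a new connection
conn = p.connect()

cursor = conn.cursor()
cursor.execute("select 1")

# return the connection to the pool
conn.close()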
Or with SingletonThreadPool:
import sqlalchemy.pool as pool
import sqlite3

def getconn():
    return sqlite3.connect('myfile.db')

# SQLite connections require the SingletonThreadPool
p = pool.SingletonThreadPool(getconn)
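As a brief sketch of its behavior, SingletonThreadPool maintains a single connection per thread, so repeated connect() calls within one thread reuse the same underlying connection while other threads receive their own:

import threading

# repeated calls within this thread are backed by the same SQLite connection
c1 = p.connect()
c2 = p.connect()

def worker():
    # a different thread receives its own dedicated connection
    c = p.connect()
    c.close()

t = threading.Thread(target=worker)
t.start()
t.join()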