SQLAlchemy 0.6.1 Documentation

Version: 0.6.1 Last Updated: 07/25/2016 21:14:41

PostgreSQL

Support for the PostgreSQL database.

For information on connecting using specific drivers, see the documentation section regarding that driver.

Sequences/SERIAL

PostgreSQL supports sequences, and SQLAlchemy uses these as the default means of creating new primary key values for integer-based primary key columns. When creating tables, SQLAlchemy will issue the SERIAL datatype for integer-based primary key columns, which generates a sequence named after the table and column and associates it with that column.

To specify a specific named sequence to be used for primary key generation, use the Sequence() construct:

Table('sometable', metadata, 
        Column('id', Integer, Sequence('some_id_seq'), primary_key=True)
    )

Currently, when SQLAlchemy issues a single insert statement, to fulfill the contract of having the “last insert identifier” available, the sequence is executed independently beforehand and the new value is retrieved, to be used in the subsequent insert. Note that when an insert() construct is executed using “executemany” semantics, the sequence is not pre-executed and normal PG SERIAL behavior is used.
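
As a brief sketch of this behavior - the connection URL and table below are placeholders for illustration - the pre-executed sequence value is what populates the result's inserted_primary_key attribute for a single-row insert:

from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")
metadata = MetaData()
sometable = Table('sometable', metadata,
    Column('id', Integer, primary_key=True),    # rendered as SERIAL
    Column('data', String(50))
)
metadata.create_all(engine)

conn = engine.connect()

# single statement: the sequence is pre-executed and the new value
# is available on the result
result = conn.execute(sometable.insert().values(data='foo'))
print result.inserted_primary_key

# "executemany": no pre-execution, normal SERIAL behavior applies
conn.execute(sometable.insert(), [{'data': 'bar'}, {'data': 'bat'}])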

PostgreSQL 8.2 supports an INSERT...RETURNING syntax which SQLAlchemy supports as well. A future release of SQLA will use this feature by default in lieu of sequence pre-execution in order to retrieve new primary key values, when available.

Transaction Isolation Level

create_engine() accepts an isolation_level parameter which results in the command SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL <level> being invoked for every new connection. Valid values for this parameter are READ_COMMITTED, READ_UNCOMMITTED, REPEATABLE_READ, and SERIALIZABLE.
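
For example, to run all connections at SERIALIZABLE isolation (the connection URL below is a placeholder):

from sqlalchemy import create_engine

engine = create_engine(
    "postgresql+psycopg2://scott:tiger@localhost/test",
    isolation_level="SERIALIZABLE"
)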

INSERT/UPDATE...RETURNING

The dialect supports PG 8.2’s INSERT..RETURNING, UPDATE..RETURNING and DELETE..RETURNING syntaxes, but must be explicitly enabled on a per-statement basis:

# INSERT..RETURNING
result = table.insert().returning(table.c.col1, table.c.col2).\
    values(name='foo')
print result.fetchall()

# UPDATE..RETURNING
result = table.update().returning(table.c.col1, table.c.col2).\
    where(table.c.name=='foo').values(name='bar')
print result.fetchall()

# DELETE..RETURNING
result = table.delete().returning(table.c.col1, table.c.col2).\
    where(table.c.name=='foo')
print result.fetchall()

Indexes

PostgreSQL supports partial indexes. To create them, pass a postgresql_where option to the Index constructor:

Index('my_index', my_table.c.id, postgresql_where=my_table.c.value > 10)

PostgreSQL Column Types

class sqlalchemy.dialects.postgresql.base.ARRAY(item_type, mutable=True)

Bases: sqlalchemy.types.MutableType, sqlalchemy.types.Concatenable, sqlalchemy.types.TypeEngine

Postgresql ARRAY type.

Represents values as Python lists.

Note: be sure to read the notes for MutableType regarding ORM performance implications.

__init__(item_type, mutable=True)

Construct an ARRAY.

E.g.:

Column('myarray', ARRAY(Integer))

Arguments are:

Parameters:
  • item_type – The data type of items of this array. Note that dimensionality is irrelevant here, so multi-dimensional arrays like INTEGER[][] are constructed as ARRAY(Integer), not as ARRAY(ARRAY(Integer)) or similar. The type mapping figures out the dimensionality on the fly.
  • mutable – Defaults to True: specify whether lists passed to this class should be considered mutable. If so, generic copy operations (typically used by the ORM) will shallow-copy values.
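
A brief sketch of typical usage follows - the table and column names are illustrative only - where values are passed and returned as Python lists:

from sqlalchemy import Table, Column, Integer, MetaData
from sqlalchemy.dialects.postgresql import ARRAY

metadata = MetaData()
data_table = Table('data_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('intarr', ARRAY(Integer))     # rendered as INTEGER[]
)

# a Python list is accepted as the bound value for the array column
stmt = data_table.insert().values(intarr=[1, 2, 3])
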
class sqlalchemy.dialects.postgresql.base.BIT(*args, **kwargs)

Bases: sqlalchemy.types.TypeEngine

__init__(*args, **kwargs)
class sqlalchemy.dialects.postgresql.base.BYTEA(length=None)

Bases: sqlalchemy.types.LargeBinary

__init__(length=None)

Construct a LargeBinary type.

Parameters:
  • length – optional, a length for the column for use in DDL statements, for those BLOB types that accept a length (i.e. MySQL). It does not produce a small BINARY/VARBINARY type - use the BINARY/VARBINARY types specifically for those. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued.
class sqlalchemy.dialects.postgresql.base.CIDR(*args, **kwargs)

Bases: sqlalchemy.types.TypeEngine

__init__(*args, **kwargs)
class sqlalchemy.dialects.postgresql.base.DOUBLE_PRECISION(precision=None, asdecimal=False, **kwargs)

Bases: sqlalchemy.types.Float

__init__(precision=None, asdecimal=False, **kwargs)

Construct a Float.

Parameters:
  • precision – the numeric precision for use in DDL CREATE TABLE.
  • asdecimal – the same flag as that of Numeric, but defaults to False. Note that setting this flag to True results in floating point conversion.
class sqlalchemy.dialects.postgresql.base.ENUM(*enums, **kw)

Bases: sqlalchemy.types.Enum

__init__(*enums, **kw)

Construct an enum.

Keyword arguments which don’t apply to a specific backend are ignored by that backend.

Parameters:
  • *enums – string or unicode enumeration labels. If unicode labels are present, the convert_unicode flag is auto-enabled.
  • convert_unicode – Enable unicode-aware bind parameter and result-set processing for this Enum’s data. This is set automatically based on the presence of unicode label strings.
  • metadata – Associate this type directly with a MetaData object. For types that exist on the target database as an independent schema construct (Postgresql), this type will be created and dropped within create_all() and drop_all() operations. If the type is not associated with any MetaData object, it will associate itself with each Table in which it is used, and will be created when any of those individual tables are created, after a check is performed for its existence. The type is only dropped when drop_all() is called for that Table object’s metadata, however.
  • name – The name of this type. This is required for Postgresql and any future supported database which requires an explicitly named type, or an explicitly named constraint in order to generate the type and/or a table that uses it.
  • native_enum – Use the database’s native ENUM type when available. Defaults to True. When False, uses VARCHAR + check constraint for all backends.
  • schema – Schema name of this type. For types that exist on the target database as an independent schema construct (Postgresql), this parameter specifies the named schema in which the type is present.
  • quote – Force quoting to be on or off on the type’s name. If left as the default of None, the usual schema-level “case sensitive”/”reserved name” rules are used to determine if this type’s name should be quoted.
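
A brief sketch of declaring an ENUM column follows; the type name and labels shown are hypothetical. On Postgresql the name parameter is required, as it becomes the name of the type created on the database:

from sqlalchemy import Table, Column, Integer, MetaData
from sqlalchemy.dialects.postgresql import ENUM

metadata = MetaData()
person = Table('person', metadata,
    Column('id', Integer, primary_key=True),
    # "name" supplies the name of the independent type construct
    Column('mood', ENUM('sad', 'ok', 'happy', name='mood_enum'))
)
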
class sqlalchemy.dialects.postgresql.base.INET(*args, **kwargs)

Bases: sqlalchemy.types.TypeEngine

__init__(*args, **kwargs)
class sqlalchemy.dialects.postgresql.base.INTERVAL(precision=None)

Bases: sqlalchemy.types.TypeEngine

__init__(precision=None)
class sqlalchemy.dialects.postgresql.base.MACADDR(*args, **kwargs)

Bases: sqlalchemy.types.TypeEngine

__init__(*args, **kwargs)
class sqlalchemy.dialects.postgresql.base.REAL(precision=None, asdecimal=False, **kwargs)

Bases: sqlalchemy.types.Float

__init__(precision=None, asdecimal=False, **kwargs)

Construct a Float.

Parameters:
  • precision – the numeric precision for use in DDL CREATE TABLE.
  • asdecimal – the same flag as that of Numeric, but defaults to False. Note that setting this flag to True results in floating point conversion.
class sqlalchemy.dialects.postgresql.base.UUID(*args, **kwargs)

Bases: sqlalchemy.types.TypeEngine

__init__(*args, **kwargs)

psycopg2 Notes

Support for the PostgreSQL database via the psycopg2 driver.

Driver

The psycopg2 driver is supported, available at http://pypi.python.org/pypi/psycopg2/ . The dialect has several behaviors which are specifically tailored towards compatibility with this module.

Note that psycopg1 is not supported.

Unicode

By default, the psycopg2 driver uses the psycopg2.extensions.UNICODE extension, such that the DBAPI receives and returns all strings as Python Unicode objects directly - SQLAlchemy passes these values through without change. Note that this setting requires that the PG client encoding be set to one which can accommodate the kind of character data being passed - typically utf-8. If the Postgresql database is configured for SQL_ASCII encoding, which is often the default for PG installations, it may be necessary for non-ASCII strings to be encoded into a specific encoding before being passed to the DBAPI.

If changing the database’s client encoding setting is not an option, specify use_native_unicode=False as a keyword argument to create_engine(), and take note of the encoding setting as well, which also defaults to utf-8. Note that disabling “native unicode” mode has a slight performance penalty, as SQLAlchemy must then translate unicode strings to/from an encoding such as utf-8, a task that is handled more efficiently within the psycopg2 driver natively.
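
For example, to disable native unicode mode and have SQLAlchemy perform the translation itself (the connection URL below is a placeholder):

from sqlalchemy import create_engine

engine = create_engine(
    "postgresql+psycopg2://scott:tiger@localhost/test",
    use_native_unicode=False,
    encoding='utf-8'    # the encoding SQLAlchemy uses for the translation
)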

Connecting

URLs are of the form postgresql+psycopg2://user:password@host:port/dbname[?key=value&key=value...].

psycopg2-specific keyword arguments which are accepted by create_engine() are:

  • server_side_cursors - Enable the usage of “server side cursors” for SQL statements which support this feature. What this essentially means from a psycopg2 point of view is that the cursor is created using a name, e.g. connection.cursor(‘some name’), which has the effect that result rows are not immediately pre-fetched and buffered after statement execution, but are instead left on the server and only retrieved as needed. SQLAlchemy’s ResultProxy uses special row-buffering behavior when this feature is enabled, such that groups of 100 rows at a time are fetched over the wire to reduce conversational overhead. A brief sketch of enabling this option appears following this list.
  • use_native_unicode - Enable the usage of Psycopg2 “native unicode” mode per connection. True by default.
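
The sketch below passes server_side_cursors to create_engine(); the connection URL is a placeholder:

from sqlalchemy import create_engine

# with server_side_cursors=True, psycopg2 uses named cursors and
# SQLAlchemy's ResultProxy buffers rows in groups of 100
engine = create_engine(
    "postgresql+psycopg2://scott:tiger@localhost/test",
    server_side_cursors=True
)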

Transactions

The psycopg2 dialect fully supports SAVEPOINT and two-phase commit operations.
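
For instance, SAVEPOINTs are available through Connection.begin_nested() and two-phase transactions through Connection.begin_twophase(). The sketch below assumes an engine and a table named sometable already exist:

conn = engine.connect()
trans = conn.begin()                # outer transaction

savepoint = conn.begin_nested()     # SAVEPOINT
try:
    conn.execute(sometable.insert(), data='foo')
    savepoint.commit()              # RELEASE SAVEPOINT
except Exception:
    savepoint.rollback()            # ROLLBACK TO SAVEPOINT

trans.commit()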

NOTICE logging

The psycopg2 dialect will log Postgresql NOTICE messages via the sqlalchemy.dialects.postgresql logger:

import logging
logging.getLogger('sqlalchemy.dialects.postgresql').setLevel(logging.INFO)

Per-Statement Execution Options

The following per-statement execution options are respected:

  • stream_results - Enable or disable usage of server side cursors for SELECT statements. If None or not set, the server_side_cursors option of the connection is used. If auto-commit is enabled, the option is ignored. A brief sketch of enabling this option on a single statement follows this list.
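
The sketch below assumes a table sometable and an open connection conn; both names are placeholders:

from sqlalchemy import select

stmt = select([sometable]).execution_options(stream_results=True)
for row in conn.execute(stmt):
    print row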

pg8000 Notes

Support for the PostgreSQL database via the pg8000 driver.

Connecting

URLs are of the form postgresql+pg8000://user:password@host:port/dbname[?key=value&key=value...].

Unicode

pg8000 requires that the PostgreSQL client encoding be configured in the postgresql.conf file in order to use encodings other than ASCII. Set this value to the same value as the “encoding” parameter on create_engine(), usually “utf-8”.
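
For example, if the server is configured with a client_encoding of UTF8, the matching engine setup would look like the following (the connection URL is a placeholder):

from sqlalchemy import create_engine

engine = create_engine(
    "postgresql+pg8000://scott:tiger@localhost/test",
    encoding='utf-8'
)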

Interval

Passing data to and from the Interval type is not yet supported.

zxjdbc Notes

Support for the PostgreSQL database via the zxjdbc JDBC connector.

JDBC Driver

The official Postgresql JDBC driver is at http://jdbc.postgresql.org/.
