Using the Session
The orm.mapper() function and declarative extensions are the primary configurational interface for the ORM. Once mappings are configured, the primary usage interface for persistence operations is the Session.
In the most general sense, the Session establishes all conversations with the database and represents a “holding zone” for all the objects which you’ve loaded or associated with it during its lifespan. It provides the entrypoint to acquire a Query object, which sends queries to the database using the Session object’s current database connection, populating result rows into objects that are then stored in the Session, inside a structure called the Identity Map - a data structure that maintains unique copies of each object, where “unique” means “only one object with a particular primary key”.
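As a brief illustration of the identity map, the following sketch (assuming a mapped User class and an already-configured session) loads the same row through two different queries and gets back the same Python object:
# load the same row via two different queries
u1 = session.query(User).filter_by(name='ed').one()
u2 = session.query(User).get(u1.id)

# the identity map guarantees one object per primary key per Session
assert u1 is u2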
The Session begins in an essentially stateless form. Once queries are issued or other objects are persisted with it, it requests a connection resource from an Engine that is associated either with the Session itself or with the mapped Table objects being operated upon. This connection represents an ongoing transaction, which remains in effect until the Session is instructed to commit or roll back its pending state.
All changes to objects maintained by a Session are tracked - before the database is queried again or before the current transaction is committed, it flushes all pending changes to the database. This is known as the Unit of Work pattern.
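For example, with the default autoflush behavior, a pending change is written out before a subsequent query runs, so the query sees it (a sketch, again assuming a mapped User class):
u1 = session.query(User).filter_by(name='ed').one()
u1.name = 'edward'      # tracked as a pending change; no SQL is emitted yet

# the next query triggers an autoflush, so the UPDATE for u1 is sent
# to the database before this SELECT runs and the new name is visible
session.query(User).filter_by(name='edward').count()

session.commit()        # flushes any remaining changes and commits the transaction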
When using a Session, it’s important to note that the objects which are associated with it are proxy objects to the transaction being held by the Session - there are a variety of events that will cause objects to re-access the database in order to keep synchronized. It is possible to “detach” objects from a Session, and to continue using them, though this practice has its caveats. It’s intended that usually, you’d re-associate detached objects with another Session when you want to work with them again, so that they can resume their normal task of representing database state.
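A sketch of detaching and re-attaching, assuming a mapped User class and a Session factory such as the sessionmaker() introduced below:
u1 = session.query(User).get(5)
session.expunge(u1)         # u1 is now detached from this Session

# ... later, to resume working with u1, associate it with another Session
other_session = Session()
other_session.add(u1)       # u1 is tracked by other_session again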
Session is a regular Python class which can be directly instantiated. However, to standardize how sessions are configured and acquired, the sessionmaker() function is normally used to create a top level Session configuration which can then be used throughout an application without the need to repeat the configurational arguments.
The usage of sessionmaker() is illustrated below:
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
# an Engine, which the Session will use for connection
# resources
some_engine = create_engine('postgresql://scott:tiger@localhost/')
# create a configured "Session" class
Session = sessionmaker(bind=some_engine)
# create a Session
session = Session()
# work with the session
myobject = MyObject('foo', 'bar')
session.add(myobject)
session.commit()
Above, the sessionmaker() call creates a class for us, which we assign to the name Session. This class is a subclass of the actual Session class, which when instantiated, will use the arguments we’ve given the function, in this case to use a particular Engine for connection resources.
A typical setup will associate the sessionmaker() with an Engine, so that each Session generated will use this Engine to acquire connection resources. This association can be set up as in the example above, using the bind argument.
When you write your application, place the result of the sessionmaker() call at the global level. The resulting Session class, configured for your application, should then be used by the rest of the application as the source of new Session instances.
An extremely common step taken by applications, including virtually all web applications, is to further wrap the sessionmaker() construct in a so-called “contextual” session, provided by the scoped_session() construct. This construct places the sessionmaker() into a registry that maintains a single Session per application thread. Information on using contextual sessions is at Contextual/Thread-local Sessions.
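A sketch of that pattern, reusing the some_engine and MyObject from the example above (see Contextual/Thread-local Sessions for full details):
from sqlalchemy.orm import scoped_session, sessionmaker

# wrap the sessionmaker in a thread-local registry; each thread
# that calls Session() gets its own Session instance
Session = scoped_session(sessionmaker(bind=some_engine))

session = Session()             # the Session for the current thread
session.add(MyObject('foo', 'bar'))
session.commit()

Session.remove()                # dispose of the current thread's Session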
A common scenario is one where the sessionmaker() is invoked at module import time, but the Engine instances to be associated with it have not yet been created. For this use case, the sessionmaker() construct offers the sessionmaker.configure() method, which places additional configuration directives onto an existing sessionmaker() so that they take effect for Session objects created afterwards:
from sqlalchemy.orm import sessionmaker
from sqlalchemy import create_engine
# configure Session class with desired options
Session = sessionmaker()
# later, we create the engine
engine = create_engine('postgresql://...')
# associate it with our custom Session class
Session.configure(bind=engine)
# work with the session
session = Session()
For the use case where an application needs to create a new Session with special arguments that deviate from what is normally used throughout the application, such as a Session that binds to an alternate source of connectivity, or a Session that should have other arguments such as expire_on_commit established differently from what most of the application wants, specific arguments can be passed to the sessionmaker() construct’s class itself. These arguments will override whatever configurations have already been placed, such as below, where a new Session is constructed against a specific Connection:
# at the module level, the global sessionmaker,
# bound to a specific Engine
Session = sessionmaker(bind=engine)
# later, some unit of code wants to create a
# Session that is bound to a specific Connection
conn = engine.connect()
session = Session(bind=conn)
The typical rationale for the association of a Session with a specific Connection is that of a test fixture that maintains an external transaction - see Joining a Session into an External Transaction for an example of this.
It’s helpful to know the states which an instance can have within a session:
- Transient - an instance that’s not in a session, and is not saved to the database; i.e. it has no database identity. The only relationship such an object has to the ORM is that its class has a mapper() associated with it.
- Pending - when you add() a transient instance, it becomes pending. It still wasn’t actually flushed to the database yet, but it will be when the next flush occurs.
- Persistent - an instance which is present in the session and has a record in the database. You get persistent instances by flushing so that pending instances become persistent, or by querying the database for existing instances.
- Detached - an instance which has a record in the database, but is not in any session. Such an object can still be used, but it cannot issue any SQL to load collections or attributes which are not yet loaded, or which have been expired.
Knowing these states is important, since the Session tries to be strict about ambiguous operations (such as trying to save the same object to two different sessions at the same time).
When do I make a sessionmaker() ?
Just one time, somewhere in your application’s global scope. It should be looked upon as part of your application’s configuration. If your application has three .py files in a package, you could, for example, place the sessionmaker() line in your __init__.py file; from that point on your other modules say “from mypackage import Session”. That way, everyone else just uses Session(), and the configuration of that session is controlled by that central point.
If your application starts up, does imports, but does not know what database it’s going to be connecting to, you can bind the Session at the “class” level to the engine later on, using configure().
In the examples in this section, we will frequently show the sessionmaker() being created right above the line where we actually invoke Session(). But that’s just for example’s sake ! In reality, the sessionmaker() would be somewhere at the module level, and your individual Session() calls would be sprinkled all throughout your app, such as in a web application within each controller method.
When do I make a Session ?
You typically invoke Session when you first need to talk to your database, and want to save some objects or load some existing ones. It then remains in use for the lifespan of a particular database conversation, which includes not just the initial loading of objects but throughout the whole usage of those instances.
Objects become detached if their owning session is discarded. They are still functional in the detached state if the user has ensured that their state has not been expired before detachment, but they will not be able to represent the current state of database data. Because of this, it’s best to consider persisted objects as an extension of the state of a particular Session, and to keep that session around until all referenced objects have been discarded.
An exception to this is when objects are placed in caches or otherwise shared among threads or processes, in which case their detached state can be stored, transmitted, or shared. However, the state of detached objects should still be transferred back into a new Session using Session.add() or Session.merge() before working with the object (or in the case of merge, its state) again.
It is also very common that a Session as well as its associated objects are only referenced by a single thread. Sharing objects between threads is most safely accomplished by sharing their state among multiple instances of those objects, each associated with a distinct Session per thread, using Session.merge() to transfer state between threads. This pattern is not a strict requirement by any means, but it has the least chance of introducing concurrency issues.
To help with the recommended Session-per-thread, Session-per-set-of-objects patterns, the scoped_session() function is provided which produces a thread-managed registry of Session objects. It is commonly used in web applications so that a single global variable can be used to safely represent transactional sessions with sets of objects, localized to a single thread. More on this object is in Contextual/Thread-local Sessions.
Is the Session a cache ?
Yeee...no. It’s somewhat used as a cache, in that it implements the identity map pattern, and stores objects keyed to their primary key. However, it doesn’t do any kind of query caching. This means, if you say session.query(Foo).filter_by(name='bar'), even if Foo(name='bar') is right there, in the identity map, the session has no idea about that. It has to issue SQL to the database, get the rows back, and then when it sees the primary key in the row, then it can look in the local identity map and see that the object is already there. It’s only when you say query.get({some primary key}) that the Session doesn’t have to issue a query.
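To illustrate (a sketch, assuming a mapped Foo class with an integer primary key), here's the one case where the identity map short-circuits a query:
foo = session.query(Foo).filter_by(name='bar').first()   # emits a SELECT

# get() consults the identity map first; the object with this primary
# key is already present, so no SQL is emitted here
same_foo = session.query(Foo).get(foo.id)
assert same_foo is foo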
Additionally, the Session stores object instances using a weak reference by default. This also defeats the purpose of using the Session as a cache.
The Session is not designed to be a global object from which everyone consults as a “registry” of objects. That’s more the job of a second level cache. SQLAlchemy provides a pattern for implementing second level caching using Beaker, via the Beaker Caching example.
How can I get the Session for a certain object ?
Use the object_session() classmethod available on Session:
session = Session.object_session(someobject)
Is the session thread-safe?
Nope. It has no thread synchronization of any kind built in, and particularly when you do a flush operation, it definitely is not open to concurrent threads accessing it, because it holds onto a single database connection at that point. If you use a session which is non-transactional (meaning, autocommit is set to True, not the default setting) for read operations only, it’s still not thread-“safe”, but you also won’t get any catastrophic failures either, since it checks out and returns connections to the connection pool on an as-needed basis; it’s just that different threads might load the same objects independently of each other, but only one will wind up in the identity map (however, the other one might still live in a collection somewhere).
But the bigger point here is, you should not want to use the session with multiple concurrent threads. That would be like having everyone at a restaurant all eat from the same plate. The session is a local “workspace” that you use for a specific set of tasks; you don’t want to, or need to, share that session with other threads who are doing some other task. If, on the other hand, there are other threads participating in the same task you are, such as in a desktop graphical application, then you would be sharing the session with those threads, but you also will have implemented a proper locking scheme (or your graphical framework does) so that those threads do not collide.
A multithreaded application is usually going to want to make usage of scoped_session() to transparently manage sessions per thread. More on this at Contextual/Thread-local Sessions.
The query() function takes one or more entities and returns a new Query object which will issue mapper queries within the context of this Session. An entity is defined as a mapped class, a Mapper object, an orm-enabled descriptor, or an AliasedClass object:
# query from a class
session.query(User).filter_by(name='ed').all()
# query with multiple classes, returns tuples
session.query(User, Address).join('addresses').filter_by(name='ed').all()
# query using orm-enabled descriptors
session.query(User.name, User.fullname).all()
# query from a mapper
user_mapper = class_mapper(User)
session.query(user_mapper)
When Query returns results, each object instantiated is stored within the identity map. When a row matches an object which is already present, the same object is returned. In the latter case, whether or not the row is populated onto an existing object depends upon whether the attributes of the instance have been expired or not. A default-configured Session automatically expires all instances along transaction boundaries, so that with a normally isolated transaction, there shouldn’t be any issue of instances representing data which is stale with regards to the current transaction.
The Query object is introduced in great detail in Object Relational Tutorial, and further documented in Querying.
add() is used to place instances in the session. For transient (i.e. brand new) instances, this will have the effect of an INSERT taking place for those instances upon the next flush. For instances which are persistent (i.e. were loaded by this session), they are already present and do not need to be added. Instances which are detached (i.e. have been removed from a session) may be re-associated with a session using this method:
user1 = User(name='user1')
user2 = User(name='user2')
session.add(user1)
session.add(user2)
session.commit() # write changes to the database
To add a list of items to the session at once, use add_all():
session.add_all([item1, item2, item3])
The add() operation cascades along the save-update cascade. For more details see the section Cascades.
merge() reconciles the current state of an instance and its associated children with existing data in the database, and returns a copy of the instance associated with the session. Usage is as follows:
merged_object = session.merge(existing_object)
When given an instance, it follows these steps:
- It examines the primary key of the instance. If the instance has one, it attempts to locate the corresponding object in the local identity map; if not found there (and the load flag is left at its default of True), it queries the database for it.
- If no existing object can be located, a new instance is created.
- The state of the given instance is then copied onto the located or newly created instance.
- The operation is cascaded to associated instances along the merge cascade, and the resulting persistent instance is returned.
With merge(), the given instance is not placed within the session, and can be associated with a different session or detached. merge() is very useful for taking the state of any kind of object structure without regard for its origins or current session associations and placing that state within a session.
merge() is frequently used by applications which implement their own second level caches. This refers to an application which uses an in-memory dictionary, or a tool such as Memcached, to store objects over long-running spans of time. When such an object needs to exist within a Session, merge() is a good choice since it leaves the original cached object untouched. For this use case, merge provides a keyword option called load=False. When this boolean flag is set to False, merge() will not issue any SQL to reconcile the given object against the current state of the database, thereby reducing query overhead. The limitation is that the given object and all of its children may not contain any pending changes, and it’s also of course possible that newer information in the database will not be present on the merged object, since no load is issued.
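A sketch of that pattern, assuming some_cache is an application-level dictionary of detached objects:
cached_user = some_cache['user_5']

# load=False skips the SELECT normally used to reconcile state; the
# cached object must contain no pending changes for this to be safe
local_user = session.merge(cached_user, load=False)

# the cached original stays detached and untouched; work with the copy
local_user.name = 'new name'
session.commit()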
merge() is an extremely useful method for many purposes. However, it deals with the intricate border between objects that are transient/detached and those that are persistent, as well as the automated transference of state. The wide variety of scenarios that can present themselves here often require a more careful approach to the state of objects. Common problems with merge usually involve some unexpected state regarding the object being passed to merge().
Let’s use the canonical example of the User and Address objects:
from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.orm import relationship
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    name = Column(String(50), nullable=False)
    addresses = relationship("Address", backref="user")

class Address(Base):
    __tablename__ = 'address'

    id = Column(Integer, primary_key=True)
    email_address = Column(String(50), nullable=False)
    user_id = Column(Integer, ForeignKey('user.id'), nullable=False)
Assume a User object with one Address, already persistent:
>>> u1 = User(name='ed', addresses=[Address(email_address='ed@ed.com')])
>>> session.add(u1)
>>> session.commit()
We now create a1, an object outside the session, which we’d like to merge on top of the existing Address:
>>> existing_a1 = u1.addresses[0]
>>> a1 = Address(id=existing_a1.id)
A surprise would occur if we said this:
>>> a1.user = u1
>>> a1 = session.merge(a1)
>>> session.commit()
sqlalchemy.orm.exc.FlushError: New instance <Address at 0x1298f50>
with identity key (<class '__main__.Address'>, (1,)) conflicts with
persistent instance <Address at 0x12a25d0>
Why is that ? We weren’t careful with our cascades. The assignment of a1.user to a persistent object cascaded to the backref of User.addresses and made our a1 object pending, as though we had added it. Now we have two Address objects in the session:
>>> a1 = Address()
>>> a1.user = u1
>>> a1 in session
True
>>> existing_a1 in session
True
>>> a1 is existing_a1
False
Above, our a1 is already pending in the session. The subsequent merge() operation essentially does nothing. Cascade can be configured via the cascade option on relationship(), although in this case it would mean removing the save-update cascade from the User.addresses relationship - and usually, that behavior is extremely convenient. The solution here would usually be to not assign a1.user to an object already persistent in the target session.
Note that a new relationship() option introduced in 0.6.5, cascade_backrefs=False, will also prevent the Address from being added to the session via the a1.user = u1 assignment.
Further detail on cascade operation is at Cascades.
Another example of unexpected state:
>>> a1 = Address(id=existing_a1.id, user_id=u1.id)
>>> a1.user is None
True
>>> a1 = session.merge(a1)
>>> session.commit()
sqlalchemy.exc.IntegrityError: (IntegrityError) address.user_id
may not be NULL
Here, we accessed a1.user, which returned its default value of None, which as a result of this access, has been placed in the __dict__ of our object a1. Normally, this operation creates no change event, so the user_id attribute takes precedence during a flush. But when we merge the Address object into the session, the operation is equivalent to:
>>> existing_a1.id = existing_a1.id
>>> existing_a1.user_id = u1.id
>>> existing_a1.user = None
Where above, both user_id and user are assigned to, and change events are emitted for both. The user association takes precedence, and None is applied to user_id, causing a failure.
Most merge() issues can be examined by first checking - is the object prematurely in the session ?
>>> a1 = Address(id=existing_a1.id, user_id=u1.id)
>>> assert a1 not in session
>>> a1 = session.merge(a1)
Or is there state on the object that we don’t want ? Examining __dict__ is a quick way to check:
>>> a1 = Address(id=existing_a1.id, user_id=u1.id)
>>> a1.user
>>> a1.__dict__
{'_sa_instance_state': <sqlalchemy.orm.state.InstanceState object at 0x1298d10>,
'user_id': 1,
'id': 1,
'user': None}
>>> # we don't want user=None merged, remove it
>>> del a1.user
>>> a1 = session.merge(a1)
>>> # success
>>> session.commit()
The delete() method places an instance into the Session’s list of objects to be marked as deleted:
# mark two objects to be deleted
session.delete(obj1)
session.delete(obj2)
# commit (or flush)
session.commit()
A common confusion that arises regarding delete() is when objects which are members of a collection are being deleted. While the collection member is marked for deletion from the database, this does not impact the collection itself in memory until the collection is expired. Below, we illustrate that even after an Address object is marked for deletion, it’s still present in the collection associated with the parent User, even after a flush:
>>> address = user.addresses[1]
>>> session.delete(address)
>>> session.flush()
>>> address in user.addresses
True
When the above session is committed, all attributes are expired. The next access of user.addresses will re-load the collection, revealing the desired state:
>>> session.commit()
>>> address in user.addresses
False
The usual practice of deleting items within collections is to forego the usage of delete() directly, and instead use cascade behavior to automatically invoke the deletion as a result of removing the object from the parent collection. The delete-orphan cascade accomplishes this, as illustrated in the example below:
mapper(User, users_table, properties={
    'addresses': relationship(Address, cascade="all, delete, delete-orphan")
})
del user.addresses[1]
session.flush()
Where above, upon removing the Address object from the User.addresses collection, the delete-orphan cascade has the effect of marking the Address object for deletion in the same way as passing it to delete().
See also Cascades for detail on cascades.
The caveat with Session.delete() is that you need to have an object handy already in order to delete. The Query includes a delete() method which deletes based on filtering criteria:
session.query(User).filter(User.id==7).delete()
The Query.delete() method includes functionality to “expire” objects already in the session which match the criteria. However it does have some caveats, including that “delete” and “delete-orphan” cascades won’t be fully expressed for collections which are already loaded. See the API docs for delete() for more details.
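As a sketch, the synchronize_session argument to Query.delete() selects how in-memory objects matching the criteria are handled (see the delete() API docs for the exact options available in your version):
# 'evaluate' examines objects in the Session against the criteria in Python
session.query(User).filter(User.id == 7).delete(synchronize_session='evaluate')

# 'fetch' selects the matching rows first; False skips synchronization
# entirely, which is cheapest but can leave stale objects in the Session
session.query(User).filter(User.name == 'ed').delete(synchronize_session=False)

session.commit()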
When the Session is used with its default configuration, the flush step is nearly always done transparently. Specifically, the flush occurs before any individual Query is issued, as well as within the commit() call before the transaction is committed. It also occurs before a SAVEPOINT is issued when begin_nested() is used.
Regardless of the autoflush setting, a flush can always be forced by issuing flush():
session.flush()
The “flush-on-Query” aspect of the behavior can be disabled by constructing sessionmaker() with the flag autoflush=False:
Session = sessionmaker(autoflush=False)
Additionally, autoflush can be temporarily disabled by setting the autoflush flag at any time:
mysession = Session()
mysession.autoflush = False
Some autoflush-disable recipes are available at DisableAutoFlush.
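One such recipe is a small context manager that turns the flag off for the duration of a block; the no_autoflush helper below is a hypothetical sketch, not a SQLAlchemy API:
from contextlib import contextmanager

@contextmanager
def no_autoflush(session):
    # temporarily disable autoflush, restoring the prior setting afterwards
    saved = session.autoflush
    session.autoflush = False
    try:
        yield session
    finally:
        session.autoflush = saved

# queries inside the block won't trigger a premature flush of pending state
with no_autoflush(session):
    session.add(SomeObject('bat', 'lala'))
    session.query(SomeObject).all()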
The flush process always occurs within a transaction, even if the Session has been configured with autocommit=True, a setting that disables the session’s persistent transactional state. If no transaction is present, flush() creates its own transaction and commits it. Any failures during flush will always result in a rollback of whatever transaction is present. If the Session is not in autocommit=True mode, an explicit call to rollback() is required after a flush fails, even though the underlying transaction will have been rolled back already - this is so that the overall nesting pattern of so-called “subtransactions” is consistently maintained.
commit() is used to commit the current transaction. It always issues flush() beforehand to flush any remaining state to the database; this is independent of the “autoflush” setting. If no transaction is present, it raises an error. Note that the default behavior of the Session is that a transaction is always present; this behavior can be disabled by setting autocommit=True. In autocommit mode, a transaction can be initiated by calling the begin() method.
Another behavior of commit() is that by default it expires the state of all instances present after the commit is complete. This is so that when the instances are next accessed, either through attribute access or by them being present in a Query result set, they receive the most recent state. To disable this behavior, configure sessionmaker() with expire_on_commit=False.
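For example (a sketch assuming a mapped User class), with expiration on commit disabled, attribute access after the commit uses the values already loaded rather than emitting a new SELECT:
Session = sessionmaker(bind=engine, expire_on_commit=False)
session = Session()

u1 = session.query(User).get(5)
session.commit()

# no reload occurs here; the pre-commit value is returned, at the risk
# of being stale with respect to other transactions
print u1.name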
Normally, instances loaded into the Session are never changed by subsequent queries; the assumption is that the current transaction is isolated so the state most recently loaded is correct as long as the transaction continues. Setting autocommit=True works against this model to some degree since the Session behaves in exactly the same way with regard to attribute state, except no transaction is present.
rollback() rolls back the current transaction. With a default configured session, the post-rollback state of the session is as follows:
- All transactions are rolled back and all connections returned to the connection pool, unless the Session was bound directly to a Connection, in which case the connection is still maintained (but still rolled back).
- Objects which were initially in the pending state when they were added to the Session within the lifespan of the transaction are expunged, corresponding to their INSERT statement being rolled back. The state of their attributes remains unchanged.
- Objects which were marked as deleted within the lifespan of the transaction are promoted back to the persistent state, corresponding to their DELETE statement being rolled back. Note that if those objects were first pending within the transaction, that operation takes precedence instead.
- All objects not expunged are fully expired.
With that state understood, the Session may safely continue usage after a rollback occurs.
When a flush() fails, typically for reasons like primary key, foreign key, or “not nullable” constraint violations, a rollback() is issued automatically (it’s currently not possible for a flush to continue after a partial failure). However, the flush process always uses its own transactional demarcator called a subtransaction, which is described more fully in the docstrings for Session. What it means here is that even though the database transaction has been rolled back, the end user must still issue rollback() to fully reset the state of the Session.
Expunge removes an object from the Session, sending persistent instances to the detached state, and pending instances to the transient state:
session.expunge(obj1)
To remove all items, call expunge_all() (this method was formerly known as clear()).
The close() method issues an expunge_all(), and releases any transactional/connection resources. When connections are returned to the connection pool, transactional state is rolled back as well.
The Session normally works in the context of an ongoing transaction (with the default setting of autocommit=False). Most databases offer “isolated” transactions - this refers to a series of behaviors that allow the work within a transaction to remain consistent as time passes, regardless of the activities outside of that transaction. A key feature of a high degree of transaction isolation is that emitting the same SELECT statement twice will return the same results as when it was called the first time, even if the data has been modified in another transaction.
For this reason, the Session gains very efficient behavior by loading the attributes of each instance only once. Subsequent reads of the same row in the same transaction are assumed to have the same value. The user application also gains directly from this assumption, that the transaction is regarded as a temporary shield against concurrent changes - a good application will ensure that isolation levels are set appropriately such that this assumption can be made, given the kind of data being worked with.
To clear out the currently loaded state on an instance, the instance or its individual attributes can be marked as “expired”, which causes a reload to occur upon next access of any of the instance’s attributes. The instance can also be immediately reloaded from the database. The expire() and refresh() methods achieve this:
# immediately re-load attributes on obj1, obj2
session.refresh(obj1)
session.refresh(obj2)
# expire objects obj1, obj2, attributes will be reloaded
# on the next access:
session.expire(obj1)
session.expire(obj2)
When an expired object reloads, all non-deferred column-based attributes are loaded in one query. Current behavior for expired relationship-based attributes is that they load individually upon access - this behavior may be enhanced in a future release. When a refresh is invoked on an object, the ultimate operation is equivalent to a Query.get(), so any relationships configured with eager loading should also load within the scope of the refresh operation.
refresh() and expire() also support being passed a list of individual attribute names in which to be refreshed. These names can refer to any attribute, column-based or relationship based:
# immediately re-load the attributes 'hello', 'world' on obj1, obj2
session.refresh(obj1, ['hello', 'world'])
session.refresh(obj2, ['hello', 'world'])
# expire the attributes 'hello', 'world' on obj1, obj2; they will be reloaded
# on the next access:
session.expire(obj1, ['hello', 'world'])
session.expire(obj2, ['hello', 'world'])
The full contents of the session may be expired at once using expire_all():
session.expire_all()
Note that expire_all() is called automatically whenever commit() or rollback() are called. If using the session in its default mode of autocommit=False and with a well-isolated transactional environment (which is provided by most backends with the notable exception of MySQL MyISAM), there is virtually no reason to ever call expire_all() directly - plenty of state will remain on the current transaction until it is rolled back or committed or otherwise removed.
refresh() and expire() similarly are usually only necessary when an UPDATE or DELETE has been issued manually within the transaction using Session.execute().
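For example (a sketch assuming a mapped User class and its underlying users_table), after a manual UPDATE through Session.execute(), expiring the affected object causes its attributes to be reloaded on next access:
u1 = session.query(User).get(5)

# bypass the unit of work with a manual UPDATE
session.execute(
    users_table.update().
        where(users_table.c.id == 5).
        values(name='ed2')
)

# the in-memory object doesn't know about the UPDATE; expire it so the
# next attribute access reloads from the current transaction
session.expire(u1)
assert u1.name == 'ed2'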
The Session itself acts somewhat like a set-like collection. All items present may be accessed using the iterator interface:
for obj in session:
    print obj
And presence may be tested for using regular “contains” semantics:
if obj in session:
    print "Object is present"
The session is also keeping track of all newly created (i.e. pending) objects, all objects which have had changes since they were last loaded or saved (i.e. “dirty”), and everything that’s been marked as deleted:
# pending objects recently added to the Session
session.new
# persistent objects which currently have changes detected
# (this collection is now created on the fly each time the property is called)
session.dirty
# persistent objects that have been marked as deleted via session.delete(obj)
session.deleted
Note that objects within the session are by default weakly referenced. This means that when they are dereferenced in the outside application, they fall out of scope from within the Session as well and are subject to garbage collection by the Python interpreter. The exceptions to this include objects which are pending, objects which are marked as deleted, or persistent objects which have pending changes on them. After a full flush, these collections are all empty, and all objects are again weakly referenced. To disable the weak referencing behavior and force all objects within the session to remain until explicitly expunged, configure sessionmaker() with the weak_identity_map=False setting.
Mappers support the concept of configurable cascade behavior on relationship() constructs. This behavior controls how the Session should treat the instances that have a parent-child relationship with another instance that is operated upon by the Session. Cascade is indicated as a comma-separated list of string keywords, with the possible values all, delete, save-update, refresh-expire, merge, expunge, and delete-orphan.
Cascading is configured by setting the cascade keyword argument on a relationship():
mapper(Order, order_table, properties={
    'items': relationship(Item, cascade="all, delete-orphan"),
    'customer': relationship(User, secondary=user_orders_table, cascade="save-update"),
})
The above mapper specifies two relationships, items and customer. The items relationship specifies “all, delete-orphan” as its cascade value, indicating that all add, merge, expunge, refresh, delete and expire operations performed on a parent Order instance should also be performed on the child Item instances attached to it. The delete-orphan cascade value additionally indicates that if an Item instance is no longer associated with an Order, it should also be deleted. The “all, delete-orphan” cascade argument allows a so-called lifecycle relationship between an Order and an Item object.
The customer relationship specifies only the “save-update” cascade value, indicating most operations will not be cascaded from a parent Order instance to a child User instance except for the add() operation. save-update cascade indicates that an add() on the parent will cascade to all child items, and also that items added to a parent which is already present in a session will also be added to that same session. “save-update” cascade also cascades the pending history of a relationship()-based attribute, meaning that objects which were removed from a scalar or collection attribute whose changes have not yet been flushed are also placed into the new session - this is so that foreign key clear operations and deletions will take place (new in 0.6).
Note that the delete-orphan cascade only functions for relationships where the target object can have a single parent at a time, meaning it is only appropriate for one-to-one or one-to-many relationships. For a relationship() which establishes one-to-one via a local foreign key, i.e. a many-to-one that stores only a single parent, or one-to-one/one-to-many via a “secondary” (association) table, a warning will be issued if delete-orphan is configured. To disable this warning, specify the single_parent=True flag on the relationship, which constrains objects to allow attachment to only one parent at a time.
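A sketch of the single_parent flag, assuming a hypothetical addresses_table mapped to Address with a many-to-one toward User:
mapper(Address, addresses_table, properties={
    'user': relationship(User, cascade="all, delete-orphan",
                         single_parent=True)
})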
The default value for cascade on relationship() is save-update, merge.
save-update cascade also takes place on backrefs by default. This means that, given a mapping such as this:
mapper(Order, order_table, properties={
    'items': relationship(Item, backref='order')
})
If an Order is already in the session, and is assigned to the order attribute of an Item, the backref appends the Item to the items collection of that Order, resulting in the save-update cascade taking place:
>>> o1 = Order()
>>> session.add(o1)
>>> o1 in session
True
>>> i1 = Item()
>>> i1.order = o1
>>> i1 in o1.items
True
>>> i1 in session
True
This behavior can be disabled as of 0.6.5 using the cascade_backrefs flag:
mapper(Order, order_table, properties={
    'items': relationship(Item, backref='order',
                          cascade_backrefs=False)
})
So above, the assignment of i1.order = o1 will append i1 to the items collection of o1, but will not add i1 to the session. You can, of course, add() i1 to the session at a later point. This option may be helpful for situations where an object needs to be kept out of a session until its construction is completed, but still needs to be given associations to objects which are already persistent in the target session.
A newly constructed Session may be said to be in the “begin” state. In this state, the Session has not established any connection or transactional state with any of the Engine objects that may be associated with it.
The Session then receives requests to operate upon a database connection. Typically, this means it is called upon to execute SQL statements using a particular Engine, which may be via Session.query(), Session.execute(), or within a flush operation of pending data, which occurs when such state exists and Session.commit() or Session.flush() is called.
As these requests are received, each new Engine encountered is associated with an ongoing transactional state maintained by the Session. When the first Engine is operated upon, the Session can be said to have left the “begin” state and entered “transactional” state. For each Engine encountered, a Connection is associated with it, which is acquired via the Engine.contextual_connect() method. If a Connection was directly associated with the Session (see Joining a Session into an External Transaction for an example of this), it is added to the transactional state directly.
For each Connection, the Session also maintains a Transaction object, which is acquired by calling Connection.begin() on each Connection, or if the Session object has been established using the flag twophase=True, a TwoPhaseTransaction object acquired via Connection.begin_twophase(). These transactions are all committed or rolled back corresponding to the invocation of the Session.commit() and Session.rollback() methods. A commit operation will also call the TwoPhaseTransaction.prepare() method on all transactions if applicable.
When the transactional state is completed after a rollback or commit, the Session releases all Transaction and Connection resources (which has the effect of returning DBAPI connections to the connection pool of each Engine), and goes back to the “begin” state, which will again invoke new Connection and Transaction objects as new requests to emit SQL statements are received.
The example below illustrates this lifecycle:
engine = create_engine("...")
Session = sessionmaker(bind=engine)
# new session. no connections are in use.
session = Session()
try:
    # first query. a Connection is acquired
    # from the Engine, and a Transaction
    # started.
    item1 = session.query(Item).get(1)

    # second query. the same Connection/Transaction
    # are used.
    item2 = session.query(Item).get(2)

    # pending changes are created.
    item1.foo = 'bar'
    item2.bar = 'foo'

    # commit. The pending changes above
    # are flushed via flush(), the Transaction
    # is committed, the Connection object closed
    # and discarded, the underlying DBAPI connection
    # returned to the connection pool.
    session.commit()
except:
    # on rollback, the same closure of state
    # as that of commit proceeds.
    session.rollback()
    raise
SAVEPOINT transactions, if supported by the underlying engine, may be delineated using the begin_nested() method:
Session = sessionmaker()
session = Session()
session.add(u1)
session.add(u2)
session.begin_nested() # establish a savepoint
session.add(u3)
session.rollback() # rolls back u3, keeps u1 and u2
session.commit() # commits u1 and u2
begin_nested() may be called any number of times, which will issue a new SAVEPOINT with a unique identifier for each call. For each begin_nested() call, a corresponding rollback() or commit() must be issued.
When begin_nested() is called, a flush() is unconditionally issued (regardless of the autoflush setting). This is so that when a rollback() occurs, the full state of the session is expired, thus causing all subsequent attribute/instance access to reference the full state of the Session right before begin_nested() was called.
The example of Session transaction lifecycle illustrated at the start of Managing Transactions applies to a Session configured in the default mode of autocommit=False. Constructing a Session with autocommit=True produces a Session placed into “autocommit” mode, where each SQL statement invoked by a Session.query() or Session.execute() occurs using a new connection from the connection pool, discarding it after results have been iterated. The Session.flush() operation still occurs within the scope of a single transaction, though this transaction is closed out after the Session.flush() operation completes.
“autocommit” mode should not be considered for general use. While very old versions of SQLAlchemy standardized on this mode, the modern Session benefits highly from being given a clear point of transaction demarcation via Session.rollback() and Session.commit(). The autoflush action can safely emit SQL to the database as needed without implicitly producing permanent effects, and the contents of attributes are expired only when a logical series of steps has completed. If the Session were to be used in pure “autocommit” mode without an ongoing transaction, these features should be disabled, that is, autoflush=False, expire_on_commit=False.
Modern usage of “autocommit” is for framework integrations that need to control specifically when the “begin” state occurs. A session which is configured with autocommit=True may be placed into the “begin” state using the Session.begin() method. After the cycle completes upon Session.commit() or Session.rollback(), connection and transaction resources are released and the Session goes back into “autocommit” mode, until Session.begin() is called again:
Session = sessionmaker(bind=engine, autocommit=True)
session = Session()
session.begin()
try:
    item1 = session.query(Item).get(1)
    item2 = session.query(Item).get(2)
    item1.foo = 'bar'
    item2.bar = 'foo'
    session.commit()
except:
    session.rollback()
    raise
The Session.begin() method also returns a transactional token which is compatible with the Python 2.6 with statement:
Session = sessionmaker(bind=engine, autocommit=True)
session = Session()
with session.begin():
    item1 = session.query(Item).get(1)
    item2 = session.query(Item).get(2)
    item1.foo = 'bar'
    item2.bar = 'foo'
A subtransaction indicates usage of the Session.begin() method in conjunction with the subtransactions=True flag. This produces a non-transactional, delimiting construct that allows nesting of calls to begin() and commit(). Its purpose is to allow the construction of code that can function within a transaction both independently of any external code that starts a transaction, as well as within a block that has already demarcated a transaction.
subtransactions=True is generally only useful in conjunction with autocommit, and is equivalent to the pattern described at Nesting of Transaction Blocks, where any number of functions can call Connection.begin() and Transaction.commit() as though they are the initiator of the transaction, but in fact may be participating in an already ongoing transaction:
# method_a starts a transaction and calls method_b
def method_a(session):
    session.begin(subtransactions=True)
    try:
        method_b(session)
        session.commit()  # transaction is committed here
    except:
        session.rollback()  # rolls back the transaction
        raise

# method_b also starts a transaction, but when
# called from method_a participates in the ongoing
# transaction.
def method_b(session):
    session.begin(subtransactions=True)
    try:
        session.add(SomeObject('bat', 'lala'))
        session.commit()  # transaction is not committed yet
    except:
        session.rollback()  # rolls back the transaction, in this case
                            # the one that was initiated in method_a().
        raise

# create a Session and call method_a
session = Session(autocommit=True)
method_a(session)
session.close()
Subtransactions are used by the Session.flush() process to ensure that the flush operation takes place within a transaction, regardless of autocommit. When autocommit is disabled, it is still useful in that it forces the Session into a “pending rollback” state, as a failed flush cannot be resumed in mid-operation, where the end user still maintains the “scope” of the transaction overall.
For backends which support two-phase operation (currently MySQL and PostgreSQL), the session can be instructed to use two-phase commit semantics. This will coordinate the committing of transactions across databases so that the transaction is either committed or rolled back in all databases. You can also prepare() the session for interacting with transactions not managed by SQLAlchemy. To use two phase transactions set the flag twophase=True on the session:
engine1 = create_engine('postgresql://db1')
engine2 = create_engine('postgresql://db2')
Session = sessionmaker(twophase=True)
# bind User operations to engine 1, Account operations to engine 2
Session.configure(binds={User:engine1, Account:engine2})
session = Session()
# .... work with accounts and users
# commit. session will issue a flush to all DBs, and a prepare step to all DBs,
# before committing both transactions
session.commit()
This feature allows the value of a database column to be set to a SQL expression instead of a literal value. It’s especially useful for atomic updates, calling stored procedures, etc. All you do is assign an expression to an attribute:
class SomeClass(object):
    pass
mapper(SomeClass, some_table)
someobject = session.query(SomeClass).get(5)
# set 'value' attribute to a SQL expression adding one
someobject.value = some_table.c.value + 1
# issues "UPDATE some_table SET value=value+1"
session.commit()
This technique works both for INSERT and UPDATE statements. After the flush/commit operation, the value attribute on someobject above is expired, so that when next accessed the newly generated value will be loaded from the database.
SQL expressions and strings can be executed via the Session within its transactional context. This is most easily accomplished using the execute() method, which returns a ResultProxy in the same manner as an Engine or Connection:
Session = sessionmaker(bind=engine)
session = Session()
# execute a string statement
result = session.execute("select * from table where id=:id", {'id':7})
# execute a SQL expression construct
result = session.execute(select([mytable]).where(mytable.c.id==7))
The current Connection held by the Session is accessible using the connection() method:
connection = session.connection()
The examples above deal with a Session that’s bound to a single Engine or Connection. To execute statements using a Session which is bound either to multiple engines, or none at all (i.e. relies upon bound metadata), both execute() and connection() accept a mapper keyword argument, which is passed a mapped class or Mapper instance, which is used to locate the proper context for the desired engine:
Session = sessionmaker()
session = Session()
# need to specify mapper or class when executing
result = session.execute("select * from table where id=:id", {'id':7}, mapper=MyMappedClass)
result = session.execute(select([mytable], mytable.c.id==7), mapper=MyMappedClass)
connection = session.connection(MyMappedClass)
If a Connection is being used which is already in a transactional state (i.e. has a Transaction established), a Session can be made to participate within that transaction by just binding the Session to that Connection. The usual rationale for this is a test suite that allows ORM code to work freely with a Session, including the ability to call Session.commit(), where afterwards the entire database interaction is rolled back:
from sqlalchemy.orm import sessionmaker
from sqlalchemy import create_engine
from unittest import TestCase
# global application scope. create Session class, engine
Session = sessionmaker()
engine = create_engine('postgresql://...')
class SomeTest(TestCase):
    def setUp(self):
        # connect to the database
        self.connection = engine.connect()

        # begin a non-ORM transaction
        self.trans = self.connection.begin()

        # bind an individual Session to the connection
        self.session = Session(bind=self.connection)

    def test_something(self):
        # use the session in tests.
        self.session.add(Foo())
        self.session.commit()

    def tearDown(self):
        # rollback - everything that happened with the
        # Session above (including calls to commit())
        # is rolled back.
        self.trans.rollback()
        self.session.close()
Above, we issue Session.commit() as well as Transaction.rollback(). This is an example of where we take advantage of the Connection object’s ability to maintain subtransactions, or nested begin/commit-or-rollback pairs where only the outermost begin/commit pair actually commits the transaction, or if the outermost block rolls back, everything is rolled back.
Generate a custom-configured Session class.
The returned object is a subclass of Session, which, when instantiated with no arguments, uses the keyword arguments configured here as its constructor arguments.
It is intended that the sessionmaker() function be called within the global scope of an application, and the returned class be made available to the rest of the application as the single class used to instantiate sessions.
e.g.:
# global scope
Session = sessionmaker(autoflush=False)
# later, in a local scope, create and use a session:
sess = Session()
Any keyword arguments sent to the constructor itself will override the “configured” keywords:
Session = sessionmaker()
# bind an individual session to a connection
sess = Session(bind=connection)
The class also includes a special classmethod configure(), which allows additional configurational options to take place after the custom Session class has been generated. This is useful particularly for defining the specific Engine (or engines) to which new instances of Session should be bound:
Session = sessionmaker()
Session.configure(bind=create_engine('sqlite:///foo.db'))
sess = Session()
Manages persistence operations for ORM-mapped objects.
The Session’s usage paradigm is described at Using the Session.
Construct a new Session.
Arguments to Session are described using the sessionmaker() function, which is the typical point of entry.
Place an object in the Session.
Its state will be persisted to the database on the next flush operation.
Repeated calls to add() will be ignored. The opposite of add() is expunge().
Add the given collection of instances to this Session.
Begin a transaction on this Session.
If this Session is already within a transaction, either a plain transaction or nested transaction, an error is raised, unless subtransactions=True or nested=True is specified.
The subtransactions=True flag indicates that this begin() can create a subtransaction if a transaction is already in progress. For documentation on subtransactions, please see Using Subtransactions with Autocommit.
The nested flag begins a SAVEPOINT transaction and is equivalent to calling begin_nested(). For documentation on SAVEPOINT transactions, please see Using SAVEPOINT.
Begin a nested transaction on this Session.
The target database(s) must support SQL SAVEPOINTs or a SQLAlchemy-supported vendor implementation of the idea.
For documentation on SAVEPOINT transactions, please see Using SAVEPOINT.
Bind operations for a mapper to a Connectable.
All subsequent operations involving this mapper will use the given bind.
Bind operations on a Table to a Connectable.
All subsequent operations involving this Table will use the given bind.
Close this Session.
This clears all items and ends any transaction in progress.
If this session were created with autocommit=False, a new transaction is immediately begun. Note that this new transaction does not use any connection resources until they are first needed.
Close all sessions in memory.
Flush pending changes and commit the current transaction.
If no transaction is in progress, this method raises an InvalidRequestError.
By default, the Session also expires all database loaded state on all ORM-managed attributes after transaction commit. This is so that subsequent operations load the most recent data from the database. This behavior can be disabled using the expire_on_commit=False option to sessionmaker() or the Session constructor.
If a subtransaction is in effect (which occurs when begin() is called multiple times), the subtransaction will be closed, and the next call to commit() will operate on the enclosing transaction.
For a session configured with autocommit=False, a new transaction will be begun immediately after the commit, but note that the newly begun transaction does not use any connection resources until the first SQL is actually emitted.
Return a Connection object corresponding to this Session object’s transactional state.
If this Session is configured with autocommit=False, either the Connection corresponding to the current transaction is returned, or if no transaction is in progress, a new one is begun and the Connection returned (note that no transactional state is established with the DBAPI until the first SQL statement is emitted).
Alternatively, if this Session is configured with autocommit=True, an ad-hoc Connection is returned using Engine.contextual_connect() on the underlying Engine.
Ambiguity in multi-bind or unbound Session objects can be resolved through any of the optional keyword arguments. This ultimately makes usage of the get_bind() method for resolution.
Mark an instance as deleted.
The database delete operation occurs upon flush().
The set of all instances marked as ‘deleted’ within this Session.
The set of all persistent instances considered dirty.
Instances are considered dirty when they were modified but not deleted.
Note that this ‘dirty’ calculation is ‘optimistic’; most attribute-setting or collection modification operations will mark an instance as ‘dirty’ and place it in this set, even if there is no net change to the attribute’s value. At flush time, the value of each attribute is compared to its previously saved value, and if there’s no net change, no SQL operation will occur (this is a more expensive operation so it’s only done at flush time).
To check if an instance has actionable net changes to its attributes, use the is_modified() method.
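A sketch of that distinction, assuming a loaded User object u1:
u1.name = u1.name       # a set event fires, so u1 enters session.dirty
assert u1 in session.dirty

# is_modified() compares current values against the previously committed
# ones, so this no-op assignment reports no actionable net change
assert not session.is_modified(u1, include_collections=False)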
Execute a clause within the current transaction.
Returns a ResultProxy representing results of the statement execution, in the same manner as that of an Engine or Connection.
execute() accepts any executable clause construct, such as select(), insert(), update(), delete(), and text(), and additionally accepts plain strings that represent SQL statements. If a plain string is passed, it is first converted to a text() construct, which here means that bind parameters should be specified using the format :param.
The statement is executed within the current transactional context of this Session, using the same behavior as that of the Session.connection() method to determine the active Connection. The close_with_result flag is set to True so that an autocommit=True Session with no active transaction will produce a result that auto-closes the underlying Connection.
Expire the attributes on an instance.
Marks the attributes of an instance as out of date. When an expired attribute is next accessed, a query will be issued to the Session object’s current transactional context in order to load all expired attributes for the given instance. Note that a highly isolated transaction will return the same values as were previously read in that same transaction, regardless of changes in database state outside of that transaction.
To expire all objects in the Session simultaneously, use Session.expire_all().
The Session object’s default behavior is to expire all state whenever the Session.rollback() or Session.commit() methods are called, so that new state can be loaded for the new transaction. For this reason, calling Session.expire() only makes sense for the specific case that a non-ORM SQL statement was emitted in the current transaction.
Expires all persistent instances within this Session.
When any attribute on a persistent instance is next accessed, a query will be issued using the Session object’s current transactional context in order to load all expired attributes for the given instance. Note that a highly isolated transaction will return the same values as were previously read in that same transaction, regardless of changes in database state outside of that transaction.
To expire individual objects and individual attributes on those objects, use Session.expire().
The Session object’s default behavior is to expire all state whenever the Session.rollback() or Session.commit() methods are called, so that new state can be loaded for the new transaction. For this reason, calling Session.expire_all() should not be needed when autocommit is False, assuming the transaction is isolated.
Remove the instance from this Session.
This will free all internal references to the instance. Cascading will be applied according to the expunge cascade rule.
Remove all object instances from this Session.
This is equivalent to calling expunge(obj) on all objects in this Session.
Flush all the object changes to the database.
Writes out all pending object creations, deletions and modifications to the database as INSERTs, DELETEs, UPDATEs, etc. Operations are automatically ordered by the Session’s unit of work dependency solver.
Database operations will be issued in the current transactional context and do not affect the state of the transaction, unless an error occurs, in which case the entire transaction is rolled back. You may flush() as often as you like within a transaction to move changes from Python to the database’s transaction buffer.
For autocommit Sessions with no active manual transaction, flush() will create a transaction on the fly that surrounds the entire set of operations in the flush.
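A brief sketch, assuming a mapped User class and a session configured as shown earlier; flush() moves pending changes into the current transaction without committing it:
# queue up pending objects
session.add(User(name='ed'))
session.add(User(name='wendy'))

# emit the INSERT statements now, within the current transaction;
# nothing is made permanent until session.commit() is called
session.flush()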
Return a “bind” to which this Session is bound.
The “bind” is usually an instance of Engine, except in the case where the Session has been explicitly bound directly to a Connection.
For a multiply-bound or unbound Session, the mapper or clause arguments are used to determine the appropriate bind to return.
True if this Session has an active transaction.
Return True if instance has modified attributes.
This method retrieves a history instance for each instrumented attribute on the instance and performs a comparison of the current value to its previously committed value.
include_collections indicates if multivalued collections should be included in the operation. Setting this to False is a way to detect only local-column based properties (i.e. scalar columns or many-to-one foreign keys) that would result in an UPDATE for this instance upon flush.
The passive flag indicates if unloaded attributes and collections should not be loaded in the course of performing this test.
A few caveats to this method apply:
Instances present in the ‘dirty’ collection may result in a value of False when tested with this method. This is because while the object may have received attribute set events, there may be no net change to its state.
Scalar attributes may not have recorded the “previously” set value when a new value was applied, if the attribute was not loaded, or was expired, at the time the new value was received - in these cases, the attribute is assumed to have a change, even if there is ultimately no net change against its database value. SQLAlchemy in most cases does not need the “old” value when a set event occurs, so it skips the expense of a SQL call if the old value isn’t present, on the assumption that an UPDATE of the scalar value is usually needed, and that in the few cases where it isn’t, the unneeded UPDATE is on average less expensive than a defensive SELECT.
The “old” value is fetched unconditionally only if the attribute container has the “active_history” flag set to True. This flag is set typically for primary key attributes and scalar references that are not a simple many-to-one.
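A small illustration, again assuming a mapped User class; both the default check and the column-only variant are shown:
user = session.query(User).get(1)
user.name = 'edward'

# True if there is a net change pending on any instrumented attribute
session.is_modified(user)

# restrict the test to local column-based attributes, ignoring collections
session.is_modified(user, include_collections=False)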
Copy the state of an instance onto the persistent instance with the same identifier.
If there is no persistent instance currently associated with the session, it will be loaded. Return the persistent instance. If the given instance is unsaved, save a copy of it and return it as a newly persistent instance. The given instance does not become associated with the session.
This operation cascades to associated instances if the association is mapped with cascade="merge".
See Merging for a detailed discussion of merging.
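A sketch of the call, assuming detached_user was loaded in a different Session earlier; note that the return value, not the argument, is the object associated with this session:
# copy the detached object's state onto the persistent instance with
# the same primary key, loading it from the database if necessary
merged = session.merge(detached_user)

# the merged result belongs to this session; detached_user does not
assert merged in session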
The set of all instances marked as ‘new’ within this Session.
Return the Session to which an object belongs.
Prepare the current transaction in progress for two phase commit.
If no transaction is in progress, this method raises an InvalidRequestError.
Only root transactions of two phase sessions can be prepared. If the current transaction is not such, an InvalidRequestError is raised.
Remove unreferenced instances cached in the identity map.
Note that this method is only meaningful if “weak_identity_map” is set to False. The default weak identity map is self-pruning.
Removes any object in this Session’s identity map that is not referenced in user code, modified, new or scheduled for deletion. Returns the number of objects pruned.
Return a new Query object corresponding to this Session.
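For example, assuming a mapped User class, a Query is typically constructed and executed in one chain:
# build a Query bound to this Session and execute it
ed_users = session.query(User).filter(User.name == 'ed').all()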
Expire and refresh the attributes on the given instance.
A query will be issued to the database and all attributes will be refreshed with their current database value.
Lazy-loaded relational attributes will remain lazily loaded, so that the instance-wide refresh operation will be followed immediately by the lazy load of that attribute.
Eagerly-loaded relational attributes will eagerly load within the single refresh operation.
Note that a highly isolated transaction will return the same values as were previously read in that same transaction, regardless of changes in database state outside of that transaction - usage of refresh() usually only makes sense if non-ORM SQL statements were emitted in the ongoing transaction, or if autocommit mode is turned on.
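As a sketch, assuming a user object already present in the session; the whole instance or a subset of attribute names can be refreshed:
# re-read every attribute of the instance from the database
session.refresh(user)

# refresh only the named attributes
session.refresh(user, ['name', 'fullname'])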
Rollback the current transaction in progress.
If no transaction is in progress, this method is a pass-through.
This method rolls back the current transaction or nested transaction regardless of subtransactions being in effect. All subtransactions up to the first real transaction are closed. Subtransactions occur when begin() is called multiple times.
Like execute() but return a scalar result.
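A brief sketch, assuming a plain users table exists; scalar() returns the first column of the first result row, or None when there are no rows:
# count rows; returns a single value rather than a ResultProxy
user_count = session.scalar("SELECT count(*) FROM users")

# bind parameters are passed the same way as with execute()
name = session.scalar("SELECT name FROM users WHERE id = :id", {"id": 5})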
A common need in applications, particularly those built around web frameworks, is the ability to “share” a Session object among disparate parts of an application, without needing to pass the object explicitly to all method and function calls. What you’re really looking for is some kind of “global” session object, or at least “global” to all the parts of an application which are tasked with servicing the current request. For this pattern, SQLAlchemy provides the ability to enhance the Session class generated by sessionmaker() to provide auto-contextualizing support. This means that whenever you create a Session instance with its constructor, you get an existing Session object which is bound to some “context”. By default, this context is the current thread. This feature is what previously was accomplished using the sessioncontext SQLAlchemy extension.
The scoped_session() function wraps around the sessionmaker() function, and produces an object which behaves the same as the Session subclass returned by sessionmaker():
from sqlalchemy.orm import scoped_session, sessionmaker
Session = scoped_session(sessionmaker())
However, when you instantiate this Session “class”, in reality the object is pulled from a threadlocal variable, or if it doesn’t exist yet, it’s created using the underlying class generated by sessionmaker():
>>> # call Session() the first time. the new Session instance is created.
>>> session = Session()
>>> # later, in the same application thread, someone else calls Session()
>>> session2 = Session()
>>> # the two Session objects are *the same* object
>>> session is session2
True
Since the Session() constructor now returns the same Session object every time within the current thread, the object returned by scoped_session() also implements most of the Session methods and properties at the “class” level, such that you don’t even need to instantiate Session():
# create some objects
u1 = User()
u2 = User()
# save to the contextual session, without instantiating
Session.add(u1)
Session.add(u2)
# view the "new" attribute
assert u1 in Session.new
# commit changes
Session.commit()
The contextual session may be disposed of by calling Session.remove():
# remove current contextual session
Session.remove()
After remove() is called, the next operation with the contextual session will start a new Session for the current thread.
A (really, really) common question is: when does the contextual session get created, and when does it get disposed? We’ll consider a typical lifespan as used in a web application:
Web Server          Web Framework        User-defined Controller Call
--------------      --------------       ------------------------------
web request    ->
                    call controller ->   # call Session(). this establishes a new,
                                         # contextual Session.
                                         session = Session()
                                         # load some objects, save some changes
                                         objects = session.query(MyClass).all()
                                         # some other code calls Session, it's the
                                         # same contextual session as "session"
                                         session2 = Session()
                                         session2.add(foo)
                                         session2.commit()
                                         # generate content to be returned
                                         return generate_content()
                    Session.remove() <-
web response   <-
The above example illustrates an explicit call to ScopedSession.remove(). This has the effect that each web request starts fresh with a brand new session, and is the most definitive approach to closing out a request.
It’s not strictly necessary to remove the session at the end of the request - other options include calling Session.close(), Session.rollback(), or Session.commit() at the end so that the existing session returns its connections to the pool and removes any existing transactional context. Doing nothing is an option too, if individual controller methods take responsibility for ensuring that no transactions remain open after a request ends.
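As a minimal sketch of the explicit-removal approach, the handle_request() and run_controller() names below are hypothetical stand-ins for whatever per-request hook the web framework provides:
def handle_request(environ):
    try:
        # controller code uses the contextual Session freely
        return run_controller(environ)
    finally:
        # always discard the thread-local Session at the end of the
        # request, releasing its connection back to the pool
        Session.remove()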
Provides thread-local or scoped management of Session objects.
This is a front-end function to ScopedSession:
Session = scoped_session(sessionmaker(autoflush=True))
To instantiate a Session object which is part of the scoped context, instantiate normally:
session = Session()
Most session methods are available as classmethods from the scoped session:
Session.commit()
Session.close()
See also: Contextual/Thread-local Sessions.
Returns: a ScopedSession instance.
Provides thread-local management of Sessions.
Typical invocation is via the scoped_session() function:
Session = scoped_session(sessionmaker())
The internal registry is accessible, and by default is an instance of ThreadLocalRegistry.
See also: Contextual/Thread-local Sessions.
Reconfigure the sessionmaker used by this ScopedSession.
Return a mapper() function which associates this ScopedSession with the Mapper.
Deprecated since version 0.5: ScopedSession.mapper() is deprecated. Please see http://www.sqlalchemy.org/trac/wiki/UsageRecipes/SessionAwareMapper for information on how to replicate its behavior.
Return a class property which produces a Query object against the class when called.
e.g.:
Session = scoped_session(sessionmaker())

class MyClass(object):
    query = Session.query_property()

# after mappers are defined
result = MyClass.query.filter(MyClass.name == 'foo').all()
Produces instances of the session’s configured query class by default. To override and use a custom implementation, provide a query_cls callable. The callable will be invoked with the class’s mapper as a positional argument and a session keyword argument.
There is no limit to the number of query properties placed on a class.
Dispose of the current contextual session.
A Registry that can store one or multiple instances of a single class on the basis of a “scope” function.
The object implements __call__ as the “getter”, so by calling myregistry() the contained object is returned for the current scope.
Construct a new ScopedRegistry.
Clear the current scope, if any.
Return True if an object is present in the current scope.
Set the value for the current scope.
A ScopedRegistry that uses a threading.local() variable for storage.
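A rough sketch of the registry on its own; the Resource class and the fixed scope key are purely illustrative, and the import location shown is an assumption based on where the class lives in this release:
from sqlalchemy.util import ScopedRegistry

class Resource(object):
    pass

def current_scope():
    # return a hashable key identifying the current scope; a fixed
    # key is used here purely for illustration
    return "request-1"

registry = ScopedRegistry(Resource, current_scope)

r1 = registry()      # created on first access for this scope
r2 = registry()      # same scope, same object
assert r1 is r2
registry.clear()     # discard the object held for the current scope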
Vertical partitioning places different kinds of objects, or different tables, across multiple databases:
engine1 = create_engine('postgresql://db1')
engine2 = create_engine('postgresql://db2')
Session = sessionmaker(twophase=True)
# bind User operations to engine 1, Account operations to engine 2
Session.configure(binds={User:engine1, Account:engine2})
session = Session()
Horizontal partitioning partitions the rows of a single table (or a set of tables) across multiple databases.
See the “sharding” example: Horizontal Sharding.
Make the given instance ‘transient’.
This will remove its association with any session and additionally will remove its “identity key”, such that it’s as though the object were newly constructed, except retaining its values. It also resets the “deleted” flag on the state if this object had been explicitly deleted by its session.
Attributes which were “expired” or deferred at the instance level are reverted to undefined, and will not trigger any loads.
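A short sketch, assuming a mapped User class; after the call, the object behaves as though freshly constructed while retaining its attribute values:
from sqlalchemy.orm import make_transient

user = session.query(User).get(5)

# drop the session association and identity key; adding the object
# to a session again would result in an INSERT of a new row
make_transient(user)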
Return the Session to which instance belongs.
If the instance is not a mapped instance, an error is raised.
These functions are provided by the SQLAlchemy attribute instrumentation API to provide a detailed interface for dealing with instances, attribute values, and history. Some of them are useful when constructing event listener functions, such as those described in ORM Event Interfaces.
Delete the value of an attribute, firing history events.
This function may be used regardless of instrumentation applied directly to the class, i.e. no descriptors are required. Custom attribute management schemes will need to make use of this method to establish attribute state as understood by SQLAlchemy.
Get the value of an attribute, firing any callables required.
This function may be used regardless of instrumentation applied directly to the class, i.e. no descriptors are required. Custom attribute management schemes will need to make use of this method to access attribute state as understood by SQLAlchemy.
Return a History record for the given object and attribute key.
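A small sketch, assuming a mapped user object whose name attribute is loaded; the attribute key is passed as a string:
from sqlalchemy.orm.attributes import get_history

user.name = 'edward'

# a History 3-tuple of (added, unchanged, deleted) values
hist = get_history(user, 'name')
hist.added        # ['edward']
hist.deleted      # the previously committed value, if it was loaded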
Initialize a collection attribute and return the collection adapter.
This function is used to provide direct access to collection internals for a previously unloaded attribute. e.g.:
collection_adapter = init_collection(someobject, 'elements')
for elem in values:
    collection_adapter.append_without_event(elem)
obj is an instrumented object instance. An InstanceState is accepted directly for backwards compatibility but this usage is deprecated.
Return the InstanceState for a given object.
Return True if the given attribute on the given instance is instrumented by the attributes package.
This function may be used regardless of instrumentation applied directly to the class, i.e. no descriptors are required.
Return the ClassManager for a given class.
Set the value of an attribute, firing history events.
This function may be used regardless of instrumentation applied directly to the class, i.e. no descriptors are required. Custom attribute management schemes will need to make use of this method to establish attribute state as understood by SQLAlchemy.
Set the value of an attribute with no history events.
Cancels any previous history present. The value should be a scalar value for scalar-holding attributes, or an iterable for any collection-holding attribute.
This is the same underlying method used when a lazy loader fires off and loads additional data from the database. In particular, this method can be used by application code which has loaded additional attributes or collections through separate queries, which can then be attached to an instance as though it were part of its original loaded state.
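As a sketch of the pattern described above, assuming mapped User and Address classes related by a user.addresses collection:
from sqlalchemy.orm.attributes import set_committed_value

# load the related rows through a separate query
addresses = session.query(Address).filter_by(user_id=user.id).all()

# attach them to the parent as if they were part of its original
# loaded state; no history is recorded and no flush will result
set_committed_value(user, 'addresses', addresses)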
A 3-tuple of added, unchanged and deleted values, representing the changes which have occurred on an instrumented attribute.
Each tuple member is an iterable sequence.
Return the collection of items added to the attribute (the first tuple element).
Return the collection of items that have been removed from the attribute (the third tuple element).
Return a collection of unchanged + deleted.
Return a collection of added + unchanged.
Return a collection of added + unchanged + deleted.
Return the collection of items that have not changed on the attribute (the second tuple element).
Symbol indicating that loader callables should not be fired off, and a non-initialized attribute should remain that way.
Symbol indicating that loader callables should not be fired off. Non-initialized attributes should be initialized to an empty value.
Symbol indicating that loader callables should be executed.