This section describes a variety of configuration patterns usable with mappers. It assumes you’ve worked through Object Relational Tutorial and know how to construct and use rudimentary mappers and relationships.
Note that all patterns here apply both to the use of explicit mapper() and Table objects and to the sqlalchemy.ext.declarative extension. Any example in this section which takes a form such as:
mapper(User, users_table, primary_key=[users_table.c.id])
would translate into declarative as:
class User(Base):
    __table__ = users_table
    __mapper_args__ = {
        'primary_key': [users_table.c.id]
    }
Or if using __tablename__, Column objects are declared inline with the class definition. These are usable as is within __mapper_args__:
class User(Base):
    __tablename__ = 'users'
    id = Column(Integer)
    __mapper_args__ = {
        'primary_key': [id]
    }
The default behavior of mapper() is to assemble all the columns in the mapped Table into mapped object attributes. This behavior can be modified in several ways, as well as enhanced by SQL expressions.
To reference a subset of columns referenced by a table as mapped attributes, use the include_properties or exclude_properties arguments. For example:
mapper(User, users_table, include_properties=['user_id', 'user_name'])
...will map the User class to the users_table table, only including the “user_id” and “user_name” columns - the rest are not referenced. Similarly:
mapper(Address, addresses_table,
       exclude_properties=['street', 'city', 'state', 'zip'])
...will map the Address class to the addresses_table table, including all columns present except “street”, “city”, “state”, and “zip”.
When this mapping is used, the columns that are not included will not be referenced in any SELECT statements emitted by Query, nor will there be any mapped attribute on the mapped class which represents the column; assigning an attribute of that name will have no effect beyond that of a normal Python attribute assignment.
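For instance, a brief sketch assuming the Address mapping above and an active Session:

a = session.query(Address).first()

# 'street' was excluded from the mapping above, so this is a plain
# Python attribute assignment; no UPDATE is ever emitted for it
a.street = '123 anywhere street'
session.commit()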
In some cases, multiple columns may have the same name, such as when mapping to a join of two or more tables that share some column name. To exclude or include individual columns, Column objects may also be placed within the “include_properties” and “exclude_properties” collections (new feature as of 0.6.4):
mapper(UserAddress, users_table.join(addresses_table),
       exclude_properties=[addresses_table.c.id],
       primary_key=[users_table.c.id]
)
It should be noted that insert and update defaults configured on individual Column objects, such as those configured by the “default”, “onupdate”, “server_default” and “server_onupdate” arguments, will continue to function normally even if those Column objects are not mapped. This functionality is part of the SQL expression and execution system and occurs below the level of the ORM.
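As an illustrative sketch (the created_at column and its default are hypothetical additions to the users table, not part of the earlier examples):

from datetime import datetime

users_table = Table('users', metadata,
    Column('user_id', Integer, primary_key=True),
    Column('user_name', String(50)),
    Column('created_at', DateTime, default=datetime.utcnow)
)

# 'created_at' gets no mapped attribute, but its Python-side default
# still fires at INSERT time, below the level of the ORM
mapper(User, users_table, exclude_properties=['created_at'])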
To change the name of the attribute mapped to a particular column, place the Column object in the properties dictionary with the desired key:
mapper(User, users_table, properties={
    'id': users_table.c.user_id,
    'name': users_table.c.user_name,
})
When using declarative, the above configuration is more succinct - place the full column name in the Column definition, using the desired attribute name in the class definition:
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class User(Base):
    __tablename__ = 'user'
    id = Column('user_id', Integer, primary_key=True)
    name = Column('user_name', String(50))
To change the names of all attributes using a prefix, use the column_prefix option. This is useful for schemes in which the class defines its own accessors that wrap the prefixed attributes:
mapper(User, users_table, column_prefix='_')
The above will place attribute names such as _user_id, _user_name, _password etc. on the mapped User class.
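Following the declarative translation shown at the start of this section, the same option can be passed via __mapper_args__:

class User(Base):
    __table__ = users_table
    __mapper_args__ = {
        'column_prefix': '_'
    }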
To place multiple columns which are known to be “synonymous” based on foreign key relationship or join condition into the same mapped attribute, put them together using a list, as below where we map to a join():
from sqlalchemy.sql import join
# join users and addresses
usersaddresses = join(users_table, addresses_table,
                      users_table.c.user_id == addresses_table.c.user_id)

# user_id columns are equated under the 'id' attribute
mapper(User, usersaddresses, properties={
    'id': [users_table.c.user_id, addresses_table.c.user_id],
})
For further examples on this particular use case, see Mapping a Class against Multiple Tables.
The establishment of a Column on a mapper() can be further customized using the column_property() function, as specified to the properties dictionary. This function is usually invoked implicitly for each mapped Column. Explicit usage looks like:
from sqlalchemy.orm import mapper, column_property

mapper(User, users, properties={
    'name': column_property(users.c.name, active_history=True)
})
or with declarative:
class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = column_property(Column(String(50)), active_history=True)
Further examples of column_property() are at SQL Expressions as Mapped Attributes.
Provide a column-level property for use with a Mapper.
Column-based properties can normally be applied to the mapper’s properties dictionary using the Column element directly. Use this function when the given column is not directly present within the mapper’s selectable; examples include SQL expressions, functions, and scalar SELECT queries.
Columns that aren’t present in the mapper’s selectable won’t be persisted by the mapper and are effectively “read-only” attributes.
This feature allows particular columns of a table to not be loaded by default, instead being loaded later on when first referenced. It is essentially “column-level lazy loading”. This feature is useful when one wants to avoid loading a large text or binary field into memory when it’s not needed. Individual columns can be lazy loaded by themselves or placed into groups that lazy-load together:
from sqlalchemy.orm import deferred

book_excerpts = Table('books', metadata,
    Column('book_id', Integer, primary_key=True),
    Column('title', String(200), nullable=False),
    Column('summary', String(2000)),
    Column('excerpt', Text),
    Column('photo', Binary)
)

class Book(object):
    pass

# define a mapper that will load each of 'excerpt' and 'photo' in
# separate, individual-row SELECT statements when each attribute
# is first referenced on the individual object instance
mapper(Book, book_excerpts, properties={
    'excerpt': deferred(book_excerpts.c.excerpt),
    'photo': deferred(book_excerpts.c.photo)
})
With declarative, Column objects can be declared directly inside of deferred():
class Book(Base):
    __tablename__ = 'books'
    book_id = Column(Integer, primary_key=True)
    title = Column(String(200), nullable=False)
    summary = Column(String(2000))
    excerpt = deferred(Column(Text))
    photo = deferred(Column(Binary))
Deferred columns can be associated with a “group” name, so that they load together when any of them are first accessed:
book_excerpts = Table('books', metadata,
    Column('book_id', Integer, primary_key=True),
    Column('title', String(200), nullable=False),
    Column('summary', String(2000)),
    Column('excerpt', Text),
    Column('photo1', Binary),
    Column('photo2', Binary),
    Column('photo3', Binary)
)

class Book(object):
    pass

# define a mapper with a 'photos' deferred group. when one photo is referenced,
# all three photos will be loaded in one SELECT statement. The 'excerpt' will
# be loaded separately when it is first referenced.
mapper(Book, book_excerpts, properties={
    'excerpt': deferred(book_excerpts.c.excerpt),
    'photo1': deferred(book_excerpts.c.photo1, group='photos'),
    'photo2': deferred(book_excerpts.c.photo2, group='photos'),
    'photo3': deferred(book_excerpts.c.photo3, group='photos')
})
You can defer or undefer columns at the Query level using the defer() and undefer() query options:
query = session.query(Book)
query.options(defer('summary')).all()
query.options(undefer('excerpt')).all()
And an entire “deferred group”, i.e. which uses the group keyword argument to deferred(), can be undeferred using undefer_group(), sending in the group name:
query = session.query(Book)
query.options(undefer_group('photos')).all()
Return a DeferredColumnProperty, which indicates that this object attribute should only be loaded from its corresponding table column when first accessed.
Used with the properties dictionary sent to mapper().
Return a MapperOption that will convert the column property of the given name into a deferred load.
Used with options().
Any SQL expression that relates to the primary mapped selectable can be mapped as a read-only attribute which will be bundled into the SELECT emitted for the target mapper when rows are loaded. This effect is achieved using the column_property() function. Any scalar-returning ClauseElement may be used. Unlike older versions of SQLAlchemy, there is no label() requirement:
from sqlalchemy.orm import column_property

mapper(User, users_table, properties={
    'fullname': column_property(
        users_table.c.firstname + " " + users_table.c.lastname
    )
})
Correlated subqueries may be used as well:
from sqlalchemy.orm import column_property
from sqlalchemy import select, func

mapper(User, users_table, properties={
    'address_count': column_property(
        select([func.count(addresses_table.c.address_id)]).\
            where(addresses_table.c.user_id == users_table.c.user_id)
    )
})
The declarative form of the above is described in Defining SQL Expressions.
Note that column_property() is used to provide the effect of a SQL expression that is actively rendered into the SELECT generated for a particular mapped class. Alternatively, for the typical attribute that represents a composed value, it’s usually simpler to define it as a Python property which is evaluated as it is invoked on instances after they’ve been loaded:
class User(object):
    @property
    def fullname(self):
        return self.firstname + " " + self.lastname
To invoke a SQL statement from an instance that’s already been loaded, the session associated with the instance can be acquired using object_session() which will provide the appropriate transactional context from which to emit a statement:
from sqlalchemy.orm import object_session
from sqlalchemy import select, func

class User(object):
    @property
    def address_count(self):
        return object_session(self).\
            scalar(
                select([func.count(addresses_table.c.address_id)]).\
                    where(addresses_table.c.user_id == self.user_id)
            )
On the subject of object-level methods, be sure to see the derived_attributes example, which provides a simple method of reusing instance-level expressions simultaneously as SQL expressions. The derived_attributes example is slated to become a built-in feature of SQLAlchemy in a future release.
A quick way to add a “validation” routine to an attribute is to use the validates() decorator. An attribute validator can raise an exception, halting the process of mutating the attribute’s value, or can change the given value into something different. Validators, like all attribute extensions, are only called by normal userland code; they are not issued when the ORM is populating the object.
from sqlalchemy.orm import validates

addresses_table = Table('addresses', metadata,
    Column('id', Integer, primary_key=True),
    Column('email', String)
)

class EmailAddress(object):
    @validates('email')
    def validate_email(self, key, address):
        assert '@' in address
        return address

mapper(EmailAddress, addresses_table)
Validators also receive collection events, when items are added to a collection:
class User(object):
    @validates('addresses')
    def validate_address(self, key, address):
        assert '@' in address.email
        return address
Decorate a method as a ‘validator’ for one or more named properties.
Designates a method as a validator, a method which receives the name of the attribute as well as a value to be assigned or, in the case of a collection, a value to be added to the collection. The function can then raise validation exceptions to halt the process from continuing, or can modify or replace the value before proceeding. The function should otherwise return the given value.
Note that a validator for a collection cannot issue a load of that collection within the validation routine - this usage raises an assertion to avoid recursion overflows. This is a reentrant condition which is not supported.
A more comprehensive way to produce modified behavior for an attribute is to use descriptors. These are commonly used in Python using the property() function. The standard SQLAlchemy technique for descriptors is to create a plain descriptor, and to have it read/write from a mapped attribute with a different name. Below we illustrate this using Python 2.6-style properties:
class EmailAddress(object):
    @property
    def email(self):
        return self._email

    @email.setter
    def email(self, email):
        self._email = email

mapper(EmailAddress, addresses_table, properties={
    '_email': addresses_table.c.email
})
The approach above will work, but there’s more we can add. While our EmailAddress object will shuttle the value through the email descriptor and into the _email mapped attribute, the class level EmailAddress.email attribute does not have the usual expression semantics usable with Query. To provide these, we instead use the synonym() function as follows:
from sqlalchemy.orm import synonym

mapper(EmailAddress, addresses_table, properties={
    'email': synonym('_email', map_column=True)
})
The email attribute is now usable in the same way as any other mapped attribute, including filter expressions, get/set operations, etc.:
address = session.query(EmailAddress).filter(EmailAddress.email == 'some address').one()
address.email = 'some other address'
session.flush()
q = session.query(EmailAddress).filter_by(email='some other address')
If the mapped class does not provide a property, the synonym() construct will create a default getter/setter object automatically.
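For example, in a minimal sketch reusing addresses_table from above, no descriptor is defined at all and synonym() supplies the default proxying behavior:

class EmailAddress(object):
    pass

# no 'email_address' attribute exists on the class, so synonym()
# installs a default getter/setter proxying to the mapped 'email' column
mapper(EmailAddress, addresses_table, properties={
    'email_address': synonym('email')
})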
To use synonyms with declarative, see the section Defining Synonyms.
Note that the “synonym” feature is eventually to be replaced by the superior “hybrid attributes” approach, slated to become a built-in feature of SQLAlchemy in a future release. “hybrid” attributes are simply Python properties that evaluate at both the class level and at the instance level. For an example of their usage, see the derived_attributes example.
Set up name as a synonym to another mapped property.
Used with the properties dictionary sent to mapper().
Any existing attributes on the class which map the key name sent to the properties dictionary will be used by the synonym to provide instance-attribute behavior (that is, any Python property object, provided by the property builtin or providing a __get__(), __set__() and __del__() method). If no name exists for the key, the synonym() creates a default getter/setter object automatically and applies it to the class.
name refers to the name of the existing mapped property, which can be any other MapperProperty including column-based properties and relationships.
If map_column is True, an additional ColumnProperty is created on the mapper automatically, using the synonym’s name as the keyname of the property, and the keyname of this synonym() as the name of the column to map. For example, if a table has a column named status:
class MyClass(object):
    def _get_status(self):
        return self._status
    def _set_status(self, value):
        self._status = value
    status = property(_get_status, _set_status)

mapper(MyClass, sometable, properties={
    "status": synonym("_status", map_column=True)
})
The column named status will be mapped to the attribute named _status, and the status attribute on MyClass will be used to proxy access to the column-based attribute.
The expressions returned by comparison operations, such as User.name=='ed', can be customized by implementing an object that explicitly defines each comparison method needed. This is a relatively rare use case. For most needs, the approach in SQL Expressions as Mapped Attributes will often suffice, or alternatively a scheme like that of the derived_attributes example. Those approaches should be tried first before resorting to custom comparison objects.
Each of column_property(), composite(), relationship(), and comparable_property() accept an argument called comparator_factory. A subclass of PropComparator can be provided for this argument, which can then reimplement basic Python comparison methods such as __eq__(), __ne__(), __lt__(), and so on.
It’s best to subclass the PropComparator subclass provided by each type of property. For example, to allow a column-mapped attribute to do case-insensitive comparison:
from sqlalchemy.orm.properties import ColumnProperty
from sqlalchemy.sql import func

class MyComparator(ColumnProperty.Comparator):
    def __eq__(self, other):
        return func.lower(self.__clause_element__()) == func.lower(other)

mapper(EmailAddress, addresses_table, properties={
    'email': column_property(addresses_table.c.email,
                             comparator_factory=MyComparator)
})
Above, comparisons on the email column are wrapped in the SQL lower() function to produce case-insensitive matching:
>>> str(EmailAddress.email == 'SomeAddress@foo.com')
lower(addresses.email) = lower(:lower_1)
When building a PropComparator, the __clause_element__() method should be used in order to acquire the underlying mapped column. This will return a column that is appropriately wrapped in any kind of subquery or aliasing that has been applied in the context of the generated SQL statement.
Bases: sqlalchemy.sql.expression.ColumnOperators
Defines comparison operations for MapperProperty objects.
User-defined subclasses of PropComparator may be created. The built-in Python comparison and math operator methods, such as __eq__(), __lt__(), __add__(), can be overridden to provide new operator behavior. The custom PropComparator is passed to the mapper property via the comparator_factory argument. In each case, the appropriate subclass of PropComparator should be used:
from sqlalchemy.orm.properties import \
    ColumnProperty, \
    CompositeProperty, \
    RelationshipProperty

class MyColumnComparator(ColumnProperty.Comparator):
    pass

class MyCompositeComparator(CompositeProperty.Comparator):
    pass

class MyRelationshipComparator(RelationshipProperty.Comparator):
    pass
Provides a method of applying a PropComparator to any Python descriptor attribute.
Allows a regular Python @property (descriptor) to be used in Queries and SQL constructs like a managed attribute. comparable_property wraps a descriptor with a proxy that directs operator overrides such as == (__eq__) to the supplied comparator but proxies everything else through to the original descriptor:
from sqlalchemy.orm import mapper, comparable_property
from sqlalchemy.orm.interfaces import PropComparator
from sqlalchemy.sql import func

class MyClass(object):
    @property
    def myprop(self):
        return 'foo'

class MyComparator(PropComparator):
    def __eq__(self, other):
        return func.lower(other) == 'foo'

mapper(MyClass, mytable, properties={
    'myprop': comparable_property(MyComparator)})
Used with the properties dictionary sent to mapper().
Note that comparable_property() is usually not needed for basic needs. The recipe at derived_attributes offers a simpler pure-Python method of achieving a similar result using class-bound attributes with SQLAlchemy expression constructs.
Sets of columns can be associated with a single user-defined datatype. The ORM provides a single attribute which represents the group of columns using the class you provide.
A simple example represents pairs of columns as a “Point” object. Starting with a table that represents two points as x1/y1 and x2/y2:
from sqlalchemy import Table, Column

vertices = Table('vertices', metadata,
    Column('id', Integer, primary_key=True),
    Column('x1', Integer),
    Column('y1', Integer),
    Column('x2', Integer),
    Column('y2', Integer),
)
We create a new class, Point, that will represent each x/y as a pair:
class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __composite_values__(self):
        return self.x, self.y

    def __set_composite_values__(self, x, y):
        self.x = x
        self.y = y

    def __eq__(self, other):
        return other is not None and \
               other.x == self.x and \
               other.y == self.y

    def __ne__(self, other):
        return not self.__eq__(other)
The requirements for the custom datatype class are that it have a constructor which accepts positional arguments corresponding to its column format, and also provides a method __composite_values__() which returns the state of the object as a list or tuple, in order of its column-based attributes. It also should supply adequate __eq__() and __ne__() methods which test the equality of two instances.
The __set_composite_values__() method is optional. If it’s not provided, the names of the mapped columns are taken as the names of attributes on the object, and setattr() is used to set data.
The composite() function is then used in the mapping:
from sqlalchemy.orm import composite

class Vertex(object):
    def __init__(self, start, end):
        self.start = start
        self.end = end

mapper(Vertex, vertices, properties={
    'start': composite(Point, vertices.c.x1, vertices.c.y1),
    'end': composite(Point, vertices.c.x2, vertices.c.y2)
})
We can now create and persist Vertex instances, as well as query them, treating the start and end attributes as though they were regular scalar attributes:
session = Session()
v = Vertex(Point(3, 4), Point(5, 6))
session.add(v)
v2 = session.query(Vertex).filter(Vertex.start == Point(3, 4))
The “equals” comparison operation by default produces an AND of all corresponding columns equated to one another. This can be changed using the comparator_factory, described in Custom Comparators. Below we illustrate the “greater than” operator, implementing the same expression that the base “greater than” does:
from sqlalchemy.orm.properties import CompositeProperty
from sqlalchemy import sql

class PointComparator(CompositeProperty.Comparator):
    def __gt__(self, other):
        """redefine the 'greater than' operation"""
        return sql.and_(*[a > b for a, b in
                          zip(self.__clause_element__().clauses,
                              other.__composite_values__())])

mapper(Vertex, vertices, properties={
    'start': composite(Point, vertices.c.x1, vertices.c.y1,
                       comparator_factory=PointComparator),
    'end': composite(Point, vertices.c.x2, vertices.c.y2,
                     comparator_factory=PointComparator)
})
Return a composite column-based property for use with a Mapper.
See the mapping documentation section Composite Column Types for a full usage example.
Mappers can be constructed against arbitrary relational units (called Selectables) as well as plain Tables. For example, the join() function from the SQL package creates a neat selectable unit composed of multiple tables, complete with its own composite primary key, which can be passed to a mapper in place of a table.
from sqlalchemy.orm import mapper
from sqlalchemy.sql import join

class AddressUser(object):
    pass

# define a Join
j = join(users_table, addresses_table)

# map to it - the identity of an AddressUser object will be
# based on (user_id, address_id) since those are the primary keys involved
mapper(AddressUser, j, properties={
    'user_id': [users_table.c.user_id, addresses_table.c.user_id]
})
Note that the list of columns is equivalent to the usage of column_property() with multiple columns:
from sqlalchemy.orm import mapper, column_property

mapper(AddressUser, j, properties={
    'user_id': column_property(users_table.c.user_id, addresses_table.c.user_id)
})
The usage of column_property() is required when using declarative to map to multiple columns, since the declarative class parser won’t recognize a plain list of columns:
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class AddressUser(Base):
    __table__ = j
    user_id = column_property(users_table.c.user_id, addresses_table.c.user_id)
A second example:
from sqlalchemy.sql import join

# many-to-many join on an association table
j = join(users_table, userkeywords,
         users_table.c.user_id == userkeywords.c.user_id).join(keywords,
         userkeywords.c.keyword_id == keywords.c.keyword_id)

# a class
class KeywordUser(object):
    pass

# map to it - the identity of a KeywordUser object will be
# (user_id, keyword_id) since those are the primary keys involved
mapper(KeywordUser, j, properties={
    'user_id': [users_table.c.user_id, userkeywords.c.user_id],
    'keyword_id': [userkeywords.c.keyword_id, keywords.c.keyword_id]
})
In both examples above, “composite” columns were added as properties to the mappers; these are aggregations of multiple columns into one mapper property, which instructs the mapper to keep both of those columns set at the same value.
Similar to mapping against a join, a plain select() object can be used with a mapper as well. Below, an example select which contains two aggregate functions and a group_by is mapped to a class:
from sqlalchemy.sql import select
from sqlalchemy import func

s = select([customers,
            func.count(orders.c.customer_id).label('order_count'),
            func.max(orders.c.price).label('highest_order')],
           customers.c.customer_id == orders.c.customer_id,
           group_by=[c for c in customers.c]
           ).alias('somealias')

class Customer(object):
    pass

mapper(Customer, s)
Above, the “customers” table is joined against the “orders” table to produce a full row for each customer row, the total count of related rows in the “orders” table, and the highest price in the “orders” table, grouped against the full set of columns in the “customers” table. That query is then mapped against the Customer class. New instances of Customer will contain attributes for each column in the “customers” table as well as an “order_count” and “highest_order” attribute. Updates to the Customer object will only be reflected in the “customers” table and not the “orders” table. This is because the primary key columns of the “orders” table are not represented in this mapper and therefore the table is not affected by save or delete operations.
The first mapper created for a certain class is known as that class’s “primary mapper.” Other mappers can be created as well on the “load side” - these are called secondary mappers. A secondary mapper must be constructed with the keyword argument non_primary=True, and represents a load-only mapper. Objects that are loaded with a secondary mapper will have their save operation processed by the primary mapper. It is also invalid to add new relationship() objects to a non-primary mapper. To use this mapper with the Session, pass it to the query() method, for example:
# primary mapper
mapper(User, users_table)

# make a secondary mapper to load User against a join
othermapper = mapper(User, users_table.join(someothertable), non_primary=True)

# select
result = session.query(othermapper).all()
The “non primary mapper” is a rarely needed feature of SQLAlchemy; in most cases, the Query object can produce any kind of query that’s desired. It’s recommended that a straight Query be used in place of a non-primary mapper unless the mapper approach is absolutely needed. Current use cases for the “non primary mapper” are when you want to map the class to a particular select statement or view to which additional query criterion can be added, and for when the particular mapped select statement or view is to be placed in a relationship() of a parent mapper.
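As a hedged sketch of the latter use case (the table, column and class names here are illustrative, not taken from the earlier examples), a non-primary mapper constructed against a select() can be handed to relationship() on a parent mapper:

from sqlalchemy import select
from sqlalchemy.orm import mapper, relationship

# a load-only mapper against an arbitrary select; 'active' is a hypothetical column
active_addresses = select([addresses_table]).\
    where(addresses_table.c.active == True).alias()
address_loader = mapper(Address, active_addresses, non_primary=True)

# the non-primary mapper serves as the target of the relationship()
mapper(User, users_table, properties={
    'active_addresses': relationship(address_loader,
        primaryjoin=users_table.c.user_id == active_addresses.c.user_id)
})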
The non_primary mapper defines alternate mappers for the purposes of loading objects. What if we want the same class to be persisted differently, such as to different tables? SQLAlchemy refers to this as the “entity name” pattern, and in Python one can use a recipe which creates anonymous subclasses which are distinctly mapped. See the recipe at Entity Name.
Mapping imposes no restrictions or requirements on the constructor (__init__) method for the class. You are free to require any arguments for the function that you wish, assign attributes to the instance that are unknown to the ORM, and generally do anything else you would normally do when writing a constructor for a Python class.
The SQLAlchemy ORM does not call __init__ when recreating objects from database rows. The ORM’s process is somewhat akin to the Python standard library’s pickle module, invoking the low level __new__ method and then quietly restoring attributes directly on the instance rather than calling __init__.
If you need to do some setup on database-loaded instances before they’re ready to use, you can use the @reconstructor decorator to tag a method as the ORM counterpart to __init__. SQLAlchemy will call this method with no arguments every time it loads or reconstructs one of your instances. This is useful for recreating transient properties that are normally assigned in your __init__:
from sqlalchemy import orm

class MyMappedClass(object):
    def __init__(self, data):
        self.data = data
        # we need stuff on all instances, but not in the database.
        self.stuff = []

    @orm.reconstructor
    def init_on_load(self):
        self.stuff = []
When obj = MyMappedClass() is executed, Python calls the __init__ method as normal and the data argument is required. When instances are loaded during a Query operation as in query(MyMappedClass).one(), init_on_load is called instead.
Any method may be tagged as the reconstructor(), even the __init__ method. SQLAlchemy will call the reconstructor method with no arguments. Scalar (non-collection) database-mapped attributes of the instance will be available for use within the function. Eagerly-loaded collections are generally not yet available and will usually only contain the first element. ORM state changes made to objects at this stage will not be recorded for the next flush() operation, so the activity within a reconstructor should be conservative.
While the ORM does not call your __init__ method, it will modify the class’s __init__ slightly. The method is lightly wrapped to act as a trigger for the ORM, allowing mappers to be compiled automatically and firing an init_instance() event that MapperExtension objects may listen for. MapperExtension objects can also listen for a reconstruct_instance event, analogous to the reconstructor() decorator above.
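A minimal sketch of the extension-based approach, reusing MyMappedClass from above (the extension class and table name are illustrative):

from sqlalchemy.orm import mapper
from sqlalchemy.orm.interfaces import MapperExtension, EXT_CONTINUE

class InitOnLoadExtension(MapperExtension):
    def reconstruct_instance(self, mapper, instance):
        # same effect as the @orm.reconstructor method above
        instance.stuff = []
        return EXT_CONTINUE

mapper(MyMappedClass, mymappedclass_table, extension=InitOnLoadExtension())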
Decorate a method as the ‘reconstructor’ hook.
Designates a method as the “reconstructor”, an __init__-like method that will be called by the ORM after the instance has been loaded from the database or otherwise reconstituted.
The reconstructor will be invoked with no arguments. Scalar (non-collection) database-mapped attributes of the instance will be available for use within the function. Eagerly-loaded collections are generally not yet available and will usually only contain the first element. ORM state changes made to objects at this stage will not be recorded for the next flush() operation, so the activity within a reconstructor should be conservative.
Return a new Mapper object.
Given an object, return the primary Mapper associated with the object instance.
Raises UnmappedInstanceError if no mapping is configured.
Given a class, return the primary Mapper associated with the key.
Raises UnmappedClassError if no mapping is configured.
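Assuming these fragments describe the object_mapper() and class_mapper() functions from sqlalchemy.orm, a brief usage sketch:

from sqlalchemy.orm import object_mapper, class_mapper

# primary Mapper for a mapped class
m = class_mapper(User)

# primary Mapper for an instance (some_user is any mapped instance)
m = object_mapper(some_user)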
Compile all mappers that have been defined.
This is equivalent to calling compile() on any individual mapper.
Remove all mappers from all classes.
This function removes all instrumentation from classes and disposes of their associated mappers. Once called, the classes are unmapped and can be later re-mapped with new mappers.
clear_mappers() is not for normal use, as there is literally no valid usage for it outside of very specific testing scenarios. Normally, mappers are permanent structural components of user-defined classes, and are never discarded independently of their class. If a mapped class itself is garbage collected, its mapper is automatically disposed of as well. As such, clear_mappers() is only for usage in test suites that re-use the same classes with different mappings, which is itself an extremely rare use case - the only such use case is in fact SQLAlchemy’s own test suite, and possibly the test suites of other ORM extension libraries which intend to test various combinations of mapper construction upon a fixed set of classes.
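A brief sketch of that testing scenario:

from sqlalchemy.orm import mapper, clear_mappers

def setup():
    mapper(User, users_table)

def teardown():
    # dispose of all mappers so the next test can map User differently
    clear_mappers()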
Get an identity key.
Valid call signatures:
identity_key(class, ident)
    class - mapped class (must be a positional argument)
    ident - primary key; if the key is composite this is a tuple
identity_key(instance=instance)
    instance - object instance (must be given as a keyword arg)
identity_key(class, row=row)
    class - mapped class (must be a positional argument)
    row - result proxy row (must be given as a keyword arg)
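A short usage sketch, assuming a mapped User class with a single integer primary key (identity_key() lives in sqlalchemy.orm.util):

from sqlalchemy.orm.util import identity_key

# from a class and a primary key value
key = identity_key(User, 5)

# from an existing mapped instance (some_user is any mapped instance)
key = identity_key(instance=some_user)

# from a class and a result proxy row
key = identity_key(User, row=some_row)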
Create a UNION statement used by a polymorphic mapper.
See Concrete Table Inheritance for an example of how this is used.
Define the correlation of class attributes to database table columns.
Instances of this class should be constructed via the mapper() function.
Construct a new mapper.
Mappers are normally constructed via the mapper() function. See that function for details.
Add the given dictionary of properties to this mapper, using add_property.
Add an individual MapperProperty to this mapper.
If the mapper has not been compiled yet, just adds the property to the initial properties dictionary sent to the constructor. If this Mapper has already been compiled, then the given MapperProperty is compiled immediately.
Iterate each element and its mapper in an object graph, for all relationships that meet the given cascade rule.
The return values are object instances; this provides a strong reference so that they don’t fall out of scope immediately.
Return True if the given mapper shares a common inherited parent with this mapper.
Compile this mapper and all other non-compiled mappers.
This method checks the local compiled status as well as for any new mappers that have been defined, and is safe to call repeatedly.
Return a MapperProperty associated with the given key.
resolve_synonyms=False and raiseerr=False are deprecated.
Given a Column object, return the MapperProperty which maps this column.
Return the identity key for the given instance, based on its primary key attributes.
This value is typically also found on the instance state under the attribute name key.
Return an identity-map key for use in storing/retrieving an item from an identity map.
Return an identity-map key for use in storing/retrieving an item from the identity map.
Return True if this mapper inherits from the given mapper.
Return an iterator of all MapperProperty objects.
Iterate through the collection including this mapper and all descendant mappers.
This includes not just the immediately inheriting mappers but all their inheriting mappers as well.
To iterate through an entire hierarchy, use mapper.base_mapper.polymorphic_iterator().
Return the list of primary key values for the given instance.
Return the primary mapper corresponding to this mapper’s class key (class).
The collection including this mapper and all descendant mappers.
This includes not just the immediately inheriting mappers but all their inheriting mappers as well.