SQLAlchemy has a variety of extensions available which provide additional functionality, either through explicit usage or by augmenting core behavior. Several of these extensions are designed to work together.
Author: Mike Bayer
Version: 0.4.4 or greater
declarative intends to be a fully featured replacement for the very old activemapper extension. Its goal is to redefine the organization of class, Table, and mapper() constructs such that they can all be defined "at once" underneath a class declaration. Unlike activemapper, it does not redefine normal SQLAlchemy configurational semantics - regular Column, relation() and other schema or ORM constructs are used in almost all cases.

declarative is a so-called "micro declarative layer"; it does not generate table or column names, and it requires a configuration nearly as verbose as that of plain tables and mappers. As an alternative, the Elixir project is a full community-supported declarative layer for SQLAlchemy, and is recommended for its active-record-like semantics, its convention-based configuration, and its plugin capabilities.
SQLAlchemy object-relational configuration involves Table, mapper(), and class objects, which define three separate areas of configuration. declarative moves all three of these underneath the individual mapped class. Regular SQLAlchemy schema and ORM constructs are used in most cases:
from sqlalchemy.ext.declarative import declarative_base, declared_synonym

engine = create_engine('sqlite://')
Base = declarative_base(engine)

class SomeClass(Base):
    __tablename__ = 'some_table'
    id = Column('id', Integer, primary_key=True)
    name = Column('name', String(50))
Above, the declarative_base callable produces a new base class from which all mapped classes inherit. When the class definition is completed, a new Table and mapper() have been generated, accessible via the __table__ and __mapper__ attributes on the SomeClass class.
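For instance, the generated constructs can be inspected directly; a quick sketch (the exact output depends on your setup):

# the Table and Mapper generated for SomeClass are available as class attributes
print SomeClass.__table__.name      # 'some_table'
print list(SomeClass.__table__.c)   # the 'id' and 'name' Column objects
print SomeClass.__mapper__.class_   # <class '__main__.SomeClass'>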
Attributes may be added to the class after its construction, and they will be added to the underlying Table and mapper() definitions as appropriate:
SomeClass.data = Column('data', Unicode)
SomeClass.related = relation(RelatedInfo)
Classes which are mapped explicitly using mapper() can interact freely with declarative classes. The declarative_base base class contains a MetaData object as well as a dictionary of all classes created against the base. So to access the above metadata and create tables, we can say:
Base.metadata.create_all()
The declarative_base can also receive a pre-created MetaData object:
mymetadata = MetaData()
Base = declarative_base(metadata=mymetadata)
Relations to other classes are done in the usual way, with the added feature that the class specified to relation() may be a string name. The "class registry" associated with Base is used at mapper compilation time to resolve the name into the actual class object, which is expected to have been defined by the time the mapper configuration is used:
class User(Base):
    __tablename__ = 'users'

    id = Column('id', Integer, primary_key=True)
    name = Column('name', String(50))
    addresses = relation("Address", backref="user")

class Address(Base):
    __tablename__ = 'addresses'

    id = Column('id', Integer, primary_key=True)
    email = Column('email', String(50))
    user_id = Column('user_id', Integer, ForeignKey('users.id'))
Column constructs, since they are just that, are immediately usable, as below where we define a primary join condition on the Address class using them:
class Address(Base):
    __tablename__ = 'addresses'

    id = Column('id', Integer, primary_key=True)
    email = Column('email', String(50))
    user_id = Column('user_id', Integer, ForeignKey('users.id'))
    user = relation(User, primaryjoin=user_id == User.id)
Synonyms are one area where declarative needs to slightly change the usual SQLAlchemy configurational syntax. To define a getter/setter which proxies to an underlying attribute, use declared_synonym:
class MyClass(Base):
    __tablename__ = 'sometable'

    _attr = Column('attr', String)

    def _get_attr(self):
        return self._attr

    def _set_attr(self, attr):
        self._attr = attr

    attr = declared_synonym(property(_get_attr, _set_attr), '_attr')
The above synonym is then usable as an instance attribute as well as a class-level expression construct:
x = MyClass()
x.attr = "some value"
session.query(MyClass).filter(MyClass.attr == 'some other value').all()
As an alternative to __tablename__, a direct Table construct may be used:
class MyClass(Base):
    __table__ = Table('my_table', Base.metadata,
        Column('id', Integer, primary_key=True),
        Column('name', String(50))
    )
This is the preferred approach when using reflected tables, as below:
class MyClass(Base):
    __table__ = Table('my_table', Base.metadata, autoload=True)
Mapper arguments are specified using the __mapper_args__ class variable. Note that the column objects declared on the class are immediately usable, as in this joined-table inheritance example:
class Person(Base):
    __tablename__ = 'people'
    id = Column('id', Integer, primary_key=True)
    discriminator = Column('type', String(50))
    __mapper_args__ = {'polymorphic_on': discriminator}

class Engineer(Person):
    __tablename__ = 'engineers'
    __mapper_args__ = {'polymorphic_identity': 'engineer'}
    id = Column('id', Integer, ForeignKey('people.id'), primary_key=True)
    primary_language = Column('primary_language', String(50))
For single-table inheritance, the __tablename__ and __table__ class variables are optional on a class when the class inherits from another mapped class.
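As a quick sketch of the single-table case, here is a hypothetical Manager subclass (added purely for illustration) which defines no table of its own and therefore maps against the 'people' table declared above:

class Manager(Person):
    __mapper_args__ = {'polymorphic_identity': 'manager'}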
As a convenience feature, the declarative_base() sets a default constructor on classes which takes keyword arguments and assigns them to the named attributes:
e = Engineer(primary_language='python')
Note that declarative has no built-in integration with sessions; it is only intended as an optional syntax for the regular usage of mappers and Table objects. A typical application setup using scoped_session might look like:
engine = create_engine('postgres://scott:tiger@localhost/test')

Session = scoped_session(sessionmaker(transactional=True, autoflush=False, bind=engine))

Base = declarative_base()
Mapped instances then use Session in the usual way.
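For example, a minimal sketch of typical 0.4-style usage, assuming the User class from earlier has been declared against this Base:

Base.metadata.create_all(engine)   # create tables for all declarative classes

session = Session()                # scoped_session returns the thread-local Session
ed = User(name='ed')               # the default constructor accepts keyword arguments
session.save(ed)                   # 0.4 API for pending a new instance
session.commit()                   # flushes and commits (autoflush is off above)

print session.query(User).filter_by(name='ed').first()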
Author: Mike Bayer and Jason Kirtland
Version: 0.3.1 or greater
associationproxy is used to create a simplified, read/write view of a relationship. It can be used to cherry-pick fields from a collection of related objects, or to greatly simplify access to associated objects in an association relationship.
Consider this "association object" mapping:
users_table = Table('users', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(64)),
)

keywords_table = Table('keywords', metadata,
    Column('id', Integer, primary_key=True),
    Column('keyword', String(64))
)

userkeywords_table = Table('userkeywords', metadata,
    Column('user_id', Integer, ForeignKey("users.id"), primary_key=True),
    Column('keyword_id', Integer, ForeignKey("keywords.id"), primary_key=True)
)

class User(object):
    def __init__(self, name):
        self.name = name

class Keyword(object):
    def __init__(self, keyword):
        self.keyword = keyword

mapper(User, users_table, properties={
    'kw': relation(Keyword, secondary=userkeywords_table)
})
mapper(Keyword, keywords_table)
Above are three simple tables, modeling users, keywords and a many-to-many relationship between the two. These Keyword objects are little more than a container for a name, and accessing them via the relation is awkward:
user = User('jek')
user.kw.append(Keyword('cheese inspector'))

print user.kw
# [<__main__.Keyword object at 0xb791ea0c>]

print user.kw[0].keyword
# 'cheese inspector'

print [keyword.keyword for keyword in user.kw]
# ['cheese inspector']
With association_proxy you have a "view" of the relation that contains just the .keyword of the related objects. The proxy is a Python property, and unlike the mapper relation, is defined in your class:
from sqlalchemy.ext.associationproxy import association_proxy

class User(object):
    def __init__(self, name):
        self.name = name

    # proxy the 'keyword' attribute from the 'kw' relation
    keywords = association_proxy('kw', 'keyword')

# ...

>>> user.kw
[<__main__.Keyword object at 0xb791ea0c>]
>>> user.keywords
['cheese inspector']
>>> user.keywords.append('snack ninja')
>>> user.keywords
['cheese inspector', 'snack ninja']
>>> user.kw
[<__main__.Keyword object at 0x9272a4c>, <__main__.Keyword object at 0xb7b396ec>]
The proxy is read/write. New associated objects are created on demand when values are added to the proxy, and modifying or removing an entry through the proxy also affects the underlying collection.

Above, Keyword.__init__ takes a single argument, keyword, which maps conveniently to the value being set through the proxy. A creator function could have been used instead if more flexibility was required.
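For instance, a creator equivalent to the default behavior might look like the following sketch (the _keyword_creator name is hypothetical):

def _keyword_creator(value):
    # receives each value added to the proxy; returns the associated object
    return Keyword(value)

class User(object):
    def __init__(self, name):
        self.name = name
    keywords = association_proxy('kw', 'keyword', creator=_keyword_creator)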
Because the proxies are backed by a regular relation collection, all of the usual hooks and patterns for using collections are still in effect. The most convenient behavior is the automatic setting of "parent"-type relationships on assignment. In the example above, nothing special had to be done to associate the Keyword to the User: simply adding it to the collection is sufficient.
Association proxies are also useful for keeping association objects out of the way during regular use. For example, the userkeywords table might have a bunch of auditing columns that need to get updated when changes are made - columns that are updated but seldom, if ever, accessed in your application. A proxy can provide a very natural access pattern for the relation.
from sqlalchemy.ext.associationproxy import association_proxy

# users_table and keywords_table tables as above, then:

userkeywords_table = Table('userkeywords', metadata,
    Column('user_id', Integer, ForeignKey("users.id"), primary_key=True),
    Column('keyword_id', Integer, ForeignKey("keywords.id"), primary_key=True),
    # add some auditing columns
    Column('updated_at', DateTime, default=datetime.now),
    Column('updated_by', Integer, default=get_current_uid, onupdate=get_current_uid),
)

def _create_uk_by_keyword(keyword):
    """A creator function."""
    return UserKeyword(keyword=keyword)

class User(object):
    def __init__(self, name):
        self.name = name
    keywords = association_proxy('user_keywords', 'keyword', creator=_create_uk_by_keyword)

class Keyword(object):
    def __init__(self, keyword):
        self.keyword = keyword
    def __repr__(self):
        return 'Keyword(%s)' % repr(self.keyword)

class UserKeyword(object):
    def __init__(self, user=None, keyword=None):
        self.user = user
        self.keyword = keyword

mapper(User, users_table, properties={
    'user_keywords': relation(UserKeyword)
})
mapper(Keyword, keywords_table)
mapper(UserKeyword, userkeywords_table, properties={
    'user': relation(User),
    'keyword': relation(Keyword),
})

user = User('log')
kw1 = Keyword('new_from_blammo')

# Adding a Keyword requires creating a UserKeyword association object
user.user_keywords.append(UserKeyword(user, kw1))

# And accessing Keywords requires traversing UserKeywords
print user.user_keywords[0]
# <__main__.UserKeyword object at 0xb79bbbec>

print user.user_keywords[0].keyword
# Keyword('new_from_blammo')

# Lots of work.

# It's much easier to go through the association proxy!
for kw in (Keyword('its_big'), Keyword('its_heavy'), Keyword('its_wood')):
    user.keywords.append(kw)

print user.keywords
# [Keyword('new_from_blammo'), Keyword('its_big'), Keyword('its_heavy'), Keyword('its_wood')]
stocks_table = Table("stocks", metadata,
    Column('symbol', String(10), primary_key=True),
    Column('description', String(100), nullable=False),
    Column('last_price', Numeric)
)

brokers_table = Table("brokers", metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(100), nullable=False)
)

holdings_table = Table("holdings", metadata,
    Column('broker_id', Integer, ForeignKey('brokers.id'), primary_key=True),
    Column('symbol', String(10), ForeignKey('stocks.symbol'), primary_key=True),
    Column('shares', Integer)
)
Above are three tables, modeling stocks, their brokers and the number of shares of a stock held by each broker. This situation is quite different from the association example above: shares is a property of the relation, an important one that we need to use all the time.
For this example, it would be very convenient if Broker objects had a dictionary collection that mapped Stock instances to the shares held for each. That's easy.
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.orm.collections import attribute_mapped_collection

def _create_holding(stock, shares):
    """A creator function, constructs Holdings from Stock and share quantity."""
    return Holding(stock=stock, shares=shares)

class Broker(object):
    def __init__(self, name):
        self.name = name
    holdings = association_proxy('by_stock', 'shares', creator=_create_holding)

class Stock(object):
    def __init__(self, symbol, description=None):
        self.symbol = symbol
        self.description = description
        self.last_price = 0

class Holding(object):
    def __init__(self, broker=None, stock=None, shares=0):
        self.broker = broker
        self.stock = stock
        self.shares = shares

mapper(Stock, stocks_table)
mapper(Broker, brokers_table, properties={
    'by_stock': relation(Holding,
                         collection_class=attribute_mapped_collection('stock'))
})
mapper(Holding, holdings_table, properties={
    'stock': relation(Stock),
    'broker': relation(Broker)
})
Above, we've set up the 'by_stock' relation collection to act as a dictionary, using the .stock property of each Holding as a key.

Populating and accessing that dictionary manually is slightly inconvenient because of the complexity of the Holding association object:
stock = Stock('ZZK')
broker = Broker('paj')

broker.by_stock[stock] = Holding(broker, stock, 10)
print broker.by_stock[stock].shares
# 10
The holdings proxy we've added to the Broker class hides the details of the Holding while also giving access to .shares:
for stock in (Stock('JEK'), Stock('STPZ')):
    broker.holdings[stock] = 123

for stock, shares in broker.holdings.items():
    print stock, shares

# let's take a peek at holdings_table after committing changes to the db
print list(holdings_table.select().execute())
# [(1, 'ZZK', 10), (1, 'JEK', 123), (1, 'STPZ', 123)]
Further examples can be found in the examples/ directory in the SQLAlchemy distribution.
The association_proxy convenience function is not present in SQLAlchemy versions 0.3.1 through 0.3.7; instead, instantiate the class directly:
from sqlalchemy.ext.associationproxy import AssociationProxy

class Article(object):
    keywords = AssociationProxy('keyword_associations', 'keyword')
Author: Jason Kirtland
orderinglist is a helper for mutable ordered relations. It will intercept list operations performed on a relation collection and automatically synchronize changes in list position with an attribute on the related objects. (See advdatamapping_properties_entitycollections for more information on the general pattern.)
Example: Two tables that store slides in a presentation. Each slide has a number of bullet points, displayed in order by the 'position' column on the bullets table. These bullets can be inserted and re-ordered by your end users, and you need to update the 'position' column of all affected rows when changes are made.
slides_table = Table('Slides', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String))

bullets_table = Table('Bullets', metadata,
    Column('id', Integer, primary_key=True),
    Column('slide_id', Integer, ForeignKey('Slides.id')),
    Column('position', Integer),
    Column('text', String))

class Slide(object):
    pass

class Bullet(object):
    pass

mapper(Slide, slides_table, properties={
    'bullets': relation(Bullet, order_by=[bullets_table.c.position])
})
mapper(Bullet, bullets_table)
The standard relation mapping will produce a list-like attribute on each Slide containing all related Bullets, but coping with changes in ordering is totally your responsibility. If you insert a Bullet into that list, there is no magic - it won't have a position attribute unless you assign one, and you'll need to manually renumber all the subsequent Bullets in the list to accommodate the insert.
An orderinglist can automate this and manage the 'position' attribute on all related bullets for you.
from sqlalchemy.ext.orderinglist import ordering_list

mapper(Slide, slides_table, properties={
    'bullets': relation(Bullet,
                        collection_class=ordering_list('position'),
                        order_by=[bullets_table.c.position])
})
mapper(Bullet, bullets_table)

s = Slide()
s.bullets.append(Bullet())
s.bullets.append(Bullet())

print s.bullets[1].position
# 1

s.bullets.insert(1, Bullet())
print s.bullets[2].position
# 2
Use the ordering_list function to set up the collection_class on relations (as in the mapper example above). This implementation depends on the list starting in the proper order, so be sure to put an order_by on your relation.
ordering_list takes the name of the related object's ordering attribute as an argument. By default, the zero-based integer index of the object's position in the ordering_list is synchronized with the ordering attribute: index 0 will get position 0, index 1 position 1, etc. To start numbering at 1 or some other integer, provide count_from=1.
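For instance, a small variation of the Slide/Bullet mapping above (same tables and classes) that numbers positions starting at 1:

mapper(Slide, slides_table, properties={
    'bullets': relation(Bullet,
                        collection_class=ordering_list('position', count_from=1),
                        order_by=[bullets_table.c.position])
})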
Ordering values are not limited to incrementing integers. Almost any scheme can be implemented by supplying a custom ordering_func that maps a Python list index to any value you require. See the module documentation for more information, and also check out the unit tests for examples of stepped numbering, alphabetical and Fibonacci numbering.
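As a rough sketch of a custom scheme, assuming ordering functions receive the list index and the collection (the signature used by the bundled count_from_0 helper):

# store 0, 10, 20, ... in 'position' instead of 0, 1, 2, ...
def stepped_numbering(index, collection):
    return index * 10

ordering_list('position', ordering_func=stepped_numbering)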
Author: Jonathan Ellis
SqlSoup creates mapped classes on the fly from tables, which are automatically reflected from the database based on name. It is essentially a nicer version of the "row data gateway" pattern.
>>> from sqlalchemy.ext.sqlsoup import SqlSoup
>>> db = SqlSoup('sqlite:///')
>>> db.users.select(order_by=[db.users.c.name])
[MappedUsers(name='Bhargan Basepair',email='basepair@example.edu',password='basepair',classname=None,admin=1),
 MappedUsers(name='Joe Student',email='student@example.edu',password='student',classname=None,admin=0)]
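Rows can also be inserted through the dynamically mapped classes; a rough sketch, assuming the insert()/flush() methods described in the SqlSoup documentation:

>>> newuser = db.users.insert(name='Marvin Android', email='marvin@example.edu',
...                           password='paranoid', classname=None, admin=0)
>>> db.flush()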
Full SqlSoup documentation is on the SQLAlchemy Wiki.
A lot of our extensions are deprecated, but this is a good thing: all of them have been refined and focused, and rolled into the core of SQLAlchemy. They haven't been removed; they've simply graduated into fully integrated features. Below we describe a set of extensions which are present in 0.4 but are deprecated.
Author: Jonas Borgström
NOTE: As of version 0.3.6 of SQLAlchemy, most behavior of SelectResults has been rolled into the base Query object. Explicit usage of SelectResults is therefore no longer needed.
SelectResults gives transformative behavior to the results returned from the select and select_by methods of Query.
from sqlalchemy.ext.selectresults import SelectResults

query = session.query(MyClass)
res = SelectResults(query)

res = res.filter(table.c.column == "something")  # adds a WHERE clause (or appends to the existing one via AND)
res = res.order_by([table.c.column])             # adds an ORDER BY clause

for x in res[:10]:  # fetch and print the top ten instances - adds OFFSET 0 LIMIT 10 or equivalent
    print x.column2

# evaluate as a list, which executes the query
x = list(res)

# count how many instances have column2 > 42 and column == "something"
print res.filter(table.c.column2 > 42).count()

# select() is a synonym for filter()
session.query(MyClass).select(mytable.c.column == "something").order_by([mytable.c.column])[2:7]
An important facet of SelectResults is that the actual SQL execution does not occur until the object is used in a list or iterator context. This means you can call any number of transformative methods (including filter, order_by, list range expressions, etc.) before any SQL is actually issued.
Configuration of SelectResults may be per-Query, per Mapper, or per application:
from sqlalchemy.ext.selectresults import SelectResults, SelectResultsExt

# construct a SelectResults for an individual Query
sel = SelectResults(session.query(MyClass))

# construct a Mapper where the Query.select()/select_by() methods will return a SelectResults:
mapper(MyClass, mytable, extension=SelectResultsExt())

# globally configure all Mappers to return SelectResults, using the "selectresults" mod
import sqlalchemy.mods.selectresults
SelectResults greatly enhances querying and is highly recommended. For example, here is a query constructed using a combination of joins and outerjoins:
mapper(User, users_table, properties={
    'orders': relation(mapper(Order, orders_table, properties={
        'items': relation(mapper(Item, items_table))
    }))
})

session = create_session()
query = SelectResults(session.query(User))

result = query.outerjoin_to('orders').outerjoin_to('items').select(
    or_(Order.c.order_id == None, Item.c.item_id == 2))
For a full listing of methods, see the generated documentation.
Author: Daniel Miller
The SessionContext extension is still available in the 0.4 release of SQLAlchemy, but has been deprecated in favor of the scoped_session() function, which provides a class-like object that constructs a Session on demand, referencing a thread-local scope.
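For instance, a minimal sketch of the thread-local behavior (assuming an engine has already been created):

from sqlalchemy.orm import scoped_session, sessionmaker

Session = scoped_session(sessionmaker(bind=engine))

# repeated calls within the same thread return the same Session instance
assert Session() is Session()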
For docs on SessionContext, see the SQLAlchemy 0.3 documentation.
Author: Mike Bayer
The assignmapper extension is still available in the 0.4 release of SQLAlchemy, but has been deprecated in favor of the scoped_session() function, which provides a mapper callable that works similarly to assignmapper.
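A rough sketch of the replacement, assuming the scoped Session above, an existing MyClass/mytable, and that Session.mapper attaches a query attribute to the class as the old assignmapper did:

# maps the class and ties it to the scoped session
Session.mapper(MyClass, mytable)

# the class then exposes a session-bound query attribute
print MyClass.query.filter(MyClass.c.column == "something").all()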
For docs on assignmapper, see the SQLAlchemy 0.3 documentation.
Author: Jonathan LaCour
Please note that ActiveMapper has been deprecated in favor of either Elixir, a comprehensive solution to declarative mapping, or declarative, a built-in convenience tool which reorganizes Table and mapper() configuration.

ActiveMapper is a so-called "declarative layer" which allows the construction of a class, a Table, and a Mapper all in one step:
class Person(ActiveMapper):
    class mapping:
        id = column(Integer, primary_key=True)
        full_name = column(String)
        first_name = column(String)
        middle_name = column(String)
        last_name = column(String)
        birth_date = column(DateTime)
        ssn = column(String)
        gender = column(String)
        home_phone = column(String)
        cell_phone = column(String)
        work_phone = column(String)
        prefs_id = column(Integer, foreign_key=ForeignKey('preferences.id'))
        addresses = one_to_many('Address', colname='person_id', backref='person')
        preferences = one_to_one('Preferences', colname='pref_id', backref='person')

    def __str__(self):
        s = '%s\n' % self.full_name
        s += '  * birthdate: %s\n' % (self.birth_date or 'not provided')
        s += '  * fave color: %s\n' % (self.preferences.favorite_color or 'Unknown')
        s += '  * personality: %s\n' % (self.preferences.personality_type or 'Unknown')

        for address in self.addresses:
            s += '  * address: %s\n' % address.address_1
            s += '             %s, %s %s\n' % (address.city, address.state, address.postal_code)

        return s

class Preferences(ActiveMapper):
    class mapping:
        __table__ = 'preferences'
        id = column(Integer, primary_key=True)
        favorite_color = column(String)
        personality_type = column(String)

class Address(ActiveMapper):
    class mapping:
        id = column(Integer, primary_key=True)
        type = column(String)
        address_1 = column(String)
        city = column(String)
        state = column(String)
        postal_code = column(String)
        person_id = column(Integer, foreign_key=ForeignKey('person.id'))
More discussion on ActiveMapper can be found at Jonathan LaCour's Blog as well as the SQLAlchemy Wiki.