
SQLAlchemy


The Python SQL Toolkit and Object Relational Mapper

Introduction

SQLAlchemy is the Python SQL toolkit and Object Relational Mapper that gives application developers the full power and flexibility of SQL. SQLAlchemy provides a full suite of well known enterprise-level persistence patterns, designed for efficient and high-performing database access, adapted into a simple and Pythonic domain language.

Major SQLAlchemy features include:

  • An industrial strength ORM, built from the core on the identity map, unit of work, and data mapper patterns. These patterns allow transparent persistence of objects using a declarative configuration system. Domain models can be constructed and manipulated naturally, and changes are synchronized with the current transaction automatically.
  • A relationally-oriented query system, exposing the full range of SQL's capabilities explicitly, including joins, subqueries, correlation, and most everything else, in terms of the object model. Writing queries with the ORM uses the same techniques of relational composition you use when writing SQL. While you can drop into literal SQL at any time, it's virtually never needed.
  • A comprehensive and flexible system of eager loading for related collections and objects. Collections are cached within a session, and can be loaded on individual access, all at once using joins, or by query per collection across the full result set.
  • A Core SQL construction system and DBAPI interaction layer. The SQLAlchemy Core is separate from the ORM and is a full database abstraction layer in its own right, and includes an extensible Python-based SQL expression language, schema metadata, connection pooling, type coercion, and custom types.
  • All primary and foreign key constraints are assumed to be composite and natural. Surrogate integer primary keys are of course still the norm, but SQLAlchemy never assumes or hardcodes to this model.
  • Database introspection and generation. Database schemas can be "reflected" in one step into Python structures representing database metadata; those same structures can then generate CREATE statements right back out - all within the Core, independent of the ORM.
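The reflect-then-generate round trip described above can be sketched outside SQLAlchemy with the standard library's sqlite3 module; this is an illustrative sketch of the idea only, not SQLAlchemy's reflection API:

```python
import sqlite3

# Create a table, then "reflect" its column metadata back out of the database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

# PRAGMA table_info returns (cid, name, type, notnull, dflt_value, pk) per column.
columns = conn.execute("PRAGMA table_info(users)").fetchall()
reflected = {row[1]: row[2] for row in columns}
print(reflected)  # {'id': 'INTEGER', 'name': 'TEXT'}
```

SQLAlchemy's actual reflection builds full Table metadata structures; the PRAGMA call here only recovers column names and declared types.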

SQLAlchemy's philosophy:

  • SQL databases behave less and less like object collections the more size and performance start to matter; object collections behave less and less like tables and rows the more abstraction starts to matter. SQLAlchemy aims to accommodate both of these principles.
  • An ORM doesn't need to hide the "R". A relational database provides rich, set-based functionality that should be fully exposed. SQLAlchemy's ORM provides an open-ended set of patterns that allow a developer to construct a custom mediation layer between a domain model and a relational schema, turning the so-called "object relational impedance" issue into a distant memory.
  • The developer, in all cases, makes all decisions regarding the design, structure, and naming conventions of both the object model as well as the relational schema. SQLAlchemy only provides the means to automate the execution of these decisions.
  • With SQLAlchemy, there's no such thing as "the ORM generated a bad query" - you retain full control over the structure of queries, including how joins are organized, how subqueries and correlation is used, what columns are requested. Everything SQLAlchemy does is ultimately the result of a developer-initiated decision.
  • Don't use an ORM if the problem doesn't need one. SQLAlchemy consists of a Core and separate ORM component. The Core offers a full SQL expression language that allows Pythonic construction of SQL constructs that render directly to SQL strings for a target database, returning result sets that are essentially enhanced DBAPI cursors.
  • Transactions should be the norm. With SQLAlchemy's ORM, nothing goes to permanent storage until commit() is called. SQLAlchemy encourages applications to create a consistent means of delineating the start and end of a series of operations.
  • Never render a literal value in a SQL statement. Bound parameters are used to the greatest degree possible, allowing query optimizers to cache query plans effectively and making SQL injection attacks a non-issue.
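The bound-parameter principle in the last point can be demonstrated with stdlib sqlite3 alone; the qmark placeholders below are the DBAPI-level mechanism that SQL expressions ultimately render to:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id INTEGER, user_name TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", (1, "fred"))

# The literal value never appears in the SQL string; the driver binds it,
# so an injection attempt is treated as data rather than SQL.
malicious = "fred' OR '1'='1"
rows = conn.execute(
    "SELECT user_id FROM users WHERE user_name = ?", (malicious,)
).fetchall()
print(rows)  # [] -- no row has that literal string as its name
```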

Documentation

Latest documentation is at:

https://www.sqlalchemy.org/docs/

Installation / Requirements

Full documentation for installation is in the Installation section of the documentation.

Getting Help / Development / Bug reporting

Please refer to the SQLAlchemy Community Guide.

Code of Conduct

Above all, SQLAlchemy places great emphasis on polite, thoughtful, and constructive communication between users and developers. Please see our current Code of Conduct page.

License

SQLAlchemy is distributed under the MIT license.


sqlalchemy's Issues

convert primary_keys arguments to primary_key

Migrated issue, originally created by Michael Bayer (@zzzeek)

there are several places where we deal with the attribute or argument "primary_keys". Kevin Dangoor properly pointed out that a table has only one primary key, which might be composite and consist of multiple columns, but it's still just one primary key.

so wherever an argument "primary_keys" exists, make it "primary_key", and make it take a scalar or a list. convert attributes named "primary_keys" to "primary_key".

reflect postgres default column values

Migrated issue, originally created by Michael Bayer (@zzzeek)

a user has requested that the defaults set for columns in postgres be pulled in as bona fide "Default" objects on the Column object, instead of the "implicit sequencing" it does now. from the thread:

Serial in fact is an int(4) type with a default of nextval(table_column_seq):

Test=# \d Test
                                   Table "public.test"
 Column |            Type             |                     Modifiers
--------+-----------------------------+---------------------------------------------------
 id     | integer                     | not null default
nextval('test_id_seq'::regclass)

You can fetch this information with

Test=#  select column_default from information_schema.columns where
column_name='id';

          column_default
----------------------------------
 nextval('test_id_seq'::regclass)
(1 row)

need table introspection for mysql 3/4

Migrated issue, originally created by Michael Bayer (@zzzeek)

right now table introspection on mysql takes the easy way out and duplicates the information_schema method from postgres, but that only works in mysql5 (and barely). someone needs to show me the best way to do it across all mysql versions.

document mapper primary_key better, custom column properties

Migrated issue, originally created by Michael Bayer (@zzzeek)

two more big ones that come on the list a lot, being able to do this:

   m = mapper(class, table, properties = {
     'foo' : table.c.foocol
   })

to override column names, particularly when they want to make a setter/getter property around it, and this:

   m = mapper(class, table, primary_key = [table.c.col1, table.c.col2])

to specify custom primary key columns.

example fix: Advanced Data Mapping - Mapping a Class with Table Inheritance

Migrated issue, originally created by Anonymous

I've found an issue with the following documentation section:
Advanced Data Mapping - Mapping a Class with Table Inheritance

AddressUser.mapper = mapper(
addresses, inherits=User.mapper
)

should probably be:

AddressUser.mapper = mapper(AddressUser,
addresses, inherits=User.mapper
)

Additionally, the lower definition specifying the join condition should be:

AddressUser.mapper = mapper(AddressUser,
addresses, inherits=User.mapper,
inherit_condition=users.c.user_id==addresses.c.user_id
)

need convenient count() function at SQL and ORM level

Migrated issue, originally created by Michael Bayer (@zzzeek)

On Dec 17, 2005, at 12:05 AM, limodou wrote:

If there is a count() method, just like:

number = count(user).execute()

or number = select(user).execute().count()

or something else

May be more helpful. So that you don't need to invoke fetchone() again.

I would vote for a "count" function on the base class for Selectable, which is any table, join, alias, select statement, union, etc.

so when you do:

x = mytable.count()

it compiles to:

x = select([count(1)], from_obj=[mytable]).scalar()

when you do:

x = mytable.count(mytable.c.col1==7)

it compiles to:

x = select([count(1)], mytable.c.col1==7).scalar()

the question is, do we want to put these functions at the ORM level as well:

x = User.mapper.count(User.c.user_name=='fred')

x = User.mapper.count_by(user_name='fred')
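The proposed compile targets can be exercised as plain SQL with stdlib sqlite3; the mytable/col1 names below are the ticket's hypothetical examples:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (col1 INTEGER)")
conn.executemany("INSERT INTO mytable VALUES (?)", [(7,), (7,), (3,)])

# Unfiltered count -- what mytable.count() would compile to.
total = conn.execute("SELECT count(1) FROM mytable").fetchone()[0]

# Filtered count -- what mytable.count(mytable.c.col1 == 7) would compile to.
sevens = conn.execute(
    "SELECT count(1) FROM mytable WHERE col1 = ?", (7,)
).fetchone()[0]
print(total, sevens)  # 3 2
```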

add keys() to RowProxy, rename built-in attributes with an underscore

Migrated issue, originally created by Michael Bayer (@zzzeek)

since we added __getattr__ to RowProxy, we should probably rename built-in attributes with an underscore to decrease the chances of collisions. Also add a keys() method.

user-submitted patch:

Index: test/query.py
===================================================================
--- test/query.py       (revision 846)
+++ test/query.py       (working copy)
@@ -108,6 +108,11 @@
         self.assert_(r.user_id == r['user_id'] == r[self.users.c.user_id] == 2)
         self.assert_(r.user_name == r['user_name'] == r[self.users.c.user_name] == 'jack')

+    def test_column_keys(self):
+        self.users.insert().execute(user_id=1, user_name='foo')
+        r = self.users.select().execute().fetchone()
+        self.assert_(r.keys() == ['user_id', 'user_name'])
+
     def test_column_accessor_shadow(self):
         shadowed = Table('test_shadowed', db,
                          Column('shadow_id', INT, primary_key = True),
Index: lib/sqlalchemy/engine.py
===================================================================
--- lib/sqlalchemy/engine.py    (revision 846)
+++ lib/sqlalchemy/engine.py    (working copy)
@@ -631,6 +631,7 @@
         self.rowcount = engine.context.rowcount
         metadata = cursor.description
         self.props = {}
+        self.keys = []
         i = 0
         if metadata is not None:
             for item in metadata:
@@ -644,6 +645,7 @@
                     raise "None for metadata " + colname
                 if self.props.setdefault(colname, rec) is not rec:
                     self.props[colname] = (ResultProxy.AmbiguousColumn(colname), 0)
+                self.keys.append(colname)
                 self.props[i] = rec
                 i+=1

@@ -700,6 +702,5 @@
             return self.parent._get_col(self.row, name)
         except:
             raise AttributeError
-
-
-
+    def keys(self):
+        return self.parent.keys

get Binary() constructor into types.Binary

Migrated issue, originally created by Michael Bayer (@zzzeek)

postgres requires the usage of the psycopg.Binary object to properly save binary objects. Place this either into the postgres module, or put it into the base Binary type in the types module, modifying convert_bind_param and convert_result_value to take an engine parameter in order to find the dbapi type. make a decent binary unit test probably inside of test/types.py which takes a large sized binary to test binary functionality across all databases.

Cannot insert entries when using an eager Join multiple times to one table

Migrated issue, originally created by Anonymous

(original reporter: rtl) When using an eager join multiple times to one table as shown in the documentation and attempting to add a new address to the boston (or new york) addresses, i.e. u.boston_addresses.append(Address(..from Boston)), when commit() is run it is failing because the foreign key of the new address (i.e. user_id) has not been set.

mapper.select_mapper()

Migrated issue, originally created by Anonymous

when one does a mapper.select() objects
of the mapper.class_ are returned, based
on the output of mapper.table.select().

now the results of the table.select() is
a selectable/view, and can be used in any
context that accepts a selectable, including
the binding of a mapper.

these two observations in combination seem
to indicate that it is possible for there
to be a mapper.select_mapper() method which
returns a new mapper which is a clone of
the original mapper, except that it is
bound to the new selectable based on the
arguments instead of the original mapper's
table.

add float datatype

Migrated issue, originally created by Michael Bayer (@zzzeek)

right now floats descend from Numeric. we need Float by itself, which takes only the precision as an argument, plus all the database-specific views of it.

need "table_exists" function

Migrated issue, originally created by Michael Bayer (@zzzeek)

getting more than one request for "create if not created", "drop if not dropped" type of functionality. easy enough if the engine has a table_exists() function, which is of course slightly tedious.
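A minimal table_exists() sketch, assuming a catalog query (sqlite_master for sqlite here; other backends would query their own catalogs, such as information_schema.tables):

```python
import sqlite3

# Hypothetical helper for the "create if not created" use case.
def table_exists(conn, name):
    row = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type='table' AND name=?", (name,)
    ).fetchone()
    return row is not None

conn = sqlite3.connect(":memory:")
print(table_exists(conn, "users"))  # False
if not table_exists(conn, "users"):
    conn.execute("CREATE TABLE users (id INTEGER)")
print(table_exists(conn, "users"))  # True
```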

can't use column.label() effectively yet

Migrated issue, originally created by Michael Bayer (@zzzeek)

if you say:

select([table.c.col1.label('column_one'), table.c.col2.label('column_two')])

it's not going to work, particularly if you try to select from that selectable.

can't use bind parameters in a database-neutral way with textual queries, text()

Migrated issue, originally created by Michael Bayer (@zzzeek)

the text() function which specifies textual queries should be able to take database-independent bind parameters:

text("select * from users where user_id=:user_id", engine=e).execute(user_id=10)

currently, the method of ansisql compilation only looks for bind parameters that are compiled from bindparam() objects.

also, it's not consistent within the code whether :bindparam or %(bindparam) is the generally recognized format for bind params in text queries.
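The requested behavior maps directly onto the DBAPI named paramstyle, which stdlib sqlite3 already supports; this sketch shows a :user_id placeholder in a textual query being rebound at execution time:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id INTEGER, user_name TEXT)")
conn.execute("INSERT INTO users VALUES (10, 'fred')")

# Database-independent named parameter: the :user_id placeholder is bound
# from a dictionary at execution time, analogous to the proposed
# text(...).execute(user_id=10) behavior.
row = conn.execute(
    "SELECT user_name FROM users WHERE user_id = :user_id",
    {"user_id": 10},
).fetchone()
print(row)  # ('fred',)
```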

look into this information_schema patch

Migrated issue, originally created by Michael Bayer (@zzzeek)

the attached patch is for providing more complete information_schema support to the information_schema module. this should be reviewed and integrated.


Attachments: sa.diff

have select's compile additional where clauses from bind params

Migrated issue, originally created by Michael Bayer (@zzzeek)

if you do:

table.select().execute(user_id=7)

it should compile as though you did:

table.select(table.c.user_id==bindparam('user_id')).execute(user_id=7)

or just

table.select(table.c.user_id==7).execute()

this is somewhat like the select_by thing in mapper, in that key/value pairs become WHERE clauses, or also like table.insert().execute(x=7), in that the bind params set to the execution get factored into the compilation of the clause.

[patch] typos

Migrated issue, originally created by Anonymous

just two typos in the docs


Attachments: tmp.diff

Unique columns get 'magic' methods

Migrated issue, originally created by Ben Bangert (@bbangert)

From the docs example for finding a user:

# load a third User from the database            
myuser3 = User.mapper.select(User.c.user_name=='fred')[0]

If user_name is a unique column (or perhaps even if it isn't; ActiveRecord just returns the first, not sure if that's such a good idea), you'd be able to shorten the above with:

# load a third User from the database            
myuser3 = User.mapper.select_by_user_name('fred')

oracle LIMIT/OFFSET needs overhaul

Migrated issue, originally created by Michael Bayer (@zzzeek)

I did this with a quickie "rowid" concept that doesn't even work for OFFSET. A user has told me all about row_number(), described at http://troels.arvin.dk/db/rdbms/#select-limit. Look into integrating this technique into oracle. If it works really well, maybe use it elsewhere, though the LIMIT/OFFSET keywords are probably a lot easier. Also check how well the nested query does when mapper queries get more complicated; i seem to remember having an issue (i.e. limit with eager loading), and report a new ticket for that if it doesn't work with oracle.

add "deferred property" loader (half done - need 'group' + docs)

Migrated issue, originally created by Michael Bayer (@zzzeek)

a lazy loader right now only applies to a second table that is related to a primary table. we need a lazy loader that applies to a subset of the columns within a single table.

when you load an object from the DB, all the columns of the table are loaded into the new object instance unconditionally. for situations where it is wasteful to load all the columns into memory every time, create a new kind of property, the "deferred column", that is not selected by default when an object is loaded. the property contains a callable which, when referenced, triggers a load of all the "deferred" columns.

as a bonus, make it so that if an object is loaded, modified, and saved, all without touching any of the "deferred" columns, they remain unloaded from the database even after save.

Detect if limits on identifier names in the db are exceeded

Migrated issue, originally created by Anonymous

(original reporter: rtl) Postgres has a limit of 63 characters on the length of any identifier name (table, column, etc) and it discards any characters past this limit. This limit also includes bind parameters, which are named as 'table_column', effectively limiting the total of both table name and column to 63. This is a problem in queries, as this name is used as a key in the ResultProxy and Postgres will return the truncated names, but ResultProxy does lookups on the full table_column names, so no match will be found and the query fails.
I'm not sure how Postgres specific this is, but the safest option (rather than silently truncating as Postgres does) is to detect the excessive length and report an error when creating the table. Any other suggestions?
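One possible shape for the suggested check, as a hypothetical helper (the 63-character limit is PostgreSQL's; other backends would supply their own):

```python
# Hypothetical check: reject identifiers longer than the backend's limit
# (63 characters for PostgreSQL) instead of silently truncating them.
PG_MAX_IDENTIFIER = 63

def check_identifier(name, limit=PG_MAX_IDENTIFIER):
    if len(name) > limit:
        raise ValueError(
            f"identifier {name!r} is {len(name)} chars, exceeds limit of {limit}"
        )
    return name

check_identifier("users_user_id")  # fine, returned unchanged
try:
    check_identifier("x" * 64)     # one char too long -- raises
except ValueError as err:
    print(err)
```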

create default type fallback behavior for SQLite date type

Migrated issue, originally created by Michael Bayer (@zzzeek)

SQLite represents just about everything as a string. To allow whatever datatype to be legal in sqlite, change its datatype lookup inside of engine.reflecttable to use String if no datatype can be found. (paraphrasing Tim Golden's issue)

smallint support

Migrated issue, originally created by Anonymous

Support for "smallint" column types seems to be missing.

Change echo to allow option of sql only

Migrated issue, originally created by Anonymous

(original reporter: rtl) The echo option outputs every row of the result set as well as the original sql. For large result sets this can be too much information. It would be good to be able to specify echo=Off, Sql, All (or similar) to limit the amount of echoed output.

need a MapperOption that creates a new Mapper with an alias of the table

Migrated issue, originally created by Michael Bayer (@zzzeek)

if you want to use the same table twice within two different relations inside a mapper, you have to make an alias of the table. this means you can't relate to an existing mapper in both places, and you have to make a full copy of that mapper's properties on the alias.

make a MapperOption so you can say this:

m = MyClass.mapper.options(alias('foo'))

returns a copy of MyClass.mapper against a table alias 'foo'.

lazy initialization to support per-thread engine proxy

Migrated issue, originally created by Anonymous

sqlalchemy would be much easier to use in some scenarios if it were possible to rebind a mapped table to a new engine.

For instance, in a web app meant to be paste-deployable, connection parameters (including the engine to use) must be thread-safe and mutable per request, because two copies of the same app may be installed in the same server process, but with different configurations. Currently, as far as I can tell, this would require creating all of the mappers at the start of each request -- which is much too much redundant work on each request.

It would be much more efficient to be able to do something like:

table.use_engine(engine_for_this_request)

Except it looks like it would require a great deal of work to make that operation thread-safe.

So, what I wanted to do was create a proxy engine that could be used at definition time, and that would delegate to an actual engine at connection time. Unfortunately, some of the implementation in schema.py makes this impossible. Specifically, Table and Column reach into the engine when they are instantiated, eg (from Table.__init__):

self._impl = self.engine.tableimpl(self)

If those sorts of eager initializations were replaced with lazy initializations using properties, then it would be possible to build a delegating proxy engine that would enable define-once, use-many-engines tables and mappers.

Example patch for schema.py:

Index: sqlalchemy/schema.py
===================================================================
--- sqlalchemy/schema.py	(revision 841)
+++ sqlalchemy/schema.py	(working copy)
@@ -233,7 +233,7 @@
         column, which generally isnt in column lists.
         """
         self.name = str(name) # in case of incoming unicode
-        self.type = type
+        self.coltype = type
         self.args = args
         self.key = kwargs.pop('key', name)
         self.primary_key = kwargs.pop('primary_key', False)
@@ -247,6 +247,20 @@
         
     original = property(lambda s: s._orig or s)
     engine = property(lambda s: s.table.engine)
+
+    def _get_type(self):
+        # some caching would be nice here, if types are invariant
+        # per engine
+        if self.table.engine is not None:
+            return self.table.engine.type_descriptor(self.coltype)
+        return self.coltype
+    type = property(_get_type)
+
+    def _get_impl(self):
+        if self.table.engine is not None:
+            return self.table.engine.columnimpl(self)
+        return None        
+    _impl = property(_get_impl)
      
     def __repr__(self):
        return "Column(%s)" % string.join(
@@ -271,10 +285,6 @@
             if self.primary_key:
                 table.primary_key.append(self)
         self.table = table
-        if self.table.engine is not None:
-            self.type = self.table.engine.type_descriptor(self.type)
-            
-        self._impl = self.table.engine.columnimpl(self)
 
         if self.default is not None:
             self.default = ColumnDefault(self.default)
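Independent of the patch above, the delegating-proxy idea can be sketched generically; ProxyEngine and FakeEngine below are hypothetical names for illustration, not SQLAlchemy classes:

```python
# A proxy created at definition time that forwards attribute access to
# whatever engine is currently installed -- define once, rebind per request.
class ProxyEngine:
    def __init__(self):
        self._engine = None

    def use_engine(self, engine):
        # Swap the delegate; existing references to the proxy are unaffected.
        self._engine = engine

    def __getattr__(self, name):
        # Called only for attributes not found on the proxy itself.
        if self._engine is None:
            raise RuntimeError("no engine bound yet")
        return getattr(self._engine, name)

class FakeEngine:
    def __init__(self, url):
        self.url = url
    def connect(self):
        return f"connected to {self.url}"

proxy = ProxyEngine()
proxy.use_engine(FakeEngine("sqlite://a"))
print(proxy.connect())  # connected to sqlite://a
proxy.use_engine(FakeEngine("sqlite://b"))
print(proxy.connect())  # connected to sqlite://b
```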

lazy/eager relationships need an optional order_by parameter

Migrated issue, originally created by Michael Bayer (@zzzeek)

right now lazyloader and eagerloader both look at OID to produce insert-order returns of the results. since MySQL can't really handle insert-order, particularly for many-to-many, there will probably be a need for the return order to be controlled externally.

create test case for date types

Migrated issue, originally created by Michael Bayer (@zzzeek)

modify the test/types.py module so that it tests that date types go in as bind params correctly for various DB modules and come out OK. particularly psycopg1 seems to be picky, also sqlite stores dates as plain text so parsing/formatting should be unit tested.

add overrideable columns to a table with autoload=True

Migrated issue, originally created by Michael Bayer (@zzzeek)

users want to be able to specify datatypes to columns that were reflected, i.e.:

t = Table('mytable', autoload=True,
   Column('col3', MyType)
)

where 'col3' overrides that column in the table as it was reflected. seems reasonable (and useful) !

add string-based urls for engine connection

Migrated issue, originally created by Michael Bayer (@zzzeek)

right now you have to use a dictionary to send parameters to the create_engine function in order to get an engine. allow an optional string-based URL, such as "sqlite://username=foo&password=bar" or something similar.
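A hypothetical parser for that URL form, splitting the scheme from query-style keyword arguments (parse_engine_url is an illustrative name, not a proposed API):

```python
from urllib.parse import parse_qsl

# Sketch: the scheme selects the dialect, and the query-style tail becomes
# the keyword dictionary that create_engine() currently takes.
def parse_engine_url(url):
    scheme, _, tail = url.partition("://")
    return scheme, dict(parse_qsl(tail))

dialect, params = parse_engine_url("sqlite://username=foo&password=bar")
print(dialect, params)  # sqlite {'username': 'foo', 'password': 'bar'}
```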

need full support of math/boolean operators

Migrated issue, originally created by Michael Bayer (@zzzeek)

Python lets you override pretty much every operator: +, -, *, &, |, etc etc. put support in the SQL package so ClauseElements can use these operators. & = and_, | = or_, ~ = not, etc. also ensure that you can make expressions out of columns and literals, i.e. select([foo.c.lala + 10, foo.c.lala + foo.c.hoho])
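A toy sketch of the operator-overloading approach: clause objects that render themselves to SQL strings when combined with &, |, and ~ (the Clause class here is illustrative, not SQLAlchemy's ClauseElement):

```python
# Minimal sketch: & renders AND, | renders OR, ~ renders NOT.
class Clause:
    def __init__(self, sql):
        self.sql = sql
    def __and__(self, other):
        return Clause(f"({self.sql} AND {other.sql})")
    def __or__(self, other):
        return Clause(f"({self.sql} OR {other.sql})")
    def __invert__(self):
        return Clause(f"NOT ({self.sql})")

a = Clause("x = 1")
b = Clause("y = 2")
print((a & ~b).sql)  # (x = 1 AND NOT (y = 2))
print((a | b).sql)   # (x = 1 OR y = 2)
```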

column copying methods pass foreign key/sequence objects incorrectly

Migrated issue, originally created by Michael Bayer (@zzzeek)

initial comment from rtl:

I noticed that SQLAlchemy wasn't generating foreign key references in postgres fully so I spent some time chasing down why.

The scenario is that because my app needs to access a lot of databases (historical data with the same structure in many different physical databases) I've set all the tables to use the ansi sql engine by default and I have a factory that takes the actual postgres engine (determined at run time) and calls toengine() for each table. One of the things that toengine() does is to replicate the Table structure using the specified engine. As part of this replication any specified ForeignKey() constructs are also copied (see schema.py, copy() on line 173 in rev644).

The issue was that the copied foreign key was supplied to the new Column constructor (line 179 in the same file) as a keyword argument - 'foreign_key = fk'. However, this resulted in the copied ForeignKey never having its _set_parent() called because the code in the Column._set_parent() includes the following line:

self._init_items(*self.args)

where self.args are the non-keyword args passed in to the Column constructor and since the copied ForeignKey was passed in as a keyword, its _set_parent() was never called.

One solution is to change line 173 of schema.py to pass the copied ForeignKey as a standard (non-keyword) parameter (see the svn diff below). This works for Postgres at least, resulting in the correct sql statements being issued (and all the existing test passing).

The question is then - is this the best way to fix the problem and if so, should the same fix be applied to the Column._make_proxy() function as well (I don't have enough SQLAlchemy background to say for sure).

Hope there's enough detail in the explanation for people to easily understand the problem :-/

Robert

svn diff....

===================================================================
--- schema.py   (revision 644)
+++ schema.py   (working copy)
@@ -176,7 +176,7 @@
             fk = None
         else:
             fk = self.foreign_key.copy()
-        return Column(self.name, self.type, key = self.key, primary_key = self.primary_key, foreign_key = fk, sequence = self.sequence)
+        return Column(self.name, self.type, fk, key = self.key, primary_key = self.primary_key, sequence = self.sequence)

     def _make_proxy(self, selectable, name = None):
         """creates a copy of this Column, initialized the way this Column is"""

make oid/rowid functionality optional for Postgres

Migrated issue, originally created by Michael Bayer (@zzzeek)

postgres has disabled oids by default as of 8.1 (rowids in sqlalchemy). make all usage of rowids optional, and have postgres by default not use rowids at all. SERIAL columns should use the default sequence tablename_colname_seq to generate primary keys.

we probably want to make a special sequence internal to postgres that doesn't conflict with an existing "optional" sequence and modify sqlengine's _process_defaults() to give more control to the subclass in running a column's default generators.

sqlite.SLDateTime.convert_result_value falls over if value is None

Migrated issue, originally created by Anonymous

engine.ResultProxy._get_col calls convert_result_value on the column's value. If the column is a TimeStamp in sqlite, this equates to calling sqlite.SLDateTime.convert_result_value, which then tries to split the value on a dot. If the value was NULL and therefore None, this split fails:
(warning line numbers will be wrong in sqlite.py; I have patched this file slightly for new data types)

Traceback (most recent call last):
File "", line 1, in ?
File "c:\work_in_progress\sqlalchemy\lib\sqlalchemy\engine.py", line 390, in __repr__
return repr(tuple(key) for key in range(0, len(self.row)))
File "c:\work_in_progress\sqlalchemy\lib\sqlalchemy\engine.py", line 360, in _get_col
return rec[0].convert_result_value(row[rec[1]])
File "c:\work_in_progress\sqlalchemy\lib\sqlalchemy\databases\sqlite.py", line 41, in convert_result_value
(value, microsecond) = value.split('.')
AttributeError: 'NoneType' object has no attribute 'split'
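The fix is a None guard before the split; here is a standalone sketch of such a converter (convert_result_value below is a hypothetical reimplementation for illustration, not the sqlite.py original):

```python
from datetime import datetime

# Guard for NULL (None) before splitting on the fractional-seconds dot.
def convert_result_value(value):
    if value is None:  # NULL column: nothing to parse
        return None
    value, _, microsecond = value.partition(".")
    dt = datetime.strptime(value, "%Y-%m-%d %H:%M:%S")
    return dt.replace(microsecond=int(microsecond or 0))

print(convert_result_value(None))                           # None
print(convert_result_value("2006-01-02 03:04:05.000007"))   # 2006-01-02 03:04:05.000007
```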

Shorthand for multiple column comparisons

Migrated issue, originally created by Ben Bangert (@bbangert)

When testing for multiple column equivalents, specifying the full class for each column becomes a chore; take this example:

# load a User from the database            
myuser = User.mapper.select(User.c.user_name=='fred', User.c.password=='something',
                            User.c.email=='[email protected]')

If we use a slightly different keyword (inspired by SQLObject notation) like so:

# load a User from the database            
myuser = User.mapper.select_by(user_name='fred', password='something', email='[email protected]')

We save ourselves a bit of repetitive typing. I come across usage like this fairly often so I think it'd be rather useful in SQLAlchemy as well.
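A sketch of the proposed select_by semantics against stdlib sqlite3: keyword arguments become ANDed equality clauses (select_by below is a hypothetical helper, and the email value is made up; note the criteria keys are interpolated into the SQL, so they must be trusted names):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_name TEXT, password TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('fred', 'something', 'fred@example.com')")

# Keyword arguments become ANDed "col = ?" clauses; values stay bound params.
def select_by(conn, table, **criteria):
    cols = list(criteria)
    where = " AND ".join(f"{c} = ?" for c in cols)
    sql = f"SELECT * FROM {table} WHERE {where}"
    return conn.execute(sql, [criteria[c] for c in cols]).fetchall()

rows = select_by(conn, "users", user_name="fred", password="something")
print(rows)  # [('fred', 'something', 'fred@example.com')]
```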

deprecate relation(class, table) syntax

Migrated issue, originally created by Michael Bayer (@zzzeek)

people are more often tripping over this thing:

   m = mapper(class1, table1)
   m2 = mapper(class2, table2, properties = {
     'foo': relation(class1, table1)
   })

since it's not easy to see that relation(class1, table1) creates another new mapper. just get rid of that style; they can specify the mapper or the class by itself and that's it (or maybe just the mapper).

add Date and Time type

Migrated issue, originally created by Michael Bayer (@zzzeek)

we pretty much should do the three-format thing, i.e. Date, Time, and DateTime, corresponding to the same object constructors in DBAPI

test ticket.

Migrated issue, originally created by Anonymous

this is a test. testing. more testing. ho hum.

Ability to specify order_by for relations

Migrated issue, originally created by Anonymous

(original reporter: rtl) Currently the child rows in a parent-child relationship defined using mapper are returned in oid order (in postgres at least). It would be very useful to be able to specify the sort column using the order_by keyword arg when the relation is defined.
