
python-ibmdbsa's People

Contributors

abhi7436, amukherjee28, andrasore-kodinfo, bchoudhary6415, bimalkjha, eonu, hemlatabhatt, jazlee, jospaul1, markusfraks, moseswynn, msdhupp, murchurl, openmax, praveen-db2, rhgit01, rpriyadh, sabakauser, sasa-tomic, xnot, zzzeek


python-ibmdbsa's Issues

Getting error sqlalchemy.exc.DataError: (pyodbc.DataError) ('22018', '[22018] [IBM][System i Access ODBC Driver]Error in assignment

Hi,

I want to insert data with default Boolean values into a table using the AS400 dialect and SQLAlchemy (using the iSeries driver on Ubuntu 16.04). While inserting data into the table, I get the following error:

sqlalchemy.exc.DataError: (pyodbc.DataError) ('22018', '[22018] [IBM][System i Access ODBC Driver]Error in assignment


Code snippet:

import json

from sqlalchemy import *
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.pool import StaticPool

URL = "ibm_db_sa+pyodbc400://{user}:{password}@{host}:{port}/{database}"

engine = create_engine(URL, poolclass=StaticPool, echo=True)
meta = MetaData(bind=engine, schema="test")
db_session = sessionmaker(bind=engine)
model = declarative_base(metadata=meta)

json_data = {
    "title": "Sample Konfabulator Widget",
    "name": "main_window",
    "width": 500,
    "height": 500
}

class Client(model):
    __tablename__ = 'client'
    id = Column("id", Integer, nullable=False, primary_key=True)
    name = Column("name", String(128), nullable=True)
    age = Column("age", Integer)
    is_active = Column("is_active", Boolean, default=True)
    is_deleted = Column("is_deleted", Boolean, default=False)
    client_address = Column("client_address", Text)

def add_client(client_details):
    session = db_session()
    client = Client(**client_details)
    session.add(client)
    session.commit()
    return True

def get_client():
    session = db_session()
    client = session.query(Client).filter(Client.id == "1").first()
    print("Client_id =", client.id)
    print("Client_name =", client.name)
    session.close()
    return True

client_data = {"name": "text", "age": 30, "client_address": json.dumps(json_data)}
add_client(client_data)
get_client()

The add_client() and get_client() methods give the same error.

Note: The same code works on a Windows machine with the Windows iSeries driver.

Please help

Support for informix DB

Currently, some errors arise when trying to use this dialect with Informix. Are there plans to support Informix under this dialect as well, or will a new one be created?

ModuleNotFoundError: No module named 'future'

Traceback (most recent call last):
File "alchemy_test.py", line 3, in <module>
import ibm_db_sa.ibm_db_sa
File "/QOpenSys/pkgs/lib/python3.6/site-packages/ibm_db_sa/init.py", line 22, in <module>
from . import ibm_db, pyodbc, base, zxjdbc
File "/QOpenSys/pkgs/lib/python3.6/site-packages/ibm_db_sa/ibm_db.py", line 20, in <module>
from .base import DB2ExecutionContext, DB2Dialect
File "/QOpenSys/pkgs/lib/python3.6/site-packages/ibm_db_sa/base.py", line 30, in <module>
from . import reflection as ibm_reflection
File "/QOpenSys/pkgs/lib/python3.6/site-packages/ibm_db_sa/reflection.py", line 26, in <module>
from future.utils import iteritems
ModuleNotFoundError: No module named 'future'

Looks like a5bb019#diff-b3a9e1dd4976b655040c5eebb69309d1L24 added a dependency on the future package, but it wasn't added as a dependency to setup.py.
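
For illustration, a minimal sketch of the packaging fix being asked for, assuming a standard setuptools setup.py (the names and unpinned versions here are illustrative, not the project's actual metadata):

# Hypothetical setup.py fragment: declare 'future' as a runtime dependency so
# pip installs it alongside ibm_db_sa.
from setuptools import setup, find_packages

setup(
    name='ibm_db_sa',
    packages=find_packages(),
    install_requires=[
        'sqlalchemy',
        'future',  # reflection.py imports future.utils.iteritems
    ],
)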

CASTs get removed

Having SQLAlchemy use CASTs causes the IBM_DB_SA layer to remove them on the call to:
ibm_db_sa/base.py:351: sql_ori = compiler.SQLCompiler.visit_select(self, select, **kwargs)

where SQLCompiler is sqlalchemy.sql.compiler.SQLCompiler.

While the 'select' input parameter contains the proper SQL query (including the CASTs), 'sql_ori' no longer does (in both the SELECT and WHERE clauses).

Example:
select is "SELECT CAST('test plain returns' AS VARCHAR(60)) AS anon_1"
sql_ori is "SELECT 'test plain returns' AS anon_1 FROM SYSIBM.SYSDUMMY1"

"No module named ibm_db_dbi"

Hi,

I cannot use your module since I get an import error. I took this from your documentation:

from sqlalchemy import create_engine
e = create_engine("db2+ibm_db://user:pass@host[:port]/database")

but get this error

Traceback (most recent call last):
  File "ibm_db2_test3.py", line 2, in <module>
    e = create_engine("db2+ibm_db://user:pass@host:1111/database")
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/__init__.py", line 386, in create_engine
    return strategy.create(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/strategies.py", line 75, in create
    dbapi = dialect_cls.dbapi(**dbapi_args)
  File "/usr/lib/python2.7/dist-packages/ibm_db_sa/ibm_db.py", line 58, in dbapi
    import ibm_db_dbi as module
ImportError: No module named ibm_db_dbi

I can also use another engine URL (or the same kind of alias):

from sqlalchemy import create_engine
import ibm_db_sa.ibm_db
e = create_engine('ibm_db_sa://user:pass@host:1111/database')

but get the same error. The system has the following properties:

  • Debian Linux Stretch 64bit
  • Python 2.7 (as you can see from the errors) in version 2.7.13-2 via apt
  • Your lib is installed via apt as 'python-ibm-db-sa' in the recent version 0.3.0-3
  • sqlalchemy via apt in version 1.0.15

Searching for "No module named ibm_db_dbi" via Google shows others with the same problem but no real solution. Do you think the Debian package is broken in some way?
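
One way to narrow this down (a diagnostic sketch, not a fix): ibm_db_dbi ships as part of the ibm_db driver package, so checking that import directly shows whether the driver or the Debian packaging is at fault.

# Hypothetical diagnostic: confirm whether ibm_db_dbi is importable at all and,
# if so, where it lives on disk.
try:
    import ibm_db_dbi
    print("ibm_db_dbi found at", ibm_db_dbi.__file__)
except ImportError as exc:
    print("ibm_db_dbi is missing:", exc)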

PyPI package is outdated

Given that there have been 6 commits since the 8/31/2016 release, is there a reason that the package hasn't been uploaded to PyPI? I tend to install via pip but that doesn't get me the current state of the package.

executing rename table via sqlalchemy silently fails

Hello. I have a Python application and I am using SQLAlchemy to manage my DB2 database. I can run most SQL commands successfully, for example:

from sqlalchemy import create_engine, text

engine = create_engine("db2+ibm_db://userid:passwd@hostname:port/database;SECURITY=SSL;")
stmt = text("ALTER TABLE TEST.A DROP CONSTRAINT SQL191023141500200")
engine.execute(stmt)

However, when I run a table rename, it silently fails (no exception is thrown):

stmt = text("RENAME TABLE TEST.TAB1 TO TAB2")

If I run the same command via other means, it works fine. Any suggestions on what the issue might be?
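
One thing worth ruling out (an assumption on my part, not a confirmed diagnosis) is that the RENAME is never committed on that code path; a sketch that runs it inside an explicit transaction block:

# Hypothetical variant of the snippet above: engine.begin() commits on exit,
# so a silently rolled-back statement would be ruled out.
from sqlalchemy import create_engine, text

engine = create_engine("db2+ibm_db://userid:passwd@hostname:port/database;SECURITY=SSL;")
with engine.begin() as conn:
    conn.execute(text("RENAME TABLE TEST.TAB1 TO TAB2"))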

ibm_db_sa: decfloat support

From @GoogleCodeExporter on March 16, 2015 6:50

At the time of writing, there is no field mapping in ibm_db_sa for the DB2 DECFLOAT type, so such values just come back as strings. decimal.Decimal is the appropriate Python-side type to map them to, as discussed at https://groups.google.com/d/msg/ibm_db/mjjocQ-rHEw/OWv3h3zLZI4J

Original issue reported on code.google.com by [email protected] on 14 Mar 2014 at 2:53

Copied from original issue: ibmdb/python-ibmdb#148
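
Until the dialect maps DECFLOAT natively, a client-side workaround sketch (my own idea, not part of ibm_db_sa) is to wrap such columns in a TypeDecorator that coerces the string values back to decimal.Decimal:

# Hypothetical type for DECFLOAT columns: values arrive as strings and are
# converted to decimal.Decimal on the way out.
import decimal
from sqlalchemy.types import TypeDecorator, String

class DecFloat(TypeDecorator):
    """DECFLOAT fetched as a string, returned to Python as decimal.Decimal."""
    impl = String

    def process_result_value(self, value, dialect):
        return decimal.Decimal(value) if value is not None else None

A column would then be declared as Column(DecFloat) in the model instead of relying on reflection.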

Blob data is not saved correctly

I have the following table defined:

class FileStore(_Base):
  __tablename__ = 'file_storage'

  uuid = Column('uuid', String(48), primary_key=True)
  filename = Column('filename', String(255))
  filecontent = Column('filecontent', BLOB(1048576), nullable=False)
  last_update = Column('last_update', DateTime)

  def __init__(self, attrs):
    self.uuid = attrs['uuid']
    self.filename = attrs['filename']
    self.filecontent = attrs['filecontent']
    self.last_update = attrs['last_update']
    print('filecontent length: {}'.format(len(self.filecontent)))

Before calling session.add(record) to insert the new row into DB2, the print statement shows that the file (read in as binary) has 4102461 bytes. The record is inserted without error. However, querying the same record back returns only 46 bytes for filecontent.

Please advise.

Cannot create inspector for DB2 v11.0 on z/OS because of missing "CURRENT ISOLATION" register

After creating the engine, when instantiating the Inspector:

reflection.Inspector.from_engine(engine)

...it fails with error:

[IBM][CLI driver][DB2] SQL0206N "CURRENT" is not valid in the context where it is used. SQLSTATE=42703 SQLCODE=-206

The query that fails, issued by the inspector to get the isolation level, is:

SELECT CURRENT ISOLATION FROM sysibm.sysdummy1

After some research I found that DB2 for z/OS doesn't have that register.

Add support for isolation levels

ibm_db_sa does not support isolation levels. When DB2 is used as the ceilometer backend, this lack of isolation-level support causes failures in the REST API /v2/events:

2015-09-14 10:51:05.879 542 ERROR wsme.api [req-32584490-e803-4851-87fa-84ccb99595d8 - - - - -] Server-side error: "". Detail:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/wsmeext/pecan.py", line 84, in callfunction
    result = f(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ceilometer/api/controllers/v2/events.py", line 258, in get_all
    limit)]
  File "/usr/lib/python2.7/site-packages/ceilometer/event/storage/impl_sqlalchemy.py", line 220, in get_events
    execution_options={'isolation_level': self.isolation_level})
  File "build/bdist.linux-x86_64/egg/sqlalchemy/orm/session.py", line 882, in connection
    execution_options=execution_options)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/orm/session.py", line 887, in _connection_for_bind
    engine, execution_options)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/orm/session.py", line 337, in _connection_for_bind
    conn = conn.execution_options(**execution_options)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py", line 285, in execution_options
    self.dialect.set_connection_execution_options(c, opt)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/default.py", line 403, in set_connection_execution_options
    self._set_connection_isolation(connection, opts['isolation_level'])
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/default.py", line 412, in _set_connection_isolation
    self.set_isolation_level(connection.connection, level)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/interfaces.py", line 716, in set_isolation_level
    raise NotImplementedError()

ibm_db_sa should support isolation levels, similar to the other databases' dialects.
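
A rough sketch of what that could look like, modeled on SQLAlchemy's dialect hooks (the method names are SQLAlchemy's; the SET CURRENT ISOLATION statement is Db2 LUW syntax; none of this is the project's actual implementation):

# Hypothetical dialect subclass adding isolation-level support.
from ibm_db_sa.base import DB2Dialect

class DB2DialectWithIsolation(DB2Dialect):
    _isolation_levels = frozenset(["UR", "CS", "RS", "RR"])

    def set_isolation_level(self, dbapi_connection, level):
        if level not in self._isolation_levels:
            raise ValueError("Invalid isolation level: %r" % level)
        cursor = dbapi_connection.cursor()
        cursor.execute("SET CURRENT ISOLATION = %s" % level)
        cursor.close()

    def get_isolation_level(self, dbapi_connection):
        cursor = dbapi_connection.cursor()
        cursor.execute("SELECT CURRENT ISOLATION FROM SYSIBM.SYSDUMMY1")
        row = cursor.fetchone()
        cursor.close()
        return (row[0] or "CS").strip()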

Python Decimal to Decimal SA to Decimal Column casting problem

Given a Python Decimal(54000000000000), when you put it into a DECIMAL column in SQLAlchemy that maps to a DB2 column, extra digits get appended to it. The version of ibm_db_sa is 0.3.3. This has been seen on Mac, Windows and Linux.

from pprint import pprint
from decimal import Decimal
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column
from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm import scoped_session
from sqlalchemy.types import DECIMAL
from sqlalchemy.types import Integer
import ibm_db_sa
Base = declarative_base()

class TestTable(Base):
    __tablename__ = 'TEST_TABLE'
    __table_args__ = {'schema' : 'ALCF_TEST'}
    
    AUTO_GENERATED_ID = Column(Integer, primary_key=True)
    DECIMAL_26_6 = Column(DECIMAL)

def get_all_sa(session):
    tts = session.query(TestTable).all()
    for tt in tts:
        dct = {
               'DECIMAL_26_6':tt.DECIMAL_26_6,
               }
        pprint(dct)
    
def sa_test(username, password, host, port, database):
    dsn = '%s:%s@%s:%s/%s' % (username, password, host, port, database)
    engine = create_engine('ibm_db_sa://%s' % dsn)#, echo=True)
    conn = engine.connect()
    session = scoped_session(sessionmaker(bind=engine))
    
    tt = TestTable()
    n = Decimal(54000000000000)
    tt.DECIMAL_26_6 = n
    session.add(tt)
    session.commit()
    get_all_sa(session)
    conn.close()
        
if __name__ == "__main__":
    print(ibm_db_sa.__version__)
    username = ''
    password = ''
    host = ''
    port = 50001
    database = ""
    sa_test(username, password, host, port, database)    

When you select the inserted data you will get Decimal('54000000000000.010000'), but you should get exactly Decimal('54000000000000').

Here is the SCHEMA for the table.
-- DROP TABLE ALCF_TEST.TEST_TABLE;
CREATE SCHEMA ALCF_TEST;
CREATE TABLE ALCF_TEST.TEST_TABLE (
    AUTO_GENERATED_ID INTEGER PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY,
    DECIMAL_26_6 DECIMAL(26, 6)
);
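
One variable worth isolating (an assumption, not a verified fix): the model above declares Column(DECIMAL) with no precision or scale while the table column is DECIMAL(26, 6). A self-contained variation that declares them explicitly:

# Hypothetical variation of the model: precision and scale match the DB2 column.
from sqlalchemy import Column
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.types import DECIMAL, Integer

BaseExplicit = declarative_base()

class TestTableExplicit(BaseExplicit):
    __tablename__ = 'TEST_TABLE'
    __table_args__ = {'schema': 'ALCF_TEST'}

    AUTO_GENERATED_ID = Column(Integer, primary_key=True)
    DECIMAL_26_6 = Column(DECIMAL(26, 6))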

boolean datatype returned as nulltype

Using version
ibm-db | 3.0.1
ibm-db-sa | 0.3.3

I have a Db2 table with a BOOLEAN column. The relevant part of the table DDL:

CREATE TABLE "SCHEMA"."TABLE" (
    ...
    "VALUE_B" BOOLEAN,
    ...

I get the following warning every time I read the table:

python3.7/site-packages/ibm_db_sa/reflection.py:255: SAWarning: Did not recognize type 'BOOLEAN' of column 'VALUE_B'

and the column gets assigned sqlalchemy.types.NULLTYPE by default in ibm_db_sa/reflection.py, in DB2Reflector.get_columns.

Is it possible to add support for BOOLEAN columns so that they are not returned as nulltype?
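
Until reflection maps BOOLEAN natively, a client-side workaround sketch (my idea, not part of ibm_db_sa) that patches the reflected type via SQLAlchemy's column_reflect event:

# Hypothetical workaround: when reflection hands back NullType, substitute
# Boolean. Note this is a blunt rule: it touches every unrecognized type,
# not just BOOLEAN columns.
from sqlalchemy import event, Boolean, Table
from sqlalchemy.types import NullType

@event.listens_for(Table, "column_reflect")
def _fix_unrecognized_boolean(inspector, table, column_info):
    if isinstance(column_info.get('type'), NullType):
        column_info['type'] = Boolean()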

Trouble using credentials for SSL connection with flask-sqlalchemy

I am trying to initialize a connection to Db2 Warehouse on Cloud by setting SQLALCHEMY_DATABASE_URI to one of the provided credential parts. They look like this:


{
  "hostname": "db2w-host.us-south.db2w.cloud.ibm.com",
  "password": "passwd",
  "https_url": "https://db2w-host.us-south.db2w.cloud.ibm.com",
  "port": 50001,
  "ssldsn": "DATABASE=BLUDB;HOSTNAME=db2w-host.us-south.db2w.cloud.ibm.com;PORT=50001;PROTOCOL=TCPIP;UID=bluadmin;PWD=passwd;Security=SSL;",
  "host": "db2w-host.us-south.db2w.cloud.ibm.com",
  "uri": "db2://bluadmin:[email protected]:50001/BLUDB?ssl=true;",
  "db": "BLUDB",
  "username": "bluadmin",
  "ssljdbcurl": "jdbc:db2://db2w-host.us-south.db2w.cloud.ibm.com:50001/BLUDB:sslConnection=true;"
}

I tried ssldsn and uri, but both result in errors. What is the correct connection URI? Could you

  • improve the documentation and add more examples with SSL connections
  • support direct consumption of credentials coming from Db2 services
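
For what it's worth, the closest working pattern that can be pieced together from other issues in this tracker (a sketch, not an officially documented recipe) builds the URI from the credential fields and passes Security=SSL as a query option:

# Hypothetical URI construction from the credentials object shown above.
creds = {
    "username": "bluadmin",
    "password": "passwd",
    "host": "db2w-host.us-south.db2w.cloud.ibm.com",
    "port": 50001,
    "db": "BLUDB",
}
SQLALCHEMY_DATABASE_URI = (
    "db2+ibm_db://{username}:{password}@{host}:{port}/{db}?Security=SSL".format(**creds)
)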

AttributeError when using pyodbc

self.dbms_ver = connection.connection.dbms_ver
self.dbms_name = connection.connection.dbms_name

These two lines raise AttributeError when connecting with pyodbc ('db2+pyodbc://<dsn_name>'). I can make a PR to fix this, but I wanted to check if anyone knows of a way to retrieve these attributes when using pyodbc, or if it would be best to just leave them None or ''.
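
A defensive variant of those two lines (a sketch of one possible direction for the PR, not the maintainers' chosen fix) would simply fall back to None:

# Hypothetical rewrite of the two assignments quoted above; the surrounding
# method and its `connection` argument are assumed from context.
self.dbms_ver = getattr(connection.connection, 'dbms_ver', None)
self.dbms_name = getattr(connection.connection, 'dbms_name', None)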

OS390 (z/OS) support

From @hemlatabhatt on August 14, 2018 6:20

From @GoogleCodeExporter on March 16, 2015 6:49

I implemented a project using SQLAlchemy because I had read that DB2 support is available for it.

However, after implementing a lot of code and testing it successfully on SQLite and MySQL (while waiting for credentials for the corporate DB2), I finally connected to DB2 and discovered that it is not supported by the ibm_db_sa code.

Luckily the ibm_db driver (the binary part) seemed to work, but the system tables had a different layout. I also needed some SQL fixes to support the dedicated DB2 syntax extension for creating tables in the specific internal entity called DATABASE (sic!).

I found the AS400Reflector, which seemed closest to the one I needed, but firstly it did not work for me after all (different system tables), and secondly it was not integrated with the rest of the ibm_db_sa system, so it couldn't be used without some sort of patching of the ibm_db_sa code anyway (see issue no. 127 created for this: https://code.google.com/p/ibm-db/issues/detail?id=127). But it could be used as a very good starting point.

There are a few changes I have made to the original code to make it run in my project. Although it seems to work, I'm not sure all of them are implemented in the correct way, and at least one is definitely ugly (CREATE TABLE IN DATABASE is partly an ugly hack). :)

I wanted the IBM support team to validate this code and, if anything good can be extracted from it, to include OS390 (z/OS) support in the official ibm_db_sa distribution. To be able to do so I needed an open issue, and so I've created this one.

This applies to:
ibm_db_sa-0.3.0-py2.7.egg - which needed the update
and
ibm_db-2.0.2-py2.7-win32.egg - which was working fine

Original issue reported on code.google.com by [email protected] on 2 Aug 2013 at 3:51

Copied from original issue: ibmdb/python-ibmdb#131

Copied from original issue: ibmdb/python-ibmdb-django#29

Problems using automap_base with DB2's SAMPLE DB

Using the official Docker image from IBM for DB2 10.5.5:

https://hub.docker.com/r/ibmcom/db2express-c/

docker run -it -p 50000:50000 -e DB2INST1_PASSWORD=mypass -e LICENSE=accept ibmcom/db2express-c:latest bash

# create db
su - db2inst1
db2start
db2sampl

python requirements:

sqlalchemy==1.0.17
ibm-db==2.0.5.1
ibm_db_sa==0.3.2

(btw. tried this with the newest versions also - same result)

OS: Ubuntu 14.04 64bit

Doing:

from sqlalchemy.ext.automap import automap_base
from sqlalchemy import create_engine

Base = automap_base()
engine = create_engine("db2://db2inst1:[email protected]:50000/SAMPLE")
Base.prepare(engine, reflect=True)

raises:

Traceback (most recent call last):
  File "db2_automap.py", line 8, in <module>
    Base.prepare(engine, reflect=True)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/ext/automap.py", line 777, in prepare
    generate_relationship)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/ext/automap.py", line 899, in _relationships_for_fks
    nullable = False not in set([fk.parent.nullable for fk in fks])
AttributeError: 'NoneType' object has no attribute 'nullable'

It seems that some FKs have incorrect parent and column attributes.

This issue was reported to the SQLAlchemy team, but this place is more appropriate for it.

let's add the ability to produce column reflection with comments

SQLAlchemy 1.2 has the capability to add a comment to a column definition.

So if I create a table with a comment describing the field:

some_table = schema.Table(
    'some_table', metadata,
    Column('some_field', Date, nullable=False, index=True, unique=False,
           comment='some comments about my column')
)

columns = sqlalchemy.engine.reflection.Inspector.from_engine(my_engine).get_columns('some_table', 'schema_name')

The columns dict returned by the Inspector will then have a comment key with the column/field comments.

Something like this will add the capability, in file reflection.py of ibm_db_sa:

sys_columns = Table(
    "COLUMNS", ischema,
    Column("TABSCHEMA", CoerceUnicode, key="tabschema"),
    Column("TABNAME", CoerceUnicode, key="tabname"),
    Column("COLNAME", CoerceUnicode, key="colname"),
    Column("COLNO", sa_types.Integer, key="colno"),
    Column("TYPENAME", CoerceUnicode, key="typename"),
    Column("LENGTH", sa_types.Integer, key="length"),
    Column("SCALE", sa_types.Integer, key="scale"),
    Column("DEFAULT", CoerceUnicode, key="defaultval"),
    Column("NULLS", CoerceUnicode, key="nullable"),
    Column("IDENTITY", CoerceUnicode, key="identity"),
    Column("GENERATED", CoerceUnicode, key="generated"),
    Column("REMARKS", CoerceUnicode, key="remmarks"),
    schema="SYSCAT")

and in the function def get_columns(self, connection, table_name, schema=None, **kw):

query = sql.select([syscols.c.colname, syscols.c.typename,
                    syscols.c.defaultval, syscols.c.nullable,
                    syscols.c.length, syscols.c.scale,
                    syscols.c.identity, syscols.c.generated,
                    syscols.c.remmarks],
                   sql.and_(
                       syscols.c.tabschema == current_schema,
                       syscols.c.tabname == table_name
                   ),
                   order_by=[syscols.c.colno]
                   )

class DB2Reflector(BaseReflector):
    # ... and inside get_columns(), each reflected column dict gets a 'comment' key:
        'type': coltype,
        'nullable': r[3] == 'Y',
        'default': r[2] or None,
        'comment': r[8] or None,
        'autoincrement': (r[6] == 'Y') and (r[7] != ' '),
    })

ibm_db_dbi - connect / pconnect methods should accept **kwargs instead of a dictionary

From @hemlatabhatt on August 14, 2018 6:20

From @GoogleCodeExporter on March 16, 2015 6:49

I think the connect / pconnect methods in ibm_db_dbi should accept **kwargs instead of a dictionary. DBAPI is vague about this, but from looking at other libraries, ibm_db_dbi seems to be the only one that expects a dictionary for the 6th parameter. Psycopg2, for example, has 2 extra keyword arguments plus **kwargs, while pymssql just uses a bunch of keyword arguments.

By changing to **kwargs, the code can be simplified and also made more compatible with how SQLAlchemy's create_connect_args() works, e.g. we can have the more natural:

return ((dsn, conn_args['username'],'','',''), {'xxx': 'yyy'})

instead of:

return ((dsn, conn_args['username'],'','',''), {'conn_options': {'xxx': 'yyy'}})

Original issue reported on code.google.com by [email protected] on 2 May 2012 at 1:43


Copied from original issue: ibmdb/python-ibmdb#107

Copied from original issue: ibmdb/python-ibmdb-django#28

AS400 Dialect.

From @GoogleCodeExporter on March 16, 2015 6:49

Hi,

Currently there is no explicit Dialect for choosing an AS400 remote database. As we already have the AS400Reflector, it is in my opinion not a hard task to add this new Dialect.

This little snippet is enough in my environment to get things working:



from ibm_db_sa.ibm_db import DB2Dialect_ibm_db
from ibm_db_sa.reflection import AS400Reflector

class AS400Dialect(DB2Dialect_ibm_db):
    _reflector_cls = AS400Reflector


setup.py:
'sqlalchemy.dialects': ['db2as400=ukh.db:AS400Dialect',],

DSN:
db2as400://...

Can you add this explicit Dialect to the core package?

Original issue reported on code.google.com by [email protected] on 22 Mar 2013 at 3:24

Copied from original issue: ibmdb/python-ibmdb#127

add capability to the driver to generate queries with literals (compile_kwargs={"literal_binds": True})

If I create a query:

s_with_table_name = select([table_with_custom_columns]).\
    where(table_with_custom_columns.c.some_custom_column == "IBM")

mylog.info("result of the compilation specifying table name \n\n%s\n" % s_with_table_name)
my_str_db2 = s_with_table_name.compile(
    dialect=ibm_db_sa.dialect(),
    compile_kwargs={"literal_binds": True})

Now the driver errors out on this; other SA dialects produce a query with the literals:

SELECT table_with_custom_columns.some_custom_column, table_with_custom_columns.id, table_with_custom_columns.some_id
FROM table_with_custom_columns
WHERE table_with_custom_columns.some_custom_column = 'IBM'

vs

SELECT table_with_custom_columns.some_custom_column, table_with_custom_columns.id, table_with_custom_columns.some_id
FROM table_with_custom_columns
WHERE table_with_custom_columns.some_custom_column = :some_custom_column_1

To fix this, in file base.py of the ibm_db_sa driver, change the get_select_precolumns signature

from
def get_select_precolumns(self, select):

to
def get_select_precolumns(self, select, **kwargs):

as the SQLAlchemy compiler looks for a function with this signature when we compile with compile_kwargs={"literal_binds": True}.

It doesn't break the code if compile_kwargs is not specified on the compilation:
my_str_db2 = s_with_table_name.compile(
    dialect=ibm_db_sa.dialect())

Simple queries extremely slow to return

Running this query with sqlalchemy:

SELECT live.videos.place_id AS place_id
FROM live.videos
WHERE date(live.videos.create_dt) = ? AND live.videos.place_id <= ? ORDER BY live.videos.place_id DESC FETCH FIRST 10 ROWS ONLY

It is so slow that it times out. It does, however, run in a second or so in a SQL browser. I feel like this may be a driver issue, but I'm not sure how to go about debugging it.

ACR support

Hi - I'm working on an application that has a backing Db2 on Cloud database with a remote disaster recovery node, using ibm_db_sa to connect to it. The connection string we used is ibm_db_sa://dummy:[email protected]:50001/BLUDB;clientRerouteAlternateServerName=dashdb-txn-flex-yp-wdc04-13.services.dal.bluemix.net;clientRerouteAlternatePortNumber=50001;enableSeamlessACR=true;;SECURITY=SSL; . Today we tested the failover and got repeated failures: "ibm_db_dbi::OperationalError: [IBM][CLI Driver] CLI0106E Connection is closed. SQLSTATE=08003 SQLCODE=-99999 [SQL: 'SELECT 1 FROM SYSIBM.SYSDUMMY1'] (Background on this error at: http://sqlalche.me/e/e3q8)". Is ACR supported with ibm_db_sa? If not, is there a workaround?

Getting AttributeError for AS400

File "/home/siddhesh/Envs/agentenv/lib/python3.5/site-packages/ibm_db_sa/base.py", line 750, in get_unique_constraints
return self._reflector.get_unique_constraints(
AttributeError: 'AS400Reflector' object has no attribute 'get_unique_constraints'
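
A stopgap sketch (my own workaround, not a proper fix) is to give AS400Reflector a no-op get_unique_constraints before the engine is used, so reflection stops blowing up:

# Hypothetical monkey-patch: report no unique constraints on DB2 for i
# instead of raising AttributeError.
from ibm_db_sa.reflection import AS400Reflector

def _get_unique_constraints(self, connection, table_name, schema=None, **kw):
    return []

AS400Reflector.get_unique_constraints = _get_unique_constraints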

ibm_db_sa - Fails on Python 3

From @GoogleCodeExporter on March 16, 2015 6:50

What steps will reproduce the problem?
1. Issue a create_engine() using "db+pyodbc400" dialect.

What is the expected output? What do you see instead?

Expecting return of engine using pyodbc, but fails on Python 3.

error in base.py:
  File "C:\Python34\lib\site-packages\ibm_db_sa\base.py", line 35, in <listcomp>
    SA_Version = [long(ver_token) for ver_token in SA_Version.split('.')[0:2]]
NameError: name 'long' is not defined
shell returned 1

What version of the product are you using? On what operating system?
Windows 7, Python 3.4 (32bit), SQLAlchemy 0.9.4, ibm_db_sa 0.3.1

Please provide any additional information below.

It appears that it has not been ported to Python 3. Are there plans to support Python 3?


Original issue reported on code.google.com by [email protected] on 11 Jun 2014 at 9:11

Copied from original issue: ibmdb/python-ibmdb#154

pyodbc connection string broken

create_engine('db2+pyodbc:///?odbc_connect={}'.format(connection_string)) fails with the following stack trace:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/mnt/c/Users/a683040/Projects/uccx/.venv/lib/python3.6/site-packages/sqlalchemy/engine/__init__.py", line 488, in create_engine
    return strategy.create(*args, **kwargs)
  File "/mnt/c/Users/a683040/Projects/uccx/.venv/lib/python3.6/site-packages/sqlalchemy/engine/strategies.py", line 98, in create
    (cargs, cparams) = dialect.create_connect_args(u)
  File "/mnt/c/Users/a683040/Projects/uccx/.venv/lib/python3.6/site-packages/ibm_db_sa/pyodbc.py", line 51, in create_connect_args
    connectors = [urllib.unquote_plus(keys.pop('odbc_connect'))]
AttributeError: module 'urllib' has no attribute 'unquote_plus'

This is because unquote_plus has been moved to urllib.parse.
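
A minimal sketch of the usual Python 2/3 compatible form of that import (illustrative; the actual fix belongs in ibm_db_sa/pyodbc.py):

# Hypothetical compatibility shim for the call quoted in the traceback above.
try:
    from urllib.parse import unquote_plus  # Python 3
except ImportError:
    from urllib import unquote_plus        # Python 2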

DB2 on AS400: An unexpected token "ISOLATION" was found on ibm_db_sa/ibm_db.py

[Environment]
Database Server: DB2 on AS400 with DB2 Connect installed to allow TCP/IP connections (connection tested using ibm_db and the default port number)
Application Server: Linux server running Python 3 + Flask application, using ibm_db, ibm_db_sa and SQLAlchemy.

[Problem]
Trying to run a simple query to test that the connection from the application server to the test server is working when using SQLAlchemy + ibm_db_sa adapter:

# Tried using two different adapter names
engine = sqlalchemy.create_engine('ibm_db_sa://user:secret@database_server.host.com:446/database1')
engine = sqlalchemy.create_engine("db2+ibm_db://user:secret@database_server.host.com:446/database1')
result = engine.execute("select USER from USERS")
for row in result:
    print("USER:", row['USER'])

This produces the following error:

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/ibm_db_dbi.py", line 1235, in _set_cursor_helper
    num_columns = ibm_db.num_fields(self.stmt_handler)
Exception: SQLNumResultCols failed: [IBM][CLI Driver][AS] SQL0104N  An unexpected token "ISOLATION" was found following "".  Expected tokens may include:  "SERVER CLIENT_ACCTNG CLIENT_APPLNAME CLIENT_PROGRAMID CLIE".  SQLSTATE=42601 SQLCODE=-104

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/base.py", line 2158, in _wrap_pool_connect
    return fn()
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 403, in connect
    return _ConnectionFairy._checkout(self)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 791, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 532, in checkout
    rec = pool._do_get()
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 1196, in _do_get
    self._dec_overflow()
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/util/compat.py", line 187, in reraise
    raise value
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 1193, in _do_get
    return self._create_connection()
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 350, in _create_connection
    return _ConnectionRecord(self)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 477, in __init__
    self.__connect(first_connect_check=True)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 684, in __connect
    exec_once(self.connection, self)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/event/attr.py", line 274, in exec_once
    self(*args, **kw)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/event/attr.py", line 284, in __call__
    fn(*args, **kw)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/util/langhelpers.py", line 1334, in go
    return once_fn(*arg, **kw)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/strategies.py", line 183, in first_connect
    dialect.initialize(c)
  File "/usr/local/lib/python3.5/dist-packages/ibm_db_sa/base.py", line 695, in initialize
    super(DB2Dialect, self).initialize(connection)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/default.py", line 274, in initialize
    self.get_isolation_level(connection.connection)
  File "/usr/local/lib/python3.5/dist-packages/ibm_db_sa/ibm_db.py", line 135, in get_isolation_level
    cursor.execute('SELECT CURRENT ISOLATION FROM sysibm.sysdummy1')
  File "/usr/local/lib/python3.5/dist-packages/ibm_db_dbi.py", line 1355, in execute
    self._set_cursor_helper()
  File "/usr/local/lib/python3.5/dist-packages/ibm_db_dbi.py", line 1238, in _set_cursor_helper
    raise self.messages[len(self.messages) - 1]
ibm_db_dbi.ProgrammingError: ibm_db_dbi::ProgrammingError: SQLNumResultCols failed: [IBM][CLI Driver][AS] SQL0104N  An unexpected token "ISOLATION" was found following "".  Expected tokens may include:  "SERVER CLIENT_ACCTNG CLIENT_APPLNAME CLIENT_PROGRAMID CLIE".  SQLSTATE=42601 SQLCODE=-104

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "test_sqla.py", line 12, in <module>
    result = engine.execute("select USER from USERS")
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/base.py", line 2074, in execute
    connection = self.contextual_connect(close_with_result=True)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/base.py", line 2123, in contextual_connect
    self._wrap_pool_connect(self.pool.connect, None),
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/base.py", line 2162, in _wrap_pool_connect
    e, dialect, self)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/base.py", line 1476, in _handle_dbapi_exception_noconnection
    exc_info
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/util/compat.py", line 186, in reraise
    raise value.with_traceback(tb)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/base.py", line 2158, in _wrap_pool_connect
    return fn()
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 403, in connect
    return _ConnectionFairy._checkout(self)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 791, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 532, in checkout
    rec = pool._do_get()
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 1196, in _do_get
    self._dec_overflow()
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/util/compat.py", line 187, in reraise
    raise value
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 1193, in _do_get
    return self._create_connection()
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 350, in _create_connection
    return _ConnectionRecord(self)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 477, in __init__
    self.__connect(first_connect_check=True)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 684, in __connect
    exec_once(self.connection, self)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/event/attr.py", line 274, in exec_once
    self(*args, **kw)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/event/attr.py", line 284, in __call__
    fn(*args, **kw)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/util/langhelpers.py", line 1334, in go
    return once_fn(*arg, **kw)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/strategies.py", line 183, in first_connect
    dialect.initialize(c)
  File "/usr/local/lib/python3.5/dist-packages/ibm_db_sa/base.py", line 695, in initialize
    super(DB2Dialect, self).initialize(connection)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/default.py", line 274, in initialize
    self.get_isolation_level(connection.connection)
  File "/usr/local/lib/python3.5/dist-packages/ibm_db_sa/ibm_db.py", line 135, in get_isolation_level
    cursor.execute('SELECT CURRENT ISOLATION FROM sysibm.sysdummy1')
  File "/usr/local/lib/python3.5/dist-packages/ibm_db_dbi.py", line 1355, in execute
    self._set_cursor_helper()
  File "/usr/local/lib/python3.5/dist-packages/ibm_db_dbi.py", line 1238, in _set_cursor_helper
    raise self.messages[len(self.messages) - 1]
sqlalchemy.exc.ProgrammingError: (ibm_db_dbi.ProgrammingError) ibm_db_dbi::ProgrammingError: SQLNumResultCols failed: [IBM][CLI Driver][AS] SQL0104N  An unexpected token "ISOLATION" was found following "".  Expected tokens may include:  "SERVER CLIENT_ACCTNG CLIENT_APPLNAME CLIENT_PROGRAMID CLIE".  SQLSTATE=42601 SQLCODE=-104 (Background on this error at: http://sqlalche.me/e/f405)

I also tried using table models instead of my own query, and tried Python 2.7 as well. Everything returned the same error:

sqlalchemy.exc.ProgrammingError: (ibm_db_dbi.ProgrammingError) ibm_db_dbi::ProgrammingError: 
SQLNumResultCols failed: [IBM][CLI Driver][AS] SQL0104N  An unexpected token "ISOLATION" was found following "".  
Expected tokens may include:  "SERVER CLIENT_ACCTNG CLIENT_APPLNAME CLIENT_PROGRAMID CLIE".  
SQLSTATE=42601 SQLCODE=-104 (Background on this error at: http://sqlalche.me/e/f405)

Parameter injection issue upon db engine creation

Hi,

I discovered an issue in db engine creation code.
When the code prepares the DB connection string, it fails to escape the semicolon character. This makes it possible to inject arbitrary parameters into the connection string.
Under some circumstances this can be a security issue.

PoC:
I created two DB2 instances in Docker; one is listening on tcp/50000 (default) and the other on tcp/50001.

ActivePython 2.7.8.10 (ActiveState Software Inc.) based on
Python 2.7.8 (default, Jul 2 2014, 19:48:49) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.

>>> import sqlalchemy
>>> from sqlalchemy import *
>>> import ibm_db_sa
>>> from urllib import quote
>>> import sys

>>> pw="aaa"
>>> db2 = sqlalchemy.create_engine('ibm_db_sa://db2inst1:%[email protected]/test' % quote(pw))
>>> conn = db2.connect()
>>> conn.execute("select host_name from TABLE(SYSPROC.ENV_GET_SYS_INFO()) AS SYSTEMINFO").fetchall()
[(u'b5b9411e889e',)]
>>> conn.close()

>>> pw="aaa;PORT=50001"
>>> db2 = sqlalchemy.create_engine('ibm_db_sa://db2inst1:%[email protected]/test' % quote(pw))
>>> conn = db2.connect()
>>> conn.execute("select host_name from TABLE(SYSPROC.ENV_GET_SYS_INFO()) AS SYSTEMINFO").fetchall()
[(u'e6aa666c18bf',)]
>>> conn.close()
>>> sqlalchemy.__version__
'0.9.8'

As you can see, I was able to trick the program into connecting to an alternative database via the password (which is a realistic vector, I think).

I originally reported this issue here:
https://groups.google.com/forum/#!topic/sqlalchemy/t0qooflT1Ao

I received this response:
"... This issue is specific to the workings of the dialect, in that it is
concatenating the URL components into a single string for the DBAPI
connect function. As ibm_db_sa is not part of SQLAlchemy, you'd need to
report this issue to them. ..."
Therefore I report it here.

urllib.error.URLError: <urlopen error [Errno -2] Name or service not known>

When I try to pip install it, I get:

    During handling of the above exception, another exception occurred:
  Collecting ibm_db>=2.0.0
  Downloading http://pypi.site.com/packages/98/cb/f77d9bd5f64246074af364cc30e20e3044c533890f3b67d30e89615c2fc5/ibm_db-3.0.1.tar.gz (642 kB)
     |████████████████████████████████| 642 kB 705 kB/s 
    ERROR: Command errored out with exit status 1:
     command: /local/scratch/venv/nd1/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-60zm8kcj/ibm-db/setup.py'"'"'; __file__='"'"'/tmp/pip-install-60zm8kcj/ibm-db/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-60zm8kcj/ibm-db/pip-egg-info
         cwd: /tmp/pip-install-60zm8kcj/ibm-db/
    Complete output (44 lines):
    Detected 64-bit Python
    Downloading https://public.dhe.ibm.com/ibmdl/export/pub/software/data/db2/drivers/odbc_cli/linuxx64_odbc_cli.tar.gz
    Traceback (most recent call last):
      File "/local/scratch/python-base/lib/python3.7/urllib/request.py", line 1317, in do_open
        encode_chunked=req.has_header('Transfer-encoding'))
      File "/local/scratch//python-base/lib/python3.7/http/client.py", line 1244, in request
        self._send_request(method, url, body, headers, encode_chunked)
      File "/local/scratch/python-base/lib/python3.7/http/client.py", line 1290, in _send_request
        self.endheaders(body, encode_chunked=encode_chunked)
      File "/local/scratch/python-base/lib/python3.7/http/client.py", line 1239, in endheaders
        self._send_output(message_body, encode_chunked=encode_chunked)
      File "/local/scratch/python-base/lib/python3.7/http/client.py", line 1026, in _send_output
        self.send(msg)
      File "/local/scratch/python-base/lib/python3.7/http/client.py", line 966, in send
        self.connect()
      File "/local/scratch/python-base/lib/python3.7/http/client.py", line 1406, in connect
        super().connect()
      File "/local/scratch/python-base/lib/python3.7/http/client.py", line 938, in connect
        (self.host,self.port), self.timeout, self.source_address)
      File "/local/scratch/python-base/lib/python3.7/socket.py", line 707, in create_connection
        for res in getaddrinfo(host, port, 0, SOCK_STREAM):
      File "/local/scratch/python-base/lib/python3.7/socket.py", line 748, in getaddrinfo
        for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
    socket.gaierror: [Errno -2] Name or service not known
  
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-install-60zm8kcj/ibm-db/setup.py", line 185, in <module>
        file_stream = BytesIO(request.urlopen(url, context=context).read())
      File "/local/scratch/python-base/lib/python3.7/urllib/request.py", line 222, in urlopen
        return opener.open(url, data, timeout)
      File "/local/scratch/python-base/lib/python3.7/urllib/request.py", line 525, in open
        response = self._open(req, data)
      File "/local/scratch/python-base/lib/python3.7/urllib/request.py", line 543, in _open
        '_open', req)
      File "/local/scratch/python-base/lib/python3.7/urllib/request.py", line 503, in _call_chain
        result = func(*args)
      File "/local/scratch/python-base/lib/python3.7/urllib/request.py", line 1360, in https_open
        context=self._context, check_hostname=self._check_hostname)
      File "/local/scratch/python-base/lib/python3.7/urllib/request.py", line 1319, in do_open
        raise URLError(err)
    urllib.error.URLError: <urlopen error [Errno -2] Name or service not known>
    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

No module named 'ibm_db_sa.ibm_db_sa' on Python3 Ubuntu 18.04

Is Python 3 not officially supported for the SA Db2 driver? If not, I think we should add that.

Error:

$ python3 django-test.py
Traceback (most recent call last):
  File "django-test.py", line 3, in <module>
    import ibm_db_sa.ibm_db_sa
ModuleNotFoundError: No module named 'ibm_db_sa.ibm_db_sa'

On a fresh Ubuntu 18.04 virtual machine, I did this:

sudo apt-get update
sudo apt install python-pip
sudo apt install python3-pip
sudo apt-get install python-setuptools

pip3 install ibm_db
pip3 install camelcase # as a test to ensure pip3 is working
python3 testpy3.py

update multiple not raising IntegrityError

An ibm_db_dbi.Error exception is raised instead of IntegrityError when updating multiple objects. It works fine when updating a single object.

Here's some code to reproduce it. In the second try block, IntegrityError was not raised.

from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.exc import IntegrityError
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

connstr = 'db2://chinook:p4ssw0rd@localhost:50000/Chinook'
engine = create_engine(connstr)
Base = declarative_base()

class Item(Base):
    __tablename__ = 'item'
    id = Column(Integer, primary_key=True)

Base.metadata.drop_all(engine)
Base.metadata.create_all(engine)

Session = sessionmaker(bind=engine)
session = Session()

session.add_all([
    Item(id=1),
    Item(id=2),
    Item(id=3),
    Item(id=4)
])
session.commit()

query = session.query(Item).filter(Item.id > 1)

item2 = query.first()
item2.id = 1
try:
    session.commit()
except IntegrityError as exc:
    print exc
    print 'expected error'
finally:
    session.rollback()


print '----'


items = query.all()
for i in items:
    i.id = 1

try:
    session.commit()
except IntegrityError as exc:
    print exc
    print 'expected error'
except Exception as exc:
    print exc
    print 'unexpected error'
finally:
    session.rollback()

The above code updates a PK to a value already present in the table, and the database returns error 803.

When updating a single object, a nice IntegrityError is raised. That is correct.

(ibm_db_dbi.IntegrityError) ibm_db_dbi::IntegrityError: Statement Execute Failed: [IBM][CLI Driver][DB2/LINUXX8664] SQL0803N One or more values in the INSERT statement, UPDATE statement, or foreign key update caused by a DELETE statement are not valid because the primary key, unique constraint or unique index identified by "1" constrains table "CHINOOK.Invoice" from having duplicate values for the index key. SQLSTATE=23505 SQLCODE=-803 [SQL: u'UPDATE "Invoice" SET "InvoiceId"=? WHERE "Invoice"."InvoiceId" = ?'] [parameters: (1, 2)] (Background on this error at: http://sqlalche.me/e/gkpj)

But when updating the PK of multiple objects in the same session, we get a general Error instead of IntegrityError:

(ibm_db_dbi.Error) ibm_db_dbi::Error:
Error 1: [IBM][CLI Driver][DB2/LINUXX8664] SQL0803N One or more values in the INSERT statement, UPDATE statement, or foreign key update caused by a DELETE statement are not valid because the primary key, unique constraint or unique index identified by "1" constrains table "CHINOOK.Invoice" from having duplicate values for the index key. SQLSTATE=23505 SQLCODE=-803

Error 2: [IBM][CLI Driver][DB2/LINUXX8664] SQL0803N One or more values in the INSERT statement, UPDATE statement, or foreign key update caused by a DELETE statement are not valid because the primary key, unique constraint or unique index identified by "1" constrains table "CHINOOK.Invoice" from having duplicate values for the index key. SQLSTATE=23505 SQLCODE=-803
[SQL: u'UPDATE "Invoice" SET "InvoiceId"=? WHERE "Invoice"."InvoiceId" = ?'] [parameters: ((1, 2), (1, 3))] (Background on this error at: http://sqlalche.me/e/dbapi)

Doesn't work on AS400

Script:

import pandas as pd
from sqlalchemy import create_engine
import glob

csv = (glob.glob("/home/nahum/*.csv"))
df_csv = pd.DataFrame(csv)

for index, row in df_csv.iterrows():
    datastore = pd.read_csv(str(row[0]), delimiter=',', header=[0], skipfooter=3)
    engine = create_engine('ibm_db_sa://USUARIO:[email protected]/*local')
    datastore.to_sql('SCHEMACSV.conc', engine, if_exists='append', index=False)

It doesn't work on AS400 and gives this error:
OperationalError: (ibm_db_dbi.OperationalError) ibm_db_dbi::OperationalError: [IBM][CLI Driver] SQL30081N A communication error has been detected. Communication protocol being used: "TCP/IP". Communication API being used: "SOCKETS". Location where the error was detected: "172.16.0.78". Communication function detecting the error: "connect". Protocol specific error code(s): "111", "", "". SQLSTATE=08001 SQLCODE=-30081

This code works on PostgreSQL, DB2 for Linux, MySQL... on every server, but not on DB2 for iSeries.

Improper specification of the illegal_initial_characters

From @GoogleCodeExporter on March 16, 2015 6:49

ibm_db_sa-0.3.0-py27.egg/ibm_db_sa/base.py:509
    illegal_initial_characters = set(xrange(0, 10)).union(["_", "$"])

The expression 'set(xrange(0, 10))' gives a set made of 10 integers, where a set of 10 single-character strings is needed.

This can be compared to the specification of this variable as originally done in SQLAlchemy (I'm using version 0.8.2, file sqlalchemy/sql/compiler.py:54):
ILLEGAL_INITIAL_CHARACTERS = set([str(x) for x in xrange(0, 10)]).union(['$'])


The problem shows up later, when this is used for checking 
(sqlalchemy/sql/compiler.py:2419):
                or value[0] in self.illegal_initial_characters


You can see below the results of the test made in Python 2.7:
>>> '1test'[0] in set(xrange(0, 10)).union(["_", "$"])
False
>>> '1test'[0] in set([str(x) for x in xrange(0, 10)]).union(['$'])
True

The line mentioned at the beginning should be changed to:
    illegal_initial_characters = set([str(x) for x in xrange(0, 10)]).union(["_", "$"])

Original issue reported on code.google.com by [email protected] on 26 Aug 2013 at 2:01

Copied from original issue: ibmdb/python-ibmdb#133

Error attempting to write dataframe to table and specifying replace

I have tried the following engine strings:

from sqlalchemy import create_engine
#e = create_engine("db2+ibm_db://user:pass@host[:port]/database")
#e = create_engine("ibm_db_sa://user:pass@host[:port]/database")

retrieving data from table into dataframe is no problem:

import pandas as pd
dff = pd.read_sql("select * from model_output_1 limit 10", con=e)

writing data from a dataframe to a table is no problem in append mode:

dff.to_sql(name="model_output_4", con=e, schema=schema, if_exists="append", index=False)

as soon as I switch the if_exists parameter to "replace", I get the following error:

dff.to_sql(name="model_output_4", con=e, schema=schema, if_exists="replace", index=False)

TypeError Traceback (most recent call last)
in ()
----> 1 dff.to_sql(name='model_output_4', con=e, schema='DASH5294', if_exists='replace', index=False)

/opt/conda/lib/python3.6/site-packages/pandas/core/generic.py in to_sql(self, name, con, schema, if_exists, index, index_label, chunksize, dtype, method)
2529 sql.to_sql(self, name, con, schema=schema, if_exists=if_exists,
2530 index=index, index_label=index_label, chunksize=chunksize,
-> 2531 dtype=dtype, method=method)
2532
2533 def to_pickle(self, path, compression='infer',

/opt/conda/lib/python3.6/site-packages/pandas/io/sql.py in to_sql(frame, name, con, schema, if_exists, index, index_label, chunksize, dtype, method)
458 pandas_sql.to_sql(frame, name, if_exists=if_exists, index=index,
459 index_label=index_label, schema=schema,
--> 460 chunksize=chunksize, dtype=dtype, method=method)
461
462

/opt/conda/lib/python3.6/site-packages/pandas/io/sql.py in to_sql(self, frame, name, if_exists, index, index_label, schema, chunksize, dtype, method)
1171 if_exists=if_exists, index_label=index_label,
1172 schema=schema, dtype=dtype)
-> 1173 table.create()
1174 table.insert(chunksize, method=method)
1175 if (not name.isdigit() and not name.islower()):

/opt/conda/lib/python3.6/site-packages/pandas/io/sql.py in create(self)
575 "Table '{name}' already exists.".format(name=self.name))
576 elif self.if_exists == 'replace':
--> 577 self.pd_sql.drop_table(self.name, self.schema)
578 self._execute_create()
579 elif self.if_exists == 'append':

/opt/conda/lib/python3.6/site-packages/pandas/io/sql.py in drop_table(self, table_name, schema)
1220 schema = schema or self.meta.schema
1221 if self.has_table(table_name, schema):
-> 1222 self.meta.reflect(only=[table_name], schema=schema)
1223 self.get_table(table_name, schema).drop()
1224 self.meta.clear()

/opt/conda/lib/python3.6/site-packages/sqlalchemy/sql/schema.py in reflect(self, bind, schema, views, only, extend_existing, autoload_replace, resolve_fks, **dialect_kwargs)
4197
4198 available = util.OrderedSet(
-> 4199 bind.engine.table_names(schema, connection=conn)
4200 )
4201 if views:

/opt/conda/lib/python3.6/site-packages/sqlalchemy/engine/base.py in table_names(self, schema, connection)
2241 if not schema:
2242 schema = self.dialect.default_schema_name
-> 2243 return self.dialect.get_table_names(conn, schema)
2244
2245 def has_table(self, table_name, schema=None):

/opt/conda/lib/python3.6/site-packages/ibm_db_sa/base.py in get_table_names(self, connection, schema, **kw)
721
722 def get_table_names(self, connection, schema=None, **kw):
--> 723 return self._reflector.get_table_names(connection, schema=schema, **kw)
724
725 def get_view_names(self, connection, schema=None, **kw):

in get_table_names(self, connection, schema, **kw)

/opt/conda/lib/python3.6/site-packages/sqlalchemy/engine/reflection.py in cache(fn, self, con, *args, **kw)
41 info_cache = kw.get("info_cache", None)
42 if info_cache is None:
---> 43 return fn(self, con, *args, **kw)
44 key = (
45 fn.name,

/opt/conda/lib/python3.6/site-packages/ibm_db_sa/reflection.py in get_table_names(self, connection, schema, **kw)
208 where(systbl.c.tabschema == current_schema).
209 order_by(systbl.c.tabname)
--> 210 return [self.normalize_name(r[0]) for r in connection.execute(query)]
211
212 @reflection.cache

/opt/conda/lib/python3.6/site-packages/ibm_db_sa/reflection.py in (.0)
208 where(systbl.c.tabschema == current_schema).
209 order_by(systbl.c.tabname)
--> 210 return [self.normalize_name(r[0]) for r in connection.execute(query)]
211
212 @reflection.cache

TypeError: expected bytes, str found

Getting error ImportError: No module named 'ibm_db_dbi'

Hi,
I am using the ibm_db_sa driver. My connection code is "create_engine("db2+ibm_db://dbinst1:********@192.168.XX.XXX:XXXXX/sample", pool_size=20, echo=False, poolclass=QueuePool).connect()"
When I am try to connect to db I got below error

ImportError: No module named 'ibm_db_dbi'

What's wrong with my code?

ibm_db_sa 0.3.4 -import the DB API module 'No driver found for dialect 'ibm_db_sa+pyodbc400'' but failed

** While creating the engine for 'ibm_db_sa+pyodbc400://u026607:*****@172.219.154.200:446/t10050bp', SQLAlchemy tried to
** import the DB API module 'No driver found for dialect 'ibm_db_sa+pyodbc400'' but failed.


** + This is because 1 of 2 reasons:
** 1.) You forgot to install the DB API module 'No driver found for dialect 'ibm_db_sa+pyodbc400''.
** --> (Try: 'pip install No driver found for dialect 'ibm_db_sa+pyodbc400'')
** 2.) If the above step fails, you most likely forgot to
** --> install the actual database driver on your local
** --> machine! The driver is needed in order to install
** --> the Python DB API ('No driver found for dialect 'ibm_db_sa+pyodbc400'').

Supports connection option 'CurrentSchema'

The connection configuration CurrentSchema is a supported configuration for the pip package ibm-db, but not in this package.

E.g., when using ibm-db:

import ibm_db
import ibm_db_dbi
import os

conn_str = 'DATABASE={};HOSTNAME={};PORT={};Security=SSL;UID={};PWD={};CurrentSchema={}' \
    .format(
        'BLUDB',
        'db_host',
        50001,
        'bluadmin',
        os.environ['DB2_PASSWORD'],
        'NOT_BLUADMIN',
    )

conn_handler = ibm_db.connect(conn_str, '', '')
conn = ibm_db_dbi.Connection(conn_handler)

sql_stmt = '''
SELECT * FROM ACTOR LIMIT 10
'''

cur = conn.cursor()
cur.execute(sql_stmt)
row=cur.fetchall()
print(row)

It works well, querying the table successfully.

Use case

given table ACTOR in the schema NOT_BLUADMIN
when using this package:

import os

from sqlalchemy import *
from sqlalchemy.engine import create_engine
from sqlalchemy.schema import *

# Db2
e = create_engine(
    'db2+ibm_db://bluadmin:xxxx@db_host:50001/BLUDB?Security=SSL&CurrentSchema=NOT_BLUADMIN'
)
sql = text("select * from ACTOR LIMIT 10")
result = e.execute(sql)
print([row for row in result])

Expected result

it shall query table NOT_BLUADMIN.ACTOR successfully.

Actual result

It reports an error which shows that it still uses the current user name as the current schema:

sqlalchemy.exc.ProgrammingError: (ibm_db_dbi.ProgrammingError) ibm_db_dbi::ProgrammingError: Exception('SQLNumResultCols failed: [IBM][CLI Driver][DB2/LINUXX8664] SQL0204N  "BLUADMIN.ACTOR" is an undefined name.  SQLSTATE=42704 SQLCODE=-204')
[SQL: select * from ACTOR LIMIT 10]
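
A workaround sketch until CurrentSchema is honored (my own idea, not a documented feature of this package): set the schema explicitly on every new DBAPI connection with a SQLAlchemy pool event. SET CURRENT SCHEMA is standard Db2 SQL; the listener API is SQLAlchemy's.

# Hypothetical per-connection schema switch.
from sqlalchemy import create_engine, event

e = create_engine(
    'db2+ibm_db://bluadmin:xxxx@db_host:50001/BLUDB?Security=SSL'
)

@event.listens_for(e, "connect")
def _set_current_schema(dbapi_connection, connection_record):
    cursor = dbapi_connection.cursor()
    cursor.execute("SET CURRENT SCHEMA NOT_BLUADMIN")
    cursor.close()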

Flask-admin SQL Error

Hi there,

I use Flask-Admin and connect to DB2 with Flask-SQLAlchemy (ORM), following the description at https://flask-admin.readthedocs.io/en/latest/api/mod_contrib_sqla/

Usage
admin = Admin()
admin.add_view(ModelView(User, db.session))

The errors are shown below:

sqlalchemy.exc.ProgrammingError: (ibm_db_dbi.ProgrammingError) ibm_db_dbi::ProgrammingError: Exception('SQLNumResultCols failed: [IBM][CLI Driver][DB2/AIX64] SQL0245N The invocation of routine "COUNT" is ambiguous. The argument in position "1" does not have a best fit. SQLSTATE=428F5\r SQLCODE=-245')
[SQL: SELECT count(?) AS count_1
FROM "user"]
[parameters: (b'*',)]
(Background on this error at: http://sqlalche.me/e/f405)

Traceback (most recent call last):
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\flask\app.py", line 2463, in call
return self.wsgi_app(environ, start_response)
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\flask\app.py", line 2449, in wsgi_app
response = self.handle_exception(e)
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\flask\app.py", line 1866, in handle_exception
reraise(exc_type, exc_value, tb)
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\flask_compat.py", line 39, in reraise
raise value
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\flask\app.py", line 2446, in wsgi_app
response = self.full_dispatch_request()
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\flask\app.py", line 1951, in full_dispatch_request
rv = self.handle_user_exception(e)
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\flask\app.py", line 1820, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\flask_compat.py", line 39, in reraise
raise value
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\flask\app.py", line 1949, in full_dispatch_request
rv = self.dispatch_request()
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\flask\app.py", line 1935, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\flask_admin\base.py", line 69, in inner
return self._run_view(f, *args, **kwargs)
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\flask_admin\base.py", line 368, in _run_view
return fn(self, *args, **kwargs)
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\flask_admin\model\base.py", line 1970, in index_view
view_args.search, view_args.filters, page_size=page_size)
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\flask_admin\contrib\sqla\view.py", line 1065, in get_list
count = count_query.scalar() if count_query else None
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\sqlalchemy\orm\query.py", line 3330, in scalar
ret = self.one()
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\sqlalchemy\orm\query.py", line 3300, in one
ret = self.one_or_none()
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\sqlalchemy\orm\query.py", line 3269, in one_or_none
ret = list(self)
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\sqlalchemy\orm\query.py", line 3342, in iter
return self._execute_and_instances(context)
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\sqlalchemy\orm\query.py", line 3367, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\sqlalchemy\engine\base.py", line 988, in execute
return meth(self, multiparams, params)
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\sqlalchemy\sql\elements.py", line 287, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\sqlalchemy\engine\base.py", line 1107, in _execute_clauseelement
distilled_params,
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\sqlalchemy\engine\base.py", line 1253, in _execute_context
e, statement, parameters, cursor, context
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\sqlalchemy\engine\base.py", line 1473, in _handle_dbapi_exception
util.raise_from_cause(sqlalchemy_exception, exc_info)
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\sqlalchemy\util\compat.py", line 398, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\sqlalchemy\util\compat.py", line 152, in reraise
raise value.with_traceback(tb)
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\sqlalchemy\engine\base.py", line 1249, in _execute_context
cursor, statement, parameters, context
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\ibm_db_sa\ibm_db.py", line 112, in do_execute
cursor.execute(statement, parameters)
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\ibm_db_dbi.py", line 1356, in execute
self._set_cursor_helper()
File "C:\Users\ct\Anaconda3\envs\rpt_siqx\lib\site-packages\ibm_db_dbi.py", line 1239, in _set_cursor_helper
raise self.messages[len(self.messages) - 1]
sqlalchemy.exc.ProgrammingError: (ibm_db_dbi.ProgrammingError) ibm_db_dbi::ProgrammingError: Exception('SQLNumResultCols failed: [IBM][CLI Driver][DB2/AIX64] SQL0245N The invocation of routine "COUNT" is ambiguous. The argument in position "1" does not have a best fit. SQLSTATE=428F5\r SQLCODE=-245')
[SQL: SELECT count(?) AS count_1
FROM "user"]
[parameters: (b'*',)]
(Background on this error at: http://sqlalche.me/e/f405)

Many thanks.
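
One workaround sometimes suggested for this DB2 limitation (a sketch, assuming a Flask-Admin version that exposes get_count_query(), which recent releases do) is to override the count query so that '*' is emitted as a SQL literal instead of a typeless bound parameter:

from flask_admin.contrib.sqla import ModelView
from sqlalchemy import func, literal_column


class DB2ModelView(ModelView):
    def get_count_query(self):
        # count(*) emitted as a literal, so no parameter is bound whose type DB2 cannot infer
        return self.session.query(
            func.count(literal_column('*'))
        ).select_from(self.model)


# reusing the admin, User and db objects from the usage snippet above
admin.add_view(DB2ModelView(User, db.session))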

Getting error 'ibm_db_dbi::Error: 0 params bound not matching 1 required'

Hi,

I am calling a stored procedure which has one OUT parameter, so my stored procedure call looks like:
CALL SOURCE.add_user_data ('BBBB', 'AAAA', 2,?);

When I execute this statement using the connection.execute() method, I get the following error:

sqlalchemy.exc.DBAPIError: (ibm_db_dbi.Error) ibm_db_dbi::Error: 0 params bound not matching 1 required [SQL: "CALL SOURCE.add_user_data ('BBBB', 'AAAA', 2,?);"] (Background on this error at: http://sqlalche.me/e/dbapi)

Please help
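
One way to bind the OUT parameter is to go through the raw DBAPI cursor rather than connection.execute(). This is only a sketch: it assumes the ibm_db_dbi driver, reuses an engine created with create_engine() as elsewhere in these issues, and takes the procedure name and arguments from the statement above without knowing their real types.

raw = engine.raw_connection()
try:
    cur = raw.cursor()
    # the fourth value is a placeholder for the OUT parameter; callproc()
    # returns the parameter tuple with OUT values filled in
    params = cur.callproc('SOURCE.ADD_USER_DATA', ('BBBB', 'AAAA', 2, 0))
    print('OUT parameter:', params[3])
    raw.commit()
finally:
    raw.close()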

db2+pyodbc reflection fails on default value for column

I'm getting error

sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('ODBC SQL type -99 is not yet supported. column-index=2 type=-99', 'HY106') (Background on this error at: http://sqlalche.me/e/f405)

when reflecting the Chinook database on a DB2 10.5 server, because the column DEFAULT is of type CLOB and pyodbc does not support it. Is that default value necessary, since it's a server-side default?

My current workaround is to select None for default values when collecting column information (https://github.com/ibmdb/python-ibmdbsa/blob/master/ibm_db_sa/ibm_db_sa/reflection.py#L240). Are there any drawbacks to doing that?

Below more detail:

import urllib
from sqlalchemy import create_engine
from sqlalchemy.ext.automap import automap_base  # needed for automap_base() below

odbcstr = 'DRIVER=DB2; HOSTNAME=127.0.0.1; DATABASE=Chinook; PROTOCOL=TCPIP; PORT=50000; UID=chinook; PWD=p4ssw0rd'
odbcstr = urllib.quote(odbcstr)  # Python 2 urllib.quote, matching the traceback below
connstr = 'db2+pyodbc:///?odbc_connect={}'.format(odbcstr)
engine = create_engine(connstr)

Base = automap_base()
Base.prepare(engine, reflect=True)

Full traceback

Traceback (most recent call last):
  File "/home/mike/projects/slashdb/experiments/db2_over_pyodbc.py", line 69, in <module>
    Base.prepare(engine, reflect=True)
  File "build/bdist.linux-x86_64/egg/sqlalchemy/ext/automap.py", line 761, in prepare
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/schema.py", line 4159, in reflect
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/schema.py", line 469, in __new__
  File "build/bdist.linux-x86_64/egg/sqlalchemy/util/langhelpers.py", line 67, in __exit__
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/schema.py", line 464, in __new__
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/schema.py", line 552, in _init
  File "build/bdist.linux-x86_64/egg/sqlalchemy/sql/schema.py", line 574, in _autoload
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py", line 1596, in run_callable
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/default.py", line 415, in reflecttable
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/reflection.py", line 626, in reflecttable
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/reflection.py", line 370, in get_columns
  File "/home/mike/envs/slashdb/local/lib/python2.7/site-packages/ibm_db_sa/base.py", line 734, in get_columns
    connection, table_name, schema=schema, **kw)
  File "<string>", line 2, in get_columns
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/reflection.py", line 55, in cache
  File "/home/mike/envs/slashdb/local/lib/python2.7/site-packages/ibm_db_sa/reflection.py", line 250, in get_columns
    for r in connection.execute(query):
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/result.py", line 946, in __iter__
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/result.py", line 1276, in fetchone
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/base.py", line 1458, in _handle_dbapi_exception
  File "build/bdist.linux-x86_64/egg/sqlalchemy/util/compat.py", line 296, in raise_from_cause
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/result.py", line 1268, in fetchone
  File "build/bdist.linux-x86_64/egg/sqlalchemy/engine/result.py", line 1148, in _fetchone_impl
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('ODBC SQL type -99 is not yet supported.  column-index=2  type=-99', 'HY106') (Background on this error at: http://sqlalche.me/e/f405)

The error is raised when executing sql for column information of tables in this reflection code https://github.com/ibmdb/python-ibmdbsa/blob/master/ibm_db_sa/ibm_db_sa/reflection.py#L240

The executed SQL looks like this:

SELECT 
  "SYSCAT"."COLUMNS"."COLNAME", 
  "SYSCAT"."COLUMNS"."TYPENAME", 
  "SYSCAT"."COLUMNS"."DEFAULT", 
  "SYSCAT"."COLUMNS"."NULLS", 
  "SYSCAT"."COLUMNS"."LENGTH", 
  "SYSCAT"."COLUMNS"."SCALE", 
  "SYSCAT"."COLUMNS"."IDENTITY", 
  "SYSCAT"."COLUMNS"."GENERATED" 
FROM 
  "SYSCAT"."COLUMNS" 
WHERE "SYSCAT"."COLUMNS"."TABSCHEMA" = 'CHINOOK' 
      AND "SYSCAT"."COLUMNS"."TABNAME" = 'Album' 
ORDER BY "SYSCAT"."COLUMNS"."COLNO"

According to the query below, the column "SYSCAT"."COLUMNS"."DEFAULT" is a CLOB.

SELECT NAME, COLTYPE, TBNAME FROM SYSIBM.SYSCOLUMNS WHERE NAME = 'DEFAULT';
NAME,COLTYPE,TBNAME
DEFAULT,"CLOB    ",SYSCOLUMNS
DEFAULT,"CLOB    ",SYSROUTINEPARMS
DEFAULT,"CLOB    ",SYSATTRIBUTES
DEFAULT,"CLOB    ",SYSVARIABLES
DEFAULT,"CLOB    ",COLUMNS
DEFAULT,"CLOB    ",ROUTINEPARMS
DEFAULT,"CLOB    ",ROWFIELDS
DEFAULT,"CLOB    ",VARIABLES
DEFAULT,"VARCHAR ",ADMINTEMPCOLUMNS

pyodbc does not support CLOB or BLOB, and that is where the error comes from.
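
A minimal sketch of an alternative workaround (an assumption, not a change to ibm_db_sa itself): cast the CLOB-typed DEFAULT column to VARCHAR in the catalog query, so pyodbc never receives ODBC SQL type -99. The VARCHAR(254) truncation length is an arbitrary choice for illustration; connection details are taken from the snippet above.

import pyodbc

conn = pyodbc.connect(
    "DRIVER=DB2; HOSTNAME=127.0.0.1; DATABASE=Chinook; "
    "PROTOCOL=TCPIP; PORT=50000; UID=chinook; PWD=p4ssw0rd"
)
cur = conn.cursor()
cur.execute(
    """
    SELECT "COLNAME", "TYPENAME",
           CAST("DEFAULT" AS VARCHAR(254)) AS "DEFAULT",
           "NULLS", "LENGTH", "SCALE", "IDENTITY", "GENERATED"
    FROM "SYSCAT"."COLUMNS"
    WHERE "TABSCHEMA" = ? AND "TABNAME" = ?
    ORDER BY "COLNO"
    """,
    ("CHINOOK", "Album"),
)
for row in cur.fetchall():
    print(row)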

sqlalchemy can't find dialects:db2.ibm_db

I don't have sudo access where I am installing SQLAlchemy. Here is how I installed it:
[screenshot from 2017-09-18 16-40-08]

this is the method I'm using

import pandas as pd
from sqlalchemy import create_engine

def load_esx_data():
    sql = "my_sql_stmt"
    con = create_engine("db2+ibm_db://dashdb-entry-yp-dal09-09.services.dal.bluemix.net:50000/BLUDB")
    return pd.read_sql_query(sql,con)

and this is the error


NoSuchModuleError    Traceback (most recent call last)
<ipython-input-31-d4e40433e6a4> in <module>()
----> 1 esx_data = load_esx_data()
      2 esx.head()

<ipython-input-30-d5ba26ec9fe0> in load_esx_data()
      4 def load_esx_data():
      5     sql = "select * from DASH7228.ESXCAPACITY UNION select * from DASH7228.ESXCAPACITY1X UNION select * from DASH7228.ESXCAPACITY2X select * from DASH7228.ESXDRCAPACITY select * from DASH7228.ESXDRCAPACITY1X"
----> 6     con = create_engine("db2+ibm_db://dashdb-entry-yp-dal09-09.services.dal.bluemix.net:50000/BLUDB")
      7     return pd.read_sql_query(sql,con)

/gpfs/fs01/user/s5d6-da75800f90bd2c-09e06b09ef9b/.local/lib/python2.7/site-packages/sqlalchemy/engine/__init__.pyc in create_engine(*args, **kwargs)
    389     strategy = kwargs.pop('strategy', default_strategy)
    390     strategy = strategies.strategies[strategy]
--> 391     return strategy.create(*args, **kwargs)
    392 
    393 

/gpfs/fs01/user/s5d6-da75800f90bd2c-09e06b09ef9b/.local/lib/python2.7/site-packages/sqlalchemy/engine/strategies.pyc in create(self, name_or_url, **kwargs)
     54         u.query.pop('plugin', None)
     55 
---> 56         entrypoint = u._get_entrypoint()
     57         dialect_cls = entrypoint.get_dialect_cls(u)
     58 

/gpfs/fs01/user/s5d6-da75800f90bd2c-09e06b09ef9b/.local/lib/python2.7/site-packages/sqlalchemy/engine/url.pyc in _get_entrypoint(self)
    137         else:
    138             name = self.drivername.replace('+', '.')
--> 139         cls = registry.load(name)
    140         # check for legacy dialects that
    141         # would return a module with 'dialect' as the

/gpfs/fs01/user/s5d6-da75800f90bd2c-09e06b09ef9b/.local/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.pyc in load(self, name)
    216         raise exc.NoSuchModuleError(
    217             "Can't load plugin: %s:%s" %
--> 218             (self.group, name))
    219 
    220     def register(self, name, modulepath, objname):

NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:db2.ibm_db

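If the package really is installed (presumably with pip install --user, given the lack of sudo access; the screenshot details are not visible here), one workaround when the entry point is not picked up is to register the dialect explicitly before calling create_engine(). This is a sketch; the class name DB2Dialect_ibm_db matches recent ibm_db_sa releases and may need adjusting for other versions.

from sqlalchemy.dialects import registry

# maps the 'db2+ibm_db://' URL prefix to the dialect class inside ibm_db_sa
registry.register('db2.ibm_db', 'ibm_db_sa.ibm_db', 'DB2Dialect_ibm_db')

from sqlalchemy import create_engine
con = create_engine("db2+ibm_db://dashdb-entry-yp-dal09-09.services.dal.bluemix.net:50000/BLUDB")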

AttributeError: 'com.ziclix.python.sql.PyConnection' object has no attribute 'dbms_ver'

Using Jython 2.7.0 and jt400.jar, I receive this error on the first attempt to use engine.connect().

I can get the connection to work on a second attempt by catching the initial exception.

engine = create_engine('db2+zxjdbc400://user:pass@as400;prompt=false')
try:
    # this will fail
    connection = engine.connect()
except AttributeError:
    # this will pass
    connection = engine.connect()
