pypyodbc's Issues

Passing datetime.date returns TypeError: Wrong Type

What steps will reproduce the problem?
1. Insert a row into a date column, passing a datetime.date value as a parameter

What is the expected output? What do you see instead?
Using pyodbc with Python 2.7 it works.
I've updated all my code to Python 3.4, and when using pypyodbc I get the following:
  File "\pypyodbc.py", line 1470, in execute
    self._BindParams(param_types)
  File "\pypyodbc.py", line 1431, in _BindParams
    dec_num, ADDR(ParameterBuffer), BufferLen,ADDR(LenOrIndBuf))
ctypes.ArgumentError: argument 7: <class 'TypeError'>: wrong type

What version of the product are you using? On what operating system?
Python 3.4.1
pypyodbc 1.3.3
OpenEdge Database
Windows 7

Please provide any additional information below.

I've traced the issue to line 1431 of pypyodbc.py, where it tries to
process the datetime.date object and throws the error.

The column I'm inserting into shows in the cursor_description as:
('create-date', <class 'datetime.date'>, 10, 10, 10, 0, True)
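
A minimal sketch of the failing insert (the DSN and table name below are placeholders, not from the original report; "create-date" is the column shown above):

import datetime
import pypyodbc

conn = pypyodbc.connect("DSN=OpenEdgeDSN")   # placeholder DSN
cur = conn.cursor()
# Passing a datetime.date parameter is what triggers the ctypes ArgumentError
# raised from _BindParams on Python 3.4 with pypyodbc 1.3.3.
cur.execute('INSERT INTO orders ("create-date") VALUES (?)',
            (datetime.date(2014, 8, 15),))
conn.commit()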

Regards,

Original issue reported on code.google.com by [email protected] on 15 Aug 2014 at 11:26

Oracle and Decimal_cvt issue

OS: Microsoft 2008 R2

I'm trying to transfer data from Oracle to Microsoft SQL Server but am having issues
with decimal values.

Connection string:
Microsoft: con_string = "DSN=localhost;" 
Oracle: con_string = "DSN=SERVER;Uid=login;Pwd=pwd;"


When using the Microsoft ODBC for Oracle driver, reading a decimal value returns
an error.

for field in row:
   params.append(field)


To read the data I use the following workaround:

temp = "{0}".format(field)
if temp == "None":
   temp = None
params.append(temp)


But if I'm using the Oracle driver it returns an error in fetchall().

Driver: Microsoft ODBC for ORACLE
('HY104', '[HY104] [Microsoft][ODBC SQL Server Driver]Invalid scale value')

                    try:
                        for row in cursor_source.fetchall():
                            params = []
                            for field in row:
                                #temp = "{0}".format(field)
                                #if temp == "None":
                                #    temp = None
                                #params.append(temp)
                                params.append(field)
                            try:
                                cursor.execute(temp_sql, params)
                                cursor.commit()
                            except pypyodbc.Error as e:
                                func_error = "{0}".format(e)
                                break
                    except pypyodbc.Error as e:
                        func_error = "{0}".format(e)


Driver: Oracle in OraClient11g
  for row in cursor_source.fetchall():
  File "C:\Python34\lib\site-packages\pypyodbc-1.3.3-py3.4.egg\pypyodbc.py", line 1819, in fetchall
  File "C:\Python34\lib\site-packages\pypyodbc-1.3.3-py3.4.egg\pypyodbc.py", line 1871, in fetchone
  File "C:\Python34\lib\site-packages\pypyodbc-1.3.3-py3.4.egg\pypyodbc.py", line 587, in Decimal_cvt
decimal.InvalidOperation: [<class 'decimal.ConversionSyntax'>]

Original issue reported on code.google.com by [email protected] on 24 Jul 2014 at 9:57

pypyodbc hangs with pypy

I think we had this problem before. pypyodbc can connect and work OK with
CPython, but using a nightly PyPy snapshot I'm getting a hang. The Ctrl-C below
illustrates the stack trace where it's stopped.


Python 2.7.3 (2f0d7dced731, Jan 11 2014, 23:29:24)
[PyPy 2.3.0-alpha0 with GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.2.79)] 
on darwin
Type "help", "copyright", "credits" or "license" for more information.
And now for something completely different: ``PyPy 1.3 awaiting release''
>>>> import pypyodbc
>>>> pypyodbc.connect(uid='scott', pwd='tiger', dsn='ms_2005')
^CTraceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/src/pypyodbc/trunk/pypyodbc/pypyodbc.py", line 2259, in __init__
    self.connect(connectString, autocommit, ansi, timeout, unicode_results, readonly)
  File "/usr/local/src/pypyodbc/trunk/pypyodbc/pypyodbc.py", line 2307, in connect
    check_success(self, ret)
  File "/usr/local/src/pypyodbc/trunk/pypyodbc/pypyodbc.py", line 930, in check_success
    ctrl_err(SQL_HANDLE_DBC, ODBC_obj.dbc_h, ret, ODBC_obj.ansi)
  File "/usr/local/src/pypyodbc/trunk/pypyodbc/pypyodbc.py", line 891, in ctrl_err
    NativeError, Message, len(Message), ADDR(Buffer_len))
  File "/usr/local/src/pypy-zeek/lib_pypy/_ctypes/function.py", line 707, in __call__
    result = self._call_funcptr(funcptr, *args)
  File "/usr/local/src/pypy-zeek/lib_pypy/_ctypes/function.py", line 370, in _call_funcptr
    result = funcptr(*newargs)
KeyboardInterrupt

Original issue reported on code.google.com by [email protected] on 12 Jan 2014 at 1:07

support field name as cursor attribute

What steps will reproduce the problem?
1. Field names are not supported as cursor/row attributes.

Please provide any additional information below.

I managed to add this support, but it only applies to fetchall(),
since that is the only function I use when retrieving data.
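
A user-level sketch of one way to get attribute access, built only on fetchall() and cursor.description; this is a workaround, not pypyodbc's built-in behaviour:

from collections import namedtuple

def fetchall_named(cursor):
    # rename=True guards against column names that are not valid identifiers
    Row = namedtuple('Row', [d[0] for d in cursor.description], rename=True)
    return [Row(*row) for row in cursor.fetchall()]

# usage: for row in fetchall_named(cur): print(row.some_column)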


Original issue reported on code.google.com by [email protected] on 26 Mar 2013 at 10:15

Attachments:

patch to add gevent support

Hi,

please refer to the attachments for details.

usage:
"""
import gevent, gevent.monkey
gevent.monkey.patch_all()
import pypyodbc
pypyodbc.monkey_patch_for_gevent()
"""
Could you please merge this patch into svn? Thanks a lot!

BR,
Phus Lu

Original issue reported on code.google.com by phus.lu on 27 Sep 2013 at 8:34

Attachments:

Description attribute returns only first char of name (as byte) using Oracle ODBC & Linux

What steps will reproduce the problem?

While getting the .description list of a cursor there seems to be a problem 
with the object-type and value of the "name" element of the list. 

Consider the following code:

import pypyodbc
db_conn = pypyodbc.connect( 
"Driver={Ora11gR264};DBQ=//<HOST>:1521/unic8;Uid=<...>;Pwd=<...>;" )
cursor = db_conn.cursor()
cursor.execute( "SELECT * from test" )
cursor.fetchone()
print( cursor.description )

What is the expected output? What do you see instead?

This code is accessing the following table:

-- CREATE TABLE TEST( TESTCOLUMN  VARCHAR2(1 BYTE) );

Basically, I expected to see something like this:

>> [('testcolumn', <class 'str'>, 1, 1, 1, 0, True)]

But the code above returns the following:

>> [(b't', <class 'str'>, 1, 1, 1, 0, True)]

This is reproducible for all tables. Only the first character of a column name
will be returned, and not as a char but as a byte.

What version of the product are you using? On what operating system?

- SLES 11 SP3 64bit
- Python 3.4 64bit
- pypyodbc 1.3.3
- unixODBC (Tested with both versions shipped with SLES 2.2.12 and 2.3.1)
- Oracle Instant Client 11g (11.2.0.4) ODBC drivers

Please provide any additional information below.

The same code runs fine using Windows (I've tested Win & Win8.1) with the 
otherwise same setup. 


Original issue reported on code.google.com by [email protected] on 1 Jul 2014 at 3:24

Memory leak on Connection and Cursor objects

What steps will reproduce the problem?
1. Memory leak: Python will not collect unused Cursor and Connection objects.
2. Python 2.7.5, OS: Windows 8, pypyodbc version 1.2.0
3. Please see attached py file for testing.

I think the cause is the reference loop involving the connection.close() method,
combined with the fact that the object defines a __del__() method. Circular
references between objects are preventing them from being collected.

Below is an excerpt from the documentation of the __del__ method:

http://docs.python.org/2.7/reference/datamodel.html?highlight=__del__#object.__del__

Note del x doesn’t directly call x.__del__() — the former decrements the 
reference count for x by one, and the latter is only called when x‘s 
reference count reaches zero. Some common situations that may prevent the 
reference count of an object from going to zero include: circular references 
between objects (e.g., a doubly-linked list or a tree data structure with 
parent and child pointers); a reference to the object on the stack frame of a 
function that caught an exception (the traceback stored in sys.exc_traceback 
keeps the stack frame alive); or a reference to the object on the stack frame 
that raised an unhandled exception in interactive mode (the traceback stored in 
sys.last_traceback keeps the stack frame alive). The first situation can only 
be remedied by explicitly breaking the cycles; the latter two situations can be 
resolved by storing None in sys.exc_traceback or sys.last_traceback. Circular 
references which are garbage are detected when the option cycle detector is 
enabled (it’s on by default), but can only be cleaned up if there are no 
Python-level __del__() methods involved. Refer to the documentation for the gc 
module for more information about how __del__() methods are handled by the 
cycle detector, particularly the description of the garbage value.
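
A hedged illustration of working around the leak at the application level (the DSN is a placeholder): close cursors and connections explicitly instead of relying on __del__, and check gc.garbage to see whether cyclic objects with __del__ accumulate.

import gc
import pypyodbc

def query_once(dsn):
    conn = pypyodbc.connect(dsn)
    try:
        cur = conn.cursor()
        try:
            cur.execute("SELECT 1")
            return cur.fetchall()
        finally:
            cur.close()      # release the statement handle explicitly
    finally:
        conn.close()         # break references instead of waiting for __del__

query_once("DSN=placeholder")
gc.collect()
print(len(gc.garbage))       # uncollectable cyclic objects with __del__ end up here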



Original issue reported on code.google.com by [email protected] on 17 Oct 2013 at 11:18

Attachments:

TypeError: 'int' object is not subscriptable with single row Sql Server query

What steps will reproduce the problem?
1. Connect to a SQL Server 2005/2008
2. Run this query

SELECT SYSTEM_USER AS UserName,
        @@SERVICENAME As ServiceName,
        @@version AS Version,
        SERVERPROPERTY('Edition') AS Edition,
        SERVERPROPERTY('InstanceName') AS Instance,
        SERVERPROPERTY('IsIntegratedSecurityOnly') AS IsIntegratedSecurityOnly,
        SERVERPROPERTY('IsSingleUser') AS IsSingleUser,
        SERVERPROPERTY('MachineName') AS MachineName,
        SERVERPROPERTY('ProductVersion') AS ProductVersion,
        SERVERPROPERTY('ProductLevel') AS ProductLevel,
        SERVERPROPERTY('ServerName') AS ServerName,
        DATABASEPROPERTYEX('Factu01', 'RECOVERY') AS RecoveryMode,
        SERVERPROPERTY('SqlCharSetName') AS SqlCharSetName


con.cursor().execute(sql).fetchall()

What is the expected output? What do you see instead?

Get the error:

  File "C:\Proyectos\TeleportServer\SqlServerEngine.py", line 156, in selectSql
    cursor.execute(sql)
  File "C:\Programacion\Python\27\lib\site-packages\pypyodbc.py", line 1305, in execute
    self.execdirect(query_string)
  File "C:\Programacion\Python\27\lib\site-packages\pypyodbc.py", line 1340, in execdirect
    self._UpdateDesc()
  File "C:\Programacion\Python\27\lib\site-packages\pypyodbc.py", line 1707, in _UpdateDesc
    ColDescr.append((col_name, SQL_data_type_dict.get(Ctype_code.value,(Ctype_code.value))[0],Cdisp_size.value,\
TypeError: 'int' object is not subscriptable

What version of the product are you using? On what operating system?

Windows 7 64-bit, SQL Server 2008, Python 2.7, latest release version of pypyodbc
from pip

Original issue reported on code.google.com by [email protected] on 22 Feb 2013 at 4:05

Couldn't connect to DB2 using iSeriesAccess driver.

What steps will reproduce the problem?
1. pypyodbc.connect("DSN=PREDEFINED_DSN")


What is the expected output? What do you see instead?
Expected to connect, 

Result:

 cnx = pypyodbc.connect("DSN=PREDEFINED_DSN")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "pypyodbc.py", line 2273, in __init__
    self.connect(connectString, autocommit, ansi, timeout, unicode_results, readonly)
  File "pypyodbc.py", line 2342, in connect
    self.update_db_special_info()
  File "pypyodbc.py", line 2393, in update_db_special_info
    info_tuple = cur.getTypeInfo(sql_type)
  File "pypyodbc.py", line 1871, in getTypeInfo
    return self.fetchone()
  File "pypyodbc.py", line 1794, in fetchone
    value_list.append(buf_cvt_func(raw_value))
ValueError: invalid literal for int() with base 10: ''


What version of the product are you using? On what operating system?
Pypyodbc Version: 1.1.2
OS : Fedora 17
Python Version: 2.7
ODBC Driver : iSeriesAccess for AS400 DB2 Database.

I tried switching between the 64-bit and 32-bit drivers and the same error occurs.


It worked with an Oracle driver and also with TDS for a SQL Server.



Original issue reported on code.google.com by [email protected] on 26 Apr 2013 at 9:58

"invalid literal" exceptions querying empty numeric fields when using LinuxODBC with FreeTDS

What steps will reproduce the problem?
1. Setup a linux system running pypyodbc using LinuxODBC and FreeTDS to access 
a MSSQL database
2. Query any table that contains entries with empty numeric fields

What is the expected output? What do you see instead?
Instead of getting a valid response, depending on the field type, one of the 
following two exceptions will be thrown:
invalid literal for int() with base 10: ''
Invalid literal for Decimal: ''

The issue appears to be that FreeTDS returns empty fields as '' which the stock 
python conversion functions throw an exception on while converting.


What version of the product are you using? On what operating system?
pypyodbc 1.3.3
python 2.6
linux

Please provide any additional information below.

We solved this by:
1. Editing the Decimal_cvt() function to return None if x == ""
2. Adding other conversion functions int_cvt(), long_cvt(), and float_cvt()
3. Editing the SQL_data_type_dict to use the new conversion functions
instead of the stock Python int, float, and long functions (a sketch follows below)

See the attached edited version of pypyodbc.py version 1.3.3 for all of the 
changes that were made.
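
A minimal sketch of the conversion-function approach described above (this illustrates the idea only, it is not the exact attached patch; Python 2, as in the report):

from decimal import Decimal

def int_cvt(x):
    return None if x in (None, '') else int(x)

def long_cvt(x):
    return None if x in (None, '') else long(x)

def float_cvt(x):
    return None if x in (None, '') else float(x)

def Decimal_cvt(x):
    return None if x in (None, '') else Decimal(x)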


Original issue reported on code.google.com by [email protected] on 12 Aug 2014 at 4:40

Attachments:

Column names get wonky on 64-bit system using iSeries DB2 driver

What steps will reproduce the problem?
1. Find someone with the iSeries DB2
2. Get hold of the iSeries DB2 driver from your IBM entitled support person
3. Install the drivers on Ubuntu 12.04 LTS
4. Configure unixodbc to access your system
5. Run the following query:

SELECT * FROM qsys2.systables FETCH FIRST ROW ONLY

6. Use the following python code:

# Do import & connection
for row in cursor.execute(sql):
    print(row.cursor_description)

7. The column name (the [0] value of cursor_description) will be b'x' where x 
is whatever the first letter of the column name is

What is the expected output? What do you see instead?

Instead of "my_column" you'll see b'm'


Please provide any additional information below.

If you look at Cname.raw instead of Cname.value on line 1770 (in version 1.3.3, 
apilevel 2.0), you'll notice it looks like `M\x00Y\x00_\x00C` and so on and so 
forth. 

I was able to patch this by changing

col_name = Cname.value

to 

col_name = Cname.raw.decode().replace('\x00', '')

There may be a better way to fix this, but that one worked for me.
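
A hedged alternative to the replace() approach, assuming the buffer really holds UTF-16LE data; untested beyond this report:

col_name = Cname.raw.decode('utf_16_le').rstrip('\x00')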

It looks like the driver is probably sending over a wider character encoding
than pypyodbc is expecting. I'm not sure if this is the general case on 64-bit
machines, or just for the iSeries (which would not surprise me).

Original issue reported on code.google.com by [email protected] on 16 Jul 2014 at 9:09

Cannot find libodbc.so on linux

What steps will reproduce the problem?
1. Install libodbc.so and the drivers in /usr/local/lib
2. Try to use pypyodbc
3. See the error

What is the expected output? What do you see instead?
Error: it cannot load the library

What version of the product are you using? On what operating system?
pypyodbc 1.1.1, linux

Please provide any additional information below.

I looked at the portion of the code that deals with loading the library.
It uses ctypes.util.find_library function. According to the manual 
http://docs.python.org/3/library/ctypes.html (and I verified it), on Linux it 
returns only the file name of the library, but not the full path. So it is 
pretty much useless, the library will not load this way.
The workaround is to set LD_LIBRARY_PATH properly before running the program.
This will also help to load the drivers.

Here is my suggestion: simplify that piece of code and do only the LoadLibrary
call. If it fails, just print an error message and suggest that the user set
LD_LIBRARY_PATH.
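
A rough sketch of that suggestion, assuming unixODBC's usual library names; this is not the library's current loading code:

import ctypes

def load_odbc():
    for name in ('libodbc.so', 'libodbc.so.2', 'libodbc.so.1'):
        try:
            return ctypes.CDLL(name)
        except OSError:
            continue
    raise ImportError(
        "Could not load libodbc; set LD_LIBRARY_PATH to the directory "
        "containing libodbc.so (e.g. /usr/local/lib) and retry.")

ODBC_API = load_odbc()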

Original issue reported on code.google.com by [email protected] on 5 Jul 2013 at 7:30

Connecting fails on narrow build of Python

What steps will reproduce the problem?
1. Compile Python with default settings (i.e. narrow)
2. Try to connect via ODBC

What is the expected output? What do you see instead?

Narrow builds use "ucs2_buf = lambda s: s" while wide builds have 
"s.encode('utf_16_le')". This seems to cause problems, although I don't know 
why. Trying to connect with a narrow Python ends with the following error:

  File "/home/tpievila/.virtualenvs/reservedusage/lib/python2.7/site-packages/pypyodbc.py", line 2285, in __init__
    self.connect(connectString, autocommit, ansi, timeout, unicode_results, readonly)
  File "/home/tpievila/.virtualenvs/reservedusage/lib/python2.7/site-packages/pypyodbc.py", line 2333, in connect
    check_success(self, ret)
  File "/home/tpievila/.virtualenvs/reservedusage/lib/python2.7/site-packages/pypyodbc.py", line 951, in check_success
    ctrl_err(SQL_HANDLE_DBC, ODBC_obj.dbc_h, ret, ODBC_obj.ansi)
  File "/home/tpievila/.virtualenvs/reservedusage/lib/python2.7/site-packages/pypyodbc.py", line 931, in ctrl_err
    raise DatabaseError(state,err_text)
pypyodbc.DatabaseError: (u'\U000d0049\U001000302', u'[\U000d0049\U001000302] 
\udd00\udc5b\U0009006e\U000f0078\U00020044\udf00\udc43\U0004005b\U00090072\U0005
0076\udfc0\udc72\U0001004d\U0001006e\U00050067\udf00\udc72\U00010044\U00010074\u
dc80\udc20\udd00\udc6f\U00030072\udfc0\udc65\U0001006e\U0005006d\U000e0020\udcc0
\udc6f\U00060020\udd00\udc6f\U0004006e\udfc0\udc2c\U000e0061\udfc0\udc64\U000f00
6e\U00040020\U00060065\udd00\udc61\udcc0\udc6c\U00040020\U00090072\U00050076\udf
c0\udc72\U00100073\U00030065\U00060069\U00050069d')


What version of the product are you using? On what operating system?

1.1.5 on Python 2.6/2.7, RHEL6.

Original issue reported on code.google.com by [email protected] on 20 Aug 2013 at 7:23

Commit returns SQL_NO_DATA_FOUND and empty err_list causing exception.

What steps will reproduce the problem?
1. I call a stored procedure.  Not sure why it works most times and not others
2. SP executes fine, but I get an error on the commit.
3. I suspect this starts when the network can not look up my DNS

What is the expected output? What do you see instead?
I would expect that the commit does not throw an error.

What version of the product are you using? On what operating system?
Downloaded today (12/11)

Please provide any additional information below.
So I'm calling a SP, the execute seems to work fine.  When I try to commit an 
error is thrown.  I traced it to line 947 in pypyodbc. 

            state = err_list[0][0]

In my case err_list = [], so the exception is the 'IndexError: list index out 
of range'.   This is all inside of the ctrl_err method.

So the return value of ODBC_func100 (NO DATA).  The first thing it does is try 
to 


Original issue reported on code.google.com by [email protected] on 11 Dec 2013 at 9:29

Error SQL_INVALID_HANDLE trying to connect a Progress/Openedge database.

What steps will reproduce the problem?
1. Run pypy
2. Import pypyodbc
3. Connect to a DSN (working under cPython+pyodbc)

What is the expected output? What do you see instead?
==========================
Error:
--------------------------
$ pypy
Python 2.7.3 (87aa9de10f9ca71da9ab4a3d53e0ba176b67d086, Nov 28 2013, 12:34:23)
[PyPy 2.2.1 with GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
And now for something completely different: ``"You know what's nice about
RPython?  Longer sword fights."''
>>>> import pypyodbc
>>>> pypyodbc.connect("DSN=pyProgress")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/pypy/site-packages/pypyodbc.py", line 2402, in __init__
    AllocateEnv()
  File "/opt/pypy/site-packages/pypyodbc.py", line 995, in AllocateEnv
    check_success(SQL_NULL_HANDLE, ret)
  File "/opt/pypy/site-packages/pypyodbc.py", line 989, in check_success
    ctrl_err(SQL_HANDLE_ENV, ODBC_obj, ret, False)
  File "/opt/pypy/site-packages/pypyodbc.py", line 969, in ctrl_err
    raise ProgrammingError('', 'SQL_INVALID_HANDLE')
ProgrammingError: ('', 'SQL_INVALID_HANDLE')
>>>>

==========================
Expected (using cPython):
--------------------------
$ python
Python 2.7.2 (default, Jun 19 2012, 12:13:20)
[GCC 4.6.1 20110627 (Mandriva)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyodbc
>>> pyodbc.connect("DSN=pyProgress")
<pyodbc.Connection object at 0x7f693e285ed0>
>>>

What version of the product are you using? On what operating system?
- pypyodbc 1.3.1
- Mandriva Linux 2011.0 (turtle)

Please provide any additional information below.
- currently using driver libodbc.so.1 from unixodbc

Original issue reported on code.google.com by [email protected] on 17 Mar 2014 at 6:08

pip errors out because file is "unverified"

What steps will reproduce the problem?
1. Try to install pypyodbc using pip:
pip install pypyodbc

2. Get an error about it being hosted externally:
Downloading/unpacking pypyodbc
  Could not find any downloads that satisfy the requirement pypyodbc
  Some externally hosted files were ignored (use --allow-external pypyodbc to allow).

3. Add flag indicated by message:
pip install pypyodbc --allow-external pypyodbc

4. Get an error about it being unverified:
Downloading/unpacking pypyodbc
  Could not find any downloads that satisfy the requirement pypyodbc
  Some insecure and unverifiable files were ignored (use --allow-unverified pypyodbc to allow).

5. Finally install using both arguments:
pip install pypyodbc --allow-external pypyodbc --allow-unverified pypyodbc


What version of the product are you using? On what operating system?
Latest on Windows


Please provide any additional information below.

Looking at the issue list for pip on GitHub, the --allow-unverified argument
might imply the --allow-external argument in the future, but as I understand
it, this isn't going away and can only be fixed on the repo side.
https://github.com/pypa/pip/issues/1268 suggests that adding a hash would fix
it, but I don't know what that means.

Original issue reported on code.google.com by [email protected] on 9 Jan 2014 at 1:17

TIMESTAMP columns in SAP HANA cause ValueError: microsecond must be in 0..999999

What steps will reproduce the problem?

1. Connect to a SAP HANA database and select columns which include a timestamp

e.g. This example should work on any HANA instance

cur.execute('select END_TIME from M_CONNECTIONS')
cur.fetchall()

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "pypyodbc.py", line 1795, in fetchall
    row = self.fetchone()
  File "pypyodbc.py", line 1844, in fetchone
    value_list.append(buf_cvt_func(alloc_buffer.value))
  File "pypyodbc.py", line 568, in dttm_cvt
    if x == '': return None
ValueError: microsecond must be in 0..999999

The END_TIME column is HANA type TIMESTAMP.  Below is the pdb output:

>>> cur.fetchall()
> pypyodbc.py(566)dttm_cvt()
-> if py_v3:
(Pdb) x
'2014-02-25 14:57:08.487000000'


What is the expected output? What do you see instead?

Expected output is no Traceback.


What version of the product are you using? On what operating system?

pypyodbc version: 1.2.1
python version: 2.7.2
OS version: Linux localhost.localdomain 2.6.32-279.19.1.el6.x86_64 #1 SMP Wed 
Dec 19 07:05:20 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

Please provide any additional information below.


NOTE: I fixed this locally, by changing the following line in dttm_cvt(x) from:

else: return datetime.datetime(int(x[0:4]),int(x[5:7]),int(x[8:10]),int(x[10:13]),int(x[14:16]),int(x[17:19]),int(x[20:].ljust(6,'0')))

to:

else: return datetime.datetime(int(x[0:4]),int(x[5:7]),int(x[8:10]),int(x[10:13]),int(x[14:16]),int(x[17:19]),int(x[20:25].ljust(6,'0')))
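
A hedged, more tolerant variant of that local fix: keep at most six fractional digits instead of hard-coding the slice (illustrative only, not the library's code):

import datetime

def parse_hana_timestamp(x):
    if x == '':
        return None
    date_part, time_part = x.split(' ')
    year, month, day = (int(v) for v in date_part.split('-'))
    hh, mm, ss = time_part.split(':')
    if '.' in ss:
        ss, frac = ss.split('.')
        micro = int(frac[:6].ljust(6, '0'))
    else:
        micro = 0
    return datetime.datetime(year, month, day, int(hh), int(mm), int(ss), micro)

# parse_hana_timestamp('2014-02-25 14:57:08.487000000')
# -> datetime.datetime(2014, 2, 25, 14, 57, 8, 487000)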

Original issue reported on code.google.com by [email protected] on 25 Feb 2014 at 11:48

ValueError: invalid literal for int() with base 10: '' (./pypyodbc.py", line 1772, in fetchone)

What steps will reproduce the problem?

Attempting to connect to MSSQL database using below code:

    cstr = 'DRIVER=FreeTDS;SERVER=%s;DATABASE=%s;PORT=%s;UID=%s;PWD=%s' % (host, db, '1433', user, passwd)
    self.mssql = pypyodbc.connect(cstr)

Returns below error/traceback:

Traceback (most recent call last):
  File "/data01/python/wins/e2e/usage_gatherer.py", line 258, in <module>
    usage_mon = usage_gatherer()
  File "/data01/python/wins/e2e/usage_gatherer.py", line 33, in __init__
    '***REMOVED User****', '***REMOVED Pass***', '*****REMOVED Table****', timeout=300)
  File "/usr/src/pypy/site-packages/wins/db_core/mssql.py", line 29, in __init__
    self.mssql = pypyodbc.connect(cstr)
  File "/usr/src/pypy/site-packages/pypyodbc-1.1.3-py2.7.egg/pypyodbc.py", line 2285, in __init__
    self.connect(connectString, autocommit, ansi, timeout, unicode_results, readonly)
  File "/usr/src/pypy/site-packages/pypyodbc-1.1.3-py2.7.egg/pypyodbc.py", line 2355, in connect
    self.update_db_special_info()
  File "/usr/src/pypy/site-packages/pypyodbc-1.1.3-py2.7.egg/pypyodbc.py", line 2405, in update_db_special_info
    info_tuple = cur.getTypeInfo(sql_type)
  File "/usr/src/pypy/site-packages/pypyodbc-1.1.3-py2.7.egg/pypyodbc.py", line 1882, in getTypeInfo
    return self.fetchone()
  File "/usr/src/pypy/site-packages/pypyodbc-1.1.3-py2.7.egg/pypyodbc.py", line 1772, in fetchone
    value_list.append(buf_cvt_func(alloc_buffer.value))
ValueError: invalid literal for int() with base 10: ''

Code works fine with pyodbc version 3.0.7. 

What version of the product are you using? On what operating system?
pypyodbc-1.1.3
Linux - RHEL 5 kernel 2.6.18-274.17.1.el5, freetds.x86_64 v 0.91-2.el5

Please provide any additional information below.

I was able to resolve the error by adding a try/except at line 1772, but I am 
uncertain as to what implications this may have.

try:
    value_list.append(buf_cvt_func(alloc_buffer.value))
except ValueError:
    #if DEBUG:print 'ValueError at fetchone() - %s %s %s' % (col_name, target_type, alloc_buffer.value, )
    pass


Original issue reported on code.google.com by [email protected] on 28 Aug 2013 at 6:55

  • Merged into: #14

1 character text column values are read incorrectly

python 2.7
windows 32 bit
pypyodbc version: 0.6
connection via user DSN to remote microsoft sql server

symptoms:
All works except that database text fields of length 1 are read as having a value
equal to the empty string when they in fact hold valid one-character values.

example: code below (with correct database tables) shows this problem. The code 
works fine with pyodbc, symptom as above when swapped to pypyodbc


from pprint import pprint
import pypyodbc as pyodbc

# eep.categ and eep.status are colums typed as
# text
# field size 1
test_sql = """SELECT eep.eeid, eep.namei,eep.categ, eep.status
            FROM ((eedbo.eep LEFT JOIN eedbo.Ug ON eep.eeid=Ug.Ueeid)
            LEFT JOIN eedbo.Phd ON eep.eeid=Phd.Peeid)
            LEFT JOIN eedbo.MSc ON eep.eeid=MSc.Meeid
            WHERE (eep.Namei LIKE 'Bal%') and (eep.Status = 'F') and (eep.Categ = 'U');"""

def read_odbc():
        cnxn = pyodbc.connect('DSN=sqlserver;Trusted_Connection=yes')
        cursor = cnxn.cursor()
        cursor.execute(test_sql)
        rows = cursor.fetchall()
        for row in rows:
            pprint(row)


Original issue reported on code.google.com by [email protected] on 15 Jul 2012 at 12:25

Column names are truncated to single character

What steps will reproduce the problem?

1. Connect to database
2. Execute query, e.g.:
>>> cursor.execute("SELECT 1 test")
<pypyodbc.Cursor object at 0x7f84f3e95eb8>
3. Access column names:
>>> cursor.description
[(b't', <class 'int'>, 11, 10, 10, 0, False)]

What is the expected output? What do you see instead?

If I use pyodbc, the result is correct:

>>> cursor.execute("select 1 test")
<pyodbc.Cursor object at 0x7f69af0a4db0>
>>> cursor.description
(('test', <class 'int'>, None, 10, 10, 0, False),)



What version of the product are you using? On what operating system?

pypyodbc==1.3.1
Python 3.4.0 (compiled, CFLAGS='-fPIC' ./configure --prefix=/usr/local && make 
&& make install)
Ubuntu 12.04 64bit


Please provide any additional information below.

Both the isql output and the pyodbc library get all column names correctly.


Original issue reported on code.google.com by [email protected] on 4 Apr 2014 at 7:46

pypyodbc duplicates previous value for column instead of NULL

Steps to reproduce the problem.

1. Create a database and populate it with test data

CREATE DATABASE test;
USE test;
CREATE TABLE test (host NVARCHAR(256), ip NVARCHAR(15));
INSERT INTO test VALUES ('srv1', '192.168.1.1');
INSERT INTO test VALUES ('srv2', '192.168.1.2');
INSERT INTO test VALUES ('srv3', '192.168.1.3');
INSERT INTO test VALUES ('srv4', NULL);
INSERT INTO test VALUES ('srv5', NULL);
INSERT INTO test VALUES ('srv6', '192.168.1.6');
INSERT INTO test VALUES ('srv7', NULL);
INSERT INTO test VALUES ('srv8', NULL);
INSERT INTO test VALUES ('srv9', NULL);


2. Check database content

SELECT * FROM test;

Database must contain the following data

srv1    192.168.1.1
srv2    192.168.1.2
srv3    192.168.1.3
srv4    NULL
srv5    NULL
srv6    192.168.1.6
srv7    NULL
srv8    NULL
srv9    NULL


3. Now execute sample Python script

import pypyodbc as odbc
sql = odbc.connect("Driver=FreeTDS;Server=127.0.0.1;Port=1433;Database=test;UID=test;PWD=secret")
cursor = sql.cursor()
cursor.execute("SELECT * FROM test;")

while True:
    row = cursor.fetchone()
    if row is None:
        break
    print row


Expected output

(u'srv1', u'192.168.1.1')
(u'srv2', u'192.168.1.2')
(u'srv3', u'192.168.1.3')
(u'srv4', None)
(u'srv5', None)
(u'srv6', u'192.168.1.6')
(u'srv7', None)
(u'srv8', None)
(u'srv9', None)


But script duplicates previous value for column instead of NULL

('srv1', '192.168.1.1')
('srv2', '192.168.1.2')
('srv3', '192.168.1.3')
('srv4', '192.168.1.3')
('srv5', '192.168.1.3')
('srv6', '192.168.1.6')
('srv7', '192.168.1.6')
('srv8', '192.168.1.6')
('srv9', '192.168.1.6')


The same code with pyodbc returns correct output!


Version of the product and operating system

Linux, Python 2.7.3, pypyodbc-1.2.1, libfreetds-0.91, libtdsodbc0-0.91
Microsoft SQL Server 2005

Original issue reported on code.google.com by [email protected] on 27 Nov 2013 at 4:53

endless loop when SQL_NULL_DATA is returned in ctrl_error()

I'm having problems connecting on OS X with FreeTDS, and while the DSN works
fine with CPython + pyodbc, pypyodbc isn't managing it.

However, at least to prevent an endless loop: I'm observing an unhandled case
in ctrl_err() which leads to a hang. My random guess is that because OS X
uses iODBC and not unixODBC (or win32 ODBC), things are different from what
pypyodbc is expecting. I'm not sure how this value should be dealt with, but
the patch below at least causes the program to exit rather than looping
endlessly, because ctrl_err() currently does not accommodate a return value of -1:

--- pypyodbc.py 2013-08-05 14:04:52.000000000 -0400
+++ pypyodbc.py.new 2013-08-05 14:05:00.000000000 -0400
@@ -939,7 +939,8 @@
             else:
                 err_list.append((from_buffer_u(state), from_buffer_u(Message), NativeError.value))
             number_errors += 1
-
+        elif ret == SQL_NULL_DATA:
+             raise ProgrammingError('', 'SQL_NULL_DATA')


 def check_success(ODBC_obj, ret):

Original issue reported on code.google.com by [email protected] on 5 Aug 2013 at 6:05

Unable to set query timeout

Hi,

Trying to set query timeout to hard limit script execution time.

Quick patch just for testing

--- pypyodbc.py.orig    2014-03-07 16:08:39.208447167 +0200
+++ pypyodbc.py 2014-03-07 16:28:40.035181023 +0200
@@ -89,6 +89,7 @@
 SQL_MODE_DEFAULT = SQL_MODE_READ_WRITE = 0; SQL_MODE_READ_ONLY = 1
 SQL_AUTOCOMMIT_OFF, SQL_AUTOCOMMIT_ON = 0, 1
 SQL_IS_UINTEGER = -5
+SQL_ATTR_QUERY_TIMEOUT = 0
 SQL_ATTR_LOGIN_TIMEOUT = 103; SQL_ATTR_CONNECTION_TIMEOUT = 113
 SQL_COMMIT, SQL_ROLLBACK = 0, 1

@@ -2424,6 +2425,8 @@
             self.settimeout(timeout)
             ret = ODBC_API.SQLSetConnectAttr(self.dbc_h, SQL_ATTR_LOGIN_TIMEOUT, timeout, SQL_IS_UINTEGER);
             check_success(self, ret)
+            ret = ODBC_API.SQLSetConnectAttr(self.dbc_h, SQL_ATTR_QUERY_TIMEOUT, timeout, SQL_IS_UINTEGER);
+            check_success(self, ret)


         # Create one connection with a connect string by calling SQLDriverConnect


And test script

#!/usr/bin/python
import pypyodbc
DSN="Driver=FreeTDS;TDS_Version=8.0;Server=10.0.0.9;Port=1433;Database=test;UID=test;PWD=secret"
sql = pypyodbc.connect(DSN, timeout=10).cursor()
sql.execute("WAITFOR DELAY '00:00:30'");


Unfortunately timeout does not apply

> time ./test.py
real    0m30.335s
user    0m0.026s
sys     0m0.011s
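
One hedged guess, sketched below: in ODBC, SQL_ATTR_QUERY_TIMEOUT (0) is a statement attribute, so it may need to be set with SQLSetStmtAttr on the statement handle rather than with SQLSetConnectAttr on the connection. The internal names used here (ODBC_API, stmt_h, check_success, SQL_IS_UINTEGER) are assumed from the patch and tracebacks above; this is untested, not a confirmed fix.

SQL_ATTR_QUERY_TIMEOUT = 0   # ODBC statement attribute

cursor = pypyodbc.connect(DSN, timeout=10).cursor()
# Assumes the driver manager exports SQLSetStmtAttr and ODBC_API resolves it.
ret = pypyodbc.ODBC_API.SQLSetStmtAttr(
    cursor.stmt_h, SQL_ATTR_QUERY_TIMEOUT, 10, pypyodbc.SQL_IS_UINTEGER)
pypyodbc.check_success(cursor, ret)
cursor.execute("WAITFOR DELAY '00:00:30'")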



Any ideas?


Regards,
Aleksey

Original issue reported on code.google.com by [email protected] on 7 Mar 2014 at 2:42

does this support foxpro

What steps will reproduce the problem?
1.
2.
3.

What is the expected output? What do you see instead?
a connection

What version of the product are you using? On what operating system?
Windows XP the latest


Please provide any additional information below.
cnxn = pypyodbc.connect(DRIVER='{Microsoft FoxPro VFP Driver (*.dbf)}', SourceType='DBF', SourceDB='c:\amsql\amaddon.dbf')
does not work
TypeError: connect() got an unexpected keyword argument 'DRIVER'
but works with pyodbc.  Maybe you can suggest a different connect string?
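
A hedged alternative: pass one ODBC connection string instead of keyword arguments, reusing the driver name and path given above:

cnxn = pypyodbc.connect(
    'DRIVER={Microsoft FoxPro VFP Driver (*.dbf)};'
    'SourceType=DBF;'
    'SourceDB=c:\\amsql\\amaddon.dbf;')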


Original issue reported on code.google.com by [email protected] on 23 Sep 2012 at 5:59

cursor.fetchone raise ValueError Exception on linux unixodc

What steps will reproduce the problem?

pypyodbc reads the mdb file on Windows perfectly, but on Debian Linux using
unixODBC, with the same script and the same mdb file, pypyodbc crashes.

The output is listed below:

Traceback (most recent call last):
  File "extract_building.py", line 161, in <module>
    extract_building('Record.mdb')
  File "extract_building.py", line 26, in extract_building
    row = cur.fetchone()
  File "/usr/local/lib/python3.3/dist-packages/pypyodbc.py", line 1842, in fetchone
    value_list.append(buf_cvt_func(alloc_buffer.value))
ValueError: invalid literal for int() with base 10: b'\xe0'

Trying pyodbc to read the mdb file, pyodbc did not crash, but the results were all garbled.

The mdb database has Chinese characters encoded in a GB charset, so the problem
may be due to charset conversion.
I use the latest pypyodbc 1.2.1 on Debian testing, and also on Windows 8.1.

The script and mdb file are attached.

best regards


Original issue reported on code.google.com by [email protected] on 23 Nov 2013 at 12:41

Attachments:

Data source name not found, and no default driver specified

What steps will reproduce the problem?
1. Configure driver and DSN in odbcinst.ini and odbc.ini
2. Call pypyodbc.connect() with 'DRIVER={driver};...'

What is the expected output? What do you see instead?

I would expect to get a connection back, as it works with isql and pyodbc.

Instead, I get this:

  File "db.py", line 141, in db_connection
    cnxn = pyodbc.connect(cnstring, autocommit=autocommit)
  File "/home/foo/.virtualenvs/env/lib/python2.6/site-packages/pypyodbc.py", line 1729, in connect
    return odbc(connectString, autocommit, ansi, timeout, unicode_results, readonly)
  File "/home/foo/.virtualenvs/env/lib/python2.6/site-packages/pypyodbc.py", line 1544, in __init__
    self.connect(connectString, autocommit, ansi, timeout, unicode_results, readonly)
  File "/home/foo/.virtualenvs/env/lib/python2.6/site-packages/pypyodbc.py", line 1573, in connect
    validate(ret, SQL_HANDLE_DBC, self.dbc_h)
  File "/home/foo/.virtualenvs/env/lib/python2.6/site-packages/pypyodbc.py", line 627, in validate
    ctrl_err(handle_type, handle, ret)
  File "/home/foo/.virtualenvs/env/lib/python2.6/site-packages/pypyodbc.py", line 612, in ctrl_err
    raise Error(state,err_text)
pypyodbc.Error: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name 
not found, and no default driver specified')

What version of the product are you using? On what operating system?

pypyodbc 0.8.6
python 2.6
SuSE Linux Enterprise 11.3

Please provide any additional information below.
Doesn't work with system configuration files, user files, or 
local+ODBCINI+ODBCSYSINI. All of those work with isql and pyodbc.

Original issue reported on code.google.com by [email protected] on 5 Oct 2012 at 11:28

Unable to set SQL Connect timeout

Unable to set SQL Connect timeout

What steps will reproduce the problem?

1. Connect to any server that drops (not rejects!) the connection

import pypyodbc
conn = pypyodbc.Connection("Driver=FreeTDS;Server=192.0.2.1;port=1443;UID=sa;PWD=secret;database=master;TDS_Version=8.0", timeout=5)


What is the expected output? What do you see instead?

The connection hangs longer than 5 s because Connection.__init__ automatically
calls self.connect(), which sets SQL_ATTR_LOGIN_TIMEOUT but not
SQL_ATTR_CONNECTION_TIMEOUT.

What version of the product are you using? On what operating system?

version = '1.1.5'
Linux

Please provide any additional information below.

I looked through the code and there is a settimeout method that is not used. So
a quick solution is to simply call self.settimeout(timeout) before self.connect()
in the __init__ method.

--- pypyodbc.py.orig    2013-09-06 19:47:05.000000000 +0400
+++ pypyodbc.py 2013-09-06 19:47:27.000000000 +0400
@@ -2285,6 +2285,7 @@
         ret = ODBC_API.SQLAllocHandle(SQL_HANDLE_DBC, shared_env_h, ADDR(self.dbc_h))
         check_success(self, ret)

+        self.settimeout(timeout)
         self.connect(connectString, autocommit, ansi, timeout, unicode_results, readonly)


But a more elegant solution would be to separate the timeouts:
SQL_ATTR_CONNECTION_TIMEOUT
SQL_ATTR_LOGIN_TIMEOUT
SQL_ATTR_QUERY_TIMEOUT

Original issue reported on code.google.com by [email protected] on 6 Sep 2013 at 3:52

Return character data as string rather than byte array

This is not a defect, but a suggestion for enhancement.

Currently database columns of char or varchar types come as byte arrays.
The same is true for metadata, like column names.
It would be much nicer and more intuitive if they were converted to string type.

For example, I have a simple program reading the table and displaying the 
output; it is strange to see values looking like b'abcd'.
It is also easy to make mistakes when comparing values.
Generally, in my opinion, we should normally use strings in our programs, and 
byte arrays only when we really need them.

This is my opinion, I don't know how the whole Python community would feel 
about it. My main language is Java, so I am used to this uniformity. 
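
A hedged user-level workaround for the current behaviour, assuming the driver returns UTF-8 encoded text:

def decode_row(row, encoding='utf-8'):
    return tuple(v.decode(encoding) if isinstance(v, bytes) else v for v in row)

rows = [decode_row(r) for r in cursor.fetchall()]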

Original issue reported on code.google.com by [email protected] on 8 Jul 2013 at 5:14

No older versions available

What steps will reproduce the problem?
1. pip install pypyodbc==1.1.2

What is the expected output? What do you see instead?

I expect 1.1.2 to be installed; instead I get an error that it's not available.
This makes using pip requirements files with version pinning impossible. That's
a pretty major obstacle to dependable deployments. Please do not remove old
versions from pip.

It seems that the old versions are not available from google code either, nor 
are there any tags in version control.

Original issue reported on code.google.com by [email protected] on 20 Aug 2013 at 6:44

Getting "DatabaseError" for all connections

What steps will reproduce the problem?
1. import pypyodbc.
2. try to connect.

What is the expected output? What do you see instead?
A connection object.

What version of the product are you using? On what operating system?
pypyodbc-1.0.11 / python2.7.2 / linux

Please provide any additional information below.
Fresh install.  All I'm trying to do is start a connection.  See log attached.  
I get the same error with MS-SQL (Driver SQL Server Native Client 11.0) as well.

Original issue reported on code.google.com by [email protected] on 1 Apr 2013 at 9:03

  • Merged into: #22

Attachments:

Using mdbtools, Chinese characters encoded in a GB charset can't be read correctly

What steps will reproduce the problem?

What is the expected output? What do you see instead?

When pypyodbc 1.3.3 reads an mdb that includes Chinese characters encoded in a
GB charset (gbk, gb2312 or gb18030), using the default parameters of the connect
function, it returns not Chinese characters but unrecognized characters; numbers
and English characters come through normally.

When I change the connect function's unicode_results parameter to False,
read a column (containing Chinese characters) as bytes, and decode it as
UTF-8, like this:

>>> conn=pypyodbc.connect('Driver=MDBTools;DBQ=/path/to/record.mdb', unicode_results=False)
>>> conn.cursor().execute('SELECT * FROM Build').fetchone()[0].decode('UTF-8')

The result is what I expected: it's exactly the Chinese characters. It's amazing!

So I believe the mdbtools ODBC driver already converts the Chinese characters
from the GB charset to Unicode, and converting again messes everything up.

What version of the product are you using? On what operating system?

1.3.3 on debian jessie amd64

Please provide any additional information below.
I wrote a post to describe this; the URL is
http://openwares.net/linux/pypyodbc_gb_mdb_mess.html

Original issue reported on code.google.com by [email protected] on 15 Aug 2014 at 12:23

Attachments:

varbinary insert/update failure to SQL Server in Python 3

Code Snippet:

cur.execute("create table TEST ( name varchar(30), record varbinary(max) )")
cur.commit()

data = bytearray()
for i in range(10):
    for b in range(256):
        data.append(b)
        print(len(data))
        cur.execute("insert into TEST values (?, ?)", ('foo' + str(i*256+b), data))

Code works fine running in 2.7, but fails in 3.3:
pypyodbc.DataError: ('22001', '[22001] [Microsoft][SQL Server Native Client 
11.0]String data, right truncation')


Running on Win7 with 64bit python against SQL Server 2012.
Problem was originally discovered when trying to store gzip data blobs in a 
varbinary(max) column.

Original issue reported on code.google.com by [email protected] on 11 Jan 2014 at 6:19

Cannot use NamedTupleRow

What steps will reproduce the problem?
Create cursor with conn.cursor(pypyodbc.NamedTupleRow)

What is the expected output? What do you see instead?

Traceback (most recent call last):
  File "statsco.py", line 21, in <module>
    row = c.fetchone()
  File "/usr/local/lib/python2.7/dist-packages/pypyodbc.py", line 1839, in fetchone
    return self._row_type(value_list)
  File "/usr/local/lib/python2.7/dist-packages/pypyodbc.py", line 1073, in __init__
    super(Row, self).__init__(*iterable)
TypeError: __init__() takes exactly 1 argument (191 given)

What version of the product are you using? On what operating system?

Version 1.2.0 on Ubuntu 12.04.2 64 bit
iSeries 6.1

Please provide any additional information below.

Same exception with MutableNamedTupleRow

Original issue reported on code.google.com by [email protected] on 27 Oct 2013 at 9:57

IronPython 2.7 StopIteration exception raised when retrieving SQL data set

What steps will reproduce the problem?
1. Connect to Access Database
2. Connect the cursor
3. Exception is raised

What is the expected output? What do you see instead?
No output. Instead a stopIteration error is raised

What version of the product are you using? On what operating system?
IronPython 2.7 with Visual Studio Desktop 2013 on Windows 7

Please provide any additional information below.


Original issue reported on code.google.com by [email protected] on 3 Aug 2014 at 5:55

Attachments:

Handle unicode in table column

What steps will reproduce the problem?
1.Create a table with pypyodbc, with some column headers containing character Σ
2.Read the column header using cursor.description


What is the expected output? What do you see instead?
The expected string should contain a good string with unicode encoding
The encoding of Σ turns into \xa6\xb2 which is not recognizable by python

What version of the product are you using? On what operating system?
Version:1.3.0 on windows 7 64 bit

Please provide any additional information below.
I don't know if I was doing something wrong, but the description property seems 
doesn't return correct column header string properly if there is unicode 
character contained. I tried to search internet but couldn't find solution

Original issue reported on code.google.com by [email protected] on 13 Mar 2014 at 2:44

Default driver name is different on 64bit Windows

On 64bit windows, using the AccessDatabaseEngine_x64.exe package the default 
driver name has changed from 'Microsoft Access Driver (*.mdb)' to 'Microsoft 
Access Driver (*.mdb, *.accdb)'

Changing this in the pypyodbc.win_ functions seems to fix it fine.

Not sure if this is really a bug, but would be nice to have this automatically 
detected and handled.
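
A hedged example using the 64-bit driver name from the report (the .accdb path is a placeholder):

conn = pypyodbc.connect(
    'Driver={Microsoft Access Driver (*.mdb, *.accdb)};'
    'DBQ=C:\\data\\example.accdb;')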

Original issue reported on code.google.com by [email protected] on 7 Apr 2013 at 6:59

ODBC permission error when attempting to run pypyodbc.connect under cygwin via fabric

What steps will reproduce the problem?
1.Execute pypyodbc.connect on remote host under Cygwin/Cygwin Python2.7.3 via 
fabric.

What is the expected output? What do you see instead?
The same script makes the connection as expected when run from the same machine
as the ODBC DSN, but when run remotely via fabric, I get the following error:
[192.168.1.30] out:     conn = pyodbc.connect('DSN=xxx;PWD=xxx')
[192.168.1.30] out:   File 
"/home/tschmidt/.virtualenvs/dashboard/src/pypyodbc/pypyodbc.py", line 2081, in 
__init__
[192.168.1.30] out:     self.connect(connectString, autocommit, ansi, timeout, 
unicode_results, readonly)
[192.168.1.30] out:   File 
"/home/tschmidt/.virtualenvs/dashboard/src/pypyodbc/pypyodbc.py", line 2129, in 
connect
[192.168.1.30] out:     validate(ret, SQL_HANDLE_DBC, self.dbc_h)
[192.168.1.30] out:   File 
"/home/tschmidt/.virtualenvs/dashboard/src/pypyodbc/pypyodbc.py", line 989, in 
validate
[192.168.1.30] out:     ctrl_err(handle_type, handle, ret)
[192.168.1.30] out:   File 
"/home/tschmidt/.virtualenvs/dashboard/src/pypyodbc/pypyodbc.py", line 974, in 
ctrl_err
[192.168.1.30] out:     raise Error(state,err_text)
[192.168.1.30] out: pypyodbc.Error: ('HY000', "[HY000] [Microsoft][ODBC 
Microsoft Access Driver] The Microsoft Jet database engine cannot open the file 
'(unknown)'.  It is already opened exclusively by another user, or you need 
permission to view its data.")


What version of the product are you using? On what operating system?

Running from my pypyodbc fork with the latest changes merged from rev
e657d5acbc5f on the bitbucket repo, plus my Cygwin compatibility modifications:
https://bitbucket.org/yekibud/pypyodbc/src/7575b1d44214cf51aa5c24d1bf010b5564fa92a4/pypyodbc.py?at=master#cl-456

Please provide any additional information below.

Original issue reported on code.google.com by [email protected] on 19 Feb 2013 at 5:00

Reuse of prepared statement does not work

What steps will reproduce the problem?
1. Write a simple script that executes SQL statement with parameter, something 
like cursor.execute("select * from tbl where x = ?", ('ABC',))
2. In the script, execute the statement twice, with different parameters.
3. After each execution, write a simple loop to go through the rows.
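
A minimal sketch of these steps (the table and column names are placeholders); the second execute() is where the "Invalid cursor state" error appears:

cur = conn.cursor()
for value in ('ABC', 'DEF'):
    cur.execute("select * from tbl where x = ?", (value,))
    for row in cur.fetchall():
        print(row)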

What is the expected output? What do you see instead?
Expected output is, getting rows from both calls. 
Instead, I am getting only results of the first call, and then exception:
pypyodbc.ProgrammingError: ('24000', '[24000] [Microsoft][SQL Server Native 
Client 10.0]Invalid cursor state')


What version of the product are you using? On what operating system?
pypyodbc 1.1.1, Python3, Windows 7

Please provide any additional information below.
Looks like something in the cursor is supposed to be closed after iteration 
through the rows. That probably should happen at the time of the second call.
Or am I missing some call that I have to make? Anything that effectively closes 
the result set, but leaves the prepared statement intact?
I tried calling _free_results(), but it did not help.
Should I try development version of the module?

Original issue reported on code.google.com by [email protected] on 8 Jul 2013 at 3:24

error install with python3.3

What steps will reproduce the problem?
1.git clone git://github.com/jiangwen365/pypyodbc.git
2.python setup.py install

What is the expected output? What do you see instead?
Expected: installation of the pypyodbc module.
Instead I see: ImportError: No module named 'setuptools'

What version of the product are you using? On what operating system?
python 3.3
winxp sp3

Please provide any additional information below.
I think we should use the standard distutils library. In setup.py, replace:
-from setuptools import setup
+from distutils.core import setup


Original issue reported on code.google.com by [email protected] on 24 Mar 2013 at 9:07

Trimmed varchar results.

What steps will reproduce the problem?

Create a procedure which returns a varchar column more than 2048 characters long.

What is the expected output? What do you see instead?

Result is trimmed.

What version of the product are you using? On what operating system?

MS SQL 2012 Express + Python 3.4

Please provide any additional information below.

After modifying pypyodbc.py I've managed to overcome this issue only by 
changing line:

SQL_VARCHAR         : (str,                 lambda x: x,                SQL_C_CHAR,         create_buffer,      2048  ,         False         ),

to

SQL_VARCHAR         : (str,                 lambda x: x,                SQL_C_CHAR,         create_buffer,      2048  ,         True          ),

I've also changed this for SQL_WVARCHAR. Now it's working.

Did I fix this right, or was that False value right and I should overcome this 
differently?

Original issue reported on code.google.com by [email protected] on 16 Jul 2014 at 8:10

check_status fails if driver doesn't provide details

What steps will reproduce the problem?

It's a bit tricky - it depends on the driver you're using.
1. I've been using solidDB (IBM)
2. Connect to the DB with readonly=True (or do any activity for which the 
driver reports an error but doesn't return info)
3. Inner exception occurs (list index out of range)

What is the expected output? What do you see instead?

    Using the same parameters, pyodbc raises pyodbc.Error('HY000', 'The driver did not supply an error!').
    (I expect them to act similarly, or at least report that there is no error supplied.)

What version of the product are you using? On what operating system?

    pypyodbc-1.3.1 to connect to SolidDB with unixODBC


Please provide any additional information below.

I think that solidDB doesn't support the readonly attribute. In pyodbc the
SQLSetConnectAttr for readonly is only called when readonly=True, so it's not a
problem there. In pypyodbc it's always called, which is why trying to connect to
solidDB fails.

The state that SQLGetDiagRec returns in ctrl_err is 00000, and the message is 
empty. The return code is SQL_NO_DATA_FOUND, but since it's the first time it 
is called, it fails.

Traceback for the error:

pypyodbc.py in connect(self, connectString, autocommit, ansi, timeout, 
unicode_results, readonly)
   2480 
   2481         ret = ODBC_API.SQLSetConnectAttr(self.dbc_h, SQL_ATTR_ACCESS_MODE, self.readonly and SQL_MODE_READ_ONLY or SQL_MODE_READ_WRITE, SQL_IS_UINTEGER)
-> 2482         check_success(self, ret)
   2483 
   2484         self.unicode_results = unicode_results

pypyodbc.py in check_success(ODBC_obj, ret)
    986             ctrl_err(SQL_HANDLE_STMT, ODBC_obj.stmt_h, ret, ODBC_obj.ansi)
    987         elif isinstance(ODBC_obj, Connection):
--> 988             ctrl_err(SQL_HANDLE_DBC, ODBC_obj.dbc_h, ret, ODBC_obj.ansi)
    989         else:
    990             ctrl_err(SQL_HANDLE_ENV, ODBC_obj, ret, False)

pypyodbc.py in ctrl_err(ht, h, val_ret, ansi)
    948             #No more data, I can raise
    949             #print(err_list[0][1])
--> 950             state = err_list[0][0]
    951             err_text = raw_s('[')+state+raw_s('] ')+err_list[0][1]
    952             if state[:2] in (raw_s('24'),raw_s('25'),raw_s('42')):

IndexError: list index out of range
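
A hedged sketch of a guard for the empty err_list case hit above; it mirrors pyodbc's "driver did not supply an error" behaviour rather than being pypyodbc's actual fix:

if not err_list:
    raise Error('HY000', '[HY000] The driver did not supply an error!')
state = err_list[0][0]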


Original issue reported on code.google.com by [email protected] on 28 Apr 2014 at 7:55

ValueError raised

The attached file should reproduce the error. I'm using Python 2.7.6 on Windows
7 64-bit and pypyodbc 1.3.1.

To see the error:
1. Adjust paths in python file
2. Run the python file

For me this produces the following error:

"C:\Users\reknowles\Desktop\pypyodbc access bug.py in <module>()
     11 c = conn.cursor()
     12 
---> 13 a = c.execute('select Field1 from table1').fetchone()

C:\Users\reknowles\AppData\Local\Continuum\Anaconda\lib\site-packages\pypyodbc.p
y in fetchone(self)
   1867                                     value_list.append(buf_cvt_func(from_buffer_u(alloc_buffer)))
   1868                                 else:
-> 1869                                     value_list.append(buf_cvt_func(alloc_buffer.value))
   1870                             else:
   1871                                 # There are previous fetched raw data to combine

ValueError: could not convert string to float: E-2"

The value it's trying to read from the access database is -2.90667617099896E-02 
so I suspect somewhere along the line something is struggling with the 'E-2' 
bit.

Possibly related to the following issue?:
https://code.google.com/p/pypyodbc/issues/detail?id=28&colspec=ID%20Type%20Priority%20Milestone%20Status%20Owner%20Summary

Original issue reported on code.google.com by [email protected] on 24 Apr 2014 at 2:40

Attachments:

Disallowed implicit conversion from data type ntext to data type varchar

What steps will reproduce the problem?
1. Create a table with a type of varchar(1024)
2. Insert some data into the table.
3. Use 'update table set longcolumn = ? where id=1' and pass a parameter value
600 characters long (a minimal sketch follows below).
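
A minimal sketch of the steps above (table and column names are placeholders):

cur = conn.cursor()
cur.execute("CREATE TABLE t (id int, longcolumn varchar(1024))")
cur.execute("INSERT INTO t VALUES (1, 'x')")
cur.execute("UPDATE t SET longcolumn = ? WHERE id = 1", ('y' * 600,))
conn.commit()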

What is the expected output? What do you see instead?
Expected to see the row updated. Instead, get an error:
Disallowed implicit conversion from data type ntext to data type varchar
It seems that it's calling sp_prepexec and setting the parameter type to ntext 
even though it would work as varchar or nvarchar.

What version of the product are you using? On what operating system?
latest version, Windows 2008 Web Server.


Please provide any additional information below.

Original issue reported on code.google.com by [email protected] on 1 May 2013 at 11:31

cygwin compatibility

What steps will reproduce the problem?
1.import pypyodbc under cygwin

What is the expected output? What do you see instead?
An OdbcNoLibrary exception is raised since the sys.platform inspection around line
#490 of pypyodbc.py doesn't include 'cygwin'. However, the proper way to load
the native ODBC DLL would then be something like:

ODBC_API = ctypes.cdll.LoadLibrary('/cygdrive/c/WINDOWS/system32/odbc32.dll')

So I'm not sure how that should best be implemented.


What version of the product are you using? On what operating system?
pypyodbc 0.8.7
cygwin 1.7.17-1 running on Windows XP 

Please provide any additional information below.


Original issue reported on code.google.com by [email protected] on 6 Nov 2012 at 6:59

  • Merged into: #7

Cannot access row object by column name, attribute error gets thrown

Accessing Row object by attribute fails. This works in pyodbc.

What steps will reproduce the problem?
1. Run a select query
2. Try accessing row object by column name
3. AttributeError will be thrown

E.g.

for row in cursor.fetchall():
    id = row.id # <<<< fails

What is the expected output? What do you see instead?
I expect the value of the column to be returned; instead I get an AttributeError.

What version of the product are you using? On what operating system?
0.9.3


Please provide any additional information below.

Original issue reported on code.google.com by [email protected] on 13 Feb 2013 at 4:54

Can't get column value in the row by column name in MS ACCESS 2010

What steps will reproduce the problem?
1. Connect to MS access 2010 database
2. select all rows from table
3. try to retrieve column in row by name

What is the expected output? What do you see instead?
Expected: get the value of the expected column in the row. Instead I see "nothing" for all rows.

What version of the product are you using? On what operating system?
Version 1.1.1

Please provide any additional information below.
for i in connection.cursor().execute(SQL).description:
  i[0] 

returns byte strings (b'id', b'name'), so b'id' != 'id'.

Potential solution (works for me, but must be tested):

REPLACE
col_name = Cname.value
TO
col_name = Cname.value.decode("utf-8")

Original issue reported on code.google.com by [email protected] on 16 Apr 2013 at 3:30

  • Merged into: #19

Values of bit type are read incorrectly

What steps will reproduce the problem?
1. In SQL Server have a table with the column of bit type, and some rows where 
the value of that column is 1.
2. Write a simple code retrieving the data.
3. Run the script.

What is the expected output? What do you see instead?
Expected output is value "True" for that column: it is converted automatically 
to the Python "bool" type, so "1" should become "True".
Instead, I see "False".

What version of the product are you using? On what operating system?
Windows7, Python 3.3.2, pypyodbc-1.1.1-py3.3, SQL Server 10.0.4000, 
driver "SQL Server Native Client 10.0"

Please provide any additional information below.


Original issue reported on code.google.com by [email protected] on 5 Jul 2013 at 3:03

Using pypyodbc on Mac

What steps will reproduce the problem?
1. Just open a simple python file and paste this 

#!/usr/bin/env python

import sys
import httplib
import os.path

import pypyodbc
cnxn = pypyodbc.connect('DSN=servername;UID=username;PWD=password')

(Use a valid server name/UID/password above; I just put in placeholders.)
This should be done on the latest Mac OS X.

What is the expected output? What do you see instead?
The expected output is that the connection should be established. Instead I see
something like this:

[iODBC][Driver Manager]Data source name not found and no default driver 
specified. Driver could not
Traceback (most recent call last):
  File "./myfile.py", line 8, in <module>
    cnxn = pypyodbc.connect('DSN=servername;UID=username;PWD=password' )
  File "/../../pypyodbc.py", line 2329, in __init__
    self.connect(connectString, autocommit, ansi, timeout, unicode_results, readonly)
  File "/../../pypyodbc.py", line 2394, in connect
    check_success(self, ret)
  File "/../../pypyodbc.py", line 995, in check_success
    ctrl_err(SQL_HANDLE_DBC, ODBC_obj.dbc_h, ret, ODBC_obj.ansi)
  File "/../../pypyodbc.py", line 982, in ctrl_err
    err_list.append((from_buffer_u(state), from_buffer_u(Message), NativeError.value))
  File "/../../pypyodbc.py", line 486, in UCS_dec
    uchar = buffer.raw[i:i + ucs_length].decode(odbc_decoding)
  File "/../../Python.framework/Versions/2.7/lib/python2.7/encodings/utf_32.py", line 11, in decode
    return codecs.utf_32_decode(input, errors, True)
UnicodeDecodeError: 'utf32' codec can't decode bytes in position 0-1: truncated 
data




What version of the product are you using? On what operating system?

Using the latest pypyodbc. This is Mac OS X.

Please provide any additional information below.


I installed FreeTDS and unixODBC using MacPorts. The tsql and isql tools show the
connection properly (and are under /opt/local/). I am not sure if pypyodbc is
able to detect them.


I did look into this issue that was already reported 

http://code.google.com/p/pypyodbc/issues/detail?id=20

and my pypyodbc.py has 


 # only iODBC uses utf-32 / UCS4 encoding data, others normally use utf-16 / UCS2
        # So we set those for handling.
        if 'libiodbc.dylib' in library:
            odbc_decoding = 'utf_32'
            odbc_encoding = 'utf_32_le'
            ucs_length = 4


so that should be taken care of .

The ODBC API library is /usr/lib/libiodbc.dylib.

Original issue reported on code.google.com by [email protected] on 18 Dec 2013 at 2:45
