python's Introduction

Synopsis

Join the chat at https://gitter.im/boostorg/python

Welcome to Boost.Python, a C++ library which enables seamless interoperability between C++ and the Python programming language. The library includes support for:

  • References and Pointers
  • Globally Registered Type Coercions
  • Automatic Cross-Module Type Conversions
  • Efficient Function Overloading
  • C++ to Python Exception Translation
  • Default Arguments
  • Keyword Arguments
  • Manipulating Python objects in C++
  • Exporting C++ Iterators as Python Iterators
  • Documentation Strings

See the Boost.Python documentation for details.

Hint: Check out the development version of the documentation to see work in progress.

Building

While Boost.Python is part of the Boost C++ Libraries super-project, and thus can be compiled as part of Boost, it can also be compiled and installed stand-alone, i.e. against a pre-installed Boost package.

Prerequisites

Build

Run

faber

to build the library.

Test

Run

faber test.report

to run the tests.

Build docs

Run

faber doc.html

to build the documentation.

python's People

Contributors

ankitdaf, beman, cowo78, dabrahams, danieljames, djowel, douggregor, eldiener, grafikrobot, jcpunk, jewillco, jhunold, jzmaddock, kojoley, lastique, mclow, nikiml, pdimov, pmenso57, raoouul, spkorhonen, stefanseefeld, steveire, straszheim, swatanabe, tadeu, talljimbo, teeks99, thwitt, vprus


python's Issues

Calling Python function from C++

Hi,

Could you help explain how a user-defined C++ class instance can be passed as an argument to a Python function? I am able to pass an integer argument using the PyInt_FromLong function, as below.

	int MyClass::getSquare( int x )
	{
		return pow( x, 2 );
	}
	BOOST_PYTHON_MODULE( Test )
	{
		class_<MyClass>( "MyClass" )
			.def( "getSquare", &MyClass::getSquare )
			;
	}

	PyObject * py_args_tuple = PyTuple_New(1);
	PyObject * py_int;
	py_int = PyInt_FromLong( 4 );
	PyTuple_SetItem( py_args_tuple, 0, py_int );
	PyObject_CallObject( pFun, py_args_tuple );

	def getSquare( x ):
	    # Create MyClass object myClass
	    print myClass.getSquare(4)

Thanks,

Nilufar

Deprecate use of boost::shared_ptr in favour of std::shared_ptr

Assuming boost::shared_ptr and std::shared_ptr are distinct types, let Boost.Python provide built-in support for std::shared_ptr whenever the compiler supports it, and optional support for boost::shared_ptr (for backward compatibility). The latter may be removed later, to reduce Boost.Python's dependency on Boost.

make_unique

According to this Stack Overflow question, http://stackoverflow.com/questions/20581679/boost-python-how-to-expose-stdunique-ptr,
I think many people need make_unique for boost::python.
Boost.Python's news/change log has no indication that it has been updated for C++11 move semantics.
Additionally, this feature request for unique_ptr support has not been touched for over a year.

We need unique_ptr and make_unique support; as you can see, that Stack Overflow topic already contains a solution.
Thanks

Use of this header (ice_eq.hpp) is deprecated

Compiling on ArchLinux I get:

In file included from /usr/include/boost/type_traits/ice.hpp:15:0,
                 from /usr/include/boost/python/detail/def_helper.hpp:9,
                 from /usr/include/boost/python/class.hpp:29,
                 from /usr/include/boost/python.hpp:18,
                 from /home/abaumann/strusBindings/lang/python/strusPythonModule.cpp:10:
/usr/include/boost/type_traits/detail/ice_or.hpp:17:71: note: #pragma message: NOTE: Use of this header (ice_or.hpp) is deprecated
 # pragma message("NOTE: Use of this header (ice_or.hpp) is deprecated")

tests for python3 - what about removing the useless dependency on 'past.builtins'?

Rationale:
'past.builtins' is not part of the stock Python modules; as a result, it's difficult to run boost-python tests in limited environments (like Android), yet it takes only a one-line change to fix.

Code like

if (sys.version_info.major >= 3):
    from past.builtins import long

can easily be replaced with:

if (sys.version_info.major >= 3):
    long = int

integrate support of std::shared_ptr in addition to boost::shared_ptr and perhaps other smart pointers implementing similar concepts

Currently the support for boost::shared_ptr leaves out code using std::shared_ptr. While in some cases the user can simply choose boost::shared_ptr over std::shared_ptr, this does not work for a pre-existing library that chose to use std::shared_ptr.

It may also be worth considering older libraries that implemented their own smart pointers (e.g. OpenSceneGraph, ITK/VTK), though I'm not sure they support exactly the same concepts shared_ptr does.

Numpy detection/support in Boost build system

I am building Boost 1.63 (all libraries) as a nix package on a Linux machine. Depending on how I supply numpy to the build environment, libboost_numpy is either built or not (in all cases numpy is available to the relevant interpreter via import numpy).

I am not familiar with the Boost.Jam build system and find it very difficult to understand how it decides whether to build the numpy extension or not. Hence I would be grateful if someone could point me in the right direction. I could contribute to documenting that aspect of the build system, as it is very well warranted IMO.

./b2 install --prefix=$VIRTUALENV does not honour `include/pythonX.Ym`

PEP 3149 introduced optional flags to the Python installation (see also this SO question), that are propagated to the virtualenv.

Specifically, the virtualenv include dir is built as .virtualenvs/.../include/python3.6m instead of .virtualenvs/.../include/python3.6 when certain conditions are met. However, running

./bootstrap.sh --prefix=$VIRTUAL_ENV --with-libraries=python --with-python=$VIRTUAL_ENV/bin/python
./b2 install --prefix=$VIRTUAL_ENV

will call the compiler as
g++ ... -I".../include/python3.6" ... "libs/python/src/object_operators.cpp", causing Python headers to not be found.

IMO Boost.Python should honor the default installation layout within a virtualenv.

Segfault (sometimes!) using array::set_module_and_type and Python 3

The following stripped down example demonstrates the issue.

Module definition:

#include <boost/python/def.hpp>
#include <boost/python/module.hpp>
#include <boost/python/numeric.hpp>

using namespace boost::python;

void pass_array(const numeric::array &) {

}

BOOST_PYTHON_MODULE(test_arrays) {
  numeric::array::set_module_and_type("numpy", "ndarray");
  def("pass_array", pass_array);
}

Running the following produces a segfault but not for every run:

echo; while [ $? == 0 ]; do PYTHONPATH=$PWD /usr/bin/python3 -c "from test_arrays import pass_array; import numpy as np; print('testing'); pass_array(np.arange(100.))"; done

The backtrace from a debug build of Python:

 Program terminated with signal 11, Segmentation fault.
#0  0x0000000000456c9a in dict_dealloc (mp=0x7f62ea20f488) at ../Objects/dictobject.c:1379
(gdb) bt
#0  0x0000000000456c9a in dict_dealloc (mp=0x7f62ea20f488) at ../Objects/dictobject.c:1379
#1  0x000000000046355a in module_dealloc (m=0x7f62ebe40c28) at ../Objects/moduleobject.c:398
#2  0x000000000062aca5 in meth_dealloc (m=0x7f62ea20f548) at ../Objects/methodobject.c:150
#3  0x00007f62eaadf4f9 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#4  0x00007f62eaadf545 in exit () from /lib/x86_64-linux-gnu/libc.so.6
#5  0x00007f62eaac4ecc in __libc_start_main () from /lib/x86_64-linux-gnu/libc.so.6
#6  0x000000000041deb9 in _start ()

I am not certain if it is related to this Python issue: http://bugs.python.org/issue17703

boost::python::objects::add_cast can make incorrect connections

(this was previously logged on trac: https://svn.boost.org/trac/boost/ticket/12807 before I noticed boost::python issues are meant to be reported on github)

We seem to be fairly unique in using add_cast (at least I couldn't find many other reports on Google), so maybe we're not playing by the rules. However...

We've observed that when adding lots of casting relationships via add_cast, sometimes the relationships aren't setup properly.

I've narrowed this down to demand_types() in inheritance.cpp

It appropriately reserves space to avoid reallocation of the type_index vector; however, as best I can tell, demand_type() will still shift elements in the vector when it calls insert() with an iterator in the middle of the vector. As a result, we're observing:

  • demand_types() calls demand_type() to set 'first'
  • demand_types() calls demand_type() to set 'second'
    • this call to demand_type() calls insert() which shifts the element that 'first' was previously pointing at

As a result, 'first' ends up pointing at the wrong element, and the wrong pair of vertices are connected in the adjacency_list, effectively setting up a bogus casting relationship.

The eventual impact is that users' python scripts start failing with error messages about their arguments not matching the C++ signature.

Maybe we're breaking a contract by calling add_cast without somehow registering both end-points into the adjacency_list and, if so, I'd love to get some best-practice guidance.

If not, and this is a legitimate issue, here's the patch we're putting in place for the time being:

    type_index_t::iterator first = demand_type(t1);
    type_index_t::iterator second = demand_type(t2);
+   first = demand_type(t1);

The first two (unchanged) calls still ensure both types have associated vertices, and the last (new) call compensates for the possibility that the second call invalidated the result from the first call.

I hope this report is useful. It's my first, so please let me know if I can provide any more info! I'll try to upload a repro case later if I can extract it from our production codebase.

Python crash on deleting shared_ptrs stored in a C++ vector

Hi,

There is a struct A defined in C++ and exposed to Python as a shared_ptr, and two global vectors of shared_ptrs of A.

  • Create an object of type A from Python and add it to both vectors
  • Clear one vector
  • When Python exits, the program crashes with an access violation while deleting the shared_ptrs

TestShared.zip

Environment: Boost 1.58.0, compiler VS 2012

I have attached both C++ and python file to recreate this issue.

Thanks

header only integration of Eigen types

If you google around a bit, you'll see a number of users trying to integrate Eigen types with Boost.Python. It's a bit hard, and the existing solutions are lackluster and cannot handle all issues. Let's strive for something built-in and header-based so we don't complicate other uses of Boost.Python.

Here's an incomplete list:

MSVC iso646.h conflict in boost/python/operators.hpp

MSVC includes iso646.h under various circumstances, which is easily worked around and has actually been seen elsewhere in MPL (Boost issue https://svn.boost.org/trac/boost/ticket/3018). This applies strictly to MSVC, whenever iso646.h is pulled in by various means. The trac specifically said to use GitHub for Boost.Python-related issues, so here I am.

iso646.h defines various operator macros which conflict with the Python operators (or, and, xor).

This is easily fixed with prefix/postfix push_macro and pop_macro's for the appropriate functions.

I have attached a diff against 1.61.

operators.patch.txt

segfault on importing boost::python module with default arguments using mingw64

I experience a segfault on importing the module resulting from the code snippet below:
Environment: Windows7, msys2, mingw64-gcc-5.2. <=boost-1.59.0

No problems occur for win32, under Linux 64/32-bit, or if I remove the default parameter from the declaration.

#include <boost/python.hpp>
#include <boost/python/def.hpp>

using namespace boost::python;

 inline int f(double x, double y=1.0){return 33;}

BOOST_PYTHON_MODULE(_bp_)
{
   def("f", f, (arg("x"), arg("y") = 1.0));
}

Compiled with:

ARCH=64
GCC=/C/msys64/mingw64/bin/g++.exe
PYROOT=../../thirdParty/WinPython-$(ARCH)bit-3.4.3.3/python-3.4.3.amd64/
PYLIB=$(PYROOT)/libs/libpython34.dll.a

$(GCC) -O2 -Os -pipe -ansi -Wno-unused-local-typedefs $(INC) -o bp.obj -c bp.cpp
$(GCC) -shared -o bp.pyd bp.obj $(PYLIB)
$(BOOSTROOT)/lib/libboost_python-mt.dll
$(BOOSTROOT)/lib/libboost_system-mt.dll

My libboost_python*.dll files are compiled with the same compiler and linked against my local WinPython.

or:

/C/msys64/mingw64/bin/g++.exe -O2 -Os -pipe -ansi -Wno-unused-local-typedefs -I ../gimli/thirdParty/dist-GNU-5.2.0-64/boost_1_59_0-gcc-5.2.0-64-py34/include/ -I ../../thirdParty/WinPython-64bit-3.4.3.3/python-3.4.3.amd64//include -o bp.obj -c bp.cpp
/C/msys64/mingw64/bin/g++.exe -shared -o _bp_.pyd bp.obj ../../thirdParty/WinPython-64bit-3.4.3.3/python-3.4.3.amd64//libs/libpython34.dll.a ../gimli/thirdParty/dist-GNU-5.2.0-64/boost_1_59_0-gcc-5.2.0-64-py34/lib/libboost_python-mt.dll  ../gimli/thirdParty/dist-GNU-5.2.0-64/boost_1_59_0-gcc-5.2.0-64-py34/lib/libboost_system-mt.dll

Tested with:

python -c 'import _bp_; print(_bp_.f(1, 2))'

If I remove the default parameter '=1.0' from the def declaration, everything works as expected.
There is also no problem with default arguments if the arg 'y' is of type int.

Are there any ideas how to overcome this problem, other than removing all default arguments in my whole C++ library?

Best,
Carsten

Tests fail on OS X

On OS X (10.11.3) the tests fail because a relative rpath is not allowed:

running...
Traceback (most recent call last):
  File "crossmod_exception.py", line 7, in <module>
    import crossmod_exception_a
ImportError: dlopen(/Users/spkersten/Development/boost-github/boost/bin.v2/libs/python/test/crossmod_exception.test/darwin-4.2.1/debug/crossmod_exception_a.so, 2): Library not loaded: libboost_python.dylib
  Referenced from: /Users/spkersten/Development/boost-github/boost/bin.v2/libs/python/test/crossmod_exception.test/darwin-4.2.1/debug/crossmod_exception_a.so
  Reason: unsafe use of relative rpath libboost_python.dylib in /Users/spkersten/Development/boost-github/boost/bin.v2/libs/python/test/crossmod_exception.test/darwin-4.2.1/debug/crossmod_exception_a.so with restricted binary

I'm running the tests by executing bjam in boost/libs/python/test.

nullptr as default argument results in unreported exception during import

The code at the bottom can be compiled like so:

clang++ -std=c++11 -I/opt/local/include -I/opt/local/Library/Frameworks/Python.framework/Versions/3.5/include/python3.5m  -shared -o foo.so nullptr_error.cpp -L/opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib -lpython3.5 -L/opt/local/lib -lboost_python

When importing the resulting module in Python I see the following error:

>>> import foo
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
SystemError: initialization of foo raised unreported exception

If I replace nullptr with NULL, then there is no error. This is with Boost 1.59 installed via MacPorts on the latest version of OS X with the latest version of Xcode.

#include <boost/python.hpp>

struct Foo
{
    Foo() : foo_(1) {}
    void setFoo(int * foo = nullptr) { foo_ = foo ? *foo : 1; }
    int getFoo() { return foo_; }
    int foo_;
};

BOOST_PYTHON_MODULE(foo)
{
    boost::python::class_<Foo>("Foo", boost::python::init<>())
        .def("setFoo", &Foo::setFoo, ( boost::python::arg("foo")=nullptr))
        .def("getFoo", &Foo::getFoo)
        ;
}

Segmentation Fault (core dumped) on Python 3.5.2 but not Python 2.7.12

So I'm trying to create a boost python module that simply creates and returns a numpy array,
but the function crashes (sometimes) and it doesn't ever seem to crash on Python 2.

Here's the source code I made:

#include <boost/python.hpp>
#include <numpy/ndarrayobject.h>

using namespace boost::python;

object create_numpy_array() {
    npy_intp dims = 1;
    long* data = new long[1];
    data[0] = 1;
    PyObject* obj = PyArray_SimpleNewFromData(1, &dims, PyArray_LONGLTR, data);
    boost::python::handle<> handle(obj);
    boost::python::numeric::array arr(handle);
    return arr.copy();
}

BOOST_PYTHON_MODULE(create) {
    import_array();
    numeric::array::set_module_and_type("numpy", "ndarray");
    def("numpy_array", &create_numpy_array);
}

using a simple python script to test:

import create
print(create.numpy_array())

The stack trace indicates that the crash occurs on a boost::python::handle destructor trying to decrease the ref count of a PyObject with a ref count of 0.

boost_python3-vc140-mt-gd-1_60.dll!boost::python::xdecref<_object>(_object * p)  Line 36 + 0x5b bytes   C++
boost_python3-vc140-mt-gd-1_60.dll!boost::python::handle<_object>::~handle<_object>()  Line 184 + 0xd bytes C++
boost_python3-vc140-mt-gd-1_60.dll!boost::python::numeric::`anonymous namespace'::`dynamic atexit destructor for 'array_function''()  + 0x10 bytes  C++

I've tried this on both Windows 7 and Ubuntu 16.04 both 64-bit.

Failed to compile on 'develop'

Caused by commit b2b9ab1, "Remove unused deprecated includes" of Boost.Iterator

Please change boost::detail::distance to std::distance in include/boost/python/slice.hpp (2 occurrences).

Numpy integration?

I'm the author of "numpy-boost", a library to allow access to Numpy arrays through the boost::multi_array interface, as well as integration with the type converters in boost::python.

See http://github.com/mdboom/numpy-boost

Would there be interest in a pull request to add this functionality to boost::python?

Are there contributing guidelines available?

Py3 std::string is not converted to 'bytes'

In Python 3, std::string is incorrectly converted to python 'str' type. It should be converted to 'bytes' type. Pull request #54 probably solves this issue.

Code of str_test module:

#include <boost/python.hpp>

std::string getString()
{
    return "string";
}

std::wstring getWString()
{
    return L"wstring";
}

BOOST_PYTHON_MODULE( str_test )
{
    boost::python::def( "getString",  &getString );
    boost::python::def( "getWString", &getWString );
}

First example is Python 2 and shows correct results.

Python 2.7.9 (default, Mar  1 2015, 12:57:24) 
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from str_test import getString, getWString
>>> type(getString())
<type 'str'>
>>> type(getWString())
<type 'unicode'>

Second example is Python 3 and shows that std::string is unexpectedly converted to 'str'.

Python 3.4.2 (default, Oct  8 2014, 10:45:20) 
[GCC 4.9.1] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from str_test import getString, getWString
>>> type(getString())
<class 'str'>     # ooops!!! expected bytes there!
>>> type(getWString())
<class 'str'>

/test/exec.cpp - it is wrong to call PyImport_AppendInittab() after Py_Initialize()

The test in /test/exec.cpp looks like

  1. Py_Initialize() is called in main()
  2. much later, in exec_test(), PyImport_AppendInittab() is called to inject the module 'embedded_hello'

This is wrong and doesn't work in Python 3.5.

According to the discussion on the python.devel mailing list (see http://permalink.gmane.org/gmane.comp.python.devel/157983), it is wrong to call PyImport_AppendInittab() after Py_Initialize().

Any embedded module injected by PyImport_AppendInittab() after Py_Initialize() can't be imported in Python 3.5. There is also a comment in the Python sources confirming this fact:

https://github.com/python/cpython/blob/f680b517e2701e9a3859afb62628a46eccdce17c/Python/import.c#L2153

Comment essence:

API for embedding applications that want to add their own entries
to the table of built-in modules. This should normally be called
before Py_Initialize(). When the table resize fails, -1 is
returned and the existing table is unchanged.

stl_iterator is broken (bad equality and copy)

The following asserts, which should be valid, fail.

void broken_equality(boost::python::numeric::array& data) {
    using namespace boost::python;

    stl_input_iterator<int> begin(data);
    stl_input_iterator<int> begin_other(data);
    stl_input_iterator<int> end;

    assert(begin == begin_other);

    ++begin;

    assert(begin != begin_other); // fails: == evaluates to true when it should be false
}

The equality check in the function below is incorrect: it only checks that both iterators point to a non-null object.

bool stl_input_iterator_impl::equal(stl_input_iterator_impl const &that) const
{
    return !this->ob_ == !that.ob_;
}

Next, copying an iterator copies the internal state in a way that is not independent of its source, i.e.

stl_input_iterator<int> begin(data);
stl_input_iterator<int> end;
assert(std::distance(begin, end) == std::distance(begin, end));

std::distance copies begin but advances the shared internal state it_, so both iterators change, and calling std::distance again doesn't return the same result.

Fixing this is non-trivial (at least to me) because I can't find an easy way to get an independent copy of it_ without having the original object used to construct the iterator.

'pyconfig.h': No such file or directory

Boost 1.61 @ VS2015:

call "C:\Users\Olaf\AppData\Local\Temp\b2_msvc_14.0_vcvarsall_x86.cmd" >nul
cl /Zm800 -nologo @"bin.v2\libs\python\build\msvc-14.0\debug\link-static\threading-multi\exec.obj.rsp"

...failed compile-c-c++ bin.v2\libs\python\build\msvc-14.0\debug\link-static\threading-multi\exec.obj...
compile-c-c++ bin.v2\libs\python\build\msvc-14.0\debug\link-static\threading-multi\object\function_doc_signature.obj
function_doc_signature.cpp
C:\VC\boost\boost/python/detail/wrap_python.hpp(50): fatal error C1083: Cannot open include file: 'pyconfig.h': No such file or directory

call "C:\Users\Olaf\AppData\Local\Temp\b2_msvc_14.0_vcvarsall_x86.cmd" >nul

cl /Zm800 -nologo @"bin.v2\libs\python\build\msvc-14.0\debug\link-static\threading-multi\object\function_doc_signature.obj.rsp"

...failed compile-c-c++ bin.v2\libs\python\build\msvc-14.0\debug\link-static\threading-multi\object\function_doc_signature.obj...
...skipped <pbin.v2\libs\python\build\msvc-14.0\debug\link-static\threading-multi>libboost_python-vc140-mt-gd-1_61.lib for lack of <pbin.v2\libs\python\build\msvc-14.0\debug\link-static\threading-multi>numeric.obj...
...skipped <pstage\lib>libboost_python-vc140-mt-gd-1_61.lib for lack of <pbin.v2\libs\python\build\msvc-14.0\debug\link-static\threading-multi>libboost_python-vc140-mt-gd-1_61.lib...
...failed updating 56 targets...
...skipped 4 targets...

Building on Windows does not create shared Boost.Numpy

At conda-forge/boost-feedstock#32 we are trying to finalize the Boost Python package on Conda.
The build currently fails while copying boost_numpy-vc90-mt-1_63.lib, because the file does not exist.

The full log is at: boost_numpy-vc90-mt-1_63.lib
The build command is:

.\b2 install ^
    --build-dir=buildboost ^
    --prefix=%LIBRARY_PREFIX% ^
    toolset=msvc-%VS_MAJOR%.0 ^
    address-model=%ARCH% ^
    variant=release ^
    threading=multi ^
    link=static,shared ^
    --with-python ^
    -j%CPU_COUNT%

Is there something wrong in the build process, or did we find an issue in the Boost building process?

Add indexing operators

Thanks for moving forward with this work.

My use case is often porting C code for use with Python. In that case, the C code expects containers that are indexable. I can often convert to using ndarray (that is, https://github.com/ndarray/ndarray) using pretty simple mechanical translation. These codes expect to index into arrays.

Without indexing operators, I don't think the current version is useful here.

boost::python::objects::iterator_range::next::operator() might return reference to temporary object

iterator_range in boost/python/object/iterator.hpp defines an inner class called next which defines operator() (see https://github.com/boostorg/python/blob/develop/include/boost/python/object/iterator.hpp#L44). Here is the implementation:

result_type
operator()(iterator_range<NextPolicies,Iterator>& self)
{
  if (self.m_start == self.m_finish)
    stop_iteration_error();
  return *self.m_start++;
}

The last line is a potential problem as a temporary object (the return value of self.m_start++) is dereferenced. If result_type is a reference (meaning operator() returns a reference) and the dereferenced temporary object returns a reference to something within the temporary object, you get a dangling reference.

Here is how result_type is determined:

typedef boost::detail::iterator_traits<Iterator> traits_t;

typedef typename mpl::if_<
  is_reference<
    typename traits_t::reference
  >
  , typename traits_t::reference
  , typename traits_t::value_type
>::type result_type;

In general this code looks good. However it implies that if a reference is returned, the reference does not refer to something in the iterator (as with the current implementation the iterator is a temporary object).

Here is an attempt to fix operator():

result_type
operator()(iterator_range<NextPolicies,Iterator>& self)
{
  if (self.m_start == self.m_finish)
    stop_iteration_error();
  self.m_current = self.m_start++;
  return *self.m_current;
}

Here an additional member variable called m_current is used to turn the temporary object returned by self.m_start++ into something that lives longer. I tested this fix and it seems to work, though I didn't test it thoroughly (for example, I don't know when and where Boost.Python creates copies of iterator_range; if you hold a reference to something within a member variable, the member variable had better not change its location in memory :).

A valid question is of course whether this is a Boost.Python issue at all. First, I'm opening this ticket to document the issue (for others and also for myself ;). Secondly, I would already be happy if Boost.Python somehow supported me in detecting this issue at compile time (I'm fine with getting a compiler error and knowing I need to work around something). Thirdly, I'm working with a third-party library whose iterators return references to something within the iterators - it's easier for me to fix Boost.Python than all the iterators in the third-party library.

Linux: Allow the build system to generate multiple py3k boost.python libraries

Since 8910035 (SVN r56305, GSoC 2009), the Python 3 variant of Boost.Python is built as libboost_python3.so (on Linux), and since 37b45d2 (SVN r59987, trac 3544), users can use --python-buildid to add additional strings to the library name. (Though it has been broken since 1.48; see trac 6286.)

Based on this, Linux distributions build multiple Boost.Python libraries against different Python versions. For example, Debian's rules (Debian source tarball):

    for pyver in $(pyversions); do \
        pyid=$$(echo $$pyver | tr -d .); \
        echo "Building Boost.Python for python version $$pyver"; \
        $(JAM) --with-python --with-mpi --python-buildid=py$$pyid python=$$pyver; \
        mv stage/lib/mpi.so stage/lib/mpi-py$$pyid.so || true; \
    done

So on Debian those files are libboost_python-py35.so and libboost_python-py27.so (amd64 file list). Gentoo, on the other hand, has a different scheme:

PYTHON_OPTIONS=" --python-buildid=${EPYTHON#python}"

So on Gentoo the files are libboost_python-3.5.so and libboost_python-2.7.so. Some other distributions, like Fedora (build spec, file list) and Arch Linux (build script, file list), do not use --python-buildid, so they have libboost_python3.so and libboost_python.so.

In such a situation, determining a portable way to link to the correct Boost.Python library is very difficult, especially for projects using CMake. (proposed solution at CMake)

Here I suggest that Boost.Python adopt a unified suffix scheme for libraries targeting different Python versions, so that downstream users can follow it. One idea is to mirror the libpythonX.Y.so name: for example, libboost_python-3.5m.so targets libpython3.5m.so, and libboost_python-2.7.so targets libpython2.7.so. The suffix can be detected with the following Python script:

'.'.join(map(str, sys.version_info[:2])) + (sys.abiflags if hasattr(sys, 'abiflags') else '')

crash under python3.5

#include <boost/python.hpp>

using namespace boost::python;

int main()
{
    Py_Initialize();
    object main_module = import("__main__");
}

There is an exception:

Exception thrown at 0x00000000004E126A (python35.dll) in test.exe: 0xC0000005: Access violation reading location 0x0000000000000025.

Call stack:

python35.dll!PyUnicode_InternInPlace(_object * * p) Line 15007  C
python35.dll!PyImport_Import(_object * module_name) Line 1752  C
python35.dll!PyImport_ImportModule(const char * name) Line 1260  C
test.exe!boost::python::import(boost::python::str name) Line 20  C++

I tried both static and dynamic linking; the result is the same. I use VS2015, Python 3.5.1, an x64 build, with the Unicode character set.

Boost Python List Append

Hi,

I am using Boost.Python to create a Python list of my C++ class objects, as below. Below that is the error thrown when I call the append function. My guess is that Boost.Python is unable to determine which Python object it should create from my C++ class object. How can I resolve this?

"TypeError: No to_python (by-value) converter found for C++ type: class MyClass",

class MyClass { /* ... */ };  // Class
MyClass myClass;              // Class instance

boost::python::list list;
list.append( myClass );

BOOST_PYTHON_MODULE( Test )
{
     class_<MyClass>( "MyClass" )
         .def( "getSquare", &MyClass::getSquare )
         ;
}

Thanks,

Nilufar

Conversion of `char` to python is broken in Python >= 3.0

char is currently being converted using PyUnicode_FromStringAndSize(&x, 1), but PyUnicode_FromStringAndSize interprets the input string as UTF-8, so every char between 128 and 255 is a malformed UTF-8 string and can't be converted.

Before submitting a fix, I'd like to discuss the solution: returning a positive integer seems to be more appropriate, since this is the new convention in Python 3 (e.g., b'a'[0] returns 97 in Python 3, while it returns b'a' in Python 2). Also, this is more consistent with how signed char and unsigned char are currently converted (as integers too).

boost::python::make_setter(&X::y) no longer compiles

This trivial example, similar to the examples in the Boost.Python documentation, compiled with 1.58 but doesn't compile with 1.59 (using various versions of GCC and Clang):

#include <boost/python.hpp>

struct X { int y; };

int main()
{
  boost::python::make_setter(&X::y);
}

The relevant error is:

/usr/local/boost-1.59.0/include/boost/python/data_members.hpp:303:15: note: candidate: boost::python::api::object boost::python::make_setter(D&) [with D = int X::*] <near match>

    inline object make_setter(D& x)
                  ^

/usr/local/boost-1.59.0/include/boost/python/data_members.hpp:303:15: note: conversion of argument 1 would be ill-formed:

prog.cc:7:37: error: invalid initialization of non-const reference of type 'int X::*&' from an rvalue of type 'int X::*'

     boost::python::make_setter(&X::y);
                                     ^

boost.python incompatible with -std=c++17 with `noexcept` on g++-7

Hi,

I wanted to post a ticket on svn, but it says 'python: USE GITHUB', so I came here.
The problem can be reproduced with the g++-7 20170205 snapshot (and earlier snapshots, including 201701xx) by compiling the following code with -std=c++17 enabled:

#include "boost/python.hpp"

struct A {
    int do_nothing() noexcept {
        return 0;
    }
};

BOOST_PYTHON_MODULE(test){
    boost::python::class_<A>("A", boost::python::no_init)
        .def("get", &A::do_nothing);
}

And compiler would complain

./boost/python/detail/invoke.hpp:75:16: error: must use ‘.*’ or ‘->*’ to call pointer-to-member function in ‘f (...)’, e.g. ‘(... ->* f) (...)’
     return rc(f( BOOST_PP_ENUM_BINARY_PARAMS_Z(1, N, ac, () BOOST_PP_INTERCEPT) ));
               ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

On g++ 6.3 with -std=c++17, however, everything was fine, and if you comment out the noexcept it also works. (I personally think g++-7 is more standard-conforming than g++ 6.3 here, since the function signature now carries the exception specification.) I believe that single line is not the real problem: invoke has multiple functions for calling C++ objects, and it seems the higher-level code is selecting the wrong invoke overload.

Other parts of my project are using C++17 features, so I'm using g++ 7 even before its release. How can this be resolved?

wrap_python.hpp won't find necessary .h files

I've installed Boost on my Mac (running CLion on El Capitan) and it installed with no problems. However, I'm now trying to run a simple .cpp HelloWorld program that simply includes Boost (without using it yet):

#include <boost/python.hpp>
#include <iostream>


int main() {
    std::cout << "Hello, World!" << std::endl;
    return 0;
}

But when I run this I get the following error:

/Applications/CLion.app/Contents/bin/cmake/bin/cmake --build /Users/ralston/Library/Caches/CLion2016.2/cmake/generated/HelloWorld-ca434ad0/ca434ad0/Debug --target HelloWorld -- -j 4
Scanning dependencies of target HelloWorld
[ 50%] Building CXX object CMakeFiles/HelloWorld.dir/main.cpp.o
In file included from /Users/ralston/Desktop/CLion/extensions/HelloWorld/main.cpp:1:
In file included from /usr/local/include/boost/python.hpp:11:
In file included from /usr/local/include/boost/python/args.hpp:8:
In file included from /usr/local/include/boost/python/detail/prefix.hpp:13:
/usr/local/include/boost/python/detail/wrap_python.hpp:50:11: fatal error: 'pyconfig.h' file not found
# include <pyconfig.h>
          ^
1 error generated.
make[3]: *** [CMakeFiles/HelloWorld.dir/main.cpp.o] Error 1
make[2]: *** [CMakeFiles/HelloWorld.dir/all] Error 2
make[1]: *** [CMakeFiles/HelloWorld.dir/rule] Error 2
make: *** [HelloWorld] Error 2

I'm raising this as an issue because I've read another question similar to this, and I tried the recommended solutions from that issue (e.g., CPLUS_INCLUDE_PATH="$CPLUS_INCLUDE_PATH:/Library/Framework/Python.framework/Versions/3.5/Headers/[here are the necessary .h files]"), but that didn't work.

The other issue I saw related to this involved someone compiling from the command line. Is there any way to get the wrap_python.hpp file to recognize the appropriate .h files?

Here is my CMakeLists.txt file

cmake_minimum_required(VERSION 3.6)
project(HelloWorld)

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")

include_directories("/usr/local/include") # where the root boost directory sits

set(CMAKE_RUNTIME_OUTPUT_DIRECTORY, "HelloWorld")
#set(CMAKE_VERBOSE_MAKEFILE ON)
set(SOURCE_FILES main.cpp)
add_executable(HelloWorld ${SOURCE_FILES})

make Boost.Python respect alignment of types

Currently, if a type has an alignment requirement, such as

struct alignas(32) Vec3 {  // name added for illustration
    double pos[3];
};

Boost.Python will merrily violate the alignment required by the type and place it at any address in memory. This is a big problem for sharing code that uses vectorization (SSE); a common example is passing around Eigen types.

TypeError: No to_python (by-value) converter found for C++ type: boost::shared_ptr<...>

This is already being discussed in the hijacked issue #29, but since this is a different thing (a regression between 1.59 and 1.60), let's open a new issue for it. To summarize what I just googled:

The problem is that to_python converters are no longer created automatically for boost::shared_ptr.
It was also raised a month ago on the C++-sig list; although a workaround is known, it would be better not to have to add it everywhere.

@bennybp commented:

I managed to bisect with a simple test program and script. It appears to come from an update/rewrite of type_traits around commit boostorg/type_traits@f0da159 (in the type_traits submodule, although that commit won't compile for me).

Unfortunately, it looks like a big rewrite in one commit, so I don't know the exact cause, but I hope this helps

This issue was found in many projects that use boost.python: cctbx, mapnik/python-mapnik#79, BVLC/caffe#3494

neither Debian/Ubuntu 1.63 packages nor CMake feature NumPy support (discussion, not a bug)

Excuse me for posting here, as this is likely a packaging problem, but I thought I could get some hints before proceeding with a Debian bug report.

Trying to use the new NumPy support, I came up with the following partial hello-world code, which compiles and links but fails with an undefined symbol error at runtime:

hello.cpp:

#include <boost/python/numpy.hpp>

void init()
{
  Py_Initialize();
  boost::python::numpy::initialize();
}

BOOST_PYTHON_MODULE(hello)
{
}

CMakeLists.txt:

project(hello CXX)
cmake_minimum_required(VERSION 3.7)

add_library(hello SHARED hello.cpp)
set_target_properties(hello PROPERTIES SUFFIX ".so") # e.g. Mac defaults to .dylib which is not looked for by Python

# informing the Python bindings where to find Python
find_package(PythonInterp REQUIRED)
find_package(PythonLibs ${PYTHON_VERSION_STRING} EXACT REQUIRED)
target_include_directories(hello PUBLIC ${PYTHON_INCLUDE_DIRS})
target_link_libraries(hello ${PYTHON_LIBRARIES})


# informing the Python bindings where to find Boost.Python
# NumPy support was introduced in 1.63
find_package(Boost 1.63 COMPONENTS python-py${PYTHON_VERSION_MAJOR}${PYTHON_VERSION_MINOR} QUIET REQUIRED)
target_link_libraries(hello ${Boost_LIBRARIES})
target_include_directories(hello PUBLIC ${Boost_INCLUDE_DIRS})

Here's what happens on an all-standard Ubuntu zesty installation with the Debian/Ubuntu 1.63 packages:

$ mkdir build
$ cd build/
$ cmake ..
-- The CXX compiler identification is GNU 4.9.2
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found PythonInterp: /usr/bin/python (found version "2.7.13")
-- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython2.7.so (found suitable exact version "2.7.13")
CMake Warning at /usr/share/cmake-3.7/Modules/FindBoost.cmake:1534 (message):
  No header defined for python-py27; skipping header check
Call Stack (most recent call first):
  CMakeLists.txt:16 (find_package)


-- Configuring done
-- Generating done
-- Build files have been written to: /home/vagrant/temp/build
$ make
Scanning dependencies of target hello
[ 50%] Building CXX object CMakeFiles/hello.dir/hello.cpp.o
[100%] Linking CXX shared library libhello.so
[100%] Built target hello
$ python
Python 2.7.13 (default, Jan 19 2017, 14:48:08)
[GCC 6.3.0 20170118] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import libhello
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: ./libhello.so: undefined symbol: _ZN5boost6python5numpy10initializeEb
>>>

Also:

$ ldd libhello.so
        linux-vdso.so.1 =>  (0x00007ffd301d5000)
        libpython2.7.so.1.0 => /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0 (0x00007f2c04b3b000)
        libboost_python-py27.so.1.63.0 => /usr/lib/x86_64-linux-gnu/libboost_python-py27.so.1.63.0 (0x00007f2c048f2000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f2c0452b000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f2c0430d000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f2c040f2000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f2c03eec000)
        libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x00007f2c03ce9000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f2c039e0000)
        libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f2c03658000)
        libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f2c03441000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f2c052de000)
$ ls -1 /usr/lib/x86_64-linux-gnu/libboost_python*
/usr/lib/x86_64-linux-gnu/libboost_python.a
/usr/lib/x86_64-linux-gnu/libboost_python-py27.a
/usr/lib/x86_64-linux-gnu/libboost_python-py27.so
/usr/lib/x86_64-linux-gnu/libboost_python-py27.so.1.63.0
/usr/lib/x86_64-linux-gnu/libboost_python-py35.a
/usr/lib/x86_64-linux-gnu/libboost_python-py35.so
/usr/lib/x86_64-linux-gnu/libboost_python-py35.so.1.63.0
/usr/lib/x86_64-linux-gnu/libboost_python.so
$ nm /usr/lib/x86_64-linux-gnu/libboost_python*.a | grep numpy | wc -l
0

Thanks in advance for any hints

boost::python::exec_file not fully closing file with Python 3.4.3

Using Python 3.4.3 and Boost 1.57.0 on 64-bit Windows 7 with Visual Studio 2013.

After boost::python::exec_file is called on a file, that file can't be modified by the program using fopen_s, I think because it is still open.

In the following example, when ModifyFile is called before executing the file, it succeeds. After the file has been executed, fopen_s returns 13 (Permission denied) and ModifyFile fails.

#include "stdafx.h"
#include "boost\python.hpp"
#include <iostream>

#define FILE_NAME         "myTest.py"

void ModifyFile( std::string newContent )
{
  FILE* myFile = nullptr;
  errno_t result = fopen_s( &myFile, FILE_NAME, "wb" );
  if ( result == 0 )
  {
    fwrite( newContent.c_str( ), sizeof( byte ), newContent.length( ), myFile );
    fclose( myFile );
    std::cout << "Success" << std::endl;
    return;
  }

  std::cout << "Failure" << std::endl;
}

int main( int argc, char** argv )
{
  Py_Initialize( );

  ModifyFile( "print(\"Hello\")" );

  boost::python::api::object mainNamespace = boost::python::import( "__main__" ).attr( "__dict__" );
  boost::python::exec_file( FILE_NAME, mainNamespace, mainNamespace );

  ModifyFile("print(\"Goodbye\")");

  Py_Finalize( );
  return 0;
}

I have tried a similar example using std::fstream to modify the file, and this doesn't seem to have the same problems.

Impossible to link with python (compiled with pymalloc)

Hi all,

Today I'm using a configuration like that:

using python : 3.4
             : /build/bin/python3.4
             : /build/include/python3.4m
             : /build/lib
             ;

The problem lies in the fact that the library is libpython3.4m.a and Boost.Build is completely ignoring the m suffix:

    "g++" -dynamiclib -Wl,-single_module -install_name "libboost_python3.dylib" -L"/Users/Xcloud/sandbox/8cube/build/lib" -o "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/libboost_python3.dylib" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/numeric.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/list.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/long.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/dict.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/tuple.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/str.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/slice.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/converter/from_python.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/converter/registry.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/converter/type_id.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/object/enum.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/object/class.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/object/function.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/object/inheritance.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/object/life_support.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/object/pickle_support.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/errors.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/module.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/converter/builtin_converters.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/converter/arg_to_python_base.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/object/iterator.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/object/stl_iterator.o" 
"bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/object_protocol.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/object_operators.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/wrapper.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/import.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/exec.o" "bin.v2/libs/python/build/darwin-4.2.1/release/threading-multi/object/function_doc_signature.o"  -lpython3.4    -headerpad_max_install_names -Wl,-dead_strip -no_dead_strip_inits_and_terms

ld: library not found for -lpython3.4

If I execute

/build/bin/python3.4 -c 'import distutils.sysconfig as s; print(s.get_config_var("LIBRARY"))'

it prints the correct library filename.

Also, I tried to specify the 3.4m version instead of 3.4 but it does nothing more than warn me about it:

 Warning: "using python" expects a two part (major, minor) version number; got 3.4m instead

Does anybody know how to link with Python when it has been compiled with pymalloc?
Is there a workaround not involving a symlink ?

boost::python v1.63 mangles py2 for py3 build with numpy

With the integration of NumPy into v1.63, it appears python2 libs are being mangled into python3 environments.
Run-time error:
Symbol not found: _PyClass_Type
Referenced from: /usr/local/opt/boost-python/lib/libboost_python.dylib

built on mac with brew:
brew install boost
brew install boost-python --with-python3

V1.63 boost::python::numpy ndarray from_data method 'own' parameter usage

I am not sure how to use the 'own' parameter.
The end result is that when the ndarray's 'flags' are printed on the Python side, I see:
OWNDATA: False
The returned array on the Python side is garbled (though it sometimes works!).
The data on the C++ side of the ndarray (that is being returned to Python) is always correct.
Is this related to the 'own' parameter, or some other scope issue?
Currently I have 'own' set to py::object().
