xtensor-stack / xtensor

C++ tensors with broadcasting and lazy computing

License: BSD 3-Clause "New" or "Revised" License

C++ 97.90% CMake 1.57% Jupyter Notebook 0.37% Python 0.15%
c-plus-plus-14 numpy multidimensional-arrays tensors

xtensor's People

Contributors

adriendelsalle, antoineprv, davidbrochart, davisvaughan, derthorsten, dhermes, egpbos, emmenlau, ewoudwempe, frozenwinters, ghisvail, gouarin, johanmabille, jvce92, khanley6, kolibri91, martinrenou, matwey, oneraynyday, potpath, randl, serge-sans-paille, sounddev, spectre-ns, stuarteberg, sylvaincorlay, tdegeus, ukoethe, wolfv, zhujun98

xtensor's Issues

transpose operator

It can be emulated by using reshape, but it's currently missing, isn't it?

Overloads of derived_cast

xexpression<T>::derived_cast should have different behaviors depending on whether *this is an lvalue or an rvalue.
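
A minimal sketch of what such ref-qualified overloads could look like (illustrative only, not necessarily xtensor's actual implementation):

    #include <utility>

    template <class D>
    class xexpression
    {
    public:

        // lvalue: return a reference to the derived expression
        D& derived_cast() & noexcept
        {
            return *static_cast<D*>(this);
        }

        const D& derived_cast() const & noexcept
        {
            return *static_cast<const D*>(this);
        }

        // rvalue: return by value so the derived object can be moved out
        D derived_cast() && noexcept
        {
            return std::move(*static_cast<D*>(this));
        }
    };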

xindex_function for arange, linspace, meshgrid ...

In order to implement the functions mentioned above, it might be good to have an xfunction-like xexpression that takes an object overloading operator()(Args... args) and operator[](xindex) and provides the appropriate value for each index.
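
For instance, a minimal arange-like functor could look as follows (a sketch; the names and the exact multi-index type are placeholders):

    #include <cstddef>
    #include <vector>

    struct arange_fn
    {
        double start;
        double step;

        // called with a plain index by the wrapping expression
        double operator()(std::size_t i) const
        {
            return start + step * static_cast<double>(i);
        }

        // called with a multi-index; only the last dimension matters for a 1-D generator
        double operator[](const std::vector<std::size_t>& index) const
        {
            return start + step * static_cast<double>(index.back());
        }
    };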

Compiler workarounds

This issue tracks the workarounds we have implemented for compiler bugs.

MSVC 2015: bug with std::enable_if and invalid types

std::enable_if evaluates its second argument even if the condition is false. This is the reason for the get_xfunction_type_t workaround, which adds a level of indirection so that the second type is always a valid type (original issue #80, fixed in PR #148).
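
The pattern is roughly the following (an illustrative sketch, not the exact xtensor code): the potentially invalid type is computed by a lazy metafunction whose template arguments are always valid, and its nested ::type is only taken after std::enable_if has been checked.

    #include <type_traits>

    // hypothetical result type that may be ill-formed for some arguments
    // (e.g. the xfunction type in xtensor)
    template <class F, class... E>
    struct some_result;

    // lazy metafunction: its template arguments are always valid types,
    // and some_result<F, E...> is only formed when ::type is requested
    template <class F, class... E>
    struct get_result_type
    {
        using type = some_result<F, E...>;
    };

    // instead of std::enable_if_t<Cond, some_result<F, E...>> (whose second
    // argument MSVC 2015 evaluates even when Cond is false), the second
    // argument is always the valid wrapper type:
    template <bool Cond, class F, class... E>
    using result_t = typename std::enable_if_t<Cond, get_result_type<F, E...>>::type;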

MSVC 2015: math functions not fully qualified

The fma function is ambiguous if not fully qualified. See #81.

GCC-4.9 and clang < 3.8: constexpr std::min and std::max

std::min and std::max are not constexpr in these compilers. In xio.hpp, we define an XTENSOR_MIN macro right before its use and undefine it right after.
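
The pattern looks roughly like this (illustrative; the actual macro lives in xio.hpp):

    // defined right before it is needed, so it never leaks into user code
    #define XTENSOR_MIN(x, y) ((y) < (x) ? (y) : (x))

    // ... constexpr-friendly uses of XTENSOR_MIN ...

    #undef XTENSOR_MIN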

clang < 3.8 matching initializer_list with static arrays.

Old versions of clang don't handle overload resolution with braced initializer lists correctly: braced initializer lists are not properly matched to static arrays. This prevents compile-time detection of the length of a braced initializer list.

A consequence is that we need to use stack-allocated shape types in these cases.

GCC-6: std::isnan and std::isinf.

As a workaround to the following GCC-6 bug, we do not use std::isnan or std::isinf directly in xmath.

C++11 requires that the <cmath> header declares bool std::isnan(double) and bool std::isinf(double).
C99 requires that the <math.h> header declares int ::isnan(double) and int ::isinf(double).
These two definitions would clash when importing both headers and using namespace std.

As of version 6, gcc detects whether the obsolete functions are present in the C <math.h> header and uses them if they are, avoiding the clash. However, this means that the function might return int instead
of bool as C++11 requires, which is a bug.
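
One way to insulate callers from the return-type difference is to wrap the calls and convert the result explicitly, along these lines (a sketch of the general technique, not necessarily the exact code in xmath):

    #include <cmath>

    namespace detail
    {
        // whichever overload is picked up (bool std::isnan or the C int ::isnan),
        // the wrapper always returns bool
        template <class T>
        inline bool isnan_wrapper(T t) noexcept
        {
            using std::isnan;
            return bool(isnan(t));
        }

        template <class T>
        inline bool isinf_wrapper(T t) noexcept
        {
            using std::isinf;
            return bool(isinf(t));
        }
    }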

GCC 6 -> Test failure in xview_on_xfunction

Compiler (Fedora 25):

Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/6.3.1/lto-wrapper
Target: x86_64-redhat-linux
Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,objc,obj-c++,fortran,ada,go,lto --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-linker-hash-style=gnu --enable-plugin --enable-initfini-array --disable-libgcj --with-isl --enable-libmpx --enable-gnu-indirect-function --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
Thread model: posix
gcc version 6.3.1 20161221 (Red Hat 6.3.1-1) (GCC) 

And the failure:

/home/wolfv/Programs/xorig/test/test_xview.cpp:182: Failure
Value of: iter_end
  Actual: 120-byte object <20-61 C4-F7 FD-7F 00-00 50-61 C4-F7 FD-7F 00-00 8C-8F 54-01 00-00 00-00 00-00 00-00 00-00 00-00 90-62 C4-F7 FD-7F 00-00 20-61 C4-F7 FD-7F 00-00 B8-8E 54-01 00-00 00-00 00-00 00-00 00-00 00-00 DA-FF FF-FF FF-FF FF-FF 10-95 54-01 00-00 00-00 18-95 54-01 00-00 00-00 18-95 54-01 00-00 00-00 A0-97 54-01 00-00 00-00 A8-97 54-01 00-00 00-00 A8-97 54-01 00-00 00-00>
Expected: iter
Which is: 120-byte object <20-61 C4-F7 FD-7F 00-00 50-61 C4-F7 FD-7F 00-00 8C-8F 54-01 00-00 00-00 00-00 00-00 00-00 00-00 90-62 C4-F7 FD-7F 00-00 B0-61 C4-F7 FD-7F 00-00 40-97 54-01 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 B0-94 54-01 00-00 00-00 B8-94 54-01 00-00 00-00 B8-94 54-01 00-00 00-00 40-8D 54-01 00-00 00-00 48-8D 54-01 00-00 00-00 48-8D 54-01 00-00 00-00>
[  FAILED  ] xview.xview_on_xfunction (1 ms)

Computational Data Flow Expression / DAG Builder

I want to be able to do something like the following:

xt::xexp<double> exp_res1 = xt::xvar("x") + xt::xvar("y") + xt::xconst(3);
xt::xexp<double> exp_res2 = exp_res1 /  xt::xconst(2);

xt::xarray<double> res1 = exp_res1.set("x", arr1).set("y", arr2).eval();
xt::xarray<double> res2 = exp_res1.set("y", arr3).eval();

xt::xarray<double> res3 = exp_res2.eval();

Here I am reusing the expression and evaluating it only when needed.

View in function with array as const reference gives errors

This seems like a strange bug. The following doesn't work for me:

#include "xtensor/xscalar.hpp"
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"

int main() {

	xt::xarray<double> arr1
	  {{1.0, 2.0, 3.0, 9},
	   {2.0, 5.0, 7.0, 9},
	   {2.0, 5.0, 7.0, 9}};

	auto func = [](const auto& arr1) {
		auto view = make_xview(arr1, 1, xt::all());
		for(const auto& el : view) {
			std::cout << el << " ";
		}
	};
	func(arr1);

}

While this one works perfectly fine:

#include "xtensor/xscalar.hpp"
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"

int main() {

	xt::xarray<double> arr1
	  {{1.0, 2.0, 3.0, 9},
	   {2.0, 5.0, 7.0, 9},
	   {2.0, 5.0, 7.0, 9}};

	auto func = [](const auto arr1) {
		auto view = make_xview(arr1, 1, xt::all());
		for(const auto& el : view) {
			std::cout << el << " ";
		}
	};
	func(arr1);

}

matmul and dot

I've been looking, but I didn't find implementations for those two.

Is there a plan to leverage e.g. BLAS or similar libraries for those operations?

- Wolf

Numpy style cheat sheet

Add a section in the HTML documentation similar to the NumPy cheat sheet, but as a NumPy-to-xtensor correspondence table.

xfunction optimization

Currently each time you instantiate an xfunction, its shape is computed and stored. This drastically hurts performance when manipulating complicated expressions involving xarray instances. For instance, consider the following code:

xt::xarray<double> a, b, c;
// init a, b, and c ....
xt::xarray<double> res = 2 * a + (b / c);

Here three xfunction instances are built, and thus three shape containers are dynamically allocated while only one is required (the global shape of the expression to be assigned).

A way to fix this is to make shape computation lazy: computing the shape of the root node of the expression should not require computing the shapes of the other nodes, but should instead rely on broadcast_shape applied to each node.
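
Conceptually, the assignment would allocate a single shape container and let every node merge its own shape into it, along these lines (an illustrative sketch of the broadcasting merge, not xtensor's actual code):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    using shape_type = std::vector<std::size_t>;

    // merge `s` into `result`, aligning trailing dimensions as NumPy does;
    // assumes the shapes are broadcast-compatible
    inline void broadcast_shape(const shape_type& s, shape_type& result)
    {
        if (s.size() > result.size())
        {
            result.insert(result.begin(), s.size() - result.size(), std::size_t(1));
        }
        auto rit = result.rbegin();
        for (auto it = s.rbegin(); it != s.rend(); ++it, ++rit)
        {
            *rit = std::max(*rit, *it);
        }
    }

    // the root of `2 * a + (b / c)` would call broadcast_shape once per operand
    // into a single `result`, instead of each xfunction node storing its own shape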

xrange_adaptor

I think it would be cool to allow numpy-style ranges in views, with "colons" that only learn their length later, from the shape of the underlying expression.

I.e. NumPy style (or Python's): a[:3], a[1:], a[::-1], ...

E.g.

struct xnone {};

template <class A, class B, class C>
auto range(A min, B max, C step);

could return a range_adaptor object, which in turn returns a valid range when given a shape.

E.g. if A is an xnone tag and the step is positive, then min -> 0; if the step is negative, it would be the size at that dimension.
If B is an xnone tag, then max is the size at that dimension; if the step is negative, -1.

I am not sure about the naming though. xnone is not so nice.
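
A rough sketch of the adaptor idea, with missing bounds resolved once the size of the underlying dimension is known (names and defaulting rules are illustrative only):

    #include <cstddef>

    struct xresolved_range
    {
        std::ptrdiff_t min;
        std::ptrdiff_t max;
        std::ptrdiff_t step;
    };

    struct xrange_adaptor
    {
        bool has_min = false;
        bool has_max = false;
        std::ptrdiff_t min = 0;
        std::ptrdiff_t max = 0;
        std::ptrdiff_t step = 1;

        // resolve the missing bounds against the size of the sliced dimension
        xresolved_range get(std::size_t size) const
        {
            auto n = static_cast<std::ptrdiff_t>(size);
            std::ptrdiff_t lo = has_min ? min : (step > 0 ? 0 : n - 1);
            std::ptrdiff_t hi = has_max ? max : (step > 0 ? n : -1);
            return {lo, hi, step};
        }
    };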

xio not working with "columnar" xview

This is not compiling:

#include <iostream>
#include "xtensor/xscalar.hpp"
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"
#include "xtensor/xslice.hpp"

int main() {
	xt::xarray<double> arr1
	  {{1.0, 2.0, 3.0, 9},
	   {2.0, 5.0, 7.0, 9},
	   {2.0, 5.0, 7.0, 9}};

	std::cout <<  xt::make_xview(arr1, xt::all(), 1) << std::endl;
}

Semantic

The semantics have to be fixed for xindexview and xview, as has been done for xbroadcast and xfunction.

Allow xscalar to take references on scalar

xscalar should be able to take a reference to the scalar it wraps instead of a copy. That would improve performance when copying the scalar type is expensive.

However, this behavior should be explicitly requested (via an xref function, for instance); the default behavior should remain to take a copy.

Incorrect iteration over xviews

As of master:

    xt::xarray<int> arr {{1, 2, 3}, {4, 5, 6}};
    auto arr_view = xt::make_xview(arr, 1);
    std::cout << arr_view << std::endl;
    // -> {4, 5, 6}, OK
    std::cout << arr_view.dimension() << std::endl;
    // -> 1, OK
    for (auto x: arr_view) {std::cout << x << std::endl;}
    // -> 4 4 4, ???

eval method

Armadillo has an eval method, which forces evaluation of expressions.

Maybe this would be useful for xtensor, too? E.g. given some expression, it would return either an xtensor or an xarray with the evaluation results.

If an xarray or xtensor is given, it just returns a closure to that.
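
A possible shape for such a helper (a sketch only; neither the name nor the overloads are existing xtensor API here): containers pass through untouched, any other expression is evaluated into an xarray.

    #include <type_traits>
    #include <utility>
    #include "xtensor/xarray.hpp"

    // generic case: evaluate the expression into a concrete container
    template <class E>
    auto eval(E&& e) -> xt::xarray<typename std::decay_t<E>::value_type>
    {
        return std::forward<E>(e);
    }

    // already a container: forward it without copying
    template <class T>
    xt::xarray<T>& eval(xt::xarray<T>& a)
    {
        return a;
    }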

Iteration over xtensor fails

As of xtensor 0.2.1,

    xt::xarray<int> arr {{1, 2, 3}, {4, 5, 6}};
    auto arr_view = xt::make_xview(arr, 0);
    std::cout << std::accumulate(arr_view.begin(), arr_view.end(), 0) << std::endl;

works (as advertised) but

    xt::xtensor<int, 2> tens {{3, 3}};
    auto tens_view = xt::make_xview(tens, 0);
    std::cout << std::accumulate(tens_view.begin(), tens_view.end(), 0) << std::endl;

fails to compile (gcc 6.2.1 from Arch Linux).

Iterator api renaming

Following the discussion we had on Gitter about performance, I think we should rename storage_begin and storage_end to begin and end. The current begin and end would become xbegin and xend without arguments.

There are mainly two reasons for that:

  • the range-based for loop is equivalent to a loop over the begin/end iterator pair. If the storage_begin/storage_end pair is faster than the begin/end pair, we are effectively preventing that syntax from being fast.

  • iterating over the storage container (i.e. regardless of the shape of the expression) is generally done to run STL-like algorithms on the data. Such algorithms are generally invoked with the begin/end iterator pair, so keeping the current interface would be a performance hit for generic code.

Since this breaks backward compatibility, I think we should do it as soon as possible.

xreducer

Goal: Provide an xexpression corresponding to the reduction of dimension based on a reducer

If m has shape (4, 3, 2, 5), sum(m, {1, 3}) sums over dimensions 1 and 3, lazily giving an expression of shape (4, 2).

Similarly to xfunction and vectorize, this should come with a helper generator function which creates an xreducer for a given function that takes a 1-D array.
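
For reference, the shape arithmetic of the example above, as a plain sketch (an illustrative helper, not part of the proposed API):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // remove the reduced axes from the shape; reduced_shape({4, 3, 2, 5}, {1, 3}) == {4, 2}
    inline std::vector<std::size_t> reduced_shape(std::vector<std::size_t> shape,
                                                  std::vector<std::size_t> axes)
    {
        std::sort(axes.rbegin(), axes.rend());  // erase from the back so indices stay valid
        for (std::size_t ax : axes)
        {
            shape.erase(shape.begin() + static_cast<std::ptrdiff_t>(ax));
        }
        return shape;
    }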

Default types

I think it would be nice to set a default type for zeros, ones, linspace, etc. Following NumPy, I think double is the right choice.

What do you think?

Incomplete indexing appends zeros

I believe that "incomplete indexing" (e.g. indexing a 3-d array/tensor with 2 indices) adds as many zeros as needed to complete the multi-index. At least in the case of tensors (where the dimensionality is known at compile time), perhaps it would make more sense to return a view instead? This would mimic NumPy's behavior.

xio with view + newaxis never compiles

This snippet never finishes compiling for me (no error, just takes forever):

	xt::xarray<double> d1 = xt::random::rand<double>({5});
	auto d12 = view(d1, newaxis(), all());
	std::cout << d12 << std::endl;

However, this compiles fine:

	xt::xarray<double> d1 = xt::random::rand<double>({5});
	auto d12 = view(d1, newaxis(), all());
	xt::xarray<double> a = d12;
	std::cout << a << std::endl;

Pretty printing

Add pretty printing, like NumPy's, and make it the default way of printing xexpressions.

Cannot take view of const xtensor to new xtensor

As of master,

xt::xtensor<double, 3> const arr {{1, 2, 3}};
xt::xtensor<double, 2> arr2 {{2, 3}};
arr2 = xt::make_xview(arr, 0);

fails to compile even though the constness of arr is not violated (a copy is being made).

Static tensor class

Goal: in addition to the dynamically-dimensioned xarray, provide an xexpression of fixed dimension.

  • strides and shape attributes will then be std::arrays of the specified length, and will live on the stack (see the sketch below).
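
A minimal sketch of this fixed-dimension layout (illustrative only):

    #include <array>
    #include <cstddef>
    #include <vector>

    template <class T, std::size_t N>
    struct fixed_dimension_tensor
    {
        std::array<std::size_t, N> shape;    // length known at compile time, lives on the stack
        std::array<std::size_t, N> strides;  // idem
        std::vector<T> data;                 // the element buffer itself stays dynamic
    };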

cannot modify filter view?

This does not compile:

xt::xarray<double> a = {{1, 5, 3}, {4, 5, 6}};
auto v = xt::filter(a, a >= 5);
v = 100;

but this does:

xt::xarray<double> a = {{1, 5, 3}, {4, 5, 6}};
auto v = xt::view(a, xt::all());
v = 100;

Documentation update

Documentation should be refactored to integrate all recent features (generators and builders, comparison operators, newaxis, the random module).

Dynamic operator[](index)

Goal: in addition to the variadic operator(), provide an operator[] taking a single multi-index argument.

As for reshape, we should also enable passing a braced initializer list such as {4, 5, 6}.
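
Desired usage (illustrative; this shows the proposed interface, not an existing one):

    #include <cstddef>
    #include <vector>
    #include "xtensor/xarray.hpp"

    int main()
    {
        xt::xarray<double> a = {{1., 2., 3.}, {4., 5., 6.}};

        std::vector<std::size_t> idx = {1, 2};
        double x = a[idx];      // same element as a(1, 2)
        double y = a[{1, 2}];   // braced initializer list, as for reshape

        return x == y ? 0 : 1;
    }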

Testing of expression access with more or less arguments

The desired behavior when accessing elements of an xexpression with operator(), element() and operator[] is:

  • when the number of arguments is less than the dimensionality, behave as if zeros were appended to match the dimension.
  • when the number of arguments is greater than the dimensionality, discard the first arguments until their number matches the dimensionality.

Iteration over trivial xview does not terminate

As of master:

xt::xtensor<double, 1> arr1 {{2}};
std::fill(arr1.begin(), arr1.end(), 6);
auto view {xt::make_xview(arr1, 0)};
std::cout << view << std::endl;
// -> 6, OK
for (auto x: view) { std::cout << x << std::endl; }
// -> infinite stream of 6's

xiterator constructor missing?

When trying auto itpair = std::minmax_element(arr.begin(), arr.end()); or similar functions, the code doesn't compile, as a temporary xiterator cannot be constructed from an empty initializer list, which minmax_element apparently attempts.

-Wreorder on xexpression

As of master,

xt::xtensor<double, 2> arr {{2, 3}};
xt::xtensor<double, 2> arr2 {{2, 3}};
arr2 = arr + 1;

triggers a -Wreorder warning with gcc 6.2.1.

Make xscalar a non-const expression

Non-const functions should be added to xscalar so it can be used as a non-const xexpression. This is required by the xref feature, which allows xscalar to take a reference to the wrapped scalar instead of a copy.

newaxis

Goal: Provide a special type of slice for xview to insert new dimensions of length one, like numpy.newaxis.

Broadcasting assign operator for simple assignments

It would be nice if

xarray<double> e = xt::random::rand<double>({3, 3});
auto v = make_xindexview(e, {{1, 1}, {1, 2}, {2, 2}});
v = 3;

worked.
With v = xt::broadcast(3, v.shape()); it currently works, so implementing the general case should be easy.

Homogenize naming for meta functions.

Following #101, I propose we homogenize the naming of the meta-functions used in xtensor:

common_value_type, common_difference_type, xclosure, get_xfunction_type...

and provide STL-style _t variants for the versions returning the nested typename.

Dynamic xview's

Currently xview is implemented using a tuple to hold the slices.
As far as I understand, this requires that all slices are known at compile time.

But when creating a view from Python, for example, it's not possible to know the slices at compile time. A dynamic view would also make writing the xreducer functionality easier (as e.dimension() is not constexpr and cannot be used as a template parameter, etc.), or at least that's how I tried doing it.

So I am wondering whether it would be a good idea to either create a separate, dynamic xview class, or exchange the tuple in xview for a std::vector holding a std::variant<xall, xrange, size_t> or similar.
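
A very rough sketch of the dynamic-slice container idea (std::variant is C++17 and all names are placeholders, so this is purely illustrative):

    #include <cstddef>
    #include <variant>
    #include <vector>

    struct xall_tag
    {
    };

    struct xrange_spec
    {
        std::size_t start;
        std::size_t stop;
    };

    // one entry per dimension, chosen at runtime rather than encoded in a tuple type
    using dynamic_slice = std::variant<xall_tag, xrange_spec, std::size_t>;
    using dynamic_slice_vector = std::vector<dynamic_slice>;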
