xtensor-stack / xtensor
C++ tensors with broadcasting and lazy computing
License: BSD 3-Clause "New" or "Revised" License
it can be emulated by using reshape, but it's currently missing, isn't it?
I played a bit with newaxis today and found that compilation currently fails when printing the resulting view.
I've added a test case in #114.
xexpression<T>::derived_cast should have different behavior depending on whether this is an lvalue or an rvalue.
In order to implement the mentioned functions, it might be good to have an xfunction-like xexpression that takes an object overloading operator()(Args... args) and operator[](xindex), and provides the appropriate values for each index.
This issue is meant for tracking the workarounds we have implemented around compiler bugs.
MSVC 2015: bug with std::enable_if and invalid types
std::enable_if evaluates its second argument even if the condition is false. This is the reason for the get_xfunction_type_t workaround, which adds a level of indirection so that the second type is always a valid type (original issue #80, fixed in PR #148).
MSVC 2015: math functions not fully qualified
A call to fma is ambiguous if it is not fully qualified. See #81.
GCC-4.9 and clang < 3.8: constexpr std::min and std::max
std::min and std::max are not constexpr in these compilers. In xio.hpp, we define an XTENSOR_MIN macro before its usage and undefine it right after.
clang < 3.8: matching initializer_list with static arrays
Old versions of clang don't handle overload resolution with braced initializer lists correctly: braced initializer lists are not properly matched to static arrays. This prevents compile-time detection of the length of a braced initializer list.
As a consequence, we need to use stack-allocated shape types in these cases.
GCC-6: std::isnan and std::isinf
We are not directly using std::isnan or std::isinf in xmath, as a workaround to the following bug in GCC-6.
C++11 requires that the <cmath> header declare bool std::isnan(double) and bool std::isinf(double).
C99 requires that the <math.h> header declare int ::isnan(double) and int ::isinf(double).
These two declarations would clash when importing both headers and using namespace std.
As of version 6, GCC detects whether the obsolete functions are present in the C <math.h> header and uses them if they are, avoiding the clash. However, this means that the functions might return int instead of the bool required by C++11, which is a bug.
It would make sense for diag to return an assignable view.
Compiler (Fedora 25):
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/6.3.1/lto-wrapper
Target: x86_64-redhat-linux
Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,objc,obj-c++,fortran,ada,go,lto --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-linker-hash-style=gnu --enable-plugin --enable-initfini-array --disable-libgcj --with-isl --enable-libmpx --enable-gnu-indirect-function --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
Thread model: posix
gcc version 6.3.1 20161221 (Red Hat 6.3.1-1) (GCC)
And the failure:
/home/wolfv/Programs/xorig/test/test_xview.cpp:182: Failure
Value of: iter_end
Actual: 120-byte object <20-61 C4-F7 FD-7F 00-00 50-61 C4-F7 FD-7F 00-00 8C-8F 54-01 00-00 00-00 00-00 00-00 00-00 00-00 90-62 C4-F7 FD-7F 00-00 20-61 C4-F7 FD-7F 00-00 B8-8E 54-01 00-00 00-00 00-00 00-00 00-00 00-00 DA-FF FF-FF FF-FF FF-FF 10-95 54-01 00-00 00-00 18-95 54-01 00-00 00-00 18-95 54-01 00-00 00-00 A0-97 54-01 00-00 00-00 A8-97 54-01 00-00 00-00 A8-97 54-01 00-00 00-00>
Expected: iter
Which is: 120-byte object <20-61 C4-F7 FD-7F 00-00 50-61 C4-F7 FD-7F 00-00 8C-8F 54-01 00-00 00-00 00-00 00-00 00-00 00-00 90-62 C4-F7 FD-7F 00-00 B0-61 C4-F7 FD-7F 00-00 40-97 54-01 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 B0-94 54-01 00-00 00-00 B8-94 54-01 00-00 00-00 B8-94 54-01 00-00 00-00 40-8D 54-01 00-00 00-00 48-8D 54-01 00-00 00-00 48-8D 54-01 00-00 00-00>
[ FAILED ] xview.xview_on_xfunction (1 ms)
I want to be able to do something like the following
xt::xexp<double> exp_res1 = xt::xvar("x") + xt::xvar("y") + xt::xconst(3);
xt::xexp<double> exp_res2 = exp_res1 / xt::xconst(2);
xt::xarray<double> res1 = exp_res1.set("x", arr1).set("y", arr2).eval();
xt::xarray<double> res2 = exp_res1.set("y", arr3).eval();
xt::xarray<double> res3 = exp_res2.eval();
Here I am reusing the expression and also doing the evaluation when needed.
operator== and operator!= are missing for expressions. Contrary to the inequality comparison operators, they should have the usual C++ semantics.
This seems like a strange bug. The following doesn't work for me:
#include <iostream>
#include "xtensor/xscalar.hpp"
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"
int main() {
xt::xarray<double> arr1
{{1.0, 2.0, 3.0, 9},
{2.0, 5.0, 7.0, 9},
{2.0, 5.0, 7.0, 9}};
auto func = [](const auto& arr1) {
auto view = make_xview(arr1, 1, xt::all());
for(const auto& el : view) {
std::cout << el << " ";
}
};
func(arr1);
}
While this one works perfectly fine:
#include <iostream>
#include "xtensor/xscalar.hpp"
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"
int main() {
xt::xarray<double> arr1
{{1.0, 2.0, 3.0, 9},
{2.0, 5.0, 7.0, 9},
{2.0, 5.0, 7.0, 9}};
auto func = [](const auto arr1) {
auto view = make_xview(arr1, 1, xt::all());
for(const auto& el : view) {
std::cout << el << " ";
}
};
func(arr1);
}
In dealing with vectorised time series, it would be great to be able to efficiently represent such structures with push and/or pull evaluation.
I've been looking, but I didn't find implementations for those two.
Is there a plan to leverage e.g. BLAS or similar libraries for those operations?
- Wolf
Add a section to the HTML documentation similar to the NumPy cheat sheet, but as a NumPy-to-xtensor correspondence table.
Currently, each time you instantiate an xfunction, its shape is computed and stored. This drastically hurts performance when manipulating complicated expressions involving xarray instances. For instance, consider the following code:
xt::xarray<double> a, b, c;
// init a, b, and c ....
xt::xarray<double> res = 2 * a + (b / c);
Here three xfunction instances are built, and thus three shape containers are dynamically allocated, while only one is required (the global shape of the expression to be assigned).
A way to fix this is to make the computation of the shape lazy: computing the shape of the root node of the expression should not require computing the shapes of the other nodes, but instead rely on broadcast_shape applied to each node.
I think it would be cool to allow for numpy-style ranges in views, with "colons" that find out about their length later from the shape of the underlying expression.
I.e. NumPy style (or Python's): a[:3], a[1:], a[::-1], ...
E.g.
struct xnone {};
template <class A, class B, class C>
auto range(A min, B max, C step);
could return a range_adaptor object, which in turn returns a valid range once initialized with a concrete shape.
E.g. if A is the xnone tag and step is positive, min resolves to 0; if step is negative, it resolves to the end of that dimension. If B is the xnone tag, max is the size at that dimension; if step is negative, -1.
I am not sure about the naming though; xnone is not so nice.
This does not compile:
#include <iostream>
#include "xtensor/xscalar.hpp"
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"
#include "xtensor/xslice.hpp"
int main() {
xt::xarray<double> arr1
{{1.0, 2.0, 3.0, 9},
{2.0, 5.0, 7.0, 9},
{2.0, 5.0, 7.0, 9}};
std::cout << xt::make_xview(arr1, xt::all(), 1) << std::endl;
}
Semantics have to be fixed for xindexview and xview, as has been done for xbroadcast and xfunction.
xscalar should be able to take a reference to the scalar it wraps instead of a copy. That would improve performance when copying the scalar type is expensive.
However, this behavior should be explicitly requested (via an xref function, for instance); the default behavior should remain taking a copy.
As of master:
xt::xarray<int> arr {{1, 2, 3}, {4, 5, 6}};
auto arr_view = xt::make_xview(arr, 1);
std::cout << arr_view << std::endl;
// -> {4, 5, 6}, OK
std::cout << arr_view.dimension() << std::endl;
// -> 1, OK
for (auto x : arr_view) { std::cout << x << std::endl; }
// -> 4 4 4, ???
As of master, a construct as simple as
xt::xtensor<double, 1> tens {{3}};
triggers pages and pages of g++ warnings (from -Wunused-parameter / -Wextra, gcc 6.2.1).
Armadillo has an eval method, which forces evaluation of expressions.
Maybe this would be useful for xtensor, too? E.g. given some expression, it would return either an xtensor or an xarray with the evaluation results.
If an xarray or xtensor is given, it just returns a closure to that.
As of xtensor 0.2.1,
xt::xarray<int> arr {{1, 2, 3}, {4, 5, 6}};
auto arr_view = xt::make_xview(arr, 0);
std::cout << std::accumulate(arr_view.begin(), arr_view.end(), 0) << std::endl;
works (as advertised) but
xt::xtensor<int, 2> tens {{3, 3}};
auto tens_view = xt::make_xview(tens, 0);
std::cout << std::accumulate(tens_view.begin(), tens_view.end(), 0) << std::endl;
fails to compile (gcc 6.2.1 from Arch Linux).
Following the discussion we had on gitter about performance, I think we should rename storage_begin and storage_end to begin and end. The current begin and end would become xbegin and xend without argument.
There are mainly two reasons for that:
- the range-based for loop is equivalent to a loop using the begin/end iterator pair. If the storage_begin/storage_end pair is faster than the begin/end pair, keeping the current naming prevents that syntax from getting the best performance.
- iterating on the storage container (i.e. regardless of the shape of the expression) is generally used for performing STL-like algorithms on the data. In that case, the algorithms are generally invoked with the begin/end iterator pair, so keeping the current interface would be a performance hit for generic code.
Since this breaks backward compatibility, I think we should do it as soon as possible.
Goal: provide an xexpression corresponding to the reduction of dimensions based on a reducer.
If m has shape (4, 3, 2, 5), sum(m, {1, 3}) sums over dimensions 1 and 3, lazily giving an expression of shape (4, 2).
Similarly to xfunction and vectorize, this should come with a helper generator function which creates an xreducer for a given function that takes a 1-D array.
I think it could be nice to set a default type for zeros, ones, linspace, ... Following NumPy, I think double is the right choice.
What do you think?
I believe that "incomplete indexings" (e.g. indexing a 3d array/tensor with 2 indices) add as many zeros as needed to complete the multi-index. At least in the case of tensors (where the dimensionality is known at compile time), perhaps it may make more sense to return a view in such a case? This would mimic numpy's behavior.
This snippet never finishes compiling for me (no error, just takes forever):
xt::xarray<double> d1 = xt::random::rand<double>({5});
auto d12 = view(d1, newaxis(), all());
std::cout << d12 << std::endl;
However, this compiles fine:
xt::xarray<double> d1 = xt::random::rand<double>({5});
auto d12 = view(d1, newaxis(), all());
xt::xarray<double> a = d12;
std::cout << a << std::endl;
TensorFlow has its framework XLA (https://www.tensorflow.org/versions/master/resources/xla_prerelease.html) to compile the pipeline. Perhaps you can have an accelerator.
Add pretty printing, like NumPy, and make it the default way of outputting xexpressions.
As of master,
xt::xtensor<double, 3> const arr {{1, 2, 3}};
xt::xtensor<double, 2> arr2 {{2, 3}};
arr2 = xt::make_xview(arr, 0);
fails to compile even though the constness of arr is not violated (as a copy is being made).
Goal: in addition to the dynamically-dimensioned xarray, provide an xexpression of fixed dimension. The strides and shape attributes will then be std::arrays of specified length, allocated on the stack.

This does not compile:
xt::xarray<double> a = {{1, 5, 3}, {4, 5, 6}};
auto v = xt::filter(a, a >= 5);
v = 100;
but this does
xt::xarray<double> a = {{1, 5, 3}, {4, 5, 6}};
auto v = xt::view(a, xt::all());
v = 100;
So that the shape of an xview(xtensor) remains stack-allocated.
Documentation should be refactored to integrate all recent features (generators and builders, comparison operators, newaxis, random module)
There was a bug in forward shape where forwarding a shape of type std::array had multiple template matches.
I've added a bugfix here: dc096c6
Goal: in addition to the variadic operator(), provide an operator[] taking a single multi-index argument.
Like for reshape, we should also enable passing a braced initializer list {4, 5, 6}.
The desired behavior when accessing elements of an xexpression with operator(), element(), and operator[] is:
As of master:
xt::xtensor<double, 1> arr1 {{2}};
std::fill(arr1.begin(), arr1.end(), 6);
auto view {xt::make_xview(arr1, 0)};
std::cout << view << std::endl;
// -> 6, OK
for (auto x: view) { std::cout << x << std::endl; }
// -> infinite stream of 6's
When trying auto itpair = std::minmax_element(arr.begin(), arr.end()); or similar functions, the code doesn't compile, as a temporary xiterator cannot be instantiated from an empty initializer list, which minmax_element apparently tries to do.
As of master,
xt::xtensor<double, 2> arr {{2, 3}};
xt::xtensor<double, 2> arr2 {{2, 3}};
arr2 = arr + 1;
triggers a -Wreorder warning with gcc 6.2.1.
Non-const functions should be added to xscalar so it can be used as a non-const xexpression. This is required by the xref feature, allowing xscalar to take a reference to the wrapped scalar instead of a copy.
Goal: provide a special type of slice for xview to insert new dimensions of length one, like numpy.newaxis.
This would simplify things in the case of views (xview and xbroadcast).
Two possibilities: an operator[] that takes an std::pair of iterators, or at / loc / element?

It would be nice if
xarray<double> e = xt::random::rand<double>({3, 3});
auto v = make_xindexview(e, {{1, 1}, {1, 2}, {2, 2}});
v = 3;
would be working.
With v = xt::broadcast(3, v.shape()); it currently works, and it should be easy to implement for the general case.
Following #101, I propose we homogenize the naming of the meta-functions used in xtensor (common_value_type, common_difference_type, xclosure, get_xfunction_type, ...) and provide STL-style _t variants for the versions returning the typename.
Currently xview is implemented using a tuple as the holder for the slices. As far as I understand, this necessitates that all slices are known at compile time.
But, for example, when creating a view from Python, it's not possible to know the slices at compile time. It would also make writing the xreducer functionality easier (as e.dimension() is not a constexpr and cannot be used as a template parameter, etc.), or that's at least how I tried doing it.
So I am wondering whether it would be a good idea to either create a separate, dynamic xview class, or exchange the tuple in xview for a std::vector holding a std::variant<xall, xrange, size_t> or similar.