Comments (11)
@certik That's it. This limitation on xtensor::reshape is because it operates in place, like in NumPy. I think we don't want to mix "copy" and "in-place" behaviors in the same function (at least not implicitly), so an additional free function (although the name copy_and_reshape might not be ideal) looks like a good option to me. It would also work for any kind of expression (even those which are not evaluated).
EDIT: maybe eval_reshaped could be a good name?
from xtensor.
@JohanMabille thanks!
Just so that I understand: the code in #2760 (comment) is currently the only way to take a 16x16 xtensor and reshape it into a new 16x8x2 xtensor?
The current facility is xtensor::reshape, but that must keep the number of dimensions the same (and the number of elements, of course), so it can change 16x16 to 8x32, but it can't change it to 16x8x2, correct?
cc: @certik
xtensor provides an API similar to NumPy, where reshape changes the shape of the passed array. xtensor is missing a free function reshape, but if we add it, it should forward the call to the reshape method of its argument.
For creating a new array with a given shape, you can use the empty free function, which has fewer restrictions than the reshape function in Fortran.
Actually I am trying to port https://github.com/lfortran/lfortran/blob/main/integration_tests/arrays_reshape_14.f90 to C++ using the xt::xtensor APIs. For transforming the following line to C++, I want the following to work.

// xt::xtensor_fixed<double, xt::xshape<256>> b
// xt::xtensor<double, 2>& a
// xt::xtensor_fixed<int, xt::xshape<1>> newshape
b = xt::reshape(a, newshape); // would be great to have xt::reshape which can work for this case.
I am using xtensor and xtensor_fixed because they map well to Fortran types. I might be wrong, but AFAIK xarray doesn't store any information related to the rank of the array, while Fortran specifies the rank of the array in its type itself (analogous to xtensor).
I am open to ideas and different approaches for this problem.
xtensor is missing a free function reshape, but if we add it, it should forward the call to the reshape method of its argument.

I see. The return type of xt::reshape would depend on the size of its second argument (a fixed-size 1-D array).
I might be wrong but AFAIK, xarray doesn't store any information related to the rank of the array but Fortran specifies rank of the array in its type itself (analogous to xtensor).

xarray provides the rank information at runtime only, while xtensor and xtensor_fixed can provide it at build time.
I am open to ideas and different approaches for this problem.
I think the implementation could be something like:
template <class E, class S>
auto copy_and_reshape(E&& e, const S& shape)
{
    using value_type = typename std::decay_t<E>::value_type;
    // xt::empty allocates an uninitialized array with the requested shape.
    auto res = xt::empty<value_type>(shape);
    std::copy(e.cbegin(), e.cend(), res.begin());
    return res;
}
We can probably find a better name, but it should definitely not be reshape, to avoid confusion with the existing reshape feature.
xtensor is missing a free function reshape, but if we add it, it should forward the call to the reshape method of its argument.

I see. The return type of xt::reshape would depend on the size of its second argument (a fixed-size 1-D array).

Not sure I get you; what I meant is that xt::reshape(tensor, shape) should call tensor.reshape(shape), which reshapes tensor in place (therefore the return type would be void). Sorry if my first message was not clear.
Not sure I get you; what I meant is that xt::reshape(tensor, shape) should call tensor.reshape(shape), which reshapes tensor in place (therefore the return type would be void). Sorry if my first message was not clear.
For example, if we want to reshape xt::xtensor<double, 2> a (say of shape [16, 16]) to xt::xtensor<double, 1> b (say of shape [256]), then xt::reshape(a, {256}) should return an object of type xt::xtensor<double, 1>. However, if we want to reshape a to xt::xtensor<double, 3> c (say of shape [16, 8, 2]), then xt::reshape(a, {16, 8, 2}) should return xt::xtensor<double, 3>. Basically, the return type depends on the size of the shape argument: if shape is of size n, then the returned xt::xtensor would be of rank n. Even if we are able to define the signature of xt::reshape, implementing it might be tricky. In Fortran, reshape is an intrinsic, and it's up to the Fortran compiler how it implements this feature, depending on the way it handles arrays internally.
I think the implementation could be something like:

I see. Is this implementation doing the same thing as the reshape feature of Fortran, as I described in my comment above? It seems like it's creating an object res with the given shape, and then the contents of the input e are copied to it? If so, then it makes sense to me.
Basically the return type depends on the size of shape argument. If shape is of size n then returned xt::xtensor would be of rank n

Ha, I see; indeed xtensor does not allow that, since the reshape is done in place, and since the rank of an xtensor object is known at build time, reshape must preserve the rank.

Seems like it's creating an object res with shape and then contents of input e are copied to it? If so then it is making sense to me.

Yes, that's exactly what it is doing. Also notice that the initial tensor values are not initialized before the copy; this avoids iterating over the whole buffer to set 0 everywhere.
Great to hear. It would be great to have this functionality in the xtensor library.

EDIT: maybe eval_reshaped could be a good name?

Sounds good to me.