
Comments (11)

JohanMabille commented on July 17, 2024

@certik That's it. This limitation on xtensor::reshape is because it operates in place, like in NumPy. I think we don't want to mix "copy" and "in-place" behaviors in the same function (at least implicitly), so an additional free function (although the name copy_and_reshape might not be ideal) looks like a good option to me. Also, it would work for any kind of expression (even those that are not evaluated).

EDIT: maybe eval_reshaped could be a good name?
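To illustrate the proposal, a hypothetical usage sketch (eval_reshaped is only a candidate name, not an existing xtensor function):

xt::xtensor<double, 2> a = xt::ones<double>({16, 16});
// would return a new, evaluated container with the requested shape,
// leaving a untouched
auto b = xt::eval_reshaped(a, {16, 8, 2});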

certik commented on July 17, 2024

@JohanMabille thanks!

Just so that I understand: the code in #2760 (comment) is currently the only way to take a 16x16 xtensor and reshape it into a new 16x8x2 xtensor?

The current facility is the xtensor::reshape, but that must keep the number of dimensions the same (and the number of elements of course), so it can change 16x16 to 8x32, but it can't change it to 16x8x2, correct?
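For reference, a minimal sketch of the in-place behaviour being discussed, assuming a rank-2 xt::xtensor:

xt::xtensor<double, 2> a = xt::ones<double>({16, 16});
a.reshape({8, 32});        // fine: same rank (2) and same number of elements
// a.reshape({16, 8, 2}); // not possible here: the rank of xt::xtensor<double, 2> is fixed at 2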

czgdp1807 commented on July 17, 2024

cc: @certik

JohanMabille commented on July 17, 2024

xtensor provides an API similar to NumPy's, where reshape changes the shape of the passed array in place. xtensor is missing a free function reshape, but if we add it, it should forward the call to the reshape method of its argument.

For creating a new array of a given shape, you can use the empty free function, which has fewer restrictions than the reshape function in Fortran.
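A minimal sketch of what such a forwarding free function could look like (hypothetical, not currently in xtensor), together with a use of the empty builder mentioned above:

#include <array>
#include <utility>
#include <xtensor/xbuilder.hpp>

// hypothetical free function: forwards to the member reshape, i.e. reshapes in place
template <class E, class S>
void reshape(E& e, S&& shape)
{
    e.reshape(std::forward<S>(shape));
}

// a new, uninitialized 16x8x2 container built with xt::empty
auto a = xt::empty<double>(std::array<std::size_t, 3>{16, 8, 2});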

czgdp1807 commented on July 17, 2024

Actually, I am trying to port https://github.com/lfortran/lfortran/blob/main/integration_tests/arrays_reshape_14.f90 to C++ using the xt::xtensor APIs. To translate the following line to C++,

https://github.com/lfortran/lfortran/blob/8e4d530121d8ad7d218854fbe37db221b8395500/integration_tests/arrays_reshape_14.f90#L23

I want the following to work.

// xt::xtensor_fixed<double, xt::xshape<256>> b
// xt::xtensor<double, 2>& a
// xt::xtensor_fixed<int, xt::xshape<1>> newshape
b = xt::reshape(a, newshape); // would be great to have xt::reshape which can work for this case.

I am using xtensor and xtensor_fixed because they map well to Fortran types. I might be wrong but AFAIK, xarray doesn't store any information related to the rank of the array but Fortran specifies rank of the array in its type itself (analogous to xtensor).

I am open to ideas and different approaches for this problem.

czgdp1807 commented on July 17, 2024

xtensor is missing a free function reshape, but if we add it, it should forward the call to the reshape method of its argument.

I see. The return type of xt::reshape would depend on the size of its second argument (a fixed-size 1-D array).

JohanMabille commented on July 17, 2024

I might be wrong but AFAIK, xarray doesn't store any information related to the rank of the array but Fortran specifies rank of the array in its type itself (analogous to xtensor).

xarray provides the rank information at runtime only, while xtensor and xtensor_fixed can provide it at build time.

I am open to ideas and different approaches for this problem.

I think the implementation could be something like:

#include <algorithm>     // std::copy
#include <type_traits>   // std::decay_t
#include <xtensor/xbuilder.hpp>   // xt::empty

template <class E, class S>
auto copy_and_reshape(E&& e, const S& shape)
{
    using value_type = typename std::decay_t<E>::value_type;
    // xt::empty returns an uninitialized container with the requested shape
    auto res = xt::empty<value_type>(shape);
    // copy the elements of e in (row-major) linear order
    std::copy(e.cbegin(), e.cend(), res.begin());
    return res;
}
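For illustration, a possible use of the sketch above (assuming the shape is passed as a std::array, so that xt::empty yields a statically-ranked xt::xtensor):

xt::xtensor<double, 2> a = xt::ones<double>({16, 16});
// b should be a 16x8x2 container holding a copy of a's 256 elements
auto b = copy_and_reshape(a, std::array<std::size_t, 3>{16, 8, 2});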

We can probably find a better name, but it should definitely not be reshape to avoid confusion with the existing reshape feature.

xtensor is missing a free function reshape, but if we add it, it should forward the call to the reshape method of its argument.

I see. The return type of xt::reshape would depend on the size of its second argument (a fixed-size 1-D array).

I'm not sure I get you; what I meant is that xt::reshape(tensor, shape) should call tensor.reshape(shape), which reshapes tensor in place (therefore the return type would be void). Sorry if my first message was not clear.

czgdp1807 commented on July 17, 2024

I'm not sure I get you; what I meant is that xt::reshape(tensor, shape) should call tensor.reshape(shape), which reshapes tensor in place (therefore the return type would be void). Sorry if my first message was not clear.

For example, if we want to reshape xt::xtensor<double, 2> a (say of shape [16, 16]) to xt::xtensor<double, 1> b (say of shape [256]), then xt::reshape(a, {256}) should return an object of type xt::xtensor<double, 1>. However, if we want to reshape a to xt::xtensor<double, 3> c (say of shape [16, 8, 2]), then xt::reshape(a, {16, 8, 2}) should return xt::xtensor<double, 3>. Basically, the return type depends on the size of the shape argument: if shape is of size n, then the returned xt::xtensor would be of rank n. Even if we are able to define the signature of xt::reshape, implementing it might be tricky. In Fortran, reshape is an intrinsic, and it's up to the Fortran compiler how it implements this feature, depending on the way it handles arrays internally.
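One way to obtain such a rank-dependent return type is to carry the rank in the type of the shape argument itself. A minimal sketch under that assumption (reshape_copy is a hypothetical name; the shape is a std::array whose compile-time size N fixes the rank of the result):

#include <algorithm>
#include <array>
#include <type_traits>
#include <xtensor/xtensor.hpp>

template <class E, std::size_t N>
auto reshape_copy(E&& e, const std::array<std::size_t, N>& shape)
{
    using value_type = typename std::decay_t<E>::value_type;
    // the rank of the result is the compile-time size of the shape argument
    auto res = xt::xtensor<value_type, N>::from_shape(shape);
    std::copy(e.cbegin(), e.cend(), res.begin());
    return res;   // xt::xtensor<value_type, N>
}

// e.g. reshape_copy(a, std::array<std::size_t, 3>{16, 8, 2}) would return an
// xt::xtensor<double, 3> for an xt::xtensor<double, 2> a of shape {16, 16}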

czgdp1807 commented on July 17, 2024

I think the implementation could be something like:

I see. Is this implementation doing the same thing as the reshape feature of Fortran, as I described in my comment above? It seems it creates an object res with the given shape and then copies the contents of the input e into it? If so, it makes sense to me.

JohanMabille commented on July 17, 2024

Basically, the return type depends on the size of the shape argument: if shape is of size n, then the returned xt::xtensor would be of rank n

Ha I see; indeed xtensor does not allow that, since the reshape is done in place, and since the rank of an xtensor object is known at build time, reshape must preserve the rank.

It seems it creates an object res with the given shape and then copies the contents of the input e into it? If so, it makes sense to me.

Yes, that's exactly what it is doing. Also notice that the values of the new tensor are not initialized before the copy; this avoids iterating over the whole buffer to set 0 everywhere.

czgdp1807 commented on July 17, 2024

Great to hear. It would be great to have this functionality in the xtensor library.

EDIT: maybe eval_reshaped could be a good name?

Sounds good to me.
