@sarah-ek: I'd love to hear your thoughts on a possible integration. Do you think a plan like what I've outlined makes sense? Do you foresee any problems, or can you think of a better design/plan?
from nalgebra.
Hi,

we have been playing around with linear algebra interfaces in our own experimental Rust Linear Solver Toolbox (https://github.com/linalg-rs/rlst), which we hope to have a first release of later this year. We eventually want to support `faer` and LAPACK equally. Our proposed structure is as follows: to compute, for example, an SVD of a matrix `mat`, you call `mat.linalg().svd()`. The `linalg` method comes from a `LinAlg` trait that, for dense matrices, builds a `DenseMatrixLinAlgBuilder` struct. `svd` belongs to another trait that is implemented for `DenseMatrixLinAlgBuilder`; the LAPACK implementation internally copies the object and then calls the corresponding LAPACK routine.
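For concreteness, here is a minimal sketch of that trait layering, using the names from this comment (`LinAlg`, `DenseMatrixLinAlgBuilder`, `Svd`); the matrix type and the placeholder SVD body are illustrative stand-ins, not rlst's actual code:

```rust
/// Toy dense matrix standing in for rlst's matrix type (illustrative only).
pub struct Matrix {
    pub data: Vec<f64>,
    pub rows: usize,
    pub cols: usize,
}

/// Builder returned by `linalg()`; backend-specific trait impls attach here.
pub struct DenseMatrixLinAlgBuilder<'a> {
    pub mat: &'a Matrix,
}

/// Entry point: `mat.linalg()` hands out the builder.
pub trait LinAlg<'a> {
    type Out;
    fn linalg(&'a self) -> Self::Out;
}

impl<'a> LinAlg<'a> for Matrix {
    type Out = DenseMatrixLinAlgBuilder<'a>;
    fn linalg(&'a self) -> Self::Out {
        DenseMatrixLinAlgBuilder { mat: self }
    }
}

/// The SVD is a separate trait so that a LAPACK or faer implementation can
/// be swapped in behind a feature flag.
pub trait Svd {
    /// Returns the singular values, largest first.
    fn svd(self) -> Vec<f64>;
}

impl Svd for DenseMatrixLinAlgBuilder<'_> {
    fn svd(self) -> Vec<f64> {
        // Placeholder body: a real implementation would copy the data and
        // call a LAPACK routine (or faer). We only handle the single-column
        // case here, whose one singular value is the Euclidean norm.
        assert_eq!(self.mat.cols, 1, "sketch handles single-column matrices only");
        vec![self.mat.data.iter().map(|x| x * x).sum::<f64>().sqrt()]
    }
}
```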
The idea is that we can hide behind a feature flag whether we want a LAPACK implementation of the `Svd` trait, a `faer` implementation, or something else. This is a compile-time choice, which is important for other methods that depend on the SVD. For example, the `Norm2` trait is implemented for all objects that implement the `LinAlg` trait and whose output itself implements the `Svd` trait. `Norm2` calls `svd` and returns the largest singular value as the norm (if it sees that the matrix is just a vector, it directly computes the vector norm instead).
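The `Norm2` layering can be sketched as a blanket impl over anything whose `linalg()` output implements `Svd`. All concrete types below (the diagonal matrix and its builder) are illustrative, not rlst's actual definitions:

```rust
/// Minimal stand-ins for the traits described above (illustrative only).
pub trait Svd {
    /// Returns the singular values, largest first.
    fn svd(self) -> Vec<f64>;
}

pub trait LinAlg<'a> {
    type Out: Svd;
    fn linalg(&'a self) -> Self::Out;
}

/// Blanket impl: anything whose `linalg()` output implements `Svd` gets a
/// 2-norm, computed as the largest singular value. A real implementation
/// would special-case vectors and compute the Euclidean norm directly.
pub trait Norm2 {
    fn norm2(&self) -> f64;
}

impl<T: for<'a> LinAlg<'a>> Norm2 for T {
    fn norm2(&self) -> f64 {
        self.linalg().svd()[0]
    }
}

/// Toy diagonal matrix: its singular values are the |d_i|, sorted descending.
pub struct Diag(pub Vec<f64>);
pub struct DiagBuilder<'a>(pub &'a Diag);

impl<'a> LinAlg<'a> for Diag {
    type Out = DiagBuilder<'a>;
    fn linalg(&'a self) -> Self::Out {
        DiagBuilder(self)
    }
}

impl Svd for DiagBuilder<'_> {
    fn svd(self) -> Vec<f64> {
        let Diag(d) = self.0;
        let mut s: Vec<f64> = d.iter().map(|x| x.abs()).collect();
        s.sort_by(|a, b| b.partial_cmp(a).unwrap());
        s
    }
}
```

The point of the blanket impl is that swapping the `Svd` backend automatically changes what `Norm2` uses, without `Norm2` knowing about any backend.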
We played around with a lot of ways of hiding the `faer` and LAPACK implementations. This is the most workable approach we have found so far, and it additionally allows us to abstract over different matrix-like objects through the `LinAlg` trait (e.g. treating sparse matrices completely differently from dense matrices, etc.).
i like the idea of leaving it as an experimental feature for now, since faer is relatively new as a library and the api might still change in the future. overall, the plan looks solid to me
Thanks for your input @tbetcke, really appreciate it!
What is the rationale for having the intermediate `LinAlg` trait instead of, say, an `SVD` trait directly on the matrix? It's not quite clear to me what the purpose of the intermediate `DenseMatrixLinAlgBuilder` struct is.
> The idea of this is that we can hide behind a feature flag whether we want a Lapack implementation of the Svd trait, a faer implementation or something else.
While I understand this design goal, my previous experience with this kind of design has been catastrophic. LAPACK has no notion of local configuration of parallelism, i.e. you cannot control the degree of parallelism per call. This is a severe problem if, in the same application, you need both:
- to solve a single "large" matrix problem in parallel
- to solve many small matrix problems separately in parallel
I've had exactly this issue before: on the one hand I needed to solve a single global system in parallel, but in order to build the matrix in the first place I needed to solve many small systems of varying size individually in parallel. In the end I had to resort to something else than LAPACK for the smaller matrix solves.
Regardless, I do agree that there is potentially a lot of value in abstracting the backend. But I think unifying `faer` - which thankfully supports configuration of parallelism at the call site - and LAPACK under a single abstraction is very deeply problematic for the aforementioned reason. However, you have likely considered this, and perhaps you have found some kind of solution that is not obvious to me at this time?
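To illustrate why call-site control matters, here is a hedged sketch of a per-call parallelism knob. The `Parallelism` enum and `row_sums` function are hypothetical stand-ins (faer's real API differs); the pattern is what counts: a single large problem can request threads, while many small problems inside an outer parallel loop can each request sequential execution.

```rust
use std::thread;

/// Illustrative per-call parallelism switch (hypothetical names, not faer's).
#[derive(Clone, Copy)]
pub enum Parallelism {
    Sequential,
    Threads(usize),
}

/// A toy "solve" that honors the requested parallelism for this call only.
pub fn row_sums(data: &[Vec<f64>], par: Parallelism) -> Vec<f64> {
    match par {
        Parallelism::Sequential => {
            data.iter().map(|r| r.iter().sum::<f64>()).collect()
        }
        Parallelism::Threads(n) => {
            // Split the rows into at most `n` chunks, one scoped thread each.
            let chunk = data.len().div_ceil(n.max(1)).max(1);
            thread::scope(|s| {
                let handles: Vec<_> = data
                    .chunks(chunk)
                    .map(|c| {
                        s.spawn(move || {
                            c.iter().map(|r| r.iter().sum::<f64>()).collect::<Vec<f64>>()
                        })
                    })
                    .collect();
                handles.into_iter().flat_map(|h| h.join().unwrap()).collect()
            })
        }
    }
}
```

A global switch (as in most LAPACK builds) cannot express this: the two call sites in the same process need different settings at the same time.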
Using feature flags for backend selection is a topic that I remember discussing with some of the `ndarray` authors back in 2017 or so. There are several additional issues that I wonder if you have solved in a satisfactory manner. The most pressing, I believe, are:
- Feature flags should not be mutually exclusive. This leads to two alternatives:
  - Ignore this and decide categorically that only the end user should ever set a single one of these feature flags.
    - If the linear algebra library is only used deep in a hierarchy of libraries, the end user might not even have a clue that the library is used, which might make it difficult to put the decision on the user.
  - Discourage library authors from setting any of these feature flags but allow different backend flags at the same time (such as `faer` and `lapack`), and have a principled manner of making a final decision.
    - This is also suboptimal because if a user enables, say, `faer`, a different crate might already have enabled `lapack`. If `lapack` is preferred over `faer`, the library would still use `lapack` even though the user selected `faer`, which is more than a little confusing.
- If the end user should set the feature flag, then there is the question of how, and there are essentially two options that I'm aware of:
  - Every dependency that transitively depends on the linear algebra library has to propagate the necessary feature flags (e.g. `faer`, `lapack`), which might get a bit unwieldy and relies on every single library author in the chain to actually do this.
  - Have the end user depend on the linear algebra library directly and set the desired feature. However, this is very brittle, because it requires the end user to select exactly the same version of the linear algebra library as the one used possibly deep in the dependency hierarchy; otherwise you might just end up with a duplicated dependency, and the wrong backend is selected by the linear algebra library. Possibly without any warning.
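For reference, the feature-unification problem above can be made concrete with a hypothetical manifest (feature and crate names here are illustrative). Because Cargo unions features across the whole dependency graph, both backend features can end up enabled simultaneously, so the crate still needs an in-code precedence rule such as `#[cfg(all(feature = "lapack", not(feature = "faer")))]`:

```toml
# Hypothetical Cargo.toml for a linear algebra crate with backend features.
[features]
default = []
faer = ["dep:faer"]
lapack = ["dep:lapack"]

[dependencies]
faer = { version = "*", optional = true }
lapack = { version = "*", optional = true }
```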
An alternative to features is to use environment variables to control the backend instead. I haven't considered this in detail, but it potentially resolves some of the aforementioned problems, though it might have new problems of its own.
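A minimal sketch of what environment-variable selection could look like; the variable name `LINALG_BACKEND` and the `Backend` enum are hypothetical, not an existing API:

```rust
/// Hypothetical backend choice resolved at runtime rather than compile time.
#[derive(Debug, PartialEq)]
pub enum Backend {
    Faer,
    Lapack,
}

/// Decide the backend from an environment value, defaulting to faer.
/// Taking the value as a parameter keeps the logic testable; callers would
/// pass `std::env::var("LINALG_BACKEND").ok().as_deref()`.
pub fn select_backend(env_value: Option<&str>) -> Backend {
    match env_value {
        Some("lapack") => Backend::Lapack,
        _ => Backend::Faer,
    }
}
```

One obvious new problem: both backends must then be compiled in, so runtime selection trades the flag-unification issues for larger binaries and dependency trees.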
I'm curious if you considered all these issues and still opted for compile time features to select backends. I'm not personally convinced that this is altogether a great solution, but I'd be happy to learn more!
Long-term we might definitely want to provide some kind of high-level abstraction, but I think it is very difficult to come up with an abstraction that offers sufficiently granular control of parallelism. It would be great to have a continued discussion on this though, and I'd like to follow your progress on `rlst`.

For the time being, in any case, I think it would be great to come up with an initial experimental integration of `faer` that is independent in the sense that other APIs are not impacted. That way `nalgebra` users can already start to enjoy the performance benefits of `faer`. Then once we feel more confident about what we need, we might revisit the idea of abstracting backends.
> i like the idea of leaving it as an experimental feature for now, since faer is relatively new as a library and the api might still change in the future. overall, the plan looks solid to me
Great to hear, thanks for chiming in!
I likely won't have time to work on this myself any time soon. If anyone is willing to take a stab, that would be most appreciated. Please do announce yourself here in the issue, however, so we avoid unintentionally duplicating efforts!
@Andlon The rationale for the `LinAlg` trait is two-fold:

1.) Depending on the type of the matrix, it can give out different `LinAlgBuilder` objects. Right now we have the `DenseMatrixLinalgBuilder`, but we will soon add a `SparseMatrixLinalgBuilder`, which the `LinAlg` trait will return when the `linalg` method is called on a sparse matrix.

2.) In the future, data may be available on accelerator devices, etc. The `LinAlg` builder allows us to inject additional fetch operations/checks, etc. before the actual linear algebra routines are called.
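Point 2.) can be sketched as follows; all names (`Tensor`, `Location`, `fetch_from_device`) are illustrative, not rlst's actual types, and the "device transfer" is just a copy standing in for a real accelerator fetch:

```rust
/// Where the data currently lives (illustrative).
#[derive(Clone, Copy)]
pub enum Location {
    Host,
    Device,
}

pub struct Tensor {
    pub data: Vec<f64>,
    pub location: Location,
}

pub struct LinAlgBuilder {
    data: Vec<f64>,
}

/// Hypothetical device-to-host transfer; here it just copies the slice.
fn fetch_from_device(device_data: &[f64]) -> Vec<f64> {
    device_data.to_vec()
}

impl Tensor {
    /// `linalg()` is the natural injection point: if the data lives on an
    /// accelerator, fetch it back before any CPU routine can run.
    pub fn linalg(&self) -> LinAlgBuilder {
        let data = match self.location {
            Location::Host => self.data.clone(),
            Location::Device => fetch_from_device(&self.data),
        };
        LinAlgBuilder { data }
    }
}

impl LinAlgBuilder {
    /// Any routine on the builder can now assume host-resident data.
    pub fn norm2(&self) -> f64 {
        self.data.iter().map(|x| x * x).sum::<f64>().sqrt()
    }
}
```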
The problem of switching threading models is solved for now by defaulting to BLIS as the BLAS provider. BLIS has interface routines that allow the threading model to be changed at runtime (e.g. single-/multi-threaded, etc.). We need this a lot in our codes.
Regarding faer integration: we haven't done this yet. But the idea is that the specific traits such as `Svd` are implemented for the `DenseMatrixLinAlgBuilder`, so we can simply switch the trait implementation from LAPACK to faer via a feature flag. The disadvantage is that the feature flags in this case are not composable; the user needs to choose one or the other, which contradicts the additivity of features a bit. But it is difficult to achieve both.
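Sketching that switch: with real Cargo features the two impls would be mutually exclusive via `#[cfg(feature = "lapack")]` / `#[cfg(feature = "faer")]` attributes on the trait impl. The version below uses `cfg!` so both branches compile in one file, and encodes an explicit precedence rule for the non-composable case where both flags end up enabled. All names are illustrative:

```rust
pub struct DenseMatrixLinAlgBuilder {
    pub data: Vec<f64>,
}

// Stand-ins for calls into the two backends.
fn svd_via_lapack(_data: &[f64]) -> &'static str { "lapack" }
fn svd_via_faer(_data: &[f64]) -> &'static str { "faer" }

pub trait Svd {
    /// Returns which backend would run (a toy observable for the sketch).
    fn svd_backend(&self) -> &'static str;
}

impl Svd for DenseMatrixLinAlgBuilder {
    fn svd_backend(&self) -> &'static str {
        // Precedence rule: prefer LAPACK if both features are enabled.
        if cfg!(feature = "lapack") {
            svd_via_lapack(&self.data)
        } else if cfg!(feature = "faer") {
            svd_via_faer(&self.data)
        } else {
            // Fallback when no backend feature is enabled.
            "native"
        }
    }
}
```

With neither feature set (as when this file is compiled standalone), the fallback branch is taken.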
are there updates on this topic?
> are there updates on this topic?
No, as far as I know, nobody is actively working on this. Though, I have been thinking about it.