olson-sean-k / decorum

Making floating-point behave.
License: MIT License
Right now, constraints are controlled by a crate feature. This comes with some complications, but it notably disagrees with the established patterns in the standard library. Integer types check for things like overflow in debug builds but do not in release builds. Moreover, integer types provide explicitly checked operations for code that cannot reasonably guarantee that input values or the results of operations are valid (even in release builds).
Mimic the standard library instead of gating this behavior behind the `enforce-constraints` feature.

Decorum is opinionated about the total ordering it provides for floating-point values. Users cannot modify this behavior, and it is essentially hard-coded (see the `canonical` and `constraint` modules).
Maybe this should be parameterized via an additional type parameter. That parameter should have a default, and that default should probably use the "natural" ordering provided today (i.e., `NaN` and zero have single canonical forms). However, some users may want something more akin to the IEEE-754 ordering, where `-NaN` is less than `-INF`, `NaN` is greater than `INF`, and negative zero is less than positive zero (i.e., there is a distinction between the negative and positive variants of `NaN` and zero).
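On newer Rust (1.62+, well after this issue was filed), the standard library's `f64::total_cmp` implements exactly this IEEE-754 totalOrder predicate, which makes a useful reference point for what such a parameterized ordering would do:

```rust
use std::cmp::Ordering;

fn main() {
    // `total_cmp` follows IEEE-754 totalOrder: negative NaN sorts below -INF,
    // positive NaN sorts above +INF, and -0 sorts below +0.
    let neg_nan = -f64::NAN; // negation flips the sign bit, yielding a negative NaN
    assert_eq!(neg_nan.total_cmp(&f64::NEG_INFINITY), Ordering::Less);
    assert_eq!(f64::NAN.total_cmp(&f64::INFINITY), Ordering::Greater);
    assert_eq!((-0.0f64).total_cmp(&0.0), Ordering::Less);
    println!("IEEE-754 total order holds");
}
```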
I've seen some discussion about this regarding similar libraries, and I think alternative ordering may be useful for some applications.
I'd like to define custom constraints on floating-point values. Is that possible already? If so, am I missing the documentation?
Other crates like approx and alga (used by nalgebra) define traits and implement them for native floating-point types. Because they use generics, Decorum's numbers can be used with them, but some of their functionality is unavailable, as their traits are not implemented for Decorum's types.

Due to Rust's orphan rule, users of Decorum and those libraries cannot implement the other libraries' traits themselves; either the libraries declaring the traits or Decorum must implement them. What strategy does Decorum use for implementing foreign traits? What dependency hierarchy should be created: should those libraries depend on Decorum, or should Decorum depend on them?

I can imagine that adding features to Decorum for use with well-known libraries, like those mentioned above, might work.
The implementation of serialization with serde is incorrect. Only the raw floating-point value is encoded, omitting any information about the originating proxy type. For example, serialization should probably use `serialize_newtype_struct` instead of `into_raw_float` + `serialize`.

Consider using `cfg_attr` with `derive(Deserialize, Serialize)`, though this may not be possible due to the type parameters on `ConstrainedFloat`.
The documentation could use some work. Fix any inconsistencies, document errors, and provide examples.

This issue should be examined after any other `0.1.0` milestone issues that affect the API have been closed.
Zero values are handled differently for comparison (`cmp_float`) than for hashing (`hash_float`). Hashing canonicalizes zeroes to a single representation, but ordering does not handle zeroes at all. In other words, hashing assumes `-0 == 0` while ordering assumes `-0 < 0`. Ordering should detect zeroes and agree with the `-0 == 0` relation.

This is somewhat related to #7. If different orderings are implemented, it will be important to ensure that they interact well with hashing.
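A minimal sketch of the fix, assuming a free-standing comparison function like `cmp_float` (the name follows the issue; the real signature may differ): canonicalize zeroes before comparing, so that ordering and hashing agree.

```rust
use std::cmp::Ordering;

// Hypothetical `cmp_float` that agrees with hashing on `-0 == 0`:
// map -0.0 to +0.0 before applying a total ordering.
fn cmp_float(lhs: f64, rhs: f64) -> Ordering {
    // `x == 0.0` is true for both +0.0 and -0.0, so this collapses the
    // two zero representations into a single canonical form.
    let canonicalize = |x: f64| if x == 0.0 { 0.0 } else { x };
    canonicalize(lhs).total_cmp(&canonicalize(rhs))
}

fn main() {
    assert_eq!(cmp_float(-0.0, 0.0), Ordering::Equal); // agrees with hashing
    assert_eq!(cmp_float(-1.0, 0.0), Ordering::Less);
}
```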
https://doc.rust-lang.org/std/iter/trait.Sum.html
https://doc.rust-lang.org/std/iter/trait.Product.html

Implement these two traits. Right now, instead of `iterator_of_r32.sum()` I have to write `iterator_of_r32.fold(R32::default(), |a, b| a + b)`.
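The requested implementation is essentially the fold from above, moved into the library. A sketch using a stand-in newtype (`Real32` here is hypothetical; the real `R32` also enforces constraints on construction):

```rust
use std::iter::Sum;
use std::ops::Add;

// Minimal stand-in for a checked proxy type like `R32`.
#[derive(Clone, Copy, Debug, Default, PartialEq)]
struct Real32(f32);

impl Add for Real32 {
    type Output = Real32;
    fn add(self, other: Real32) -> Real32 {
        Real32(self.0 + other.0)
    }
}

// Implementing `Sum` is exactly the fold from the issue, written once in
// the library instead of at every call site.
impl Sum for Real32 {
    fn sum<I: Iterator<Item = Real32>>(iter: I) -> Real32 {
        iter.fold(Real32::default(), |a, b| a + b)
    }
}

fn main() {
    let total: Real32 = vec![Real32(1.0), Real32(2.0), Real32(3.0)]
        .into_iter()
        .sum(); // now `.sum()` just works
    assert_eq!(total, Real32(6.0));
}
```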
The `Encoding` trait expresses some notion of the minimum and maximum values that floating-point types can represent (the `Bounded` trait from num-traits is similar). For floating-point types, this is a bit misleading, because these traits only consider representations of real numbers and disagree with ordering when considering all classes of values that floating-point types can represent.

For example, `<f64 as Encoding>::MAX` and `<f64 as Bounded>::max_value()` do not yield `+INF`, despite the fact that the partial ordering for `f64` considers infinity greater than these values. The name `MAX` suggests "the maximum possible value", but it is really "the maximum possible real value".

These semantics are fine and useful, but `Encoding` should reflect the restriction to real numbers in the names of its associated constants. Perhaps `MAX` should be `MAX_REAL` or `MAX_NUM`, for example.
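The same naming quirk is visible with the standard library's own constant, which makes the point concrete:

```rust
fn main() {
    // `f64::MAX` is the largest *finite* value, not the maximum of the
    // ordering: positive infinity compares greater than it.
    assert!(f64::MAX < f64::INFINITY);
    // So "MAX" really means "maximum representable real value".
    assert!(f64::MAX.is_finite());
}
```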
I get the following error while trying to use `magnitude()` on a `nalgebra::Vector3<decorum::R64>`:

```
no method named `magnitude` found for type `na::Matrix<decorum::ConstrainedFloat<f64, decorum::constraint::FiniteConstraint<f64>>, na::U3, na::U1, na::ArrayStorage<decorum::ConstrainedFloat<f64, decorum::constraint::FiniteConstraint<f64>>, na::U3, na::U1>>` in the current scope
note: the method `magnitude` exists but the following trait bounds were not satisfied:
`decorum::ConstrainedFloat<f64, decorum::constraint::FiniteConstraint<f64>> : na::Real` rustc(E0599)
```

The same problem comes up with `normalize()`, but I can bypass it using the `plexus::geometry::ops::Normalize` trait. Is this how it's supposed to be? Should I bypass the magnitude issue by implementing my own trait?

(I'm new to Rust, sorry if I'm missing something obvious here.)
```rust
use decorum::{N64, Real};

fn main() {
    let nan = N64::from(-1.0).sqrt();
    assert!(nan.into_inner().is_nan());
    println!("nan: {}", nan);
}
```

The above code runs and prints `nan: NaN`. It is expected to panic instead.
You should also check https://en.wikipedia.org/wiki/NaN#Operations_generating_NaN to make sure there aren't any more operations that can be exploited to produce NaN.
Is there a way to define a `const FOO: R64 = 0.2;`?
I've had a look at the code and I don't see the `std` feature being used anywhere. Why is there a `std` feature and a conditional `extern crate std;` declaration if they are never used?

Note that removing these will still leave the default features no_std-incompatible, due to serde(-derive)'s default features being enabled. You probably don't depend on serde's `std` feature, though (at least I don't see how you would). If your crate indeed works with `serde = { version = "1.0", default-features = false }`, it could unconditionally be no_std compatible. It would be a breaking change for anyone manually specifying features, though.
`Proxy` implements de/serialization using serde, but currently serializes as a structure with a single field. It would probably be better to serialize proxy types as raw floating-point primitives instead (as discussed in #25).

Care must be taken to enforce constraints when deserializing, especially if the serialized format gives no indication that any such constraints should be applied. I have a working approach in d93535c on the `serde` branch. It uses an intermediate type with transparent de/serialization and a conversion into/from proxy types that applies constraints.

One remaining problem is that `serde_json` does not support serialization of non-real values for floating-point primitives out of the box. `NaN`s and infinities are serialized as `null`, which cannot be deserialized. Not only does this lose information, but there is no way to round-trip a non-real value. Note that commonly used serializations like `"nan"` are not supported. One option for improving this is custom de/serialization via additional types gated by a Cargo feature. Gating would be necessary, since the de/serialization would be non-standard, but it could be used on a case-by-case basis by any downstream crates that want to de/serialize non-real floating-point values.
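The kind of non-standard encoding described above can be sketched without serde: represent non-real values as the JSON strings `"nan"`, `"inf"`, and `"-inf"` so they survive a round trip (these encode/decode helpers are hypothetical, not Decorum's API):

```rust
// Hypothetical encoding that keeps non-real values round-trippable by using
// JSON strings instead of `null`.
fn encode(value: f64) -> String {
    if value.is_nan() {
        "\"nan\"".to_owned()
    } else if value == f64::INFINITY {
        "\"inf\"".to_owned()
    } else if value == f64::NEG_INFINITY {
        "\"-inf\"".to_owned()
    } else {
        value.to_string()
    }
}

fn decode(text: &str) -> Option<f64> {
    match text {
        "\"nan\"" => Some(f64::NAN),
        "\"inf\"" => Some(f64::INFINITY),
        "\"-inf\"" => Some(f64::NEG_INFINITY),
        _ => text.parse().ok(),
    }
}

fn main() {
    // Non-real values survive a round trip, unlike `serde_json`'s `null`.
    assert!(decode(&encode(f64::NAN)).unwrap().is_nan());
    assert_eq!(decode(&encode(f64::INFINITY)), Some(f64::INFINITY));
    assert_eq!(decode(&encode(1.5)), Some(1.5));
}
```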
Hashing functions like `hash_float_array` are provided because it is sometimes not possible or ergonomic to use wrapper types within another type. README.md mentions this example:

```rust
use decorum;
use derivative::Derivative;

#[derive(Derivative)]
#[derivative(Hash)]
pub struct Vertex {
    #[derivative(Hash(hash_with = "decorum::hash_float_array"))]
    pub position: [f32; 3],
    // ...
}
```

This vertex type may be fed to graphics code that expects raw floating-point data (for example, see the gfx pipeline macros and shader compilers).
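A standalone sketch of what a function like `hash_float_array` must do (this is an illustration, not Decorum's implementation): canonicalize each value so that `-0` hashes like `0` and all `NaN`s hash alike, then feed the bits to a hasher.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash a slice of floats by canonical bit pattern.
fn hash_floats(values: &[f32]) -> u64 {
    let mut hasher = DefaultHasher::new();
    for &value in values {
        let canonical = if value.is_nan() {
            f32::NAN.to_bits() // all NaNs collapse to one representation
        } else if value == 0.0 {
            0.0f32.to_bits() // maps -0.0 to +0.0
        } else {
            value.to_bits()
        };
        canonical.hash(&mut hasher);
    }
    hasher.finish()
}

fn main() {
    assert_eq!(hash_floats(&[-0.0, 1.0]), hash_floats(&[0.0, 1.0]));
    assert_eq!(hash_floats(&[f32::NAN]), hash_floats(&[-f32::NAN]));
}
```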
A similar problem exists for implementing `Eq`: if it is not possible or ergonomic to use wrapper types, there is currently no convenient way to implement `Eq`. This can be done via conversions, but that gets messy fairly quickly. Instead, Decorum should provide `eq_float`, `eq_float_slice`, and `eq_float_array` functions that are analogous to the hashing functions.
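A sketch of what such functions could look like (hypothetical signatures, not Decorum's API): equality that agrees with canonicalization, so that `NaN == NaN` and `-0 == 0`.

```rust
// Hypothetical `eq_float`, analogous to the hashing functions.
fn eq_float(lhs: f32, rhs: f32) -> bool {
    // Treat all NaNs as equal; `==` already treats -0.0 and 0.0 as equal.
    (lhs.is_nan() && rhs.is_nan()) || lhs == rhs
}

fn eq_float_slice(lhs: &[f32], rhs: &[f32]) -> bool {
    lhs.len() == rhs.len() && lhs.iter().zip(rhs).all(|(&a, &b)| eq_float(a, b))
}

fn main() {
    assert!(eq_float(f32::NAN, f32::NAN)); // reflexive, unlike `==`
    assert!(eq_float(-0.0, 0.0));
    assert!(eq_float_slice(&[1.0, f32::NAN], &[1.0, f32::NAN]));
    assert!(!eq_float_slice(&[1.0], &[2.0]));
}
```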
Numeric traits like `Float` require additional traits like `Num`, `Neg`, etc., which tends to be more ergonomic. Require certain traits for `Real` (and friends) so that constraints need not be so verbose.
The `ToCanonicalBits` trait is implemented for a type `T` with the bounds `T: Encoding + Nan`. This is a convenient implementation in which `Encoding + Nan` implies `ToCanonicalBits`, but it does not include the `NotNan` and `Finite` types, because they do not implement `Nan`.

It may be better to implement `ToCanonicalBits` explicitly for the primitive `f32` and `f64` types and provide a blanket implementation for `ConstrainedFloat` that simply delegates to the implementation of its wrapped floating-point type (supporting all of its type definitions).

For the `0.4.x` series, a99c247 introduces an associated type so that the size of the output bit vector can depend on the implementation. The implementations for `f32` and `f64` would be identical before this change, but should diverge and use `u32` and `u64` as the output types, respectively.
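A minimal sketch of the proposed structure, with simplified canonicalization (only NaN is collapsed here) and a stand-in `Wrapper` type in place of `ConstrainedFloat`:

```rust
// Explicit implementations for primitives, with an associated output type.
trait ToCanonicalBits {
    type Bits;
    fn to_canonical_bits(self) -> Self::Bits;
}

impl ToCanonicalBits for f32 {
    type Bits = u32; // diverges from f64, as proposed for 0.4.x
    fn to_canonical_bits(self) -> u32 {
        if self.is_nan() { f32::NAN.to_bits() } else { self.to_bits() }
    }
}

impl ToCanonicalBits for f64 {
    type Bits = u64;
    fn to_canonical_bits(self) -> u64 {
        if self.is_nan() { f64::NAN.to_bits() } else { self.to_bits() }
    }
}

// Stand-in for `ConstrainedFloat`: a blanket impl that delegates to the
// wrapped floating-point type.
struct Wrapper<T>(T);

impl<T: ToCanonicalBits> ToCanonicalBits for Wrapper<T> {
    type Bits = T::Bits;
    fn to_canonical_bits(self) -> T::Bits {
        self.0.to_canonical_bits()
    }
}

fn main() {
    assert_eq!(Wrapper(1.0f64).to_canonical_bits(), 1.0f64.to_bits());
    assert_eq!(f32::NAN.to_canonical_bits(), (-f32::NAN).to_canonical_bits());
}
```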
`R64` does have some niche values (e.g., `NaN` bit patterns). Is it possible to somehow communicate them to the compiler to optimize the size of `Option<R64>`, so that it is no bigger than `R64` itself?
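For context, stable Rust currently only applies this niche optimization to a handful of standard-library types (the layout attributes involved are internal to the compiler), so a user-defined type like `R64` cannot declare its unused bit patterns. The effect the question is after can be seen with `NonZeroU64`:

```rust
use std::mem::size_of;
use std::num::NonZeroU64;

fn main() {
    // Without a niche, `Option` needs space for a separate discriminant:
    assert_eq!(size_of::<f64>(), 8);
    assert_eq!(size_of::<Option<f64>>(), 16);
    // With a declared niche (here, zero), the discriminant is free:
    assert_eq!(size_of::<Option<NonZeroU64>>(), 8);
}
```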
The `0.2.*` series of the num-traits crate includes a `Real` trait (introduced in `0.1.42`) that is nearly identical to Decorum's. Before hitting `0.1.0`, Decorum should integrate with these changes and replace its `Real` trait with num-traits'.

This will be tricky, though. num-traits currently provides a blanket implementation such that `T: Float` implies `T: Real` (i.e., `impl<T> Real for T where T: Float { ... }`). This makes it impossible to write implementations of the `Float` and `Real` traits for `ConstrainedFloat` that are parameterized over the input `FloatConstraint` type. I'm still not sure how to work around this.
Hello there, and thank you for the lib!

Currently, `std::fmt::Debug` is implemented for the float types via a derive. However, because the type contains a `PhantomData` field, the output is rather chunky. It looks like:

```
ConstrainedFloat {
    value: -0.00025,
    phantom: PhantomData,
}
```

With structs that have a lot of floating-point members, this printing is very noisy. Personally, I'd prefer it if we could instead just print the floating-point literal. Please let me know what you think -- if you're OK with this, I'd be happy to PR it!
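The change amounts to replacing the derive with a manual `Debug` impl that delegates to the wrapped value. A sketch using a simplified stand-in for `ConstrainedFloat` (the real type's parameters differ):

```rust
use std::fmt;
use std::marker::PhantomData;

// Simplified stand-in for the proxy type.
struct ConstrainedFloat<T, P> {
    value: T,
    phantom: PhantomData<P>,
}

// Manual `Debug` that prints only the wrapped value, hiding `PhantomData`.
impl<T: fmt::Debug, P> fmt::Debug for ConstrainedFloat<T, P> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        self.value.fmt(f)
    }
}

fn main() {
    let x: ConstrainedFloat<f64, ()> = ConstrainedFloat {
        value: -0.00025,
        phantom: PhantomData,
    };
    // Prints the bare literal instead of the full struct.
    assert_eq!(format!("{:?}", x), "-0.00025");
}
```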
Newer versions of `decorum` (0.2.0 or later) fail to compile using Rust 1.42.0 with the error:

```
error[E0391]: cycle detected when const-evaluating + checking `primitive::<impl at /Users/me/.cargo/registry/src/github.com-1ecc6299db9ec823/decorum-0.3.1/src/primitive.rs:23:9: 29:10>::NAN`
```

If there is some incompatibility and there is a minimum supported version of Rust, this should be explained in the README.
Some of the basic wrapper traits like `Proxy` and `Primitive` can be useful in a broader context. For example, see this code that provides a numeric wrapper that clamps values.
Perhaps tools like this should be provided by Decorum. Another possibility is refactoring basic traits into another crate and then providing more specific crates atop that.
```rust
use decorum::N64;
use std::f64::NAN;

fn main() {
    N64::from(NAN);
}
```

The above code panics with:

```
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: ()', libcore/result.rs:945:5
note: Run with `RUST_BACKTRACE=1` for a backtrace.
```

The code calling `unwrap` is at line 71 in 7d9449a.
There are basically no tests. At the very least, create unit tests to ensure that constraints are properly upheld and that conversions work as expected.