paritytech / parity-common
Collection of crates used in Parity Technologies projects
Home Page: https://www.paritytech.io/
License: Apache License 2.0
https://github.com/microsoft/mimalloc seems to be a better choice than jemalloc (both time- and memory-wise). We would probably need a low-level Rust crate wrapping its C API first, like jemalloc-sys.
such as: H32 (4 bytes), H48 (6 bytes), ..., H264 (33 bytes), H520 (65 bytes)
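For illustration, here is a minimal sketch of what such a macro-generated fixed-size hash type boils down to; the real `construct_fixed_hash!` macro generates many more trait impls, so this is only an assumption-laden toy:

```rust
// Minimal sketch of a fixed-size hash type like the requested H32 (4 bytes).
// The real construct_fixed_hash! macro derives many more traits.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct H32(pub [u8; 4]);

impl H32 {
    // Size in bytes, mirroring the H32/H48/.../H520 naming convention
    // where the suffix is the size in bits.
    pub const fn len_bytes() -> usize {
        4
    }
}

impl AsRef<[u8]> for H32 {
    fn as_ref(&self) -> &[u8] {
        &self.0
    }
}

fn main() {
    let h = H32([0xde, 0xad, 0xbe, 0xef]);
    assert_eq!(H32::len_bytes(), 4);
    assert_eq!(h.as_ref().len(), 4);
}
```

The same pattern scales to any requested width (H48 over `[u8; 6]`, H520 over `[u8; 65]`, and so on).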
I have the following data structures defined as `Encodable` and `Decodable`:

```rust
#[derive(Debug, PartialEq, Eq)]
struct Inner(H256, H256);

impl Encodable for Inner {
    fn rlp_append(&self, s: &mut RlpStream) {
        s.begin_unbounded_list()
            .append(&self.0)
            .append(&self.1)
            .complete_unbounded_list();
    }
}

impl Decodable for Inner {
    fn decode(rlp: &Rlp) -> Result<Self, DecoderError> {
        Ok(Inner(rlp.val_at(0)?, rlp.val_at(1)?))
    }
}

#[derive(Debug, PartialEq, Eq)]
struct Nest(Vec<Inner>);

impl Encodable for Nest {
    fn rlp_append(&self, s: &mut RlpStream) {
        s.begin_list(1)
            .append_list(&self.0);
    }
}

impl Decodable for Nest {
    fn decode(rlp: &Rlp) -> Result<Self, DecoderError> {
        Ok(Nest(rlp.list_at(0)?))
    }
}
```
An encoding-decoding roundtrip of `Nest` fails like so:

Test:

```rust
#[test]
fn test_nested_list_roundtrip() {
    let items = (0..8).map(|_| Inner(H256::random(), H256::random())).collect();
    let nest = Nest(items);
    let encoded = rlp::encode(&nest);
    let decoded = rlp::decode(&encoded).unwrap();
    assert_eq!(nest, decoded)
}
```
Error:
thread 'test_nested_list_roundtrip' panicked at 'assertion failed: `(left == right)`
left: `Nest([Inner(0x347162d6b27b8f60eab7510e7c5f1a13cf8cf02128d4e0a75811bb6b21f924c1, 0x8a8681a07c0093db405e3462fcf77050f43b3ae8045494659d843e7d4c68c3dd), Inner(0x7b226f962317f5c23ed52c8920f22496014fba993aac53f1abefb38798a34aa0, 0x161de16522b077c1f7d05a369228c4ff2dceed19a5a70cc95dae7318ef273ac1), Inner(0x4c2c46c6f104296d554d2d46d7d87df9a7bd6096ea1ff3e46047378182507dc2, 0x633bbd4c3d50ef2cf2547659d4c5e77cbabd432f61461eb907352461872cf64f), Inner(0x0f07c3990ef3206c14bf5f7c24bb67fbae003aab039fbbcb11e01c9db8827376, 0x51e3c48e3be8e6ff7006f7d38d13c0f63371894df645a5321fbe2bda93fce29f), Inner(0x2c38e80703a49398c1972b43b8d3586e6fabbd8c8c9821ac02147c70fd790330, 0x1250dce35e0f998535255d7f9f442d945afe547adebbbaf71b38fa490171af2f), Inner(0xc8bf2de05f9601109e0de4a6a979ae7ccda4563d024b4f333a7187d05f0a7971, 0xee98d63d812605661a868dfccf508fc15cefcc4825c87c1883a43867f2c00a47), Inner(0x002c77234f46917656e6ef2d03d62baf066b8bb87cb4145a1a485f86fe626e53, 0x031f58ed89b8b8b7660772a2a7c8669e9ee6298d09f7edc09e0a7f0a554e7436), Inner(0x85425be5867ca860f1f568a08f0d13dd8a70bdb8ade9db1d0cc8e65899a4a6d9, 0x17d50a0baf5975038eaec5667cb4df6f1c49c5bf34c0897ed59559972a3a5554)])`,
right: `Nest([Inner(0x347162d6b27b8f60eab7510e7c5f1a13cf8cf02128d4e0a75811bb6b21f924c1, 0x8a8681a07c0093db405e3462fcf77050f43b3ae8045494659d843e7d4c68c3dd), Inner(0x7b226f962317f5c23ed52c8920f22496014fba993aac53f1abefb38798a34aa0, 0x161de16522b077c1f7d05a369228c4ff2dceed19a5a70cc95dae7318ef273ac1), Inner(0x4c2c46c6f104296d554d2d46d7d87df9a7bd6096ea1ff3e46047378182507dc2, 0x633bbd4c3d50ef2cf2547659d4c5e77cbabd432f61461eb907352461872cf64f), Inner(0x0f07c3990ef3206c14bf5f7c24bb67fbae003aab039fbbcb11e01c9db8827376, 0x51e3c48e3be8e6ff7006f7d38d13c0f63371894df645a5321fbe2bda93fce29f)])`', tests/tests.rs:546:2
note: Run with `RUST_BACKTRACE=1` for a backtrace.
It seems like the `rlp` crate only encodes half of the nested list (4 items of `nest.0` in this case instead of 8) when the full list is expected. Changing the initial number of objects in `nest.0` to any other number yields the same result: only half of the vec is encoded.
https://github.com/iqlusioninc/crates/tree/develop/zeroize seems to be the best option available at the moment. Also, https://github.com/iqlusioninc/crates/tree/develop/secrecy could replace our `Secret` type.
Currently the readme does not document all features. Address that.
rel #97 (comment)
github.com-1ecc6299db9ec823/triehash-0.1.2/src/lib.rs:141:2

```
138 | fn gen_trie_root<A: AsRef<[u8]>, B: AsRef<[u8]>>(input: &[(A, B)]) -> H256 {
    |                                                                        ---- expected `ethereum_types::H256` because of return type
...
141 |     keccak(stream.out())
    |     ^^^^^^^^^^^^^^^^^^^^ expected struct `ethereum_types::H256`, found struct `hash::H256`
    |
    = note: expected type `ethereum_types::H256`
               found type `hash::H256`
```
H-types (such as `H256`) created through the `construct_fixed_hash` macro implement the `AsRef<[u8]>` trait to give access to the internal data, but U-types (such as `U256`) do not implement a similar trait (such as `AsRef<[u64]>`) for their internal data.
Please note, there seems to be an inconsistency in how `uint` and `hash` use the `AsRef` trait, which may be the source of this issue:
parity-common/fixed-hash/src/hash.rs
Lines 101 to 106 in cd0fe15
parity-common/uint/src/uint.rs
Lines 427 to 431 in cd0fe15
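The asymmetry can be sketched with toy types; `as_limbs` below is a hypothetical name for an accessor that `uint` does not currently expose through any trait:

```rust
// Sketch of the inconsistency: the hash type exposes its bytes through
// AsRef<[u8]>, while the uint type only exposes its limbs through an
// inherent method, so no generic code can abstract over both.
pub struct H256(pub [u8; 32]);
pub struct U256(pub [u64; 4]);

impl AsRef<[u8]> for H256 {
    fn as_ref(&self) -> &[u8] {
        &self.0
    }
}

impl U256 {
    // Hypothetical AsRef<[u64]>-style accessor, not part of the uint crate.
    pub fn as_limbs(&self) -> &[u64] {
        &self.0
    }
}

fn main() {
    let h = H256([0u8; 32]);
    let u = U256([0u64; 4]);
    assert_eq!(h.as_ref().len(), 32); // available via the trait
    assert_eq!(u.as_limbs().len(), 4); // only via an inherent method
}
```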
`kvdb-rocksdb` currently uses our own fork of `rust-rocksdb`. The fork has diverged quite a bit. We should switch back to upstream to get the latest updates. I started doing this in: 616b401
There are some `TODO`s on what is missing. We also need to upstream some functionality from our fork.
rel #87 (comment)
rustc version: rustc 1.34.1 (fc50f328b 2019-04-24)
I'm trying to use kvdb-memorydb, but it shows the error `expected &[u8], found ElasticArray32<u8>` in the following code:
Lines 55 to 62 in e208250
Could you tell me how to fix it? Thanks!
I've been working with @tomusdrw on adding a pure-Rust feature switch to `ethsign`, which inevitably led to developing a drop-in replacement for `parity-crypto` that does not depend on `ring` or other C dependencies (those are a massive pain in the neck when compiling to Android/iOS).
This is quite a chore, and it's doubly troublesome as the crate lacks documentation (which came up during the PR review).
Also, while porting stuff, I found that the `scrypt` algorithm is using the deprecated `rust-crypto` crate instead of the standalone `scrypt` crate. It might be worth upgrading that one, along with switching some of the `ring` stuff to pure-Rust alternatives in cases where the performance regression for doing so is either tolerable or not very relevant (`aes` would be a good example).
I think having `ring` behind a feature flag, disabling which would also disable all parts of `parity-crypto` that depend on it, leaving only Rust dependencies, would be ideal for long-term maintainability.
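A hedged sketch of how such feature gating might look in Cargo.toml; the feature layout and version number are illustrative, not the crate's actual manifest:

```toml
# Illustrative sketch only: feature name and version are assumptions.
[features]
default = ["ring"]

[dependencies]
# Optional dependency: with old Cargo, an optional dependency implicitly
# defines a feature of the same name that enables it.
ring = { version = "0.14", optional = true }
```

Code depending on `ring` would then be gated with `#[cfg(feature = "ring")]`, so building with `--no-default-features` yields a pure-Rust crate.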
Audit the current `uint` benchmark and make sure all operations have at least a basic benchmark for the common `U256` and `U512` types, so we have a baseline upon which we can evaluate improvements.
I got many errors like this one after upgrading to the latest version of `ethereum-types`:

```
the trait bound `ethereum_types::hash::H256: std::convert::From<ethereum_types::H256>` is not satisfied
```
We lost track of the rationale and reasoning behind the `quickcheck` crate support in `fixed-hash`.
This is the tracking issue for finding out current uses of it in the ecosystem and, if there are none, removing the corresponding code in `fixed-hash`.
Should allow for a cleaner `no_std` impl.
In the `uint` crate there is the following declaration:
> Also provides commonly used U128, U256 and U512 out of the box.
I don't see any of these defined in the crate, so I'm not sure whether I'm misunderstanding what is supposed to be provided. I would've expected to see a few `construct_uint! { pub struct N (N/64) }` declarations in the repo.
According to the RLP spec, RLP elements (string or list) of encoded length > 55 bytes should use the long encoding, and RLP elements of encoded length <= 55 bytes should use the short encoding.
It is currently possible to decode RLP elements of length <= 55 that were encoded with the long encoding without any error being raised.
I am not sure there is harm in this, but it means two different binary RLPs can have exactly the same semantics, which seems a bit fishy in some situations.
Some failing examples:

```rust
#[test]
fn test_canonical_string_encoding() {
    assert_ne!(Rlp::new(&vec![0xc0 + 4, 0xb7 + 1, 2, b'a', b'b']).val_at::<String>(0),
               Rlp::new(&vec![0xc0 + 3, 0x82, b'a', b'b']).val_at::<String>(0)); // fails
}

#[test]
fn test_canonical_list_encoding() {
    assert_ne!(Rlp::new(&vec![0xc0 + 3, 0x82, b'a', b'b']).val_at::<String>(0),
               Rlp::new(&vec![0xf7 + 1, 3, 0x82, b'a', b'b']).val_at::<String>(0)); // fails
}
```
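The 55-byte threshold the spec mandates can be sketched with a toy header encoder; this is a pure-Rust illustration of the rule, not the `rlp` crate's API, and it omits the single-byte case for strings of length 1 with a value below 0x80:

```rust
// Sketch of RLP string header emission per the spec's 55-byte threshold.
// Payloads <= 55 bytes get a single-byte header (0x80 + len); longer
// payloads get 0xb7 + length-of-length, followed by the big-endian length.
fn rlp_string_header(payload_len: usize) -> Vec<u8> {
    if payload_len <= 55 {
        vec![0x80 + payload_len as u8]
    } else {
        // Big-endian length with leading zero bytes stripped.
        let len_bytes: Vec<u8> = payload_len
            .to_be_bytes()
            .iter()
            .copied()
            .skip_while(|&b| b == 0)
            .collect();
        let mut header = vec![0xb7 + len_bytes.len() as u8];
        header.extend_from_slice(&len_bytes);
        header
    }
}

fn main() {
    assert_eq!(rlp_string_header(2), vec![0x82]); // short form
    assert_eq!(rlp_string_header(56), vec![0xb8, 56]); // long form
    // A canonical decoder would reject a long-form header whose declared
    // length is <= 55, since the short form is the unique valid encoding.
}
```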
`ethereum-types` implements a bunch of conversions that hashes created with `uint` do not have, e.g. `ethereum_types::H256` can do `1.into()`.
If possible, add the same conversion facilities to `uint` and remove the manual impls from `ethereum-types`.
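For illustration, such a conversion might be sketched on a toy H256; placing the integer big-endian in the low-order (rightmost) bytes is an assumption about the intended semantics, not taken from the source:

```rust
// Toy sketch of a From<u64> conversion for a 32-byte hash type.
// Assumption: the integer lands big-endian in the last 8 bytes,
// so H256::from(1u64) ends in ...01.
#[derive(Debug, PartialEq, Eq)]
pub struct H256(pub [u8; 32]);

impl From<u64> for H256 {
    fn from(value: u64) -> Self {
        let mut bytes = [0u8; 32];
        bytes[24..].copy_from_slice(&value.to_be_bytes());
        H256(bytes)
    }
}

fn main() {
    let h: H256 = 1u64.into();
    assert_eq!(h.0[31], 1);
    assert_eq!(h.0[0], 0);
}
```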
Due to how we handle null nodes in `memorydb` (by calculating them on instantiation) we can't use `MemoryDB::default()` as a synonym for `MemoryDB::new()`.
Do one of three things:
- remove `#[derive(Default)]` and fix all code that uses `MemoryDB::default()`
- implement `Default` properly

The address of a contract can be deterministically calculated from the `from` address and the `nonce` of the transaction.
Currently we implemented this code ourselves in our codebase; however, since it is generally useful, we would like to contribute it back. Which of the crates in this repository would be best for this functionality?
The uint types have `as_u32`/`as_u64`/`as_u128` functions, but they panic on overflow. It would be helpful if `TryFrom`/`TryInto` were supported, so that the caller can decide what to do (for instance, saturate) instead of panicking.
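A sketch of what such a non-panicking conversion could look like, using a toy U256 with little-endian u64 limbs (the limb layout is an assumption for illustration, not the `uint` crate's API):

```rust
use std::convert::TryFrom;

// Toy 256-bit uint: limb 0 is least significant (little-endian limbs).
pub struct U256(pub [u64; 4]);

impl<'a> TryFrom<&'a U256> for u64 {
    type Error = &'static str;

    fn try_from(value: &'a U256) -> Result<Self, Self::Error> {
        // Overflow iff any limb above the lowest one is non-zero.
        if value.0[1..].iter().any(|&limb| limb != 0) {
            Err("integer overflow when casting to u64")
        } else {
            Ok(value.0[0])
        }
    }
}

fn main() {
    assert_eq!(u64::try_from(&U256([7, 0, 0, 0])), Ok(7));
    // The caller decides how to react, e.g. saturate instead of panic:
    let saturated = u64::try_from(&U256([0, 1, 0, 0])).unwrap_or(u64::max_value());
    assert_eq!(saturated, u64::max_value());
}
```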
There's a single test, and it is not currently run by travis (or by `cargo test`). Sort out the features and make sure the test runs. Adding tests would be good too.
Some reasons:
- `rustfmt` is one of the official Rust toolchain components; most Rustaceans know it.
- `rustfmt` has more flexible configuration options for the Rust language; you can even skip code that you don't want formatted by adding the `#[rustfmt::skip]` attribute.

Initially the code for big unsigned ints lived in this repo: https://github.com/paritytech/bigint. At some point we copied the bigint code into https://github.com/paritytech/primitives. The problem is that we still kept committing to the original bigint repo. At this point the code in the bigint repo has seen some optimizations and is faster (u128 for limbs, dropped all assembly), and they're forgotten there. Now that bigint has moved once again, we should try to merge the current `bigint` code into this crate (and deprecate the `bigint` repo).
A bit more controversial than #124 is removing the dependency on `ring`.
Related discussion: paritytech/substrate#2415
Add an implementation of the `kvdb` traits on top of the Web IndexedDB API: https://developer.mozilla.org/fr/docs/Web/API/API_IndexedDB
The bindings to IndexedDB are already present in the `web_sys` crate (https://docs.rs/web-sys/0.3.24/web_sys/struct.IdbFactory.html?search=Idb), and we should use that crate to access IndexedDB.
The use case for this is running Substrate (or parity-ethereum?) from within a browser.
I'm personally not very familiar with either `kvdb` or IndexedDB, so there might or might not be problems that I can't think of.
Also, the `uint` crate seems to be missing the `quickcheck` and `heapsize` features in Cargo.toml, even though the `heapsize` and `quickcheck` dependencies are optional.
About the macro blocks that implement `PartialEq` and `Ord` using libc:

```rust
#[cfg(all(feature = "libc", not(target_os = "unknown")))]
#[macro_export]
#[doc(hidden)]
macro_rules! impl_libc_for_fixed_hash { ... }
```

Question: is this feature (using `libc` instead of `core`) actually useful and used?
Thesis: the compiler should be able to do this optimization by itself.
@pepyakin said in another PR:
> hm, I was almost sure that `self.as_bytes() == other.as_bytes()`/`self.as_bytes().cmp(other.as_bytes())` should be lowered down to memcmp.
> Besides this, the existence of an OS isn't a requisite for using memcmp et al., since they are assumed to always exist by the compiler (or rather LLVM). In wasm32-unknown-unknown they're pulled from the compiler-builtins lib.
Todo:
Currently `triehash` is very tightly tied to the RLP encoding format. This issue is about untying that and making it generic over the encoding format, similarly to how `patricia-trie` uses the `NodeCodec` trait.
And keep in mind that this would be a backwards-incompatible change (even if we don't expose anything from ring, multiple versions of ring can't be put together in the same binary), so publish 0.3 instead of 0.2.1.
Currently, when importing a new transaction, if the pool is full (or adding the new tx would take it over the limit), we take the current worst transaction and call `should_replace` with it and the new tx. This user-defined implementation then determines whether to replace the worst transaction or reject the new one.
It's an optimisation to avoid the work of inserting the new transaction into the queue only to remove it if it is "worse" than any of the existing transactions. It also provides a special hook for users to define logic not covered by the scoring, e.g. considering readiness.
However, when the new transaction replaces an existing one for the same sender/nonce but with a better score, then arguably the transaction should just be added anyway and `should_replace` should not be called at all (since the new tx directly replaces an existing one, not increasing the pool size). So if we can detect this case, it will avoid pushing out the worst transaction unnecessarily.
See also @tomusdrw's TODO comment: https://github.com/paritytech/parity-common/blob/master/transaction-pool/src/pool.rs#L154.
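The proposed fast path can be sketched with toy types; all names below are hypothetical and not part of the transaction-pool API:

```rust
// Toy sketch of the proposed fast path: if the incoming tx targets the
// same (sender, nonce) as an existing entry, swap it in place when the
// score improves and skip the worst-transaction/should_replace logic,
// since the pool size does not grow.
#[derive(Debug, Clone, PartialEq)]
struct Tx {
    sender: u64,
    nonce: u64,
    score: u64,
}

fn import(pool: &mut Vec<Tx>, tx: Tx) -> bool {
    if let Some(existing) = pool
        .iter_mut()
        .find(|t| t.sender == tx.sender && t.nonce == tx.nonce)
    {
        // Direct replacement: no need to consult should_replace.
        if tx.score > existing.score {
            *existing = tx;
            return true;
        }
        return false;
    }
    // Otherwise fall back to the normal limit/should_replace path (elided
    // here: this sketch just inserts unconditionally).
    pool.push(tx);
    true
}

fn main() {
    let mut pool = vec![Tx { sender: 1, nonce: 0, score: 10 }];
    assert!(import(&mut pool, Tx { sender: 1, nonce: 0, score: 20 }));
    assert_eq!(pool.len(), 1); // replaced in place, pool did not grow
    assert_eq!(pool[0].score, 20);
}
```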
There are quite a few weird things in these APIs:
- We use strong typing for the hashing algorithm, e.g. `Digest<Sha256>` or `Digest<Sha512>`. However, when performing any operation we don't enforce any trait on the generic, which means that internally we always have to perform a runtime dispatch and always have to store the largest possible key size.
- Building a `Digest`, a `SigKey`, a `Signer` or a `VerifyKey` requires a key by reference, but the type then always owns its key, meaning that we always have to clone it. It would make sense to hold the key by reference, or use a `Cow`, or allow both.
- Building one of these types with the wrong slice length panics.
Polkadot uses transaction-pool in `extrinsics-pool`, which has minimal dependencies, so it's probably a good candidate for inclusion here.
The rust-crypto crate is completely dead and shouldn't be used.
In version 0.2.5 of `fixed-hash`, certain methods like `from_slice` were deprecated with the message "unconventional API, replaced by new_from_slice in version 0.3", but surprisingly the method `new_from_slice` doesn't exist yet.
I am curious what the reasoning was behind adding a deprecation (which causes a compile-time warning) without adding the suggested method at the same time.
This is a meta-issue related to multiple possible improvements:
- (1) moving all crypto dependencies to the same place
- (2) ability to run a test suite that is directly related to Parity use
- (3) ability to run benchmarks on different crates
- (4) traits over the main Parity crypto uses, and the ability to switch crates without too much effort (not meant to be a std crypto trait, more a convenient way to audit/check our crypto lib usage)
- (5) wasm32 compatibility/alternative
- (6) no_std compatibility/alternative

The following steps are proposed:
A previous PR covering the first three steps is at #80; I put it on ice in favor of smaller PRs. It contains lots of additional information.
In `#![no_std]` libraries, we need to use `alloc::vec::Vec` instead of `std::vec::Vec`, like `bitvec` did.
https://github.com/myrrlyn/bitvec
I'd be happy to create a PR for it. :)
There's already a `path` crate on crates.io.
fixed-hash v0.3.0 cannot define Hash constants directly, like:

```rust
pub const TEST_HASH: H256 = H256([0xc5, 0xd2, 0x46, 0x01, 0x86, 0xf7, 0x23, 0x3c, 0x92, 0x7e, 0x7d, 0xb2, 0xdc, 0xc7, 0x03, 0xc0, 0xe5, 0x00, 0xb6, 0x53, 0xca, 0x82, 0x27, 0x3b, 0x7b, 0xfa, 0xd8, 0x04, 0x5d, 0x85, 0xa4, 0x70]);
```

In version 0.2.5, the definition of the Hash structure is

```rust
pub struct $from (pub [u8; $size]);
```

but in version 0.3.0 it is

```rust
$visibility struct $name ([u8; $n_bytes]);
```

Why does the new version of `fixed-hash` change the public field in the Hash structure to a private one?
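The visibility question can be sketched in plain Rust; this is an illustration of the language rule, not the crate's actual API, and the `const fn` constructor shown is a hypothetical alternative:

```rust
// Sketch of why the field's visibility matters: a tuple struct can only be
// built with literal syntax inside a `const` if its field is visible at the
// use site, or if the type provides a `const fn` constructor instead.
pub mod hash {
    pub struct H256(pub [u8; 32]); // public field: const-constructible anywhere

    pub struct Private([u8; 32]); // private field: literal syntax fails outside

    impl Private {
        // Alternative that keeps the field private yet stays const-friendly.
        pub const fn new(bytes: [u8; 32]) -> Self {
            Private(bytes)
        }
    }
}

pub const TEST_HASH: hash::H256 = hash::H256([0u8; 32]);
pub const TEST_PRIVATE: hash::Private = hash::Private::new([0u8; 32]);

fn main() {
    assert_eq!(TEST_HASH.0.len(), 32);
    assert_eq!(TEST_HASH.0[0], 0);
}
```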
Currently in `kvdb-rocksdb` the memory budget is split evenly across all column families. In reality, many use cases store data very unevenly across CFs; e.g. in parity-ethereum probably 90% of the data lives in the state CF.
Make the memory budget distribution configurable or settable on a per-CF basis.
When RLP-decoding a list, the list length specified in the RLP data is not enforced as a hard limit when decoding its content.
For example, a list containing the 2-byte string "ab" (i.e. `["ab"]`) is encoded as `c3 82 xx xx` ("list with 3 bytes of payload", then "string of 2 bytes", then the string content), but RLP such as `c1 82 xx xx` ("list with 1 byte of payload", then "string of 2 bytes", then the string content) is accepted and decoded the same way.
I am opening this as an issue and not as a PR since I'm not 100% sure there is harm in accepting such malformed RLP, but reading the code it seems this lib tries to reject non-canonical encodings in general.
FYI, here is a set of tests that showcase a few problematic cases (marked as `// fails`):

```rust
#[test]
fn test_inner_length_capping_for_short_lists() {
    assert_eq!(Rlp::new(&vec![0xc0 + 0, 0x82, b'a', b'b']).val_at::<String>(0), Err(DecoderError::RlpInconsistentLengthAndData)); // fails
    assert_eq!(Rlp::new(&vec![0xc0 + 1, 0x82, b'a', b'b']).val_at::<String>(0), Err(DecoderError::RlpInconsistentLengthAndData)); // fails
    assert_eq!(Rlp::new(&vec![0xc0 + 2, 0x82, b'a', b'b']).val_at::<String>(0), Err(DecoderError::RlpInconsistentLengthAndData)); // fails
    assert_eq!(Rlp::new(&vec![0xc0 + 3, 0x82, b'a', b'b']).val_at::<String>(0), Ok("ab".to_owned())); // ok
    assert_eq!(Rlp::new(&vec![0xc0 + 4, 0x82, b'a', b'b']).val_at::<String>(0), Err(DecoderError::RlpIsTooShort)); // ok
}

#[test]
fn test_inner_length_capping_for_long_lists() {
    assert_eq!(Rlp::new(&vec![0xf7 + 1, 0, 0x82, b'a', b'b']).val_at::<String>(0), Err(DecoderError::RlpDataLenWithZeroPrefix)); // ok
    assert_eq!(Rlp::new(&vec![0xf7 + 1, 1, 0x82, b'a', b'b']).val_at::<String>(0), Err(DecoderError::RlpInconsistentLengthAndData)); // fails
    assert_eq!(Rlp::new(&vec![0xf7 + 1, 2, 0x82, b'a', b'b']).val_at::<String>(0), Err(DecoderError::RlpInconsistentLengthAndData)); // fails
    assert_eq!(Rlp::new(&vec![0xf7 + 1, 3, 0x82, b'a', b'b']).val_at::<String>(0), Ok("ab".to_owned())); // ok
    assert_eq!(Rlp::new(&vec![0xf7 + 1, 4, 0x82, b'a', b'b']).val_at::<String>(0), Err(DecoderError::RlpIsTooShort)); // ok
}
```
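One way a decoder could enforce the declared length is to cap the payload slice to the header's length, so inner items that read past it hit the end of the payload instead of silently borrowing bytes from outside the list. A minimal sketch for short-form lists only (toy parser, not the `rlp` crate; error strings are illustrative):

```rust
// Toy short-list parser that hard-caps the payload at the declared length.
fn decode_short_list(data: &[u8]) -> Result<&[u8], &'static str> {
    let header = *data.first().ok_or("empty input")?;
    // Short-form list headers are 0xc0..=0xf7 (payload length 0..=55).
    if !(0xc0..=0xf7).contains(&header) {
        return Err("not a short list");
    }
    let declared = (header - 0xc0) as usize;
    if data.len() - 1 < declared {
        return Err("RlpIsTooShort");
    }
    // Hard cap: anything beyond `declared` bytes is outside this list, so a
    // canonical decoder reading an inner item would fail at the boundary.
    Ok(&data[1..1 + declared])
}

fn main() {
    // c3 82 'a' 'b': declared length 3 covers the whole inner string.
    assert_eq!(decode_short_list(&[0xc3, 0x82, b'a', b'b']), Ok(&[0x82, b'a', b'b'][..]));
    // c1 82 'a' 'b': declared length 1 truncates the inner string, so
    // decoding it would hit the end of the capped payload and error out.
    assert_eq!(decode_short_list(&[0xc1, 0x82, b'a', b'b']), Ok(&[0x82][..]));
}
```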
Hey guys,
I was playing around with Parity tech and was wondering how hard it would be to add support for big-endian and 32-bit architectures such as MIPS/MIPS64?