
parity-common's Issues

[RLP] Unexpected behavior when encoding nested lists with RLP

I have the following data structures defined as Encodable and Decodable:

use ethereum_types::H256;
use rlp::{Decodable, DecoderError, Encodable, Rlp, RlpStream};

#[derive(Debug, PartialEq, Eq)]
struct Inner(H256, H256);

impl Encodable for Inner {
	fn rlp_append(&self, s: &mut RlpStream) {
		s.begin_unbounded_list()
			.append(&self.0)
			.append(&self.1)
			.complete_unbounded_list();
	}
}

impl Decodable for Inner {
	fn decode(rlp: &Rlp) -> Result<Self, DecoderError> {
		Ok(Inner(rlp.val_at(0)?, rlp.val_at(1)?))
	}
}

#[derive(Debug, PartialEq, Eq)]
struct Nest(Vec<Inner>);

impl Encodable for Nest {
	fn rlp_append(&self, s: &mut RlpStream) {
		s.begin_list(1)
			.append_list(&self.0);
	}
}

impl Decodable for Nest {
	fn decode(rlp: &Rlp) -> Result<Self, DecoderError> {
		Ok(Nest(rlp.list_at(0)?))
	}
}

An encoding-decoding roundtrip of Nest fails like so:

Test:

#[test]
fn test_nested_list_roundtrip() {
	let items = (0..8).map(|_| Inner(H256::random(), H256::random())).collect();
	let nest = Nest(items);

	let encoded = rlp::encode(&nest);
	let decoded = rlp::decode(&encoded).unwrap();

	assert_eq!(nest, decoded)
}

Error:

thread 'test_nested_list_roundtrip' panicked at 'assertion failed: `(left == right)`
  left: `Nest([Inner(0x347162d6b27b8f60eab7510e7c5f1a13cf8cf02128d4e0a75811bb6b21f924c1, 0x8a8681a07c0093db405e3462fcf77050f43b3ae8045494659d843e7d4c68c3dd), Inner(0x7b226f962317f5c23ed52c8920f22496014fba993aac53f1abefb38798a34aa0, 0x161de16522b077c1f7d05a369228c4ff2dceed19a5a70cc95dae7318ef273ac1), Inner(0x4c2c46c6f104296d554d2d46d7d87df9a7bd6096ea1ff3e46047378182507dc2, 0x633bbd4c3d50ef2cf2547659d4c5e77cbabd432f61461eb907352461872cf64f), Inner(0x0f07c3990ef3206c14bf5f7c24bb67fbae003aab039fbbcb11e01c9db8827376, 0x51e3c48e3be8e6ff7006f7d38d13c0f63371894df645a5321fbe2bda93fce29f), Inner(0x2c38e80703a49398c1972b43b8d3586e6fabbd8c8c9821ac02147c70fd790330, 0x1250dce35e0f998535255d7f9f442d945afe547adebbbaf71b38fa490171af2f), Inner(0xc8bf2de05f9601109e0de4a6a979ae7ccda4563d024b4f333a7187d05f0a7971, 0xee98d63d812605661a868dfccf508fc15cefcc4825c87c1883a43867f2c00a47), Inner(0x002c77234f46917656e6ef2d03d62baf066b8bb87cb4145a1a485f86fe626e53, 0x031f58ed89b8b8b7660772a2a7c8669e9ee6298d09f7edc09e0a7f0a554e7436), Inner(0x85425be5867ca860f1f568a08f0d13dd8a70bdb8ade9db1d0cc8e65899a4a6d9, 0x17d50a0baf5975038eaec5667cb4df6f1c49c5bf34c0897ed59559972a3a5554)])`,
 right: `Nest([Inner(0x347162d6b27b8f60eab7510e7c5f1a13cf8cf02128d4e0a75811bb6b21f924c1, 0x8a8681a07c0093db405e3462fcf77050f43b3ae8045494659d843e7d4c68c3dd), Inner(0x7b226f962317f5c23ed52c8920f22496014fba993aac53f1abefb38798a34aa0, 0x161de16522b077c1f7d05a369228c4ff2dceed19a5a70cc95dae7318ef273ac1), Inner(0x4c2c46c6f104296d554d2d46d7d87df9a7bd6096ea1ff3e46047378182507dc2, 0x633bbd4c3d50ef2cf2547659d4c5e77cbabd432f61461eb907352461872cf64f), Inner(0x0f07c3990ef3206c14bf5f7c24bb67fbae003aab039fbbcb11e01c9db8827376, 0x51e3c48e3be8e6ff7006f7d38d13c0f63371894df645a5321fbe2bda93fce29f)])`', tests/tests.rs:546:2
note: Run with `RUST_BACKTRACE=1` for a backtrace.

It seems like the rlp crate only encodes half of the nested list (4 items of nest.0 in this case instead of 8) when the full list is expected.

Changing the initial number of objects in nest.0 to any other number yields the same result: only half of the vec is encoded.
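
For comparison, here is the same Inner encoder written with a bounded list instead of the unbounded-list API (a sketch; the element count is known up front, so begin_list(2) suffices). Checking whether the roundtrip still fails with this variant would help narrow the bug down to the unbounded-list bookkeeping:

impl Encodable for Inner {
	fn rlp_append(&self, s: &mut RlpStream) {
		// Bounded variant: the element count (2) is declared up front.
		s.begin_list(2)
			.append(&self.0)
			.append(&self.1);
	}
}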

Build error in triehash 0.1.2: expected ethereum_types::H256, found hash::H256

github.com-1ecc6299db9ec823/triehash-0.1.2/src/lib.rs:141:2

138 | fn gen_trie_root<A: AsRef<[u8]>, B: AsRef<[u8]>>(input: &[(A, B)]) -> H256 {
    |                                                                        ---- expected `ethereum_types::H256` because of return type
...
141 |     keccak(stream.out())
    |     ^^^^^^^^^^^^^^^^^^^^ expected struct `ethereum_types::H256`, found struct `hash::H256`
    |
    = note: expected type `ethereum_types::H256`
               found type `hash::H256`

In other words, two distinct H256 types end up in the build (ethereum_types::H256 vs the older hash::H256), most likely because the versions of the hash crates that triehash 0.1.2 resolves against have drifted apart; updating triehash or pinning compatible versions of ethereum-types/keccak-hash should fix the build.

Migrate all crates to 2018 edition

cc #140 (comment)

  • ethbloom
  • ethereum-types
  • fixed-hash
  • keccak-hash
  • kvdb
  • kvdb-memorydb
  • kvdb-rocksdb
  • parity-bytes
  • parity-crypto
  • parity-path
  • parity-util-mem
  • plain_hasher
  • primitive-types
  • trace-time
  • transaction-pool
  • triehash
  • uint
  • impl-codec (in primitive-types/impls)
  • impl-rlp (in primitive-types/impls)
  • impl-serde (in primitive-types/impls)

Add AsRef<[u64]> trait + impl to all U-type integers

H-types (such as H256) created through the construct_fixed_hash macro implement the AsRef<[u8]> trait to give access to the internal data, but U-types (such as U256) do not implement a similar trait (such as AsRef<[u64]>) to give access to their internal data.

Please note that there seems to be an inconsistency in how uint and fixed-hash use the AsRef trait, which may be the source of this issue:

impl AsRef<[u8]> for $name {
	#[inline]
	fn as_ref(&self) -> &[u8] {
		self.as_bytes()
	}
}

impl AsRef<$name> for $name {
	fn as_ref(&self) -> &$name {
		&self
	}
}
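
A minimal sketch of the requested impl, written against construct_uint's internal representation (an array of u64 limbs; $name follows the macro's conventions and the exact shape is illustrative):

impl AsRef<[u64]> for $name {
	#[inline]
	fn as_ref(&self) -> &[u64] {
		// The macro stores the integer as a fixed array of u64 limbs,
		// so borrowing the inner array as a slice exposes the raw data.
		&self.0
	}
}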

Switch to `rust-rocksdb` in `kvdb-rocksdb`

kvdb-rocksdb currently uses our own fork of rust-rocksdb. The fork has diverged quite a bit. We should switch back to upstream to get the latest updates. I started doing this in: 616b401

There are some TODOs on what is missing. We also need to contribute some functionality from our fork back to upstream.

expected `&[u8]`, found ElasticArray32<u8>

rustc version: rustc 1.34.1 (fc50f328b 2019-04-24)

I'm trying to use kvdb-memorydb but it shows the above error (expected &[u8], found ElasticArray32<u8>) in the following code:

impl DBOp {
	/// Returns the key associated with this operation.
	pub fn key(&self) -> &[u8] {
		match *self {
			DBOp::Insert { ref key, .. } => key,
			DBOp::Delete { ref key, .. } => key,
		}
	}
}

Could you tell me how to fix it? Thanks!
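
One likely cause is a crate-version mismatch: if two different versions of elastic-array (or kvdb) end up in the dependency graph, the ElasticArray32<u8> in the key field is not the type the compiler expects, and only aligning the versions in Cargo.toml helps. If the types do match, slicing explicitly usually satisfies the coercion (a sketch, assuming ElasticArray32 dereferences to a byte slice):

pub fn key(&self) -> &[u8] {
	match *self {
		// &key[..] re-borrows the backing bytes as a plain slice.
		DBOp::Insert { ref key, .. } => &key[..],
		DBOp::Delete { ref key, .. } => &key[..],
	}
}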

[parity-crypto] doc comments and updating some dependencies

I've been working with @tomusdrw on adding a pure-Rust feature switch to ethsign, which inevitably led to developing a drop-in replacement for parity-crypto that does not depend on ring or other C dependencies (those are a massive pain in the neck when compiling to Android/iOS).

This is quite a chore, and it's doubly troublesome as the crate lacks documentation (which came up during the PR review).

Also, while porting stuff, I found that the scrypt implementation uses the deprecated rust-crypto crate instead of the standalone scrypt crate. It might be worth upgrading that, along with switching some of the ring usage to pure-Rust alternatives where the performance regression for doing so is either tolerable or not very relevant (aes would be a good example).

I think putting ring behind a feature flag (disabling it would also disable all parts of parity-crypto that depend on it, leaving only Rust dependencies) would be ideal for long-term maintainability.

[uint] All ops need a benchmark

Audit the current uint benchmarks and make sure all operations have at least a basic benchmark for the common U256 and U512 types, so we have a baseline against which we can evaluate improvements.
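
For reference, a minimal benchmark of this shape would do as a starting point (a sketch assuming criterion as a dev-dependency; the operation and operand values are arbitrary):

use criterion::{black_box, criterion_group, criterion_main, Criterion};
use ethereum_types::U256;

fn u256_add(c: &mut Criterion) {
	let a = U256::from(u64::max_value());
	let b = U256::from(7u64);
	c.bench_function("u256_add", |bench| {
		// black_box prevents the compiler from constant-folding the operation.
		bench.iter(|| black_box(a).overflowing_add(black_box(b)))
	});
}

criterion_group!(benches, u256_add);
criterion_main!(benches);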

[fixed-hash] Figure out uses of quickcheck support

We have lost track of the rationale and reasoning behind the quickcheck crate support in fixed-hash.

This is the tracking issue for finding out the current uses of it in the ecosystem and, if there are none, removing the corresponding code from fixed-hash.

Where is U256?

In the uint crate there is the following claim:

Also provides commonly used U128, U256 and U512 out of the box.

I don't see any of these defined in the crate, so I'm not sure whether I'm misunderstanding what is supposed to be provided. I would've expected to see a few construct_uint! { pub struct N(N/64) } invocations in the repo.
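
For reference, this is what such invocations look like; the limb count is in 64-bit words (a sketch assuming the 2018-edition macro import; in practice these aliases have lived in ethereum-types/primitive-types rather than in uint itself):

use uint::construct_uint;

construct_uint! {
	pub struct U128(2); // 2 × 64-bit limbs = 128 bits
}
construct_uint! {
	pub struct U256(4); // 4 × 64-bit limbs = 256 bits
}
construct_uint! {
	pub struct U512(8); // 8 × 64-bit limbs = 512 bits
}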

RLP decoding of non-canonical length encodings

According to the RLP spec, RLP elements (strings or lists) whose payload is longer than 55 bytes must use the long encoding, and elements with a payload of 55 bytes or fewer must use the short encoding.

It is currently possible to decode RLP elements of length <= 55 that use the long encoding without any error being raised.

I am not sure there is harm in this, but it means we can have 2 different binary RLPs with exactly the same semantics, which seems a bit fishy in some situations.

Some failing examples:

#[test]
fn test_canonical_string_encoding() {
	assert_ne!(Rlp::new(&vec![0xc0 + 4, 0xb7 + 1, 2, b'a', b'b']).val_at::<String>(0),
			   Rlp::new(&vec![0xc0 + 3, 0x82, b'a', b'b']).val_at::<String>(0)); // fails

}

#[test]
fn test_canonical_list_encoding() {
	assert_ne!(Rlp::new(&vec![0xc0 + 3, 0x82, b'a', b'b']).val_at::<String>(0),
			   Rlp::new(&vec![0xf7 + 1, 3, 0x82, b'a', b'b']).val_at::<String>(0)); // fails

}

[uint] extend macro with conversion utilities

ethereum-types implements a bunch of conversions that types created with the uint macros do not have, e.g. ethereum_types::H256 can do 1.into().
If possible, add the same conversion facilities to uint and remove the manual impls from ethereum-types.

MemoryDB can't derive Default

Due to how we handle null nodes in memorydb (by calculating them on instantiation), we can't use MemoryDB::default() as a synonym for MemoryDB::new().

Do one of three things:

  1. fix the null node instantiation hack (come talk to me or @rphmeier first)
  2. remove #[derive(Default)] and fix all code that uses MemoryDB::default()
  3. implement Default properly (see the sketch below)
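
A minimal sketch of option 3, delegating to new() so the null-node setup always runs (MemoryDB's generic parameters are elided for brevity):

impl Default for MemoryDB {
	fn default() -> Self {
		// Routing through new() guarantees the null node is instantiated.
		MemoryDB::new()
	}
}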

Calculate contract address from `from` address + `nonce`

The address of a contract can be deterministically calculated from the from address and the nonce of the transaction.

Currently we implement this ourselves in our codebase, but since it is generally useful, we would like to contribute it back. Which of the crates in this repository would be the best fit for this functionality?
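
For reference, the standard derivation is keccak256(rlp([sender, nonce]))[12..]. A sketch of how that could look using crates from this repo (assuming rlp, keccak-hash and ethereum-types; names are illustrative):

use ethereum_types::{Address, H256, U256};
use keccak_hash::keccak;
use rlp::RlpStream;

/// Derive the address of a contract created by `sender` with `nonce`.
fn contract_address(sender: &Address, nonce: &U256) -> Address {
	let mut stream = RlpStream::new_list(2);
	stream.append(sender);
	stream.append(nonce);
	let hash: H256 = keccak(stream.out());
	// The contract address is the low 20 bytes of the hash.
	let bytes: &[u8] = hash.as_ref();
	Address::from_slice(&bytes[12..])
}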

Support `TryFrom/TryInto` for uint types

uint types have as_u32/as_u64/as_u128 functions, but they panic on overflow. It would be helpful if TryFrom/TryInto were supported, so that the caller can decide what to do (for instance saturate) instead of panicking.
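
A sketch of what the requested conversion could look like inside the macro (hypothetical; the error type and exact shape are illustrative):

use core::convert::TryFrom;

impl TryFrom<$name> for u64 {
	type Error = &'static str;

	fn try_from(value: $name) -> Result<u64, Self::Error> {
		// Reject anything that doesn't fit instead of panicking.
		if value > $name::from(u64::max_value()) {
			Err("integer overflow when converting to u64")
		} else {
			Ok(value.low_u64())
		}
	}
}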

[parity-util-mem] Add more tests

There's a single test and it is not currently run by Travis (or by cargo test). Sort out the features and make sure the test runs.

Adding more tests would be good too.

[Suggestion] use `rustfmt` to maintain code format consistency

Some reasons:

  1. rustfmt is one of the official Rust toolchain components; most Rustaceans know it.
  2. rustfmt has flexible configuration options for the Rust language; you can even skip code you don't want formatted by adding the #[rustfmt::skip] attribute.
  3. It's easy to configure format checking in CI; it only takes a few extra seconds to check the code format there, which I think is friendlier to contributors.

Merge uint and bigint

Initially the code for big unsigned ints lived in this repo: https://github.com/paritytech/bigint. At some point we copied the bigint code into https://github.com/paritytech/primitives. The problem is that we still kept committing to the original bigint repo, so by now the code there has seen some optimizations and is faster (u128 for limbs, all assembly dropped), and those improvements are stranded. Now that the code has moved once again, we should merge the current bigint code into this crate (and deprecate the bigint repo).

Add an implementation of kvdb on top of IndexedDB

Add an implementation of the kvdb traits on top of the Web IndexedDB API: https://developer.mozilla.org/fr/docs/Web/API/API_IndexedDB

The bindings to IndexedDB are already present in the web_sys crate (https://docs.rs/web-sys/0.3.24/web_sys/struct.IdbFactory.html?search=Idb), and we should use this crate to access IndexedDB.

The use-case for this is running Substrate (or parity-ethereum?) from within a browser.

I'm personally not very familiar with either kvdb or IndexedDB, so there may be problems that I can't think of.

[fixed-hash] Reconsider libc crate feature support

This is about the macro blocks that use libc to implement PartialEq and Ord:

#[cfg(all(feature = "libc", not(target_os = "unknown")))]
#[macro_export]
#[doc(hidden)]
macro_rules! impl_libc_for_fixed_hash { ... }

Question: Is this feature (using libc instead of core) actually useful and used?
Thesis: The compiler should be able to do this optimization by itself.

@pepyakin said in another PR (link):

hm, I was almost sure that self.as_bytes() == other.as_bytes()/self.as_bytes().cmp(other.as_bytes()) should be lowered down to memcmp.

Besides this, the existence of an OS isn't a prerequisite for using memcmp et al., since the compiler (or rather LLVM) assumes they always exist. On wasm32-unknown-unknown they're pulled in from the compiler-builtins lib.

Todo:

  • Make some benchmarks to test the above thesis.
  • If benchmarks are in favor of the compiler doing this, remove the libc feature and code surrounding it.

Update parity-crypto to ring v0.14

Keep in mind that this is a backwards-incompatible change (even if we don't expose anything from ring, multiple versions of ring can't be linked into the same binary), so publish 0.3 instead of 0.2.1.

[tx-pool] don't remove worst when adding replacement tx

Currently, when importing a new transaction, if the pool is full (or adding the new tx would take it over the limit), we take the current worst transaction and call should_replace with it and the new tx. This user-defined implementation then determines whether to replace the worst transaction or reject the new one.

It's an optimisation that avoids the work of inserting the new transaction into the queue only to remove it again if it is "worse" than all of the existing transactions. It also provides a hook for users to implement logic not covered by the scoring, e.g. considering readiness.

However, when the new transaction replaces an existing one for the same sender/nonce but with a better score, then arguably it should just be added and should_replace should not be called at all (since the new tx directly replaces an existing one and does not increase the pool size).

So if we can detect this case, we avoid pushing out the worst transaction unnecessarily.

See also: @tomusdrw's TODO comment: https://github.com/paritytech/parity-common/blob/master/transaction-pool/src/pool.rs#L154.
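
A self-contained sketch of the proposed early-out (all names and types are illustrative stand-ins, not the crate's actual internals):

use std::collections::BTreeMap;

// Stand-ins for the real sender/nonce/score types.
type Sender = u64;
type Nonce = u64;
type Score = u64;

#[derive(Clone)]
struct Tx { sender: Sender, nonce: Nonce, score: Score }

struct Pool {
	max_len: usize,
	txs: BTreeMap<(Sender, Nonce), Tx>,
}

impl Pool {
	// Placeholder for the user-defined policy the issue describes.
	fn should_replace(&self, worst: &Tx, new_tx: &Tx) -> bool {
		new_tx.score > worst.score
	}

	fn import(&mut self, new_tx: Tx) -> Result<(), &'static str> {
		let key = (new_tx.sender, new_tx.nonce);
		// Proposed early-out: a same-sender/nonce replacement never grows
		// the pool, so should_replace need not be consulted at all.
		let is_replacement = self.txs.contains_key(&key);

		if !is_replacement && self.txs.len() >= self.max_len {
			let worst = self
				.txs
				.values()
				.min_by_key(|tx| tx.score)
				.cloned()
				.ok_or("pool has zero capacity")?;
			if !self.should_replace(&worst, &new_tx) {
				return Err("too cheap to enter");
			}
			self.txs.remove(&(worst.sender, worst.nonce));
		}
		self.txs.insert(key, new_tx);
		Ok(())
	}
}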

[parity-crypto] The API of digest and hmac has some issues

There are quite a few weird things in these APIs:

  • We use strong typing for the hashing algorithm, e.g. Digest<Sha256> or Digest<Sha512>. However, when performing any operation we don't enforce any trait on the generic, which means that internally we always have to perform runtime dispatch and always have to store the largest possible key size.

  • Building a Digest, a SigKey, a Signer or a VerifyKey requires a key by reference, but the type then always owns its key, meaning that we always have to clone it. It would make sense to hold the key by reference, or use a Cow, or allow both.

  • Building one of these types with the wrong slice length panics.
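
A sketch of the statically-dispatched direction the first point suggests (hypothetical trait and names, not the current API):

use core::marker::PhantomData;

/// Hypothetical algorithm trait: the bound the issue asks for.
trait Algo {
	const OUTPUT_LEN: usize;
	fn hash(input: &[u8], out: &mut [u8]);
}

struct Digest<A: Algo> {
	_marker: PhantomData<A>,
}

impl<A: Algo> Digest<A> {
	/// Statically dispatched: the algorithm (and its output size) is fixed
	/// at compile time, so no runtime branching is needed.
	fn output_len() -> usize {
		A::OUTPUT_LEN
	}
}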

Why are methods deprecated without providing the alternative?

In version 0.2.5 of fixed-hash, certain methods like from_slice were deprecated with the message "unconventional API, replaced by new_from_slice in version 0.3", but surprisingly the method new_from_slice doesn't exist yet.

I am curious what the reasoning was behind adding a deprecation (which causes a compile-time warning) without adding the suggested method at the same time.

[parity-crypto] Allow easier implementation switch and specific targets

This is a meta-issue covering multiple possible improvements:
- (1) moving all crypto dependencies to the same place
- (2) ability to run a test suite that is directly related to Parity's use
- (3) ability to run benchmarks on different crates
- (4) traits over the main Parity crypto uses, and the ability to switch crates without too much effort (not meant to be a std crypto trait, more a convenient way to audit/check our crypto library usage)
- (5) wasm32 compatibility/alternative
- (6) no_std compatibility/alternative

The following steps are proposed:

  • #85 Remove rust-crypto. The rust-crypto crate is unmaintained and is in itself problematic for (5) and (6). Probably going for RustCrypto.
  • Add support for secp256k1 with traits. Contributes to (1) and (4).
  • Enable cross-compilation to the wasm32 target. Contributes to (5).
  • Make the crate no_std compatible. Contributes to (6).
  • Add further traits and tests/benches...

A previous PR covering the first three steps is at #80; I put it on ice in favor of smaller PRs. It contains lots of additional information.

fixed-hash: cannot define Hash struct, like 'H256', in constants directly

fixed-hash v0.3.0 cannot be used to define hash constants directly, like:

pub const TEST_HASH: H256 = H256([0xc5, 0xd2, 0x46, 0x01, 0x86, 0xf7, 0x23, 0x3c, 0x92, 0x7e, 0x7d, 0xb2, 0xdc, 0xc7, 0x03, 0xc0, 0xe5, 0x00, 0xb6, 0x53, 0xca, 0x82, 0x27, 0x3b, 0x7b, 0xfa, 0xd8, 0x04, 0x5d, 0x85, 0xa4, 0x70]);

In version 0.2.5, the definition of Hash structures is

pub struct $from (pub [u8; $size]);

but in version 0.3.0, the definition of Hash structure is

$visibility struct $name ([u8; $n_bytes]);

Why does the new version of fixed-hash change the public field in the hash structure to a private one?

[kvdb-rocksdb] configurable column memory budget

Currently in kvdb-rocksdb the memory budget is split evenly across all column families. In reality many use cases store data very unevenly across CFs; e.g. in parity-ethereum probably 90% of the data lives in the state CF.

Make the memory budget distribution configurable or settable on a per-CF basis, as sketched below.
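
A hypothetical shape for such a configuration (illustrative only; kvdb-rocksdb's actual DatabaseConfig differs):

use std::collections::HashMap;

pub struct DatabaseConfig {
	/// Total budget in MiB; today this is split evenly across columns.
	pub memory_budget: Option<usize>,
	/// Proposed: explicit per-column budgets (column index -> MiB) that
	/// take precedence over the even split for the listed columns.
	pub column_memory_budget: HashMap<u32, usize>,
}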

RLP decoding does not limit list content to specified list length

When RLP decoding a list, the list length as specified in the RLP data is not enforced as a hard limit when decoding its content.

For example, a 2-byte string in a list (e.g. ["ab"]) is encoded as c3 82 xx xx ("list with 3 bytes of payload", then "string of 2 bytes", then the string content), but RLP such as c2 82 xx xx ("list with 2 bytes of payload", then "string of 2 bytes", then the string content) is accepted and decoded the same way.

I am opening this as an issue and not as a PR since I'm not 100% sure there is harm in accepting such malformed RLP, but reading the code it seems this lib tries to reject non-canonical encodings in general.

FYI here is a set of tests that showcase a few problematic cases (marked as // fails):

#[test]
fn test_inner_length_capping_for_short_lists() {
	assert_eq!(Rlp::new(&vec![0xc0 + 0, 0x82, b'a', b'b']).val_at::<String>(0), Err(DecoderError::RlpInconsistentLengthAndData)); // fails
	assert_eq!(Rlp::new(&vec![0xc0 + 1, 0x82, b'a', b'b']).val_at::<String>(0), Err(DecoderError::RlpInconsistentLengthAndData)); // fails
	assert_eq!(Rlp::new(&vec![0xc0 + 2, 0x82, b'a', b'b']).val_at::<String>(0), Err(DecoderError::RlpInconsistentLengthAndData)); // fails
	assert_eq!(Rlp::new(&vec![0xc0 + 3, 0x82, b'a', b'b']).val_at::<String>(0), Ok("ab".to_owned())); // ok
	assert_eq!(Rlp::new(&vec![0xc0 + 4, 0x82, b'a', b'b']).val_at::<String>(0), Err(DecoderError::RlpIsTooShort)); // ok

}

#[test]
fn test_inner_length_capping_for_long_lists() {
	assert_eq!(Rlp::new(&vec![0xf7 + 1, 0, 0x82, b'a', b'b']).val_at::<String>(0), Err(DecoderError::RlpDataLenWithZeroPrefix)); // ok
	assert_eq!(Rlp::new(&vec![0xf7 + 1, 1, 0x82, b'a', b'b']).val_at::<String>(0), Err(DecoderError::RlpInconsistentLengthAndData)); // fails
	assert_eq!(Rlp::new(&vec![0xf7 + 1, 2, 0x82, b'a', b'b']).val_at::<String>(0), Err(DecoderError::RlpInconsistentLengthAndData)); // fails
	assert_eq!(Rlp::new(&vec![0xf7 + 1, 3, 0x82, b'a', b'b']).val_at::<String>(0), Ok("ab".to_owned())); // ok
	assert_eq!(Rlp::new(&vec![0xf7 + 1, 4, 0x82, b'a', b'b']).val_at::<String>(0), Err(DecoderError::RlpIsTooShort)); // ok
}

Support for 32 bits and big endian

Hey guys,

I was playing around with Parity tech and I was wondering how hard it would be to add support for big endian & 32 bit architectures such as MIPS/MIPS64?
