polkadot-fellows / rfcs

Proposals for change to standards administered by the Fellowship.

Home Page: https://polkadot-fellows.github.io/RFCs/

License: Creative Commons Zero v1.0 Universal


rfcs's Introduction

RFCs

This repository contains a number of Requests for Comment (RFCs) detailing proposed changes to the technical implementation of the Polkadot network. These RFCs serve both as a venue for the discussion and design of features submitted for consideration to the Polkadot developer Fellowship, and as targets for the Fellowship's on-chain bodies to signal approval or disapproval of.

The RFCs can be viewed here.

Scope

According to the Fellowship Manifesto, members of the Polkadot Fellowship are responsible for expertise in the strict description(s) and/or implementation(s) of these areas of contribution:

  • the internals of all functional Polkadot node implementations;
  • cryptographic data-structures, algorithms, languages and APIs required for the continued upkeep of the Polkadot (Main) Network;
  • consensus algorithms concerning the Relay-chain (BABE & GRANDPA);
  • trust-free bridges relying on said consensus algorithms (planned to be) utilised by system chains;
  • parachain consensus;
  • cross-chain message passing (XCMP, HRMP, DMP & UMP);
  • the Polkadot libp2p-based peer networking protocol;
  • the Polkadot topology strategies;
  • chain synchronisation strategies utilised by Polkadot;
  • the Polkadot business-logic (aka the 'runtime');
  • pallets utilised by the Polkadot (Main) Network and its system chains;
  • the internals of the frame pallet framework;
  • runtime and host APIs;
  • the XCM specification and realisation;
  • standard RPCs;
  • user-interface code required to practically execute upgrades to the Polkadot (Main) Network; and
  • code or technology required by, and utilised primarily for, any code or technology already included.

These RFCs are scoped to the subset of these concerns which must be held consistent across all implementations. Various implementation details, such as internal node algorithms, programming languages, or database formats, are out of scope. Non-exhaustively, changes to network protocol descriptions, runtime logic, runtime public interfaces, inherents, and transaction formats should be discussed via RFCs.

Significance

These RFCs are, in practice, only a signaling mechanism: they determine and indicate the Fellowship's design and architecture preferences, and they coordinate discussion and social consensus on architectures and designs according to open-source principles.

The Fellowship holds only the powers vested in it by Polkadot's governance, which are limited to the expression of expert opinion and the ability to move proposals to more lenient governance tracks when necessary. It is not an arbiter of the "correctness" of any particular runtime or node implementation, and the practical meaning of these RFCs follows as a consequence of its limited powers.

For any RFC concerning runtime logic or interfaces, the Fellowship's capabilities are bounded by relay-chain governance, which is the ultimate decider of what code is adopted for block processing. As such, these RFCs are only loosely binding - the chains' governance has no obligation to accept the features as implemented and may accept features which have not gone through the RFC process. When it comes to node-side areas of expertise, the Fellowship's vote is more strongly binding, as the governance systems of the chains can't determine the environment the runtime is executed within, and in practice all node implementations should conform to some foundational standards in order to communicate.

Merged RFCs are only an indication of support for a specific design, not a commitment to an implementation of a feature on any particular timeframe or roadmap ordering.

Process

The RFC process is open to all contributors. Anyone may open an RFC or provide comments on open RFCs.

To open an RFC, follow these steps:

  • Copy the 0000-template.md file into the text folder and rename it to match the title of the RFC.
  • Fill out the RFC template and open a PR.
  • Rename the file to correspond to the GitHub pull request number and update the "RFC PR" field in the file.

The Fellowship will decide, via an on-chain voting mechanism including members ranked III-Dan or above, when to approve and merge RFCs. It does so by issuing an on-chain remark with the body RFC_APPROVE(xxxx, h) from the Fellows origin on the Polkadot Collectives blockchain, where xxxx is the number of the RFC and h is the blake2-256 hash of the raw proposal text. Once this remark has been made, the PR can be merged. This on-chain process is designed to be resilient to where the RFCs are hosted and in what format, so hosting can be migrated away from GitHub in the future. The Fellowship should not approve more than one RFC with the same number.
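
For illustration, a minimal sketch of how such a remark body could be computed, assuming the sp_core (for blake2-256) and hex crates; the exact textual rendering of the hash and the padding of the number are assumptions, not the canonical on-chain format:

// Minimal sketch only; the canonical on-chain rendering of `h` and of the RFC number
// may differ from this illustration.
fn rfc_approve_remark(rfc_number: u32, raw_rfc_text: &[u8]) -> String {
    // blake2-256 of the raw proposal text, as described above.
    let h = sp_core::hashing::blake2_256(raw_rfc_text);
    format!("RFC_APPROVE({:04}, 0x{})", rfc_number, hex::encode(h))
}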

The Fellowship may also decide to reject an RFC by issuing a remark with the text RFC_REJECT(xxxx, h). This is a formality to provide clarity on when PRs (or their analogue on non-GitHub platforms) may be closed. PRs may also be closed by their author, or when sufficiently stale - after a period of one year without acceptance.

Problems, requirements, and descriptions in RFC text should be stated using the following definitions of terms, roughly as laid out in IETF RFC 2119:

  • The terms "MUST", "MUST NOT", "SHALL", "SHALL NOT", or "REQUIRED" mean that the requirement is fixed and must be adhered to by implementations. These statements should be limited to those required for interoperability and security.
  • The terms "SHOULD", "RECOMMENDED", "SHOULD NOT", or "NOT RECOMMENDED" mean that there are only limited valid circumstances in which a requirement may be ignored.
  • The terms "MAY" or "OPTIONAL" mean that the requirement is optional, though interoperability between implementations making different choices in this respect is required.

Bots

RFC Cron

The repository provides a bot for:

  • Proposing RFCs on chain as referenda to let the Fellowship vote on them. Such referenda can only be created by accounts that are part of the Fellowship.
  • Processing (merging or closing) the PR after the on-chain referendum gets confirmed.

To use the bot you need to write the following comment into a pull request:

/rfc (help|propose|process)

After a moment, the bot should reply with a comment containing further instructions on how to proceed.

Communication channels

The Fellowship uses Matrix for communication; right now there exist two channels.

rfcs's People

Contributors

alindima, bkchr, brenzi, bullrich, gavofyork, georgepisaltu, ggwpez, joepetrowski, jonasw3f, jsdw, poppyseeddev, rphmeier, rzadp, skunert, tomaka, vedhavyas, xlc


rfcs's Issues

Mild gossip reform

We've always kinda winged our gossip protocols, so they're both heavier than required, and maybe do not deliver the desired resilience.

We use different gossip in different places, but an important one being used looks like a grid plus a spammy layer. If I recall, the spammy layer always fires, even when the grid works, which sounds wasteful.

At an intuitive theory level @chenda-w3f and I have now reached some understanding of what should be the relevant properties. In brief..

  • randomized topologies improve resistance against eclipse attacks and similar, with
  • a locally randomized topology being somewhat better than a globally randomized one,
  • but local randomization gives a random regular graph with overlaps which make the gossip less efficient,
  • it's supposedly not much less efficient, but every time he quotes me numbers they sound quite noticeable,
  • and if you rerandomize a topology using global randomness, then rerandomizing faster helps more, I guess.

Asymptotically, if we have n nodes then a locally randomized topology with valency O(n^{1/k}) for k=2,3 should have "small" constant diameter d, but "small" is not the d=2,3 you get from k=2,3 respectively with a deterministic 2d or 3d grid topology. I've now forgotten what Chen said O(log n) valency yields, but maybe diameter O(log n).

We're largely lacking in concrete numbers here, although sometimes Chen reported various ones from the literature.

We suspect the diameter two given by the 2d grid topology winds up being overkill, so we'd suggest Polkadot pay some latency in exchange for a less spammy topology. We propose an experiment in which the gossip topology is defined by the union of two topologies:

  1. We define a 3d grid in which every validator sends to 3 n^{1/3} = 30 other validators. We deterministically re-sort all validators into this grid based upon a context-entropy under which the messages get sent. Although deterministic, we'll update the context-entropy faster for more sensitive protocols, so this helps more against eclipse there:

    • Approval system messages would set context-entropy to be the relay chain VRF output for the relay chain block being discussed.
    • Initially grandpa would set the context-entropy to be the epoch randomness, but maybe we could select something faster, maybe post-BEEFY.
  2. We send to n^{1/2} = 32ish random other validators selected using system randomness.

At present, our 2d grid sends to 2 n^{1/2} = 64 validators and our spammy layer sends to even more, so this reduces the total message attempts by whatever the spammy layer does.
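
For illustration only, here's a rough sketch of how a validator might compute its peer set under this proposal. It is std-only Rust with a stand-in hash shuffle and a caller-supplied randomness source; the real implementation would use proper cryptographic randomness and the actual validator-set types:

use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

type ValidatorIndex = usize;

/// Deterministically re-sort all validators by hashing (context_entropy, index), place them on a
/// 3d grid, and gossip along the three axis lines through our own cell (roughly 3 * n^(1/3) peers).
fn grid_neighbours(me: ValidatorIndex, n: usize, context_entropy: [u8; 32]) -> HashSet<ValidatorIndex> {
    let mut order: Vec<ValidatorIndex> = (0..n).collect();
    order.sort_by_key(|i| {
        let mut hasher = DefaultHasher::new();
        (context_entropy, *i).hash(&mut hasher);
        hasher.finish()
    });
    let side = (n as f64).cbrt().ceil() as usize;
    let pos = order.iter().position(|&i| i == me).expect("we are in the validator set");
    let (x, y, z) = (pos % side, (pos / side) % side, pos / (side * side));
    let mut peers = HashSet::new();
    for (p, &validator) in order.iter().enumerate() {
        let (px, py, pz) = (p % side, (p / side) % side, p / (side * side));
        // Sharing exactly two coordinates means we sit on the same grid line along one axis.
        let shared = (px == x) as u8 + (py == y) as u8 + (pz == z) as u8;
        if shared == 2 {
            peers.insert(validator);
        }
    }
    peers
}

/// Union of the deterministic grid neighbours and roughly n^(1/2) peers drawn from local
/// (system) randomness; `random_index` stands in for a real RNG.
fn gossip_peers(
    me: ValidatorIndex,
    n: usize,
    context_entropy: [u8; 32],
    mut random_index: impl FnMut() -> ValidatorIndex,
) -> HashSet<ValidatorIndex> {
    let mut peers = grid_neighbours(me, n, context_entropy);
    let target = peers.len() + (n as f64).sqrt().ceil() as usize;
    while peers.len() < target.min(n.saturating_sub(1)) {
        let candidate = random_index() % n;
        if candidate != me {
            peers.insert(candidate);
        }
    }
    peers
}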

We'll be adding hops here though, so TTL shall matter more, so we should discuss how TTL works in gossip now. I don't know yet.

I'd expect two development costs here: any TTL changes, of course, required even for experimental work, and all the plumbing required by the new context-entropy input. I think context-entropy being constant makes sense for testing though, which simplifies this considerably.

We'll have @chenda-w3f run some theory simulations of this topology before anyone starts implementation work. It'd be helpful if someone could comment here on what the existing topologies really look like so Chen can compare. We should think more about how much the context-entropy input helps too.

cc @rphmeier @eskimor @sandreim @heversonbr

Permissionless way to create HRMP channels between system parachains and other parachains

Almost all parachains will want to connect to one or more system parachains for various reasons, and system parachains are created to offer functionality to the relay chain and other parachains.

However, an HRMP channel is required before a non-system parachain can communicate with a system parachain, and opening one requires a governance proposal. This is slow, high-overhead, and unnecessary.

We should develop a pallet for system parachains that allows other parachains to permissionlessly create bidirectional HRMP channels with them. There may be some deposit requirements, etc., to ensure security. A rough interface sketch follows.
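
A hypothetical shape for such a pallet's interface, purely illustrative; the names, the deposit handling, and the relay-chain interaction are assumptions rather than an existing API:

/// Hypothetical interface for a system-parachain pallet that lets any parachain open a
/// bidirectional HRMP channel with it without governance. Illustrative only.
pub trait PermissionlessHrmpChannels {
    type ParaId;
    type Balance;

    /// Called (e.g. via an XCM `Transact` from the requesting parachain): reserve `deposit`
    /// from the caller's sovereign account, then request both directions of the HRMP channel
    /// on the relay chain on the caller's behalf.
    fn open_bidirectional_channel(
        requester: Self::ParaId,
        deposit: Self::Balance,
    ) -> Result<(), &'static str>;

    /// Close both directions and release the deposit back to the requester.
    fn close_bidirectional_channel(requester: Self::ParaId) -> Result<(), &'static str>;
}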

Define the responsibilities so we can evaluate salary payment requests

Related to #50

We need a way to define responsibilities for everyone, so that when evaluating salary payment requests everyone will have a clear understanding of what is acceptable and what is not.

e.g. we expect Rank 3 members to perform X amount of review work, Y amount of development, Z amount of support (to users or to other devs)

Eventually, the payout should be performance-based for obvious reasons, and that could replace the passive mode. For example, I personally will most likely be working part-time for the Fellowship, which may not qualify for a full salary but should pay more than the passive-mode payout.

Adding a `CoreIndex` commitment to candidate receipts

Starting this pre-RFC conversation to gather some feedback on some changes to the CandidateReceipt and CommittedCandidateReceipt primitives.

The necessary changes are:

  • add a commitment to a CoreIndex in CandidateCommitments
  • add a core_index: CoreIndex field to CandidateDescriptor

These are needed to remove the limitation of only using a trusted collator set for elastic-scaling parachains. Without a CoreIndex commitment in the candidate receipt it is possible for malicious collators to spam the relay chain by sending the same collation to all backing groups assigned to a parachain at a given relay chain block. In such a scenario elasticity is effectively disabled, as all backing groups would back the same candidate in parallel instead of multiple chained candidates.

The candidate receipt primitive is used across networking protocols, the Parachains Runtime, node implementations, collators and even tooling. Any breaking change here is very hard to deploy in practice without upgrading everything at the same time or breaking someone. So, the following is an approach that avoids breaking things but which might be considered hacky.

Please keep in mind that this is very important for the short/medium term in the context of the Elastic Scaling feature. As such, a proposal for a more flexible, backwards-compatible and future-proof format (allowing for more dramatic changes) is out of scope here, but it is otherwise something I am already working on.

Proposed approach

Changes in CandidateDescriptor:

  • reclaim 32 bytes from collator: CollatorId and 64 bytes from signature: CollatorSignature fields as reserved fields
  • use 4 bytes out of this reclaimed space for a new core_index: u32 field.
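
A purely illustrative sketch of the idea in the two bullets above; the field set and ordering here are not the real descriptor, only the byte accounting for the reclaimed space matters:

/// Illustrative only: the 32 bytes of `collator` plus 64 bytes of `signature` (96 bytes total)
/// become a reserved region, 4 bytes of which carry the new core index. Field names and
/// ordering here are hypothetical; the point is that the encoded length stays the same.
struct CandidateDescriptorV2Sketch {
    para_id: u32,
    relay_parent: [u8; 32],
    core_index: u32,     // 4 bytes carved out of the reclaimed space
    reserved: [u8; 92],  // remaining 92 reclaimed bytes, zeroed for future use
    // ... the rest of the existing descriptor fields remain untouched
}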

CandidateCommitments doesn't really need to be changed, but one idea is to define a special XCM instruction like CommitCoreIndex(u32) that is appended as the last message in the upward_messages vec. The message wouldn't ever be executed, but would just serve as a commitment to a specific CoreIndex.

I have also thought about appending an additional core_index: u32 field to the end of the structure, but that doesn't really seem to work because we compute the CandidateHash as hash(CandidateDescriptor, hash(CandidateCommitments)). Older, non-upgraded validator nodes, for example, would get a different commitment hash if they don't decode and hash the whole thing, and this would break consensus.

Any better and less hacky ideas especially regarding the XCM stuff would help me a lot.

mdBook fails to build if there is a link in the RFC title

If there is a Markdown link in the title of an RFC (like here), mdBook fails to build.

# RFC-0070: X Track for [@kusamanetwork](https://x.com/kusamanetwork)
[WARN] (mdbook::book::summary): Expected a start of a link, actually got Some(Text(Borrowed("[")))
[ERROR] (mdbook::utils): Error: Summary parsing failed for file="<snip>/SUMMARY.md"
[ERROR] (mdbook::utils):    Caused By: There was an error parsing the numbered chapters
[ERROR] (mdbook::utils):    Caused By: There was an error parsing the numbered chapters
[ERROR] (mdbook::utils):    Caused By: failed to parse SUMMARY.md line 9, column 3: The link items for nested chapters must only contain a hyperlink

Research: FRAME alternative based on WASM components

Here's a topic I've had in mind for quite some time that could evolve into an RFC:


Substrate uses the WebAssembly MVP version for its high performance, language agnosticism, security, and to support easy upgradability and interoperability of blockchains. It has proven to be a very successful format that keeps evolving (slowly), and we could leverage some of its newer features.

This initial version of WASM, due to its limited functionality, requires Substrate to define custom low-level interfaces between the host client and the runtime. The WebAssembly System Interface (WASI) preview served as an experiment in how WASM can run in contexts outside of the web browser, interfacing with a host environment through a well-defined interface. One outcome of that experiment is the component model, which allows separately compiled components to work as interoperable WASM libraries and enables cross-language composition. Interfaces are now defined with WIT, a language for defining contracts between components using higher-level type definitions.

Retrofitting this concept into FRAME and the Substrate client would be challenging; instead, we can explore using the component model as a new, alternative way to write runtime code. Some of the possibilities the component model could open up:

  • Upgradability: Splitting up a runtime into a tree of components can simplify the way runtimes are upgraded; instead of changing the entire :code, one can update only the small part of the runtime that changed (e.g. the config, or a "pallet").
  • Security: Instead of having every piece of the runtime share the same privileged space, we can define layers of privilege similar to how Unix systems define protection rings that have more or less access to certain resources/capabilities.
  • Developer experience: Cross-language composition allows critical parts of the runtime to be defined in Rust while higher-level "user land" code can be written in more developer-friendly languages with simple, opinionated APIs that allow for easier on-boarding and a faster development cycle.

Light clients and downloading block bodies

After having submitted a transaction, a light client generally wants to know when and where the transaction is included in the finalized chain. This is both for UI purposes and to know when to stop gossiping that transaction.

In order to know when a transaction is included, the light client needs to download the body of each block of the best chain and compare the transactions of the body with the one that it has submitted.

While this is not a problem right now, if block bodies become larger in the future, light clients might have to use a lot of bandwidth just to download block bodies. After all, block bodies are one of the scaling factors of the chain. The busier the chain, the bigger the block bodies.

While a light client could ask a full node to send back a proof that a transaction is in a certain block, the full node could simply lie and pretend that the transaction isn't there. There is no way to prove that a transaction is not in a certain block body without sending the entire block body.

I don't have a solution to that problem, which is why I'm just opening an issue.

Add a changes trie

This issue is a "pre-RFC". Writing a complete RFC would be time consuming (especially because it requires some technical knowledge about BEEFY that I'm currently lacking), so I would like to gather some feedback first.
The idea of making the changes trie asynchronous (explanations below) comes from @gavofyork.

Context

It is currently notoriously difficult to obtain for example the history of the balance of an account.
The only way to do this in a trustless way currently is to download each block of the entire chain one by one, and check if the balance has been modified by this block. Doing this is an extremely expensive operation that would likely take multiple days.

One elegant way to solve this problem is to store somewhere (explanations below) the history of the changes of each storage key. Doing this is very elegant as it automatically contains the history of everything that happened on the chain, and doesn't need any case-by-case handling by the various pallets.

In order to make it possible for light clients to access this history in a trustless way, the proposed method is to organize this history into a trie whose root hash is then made trustless. This trie is called the changes trie.

History

Substrate used to have a changes trie, but it was removed due to being too heavy on I/O operations. In other words, it was slowing down block production and verification too much.

In order to solve this problem, this RFC proposes to make validators construct the changes trie and then vote for it through the exact same mechanism as BEEFY votes for an MMR root.

In order to make it possible for validators to vote on a single changes trie root, this changes trie needs to cover the entire history of the chain, rather than only one block as was the case for the previous changes trie.

Details

Here is a list of the changes:

  • At block 0, the changes trie is filled with the same entries as the state trie, except that keys are modified to be concat(":current", key_of_state_trie) and that values are set to 0 (the block number where this key was last modified).
  • For each block between the BEEFY head and the BEEFY voting target, validators do this:
    • For each key of the state that has been newly added, add a changes trie entry to concat(":current", key_of_state_trie) and set its value to block_number.
    • For each key of the state that has been modified, set the changes trie entry at concat(block_number, key_of_state_trie) to the value at concat(":current", key_of_state_trie), then set the changes trie entry at concat(":current", key_of_state_trie) to block_number.
    • For each key of the state that has been removed, remove the changes trie entry at concat(":current", key_of_state_trie).
  • When validators perform their BEEFY voting, they also exchange the changes trie root hash of the block that they are voting for.
  • Once a block has been "BEEFY-finalized", in order to save space, validators can remove from their database all entries that start with block_number and keep only the key and hash of the subtree that they have removed. Archive nodes (as defined in #59) do not remove the subtree from their database.

In other words, the changes trie would contain a subtree of prefix :current, and one subtree for each block number. The subtree of prefix :current contains the latest block where each key of the state has last been modified, and the subtree of each block number contains, for each key, the previous block number where it has been modified.
This scheme makes it possible to progressively iterate down block numbers, looking up only blocks where the entry has effectively been modified.
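
A minimal sketch of the per-block update described above, with a stand-in Trie abstraction and naive key/value encodings (the real encodings are deliberately left open here):

/// Stand-in for whatever trie/key-value abstraction the validators maintain.
trait Trie {
    fn get(&self, key: &[u8]) -> Option<Vec<u8>>;
    fn insert(&mut self, key: Vec<u8>, value: Vec<u8>);
    fn remove(&mut self, key: &[u8]);
}

fn current_key(state_key: &[u8]) -> Vec<u8> {
    [b":current".as_ref(), state_key].concat()
}

fn block_key(block_number: u32, state_key: &[u8]) -> Vec<u8> {
    [&block_number.to_le_bytes()[..], state_key].concat()
}

/// Apply one block's worth of state changes to the changes trie, per the bullets above.
fn apply_block_changes(
    trie: &mut dyn Trie,
    block_number: u32,
    added: &[Vec<u8>],
    modified: &[Vec<u8>],
    removed: &[Vec<u8>],
) {
    for key in added {
        // Newly added keys only get a `:current` entry pointing at this block.
        trie.insert(current_key(key), block_number.to_le_bytes().to_vec());
    }
    for key in modified {
        // Record the previous modification block under this block's subtree, then bump `:current`.
        if let Some(previous_block) = trie.get(&current_key(key)) {
            trie.insert(block_key(block_number, key), previous_block);
        }
        trie.insert(current_key(key), block_number.to_le_bytes().to_vec());
    }
    for key in removed {
        trie.remove(&current_key(key));
    }
}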

The exact keys and encoding of values are all details that aren't important at the moment. The block number could be encoded in a way that makes it possible to strip entire chunks of block numbers from the database; for example, you could replace the changes of all blocks between 0 and 100000 with a single hash.

Drawbacks:

  • It's not possible for example to obtain the history of the balance of an account that has been removed. Maybe we could add, within the :current subtree and at various branches, the block number where an entry has last been removed.

Unresolved questions:

  • If the runtime modifies a value to the same value as it already had (i.e. does something like storage_set(key, storage_get(key))), do we update the changes trie or not?

As I'm not very familiar with BEEFY, I hope that what is suggested is possible.

Not covered by this issue is the fact that we would add a networking protocol to make it possible to query the content of the changes trie from archive nodes through the peer-to-peer network.

Bot: Notify people about new fellowship referenda

We should have some bot that notifies people about new fellowship referenda. The idea was that this bot might post to the fellowship channel or to some dedicated notifications channel. In a future version it would also be nice to ping people directly when an RFC they commented on is put up for voting.

CC @Bullrich @mordamax

[xcm-emulator] Make a generic `genesis` constructor method

Currently we write a separate genesis function for each parachain when creating the emulated test environments, e.g. cumulus/parachains/integration-tests/emulated/chains/parachains/bridges/bridge-hub-rococo/src/genesis.rs.
Much of this logic is repeated, but some is chain/environment specific. We can make a generic function, reducing code repetition and making this code more expressive.

We could do something like

trait RuntimeGenesisConfig {
    fn configure_storage(storage: &mut Storage);
}

Then we could write impl RuntimeGenesisConfig for BridgeHubRococoRuntime, for example.

And finally:

impl Storage {
    pub fn new_with_config<R: RuntimeGenesisConfig>() -> Self {
        let mut storage = Storage::default();
        R::configure_storage(&mut storage);
        storage
    }
}

Originally suggested here by @liamaharon

Bot for automatically merging approved RFCs / closing rejected RFCs

After RFCs are approved on chain, they can be merged directly. This should be done by a bot and should not require any manual intervention. The same applies for rejects.

The bot should monitor the fellowship referenda for referenda that want to enact a remark with RFC_APPROVE or RFC_REJECT. The exact process for how a referendum is executed is documented in the README. After the referendum is approved, the bot should take the appropriate action (merge for RFC_APPROVE, close for RFC_REJECT).

Governance parachain fallback

I think #32 makes sense for Kusama as written, but we've always envisioned Polkadot holding up even when finality stalls, just using longest chain. Approvals depend upon GRANDPA however, so parachains become completely insecure under longest chain, and even system parachains like governance cannot be trusted.

@AlistairStewart and I have discussed a few fallback models for this, but one simple-ish model goes like:

  • Governance depends upon beefy self proofs for relay parent, and hence messaging, not merely relay chain state. If grandpa stalls, then governance can still message the relay chain, but not anybody else.
  • We add a "manual execute" option by which governance can order 2+ full parachain blocks to be executed within the relay chain block, one of which itself communicates the "manual execute" command. This can advance the state of the governance chain, even without beefy proofs.

We do not, imho, require this complexity to deploy on Kusama, but we'd ideally do something along these lines before Polkadot; this supersedes paritytech/polkadot-sdk#1963.

Pre-RFC: XMS language (XCM Made Simple)

Context: This is an early stage idea to gather some feedback. It's a byproduct of the design of "SubeX", the next version of Sube that aims to be XCM centric (a simple/lightweight client that can (will) run anywhere).


XMS is (would be) a KDL-based, client-agnostic format/language for representing XCM instructions and extrinsics. It's a text-based document language with XML-like semantics, meant to be both machine- and human-readable. With XMS one can declare snippets of code that abstract low-level XCM instructions and later compose them into parametrisable scripts that are simple to read and write.

An idea of an XMS script, adapted from this Moonbeam example:

import "./prelude.xms" // import type declarations from other files/URLs

const "candidate" val="0x1234"

// Define custom calls that return an assembled extrinsic
// it can define custom arguments or accept children that are injected in a designated "slot"
cmd "stake" amount=null {
    call pallet="parachain_staking" x="delegate_with_auto_compound" {
        candidate="$candidate"
        amount="$amount"
        auto_compound=100
        // We can register extensions to evaluate values with custom types
        candidate_delegation_count=(query)"parachain_staking/candidate_info/$candidate | get delegation_count"
        candidate_auto_compounding_delegation_count=(query)"parachain_staking/auto_compounding_delegations/$candidate | len"
    }
}

// Declare a series of XCM instructions that can be parametrized with arguments 
// or extended children nodes that are injected in a "slot"
cmd "remote_stake" amount=null {
    xcm
        withdraw_asset {
             asset id="./pallet_instance(3)" fun=100_000_000_000_000_000
        }
        buy_execution fees="./pallet_instance(3)/$100_000_000_000_000_000" weight_limit="unlimited"
        transact \
            origin_kind="sovereign_account" \
            require_weight_at_most="(40_000_000_000,900_000)" \
            call=(cmd)"stake $amount | encode"
}

// scripts should call one or more definitions
remote_stake 500_e10

Basing XMS on KDL makes its implementation easy and client/language agnostic (libraries in multiple languages). It's a language with great syntax that makes it feel like you designed a special-purpose language, but at the end of the day it's just an XML-like document with nodes, arguments and children. Tools and clients can "expand" an XMS document into a concrete extrinsic or XCM ready to be signed/submitted, or generate code for clients or pallets.

A side effect of working on this format is having well-defined string representations of common types like MultiLocation.

I believe having a simple way to declare composable snippets of XCM is the best way to abstract it; maintaining libraries for specific languages with a limited amount of commonly used patterns doesn't scale well. Instead, letting the community come up with scripts that can be shared around can allow for better experimentation until we eventually land on a stable collection of "preludes" suitable for different use cases.

Impl Display for MultiLocation

On a number of occasions I've wanted to represent a MultiLocation in a short, URL-friendly way. (If something like it doesn't exist yet) I think we should have such a representation, e.g. similar to how a MultiAddress is displayed.

Before proposing an RFC it would be nice to hear some thoughts about possible ways to represent locations, or whether the feature is desired in the first place. The first format that naturally comes to mind, on a high level, is translating junctions into a URL-path-like format, with some prefix to represent parents (and version?), and perhaps an "absolute path" representation as well.

Starting with

struct MultiLocation {
	pub parents: u8,
	pub interior: Junctions,
}

We can do something like .N/(junction1)/(junctionX) where N is the number of parents and junctions repeat separated by a /. The "absolute path" can be to just omit the parents prefix and expect the first junction to be of type GlobalConsensus.

Next, with current v3 we have

pub enum Junction {
	Parachain(u32),
	AccountId32 { network: Option<NetworkId>, id: [u8; 32] },
	AccountIndex64 { network: Option<NetworkId>, index: u64 },
	AccountKey20 { network: Option<NetworkId>, key: [u8; 20] },
	PalletInstance(u8),
	GeneralIndex(u128),
	GeneralKey { length: u8, data: [u8; 32] },
	OnlyChild,
	Plurality { id: BodyId, part: BodyPart },
	GlobalConsensus(NetworkId),
}

These junctions can have a long form that simply converts the enum variant to a lowercase snake-case version (e.g. only_child); in case it's a tuple/struct variant we include the params wrapped in parentheses or square brackets (for structs we ignore the field names), i.e. parachain(1000), account_id32(polkadot/11111111111111111111111111111111), account_index64(none/1), plurality(legislative/fraction[6/10]).
Now, to actually make it short, what about using a short form by default where we replace the long words with abbreviations or sigils that act as aliases? When using a sigil, parentheses/brackets could be omitted(?).
Here are some ideas ...

parachain -> : // e.g. `:1000` 
account_id32 -> @ // e.g. `@polkadot/11111111111111111111111111111111` (plain base58, no SS58)
account_index64 -> ! // e.g. `!123`
account_key20 -> k20 // e.g. `k20(-/11111111111111111111)` (none replaced as `-`)
pallet_instance -> p // e.g. `p(99)`
general_index -> idx // e.g. idx(123)
general_key -> key // e.g. `key(2/1)`
only_child -> ~
plurality -> * // e.g. `*technical/voice`
global_consensus -> / // e.g. `/polkadot`
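
To make this concrete, here's a toy sketch of rendering the short form for a few junctions; it uses a simplified MultiLocation (a Vec<Junction> instead of the real Junctions type, and only a handful of variants):

use std::fmt;

// Toy types only, standing in for the real XCM primitives quoted above.
enum Junction {
    Parachain(u32),
    PalletInstance(u8),
    GeneralIndex(u128),
    OnlyChild,
}

struct MultiLocation {
    parents: u8,
    interior: Vec<Junction>,
}

impl fmt::Display for Junction {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Junction::Parachain(id) => write!(f, ":{id}"),
            Junction::PalletInstance(i) => write!(f, "p({i})"),
            Junction::GeneralIndex(i) => write!(f, "idx({i})"),
            Junction::OnlyChild => write!(f, "~"),
        }
    }
}

impl fmt::Display for MultiLocation {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // `.N` prefix carries the number of parents, junctions follow separated by `/`.
        write!(f, ".{}", self.parents)?;
        for junction in &self.interior {
            write!(f, "/{junction}")?;
        }
        Ok(())
    }
}

// Renders `.1/:1000/p(50)/idx(1984)` for one parent, Parachain(1000), PalletInstance(50), GeneralIndex(1984).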

For example purposes only let's imagine for a moment there is a xcm:// URI scheme to illustrate how several URL friendly multi-locations can be "rendered" together(also with multi-assets?) ...

xcm:///polkadot/:1002/*index(10)/at_least_proportion(10/100)?pay=.2/kusama/:1000/!42&asset=$./p(50)/idx(1984)+999999

Inquiry regarding scope for proposed Polkadot Provider API RFC

Hello Polkadot Fellows,

I am currently spearheading an initiative aimed at significantly enhancing the Polkadot ecosystem's infrastructure. Before I proceed with a detailed RFC submission, I seek your guidance to ensure that my proposal aligns with the scope of this repository.

Proposal Overview

The project that I'm currently working on involves developing a light-client first library, envisioned as a successor to PolkadotJS. A central aspect of this initiative is the creation of a standardized Polkadot Provider API. This API is designed to be library agnostic, setting a foundation for a diverse ecosystem of tools and applications.

Key Objectives

  • Establish a standardized, library-agnostic API for dApps to interact seamlessly with the Polkadot network.
  • Foster an ecosystem where various tools and applications can be developed around this standardized API.
  • Enhance interoperability and developer experience within the Polkadot ecosystem.

Specific Inquiries

  1. Language Specificity: The initial proposal is likely to be JavaScript/TypeScript-centric. However, I'm open to considering a more generic approach, possibly leveraging a JSON-RPC based API, to ensure broader applicability and inclusiveness. Would a language-specific proposal be suitable for this repository? If not, would a shift towards a more generic, protocol-based approach be more appropriate?

  2. Fit within the Current Scope: One of the key aspects of my proposal is the establishment of a simplified protocol that enables dApps to interact seamlessly with both the JSON-RPC endpoints provided by a node, and a set of functions (or JSON-RPC calls) designed for interfacing with keyring agents present in various environments. This dual interaction model aims to streamline and standardize the way dApps communicate with the Polkadot network.

However, it is not entirely clear to me whether this proposal, focusing on a standardized protocol for a node and keyring interactions, falls within the current scope of this repository. The proposal extends beyond mere feature enhancement, aiming to introduce a fundamental protocol layer that could serve as a cornerstone for future development within the Polkadot ecosystem. As such, I am seeking clarification on whether the scope of this repository encompasses the establishment of such foundational protocols.

Thank you for your consideration and I look forward to your valuable feedback.

Best regards,

Josep

Stale Nomination Reward Curve

I would like to start a conversation, pre-RFC, about an idea that was brought up at Sub0 Asia 2024.

The high level goal is to create some incentive / pressure for nominators to re-nominate in some regular frequency, with the hope that we can better identify the best validators for Polkadot at any given time.

The problem is that many nominators have stale nominations; that is, they nominated some of the OG validators back when the network first launched, and perhaps have not updated their nominations for months or years.

The NPoS process will better identify high quality validators, and allow new validators into the set, when nominators are updating their nominations.

A proposed solution to this problem is to create a decreasing reward curve on nominator payouts based on the last time they submitted nominations.

For example:

Imagine a simple curve which is flat at 1.0 until 6 months, then linearly decreases to 0 over the next 2 years.

A nominator will receive their full staking rewards for the first 6 months after their original nomination. However, after 6 months, if they do not submit a new nomination extrinsic, they will start to receive slightly lower rewards. For example, at 1.5 years since their original nomination, they will only receive 50% of their potential rewards.
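
As a minimal sketch of that illustrative curve (all parameters hypothetical and easily tunable):

// Minimal sketch of the illustrative curve above: full rewards for ~6 months, then a
// linear decline to zero over the following ~2 years. Parameters are placeholders.
const FLAT_PERIOD_DAYS: u64 = 183;  // ~6 months
const DECAY_PERIOD_DAYS: u64 = 730; // ~2 years

/// Returns the reward multiplier in [0.0, 1.0] given the days since the last `nominate` call.
fn stale_nomination_multiplier(days_since_nomination: u64) -> f64 {
    if days_since_nomination <= FLAT_PERIOD_DAYS {
        1.0
    } else {
        let into_decay = days_since_nomination - FLAT_PERIOD_DAYS;
        (1.0 - into_decay as f64 / DECAY_PERIOD_DAYS as f64).max(0.0)
    }
}

// e.g. at 1.5 years (~547 days) since the last nomination this yields roughly 0.5,
// matching the 50% example above.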

The first important detail to note is that this mechanism does not force anyone to do anything. It is simply an economic pressure to incentivize a behavior. I also want to note that this mechanism cannot force nominators to pick different validators. In fact, we might expect some nominators to pick the same exact set of validators every 6 months. This is okay, and allowed by this mechanism. We do NOT want to force the selection of random new validators when a nominator feels that a certain set of validators are best.

However, I believe this pressure to at least submit a new nomination extrinsic will be enough for people to evaluate whether their current selection can be improved. There are new dashboards and metrics every day telling us more about the quality of validators, and validators have recently had more duties to perform (with BEEFY, parachains, etc.). My guess is that some pressure like this would be a strong net benefit to the security and quality of the network.

Second to note is that this curve should be easily configurable. I suggest the values above just for the sake of illustration. Whether we tune the curve to start decreasing at 3 months or 6 months, whether we tune the curve to go all the way to 0 or stop at .5, and how long it takes to get there are all things which can be easily updated. The curve also need not be linear. Thankfully, with the OpenGov code, I believe all the logic needed for many curves is already part of the Polkadot-SDK. I think my suggested curve is not so bad as a starting point, but I would not push back on less or more aggressive configurations.

Finally, I want to note that I believe this feature adds little overhead to the staking system, and can be easily implemented in a fully scalable way. The only things we need to introduce here are:

  • An additional field in the storage of nominators which tracks the block number the last time they submitted a nomination.
    • The additional value will be a field on an existing storage, not a new storage, thus the overhead to weights and benchmarks is nominal.
    • This value will be updated when they next call nominate, and requires no changes to the end user experience.
  • A static and configurable nomination reward curve that defines a global behavior for all nominators.
  • A single lightweight calculation, during payouts, which will adjust the final payout to a nominator based on their last nomination block and the nomination reward curve.

A potential migration would be needed to update the storage of all nominators to include the new "last block nominated" storage field. There are probably methods we could use to migrate to this lazily, since things won't affect end users for 6 months anyway.

From an economics perspective, it is possible that this curve decreases the network's rate of inflation, since nominators with stale nominations will not be getting their maximal possible payouts.

Not included in this write up is brainstorming on how we can expose this new feature to end users and UIs, and how we can make the experience better for everyone. But open to those discussions below.

Overall, I think this is a simple but high quality improvement we can make to our network.

Define categories and scope of RFCs

It is not yet super obvious to everyone what's in scope for an RFC and what's not. We should have a set of clearly defined rules to help people evaluate whether their idea should be an RFC or not.

Also per @shawntabrizi’s suggestion #32 (comment) we should have different categories for the RFCs

RFC idea: Parachain management & recovery parachain

Follow-up of #14

Related discussions:

https://forum.polkadot.network/t/how-to-recover-a-parachain/673
https://forum.polkadot.network/t/polkadot-summit-ecosystem-technical-fellowship-workshop-notes/3269

We want a system parachain that's able to help other parachains to manage the parachain wasm/state.

We need to figure out how much power we want to give to the new ecosystem fellowship collective.

I imagine parachains should be able to opt in/out of various ways of updating the wasm, including by the para manager, by the ecosystem fellowship, or by some custom origin which may be controlled by some other parachains.

Metered Weights in the Polkadot-SDK

Before creating a full RFC, I want to start a discussion on a potential direction around improving Weights and Benchmarks in the Polkadot SDK.

Problems to solve

  • Weights / Benchmarking are identified as one of the more complex parts of using the Polkadot SDK.
  • Keeping Polkadot-SDK as general as possible for the runtime, allowing for other frameworks to be created.
  • Improving safety and performance of the Polkadot SDK execution environment.

High Level Ideas

The Polkadot-SDK runtime should take a "step backwards", and introduce weight metering as the base level of execution limiting, rather than the pre-measured weight system that exists today.

Weight Metering is compatible with pre-measured weights, but not vice-versa.

One of the goals of the Polkadot-SDK is to be as general as possible, and allow for customization at each level of the stack, especially the runtime. As I understand, we have chosen a system where execution of a block in the runtime requires knowledge of the weight of that block ahead of time. This appears to be less flexible than using execution metering.

For example, assuming a system where we did execution metering, the runtime could bypass the metering system and directly inject the weights that it knows are correct for a given execution. However, with pre-measured weights, we have no flexibility to implement a metering system within a custom runtime framework.

Benchmarking pushes overhead to developers.

Benchmarking is quite a laborious process, especially with more complex pallets.
It is a large blocker between building an idea and deploying a product which is relatively safe to use.

If we want to keep Polkadot SDK competitive for innovators and builders, we cannot have this large overhead where other existing platforms do not.

Benchmarking can be extremely pessimistic.

Because we need to use the worst case situation for every extrinsic, the final calculated weights for a block can be much more than the time it actually takes to execute that block. It was previously calculated that a block full of only transactions uses only 60% of the total pre-calculated weight.

Even if extrinsics use weight refunds, it is likely that we won't optimally fill blocks, because we only include an extrinsic in a block if its worst-case weight would allow it to fit, not its final weight.

High Level Solutions

At the base Runtime API, support Weight Metering

Blocks and extrinsics being executed in the runtime should provide a max_weight parameter, and fail to execute if the metered weight is higher than the max_weight.

Perhaps this should be an Option, where None can be provided for backwards compatibility and the runtime will be forced to provide a pre-calculated weight.
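
A hypothetical sketch of what such an entry point could look like; this is not an existing Polkadot-SDK API, and Weight is simplified to a single number:

type Weight = u64; // stand-in for the SDK's two-dimensional Weight type

enum ExecutionError {
    /// Metered weight exceeded `max_weight`; in practice this would surface as a halt/panic.
    WeightLimitExceeded,
    InvalidExtrinsic,
}

/// Hypothetical metered execution entry points. `max_weight: None` keeps the current
/// behaviour (trust pre-calculated weights); `Some(limit)` asks the runtime to meter
/// execution and abort once `limit` is exceeded.
trait MeteredExecution<Block, Extrinsic> {
    fn execute_block(block: Block, max_weight: Option<Weight>) -> Result<(), ExecutionError>;
    fn apply_extrinsic(xt: Extrinsic, max_weight: Option<Weight>) -> Result<(), ExecutionError>;
}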

Runtime Should Support Panics

It seems that in order for metering to ever work, we would need to be able to suddenly halt extrinsic execution when the metered weight is beyond the expected max_weight. A panic is the right tool for this, correct?

In any case, allowing panics in the runtime would also improve developer experience since this is a major area where a runtime developer can make a mistake, and make their chain vulnerable to attack.

Weight Metered Database

It is not my suggestion that we provide full weight metering to all execution in the runtime. This would just bring us back to the performance of smart contracts.

Instead, I suggest we create a special DB layer which provides very specific weight information about database access as it happens during runtime execution.

We know that DB operations account for the majority of weight costs in the runtime, and that usually the number of DB operations is also quite low. (We should do basic analysis of existing pre-metered weights to back this up tangibly).

If we only meter the database, and assume that other execution is nominal, then we can get a very high performance environment with high accuracy.

The DB layer could provide very specific details, like exactly where the item exists in the merkle trie (depth, size, neighboring children, whether it or other neighboring children have already been cached, etc.). Then, with really comprehensive database benchmarks, we can dynamically meter how much weight each data operation costs.

Perhaps it is possible to forgo this minimal overhead when pre-calculated weights already exist, or this could be used to automatically provide weight refunds when the DB weights are known to be overestimated. A rough sketch of such a metered layer follows.
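
A rough sketch of such a metered layer, with hypothetical per-operation costs; a real meter would derive costs from trie depth, value size, cache state, and so on:

/// Stand-in for the underlying database.
trait Database {
    fn read(&self, key: &[u8]) -> Option<Vec<u8>>;
    fn write(&mut self, key: &[u8], value: &[u8]);
}

/// Wraps a database and charges every access against a weight budget.
struct MeteredDb<DB> {
    inner: DB,
    max_weight: u64,
    used_weight: u64,
}

impl<DB: Database> MeteredDb<DB> {
    fn charge(&mut self, weight: u64) {
        self.used_weight += weight;
        // The proposal above suggests halting execution (a panic) once the budget is exhausted.
        assert!(self.used_weight <= self.max_weight, "metered weight exceeded max_weight");
    }

    fn read(&mut self, key: &[u8]) -> Option<Vec<u8>> {
        self.charge(25_000_000); // placeholder per-read cost, not a benchmarked value
        self.inner.read(key)
    }

    fn write(&mut self, key: &[u8], value: &[u8]) {
        self.charge(100_000_000); // placeholder per-write cost, not a benchmarked value
        self.inner.write(key, value)
    }
}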

Handling Execution Weight

With a metered database, I suspect we will calculate a majority of the weight used in a block / extrinsic.

However, to get full safety, we can provide a few different tools:

Custom Additional Weight

We already provide APIs for runtime developers to manually add more weight during extrinsic execution. This can be used to increase the weight where we know that the metered database is not enough.

In fact, the benchmarking system already splits benchmarking between Wasm execution and the database operations, so we already provide a method for users to actually discover the "missing" weight.

Custom Weight Buffering

We could also allow runtime developers to add their own custom "weight buffer" to keep their extrinsics safer. For example, we could add an additional 20% overhead to the weight returned by the metered database.
