polkadot-fellows / runtimes

The various runtimes which make up the core subsystems of networks for which the Fellowship is represented.

License: GNU General Public License v3.0

Rust 99.13% Shell 0.87%

Contributors

acatangiu, andresilva, apopiak, arkpar, athei, bkchr, bkontur, bullrich, cecton, chevdor, coderobe, dependabot[bot], egorpopelyaev, emostov, gavofyork, ggwpez, gilescope, joepetrowski, kianenigma, kichjang, michalkucharczyk, muharem, ordian, pepyakin, rphmeier, s3krit, shawntabrizi, skunert, thiolliere, xlc

runtimes' Issues

1.2.0 Release

Please leave a comment with a link to a PR or issue, so we have it tracked.

TODO

  • PR with bump to [email protected] #137 (minimal configurations, just to compile)
  • PR with bump to [email protected] #159
  • PR with bump to [email protected] #187
  • check/fix migrations when 1.1.0 / 1.1.2 deployed #195
  • regenerate weights
    • regenerate weights for all pallets #223
    • check bridge transfer fee constants
  • bump spec_versions to 1_002_000 - done in #187
  • check and bump transaction versions #250
  • check Nacho's integration tests
    • check/verify e2e tests: OpenGov -> relay to AssetHub - see comment @muharem
  • fix #[api_version(7)] impl primitives::runtime_api::ParachainHost - fn node_features + verify what is missing #194
  • add benchmark cumulus_pallet_parachain_system to SP runtimes
  • address comment here
  • integration tests improvement - adapt comment fixed by #185
    # TODO: replace with `[email protected]` from `polkadot-sdk`
    integration-tests-common = { path = "../common" }
    
  • remove unneeded dependencies (cargo machete)
  • fix pallet_asset_conversion benchmarks paritytech/polkadot-sdk#3440
  • Adds Snowbridge to Kusama and Polkadot Runtimes #130

License headers

Think of a good license header and change it in all files. Many projects are just fine with the minimal SPDX:

// SPDX-FileCopyrightText: [year] [copyright holder] <[email address]>
//
// SPDX-License-Identifier: [identifier]

This allows for easy CI checking.
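
As a rough sketch of how such a check could look (assuming the minimal SPDX header above and a shell-based CI step; the file set and header text would need to match whatever we settle on):

#!/usr/bin/env bash
# Fail CI if any Rust source file lacks an SPDX license identifier near the top.
set -euo pipefail

missing=0
while IFS= read -r file; do
  if ! head -n 5 "$file" | grep -q 'SPDX-License-Identifier:'; then
    echo "missing SPDX header: $file"
    missing=1
  fi
done < <(find . -name '*.rs' -not -path './target/*')

exit "$missing"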

Merge/Approval rights for Fellows

Fellowship members should get approval/merge rights based on their rank, mirroring how it is done on chain. So, merging a pull request would require X members to pass "the voting" that grants the approval to merge. The list of fellowship members is maintained on chain, so maintaining the list on GitHub as well (e.g. having teams for each rank plus some CI check that succeeds when "the voting" has passed) would introduce extra overhead. Since the list and the associated ranks are already on chain, we should use them directly in the CI. Parity's pr-custom-review GitHub action already offers fine-grained control over the approval process. It should be augmented with a script that uses smoldot to fetch the latest list of fellowship members, their ranks and their identities from the latest block. This will require that every fellowship member also provides their GitHub account as part of their on-chain identity. The CI job will then be able to map from a GitHub name to the fellowship rank to do "the voting".

1.1.0 Release

Let's start with the list of things to include for the 1.1.0 release. Please leave a comment with a link to a PR or issue, so we have it tracked.

I propose that we start directly with this release after we have finally finished 1.0.0.

Missing `transaction_version` bumps

transaction_version bumps are missing for some runtimes, such as Polkadot.

Running the following:

cd /tmp
mkdir test
cd test
subwasm get --chain polkadot -o 9431.wasm
wget https://github.com/polkadot-fellows/runtimes/releases/download/v1.0.0/polkadot_runtime-v1000000.compact.compressed.wasm -O 1000000.wasm
subwasm diff 9431.wasm 1000000.wasm

shows:

subwasm differ output
!!! THE SUBWASM REDUCED DIFFER IS EXPERIMENTAL, DOUBLE CHECK THE RESULTS !!!
[≠] pallet 0: System -> 2 change(s)
  - constants changes:
    [≠] BlockWeights: [ 7, 56, 255, 253, 104, 2, 0, 11, 0, 32, 74, 169, 209, 1, 19, 255, 255, 255, 255, 255, 255, 255, 255, 2, 80, 170, 25, 0, 1, 11, 0, 60, ... ]
        [Value([Changed(1, U8Change(56, 176)), Changed(2, U8Change(255, 189)), Changed(3, U8Change(253, 233)), Changed(4, U8Change(104, 54)), Changed(5, U8Change(2, 3)), Changed(23, U8Change(2, 34)), Changed(24, U8Change(80, 45)), Changed(25, U8Change(170, 13)), Changed(26, U8Change(25, 30)), Changed(30, U8Change(0, 184)), Changed(31, U8Change(60, 132)), Changed(32, U8Change(117, 92)), Changed(33, U8Change(144, 143)), Changed(65, U8Change(2, 34)), Changed(66, U8Change(80, 45)), Changed(67, U8Change(170, 13)), Changed(68, U8Change(25, 30)), Changed(72, U8Change(0, 184)), Changed(73, U8Change(196, 12)), Changed(74, U8Change(199, 175)), Changed(75, U8Change(250, 249)), Changed(120, U8Change(2, 34)), Changed(121, U8Change(80, 45)), Changed(122, U8Change(170, 13)), Changed(123, U8Change(25, 30))])]
    [≠] Version: [ 32, 112, 111, 108, 107, 97, 100, 111, 116, 60, 112, 97, 114, 105, 116, 121, 45, 112, 111, 108, 107, 97, 100, 111, 116, 0, 0, 0, 0, 215, 36, 0, ... ]
        [Value([Changed(29, U8Change(215, 64)), Changed(30, U8Change(36, 66)), Changed(31, U8Change(0, 15)), Changed(130, U8Change(4, 5)), Changed(142, U8Change(2, 3))])]

[≠] pallet 1: Scheduler -> 11 change(s)
  - calls changes:
    [≠]  0: schedule ( when: T::BlockNumber, maybe_periodic: Option<schedule::Period<T::BlockNumber>>, priority: schedule::Priority, call: Box<<T as Config>::RuntimeCall>, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))]), Changed(1, [Ty(StringChange("Option<schedule::Period<T::BlockNumber>>", "Option<schedule::Period<BlockNumberFor<T>>>"))])] })]
    [≠]  1: cancel ( when: T::BlockNumber, index: u32, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]
    [≠]  2: schedule_named ( id: TaskName, when: T::BlockNumber, maybe_periodic: Option<schedule::Period<T::BlockNumber>>, priority: schedule::Priority, call: Box<<T as Config>::RuntimeCall>, )  )
        [Signature(SignatureChange { args: [Changed(1, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))]), Changed(2, [Ty(StringChange("Option<schedule::Period<T::BlockNumber>>", "Option<schedule::Period<BlockNumberFor<T>>>"))])] })]
    [≠]  4: schedule_after ( after: T::BlockNumber, maybe_periodic: Option<schedule::Period<T::BlockNumber>>, priority: schedule::Priority, call: Box<<T as Config>::RuntimeCall>, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))]), Changed(1, [Ty(StringChange("Option<schedule::Period<T::BlockNumber>>", "Option<schedule::Period<BlockNumberFor<T>>>"))])] })]
    [≠]  5: schedule_named_after ( id: TaskName, after: T::BlockNumber, maybe_periodic: Option<schedule::Period<T::BlockNumber>>, priority: schedule::Priority, call: Box<<T as Config>::RuntimeCall>, )  )
        [Signature(SignatureChange { args: [Changed(1, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))]), Changed(2, [Ty(StringChange("Option<schedule::Period<T::BlockNumber>>", "Option<schedule::Period<BlockNumberFor<T>>>"))])] })]

  - events changes:
    [≠]  0: Scheduled ( when: T::BlockNumber, index: u32, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]
    [≠]  1: Canceled ( when: T::BlockNumber, index: u32, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]
    [≠]  2: Dispatched ( task: TaskAddress<T::BlockNumber>, id: Option<TaskName>, result: DispatchResult, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("TaskAddress<T::BlockNumber>", "TaskAddress<BlockNumberFor<T>>"))])] })]
    [≠]  3: CallUnavailable ( task: TaskAddress<T::BlockNumber>, id: Option<TaskName>, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("TaskAddress<T::BlockNumber>", "TaskAddress<BlockNumberFor<T>>"))])] })]
    [≠]  4: PeriodicFailed ( task: TaskAddress<T::BlockNumber>, id: Option<TaskName>, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("TaskAddress<T::BlockNumber>", "TaskAddress<BlockNumberFor<T>>"))])] })]
    [≠]  5: PermanentlyOverweight ( task: TaskAddress<T::BlockNumber>, id: Option<TaskName>, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("TaskAddress<T::BlockNumber>", "TaskAddress<BlockNumberFor<T>>"))])] })]

[≠] pallet 2: Babe -> 3 change(s)
  - calls changes:
    [≠]  0: report_equivocation ( equivocation_proof: Box<EquivocationProof<T::Header>>, key_owner_proof: T::KeyOwnerProof, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("Box<EquivocationProof<T::Header>>", "Box<EquivocationProof<HeaderFor<T>>>"))])] })]
    [≠]  1: report_equivocation_unsigned ( equivocation_proof: Box<EquivocationProof<T::Header>>, key_owner_proof: T::KeyOwnerProof, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("Box<EquivocationProof<T::Header>>", "Box<EquivocationProof<HeaderFor<T>>>"))])] })]

  - constants changes:
    [+] ConstantDesc { name: "MaxNominators", value: [0, 2, 0, 0] }

[≠] pallet 7: Staking -> 4 change(s)
  - events changes:
    [≠] 14: ForceEra ( mode: Forcing, )  )
        [Name(StringChange("ForceEra", "SnapshotVotersSizeExceeded")), Signature(SignatureChange { args: [Changed(0, [Name(StringChange("mode", "size")), Ty(StringChange("Forcing", "u32"))])] })]
    [+] EventDesc { index: 15, name: "SnapshotTargetsSizeExceeded", signature: SignatureDesc { args: [ArgDesc { name: "size", ty: "u32" }] } }
    [+] EventDesc { index: 16, name: "ForceEra", signature: SignatureDesc { args: [ArgDesc { name: "mode", ty: "Forcing" }] } }

  - constants changes:
    [-] "MaxNominations"

[≠] pallet 11: Grandpa -> 4 change(s)
  - calls changes:
    [≠]  0: report_equivocation ( equivocation_proof: Box<EquivocationProof<T::Hash, T::BlockNumber>>, key_owner_proof: T::KeyOwnerProof, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("Box<EquivocationProof<T::Hash, T::BlockNumber>>", "Box<EquivocationProof<T::Hash, BlockNumberFor<T>>>"))])] })]
    [≠]  1: report_equivocation_unsigned ( equivocation_proof: Box<EquivocationProof<T::Hash, T::BlockNumber>>, key_owner_proof: T::KeyOwnerProof, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("Box<EquivocationProof<T::Hash, T::BlockNumber>>", "Box<EquivocationProof<T::Hash, BlockNumberFor<T>>>"))])] })]
    [≠]  2: note_stalled ( delay: T::BlockNumber, best_finalized_block_number: T::BlockNumber, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))]), Changed(1, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]

  - constants changes:
    [+] ConstantDesc { name: "MaxNominators", value: [0, 2, 0, 0] }

[≠] pallet 12: ImOnline -> 1 change(s)
  - calls changes:
    [≠]  0: heartbeat ( heartbeat: Heartbeat<T::BlockNumber>, signature: <T::AuthorityId as RuntimeAppPublic>::Signature, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("Heartbeat<T::BlockNumber>", "Heartbeat<BlockNumberFor<T>>"))])] })]

[≠] pallet 20: ConvictionVoting -> 1 change(s)
  - constants changes:
    [≠] VoteLockingPeriod: [192, 137, 1, 0]
        [Value([Changed(0, U8Change(192, 0)), Changed(1, U8Change(137, 39)), Changed(2, U8Change(1, 6))])]

[≠] pallet 21: Referenda -> 1 change(s)
  - calls changes:
    [≠]  0: submit ( proposal_origin: Box<PalletsOriginOf<T>>, proposal: BoundedCallOf<T, I>, enactment_moment: DispatchTime<T::BlockNumber>, )  )
        [Signature(SignatureChange { args: [Changed(2, [Ty(StringChange("DispatchTime<T::BlockNumber>", "DispatchTime<BlockNumberFor<T>>"))])] })]

[≠] pallet 24: Claims -> 1 change(s)
  - calls changes:
    [≠]  1: mint_claim ( who: EthereumAddress, value: BalanceOf<T>, vesting_schedule: Option<(BalanceOf<T>, BalanceOf<T>, T::BlockNumber)>, statement: Option<StatementKind>, )  )
        [Signature(SignatureChange { args: [Changed(2, [Ty(StringChange("Option<(BalanceOf<T>, BalanceOf<T>, T::BlockNumber)>", "Option<(BalanceOf<T>, BalanceOf<T>, BlockNumberFor<T>)>"))])] })]

[≠] pallet 25: Vesting -> 2 change(s)
  - calls changes:
    [≠]  2: vested_transfer ( target: AccountIdLookupOf<T>, schedule: VestingInfo<BalanceOf<T>, T::BlockNumber>, )  )
        [Signature(SignatureChange { args: [Changed(1, [Ty(StringChange("VestingInfo<BalanceOf<T>, T::BlockNumber>", "VestingInfo<BalanceOf<T>, BlockNumberFor<T>>"))])] })]
    [≠]  3: force_vested_transfer ( source: AccountIdLookupOf<T>, target: AccountIdLookupOf<T>, schedule: VestingInfo<BalanceOf<T>, T::BlockNumber>, )  )
        [Signature(SignatureChange { args: [Changed(2, [Ty(StringChange("VestingInfo<BalanceOf<T>, T::BlockNumber>", "VestingInfo<BalanceOf<T>, BlockNumberFor<T>>"))])] })]

[≠] pallet 29: Proxy -> 6 change(s)
  - calls changes:
    [≠]  1: add_proxy ( delegate: AccountIdLookupOf<T>, proxy_type: T::ProxyType, delay: T::BlockNumber, )  )
        [Signature(SignatureChange { args: [Changed(2, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]
    [≠]  2: remove_proxy ( delegate: AccountIdLookupOf<T>, proxy_type: T::ProxyType, delay: T::BlockNumber, )  )
        [Signature(SignatureChange { args: [Changed(2, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]
    [≠]  4: create_pure ( proxy_type: T::ProxyType, delay: T::BlockNumber, index: u16, )  )
        [Signature(SignatureChange { args: [Changed(1, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]
    [≠]  5: kill_pure ( spawner: AccountIdLookupOf<T>, proxy_type: T::ProxyType, index: u16, height: T::BlockNumber, ext_index: u32, )  )
        [Signature(SignatureChange { args: [Changed(3, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]

  - events changes:
    [≠]  3: ProxyAdded ( delegator: T::AccountId, delegatee: T::AccountId, proxy_type: T::ProxyType, delay: T::BlockNumber, )  )
        [Signature(SignatureChange { args: [Changed(3, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]
    [≠]  4: ProxyRemoved ( delegator: T::AccountId, delegatee: T::AccountId, proxy_type: T::ProxyType, delay: T::BlockNumber, )  )
        [Signature(SignatureChange { args: [Changed(3, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]

[≠] pallet 30: Multisig -> 6 change(s)
  - calls changes:
    [≠]  1: as_multi ( threshold: u16, other_signatories: Vec<T::AccountId>, maybe_timepoint: Option<Timepoint<T::BlockNumber>>, call: Box<<T as Config>::RuntimeCall>, max_weight: Weight, )  )
        [Signature(SignatureChange { args: [Changed(2, [Ty(StringChange("Option<Timepoint<T::BlockNumber>>", "Option<Timepoint<BlockNumberFor<T>>>"))])] })]
    [≠]  2: approve_as_multi ( threshold: u16, other_signatories: Vec<T::AccountId>, maybe_timepoint: Option<Timepoint<T::BlockNumber>>, call_hash: [u8; 32], max_weight: Weight, )  )
        [Signature(SignatureChange { args: [Changed(2, [Ty(StringChange("Option<Timepoint<T::BlockNumber>>", "Option<Timepoint<BlockNumberFor<T>>>"))])] })]
    [≠]  3: cancel_as_multi ( threshold: u16, other_signatories: Vec<T::AccountId>, timepoint: Timepoint<T::BlockNumber>, call_hash: [u8; 32], )  )
        [Signature(SignatureChange { args: [Changed(2, [Ty(StringChange("Timepoint<T::BlockNumber>", "Timepoint<BlockNumberFor<T>>"))])] })]

  - events changes:
    [≠]  1: MultisigApproval ( approving: T::AccountId, timepoint: Timepoint<T::BlockNumber>, multisig: T::AccountId, call_hash: CallHash, )  )
        [Signature(SignatureChange { args: [Changed(1, [Ty(StringChange("Timepoint<T::BlockNumber>", "Timepoint<BlockNumberFor<T>>"))])] })]
    [≠]  2: MultisigExecuted ( approving: T::AccountId, timepoint: Timepoint<T::BlockNumber>, multisig: T::AccountId, call_hash: CallHash, result: DispatchResult, )  )
        [Signature(SignatureChange { args: [Changed(1, [Ty(StringChange("Timepoint<T::BlockNumber>", "Timepoint<BlockNumberFor<T>>"))])] })]
    [≠]  3: MultisigCancelled ( cancelling: T::AccountId, timepoint: Timepoint<T::BlockNumber>, multisig: T::AccountId, call_hash: CallHash, )  )
        [Signature(SignatureChange { args: [Changed(1, [Ty(StringChange("Timepoint<T::BlockNumber>", "Timepoint<BlockNumberFor<T>>"))])] })]

[≠] pallet 36: ElectionProviderMultiPhase -> 5 change(s)
  - events changes:
    [≠]  5: PhaseTransitioned ( from: Phase<T::BlockNumber>, to: Phase<T::BlockNumber>, round: u32, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("Phase<T::BlockNumber>", "Phase<BlockNumberFor<T>>"))]), Changed(1, [Ty(StringChange("Phase<T::BlockNumber>", "Phase<BlockNumberFor<T>>"))])] })]

  - constants changes:
    [≠] MinerMaxWeight: [11, 200, 60, 119, 39, 86, 1, 19, 163, 112, 61, 10, 215, 163, 112, 189]
        [Value([Changed(1, U8Change(200, 8)), Changed(2, U8Change(60, 199)), Changed(3, U8Change(119, 114)), Changed(4, U8Change(39, 88)), Changed(5, U8Change(86, 85))])]
    [≠] SignedMaxWeight: [11, 200, 60, 119, 39, 86, 1, 19, 163, 112, 61, 10, 215, 163, 112, 189]
        [Value([Changed(1, U8Change(200, 8)), Changed(2, U8Change(60, 199)), Changed(3, U8Change(119, 114)), Changed(4, U8Change(39, 88)), Changed(5, U8Change(86, 85))])]
    [-] "MaxElectableTargets"
    [-] "MaxElectingVoters"

[≠] pallet 37: VoterList -> 1 change(s)
  - calls changes:
    [+] CallDesc { index: 2, name: "put_in_front_of_other", signature: SignatureDesc { args: [ArgDesc { name: "heavier", ty: "AccountIdLookupOf<T>" }, ArgDesc { name: "lighter", ty: "AccountIdLookupOf<T>" }] } }

[≠] pallet 39: NominationPools -> 10 change(s)
  - calls changes:
    [≠] 19: set_commission_change_rate ( pool_id: PoolId, change_rate: CommissionChangeRate<T::BlockNumber>, )  )
        [Signature(SignatureChange { args: [Changed(1, [Ty(StringChange("CommissionChangeRate<T::BlockNumber>", "CommissionChangeRate<BlockNumberFor<T>>"))])] })]

  - events changes:
    [≠] 13: PoolCommissionChangeRateUpdated ( pool_id: PoolId, change_rate: CommissionChangeRate<T::BlockNumber>, )  )
        [Signature(SignatureChange { args: [Changed(1, [Ty(StringChange("CommissionChangeRate<T::BlockNumber>", "CommissionChangeRate<BlockNumberFor<T>>"))])] })]

  - errors changes:
    [≠] 23: CommissionChangeThrottled
        [Name(StringChange("CommissionChangeThrottled", "CommissionExceedsGlobalMaximum"))]
    [≠] 24: CommissionChangeRateNotAllowed
        [Name(StringChange("CommissionChangeRateNotAllowed", "CommissionChangeThrottled"))]
    [≠] 25: NoPendingCommission
        [Name(StringChange("NoPendingCommission", "CommissionChangeRateNotAllowed"))]
    [≠] 26: NoCommissionCurrentSet
        [Name(StringChange("NoCommissionCurrentSet", "NoPendingCommission"))]
    [≠] 27: PoolIdInUse     
        [Name(StringChange("PoolIdInUse", "NoCommissionCurrentSet"))]
    [≠] 28: InvalidPoolId   
        [Name(StringChange("InvalidPoolId", "PoolIdInUse"))]
    [≠] 29: BondExtraRestricted
        [Name(StringChange("BondExtraRestricted", "InvalidPoolId"))]
    [+] ErrorDesc { index: 30, name: "BondExtraRestricted" }

[≠] pallet 40: FastUnstake -> 3 change(s)
  - events changes:
    [≠]  2: InternalError ( )  )
        [Name(StringChange("InternalError", "BatchChecked")), Signature(SignatureChange { args: [Added(0, ArgDesc { name: "eras", ty: "Vec<EraIndex>" })] })]
    [≠]  3: BatchChecked ( eras: Vec<EraIndex>, )  )
        [Name(StringChange("BatchChecked", "BatchFinished")), Signature(SignatureChange { args: [Changed(0, [Name(StringChange("eras", "size")), Ty(StringChange("Vec<EraIndex>", "u32"))])] })]
    [≠]  4: BatchFinished ( size: u32, )  )
        [Name(StringChange("BatchFinished", "InternalError")), Signature(SignatureChange { args: [Removed(0, ArgDesc { name: "size", ty: "u32" })] })]

[≠] pallet 51: Configuration -> 20 change(s)
  - calls changes:
    [≠]  0: set_validation_upgrade_cooldown ( new: T::BlockNumber, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]
    [≠]  1: set_validation_upgrade_delay ( new: T::BlockNumber, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]
    [≠]  2: set_code_retention_period ( new: T::BlockNumber, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]
    [≠]  6: set_parathread_cores ( new: u32, )  )
        [Name(StringChange("set_parathread_cores", "set_on_demand_cores"))]
    [≠]  7: set_parathread_retries ( new: u32, )  )
        [Name(StringChange("set_parathread_retries", "set_on_demand_retries"))]
    [≠]  8: set_group_rotation_frequency ( new: T::BlockNumber, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]
    [≠]  9: set_chain_availability_period ( new: T::BlockNumber, )  )
        [Name(StringChange("set_chain_availability_period", "set_paras_availability_period")), Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]
    [≠] 15: set_dispute_post_conclusion_acceptance_period ( new: T::BlockNumber, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]
    [≠] 43: set_minimum_validation_upgrade_delay ( new: T::BlockNumber, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]
    [+] CallDesc { index: 47, name: "set_on_demand_base_fee", signature: SignatureDesc { args: [ArgDesc { name: "new", ty: "Balance" }] } }
    [+] CallDesc { index: 48, name: "set_on_demand_fee_variability", signature: SignatureDesc { args: [ArgDesc { name: "new", ty: "Perbill" }] } }
    [+] CallDesc { index: 49, name: "set_on_demand_queue_max_size", signature: SignatureDesc { args: [ArgDesc { name: "new", ty: "u32" }] } }
    [+] CallDesc { index: 50, name: "set_on_demand_target_queue_utilization", signature: SignatureDesc { args: [ArgDesc { name: "new", ty: "Perbill" }] } }
    [+] CallDesc { index: 51, name: "set_on_demand_ttl", signature: SignatureDesc { args: [ArgDesc { name: "new", ty: "BlockNumberFor<T>" }] } }
    [+] CallDesc { index: 52, name: "set_minimum_backing_votes", signature: SignatureDesc { args: [ArgDesc { name: "new", ty: "u32" }] } }
    [-] "set_thread_availability_period"
    [-] "set_hrmp_max_parathread_inbound_channels"
    [-] "set_hrmp_max_parathread_outbound_channels"
    [-] "set_pvf_checking_enabled"

  - storages changes:
    [≠] Default  ActiveConfig: [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ... ]
        [DefaultValue([Changed(117, U8Change(0, 16)), Changed(118, U8Change(0, 39)), Changed(121, U8Change(0, 128)), Changed(122, U8Change(0, 178)), Changed(123, U8Change(0, 230)), Changed(124, U8Change(0, 14)), Changed(125, U8Change(1, 128)), Changed(126, U8Change(0, 195)), Changed(127, U8Change(0, 201)), Changed(128, U8Change(0, 1)), Changed(129, U8Change(1, 128)), Changed(130, U8Change(0, 150)), Changed(131, U8Change(0, 152)), Changed(133, U8Change(1, 0)), Changed(143, U8Change(6, 0)), Changed(145, U8Change(0, 5)), Changed(147, U8Change(100, 0)), Changed(149, U8Change(0, 1)), Changed(151, U8Change(1, 0)), Changed(153, U8Change(0, 1)), Changed(157, U8Change(0, 1)), Changed(163, U8Change(0, 6)), Changed(167, U8Change(0, 100)), Changed(171, U8Change(0, 1)), Changed(172, U8Change(2, 0)), Changed(176, U8Change(2, 0)), Added(180, 0), Added(181, 0), Added(182, 0), Added(183, 0), Added(184, 0), Added(185, 0), Added(186, 0), Added(187, 0), Added(188, 0), Added(189, 0), Added(190, 0), Added(191, 2), Added(192, 0), Added(193, 0), Added(194, 0), Added(195, 2), Added(196, 0), Added(197, 0), Added(198, 0), Added(199, 2), Added(200, 0), Added(201, 0), Added(202, 0)])]

[≠] pallet 52: ParasShared -> 1 change(s)
  - storages changes:
    [+] StorageDesc { name: "AllowedRelayParents", modifier: "Default", default_value: [0, 0, 0, 0, 0] }

[≠] pallet 53: ParaInclusion -> 6 change(s)
  - errors changes:
    [≠] 11: WrongCollator   
        [Name(StringChange("WrongCollator", "ScheduledOutOfOrder"))]
    [≠] 12: ScheduledOutOfOrder
        [Name(StringChange("ScheduledOutOfOrder", "HeadDataTooLarge"))]
    [≠] 13: HeadDataTooLarge
        [Name(StringChange("HeadDataTooLarge", "PrematureCodeUpgrade"))]
    [≠] 14: PrematureCodeUpgrade
        [Name(StringChange("PrematureCodeUpgrade", "NewCodeTooLarge"))]
    [≠] 15: NewCodeTooLarge 
        [Name(StringChange("NewCodeTooLarge", "DisallowedRelayParent"))]
    [≠] 16: CandidateNotInParentContext
        [Name(StringChange("CandidateNotInParentContext", "InvalidAssignment"))]

[≠] pallet 54: ParaInherent -> 1 change(s)
  - calls changes:
    [≠]  0: enter ( data: ParachainsInherentData<T::Header>, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("ParachainsInherentData<T::Header>", "ParachainsInherentData<HeaderFor<T>>"))])] })]

[≠] pallet 55: ParaScheduler -> 4 change(s)
  - storages changes:
    [+] StorageDesc { name: "ClaimQueue", modifier: "Default", default_value: [0] }
    [-] "ParathreadClaimIndex"
    [-] "ParathreadQueue"
    [-] "Scheduled"

[≠] pallet 56: Paras -> 3 change(s)
  - calls changes:
    [≠]  2: force_schedule_code_upgrade ( para: ParaId, new_code: ValidationCode, relay_parent_number: T::BlockNumber, )  )
        [Signature(SignatureChange { args: [Changed(2, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]
    [+] CallDesc { index: 8, name: "force_set_most_recent_context", signature: SignatureDesc { args: [ArgDesc { name: "para", ty: "ParaId" }, ArgDesc { name: "context", ty: "BlockNumberFor<T>" }] } }

  - storages changes:
    [+] StorageDesc { name: "MostRecentContext", modifier: "Optional", default_value: [0] }

[≠] pallet 62: ParasDisputes -> 1 change(s)
  - events changes:
    [≠]  2: Revert ( : T::BlockNumber, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]

[+] id: 64 - new pallet: ParaAssignmentProvider
[≠] pallet 72: Auctions -> 3 change(s)
  - calls changes:
    [≠]  0: new_auction ( duration: T::BlockNumber, lease_period_index: LeasePeriodOf<T>, )  )
        [Signature(SignatureChange { args: [Changed(0, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]

  - events changes:
    [≠]  0: AuctionStarted ( auction_index: AuctionIndex, lease_period: LeasePeriodOf<T>, ending: T::BlockNumber, )  )
        [Signature(SignatureChange { args: [Changed(2, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]
    [≠]  6: WinningOffset ( auction_index: AuctionIndex, block_number: T::BlockNumber, )  )
        [Signature(SignatureChange { args: [Changed(1, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]

[≠] pallet 73: Crowdloan -> 2 change(s)
  - calls changes:
    [≠]  0: create ( index: ParaId, cap: BalanceOf<T>, first_period: LeasePeriodOf<T>, last_period: LeasePeriodOf<T>, end: T::BlockNumber, verifier: Option<MultiSigner>, )  )
        [Signature(SignatureChange { args: [Changed(4, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]
    [≠]  5: edit ( index: ParaId, cap: BalanceOf<T>, first_period: LeasePeriodOf<T>, last_period: LeasePeriodOf<T>, end: T::BlockNumber, verifier: Option<MultiSigner>, )  )
        [Signature(SignatureChange { args: [Changed(4, [Ty(StringChange("T::BlockNumber", "BlockNumberFor<T>"))])] })]

[≠] pallet 99: XcmPallet -> 24 change(s)
  - calls changes:
    [≠]  4: force_xcm_version ( location: Box<MultiLocation>, xcm_version: XcmVersion, )  )
        [Signature(SignatureChange { args: [Changed(1, [Name(StringChange("xcm_version", "version"))])] })]

  - events changes:
    [≠]  0: Attempted ( : xcm::latest::Outcome, )  )
        [Signature(SignatureChange { args: [Changed(0, [Name(StringChange("", "outcome"))])] })]
    [≠]  1: Sent ( : MultiLocation, : MultiLocation, : Xcm<()>, )  )
        [Signature(SignatureChange { args: [Changed(0, [Name(StringChange("", "origin"))]), Changed(1, [Name(StringChange("", "destination"))]), Changed(2, [Name(StringChange("", "message"))]), Added(3, ArgDesc { name: "message_id", ty: "XcmHash" })] })]
    [≠]  2: UnexpectedResponse ( : MultiLocation, : QueryId, )  )
        [Signature(SignatureChange { args: [Changed(0, [Name(StringChange("", "origin"))]), Changed(1, [Name(StringChange("", "query_id"))])] })]
    [≠]  3: ResponseReady ( : QueryId, : Response, )  )
        [Signature(SignatureChange { args: [Changed(0, [Name(StringChange("", "query_id"))]), Changed(1, [Name(StringChange("", "response"))])] })]
    [≠]  4: Notified ( : QueryId, : u8, : u8, )  )
        [Signature(SignatureChange { args: [Changed(0, [Name(StringChange("", "query_id"))]), Changed(1, [Name(StringChange("", "pallet_index"))]), Changed(2, [Name(StringChange("", "call_index"))])] })]
    [≠]  5: NotifyOverweight ( : QueryId, : u8, : u8, : Weight, : Weight, )  )
        [Signature(SignatureChange { args: [Changed(0, [Name(StringChange("", "query_id"))]), Changed(1, [Name(StringChange("", "pallet_index"))]), Changed(2, [Name(StringChange("", "call_index"))]), Changed(3, [Name(StringChange("", "actual_weight"))]), Changed(4, [Name(StringChange("", "max_budgeted_weight"))])] })]
    [≠]  6: NotifyDispatchError ( : QueryId, : u8, : u8, )  )
        [Signature(SignatureChange { args: [Changed(0, [Name(StringChange("", "query_id"))]), Changed(1, [Name(StringChange("", "pallet_index"))]), Changed(2, [Name(StringChange("", "call_index"))])] })]
    [≠]  7: NotifyDecodeFailed ( : QueryId, : u8, : u8, )  )
        [Signature(SignatureChange { args: [Changed(0, [Name(StringChange("", "query_id"))]), Changed(1, [Name(StringChange("", "pallet_index"))]), Changed(2, [Name(StringChange("", "call_index"))])] })]
    [≠]  8: InvalidResponder ( : MultiLocation, : QueryId, : Option<MultiLocation>, )  )
        [Signature(SignatureChange { args: [Changed(0, [Name(StringChange("", "origin"))]), Changed(1, [Name(StringChange("", "query_id"))]), Changed(2, [Name(StringChange("", "expected_location"))])] })]
    [≠]  9: InvalidResponderVersion ( : MultiLocation, : QueryId, )  )
        [Signature(SignatureChange { args: [Changed(0, [Name(StringChange("", "origin"))]), Changed(1, [Name(StringChange("", "query_id"))])] })]
    [≠] 10: ResponseTaken ( : QueryId, )  )
        [Signature(SignatureChange { args: [Changed(0, [Name(StringChange("", "query_id"))])] })]
    [≠] 11: AssetsTrapped ( : H256, : MultiLocation, : VersionedMultiAssets, )  )
        [Signature(SignatureChange { args: [Changed(0, [Name(StringChange("", "hash"))]), Changed(1, [Name(StringChange("", "origin"))]), Changed(2, [Name(StringChange("", "assets"))])] })]
    [≠] 12: VersionChangeNotified ( : MultiLocation, : XcmVersion, : MultiAssets, )  )
        [Signature(SignatureChange { args: [Changed(0, [Name(StringChange("", "destination"))]), Changed(1, [Name(StringChange("", "result"))]), Changed(2, [Name(StringChange("", "cost"))]), Added(3, ArgDesc { name: "message_id", ty: "XcmHash" })] })]
    [≠] 13: SupportedVersionChanged ( : MultiLocation, : XcmVersion, )  )
        [Signature(SignatureChange { args: [Changed(0, [Name(StringChange("", "location"))]), Changed(1, [Name(StringChange("", "version"))])] })]
    [≠] 14: NotifyTargetSendFail ( : MultiLocation, : QueryId, : XcmError, )  )
        [Signature(SignatureChange { args: [Changed(0, [Name(StringChange("", "location"))]), Changed(1, [Name(StringChange("", "query_id"))]), Changed(2, [Name(StringChange("", "error"))])] })]
    [≠] 15: NotifyTargetMigrationFail ( : VersionedMultiLocation, : QueryId, )  )
        [Signature(SignatureChange { args: [Changed(0, [Name(StringChange("", "location"))]), Changed(1, [Name(StringChange("", "query_id"))])] })]
    [≠] 16: InvalidQuerierVersion ( : MultiLocation, : QueryId, )  )
        [Signature(SignatureChange { args: [Changed(0, [Name(StringChange("", "origin"))]), Changed(1, [Name(StringChange("", "query_id"))])] })]
    [≠] 17: InvalidQuerier ( : MultiLocation, : QueryId, : MultiLocation, : Option<MultiLocation>, )  )
        [Signature(SignatureChange { args: [Changed(0, [Name(StringChange("", "origin"))]), Changed(1, [Name(StringChange("", "query_id"))]), Changed(2, [Name(StringChange("", "expected_querier"))]), Changed(3, [Name(StringChange("", "maybe_actual_querier"))])] })]
    [≠] 18: VersionNotifyStarted ( : MultiLocation, : MultiAssets, )  )
        [Signature(SignatureChange { args: [Changed(0, [Name(StringChange("", "destination"))]), Changed(1, [Name(StringChange("", "cost"))]), Added(2, ArgDesc { name: "message_id", ty: "XcmHash" })] })]
    [≠] 19: VersionNotifyRequested ( : MultiLocation, : MultiAssets, )  )
        [Signature(SignatureChange { args: [Changed(0, [Name(StringChange("", "destination"))]), Changed(1, [Name(StringChange("", "cost"))]), Added(2, ArgDesc { name: "message_id", ty: "XcmHash" })] })]
    [≠] 20: VersionNotifyUnrequested ( : MultiLocation, : MultiAssets, )  )
        [Signature(SignatureChange { args: [Changed(0, [Name(StringChange("", "destination"))]), Changed(1, [Name(StringChange("", "cost"))]), Added(2, ArgDesc { name: "message_id", ty: "XcmHash" })] })]
    [≠] 21: FeesPaid ( : MultiLocation, : MultiAssets, )  )
        [Signature(SignatureChange { args: [Changed(0, [Name(StringChange("", "paying"))]), Changed(1, [Name(StringChange("", "fees"))])] })]
    [≠] 22: AssetsClaimed ( : H256, : MultiLocation, : VersionedMultiAssets, )  )
        [Signature(SignatureChange { args: [Changed(0, [Name(StringChange("", "hash"))]), Changed(1, [Name(StringChange("", "origin"))]), Changed(2, [Name(StringChange("", "assets"))])] })]

[≠] pallet 100: MessageQueue -> 1 change(s)
  - errors changes:
    [+] ErrorDesc { index: 7, name: "QueuePaused" }

[-] pallet 14: Democracy
[-] pallet 15: Council
[-] pallet 16: TechnicalCommittee
[-] pallet 17: PhragmenElection
[-] pallet 18: TechnicalMembership
[-] pallet 35: Tips
SUMMARY:
- Compatible.......................: false
- Require transaction_version bump.: true

!!! THE SUBWASM REDUCED DIFFER IS EXPERIMENTAL, DOUBLE CHECK THE RESULTS !!!

tldr:

[-] pallet 14: Democracy
[-] pallet 15: Council
[-] pallet 16: TechnicalCommittee
[-] pallet 17: PhragmenElection
[-] pallet 18: TechnicalMembership
[-] pallet 35: Tips
SUMMARY:
- Compatible.......................: false
- Require transaction_version bump.: true

Merge rights

We have support for giving approval rights to fellowship members. Now we also need merge rights. IMO every fellowship member or the PR author should be able to write "bot merge" in a pull request comment. If all CI checks are green, the pull request should then be merged to master.

Best practice for community-led runtime upgrade

Currently a community-led runtime upgrade has been proposed on Kusama Network through referendum 235.

Kusama Network is permissionless and allows anyone to propose any change, including a change of the runtime of the network itself.
The Polkadot Fellowship seems to have become the organization that maintains the runtimes for Polkadot ecosystem relay chains and system parachains.

This issue voices worries in the ecosystem that a community-led runtime upgrade may not necessarily follow the best practices for runtime upgrades that have been developed over the past five years, and may result in the unintended consequence of a bricked chain.

This issue requests guidance from the Polkadot Fellowship to offer a best practice for a community-led runtime upgrade.

Mind you: this issue does not seek an expert opinion on the content of Kusama Network Referendum 235, only an expert opinion on the procedure that best gets any community-led runtime change implemented.

Bad storage key value in Kusama runtime

There are a number of undecodable key-value pairs in Kusama. Here are some from Balances.Locks:

  • Key 0xc2261276cc9d1f8598ea4b6a74b15c2f218f26c73add634897550b4003b26bc625dd314d5cf9255ed0f3bdf2589adc292f47aed28c0690f788ed2dbceb1fc3ff

  • Value 0x087374616b696e672000000000000000000000000000000000ffffffff1f64656d6f63726163ffffffffffffffffffffffffffffffffc0b0070002

  • Key 0xc2261276cc9d1f8598ea4b6a74b15c2f218f26c73add634897550b4003b26bc6386a9e3f0e74067eef90a9a281c234e05d6fa9c4a25b20fd03913b81d7fccf13

  • Value 0x047374616b696e672000000000000000000000000000000000ffffffff1f

  • Key 0xc2261276cc9d1f8598ea4b6a74b15c2f218f26c73add634897550b4003b26bc6402fc69a7eb122806b64c9e27735e45009aa922e1ba4675758c75607a96a2172

  • Value 0x047374616b696e672000000000000000000000000000000000ffffffff1f

  • Key 0xc2261276cc9d1f8598ea4b6a74b15c2f218f26c73add634897550b4003b26bc645c200adc666cca26df86e2e4a5aa1af186606b9e2a5b86c14cc954228a3cd00

  • Value 0x047374616b696e672000000000000000000000000000000000ffffffff1f

  • Key 0xc2261276cc9d1f8598ea4b6a74b15c2f218f26c73add634897550b4003b26bc651e634907ab2f0b63eadc4091e366f51bcf6f73763f216dcf716cfe7757813e7

  • Value 0x0464656d6f63726163ffffffffffffffffffffffffffffffffc0b0070002

  • Key 0xc2261276cc9d1f8598ea4b6a74b15c2f218f26c73add634897550b4003b26bc65abb1b03f39287c41885312b50a18dc337de40d15f99c182eba0e5aa6629b9bc

  • Value 0x0464656d6f63726163ffffffffffffffffffffffffffffffff7218030002

All of the above keys are touched by this block: https://kusama.subscan.io/block/20466150?tab=event
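
For inspection, here is a minimal sketch of fetching one of these raw values over JSON-RPC (the endpoint is just an example; any Kusama archive node serving state_getStorage works):

# Fetch the raw value for the first key above.
curl -s -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"state_getStorage","params":["0xc2261276cc9d1f8598ea4b6a74b15c2f218f26c73add634897550b4003b26bc625dd314d5cf9255ed0f3bdf2589adc292f47aed28c0690f788ed2dbceb1fc3ff"]}' \
  https://kusama-rpc.polkadot.io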

Add `xcm-emulator` for (at least some) testing of relay and system parachains runtimes

Other than try-runtime migrations and some very basic unit tests, the code for these runtimes is really not tested (IMO dangerously under-tested).

For example, core misconfigurations can easily be introduced by accident, and currently we're relying on manual reviews alone to catch them - when they could easily be caught by (mostly existing) automated testing.

I heard about some plans of adding some chopsticks-based tests in the future, which sounds great! But in the meantime let's at least use what we have, namely xcm-emulator (and even some zombienet tests if we already have ones that cover good or critical chunks of runtime functionality).

Change OpenGov configuration

In RFC 20 there is some discussion about changing certain parameters of the OpenGov configuration. In general the fellowship doesn't want to decide on these values on its own, as this should be done by governance. Thus, the proposal is that we create a runtime upgrade that only enacts the proposed changes; governance can then decide whether it wants these changes by enacting the runtime upgrade or not.

The parameter changes can be done in the Polkadot/Kusama runtime itself. However, we want to make sure that people who voted using the old locking periods, on a referendum that gets enacted after the runtime upgrade, keep the locking period that was in force at the time of their vote. This will require some changes to the pallet itself.

After all the changes are ready, the fellowship can create a release that only includes these changes and then propose it on chain.

Improve release changelogs

Currently the release changelog just consumes the entire CHANGELOG.md file. This isn't great; we should only include the changes of the particular release.
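
As a sketch, assuming the changelog uses `## [x.y.z]`-style section headings (to be adjusted to the actual CHANGELOG.md format), the latest release's section could be extracted like this:

# Print only the first (i.e. latest) release section of CHANGELOG.md.
awk '/^## \[/ { n++ } n == 1' CHANGELOG.md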

Add `transaction_version` bump check

We should write a CI job that checks whether we need to bump the transaction_version. Subwasm and polkadot-js can help with this, but neither currently provides a "perfect" solution AFAIK.
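
A sketch of such a job built on subwasm (the wasm path is a placeholder for wherever the CI build puts the runtime; the grep keys off the summary line shown in the diff output above):

# Compare the live runtime against the freshly built one and flag a needed bump.
subwasm get --chain polkadot -o live.wasm
subwasm diff live.wasm ./new-polkadot-runtime.compact.compressed.wasm | tee diff.txt
if grep -q 'Require transaction_version bump.: true' diff.txt; then
  echo 'transaction_version bump required - check that it was actually bumped'
  exit 1
fi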

Script to automate weight generation

Now that the weight generation has been documented (#127), a script should be written to automate the process.

Should take inputs like:

  1. Set of runtimes & pallets/extrinsics to bench
  2. SSH credentials for a benchmark machine
  3. Local output dir

and execute the benchmarks pallet by pallet on the remote machine, writing the results to the specified local directory.

The command should by default 'pick up from where it left off' if it was terminated early, i.e. be tolerant to stopping/starting and not repeat benchmarks that have already been done.
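
A rough sketch of the core loop, assuming SSH access to a bench machine with a prebuilt polkadot binary (host, pallet list, chain and paths are all placeholders); skipping already-present output files gives the 'pick up from where it left off' behaviour:

#!/usr/bin/env bash
set -euo pipefail
BENCH_HOST="$1"  # SSH target of the benchmark machine
OUT_DIR="$2"     # local output directory

for pallet in pallet_balances pallet_staking; do  # placeholder pallet list
  out="$OUT_DIR/$pallet.rs"
  [ -f "$out" ] && continue  # tolerant to restarts: skip finished pallets
  ssh "$BENCH_HOST" "./polkadot benchmark pallet \
    --chain=polkadot-dev --pallet=$pallet --extrinsic='*' \
    --steps=50 --repeat=20 --output=/tmp/$pallet.rs"
  scp "$BENCH_HOST:/tmp/$pallet.rs" "$out"
done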

Add the `recovery` pallet to the Polkadot runtime

This pallet has been present on Kusama for many years, and is currently under-utilized but extremely important IMHO.
https://github.com/paritytech/polkadot-sdk/tree/master/substrate/frame/recovery

If the appropriate tooling is built around this pallet, it would bring a lot of safety to users' assets, allowing them to effectively recover funds, NFTs etc. in case they lose access to their accounts. I believe it is currently not widely supported by ecosystem wallets because it is only present on Kusama, hence this issue to bring it to the Polkadot runtime.

☂️ Migrate the Polkadot and Kusama runtimes

Some high-level tasks to complete the migration of the Polkadot and Kusama runtimes into this repo:

  • Copy the code from the Polkadot repo #1
  • Add a basic GitHub Actions CI, e.g. to check that everything compiles.
  • Maybe create a Rust toolchain file to pin the Rust version.
  • Think about how to use the runtimes downstream in the Polkadot repo; crates, copy&paste, git-subtree/module, etc. (Answer: No integration testing from the Monorepo - we have to adhere to semver)

What else? Please edit the issue directly.

Undecodable Kusama session keys

          > @bkchr do you know what might be the issue with the session keys or should I take a look? polkadot has a `UpgradeSessionKeys` migration, maybe Kusama needs one too?

I just think that this is the problem here. I.e. the key has been broken there for quite some time and we just always skip it now when doing migrations. Just check how many are broken and then go ahead and remove these from the state. This can probably be done using a kill_storage extrinsic, so we don't need to wait for a runtime upgrade.

Originally posted by @bkchr in #140 (comment)

Taking a look

single PR on polkadot-sdk version update

Problem:

An engineer seeking to integrate a feature from a new polkadot-sdk version must submit a pull request (PR) in which the versions of all crates from polkadot-sdk are updated. In most cases a single crate cannot be updated on its own, since the crates are interdependent. This forces the engineer to first address all breaking changes, resulting in a single PR covering various topics, which in turn leads to an extended review process for the PR.

I believe it's important to agree on how to handle such updates to ensure that everyone has the right expectations.

Examples: #87 #56

Possible Solutions:

There are several approaches that an engineer can take to tackle this:

[Updated on 09.01.2024] Please see the new proposal in the comments: #101 (comment), #101 (comment)

Option 1: minimal integration / flexible
In this approach, an engineer updating to a new polkadot-sdk version will address, at the very least, all breaking changes in a manner that prepares it for production. The extent of additional changes is left to the discretion of the engineer.

Option 2: exhaustive
In this scenario, one or more engineers ensure that all anticipated and reasonable features and fixes from the polkadot-sdk are integrated.

Option 3: regressive
Here, an engineer tasked with updating all crates within the runtimes repository applies minimal changes with the objective of making the crates compile. New features are disabled, and adjustments to contracts or their items are made in a restrictive manner. The PR is opened and merged to unblock subsequent PRs. Engineers responsible for specific domains or those who have worked on new features then follow up with their own PRs.
To support this, restrictive default pallet configs may be introduced at the FRAME level.

Any other alternatives?

Executing the `Utility::batch_all()` method via `XCM`

This PR paritytech/polkadot-sdk#1303 removed restrictions related to SafeCallFilter, allowing actions such as sending XCM messages from a parachain and executing the Utility::batch_all() method on the relay chain. However, we've noticed that SafeCallFilter is still retained after the separation of polkadot-runtime and kusama-runtime from polkadot-sdk into this new project.
Was this an oversight in the update? I believe cross-chain invocations of Utility::batch_all() are a common practice and should not be restricted.

Improve release process

Instead of doing a release whenever a PR that modifies CHANGELOG.md hits master, we should have a script that checks whether there actually is a new release. A new release would mean that CHANGELOG.md contains a version number that is higher than the highest version number found in CHANGELOG.md at the parent commit.

We should also change the tag to be in the format vVERSION_NUMBER. I think this is currently not the case.
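
A sketch of that check, assuming semver-style version numbers appear in CHANGELOG.md (sort -V handles the version comparison):

# Compare the highest version in CHANGELOG.md against the parent commit;
# only cut a release (tagged vVERSION_NUMBER) if it increased.
latest() { grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | sort -V | tail -n 1; }
new=$(latest < CHANGELOG.md)
old=$(git show 'HEAD^:CHANGELOG.md' | latest)
if [ "$new" != "$old" ]; then
  echo "new release: v$new"
fi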

Introduce more sophisticated approval rights

Currently the approval rights only allow specifying a minimum rank and the number of approvals required to approve a pull request. While this works, it is not very inviting, as lower ranks are not incentivized to even review/approve. On the other hand, it is sometimes hard to "herd cats" and get everyone together for review/approval. Thus, we should improve the approval bot to support some sort of "voting power", similar to how on-chain voting is done: every rank gets a certain voting power, and X amount of voting power is then required to approve a pull request. For example, three rank II members together could have the same voting power as one rank III member.

CC @Bullrich

Release 1.0 runtimes

The repository should now be prepared enough to create the first runtime release from this repo. Parity released all crates alongside the 1.0 release of the node. As the 1.0 release has now been out for some time, I propose that we base the runtimes 1.0 release on the next Parity Polkadot node release.

So, we first need to wait for that next node release to happen. Once it has, someone can propose a pull request that updates to these crates on crates.io.

CC @ggwpez @coderobe

Move `runtime-common` to the fellowship

While creating #26, it seemed to me that the fact that runtime-common is still in paritytech/polkadot-sdk is somewhat wrong. This crate contains some critical configurations and I think either entirely, or a subset of it that is highly entangled with the polkadot/kusama runtimes, should be brought into this repo.

Chopsticks + CI/CD

In addition to being the authors of Chopsticks, Acala also has a good e2e tests repo that is generic enough for all chains to use, assuming they add the correct configs.

It would be good if we had a Chopsticks test running in CI that validates that the new runtimes work as expected with the existing chain state.

I am planning on building a GitHub workflow that

  1. Builds all the runtimes
  2. Tests an XCM transfer between Kusama and AssetHub

Let me know if the above approach is unreasonable or if I should change it in any way.

We can add more tests as we go along, but just having that as a start would go a long way and hopefully help builders ensure they test their changes against their live state and avoid future bricking of the chains.

I think using the e2e tests repo is good if we want to ensure consistency across the board; if we want to use only Chopsticks we can, but the tests will be slightly harder to build.

Is there a potential issue with using an external dependency like the e2e tests repo in CI? We would be just as dependent on Chopsticks, unless the fellows fork it.
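
A rough sketch of what the workflow body could run, assuming Chopsticks' wasm-override support is used to run the freshly built runtime on forked live state (crate names, config files and paths are placeholders):

# 1. Build the runtimes.
cargo build --release -p kusama-runtime -p asset-hub-kusama-runtime

# 2. Fork live state with Chopsticks; a per-chain config file can point
#    wasm-override at the freshly built runtime, e.g. in kusama.yml:
#      endpoint: wss://kusama-rpc.polkadot.io
#      wasm-override: target/release/wbuild/kusama-runtime/kusama_runtime.compact.compressed.wasm
npx @acala-network/chopsticks@latest xcm --relaychain ./kusama.yml --parachain statemine

# 3. Run the XCM transfer tests against the local endpoints.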

Add support for update benchmarks

Problem

The polkadot-sdk repository has a short-benchmarks pipeline that helps verify the correctness of benchmarks. In the polkadot-fellows repository, we currently lack a similar pipeline for benchmark verification. Additionally, there is no existing "bench bot" to automatically regenerate weight files for polkadot-fellows runtimes. To maintain the integrity of our benchmarks and ensure the accuracy of weight files, we need to address these issues.

Proposed solution 1 - add pipeline

This pipeline will be responsible for running benchmarks and verifying their correctness on every PR.

Proposed solution 2 - regeneration

Since there is no existing "bench bot" for automatic regeneration of weight files for runtimes, we can implement a manual process or script.

Unresolved questions

  • How and where to regenerate weights for this repo?
  • What binary version to use? (Benchmark commands live inside the polkadot or polkadot-parachain binary.)
  • When to regenerate weights? Manually? On every release? On master?

cc: @ggwpez

Make salary budget configurable via onchain governance

          Actually, we will want to make this number configurable, as obviously we need to adjust the budget every once in a while. This will be a good candidate to integrate with a parameters pallet, whether it is the one in [ORML](https://github.com/open-web3-stack/open-runtime-module-library/blob/master/parameters/src/lib.rs) or [polkadot-sdk](https://github.com/paritytech/polkadot-sdk/pull/2061)

Originally posted by @xlc in #121 (comment)

A Tool for Scrutinizing OpenGov Proposals (Don't Trust, Verify)

Background

Last week, Kusama Referendum 297 upgrading Relay and System chains to V1 runtimes failed due to a misconfiguration in the call and needed to be re-submitted.

There is clearly a need for a tool to ease the process for both the Fellowship and the wider community to collectively verify the integrity of proposals prior to voting.

Existing work

https://github.com/AcalaNetwork/chopsticks is well suited for dry-running proposals and verifying that they do what is intended. @xlc has made the most progress with this, and has provided an example script of how to

  1. Whitelist a Kusama runtime upgrade call
  2. Dry-run the call, and inspect post-upgrade state using Polkadot.js

Kusama:

npx @acala-network/[email protected] xcm --relaychain kusama --parachain statemine --parachain kusama-bridge-hub

ws://127.0.0.1:8000 for asset hub, ws://127.0.0.1:8001 for bridge hub, ws://127.0.0.1:8002 for kusama

Open https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A8002#/js and run this JS

const number = (await api.rpc.chain.getHeader()).number.toNumber()
await api.rpc('dev_setStorage', {
  scheduler: {
    agenda: [
      [
        [number + 1],
        [
          {
            call: {
              Inline: '0x2c0042d9642af11541927831b1795a1d47e29b2491ce8166d65076e4d63d83ccf973'
            },
            origin: {
              system: 'Root'
            }
          }
        ]
      ],
      [
        [number + 2],
        [
          {
            call: {
              Lookup: {
                hash: '0xd51bacd5ea827cf48fd539d4cdd3edfc11de191ef7e3df301cebcb0cafccb5ab',
                len: 1423658
              }
            },
            origin: {
              system: 'Root'
            }
          }
        ]
      ]
    ]
  }
})
await api.rpc('dev_newBlock', { count: 2 })

Specification (DRAFT)

A CLI that dry-runs and verifies arbitrary proposals, including those that involve XCM.

  • Inputs
    • Some set of calls to dry-run (e.g. whitelist + execute referendum)
    • Expected state changes (e.g. :CODE is updated and nothing else)
  • Outputs
    • Expected state changes and unexpected state changes
    • Exposes local RPCs allowing devs to connect to and play with post-upgrade state using Polkadot.js
  • If convenient, consider dedicated subcommands to ease the process of verifying common types of proposals like runtime upgrades. Do not sacrifice the flexibility of the tool to this end.
  • Likely will heavily lean on Chopsticks

Next steps

  • I would love some feedback from @xlc on this issue and any details we can add to the specification, given his expertise with Chopsticks.
  • It has been suggested we create a bounty for the community to create this
    • Need to clear up details around how much it will be, where to get the funding, and how to take applications (can someone connect me with someone experienced with this?)
    • I suggest we set a deadline where, if there is no progress on the bounty in N months, we (the Fellowship) develop it ourselves, or I fear it may never get done
  • Based on discussion I will keep the issue up to date

CI: fix rust caching

E.g. here it prints "No cache found" although there should be some?!
Also seems to happen on other branches. Maybe @alvicsam knows how to fix it? Can we maybe even use this new Parity caching tool?

The check-migrations job could also use some caching for its build 😅

Timeline for runtime proposals

Is there a timeline for when this repository will officially take over as the source for runtime code? Right now the Polkadot v1.0.0 release in paritytech/polkadot invites developers to come to this repo and contribute; however, the README states that the runtime code should not be altered at this time. So is there an expected timeline for when we can start proposing changes?

Add runtime try-state tests

          I would love it if we could finally introduce some kind of a runtime burn-in for the runtimes as well.

For example, running follow-chain --check RoundRobin(1) for 1024 blocks should yield no ERROR logs.

Originally posted by @kianenigma in #33 (comment)

[ci] `arduino/setup-protoc@v1` API rate limit exceeded for 52.238.27.195

E.g.: failed job

Error: Error: API rate limit exceeded for 52.238.27.195. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)

We use this for check-migrations.yml, and the runtime matrix will only grow as new SP runtimes are added (e.g. people, coretime, ...).

How to fix this? Add that authentication? Or install some prerequisites just once before the matrix job runs (not sure if that's possible)?

I guess we need that protoc, or can it be removed?

Fix Encointer benchmarks

I fixed one for frame_system::set_code here,
but there are several others failing (probably easy to fix, e.g. "attempt to divide by zero"):

...

Created file: "./encointer-kusama-weights/pallet_balances.rs"
Running benchmarks for pallet_collective to ./encointer-kusama-weights/
2024-01-22 16:55:50 Starting benchmark: pallet_collective::set_members    
2024-01-22 16:55:51 Starting benchmark: pallet_collective::execute    
2024-01-22 16:55:51 Starting benchmark: pallet_collective::propose_execute    
2024-01-22 16:55:51 Starting benchmark: pallet_collective::propose_proposed    
2024-01-22 16:55:51 Starting benchmark: pallet_collective::vote    
2024-01-22 16:55:51 Starting benchmark: pallet_collective::close_early_disapproved    
2024-01-22 16:55:51 Starting benchmark: pallet_collective::close_early_approved    
2024-01-22 16:55:51 Starting benchmark: pallet_collective::close_disapproved    
2024-01-22 16:55:51 Starting benchmark: pallet_collective::close_approved    
Pallet: "pallet_collective", Extrinsic: "set_members", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Collective::Members` (r:1 w:1)
Proof: `Collective::Members` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Collective::Proposals` (r:1 w:0)
Proof: `Collective::Proposals` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Collective::Voting` (r:100 w:100)
Proof: `Collective::Voting` (`max_values`: None, `max_size`: None, mode: `Measured`)
Storage: `Collective::Prime` (r:0 w:1)
Proof: `Collective::Prime` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=        0
    + m    8.286
    + n        0
    + p    8.239
              µs

Reads = 0 + (1 * m) + (0 * n) + (1 * p)
Writes = 0 + (1 * m) + (0 * n) + (1 * p)
Recorded proof Size = 0 + (3224 * m) + (0 * n) + (3191 * p)

Min Squares Analysis
========
-- Extrinsic Time --

Data points distribution:
    m     n     p   mean µs  sigma µs       %
    0   100   100     15.99         0    0.0%
  100     0   100     867.4         0    0.0%
  100   100     0     20.72         0    0.0%
  100   100   100     845.9     29.96    3.5%

Quality and confidence:
param     error
m         1.871
n         1.871
p         1.871

Model:
Time ~=    15.99
    + m    5.208
    + n        0
    + p    5.161
              µs

Reads = 2 + (1 * m) + (0 * n) + (1 * p)
Writes = 2 + (1 * m) + (0 * n) + (1 * p)
Recorded proof Size = 66 + (1996 * m) + (0 * n) + (1964 * p)

Pallet: "pallet_collective", Extrinsic: "execute", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Collective::Members` (r:1 w:0)
Proof: `Collective::Members` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    11.96
    + b        0
    + m    0.014
              µs

Reads = 1 + (0 * b) + (0 * m)
Writes = 0 + (0 * b) + (0 * m)
Recorded proof Size = 68 + (0 * b) + (32 * m)

Min Squares Analysis
========
-- Extrinsic Time --

Data points distribution:
    b     m   mean µs  sigma µs       %
    2   100     13.45         0    0.0%
 1024     1     12.56         0    0.0%
 1024   100     13.68     0.349    2.5%

Quality and confidence:
param     error
b             0
m         0.006

Model:
Time ~=    12.31
    + b        0
    + m    0.011
              µs

Reads = 1 + (0 * b) + (0 * m)
Writes = 0 + (0 * b) + (0 * m)
Recorded proof Size = 69 + (0 * b) + (32 * m)

Pallet: "pallet_collective", Extrinsic: "propose_execute", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Collective::Members` (r:1 w:0)
Proof: `Collective::Members` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Collective::ProposalOf` (r:1 w:0)
Proof: `Collective::ProposalOf` (`max_values`: None, `max_size`: None, mode: `Measured`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    11.97
    + b    0.002
    + m    0.037
              µs

Reads = 2 + (0 * b) + (0 * m)
Writes = 0 + (0 * b) + (0 * m)
Recorded proof Size = 68 + (0 * b) + (32 * m)

Min Squares Analysis
========
-- Extrinsic Time --

Data points distribution:
    b     m   mean µs  sigma µs       %
    2   100     15.72         0    0.0%
 1024     1     15.03         0    0.0%
 1024   100     17.69     1.039    5.8%

Quality and confidence:
param     error
b         0.001
m         0.018

Model:
Time ~=    13.02
    + b    0.001
    + m    0.026
              µs

Reads = 2 + (0 * b) + (0 * m)
Writes = 0 + (0 * b) + (0 * m)
Recorded proof Size = 69 + (0 * b) + (32 * m)

Pallet: "pallet_collective", Extrinsic: "propose_proposed", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Collective::Members` (r:1 w:0)
Proof: `Collective::Members` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Collective::ProposalOf` (r:1 w:1)
Proof: `Collective::ProposalOf` (`max_values`: None, `max_size`: None, mode: `Measured`)
Storage: `Collective::Proposals` (r:1 w:1)
Proof: `Collective::Proposals` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Collective::ProposalCount` (r:1 w:1)
Proof: `Collective::ProposalCount` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Collective::Voting` (r:0 w:1)
Proof: `Collective::Voting` (`max_values`: None, `max_size`: None, mode: `Measured`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    15.27
    + b    0.001
    + m     0.04
    + p    0.117
              µs

Reads = 4 + (0 * b) + (0 * m) + (0 * p)
Writes = 4 + (0 * b) + (0 * m) + (0 * p)
Recorded proof Size = 4 + (0 * b) + (32 * m) + (39 * p)

Min Squares Analysis
========
-- Extrinsic Time --

Data points distribution:
    b     m     p   mean µs  sigma µs       %
    2   100   100     31.07         0    0.0%
 1024     2   100     28.57         0    0.0%
 1024   100     1     20.93         0    0.0%
 1024   100   100     37.43     7.108   18.9%

Quality and confidence:
param     error
b         0.009
m         0.102
p         0.101

Model:
Time ~=    5.363
    + b    0.006
    + m     0.09
    + p    0.166
              µs

Reads = 4 + (0 * b) + (0 * m) + (0 * p)
Writes = 4 + (0 * b) + (0 * m) + (0 * p)
Recorded proof Size = 4 + (0 * b) + (32 * m) + (39 * p)

Pallet: "pallet_collective", Extrinsic: "vote", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Collective::Members` (r:1 w:0)
Proof: `Collective::Members` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Collective::Voting` (r:1 w:1)
Proof: `Collective::Voting` (`max_values`: None, `max_size`: None, mode: `Measured`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    18.13
    + m    0.031
              µs

Reads = 2 + (0 * m)
Writes = 1 + (0 * m)
Recorded proof Size = 807 + (64 * m)

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    21.25
              µs

Reads = 2
Writes = 1
Recorded proof Size = 7210

Pallet: "pallet_collective", Extrinsic: "close_early_disapproved", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Collective::Voting` (r:1 w:1)
Proof: `Collective::Voting` (`max_values`: None, `max_size`: None, mode: `Measured`)
Storage: `Collective::Members` (r:1 w:0)
Proof: `Collective::Members` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Collective::Proposals` (r:1 w:1)
Proof: `Collective::Proposals` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Collective::ProposalOf` (r:0 w:1)
Proof: `Collective::ProposalOf` (`max_values`: None, `max_size`: None, mode: `Measured`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    13.48
    + m    0.053
    + p    0.141
              µs

Reads = 3 + (0 * m) + (0 * p)
Writes = 3 + (0 * m) + (0 * p)
Recorded proof Size = 181 + (64 * m) + (38 * p)

Min Squares Analysis
========
-- Extrinsic Time --

Data points distribution:
    m     p   mean µs  sigma µs       %
    4   100     27.89         0    0.0%
  100     1     18.96         0    0.0%
  100   100     31.39     1.627    5.1%

Quality and confidence:
param     error
m         0.029
p         0.028

Model:
Time ~=    15.19
    + m    0.036
    + p    0.125
              µs

Reads = 3 + (0 * m) + (0 * p)
Writes = 3 + (0 * m) + (0 * p)
Recorded proof Size = 181 + (64 * m) + (39 * p)

Pallet: "pallet_collective", Extrinsic: "close_early_approved", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Collective::Voting` (r:1 w:1)
Proof: `Collective::Voting` (`max_values`: None, `max_size`: None, mode: `Measured`)
Storage: `Collective::Members` (r:1 w:0)
Proof: `Collective::Members` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Collective::ProposalOf` (r:1 w:1)
Proof: `Collective::ProposalOf` (`max_values`: None, `max_size`: None, mode: `Measured`)
Storage: `Collective::Proposals` (r:1 w:1)
Proof: `Collective::Proposals` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    24.77
    + b        0
    + m    0.027
    + p    0.165
              µs

Reads = 4 + (0 * b) + (0 * m) + (0 * p)
Writes = 3 + (0 * b) + (0 * m) + (0 * p)
Recorded proof Size = 396 + (0 * b) + (64 * m) + (44 * p)

Min Squares Analysis
========
-- Extrinsic Time --

Data points distribution:
    b     m     p   mean µs  sigma µs       %
    2   100   100     44.07         0    0.0%
 1024     4   100     42.11         0    0.0%
 1024   100     1     28.29         0    0.0%
 1024   100   100     44.43     0.428    0.9%

Quality and confidence:
param     error
b             0
m         0.006
p         0.006

Model:
Time ~=    25.36
    + b        0
    + m    0.024
    + p    0.163
              µs

Reads = 4 + (0 * b) + (0 * m) + (0 * p)
Writes = 3 + (0 * b) + (0 * m) + (0 * p)
Recorded proof Size = 396 + (1 * b) + (64 * m) + (45 * p)

Pallet: "pallet_collective", Extrinsic: "close_disapproved", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Collective::Voting` (r:1 w:1)
Proof: `Collective::Voting` (`max_values`: None, `max_size`: None, mode: `Measured`)
Storage: `Collective::Members` (r:1 w:0)
Proof: `Collective::Members` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Collective::Prime` (r:1 w:0)
Proof: `Collective::Prime` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Collective::Proposals` (r:1 w:1)
Proof: `Collective::Proposals` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Collective::ProposalOf` (r:0 w:1)
Proof: `Collective::ProposalOf` (`max_values`: None, `max_size`: None, mode: `Measured`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    13.41
    + m    0.061
    + p    0.166
              µs

Reads = 4 + (0 * m) + (0 * p)
Writes = 3 + (0 * m) + (0 * p)
Recorded proof Size = 201 + (64 * m) + (38 * p)

Min Squares Analysis
========
-- Extrinsic Time --

Data points distribution:
    m     p   mean µs  sigma µs       %
    4   100     30.35         0    0.0%
  100     1     19.71         0    0.0%
  100   100     35.68      0.56    1.5%

Quality and confidence:
param     error
m          0.01
p         0.009

Model:
Time ~=       14
    + m    0.055
    + p    0.161
              µs

Reads = 4 + (0 * m) + (0 * p)
Writes = 3 + (0 * m) + (0 * p)
Recorded proof Size = 201 + (64 * m) + (39 * p)

Pallet: "pallet_collective", Extrinsic: "close_approved", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Collective::Voting` (r:1 w:1)
Proof: `Collective::Voting` (`max_values`: None, `max_size`: None, mode: `Measured`)
Storage: `Collective::Members` (r:1 w:0)
Proof: `Collective::Members` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Collective::Prime` (r:1 w:0)
Proof: `Collective::Prime` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Collective::ProposalOf` (r:1 w:1)
Proof: `Collective::ProposalOf` (`max_values`: None, `max_size`: None, mode: `Measured`)
Storage: `Collective::Proposals` (r:1 w:1)
Proof: `Collective::Proposals` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    29.41
    + b        0
    + m    0.026
    + p    0.177
              µs

Reads = 5 + (0 * b) + (0 * m) + (0 * p)
Writes = 3 + (0 * b) + (0 * m) + (0 * p)
Recorded proof Size = 416 + (0 * b) + (64 * m) + (44 * p)

Min Squares Analysis
========
-- Extrinsic Time --

Data points distribution:
    b     m     p   mean µs  sigma µs       %
    2   100   100     49.83         0    0.0%
 1024     4   100     45.17         0    0.0%
 1024   100     1     30.17         0    0.0%
 1024   100   100     50.36     3.855    7.6%

Quality and confidence:
param     error
b         0.005
m         0.056
p         0.055

Model:
Time ~=    24.04
    + b        0
    + m    0.054
    + p    0.203
              µs

Reads = 5 + (0 * b) + (0 * m) + (0 * p)
Writes = 3 + (0 * b) + (0 * m) + (0 * p)
Recorded proof Size = 416 + (1 * b) + (64 * m) + (45 * p)

Pallet: "pallet_collective", Extrinsic: "disapprove_proposal", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Collective::Proposals` (r:1 w:1)
Proof: `Collective::Proposals` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Collective::Voting` (r:0 w:1)
Proof: `Collective::Voting` (`max_values`: None, `max_size`: None, mode: `Measured`)
Storage: `Collective::ProposalOf` (r:0 w:1)
Proof: `Collective::ProposalOf` (`max_values`: None, `max_size`: None, mode: `Measured`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=     10.6
    + p    0.096
              µs

Reads = 1 + (0 * p)
Writes = 3 + (0 * p)
Recorded proof Size = 224 + (32 * p)

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    20.28
              µs

Reads = 1
Writes = 3
Recorded proof Size = 3427

2024-01-22 16:55:51 Starting benchmark: pallet_collective::disapprove_proposal    
Created file: "./encointer-kusama-weights/pallet_collective.rs"
Running benchmarks for pallet_encointer_balances to ./encointer-kusama-weights/
Pallet: "pallet_encointer_balances", Extrinsic: "transfer", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `EncointerBalances::Balance` (r:2 w:2)
Proof: `EncointerBalances::Balance` (`max_values`: None, `max_size`: Some(93), added: 2568, mode: `MaxEncodedLen`)
Storage: `EncointerBalances::DemurragePerBlock` (r:1 w:0)
Proof: `EncointerBalances::DemurragePerBlock` (`max_values`: None, `max_size`: Some(41), added: 2516, mode: `MaxEncodedLen`)
Storage: `System::Account` (r:1 w:1)
Proof: `System::Account` (`max_values`: None, `max_size`: Some(128), added: 2603, mode: `MaxEncodedLen`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    40.74
              µs

Reads = 4
Writes = 3
Recorded proof Size = 235

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    40.74
              µs

Reads = 4
Writes = 3
Recorded proof Size = 235

Pallet: "pallet_encointer_balances", Extrinsic: "transfer_all", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `EncointerBalances::Balance` (r:2 w:2)
Proof: `EncointerBalances::Balance` (`max_values`: None, `max_size`: Some(93), added: 2568, mode: `MaxEncodedLen`)
Storage: `EncointerBalances::DemurragePerBlock` (r:1 w:0)
Proof: `EncointerBalances::DemurragePerBlock` (`max_values`: None, `max_size`: Some(41), added: 2516, mode: `MaxEncodedLen`)
Storage: `System::Account` (r:2 w:2)
Proof: `System::Account` (`max_values`: None, `max_size`: Some(128), added: 2603, mode: `MaxEncodedLen`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=     54.8
              µs

Reads = 5
Writes = 4
Recorded proof Size = 338

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=     54.8
              µs

Reads = 5
Writes = 4
Recorded proof Size = 338

Pallet: "pallet_encointer_balances", Extrinsic: "set_fee_conversion_factor", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `EncointerBalances::FeeConversionFactor` (r:0 w:1)
Proof: `EncointerBalances::FeeConversionFactor` (`max_values`: Some(1), `max_size`: Some(16), added: 511, mode: `MaxEncodedLen`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    13.49
              µs

Reads = 0
Writes = 1
Recorded proof Size = 0

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    13.49
              µs

Reads = 0
Writes = 1
Recorded proof Size = 0

2024-01-22 16:55:52 Starting benchmark: pallet_encointer_balances::transfer    
2024-01-22 16:55:52 Starting benchmark: pallet_encointer_balances::transfer_all    
2024-01-22 16:55:52 Starting benchmark: pallet_encointer_balances::set_fee_conversion_factor    
2024-01-22 16:55:52 set fee conversion factor to 1    
2024-01-22 16:55:52 set fee conversion factor to 1    
2024-01-22 16:55:52 set fee conversion factor to 1    
Created file: "./encointer-kusama-weights/pallet_encointer_balances.rs"
Running benchmarks for pallet_encointer_bazaar to ./encointer-kusama-weights/
2024-01-22 16:55:53 Starting benchmark: pallet_encointer_bazaar::create_business    
2024-01-22 16:55:53 panicked at /home/bparity/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pallet-encointer-communities-3.0.2/src/lib.rs:546:23:
attempt to divide by zero    
Error: 
   0: Invalid input: Error executing and verifying runtime benchmark: Execution aborted due to trap: wasm trap: wasm `unreachable` instruction executed
      WASM backtrace:
      error while executing at wasm backtrace:
          0: 0x4ddc - <unknown>!rust_begin_unwind
          1: 0x2470 - <unknown>!core::panicking::panic_fmt::hd79411a297d06dc8
          2: 0x4ade - <unknown>!core::panicking::panic::hd37d8d0a98259c88
          3: 0x98ef2 - <unknown>!pallet_encointer_communities::<impl pallet_encointer_communities::pallet::Pallet<T>>::validate_location::h9d893732d1ca0031
          4: 0x931b9 - <unknown>!pallet_encointer_communities::pallet::Pallet<T>::new_community::h85076daf4e2da290
          5: 0x291a3d - <unknown>!pallet_encointer_bazaar::benchmarking::create_community::h4444360ac37b2072
          6: 0x27c53e - <unknown>!<encointer_kusama_runtime::Runtime as frame_benchmarking::utils::runtime_decl_for_benchmark::BenchmarkV1<sp_runtime::generic::block::Block<sp_runtime::generic::header::Header<u32,sp_runtime::traits::BlakeTwo256>,sp_runtime::generic::unchecked_extrinsic::UncheckedExtrinsic<sp_runtime::multiaddress::MultiAddress<<<sp_runtime::MultiSignature as sp_runtime::traits::Verify>::Signer as sp_runtime::traits::IdentifyAccount>::AccountId,()>,encointer_kusama_runtime::RuntimeCall,sp_runtime::MultiSignature,(frame_system::extensions::check_non_zero_sender::CheckNonZeroSender<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_spec_version::CheckSpecVersion<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_tx_version::CheckTxVersion<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_genesis::CheckGenesis<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_mortality::CheckMortality<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_nonce::CheckNonce<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_weight::CheckWeight<encointer_kusama_runtime::Runtime>,pallet_asset_tx_payment::ChargeAssetTxPayment<encointer_kusama_runtime::Runtime>)>>>>::dispatch_benchmark::hddfe61b935a99018
          7: 0x30646d - <unknown>!Benchmark_dispatch_benchmark

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
2024-01-22 16:55:53 1 storage transactions are left open by the runtime. Those will be rolled back.    
2024-01-22 16:55:53 1 storage transactions are left open by the runtime. Those will be rolled back.    
Running benchmarks for pallet_encointer_ceremonies to ./encointer-kusama-weights/
2024-01-22 16:55:54 Starting benchmark: pallet_encointer_ceremonies::register_participant    
2024-01-22 16:55:54 panicked at /home/bparity/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pallet-encointer-communities-3.0.2/src/lib.rs:546:23:
attempt to divide by zero    
2024-01-22 16:55:54 1 storage transactions are left open by the runtime. Those will be rolled back.    
2024-01-22 16:55:54 1 storage transactions are left open by the runtime. Those will be rolled back.    
Error: 
   0: Invalid input: Error executing and verifying runtime benchmark: Execution aborted due to trap: wasm trap: wasm `unreachable` instruction executed
      WASM backtrace:
      error while executing at wasm backtrace:
          0: 0x4ddc - <unknown>!rust_begin_unwind
          1: 0x2470 - <unknown>!core::panicking::panic_fmt::hd79411a297d06dc8
          2: 0x4ade - <unknown>!core::panicking::panic::hd37d8d0a98259c88
          3: 0x98ef2 - <unknown>!pallet_encointer_communities::<impl pallet_encointer_communities::pallet::Pallet<T>>::validate_location::h9d893732d1ca0031
          4: 0x931b9 - <unknown>!pallet_encointer_communities::pallet::Pallet<T>::new_community::h85076daf4e2da290
          5: 0x2a042f - <unknown>!pallet_encointer_ceremonies::benchmarking::create_community::h320f865da7db3304
          6: 0x291dab - <unknown>!<pallet_encointer_ceremonies::benchmarking::SelectedBenchmark as frame_benchmarking::utils::BenchmarkingSetup<T>>::instance::h8f9439e6c1aea178
          7: 0x27e975 - <unknown>!<encointer_kusama_runtime::Runtime as frame_benchmarking::utils::runtime_decl_for_benchmark::BenchmarkV1<sp_runtime::generic::block::Block<sp_runtime::generic::header::Header<u32,sp_runtime::traits::BlakeTwo256>,sp_runtime::generic::unchecked_extrinsic::UncheckedExtrinsic<sp_runtime::multiaddress::MultiAddress<<<sp_runtime::MultiSignature as sp_runtime::traits::Verify>::Signer as sp_runtime::traits::IdentifyAccount>::AccountId,()>,encointer_kusama_runtime::RuntimeCall,sp_runtime::MultiSignature,(frame_system::extensions::check_non_zero_sender::CheckNonZeroSender<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_spec_version::CheckSpecVersion<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_tx_version::CheckTxVersion<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_genesis::CheckGenesis<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_mortality::CheckMortality<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_nonce::CheckNonce<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_weight::CheckWeight<encointer_kusama_runtime::Runtime>,pallet_asset_tx_payment::ChargeAssetTxPayment<encointer_kusama_runtime::Runtime>)>>>>::dispatch_benchmark::hddfe61b935a99018
          8: 0x30646d - <unknown>!Benchmark_dispatch_benchmark

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
Running benchmarks for pallet_encointer_communities to ./encointer-kusama-weights/
2024-01-22 16:55:55 Starting benchmark: pallet_encointer_communities::new_community    
2024-01-22 16:55:55 panicked at /home/bparity/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pallet-encointer-communities-3.0.2/src/lib.rs:546:23:
attempt to divide by zero    
Error: 
   0: Invalid input: Error executing and verifying runtime benchmark: Execution aborted due to trap: wasm trap: wasm `unreachable` instruction executed
      WASM backtrace:
      error while executing at wasm backtrace:
          0: 0x4ddc - <unknown>!rust_begin_unwind
          1: 0x2470 - <unknown>!core::panicking::panic_fmt::hd79411a297d06dc8
          2: 0x4ade - <unknown>!core::panicking::panic::hd37d8d0a98259c88
          3: 0x98ef2 - <unknown>!pallet_encointer_communities::<impl pallet_encointer_communities::pallet::Pallet<T>>::validate_location::h9d893732d1ca0031
          4: 0x931b9 - <unknown>!pallet_encointer_communities::pallet::Pallet<T>::new_community::h85076daf4e2da290
          5: 0x29518b - <unknown>!pallet_encointer_communities::benchmarking::setup_test_community::h78fab09c1fb81085
          6: 0x27fc7e - <unknown>!<encointer_kusama_runtime::Runtime as frame_benchmarking::utils::runtime_decl_for_benchmark::BenchmarkV1<sp_runtime::generic::block::Block<sp_runtime::generic::header::Header<u32,sp_runtime::traits::BlakeTwo256>,sp_runtime::generic::unchecked_extrinsic::UncheckedExtrinsic<sp_runtime::multiaddress::MultiAddress<<<sp_runtime::MultiSignature as sp_runtime::traits::Verify>::Signer as sp_runtime::traits::IdentifyAccount>::AccountId,()>,encointer_kusama_runtime::RuntimeCall,sp_runtime::MultiSignature,(frame_system::extensions::check_non_zero_sender::CheckNonZeroSender<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_spec_version::CheckSpecVersion<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_tx_version::CheckTxVersion<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_genesis::CheckGenesis<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_mortality::CheckMortality<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_nonce::CheckNonce<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_weight::CheckWeight<encointer_kusama_runtime::Runtime>,pallet_asset_tx_payment::ChargeAssetTxPayment<encointer_kusama_runtime::Runtime>)>>>>::dispatch_benchmark::hddfe61b935a99018
          7: 0x30646d - <unknown>!Benchmark_dispatch_benchmark

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
2024-01-22 16:55:55 1 storage transactions are left open by the runtime. Those will be rolled back.    
2024-01-22 16:55:55 1 storage transactions are left open by the runtime. Those will be rolled back.    
Running benchmarks for pallet_encointer_faucet to ./encointer-kusama-weights/
2024-01-22 16:55:56 Starting benchmark: pallet_encointer_faucet::create_faucet    
2024-01-22 16:55:56 panicked at /home/bparity/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pallet-encointer-communities-3.0.2/src/lib.rs:546:23:
attempt to divide by zero    
Error: 
   0: Invalid input: Error executing and verifying runtime benchmark: Execution aborted due to trap: wasm trap: wasm `unreachable` instruction executed
      WASM backtrace:
      error while executing at wasm backtrace:
          0: 0x4ddc - <unknown>!rust_begin_unwind
          1: 0x2470 - <unknown>!core::panicking::panic_fmt::hd79411a297d06dc8
          2: 0x4ade - <unknown>!core::panicking::panic::hd37d8d0a98259c88
          3: 0x98ef2 - <unknown>!pallet_encointer_communities::<impl pallet_encointer_communities::pallet::Pallet<T>>::validate_location::h9d893732d1ca0031
          4: 0x931b9 - <unknown>!pallet_encointer_communities::pallet::Pallet<T>::new_community::h85076daf4e2da290
          5: 0x2a1cf7 - <unknown>!pallet_encointer_faucet::benchmarking::create_community::h4c9ec25964919234
          6: 0x295eb7 - <unknown>!pallet_encointer_faucet::benchmarking::<impl frame_benchmarking::utils::Benchmarking for pallet_encointer_faucet::pallet::Pallet<T>>::run_benchmark::h7f9e6aeff4d45377
          7: 0x281a09 - <unknown>!<encointer_kusama_runtime::Runtime as frame_benchmarking::utils::runtime_decl_for_benchmark::BenchmarkV1<sp_runtime::generic::block::Block<sp_runtime::generic::header::Header<u32,sp_runtime::traits::BlakeTwo256>,sp_runtime::generic::unchecked_extrinsic::UncheckedExtrinsic<sp_runtime::multiaddress::MultiAddress<<<sp_runtime::MultiSignature as sp_runtime::traits::Verify>::Signer as sp_runtime::traits::IdentifyAccount>::AccountId,()>,encointer_kusama_runtime::RuntimeCall,sp_runtime::MultiSignature,(frame_system::extensions::check_non_zero_sender::CheckNonZeroSender<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_spec_version::CheckSpecVersion<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_tx_version::CheckTxVersion<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_genesis::CheckGenesis<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_mortality::CheckMortality<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_nonce::CheckNonce<encointer_kusama_runtime::Runtime>,frame_system::extensions::check_weight::CheckWeight<encointer_kusama_runtime::Runtime>,pallet_asset_tx_payment::ChargeAssetTxPayment<encointer_kusama_runtime::Runtime>)>>>>::dispatch_benchmark::hddfe61b935a99018
          8: 0x30646d - <unknown>!Benchmark_dispatch_benchmark

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
2024-01-22 16:55:56 1 storage transactions are left open by the runtime. Those will be rolled back.    
2024-01-22 16:55:56 1 storage transactions are left open by the runtime. Those will be rolled back.    
Running benchmarks for pallet_encointer_reputation_commitments to ./encointer-kusama-weights/
Pallet: "pallet_encointer_reputation_commitments", Extrinsic: "register_purpose", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `EncointerReputationCommitments::CurrentPurposeId` (r:1 w:1)
Proof: `EncointerReputationCommitments::CurrentPurposeId` (`max_values`: Some(1), `max_size`: Some(8), added: 503, mode: `MaxEncodedLen`)
Storage: `EncointerReputationCommitments::Purposes` (r:0 w:1)
Proof: `EncointerReputationCommitments::Purposes` (`max_values`: None, `max_size`: Some(138), added: 2613, mode: `MaxEncodedLen`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    19.67
              µs

Reads = 1
Writes = 2
Recorded proof Size = 4

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    19.67
              µs

Reads = 1
Writes = 2
Recorded proof Size = 4

Pallet: "pallet_encointer_reputation_commitments", Extrinsic: "commit_reputation", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `EncointerReputationCommitments::Purposes` (r:1 w:0)
Proof: `EncointerReputationCommitments::Purposes` (`max_values`: None, `max_size`: Some(138), added: 2613, mode: `MaxEncodedLen`)
Storage: `EncointerCeremonies::ParticipantReputation` (r:1 w:0)
Proof: `EncointerCeremonies::ParticipantReputation` (`max_values`: None, `max_size`: None, mode: `Measured`)
Storage: `EncointerReputationCommitments::Commitments` (r:1 w:1)
Proof: `EncointerReputationCommitments::Commitments` (`max_values`: None, `max_size`: Some(102), added: 2577, mode: `MaxEncodedLen`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    25.34
              µs

Reads = 3
Writes = 1
Recorded proof Size = 329

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    25.34
              µs

Reads = 3
Writes = 1
Recorded proof Size = 329

Created file: "./encointer-kusama-weights/pallet_encointer_reputation_commitments.rs"
2024-01-22 16:55:57 Starting benchmark: pallet_encointer_reputation_commitments::register_purpose    
2024-01-22 16:55:57 commitment purpose registered: 0, BoundedVec([83, 111, 109, 101, 32, 68, 101, 115, 99, 114, 105, 112, 116, 111, 114], 128)    
2024-01-22 16:55:57 commitment purpose registered: 0, BoundedVec([83, 111, 109, 101, 32, 68, 101, 115, 99, 114, 105, 112, 116, 111, 114], 128)    
2024-01-22 16:55:57 commitment purpose registered: 0, BoundedVec([83, 111, 109, 101, 32, 68, 101, 115, 99, 114, 105, 112, 116, 111, 114], 128)    
2024-01-22 16:55:57 Starting benchmark: pallet_encointer_reputation_commitments::commit_reputation    
2024-01-22 16:55:57 commitment purpose registered: 0, BoundedVec([83, 111, 109, 101, 32, 70, 97, 117, 99, 101, 116, 32, 78, 97, 109, 101], 128)    
2024-01-22 16:55:57  commited reputation for cid 1111 at cindex 10 for purposed id 0    
2024-01-22 16:55:57 commitment purpose registered: 0, BoundedVec([83, 111, 109, 101, 32, 70, 97, 117, 99, 101, 116, 32, 78, 97, 109, 101], 128)    
2024-01-22 16:55:57  commited reputation for cid 1111 at cindex 10 for purposed id 0    
2024-01-22 16:55:57 commitment purpose registered: 0, BoundedVec([83, 111, 109, 101, 32, 70, 97, 117, 99, 101, 116, 32, 78, 97, 109, 101], 128)    
2024-01-22 16:55:57  commited reputation for cid 1111 at cindex 10 for purposed id 0    
Running benchmarks for pallet_encointer_scheduler to ./encointer-kusama-weights/
Error: 
   0: Invalid input: Benchmark pallet_encointer_scheduler::next_phase failed: DivisionByZero

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
2024-01-22 16:55:58 Starting benchmark: pallet_encointer_scheduler::next_phase    
2024-01-22 16:55:58 new ceremony phase with index 1    
Running benchmarks for pallet_membership to ./encointer-kusama-weights/
2024-01-22 16:55:59 Starting benchmark: pallet_membership::add_member    
2024-01-22 16:55:59 Starting benchmark: pallet_membership::remove_member    
2024-01-22 16:55:59 Starting benchmark: pallet_membership::swap_member    
Pallet: "pallet_membership", Extrinsic: "add_member", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Membership::Members` (r:1 w:1)
Proof: `Membership::Members` (`max_values`: Some(1), `max_size`: Some(3202), added: 3697, mode: `MaxEncodedLen`)
Storage: `Collective::Proposals` (r:1 w:0)
Proof: `Collective::Proposals` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Collective::Members` (r:0 w:1)
Proof: `Collective::Members` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Collective::Prime` (r:0 w:1)
Proof: `Collective::Prime` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    10.92
    + m    0.011
              µs

Reads = 2 + (0 * m)
Writes = 3 + (0 * m)
Recorded proof Size = 99 + (64 * m)

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    12.03
              µs

Reads = 2
Writes = 3
Recorded proof Size = 6440

Pallet: "pallet_membership", Extrinsic: "remove_member", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Membership::Members` (r:1 w:1)
Proof: `Membership::Members` (`max_values`: Some(1), `max_size`: Some(3202), added: 3697, mode: `MaxEncodedLen`)
Storage: `Collective::Proposals` (r:1 w:0)
Proof: `Collective::Proposals` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Membership::Prime` (r:1 w:0)
Proof: `Membership::Prime` (`max_values`: Some(1), `max_size`: Some(32), added: 527, mode: `MaxEncodedLen`)
Storage: `Collective::Members` (r:0 w:1)
Proof: `Collective::Members` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Collective::Prime` (r:0 w:1)
Proof: `Collective::Prime` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    11.73
    + m     0.01
              µs

Reads = 3 + (0 * m)
Writes = 3 + (0 * m)
Recorded proof Size = 205 + (64 * m)

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    12.76
              µs

Reads = 3
Writes = 3
Recorded proof Size = 6608

Pallet: "pallet_membership", Extrinsic: "swap_member", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Membership::Members` (r:1 w:1)
Proof: `Membership::Members` (`max_values`: Some(1), `max_size`: Some(3202), added: 3697, mode: `MaxEncodedLen`)
Storage: `Collective::Proposals` (r:1 w:0)
Proof: `Collective::Proposals` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Membership::Prime` (r:1 w:0)
Proof: `Membership::Prime` (`max_values`: Some(1), `max_size`: Some(32), added: 527, mode: `MaxEncodedLen`)
Storage: `Collective::Members` (r:0 w:1)
Proof: `Collective::Members` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Collective::Prime` (r:0 w:1)
Proof: `Collective::Prime` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    10.72
    + m    0.029
              µs

Reads = 3 + (0 * m)
Writes = 3 + (0 * m)
Recorded proof Size = 205 + (64 * m)

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    13.72
              µs

Reads = 3
Writes = 3
Recorded proof Size = 6608

Pallet: "pallet_membership", Extrinsic: "reset_member", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Membership::Members` (r:1 w:1)
Proof: `Membership::Members` (`max_values`: Some(1), `max_size`: Some(3202), added: 3697, mode: `MaxEncodedLen`)
Storage: `Collective::Proposals` (r:1 w:0)
Proof: `Collective::Proposals` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Membership::Prime` (r:1 w:0)
Proof: `Membership::Prime` (`max_values`: Some(1), `max_size`: Some(32), added: 527, mode: `MaxEncodedLen`)
Storage: `Collective::Members` (r:0 w:1)
Proof: `Collective::Members` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Collective::Prime` (r:0 w:1)
Proof: `Collective::Prime` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    10.53
    + m    0.135
              µs

Reads = 3 + (0 * m)
Writes = 3 + (0 * m)
Recorded proof Size = 203 + (64 * m)

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    24.06
              µs

Reads = 3
Writes = 3
Recorded proof Size = 6608

Pallet: "pallet_membership", Extrinsic: "change_key", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Membership::Members` (r:1 w:1)
Proof: `Membership::Members` (`max_values`: Some(1), `max_size`: Some(3202), added: 3697, mode: `MaxEncodedLen`)
Storage: `Collective::Proposals` (r:1 w:0)
Proof: `Collective::Proposals` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Membership::Prime` (r:1 w:1)
Proof: `Membership::Prime` (`max_values`: Some(1), `max_size`: Some(32), added: 527, mode: `MaxEncodedLen`)
Storage: `Collective::Members` (r:0 w:1)
Proof: `Collective::Members` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
Storage: `Collective::Prime` (r:0 w:1)
Proof: `Collective::Prime` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    10.96
    + m    0.025
              µs

Reads = 3 + (0 * m)
Writes = 4 + (0 * m)
Recorded proof Size = 203 + (64 * m)

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    13.54
              µs

Reads = 3
Writes = 4
Recorded proof Size = 6608

Pallet: "pallet_membership", Extrinsic: "set_prime", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Membership::Members` (r:1 w:0)
Proof: `Membership::Members` (`max_values`: Some(1), `max_size`: Some(3202), added: 3697, mode: `MaxEncodedLen`)
Storage: `Membership::Prime` (r:0 w:1)
Proof: `Membership::Prime` (`max_values`: Some(1), `max_size`: Some(32), added: 527, mode: `MaxEncodedLen`)
Storage: `Collective::Prime` (r:0 w:1)
Proof: `Collective::Prime` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    4.636
    + m    0.006
              µs

Reads = 1 + (0 * m)
Writes = 2 + (0 * m)
Recorded proof Size = 30 + (32 * m)

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    5.288
              µs

Reads = 1
Writes = 2
Recorded proof Size = 3233

Pallet: "pallet_membership", Extrinsic: "clear_prime", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Membership::Prime` (r:0 w:1)
Proof: `Membership::Prime` (`max_values`: Some(1), `max_size`: Some(32), added: 527, mode: `MaxEncodedLen`)
Storage: `Collective::Prime` (r:0 w:1)
Proof: `Collective::Prime` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    2.005
    + m        0
              µs

Reads = 0 + (0 * m)
Writes = 2 + (0 * m)
Recorded proof Size = 0 + (0 * m)

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    2.094
              µs

Reads = 0
Writes = 2
Recorded proof Size = 0

Created file: "./encointer-kusama-weights/pallet_membership.rs"
2024-01-22 16:55:59 Starting benchmark: pallet_membership::reset_member    
2024-01-22 16:55:59 Starting benchmark: pallet_membership::change_key    
2024-01-22 16:55:59 Starting benchmark: pallet_membership::set_prime    
2024-01-22 16:55:59 Starting benchmark: pallet_membership::clear_prime    
Running benchmarks for pallet_proxy to ./encointer-kusama-weights/
2024-01-22 16:56:00 Starting benchmark: pallet_proxy::proxy    
2024-01-22 16:56:00 Starting benchmark: pallet_proxy::proxy_announced    
2024-01-22 16:56:00 Starting benchmark: pallet_proxy::remove_announcement    
2024-01-22 16:56:00 Starting benchmark: pallet_proxy::reject_announcement    
2024-01-22 16:56:00 Starting benchmark: pallet_proxy::announce    
Pallet: "pallet_proxy", Extrinsic: "proxy", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Proxy::Proxies` (r:1 w:0)
Proof: `Proxy::Proxies` (`max_values`: None, `max_size`: Some(1241), added: 3716, mode: `MaxEncodedLen`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    22.15
    + p        0
              µs

Reads = 1 + (0 * p)
Writes = 0 + (0 * p)
Recorded proof Size = 125 + (37 * p)

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    21.95
              µs

Reads = 1
Writes = 0
Recorded proof Size = 1274

Pallet: "pallet_proxy", Extrinsic: "proxy_announced", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Proxy::Proxies` (r:1 w:0)
Proof: `Proxy::Proxies` (`max_values`: None, `max_size`: Some(1241), added: 3716, mode: `MaxEncodedLen`)
Storage: `Proxy::Announcements` (r:1 w:1)
Proof: `Proxy::Announcements` (`max_values`: None, `max_size`: Some(2233), added: 4708, mode: `MaxEncodedLen`)
Storage: `System::Account` (r:1 w:1)
Proof: `System::Account` (`max_values`: None, `max_size`: Some(128), added: 2603, mode: `MaxEncodedLen`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    27.11
    + a    0.044
    + p    0.263
              µs

Reads = 3 + (0 * a) + (0 * p)
Writes = 2 + (0 * a) + (0 * p)
Recorded proof Size = 488 + (68 * a) + (35 * p)

Min Squares Analysis
========
-- Extrinsic Time --

Data points distribution:
    a     p   mean µs  sigma µs       %
    0    31     35.28         0    0.0%
   31     1     28.74         0    0.0%
   31    31     33.18      3.46   10.4%

Quality and confidence:
param     error
a         0.193
p         0.199

Model:
Time ~=    30.68
    + a        0
    + p    0.148
              µs

Reads = 3 + (0 * a) + (0 * p)
Writes = 2 + (0 * a) + (0 * p)
Recorded proof Size = 488 + (68 * a) + (36 * p)

Pallet: "pallet_proxy", Extrinsic: "remove_announcement", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Proxy::Announcements` (r:1 w:1)
Proof: `Proxy::Announcements` (`max_values`: None, `max_size`: Some(2233), added: 4708, mode: `MaxEncodedLen`)
Storage: `System::Account` (r:1 w:1)
Proof: `System::Account` (`max_values`: None, `max_size`: Some(128), added: 2603, mode: `MaxEncodedLen`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    18.17
    + a    0.083
    + p        0
              µs

Reads = 2 + (0 * a) + (0 * p)
Writes = 2 + (0 * a) + (0 * p)
Recorded proof Size = 404 + (68 * a) + (0 * p)

Min Squares Analysis
========
-- Extrinsic Time --

Data points distribution:
    a     p   mean µs  sigma µs       %
    0    31     18.03         0    0.0%
   31     1     20.76         0    0.0%
   31    31     20.51      0.12    0.5%

Quality and confidence:
param     error
a         0.006
p         0.006

Model:
Time ~=    18.29
    + a    0.079
    + p        0
              µs

Reads = 2 + (0 * a) + (0 * p)
Writes = 2 + (0 * a) + (0 * p)
Recorded proof Size = 404 + (68 * a) + (0 * p)

Pallet: "pallet_proxy", Extrinsic: "reject_announcement", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Proxy::Announcements` (r:1 w:1)
Proof: `Proxy::Announcements` (`max_values`: None, `max_size`: Some(2233), added: 4708, mode: `MaxEncodedLen`)
Storage: `System::Account` (r:1 w:1)
Proof: `System::Account` (`max_values`: None, `max_size`: Some(128), added: 2603, mode: `MaxEncodedLen`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    18.55
    + a    0.155
    + p    0.014
              µs

Reads = 2 + (0 * a) + (0 * p)
Writes = 2 + (0 * a) + (0 * p)
Recorded proof Size = 404 + (68 * a) + (0 * p)

Min Squares Analysis
========
-- Extrinsic Time --

Data points distribution:
    a     p   mean µs  sigma µs       %
    0    31     18.99         0    0.0%
   31     1     23.39         0    0.0%
   31    31     22.38      1.44    6.4%

Quality and confidence:
param     error
a          0.08
p         0.083

Model:
Time ~=    20.04
    + a    0.109
    + p        0
              µs

Reads = 2 + (0 * a) + (0 * p)
Writes = 2 + (0 * a) + (0 * p)
Recorded proof Size = 404 + (68 * a) + (0 * p)

Pallet: "pallet_proxy", Extrinsic: "announce", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Proxy::Proxies` (r:1 w:0)
Proof: `Proxy::Proxies` (`max_values`: None, `max_size`: Some(1241), added: 3716, mode: `MaxEncodedLen`)
Storage: `Proxy::Announcements` (r:1 w:1)
Proof: `Proxy::Announcements` (`max_values`: None, `max_size`: Some(2233), added: 4708, mode: `MaxEncodedLen`)
Storage: `System::Account` (r:1 w:1)
Proof: `System::Account` (`max_values`: None, `max_size`: Some(128), added: 2603, mode: `MaxEncodedLen`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    24.37
    + a    0.112
    + p    0.054
              µs

Reads = 3 + (0 * a) + (0 * p)
Writes = 2 + (0 * a) + (0 * p)
Recorded proof Size = 339 + (70 * a) + (35 * p)

Min Squares Analysis
========
-- Extrinsic Time --

Data points distribution:
    a     p   mean µs  sigma µs       %
    0    31     26.06         0    0.0%
   31     1     27.91         0    0.0%
   31    31     29.33     0.214    0.7%

Quality and confidence:
param     error
a         0.011
p         0.012

Model:
Time ~=    24.59
    + a    0.105
    + p    0.047
              µs

Reads = 3 + (0 * a) + (0 * p)
Writes = 2 + (0 * a) + (0 * p)
Recorded proof Size = 339 + (71 * a) + (36 * p)

Pallet: "pallet_proxy", Extrinsic: "add_proxy", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Proxy::Proxies` (r:1 w:1)
Proof: `Proxy::Proxies` (`max_values`: None, `max_size`: Some(1241), added: 3716, mode: `MaxEncodedLen`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    19.01
    + p    0.007
              µs

Reads = 1 + (0 * p)
Writes = 1 + (0 * p)
Recorded proof Size = 125 + (37 * p)

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    19.25
              µs

Reads = 1
Writes = 1
Recorded proof Size = 1274

Pallet: "pallet_proxy", Extrinsic: "remove_proxy", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Proxy::Proxies` (r:1 w:1)
Proof: `Proxy::Proxies` (`max_values`: None, `max_size`: Some(1241), added: 3716, mode: `MaxEncodedLen`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    18.91
    + p     0.02
              µs

Reads = 1 + (0 * p)
Writes = 1 + (0 * p)
Recorded proof Size = 125 + (37 * p)

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    19.53
              µs

Reads = 1
Writes = 1
Recorded proof Size = 1274

Pallet: "pallet_proxy", Extrinsic: "remove_proxies", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Proxy::Proxies` (r:1 w:1)
Proof: `Proxy::Proxies` (`max_values`: None, `max_size`: Some(1241), added: 3716, mode: `MaxEncodedLen`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    16.95
    + p        0
              µs

Reads = 1 + (0 * p)
Writes = 1 + (0 * p)
Recorded proof Size = 125 + (37 * p)

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    16.95
              µs

Reads = 1
Writes = 1
Recorded proof Size = 1274

Pallet: "pallet_proxy", Extrinsic: "create_pure", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Proxy::Proxies` (r:1 w:1)
Proof: `Proxy::Proxies` (`max_values`: None, `max_size`: Some(1241), added: 3716, mode: `MaxEncodedLen`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    21.89
    + p        0
              µs

Reads = 1 + (0 * p)
Writes = 1 + (0 * p)
Recorded proof Size = 139 + (0 * p)

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    21.82
              µs

Reads = 1
Writes = 1
Recorded proof Size = 139

Pallet: "pallet_proxy", Extrinsic: "kill_pure", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Proxy::Proxies` (r:1 w:1)
Proof: `Proxy::Proxies` (`max_values`: None, `max_size`: Some(1241), added: 3716, mode: `MaxEncodedLen`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    18.26
    + p    0.021
              µs

Reads = 1 + (0 * p)
Writes = 1 + (0 * p)
Recorded proof Size = 163 + (37 * p)

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    18.91
              µs

Reads = 1
Writes = 1
Recorded proof Size = 1274

Created file: "./encointer-kusama-weights/pallet_proxy.rs"
2024-01-22 16:56:00 Starting benchmark: pallet_proxy::add_proxy    
2024-01-22 16:56:00 Starting benchmark: pallet_proxy::remove_proxy    
2024-01-22 16:56:00 Starting benchmark: pallet_proxy::remove_proxies    
2024-01-22 16:56:00 Starting benchmark: pallet_proxy::create_pure    
2024-01-22 16:56:00 Starting benchmark: pallet_proxy::kill_pure    
Running benchmarks for pallet_timestamp to ./encointer-kusama-weights/
Pallet: "pallet_timestamp", Extrinsic: "set", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========
Storage: `Timestamp::Now` (r:1 w:1)
Proof: `Timestamp::Now` (`max_values`: Some(1), `max_size`: Some(8), added: 503, mode: `MaxEncodedLen`)
Storage: `EncointerScheduler::NextPhaseTimestamp` (r:1 w:0)
Proof: `EncointerScheduler::NextPhaseTimestamp` (`max_values`: Some(1), `max_size`: Some(8), added: 503, mode: `MaxEncodedLen`)
Storage: `EncointerScheduler::CurrentCeremonyIndex` (r:1 w:1)
Proof: `EncointerScheduler::CurrentCeremonyIndex` (`max_values`: Some(1), `max_size`: Some(4), added: 499, mode: `MaxEncodedLen`)
Storage: `EncointerScheduler::PhaseDurations` (r:3 w:0)
Proof: `EncointerScheduler::PhaseDurations` (`max_values`: None, `max_size`: Some(25), added: 2500, mode: `MaxEncodedLen`)

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    22.02
              µs

Reads = 6
Writes = 2
Recorded proof Size = 95

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    22.02
              µs

Reads = 6
Writes = 2
Recorded proof Size = 95

Pallet: "pallet_timestamp", Extrinsic: "on_finalize", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    3.282
              µs

Reads = 0
Writes = 0
Recorded proof Size = 94

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    3.282
              µs

Reads = 0
Writes = 0
Recorded proof Size = 94

2024-01-22 16:56:01 Starting benchmark: pallet_timestamp::set    
2024-01-22 16:56:01 resync ceremony phase failed    
2024-01-22 16:56:01 resync ceremony phase failed    
2024-01-22 16:56:01 resync ceremony phase failed    
2024-01-22 16:56:01 Starting benchmark: pallet_timestamp::on_finalize    
2024-01-22 16:56:01 resync ceremony phase failed    
2024-01-22 16:56:01 resync ceremony phase failed    
2024-01-22 16:56:01 resync ceremony phase failed    
Created file: "./encointer-kusama-weights/pallet_timestamp.rs"
Running benchmarks for pallet_treasury to ./encointer-kusama-weights/
2024-01-22 16:56:02 Starting benchmark: pallet_treasury::spend_local    
2024-01-22 16:56:02 WARNING: benchmark weightless skipped - spend_local    
2024-01-22 16:56:02 WARNING: benchmark weightless skipped - spend_local    
2024-01-22 16:56:02 WARNING: benchmark weightless skipped - spend_local    
2024-01-22 16:56:02 Starting benchmark: pallet_treasury::propose_spend    
2024-01-22 16:56:02 Starting benchmark: pallet_treasury::reject_proposal    
2024-01-22 16:56:02 Starting benchmark: pallet_treasury::approve_proposal    
2024-01-22 16:56:02 Starting benchmark: pallet_treasury::remove_approval    
2024-01-22 16:56:02 Starting benchmark: pallet_treasury::on_initialize_proposals    
2024-01-22 16:56:02 Starting benchmark: pallet_treasury::spend    
2024-01-22 16:56:02 WARNING: benchmark weightless skipped - spend    
2024-01-22 16:56:02 WARNING: benchmark weightless skipped - spend    
2024-01-22 16:56:02 WARNING: benchmark weightless skipped - spend    
Error: 
   0: Invalid input: Benchmark pallet_treasury::payout failed: No origin

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
2024-01-22 16:56:02 Starting benchmark: pallet_treasury::payout    
Running benchmarks for pallet_utility to ./encointer-kusama-weights/
Pallet: "pallet_utility", Extrinsic: "batch", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    8.247
    + c    2.771
              µs

Reads = 0 + (0 * c)
Writes = 0 + (0 * c)
Recorded proof Size = 0 + (0 * c)

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=     2779
              µs

Reads = 0
Writes = 0
Recorded proof Size = 0

Pallet: "pallet_utility", Extrinsic: "as_derivative", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    7.709
              µs

Reads = 0
Writes = 0
Recorded proof Size = 0

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    7.709
              µs

Reads = 0
Writes = 0
Recorded proof Size = 0

Pallet: "pallet_utility", Extrinsic: "batch_all", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    6.946
    + c    2.722
              µs

Reads = 0 + (0 * c)
Writes = 0 + (0 * c)
Recorded proof Size = 0 + (0 * c)

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=     2729
              µs

Reads = 0
Writes = 0
Recorded proof Size = 0

Pallet: "pallet_utility", Extrinsic: "dispatch_as", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    8.533
              µs

Reads = 0
Writes = 0
Recorded proof Size = 0

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=    8.533
              µs

Reads = 0
Writes = 0
Recorded proof Size = 0

Pallet: "pallet_utility", Extrinsic: "force_batch", Lowest values: [], Highest values: [], Steps: 2, Repeat: 1
Raw Storage Info
========

Median Slopes Analysis
========
-- Extrinsic Time --

Model:
Time ~=    6.795
    + c    2.556
              µs

Reads = 0 + (0 * c)
Writes = 0 + (0 * c)
Recorded proof Size = 0 + (0 * c)

Min Squares Analysis
========
-- Extrinsic Time --

Model:
Time ~=     2563
              µs

Reads = 0
Writes = 0
Recorded proof Size = 0

2024-01-22 16:56:03 Starting benchmark: pallet_utility::batch    
2024-01-22 16:56:03 Starting benchmark: pallet_utility::as_derivative    
2024-01-22 16:56:03 Starting benchmark: pallet_utility::batch_all    
2024-01-22 16:56:03 Starting benchmark: pallet_utility::dispatch_as    
2024-01-22 16:56:03 Starting benchmark: pallet_utility::force_batch    
Created file: "./encointer-kusama-weights/pallet_utility.rs"
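
All of the Encointer benchmark failures above trap in the same place (pallet-encointer-communities/src/lib.rs:546, inside validate_location), so one defensive fix likely covers them. Below is a hypothetical sketch of the pattern, assuming the divisor is a runtime parameter such as a maximum travel speed that the benchmark genesis leaves at zero; the actual names in the pallet may differ:

// Hypothetical helper (names are assumptions, not the pallet's actual API):
// make the division fallible, so a zero divisor surfaces as an error instead
// of a wasm `unreachable` trap.
fn solar_trip_time_s(distance_m: u64, max_speed_mps: u64) -> Result<u64, &'static str> {
    // `checked_div` yields `None` for a zero divisor.
    distance_m
        .checked_div(max_speed_mps)
        .ok_or("max speed must be non-zero; populate it in the benchmark genesis")
}

The probably more correct fix for the benchmarks themselves is to populate whatever parameter ends up as the divisor during benchmark setup; the pallet_encointer_scheduler::next_phase DivisionByZero and the pallet_treasury::payout "No origin" failures above look like similar setup gaps rather than runtime bugs.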

Publishing new releases

A new release should be published after a "special" commit is merged to the main branch. This "special" commit should alter the CHANGELOG file. When this change is detected, an action should automatically build all the runtimes and publish a release. It would probably be super cool to take the diff of the CHANGELOG file and use it as the release notes.

Dismiss approval on new changes

Do we want to dismiss approvals on a PR when new changes are pushed? Otherwise the PR author can change the PR to something else and merge it.

Allow Treasury/Spender Origins to do XCM transfers

I raised this in the fellows channel, but I am raising it as a separate issue to hopefully keep track of discussions.

We want to build out a way for the treasury and spender tracks to support reserve asset transfers.

Currently, the only way to do this is through the utility pallet's dispatchAs. It is very little work to enable the treasury tracks to use XCM.

Is there a reason why this is a bad idea?

If not I am happy to work on it

I aim to enable it via a SpenderToPlurality origin conversion, similar to the existing FellowsToPlurality; a sketch follows below.
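
For concreteness, a minimal sketch of what that could look like in the relay runtime's xcm_config, assuming the OriginToPluralityVoice helper already used for the admin-track conversions, the Treasurer ensure-origin from governance::origins, and a TreasurerBodyId parameter (all names indicative):

use frame_support::parameter_types;
use xcm::latest::BodyId;
use xcm_builder::OriginToPluralityVoice;

parameter_types! {
    // Voice the spender tracks as the well-known Treasury plurality.
    pub const TreasurerBodyId: BodyId = BodyId::Treasury;
}

// Convert the local `Treasurer` origin into an XCM `Plurality` voice, exactly
// like `FellowsToPlurality` does for the Fellows origin.
pub type SpenderToPlurality =
    OriginToPluralityVoice<RuntimeOrigin, Treasurer, TreasurerBodyId>;

SpenderToPlurality would then be appended to the LocalOriginToLocation tuple so pallet_xcm accepts the origin for (reserve) transfers.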

Move/copy `runtime-common` to fellowship repo

Some of the implementations under polkadot-sdk's polkadot-runtime-common crate are critical to Polkadot and Kusama (e.g. fn era_payout, which implements the inflation formula sketched below). This logic should probably live under the umbrella of the Fellowship. Since the westend and rococo runtimes also need runtime-common, another option is to keep a separate runtime-common crate in each repo.
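
For reference, the inflation curve that fn era_payout encodes is, per the W3F token-economics write-up (parameter values are the published defaults; networks may have tuned them since):

I(x) =
  \begin{cases}
    I_0 + \left(i_{\mathrm{ideal}}\, x_{\mathrm{ideal}} - I_0\right)\dfrac{x}{x_{\mathrm{ideal}}}, & 0 \le x \le x_{\mathrm{ideal}} \\
    I_0 + \left(i_{\mathrm{ideal}}\, x_{\mathrm{ideal}} - I_0\right) 2^{(x_{\mathrm{ideal}} - x)/d}, & x_{\mathrm{ideal}} < x \le 1
  \end{cases}

where x is the staking rate, I_0 = 2.5% is the base inflation paid regardless of staking, i_ideal = 20% is the staker interest at the ideal rate, x_ideal = 0.5 is the ideal staking rate and d = 0.05 controls the exponential falloff.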

State migration

We need to migrate all parachains and the relay chains to state version 1. There is a pallet for doing this. With 1.0.0, Kusama will enable the state migration. After that, we also need to migrate its parachains, as well as Polkadot and its parachains. This issue works as a tracking issue.

  • Polkadot [x], [x], [ ]
    • Asset-Hub [ ], [ ], [ ]
    • Bridge-Hub [ ], [ ], [ ], [ ]
    • Collectives [ ], [ ], [ ]
  • Kusama [x], [x], [x]
    • Asset-Hub [x], [ ], [ ]
    • Bridge-Hub [ ], [ ], [ ], [ ]
    • Coretime [ ], [ ], [ ], [ ]
    • Encointer [ ], [ ], [ ]
    • Glutton [ ], [ ], [ ], [ ]
    • People [ ], [ ], [ ], [ ]

The three check marks mean: migration deployed, RPC reports the migration as done, migration removed from the runtime again. A rough sketch of the per-runtime wiring follows below.
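
The sketch below is based on substrate's pallet-state-trie-migration; treat the exact Config item names and deposit values as indicative, since they depend on the pallet version:

use frame_support::parameter_types;
use frame_system::{EnsureRoot, EnsureSigned};

parameter_types! {
    // Deposits that throttle (and can slash misbehaving) signed migration txs.
    pub MigrationSignedDepositPerItem: Balance = 1 * CENTS;
    pub MigrationSignedDepositBase: Balance = 20 * CENTS;
    pub const MigrationMaxKeyLen: u32 = 512;
}

impl pallet_state_trie_migration::Config for Runtime {
    type RuntimeEvent = RuntimeEvent;
    type Currency = Balances;
    type SignedDepositPerItem = MigrationSignedDepositPerItem;
    type SignedDepositBase = MigrationSignedDepositBase;
    // Who may start/tune the automatic per-block migration: governance/root.
    type ControlOrigin = EnsureRoot<AccountId>;
    // Which accounts may submit signed migration transactions.
    type SignedFilter = EnsureSigned<AccountId>;
    type MaxKeyLen = MigrationMaxKeyLen;
    type WeightInfo = ();
}

In addition, state_version must be set to 1 in the RuntimeVersion so that new writes use the V1 trie layout, and the pallet is removed again once the RPC reports the migration as complete (the third check mark).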

CI: `setup-protoc` hits rate limit

CI log from here:

Run arduino/setup-protoc@v1
Error: Error: API rate limit exceeded for 20.55.14.224. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)

Retry fixed it.
