metisprotocol / metis-verifier-node
Metis Andromeda verifier node
Watching this resource:
https://metis-verifier-stats.metissafe.tech/nodes/
I noticed that 38 nodes are stuck on block 5643925.
What advice would you give to node operators in this case?
I get this error occasionally, and there seems to be no recovery from it other than resetting the data.
That wastes a lot of time and resources, even though it looks like a recoverable error.
I would like to know more about it, and whether there is a more efficient way of recovering.
Is that possible?
https://metis-verifier-stats.metissafe.tech/nodes/
I'm regularly getting the error below. It triggers after approximately 10 days of running, but also after any restart of the container. I'm using the latest compose file and the other containers are running fine. Memory looks OK (2.7 GB used) and there are no other errors from the host machine. Is there anything I can try to avoid the missing-event error?
dtl-mainnet_1 | {"level":40,"time":1648424056988,"message":"TransactionEnqueued: missing event: TransactionEnqueued","msg":"recovering from a missing event"}
dtl-mainnet_1 | Well, that's that. We ran into a fatal error. Here's the dump. Goodbye!
dtl-mainnet_1 | (node:1) UnhandledPromiseRejectionWarning: Error: unable to recover from missing event
dtl-mainnet_1 | at L1IngestionService._start (/opt/optimism/packages/data-transport-layer/dist/src/services/l1-ingestion/service.js:157:31)
dtl-mainnet_1 | at async L1IngestionService.start (/opt/optimism/packages/common-ts/dist/base-service.js:33:9)
dtl-mainnet_1 | at async Promise.all (index 1)
dtl-mainnet_1 | at async L1DataTransportService._start (/opt/optimism/packages/data-transport-layer/dist/src/services/main/service.js:64:13)
dtl-mainnet_1 | at async L1DataTransportService.start (/opt/optimism/packages/common-ts/dist/base-service.js:33:9)
dtl-mainnet_1 | at async /opt/optimism/packages/data-transport-layer/dist/src/services/run.js:69:9
dtl-mainnet_1 | (Use node --trace-warnings ... to show where the warning was created)
dtl-mainnet_1 | (node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
dtl-mainnet_1 | (node:1) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
I see someone has asked the same question. We are using the latest docker-compose.yml, and my server has been upgraded to 4 cores and 16 GB of memory. I then deleted the data and ran docker-compose up again, but I still get this error:
ERROR[04-14|01:43:50.316] Could not verify error="Verifier cannot sync transaction batches to tip: Cannot sync transaction batches to tip: Cannot sync batches: The remote stateroot is not equal to the local: remote %!w(string=0xf28df9dbe2e434ad69a32d5e47de552f8d0472ba8d47dbce1da200fc01f916ad), local %!w(string=0x9d04dbae524e24511d7c9ba1943838c154bb362005588d51f19dc4e54829223c), batch-root %!w(string=0x497d4877548f42f19061012e9c23503a729e247b72b977e11109d9b05554ff70)"
INFO [04-14|01:43:50.807] Syncing transaction batch range start=13230 end=16529
DEBUG[04-14|01:43:50.807] Fetching transaction batch index=13230
TRACE[04-14|01:43:50.814] Applying batched transaction index=954690
TRACE[04-14|01:43:50.814] Applying indexed transaction index=954690
ERROR[04-14|01:43:50.814] Mismatched transaction index=954690
Hi guys,
I can't find an answer on how to resolve this:
l2geth-mainnet_1 | INFO [04-15|05:46:03.442] Syncing transaction batch range start=25438 end=25494
l2geth-mainnet_1 | DEBUG[04-15|05:46:03.442] Fetching transaction batch index=25438
l2geth-mainnet_1 | TRACE[04-15|05:46:03.452] Applying batched transaction index=2287472
l2geth-mainnet_1 | TRACE[04-15|05:46:03.452] Applying indexed transaction index=2287472
l2geth-mainnet_1 | ERROR[04-15|05:46:03.452] Mismatched transaction index=2287472
l2geth-mainnet_1 | ERROR[04-15|05:46:03.454] Could not verify error="Verifier cannot sync transaction batches to tip: Cannot sync transaction batches to tip: Cannot sync batches: The remote stateroot is not equal to the local: remote %!w(string=0xba35d6899c0e97dd0d7be9800300487902d3fdaf24d80001720336f73e7cf79e), local %!w(string=0x494d115314bab29a45139ce34b7faed3b3d8edfd6eb129609bfb9ca38592f119), batch-root %!w(string=0xad590519348392bebdee7640fad69d7a383aaded6b428db2ca509cef2728c5fa)"
It seems this happened after the last update.
I'm facing this error after moving my verifier to a new VPS:
metis-dtl | Well, that's that. We ran into a fatal error. Here's the dump. Goodbye!
metis-dtl | (node:1) UnhandledPromiseRejectionWarning: Error: could not detect network (event="noNetwork", code=NETWORK_ERROR, version=providers/5.4.4)
metis-dtl | at Logger.makeError (/opt/optimism/node_modules/ethers/node_modules/@ethersproject/providers/node_modules/@ethersproject/logger/lib/index.js:199:21)
metis-dtl | at Logger.throwError (/opt/optimism/node_modules/ethers/node_modules/@ethersproject/providers/node_modules/@ethersproject/logger/lib/index.js:208:20)
metis-dtl | at StaticJsonRpcProvider. (/opt/optimism/node_modules/ethers/node_modules/@ethersproject/providers/lib/json-rpc-provider.js:491:54)
metis-dtl | at step (/opt/optimism/node_modules/ethers/node_modules/@ethersproject/providers/lib/json-rpc-provider.js:48:23)
metis-dtl | at Object.throw (/opt/optimism/node_modules/ethers/node_modules/@ethersproject/providers/lib/json-rpc-provider.js:29:53)
metis-dtl | at rejected (/opt/optimism/node_modules/ethers/node_modules/@ethersproject/providers/lib/json-rpc-provider.js:21:65)
metis-dtl | at processTicksAndRejections (internal/process/task_queues.js:95:5)
metis-dtl | (Use node --trace-warnings ... to show where the warning was created)
metis-dtl | (node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
metis-dtl | (node:1) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
Any idea how to solve it?
I'm using Ubuntu 20.04.
$ docker image ls --digests | grep metisdao/mvm-andromeda
metisdao/mvm-andromeda dtl sha256:d0b7419f7443510a58d073403fc4add4657ba5b58d41e4c89e41bfbff86c571f 878c093d85ca 2 weeks ago 912MB
metisdao/mvm-andromeda l2geth sha256:39f16ebac9ce7021ad2c098cff544d68be1f739f51251a93f72a74e7e218d36f 18514286d9fe 2 weeks ago 41.1MB
$ docker-compose logs -f l2geth-mainnet
l2geth-mainnet_1 | DEBUG[05-01|08:22:10.130] Ancient blocks frozen already number=2467831 hash=833156…babc12 frozen=2377831
l2geth-mainnet_1 | DEBUG[05-01|08:23:10.131] Ancient blocks frozen already number=2467831 hash=833156…babc12 frozen=2377831
l2geth-mainnet_1 | DEBUG[05-01|08:24:10.132] Ancient blocks frozen already number=2467831 hash=833156…babc12 frozen=2377831
l2geth-mainnet_1 | DEBUG[05-01|08:25:10.132] Ancient blocks frozen already number=2467831 hash=833156…babc12 frozen=2377831
l2geth-mainnet_1 | DEBUG[05-01|08:26:10.133] Ancient blocks frozen already number=2467831 hash=833156…babc12 frozen=2377831
l2geth-mainnet_1 | DEBUG[05-01|08:27:10.134] Ancient blocks frozen already number=2467831 hash=833156…babc12 frozen=2377831
l2geth-mainnet_1 | DEBUG[05-01|08:28:10.134] Ancient blocks frozen already number=2467831 hash=833156…babc12 frozen=2377831
l2geth-mainnet_1 | DEBUG[05-01|08:29:10.134] Ancient blocks frozen already number=2467831 hash=833156…babc12 frozen=2377831
l2geth-mainnet_1 | DEBUG[05-01|08:30:10.135] Ancient blocks frozen already number=2467831 hash=833156…babc12 frozen=2377831
l2geth-mainnet_1 | DEBUG[05-01|08:31:10.136] Ancient blocks frozen already number=2467831 hash=833156…babc12 frozen=2377831
l2geth-mainnet_1 | DEBUG[05-01|08:32:10.136] Ancient blocks frozen already number=2467831 hash=833156…babc12 frozen=2377831
l2geth-mainnet_1 | DEBUG[05-01|08:33:10.137] Ancient blocks frozen already number=2467831 hash=833156…babc12 frozen=2377831
l2geth-mainnet_1 | DEBUG[05-01|08:34:10.138] Ancient blocks frozen already number=2467831 hash=833156…babc12 frozen=2377831
l2geth-mainnet_1 | DEBUG[05-01|08:35:10.139] Ancient blocks frozen already number=2467831 hash=833156…babc12 frozen=2377831
l2geth-mainnet_1 | DEBUG[05-01|08:36:10.139] Ancient blocks frozen already number=2467831 hash=833156…babc12 frozen=2377831
l2geth-mainnet_1 | TRACE[05-01|08:36:39.443] Refreshing port mapping proto=tcp extport=30303 intport=30303 interface="UPnP or NAT-PMP"
l2geth-mainnet_1 | DEBUG[05-01|08:36:39.443] Couldn't add port mapping proto=tcp extport=30303 intport=30303 interface="UPnP or NAT-PMP" err="no UPnP or NAT-PMP router discovered"
Why does this error happen, and what do I need to do to solve it?
The node log looks normal, but the node's block height is not updating.
$ curl -s 'http://localhost:8080/verifier/get/true/1088' | jq '.batch.blockNumber'
13922794
$ docker image inspect metisdao/mvm-andromeda:dtl | jq -r '.[0].RepoDigests[0]' | cut -d ':' -f 2 | cut -c 1-12
6f7e62106d47
Node log:
`{"level":30,"time":1649755985798,"highestSyncedL1Block":14569931,"targetL1Block":14569932,"msg":"Synchronizing events from Layer 1 (Ethereum)"}
{"level":30,"time":1649755991221,"highestSyncedL1Block":14569932,"targetL1Block":14569933,"msg":"Synchronizing events from Layer 1 (Ethereum)"}
{"level":30,"time":1649756005642,"highestSyncedL1Block":14569933,"targetL1Block":14569934,"msg":"Synchronizing events from Layer 1 (Ethereum)"}
{"level":30,"time":1649756012784,"highestSyncedL1Block":14569934,"targetL1Block":14569935,"msg":"Synchronizing events from Layer 1 (Ethereum)"}
{"level":30,"time":1649756024584,"highestSyncedL1Block":14569935,"targetL1Block":14569936,"msg":"Synchronizing events from Layer 1 (Ethereum)"}
{"level":30,"time":1649756032355,"highestSyncedL1Block":14569936,"targetL1Block":14569937,"msg":"Synchronizing events from Layer 1 (Ethereum)"}
{"level":30,"time":1649756040906,"highestSyncedL1Block":14569937,"targetL1Block":14569938,"msg":"Synchronizing events from Layer 1 (Ethereum)"}
{"level":30,"time":1649756044763,"chainId":1088,"parsedEvent":{"stateRootBatchEntry":{"index":5879,"blockNumber":14569938,"timestamp":1649755916,"submitter":"0x9cB01d516D930EF49591a05B09e0D33E6286689D","size":85,"root":"0x0f6bd7455bb4150ea9d0aa5ae272c9d59928fc0253c98613c391efcf45b130af","prevTotalElements":2284743,"extraData":"0x000000000000000000000000000000000000000000000000000000006255470c0000000000000000000000009cb01d516d930ef49591a05b09e0d33e6286689d","l1TransactionHash":"0xeeb87aefd665cb86b49ffab02d9955782720a9e782dc3a36a8c2ea3168f665af"},"stateRootEntries":[{"index":2284743,"batchIndex":5879,"value":"0xfd07af36c9edac6f1ccaed41ec78f3f907fc6689c0d3b7d05e5689fced5b229d","confirmed":true},{"index":2284744,"batchIndex":5879,"value":"0x8f81c015f132c09718e309a0a21b212d12b0fc81c156b2e5660b409515022da0","confirmed":true},{"index":2284745,"batchIndex":5879,"value":"0xb3a637c450dc15081e3c1ef3fac6f19110ccab6b1ac4ca0c0e13b22bd4ed99c9","confirmed":true},{"index":2284746,"batchIndex":5879,"value":"0x88c4f7d44924e91d3ce9f3e3b3a291c1fdab8b1b767c6e0c559fc61c7d20191f","confirmed":true},{"index":2284747,"batchIndex":5879,"value":"0xaa2ba72da117073b9283fad6f782d8bcae0493a5eef668a6064b31255e8c6773","confirmed":true},{"index":2284748,"batchIndex":5879,"value":"0xe95ed48225836013d323a006acac5cbaffff79a6572e6ac280c5bc0ef9a4f6ff","confirmed":true},{"index":2284749,"batchIndex":5879,"value":"0x86a1844fbd76614ea598f9040ad2cd7c833b989e4a6126b9c5c59cd6b5d5998a","confirmed":true},{"index":2284750,"batchIndex":5879,"value":"0x2bc3de650c7028a8e2979478f0f080da445aa94e71a232aadf4d56903841f2df","confirmed":true},{"index":2284751,"batchIndex":5879,"value":"0xde39c109fcd67da6e0d47d7153924709d79ee8cacefa159f1dde4fd40628ff3c","confirmed":true},{"index":2284752,"batchIndex":5879,"value":"0x93b07edc9ca70e727100e5d6385474686e02372f9d0abddda42bf298c9183a45","confirmed":true},{"index":2284753,"batchIndex":5879,"value":"0x5c66a151096db3a7b83fd02f0831f112e1dd56164d492ac7632aaf43eb03f0f9","confirmed":true}
,{"index":2284754,"batchIndex":5879,"value":"0xa7a0b85d3280716f6df7f0fa48d74901e0304688e9df35b6808dff7fb9f3d8ae","confirmed":true},{"index":2284755,"batchIndex":5879,"value":"0xbd6f7bf80b66ed2572abcf2dab7228a58824a2d3156af6754a4a21b93ac142ff","confirmed":true},{"index":2284756,"batchIndex":5879,"value":"0xaa2c746e42d7d0f6a260e63e137f7e08eb9857c54160705a3dd693f8d8ef30ec","confirmed":true},{"index":2284757,"batchIndex":5879,"value":"0x73639fe34ad1be4430bf7929ac9b5064447dab88b752345d3f2bcea4cc04f6b6","confirmed":true},{"index":2284758,"batchIndex":5879,"value":"0xa2119b85e423505e9f54978dfcc11de1db1b578a00d115a460a50aaf53f5050e","confirmed":true},{"index":2284759,"batchIndex":5879,"value":"0x254127f95a8ee31bd2696ba853f2aefb45d95eca6402b1a30669a1a744af2dc3","confirmed":true},{"index":2284760,"batchIndex":5879,"value":"0xef051c93c4f5b1d3d94463585734bc17179d4d9cc73164533e75eb2b2d211e7c","confirmed":true},{"index":2284761,"batchIndex":5879,"value":"0x04e796c470e727aa8dd89f566b9b95bc4adee99a6f7152b0000fc9a42807c24d","confirmed":true},{"index":2284762,"batchIndex":5879,"value":"0x0d8f780b7100d84d85db096273dfdcd94ed2bbe83d784cec3cbd95cdfe9ce864","confirmed":true},{"index":2284763,"batchIndex":5879,"value":"0x8d83483d72e034b9ffd4ef6b217a87db362fa47d575b6ef70145c6badc19eda0","confirmed":true},{"index":2284764,"batchIndex":5879,"value":"0x7f314f8491abd21aa595f333c258014fc6f76a2fb077019aa9e56a73a30e74a5","confirmed":true},{"index":2284765,"batchIndex":5879,"value":"0x4688d48c2b8b79aa9d9ebdfbfef44415fd79803e3fcadd5a727f8f5e2449eecf","confirmed":true},{"index":2284766,"batchIndex":5879,"value":"0x57fbf883f2c120849e87fdc50a819b510331814a8dd6b65d19205cefbe2c7fad","confirmed":true},{"index":2284767,"batchIndex":5879,"value":"0x8a68b48f07f9d4dc52102dea4e87697648ca85467ebd8a9593dad84c7b1cf3b3","confirmed":true},{"index":2284768,"batchIndex":5879,"value":"0x5e68dfe2a4fc15922604583cc51967083eb209e6ba86b636de028a3d82071aae","confirmed":true},{"index":2284769,"batchIndex":5879,"value":"0x3dc
e2adf430`
Install curl and jq first, then run the following command to check whether your node is healthy:
$ curl -s 'http://localhost:8080/verifier/get/true/1088' | jq '.verify.index'
1139855
You will get a number; it is the current height of your node. Ensure that it is non-zero and strictly increasing.
Run the following command to check that your node is synced:
$ curl 'http://localhost:8080/verifier/get/true/1088' | jq '.batch.l1TransactionHash'
"0xef898dfc868d7936bc828513278192305883042c1038e28af3785f9a9770d25a"
You will get a transaction hash; it is the latest L1 batch transaction hash.
Open https://etherscan.io/address/0xf209815e595cdf3ed0aaf9665b1772e608ab9380
If the first transaction on that page matches the hash you got, your node is synced.
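Putting the two checks together: a minimal shell sketch (assuming the same local endpoint used above, and that curl and jq are installed) that samples the verifier index twice and reports whether it is advancing. The function names and the 60-second interval are my own choices, not part of the setup:

```shell
#!/bin/sh
# is_increasing OLD NEW -> succeeds only if NEW is strictly greater than OLD
is_increasing() {
  [ "$2" -gt "$1" ]
}

# Sample the verifier index twice, 60 seconds apart, and report whether
# the node is advancing (the "strictly increasing" check described above).
check_node() {
  url="$1"
  a=$(curl -s "$url" | jq '.verify.index')
  sleep 60
  b=$(curl -s "$url" | jq '.verify.index')
  if is_increasing "$a" "$b"; then
    echo "healthy: index advanced from $a to $b"
  else
    echo "stuck: index still at $b"
  fi
}

# Usage (uncomment to run against your node):
# check_node 'http://localhost:8080/verifier/get/true/1088'
```

This could be run from cron to alert on a stuck node instead of checking the stats page manually.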
Hi Team Metis,
After the second update of the verifier node based on the latest post, my node stopped working, as attached.
I tried to restart it, but it remains in an abnormal state. I then deleted the entire old database and restarted the service; for the first five hours it ran well and pulled data correctly, until it reached block #954690, where l2geth stopped working again. Please review the screenshot and kindly let me know what's going on with my node.
Thanks! Oliver
Hello.
I have received confirmation from [email protected] that my node IP and wallet address were checked and added to the verifier list, but I don't see my node's IP in the nodes list. What happened and what should I do?
You mention in the readme that we may not need this
https://github.com/ericlee42/metis-verifier-node-setup/blob/bd6021cd752fee1c8dba979ef1051868429e1a84/README.md?plain=1#L3
and that there might be block height lag:
What's the correct way of using the mainnet RPC? Can we connect l2geth directly to mainnet without running our own verifier or replica? If so, how?
If you are using Infura, your node may have been affected by its outage.
If your node is not working, please delete all of the data and resync.
For better node-service availability, you can deploy your own geth node.
Hi,
I am one of the verifiers, and first of all I would really like to thank everyone for creating this.
This week I have encountered the error shown below twice, at different index numbers:
l2geth-mainnet_1 | ERROR[02-26|02:30:11.504] Could not verify error="Verifier cannot sync transaction batches to tip: Cannot sync transaction batches to tip: Cannot sync batches: cannot apply batched transaction: Cannot apply batched transaction: Received tx at index 1015330 when looking for 1015090"
I would like to ask for advice on how to avoid this issue in the future.
I restarted the node once and it worked fine until the issue appeared again two days later, once it had synced to a higher index number.
Thank you.
Hi, it seems all verifier nodes have stopped syncing properly. Please look into it and let us know what happened to the DTL service. Thanks!
Hi all,
We sometimes get the error below. We tried deleting /data and re-syncing, but the issue still reappears at a different tx:
metis-verifier-node-setup-l2geth-mainnet-1 | INFO [05-30|03:13:49.410] Syncing transaction batch range start=26490 end=26546
metis-verifier-node-setup-l2geth-mainnet-1 | DEBUG[05-30|03:13:49.410] Fetching transaction batch index=26490
metis-verifier-node-setup-l2geth-mainnet-1 | TRACE[05-30|03:13:49.476] Applying batched transaction index=2752732
metis-verifier-node-setup-l2geth-mainnet-1 | TRACE[05-30|03:13:49.476] Applying indexed transaction index=2752732
metis-verifier-node-setup-l2geth-mainnet-1 | ERROR[05-30|03:13:49.476] Could not verify error="Verifier cannot sync transaction batches to tip: Cannot sync transaction batches to tip: Cannot sync batches: cannot apply batched transaction: Cannot apply batched transaction: Received tx at index 2752732 when looking for 2751374"
Docker image digests:
metisdao/mvm-andromeda dtl sha256:d0b7419f7443510a58d073403fc4add4657ba5b58d41e4c89e41bfbff86c571f 878c093d85ca 6 weeks ago 912MB
metisdao/mvm-andromeda l2geth sha256:39f16ebac9ce7021ad2c098cff544d68be1f739f51251a93f72a74e7e218d36f 18514286d9fe 6 weeks ago 41.1MB
We are using alchemy.com for the DATA_TRANSPORT_LAYER__L1_RPC_ENDPOINT and ETH1_HTTP settings.
We also noticed DATA_TRANSPORT_LAYER__L1_START_HEIGHT. Can this be set to a block height that is close to the current block height found at https://etherscan.io/address/0xf209815e595cdf3ed0aaf9665b1772e608ab9380 (to speed up sync), or is this not suggested for some reason?
After pulling the latest DTL image ("Pull latest DTL image"), my image digest is 6f7e62106d47.
This is different from the cb66c5b23f2d listed on GitHub.
Is that OK?
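One way to automate that comparison; the expected digest here is just the example value from this post, so substitute whatever is published for the current release. The function name is my own:

```shell
#!/bin/sh
# compare_digest LOCAL EXPECTED -> prints "match" or a mismatch message
compare_digest() {
  if [ "$1" = "$2" ]; then
    echo "match"
  else
    echo "mismatch: local=$1 expected=$2"
  fi
}

# Usage against the pulled image (same jq/cut pipeline shown earlier
# in this thread; uncomment to run):
# local_digest=$(docker image inspect metisdao/mvm-andromeda:dtl \
#   | jq -r '.[0].RepoDigests[0]' | cut -d ':' -f 2 | cut -c 1-12)
# compare_digest "$local_digest" "6f7e62106d47"
```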
I keep getting stuck on the same block:
l2geth-mainnet_1 | DEBUG[05-15|12:53:43.494] Ancient blocks frozen already number=2654066 hash=230200…5c9f25 frozen=2564066
I've deleted my l2geth data and restarted, but it always gets stuck at the same block. The dtl-mainnet container seems to be processing transactions. Any idea how to resolve this without having to do a full re-sync?
I can't solve this problem by using:
docker-compose down
sudo rm -rf /data/metis
docker-compose up -d
I've already tried reinstalling all nodes from scratch; the issue is the same.
Any ideas on how to fix this?
Hi Eric. My node has been stable for several months, but today it experienced a sync issue. Is there a way to recover without performing a full re-sync?
The log from the DTL is below.
metis-dtl | {"level":30,"time":1683954974005,"msg":"Service L1_Ingestion_Service is starting..."}
metis-dtl | {"level":30,"time":1683954974006,"host":"0.0.0.0","port":7878,"msg":"Server started and listening"}
metis-dtl | {"level":30,"time":1683954974007,"msg":"Service L1_Transport_Server can stop now"}
metis-dtl | {"level":30,"time":1683954974967,"highestSyncedL1Block":17245799,"targetL1Block":17247799,"msg":"Synchronizing events from Layer 1 (Ethereum)"}
metis-dtl | {"level":30,"time":1683954977769,"chainId":1088,"parsedEvent":{"index":26480,"target":"0x4200000000000000000000000000000000000007","data":"0xcbd4ece9000000000000000000000000727c6beb46bac2e0418cc1e3f83be1f30ab8056b0000000000000000000000008bab7a91dafbbf3df3219bb115f687edce48ff0e000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000000000000067700000000000000000000000000000000000000000000000000000000000000044ed8378f5000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000645e875700000000000000000000000000000000000000000000000000000000","gasLimit":"1900000","origin":"0x192E1101855bD523Ba69a9794e0217f0Db633510","blockNumber":17245800,"timestamp":1683954977,"ctcIndex":null},"msg":"Storing Event:"}
metis-dtl | {"level":40,"time":1683954977775,"message":"TransactionEnqueued: missing event: TransactionEnqueued","msg":"recovering from a missing event"}
metis-dtl | Well, that's that. We ran into a fatal error. Here's the dump. Goodbye!
metis-dtl | (node:1) UnhandledPromiseRejectionWarning: Error: unable to recover from missing event
metis-dtl | at L1IngestionService._start (/opt/optimism/packages/data-transport-layer/dist/src/services/l1-ingestion/service.js:161:31)
metis-dtl | at async L1IngestionService.start (/opt/optimism/packages/common-ts/dist/base-service.js:35:9)
metis-dtl | at async Promise.all (index 1)
metis-dtl | at async L1DataTransportService._start (/opt/optimism/packages/data-transport-layer/dist/src/services/main/service.js:64:13)
metis-dtl | at async L1DataTransportService.start (/opt/optimism/packages/common-ts/dist/base-service.js:35:9)
metis-dtl | at async /opt/optimism/packages/data-transport-layer/dist/src/services/run.js:76:9
metis-dtl | (Use node --trace-warnings ... to show where the warning was created)
metis-dtl | (node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
metis-dtl | (node:1) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
Verifier cannot sync transaction batches to tip: Cannot sync transaction batches to tip: Cannot sync batches: cannot apply batched transaction: Cannot apply batched transaction: insufficient balance for transfer
Hi Eric. My node keeps getting stuck every day. I can resolve it with a simple restart, but things are unstable. Below is an extract from l2geth at the block where it gets stuck. Is there anything I can try to stabilise the node?
metis-l2geth | TRACE[06-11|12:28:59.212] Deep froze ancient block number=6090681 hash=e13abb…b2687a
metis-l2geth | TRACE[06-11|12:28:59.212] Deep froze ancient block number=6090682 hash=851f33…730992
metis-l2geth | TRACE[06-11|12:28:59.212] Deep froze ancient block number=6090683 hash=bd4771…0b25e5
metis-l2geth | INFO [06-11|12:28:59.267] Deep froze chain segment blocks=1072 elapsed=193.833ms number=6090683 hash=bd4771…0b25e5
metis-l2geth | DEBUG[06-11|12:29:59.268] Ancient blocks frozen already number=6180684 hash=ab5e2a…1de05c frozen=6090684
metis-l2geth | DEBUG[06-11|12:30:59.269] Ancient blocks frozen already number=6180684 hash=ab5e2a…1de05c frozen=6090684
metis-l2geth | INFO [06-11|12:31:32.189] Syncing transaction batch range start=35539 end=35539
metis-l2geth | DEBUG[06-11|12:31:32.189] Fetching transaction batch index=35539
metis-l2geth | TRACE[06-11|12:31:32.264] Applying batched transaction index=6180684
metis-l2geth | TRACE[06-11|12:31:32.264] Applying indexed transaction index=6180684
metis-l2geth | DEBUG[06-11|12:31:32.264] Applying transaction to tip index=6180684 hash=0xa3a181ae0cb1e7d09fa8a4aa8b0e62d092e7db3b744ce9d6eedbf069321a1531 origin=sequencer
metis-l2geth | TRACE[06-11|12:31:32.264] Waiting for transaction to be added to chain hash=0xa3a181ae0cb1e7d09fa8a4aa8b0e62d092e7db3b744ce9d6eedbf069321a1531
metis-l2geth | DEBUG[06-11|12:31:32.264] Attempting to commit rollup transaction hash=0xa3a181ae0cb1e7d09fa8a4aa8b0e62d092e7db3b744ce9d6eedbf069321a1531
metis-l2geth | INFO [06-11|12:31:32.264] Use berlin InstructionSet
metis-l2geth | DEBUG[06-11|12:31:32.264] Current L1FeeInL2 fee=0
metis-l2geth | DEBUG[06-11|12:31:32.264] preCheck checknonce=true gas=248383
metis-l2geth | DEBUG[06-11|12:31:32.264] buygas gas=248383 initialGas=248383
metis-l2geth | DEBUG[06-11|12:31:32.264] getting in vm gas=225279 value=0 sender=0xd2c7aB41175155Bae488390914d77ede82f15A60 gasprice=13000000000
metis-l2geth | INFO [06-11|12:31:32.274] New block index=6180684 l1-timestamp=1686482556 l1-blocknumber=17456535 tx-hash=0xa3a181ae0cb1e7d09fa8a4aa8b0e62d092e7db3b744ce9d6eedbf069321a1531 queue-orign=sequencer gas=211322 fees=0.002747186 elapsed=9.943ms
metis-l2geth | TRACE[06-11|12:31:32.274] Waiting for slot to sign and propagate delay=0s
metis-l2geth | DEBUG[06-11|12:31:32.275] Persisted trie from memory database nodes=98 size=34.99KiB time=626.912µs gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=-2321228.00B
metis-l2geth | DEBUG[06-11|12:31:32.275] Miner got new head height=6180685 block-hash=0x0a93c0bd2866a32f885e7597d5adb6d6b789f34ba12186eb292b9f0424b55011 tx-hash=0xa3a181ae0cb1e7d09fa8a4aa8b0e62d092e7db3b744ce9d6eedbf069321a1531 tx-hash=0xa3a181ae0cb1e7d09fa8a4aa8b0e62d092e7db3b744ce9d6eedbf069321a1531
metis-l2geth | TRACE[06-11|12:31:32.275] Propagated block hash=0a93c0…b55011 recipients=0 duration=2562047h47m16.854s
metis-l2geth | TRACE[06-11|12:31:32.275] Announced block hash=0a93c0…b55011 recipients=0 duration=2562047h47m16.854s
metis-l2geth | DEBUG[06-11|12:31:32.275] Reinjecting stale transactions count=0
metis-l2geth | INFO [06-11|12:31:32.276] Fetch stateroot nil, retry in 1000ms i=0 index=6180684
metis-l2geth | INFO [06-11|12:31:33.277] Fetch stateroot nil, retry in 1000ms i=1 index=6180684
metis-l2geth | INFO [06-11|12:31:34.280] Fetch stateroot nil, retry in 1000ms i=2 index=6180684
metis-l2geth | INFO [06-11|12:31:35.281] Fetch stateroot nil, retry in 1000ms i=3 index=6180684
As you can see, this was paused for several hours until I restarted:
metis-l2geth | INFO [06-11|22:25:27.841] Fetch stateroot nil, retry in 1000ms i=35571 index=6180684
metis-l2geth | INFO [06-11|22:25:28.843] Fetch stateroot nil, retry in 1000ms i=35572 index=6180684
metis-l2geth | INFO [06-11|22:25:29.845] Fetch stateroot nil, retry in 1000ms i=35573 index=6180684
If you upgraded successfully, you will see the following log:
INFO [08-16|03:45:07.048] Initialised chain configuration config="{ChainID: 1088 Homestead: 0 DAO: <nil> DAOSupport: false EIP150: 0 EIP155: 0 EIP158: 0 Byzantium: 0 Constantinople: 0 Petersburg: 0 Istanbul: 0, Muir Glacier: 0, Berlin: 3380000, Engine: clique}"
The keyword is Berlin: 3380000; you can search for it in your l2geth log.
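That search can be scripted. A small sketch (the function name is mine; the compose service name is the one used elsewhere in this thread) that reads log lines on stdin and succeeds when the keyword is present:

```shell
#!/bin/sh
# has_berlin: reads l2geth log lines on stdin; succeeds if the Berlin
# activation keyword from the upgrade is present.
has_berlin() {
  grep -q 'Berlin: 3380000'
}

# Usage (uncomment to check a running node):
# docker-compose logs l2geth-mainnet | has_berlin \
#   && echo "upgrade OK" || echo "Berlin config not found"
```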
Hello, we solved this issue in the last upgrade. Today I checked the log and found this error again:
ERROR[04-22|01:37:24.173] Could not verify error="Verifier cannot sync transaction batches to tip: Cannot sync transaction batches to tip: Cannot sync batches: cannot apply batched transaction: Cannot apply batched transaction: Received tx at index 2386325 when looking for 2384945"
The dtl-mainnet container is operating fine and Infura is receiving requests per the docker-compose.yml configuration; the main issue is that l2geth-mainnet is throwing this error.
l2geth-error.txt
Ubuntu 22.04 is running on a Hyper-V VM with a bridged network adapter.