
ethereum-metrics-exporter's People

Contributors

nabaruns, pablocastellano, samcm, skylenet


ethereum-metrics-exporter's Issues

ethereum-metrics-exporter can "hammer" Lighthouse REST API

Observed with Lighthouse treestates, though this may also happen without.

This is a Lighthouse node that receives fairly heavy queries every 5 minutes, each taking >4s to return. When ethereum-metrics-exporter is running, this eventually becomes pathological, with queries taking >18s.

I am wondering whether there's something in the exporter that starts hammering Lighthouse with retries when queries take a long time.

Here's a screenshot of Lighthouse's P1 REST API response time (a reading of 10s really means >= 10s). You can see where the metrics exporter was turned off. The spikes every 5 minutes are the heavy REST queries from an app.


Add metric for basefee

EL blocks contain a field called basefee. Its fluctuation is something we'd like to track on shadow forks. It'd be great if we could add a metric for it and update the dashboard to visualize the fluctuations.
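A minimal sketch of what such a metric could look like, using go-ethereum's ethclient and the Prometheus client (both already in go.mod). The metric name eth_exe_block_base_fee_gwei and the polling loop are placeholders for illustration, not existing exporter code:

package main

import (
	"context"
	"log"
	"math/big"
	"net/http"
	"time"

	"github.com/ethereum/go-ethereum/ethclient"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// eth_exe_block_base_fee_gwei is a hypothetical metric name used for illustration.
var baseFeeGwei = prometheus.NewGauge(prometheus.GaugeOpts{
	Name: "eth_exe_block_base_fee_gwei",
	Help: "Base fee per gas of the latest execution block (in gwei).",
})

func main() {
	prometheus.MustRegister(baseFeeGwei)

	client, err := ethclient.Dial("http://localhost:8545") // execution client JSON-RPC
	if err != nil {
		log.Fatal(err)
	}

	go func() {
		for range time.Tick(12 * time.Second) { // roughly once per slot
			header, err := client.HeaderByNumber(context.Background(), nil) // nil = latest block
			if err != nil || header.BaseFee == nil {
				continue // pre-London blocks carry no base fee
			}
			gwei := new(big.Float).Quo(new(big.Float).SetInt(header.BaseFee), big.NewFloat(1e9))
			f, _ := gwei.Float64()
			baseFeeGwei.Set(f)
		}
	}()

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9095", nil))
}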

Support deneb hardfork

Since this morning, some metrics are no longer available because of the deneb hardfork:

Jan 17 12:17:17 devhub-rpc-server-01 ethereum-metrics-exporter[4142094]: {"component":"exporter","error":"failed to parse response: unrecognised data version \"deneb\"","level":"error","module":"consensus/beacon","msg":"Failed to get signed beacon block at finalized","time":"2024-01-17T12:17:17+01:00"}
Jan 17 12:17:17 devhub-rpc-server-01 ethereum-metrics-exporter[4142094]: {"component":"exporter","error":"failed to parse response: unrecognised data version \"deneb\"","level":"error","module":"consensus/beacon","msg":"Failed to get signed beacon block at head","time":"2024-01-17T12:17:17+01:00"}
Jan 17 12:17:17 devhub-rpc-server-01 ethereum-metrics-exporter[4142094]: {"component":"exporter","error":"failed to parse response: unrecognised data version \"deneb\"","level":"error","module":"consensus/beacon","msg":"Failed to get signed beacon block at finalized","time":"2024-01-17T12:17:17+01:00"}
Jan 17 12:17:17 devhub-rpc-server-01 ethereum-metrics-exporter[4142094]: {"component":"exporter","error":"failed to parse response: unrecognised data version \"deneb\"","level":"error","module":"consensus/beacon","msg":"Failed to get signed beacon block at head","time":"2024-01-17T12:17:17+01:00"}

Is it planned to support this hardfork?

SIGSEGV at start if execution module is disabled in config

I ran into the following issue when trying to use the exporter on a machine that only runs a consensus layer client.

With the following config:

consensus:
  enabled: true
  url: "http://localhost:5052"
  name: "consensus-client"
execution:
  enabled: false
diskUsage:
  enabled: false

The exporter fails to start with the following error output:

{"cfgFile":"/home/ethmetrics/.ethereum-metrics-exporter.yaml","level":"info","msg":"Loading config","time":"2022-08-23T00:48:48+02:00"}
{"component":"exporter","level":"info","msg":"Initializing...","time":"2022-08-23T00:48:48+02:00"}
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x38 pc=0xd3bff2]

goroutine 8 [running]:
github.com/samcm/ethereum-metrics-exporter/pkg/exporter.(*exporter).Serve(0xc0000cb290, {0x10f9ac0?, 0xc00003a078}, 0x238e)
        /home/admin/ethereum-metrics-exporter/pkg/exporter/exporter.go:132 +0xb2
github.com/samcm/ethereum-metrics-exporter/cmd.glob..func1(0x16cf240, {0xf686bf?, 0x4?, 0x4?})
        /home/admin/ethereum-metrics-exporter/cmd/root.go:20 +0x5c
github.com/spf13/cobra.(*Command).execute(0x16cf240, {0xc0000360b0, 0x4, 0x4})
        /home/admin/go/pkg/mod/github.com/spf13/[email protected]/command.go:860 +0x663
github.com/spf13/cobra.(*Command).ExecuteC(0x16cf240)
        /home/admin/go/pkg/mod/github.com/spf13/[email protected]/command.go:974 +0x3bd
github.com/spf13/cobra.(*Command).Execute(...)
        /home/admin/go/pkg/mod/github.com/spf13/[email protected]/command.go:902
github.com/samcm/ethereum-metrics-exporter/cmd.Execute()
        /home/admin/ethereum-metrics-exporter/cmd/root.go:47 +0x25
created by main.main
        /home/admin/ethereum-metrics-exporter/main.go:16 +0x85

The expected behaviour is that the exporter should start with the execution plugin disabled, rather than encountering SIGSEGV.

If execution is disabled in the config, the execution property of the exporter is not initialised in the Init function of pkg/exporter/exporter.go. Because of this, the panic occurs when the Serve function calls the e.execution.URL() method.

The issue can be avoided by replacing e.execution.URL() with e.config.Execution.URL:

func (e *exporter) Serve(ctx context.Context, port int) error {
        e.log.
                WithField("consensus_url", e.config.Consensus.URL).
                //WithField("execution_url", e.execution.URL()).
                WithField("execution_url", e.config.Execution.URL).
                Info(fmt.Sprintf("Starting metrics server on :%v", port))
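
An alternative sketch that avoids the panic while still logging the live execution URL when the module is enabled: guard the call with a nil check. The types below are simplified stand-ins for the exporter's real structs, for illustration only:

package main

import (
	"fmt"

	"github.com/sirupsen/logrus"
)

// execution mirrors the exporter's execution module for illustration only.
type execution struct{ url string }

func (e *execution) URL() string { return e.url }

type exporter struct {
	log          logrus.FieldLogger
	execution    *execution // nil when the execution module is disabled in config
	consensusURL string
}

func (e *exporter) serve(port int) {
	logger := e.log.WithField("consensus_url", e.consensusURL)

	// Guard the call so a disabled execution module cannot cause a nil
	// pointer dereference.
	if e.execution != nil {
		logger = logger.WithField("execution_url", e.execution.URL())
	}

	logger.Info(fmt.Sprintf("Starting metrics server on :%v", port))
}

func main() {
	e := &exporter{log: logrus.New(), consensusURL: "http://localhost:5052"}
	e.serve(9090)
}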

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Ignored or Blocked

These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.

Detected dependencies

dockerfile
Dockerfile
  • docker/dockerfile 1
  • golang 1.20
goreleaser-debian.Dockerfile
goreleaser-scratch.Dockerfile
github-actions
.github/workflows/alpha-releases.yaml
  • actions/checkout v3
  • ubuntu 20.04
.github/workflows/golangci-lint.yml
  • actions/setup-go v3
  • actions/checkout v3
  • golangci/golangci-lint-action v3
.github/workflows/goreleaser.yaml
  • actions/checkout v3
  • actions/setup-go v3
  • docker/setup-qemu-action v2
  • docker/setup-buildx-action v2
  • docker/login-action v2
  • goreleaser/goreleaser-action v4
gomod
go.mod
  • go 1.17
  • github.com/ethereum/go-ethereum v1.10.26
  • github.com/ethpandaops/beacon v0.34.0
  • github.com/onrik/ethrpc v1.1.1
  • github.com/prometheus/client_golang v1.16.0
  • github.com/sirupsen/logrus v1.9.0
  • github.com/spf13/cobra v1.6.1
  • gopkg.in/yaml.v2 v2.4.0

Ethereum Metrics Exporter Listening on Port 4222?

I see this in my sudo netstat -tnlp output:

tcp6 0 0 :::4222 :::* LISTEN 755/ethereum-metric

I have not yet found a reference to this port in the code or docs.

I am running Ethereum Metrics Exporter with --metrics-port 9095 and it is responding to requests on that port correctly.

What is port 4222 being used for?

linux/arm64 docker image not compatible

ethereum-metrics-exporter The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested

eth_exe_sync_is_syncing is being set to 1 while there's no indication of it in logs

eth_exe_sync_is_syncing intermittently changes to 1 on one of our instances, but the execution client logs show no errors or signs of falling behind.

Software:

  • Lighthouse v4.2.0
  • Erigon 2.44.0-stable

Both clients have been running for ~9 days and the intermittent issue surfaced recently.

Can you please provide details on how the eth_exe_sync_is_syncing metric is calculated so we can check the data on the client and figure out if it's actually an issue with the client or with ethereum-metrics-exporter?
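For reference, a gauge like this is usually derived from the execution client's eth_syncing JSON-RPC call, which returns false when the node considers itself synced and a progress object otherwise; I can't tell from the metric alone whether the exporter does exactly this. A minimal sketch of that mapping using go-ethereum's ethclient (not the exporter's actual code):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	client, err := ethclient.Dial("http://localhost:8545")
	if err != nil {
		log.Fatal(err)
	}

	// SyncProgress wraps eth_syncing: it returns nil when the node reports
	// `false`, and a progress struct otherwise.
	progress, err := client.SyncProgress(context.Background())
	if err != nil {
		log.Fatal(err)
	}

	isSyncing := 0
	if progress != nil {
		isSyncing = 1
	}
	fmt.Println("a metric like eth_exe_sync_is_syncing would be set to:", isSyncing)
}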

teku metrics exporter error

teku - 22.9.1
geth - 1.10.23

{"component":"exporter","error":"fetched block is nil","level":"error","module":"consensus/beacon","msg":"Subscriber error","time":"2022-09-21T23:46:26Z","topic":"block"}

Some json-rpc endpoints are a bit more strict

There are some JSON-RPC endpoints that are a bit more strict than others. When running against them I get these errors:

{"component":"exporter","error":"Error -32600 (Params must be an array)","exporter":"execution","level":"error","module":"net","msg":"Failed to get peer count","time":"2023-02-09T07:49:42Z"}                                                                                       │
{"component":"exporter","error":"Error -32600 (Params must be an array)","exporter":"execution","level":"error","module":"web3","msg":"Failed to get node info","time":"2023-02-09T07:49:42Z"}                                                                                       

txpool_status does not exist

It seems like this project is calling the txpool_status method on the execution RPC endpoint, which does not exist on Geth 1.10.18-stable.

Geth is complaining with these messages in the log:

WARN [06-06|19:45:59.953] Served txpool_status                     conn=127.0.0.1:55350 reqid=0 duration="37.21µs"   err="the method txpool_status does not exist/is not available"

txpool_status response from Nethermind isn't parsed correctly because it doesn't return 0x values

Nov 12 22:16:49 ns1 ethereum-metrics-exporter[92538]: {"component":"exporter","exporter":"execution","level":"error","module":"general","msg":"Failed to get txpool status: json: cannot unmarshal non-string into Go struct field TXPoolStatus.pending of type hexutil.Uint64","time":"2023-11-12T22:16:49-05:00"}

curl 192.168.0.3:8547 -X POST -H "Content-Type: application/json" --data '{"jsonrpc": "2.0","id": 0,"method": "txpool_status","params": []}'

{"jsonrpc":"2.0","result":{"pending":2048,"queued":0},"id":0}%

Nimbus - Failed to fetch peers

The following error is repeating in the logs:

Nov 19 18:00:54 Ubuntu-VM ethereum-metrics-exporter[21296]: {"component":"exporter","error":"nats: maximum payload exceeded","level":"error","module":"consensus/beacon","msg":"Failed to fetch peers","time":"2022-11-19T18:00:54-07:00"}

All other metrics seem to pull in just fine, both the execution and the remaining consensus metrics. I am also able to get peer counts from the metrics Nimbus itself exposes with no problem.

Attached is a logfile of the startup of ethereum-metrics-exporter.

  • Nimbus v. 22.10.1
  • Besu v. 22.10.0
  • go v. 1.19.3
  • prometheus v. 2.39.1

ethereum-metrics-exporter.log

[Consensus/beacon] Failed to handle event log error with Lighthouse v4.2.0

Hey, I get error logs for the consensus module that seem strange to me:

Versions

  • distro: Ubuntu 20.04 LTS
  • ethereum-metrics-exporter: 0.21.0
  • execution client: erigon/2.45.1/linux-amd64/go1.19.3
  • consensus client: Lighthouse/v4.2.0-c547a11/x86_64-linux
  • network: goerli

Problem

When starting the ethereum-metrics-exporter binary I get these errors in the log, even though the exporter otherwise seems to be working well:

{"component":"exporter","level":"error","module":"consensus/beacon","msg":"Failed to handle event: invalid voluntary exit event","time":"2023-06-21T18:09:05+02:00"}

Launch command

ethereum-metrics-exporter \
  --metrics-port 1234 \
  --consensus-url http://localhost:5052 \
  --execution-url http://localhost:8545 \
  --monitored-directories /home/ethereum/var/lib

Logs

Started Ethereum exporter service.
{"cfgFile":"","level":"info","msg":"Loading config","time":"2023-06-21T17:55:25+02:00"}
{"component":"exporter","level":"info","msg":"Initializing...","time":"2023-06-21T17:55:25+02:00"}
{"component":"exporter","level":"info","modules":"eth, net, web3","msg":"Initializing execution...","time":"2023-06-21T17:55:25+02:00"}
{"component":"exporter","exporter":"execution","level":"info","msg":"Enabling sync status metrics","time":"2023-06-21T17:55:25+02:00"}
{"component":"exporter","exporter":"execution","level":"info","msg":"Enabling general metrics","time":"2023-06-21T17:55:25+02:00"}
{"component":"exporter","exporter":"execution","level":"info","msg":"Enabling block metrics","time":"2023-06-21T17:55:25+02:00"}
{"component":"exporter","exporter":"execution","level":"info","msg":"Enabling web3 metrics","time":"2023-06-21T17:55:25+02:00"}
{"component":"exporter","exporter":"execution","level":"info","msg":"Enabling net metrics","time":"2023-06-21T17:55:25+02:00"}
{"component":"exporter","level":"info","msg":"Initializing disk usage...","time":"2023-06-21T17:55:25+02:00"}
{"component":"exporter","consensus_url":"http://localhost:5052","execution_url":"http://localhost:8545","level":"info","msg":"Starting metrics server on :1234","time":"2023-06-21T17:55:25+02:00"}
{"component":"exporter","execution_url":"http://localhost:8545","level":"info","msg":"Starting execution metrics...","time":"2023-06-21T17:55:25+02:00"}
{"component":"exporter","level":"info","msg":"Starting disk usage metrics...","time":"2023-06-21T17:55:25+02:00"}
{"component":"exporter","consensus_url":"http://localhost:5052","level":"info","msg":"Starting consensus metrics...","time":"2023-06-21T17:55:25+02:00"}
{"component":"exporter","exporter":"execution","level":"info","msg":"Started metrics exporter jobs","time":"2023-06-21T17:55:25+02:00"}
{"component":"exporter","level":"info","module":"consensus/beacon","msg":"Initializing beacon state","time":"2023-06-21T17:55:25+02:00"}
{"component":"exporter","level":"info","module":"consensus/beacon","msg":"Beacon state initialized! Ready to serve requests...","time":"2023-06-21T17:55:25+02:00"}
{"component":"exporter","level":"error","module":"consensus/beacon","msg":"Failed to handle event: invalid voluntary exit event","time":"2023-06-21T17:59:42+02:00"}
{"component":"exporter","level":"error","module":"consensus/beacon","msg":"Failed to handle event: invalid voluntary exit event","time":"2023-06-21T18:00:43+02:00"}
{"component":"exporter","level":"error","module":"consensus/beacon","msg":"Failed to handle event: invalid voluntary exit event","time":"2023-06-21T18:01:44+02:00"}
{"component":"exporter","level":"error","module":"consensus/beacon","msg":"Failed to handle event: invalid voluntary exit event","time":"2023-06-21T18:02:45+02:00"}

Checking consensus client HTTP port

The beacon node responds correctly on port 5052, with no error logs. Same for the execution client; my validator is attesting correctly on goerli.

$ curl -s -X GET "http://localhost:5052/eth/v1/beacon/headers/head" -H  "accept: application/json" | jq
{
  "execution_optimistic": false,
  "finalized": false,
  "data": {
    "root": "0x233a05580d58acde8de50a761727e787f5f5fd26b77b90b190e83b44bb038611",
    "canonical": true,
    "header": {
      "message": {
        "slot": "5904592",
        "proposer_index": "7141",
        "parent_root": "0x38f27cf545ce8b819f7836dd9f403de6cc97f21aaf43a5ad381f9e94e39e488c",
        "state_root": "0x4be4a6228ca9ce5e6361978f23bbb7a0f7279e82bc93d5cf03a1cc8690a7cb93",
        "body_root": "0xced47a8b6bb0b79fe1f027966630695cc97f2fba1488881c867132f8d320228d"
      },
      "signature": "0xa4242f45a124ad248c3a0c9df7f78feea0d99d607b88905435eb21bc9ba3ad9c6945db9b61406763903c4a337d11be9517032269a565d8eceb573fa2a59fce7317de933be188de711b8edd6a4e6c2dc4b6698ee0a015bf3eaa8949b29fa567cd"
    }
  }
}

Ethereum exporter Metrics

# HELP eth_con_beacon_attestations The amount of attestations in the block.
# TYPE eth_con_beacon_attestations gauge
eth_con_beacon_attestations{block_id="finalized",module="beacon",node="consensus",version="CAPELLA"} 128
eth_con_beacon_attestations{block_id="head",module="beacon",node="consensus",version="CAPELLA"} 128
# HELP eth_con_beacon_deposits The amount of deposits in the block.
# TYPE eth_con_beacon_deposits gauge
eth_con_beacon_deposits{block_id="finalized",module="beacon",node="consensus",version="CAPELLA"} 0
eth_con_beacon_deposits{block_id="head",module="beacon",node="consensus",version="CAPELLA"} 0
# HELP eth_con_beacon_empty_slots_count The number of slots that have expired without a block proposed.
# TYPE eth_con_beacon_empty_slots_count counter
eth_con_beacon_empty_slots_count{module="beacon",node="consensus"} 0
# HELP eth_con_beacon_finality_checkpoint_epochs That epochs of the finality checkpoints.
# TYPE eth_con_beacon_finality_checkpoint_epochs gauge
eth_con_beacon_finality_checkpoint_epochs{checkpoint="finalized",module="beacon",node="consensus",state_id="head"} 184517
eth_con_beacon_finality_checkpoint_epochs{checkpoint="justified",module="beacon",node="consensus",state_id="head"} 184518
eth_con_beacon_finality_checkpoint_epochs{checkpoint="previous_justified",module="beacon",node="consensus",state_id="head"} 184517
# HELP eth_con_beacon_proposer_delay The delay of the proposer.
# TYPE eth_con_beacon_proposer_delay histogram
eth_con_beacon_proposer_delay_bucket{module="beacon",node="consensus",le="0"} 0
eth_con_beacon_proposer_delay_bucket{module="beacon",node="consensus",le="1000"} 15
eth_con_beacon_proposer_delay_bucket{module="beacon",node="consensus",le="2000"} 56
eth_con_beacon_proposer_delay_bucket{module="beacon",node="consensus",le="3000"} 62
eth_con_beacon_proposer_delay_bucket{module="beacon",node="consensus",le="4000"} 64
eth_con_beacon_proposer_delay_bucket{module="beacon",node="consensus",le="5000"} 64
eth_con_beacon_proposer_delay_bucket{module="beacon",node="consensus",le="6000"} 65
eth_con_beacon_proposer_delay_bucket{module="beacon",node="consensus",le="7000"} 65
eth_con_beacon_proposer_delay_bucket{module="beacon",node="consensus",le="8000"} 65
eth_con_beacon_proposer_delay_bucket{module="beacon",node="consensus",le="9000"} 65
eth_con_beacon_proposer_delay_bucket{module="beacon",node="consensus",le="10000"} 65
eth_con_beacon_proposer_delay_bucket{module="beacon",node="consensus",le="11000"} 65
eth_con_beacon_proposer_delay_bucket{module="beacon",node="consensus",le="12000"} 65
eth_con_beacon_proposer_delay_bucket{module="beacon",node="consensus",le="+Inf"} 65
eth_con_beacon_proposer_delay_sum{module="beacon",node="consensus"} 93345
eth_con_beacon_proposer_delay_count{module="beacon",node="consensus"} 65
# HELP eth_con_beacon_reorg_count The count of reorgs.
# TYPE eth_con_beacon_reorg_count counter
eth_con_beacon_reorg_count{module="beacon",node="consensus"} 0
# HELP eth_con_beacon_reorg_depth The number of reorgs.
# TYPE eth_con_beacon_reorg_depth counter
eth_con_beacon_reorg_depth{module="beacon",node="consensus"} 0
# HELP eth_con_beacon_slashings The amount of slashings in the block.
# TYPE eth_con_beacon_slashings gauge
eth_con_beacon_slashings{block_id="finalized",module="beacon",node="consensus",type="attester",version="CAPELLA"} 0
eth_con_beacon_slashings{block_id="finalized",module="beacon",node="consensus",type="proposer",version="CAPELLA"} 0
eth_con_beacon_slashings{block_id="head",module="beacon",node="consensus",type="attester",version="CAPELLA"} 0
eth_con_beacon_slashings{block_id="head",module="beacon",node="consensus",type="proposer",version="CAPELLA"} 0
# HELP eth_con_beacon_slot The slot number in the block.
# TYPE eth_con_beacon_slot gauge
eth_con_beacon_slot{block_id="finalized",module="beacon",node="consensus",version="CAPELLA"} 5.904575e+06
eth_con_beacon_slot{block_id="head",module="beacon",node="consensus",version="CAPELLA"} 5.904654e+06
# HELP eth_con_beacon_transactions The amount of transactions in the block.
# TYPE eth_con_beacon_transactions gauge
eth_con_beacon_transactions{block_id="finalized",module="beacon",node="consensus",version="CAPELLA"} 79
eth_con_beacon_transactions{block_id="head",module="beacon",node="consensus",version="CAPELLA"} 157
# HELP eth_con_beacon_voluntary_exits The amount of voluntary exits in the block.
# TYPE eth_con_beacon_voluntary_exits gauge
eth_con_beacon_voluntary_exits{block_id="finalized",module="beacon",node="consensus",version="CAPELLA"} 0
eth_con_beacon_voluntary_exits{block_id="head",module="beacon",node="consensus",version="CAPELLA"} 0
# HELP eth_con_beacon_withdrawals The amount of withdrawals in the block.
# TYPE eth_con_beacon_withdrawals gauge
eth_con_beacon_withdrawals{block_id="finalized",module="beacon",node="consensus",version="CAPELLA"} 16
eth_con_beacon_withdrawals{block_id="head",module="beacon",node="consensus",version="CAPELLA"} 16
# HELP eth_con_beacon_withdrawals_amount_gwei The sum amount of all the withdrawals in the block (in gwei).
# TYPE eth_con_beacon_withdrawals_amount_gwei gauge
eth_con_beacon_withdrawals_amount_gwei{block_id="finalized",module="beacon",node="consensus",version="CAPELLA"} 3.2764565e+07
eth_con_beacon_withdrawals_amount_gwei{block_id="head",module="beacon",node="consensus",version="CAPELLA"} 3.3202973e+07
# HELP eth_con_beacon_withdrawals_index_max The maximum index of the withdrawals in the block.
# TYPE eth_con_beacon_withdrawals_index_max gauge
eth_con_beacon_withdrawals_index_max{block_id="finalized",module="beacon",node="consensus",version="CAPELLA"} 8.944379e+06
eth_con_beacon_withdrawals_index_max{block_id="head",module="beacon",node="consensus",version="CAPELLA"} 8.945435e+06
# HELP eth_con_beacon_withdrawals_index_min The minimum index of the withdrawals in the block.
# TYPE eth_con_beacon_withdrawals_index_min gauge
eth_con_beacon_withdrawals_index_min{block_id="finalized",module="beacon",node="consensus",version="CAPELLA"} 8.944364e+06
eth_con_beacon_withdrawals_index_min{block_id="head",module="beacon",node="consensus",version="CAPELLA"} 8.94542e+06
# HELP eth_con_event_count The count of beacon events.
# TYPE eth_con_event_count counter
eth_con_event_count{event="attestation",module="event",node="consensus"} 38392
eth_con_event_count{event="block",module="event",node="consensus"} 65
eth_con_event_count{event="contribution_and_proof",module="event",node="consensus"} 934
eth_con_event_count{event="finalized_checkpoint",module="event",node="consensus"} 2
eth_con_event_count{event="head",module="event",node="consensus"} 65
eth_con_event_count{event="voluntary_exit",module="event",node="consensus"} 14
# HELP eth_con_event_time_since_last_subscription_event_ms The amount of time since the last subscription event (in milliseconds).
# TYPE eth_con_event_time_since_last_subscription_event_ms gauge
eth_con_event_time_since_last_subscription_event_ms{module="event",node="consensus"} 0
# HELP eth_con_fork_activated The activation status of the fork (1 for activated).
# TYPE eth_con_fork_activated gauge
eth_con_fork_activated{fork="ALTAIR",module="fork",node="consensus"} 1
eth_con_fork_activated{fork="BELLATRIX",module="fork",node="consensus"} 1
eth_con_fork_activated{fork="CAPELLA",module="fork",node="consensus"} 1
eth_con_fork_activated{fork="GENESIS",module="fork",node="consensus"} 1
# HELP eth_con_fork_current The current fork.
# TYPE eth_con_fork_current gauge
eth_con_fork_current{fork="CAPELLA",module="fork",node="consensus"} 1
# HELP eth_con_fork_epoch The epoch for the fork.
# TYPE eth_con_fork_epoch gauge
eth_con_fork_epoch{fork="ALTAIR",module="fork",node="consensus"} 36660
eth_con_fork_epoch{fork="BELLATRIX",module="fork",node="consensus"} 112260
eth_con_fork_epoch{fork="CAPELLA",module="fork",node="consensus"} 162304
eth_con_fork_epoch{fork="GENESIS",module="fork",node="consensus"} 0
# HELP eth_con_health_check_results_total Total of health checks results.
# TYPE eth_con_health_check_results_total counter
eth_con_health_check_results_total{module="health",node="consensus",result="success"} 62
# HELP eth_con_health_up Whether the node is up or not.
# TYPE eth_con_health_up gauge
eth_con_health_up{module="health",node="consensus"} 1
# HELP eth_con_node_version The version of the running beacon node.
# TYPE eth_con_node_version gauge
eth_con_node_version{module="general",node="consensus",version="Lighthouse/v4.2.0-c547a11/x86_64-linux"} 1
# HELP eth_con_peers The count of peers connected to beacon node.
# TYPE eth_con_peers gauge
eth_con_peers{direction="inbound",module="general",node="consensus",state="connected"} 24
eth_con_peers{direction="inbound",module="general",node="consensus",state="connecting"} 0
eth_con_peers{direction="inbound",module="general",node="consensus",state="disconnected"} 61
eth_con_peers{direction="inbound",module="general",node="consensus",state="disconnecting"} 0
eth_con_peers{direction="outbound",module="general",node="consensus",state="connected"} 76
eth_con_peers{direction="outbound",module="general",node="consensus",state="connecting"} 0
eth_con_peers{direction="outbound",module="general",node="consensus",state="disconnected"} 180
eth_con_peers{direction="outbound",module="general",node="consensus",state="disconnecting"} 0
# HELP eth_con_spec_base_reward_factor The base reward factor.
# TYPE eth_con_spec_base_reward_factor gauge
eth_con_spec_base_reward_factor{module="spec",node="consensus"} 64
# HELP eth_con_spec_config_name The name of the config.
# TYPE eth_con_spec_config_name gauge
eth_con_spec_config_name{module="spec",name="prater",node="consensus"} 1
# HELP eth_con_spec_deposit_chain_id The chain ID of the deposit contract.
# TYPE eth_con_spec_deposit_chain_id gauge
eth_con_spec_deposit_chain_id{module="spec",node="consensus"} 5
# HELP eth_con_spec_effective_balance_increment The effective balance increment.
# TYPE eth_con_spec_effective_balance_increment gauge
eth_con_spec_effective_balance_increment{module="spec",node="consensus"} 1e+09
# HELP eth_con_spec_epochs_per_sync_committee_period The number of epochs per sync committee period.
# TYPE eth_con_spec_epochs_per_sync_committee_period gauge
eth_con_spec_epochs_per_sync_committee_period{module="spec",node="consensus"} 256
# HELP eth_con_spec_eth1_follow_distance The number of blocks to follow behind the head of the eth1 chain.
# TYPE eth_con_spec_eth1_follow_distance gauge
eth_con_spec_eth1_follow_distance{module="spec",node="consensus"} 2048
# HELP eth_con_spec_genesis_delay The number of epochs to wait before processing the genesis block.
# TYPE eth_con_spec_genesis_delay gauge
eth_con_spec_genesis_delay{module="spec",node="consensus"} 1.919188e+06
# HELP eth_con_spec_max_attestations The maximum number of attestations.
# TYPE eth_con_spec_max_attestations gauge
eth_con_spec_max_attestations{module="spec",node="consensus"} 128
# HELP eth_con_spec_max_deposits The maximum number of deposits.
# TYPE eth_con_spec_max_deposits gauge
eth_con_spec_max_deposits{module="spec",node="consensus"} 16
# HELP eth_con_spec_max_effective_balance The maximum effective balance.
# TYPE eth_con_spec_max_effective_balance gauge
eth_con_spec_max_effective_balance{module="spec",node="consensus"} 3.2e+10
# HELP eth_con_spec_max_validators_per_committee The maximum number of validators per committee.
# TYPE eth_con_spec_max_validators_per_committee gauge
eth_con_spec_max_validators_per_committee{module="spec",node="consensus"} 2048
# HELP eth_con_spec_min_deposit_amount The minimum deposit amount.
# TYPE eth_con_spec_min_deposit_amount gauge
eth_con_spec_min_deposit_amount{module="spec",node="consensus"} 1e+09
# HELP eth_con_spec_min_genesis_active_validator_count The minimum number of active validators at genesis.
# TYPE eth_con_spec_min_genesis_active_validator_count gauge
eth_con_spec_min_genesis_active_validator_count{module="spec",node="consensus"} 16384
# HELP eth_con_spec_min_sync_committee_participants The minimum number of sync committee participants.
# TYPE eth_con_spec_min_sync_committee_participants gauge
eth_con_spec_min_sync_committee_participants{module="spec",node="consensus"} 1
# HELP eth_con_spec_preset_base The base of the preset.
# TYPE eth_con_spec_preset_base gauge
eth_con_spec_preset_base{module="spec",node="consensus",preset="mainnet"} 1
# HELP eth_con_spec_safe_slots_to_update_justified The number of slots to wait before updating the justified checkpoint.
# TYPE eth_con_spec_safe_slots_to_update_justified gauge
eth_con_spec_safe_slots_to_update_justified{module="spec",node="consensus"} 8
# HELP eth_con_spec_seconds_per_eth1_block The number of seconds per ETH1 block.
# TYPE eth_con_spec_seconds_per_eth1_block gauge
eth_con_spec_seconds_per_eth1_block{module="spec",node="consensus"} 14
# HELP eth_con_spec_seconds_per_slot The number of seconds per slot.
# TYPE eth_con_spec_seconds_per_slot gauge
eth_con_spec_seconds_per_slot{module="spec",node="consensus"} 12
# HELP eth_con_spec_slots_per_epoch The number of slots per epoch.
# TYPE eth_con_spec_slots_per_epoch gauge
eth_con_spec_slots_per_epoch{module="spec",node="consensus"} 32
# HELP eth_con_spec_sync_committee_size The sync committee size.
# TYPE eth_con_spec_sync_committee_size gauge
eth_con_spec_sync_committee_size{module="spec",node="consensus"} 512
# HELP eth_con_spec_target_committee_size The target committee size.
# TYPE eth_con_spec_target_committee_size gauge
eth_con_spec_target_committee_size{module="spec",node="consensus"} 128
# HELP eth_con_spec_terminal_block_hash_activation_epoch The epoch at which the terminal block hash is activated.
# TYPE eth_con_spec_terminal_block_hash_activation_epoch gauge
eth_con_spec_terminal_block_hash_activation_epoch{module="spec",node="consensus"} 1.8446744073709552e+19
# HELP eth_con_spec_terminal_total_difficulty The terminal total difficulty.
# TYPE eth_con_spec_terminal_total_difficulty gauge
eth_con_spec_terminal_total_difficulty{module="spec",node="consensus"} 1.079e+07
# HELP eth_con_spec_terminal_total_difficulty_trillions The terminal total difficulty in trillions.
# TYPE eth_con_spec_terminal_total_difficulty_trillions gauge
eth_con_spec_terminal_total_difficulty_trillions{module="spec",node="consensus"} 0
# HELP eth_con_sync_distance The sync distance of the node.
# TYPE eth_con_sync_distance gauge
eth_con_sync_distance{module="sync",node="consensus"} 2
# HELP eth_con_sync_estimated_highest_slot The estimated highest slot of the network.
# TYPE eth_con_sync_estimated_highest_slot gauge
eth_con_sync_estimated_highest_slot{module="sync",node="consensus"} 5.904653e+06
# HELP eth_con_sync_head_slot The current slot of the node.
# TYPE eth_con_sync_head_slot gauge
eth_con_sync_head_slot{module="sync",node="consensus"} 5.904651e+06
# HELP eth_con_sync_is_syncing 1 if the node is in syncing state.
# TYPE eth_con_sync_is_syncing gauge
eth_con_sync_is_syncing{module="sync",node="consensus"} 0
...

Failed to request signed beacon block

Hi,

Trying out this awesome exporter with Prometheus and Grafana and got it working, but still see an error in the logs.
Using Geth (latest) for EL and Prysm (latest) for CL.

The error in the logs shows this:

ethereum-metrics-exporter[25186]: {"component":"exporter","error":"failed to request signed beacon block: GET failed with status 500: {\"message\":\"GetBlockV2: rpc error: code = NotFound desc = Could not find requested block: signed beacon block can't be nil\",\"code\":500}","level":"error","module":"consensus/beacon","msg":"Subscriber error","time":"2022-08-01T11:21:00+02:00","topic":"block"}

Is this a misconfiguration from my side or could it be something else?

Flags used: --consensus-url=http://localhost:3500 --execution-url=http://localhost:8545

Prysm public API: https://docs.prylabs.network/docs/how-prysm-works/prysm-public-api

High CPU usage when beacon node is overloaded

The metrics exporter keeps hammering the beacon node when it is unable to produce events fast enough, causing very high CPU utilization while the beacon node is unable to respond.

To fix this, the ethereum metrics exporter might want to back off when it doesn't get a message back from the beacon node.

Logs:

nats: slow consumer, messages dropped on connection [4] for subscription on "raw_event"

CPU up to 600% utilization.
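
A rough sketch of the kind of backoff that could help, assuming a polling loop around a request to the beacon node; fetchEvents is a placeholder, not a function in the exporter:

package main

import (
	"errors"
	"log"
	"math/rand"
	"time"
)

// fetchEvents stands in for a request to the beacon node.
func fetchEvents() error {
	return errors.New("beacon node overloaded") // simulate a struggling node
}

func main() {
	const (
		baseDelay = time.Second
		maxDelay  = time.Minute
	)
	delay := baseDelay

	for i := 0; i < 5; i++ { // bounded loop for the example
		if err := fetchEvents(); err != nil {
			// Exponential backoff with jitter: stop hammering a node
			// that is already struggling to keep up.
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			log.Printf("request failed (%v), retrying in %v", err, delay+jitter)
			time.Sleep(delay + jitter)

			if delay *= 2; delay > maxDelay {
				delay = maxDelay
			}
			continue
		}
		delay = baseDelay // reset after a successful request
	}
}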
