
Pure Storage FlashBlade OpenMetrics exporter

OpenMetrics exporter for Pure Storage FlashBlade.

Support Statement

This exporter is provided under Best Efforts support by the Pure Portfolio Solutions Group, Open Source Integrations team. For feature requests and bugs please use GitHub Issues. We will address these as soon as we can, but there are no specific SLAs.

Overview

This application helps monitor Pure Storage FlashBlades by providing an "exporter": it extracts data from the Purity API and converts it to the OpenMetrics format, which can be consumed by tools such as Prometheus.

The stateless design of the exporter allows for easy configuration management as well as scalability for a whole fleet of Pure Storage systems. Each time Prometheus scrapes metrics for a specific system, it provides the target hostname via a GET parameter and the API token in the Authorization header of the request to this exporter.
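The per-scrape flow above can be sketched as follows; the exporter host, array host, and token are hypothetical placeholders:

```python
# Sketch of how a scrape request to the exporter is assembled.
# All host names and the token below are placeholders, not real values.
from urllib.parse import urlencode
from urllib.request import Request

def build_scrape_request(exporter: str, array_host: str, api_token: str) -> Request:
    """Build the HTTP request Prometheus would send to the exporter."""
    # The target array is identified via the 'endpoint' GET parameter...
    query = urlencode({"endpoint": array_host})
    url = f"http://{exporter}/metrics?{query}"
    # ...and the array's API token travels in the Authorization header.
    return Request(url, headers={"Authorization": f"Bearer {api_token}"})

req = build_scrape_request("exporter.example:9491", "fb01.example", "T-0000")
```

Because the exporter keeps no state, any number of such requests for different arrays can hit the same exporter instance.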

To monitor your Pure Storage appliances, you will need to create a dedicated user on your array and assign it read-only permissions. Afterwards, you also have to create a new API token for that user.

Building and Deploying

The exporter is a Go application based on the Prometheus Go client library and Resty, a simple but reliable HTTP and REST client library for Go. It is preferably built and launched via Docker. You can also scale the exporter deployment to multiple containers on Kubernetes thanks to the stateless nature of the application.


The official docker images are available at Quay.io

docker pull quay.io/purestorage/pure-fb-om-exporter:<release>

where the release tag follows semantic versioning.


Binaries

Binary downloads of the exporter can be found on the Releases page.


Local development

The following commands describe how to run a typical build:

# clone the repository
git clone [email protected]:PureStorage-OpenConnect/pure-fb-openmetrics-exporter.git

# modify the code and build the package
cd pure-fb-openmetrics-exporter
...
make build

The newly built exporter executable can be found in the ./out/bin directory.

Optionally, to build the binary with the vendor cache, you may use

make build-with-vendor

Docker image

The provided Dockerfile can be used to generate a Docker image of the exporter. It accepts the version of the package as a build argument, so you can build the image using docker as follows

docker build -t pure-fb-ome:$VERSION .

Authentication

The exporter authenticates to each scraped appliance using its REST API token, so for each array you must provide the token of an account that has a 'readonly' role. The api-token can be provided in two ways:

  • using the HTTP Authorization header of type 'Bearer', or
  • via a configuration map in a specific configuration file.

The first option requires specifying the api-token value as the authorization parameter of the specific job in the Prometheus configuration file. The second option provides the FlashBlade/api-token key-pair map for a list of arrays in a simple YAML configuration file that is passed as a parameter to the exporter. This makes it possible to write more concise Prometheus configuration files and also to configure other scrapers that cannot use the HTTP authentication header.
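As a sketch of the first option, a Prometheus scrape job can carry the api-token in the Authorization header; the host names and token below are placeholders, not real values:

```yaml
scrape_configs:
  - job_name: flashblade
    metrics_path: /metrics
    params:
      endpoint: [fb01.example.com]      # target array, passed as GET parameter
    authorization:
      type: Bearer
      credentials: <api-token>          # read-only account's REST API token
    static_configs:
      - targets: [exporter.example.com:9491]
```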

The exporter can be started in TLS mode (HTTPS, mutually exclusive with the HTTP mode) by providing the X.509 certificate and key files in the command parameters. Self-signed certificates are also accepted.

Usage

usage: pure-fb-om-exporter [-h|--help] [-a|--address "<value>"] [-p|--port <integer>] [-d|--debug] [-t|--tokens <file>] [-k|--key <file>] [-c|--cert <file>]

                           Pure Storage FB OpenMetrics exporter

Arguments:

  -h  --help     Print help information
  -a  --address  IP address for this exporter to bind to. Default: 0.0.0.0
  -p  --port     Port for this exporter to listen. Default: 9491
  -d  --debug    Enable debug. Default: false
  -t  --tokens   API token(s) map file
  -c  --cert     SSL/TLS certificate file. Required only for TLS
  -k  --key      SSL/TLS private key file. Required only for TLS

The array token configuration file must have the following syntax:

<array_id1>:
  address: <ip-address1>|<hostname1>
  api_token: <api-token1>
<array_id2>:
  address: <ip-address2>|<hostname2>
  api_token: <api-token2>
...
<array_idN>:
  address: <ip-addressN>|<hostnameN>
  api_token: <api-tokenN>

Scraping endpoints

The exporter uses a RESTful API schema to provide Prometheus scraping endpoints.

Authentication

The exporter authenticates to each scraped appliance using its REST API token, so for each array you must provide the token of an account that has a 'readonly' role. The api-token must be provided in the HTTP request using the Authorization header of type 'Bearer'. This is achieved by specifying the api-token value as the authorization parameter of the specific job in the Prometheus configuration file.

The exporter understands the following requests:

URL                                               GET parameter   Description
http://<exporter-host>:<port>/metrics             endpoint        Full array metrics
http://<exporter-host>:<port>/metrics/array       endpoint        Array metrics
http://<exporter-host>:<port>/metrics/clients     endpoint        Clients metrics
http://<exporter-host>:<port>/metrics/usage       endpoint        Quotas usage metrics
http://<exporter-host>:<port>/metrics/policies    endpoint        NFS policies info metrics

Depending on the target array, scraping the whole set of metrics can result in timeouts, in which case it is suggested either to increase the scrape timeout or to scrape each endpoint individually.
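Scraping each endpoint individually can be sketched as below; the exporter and array host names are placeholders:

```python
# Sketch: build one scrape URL per metrics endpoint exposed by the exporter,
# so each group of metrics is collected in its own request instead of one
# long /metrics scrape. Host names are placeholders.
ENDPOINTS = ("array", "clients", "usage", "policies")

def endpoint_urls(exporter, array_host):
    """One scrape URL per metrics endpoint."""
    return [
        f"http://{exporter}/metrics/{ep}?endpoint={array_host}"
        for ep in ENDPOINTS
    ]

urls = endpoint_urls("exporter.example:9491", "fb01.example")
```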

Usage examples

In a typical production scenario, it is recommended to use a visual frontend for your metrics, such as Grafana. Grafana allows you to use your Prometheus instance as a datasource and to create graphs and other visualizations from PromQL queries. Grafana and Prometheus are both easy to run as Docker containers.

To spin up a very basic set of those containers, use the following commands:

# Pure exporter
docker run -d -p 9491:9491 --name pure-fb-om-exporter quay.io/purestorage/pure-fb-om-exporter:<version>

# Prometheus with config via bind-volume (create config first!)
docker run -d -p 9090:9090 --name=prometheus -v /tmp/prometheus-pure.yml:/etc/prometheus/prometheus.yml -v /tmp/prometheus-data:/prometheus prom/prometheus:latest

# Grafana
docker run -d -p 3000:3000 --name=grafana -v /tmp/grafana-data:/var/lib/grafana grafana/grafana

Please have a look at the documentation of each image/application for adequate configuration examples.

A simple but complete example to deploy a full monitoring stack on Kubernetes can be found in the examples directory.

Bugs and Limitations

  • Pure FlashBlade REST APIs are not designed to efficiently report full client and object quota KPIs, therefore it is suggested to prefer the "array" metrics and to scrape the "clients" and "usage" endpoints individually and at a lower frequency than the others. In any case, as a general rule, it is advisable not to lower the scrape interval below 30 seconds. If you experience timeout issues, increase the Prometheus scrape timeout and interval appropriately.
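One way to apply this guidance is to split scraping into separate Prometheus jobs with different intervals; the exporter host name below is a placeholder:

```yaml
scrape_configs:
  - job_name: flashblade-array        # cheap metrics, scraped frequently
    metrics_path: /metrics/array
    scrape_interval: 30s
    scrape_timeout: 25s
    static_configs:
      - targets: [exporter.example.com:9491]
  - job_name: flashblade-usage        # expensive quota metrics, scraped rarely
    metrics_path: /metrics/usage
    scrape_interval: 5m
    scrape_timeout: 2m
    static_configs:
      - targets: [exporter.example.com:9491]
```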

Metrics Collected

Metric Name Description
purefb_alerts_open FlashBlade open alert events
purefb_info FlashBlade system information
purefb_array_http_specific_performance_latency_usec FlashBlade array HTTP specific latency
purefb_array_http_specific_performance_throughput_iops FlashBlade array HTTP specific throughput
purefb_array_nfs_specific_performance_latency_usec FlashBlade array NFS specific latency
purefb_array_nfs_specific_performance_throughput_iops FlashBlade array NFS specific throughput
purefb_array_performance_latency_usec FlashBlade array latency
purefb_array_performance_throughput_iops FlashBlade array throughput
purefb_array_performance_bandwidth_bytes FlashBlade array bandwidth
purefb_array_performance_average_bytes FlashBlade array average operations size
purefb_array_performance_replication FlashBlade array replication throughput
purefb_array_s3_performance_latency_usec FlashBlade array S3 specific latency
purefb_array_s3_performance_throughput_iops FlashBlade array S3 specific throughput
purefb_array_space_data_reduction_ratio FlashBlade space data reduction
purefb_array_space_bytes FlashBlade space in bytes
purefb_array_space_parity FlashBlade space parity
purefb_array_space_utilization FlashBlade array space utilization in percent
purefb_buckets_performance_latency_usec FlashBlade buckets latency
purefb_buckets_performance_throughput_iops FlashBlade buckets throughput
purefb_buckets_performance_bandwidth_bytes FlashBlade buckets bandwidth
purefb_buckets_performance_average_bytes FlashBlade buckets average operations size
purefb_buckets_s3_specific_performance_latency_usec FlashBlade buckets S3 specific latency
purefb_buckets_s3_specific_performance_throughput_iops FlashBlade buckets S3 specific throughput
purefb_buckets_space_data_reduction_ratio FlashBlade buckets space data reduction
purefb_buckets_space_bytes FlashBlade buckets space in bytes
purefb_clients_performance_latency_usec FlashBlade clients latency
purefb_clients_performance_throughput_iops FlashBlade clients throughput
purefb_clients_performance_bandwidth_bytes FlashBlade clients bandwidth
purefb_clients_performance_average_bytes FlashBlade clients average operations size
purefb_file_systems_performance_latency_usec FlashBlade file systems latency
purefb_file_systems_performance_throughput_iops FlashBlade file systems throughput
purefb_file_systems_performance_bandwidth_bytes FlashBlade file systems bandwidth
purefb_file_systems_performance_average_bytes FlashBlade file systems average operations size
purefb_file_systems_space_data_reduction_ratio FlashBlade file systems space data reduction
purefb_file_systems_space_bytes FlashBlade file systems space in bytes
purefb_hardware_health FlashBlade hardware component health status
purefb_hardware_connectors_performance_throughput_pkts FlashBlade hardware connectors performance throughput
purefb_hardware_connectors_performance_bandwidth_bytes FlashBlade hardware connectors performance bandwidth
purefb_hardware_connectors_performance_errors FlashBlade hardware connectors performance errors per sec
purefb_file_system_usage_users_bytes FlashBlade file system users usage
purefb_file_system_usage_groups_bytes FlashBlade file system groups usage
purefb_nfs_export_rule FlashBlade NFS export policies information

Monitoring On-Premise with Prometheus and Grafana

Take a holistic overview of your Pure Storage FlashBlade estate on-premise with Prometheus and Grafana to summarize statistics such as:

  • FlashBlade Utilization
  • Purity OS version
  • Data Reduction Rate
  • Number and type of open alerts

Drill down into specific arrays and identify the busiest hosts, correlating read and write operations and throughput to quickly focus or rule out lines of investigation.

For more information on dependencies and deployment notes, take a look at the examples for Grafana and Prometheus in the extra/grafana/ and extra/prometheus/ folders respectively.

License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.


pure-fb-openmetrics-exporter's Issues

Expose purefb_array_space_bytes{space="destroyed"} to OME endpoint

While investigating issue 49, I discovered that the latest and previous OME versions have never returned space metrics for:

purefb_array_space_bytes{space="destroyed",type="file-system"}
purefb_array_space_bytes{space="destroyed_virtual",type="file-system"}
purefb_array_space_bytes{space="destroyed",type="object-store"}
purefb_array_space_bytes{space="destroyed_virtual",type="object-store"}

This would be helpful for more granular capacity reporting to understand how much capacity is waiting to be eradicated.
I'm not sure if this is a bug or an enhancement request.

Exposed metrics from OME v1.0.9 for purefb_array_space_bytes

# HELP purefb_array_space_bytes FlashBlade space in bytes
# TYPE purefb_array_space_bytes gauge
purefb_array_space_bytes{space="capacity",type="array"} 1.78628350604839e+14
purefb_array_space_bytes{space="capacity",type="file-system"} 1.78628350604839e+14
purefb_array_space_bytes{space="capacity",type="object-store"} 1.78628350604839e+14
purefb_array_space_bytes{space="empty",type="array"} 1.36556712710542e+14
purefb_array_space_bytes{space="snapshots",type="array"} 2.484569726e+09
purefb_array_space_bytes{space="snapshots",type="file-system"} 2.484569726e+09
purefb_array_space_bytes{space="snapshots",type="object-store"} 0
purefb_array_space_bytes{space="total_physical",type="array"} 4.2071637894297e+13
purefb_array_space_bytes{space="total_physical",type="file-system"} 3.8869690324535e+13
purefb_array_space_bytes{space="total_physical",type="object-store"} 3.201947569762e+12
purefb_array_space_bytes{space="unique",type="array"} 4.2067604064078e+13
purefb_array_space_bytes{space="unique",type="file-system"} 3.8867137439755e+13
purefb_array_space_bytes{space="unique",type="object-store"} 3.200466624323e+12
purefb_array_space_bytes{space="virtual",type="array"} 5.1417848040718e+13
purefb_array_space_bytes{space="virtual",type="file-system"} 4.8126804911104e+13
purefb_array_space_bytes{space="virtual",type="object-store"} 3.291043129614e+12

Output from debug shows metrics are being collected by REST API query.
GET /api/2.12/arrays/space?type=file-system

==============================================================================
2024/02/27 11:17:38.413972 DEBUG RESTY
==============================================================================
~~~ REQUEST ~~~
GET  /api/2.12/arrays/space?type=file-system  HTTP/1.1
HOST   : fb03
HEADERS:
        Accept: application/json
        Content-Type: application/json
        User-Agent: Pure_FB_OpenMetrics_exporter/1.0
        X-Auth-Token: d0c4a9bb-fae9-4e44-9706-c4d3c092d690
BODY   :
***** NO CONTENT *****
------------------------------------------------------------------------------
~~~ RESPONSE ~~~
STATUS       : 200
PROTO        : HTTP/1.1
RECEIVED AT  : 2024-02-27T11:17:38.41389486Z
TIME DURATION: 4.275515ms
HEADERS      :
        Cache-Control: no-cache, no-store, max-age=0, must-revalidate
        Connection: keep-alive
        Content-Security-Policy: frame-ancestors 'none'
        Content-Type: application/json;charset=UTF-8
        Date: Tue, 27 Feb 2024 11:17:38 GMT
        Expires: 0
        Pragma: no-cache
        Request-Id: 42886705
        Server: nginx
        Strict-Transport-Security: max-age=31536000; includeSubDomains;
        Vary: Origin, Access-Control-Request-Method, Access-Control-Request-Headers
        X-Content-Type-Options: nosniff
        X-Frame-Options: DENY
        X-Xss-Protection: 1; mode=block
BODY         :
{
   "continuation_token": null,
   "total_item_count": 1,
   "items": [
      {
         "name": "FB03",
         "id": "25cb9381-4be2-4ff8-b5c8-ada1bcd4a18c",
         "time": 1709032650000,
         "capacity": 178628350604839,
         "parity": 1.0,
         "space": {
            "virtual": 48126976205824,
            "unique": 38867280110911,
            "snapshots": 2484569726,
            "data_reduction": 1.2382388,
            "total_physical": 38869832995691,
            "total_provisioned": null,
            "destroyed_virtual": 68157440,
            "destroyed": 68315054,
            "available_provisioned": null,
            "available_ratio": null
         }
      }
   ]
}

/api/2.12/arrays/space?type=object-store

==============================================================================
2024/02/27 11:17:38.421215 DEBUG RESTY
==============================================================================
~~~ REQUEST ~~~
GET  /api/2.12/arrays/space?type=object-store  HTTP/1.1
HOST   : fb03
HEADERS:
        Accept: application/json
        Content-Type: application/json
        User-Agent: Pure_FB_OpenMetrics_exporter/1.0
        X-Auth-Token: d0c4a9bb-fae9-4e44-9706-c4d3c092d690
BODY   :
***** NO CONTENT *****
------------------------------------------------------------------------------
~~~ RESPONSE ~~~
STATUS       : 200
PROTO        : HTTP/1.1
RECEIVED AT  : 2024-02-27T11:17:38.421085927Z
TIME DURATION: 4.103292ms
HEADERS      :
        Cache-Control: no-cache, no-store, max-age=0, must-revalidate
        Connection: keep-alive
        Content-Security-Policy: frame-ancestors 'none'
        Content-Type: application/json;charset=UTF-8
        Date: Tue, 27 Feb 2024 11:17:38 GMT
        Expires: 0
        Pragma: no-cache
        Request-Id: 42886707
        Server: nginx
        Strict-Transport-Security: max-age=31536000; includeSubDomains;
        Vary: Origin, Access-Control-Request-Method, Access-Control-Request-Headers
        X-Content-Type-Options: nosniff
        X-Frame-Options: DENY
        X-Xss-Protection: 1; mode=block
BODY         :
{
   "continuation_token": null,
   "total_item_count": 1,
   "items": [
      {
         "name": "FB03",
         "id": "25cb9381-4be2-4ff8-b5c8-ada1bcd4a18c",
         "time": 1709032650000,
         "capacity": 178628350604839,
         "parity": 1.0,
         "space": {
            "virtual": 3291043129614,
            "unique": 3200466624323,
            "snapshots": 0,
            "data_reduction": 1.028301,
            "total_physical": 3201947569762,
            "total_provisioned": null,
            "destroyed_virtual": 1480945439,
            "destroyed": 1480945439,
            "available_provisioned": null,
            "available_ratio": null
         }
      }
   ]
}

Target authorization token is missing

I have kept the token in a YAML file and am getting a "Target authorization token is missing" error.

address: XXX.XXX.XXX.XXX
api_token: XXXXXXXXXXXXXXXXXXXXXXXXXX

Is it possible to have a breakdown of the file system?

I pulled the /metrics and I don't see a more granular look at the file system. Below is what I would like to see
purefb-01> purefs list
Name Size Virtual Hard Limit Source Created Protocols Writable Promotion Status
EA1__dss135 10T 3.60T True - 2024-01-11 14:55:53 CST nfsv3 True promoted
nfsv4.1
EA1__dss136 10T 0.00 True - 2024-02-07 11:36:21 CST nfsv3 True promoted
nfsv4.1
EA1__dss137 10T 0.00 True - 2024-02-07 11:37:10 CST nfsv3 True promoted
nfsv4.1
EA1__dss138 10T 0.00 True - 2024-02-07 11:37:55 CST nfsv3 True promoted
nfsv4.1
EA1__dss139 10T 0.00 True - 2024-02-07 11:40:39 CST nfsv3 True promoted
nfsv4.1
EA1__dss140 10T 0.00 True - 2024-02-07 11:41:20 CST nfsv3 True promoted
nfsv4.1
EA1__dss141 10T 0.00 True - 2024-02-07 11:42:04 CST nfsv3 True promoted
nfsv4.1
EA1__dss142 10T 0.00 True - 2024-02-07 11:42:44 CST nfsv3 True promoted
nfsv4.1
EA1__dss143 10T 0.00 True - 2024-02-07 11:43:24 CST nfsv3 True promoted
nfsv4.1
EA1__dss144 10T 0.00 True - 2024-02-07 11:44:10 CST nfsv3 True promoted

I have the metrics above, but I would like a more granular look, especially Name, Size, Virtual, and maybe Protocol.

scrape time issue with filesystem metrics

Hi,

These two metrics do not return data within the 2-minute scrape interval. What is the expected scrape time for these metrics?

purefb_file_system_usage_groups_bytes
purefb_file_system_usage_users_bytes

Other metrics are scraped fine on the same instances.

podman issue

Hello, I'm trying to run the container with podman:
podman run -p 9491:9491 --name pure-fb-om-exporter quay.io/purestorage/pure-fb-om-exporter
2023/05/10 16:50:32 Error in token file: unknown arguments --host 0.0.0.0

Can you please have a look?

readonly user on FB

Hi,
you write in the guide that it is possible to create a new readonly user. I am sure this is not possible. On FlashBlades there is only the local pureuser.
You can only add an LDAP user for monitoring, add it to an AD group, and map this group to readonly. Or is there another method?

Regards

Expose more detail to purefb_open_alerts

Can we implement the same changes in the purefb OME as in the purefa OME with regard to open alert detail?
PureStorage-OpenConnect/pure-fa-openmetrics-exporter#97

Current cardinality
purefb_alerts_open{component_name="filesystem01",component_type="file-systems",severity="info"} 1

Example open alert from REST API

    {
      "name": "234",
      "index": 234,
      "flagged": true,
      "code": 1101,
      "severity": "info",
      "component_name": "filesystem01",
      "component_type": "file-systems",
      "state": "open",
      "created": 1707324042933,
      "updated": 1709728947957,
      "notified": 1707324163175,
      "summary": "File system 'filesystem01' approaching space quota",
      "description": "The 'alexusmb' file system is at 81% of its space quota of 400.00 G. Pure Storage Support is not notified of this alert. If you would like assistance from our support team, you can open a case by emailing [email protected]. As an INFO-level alert, there will be no reminder notifications unless the issue worsens.",
      "knowledge_base_url": "https://support.purestorage.com/?cid=Alert_1101",
      "action": "Remove data from the file system or increase its space quota.",
      "variables": {
        "CurrentUtilization": "0.814668344259262",
        "FileSystemName": "filesystem01"
      },

Suggest we expose

    {
      "action": "string",
      "code": 0,
      "component_name": "string",
      "component_type": "string",
      "created": 0,
      "description": "string",
      "knowledge_base_url": "string",
      "summary": "string",
    }

New set up.

I am trying to set this up on RHEL 7 using Docker. I was able to get the Docker images deployed, but I have multiple endpoints to monitor with different API tokens. I want to set up a file_sd config to be able to add and remove endpoints. The top portion is with the previous pure_exporter, the bottom is the new way.

---
#- targets:
# - INT_IP_ADDR
# labels:
# env: prod
# authorization:
# credentials: SOME_API_KEY
# - INT_IP_ADDR
# labels:
# env: prod
# authorization:
# credentials: SOME_API_KEY

params:
endpoint: INT_IP_ADDR
authorization:
credentials: SOME_API_KEY
endpoint: INT_IP_ADDR
authorization:
credentials: SOME_API_KEY

RFE - Replication FS data

Looking for the information exposed via /file-system-replica-links to be included, so we may track file system replication lag time and whether links are "protected" based on SLA.

New exporter does not have client and filesystem level metrics

I have set up the new exporter for one of the pure arrays. I noticed that all these client and filesystem level metrics are not available in the new exporter. Not available as part of any other new metrics either.

These are important filesystem and client level metrics. Can you please provide more information on why these are deprecated from the new exporter and if there is any plan to add these back?

purefb_client_performance_iops
purefb_client_performance_latency_usec
purefb_client_performance_opns_bytes
purefb_client_performance_throughput_bytes
purefb_filesystem_user_usage_bytes
purefb_filesystem_group_usage_bytes
purefb_filesystem_performance_iops
purefb_filesystem_performance_latency_usec
purefb_filesystem_performance_opns_bytes
purefb_filesystem_performance_throughput_bytes

array_hardware_metrics - fill null values for severity: info, warning, critical with 0

Currently only active alerts have values in purefb_alerts_open.

It would be useful to have a 0 value for the known levels of the severity dimension instead of null, if there are no other alerts for that severity.

Example,

Before:

purefb_alerts_open{component_name="example_file_system",component_type="file-systems",severity="warning"} 1.0
purefb_alerts_open{component_name="CH0.FB0",component_type="blades",severity="warning"} 1.0

After:

purefb_alerts_open{component_name="example_file",component_type="file-systems",severity="warning"} 1.0
purefb_alerts_open{component_name="CH0.FB0",component_type="blades",severity="warning"} 1.0
purefb_alerts_open{severity="critical"} 0.0
purefb_alerts_open{severity="info"} 0.0
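A hedged sketch of the zero-fill behaviour requested above: given the per-severity counts the array actually reports, emit an explicit 0 for every known severity that currently has no open alerts.

```python
# Sketch: zero-fill missing severity levels so dashboards and alert rules
# always see a sample for each known severity.
SEVERITIES = ("info", "warning", "critical")

def fill_missing_severities(counts):
    """Return a count for each known severity, defaulting absent ones to 0."""
    return {sev: counts.get(sev, 0) for sev in SEVERITIES}

filled = fill_missing_severities({"warning": 2})
```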

Prometheus Scrape Job Relabel Configuration

In the previous pure-exporter, there is relabel configuration to support multiple targets.

See below:
relabel_configs:

- source_labels: [__address__]
  target_label: __param_endpoint

- source_labels: [__pure_apitoken]
  target_label: __param_apitoken

- source_labels: [__address__]
  target_label: instance

- target_label: __address__
  replacement: [xxxxx] # CHANGE THIS (pure-exporter address)

In this new OpenMetrics exporter, what will the configuration look like? It seems __param_apitoken is not supported anymore.
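A hedged sketch of what the equivalent might look like: since the api-token can no longer travel as a GET parameter, the relabeling keeps only the endpoint mapping, and tokens move either to a per-job Authorization header or to the exporter's --tokens file. The exporter address is a placeholder:

```yaml
relabel_configs:
  - source_labels: [__address__]
    target_label: __param_endpoint      # array hostname becomes the GET parameter
  - source_labels: [__address__]
    target_label: instance
  - target_label: __address__
    replacement: <exporter-address>:9491   # all scrapes go to the exporter
```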

Volume Level Policy Metric from the Requested Enhancement.

Regarding Volume Level Policy Metric from the Requested Enhancement.

Original Enhancement Request: https://github.com//issues/19

@genegr - I see purefb_nfs_export_rule policy metric but I do not see that policy name added to any of the filesystem/volume metric.

So the idea is to have one metric with all the policy rules, and one of the filesystem metrics should have an additional label with the policy name. That way I can join the two metrics to get all the export rules for a given volume/filesystem.

Let me know if I missed looking at any filesystem metric. I looked for policy label and I could not find it on any filesystem metric.

Adding a screenshot from previous discussion on this thread.


Make purefb_info available to all metric endpoints

We need to be able to capture the array name and unique ID to correlate and tag all metrics in monitoring platforms. We implemented this for purefa_info which we used in the Dynatrace extension and require the same functionality for purefb_info.

Purefb Array Total Capacity

In 2021, I initiated a discussion with the Pure team to get answers on how to derive the array's total capacity from purefb metrics.

The ask was to get this metric purefb_array_capacity_bytes to show the capacity of the array. In the meantime, I had created the static capacity timeseries metric for dashboards to fill the gap.

I want to re-initiate the follow up on this so that we can get the Array capacity dynamically instead of statically.

def __init__(self, fb):
    self.fb = fb
    self.capacity = GaugeMetricFamily('purefb_array_capacity_bytes',
                                      'FlashBlade total capacity in bytes',
                                      labels=[])

Can you please provide some information on this?

Does this metric exist today? If not, is there a way to calculate it from the currently available metrics? If not, is there an API I can use to build this metric?
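For reference, judging from the sample output earlier on this page, recent exporter versions expose total capacity as a dimension of purefb_array_space_bytes rather than as a dedicated purefb_array_capacity_bytes metric, so a PromQL selector along these lines may cover this (untested sketch):

```
purefb_array_space_bytes{space="capacity", type="array"}
```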

not working when the exporter is running in a different cluster and scraping via https

Hi,

I am trying to run the exporter on a k8s cluster behind a load balancer. My Prometheus is set up in another VM, and when I try to get the metrics from the FlashBlade it always returns 400 Bad Request. When I run the same exporter on the Prometheus VM and let Prometheus communicate via the internal IP, I am able to scrape the metrics.
Not sure what is going wrong when scraping from a different Kubernetes cluster. My Prometheus config is:

- job_name: 'purestorage-fb'
  scheme: https
  metrics_path: /metrics/array
  bearer_token:
  params:
    endpoint: ['pt-flashblade-001.com']
  static_configs:
    - targets: ["pure-exporter.com"]
      labels:
        location: germany
        site: frankfurt
        instance: pt-flashblade-001

whereas just pointing the target at the local container works:

- job_name: 'purestorage-fb'
  scheme: https
  metrics_path: /metrics/array
  bearer_token:
  params:
    endpoint: ['pt-flashblade-001.com']
  static_configs:
    - targets: ["pure-fb-exporter:9491"]
      labels:
        location: germany
        site: frankfurt
        instance: pt-flashblade-001

Thanks for any help

Missing dimensions in purefb_array_space_bytes

Hi,

On 0.9.0 it seems the exporter does not export the metric purefb_array_space_bytes with the dimensions type="array", space="capacity". This is required to report the total space available on the FlashBlade, and needs to be monitored in case of blade or drive failure.

Here is the output I am getting from this metric:

# HELP purefb_array_space_bytes FlashBlade space in bytes
# TYPE purefb_array_space_bytes gauge
purefb_array_space_bytes{space="snapshots",type="array"} 0
purefb_array_space_bytes{space="snapshots",type="file-system"} 0
purefb_array_space_bytes{space="snapshots",type="object-store"} 0
purefb_array_space_bytes{space="total_physical",type="array"} 4.1013406407976e+13
purefb_array_space_bytes{space="total_physical",type="file-system"} 3.802964023162e+13
purefb_array_space_bytes{space="total_physical",type="object-store"} 2.983766176356e+12
purefb_array_space_bytes{space="unique",type="array"} 4.1013406407976e+13
purefb_array_space_bytes{space="unique",type="file-system"} 3.802964023162e+13
purefb_array_space_bytes{space="unique",type="object-store"} 2.983766176356e+12
purefb_array_space_bytes{space="virtual",type="array"} 5.4392994087991e+13
purefb_array_space_bytes{space="virtual",type="file-system"} 5.1398708366336e+13
purefb_array_space_bytes{space="virtual",type="object-store"} 2.994285721655e+12

Report S3 bucket object count

Bucket object count is available in /buckets and is useful for tracking the growth of an S3 store.

purefb_buckets_object_count

TODO: metric name and semantic convention submission for review.

Need Information about how to get more information in metric labels and custom metric collection

purefb_alerts_open metric has below labels:

  • component_name
  • component_type
  • severity
  1. Is there a way to capture more detailed Alert information - probably short summary and description as part of the labels?
  2. If I want to collect more information about the metrics which are currently not part of the exporter, is there a way for me to build some object template for the data collection that exporter will export?
