prometheus / procfs

procfs provides functions to retrieve system, kernel and process metrics from the pseudo-filesystem proc.

License: Apache License 2.0

Go 98.45% Makefile 0.11% Shell 1.45%
procfs pseudo-filesystem-proc process-metrics kernel go process prometheus

procfs's Introduction

procfs

This package provides functions to retrieve system, kernel, and process metrics from the pseudo-filesystems /proc and /sys.

WARNING: This package is a work in progress. Its API may still break in backwards-incompatible ways without warning. Use it at your own risk.


Usage

The procfs library is organized into packages based on whether the gathered data comes from /proc, /sys, or both. Each package contains an FS type which represents the path to either /proc, /sys, or both. For example, CPU statistics are gathered from /proc/stat and are available via the root procfs package. First, the proc filesystem mount point is initialized, and then the stat information is read.

fs, err := procfs.NewFS("/proc")
stats, err := fs.Stat()

Some sub-packages, such as blockdevice, require access to both the proc and sys filesystems.

fs, err := blockdevice.NewFS("/proc", "/sys")
stats, err := fs.ProcDiskstats()
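
Putting the two snippets together, here is a minimal, hedged sketch of end-to-end usage with error handling (the BootTime and diskstats outputs are just examples of what the returned structs expose):

package main

import (
	"fmt"
	"log"

	"github.com/prometheus/procfs"
	"github.com/prometheus/procfs/blockdevice"
)

func main() {
	// Root procfs package: stats from /proc/stat.
	fs, err := procfs.NewFS("/proc")
	if err != nil {
		log.Fatalf("could not open /proc: %v", err)
	}
	stats, err := fs.Stat()
	if err != nil {
		log.Fatalf("could not read /proc/stat: %v", err)
	}
	fmt.Println("boot time:", stats.BootTime)

	// blockdevice needs both mount points.
	bfs, err := blockdevice.NewFS("/proc", "/sys")
	if err != nil {
		log.Fatalf("could not open /proc and /sys: %v", err)
	}
	diskstats, err := bfs.ProcDiskstats()
	if err != nil {
		log.Fatalf("could not read diskstats: %v", err)
	}
	fmt.Println("block devices:", len(diskstats))
}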

Package Organization

The packages in this project are organized according to (1) whether the data comes from the /proc or /sys filesystem and (2) the type of information being retrieved. For example, most process information can be gathered from the functions in the root procfs package. Information about block devices such as disk drives is available in the blockdevice sub-package.

Building and Testing

The procfs library is intended to be built as part of another application, so there are no distributable binaries.
However, most of the API is covered by unit tests, which can be run with make test.

Updating Test Fixtures

The procfs library includes a set of test fixtures which include many example files from the /proc and /sys filesystems. These fixtures are included as a ttar file which is extracted automatically during testing. To add/update the test fixtures, first ensure the fixtures directory is up to date by removing the existing directory and then extracting the ttar file using make fixtures/.unpacked or just make test.

rm -rf testdata/fixtures
make test

Next, make the required changes to the extracted files in the fixtures directory. When the changes are complete, run make update_fixtures to create a new fixtures.ttar file based on the updated fixtures directory. Finally, verify the changes using git diff testdata/fixtures.ttar.
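
The whole round trip, using the commands above:

rm -rf testdata/fixtures
make test                        # unpacks fixtures.ttar into testdata/fixtures
# edit files under testdata/fixtures as needed
make update_fixtures             # repacks the directory into fixtures.ttar
git diff testdata/fixtures.ttar  # review the resulting changes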

procfs's People

Contributors

alexzzz, bdrung, bobrik, conallob, dentrax, dependabot[bot], discordianfish, dongjiang1989, dswarbrick, grobie, hamiltont, ideaship, juliusv, k1low, klatys, mdlayher, mjtrangoni, nikakis, peonone, pgier, pierref, prombot, shaardie, superq, themeier, timothy-boners, tklauser, treydock, weidongkl, you-neverknow


procfs's Issues

Mountstats rolls over with older kernels / high traffic

A user running RHEL 7.x with Linux 3.10 reported mountstats collector returning negative values.

Snippet from /proc/self/mountstats:

device X.X.X.X:/export mounted on /target with fstype nfs4 statvers=1.1
...
        per-op statistics
              ACCESS: 2927395007 2927394995 0 526931094212 362996810236 18446743919241604546 1667369447 1953587717
             GETATTR: 3496133443 3496133439 0 601334946972 685241596164 18446743941975445342 2055274291 2376907228
              LOOKUP: 2440299007 2440299000 0 511100701900 135283040252 18446743979242238815 1334197895 1556584543

Support for parsing opts line in /proc/self/mountstats

Currently procfs does not support parsing the opts: line in /proc/self/mountstats. Having these values would give users access to more information about their mounts. It would also help resolve an issue with node_exporter (prometheus/node_exporter#993) where different NFS versions were not recognized, causing only a single set of metrics to be exposed.
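
For reference, a hedged example of the line in question inside a device block (the exact option list varies by mount):

device fs00.example.com:/export mounted on /mnt/nfs with fstype nfs4 statvers=1.1
	opts:	rw,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,acregmin=3,acregmax=60,proto=tcp,timeo=600,retrans=2,sec=sys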

Debian packaging warns about source-contains-unsafe-symlink

A run of lintian -I --pedantic across the package reveals:
E: juju-core source: source-contains-unsafe-symlink src/github.com/prometheus/procfs/fixtures/26231/exe

Anyone packaging this project needs to delete the fixtures directory, but then the tests cannot be run.

Consolidate fixtures structure

So the current fixtures structure doesn't make too much sense. It's mixing "root" level files/folders like stat with special purpose subdirectories (like buddyinfo/short/buddinfo). I was thinking about moving all "root" files into a default subdirectory so that we'd end up with fixtures/{default,specialcasedir,likebuddyinfo}/actualpath. Given we support now both procfs and sysfs, maybe it should become fixtures/{proc,sys}/{default,special,otherspecial}/...?

@mdlayher wdyt?

Symlink fixtures/26231/exe breaks git and doesn't work on windows

I've checked out this repository, and also use it as a dependency in some Golang work I do.
Unfortunately, git always shows there is a difference in this file:

MinGW 06:57:54 ~/.glide/cache/src/https-github.com-prometheus-procfs$ git diff
diff --git a/fixtures/26231/exe b/fixtures/26231/exe
index a91bec4..690239f 120000
--- a/fixtures/26231/exe
+++ b/fixtures/26231/exe
@@ -1 +1 @@
-/usr/bin/vim
\ No newline at end of file
+C:/usr/bin/vim
\ No newline at end of file

Even if I check it out multiple times:

MinGW 06:57:39 ~/.glide/cache/src/https-github.com-prometheus-procfs$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

        modified:   fixtures/26231/exe

no changes added to commit (use "git add" and/or "git commit -a")
MinGW 06:57:42 ~/.glide/cache/src/https-github.com-prometheus-procfs$ git checkout fixtures/26231/exe
MinGW 06:57:50 ~/.glide/cache/src/https-github.com-prometheus-procfs$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

        modified:   fixtures/26231/exe

no changes added to commit (use "git add" and/or "git commit -a")

It appears to be a symlink, that gets changed on checkout for some reason:

MinGW 06:58:31 ~/.glide/cache/src/https-github.com-prometheus-procfs/fixtures/26231$ ls -la
total 20
drwxr-xr-x 1 cduncan 1049089    0 Aug  1 18:57 ./
drwxr-xr-x 1 cduncan 1049089    0 Aug  1 18:57 ../
-rw-r--r-- 1 cduncan 1049089   16 Aug  1 18:41 cmdline
-rw-r--r-- 1 cduncan 1049089    4 Aug  1 18:41 comm
lrwxrwxrwx 1 cduncan 1049089   14 Aug  1 18:57 exe -> /c/usr/bin/vim
drwxr-xr-x 1 cduncan 1049089    0 Aug  1 18:57 fd/
-rw-r--r-- 1 cduncan 1049089  116 Aug  1 18:41 io
-rw-r--r-- 1 cduncan 1049089 1213 Aug  1 18:41 limits
-rw-r--r-- 1 cduncan 1049089 1004 Aug  1 18:41 mountstats
-rw-r--r-- 1 cduncan 1049089  330 Aug  1 18:41 stat

Besides git always showing it is changed, it also breaks our dependency management (glide):

MinGW 06:44:19 ~/workspace/go/src/github.com/ReturnPath/apollo/scratch$ glideup
[WARN]  The --update-vendored flag is deprecated. This now works by default.
[WARN]  The --strip-vcs flag is deprecated. This now works by default.
[INFO]  Downloading dependencies. Please wait...
[INFO]  --> Fetching updates for github.com/prometheus/client_golang
[INFO]  --> Fetching updates for github.com/ventu-io/slog
[INFO]  Resolving imports
[INFO]  --> Fetching updates for github.com/prometheus/client_model
[INFO]  --> Fetching updates for github.com/prometheus/common
[INFO]  --> Fetching updates for github.com/prometheus/procfs
[ERROR] Error looking for github.com/prometheus/procfs: github.com/prometheus/procfs contains uncommitted changes. Skipping update
[ERROR] Failed to retrieve a list of dependencies: Error resolving imports

Checksum mismatch during install of procfs

While trying to install this package on its own, or as part of the blackbox_exporter make, I get a failure verifying the download:

[kcameron@kevlar1 blackbox_exporter-master]$ sudo go install "github.com/prometheus/procfs"
go: downloading github.com/prometheus/procfs v0.0.0-20170703101242-e645f4e5aaa8
go: verifying github.com/prometheus/procfs@v0.0.0-20170703101242-e645f4e5aaa8: checksum mismatch
downloaded: h1:uZfczEBIA1FZfOQo4/JWgGnMNd/4HVsM9A+B30wtlkA=
go.sum: h1:Kh7M6mzRpQ2de1rixoSQZr4BTINXFm8WDbeN5ttnwyE=

Also noted in: https://groups.google.com/forum/?utm_source=digest&utm_medium=email/#!topic/prometheus-users/98Q-q_2OlW0

cpu_freq metrics: use scaling or cpuinfo?

Since PR #94, the default behaviour is to read scaling_cur_freq in favor of cpuinfo_cur_freq. These two represent different metrics in my eyes: the scaling value reflects the frequency at which the Linux kernel thinks the CPU runs, while cpuinfo reflects the actual hardware frequency of the CPU.
source: https://www.kernel.org/doc/Documentation/cpu-freq/user-guide.txt

IMHO cpuinfo should still be the default value read here, or these two values should perhaps be exposed as two different metrics. I'd like to know the reasoning for picking the scaling statistic over the cpuinfo one.

Consolidate method/function names

The current naming is neither consistent nor idiomatic.

From @mdlayher

My personal rules of thumb for each case:

  • NewThing: top level constructor functions. Never methods.
  • ParseThing: parses some input data like []byte or io.Reader.
  • Thing: retrieval methods for some data. Doesn't make much sense as a name for top-level functions to me.

It might make sense to just use the procfs names wherever possible.
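
Applied to this package, those rules would look roughly like this (signatures illustrative, not necessarily the current API):

// NewThing: top-level constructor functions, never methods.
func NewFS(mountPoint string) (FS, error)

// ParseThing: parses some input data such as []byte or io.Reader.
func ParseStat(r io.Reader) (Stat, error)

// Thing: retrieval method for some data, hung off a type like FS.
func (fs FS) Stat() (Stat, error)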

Can't use this module with go 1.11

I'm trying to build a go program which imports the prometheus_client library and it's experiencing the same problem. I boiled it down to a very simple repro case here:

› go version
go version go1.11 darwin/amd64

› cat go.mod
module demo

› cat main.go
package main
import "github.com/prometheus/procfs"
func main() {}

› go build
go: finding github.com/prometheus/procfs latest
build demo: cannot find module for path github.com/prometheus/procfs

Semver request

Once this package is ready to be declared stable, will you be implementing semver for releases? Would be super helpful. Thanks!

Tests don't pass on OS X 10.10.5

--- FAIL: TestSelf (0.00s)
    proc_test.go:13: could not read /proc: stat /proc: no such file or directory
--- FAIL: TestFileDescriptors (0.00s)
    proc_test.go:102: could not read /proc: stat /proc: no such file or directory
--- FAIL: TestFileDescriptorTargets (0.00s)
    proc_test.go:149: could not read /proc: stat /proc: no such file or directory
FAIL
FAIL    github.comcast.com/vbo/derp-4/vendor/github.com/prometheus/procfs   0.013s

userHz is hard-coded to 100

Running node_exporter inside a LX zone on Joyent's SmartOS (or their cloud platform, Triton) reports incorrect CPU stats.
SmartOS is based on Solaris, and LX zones are containers that enable running Linux applications on Solaris.

LX zones report a USER_HZ value of 1000, which results in incorrect CPU stats being reported.

While it may be argued that SmartOS is incorrectly emulating the USER_HZ value (i.e., it should report 100), I feel that procfs should query the value rather than hard-code it, to maintain compatibility across multiple platforms.

(And yes, I know that procfs did originally query for the value, but it was replaced with a constant for "reasons")
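
For illustration, a minimal cgo sketch of querying USER_HZ at runtime instead of hard-coding it (a pure-Go alternative would be reading AT_CLKTCK from the ELF auxiliary vector):

package main

/*
#include <unistd.h>
*/
import "C"

import "fmt"

func main() {
	// sysconf(_SC_CLK_TCK) reports the kernel's USER_HZ for this system.
	fmt.Println("USER_HZ =", int64(C.sysconf(C._SC_CLK_TCK)))
}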

Build failed when using both prometheus/node_exporter and prometheus/client_golang

I have a simple program:

package main

import (
	"github.com/prometheus/client_golang/prometheus/push"
	"github.com/prometheus/node_exporter/collector"
)

func main() {
	nc, err := collector.NewNodeCollector()
	if err != nil {
		panic(err)
	}
	pusher := push.New("http://localhost/metrics", "test")
	for _, c := range nc.Collectors {
		pusher.Collector(c)
	}
}

Run it with go module enabled:

$ GO111MODULE=on go1.12.5 run main.go
go: finding github.com/prometheus/node_exporter/collector latest
go: finding github.com/prometheus/client_golang/prometheus/push latest
go: finding github.com/prometheus/client_golang/prometheus latest
# github.com/prometheus/node_exporter/collector
/home/cuonglm/go/pkg/mod/github.com/prometheus/[email protected]/collector/buddyinfo.go:58:24: c.fs.NewBuddyInfo undefined (type procfs.FS has no field or method NewBuddyInfo)
/home/cuonglm/go/pkg/mod/github.com/prometheus/[email protected]/collector/cpufreq_linux.go:83:23: c.fs.NewSystemCpufreq undefined (type sysfs.FS has no field or method NewSystemCpufreq)
/home/cuonglm/go/pkg/mod/github.com/prometheus/[email protected]/collector/ipvs_linux.go:109:24: c.fs.NewIPVSStats undefined (type procfs.FS has no field or method NewIPVSStats)
/home/cuonglm/go/pkg/mod/github.com/prometheus/[email protected]/collector/ipvs_linux.go:124:27: c.fs.NewIPVSBackendStatus undefined (type procfs.FS has no field or method NewIPVSBackendStatus)
/home/cuonglm/go/pkg/mod/github.com/prometheus/[email protected]/collector/netclass_linux.go:172:23: c.fs.NewNetClass undefined (type sysfs.FS has no field or method NewNetClass)

The problem is that prometheus/node_exporter and prometheus/client_golang require two different versions of procfs, which conflict with each other.

The graph shows the version github.com/prometheus/[email protected] golang.org/x/[email protected] was chosen. The commit 65bdadf (required by node_exporter) is behind the commit 833678b (v0.0.2).

In any case, I think the main problem is from procfs itself, as some public APIs were removed in commit 985b823

A better idea would be to introduce new public APIs and mark the old ones as deprecated.

Support metrics for offline CPUs

Hi @rtreffer @SuperQ,
This issue is related to #873.

When parsing the /proc/stat file, I am missing the last bunch of offline-CPU metrics (cpu154 to cpu159). As @brian-brazil said, the CPU metrics should always be there.

See:

# lscpu
Architecture:          ppc64le
Byte Order:            Little Endian
CPU(s):                160
On-line CPU(s) list:   0,1,8,9,16,17,24,25,32,33,40,41,48,49,56,57,64,65,72,73,80,81,88,89,96,97,104,105,112,113,120,121,128,129,136,137,144,145,152,153
Off-line CPU(s) list:  2-7,10-15,18-23,26-31,34-39,42-47,50-55,58-63,66-71,74-79,82-87,90-95,98-103,106-111,114-119,122-127,130-135,138-143,146-151,154-159
Thread(s) per core:    2
Core(s) per socket:    5
Socket(s):             4
NUMA node(s):          4
Model:                 2.1 (pvr 004b 0201)
Model name:            POWER8E (raw), altivec supported
L1d cache:             64K
L1i cache:             32K
L2 cache:              512K
L3 cache:              8192K
NUMA node0 CPU(s):     0,1,8,9,16,17,24,25,32,33
NUMA node1 CPU(s):     40,41,48,49,56,57,64,65,72,73
NUMA node16 CPU(s):    80,81,88,89,96,97,104,105,112,113
NUMA node17 CPU(s):    120,121,128,129,136,137,144,145,152,153
  • The /proc/stat file, for the record:
cpu  8955653 5338 10313729 6891866013 1194210 0 38962 0 0 0
cpu0 138803 9 56167 172504763 16187 0 1296 0 0 0
cpu1 322651 754 427280 171926334 25235 0 1291 0 0 0
cpu8 199865 3 91024 172386730 20071 0 646 0 0 0
cpu9 326453 474 412719 171934902 24410 0 723 0 0 0
cpu16 181309 2 66982 172442461 21437 0 788 0 0 0
cpu17 317509 348 398066 171978692 19749 0 711 0 0 0
cpu24 162611 8 61226 172478776 28065 0 707 0 0 0
cpu25 320518 335 402933 171988746 27002 0 653 0 0 0
cpu32 167024 9 60329 172464237 24645 0 857 0 0 0
cpu33 300664 484 388081 171994667 15890 0 721 0 0 0
cpu40 149963 1 97562 172440250 57631 0 1636 0 0 0
cpu41 349011 123 504120 171857581 42197 0 2032 0 0 0
cpu48 119442 1 74060 172508062 37574 0 2162 0 0 0
cpu49 346802 119 487142 171870884 36441 0 2296 0 0 0
cpu56 133608 3 73781 172488230 30166 0 1639 0 0 0
cpu57 340640 144 493004 171860490 33535 0 2412 0 0 0
cpu64 122117 5 68766 172506171 37048 0 1620 0 0 0
cpu65 346848 142 490790 171861649 44282 0 1396 0 0 0
cpu72 138939 3 67941 172506876 29311 0 1300 0 0 0
cpu73 349307 172 496688 171860930 35681 0 1120 0 0 0
cpu80 139125 92 94140 172450292 54207 0 659 0 0 0
cpu81 295747 183 411438 172009728 31455 0 593 0 0 0
cpu88 96750 62 60035 172573444 25950 0 563 0 0 0
cpu89 319147 509 476759 171926997 34378 0 489 0 0 0
cpu96 101846 22 78433 172521391 21805 0 624 0 0 0
cpu97 275081 192 401865 172034352 27952 0 491 0 0 0
cpu104 117902 134 74486 172523631 25512 0 683 0 0 0
cpu105 266655 380 426705 172028466 29963 0 488 0 0 0
cpu112 97858 34 47786 172583361 23911 0 598 0 0 0
cpu113 287918 184 437298 171997757 27889 0 468 0 0 0
cpu120 129084 14 66533 172521771 23712 0 795 0 0 0
cpu121 362991 50 524281 171833723 27812 0 779 0 0 0
cpu128 120565 4 57278 172552862 20093 0 858 0 0 0
cpu129 328225 144 477274 171917779 24834 0 657 0 0 0
cpu136 101479 4 55334 172573896 17711 0 723 0 0 0
cpu137 310827 84 447858 171966775 23698 0 619 0 0 0
cpu144 120642 0 52113 172561125 17948 0 696 0 0 0
cpu145 279375 37 412635 172039812 20124 0 559 0 0 0
cpu152 88002 7 48537 172507258 81117 0 633 0 0 0
cpu153 279513 63 420666 172016877 25035 0 969 0 0 0
intr 1382710716 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 301836321 4861826 5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1074012 460 97 64 627135 48 28 22 711427 78 58 13 718673 25 63 26 71
4420 96 64 19 1227903 240 356 379 1135858 266 173 36 1087877 488 336 265 1165729 526 215 94 1034760 368 314 143 1028797 47 71 59 873159 77 42 50 832224 104 156 94 844309 90 106 96 793847 93
29 75 1050764 35 18 1 546452 434979 48 39 4 18 22 0 458336 399137 40 61 11 32 23 0 455018 358193 54 3 0 4 18 1 345891 369552 25 14 31 6 28 34 0 1915 1172145 572915 910960 391591 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 515 290 91 667877 63 14 41 723471 21 18 27 710718 113 42 98 698773 11 70 45 1212026 677 589 614 1161716 251 318 213 1130302
234 265 308 1156661 516 137 323 1038834 70 117 135 1007899 25 37 81 851490 53 60 57 827356 41 28 39 832789 54 25 7 784532 62 75 26 816984 42 22 64 552798 437519 40 66 23 22 36 2 448306 45867
0 47 71 1 12 18 15 433335 406005 29 9 41 13 1 0 16 0 0 0 6 377927 477440 58 25 19 13 15 7 0 0 4292776 1009617 403 169 108 685906 51 51 47 706091 73 85 65 724938 87 25 96 729496 78 31 58 1210
148 115 236 210 1169285 206 312 209 1195163 327 210 297 1227099 400 437 369 1082046 273 66 175 926198 51 62 55 858369 81 43 41 819595 74 39 31 834143 87 1 7 814523 114 64 14 766292 32 45 46
551690 541114 52 51 21 0 1 5 480895 501190 26 24 11 0 0 11 448565 377180 6 16 1 19 0 1 360328 394864 25 20 3 27 18 34 1124560 45651 467041 0 8597 15531 1 1 4 4 356295 344451 4 22 38 14 42976
8 376309 10 5 8 18 478115 484921 20 5 8 29 561639 546080 74 96 46 19 800801 42 0 0 13 0 785645 13 15 4 1 200 883121 38 37 63 171 241 1217454 325 352 205 537 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 1336145 297 250 0 0 0 0 64 1189213 114 716263 56 106 728956 50 128 669205 49 60 656438 65 447284 836474 325 447284 447284 447284 0 0 447285 447285 447284 447284 447285 32 447283 447284 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 447285 447285 447285 447283 158 62 76 121 85 259 214 340 209 1271402 424 1222407 167 862852 0 820882 0 697519 21 15 1 19 34 0 18 22 20 1083606 1 1 1 1 1 1 1 1
 1 1 1 1 1 55650520 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 18 5 0 0 0 0 0 0 0 0 0 0 55658426 12660943 853479 80702 4518888 205165 71525 405256 18899159 0 0 0 0 0 0 0 2
2538547 3375254 1736399 680537 20693729 12337920 3928537 851492 0 0 0 0 0 0 0 0
ctxt 1489044540
btime 1521031332
processes 17180357
procs_running 1
procs_blocked 0
softirq 928568474 562 279676948 9739565 110787468 78190096 6693156 9823146 255584320 0 178073213
  • This is what node-exporter v0.16.0-rc.0 is exporting now.
    1. cpu151 is offline.
    2. cpu152 and cpu153 are online.
    3. cpu{154..159} are offline.
$ curl -s xxx:9100/metrics | egrep -w -v -e '(HELP|TYPE)' | grep node_cpu_seconds_total | grep 'cpu="151"'
node_cpu_seconds_total{cpu="151",mode="idle"} 0
node_cpu_seconds_total{cpu="151",mode="iowait"} 0
node_cpu_seconds_total{cpu="151",mode="irq"} 0
node_cpu_seconds_total{cpu="151",mode="nice"} 0
node_cpu_seconds_total{cpu="151",mode="softirq"} 0
node_cpu_seconds_total{cpu="151",mode="steal"} 0
node_cpu_seconds_total{cpu="151",mode="system"} 0
node_cpu_seconds_total{cpu="151",mode="user"} 0
$ curl -s xxx:9100/metrics | egrep -w -v -e '(HELP|TYPE)' | grep node_cpu_seconds_total | grep 'cpu="152"'                                                                                                                   
node_cpu_seconds_total{cpu="152",mode="idle"} 1.72526771e+06
node_cpu_seconds_total{cpu="152",mode="iowait"} 811.17
node_cpu_seconds_total{cpu="152",mode="irq"} 0
node_cpu_seconds_total{cpu="152",mode="nice"} 0.07
node_cpu_seconds_total{cpu="152",mode="softirq"} 6.33
node_cpu_seconds_total{cpu="152",mode="steal"} 0
node_cpu_seconds_total{cpu="152",mode="system"} 485.37
node_cpu_seconds_total{cpu="152",mode="user"} 880.05
$ curl -s xxx:9100/metrics | egrep -w -v -e '(HELP|TYPE)' | grep node_cpu_seconds_total | grep 'cpu="153"'                                                                                                                   
node_cpu_seconds_total{cpu="153",mode="idle"} 1.72036582e+06
node_cpu_seconds_total{cpu="153",mode="iowait"} 250.35
node_cpu_seconds_total{cpu="153",mode="irq"} 0
node_cpu_seconds_total{cpu="153",mode="nice"} 0.63
node_cpu_seconds_total{cpu="153",mode="softirq"} 9.69
node_cpu_seconds_total{cpu="153",mode="steal"} 0
node_cpu_seconds_total{cpu="153",mode="system"} 4206.78
node_cpu_seconds_total{cpu="153",mode="user"} 2795.17
$ curl -s xxx:9100/metrics | egrep -w -v -e '(HELP|TYPE)' | grep node_cpu_seconds_total | grep 'cpu="154"'       
(no metrics)
[...]
$ curl -s xxx:9100/metrics | egrep -w -v -e '(HELP|TYPE)' | grep node_cpu_seconds_total | grep 'cpu="159"'       
(no metrics)

Summarizing: the metrics for every offline CPU up to cpu151 are zero, while the last bunch, cpu{154..159}, is missing completely.

fixtures folder contains dangerous symlinks

I vendor this in another project, which is scp'd up to an AWS instance during a packer build. My build started failing because scp complained about copying the funky symlinks in the fixtures folder, like self/exe.

Limits that don't fit in an int32 aren't parsed, causing errors

In proc_limits, parseInt() uses strconv.ParseInt(s, 10, 32) to read limit values. Some limit values won't fit in 32 bits, e.g. max file size, max address space. As a result NewLimits() returns an error instead of the limits for the proc whenever one of those limits is too large.
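
A small standalone sketch of the failure mode (the limit value is illustrative):

package main

import (
	"fmt"
	"strconv"
)

func main() {
	s := "8589934592" // e.g. a "Max address space" limit of 8 GiB

	// What parseInt() effectively does today: a 32-bit parse overflows.
	if _, err := strconv.ParseInt(s, 10, 32); err != nil {
		fmt.Println("32-bit parse:", err) // value out of range
	}

	// A 64-bit parse handles the same value fine.
	v, _ := strconv.ParseInt(s, 10, 64)
	fmt.Println("64-bit parse:", v)
}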

Parse ipv6 addresses in ipvs backend status

We are running ipvs collector from node_exporter on a machine receiving ipv6 traffic too, though node_exporter 0.13 fails with this:

time="2017-04-19T09:47:26Z" level=error msg="ERROR: ipvs collector failed after 0.000758s: could not get backend status: invalid IP: [2620" source="node_exporter.go:91"

And the /proc/net/ip_vs file looks like this (after redacting the addresses)

# cat /proc/net/ip_vs
IP Virtual Server version 1.2.1 (size=1048576)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP  00000000:0050 sh  
  -> 00000000:0050      Route   1      33         10        
  -> 00000000:0050      Route   1      48         52        
  -> 00000000:0050      Route   1      56         92        
  -> 00000000:0050      Route   1      48         15        
  -> 00000000:0050      Route   1      32         52        
  -> 00000000:0050      Route   1      62         28        
  -> 00000000:0050      Route   1      19         30        
  -> 00000000:0050      Route   1      93         43        
TCP  [2620:0000:0000:0000:0000:0000:0000:0000]:01BB sh 
  -> [2620:0000:0000:0000:0000:0000:0000:0000]:01BB      Route   1      0          0         
  -> [2620:0000:0000:0000:0000:0000:0000:0000]:01BB      Route   1      2          0         
  -> [2620:0000:0000:0000:0000:0000:0000:0000]:01BB      Route   1      2          0         
  -> [2620:0000:0000:0000:0000:0000:0000:0000]:01BB      Route   1      0          0         
  -> [2620:0000:0000:0000:0000:0000:0000:0000]:01BB      Route   1      2          0         
  -> [2620:0000:0000:0000:0000:0000:0000:0000]:01BB      Route   1      1          3         
  -> [2620:0000:0000:0000:0000:0000:0000:0000]:01BB      Route   1      1          1         
  -> [2620:0000:0000:0000:0000:0000:0000:0000]:01BB      Route   1      1          3         
TCP  00000000:01BB sh  
  -> 00000000:01BB      Route   1      35         59        
  -> 00000000:01BB      Route   1      74         104       
  -> 00000000:01BB      Route   1      275        340       
  -> 00000000:01BB      Route   1      25         56        
  -> 00000000:01BB      Route   1      26         290       
  -> 00000000:01BB      Route   1      36         303       
  -> 00000000:01BB      Route   1      20         326       
  -> 00000000:01BB      Route   1      27         297       
TCP  [2620:0000:0000:0000:0000:0000:0000:0000]:0050 sh 
  -> [2620:0000:0000:0000:0000:0000:0000:0000]:0050      Route   1      0          0         
  -> [2620:0000:0000:0000:0000:0000:0000:0000]:0050      Route   1      0          0         
  -> [2620:0000:0000:0000:0000:0000:0000:0000]:0050      Route   1      1          1         
  -> [2620:0000:0000:0000:0000:0000:0000:0000]:0050      Route   1      0          0         
  -> [2620:0000:0000:0000:0000:0000:0000:0000]:0050      Route   1      0          0         
  -> [2620:0000:0000:0000:0000:0000:0000:0000]:0050      Route   1      0          2         
  -> [2620:0000:0000:0000:0000:0000:0000:0000]:0050      Route   1      0          0         
  -> [2620:0000:0000:0000:0000:0000:0000:0000]:0050      Route   1      0          2         

Tests fail due to missing fixtures

commit d0f344d83b0c80a1bc03b547a2374a9ec6711144 (HEAD -> netclass-split, upstream/master, origin/master, origin/HEAD, master)
Author: Paul Gier <[email protected]>
Date:   Wed Mar 6 17:32:01 2019 -0600

    Consolidate fixtures (#138)

~/D/g/d/procfs ❯❯❯ make fixtures/.unpacked
./ttar -C ./ -x -f fixtures.ttar
touch fixtures/.unpacked

~/D/g/d/procfs ❯❯❯ ls -hFGa -lah fixtures/
total 52K
drwxr-xr-x 11 daenney 4.0K Mar 13 09:45 ./
drwxr-xr-x 12 daenney 4.0K Mar 13 09:45 ../
drwxr-xr-x  5 daenney 4.0K Mar 13 09:45 26231/
drwxr-xr-x  3 daenney 4.0K Mar 13 09:45 26232/
drwxr-xr-x  2 daenney 4.0K Mar 13 09:45 26233/
drwxr-xr-x  2 daenney 4.0K Mar 13 09:45 584/
drwxr-xr-x  5 daenney 4.0K Mar 13 09:45 buddyinfo/
drwxr-xr-x  3 daenney 4.0K Mar 13 09:45 fs/
-rw-r--r--  1 daenney 1.1K Mar 13 09:45 mdstat
drwxr-xr-x  3 daenney 4.0K Mar 13 09:45 net/
drwxr-xr-x  2 daenney 4.0K Mar 13 09:45 pressure/
lrwxrwxrwx  1 daenney    5 Mar 13 09:45 self -> 26231/
-rw-r--r--  1 daenney 2.1K Mar 13 09:45 stat
drwxr-xr-x  2 daenney 4.0K Mar 13 09:45 symlinktargets/
-rw-r--r--  1 daenney    0 Mar 13 09:45 .unpacked
~/D/g/d/procfs ❯❯❯ make common-test
>> running all tests
GO111MODULE=on go test -race  ./...
--- FAIL: TestBuddyInfo (0.00s)
    buddyinfo_test.go:24: open fixtures/proc/buddyinfo: no such file or directory
--- FAIL: TestDiskstats (0.00s)
    diskstats_test.go:28: open fixtures/proc/diskstats: no such file or directory
--- FAIL: TestFSXFSStats (0.00s)
    fs_test.go:35: failed to parse XFS stats: open fixtures/proc/fs/xfs/stat: no such file or directory
--- FAIL: TestIPVSStats (0.00s)
    ipvs_test.go:164: open fixtures/proc/net/ip_vs_stats: no such file or directory
--- FAIL: TestIPVSBackendStatus (0.00s)
    ipvs_test.go:218: open fixtures/proc/net/ip_vs: no such file or directory
--- FAIL: TestMDStat (0.00s)
    mdstat_test.go:23: parsing of reference-file failed entirely: error parsing fixtures/proc/mdstat: open fixtures/proc/mdstat: no such file or directory
--- FAIL: TestMountStats (0.00s)
    mountstats_test.go:339: [00] test "no devices"
    mountstats_test.go:339: [01] test "device has too few fields"
    mountstats_test.go:339: [02] test "device incorrect format"
    mountstats_test.go:339: [03] test "device incorrect format"
    mountstats_test.go:339: [04] test "device incorrect format"
    mountstats_test.go:339: [05] test "device incorrect format"
    mountstats_test.go:339: [06] test "device rootfs cannot have stats"
    mountstats_test.go:339: [07] test "NFSv4 device with too little info"
    mountstats_test.go:339: [08] test "NFSv4 device with bad bytes"
    mountstats_test.go:339: [09] test "NFSv4 device with bad events"
    mountstats_test.go:339: [10] test "NFSv4 device with bad per-op stats"
    mountstats_test.go:339: [11] test "NFSv4 device with bad transport stats"
    mountstats_test.go:339: [12] test "NFSv4 device with bad transport version"
    mountstats_test.go:339: [13] test "NFSv4 device with bad transport stats version 1.0"
    mountstats_test.go:339: [14] test "NFSv4 device with bad transport stats version 1.1"
    mountstats_test.go:339: [15] test "NFSv3 device with bad transport protocol"
    mountstats_test.go:339: [16] test "NFSv3 device using TCP with transport stats version 1.0 OK"
    mountstats_test.go:339: [17] test "NFSv3 device using UDP with transport stats version 1.0 OK"
    mountstats_test.go:339: [18] test "NFSv3 device using TCP with transport stats version 1.1 OK"
    mountstats_test.go:339: [19] test "NFSv3 device using UDP with transport stats version 1.1 OK"
    mountstats_test.go:339: [20] test "NFSv3 device with mountaddr OK"
    mountstats_test.go:339: [21] test "device rootfs OK"
    mountstats_test.go:339: [22] test "NFSv3 device with minimal stats OK"
    mountstats_test.go:339: [23] test "fixtures/proc OK"
    mountstats_test.go:349: failed to create proc: <nil>
--- FAIL: TestNewNetDev (0.00s)
    net_dev_test.go:37: could not read fixtures/proc: stat fixtures/proc: no such file or directory
--- FAIL: TestProcNewNetDev (0.00s)
    net_dev_test.go:65: stat fixtures/proc/26231: no such file or directory
--- FAIL: TestProcIO (0.00s)
    proc_io_test.go:21: stat fixtures/proc/26231: no such file or directory
--- FAIL: TestNewLimits (0.00s)
    proc_limits_test.go:21: stat fixtures/proc/26231: no such file or directory
--- FAIL: TestNewNamespaces (0.00s)
    proc_ns_test.go:23: stat fixtures/proc/26231: no such file or directory
--- FAIL: TestPSIStats (0.00s)
    --- FAIL: TestPSIStats/cpu (0.00s)
        proc_psi_test.go:36: psi_stats: unavailable for cpu
    --- FAIL: TestPSIStats/memory (0.00s)
        proc_psi_test.go:73: psi_stats: unavailable for memory
    --- FAIL: TestPSIStats/io (0.00s)
        proc_psi_test.go:73: psi_stats: unavailable for io
--- FAIL: TestProcStat (0.00s)
    proc_stat_test.go:24: stat fixtures/proc/26231: no such file or directory
--- FAIL: TestProcStatComm (0.00s)
    proc_stat_test.go:53: stat fixtures/proc/26231: no such file or directory
--- FAIL: TestProcStatVirtualMemory (0.00s)
    proc_stat_test.go:71: stat fixtures/proc/26231: no such file or directory
--- FAIL: TestProcStatResidentMemory (0.00s)
    proc_stat_test.go:82: stat fixtures/proc/26231: no such file or directory
--- FAIL: TestProcStatStartTime (0.00s)
    proc_stat_test.go:93: stat fixtures/proc/26231: no such file or directory
--- FAIL: TestProcStatCPUTime (0.00s)
    proc_stat_test.go:108: stat fixtures/proc/26231: no such file or directory
--- FAIL: TestSelf (0.00s)
    proc_test.go:27: stat fixtures/proc/26231: no such file or directory
--- FAIL: TestAllProcs (0.00s)
    proc_test.go:42: open fixtures/proc: no such file or directory
--- FAIL: TestCmdLine (0.00s)
    proc_test.go:63: stat fixtures/proc/26231: no such file or directory
--- FAIL: TestComm (0.00s)
    proc_test.go:85: stat fixtures/proc/26231: no such file or directory
--- FAIL: TestExecutable (0.00s)
    proc_test.go:107: stat fixtures/proc/26231: no such file or directory
--- FAIL: TestCwd (0.00s)
    proc_test.go:131: stat fixtures/proc/26231: no such file or directory
--- FAIL: TestRoot (0.00s)
    proc_test.go:159: stat fixtures/proc/26231: no such file or directory
--- FAIL: TestFileDescriptors (0.00s)
    proc_test.go:178: stat fixtures/proc/26231: no such file or directory
--- FAIL: TestFileDescriptorTargets (0.00s)
    proc_test.go:193: stat fixtures/proc/26231: no such file or directory
--- FAIL: TestFileDescriptorsLen (0.00s)
    proc_test.go:215: stat fixtures/proc/26231: no such file or directory
--- FAIL: TestStat (0.00s)
    stat_test.go:21: open fixtures/proc/stat: no such file or directory
--- FAIL: TestXfrmStats (0.00s)
    xfrm_test.go:23: open fixtures/proc/net/xfrm_stat: no such file or directory
FAIL
FAIL    github.com/prometheus/procfs    0.021s
ok      github.com/prometheus/procfs/bcache     1.010s
?       github.com/prometheus/procfs/internal/util      [no test files]
?       github.com/prometheus/procfs/iostats    [no test files]
ok      github.com/prometheus/procfs/nfs        1.010s
--- FAIL: TestBlockDevice (0.00s)
    block_device_test.go:30: open ../fixtures/sys/block: no such file or directory
--- FAIL: TestNewPowerSupplyClass (0.00s)
    class_power_supply_test.go:27: could not read ../fixtures/sys: stat ../fixtures/sys: no such file or directory
--- FAIL: TestClassThermalZoneStats (0.00s)
    class_thermal_test.go:28: could not read ../fixtures/sys: stat ../fixtures/sys: no such file or directory
--- FAIL: TestFSXFSStats (0.00s)
    fs_test.go:55: unexpected number of XFS stats: 0
--- FAIL: TestFSBcacheStats (0.00s)
    fs_test.go:93: unexpected number of bcache stats: 0
--- FAIL: TestNewNetClass (0.00s)
    net_class_test.go:26: could not read ../fixtures/sys: stat ../fixtures/sys: no such file or directory
--- FAIL: TestNewSystemCpufreq (0.00s)
    system_cpu_test.go:30: could not read ../fixtures/sys: stat ../fixtures/sys: no such file or directory
FAIL
FAIL    github.com/prometheus/procfs/sysfs      0.009s
--- FAIL: TestParseStats (0.00s)
    parse_test.go:435: unexpected error: open ../fixtures/proc/fs/xfs/stat: no such file or directory
    parse_test.go:439: unexpected XFS stats:
        want:
        &{ {92447 97589 92448 93751} {0 0 0 0} {1767055 188820 184891 92447 92448 2140766 0} {0 0 0 0} {185039 92447 92444 136422} {706 944304 0} {185045 58807 0 126238 0 33637 22} {2883 113448 9 17360 739} {107739 94045} {4 0 0 0} {8677 7849 135802} {92601 0 0 0 92444 92444 92444 0} {2666287 7122 2659202 3599 2 7085 0 10297 7085} {399724544 92823103 86219234}}
        have:
        <nil>
FAIL
FAIL    github.com/prometheus/procfs/xfs        0.010s
make: *** [Makefile.common:113: common-test] Error 1

Support for /proc/[pid]/fdinfo/[fd]

Hello,

Any plans to support reading the contents of /proc/[pid]/fdinfo/[fd]? If not, but you think it's a good idea, I can try and submit a PR for it.

Thanks!

Request: Expose hard limits as well

It would be useful to expose both soft and hard limits in proc_limits.go. I've run across a number of systems / processes where these values differ for "Max open files".

Recent change removed methods required by elastic/go-sysinfo with no version number bump

Related to: #167

This commit removed methods that are used by other libraries (breaking changes) while maintaining v0.0.1 as the latest "release":
985b823#diff-33838b2a8ea2f8e4311d86efea5944c5L151

I propose that this project follow semantic versioning. In the meantime, would it be possible to add the methods required by elastic/go-sysinfo back?
https://github.com/elastic/go-sysinfo/blob/master/providers/linux/boottime_linux.go

These are the specific errors:

../../../github.com/elastic/go-sysinfo/providers/linux/boottime_linux.go:40:17: fs.NewStat undefined (type procfs.FS has no field or method NewStat)
../../../github.com/elastic/go-sysinfo/providers/linux/host_linux.go:75:23: h.procFS.NewStat undefined (type procFS has no field or method NewStat)
../../../github.com/elastic/go-sysinfo/providers/linux/host_linux.go:93:17: fs.NewStat undefined (type procFS has no field or method NewStat)
../../../github.com/elastic/go-sysinfo/providers/linux/process_linux.go:49:23: s.procFS.NewProc undefined (type procFS has no field or method NewProc)
../../../github.com/elastic/go-sysinfo/providers/linux/process_linux.go:82:19: p.fs.NewProc undefined (type procFS has no field or method NewProc)

There was a partial revert of the breaking changes for the sake of client_golang here:
9b1831c

It seems the re-addition of the func NewStat here was meant to be bound to (fs FS):
9b1831c#diff-33838b2a8ea2f8e4311d86efea5944c5R148

Breaks docker-compose

We are using the prometheus library at one of our projects and it seems like the fixtures directory is breaking the last docker-compose build:

> docker-compose up --build
Building offices
Traceback (most recent call last):
  File "docker-compose", line 6, in <module>
  File "compose/cli/main.py", line 71, in main
  File "compose/cli/main.py", line 124, in perform_command
  File "compose/cli/main.py", line 1001, in up
  File "compose/cli/main.py", line 997, in up
  File "compose/project.py", line 463, in up
  File "compose/service.py", line 310, in ensure_image_exists
  File "compose/service.py", line 989, in build
  File "site-packages/docker/api/build.py", line 150, in build
  File "site-packages/docker/utils/build.py", line 14, in tar
  File "site-packages/docker/utils/utils.py", line 103, in create_archive
IOError: Can not access file in context: /Users/someuser/projects/src/somerepo.net/repo/offices/vendor/github.com/prometheus/procfs/fixtures/26231/ns/mnt
Failed to execute script docker-compose

Building the service with docker itself works perfectly fine.

Docker version:

Client:
 Version:	18.02.0-ce
 API version:	1.36
 Go version:	go1.9.3
 Git commit:	fc4de44
 Built:	Wed Feb  7 21:13:05 2018
 OS/Arch:	darwin/amd64
 Experimental:	true
 Orchestrator:	kubernetes

Server:
 Engine:
  Version:	18.02.0-ce
  API version:	1.36 (minimum version 1.12)
  Go version:	go1.9.3
  Git commit:	fc4de44
  Built:	Wed Feb  7 21:20:15 2018
  OS/Arch:	linux/amd64
  Experimental:	true

Docker-compose version:

docker-compose version 1.19.0, build 9e633ef

Support binary files in ttar

The procfs file structure is not compatible with non-Linux systems. For example symlinks cause issues when checked out on Windows or Mac (see #60 and #75). For similar reasons, a ttar solution was contributed which allows packaging the whole fixtures structure into a single file, while still allowing meaningful diffs and code reviews.

In order to use ttar for the fixtures/ directory, it needs to support encoding and decoding null bytes so that files like cmdline can be packaged as well.

@ideaship @SuperQ

Add parser for CIFS client stats

The CIFS kernel module includes client stats in /proc/fs/cifs/Stats, but only if the kernel is configured with CONFIG_CIFS_STATS.

It includes per-mountpoint info, so it should be pretty useful.

Stats.NumThreads gives wrong value

I'm using procfs to observe the number of threads spawned by the current process, while simultaneously looking at procfs with a simple cat /proc/<pid>/status | grep Thread.
When the thread count is fairly low (under 100), the results are the same. But when the count extracted by cat is fairly high (a few hundred), the procfs library gives me 0. What is the reason for this inconsistency?

Source code can be found here

Error for NFS mounts via UDP

I get the following error when using prometheus-node-exporter on a host that has NFS mounts via UDP:

failed to parse mountstats: invalid NFS transport stats 1.1 statement: [740 1 881477 875055 5946 2888414103 286261 16 258752 2080886]

I traced it back to the xprt field in /proc/self/mountstats, which has different fields depending on UDP/TCP, see also https://utcc.utoronto.ca/~cks/space/blog/linux/NFSMountstatsXprt

For the udp RPC transport there is no connection count, connect idle time, or idle time (fields #3, #4, and #5); all other fields are the same. This omission of fields is quite annoying since it means you get to shuffle your code around instead of just ignoring certain fields for certain transports (they could perfectly well report the particular statistic as zero).

This means that the existing return &NFSTransportStats{...} in the parser is only valid for TCP mounts.

I'd go ahead and try my luck at adding support for UDP mounts if you want. To keep the interface consistent, I would simply set the non-existent fields to 0 for UDP mounts.

Any objections?
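
A hedged sketch of the zero-filling proposed above; the field layout follows the linked blog post, and the function name is invented for illustration:

// normalizeXprtFields aligns UDP's shorter xprt field list with TCP's
// by zero-filling the three TCP-only fields (connection count, connect
// idle time, idle time) so downstream code can index uniformly.
func normalizeXprtFields(proto string, fields []uint64) []uint64 {
	if proto != "udp" {
		return fields
	}
	out := make([]uint64, 0, len(fields)+3)
	out = append(out, fields[:2]...)  // port, bind count
	out = append(out, 0, 0, 0)        // fields missing on UDP: zero-fill
	out = append(out, fields[2:]...)  // sends, receives, bad XIDs, ...
	return out
}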

Commit fdb70a7 breaks client_golang on Windows

Commit fdb70a7 breaks pulling client_golang master on Windows due to unsupported Windows filenames in prometheus/procfs/sysfs/fixtures/devices/pci0000:00/0000:00:0d.0

I'm guessing it's the colons ":" in the filenames, as the git error is:

$ go get -u github.com/prometheus/procfs
# cd C:\_MANUALLY_ELIDED_\src\github.com\prometheus\procfs; git pull --ff-only
From https://github.com/prometheus/procfs
   abf152e..a66a2f8  master     -> origin/master
 * [new branch]      superq/thermal -> origin/superq/thermal
Updating abf152e..a66a2f8
fatal: cannot create directory at 'sysfs/fixtures/devices/pci0000:00': Invalid argument
package github.com/prometheus/procfs: exit status 128

Hardcoded page size in proc_stat_test.go causes build failure on ppc64

proc_stat_test.go hardcodes a page size of 4096, which causes a build failure on ppc64, where the page size is 64k. It should use os.Getpagesize() to get the correct page size instead of hardcoding it.

diff -up procfs-406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8/proc_stat_test.go.than procfs-406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8/proc_stat_test.go
--- procfs-406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8/proc_stat_test.go.than      2016-04-11 06:37:23.590234409 -0400
+++ procfs-406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8/proc_stat_test.go   2016-04-11 07:33:59.835360813 -0400
@@ -1,5 +1,6 @@
 package procfs

+import "os"
 import "testing"

 func TestProcStat(t *testing.T) {
@@ -66,7 +67,7 @@ func TestProcStatResidentMemory(t *testi
                t.Fatal(err)
        }

-       if want, have := 1981*4096, s.ResidentMemory(); want != have {
+       if want, have := 1981*os.Getpagesize(), s.ResidentMemory(); want != have {
                t.Errorf("want resident memory %d, have %d", want, have)
        }
 }

Add pressure stall information

I'd like to add support for /proc/pressure/* that the Linux kernel exposes (to in turn add support for that to the node exporter).

I started looking at it but I'm stumbling on one thing: currently Proc and its path helper always assume that what we want is /proc/<pid>/<path...>, however what I need access to is /proc/pressure/{cpu,memory,io}, and I can't seem to find a good way to do that.

Did I miss something in the Proc API or should I just read the contents of those three files without leveraging the Proc struct and its methods?
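
For reference, the PSI files have a fixed single-line-per-class format; /proc/pressure/cpu looks like this (values illustrative):

some avg10=0.00 avg60=0.00 avg300=0.00 total=10326588

(memory and io additionally expose a full line with the same fields.)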

tests fail when used with go modules

Both the main procfs package and the xfs package fail on my machine.
Here's a reproducer:

#!/bin/sh
set -ex
t=$(mktemp -d)
trap 'rm -r $t' EXIT
cd $t
cat >ex.go <<.
package main

import (
	_ "github.com/prometheus/procfs"
)

func main() {
	println("example")
}
.
go mod init example
go test all

The tests should automatically unpack fixtures.ttar if they need the files.

Refactor interpackage dependencies to minimize client dependency tree

@SuperQ brought up the issue that the prometheus client_golang project uses only a couple of functions from procfs for gathering process data, but the procfs package has dependencies on the xfs and nfs (and recently also iostats) packages, so these subpackages are pulled into the client's dependency tree even if they are not used. Ideally, any clients of procfs should only see minimal additions to their dependency tree.

Before looking at possible solutions, I think it's important to note that even with the current dependency tree, the go compiler seems to be smart enough to not actually bring in code from the xfs or nfs packages into the resulting binary if they are not used.

However, if we do decide to remove the dependency from procfs to the xfs and nfs packages, there are a few options.

(1) We could move the XFSStats method into the xfs package (and do something similar with the NFS methods). However, since XFSStats is currently defined as a method of the FS type, it makes it a little more tricky to decouple it. Since golang doesn't allow methods to be defined on a struct from another package, we'd have to either change it to a normal function and have a dependency on procfs, or duplicate the FS type and a couple other associated methods in the xfs package.

(2) Another option is to just merge the xfs and nfs packages into procfs. This doesn't really reduce the client dependencies, but it does make the tree appear smaller and simplifies the procfs packages structure. This solution would likely also require making the sysfs package dependent on the procfs package to use some of the common data structures and functions.

(3) We could do a larger package reorganization to make each package very small and specific. We could do something like create a package structure similar to the path structure of the /proc and /sys filesystems. We would probably also need a set of packages for common data structures which are used by both proc and sys.

A recent commit broke prometheus/client_golang

# github.com/prometheus/client_golang/prometheus
../../prometheus/client_golang/prometheus/process_collector.go:129:15: undefined: procfs.NewStat
../../prometheus/client_golang/prometheus/process_collector.go:169:19: p.NewStat undefined (type procfs.Proc has no field or method NewStat)
../../prometheus/client_golang/prometheus/process_collector.go:188:21: p.NewLimits undefined (type procfs.Proc has no field or method NewLimits)

bcache folders break transitive imports on Windows

#51 The folders in the repo cannot be created on a Windows machine when checking out the head.

git clone https://github.com/prometheus/procfs I:\tools\go\path\src\github.com\prometheus\procfs
Cloning into 'I:\tools\go\path\src\github.com\prometheus\procfs'...
fatal: cannot create directory at 'sysfs/fixtures/devices/pci0000:00': Invalid argument
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry the checkout with 'git checkout -f HEAD'

package github.com/prometheus/procfs: exit status 128

I do not use your package directly it is a transitive dependency from go-kit

*: remove non-test uses of reflect

As mentioned in a couple of issues, IMHO the use of reflection is not justified in order to save a few keystrokes. The data coming from proc/sysfs files is not truly dynamic: there are known file names with values that can be mapped to known fields. This type of code can be easily rewritten using a switch statement. When implementing something like a JSON decoder that may have to handle truly arbitrary data, the use of reflection is much more justified.

The use of reflection throws away many of the compile time checks we take for granted, and two of the Go proverbs come to mind: "Clear is better than clever. Reflection is never clear."

https://go-proverbs.github.io/

Therefore, I suggest we remove all uses of reflection from the non-test code in these packages, and do not allow any new code that uses reflection.

$ grep -r "reflect" | grep -v "_test"
sysfs/net_class.go:     "reflect"
sysfs/net_class.go:     interfaceElem := reflect.ValueOf(&interfaceClass).Elem()
sysfs/net_class.go:     interfaceType := reflect.TypeOf(interfaceClass)
sysfs/net_class.go:             case reflect.String:
sysfs/net_class.go:             case reflect.Ptr:
sysfs/net_class.go:                     case reflect.TypeOf(int64ptr):
sysfs/net_class.go:                             fieldValue.Set(reflect.ValueOf(&intValue))
sysfs/class_power_supply.go:    "reflect"
sysfs/class_power_supply.go:    powerSupplyElem := reflect.ValueOf(&powerSupply).Elem()
sysfs/class_power_supply.go:    powerSupplyType := reflect.TypeOf(powerSupply)
sysfs/class_power_supply.go:            case reflect.String:
sysfs/class_power_supply.go:            case reflect.Ptr:
sysfs/class_power_supply.go:                    case reflect.TypeOf(int64ptr):
sysfs/class_power_supply.go:                            fieldValue.Set(reflect.ValueOf(&intValue))
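
A hedged sketch of the switch-based rewrite suggested above, loosely modeled on net_class.go (field and file names illustrative; assumes fmt and strconv are imported):

// parseNetClassAttribute maps a known sysfs file name to the matching
// struct field explicitly, with no reflection involved.
func parseNetClassAttribute(nc *NetClassIface, name, value string) error {
	switch name {
	case "address":
		nc.Address = value
	case "mtu":
		mtu, err := strconv.ParseInt(value, 10, 64)
		if err != nil {
			return fmt.Errorf("invalid mtu %q: %v", value, err)
		}
		nc.MTU = &mtu
	default:
		// Unknown files are simply ignored.
	}
	return nil
}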

sysfs/fixtures filenames are too long

Trying to install github.com\prometheus\client_golang\prometheus using glide which has procfs as a dependency.

The install fails on windows because sysfs/fixtures contains subfolders with long filenames:

Cloning into 'C:\Users\user\.glide\cache\src\https-github.com-prometheus-procfs'...
error: unable to create file sysfs/fixtures.src/devices/pci0000_@colon@_00/0000_@colon@_00_@colon@_0d.0/ata4/host3/target3_@colon@_0_@colon@_0/3_@colon@_0_@colon@_0_@colon@_0/block/sdb/bcache/stats_day/cache_bypass_misses: Filename too long
error: unable to create file sysfs/fixtures.src/devices/pci0000_@colon@_00/0000_@colon@_00_@colon@_0d.0/ata4/host3/target3_@colon@_0_@colon@_0/3_@colon@_0_@colon@_0_@colon@_0/block/sdb/bcache/stats_day/cache_miss_collisions: Filename too long
fatal: cannot create directory at 'sysfs/fixtures.src/devices/pci0000_@colon@_00/0000_@colon@_00_@colon@_0d.0/ata4/host3/target3_@colon@_0_@colon@_0/3_@colon@_0_@colon@_0_@colon@_0/block/sdb/bcache/stats_five_minute': Filename too long
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry the checkout with 'git checkout -f HEAD'

ref #55 #56

potential crash in vm.go

In vm.go code like this is used:

vp := util.NewValueParser(value)

switch f.Name() {
case "admin_reserve_kbytes":
	vm.AdminReserveKbytes = *vp.PInt64()

If the methods of ValueParser return a nil pointer, which they do when they can't parse the value, this will crash:

panic: runtime error: invalid memory address or nil pointer dereference [recovered]
        panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x51d9a3]

Since this was added quite recently, I'd simply change the interface to use pointers for parsed values, so that unparseable values simply result in nil pointers in the returned struct fields.
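
Either way, a hedged sketch of a guard that avoids the dereference (PInt64 and Err as in internal/util.ValueParser; surrounding names as in the snippet above):

vp := util.NewValueParser(value)
switch f.Name() {
case "admin_reserve_kbytes":
	if v := vp.PInt64(); v != nil {
		vm.AdminReserveKbytes = *v
	}
}
if err := vp.Err(); err != nil {
	return nil, err
}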

Parse XFS stats

Looks like there could be some useful information in here:

matt@servnerr-2:~$ cat /sys/fs/xfs/stats/stats
extent_alloc 92021 97163 92022 93325
abt 0 0 0 0
blk_map 1760270 187968 184039 92021 92022 2132277 0
bmbt 0 0 0 0
dir 184613 92021 92018 136144
trans 706 940035 0
ig 184619 58536 0 126083 0 33482 22
log 2871 112984 9 17314 739
push_ail 940745 0 132520 15426 0 3940 240 154770 0 40
xstrat 92021 0
rw 107225 93619
attr 4 0 0 0
icluster 8648 7824 135243
vnodes 92601 0 0 0 92018 92018 92018 0
buf 2654846 7122 2647761 3599 2 7085 0 10297 7085
abtb2 184089 1271462 13227 13193 0 0 0 0 0 0 0 0 0 0 2733130
abtc2 343698 2405723 171866 171832 0 0 0 0 0 0 0 0 0 0 21306938
bmbt2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
ibt2 341422 1352238 0 0 0 0 0 0 0 0 0 0 0 0 0
fibt2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
qm 0 0 0 0 0 0 0 0
xpc 397979648 92452093 85848224
debug 0

Feel free to assign this to me. To be done for prometheus/node_exporter#456.
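
For reference, a minimal standalone sketch of parsing this file shape (not the eventual xfs package API): each line is a counter-group name followed by space-separated unsigned integers.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

func main() {
	f, err := os.Open("/sys/fs/xfs/stats/stats")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	stats := map[string][]uint64{}
	s := bufio.NewScanner(f)
	for s.Scan() {
		fields := strings.Fields(s.Text())
		if len(fields) < 2 {
			continue
		}
		vals := make([]uint64, 0, len(fields)-1)
		for _, fv := range fields[1:] {
			v, err := strconv.ParseUint(fv, 10, 64)
			if err != nil {
				continue // skip malformed counters
			}
			vals = append(vals, v)
		}
		stats[fields[0]] = vals
	}
	fmt.Println("extent_alloc:", stats["extent_alloc"])
}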

Support for /proc/sys/vm

Hi,

I raised the suggestion of having the node exporter expose metrics from /proc/sys/vm on the mailing list and was suggested to raise an issue here.

For my specific use case, exposing /proc/sys/vm/max_map_count is sufficient. There are also other files in /proc/sys/vm that we might consider exposing.

I'm happy to work on coding this once there's consensus on the approach.

Regards,
Steven

Add support for parsing /proc/crypto

This shows which cryptographic algorithms are available in the kernel, and which are being used.

$ head -n 9 /proc/crypto 
name         : crct10dif
driver       : crct10dif-pclmul
module       : crct10dif_pclmul
priority     : 200
refcnt       : 1
selftest     : passed
type         : shash
blocksize    : 1
digestsize   : 2

Seems like it'd probably be useful for some work I'm doing. Maybe not as much in a metrics context, outside of the "reference count" field.

Feel free to assign to me.
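
A minimal standalone sketch of parsing that record shape, assuming entries are key : value lines separated by blank lines (as in the sample above):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/crypto")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var recs []map[string]string
	cur := map[string]string{}
	s := bufio.NewScanner(f)
	for s.Scan() {
		line := s.Text()
		if strings.TrimSpace(line) == "" {
			if len(cur) > 0 {
				recs = append(recs, cur)
				cur = map[string]string{}
			}
			continue
		}
		parts := strings.SplitN(line, ":", 2)
		if len(parts) == 2 {
			cur[strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
		}
	}
	if len(cur) > 0 {
		recs = append(recs, cur)
	}
	fmt.Printf("found %d algorithm entries\n", len(recs))
}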
