License: MIT License
For Deployment postgres in namespace default
Description:
Disabling access to the root filesystem can prevent attackers from altering configuration,
installing new software, or overwriting binaries with malicious code.
By default, Kubernetes runs containers with a writeable filesystem. However, Kubernetes workloads should
generally be stateless, and should not need write access to the root filesystem. If workloads do need
to write files to disk, there are safer methods of doing so (see remediation guidance).
Remediation:
In your workload's container configuration, set securityContext.readOnlyRootFilesystem = true.
If your workload needs to write data to disk, you can still protect the root filesystem by
mounting a separate volume. This creates a special writeable directory for your app, while
preventing changes to system components, application configuration, etc. See the example
below, or view the docs to learn more.
Examples
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  volumes:
  - name: data
    emptyDir: {}
  containers:
  - name: busybox
    image: busybox
    command: [ "sh", "-c", "echo 'hello world' > /data/hello.txt" ]
    volumeMounts:
    - name: data
      mountPath: /data
    securityContext:
      readOnlyRootFilesystem: true
For Deployment local-path-provisioner in namespace local-path-storage
Description:
Nova found a new version for the container docker.io/rancher/local-path-provisioner.
You have version v0.0.14 installed, but v0.0.24 is now available.
Remediation:
Update your workload configuration to use tag v0.0.24.
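As a sketch, the new tag goes on the container image in the Deployment's pod template (the container name here is an assumption):

```yaml
spec:
  template:
    spec:
      containers:
      - name: local-path-provisioner   # container name assumed
        image: docker.io/rancher/local-path-provisioner:v0.0.24
```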
For Deployment grafana in namespace grafana
Description:
Reducing kernel capabilities available to a container limits its attack surface.
Remediation:
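The remediation text is missing here; as a hedged sketch, mirroring the capability guidance given for other workloads later in this report, drop all capabilities in the container's securityContext (the container name is assumed):

```yaml
spec:
  template:
    spec:
      containers:
      - name: grafana          # container name assumed
        securityContext:
          capabilities:
            drop:
            - ALL
```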
For Deployment postgres in namespace default
Description:
By default, the Kubernetes scheduler has some rules about spreading pods across
different failure zones; however, these defaults are not always consistent, and
they are never guaranteed. Best practice is to specify a topologySpreadConstraint
on pods that have multiple replicas (such as in a deployment).
Remediation:
Add a topologySpreadConstraint block to your pod specification:
spec:
  topologySpreadConstraints:
  - maxSkew: <integer>
    minDomains: <integer> # optional; beta since v1.25
    topologyKey: <string>
    whenUnsatisfiable: <string>
    labelSelector: <object>
    matchLabelKeys: <list> # optional; alpha since v1.25
    nodeAffinityPolicy: [Honor|Ignore] # optional; beta since v1.26
    nodeTaintsPolicy: [Honor|Ignore] # optional; beta since v1.26
Example
This example spreads pods across nodes and availability zones,
but does not block scheduling if that is impossible.
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: "topology.kubernetes.io/zone"
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: your-app-name
  - maxSkew: 1
    topologyKey: "kubernetes.io/hostname"
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: your-app-name
For Pod kube-controller-manager-kind-control-plane in namespace kube-system
Description:
Nova found a new version for the container k8s.gcr.io/kube-controller-manager.
You have version v1.21.1 installed, but v1.27.1 is now available.
Remediation:
Update your workload configuration to use tag v1.27.1.
For DaemonSet kube-proxy in namespace kube-system
Description:
By default, the Kubernetes scheduler has some rules about spreading pods across
different failure zones; however, these defaults are not always consistent, and
they are never guaranteed. Best practice is to specify a topologySpreadConstraint
on pods that have multiple replicas (such as in a deployment).
Remediation:
Add a topologySpreadConstraint block to your pod specification:
spec:
  topologySpreadConstraints:
  - maxSkew: <integer>
    minDomains: <integer> # optional; beta since v1.25
    topologyKey: <string>
    whenUnsatisfiable: <string>
    labelSelector: <object>
    matchLabelKeys: <list> # optional; alpha since v1.25
    nodeAffinityPolicy: [Honor|Ignore] # optional; beta since v1.26
    nodeTaintsPolicy: [Honor|Ignore] # optional; beta since v1.26
Example
This example spreads pods across nodes and availability zones,
but does not block scheduling if that is impossible.
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: "topology.kubernetes.io/zone"
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: your-app-name
  - maxSkew: 1
    topologyKey: "kubernetes.io/hostname"
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: your-app-name
For ConfigMap postgres-config in namespace default
Description:
A ConfigMap stores non-confidential data that can be accessed by Pods as environment variables, files, or command-line arguments.
This ConfigMap may contain sensitive information, which could leak secrets or allow remote code execution.
Remediation:
Move sensitive content into a Kubernetes Secret, which can be protected using RBAC.
Examples of sensitive content include environment variables such as AWS_SECRET_ACCESS_KEY,
GOOGLE_APPLICATION_CREDENTIALS, AZURE_*_KEY, or OCI_CLI_KEY_CONTENT; keys named
password, token, bearer, or secret; and values matching ---- BEGIN * PRIVATE KEY ---.
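As a sketch (the Secret name and keys below are assumptions), the sensitive values could move from the ConfigMap into a Secret:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secrets        # name assumed
type: Opaque
stringData:
  POSTGRES_PASSWORD: change-me  # placeholder value
```

The container can then load it with an envFrom secretRef instead of a configMapRef, and read access to the Secret can be restricted via RBAC.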
For Deployment postgres in namespace default
Description:
Kubernetes will often cache images on worker nodes. By default, an image will only be pulled
if it isn't already cached on the node attempting to run it.
However, utilizing cached versions of a Docker image can be a reliability issue.
It can lead to different images running on different nodes, leading to inconsistent
behavior. It can also be a security issue, as a workload can access a cached image even if it
doesn't have permission to access the remote Docker repository (via imagePullSecret).
Specifying pullPolicy=Always will prevent these problems by ensuring the latest image is
downloaded every time a new pod is created.
Remediation:
In your Pod spec, set imagePullPolicy to Always.
Examples
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    imagePullPolicy: Always
    command: [ "echo", "SUCCESS" ]
For Deployment postgres in namespace default
Description:
Setting CPU requests guarantees that your container will have at least that much CPU available.
Without CPU requests, a pod may be scheduled on a node that is already overutilized, and
will suffer from performance problems as a result.
For production-grade workloads, CPU requests should always be set.
Remediation:
Add CPU requests to each of your container specifications. CPU may be set in terms of
whole CPUs (e.g. 1.0 or .25), or more commonly, in terms of millicpus (e.g. 1000m or 250m).
It's up to you to decide how much CPU to allocate to your application. Setting CPU requests
too high could potentially lead to cost overruns, whereas setting it too low may cause
performance issues.
Insights can help you determine your application's CPU usage via the Prometheus Collector and Goldilocks reports.
We strongly recommend you enable one or both of these reports to help determine appropriate resource
requests and limits.
Examples
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
For Deployment postgres in namespace default
Description:
If runAsNonRoot is not set to true, containers are allowed to run as the root user.
This makes it easier for an attacker to gain access to resources on the host, especially
when used in combination with other security configurations (like adding additional
capabilities or procMount).
Occasionally, there may be good reason for a container process to run as the root user,
but unless you have an explicit need, setting runAsNonRoot=true will greatly reduce
security risk.
Remediation:
In your workload configuration, set securityContext.runAsNonRoot=true. This can be set at either
the container level (in which case it needs to be set for all containers) or at the pod level
(where it will become the default for all containers).
If the corresponding container is currently running as the root user,
you will also need to do one of the following to change it to run as a non-root user:
set securityContext.runAsUser to a non-zero integer, or
add a USER directive (e.g. USER 1000) to your Dockerfile.
If there are no non-root users in the Docker image, you may also need to create one
in the Dockerfile, e.g. with RUN useradd nonroot -u 1000 --user-group.
If you're changing from a root user to a non-root user, be on the lookout for issues with
things like file permissions - you may need to utilize chown and chmod in your Dockerfile
to get things running properly.
Examples
At pod level
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
At the container level
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  containers:
  - name: busybox
    image: busybox
    securityContext:
      runAsNonRoot: true
      runAsUser: 1000
For Pod kube-scheduler-kind-control-plane in namespace kube-system
Description:
Nova found a new version for the container k8s.gcr.io/kube-scheduler.
You have version v1.21.1 installed, but v1.27.1 is now available.
Remediation:
Update your workload configuration to use tag v1.27.1.
For Deployment postgres in namespace default
Description:
Readiness probes are designed to ensure that an application has reached a "ready" state -
e.g. it's ready to serve traffic to your end-users.
In many cases there is a period of time between when a webserver process starts and when
it is ready to receive traffic; for example, your app may need some time to connect to a
database or download configuration. Your application may also intermittently need to
stop serving traffic, e.g. to reconnect to the database.
When a readiness probe fails on a particular pod, Kubernetes will divert traffic
away from that pod, preferring pods that are in a ready state. Not setting a readiness
probe can lead to gaps in service or inconsistencies if users
connect to a pod which is not yet ready for traffic.
Remediation:
Add a readinessProbe to your container spec.
It's up to you to decide how to determine whether your application is ready for traffic.
You might check that you're able to connect to your database, or ensure that a
configuration file exists.
You can configure your readiness probe to execute a command or check for a healthy HTTP/TCP response.
Examples
Using an HTTP request
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    readinessProbe:
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
Using a command
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh   # busybox does not ship bash
    - -c
    - sleep 30; touch /tmp/ready
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/ready
      initialDelaySeconds: 5
      periodSeconds: 10
For Deployment dns-controller in namespace kube-system
Description:
Reducing kernel capabilities available to a container limits its attack surface.
Remediation:
For Deployment postgres in namespace default
Description:
Under particular configurations, a container may be able to escalate its privileges.
Setting allowPrivilegeEscalation to false will set the no_new_privs flag on the container
process, preventing setuid binaries from changing the effective user ID.
Setting this flag is particularly important when using runAsNonRoot, which can otherwise be circumvented.
Remediation:
In your workload configuration, set securityContext.allowPrivilegeEscalation to false.
Note that your workload might currently rely on the ability to escalate its privileges -
be sure to test the change to ensure nothing breaks.
Examples
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    securityContext:
      allowPrivilegeEscalation: false
For DaemonSet kindnet in namespace kube-system
Description:
If runAsNonRoot is not set to true, containers are allowed to run as the root user.
This makes it easier for an attacker to gain access to resources on the host, especially
when used in combination with other security configurations (like adding additional
capabilities or procMount).
Occasionally, there may be good reason for a container process to run as the root user,
but unless you have an explicit need, setting runAsNonRoot=true will greatly reduce
security risk.
Remediation:
In your workload configuration, set securityContext.runAsNonRoot=true. This can be set at either
the container level (in which case it needs to be set for all containers) or at the pod level
(where it will become the default for all containers).
If the corresponding container is currently running as the root user,
you will also need to do one of the following to change it to run as a non-root user:
set securityContext.runAsUser to a non-zero integer, or
add a USER directive (e.g. USER 1000) to your Dockerfile.
If there are no non-root users in the Docker image, you may also need to create one
in the Dockerfile, e.g. with RUN useradd nonroot -u 1000 --user-group.
If you're changing from a root user to a non-root user, be on the lookout for issues with
things like file permissions - you may need to utilize chown and chmod in your Dockerfile
to get things running properly.
Examples
At pod level
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
At the container level
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  containers:
  - name: busybox
    image: busybox
    securityContext:
      runAsNonRoot: true
      runAsUser: 1000
For DaemonSet kindnet in namespace kube-system
Description:
Although Kubernetes allows you to deploy a pod with access to the host network namespace,
it's rarely a good idea. A pod running with the hostNetwork attribute enabled will have access to the loopback device
and services listening on localhost, and could be used to snoop on network activity of other pods on the same node.
There are certain examples where setting hostNetwork to true is required, such as deploying a networking plugin like Flannel.
Remediation:
Do not configure the hostNetwork attribute.
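A minimal sketch; hostNetwork defaults to false, so omitting it entirely is equivalent:

```yaml
spec:
  template:
    spec:
      hostNetwork: false   # or simply omit the attribute
```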
For Deployment postgres in namespace default
Description:
Deployments should run with multiple replicas in order to ensure high availability and avoid downtime.
Remediation:
Set spec.replicas to 2 or more.
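A minimal sketch for this Deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: default
spec:
  replicas: 2   # 2 or more for high availability
```

Note that a stateful workload like postgres generally needs a StatefulSet or an operator to run multiple replicas safely; simply raising replicas on a Deployment can cause two pods to write to the same volume.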
For DaemonSet kindnet in namespace kube-system
Description:
Under particular configurations, a container may be able to escalate its privileges.
Setting allowPrivilegeEscalation to false will set the no_new_privs flag on the container
process, preventing setuid binaries from changing the effective user ID.
Setting this flag is particularly important when using runAsNonRoot, which can otherwise be circumvented.
Remediation:
In your workload configuration, set securityContext.allowPrivilegeEscalation to false.
Note that your workload might currently rely on the ability to escalate its privileges -
be sure to test the change to ensure nothing breaks.
Examples
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    securityContext:
      allowPrivilegeEscalation: false
For DaemonSet kube-proxy in namespace kube-system
Description:
Every container running in Kubernetes comes with a configurable set of kernel-level
capabilities. Generally these capabilities are not needed by most workloads, and
can be exploited by an attacker who gains access to the container.
Many of the available capabilities are enabled by default in Kubernetes, including:
NET_ADMIN
CHOWN
DAC_OVERRIDE
FSETID
FOWNER
MKNOD
NET_RAW
SETGID
SETUID
SETFCAP
SETPCAP
NET_BIND_SERVICE
SYS_CHROOT
KILL
AUDIT_WRITE
Unless these capabilities are explicitly needed by your workload, they should be dropped.
Remediation:
The easiest way to remove insecure capabilities from your container is to drop
them all (see example below).
If particular capabilities are necessary for your workload to run properly,
Fairwinds recommends dropping all, and then explicitly adding the ones that are necessary.
Examples
Drop all
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  containers:
  - name: busybox
    image: busybox
    securityContext:
      capabilities:
        drop:
        - ALL
Add back necessary capabilities
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  containers:
  - name: busybox
    image: busybox
    securityContext:
      capabilities:
        drop:
        - ALL
        add:
        - CHOWN
        - SETUID
For DaemonSet kindnet in namespace kube-system
Description:
Every container running in Kubernetes comes with a configurable set of kernel-level
capabilities. Generally these capabilities are not needed by most workloads, and
can be exploited by an attacker who gains access to the container.
Many of the available capabilities are enabled by default in Kubernetes, including:
NET_ADMIN
CHOWN
DAC_OVERRIDE
FSETID
FOWNER
MKNOD
NET_RAW
SETGID
SETUID
SETFCAP
SETPCAP
NET_BIND_SERVICE
SYS_CHROOT
KILL
AUDIT_WRITE
Unless these capabilities are explicitly needed by your workload, they should be dropped.
Remediation:
The easiest way to remove insecure capabilities from your container is to drop
them all (see example below).
If particular capabilities are necessary for your workload to run properly,
Fairwinds recommends dropping all, and then explicitly adding the ones that are necessary.
Examples
Drop all
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  containers:
  - name: busybox
    image: busybox
    securityContext:
      capabilities:
        drop:
        - ALL
Add back necessary capabilities
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  containers:
  - name: busybox
    image: busybox
    securityContext:
      capabilities:
        drop:
        - ALL
        add:
        - CHOWN
        - SETUID
For Pod kube-scheduler-kind-control-plane in namespace kube-system
Description:
Kubernetes will often cache images on worker nodes. By default, an image will only be pulled
if it isn't already cached on the node attempting to run it.
However, utilizing cached versions of a Docker image can be a reliability issue.
It can lead to different images running on different nodes, leading to inconsistent
behavior. It can also be a security issue, as a workload can access a cached image even if it
doesn't have permission to access the remote Docker repository (via imagePullSecret).
Specifying pullPolicy=Always will prevent these problems by ensuring the latest image is
downloaded every time a new pod is created.
Remediation:
In your Pod spec, set imagePullPolicy to Always.
Examples
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    imagePullPolicy: Always
    command: [ "echo", "SUCCESS" ]
For Pod etcd-kind-control-plane in namespace kube-system
Description:
Nova found a new version for the container k8s.gcr.io/etcd.
You have version 3.4.13-0 installed, but 3.5.7-0 is now available.
Remediation:
Update your workload configuration to use tag 3.5.7-0.
For DaemonSet kindnet in namespace kube-system
Description:
Kubernetes will often cache images on worker nodes. By default, an image will only be pulled
if it isn't already cached on the node attempting to run it.
However, utilizing cached versions of a Docker image can be a reliability issue.
It can lead to different images running on different nodes, leading to inconsistent
behavior. It can also be a security issue, as a workload can access a cached image even if it
doesn't have permission to access the remote Docker repository (via imagePullSecret).
Specifying pullPolicy=Always will prevent these problems by ensuring the latest image is
downloaded every time a new pod is created.
Remediation:
In your Pod spec, set imagePullPolicy to Always.
Examples
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    imagePullPolicy: Always
    command: [ "echo", "SUCCESS" ]
For DaemonSet kube-proxy in namespace kube-system
Description:
Under particular configurations, a container may be able to escalate its privileges.
Setting allowPrivilegeEscalation to false will set the no_new_privs flag on the container
process, preventing setuid binaries from changing the effective user ID.
Setting this flag is particularly important when using runAsNonRoot, which can otherwise be circumvented.
Remediation:
In your workload configuration, set securityContext.allowPrivilegeEscalation to false.
Note that your workload might currently rely on the ability to escalate its privileges -
be sure to test the change to ensure nothing breaks.
Examples
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    securityContext:
      allowPrivilegeEscalation: false
For Deployment postgres in namespace default
Description:
Setting memory requests guarantees that your container will have at least that much memory available.
Without memory requests, a pod may be scheduled on a node that is already overutilized, and
will suffer from performance problems or out-of-memory errors as a result.
For production-grade workloads, memory requests should always be set.
Remediation:
Add a memory request to each of your container specifications. Memory can be specified
as a fixed number of bytes (e.g. 1024), with a power-of-ten suffix (e.g. 1K),
or with a power-of-two suffix (e.g. 1Ki).
It's up to you to decide how much memory to allocate to your application. Setting memory requests
too high could potentially lead to cost overruns, whereas setting it too low may cause
performance issues.
Insights can help you determine your application's memory usage via the Prometheus Collector and Goldilocks reports.
We strongly recommend you enable one or both of these reports to help determine appropriate resource
requests and limits.
Examples
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
For Deployment postgres in namespace default
Description:
Every container running in Kubernetes comes with a configurable set of kernel-level
capabilities. Generally these capabilities are not needed by most workloads, and
can be exploited by an attacker who gains access to the container.
Many of the available capabilities are enabled by default in Kubernetes, including:
NET_ADMIN
CHOWN
DAC_OVERRIDE
FSETID
FOWNER
MKNOD
NET_RAW
SETGID
SETUID
SETFCAP
SETPCAP
NET_BIND_SERVICE
SYS_CHROOT
KILL
AUDIT_WRITE
Unless these capabilities are explicitly needed by your workload, they should be dropped.
Remediation:
The easiest way to remove insecure capabilities from your container is to drop
them all (see example below).
If particular capabilities are necessary for your workload to run properly,
Fairwinds recommends dropping all, and then explicitly adding the ones that are necessary.
Examples
Drop all
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  containers:
  - name: busybox
    image: busybox
    securityContext:
      capabilities:
        drop:
        - ALL
Add back necessary capabilities
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  containers:
  - name: busybox
    image: busybox
    securityContext:
      capabilities:
        drop:
        - ALL
        add:
        - CHOWN
        - SETUID
For HelmChart insights-agent in namespace default
Description:
Nova found a new version for the Helm chart insights-agent.
You have version 2.10.0 installed, but 2.10.6 is now available.
Remediation:
Run helm repo update, and reinstall this Helm chart.
You can specify --version=2.10.6 when reinstalling the chart with helm upgrade.
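The two commands, sketched end to end; the release name, namespace, and repo alias (fairwinds-stable) are assumptions based on the chart name above:

```shell
# refresh local chart indexes so the new version is visible
helm repo update

# upgrade the existing release to the pinned version
# release name, repo alias, and namespace are assumptions; adjust to your install
helm upgrade insights-agent fairwinds-stable/insights-agent \
  --namespace default \
  --version=2.10.6
```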
For Pod kube-apiserver-kind-control-plane in namespace kube-system
Description:
Nova found a new version for the container k8s.gcr.io/kube-apiserver.
You have version v1.21.1 installed, but v1.27.1 is now available.
Remediation:
Update your workload configuration to use tag v1.27.1.
For DaemonSet kindnet in namespace kube-system
Description:
Readiness probes are designed to ensure that an application has reached a "ready" state -
e.g. it's ready to serve traffic to your end-users.
In many cases there is a period of time between when a webserver process starts and when
it is ready to receive traffic; for example, your app may need some time to connect to a
database or download configuration. Your application may also intermittently need to
stop serving traffic, e.g. to reconnect to the database.
When a readiness probe fails on a particular pod, Kubernetes will divert traffic
away from that pod, preferring pods that are in a ready state. Not setting a readiness
probe can lead to gaps in service or inconsistencies if users
connect to a pod which is not yet ready for traffic.
Remediation:
Add a readinessProbe to your container spec.
It's up to you to decide how to determine whether your application is ready for traffic.
You might check that you're able to connect to your database, or ensure that a
configuration file exists.
You can configure your readiness probe to execute a command or check for a healthy HTTP/TCP response.
Examples
Using an HTTP request
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    readinessProbe:
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
Using a command
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh   # busybox does not ship bash
    - -c
    - sleep 30; touch /tmp/ready
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/ready
      initialDelaySeconds: 5
      periodSeconds: 10
For Job cronjob-executor in namespace insights-agent
Description:
By default, the Kubernetes scheduler has some rules about spreading pods across
different failure zones; however, these defaults are not always consistent, and
they are never guaranteed. Best practice is to specify a topologySpreadConstraint
on pods that have multiple replicas (such as in a deployment).
Remediation:
Add a topologySpreadConstraint block to your pod specification:
spec:
  topologySpreadConstraints:
  - maxSkew: <integer>
    minDomains: <integer> # optional; beta since v1.25
    topologyKey: <string>
    whenUnsatisfiable: <string>
    labelSelector: <object>
    matchLabelKeys: <list> # optional; alpha since v1.25
    nodeAffinityPolicy: [Honor|Ignore] # optional; beta since v1.26
    nodeTaintsPolicy: [Honor|Ignore] # optional; beta since v1.26
Example
This example spreads pods across nodes and availability zones,
but does not block scheduling if that is impossible.
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: "topology.kubernetes.io/zone"
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: your-app-name
  - maxSkew: 1
    topologyKey: "kubernetes.io/hostname"
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: your-app-name
For DaemonSet kindnet in namespace kube-system
Description:
Image Name: docker.io/kindest/kindnetd:0.5.3
From sha256:aa67fec7d7ef71445da9a84e9bc88afca2538e9a0aebcba6ef9509b7cf313d17:
| Package | CVE | Version | Severity | Title | Description |
|---|---|---|---|---|---|
| coreutils | CVE-2016-2781 | 8.26-3 | LOW | coreutils: Non-privileged session can escape to the parent session in chroot | chroot in GNU coreutils, when used with --userspec, allows local users to escape to the parent session via a crafted TIOCSTI ioctl call, which pushes characters to the terminal's input buffer. |
| | CVE-2017-18018 | | LOW | coreutils: race condition vulnerability in chown and chgrp | In GNU Coreutils through 8.29, chown-core.c in chown and chgrp does not prevent replacement of a plain file with a symlink during use of the POSIX "-R -L" options, which allows local users to modify the ownership of arbitrary files by leveraging a race condition. |
| gpgv | CVE-2018-1000858 | 2.1.18-8~deb9u4 | MEDIUM | gnupg2: Cross site request forgery in dirmngr resulting in an information disclosure or denial of service | GnuPG version 2.1.12 - 2.2.11 contains a Cross Site Request Forgery (CSRF) vulnerability in dirmngr that can result in attacker-controlled CSRF, Information Disclosure, DoS. This attack appears to be exploitable when the victim performs a WKD request, e.g. enters an email address in the composer window of Thunderbird/Enigmail. This vulnerability appears to have been fixed after commit 4a4bb874f63741026bd26264c43bb32b1099f060. |
| | CVE-2018-9234 | | MEDIUM | GnuPG: Unenforced configuration allows for apparently valid certifications actually signed by signing subkeys | GnuPG 2.2.4 and 2.2.5 does not enforce a configuration in which key certification requires an offline master Certify key, which results in apparently valid certifications that occurred only with access to a signing subkey. |
| | CVE-2019-14855 | | LOW | | |
| libapt-pkg5.0 | CVE-2011-3374 | 1.4.9 | MEDIUM | | It was found that apt-key in apt, all versions, does not correctly validate gpg keys with the master keyring, leading to a potential man-in-the-middle attack. |
| libbz2-1.0 | CVE-2019-12900 | 1.0.6-8.1 | HIGH | bzip2: out-of-bounds write in function BZ2_decompress | BZ2_decompress in decompress.c in bzip2 through 1.0.6 has an out-of-bounds write when there are many selectors. |
| libcomerr2 | CVE-2019-5094 | 1.43.4-2 | MEDIUM | e2fsprogs: crafted ext4 partition leads to out-of-bounds write | An exploitable code execution vulnerability exists in the quota file functionality of E2fsprogs 1.45.3. A specially crafted ext4 partition can cause an out-of-bounds write on the heap, resulting in code execution. An attacker can corrupt a partition to trigger this vulnerability. |
| | CVE-2019-5188 | | MEDIUM | e2fsprogs: Out-of-bounds write in e2fsck/rehash.c | A code execution vulnerability exists in the directory rehashing functionality of E2fsprogs e2fsck 1.45.4. A specially crafted ext4 directory can cause an out-of-bounds write on the stack, resulting in code execution. An attacker can corrupt a partition to trigger this vulnerability. |
| libelf1 | CVE-2018-16062 | 0.168-1 | MEDIUM | elfutils: Heap-based buffer over-read in libdw/dwarf_getaranges.c:dwarf_getaranges() via crafted file | dwarf_getaranges in dwarf_getaranges.c in libdw in elfutils before 2018-08-18 allows remote attackers to cause a denial of service (heap-based buffer over-read) via a crafted file. |
| | CVE-2018-16402 | | HIGH | elfutils: Double-free due to double decompression of sections in crafted ELF causes crash | libelf/elf_end.c in elfutils 0.173 allows remote attackers to cause a denial of service (double free and application crash) or possibly have unspecified other impact because it tries to decompress twice. |
| | CVE-2018-16403 | | MEDIUM | elfutils: Heap-based buffer over-read in libdw/dwarf_getabbrev.c and libwd/dwarf_hasattr.c causes crash | libdw in elfutils 0.173 checks the end of the attributes list incorrectly in dwarf_getabbrev in dwarf_getabbrev.c and dwarf_hasattr in dwarf_hasattr.c, leading to a heap-based buffer over-read and an application crash. |
| | CVE-2018-18310 | | MEDIUM | elfutils: invalid memory address dereference was discovered in dwfl_segment_report_module.c in libdwfl | An invalid memory address dereference was discovered in dwfl_segment_report_module.c in libdwfl in elfutils through v0.174. The vulnerability allows attackers to cause a denial of service (application crash) with a crafted ELF file, as demonstrated by consider_notes. |
| | CVE-2018-18520 | | MEDIUM | elfutils: eu-size cannot handle recursive ar files | An Invalid Memory Address Dereference exists in the function elf_end in libelf in elfutils through v0.174. Although eu-size is intended to support ar files inside ar files, handle_ar in size.c closes the outer ar file before handling all inner entries. The vulnerability allows attackers to cause a denial of service (application crash) with a crafted ELF file. |
| | CVE-2018-18521 | | MEDIUM | elfutils: Divide-by-zero in arlib_add_symbols function in arlib.c | Divide-by-zero vulnerabilities in the function arlib_add_symbols() in arlib.c in elfutils 0.174 allow remote attackers to cause a denial of service (application crash) with a crafted ELF file, as demonstrated by eu-ranlib, because a zero sh_entsize is mishandled. |
| | CVE-2019-7148 | | MEDIUM | elfutils: excessive memory allocation in read_long_names in elf_begin.c in libelf | (DISPUTED) An attempted excessive memory allocation was discovered in the function read_long_names in elf_begin.c in libelf in elfutils 0.174. Remote attackers could leverage this vulnerability to cause a denial-of-service via crafted elf input, which leads to an out-of-memory exception. NOTE: The maintainers believe this is not a real issue, but instead a "warning caused by ASAN because the allocation is big. By setting ASAN_OPTIONS=allocator_may_return_null=1 and running the reproducer, nothing happens." |
| | CVE-2019-7149 | | MEDIUM | elfutils: heap-based buffer over-read in read_srclines in dwarf_getsrclines.c in libdw | A heap-based buffer over-read was discovered in the function read_srclines in dwarf_getsrclines.c in libdw in elfutils 0.175. A crafted input can cause segmentation faults, leading to denial-of-service, as demonstrated by eu-nm. |
| | CVE-2019-7150 | | MEDIUM | elfutils: segmentation fault in elf64_xlatetom in libelf/elf32_xlatetom.c | An issue was discovered in elfutils 0.175. A segmentation fault can occur in the function elf64_xlatetom in libelf/elf32_xlatetom.c, due to dwfl_segment_report_module not checking whether the dyn data read from a core file is truncated. A crafted input can cause a program crash, leading to denial-of-service, as demonstrated by eu-stack. |
| | CVE-2019-7664 | | MEDIUM | elfutils: out of bound write in elf_cvt_note in libelf/note_xlate.h | In elfutils 0.175, a negative-sized memcpy is attempted in elf_cvt_note in libelf/note_xlate.h because of an incorrect overflow check. Crafted elf input causes a segmentation fault, leading to denial of service (program crash). |
| | CVE-2019-7665 | | MEDIUM | elfutils: heap-based buffer over-read in function elf32_xlatetom in elf32_xlatetom.c | In elfutils 0.175, a heap-based buffer over-read was discovered in the function elf32_xlatetom in elf32_xlatetom.c in libelf. A crafted ELF input can cause a segmentation fault leading to denial of service (program crash) because ebl_core_note does not reject malformed core file notes. |
libgcrypt20 | CVE-2018-6829 | 1.7.6-2+deb9u3 | MEDIUM | libgcrypt: ElGamal implementation doesn't have semantic security due to incorrectly encoded plaintexts possibly allowing to obtain sensitive information | cipher/elgamal.c in Libgcrypt through 1.8.2, when used to encrypt messages directly, improperly encodes plaintexts, which allows attackers to obtain sensitive information by reading ciphertext data (i.e., it does not have semantic security in face of a ciphertext-only attack). The Decisional Diffie-Hellman (DDH) assumption does not hold for Libgcrypt's ElGamal implementation. |
CVE-2019-12904 | MEDIUM | Libgcrypt: physical addresses being available to other processes leads to a flush-and-reload side-channel attack | In Libgcrypt 1.8.4, the C implementation of AES is vulnerable to a flush-and-reload side-channel attack because physical addresses are available to other processes. (The C implementation is used on platforms where an assembly-language implementation is unavailable.) | ||
CVE-2019-13627 | MEDIUM | libgcrypt: ECDSA timing attack in the libgcrypt20 cryptographic library | It was discovered that there was a ECDSA timing attack in the libgcrypt20 cryptographic library. Version affected: 1.8.4-5, 1.7.6-2+deb9u3, and 1.6.3-2+deb8u4. Versions fixed: 1.8.5-2 and 1.6.3-2+deb8u7. | ||
liblz4-1 | CVE-2019-17543 | 0.0~r131-2 | MEDIUM | lz4: heap-based buffer overflow in LZ4_write32 | LZ4 before 1.9.2 has a heap-based buffer overflow in LZ4_write32 (related to LZ4_compress_destSize), affecting applications that call LZ4_compress_fast with a large input. (This issue can also lead to data corruption.) NOTE: the vendor states "only a few specific / uncommon usages of the API are at risk." |
libnettle6 | CVE-2018-16869 | 3.3-1 | LOW | nettle: Leaky data conversion exposing a manager oracle | A Bleichenbacher type side-channel based padding oracle attack was found in the way nettle handles endian conversion of RSA decrypted PKCS#1 v1.5 data. An attacker who is able to run a process on the same physical core as the victim process, could use this flaw extract plaintext or in some cases downgrade any TLS connections to a vulnerable server. |
libpcre3 | CVE-2017-11164 | 2:8.39-3 | HIGH | pcre: OP_KETRMAX feature in the match function in pcre_exec.c | In PCRE 8.41, the OP_KETRMAX feature in the match function in pcre_exec.c allows stack exhaustion (uncontrolled recursion) when processing a crafted regular expression. |
CVE-2017-16231 | LOW | pcre: self-recursive call in match() in pcre_exec.c leads to denial of service | ** DISPUTED ** In PCRE 8.41, after compiling, a pcretest load test PoC produces a crash overflow in the function match() in pcre_exec.c because of a self-recursive call. NOTE: third parties dispute the relevance of this report, noting that there are options that can be used to limit the amount of stack that is used. | ||
CVE-2017-7245 | MEDIUM | pcre: stack-based buffer overflow write in pcre32_copy_substring | Stack-based buffer overflow in the pcre32_copy_substring function in pcre_get.c in libpcre1 in PCRE 8.40 allows remote attackers to cause a denial of service (WRITE of size 4) or possibly have unspecified other impact via a crafted file. | ||
CVE-2017-7246 | MEDIUM | pcre: stack-based buffer overflow write in pcre32_copy_substring | Stack-based buffer overflow in the pcre32_copy_substring function in pcre_get.c in libpcre1 in PCRE 8.40 allows remote attackers to cause a denial of service (WRITE of size 268) or possibly have unspecified other impact via a crafted file. | ||
libstdc++6 | CVE-2018-12886 | 6.3.0-18+deb9u1 | MEDIUM | gcc: spilling of stack protection address in cfgexpand.c and function.c leads to stack-overflow protection bypass | stack_protect_prologue in cfgexpand.c and stack_protect_epilogue in function.c in GNU Compiler Collection (GCC) 4.1 through 8 (under certain circumstances) generate instruction sequences when targeting ARM targets that spill the address of the stack protector guard, which allows an attacker to bypass the protection of -fstack-protector, -fstack-protector-all, -fstack-protector-strong, and -fstack-protector-explicit against stack overflow by controlling what the stack canary is compared against. |
libtinfo5 | CVE-2018-19211 | 6.0+20161126-1+deb9u2 | MEDIUM | ncurses: Null pointer dereference at function _nc_parse_entry in parse_entry.c | In ncurses 6.1, there is a NULL pointer dereference at function _nc_parse_entry in parse_entry.c that will lead to a denial of service attack. The product proceeds to the dereference code path even after a "dubious character `*' in name or alias field" detection. |
CVE-2019-17594 | MEDIUM | ncurses: heap-based buffer overflow in the _nc_find_entry function in tinfo/comp_hash.c | There is a heap-based buffer over-read in the _nc_find_entry function in tinfo/comp_hash.c in the terminfo library in ncurses before 6.1-20191012. | ||
CVE-2019-17595 | MEDIUM | ncurses: heap-based buffer overflow in the fmt_entry function in tinfo/comp_hash.c | There is a heap-based buffer over-read in the fmt_entry function in tinfo/comp_hash.c in the terminfo library in ncurses before 6.1-20191012. | ||
libuuid1 | CVE-2016-2779 | 2.29.2-1+deb9u1 | HIGH | util-linux: runuser tty hijack via TIOCSTI ioctl | runuser in util-linux allows local users to escape to the parent session via a crafted TIOCSTI ioctl call, which pushes characters to the terminal's input buffer. |
libxtables12 | CVE-2012-2663 | 1.6.0+snapshot20161117-6 | HIGH | iptables: --syn flag bypass | extensions/libxt_tcp.c in iptables through 1.4.21 does not match TCP SYN+FIN packets in --syn rules, which might allow remote attackers to bypass intended firewall restrictions via crafted packets. NOTE: the CVE-2012-6638 fix makes this issue less relevant. |
CVE-2019-11360 | MEDIUM | A buffer overflow in iptables-restore in netfilter iptables 1.8.2 allows an attacker to (at least) crash the program or potentially gain code execution via a specially crafted iptables-save file. This is related to add_param_to_argv in xshared.c. | |||
multiarch-support | CVE-2009-5155 | 2.24-11+deb9u4 | MEDIUM | glibc: parse_reg_exp in posix/regcomp.c misparses alternatives leading to denial of service or trigger incorrect result | In the GNU C Library (aka glibc or libc6) before 2.28, parse_reg_exp in posix/regcomp.c misparses alternatives, which allows attackers to cause a denial of service (assertion failure and application exit) or trigger an incorrect result by attempting a regular-expression match. |
CVE-2010-4051 | MEDIUM | CVE-2010-4052 glibc: De-recursivise regular expression engine | The regcomp implementation in the GNU C Library (aka glibc or libc6) through 2.11.3, and 2.12.x through 2.12.2, allows context-dependent attackers to cause a denial of service (application crash) via a regular expression containing adjacent bounded repetitions that bypass the intended RE_DUP_MAX limitation, as demonstrated by a {10,}{10,}{10,}{10,}{10,} sequence in the proftpd.gnu.c exploit for ProFTPD, related to a "RE_DUP_MAX overflow." | ||
CVE-2010-4052 | MEDIUM | CVE-2010-4051 CVE-2010-4052 glibc: De-recursivise regular expression engine | Stack consumption vulnerability in the regcomp implementation in the GNU C Library (aka glibc or libc6) through 2.11.3, and 2.12.x through 2.12.2, allows context-dependent attackers to cause a denial of service (resource exhaustion) via a regular expression containing adjacent repetition operators, as demonstrated by a {10,}{10,}{10,}{10,} sequence in the proftpd.gnu.c exploit for ProFTPD. | ||
CVE-2010-4756 | MEDIUM | glibc: glob implementation can cause excessive CPU and memory consumption due to crafted glob expressions | The glob implementation in the GNU C Library (aka glibc or libc6) allows remote authenticated users to cause a denial of service (CPU and memory consumption) via crafted glob expressions that do not match any pathnames, as demonstrated by glob expressions in STAT commands to an FTP daemon, a different vulnerability than CVE-2010-2632. | ||
CVE-2015-8985 | MEDIUM | glibc: potential denial of service in pop_fail_stack() | The pop_fail_stack function in the GNU C Library (aka glibc or libc6) allows context-dependent attackers to cause a denial of service (assertion failure and application crash) via vectors related to extended regular expression processing. | ||
CVE-2016-10228 | MEDIUM | glibc: iconv program can hang when invoked with the -c option | The iconv program in the GNU C Library (aka glibc or libc6) 2.25 and earlier, when invoked with the -c option, enters an infinite loop when processing invalid multi-byte input sequences, leading to a denial of service. | ||
CVE-2016-10739 | MEDIUM | glibc: getaddrinfo should reject IP addresses with trailing characters | In the GNU C Library (aka glibc or libc6) through 2.28, the getaddrinfo function would successfully parse a string that contained an IPv4 address followed by whitespace and arbitrary characters, which could lead applications to incorrectly assume that it had parsed a valid string, without the possibility of embedded HTTP headers or other potentially dangerous substrings. | ||
CVE-2017-12132 | MEDIUM | glibc: Fragmentation attacks possible when EDNS0 is enabled | The DNS stub resolver in the GNU C Library (aka glibc or libc6) before version 2.26, when EDNS support is enabled, will solicit large UDP responses from name servers, potentially simplifying off-path DNS spoofing attacks due to IP fragmentation. | ||
CVE-2018-1000001 | HIGH | glibc: realpath() buffer underflow when getcwd() returns relative path allows privilege escalation | In glibc 2.26 and earlier there is confusion in the usage of getcwd() by realpath() which can be used to write before the destination buffer leading to a buffer underflow and potential code execution. | ||
CVE-2018-20796 | MEDIUM | glibc: uncontrolled recursion in function check_dst_limits_calc_pos_1 in posix/regexec.c | In the GNU C Library (aka glibc or libc6) through 2.29, check_dst_limits_calc_pos_1 in posix/regexec.c has Uncontrolled Recursion, as demonstrated by '(\227 | ||
CVE-2018-6485 | HIGH | glibc: Integer overflow in posix_memalign in memalign functions | An integer overflow in the implementation of the posix_memalign in memalign functions in the GNU C Library (aka glibc or libc6) 2.26 and earlier could cause these functions to return a pointer to a heap area that is too small, potentially leading to heap corruption. | ||
CVE-2018-6551 | HIGH | glibc: integer overflow in malloc functions | The malloc implementation in the GNU C Library (aka glibc or libc6), from version 2.24 to 2.26 on powerpc, and only in version 2.26 on i386, did not properly handle malloc calls with arguments close to SIZE_MAX and could return a pointer to a heap region that is smaller than requested, eventually leading to heap corruption. | ||
CVE-2019-1010022 | HIGH | glibc: stack guard protection bypass | GNU Libc current is affected by: Mitigation bypass. The impact is: Attacker may bypass stack guard protection. The component is: nptl. The attack vector is: Exploit stack buffer overflow vulnerability and use this bypass vulnerability to bypass stack guard. | ||
CVE-2019-1010023 | MEDIUM | glibc: running ldd on malicious ELF leads to code execution because of wrong size computation | GNU Libc current is affected by: Re-mapping current loaded library with malicious ELF file. The impact is: In worst case attacker may evaluate privileges. The component is: libld. The attack vector is: Attacker sends 2 ELF files to victim and asks to run ldd on it. ldd execute code. | ||
CVE-2019-1010024 | MEDIUM | glibc: ASLR bypass using cache of thread stack and heap | GNU Libc current is affected by: Mitigation bypass. The impact is: Attacker may bypass ASLR using cache of thread stack and heap. The component is: glibc. | ||
CVE-2019-1010025 | MEDIUM | glibc: information disclosure of heap addresses of pthread_created thread | ** DISPUTED ** GNU Libc current is affected by: Mitigation bypass. The impact is: Attacker may guess the heap addresses of pthread_created thread. The component is: glibc. NOTE: the vendor's position is "ASLR bypass itself is not a vulnerability." | ||
CVE-2019-19126 | LOW | glibc: LD_PREFER_MAP_32BIT_EXEC not ignored in setuid binaries | On the x86-64 architecture, the GNU C Library (aka glibc) before 2.31 fails to ignore the LD_PREFER_MAP_32BIT_EXEC environment variable during program execution after a security transition, allowing local attackers to restrict the possible mapping addresses for loaded libraries and thus bypass ASLR for a setuid program. | ||
CVE-2019-6488 | MEDIUM | glibc: Incorrect attempt to use a 64-bit register for size_t in assembly codes results in segmentation fault | The string component in the GNU C Library (aka glibc or libc6) through 2.28, when running on the x32 architecture, incorrectly attempts to use a 64-bit register for size_t in assembly codes, which can lead to a segmentation fault or possibly unspecified other impact, as demonstrated by a crash in __memmove_avx_unaligned_erms in sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S during a memcpy. | ||
CVE-2019-7309 | LOW | glibc: memcmp function incorrectly returns zero | In the GNU C Library (aka glibc or libc6) through 2.29, the memcmp function for the x32 architecture can incorrectly return zero (indicating that the inputs are equal) because the RDX most significant bit is mishandled. | ||
CVE-2019-9169 | HIGH | glibc: regular-expression match via proceed_next_node in posix/regexec.c leads to heap-based buffer over-read | In the GNU C Library (aka glibc or libc6) through 2.29, proceed_next_node in posix/regexec.c has a heap-based buffer over-read via an attempted case-insensitive regular-expression match. | ||
CVE-2019-9192 | MEDIUM | glibc: uncontrolled recursion in function check_dst_limits_calc_pos_1 in posix/regexec.c | ** DISPUTED ** In the GNU C Library (aka glibc or libc6) through 2.29, check_dst_limits_calc_pos_1 in posix/regexec.c has Uncontrolled Recursion, as demonstrated by '( | ||
passwd | CVE-2007-5686 | 1:4.4-4.1 | MEDIUM | initscripts in rPath Linux 1 sets insecure permissions for the /var/log/btmp file, which allows local users to obtain sensitive information regarding authentication attempts. NOTE: because sshd detects the insecure permissions and does not log certain events, this also prevents sshd from logging failed authentication attempts by remote attackers. | |
CVE-2013-4235 | LOW | shadow-utils: TOCTOU race conditions by copying and removing directory trees | shadow: TOCTOU (time-of-check time-of-use) race condition when copying and removing directory trees | ||
CVE-2017-12424 | HIGH | shadow-utils: Buffer overflow via newusers tool | In shadow before 4.5, the newusers tool could be made to manipulate internal data structures in ways unintended by the authors. Malformed input may lead to crashes (with a buffer overflow or other memory corruption) or other unspecified behaviors. This crosses a privilege boundary in, for example, certain web-hosting environments in which a Control Panel allows an unprivileged user account to create subaccounts. | ||
CVE-2018-7169 | MEDIUM | shadow-utils: newgidmap allows unprivileged user to drop supplementary groups potentially allowing privilege escalation | An issue was discovered in shadow 4.5. newgidmap (in shadow-utils) is setuid and allows an unprivileged user to be placed in a user namespace where setgroups(2) is permitted. This allows an attacker to remove themselves from a supplementary group, which may allow access to certain filesystem paths if the administrator has used "group blacklisting" (e.g., chmod g-rwx) to restrict access to paths. This flaw effectively reverts a security feature in the kernel (in particular, the /proc/self/setgroups knob) to prevent this sort of privilege escalation. | ||
CVE-2019-19882 | MEDIUM | shadow-utils: local users can obtain root access because setuid programs are misconfigured | shadow 4.8, in certain circumstances affecting at least Gentoo, Arch Linux, and Void Linux, allows local users to obtain root access because setuid programs are misconfigured. Specifically, this affects shadow 4.8 when compiled using --with-libpam but without explicitly passing --disable-account-tools-setuid, and without a PAM configuration suitable for use with setuid account management tools. This combination leads to account management tools (groupadd, groupdel, groupmod, useradd, userdel, usermod) that can easily be used by unprivileged local users to escalate privileges to root in multiple ways. This issue became much more relevant in approximately December 2019 when an unrelated bug was fixed (i.e., the chmod calls to suidusbins were fixed in the upstream Makefile which is now included in the release version 4.8). | ||
TEMP-0628843-DBAD28 | LOW | ||||
perl-base | CVE-2011-4116 | 5.24.1-3+deb9u5 | MEDIUM | perl: File::Temp insecure temporary file handling | _is_safe in the File::Temp module for Perl does not properly handle symlinks. |
tar | CVE-2005-2541 | 1.29b-1.1 | CRITICAL | Tar 1.15.1 does not properly warn the user when extracting setuid or setgid files, which may allow local users or remote attackers to gain privileges. | |
CVE-2018-20482 | LOW | tar: Infinite read loop in sparse_dump_region function in sparse.c | GNU Tar through 1.30, when --sparse is used, mishandles file shrinkage during read access, which allows local users to cause a denial of service (infinite read loop in sparse_dump_region in sparse.c) by modifying a file that is supposed to be archived by a different user's process (e.g., a system backup running as root). | ||
CVE-2019-9923 | MEDIUM | tar: null-pointer dereference in pax_decode_header in sparse.c | pax_decode_header in sparse.c in GNU Tar before 1.32 had a NULL pointer dereference when parsing certain archives that have malformed extended headers. | ||
TEMP-0290435-0B57B5 | LOW |
Remediation:
If this image is from a third-party application or Helm chart, try updating
to the latest version. If this doesn't solve the issue, let the maintainers
know that the image they're using may contain vulnerabilities, e.g. by
filing an issue on GitHub.
If this image is from an application you own, try updating the base operating system (currently Debian 9.8),
as well as any libraries you're installing on top of it. If this doesn't solve
the issue, let the maintainers of the base operating system or the offending libraries
know, e.g. by filing an issue on GitHub.
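As an illustrative sketch (the base tag and package steps are assumptions, not taken from this report), updating an image you own usually means bumping the base image in the Dockerfile to a still-supported release and upgrading installed packages so fixed versions of libelf, libgcrypt, glibc, etc. are picked up:

```dockerfile
# Hypothetical Dockerfile: move from the EOL Debian 9.8 base to a
# supported release that still receives security updates.
FROM debian:12-slim

# Upgrade the package index and installed libraries to pull in
# patched versions of the vulnerable packages listed above.
RUN apt-get update \
 && apt-get upgrade -y \
 && rm -rf /var/lib/apt/lists/*
```

Rebuild and redeploy the image, then re-scan to confirm the findings are resolved.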
For Job insights-agent-install-reporter in namespace insights-agent
Description:
By default, the Kubernetes scheduler has some rules about spreading pods across
different failure zones; however, these defaults are not always consistent, and
they are never guaranteed. Best practice is to specify a topologySpreadConstraint
on pods that have multiple replicas (such as in a Deployment).
References
Remediation:
Add a topologySpreadConstraint block to your pod specification:
spec:
topologySpreadConstraints:
- maxSkew: <integer>
minDomains: <integer> # optional; beta since v1.25
topologyKey: <string>
whenUnsatisfiable: <string>
labelSelector: <object>
matchLabelKeys: <list> # optional; alpha since v1.25
nodeAffinityPolicy: [Honor|Ignore] # optional; beta since v1.26
nodeTaintsPolicy: [Honor|Ignore] # optional; beta since v1.26
Example
This example spreads pods across nodes and availability zones,
but does not block scheduling if that is impossible.
spec:
topologySpreadConstraints:
- maxSkew: 1
topologyKey: "topology.kubernetes.io/zone"
whenUnsatisfiable: ScheduleAnyway
labelSelector:
matchLabels:
app.kubernetes.io/name: your-app-name
- maxSkew: 1
topologyKey: "kubernetes.io/hostname"
whenUnsatisfiable: ScheduleAnyway
labelSelector:
matchLabels:
app.kubernetes.io/name: your-app-name
For Pod kube-scheduler-kind-control-plane in namespace kube-system
Description:
Under particular configurations, a container may be able to escalate its privileges.
Setting allowPrivilegeEscalation to false will set the no_new_privs flag on the
container process, preventing setuid binaries from changing the effective user ID.
Setting this flag is particularly important when using runAsNonRoot, which can
otherwise be circumvented.
References
Remediation:
In your workload configuration, set securityContext.allowPrivilegeEscalation to false.
Note that your workload might currently rely on the ability to escalate its privileges,
so be sure to test the change to ensure nothing breaks.
Examples
apiVersion: v1
kind: Pod
metadata:
name: busybox
spec:
containers:
- name: busybox
image: busybox
securityContext:
allowPrivilegeEscalation: false
For Deployment postgres in namespace default
Description:
Liveness probes are designed to ensure that an application stays in a healthy state.
When a liveness probe goes from passing to failing, the pod will be restarted.
Without a liveness probe, Kubernetes' self-healing ability is hampered.
If your application is long-lived, it's important to utilize a liveness probe to tell
if the application has crashed or become unresponsive. Otherwise, Kubernetes may keep
a broken Pod running, and continue serving traffic to it.
References
Remediation:
Add a livenessProbe to your container spec.
It's up to you to decide how to determine whether your application is alive. You might
check that you're able to connect to an HTTP endpoint, or ensure that a particular
process is running.
You can configure your liveness probe to execute a command or check for a healthy
HTTP/TCP response.
Examples
Using an HTTP request
apiVersion: v1
kind: Pod
metadata:
name: busybox
spec:
containers:
- name: busybox
image: busybox
livenessProbe:
httpGet:
path: /health
port: 8080
initialDelaySeconds: 5
periodSeconds: 10
Using a command
apiVersion: v1
kind: Pod
metadata:
name: busybox
spec:
containers:
- name: busybox
image: busybox
livenessProbe:
exec:
command:
- sh
- -c
- ps -ef | grep '[m]y-app' # the [m] pattern prevents grep from matching its own process
initialDelaySeconds: 5
periodSeconds: 10
For Deployment postgres in namespace default
Description:
Workloads can be hardened using the AppArmor, seccomp, SELinux, and Linux capabilities security mechanisms in Kubernetes.
This container is not using any of these mechanisms, which could allow privilege escalation and puts the Kubernetes cluster and its workloads at risk.
Remediation:
Do not set the seccomp profile to Unconfined in the container specification.
Drop Linux capabilities using the drop field; the add field should not contain ALL.
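As an illustrative sketch (pod and container names are hypothetical), a container spec applying both recommendations — a non-Unconfined seccomp profile and dropped Linux capabilities — might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardening-demo
spec:
  containers:
  - name: busybox
    image: busybox
    securityContext:
      seccompProfile:
        type: RuntimeDefault  # any profile other than Unconfined
      capabilities:
        drop:
        - ALL
```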
For Deployment dns-controller in namespace kube-system
Description:
Reducing the kernel capabilities available to a container limits its attack surface.
Remediation:
Drop all Linux capabilities, then explicitly add back only the ones your workload needs (see the detailed capabilities example for kube-scheduler-kind-control-plane below).
For Deployment coredns in namespace kube-system
Description:
Nova found a new version for the container k8s.gcr.io/coredns/coredns.
You have version v1.8.0 installed, but v1.10.1 is now available.
Remediation:
Update your workload configuration to use tag v1.10.1.
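Concretely, this means changing the image tag in the Deployment's pod template (the container name below is illustrative):

```yaml
spec:
  template:
    spec:
      containers:
      - name: coredns
        image: k8s.gcr.io/coredns/coredns:v1.10.1  # previously v1.8.0
```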
For Deployment postgres in namespace default
Description:
Configuring memory limits makes sure a container never uses an excessive amount of memory.
If memory limits are not set, a misbehaving application could end up utilizing most of the memory
available on its node, potentially affecting other workloads or causing cost overruns as the cluster tries to scale up.
Note that if your pod tries to use more memory than the limit, it will be terminated.
Memory limits are a good way to ensure that a misbehaving workload (e.g. with a memory leak) is terminated and replaced.
References
Remediation:
Add a memory limit to each of your container specifications. Memory can be specified
as a fixed number of bytes (e.g. 1024), with a power-of-ten suffix (e.g. 1K), or with a
power-of-two suffix (e.g. 1Ki).
It's up to you to decide how much memory to allocate to your application. Setting memory
limits too high could potentially lead to cost overruns, whereas setting them too low may
cause the Pod to be terminated.
Insights can help you determine your application's memory usage via the
Prometheus Collector
and Goldilocks reports.
We strongly recommend you enable one or both of these reports to help determine appropriate resource
requests and limits.
Examples
apiVersion: v1
kind: Pod
metadata:
name: busybox
spec:
containers:
- name: busybox
image: busybox
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
For Pod kube-scheduler-kind-control-plane in namespace kube-system
Description:
By default, the Kubernetes scheduler has some rules about spreading pods across
different failure zones; however, these defaults are not always consistent, and
they are never guaranteed. Best practice is to specify a topologySpreadConstraint
on pods that have multiple replicas (such as in a Deployment).
References
Remediation:
Add a topologySpreadConstraint block to your pod specification:
spec:
topologySpreadConstraints:
- maxSkew: <integer>
minDomains: <integer> # optional; beta since v1.25
topologyKey: <string>
whenUnsatisfiable: <string>
labelSelector: <object>
matchLabelKeys: <list> # optional; alpha since v1.25
nodeAffinityPolicy: [Honor|Ignore] # optional; beta since v1.26
nodeTaintsPolicy: [Honor|Ignore] # optional; beta since v1.26
Example
This example spreads pods across nodes and availability zones,
but does not block scheduling if that is impossible.
spec:
topologySpreadConstraints:
- maxSkew: 1
topologyKey: "topology.kubernetes.io/zone"
whenUnsatisfiable: ScheduleAnyway
labelSelector:
matchLabels:
app.kubernetes.io/name: your-app-name
- maxSkew: 1
topologyKey: "kubernetes.io/hostname"
whenUnsatisfiable: ScheduleAnyway
labelSelector:
matchLabels:
app.kubernetes.io/name: your-app-name
For Deployment grafana in namespace grafana
Description:
Reducing the kernel capabilities available to a container limits its attack surface.
Remediation:
Drop all Linux capabilities, then explicitly add back only the ones your workload needs (see the detailed capabilities example for kube-scheduler-kind-control-plane below).
For Pod kube-scheduler-kind-control-plane in namespace kube-system
Description:
Every container running in Kubernetes comes with a configurable set of kernel-level
capabilities. Generally these capabilities are not needed by most workloads, and
can be exploited by an attacker who gains access to the container.
Many of the available capabilities are enabled by default in Kubernetes, including:
NET_ADMIN
CHOWN
DAC_OVERRIDE
FSETID
FOWNER
MKNOD
NET_RAW
SETGID
SETUID
SETFCAP
SETPCAP
NET_BIND_SERVICE
SYS_CHROOT
KILL
AUDIT_WRITE
Unless these capabilities are explicitly needed by your workload, they should be dropped.
References
Remediation:
The easiest way to remove insecure capabilities from your container is to drop
them all (see example below).
If particular capabilities are necessary for your workload to run properly,
Fairwinds recommends dropping ALL, then explicitly adding back only the ones that are necessary.
Examples
Drop all
apiVersion: v1
kind: Pod
metadata:
name: security-context-demo
spec:
containers:
- name: busybox
image: busybox
securityContext:
capabilities:
drop:
- ALL
Add back necessary capabilities
apiVersion: v1
kind: Pod
metadata:
name: security-context-demo
spec:
containers:
- name: busybox
image: busybox
securityContext:
capabilities:
drop:
- ALL
add:
- CHOWN
- SETUID
For Deployment postgres in namespace default
Description:
Configuring CPU limits makes sure a container never uses an excessive amount of CPU.
If CPU limits are not set, a misbehaving application could end up utilizing most of the
CPU available on its node, potentially slowing down other workloads or causing cost overruns as
the cluster tries to scale up.
In contrast to a memory limit, a CPU limit will never cause your application to crash.
Instead, it will get throttled: it will only be allowed a certain amount of CPU time
in each scheduling period.
References
Remediation:
Add a CPU limit to each of your container specifications. CPU may be set in terms of
whole CPUs (e.g. 1.0 or 0.25) or, more commonly, in terms of millicpus (e.g. 1000m or 250m).
It's up to you to decide how much CPU to allocate to your application. Setting CPU limits
too high could potentially lead to cost overruns, whereas setting them too low may cause
your application to get throttled.
Insights can help you determine your application's CPU usage via the
Prometheus Collector
and Goldilocks reports.
We strongly recommend you enable one or both of these reports to help determine appropriate resource
requests and limits.
For mission-critical or user-facing applications,
Fairwinds recommends setting a high CPU limit, so only a misbehaving application will be throttled.
Examples
apiVersion: v1
kind: Pod
metadata:
name: busybox
spec:
containers:
- name: busybox
image: busybox
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
For Deployment local-path-provisioner in namespace local-path-storage
Description:
By default, the Kubernetes scheduler has some rules about spreading pods across
different failure zones; however, these defaults are not always consistent, and
they are never guaranteed. Best practice is to specify a topologySpreadConstraint
on pods that have multiple replicas (such as in a Deployment).
References
Remediation:
Add a topologySpreadConstraint block to your pod specification:
spec:
topologySpreadConstraints:
- maxSkew: <integer>
minDomains: <integer> # optional; beta since v1.25
topologyKey: <string>
whenUnsatisfiable: <string>
labelSelector: <object>
matchLabelKeys: <list> # optional; alpha since v1.25
nodeAffinityPolicy: [Honor|Ignore] # optional; beta since v1.26
nodeTaintsPolicy: [Honor|Ignore] # optional; beta since v1.26
Example
This example spreads pods across nodes and availability zones,
but does not block scheduling if that is impossible.
spec:
topologySpreadConstraints:
- maxSkew: 1
topologyKey: "topology.kubernetes.io/zone"
whenUnsatisfiable: ScheduleAnyway
labelSelector:
matchLabels:
app.kubernetes.io/name: your-app-name
- maxSkew: 1
topologyKey: "kubernetes.io/hostname"
whenUnsatisfiable: ScheduleAnyway
labelSelector:
matchLabels:
app.kubernetes.io/name: your-app-name
For DaemonSet kindnet in namespace kube-system
Description:
Liveness probes are designed to ensure that an application stays in a healthy state.
When a liveness probe goes from passing to failing, the pod will be restarted.
Without a liveness probe, Kubernetes' self-healing ability is hampered.
If your application is long-lived, it's important to utilize a liveness probe to tell
if the application has crashed or become unresponsive. Otherwise, Kubernetes may keep
a broken Pod running, and continue serving traffic to it.
References
Remediation:
Add a livenessProbe to your container spec.
It's up to you to decide how to determine whether your application is alive. You might
check that you're able to connect to an HTTP endpoint, or ensure that a
particular process is running.
You can configure your liveness probe
to execute a command or check for a healthy HTTP/TCP response.
Examples
Using an HTTP request
apiVersion: v1
kind: Pod
metadata:
name: busybox
spec:
containers:
- name: busybox
image: busybox
livenessProbe:
httpGet:
path: /health
port: 8080
initialDelaySeconds: 5
periodSeconds: 10
Using a command
apiVersion: v1
kind: Pod
metadata:
name: busybox
spec:
containers:
- name: busybox
image: busybox
livenessProbe:
exec:
command:
- sh
- -c
- pgrep my-app
initialDelaySeconds: 5
periodSeconds: 10
For DaemonSet kube-proxy in namespace kube-system
Description:
Kubernetes will often cache images on worker nodes. By default, an image will only be pulled
if it isn't already cached on the node attempting to run it.
However, relying on cached versions of a Docker image can be a reliability issue:
it can result in different images running on different nodes, leading to inconsistent
behavior. It can also be a security issue, as a workload can access a cached image even if it
doesn't have permission to access the remote Docker repository (via an imagePullSecret).
Specifying imagePullPolicy: Always will prevent these problems by ensuring the latest image is
downloaded every time a new pod is created.
References
Remediation:
In your Pod spec, set imagePullPolicy to Always.
Examples
apiVersion: v1
kind: Pod
metadata:
name: busybox
spec:
containers:
- name: busybox
image: busybox
imagePullPolicy: Always
command: [ "echo", "SUCCESS" ]
For DaemonSet kindnet in namespace kube-system
Description:
By default, the Kubernetes scheduler has some rules about spreading pods across
different failure zones; however, these defaults are not always consistent, and
they are never guaranteed. Best practice is to specify a topologySpreadConstraint
on pods that have multiple replicas (such as in a Deployment).
References
Remediation:
Add a topologySpreadConstraint block to your pod specification:
spec:
topologySpreadConstraints:
- maxSkew: <integer>
minDomains: <integer> # optional; beta since v1.25
topologyKey: <string>
whenUnsatisfiable: <string>
labelSelector: <object>
matchLabelKeys: <list> # optional; alpha since v1.25
nodeAffinityPolicy: [Honor|Ignore] # optional; beta since v1.26
nodeTaintsPolicy: [Honor|Ignore] # optional; beta since v1.26
Example
This example spreads pods across nodes and availability zones,
but does not block scheduling if that is impossible.
spec:
topologySpreadConstraints:
- maxSkew: 1
topologyKey: "topology.kubernetes.io/zone"
whenUnsatisfiable: ScheduleAnyway
labelSelector:
matchLabels:
app.kubernetes.io/name: your-app-name
- maxSkew: 1
topologyKey: "kubernetes.io/hostname"
whenUnsatisfiable: ScheduleAnyway
labelSelector:
matchLabels:
app.kubernetes.io/name: your-app-name
For DaemonSet kindnet in namespace kube-system
Description:
Disabling access to the root filesystem can prevent attackers from altering configuration,
installing new software, or overwriting binaries with malicious code.
By default, Kubernetes runs containers with a writeable filesystem. However, Kubernetes workloads should
generally be stateless, and should not need write access to the root filesystem. If workloads do need
to write files to disk, there are safer methods of doing so (see remediation guidance).
References
Remediation:
In your workload's container configuration, set securityContext.readOnlyRootFilesystem = true.
If your workload needs to write data to disk, you can still protect the root filesystem by
mounting a separate volume. This creates a special writeable directory for your app, while
preventing changes to system components, application configuration, etc. See the example
below, or view the docs to learn more.
Examples
apiVersion: v1
kind: Pod
metadata:
name: security-context-demo
spec:
volumes:
- name: data
emptyDir: {}
containers:
- name: busybox
image: busybox
command: [ "sh", "-c", "echo 'hello world' > /data/hello.txt" ]
volumeMounts:
- name: data
mountPath: /data
securityContext:
readOnlyRootFilesystem: true
For DaemonSet kube-proxy in namespace kube-system
Description:
Nova found a new version for the container k8s.gcr.io/kube-proxy.
You have version v1.21.1 installed, but v1.27.1 is now available.
Remediation:
Update your workload configuration to use tag v1.27.1.
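Using the versions reported above, the update is a one-line change to the container image field. A minimal sketch (the surrounding DaemonSet fields are illustrative; only the image tag changes):

```yaml
spec:
  template:
    spec:
      containers:
        - name: kube-proxy
          image: k8s.gcr.io/kube-proxy:v1.27.1  # was v1.21.1
```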
For Deployment grafana in namespace grafana
Description:
RunAsUser is the UID used to run the entrypoint (i.e. the first process) of the container. Running as a non-root UID limits the damage a compromised container can do.
Remediation:
In your workload's container configuration, set securityContext.runAsUser to a non-zero UID so the container does not run as root.
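A minimal sketch of running a container as a non-root user (the UID 1000 is an illustrative value; choose one that matches the user your image expects):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  containers:
    - name: busybox
      image: busybox
      securityContext:
        runAsUser: 1000      # illustrative non-root UID
        runAsNonRoot: true   # fail to start if the image would run as root
```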
For DaemonSet kindnet in namespace kube-system
Description:
Linux Capabilities allow you to specify privileges for a process at a granular level. The default list of capabilities included with a container is already fairly minimal, but it can often be further restricted.
Remediation:
In your workload configuration, capabilities can be added or removed by adjusting securityContext.capabilities.
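For example, a container that needs no extra privileges can drop all capabilities; the NET_BIND_SERVICE add-back below is illustrative, for workloads that must bind ports below 1024:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  containers:
    - name: busybox
      image: busybox
      securityContext:
        capabilities:
          drop:
            - ALL
          add:
            - NET_BIND_SERVICE  # illustrative; omit if low ports aren't needed
```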