
gvm-containers's Introduction

gvm-containers

Introduction

This is the Git repo of tools for deploying Greenbone Vulnerability Management with containers. It is based on the Greenbone Source Edition (GSE) open source project.

Docker

The source code of admirito’s unofficial Docker images for Greenbone Vulnerability Management 22, which are based on admirito’s GVM PPA, is hosted in this repo. It contains the source for the following Docker images:

  • gvmd: Greenbone Vulnerability Manager
  • openvas-scanner: OpenVAS remote network security scanner
  • gsad: Greenbone Security Assistant
  • gvm-postgres: PostgreSQL 14 Database with postgresql-14-gvm extension to be used by gvmd

To set up the GVM system with docker-compose, first clone the repo and run the following docker-compose up commands to download and synchronize the data feeds required by GVM:

git clone https://github.com/admirito/gvm-containers.git

cd gvm-containers

docker-compose -f nvt-sync.yml up
docker-compose -f cert-sync.yml up
docker-compose -f scap-sync.yml up
docker-compose -f gvmd-data-sync.yml up

Then, you can run GVM services with a simple docker-compose up command. The initialization process can take a few minutes for the first time:

# in the gvm-containers directory
docker-compose up

## docker images of a specific version can also be specified with
## an environment variable (for more information take a look at the
## .env file):
# GVM_VERSION=22 docker-compose up

The Greenbone Security Assistant (gsad) port is exposed on the host’s port 8080, so you can access it at http://localhost:8080.
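As a quick sanity check (a minimal sketch; the host port comes from the compose file described above), you can probe the published port once the stack has finished initializing:

# expect a 200 or a redirect to the login page once gsad is up
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/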

Helm Chart

A helm chart for deploying the docker images on kubernetes is also available. To install GVM on a kubernetes cluster, first create a namespace and then install the helm chart:

kubectl create namespace gvm

helm install gvm \
    https://github.com/admirito/gvm-containers/releases/download/chart-1.3.0/gvm-1.3.0.tgz \
    --namespace gvm --set gvmd-db.postgresqlPassword="mypassword"

By default, a cron job with a @daily schedule will be created to update the GVM feeds. You can also enable a Helm post-installation hook to perform the feed synchronization before the installation completes by adding the --timeout 90m --set syncFeedsAfterInstall=true arguments to the helm install command. Of course, this will slow down the installation considerably, although you can watch the post-installation feed sync progress with the kubectl logs command:

NS=gvm

kubectl logs -n $NS -f $(kubectl get pod -n $NS -l job-name=gvm-feeds-sync -o custom-columns=:metadata.name --no-headers)

Please note that the feed.community.greenbone.net servers only allow one feed sync at a time, so you should avoid running multiple feed sync jobs; otherwise the source IP will be temporarily blocked. So if you enable syncFeedsAfterInstall, you have to make sure the cron job will not be scheduled during the post-installation process.
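One hedged way to guarantee that the scheduled job cannot collide with the post-installation hook is to suspend the CronJob temporarily; the CronJob name below is an assumption based on the gvm-feeds-sync job name shown above, so check kubectl get cronjob first:

NS=gvm

# list the feed-sync CronJob created by the chart
kubectl get cronjob -n $NS

# suspend it while the post-install sync runs (name assumed)
kubectl patch cronjob gvm-feeds-sync -n $NS -p '{"spec":{"suspend":true}}'

# re-enable it afterwards
kubectl patch cronjob gvm-feeds-sync -n $NS -p '{"spec":{"suspend":false}}'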

For more information and other options, please read chart/README.org.

gvm-containers's People

Contributors

admirito, konvergence, rizlas

gvm-containers's Issues

Changes for the APIs?

Just a small thought: the Python 3 API can only connect with TLS or a Unix socket; perhaps this should be handled properly for those of us who need it =)

Setting a username and password?

Hi,

How are we supposed to log in to the GVM web portal when we are not asked to create a username or password?
Thanks, David.
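A hedged sketch of how this is usually handled with these images: the gvmd entrypoint (gvmd/docker-entrypoint.sh in this repo) creates the web user from environment variables. The exact variable names and whether docker-compose.yml forwards them are assumptions here, so verify them against that script before relying on this:

# .env in the gvm-containers directory (names assumed; check gvmd/docker-entrypoint.sh)
GVMD_USER=admin
GVMD_PASSWORD=changeme

# recreate gvmd so the entrypoint picks the values up
docker-compose up -d --force-recreate gvmd

# on an already-initialized database you can also (re)set the password directly:
docker-compose exec gvmd gvmd --user=admin --new-password='changeme'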

update docker-compose

Hi,

Here is a proposed docker-compose.yml that does not use host paths for /var/run and does not use volumes_from:

version: '2.1'
volumes:
  redis-data: {}
  openvas-var-lib: {}
  gvm-var-lib: {}
  postgres-data: {}
  run-redis: {}
  run-ospd: {}
  
  
services:
  gvm-postgres:
    image: admirito/gvm-postgres:11
    environment:
      PGDATA: /var/lib/postgresql/data
      POSTGRES_DB: gvmd
      POSTGRES_PASSWORD: mypassword
      POSTGRES_USER: gvmduser
    stdin_open: true
    volumes:
    - postgres-data:/var/lib/postgresql/data

  gvmd:
    #  CONNECTED     9598310  /var/run/ospd/ospd.sock
    image: admirito/gvmd:11
    environment:
      GVMD_POSTGRESQL_URI: postgresql://gvmduser:mypassword@gvm-postgres:5432/gvmd?application_name=gvmd
    volumes:
    - openvas-var-lib:/var/lib/openvas
    - gvm-var-lib:/var/lib/gvm
    - run-redis:/var/run/redis
    - run-ospd:/var/run/ospd
    depends_on:
      gvm-postgres:
        condition: service_started

  gsad:
    image: admirito/gsad:11
    ports:
    - 8080:80

    environment:
      GVMD_HOST: gvmd
      GVMD_PORT: '9390'

    depends_on:
      gvmd:
        condition: service_started

  openvas:
    # LISTENING     9431657  /var/run/ospd/ospd.sock
    # CONNECTED     9499517  /var/run/redis/redis.sock
    image: admirito/openvas:11
    environment:
      OV_PASSWORD: Securepassword41
    privileged: true
    sysctls:
      net.core.somaxconn: '2048'
    volumes:
    - openvas-var-lib:/var/lib/openvas
    - run-redis:/var/run/redis
    - run-ospd:/var/run/ospd
    depends_on:
      gvmd:
        condition: service_started



# on the host node you must add vm.overcommit_memory=1 to /etc/sysctl.conf
  redis:
    # LISTENING     9418817  /var/run/redis/redis.sock
    image: redis:5.0
    volumes:
    - run-redis:/var/run/redis
    - redis-data:/data
    command: redis-server --port 0 --unixsocket /var/run/redis/redis.sock --unixsocketperm 755
    privileged: true
    sysctls:
      net.core.somaxconn: '2048'
    depends_on:
      openvas:
        condition: service_started


  cert-sync:
    image: admirito/gvmd:11
    volumes:
    - openvas-var-lib:/var/lib/openvas
    - gvm-var-lib:/var/lib/gvm
    - run-redis:/var/run/redis
    - run-ospd:/var/run/ospd
    command: greenbone-certdata-sync --curl --verbose

    depends_on:
      gvmd:
        condition: service_started
        
  scap-sync:
    image: admirito/gvmd:11
    volumes:
    - openvas-var-lib:/var/lib/openvas
    - gvm-var-lib:/var/lib/gvm
    - run-redis:/var/run/redis
    - run-ospd:/var/run/ospd
    command: greenbone-scapdata-sync --curl --verbose

    depends_on:
      gvmd:
        condition: service_started



  nvt-sync:
    image: admirito/openvas:11
    volumes:
    - openvas-var-lib:/var/lib/openvas
    - run-redis:/var/run/redis
    - run-ospd:/var/run/ospd
    command: greenbone-nvt-sync

    depends_on:
      gvmd:
        condition: service_started
        


Error on Multinode Kubernetes

I am trying to start the Helm chart on my multi-node Kubernetes cluster.
I have several nodes running K3s, with Longhorn as persistent storage.

I see three deployments; all deployments run on different nodes.

  • gvm-gsad, active
  • gvm-gvmd, active
  • gvm-openvas, error

I see three persistent volumes, which are bound to four pods

  • 8GB bound to gvm-openvas-redis
  • 5GB bound to gvm-gvmd and gvm-openvas
  • 8GB bound to gvm-gvmd-db

gvm-openvas throws "Multi-Attach error for volume" "Volume is already used by pod(s) gvm-gvmd".
gvm-gvmd runs on a different node than gvm-openvas.

Do I have to bind gvm-openvas and gvm-gvmd to the same node?
How to do that?

Are there other means to handle this error?

Thanks,
Birger

gvm-postgres is not using entrypoints or uuid-ossp

This is rather a question than an issue because I am struggling to understand how this is working.

  • How come there is no creation of a uuid-ossp extension on gvm-postgres, which is needed for openvas? (A sketch of creating it manually follows this list.)

  • How does gvm-postgres's docker-entrypoint.sh ever run if it is never copied in the Dockerfile altogether?

  • "${FORCE_DB_INIT}" = "1" present in gvmd's entrypoint is only when building on top of older openvas versions with sqlite?

Unable to get scan results

Hi! For some reason, I am not able to get scan results. When I go to Configuration -> Scan Configs -> Full and fast (for example) -> scanner preferences, I get an empty list / white page. I did a docker exec -it openvas bash. When I ran openvas -s, the scan preference was there. Anyone else had this issue or have an idea of what's going on?

Thanks

Database is wrong version.

After I pulled the new version from Git,
I am unable to start GVM because the database is not being updated.

Steps used are

docker-compose -f nvt-sync.yml -f cert-sync.yml -f scap-sync.yml -f gvmd-data-sync.yml up
docker-compose up -d

The error is

"
..
gvmd_1 | md manage:MESSAGE:2021-08-16 18h32.30 utc:21: check_db_versions: database version of database: 221
gvmd_1 | md manage:MESSAGE:2021-08-16 18h32.30 utc:21: check_db_versions: database version supported by manager: 233
gvmd_1 | Database is wrong version.
gsad_1 | gsad main:MESSAGE:2021-08-16 18h32.31 utc:1: Starting GSAD version 20.08.1
.."

Any help/ suggestions will be appreciated

Thanks
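For what it's worth, "Database is wrong version" normally means the gvmd schema still has to be migrated after the image upgrade. A hedged sketch (gvmd ships a --migrate option; the service name is the one from this repo's docker-compose.yml):

# if the gvmd container stays up, migrate in place:
docker-compose exec gvmd gvmd --migrate

# otherwise run it as a one-off container (this goes through the image entrypoint):
# docker-compose run --rm gvmd gvmd --migrate

docker-compose up -d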

Migrating to 21.4.4

Hi,

First thank you for supplying this docker container. Before I upgrade, is there any documentation around the correct way to do this? I'm particularly confused about how I would migrate the database (especially being a docker noob). Any assistance would be awesome

Can't create Tasks 'Failed to find config 'daba56c8-73ec-11df-a475-002264764cea' and no Scan Configs

Hi! I just set up my environment with docker-compose. The Docker containers are all up, but when I want to create a new Task it throws the following error: 'Failed to find config 'daba56c8-73ec-11df-a475-002264764cea''. Then I realized there is no default Scan Config (Full and Fast, Host Discovery, etc.). Is there a way I can update these, or any site where I can download them?

Saying that, I'm sharing a few screenshots of the current state:

[screenshots]

Any response would be nice for me, thank you in advance!
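On recent GVM versions the default scan configs ("Full and Fast" and friends) come from the gvmd data feed rather than the NVT feed, so the usual fix is simply to run the data sync that the README at the top of this page lists and then restart gvmd:

cd gvm-containers
docker-compose -f gvmd-data-sync.yml up
docker-compose restart gvmd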

Migrating PSQL to RDS?

Hi,
Is it possible to migrate to a different Postgres? e.g. AWS RDS?
In my case, I'm running on EKS and it makes more sense to run on RDS than to have a local PSQL running.
I tried to change the connection string but I'm stuck on "waiting for database..."
I can confirm that I managed to log in from the container, so it is either a compatibility issue or a schema issue on RDS.
Thanks
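A hedged sketch of pointing gvmd at an external PostgreSQL such as RDS, reusing the GVMD_POSTGRESQL_URI environment variable shown in the compose proposal earlier on this page (the endpoint is a placeholder). Note that gvmd also needs the postgresql-14-gvm extension mentioned in the README above, and to my knowledge managed services such as RDS do not offer it, so this may simply not be possible there:

  gvmd:
    image: admirito/gvmd:22
    environment:
      # placeholder endpoint; replace with your own database host
      GVMD_POSTGRESQL_URI: postgresql://gvmduser:mypassword@mydb.example.rds.amazonaws.com:5432/gvmd?application_name=gvmd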

Default Scan Configs Not Found

Versions:

  • Kubernetes 1.22.4
  • gvmd: 21.4.4
  • gsad: 21.4.3
  • openvas: 21.4.3
  • socat: 1.0.5

Storage: Rook-Ceph Cluster running CephFS

After updating all the feeds (Under Administration > Feed Status is all current), I'm not able to add my own scan (Configuration > Scan Configs) due to the default scans not being there. I get an error message when creating my own scan:

Failed to find config 'd21f6c81-2b88-4ac1-b7b4-a2a9f2ad4663'

Here is my "stack" status:

NAME                                   READY   STATUS    RESTARTS   AGE
openvas-gvm-gsad-7bb9878549-kzxhd      1/1     Running   0          16m
openvas-gvm-gvmd-574594678d-4fxrn      2/2     Running   0          16m
openvas-gvm-openvas-697bddb54b-7pl7f   5/5     Running   0          16m
openvas-gvmd-db-0                      1/1     Running   0          10h
openvas-openvas-redis-master-0         1/1     Running   0          10h

Here is what my persistent storage looks like:

NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-openvas-gvmd-db-0                      Bound    pvc-ce0112d6-02d4-4d90-a4a0-c686c74577d9   8Gi        RWO            cephfs         10h
openvas-gvm                                 Bound    pvc-2428fc36-f004-4f0a-9fea-fb7cc732eb9f   5Gi        RWX            cephfs         10h
redis-data-openvas-openvas-redis-master-0   Bound    pvc-3c9a7e91-4769-4618-b03d-4ff89089f687   8Gi        RWO            cephfs         10h

I do see some chatter under the openvas project on something similar, but that turns out to be a redis issue.

Here are my redis-connector logs:

2021/12/14 13:57:23 socat[1] N listening on AF=1 "/run/redis/redis.sock"
2021/12/14 13:57:24 socat[1] N accepting connection from AF=1 "<anon>" on AF=1 "/run/redis/redis.sock"
2021/12/14 13:57:24 socat[1] N forked off child process 9
2021/12/14 13:57:24 socat[1] N listening on AF=1 "/run/redis/redis.sock"
2021/12/14 13:57:24 socat[9] N opening connection to AF=2 10.23.3.149:6379
2021/12/14 13:57:24 socat[9] N successfully connected from local address AF=2 10.100.3.190:48860
2021/12/14 13:57:24 socat[9] N starting data transfer loop with FDs [6,6] and [5,5]
2021/12/14 13:57:35 socat[1] N accepting connection from AF=1 "<anon>" on AF=1 "/run/redis/redis.sock"
2021/12/14 13:57:35 socat[1] N forked off child process 10
2021/12/14 13:57:35 socat[1] N listening on AF=1 "/run/redis/redis.sock"
2021/12/14 13:57:35 socat[10] N opening connection to AF=2 10.23.3.149:6379
2021/12/14 13:57:35 socat[10] N successfully connected from local address AF=2 10.100.3.190:48886
2021/12/14 13:57:35 socat[10] N starting data transfer loop with FDs [6,6] and [5,5]
2021/12/14 14:06:10 socat[10] N socket 1 (fd 6) is at EOF
2021/12/14 14:06:10 socat[10] N socket 1 (fd 6) is at EOF
2021/12/14 14:06:10 socat[10] N socket 2 (fd 5) is at EOF
2021/12/14 14:06:10 socat[10] N exiting with status 0
2021/12/14 14:06:10 socat[1] N childdied(): handling signal 17
2021/12/14 14:06:10 socat[1] N accepting connection from AF=1 "<anon>" on AF=1 "/run/redis/redis.sock"
2021/12/14 14:06:10 socat[1] N forked off child process 11
2021/12/14 14:06:10 socat[1] N listening on AF=1 "/run/redis/redis.sock"
2021/12/14 14:06:10 socat[11] N opening connection to AF=2 10.23.3.149:6379
2021/12/14 14:06:10 socat[11] N successfully connected from local address AF=2 10.100.3.190:50358
2021/12/14 14:06:10 socat[11] N starting data transfer loop with FDs [6,6] and [5,5]
2021/12/14 14:06:10 socat[1] N accepting connection from AF=1 "<anon>" on AF=1 "/run/redis/redis.sock"
2021/12/14 14:06:10 socat[1] N forked off child process 12
2021/12/14 14:06:10 socat[1] N listening on AF=1 "/run/redis/redis.sock"
2021/12/14 14:06:10 socat[12] N opening connection to AF=2 10.23.3.149:6379
2021/12/14 14:06:10 socat[12] N successfully connected from local address AF=2 10.100.3.190:50360
2021/12/14 14:06:10 socat[12] N starting data transfer loop with FDs [6,6] and [5,5]
2021/12/14 14:06:10 socat[11] N socket 1 (fd 6) is at EOF
2021/12/14 14:06:10 socat[11] N socket 1 (fd 6) is at EOF
2021/12/14 14:06:10 socat[11] N socket 2 (fd 5) is at EOF
2021/12/14 14:06:10 socat[11] N exiting with status 0
2021/12/14 14:06:10 socat[12] N childdied(): handling signal 17

So it looks like we have connectivity here. Let me know if anything else is needed for debugging.

Failed to find config 'daba56c8-73ec-11df-a475-002264764cea'

Hello!
When adding a new task I get an error:

Failed to find config 'daba56c8-73ec-11df-a475-002264764cea'

containers list:

 docker ps
CONTAINER ID   IMAGE                         COMMAND                  CREATED        STATUS          PORTS                                   NAMES
dcbb2acdea0d   admirito/gsad:21              "docker-entrypoint.s…"   19 hours ago   Up 57 minutes   0.0.0.0:8080->80/tcp, :::8080->80/tcp   gvm-containers_gsad_1
3139803dd929   redis:5.0                     "docker-entrypoint.s…"   19 hours ago   Up 57 minutes   6379/tcp                                gvm-containers_redis_1
60756fe1d232   admirito/gvmd:21              "docker-entrypoint.s…"   19 hours ago   Up 57 minutes   9390/tcp                                gvm-containers_gvmd_1
1e7d2811d618   admirito/openvas-scanner:21   "/tini -- bash /usr/…"   19 hours ago   Up 57 minutes                                           gvm-containers_openvas_1
b89bc6a433fd   admirito/gvm-postgres:21      "docker-entrypoint.s…"   19 hours ago   Up 57 minutes   5432/tcp                                gvm-containers_gvm-postgres_1

gvmd.log:

md manage:WARNING:2021-11-14 06h57.15 utc:5957: init_manage_open_db: sql_open failed
md   main:MESSAGE:2021-11-14 06h57.15 utc:5962:    Greenbone Vulnerability Manager version 21.4.4 (DB revision 242)
md manage:   INFO:2021-11-14 06h57.15 utc:5962:    Getting users.
md manage:WARNING:2021-11-14 06h57.15 utc:5962: sql_open: PQconnectPoll failed
md manage:WARNING:2021-11-14 06h57.15 utc:5962: sql_open: PQerrorMessage (conn): could not connect to server: Connection refused
        Is the server running on host "localhost" (127.0.0.1) and accepting
        TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
        Is the server running on host "localhost" (::1) and accepting
        TCP/IP connections on port 5432?
md manage:WARNING:2021-11-14 06h57.15 utc:5962: init_manage_open_db: sql_open failed

Scaling up GVM

Hi,
I'd appreciate some tips on how to scale up the stack.
I tried to change the redis from a single pod to a cluster, which helped a bit, but it still seems to reach a point where it all crashes when multiple tasks are initiated together.
What is needed in order to make GVM work under a very high load (e.g. create 20-30 tasks with 50 targets each and run them in parallel)?
Thanks!

gvm-tools question

Hello,

this is not an issue but a question.
How do I connect gvm-tools (ideally from another Docker container) to your Docker instance(s)?
It can be on the very same VM that hosts the Docker containers.

Otherwise, I like how OpenVAS works in your Docker images; one thing I need is not having to import and update the IP lists manually.

Thanks, best.
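A hedged sketch of one way to do this with gvm-cli from another container on the same compose network. It assumes gvmd in these images accepts GMP over TLS on TCP port 9390 (the port gsad is configured to use elsewhere on this page), and both the greenbone/gvm-tools image name and the default network name are assumptions, so verify all three:

# network name assumed to be gvm-containers_default; check `docker network ls`
docker run --rm -it --network gvm-containers_default greenbone/gvm-tools \
    gvm-cli --gmp-username admin --gmp-password admin \
    tls --hostname gvmd --port 9390 --xml "<get_version/>"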

Version of python-redis needs to be upgraded for openvas

openvas_1         | 2020-02-12 23:15:12,950 OSPD - openvas: ERROR: (ospd.ospd) While scanning 101.98.66.45:
openvas_1         | Traceback (most recent call last):
openvas_1         |   File "/usr/lib/python3/dist-packages/ospd/ospd.py", line 777, in parallel_scan
openvas_1         |     ret = self.exec_scan(scan_id, target)
openvas_1         |   File "/usr/lib/python3/dist-packages/ospd_openvas/daemon.py", line 1439, in exec_scan
openvas_1         |     self.openvas_db.remove_list_item('internal/dbindex', i)
openvas_1         |   File "/usr/lib/python3/dist-packages/ospd_openvas/db.py", line 291, in remove_list_item
openvas_1         |     ctx.lrem(key, count=LIST_ALL, value=value)
openvas_1         | TypeError: lrem() got an unexpected keyword argument 'count'

Here's the relevant issue: greenbone/ospd-openvas#179

Add a global launcher to start (and update) everything

Hi,

I've created a simple launcher that basically does the job of refreshing everything and launching all. It basically automates what you described in the readme.

Below is the code, may it help 👍

#!/bin/bash

cd $(dirname $0)/gvm-containers
git pull

for i in nvt-sync.yml cert-sync.yml scap-sync.yml gvmd-data-sync.yml
do
  docker-compose -f $i up
done

docker-compose up

New push to master declares a version in each sync file, and fails with "Can not find manifest"

I was using this repository last week and it worked fine. Today I went to install it in a different environment, and it failed with

manifest for admirito/openvas:21 not found, manifest not found, unknown manifest

Changing the version back to latest fixed that. Here is the output of git diff if you want to fix it like I did, or perhaps the manifest issue should be fixed another way; I didn't want to assume, so no PR.

diff --git a/cert-sync.yml b/cert-sync.yml
index ba7767e..da4e52c 100644
--- a/cert-sync.yml
+++ b/cert-sync.yml
@@ -6,7 +6,7 @@ volumes:

 services:
   cert-sync:
-    image: admirito/gvmd:21
+    image: admirito/gvmd:latest
     volumes:
       - gvm-var-lib:/var/lib/gvm
       - run-gvm:/run/gvm
diff --git a/gvmd-data-sync.yml b/gvmd-data-sync.yml
index 647a793..3332c74 100644
--- a/gvmd-data-sync.yml
+++ b/gvmd-data-sync.yml
@@ -6,7 +6,7 @@ volumes:

 services:
   gvmd-data-sync:
-    image: admirito/gvmd:21
+    image: admirito/gvmd:latest
     volumes:
       - gvm-var-lib:/var/lib/gvm
       - run-gvm:/run/gvm
diff --git a/nvt-sync.yml b/nvt-sync.yml
index cdb3a0d..2f4af4d 100644
--- a/nvt-sync.yml
+++ b/nvt-sync.yml
@@ -6,7 +6,7 @@ volumes:

 services:
   nvt-sync:
-    image: admirito/openvas:21
+    image: admirito/openvas:latest
     volumes:
       - openvas-var-lib:/var/lib/openvas
       - run-gvm:/run/gvm
diff --git a/scap-sync.yml b/scap-sync.yml
index f7cba9d..d90c85b 100644
--- a/scap-sync.yml
+++ b/scap-sync.yml
@@ -6,7 +6,7 @@ volumes:

 services:
   scap-sync:
-    image: admirito/gvmd:21
+    image: admirito/gvmd:latest
     volumes:
       - gvm-var-lib:/var/lib/gvm
       - run-gvm:/run/gvm

xml missing from the gvm image

gvmd_1 | sh: 1: xml_split: not found
gvmd_1 | md manage:WARNING:2020-10-07 09h52.47 utc:44204: split_xml_file: system failed with ret 32512, 127, xml_split -s40Mb split.xml && head -n 2 split-00.xml > head.xml && echo '' > tail.xml && for F in split-*.xml; do tail -n +3 $F | head -n -1 | cat head.xml - tail.xml > new.xml; mv new.xml $F; done

Cannot read property 'scan_run_status' of undefined

Steps to reproduce:

  • Use the default setup steps
  • Start a Task and visit the report by clicking the progress bar under Status or the number under Report

Expected behavior:

  • For the report in progress to be displayed

Current behavior

  • fails to display with the following error:
    Cannot read property 'scan_run_status' of undefined

This is discussed here and has been addressed in later commits.

Workaround (provided above):

  • Get a listing of running containers
    docker ps -a
  • locate the CONTAINER ID that matches admirito/gvmd:20 and connect to it
    docker exec -ti 12ea9bdf9b61 /bin/bash (replace with your ID)
  • change permissions
    chmod 755 /var/lib/gvm/gvmd/report_formats/

p.s.
Thanks for an amazing job!

Error: failed pre-install: timed out waiting for the condition

When I tried to install GVM with Helm I ran into some issues and the installation failed. I don't know why.

Command:
helm install gvm ./gvm-*.tgz --namespace wazuh --timeout 15m --set gvmd-db.postgresqlPassword="mypassword"

- installed in a namespace which I created previously

Output:

Error: failed pre-install: timed out waiting for the condition

[Solution for other humans to find] password authentication failed for user "gvmduser" password does not match for user "gvmduser"

This is not a bug report, I am posting this as a solution for others because there is no solution posted on the internet elsewhere, and hopefully this will help someone else.

I'm using gvm with the helm chart in my dev homelab environment (with rancher btw), and somewhere along the lines I messed something up. I treat my testing environment abusively, so I can learn where things fail. Turns out I found a failing point.

I think the password for the "gvmduser" got corrupted somewhere along the lines. The gvmd pod would keep failing, and the gvmd-db pod logs would keep spouting:

2022-02-10 21:40:36.480 UTC [267] FATAL:  password authentication failed for user "gvmduser"
2022-02-10 21:40:36.480 UTC [267] DETAIL:  Password does not match for user "gvmduser".
	Connection matched pg_hba.conf line 99: "host all all all md5"

It would spit that out non-stop.

I checked everywhere, password wasn't changed, the correct one was being injected at runtime, even within the containers the environment variables had the correct password. I even went into the database container command line...

su - postgres
psql -U gvmduser gvmd

Now that I have a postgresql commandline, I tried a bunch of things... to no avail. I even changed the password to the same one (and yes, I'm running defaults this is a test environment after all)

ALTER USER gvmduser WITH PASSWORD 'mypassword';

That didn't help at all!

So after multiple hours of exhaustive research (I'm a stubborn bastard and I'm on the case!) I modified the "pg_hba.conf" in the volume (it's on NFS storage, so I had direct file access on my NAS; love you TrueNAS and iX Systems). As a testing method, I changed the line in that file...

from:
"host all all all md5"
to
"host all all all trust"
DISCLAIMER: THIS IS AN INSECURE CONFIGURATION. DO NOT USE THIS BEYOND TESTING/REPAIR.

And then in the postgresql commandline triggered a config reload:

select pg_reload_conf();

I then set the gvmd pod to desired 0, and then back to 1. Watched the gvm-db logs, and no authentication errors... after a minute, IT'S WORKING AGAIN! I can log into gvm, I see content, yay!

But wait, I need to go back to a secure configuration. So I changed that pg_hba.conf file again...

from:
"host all all all trust"
back to:
"host all all all md5"

And then reloaded config from postgresql command line:
select pg_reload_conf();

And then at the postgresql CLI, I did something different. I changed the password to something random "asdf" and then back to the password that was being used... then set the pod for gvmd to desired 0, then back to desired 1, and BOOM it all came back up! With the correct password, the secure configuration, and no database errors in the log!

YAY! Hopefully this helps someone else. I really didn't want to delete my container volume :/

Feel free to close this issue. I'm posting this here to help others, and myself if I forget this solution lol.

redis server needs configuration

Any openvas changes like this will not work:

environment:
  OV_MAX_HOST: 512
  OV_MAX_CHECKS: 20

Since the number of Redis databases is 16 by default and you need at least one per host, you should switch to a configuration that takes things like this into account.
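One hedged way to do that with the compose file shown earlier on this page is to raise the database count directly on the redis-server command line (redis-server accepts configuration directives as -- arguments; 128 is an arbitrary example, size it to your maximum host count):

  redis:
    image: redis:5.0
    command: redis-server --port 0 --unixsocket /var/run/redis/redis.sock --unixsocketperm 755 --databases 128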

Failed to open plugin_feed_info

Hi,

I was just looking to spin up the Docker images on a fresh install of Ubuntu 18.x.
I ran "docker-compose up" and then ran into the below:

openvassd_1 | lib nvticache:WARNING:2019-07-08 15h47.19 utc:1: nvt_feed_version: Failed to open file “/var/lib/openvas/plugins/plugin_feed_info.inc”: No such file or directory

Any idea?
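That warning usually just means the NVT feed has not been synchronized yet. The README at the top of this page runs the sync compose files before the main stack, roughly:

cd gvm-containers
docker-compose -f nvt-sync.yml up    # populates /var/lib/openvas/plugins
docker-compose up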

you can specify network interface to use so...

Change it to use host networking, that way it will work as expected.

@@ -49,10 +49,17 @@ services:
   openvas:
     # LISTENING /var/run/ospd/ospd.sock
     # CONNECTED /var/run/redis/redis.sock
+    network_mode: host

Can not log in

I've created a .env file which contains GVMD_USER=admin, then I run

docker-compose -f docker-compose.yml -f nvt-sync.yml -f cert-sync.yml -f scap-sync.yml up

Unfortunately, I see the login page but cannot log in with admin/admin. I thought it should work (as I see at https://github.com/admirito/gvm-containers/blob/master/gvmd/docker-entrypoint.sh).

Can you help me? My log is:

Starting gvm-containers_gvm-postgres_1 ... done
Starting gvm-containers_gvmd_1 ... done
Starting gvm-containers_openvas_1 ... done
Starting gvm-containers_redis_1 ... done
Starting gvm-containers_scap-sync_1 ... done
Starting gvm-containers_nvt-sync_1 ... done
Starting gvm-containers_gsad_1 ... done
Starting gvm-containers_cert-sync_1 ... done
Attaching to gvm-containers_gvm-postgres_1, gvm-containers_openvas_1, gvm-containers_gvmd_1, gvm-containers_redis_1, gvm-containers_scap-sync_1, gvm-containers_cert-sync_1, gvm-containers_gsad_1, gvm-containers_nvt-sync_1
gvm-postgres_1 |
gvm-postgres_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
gvm-postgres_1 |
gvm-postgres_1 | 2020-07-20 11:02:16.126 UTC [1] LOG: starting PostgreSQL 12.3 (Debian 12.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
gvm-postgres_1 | 2020-07-20 11:02:16.126 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
gvm-postgres_1 | 2020-07-20 11:02:16.126 UTC [1] LOG: listening on IPv6 address "::", port 5432
gvm-postgres_1 | 2020-07-20 11:02:16.127 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
gvm-postgres_1 | 2020-07-20 11:02:16.147 UTC [24] LOG: database system was shut down at 2020-07-20 11:00:28 UTC
gvm-postgres_1 | 2020-07-20 11:02:16.154 UTC [1] LOG: database system is ready to accept connections
gvmd_1 | waiting for the database...
gvmd_1 | md main:MESSAGE:2020-07-20 11h02.19 utc:1: Greenbone Vulnerability Manager version 9.0.1 (DB revision 221)
gvmd_1 | md manage:WARNING:2020-07-20 11h02.19 utc:1: database must be initialised from scanner
gvmd_1 | util gpgme:MESSAGE:2020-07-20 11h02.20 utc:1: Setting GnuPG dir to '/var/lib/gvm/gvmd/gnupg'
gvmd_1 | util gpgme:MESSAGE:2020-07-20 11h02.20 utc:1: Using OpenPGP engine version '2.2.19'
gvmd_1 | md manage:WARNING:2020-07-20 11h02.21 utc:38: manage_update_nvt_cache_osp: failed to connect to /var/run/ospd/ospd.sock
openvas_1 | waiting for the redis...
redis_1 | 1:C 20 Jul 2020 11:02:20.437 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1 | 1:C 20 Jul 2020 11:02:20.440 # Redis version=5.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1 | 1:C 20 Jul 2020 11:02:20.440 # Configuration loaded
redis_1 | 1:M 20 Jul 2020 11:02:20.445 * Running mode=standalone, port=0.
redis_1 | 1:M 20 Jul 2020 11:02:20.445 # Server initialized
redis_1 | 1:M 20 Jul 2020 11:02:20.445 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1 | 1:M 20 Jul 2020 11:02:20.445 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1 | 1:M 20 Jul 2020 11:02:20.446 * The server is now ready to accept connections at /var/run/redis/redis.sock
nvt-sync_1 | <28>Jul 20 11:02:23 greenbone-nvt-sync: The log facility is not working as expected. All messages will be written to the standard error stream.
nvt-sync_1 | <28>Jul 20 11:02:23 greenbone-nvt-sync: Another process related to the feed update is already running.
cert-sync_1 | rsync: failed to connect to feed.community.greenbone.net (45.135.106.142): Connection refused (111)
cert-sync_1 | rsync: failed to connect to feed.community.greenbone.net (2a0e:6b40:20:106:20c:29ff:fe67:cbb5): Cannot assign requested address (99)
cert-sync_1 | rsync error: error in socket IO (code 10) at clientserver.c(127) [Receiver=3.1.3]
scap-sync_1 | Greenbone community feed server - http://feed.community.greenbone.net/
scap-sync_1 | This service is hosted by Greenbone Networks - http://www.greenbone.net/
scap-sync_1 |
scap-sync_1 | All transactions are logged.
scap-sync_1 |
scap-sync_1 | If you have any questions, please use the Greenbone community portal.
scap-sync_1 | See https://community.greenbone.net for details.
scap-sync_1 |
scap-sync_1 | By using this service you agree to our terms and conditions.
scap-sync_1 |
scap-sync_1 | Only one sync per time, otherwise the source ip will be temporarily blocked.
scap-sync_1 |
gsad_1 | gsad main:MESSAGE:2020-07-20 11h02.23 utc:1: Starting GSAD version 9.0.1
scap-sync_1 | receiving incremental file list
scap-sync_1 | timestamp
13 100% 12.70kB/s 0:00:00 (xfr#1, to-chk=0/1)
scap-sync_1 |
scap-sync_1 | sent 43 bytes received 113 bytes 62.40 bytes/sec
scap-sync_1 | total size is 13 speedup is 0.08
gvm-containers_nvt-sync_1 exited with code 1
scap-sync_1 | Greenbone community feed server - http://feed.community.greenbone.net/
scap-sync_1 | This service is hosted by Greenbone Networks - http://www.greenbone.net/
scap-sync_1 |
scap-sync_1 | All transactions are logged.
scap-sync_1 |
scap-sync_1 | If you have any questions, please use the Greenbone community portal.
scap-sync_1 | See https://community.greenbone.net for details.
scap-sync_1 |
scap-sync_1 | By using this service you agree to our terms and conditions.
scap-sync_1 |
scap-sync_1 | Only one sync per time, otherwise the source ip will be temporarily blocked.
scap-sync_1 |
gvm-containers_cert-sync_1 exited with code 1
scap-sync_1 | receiving incremental file list
scap-sync_1 | ./
scap-sync_1 | COPYING
1,719 100% 1.64MB/s 0:00:00 (xfr#1, to-chk=42/44)
scap-sync_1 | nvdcve-2.0-2002.xml
gvmd_1 | md manage:WARNING:2020-07-20 11h02.31 utc:42: manage_update_nvt_cache_osp: failed to connect to /var/run/ospd/ospd.sock
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.35 utc:19: plugin_feed_info.inc: Not able to open nor to locate it in include paths
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.35 utc:19: /var/lib/openvas/plugins/gb_manageengine_servicedesk_plus_consolidation.nasl: Parse error at or near line 19
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.37 utc:19: plugin_feed_info.inc: Not able to open nor to locate it in include paths
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.37 utc:19: /var/lib/openvas/plugins/gb_fortimail_consolidation.nasl: Parse error at or near line 21
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.38 utc:19: plugin_feed_info.inc: Not able to open nor to locate it in include paths
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.38 utc:19: /var/lib/openvas/plugins/gb_apache_derby_consolidation.nasl: Parse error at or near line 19
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.38 utc:19: plugin_feed_info.inc: Not able to open nor to locate it in include paths
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.38 utc:19: /var/lib/openvas/plugins/unknown_services.nasl: Parse error at or near line 26
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.39 utc:19: plugin_feed_info.inc: Not able to open nor to locate it in include paths
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.39 utc:19: /var/lib/openvas/plugins/gb_netapp_data_ontap_consolidation.nasl: Parse error at or near line 27
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.39 utc:19: plugin_feed_info.inc: Not able to open nor to locate it in include paths
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.39 utc:19: /var/lib/openvas/plugins/gb_dell_sonicwall_sma_sra_consolidation.nasl: Parse error at or near line 19
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.39 utc:19: plugin_feed_info.inc: Not able to open nor to locate it in include paths
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.39 utc:19: /var/lib/openvas/plugins/gb_ibm_security_identity_manager_consolidation.nasl: Parse error at or near line 21
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.40 utc:19: plugin_feed_info.inc: Not able to open nor to locate it in include paths
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.40 utc:19: /var/lib/openvas/plugins/gb_manageengine_assetexplorer_consolidation.nasl: Parse error at or near line 19
gsad_1 | gsad gmp:MESSAGE:2020-07-20 11h02.41 utc:1: Authentication success for 'admin' from 172.18.0.1
gvmd_1 | md manage:WARNING:2020-07-20 11h02.41 utc:48: manage_update_nvt_cache_osp: failed to connect to /var/run/ospd/ospd.sock
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.43 utc:19: plugin_feed_info.inc: Not able to open nor to locate it in include paths
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.43 utc:19: /var/lib/openvas/plugins/2015/gb_f5_big_iq_webinterface_default_credentials.nasl: Parse error at or near line 27
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.47 utc:19: plugin_feed_info.inc: Not able to open nor to locate it in include paths
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.47 utc:19: /var/lib/openvas/plugins/gb_ibm_db2_consolidation.nasl: Parse error at or near line 19
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.47 utc:19: plugin_feed_info.inc: Not able to open nor to locate it in include paths
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.47 utc:19: /var/lib/openvas/plugins/os_detection.nasl: Parse error at or near line 27
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.47 utc:19: plugin_feed_info.inc: Not able to open nor to locate it in include paths
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.47 utc:19: /var/lib/openvas/plugins/gb_grandstream_ucm_consolidation.nasl: Parse error at or near line 19
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.48 utc:19: plugin_feed_info.inc: Not able to open nor to locate it in include paths
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.48 utc:19: /var/lib/openvas/plugins/gb_cisco_nam_consolidation.nasl: Parse error at or near line 21
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.48 utc:19: plugin_feed_info.inc: Not able to open nor to locate it in include paths
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.48 utc:19: /var/lib/openvas/plugins/os_fingerprint.nasl: Parse error at or near line 26
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.48 utc:19: plugin_feed_info.inc: Not able to open nor to locate it in include paths
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.48 utc:19: /var/lib/openvas/plugins/kb_2_sc.nasl: Parse error at or near line 58
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.48 utc:19: plugin_feed_info.inc: Not able to open nor to locate it in include paths
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.48 utc:19: /var/lib/openvas/plugins/gb_rockwell_micrologix_ethernetip_detect.nasl: Parse error at or near line 27
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.49 utc:19: plugin_feed_info.inc: Not able to open nor to locate it in include paths
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.49 utc:19: /var/lib/openvas/plugins/gb_oracle_weblogic_consolidation.nasl: Parse error at or near line 21
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.49 utc:19: plugin_feed_info.inc: Not able to open nor to locate it in include paths
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.49 utc:19: /var/lib/openvas/plugins/gb_manageengine_servicedesk_plus_msp_consolidation.nasl: Parse error at or near line 19
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.50 utc:19: plugin_feed_info.inc: Not able to open nor to locate it in include paths
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.50 utc:19: /var/lib/openvas/plugins/host_scan_end.nasl: Parse error at or near line 50
gvmd_1 | md manage:WARNING:2020-07-20 11h02.51 utc:52: manage_update_nvt_cache_osp: failed to connect to /var/run/ospd/ospd.sock
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.54 utc:19: plugin_feed_info.inc: Not able to open nor to locate it in include paths
openvas_1 | lib nasl:MESSAGE:2020-07-20 11h02.54 utc:19: /var/lib/openvas/plugins/gb_option_cloudgate_consolidation.nasl: Parse error at or near line 21
openvas_1 | lib nvticache:WARNING:2020-07-20 11h02.54 utc:18: nvt_feed_version: Failed to open file “/var/lib/openvas/plugins/plugin_feed_info.inc”: No such file or directory
openvas_1 | 2020-07-20 11:02:54,376 OSPD - openvas: ERROR: (ospd_openvas.daemon) OpenVAS Scanner failed to load NVTs. Command '['openvas', '--update-vt-info']' returned non-zero exit status 1.
openvas_1 | 2020-07-20 11:02:54,432 OSPD - openvas: INFO: (ospd_openvas.daemon) Loading vts in memory.
gvmd_1 | md manage:WARNING:2020-07-20 11h03.02 utc:56: manage_update_nvt_cache_osp: failed to connect to /var/run/ospd/ospd.sock
openvas_1 | 2020-07-20 11:03:10,622 OSPD - openvas: INFO: (ospd_openvas.daemon) Finish loading up vts.

Use a CronJob instead of the helm.sh/hook

Hi Admirito,

I didn't really understand the job with the helm.sh/hook and did not succeed in using it.

I also tried to use an rsync server, but that did not succeed either.

Why not use a cron job?

Report outdated / end-of-life Scan Engine / Environment (local) with Docker admirito/openvas-scanner:21.4.3 and admirito/gsad:21.4.3

Hi Mohammad,

I really appreciate your job on the project https://github.com/admirito/gvm-containers and http://ppa.launchpad.net/mrazavi/gvm/ubuntu

Currently there are only the Docker images admirito/openvas-scanner:21.4.3 and admirito/gsad:21.4.3.

But I get this error in GVM:
Version of installed component: 21.4.3 (Installed component: openvas-libraries on OpenVAS <= 9, openvas-scanner on GVM >= 10)
Latest available openvas-scanner version: 21.4.4

Is it possible to update the PPA packages to 21.4.4 and also update the Docker images?

Regards

rsync does not work because feed URL is wrong

rsync does not work because it is configured to use the feed feed.openvas.org.

It does work though if we are using feed.community.greenbone.net

I tested by changing in the container using sed

Is there a variable similar to PUBLIC_NAME that can be used when building the container?
Or any other smarter way to deal with this?
Thanks
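For reference, a rough sketch of the kind of in-container sed the reporter describes. The script path is an assumption (locate it first), so treat this purely as illustration:

# hypothetical path; find the real one with `docker-compose exec openvas which greenbone-nvt-sync`
docker-compose exec openvas \
    sed -i 's/feed\.openvas\.org/feed.community.greenbone.net/g' /usr/sbin/greenbone-nvt-sync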

Incompatible python redis package

I've been bumping up against some issues when running scans where only partial results are returned even though the GSAD interface is suggesting that it's all complete. I believe it basically comes down to an incompatibility with the version of the python redis package installed for bionic VS those required by openvas. See https://community.greenbone.net/t/scans-error-on-gvm-11/4343

You know you've got this issue if you see something like:

Host process failure (lrem() got an unexpected keyword argument 'count').

or

Host process failure (set_redisctx: Argument ctx is required).

in the "Error Messages" tab on your reports. You'd likely notice that your scans ran exceptionally fast, or possibly failed almost instantly.

In short, bionic has 2.10.6 and openvas needs >= 3.0.1.

I've managed to get around this locally by nuking the apt installed version within the openvas container then installing one using pip:

python3 -c "import redis; import pprint; pprint.pprint(redis.__version__)"
'2.10.6'

apt-get update
apt-get install -y python3-pip
dpkg -r --force-depends python3-redis
pip3 install redis

python3 -c "import redis; import pprint; pprint.pprint(redis.__version__)"
'3.4.1'

Downside to this method is that it does mess up the local install as the ospd-openvas package does depend on the python3-redis package. However it's probably not a huge issue given it's living in a container?!

I've searched to see if someone may have a version of the python3-redis package >= 3.x but so far no luck.

Not sure if you've got any thoughts about a better way to patch this issue??
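A hedged way to bake the same workaround into an image instead of patching a running container; the base tag mirrors the one used elsewhere on this page, and the forced dpkg removal is exactly as rough here as it is above:

# Dockerfile sketch building on the published image
FROM admirito/openvas:11
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3-pip && \
    dpkg -r --force-depends python3-redis && \
    pip3 install "redis>=3.0.1" && \
    rm -rf /var/lib/apt/lists/*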

Problems with feed updates

Getting rsync error on docker-compose up because all three feed sync containers are trying to run at the same time and there can only be one:

"rsync: failed to connect to feed.community.greenbone.net (45.135.106.142): Connection refused (111)"

To get around this I have changed the command on the first service as follows and deleted the other two services:

command: sh -c "greenbone-certdata-sync --curl --verbose && greenbone-nvt-sync && greenbone-scapdata-sync --curl --verbose"

Could really do with implementing some way to run the sync commands on a schedule as well.
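For the scheduling part, a minimal sketch using plain cron on the Docker host; the checkout path is a placeholder, and the syncs are kept sequential so only one sync talks to the feed servers at a time:

# /etc/cron.d/gvm-feed-sync (one line; adjust the path to your checkout)
0 3 * * * root cd /opt/gvm-containers && docker-compose -f nvt-sync.yml up && docker-compose -f cert-sync.yml up && docker-compose -f scap-sync.yml up >> /var/log/gvm-feed-sync.log 2>&1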

Question: Backing up gvm-containers

Hi, can you tell me which "gvm-containers" directory I need to back up in case of system corruption?
I use Arch Linux and my entire Docker directory is /var/lib/docker.
Do I need to back up the entire /var/lib/docker directory or just specific directories?
Thanks.
David.
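A hedged sketch of backing up only the named volumes instead of all of /var/lib/docker; the volume names below assume the compose project is called gvm-containers, so verify them with docker volume ls first:

# list the project's volumes
docker volume ls | grep gvm

# archive the GVM state and the PostgreSQL data (names assumed)
for v in gvm-containers_gvm-var-lib gvm-containers_openvas-var-lib gvm-containers_postgres-data; do
  docker run --rm -v "$v":/data -v "$PWD":/backup alpine tar czf "/backup/$v.tgz" -C /data .
done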

Error deploying using Rancher

When I try to deploy this helm chart using Rancher I get the following error message:
Error: INSTALLATION FAILED: found in Chart.yaml, but missing in charts/ directory: postgresql, redis

My Kubernetes Cluster is built using K3OS and gets managed using Rancher. I created a git repo within Rancher for the cluster with the URL https://github.com/admirito/gvm-containers/.

Thanks for any help,
Birger
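The "missing in charts/ directory" error means the chart's declared dependencies (postgresql, redis) were never vendored, which happens when the chart is consumed straight from the git tree instead of from the packaged release. Two hedged options:

# vendor the subcharts into chart/charts/ before installing from the git tree
helm dependency update chart/

# or install the packaged release, as in the README at the top of this page
helm install gvm \
    https://github.com/admirito/gvm-containers/releases/download/chart-1.3.0/gvm-1.3.0.tgz \
    --namespace gvm --set gvmd-db.postgresqlPassword="mypassword"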

No progress on scan

I've created a simple "Discovery" scan which scans the internal Docker network, using the "Discovery" configuration.
After starting, it scans a few IPs and then it seems to stop. The web interface shows a progress of 1% (I left it running overnight).
The logs of gvmd:

gvmd_1            | event target:MESSAGE:2019-05-15 14h44.54 UTC:4458: Target Internal (b31fea7b-6ddd-4283-9bad-be43735c159b) has been created by admin
gvmd_1            | event task:MESSAGE:2019-05-15 14h45.22 UTC:4518: Status of task  (7f0bfbf9-d914-4f36-b6cf-3ea92b84453b) has changed to New
gvmd_1            | event task:MESSAGE:2019-05-15 14h45.23 UTC:4518: Task Internal - Discovery (7f0bfbf9-d914-4f36-b6cf-3ea92b84453b) has been created by admin
gvmd_1            | event task:MESSAGE:2019-05-15 14h45.31 UTC:4540: Status of task Internal - Discovery (7f0bfbf9-d914-4f36-b6cf-3ea92b84453b) has changed to Requested
gvmd_1            | event task:MESSAGE:2019-05-15 14h45.32 UTC:4540: Task Internal - Discovery (7f0bfbf9-d914-4f36-b6cf-3ea92b84453b) has been requested to start by admin
gvmd_1            | event task:MESSAGE:2019-05-15 14h45.33 UTC:4544: Status of task Internal - Discovery (7f0bfbf9-d914-4f36-b6cf-3ea92b84453b) has changed to Running

The logs of openvassd:

openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.32 utc:1160: Starts a new scan. Target(s) : 172.17.0.0/24, with max_hosts = 20 and max_checks = 4
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.32 utc:1163: Testing 172.17.0.3 [1163]
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.32 utc:1162: Testing 172.17.0.2 (Vhosts: 0ff2c0638ddf) [1162]
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.32 utc:1164: Testing 172.17.0.4 [1164]
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.32 utc:1165: Testing 172.17.0.5 [1165]
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.32 utc:1166: Testing 172.17.0.6 [1166]
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.32 utc:1167: Testing 172.17.0.7 [1167]
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.32 utc:1168: Testing 172.17.0.8 [1168]
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.32 utc:1169: Testing 172.17.0.9 [1169]
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.32 utc:1170: Testing 172.17.0.10 [1170]
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.32 utc:1171: Testing 172.17.0.11 [1171]
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.32 utc:1172: Testing 172.17.0.12 [1172]
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.32 utc:1173: Testing 172.17.0.13 [1173]
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.32 utc:1174: Testing 172.17.0.14 [1174]
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.32 utc:1161: Testing 172.17.0.1 [1161]
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.36 utc:1162: Finished testing 172.17.0.2. Time : 4.35 secs
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.38 utc:1165: Finished testing 172.17.0.5. Time : 6.36 secs
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.38 utc:1167: The remote host 172.17.0.7 is dead
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.39 utc:1173: The remote host 172.17.0.13 is dead
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.39 utc:1172: The remote host 172.17.0.12 is dead
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.39 utc:1167: Finished testing 172.17.0.7. Time : 6.44 secs
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.39 utc:1173: Finished testing 172.17.0.13. Time : 6.44 secs
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.39 utc:1168: The remote host 172.17.0.8 is dead
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.39 utc:1163: Finished testing 172.17.0.3. Time : 6.46 secs
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.39 utc:1172: Finished testing 172.17.0.12. Time : 6.47 secs
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.39 utc:1174: The remote host 172.17.0.14 is dead
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.39 utc:1171: The remote host 172.17.0.11 is dead
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.39 utc:1169: The remote host 172.17.0.9 is dead
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.39 utc:1161: Finished testing 172.17.0.1. Time : 6.50 secs
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.39 utc:1168: Finished testing 172.17.0.8. Time : 6.50 secs
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.39 utc:1164: Finished testing 172.17.0.4. Time : 6.50 secs
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.39 utc:1170: The remote host 172.17.0.10 is dead
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.39 utc:1171: Finished testing 172.17.0.11. Time : 6.52 secs
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.39 utc:1174: Finished testing 172.17.0.14. Time : 6.52 secs
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.39 utc:1169: Finished testing 172.17.0.9. Time : 6.52 secs
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.39 utc:1170: Finished testing 172.17.0.10. Time : 6.56 secs
openvassd_1       | sd   main:MESSAGE:2019-05-15 14h45.39 utc:1166: Finished testing 172.17.0.6. Time : 6.59 secs

The output of docker-compose top:

gvmcontainers_gsad_1
UID     PID    PPID    C   STIME   TTY     TIME                                        CMD                                    
------------------------------------------------------------------------------------------------------------------------------
root   12994   12965   0   mei15   ?     00:00:03   gsad -f --listen=0.0.0.0 --port=80 --http-only --mlisten=gvmd --mport=9390

gvmcontainers_gvm-postgres_1
UID    PID    PPID   C   STIME   TTY     TIME                           CMD                       
--------------------------------------------------------------------------------------------------
999   2944    5189   0   mei15   ?     00:01:30   postgres: gvmduser gvmd 172.17.0.5(44532) idle  
999   3367    5189   0   mei15   ?     00:00:00   postgres: gvmduser gvmd 172.17.0.5(44582) idle  
999   5189    5143   0   mei15   ?     00:00:08   postgres                                        
999   6004    5189   0   mei15   ?     00:00:11   postgres: checkpointer process                  
999   6005    5189   0   mei15   ?     00:00:18   postgres: writer process                        
999   6006    5189   0   mei15   ?     00:00:23   postgres: wal writer process                    
999   6007    5189   0   mei15   ?     00:00:01   postgres: autovacuum launcher process           
999   6008    5189   0   mei15   ?     00:00:25   postgres: stats collector process               
999   6009    5189   0   mei15   ?     00:00:00   postgres: bgworker: logical replication launcher
999   12469   5189   0   mei15   ?     00:00:42   postgres: gvmduser gvmd 172.17.0.5(59222) idle  

gvmcontainers_gvmd_1
UID     PID    PPID    C   STIME   TTY     TIME                                  CMD                             
-----------------------------------------------------------------------------------------------------------------
root   2943    12360   0   mei15   ?     00:00:34   gvmd: OTP: Handling scan c866eb97-63ed-4cd7-bebe-ebe9af428fde
root   3337    12360   0   mei15   ?     00:00:00   gvmd: Reloading NVTs                                         
root   3344    3337    0   mei15   ?     00:00:02   gvmd: Updating NVT cache                                     
root   12360   12332   0   mei15   ?     00:01:14   gvmd: Waiting for incoming connections                       

gvmcontainers_openvassd_1
UID    PID    PPID   C   STIME   TTY     TIME                         CMD                    
---------------------------------------------------------------------------------------------
root   2939   5183   0   mei15   ?     00:00:01   openvassd: Serving /var/run/openvassd.sock 
root   5183   5129   0   mei15   ?     00:00:01   openvassd: Waiting for incoming connections

gvmcontainers_redis_1
UID   PID    PPID   C   STIME   TTY     TIME           CMD       
-----------------------------------------------------------------
999   5440   5408   0   mei15   ?     00:01:44   redis-server *:0

One thing I also noticed in the gvmd logs was this line:

gvmd_1            | md manage:WARNING:2019-05-15 14h43.31 UTC:4373: Failed to execute /usr/sbin/greenbone-nvt-sync: Failed to execute child process “/usr/sbin/greenbone-nvt-sync” (No such file or directory)

This is probably because this command is not located in the gvmd container but in the openvassd container.

Missing latex packages for PDF report generation

I was having some issues downloading the openvas reports as PDF (downloaded PDF was 0 bytes) and found the cause was some missing latex packages in the gvmd container. I resolved it with:

sudo docker exec -it gvmd /bin/bash
apt-get update
apt-get install -y texlive-latex-extra --no-install-recommends
apt-get install -y texlive-fonts-recommended

Not sure if that's necessarily something you'd want to include in the base install, but might be useful for others

CrashLoopBackOff container gvm

[screenshot]

Installed GVM on minikube from gvm-1.0.2.tar.gz.

It was not the first time; it happened after running 3 scans at the same time. So could you help me with this issue? Thank you.
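Without the pod logs it is hard to say more, but these standard commands show why a container is crash-looping (the namespace is whatever you installed the chart into):

kubectl get pods -n gvm
kubectl describe pod <crashing-pod> -n gvm          # check Events and Last State
kubectl logs <crashing-pod> -n gvm --previous       # logs from the crashed attempt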

How to set it up on Manjaro

Hello,
I would like to install it on my Manjaro KDE distro, but I do not have any knowledge of Docker or GVM. Can you please tell me how I can begin setting this up and running on Manjaro?
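A rough sketch of the prerequisites on Manjaro/Arch before following the README steps at the top of this page (the package names are the standard Arch ones):

sudo pacman -S docker docker-compose
sudo systemctl enable --now docker.service
sudo usermod -aG docker "$USER"   # log out and back in, or prefix docker commands with sudo

git clone https://github.com/admirito/gvm-containers.git
cd gvm-containers
docker-compose -f nvt-sync.yml up
docker-compose -f cert-sync.yml up
docker-compose -f scap-sync.yml up
docker-compose -f gvmd-data-sync.yml up
docker-compose up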

Helm chart requires node affinity

According to this comment in #27, on a multi-node cluster the following error has occurred:

ReadWriteOnce – the volume can be mounted as read-write by a single node

But we have 3 different pods accessing the same volume (same PVC): openvas-deployment, gvmd-deployment, feeds-sync-hook.

Edit: indeed that was the problem, specifying a required node affinity solved my problem. If it's intended, maybe it should at least be documented that all the pods need to be scheduled on the same node for the chart to work.
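For anyone hitting the same Multi-Attach error: because the shared PVC is ReadWriteOnce, every pod that mounts it has to land on the same node. A hedged illustration of pinning the deployments with a plain nodeSelector patch; the deployment names are taken from the earlier multi-node issue on this page, the node name is a placeholder, and note that Helm will overwrite manual patches on the next upgrade:

kubectl -n gvm patch deployment gvm-gvmd --patch \
  '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname":"worker-1"}}}}}'
kubectl -n gvm patch deployment gvm-openvas --patch \
  '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname":"worker-1"}}}}}'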

Lots of defunct processes in the openvas Docker service

Hi,

there are a lot of defunct processes in the openvas Docker service:

root@2bb3b43c986f:/# ps -ef | grep defunct | wc -l
464

This is due to how Docker interacts with Python here: if you start the PID 1 process with tini, it manages process termination well.

Update the Dockerfile like this:

# Add Tini
ARG TINI_VERSION="v0.18.0"
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini

ENTRYPOINT ["/tini", "--", "bash", "/usr/local/bin/docker-entrypoint.sh" ]

you can check my pull request

NVTs and Scan Configs empty until ospd sock changed (helm)

Hey so I pulled the container set up from helm, and out of the box the sync pulls generally everything. I see CVEs, ports, stuff like that. But when I go look at NVTs, blank, when I look at scan configs, blank.

I then found this thread : #48

And did the steps:

"
kubectl -n gvm edit deployments.apps gvm-gvmd
And changing: "- UNIX-LISTEN:/run/ospd/ospd.sock,fork" with "- UNIX-LISTEN:/run/ospd/ospd-openvas.sock,fork"
"

As outlined from that thread.

Then the relevant container was replaced with the updated deployment, and the feeds in the "Feed Status" section went into "updating" for a while. Then the NVTs show up now, and I see Scan Configs.

Now, I'm not an expert on how OpenVAS/GVM works at all. But when I look at the helm deployments, the default "ospd.sock" should work, since it matches what ospd-openvas is defined to expose as a socket, so I don't understand why this works. But this needs to be fixed because GVM out of the box is effectively broken until this is fixed.
