
Comments (19)

bjoernricks commented on July 2, 2024

We are actively working on this issue. As always, debugging and implementing the best possible solution takes some time.

0463e70 from #2028 has been identified as the culprit of the issue. Either we implement a fix or we are going to revert that change in the next few days.

mattmundell commented on July 2, 2024

I tried to increase the number of max_connections to 300 but still experience the error

What's happening for me is that gvmd is keeping the connections from gsad open, and this is causing postgres to reach max_connections. /pull/2042 solves the issue.
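
To confirm that it really is gvmd holding the slots open, counting the PostgreSQL backends per database and user is a quick check. This is only a diagnostic sketch, assuming a source install where psql can be run as the postgres superuser (in the container setup you would run psql via docker compose exec -u postgres pg-gvm instead):

# count open backends per database and user; gvmd connections should dominate here
sudo -u postgres psql -c "SELECT datname, usename, count(*) FROM pg_stat_activity GROUP BY datname, usename ORDER BY count(*) DESC;"
# compare the total against the configured limit
sudo -u postgres psql -c "SHOW max_connections;"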

tomassrnka commented on July 2, 2024

I'm hoping /pull/2042 will solve this. When trying to reproduce I saw many gvmd processes building up when I was using GSA, and I guess the same issue is causing the database connections to run out.

Thank you! I rebuilt gvmd with the patch applied and it seems to fix the issue.

dosera commented on July 2, 2024

This issue originated from https://forum.greenbone.net/t/sql-open-pqerrormessage-conn-fatal-remaining-connection-slots-are-reserved-for-non-replication-superuser-connections/15137

As far as I understand, using gvmd 22.5.1 should work because the issue was introduced with #2028

Downgrade did not work for me due to gvmd complaining about version inconsistencies:

md   main:CRITICAL:2023-07-08 04h22.44 utc:52:  gvmd: database is wrong version

mattmundell commented on July 2, 2024

I'm hoping /pull/2042 will solve this. When trying to reproduce I saw many gvmd processes building up when I was using GSA, and I guess the same issue is causing the database connections to run out.

bjoernricks commented on July 2, 2024

This issue originated from https://forum.greenbone.net/t/sql-open-pqerrormessage-conn-fatal-remaining-connection-slots-are-reserved-for-non-replication-superuser-connections/15137

As far as I understand, using gvmd 22.5.1 should work because the issue was introduced with #2028

benbrummer commented on July 2, 2024

The docker deployment works when downgrading gvmd to 22.5.2.

Update: 22.5.2 does have the same issue.

alex-feel commented on July 2, 2024

I have encountered the issue with PostgreSQL connections after building and running Greenbone Community Edition 22.4 from the source code on Ubuntu 22.04 LTS under Windows Subsystem for Linux (WSL).

During normal operation, I encountered the following errors:

md manage:WARNING:2023-07-07 11h57.55 utc:16975: init_manage_open_db: sql_open failed
md manage:WARNING:2023-07-07 11h57.55 utc:16974: sql_open: PQconnectPoll failed
md manage:WARNING:2023-07-07 11h57.55 utc:16974: sql_open: PQerrorMessage (conn): connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL:  remaining connection slots are reserved for non-replication superuser connections

After I saw this in the logs, the web interface stopped working and only three tabs remained. I ran sudo systemctl restart gvmd many times and the process resumed. For example, at first I received the following messages:

md   main:MESSAGE:2023-07-07 13h19.47 utc:5411:    Greenbone Vulnerability Manager version 22.5.2 (DB revision 255)
md manage:MESSAGE:2023-07-07 13h19.47 utc:5411: No SCAP database found
md manage:WARNING:2023-07-07 13h19.49 UTC:5433: update_scap: No SCAP db present, rebuilding SCAP db from scratch
md manage:   INFO:2023-07-07 13h19.49 UTC:5433: update_scap: Updating data from feed
md manage:   INFO:2023-07-07 13h19.49 UTC:5433: Updating CPEs
md manage:   INFO:2023-07-07 13h22.20 UTC:5433: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2014.xml
md manage:   INFO:2023-07-07 13h22.28 UTC:5433: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2002.xml
md manage:   INFO:2023-07-07 13h22.35 UTC:5433: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2022.xml
md manage:WARNING:2023-07-07 13h23.58 utc:6217: sql_open: PQconnectPoll failed
md manage:WARNING:2023-07-07 13h23.58 utc:6217: sql_open: PQerrorMessage (conn): connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL:  remaining connection slots are reserved for non-replication superuser connections

Then, after several restarts, the progress moved further and I saw the following:

md   main:MESSAGE:2023-07-07 13h28.13 utc:6377:    Greenbone Vulnerability Manager version 22.5.2 (DB revision 255)
md manage:MESSAGE:2023-07-07 13h28.13 utc:6377: No SCAP database found
md manage:WARNING:2023-07-07 13h28.15 UTC:6400: update_scap: No SCAP db present, rebuilding SCAP db from scratch
md manage:   INFO:2023-07-07 13h28.16 UTC:6400: update_scap: Updating data from feed
md manage:   INFO:2023-07-07 13h28.16 UTC:6400: Updating CPEs
md manage:   INFO:2023-07-07 13h31.14 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2014.xml
md manage:   INFO:2023-07-07 13h31.21 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2002.xml
md manage:   INFO:2023-07-07 13h31.25 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2022.xml
md manage:   INFO:2023-07-07 13h32.01 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2019.xml
md manage:   INFO:2023-07-07 13h32.36 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2006.xml
md manage:   INFO:2023-07-07 13h32.42 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2013.xml
md manage:   INFO:2023-07-07 13h32.49 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2020.xml
md manage:   INFO:2023-07-07 13h33.15 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2018.xml
md manage:   INFO:2023-07-07 13h33.42 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2010.xml
md manage:   INFO:2023-07-07 13h33.49 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2012.xml
md manage:   INFO:2023-07-07 13h33.57 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2003.xml
md manage:   INFO:2023-07-07 13h33.59 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2023.xml
md manage:   INFO:2023-07-07 13h34.14 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2005.xml
md manage:   INFO:2023-07-07 13h34.19 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2004.xml
md manage:   INFO:2023-07-07 13h34.23 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2021.xml
md manage:   INFO:2023-07-07 13h35.02 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2016.xml
md manage:   INFO:2023-07-07 13h35.16 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2007.xml
md manage:   INFO:2023-07-07 13h35.23 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2008.xml
md manage:   INFO:2023-07-07 13h35.34 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2017.xml
md manage:   INFO:2023-07-07 13h35.57 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2011.xml
md manage:   INFO:2023-07-07 13h36.08 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2009.xml
md manage:   INFO:2023-07-07 13h36.20 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2015.xml
md manage:   INFO:2023-07-07 13h36.32 UTC:6400: Updating CVSS scores and CVE counts for CPEs
md manage:   INFO:2023-07-07 13h39.12 UTC:6400: Updating placeholder CPEs
md manage:   INFO:2023-07-07 13h39.43 UTC:6400: Updating Max CVSS for DFN-CERT
md manage:   INFO:2023-07-07 13h39.49 UTC:6400: Updating DFN-CERT CVSS max succeeded.
md manage:   INFO:2023-07-07 13h39.49 UTC:6400: Updating Max CVSS for CERT-Bund
md manage:   INFO:2023-07-07 13h39.53 UTC:6400: Updating CERT-Bund CVSS max succeeded.
md manage:   INFO:2023-07-07 13h39.56 UTC:6400: update_scap_end: Updating SCAP info succeeded

However, when I added a target for scanning, the following appeared in the log:

event target:MESSAGE:2023-07-07 13h41.21 UTC:7368: Target Target for immediate scan of IP app.example.com - 2023-07-07 13:41:21 (6e1b7247-56c6-41ea-bf91-1abe2ed7dd84) has been created by admin
event task:MESSAGE:2023-07-07 13h41.21 UTC:7368: Status of task  (54d76b19-39d7-4f6d-a21d-1c80cfbf279f) has changed to New
event task:MESSAGE:2023-07-07 13h41.21 UTC:7368: Task Immediate scan of IP app.example.com (54d76b19-39d7-4f6d-a21d-1c80cfbf279f) has been created by admin
event task:MESSAGE:2023-07-07 13h41.22 UTC:7368: Status of task Immediate scan of IP app.example.com (54d76b19-39d7-4f6d-a21d-1c80cfbf279f) has changed to Requested
event task:MESSAGE:2023-07-07 13h41.22 UTC:7368: Task Immediate scan of IP app.example.com (54d76b19-39d7-4f6d-a21d-1c80cfbf279f) has been requested to start by admin
event wizard:MESSAGE:2023-07-07 13h41.22 UTC:7368: Wizard quick_first_scan has been run by admin
md manage:WARNING:2023-07-07 13h41.22 utc:7413: sql_open: PQconnectPoll failed
md manage:WARNING:2023-07-07 13h41.22 utc:7413: sql_open: PQerrorMessage (conn): connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL:  remaining connection slots are reserved for non-replication superuser connections
md manage:WARNING:2023-07-07 13h41.22 utc:7413: init_manage_open_db: sql_open failed
md manage:WARNING:2023-07-07 13h41.22 utc:7412: sql_open: PQconnectPoll failed
md manage:WARNING:2023-07-07 13h41.22 utc:7412: sql_open: PQerrorMessage (conn): connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL:  remaining connection slots are reserved for non-replication superuser connections
md manage:WARNING:2023-07-07 13h41.22 utc:7412: init_manage_open_db: sql_open failed

Here is my current system environment:

Operating System: Windows 11 Pro
WSL Version: WSL 2
Linux Distribution: Ubuntu 22.04 LTS
Greenbone Community Edition Version: 22.4

malwarework commented on July 2, 2024

I found a solution, but it's not a good way. After starting pg-gvm I connected to the container and increased the value of max_connections (thanks to this article).
After restarting all services the error is gone.
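
For reference, roughly the same workaround on the community containers might look like the sketch below; this assumes the compose service is named pg-gvm and that psql can be run as the postgres user inside the container, and max_connections only changes after the server has been restarted:

# raise the limit (ALTER SYSTEM writes the value to postgresql.auto.conf)
docker compose exec -u postgres pg-gvm psql -c "ALTER SYSTEM SET max_connections = 300;"
# restart the database and the manager so the new limit takes effect
docker compose restart pg-gvm gvmd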

xenago commented on July 2, 2024

Same issue here, downgrading doesn't work.

md   main:MESSAGE:2023-07-10 01h54.38 utc:27:    Greenbone Vulnerability Manager version 22.5.1 (DB revision 254)
md manage:MESSAGE:2023-07-10 01h54.38 utc:28: check_db_versions: database version of database: 255
md manage:MESSAGE:2023-07-10 01h54.38 utc:28: check_db_versions: database version supported by manager: 254
md   main:CRITICAL:2023-07-10 01h54.38 utc:28: gvmd: database is wrong version
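
The check that fails here compares the schema revision recorded in the database (255) with the revision the older manager supports (254), and gvmd only migrates the schema forwards, so the older binary refuses the newer database. If you want to see what the database itself reports, something like the query below should show it; the table and column names are from memory, so treat this as an unverified sketch (on the containers, run psql via docker compose exec -u postgres pg-gvm):

# print the schema revision gvmd recorded in its database
sudo -u postgres psql gvmd -c "SELECT value FROM meta WHERE name = 'database_version';"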

mikadmswnrto commented on July 2, 2024

[quoting alex-feel's comment above in full]

Same problem here, any solution? After restarting gvmd the web interface goes back to normal, but after several seconds the error comes back.
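
One thing that may help narrow it down: mattmundell mentioned gvmd processes building up while GSA is in use, so watching the process count and the backend count while clicking around the web interface should show whether the connections are leaking on your side too. A rough sketch, assuming pgrep and psql are available locally and sudo does not prompt inside watch:

# refresh every 5 seconds while using GSA; both numbers climbing steadily points at the leak
watch -n 5 'echo "gvmd processes: $(pgrep -c gvmd)"; sudo -u postgres psql -t -c "SELECT count(*) FROM pg_stat_activity;"'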

swarooplendi commented on July 2, 2024

Do we have a document that mentions which version of the psql container greenbone/pg-gvm-build:<version> needs to be used with a specific version of the gvmd image greenbone/gvmd:<version>? Even if we use the 22.04 version for all the containers in the compose file, we are facing check_db_versions: database version supported by manager: 254 and gvmd: database is wrong version. That would be a proper workaround until it is fixed.

mikadmswnrto commented on July 2, 2024

I'm hoping /pull/2042 will solve this. When trying to reproduce I saw many gvmd processes building up when I was using GSA, and I guess the same issue is causing the database connections to run out.

I tried to increase the number of max_connections to 300 but still experience the error

nunofranciscomoreira commented on July 2, 2024

Do we have a document that mentions which version of the psql container greenbone/pg-gvm-build:<version> needs to be used with a specific version of the gvmd image greenbone/gvmd:<version>? Even if we use the 22.04 version for all the containers in the compose file, we are facing check_db_versions: database version supported by manager: 254 and gvmd: database is wrong version. That would be a proper workaround until it is fixed.

you can also tag the image of pg-gvm using:
image: greenbone/pg-gvm:22.5.1

bjoernricks commented on July 2, 2024

you can also tag the image of pg-gvm using: image: greenbone/pg-gvm:22.5.1

This won't change anything. pg-gvm:stable is the same as pg-gvm:22, pg-gvm:22.5, pg-gvm:22.5.1 and also pg-gvm:latest. Additionally, as I wrote, pg-gvm doesn't do what you currently expect: your data in the database won't be rolled back to an older version.
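
Whether two tags really point at the same image can be verified locally by comparing the image IDs; a small sketch, assuming both tags can be pulled:

# pull both tags and compare the underlying image IDs; identical IDs mean identical images
docker pull greenbone/pg-gvm:stable
docker pull greenbone/pg-gvm:22.5.1
docker image inspect --format '{{.Id}}' greenbone/pg-gvm:stable greenbone/pg-gvm:22.5.1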

dosera commented on July 2, 2024

you can also tag the image of pg-gvm using: image: greenbone/pg-gvm:22.5.1

This won't change anything. pg-gvm:stable is the same as pg-gvm:22, pg-gvm:22.5, pg-gvm:22.5.1 and also pg-gvm:latest. Additionally, as I wrote, pg-gvm doesn't do what you currently expect: your data in the database won't be rolled back to an older version.

Are you tagging like this intentionally? If so, may I ask about the reason for it?
I was honestly a little stunned since I am using the stable-tagged image and didn't expect such a regression.

nunofranciscomoreira commented on July 2, 2024

you can also tag the image of pg-gvm using: image: greenbone/pg-gvm:22.5.1

This won't change anything. pg-gvm:stable is the same as pg-gvm:22, pg-gvm:22.5, pg-gvm:22.5.1 and also pg-gvm:latest. Additionally, as I wrote, pg-gvm doesn't do what you currently expect: your data in the database won't be rolled back to an older version.

If you say so...
I'm telling you how to solve the problem regarding the minor version mismatch. If it were as you say, that pg-gvm is the same for all the tags, then the logs wouldn't say otherwise and report a minor version mismatch.

What I reported is that using both tags with the same version works. It worked for me; it has been working on a fresh install with both greenbone/pg-gvm:22.5.1 and greenbone/gvmd:22.5.1.

Again, do as you please, this is a community branch.

nunofranciscomoreira commented on July 2, 2024

you can also tag the image of pg-gvm using: image: greenbone/pg-gvm:22.5.1

This won't change anything. pg-gvm:stable is the same as pg-gvm:22, pg-gvm:22.5, pg-gvm:22.5.1 and also pg-gvm:latest. Additionally, as I wrote, pg-gvm doesn't do what you currently expect: your data in the database won't be rolled back to an older version.

Are you tagging like this intentionally? If so, may I ask about the reason for it? I was honestly a little stunned since I am using the stable-tagged image and didn't expect such a regression.

Yes, I'm tagging like that based on the suggestions here.

My docker-compose.yml:

services:
  vulnerability-tests:
    image: greenbone/vulnerability-tests
    environment:
      STORAGE_PATH: /var/lib/openvas/22.04/vt-data/nasl
    volumes:
      - vt_data_vol:/mnt

  notus-data:
    image: greenbone/notus-data
    volumes:
      - notus_data_vol:/mnt

  scap-data:
    image: greenbone/scap-data
    volumes:
      - scap_data_vol:/mnt

  cert-bund-data:
    image: greenbone/cert-bund-data
    volumes:
      - cert_data_vol:/mnt

  dfn-cert-data:
    image: greenbone/dfn-cert-data
    volumes:
      - cert_data_vol:/mnt
    depends_on:
      - cert-bund-data

  data-objects:
    image: greenbone/data-objects
    volumes:
      - data_objects_vol:/mnt

  report-formats:
    image: greenbone/report-formats
    volumes:
      - data_objects_vol:/mnt
    depends_on:
      - data-objects

  gpg-data:
    image: greenbone/gpg-data
    volumes:
      - gpg_data_vol:/mnt

  redis-server:
    image: greenbone/redis-server
    restart: on-failure
    volumes:
      - redis_socket_vol:/run/redis/

  pg-gvm:
    image: greenbone/pg-gvm:22.5.1
    restart: on-failure
    volumes:
      - psql_data_vol:/var/lib/postgresql
      - psql_socket_vol:/var/run/postgresql

  gvmd:
    image: greenbone/gvmd:22.5.1
    restart: on-failure
    volumes:
      - gvmd_data_vol:/var/lib/gvm
      - scap_data_vol:/var/lib/gvm/scap-data/
      - cert_data_vol:/var/lib/gvm/cert-data
      - data_objects_vol:/var/lib/gvm/data-objects/gvmd
      - vt_data_vol:/var/lib/openvas/plugins
      - psql_data_vol:/var/lib/postgresql
      - gvmd_socket_vol:/run/gvmd
      - ospd_openvas_socket_vol:/run/ospd
      - psql_socket_vol:/var/run/postgresql
    depends_on:
      pg-gvm:
        condition: service_started
      scap-data:
        condition: service_completed_successfully
      cert-bund-data:
        condition: service_completed_successfully
      dfn-cert-data:
        condition: service_completed_successfully
      data-objects:
        condition: service_completed_successfully
      report-formats:
        condition: service_completed_successfully

  gsa:
    image: greenbone/gsa:stable
    restart: on-failure
    ports:
      - 9392:80
    volumes:
      - gvmd_socket_vol:/run/gvmd
    depends_on:
      - gvmd

  ospd-openvas:
    image: greenbone/ospd-openvas:stable
    restart: on-failure
    init: true
    hostname: ospd-openvas.local
    cap_add:
      - NET_ADMIN # for capturing packets in promiscuous mode
      - NET_RAW # for raw sockets e.g. used for the boreas alive detection
    security_opt:
      - seccomp=unconfined
      - apparmor=unconfined
    command:
      [
        "ospd-openvas",
        "-f",
        "--config",
        "/etc/gvm/ospd-openvas.conf",
        "--mqtt-broker-address",
        "mqtt-broker",
        "--notus-feed-dir",
        "/var/lib/notus/advisories",
        "-m",
        "666"
      ]
    volumes:
      - gpg_data_vol:/etc/openvas/gnupg
      - vt_data_vol:/var/lib/openvas/plugins
      - notus_data_vol:/var/lib/notus
      - ospd_openvas_socket_vol:/run/ospd
      - redis_socket_vol:/run/redis/
    depends_on:
      redis-server:
        condition: service_started
      gpg-data:
        condition: service_completed_successfully
      vulnerability-tests:
        condition: service_completed_successfully

  mqtt-broker:
    restart: on-failure
    image: greenbone/mqtt-broker
    ports:
      - 1883:1883
    networks:
      default:
        aliases:
          - mqtt-broker
          - broker

  notus-scanner:
    restart: on-failure
    image: greenbone/notus-scanner:stable
    volumes:
      - notus_data_vol:/var/lib/notus
      - gpg_data_vol:/etc/openvas/gnupg
    environment:
      NOTUS_SCANNER_MQTT_BROKER_ADDRESS: mqtt-broker
      NOTUS_SCANNER_PRODUCTS_DIRECTORY: /var/lib/notus/products
    depends_on:
      - mqtt-broker
      - gpg-data
      - vulnerability-tests

  gvm-tools:
    image: greenbone/gvm-tools
    volumes:
      - gvmd_socket_vol:/run/gvmd
      - ospd_openvas_socket_vol:/run/ospd
    depends_on:
      - gvmd
      - ospd-openvas

volumes:
  gpg_data_vol:
  scap_data_vol:
  cert_data_vol:
  data_objects_vol:
  gvmd_data_vol:
  psql_data_vol:
  vt_data_vol:
  notus_data_vol:
  psql_socket_vol:
  gvmd_socket_vol:
  ospd_openvas_socket_vol:
  redis_socket_vol:
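
In case it helps anyone reproducing this, a minimal usage sketch, assuming the file above is saved as docker-compose.yml in the current directory:

# pull the pinned images and (re)create the stack
docker compose pull
docker compose up -d
# follow the manager logs to confirm the connection errors are gone
docker compose logs -f gvmd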

iampilot commented on July 2, 2024

Maybe the Greenbone Community Containers still have the same problem. If the maintainers have free time, please fix it, thanks!
From a kind boy in China
