wazuh / wazuh-kubernetes

Wazuh - Wazuh Kubernetes

Home Page: https://wazuh.com/

License: GNU General Public License v2.0

Languages: Shell 100.00%
Topics: elasticsearch, hacktoberfest, hacktoberfest-accepted, hacktoberfest2021, k8s, kibana, kubernetes, yaml

wazuh-kubernetes's Introduction

Wazuh Kubernetes

Slack · Email · Documentation

Deploy a Wazuh cluster with a basic indexer and dashboard stack on Kubernetes.

Branches

  • The master branch contains the latest code; be aware of possible bugs on this branch.
  • The stable branch corresponds to the latest Wazuh stable version.

Documentation

Amazon EKS development

To deploy a Wazuh cluster on an Amazon EKS cluster, read the instructions in instructions.md. Note: for Kubernetes version 1.23 or higher, an IAM role must be assigned for the CSI driver to function correctly. The AWS documentation describes the assignment: https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html. Installing the CSI driver is mandatory for both new and existing deployments, whether you are using Kubernetes 1.23 for the first time or upgrading a cluster to it.
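
For reference, the IAM role is bound to the driver through an IRSA annotation on its controller service account. A minimal sketch, assuming the standard ebs-csi-controller-sa account name and a placeholder role ARN (follow the linked AWS documentation for the full procedure):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ebs-csi-controller-sa        # standard service account used by the aws-ebs-csi-driver
  namespace: kube-system
  annotations:
    # IRSA: binds the IAM role granting the driver its EC2/EBS permissions
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/AmazonEKS_EBS_CSI_DriverRole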

Local development

To deploy a cluster in your local environment (such as Minikube, Kind, or Microk8s), read the instructions in local-environment.md.
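
As an illustration, the local overlay points its storage class at a hostpath-style provisioner. A minimal sketch for Minikube (the name is hypothetical, and the repo's actual envs/local-env/storage-class.yaml may differ):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: wazuh-storage                    # hypothetical name
provisioner: k8s.io/minikube-hostpath    # Minikube's built-in dynamic provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate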

Directory structure

├── CHANGELOG.md
├── cleanup.md
├── envs
│   ├── eks
│   │   ├── dashboard-resources.yaml
│   │   ├── indexer-resources.yaml
│   │   ├── kustomization.yml
│   │   ├── storage-class.yaml
│   │   ├── wazuh-master-resources.yaml
│   │   └── wazuh-worker-resources.yaml
│   └── local-env
│       ├── indexer-resources.yaml
│       ├── kustomization.yml
│       ├── storage-class.yaml
│       └── wazuh-resources.yaml
├── instructions.md
├── LICENSE
├── local-environment.md
├── README.md
├── upgrade.md
├── VERSION
└── wazuh
    ├── base
    │   ├── storage-class.yaml
    │   └── wazuh-ns.yaml
    ├── certs
    │   ├── dashboard_http
    │   │   └── generate_certs.sh
    │   └── indexer_cluster
    │       └── generate_certs.sh
    ├── indexer_stack
    │   ├── wazuh-dashboard
    │   │   ├── dashboard_conf
    │   │   │   └── opensearch_dashboards.yml
    │   │   ├── dashboard-deploy.yaml
    │   │   └── dashboard-svc.yaml
    │   └── wazuh-indexer
    │       ├── cluster
    │       │   ├── indexer-api-svc.yaml
    │       │   └── indexer-sts.yaml
    │       ├── indexer_conf
    │       │   ├── internal_users.yml
    │       │   └── opensearch.yml
    │       └── indexer-svc.yaml
    ├── kustomization.yml
    ├── secrets
    │   ├── dashboard-cred-secret.yaml
    │   ├── indexer-cred-secret.yaml
    │   ├── wazuh-api-cred-secret.yaml
    │   ├── wazuh-authd-pass-secret.yaml
    │   └── wazuh-cluster-key-secret.yaml
    └── wazuh_managers
        ├── wazuh-cluster-svc.yaml
        ├── wazuh_conf
        │   ├── master.conf
        │   └── worker.conf
        ├── wazuh-master-sts.yaml
        ├── wazuh-master-svc.yaml
        ├── wazuh-workers-svc.yaml
        └── wazuh-worker-sts.yaml
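
The layout follows kustomize conventions: wazuh/ acts as the base, while the envs/ directories overlay environment-specific storage classes and resource patches. A minimal sketch of what an overlay kustomization can look like (file names taken from the tree above; the contents are illustrative, not the repo's exact files):

# Illustrative overlay in the spirit of envs/eks/kustomization.yml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../wazuh                    # base: namespace, secrets, managers, indexer stack
  - storage-class.yaml             # environment-specific storage class
patchesStrategicMerge:
  - indexer-resources.yaml         # per-environment CPU/memory requests
  - dashboard-resources.yaml
  - wazuh-master-resources.yaml
  - wazuh-worker-resources.yaml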

Contribute

If you want to contribute to our project, please don't hesitate to send a pull request. You can also join our users mailing list or the Wazuh Slack community channel to ask questions and participate in discussions.

Credits and Thank you

Based on the previous work from JPLachance coveo/wazuh-kubernetes (2018/11/22).

License and copyright

WAZUH Copyright (C) 2016, Wazuh Inc. (License GPLv2)

wazuh-kubernetes's People

Contributors

1stofhisgame, alberpilot, anonymous2ch, c-bordon, cdare, davidcr01, davidjiglesias, dfolcha, havidarou, jctello, jesuslinares, jm404, leoquicenoz, manuasir, mateocervilla, okynos, pereyra-m, rauldpm, selutario, sitorbj, snaow, teddytpc1, vcerenu, victormorenojimenez, xr09, zenidd


wazuh-kubernetes's Issues

Wazuh Release 3.11.0_7.5.1

Wazuh version: 3.11.0
Elastic version: 7.5.1

  • Adapt to new versions (3.11.0_7.5.1)
  • Update changelog
  • Tests
  • Tag: v3.11.0_7.5.1
  • Draft release
  • Update Documentation

Release 3.9.1_7.1.0

Wazuh version: 3.9.1
Elastic version: 7.1.0

  • Adapt to new versions (3.9.1 - 7.1.0)
  • Update changelog
  • Tests
  • Tag: v3.9.1
  • Draft release
  • Update Documentation

Release 3.9.2_7.1.1

Wazuh version: 3.9.2
Elastic version: 7.1.1

  • Adapt to new versions (3.9.2 - 7.1.1)
  • Update changelog
  • Tests
  • Tag: v3.9.2
  • Draft release
  • Update Documentation

DaemonSet container progress

Hi @jesuslinares,

Thank you for the work you've done here; it's great and really helpful.

You mention in the instructions that you are researching if the agent is able to run as a DaemonSet container:

We are researching if the agent is able to run as a DaemonSet container. A DaemonSet is a special type of Pod which is logically guaranteed to run on each Kubernetes node. This kind of agent will have access only to its container, so we should mount volumes used by other containers to monitor logs, files, etc.
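
For illustration only, such an agent would look roughly like the sketch below; the image tag and mounts are hypothetical, and the hostPath volume reflects the idea of monitoring host logs described above:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: wazuh-agent                # hypothetical
  namespace: wazuh
spec:
  selector:
    matchLabels:
      app: wazuh-agent
  template:
    metadata:
      labels:
        app: wazuh-agent
    spec:
      containers:
        - name: wazuh-agent
          image: wazuh/wazuh-agent:latest    # hypothetical image/tag
          volumeMounts:
            - name: host-var-log
              mountPath: /host/var/log       # host logs exposed read-only inside the agent
              readOnly: true
      volumes:
        - name: host-var-log
          hostPath:
            path: /var/log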

Has there been any progress on this? Seems to be a requirement for my team rather than installing the agent on the host. Any help or advice would be appreciated.

Thanks!

Release 3.9.3_7.2.0

Wazuh version: 3.9.3
Elastic version: 7.2.0

  • Adapt to new versions (3.9.3 - 7.2.0)
  • Update changelog
  • Tests
  • Tag: v3.9.3_7.2.0
  • Draft release
  • Update Documentation

Kibana "no spaces match search criteria"

Following instructions.md, using the elasticsearch cluster folder, on an AWS cluster, the Kibana instance redirects the ALB's base URL to "[xyz]/spaces/space_selector#?_g=()". Most of the time this page shows no spaces, as seen in the attached image. Occasionally, upon refreshing, the "Default" space shows up, but behavior after clicking it is shaky, and it seems to redirect back to the space_selector page again some time later.

[Screenshot: Kibana space selector showing no spaces]

The following error, possibly unrelated, appears in the pod logs repeatedly:
{"type":"log","@timestamp":"2020-01-23T16:55:46Z","tags":["error","task_manager"],"pid":25,"message":"Failed to poll for work: [script_exception] link error, with { script_stack={ 0=\"doc['task.retryAt'].value\" & 1=\" ^---- HERE\" } & script=\"doc['task.retryAt'].value || doc['task.runAt'].value\" & lang=\"expression\" } :: {\"path\":\"/.kibana_task_manager/_update_by_query\",\"query\":{\"ignore_unavailable\":true,\"refresh\":true,\"max_docs\":10,\"conflicts\":\"proceed\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"type\\\":\\\"task\\\"}},{\\\"bool\\\":{\\\"must\\\":[{\\\"bool\\\":{\\\"should\\\":[{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"task.status\\\":\\\"idle\\\"}},{\\\"range\\\":{\\\"task.runAt\\\":{\\\"lte\\\":\\\"now\\\"}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"bool\\\":{\\\"should\\\":[{\\\"term\\\":{\\\"task.status\\\":\\\"running\\\"}},{\\\"term\\\":{\\\"task.status\\\":\\\"claiming\\\"}}]}},{\\\"range\\\":{\\\"task.retryAt\\\":{\\\"lte\\\":\\\"now\\\"}}}]}}]}},{\\\"bool\\\":{\\\"should\\\":[{\\\"exists\\\":{\\\"field\\\":\\\"task.interval\\\"}},{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"task.taskType\\\":\\\"vis_telemetry\\\"}},{\\\"range\\\":{\\\"task.attempts\\\":{\\\"lt\\\":3}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"task.taskType\\\":\\\"lens_telemetry\\\"}},{\\\"range\\\":{\\\"task.attempts\\\":{\\\"lt\\\":3}}}]}}]}}]}}]}},\\\"sort\\\":{\\\"_script\\\":{\\\"type\\\":\\\"number\\\",\\\"order\\\":\\\"asc\\\",\\\"script\\\":{\\\"lang\\\":\\\"expression\\\",\\\"source\\\":\\\"doc['task.retryAt'].value || doc['task.runAt'].value\\\"}}},\\\"seq_no_primary_term\\\":true,\\\"script\\\":{\\\"source\\\":\\\"ctx._source.task.ownerId=params.ownerId; ctx._source.task.status=params.status; ctx._source.task.retryAt=params.retryAt;\\\",\\\"lang\\\":\\\"painless\\\",\\\"params\\\":{\\\"ownerId\\\":\\\"56ac10ee-3238-4965-9cf1-5978f7aec544\\\",\\\"retryAt\\\":\\\"2020-01-23T16:56:16.241Z\\\",\\\"status\\\":\\\"claiming\\\"}}}\",\"statusCode\":400,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"script_exception\\\",\\\"reason\\\":\\\"link error\\\",\\\"script_stack\\\":[\\\"doc['task.retryAt'].value\\\",\\\" ^---- HERE\\\"],\\\"script\\\":\\\"doc['task.retryAt'].value || doc['task.runAt'].value\\\",\\\"lang\\\":\\\"expression\\\"}],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[{\\\"shard\\\":0,\\\"index\\\":\\\".kibana_task_manager\\\",\\\"node\\\":\\\"K8FORPE1TnqqHWOaN2YJRw\\\",\\\"reason\\\":{\\\"type\\\":\\\"script_exception\\\",\\\"reason\\\":\\\"link error\\\",\\\"script_stack\\\":[\\\"doc['task.retryAt'].value\\\",\\\" ^---- HERE\\\"],\\\"script\\\":\\\"doc['task.retryAt'].value || doc['task.runAt'].value\\\",\\\"lang\\\":\\\"expression\\\",\\\"caused_by\\\":{\\\"type\\\":\\\"parse_exception\\\",\\\"reason\\\":\\\"Field [task.retryAt] does not exist in mappings\\\"}}}]},\\\"status\\\":400}\"}"}

I modified the CPU requests in the YAML files for wazuh-elasticsearch (500m -> 200m) and the wazuh workers (2 -> 800m), but otherwise left everything as-is.

I would appreciate any insight you may have.
Many thanks.

Add settings to allow setting up Kubernetes in a local environment

Hi team,

Currently, our k8s code only allows installation on Amazon AWS, specifically EKS, as stated in instructions.md.

Deploying on a local machine can be really useful for development and testing purposes.

To achieve this, the following tasks must be performed:

  • Adapt Deployments/StatefulSets to local mode
  • Add required settings to choose between EKS and Local
  • Make the number of nodes and installation settings configurable
  • Test
  • Update CHANGELOG.md
  • Update README.md

Best regards,

Jose

Update Elasticsearch cluster statefulset files

Hello team,

Since the Docker image was updated here, we can now create an Elasticsearch cluster more efficiently, so the corresponding changes need to be made in the files that create the cluster through Kubernetes.

In addition, the relevant documentation must be added.

Best regards,

Alfonso Ruiz-Bravo

Release 3.9.3_6.8.1

Wazuh version: 3.9.3
Elastic version: 6.8.1

  • Adapt to new versions (3.9.3 - 6.8.1)
  • Update changelog
  • Tests
  • Tag: v3.9.3_6.8.1
  • Draft release
  • Update Documentation

Investigate monitoring solutions to visualize k8s metrics.

Hi team,

There are multiple solutions that can give a great overview of k8s metrics like Prometheus and Grafana (both opensource) among many others.

Grafana on top of Prometheus is a common combination whose dashboards look like the following (from https://github.com/camilb/prometheus-kubernetes):

[Grafana dashboard screenshot]

It would be great to have a deployment of this combination with custom dashboards to visualize all the metrics about the pods and the cluster.

Resources:

Unable to access master on port 55000 from inside and outside the container

From the node ...

root@ip-10-83-94-228:~/wazuh# curl http://wazuh-master.example.com:55000/agents
curl: (52) Empty reply from server

From the container ...

root@wazuh-manager-master-0:/# curl http://127.0.0.1:55000/agents
curl: (52) Empty reply from server

This is what's running in the master's container ...

$ kubectl exec -it wazuh-manager-master-0 /bin/bash -n wazuh
root@wazuh-manager-master-0:/# ps auxf
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root     12688  0.1  0.0  21512  3824 ?        Ss   14:40   0:00 /bin/bash
root     12704  0.0  0.0  37800  3432 ?        R+   14:40   0:00  \_ ps auxf
root         1  0.0  0.0  21328  3400 ?        Ss   Jan30   0:00 /bin/bash /entrypoint.sh
root        14  0.0  0.0  32464 11016 ?        S    Jan30   0:00 /usr/bin/python3 -u /sbin/my_init
root        25  0.0  0.0   4388  1156 ?        S    Jan30   0:00  \_ /usr/bin/runsvdir -P /etc/service
root        26  0.0  0.0   4236   800 ?        Ss   Jan30   0:00      \_ runsv cron
root        32  0.0  0.0  29268  2876 ?        S    Jan30   0:00      |   \_ /usr/sbin/cron -f
root        27  0.0  0.0   4236   752 ?        Ss   Jan30   0:00      \_ runsv sshd
root        28  0.0  0.0   4236   652 ?        Ss   Jan30   0:00      \_ runsv filebeat
root       113  0.0  0.0   4500   852 ?        S    Jan30   0:00      |   \_ /bin/sh ./run
root       139  0.0  0.0   7608   692 ?        S    Jan30   0:02      |       \_ tail -f /var/log/filebeat/filebeat
root        29  0.0  0.0   4236   700 ?        Ss   Jan30   0:00      \_ runsv postfix
root        40  0.0  0.0   4500   712 ?        S    Jan30   0:00      |   \_ /bin/sh ./run
root        55  0.0  0.0   7608   676 ?        S    Jan30   0:00      |       \_ tail -f /var/log/mail.log
root        30  0.0  0.0   4236   668 ?        Ss   Jan30   0:00      \_ runsv wazuh-api
root        35  0.0  0.0   4500   784 ?        S    Jan30   0:00      |   \_ /bin/sh ./run
root        53  0.0  0.0   7608   836 ?        S    Jan30   0:00      |       \_ tail -f /var/ossec/data/logs/api.log
root        31  0.0  0.0   4236   664 ?        Ss   Jan30   0:00      \_ runsv wazuh
root        38  0.0  0.0   4500   784 ?        S    Jan30   0:00          \_ /bin/sh ./run
root       312  0.0  0.0   7608   680 ?        S    Jan30   0:00              \_ tail -f /var/ossec/data/logs/ossec.log
root        19  0.0  0.0  72384  7612 ?        S    Jan30   0:00 /usr/sbin/syslog-ng --pidfile /var/run/syslog-ng.pid -F --no-caps
ossec       52  0.0  0.1 935540 52384 ?        Sl   Jan30   0:03 /usr/bin/nodejs /var/ossec/api/app.js
root        85  0.0  0.0   9300   636 ?        S    Jan30   0:00 /usr/share/filebeat/bin/filebeat-god -r / -n -p /var/run/filebeat.pid -- /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.
root        86  0.0  0.0 608140 27612 ?        Sl   Jan30   0:13  \_ /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log
root       149  0.0  0.0 183712  6084 ?        Sl   Jan30   0:28 /var/ossec/bin/ossec-authd
ossec      160  0.0  0.0 642000  6148 ?        Sl   Jan30   0:49 /var/ossec/bin/wazuh-db
root       177  0.0  0.0 101648  2848 ?        Sl   Jan30   0:02 /var/ossec/bin/ossec-execd
ossec      186  0.0  0.0 868596 19144 ?        Sl   Jan30   0:37 /var/ossec/bin/ossec-analysisd
root       194  0.0  0.0 111192  5636 ?        Sl   Jan30   0:25 /var/ossec/bin/ossec-syscheckd
ossecr     203  0.0  0.0 512460  3756 ?        Sl   Jan30   1:11 /var/ossec/bin/ossec-remoted
root       210  0.0  0.0 404740  3312 ?        Sl   Jan30   0:35 /var/ossec/bin/ossec-logcollector
ossec      216  0.0  0.0  36100  4012 ?        Sl   Jan30   0:04 /var/ossec/bin/ossec-monitord
root       223  0.0  0.0 357864  6164 ?        Sl   Jan30   0:00 /var/ossec/bin/wazuh-modulesd
ossec      298  0.1  0.1 478576 44092 ?        Sl   Jan30   1:50 python /var/ossec/bin/wazuh-clusterd

So it is running.

This is what we have listening on the container ports. See:

root@wazuh-manager-master-0:/# netstat -vatn
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:1514            0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:1515            0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:1516            0.0.0.0:*               LISTEN
tcp        0      0 100.117.128.21:1516     100.113.192.7:36770     ESTABLISHED
tcp        0      0 100.117.128.21:1516     100.123.160.10:56778    ESTABLISHED
tcp6       0      0 :::55000                :::*                    LISTEN

Release 3.9.0

Wazuh version: 3.9.0
Elastic version: 6.7.1

  • Adapt to new versions (3.9.0 - 6.7.1)
  • Update changelog
  • Tests
  • Tag: v3.9.0
  • Draft release

Wazuh Kubernetes Release 3.10.0_7.3.2

Wazuh version: 3.10.0
Elastic version: 7.3.2

  • Adapt to new versions (3.10.0_7.3.2)
  • Update changelog
  • Tests
  • Tag: v3.10.0_7.3.2
  • Draft release
  • Update Documentation

Worker nodes not able to communicate with Wazuh manager master

Hello,
I'm trying to deploy the Wazuh server in Kubernetes (I'm using your Wazuh Kubernetes repo for reference).
I've deployed the wazuh/wazuh:3.9.0_6.7.2 Docker image in my k8s cluster hosted on AWS.
I have performed all the steps as instructed.

The problem I am facing is that the wazuh agent I registered shows as never connected in the Kibana dashboard.

On further investigation, I tried to curl my Wazuh load balancer services at ports 1515 and 1514 from the machine that hosts the wazuh agent, and it was able to connect to both, with an empty reply from load-balancer:1514/tcp.

However, my agent logs showed me this:

2019/10/08 21:07:27 ossec-agentd: WARNING: Unable to reload hostname for 'my-nlb-url-pointing-at-1514'. Using previous address.
2019/10/08 21:07:27 ossec-agentd: INFO: Trying to connect to server (my-nlb-url-pointing-at-1514/172.23.5.32:1514/tcp).
2019/10/08 21:07:31 ossec-syscheckd: INFO: (6010): File integrity monitoring scan frequency: 43200 seconds
2019/10/08 21:07:31 ossec-syscheckd: INFO: (6008): File integrity monitoring scan started.
2019/10/08 21:07:49 ossec-agentd: WARNING: Unable to reload hostname for 'my-nlb-url-pointing-at-1514'. Using previous address.
2019/10/08 21:07:49 ossec-agentd: INFO: Trying to connect to server (my-nlb-url-pointing-at-1514:1514/tcp).
2019/10/08 21:08:10 ossec-agentd: WARNING: Unable to reload hostname for 'my-nlb-url-pointing-at-1514'. Using previous address.
2019/10/08 21:08:10 ossec-agentd: INFO: Trying to connect to server (my-nlb-url-pointing-at-1514/IP_ADDRESS:1514/tcp).
2019/10/08 21:08:31 ossec-agentd: WARNING: Unable to reload hostname for 'my-nlb-url-pointing-at-1514'. Using previous address.
2019/10/08 21:08:31 ossec-agentd: INFO: Trying to connect to server (my-nlb-url-pointing-at-1514/IP_ADDRESS:1514/tcp).

My agent config file snippet:

  <client>
    <server>
      <address>my-nlb-url-pointing-at-1514</address>
      <port>1514</port>
      <protocol>tcp</protocol>
    </server>
    <config-profile>centos, centos7, centos7.6</config-profile>
    <notify_time>10</notify_time>
    <time-reconnect>60</time-reconnect>
    <auto_restart>yes</auto_restart>
    <crypto_method>aes</crypto_method>
  </client>

On further investigation, I tried to find any errors on my wazuh manager master side using cat /var/ossec/logs/ossec.log and could not find any warnings.

But when I connected to the wazuh worker pod, I saw this error:

2019/10/08 21:25:20 wazuh-clusterd: ERROR: [Worker] [Main] Could not connect to master: [Errno -2] Name or service not known. Trying again in 10 seconds.
2019/10/08 21:25:30 wazuh-clusterd: ERROR: [Worker] [Main] Could not connect to master: [Errno -2] Name or service not known. Trying again in 10 seconds.
2019/10/08 21:25:40 wazuh-clusterd: ERROR: [Worker] [Main] Could not connect to master: [Errno -2] Name or service not known. Trying again in 10 seconds.
2019/10/08 21:25:50 wazuh-clusterd: ERROR: [Worker] [Main] Could not connect to master: [Errno -2] Name or service not known. Trying again in 10 seconds.
2019/10/08 21:26:00 wazuh-clusterd: ERROR: [Worker] [Main] Could not connect to master: [Errno -2] Name or service not known. Trying again in 10 seconds.
2019/10/08 21:26:10 wazuh-clusterd: ERROR: [Worker] [Main] Could not connect to master: [Errno -2] Name or service not known. Trying again in 10 seconds.
2019/10/08 21:26:20 wazuh-clusterd: ERROR: [Worker] [Main] Could not connect to master: [Errno -2] Name or service not known. Trying again in 10 seconds.
2019/10/08 21:26:30 wazuh-clusterd: ERROR: [Worker] [Main] Could not connect to master: [Errno -2] Name or service not known. Trying again in 10 seconds.
2019/10/08 21:26:40 wazuh-clusterd: ERROR: [Worker] [Main] Could not connect to master: [Errno -2] Name or service not known. Trying again in 10 seconds.
2019/10/08 21:26:50 wazuh-clusterd: ERROR: [Worker] [Main] Could not connect to master: [Errno -2] Name or service not known. Trying again in 10 seconds.

It's still not able to connect to the wazuh master.

Release 3.9.5_7.2.1

Wazuh version: 3.9.5
Elastic version: 7.2.1

  • Adapt to new versions (3.9.5 - 7.2.1)
  • Update changelog
  • Tests
  • Tag: v3.9.5_7.2.1
  • Draft release
  • Update Documentation

Release 3.9.2_6.8.0

Wazuh version: 3.9.2
Elastic version: 6.8.0

  • Adapt to new versions (3.9.2 - 6.8.0)
  • Update changelog
  • Tests
  • Tag: v3.9.2_6.8.0
  • Draft release
  • Update Documentation

Following Setup Instructions Results in Never-Connected Clients

Following the instructions, as seen here: https://github.com/wazuh/wazuh-kubernetes/blob/b1d99161d5254645b4a3b6eae72f9cff35ae8011/instructions.md

will result in clients that cannot connect, showing a "never-connected" status.

To allow them to connect, a modification must be made on each client, and the client needs to be restarted.
Here is the fix in case someone needs it:

# apt remove --purge wazuh-agent -y  # you might need this to clear old keys from a pre-existing install
curl -so /tmp/wazuh-agent.deb \
    'https://packages.wazuh.com/3.x/apt/pool/main/w/wazuh-agent/wazuh-agent_3.11.3-1_amd64.deb'
sudo WAZUH_MANAGER_IP='wazuh.example.org' dpkg -i /tmp/wazuh-agent.deb
sudo sed -i \
    's#<protocol>udp</protocol>#<protocol>tcp</protocol>#' \
    /var/ossec/etc/ossec.conf
sudo systemctl restart wazuh-agent.service

The fix here is to change the protocol from UDP to TCP, because following the instructions leaves the Kubernetes service without a working UDP connection.
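
Alternatively, if UDP were preferred on the agent side, the workers service itself would have to expose a UDP port; a hypothetical fragment of wazuh-workers-svc.yaml, not part of the repo's instructions:

  ports:
    - name: agents-events-udp
      port: 1514
      targetPort: 1514
      protocol: UDP    # the service as shipped exposes only TCP, which is why UDP agents never connect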

Wazuh Kubernetes release 3.11.2_7.5.1

Wazuh version: 3.11.2
Elastic version: 7.5.1

Tasks

  • Adapt to new versions (3.11.2 - 7.5.1)

  • Check templates and update them if needed

  • Tests

  • Update changelog

  • Tag: v3.11.2_7.5.1

  • Update documentation references

  • Draft release

Release 3.9.1_6.8.0

Wazuh version: 3.9.1
Elastic version: 6.8.0

  • Adapt to new versions (3.9.1 - 6.8.0)
  • Update changelog
  • Tests
  • Tag: v3.9.0
  • Draft release

AWS EFS: Queue '/var/ossec/queue/ossec/queue' not accessible: 'Connection refused'

Hello!

I deployed a Wazuh cluster in Kubernetes based on my own work and on https://github.com/wazuh/wazuh-kubernetes. To improve the master's availability, I'm trying to use EFS instead of EBS.

I found an issue similar to this one (#23), but I was asked to open a new one.

My diff for my Kubernetes StatefulSet:

--- a/wazuh-master-sts.yaml
+++ b/wazuh-master-sts.yaml
@@ -30,6 +30,10 @@ spec:
         filebeat_conf_cm_version: '@(filebeat_conf_cm_version)'
     spec:
       volumes:
+        - name: wazuh-master-efs
+          nfs:
+            path: /
+            server: {{ undef "<computed>" .efs_dns_name }}
         - name: ossec-conf
           secret:
             secretName: wazuh-master-conf
@@ -87,9 +91,9 @@ spec:
               mountPath: /etc/filebeat/filebeat.yml
               subPath: filebeat.yml
               readOnly: true
-            - name: data
+            - name: wazuh-master-efs
               mountPath: /var/ossec/data
-            - name: data
+            - name: wazuh-master-efs
               mountPath: /etc/postfix
           ports:
             - containerPort: 1515
@@ -98,14 +102,3 @@ spec:
               name: cluster
             - containerPort: {{ .api_port }}
               name: api
-  volumeClaimTemplates:
-    - metadata:
-        name: data
-        namespace: '@(namespace)'
-      spec:
-        accessModes:
-          - ReadWriteOnce
-        storageClassName: gp2-encrypted-retained
-        resources:
-          requests:
-            storage: 50Gi

On boot of the Wazuh manager container, I get:

2019/05/29 01:45:17 wazuh-modulesd: ERROR: (1210): Queue '/var/ossec/queue/ossec/queue' not accessible: 'Connection refused'.
2019/05/29 01:45:18 ossec-remoted: ERROR: (1210): Queue '/queue/ossec/queue' not accessible: 'Connection refused'.
2019/05/29 01:45:18 ossec-analysisd: INFO: Reading rules file: 'ruleset/rules/0190-ms_ftpd_rules.xml'
2019/05/29 01:45:18 ossec-remoted: CRITICAL: (1211): Unable to access queue: '/queue/ossec/queue'. Giving up..

I tried to fix it by changing the wazuh.runit.service script:

#!/bin/sh

# Start twice: the second invocation retries daemons that failed to
# come up the first time (EFS/NFS I/O is slower than EBS).
/var/ossec/bin/ossec-control start
/var/ossec/bin/ossec-control start

tail -f /var/ossec/logs/ossec.log

It almost fixes the issue, but I still get some random failures (sometimes, some daemons fail to boot).

In Slack, @SitoRBJ said: "rootcheck is not finding the analysisd queue because that daemon is not yet standing".

How can we fix it? 😄 It looks like ossec-control is not aware of the dependencies between the daemons it manages. Therefore, if we give it an NFS with slow I/O performance (compared to an SSD), it fails.

Thanks for your help!

Increase the default vm.max_map_count for Elasticsearch

Hello team,

We should add an option to increase the value of vm.max_map_count; otherwise, it has to be set manually on the Kubernetes node where the pod is scheduled for the Elasticsearch container to work.
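
A common way to do this from the deployment itself, rather than by hand on each node, is a privileged initContainer on the Elasticsearch StatefulSet. A minimal sketch, not taken from the repo's current manifests:

      initContainers:
        - name: increase-vm-max-map-count
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]   # Elasticsearch's documented minimum
          securityContext:
            privileged: true    # needed to change a kernel parameter on the node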

Regards,

Alfonso Ruiz-Bravo

Upgrade path: Upgrading the Wazuh manager 3.6.1 to 3.9.5 is failing

Hello,

I'm trying to figure out how to upgrade my Wazuh managers. I tried the simple approach: change the image tag, apply, pray.

It failed 🙁

Here are the logs of the Wazuh manager trying to boot:

rm: cannot remove '/var/ossec/queue/db/.template.db': No such file or directory
Identified Wazuh configuration files to mount...
'/wazuh-config-mount/etc/authd.pass' -> '/var/ossec/data/etc/authd.pass'
'/wazuh-config-mount/etc/ossec.conf' -> '/var/ossec/data/etc/ossec.conf'
'/wazuh-config-mount/etc/rules/local_rules.xml' -> '/var/ossec/data/etc/rules/local_rules.xml'
'/wazuh-config-mount/etc/shared/default/agent.conf' -> '/var/ossec/data/etc/shared/default/agent.conf'
Performing Wazuh API port and credentials setup
### Wazuh API Configuration ###

Using 55000 port.


Adding password for user ndev-wazuh-manager.


Configuration changed.

Restarting API.

### [Configuration changed] ###
sed: cannot rename /etc/filebeat/sedJyZzQl: Device or resource busy
*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
*** Running /etc/my_init.d/10_syslog-ng.init...
Sep 13 20:27:19 wazuh-manager-3-9-0 syslog-ng[60]: syslog-ng starting up; version='3.13.2'
*** Booting runit daemon...
*** Runit started as PID 67
WAZUH-API is already running.
WazuhAPI 2019-09-13 20:21:43 ndev-wazuh-manager: [::ffff:10.6.21.70] GET /agents/?limit=500&offset=0 - 200 - error: '0'.
WazuhAPI 2019-09-13 20:21:44 ndev-wazuh-manager: [::ffff:100.102.185.128] GET /version? - 200 - error: '0'.
WazuhAPI 2019-09-13 20:21:45 ndev-wazuh-manager: [::ffff:10.6.21.70] GET /cluster/status? - 200 - error: '0'.
WazuhAPI 2019-09-13 20:21:45 ndev-wazuh-manager: [::ffff:100.114.66.128] GET /cluster/node? - 200 - error: '0'.
WazuhAPI 2019-09-13 20:21:45 ndev-wazuh-manager: [::ffff:100.100.234.128] GET /version? - 200 - error: '0'.
WazuhAPI 2019-09-13 20:21:46 ndev-wazuh-manager: [::ffff:100.100.26.64] GET /agents/summary? - 200 - error: '0'.
WazuhAPI 2019-09-13 20:21:46 ndev-wazuh-manager: [::ffff:100.100.234.128] GET /rules/pci? - 200 - error: '0'.
WazuhAPI 2019-09-13 20:21:46 ndev-wazuh-manager: [::ffff:100.123.247.192] GET /rules/gdpr? - 200 - error: '0'.
WazuhAPI 2019-09-13 20:24:52 : ERROR: Wazuh manager v3.9.5 found. Wazuh manager v3.6.x expected. Exiting.
WazuhAPI 2019-09-13 20:24:53 : ERROR: Wazuh manager v3.9.5 found. Wazuh manager v3.6.x expected. Exiting.
Sep 13 20:27:19 wazuh-manager-3-9-0 cron[73]: (CRON) INFO (pidfile fd = 3)
Sep 13 20:27:19 wazuh-manager-3-9-0 cron[73]: (CRON) INFO (Running @reboot jobs)
2019-09-13T20:27:19.460Z	INFO	instance/beat.go:611	Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2019-09-13T20:27:19.465Z	INFO	instance/beat.go:618	Beat UUID: ecf3180a-9430-4918-a210-6729a6b1a191
2019-09-13T20:27:19.465Z	INFO	[beat]	instance/beat.go:931	Beat info	{"system_info": {"beat": {"path": {"config": "/etc/filebeat", "data": "/var/lib/filebeat", "home": "/usr/share/filebeat", "logs": "/var/log/filebeat"}, "type": "filebeat", "uuid": "ecf3180a-9430-4918-a210-6729a6b1a191"}}}
2019-09-13T20:27:19.465Z	INFO	[beat]	instance/beat.go:940	Build info	{"system_info": {"build": {"commit": "0ffbeab5a52fa93586e4178becf1252e6a837028", "libbeat": "6.8.2", "time": "2019-07-24T14:24:45.000Z", "version": "6.8.2"}}}
2019-09-13T20:27:19.465Z	INFO	[beat]	instance/beat.go:943	Go runtime info	{"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":16,"version":"go1.10.8"}}}
2019-09-13T20:27:19.466Z	INFO	[beat]	instance/beat.go:947	Host info	{"system_info": {"host": {"architecture":"x86_64","boot_time":"2019-09-13T15:48:26Z","containerized":true,"name":"wazuh-manager-3-9-0","ip":["127.0.0.1/8","::1/128","100.107.198.212/32","fe80::10ac:caff:fed6:59e7/64"],"kernel_version":"4.15.0-1044-aws","mac":["12:ac:ca:d6:59:e7"],"os":{"family":"debian","platform":"ubuntu","name":"Ubuntu","version":"18.04.3 LTS (Bionic Beaver)","major":18,"minor":4,"patch":3,"codename":"bionic"},"timezone":"UTC","timezone_offset_sec":0}}}
2019-09-13T20:27:19.466Z	INFO	[beat]	instance/beat.go:976	Process info	{"system_info": {"process": {"capabilities": {"inheritable":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"permitted":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"effective":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"bounding":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"ambient":null}, "cwd": "/", "exe": "/usr/share/filebeat/bin/filebeat", "name": "filebeat", "pid": 102, "ppid": 97, "seccomp": {"mode":"disabled","no_new_privs":false}, "start_time": "2019-09-13T20:27:18.919Z"}}}
2019-09-13T20:27:19.466Z	INFO	instance/beat.go:280	Setup Beat: filebeat; Version: 6.8.2
2019-09-13T20:27:19.466Z	INFO	[publisher]	pipeline/module.go:110	Beat name: wazuh-manager-3-9-0
Config OK
tail: cannot open '/var/log/filebeat/filebeat' for reading: No such file or directory
tail: no files remaining
2019/09/13 20:27:19 ossec-analysisd: ERROR: Invalid decoder name: 'syscheck_integrity_changed_2nd'.
2019/09/13 20:27:19 ossec-analysisd: CRITICAL: (1220): Error loading the rules: 'ruleset/rules/0015-ossec_rules.xml'.
ossec-analysisd: Configuration error. Exiting
2019/09/13 20:26:48 ossec-analysisd: ERROR: Invalid decoder name: 'syscheck_integrity_changed_2nd'.
2019/09/13 20:26:48 ossec-analysisd: CRITICAL: (1220): Error loading the rules: 'ruleset/rules/0015-ossec_rules.xml'.
2019/09/13 20:26:52 ossec-analysisd: ERROR: Invalid decoder name: 'syscheck_integrity_changed_2nd'.
2019/09/13 20:26:52 ossec-analysisd: CRITICAL: (1220): Error loading the rules: 'ruleset/rules/0015-ossec_rules.xml'.
2019/09/13 20:27:02 ossec-analysisd: ERROR: Invalid decoder name: 'syscheck_integrity_changed_2nd'.
2019/09/13 20:27:02 ossec-analysisd: CRITICAL: (1220): Error loading the rules: 'ruleset/rules/0015-ossec_rules.xml'.
2019/09/13 20:27:12 ossec-analysisd: ERROR: Invalid decoder name: 'syscheck_integrity_changed_2nd'.
2019/09/13 20:27:12 ossec-analysisd: CRITICAL: (1220): Error loading the rules: 'ruleset/rules/0015-ossec_rules.xml'.
2019/09/13 20:27:19 ossec-analysisd: ERROR: Invalid decoder name: 'syscheck_integrity_changed_2nd'.
2019/09/13 20:27:19 ossec-analysisd: CRITICAL: (1220): Error loading the rules: 'ruleset/rules/0015-ossec_rules.xml'.
WazuhAPI 2019-09-13 20:27:19 : ERROR: Wazuh manager v3.9.5 found. Wazuh manager v3.6.x expected. Exiting.
2019-09-13T20:27:20.565Z	INFO	instance/beat.go:611	Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2019-09-13T20:27:20.565Z	INFO	instance/beat.go:618	Beat UUID: ecf3180a-9430-4918-a210-6729a6b1a191
2019-09-13T20:27:20.565Z	INFO	[beat]	instance/beat.go:931	Beat info	{"system_info": {"beat": {"path": {"config": "/etc/filebeat", "data": "/var/lib/filebeat", "home": "/usr/share/filebeat", "logs": "/var/log/filebeat"}, "type": "filebeat", "uuid": "ecf3180a-9430-4918-a210-6729a6b1a191"}}}
2019-09-13T20:27:20.565Z	INFO	[beat]	instance/beat.go:940	Build info	{"system_info": {"build": {"commit": "0ffbeab5a52fa93586e4178becf1252e6a837028", "libbeat": "6.8.2", "time": "2019-07-24T14:24:45.000Z", "version": "6.8.2"}}}
2019-09-13T20:27:20.565Z	INFO	[beat]	instance/beat.go:943	Go runtime info	{"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":16,"version":"go1.10.8"}}}
2019-09-13T20:27:20.566Z	INFO	[beat]	instance/beat.go:947	Host info	{"system_info": {"host": {"architecture":"x86_64","boot_time":"2019-09-13T15:48:26Z","containerized":true,"name":"wazuh-manager-3-9-0","ip":["127.0.0.1/8","::1/128","100.107.198.212/32","fe80::10ac:caff:fed6:59e7/64"],"kernel_version":"4.15.0-1044-aws","mac":["12:ac:ca:d6:59:e7"],"os":{"family":"debian","platform":"ubuntu","name":"Ubuntu","version":"18.04.3 LTS (Bionic Beaver)","major":18,"minor":4,"patch":3,"codename":"bionic"},"timezone":"UTC","timezone_offset_sec":0}}}
2019-09-13T20:27:20.566Z	INFO	[beat]	instance/beat.go:976	Process info	{"system_info": {"process": {"capabilities": {"inheritable":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"permitted":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"effective":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"bounding":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"ambient":null}, "cwd": "/", "exe": "/usr/share/filebeat/bin/filebeat", "name": "filebeat", "pid": 159, "ppid": 158, "seccomp": {"mode":"disabled","no_new_privs":false}, "start_time": "2019-09-13T20:27:20.099Z"}}}
2019-09-13T20:27:20.566Z	INFO	instance/beat.go:280	Setup Beat: filebeat; Version: 6.8.2
2019-09-13T20:27:20.567Z	INFO	[publisher]	pipeline/module.go:110	Beat name: wazuh-manager-3-9-0
Config OK
/usr/share/filebeat/bin/filebeat-god already running.
2019-09-13T20:27:19.554Z	INFO	[publisher]	pipeline/module.go:110	Beat name: wazuh-manager-3-9-0
2019-09-13T20:27:19.554Z	INFO	instance/beat.go:402	filebeat start running.
2019-09-13T20:27:19.554Z	INFO	registrar/registrar.go:97	No registry file found under: /var/lib/filebeat/registry. Creating a new registry file.
2019-09-13T20:27:19.560Z	INFO	registrar/registrar.go:134	Loading registrar data from /var/lib/filebeat/registry
2019-09-13T20:27:19.560Z	INFO	registrar/registrar.go:141	States Loaded from registrar: 0
2019-09-13T20:27:19.560Z	WARN	beater/filebeat.go:367	Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2019-09-13T20:27:19.560Z	INFO	crawler/crawler.go:72	Loading Inputs: 1
2019-09-13T20:27:19.561Z	INFO	log/input.go:148	Configured paths: [/var/ossec/logs/alerts/alerts.json]
2019-09-13T20:27:19.561Z	INFO	input/input.go:114	Starting input of type: log; ID: 13571056894027297000 
2019-09-13T20:27:19.561Z	INFO	crawler/crawler.go:106	Loading and starting Inputs completed. Enabled inputs: 1
2019/09/13 20:28:22 ossec-analysisd: ERROR: Invalid decoder name: 'syscheck_integrity_changed_2nd'.
2019/09/13 20:28:22 ossec-analysisd: CRITICAL: (1220): Error loading the rules: 'ruleset/rules/0015-ossec_rules.xml'.
2019/09/13 20:28:32 ossec-analysisd: ERROR: Invalid decoder name: 'syscheck_integrity_changed_2nd'.
2019/09/13 20:28:32 ossec-analysisd: CRITICAL: (1220): Error loading the rules: 'ruleset/rules/0015-ossec_rules.xml'.
2019/09/13 20:28:42 ossec-analysisd: ERROR: Invalid decoder name: 'syscheck_integrity_changed_2nd'.
2019/09/13 20:28:42 ossec-analysisd: CRITICAL: (1220): Error loading the rules: 'ruleset/rules/0015-ossec_rules.xml'.
2019/09/13 20:28:52 ossec-analysisd: ERROR: Invalid decoder name: 'syscheck_integrity_changed_2nd'.
2019/09/13 20:28:52 ossec-analysisd: CRITICAL: (1220): Error loading the rules: 'ruleset/rules/0015-ossec_rules.xml'.
2019/09/13 20:28:58 ossec-analysisd: ERROR: Invalid decoder name: 'syscheck_integrity_changed_2nd'.
2019/09/13 20:28:58 ossec-analysisd: CRITICAL: (1220): Error loading the rules: 'ruleset/rules/0015-ossec_rules.xml'.
2019/09/13 20:29:02 ossec-analysisd: ERROR: Invalid decoder name: 'syscheck_integrity_changed_2nd'.
2019/09/13 20:29:02 ossec-analysisd: CRITICAL: (1220): Error loading the rules: 'ruleset/rules/0015-ossec_rules.xml'.
2019/09/13 20:29:08 ossec-analysisd: ERROR: Invalid decoder name: 'syscheck_integrity_changed_2nd'.
2019/09/13 20:29:08 ossec-analysisd: CRITICAL: (1220): Error loading the rules: 'ruleset/rules/0015-ossec_rules.xml'.
2019/09/13 20:29:12 ossec-analysisd: ERROR: Invalid decoder name: 'syscheck_integrity_changed_2nd'.
2019/09/13 20:29:12 ossec-analysisd: CRITICAL: (1220): Error loading the rules: 'ruleset/rules/0015-ossec_rules.xml'.
2019/09/13 20:29:18 ossec-analysisd: ERROR: Invalid decoder name: 'syscheck_integrity_changed_2nd'.
2019/09/13 20:29:18 ossec-analysisd: CRITICAL: (1220): Error loading the rules: 'ruleset/rules/0015-ossec_rules.xml'.
2019/09/13 20:29:22 ossec-analysisd: ERROR: Invalid decoder name: 'syscheck_integrity_changed_2nd'.
2019/09/13 20:29:22 ossec-analysisd: CRITICAL: (1220): Error loading the rules: 'ruleset/rules/0015-ossec_rules.xml'.
2019/09/13 20:29:32 ossec-analysisd: ERROR: Invalid decoder name: 'syscheck_integrity_changed_2nd'.
2019/09/13 20:29:32 ossec-analysisd: CRITICAL: (1220): Error loading the rules: 'ruleset/rules/0015-ossec_rules.xml'.
2019/09/13 20:29:42 ossec-analysisd: ERROR: Invalid decoder name: 'syscheck_integrity_changed_2nd'.
2019/09/13 20:29:42 ossec-analysisd: CRITICAL: (1220): Error loading the rules: 'ruleset/rules/0015-ossec_rules.xml'.

As you can see, the API fails to start (WazuhAPI 2019-09-13 20:24:52 : ERROR: Wazuh manager v3.9.5 found. Wazuh manager v3.6.x expected. Exiting.) and analysisd fails to start too (2019/09/13 20:27:19 ossec-analysisd: ERROR: Invalid decoder name: 'syscheck_integrity_changed_2nd'.).

What is the correct upgrade path for a Wazuh manager 3.6.1 deployed in Kubernetes?

Steps to reproduce:

  1. Deploy a Wazuh 3.6.1 manager STS in Kubernetes
  2. Add agents to that manager
  3. Update the StatefulSet image tag
  4. Apply the change
  5. Check logs

Thanks for the help! 😁

Release 3.9.3_7.1.1

Wazuh version: 3.9.3
Elastic version: 7.1.1

  • Adapt to new versions (3.9.3 - 7.1.1)
  • Update changelog
  • Tests
  • Tag: v3.9.3_7.1.1
  • Draft release
  • Update Documentation

Wazuh Release 3.11.1_7.5.1

Wazuh version: 3.11.1
Elastic version: 7.5.1

Tasks

  • Adapt to new versions (3.11.1 - 7.5.1)

  • Check templates and update them if needed

  • Tests

  • Update changelog

  • Tag: v3.11.1_7.5.1

  • Update documentation references

  • Draft release

Wazuh Kubernetes Release 3.10.2_7.3.2

Wazuh version: 3.10.2
Elastic version: 7.3.2

  • Adapt to new versions (3.10.2_7.3.2)
  • Update changelog
  • Tests
  • Tag: v3.10.2_7.3.2
  • Draft release
  • Update Documentation

Wazuh agent can't connect to managers in public

I'm trying to use Wazuh in Kubernetes on AWS. I modified wazuh-workers-svc.yaml to use an internet-facing load balancer.
But my agent can't connect to the Wazuh managers endpoint.

wazuh-workers-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: wazuh-workers
  namespace: wazuh
  labels:
    app: wazuh-manager
spec:
  selector:
    app: wazuh-manager
    node-type: worker
  ports:
    - name: agents-events
      port: 1514
      targetPort: 1514
      protocol: TCP
  type: LoadBalancer

If I use the real IP address when registering the agent, the logs I receive on the Wazuh workers are as below:

2019/10/22 01:46:33 ossec-remoted: WARNING: (1213): Message from '100.112.0.0' not allowed.
2019/10/22 01:46:38 ossec-remoted: WARNING: (1213): Message from '100.119.0.0' not allowed.
2019/10/22 01:46:44 ossec-remoted: WARNING: (1213): Message from '100.116.0.0' not allowed.
2019/10/22 01:46:51 ossec-remoted: WARNING: (1213): Message from '100.112.0.0' not allowed.
2019/10/22 01:47:02 ossec-remoted: WARNING: (1213): Message from '100.119.0.0' not allowed.
2019/10/22 01:47:09 ossec-remoted: WARNING: (1213): Message from '100.116.0.0' not allowed.

If I use "any" in ip address field when register agent:

2019/10/22 01:50:49 ossec-remoted: WARNING: (1408): Invalid ID 007 for the source ip: '100.116.0.0' (name 'unknown').
2019/10/22 01:50:56 ossec-remoted: WARNING: (1408): Invalid ID 007 for the source ip: '100.112.0.0' (name 'unknown').
2019/10/22 01:51:01 ossec-remoted: WARNING: (1408): Invalid ID 007 for the source ip: '100.119.0.0' (name 'unknown').

If I turn on proxy protocol in the ELB:

apiVersion: v1
kind: Service
metadata:
  name: wazuh-workers
  namespace: wazuh
  labels:
    app: wazuh-manager
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  selector:
    app: wazuh-manager
    node-type: worker
  ports:
    - name: agents-events
      port: 1514
      targetPort: 1514
      protocol: TCP
  type: LoadBalancer

Output log from Wazuh workers:

2019/10/22 01:53:01 ossec-remoted: WARNING: Too big message size from 100.119.0.0 [13].
2019/10/22 01:53:01 ossec-remoted: WARNING: Too big message size from 100.116.0.0 [13].
2019/10/22 01:53:01 ossec-remoted: WARNING: Too big message size from 100.116.0.0 [14].
2019/10/22 01:53:05 ossec-remoted: WARNING: Too big message size from 100.119.0.0 [13].
2019/10/22 01:53:09 ossec-remoted: WARNING: Too big message size from 100.119.0.0 [13].
2019/10/22 01:53:09 ossec-remoted: WARNING: Too big message size from 100.116.0.0 [13].
2019/10/22 01:53:09 ossec-remoted: WARNING: Too big message size from 100.116.0.0 [14].
2019/10/22 01:53:11 ossec-remoted: WARNING: Too big message size from 100.112.0.0 [13].
2019/10/22 01:53:11 ossec-remoted: WARNING: Too big message size from 100.116.0.0 [14].
2019/10/22 01:53:11 ossec-remoted: WARNING: Too big message size from 100.116.0.0 [13].

How can an agent connect to my Wazuh cluster over the internet? Any ideas are appreciated.

Release 3.8.0

Wazuh version: 3.8.0
Elastic version: 6.5.4

  • Adapt to new versions (3.8.0 - 6.5.4)
  • Update changelog
  • Tests
  • Tag: v3.8.0
  • Draft release

Mounting /etc/postfix causes that path to be overridden in the pod.

Hi team,

When mounting a volume at /etc/postfix, as in:

            - name: wazuh-manager-master
              mountPath: /etc/postfix

if the volume is empty, it will override the content saved at that path. To avoid that, we need to implement a mechanism in our Docker images (wazuh/wazuh-docker#240) that copies the content from another path; this way we avoid the override while still being able to load configurations.

To fix this, the following tasks must be done:

  • Add new path to Wazuh Manager statefulsets

  • Deploy with the new configuration and check if the content from the volume is mounted without overriding.

  • Test the postfix service after loading custom configuration

Best regards,

Jose

Wazuh managers cluster in Kubernetes behind an AWS Application Load Balancer?

I'm trying to deploy the Wazuh server in Kubernetes (I'm using your wazuh Kubernetes repo for reference).
From there, I want to expose the Wazuh server to make it available for my other EC2 instances that are not part of my Kubernetes cluster. To do so, I'm creating an ALB Ingress Service to point to my Wazuh NodePort service at 1515 and 55000.

My Service looks like this:

apiVersion: v1
kind: Service
metadata:
  name: wazuh
  namespace: wazuh
  labels:
    app: wazuh-manager
spec:
  type: NodePort
  selector:
    app: wazuh-manager
    node-type: master
  ports:
    - name: registration
      port: 1515
      targetPort: 1515
    - name: api
      port: 55000
      targetPort: 55000

ALB configuration looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: monitoring-ingress
  namespace: wazuh
data:
  annotations: |
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal

And Ingress looks something like this

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wazuh-manager
  namespace: wazuh
  labels:
    app: wazuh-manager
  annotations:
    kubernetes.io/ingress.class: merge
    merge.ingress.kubernetes.io/config: monitoring-ingress
spec:
  rules:
    - host: wazuhmanager.nonprod.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: wazuh
              servicePort: 55000

My AWS ALB works fine for other non-wazuh services.

But somehow, I keep getting a 502 Bad Gateway every time I try to hit my wazuh ingress endpoint.
I ran tcpdump on my k8s nodes and did not see any 502-related information.


If I skip the Ingress installation and set up my wazuh service as a LoadBalancer, it works without any problems.
The difference I see when using the LoadBalancer wazuh service is the listener configuration.

ALB does not support TCP listeners. Is this what is stopping my other EC2 (wazuh agent) instances, which are not part of my Kubernetes cluster, from communicating with the wazuh server?
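
For reference, an ALB only terminates HTTP/HTTPS, so raw agent traffic on 1514/1515 needs a layer-4 load balancer instead. A minimal sketch that requests an NLB through the standard in-tree annotation (not from this repo):

apiVersion: v1
kind: Service
metadata:
  name: wazuh
  namespace: wazuh
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb    # ask for a network (layer-4) load balancer
spec:
  type: LoadBalancer
  selector:
    app: wazuh-manager
    node-type: master
  ports:
    - name: registration
      port: 1515
      targetPort: 1515
      protocol: TCP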

Wazuh Kubernetes Release 3.11.3_7.5.2

Wazuh version: 3.11.3
Elastic version: 7.5.2

Tasks

  • Adapt to new versions (3.11.3 - 7.5.2)

  • Check templates and update them if needed

  • Tests

  • Update changelog

  • Tag: v3.11.3_7.5.2

  • Update documentation references

  • Draft release

Test: Multiple replicas for Logstash

Increase the number of replicas for Logstash and test if everything works as expected.

Document how to increase the number of replicas manually.
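
A minimal sketch of the change, assuming Logstash runs as a Deployment (the name is hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wazuh-logstash    # hypothetical name
spec:
  replicas: 3             # raised from 1; verify alerts still arrive exactly once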

Duplicate alerts.

Hi team,
Right now, duplicate alerts are sometimes found, at least when updating from Elastic Stack 6.5.4 to 6.6.1. This has been tested with our current configuration and with the one used in SaaS:

          volumeMounts:
            - name: config
              mountPath: /wazuh-config-mount/etc/ossec.conf
              subPath: ossec.conf
              readOnly: true
            - name: wazuh-manager-master
              mountPath: /var/ossec/data
            - name: wazuh-manager-master
              mountPath: /etc/postfix
            - name: wazuh-manager-master
              mountPath: /var/lib/filebeat   

Disable Rootcheck and Syscheck for Wazuh Master

Hello team,

By default, trial environments must come with these modules (syscheck and rootcheck) disabled.

The localfile entries must also be deleted.
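
For illustration, the relevant ossec.conf fragment, wrapped in the ConfigMap style the manifests use to mount manager configuration (the object name and fragment are hypothetical):

apiVersion: v1
kind: ConfigMap
metadata:
  name: wazuh-master-conf    # hypothetical; the repo mounts the manager config from a secret
  namespace: wazuh
data:
  master.conf: |
    <ossec_config>
      <syscheck>
        <disabled>yes</disabled>    <!-- syscheck off by default in trial environments -->
      </syscheck>
      <rootcheck>
        <disabled>yes</disabled>    <!-- rootcheck off as well -->
      </rootcheck>
    </ossec_config>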

Regards,

Alfonso Ruiz-Bravo

Deprecated options in wazuh configmaps

Running kubectl logs on the master wazuh pod shows deprecated options are in use:

2020/01/23 15:46:25 ossec-syscheckd: WARNING: The check_unixaudit option is deprecated in favor of the SCA module.
2020/01/23 15:46:25 wazuh-modulesd: WARNING: This vulnerability-detector declaration is deprecated. Use <vulnerability-detector> instead.
2020/01/23 15:46:25 wazuh-modulesd: WARNING: 'disabled' option at module 'vulnerability-detector' is deprecated. Use 'enabled' instead.
2020/01/23 15:46:25 wazuh-modulesd: WARNING: 'feed' option at module 'vulnerability-detector' is deprecated. Use 'provider' instead.

Configure Liveness and Readiness Probes for managers

Hello!

For Kubernetes to do its job properly, we need to define readiness and liveness probes. For Wazuh managers, I'm thinking of using commands.

An example:

readinessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  initialDelaySeconds: 5
  periodSeconds: 5

If the command succeeds (returns 0), the kubelet considers the container healthy. For a liveness probe, a non-zero return causes the kubelet to kill and restart the container; for a readiness probe like the one above, a failure only marks the pod as not ready.

How would you design those probes? I'm thinking of using /var/ossec/bin/ossec-control status, but you might have a better solution! 😄
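
Under that assumption, the probes could look like the sketch below; /var/ossec/bin/ossec-control status exits non-zero when a daemon is down, and the timings are placeholders:

livenessProbe:
  exec:
    command:
      - /var/ossec/bin/ossec-control
      - status                 # non-zero exit if any daemon is down -> container restarted
  initialDelaySeconds: 60      # give the manager time to start all daemons
  periodSeconds: 30
readinessProbe:
  tcpSocket:
    port: 1514                 # only mark Ready once remoted is accepting agent connections
  periodSeconds: 10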

Thanks for the help!

Release 3.9.4_7.2.0

Wazuh version: 3.9.4
Elastic version: 7.2.0

  • Adapt to new versions (3.9.4 - 7.2.0)
  • Update changelog
  • Tests
  • Tag: v3.9.4_7.2.0
  • Draft release
  • Update Documentation
