
gluster / gluster-containers


This project forked from humblec/gluster-containers


Dockerfiles (CentOS, Fedora, Red Hat) for GlusterFS

Home Page: https://github.com/gluster/gluster-containers/pkgs/container/gluster-containers


gluster-containers' Introduction

GlusterFS Containers

This repo contains Dockerfiles (CentOS and Fedora) for GlusterFS containers, namely server, client, and S3.

The support matrix of GlusterFS and container versions:

Container                                    | GlusterFS Version          | Container Tag  | Container name
GlusterFS Server Container (CentOS Stream 9) | v10.2                      | centos, latest | gluster-containers
GlusterFS Server Container (Fedora 36)       | v10.2                      | fedora         | gluster-containers
GlusterFS Client Container                   | v3.13                      | latest         | glusterfs-client
Gluster S3 Server Container                  | v4.0, v3.13, v3.12, v3.10  | latest         | gluster-s3

Gluster Server Docker container:

Although setting up a GlusterFS environment is a fairly simple and straightforward procedure, the Gluster community maintains container images of GlusterFS releases, with both Fedora and CentOS as base images, for the convenience of users.

The following are the steps to run the GlusterFS docker images that we maintain:

To pull the image from the GitHub Container Registry (ghcr.io), run one of the following commands:

Note: Using the latest tag will default to the CentOS-based image.

Fedora:

$ docker pull ghcr.io/gluster/gluster-containers:fedora

CentOS:

$ docker pull ghcr.io/gluster/gluster-containers:centos

This will pull the GlusterFS image from the registry. Alternatively, you can build the image from the Dockerfile directly: clone the gluster-containers source repository and build the image using the Dockerfiles in it. To get the source, use git:

$ git clone git@github.com:gluster/gluster-containers.git

The repository contains Dockerfiles for building GlusterFS images on both CentOS and Fedora. Once you have cloned the repository, build the image with one of the following commands:

For Fedora,

$ docker build -t gluster-fedora Fedora

For CentOS,

$ docker build -t gluster-centos CentOS

This command builds the Docker image from the Dockerfile and tags it gluster-fedora or gluster-centos respectively; the '-t' option assigns a name to the image being built.
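To verify that the build produced a local image, you can list images (a quick check, not part of the original instructions):

$ docker images | grep gluster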

Once the image is available via either of the above methods, we can run a container with the gluster daemon (glusterd) running.

Before this, ensure the following directories are created on the host where docker is running:

  • /etc/glusterfs
  • /var/lib/glusterd
  • /var/log/glusterfs

Ensure all the above directories are empty to avoid any conflicts.
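For example, the directories can be created in one step on the host (a sketch; adjust ownership and paths to your environment):

$ mkdir -p /etc/glusterfs /var/lib/glusterd /var/log/glusterfs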

Also, an NTP service such as chronyd or ntpd needs to be running on the host so that all gluster containers started are time-synchronized.
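A minimal sketch of enabling time sync, assuming a systemd-based host with chrony installed:

$ systemctl enable --now chronyd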

Now run the following command:

$ docker run -v /etc/glusterfs:/etc/glusterfs:z -v /var/lib/glusterd:/var/lib/glusterd:z -v /var/log/glusterfs:/var/log/glusterfs:z -v /sys/fs/cgroup:/sys/fs/cgroup:rw -d --privileged=true --net=host -v /dev/:/dev --cgroupns=host gluster-centos

(The final argument is either gluster-fedora or gluster-centos, as per the configuration so far.)

Where:

        --net=host        ( Optional: This option brings maximum network throughput for your storage container)

        --privileged=true ( Required if you are exposing the host's `/dev/` tree to the container in order to create bricks from within the container)

Bind-mounting the following directories enables:

        `/var/lib/glusterd`     : To make gluster metadata persistent in the host.
        `/var/log/glusterfs`    : To make gluster logs persistent in the host.
        `/etc/glusterfs`        : To make gluster configuration persistent in the host.

systemd is installed and running in the containers we maintain.

Once issued, this command boots up the Fedora or CentOS system, and you have a running container with glusterd inside it.

Verify the container is running successfully:
$ docker ps -a

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d273cc739c9d gluster/gluster-fedora:latest "/usr/sbin/init" 3 minutes ago Up 3 minutes 49157/tcp, 49161/tcp, 49158/tcp, 38466/tcp, 8080/tcp, 2049/tcp, 24007/tcp, 49152/tcp, 49162/tcp, 49156/tcp, 6010/tcp, 111/tcp, 49154/tcp, 443/tcp, 49160/tcp, 38468/tcp, 49159/tcp, 245/tcp, 49153/tcp, 6012/tcp, 38469/tcp, 6011/tcp, 38465/tcp, 0.0.0.0:49153->22/tcp angry_morse
Note the container ID and inspect the container to get its IP address. Say the container ID is d273cc739c9d; to get the IP, inspect the container:
$ docker inspect d273cc739c9d

"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"LinkLocalIPv6Address": "fe80::42:acff:fe11:2",
"LinkLocalIPv6PrefixLen": 64,
The IP address is “172.17.0.2”
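Alternatively, the address can be extracted directly with a Go template (a convenience shortcut, not required by the steps above):

$ docker inspect -f '{{ .NetworkSettings.IPAddress }}' d273cc739c9d
172.17.0.2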

Get inside the container:
$ docker exec -ti d273cc739c9d bash

-bash-4.3# ps aux |grep glusterd
root 34 0.0 0.0 448092 15800 ? Ssl 06:01 0:00 /usr/sbin/glusterd -p /var/run/glusterd.pid
root 159 0.0 0.0 112992 2224 pts/0 S+ 06:22 0:00 grep --color=auto glusterd

-bash-4.3# gluster peer status
Number of Peers: 0

-bash-4.3# gluster --version

That’s it!

Additional reference: https://goo.gl/3031Mm

Capturing coredumps

The /var/log/core directory already exists in the container, and coredumps can be configured to be generated under it.

Users can then copy the coredump(s) generated under /var/log/core/ from the container to the host.

For example:

ssh <hostmachine>
sysctl -w kernel.core_pattern=/var/log/core/core_%e.%p
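A coredump can then be copied out with docker cp (hypothetical container ID and core file name, shown only as a sketch):

$ docker cp d273cc739c9d:/var/log/core/core_glusterfsd.1234 /tmp/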

Gluster Object Docker container:

To pull gluster-s3:

$ docker pull gluster/gluster-s3

To run gluster-s3 container:

On the host machine, mount one or more gluster volumes under the directory /mnt/gluster-object, with each mountpoint name being the same as that of the corresponding volume.

For example, if you have two gluster volumes named test and test2, they should be mounted at /mnt/gluster-object/test and /mnt/gluster-object/test2 respectively. This directory on the host machine containing all the individual glusterfs mounts is then bind-mounted inside the container. This avoids having to bind mount individual gluster volumes.
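A sketch of those example mounts, assuming a reachable gluster server named gluster-node (replace the server and volume names with your own):

$ mkdir -p /mnt/gluster-object/test /mnt/gluster-object/test2
$ mount -t glusterfs gluster-node:/test /mnt/gluster-object/test
$ mount -t glusterfs gluster-node:/test2 /mnt/gluster-object/test2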

The same needs to be updated in /etc/sysconfig/swift-volumes.

For example (in swift-volumes): S3_ACCOUNT='tv1'

where tv1 is the volume name.

$ docker run -d --privileged  -v /sys/fs/cgroup/:/sys/fs/cgroup/:ro -p 8080:8080 -v /mnt/gluster-object:/mnt/gluster-object -e S3_ACCOUNT="tv1" -e S3_USER="admin" -e S3_PASSWORD="redhat" gluster/gluster-s3

Now we can get/put objects into the gluster volume using the gluster-s3 Docker container. Refer to link [1] for testing.

[1] https://github.com/gluster/gluster-swift/blob/master/doc/markdown/quick_start_guide.md#using_swift

gluster-containers' People

Contributors

black-dragon74, humblec, jarrpa, kbsingh, kshithijiyer, madhu-1, mcquinne, mreithub, mscherer, nandajavarma, nixpanic, phlogistonjohn, powo, pronix, raghavendra-talur, saravanastoragenetwork, vredara, xavinux


gluster-containers' Issues

Glusterfs - 6.0.0 Dockerfile and Image

Hi Team,

We are facing this issue (https://bugzilla.redhat.com/show_bug.cgi?id=1596787) with glusterfs 4.*, and the latest glusterfs images available are 4.* images only. The release notes of glusterfs 6.0.0 state that this bug has been resolved (https://docs.gluster.org/en/latest/release-notes/6.0/). Kindly release glusterfs 6.0.0 images at the earliest so that we can start using them.

Thanks in Advance!

Glusterfs snapshot failed

Hi,
I want to use a glusterfs snapshot to back up my volume. I ran this command in the glusterfs pod:
gluster snapshot create snap-testvol testvol
After about a minute of loading it says:

Error : Request timed out
Snapshot command failed

I used the latest glusterfs Docker image; the gluster version is:
glusterfs 4.1.7
Please help me figure out how to fix this problem and use the gluster snapshot feature.
Thanks, Peter

Could not enable Encryption feature on Glusterfs Container6.1 version

Dear All,
I have created the glusterfs container for version 6.1 based on the instructions in this project. I have tested the glusterfs functionality; it is all working fine except encryption.

When I enable encryption, I get the following error:
"/usr/lib64/glusterfs/6.1/xlator/encryption/crypt.so" -> No such file or directory
It seems the encryption dynamic library is not available at that path inside the glusterfs container, but it is available in the 4.1 version.
Kindly help us enable encryption on the latest glusterfs image.

Setup Automated builds on Dockerhub

Hey,

I recently saw that we have added a workaround for the lvm2 bug that caused issues on #128 .

I see that the 4u1 as well as latest have not been rebuilt to take into account these fixes.

First, @humblec, could you rebuild these so that I can test and confirm that #128 is indeed fixed by the downgrade, please?

Secondly, I think we should set up a GitHub integration on Dockerhub that would trigger an image build every time something is pushed to a branch, as explained here: https://docs.docker.com/docker-hub/builds/

We could have automated builds for every branch you support. For example, every time there is a commit on the gluster-4.1 branch, this would trigger a build for the tag 4u1_centos7; every time there is a commit on master, this would trigger a build for the tag latest, etc.

This would remove the need to open an issue each time we need you to rebuild the image.

What do you think ?

Make gluster-block less of a pet

With our push toward GCS, we are working to remove host dependencies from the gluster containers. Today, that includes directly accessed devices and using the host's IP.

While we have a workable plan using kube services and raw block PVs to remove these dependencies for the gluster bricks and GD2, gluster-block has presented more of a challenge. Our dependency on tcmu means we have a host IP dependency for the iSCSI target.

Yesterday, I had a discussion w/ @cuppett, and he mentioned an approach that may help us get rid of our node-locked gluster-block pods...

Today, we have the iSCSI target running on a chosen few nodes, with clients connecting to them via multipath. The target nodes can then use gfapi to access the bricks. Stephen's suggestion was to move the iSCSI target to the client nodes themselves as a part of the CSI driver.

In this model, the CSI "node" portion that already runs as a DaemonSet would have the tcmu code, and it would play the role of the iSCSI target for that individual host only. Multipath would no longer be used since the client and server are on the same machine.

Open issues:

  • We would still have the fencing problems when a node goes offline.
  • I'm not sure how to upgrade the CSI driver w/o draining the node (same problem w/ file though)
  • This requires separating GD2 and gluster-block into separate pods, potentially on separate nodes

cc: @ShyamsundarR @amarts @humblec @gluster/gluster-block @atinmu

Add Version Tag

Can we please get some version tags in Dockerhub that are not latest? The next version tag after latest (7.1) is 4.1.9. Thanks.

Do not call systemctl from script in service file

PR #100 added a nice script to check for the status of /var/lib/glusterfs, however @raghavendra-talur and I were discussing this change and realized that we should not be calling systemctl from within the script. I have seen doing this lead to systemctl blocking. It would be cleaner to call kill on the pid of the glusterd service instead. I believe this is how glusterd is terminated by systemd currently so the impact on glusterd should be the same.

NFS share with glusterfs

Hi,
I want to use an NFS server on glusterfs, but it doesn't work for me. How should I configure the pods and glusterfs?
I installed the nfs-ganesha package in the glusterfs pods and set the nfs.disable option to off. I stopped and started the volume.
But the volume status shows the NFS server on localhost and the nodes as offline.
The glusterd.log file says the nfs/server.so xlator is not installed. I stopped rpcbind and the NFS server on the host node, and enabled and started rpcbind, nfs-server, and nfs-ganesha in the CentOS container.
I used the latest glusterfs 4.1 image.
Please help me.
Thanks, Peter

gluster-setup.sh does not mknodes on failed mount

After the update in commit 2e2e284, gluster-setup.sh no longer attempts to mount failed bricks.

According to the man page RETURN CODE 1 is incorrect permissions or invocation (bad parameters). The previous commit attempted to only test for that error, which if continuing in the script logic allows the glusterd pod to attempt to identify the failed bricks, and try various tricks to mount them.

I recommend testing the script for RETURN CODE 1, 2, and 4 - which should cover most fatal errors, and still allow the script to attempt to mount failed bricks using the logic below that test.
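A minimal sketch of that check (illustrative shell only, with a hypothetical $brick_mount_path variable; not the actual gluster-setup.sh code):

mount "$brick_mount_path"
ret=$?
case "$ret" in
  0) : ;;                                                  # mounted cleanly, nothing to do
  1|2|4) echo "fatal mount error ($ret)"; exit "$ret" ;;   # bad invocation, system error, internal mount bug
  *) echo "mount returned $ret, continuing to brick-recovery logic" ;;
esac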

[glusterfsd.c:2233:parse_cmdline] 0-glusterfs: ERROR: parsing the volfile failed [No such file or directory]

I deployed the image gluster/gluster-centos:latest, but the glusterfs version in it needs to be updated before using it.

systemd[1]: Starting GlusterFS, a clustered file-system server...
GlusterFS[72]: [glusterfsd.c:2233:parse_cmdline] 0-glusterfs: ERROR: parsing the volfile failed [No such file or directory]
glusterd[72]: USAGE: /usr/sbin/glusterd [options] [mountpoint]
systemd[1]: glusterd.service: control process exited, code=exited status=255
systemd[1]: Failed to start GlusterFS, a clustered file-system server.
systemd[1]: Unit glusterd.service entered failed state.
systemd[1]: glusterd.service failed.
systemd[1]: Starting GlusterFS, a clustered file-system server...
glusterd[80]: USAGE: /usr/sbin/glusterd [options] [mountpoint]
systemd[1]: glusterd.service: control process exited, code=exited status=255

"Fix that issue inside container"

yum update -y glusterfs glusterfs-server glusterfs-fuse

systemctl restart glusterd

Could you please update to the latest version of glusterfs inside the gluster-centos image?

No IP address

Hi, I used this repo (gluster/gluster-centos) and ran a container with this command:
$ docker run -v /etc/glusterfs:/etc/glusterfs:z -v /var/lib/glusterd:/var/lib/glusterd:z -v /var/log/glusterfs:/var/log/glusterfs:z -v /sys/fs/cgroup:/sys/fs/cgroup:ro -d --privileged=true --net=host -v /dev/:/dev gluster/gluster-centos .
The container works, but without an IP address. Result of inspecting the container:
"NetworkSettings": { "Bridge": "", "SandboxID": "757b221cb864333d4aa37558fb45225ceb486424160751ee1dd77e4668fd93c7", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": {}, "SandboxKey": "/var/run/docker/netns/default", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "host": { "IPAMConfig": null, "Links": null, "Aliases": null, "NetworkID": "9d5450549e853284547239b6d69fe4943029247ed82132ad2078b0197972aaa5", "EndpointID": "04dd5fbfd79ff9da57adb7277a464de617942abc3e8786c1d9ccb95b5f464b33", "Gateway": "", "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "", "DriverOpts": null }
Other containers work fine without any IP address problems.
Could I ask for help?

Why are Fedora and CentOS so vastly out of sync? Do they need to be synced more?

The centos and fedora containers are pretty much out of sync.

How the Dockerfile does things

Several things are done differently between the containers, like how files are added into the container, e.g.:

ADD gluster-setup.sh /usr/sbin/gluster-setup.sh

for centos and

COPY gluster-setup.sh [...] /
...
RUN ...&& \
...
mv /gluster-setup.sh /usr/sbin/gluster-setup.sh && \
...

for fedora, which seems less idiomatic and more complicated. So a style/technique sync may be good here.

Different components

Several mechanisms/components are only added to one of the containers, not both:

Fedora has

  • gluster-setup
  • fake-disk
  • gluster-brickmultiplex

Centos has:

  • gluster-setup
  • fake-disk
  • gluster-block-setup
  • update-params
  • status-probe
  • check-diskspace

Should we reconcile those services as much as possible?

What do people think?

Readiness probe failed: /usr/local/bin/status-probe.sh failed check: systemctl -q is-active glusterd.service

I've followed the setup guide using the gk-deploy script

./gk-deploy.sh --admin-key xxxx --user-key xxx -v -g

This went fine. However, the deployed containers fail with

Readiness probe failed: /usr/local/bin/status-probe.sh failed check: systemctl -q is-active glusterd.service 

The glusterd.service on the node is running:

$ systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/lib/systemd/system/glusterd.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2019-10-19 18:48:25 CEST; 1 day 17h ago
     Docs: man:glusterd(8)
  Process: 939 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_
 Main PID: 955 (glusterd)
    Tasks: 10 (limit: 4915)
   Memory: 30.5M
   CGroup: /system.slice/glusterd.service
           └─955 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
....

gluster cluster from docker-compose file

We need to deploy a Gluster cluster through a docker-compose.yml file.
Our docker-compose file:

version: '3.6'
services:
    gfs:
        image: gluster/gluster-centos
        networks:
          - gfsnet
#        privileged: true
#        volumes:
#            - /home/glusterfs/data:/var/lib/glusterd:rw
#            - /home/glusterfs/volume:/data:rw
#            - /home/glusterfs/logs:/var/log/glusterfs:rw
#            - /home/glusterfs/conf:/etc/glusterfs:rw
#            - /dev:/dev:rw
#            - /etc/hosts:/etc/hosts:rw
        deploy:
            mode: global
            restart_policy:
                condition: on-failure

networks:
  gfsnet:
    attachable: true

I see the following:

ivan@ivan1 /media/docker_compose/gluster $ sudo docker exec -it c3e69a4fcb7f bash
[root@c3e69a4fcb7f /]# ps aux |grep glusterd
root        24  0.0  0.0  12488  2180 pts/0    S+   19:09   0:00 grep --color=auto glusterd
[root@c3e69a4fcb7f /]# gluster peer status
Connection failed. Please check if gluster daemon is operational.
peer status: failed
[root@c3e69a4fcb7f /]# gluster --version
glusterfs 3.13.2
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

How should we use a docker-compose file to deploy a Gluster cluster?

Ran into bug #120 on origin 3.11 - how to upgrade ?

Hi everybody

Ran into the #120 bug mentioned in the forum. I used the ansible installer with 3.11; what is the best way to upgrade only glusterfs with the committed bugfix? I don't want to upgrade the whole cluster to latest.

Thx Martin.

Glusterd not running

Hello,

I've tried to set up gluster in Docker on a Synology.
I've done these steps:
ash-4.3# docker run -v /volume1/gluster/etc_glusterfs/:/etc/glusterfs:z -v /volume1/gluster/lib_glusterd:/var/lib/glusterd:z -v /volume1/gluster/log_glusterfs:/var/log/glusterfs:z -v /sys/fs/cgroup:/sys/fs/cgroup:ro -d --privileged=true --net=host -v /volume1/gluster/brick1/:/brick1 --name gluster gluster/gluster-centos
a43aaaba3146de16d68ac03a27b10c5cb81af82edd79ee2009670436a5e74a76
ash-4.3# docker exec -ti gluster bash
[root@rd815P /]# ps aux|grep glusterd
root 28 0.0 0.0 12516 968 ? S+ 07:33 0:00 grep --color=auto glusterd
[root@rd815P /]# systemctl status glusterd
Failed to get D-Bus connection: No such file or directory
[root@rd815P /]#
I am not sure what the problem is.

Maybe somebody can point me in the right direction.
Thanks.

how can I use gluster-containers as Gluster PersistentVolume with gluster-server deployed in kubernetes?

Hello, I deployed the gluster server within Kubernetes and the pods work well. I can mount the gluster volume in the gluster-client pod with the Docker image gluster/glusterfs-client; here is the script within gluster-client:

  mkdir -p /mnt/glusterfs/
  # gluster1 is one of the gluster servers:
  mount -t glusterfs gluster1:/k8s-volume /mnt/glusterfs/
  touch /mnt/glusterfs/demo.txt

This works well.
But how can I add a Gluster PersistentVolume using this gluster server? What is the Gluster Endpoint here?

gluster-blockd and gluster-block-setup not starting after reboot of node: wrong place for 'systemctl enable...' in update-params.sh ?

Hi,

we have a strange issue on OpenShift 3.11 with GlusterFS (latest version installed yesterday):

OpenShift installs GlusterFS, glusterfs-storage pods start successfully.

At some point during the installation procedure glusterfs nodes are rebooted. Afterwards the glusterfs-storage pods don't start.

oc logs glusterfs-xxx shows:

Enabling gluster-block service and updating env. variables
Failed to get D-Bus connection: Connection refused
Failed to get D-Bus connection: Connection refused

If I exec into the container and call

systemctl status glusterd

It's enabled and running.

If I do

systemctl status gluster-blockd
or 
systemctl status gluster-block-setup

The service is reported as disabled and doesn't run.

If I call

systemctl enable gluster-blockd
systemctl start gluster-blockd

and leave the pod after a few seconds the glusterfs-storage pod is reported as running by the Openshift oc command.

I can get the same result by deleting the glusterfs-storage pod: since we have a daemonset in OpenShift, the pods restart and are running immediately. The logs don't show any D-Bus errors in this case!

It looks like some weird race condition where the GlusterFS container behaves differently on a reboot as on a pod restart command.

Best regards,

Josef

[question] Does gluster/gluster-object support s3v4?

I deployed gluster/gluster-object on k8s.
The boto test is OK, but aws-cli and minio/mc do not work.

AWS_ACCESS_KEY_ID=adminuser:testuser AWS_SECRET_ACCESS_KEY=welcome1 aws --endpoint-url http://172.17.2.101:32005 s3api head-object --bucket boto-demo --key mykey

An error occurred (403) when calling the HeadObject operation: Forbidden

The minio/mc team said it should be a bug in the object storage.
issue #2691

Run fstrim weekly in the container.

I had an issue where PVs had free space when mounted via GlusterFS (df /var/lib/heketi/mounts/vg_XXXX/brick_YYY/brick), but 'lvs' showed the volumes as being full.

The tool fstrim came to the rescue, running "fstrim -av" freed up the space.
"The TRIM command is an operation that allows the operating system to propagate information down to the SSD about which blocks of data are no longer in use.... ". https://www.digitalocean.com/community/tutorials/how-to-configure-periodic-trim-for-ssd-storage-on-linux-servers

For a definite solution, this needs to be automated inside the pod, there is already a service available:

systemctl enable fstrim.timer
Created symlink from /etc/systemd/system/multi-user.target.wants/fstrim.timer to /usr/lib/systemd/system/fstrim.timer.
systemctl start fstrim.timer

Could you consider adding this to the bottom of your Dockerfile please, for example?

Any plans to upgrade container images

Hi
I really like glusterfs but looking at the container images, I am not sure what is going on.
Looking at the Docker Hub and Quay registries, I am asking myself why the last push happened around one year ago, even though the latest commit in this repo is only one month old.
Even worse, Quay reports a lot of vulnerabilities: https://quay.io/repository/gluster/gluster-centos?tab=tags

Are there any future plans for updates, or will you stop providing container images altogether?

I would like to see image using recent operating systems and glusterfs 6 or even 7.

Thank you and kind regards

GlusterFS container capture the entries which should be inserted into worker node syslog

The GlusterFS container captures entries that should be inserted into the worker node syslog; the issue only happens on the GlusterFS storage node.

Steps for reproduction:

On a worker node which is also used as a GlusterFS storage node, run this command to insert a record into the host syslog:

logger -is -p auth.err authpriv-test

Check the worker node syslog or run the "journalctl -r" command: nothing was inserted into the worker node syslog.

Now log in to the glusterfs container running on this worker node and run "journalctl -r": the record was inserted into the container syslog.

The implication of this behavior is that the "secure", "maillog", "spooler", and "boot.log" logs are captured by the GlusterFS container syslog and missing from the node syslog, so users cannot monitor the node's syslog correctly.

GlusterFS: Building on s390x and creating RPMs

Hi @humblec, we are trying to build glusterfs RPMs on s390x for version 4.1.5. We built it on s390x and created the RPMs, following this link: https://docs.gluster.org/en/v3/Install-Guide/compiling-rpms/
The following rpms got generated on s390x platform on ClefOS:7

bash-4.2# ls
Makefile                                     glusterfs-client-xlators-4.1.5-0.0.el7.s390x.rpm   glusterfs-libs-4.1.5-0.0.el7.s390x.rpm
Makefile.am                                  glusterfs-debuginfo-4.1.5-0.0.el7.s390x.rpm        glusterfs-regression-tests-4.1.5-0.0.el7.s390x.rpm
Makefile.in                                  glusterfs-devel-4.1.5-0.0.el7.s390x.rpm            glusterfs-resource-agents-4.1.5-0.0.el7.noarch.rpm
glusterfs-4.1.5-0.0.el7.s390x.rpm            glusterfs-events-4.1.5-0.0.el7.s390x.rpm           glusterfs-server-4.1.5-0.0.el7.s390x.rpm
glusterfs-4.1.5-0.0.el7.src.rpm              glusterfs-extra-xlators-4.1.5-0.0.el7.s390x.rpm    make_glusterrpms
glusterfs-api-4.1.5-0.0.el7.s390x.rpm        glusterfs-fuse-4.1.5-0.0.el7.s390x.rpm             python2-gluster-4.1.5-0.0.el7.s390x.rpm
glusterfs-api-devel-4.1.5-0.0.el7.s390x.rpm  glusterfs-geo-replication-4.1.5-0.0.el7.s390x.rpm
glusterfs-cli-4.1.5-0.0.el7.s390x.rpm        glusterfs-gnfs-4.1.5-0.0.el7.s390x.rpm

But I didn't see a glusterfs-rdma rpm. Is it missing on s390x, or is it part of another rpm?

Could you please confirm? Thank you.

Arm architecture GlusterFS Docker Image

I'm trying to setup GlusterFS on an Arm architecture (Raspberry Pi) based Kubernetes Cluster but there does not seem to be an image which supports this architecture. Are there any plans to provide a Raspbian-based GlusterFS official docker image? I intend to create one myself and host it somewhere but I think in order for people to have more trust in the contents of the container and feel confident using it on their production clusters, it would be good to have an official image from gluster which runs on arm devices.

support centos 8 based image

Do we have any plan to support a CentOS 8 based image for glusterfs 4.1, since the CentOS 7 based image has many vulnerability issues, including in curl, openssl, and python?

sshd server

What is the reasoning behind having sshd installed on this docker image?

It would be better if this image simply shipped glusterfs and nothing else.

Is the docker image glusterfs 8.2?

It seems like the glusterfs Docker image only contains version 3.0 for latest.

Is this project no longer maintained? The current gluster version is already 8.2.

Thanks.

how to turn off systemd-logind and systemd-journald processes in this image

We had a security issue raised by our security team: the systemd-logind and systemd-journald processes inside the container are interfering at the node level. How do we turn these off? Is it enough to add the below to the Dockerfile? Please advise.

RUN systemctl disable systemd-journald
RUN systemctl disable systemd-logind

GlusterFS latest version

Hello,

Is this project abandoned? The latest image was published 10 months ago with gluster v4, but the Dockerfile in this repo pulls in the latest one.

Could someone publish a new version of the image on the hub?

GCS containers should use block PVs

The Gluster containers should use a volumeMode: block based PV for its persistent storage.

By moving from using a raw device to one provided via a PVC/PV, we begin to break the host dependencies of the gluster container.

Acceptance criteria:

  • The container should no longer have any host path mappings
  • The container should accept an arbitrarily sized raw block PV on which it will create bricks

This is going to depend on getting a GD2-based container first: #89

Location for "cattle" containers

In keeping with our plan to move toward "cattle" containers, I'd like to arrive at consensus on where development for them should live.
I'll suggest two options:

  1. In a top-level directory like "Fedora" and "CentOS" containers currently do.
  2. In its own branch.

My favored approach is (1) because it doesn't overload branching nor require a cutover to the "new" way of using the containers at any particular time.

Also welcome are suggestions for the new directory... WDYT: "GCS"?

centos server container: Failed to get D-Bus connection: No such file or directory

After docker pull gluster/gluster-centos and firing it up:

$ docker exec -it gluster /bin/sh
sh-4.2# systemctl status glusterd
Failed to get D-Bus connection: No such file or directory
sh-4.2# systemctl status gluster 
Failed to get D-Bus connection: No such file or directory
sh-4.2# ps -AF | grep gluster
root        19    11  0  2275   832   2 19:37 pts/2    00:00:00 grep gluster
sh-4.2# service glusterd restart
Redirecting to /bin/systemctl restart glusterd.service
Failed to get D-Bus connection: No such file or directory
sh-4.2# 

GCS container should use GD2

The GCS container should use GD2, not glusterd.

This will significantly decrease the state that needs to be maintained persistently on the node, making it much easier to remove node-based dependencies of the gluster container.

Acceptance criteria:

  • GD2 & gluster 4.x are in the image
  • GD2's node identity and etcd endpoint are provided via file or env var (ConfigMap in kube)
  • No directories from the host are mounted into the container for the purpose of preserving state or retrieving configuration information

Gluster pods should be able to load certain lvm/device-mapper modules

There is no guarantee that the dm_thin_pool, dm_mirror and dm_snapshot kernel modules are loaded. Creating volumes with Heketi uses thin-provisioning for the bricks, and this will fail if the device-mapper/lvm modules are not available.

It seems most reasonable to let the Gluster pod automatically load the required modules. The automatic loading of modules is already standard practice for non-containerized environments. In order to make this happen in the most cases, it will be needed to bind-mount /lib/modules from the host into the pod. Without this bind-mount, the pod will not be able to run modprobe with the right .ko files.
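For reference, loading the modules from the pod would look roughly like this, assuming /lib/modules is bind-mounted from the host as described:

modprobe -a dm_thin_pool dm_mirror dm_snapshot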

See-also: gluster/gluster-kubernetes#440

GCS container should use TLS by default

This covers the following requirements:

  • GCS containers should enable gluster's TLS capabilities for both management and data by default.
  • It should also secure all communication w/ etcd
  • Authentication should be mandatory for accessing the GD2 API

In a kube environment, user pods are untrusted and have the ability to send arbitrary requests to gluster. We must make sure all network-exposed endpoints are properly secured to prevent untrusted users from doing bad things™

For kube deployments, I expect an init container to generate the node-specific keys from a CA key pair for the gluster TLS part, meaning this container just needs to pick up the TLS keys and use them.
I don't know offhand what is required for etcd or the GD2 API.

Accumulating rpc_clnt_submit error in glustershd.log

Hi,

Not sure this is the correct place to raise an issue...so please correct me if it isn't.

I'm having an issue with the gluster/gluster-centos images used in the default openshift-ansible deployments.

It used to only happen on the gluster3u13_centos7, gluster4u0_centos7 and latest tags and I've already reported this issue on openshift-ansible but now it is manifesting in the new gluster3u12_centos7 tag as well ever since the latest push.

Description

On an OKD v3.9 and v3.10 cluster with CNS Gluster, deleting PVCs from the file based glusterfs storageclass causes accumulation of rpc_clnt_submit errors in /var/log/glusterfs/glustershd.log. Over time all available CPU on the storage nodes are consumed by these errors (presumably attempting to establish accumulating rpc connections).

Observed Results

Error log from a storage node:

[centos@node-0 ~]$ sudo tail -n 5 /var/log/glusterfs/glustershd.log 
[2018-09-29 10:57:40.210212] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-vol_898c14de3760573e31912959d9281898-client-0: error returned while attempting to connect to host:(null), port:0
[2018-09-29 10:57:40.213736] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-vol_898c14de3760573e31912959d9281898-client-1: error returned while attempting to connect to host:(null), port:0
[2018-09-29 10:57:40.214018] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-vol_898c14de3760573e31912959d9281898-client-1: error returned while attempting to connect to host:(null), port:0
[2018-09-29 10:57:40.216782] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-vol_898c14de3760573e31912959d9281898-client-2: error returned while attempting to connect to host:(null), port:0
[2018-09-29 10:57:40.217041] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-vol_898c14de3760573e31912959d9281898-client-2: error returned while attempting to connect to host:(null), port:0

Any ideas?

Production Ready?

I can't seem to find a lot of gluster official docs or guides about gluster in docker.

Is this suitable to be used in a production cluster (for example - for sharing a volume between all nodes in a docker cluster)? Or is it more for testing / dev?

Gluster 3.11 image build error.

Gluster 3.11 container images are failing to build

Build failed: The command '/bin/sh -c yum --setopt=tsflags=nodocs -y update; yum install -y centos-release-gluster311; yum clean all; (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); rm -f /lib/systemd/system/multi-user.target.wants/*;rm -f /etc/systemd/system/*.wants/*;rm -f /lib/systemd/system/local-fs.target.wants/*; rm -f /lib/systemd/system/sockets.target.wants/*udev*; rm -f /lib/systemd/system/sockets.target.wants/*initctl*; rm -f /lib/systemd/system/basic.target.wants/*;rm -f /lib/systemd/system/anaconda.target.wants/*;yum --setopt=tsflags=nodocs -y install nfs-utils attr iputils iproute openssh-server openssh-clients ntp rsync tar cronie sudo xfsprogs glusterfs glusterfs-server glusterfs-rdma glusterfs-geo-replication;yum clean all; sed -i '/Defaults requiretty/c\#Defaults requiretty' /etc/sudoers; sed -i '/Port 22/c\Port 2222' /etc/ssh/sshd_config; sed -i 's/Requires\=rpcbind\.service//g' /usr/lib/systemd/system/glusterd.service; sed -i 's/rpcbind\.service/gluster-setup\.service/g' /usr/lib/systemd/system/glusterd.service; sed -i 's/ENV{DM_UDEV_DISABLE_OTHER_RULES_FLAG}=="1", ENV{SYSTEMD_READY}="0"/ENV{DM_UDEV_DISABLE_OTHER_RULES_FLAG}=="1", GOTO="systemd_end"/g' /usr/lib/udev/rules.d/99-systemd.rules; mkdir -p /etc/glusterfs_bkp /var/lib/glusterd_bkp /var/log/glusterfs_bkp;cp -r /etc/glusterfs/* /etc/glusterfs_bkp;cp -r /var/lib/glusterd/* /var/lib/glusterd_bkp;cp -r /var/log/glusterfs/* /var/log/glusterfs_bkp; sed -i.save -e "s#udev_sync = 1#udev_sync = 0#" -e "s#udev_rules = 1#udev_rules = 0#" -e "s#use_lvmetad = 1#use_lvmetad = 0#" /etc/lvm/lvm.conf;' returned a non-zero code: 2
