sclorg / mongodb-container

MongoDB container images based on Red Hat Software Collections and intended for OpenShift and general usage. Users can choose between Red Hat Enterprise Linux, Fedora, and CentOS based images.

Home Page: https://softwarecollections.org

License: Apache License 2.0

Languages: Shell 84.31%, Python 8.00%, Dockerfile 7.49%, Makefile 0.20%
Topics: mongodb, database, rhel, centos, dockerfile, openshift, docker, container, fedora

mongodb-container's People

Contributors

bparees, csrwng, danielhelfand, danmcp, eliskasl, ewolinetz, ficap, gabemontero, hhorak, jhadvig, jim-minter, kargakis, mfojtik, mnagy, mohammedzee1000, nekop, omron93, php-coder, phracek, pi-victor, pkubatrh, praiskup, pvalena, rhcarvalho, rpitonak, sdodson, soltysh, stevekuznetsov, trepik, wanghaoran1988


mongodb-container's Issues

Using this for production apps?

I've been thinking about what it would take to run production apps using this OC template, and wanted to get some advice on the following topics:

  1. Data corruption/loss: Assuming that we're OK with a standalone MongoDB (i.e. we'd rely on Kubernetes restarting a pod after a failure and can tolerate a few minutes of downtime), and have persistent volumes based on GlusterFS, do you see any issues with this template being used for production apps? My concern isn't high availability, but a pod restart or an unclean shutdown causing data corruption/loss. I tested this, and the default behaviour is for MongoDB to start in recovery mode when it detects an unclean shutdown. As good practice, we would of course also have a backup strategy that we could use to recover from issues.

  2. Upgrades: It seems that currently users can update the image referenced in the deploymentconfig and click deploy... e.g. when you add support for 3.4, would that be all we'd need to do to upgrade?

Any other areas I should investigate?

Set cacheSizeGB for 3.2 image

According to the docs:

If you run mongod in a container (e.g. lxc, cgroups, Docker, etc.) that does not have access to all of the RAM available in a system, you must set storage.wiredTiger.engineConfig.cacheSizeGB to a value less than the amount of RAM available in the container. The exact amount depends on the other processes running in the container.

(https://docs.mongodb.com/manual/faq/storage/#to-what-size-should-i-set-the-wiredtiger-internal-cache)

We currently do not set that configuration option.
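A minimal sketch of deriving a value at startup from the container's cgroup v1 memory limit (the path, the 1 TiB "no limit" cutoff, and the 50% heuristic are assumptions, not the image's actual logic):

# Give WiredTiger roughly half of the container memory limit, minimum 1 GB.
LIMIT_FILE=/sys/fs/cgroup/memory/memory.limit_in_bytes
if [ -r "$LIMIT_FILE" ] && [ "$(cat "$LIMIT_FILE")" -lt $((1024 * 1024 * 1024 * 1024)) ]; then
  cache_gb=$(( $(cat "$LIMIT_FILE") / 1024 / 1024 / 1024 / 2 ))
  [ "$cache_gb" -lt 1 ] && cache_gb=1
  exec mongod -f /etc/mongod.conf --wiredTigerCacheSizeGB "$cache_gb"
else
  # No usable limit detected: let mongod use its built-in default.
  exec mongod -f /etc/mongod.conf
fi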

Code sharing

We currently have the replset scripts in master.
#184 adds a new way to initialize the replset and manage pods. It does not reuse much of the current code - it works differently (which may be the right approach).
#178 tries to add support for running replication in a pure Docker environment. The best way to do that is to not use the post-deploy hook for initialization and to remove members in cleanup(). So this differs from both sets of scripts mentioned above! Should this PR bring its own set of scripts?

@php-coder @rhcarvalho Any idea how to solve this? I know both PRs are [WIP], but I would like to agree on some "vision" of how it could be done, so we don't waste time reworking PRs again and again... or need many follow-up PRs to bring back code sharing. Thanks.

(I hope discussing this in a separate issue will be easier and cleaner than in either of the mentioned PRs.)

Unable to access with mongodb:27017

Am I missing something about how DNS works in OpenShift? I used the replica set template and started three mongodb pods to create a replica set. The OpenShift documentation says I should be able to access the database using mongodb:27017. That is not working for me; I just get timeout errors. Do I need to do extra setup to make this work?

No way to specify the oplogSize

We're hard-coding the oplogSize to 64 MB which greatly limits setting up our replication example for anything past a "it works!" demo.

If an application takes 10 minutes to add 64 MB of data to MongoDB, it means you have a window of only 10 minutes to recover a secondary in case of failures before you have no way to recover it other than a full sync of the database.

I'd prefer to have MongoDB's default of 5% of free space in place, and be able to specify a different value if necessary.

oplogSize specifies a maximum size in megabytes for the replication operation log. mongod creates an oplog based on the maximum amount of space available, typically 5% of available disk space. Once the mongod has created the oplog for the first time, changing oplogSize will not affect the size of the oplog.

The oplog (operations log) is a special capped collection that keeps a rolling record of all operations that modify the data stored in your databases. MongoDB applies database operations on the primary and then records the operations on the primary’s oplog. The secondary members then copy and apply these operations in an asynchronous process. All replica set members contain a copy of the oplog, allowing them to maintain the current state of the database.

If an oplog fills up in 24 hours of operations, then secondaries can stop copying entries from the oplog for up to 24 hours without becoming stale.
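A hedged sketch of making the size configurable while defaulting to MongoDB's own behaviour (MONGODB_OPLOG_SIZE is a hypothetical variable name the image does not currently expose, and the hard-coded oplogSizeMB line is assumed dropped from the generated config):

# Pass --oplogSize only when explicitly requested; otherwise let mongod
# pick its default (~5% of available disk space).
if [ -n "${MONGODB_OPLOG_SIZE:-}" ]; then
  exec mongod -f /etc/mongod.conf --oplogSize "${MONGODB_OPLOG_SIZE}"
else
  exec mongod -f /etc/mongod.conf
fi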

Deploying with glusterfs storage volume

Has anyone had any experience using this image with a glusterfs storage volume?

From trying it out, it works well so far, but I was wondering if anyone has seen any issues or knows of potential problems that may arise.

Make the _USER and _PASSWORD optional when _ADMIN_PASSWORD is set

When _ADMIN_PASSWORD is set (now required in MongoDB), we should make _USER and _PASSWORD optional, since the user already gets the admin account created. Once the admin account exists, we can't force users to also create a regular user account, as they can use the admin account to create more users.
Also, to keep this backward compatible, _USER and _PASSWORD can stay supported for users who want to do the full bootstrapping.

This was already discussed with the database folks (@praiskup, @hhorak) and it makes sense to me as well.

CC @bparees @jhadvig @mnagy @soltysh
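A minimal sketch of the relaxed validation, assuming the check lives in a shell helper such as 10-check-env-vars.sh (the exact structure of that script is not shown here):

# Accept either an admin-only setup or a full user setup.
if [ -z "${MONGODB_ADMIN_PASSWORD:-}" ]; then
  echo "ERROR: MONGODB_ADMIN_PASSWORD must be set." >&2
  exit 1
fi
# MONGODB_USER/_PASSWORD/_DATABASE become optional, but must be set together.
if [ -n "${MONGODB_USER:-}${MONGODB_PASSWORD:-}${MONGODB_DATABASE:-}" ] &&
   { [ -z "${MONGODB_USER:-}" ] || [ -z "${MONGODB_PASSWORD:-}" ] || [ -z "${MONGODB_DATABASE:-}" ]; }; then
  echo "ERROR: MONGODB_USER, MONGODB_PASSWORD and MONGODB_DATABASE must be set together." >&2
  exit 1
fi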

Not possible to do port-forward to mongo32

There is no socat in the image and port-forwarding fails:

$ oc port-forward mongodb-1-zusc8 27017:27017
I0811 13:31:54.234934   35913 portforward.go:213] Forwarding from 127.0.0.1:27017 -> 27017
I0811 13:31:54.235037   35913 portforward.go:213] Forwarding from [::1]:27017 -> 27017
I0811 13:32:01.249564   35913 portforward.go:247] Handling connection for 27017
E0811 13:32:01.260688   35913 portforward.go:318] an error occurred forwarding 27017 -> 27017: error forwarding port 27017 to pod mongodb-1-zusc8_mongodb, uid : unable to do port forwarding: socat not found.
I0811 13:32:01.772538   35913 portforward.go:247] Handling connection for 27017
E0811 13:32:01.780011   35913 portforward.go:318] an error occurred forwarding 27017 -> 27017: error forwarding port 27017 to pod mongodb-1-zusc8_mongodb, uid : unable to do port forwarding: socat not found.
I0811 13:32:02.284900   35913 portforward.go:247] Handling connection for 27017
E0811 13:32:02.290660   35913 portforward.go:318] an error occurred forwarding 27017 -> 27017: error forwarding port 27017 to pod mongodb-1-zusc8_mongodb, uid : unable to do port forwarding: socat not found.
I0811 13:32:02.791519   35913 portforward.go:247] Handling connection for 27017
E0811 13:32:02.797904   35913 portforward.go:318] an error occurred forwarding 27017 -> 27017: error forwarding port 27017 to pod mongodb-1-zusc8_mongodb, uid : unable to do port forwarding: socat not found.

petset replica fails to initialize

The extended test reveals that sometimes a slave member fails to initialize, in this case member "1" (of 0,1,2) fails to join the replica set because it can't contact itself:

Dec 18 05:38:55.488: INFO: Running 'oc logs --config=/tmp/extended-test-mongodb-petset-replica-jqdp4-qrh5o-user.kubeconfig --namespace=extended-test-mongodb-petset-replica-jqdp4-qrh5o mongodb-replicaset-1 --timestamps'
pod logs for 2016-12-18T10:37:43.981767000Z 2016-12-18T10:37:43.980+0000 I CONTROL  [initandlisten] MongoDB starting : pid=16 port=27017 dbpath=/var/lib/mongodb/data 64-bit host=mongodb-replicaset-1
2016-12-18T10:37:43.982068000Z 2016-12-18T10:37:43.980+0000 I CONTROL  [initandlisten] db version v3.2.6
2016-12-18T10:37:43.982308000Z 2016-12-18T10:37:43.980+0000 I CONTROL  [initandlisten] git version: 05552b562c7a0b3143a729aaa0838e558dc49b25
2016-12-18T10:37:43.982538000Z 2016-12-18T10:37:43.980+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
2016-12-18T10:37:43.982805000Z 2016-12-18T10:37:43.980+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2016-12-18T10:37:43.983041000Z 2016-12-18T10:37:43.980+0000 I CONTROL  [initandlisten] modules: none
2016-12-18T10:37:43.983276000Z 2016-12-18T10:37:43.980+0000 I CONTROL  [initandlisten] build environment:
2016-12-18T10:37:43.983512000Z 2016-12-18T10:37:43.980+0000 I CONTROL  [initandlisten]     distarch: x86_64
2016-12-18T10:37:43.983760000Z 2016-12-18T10:37:43.980+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2016-12-18T10:37:43.983997000Z 2016-12-18T10:37:43.981+0000 I CONTROL  [initandlisten] options: { config: "/etc/mongod.conf", net: { http: { enabled: false }, port: 27017 }, replication: { oplogSizeMB: 64, replSet: "rs0" }, security: { keyFile: "/var/lib/mongodb/keyfile" }, storage: { dbPath: "/var/lib/mongodb/data" }, systemLog: { quiet: true } }
2016-12-18T10:37:43.990110000Z 2016-12-18T10:37:43.989+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=3G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2016-12-18T10:37:44.052474000Z 2016-12-18T10:37:44.051+0000 I CONTROL  [initandlisten] 
2016-12-18T10:37:44.052789000Z 2016-12-18T10:37:44.051+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-12-18T10:37:44.053041000Z 2016-12-18T10:37:44.051+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-12-18T10:37:44.053288000Z 2016-12-18T10:37:44.051+0000 I CONTROL  [initandlisten] 
2016-12-18T10:37:44.053518000Z 2016-12-18T10:37:44.051+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-12-18T10:37:44.053782000Z 2016-12-18T10:37:44.051+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-12-18T10:37:44.055259000Z 2016-12-18T10:37:44.054+0000 I CONTROL  [initandlisten] 
2016-12-18T10:37:44.062866000Z 2016-12-18T10:37:44.062+0000 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument: Did not find replica set lastVote document in local.replset.election
2016-12-18T10:37:44.067225000Z 2016-12-18T10:37:44.064+0000 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument: Did not find replica set configuration document in local.system.replset
2016-12-18T10:37:44.068022000Z 2016-12-18T10:37:44.067+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/var/lib/mongodb/data/diagnostic.data'
2016-12-18T10:37:44.073139000Z 2016-12-18T10:37:44.069+0000 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2016-12-18T10:37:44.077176000Z 2016-12-18T10:37:44.075+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
2016-12-18T10:37:44.116189000Z => [Sun Dec 18 10:37:44] Waiting for local MongoDB to accept connections ...
2016-12-18T10:37:44.201977000Z 2016-12-18T10:37:44.201+0000 I ACCESS   [conn1] note: no users configured in admin.system.users, allowing localhost access
2016-12-18T10:37:44.207463000Z => [Sun Dec 18 10:37:44] Adding mongodb-replicaset-1.mongodb-replicaset.extended-test-mongodb-petset-replica-jqdp4-qrh5o.svc.cluster.local to replica set ...
2016-12-18T10:37:44.289860000Z 2016-12-18T10:37:44.289+0000 I NETWORK  [thread1] Starting new replica set monitor for rs0/mongodb-replicaset-0.mongodb-replicaset.extended-test-mongodb-petset-replica-jqdp4-qrh5o.svc.cluster.local:27017
2016-12-18T10:37:44.290694000Z 2016-12-18T10:37:44.290+0000 I NETWORK  [ReplicaSetMonitorWatcher] starting
2016-12-18T10:37:44.828167000Z {
2016-12-18T10:37:44.828441000Z 	"ok" : 0,
2016-12-18T10:37:44.828688000Z 	"errmsg" : "Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: mongodb-replicaset-0.mongodb-replicaset.extended-test-mongodb-petset-replica-jqdp4-qrh5o.svc.cluster.local:27017; the following nodes did not respond affirmatively: mongodb-replicaset-1.mongodb-replicaset.extended-test-mongodb-petset-replica-jqdp4-qrh5o.svc.cluster.local:27017 failed with HostUnreachable",
2016-12-18T10:37:44.828954000Z 	"code" : 74
2016-12-18T10:37:44.829205000Z }
2016-12-18T10:37:44.887793000Z => [Sun Dec 18 10:37:44] ERROR: couldn't add host to replica set!

I'm guessing there is a race condition here; we need to review the petset initialization logic to see whether it's reasonable for a member to expect to contact itself at this point, and if so, why it can't.
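One possible mitigation, sketched here as an assumption rather than as the image's current logic, is to wait until the member can reach itself through its replica-set hostname before asking the primary to add it:

# Wait until this member is reachable via its own FQDN before rs.add()
# is attempted on the primary (MEMBER_HOST as used by the existing scripts).
for i in $(seq 1 60); do
  if mongo admin --host "${MEMBER_HOST}" --quiet --eval "quit(0)" &>/dev/null; then
    break
  fi
  echo "=> Waiting for ${MEMBER_HOST} to become reachable ..."
  sleep 2
done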

Replica Set initialization without election

In a conversation with @php-coder today he brought an excellent question and idea:

Why does the pod running the replica set initiation code start as a PRIMARY member, only to step down afterwards and trigger an election among the other members?

It would be much simpler if the post-deployment-hook pod simply picked one of the deployed pods, made it the primary, ran all the initialization steps, added the other pods to the replica set, and quit. The hook pod doesn't need to run mongod.

This idea could simplify matters greatly, reduce the startup time, and reduce the surface area for bugs.
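A rough sketch of what such a hook could do, assuming the pod FQDNs are available in a hypothetical MEMBERS variable (waiting for the PRIMARY state and creating the admin user are omitted):

# Run from the post-deployment hook pod; no local mongod is required.
primary=$(echo "${MEMBERS}" | cut -d ' ' -f 1)
mongo admin --host "${primary}" --eval "rs.initiate();"
for member in ${MEMBERS}; do
  [ "${member}" = "${primary}" ] && continue
  mongo admin --host "${primary}" -u admin -p "${MONGODB_ADMIN_PASSWORD}" \
        --eval "rs.add('${member}');"
done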

erroneous error message when specifying admin password and a database name

While testing out this image I am running into a problem where the error output from the container does not match the behavior I am seeing.

Steps to reproduce:

  1. Create a new container with the following command:
oc new-app centos/mongodb-32-centos7 \
-e MONGODB_ADMIN_PASSWORD=admin \
-e MONGODB_DATABASE=foo \
--name mongodb

This results in a pod that fails with the following error log:

You must specify the following environment variables:
  MONGODB_ADMIN_PASSWORD
Optionally you can provide settings for user with 'readWrite' role:
  MONGODB_USER
  MONGODB_PASSWORD
  MONGODB_DATABASE
MongoDB settings:
  MONGODB_QUIET (default: true)

I am positive that the environment variable is set, but I have been informed that the true error is that you cannot specify the admin password together with a database name.

This is the result of running the new-app command with the -o yaml option:

apiVersion: v1
items:
- apiVersion: v1
  kind: ImageStream
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    creationTimestamp: null
    labels:
      app: mongodb
    name: mongodb
  spec:
    tags:
    - annotations:
        openshift.io/imported-from: centos/mongodb-32-centos7
      from:
        kind: DockerImage
        name: centos/mongodb-32-centos7
      generation: null
      importPolicy: {}
      name: latest
      referencePolicy:
        type: ""
  status:
    dockerImageRepository: ""
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    creationTimestamp: null
    labels:
      app: mongodb
    name: mongodb
  spec:
    replicas: 1
    selector:
      app: mongodb
      deploymentconfig: mongodb
    strategy:
      resources: {}
    template:
      metadata:
        annotations:
          openshift.io/generated-by: OpenShiftNewApp
        creationTimestamp: null
        labels:
          app: mongodb
          deploymentconfig: mongodb
      spec:
        containers:
        - env:
          - name: MONGODB_ADMIN_PASSWORD
            value: admin
          - name: MONGODB_DATABASE
            value: ophicleide
          image: centos/mongodb-32-centos7
          name: mongodb
          ports:
          - containerPort: 27017
            protocol: TCP
          resources: {}
          volumeMounts:
          - mountPath: /var/lib/mongodb/data
            name: mongodb-volume-1
        volumes:
        - emptyDir: {}
          name: mongodb-volume-1
    test: false
    triggers:
    - type: ConfigChange
    - imageChangeParams:
        automatic: true
        containerNames:
        - mongodb
        from:
          kind: ImageStreamTag
          name: mongodb:latest
      type: ImageChange
  status:
    availableReplicas: 0
    latestVersion: 0
    observedGeneration: 0
    replicas: 0
    unavailableReplicas: 0
    updatedReplicas: 0
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    creationTimestamp: null
    labels:
      app: mongodb
    name: mongodb
  spec:
    ports:
    - name: 27017-tcp
      port: 27017
      protocol: TCP
      targetPort: 27017
    selector:
      app: mongodb
      deploymentconfig: mongodb
  status:
    loadBalancer: {}
kind: List
metadata: {}

Suggested solution

The error message should be made more explicit, for example letting the user know that they cannot specify both. Alternatively, allow the user to specify both.
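A minimal sketch of a more explicit check (placement inside the validation script is assumed; the wording is illustrative):

# If an admin password and a database name are given without a regular user,
# say so explicitly instead of printing the generic usage message.
if [ -n "${MONGODB_ADMIN_PASSWORD:-}" ] && [ -n "${MONGODB_DATABASE:-}" ] &&
   [ -z "${MONGODB_USER:-}" ]; then
  echo "ERROR: MONGODB_DATABASE was set without MONGODB_USER and MONGODB_PASSWORD." >&2
  echo "       Either drop MONGODB_DATABASE or provide the user credentials." >&2
  exit 1
fi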

Remove CentOS sclo testing repository

After RHSCL is released, rh-mongodb32 will be added to the repository provided by the centos-release-scl package, so 3.2/Dockerfile has to be changed to use this repository.

This issue came from PR #127.

Database name should be required

DATABASE_NAME needs to be required because users can't make assumptions about the 'default' database name. If we make them specify the database name, then we don't need to point them to the documentation for the default one.

Make test failure reporting better

@jhadvig when Jenkins fails, my poor brain can't parse the Jenkins console output. Can you please print out the test case name and mark the failure in a bold color, or at least with three exclamation marks? :-)

Default values for MongoDB configuration

We set default values of noprealloc=true, smallfiles=true, and quiet=true, which are the opposite of MongoDB's defaults and go against recommended practice for production.

I'd like to raise a discussion on why we set those values that way.

  • noprealloc

    Default: false
    Set noprealloc = true to disable the preallocation of data files. This will shorten the start up time in some cases, but can cause significant performance penalties during normal operations.

    Are we optimizing for short start up times (quick demos) or normal usage?

  • smallfiles

    Default: false
    Set to true to modify MongoDB to use a smaller default data file size. [...]
    Use the smallfiles setting if you have a large number of databases that each hold a small quantity of data. The smallfiles setting can lead mongod to create many files, which may affect performance for larger databases.

    Are we assuming somehow that users will be using MongoDB with a small dataset? Right now we support creating only a single database. I think we should not be assuming anything here, just let MongoDB use its default settings and let users consciously opt for something other than Mongo's default. Again, seems like an optimization for short-lived demos.

  • quiet

    Default: false
    Runs the mongod or mongos instance in a quiet mode that attempts to limit the amount of output. [...]
    For production systems this option is not recommended as it may make tracking problems during particular connections much more difficult.

    Why do we suppress logging by default here? Could there be any good reason?

@bparees @mfojtik @jhadvig @mnagy I'm searching for arguments to keep the current default values for those, otherwise I propose changing those to false to match Mongo's default. It is possibly too late to remove the env vars, right?
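A sketch of how the generated configuration could fall back to MongoDB's own defaults unless explicitly requested (MONGODB_NOPREALLOC and MONGODB_SMALLFILES are hypothetical names here; only MONGODB_QUIET appears in the usage output above):

# Only write the legacy options into mongod.conf when the user asks for them,
# so an unset variable means "use MongoDB's default (false)".
[ "${MONGODB_NOPREALLOC:-}" = "true" ] && echo "noprealloc = true" >> /etc/mongod.conf
[ "${MONGODB_SMALLFILES:-}" = "true" ] && echo "smallfiles = true" >> /etc/mongod.conf
[ "${MONGODB_QUIET:-}" = "true" ]      && echo "quiet = true"      >> /etc/mongod.conf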

Openshift Console Starts Printing same error message on endless loop

I have been trying to get the mongodb replica set example to work for a few days. I have three pods running and I start up the router and add the route to my headless mongodb server.

I add:

mongodb.test.router.default.svc.cluster.local to my /etc/hosts file.

That is a route I added for my mongodb headless service (no ClusterIP). When I attempt to connect to the service using UMONGO with the host mongodb.test.router.default.svc.cluster.local:80, my OpenShift console starts emitting this message and never stops after just one connection attempt; it keeps repeating over and over:

E0910 12:54:38.017351 19243 proxysocket.go:89] Couldn't find an endpoint for default/router:80-tcp: missing service entry
E0910 12:54:38.017371 19243 proxysocket.go:134] Failed to connect to balancer: missing service entry
E0910 12:54:38.892463 19243 proxysocket.go:89] Couldn't find an endpoint for default/router:80-tcp: missing service entry
E0910 12:54:38.892524 19243 proxysocket.go:134] Failed to connect to balancer: missing service entry
...
E0910 12:54:48.019446 19243 proxysocket.go:89] Couldn't find an endpoint for default/router:80-tcp: missing service entry
E0910 12:54:48.019493 19243 proxysocket.go:134] Failed to connect to balancer: missing service entry
(the same pair of messages repeats indefinitely)

Why doesn't connecting to MongoDB externally work this way? And why does the OpenShift console start writing that message and never stop?

PVC with replicas

I created a MongoDB cluster with a 3-member replica set. Has anyone tried mounting a PV to persist the data?

Please advise.

At what point does oc env MONGODB_SERVICE_HOST get populated?

I have a question. The code knows to pull the correct IP for mongodb using $MONGODB_SERVICE_HOST. When does this variable get created and populated? When "oc new-app mongodb" gets executed? If so, I searched across this project (it returned 0 results) and couldn't find the spot that defines it.

Test DATADIR permissions and fail cleanly

When running this image in Kubernetes 1.1.0, the USER clause and the ownership/permissions imposed on the volume /var/lib/mongodb/data by Kubernetes are incompatible. If we check the permissions before starting the daemon, we can provide a clear error message and avoid waiting/thrashing.

Process and File Ownership/Permissions

process: uid=184(mongodb) gid=998(mongodb) groups=998(mongodb)
directory: drwxr-x---. 2 root root 6 Oct 29 13:49 /var/lib/mongodb/data

This causes the initialization to fail when creating a lock file. The script proceeds to wait for the DB to become available and eventually times out. The actual error is buried in the logs.

Log Example

...
Thu Oct 29 13:57:59.288 [initandlisten] exception in initAndListen std::exception: boost::filesystem::status: Permission denied: "/var/lib/mongodb/data/mongod.lock", terminating
Thu Oct 29 13:57:59.288 dbexit: 
Thu Oct 29 13:57:59.288 [initandlisten] shutdown: going to close listening sockets...
...
Thu Oct 29 13:57:59.289 dbexit: really exiting now
 => Waiting for MongoDB service startup ...
=> Waiting for MongoDB service startup  ...
=> Waiting for MongoDB service startup  ...
... 
=> Waiting for MongoDB service startup  ...
=> Giving up: Failed to start MongoDB service!

NOTE: In OpenShift this is not observed because OpenShift imposes a high numbered UID and GID 0. The effect is that the mongodb process can write to the volume even though it has root:root ownership.

Upstream changes to Kubernetes, allowing security contexts to impose non-root groups on both processes and filesystems, will allow this container to run as expected. It would still be nice to check and report whether the DATADIR is writable.
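A minimal pre-flight check, sketched under the assumption that it would run at the top of the startup script before mongod is launched:

# Fail fast with a clear message if the data directory is not writable
# by the current (mongodb) user, instead of letting mongod time out later.
DATADIR=/var/lib/mongodb/data
if ! touch "${DATADIR}/.writable-check" 2>/dev/null; then
  echo "ERROR: ${DATADIR} is not writable by UID $(id -u), GID $(id -g)." >&2
  echo "       Fix the volume ownership/permissions and restart the container." >&2
  exit 1
fi
rm -f "${DATADIR}/.writable-check"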

How to make mongod accept only SSL connections?

I was able to use my own mongod.conf to specify SSL-only connections:

# mongodb.conf

port = 27017
pidfilepath = /var/lib/mongodb/mongodb.pid

# Set this value to designate a directory for the mongod instance to store its data.
# Default: /var/lib/mongodb/data
dbpath = /var/lib/mongodb/data

# Disable data file preallocation. Default: true
# only for mounted data directory from older MongoDB server
noprealloc = true

# Set MongoDB to use a smaller default data file size. Default: true
# only for mounted data directory from older MongoDB server
smallfiles = true

# Runs MongoDB in a quiet mode that attempts to limit the amount of output.
# Default: true
quiet = true

# Disable the HTTP interface (Defaults to localhost:28017).
nohttpinterface = true

#force connections to use TLS
sslMode = requireSSL

#specify the certificate
sslPEMKeyFile = /etc/mongodb.pem

But the problem is that the first mongo connection, which is used to create the mongo users, fails because it doesn't try to connect with SSL.

From run-mongod:

if [ "$(mongo admin --quiet --eval "$js_command")" == "1" ]; then
  echo "=> Admin user is already created. Resetting password ..."
  mongo_reset_admin
else
  mongo_create_admin
fi

Any ideas on how we should account for this WITHOUT modifying common.sh and run-mongod?
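One possible direction, shown here only as a hedged sketch (it still needs the scripts to honor an extra variable, so it may not fully satisfy the "no modification" constraint): pass the client-side SSL flags to every mongo invocation whenever requireSSL is configured.

# Hypothetical MONGO_SSL_ARGS honored by the mongo calls in common.sh:
MONGO_SSL_ARGS="--ssl --sslPEMKeyFile /etc/mongodb.pem --sslAllowInvalidCertificates"
if [ "$(mongo admin ${MONGO_SSL_ARGS} --quiet --eval "$js_command")" == "1" ]; then
  echo "=> Admin user is already created. Resetting password ..."
  mongo_reset_admin
else
  mongo_create_admin
fi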

petset-replicas - unclean shutdown

In petset-replicas, after removing all db pods, the new pods cannot start again; the expected result is that all db pods come back up.

# oc logs -f mongodb-0
=> sourcing 10-check-env-vars.sh ...
=> sourcing 30-set-config-file.sh ...
=> sourcing 35-setup-default-datadir.sh ...
=> sourcing 40-setup-keyfile.sh ...
=> [Tue Jul 25 09:29:37] Waiting for local MongoDB to accept connections ...
note: noprealloc may hurt performance in many applications
2017-07-25T09:29:38.317+0000 I CONTROL [initandlisten] MongoDB starting : pid=26 port=27017 dbpath=/var/lib/mongodb/data 64-bit host=mongodb-0
2017-07-25T09:29:38.317+0000 I CONTROL [initandlisten] db version v3.0.11
2017-07-25T09:29:38.317+0000 I CONTROL [initandlisten] git version: 48f8b49dc30cc2485c6c1f3db31b723258fcbf39
2017-07-25T09:29:38.317+0000 I CONTROL [initandlisten] build info: Linux c1bg.rdu2.centos.org 2.6.32-573.22.1.el6.x86_64 #1 SMP Wed Mar 23 03:35:39 UTC 2016 x86_64 BOOST_LIB_VERSION=1_53
2017-07-25T09:29:38.317+0000 I CONTROL [initandlisten] allocator: tcmalloc
2017-07-25T09:29:38.317+0000 I CONTROL [initandlisten] options: { config: "/etc/mongod.conf", net: { port: 27017 }, replication: { oplogSizeMB: 64, replSet: "rs0" }, security: { keyFile: "/var/lib/mongodb/keyfile" }, storage: { dbPath: "/var/lib/mongodb/data", mmapv1: { preallocDataFiles: false, smallFiles: true } }, systemLog: { quiet: true } }
2017-07-25T09:29:38.322+0000 W - [initandlisten] Detected unclean shutdown - /var/lib/mongodb/data/mongod.lock is not empty.
2017-07-25T09:29:38.332+0000 I STORAGE [initandlisten] **************
old lock file: /var/lib/mongodb/data/mongod.lock. probably means unclean shutdown,
but there are no journal files to recover.
this is likely human error or filesystem corruption.
please make sure that your journal directory is mounted.
found 2 dbs.
see: http://dochub.mongodb.org/core/repair for more information

2017-07-25T09:29:38.334+0000 I STORAGE [initandlisten] exception in initAndListen: 12596 old lock file, terminating
2017-07-25T09:29:38.334+0000 I CONTROL [initandlisten] dbexit: rc: 100

Originally reported by @dongboyan77 in #239 (comment)
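For reference, the manual recovery the log points at (http://dochub.mongodb.org/core/repair) looks roughly like the following, run inside the affected pod; whether the image should attempt anything like this automatically is part of what this issue needs to decide, and --repair can discard unrecoverable data:

# MMAPv1, stale lock file, no journal files present (manual, hedged sketch):
rm /var/lib/mongodb/data/mongod.lock
mongod -f /etc/mongod.conf --repair
# then restart the pod so the normal startup path runs again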

After a redeploy, MongoDB replica set is lost, pods become independent mongo instances

The replication example does not survive a redeploy.
The current approach, based on a run-once pod, likely has no future if we want to support redeploys.

Steps to reproduce:

  1. On a new project, create cluster from the template:

    $ oc new-app 2.4/examples/replica/mongodb-clustered.json
    services/mongodb
    pods/mongodb-service
    deploymentconfigs/mongodb
    Service "mongodb" created at None with port mappings 27017.
    Run 'oc status' to view your app.
    
  2. Wait until replica set is deployed and stand-alone pod shuts down:

    $ oc logs mongodb-service -f
    ...
    => Waiting for MongoDB service shutdown ...
    => MongoDB service has stopped
    => Successfully initialized replSet
    
  3. List pods and connect to one of them as the 'admin' user:

    $ oc get pods
    NAME              READY     STATUS       RESTARTS   AGE
    mongodb-1-0u9qv   1/1       Running      0          2m
    mongodb-1-g0lma   1/1       Running      0          2m
    mongodb-1-rm4bj   1/1       Running      0          2m
    mongodb-service   0/1       ExitCode:0   0          2m
    $ oc exec -it mongodb-1-0u9qv -- bash -c 'mongo $MONGODB_DATABASE -u admin -p $MONGODB_ADMIN_PASSWORD --authenticationDatabase=admin'
    MongoDB shell version: 2.4.9
    connecting to: userdb
    Welcome to the MongoDB shell.
    For interactive help, type "help".
    For more comprehensive documentation, see
        http://docs.mongodb.org/
    Questions? Try the support group
        http://groups.google.com/group/mongodb-user
    rs0:SECONDARY> 
    bye
    

    Ok, we have a replica set. Now, let's continue...

  4. Redeploy:

    $ oc deploy --latest mongodb                                                            
    Started deployment #2
    
  5. Again, list pods and try to connect to one of them as the 'admin' user:

    $ oc get pods
    NAME               READY     STATUS       RESTARTS   AGE
    mongodb-2-deploy   1/1       Running      0          12s
    mongodb-2-e9c66    0/1       Running      0          7s
    mongodb-2-govx7    1/1       Running      0          7s
    mongodb-2-imdo4    1/1       Running      0          7s
    mongodb-service    0/1       ExitCode:0   0          4m
    $ oc exec -it mongodb-2-e9c66 -- bash -c 'mongo $MONGODB_DATABASE -u admin -p $MONGODB_ADMIN_PASSWORD --authenticationDatabase=admin'
    MongoDB shell version: 2.4.9
    connecting to: userdb
    Thu Aug 13 13:59:11.983 Error: 18 { code: 18, ok: 0.0, errmsg: "auth fails" } at src/mongo/shell/db.js:228
    exception: login failed
    

    It failed because there is no data persistence, and with the redeploy all the data and configuration were gone.

  6. Connect without authentication:

    $ oc exec -it mongodb-2-e9c66 -- bash -c 'mongo'
    MongoDB shell version: 2.4.9
    connecting to: test
    Welcome to the MongoDB shell.
    For interactive help, type "help".
    For more comprehensive documentation, see
        http://docs.mongodb.org/
    Questions? Try the support group
        http://groups.google.com/group/mongodb-user
    > 
    bye
    

    As we can see, we now have an independent MongoDB instance, running without authentication and without any of the configuration originally applied by the mongodb-service pod.

How do I use my own conf file inside Openshift?

I see that the readme says to...

For example to use configuration file stored in /home/user directory use this option for docker run command: -v /home/user/mongod.conf:/etc/mongod.conf:Z.

But how do we edit the docker run command inside OpenShift?
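In OpenShift you don't edit a docker run command directly; an equivalent approach (sketched here as an assumption, with dc/mongodb standing in for your deployment config name) is to mount the file from a ConfigMap:

# Create a ConfigMap from the local file and mount it over /etc/mongod.conf.
oc create configmap mongodb-config --from-file=mongod.conf=/home/user/mongod.conf
oc set volume dc/mongodb --add --name=mongodb-config \
   --type=configmap --configmap-name=mongodb-config \
   --mount-path=/etc/mongod.conf --sub-path=mongod.conf

If your oc version lacks --sub-path, the same subPath mount can be added by editing the deployment config YAML directly.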

run-mongod ignores rest of arguments

Both MariaDB and PostgreSQL pass the rest of the arguments given to the run-* script by the docker run command on to the daemon itself:

exec ${MYSQL_PREFIX}/libexec/mysqld --defaults-file=$MYSQL_DEFAULTS_FILE "$@" 2>&1
...
exec postgres "$@"

This allows changing or adding options on the docker run command line or in a template without providing a whole new configuration. If there are no side effects, we should do the same in the mongodb container.

[1] https://github.com/sclorg/mariadb-container/blob/master/10.1/root/usr/bin/run-mysqld#L35
[2] https://github.com/sclorg/postgresql-container/blob/master/9.5/root/usr/bin/run-postgresql#L28
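A sketch of the equivalent change in run-mongod, assuming the script currently ends in a plain exec of mongod (the config path matches the one used elsewhere in this repository):

# Forward any extra "docker run" arguments straight to mongod.
exec mongod -f /etc/mongod.conf "$@" 2>&1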

connecting as admin via gui

Hi,

I'm attempting to connect to the db as the default admin user, using the admin password that was set during setup.

I am using port-forward to connect from localhost with Studio 3T, and have also tried Mongo Management Studio and MongoDB Compass, but I am unable to connect as the admin user.

I can connect as a standard user, but I need admin access so I can create collections etc.

I'd appreciate any guidance.

Thanks and happy new year to all :)

Mike

Cluster init not working against GA

Did a straight-up create - can you replicate? I want to show this at Summit.

Mon Jun 22 00:04:04.462 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
=> Waiting for MongoDB service startup  ...
Mon Jun 22 00:04:05.046 [conn1] note: no users configured in admin.system.users, allowing localhost access
=> MongoDB service has started
+ mongo_add
++ mongo_addr
+++ container_addr
++++ cat /var/lib/mongodb/.address
+++ echo -n 172.17.0.70
++ echo -n 172.17.0.70:
++ mongo_primary_member_addr
+++ endpoints
+++ service_name=mongodb
+++ dig mongodb A +search +short
++ local 'current_endpoints=172.17.0.69
172.17.0.70'
+++ echo -n 172.17.0.69 172.17.0.70
+++ cut -d ' ' -f 1
++ local mongo_node=172.17.0.69:
+++ mongo admin -u admin -p 7HsVOJ8NglVa --host 172.17.0.69: --quiet --eval 'print(rs.isMaster().primary);'
exception: connect failed
++ echo -n Mon Jun 22 00:04:05.160 Error: Missing port number in connection string '"172.17.0.69:/admin"' at src/mongo/shell/mongo.js:129
+ echo '=> Adding 172.17.0.70: to Mon Jun 22 00:04:05.160 Error: Missing port number in connection string "172.17.0.69:/admin" at src/mongo/shell/mongo.js:129 ...'
=> Adding 172.17.0.70: to Mon Jun 22 00:04:05.160 Error: Missing port number in connection string "172.17.0.69:/admin" at src/mongo/shell/mongo.js:129 ...
++ mongo_primary_member_addr
+++ endpoints
+++ service_name=mongodb
+++ dig mongodb A +search +short
++ local 'current_endpoints=172.17.0.69
172.17.0.70'
+++ echo -n 172.17.0.69 172.17.0.70
+++ cut -d ' ' -f 1
++ local mongo_node=172.17.0.69:
+++ mongo admin -u admin -p 7HsVOJ8NglVa --host 172.17.0.69: --quiet --eval 'print(rs.isMaster().primary);'
exception: connect failed
++ echo -n Mon Jun 22 00:04:05.234 Error: Missing port number in connection string '"172.17.0.69:/admin"' at src/mongo/shell/mongo.js:129
++ mongo_addr
+++ container_addr
++++ cat /var/lib/mongodb/.address
+++ echo -n 172.17.0.70
++ echo -n 172.17.0.70:
+ mongo admin -u admin -p 7HsVOJ8NglVa --host Mon Jun 22 00:04:05.234 Error: Missing port number in connection string '"172.17.0.69:/admin"' at src/mongo/shell/mongo.js:129 --eval 'rs.add('\''172.17.0.70:'\'');'
MongoDB shell version: 2.4.9
connecting to: Mon:27017/admin

Rename MONGODB_USERNAME env variable

Both the MySQL and PostgreSQL images have a config variable named _USER. MongoDB is thus an anomaly, and I think the variable should be renamed, or both formats _USER and _USERNAME should be supported.

First replica set member is not adding to existing replica set after failure

Because of the hard-coded member id (0), after a failure of the first replica set member, the pod cannot add itself to the existing replica set formed by the other replicas (1+). The first pod then initializes a new replica set instead.

# Initialize replica set only if we're the first member
if [ "${MEMBER_ID}" = '0' ]; then
  initiate "${MEMBER_HOST}"
else
  add_member "${MEMBER_HOST}"
fi

The error can be simulated by deleting the first pod and its corresponding PVC.
It would be better to determine beforehand whether a replica set already exists.
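A sketch of such a check, under the assumption that the other members' hostnames are known (OTHER_MEMBER_HOSTS is hypothetical; initiate, add_member, MEMBER_ID, and MEMBER_HOST are from the snippet above):

# Only initiate a brand-new replica set if no peer reports an existing one;
# otherwise join it, even when MEMBER_ID is 0.
replset_exists() {
  for host in ${OTHER_MEMBER_HOSTS}; do
    if [ "$(mongo admin --host "${host}" -u admin -p "${MONGODB_ADMIN_PASSWORD}" \
            --quiet --eval 'print(rs.status().ok)')" = "1" ]; then
      return 0
    fi
  done
  return 1
}

if [ "${MEMBER_ID}" = '0' ] && ! replset_exists; then
  initiate "${MEMBER_HOST}"
else
  add_member "${MEMBER_HOST}"
fi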

No Persistence?

Is there a reason no example is given for how to make the data persistent for the clustered replica set version? If I modify this to include a persistent mount location will that not work?

mongo 3.2 lacks most of the tools

To do initial data loads into MongoDB there is a need for mongoimport. As the tools now live in the separate rh-mongodb32-mongo-tools package, which is not installed in the image, there is a need to either:

  • Have the tools in the mongo image, or
  • Have an additional mongo-tools image that can be used to import data into the MongoDB database. This image could be used as a sidecar, as a post-deployment hook, or as a job or run-once pod to load data into the database (an example invocation is sketched below).
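For reference, the kind of invocation such a tools image or sidecar would run (host, database, collection, and file names are placeholders):

# Load a JSON dump into the database through the mongodb service,
# authenticating against the admin database.
mongoimport --host mongodb --port 27017 \
  --username admin --password "${MONGODB_ADMIN_PASSWORD}" \
  --authenticationDatabase admin \
  --db sampledb --collection records --file /tmp/records.json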

Replication: after scale up, new pod cannot connect to replica set

Steps to reproduce

  1. Create new app:

    $ oc new-app https://raw.githubusercontent.com/openshift/mongodb/master/2.4/examples/replica/mongodb-clustered.json
  2. Stream logs until the deployment is finished. Replica set initialization logs are streamed as well:

    $ oc logs -f mongodb-1-deploy
  3. Connect to the primary member via any pod, check that replica set configuration is ok:

    $ oc get pods
    NAME              READY     STATUS    RESTARTS   AGE
    mongodb-1-4thdo   1/1       Running   0          11m
    mongodb-1-7l8j1   1/1       Running   0          11m
    mongodb-1-qcoob   1/1       Running   1          12m
    $ oc exec mongodb-1-4thdo -it -- bash -c 'mongo admin -u admin -p $MONGODB_ADMIN_PASSWORD --host $MONGODB_REPLICA_NAME/localhost'
    rs0:PRIMARY> rs.isMaster()
    {
        "setName" : "rs0",
        "ismaster" : true,
        "secondary" : false,
        "hosts" : [
                "172.17.0.4:27017",
                "172.17.0.5:27017",
                "172.17.0.3:27017"
        ],
        "primary" : "172.17.0.4:27017",
        "me" : "172.17.0.4:27017",
        "maxBsonObjectSize" : 16777216,
        "maxMessageSizeBytes" : 48000000,
        "localTime" : ISODate("2016-02-10T21:16:02.683Z"),
        "ok" : 1
    }
  4. Scale up, look at the logs of the new pod:

    $ oc scale dc/mongodb --replicas=4
    deploymentconfig "mongodb" scaled
    $ oc get pods
    NAME              READY     STATUS    RESTARTS   AGE
    mongodb-1-4thdo   1/1       Running   0          2m
    mongodb-1-7l8j1   1/1       Running   0          2m
    mongodb-1-qcoob   1/1       Running   1          2m
    mongodb-1-sj5hq   1/1       Running   0          18s
    $ oc logs -f mongodb-1-sj5hq
    ...
    Wed Feb 10 21:15:16.566 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)

    The last message repeats indefinitely.

Expected result

The new pod should be added to the replica set, and querying the primary should show the 4 members.

Exception in NodeJS app when new version deployed

Hi 👋

Our OpenShift app updates our mongodb container to the latest version automatically when a new version is released: https://access.redhat.com/containers/?tab=overview#/registry.access.redhat.com/rhscl/mongodb-36-rhel7

After auto-updating and re-running the mongodb container, the container itself starts fine. However, our NodeJS app (which uses the official Node.js driver and mongoose) gets an exception: {"name":"MongoError","message":"Topology was destroyed"} and can't recover even though the container is running correctly.

So I have to manually stop the NodeJS app and re-run it. Then it connects fine and stays connected until the next auto update.

Has anyone encountered this problem as well and found a solution?

EDIT: I already tried configuring connection settings in mongoose: { keepAlive: 1, reconnectTries: Number.MAX_VALUE }
