
cassandra's People

Contributors

alexanderjardim, atoulme, cscetbon, docker-library-bot, emerkle826, j0wi, laurentgoderre, ltagliamonte, sage-service-user, tianon, wgerlach, yosifkit


cassandra's Issues

Metrics Reporting

I've been through these posts a few times now, and I'm stumped as to how I can get metrics reporting working with the Docker image.

cassandra-master:
  image: cassandra:3.0
  container_name: cassandra-master
  ulimits:
    nproc: 2048
  environment:
    - JAVA_OPTS="-Xmx1g"
    - JVM_OPTS="$JVM_OPTS -Dcassandra.metricsReporterConfigFile=influxreporting.yaml"
    - CASSANDRA_CLUSTER_NAME="My Cluster"
    - CASSANDRA_START_RPC=true
  volumes:
    - /home/cassandra/config/cassandra/metrics-graphite-2.2.0.jar:/usr/share/cassandra/lib/metrics-graphite-2.2.0.jar
    - /home/cassandra/config/cassandra/influx-reporting.yaml:/etc/cassandra/influxreporting.yaml
  privileged: true
  links:
    - influx-db:influxdb
  ports:
    - "7000:7000"
    - "7001:7001"
    - "7199:7199"
    - "9042:9042"
    - "9160:9160"

I simply receive an output along the lines of:

cassandra-master | Error: Could not find or load main class "

Has anyone else got this working, and if so, how?
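As a hedged aside, one thing worth checking: with Compose's list-style environment entries, surrounding quotes become part of the variable's value, which can mangle the JVM command line and produce exactly this kind of "Could not find or load main class" error. A sketch of the same section without embedded quotes (file names as in the original; the $JVM_OPTS self-reference is also dropped, since Compose would try to interpolate it from the host environment, not the container's):

```yaml
  environment:
    - JAVA_OPTS=-Xmx1g
    - JVM_OPTS=-Dcassandra.metricsReporterConfigFile=influxreporting.yaml
    - CASSANDRA_CLUSTER_NAME=My Cluster
    - CASSANDRA_START_RPC=true
```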

Tables are empty after restart

We are using the Docker image for our 2-node cluster. It is started by Mesos/Marathon with the following command:

docker run --rm -m 800m --name cassandra-`hostname` -v /data/cassandra/smile:/var/lib/cassandra/data -e CASSANDRA_CLUSTER_NAME=smile-database -e CASSANDRA_BROADCAST_ADDRESS=`hostname` -p 7000:7000 -p 9160:9160 -p 9042:9042 -p 7199:7199 -e CASSANDRA_SEEDS=<hostname1>,<hostname2> cassandra:2.2.1

After that we are able to create a keyspace and insert data. Everything looks fine: data is synchronized between the nodes and available to the connecting clients.

After a restart, which also means spawning a new container, the keyspace still exists but all the tables are empty.

After reading the documentation at http://docs.datastax.com/en/cassandra/2.0/cassandra/dml/dml_write_path_c.html, it looks as if the data is not being flushed to disk. The content of my keyspace directory (find .) looks like:

./lounge_offer_by_id-dd3708508cec11e5af9b091830ac5256
./lounge_offer_by_id-dd3708508cec11e5af9b091830ac5256/la-1-big-Statistics.db
./lounge_offer_by_id-dd3708508cec11e5af9b091830ac5256/la-1-big-TOC.txt
./lounge_offer_by_id-dd3708508cec11e5af9b091830ac5256/la-1-big-Data.db
./lounge_offer_by_id-dd3708508cec11e5af9b091830ac5256/backups
./lounge_offer_by_id-dd3708508cec11e5af9b091830ac5256/la-1-big-CompressionInfo.db
./lounge_offer_by_id-dd3708508cec11e5af9b091830ac5256/la-1-big-Index.db
./lounge_offer_by_id-dd3708508cec11e5af9b091830ac5256/la-1-big-Summary.db
./lounge_offer_by_id-dd3708508cec11e5af9b091830ac5256/la-1-big-Filter.db
./lounge_offer_by_id-dd3708508cec11e5af9b091830ac5256/la-1-big-Digest.adler32
./lounge_capacity-e3eb94b07bf111e590f8091830ac5256
./lounge_capacity-e3eb94b07bf111e590f8091830ac5256/backups
./lounge_offer_opened-e30b46d07bf111e590f8091830ac5256
./lounge_offer_opened-e30b46d07bf111e590f8091830ac5256/backups
./lounge_offer_pending-e2d2f8c07bf111e590f8091830ac5256
./lounge_offer_pending-e2d2f8c07bf111e590f8091830ac5256/backups
./lounge_details-e35548207bf111e590f8091830ac5256
./lounge_details-e35548207bf111e590f8091830ac5256/.lounge_type_index
./lounge_details-e35548207bf111e590f8091830ac5256/backups
./lounge_details-e35548207bf111e590f8091830ac5256/backups/.lounge_type_index
./lounge_details-de0be4808cec11e5af9b091830ac5256
./lounge_details-de0be4808cec11e5af9b091830ac5256/.lounge_type_index
./lounge_details-de0be4808cec11e5af9b091830ac5256/backups
./lounge_details-de0be4808cec11e5af9b091830ac5256/backups/.lounge_type_index
./lounge_capacity-de5633f08cec11e5af9b091830ac5256
./lounge_capacity-de5633f08cec11e5af9b091830ac5256/backups
./opened_lounge_offer_by_lounge-dde856f08cec11e5af9b091830ac5256
./opened_lounge_offer_by_lounge-dde856f08cec11e5af9b091830ac5256/backups
./lounge_offer_sent-e33404807bf111e590f8091830ac5256
./lounge_offer_sent-e33404807bf111e590f8091830ac5256/backups
./offer_by_lounge_type_and_airport_code-ddab4df08cec11e5af9b091830ac5256
./offer_by_lounge_type_and_airport_code-ddab4df08cec11e5af9b091830ac5256/backups
./lounge_offer_by_focus_id-dd5873008cec11e5af9b091830ac5256
./lounge_offer_by_focus_id-dd5873008cec11e5af9b091830ac5256/backups
./opened_lounge_offer-ddc6c5308cec11e5af9b091830ac5256
./opened_lounge_offer-ddc6c5308cec11e5af9b091830ac5256/backups
./pending_lounge_offer-dd8947008cec11e5af9b091830ac5256
./pending_lounge_offer-dd8947008cec11e5af9b091830ac5256/backups

There are snapshot directories that I excluded from this output.

Any hints are highly appreciated :)
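One common explanation for this pattern, offered here only as a sketch: the run command mounts only /var/lib/cassandra/data, while the commitlog lives in /var/lib/cassandra/commitlog and is therefore thrown away with each container. Mounting the whole data root would persist both; the command below keeps all other flags from the original (placeholder hostnames left as-is):

```
docker run --rm -m 800m --name cassandra-`hostname` \
  -v /data/cassandra/smile:/var/lib/cassandra \
  -e CASSANDRA_CLUSTER_NAME=smile-database \
  -e CASSANDRA_BROADCAST_ADDRESS=`hostname` \
  -p 7000:7000 -p 9160:9160 -p 9042:9042 -p 7199:7199 \
  -e CASSANDRA_SEEDS=<hostname1>,<hostname2> cassandra:2.2.1
```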

Add jemalloc to the Dockerfile

When I start cassandra I get this warning:
jemalloc shared library could not be preloaded to speed up memory allocations

Should an apt-get install -y libjemalloc1 perhaps be added to the Dockerfile?
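A minimal sketch of what that addition might look like (package name as suggested above; a Debian-based base image is assumed):

```dockerfile
# Install jemalloc so Cassandra can preload it for faster memory allocations
RUN apt-get update \
 && apt-get install -y --no-install-recommends libjemalloc1 \
 && rm -rf /var/lib/apt/lists/*
```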

Using external data store in windows fails

When using the option to mount a volume for data storage, Cassandra is not able to write its logs and stops. The instruction used is

docker run --name myCass2 --rm -v //c/cassandra_storage:/var/lib/cassandra -it cassandra:2.2

and the error:

ERROR 23:05:11 Exiting due to error while processing commit log during initialization. org.apache.cassandra.io.FSWriteError: java.io.IOException: Invalid argument

I tried with Cassandra 2.2 and 3.3 and the result is the same. Using storage inside the container works perfectly.

Problems building a cluster with 2 VMs

Trying to build a dev cluster at Digital Ocean.
These are the steps I followed:

  1. Created 2 boxes with Docker 1.6.2, using the default Ubuntu 14.04 image from Digital Ocean
  2. docker pull cassandra on both boxes
  3. first node ip is 10.13.13.13 and second node is 10.13.13.14
  4. started cassandra on first node docker run --name cassandra1 -d -e CASSANDRA_BROADCAST_ADDRESS=10.13.13.13 -p 7000:7000 -v /data:/var/lib/cassandra/data cassandra
  5. telnet 10.13.13.13 7000 from second node. Got Escape character is '^]'. as usual
  6. Started second cassandra node: docker run --name cassandra2 -d -e CASSANDRA_SEEDS=10.13.13.13 -e CASSANDRA_BOADCAST_ADDRESS=10.13.13.13 -v /data:/var/lib/cassandra/data cassandra
  7. Nodetool output on first node:
root@cassandra1:~# docker exec -ti cassandra1 /usr/bin/nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address         Load       Tokens  Owns (effective)  Host ID                               Rack
UN  10.13.13.13  51.29 KB   256     100.0%            0daf1848-0d59-46a9-8067-a8f89ad52b24  rack1

And second node:

Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address         Load       Tokens  Owns (effective)  Host ID                               Rack
UN  10.13.13.13  65.66 KB   256     100.0%            be11a163-cc73-4c52-9c93-c8dadb0daf54  rack1

It seems that the second node is not becoming a member of the cluster. What am I doing wrong?
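Two details in step 6 stand out: the variable name is misspelled (CASSANDRA_BOADCAST_ADDRESS) and it points at the first node's IP rather than the second's. A corrected sketch, assuming the second node should broadcast its own address from step 3 and publish port 7000 like the first node does:

```
docker run --name cassandra2 -d \
  -e CASSANDRA_SEEDS=10.13.13.13 \
  -e CASSANDRA_BROADCAST_ADDRESS=10.13.13.14 \
  -p 7000:7000 \
  -v /data:/var/lib/cassandra/data cassandra
```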

Docker Cloud initialization option

It would be nice to have an out-of-the-box option for initializing a C* service in Docker Cloud.

There's an option to sequentially start the services, which would allow the foo-1 service to know it's the first node in the foo cluster... and each successive instance can then know it is foo-N and should talk to foo-1.

Multi DC Support through environment variables

We should add the following environment variables to this image

CASSANDRA_DC

This variable sets the datacenter name of this node. Defaults to DC1
(it should change cassandra-rackdc.properties)

CASSANDRA_RACK

This variable sets the rack name of this node. Defaults to RAC1
(it should change cassandra-rackdc.properties)

CASSANDRA_ENDPOINT_SNITCH

This variable sets the snitch implementation this node will use. Defaults to SimpleSnitch
(it should change cassandra.yaml)

This is the minimum setup we need to get a multi-datacenter-compatible cluster with this Cassandra image.
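As a rough sketch of how the entrypoint might apply the first two variables (file path, property names, and the sed rewrites are assumptions based on the stock cassandra-rackdc.properties format):

```shell
# Hedged sketch: rewrite dc= and rack= in cassandra-rackdc.properties
# from the proposed environment variables. Defaults are those named above.
apply_rackdc() {
    file="$1"   # path to cassandra-rackdc.properties
    sed -ri "s/^(dc=).*/\1${CASSANDRA_DC:-DC1}/" "$file"
    sed -ri "s/^(rack=).*/\1${CASSANDRA_RACK:-RAC1}/" "$file"
}

# Example: apply_rackdc /etc/cassandra/cassandra-rackdc.properties
```

CASSANDRA_ENDPOINT_SNITCH would need an equivalent substitution against the endpoint_snitch line in cassandra.yaml.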

Cassandra 2.1.11 doesn't start in docker 1.9.1

Pulling from the official image, Cassandra doesn't seem to start, and the container hangs. Starting a single-node cluster:

docker run --name dev-cassandra -d cassandra:2.1

Wait for a while to be sure Cassandra has started, then start a new linked container to run cqlsh:

docker run -it --link dev-cassandra:cassandra --rm cassandra:2.1 cqlsh cassandra

Connection error: ('Unable to connect to any servers', {'cassandra':    error(111, "Tried connecting to [('172.17.0.2', 9042)]. Last error: Connection refused")})

Then there is no way to kill or remove the container (a restart of the machine is needed):

docker kill dev-cassandra
docker rm -f dev-cassandra

The cassandra 2.2 and 3.0 images, on the other hand, work just fine:

docker run --name latest-cassandra -d cassandra:latest

Then, after waiting for Cassandra to be ready:

docker run -it --link latest-cassandra:cassandra --rm cassandra:latest cqlsh cassandra

Connected to Test Cluster at cassandra:9042.
[cqlsh 5.0.1 | Cassandra 3.0.0 | CQL spec 3.3.1 | Native protocol v4]
Use HELP for help.
cqlsh>

My configuration is:

Docker (Server Version: 1.9.1, Kernel Version: 4.1.13-boot2docker)
Yosemite
VirtualBox 5.0.10

Node stuck on UJ

Hello.
I'm trying to test a 4-node setup on 4 different VMs (CentOS 7, Docker 1.10.1).
I have two seed nodes and two other nodes.
My startup is as follows (no data in Cassandra, fresh containers):
1st node:
docker run --name cassandra1 -d -e CASSANDRA_BROADCAST_ADDRESS=10.26.194.33 -e CASSANDRA_SEEDS=10.26.194.33,10.26.194.47 -p 7000:7000 -p 9042:9042 cassandra:latest

  • waiting for the node to become UN

2nd node:
docker run --name cassandra2 -d -e CASSANDRA_BROADCAST_ADDRESS=10.26.194.47 -e CASSANDRA_SEEDS=10.26.194.33,10.26.194.47 -p 7000:7000 -p 9042:9042 cassandra:latest

  • waiting for the node to become UN

3rd node:
docker run --name cassandra3 -d -e CASSANDRA_BROADCAST_ADDRESS=10.26.194.78 -e CASSANDRA_SEEDS=10.26.194.33,10.26.194.47 -p 7000:7000 -p 9042:9042 cassandra:latest

  • waiting for the node to become UN

4th node:
docker run --name cassandra4 -d -e CASSANDRA_BROADCAST_ADDRESS=10.26.194.16 -e CASSANDRA_SEEDS=10.26.194.33,10.26.194.47 -p 7000:7000 -p 9042:9042 cassandra:latest

The 4th node is stuck on UJ, and no errors are shown in the logs:

INFO  [main] 2016-02-14 17:17:09,173 StorageService.java:1181 - JOINING: sleeping 30000 ms for pending range setup
DEBUG [PendingRangeCalculator:1] 2016-02-14 17:17:09,223 PendingRangeCalculatorService.java:64 - finished calculation for 3 keyspaces in 49ms
DEBUG [GossipTasks:1] 2016-02-14 17:17:09,399 FailureDetector.java:293 - Still not marking nodes down due to local pause
DEBUG [GossipTasks:1] 2016-02-14 17:17:09,399 FailureDetector.java:293 - Still not marking nodes down due to local pause
DEBUG [GossipTasks:1] 2016-02-14 17:17:09,399 FailureDetector.java:293 - Still not marking nodes down due to local pause
DEBUG [GossipTasks:1] 2016-02-14 17:17:10,401 FailureDetector.java:293 - Still not marking nodes down due to local pause
DEBUG [GossipTasks:1] 2016-02-14 17:17:10,401 FailureDetector.java:293 - Still not marking nodes down due to local pause
DEBUG [GossipTasks:1] 2016-02-14 17:17:10,401 FailureDetector.java:293 - Still not marking nodes down due to local pause
DEBUG [GossipTasks:1] 2016-02-14 17:17:11,402 FailureDetector.java:293 - Still not marking nodes down due to local pause
DEBUG [GossipTasks:1] 2016-02-14 17:17:11,402 FailureDetector.java:293 - Still not marking nodes down due to local pause
DEBUG [GossipTasks:1] 2016-02-14 17:17:11,402 FailureDetector.java:293 - Still not marking nodes down due to local pause
DEBUG [GossipStage:1] 2016-02-14 17:17:25,428 FailureDetector.java:456 - Ignoring interval time of 2003749978 for /10.26.194.78
DEBUG [GossipStage:1] 2016-02-14 17:17:25,429 FailureDetector.java:456 - Ignoring interval time of 2627652047 for /10.26.194.47
DEBUG [GossipStage:1] 2016-02-14 17:17:28,432 FailureDetector.java:456 - Ignoring interval time of 2005226789 for /10.26.194.78
INFO  [main] 2016-02-14 17:17:39,176 StorageService.java:1181 - JOINING: Starting to bootstrap...
INFO  [main] 2016-02-14 17:17:39,744 StreamResultFuture.java:88 - [Stream #d94d19d0-d33e-11e5-9066-091830ac5256] Executing streaming plan for Bootstrap
DEBUG [main] 2016-02-14 17:17:39,744 StreamCoordinator.java:144 - Connecting next session d94d19d0-d33e-11e5-9066-091830ac5256 with 10.26.194.78.
INFO  [StreamConnectionEstablisher:1] 2016-02-14 17:17:39,753 StreamSession.java:238 - [Stream #d94d19d0-d33e-11e5-9066-091830ac5256] Starting streaming to /10.26.194.78
DEBUG [StreamConnectionEstablisher:1] 2016-02-14 17:17:39,753 ConnectionHandler.java:82 - [Stream #d94d19d0-d33e-11e5-9066-091830ac5256] Sending stream init for incoming stream
DEBUG [StreamConnectionEstablisher:1] 2016-02-14 17:17:39,771 ConnectionHandler.java:87 - [Stream #d94d19d0-d33e-11e5-9066-091830ac5256] Sending stream init for outgoing stream
INFO  [StreamConnectionEstablisher:1] 2016-02-14 17:17:39,779 StreamCoordinator.java:266 - [Stream #d94d19d0-d33e-11e5-9066-091830ac5256, ID#0] Beginning stream session with /10.26.194.78
DEBUG [STREAM-OUT-/10.26.194.78] 2016-02-14 17:17:39,779 ConnectionHandler.java:334 - [Stream #d94d19d0-d33e-11e5-9066-091830ac5256] Sending Prepare (3 requests,  0 files}
DEBUG [STREAM-IN-/10.26.194.78] 2016-02-14 17:17:39,827 ConnectionHandler.java:266 - [Stream #d94d19d0-d33e-11e5-9066-091830ac5256] Received Prepare (0 requests,  0 files}
DEBUG [STREAM-IN-/10.26.194.78] 2016-02-14 17:17:39,828 ConnectionHandler.java:266 - [Stream #d94d19d0-d33e-11e5-9066-091830ac5256] Received Complete
DEBUG [STREAM-IN-/10.26.194.78] 2016-02-14 17:17:39,828 ConnectionHandler.java:110 - [Stream #d94d19d0-d33e-11e5-9066-091830ac5256] Closing stream connection handler on /10.26.194.78
INFO  [STREAM-IN-/10.26.194.78] 2016-02-14 17:17:39,828 StreamResultFuture.java:185 - [Stream #d94d19d0-d33e-11e5-9066-091830ac5256] Session with /10.26.194.78 is complete
DEBUG [STREAM-OUT-/10.26.194.78] 2016-02-14 17:17:39,832 ConnectionHandler.java:334 - [Stream #d94d19d0-d33e-11e5-9066-091830ac5256] Sending Complete

Any idea what I'm doing wrong?

Cassandra configuration options

The documentation lists a limited number of options that can be passed to the Cassandra image when executing docker run. Is there a way to configure the remaining options, such as the replication factor and the like?
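Worth noting as a hedged aside: the replication factor in particular is not a cassandra.yaml setting at all but a per-keyspace property set in CQL, for example:

```cql
-- myks is a placeholder keyspace name
CREATE KEYSPACE myks
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
```

For settings that do live in cassandra.yaml, one common approach is to bind-mount a customized cassandra.yaml over the one in the image with -v.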

Mounting external volume on windows logs out CompactionExecutor warnings

Hi,

When storing the Cassandra data/commitlog on disk in Windows, outside of the container, we can mount the relevant volumes successfully. Data can be written, and using nodetool flush to push data from the memtable to an sstable on disk shows no write issues (from what we can tell). However, the system/debug logs are flooded with warnings. We're not 100% sure what they mean, but they look to be something to do with compaction from the commitlog. If we do not mount volumes and use internal container storage instead, these warnings do not occur. The warnings are as follows; any thoughts or advice would be greatly appreciated, as we're not sure how worried to be:

WARN  [CompactionExecutor:2] 2016-12-12 13:27:04,758 CLibrary.java:304 - fsync(317) failed, errno (22) {}
com.sun.jna.LastErrorException: [22] @?H?
        at org.apache.cassandra.utils.CLibrary.fsync(Native Method) ~[apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.utils.CLibrary.trySync(CLibrary.java:293) ~[apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.utils.SyncUtil.trySync(SyncUtil.java:179) [apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.utils.SyncUtil.trySyncDir(SyncUtil.java:190) [apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.io.util.SequentialWriter.openChannel(SequentialWriter.java:107) [apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.io.util.SequentialWriter.<init>(SequentialWriter.java:141) [apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.io.sstable.format.big.BigTableWriter.writeMetadata(BigTableWriter.java:386) [apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.io.sstable.format.big.BigTableWriter.access$300(BigTableWriter.java:51) [apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.io.sstable.format.big.BigTableWriter$TransactionalProxy.doPrepare(BigTableWriter.java:352) [apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173) [apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.io.sstable.format.SSTableWriter.prepareToCommit(SSTableWriter.java:280) [apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.io.sstable.SSTableRewriter.doPrepare(SSTableRewriter.java:373) [apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173) [apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.doPrepare(CompactionAwareWriter.java:111) [apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173) [apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.finish(Transactional.java:184) [apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.finish(CompactionAwareWriter.java:121) [apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:200) [apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) [apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:82) [apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60) [apache-cassandra-3.9.jar:3.9]
        at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:264) [apache-cassandra-3.9.jar:3.9]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_111]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_111]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_111]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_111]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]

If we flush with nodetool, we get the following warning:

WARN  14:49:54 fsync(176) failed, errno (22) {}
com.sun.jna.LastErrorException: [22] ?1???
at org.apache.cassandra.utils.CLibrary.fsync(Native Method) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.utils.CLibrary.trySync(CLibrary.java:293) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.lifecycle.LogReplica.syncDirectory(LogReplica.java:96) [apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.lifecycle.LogReplica.append(LogReplica.java:90) [apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.lifecycle.LogReplicaSet.lambda$null$5(LogReplicaSet.java:209) [apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.utils.Throwables.perform(Throwables.java:113) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.utils.Throwables.perform(Throwables.java:103) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.lifecycle.LogReplicaSet.append(LogReplicaSet.java:209) [apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.lifecycle.LogFile.addRecord(LogFile.java:298) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.lifecycle.LogFile.add(LogFile.java:279) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.lifecycle.LogTransaction.trackNew(LogTransaction.java:134) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.lifecycle.LifecycleTransaction.trackNew(LifecycleTransaction.java:520) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.io.sstable.format.big.BigTableWriter.<init>(BigTableWriter.java:78) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.io.sstable.format.big.BigFormat$WriterFactory.open(BigFormat.java:92) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.io.sstable.format.SSTableWriter.create(SSTableWriter.java:101) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.create(SimpleSSTableMultiWriter.java:119) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.compaction.AbstractCompactionStrategy.createSSTableMultiWriter(AbstractCompactionStrategy.java:553) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.compaction.CompactionStrategyManager.createSSTableMultiWriter(CompactionStrategyManager.java:905) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.ColumnFamilyStore.createSSTableMultiWriter(ColumnFamilyStore.java:506) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.Memtable$FlushRunnable.createFlushWriter(Memtable.java:483) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.Memtable$FlushRunnable.<init>(Memtable.java:424) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.Memtable$FlushRunnable.<init>(Memtable.java:394) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.Memtable.createFlushRunnables(Memtable.java:295) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.Memtable.flushRunnables(Memtable.java:279) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.ColumnFamilyStore$Flush.flushMemtable(ColumnFamilyStore.java:1117) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1084) ~[apache-cassandra-3.9.jar:3.9]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_111]

Limit memory usage and add authentication and authorization options

How can we limit the memory usage of each Cassandra node for development purposes? Is there an environment variable for it? It would also be nice if authentication and authorization options were added: this is pretty easy with a minor modification of the docker-entrypoint.sh file, so no more branching from this official image would be needed.
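On the memory side, the stock cassandra-env.sh honors the MAX_HEAP_SIZE and HEAP_NEWSIZE environment variables when both are set, so a dev-sized sketch might look like (the values here are arbitrary examples, not recommendations):

```
docker run --name dev-cassandra -d \
  -e MAX_HEAP_SIZE=512M \
  -e HEAP_NEWSIZE=128M \
  cassandra:3.0
```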

Initialization script

This is a feature request,
Some Docker images, like the official postgres one, allow initialization scripts during startup: just add some SQL or shell files to the folder /docker-entrypoint-initdb.d and Docker will execute all of these scripts when the container starts, so we are able to create schemas, add initial data, etc. I believe this is a great feature and wouldn't be really hard to add to Cassandra; just take a look at the postgres docker-entrypoint.sh.
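Modeled loosely on the postgres entrypoint, the loop could look like this (the directory name and the .cql handling via cqlsh are assumptions, not an existing feature of this image):

```shell
# Hedged sketch: source *.sh files and feed *.cql files to cqlsh,
# in lexical order, from an init directory passed as an argument.
run_init_scripts() {
    dir="$1"
    for f in "$dir"/*; do
        case "$f" in
            *.sh)  echo "sourcing $f"; . "$f" ;;
            *.cql) echo "applying $f"; cqlsh -f "$f" ;;
            *)     echo "ignoring $f" ;;
        esac
    done
}

# Example: run_init_scripts /docker-entrypoint-initdb.d
```

As in the postgres image, this would only make sense on first start, before any data exists.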

Health check

It would be nice to add a HEALTHCHECK to the Dockerfile!
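A hedged sketch of what such a check might look like (the interval values and the cqlsh probe are arbitrary choices, not an official recommendation):

```dockerfile
# Mark the container unhealthy if cqlsh cannot reach the local node
HEALTHCHECK --interval=30s --timeout=10s --retries=5 \
    CMD cqlsh -e 'DESCRIBE KEYSPACES' || exit 1
```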

Handle rpc_address in docker-entrypoint.sh

We can currently easily configure these parameters from cassandra.yaml:
broadcast_address
broadcast_rpc_address
cluster_name
endpoint_snitch
listen_address
num_tokens

rpc_address should be added to make this complete; otherwise it defaults to 0.0.0.0.
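In the same spirit as the existing parameters, a sketch of the substitution (the yaml path and the uncomment-if-needed regex are assumptions):

```shell
# Hedged sketch: set rpc_address in cassandra.yaml, uncommenting the
# line if needed. First argument is the address, second the yaml path.
set_rpc_address() {
    sed -ri "s/^(# )?(rpc_address:).*/\2 $1/" "$2"
}

# Example: set_rpc_address 0.0.0.0 /etc/cassandra/cassandra.yaml
```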

cassandra configurations don't update when container's IP address is changed after restarts

Docker containers can get different IP addresses between restarts. The Cassandra entrypoint script does not support updating the configuration with the new IP address, and that causes problems when a container is restarted, especially if the previous address has been handed to a new Cassandra instance; the restarted instance then tries to form a cluster with the other one.

Java VM version : OpenJDK -> Oracle

Hello,

When starting your Docker image, I get these warnings in my logs:

INFO [main] 2015-06-25 14:37:18,967 CassandraDaemon.java:168 - JVM vendor/version: OpenJDK 64-Bit Server VM/1.7.0_79
WARN [main] 2015-06-25 14:37:18,968 CassandraDaemon.java:173 - OpenJDK is not recommended. Please upgrade to the newest Oracle Java release

It seems a bit strange to provide an "official" cassandra image that uses a "not recommended" JVM version.

I suggest you switch to the latest Oracle 1.8 JVM.

Warning messages

Hi, I use cassandra:3.0.4 and see the following warning messages in the logs:

WARN  11:50:19 Small commitlog volume detected at /var/lib/cassandra/commitlog; setting 
WARN  11:50:19 Unable to lock JVM memory (ENOMEM). This can result in part of the JVM being swapped out, especially with mmapped I/O enabled. Increase RLIMIT_MEMLOCK or run Cassandra as root.
WARN  11:50:19 jemalloc shared library could not be preloaded to speed up memory allocations
WARN  11:50:19 OpenJDK is not recommended. Please upgrade to the newest Oracle Java release

I disabled swap on the host and changed the resource limits in the container to:

/etc/security/limits.d/cassandra.conf

<cassandra_user> - memlock unlimited
<cassandra_user> - nofile 100000
<cassandra_user> - nproc 32768
<cassandra_user> - as unlimited

/etc/sysctl.conf:

vm.max_map_count = 131072

but I still see these messages.
Can somebody help me?

4 nodes Cassandra cluster on 2 servers

Hello

Seriously, I'd like to build a 4-node Cassandra cluster on 2 servers, like:

1st Server : Container A, Container B

2nd Server : Container C, Container D

Container A and Container C are the seed nodes.

I'd like to make

Container A is linked (bridge) to 1st host eth0

Container B is linked (bridge) to 1st host eth1

Container C is linked (bridge) to 2nd host eth0

Container D is linked (bridge) to 2nd host eth1

but it looks like

Containers A and B both link to the 1st host's eth0.

Containers C and D both link to the 2nd host's eth0.

--net=host is not a good option for me.

Do you have any good suggestions for a 4-node (4-container) Cassandra cluster on 2 servers?

Add Cassandra 3.7

Hi There!

Cassandra 3.6 and 3.7 have been released. Could we add them to this image? I am happy to submit a PR for this; I just need to know whether 3.7 should be a new tag or whether it should replace 3.5.

Thanks!
-Marc

Enhancement(UDF): environment variable

Hello,
It would be interesting to have the possibility of setting enable_user_defined_functions=true in our cassandra.yaml through an environment variable like CASSANDRA_UDF_ENABLE=true.

Thank you so much for your work

Can't pull debian:jessie-backports

cassandra$ docker pull debian:jessie-backports
Pulling repository docker.io/library/debian
9e9e62af0532: Error pulling image (jessie-backports) from docker.io/library/debian, Driver aufs failed to create image rootfs 9e9e62af0532d0163b7c6fc979448202fc1c77e5601159458893aa57d9bf3fd6: open /mnt/sda1/var/lib/docker/aufs/layers/1d6f63d023f51ae1bbc8c5623bcde3de05751dbe9bba5ae4b3405005f8b856c9: no such file or directory
Error pulling image (jessie-backports) from docker.io/library/debian, Driver aufs failed to create image rootfs 9e9e62af0532d0163b7c6fc979448202fc1c77e5601159458893aa57d9bf3fd6: open /mnt/sda1/var/lib/docker/aufs/layers/1d6f63d023f51ae1bbc8c5623bcde3de05751dbe9bba5ae4b3405005f8b856c9: no such file or directory

Persisting data to host file system on OSX fails on startup

I was attempting to run Cassandra on OS X under VirtualBox and store the data directly on the OS X host. docker-machine already maps /Users into the virtual machine, so I thought it would be easy to start Cassandra with the following:

docker run --name cassandra -v /Users/username/cassandra:/var/lib/cassandra/data cassandra:latest

However, on startup Cassandra reports the following and exits. It manages to create some directories and files in the host directory, so it's not simply a permissions issue. Is there a workaround?

INFO 10:50:57 Initializing system.available_ranges
INFO 10:50:58 Enqueuing flush of local: 653 (0%) on-heap, 0 (0%) off-heap
INFO 10:50:58 Writing Memtable-local@1904409473(0.107KiB serialized bytes, 3 ops, 0%/0% of on/off-heap limit)
INFO 10:50:58 Completed flushing /var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/tmp-la-5-big-Data.db (0.000KiB) for commitlog position ReplayPosition(segmentId=1441104657062, position=283)
WARN 10:50:58 fsync(104) failed, errno (22).
WARN 10:50:58 fsync(102) failed, errno (22).
WARN 10:50:58 fsync(103) failed, errno (22).
INFO 10:50:58 Compacting (541defc0-5097-11e5-b5f0-091830ac5256) [/var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/la-5-big-Data.db:level=0, /var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/la-2-big-Data.db:level=0, /var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/la-1-big-Data.db:level=0, /var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/la-3-big-Data.db:level=0, /var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/la-4-big-Data.db:level=0, ]
INFO 10:50:58 Initializing system_distributed.parent_repair_history
INFO 10:50:58 Initializing system_distributed.repair_history
INFO 10:50:58 Initializing system_traces.sessions
INFO 10:50:58 Initializing system_traces.events
WARN 10:50:58 fsync(105) failed, errno (22).
WARN 10:50:58 fsync(107) failed, errno (22).
INFO 10:50:58 completed pre-loading (8 keys) key cache.
INFO 10:50:58 No commitlog files found; skipping replay
ERROR 10:50:58 Exception in thread Thread[CompactionExecutor:1,1,main]
org.apache.cassandra.io.FSWriteError: java.nio.file.FileSystemException: /var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/tmplink-la-6-big-Index.db -> /var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/tmp-la-6-big-Index.db: Operation not permitted
at org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:93) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.io.sstable.format.big.BigTableWriter.makeTmpLinks(BigTableWriter.java:295) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.io.sstable.format.big.BigTableWriter.openFinalEarly(BigTableWriter.java:332) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.io.sstable.SSTableRewriter.switchWriter(SSTableRewriter.java:298) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.io.sstable.SSTableRewriter.doPrepare(SSTableRewriter.java:346) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:169) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.doPrepare(CompactionAwareWriter.java:79) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:169) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.finish(Transactional.java:179) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.finish(CompactionAwareWriter.java:89) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:196) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:236) ~[apache-cassandra-2.2.0.jar:2.2.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_79]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_79]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_79]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_79]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
Caused by: java.nio.file.FileSystemException: /var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/tmplink-la-6-big-Index.db -> /var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/tmp-la-6-big-Index.db: Operation not permitted
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91) ~[na:1.7.0_79]
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[na:1.7.0_79]
at sun.nio.fs.UnixFileSystemProvider.createLink(UnixFileSystemProvider.java:475) ~[na:1.7.0_79]
at java.nio.file.Files.createLink(Files.java:1039) ~[na:1.7.0_79]
at org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:89) ~[apache-cassandra-2.2.0.jar:2.2.0]
... 19 common frames omitted

support minor versions (2.2.2)

My use case for the image is to run cqlsh.
The updated 2.2.3 image uses CQL version 3.3.1, so I can't connect to a server that still speaks 3.3.0;
on the other hand, 2.1 would be too old.

Would it be possible to also publish minor-version tags on Docker Hub?
(Honestly, the way Cassandra handles version compatibility is the underlying problem, but it would be great if this image could help work around it.)

Connection error: ('Unable to connect to any servers', {'1.1.1.1': ProtocolError("cql_version '3.3.1' is not supported by remote (w/ native protocol). Supported versions: [u'3.3.0']",)})
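Until a matching minor tag exists, one workaround (a sketch, not something the image documents) is to ask the newer cqlsh to negotiate the older CQL version explicitly via its `--cqlversion` flag; `1.1.1.1` stands in for the server address from the error above:

```console
$ docker run -it --rm cassandra:2.2 \
    sh -c 'exec cqlsh --cqlversion=3.3.0 1.1.1.1'
```

Whether this works depends on the two versions being close enough to interoperate; the cleaner fix would indeed be minor-version tags.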

insufficient memory when starting the container

docker run -d --name kong-database \
>                 -p 9042:9042 \
>                 cassandra:2.2
Unable to find image 'cassandra:2.2' locally
2.2: Pulling from library/cassandra

51f5c6a04d83: Pull complete 
a3ed95caeb02: Pull complete 
2a21294c613b: Pull complete 
4d679f1ad019: Pull complete 
3b49a7450c9a: Pull complete 
1e88c3a1e691: Pull complete 
b464a0d9b04f: Pull complete 
8b59c8e0cfa3: Pull complete 
bdbda7d0c45f: Pull complete 
f0664e7f1560: Pull complete 
01554e7bf5c4: Pull complete 
Digest: sha256:c33aeb965f108ac458d1fe8c9c7e563c1cf62cbdee16ed7ff2c658e24a0fb53f
Status: Downloaded newer image for cassandra:2.2
684b7a8e819ed376c8429d89810ec740a044bf9d07740834fe19d51b36cfb01d
docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                      PORTS               NAMES
684b7a8e819e        cassandra:2.2       "/docker-entrypoint.s"   34 seconds ago      Exited (1) 33 seconds ago                       kong-database
docker logs kong-database
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000a6600000, 1329594368, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 1329594368 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /tmp/hs_err_pid1.log
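The JVM sizes its heap from the host's memory by default, which can exceed what is actually available in a small VM. The stock `cassandra-env.sh` honors the `MAX_HEAP_SIZE` and `HEAP_NEWSIZE` environment variables (they must be set together), so a capped run might look like this (the sizes are illustrative, not a recommendation):

```console
$ docker run -d --name kong-database \
    -p 9042:9042 \
    -e MAX_HEAP_SIZE=512M \
    -e HEAP_NEWSIZE=128M \
    cassandra:2.2
```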

hostname: Name or service not known

starting the container with:

docker run -d --net=host --name=cassandra1 cassandra

the container terminates right away, with the logs indicating:

hostname: Name or service not known

CASSANDRA_SEEDS in Swarm Mode

I would like to bring up a stack by running:

docker stack deploy --compose-file docker-compose-stack.yaml cassandra

docker-compose-stack.yaml contents:

version: '3'
services:
  cassandra:
    image: cassandra
    environment:
      - CASSANDRA_SEEDS=10.0.0.3,10.0.0.4,10.0.0.5,10.0.0.6
    ports:
      - 7000:7000
    deploy:
      mode: global

However, the Cassandra nodes don't know about each other unless I manually list IPs in CASSANDRA_SEEDS, and those IPs may stop being assigned, which is not ideal. How would I use a load-balanced IP for CASSANDRA_SEEDS? I tried CASSANDRA_SEEDS=10.0.0.2 and CASSANDRA_SEEDS=cassandra, but neither worked. Also, how should I handle the situation where the load-balanced IP ends up pointing at the same Cassandra node rather than another one?
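One approach (a sketch, not something this image supports out of the box) is a wrapper entrypoint that resolves the swarm service DNS name `tasks.cassandra`, which returns the IP of every task rather than the load-balanced VIP, and joins the result into a seed list. The DNS lookup is simulated below with fixed input:

```shell
# In a real wrapper the list would come from:
#   getent hosts tasks.cassandra | awk '{print $1}'
ips="10.0.0.3
10.0.0.4
10.0.0.5"

# Join the addresses into the comma-separated form CASSANDRA_SEEDS expects.
CASSANDRA_SEEDS="$(printf '%s\n' "$ips" | paste -sd, -)"
echo "$CASSANDRA_SEEDS"   # → 10.0.0.3,10.0.0.4,10.0.0.5
```

This also answers the self-pointing concern: a node that finds its own address in the seed list simply acts as a seed itself, which is harmless.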

Nodes don't reconnect when one of the containers is restarted

Hi There -

Thanks for producing these Docker images. I hit a snag though. When I start a 2-machine cluster on EC2, the dockerized Cassandra nodes connect to each other and all seems fine. But when I restart one of the Docker containers, the nodes don't reconnect.

Here are the commands I used to start the containers:

docker run --name cass -d \
--restart=on-failure:10 \
-e CASSANDRA_BROADCAST_ADDRESS=172.30.0.35 \
-e CASSANDRA_CLUSTER_NAME=spot \
-p 7000:7000 \
cassandra:latest

docker run --name cass -d \
--restart=on-failure:10 \
-e CASSANDRA_BROADCAST_ADDRESS=172.30.1.116 \
-e CASSANDRA_CLUSTER_NAME=spot \
-p 7000:7000 \
-e CASSANDRA_SEEDS=172.30.0.35 \
cassandra:latest

Here's the nodetool status when they first connect:

Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens  Owns (effective)  Host ID                               Rack
UN  172.30.0.35   51.45 KB   256     100.0%            015e7e26-91ec-4bdc-b59b-aa29c35248e6  rack1
UN  172.30.1.116  ?          256     100.0%            243efb28-02d4-4ded-8419-ce32a28186b8  rack1

When I run

docker restart cass
on either of the nodes, they don't reconnect:


Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens  Owns (effective)  Host ID                               Rack
UN  172.30.0.35   51.45 KB   256     100.0%            015e7e26-91ec-4bdc-b59b-aa29c35248e6  rack1
DN  172.30.1.116  51.51 KB   256     100.0%            243efb28-02d4-4ded-8419-ce32a28186b8  rack1

Here's the last bit of log from the node I restarted:


INFO  20:06:21 Enqueuing flush of local: 51479 (0%) on-heap, 0 (0%) off-heap
INFO  20:06:21 Writing Memtable-local@983239489(8566 serialized bytes, 259 ops, 0%/0% of on/off-heap limit)
INFO  20:06:21 OutboundTcpConnection using coalescing strategy DISABLED
INFO  20:06:21 Completed flushing /var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/system-local-ka-9-Data.db (5277 bytes) for commitlog position ReplayPosition(segmentId=1435089979529, position=104922)
INFO  20:06:21 Handshaking version with /172.30.0.35
INFO  20:06:21 Node /172.30.1.116 state jump to normal
INFO  20:06:21 Node /172.30.0.35 has restarted, now UP
INFO  20:06:21 InetAddress /172.30.0.35 is now UP
INFO  20:06:21 Node /172.30.0.35 state jump to normal
INFO  20:06:21 Waiting for gossip to settle before accepting client requests...
INFO  20:06:21 Compacted 4 sstables to [/var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/system-local-ka-10,].  6,295 bytes to 5,723 (~90% of original) in 251ms = 0.021745MB/s.  4 total partitions merged to 1.  Partition merge counts were {4:1, }
INFO  20:06:29 No gossip backlog; proceeding
INFO  20:06:29 Netty using native Epoll event loop
INFO  20:06:29 Using Netty Version: [netty-buffer=netty-buffer-4.0.23.Final.208198c, netty-codec=netty-codec-4.0.23.Final.208198c, netty-codec-http=netty-codec-http-4.0.23.Final.208198c, netty-codec-socks=netty-codec-socks-4.0.23.Final.208198c, netty-common=netty-common-4.0.23.Final.208198c, netty-handler=netty-handler-4.0.23.Final.208198c, netty-transport=netty-transport-4.0.23.Final.208198c, netty-transport-rxtx=netty-transport-rxtx-4.0.23.Final.208198c, netty-transport-sctp=netty-transport-sctp-4.0.23.Final.208198c, netty-transport-udt=netty-transport-udt-4.0.23.Final.208198c]
INFO  20:06:29 Starting listening for CQL clients on /0.0.0.0:9042...
INFO  20:06:29 Binding thrift service to /0.0.0.0:9160
INFO  20:06:29 Listening for thrift clients...

And here's the last bit of log from the node that I did NOT restart:


4392bcdd35a684174e047860b377/system-local-ka-4-Data.db'), SSTableReader(path='/var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/system-local-ka-1-Data.db'), SSTableReader(path='/var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/system-local-ka-2-Data.db'), SSTableReader(path='/var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/system-local-ka-3-Data.db')]
INFO  20:05:13 Node /172.30.0.35 state jump to normal
INFO  20:05:13 Waiting for gossip to settle before accepting client requests...
INFO  20:05:13 Compacted 4 sstables to [/var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/system-local-ka-5,].  5,885 bytes to 5,721 (~97% of original) in 129ms = 0.042294MB/s.  4 total partitions merged to 1.  Partition merge counts were {4:1, }
INFO  20:05:19 OutboundTcpConnection using coalescing strategy DISABLED
INFO  20:05:20 Handshaking version with /172.30.1.116
INFO  20:05:20 Node /172.30.1.116 is now part of the cluster
INFO  20:05:20 InetAddress /172.30.1.116 is now UP
WARN  20:05:20 Not marking nodes down due to local pause of 9701751803 > 5000000000
INFO  20:05:21 No gossip backlog; proceeding
INFO  20:05:21 Netty using native Epoll event loop
INFO  20:05:21 Using Netty Version: [netty-buffer=netty-buffer-4.0.23.Final.208198c, netty-codec=netty-codec-4.0.23.Final.208198c, netty-codec-http=netty-codec-http-4.0.23.Final.208198c, netty-codec-socks=netty-codec-socks-4.0.23.Final.208198c, netty-common=netty-common-4.0.23.Final.208198c, netty-handler=netty-handler-4.0.23.Final.208198c, netty-transport=netty-transport-4.0.23.Final.208198c, netty-transport-rxtx=netty-transport-rxtx-4.0.23.Final.208198c, netty-transport-sctp=netty-transport-sctp-4.0.23.Final.208198c, netty-transport-udt=netty-transport-udt-4.0.23.Final.208198c]
INFO  20:05:21 Starting listening for CQL clients on /0.0.0.0:9042...
INFO  20:05:21 Binding thrift service to /0.0.0.0:9160
INFO  20:05:21 Listening for thrift clients...
INFO  20:06:13 InetAddress /172.30.1.116 is now DOWN
INFO  20:06:13 Handshaking version with /172.30.1.116
INFO  20:06:21 Handshaking version with /172.30.1.116
INFO  20:06:21 Node /172.30.1.116 has restarted, now UP
INFO  20:06:21 InetAddress /172.30.1.116 is now UP
INFO  20:06:21 InetAddress /172.30.1.116 is now UP
INFO  20:06:21 InetAddress /172.30.1.116 is now UP
INFO  20:06:21 InetAddress /172.30.1.116 is now UP
INFO  20:06:21 Node /172.30.1.116 state jump to normal

Any idea what the problem is?

cassandra:2.2 image won't start on an overlay network

I went through the following article to setup a docker swarm environment on my laptop:
https://medium.com/on-docker/docker-overlay-networks-that-was-easy-8f24baebb698#.8bbx8sc6h
I got through this fine and everything worked as expected.

Then I made a new network with docker network create -d overlay cassNet
and ran docker run -d --name cass0 --net cassNet cassandra:2.2, which exits immediately. Upon further inspection I found the following cause by running docker run -it --name cass1 --net cassNet cassandra:2.2:

INFO  14:26:06 Loading settings from file:/etc/cassandra/cassandra.yaml
INFO  14:26:06 Node configuration:[authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_snapshot=true; batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; broadcast_address=10.0.0.2 172.18.0.2; broadcast_rpc_address=10.0.0.2 172.18.0.2; cas_contention_timeout_in_ms=1000; client_encryption_options=<REDACTED>; cluster_name=Test Cluster; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_directory=/var/lib/cassandra/commitlog; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_period_in_ms=10000; compaction_large_partition_warning_threshold_mb=100; compaction_throughput_mb_per_sec=16; concurrent_counter_writes=32; concurrent_reads=32; concurrent_writes=32; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; cross_node_timeout=false; data_file_directories=[/var/lib/cassandra/data]; disk_failure_policy=stop; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; enable_user_defined_functions=false; endpoint_snitch=SimpleSnitch; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; incremental_backups=false; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; inter_dc_tcp_nodelay=false; internode_compression=all; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=10.0.0.2 172.18.0.2; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; memtable_allocation_type=heap_buffers; native_transport_port=9042; num_tokens=256; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_validity_in_ms=2000; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_timeout_in_ms=10000; role_manager=CassandraRoleManager; roles_validity_in_ms=2000; row_cache_save_period=0; row_cache_size_in_mb=0; 
rpc_address=0.0.0.0; rpc_keepalive=true; rpc_port=9160; rpc_server_type=sync; saved_caches_directory=/var/lib/cassandra/saved_caches; seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider, parameters=[{seeds=10.0.0.2 172.18.0.2}]}]; server_encryption_options=<REDACTED>; snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=false; storage_port=7000; thrift_framed_transport_size_in_mb=15; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; tracetype_query_ttl=86400; tracetype_repair_ttl=604800; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; windows_timer_interval=1; write_request_timeout_in_ms=2000]
INFO  14:26:06 DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO  14:26:06 Global memtable on-heap threshold is enabled at 122MB
INFO  14:26:06 Global memtable off-heap threshold is enabled at 122MB
Exception (org.apache.cassandra.exceptions.ConfigurationException) encountered during startup: Unknown listen_address '10.0.0.2 172.18.0.2'
Unknown listen_address '10.0.0.2 172.18.0.2'
ERROR 14:26:06 Exception encountered during startup: Unknown listen_address '10.0.0.2 172.18.0.2'

I've noticed that the 3.x images will run on an overlay network just fine.
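The 2.2 entrypoint derives the node address from `hostname --ip-address`, which on an overlay network prints both the overlay and bridge IPs, producing the invalid `10.0.0.2 172.18.0.2` value in the log above. One workaround is to pass a single address explicitly with `-e CASSANDRA_LISTEN_ADDRESS=...`; alternatively a wrapper could keep just the first reported address, simulated here with the value from the log:

```shell
# The multi-homed output seen on the overlay network:
addrs="10.0.0.2 172.18.0.2"

# Keep only the first whitespace-separated address for listen_address.
listen="${addrs%% *}"
echo "$listen"   # → 10.0.0.2
```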

Memory settings

Hello,
How are we supposed to configure Cassandra's memory limits (Xmx) with this image?
Is there no environment variable for that?
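There is no image-specific variable, but the stock `cassandra-env.sh` reads `MAX_HEAP_SIZE` and `HEAP_NEWSIZE` from the environment (set both or neither), so something like the following should work (sizes are illustrative):

```console
$ docker run -d --name some-cassandra \
    -e MAX_HEAP_SIZE=1G \
    -e HEAP_NEWSIZE=256M \
    cassandra
```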

Passing "CASSANDRA_RACK" and "CASSANDRA_DC" do nothing

Hi.
I've started up 3 nodes Cassandra 2.1.13 cluster with passing some env variables like:
CASSANDRA_CLUSTER_NAME=mycluster
CASSANDRA_DC=dc2
CASSANDRA_RACK=rack2

but when I run "nodetool status" on the first node (the seed node in my case) I see the default "datacenter1" and "rack1", so it seems that passing these variables does nothing.

root@39559689ed20:/# cat /etc/cassandra/cassandra-rackdc.properties | egrep -v "^\s*(#|$)"
dc= dc2
rack= rack2
root@39559689ed20:/# nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load       Tokens  Owns (effective)  Host ID                               Rack
UN  192.168.0.4  107.39 KB  256     69.0%             6c39fcc4-a54d-407a-811f-f4bc3d5587b7  rack1
UN  192.168.0.3  120.16 KB  256     67.2%             f04417e4-0124-4eb4-842a-37636a14ab0d  rack1
UN  192.168.0.2  112.63 KB  256     63.8%             65ad3bdb-519e-4568-8370-8f53d0a0fd49  rack1
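The values do land in cassandra-rackdc.properties (as shown above), but the default endpoint_snitch=SimpleSnitch never reads that file, so every node reports datacenter1/rack1 regardless. Switching to a snitch that does read it, via the image's `CASSANDRA_ENDPOINT_SNITCH` variable, should make the DC/rack settings take effect (a sketch; other options elided):

```console
$ docker run -d --name some-cassandra \
    -e CASSANDRA_ENDPOINT_SNITCH=GossipingPropertyFileSnitch \
    -e CASSANDRA_DC=dc2 \
    -e CASSANDRA_RACK=rack2 \
    cassandra:2.1
```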

Reusing cassandra image with data preloaded from a docker commit

I am trying to set up a developer environment with a Cassandra container based on this image, but with data already loaded. Loading the test data is complicated and relies on 3rd-party components and other running containers, so it isn't something that can just be done in a Dockerfile.

So we have a running container (just a single container, no cluster) with the data preloaded, and we save that as an image with docker commit. This works just fine. But when we try to use that image later, it fails with an error like this:

INFO  19:58:33 OutboundTcpConnection using coalescing strategy DISABLED
ERROR 19:59:04 Exception encountered during startup
java.lang.RuntimeException: Unable to gossip with any seeds
        at org.apache.cassandra.gms.Gossiper.doShadowRound(Gossiper.java:1307) ~[apache-cassandra-2.1.7.jar:2.1.7]
        at org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:533) ~[apache-cassandra-2.1.7.jar:2.1.7]
        at org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:777) ~[apache-cassandra-2.1.7.jar:2.1.7]
        at org.apache.cassandra.service.StorageService.initServer(StorageService.java:714) ~[apache-cassandra-2.1.7.jar:2.1.7]
        at org.apache.cassandra.service.StorageService.initServer(StorageService.java:605) ~[apache-cassandra-2.1.7.jar:2.1.7]
        at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378) [apache-cassandra-2.1.7.jar:2.1.7]
        at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:524) [apache-cassandra-2.1.7.jar:2.1.7]
        at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:613) [apache-cassandra-2.1.7.jar:2.1.7]

So the seed value (which contains the IP address the container had when it was preloaded before the commit) is wrong for the new container because the IP address has changed.

So how can we work around this? Can we tell Cassandra to ignore its previous seeds when it starts up? Can we somehow clear the IPs/seeds of the previous container before we commit it to a new image? Something else?

Thanks

Create schema

Hi everyone,

Is there any "standard" way of creating a schema? Or should I write my own Dockerfile/Entrypoint?

Thank you!
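There is no built-in mechanism in this image. A common pattern (a sketch; the container and file names `some-cassandra` and `schema.cql` are hypothetical) is to keep the schema in a CQL script and feed it to `cqlsh -f` from a second, linked container once the node is up:

```console
$ docker run --link some-cassandra:cassandra --rm \
    -v "$PWD/schema.cql:/schema.cql" \
    cassandra sh -c 'exec cqlsh cassandra -f /schema.cql'
```

Another option is a custom entrypoint that starts Cassandra, waits for port 9042 to answer, loads the script, and then waits on the server process.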

cannot connect cqlsh from different container for 2.1.16 image

$  docker run -it --link node1:cassandra --rm cassandra sh -c 'exec cqlsh "$CASSANDRA_PORT_9042_TCP_ADDR"'
Connection error: ('Unable to connect to any servers', {'172.17.0.2': DriverException('ProtocolError returned from server while using explicitly set client protocol_version 4',)})

From some searching, it seems to be a cqlsh version incompatibility?

intuit/wasabi#98

I am running image 2.1.16

As per
https://docs.datastax.com/en/landing_page/doc/landing_page/compatibility.html

It should be 3.1

On the container, running cqlsh shows:
[cqlsh 5.0.1 | Cassandra 2.1.16 | CQL spec 3.2.1 | Native protocol v3]

Please advise if I am missing anything.
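The error usually means the client-side cqlsh came from a newer image than the server: `docker run ... cassandra` with no tag pulls `latest`, whose cqlsh defaults to native protocol v4, while 2.1.16 only speaks v3. Pinning the client image to the server's major version should avoid the mismatch:

```console
$ docker run -it --link node1:cassandra --rm cassandra:2.1 \
    sh -c 'exec cqlsh "$CASSANDRA_PORT_9042_TCP_ADDR"'
```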

CASSANDRA_CLUSTER_NAME Failing

When trying to set a custom cluster name with the CASSANDRA_CLUSTER_NAME variable, Cassandra fails to start up. This command:-

docker run --name cassandra-1 -e CASSANDRA_CLUSTER_NAME="My New Cluster" -p=127.0.0.1:9042:9042 -v C:/Work/Cassandra/cassandra-1:/var/lib/cassandra -d cassandra:3.9

fails to start and the following entries appear in the logs:-

ERROR 14:31:33 Fatal exception during initialization org.apache.cassandra.exceptions.ConfigurationException: Saved cluster name Test Cluster != configured name My New Cluster at org.apache.cassandra.db.SystemKeyspace.checkHealth(SystemKeyspace.java:935) ~[apache-cassandra-3.9.jar:3.9] at org.apache.cassandra.service.StartupChecks$9.execute(StartupChecks.java:325) ~[apache-cassandra-3.9.jar:3.9] at org.apache.cassandra.service.StartupChecks.verify(StartupChecks.java:112) ~[apache-cassandra-3.9.jar:3.9] at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:196) [apache-cassandra-3.9.jar:3.9] at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:601) [apache-cassandra-3.9.jar:3.9] at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:730) [apache-cassandra-3.9.jar:3.9]

The container starts up just fine if the cluster name is left as the default
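The mounted volume at C:/Work/Cassandra/cassandra-1 already holds system tables written under the default name "Test Cluster", and Cassandra refuses to start when the saved name differs from the configured one. Either start with an empty data directory, or rename the existing cluster in place, roughly as follows (a sketch of the usual procedure, run inside the old container started with the default name):

```console
cqlsh> UPDATE system.local SET cluster_name = 'My New Cluster' WHERE key = 'local';
cqlsh> exit
$ nodetool flush system
```

Afterwards, recreate the container with the matching CASSANDRA_CLUSTER_NAME.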

occasionally see this error: "error setting limit (Operation not permitted)" on my windows machine

New to Docker. docker-library/cassandra ran fine on my Windows laptop until I restarted; now I see this error after starting a container, and I don't remember making any changes. This probably has nothing to do with the Cassandra image itself, but I don't see a way to set the limit on the container.

This only sets it on the image.

docker -d --default-ulimit nproc=1024:2048

This will set a soft limit of 1,024 and a hard limit of 2,048 child processes for all containers. You can set this option multiple times for different ulimit values: --default-ulimit nproc=1024:2408 --default-ulimit nofile=100:200
root@34d4b228cd55:/# cat /etc/security/limits.d/cassandra.conf
# Provided by the cassandra package
cassandra  -  memlock  unlimited
cassandra  -  nofile   100000
cassandra  -  as       unlimited
cassandra  -  nproc    8096
root@34d4b228cd55:/# service cassandra status
Cassandra is not running.
root@34d4b228cd55:/# service cassandra start
/etc/init.d/cassandra: 71: ulimit: error setting limit (Operation not permitted)
root@34d4b228cd55:/# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 7893
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1048576
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1048576
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Environment: Windows 7 laptop

$ docker version
Client:
Version: 1.9.0
API version: 1.21
Go version: go1.4.3
Git commit: 76d6bc9
Built: Tue Nov 3 19:20:09 UTC 2015
OS/Arch: windows/amd64

Server:
Version: 1.9.1
API version: 1.21
Go version: go1.4.3
Git commit: a34a1d5
Built: Fri Nov 20 17:56:04 UTC 2015
OS/Arch: linux/amd64

$ docker info
Containers: 32
Images: 279
Server Version: 1.9.1
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 343
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.1.13-boot2docker
Operating System: Boot2Docker 1.9.1 (TCL 6.4.1); master : cef800b - Fri Nov 20 19:33:59 UTC 2015
CPUs: 1
Total Memory: 1.956 GiB
Name: default
ID: AXXY:B6JJ:6M6W:LXME:VWED:BNEA:4MKU:LDP5:TWKR:CCUH:3SKV:BSBE
Debug mode (server): true
File Descriptors: 28
Goroutines: 46
System Time: 2016-01-16T02:28:33.17774389Z
EventsListeners: 0
Init SHA1:
Init Path: /usr/local/bin/docker
Docker Root Dir: /mnt/sda1/var/lib/docker
Labels:
provider=virtualbox
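Per-container limits can be raised at `docker run` time with the `--ulimit` flag (Docker 1.6+), rather than the daemon-wide `--default-ulimit`. Note also that this image's entrypoint runs Cassandra in the foreground; `service cassandra start` inside the container is not the intended startup path, and it is that init script's ulimit call which fails here. A hedged example (limit values illustrative):

```console
$ docker run -d --name some-cassandra \
    --ulimit memlock=-1:-1 \
    --ulimit nofile=100000:100000 \
    cassandra
```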

opscenter

Hi, I was wondering if you could help me figure out how to set up the datastax opscenter with this cassandra image on multiple ec2 instances. Thanks.
