Comments (18)
Easy peasy. Here's the docker-compose file I use:
version: '2'
services:
  neo4j:
    image: neo4j
    ports:
      - "7474:7474"
    networks:
      - basic-lan
    environment:
      - NEO4J_AUTH=none
  neo4j-test:
    image: neo4j
    ports:
      - "7475:7474"
    networks:
      - basic-lan
    environment:
      - NEO4J_AUTH=none
  web:
    image: debian
    networks:
      - basic-lan
networks:
  basic-lan:
    ipam:
      config:
        - subnet: 192.168.37.0/24
          ip-range: 192.168.37.0/24
First, I get all the containers up and running:
docker-compose up
Then, I start a bash session in the web container:
docker-compose run web bash
All the rest is typed into the bash session.
We need curl for the example:
apt-get update && apt-get install -y curl
Once done, let's create two nodes in the prod database:
curl -X POST -H Accept:application/json -v neo4j:7474/db/data/node
curl -X POST -H Accept:application/json -v neo4j:7474/db/data/node
and four in the test database (note the different host here; the container-internal port is still 7474):
curl -X POST -H Accept:application/json -v neo4j-test:7474/db/data/node
curl -X POST -H Accept:application/json -v neo4j-test:7474/db/data/node
curl -X POST -H Accept:application/json -v neo4j-test:7474/db/data/node
curl -X POST -H Accept:application/json -v neo4j-test:7474/db/data/node
Done. Open the browser and view the result:
Production (localhost:7474):
And Test (localhost:7475):
It is of course a trivial matter to replace the curl commands with some actual code that imports real data.
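For example, a minimal Python sketch of that replacement, assuming the legacy Neo4j 2.x/3.x REST endpoint (POST /db/data/node) used by the curl commands above; the helper names are my own:

```python
import json
import urllib.request

def node_request(host, props, port=7474):
    """Build the POST request that creates one node with the given properties."""
    return urllib.request.Request(
        "http://%s:%d/db/data/node" % (host, port),
        data=json.dumps(props).encode("utf-8"),
        headers={"Accept": "application/json",
                 "Content-Type": "application/json"},
        method="POST",
    )

def create_nodes(host, rows):
    """Create one node per property dict in rows (performs network calls)."""
    for props in rows:
        urllib.request.urlopen(node_request(host, props))

# e.g., from the web container:
#   create_nodes("neo4j", [{"name": "a"}, {"name": "b"}])
#   create_nodes("neo4j-test", [{"name": "c"}, {"name": "d"}, {"name": "e"}, {"name": "f"}])
```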
from docker-neo4j.
@igorescobar The bit of the docs that you mention is for running Neo4j clusters where the instances need to talk to each other. Is that what you are doing? If you are just running standalone instances there is certainly no need for Docker networks.
@igorescobar Assuming that you are running standalone instances, I don't understand the problem that you are facing. You should be able to use Docker's standard port-mapping mechanism to publish Neo4j's default port 7474 on any port you like on the Docker host.
You can expose Neo4j on the host's port 1234 like this using "raw" Docker:
docker run --publish=1234:7474 neo4j
Or like this using docker-compose:
neo4j:
  image: neo4j
  ports:
    - "1234:7474"
Does that solve your problem?
@igorescobar here is a docker-compose file which works and starts a 3-node HA cluster. It essentially implements what @benbc suggests, and each instance's browser is available at ports 7474, 7475, and 7476, respectively.
I also had to turn off "Use bolt by default" in the browser for member1, or expose the commented-out port.
version: '2'
services:
  member1:
    image: neo4j
    ports:
      - "7474:7474"
      #- "7687:7687"
    networks:
      - basic-lan
    environment:
      - NEO4J_SERVER_ID=1
      - NEO4J_HA_ADDRESS=member1
      - NEO4J_PUBLIC_HOST=member1
      - NEO4J_DATABASE_MODE=HA
      - NEO4J_AUTH=none
      - NEO4J_INITIAL_HOSTS=member1:5001,member2:5001,member3:5001
  member2:
    image: neo4j
    ports:
      - "7475:7474"
    networks:
      - basic-lan
    environment:
      - NEO4J_SERVER_ID=2
      - NEO4J_HA_ADDRESS=member2
      - NEO4J_PUBLIC_HOST=member2
      - NEO4J_DATABASE_MODE=HA
      - NEO4J_AUTH=none
      - NEO4J_INITIAL_HOSTS=member1:5001,member2:5001,member3:5001
  member3:
    image: neo4j
    ports:
      - "7476:7474"
    networks:
      - basic-lan
    environment:
      - NEO4J_SERVER_ID=3
      - NEO4J_HA_ADDRESS=member3
      - NEO4J_PUBLIC_HOST=member3
      - NEO4J_DATABASE_MODE=HA
      - NEO4J_AUTH=none
      - NEO4J_INITIAL_HOSTS=member1:5001,member2:5001,member3:5001
networks:
  basic-lan:
    ipam:
      config:
        - subnet: 192.168.36.0/24
          ip-range: 192.168.36.0/24
The Docker network settings are unnecessarily specific with regard to the IP subnet and so forth, so change those however you like.
@spacecowboy Thanks for sharing your docker-compose.yml with me! I will keep it for future use, but as I said, for now I think it is a bit too much for testing/development.
And @benbc, thanks for your prompt answer; much appreciated!
Your solution works fine if you are only using Docker and creating standalone containers. When we work with docker-compose, we compose our environment out of several containers, and they are able to communicate with each other using their internal ports.
The following recipe uses your suggestion:
db:
  image: postgres:9.4.5
  ports:
    - "5432"
neo4j:
  image: neo4j
  ports:
    - "7474:7474"
  environment:
    NEO4J_AUTH: none
neo4j-test:
  image: neo4j
  ports:
    - "7475:7474"
  environment:
    NEO4J_AUTH: none
redis:
  image: redis:2.8.23
  ports:
    - "6379"
web:
  build: .
  command: bundle exec rails s -p 9000 -b '0.0.0.0'
  environment:
    BUNDLE_JOBS: 2
    PORT: 9000
  links:
    - db:db
    - neo4j:neo4j
    - neo4j-test:neo4j-test
    - redis:redis
  ports:
    - "9000:9000"
  volumes:
    - ./:/usr/src/app
    - ./vendor/bundle:/bundle
This is the command I use to run it:
docker-compose run --rm --service-ports web bash
And what happens is:
- Externally I can access both neo4j containers just fine; I opened the browser and everything is there.
- BUT from inside the application container the port 7475 doesn't exist.
For example, my host has the IP 192.168.99.100. I can only access my second container from that IP, i.e. 192.168.99.100:7475, but from inside the container all communication goes via tcp://172.17.0.4:7474. I could explicitly tell my container to use the host's IP instead of the internal one, but that would be a little hacky, because docker-compose doesn't even inject an environment variable for it, so it would need a pre-setup step before running the docker-compose.yml.
If I could choose the port before neo4j starts, as proposed, this problem wouldn't happen and everything would work without any previous setup.
@igorescobar Why do you need to access Neo4j from within its own container?
@benbc this is my neo4j.yml:
development:
  type: server_db
  url: http://neo4j:7474
test:
  type: server_db
  url: http://neo4j-test:7475
Which means that when my env variable is RAILS_ENV=development I will be connected to the container neo4j:7474, and when I run my test suite with bundle exec rspec it will be connected to the container neo4j-test:7475. The major problem with sharing the same container is that I don't want to mix both datasets, for the same reasons you have a db_development and a db_test inside your application. This is a pretty common use case for any kind of service like redis, memcached, postgres and so on. And this is pretty much the main reason why one would use docker-compose instead of plain docker commands.
@benbc To understand a little better how docker-compose helps us: when you compose your application environment, it injects hosts into your /etc/hosts, like:
It also injects environment variables so you can use them from inside your application container:
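For illustration (the screenshots are missing here, and the exact values depend on your setup), the entries that legacy docker links inject look something like this; the addresses are hypothetical:

```text
# /etc/hosts entries added for the linked containers
172.17.0.3  neo4j
172.17.0.4  neo4j-test

# Environment variables injected by links (legacy docker links behavior)
NEO4J_PORT=tcp://172.17.0.3:7474
NEO4J_PORT_7474_TCP_ADDR=172.17.0.3
NEO4J_PORT_7474_TCP_PORT=7474
```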
@igorescobar I am a bit confused as to why you need to change the port for the test database.
Inside your web container, the test database is available at neo4j-test and the production db at neo4j, which resolve to different containers with different IP addresses.
Can your app not simply use different hostnames depending on prod/test environment?
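A sketch of that idea (a hypothetical Python helper; the Rails equivalent would be the per-environment neo4j.yml). Both containers listen on the container-internal port 7474, so only the hostname changes between environments:

```python
import os

# Hostname per environment; both containers expose the internal port 7474.
NEO4J_HOSTS = {
    "development": "neo4j",
    "test": "neo4j-test",
}

def neo4j_url(env=None):
    """Return the Neo4j URL for the given (or current) environment."""
    env = env or os.environ.get("RAILS_ENV", "development")
    return "http://%s:7474" % NEO4J_HOSTS[env]
```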
@spacecowboy It looks like they are different, but at the bottom they are the same. It is still redirected to the same container in the end: same container, same port, same host, same partition, same everything.
But I guess I was wrong... maybe it is harder than it looks to choose which port to start the server on... 😞
@igorescobar It's not that choosing which port to start on is hard (you can just add a config volume and specify it to your liking), but I think that's solving the wrong problem here.
it looks like they are different but at the bottom they are the same
They should most definitely not be the same. What is the "bottom" we are talking about here? I'd be happy to try some stuff out locally and give you a working config if I have some more details on how your app tries to connect to neo4j in prod/test.
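For reference, a minimal sketch of the config-volume approach mentioned above; the local path is illustrative and the exact config key depends on the Neo4j version:

```yaml
neo4j-test:
  image: neo4j
  volumes:
    # Hypothetical local directory containing a neo4j.conf that moves the
    # HTTP connector to 7475, e.g.:
    #   dbms.connector.http.listen_address=0.0.0.0:7475
    - ./neo4j-test-conf:/conf
```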
@spacecowboy it's pretty simple. I want to be able to have different data in my development environment and my testing environment. Pretty much the same outcome as using a postgres database, where my connection URLs would be postgres://postgres@db/myDb_development and postgres://postgres@db/myDb_test. In that case we are using the same postgres instance, but no data is shared between those environments.
Imagine the following docker-compose file:
neo4j:
  image: neo4j
  ports:
    - "7474:7474"
  environment:
    NEO4J_AUTH: none
neo4j-test:
  image: neo4j
  ports:
    - "7475:7474"
  environment:
    NEO4J_AUTH: none
web:
  build: .
  links:
    - neo4j:neo4j
    - neo4j-test:neo4j-test
  ports:
    - "9000:9000"
  volumes:
    - ./:/usr/src/app
    - ./vendor/bundle:/bundle
Run docker-compose run --rm --service-ports web bash.
If you are able to get inside the web container, import data X into the 7474 neo4j container, and import data Y into the 7475 container, such that data X does not exist inside 7475 and Y does not exist inside 7474 (without using the external IP), then you have succeeded in this quest. :)
@spacecowboy thanks for your help. It does work for me. But maybe you guys should think about my suggestion (or not :P). It would be a lot easier (and more natural, at least for me). Thank you all.
Saved as a gist for me: https://gist.github.com/igorescobar/be32be98bb83b884c5b9598b8b03980d
@spacecowboy Thanks for walking us through your docker-compose setup! This is exactly what I needed. My only comment is that it took me a few hours and a lot of internet searches to stumble on this. If you were to write this up as a blog post, or a docker + neo4j tutorial, I think people would appreciate it.
I do have one question: I haven't tried using docker for my neo4j-based app in production yet, but I plan to do so (if possible). When I run docker-compose up on production, is there a way I can avoid running the neo4j-test service? Or should I just create a separate docker-compose-production.yml file for production?
Thanks in advance for any advice.
@nwshane I suggest you open a new issue and reference this one.
@nwshane A separate compose file is required for production with this kind of compose setup. This is because docker-compose does not allow you to start only some of the containers using up.
On the other hand, I wouldn't recommend using docker-compose at all in production; this limitation is one specific reason why. Instead, I would put each container in its own systemd service file. But that's just me, perhaps.
Anyway, such a service setup could look like this:
- testdb.service
[Unit]
Description=Test database
# Requirements
Requires=docker.service
# Dependency ordering
After=docker.service
[Service]
# This is the name we give the docker container
Environment=NAME=neo4j-test
# This is the image we will run
Environment=IMAGE=neo4j:3.0
Restart=always
# Let processes take a while to start up (for first-run Docker containers)
TimeoutStartSec=0
# Change killmode from "control-group" to "none" to let Docker container
# removal work correctly.
KillMode=none
# Pre-start and Start
## Directives with "=-" are allowed to fail without consequence
## Try to create a docker network, in case one doesn't exist.
ExecStartPre=-/usr/bin/docker network create --driver bridge mynet
## Just to be thorough, kill and remove container each restart.
## Save any data you have to a volume
ExecStartPre=-/usr/bin/docker kill ${NAME}
ExecStartPre=-/usr/bin/docker rm ${NAME}
# Allow this to fail to allow it to start even when offline
ExecStartPre=-/usr/bin/docker pull ${IMAGE}
# And actually start container
ExecStart=/usr/bin/docker run --rm --net=mynet --name ${NAME} \
-p 7475:7474 -e NEO4J_AUTH=none \
${IMAGE}
# Stop container
ExecStop=/usr/bin/docker stop ${NAME}
[Install]
# Set up a dependency to the web frontend
WantedBy=web.service
- proddb.service
[Unit]
Description=Production database
# Requirements
Requires=docker.service
# Dependency ordering
After=docker.service
[Service]
# This is the name we give the docker container
Environment=NAME=neo4j-prod
# This is the image we will run
Environment=IMAGE=neo4j:3.0
Restart=always
# Let processes take a while to start up (for first-run Docker containers)
TimeoutStartSec=0
# Change killmode from "control-group" to "none" to let Docker container
# removal work correctly.
KillMode=none
# Pre-start and Start
## Directives with "=-" are allowed to fail without consequence
## Try to create a docker network, in case one doesn't exist.
ExecStartPre=-/usr/bin/docker network create --driver bridge mynet
## Just to be thorough, kill and remove container each restart.
## Save any data you have to a volume
ExecStartPre=-/usr/bin/docker kill ${NAME}
ExecStartPre=-/usr/bin/docker rm ${NAME}
# Allow this to fail to allow it to start even when offline
ExecStartPre=-/usr/bin/docker pull ${IMAGE}
# And actually start container
ExecStart=/usr/bin/docker run --rm --net=mynet --name ${NAME} \
-p 7474:7474 -e NEO4J_AUTH=none \
${IMAGE}
# Stop container
ExecStop=/usr/bin/docker stop ${NAME}
[Install]
# Set up a dependency to the web frontend
WantedBy=web.service
- web.service
[Unit]
Description=Web frontend (whatever that is for you)
# Requirements
Requires=docker.service
# Dependency ordering
After=docker.service
[Service]
# This is the name we give the docker container
Environment=NAME=web
# This is the image we will run, change this to whatever you use
Environment=IMAGE=web
Restart=always
# Let processes take a while to start up (for first-run Docker containers)
TimeoutStartSec=0
# Change killmode from "control-group" to "none" to let Docker container
# removal work correctly.
KillMode=none
# Pre-start and Start
## Directives with "=-" are allowed to fail without consequence
## Try to create a docker network, in case one doesn't exist.
ExecStartPre=-/usr/bin/docker network create --driver bridge mynet
## Just to be thorough, kill and remove container each restart.
## Save any data you have to a volume
ExecStartPre=-/usr/bin/docker kill ${NAME}
ExecStartPre=-/usr/bin/docker rm ${NAME}
# Allow this to fail to allow it to start even when offline
ExecStartPre=-/usr/bin/docker pull ${IMAGE}
# And actually start container, add whatever ports and volumes you need
ExecStart=/usr/bin/docker run --rm --net=mynet --name ${NAME} \
-p 80:80 \
${IMAGE}
# Stop container
ExecStop=/usr/bin/docker stop ${NAME}
[Install]
# Start on boot
WantedBy=multi-user.target
And you would do this to make it all go:
cp *.service /etc/systemd/system/
systemctl enable web.service proddb.service
systemctl start web.service # Will start proddb.service too, because it is "wanted"
Note that I did not enable the test db; enabling is only needed for services that should start automatically. You can now manually bring your test db up/down via
systemctl start testdb.service
systemctl stop testdb.service
without messing with the lifecycle of proddb or web. Rebooting your computer will also automatically start those two, but not testdb (unless you enable it). You can of course also start any docker container and interact with proddb/testdb simply by specifying --net mynet as in the service files (but not from inside a docker-compose file, since compose adds a prefix to network names to make sure they are unique).
@spacecowboy Fantastic! I'll try this out and ask for help if I get stuck on something.