tapad / sbt-docker-compose
Integrates Docker Compose functionality into sbt (archived as unmaintained)
License: BSD 3-Clause "New" or "Revised" License
Hi,
when using the new 2.0 file format, Compose creates a new Docker network every time, so if you run docker inspect on any of the containers you get something like this:
"NetworkSettings": {
"Bridge": "",
"SandboxID": "09599d0d6575a5c1790f09fca7f9a38f89421516bb48b9b9adf868bd6bd0bb80",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/var/run/docker/netns/09599d0d6575",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"910330_default": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"zookeeper",
"72471c8769e1"
],
"NetworkID": "9a147c09c7f873c1ded7ad8743a7cf82fec1849a0827ed0dfc6c5ca33fa12a1c",
"EndpointID": "5ea92039c19addeb85c32d2484e7c87b896b13e48099a0a4a688c52b308aa457",
"Gateway": "172.18.0.1",
"IPAddress": "172.18.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:12:00:02"
}
}
}
Note that the top-level Gateway is empty, so the current approach to determining the host yields an empty string.
Everything works as expected with the 1.0 compose format.
I'm not sure what your plans are regarding 2.0, but I suppose a warning in the docs would be nice at the very least :)
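In the 2.x format the per-network settings live under NetworkSettings.Networks rather than at the top level, so the gateway can still be recovered with a Go-template query (a sketch; requires a running Docker daemon, and <container-id> is a placeholder):

```shell
# Print the gateway of the network(s) the container is attached to,
# instead of the (empty) top-level NetworkSettings.Gateway field.
docker inspect --format '{{range .NetworkSettings.Networks}}{{.Gateway}}{{end}}' <container-id>
```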
If I have an application.conf file in src/main/resources and in src/test/resources/, dockerComposeTest picks the one from src/main/resources/ and sbt test the other one. This might be solved by changing the way the testDependenciesClasspath is built (as mentioned in #36):
testDependenciesClasspath := {
  // requires: import java.io.File
  val fullClasspathCompile = (fullClasspath in Compile).value
  val classpathTestManaged = (managedClasspath in Test).value
  val classpathTestUnmanaged = (unmanagedClasspath in Test).value
  val testResources = (resources in Test).value
  // Test resources go first so src/test/resources wins over src/main/resources.
  (testResources ++ fullClasspathCompile.files ++ classpathTestManaged.files ++ classpathTestUnmanaged.files)
    .map(_.getAbsoluteFile).mkString(File.pathSeparator)
}
Hi!
Having a great time with your plugin!
When developing, I use dockerComposeRestart
to restart my application with the changes I made to it.
This works great for a while, but since there's a limit on the number of Docker networks you can have, I get this message when trying to create the 32nd network:
failed to parse pool request for address space "LocalDefault" pool "" subpool "": could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network
After removing a network, I can start the application again.
I guess this could be fixed if dockerComposeRestart were an alias for docker-compose down && docker-compose up, since docker-compose down removes the network.
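Until that changes, leaked networks can also be cleaned up manually (a sketch; note that prune removes every network not used by at least one container, not just this plugin's):

```shell
# Remove all Docker networks that no container is currently attached to.
docker network prune -f
```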
Thanks!
I'm running on GNU/Linux and have the luxury of a native Docker, but sbt-docker-compose insists that I use a docker-machine with a name. This is a problem because sbt-native-packager publishes to my native Docker, not to the docker-machine named default (same with our CI).
Could there please be an option to use native Docker? (A workaround for publishing to the named docker-machine would also be appreciated.)
Is it possible to support some sort of "no-color" setting so that the sbt-docker-compose plugin strips the color codes from its STDOUT/STDERR? The use case is the Jenkins interface, where we'd like the color escape characters to go away.
Hi!
I'm trying to execute a test that queries the database which runs in its own container.
The database is Postgres and I configure it using environment variables in my docker-compose.yml.
The test currently fails since I can't connect to the database.
SEVERE: Connection error:
org.postgresql.util.PSQLException: The connection attempt failed.
...
Caused by: java.net.UnknownHostException: postgresForScalaJsExample
For debugging, I print out the name and value of each environment variable in the test suite. This should contain the credentials for the database.
When I executed dockerComposeTest skipPull I was in for a surprise: I saw the environment variables of my Windows host system.
Is this a bug? I'm no longer sure where the tests are executed: on the host or inside the Docker VM?
Thanks a lot in advance!
It would be ideal if docker compose commands were defined as tasks instead. The reason is that tasks execute in the context of scope, but the docker compose commands do not.
Use case 1: As a developer on a multi-module project I would like each module to have its own docker compose file.
Use case 2: As a developer I would like to have a separate docker compose file for the IntegrationTest configuration and the Test configuration (so I can do it:dockerComposeUp and test:dockerComposeUp).
Use case 3: As a developer I would like to use docker compose commands as a step of a larger task without being forced to define sbt commands
Use case 4: As a developer on a multi-module project I would like to run docker compose for only a single module (so I can do foo/dockerComposeUp).
The sbt documentation on commands indicates:
Typically, you would resort to a command when you need to do something that’s impossible in a regular task.
What was the motivation for using commands over tasks?
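To illustrate what tasks would enable, a per-scope task key could be declared roughly like this (a sketch; dockerComposeUpTask is a hypothetical key, not part of the plugin's current API):

```scala
// Hypothetical task-based API: unlike a Command, a TaskKey can be
// scoped per project and per configuration.
val dockerComposeUpTask = taskKey[Unit]("Starts the Docker Compose instance for this scope")

// Each scope could then point at its own compose file, enabling
// foo/it:dockerComposeUpTask and foo/test:dockerComposeUpTask:
dockerComposeUpTask in IntegrationTest := { /* launch docker/it.yml */ }
dockerComposeUpTask in Test := { /* launch docker/test.yml */ }
```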
First off, I would like to thank you very much for an excellent contribution to the Scala ecosystem. I spent the whole weekend googling how to set up a proper integration test environment in Scala for my simple hobby project, and to my big surprise it turned out there weren't any very good alternatives until I found this project. It does exactly what I want: it enables me to run proper integration tests against a standalone instance.
However, while I managed to get most things set up, I'm still not able to access my Akka HTTP server on OSX.
This is the output that dockerComposeUp gives me:
+---------+-----------------+--------------+--------------+----------------+--------------+---------+
| Service | Host:Port | Tag Version | Image Source | Container Port | Container Id | IsDebug |
+=========+=================+==============+==============+================+==============+=========+
| bfg | localhost:32770 | 0.1.SNAPSHOT | build | 8080 | 4e1c0460bcf0 | |
| bfg | localhost:32771 | 0.1.SNAPSHOT | build | 5005 | 4e1c0460bcf0 | DEBUG |
+---------+-----------------+--------------+--------------+----------------+--------------+---------+
Running
docker-compose -p 206507 -f /var/folders/yh/mm1bdmx9073_b15lw69b2qmh0000gn/T/compose-updated6736504145219745864.yml logs -f
gives me this last line, which looks OK:
bfg_1 | 17:04:47 [default-akka.actor.default-dispatcher-3] INFO com.bfg.infrastructure.server.Server - Server started on /127.0.0.1:8080
When I'm not running with docker-compose I can access my server at localhost:8080. However, trying localhost:32770, which should be equivalent when I'm using docker-compose, just gives me this in Chrome:
This page isn’t working
localhost didn’t send any data.
ERR_EMPTY_RESPONSE
This is the content of build.sbt, which is a mix of your multi-project example and the integration test example:
import Dependencies._
import java.io.File

lazy val commonSettings = Seq(
  version := "0.1.SNAPSHOT",
  organization := "com.bfg",
  scalaVersion := "2.12.1"
)

enablePlugins(DockerComposePlugin)

docker <<= (docker in bfg) map { (image) => image }

//Set the image creation Task to be the one used by sbt-docker
dockerImageCreationTask := docker.value

lazy val bfg = project
  .settings(
    name := "bfg",
    Defaults.itSettings,
    commonSettings,
    libraryDependencies ++= commonDeps,
    libraryDependencies += "org.scalaj" %% "scalaj-http" % "2.3.0" % "it",
    //To use 'dockerComposeTest' to run tests in the 'IntegrationTest' scope instead of the default 'Test' scope:
    // 1) Package the tests that exist in the IntegrationTest scope
    testCasesPackageTask := (sbt.Keys.packageBin in IntegrationTest).value,
    // 2) Specify the path to the IntegrationTest jar produced in Step 1
    testCasesJar := artifactPath.in(IntegrationTest, packageBin).value.getAbsolutePath,
    // 3) Include any IntegrationTest scoped resources on the classpath if they are used in the tests
    testDependenciesClasspath := {
      val fullClasspathCompile = (fullClasspath in Compile).value
      val classpathTestManaged = (managedClasspath in IntegrationTest).value
      val classpathTestUnmanaged = (unmanagedClasspath in IntegrationTest).value
      val testResources = (resources in IntegrationTest).value
      (fullClasspathCompile.files ++ classpathTestManaged.files ++ classpathTestUnmanaged.files ++ testResources).map(_.getAbsoluteFile).mkString(File.pathSeparator)
    },
    dockerfile in docker := {
      new Dockerfile {
        val dockerAppPath = "/app/"
        val mainClassString = (mainClass in Compile).value.get
        val classpath = (fullClasspath in Compile).value
        from("java")
        add(classpath.files, dockerAppPath)
        entryPoint("java", "-cp", s"$dockerAppPath:$dockerAppPath/*", s"$mainClassString")
      }
    },
    imageNames in docker := Seq(ImageName(
      repository = name.value.toLowerCase,
      tag = Some(version.value))
    )
  )
  .configs(IntegrationTest)
  .enablePlugins(DockerPlugin, DockerComposePlugin)
And the docker-compose.yml:
bfg:
  image: bfg:0.1.SNAPSHOT
  environment:
    JAVA_TOOL_OPTIONS: -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
  ports:
    - "0:8080"
    - "0:5005"
My question is whether there are any known additional steps needed to get Akka HTTP working, or whether you could give any hints on how to debug the issue further. Any input would be appreciated, since I would really like to get this working.
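One detail worth checking: the log line above shows the server bound to /127.0.0.1:8080 inside the container, and Docker's published ports only forward traffic to interfaces the process actually listens on. A minimal sketch of binding to all interfaces (assuming Akka HTTP 10.x, where routes stands for your route definition):

```scala
// Bind to 0.0.0.0 so the container's mapped port is reachable from the
// host; 127.0.0.1 is only reachable from inside the container itself.
Http().bindAndHandle(routes, interface = "0.0.0.0", port = 8080)
```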
I'm afraid I don't know how to debug this failure to run dockerComposeUp; what can I do to try and find the problem?
I guess this is being called? https://github.com/Tapad/sbt-docker-compose/blob/master/src/main/scala/com/tapad/docker/DockerComposePlugin.scala#L269
I have dockerImageCreationTask := (publishLocal in Docker).value and this is published.
The .yml file is deleted before I can look at it.
It's kind of weird that it says it's skipping ruleengine (my project) and then it tries to pull it.
Reading Compose File: docker/ruleengine.yml
Created Compose File with Processed Custom Tags: /tmp/compose-updated7013807111702461847.yml
Pulling Docker images except for locally built images and images defined as <skipPull> or <localBuild>.
Skipping Pull of image: ruleengine:latest
Creating network "935436_default" with the default driver
Pulling ruleengine (ruleengine:latest)...
repository ruleengine not found: does not exist or no pull access
No stopped containers
935436_default
Error starting Docker Compose instance. Shutting down containers...
After some experimentation and looking through the code, it looks like this supports only the Test scope. I like to use the IntegrationTest scope to keep my integration tests separate from my unit tests, instead of using tags.
Is there a setting that I am missing to enable this?
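For reference, the build.sbt excerpts elsewhere on this page point the plugin at IntegrationTest-scoped tests with settings along these lines (a sketch; assumes Defaults.itSettings and .configs(IntegrationTest) are already in place):

```scala
// 1) Package the tests that exist in the IntegrationTest scope.
testCasesPackageTask := (sbt.Keys.packageBin in IntegrationTest).value
// 2) Point the plugin at the jar produced in step 1.
testCasesJar := artifactPath.in(IntegrationTest, packageBin).value.getAbsolutePath
```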
What needs to be done in order to support the extends keyword? We have all our docker compose files under a directory, and it would be really messy to explicitly copy/paste everything.
Hi!
The following line in docker-compose.yml trips up the plugin:
environment:
  - TLS_KEY_STORE_PASSWORD=${TLS_KEY_STORE_PASSWORD:-9f0ht032fr09fds909SDG$3gt32f#@FDSfs}
Console output with stack trace:
Reading Compose File: C:\Users\maine\Dev\Scala\scala-js-example\app\jvm/../../docker-compose.yml
java.lang.IndexOutOfBoundsException: No group 3
at java.util.regex.Matcher.start(Matcher.java:375)
at java.util.regex.Matcher.appendReplacement(Matcher.java:880)
at java.util.regex.Matcher.replaceAll(Matcher.java:955)
at java.lang.String.replaceAll(String.java:2223)
at com.tapad.docker.ComposeFile$$anonfun$processVariableSubstitution$1.apply(ComposeFile.scala:340)
at com.tapad.docker.ComposeFile$$anonfun$processVariableSubstitution$1.apply(ComposeFile.scala:339)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:144)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:144)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:144)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1157)
at com.tapad.docker.ComposeFile$class.processVariableSubstitution(ComposeFile.scala:339)
at com.tapad.docker.DockerComposePluginLocal.processVariableSubstitution(DockerComposePlugin.scala:89)
at com.tapad.docker.ComposeFile$class.readComposeFile(ComposeFile.scala:311)
at com.tapad.docker.DockerComposePluginLocal.readComposeFile(DockerComposePlugin.scala:89)
at com.tapad.docker.DockerComposePluginLocal.startDockerCompose(DockerComposePlugin.scala:248)
at com.tapad.docker.DockerComposePluginLocal.launchInstanceWithLatestChanges(DockerComposePlugin.scala:181)
at com.tapad.docker.DockerComposePluginLocal.restartRunningInstance(DockerComposePlugin.scala:204)
at com.tapad.docker.DockerComposePluginLocal$$anonfun$dockerComposeRestartCommand$2.apply(DockerComposePlugin.scala:130)
at com.tapad.docker.DockerComposePluginLocal$$anonfun$dockerComposeRestartCommand$2.apply(DockerComposePlugin.scala:128)
at sbt.Command$$anonfun$applyEffect$1$$anonfun$apply$2.apply(Command.scala:59)
at sbt.Command$$anonfun$applyEffect$1$$anonfun$apply$2.apply(Command.scala:59)
at sbt.Command$$anonfun$applyEffect$2$$anonfun$apply$3.apply(Command.scala:61)
at sbt.Command$$anonfun$applyEffect$2$$anonfun$apply$3.apply(Command.scala:61)
at sbt.Command$.process(Command.scala:93)
at sbt.MainLoop$$anonfun$1$$anonfun$apply$1.apply(MainLoop.scala:96)
at sbt.MainLoop$$anonfun$1$$anonfun$apply$1.apply(MainLoop.scala:96)
at sbt.State$$anon$1.process(State.scala:184)
at sbt.MainLoop$$anonfun$1.apply(MainLoop.scala:96)
at sbt.MainLoop$$anonfun$1.apply(MainLoop.scala:96)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
at sbt.MainLoop$.next(MainLoop.scala:96)
at sbt.MainLoop$.run(MainLoop.scala:89)
at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:68)
at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:63)
at sbt.Using.apply(Using.scala:24)
at sbt.MainLoop$.runWithNewLog(MainLoop.scala:63)
at sbt.MainLoop$.runAndClearLast(MainLoop.scala:46)
at sbt.MainLoop$.runLoggedLoop(MainLoop.scala:30)
at sbt.MainLoop$.runLogged(MainLoop.scala:22)
at sbt.StandardMain$.runManaged(Main.scala:57)
at sbt.xMain.run(Main.scala:29)
at xsbt.boot.Launch$$anonfun$run$1.apply(Launch.scala:109)
at xsbt.boot.Launch$.withContextLoader(Launch.scala:128)
at xsbt.boot.Launch$.run(Launch.scala:109)
at xsbt.boot.Launch$$anonfun$apply$1.apply(Launch.scala:35)
at xsbt.boot.Launch$.launch(Launch.scala:117)
at xsbt.boot.Launch$.apply(Launch.scala:18)
at xsbt.boot.Boot$.runImpl(Boot.scala:41)
at xsbt.boot.Boot$.main(Boot.scala:17)
at xsbt.boot.Boot.main(Boot.scala)
[error] java.lang.IndexOutOfBoundsException: No group 3
With trial and error I isolated the character that causes the exception: it's the dollar sign $ in the password.
Yet when I tried to simplify the line to
- A_SECRET=${ENV_VAR_WITH_SECRET:-secret$password}
I merely got Illegal group reference in the console, with no stack trace.
Thanks in advance!
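Both symptoms match java.util.regex.Matcher treating $ in a replacement string as a group reference: $3 references a (here nonexistent) third capture group, while $p is not a valid reference at all. A small demonstration of the failure mode and of Matcher.quoteReplacement, the usual fix (a sketch, not the plugin's actual code):

```java
import java.util.regex.Matcher;

public class DollarInReplacement {
    public static void main(String[] args) {
        String secret = "secret$password"; // '$' followed by a non-digit

        // Naive replaceAll parses '$p' as a group reference and throws
        // IllegalArgumentException ("Illegal group reference"); '$3'
        // would throw IndexOutOfBoundsException ("No group 3") instead.
        boolean threw = false;
        try {
            "KEY=PLACEHOLDER".replaceAll("PLACEHOLDER", secret);
        } catch (RuntimeException e) {
            threw = true;
        }

        // Quoting the replacement makes '$' and '\' literal.
        String safe = "KEY=PLACEHOLDER".replaceAll("PLACEHOLDER", Matcher.quoteReplacement(secret));

        System.out.println(threw); // true
        System.out.println(safe);  // KEY=secret$password
    }
}
```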
Maybe I'm missing something in the docs? It would be really useful to be able to have multiple different configurations, e.g. one for integration tests and another for performance tests.
Because a Command is used instead of a Task, we can't have per-project settings. But perhaps named configs could be used instead? e.g. dockerComposeUp --config=perf
The plugin currently implements the dockerComposeStop command, which ultimately runs docker-compose stop, but it also deletes the stopped containers, their anonymous volumes, and the network for the Compose project instance (composition?), according to the composeRemoveContainersOnShutdown and composeRemoveNetworkOnShutdown settings, which default to true.
This essentially matches the semantics of docker-compose down rather than docker-compose stop, which leaves the resources around to be started back up again (one might intuitively expect dockerComposeRestart to do that).
I feel that a dedicated dockerComposeDown command would be less surprising for users with knowledge of Docker Compose, with dockerComposeStop changed to no longer remove containers and the network. IMO the two settings mentioned above should then go away entirely.
This would also open the way to resolve a current shortcoming: named volumes are never deleted. I have real-world Compose configs using those—see #81. The current implementation deletes the containers with:
docker-compose rm -v -f
That only deletes anonymous volumes. This would remove named ones as well:
docker-compose down --volumes
Without this, sbt-docker-compose will leave my CI agents with tons of orphaned Docker volumes over time if builds reference Compose configs containing named volumes.
Changing dockerComposeStop behavior would be backwards-incompatible, but to me the improved parity with docker-compose and the named volume cleanup would be a worthy part of a major version bump. We could initially introduce dockerComposeDown in a minor version, without yet changing dockerComposeStop or its settings, if desired.
Because all sbt keys share a single namespace, it's standard convention for sbt plugins to prefix all plugin-specific keys with the plugin name. This would mean prefixing everything with dockerCompose... but if that's too much, maybe dc would be enough.
Hey,
Thanks for writing this sbt plugin, it seems promising.
I would like to use it to automate some integration testing with a Spark driver app and its dependent services: Kafka and Cassandra. I've been using docker-compose for dev purposes with a fairly rudimentary setup. The docker-compose.yml works fine with the docker-compose command, but something in your parsing logic is blowing up on it. I'm using version 1.0.1.
Creating Local Docker Compose Environment.
Reading Compose File: /home/seglo/source/demo/docker/docker-compose.yml
java.lang.NullPointerException
at com.tapad.docker.ComposeFile$$anonfun$processCustomTags$1.apply(ComposeFile.scala:47)
at com.tapad.docker.ComposeFile$$anonfun$processCustomTags$1.apply(ComposeFile.scala:45)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.Map$Map2.foreach(Map.scala:130)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at com.tapad.docker.ComposeFile$class.processCustomTags(ComposeFile.scala:45)
at com.tapad.docker.DockerComposePluginLocal.processCustomTags(DockerComposePlugin.scala:70)
at com.tapad.docker.DockerComposePluginLocal.startDockerCompose(DockerComposePlugin.scala:150)
at com.tapad.docker.DockerComposePluginLocal.launchInstanceWithLatestChanges(DockerComposePlugin.scala:114)
at com.tapad.docker.DockerComposePluginLocal$$anonfun$dockerComposeUpCommand$1.apply(DockerComposePlugin.scala:80)
at com.tapad.docker.DockerComposePluginLocal$$anonfun$dockerComposeUpCommand$1.apply(DockerComposePlugin.scala:78)
at sbt.Command$$anonfun$sbt$Command$$apply1$1$$anonfun$apply$6.apply(Command.scala:70)
at sbt.Command$.process(Command.scala:92)
at sbt.MainLoop$$anonfun$1$$anonfun$apply$1.apply(MainLoop.scala:98)
at sbt.MainLoop$$anonfun$1$$anonfun$apply$1.apply(MainLoop.scala:98)
at sbt.State$$anon$1.process(State.scala:184)
at sbt.MainLoop$$anonfun$1.apply(MainLoop.scala:98)
at sbt.MainLoop$$anonfun$1.apply(MainLoop.scala:98)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
at sbt.MainLoop$.next(MainLoop.scala:98)
at sbt.MainLoop$.run(MainLoop.scala:91)
at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:70)
at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:65)
at sbt.Using.apply(Using.scala:24)
at sbt.MainLoop$.runWithNewLog(MainLoop.scala:65)
at sbt.MainLoop$.runAndClearLast(MainLoop.scala:48)
at sbt.MainLoop$.runLoggedLoop(MainLoop.scala:32)
at sbt.MainLoop$.runLogged(MainLoop.scala:24)
at sbt.StandardMain$.runManaged(Main.scala:53)
at sbt.xMain.run(Main.scala:28)
at xsbt.boot.Launch$$anonfun$run$1.apply(Launch.scala:109)
at xsbt.boot.Launch$.withContextLoader(Launch.scala:128)
at xsbt.boot.Launch$.run(Launch.scala:109)
at xsbt.boot.Launch$$anonfun$apply$1.apply(Launch.scala:35)
at xsbt.boot.Launch$.launch(Launch.scala:117)
at xsbt.boot.Launch$.apply(Launch.scala:18)
at xsbt.boot.Boot$.runImpl(Boot.scala:41)
at xsbt.boot.Boot$.main(Boot.scala:17)
at xsbt.boot.Boot.main(Boot.scala)
[error] java.lang.NullPointerException
[error] Use 'last' for the full log.
Here's my docker-compose.yml:
version: '2'
services:
  cassandra:
    container_name: bp-cassandra
    image: cassandra:2.1.14
    ports:
      - "7000-7001:7000-7001"
      - "7199:7199"
      - "9042:9042"
      - "9160:9160"
  kafka:
    container_name: bp-kafka
    build: ./containers/spotify-docker-kafka/kafka
    ports:
      - "9092:9092"
      - "2181:2181"
    environment:
      - ADVERTISED_HOST=172.17.0.1 # this must match the docker host ip
      - ADVERTISED_PORT=9092
I updated the spotify/kafka image to the latest version, which is why I'm building it locally.
It should return a non-zero exit code; otherwise a CI server will think that the tests completed successfully.
Workaround (on Mac/Linux):
sbt dockerComposeUp && sbt test && sbt dockerComposeStop
Hi,
Do you have any idea why I am getting this error?
Creating Local Docker Compose Environment.
Reading Compose File: /home/cioconnor/development/risk/docker/docker-compose.yml
java.lang.NullPointerException
at scala.collection.convert.Wrappers$JMapWrapperLike$$anon$2.<init>(Wrappers.scala:265)
at scala.collection.convert.Wrappers$JMapWrapperLike$class.iterator(Wrappers.scala:264)
at scala.collection.convert.Wrappers$JMapWrapper.iterator(Wrappers.scala:275)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.filter(TraversableLike.scala:263)
at scala.collection.AbstractTraversable.filter(Traversable.scala:105)
at com.tapad.docker.ComposeFile$class.getPortInfo(ComposeFile.scala:107)
at com.tapad.docker.DockerComposePluginLocal.getPortInfo(DockerComposePlugin.scala:70)
at com.tapad.docker.ComposeFile$$anonfun$processCustomTags$1.apply(ComposeFile.scala:63)
at com.tapad.docker.ComposeFile$$anonfun$processCustomTags$1.apply(ComposeFile.scala:45)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at com.tapad.docker.ComposeFile$class.processCustomTags(ComposeFile.scala:45)
at com.tapad.docker.DockerComposePluginLocal.processCustomTags(DockerComposePlugin.scala:70)
at com.tapad.docker.DockerComposePluginLocal.startDockerCompose(DockerComposePlugin.scala:150)
at com.tapad.docker.DockerComposePluginLocal.launchInstanceWithLatestChanges(DockerComposePlugin.scala:114)
at com.tapad.docker.DockerComposePluginLocal$$anonfun$dockerComposeUpCommand$1.apply(DockerComposePlugin.scala:80)
at com.tapad.docker.DockerComposePluginLocal$$anonfun$dockerComposeUpCommand$1.apply(DockerComposePlugin.scala:78)
at sbt.Command$$anonfun$sbt$Command$$apply1$1$$anonfun$apply$6.apply(Command.scala:70)
at sbt.Command$.process(Command.scala:92)
at sbt.MainLoop$$anonfun$1$$anonfun$apply$1.apply(MainLoop.scala:98)
at sbt.MainLoop$$anonfun$1$$anonfun$apply$1.apply(MainLoop.scala:98)
at sbt.State$$anon$1.process(State.scala:184)
at sbt.MainLoop$$anonfun$1.apply(MainLoop.scala:98)
at sbt.MainLoop$$anonfun$1.apply(MainLoop.scala:98)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
at sbt.MainLoop$.next(MainLoop.scala:98)
at sbt.MainLoop$.run(MainLoop.scala:91)
at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:70)
at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:65)
at sbt.Using.apply(Using.scala:24)
at sbt.MainLoop$.runWithNewLog(MainLoop.scala:65)
at sbt.MainLoop$.runAndClearLast(MainLoop.scala:48)
at sbt.MainLoop$.runLoggedLoop(MainLoop.scala:32)
at sbt.MainLoop$.runLogged(MainLoop.scala:24)
at sbt.StandardMain$.runManaged(Main.scala:53)
at sbt.xMain.run(Main.scala:28)
at xsbt.boot.Launch$$anonfun$run$1.apply(Launch.scala:109)
at xsbt.boot.Launch$.withContextLoader(Launch.scala:128)
at xsbt.boot.Launch$.run(Launch.scala:109)
at xsbt.boot.Launch$$anonfun$apply$1.apply(Launch.scala:35)
at xsbt.boot.Launch$.launch(Launch.scala:117)
at xsbt.boot.Launch$.apply(Launch.scala:18)
at xsbt.boot.Boot$.runImpl(Boot.scala:41)
at xsbt.boot.Boot$.main(Boot.scala:17)
at xsbt.boot.Boot.main(Boot.scala)
Thanks
I have a multi-module project; why do I need to have a docker-compose file in the root directory?
I just want to have a docker-compose file in each subproject and build them from there.
Not sure why the plugin is looking for the docker-compose file in the root of the multi-module project.
Given a docker compose file:
version: "2.1"
services:
mongonode:
image: mongo
networks:
- mongodb
networks:
mongodb:
After running dockerComposeTest:
Going to remove 245930_mongonode_1
Error response from daemon: network 245930_default not found
And docker network ls shows:
8b286c100378 295810_mongodb bridge local
It seems that dockerComposeTest uses docker-compose stop to stop the containers; docker-compose down would clean up the resources properly.
Currently I'm trying to keep certain configuration in one spot, and to load it I need the fullClasspath, which in sbt is a Task. This means I can't use my config in the substitution variables for my compose files, which means I have to repeat things and... well, that sucks. :(
Thoughts?
In addition to ScalaTest, it would be cool to have support for Specs2. If there is interest, and if you could guide me on how to go about this, I could find some time and try to do it myself.
Please support running arbitrary sbt tasks/commands while the docker-compose containers are up, not just ScalaTest.
I'm attempting to write my integration/acceptance tests using cucumber-scala instead of ScalaTest. Currently I'm forced to do this via a ScalaTest wrapper test case which "manually" invokes cucumber.runtime.Runtime.run() and fails the test when Runtime.exitstatus != 0, since the sbt cucumber plugin apparently cannot be invoked from sbt-docker-compose.
I understand that you'd then miss out on the possibility to use the ScalaTest specific ConfigMap, however environment variables (system properties) still work fine in those cases.
However, the application I'd like to test uses Akka Cluster Sharding, so I think it would be more straightforward to use Akka's multi-jvm or maybe even multi-node support, which are implemented as sbt plugins plus test libraries. It would be awesome if sbt-docker-compose could support those as well. They are, however, based on ScalaTest, so I think I might as well give in and switch from cucumber to ScalaTest's FeatureSpec instead.
If you want to look into it, a similar sample project with multi-jvm tests (but no sbt-docker-compose usage yet) is available at https://github.com/typesafehub/activator-akka-cluster-sharding-scala
I have two services defined in my docker-compose.yml, both set to restart: always. When one crashes, I noticed that dockerComposeInstances does not reflect the new IP:PORT it was restarted on.
Do you think it would be easy / a good idea to no longer cache that value and just look it up from docker every time?
Today I updated to Docker 1.13.0, which broke the plugin.
When running dockerComposeUp it prints the following error:
Waiting for container Id to be available for service 'db' time remaining: 499
Error response from daemon: Invalid filter '"name'
As you can see there is a " before the name, so I guess the error is in DockerCommands.getDockerContainerId.
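For context, Docker 1.13's CLI validates filter syntax more strictly, so a stray quote character embedded in the filter value is rejected. The unquoted form works from a shell (a sketch; requires a running Docker daemon, and myservice is a placeholder name):

```shell
# Valid: the filter value carries no embedded quote characters.
docker ps -aq --filter name=myservice
```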
When running dockerComposeTest the Docker container is created OK. However, the whole project is then compiled again, now for some reason using 2.10 (see the line close to the end marked with **) instead of reusing the classes already compiled for 2.12. Nowhere in my sbt file do I say that 2.10 should be used. Why is the project compiled again, and why does the plugin use 2.10 this time?
import Dependencies._
import java.io.File

lazy val commonSettings = Seq(
  version := "0.1.SNAPSHOT",
  organization := "com.bfg",
  scalaVersion := "2.12.1"
)

enablePlugins(DockerComposePlugin)

docker <<= (docker in bfg) map { (image) => image }

//Set the image creation Task to be the one used by sbt-docker
dockerImageCreationTask := docker.value

lazy val bfg = project
  .settings(
    name := "bfg",
    Defaults.itSettings,
    commonSettings,
    libraryDependencies ++= commonDeps,
    libraryDependencies += "org.scalaj" %% "scalaj-http" % "2.3.0" % "it",
    //To use 'dockerComposeTest' to run tests in the 'IntegrationTest' scope instead of the default 'Test' scope:
    // 1) Package the tests that exist in the IntegrationTest scope
    testCasesPackageTask := (sbt.Keys.packageBin in IntegrationTest).value,
    // 2) Specify the path to the IntegrationTest jar produced in Step 1
    testCasesJar := artifactPath.in(IntegrationTest, packageBin).value.getAbsolutePath,
    // 3) Include any IntegrationTest scoped resources on the classpath if they are used in the tests
    testDependenciesClasspath := {
      val fullClasspathCompile = (fullClasspath in Compile).value
      val classpathTestManaged = (managedClasspath in IntegrationTest).value
      val classpathTestUnmanaged = (unmanagedClasspath in IntegrationTest).value
      val testResources = (resources in IntegrationTest).value
      (fullClasspathCompile.files ++ classpathTestManaged.files ++ classpathTestUnmanaged.files ++ testResources).map(_.getAbsoluteFile).mkString(File.pathSeparator)
    },
    dockerfile in docker := {
      new Dockerfile {
        val dockerAppPath = "/app/"
        val mainClassString = (mainClass in Compile).value.get
        val classpath = (fullClasspath in Compile).value
        from("java")
        add(classpath.files, dockerAppPath)
        entryPoint("java", "-cp", s"$dockerAppPath:$dockerAppPath/*", s"$mainClassString")
      }
    },
    imageNames in docker := Seq(ImageName(
      repository = name.value.toLowerCase,
      tag = Some("latest"))
    )
  )
  .configs(IntegrationTest)
  .enablePlugins(DockerPlugin, DockerComposePlugin)
> dockerComposeTest
Starting Test Pass against a new local Docker Compose instance.
Building a new Docker image.
[info] Updating {file:/Users/henke/Documents/code/scala/bfg6/bfg6/}bfg...
[info] Resolving jline#jline;2.14.1 ...
[info] Done updating.
[warn] There may be incompatibilities among your library dependencies.
[warn] Here are some of the libraries that were evicted:
[warn] * com.typesafe.akka:akka-stream-testkit_2.12:2.4.17 -> 2.5.0
[warn] * com.typesafe.akka:akka-stream_2.12:2.4.17 -> 2.5.0
[warn] Run 'evicted' to see detailed eviction warnings
[info] Compiling 17 Scala sources to /Users/henke/Documents/code/scala/bfg6/bfg6/bfg/target/scala-2.12/classes...
[info] Sending build context to Docker daemon 63.44 MB
[info]
[info] Step 1/3 : FROM java
[info] ---> d23bdf5b1b1b
[info] Step 2/3 : ADD 0/classes 1/scala-library-2.12.1.jar 2/akka-http-cors_2.12-0.2.1.jar 3/scala-java8-compat_2.12-0.8.0.jar 4/akka-stream_2.12-2.5.0.jar 5/akka-actor_2.12-2.5.0.jar 6/config-1.3.1.jar 7/reactive-streams-1.0.0.jar 8/ssl-config-core_2.12-0.2.1.jar 9/scala-parser-combinators_2.12-1.0.4.jar 10/swagger-akka-http_2.12-0.9.1.jar 11/swagger-core-1.5.12.jar 12/commons-lang3-3.2.1.jar 13/jackson-dataformat-yaml-2.8.6.jar 14/jackson-core-2.8.6.jar 15/snakeyaml-1.17.jar 16/swagger-models-1.5.12.jar 17/swagger-annotations-1.5.12.jar 18/guava-18.0.jar 19/validation-api-1.1.0.Final.jar 20/swagger-jaxrs-1.5.12.jar 21/jsr311-api-1.1.1.jar 22/reflections-0.9.10.jar 23/javassist-3.18.2-GA.jar 24/annotations-2.0.1.jar 25/swagger-scala-module_2.12-1.0.3.jar 26/jackson-module-scala_2.12-2.8.6.jar 27/scala-reflect-2.12.1.jar 28/jackson-annotations-2.8.6.jar 29/jackson-databind-2.8.6.jar 30/jackson-module-paranamer-2.8.6.jar 31/paranamer-2.8.jar 32/scalatest_2.12-3.0.1.jar 33/scalactic_2.12-3.0.1.jar 34/scala-xml_2.12-1.0.5.jar 35/util_2.12-2.3.0.jar 36/tagging_2.12-1.0.0.jar 37/akka-http-testkit_2.12-10.0.5.jar 38/akka-stream-testkit_2.12-2.5.0.jar 39/akka-testkit_2.12-2.5.0.jar 40/scalamock-scalatest-support_2.12-3.5.0.jar 41/scalamock-core_2.12-3.5.0.jar 42/akka-http-circe_2.12-1.16.0.jar 43/akka-http_2.12-10.0.6.jar 44/akka-http-core_2.12-10.0.6.jar 45/akka-parsing_2.12-10.0.6.jar 46/circe-core_2.12-0.8.0.jar 47/circe-numbers_2.12-0.8.0.jar 48/cats-core_2.12-0.9.0.jar 49/cats-macros_2.12-0.9.0.jar 50/simulacrum_2.12-0.10.0.jar 51/macro-compat_2.12-1.1.1.jar 52/machinist_2.12-0.6.1.jar 53/cats-kernel_2.12-0.9.0.jar 54/circe-jawn_2.12-0.8.0.jar 55/jawn-parser_2.12-0.10.4.jar 56/cats_2.12-0.9.0.jar 57/cats-kernel-laws_2.12-0.9.0.jar 58/scalacheck_2.12-1.13.4.jar 59/test-interface-1.0.jar 60/discipline_2.12-0.7.2.jar 61/catalysts-platform_2.12-0.0.5.jar 62/catalysts-macros_2.12-0.0.5.jar 63/cats-laws_2.12-0.9.0.jar 64/cats-free_2.12-0.9.0.jar 65/cats-jvm_2.12-0.9.0.jar 
66/monocle-core_2.12-1.4.0.jar 67/scalaz-core_2.12-7.2.8.jar 68/monocle-macro_2.12-1.4.0.jar 69/monocle-law_2.12-1.4.0.jar 70/logback-classic-1.1.7.jar 71/logback-core-1.1.7.jar 72/scala-logging_2.12-3.5.0.jar 73/slf4j-api-1.7.21.jar 74/circe-generic_2.12-0.8.0.jar 75/shapeless_2.12-2.3.2.jar 76/circe-parser_2.12-0.8.0.jar 77/macros_2.12-2.3.0.jar /app/
[info] ---> a335a34eff8b
[info] Removing intermediate container aac35aca620c
[info] Step 3/3 : ENTRYPOINT java -cp /app/:/app//* com.bfg.infrastructure.Application
[info] ---> Running in 5ffb5a40920c
[info] ---> cdf9242e1283
[info] Removing intermediate container 5ffb5a40920c
[info] Successfully built cdf9242e1283
[info] Tagging image cdf9242e1283 with name: bfg:latest
Creating Local Docker Compose Environment.
Reading Compose File: /Users/henke/Documents/code/scala/bfg6/bfg6/docker/docker-compose.yml
Created Compose File with Processed Custom Tags: /var/folders/yh/mm1bdmx9073_b15lw69b2qmh0000gn/T/compose-updated7060145577096407405.yml
Pulling Docker images except for locally built images and images defined as <skipPull> or <localBuild>.
Skipping Pull of image: bfg:latest
Creating 488921_bfg_1
Waiting for container Id to be available for service 'bfg' time remaining: 499
bfg Container Id: e7392c25c760
Inspecting container e7392c25c760 to get the port mappings
Docker for Mac environment detected. Using the localhost for the container.
The following endpoints are available for your local instance: 488921
+---------+-----------------+-------------+--------------+----------------+--------------+---------+
| Service | Host:Port | Tag Version | Image Source | Container Port | Container Id | IsDebug |
+=========+=================+=============+==============+================+==============+=========+
| bfg | localhost:32772 | latest | build | 8080 | e7392c25c760 | |
| bfg | localhost:32773 | latest | build | 5005 | e7392c25c760 | DEBUG |
+---------+-----------------+-------------+--------------+----------------+--------------+---------+
Instance commands:
1) To stop instance from sbt run:
dockerComposeStop 488921
2) To open a command shell from bash run:
docker exec -it <Container Id> bash
3) To view log files from bash run:
docker-compose -p 488921 -f /var/folders/yh/mm1bdmx9073_b15lw69b2qmh0000gn/T/compose-updated7060145577096407405.yml logs -f
4) To execute test cases against instance from sbt run:
dockerComposeTest 488921
Compiling and Packaging test cases...
[info] Updating {file:/Users/henke/Documents/code/scala/bfg6/bfg6/}bfg6...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] Compiling 14 Scala sources to /Users/henke/Documents/code/scala/bfg6/bfg6/target/scala-2.10/classes...
[error] /Users/henke/Documents/code/scala/bfg6/bfg6/src/main/scala/com/bfg/infrastructure/server/Server.scala:8: object softwaremill is not a member of package com
[error] import com.softwaremill.tagging._
[error] ^
[error] /Users/henke/Documents/code/scala/bfg6/bfg6/src/main/scala/com/bfg/infrastructure/server/Server.scala:9: object typesafe is not a member of package com
[error] import com.typesafe.scalalogging.LazyLogging
[error] ^
[error] /Users/henke/Documents/code/scala/bfg6/bfg6/src/main/scala/com/bfg/infrastructure/server/Server.scala:18: not found: type LazyLogging
[error] (implicit ac: ActorSystem, afm: ActorMaterializer, ec: ExecutionContext) extends LazyLogging {
[error] ^
[error] /Users/henke/Documents/code/scala/bfg6/bfg6/src/main/scala/com/bfg/infrastructure/server/Server.scala:5: not found: object akka
[error] import akka.http.scaladsl.Http.ServerBinding
[error] ^
...
Hi,
I would like to write tests that involve stopping/killing/relaunching containers. I could do that by simply invoking docker as a shell command, but currently there's no reliable way to figure out which container I should interact with.
I wanted to submit a PR that passes the full information about the running services to the test runner, e.g. like this, but couldn't decide on the exact syntax, i.e. should it be something like `zookeeper:id=abc123 zookeeper:tag=3.4.8` or something else.
Any suggestions or existing plans regarding this?
If I have a complex `docker-compose.yml` with a lot of services, it's a bit slow to have to run `sbt dockerComposeStop dockerComposeUp` every time I make a code change and want to update the container. I really want a command to rebuild and restart just the individual container corresponding to the project I'm working on, and not all the other services, e.g. something similar to `docker-compose up -d --no-deps --build my-service`. Is it possible to add support for this to sbt-docker-compose?
Also, what's the best way to use sbt-docker-compose with IntelliJ run configurations? Ideally I'd like a run configuration which just reloads an individual service/container if `sbt dockerComposeUp` has already been run!
This is a cosmetic issue.
After upgrading from version 17 to version 22, the plugin thinks I'm on OSX while I'm on Win 10 x64 using Git Bash as my command line.
The log message in version 17 is
Non-OSX environment detected. Using the host from the container.
On 22 it's
Docker for Mac environment detected. Using the localhost for the container.
It doesn't bother me and apart from that, everything's fine.
I'm using Docker Version 17.03.1-ce-win12 (12058)
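For what it's worth, the JVM's `os.name` property still reports the real OS under Git Bash, so a detection predicate based on it would not produce the message above. A minimal sketch of such a predicate, with illustrative names only (this is not the plugin's actual code):

```scala
// Illustrative sketch, not the plugin's code: classify the host OS purely
// from the JVM's os.name property, which Git Bash does not alter.
def isMacEnvironment(osName: String): Boolean =
  osName.toLowerCase.contains("mac")

println(isMacEnvironment("Windows 10")) // false
println(isMacEnvironment("Mac OS X"))   // true
```

If the plugin instead keys off Docker-related state (e.g. which Docker distribution is installed), that could explain why a Windows host running Docker for Windows is misclassified.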
Hi @kurtkopchik,
This is the issue/question I referred to in #80:
I have a single-module sbt project and a `docker-compose.yml` that runs multiple service containers, one of which is the application image for the project, built by sbt-native-packager via sbt-docker-compose. If I remove the application service—leaving only the remotely-published images—and set `composeNoBuild := true`, then I can run the Compose environment through sbt as expected, so my system setup basically works. But with the application service included, its image gets built locally and then the plugin tries to pull it from the remote registry instead of recognizing it as local.
Here's the relevant output of `dockerComposeUp` when I (successfully) run the `basic-native-packager` example, which is closest to my usage:
[info] Built image basic:1.0.0
Creating Local Docker Compose Environment.
Reading Compose File: /Users/cmartin/src/sbt-docker-compose/examples/basic-native-packager/docker/docker-compose.yml
Created Compose File with Processed Custom Tags: /var/folders/r1/yj9gp8sx2znb08j427kgyl5d8c3f9w/T/compose-updated1542783405310045140.yml
Pulling Docker images except for locally built images and images defined as <skipPull> or <localBuild>.
Skipping Pull of image: basic:1.0.0
Looks good, built the image and then sbt-docker-compose skipped trying to pull it. But when running for my project, I see this:
[info] Built image DockerAlias(Some(mycompanyregistry.org/my-team),None,my-app,Some(latest))
Creating Local Docker Compose Environment.
Reading Compose File: /Users/cmartin/src/mycompany/my-team/my-app/docker-compose.yml
Created Compose File with Processed Custom Tags: /var/folders/r1/yj9gp8sx2znb08j427kgyl5d8c3f9w/T/compose-updated786481022455347392.yml
Pulling Docker images except for locally built images and images defined as <skipPull> or <localBuild>.
Pulling repository mycompanyregistry.org/my-team/my-app
invalid character '<' looking for beginning of value
2.1.14: Pulling from library/cassandra
Digest: sha256:c1ec441d7b4e04ede4e645bfc34ea4069fc90904676998240e4c135911fdf11f
Status: Image is up to date for cassandra:2.1.14
0.9.0.1: Pulling from ches/kafka
012a7829fd3f: Already exists
41158247dd50: Already exists
916b974d99af: Already exists
a3ed95caeb02: Already exists
a3ed95caeb02: Already exists
a3ed95caeb02: Already exists
a3ed95caeb02: Already exists
b33a83f5a279: Already exists
c7572144309b: Already exists
711f8578e1f8: Already exists
15a61b3f48c1: Already exists
77fe76114497: Already exists
f67e1e2c91c8: Already exists
714116a3e57a: Already exists
18511de21194: Already exists
a5d90c80930c: Already exists
b432d205d520: Already exists
Digest: sha256:de7ed1b61c38441be44878a0b6ad6f090a590cd9afbc7b14bd5c04f84c4d1399
Status: Image is up to date for ches/kafka:0.9.0.1
3.4: Pulling from library/zookeeper
Digest: sha256:6308fff92245ff7232e90046976d2c17ffb363ae88c0d6208866ae0ab5a4b886
Status: Image is up to date for zookeeper:3.4
Creating network "634132_default" with the default driver
Creating volume "634132_cassandra-data" with default driver
Creating volume "634132_kafka-logs" with default driver
Creating volume "634132_kafka-data" with default driver
Pulling my-app (mycompanyregistry.org/my-team/my-app:latest)...
Pulling repository mycompanyregistry.org/my-team/my-app
invalid character '<' looking for beginning of value
No stopped containers
634132_default
Error starting Docker Compose instance. Shutting down containers...
So clearly my app image gets built, but then we try to pull it, which shouldn't happen.
(The `invalid character '<' looking for beginning of value` messages come directly from the response from our registry—a poor error message, but it's the result of trying to pull a nonexistent image.)
Here are the minimized bits of my relevant configs:
```scala
/* NATIVE PACKAGER */
// We don't use Docker images for production deployment yet, they're for testing/CI
enablePlugins(JavaServerAppPackaging, DockerPlugin)

// Make sure we don't `docker:publish` this to the public Docker Hub
dockerRepository := Some("mycompanyregistry.org/my-team")
packageName in Docker := "my-app"
version in Docker := { if (isSnapshot.value) "latest" else version.value }

/* SBT-DOCKER-COMPOSE */
enablePlugins(DockerComposePlugin)
dockerImageCreationTask := (publishLocal in Docker).value
```
```yaml
version: '3'
services:
  # Our app is not published to remote registry at this time. Build locally with:
  #   sbt docker:publishLocal
  # Or it will be built automatically for functional test runs.
  my-app:
    image: mycompanyregistry.org/my-team/my-app:latest
    depends_on:
      - kafka
      - cassandra
  cassandra:
    image: cassandra:2.1.14
    ports:
      - '9042:9042' # CQL
      - '9160:9160' # Thrift
    volumes:
      - cassandra-data:/var/lib/cassandra
    environment:
      # Avoid having to specify host if you want to `docker-compose run cassandra cqlsh`
      CQLSH_HOST: cassandra
  kafka:
    image: ches/kafka:0.9.0.1
    ports:
      - '9092:9092' # Broker
      - '7203:7203' # JMX
    depends_on:
      - zookeeper
    volumes:
      - kafka-data:/data
      - kafka-logs:/logs
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      ZOOKEEPER_IP: zookeeper
  zookeeper:
    image: zookeeper:3.4
    restart: unless-stopped
    ports:
      - '2181:2181'

volumes:
  cassandra-data:
  kafka-data:
  kafka-logs:
```
(Specific local ports were bound for local development; this will be changed for sbt builds once it's working, don't worry 😄.)
I'm starting to look over the code to see why this might be happening, and I can produce an actual reproduction project if you like (or send one as a PR for the examples directory), but I wanted to make sure I'm not simply misunderstanding the intended usage. This almost works; it should work, right?
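A hedged guess at the mismatch, based only on the log line above: the plugin prints the whole `DockerAlias` case class, which suggests it may be comparing that `toString` against the plain image string in the compose file rather than the rendered `registry/name:tag` form. A sketch under that assumption (the case class shape mirrors sbt-native-packager's, but `versioned` is an illustrative rendering, not the library's method):

```scala
// Shape mirrors sbt-native-packager's DockerAlias; `versioned` is an
// illustrative rendering of the full image reference.
final case class DockerAlias(
    registryHost: Option[String],
    username:     Option[String],
    name:         String,
    tag:          Option[String]
) {
  def versioned: String =
    registryHost.fold("")(_ + "/") +
      username.fold("")(_ + "/") +
      name +
      tag.fold("")(":" + _)
}

val alias = DockerAlias(Some("mycompanyregistry.org/my-team"), None, "my-app", Some("latest"))

// What the compose file contains:
println(alias.versioned) // mycompanyregistry.org/my-team/my-app:latest
// What the log above printed, which would never match the compose string:
println(alias.toString)  // DockerAlias(Some(mycompanyregistry.org/my-team),None,my-app,Some(latest))
```

If the local-image check compares the raw `toString` instead of `versioned`, the locally built image would never be recognized, and the pull attempt in the log would follow.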
I am not sure what the issue is here...
Created Compose File with Processed Custom Tags: /tmp/compose-updated198967574635630207.yml
Pulling Docker images except for locally built images and images defined as <skipPull> or <localBuild>.
Skipping Pull of image: backendapi:latest
Skipping Pull of image: frontendapi:
Creating network "926046_default" with the default driver
Pulling frontendapi (frontendapi:latest)...
repository frontendapi not found: does not exist or no pull access
No stopped containers
926046_default
Error starting Docker Compose instance. Shutting down containers...
```yaml
version: '2'
services:
  backendapi:
    image: backendapi:latest
    environment:
      JAVA_TOOL_OPTIONS: -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
    ports:
      - "5005:5005"
  frontendapi:
    image: frontendapi:
    environment:
      JAVA_TOOL_OPTIONS: -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
    ports:
      - "5005:5005"
```
Hi!
I was wondering if there are plans to support the creation of the `docker-compose.yml` file using sbt-docker-compose? This feature could work similarly to the way `sbt-docker` creates Dockerfiles from the contents of `build.sbt`.
A simple but very fruitful use case is to conditionally expose ports: if the debug flag in sbt is true, `docker-compose.yml` contains the port mapping `- "5005:5005"`, otherwise it doesn't.
Thanks a lot for your plugin!
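The conditional-ports idea could be sketched as plain string templating driven by a build flag. Everything below is hypothetical (sbt-docker-compose offers no such generation feature today), just to make the use case concrete:

```scala
// Hypothetical sketch: render a compose file with a conditional debug-port
// mapping. Nothing here is an existing sbt-docker-compose API.
def composeYaml(image: String, debug: Boolean): String = {
  val debugPorts =
    if (debug) "    ports:\n      - \"5005:5005\"\n" else ""
  s"""version: '2'
     |services:
     |  app:
     |    image: $image
     |""".stripMargin + debugPorts
}

println(composeYaml("myapp:latest", debug = true))
```

An sbt task could write this string to a temp file and hand it to the plugin, keeping the debug mapping out of developers' committed compose files.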
When running on Docker for Mac, the IP address reported for the machine in the config map is not correct. The plugin appears to be reporting the IP address within the machine rather than the external IP address.
After installing the plugin as instructed in the README, including setting a custom docker compose file, sbt doesn't recognise `dockerComposeUp` when I type it.
I've had a look through your examples and I can't see what I'm doing differently. I'm paying close attention to the multi-project example as that is closest to my case... but I note that you install the plugin on the root project.
Hi!
I was wondering if you have any guidelines or best practices to keep your builds fast?
I'm trying to improve build speeds during development for my example web app that's built with docker compose: https://gitlab.com/bullbytes/scala-js-example
I would immensely appreciate it if you could have a look at the build.sbt and tell me if there's room for improvement.
Speeding up the build would also benefit newcomers to Scala.js since the app is now one of the skeleton applications on the Scala.js website.
Hi!
I'm getting the error
Could not find or load main class org.scalatest.tools.Runner
when trying to run my tests using `dockerComposeTest`. The tests run fine when run from sbt using `test`.
I added `testExecutionArgs := s"-R ${baseDirectory.value}/target/scala-2.12/test-classes"` thinking that maybe ScalaTest can't find the classes when started via the plugin, but that didn't help.
I also tried to add all the class files to my Docker image by adding this to `build.sbt`, to no avail (I'm using JavaAppPackaging):
```scala
val targetDir = "/app/"

new Dockerfile {
  from("java:8-jre")
  val classpath = (fullClasspath in Test).value
  add(classpath.files, targetDir)
  // ...
}
```
My `build.sbt` is here: https://gitlab.com/bullbytes/scala-js-example
Thanks, any help is appreciated.
I'm evaluating sbt-docker-compose, and the primary thing that first attracted me to it—as opposed to possible alternatives like docker-it-scala, which leaves me to duplicate container definitions in yet another config format—is the hope of reusing perfectly good existing Compose configs that people can use in their local development with standard `docker-compose`, independent of sbt-docker-compose.
Adding custom preprocessed extensions to the file format like `<localBuild>` and `<skipPull>` breaks compatibility of the files with the standard tools. IMO this is a fundamental design misfeature.
I'd like to propose that custom labels could fill this need instead, and likely make further extensibility easier to boot (you could abstract a contract for an sbt-docker-compose label processor instead of requiring `processCustomTags` specifically to be overridden).
As I said, I'm still evaluating, but if the plugin otherwise meets my needs, I'll try to whip up an implementation as a PR.
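One possible shape for the labels idea, sticking to the standard Compose schema so that plain `docker-compose` simply ignores the extra metadata. The label keys below are invented for illustration; sbt-docker-compose does not define them:

```yaml
services:
  my-service:
    image: my-service:latest
    labels:
      # Hypothetical keys, invented for illustration only.
      com.tapad.sbt-docker-compose.local-build: "true"
      com.tapad.sbt-docker-compose.skip-pull: "true"
```

Because unknown labels are valid Compose and are passed through untouched, the same file would work with vanilla `docker-compose up` and with the plugin.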
Hello,
I have this simple `docker-compose.yml` using `env_file`:

```yaml
version: '2'
services:
  users-presence-service:
    image: iadvize/users-presence-service:CORE-209
    env_file: .env
  redis:
    image: iadvize/redis:3
    env_file: .env
```

When using `docker-compose up` it works. But through this plugin I get this:
...
Status: Image is up to date for iadvize/redis:3
Couldn't find env file: /var/folders/j0/r7msg51x67ggf3y8jkdm7m5c0000gp/T/.env
Waiting for container Id to be available for service 'users-presence-service' time remaining: 499
Waiting for container Id to be available for service 'users-presence-service' time remaining: 497
...
Am I doing something wrong? The `.env` file is on the same level as the `docker-compose.yml`. Or is there no support for `env_file`?
Oh, and it would be nice if the command failed directly when an error occurs (like in my example: `Couldn't find env file...`). I spent some time wondering why it was waiting forever for the container ID...
Thanks!
I have a Play application with resources in the test/resources folder, which is the default test-resources folder for a Play application. If I run a test which accesses test resources from `dockerComposeTest`, the test resources are not on the classpath, although they are if I run the same test with `test`.
Unfortunately I noticed that sbt-docker-compose's `dockerComposeTest` does not create test reports, even though I can see that all tests are run.
Is there a way to force the generation of test reports?
I only discovered the feature from #10 when I happened to read a test case for it in the source. It's a high-level feature that is probably worth documenting in the README.
Projects using coursier's sbt plugin to fetch dependencies cannot use `dockerComposeTest`.
This is because the presence of ScalaTest on the classpath is checked by testing whether the string `"org.scalatest"` appears in the result of `testDependenciesClasspath`. But since coursier uses a maven-style folder structure, the scalatest jar ends up at a path resembling /home/johndoe/.coursier/cache/v1/https/repo1.maven.org/maven2/org/scalatest/scalatest_2.11/2.2.6/scalatest_2.11-2.2.6.jar, which makes the aforementioned check fail although ScalaTest is present.
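A layout-agnostic check could look at the jar file names on the classpath instead of substring-matching the whole classpath string. A sketch (the function name is illustrative):

```scala
// Sketch of a cache-layout-agnostic ScalaTest detection: inspect each
// classpath entry's file name, so both ivy2 and coursier layouts match.
def hasScalaTest(classpath: Seq[String]): Boolean =
  classpath.exists { entry =>
    val fileName = entry.split('/').last
    fileName.startsWith("scalatest")
  }

val ivyStyle = Seq(
  "/home/u/.ivy2/cache/org.scalatest/scalatest_2.11/jars/scalatest_2.11-2.2.6.jar")
val coursierStyle = Seq(
  "/home/u/.coursier/cache/v1/https/repo1.maven.org/maven2/org/scalatest/scalatest_2.11/2.2.6/scalatest_2.11-2.2.6.jar")

println(hasScalaTest(ivyStyle) && hasScalaTest(coursierStyle)) // true
```

The key difference from the current substring check is that the directory portion of the path (which differs between ivy2's `org.scalatest` and coursier's `org/scalatest`) never participates in the match.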
It would be good to have the option to `stop` all docker instances for all projects on sbt exit.
In addition, there doesn't seem to be a way to do this kind of logic in CI: start docker, `try { normal integration tests } finally { stop instance }`, since `sbt dockerComposeUp it:test dockerComposeStop` will halt at `it:test` if the tests fail (I think).
Observe the test command below, where the project lives in `/path/to/some directory with spaces/myproject`. The classpath is passed verbatim to java, so the command will fail. Wrapping the path in quotes should do the trick (I haven't tested yet).
java -Dredis:6379=172.17.0.1:32787 -Dredis:containerId=447d3ea0ab57 -cp /path/to/some directory with spaces/myproject/target/scala-2.11/classes:/home/myuser/.ivy2/cache/org.scala-lang/scala-library/jars/scala-library-2.11.7.jar ...
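The suggested fix (quoting) could be as small as wrapping any whitespace-containing argument before the command line is assembled. A sketch, not the plugin's actual code:

```scala
// Sketch: quote a command-line argument if (and only if) it contains
// whitespace, so paths with spaces survive shell/exec splitting.
def quoteIfNeeded(arg: String): String =
  if (arg.exists(_.isWhitespace)) "\"" + arg + "\"" else arg

println(quoteIfNeeded("/path/to/some directory with spaces/myproject/target/scala-2.11/classes"))
// "/path/to/some directory with spaces/myproject/target/scala-2.11/classes"
println(quoteIfNeeded("/home/myuser/.ivy2/cache"))
// /home/myuser/.ivy2/cache
```

Applying this to each `-cp` segment (or to the joined classpath as a whole) before invoking java would make the failing command above work.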
Hi Kurt!
I've created a Scala.js sample application using your plugin: https://gitlab.com/bullbytes/scala-js-example
When running the app using `docker-compose up`, I'd like to use an environment variable to define which tag of the image to use (`latest`, `staging`, or `production`). The default tag should be `latest`.
Thus, I added this to my `docker-compose.yml`, using Docker's variable substitution with default values:
image: registry.gitlab.com/bullbytes/scala-js-example:${APP_DOCKER_IMAGE_TAG:-latest}
While this works using `docker-compose up`, I run into an error when executing `dockerComposeUp` in sbt:
Error parsing reference: "registry.gitlab.com/bullbytes/scala-js-example:${APP_DOCKER_IMAGE_TAG:-latest}" is not a valid repository/tag: invalid reference format
Here's my docker-compose.yml.
Thanks!
P.S.: If you have any feedback for my example app (e.g., my `build.sbt`), I'd be more than glad to hear it. 😊
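For context: `docker-compose` itself applies `${VAR:-default}` substitution before parsing image references, which is why the same file works there. The same resolution can be sketched in a few lines of Scala (illustrative only, not the plugin's code):

```scala
// Sketch of Compose-style ${VAR:-default} substitution: replace each
// occurrence with the environment value, falling back to the default.
val VarWithDefault = raw"\$$\{([A-Za-z_][A-Za-z0-9_]*):-([^}]*)\}".r

def substitute(s: String, env: Map[String, String]): String =
  VarWithDefault.replaceAllIn(s, m => env.getOrElse(m.group(1), m.group(2)))

val image = "registry.gitlab.com/bullbytes/scala-js-example:${APP_DOCKER_IMAGE_TAG:-latest}"
println(substitute(image, Map.empty))
// registry.gitlab.com/bullbytes/scala-js-example:latest
println(substitute(image, Map("APP_DOCKER_IMAGE_TAG" -> "staging")))
// registry.gitlab.com/bullbytes/scala-js-example:staging
```

Running a pass like this over the compose file before the image reference is parsed would avoid the `invalid reference format` error above.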
What is the reason it is unsupported?
Docker Compose files that contain short-lived containers (ones that perform a post-start initialization task and then exit) will time out.
```yaml
version: "2.1"
services:
  hello:
    image: hello-world
```
Produces:
Status: Image is up to date for hello-world:latest
Creating network "430023_default" with the default driver
Creating 430023_hello_1
Waiting for container Id to be available for service 'hello' time remaining: 499
Waiting for container Id to be available for service 'hello' time remaining: 497
Waiting for container Id to be available for service 'hello' time remaining: 495
Waiting for container Id to be available for service 'hello' time remaining: 493
.....
Waiting for container Id to be available for service 'hello' time remaining: 8
Waiting for container Id to be available for service 'hello' time remaining: 6
Waiting for container Id to be available for service 'hello' time remaining: 4
Waiting for container Id to be available for service 'hello' time remaining: 2
Waiting for container Id to be available for service 'hello' time remaining: 0
Cannot determine container Id for service: hello
Removing 430023_hello_1 ... done
Going to remove 430023_hello_1
Expected result: Overview should ignore/not show port mappings.
Version 1.0.20
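One way the wait loop could accommodate run-to-completion services, sketched with illustrative names (not the plugin's internals): stop polling as soon as the container is observed to have exited, instead of burning the whole timeout.

```scala
// Illustrative sketch: poll for a container id, but bail out early when the
// (short-lived) container is known to have already exited.
def awaitContainerId(
    poll:    () => Option[String], // e.g. backed by `docker-compose ps -q <service>`
    exited:  () => Boolean,        // e.g. backed by `docker inspect` state
    retries: Int
): Option[String] = {
  var remaining = retries
  while (remaining > 0) {
    poll() match {
      case id @ Some(_)     => return id
      case None if exited() => return None // service ran to completion; don't keep waiting
      case None             => remaining -= 1
    }
  }
  None
}

println(awaitContainerId(() => Some("e7392c25c760"), () => false, 5)) // Some(e7392c25c760)
println(awaitContainerId(() => None, () => true, 5))                  // None
```

With a shape like this, a `hello-world` style service would be skipped in the endpoint overview (no port mappings shown) rather than causing the 500-tick countdown in the log above.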
In Docker Compose files from version 2 onward, there is an option to state whether a volume is mounted read-only or read-write.
For example:
```yaml
version: '2'
services:
  rabbit:
    image: rabbitmq:management
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      - ~/data/event_distribution/rabbit:/var/lib/rabbitmq:rw
      - ./env/rabbit/definitions.json:/etc/rabbitmq/definitions.json:ro
      - ./env/rabbit/rabbitmq.config:/etc/rabbitmq/rabbitmq.config:ro
```
Note the `:rw` and `:ro` at the end of each volume definition.
I would love for this to be supported. Thanks!
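Supporting this mostly comes down to parsing the optional mode suffix: a parser that splits on every `:` breaks on `:ro`/`:rw`, so the mode has to be peeled off the end first. A sketch (names illustrative, not the plugin's code):

```scala
// Sketch: parse a version-2 short-syntax volume entry with an optional
// access-mode suffix (:ro or :rw). Docker's default mode is rw.
final case class Volume(host: String, container: String, mode: String)

def parseVolume(spec: String): Volume = {
  val (body, mode) =
    if (spec.endsWith(":ro") || spec.endsWith(":rw"))
      (spec.dropRight(3), spec.takeRight(2))
    else (spec, "rw")
  val idx = body.indexOf(':')
  Volume(body.take(idx), body.drop(idx + 1), mode)
}

println(parseVolume("./env/rabbit/definitions.json:/etc/rabbitmq/definitions.json:ro"))
// Volume(./env/rabbit/definitions.json,/etc/rabbitmq/definitions.json,ro)
```

Peeling the mode off before locating the host/container separator keeps paths containing colons out of trouble in the common case shown above.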