Comments (31)
Hey,
Thanks for filing the issue. There are a couple of parts of your compose format that my plugin does not yet support. I'll definitely look into making the parsing for these scenarios more robust in a future release.
- Port Ranges are not supported. Instead of "7000-7001" you need to define them separately:
ports:
  - "7000:7000"
  - "7001:7001"
- Environment variables need to use this equivalent formatting:
environment:
  ADVERTISED_HOST: 172.17.0.1
- The plugin does not currently support building images via the docker-compose "build:" tag. Each service in the docker-compose file needs to refer to an already built image via an "image:" tag. I should be able to support this in a future version, though.
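As a sketch of the workaround for the "build:" limitation (the service and image names here are hypothetical placeholders, not from this thread):

```yaml
# Not yet supported by the plugin:
# myservice:
#   build: ./myservice
#
# Workaround: build and tag the image yourself first, e.g.
#   docker build -t myservice:latest ./myservice
# then reference the built image:
myservice:
  image: myservice:latest
```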
The general use case is that you are running dockerComposeUp on a source repository that you are actively working on. When the plugin is enabled on your project, running "dockerComposeUp" will compile your latest code, build a new docker image, and then start the docker-compose instance whose "image:" tag refers to the updated image you just created.
In your case, it looks like if you build the image once and just update your compose file to refer to the local copy, you should be all set.
Updating your compose file to be something like the following should work:
version: '2'
services:
  cassandra:
    container_name: bp-cassandra
    image: cassandra:2.1.14
    ports:
      - "7000:7000"
      - "7001:7001"
      - "7199:7199"
      - "9042:9042"
      - "9160:9160"
  kafka:
    container_name: bp-kafka
    image: kafka
    ports:
      - "9092:9092"
      - "2181:2181"
    environment:
      ADVERTISED_HOST: 172.17.0.1 # this must match the docker host ip
      ADVERTISED_PORT: 9092
When I use the plugin for testing I typically allow the ports and container names to be dynamically assigned so that I can start up multiple instances on the same machine. I'm not sure if this is useful for what you are doing but thought I'd mention it:
version: '2'
services:
  cassandra:
    image: cassandra:2.1.14
    ports:
      - "0:7000"
      - "0:7001"
      - "0:7199"
      - "0:9042"
      - "0:9160"
  kafka:
    image: kafka
    ports:
      - "0:9092"
      - "0:2181"
    environment:
      ADVERTISED_HOST: 172.17.0.1 # this must match the docker host ip
      ADVERTISED_PORT: 9092
I hope that this helps enable you to use the plugin! Let me know if you get it working or run into any other issues.
from sbt-docker-compose.
Also, if you are creating the Spark driver app the real power of the plugin comes into play when you include your repository into the compose-file as well. That way any time you make a change to the Spark driver code you can run dockerComposeUp and you'll be running a newly built docker instance of your changes connected to Cassandra and Kafka, ready for integration testing.
version: '2'
services:
  sparkdriver: # <-- should match the sbt project name or be explicitly set on the project with the "composeServiceName" setting
    image: <imagename> # <-- replace <imagename> with the docker image name created by your project, e.g. sparkdriver:latest
    ports:
      - "<port for your driver app>:<port for your driver app>"
    links:
      - cassandra:cassandra
      - kafka:kafka
  cassandra:
    container_name: bp-cassandra
    image: cassandra:2.1.14
    ports:
      - "7000:7000"
      - "7001:7001"
      - "7199:7199"
      - "9042:9042"
      - "9160:9160"
  kafka:
    container_name: bp-kafka
    image: kafka
    ports:
      - "9092:9092"
      - "2181:2181"
    environment:
      ADVERTISED_HOST: 172.17.0.1 # this must match the docker host ip
      ADVERTISED_PORT: 9092
Cool. Thanks for the assistance. Yes, ultimately I'd like to include my app in the compose file as well.
I made the changes you suggested and published a local image for kafka. Now I'm getting a timeout while it tries to find my instance id for cassandra.
Waiting for container Id to be available for service 'cassandra' time remaining: 499
The raw docker ps output is below. It looks like you're trying to parse out the container ID, but I'm not sure why it's failing.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
28bba1fe7bb2 cassandra:2.1.14 "/docker-entrypoint.s" 7 minutes ago Up 7 minutes 0.0.0.0:7000-7001->7000-7001/tcp, 0.0.0.0:7199->7199/tcp, 0.0.0.0:9042->9042/tcp, 0.0.0.0:9160->9160/tcp bp-cassandra
d1038cfc49a4 spotify/kafka09 "supervisord -n" 7 minutes ago Up 7 minutes 0.0.0.0:2181->2181/tcp, 0.0.0.0:9092->9092/tcp bp-kafka
Happy to help. What are the logs saying before:
Waiting for container Id to be available for service 'cassandra' time remaining: 499
I'm wondering if there is some other error from docker-compose itself that is preventing the container from starting. Using the docker-compose snippet below I was able to successfully start a Cassandra instance on my machine using the plugin:
version: '2'
services:
  cassandra:
    image: cassandra:2.1.14
    ports:
      - "7000:7000"
      - "7001:7001"
      - "7199:7199"
      - "9042:9042"
      - "9160:9160"
Output I'm seeing from docker-compose up:
> dockerComposeUp
Creating Local Docker Compose Environment.
Reading Compose File: /Users/kurtkopchik/Source/docker-compose-definitions/aerospike/docker/docker-compose.yml
Created Compose File with Processed Custom Tags: /var/folders/gy/swztjnld0cz0dm9s_45j_x0h0000gp/T/compose-updated3414098698870308792.yml
Pulling Docker images except for locally built images and images defined as <skipPull> or <localBuild>.
2.1.14: Pulling from library/cassandra
efd26ecc9548: Already exists
a3ed95caeb02: Already exists
c50f81084f58: Already exists
db7e7b58735f: Already exists
a3ed95caeb02: Already exists
eb568ae1aaa9: Already exists
61a4e1ace676: Already exists
ccb01b1ff714: Already exists
a3ed95caeb02: Already exists
16481f86d368: Already exists
0a212d43c1f0: Already exists
a3ed95caeb02: Already exists
f0664e7f1560: Already exists
a3ed95caeb02: Already exists
2b82eb73264b: Already exists
a3ed95caeb02: Already exists
a3ed95caeb02: Already exists
a3ed95caeb02: Already exists
Digest: sha256:62036a318ef92813469e5aa4182208801580caf4f48d431b539a443bf75bae81
Status: Image is up to date for cassandra:2.1.14
Creating network "37238_default" with the default driver
Creating 37238_cassandra_1
Waiting for container Id to be available for service 'cassandra' time remaining: 499
cassandra Container Id: 3b1048359d55
Inspecting container 3b1048359d55 to get the port mappings
OSX boot2docker environment detected. Using the docker-machine IP for the container.
The following endpoints are available for your local instance: 37238
+-----------+---------------------+-------------+--------------+----------------+--------------+---------+
| Service | Host:Port | Tag Version | Image Source | Container Port | Container Id | IsDebug |
+===========+=====================+=============+==============+================+==============+=========+
| cassandra | 192.168.99.100:7000 | 2.1.14 | defined | 7000 | 3b1048359d55 | |
| cassandra | 192.168.99.100:7001 | 2.1.14 | defined | 7001 | 3b1048359d55 | |
| cassandra | 192.168.99.100:7199 | 2.1.14 | defined | 7199 | 3b1048359d55 | |
| cassandra | 192.168.99.100:9042 | 2.1.14 | defined | 9042 | 3b1048359d55 | |
| cassandra | 192.168.99.100:9160 | 2.1.14 | defined | 9160 | 3b1048359d55 | |
+-----------+---------------------+-------------+--------------+----------------+--------------+---------+
Instance commands:
1) To stop instance from sbt run:
dockerComposeStop 37238
2) To open a command shell from bash run:
docker exec -it <Container Id> bash
3) To view log files from bash run:
docker-compose -p 37238 -f /var/folders/gy/swztjnld0cz0dm9s_45j_x0h0000gp/T/compose-updated3414098698870308792.yml logs
4) To execute test cases against instance from sbt run:
dockerComposeTest 37238
It also may be something going on with the new networking options for docker-compose in the 2.0 format. You can try using the original compose format to see if that works:
cassandra:
  image: cassandra:2.1.14
  ports:
    - "7000:7000"
    - "7001:7001"
    - "7199:7199"
    - "9042:9042"
    - "9160:9160"
Same as what you have but does not include the section:
version: '2'
services:
@seglo I did a little more investigation and found that the new 2.0 compose file format creates a new docker network each time a compose instance is launched. If you reach the limit of docker networks that can be created your instance will fail to start. The plugin was not removing newly created networks on shutdown as this was not an issue with the previous version of the compose file format.
I published a new release which, on 'dockerComposeStop', will now remove the network that was created on startup, so that network instances will not hang around and cause the limit to be reached:
addSbtPlugin("com.tapad" % "sbt-docker-compose" % "1.0.2")
I'm not sure of the exact issue you were hitting but it looks like this may have been it.
You may need to clean up the extra networks that were created by the plugin. You can see the list by running:
docker network ls
Anything with a randomNumber_default can be removed with:
docker network rm randomNumber_default
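Since the plugin's generated networks all follow the "randomNumber_default" naming pattern, they can also be cleaned up in one pass. Below is a sketch that demonstrates the filter on a sample listing so it is self-contained; against a real daemon you would feed it the output of docker network ls --format '{{.Name}}' and pass each match to docker network rm:

```shell
# Filter network names matching the plugin's generated
# "<randomNumber>_default" pattern. A sample listing stands in
# for real `docker network ls --format '{{.Name}}'` output.
sample='bridge
host
37238_default
74655_default'
echo "$sample" | grep -E '^[0-9]+_default$'
# prints:
# 37238_default
# 74655_default
```

Each name this prints could then be removed with docker network rm.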
Let me know if the new release of the plugin gets you up and running after you clean up any leftover networks.
Thanks a lot for the help. We're making progress.
I tried going through your suggestions. I didn't have many of the random networks you mentioned (although I removed them anyway). What ended up working was removing the "container_name" property for Cassandra (I noticed your latest config file didn't have this).
version: '2'
services:
  cassandra:
    # container_name: bp-cassandra
    image: cassandra:2.1.14
    ports:
      - "7000:7000"
      - "7001:7001"
      - "7199:7199"
      - "9042:9042"
      - "9160:9160"
  # kafka:
  #   container_name: bp-kafka
  #   image: spotify/kafka09
  #   ports:
  #     - "9092:9092"
  #     - "2181:2181"
  #   environment:
  #     ADVERTISED_HOST: 172.17.0.1 # this must match the docker host ip
  #     ADVERTISED_PORT: 9092
The container_name property is really useful, though, especially when running docker-compose manually. I'm now thinking about having two docker-compose files: one that plays nice with this plugin, and another I can use with docker-compose itself. Is there a way to override the location of the docker-compose.yml file so I can create an sbt-docker-compose version (i.e. docker/sbt-dockercompose.yml)?
I was able to get cassandra and kafka working, but when I inserted my driver app as you suggested I got failures again.
Output:
$ sbt dockerComposeUp
[info] Loading project definition from /home/seglo/source/ciena-bp/demo/demo/project
[info] Set current project to demo-spark-app (in build file:/home/seglo/source/ciena-bp/demo/demo/)
Building a new Docker image.
[warn] Scala version was updated by one of library dependencies:
[warn] * org.scala-lang:scala-compiler:2.10.0 -> 2.10.5
[warn] To force scalaVersion, add the following:
[warn] ivyScala := ivyScala.value map { _.copy(overrideScalaVersion = true) }
[warn] There may be incompatibilities among your library dependencies.
[warn] Here are some of the libraries that were evicted:
[warn] * org.apache.kafka:kafka_2.10:0.8.2.1 -> 0.9.0.1
[warn] Run 'evicted' to see detailed eviction warnings
...
[info] Sending build context to Docker daemon 187.3 MB
[info]
[info] Step 1 : FROM java
[info] ---> 081ce13c85db
[info] Step 2 : ADD 0/classes 1/scala-library-2.10.6.jar 2/spark-cassandra-connector_2.10-1.6.0-M2.jar ...
[info] ---> Using cache
[info] ---> 2e8226d6fcaf
[info] Step 3 : ENTRYPOINT java -cp /app/:/app//* TestMessageGenerator
[info] ---> Using cache
[info] ---> 3e327128a3ad
[info] Successfully built 3e327128a3ad
[info] Tagging image 3e327128a3ad with name: demo-spark-app:1.0
[info] Warning: '-f' is deprecated, it will be removed soon. See usage.
Creating Local Docker Compose Environment.
Reading Compose File: /home/seglo/source/ciena-bp/demo/demo/docker/docker-compose.yml
Created Compose File with Processed Custom Tags: /tmp/compose-updated4010721323998883867.yml
Pulling Docker images except for locally built images and images defined as <skipPull> or <localBuild>.
Skipping Pull of image: <imagename>
Skipping Pull of image: kafka09:latest
2.1.14: Pulling from library/cassandra
efd26ecc9548: Already exists
a3ed95caeb02: Already exists
c50f81084f58: Already exists
db7e7b58735f: Already exists
a3ed95caeb02: Already exists
eb568ae1aaa9: Already exists
61a4e1ace676: Already exists
ccb01b1ff714: Already exists
a3ed95caeb02: Already exists
16481f86d368: Already exists
0a212d43c1f0: Already exists
a3ed95caeb02: Already exists
f0664e7f1560: Already exists
a3ed95caeb02: Already exists
2b82eb73264b: Already exists
a3ed95caeb02: Already exists
a3ed95caeb02: Already exists
a3ed95caeb02: Already exists
Digest: sha256:62036a318ef92813469e5aa4182208801580caf4f48d431b539a443bf75bae81
Status: Image is up to date for cassandra:2.1.14
Creating network "74655_default" with the default driver
Creating 74655_kafka_1
Creating 74655_cassandra_1
Pulling demo-spark-app (<imagename>:latest)...
Error parsing reference: "<imagename>" is not a valid repository/tag
Waiting for container Id to be available for service 'demo-spark-app' time remaining: 499
Waiting for container Id to be available for service 'demo-spark-app' time remaining: 497
Waiting for container Id to be available for service 'demo-spark-app' time remaining: 495
Waiting for container Id to be available for service 'demo-spark-app' time remaining: 493
^C
docker-compose.yml
version: '2'
services:
  demo-spark-app: # <-- should match the sbt project name or be explicitly set on the project with the "composeServiceName" setting
    image: <imagename> # <-- replace <imagename> with the docker image name created by your project, e.g. sparkdriver:latest
    # ports:
    #   - "<port for your driver app>:<port for your driver app>"
    links:
      - cassandra:cassandra
      - kafka:kafka
  cassandra:
    # container_name: bp-cassandra
    image: cassandra:2.1.14
    ports:
      - "7000:7000"
      - "7001:7001"
      - "7199:7199"
      - "9042:9042"
      - "9160:9160"
  kafka:
    # container_name: bp-kafka
    image: kafka09:latest<localBuild>
    ports:
      - "9092:9092"
      - "2181:2181"
    environment:
      ADVERTISED_HOST: 172.17.0.1 # this must match the docker host ip
      ADVERTISED_PORT: 9092
I can look into it more this afternoon, but it looks like your image: field is not populated correctly for the app you are building. From your logs it looks like it should be:
image: demo-spark-app:1.0
Ah, that worked. It appears to have run my container (although it crashed shortly after... I have something to go on now). I must have misunderstood your earlier comment. I thought <imagename> would have been substituted.
Yeah, I should have used a different delimiter so it was clear that you had to actually replace that value; other locations used the same format and those values were being replaced.
As for your question on supplying a different docker-compose.yml for the plugin: yes, you can do this by setting the 'composeFile' setting to the location of the compose yml file you want to use. You can specify whatever you want for the path and file name as long as it can be resolved. For example you can do something like:
project.
  enablePlugins(DockerPlugin, DockerComposePlugin).
  settings(
    composeFile := "./{your project name here}/docker/sbt-dockercompose.yml"
  )
As for explicitly setting a container name on a service: I went through the code and you are correct, it is not working at the moment. The code expects to find the dynamically generated name for the container, not a hard-coded one. I can look into adding support for this in a future release, but in my experience when testing with docker-compose I've never needed to explicitly set the container name. It ends up causing more issues, as you can never have more than one instance running on the same machine with the same explicitly defined name.
As a general principle when using docker-compose, we try to have all of our instances be uniquely named (which the plugin does for you) so that you can have multiple instances running on the same machine, whether that be your local box, the build machine, or a shared cloud environment like Mesos. When we add all of the services we use for integration testing to the same compose file, we haven't had the need to define explicit names for each service.
Let me know if you run into any more issues or if things are now working for you.
@seglo FYI - I updated the README to document the currently unsupported docker-compose fields that we hit. I also pushed a change that adds the ability to parse either of the "environment:" field formats so that won't be an issue after I publish the next release.
Thanks @kurtkopchik
I'm fairly new to the dockerverse, so I wasn't aware of the best practice of using generated names. It seems sensible to me, but I've found it convenient while using docker-compose to reference instances by names I've defined so I don't have to hunt for the instance id. I guess this is mainly just a convenience during development, when you're not likely to be referring to multiple instances at the same time.
I have things running, for the most part. The last thing I need to figure out is the test scenario. I referenced your basic-with-tests spec, but when I attempt to run sbt dockerComposeTest I get the following error.
java.io.IOException: Cannot run program "scala": error=2, No such file or directory
I saw your note in the readme about having the right version of ScalaTest and the right version of Scala, but I'm confused. Are you referring to the version of scala-library matching the ScalaTest jar compiled for that version of Scala? If so, I believe that's correct given these two entries in my classpath:
1/scala-library-2.10.6.jar
28/scalatest_2.10-2.2.1.jar
The test cases are executed using the ScalaTest Runner, which uses the scala executable to launch the test pass. From your error it looks like "scala" is not being found on your path.
When you type "scala" from the command line, it needs to actually launch (i.e. scala must be on your PATH).
The note in the Readme means that the version of Scala discovered on your path has to be the 2.10 version; otherwise it won't be compatible with the ScalaTest jar being used.
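Two quick checks (a sketch; assumes a POSIX shell) can confirm what the Runner will find:

```shell
# Diagnose the "Cannot run program scala" error: check whether a scala
# launcher is on PATH at all and, if so, which version it reports.
command -v scala || echo "scala is not on PATH"
scala -version 2>&1 | head -n 1  # for a 2.10 project this should report a 2.10.x build
```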
A little more info:
The example project I have uses:
scalaVersion := "2.11.1"
So the version of Scala on your command line path would have to be version 2.11.x for the example to work.
If your project uses Scala 2.10.x then the version of Scala on your command line path would have to be version 2.10.x to be able to execute ScalaTest test cases against your docker instance using the ScalaTest Runner.
You can use brew to install Scala which I think will give you the 2.11.x version by default. You can also use brew to switch to the 2.10.x version. I haven't followed these steps personally but here is a link with some info on how to change versions.
I see. I think my lack of docker experience is showing here. How does the app container get scala installed on the path? Are you saying my host system has to have the right version of scala on the path and the container picks this up? What if my test runs on a system where scala isn't in the path or is the wrong version?
The tests run on the host system against the docker compose instance. The tests are outside of the compose instance. The host system needs to have scala installed for the tests to execute via dockerComposeTest.
When the test cases start they are given a configMap that contains the endpoint information for your running instance. Your test case then uses the end points to connect to the running compose instance.
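To make that concrete, a host-side test suite might look roughly like the sketch below. The "cassandra:9042" key and its "host:port" value format are my assumptions about how the plugin exposes endpoints in the configMap; check the sbt-docker-compose README for the exact keys.

```scala
import org.scalatest.fixture

// Sketch: a ScalaTest suite that reads the running instance's endpoint
// out of the configMap supplied to the test runner. The "cassandra:9042"
// key and its "host:port" value format are assumptions; verify them
// against the plugin documentation.
class CassandraEndpointSpec extends fixture.FunSuite with fixture.ConfigMapFixture {
  test("cassandra endpoint is provided") { configMap =>
    val hostPort = configMap.getRequired[String]("cassandra:9042") // e.g. "192.168.99.100:9042"
    val Array(host, port) = hostPort.split(":")
    assert(host.nonEmpty && port.forall(_.isDigit))
  }
}
```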
I hope this helps clarify a bit.
Having the test cases outside of the docker-compose instance allows you to iterate on your test case development without having to rebuild and restart the docker-compose instance each time you make a test code change.
OK. I've never actually installed scala via homebrew. I switch between Linux and Mac frequently (currently working on Linux), and I've only ever used sbt or plain-jane java to bootstrap my apps. Instead of using scala on the PATH, maybe using java would make more sense and build the classpath for the right version of scala that the project uses.
At the moment the plugin uses the following line to kick off the test pass:
s"scala $debugSettings -cp $testDependencies org.scalatest.tools.Runner -o -R ${getSetting(testCasesJar)} $testTags $testParams".!
It requires scala to exist. I could potentially change that to be java instead. You could try making scala an alias for java to see if that works for you.
I just released a 1.0.2-SNAPSHOT version of the plugin that uses "java" instead of "scala" to kick off the test execution. Not sure if it'll work for you but feel free to give it a try:
addSbtPlugin("com.tapad" % "sbt-docker-compose" % "1.0.2-SNAPSHOT")
Nice. That did the trick to bootstrap the test runner. Does $testDependencies include deps not in my test scope? It doesn't appear to locate my non-test code in src/main/. Anything within test is working though :)
@kurtkopchik I just want to say that you've gone above and beyond this weekend helping me out with this. Thanks so much. I look forward to digging into this project and contributing back when I have some time.
The $testDependencies value is populated based off of a configurable setting:
val testDependenciesClasspath = taskKey[String]("The path to all managed and unmanaged Test dependencies. This path needs to include the ScalaTest Jar for the tests to execute. This defaults to all managedClasspath and unmanagedClasspath in the Test Scope.")
By default it contains anything in the test scope:
testDependenciesClasspath := {
  val classpathTestManaged = (managedClasspath in Test).value
  val classpathTestUnmanaged = (unmanagedClasspath in Test).value
  (classpathTestManaged.files ++ classpathTestUnmanaged.files).map(_.getAbsoluteFile).mkString(":")
}
However, you can override this default setting in your project to also include whatever you want from src/main/ as well.
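For example, an override along these lines might work (a sketch only; it assumes the sbt 0.13-style syntax used elsewhere in this thread and that the plugin's testDependenciesClasspath key is in scope in your build):

```scala
// Sketch: extend the default Test classpath with the Compile-scope
// classes and dependencies so code under src/main/ is visible to the
// ScalaTest Runner as well.
testDependenciesClasspath := {
  val testCp = (managedClasspath in Test).value.files ++ (unmanagedClasspath in Test).value.files
  val mainCp = (fullClasspath in Compile).value.files
  (testCp ++ mainCp).map(_.getAbsoluteFile).distinct.mkString(":")
}
```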
@seglo I'm glad that we were able to work through your issues and in the process make the plugin better for others as well. I look forward to any future contributions that you are able to make!
I'll get an official 1.0.3 release out soon with the "java" and "environment" variable format support change. Let me know if you have more questions or hit other issues.
I tried overriding testDependenciesClasspath like you said, but it doesn't seem visible from my build.sbt. Should it be part of the autoImport you have defined here?
What's the reason not to include the main sourcepaths by default?
It looks like not having testDependenciesClasspath in autoImport was just an oversight. I only had the Test scope in the sourcepaths by default as I was keeping the test code separate from the product code, but I can see why it would be good to include the main sources by default as well.
I had already released a 1.0.3 update this morning with the changes to date. So I've put out a 1.0.3-SNAPSHOT that should address the issues above:
addSbtPlugin("com.tapad" % "sbt-docker-compose" % "1.0.3-SNAPSHOT")
You may need to add a resolver to get the snapshot release if you are trying this with the example project:
resolvers ++= Seq(Resolver.sonatypeRepo("public"))
I got your latest snapshot but it still can't find my main sourcepaths. I think it's because you're including managedClasspath in Compile when fullClasspath in Compile is what includes the classes from my main. I checked the contents of both and noticed only fullClasspath included target/scala-2.10/classes.
Also, I noticed testDependenciesClasspath still isn't accessible, but I guess that's not important anymore once this is sorted.
Take 2. I updated 1.0.3-SNAPSHOT to instead use fullClasspath in Compile:
addSbtPlugin("com.tapad" % "sbt-docker-compose" % "1.0.3-SNAPSHOT")
I'm not sure what's going on with testDependenciesClasspath for you. I see it now in autoImport and I'm able to reference it in my build.sbt.
Awesome. I think I'm in business! Thanks.
I'm not sure why I can't see testDependenciesClasspath. Obviously I pulled down the new snapshot, because I can run my main sourcepaths.
That's great!
I know that when I'm using IntelliJ the nested example projects sometimes say they can't resolve settings in the IDE, but it builds just fine from the command line. I just tried again and created a simple build.sbt file and it had no problem resolving and letting me define the testDependenciesClasspath setting.
name := "test"

version := "1.0.0"

scalaVersion := "2.10.5"

lazy val test = project.
  enablePlugins(DockerPlugin, DockerComposePlugin).
  settings(
    testDependenciesClasspath := {
      "FakeClasspath"
    },
    composeServiceName := "test",
    composeNoBuild := true
  )
Kind of a moot point for you now though as we are past your need to customize this setting.
Closing this issue out as I released an official update with the 'fullClasspath in Compile'
included in the test dependencies: addSbtPlugin("com.tapad" % "sbt-docker-compose" % "1.0.4")
Thanks for using the plugin! Just open up a new issue if anything else comes up.
Sounds good and will do!