upserve / docker-api
A lightweight Ruby client for the Docker Remote API
License: MIT License
In Docker 0.6 the default mode is no longer daemonized. It'd be great if the client library supported specifying a socket as well as the server.
Hello,
I am testing with this library and cannot seem to find a way to use detached mode (CLI: docker run -d) so that the command run within the container can be a daemon (web server, application server, etc.). Any assistance would be greatly appreciated!
Example code:
server = Docker::Container.create(
  :Hostname => name,
  :Image => image,
  :Cmd => ['/usr/sbin/sshd', '-D'],
  :PortSpecs => ['22']
)
server.start
Thanks!
Greg
I am certain I am missing a crucial piece of documentation somewhere, but when I do something like:
Docker::Container.all(:all => true)
Which returns:
Docker::Container { :id => f9557322025a6b470dbad7d73f5e3fa7edb882c0b8e420a2ba821a8afa55de75, :connection => Docker::Connection { :url => unix:///, :options => {:socket=>"/var/run/docker.sock"} } }
I get back an array of what look like individual hash elements, but strangely they are string representations of hashes.
I saw the to_s method you set in containers.rb and am curious how one goes about getting back actual JSON or direct Hash objects from the API. Additionally, I seem to only get back the container id and socket URL info, not what one would normally see from 'docker ps -a' on a Docker host.
I know I am missing something important here, but need some help finding this magical piece of info :)
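For what it's worth, a hedged sketch of the likely answer: each element of the array is a full Docker::Container object whose custom #to_s is what prints in irb; the raw listing data lives in the #info Hash, and #json fetches the full inspect document. The live calls are shown as comments because they need a reachable daemon, and the sample field values below are made up.

```ruby
# Each element of Docker::Container.all is a Docker::Container object;
# its #info attribute is a plain Hash of fields from /containers/json.
# With a live daemon, the calls would be roughly:
#   containers = Docker::Container.all(:all => true)
#   containers.map { |c| c.info }   # summary Hashes, not strings
#   containers.first.json           # full `docker inspect`-style Hash
#
# Pure illustration with a made-up info Hash:
infos = [
  { 'Id' => 'f9557322025a', 'Image' => 'base', 'Status' => 'Exited (0)' }
]
summaries = infos.map { |h| [h['Id'], h['Status']] }
puts summaries.inspect
```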
Sometimes it takes quite a while to pull an image from the registry. How can the timeout be extended or disabled?
If that's not currently possible, it would be worth implementing, to give the user the choice.
Thanks
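A minimal sketch of one way to raise the limit, under the assumption that docker-api forwards Docker.options through to Excon (read_timeout/write_timeout are Excon option names, and the 900-second value is arbitrary):

```ruby
# Excon-style timeout options; a very large value effectively disables
# the timeout. These keys are Excon's, not docker-api's own.
timeout_opts = { :read_timeout => 900, :write_timeout => 900 }

# With the gem loaded and a daemon reachable, this would be applied as:
#   require 'docker'
#   Docker.options = timeout_opts
#   Docker::Image.create('fromImage' => 'base')  # slow pulls get 15 minutes
puts timeout_opts.inspect
```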
I'm not getting any streaming output until docker passes the threshold of Excon's :chunk_size. I think it's a bug in excon. See excon/excon#350
Hi guys,
Currently I'm using docker-api 1.10.9 against Docker 0.10.0 and I'm having some serious issues with it lately.
The main issues occur when I invoke the image build action based on a Dockerfile using
image = ::Docker::Image.build(dockerfile_for(host), { :rm => true })
where the dockerfile_for(host) function creates a Dockerfile string.
The error I'm getting now is:
/home/jenkins/.rbenv/versions/1.9.3-p448/lib/ruby/gems/1.9.1/gems/docker-api-1.10.6/lib/docker/util.rb:52:in `extract_id': Couldn't find id: {"stream":"Step 0 : FROM jordansissel/system:centos-6.4\n"}
(Docker::Error::UnexpectedResponseError)
{"errorDetail":{"message":"invalid character 'u' after top-level value"},"error":"invalid character 'u' after top-level value"}
from /home/jenkins/.rbenv/versions/1.9.3-p448/lib/ruby/gems/1.9.1/gems/docker-api-1.10.6/lib/docker/image.rb:168:in `build'
Any ideas?
I also sometimes get an error that the id can't be extracted because docker-api sent malformed JSON to Docker.
The "latest" docs are wrong, but the "master" docs have been fixed to show that to specify the "runtime config" of an image during container commit, it should be supplied in the POST body: http://docs.docker.io/en/master/api/docker_remote_api_v1.6/#create-a-new-image-from-a-container-s-changes
There isn't currently a way to do so via docker-api, and I can't seem to figure out how to get connection.post to include a POST body (or this would be a PR instead of an issue). :)
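A hedged sketch of how the commit call could be shaped, assuming Docker.connection.post forwards Excon options such as :body; the container id, repo name, and runtime config below are all hypothetical.

```ruby
require 'json'

# The runtime config goes in the request body; the container and repo
# identify the commit in the query string. All values here are made up.
run_config = { 'Cmd' => ['/usr/sbin/sshd', '-D'], 'PortSpecs' => ['22'] }
body = run_config.to_json

# The live call would be roughly:
#   Docker.connection.post('/commit',
#                          { :container => 'abc123', :repo => 'myrepo' },
#                          :body => body)
puts body
```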
Hello,
I've been able to build images and push them over to quay.io
Now I'd like to start containers from those images on another Docker instance. I'd think my approach would be
If my approach is sane, how would #pull be used via the API?
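A hedged sketch of how #pull maps onto this gem: on the remote API, a pull is POST /images/create, which docker-api exposes as Docker::Image.create. The registry-qualified image name and the second host below are hypothetical.

```ruby
# "Pull" via the remote API is an image-create with a fromImage parameter.
pull_opts = { 'fromImage' => 'quay.io/myorg/myimage', 'tag' => 'latest' }

# Against the second Docker instance, roughly:
#   Docker.url = 'tcp://other-host:4243'
#   image = Docker::Image.create(pull_opts)
#   Docker::Container.create('Image' => 'quay.io/myorg/myimage').start
puts pull_opts.inspect
```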
To delete a container the method is Docker::Container#delete
. To delete an image it's Docker::Image#remove
. The docker API HTTP method for both operations is "DELETE", and the documentation describes both as a remove operation. While I don't think it matters much what the method name is, I do think it should be consistent between both containers and images.
Especially now that OSX is supported 😸, it's desirable to be able to run tests against a live version of Docker, see #83.
We should use an env var to drive the decision, and experiment with adding Travis CI build matrix support for a couple versions of Docker that aren't in the vcr cassettes.
The following pseudo-code may show that my use of 'rm' => true is not working as intended:
build_image = Docker::Image.build('from base\n', {rm: true})
puts build_image.inspect
files_image = build_image.insert_local 'localPath' => '/tmp/foo_dir', 'outputPath' => '/tmp/foo_dir', 'rm' => true
puts files_image.inspect
The output:
Docker::Image { :id => 5541feba8708, :info => {"id"=>"5541feba8708"}, :connection => Docker::Connection { :url => http://relvpc22:4243, :options => {} } }
Docker::Image { :id => 28e9765330bb, :info => {"id"=>"28e9765330bb"}, :connection => Docker::Connection { :url => http://relvpc22:4243, :options => {} } }
If my use of 'rm' => true with #insert_local were working, I don't think the 5541feba8708 image would still exist. However, docker images shows that it does.
Am I onto something or perhaps I'm confused?
Is there a way to log all HTTP requests that hit the Docker API? Not sure if it's possible in the latest release.
Ideally functionality like this would be great:
require "docker"
require "logger"
# Setup logger
Docker.logger = Logger.new(STDOUT)
# Create container
Docker::Container.create(options)
The logger would then print something like this:
I, [2013-12-19T20:07:10.512166 #60271] INFO -- : [:post, "/containers/create", {}, {:body=>"{\"Cmd\":[\"sleep\",\"10\"],\"Image\":\"base\"}"}]
Docker::Image.import should be able to import from an external URL, not just from a local file. Would it work to use open-uri?
Is there any way to mount a data volume through the API (the -v flag)?
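A hedged sketch of the API-side equivalent of -v, under the assumption that the daemon version in use supports HostConfig-style Binds at start time; the paths below are hypothetical.

```ruby
# Declare the mount point at create time, then bind the host directory
# at start time. All paths below are made up.
create_opts = { 'Image' => 'base', 'Volumes' => { '/container/dir' => {} } }
start_opts  = { 'Binds' => ['/host/dir:/container/dir'] }

# Live calls, roughly:
#   c = Docker::Container.create(create_opts)
#   c.start(start_opts)
puts start_opts.inspect
```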
Hello: your code is escaping the JSON, which sends malformed JSON to the Docker API.
We are trying to set the ExposedPorts in the create, which you need to set now with 0.6.5 in order to bind to a publicly exposed port on the real host:
container=Docker::Container.create('Cmd' => ['/usr/sbin/sshd','-D'], 'Image' => 'dillera/centos-sshd', 'ExposedPorts' => '{"22/tcp": {}}')
But when we send this, something is escaping that "22/tcp":
{"Cmd"=>["/usr/sbin/sshd", "-D"], "Image"=>"dillera/centos-sshd", "ExposedPorts"=>"{\"22/tcp\": {}}"}
and that is being rejected by the API as invalid JSON, which it is.
How can we stop this from being escaped?
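A sketch of the likely fix: the client JSON-encodes the whole options Hash once, so handing it a pre-serialized JSON string gets that string encoded a second time. Passing a nested Hash should avoid the double encoding:

```ruby
require 'json'

# ExposedPorts as a nested Hash, not the string '{"22/tcp": {}}'.
opts = {
  'Cmd'          => ['/usr/sbin/sshd', '-D'],
  'Image'        => 'dillera/centos-sshd',
  'ExposedPorts' => { '22/tcp' => {} }
}
payload = JSON.parse(opts.to_json)  # what the daemon would see
# Live call: Docker::Container.create(opts)
puts payload['ExposedPorts'].inspect
```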
The latest Docker version seems to ship a broken v1.6 API: /images/json returns empty ID values. Using v1.7 seems to fix this, at least for Docker::Image.
Maybe it would make sense to test against different Docker versions on Travis CI?
$ docker -v
Docker version 0.8.0, build cc3a8c8
$ curl http://localhost:4243/v1.6/images/json
[{"Created":1384460541,"ID":"","Repository":"travis","Size":0,"Tag":"ruby","VirtualSize":4847453689}]
I would expect to be able to push multiple tags, such as 0.12.4 and latest, to a private registry. So if I have an image object I would do
image.push(nil, tag: 'latest')
image.push(nil, tag: '0.12.4')
The push method always uses the first tag found in RepoTags. You cannot pass in the :tag parameter because it is always overwritten by the line
opts = options.merge(:tag => tag)
Perhaps that line should be changed to
opts = {:tag => tag}.merge(options)
In the case of a long running build of an image it would be really useful to have access to the standard out stream.
Looking at the code I couldn't see a way of doing this. Is this possible?
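A hedged sketch: later docker-api versions let Image.build (and build_from_dir) yield each raw chunk of the build stream to a block, which would surface standard out as the build runs. Whether the gem version in use supports the block form is an assumption here.

```ruby
# A block that receives raw build-stream chunks as they arrive.
dockerfile = "FROM base\nRUN echo hello\n"
log = []
collector = lambda { |chunk| log << chunk }

# Live call, roughly (block form assumed):
#   Docker::Image.build(dockerfile, &collector)
# Simulate one chunk of the stream to show the shape:
collector.call('{"stream":"Step 0 : FROM base"}')
puts log.inspect
```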
Since .search queries only the Docker Index, it would be helpful to have a method that will also search through local images. You can retrieve all local images via Docker::Images.all and match a query from that data set, but it would be nice to push that functionality up into the API rather than rolling our own.
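Until that exists upstream, a small sketch of rolling it by hand: filter the info Hashes that Docker::Image.all returns. The Repository/Tag field names follow the /images/json listing of that era and are assumptions here.

```ruby
# Match a query against the Repository and Tag fields of image listings.
def search_local(infos, query)
  infos.select do |info|
    [info['Repository'], info['Tag']].compact.any? { |s| s.include?(query) }
  end
end

# Made-up listing data standing in for Docker::Image.all.map(&:info):
images = [
  { 'Id' => 'aaa', 'Repository' => 'travis', 'Tag' => 'ruby' },
  { 'Id' => 'bbb', 'Repository' => 'base',   'Tag' => 'latest' }
]
hits = search_local(images, 'travis')
puts hits.map { |h| h['Id'] }.inspect
```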
After building an image and tagging it into a repository, push fails with
rake aborted!
Docker::Error::ArgumentError
.../lib/ruby/gems/2.0.0/gems/docker-api-1.8.0/lib/docker/image.rb:40:in `push'
because RepoTags is not used anymore.
Hi,
lately I'm running into this issue a lot:
/home/jenkins/.rbenv/versions/1.9.3-p448/lib/ruby/gems/1.9.1/gems/docker-api-1.9.1/lib/docker/connection.rb:52:in `rescue in request': read timeout reached (Docker::Error::TimeoutError)
Any ideas on increasing the timeout value?
Cheers
I've been able to use #insert_local to add files into my image. Now I'm looking for a way to copy complete directories over. I tried refactoring the method to use Docker::Util#create_dir_tar when localPath was detected as a directory, and Docker::Util#create_tar otherwise for a file. That attempt has not resulted in anything usable, but I wanted to share the idea in order to hear opinions.
Also...
The build_from_dir method does not appear to actually transfer the directory over, despite its use of Docker::Util#create_dir_tar. Perhaps I am misunderstanding the intended use of the build_from_dir method...
Is anyone using the API to copy complete directories?
What is the reason Docker::Container.new is private? Docker::Image.new, in contrast, is public.
My use case is creating container and image instances from known ids or names.
I can't currently reproduce this issue, but I was hoping someone else may have experienced this and can explain to me what's going on.
In my app, tests related to using Docker would occasionally fail; I had code like this:
messages = container.attach
stdout = messages[0].join
stderr = messages[1].join
output = stdout + stderr
On occasion output would be an empty string, though most of the time it was the expected value. This was quite the heisenbug: when I re-ran the specs the bug would go away.
I've now changed my call to #attach to
messages = container.attach(logs: true)
And now output is always the correct value.
I've looked at this for a while and I'm pretty sure it's not a coincidence. Is there any reason why logs: true would change anything? Is it possible that logs: true immediately produces output, thus preventing Excon or docker-api from closing the stream? That's the only tentative explanation I can offer.
I'll try to create a reproducible example as soon as I have the time.
Though as it is I can just call #insert_local a few times to insert files into an image, it'd be nice if #insert_local could take an array of filenames too, so I could do
image.insert_local('localPath' => [ 'Gemfile', 'Rakefile' ], 'outputPath' => '/')
instead of something like
image.insert_local('localPath' => 'Gemfile', 'outputPath' => '/').insert_local('localPath' => 'Rakefile', 'outputPath' => '/')
which will also create an unnecessary image.
Also, and this is somewhat unrelated: why does #insert_local strictly take strings for argument keys? I feel as though using symbols would be more Ruby-ish, but I'm sure there's a reason this wasn't done.
After updating I get the following exception:
No such file or directory @ unlink_internal - /tmp/out20140512-14799-3tjzjc
/home/mikdiet/.rvm/rubies/ruby-2.1.1/lib/ruby/2.1.0/fileutils.rb:1454:in `unlink'
/home/mikdiet/.rvm/rubies/ruby-2.1.1/lib/ruby/2.1.0/fileutils.rb:1454:in `block in remove_file'
/home/mikdiet/.rvm/rubies/ruby-2.1.1/lib/ruby/2.1.0/fileutils.rb:1459:in `platform_support'
/home/mikdiet/.rvm/rubies/ruby-2.1.1/lib/ruby/2.1.0/fileutils.rb:1453:in `remove_file'
/home/mikdiet/.rvm/rubies/ruby-2.1.1/lib/ruby/2.1.0/fileutils.rb:791:in `remove_file'
/home/mikdiet/.rvm/rubies/ruby-2.1.1/lib/ruby/2.1.0/fileutils.rb:569:in `block in rm'
/home/mikdiet/.rvm/rubies/ruby-2.1.1/lib/ruby/2.1.0/fileutils.rb:568:in `each'
/home/mikdiet/.rvm/rubies/ruby-2.1.1/lib/ruby/2.1.0/fileutils.rb:568:in `rm'
/home/mikdiet/.rvm/gems/ruby-2.1.1/gems/docker-api-1.10.10/lib/docker/image.rb:192:in `build_from_dir'
It's nice that the library creates Docker::Image objects for me when executing Image.search, but apart from the name of the image, the actual search result JSON is essentially discarded.
hashes.map { |hash| new(connection, hash['name']) }
As the constructor defines a third parameter, info, it's possible to retain that search-result data by passing the entire hash as the argument. I've reopened the class in my own project and done so with success.
hashes.map { |hash| new(connection, hash['name'], hash) }
This makes it possible to do
star_count = image.info['star_count']
description = image.info['description']
is_official = image.info['is_official']
is_trusted = image.info['is_trusted']
It doesn't appear there are ramifications to this elsewhere, and I'd happily submit a patch if I'm not being short-sighted.
The current version of Container#attach returns output as an array containing standard out and standard error separately. I'm not sure if this is possible with the way the Docker API is set up, but is there a way to get standard out and standard error as a single string, in the order they were printed?
Hello,
I'm encountering a small issue which obliges me to use the docker command line for something.
These two lines don't have the same result.
@image.push
`docker push #{@image.info["Repository"]}`
The API requests are, respectively:
POST /v1.6/images/<image_id>/push
POST /v1.6/images/<image_repo>/push
My use case is this one: I already have an image of "myrepo" tagged "latest".
When I push my newly created image of "myrepo" (which is locally latest) with docker-api, the latest tag is not updated in the repository. As a result, another server pulling myrepo:latest from the registry still gets the old version.
However, by using the docker command line with the repo name in the URL, the tag is correctly updated, and the other server is able to get the newly tagged latest.
This issue is not specifically linked to docker-api (I think the Docker API could be better designed), but it may be useful to update the gem to handle this?
Thank you !
It would be nice to be able to optionally specify all the arguments for build, but currently "nocache" seems to be the most useful one:
http://docs.docker.io/en/master/api/docker_remote_api_v1.4/#build-an-image-from-dockerfile-via-stdin
(unfortunately, the docs appear to have some broken formatting there currently, but you can see the query parameters nevertheless)
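A hedged sketch of what forwarding those parameters might look like; the idea is that an options Hash would pass straight through to the /build query string (the call shape below is hypothetical until the gem supports it):

```ruby
# Query parameters understood by POST /build in that API version.
build_opts = { 'nocache' => true, 't' => 'myrepo/myimage', 'q' => false }

# Hypothetical call shape once the gem forwards them:
#   Docker::Image.build("FROM base\n", build_opts)
puts build_opts.inspect
```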
Hi,
I ran into a bit of an issue today when I was executing something like this:
image = Docker::Image.all.first
puts image.run("echo 'foo bar'").attach
The output is 'foo bar' (with the quotes) instead of the expected foo bar. I think what's causing this is this line: https://github.com/swipely/docker-api/blob/master/lib/docker/image.rb#L20.
Is there a reason for this behavior? If so, how could I avoid this issue? I do want to be able to use quotes and other bash goodies.
Add support for Docker 0.6.x and the v1.5 API - http://docs.docker.io/en/latest/api/docker_remote_api_v1.5/
Hi,
I am trying to run a container as follows:
c = Docker::Container.create('Cmd' => ['service supervisord start'] , 'Image' => 'base_image','name' => "foo", "HostConfig" => {"PortBindings"=>{"8080/tcp"=>[{"HostIp"=>"0.0.0.0", "HostPort"=>"8080"}]}})
but it throws the following error:
Docker::Error::ServerError: Expected(200..204) <=> Actual(500 InternalServerError)
from /var/lib/gems/1.9.1/gems/docker-api-1.7.5/lib/docker/connection.rb:44:in `rescue in request'
from /var/lib/gems/1.9.1/gems/docker-api-1.7.5/lib/docker/connection.rb:36:in `request'
from /var/lib/gems/1.9.1/gems/docker-api-1.7.5/lib/docker/connection.rb:51:in `block (2 levels) in <class:Connection>'
from /var/lib/gems/1.9.1/gems/docker-api-1.7.5/lib/docker/container.rb:128:in `create'
from (irb):66
What's the right way of doing this?
Hi,
I've tried to let container.stop wait for a timeout before killing the container, but it's not being accepted.
container.stop({'timeout' => '10'})
Am I doing something wrong, or is it a bug? (Gem version 1.9.1, btw)
Hi. The following code worked fine on Docker 0.6.4, but seems to be breaking on 0.6.6:
c = Docker::Container.create({
  'Cmd' => ['/root/pynb/start.sh'],
  'Image' => msg["image"],
  'PortSpecs' => ['8888']
})
c.start # Note that network settings aren't established until the container is started
...
container_port = c.json["NetworkSettings"]["PortMapping"]["Tcp"]["8888"]
When I run "docker ps", I can see the command worked, but there is no longer a port mapped to the host:
PORTS NAMES
23499b81401f odewahn/learning-data-science:latest /root/pynb/start.sh 40 seconds ago Up 37 seconds 8888/tcp olive_deer
Basically, it seems docker-api is no longer forwarding the ports to the host machine in this new version (unless I'm missing something).
Also, thanks for this great gem.
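A hedged sketch of what may have changed: around 0.6.5+ the port model moved from PortSpecs at create time to ExposedPorts at create time plus PortBindings at start time, with NetworkSettings reporting "Ports" instead of "PortMapping". The shapes below follow that newer model and are an assumption against 0.6.6.

```ruby
# Newer-style port publishing: expose at create, bind at start.
create_opts = {
  'Cmd'          => ['/root/pynb/start.sh'],
  'Image'        => 'odewahn/learning-data-science',
  'ExposedPorts' => { '8888/tcp' => {} }
}
start_opts = {
  'PortBindings' => {
    '8888/tcp' => [{ 'HostIp' => '0.0.0.0', 'HostPort' => '8888' }]
  }
}
# Live calls, roughly:
#   c = Docker::Container.create(create_opts)
#   c.start(start_opts)
#   c.json['NetworkSettings']['Ports']   # instead of ['PortMapping']
puts create_opts['ExposedPorts'].keys.inspect
```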
I am unable to use this gem alongside fog v1.22, as they rely on incompatible versions of excon. Can you update your excon dependency to >= 0.32?
This script hangs on the call to get():
require 'docker'
Docker.url = "http://google.com:4243"
cnts = Docker::Util.parse_json(Docker.connection.get('/containers/json', {}))
puts cnts
In #43 it was remarked that Docker only gives specific information about images if they're accessed through Image.all. I think it'd be great to have methods #repository, #tag, #created, #size, and #virtual_size, or at the very least a consistent way of getting a complete set of the information stored in info.
To find the information, we could just do something like:
image = # ...
Docker::Image.all.find { |img| img.id == image.id }.info
Are there any serious performance costs to .all? If not, I don't see why this couldn't be added.
Since this is useful for long-running processes that handle events, there should be an option to never time out, if possible.
Starting in version 1.10.1 I see the following in docker running in debug mode when trying to perform a build command against Docker version 0.8.1:
2014/03/17 19:28:06 POST /v1.10/build?rm=false
[error] api.go:998 Error: Multipart upload for build is no longer supported. Please upgrade your docker client.
[error] api.go:105 HTTP Error: statusCode=500 Multipart upload for build is no longer supported. Please upgrade your docker client.
If I revert to 1.9.x of the docker-api client this problem stops.
This error doesn't seem to occur when operating against Docker 0.9. However, given the minor version increment of the gem from 1.9.x to 1.10.x, I would have expected it to be backward compatible. This makes moving forward to 1.10.x difficult, as it means anyone using the new version has to run Docker >= 0.9.
Thoughts on why this may be?
Hi,
I have been trying to use Docker::Image.build_from_dir to build an image from a Dockerfile on an Ubuntu virtual machine that resides on my Mac. The thing is, the Dockerfile starts with a FROM instruction that pulls an image stored in a private registry.
I have tried adding a .dockercfg file with the registry's credentials to the Ubuntu guest box, but it did not make any difference. The weird thing is, when I tried to build the image from the Dockerfile inside the virtual machine, it worked as expected (without asking for authentication, but only after adding .dockercfg).
However, using the build_from_dir function I always get the following:
Couldn't find id: {"stream":"Step 0 : FROM registry.example.com/repo:latest\n"}
{"status":"Pulling repository registry.example.com/repo"}
{"errorDetail":{"message":"Authentication is required."},"error":"Authentication is required."}
I have also tried using Docker.creds as suggested in #55, with no luck.
What do you think could be the issue? I mean, it seems to build the image without issues when used directly from the docker CLI, so I think it might have something to do with docker-api in the end.
The docker-api gem version is 1.10.9, and the Docker version(s) are as follows:
Client version: 0.10.0
Client API version: 1.10
Go version (client): go1.2.1
Git commit (client): dc9c28f
Server version: 0.10.0
Server API version: 1.10
Git commit (server): dc9c28f
Go version (server): go1.2.1
Last stable version: 0.10.0
Currently, the Docker::Image#push method only allows pushing to the public Docker Index. It would be interesting to be able to push to any registry.
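A hedged sketch of the usual workaround in the meantime: the daemon routes a push by the registry prefix in the repository name, so tagging the image under a registry-qualified name before pushing should target that registry (the host and repo below are hypothetical):

```ruby
# A registry-qualified repository name; the host/port are made up.
registry_repo = 'registry.example.com:5000/myrepo/myimage'

# Live calls, roughly:
#   image.tag('repo' => registry_repo, 'tag' => 'latest')
#   image.push   # now pushes to registry.example.com:5000
puts registry_repo
```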
I'm getting the following error when I try to create 2 or more containers at the same time using https://github.com/mperham/sidekiq:
Expected(200..204) <=> Actual(500 InternalServerError)
The sidekiq worker looks like this:
class TestWorker
  include Sidekiq::Worker
  sidekiq_options :retry => false

  def perform(index)
    container = Docker::Container.create(
      'Image' => 'busybox',
      'Cmd' => ["date"]
    )
    logger.info { container.json }
  end
end
and the stacktrace looks like this:
2013-11-06T00:38:06Z 22330 TID-owmegh960 TestWorker JID-2963d74016c3cb695f4f6997 INFO: start
2013-11-06T00:38:06Z 22330 TID-owmed52sk TestWorker JID-674fb203ab4d13403e74ded7 INFO: start
2013-11-06T00:38:06Z 22330 TID-owmegh960 TestWorker JID-2963d74016c3cb695f4f6997 INFO: fail: 0.158 sec
2013-11-06T00:38:06Z 22330 TID-owmegh960 WARN: {"retry"=>false, "queue"=>"default", "class"=>"TestWorker", "args"=>[1], "jid"=>"2963d74016c3cb695f4f6997", "enqueued_at"=>1383698286.6713398}
2013-11-06T00:38:06Z 22330 TID-owmegh960 WARN: Expected(200..204) <=> Actual(500 InternalServerError)
2013-11-06T00:38:06Z 22330 TID-owmegh960 WARN: /home/vagrant/.rbenv/versions/2.0.0-p247/lib/ruby/gems/2.0.0/gems/docker-api-1.7.0/lib/docker/connection.rb:42:in `rescue in request'
/home/vagrant/.rbenv/versions/2.0.0-p247/lib/ruby/gems/2.0.0/gems/docker-api-1.7.0/lib/docker/connection.rb:36:in `request'
/home/vagrant/.rbenv/versions/2.0.0-p247/lib/ruby/gems/2.0.0/gems/docker-api-1.7.0/lib/docker/connection.rb:49:in `block (2 levels) in <class:Connection>'
/home/vagrant/.rbenv/versions/2.0.0-p247/lib/ruby/gems/2.0.0/gems/docker-api-1.7.0/lib/docker/container.rb:125:in `create'
/app/app/workers/test_worker.rb:13:in `perform'
[...]
2013-11-06T00:38:06Z 22330 TID-owmed52sk TestWorker JID-674fb203ab4d13403e74ded7 INFO: {"ID"=>"7f83868a600db43031c5729181bac00a2dccaca9b59cf5427a0a683b295edfb4", "Created"=>"2013-11-06T00:38:06.87068055Z", "Path"=>"date", "Args"=>[], "Config"=>{"Hostname"=>"7f83868a600d", "Domainname"=>"", "User"=>"", "Memory"=>0, "MemorySwap"=>0, "CpuShares"=>0, "AttachStdin"=>false, "AttachStdout"=>false, "AttachStderr"=>false, "PortSpecs"=>nil, "ExposedPorts"=>nil, "Tty"=>false, "OpenStdin"=>false, "StdinOnce"=>false, "Env"=>nil, "Cmd"=>["date"], "Dns"=>nil, "Image"=>"busybox", "Volumes"=>nil, "VolumesFrom"=>"", "WorkingDir"=>"", "Entrypoint"=>nil, "NetworkDisabled"=>false, "Privileged"=>false}, "State"=>{"Running"=>false, "Pid"=>0, "ExitCode"=>0, "StartedAt"=>"0001-01-01T00:00:00Z", "FinishedAt"=>"0001-01-01T00:00:00Z", "Ghost"=>false}, "Image"=>"e9aa60c60128cad1", "NetworkSettings"=>{"IPAddress"=>"", "IPPrefixLen"=>0, "Gateway"=>"", "Bridge"=>"", "PortMapping"=>nil, "Ports"=>nil}, "SysInitPath"=>"/usr/bin/docker", "ResolvConfPath"=>"/etc/resolv.conf", "HostnamePath"=>"/var/lib/docker/containers/7f83868a600db43031c5729181bac00a2dccaca9b59cf5427a0a683b295edfb4/hostname", "HostsPath"=>"/var/lib/docker/containers/7f83868a600db43031c5729181bac00a2dccaca9b59cf5427a0a683b295edfb4/hosts", "Name"=>"/gray_cow0", "Volumes"=>nil, "VolumesRW"=>nil}
2013-11-06T00:38:06Z 22330 TID-owmed52sk TestWorker JID-674fb203ab4d13403e74ded7 INFO: done: 0.2 sec
Stuff I observed while fiddling around with this problem for the past 2 hours:
I'm really stuck and not quite sure how to debug the problem.
Any help is appreciated.
Currently docker-api (this project) uses DOCKER_URL to find the Docker host. If I am already using Docker on OS X, this information is already set in a different variable, and it seems redundant to have DOCKER_URL when DOCKER_HOST would do the same thing.
Can you update the code to use DOCKER_URL by default and fall back to DOCKER_HOST if DOCKER_URL is not present?
DOCKER_HOST=tcp://localhost:4243 (http://docs.docker.io/en/latest/installation/mac/)
DOCKER_URL=tcp://localhost:4243
puts "#{image.info['Repository']} #{image.info['Tag']}"
# repo1/app latest
image.tag repo: "repo2/app"
image.delete
# => 409 conflict
My issue is the following: I want to rename an image, so I retag it and then try to delete the old tag.
If an image has two tags (as in the example above), a
DELETE /images/repo1%2Fapp%3Alatest
=> [{"Untagged":"39e0e4c41dceeae8b6c7fa41339ebc00e36e6ac92701ff77945e305cf2d401d2"}]
only removes the tag, and currently that can only be done by hand, which is not really pretty:
Docker.connection.delete "/images/repo1%2Fapp%3Alatest"
When trying to name a container you have to pass the 'name' parameter into the opts{} hash as a string, but all other options are symbols, is there a reason this couldn't be made consistent? I'm happy to submit a PR & you could even make it support both for a period of time.
in lib/docker/container.rb:
def self.create(opts = {}, conn = Docker.connection)
  name = opts.delete('name')
  query = {}
  query['name'] = name if name
  resp = conn.post('/containers/create', query, :body => opts.to_json)
  hash = Docker::Util.parse_json(resp) || {}
  new(conn, hash)
end
I know, I know... but my Dockerfile's RUN command (not shown) actually depends on the hostname. So... trying this:
image = Docker::Image.build "from my_base_image", "Hostname" => "foobox"
and yet, the container's hostname appears to still be auto-generated:
image.json
=> {"id"=>"0f4170b06c150aa1000d70f913f56a67fedc26d05b5724b6e034e4225a420d97", "parent"=>"d20240c81001808519f321ab4df96992850012882f9dd7640ede2e4a30d86b31", "created"=>"2014-04-07T20:10:33.437269935Z", "container"=>"5c052c022bcce24fdcdc7fd797540e4d902e8151f21ab6050c9fa2895f315b53", "container_config"=>{"Hostname"=>"a4df64ba4be2", "Domainname"=>"", "User"=>"", "Memory"=>0, "MemorySwap"=>0, "CpuShares"=>0, "AttachStdin"=>false, "AttachStdout"=>false, "AttachStderr"=>false, "PortSpecs"=>nil, "ExposedPorts"=>nil, "Tty"=>false, "OpenStdin"=>false, "StdinOnce"=>false, "Env"=>["HOME=/", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "DEBIAN_FRONTEND=noninteractive"], "Cmd"=>["/bin/sh", "-c", "apt-get -qq -y install puppet"], "Dns"=>nil, "Image"=>"d20240c81001808519f321ab4df96992850012882f9dd7640ede2e4a30d86b31", "Volumes"=>nil, "VolumesFrom"=>"", "WorkingDir"=>"", "Entrypoint"=>nil, "NetworkDisabled"=>false, "OnBuild"=>[]}, "docker_version"=>"0.9.1", "author"=>"", "config"=>{"Hostname"=>"a4df64ba4be2", "Domainname"=>"", "User"=>"", "Memory"=>0, "MemorySwap"=>0, "CpuShares"=>0, "AttachStdin"=>false, "AttachStdout"=>false, "AttachStderr"=>false, "PortSpecs"=>nil, "ExposedPorts"=>nil, "Tty"=>false, "OpenStdin"=>false, "StdinOnce"=>false, "Env"=>["HOME=/", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "DEBIAN_FRONTEND=noninteractive"], "Cmd"=>["/bin/bash"], "Dns"=>nil, "Image"=>"d20240c81001808519f321ab4df96992850012882f9dd7640ede2e4a30d86b31", "Volumes"=>nil, "VolumesFrom"=>"", "WorkingDir"=>"", "Entrypoint"=>nil, "NetworkDisabled"=>false, "OnBuild"=>[]}, "architecture"=>"amd64", "os"=>"linux", "Size"=>37092125}
With docker-api 1.7.4 and docker 0.7.1
Docker::Image.create fromImage: "image" , tag: "latest"
Results in an exception
Docker::Error::UnexpectedResponseError: Create response did not contain an Id
from /usr/lib/ruby/gems/1.9.1/gems/docker-api-1.7.4/lib/docker/image.rb:119:in `create'
Some companies will only use gems with a certain license.
The canonical and easy way to check is via the gemspec,
via e.g.
spec.license = 'MIT'
# or
spec.licenses = ['MIT', 'GPL-2']
Even for projects that already specify a license, including a license in your gemspec is a good practice, since it is easily
discoverable there without having to check the readme or for a license file. For example, it is the field that rubygems.org uses to display a gem's license.
For example, there is a License Finder gem to help companies ensure all gems they use
meet their licensing needs. This tool depends on license information being available in the gemspec. This is an important enough
issue that even Bundler now generates gems with a default 'MIT' license.
If you need help choosing a license (sorry, I haven't checked your readme or looked for a license file), github has created a license picker tool.
In case you're wondering how I found you and why I made this issue: I'm collecting stats on gems (I was originally looking for download data) and decided to collect license metadata, too, and make issues for gemspecs not specifying a license as a public service :).
I hope you'll consider specifying a license in your gemspec. If not, please just close the issue and let me know. In either case, I'll follow up. Thanks!
p.s. I've written a blog post about this project
Sorry to be pestering you guys with so many questions, but I'm not sure where else I could inquire.
It seems as though insert_local creates a temporary container, presumably used to insert the files into the new image. However, only the resulting image is ever returned to the user, as far as I can tell. How do I get at that temporary container?
In case what I'm saying isn't very clear, hopefully this irb session speaks for itself:
vagrant@precise64:/vagrant$ irb
2.0.0-p353 :001 > require 'docker'
true
2.0.0-p353 :002 > Docker::Image.all.count
11
2.0.0-p353 :003 > Docker::Container.all(all: true).count
0
2.0.0-p353 :004 > Docker::Image.all.first.insert_local('localPath' => 'Dockerfile', 'outputPath' => '/')
#<Docker::Image:0x00000002bd3da8 @connection=#<Docker::Connection:0x00000002901580 @url="unix:///", @options={:socket=>"/var/run/docker.sock"}>, @id="f5cb6cd8488c", @info={}>
2.0.0-p353 :005 > Docker::Image.all.count
12
2.0.0-p353 :006 > Docker::Container.all(all: true).count
1
(insert_local increments the image count by one, and returns that new image, but it also increments the container count; this discrepant container is what I'd like to delete.)
I'm asking because I'd rather not have the output of docker ps -a totally cluttered with these temp containers, of which I create quite a few in a little project I'm working on.