fog-rackspace's Introduction

Fog::Rackspace

Installation

Add this line to your application's Gemfile:

gem 'fog-rackspace'

And then execute:

$ bundle

Or install it yourself as:

$ gem install fog-rackspace

Usage

See https://github.com/fog/fog for usage.

fog-rackspace's People

Contributors

amirfefer, codeodor, geemus, plribeiro3000

fog-rackspace's Issues

How to apply a security group to a port

I have successfully created a security group with a handful of rules. update_port() and port.save seem to only update port.name. Is there a different way to apply the security group to a port?

Put Object 202 Response

We have just started receiving 202 responses from Rackspace during a put_object call in our asset sync at deployment, which causes our deployments to fail. To resolve this we could alter the expected response codes to accept both 201 and 202, like this:

params.merge!(
  :expects    => [201, 202],
  :idempotent => !params[:request_block],
  :headers    => headers,
  :method     => 'PUT',
  :path       => "#{Fog::Rackspace.escape(container)}/#{Fog::Rackspace.escape(object)}"
)

Would you recommend doing this?

Fog Rackspace BadRequest: [HTTP 400 | ] Validation failed

We are on the latest gem versions; here is a list of our Fog gem versions:
fog (1.38.0)
fog-aliyun (0.1.0)
fog-atmos (0.1.0)
fog-aws (0.9.2)
fog-brightbox (0.10.1)
fog-cloudatcost (0.2.1)
fog-core (1.40.0)
fog-dynect (0.0.3)
fog-ecloud (0.3.0)
fog-google (0.3.2)
fog-json (1.0.2)
fog-local (0.3.0)
fog-openstack (0.1.6)
fog-powerdns (0.1.1)
fog-profitbricks (0.0.5)
fog-rackspace (0.1.1)
fog-radosgw (0.0.5)
fog-riakcs (0.1.0)
fog-sakuracloud (1.7.5)
fog-serverlove (0.1.2)
fog-softlayer (1.1.1)
fog-storm_on_demand (0.1.1)
fog-terremark (0.1.0)
fog-vmfusion (0.1.0)
fog-voxel (0.1.0)
fog-vsphere (0.7.0, 0.6.3)
fog-xenserver (0.2.3)
fog-xml (0.1.2)

Thanks

Improve handling of asynchronous DNS callback request failures

Rackspace treats non-GET requests dealing with DNS (and maybe other services?) as asynchronous:
https://docs.rackspace.com/docs/cloud-dns/v1/general-api-info/synchronous-and-asynchronous-responses

This gem handles that via:

response = wait_for_job service.add_records(@zone.identity, [options]).body['jobId']

def wait_for_job(job_id, timeout=Fog.timeout, interval=1)
  retries = 5
  response = nil
  Fog.wait_for(timeout, interval) do
    response = service.callback job_id
    if response.body['status'] == 'COMPLETED'
      true
    elsif response.body['status'] == 'ERROR'
      raise Fog::DNS::Rackspace::CallbackError.new(response)
    elsif retries == 0
      raise Fog::Errors::Error.new("Wait on job #{job_id} took too long")
    else
      retries -= 1
      false
    end
  end
  response
end

def callback(job_id, show_details=true)
  validate_path_fragment :job_id, job_id
  request(
    :expects => [200, 202, 204],
    :method  => 'GET',
    :path    => "status/#{job_id}",
    :query   => "showDetails=#{show_details}"
  )
end

However, it expects a 200, 202, or 204 response for each status poll to be considered successful.

Rackspace also enforces rate limits on requests, 5 per second for polling status, returning a 413 code when exceeding the limit:
https://docs.rackspace.com/docs/cloud-dns/v1/general-api-info/limits#rate-limits

This can create a scenario where your code tries to create a record, the initial request to Rackspace succeeds (while your application is still waiting), and the gem starts polling for status. A polling request may return something other than a 200, 202, or 204 response (more common if multiple background jobs are dealing with DNS simultaneously), which the gem treats as a failure, and which then surfaces as if your code to create the record had failed. That isn't fully accurate: only a status request failed. The underlying job to create the record may still be processing and may eventually succeed on its own, yet an error has already been raised in your application.

Given that the callback/status request is idempotent and just polling, I wonder if it should be less strict about the codes it expects, instead treating non-200/202/204 responses as a silent failure that triggers another retry. For example, if a callback request returned a 413 or 500 code, we likely don't need to treat the outer operation (adding a record) as a failure; we could consider just that callback a failure and hope for a better response on the next retry, ultimately erroring only if we exceed the number of retries.
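A self-contained sketch of that retry-tolerant polling. The `poll` lambda and the response hashes are hypothetical stand-ins for `service.callback(job_id)` and its parsed body; the retryable code list is illustrative:

```ruby
# Codes we treat as transient failures of the status poll itself,
# not of the underlying DNS job (illustrative list).
RETRYABLE_CODES = [413, 500, 502, 503].freeze

def wait_for_job(poll, max_attempts: 5, interval: 0)
  max_attempts.times do
    response = poll.call
    if RETRYABLE_CODES.include?(response[:code])
      sleep interval                    # transient poll failure: retry
    elsif response[:status] == 'COMPLETED'
      return response
    elsif response[:status] == 'ERROR'
      raise "job failed: #{response.inspect}"
    else
      sleep interval                    # still running: poll again
    end
  end
  raise "job did not complete after #{max_attempts} attempts"
end
```

Here a 413 rate-limit response simply consumes one retry instead of surfacing as a failed record creation.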

Error while building

Hi,

There seems to be an error while building the package. Here is what it says:

  Fog::Rackspace::LoadBalancers | load_balancer_get_stats (rackspace)
    Initialization parameters must be an attributes hash, got Fog::Rackspace::LoadBalancers::VirtualIp     <Fog::Rackspace::LoadBalancers::VirtualIp
      id="3817",
      address="732.903.662.424",
      type="PUBLIC",
      ip_version="IPV4"
    > (ArgumentError)
      /usr/lib/ruby/vendor_ruby/fog/core/collection.rb:86:in `new'
      /usr/lib/ruby/vendor_ruby/fog/core/collection.rb:76:in `block in load'
      /usr/lib/ruby/vendor_ruby/fog/core/collection.rb:18:in `each'
      /usr/lib/ruby/vendor_ruby/fog/core/collection.rb:18:in `each'
      /usr/lib/ruby/vendor_ruby/fog/core/collection.rb:76:in `load'
      /home/utkarsh2102/utkarsh/rackspace/ruby-fog-rackspace/lib/fog/rackspace/models/load_balancers/load_balancer.rb:111:in `virtual_ips='
      /usr/lib/ruby/vendor_ruby/fog/core/attributes.rb:134:in `block in merge_attributes'
      /usr/lib/ruby/vendor_ruby/fog/core/attributes.rb:129:in `each_pair'
      /usr/lib/ruby/vendor_ruby/fog/core/attributes.rb:129:in `merge_attributes'
      /usr/lib/ruby/vendor_ruby/fog/core/model.rb:51:in `reload'
      /usr/lib/ruby/vendor_ruby/fog/core/model.rb:73:in `block in wait_for'
      /usr/lib/ruby/vendor_ruby/fog/core/wait_for.rb:7:in `block in wait_for'
      /usr/lib/ruby/vendor_ruby/fog/core/wait_for.rb:6:in `loop'
      /usr/lib/ruby/vendor_ruby/fog/core/wait_for.rb:6:in `wait_for'
      /usr/lib/ruby/vendor_ruby/fog/core/model.rb:72:in `wait_for'
      /home/utkarsh2102/utkarsh/rackspace/ruby-fog-rackspace/tests/rackspace/helper.rb:21:in `given_a_load_balancer'
      /home/utkarsh2102/utkarsh/rackspace/ruby-fog-rackspace/tests/rackspace/requests/load_balancers/get_stats_tests.rb:4:in `block (2 levels) in <top (required)>'
      /home/utkarsh2102/utkarsh/rackspace/ruby-fog-rackspace/tests/rackspace/helper.rb:10:in `instance_eval'
      /home/utkarsh2102/utkarsh/rackspace/ruby-fog-rackspace/tests/rackspace/helper.rb:10:in `given_a_load_balancer_service'
      /home/utkarsh2102/utkarsh/rackspace/ruby-fog-rackspace/tests/rackspace/requests/load_balancers/get_stats_tests.rb:3:in `block in <top (required)>'
      /usr/lib/ruby/vendor_ruby/shindo.rb:79:in `instance_eval'
      /usr/lib/ruby/vendor_ruby/shindo.rb:79:in `tests'
      /usr/lib/ruby/vendor_ruby/shindo.rb:38:in `initialize'
      /usr/lib/ruby/vendor_ruby/shindo.rb:13:in `new'
      /usr/lib/ruby/vendor_ruby/shindo.rb:13:in `tests'
      /home/utkarsh2102/utkarsh/rackspace/ruby-fog-rackspace/tests/rackspace/requests/load_balancers/get_stats_tests.rb:1:in `<top (required)>'
      /usr/lib/ruby/vendor_ruby/shindo/bin.rb:61:in `load'
      /usr/lib/ruby/vendor_ruby/shindo/bin.rb:61:in `block (2 levels) in run_in_thread'
      /usr/lib/ruby/vendor_ruby/shindo/bin.rb:58:in `each'
      /usr/lib/ruby/vendor_ruby/shindo/bin.rb:58:in `block in run_in_thread'

Files#get with block has incorrect data when writing a gzip file

When using the fog storage Rackspace Cloud Files Files#get method (http://www.rubydoc.info/github/fog/fog/Fog/Storage/Rackspace/Files#get-instance_method) to retrieve a gzipped tar file (file.tar.gz) with a block, as shown in the linked example, and writing the data to a file, the resulting file does not have the correct content type: running "file file.tar.gz" reports "file.tar.gz: data".
If I instead retrieve the same file by letting the method return an object, then write object.body to a file, it writes correctly and "file file.tar.gz" reports "file.tar.gz: gzip compressed data, from Unix, last modified: Fri Aug 14 11:37:29 2015" as expected. Using the block form on a simple text file seems to work fine. Is this a bug, or am I doing something wrong? I thought a different approach might be needed for binary files, but the example in the link above uses the block form on an image file, so I assume it should work for binary files, no?

Original issue opened by @patakijv at fog/fog#3663.

support rax_service_level_automation=Complete

It would be awesome if there were a flag in the fog-rackspace library that allowed managed => true to be set on a server fog is going to create.

This would allow waiting for the metadata rax_service_level_automation=Complete to be set before fog logs in and makes changes, so that the Rackspace post-build automation can run before the root user's password is disabled.

There are three values that rax_service_level_automation gets set to:

'In Progress' when the automation is running
'Complete' when the automation is finished
'Build Error' when the automation fails
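The wait described above could be sketched as a metadata poll. The `fetch_metadata` lambda is a hypothetical stand-in for re-reading `server.metadata` from the API; the metadata key and its values come from the issue text:

```ruby
# Sketch: wait for Rackspace post-build automation by polling the server's
# metadata until rax_service_level_automation reaches a terminal value.
def wait_for_automation(fetch_metadata, max_attempts: 60, interval: 0)
  max_attempts.times do
    state = fetch_metadata.call['rax_service_level_automation']
    return true if state == 'Complete'
    raise 'Rackspace post-build automation failed' if state == 'Build Error'
    sleep interval                      # 'In Progress': keep waiting
  end
  false
end
```

fog could run something like this between server creation and its SSH personalization step when the managed flag is set.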

Any help in getting that added would be awesome, thanks!
Daniel

Original issue opened by @gtmanfred at fog/fog#3714

Merge existing yet offline tests

Fog traditionally uses shindo as its test framework. I tend to prefer RSpec, and as such wrote a bunch of tests for Orchestration, CDNV2, and NetworkingV2. I need to refactor these tests and merge them.

Storage: copy doesn't work on files with "special" characters

Hello,

It seems that the Rackspace copy functionality doesn't work on some files. I sort of suspect the problem is that Rackspace didn't describe it correctly in the documentation, but nevertheless ...

In the following snippet, the first copy operation works, while the second fails with "resource not found in ORD region (Fog::Storage::Rackspace::NotFound)":

file = cdc.directory.files.new(:key => 'Test%file', :body => 'Testing.file')
file.save
puts "Save done"
f1 = file.copy(cdc.name, 'Test-file')

file = cdc.directory.files.new(:key => 'Test%2file', :body => 'Testing.file')
file.save
puts "Save done"
f1 = file.copy(cdc.name, 'Test-file')

Looking at the output with EXCON_DEBUG=true, I suspect the problem is that "X-Copy-From" is not escaped, which matters in the second case.
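A sketch of the escaping the copy path likely needs. `ERB::Util.url_encode` stands in for the gem's own `Fog::Rackspace.escape`, and `copy_source` is a name I made up; the idea is to percent-encode each path segment while keeping the separating `/` literal, as put_object already does for :path:

```ruby
require 'erb'

# Build an escaped X-Copy-From value so object keys containing characters
# like '%' survive the copy request intact.
def copy_source(container, key)
  '/' + [container, *key.split('/')].map { |seg| ERB::Util.url_encode(seg) }.join('/')
end
```

With this, a key like 'Test%2file' becomes '/container/Test%252file' on the wire, so Swift decodes it back to the original key instead of misreading the '%2f'.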

Honza

Original issue opened by @hkmaly at fog/fog#3719

Different etags for Mock storage object between head request and get request

When using Fog.mock! in tests, I have noticed that a different etag is returned when using a files.head request versus a file.get request. I don't know enough about what is going on, but I think the Mock class for the head request has a bug, or I really don't understand what it's trying to do.

Using Fog 1.37.0, but I can see the code is the same here for the two files I am looking at.

From my test, here is the metadata returned for the same file by each request:

Head:

<Fog::Storage::Rackspace::File
    key="global/teacher/test_program/u1/g1/default/grade_web_resources/test.pdf",
    content_length=19,
    content_type="application/pdf",
    content_disposition=nil,
    etag="\"ba6969cb26232dd85e5b47c8f3a181bb\"",
    last_modified=2017-08-09 21:28:21 UTC,
    access_control_allow_origin=nil,
    origin=nil,
    content_encoding=nil,
    delete_at=nil,
    delete_after=nil
>

and using get

<Fog::Storage::Rackspace::File
    key="global/teacher/test_program/u1/g1/default/grade_web_resources/test.pdf",
    content_length=19,
    content_type="application/pdf",
    content_disposition=nil,
    etag="f354d658bc16113d80d596a8b61e87bf",
    last_modified=2017-08-09 21:27:53 UTC,
    access_control_allow_origin=nil,
    origin=nil,
    content_encoding=nil,
    delete_at=nil,
    delete_after=nil
>

The get request has the correct etag based on what we set prior to saving the file.

Part of the method we are using to push the file

    cloud_file = directory.files.create key: cloud_path
    cloud_file.body = File.open local_path
    cloud_file.etag = checksum local_path
    cloud_file.save

In looking at the rackspace/requests/storage/head_object.rb file, specifically the Mock class, I get stuck on how the etag is generated. It looks like it grabs the value from each part of the file object stored under the hash key, adds those values to a new hash, and later calculates an MD5 digest of that new hash turned into a string. It seems weird to create an MD5 digest of a string of all the MD5 digests of each part of the file and use that as the etag.
In testing, when inspecting the mock object of the file in question I can see that the @meta entry already contains the correct Etag as set.

...@last_modified=2017-08-10 17:31:12 UTC, @hash="f354d658bc16113d80d596a8b61e87bf", @meta={"Etag"=>"f354d658bc16113d80d596a8b61e87bf"}, @static_manifest=false>

In summary, I can't figure out how to properly test methods that rely on a head call. We have a method, partly shown above, that creates a cloud file, writes the file body and etag before saving, and then does a files.head call against the newly uploaded file to verify it was uploaded with the correct etag and size. That method is now failing tests because the wrong etag is returned by the head request under Fog.mock!.

I don't fully understand the process or the best place to make a change, but it seems like the Mock class in get_object just relies on the .to_headers method to return what is there, while head does something different.

I would be happy to submit a pull request at some point, but I'm afraid I don't understand enough about the entire process, and more specifically why head_object in the Mock class creates what seems to be a random etag. If there is no need to return anything besides the stored value, then letting whatever may exist in the object's @meta be returned seems like the correct approach. Not to mention the .to_headers method already seems to take care of adding up the bytes for each part.
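For what it's worth, MD5-of-concatenated-MD5s is how OpenStack Swift reports the etag of a multi-segment (static large) object, which may be what the Mock code is imitating. A minimal sketch of that scheme; the function name is mine:

```ruby
require 'digest'

# Swift's large-object etag: the MD5 of the concatenated per-segment MD5s,
# not the MD5 of the whole reassembled body.
def large_object_etag(segment_bodies)
  segment_md5s = segment_bodies.map { |body| Digest::MD5.hexdigest(body) }
  Digest::MD5.hexdigest(segment_md5s.join)
end
```

That would explain the scheme for segmented objects, though not why it overrides an explicitly stored @meta Etag on a single-part file.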

Sorry for the rambling. Happy to provide more info as needed; looking for help, clarification, or guidance on the best way to make the change.

Memory leak in Cloud Files directory and file calls?

I'm using Fog to access Rackspace Cloud Files within EventMachine. I seem to be getting a memory leak related to the Rackspace::Service#request method. At this point I have not managed to separate out the code to make a simple example of the issue, but I'd like to see if the description of my current investigation can trigger any ideas around the real source of the problem. I've tried various Fog versions and have currently landed at 1.27.0 for testing.

I create a Fog connection up front and use it to get a single directory by name within an HTTP request handled by EventMachine (em-http-server):

service = Fog::Storage.new(req_opt)
# within an EventMachine deferred request
  service.directories.get(@full_container_name)
# end the EventMachine request

I have separated out the Fog code and just returned data through my server, and there is no leak. As soon as I hit service.directories.get, the memory usage grows and grows.

Now if I just do:

(0..50).each do |d|
  service.directories.get(@full_container_name)
end

The memory does not grow more than for a single call. So I believe that something is being held onto inside the first call within an EventMachine request.

If I do service.directories.all, the memory appears to leak in a similar way, so it's not just the #get method at fault.

For testing, I reduced the call to just do

service.directories.do_nothing

within the EM deferred request, where I created a dummy method in the Rackspace::Directories class. The memory does not grow in the same way.

Inside the Rackspace::Real class I created a similar dummy method and called it instead of Rackspace::Real#get_container, just to make sure. The memory jumps around a little more but does not grow endlessly.

This suggests to me that something happening inside the actual request is not letting go. So I returned from Rackspace::Service#request immediately before @connection.request. Logging the @connection variable shows the connection is always the same (as expected). The memory does not grow if I don't call @connection.request.

When @connection.request is reintroduced, the memory grows indefinitely.

I'm assuming at this point I am close to hitting Excon code. But before I start wasting time over there too, I want to see if any of this discussion means anything to anybody here. I'm open to offers!

Original issue opened by @philayres at fog/fog#3442.

Still maintained?

Hi there!

I was wondering if the fog ecosystem is still being maintained. We're using the fog-rackspace gem over at kitchen-rackspace, but we're using Ruby 3.1, as Ruby 3.0 is shortly going to be EOL'd.

If this isn't being maintained, then that's totally understandable!

Thanks,
Dan

Error: wrong constant name CDN v2 (NameError)

In lib/fog/rackspace.rb, the service declarations use invalid constant names:

service(:cdn_v2, 'CDN v2')
service(:compute_v2, 'Compute v2')

These should be replaced with:

service(:cdn_v2, 'CDNV2')
service(:compute_v2, 'ComputeV2')

This throws an error:
/usr/local/bundle/ruby/2.6.0/gems/fog-core-2.3.0/lib/fog/core/provider.rb:49:in `const_defined?': wrong constant name CDN v2 (NameError)

Can't update a file

I'm trying to update a file in the following way:

file = directory.files.create(key: "justtesting", body: "some body")
file.body = ""
file.save

However, this returns me

Fog::Storage::Rackspace::ServiceError: [HTTP 503 | tx2a7abaa345d14c9898cc1-0057754df5lon3]

Everything is fine when I do

puts file.inspect 

(before the file.save though).

Any ideas? The documentation says only some objects can be updated. How do I know which ones they are?

Provisioning fails for Windows images

Hi there,

I encountered this bug while poking at kitchen-rackspace, which uses fog-rackspace (see here).

When trying to provision a Windows VM, I hit a timeout...

$ kitchen converge
-----> Starting Kitchen (v1.10.2)
-----> Creating <default-windows-2012R2>...
[fog][WARNING] Rackspace[:no_passwd_lock] is deprecated since it is now the default behavior, use Rackspace[:passwd_lock] instead
[fog][WARNING] Rackspace[:no_passwd_lock] is deprecated since it is now the default behavior, use Rackspace[:passwd_lock] instead
>>>>>> ------Exception-------
>>>>>> Class: Kitchen::ActionFailed
>>>>>> Message: 1 actions failed.
>>>>>>     Failed to complete #create action: [Net::SSH::ConnectionTimeout] on default-windows-2012R2
>>>>>> ----------------------
>>>>>> Please see .kitchen/logs/kitchen.log for more details
>>>>>> Also try running `kitchen diagnose --all` for configuration

I suspect the problem is with this bit of code. Since Windows machines don't support SSH but the SSH setup step is unskippable, Fog will always fail with a timeout.

Is it possible to add some logic to skip the SSH setup on Windows VMs so provisioning will succeed?
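A sketch of such a guard, purely illustrative: `needs_ssh_setup?` is a name I made up, and a real implementation would need to decide based on the server's image attributes from the API rather than a name string:

```ruby
# Skip the SSH personalization step for Windows images, which cannot
# accept an SSH connection during provisioning.
def needs_ssh_setup?(image_name)
  !image_name.match?(/windows/i)
end
```

The create flow could then bypass the SSH wait entirely when this returns false, letting Windows provisioning complete.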

Fog::Mock.reset raises NameError: uninitialized constant Fog::Rackspace::Storage

This is basically the same problem as fog/fog-openstack#59 (which I think should be closed, per fog/fog-openstack#65): in lib/fog/rackspace.rb you are using/expecting Fog::Rackspace::Storage, but in lib/fog/rackspace/storage.rb it is defined the other way around, as Fog::Storage::Rackspace.

My gut feeling would be to correct the definition to match the filesystem, but that's not what was done in fog/fog-openstack#65, so I suspect some other pieces of the "fog" world expect the Fog::Storage::Rackspace hierarchy.

Happy to basically reproduce what was done in fog-openstack in a PR, just would like someone to say "Yep, do that" before I proceed.
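A minimal illustration of the mismatch and the aliasing approach. The module layout here is a hypothetical stand-in, not the gem's actual files:

```ruby
module Fog
  module Storage
    # What lib/fog/rackspace/storage.rb actually defines.
    class Rackspace; end
  end

  module Rackspace
    # Alias the constant the registry (and Fog::Mock.reset) looks up,
    # so both hierarchies resolve to the same class.
    Storage = Fog::Storage::Rackspace
  end
end
```

Both `Fog::Rackspace::Storage` and `Fog::Storage::Rackspace` then point at the same class, so lookups from either direction succeed.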

:rackspace_temp_url_key

I get ActionView::Template::Error (Storage must be instantiated with the :rackspace_temp_url_key option) when accessing a public Cloud Files container via CarrierWave/Fog.

[fog][DEPRECATION] Unable to load Fog::Rackspace::CDN

I just did a bundle update while updating Rails, and I'm getting this error when running my tests:

[fog][DEPRECATION] Unable to load Fog::Rackspace::CDN
[fog][DEPRECATION] The format Fog::CDN::Rackspace is deprecated

The trouble is twofold:

  1. I don't actually use the code Fog::Rackspace::CDN in my app
  2. it doesn't tell me where it is failing, so I am not even sure where to look among my gems.

Any ideas on how to go about fixing this?

Versions > 0.1.12 throwing error on server create

Attempts to build a server fail with versions of fog-rackspace newer than 0.1.12 (tested with 0.1.11, 0.1.12, 0.1.13, and 0.1.14).

Reverting to <= 0.1.12 works, so the change was definitely introduced between 0.1.12 and 0.1.13; everything else is constant.

The error is:

/opt/rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/excon-0.54.0/lib/excon/middlewares/expects.rb:7:in `response_call': [HTTP 500 | req-7bb21890-2799-4838-9af7-890b009cdc92] The server has either erred or is incapable of performing the requested operation. (Fog::Compute::RackspaceV2::InternalServerError)
  from /opt/rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/excon-0.54.0/lib/excon/middlewares/response_parser.rb:9:in `response_call'
  from /opt/rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/excon-0.54.0/lib/excon/connection.rb:388:in `response'
  from /opt/rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/excon-0.54.0/lib/excon/connection.rb:252:in `request'
  from /opt/rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/fog-core-1.43.0/lib/fog/core/connection.rb:81:in `request'
  from /opt/rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/fog-rackspace-0.1.3/lib/fog/rackspace/service.rb:42:in `request'
  from /opt/rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/fog-rackspace-0.1.3/lib/fog/rackspace/compute_v2.rb:164:in `request'
  from /opt/rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/fog-rackspace-0.1.3/lib/fog/rackspace/requests/compute_v2/create_server.rb:124:in `create_server'
  from /opt/rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/fog-rackspace-0.1.3/lib/fog/rackspace/models/compute_v2/server.rb:274:in `create'
  from /opt/rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/fog-rackspace-0.1.3/lib/fog/rackspace/models/compute_v2/server.rb:232:in `save'
  from /opt/rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/fog-core-1.43.0/lib/fog/core/collection.rb:51:in `create'
  from /usr/local/bin/mkserver.rb:457:in `'

This is on:
ruby 2.3.3p222 (2016-11-21 revision 56859) [x86_64-linux]

Here is a list of the other gems:
aws-sdk (2.7.0)
aws-sdk-core (2.7.0)
aws-sdk-resources (2.7.0)
aws-sigv4 (1.0.0)
bigdecimal (1.2.8)
builder (3.2.3)
CFPropertyList (2.3.4)
coderay (1.1.1)
did_you_mean (1.0.0)
excon (0.54.0)
fission (0.5.0)
fog (1.38.0)
fog-aliyun (0.1.0)
fog-atmos (0.1.0)
fog-aws (1.2.0)
fog-brightbox (0.11.0)
fog-cloudatcost (0.1.2)
fog-core (1.43.0)
fog-dynect (0.0.3)
fog-ecloud (0.3.0)
fog-google (0.1.0)
fog-json (1.0.2)
fog-local (0.3.1)
fog-openstack (0.1.19)
fog-powerdns (0.1.1)
fog-profitbricks (3.0.0)
fog-rackspace (0.1.2)
fog-radosgw (0.0.5)
fog-riakcs (0.1.0)
fog-sakuracloud (1.7.5)
fog-serverlove (0.1.2)
fog-softlayer (1.1.4)
fog-storm_on_demand (0.1.1)
fog-terremark (0.1.0)
fog-vmfusion (0.1.0)
fog-voxel (0.1.0)
fog-vsphere (1.7.0)
fog-xenserver (0.2.3)
fog-xml (0.1.2)
formatador (0.2.5)
inflecto (0.0.2)
io-console (0.4.5)
ipaddress (0.8.3)
jmespath (1.3.1)
json (1.8.3)
method_source (0.8.2)
mini_portile2 (2.1.0)
minitest (5.8.5)
multi_json (1.12.1)
net-telnet (0.1.1)
nokogiri (1.7.0.1)
power_assert (0.2.6)
pry (0.10.4)
psych (2.1.0)
rake (10.4.2)
rbvmomi (1.9.4)
rdoc (4.2.1)
slop (3.6.0)
test-unit (3.1.5)
trollop (2.1.2)
xml-simple (1.1.5)

Download from Rackspace Doesn't always get all Segments

We are using fog (fog-rackspace 0.1.6) to do offsite backups from Heroku to Rackspace. Our files are 6 GB+, so we need to do segmented uploads as described here: upload-large-files.

The upload works perfectly every time. When we manually download the file from the Rackspace UI it is fully intact.

The problem we are having is that when we download the file using fog, we sometimes only get the first segment. We can tell because the size of the downloaded file is exactly 4 GB (our UPLOAD_SEGMENT_LIMIT is set to 4 GB). We are downloading the file with the code suggested in the above-mentioned blog:

File.open('downloaded-file.jpg', 'w') do | f |
  directory.files.get("my_big_file.jpg") do | data, remaining, content_length |
    f.syswrite data
  end
end

The problem with the download is intermittent. Has anyone else experienced this issue?
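Until the root cause is found, one workaround is to verify the byte count against the content_length the API reports and retry on a mismatch. A self-contained sketch; the `download` lambda stands in for the `directory.files.get("...") { |data, remaining, content_length| ... }` block form, and the names are illustrative:

```ruby
require 'stringio'

# Detect a truncated segmented download by comparing bytes written against
# the reported content_length, redoing the whole download on a mismatch.
def download_with_check(download, io, attempts: 3)
  attempts.times do
    io.rewind
    io.truncate(0)                      # discard any partial data
    written  = 0
    expected = nil
    download.call do |data, _remaining, content_length|
      expected = content_length
      written += data.bytesize
      io.write data
    end
    return written if expected && written == expected
  end
  raise 'download still incomplete after retries'
end
```

This doesn't fix the intermittent truncation, but it prevents a silently short backup from being treated as a success.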
