azure-blobstore-resource's Introduction

azure-blobstore-resource

A concourse resource to interact with the azure blob service.

NOTE: The resource has been moved from the czero Docker Hub account to the pcfabr Docker Hub account. If your pipeline currently uses the resource from czero, it should be switched to pcfabr.

Source Configuration

  • storage_account_name: Required. The storage account name on Azure.

  • storage_account_key: Required. The storage account access key for the storage account on Azure.

  • container: Required. The name of the container in the storage account.

  • base_url: Optional. The storage endpoint to use for the resource. Defaults to the Azure Public Cloud (core.windows.net).
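
For clouds other than the Azure Public Cloud, base_url can be pointed at that cloud's storage endpoint. A minimal sketch, using a placeholder variable for the endpoint (the actual value depends on your Azure environment):

source:
  storage_account_name: {{storage_account_name}}
  storage_account_key: {{storage_account_key}}
  container: {{container}}
  base_url: {{storage_endpoint}}   # the endpoint suffix for your cloud; core.windows.net is the default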

Filenames

Exactly one of the following two options must be specified:

  • regexp: Optional. The pattern to match filenames against. The pattern must contain at least one capture group (parentheses), which is used to extract the version. If multiple capture groups are provided, the first group is used by default; if a group is named version, that group is extracted as the version. Semantic versions and plain numbers are supported for versioning (see the sketch after this list).

  • versioned_file: Optional. The file name of the blob to be managed by the resource. The resource only pulls the latest snapshot; if the blob doesn't have a snapshot, the resource will not find it. A new snapshot must also be created whenever the blob is updated for the resource to detect new versions.
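
For example, a minimal sketch of the two styles (the named-group form assumes Go regexp syntax, since the resource is written in Go):

regexp: release-(.*).tgz
regexp: release-(?P<version>.*).tgz   # the group named "version" is used as the version
versioned_file: terraform.tfstate     # exact blob name; the latest snapshot is fetched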

Behavior

check: Extract snapshot versions from the container.

Checks for new versions of a file. The resource checks either snapshots when using versioned_file, or versions embedded in the path name when using regexp. When using snapshots, if a blob exists without a snapshot, the resource will create a 0001-01-01T00:00:00Z timestamp for it.

in: Fetch a blob from the container.

Places the following files in the destination:

  • (filename): The file fetched from the container.

  • url: A file containing the URL of the object. If private is true, this URL will be signed.

  • version: The version identified in the file name.
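
As an illustration of how a downstream task might consume these files, a minimal sketch (the task image and resource name are assumptions, not part of this resource):

- get: my-release
- task: show-release-info
  config:
    platform: linux
    image_resource:
      type: docker-image
      source: {repository: alpine}   # hypothetical task image
    inputs:
    - name: my-release
    run:
      path: sh
      args:
      - -ec
      - |
        cat my-release/version   # version identified in the file name
        cat my-release/url       # URL of the fetched blob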

Parameters

  • skip_download: Optional. Skip downloading the object.

  • unpack: Optional. If true, the blob will be unpacked before running the task. Supports tar, zip, and gzip files.

  • block_size: Optional. Changes the block size used when downloading from Azure. Defaults to 4 MB. The maximum block size is 100 MB. A blob can include up to 50,000 blocks, so with the default of 4 MB, blobs are limited to a little more than 195 GB (4 MB x 50,000 blocks). The maximum size of a blob with a block size of 100 MB is 4.75 TB (100 MB x 50,000 blocks).

  • retry:

    • try_timeout: Optional. Changes the try timeout in the retry options when uploading or downloading to Azure. This is the maximum allowed time for a single try of an HTTP request. A value of zero uses the default timeout. NOTE: When transferring large amounts of data, the default TryTimeout will probably not be sufficient. You should override this value based on the bandwidth available to the host machine and its proximity to the Storage service. A good starting point may be something like 60 seconds per MB of anticipated payload size; the payload size here is the block size, not the overall size of the blob. This field accepts either an integer, interpreted as nanoseconds, or a string containing a decimal number with a unit suffix. Valid suffixes are ns, us, ms, s, m, and h.
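
A hedged example of a get step exercising these parameters (the resource name and values are illustrative only):

- get: my-release
  params:
    unpack: true
    skip_download: false
    block_size: 4194304    # assumed to be given in bytes (4 MB); see the block-size issue below
    retry:
      try_timeout: "15m"   # string form with a unit suffix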

out: Upload a blob to the container.

Uploads a file to the container. If regexp is specified, the new file will be uploaded to the directory that the regex searches in. If versioned_file is specified, the new file will be uploaded as a new snapshot of that file.

Parameters

  • file: Required. Path to the file to upload, provided by an output of a task. If multiple files are matched by the glob, an error is raised. The file that matches the glob will be uploaded into the path specified by the regexp. Only bash glob expansion is supported, not regular expressions.

  • block_size: Optional. Changes the block size used when uploading to Azure. Defaults to 4 MB. The maximum block size is 100 MB. A blob can include up to 50,000 blocks, so with the default of 4 MB, blobs are limited to a little more than 195 GB (4 MB x 50,000 blocks). The maximum size of a blob with a block size of 100 MB is 4.75 TB (100 MB x 50,000 blocks).

  • retry:

    • try_timeout: Optional. Changes the try timeout in the retry options when uploading or downloading to Azure. This is the maximum allowed time for a single try of an HTTP request. A value of zero uses the default timeout. NOTE: When transferring large amounts of data, the default TryTimeout will probably not be sufficient. You should override this value based on the bandwidth available to the host machine and its proximity to the Storage service. A good starting point may be something like 60 seconds per MB of anticipated payload size; the payload size here is the block size, not the overall size of the blob. This field accepts either an integer, interpreted as nanoseconds, or a string containing a decimal number with a unit suffix. Valid suffixes are ns, us, ms, s, m, and h.
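
And a hedged sketch of a put step using the retry option (the file path and timeout are illustrative):

- put: my-release
  params:
    file: my-release-output/release-*.tgz
    retry:
      try_timeout: "30m"   # allow longer single tries when uploading large blocks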

Example Configuration

An example pipeline exists in the example directory.

Resource

When using Azure snapshots:

resource_types:
- name: azure-blobstore
  type: docker-image
  source:
    repository: pcfabr/azure-blobstore-resource

resources:
  - name: terraform-state
    type: azure-blobstore
    source:
      storage_account_name: {{storage_account_name}}
      storage_account_key: {{storage_account_key}}
      container: {{container}}
      versioned_file: terraform.tfstate

or with regexp:

resource_types:
- name: azure-blobstore
  type: docker-image
  source:
    repository: pcfabr/azure-blobstore-resource

resources:
  - name: my-release
    type: azure-blobstore
    source:
      storage_account_name: {{storage_account_name}}
      storage_account_key: {{storage_account_key}}
      container: {{container}}
      regexp: release-(.*).tgz

Plan

- get: terraform-state
- put: terraform-state
  params:
    file: terraform-state/terraform.tfstate

and with regexp:

- put: my-release
  params:
    file: my-release/release-*.tgz

azure-blobstore-resource's People

Contributors

christianang, crsimmons, crusstu, eitansuez, lancefrench, mastercactapus, rkoster

azure-blobstore-resource's Issues

consistency (with s3 blobstore resource) with respect to file placement

I have gone through the exercise of retrofitting the pipelines from Platform Automation (see http://docs.pivotal.io/platform-automation/v2.1/reference/pipeline.html) to use the azure blobstore resource instead of the s3 blobstore resource.

In the process, I discovered a difference in behavior between the two resource implementations, which I'd like to describe next.

In the first pipeline (retrieving external dependencies), take for example the resource pas-stemcell. Its regexp specifies a subdirectory pas-stemcell/.

In the subsequent pipeline ("installing ops mgr and tiles"), we see pas-stemcell again, with the same regexp.

When the job named 'upload-stemcells' runs, the task named 'upload-pas-stemcell' fails with a "file not found". When I concourse hijacked into the container, I discovered that the path to the stemcell inside the container was pas-stemcell/pas-stemcell/{stemcellfilename}, i.e. it had a nested subdirectory. This is not the case when I use the s3 blobstore resource.

I worked around the issue by adding a step to move the file, like so:

      - task: move-file-shim
        config:
          platform: linux
          inputs:
          - name: pas-stemcell
          run:
            path: /bin/sh
            args:
            - -c
            - mv pas-stemcell/pas-stemcell/* wellplaced-stemcell/
          outputs:
          - name: wellplaced-stemcell

And then in the subsequent task, I changed the input_mapping for stemcell to point at the output wellplaced-stemcell.

I realize a more elegant solution would be to revise the blobstore resource's implementation to match whatever the s3 resource currently does.

I'm not yet familiar enough with Go and with this project to contribute a PR just yet, but in case this is easily captured via a couple of unit tests and easily fixed, at least I could put this on your radar.

Thanks in advance for your consideration.

failed to put blob since update

Since the changes from February, uploads to the blobstore fail.

We have:

- name: test-image-path
  type: azure-blob
  source:
    container: pipeline
    storage_account_name: ((azure-storage-account-name))
    storage_account_key:   ((azure-storage-access-key))
    versioned_file: test-image-path

- task foobar
  - put: test-image-path
    params:
      file: packer_artifacts/vhd_uri

As I can see in the Azure blobstore, it created a blob with the name vhd_uri, but it should update the file test-image-path.

Concourse replies with:

2019/02/08 10:34:28 failed to copy blob: storage: service returned error: StatusCode=404, ErrorCode=404 The specified blob does not exist., ErrorMessage=no response body was available for error status code, RequestInitiated=Fri, 08 Feb 2019 10:34:28 GMT, RequestId=32ea0716-801e-007d-2799-bfb435000000, API Version=2016-05-31, QueryParameterName=, QueryParameterValue=

Directories not supported

Specifying a versioned_file blob inside a logical directory currently breaks the get (in) operation.

YAML Specified:
versioned_file: platform-automation/0.0.1-rc.248/platform-automation-0.0.1-rc.248-tasks.zip

Error Output:
2018/11/09 19:50:42 failed to copy blob: open /tmp/build/get/platform-automation/0.0.1-rc.248/platform-automation-0.0.1-rc.248-tasks.zip: no such file or directory

Putting the blob at the container root fixes the error.

Regexp broken if Azure container contains a folder with more than one file in it

The way to reproduce:
Create an Azure container containing a folder with more than one file inside it, in addition to files located in the root of the container.
For example:

<container_name>

  • <folder_name>
    • file_in_folder_a-1.2.3.tgz
    • file_in_folder_b-1.3.4.tgz
  • file_in_root_a-1.2.3.tgz
  • file_in_root_b-1.2.4.tgz

With a container file/folder structure as depicted above, the capturing-group regexp no longer works.
Example: file_in_root_a-(.*).tgz
I had only a limited amount of time to troubleshoot, but it looks like the API call / Go library returns an empty array in this case, so the root cause might reside there.

0.6.x fails any resource that requires unpack: true

We were using this with the "latest" tag at a customer, and all pipelines started failing because they use this resource with unpack: true for the Platform Automation image and tasks.

Reverted to 0.5.0 and it works again.

tar.gz is not supported

Issue:
When a file is named with a .tar.gz extension instead of .tgz, the MIME type comes across as x-gzip instead of gzip. Therefore, the resource fails to extract the file since there is no case to support this.

What should happen:
If I have a file named artifact.tar.gz, the resource should be able to extract it as if it were a gzipped tarball.

Solution:
Add a switch case for x-gzip to do the same thing as gzip.

switch fileType {
case "application/gzip":
    cmd = exec.Command("gzip", "-d", filename)
}

Parallel Uploads - The specified block list is invalid

Running multiple uploads from multiple jobs in parallel sometimes gives the following error.
We did set a block_size of 100 MB. Not sure if it is related to this Stack Overflow question: "The specified block list is invalid".

2020/11/05 14:58:06 failed to upload blob: -> github.com/Azure/azure-storage-blob-go/azblob.newStorageError, /go/pkg/mod/github.com/!azure/[email protected]/azblob/zc_storage_error.go:42
===== RESPONSE ERROR (ServiceCode=InvalidBlockList) =====
Description=The specified block list is invalid.
RequestId:8fdbe99d-d01e-0004-2484-b3c323000000
Time:2020-11-05T14:58:06.4775739Z, Details: 
   Code: InvalidBlockList
   PUT https://xxxxx.blob.core.windows.net/product-tiles/stemcells/[stemcells-ubuntu-xenial,621.90]bosh-stemcell-621.90-azure-hyperv-ubuntu-xenial-go_agent.tgz?comp=blocklist&timeout=61
   Authorization: REDACTED
   Content-Length: [653]
   Content-Type: [application/xml]
   User-Agent: [Azure-Storage/0.7 (go1.14.2; linux)]
   X-Ms-Blob-Cache-Control: []
   X-Ms-Blob-Content-Disposition: []
   X-Ms-Blob-Content-Encoding: []
   X-Ms-Blob-Content-Language: []
   X-Ms-Blob-Content-Type: []
   X-Ms-Client-Request-Id: [599d2426-0cbd-4acb-60e5-2d8db6fecfb2]
   X-Ms-Date: [Thu, 05 Nov 2020 14:58:06 GMT]
   X-Ms-Version: [2018-11-09]
   --------------------------------------------------------------------------------
   RESPONSE Status: 400 The specified block list is invalid.
   Content-Length: [221]
   Content-Type: [application/xml]
   Date: [Thu, 05 Nov 2020 14:58:05 GMT]
   Server: [Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0]
   X-Ms-Error-Code: [InvalidBlockList]
   X-Ms-Request-Id: [8fdbe99d-d01e-0004-2484-b3c323000000]
   X-Ms-Version: [2018-11-09]

Regexp issue with platform-automation artifacts

We are experiencing some inconsistencies when using regexp in the source configuration to retrieve artifacts from our blobstore in Azure.

Below is a snippet of the files in the blobstore:

(screenshot of the blobstore file listing omitted)

Below is an example of our configuration:

resources:
- name: platform-automation-image
  type: azure-blobstore
  source:
    storage_account_name: ((storage_account_name))
    storage_account_key: ((storage_account_key))
    container: ((container))
    regexp: platform-automation-image-(.*).tgz

- name: platform-automation-tasks
  type: azure-blobstore
  source:
    storage_account_name: ((storage_account_name))
    storage_account_key: ((storage_account_key))
    container: ((container))
    regexp: platform-automation-tasks-(.*).zip

Below is the error we are seeing in Concourse:

(screenshot of the Concourse error omitted)

When we initially flew the pipeline up, this was working just fine. We ran a couple of jobs multiple times with no issues. Now, all of a sudden, the resource isn't finding the blob anymore for some odd reason. We've deleted the pipeline and re-flown it, but still no luck.

As a workaround, we switched to using the versioned_file source parameter and gave the explicit file name. This works; however, we don't want to do this long term because newer versions of these artifacts keep being released.
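
For reference, a minimal sketch of that workaround (the explicit file name below is hypothetical; note that versioned_file only sees blobs that have at least one snapshot, per the source configuration notes above):

- name: platform-automation-tasks
  type: azure-blobstore
  source:
    storage_account_name: ((storage_account_name))
    storage_account_key: ((storage_account_key))
    container: ((container))
    versioned_file: platform-automation-tasks-4.3.2.zip   # hypothetical explicit file name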

Would it be possible to support GLOBs/wildcard for PUTs?

Would it be possible to support GLOBs/wildcards for resource PUTs?

Currently, in order to PUT a file to this Azure blobstore resource, we have to call out the entire filename explicitly. Even the simplest globs/regexes fail with an "unable to find file" error.

File uploads fail over 195G

We are trying to upload files to the blob store that are over 195 GB, and they are failing with a message of 'block list is over 50,000'. It seems the 4 MB limit on the chunk size only applied to REST versions < 2016-05-31; REST versions after that support chunk sizes up to 100 MB. The current service version used by 'Azure/azure-storage-blob-go' is 2018-11-09. Can we make the chunk size in 'azure-blobstore-resource/azure/client.go' configurable, or just set it to 100 MB?
We are using this through Concourse's azure-blobstore resource type, which uses the 'pcfabr/azure-blobstore-resource' docker image.

Resource produces duplicate Concourse versions for the same blob

Hi there 👋 I'm not entirely sure if this is known/expected behaviour and someone can correct my usage of the resource, or if this is actually a bug, but I've noticed that the azure-blobstore-resource seems to be producing duplicate Concourse resource versions for the same blob. This is an issue for me because some of my jobs are set up to trigger: true on new blob versions, but sometimes they get stuck looking for the newest versions that satisfy my passed criteria and don't trigger as expected.

In my pipeline I take in a blob from another source and put it to an Azure blobstore container via the resource, to keep a copy of the blob I can manage myself instead of relying on the original source to be available. Later on in the same pipeline I get the blob using the same azure-blobstore-resource instance I put to earlier and do some work with it. When I look at the versions produced by the azure-blobstore-resource I can see 2 resource versions for the same blob:

(screenshot of the duplicate resource versions omitted)

One has just the path field and the other has path and version. Both of the paths are the same and refer to the same blob, but it looks like because the second resource version has the additional version field Concourse treats them as 2 separate resource versions.

I had a quick look over the implementation and it looks like in the out script it produces new Concourse resource versions with just the path field (I'm using regexes, not blobstore snapshots):

versionsJSON, err := json.Marshal(api.Response{
    Version: api.ResponseVersion{
        Snapshot: snapshot,
        Path:     path,
    },
})

but the check script produces new Concourse resource versions with a path and a version:

newerVersions = append(newerVersions, Version{
    Path:              stringPtr(blob.Name),
    Version:           stringPtr(ver.AsString()),
    comparableVersion: ver,
})

Unless I'm not using the resource correctly, I would expect to see only 1 resource version for a given blob.

Specify block size in terms of units

The block_size param is currently specified in bytes, which isn't the most intuitive way to specify the block size, considering that most of the time it's going to be somewhere between 4 MB and 100 MB. The block_size param should allow the user to specify the block size in terms of MB. However, to avoid breaking existing users who specify the block size in bytes, the resource shouldn't just switch to interpreting the value as MB.

Allow the user to add a unit to the block_size, e.g. 10M or 10MB would set the block size to 10 megabytes.
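
Under that proposal, the two forms might look like this (the unit-suffix syntax is hypothetical and not yet implemented):

block_size: 4194304   # current behaviour: value in bytes (4 MB)
block_size: 10MB      # proposed: value with a unit suffix (10 megabytes)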

0.8.0: x509: certificate signed by unknown authority

With the latest version, we are seeing failures uploading due to cert issues.

Reverting to 0.7.0 resolved the issue.

Output:

2019/12/04 23:28:46 failed to upload blob: -> github.com/Azure/azure-pipeline-go/pipeline.NewError, /go/pkg/mod/github.com/!azure/[email protected]/pipeline/error.go:154
HTTP request failed

Put https://<REDACTED>.blob.core.windows.net/backup/export/export-2019-12-04T23-26-06+0000.sql.gz?blockid=<REDACTED>&comp=block&timeout=61: x509: certificate signed by unknown authority

Azure China Cloud is not supported

Hi there,

We're trying to use this resource with Azure China and it fails to check the resource because the base URL of the blobstore is hardcoded and by default is blob.core.windows.net.
In Azure China, this URL should be blob.core.windows.cn.

There should be a way to provide an input to define which Azure you want to use (defaulting to AzureCloud) like in this resource: https://github.com/pivotal-cloudops/azure-blobstore-concourse-resource

The error you'd get by using this resource against Azure China Cloud is:

resource script '/opt/resource/check []' failed: exit status 1

stderr:
2018/11/05 08:21:26 failed to get latest version: Get https://BLOBSTORENAME.blob.core.windows.net/XXXXXXXX?comp=list&include=snapshots&prefix=FILE.NAME&restype=container: dial tcp: lookup BLOBSTORENAME.blob.core.windows.net on 168.63.129.16:53: no such host

Thanks for your help.
CC: @lakshmantgld @keliangneu

When unarchiving, the CLIs handle edge cases with symlinks

When we had an issue with the GCS resource (the user could not unarchive our docker image properly), we investigated.
It turned out that the resource was using the golang libraries for tar and zip.
The libraries are helpful, but it turned out they did not handle edge cases with symlinks.

We made a PR to the GCS resource to have it use the CLIs.
We did try to use the archiver library in Go, but it could not handle the symlinks.
The PR follows the same patterns as the native s3-resource.

Can't extract object when object key has a "folder"

Issue:
When you have a blob name with a folder path in it and you go to extract it, the resource can't find the file. The file gets downloaded to the tmp folder.

Example:
object = somefolder/apps/artifact.tar.gz

The file gets downloaded to /tmp/build/23rfwef/artifact.tar.gz, but when it goes to extract the file, it looks for it at /tmp/build/23rfwef/somefolder/apps/artifact.tar.gz, which doesn't exist.

What should happen:
The file gets downloaded to /tmp/build/23rfwef/artifact.tar.gz and is extracted from the same place it was downloaded to.

Solution:

err = in.UnpackBlob(filepath.Join(destinationDirectory, blobName))

blobName should be path.Base(blobName).

err = in.UnpackBlob(filepath.Join(destinationDirectory, path.Base(blobName)))

Please configure GITBOT

Pivotal uses GITBOT to synchronize Github issues and pull requests with Pivotal Tracker.
Please add your new repo to the GITBOT config-production.yml in the Gitbot configuration repo.
If you don't have access you can send an ask ticket to the CF admins. We prefer teams to submit their changes via a pull request.

Steps:

  • Fork this repo: cfgitbot-config
  • Add your project to config-production.yml file
  • Submit a PR

If there are any questions, please reach out to [email protected].

Issue with sub-microseconds digit in snapshot timestamps

Hi,

Our pipelines are blocked because when the seventh (last) fractional-second digit is 0, the resource omits it, whereas Azure expects it.

Example:

  • The Azure blobstore resource creates a terraform.tfstate?snapshot=2019-10-23T14:40:22.186881Z snapshot.
  • Azure stores 2019-10-23T14:40:22.1868810Z as the snapshot timestamp; note the last 0, right before the Z, which seems to be added by Azure.
  • When the resource tries to access this snapshot, it uses 2019-10-23T14:40:22.186881Z as the timestamp, but Azure expects the last 0 digit to be specified, so a 400 error is returned because the timestamp is deemed invalid.

The resulting error is:

2019/10/23 15:43:07 failed to copy blob: -> github.com/Azure/azure-storage-blob-go/azblob.newStorageError, /go/pkg/mod/github.com/!azure/[email protected]/azblob/zc_storage_error.go:42
===== RESPONSE ERROR (ServiceCode=) =====
Description=400 Value for one of the query parameters specified in the request URI is invalid., Details: (none)
   HEAD https://<redacted>.blob.core.windows.net/terraform/terraform.tfstate?snapshot=2019-10-23T14%3A40%3A22.186881Z&timeout=61
   Authorization: REDACTED
   User-Agent: [Azure-Storage/0.7 (go1.13.3; linux)]
   X-Ms-Client-Request-Id: [2f79966f-02b2-4382-47ba-9f0083c91dca]
   X-Ms-Date: [Wed, 23 Oct 2019 15:43:07 GMT]
   X-Ms-Version: [2018-11-09]
   --------------------------------------------------------------------------------
   RESPONSE Status: 400 Value for one of the query parameters specified in the request URI is invalid.
   Date: [Wed, 23 Oct 2019 15:43:07 GMT]
   Server: [Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0]
   X-Ms-Request-Id: [afa3f0b9-601e-0000-30b8-891dd2000000]

When the last digit of the timestamp is not 0, then the resource succeeds at downloading the snapshot.

The issue is the same when storing a new version. Whenever the last digit is 0, it fails, and whenever it is not 0 it succeeds.

Could you please push a fix as soon as possible? Our pipelines are experiencing flaky failures because of this issue.

Best,
Benjamin

Check/Get shouldn't require a snapshot to exist first

Not having a snapshot throws the following error:

2018/11/15 02:06:42 failed to copy blob: storage: service returned error: StatusCode=400, ErrorCode=OutOfRangeInput, ErrorMessage=One of the request inputs is out of range.
RequestId:93d35261-801e-010b-6f87-7c0af5000000
Time:2018-11-15T02:06:42.0180762Z, RequestInitiated=Thu, 15 Nov 2018 02:06:41 GMT, RequestId=93d35261-801e-010b-6f87-7c0af5000000, API Version=2016-05-31, QueryParameterName=, QueryParameterValue=

A snapshot shouldn't really be required if I just uploaded a file to be retrieved by Concourse.

Expose TryTimeout on In and Out

It seems to be possible to get a "context deadline exceeded" error on very large blobs; being able to increase the timeout seems to be a possible mitigation for this issue.

How do I avoid the "failed to copy blob: context deadline exceeded" error?

I have a 15 GB blob that I am trying to download, but I keep getting the "failed to copy blob: context deadline exceeded" error. I have successfully downloaded smaller blobs, so I know I am doing it the right way. The failures always seem to happen right around 10 minutes (give or take 30 seconds).

I have tried bumping the retry_timeout to 60m and setting the block_size to 50 and 100.

resource_types:
- name: azure-blobstore
  type: docker-image
  source:
    repository: pcfabr/azure-blobstore-resource
resources:
- name: pas-product
  type: azure-blobstore
  check_every: 4h
  source:
    storage_account_name: ((storage_account_name))
    storage_account_key: ((storage_account_key))
    container: tile-downloads
    regexp: srt-(((pas_major_minor_version)))-(.*).pivotal
    block_size: 50
    retry:
      try_timeout: "60m"
jobs:
- name: upload-and-stage-pas
  serial: true
  plan:
  - aggregate:
    - get: pas-product
      params:
        globs:
        - "srt-*"
  - task: test-config-files
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: sandyg1/om-cred-auto
      run:
        path: sh
        args:
        - -ec
        - |
           ls -lah

Release ARM64 images

Hi Team,

I am trying to use the azure-blobstore-resource image on the arm64 platform, but it seems an arm64 tag is not available for this image.

I have built the image successfully on a local arm64 machine.

Do you have any plans to release an arm64 image?

It would be very helpful if an arm64-supported tag were available. If interested, I can raise a PR.

consider upgrading dependencies to latest stable

See my Gopkg.toml for the recipe.

This brings ginkgo, gomega, and azure-sdk-for-go up to the latest stable versions.

The override of fsnotify is a known issue with dep.

The override of opencensus-proto has to do with an incorrect dependency version between two transitive dependencies.

Upload failing with http/413 Request Entity Too Large files over 4MB

Hey there; I'm trying to use the azure-blobstore-resource within a Concourse pipeline, and when the pipeline uploads files over 4 MB I get this error:

2018/08/13 21:28:07 failed to upload blob: storage: service returned error: StatusCode=413, ErrorCode=RequestBodyTooLarge, ErrorMessage=The request body is too large and exceeds the maximum permissible limit.
RequestId:50459769-801e-0017-184c-33f80a000000
Time:2018-08-13T21:28:06.9913482Z, RequestInitiated=Mon, 13 Aug 2018 21:28:06 GMT, RequestId=50459769-801e-0017-184c-33f80a000000, API Version=2016-05-31, QueryParameterName=, QueryParameterValue=

Confirmed that I'm able to upload files smaller than 4 MB. It looks like some chunking needs to happen to allow for files larger than 4 MB.
