
azure-storage-node's Introduction

Legacy Azure Storage SDK for JavaScript


This project provides the legacy Node.js package azure-storage, which is also browser compatible, for consuming and managing Microsoft Azure Storage services such as Azure Blob Storage, Azure Queue Storage, Azure Files, and Azure Table Storage.

Please note, newer packages @azure/storage-blob, @azure/storage-queue and @azure/storage-file-share have been available since November 2019, and @azure/data-tables since June 2021, for the individual services. While the legacy azure-storage package will continue to receive critical bug fixes, we strongly encourage you to upgrade.

Below is a set of links with information on both the latest and legacy packages for the different Storage services from Azure. For more, please read State of the Azure SDK 2021.

| Package | Version | Description | API Reference Links | Migration Guide Links |
| --- | --- | --- | --- | --- |
| @azure/storage-blob | v12 | The next generation SDK for Azure Blob Storage | API Reference for Blob SDK | Migration Guide from azure-storage to @azure/storage-blob |
| @azure/storage-queue | v12 | The next generation SDK for Azure Queue Storage | API Reference for Queues SDK | Migration Guide from azure-storage to @azure/storage-queue |
| @azure/storage-file-share | v12 | The next generation SDK for Azure Files | API Reference for Files SDK | Migration Guide from azure-storage to @azure/storage-file-share |
| @azure/data-tables | v12 | The next generation SDK for Azure Table Storage | API Reference for Tables SDK | Migration Guide from azure-storage to @azure/data-tables |
| azure-storage | v2 | Legacy Storage SDK in this repository (Blob/Queue/File/Table, callback style) | API Reference for legacy Storage SDK | |
| @azure/arm-storage | v7 & above | Management SDKs including Storage Resource Provider APIs | API Reference for Storage Management SDK | |
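For a flavor of the migration, here is a minimal sketch in the v12 promise style (it assumes @azure/storage-blob is installed and AZURE_STORAGE_CONNECTION_STRING is set; the container and blob names are placeholders):

    const { BlobServiceClient } = require("@azure/storage-blob");

    async function main() {
      // Connection string auth; SAS and Azure AD credentials are also supported.
      const serviceClient = BlobServiceClient.fromConnectionString(
        process.env.AZURE_STORAGE_CONNECTION_STRING
      );
      const containerClient = serviceClient.getContainerClient("mycontainer");
      await containerClient.createIfNotExists();

      // Promise-based calls replace the legacy callback style.
      const blockBlobClient = containerClient.getBlockBlobClient("hello.txt");
      const content = "Hello, world!";
      await blockBlobClient.upload(content, Buffer.byteLength(content));
    }

    main().catch(console.error);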

azure-storage-node's People

Contributors

abramz, aichi, anchepiece, andrewconnell, cpwinn, dabutvin, dependabot[bot], emmazhu, faust64, hason-msft, jasonklotzer, jiacfan, laura-barluzzi, marcbachmann, microsoft-github-policy-service[bot], mrayermannmsft, noopkat, patrick-seafield, peterblazejewicz, pyo25, ramya-rao-a, sinedied, slepox, tmacro, ulrikstrid, veena-udayabhanu, vinaysh-msft, vinjiang, xiaoningliu, yaxia


azure-storage-node's Issues

What is the recommended version of Node for the SDK?

I could not find compatibility details about Node versions and the SDK, so I am opening this issue to get them.

I have been using node 0.10.33 with azure-storage 0.3.3 for the last couple of months and found it very stable, with no performance issues. Recently I tried upgrading to node 0.12 and azure-storage 0.4.3, and I am seeing high CPU usage and socket hang-up errors.

While I look into those issues, I wanted to check: is the SDK compatible with the current node version, 0.12?

BlobService.getServiceProperties

Hi,
I ran some tests with Azure Mobile Services and Blob storage.
In a custom API I am trying to get (and maybe set, later) the blob storage service properties, especially the CORS metadata.
My code to get those properties is straightforward:

        var blobService = azure.createBlobService(accountName, accountKey, host);

        blobService.getServiceProperties(null, function (error, sp, r) {
            console.log(sp);
        });

The result:

        [{
            "Logging": {
                "Version": "1.0",
                "Delete": false,
                "Read": false,
                "Write": false,
                "RetentionPolicy": {
                    "Enabled": false
                }
            },
            "Metrics": {
                "Version": "1.0",
                "Enabled": false,
                "RetentionPolicy": {
                    "Enabled": false
                }
            }
        }]

Even if I set the CORS properties with the Cynapta Azure CORS Helper, the results are still the same.

By the way, I have a similar problem with the setServiceProperties method: I can't set the CORS metadata through it.
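For reference, a minimal sketch of setting CORS through setServiceProperties, assuming a version of azure-storage whose service-properties model round-trips Cors (the rule values below are placeholders):

    blobService.getServiceProperties(function (error, serviceProperties, response) {
        if (!error) {
            // Cors may be absent on older library versions; newer ones parse it.
            serviceProperties.Cors = {
                CorsRule: [{
                    AllowedOrigins: ['*'],
                    AllowedMethods: ['GET'],
                    AllowedHeaders: [],
                    ExposedHeaders: [],
                    MaxAgeInSeconds: 3600
                }]
            };
            blobService.setServiceProperties(serviceProperties, function (error, result, response) {
                if (!error) {
                    console.log('CORS rules saved');
                }
            });
        }
    });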

Sebastien Pertus

Improve specs, and documentation

The unit tests in this project need major work. Unit tests written in spec syntax should function as documentation when run, something which is sorely lacking here.

When you run the unit tests and get output like the one below, it is just frustrating:

BlobContainer
  setContainerAcl
    It should work

How are we supposed to understand what setContainerAcl does, or how to use it, when the level of specification/documentation provided is IT SHOULD WORK? We assume it should work, but what does it mean for it to work?

http://betterspecs.org/ provides some great guidance on how to write better specs; while it is written for Ruby, I find it has value in any language with an rspec-style testing framework (like the one used in this project).


I love how we are seeing such support for Node.js coming out of the Azure team, but I really do hope we see continued improvement in these libraries.

datatype conversion issue

Currently I am facing a critical issue in my application. When creating an entity with the following code, I expect it to end up as a property of type double:

 doubleValue: entGen.Double(0.0),

but it results in a string property. If it were a single issue with 0.0, it would be manageable in my code, but the same thing happens for any value ending in .0, e.g.

 doubleValue: entGen.Double(85.0),

In my application I calculate the data and then put it in a table record. After reading it back I expect a double but get a string.

I dug into this problem and debugged the code a bit. In the end the system generates two different network messages:

{"doubleValue":"0.0"}

This ends up as a string. The following, generated for the value 5.4, works fine:

{"doubleValue":5.4}

The table store backend also generates a double without type information. I also changed the code so that the system generates the OData type indicator, but that doesn't help either, e.g.:

{"doubleValue":"0.0","doubleValue@odata.type":"Edm.Double"}
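A self-contained repro sketch, assuming azure.TableUtilities.entityGenerator and a table named 'mytable':

    var azure = require('azure-storage');
    var entGen = azure.TableUtilities.entityGenerator;
    var tableService = azure.createTableService();

    var entity = {
        PartitionKey: entGen.String('part'),
        RowKey: entGen.String('1'),
        // On affected versions, 0.0 and 85.0 lose their Edm.Double typing on
        // the wire and round-trip as strings; 5.4 survives as a double.
        doubleValue: entGen.Double(85.0)
    };

    tableService.insertEntity('mytable', entity, function (error) {
        if (!error) {
            tableService.retrieveEntity('mytable', 'part', '1', function (error, result) {
                console.log(typeof result.doubleValue._); // 'string' on affected versions
            });
        }
    });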

Concurrent operations

I noticed that this library sets http.globalAgent.maxSockets to 1. This happens via BatchOperation.setConcurrency(), which is driven by Defaults.DEFAULT_PARALLEL_OPERATION_THREAD_COUNT, which equals 1.

A few comments:

  1. I understand this limits the number of concurrent blob requests to 1. That seems like a very low default - what is the rationale for such a low limit?
  2. If a default limit is required, the library should create its own http.Agent instead of mutating the globalAgent, because that impacts every other HTTP request the application makes! This behavior affected our application and was completely unexpected; I consider it a major bug. (A workaround sketch follows below.)
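A workaround sketch while the default stands: raise the per-call concurrency via the options object (parallelOperationThreadCount is the legacy SDK's option name; verify it against your version), and/or restore the global socket cap the library lowered:

    var options = { parallelOperationThreadCount: 5 };
    blobService.createBlockBlobFromLocalFile('mycontainer', 'myblob', 'file.bin', options,
        function (error, result, response) {
            // Up to 5 chunks of this upload may now be in flight at once.
        });

    // Undo the global side effect for the rest of the application:
    require('http').globalAgent.maxSockets = 10;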

Failure to upload blob when using io.js

Using the blobuploaddownloadsample.js code, I always get the error "The MD5 value specified in the request did not match with the MD5 value calculated by the server." when uploading a blob.

Starting blobuploaddownloadsample.
Created the container media001
Entering uploadBlobs.
{ [Error: The MD5 value specified in the request did not match with the MD5 value calculated by the server.
RequestId:83f0668c-0001-0043-4e2e-db9a27000000
Time:2015-08-20T10:00:05.4120374Z]
  code: 'Md5Mismatch',
  userspecifiedmd5: '2MLq/ZDCZuGaudysxHn4rw==',
  servercalculatedmd5: 'toArex8eBSX6wlXEoWViTw==',
  statusCode: 400,
  requestId: '83f0668c-0001-0043-4e2e-db9a27000000' }
{ [Error: The MD5 value specified in the request did not match with the MD5 value calculated by the server.
RequestId:260faa5e-0001-0026-172e-db2b7a000000
Time:2015-08-20T10:00:10.1948410Z]
  code: 'Md5Mismatch',
  userspecifiedmd5: 'ovSNfpyWBEVGyKcKMu5Frw==',
  servercalculatedmd5: 'dXyRu2Ikaa5iSxBTyzL3Sg==',
  statusCode: 400,
  requestId: '260faa5e-0001-0026-172e-db2b7a000000' }

Changing to node v0.12.5, the file uploads successfully.

My environment

  • Windows 7 x64
  • io.js iojs-v3.0.0-x64
  • node.js node-v0.12.5-x64

Ordering rows in a table

Hi all,

How can I order rows in an Azure Storage table using the Azure Storage Node.js SDK? I have about 100,000 rows in a table with a property named 'Score', and I want to order the rows by that property.

Is it possible to order by a custom property in Azure Table Storage?

Error: An HTTP header that's mandatory for this request is not specified.

I am using Azure in China, which has a different host from other regions, so I create the service as below:

MY_ACCOUNT_URL = 'https://mediasvc4skk4dmmx0k19.blob.core.chinacloudapi.cn/'
MY_ACCOUNT_NAME = '***'
ACCOUNT_KEY = '***'

azure = require('azure-storage')
queueSvc = azure.createQueueService(MY_ACCOUNT_NAME, ACCOUNT_KEY, MY_ACCOUNT_URL)
queueSvc.createQueueIfNotExists 'mytable', (error, result, response)->
    console.log error||result

Error shows up:
{ [Error: An HTTP header that's mandatory for this request is not specified.
RequestId:07e05eed-0001-001e-6ab2-8e8b48000000
Time:2014-10-19T11:25:45.0819834Z]
code: 'MissingRequiredHeader',
headername: 'x-ms-blob-type',
statusCode: 400,
requestId: '07e05eed-0001-001e-6ab2-8e8b48000000' }
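One likely cause, offered as a guess: the host passed in is the blob endpoint, so the queue request hits the blob service, which is what demands the x-ms-blob-type header. A sketch (plain JavaScript) pointing the queue client at the queue endpoint for Azure China instead:

    var azure = require('azure-storage');

    // Queue traffic belongs on the queue endpoint, not the blob one;
    // adjust the hostname if your account's queue endpoint differs.
    var queueSvc = azure.createQueueService(
        MY_ACCOUNT_NAME,
        ACCOUNT_KEY,
        'https://' + MY_ACCOUNT_NAME + '.queue.core.chinacloudapi.cn'
    );

    queueSvc.createQueueIfNotExists('mytable', function (error, result, response) {
        console.log(error || result);
    });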

any help? thanks~

TableService.createTableIfNotExists fails if table already exists

If you call createTableIfNotExists in quick succession, the handler for the createTable call in the _doesTableExist callback fails. The problem boils down to the fact that _doesTableExist can return false for a bunch of requests; when those requests return, they each call createTable. The createTable handler checks whether the error status code says the table already exists, and then attempts to set the isSuccessful flag on the response to true. The problem is that on error the createResponse object is null. See https://github.com/Azure/azure-storage-node/blob/master/lib/services/table/tableservice.js#L663
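A sketch of the kind of guard needed (names follow the issue; this is not the actual patch):

    // In the createTable callback inside createTableIfNotExists:
    if (createError && createError.code === 'TableAlreadyExists') {
        createError = null; // the table exists, which is what the caller asked for
        if (createResponse) { // may be null on error, hence the guard
            createResponse.isSuccessful = true;
        }
    }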

Entity generation is different for different calls

The entity generator is fine, but seems a little complex, especially since other calls, like delete or retrieve, don't require it. Why is it needed for inserting? Couldn't it be built into the insert function too?

Saving blob metadata forces keys to lowercase

Hi,

it seems that the Node.js library forces metadata keys to lowercase when storing metadata. According to http://msdn.microsoft.com/en-us/library/azure/hh225342.aspx, metadata keys must conform to C# identifiers, which are case sensitive, and the .NET API correctly stores metadata keys with their case preserved.

The offending line seems to be at /lib/common/http/webresource.js:266

Of course, some incompatibility is built in, since metadata is set through headers, and RFC 2616 ("Hypertext Transfer Protocol -- HTTP/1.1", paragraph 4.2, "Message Headers") says:

Each header field consists of a name followed by a colon (":") and the field value. Field names are case-insensitive.

Is this something we should expect to change or something to live with?

EventEmitter Memory Leak

I am using Meteor. I built a Meteor package which helps upload files to blob storage, and I am getting the following error:

W20141025-15:22:40.195(5.5)? (STDERR) (node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
W20141025-15:22:40.546(5.5)? (STDERR) Trace
W20141025-15:22:40.546(5.5)? (STDERR)     at addListener (events.js:160:15)
W20141025-15:22:40.546(5.5)? (STDERR)     at /Users/user/boutfeeds/packages/jamesfebin:azure-blob-upload/.build.jamesfebin:azure-blob-upload/npm/node_modules/azure-storage/lib/common/services/storageserviceclient.js:399:31
W20141025-15:22:40.547(5.5)? (STDERR)     at /Users/user/boutfeeds/packages/jamesfebin:azure-blob-upload/.build.jamesfebin:azure-blob-upload/npm/node_modules/azure-storage/lib/common/services/storageserviceclient.js:516:5
W20141025-15:22:40.547(5.5)? (STDERR)     at SharedKey.signRequest (/Users/user/boutfeeds/packages/jamesfebin:azure-blob-upload/.build.jamesfebin:azure-blob-upload/npm/node_modules/azure-storage/lib/common/signing/sharedkey.js:81:3)
W20141025-15:22:40.547(5.5)? (STDERR)     at Object.StorageServiceClient._buildRequestOptions (/Users/user/boutfeeds/packages/jamesfebin:azure-blob-upload/.build.jamesfebin:azure-blob-upload/npm/node_modules/azure-storage/lib/common/services/storageserviceclient.js:498:27)
W20141025-15:22:40.634(5.5)? (STDERR)     at operation (/Users/user/boutfeeds/packages/jamesfebin:azure-blob-upload/.build.jamesfebin:azure-blob-upload/npm/node_modules/azure-storage/lib/common/services/storageserviceclient.js:255:10)
W20141025-15:22:40.634(5.5)? (STDERR)     at func [as _onTimeout] (/Users/user/boutfeeds/packages/jamesfebin:azure-blob-upload/.build.jamesfebin:azure-blob-upload/npm/node_modules/azure-storage/lib/common/services/storageserviceclient.js:422:11)
W20141025-15:22:40.635(5.5)? (STDERR)     at Timer.listOnTimeout [as ontimeout] (timers.js:110:15)

The code is here

 azureUpload: function (fileName, accountName, key, container, callback) {

      var buffer = new Buffer(this.data);
      var retryOperations = new azure.ExponentialRetryPolicyFilter();
      var blobService = azure.createBlobService(accountName, key).withFilter(retryOperations);
      var blockId = this.blockArray[this.blockArray.length - 1];
      var stream = new ReadableStreamBuffer(buffer);
      var self = this;
      var Future = Npm.require('fibers/future');
      var myFuture = new Future;

      blobService.createBlockFromStream(blockId, container, fileName, stream, stream.size(), function (err, response) {
          if (err) {
              myFuture.return();
          } else if (response) {
              if (self.bytesUploaded + self.data.length >= self.size) {
                  blobService.commitBlocks(container, fileName, { LatestBlocks: self.blockArray }, function (error, result) {
                      if (error) {
                          myFuture.return();
                      } else {
                          myFuture.return({ url: 'https://' + accountName + '.blob.core.windows.net/' + container + '/' + fileName });
                      }
                  });
              } else {
                  myFuture.return();
              }
          }
      });

      return myFuture.wait();
  }

You can view the full source code here https://github.com/jamesfebin/azure-blob-upload/blob/master/azureupload.js (Scroll down to azureUpload function)

Error when using the file service

I get this error when I try to use var fileService = azure.createFileService(); to create shares and directories:

{ [Error: getaddrinfo ENOTFOUND contoso.file.core.windows.net contoso.file.core.windows.net:443]
  code: 'ENOTFOUND',
  errno: 'ENOTFOUND',
  syscall: 'getaddrinfo',
  hostname: 'contoso.file.core.windows.net',
  host: 'contoso.file.core.windows.net',
  port: '443' }

Obviously I use my own storage account, not contoso.

When checking on Azure I can't see any URL for file.core.windows.net, only blob, table and queue; I suspect that is the actual problem.

listBlobDirectories Method

The .NET API provides the type CloudBlobDirectory to enumerate the BlobPrefix objects in the XML returned from the server. Example of enumeration:
AzureContainer.ListBlobs().OfType<CloudBlobDirectory>
Example of the Azure API call made when executing this method:
https://account.blob.core.windows.net/container?restype=container&comp=list&delimiter=%2F
which just lists all blobs with a delimiter, the same as listBlobsSegmented with a delimiter passed as an option.

The current azure-storage-node listBlobsSegmented and listBlobsSegmentedWithPrefix methods drop the BlobPrefix objects from the result, as they should.

I suggest adding a listBlobDirectories method that returns these results, given a container and prefix.
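A sketch of how the proposed method might be called (the method name and result shape are hypothetical; nothing like this exists in the library yet):

    blobService.listBlobDirectories(container, prefix, currentToken, function (error, result) {
        if (!error) {
            result.entries.forEach(function (directory) {
                console.log(directory.name); // e.g. 'photos/2015/'
            });
            // result.continuationToken would page through further prefixes.
        }
    });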

Table name validation throws even if callback is supplied

When createTableIfNotExists is called with an invalid table name, the function throws. In the previous version of your excellent module the validation utilities were exposed, so this unhandled exception could be avoided.

IMHO, the expected behavior would be to return the error in the callback. Or maybe you could expose the validation utilities once again. (A workaround sketch follows below.)
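In the meantime, a workaround sketch that converts the throw into a callback error:

    function createTableIfNotExistsSafe(tableService, tableName, callback) {
        try {
            tableService.createTableIfNotExists(tableName, callback);
        } catch (err) {
            // Table name validation throws synchronously; route it to the callback.
            process.nextTick(function () { callback(err); });
        }
    }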

Best regards,

/pål

Copy image from blob to blob

Is there a way to get an image from one blob and put it into another blob without creating a file on disk? I have tried reading a stream with getBlobToStream and writing it to another blob with createBlockBlobFromStream.

I have something like this.

var storageClient = azureStorage.createBlobService(name,key);
storageClient.getBlobToStream(blobName, name, stream, function(error, image, response) {
    storageClient.createBlockBlobFromStream(newBlobName, image.blob, image, image.contentLength, {
        contentType: image.contentType
    }, callback);
});

It shows me an error at body.outputStream.on('open', function(){: undefined is not a function.
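An alternative worth trying, sketched under the assumption that the source blob is readable by the service (public, or addressed with a SAS): let the service copy server-side with startCopyBlob instead of streaming through the client. Container and blob names here are placeholders.

    // Server-side copy: no bytes flow through this process.
    var sourceUri = storageClient.getUrl('source-container', 'source-blob');
    storageClient.startCopyBlob(sourceUri, 'target-container', 'new-blob', function (error, result) {
        if (!error) {
            console.log('copy started'); // poll the blob's copy status for completion
        }
    });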

Inconsistent property vs. XML attribute name for permission(s) when parsing/serializing access policies

According to documentation and sample code, access policies within the azure-storage-node module use the property Permissions (capitalized, plural). The Azure API expects the XML attribute to be permission (lowercase, singular). Constants for the XML attributes exist in common/util/constants.js.

The serialization code in lib/common/models/aclresult.js correctly uses these constants:

  if (signedIdentifier.AccessPolicy.Permissions) {
    doc = doc
        .ele(AclConstants.PERMISSION)
          .txt(signedIdentifier.AccessPolicy.Permissions)
        .up();
  }

The XML parsing code does not use the constants, and incorrectly reads Permission (incorrect: capitalized) from the XML into the Permission property (incorrect: singular):

if (signedIdentifier.AccessPolicy.Permission) {
    si.AccessPolicy.Permission = signedIdentifier.AccessPolicy.Permission;
}

Thus, the access policy permission information in downloaded XML is lost, or ends up in the wrong field.

To fix this, the parse() function should also use the proper AclConstants and read the results into the Permissions field (plural, not singular) that is used by serialize(), the sample code, and the documentation. The only place where the singular Permission is used as the property name is the unit test that should have caught the bug.

The following patches should fix the incorrect unit test and the actual bug:

diff --git a/test/services/blob/blobservice-container-tests.js b/test/services/blob/blobservice-container-tests.js
index 63ad178..7a8fd7b 100644
--- a/test/services/blob/blobservice-container-tests.js
+++ b/test/services/blob/blobservice-container-tests.js
@@ -578,7 +578,7 @@ describe('BlobContainer', function () {
               if (identifier.Id === 'id1') {
                 assert.equal(identifier.AccessPolicy.Start.getTime(), new Date('2009-10-10T00:00:00.123Z').getTime());
                 assert.equal(identifier.AccessPolicy.Expiry.getTime(), new Date('2009-10-11T00:00:00.456Z').getTime());
-                assert.equal(identifier.AccessPolicy.Permission, 'r');
+                assert.equal(identifier.AccessPolicy.Permissions, 'r');
                 entries += 1;
               }
               else if (identifier.Id === 'id2') {
@@ -586,7 +586,7 @@ describe('BlobContainer', function () {
                 assert.equal(identifier.AccessPolicy.Start.getMilliseconds(), 6);
                 assert.equal(identifier.AccessPolicy.Expiry.getTime(), new Date('2009-11-11T00:00:00.4Z').getTime());
                 assert.equal(identifier.AccessPolicy.Expiry.getMilliseconds(), 400);
-                assert.equal(identifier.AccessPolicy.Permission, 'w');
+                assert.equal(identifier.AccessPolicy.Permissions, 'w');
                 entries += 2;
               }
             });


diff --git a/lib/common/models/aclresult.js b/lib/common/models/aclresult.js
index 105d164..5bea440 100644
--- a/lib/common/models/aclresult.js
+++ b/lib/common/models/aclresult.js
@@ -91,7 +91,7 @@ exports.serialize = function (signedIdentifiersJs) {
 exports.parse = function (signedIdentifiersXml) {
   var signedIdentifiers = [];

-  signedIdentifiersXml = azureutil.tryGetValueChain(signedIdentifiersXml, [ 'SignedIdentifiers', 'SignedIdentifier' ]);
+  signedIdentifiersXml = azureutil.tryGetValueChain(signedIdentifiersXml, [ AclConstants.SIGNED_IDENTIFIERS_ELEMENT, AclConstants.SIGNED_IDENTIFIER_ELEMENT ]);
   if (signedIdentifiersXml) {
     if (!_.isArray(signedIdentifiersXml)) {
       signedIdentifiersXml = [ signedIdentifiersXml ];
@@ -99,20 +99,22 @@ exports.parse = function (signedIdentifiersXml) {

     signedIdentifiersXml.forEach(function (signedIdentifier) {
       var si = {};
-      si.Id = signedIdentifier.Id;
-      if (signedIdentifier.AccessPolicy) {
+      var accessPolicy;
+      si.Id = signedIdentifier[AclConstants.ID];
+      accessPolicy = signedIdentifier[AclConstants.ACCESS_POLICY];
+      if (accessPolicy) {
         si.AccessPolicy = {};

-        if (signedIdentifier.AccessPolicy.Start) {
-          si.AccessPolicy.Start = ISO8061Date.parse(signedIdentifier.AccessPolicy.Start);
+        if (accessPolicy[AclConstants.START]) {
+          si.AccessPolicy.Start = ISO8061Date.parse(accessPolicy[AclConstants.START]);
         }

-        if (signedIdentifier.AccessPolicy.Expiry) {
-          si.AccessPolicy.Expiry = ISO8061Date.parse(signedIdentifier.AccessPolicy.Expiry);
+        if (accessPolicy[AclConstants.EXPIRY]) {
+          si.AccessPolicy.Expiry = ISO8061Date.parse(accessPolicy[AclConstants.EXPIRY]);
         }

-        if (signedIdentifier.AccessPolicy.Permission) {
-          si.AccessPolicy.Permission = signedIdentifier.AccessPolicy.Permission;
+        if (accessPolicy[AclConstants.PERMISSION]) {
+          si.AccessPolicy.Permissions = accessPolicy[AclConstants.PERMISSION];
         }
       }

Losing internet connection leaves callback uncalled during blob download

Try downloading a sizable blob (tested on a blob of 137.9 MB):

blobService.getBlobToStream(model.get('container_id'), model.get('name'), stream, options,  function (err) { 
  console.log('I am never called with: ' + err);
});

During the download, cut your internet connection. Expected: the callback is called with err. Actual: it is never called.

Table Storage module does not support emoji

Hi,

I get this error when I try to insert a new entity containing emoji characters:

Error: Invalid character (�) in string: Ok obrigada😃😃😃
at XMLFragment.assertLegalChar (D:\home\site\wwwroot\node_modules\azure\node_modules\xmlbuilder\lib\XMLFragment.js:354:15)
at XMLFragment.text (D:\home\site\wwwroot\node_modules\azure\node_modules\xmlbuilder\lib\XMLFragment.js:139:12)
at XMLFragment.txt (D:\home\site\wwwroot\node_modules\azure\node_modules\xmlbuilder\lib\XMLFragment.js:369:19)
at AtomHandler._writeAtomEntryValue (D:\home\site\wwwroot\node_modules\azure\lib\util\atomhandler.js:249:37)
at AtomHandler._writeAtomEntryValue (D:\home\site\wwwroot\node_modules\azure\lib\util\atomhandler.js:239:32)
at AtomHandler._writeAtomEntryValue (D:\home\site\wwwroot\node_modules\azure\lib\util\atomhandler.js:239:32)
at AtomHandler.serialize (D:\home\site\wwwroot\node_modules\azure\lib\util\atomhandler.js:188:16)
at Function.EntityResult.serialize (D:\home\site\wwwroot\node_modules\azure\lib\services\table\models\entityresult.js:68:25)
at TableService.insertEntity (D:\home\site\wwwroot\node_modules\azure\lib\services\table\tableservice.js:519:42)
at </table/messages.insert.js>:97:38
at Array.forEach (native)

Unable to turn off nagling

I'm working with Windows Azure Tables from a Node.js app. Performance seems to be an issue at this time. While investigating how to improve performance, I learned that a common practice is to turn off 'nagling'.

I cannot find a ServicePointManager in the azure-storage-node module. The ability to turn off nagling can drastically improve the performance of writing to Windows Azure Tables.
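There is no ServicePointManager equivalent here, but a Node-level sketch of the idea, assuming you can route the SDK's traffic through your own agent (whether the SDK exposes a hook for that depends on the version; the keepAlive option needs Node >= 0.12):

    var http = require('http');

    var agent = new http.Agent({ keepAlive: true, maxSockets: 25 });
    var baseCreateConnection = agent.createConnection;
    agent.createConnection = function (options, connectionListener) {
        var socket = baseCreateConnection.call(this, options, connectionListener);
        socket.setNoDelay(true); // turn off Nagle's algorithm on each new socket
        return socket;
    };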

Function names in README are invalid / outdated

I wanted to use this package to read the contents of a blob and found getBlockBlobToStream mentioned in the README.

It turns out that this function does not exist in the code; there is one called getBlobToStream.

I think all the examples should be updated, unless the README is ahead of its time and the code is about to change soon.
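For anyone landing here, the working form of the README example with the real method name (container and blob names are placeholders):

    var fs = require('fs');
    var azure = require('azure-storage');

    var blobService = azure.createBlobService();
    blobService.getBlobToStream('mycontainer', 'myblob', fs.createWriteStream('output.txt'),
        function (error, result, response) {
            if (!error) {
                console.log('blob downloaded');
            }
        });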

Suggestion: convenience query method to traverse continuation tokens

Queries against Azure tables may occasionally return a continuation token for a variety of reasons (see http://blog.smarx.com/posts/windows-azure-tables-expect-continuation-tokens-seriously). The following is an example queryEntitiesContinuation extension method that automatically retrieves the remaining data when a continuation token is present.

It would be great to see something like this added to the TableService methods so the continuation token can be traversed automatically.

var azure = require('azure-storage');

azure.TableService.prototype.queryEntitiesContinuation = function (tableName, query, maxContinuationCount, callback) {

    var tableService = this;
    var data = [];
    var continuationCount = 0;
    var operation = function (tableName, query, continuationToken) {
        tableService.queryEntities(tableName, query, continuationToken, function (error, result) {

            if (!error) {
                if (result.continuationToken) {

                    result.entries.forEach(function (entry) {
                        data.push(entry);
                    });

                    if (maxContinuationCount === null || continuationCount < maxContinuationCount) {
                        ++continuationCount;
                        // update top
                        if (query._top !== null) {
                            query._top = query._top - data.length;
                            if (query._top !== 0) {
                                operation(tableName, query, result.continuationToken);
                            } else {
                                callback(error, result);
                            }
                        } else {
                            operation(tableName, query, result.continuationToken);
                        }
                    } else {
                        callback(error, result);
                    }

                } else {

                    data.forEach(function (entry) {
                        result.entries.push(entry);
                    });

                    callback(error, result);
                }
            } else {
                // On error there may be no result object; return whatever was
                // accumulated so far alongside the error.
                callback(error, { entries: data });
            }
        });
    };

    operation(tableName, query, null);
};

module.exports = azure;

JSDoc documentation is out of date

The JSDoc-generated documentation (http://dl.windowsazure.com/nodedocs/index.html) is out of date.

I couldn't understand why I wasn't able to insert entities into my tables. Eventually I reached the Azure docs (http://azure.microsoft.com/en-us/documentation/articles/storage-nodejs-how-to-use-table-storage/) and figured out there is a new syntax for table entity properties: key: {"_": "value"}.
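For example, an insert in the new shape looks roughly like this (table and property names are placeholders):

    var entity = {
        PartitionKey: { _: 'partition1' },
        RowKey: { _: 'row1' },
        // '$' optionally carries the Edm type; strings can omit it.
        score: { _: 42, $: 'Edm.Int32' }
    };

    tableService.insertEntity('mytable', entity, function (error, result, response) {
        // ...
    });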

The JSDocs say nothing about this on the start page.

I suggest either updating or removing those docs.

Failure when piping 200MB zip from blob storage to stream

I have some code that worked in the past but has been having issues recently.

There are files in blob storage that I expect to be able to pipe to a stream. In production I pipe to an HTTP response, but in the example below I simply pipe to a file. Both result in the same error.

process.env.AZURE_STORAGE_ACCOUNT = 'storageAccount';
process.env.AZURE_STORAGE_ACCESS_KEY = 'storageKey';

var fs = require('fs');
var azure = require('azure-storage');

function downloadBlob(containerName, fileName, stream)
{
    var blobService = azure.createBlobService();
    blobService.getBlobProperties(containerName, fileName, function(error, properties, status){
        if(error || !status.isSuccessful)
        {
            console.log('Couldn\'t find file');
        }
        else
        {
            console.log('downloading file');
            blobService.createReadStream(containerName, fileName).pipe(stream);
        }
    });
}

var stream = fs.createWriteStream('download.zip');
downloadBlob("container", "filename", stream);

Running this code results in the following error, and I'm not sure what exactly has changed or where to go from here to fix it.

TypeError: Object function () {
        self.logger.debug('Write stream has ended');
        if (writeStream.close) {
          writeStream.close();
        }
        if (!savedBlobResult) {
          savedBlobResult = {};
        }
        savedBlobResult.contentMD5 = options.contentMD5;
        savedBlobResult.clientSideContentMD5 = null;
        if (md5Hash) {
          savedBlobResult.clientSideContentMD5 = md5Hash.digest('base64');
        }
        callback(error, savedBlobResult, savedBlobResponse);
      } has no method 'copy'
    at ChunkStream._copyToInternalBuffer (C:\projects\aldis\GridSmartCloud\coreserver\node_modules\azure-storage\lib\common\streams\chunkstream.js:180:21)
    at ChunkStream._buildChunk (C:\projects\aldis\GridSmartCloud\coreserver\node_modules\azure-storage\lib\common\streams\chunkstream.js:127:12)
    at ChunkStream.write (C:\projects\aldis\GridSmartCloud\coreserver\node_modules\azure-storage\lib\common\streams\chunkstream.js:106:8)
    at ChunkStream.end (C:\projects\aldis\GridSmartCloud\coreserver\node_modules\azure-storage\lib\common\streams\chunkstream.js:72:10)
    at EventEmitter.<anonymous> (C:\projects\aldis\GridSmartCloud\coreserver\node_modules\azure-storage\lib\services\blob\blobservice.js:4335:19)
    at EventEmitter.emit (events.js:98:17)
    at BatchOperation._tryEmitEndEvent (C:\projects\aldis\GridSmartCloud\coreserver\node_modules\azure-storage\lib\common\streams\batchoperation.js:279:19)
    at BatchOperation._fireOperationUserCallback (C:\projects\aldis\GridSmartCloud\coreserver\node_modules\azure-storage\lib\common\streams\batchoperation.js:267:10)
    at BatchOperation._fireOperationUserCallback (C:\projects\aldis\GridSmartCloud\coreserver\node_modules\azure-storage\lib\common\streams\batchoperation.js:262:10)
    at C:\projects\aldis\GridSmartCloud\coreserver\node_modules\azure-storage\lib\common\streams\batchoperation.js:215:14

BlobService: listBlocks() does not return single item block lists (BlockListResult.parse)

When trying to list all uncommitted/committed blocks of a blob using the listBlocks() method, the block list supplied to the callback function keeps coming back empty if there is only a single block item in any of the list results (Committed, Uncommitted, Latest).

Method call used for testing:

  blobService.listBlocks(container, blobName, 'all', function(err, blocks, response) {
    console.log(err, blocks, response);
  });

Correct result, when there are 2 or more committed blocks in a blob:

null { CommittedBlocks:
   [ { Name: 'bG9nLTAwMDAwMQ==', Size: '9196' },
     { Name: 'bG9nLTAwMDAwMg==', Size: '9196' } ] } { isSuccessful: true,
  statusCode: 200,
  body: { BlockList: { CommittedBlocks: [Object], UncommittedBlocks: '' } },
  ...

Incorrect result, when there is only 1 committed block in a blob:


null {} { isSuccessful: true,
  statusCode: 200,
  body: { BlockList: { CommittedBlocks: [Object], UncommittedBlocks: '' } },
  ...

In the incorrect case, the response.body.BlockList contains the following:

{
    "CommittedBlocks": {
        "Block": {
            "Name": "bG9nLTAwMDAwMQ==",
            "Size": "9196"
        }
    },
    "UncommittedBlocks": ""
}

So, instead of being an array it's a plain object.

The BlockListResult.parse() method in
https://github.com/Azure/azure-storage-node/blob/master/lib/services/blob/models/blocklistresult.js should most likely take this into account.

One possible solution would be (if we're not concerned with performance :-)):

BlockListResult.parse = function (blockListXml) {
  var blockListResult = new BlockListResult();

  if (blockListXml.CommittedBlocks && blockListXml.CommittedBlocks.Block) {
    blockListResult.CommittedBlocks = [].concat(blockListXml.CommittedBlocks.Block);
  }

  if (blockListXml.UncommittedBlocks && blockListXml.UncommittedBlocks.Block) {
    blockListResult.UncommittedBlocks = [].concat(blockListXml.UncommittedBlocks.Block);
  }

  if (blockListXml.LatestBlocks && blockListXml.LatestBlocks.Block) {
    blockListResult.LatestBlocks = [].concat(blockListXml.LatestBlocks.Block);
  }

  return blockListResult;
};

Security vulnerabilities with the 'request' module

Hello,

Could you update the 'request' module to fix the security vulnerabilities reported by the 'nsp' tool (https://nodesecurity.io/advisories/qs_dos_extended_event_loop_blocking and https://nodesecurity.io/advisories/qs_dos_memory_exhaustion)?
The issue is with the 'qs' submodule, which is at version 0.6.6 in 'request' 2.27.0. It is fixed in 'request' 2.40.0 (version 2.45.0 is currently used in the main azure module).

Thank you in advance.

I'm receiving intermittent "Error: NotFound" when attempting to stream files from Azure storage blobs.

I'm not sure where to even start with debugging this. It is intermittent in the sense that sometimes the files stream correctly, and sometimes the same file returns a "NotFound" error. Any ideas?

I've confirmed that both the file and the container exist. The storage container I'm uploading to is geo-redundant. Is there a delay between when a file is uploaded and when it becomes available for reading? I'm usually firing these operations immediately after one another:

File uploaded to Azure container -> create read stream -> stream the file and parse its contents line by line.

            var readStream = blobService.createReadStream(container, filename, function(err) {
                if(err) {
                    logger.error({err: err});
                    done(err);
                } else {
                    csv.fromStream(readStream, {
                        headers: true,
                        objectMode: true,
                        ignoreEmpty: true
                    })
                    .on('data', readLine)
                    .on('error', handleError)
                    .on('end', handleEnd);
                }
            });

Memory leak in TableService.queryEntities

I recently discovered a memory leak in a Node app of ours, which I eventually tracked down to the queryEntities method on a TableService instance. After pulling the code out and completely stripping it down, the problem still existed. The basic setup I use to reproduce the issue is below:

var AzureSvc = require('azure-storage');

var azure = {
    eventsTable: 'eventtable',
    tableService: AzureSvc.createTableService('{name}','{key}'),
    eventsQuery: (new AzureSvc.TableQuery()).where('Sent eq ?', 'false'),
    eventCheckTimeout: 5000
};
azure.checkEvents = function Azure_CheckEvents() {
    azure.tableService.queryEntities(azure.eventsTable, azure.eventsQuery, null, function(error, result) {
        if(!error)
        {
            setTimeout(Azure_CheckEvents, azure.eventCheckTimeout);
        }
        else
        {
            setTimeout(Azure_CheckEvents, azure.eventCheckTimeout);
        }
    });
};
azure.tableService.createTableIfNotExists(azure.eventsTable, function(error) {
    if(!error)
    {
        setTimeout(azure.checkEvents, azure.eventCheckTimeout);
    }
});

Validator "stringAllowEmpty" should have a more specific error message

Hi,
While trying to debug a customer application, we realized that the error they were receiving was not phrased as clearly as it could be.

The argument validator called by TableService retrieveEntity calls into stringAllowEmpty, which checks that the argument is a string (empty is OK). The code of the validator is here: https://github.com/Azure/azure-storage-node/blob/master/lib/common/util/validate.js#L386

Unfortunately the app was passing a JavaScript integer, not a string; we kept looking at the argument, and since it was not undefined it seemed like it should work...

Perhaps the error message could be clearer: right now it says the argument is missing, when really the argument is potentially of the wrong type.

Thanks.

ReferenceError: err is not defined

azure-storage/lib/services/blob/blobservice.js:3607:7

What I did: I tried to use createBlockBlobFromLocalFile with a file with a very (very) long name (I hoped to catch an error, but it crashed).

full log:
ReferenceError: err is not defined
at /var/www/html/ex2/server.js:279:94
at finalCallback (/var/www/html/ex2/node_modules/azure-storage/lib/services/blob/blobservice.js:3607:7)
at /var/www/html/ex2/node_modules/azure-storage/lib/common/filters/retrypolicyfilter.js:174:13
at /var/www/html/ex2/node_modules/azure-storage/lib/common/services/storageserviceclient.js:651:17
at /var/www/html/ex2/node_modules/azure-storage/lib/common/services/storageserviceclient.js:818:11
at /var/www/html/ex2/node_modules/azure-storage/lib/common/services/storageserviceclient.js:650:15
at processResponseCallback (/var/www/html/ex2/node_modules/azure-storage/lib/services/blob/blobservice.js:3610:5)
at Request.processResponseCallback [as _callback]
at Request.self.callback (/var/www/html/ex2/node_modules/azure-storage/node_modules/request/request.js:129:22)
at Request.emit (events.js:98:17)
at Request.<anonymous> (/var/www/html/ex2/node_modules/azure-storage/node_modules/request/request.js:873:14)
at Request.emit (events.js:117:20)
at IncomingMessage.<anonymous> (/var/www/html/ex2/node_modules/azure-storage/node_modules/request/request.js:824:12)
at IncomingMessage.emit (events.js:117:20)
at _stream_readable.js:944:16
at process._tickCallback (node.js:448:13)

Error: Cannot find module './services/StorageUtilities'

Line 41 in this file fails at Object.<anonymous> (/opt/ayakm/webapps/release-33/target/node_modules/azure-storage/lib/common/lib/common.js:41:28)

Problem:
exports.StorageUtilities = require('./services/StorageUtilities'); <== wrong case; should be lowercase

Fix:
exports.StorageUtilities = require('./services/storageutilities'); <== note the correct case. The wrong case fails on Linux-based systems because their file systems are case sensitive. Please fix as soon as possible; projects will not build in a Linux environment. Thanks.

Missing getBlob(...)

The old SDK has a getBlob(...) method which returns a Stream.

It would be nice if that were included in the new SDK.
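The closest equivalent in this SDK appears to be createReadStream, sketched here with placeholder names:

    // Returns a readable stream over the blob's content.
    var stream = blobService.createReadStream('mycontainer', 'myblob', function (error) {
        if (error) { console.error(error); }
    });
    stream.pipe(process.stdout);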

Error when communicating to azure blob storage in a Worker Role

Setup:
node v0.10.21
azure-storage ^0.4.1

I have a node/express app running as a worker role, and two storage containers I stream files from. When a user requests one of the files, I pick the file from the appropriate blob container and stream it back to the user with the following code:

exports.getBlobToStream = function(containerName, fileName, res)
{
    var blobService = azure.createBlobService();
    blobService.getBlobProperties(containerName, fileName, function(error, properties, status){                     
        if(error || !status.isSuccessful)
        {
            res.header('Content-Type', "text/plain");
            res.status(404).send("File " + fileName + " not found");
        }
        else
        {
            res.header('Content-Type', properties.contentType);
            res.header('Content-Disposition', 'attachment; filename=' + fileName);
            blobService.createReadStream(containerName, fileName).pipe(res);
        }
    });
};

I have no problems reading from one of the containers, but when trying to retrieve files from the second I get the following error:

{
  "code": "ECONNREFUSED",
  "errno": "ECONNREFUSED",
  "syscall": "connect"
}

Both are configured as public blob containers, are under the same storage account, and use the same access key. If it makes any difference, that same account has 6 other containers, for a total of 8.

This seems similar to an issue posted on the azure-sdk-for-node package here Azure/azure-sdk-for-node#434

The solution that resolved that bug also seems to fix mine: Removing the EMULATED variable from ServiceDefinition.csdef.

QueueService.updateMessage is broken (mix of messagetext and messageText options)

The docs (well, some of them; I'm not sure which docs are official) say it takes messageText, the sample says messagetext, and the code uses both, so you have to provide both for anything to work :)

Check out:

if (options.messagetext) {
  content = QueueMessageResult.serialize(options.messageText, this.encodeMessage);
}

I suggest you go with messageText instead of messagetext :)
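Until that lands, a workaround sketch: supply both spellings so whichever branch your installed version checks finds a value (queue name and visibility timeout are placeholders):

    var options = { messageText: 'new text', messagetext: 'new text' };
    queueService.updateMessage('myqueue', message.messageId, message.popReceipt, 0, options,
        function (error, result, response) {
            if (!error) {
                console.log('message updated');
            }
        });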

Azure Storage SDK Project Doc createBlockBlobFromFile incorrect

The 'Azure Storage SDK Index' documentation here says to install the storage SDK with npm install azure-storage (not npm install azure), so it installs (I'm guessing) the new storage SDK as opposed to the legacy one.

Anyway, it specifies a method on the blobService object called createBlockBlobFromFile. That method does not exist in azure-storage; it is called createBlockBlobFromLocalFile. The GitHub README for azure-storage is correct, but this page is not. We should also point to where the backing markdown for this web page lives so that people can send pull requests to fix it.

cannot find module azure-storage error in custom API

This is strange: I have been using the code below in my Azure Mobile Services custom API for the past two months and it was working fine until this morning. Now I am getting the error "cannot find module azure-storage" at the first line below:
var azure = require('azure-storage');
var retryOperationFilter = new azure.ExponentialRetryPolicyFilter();
var tableService= azure.createTableService().withFilter(retryOperationFilter);

I do have azure-storage included in my package.json -
"dependencies": {"tracer": "0.7.3", "colors" : "1.0.3", "lodash" :"2.4.1", "azure-storage": "0.4.1"},

has any change been made to the lib?

Accessing response entity objects is a little bit confusing

Hello all,

When I try to access returned row objects, I always have to do something like object._ to get the value. Is there any workaround for this?

tableSvc.queryEntities('Likes', query, null, function(error, result, response) {
    if(!error) {
        var returnedLikeCount = result.entries[0].likeCount._;
    }
});
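One workaround sketch, applied inside the query callback: flatten the entries so callers see plain values instead of the { _: value } wrappers.

    function flattenEntity(entity) {
        var flat = {};
        Object.keys(entity).forEach(function (key) {
            var prop = entity[key];
            // Unwrap { _: value } property objects; pass anything else through.
            flat[key] = (prop && prop._ !== undefined) ? prop._ : prop;
        });
        return flat;
    }

    var likes = result.entries.map(flattenEntity);
    var returnedLikeCount = likes[0].likeCount;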

Thanks

TableQuery on string prefix that is a numeric string

I am trying to build a TableQuery that returns results where a string property starts with a number. I have attempted the prefix method described on MSDN, but I cannot get any results back when the characters are numeric.

Here is an example of what I'm trying to do:
titleQuery = new azure.TableQuery()
  .where 'title ge ?', '1'
  .and 'title lt ?', '2'

Sample title values are "1-something", "something", and "123-something"

Looking at the comments in the source, I can see that the query API has been updated, but I don't see any examples of querying by prefix.

Any thoughts on how or if this is possible?

FileService.createResourceName - Error: The specifed resource name contains invalid characters.

This error seems to be caused by "indexOf" on lines 120 and 129 of 'lib\services\file\fileservice.js'

current:

if (directory) {
    // if directory does not start with '/', add it
    if (directory.indexOf(0) !== '/') {
      name += ('/');
    }

    name += encode(directory);
  } 

  if (file) {
    // if the current path does not end with '/', add it
    if (name.indexOf(name.length - 1) !== '/') {
      name += ('/');
    }

    name += encode(file);
  }

This is working for me:

if (directory) {
    // if directory does not start with '/', add it
    if (directory[0] !== '/') {
      name += ('/');
    }

    name += encode(directory);
  } 

  if (file) {
    // if the current path does not end with '/', add it
    if (name[name.length - 1] !== '/') {
      name += ('/');
    }

    name += encode(file);
  }

Thanks!

Adaptable to work in browser?

I've been trying to run browserify on the azure-storage package to see if a working version of the API could run in the browser, for CORS access to the blob service. This would also be useful for Cordova-based apps. So far these are the issues/workarounds that I've hit:

  • The node version check breaks. It uses process.version (in the azure-util module). I hacked this to return a sensible value when process.version is undefined.
  • The calls to readFileSync fail. The brfs transform for browserify does not fix the issue (the AST parsing cannot rewrite the code because it goes through an indirection). It's used in two places: one for MIME type loading and one for loading package.json (which is just a version export).
  • I ended up using browserify-mime to replace the mime dependency at bundle time using remapify. This seems to have worked, but I'm still testing.

Could these fixes be added to the node package to support an in-browser compatible build?

EADDRINUSE Error in intensive use

I'm working with Azure Tables from Node.js on Azure Web Sites. If I make lots of transactions, I get the EADDRINUSE error. The symptoms are explained on the Gluwer blog: http://blog.gluwer.com/2014/03/story-of-eaddrinuse-and-econnreset-errors/

The request module used by your framework can use the forever agent, which keeps socket connections alive and reuses them when possible, but request is not configured to use the forever agent instead of the http module's default agent.

Sébastien

Chunked stream writing methods do not accept string

The stream created by blobService#createWriteStreamToBlockBlob does not work properly with strings. I can pass a Buffer, but when I pass a string it fails with an "undefined is not a function" error. I have written a self-contained example that reproduces the problem.

var azure = require('azure-storage');

var blobService = azure.createBlobService();

blobService.createContainerIfNotExists('test', function(error, result){
  if (error) return console.log(error);

  var stream = blobService.createWriteStreamToBlockBlob('test', 'myblob');  

  // Replace the write operation with this one and it will work:
  // stream.write(new Buffer("abcabc"));
  stream.write('abcabc', 'utf-8');

  stream.on('error', function(){
    console.log('failed'); 
  });

  stream.end(function(){
    console.log('passed'); 
  });

});

I had a quick look at the code, and the problem seems to be in the chunked stream implementation. The .write method accepts an encoding and passes it to a helper function

this._buildChunk(chunk, encoding);

but that function does not declare or use the argument at all. I imagine this is a problem for other services too, not just blob. Ideally the chunkedStream module could be rewritten to define streams as in

https://nodejs.org/api/stream.html#stream_class_stream_writable_1

in which case the decodeString option (which defaults to true) provides the desired behavior.

Best Regards.
