
Microsoft Azure Storage SDK for .NET (Deprecated)

If you would like to access our latest .NET SDK, please refer to the Storage SDK v12 for .NET link in the table below. If you would like more information on Azure's burgeoning effort to coordinate the development of the SDKs across services, of which this change is a part, please refer to this article.

We will continue to respond to issues here, but prefer that you post them on the v12 repo. Thank you for your patience. We look forward to continuing to work together with you.

SDK Name Version Description NuGet/API Reference Links
Blob Storage SDK v12 for .NET v12.0.0 The next generation Blob Storage SDK. Supports sync and async IO. NuGet - Reference
File Storage SDK v12 for .NET 12.0.0-preview.5 The next generation File Storage SDK. Supports sync and async IO. NuGet - Reference
Queue Storage SDK v12 for .NET v12.0.0 The next generation Queue Storage SDK. Supports sync and async IO. NuGet - Reference
Data Lake Storage SDK v12 for .NET 12.0.0-preview.6 The next generation Data Lake Storage SDK. Supports sync and async IO. NuGet

For more details, please visit the proper location for each repo.

Support Statement

  • We will be making only fixes related to data integrity and security for 11.x.
  • We will not be adding new storage service version support for this SDK.
  • We will not be backporting fixes and features added to the current version to the versions in this repo.
  • We will not be making any changes to the performance characteristics of this SDK.

We have engineered a highly performant and scalable SDK with our V12 releases. We encourage all our customers to give it a try.

Microsoft Azure Storage SDK for .NET (11.2.3)

Server Version: 2019-07-07

The Microsoft Azure Storage SDK for .NET allows you to build Azure applications that take advantage of scalable cloud computing resources.

This repository contains the open source subset of the .NET SDK. For documentation of the complete Azure SDK, please see the Microsoft Azure .NET Developer Center.

Note: As of 10.0.0, the namespace has changed to Microsoft.Azure.Storage.Common, .Blob, .File, and .Queue. This is required for some SxS scenarios.

Note: As of 9.4.0, the Table service is not supported by this library.
Table support is being provided by CosmosDB.

Features

  • Blobs (Change Log)
    • Create/Read/Update/Delete Blobs
  • Files (Change Log)
    • Create/Update/Delete Directories
    • Create/Read/Update/Delete Files
  • Queues (Change Log)
    • Create/Delete Queues
    • Insert/Peek Queue Messages
    • Advanced Queue Operations
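The blob operations listed above can be sketched with the v11 client. This is a minimal sketch, not an official sample; the connection string, container name, and blob name are placeholders you would replace with your own.

```csharp
// Minimal sketch of blob create/read/delete with the v11 client.
// The connection string, container name, and blob name are placeholders.
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Blob;

class BlobCrudSample
{
    static async Task Main()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse("<your-connection-string>");
        CloudBlobClient client = account.CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference("samples");

        await container.CreateIfNotExistsAsync();

        // Create/Update
        CloudBlockBlob blob = container.GetBlockBlobReference("hello.txt");
        await blob.UploadTextAsync("Hello, Azure Storage!");

        // Read
        string text = await blob.DownloadTextAsync();
        Console.WriteLine(text);

        // Delete
        await blob.DeleteIfExistsAsync();
    }
}
```

The File and Queue packages follow the same pattern (`CreateCloudFileClient`/`CreateCloudQueueClient` and the corresponding reference objects).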

Getting Started

The complete Microsoft Azure SDK can be downloaded from the Microsoft Azure Downloads Page and ships with support for building deployment packages, integrating with tooling, rich command line tooling, and more.

Please review Get started with Azure Storage if you are not familiar with Azure Storage.

For the best development experience, developers should use the official Microsoft NuGet packages for libraries. NuGet packages are regularly updated with new functionality and hotfixes.

Target Frameworks

  • .NET Framework 4.5.2: As of September 2018, Storage Client Libraries for .NET supports primarily the desktop .NET Framework 4.5.2 release and above.
  • Netstandard1.3: Storage Client Libraries for .NET are available to support Netstandard application development including Xamarin/UWP applications.
  • Netstandard2.0: Storage Client Libraries for .NET are available to support Netstandard2.0 application development including Xamarin/UWP applications.

Requirements

  • Microsoft Azure Subscription: To call Microsoft Azure services, you need to first create an account. Sign up for a free trial or use your MSDN subscriber benefits.
  • Hosting: To host your .NET code in Microsoft Azure, you additionally need to download the full Microsoft Azure SDK for .NET - which includes packaging, emulation, and deployment tools, or use Microsoft Azure Web Sites to deploy ASP.NET web applications.

Versioning Information

Use with the Azure Storage Emulator

  • The Client Libraries use a particular Storage Service version. In order to use the Storage Client Libraries with the Storage Emulator, a corresponding minimum version of the Azure Storage Emulator must be used. Older versions of the Storage Emulator do not have the necessary code to successfully respond to new requests.
  • Currently, the minimum version of the Azure Storage Emulator needed for this library is 5.3. If you encounter a VersionNotSupportedByEmulator (400 Bad Request) error, please update the Storage Emulator.

Download & Install

The Storage Client Libraries ship with the Microsoft Azure SDK for .NET and also on NuGet. You'll find the latest version and hotfixes on NuGet via the Microsoft.Azure.Storage.Blob, Microsoft.Azure.Storage.File, Microsoft.Azure.Storage.Queue, and Microsoft.Azure.Storage.Common packages.

Via Git

To get the source code of the SDK via Git, just type:

git clone git://github.com/Azure/azure-storage-net.git
cd azure-storage-net

Via NuGet

To get the binaries of this library as distributed by Microsoft, ready for use within your project, you can have them installed by the .NET package manager: Blob, File, Queue.

Please note that the minimum NuGet client version requirement has been updated to 2.12 in order to support multiple .NET Standard targets in the NuGet package.

Install-Package Microsoft.Azure.Storage.Blob
Install-Package Microsoft.Azure.Storage.File
Install-Package Microsoft.Azure.Storage.Queue

The Microsoft.Azure.Storage.Common package is installed automatically by NuGet as a dependency of the packages above.

Dependencies

Newtonsoft Json

The libraries depend on Newtonsoft.Json, which can be downloaded directly or referenced by your code project through NuGet.

Key Vault

The client-side encryption support depends on the KeyVault.Core package, which can be downloaded directly or referenced by your code project through NuGet.

Test Dependencies

FiddlerCore

FiddlerCore is required by:

  • Test\FaultInjection\HttpMangler
  • Test\FaultInjection\AzureStoreMangler
  • Microsoft.Azure.Storage.Test.NetFx
  • Microsoft.Azure.Storage.Test.NetCore2

This dependency is not included and must be downloaded from Telerik.

Once obtained:

  • Copy FiddlerCore.dll to Test\FaultInjection\Dependencies\DotNet2
  • Copy FiddlerCore4.dll to Test\FaultInjection\Dependencies\DotNet4

Key Vault

Tests for the client-side encryption support also depend on KeyVault.Extensions, which can be downloaded directly or referenced by your code project through NuGet.

ActiveDirectory

OAuth testing requires the ActiveDirectory identity model also available via NuGet:

Code Samples

How-Tos focused around accomplishing specific tasks are available on the Microsoft Azure .NET Developer Center.

Need Help?

If you have trouble with the provided code, be sure to check out the Azure Community Support page or ask on StackOverflow.

Collaborate & Contribute

We gladly accept community contributions.

  • Issues: Please report bugs using the Issues section of GitHub
  • Forums: Interact with the development teams on StackOverflow or the Microsoft Azure Forums
  • Source Code Contributions: Please see CONTRIBUTING.md for instructions on how to contribute code.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

For general suggestions about Microsoft Azure please use our UserVoice forum.

Learn More


azure-storage-net's Issues

IfNoneMatchETag doesn't work when a LeaseID is specified

Let's say the blob "test.txt" exists in a container and the variable blob is a CloudBlockBlob object that references that blob. I'd expect this code to fail:

string leaseId = blob.AcquireLease(TimeSpan.FromMinutes(1), null);
blob.UploadFromByteArray(new byte[0], 0, 0, new AccessCondition() { IfNoneMatchETag = "*", LeaseId = leaseId });

Surprisingly (at least to me) it succeeds. If I remove the AcquireLease and the LeaseId condition from the UploadFromByteArray method, it fails as expected.

Is this the expected behavior? Even if I have a lease on the blob, I think the request should fail if I require an ETag match (or non-match).

DELETE requests fail on Mono env

I am using client v3.0.3.0 on Mac OS X with Mono. All DELETE requests (delete container, CloudBlockBlob.Delete, CloudPageBlob.Delete) are failing with HTTP 403 Forbidden:

Microsoft.WindowsAzure.Storage.StorageException: The remote server returned an error: (403) Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.. ---> System.Exception: The remote server returned an error: (403) Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature..
at System.Net.HttpWebRequest.CheckFinalStatus (System.Net.WebAsyncResult result) [0x0030c] in /private/tmp/source/bockbuild-mono-3.2.6/profiles/mono-mac-xamarin/build-root/mono-3.2.6/mcs/class/System/System.Net/HttpWebRequest.cs:1606

Seems like a signing issue. Repro is rather easy:

var account = new CloudStorageAccount (new Microsoft.WindowsAzure.Storage.Auth.StorageCredentials (STORAGE_ACCT, STORAGE_KEY), true);
var client = account.CreateCloudBlobClient ();
var container = client.GetContainerReference (Guid.NewGuid().ToString());
container.Delete (); // throws exception

It was also the same on v3.0.2.x, I just upgraded today to see if it's a known issue. Same version works fine on Windows. And please note, it's just DELETE operations, all others work fine.

Here's a dump of RequestEventArgs.Request.Header.ToString() caught in OperationContext.ResponseReceived event on both platforms,

OS X:

{User-Agent: WA-Storage/3.0.3 (.NET CLR 4.0.30319.17020; Unix 13.0.0.0)
x-ms-version: 2013-08-15
x-ms-client-request-id: b90b3f36-1f88-4150-a71c-20f1502a8e96
x-ms-date: Wed, 12 Feb 2014 06:27:20 GMT
Authorization: SharedKey f00f00f00:x+0U7Z9ggFl3MHuYfvrOR3wgieFLhQr/5j+sjaSxBnc=
Content-Length: 0
Connection: keep-alive
Host: f00f00f00.blob.core.windows.net

}

Windows:

{User-Agent: WA-Storage/3.0.3 (.NET CLR 4.0.30319.34003; Win32NT 6.2.9200.0)
x-ms-version: 2013-08-15
x-ms-client-request-id: e34b16cc-2921-4493-888c-307994d532d5
x-ms-date: Wed, 12 Feb 2014 06:29:47 GMT
Authorization: SharedKey f00f00f00:I9Gd72nFal6v8IPOYigOQsiJCYq90/VkDTLEsWGYWR8=
Host: f00f00f00.blob.core.windows.net
Connection: Keep-Alive

}

The only difference I see is Content-Length: 0 added on Windows probably while sending the request. I'll try to see if it is missing while signing. I'll be investigating a bit.

DateTime on EntityProperty is internal

This issue was originally opened in the azure-sdk-for-net repo by @sitereactor, who commented the following:

I'm not sure if this is intentional or an oversight, so I thought I'd create an issue for it, as I just ran into a problem trying to read a DateTime object from an EntityProperty.

I'm extending TableEntity with the following (simplified) class impl.:

public class DictionaryTableEntity : TableEntity, IDictionary<string, EntityProperty>
{
    private IDictionary<string, EntityProperty> _properties;

    public DictionaryTableEntity()
    {
        _properties = new Dictionary<string, EntityProperty>();
    }

    public override void ReadEntity(IDictionary<string, EntityProperty> properties, OperationContext operationContext)
    {
        _properties = properties;
    }

    public override IDictionary<string, EntityProperty> WriteEntity(OperationContext operationContext)
    {
        return _properties;
    }

    // Remaining implementation is left out
}

When reading a value from the DictionaryTableEntity like this:
entity["myDate"] I don't have the option for a typed DateTime value like:
entity["myDate"].DateTime.Value because DateTime is internal. So I have to do something like this instead:
DateTime.Parse(entity["myDate"].PropertyAsObject.ToString())

This is using version 2.1.0.3 of the Windows Azure Storage SDK.

Variable naming in DeleteContainerImpl

In CloudBlobContainer.cs:2568 (method DeleteContainerImpl), the variable is named putCmd instead of deleteCmd (the name used in the DeleteBlobImpl methods). Seems totally harmless, just a naming issue.

DownloadTextAsync for files with encoding signature inside the file.

We ran into a problem the other day using DownloadTextAsync.

A little background: We are creating cscfg/cspkg files from Visual Studio, uploading them to blob storage, and using them in an automated deployment process. When saved by Visual Studio, the cscfg files are stored as XML files with an encoding signature (BOM) as the first character of the file.

When using DownloadTextAsync on the blob, it does not remove this signature character but rather includes it in the returned string. If you try doing XDocument.Parse(await blob.DownloadTextAsync()) you will get a parse error because the first char is not '<' but the encoding signature.
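A caller-side workaround sketch (my own helper, not part of the SDK): strip a leading U+FEFF from the string returned by DownloadTextAsync before parsing it.

```csharp
using System;

static class BomHelper
{
    // Strips a leading Unicode byte-order mark (U+FEFF), which
    // DownloadTextAsync can leave at the start of the returned string.
    public static string StripBom(string text)
    {
        if (!string.IsNullOrEmpty(text) && text[0] == '\uFEFF')
            return text.Substring(1);
        return text;
    }
}
```

Usage would then be, for example, XDocument.Parse(BomHelper.StripBom(await blob.DownloadTextAsync())).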

Fix to protect against a Json.NET bug

Hi

There is a bug in Json.NET that the Windows Azure Storage Client 3.0 is exposed to. Basically if the storage client is run using Json.NET 5.0.4 or earlier it will throw an error when foreaching over an array.

The fix is really simple. Change this line - https://github.com/WindowsAzure/azure-storage-net/blob/c9d52db3f18f971933111f5ba3f7ce4e79927a73/Lib/ClassLibraryCommon/Table/Protocol/TableOperationHttpResponseParsers.cs#L364 - to this:

JToken dataTable = dataSet["value"];

Removing the cast to JArray will stop your library from using the bad GetEnumerator method.

I'm going to fix this bug in Json.NET 6.0. At some point in the future when you upgrade to it you can choose to revert this change if you want.

Improve error message when dealing with duplicate partition-/rowkey items in batch

When you try to execute a batch in which there are two items with the exact same partition key and row key, the library throws the following exception:

Additional information: Unexpected response code for operation : 1

This exception does not really disclose what went wrong. Perhaps it would be better to show something like:

Could not insert two items with the same partition- and row key.

The code to reproduce this issue is as follows:

CloudStorageAccount storageAccount = CloudStorageAccount.DevelopmentStorageAccount;
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference("demo");

table.CreateIfNotExists();

TableBatchOperation batch = new TableBatchOperation
{
    TableOperation.InsertOrReplace(new TableEntity { PartitionKey = "NBA", RowKey = "Lakers" }),
    TableOperation.InsertOrReplace(new TableEntity { PartitionKey = "NBA", RowKey = "Lakers" }),
};

table.ExecuteBatch(batch);

ReleaseLeaseAsync cannot be tested in async way

At the moment the tests are written so that they use the method .Wait() to test the async methods: https://github.com/Azure/azure-storage-net/blob/master/Test/ClassLibraryCommon/Blob/LeaseTests.cs. This is all fine and the tests will pass, but as soon as you try to await the method it will deadlock if there is no lease to release. The deadlock happens because the method apparently doesn't handle the errors correctly. This makes testing all the lease-related async methods really difficult in real-life situations.

Mandating of contentMD5 argument inconsistent

In latest storage client (v3.0.2.0), there's inconsistency in the analogous CloudBlockBlob.PutBlock and CloudPageBlob.WritePages method signatures per contentMD5 argument.

class CloudBlockBlob:

  • public void PutBlock (string blockId, Stream blockData, string contentMD5, AccessCondition accessCondition = null, BlobRequestOptions options = null, OperationContext operationContext = null)
  • public Task PutBlockAsync (string blockId, Stream blockData, string contentMD5, AccessCondition accessCondition, BlobRequestOptions options, OperationContext operationContext, CancellationToken cancellationToken)
  • public Task PutBlockAsync (string blockId, Stream blockData, string contentMD5)
  • public Task PutBlockAsync (string blockId, Stream blockData, string contentMD5, CancellationToken cancellationToken)
  • public Task PutBlockAsync (string blockId, Stream blockData, string contentMD5, AccessCondition accessCondition, BlobRequestOptions options, OperationContext operationContext)

contentMD5 parameter is always required.


class CloudPageBlob:

  • public void WritePages (Stream pageData, long startOffset, string contentMD5 = null, AccessCondition accessCondition = null, BlobRequestOptions options = null, OperationContext operationContext = null)
  • public Task WritePagesAsync (Stream pageData, long startOffset, string contentMD5, AccessCondition accessCondition, BlobRequestOptions options, OperationContext operationContext, CancellationToken cancellationToken)
  • public Task WritePagesAsync (Stream pageData, long startOffset, string contentMD5, AccessCondition accessCondition, BlobRequestOptions options, OperationContext operationContext)
  • public Task WritePagesAsync (Stream pageData, long startOffset, string contentMD5)
  • public Task WritePagesAsync (Stream pageData, long startOffset, string contentMD5, CancellationToken cancellationToken)


contentMD5 parameter is optional on one overload, on the others always required.


I believe those two methods are analogous to each other therefore should have consistent policy towards contentMD5.

In fact, this argument is always optional in the REST API, so why force users to pass a null/empty string in the client library?

Suggestion: ListBlobsAsync/ListContainersAsync

Right now, the async equivalents of ListBlobs/ListContainers are ListBlobsSegmentedAsync/ListContainersSegmentedAsync. These segmented methods need to be used in a while loop that passes the ContinuationToken between HTTP requests to get all blobs (just like the ListBlobs method does).

Many people fall into this pitfall and think the segmented methods are direct equivalents of the sync methods, and therefore only get the first segment (e.g. 5,000 results) from the REST API, which usually introduces a bug later on. I have seen many people using them improperly like this.

Therefore adding helper methods ListBlobsAsync/ListContainersAsync would certainly help people to find async equivalent of those sync methods and they wouldn't need to go back and forth between MSDN docs & VS.
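Until such helpers exist, the correct pattern is a do/while loop over the continuation token. A minimal sketch with the v11 blob client (class and method names as in the v11 API; not an official sample):

```csharp
// Sketch of the segmented-listing loop the issue describes:
// keep calling ListBlobsSegmentedAsync until the continuation token is null.
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Storage.Blob;

static class BlobListing
{
    public static async Task<List<IListBlobItem>> ListAllBlobsAsync(CloudBlobContainer container)
    {
        var results = new List<IListBlobItem>();
        BlobContinuationToken token = null;
        do
        {
            // Each call returns at most one segment (up to 5,000 items).
            BlobResultSegment segment = await container.ListBlobsSegmentedAsync(token);
            results.AddRange(segment.Results);
            token = segment.ContinuationToken;
        } while (token != null); // a null token means there are no more segments
        return results;
    }
}
```

Stopping after the first call, without checking the token, is exactly the bug described above.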

MultiBufferMemoryStream objects passed as arguments are disposed by the library

The library executor automatically disposes a MultiBufferMemoryStream instance passed to any blob API call, for example CloudBlockBlob.UploadFromStreamAsync. This logic causes undefined behavior if the stream instance is reused after the library call ends.

Call chain that leads to the stream disposal:

Microsoft.WindowsAzure.Storage.dll!Microsoft.WindowsAzure.Storage.Core.MultiBufferMemoryStream.Dispose(bool disposing)
mscorlib.dll!System.IO.Stream.Close()
Microsoft.WindowsAzure.Storage.dll!Microsoft.WindowsAzure.Storage.Core.Executor.ExecutionState<Microsoft.WindowsAzure.Storage.Core.NullType>.CheckDisposeSendStream()
Microsoft.WindowsAzure.Storage.dll!Microsoft.WindowsAzure.Storage.Core.Executor.ExecutionState<Microsoft.WindowsAzure.Storage.Core.NullType>.Dispose(bool disposing)
Microsoft.WindowsAzure.Storage.dll!Microsoft.WindowsAzure.Storage.Core.Util.StorageCommandAsyncResult.Dispose()
Microsoft.WindowsAzure.Storage.dll!Microsoft.WindowsAzure.Storage.Core.Util.StorageCommandAsyncResult.End()
Microsoft.WindowsAzure.Storage.dll!Microsoft.WindowsAzure.Storage.Core.Executor.Executor.EndExecuteAsync<Microsoft.WindowsAzure.Storage.Core.NullType>(System.IAsyncResult result)
Microsoft.WindowsAzure.Storage.dll!Microsoft.WindowsAzure.Storage.Blob.CloudBlockBlob.UploadFromStreamHandler.AnonymousMethod__14(System.IAsyncResult ar)

GenerateFilterCondition and accents

Microsoft.WindowsAzure.Storage.Core.Auth.SharedAccessSignatureHelper

We generated a SAS for Table Access and could not get Access when using partition and rowKeys in the signature in conjunction with the CloudTable(uri) constructor.

        var tableUri = table.Uri.AbsoluteUri;
        var sharedAccessSignature = table.GetSharedAccessSignature(policy, null, partitionKey, rowKey, partitionKey, rowKey);

        var cloudTable = new CloudTable(new Uri(tableUri + sharedAccessSignature));

        var customer = GetCustomerToInsert();
        cloudTable.Execute(TableOperation.InsertOrReplace(customer));

As we used start and end row and partition keys in the signature and used the Uri parsing constructor of CloudTable we lost part of the signature parameters resulting in an invalid signature.

This code works:

        var tableUri = table.Uri.AbsoluteUri;
        var sharedAccessSignature = table.GetSharedAccessSignature(policy, null, partitionKey, rowKey, partitionKey, rowKey);

        var storageCredentials = new StorageCredentials(sharedAccessSignature);
        var cloudTable = new CloudTable(new Uri(tableUri), storageCredentials);

        var customer = GetCustomerToInsert();
        cloudTable.Execute(TableOperation.InsertOrReplace(customer));

Thanks to the source code we found out that SharedAccessSignatureHelper in the ParseQuery method does not use the start and end keys from the signature to build the StorageCredentials.

How does the call lose the data?

CloudTable:

public CloudTable(Uri tableAddress) : this(tableAddress, null /* credentials */)

public CloudTable(Uri tableAbsoluteUri, StorageCredentials credentials) : this(new StorageUri(tableAbsoluteUri), credentials)

public CloudTable(StorageUri tableAddress, StorageCredentials credentials)
{
    this.ParseQueryAndVerify(tableAddress, credentials);
}

private void ParseQueryAndVerify(StorageUri address, StorageCredentials credentials)
{
    ...
    this.StorageUri = NavigationHelper.ParseQueueTableQueryAndVerify(address, out parsedCredentials);
    ...
}

NavigationHelper:

internal static StorageUri ParseQueueTableQueryAndVerify(StorageUri address, out StorageCredentials parsedCredentials)
{
    ...
    return new StorageUri(
        ParseQueueTableQueryAndVerify(address.PrimaryUri, out parsedCredentials),
    ...
}

private static Uri ParseQueueTableQueryAndVerify(Uri address, out StorageCredentials parsedCredentials)
{
    ...
    parsedCredentials = SharedAccessSignatureHelper.ParseQuery(queryParameters, false);
    ...
}

SharedAccessSignatureHelper:

    internal static StorageCredentials ParseQuery(IDictionary<string, string> queryParameters, bool mandatorySignedResource)

This method only knows:
string signature = null;
string signedStart = null;
string signedExpiry = null;
string signedResource = null;
string signedPermissions = null;
string signedIdentifier = null;
string signedVersion = null;

Not compatible with WP8.1

Pulled the nuGet package and spent a lot of time writing the code to use it. Everything went well and works great!

Then I went to submit for certification:

Supported APIs

  • Error Found: The supported APIs test detected the following errors:
    • This API is not supported for this application type - Api=CryptAcquireContextW. Module=advapi32.dll. File=Microsoft.WindowsAzure.Storage.dll.
    • This API is not supported for this application type - Api=CryptCreateHash. Module=advapi32.dll. File=Microsoft.WindowsAzure.Storage.dll.
    • This API is not supported for this application type - Api=CryptDestroyHash. Module=advapi32.dll. File=Microsoft.WindowsAzure.Storage.dll.
    • This API is not supported for this application type - Api=CryptGetHashParam. Module=advapi32.dll. File=Microsoft.WindowsAzure.Storage.dll.
    • This API is not supported for this application type - Api=CryptHashData. Module=advapi32.dll. File=Microsoft.WindowsAzure.Storage.dll.
    • This API is not supported for this application type - Api=CryptReleaseContext. Module=advapi32.dll. File=Microsoft.WindowsAzure.Storage.dll.
  • Impact if not fixed: Using an API that is not part of the Windows SDK for Windows Phone Store apps violates the Windows Phone Store certification requirements.
  • How to fix: Review the error messages to identify the API that is not part of the Windows SDK for Windows Phone Store apps. Please note, C++ apps that are built in a debug configuration will fail this test even if they only use APIs from the Windows SDK for Windows Phone Store apps.

When can we expect the ability to submit apps to the store that utilize this project??

PCL support

Hi!

I am really missing PCL support! It would be great if you include it.

Any plans on making a PCL version?

Is a version of WindowsAzure.Storage for PCL projects (not just universal apps) on the roadmap? This way it would be super easy to use in a MVVM context where you have a PCL for all model code which could be shared with Universal apps, Xamarin.Android, Xamarin.iOS and other platforms.

Can you please add the Storage Emulator to this repo?

It seems like every time you guys push out a new release (YAY!) it 400s the current version of the Storage Emulator (boo!).

It seems to me that if you rolled the emulator into this solution, not only would it simplify keeping the emulator up to date with the latest and greatest (by force of failed tests, yeah, but still...), but it would also allow us users easy and quick access to the updated emulator, making it super easy to get up and running after an update.

I'm writing this now prior to my tracking down the latest version of the storage emulator. Experience has taught me that I may end up chucking my laptop across the room in frustration trying to track it down and install it correctly.

GetSharedAccessSignature fails for containers with trailing slash

Moving this from Azure/azure-sdk-for-net#131 as suggested by stankovski. I still have the same problem. Also see http://stackoverflow.com/questions/13456606/azure-access-denied-on-shared-access-signature-for-storage-2-0.

Given the following code:

var blobClient = account.CreateCloudBlobClient();
var container = blobClient.GetContainerReference(containerName);
CloudBlockBlob _blockblob = container.GetBlockBlobReference(fileName);
var sharedAccessPolicy = new SharedAccessBlobPolicy
{
    SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-10),
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(30),
    Permissions = SharedAccessBlobPermissions.Read
};
var sharedAccessSignature = _blockblob.GetSharedAccessSignature(sharedAccessPolicy);
var link = _blockblob.Uri.AbsoluteUri + sharedAccessSignature;

an AuthenticationFailed error will occur when using the link if the containerName has a trailing slash (which is allowed in all other places).

The reason for this is that the GetCanonicalName method in CloudBlockBlobBase will add a slash to the container name resulting in a double slash. This is then signed and returned in the SAS. The AbsoluteUri however will not add the extra slash and thus the signature will not be valid for the created link.

A change in GetCanonicalName to trim any trailing slash from container name would solve this.
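Until that change lands, a caller-side workaround sketch (a hypothetical helper, not part of the SDK) is to normalize the container name before creating the reference, so the canonical name signed into the SAS matches the AbsoluteUri:

```csharp
using System;

static class ContainerNameHelper
{
    // Trims trailing slashes so the canonical name signed into the SAS
    // matches the AbsoluteUri used to build the final link.
    public static string TrimTrailingSlash(string containerName)
    {
        return containerName == null ? null : containerName.TrimEnd('/');
    }
}
```

Usage in the repro above would be GetContainerReference(ContainerNameHelper.TrimTrailingSlash(containerName)).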

Extraneous comment when writing RequestResult to XML

While serializing a RequestResult using its WriteXML method, I noticed a comment was being written to the stream:

<!--An exception has occurred. For more information please deserialize this message via RequestResult.TranslateFromExceptionMessage.-->

Now take a look at line 240 here:

writer.WriteComment(SR.ExceptionOccurred);

What's the purpose behind this?

Thanks,

Felipe

Please update to OData 5.6.1

Looks like this package is dependent on OData = 5.6.0, not >= 5.6.0, which is a problem when combining with Application Insights, which requires OData >= 5.6.1.

Is it possible to fix this?

Please add server task id to all modify operations response objects

Hello
As far as I can see, none of the operations that modify objects (CRUD excluding the R letter :) ) return a job/task id. For example, in the Azure SDK for .NET most responses have a RequestId (for example OperationResponse), so we can track status. I can see two async versions: one that returns Task and one that returns IAsyncResult. Theoretically they should solve this problem (if they query the server for progress before reporting completion; as far as I can see, for example, create snapshot does return x-ms-request-id). But either way we lose persistence: we cannot save the task to a database so that the main process just stores the task id and another process periodically checks it. In addition, for example with snapshots, the Task becomes completed and the blob is available over the API, but the physical copying process is not finished. So I am not sure that the storage client really checks the job id (or maybe this is an API problem that reports completed before the copy is really done).

Any ideas are welcome.

Metadata not returned from ListQueues

The error is in function ListQueuesImpl at row 165:

List<CloudQueue> queuesList = listQueuesResponse.Queues.Select(item => new CloudQueue(item.Name, this)).ToList();

When the CloudQueue object is created, the Metadata information is lost. It exists in listQueuesResponse.Queues but is not moved over to the CloudQueue object.

Throwing exception if no permissions are provided when creating a SAS

This is just an improvement proposal/question.

According to this doc the "sp" (signed permissions) query parameter is required in all valid signatures.

The code in SharedAccessSignatureHelper::GetSignature does not add the "sp" query parameter if the permissions string is empty (which could occur if the flags are incorrectly passed in, believe me :)), which generates what I understand is an invalid SAS.

string permissions = SharedAccessBlobPolicy.PermissionsToString(policy.Permissions);
if (!string.IsNullOrEmpty(permissions))
{
    AddEscapedIfNotNull(builder, Constants.QueryConstants.SignedPermissions, permissions);
}

Should an exception be thrown notifying that no permissions were specified?
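Until the library throws, callers can guard for this themselves. A defensive sketch, assuming a `CloudBlockBlob` named `blob`:

```csharp
var policy = new SharedAccessBlobPolicy
{
    // Forgetting to set Permissions currently yields a SAS without "sp".
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
};

if (policy.Permissions == SharedAccessBlobPermissions.None)
{
    throw new ArgumentException("A SAS requires at least one permission ('sp').");
}

string sas = blob.GetSharedAccessSignature(policy);
```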

Bug: Access to disposed object in AsyncStreamCopier

My app crashed during loading/uploading blobs.
Here is a Stack Trace:

There are no context policies.

System.NullReferenceException: Object reference not set to an instance of an object.
at Microsoft.WindowsAzure.Storage.Core.Util.AsyncStreamCopier`1.ForceAbort(AsyncStreamCopier`1 copier, Boolean timedOut) in AsyncStreamCopier.cs: line 317
at Microsoft.WindowsAzure.Storage.Core.Util.AsyncStreamCopier`1.MaximumCopyTimeCallback(Object copier, Boolean timedOut) in AsyncStreamCopier.cs: line 305
at System.Threading._ThreadPoolWaitOrTimerCallback.WaitOrTimerCallback_Context(Object state, Boolean timedOut)
at System.Threading._ThreadPoolWaitOrTimerCallback.WaitOrTimerCallback_Context_t(Object state)
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading._ThreadPoolWaitOrTimerCallback.PerformWaitOrTimerCallback(Object state, Boolean timedOut)

SAS urls generated with version 2.1 incompatible with 3.0

Trying to use (in version 3.0.x) a SAS that was generated using the 2.1 version for a cloud table fails with the following error:

Microsoft.WindowsAzure.Storage.StorageException: The remote server returned an error: (415) JSON format is not supported.. ---> System.Net.WebException: The remote server returned an error: (415) JSON format is not supported..
   at System.Net.HttpWebRequest.GetResponse()
   at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ExecuteSync[T](RESTCommand`1 cmd, IRetryPolicy policy, OperationContext operationContext)
   --- End of inner exception stack trace ---
   at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ExecuteSync[T](RESTCommand`1 cmd, IRetryPolicy policy, OperationContext operationContext)
   at Microsoft.WindowsAzure.Storage.Table.TableOperation.Execute(CloudTableClient client, CloudTable table, TableRequestOptions requestOptions, OperationContext operationContext)
   at Microsoft.WindowsAzure.Storage.Table.CloudTable.Exists(Boolean primaryOnly, TableRequestOptions requestOptions, OperationContext operationContext)
   at Microsoft.WindowsAzure.Storage.Table.CloudTable.Exists(TableRequestOptions requestOptions, OperationContext operationContext)

FilterString confusing behavior on TableQuery depending on creation path

If you create a TableQuery<T> using the factory method (for LINQ) of CloudTable.CreateQuery<T>(), and then set a FilterString on the resulting object, the FilterString is ignored by ExecuteQuery on the TableQuery<T>. However, if you set the FilterString on a TableQuery<T> that is created via the default constructor of the TableQuery<T> object, it works as expected.

There seem to be strange behavioral differences based on whether the private queryProvider field is set in the TableQuery<T> object. While I'd personally have preferred that the LINQ functionality wasn't commingled with FilterString, at the very least FilterString should throw if queryProvider is set, or TableQuery.Execute etc. should throw when both FilterString and queryProvider are set.
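The two creation paths side by side (a sketch; `MyEntity` is a hypothetical `TableEntity` subclass and `table` a `CloudTable`):

```csharp
// Path 1: LINQ factory. The FilterString assignment is silently ignored.
TableQuery<MyEntity> linqQuery = table.CreateQuery<MyEntity>();
linqQuery.FilterString = "PartitionKey eq 'derp'";   // no effect on ExecuteQuery

// Path 2: default constructor. The FilterString is honored.
var plainQuery = new TableQuery<MyEntity>
{
    FilterString = "PartitionKey eq 'derp'"          // works as expected
};
var results = table.ExecuteQuery(plainQuery);
```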

Handle WebExceptionStatus.RequestCanceled in EndGetResponse

You currently handle WebException.Status == WebExceptionStatus.Timeout to generate a TimeoutException in the EndGetResponse method. Unfortunately, in some cases you can also get WebExceptionStatus.RequestCanceled, and because of this the original WebException escapes when a TimeoutException is expected.

Note, I do see a lock around State.ReqTimedOut which is supposed to act as a memory barrier in the case of request.Abort(), but the fact is I see the following exception in my log, originating from storage client library 3.0.3: "Request failed with unhanded exception of type 'WebException' and message: 'The request was aborted: The request was canceled.'."

So, can we add a check for "WebExceptionStatus.RequestCanceled" additional to "WebExceptionStatus.Timeout" inside EndGetResponse?
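The idea, sketched as a standalone helper rather than the library's actual code (hypothetical method; the real change would live in the Executor's exception translation):

```csharp
using System;
using System.Net;

static class TimeoutTranslation
{
    // If the client-side timer fired, both Timeout and RequestCanceled
    // (the status produced by request.Abort()) should surface as TimeoutException.
    public static Exception Translate(WebException e, bool requestTimedOut)
    {
        if (requestTimedOut &&
            (e.Status == WebExceptionStatus.Timeout ||
             e.Status == WebExceptionStatus.RequestCanceled))
        {
            return new TimeoutException("The client-side timeout elapsed.", e);
        }

        return e;
    }
}
```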

Partition Keys or Row Keys that contain char.MaxValue fail

With the switch to using Json as the default PayloadFormat, a bug has been exposed in OData that causes partition keys or row keys with char.MaxValue in the string to fail.

The workaround I've used is to set the PayloadFormat back to AtomPub but I'd like to use the Json format.

Feel free to close this bug if it's not useful, I just wasn't seeing any action on the issue I posted in the OData Codeplex repo.
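The workaround mentioned above, as a sketch (assuming a `CloudStorageAccount` named `account`; in 3.x the payload format lives on the client's default request options):

```csharp
CloudTableClient tableClient = account.CreateCloudTableClient();

// Fall back to AtomPub so keys containing char.MaxValue round-trip correctly.
tableClient.DefaultRequestOptions.PayloadFormat = TablePayloadFormat.AtomPub;
```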

Cache ability for table storage queries

Hi,

I am interested in designing an optional caching system for the table storage client so queries (that are explicitly marked "cachable") can use a local (possibly distributed) cache of Json responses instead of executing the query against the table storage endpoint.

Can the developers offer any insights as to where I should best look at implementing this feature?

implement INotifyPropertyChanged in TableEntity

Hi,

I'm currently developing a WinRT app using MVVM. I've tried to bind my storage classes (which derive from TableEntity) to the UI. This works fine as long as nothing changes in the objects, but if I change one of the class properties, the UI obviously doesn't update.
I tried to implement INotifyPropertyChanged on my classes, but deriving from TableEntity seems to get in the way.

I think it would be a great addition to have TableEntity implement INotifyPropertyChanged and expose an OnPropertyChanged method that can be called from property setters. Something like the following:

    private string _name;       
    [DataMember(Name = "Name")]
    public string Name 
    {
        get
        {
            return _name;
        } 

        set
        {
            if (_name == value)
                return;
            _name = value;
            OnPropertyChanged("Name");
        } 
    }

This would allow developers to use TableEntity objects directly in their MVVM apps and on other use scenarios that call for property change notifications.

I believe that this is not such a huge or complicated change for the team to implement. Nevertheless I'm willing to contribute with the necessary code if that is worth the effort of going through an external contribution.
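In the meantime, a base class along the proposed lines compiles against the desktop SDK (a sketch; the WinRT surface may differ):

```csharp
public class ObservableTableEntity : TableEntity, INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}

public class Person : ObservableTableEntity
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set
        {
            if (_name == value) return;
            _name = value;
            OnPropertyChanged("Name");
        }
    }
}
```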

Looking forward to your feedback!
Cheers,

CloudBlockBlob.DownloadText() handles UTF8 BOM incorrectly

CloudBlockBlob.DownloadText() behaves differently from File.ReadAllText with respect to the UTF-8 preamble/BOM.

Repro:
Create an XML File in Visual Studio and upload it to a Cloud Blob container. The file will begin with a BOM (EF BB BF). Then download it using CloudBlockBlob.DownloadText() and pass the resulting string to XDocument.Parse. The parser will fail with XMLException - "Data at the root level is invalid. Line 1, position 1.".
Failing code:

var storageAccount = CloudStorageAccount.Parse("connectionString");
var blobClient = storageAccount.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("MyContainer");
var blob = container.GetBlockBlobReference("my.xml");
var s = blob.DownloadText();
var x = XDocument.Parse(s);

A workaround by Dave Cluderay at http://stackoverflow.com/questions/2111586/parsing-xml-string-to-an-xml-document-fails-if-the-string-begins-with-xml is to pass the downloaded string back through a StreamReader.
Working code:

var storageAccount = CloudStorageAccount.Parse("connectionString");
var blobClient = storageAccount.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("MyContainer");
var blob = container.GetBlockBlobReference("my.xml");
var s = blob.DownloadText();
using (var memoryStream = new MemoryStream(Encoding.UTF8.GetBytes(s)))
{
    using (var streamReader = new StreamReader(memoryStream))
    {
        var x = XDocument.Load(streamReader);
    }
}
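A simpler variant of the same workaround avoids round-tripping through a string at all: download to a stream and let StreamReader consume the BOM (sketch; `blob` as above):

```csharp
using (var memoryStream = new MemoryStream())
{
    blob.DownloadToStream(memoryStream);
    memoryStream.Position = 0;
    using (var streamReader = new StreamReader(memoryStream))  // detects and skips the BOM
    {
        var x = XDocument.Load(streamReader);
    }
}
```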

Moved from Azure/azure-sdk-for-net#626

Allow Upsert operations to set EchoContent

Is there any reason why EchoContent can only be set on TableOperations of type Insert? We're primarily using Upsert (to make our data import idempotent) and aren't really interested in the responses to these operations, so we would like to get the performance gain of not transferring and processing the echoed data.
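For reference, the overload that exists today versus what is being asked for (sketch; `entity` is any ITableEntity):

```csharp
// Supported: the Insert factory exposes echoContent.
TableOperation insert = TableOperation.Insert(entity, echoContent: false);

// Not supported: the upsert factories have no echoContent parameter.
TableOperation upsert = TableOperation.InsertOrReplace(entity);
```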

TypeLoadException thrown

Seems that StorageException.GetObjectData needs to be marked [SecurityCritical].

Received this error:

A first chance exception of type 'System.TypeLoadException' occurred in Microsoft.WindowsAzure.Storage.dll

Additional information: Inheritance security rules violated while overriding member: 'Microsoft.WindowsAzure.Storage.StorageException.GetObjectData(System.Runtime.Serialization.SerializationInfo,

at Microsoft.WindowsAzure.Storage.Blob.CloudBlobContainer.CreateIfNotExists(BlobContainerPublicAccessType accessType, BlobRequestOptions requestOptions, OperationContext operationContext)
at Microsoft.WindowsAzure.Storage.Blob.CloudBlobContainer.CreateIfNotExists(BlobRequestOptions requestOptions, OperationContext operationContext)

Received the error on code running in an AppDomain with restricted permissions. .Net doesn't seem to perform this validation by default.
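The fix being suggested, sketched (not the shipped code; the real override carries additional serialization logic):

```csharp
[SecurityCritical]
public override void GetObjectData(SerializationInfo info, StreamingContext context)
{
    // Marking the override SecurityCritical satisfies the inheritance security
    // rules when the base member is itself SecurityCritical.
    base.GetObjectData(info, context);
}
```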

Azure Storage emulator performs case-insensitive comparisons

The Azure Storage emulator (3.0.0.0) performs case-insensitive queries, although Azure Storage performs case-sensitive queries. This can result in bugs where results are found where they should NOT be found.

In my case, ASP.NET Identity expects a user to be uniquely identifiable by user name (ugh, idiots). Since Azure Storage performs case-sensitive searches (example below), you can end up with different users whose user names differ only by case. This is not optimal and should be avoided, for obvious reasons.

GIVEN the following table query (everything not shown is obvious)

var userNameQuery = new TableQuery().Where(
    TableQuery.CombineFilters(
        TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "derp"),
        TableOperators.And,
        TableQuery.GenerateFilterCondition("Name", QueryComparisons.Equal, userName)))
        .Take(1);
return table.ExecuteQuery(userNameQuery).FirstOrDefault();

Azure Storage will perform a case-sensitive search on "Name". However, if you perform the same exact query against the storage emulator, it will perform a case-insensitive search (e.g., search for "moe" you'll get "Moe" back when it should return nothing).
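Until the emulator matches the service, one workaround sketch is to query on a pre-normalized shadow column so both environments behave identically (`NameLowered` is a hypothetical extra property maintained alongside `Name`):

```csharp
string normalized = userName.ToLowerInvariant();

var query = new TableQuery().Where(
    TableQuery.CombineFilters(
        TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "derp"),
        TableOperators.And,
        TableQuery.GenerateFilterCondition("NameLowered", QueryComparisons.Equal, normalized)))
        .Take(1);
```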

Connection String does not support Shared Access Signatures

Here's a connection string which uses a SAS instead of the traditional account name / key authentication:

TableEndpoint=http://....table.core.windows.net/;SharedAccessSignature=?sv=2014-02-14&tn=MyTable&sig=MySig&se=2114-09-28T19%3A28%3A32Z&sp=au; 

This used to work in a previous version of the SDK, but with the latest version it is no longer supported. This is caused by validation code in the CloudStorageAccount class (https://github.com/Azure/azure-storage-net/blob/master/Lib/Common/CloudStorageAccount.cs), which splits each setting on '=' and rejects anything that doesn't yield exactly two parts; the SAS token itself contains '=' characters, so the check fails:

if (splittedNameValue.Length != 2)
{
    error("Settings must be of the form \"name=value\".");
    return null;
}
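A workaround sketch that bypasses connection-string parsing entirely: build the credentials and client directly from the SAS token (`myaccount` is a hypothetical account name):

```csharp
var credentials = new StorageCredentials(
    "?sv=2014-02-14&tn=MyTable&sig=MySig&se=2114-09-28T19%3A28%3A32Z&sp=au");

var tableClient = new CloudTableClient(
    new Uri("http://myaccount.table.core.windows.net/"), credentials);

CloudTable table = tableClient.GetTableReference("MyTable");
```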

README title

Currently readme title is

Windows Azure SDK for .NET

the following sounds more accurate:

Windows Azure Storage SDK for .NET

System.ArgumentOutOfRangeException in Microsoft.WindowsAzure.Storage.Table.CloudTable.EndExecute

Using WindowsAzure.Storage-Preview 3.0.1.0-preview from NuGet. Created a simple Windows Phone 8.0 app. All it does is insert a table entity. This fails on device without a debugger attached. WITH a debugger attached OR in the emulator it works fine.

public partial class MainPage : PhoneApplicationPage
{
    // Constructor
    public MainPage()
    {
        InitializeComponent();

        this.Loaded += MainPage_Loaded;
    }

    async void MainPage_Loaded(object sender, RoutedEventArgs e)
    {
        try
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connection);

            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
            CloudTable table = tableClient.GetTableReference("ratings");
            bool createresult = await table.CreateIfNotExistsAsync();

            Rating rating = new Rating(Guid.NewGuid().ToString(), 1, "[email protected]");
            TableOperation operation = TableOperation.InsertOrReplace(rating);
            var insertresult = await table.ExecuteAsync(operation);
        }
        catch (Exception ex)
        {
            Output.Text += "Failed " + ex.ToString() + "\n";
        }
    }
}

public class Rating : TableEntity
{
    public Rating(string product, int value, string user)
    {
        PartitionKey = product;
        RowKey = user;
        Value = value;
    }
    public Rating() { }
    public int Value { get; set; }
}

This produces the following exception

Microsoft.WindowsAzure.Storage.StorageException: The argument 'offset' is larger than maximum of '3075'
Parameter name: offset ---> System.ArgumentOutOfRangeException: The argument 'offset' is larger than maximum of '3075'
Parameter name: offset
at Microsoft.WindowsAzure.Storage.Core.Util.CommonUtility.AssertInBounds[T](String paramName, T val, T min, T max)
at Microsoft.WindowsAzure.Storage.Core.MultiBufferMemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count)
at System.IO.StreamWriter.Flush(Boolean flushStream, Boolean flushEncoder)
at System.IO.StreamWriter.Flush()
at Microsoft.Data.OData.Json.IndentedTextWriter.Flush()
at Microsoft.Data.OData.Json.JsonWriter.Flush()
at Microsoft.Data.OData.Json.ODataJsonOutputContextBase.Flush()
at Microsoft.Data.OData.JsonLight.ODataJsonLightWriter.FlushSynchronously()
at Microsoft.Data.OData.ODataWriterCore.Flush()
at Microsoft.Data.OData.ODataWriterCore.WriteEnd()
at Microsoft.WindowsAzure.Storage.Table.Protocol.TableOperationHttpWebRequestFactory.WriteOdataEntity(ITableEntity entity, TableOperationType operationType, OperationContext ctx, ODataWriter writer)
at Microsoft.WindowsAzure.Storage.Table.Protocol.TableOperationHttpWebRequestFactory.BuildRequestForTableOperation(Uri uri, UriQueryBuilder builder, IBufferManager bufferManager, Nullable`1 timeout, TableOperation operation, OperationContext ctx, TablePayloadFormat payloadFormat, String accountName)
at Microsoft.WindowsAzure.Storage.Table.TableOperation.<>c__DisplayClassa.<InsertImpl>b__7(Uri uri, UriQueryBuilder builder, Nullable`1 timeout, OperationContext ctx)
at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ProcessStartOfRequest[T](ExecutionState`1 executionState, String startLogMessage)
at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.InitRequest[T](ExecutionState`1 executionState)
--- End of inner exception stack trace ---
at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.EndExecuteAsync[T](IAsyncResult result)
at Microsoft.WindowsAzure.Storage.Table.CloudTable.EndExecute(IAsyncResult asyncResult)
at Microsoft.WindowsAzure.Storage.Core.Util.AsyncExtensions.<>c__DisplayClass11.<CreateCallback>b__0(IAsyncResult ar)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
at AzureStorageTest.MainPage.<MainPage_Loaded>d__0.MoveNext()
Request Information
RequestID:
RequestDate:
StatusMessage:

Change of behaviour and IgnorePropertyAttribute not being honoured.

Since updating to Storage Client 3.1.0.1 from 2.1.0.3, a StorageException "InvalidInput" is thrown for List properties, whereas before they were simply ignored.

Adding the Microsoft.WindowsAzure.Storage.Table IgnorePropertyAttribute has no effect. The problem goes away as soon as I change the offending property to an array.

Reproduction from LinqPad is below, referencing WindowsAzure.Storage 3.1.0.1 nuget package.
The error is produced by the below code, remedied by changing ListProperty to an array.

void Main()
{
        var acc_dev = Microsoft.WindowsAzure.Storage.CloudStorageAccount.Parse("UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://127.0.0.1");
        Test(acc_dev, "testtable");

}

public void Test(Microsoft.WindowsAzure.Storage.CloudStorageAccount toAcc, string table) {

    var toTC = toAcc.CreateCloudTableClient();
    var toT = toTC.GetTableReference(table);
    toT.CreateIfNotExists();

    var toContext = toTC.GetTableServiceContext();
    toContext.Format.UseAtom();

    var fromData = new List<TestClass>();
    fromData.Add(new TestClass(){Foo="x", PartitionKey="foo", RowKey="bar", ListProperty=(new List<string>(){{"Hello"},{"Azure"}})});
    fromData.Dump();

    foreach (var item in fromData.ToList())
    {
        toContext.AddObject(table,item);
        toContext.UpdateObject(item);
    }
    toContext.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);
}

public class TestClass : TableServiceEntity {
        public string Foo {get;set;}
        [IgnoreProperty]
        public List<string> ListProperty {get;set;}
}
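One likely explanation: IgnorePropertyAttribute is honored by the reflection-based TableEntity serializer, not by the TableServiceContext/Atom path used in the repro. A sketch that routes the same data through TableOperation instead (assuming `table` is the CloudTable from the repro; `TestClass2` is a renamed variant of the entity above):

```csharp
public class TestClass2 : TableEntity
{
    public string Foo { get; set; }

    [IgnoreProperty]                        // honored by TableEntity serialization
    public List<string> ListProperty { get; set; }
}

// Insert via TableOperation instead of the DataServices context:
var op = TableOperation.InsertOrReplace(new TestClass2
{
    PartitionKey = "foo",
    RowKey = "bar",
    Foo = "x",
    ListProperty = new List<string> { "Hello", "Azure" }
});
table.Execute(op);
```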
