
Apache Libcloud is a Python library which hides differences between different cloud provider APIs and allows you to manage different cloud resources through a unified and easy to use API.

Home Page: https://libcloud.apache.org

License: Apache License 2.0


libcloud's Introduction

Apache Libcloud - a unified interface for the cloud

Apache Libcloud is a Python library which hides differences between different cloud provider APIs and allows you to manage different cloud resources through a unified and easy to use API.


Code

https://github.com/apache/libcloud

License

Apache 2.0; see LICENSE file

Issues

https://issues.apache.org/jira/projects/LIBCLOUD/issues

Website

https://libcloud.apache.org/

Documentation

https://libcloud.readthedocs.io

Supported Python Versions

Python >= 3.8, PyPy >= 3.8, Python 3.10 + Pyjion (Python 2.7 and Python 3.4 are supported by the v2.8.x release series; the last release which supports Python 3.5 is v3.4.0, the v3.6.x series supports Python 3.6, and the v3.8.x series supports Python 3.7)

Resources you can manage with Libcloud are divided into the following categories:

  • Compute - Cloud Servers and Block Storage - services such as Amazon EC2 and Rackspace Cloud Servers (libcloud.compute.*)
  • Storage - Cloud Object Storage and CDN - services such as Amazon S3 and Rackspace CloudFiles (libcloud.storage.*)
  • Load Balancers - Load Balancers as a Service, LBaaS (libcloud.loadbalancer.*)
  • DNS - DNS as a Service, DNSaaS (libcloud.dns.*)
  • Container - Container virtualization services (libcloud.container.*)

Apache Libcloud is an Apache project, see <http://libcloud.apache.org> for more information.
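
For illustration, obtaining and using a compute driver looks roughly like this (a minimal sketch; the provider, credentials and region below are placeholders):

from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# Placeholder credentials and region; substitute values for your own account.
cls = get_driver(Provider.EC2)
driver = cls('my-access-key-id', 'my-secret-key', region='us-east-1')

for node in driver.list_nodes():
    print(node.name, node.state)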

Documentation

Documentation can be found at <https://libcloud.readthedocs.org>.

Note on Python Version Compatibility

Libcloud supports Python >= 3.8 and PyPy >= 3.8.

  • Support for Python 3.7 has been dropped in the v3.9.0 release. Last release series which supports Python 3.7 is v3.8.x.
  • Support for Python 3.6 has been dropped in v3.7.0 release. Last release series which supports Python 3.6 is v3.6.x.
  • Support for Python 3.5 has been dropped in v3.5.0 release.
  • Last release series which supports Python 3.5 is v3.4.x.
  • Support for Python 2.7 and 3.4 has been dropped in Libcloud v3.0.0 (last release series which support Python 2.7 and Python 3.4 is v2.8.x).

Feedback

Please send feedback to the mailing list at <[email protected]>, or open an issue on the GitHub repository at <https://github.com/apache/libcloud/issues>.

Contributing

For information on how to contribute, please see the Contributing chapter in our documentation <https://libcloud.readthedocs.org/en/latest/development.html#contributing>.

Website

Source code for the website is available at <https://github.com/apache/libcloud-site>.

License

Apache Libcloud is licensed under the Apache 2.0 license. For more information, please see the LICENSE and NOTICE files.

Security

This is a project of the Apache Software Foundation and follows the ASF vulnerability handling process.

Reporting a Vulnerability

To report a new vulnerability you have discovered please follow the ASF vulnerability reporting process.


libcloud's Issues

InvalidCredsError when uploading .xlsm file to Google Storage bucket

Summary

When running on Ubuntu it is not possible to upload an Excel macro-enabled file with the .xlsm suffix to a Google Storage bucket; the upload fails with an InvalidCredsError exception raised after calling upload_object_via_stream().

Detailed Information

I first noticed this in a Django app running on Heroku, and have since reproduced the issue with libcloud only, on standalone Ubuntu 16.04 and 18.04 AMIs. The issue is triggered on Python 2 and Python 3 but does not occur in my OSX dev environment nor in testing on Amazon Linux. It does not reproduce when uploading to S3 and seems specific to GCP.

Steps to reproduce:

Create a bucket in Google Cloud Storage and in Storage Settings add an Access key for your account, under Interoperability.

Install apache-libcloud==2.8.0 via pip

Attempt to upload a test file to the bucket as in the following example

from io import BytesIO
from libcloud.storage.types import Provider
from libcloud.storage.providers import get_driver

cls = get_driver(Provider.GOOGLE_STORAGE)
driver = cls('my-access-key', 'my-secret')
container = driver.get_container(container_name='my-test-bucket')
obj = driver.upload_object_via_stream(iterator=BytesIO(b'hello'), container=container, object_name='test1.xlsm')

This results in the following stack trace

Traceback (most recent call last):
  File "upload.py", line 9, in <module>
    obj = driver.upload_object_via_stream(iterator=BytesIO(b'hello'), container=container, object_name='test1.xlsm')
  File "/home/ubuntu/.local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 698, in upload_object_via_stream
    storage_class=ex_storage_class)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 842, in _put_object
    headers=headers, file_path=file_path, stream=stream)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/libcloud/storage/base.py", line 640, in _upload_object
    response.parse_error()
  File "/home/ubuntu/.local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 123, in parse_error
    raise InvalidCredsError(self.body)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/libcloud/common/base.py", line 293, in body
    return self.response.body
  File "/home/ubuntu/.local/lib/python3.6/site-packages/libcloud/common/base.py", line 286, in response
    self.parse_error()
  File "/home/ubuntu/.local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 123, in parse_error
    raise InvalidCredsError(self.body)
libcloud.common.types.InvalidCredsError: b"<?xml version='1.0' encoding='UTF-8'?><Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method.</Message><StringToSign>PUT\n\napplication/vnd.ms-excel.sheet.macroEnabled.12\nSun, 26 Jan 2020 22:45:12 GMT\nx-goog-storage-class:STANDARD\n/hasler-race-entry-test/test1.xlsm</StringToSign></Error>"

Changing the file extension in the object name to something more well-known such as .xlsx allows the operation to complete just fine, so I know the access credentials are working OK and it seems to be something to do with the mimetype.

I notice that if I log in to the GCP Storage UI and upload a .xlsm file via the browser, it shows as having the type application/vnd.ms-excel.sheet.macroenabled.12; that is, there is a small difference in case. I wonder if this is significant.

object storage range download

It would be nice if libcloud could support range requests to objects; this is supported by Amazon S3 and the boto3 library, for example.

This helps when you store bigger files but keep some kind of local index, so that you can request only the parts you need.
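
For reference, this is how the requested capability looks in boto3 today, using an HTTP Range header (bucket and key names are placeholders):

import boto3

# Fetch only the first kilobyte of the object instead of the whole file.
s3 = boto3.client('s3')
resp = s3.get_object(Bucket='my-bucket', Key='big-file.bin', Range='bytes=0-1023')
first_kilobyte = resp['Body'].read()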

GKE Cannot get a list of clusters/containers/pods

Summary

Cannot get a list of GKE (Google Kubernetes Engine) clusters/containers or pods

Detailed Information

As mentioned in the documentation, I followed the get_driver API to get the container driver for the GKE provider, but got the stack trace below.
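
A sketch of the kind of code that leads to this; the credentials, project and constructor parameters are assumed, not taken from the report:

from libcloud.container.types import Provider
from libcloud.container.providers import get_driver

# Service-account email, key file and project are placeholders.
cls = get_driver(Provider.GKE)
driver = cls('sa-email@my-project.iam.gserviceaccount.com', 'key.json',
             project='my-project')

for container in driver.list_containers():
    print(container.name)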

Traceback (most recent call last):
  File "/home/lkp//list-instances.py", line 146, in <module>
    main(sys.argv[1:])
  File "/home/lkp/list-instances.py", line 51, in main
    list_container_gce()
  File "/home/lkp/list-instances.py", line 37, in list_container_gce
    for cluster in driver.list_containers():
  File "/usr/lib/python2.7/site-packages/libcloud/container/drivers/kubernetes.py", line 163, in list_containers
    ROOT_URL + "v1/pods").object
  File "/usr/lib/python2.7/site-packages/libcloud/container/drivers/gke.py", line 68, in request
    response = super(GKEConnection, self).request(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/libcloud/common/google.py", line 815, in request
    *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/libcloud/common/base.py", line 650, in request
    response = responseCls(**kwargs)
  File "/usr/lib/python2.7/site-packages/libcloud/common/base.py", line 162, in __init__
    self.object = self.parse_body()
  File "/usr/lib/python2.7/site-packages/libcloud/common/google.py", line 278, in parse_body
    raise ResourceNotFoundError(message, self.status, code)
libcloud.common.google.ResourceNotFoundError: u'<!DOCTYPE html>\n<html lang=en>\n  <meta charset=utf-8>\n  <meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">\n  <title>Error 404 (Not Found)!!1</title>\n  <style>\n    *{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}\n  </style>\n  <a href=//www.google.com/><span id=logo aria-label=Google></span></a>\n  <p><b>404.</b> <ins>That\u2019s an error.</ins>\n  <p>The requested URL <code>/v1/projects/absolute-point-196613/api/v1/pods</code> was not found on this server.  <ins>That\u2019s all we know.</ins>'

Libcloud version: 2.8.0, 3.0.0-rc1
Python version: 2.7, 3.7
OS: Fedora 30

Inconsistent uuid generation in azure_arm

Summary

Azure returns inconsistent casing of the resource group name in the id string, resulting in different UUIDs for the same node (depending on what calls you make).

Detailed Information

When a node is created, the resource group name in the id string is lower case. In subsequent calls, like list_nodes(), Azure for some reason returns the resource group name in upper case. This results in a different uuid from the one produced when create_node() is called, and it also breaks wait_until_running(). Forcing the id string to lower case in _to_node() seems to resolve this issue.

I made the change here: foospidy@2682d12

I'd be happy to submit a PR.
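
For illustration, the workaround described above amounts to normalizing the id casing before the uuid is derived (a sketch, not the linked commit verbatim):

def _normalize_resource_id(resource_id):
    # Azure resource ids are case-insensitive, so lower-casing them keeps the
    # uuid derived from the id stable across create_node() and list_nodes().
    return resource_id.lower()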

list_nodes on GCE Projects Only Returns 500 Results

Summary

When running list_nodes against a GCE environment that has greater than 500 instances, only the top 500 results will be returned.

Detailed Information

Using libcloud 2.6.0, running list_nodes using the GCE Provider will only return the top 500 results. This is similar to issue #1203 where the list_nodes function is running self.connection.request rather than self.connection.request_aggregated_items.

I believe changing lines 19-23 of the list_nodes function to something like this will fix the issue:

if zone is None:
    response = self.connection.request_aggregated_items('instances')
else:
    request = '/zones/%s/instances' % (zone.name)
    response = self.connection.request(request, method='GET').object

list_nodes for softlayer takes forever/doesn't complete

Hi, I'm trying to do this:

drivers = [ Softlayer(sys.argv[1], sys.argv[2]) ]

print "drivers"
nodes = [ driver.list_nodes() for driver in drivers ]
print "nodes"

print nodes

It hangs for a long time then has this error:

Traceback (most recent call last):
File "example.py", line 28, in
nodes = [ driver.list_nodes() for driver in drivers ]
File "/usr/local/Cellar/python/2.7.1/lib/python2.7/site-packages/apache_libcloud-0.4.0-py2.7.egg/libcloud/drivers/softlayer.py", line 429, in list_nodes
object_mask=mask
File "/usr/local/Cellar/python/2.7.1/lib/python2.7/site-packages/apache_libcloud-0.4.0-py2.7.egg/libcloud/drivers/softlayer.py", line 187, in request
return getattr(sl, method)(*params)
File "/usr/local/Cellar/python/2.7.1/lib/python2.7/xmlrpclib.py", line 1224, in call
return self.__send(self.__name, args)
File "/usr/local/Cellar/python/2.7.1/lib/python2.7/xmlrpclib.py", line 1570, in __request
verbose=self.__verbose
File "/usr/local/Cellar/python/2.7.1/lib/python2.7/xmlrpclib.py", line 1264, in request
return self.single_request(host, handler, request_body, verbose)
File "/usr/local/Cellar/python/2.7.1/lib/python2.7/xmlrpclib.py", line 1292, in single_request
self.send_content(h, request_body)
File "/usr/local/Cellar/python/2.7.1/lib/python2.7/xmlrpclib.py", line 1439, in send_content
connection.endheaders(request_body)
File "/usr/local/Cellar/python/2.7.1/lib/python2.7/httplib.py", line 937, in endheaders
self._send_output(message_body)
File "/usr/local/Cellar/python/2.7.1/lib/python2.7/httplib.py", line 797, in _send_output
self.send(msg)
File "/usr/local/Cellar/python/2.7.1/lib/python2.7/httplib.py", line 759, in send
self.connect()
File "/usr/local/Cellar/python/2.7.1/lib/python2.7/httplib.py", line 740, in connect
self.timeout, self.source_address)
File "/usr/local/Cellar/python/2.7.1/lib/python2.7/socket.py", line 571, in create_connection
raise err
socket.error: [Errno 60] Operation timed out

host parameter of S3StorageDriver does nothing in 2.7.0

Summary

We noticed that connecting to a local MinIO server failed after upgrading to libcloud 2.7.0.

Detailed Information

To use the S3 interface of a local server we need to set the host explicitly. In the 2.7.0 sources (see https://github.com/apache/libcloud/blob/trunk/libcloud/storage/drivers/s3.py) we have this:

class S3StorageDriver(AWSDriver, BaseS3StorageDriver):
    name = 'Amazon S3'
    connectionCls = S3SignatureV4Connection
    region_name = 'us-east-1'

    def __init__(self, key, secret=None, secure=True, host=None, port=None,
                 region=None, token=None, **kwargs):
        # Here for backward compatibility for old and deprecated driver class
        # per region approach
        if hasattr(self, 'region_name') and not region:
            region = self.region_name  # pylint: disable=no-member

        self.region_name = region

        if region and region not in REGION_TO_HOST_MAP.keys():
            raise ValueError('Invalid or unsupported region: %s' % (region))

        self.name = 'Amazon S3 (%s)' % (region)

        host = REGION_TO_HOST_MAP[region]
        super(S3StorageDriver, self).__init__(key=key, secret=secret,
                                              secure=secure, host=host,
                                              port=port,
                                              region=region, token=token,
                                              **kwargs)

Note that the host = REGION_TO_HOST_MAP[region] line overrides the host parameter passed in by the caller.
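
For context, this is the sort of call whose host argument ends up being overridden; the MinIO endpoint and credentials below are placeholders:

from libcloud.storage.types import Provider
from libcloud.storage.providers import get_driver

# Expectation: host/port should point the driver at a local MinIO endpoint
# instead of an AWS region endpoint.
cls = get_driver(Provider.S3)
driver = cls('minio-access-key', 'minio-secret-key',
             host='localhost', port=9000, secure=False)
print(driver.list_containers())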

S3 get_driver using region name

For some reason it seems there is one driver for each S3 region.
Thus we cannot dynamically get an S3 driver depending on the region string.

For example something like this:

cls = get_driver(Provider.S3, region_name='eu-west-1')

Or maybe it would be better to have only one S3StorageDriver class where we can specify the region during construction.

driver = cls(access_key_id, secret_access_key, region=region_name, region_name=region_name)

Though it seems there are two region attributes which should match: region_name from S3StorageDriver and region from AWSDriver.

Upload large file to Azure Blobs

Summary

Could not upload a large file (100 GB) to Azure Blob Storage.

Detailed Information

Package libcloud: 2.7.0
File 100.gb was generated with:

import os
f = open('100.gb', "wb")
f.seek(107374127424-1)
f.write(b"\0")
f.close()

Code snippet:

from io import FileIO
from libcloud.storage.types import Provider
from libcloud.storage.providers import get_driver

cls = get_driver(Provider.AZURE_BLOBS)
driver = cls(key='STORAGE_ACCOUNT_NAME', secret='ACCESS_KEY')
container = driver.get_container(container_name='CONTAINER_NAME')

# This method blocks until all the parts have been uploaded.
extra = {'content_type': 'application/octet-stream'}

with FileIO('100.gb', 'rb') as iterator:
    obj = driver.upload_object_via_stream(iterator=iterator,
                                          container=container,
                                          object_name='libcloud/100.gb',
                                          extra=extra)

Error:

Traceback (most recent call last):
  File "/root/rvolykh/venv/lib/python3.5/site-packages/urllib3/connectionpool.py", line 672, in urlopen
    chunked=chunked,
  File "/root/rvolykh/venv/lib/python3.5/site-packages/urllib3/connectionpool.py", line 387, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/lib/python3.5/http/client.py", line 1122, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib/python3.5/http/client.py", line 1167, in _send_request
    self.endheaders(body)
  File "/usr/lib/python3.5/http/client.py", line 1118, in endheaders
    self._send_output(message_body)
  File "/usr/lib/python3.5/http/client.py", line 946, in _send_output
    self.send(message_body)
  File "/usr/lib/python3.5/http/client.py", line 915, in send
    self.sock.sendall(datablock)
  File "/usr/lib/python3.5/ssl.py", line 891, in sendall
    v = self.send(data[count:])
  File "/usr/lib/python3.5/ssl.py", line 861, in send
    return self._sslobj.write(data)
  File "/usr/lib/python3.5/ssl.py", line 586, in write
    return self._sslobj.write(data)
ConnectionResetError: [Errno 104] Connection reset by peer

Error with proxy_url in OpenStack auth

Summary

An error appears when using some OpenStack auth classes where this new proxy_url parameter has not been added.

Error introduced in commit c20cf01 from PR #1324.

Detailed Information

I get the error when trying to use the OpenStackIdentity_3_0_Connection_OIDC_access_token class. But it will appear using any other auth classes where the proxy_url parameter has not been added.

The error shown is:

  File "/libcloud/compute/drivers/openstack.py", line 417, in ex_get_node_details
    resp = self.connection.request(uri, method='GET')
  File "/libcloud/common/openstack.py", line 225, in request
    raw=raw)
  File "/libcloud/common/base.py", line 538, in request
    action = self.morph_action_hook(action)
  File "/libcloud/common/openstack.py", line 292, in morph_action_hook
    self._populate_hosts_and_request_paths()
  File "/libcloud/common/openstack.py", line 305, in _populate_hosts_and_request_paths
    osa = self.get_auth_class()
  File "/libcloud/common/openstack.py", line 206, in get_auth_class
    parent_conn=self)
TypeError: __init__() got an unexpected keyword argument 'proxy_url'

Add missing "cpuPlatform" field to GCE driver

Add "cpuPlatform" and "minCpuPlatform" to GCE driver.
This field is already being fetched by libcloud, but gets discarded during the _to_node method.

on file compute/drivers/gce.py, line 8852 add:
extra['cpuPlatform'] = node.get('cpuPlatform')
extra['minCpuPlatform'] = node.get('minCpuPlatform')

Performance Issues

Summary

I have been using Libcloud v2.3.0 for some months; now, when I try to update to v2.5.0 with the same application files, I'm getting a lot of errors.

Detailed Information

Using the same server and applications, with v2.5.0 the scripts stop responding and freeze quite often (roughly 10% of the time) and at random (I can't reproduce it); this never happens with v2.3.0. I'm just using some simple scripts for DigitalOcean and AWS to create, stop and terminate servers, and to list things like images, locations, sizes and keys.

Python 3.6.8
Ubuntu 18.04

Now I've downgraded again to v2.3.0 and everything is normal and stable as usual.


Patch to add ex_create_tags and ex_delete_tags to EC2 driver

Here is a patch that adds the ability to manage tags for Amazon EC2 instances. This ability is important, among other things, because the tag called "Name" specifies the name shown in EC2 control panels, including the one provided by Amazon itself.
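
A sketch of how the proposed extension methods would be used (method names as proposed in the patch; the node lookup is only illustrative):

# Tag a node so the EC2 console shows a friendly name, then remove the tag.
node = driver.list_nodes()[0]
driver.ex_create_tags(node, {'Name': 'web-server-01'})
driver.ex_delete_tags(node, {'Name': 'web-server-01'})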

GCE fails to find image for newer image families (i.e. ubuntu-1804-lts)

I was trying to use ex_create_multiple_nodes with ex_image_family="ubuntu-1804-lts" on libcloud 2.4.0, which ended up in this error:

  File "<...>/libcloud/compute/drivers/gce.py", line 5033, in ex_create_multiple_nodes
    image = self.ex_get_image_from_family(ex_image_family)
  File "<...>/libcloud/compute/drivers/gce.py", line 7209, in ex_get_image_from_family
    '\'%s\'' % (image_family), None, None)
ResourceNotFoundError: "Could not find image for family 'ubuntu-1804-lts'"

The 1604 and 1404 counterparts were working, however.

After some investigation I found that

if not image and ex_standard_projects:
    for img_proj, short_list in self.IMAGE_PROJECTS.items():
        for short_name in short_list:
            if partial_name.startswith(short_name):
                image = self._match_images(img_proj, partial_name)

is supposed to handle that case, but it turns out IMAGE_PROJECTS hasn't been updated in a while:
IMAGE_PROJECTS = {
    "centos-cloud": ["centos-6", "centos-7"],
    "cos-cloud": ["cos-beta", "cos-dev", "cos-stable"],
    "coreos-cloud": ["coreos-alpha", "coreos-beta", "coreos-stable"],
    "debian-cloud": ["debian-8", "debian-9"],
    "opensuse-cloud": ["opensuse-leap"],
    "rhel-cloud": ["rhel-6", "rhel-7"],
    "suse-cloud": ["sles-11", "sles-12"],
    "suse-byos-cloud": [
        "sles-11-byos", "sles-12-byos",
        "sles-12-sp2-sap-byos", "sles-12-sp3-sap-byos",
        "suse-manager-proxy-byos", "suse-manager-server-byos"
    ],
    "suse-sap-cloud": ["sles-12-sp2-sap", "sles-12-sp3-sap"],
    "ubuntu-os-cloud": [
        "ubuntu-1404-lts", "ubuntu-1604-lts", "ubuntu-1710"
    ],

AFAIK https://cloud.google.com/compute/docs/images#unshielded-images could be used to update the available image families and their matching image projects.

It is also possible that some entries in IMAGE_PROJECTS are no longer available, e.g. ubuntu-1710.

VCloud driver - why AdminPassword is removed from GuestCustomizationSection

Summary

There are 2 places in the VCloud driver code that remove the AdminPassword parameter from the GuestCustomizationSection of an API request due to an "API quirk", which I cannot find the origin of.

Detailed Information

Version of libcloud: forked from commit a3334dc, version 2.6.1-dev
Python version: 3.7.4
OS: Windows 10 1903

The error response I am getting from VCloud:

<ns0:Error majorErrorCode="400" message="[ xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx ] The administrator password cannot be empty when it is enabled and automatic password generation is not selected." minorErrorCode="BAD_REQUEST" xmlns:ns0="http://www.vmware.com/vcloud/v1.5"/>

More info on the GuestCustomizationSection can be found here

I am mainly interested in what the API quirk was that led to this code being added. I am currently working on adding some new features to the libcloud driver and would be happy to implement any changes necessary once I have a better idea of what the problem is. From some quick digging around, it looks like the changes were added by Michal Galet.

Error in resize function in OpenStack driver

When calling the "ex_resize" function of the OpenStack driver I get this error:

BaseHTTPError: 400 Bad Request Invalid input for field/attribute resize. Value: {u'flavorRef': u'8', 
 u'personality': [], u'name': u'userimage-156283439906', u'imageRef': u'0de96743-4a12-4470-b8b2-
 6dc260977a40', u'metadata': {}}. Additional properties are not allowed (u'metadata', u'name', 
 u'imageRef', u'personality' were unexpected)
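
For reference, a sketch of the kind of call that produces this error; the node and flavor lookups are assumed:

# Resize an existing server to the flavor with id '8' (the flavorRef seen in
# the error above).
node = driver.list_nodes()[0]
new_size = [s for s in driver.list_sizes() if s.id == '8'][0]
driver.ex_resize(node, new_size)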

ec2 driver does not support strings for size and image parameters

Summary

Both the GCP and the Dummy drivers for compute allow passing the image id and size id as strings, and both drivers will do a lookup to find them.

The EC2 driver requires that the image and size be objects that expose the value as an id attribute.

Detailed Information

The commit newcontext-oss/openc2-aws-actuator@f67cbee fixes the issue for that code. GCP works with the previous commit.

This is on MacOSX 10.14.6.

$python
Python 3.6.7 (default, Oct 21 2018, 08:56:20)
[GCC 4.2.1 Compatible Apple LLVM 10.0.0 (clang-1000.11.45.2)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import libcloud
>>> libcloud.__version__
'2.7.0'
>>> from libcloud.compute.types import Provider
>>> from libcloud.compute.providers import get_driver
>>> cls = get_driver(Provider.EC2)
>>> access_key, secret_key = open('.keys').read().split()
>>> drv = cls(access_key, secret_key, region='us-west-2')
>>> drv.create_node(image='ami-0b74be4bc329b8a1b', size='t2.nano')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/jmg/work/openc2-aws-actuator/p/lib/python3.6/site-packages/libcloud/compute/drivers/ec2.py", line 1891, in create_node                                                           
    'ImageId': image.id,
AttributeError: 'str' object has no attribute 'id'
>>> from mock import MagicMock
>>> img = MagicMock()
>>> img.id = 'ami-0b74be4bc329b8a1b'
>>> sizeobj = MagicMock()
>>> sizeobj.id = 't2.nano'
>>> drv.create_node(image=img, size=sizeobj, name='somename')
<Node: uuid=2ba587a276a105f6676e4df1957ff89926206dbb, name=somename, state=PENDING, public_ips=[], private_ips=['172.31.37.77'], provider=Amazon EC2 ...>                                       
>>> 

Cloudflare export to BIND zone format - missing priority attribute

Summary

I have used 2.8.x and the trunk version; in both versions, I cannot access the priority value for a record or export the zone to BIND format without a KeyError.

Detailed Information

The following traceback is generated from zone.export_to_bind_format():

Traceback (most recent call last):
  File "", line 1, in
  File "/dns-copy/venv/lib/python3.7/site-packages/libcloud/dns/base.py", line 78, in export_to_bind_format
    return self.driver.export_zone_to_bind_format(zone=self)
  File "/dns-copy/venv/lib/python3.7/site-packages/libcloud/dns/base.py", line 418, in export_zone_to_bind_format
    line = self._get_bind_record_line(record=record)
  File "/dns-copy/venv/lib/python3.7/site-packages/libcloud/dns/base.py", line 478, in _get_bind_record_line
    priority = str(record.extra['priority'])
KeyError: 'priority'

I was able to resolve the error by adding priority to 'libcloud/libcloud/dns/drivers/cloudflare.py':

RECORD_EXTRA_ATTRIBUTES = {
    'proxiable',
    'proxied',
    'locked',
    'created_on',
    'modified_on',
    'data',
    'priority'
}

I can export to BIND format and retrieve MX record priorities with this change.

azure_arm.py: create_node: cannot create windows VM

Summary

Using the azure_arm provider, I cannot create a Windows VM. In the create_node function it has been hardcoded to only support Linux VMs (I have commented #HERE in the code below).

Detailed Information

I was able to check the name of the image to see if it is a Windows image (all Windows images have image.offer == "WindowsServer") and then set the correct osType within create_node. However, this does not work with AzureVhdImage images, and that's where I am stuck.

Also, we set up authentication assuming a Linux machine; when the OS is Windows we don't need to set data["properties"]["osProfile"]["linuxConfiguration"] for NodeAuthSSHKey or NodeAuthPassword.

I tried to make a fix but I couldn't figure how to get the osType from a VHD Image.

Please let me know if you have any insights!

Stack

Traceback (most recent call last):
node = conn.create_node(name=name, location=location, image=image, size=size, auth=auth, ex_resource_group=resource_group, ex_storage_account=storage_account, ex_blob_container=blob_container, ex_nic=nic)
File "/Library/Python/2.7/site-packages/libcloud/compute/drivers/azure_arm.py", line 672, in create_node
method="PUT")
File "/Library/Python/2.7/site-packages/libcloud/common/azure_arm.py", line 229, in request
method=method, raw=raw)
File "/Library/Python/2.7/site-packages/libcloud/common/base.py", line 636, in request
response = responseCls(**kwargs)
File "/Library/Python/2.7/site-packages/libcloud/common/base.py", line 153, in init
headers=self.headers)
libcloud.common.exceptions.BaseHTTPError: [InvalidParameter] The value 'Linux' of parameter 'osDisk.osType' is not allowed. Allowed values are 'Windows'.

Code:

compute/drivers/azure_arm.py

         if isinstance(image, AzureVhdImage):
             instance_vhd = self._get_instance_vhd(
                 name=name,
                 ex_resource_group=ex_resource_group,
                 ex_storage_account=ex_storage_account,
                 ex_blob_container=ex_blob_container)
             storage_profile = {
                 "osDisk": {
                     "name": name,
                     "osType": "linux", #HERE!
                     "caching": "ReadWrite",
                     "createOption": "FromImage",
                     "image": {
                         "uri": image.id
                     },
                     "vhd": {
                         "uri": instance_vhd,
                     }
                 }
             }
             if ex_use_managed_disks:
                 raise LibcloudError(
                     "Creating managed OS disk from %s image "
                     "type is not supported." % type(image))
         elif isinstance(image, AzureImage):
             storage_profile = {
                 "imageReference": {
                     "publisher": image.publisher,
                     "offer": image.offer,
                     "sku": image.sku,
                     "version": image.version
                 },
                 "osDisk": {
                     "name": name,
                     "osType": "linux", #HERE!
                     "caching": "ReadWrite",
                     "createOption": "FromImage"
                 }
             }

...

         if isinstance(auth, NodeAuthSSHKey):
             data["properties"]["osProfile"]["adminPassword"] = \
                 binascii.hexlify(os.urandom(20)).decode("utf-8")
             data["properties"]["osProfile"]["linuxConfiguration"] = { #HERE!
                 "disablePasswordAuthentication": "true",
                 "ssh": {
                     "publicKeys": [
                         {
                             "path": '/home/%s/.ssh/authorized_keys' % (
                                 ex_user_name),
                             "keyData": auth.pubkey  # pylint: disable=no-member
                         }
                     ]
                 }
             }
         elif isinstance(auth, NodeAuthPassword):
             data["properties"]["osProfile"]["linuxConfiguration"] = { #HERE!
                 "disablePasswordAuthentication": "false"
             }
             data["properties"]["osProfile"]["adminPassword"] = auth.password
         else:
             raise ValueError(
                 "Must provide NodeAuthSSHKey or NodeAuthPassword in auth")

Backblaze B2: AUTH_API_HOST not picking up the default

Running the demo code in the docs:

from libcloud.storage.types import Provider
from libcloud.storage.providers import get_driver

account_id = 'XXXXXX'
application_key = 'YYYYYY'

cls = get_driver(Provider.BACKBLAZE_B2)
driver = cls(account_id, application_key)
driver.list_containers()

Produces the error:

requests.exceptions.ConnectionError: HTTPSConnectionPool(host='none', port=443): Max retries exceeded with url: /b2api/v1/b2_list_buckets?accountId=xxxxxxxxxx (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f7366797610>: Failed to establish a new connection: [Errno -5] No address associated with hostname'))

If the host is supplied to the constructor, then it works as intended:

driver = cls(account_id, application_key, host='api.backblaze.com')

I can replicate this on Python 2 and Python 3. I am using libcloud via django-storages, so there isn't a way for me to pass the hostname via the constructor.

Looking at the Backblaze provider, the default host is provided and correct, but it somehow does not make it into the connection object before request is called.

I can take another look at this this afternoon, but a patch didn't jump out at me immediately.

AWS S3 Presigned URL and Azure Blob Storage Shared access signature

Feature Request

AWS S3 has the support of Presigned Object URL and Azure Blob Storage has the support of Shared access signature.

Both of those features allow anyone (with whom the shared link was shared) to download or upload a file. This feature is very useful when you want to temporarily share a file with some user(s) or allow them to upload a file, without sharing your credentials (temporary creds, creating an account, etc.).

Currently, libcloud Storage interface (API) has two methods which more or less are related to sharing object access:

* Both methods are not implemented for either the AWS S3 or the Azure Blob Storage libcloud storage drivers.

My proposal is to extend get_object_cdn_url with an additional parameter (mode=["read", "write"]) and implement support for AWS S3 and Azure Blob Storage.
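
A sketch of what the proposal could look like from the caller's side (the mode parameter is part of the proposal, not the existing API; container and object names are placeholders):

# Hypothetical: generate temporary pre-signed read and write URLs for a
# single object without sharing account credentials.
obj = driver.get_object(container_name='backups', object_name='db.dump')
download_url = driver.get_object_cdn_url(obj, mode='read')
upload_url = driver.get_object_cdn_url(obj, mode='write')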

Download page: issues with KEYS links and gpg verify

The download page verification guide
https://libcloud.apache.org/downloads.html#package-verification-guide
has two references to the KEYS file; these should be links to
https://www.apache.org/dist/libcloud/KEYS

[The initial mention of the KEYS file should ideally use just 'KEYS' as the link text, as that is how most other download pages present it]

==

Also the gpg verify commands should have a second parameter as follows:

gpg --verify apache-libcloud-2.3.0.tar.bz2.asc apache-libcloud-2.3.0.tar.bz2

See: https://www.apache.org/info/verification.html#specify_both

Error updating port in OpenStack driver

Summary

There is an error in the ex_update_port function.
It returns a 500 error.

Detailed Information

There is an error in the request as the data parameter is not set.

Error list Google GKE clusters - zones deprecated

Summary

The container GKE provider fails to list and perform actions on Google Cloud because zones are now deprecated; locations should be used instead.

Detailed Information

Traceback (most recent call last):
File "container.py", line 39, in
gce.list_clusters()
File "container.py", line 33, in list_clusters
clusters = self.gke.list_clusters(ex_zone=zone)
File "/usr/local/lib/python2.7/site-packages/libcloud/container/drivers/gke.py", line 178, in list_clusters
response = self.connection.request(request, method='GET').object
File "/usr/local/lib/python2.7/site-packages/libcloud/container/drivers/gke.py", line 67, in request
response = super(GKEConnection, self).request(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/libcloud/common/google.py", line 813, in request
*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/libcloud/common/base.py", line 638, in request
response = responseCls(**kwargs)
File "/usr/local/lib/python2.7/site-packages/libcloud/common/base.py", line 154, in init
self.object = self.parse_body()
File "/usr/local/lib/python2.7/site-packages/libcloud/common/google.py", line 277, in parse_body
raise ResourceNotFoundError(message, self.status, code)
libcloud.common.google.ResourceNotFoundError: u'... <title>Error 404 (Not Found)!!1</title> ... 404. That\u2019s an error. The requested URL /v1/projects//zones/clusters was not found on this server. That\u2019s all we know.'

this python project is vulnerable to MITM as it fails to verify the ssl validity of the remote destination

This Python project is vulnerable to MITM as it fails to verify the SSL validity of the remote destination.
urllib / urllib2 and httplib.HTTPSConnection do not verify SSL at all by default.
from base.py
class ConnectionKey(object):
    """
    A Base Connection class to derive from.
    """
    conn_classes = (httplib.HTTPConnection, httplib.HTTPSConnection)

    ....

    def connect(self, host=None, port=None):
        .....
        connection = self.conn_classes[self.secure](host, port)

This request can be MITMed, leading to the compromise of a user's API key: a secured HTTPS connection was requested, but it can be MITM'ed anyway.

Uploading to s3 encrypted with a custom KMS key fails

Summary

When uploading files to S3 where the bucket has server-side encryption with a custom KMS key, the upload fails.

Detailed Information

Python 3.7
Libcloud: 2.8.0
OS: Amazon Linux 2

Libcloud inspects the ETag expecting it to be the MD5 sum of the uploaded data object, but this is not always the case (when using a custom KMS key, or when doing a multi-part upload): https://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonResponseHeaders.html

Objects created by the PUT Object, POST Object, or Copy operation, or through the AWS Management Console, and are encrypted by SSE-C or SSE-KMS, have ETags that are not an MD5 digest of their object data.

If an object is created by either the Multipart Upload or Part Copy operation, the ETag is not an MD5 digest, regardless of the method of encryption.

The S3 storage driver compares the ETag to the hash calculated locally on the streamed file (https://github.com/apache/libcloud/blob/trunk/libcloud/storage/drivers/s3.py#L850), but where the ETag is not an MD5 hash of the file this check will always fail.

I've included a stack trace below from cassandra-medusa (https://github.com/thelastpickle/cassandra-medusa), which led me to investigate this problem.

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/medusa/backup.py", line 274, in main
    cassandra, node_backup, storage, differential_mode, config)
  File "/usr/local/lib/python3.7/site-packages/medusa/backup.py", line 320, in do_backup
    num_files = backup_snapshots(storage, manifest, node_backup, node_backup_cache, snapshot)
  File "/usr/local/lib/python3.7/site-packages/medusa/backup.py", line 388, in backup_snapshots
    manifest_objects = storage.storage_driver.upload_blobs(needs_backup, dst_path)
  File "/usr/local/lib/python3.7/site-packages/medusa/storage/s3_storage.py", line 95, in upload_blobs
    multi_part_upload_threshold=int(self.config.multi_part_upload_threshold),
  File "/usr/local/lib/python3.7/site-packages/medusa/storage/aws_s3_storage/concurrent.py", line 87, in upload_blobs
    return job.execute(list(src))
  File "/usr/local/lib/python3.7/site-packages/medusa/storage/aws_s3_storage/concurrent.py", line 51, in execute
    return list(executor.map(self.with_storage, iterables))
  File "/usr/lib64/python3.7/concurrent/futures/_base.py", line 598, in result_iterator
    yield fs.pop().result()
  File "/usr/lib64/python3.7/concurrent/futures/_base.py", line 435, in result
    return self.__get_result()
  File "/usr/lib64/python3.7/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
  File "/usr/lib64/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.7/site-packages/medusa/storage/aws_s3_storage/concurrent.py", line 60, in with_storage
    return self.func(self.storage, connection, iterable)
  File "/usr/local/lib/python3.7/site-packages/medusa/storage/aws_s3_storage/concurrent.py", line 83, in <lambda>
    storage, connection, src_file, dest, bucket, multi_part_upload_threshold
  File "/usr/local/lib/python3.7/site-packages/medusa/storage/aws_s3_storage/concurrent.py", line 119, in __upload_file
    obj = _upload_single_part(connection, src, bucket, full_object_name)
  File "/usr/local/lib/python3.7/site-packages/retrying.py", line 49, in wrapped_f
    return Retrying(*dargs, **dkw).call(f, *args, **kw)
  File "/usr/local/lib/python3.7/site-packages/retrying.py", line 212, in call
    raise attempt.get()
  File "/usr/local/lib/python3.7/site-packages/retrying.py", line 247, in get
    six.reraise(self.value[0], self.value[1], self.value[2])
  File "/usr/local/lib/python3.7/site-packages/six.py", line 696, in reraise
    raise value
  File "/usr/local/lib/python3.7/site-packages/retrying.py", line 200, in call
    attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
  File "/usr/local/lib/python3.7/site-packages/medusa/storage/aws_s3_storage/concurrent.py", line 127, in _upload_single_part
    os.fspath(src), container=bucket, object_name=object_name
  File "/usr/local/lib/python3.7/site-packages/libcloud/storage/drivers/s3.py", line 492, in upload_object
    storage_class=ex_storage_class)
  File "/usr/local/lib/python3.7/site-packages/libcloud/storage/drivers/s3.py", line 854, in _put_object
    object_name=object_name, driver=self)

GCE node deletion times out - allow an async option

Feature Request

Currently the GCE node_delete() function uses the async_request() call (which, ironically, isn't actually async). Deleting a basic n1-standard-1 node running CentOS in us-central times out every time; this isn't an uncommon occurrence. Searching around, there are references to other projects having the same issue:

kubevirt/cloud-image-builder#72
ansible/ansible#29597

The current way to avoid the timeout is to use the ex_destroy_multiple_nodes() function with a gigantic timeout value. An option, for sure, but not flexible enough.

An easy solution in this case would be to offer both a sync and async option for deletion. Applications can poll, if desired. Thankfully, the async option is fairly straightforward, defaulting to the current sync mode of operation:

        if sync:
            self.connection.async_request(request, method='DELETE')
        else:
            self.connection.request(request, method='DELETE')

The wheel distribution of this package on pypi contains the requirement for enum34

Summary

The wheel distribution of this package on pypi contains the requirement for enum34

Detailed Information

enum34 appears to be dynamically included as a requirement in setup.py, depending on whether your Python version is pre-3.4.

PY2_or_3_pre_34 = sys.version_info < (3, 4, 0)
# ...
if PY2_or_3_pre_34:
    INSTALL_REQUIREMENTS.append('typing')
    INSTALL_REQUIREMENTS.append('enum34')

The wheel package contains the requirement without any special qualifier. (I'm not even sure whether it is possible to specify requirements in a wheel according to Python version.)

From METADATA

Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4
Requires-Dist: requests (>=2.5.0)
Requires-Dist: typing
Requires-Dist: enum34

This does create a problem in zc.buildout style environments that are running newer python versions.

I think it is probably time to drop support for Python 3.3
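
For what it's worth, version-conditional requirements can be expressed with PEP 508 environment markers, which do survive into wheel metadata; a generic setup.py sketch of the mechanism (not the project's actual fix):

from setuptools import setup

setup(
    name='example',
    version='0.0.1',
    install_requires=[
        'requests>=2.5.0',
        # The markers keep these dependencies conditional even in built wheels.
        'typing; python_version < "3.5"',
        'enum34; python_version < "3.4"',
    ],
)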

Error getting node_id in OpenStack_1_1_FloatingIpAddress

Summary

Error getting node_id in OpenStack_1_1_FloatingIpAddress in case the region is not called "nova".

Detailed Information

In the function OpenStack_2_FloatingIpPool._to_floating_ip, the instance_id of the node the IP is attached to is not obtained correctly if the region is not named "nova", as the code expects.

Documentation: Objectstore example is broken / libcloud does seek() on stream

Summary

I'm running the example on https://libcloud.readthedocs.io/en/stable/storage/examples.html that creates a tarfile and uploads it via stream to an objectstore.

I'm using GCS.
libcloud tries to do a seek() on the supplied iterator; this fails and the program stops.

Detailed Information

apache-libcloud==2.8.0
Python 3.7.6
macos Catalina

The example in the docs is Python 2; I've rewritten it for Python 3. I'm using GCS.

#!/usr/bin/env python
import os
import subprocess
from datetime import datetime

from libcloud.storage.types import Provider, ContainerDoesNotExistError
from libcloud.storage.providers import get_driver

from dotenv import load_dotenv
load_dotenv()

cls = get_driver(Provider.GOOGLE_STORAGE)
driver = cls(os.getenv('GOOGLE_ACCOUNT'),
                  os.getenv('AUTH_TOKEN'),
                  project='foo')

directory = os.getenv('FOLDER')
cmd = 'tar cvzpf - %s' % (directory)

object_name = 'backup-%s.tar.gz' % (datetime.now().strftime('%Y-%m-%d'))
container_name = os.getenv('WORKSPACE')

# Create a container if it doesn't already exist
try:
    container = driver.get_container(container_name=container_name)
except ContainerDoesNotExistError:
    container = driver.create_container(container_name=container_name)

pipe = subprocess.Popen(cmd, bufsize=0, shell=True, stdout=subprocess.PIPE)
return_code = pipe.poll()

print('Uploading object...')

while return_code is None:
    # Compress data in our directory and stream it directly to CF
    obj = container.upload_object_via_stream(iterator=pipe.stdout,
                                             object_name=object_name)
    return_code = pipe.poll()

print('Upload complete, transferred: %s KB' % ((obj.size / 1024)))

This returns the following exception:

Traceback (most recent call last):
  File "./bug.py", line 38, in <module>
    object_name=object_name)
  File "/Users/perbu/.virtualenvs/ar1/lib/python3.7/site-packages/libcloud/storage/base.py", line 159, in upload_object_via_stream
    iterator, self, object_name, extra=extra, **kwargs)
  File "/Users/perbu/.virtualenvs/ar1/lib/python3.7/site-packages/libcloud/storage/drivers/s3.py", line 698, in upload_object_via_stream
    storage_class=ex_storage_class)
  File "/Users/perbu/.virtualenvs/ar1/lib/python3.7/site-packages/libcloud/storage/drivers/s3.py", line 842, in _put_object
    headers=headers, file_path=file_path, stream=stream)
  File "/Users/perbu/.virtualenvs/ar1/lib/python3.7/site-packages/libcloud/storage/base.py", line 627, in _upload_object
    self._get_hash_function())
  File "/Users/perbu/.virtualenvs/ar1/lib/python3.7/site-packages/libcloud/storage/base.py", line 657, in _hash_buffered_stream
    stream.seek(0)
OSError: [Errno 29] Illegal seek

The offending code in libcloud/storage/base.py looks like this:

        if hasattr(stream, '__next__') or hasattr(stream, 'next'):
            # Ensure we start from the begining of a stream in case stream is
            # not at the beginning
            if hasattr(stream, 'seek'):
                stream.seek(0)

I'm not entirely sure why the iterator gets seek() called on it. I've been able to work around the issue by creating a SimpleIterator class that only supplies next, and wrapping the output from Popen.stdout in it. See the sketch below.
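
A sketch of that workaround (the class name comes from the description above; the implementation details are assumed):

class SimpleIterator(object):
    """Expose only iteration so libcloud's hash helper never sees seek()."""

    def __init__(self, fileobj, chunk_size=8192):
        self._fileobj = fileobj
        self._chunk_size = chunk_size

    def __iter__(self):
        return self

    def __next__(self):
        chunk = self._fileobj.read(self._chunk_size)
        if not chunk:
            raise StopIteration
        return chunk

# e.g. container.upload_object_via_stream(iterator=SimpleIterator(pipe.stdout),
#                                         object_name=object_name)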

If I just comment out the seek(0) everything seems to work.

Thanks for an excellent project. Let me know if you need anything more from me.

Cheers,

Per.

list_container_objects doesn't work for Azure blob storage under python 2.7.5

Summary

list_container_objects doesn't work for Azure blob storage under python 2.7.5

Detailed Information

Libcloud version: 2.5.0/2.4.0

Python version: 2.7.5

Operating System: RHEL 7.3

Steps which are needed to reproduce it:
Using the following code to list objects in Azure blob container:

from libcloud.storage.types import Provider
from libcloud.storage.providers import get_driver

cls = get_driver(Provider.AZURE_BLOBS)
driver = cls(key='XXX', secret='XXXXXXXX')
container = driver.get_container('myContainer')
objs = driver.list_container_objects(container, '123')

print objs

It will throw the following exception at line objs = driver.list_container_objects(container, '123'):

>>> objs = driver.list_container_objects(container, '123')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/libcloud/storage/drivers/azure_blobs.py", line 437, in list_container_objects
    ex_prefix=ex_prefix))
  File "/usr/lib/python2.7/site-packages/libcloud/storage/drivers/azure_blobs.py", line 401, in iterate_container_objects
    params=params)
  File "/usr/lib/python2.7/site-packages/libcloud/common/base.py", line 636, in request
    response = responseCls(**kwargs)
  File "/usr/lib/python2.7/site-packages/libcloud/common/base.py", line 155, in __init__
    self.object = self.parse_body()
  File "/usr/lib/python2.7/site-packages/libcloud/common/azure.py", line 97, in parse_body
    return super(AzureResponse, self).parse_body()
  File "/usr/lib/python2.7/site-packages/libcloud/common/base.py", line 233, in parse_body
    driver=self.connection.driver)
libcloud.common.types.MalformedResponseError: <MalformedResponseException in <libcloud.storage.drivers.azure_blobs.AzureBlobsStorageDriver object at 0x7f9f08838f90> 'Failed to parse XML'>: u'\u010f\u0165\u017c<?xml version="1.0" encoding="utf-8"?><EnumerationResults ContainerName="https://.....

The response XML seems to be valid except it begins with u'\u010f\u0165\u017c.
I tried with Python 2.7.15 and it works fine; maybe this issue is related to Python 2.7.5 only?
Python 2.7.5 is the default version for RHEL 7.3, so please help me. Thanks!

mypy issues as of 2.8.0

Summary

Just wanted to feed back to you all a couple issues we had running mypy on our custom drivers after upgrading to libcloud 2.8.0.

Detailed Information

  • The type of NodeDriver.type is Provider, but our custom drivers assign a string to this field, which causes a mypy error. Perhaps it should be Union[Provider, str]? (See the sketch after this list.)
  • NodeSize, NodeLocation, and NodeImage have driver typed to be Type[NodeDriver] in their constructors, but this does not match Node, StorageVolume, and others which have it typed as NodeDriver. We have subclasses of some of these and use self.driver as an instance of a driver, just like the Node class, and not as a driver class.
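
A minimal sketch of the first point above (the custom driver is hypothetical):

from libcloud.compute.base import NodeDriver

class MyCustomDriver(NodeDriver):
    # mypy flags this assignment because NodeDriver.type is annotated as
    # Provider, while out-of-tree drivers commonly assign a plain string.
    type = 'my-custom-provider'
    name = 'My Custom Provider'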

Require Zope?

Hi guys,

I'm trying to use libcloud on a standard Ubuntu desktop 10.10 machine, in a virtualenv. If I install apache-libcloud from PyPI (e.g. pip install apache-libcloud) and then call get_driver(Provider.RACKSPACE), libcloud's base.py will error out with an ImportError (No module named zope).

Since it is impossible to install zope via PyPI (as far as I can tell), it's impossible to use the software. Furthermore, I've tried installing the Ubuntu packages python-zope and zope-common, but neither provides the zope module that libcloud requires to work.

Is there a workaround for this? I assume that I'm the only one having the issue.

Unable to create a driver for GCE

Summary

While running gce_demo.py, I always get this exception:

File "/Users/awmalik/Library/Python/2.7/lib/python/site-packages/libcloud/common/google.py", line 483, in init
raise GoogleAuthError('cryptography library required for '
libcloud.common.google.GoogleAuthError: 'cryptography library required for Service Account Authentication.'

Detailed Information

Here's the full stacktrace for the exception when I try to run gce_demo.py

Traceback (most recent call last):
File "gce_demo.py", line 958, in
main_compute()
File "gce_demo.py", line 343, in main_compute
gce = get_gce_driver()
File "gce_demo.py", line 112, in get_gce_driver
driver = get_driver(Provider.GCE)(*args, **kwargs)
File "/Users/awmalik/Library/Python/2.7/lib/python/site-packages/libcloud/compute/drivers/gce.py", line 1886, in init
super(GCENodeDriver, self).init(user_id, key, **kwargs)
File "/Users/awmalik/Library/Python/2.7/lib/python/site-packages/libcloud/common/base.py", line 976, in init
self.connection = self.connectionCls(*args, **conn_kwargs)
File "/Users/awmalik/Library/Python/2.7/lib/python/site-packages/libcloud/compute/drivers/gce.py", line 99, in init
credential_file=credential_file, **kwargs)
File "/Users/awmalik/Library/Python/2.7/lib/python/site-packages/libcloud/common/google.py", line 772, in init
user_id, key, auth_type, credential_file, scopes, **kwargs)
File "/Users/awmalik/Library/Python/2.7/lib/python/site-packages/libcloud/common/google.py", line 660, in init
self.user_id, self.key, self.scopes, **kwargs)
File "/Users/awmalik/Library/Python/2.7/lib/python/site-packages/libcloud/common/google.py", line 483, in init
raise GoogleAuthError('cryptography library required for '
libcloud.common.google.GoogleAuthError: 'cryptography library required for Service Account Authentication.'

Python version: Python 2.7.10
Libcloud version: apache_libcloud-2.6.0.dist-info
PyCrypto version: pycrypto-2.6.1.dist-info

GCENodeDriver's deploy_node() should accept Deployments and other parameters that are accepted by create_node()

Feature Request

I was trying to use Deployment when launching nodes using GCE, but I noticed that the deploy_node() of GCENodeDriver has a different function signature from other drivers and does not accept Deployment. Furthermore, it doesn't accept other essential parameters you can pass in to create_node(), making it very limited in its usefulness. Is there a reason why this is so? I fail to see why GCENodeDriver has overridden NodeDriver's default deploy_node() when there is even this comment:

Deploy node is typically not overridden in subclasses. The

I think GCENodeDriver should accept Deployment for its deploy_node() method like other drivers. Would this be as simple as removing the overriding method? Thank you for your help in advance!
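
For reference, this is the kind of call the base NodeDriver's deploy_node() supports and that the request asks the GCE driver to accept as well; the image/size lookups and SSH details are omitted from this sketch:

from libcloud.compute.deployment import ScriptDeployment

# Run a post-boot script through the generic deployment machinery.
step = ScriptDeployment('echo "hello from libcloud" > /tmp/hello.txt')
node = driver.deploy_node(name='deploy-test', image=image, size=size,
                          deploy=step)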

Support HTTP proxies

My corporate network requires HTTP/S connections to go through a squid proxy.

libcloud should respect the $https_proxy environment variable so that this can work seamlessly.
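For what it's worth, newer libcloud releases also expose a per-driver proxy hook in addition to honoring proxy environment variables; a minimal sketch (provider, credentials and proxy URL are placeholders), assuming connection.set_http_proxy() is available in the installed version:

from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# Placeholder credentials and proxy URL; set_http_proxy() routes all further
# requests made by this driver through the given HTTP proxy.
driver = get_driver(Provider.EC2)("access-key", "secret-key")
driver.connection.set_http_proxy(proxy_url="http://squid.example.com:3128")
nodes = driver.list_nodes()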

azure_arm: delete public IP address (ex_delete_public_ip)

Feature Request

This feature is a single function within the azure_arm driver. With it, Libcloud users can delete Azure ARM public IP address resources. The existing ex_delete_resource() method could not be used because RESOURCE_API_VERSION is too old to support the publicIPAddress resource.

GCE ex_list_instancegroups('all') raises KeyError 'zone'

Summary

The GCE ex_list_instancegroups('all') call raises KeyError 'zone'.

Detailed Information

Libcloud 2.5.0
Python 3.5
Ubuntu 16.04.6 LTS

The error is caused by line 9092 of gce.py in the _to_instancegroup() function:

zone = self.ex_get_zone(instancegroup['zone'])

The code is expecting the instancegroup dict to have a zone attribute, but that's not always the case. The underlying GCE API endpoint /aggregated/instanceGroups can also return objects that have a region attribute but not a zone attribute. Example:

{'creationTimestamp': '2019-05-20T12:00:00.000-08:00',
 'description': 'This instance group is controlled by Regional Instance Group '
                "Manager 'abcd-ef-ghij'. To modify instances in this group, "
                'use the Regional Instance Group Manager API: '
                'https://cloud.google.com/compute/docs/reference/latest/instanceGroupManagers',
 'fingerprint': 'AbCdEfGHiJK=',
 'id': '12345678901234567',
 'kind': 'compute#instanceGroup',
 'name': 'abcd-ef-ghij',
 'namedPorts': [{'name': 'https', 'port': 443}],
 'network': 'https://www.googleapis.com/compute/v1/projects/testproj1/global/networks/ab',
 'region': 'https://www.googleapis.com/compute/v1/projects/testproj1/regions/us-west1',
 'selfLink': 'https://www.googleapis.com/compute/v1/projects/testproj1/regions/us-west1/instanceGroups/abcd-ef-ghij',
 'size': 0,
 'subnetwork': 'https://www.googleapis.com/compute/v1/projects/testproj1/regions/us-west1/subnetworks/asdf1234'}
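A possible direction for a fix (an illustrative sketch, not the actual patch): only resolve a zone when the aggregated result actually carries one, and fall back to the region URL otherwise.

def resolve_instancegroup_scope(instancegroup):
    # Aggregated /aggregated/instanceGroups results can be zonal or regional;
    # only zonal entries carry a "zone" key, regional ones carry "region".
    if "zone" in instancegroup:
        return ("zone", instancegroup["zone"])
    return ("region", instancegroup.get("region"))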

SignatureDoesNotMatch for manual request

Summary

While attempting to perform a manual request, I'm running into what looks like an issue with the credentials being used.

Detailed Information

Upon attempting to create an EC2 Launch Template by performing a manual request with the EC2 driver, it returns libcloud.common.types.InvalidCredsError: 'SignatureDoesNotMatch'.

More specifically, I have a function in which I attempt to create a simple Launch Template:

def initialize_test():

    # authenticate (login_aws is a helper defined elsewhere in the project)
    aws = get_driver(Provider.EC2)
    driver = login_aws(aws)

    # Other actions such as driver.ex_create_security_group() work perfectly;
    # manually performing requests isn't working
    new_template(driver)

def new_template(driver):
    params = {
        'Action': 'CreateLaunchTemplate',
        'LaunchTemplateName': "test-template",
        'LaunchTemplateData': {
            'ImageId': "ami-06b41651a26fbba09",
            'InstanceType': "t3.xlarge",
            'NetworkInterface': {
                1: {
                    'AssociatePublicIpAddress': True
                }
            }
        }
    }

    # Breaks here
    response = driver.connection.request("/", params=params)

And this throws

libcloud.common.types.InvalidCredsError: 'SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.'

The project used is running python 3.7.4, using apache-libcloud 2.5.0.
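One thing worth checking (my assumption, not a confirmed diagnosis): the EC2 Query API expects nested structures to be flattened into dotted parameter names, so passing nested dicts to connection.request() may get serialized differently by the signer and the server. A flattened sketch of the same request follows; the exact parameter names should be verified against the CreateLaunchTemplate documentation:

# Hypothetical flattened parameters for the same call; nested dicts are
# replaced by dotted Query-API style keys.
params = {
    "Action": "CreateLaunchTemplate",
    "LaunchTemplateName": "test-template",
    "LaunchTemplateData.ImageId": "ami-06b41651a26fbba09",
    "LaunchTemplateData.InstanceType": "t3.xlarge",
    "LaunchTemplateData.NetworkInterface.1.AssociatePublicIpAddress": "true",
}
response = driver.connection.request("/", params=params)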

Issue in Create Volume

Summary

I tried creating a new volume using the call below:

driver.create_volume(size=volume_size, name=volume_name, location=None, snapshot=None)

and I am getting this error:

MissingParameter: The request must contain the parameter zone

When I debug it, the EC2 driver is expecting an availability zone in
/usr/local/lib/python3.7/dist-packages/libcloud/compute/drivers/ec2.py in create_volume
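For reference, passing one of the driver's locations instead of location=None should supply the missing zone (a sketch only; picking the first location is just for illustration):

# EC2's CreateVolume requires an AvailabilityZone, so pass a location from
# the driver instead of None.
location = driver.list_locations()[0]
volume = driver.create_volume(size=volume_size, name=volume_name,
                              location=location, snapshot=None)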

Detailed Information

I am using Libcloud 2.8.0 and running on Python 3.7 on Ubuntu 18.04 LTS.


Can no longer get driver by provider name string

Summary

Can no longer get driver by provider name string as of 2.7.0.

Detailed Information

home@billy [~]--[$ python3 -m venv libcloud-venv
home@billy [~]--[$ . libcloud-venv/bin/activate
(libcloud-venv) home@billy [~]--[$ pip install apache-libcloud==2.7.0
...
Successfully installed apache-libcloud-2.7.0 certifi-2019.11.28 chardet-3.0.4 enum34-1.1.6 idna-2.8 requests-2.22.0 typing-3.7.4.1 urllib3-1.25.7
(libcloud-venv) home@billy [~]--[$ python
Python 3.7.5 (default, Nov  1 2019, 02:16:32)
[Clang 11.0.0 (clang-1100.0.33.8)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from libcloud.compute.providers import get_driver
>>> get_driver('openstack')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/billy/libcloud-venv/lib/python3.7/site-packages/libcloud/compute/providers.py", line 178, in get_driver
    deprecated_constants=deprecated_constants)
  File "/Users/billy/libcloud-venv/lib/python3.7/site-packages/libcloud/common/providers.py", line 73, in get_driver
    raise AttributeError('Provider %s does not exist' % (provider))
AttributeError: Provider openstack does not exist

Works in 2.6.0:

(libcloud-venv) home@billy [~]--[$ pip install -U 'apache-libcloud==2.6.0'
...
Successfully installed apache-libcloud-2.6.0
(libcloud-venv) home@billy [~]--[$ python
Python 3.7.5 (default, Nov  1 2019, 02:16:32)
[Clang 11.0.0 (clang-1100.0.33.8)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from libcloud.compute.providers import get_driver
>>> get_driver('openstack')
<class 'libcloud.compute.drivers.openstack.OpenStackNodeDriver'>

Not sure if this is a bug, or if we've just been using it wrong.
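As a point of comparison, going through the Provider constant works on both versions (a minimal sketch; no credentials are needed just to resolve the class):

from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# Resolving the driver class via the Provider constant instead of the
# lower-case string sidesteps the lookup that changed between 2.6.0 and 2.7.0.
cls = get_driver(Provider.OPENSTACK)
print(cls)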

Softlayer Servers always created with Hourly Billing (regardless of ex_hourly value)

Summary

SoftLayer servers are always created with Hourly Billing, regardless of what value (False / True) is passed in the ex_hourly parameter to deploy_node or create_node.

Detailed Information

In the create_node method of the SoftLayer driver, the ex_hourly parameter is converted to a string ('true' / 'false') before being forwarded in the data to the createObject method of the SoftLayer API.
However, the API expects a boolean for this parameter, so it falls back to the default value (hourly = true), resulting in instances always being created with hourly billing.

  • Test via libcloud to set Monthly Billing (ex_hourly = False):
    -- Formatted data:
    {'domain': 'wandera.com', 'localDiskFlag': 'true', 'maxMemory': '4', 'blockDevices': [{'device': '0', 'diskImage': {'capacity': 100}}], 'networkComponents': [{'maxSpeed': 100}], 'datacenter': {'name': 'tor01'}, 'hourlyBillingFlag': 'false', 'hostname': 'test.example.com', 'startCpus': '2', 'operatingSystemReferenceCode': 'UBUNTU_18_64', 'sshKeys': [{'id': key_id}]}
    -- Setting: 'hourlyBillingFlag': 'false'
    -- Result: Server created with Hourly Billing

I did some manual tests directly against the SoftLayer API:

  • Same format as libcloud:
    -- Request:
    curl -X POST --data '{"parameters":[[{"domain":"wandera.com","localDiskFlag":"true","maxMemory":"4","blockDevices":[{"device":"0","diskImage":{"capacity":100}}],"networkComponents":[{"maxSpeed":100}],"datacenter":{"name":"tor01"},"hourlyBillingFlag":"false","hostname":"test.example.com","startCpus":"2","operatingSystemReferenceCode":"UBUNTU_18_64","sshKeys":[{"id":key_id}]}]]}' -u <username>:<api_token> https://api.softlayer.com/rest/v3/SoftLayer_Virtual_Guest/createObjects.json
    -- Setting: "hourlyBillingFlag": "false"
    -- Result: Server created with Hourly Billing

  • Sent as boolean:
    -- Request:
    curl -X POST --data '{"parameters":[[{"domain":"wandera.com","localDiskFlag":"true","maxMemory":"4","blockDevices":[{"device":"0","diskImage":{"capacity":100}}],"networkComponents":[{"maxSpeed":100}],"datacenter":{"name":"tor01"},"hourlyBillingFlag":false,"hostname":"test.example.com","startCpus":"2","operatingSystemReferenceCode":"UBUNTU_18_64","sshKeys":[{"id":key_id}]}]]}' -u <username>:<api_token> https://api.softlayer.com/rest/v3/SoftLayer_Virtual_Guest/createObjects.json
    -- Setting: "hourlyBillingFlag": false
    -- Result: Server created with Monthly Billing
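A possible direction for a fix (an illustrative sketch, not the actual driver code): keep the flag as a real boolean when building the createObject payload rather than stringifying it.

def build_billing_flag(ex_hourly=True):
    # Keep hourlyBillingFlag a JSON boolean; the string "false" makes the
    # SoftLayer API fall back to its hourly default.
    return {"hourlyBillingFlag": bool(ex_hourly)}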

ImportError: Missing "OpenSSL" dependency during test

Summary

If I run:

pip install -r requirements-tests.txt
python3 setup.py test

I get the following error:

============================================================================================================ ERRORS =============================================================================================================
__________________________________________________________________________________ ERROR collecting libcloud/test/loadbalancer/test_nttcis.py ___________________________________________________________________________________
ImportError while importing test module '/home/eamanu/dev/libcloud/libcloud/test/loadbalancer/test_nttcis.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
libcloud/loadbalancer/drivers/nttcis.py:17: in <module>
    import OpenSSL
E   ModuleNotFoundError: No module named 'OpenSSL'

During handling of the above exception, another exception occurred:
libcloud/test/loadbalancer/test_nttcis.py:25: in <module>
    from libcloud.loadbalancer.drivers.nttcis import NttCisLBDriver
libcloud/loadbalancer/drivers/nttcis.py:20: in <module>
    raise ImportError('Missing "OpenSSL" dependency. You can install it '
E   ImportError: Missing "OpenSSL" dependency. You can install it using pip - pip install pyopenssl
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
=================================================================================================== 1 error in 134.47 seconds ===================================================================================================

Is there any reason for not adding pyopenssl to requirements-tests.txt?

S3 Drivers issue.

Summary

While uploading a file to S3, I am getting the error below.

ERROR:
<LibcloudError in <class 'libcloud.storage.drivers.s3.S3StorageDriver'> 'This bucket is located in a different region. Please use the correct driver.'>

Detailed Information
Version of libcloud: 2.6.1
Python version: 3.7
code:

import libcloud
from libcloud.storage.types import Provider
from libcloud.storage.providers import get_driver

FILE_PATH = 'filepath'

cls = get_driver(Provider.S3)
driver = cls('api key', 'api secret key')
container = driver.get_container(container_name='my-backups-12345')
extra = {'content_type': 'application/octet-stream'}

with open(FILE_PATH, 'rb') as iterator:
    obj = driver.upload_object_via_stream(iterator=iterator,
                                          container=container,
                                          object_name='images_backup.tar',
                                          extra=extra)

In the above code I used what I believe is the correct driver, but I still get the error above. Please help me with this issue. Am I missing something?
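One possibility (an assumption based on the error text, not a confirmed fix): the generic Provider.S3 driver defaults to us-east-1, so a bucket in another region needs the region passed explicitly. A sketch, with the region name as a placeholder:

from libcloud.storage.types import Provider
from libcloud.storage.providers import get_driver

# Placeholder credentials and region; recent libcloud versions accept a
# "region" keyword on the S3 storage driver.
cls = get_driver(Provider.S3)
driver = cls('api key', 'api secret key', region='eu-west-2')
container = driver.get_container(container_name='my-backups-12345')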


openstack v3 authentication for unscoped token fails with: TypeError: 'NoneType' object is not iterable

Summary

I try to authenticate against an internal OpenStack cloud that is based on OpenStack Rocky.

Detailed Information

My reproducer script is:

#!/usr/bin/python
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver
import libcloud.security

libcloud.security.VERIFY_SSL_CERT = False

OpenStack = get_driver(Provider.OPENSTACK)
o = OpenStack('myusername', 'PASSWORD',
              ex_force_auth_url='https://keystone.local:5000/v3/auth/tokens',
              ex_tenant_name='cloud',
              ex_token_scope='unscoped',
              ex_force_service_region='CustomRegion',
              ex_domain_name='ldap_users',
              ex_force_auth_version='3.x_password').list_images()

Running that with the latest libcloud version from pypi (currently 2.6.0) in a virtual env results in:

$ PYTHONPATH=. python ~/tmp/test-libcloud.py 
/home/tom/devel/apache/libcloud/venv-2.7/lib/python2.7/site-packages/urllib3-1.25.6-py2.7.egg/urllib3/connectionpool.py:1004: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning,
Traceback (most recent call last):
  File "/home/tom/tmp/test-libcloud.py", line 17, in <module>
    ex_force_auth_version='3.x_password').list_images()
  File "/home/tom/devel/apache/libcloud/libcloud/compute/drivers/openstack.py", line 372, in list_images
    self.connection.request('/images/detail').object, ex_only_active)
  File "/home/tom/devel/apache/libcloud/libcloud/common/openstack.py", line 225, in request
    raw=raw)
  File "/home/tom/devel/apache/libcloud/libcloud/common/base.py", line 538, in request
    action = self.morph_action_hook(action)
  File "/home/tom/devel/apache/libcloud/libcloud/common/openstack.py", line 292, in morph_action_hook
    self._populate_hosts_and_request_paths()
  File "/home/tom/devel/apache/libcloud/libcloud/common/openstack.py", line 334, in _populate_hosts_and_request_paths
    auth_version=self._auth_version)
  File "/home/tom/devel/apache/libcloud/libcloud/common/openstack_identity.py", line 185, in __init__
    service_catalog=service_catalog)
  File "/home/tom/devel/apache/libcloud/libcloud/common/openstack_identity.py", line 428, in _parse_service_catalog_auth_v3
    for item in service_catalog:
TypeError: 'NoneType' object is not iterable

The reason is that an unscoped v3 auth request does not return a service catalog. See https://docs.openstack.org/api-ref/identity/v3/?expanded=password-authentication-with-unscoped-authorization-detail,password-authentication-with-explicit-unscoped-authorization-detail,get-service-catalog-detail,password-authentication-with-scoped-authorization-detail#authentication-and-token-management
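An illustrative guard for the parsing step (a sketch only, not the actual libcloud method signature): treat a missing catalog as empty instead of iterating over None.

def parse_service_catalog_v3(service_catalog):
    # An unscoped v3 token response carries no "catalog" entry, so guard
    # against None before iterating.
    return [item for item in (service_catalog or [])]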

new driver: Kamatera

Hi, I'm starting to develop a driver for Kamatera

Starting with the compute API.

Per the documentation, I'll use this issue as the umbrella place for all my changes and to ask questions (unless there is somewhere else I should do that).
