googleapis / google-cloud-python
Google Cloud Client Library for Python
Home Page: https://googleapis.github.io/google-cloud-python/
License: Apache License 2.0
Yes, under the hood this would do a copy and delete.
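A minimal sketch of that copy-then-delete approach; move_key, get_key, and copy_key here are hypothetical names for illustration, not confirmed API:
def move_key(bucket, src_path, dst_path):
    # There is no native rename, so a "move" is a copy to the new path
    # followed by deleting the original key.
    key = bucket.get_key(src_path)
    bucket.copy_key(key, dst_path)
    key.delete()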
Right now we leave people guessing when a gcc command fails.
Let's check for pycrypto and pyopenssl, and if they're not there, make sure that python-dev is installed so that the compilation won't fail. Otherwise, say you need to install the Python dev headers or the actual packages for openssl and crypto...
(Linux-specific.)
Let's also check into this for Windows and Mac - maybe they have their act together and do this properly...
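A minimal sketch of that pre-flight check (a hypothetical helper, not something the library ships): try to import the crypto packages and fail with an actionable message instead of a cryptic gcc error.
import sys

def check_crypto_dependencies():
    # pycrypto installs as Crypto; pyopenssl installs as OpenSSL.
    for module in ('Crypto', 'OpenSSL'):
        try:
            __import__(module)
            return
        except ImportError:
            continue
    sys.stderr.write(
        'Neither pycrypto nor pyopenssl is installed. Building them '
        'needs the Python dev headers; on Debian/Ubuntu try '
        '"sudo apt-get install python-dev libssl-dev" and then '
        '"pip install pycrypto pyopenssl".\n')
    sys.exit(1)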
To get rid of the getattr.
I'm not sure if it is a bug, but when I tried to save() an entity with a datetime property, this error appeared:
pb_attr, pb_value = helpers.get_protobuf_attribute_and_value(value)
File "/usr/local/lib/python2.7/dist-packages/gcloud/datastore/helpers.py", line 39, in get_protobuf_attribute_and_value
name, value = 'timestamp_microseconds', time.mktime(val.timetuple())
NameError: global name 'time' is not defined
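The NameError points at a missing import in helpers.py: the code calls time.mktime() without importing the time module. A minimal sketch of the working conversion once the import is in place:
import time
from datetime import datetime

# The failing line converts a datetime to epoch seconds; it only works
# if the time module is imported at the top of helpers.py.
val = datetime(2014, 1, 1)
name, value = 'timestamp_microseconds', time.mktime(val.timetuple())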
Also - how would this work when it comes to moving to the GoogleCloudPlatform page...?
Ideal syntax:
query = dataset.query().ancestor(key)
print query.fetch()
or
query = dataset.query().ancestor(['Kind', 'parent', 'Kind', 'child'])
print query.fetch()
Compute Engine: https://developers.google.com/compute/docs/reference/latest/
Python Compute Engine: https://developers.google.com/compute/docs/api/python-guide
Boto EC2: https://github.com/boto/boto/tree/develop/boto/ec2
Boto EC2 tutorial: https://boto.readthedocs.org/en/latest/ec2_tut.html
Boto EC2 API docs: https://boto.readthedocs.org/en/latest/ref/ec2.html
These exist to an extent but aren't universal and certainly not well documented.
There are two options. Option 1 makes the namespace its own level in the hierarchy:
from gcloud import datastore
dataset = datastore.get_dataset(...)
namespace = dataset.namespace('my-namespace')
person = namespace.entity('Person')
person['name'] = 'JJ'
person.save()
query = namespace.query('Person')
print query.fetch()
Option 2 treats the namespace as just a flag on entities and queries:
from gcloud import datastore
dataset = datastore.get_dataset(...)
person = dataset.entity('Person')
person.namespace('my-namespace')
person['name'] = 'JJ'
person.save()
query = dataset.query('Person').namespace('my-namespace')
print query.fetch()
or
person = dataset.entity('Person')
person['name'] = 'JJ'
person.save(namespace='my-namespace')
I currently prefer option 2, where a namespace isn't a separate level in the hierarchy, but simply a flag passed around on queries or entities.
I think we may need to rework the structure a bit.
We can use raw_input(), etc. to take names and other fields from the user to make the process more interactive while getting started (and show the code we're actually running as we run it).
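A minimal sketch of that interactive flow, assuming Python 2's raw_input() and the dataset/entity calls used elsewhere on this page; the prompts and echoed code are illustrative only:
from gcloud import datastore

dataset_id = raw_input('Dataset ID: ')
print "dataset = datastore.get_dataset(%r)" % dataset_id
dataset = datastore.get_dataset(dataset_id)  # credential args elided

name = raw_input('Name for the new Person: ')
print "person = dataset.entity('Person')"
person = dataset.entity('Person')
print "person['name'] = %r" % name
person['name'] = name
print "person.save()"
person.save()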
Something like gcloud.auth would allow every service subpackage to reuse the same credentials.
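A minimal sketch of that idea; gcloud.auth and its login() function are hypothetical names for the proposal, not an existing API:
from gcloud import auth, datastore, storage

# Authenticate once; every service subpackage reuses the credentials.
auth.login(client_email='...', private_key_path='...')  # hypothetical API
dataset = datastore.get_dataset('my-dataset-id')
bucket = storage.get_bucket('my-bucket')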
A force-delete on a bucket would just be a shortcut for:
if force:
    for key in bucket:
        key.delete()
If people just want to kill a bucket, we shouldn't make it more complicated to do this.
Described here: https://developers.google.com/storage/docs/accesslogs
This seems like a pain in the ass to deal with if they are immutable:
entity = dataset.entity('Person')
entity.key(entity.key().name('name'))
versus
entity = dataset.entity('Person')
entity.key().name('name')
What do we really get by making a Key immutable?
A Query being immutable makes great sense - chaining things together is easy and you can "fork" a query into several different ones. But it's unlikely we'll need to fork a key...
Described here: https://developers.google.com/storage/docs/object-versioning
Right now we're using httplib2 because oauth2client works really nicely with it; however, httplib2 doesn't support streaming downloads. This means a .request() will load the entire response content into memory, which won't work at all for a big file.
To get around this, we chunk the file up using the Range HTTP header and request pieces of it until the end. The better way to handle this, though, is with a streaming HTTP library:
stream = <send http request>
with open('output.txt', 'wb') as output:
    data = stream.read(CHUNK_SIZE)
    while data:
        output.write(data)
        data = stream.read(CHUNK_SIZE)
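For example, with the requests library (one possible streaming HTTP client, not something the codebase uses today), the same loop looks like:
import requests

CHUNK_SIZE = 1024 * 1024  # 1 MB per chunk
url = 'https://example.com/big-file'  # placeholder URL

# stream=True keeps the body out of memory until we iterate over it.
response = requests.get(url, stream=True)
with open('output.txt', 'wb') as output:
    for chunk in response.iter_content(chunk_size=CHUNK_SIZE):
        output.write(chunk)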
To make this work we need to get oauth2client to work with that library.
Is there a way to run multiple requests in parallel?
It would be really nice to be able to execute requests asynchronously, e.g.
future = Query('MyKind').fetch_async()
result = future.get_result()
NDB has great support for async operations using an event loop.
Is it something you guys might consider for future versions of the client?
Not just a JWT certificate.
Right now, the only way to develop this is by hitting the live Cloud Datastore API. Is there a way we can use these API calls in unit tests without mocking?
Can we have a "local datastore" server that understands the API and makes it easy for local development?
We need to be consistent with these names, and project_id is the appropriate argument name. If we don't fix this quickly, people may start depending on this as a kwarg and it will be a nightmare to fix.
Currently the ACL class is just a container, and fetching the actual metadata is done with either the Bucket or the Key class; it would be nice to make the ACL class standalone.
Something like:
acl = storage.ACL('/path/to/bucket')  # or /path/to/key
acl.grant(user_email='user@example.com', perm=ACL.READ)
# or
acl.grant['user@example.com'] = ACL.READ
acl.save()
print acl.get(user_email='user@example.com')
# or
print acl['user@example.com']
Or wait for the new primitive client that simplifies this.
I just installed from scratch and it appears that libffi-dev was a required dependency. Not sure where or how... but installing it fixed the errors I was getting...
Can we add this to the README?
For example:
connection = get_connection()
bucket = connection['bucket-name']
key = bucket['/path/to/file.txt']
del key
This would be a neat abstraction, though it's certainly not necessary.
Right now it's not really clear how to check for a key's existence (do I need to do if Key('name') in bucket, or just 'name' in bucket?).
We need to make sure these special methods are documented with examples in the API docs for the Key class.
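A minimal sketch of those special methods using hypothetical toy classes, with an in-memory dict standing in for real API calls; it covers both membership forms from the question above:
class Key(object):
    def __init__(self, name):
        self.name = name

    def delete(self):
        print 'deleting %s' % self.name  # stand-in for the real API call

class Bucket(object):
    def __init__(self):
        self._keys = {}  # path -> Key

    def __getitem__(self, path):
        return self._keys[path]

    def __contains__(self, path):
        # Accept both Key('name') in bucket and 'name' in bucket.
        if isinstance(path, Key):
            path = path.name
        return path in self._keys

    def __delitem__(self, path):
        # del bucket['/path/to/file.txt'] removes and deletes the key.
        self._keys.pop(path).delete()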
File "/home/albertog/Documents/spyder-2.3.0beta1/scripts/semtable/test/test.py", line 43, in main
q.fetch(1)
File "gcloud/datastore/query.py", line 247, in fetch
return [Entity.from_protobuf(entity) for entity in entity_pbs]
File "gcloud/datastore/entity.py", line 148, in from_protobuf
value = helpers.get_value_from_protobuf(property_pb)
File "gcloud/datastore/helpers.py", line 82, in get_value_from_protobuf
datetime.timedelta(microseconds=microseconds))
AttributeError: type object 'datetime.datetime' has no attribute 'timedelta'
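The AttributeError suggests helpers.py imported the datetime class (from datetime import datetime) and then tried to reach timedelta through it; timedelta lives on the datetime module, not on the class. A sketch of the working form:
import datetime

microseconds = 1388534400000000
# timedelta is a module attribute, so qualify it via the module,
# not via the datetime.datetime class.
value = (datetime.datetime.utcfromtimestamp(0) +
         datetime.timedelta(microseconds=microseconds))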
But make it default to False (http://www.appneta.com/blog/s3-list-get-bucket-default/)
How about we use the name gcloud and then the version, i.e. gcloud (0.1)?
How can I get the demo.key of my dataset?
Instead of always asking for all the scopes at the same time.