google / apitools (License: Apache License 2.0)
Self explanatory
The upload is initialized and an initial POST is made here, but because auto_transfer is False, the HTTP response to that POST is not returned. Then we make a second POST, this time with bytes_http, throwing away the first upload ID and using the second one.
I'll get a pull request out to fix this once I've had an opportunity to do some testing.
There are copies of this code floating around: here, in https://github.com/craigcitro/apitools, "internal" at Google, in gsutil,
and possibly elsewhere.
Is there a migration plan in place? Is there a known set of features that differ? Is there a TODO list?
Most importantly: Can we (open source peoplez) help?
At a high level, when we get a response with custom object metadata such as "456": "def" in gsutil, set_unrecognized_field is being called twice:
Once from protojson.py.__decode_dictionary() with the non-unicode version of the string, i.e., '456'
and once from encoding.py(536)_ProcessUnknownMessages() with the unicode version of the string, i.e., u'456'
Resulting in a message with duplicate entries:
metadata: <MetadataValue
additionalProperties: [<AdditionalProperty
key: '456'
value: u'def'>, <AdditionalProperty
key: '456'
value: u'def'>]>
The gen_client help says
--discovery_url: URL (or "name/version") of the ...
but the code rejects the value if it doesn't come in the form name.version.
Example stacktrace:
$ gen_client --discovery_url="storage/v1" --outdir=foo client
Traceback (most recent call last):
File "/usr/local/bin/gen_client", line 9, in <module>
load_entry_point('google-apitools==0.4.1', 'console_scripts', 'gen_client')()
File "/usr/local/lib/python2.7/dist-packages/apitools/gen/gen_client.py", line 240, in run_main
appcommands.Run()
File "/usr/local/lib/python2.7/dist-packages/google/apputils/appcommands.py", line 791, in Run
return app.run()
File "/usr/local/lib/python2.7/dist-packages/google/apputils/app.py", line 238, in run
return _actual_start()
File "/usr/local/lib/python2.7/dist-packages/google/apputils/app.py", line 267, in _actual_start
really_start()
File "/usr/local/lib/python2.7/dist-packages/google/apputils/appcommands.py", line 788, in InterceptReallyStart
original_really_start(main=_CommandsStart)
File "/usr/local/lib/python2.7/dist-packages/google/apputils/app.py", line 220, in really_start
sys.exit(main(argv))
File "/usr/local/lib/python2.7/dist-packages/google/apputils/appcommands.py", line 773, in _CommandsStart
sys.exit(command.CommandRun(GetCommandArgv()))
File "/usr/local/lib/python2.7/dist-packages/google/apputils/appcommands.py", line 293, in CommandRun
ret = self.Run(argv)
File "/usr/local/lib/python2.7/dist-packages/apitools/gen/gen_client.py", line 203, in Run
codegen = _GetCodegenFromFlags()
File "/usr/local/lib/python2.7/dist-packages/apitools/gen/gen_client.py", line 109, in _GetCodegenFromFlags
discovery_doc = util.FetchDiscoveryDoc(FLAGS.discovery_url)
File "/usr/local/lib/python2.7/dist-packages/apitools/gen/util.py", line 289, in FetchDiscoveryDoc
discovery_url = NormalizeDiscoveryUrl(discovery_url)
File "/usr/local/lib/python2.7/dist-packages/apitools/gen/util.py", line 281, in NormalizeDiscoveryUrl
raise ValueError('Unrecognized value "%s" for discovery url')
ValueError: Unrecognized value "%s" for discovery url
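A sketch of how the normalization could accept both advertised forms (hypothetical logic, not the actual apitools implementation; the real NormalizeDiscoveryUrl lives in apitools/gen/util.py). Note it also supplies the missing % argument visible in the last line of the traceback above:

```python
def normalize_discovery_url(discovery_url):
    """Accept a full URL, "name.version", or "name/version".

    Hypothetical helper; the discovery URL template below is the
    standard legacy Google discovery endpoint.
    """
    if discovery_url.startswith(('http://', 'https://')):
        return discovery_url
    # Tolerate the "name/version" form advertised in the help text.
    if '/' in discovery_url:
        api_name, api_version = discovery_url.split('/', 1)
    elif '.' in discovery_url:
        api_name, api_version = discovery_url.split('.', 1)
    else:
        raise ValueError(
            'Unrecognized value %r for discovery url' % discovery_url)
    return ('https://www.googleapis.com/discovery/v1/apis/'
            '%s/%s/rest' % (api_name, api_version))
```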
We need to drop these two dependencies.
These two libraries get used in two places:
gen_client
uses them for processing command-line args. I'll probably take care of this in two steps.
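A minimal sketch of what the first step could look like, replacing appcommands/gflags-style parsing with the standard library's argparse (the flags shown here are an illustrative subset, not gen_client's full flag set):

```python
import argparse

def make_parser():
    # Illustrative subset of gen_client's command line; the real flag
    # set lives in apitools/gen/gen_client.py.
    parser = argparse.ArgumentParser(prog='gen_client')
    parser.add_argument('--discovery_url',
                        help='URL (or "name.version") of the discovery doc')
    parser.add_argument('--outdir', default='.',
                        help='Directory for generated output')
    parser.add_argument('command', choices=['client', 'proto'])
    return parser

args = make_parser().parse_args(
    ['--discovery_url=storage.v1', '--outdir=foo', 'client'])
```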
The function AcceptableMimeType in base/py/util.py incorrectly assumes that all MIME types will have a forward slash within them:
def AcceptableMimeType(accept_patterns, mime_type):
"""Return True iff mime_type is acceptable for one of accept_patterns.
Note that this function assumes that all patterns in accept_patterns
will be simple types of the form "type/subtype", where one or both
of these can be "*". We do not support parameters (i.e. "; q=") in
patterns.
Args:
accept_patterns: list of acceptable MIME types.
mime_type: the mime type we would like to match.
Returns:
Whether or not mime_type matches (at least) one of these patterns.
"""
if '/' not in mime_type:
raise exceptions.InvalidUserInputError(
'Invalid MIME type: "%s"' % mime_type)
This assumption fails for .p12 files, which are used for server-to-server authentication. The function mimetypes.guess_type returns x-pkcs12 for .p12 files instead of application/x-pkcs12. This may be a bug in mimetypes, but it should nevertheless be handled gracefully by apitools.
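One possible defensive fix (a sketch, not the actual patch) is to repair a bare subtype before matching; treating it as belonging to "application" is an assumption that happens to fit the .p12 case above:

```python
def normalize_mime_type(mime_type, default_prefix='application'):
    """Repair bare subtypes like "x-pkcs12" before pattern matching.

    Hypothetical helper: assumes a value with no slash is a bare
    subtype under default_prefix rather than invalid input.
    """
    if '/' not in mime_type:
        return '%s/%s' % (default_prefix, mime_type)
    return mime_type
```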
When clearing a field X, the '_IncludeFields' method resets X's parent fields. This breaks the request to be sent via the API.
https://github.com/google/apitools/blob/master/apitools/base/py/encoding.py#L233
Enum is referenced before definition.
Reproduce:
In [1]: from apitools.base.protorpclite import messages
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-1-8b7eda70bd5b> in <module>()
----> 1 from apitools.base.protorpclite import messages
/Users/sergioisidoro/apitools/apitools/base/protorpclite/messages.py in <module>()
402
403
--> 404 class Enum(six.with_metaclass(_EnumClass, object)):
405 """Base class for all enumerated types."""
406
/Users/sergioisidoro/apitools/apitools/base/protorpclite/messages.py in __init__(cls, name, bases, dct)
301 def __init__(cls, name, bases, dct):
302 # Can only define one level of sub-classes below Enum.
--> 303 if not (bases == (object,) or bases == (Enum,)):
304 raise EnumDefinitionError(
305 'Enum type %s may only inherit from Enum' % name)
NameError: global name 'Enum' is not defined
https://github.com/google/apitools/blob/master/apitools/base/protorpclite/messages.py#L303
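The original check relies on `bases == (object,)` short-circuiting so that the not-yet-defined name Enum is never evaluated; any metaclass helper that substitutes different bases (as some six.with_metaclass versions do) defeats that and surfaces the NameError. A sketch of a guard that does not reference the unbound name at all (class names here are stand-ins, not the apitools code):

```python
class EnumDefinitionError(Exception):
    """Stand-in for apitools' enum-definition error."""

_ENUM_BASE = []  # filled in once the base class itself exists

class _EnumClass(type):
    def __init__(cls, name, bases, dct):
        # Instead of referencing the global name "Enum" (which is
        # unbound while Enum itself is being created), consult a
        # holder that is only populated after the base class exists.
        if _ENUM_BASE and bases != (_ENUM_BASE[0],):
            raise EnumDefinitionError(
                'Enum type %s may only inherit from Enum' % name)
        super(_EnumClass, cls).__init__(name, bases, dct)
        if not _ENUM_BASE:
            _ENUM_BASE.append(cls)

class Enum(metaclass=_EnumClass):
    """Base class for all enumerated types."""

class Color(Enum):          # one level below Enum: allowed
    pass

rejected = False
try:
    class Shade(Color):     # two levels below Enum: rejected
        pass
except EnumDefinitionError:
    rejected = True
```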
pep8 has been renamed to pycodestyle; pep8 is no longer receiving updates.
Presently, if an Oauth2 token URI returns a transient error code (like a 503), it's up to the application to handle it. But apitools should be able to retry now that HttpAccessTokenRefreshError in oauth2client v1.5.2 allows inspection of the status code.
Something like this in the default retry function HandleExceptionsAndRebuildHttpConnections should work:
elif (isinstance(retry_args.exc,
oauth2client.client.HttpAccessTokenRefreshError)
and (retry_args.exc.status == TOO_MANY_REQUESTS or
retry_args.exc.status >= 500)):
logging.debug(
'Caught transient credential refresh error (%s), retrying',
retry_args.exc)
When generating a message with AdditionalProperties whose value type is "any", from apitools.base.py import extra_types is not present in the generated messages file. The extra_types module defines messages such as JsonValue and JsonObject which are referenced by the message. One gets the following error when trying to instantiate a message using these extra types:
File "apitools\base\protorpclite\messages.py", line 1992, in find_definition
'Could not find definition for %s' % name)
apitools.base.protorpclite.messages.DefinitionNotFoundError:
Could not find definition for extra_types.JsonValue
Up until now, what has saved this code from failing is a wildcard import in the generated __init__.py file that pulled in extra_types at the package level, together with the type lookup routine being designed to look for types up the hierarchy.
We should ensure that dumping to JSON always returns the same string. (This makes testing easier.)
(came up in #11)
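The standard library can already produce a canonical form by sorting keys; this is a sketch of the property we want, not apitools' implementation:

```python
import json

# Two logically-equal payloads with different insertion order.
a = json.dumps({'b': 1, 'a': 2}, sort_keys=True)
b = json.dumps({'a': 2, 'b': 1}, sort_keys=True)

# With sort_keys=True they serialize to byte-identical strings,
# which makes golden-file tests stable.
assert a == b == '{"a": 2, "b": 1}'
```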
This occurred when copying a 1MiB object with a low parallel composite upload threshold. This in turn split the request into two 512KiB calls to UploadObject made simultaneously across two threads.
It's a rare occurrence (can't reproduce it easily), but it seems like it should never happen.
Encountered exception while copying:
Traceback (most recent call last):
File "/usr/local/google/home/thobrla/gsutil/gslib/command.py", line 1680, in PerformTask
results = task.func(cls, task.args, thread_state=self.thread_gsutil_api)
File "/usr/local/google/home/thobrla/gsutil/gslib/copy_helper.py", line 279, in _PerformParallelUploadFileToObject
gzip_exts=None, allow_splitting=False)
File "/usr/local/google/home/thobrla/gsutil/gslib/copy_helper.py", line 1649, in _UploadFileToObject
dst_obj_metadata, preconditions, gsutil_api, logger)
File "/usr/local/google/home/thobrla/gsutil/gslib/copy_helper.py", line 1402, in _UploadFileToObjectNonResumable
provider=dst_url.scheme, fields=UPLOAD_RETURN_FIELDS)
File "/usr/local/google/home/thobrla/gsutil/gslib/cloud_api_delegator.py", line 233, in UploadObject
fields=fields)
File "/usr/local/google/home/thobrla/gsutil/gslib/gcs_json_api.py", line 1024, in UploadObject
apitools_strategy=apitools_transfer.SIMPLE_UPLOAD)
File "/usr/local/google/home/thobrla/gsutil/gslib/gcs_json_api.py", line 887, in _UploadObject
global_params=global_params)
File "/usr/local/google/home/thobrla/gsutil/gslib/third_party/storage_apitools/storage_v1_client.py", line 975, in Insert
download=download)
File "/usr/local/google/home/thobrla/gsutil/third_party/apitools/apitools/base/py/base_api.py", line 616, in _RunMethod
download)
File "/usr/local/google/home/thobrla/gsutil/third_party/apitools/apitools/base/py/base_api.py", line 598, in PrepareHttpRequest
self.__FinalizeRequest(http_request, url_builder)
File "/usr/local/google/home/thobrla/gsutil/third_party/apitools/apitools/base/py/base_api.py", line 517, in __FinalizeRequest
http_request.url = url_builder.url
File "/usr/local/google/home/thobrla/gsutil/third_party/apitools/apitools/base/py/base_api.py", line 184, in url
self.__scheme, self.__netloc, self.relative_path, self.query, ''))
File "/usr/local/google/home/thobrla/gsutil/third_party/apitools/apitools/base/py/base_api.py", line 176, in query
return urllib.parse.urlencode(self.query_params, doseq=True)
File "/usr/local/google/home/thobrla/gsutil/third_party/six/six.py", line 92, in __get__
delattr(obj.__class__, self.name)
AttributeError: urlencode
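The underlying pattern, sketched below with hypothetical names (see six's MovedAttribute machinery for the real one), is a descriptor that resolves an attribute once and then deletes itself from the class. Two threads can both enter __get__; the second delattr then raises the AttributeError seen above, and later six versions fix it by swallowing that error so the access is idempotent:

```python
import json

class _LazyAttr:
    """Sketch of a six-style self-removing lazy attribute."""

    def __init__(self, name, value):
        self.name = name
        self.value = value

    def __get__(self, obj, tp=None):
        # Cache the resolved value on the instance...
        setattr(obj, self.name, self.value)
        try:
            # ...then remove the descriptor from the class. Under
            # threads, two callers can race here: the first delattr
            # wins and a concurrent second one raises AttributeError,
            # so it must be tolerated.
            delattr(obj.__class__, self.name)
        except AttributeError:
            pass
        return self.value

class _Moves:
    dumps = _LazyAttr('dumps', json.dumps)

m = _Moves()
first = m.dumps({'a': 1})    # resolves, caches, removes the descriptor
second = m.dumps({'a': 1})   # served from the instance attribute
```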
Upload and Download classes support passing additional_headers which can include a user-agent, and base_api.py populates one if it's not supplied by the client.
But RefreshResumableUploadState just creates a generic http request.
This typically makes these refresh requests appear as if they come from a different user agent than the rest of the upload.
apitools/base/py/base_cli.py uses readline, which is not supported on Windows. Perhaps there is a way to use pyreadline.
As a side note, it would be nice to separate the CLI code into a separate package: because of some wildcard imports, a generated CLI is pulled in even when it is not used.
Version 1.11.0 of the six package was released yesterday and causes:
File "/Users/florenthemmi/dev/env/lib/python2.7/site-packages/apache_beam/internal/gcp/json_value.py", line 23, in <module>
from apitools.base.py import extra_types
File "/Users/florenthemmi/dev/env/lib/python2.7/site-packages/apitools/base/py/__init__.py", line 21, in <module>
from apitools.base.py.base_api import *
File "/Users/florenthemmi/dev/env/lib/python2.7/site-packages/apitools/base/py/base_api.py", line 31, in <module>
from apitools.base.protorpclite import message_types
File "/Users/florenthemmi/dev/env/lib/python2.7/site-packages/apitools/base/protorpclite/message_types.py", line 25, in <module>
from apitools.base.protorpclite import messages
File "/Users/florenthemmi/dev/env/lib/python2.7/site-packages/apitools/base/protorpclite/messages.py", line 1165, in <module>
class Field(six.with_metaclass(_FieldMeta, object)):
TypeError: Error when calling the metaclass bases
metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
This issue has been encountered by several more folks and has been reported as issue benjaminp/six#210.
This issue is to track its resolution, or to make whatever change in apitools could fix this.
What is the status of Python 3 support for google-apitools?
We do not list any language classifiers here:
Line 89 in 28ac38a
Do we know if any extra work is required to make the package Python3-compatible?
Consider this scenario:
message SomeMessage {
required int32 x = 1;
}
message SomeOtherMessage {
repeated SomeMessage messages = 1;
required string uid = 2;
}
The generated code will be something like this:
[...]
from protorpc import messages
[...]
class SomeOtherMessage(messages.Message):
messages = messages.MessageField('SomeMessage',...)
uid = messages.StringField(2)
When using the auto-generated code above, an error will be raised, e.g. 'MessageField' object has no attribute 'StringField', since the SomeOtherMessage.messages field shadows the protorpc.messages module. The messages module from protorpc should be imported under a qualified alias to avoid these sorts of name collisions.
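The collision can be reproduced without protorpc, using a stand-in namespace with hypothetical field constructors (the real ones take more arguments), and the alias fix is shown alongside:

```python
import types

# Stand-in for the protorpc "messages" module.
messages = types.SimpleNamespace(
    MessageField=lambda name, number: ('msg', name, number),
    StringField=lambda number: ('str', number),
)

def build_broken():
    class SomeOtherMessage:
        # The RHS still resolves to the module-level name, but the
        # assignment binds a class attribute that shadows it...
        messages = messages.MessageField('SomeMessage', 1)
        # ...so this lookup hits the field above, not the module,
        # reproducing "object has no attribute 'StringField'".
        uid = messages.StringField(2)
    return SomeOtherMessage

try:
    build_broken()
    shadowing_error = None
except AttributeError as e:
    shadowing_error = e

# Aliasing the import sidesteps the collision entirely:
_messages = messages

class SomeOtherMessage:
    messages = _messages.MessageField('SomeMessage', 1)
    uid = _messages.StringField(2)
```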
The BigQuery API had a discovery doc which contained the following (more or less):
{
"schemas": {
"JobReference": {
"id": "JobReference",
"type": "object",
"properties": {
"location": {
"$ref": "Location"
}
}
},
"Location": {
"id": "Location",
"type": "string"
}
}
}
and it seems that apitools doesn't like a schema that's just a string:
Traceback (most recent call last):
File "apitools/gen/gen_client.py", line 347, in main
return args.func(args) or 0
File "apitools/gen/gen_client.py", line 160, in GenerateClient
codegen = _GetCodegenFromFlags(args)
File "apitools/gen/gen_client.py", line 105, in _GetCodegenFromFlags
apitools_version=args.apitools_version)
File "apitools/gen/gen_client_lib.py", line 95, in __init__
schema_name, schema)
File "apitools/gen/message_registry.py", line 266, in AddDescriptorFromSchema
schema.get('type'))
ValueError: ('Cannot create message descriptors for type %s', u'string')
Currently, google-apitools releases do not contain any changelog, and it is hard to see what the actual changes between two versions are. When you tag a release, it would be nice if that could be added.
Also, if you click on "Releases" in the github UI, it pretends that 0.5.16 (which it calls "HttpError Class") is the latest release.
This would make it easier to maintain OS packages of google-apitools, like for example http://pkgsrc.se/www/py-google-apitools :)
When a decode encounters a MessageField which has a non-dict value (e.g. "None" or "1"), an uncaught AttributeError is raised. The following is an associated stack trace:
File "apitools/base/py/encoding.py", line 110, in DictToMessage
return JsonToMessage(message_type, json.dumps(d))
File "apitools/base/py/encoding.py", line 104, in JsonToMessage
return _ProtoJsonApiTools.Get().decode_message(message_type, message)
File "apitools/base/py/encoding.py", line 290, in decode_message
message_type, result)
File "apitools/base/protorpclite/protojson.py", line 211, in decode_message
message = self.__decode_dictionary(message_type, dictionary)
File "apitools/base/protorpclite/protojson.py", line 284, in __decode_dictionary
for item in value]
File "apitools/base/py/encoding.py", line 312, in decode_field
field.message_type, json.dumps(value))
File "apitools/base/py/encoding.py", line 290, in decode_message
message_type, result)
File "apitools/base/protorpclite/protojson.py", line 211, in decode_message
message = self.__decode_dictionary(message_type, dictionary)
File "apitools/base/protorpclite/protojson.py", line 262, in __decode_dictionary
for key, value in six.iteritems(dictionary):
File "six/__init__.py", line 605, in iteritems
return d.iteritems(**kw)
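A sketch of the kind of guard the decoder could add (hypothetical helper; the real fix would live in __decode_dictionary or decode_field), turning the opaque AttributeError into a readable error:

```python
def require_json_object(value, field_name):
    """Reject non-dict values for message fields with a clear error
    instead of letting dict iteration raise AttributeError."""
    if not isinstance(value, dict):
        raise ValueError(
            'Expected a JSON object for message field %r, got %r'
            % (field_name, value))
    return value
```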
Presently, apitools can generate nested classes, which, because they aren't defined at the top-level of the module, are un-pickleable. Pickling is necessary for passing apitools objects to other processes, which in turn is useful when optimizing performance by spreading work out over multiple processes and threads.
This is a feature request to add __reduce__ and other appropriate logic so that apitools-generated objects can be pickled without any intervention by library consumers. This could be added either to the generated classes or to the base message classes in protorpclite.
As a workaround, consumers of a generated library can manually convert using encoding.MessageToJson before pickling and JsonToMessage after unpickling, but that is extra work for each consumer.
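A sketch of the __reduce__ approach with hypothetical classes (real apitools messages would rebuild via MessageToJson/JsonToMessage rather than a plain attribute tuple; note that Python 3.4+ can often pickle nested classes via __qualname__, so this matters mainly for the Python 2 era described here):

```python
import pickle

class Outer:
    class Inner:
        """A nested class: pickle cannot import "Inner" from the
        module top level, so instances need help."""

        def __init__(self, x):
            self.x = x

        def __reduce__(self):
            # Rebuild through a module-level factory that pickle
            # *can* locate by name.
            return (_make_inner, (self.x,))

def _make_inner(x):
    return Outer.Inner(x)

restored = pickle.loads(pickle.dumps(Outer.Inner(5)))
```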
With a brand-new checkout, running tox -e py27
fails with
======================================================================
ERROR: testGeneration (apitools.gen.client_generation_test.ClientGenerationTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File ".../apitools/apitools/gen/client_generation_test.py", line 61, in testGeneration
retcode = subprocess.call(args)
File "/usr/lib/python2.7/subprocess.py", line 522, in call
return Popen(*popenargs, **kwargs).wait()
File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
-------------------- >> begin captured logging << --------------------
root: INFO: Testing API drive.v2 with command line: gen_client --client_id=12345 --client_secret=67890 --discovery_url=drive.v2 --outdir=generated --overwrite client
--------------------- >> end captured logging << ---------------------
Incorrectly specified --unelidable_request_methods arguments are silently ignored.
It would be nice to fail fast on this, so that the user does not discover the mistake much later in the process.
I'd like to be able to use non-seekable streams, such as the werkzeug.wsgi.LimitedStream that Flask uses for streaming request and response data. This is how we do streaming to/from Google Cloud Storage. However, Upload.__StreamMedia calls stream.seek() to verify completion:
File "site-packages/gcloud/storage/blob.py", line 350, in upload_from_file
finish_callback=lambda *args: None)
File "site-packages/apitools/base/py/transfer.py", line 807, in StreamInChunks
additional_headers=additional_headers)
File "site-packages/apitools/base/py/transfer.py", line 773, in __StreamMedia
self.stream.seek(0, os.SEEK_END)
AttributeError: 'LimitedStream' object has no attribute 'seek'
There's even a TODO:
# TODO(craigcitro): Decide how to handle errors in the
# non-seekable case.
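Until non-seekable streams are supported directly, callers can buffer small payloads; this is a workaround sketch only, since it defeats streaming for large files (a real fix would teach transfer.py to track bytes sent instead of seeking):

```python
import io

def ensure_seekable(stream):
    """Return a seekable stream, buffering non-seekable input in memory.

    Hypothetical helper: anything without a truthy seekable() is read
    fully into a BytesIO, so it only suits payloads that fit in memory.
    """
    if getattr(stream, 'seekable', lambda: False)():
        return stream
    return io.BytesIO(stream.read())
```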
This makes using the library a very jarring experience for application start-up times.
E.g. bool(-1) returns True, so the while condition is always true, and the loop only ends when page_token is None.
Not sure if this is working as intended; I would think not, and would expect the method to return nothing.
@thobrla I filed:
https://code.google.com/p/google-cloud-sdk/issues/detail?id=165
How does one pull such a change into google-cloud-sdk?
This is an FR to use separate classes for known status codes, which all may derive from HttpError, for instance:
class ConflictError(HttpError):
"""This is an HTTP Conflict 409 error."""
Why? The reason is to greatly simplify control flow for the caller. It is very common to want to switch on specifically the status code. For instance, let's say you want to display a resource, but if it doesn't yet exist you want to prompt for its creation. The current recommended flow is:
try:
my_resource = MyMessageClass.Get() # calls API
except apitools_exceptions.HttpError as e:
if e.status_code == 409:
my_resource = OfferCreateInteractively()
elif e.status_code == 403:
raise PermissionError('Please go to your console and grant access to the user account')
else:
raise UserFacingCatchAllError(e)
With real classes, the control flow could look like this:
try:
my_resource = MyMessageClass.Get() # calls API
except apitools_exceptions.HttpConflictError:
my_resource = OfferCreateInteractively()
except apitools_exceptions.HttpForbiddenError:
raise MyPermissionError('Please go to your console and grant access to the user account')
except apitools_exceptions.HttpError as e:
raise UserFacingCatchAllError(e)
This way, we greatly simplify control flow for the caller while still being fully backwards compatible.
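A sketch of what the hierarchy and dispatch could look like (class and helper names are illustrative, not apitools' actual API):

```python
class HttpError(Exception):
    """Conceptual stand-in for apitools_exceptions.HttpError."""
    status_code = None

class HttpForbiddenError(HttpError):
    """HTTP 403 Forbidden."""
    status_code = 403

class HttpConflictError(HttpError):
    """HTTP 409 Conflict."""
    status_code = 409

# Map status codes to the most specific subclass available.
_ERROR_CLASSES = {
    cls.status_code: cls for cls in (HttpForbiddenError, HttpConflictError)
}

def error_for_status(status_code):
    """Return the exception class to raise for an HTTP status code,
    defaulting to the base HttpError for unmapped codes."""
    return _ERROR_CLASSES.get(status_code, HttpError)
```

Because every subclass still derives from HttpError, existing `except HttpError` handlers keep working unchanged.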
7ccade5 added code that calls read() on a StreamSlice, effectively reading the entire contents of a stream into memory. This takes away one of the main advantages of streams: being able to read them gradually, buffering only part of their contents in memory, which is what allows the transfer of large files. Attempting to upload a file larger than the available memory will fail.
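Chunked reads preserve the streaming property; a generic sketch of the pattern (not the transfer.py code):

```python
import io

def copy_in_chunks(src, dst, chunksize=1 << 20):
    """Copy src to dst while never holding more than one chunk
    (default 1 MiB) in memory; returns the number of bytes copied."""
    total = 0
    while True:
        chunk = src.read(chunksize)
        if not chunk:
            break
        dst.write(chunk)
        total += len(chunk)
    return total
```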
apitools is largely tested by a set of integration tests, which currently aren't included in this repo. I need to:
@craigcitro WDYT of this?
To give some context, I noticed the large number of TODO comments after filing #2, which had an old link to https://github.com/craigcitro/apitools, so I did a git grep -i citro.
We should probably be pinning the linter to a specific version.
A method that takes a path parameter called "foo" and a request body with a field called "foo" has the request message and path parameter elided, on the assumption that the value of "foo" specified in the request body will be the same one specified in the path.
This ignores the case where the "foo" field in the request body is marked as @OutputOnly, where it's not valid to pass the value in the request body.
--unelidable_request_methods can specify that the message should not be elided, but ideally the eliding logic would skip eliding messages when @OutputOnly is involved.
I believe the necessary change is somewhere in _NeedRequestType here.
With new version 1.8.0, pylint introduced checks for new errors. This is causing CI lint builds to fail.
I'm going to use the nose.tools.timed(limit)
decorator as in
googleapis/oauth2client#85
googleapis/oauth2client#89
to determine where the slow test is and why.
The Chromium project (www.chromium.org) pulls in apitools indirectly through the catapult (https://github.com/catapult-project/catapult/) and gsutil (https://github.com/GoogleCloudPlatform/gsutil) source repositories. In order for Chromium to be pulled into various Linux source distributions there's a requirement that all of the third party files pass the Linux licensecheck utility. Currently there are many files in the apitools repository missing per-file licenses. From a current run of licensecheck, they are:
$ licensecheck -r . | grep "No copyright"
./run_pylint.py: *No copyright* UNKNOWN
./samples/storage_sample/storage/__init__.py: *No copyright* UNKNOWN
./samples/storage_sample/storage/storage_v1_client.py: *No copyright* UNKNOWN
./samples/storage_sample/storage/storage_v1.py: *No copyright* UNKNOWN
./samples/storage_sample/storage/storage_v1_messages.py: *No copyright* UNKNOWN
./samples/storage_sample/downloads_test.py: *No copyright* UNKNOWN
./samples/storage_sample/uploads_test.py: *No copyright* UNKNOWN
./ez_setup.py: *No copyright* UNKNOWN
./apitools/__init__.py: *No copyright* UNKNOWN
./apitools/gen/service_registry.py: *No copyright* UNKNOWN
./apitools/gen/util.py: *No copyright* UNKNOWN
./apitools/gen/__init__.py: *No copyright* UNKNOWN
./apitools/gen/extended_descriptor.py: *No copyright* UNKNOWN
./apitools/gen/util_test.py: *No copyright* UNKNOWN
./apitools/gen/command_registry.py: *No copyright* UNKNOWN
./apitools/gen/client_generation_test.py: *No copyright* UNKNOWN
./apitools/gen/message_registry.py: *No copyright* UNKNOWN
./apitools/gen/gen_client.py: *No copyright* UNKNOWN
./apitools/gen/gen_client_lib.py: *No copyright* UNKNOWN
./apitools/base/py/exceptions.py: *No copyright* UNKNOWN
./apitools/base/py/util.py: *No copyright* UNKNOWN
./apitools/base/py/testing/mock.py: *No copyright* UNKNOWN
./apitools/base/py/testing/__init__.py: *No copyright* UNKNOWN
./apitools/base/py/testing/testclient/fusiontables_v1_messages.py: *No copyright* GENERATED FILE
./apitools/base/py/testing/testclient/__init__.py: *No copyright* UNKNOWN
./apitools/base/py/testing/testclient/fusiontables_v1_client.py: *No copyright* UNKNOWN
./apitools/base/py/testing/mock_test.py: *No copyright* UNKNOWN
./apitools/base/py/credentials_lib.py: *No copyright* GENERATED FILE
./apitools/base/py/base_cli.py: *No copyright* GENERATED FILE
./apitools/base/py/stream_slice_test.py: *No copyright* UNKNOWN
./apitools/base/py/encoding_test.py: *No copyright* UNKNOWN
./apitools/base/py/cli.py: *No copyright* UNKNOWN
./apitools/base/py/stream_slice.py: *No copyright* UNKNOWN
./apitools/base/py/http_wrapper.py: *No copyright* UNKNOWN
./apitools/base/py/extra_types_test.py: *No copyright* UNKNOWN
./apitools/base/py/base_api_test.py: *No copyright* UNKNOWN
./apitools/base/py/transfer_test.py: *No copyright* UNKNOWN
./apitools/base/py/batch_test.py: *No copyright* UNKNOWN
./apitools/base/py/__init__.py: *No copyright* UNKNOWN
./apitools/base/py/http_wrapper_test.py: *No copyright* UNKNOWN
./apitools/base/py/list_pager_test.py: *No copyright* UNKNOWN
./apitools/base/py/encoding.py: *No copyright* UNKNOWN
./apitools/base/py/list_pager.py: *No copyright* UNKNOWN
./apitools/base/py/credentials_lib_test.py: *No copyright* UNKNOWN
./apitools/base/py/buffered_stream.py: *No copyright* UNKNOWN
./apitools/base/py/base_api.py: *No copyright* UNKNOWN
./apitools/base/py/util_test.py: *No copyright* UNKNOWN
./apitools/base/py/extra_types.py: *No copyright* UNKNOWN
./apitools/base/py/buffered_stream_test.py: *No copyright* UNKNOWN
./apitools/base/py/batch.py: *No copyright* UNKNOWN
./apitools/base/py/transfer.py: *No copyright* UNKNOWN
./apitools/base/py/app2.py: *No copyright* UNKNOWN
./apitools/base/__init__.py: *No copyright* UNKNOWN
We'd like to ask that per-file licenses be added to these files to make it easier to integrate apitools not only into Chromium, but also Linux distributions in general. Thanks.
This happens in the Apache Beam Python SDK when trying to estimate the size of a GCS file pattern using a thread pool.
It seems to be due to the global logger set/reset at the following location:
https://github.com/google/apitools/blob/master/apitools/base/py/encoding.py#L282
When invoked via a ThreadPool, some threads might see logging.ERROR as the old_level and reset the global logger to logging.ERROR.
Here are the details of the errors: https://travis-ci.org/google/apitools/builds/94512014.
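One way to make the save/restore safe under a ThreadPool is to serialize it with a lock (a sketch with hypothetical names; alternatively the code could stop mutating the global logger level at all):

```python
import logging
import threading

_LOG_LEVEL_LOCK = threading.Lock()

def run_with_log_level(level, fn, *args, **kwargs):
    """Temporarily set the root logger level around fn.

    The lock ensures concurrent callers cannot observe each other's
    temporary level and mistakenly restore it as the "old" level.
    """
    root = logging.getLogger()
    with _LOG_LEVEL_LOCK:
        old_level = root.level
        root.setLevel(level)
        try:
            return fn(*args, **kwargs)
        finally:
            root.setLevel(old_level)
```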
httplib2 is the only blocking dependency for this (https://caniusepython3.com/project/google-apitools), and it has a newer version that supports Python 3. Is there anything else blocking this feature? Is it possible to get it on the roadmap?
So, this change further increases the delta between error handling in the batch and non-batch cases. After discussing with @craigcitro: in the short term we can probably update BatchHttpRequest#_Execute to use each request's check_response_func
https://github.com/google/apitools/blob/master/apitools/base/py/batch.py#L409
and in the longer run, we should further unify the code paths (and error handling) of batch and non-batch requests.
See @craigcitro comment.
PS Craig, https://github.com/google/protorpc does not have issues turned on.
Currently, failing to parse a response as the appropriate type from the server leads to somewhat incomprehensible error messages (usually related to type mismatches between a dict and a str).
Instead, we should provide something clearer, along the lines of: Could not parse response "{...}" as object of type <typename>.
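A sketch of the wrapping we'd want (hypothetical helper around the decode call, not the apitools code path):

```python
import json

def parse_as_object(data, type_name):
    """Decode data as a JSON object, raising a readable error that
    names both the payload and the expected type on failure."""
    try:
        obj = json.loads(data)
    except ValueError as e:
        raise ValueError('Could not parse response %r as object of '
                         'type %s: %s' % (data, type_name, e))
    if not isinstance(obj, dict):
        raise ValueError('Could not parse response %r as object of '
                         'type %s: got %s instead'
                         % (data, type_name, type(obj).__name__))
    return obj
```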
A discovery doc describing the following type (conceptually map<string, int64>)
"MyMapOfValues": {
"description": "",
"type": "object",
"additionalProperties": {
"type": "string",
"format": "int64"
}
}
gets converted to
@encoding.MapUnrecognizedFields('additionalProperties')
class MyMapOfValues(_messages.Message):
class AdditionalProperty(_messages.Message):
key = _messages.StringField(1)
value = _messages.IntegerField(2)
additionalProperties = _messages.MessageField('AdditionalProperty', 1, repeated=True)
Unfortunately decoding json payloads like:
message = encoding.JsonToMessage(MyMapOfValues, '{"myValue": "5"}')
results in
ValidationError: Expected type (<type 'int'>, <type 'long'>) for field value, found 5 (type <type 'unicode'>)
note that
message = encoding.JsonToMessage(MyMapOfValues, '{"myValue": 5}')
works as expected.
In filing this I realized it was changed at HEAD, but not in the version I have locally. Have you cut a new release yet?
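The asymmetry comes from JSON transporting int64 values as strings (to avoid precision loss); a decoding hook could coerce them before field validation. A sketch with a hypothetical helper:

```python
def coerce_int64(value):
    """Turn the string form of an int64 back into an int before it is
    assigned to an IntegerField; pass real ints through unchanged."""
    if isinstance(value, str):
        return int(value)
    return value
```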
See the Upload constructor
def __init__(self, stream, mime_type, total_size=None, http=None,
close_stream=False, chunksize=None, auto_transfer=True,
and the Upload.FromStream factory.
These files use the print statement instead of the print() function; this fails under Python 3.
git grep 'print '
(ignoring some of the comment/docstring hits) shows the bad files:
If a limit is provided, YieldFromList should enforce that the requested batch size is less than the remaining limit. Otherwise, more data will be requested from the service than requested by the user.
This should apply to the limit as it decreases, for example:
Request 1: limit: 90, batch_size: 50 --> request with batch_size: 50
Request 2: limit: 40, batch_size: 50 --> request with batch_size: 40
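The adjustment above can be sketched as a small helper (hypothetical; the real change would live inside YieldFromList's paging loop):

```python
def effective_batch_size(limit, batch_size):
    """Never ask the service for more items than the caller still
    wants; limit=None means unlimited."""
    if limit is None:
        return batch_size
    return min(batch_size, limit)
```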