
google-cloud-rs's People

Contributors

dependabot-preview[bot], hirevo, kornholi, lht, mwilliammyers, yagince


google-cloud-rs's Issues

Release on crates.io

Hi!
I was wondering if you would consider releasing this crate on crates.io.
I read from a previous issue that there are future plans to bundle this repo together with generated REST bindings and eventually release all of it, but I would like you to consider releasing a preview-like version of this crate in its current state as well.

My selfish reason for this request is the extra-long download time for the git submodule in my CI system, as well as the build breaking on the current nightly toolchain because rustfmt is not available on it (and is required by the build.rs script).

Consider vendoring the `googleapis/googleapis` submodule

As mentioned in #24 and #25, cloning and building the google-cloud-rs repository from scratch is currently very slow, mostly due to the clone of the googleapis/googleapis repository, which we use as a submodule.

To address this issue, we could consider vendoring the protobuf files from googleapis/googleapis while appropriately crediting them and including their license notice.
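
A minimal sketch of what the build script could look like with vendored protos, assuming they live under a proto/ directory in the repository; the paths and the choice of prost-build here (rather than tonic-build, which the crate may actually use) are illustrative only:

// build.rs — sketch only, assuming the googleapis protos are vendored under `proto/`.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    prost_build::Config::new().compile_protos(
        &["proto/google/datastore/v1/datastore.proto"],
        &["proto/"], // include root for the vendored googleapis tree
    )?;
    // Rebuild when the vendored protos change instead of tracking a submodule.
    println!("cargo:rerun-if-changed=proto/");
    Ok(())
}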

Docker build - failed to run custom build command

Hi,

I am building a web application that communicates with Google Datastore. When I packaged it as a Docker image, I ran into the following error:

error: failed to run custom build command for `google-cloud v0.2.1`

Caused by:
  process didn't exit successfully: `/opt/repository/target/release/build/google-cloud-2e06fd0cd14d951b/build-script-build` (exit code: 1)
  --- stderr
  error: 'rustfmt' is not installed for the toolchain '1.51.0-x86_64-unknown-linux-gnu'
  To install, run `rustup component add rustfmt`

Following the hint above fixed the issue.

rustup component add rustfmt

This isn't really an issue, just a heads-up for others like me.

Storage: Object.get() does not work with object names containing "/"

For example:

let mut file = bucket.object("test/test.txt").await?;
let data = file.get().await?;

results in an HTTP error: HTTP status client error (404 Not Found) for url (https://storage.googleapis.com/storage/v1/b/xxx/o/test/test.txt).

Possible solution:
Does the object name need to be URL-encoded?
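
A minimal sketch of encoding the object name so the slash becomes a single path segment, assuming the percent-encoding crate; the custom AsciiSet below is illustrative, not what the library uses internally:

use percent_encoding::{utf8_percent_encode, AsciiSet, CONTROLS};

// Characters to escape inside an object name; '/' is the important one here,
// so "test/test.txt" stays a single path segment in the request URL.
const OBJECT_NAME: &AsciiSet = &CONTROLS.add(b'/').add(b'?').add(b'#').add(b'%');

fn encode_object_name(name: &str) -> String {
    utf8_percent_encode(name, OBJECT_NAME).to_string()
}

fn main() {
    assert_eq!(encode_object_name("test/test.txt"), "test%2Ftest.txt");
}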

Rename `new()` to `from_application_default_credentials()`

What do you think about renaming `new()` to `from_application_default_credentials()`? I was a little surprised when I found out that `new()` depended on the GOOGLE_APPLICATION_CREDENTIALS env var being set. I also think it mirrors `from_credentials` nicely.

This would also set us up to implement the entire Application Default Credentials (ADC) flow in the future, which would require querying the metadata servers (on GCE, App Engine, etc.) for credentials when the GOOGLE_APPLICATION_CREDENTIALS env var is not set.

Panic when listing buckets when there are no buckets

Using the test code as an example with a GCP project that has no buckets, I get this error:

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Reqwest(reqwest::Error { kind: Decode, source: Error("missing field `items`", line: 3, column: 1) })', src/main.rs:30:27

This is my code, which is basically the test code but loading the credentials from a file. Using google-cloud version 0.1 from crates.io:

fn load_creds() -> ApplicationCredentials {
    let filename = env::var("GCP_TEST_CREDENTIALS").expect("env GCP_TEST_CREDENTIALS not set");
    let creds = fs::read_to_string(filename).unwrap();
    serde_json::from_str::<ApplicationCredentials>(&creds)
        .expect("incorrect application credentials format")
}

async fn setup_client() -> Result<storage::Client, storage::Error> {
    let creds = load_creds();
    let project = env::var("GCP_TEST_PROJECT").expect("env GCP_TEST_PROJECT not set");
    storage::Client::from_credentials(project, creds).await
}

async fn storage_lists_buckets() {
    let mut client = setup_client().await.unwrap();
    let buckets = client.buckets().await.unwrap(); // <<---- line 30 of main.rs
    for bucket in buckets.iter() {
        println!("bucket: {}", bucket.name());
    }
}
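
The "missing field `items`" message suggests the JSON bucket-list response simply omits items when the project has no buckets. A minimal sketch of how the response type could tolerate that, assuming it is deserialized with serde (the struct below is a stand-in, not the crate's actual type):

use serde::Deserialize;

#[derive(Deserialize)]
struct BucketListResponse {
    // `items` is absent when there are no buckets, so default to an empty Vec
    // instead of failing with "missing field `items`".
    #[serde(default)]
    items: Vec<serde_json::Value>,
}

fn main() -> Result<(), serde_json::Error> {
    // Body returned for a project with no buckets.
    let body = r#"{ "kind": "storage#buckets" }"#;
    let parsed: BucketListResponse = serde_json::from_str(body)?;
    assert!(parsed.items.is_empty());
    Ok(())
}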

Update prost dependency to get build fix for Apple M1

The most recent release of prost (0.7) has a fix for building on Apple M1-based systems. The latest google-cloud-rs still uses version 0.6, which fails to build on Apple Silicon because that version of prost lacked a prebuilt protoc binary for the platform.

Pub/Sub streaming pull method

The RPC API has a StreamingPull method, which expects a bidirectional stream, sending messages to the client and receiving (N)Acks back. This has higher throughput than the Pull method. It would be nice to expose this in the API as a Stream; a rough sketch of the desired shape follows.
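
A sketch of the API shape being asked for, with a stub in place of the actual gRPC plumbing; Message and streaming_pull are illustrative names, not existing crate APIs:

use futures::{stream, Stream, StreamExt};

// Stand-in message type; the real one would carry an ack ID and payload bytes.
struct Message {
    data: Vec<u8>,
}

// A real implementation would drive the bidirectional StreamingPull gRPC call and
// send (N)Acks back on the request side; this stub just yields canned messages.
fn streaming_pull() -> impl Stream<Item = Message> {
    stream::iter(vec![
        Message { data: b"first".to_vec() },
        Message { data: b"second".to_vec() },
    ])
}

#[tokio::main]
async fn main() {
    let mut messages = streaming_pull();
    while let Some(message) = messages.next().await {
        println!("received {} bytes", message.data.len());
        // message.ack().await; // acks would flow back over the same stream
    }
}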

Datastore: Exclude property from index

Is it possible to exclude a Datastore entity's property from indexes? The default behavior is to index all properties, and that places a limit of 1500 characters on string properties. The Datastore API allows setting an exclusion flag per property; is there any way to set that flag?
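
For context, the underlying Datastore v1 API does carry an exclude_from_indexes flag on each property value; whether and how the crate exposes it is the open question. A hypothetical sketch of what a wrapper could look like (none of these names exist in the crate):

// Stand-in for datastore::Value.
enum Value {
    StringValue(String),
}

// Mirrors google.datastore.v1.Value.exclude_from_indexes in the wire format.
struct Property {
    value: Value,
    exclude_from_indexes: bool,
}

fn unindexed(value: Value) -> Property {
    Property { value, exclude_from_indexes: true }
}

fn main() {
    // A string longer than the 1500-character indexed-string limit mentioned above.
    let prop = unindexed(Value::StringValue("x".repeat(2000)));
    let Value::StringValue(text) = &prop.value;
    println!("{} chars, excluded from indexes: {}", text.len(), prop.exclude_from_indexes);
}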

Pub/Sub `Subscription`'s `receive` method can panic

Right now, using receive and receive_with_options panics on any error (for example, loss of network or a socket closing unexpectedly), preventing graceful handling of these failures. Furthermore, it is impossible to use catch_unwind, as the Subscription type is not unwind-safe.

It would make more sense to return a Result and let the library user deal with the handling (or opt back into the old behavior via .unwrap()/.expect()).

relation to google-apis-rs org?

Hello! This looks like a cool project!

Just opening up this issue to discuss how the google-apis-rs org and this repo can best collaborate (if that is something everyone wants). We currently have a complete (still kind of WIP) automatically generated HTTP-based API for every Google API, but I would love to work on a gRPC async-await API...

What does everyone think?

@ggriffiniii @Byron @Hirevo

Authorize with google cloud sdk application default credentials

Hey, I am glad to see this Rust binding for the Google Cloud APIs.

The Python client and others implement several ways to authorize: https://github.com/googleapis/google-auth-library-python/blob/9e1082366d113286bc063051fd76b4799791d943/google/auth/_default.py#L346-L435

  1. via the GOOGLE_APPLICATION_CREDENTIALS environment variable.
  2. via the Google Cloud SDK, i.e. gcloud auth application-default login
  3. via the App Engine / Compute Engine runtime environment

I am particularly interested in the second way of authorizing. It essentially involves reading the ~/.config/gcloud/application_default_credentials.json file to get the client_secret and refresh_token. For example:

> cat ~/.config/gcloud/application_default_credentials.json 
{
  "client_id": "xxx",
  "client_secret": "xxx",
  "refresh_token": "xxx",
  "type": "authorized_user"
}

It would be great if this library could implement this kind of authorization. Issue #15 is probably related. A rough sketch of reading the well-known file follows.
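
A sketch of reading that well-known file, assuming the dirs crate for the home directory; the struct is a stand-in rather than an existing crate type, and its fields mirror the JSON above:

use serde::Deserialize;
use std::{fs, path::PathBuf};

#[derive(Debug, Deserialize)]
struct AuthorizedUserCredentials {
    client_id: String,
    client_secret: String,
    refresh_token: String,
    #[serde(rename = "type")]
    credential_type: String, // expected to be "authorized_user"
}

fn well_known_file() -> Option<PathBuf> {
    // On Windows the file lives under %APPDATA%\gcloud instead.
    dirs::home_dir().map(|home| home.join(".config/gcloud/application_default_credentials.json"))
}

fn load_gcloud_credentials() -> Option<AuthorizedUserCredentials> {
    let raw = fs::read_to_string(well_known_file()?).ok()?;
    serde_json::from_str(&raw).ok()
}

fn main() {
    match load_gcloud_credentials() {
        Some(creds) => println!("found {} credentials for {}", creds.credential_type, creds.client_id),
        None => println!("no gcloud application default credentials found"),
    }
}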

Figure out mechanism to easily convert to/from Datastore values

Datastore support got merged (#1), and it exposes the datastore::Value type as the representation of Datastore data.
We need a way to easily convert user types to and from Datastore types.

Currently, a few types can be converted into their datastore::Value equivalents by using the Into<datastore::Value> trait:

use gcp::datastore::Value;

let name: Value = "some-name".into();
let age: Value = 21.into();

assert_eq!(name, Value::StringValue(String::from("some-name")));
assert_eq!(age, Value::IntegerValue(21));

Possible solutions:

  • Implement serde's Serializer and Deserializer traits.
  • Add our own trait and implement a #[derive] macro for it (a rough sketch of this option is below).
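
A rough sketch of the second option, assuming names like IntoValue that do not exist in the crate; the derive macro would generate the impl shown here by hand:

use std::collections::HashMap;

// Minimal stand-in for datastore::Value, just enough for the example.
#[derive(Debug, PartialEq)]
enum Value {
    StringValue(String),
    IntegerValue(i64),
    EntityValue(HashMap<String, Value>),
}

// The trait a #[derive(IntoValue)] macro would implement automatically.
trait IntoValue {
    fn into_value(self) -> Value;
}

struct Person {
    name: String,
    age: i64,
}

// What the generated impl would look like for a user-defined type.
impl IntoValue for Person {
    fn into_value(self) -> Value {
        let mut properties = HashMap::new();
        properties.insert("name".to_string(), Value::StringValue(self.name));
        properties.insert("age".to_string(), Value::IntegerValue(self.age));
        Value::EntityValue(properties)
    }
}

fn main() {
    let entity = Person { name: "some-name".to_string(), age: 21 }.into_value();
    println!("{:?}", entity);
}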

Streaming object insert and get, please

I can use memmap to effectively stream a large file when calling create_object(), but get() always returns a Vec<u8> result. I see there are commented-out "writer" and "reader" functions, so I'm filing this request just to track the need for this feature. For my use case, I'm always going to be dealing with files that are 64 MB or larger, so streaming would be good.

P.S. The google_storage1 crate defines a ReadSeek trait that is used for uploading files. For download, I think they rely on hyper, enabling std::io::copy() directly to a file.
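
Not the crate's API, but a generic sketch of the kind of streaming download a reader-style function would enable, assuming reqwest with the stream feature plus tokio:

use futures::StreamExt;
use tokio::{fs::File, io::AsyncWriteExt};

// Stream an HTTP response body to disk chunk by chunk instead of buffering a Vec<u8>.
async fn download_to_file(url: &str, path: &str) -> Result<(), Box<dyn std::error::Error>> {
    let response = reqwest::get(url).await?;
    let mut body = response.bytes_stream();
    let mut file = File::create(path).await?;
    while let Some(chunk) = body.next().await {
        file.write_all(&chunk?).await?;
    }
    file.flush().await?;
    Ok(())
}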

Get project name from the service account?

Is there a reason why from_credentials takes a project name as well? I am just trying to eliminate the friction of setting up credentials, e.g. needing multiple environment variables/secrets for testing.

I see a few options:

  1. Get the project name from the service account and allow setting a different project name per request (sometimes that makes sense for GCS when using e.g. another project's bucket)
  2. Get the project name from the service account and allow setting a global project_id/name on the client object later
  3. Accept an Option and default it to the project_id found in the service account file
  4. A hybrid approach where we do 2 or 3 and then also let the user specify a project_id per request if need be (in an ergonomic way, so we aren't passing None around a bunch when we want to just use the project_id in the client for the request...)

I think I am leaning towards option 4...
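
For what it's worth, a service-account key file does contain a project_id field. A small sketch of the fallback logic behind options 3/4, using a stand-in struct rather than the crate's ApplicationCredentials type:

use serde::Deserialize;

#[derive(Deserialize)]
struct ServiceAccountKey {
    project_id: String,
    // ... client_email, private_key, etc. omitted here
}

// Prefer an explicitly supplied project, otherwise fall back to the key file's project_id.
fn effective_project(explicit: Option<String>, key_json: &str) -> serde_json::Result<String> {
    let key: ServiceAccountKey = serde_json::from_str(key_json)?;
    Ok(explicit.unwrap_or(key.project_id))
}

fn main() -> serde_json::Result<()> {
    let key_json = r#"{ "project_id": "my-test-project" }"#;
    assert_eq!(effective_project(None, key_json)?, "my-test-project");
    assert_eq!(effective_project(Some("other-project".into()), key_json)?, "other-project");
    Ok(())
}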

Configurable endpoint

Currently, most of the URIs are built from hard-coded associated constants in the Client struct. It would be great if the Client::Endpoint were configurable (or overridable via STORAGE_EMULATOR_HOST) so users could program against a test server, as demonstrated in this project: https://github.com/fsouza/fake-gcs-server/tree/main/examples

It would appear that others are interested in this functionality as well (see pingcap/br#258); they have investigated crudely swapping out the URI from tame_gcs in the TiKV GCS backend.
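
A sketch of the requested override, assuming the default matches the public storage endpoint; the constant name is illustrative, not the crate's actual associated constant:

use std::env;

// Illustrative default; the real value lives in the Client's associated constants.
const DEFAULT_STORAGE_ENDPOINT: &str = "https://storage.googleapis.com/storage/v1";

// Let STORAGE_EMULATOR_HOST (e.g. a fake-gcs-server instance) take precedence.
fn storage_endpoint() -> String {
    env::var("STORAGE_EMULATOR_HOST").unwrap_or_else(|_| DEFAULT_STORAGE_ENDPOINT.to_string())
}

fn main() {
    println!("using endpoint: {}", storage_endpoint());
}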
