
azure-devops-rust-api's Introduction

Azure DevOps Rust API Generator

If you find any issues, please raise them via GitHub.

This repo auto-generates a Rust Azure DevOps API crate (azure_devops_rust_api) from the Azure DevOps OpenAPI spec vsts-rest-api-specs.

Status

Repo overview

This repo contains:

  • autorust: A tool to autogenerate the azure_devops_rust_api crate from the OpenAPI spec.
  • vsts-api-patcher: A tool to patch the OpenAPI spec. This modifies the original OpenAPI spec to fix known issues and/or improve the generated code.
  • azure_devops_rust_api: The autogenerated crate.

Usage of generated azure_devops_rust_api crate

For documentation on how to use the generated crate, see the azure_devops_rust_api crate documentation.

Build

Publishing

The generated crate is manually published to the public Rust crate registry (crates.io) as azure_devops_rust_api.

Managing the version of the OpenAPI spec

The Azure DevOps OpenAPI spec is included as a git submodule linked to a specific version of vsts-rest-api-specs.

You can view the current version (commit id) being used:

$ git submodule status
 312bb8d4aabf70f096b3357ce382b5d91ce38574 vsts-rest-api-specs (heads/master)

To update the version to the latest revision:

git submodule update --remote vsts-rest-api-specs
# Build and test!
./build.sh

Once you are happy with the update, commit the changes:

git add .
git commit -m "Updated vsts-rest-api-specs to latest revision"

Notes

  • The Azure DevOps OpenAPI spec vsts-rest-api-specs is pulled in at build time via a git submodule.
  • There are issues/bugs with the OpenAPI spec, so it is patched using vsts-api-patcher to generate vsts-rest-api-specs.patched, which is used for the code generation.
  • The client code generation is done using a modified version of autorust, a component from azure-sdk-for-rust.
    • autorust is MIT licensed
    • autorust includes an openapi module, which is also MIT licensed
  • The crate is generated for API version 7.1 (the latest API version).
  • The azure-devops-python-api can be a useful reference when investigating API issues.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.


azure-devops-rust-api's Issues

Add support for continuation tokens

Operations that return many items use continuation tokens to allow the items to be returned in batches.

REST responses that do not contain all items include an HTTP header x-ms-continuationtoken that provides a continuation token value. This value must be provided on a subsequent call as a continuationToken parameter to query the remaining items (repeated as necessary until the server does not include the x-ms-continuationtoken header).

The Python SDK has a "response" object that includes the continuation token, with the actual response as a value field.

The Rust SDK already modifies the spec to handle responses that return lists of items, as the server sends them in a wrapper:

struct <Type>List {
    count: Option<i32>,
    value: Vec<Type>,
}

We could change this to include the continuation token, as per the Python SDK (and perhaps rename our wrapper type to <...>Response rather than <...>List):

struct <Type>Response {
    count: Option<i32>,
    value: Vec<Type>,
    continuation_token: Option<String>,
}

Deserialization of this struct from the protocol would still work: because the continuation_token field is declared as an Option, it would simply always be None, as the value is carried in the response headers rather than the body. However, we could modify the code generator to fill in this value from the response headers before the value is returned to the application.
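
For illustration, here is a minimal sketch (with assumed type and function names, not the crate's actual generated code) of a wrapper that carries the continuation token, plus a helper that fills it in from the response headers:

use serde::Deserialize;

#[derive(Debug, Deserialize)]
pub struct RepositoryListResponse {
    pub count: Option<i32>,
    #[serde(default)]
    pub value: Vec<serde_json::Value>,
    // Never present in the JSON body, so deserialization always leaves this as None.
    #[serde(skip)]
    pub continuation_token: Option<String>,
}

// The generated response handling could then patch the value in from the headers.
pub fn attach_continuation_token(
    mut response: RepositoryListResponse,
    headers: &http::HeaderMap,
) -> RepositoryListResponse {
    response.continuation_token = headers
        .get("x-ms-continuationtoken")
        .and_then(|value| value.to_str().ok())
        .map(str::to_string);
    response
}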


Most returned struct fields are wrapped in Option

Most returned struct fields are wrapped in Option.

Example:

pub struct WikiPage {
    pub wiki_page_create_or_update_parameters: WikiPageCreateOrUpdateParameters,
    pub git_item_path: Option<String>,
    pub id: Option<i32>,
    pub is_non_conformant: Option<bool>,
    pub is_parent_page: Option<bool>,
    pub order: Option<i32>,
    pub path: Option<String>,
    pub remote_url: Option<String>,
    pub sub_pages: Vec<WikiPage>,
    pub url: Option<String>,
}

This makes it tedious/verbose to access these field values, as you either need to check for Some(value) or make liberal use of unwrap().
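
For example (illustrative only, using the WikiPage struct above), even fields that the server always returns need Option handling:

fn describe_page(wiki_page: &WikiPage) -> String {
    // Every access needs unwrap()/unwrap_or()/as_deref(), even for fields like `id`.
    let id = wiki_page.id.unwrap_or_default();
    let path = wiki_page.path.as_deref().unwrap_or("<no path>");
    format!("wiki page {id} at {path}")
}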

The code generator does the Option wrapping because the OpenAPI spec does not mark these fields as "mandatory", even though many of them will always be present. For example, in the definition above it is safe to assume that every WikiPage will have an id field.

The current code generator does have the ability to remove the Option wrapper for specific struct fields that are known to always be present (see patch_definition_required_fields()). I have done this by trial and error for a small number of structs that I care about. This must be done with care because if a response does not contain a field that is always expected (not an Option), then the deserialization fails.

If you have specific structs with Options that you think could be removed, add a comment to this issue and I'll look into fixing them.

`git::items::list` `recursion_level` should be an enum rather than a String

The current API defines the git::items::list::ClientBuilder::recursion_level() function to take an Into<String> parameter:

pub fn recursion_level(self, recursion_level: impl Into<String>) -> Self
// The recursion level of this request. The default is ‘none’, no recursion.

However the operation only takes a fixed set of values:

  • None
  • OneLevel
  • OneLevelPlusNestedEmptyFolders
  • Full

We should fix up the spec to make this parameter an enum with the set of valid values; see VersionControlRecursionType.
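
A rough sketch of what the patched parameter could look like (enum name taken from VersionControlRecursionType; the builder change is assumed, not implemented):

pub enum VersionControlRecursionType {
    None,
    OneLevel,
    OneLevelPlusNestedEmptyFolders,
    Full,
}

// The builder method would then take the enum instead of Into<String>:
// pub fn recursion_level(self, recursion_level: VersionControlRecursionType) -> Self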

API structure: auth module should be separated from feature modules

Currently the auth module (containing the Credential struct) is at the same level as the feature modules. This isn't a great structure - in the docs it gets lost amongst the feature modules.

A couple of options to fix this:

  • remove (or make private) the auth module and expose the Credential struct at the top level
  • put all of the feature modules under a new top-level module

deserialize is failing in Distributed task due to null value in variableGroupProjectReferences

Deserialize is failing in Distributed Task due to null value in variableGroupProjectReferences

Error message: (screenshots omitted).

    pub provider_data: Option<VariableGroupProviderData>,
    #[doc = "Gets or sets type of the variable group."]
    #[serde(rename = "type", default, skip_serializing_if = "Option::is_none")]
    pub type_: Option<String>,
    #[doc = "all project references where the variable group is shared with other projects."]
    #[serde(
        rename = "variableGroupProjectReferences",
        default,
        skip_serializing_if = "Vec::is_empty"
    )]
    // (remainder of the generated field reconstructed for context; an explicit JSON null here fails to deserialize into a Vec)
    pub variable_group_project_references: Vec<VariableGroupProjectReference>,
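
One possible fix (a sketch of an assumed approach, not necessarily how the crate will address it) is a deserialization helper that treats an explicit JSON null as an empty list, applied to the field via #[serde(deserialize_with = "null_as_empty_vec")]:

use serde::{Deserialize, Deserializer};

// Treat JSON null as an empty Vec instead of failing deserialization.
fn null_as_empty_vec<'de, D, T>(deserializer: D) -> Result<Vec<T>, D::Error>
where
    D: Deserializer<'de>,
    T: Deserialize<'de>,
{
    Ok(Option::<Vec<T>>::deserialize(deserializer)?.unwrap_or_default())
}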

Add support for authenticating with a PAT

Authentication is done via azure_identity credentials.

Currently azure_identity does not support simple PATs, which seems to be the most common method of authenticating with Azure DevOps. Therefore we need to implement a PatCredential type that implements azure_core::auth::TokenCredential.

Failure parsing build list response

Seeing some failures when querying build lists, due to missing fields in the server responses.

Looks like I've been too keen to remove Option wrappers on these fields - so need to add back the Option wrappers for the missing fields.

Need better diagnostics when deserialization fails

There have been multiple situations where responses fail to deserialize because the server response does not match the autogenerated code - typically because the spec is wrong.

We need better diagnostics when this happens - at a minimum dumping out the response data so that it can be inspected to determine how it differs from the expected response.
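
As an illustrative sketch (not the actual generated code), the deserialization path could dump the raw body on failure:

// Deserialize a response body, printing the raw payload if it doesn't match the
// expected type so it can be compared against the spec.
fn deserialize_with_diagnostics<T: serde::de::DeserializeOwned>(
    body: &[u8],
) -> Result<T, serde_json::Error> {
    serde_json::from_slice(body).map_err(|err| {
        eprintln!(
            "deserialization failed: {err}\nresponse body: {}",
            String::from_utf8_lossy(body)
        );
        err
    })
}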

API docs need improving

The current autogenerated API docs on docs.rs are confusing.

On a typical operations page there is a large list of modules that are mostly uninteresting; the user should be encouraged to click on the Client struct to get to the page with the client methods.

Not sure whether this is a docs issue, or an API structure/visibility issue.

  • Perhaps remove/hide the operations module to change azure_devops_rust_api::git::operations::Client to azure_devops_rust_api::git::Client?
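
A minimal sketch of the re-export idea (module layout assumed): keep operations private and re-export its Client so it appears directly under the feature module:

pub mod git {
    // Generated operations stay in a private module...
    mod operations {
        pub struct Client;
    }
    // ...but the Client is surfaced as azure_devops_rust_api::git::Client.
    pub use self::operations::Client;
}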

Listing work item queries fails with a response parsing failure

Listing work item queries fails, as the server response does not match the autogenerated response struct definition.

Deserialization fails with the error:

missing field `id`

Example code:

    // Get all work item queries
    let work_item_queries = wit_client
        .queries_client()
        .list(&organization, &project)
        .await?;

Looks like the issue is due to the handling of IdentityReference, defined in the OpenAPI spec as:

    "IdentityReference": {
      "description": "Describes a reference to an identity.",
      "type": "object",
      "allOf": [
        {
          "$ref": "#/definitions/IdentityRef"
        }
      ],
      "properties": {
        "id": {
          "type": "string",
          "format": "uuid"
        },
        "name": {
          "description": "Legacy back-compat property. This has been the WIT specific value from Constants. Will be hidden (but exists) on the client unless they are targeting the newest version",
          "type": "string"
        }
      }
    },

The autogenerated Rust struct looks like this:

#[doc = "Describes a reference to an identity."]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub struct IdentityReference {
    #[serde(flatten)]
    pub identity_ref: IdentityRef,
    #[serde(default, skip_serializing_if = "Option::is_none")]
    pub id: Option<String>,
    #[doc = "Legacy back-compat property. This has been the WIT specific value from Constants. Will be hidden (but exists) on the client unless they are targeting the newest version"]
    #[serde(default, skip_serializing_if = "Option::is_none")]
    pub name: Option<String>,
}

I think that the issue is that there are two id fields in the Rust code - one within IdentityReference, and one within IdentityRef. IdentityRef is flattened into IdentityReference (due to the allOf in the OpenAPI spec). There is an id value within the response JSON, but I think the deserialization code is (perhaps) expecting to see two (which is clearly not going to happen in the JSON).
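
A minimal self-contained experiment (hypothetical types, not the generated ones) for checking how serde handles the overlapping id fields when one struct is flattened into another:

use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Inner {
    id: Option<String>,
}

#[derive(Debug, Deserialize)]
struct Outer {
    // Mirrors IdentityRef being flattened into IdentityReference.
    #[serde(flatten)]
    inner: Inner,
    id: Option<String>,
}

fn main() {
    // A single "id" key in the JSON, two candidate fields in the Rust types.
    let parsed: Result<Outer, serde_json::Error> = serde_json::from_str(r#"{"id": "abc"}"#);
    println!("{parsed:?}");
}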

date-time parameters are not formatted as RFC3339

Operation date-time format parameters should be formatted as per RFC3339 but are currently not.
Noticed when testing build::list() with minTime, maxTime parameters.

The type of the parameters is time::OffsetDateTime (expected). However, when these are passed as URL parameters, they are converted using to_string() which does not format the value as RFC3339.

[src/build/mod.rs]

                        if let Some(min_time) = &this.min_time {
                            req.url_mut()
                                .query_pairs_mut()
                                .append_pair("minTime", &min_time.to_string());
                        }

Need to change autorust to generate code that uses .format(&Rfc3339) for date-time parameters, e.g.

use time::format_description::well_known::Rfc3339;
...
    if let Some(min_time) = &this.min_time {
        // format() returns a Result, so the generated code will also need error handling here
        // (unwrap_or_default() is used just to keep the example simple).
        req.url_mut()
            .query_pairs_mut()
            .append_pair("minTime", &min_time.format(&Rfc3339).unwrap_or_default());
    }

git::commits::get_changes(...) response parsing fails

Reported by @myadav27.
Querying git commit changes fails, as the server response does not match the spec definition.

Example code:

    let commit_changes = git_client
        .commits_client()
        .get_changes(&organization, &commit_id, &repository_name, &project)
        .into_future()
        .await?;

Error:

Error: invalid type: map, expected a string at line 1 column 503

The response from the server looks like this:

{
  "changeCounts": {
    "Edit": 1
  },
  "changes": [
    {
      "item": {
        "objectId": "3434a1c33eb40b708cfcebf8e8a5dd17036b1417",
        "originalObjectId": "6c2fe8905129cbfa99faad55f66662132770ef5b",
        "gitObjectType": "blob",
        "commitId": "203c42d1211b1e430d8c69478516b0874cb560e9",
        "path": "/VARIABLES",
        "url": "https://dev.azure.com/<redacted>"
      },
      "changeType": "edit"
    }
  ]
}

The autogenerated code is expecting GitCommitChanges:

pub struct GitCommitChanges {
    pub change_counts: Option<ChangeCountDictionary>,
    pub changes: Vec<GitChange>,
}

pub struct ChangeCountDictionary {}  // !!! < This is clearly wrong!

pub struct GitChange {
    #[serde(flatten)]
    pub change: Change,
    pub change_id: Option<i32>,
    pub new_content_template: Option<GitTemplate>,
    pub original_path: Option<String>,
}

pub struct Change {
    pub change_type: Option<change::ChangeType>,
    pub item: Option<String>,   // !!! < This is what is failing to parse - expects a string but the JSON has a map
    pub new_content: Option<ItemContent>,
    pub source_server_item: Option<String>,
    pub url: Option<String>,
}

So there appear to be two issues:

  • ChangeCountDictionary is an empty struct
  • Change::item should be a struct rather than a String

In the spec, ChangeCountDictionary is defined as this:

    "ChangeCountDictionary": {
      "description": "",
      "type": "object",
      "allOf": [
        {
          "type": "object",
          "additionalProperties": {
            "type": "integer",
            "format": "int32"
          }
        }
      ],
      "properties": {}
    },

Looks like the allOf/additionalProperties is not being handled by the autorust code generator.

Change.item is defined as this:

        "item": {
          "description": "Current version.",
          "type": "string",
          "format": "T"
        },

The "format": "T" here is unusual, and I can't find a definition of this in the OpenAPI reference.
Probably need to use the vsts-api-patcher to fix up this definition to replace the type: string with a reference to a struct.

API mismatch with reality?

Version in use

I'm using version 0.7.4 with the current version of Azure DevOps (which, as far as I can tell, is still API version 7.1).

Case 1

When using the pipeline_client to get a current pipeline I'm getting errors of the sort:

context: Full(Custom { kind: DataConversion, error: Error("missing field ``id``", line: 1, column: 455) }.

However, looking inside the response, an "id" field is present, so the error message seems wrong.

Case 2

When using the runs_client on the same pipeline I'm getting this error:

context: Full(Custom { kind: DataConversion, error: Error("missing field ``finishedDate``", line: 1, column: 1723) }

finishedDate is indeed not part of the response. However, this run isn't finished, so I wouldn't expect it to be present. I'm guessing this value should be optional but isn't.

Case 3

If I choose a run that has finished the response is ok.

Summary

I guess some of the API isn't covered correctly. This is especially an issue since I need unfinished runs from the API.

question: is there a way to trigger a rerun for a certain build?

I would like to programmatically trigger the "Rerun failed jobs" button.
This is actually a PATCH to /_apis/build/builds/112560?retry=true

credentials = BasicAuthentication("", personal_access_token)
connection = Connection(base_url=organization_url, creds=credentials)
build_client = connection.clients.get_build_client()
core_client = connection.clients.get_core_client()

# Get the first page of projects
projects = core_client.get_projects()
project = next(x for x in projects if x.name == "myorg")
build, *_ = build_client.get_builds(project=project.id, build_ids=[112560])

build.rerun_failed()  # what I would like to do

Is there something like rerun_failed()?

Docs for this.

Can you provide some documentation on how to use this API?

API docs are missing function and parameter descriptions

autorust currently does not autogenerate doc comments for API functions and parameters.

Noticed this as some of the parameter values are unintuitive, e.g. a "repository_id" parameter can take an id or name. The REST API documentation does mention this, but without generated doc comments that information is lost.

Extension deserialization fails due to flags fields

Querying installed extensions fails because there are "flags" fields that are declared to be an enum, but are actually a comma-separated string of enum values, e.g.

Spec:

        "installState": {
          "description": "Information about this particular installation of the extension",
          "$ref": "#/definitions/InstalledExtensionState"
        },
...
    "InstalledExtensionState": {
      "description": "The state of an installed extension",
      "type": "object",
      "properties": {
        "flags": {
          "description": "States of an installed extension",
          "enum": [
            "none",
            "disabled",
            "builtIn",
            "multiVersion",
            "unInstalled",
            "versionCheckError",
            "trusted",
            "error",
            "needsReauthorization",
            "autoUpgradeError",
            "warning"
          ],
...

Data returned by server:

      "installState": {
        "flags": "builtIn, multiVersion, trusted",
        "lastUpdated": "2022-09-22T20:06:58.763Z"
      },
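
One possible workaround (a sketch of an assumed approach, not the crate's actual fix) is to declare such flags fields as plain strings and split them into individual values when needed:

// Split a comma-separated flags string such as "builtIn, multiVersion, trusted"
// into the individual flag values.
fn parse_flags(flags: &str) -> Vec<String> {
    flags
        .split(',')
        .map(|flag| flag.trim().to_string())
        .filter(|flag| !flag.is_empty())
        .collect()
}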

`release::releases::get_logs()` returns unprintable/garbled string

release::releases::get_logs() is currently defined to return Result<String>. However, it actually returns a lot of unprintable/garbled data.

Inspection of the response headers shows that the data is in compressed format:

content-type: application/zip; api-version=7.1-preview.2

Need to decide whether this API function should:

  • Auto-decompress the data, by inspecting the content-type header and doing unzip before returning the data as Result<String>
  • Document the fact that the returned data is zipped, and change the return type to be raw data: Result<Vec<u8>>, leaving it up to the application to decompress (if required - app might just want to write the logs as a compressed file...)

I expect that this might apply to other API functions too.
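
For the second option, the application-side decompression could look something like this sketch (assuming the function returned the raw bytes, and using the zip crate purely as an example):

use std::io::{Cursor, Read};

// Extract each file in the returned zip archive as a String.
fn unzip_logs(data: Vec<u8>) -> Result<Vec<String>, Box<dyn std::error::Error>> {
    let mut archive = zip::ZipArchive::new(Cursor::new(data))?;
    let mut logs = Vec::new();
    for index in 0..archive.len() {
        let mut file = archive.by_index(index)?;
        let mut contents = String::new();
        file.read_to_string(&mut contents)?;
        logs.push(contents);
    }
    Ok(logs)
}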

ADO_ORGANIZATION format? Run example: `cargo run --example build_list --features="build"` (A potentially dangerous Request.Path value was detected from the client )

When running the following example I get a 400

~\src\azure-devops-rust-api\azure_devops_rust_api\examples> cargo run --example build_list --features="build"                                                                              03/27/2023 01:09:34 PM
    Finished dev [unoptimized + debuginfo] target(s) in 0.26s
     Running `~\src\azure-devops-rust-api\azure_devops_rust_api\target\debug\examples\build_list.exe`
Authenticate using PAT provided via $ADO_TOKEN
Create build client
Get list
Error: server returned error status which will not be retried: 400

Caused by:
    HttpError {  Status: 400,  Error Code: <unknown error code>,  Body: "b"\xef\xbb\xbf{\"$id\":\"1\",\"innerException\":null,\"message\":\"A potentially dangerous Request.Path value was detected from the client (:).\",\"typeName\":\"System.Web.HttpException, System.Web\",\"typeKey\":\"HttpException\",\"errorCode\":0,\"eventId\":0}"",  Headers: [   content-length:227   content-type:application/json   x-content-type-options:nosniff   p3p:CP="CAO DSP COR ADMa DEV CONo TELo CUR PSA PSD TAI IVDo OUR SAMi BUS DEM NAV STA UNI COM INT PHY ONL FIN PUR LOC CNT"   x-tfs-serviceerror:A%20potentially%20dangerous%20Request.Path%20value%20was%20detected%20from%20the%20client%20%28%3A%29.   date:Mon, 27 Mar 2023 11:09:46 GMT   x-cache:CONFIG_NOCACHE   x-msedge-ref:Ref A: 3795AE86AED4424FBF6907356F85E385 Ref B: FRAEDGE1907 Ref C: 2023-03-27T11:09:47Z  ], }
error: process didn't exit successfully: `~\src\azure-devops-rust-api\azure_devops_rust_api\target\debug\examples\build_list.exe` (exit code: 1)

For obvious reasons I can't post the real ADO_TOKEN and ADO_ORGANIZATION. Any ideas what the problem might be?

So far I have tried the following (where myorg is a placeholder). What is expected in ADO_ORGANIZATION?

ADO_ORGANIZATION = "myorg"
ADO_ORGANIZATION = "myorg.visualstudio.com"
ADO_ORGANIZATION = "https://myorg.visualstudio.com"
ADO_ORGANIZATION = "dev.azure.com/myorg"

Removing Option wrappers from `GitPullRequest` breaks pull request create

The git::pull_requests create() function takes a GitPullRequest struct, which is the same struct returned when querying pull requests. There are many fields in here which are created/managed by the server (e.g. id, url).

I removed a bunch of Option wrappers on GitPullRequest to make the returned pull request struct easier to use (see #6), including the id and url fields. However these values obviously cannot/should not be provided when creating a pull request.

The simple fix is to back out the changes that made these fields required (i.e. restore the Option wrappers). However, this isn't ideal, as the set of mandatory and optional fields on creation is not encoded/enforced by the crate types. A better solution would therefore be to replace the create parameter type with a new struct that contains only the valid parameters for pull request create operations. This is fairly easy to implement via vsts-api-patcher. Figuring out which fields are valid on a pull request create operation is another matter (as it isn't defined by the API docs!).
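
A rough sketch of what a dedicated create-parameters type could look like (field set assumed for illustration, not confirmed against the API docs):

// Only the fields that make sense when creating a pull request; server-managed
// fields such as id and url are deliberately absent.
pub struct GitPullRequestCreateOptions {
    pub source_ref_name: String,
    pub target_ref_name: String,
    pub title: String,
    pub description: Option<String>,
    pub is_draft: Option<bool>,
}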

Need to be able to handle date-time fields with invalid RFC3339 value (0001-01-01T00:00:00)

The recent update to the latest autorust code parses fields declared as date-time in the OpenAPI spec as RFC3339 date-time values.

Unfortunately some of the fields declared as date-time return an invalid RFC3339 value - namely 0001-01-01T00:00:00. This is invalid because it does not include "time-offset" - typically a terminating Z character to indicate UTC. A properly formatted value (but still clearly bogus!) is: 0001-01-01T00:00:00Z.

I have so far seen this in TeamProjectReference.lastUpdateTime and TimelineRecord.lastModified, and suspect there will be others.

This causes deserialization to fail when this value is present.

As this is existing server behaviour that we cannot change, this needs to be handled gracefully. Possible solutions/workarounds:

  • Write a modified serde deserializer for date-time that detects and gracefully handles this value
  • Declare the affected fields as type string with no format rather than date-time format
  • Search/replace this string in all responses (easy to implement, but inefficient!)
    • a possible optimization would be to only do this if the initial response deserialization fails
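
A minimal sketch of the first option (an assumed approach): retry the parse with the missing offset appended:

use time::{format_description::well_known::Rfc3339, OffsetDateTime};

// Parse an RFC3339 date-time, tolerating values such as "0001-01-01T00:00:00"
// that omit the time offset by retrying with a "Z" appended.
fn parse_lenient_rfc3339(value: &str) -> Result<OffsetDateTime, time::error::Parse> {
    OffsetDateTime::parse(value, &Rfc3339)
        .or_else(|_| OffsetDateTime::parse(&format!("{value}Z"), &Rfc3339))
}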

Error in Return Type in Get Items Batch API Call

We believe that there is an error with the return type of the Get Items Batch call.

Currently, both the Microsoft Learn documentation and the azure-devops-rust-api expect the return type for the call to be a Vec<String> (for Rust) or an array[] (for C#).

We believe that this is an error, because when invoking the .into_body() method implemented for the Get Items Batch call, we get the "failed to deserialize" response with the additional context that "invalid type: map, expected a sequence at line 1 column 0".

Moreover, similar API calls like Get Item have a more sophisticated return type. In this case, Get Item expects a GitItem object.

Currently we are looking to custom-deserialize as a workaround, but do let us know if this is something we could update in the patching layer for the Rust SDK. Happy to answer any questions as well! :)

Cannot create a git push

From the spec: https://learn.microsoft.com/en-us/rest/api/azure/devops/git/pushes/create?view=azure-devops-rest-7.1&tabs=HTTP#gitchange


However, the Change type in Rust is:

#[doc = ""]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub struct Change {
    #[doc = "The type of change that was made to the item."]
    #[serde(rename = "changeType")]
    pub change_type: change::ChangeType,
    pub item: serde_json::Value,
}

Most of the fields are missing, so a push cannot be created.

Add support for Azure DevOps throttling/rate limiting

Azure DevOps services provide throttling feedback via response headers.

We have definitions for these headers in the SDK, but currently don't inspect or process them. It would be good to add a common function to clients to provide throttling to avoid hitting limits. This should probably be implemented as an azure_core pipeline policy.

It is worth mentioning I recently enhanced the underlying azure_core pipeline to handle retry-after headers when receiving TooManyRequests or ServiceUnavailable responses, and a future release of azure_devops_rust_api will pick this up when it is released. So that will give appropriate backoff/throttling upon hitting the TooManyRequests state. Implementing support for these additional feedback headers would help reduce the chances of hitting the TooManyRequests limit.

Note: I can see that these headers provide feedback, but it is unclear to me what the actual implementation of the processing logic should be. This needs further investigation - including checking whether any of the other SDKs implement this.

