
osprey's Introduction

Osprey

Client and service for providing OIDC access to Kubernetes clusters.

The client provides a user login command which requests a username and a password and forwards them to the service. The service forwards the credentials to an OpenID Connect (OIDC) provider to authenticate the user and returns a JWT token with the user details. The token, along with some additional cluster information, is used to generate the kubectl configuration for accessing Kubernetes clusters.

Supported OIDC providers

Dex

This implementation relies on one specific configuration detail of the OIDC provider, SkipApprovalScreen: true, which eliminates the intermediate step requiring the client to explicitly approve the requested grants before the token is provided. If the target provider does not support this feature, additional work is required to handle that approval.
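
In Dex, this corresponds to the skipApprovalScreen option in the oauth2 section of its configuration, e.g.:

oauth2:
  skipApprovalScreen: true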

Azure

When Azure is configured as the OIDC provider, the user login command generates a link which the user must open in a browser in order to authenticate. Upon a successful login, the browser sends a request to a local endpoint served by the osprey application. With the information contained in this request, osprey is able to request a JWT token on the user's behalf.

Installation

Osprey is currently supported on Linux, macOS and Windows. It can be installed as a standalone executable or run as a container (Linux only). The Docker image is intended for the server side, while the binaries are mainly used for the client commands.

Binaries

There is currently no binary install option for Osprey, due to the sunsetting of our old binary host. You will need to build from source or install with Docker.

For example:

$ go install github.com/sky-uk/osprey/[email protected]

$ ~/go/bin/osprey --help
User authentication for Kubernetes clusters
...

Docker

The Docker image is based on Alpine Linux. It can be pulled from our Docker Hub repository.

To pull a specific version, replace <version> with the release version (e.g. v9.9.0; mind the v prefix).

$ docker pull skycirrus/osprey:<version>

Client

The osprey client will request the user credentials and generate a kubeconfig file based on the contents of its configuration.

To get the version of the binary use the --version argument:

$ osprey --version
osprey version 2.8.1 (2022-03-17T16:25:26Z)

You can run Osprey client in Docker by bind-mounting your Osprey config and kubeconfig files:

$ docker run --rm \
    --network=host \
    --env HOME=/ \
    -v $HOME/.config/osprey/config:/.config/osprey/config:ro \
    -v $HOME/.kube/config:/.kube/config \
    skycirrus/osprey:v2.8.1 user login

Client usage

With a configuration file like:

providers:
  osprey:
    - targets:
        local.cluster:
          server: https://osprey.local.cluster
        foo.cluster:
          server: https://osprey.foo.cluster
          alias: [foo]
          groups: [foo, foobar]
        bar.cluster:
          server: https://osprey.bar.cluster
          groups: [bar, foobar]

The groups are labels that allow the targets to be organised into categories. They can be used, for example, to split non-production and production clusters into different groups, making the interaction with each explicit.

Most of the client commands accept a --group <value> flag, which tells Osprey to execute the command only against targets that contain the specified value in their groups definition.

A default-group may be defined at the top of the configuration; it applies that group to any command when the --group flag is not used. When a default group exists, all targets must belong to at least one group; otherwise the configuration is invalid and an error is displayed when running any command.

If no group is provided, and no default-group is defined, the operations will be performed against targets without group definitions.
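
For example, a configuration using a default group might look like this (a sketch; the keys mirror the client configuration documented below):

apiVersion: v2
default-group: foobar
providers:
  osprey:
    - targets:
        foo.cluster:
          server: https://osprey.foo.cluster
          groups: [foo, foobar]
        bar.cluster:
          server: https://osprey.bar.cluster
          groups: [bar, foobar]

With this in place, osprey user login behaves as if --group foobar had been passed.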

Login

Requests a Kubernetes access token for each of the configured targets and creates the kubeconfig's cluster, user and context elements for them.

$ osprey user login
user: someone
password: ***
Logged in to local.cluster
  • Note: When using a cloud identity provider, a link to the respective online login form will be shown in the terminal. The user must click on this link and follow the login steps.

Login generates the kubeconfig file, creating a cluster and a user entry per osprey target, plus one context named after the target and one extra context per alias.

When specifying the --group flag, the operations will apply to the targets belonging to the specified group. If targeting a group (provided or default) the output will include the name of the group.

$ osprey user login --group foobar
user: someone
password: ***

Logging in to group 'foobar'

Logged in to foo.cluster | foo
Logged in to bar.cluster

At login, aliases are displayed after the pipes (e.g. | foo).

User

Displays information about the currently logged-in user (it shows the details even if the token has already expired). It contains the email of the logged-in user and the list of LDAP membership groups the user is a part of. The latter come from the claims in the user's token.

$ osprey user --group foobar
foo.cluster: [email protected] [membership A, membership B]
bar.cluster: [email protected] [membership C]

If no user is logged in, osprey displays none instead of the user details.
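
For example, after logging out, the output would look something like this (a sketch; the exact wording may differ):

$ osprey user --group foobar
foo.cluster: none
bar.cluster: none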

Logout

Removes the token for the currently logged-in user for every configured target.

$ osprey user logout --group foobar
Logged out from foo.cluster
Logged out from bar.cluster

If no user is logged in, the command is a no-op.

Config

This command is currently a no-op, used only to group the commands related to the osprey configuration.

Targets

Displays the list of targets defined in the client configuration. Flags allow listing the targets by group, or restricting the output to a specific group.

$ osprey config targets --by-groups
Configured targets:
* <ungrouped>
    local.cluster
  bar
    bar.cluster
  foo
    foo.cluster | foo
  foobar
    bar.cluster
    foo.cluster | foo

This command will display targets that do not belong to any group, if there are any, under the special group <ungrouped>.

If the configuration specifies a default group, it will be highlighted with a * before its name, e.g. * foobar. If no default group is defined the special <ungrouped> grouping will be highlighted.

Groups

The targets command flag --list-groups is useful to display only the list of existing groups within the configuration, without any target information.

$ osprey config targets --list-groups
Configured groups:
* <ungrouped>
  bar
  foo
  foobar

Client configuration

The client installation script gets the configuration supported by the installed version.

The client uses a YAML configuration file. Its recommended location is: $HOME/.osprey/config. Its contents are as follows:

V2 Config

The structure of the osprey configuration supports multiple configurations per provider type. This supports scenarios where, for example, different Azure providers are configured for prod and non-prod targets.

apiVersion: v2

# Optional path to the kubeconfig file to load/update when logging in.
# Uses kubectl defaults if absent ($HOME/.kube/config).
# kubeconfig: /home/jdoe/.kube/config

# Optional group name to be the default for all commands that accept it.
# When this value is defined, all targets must define at least one group.
# default-group: my-group

## Named map of supported providers (currently `osprey` and `azure`)
providers:
  osprey:
    - provider-name: (Optional)
      # CA cert to use for HTTPS connections to Osprey.
      # Uses system's CA certs if absent.
      # certificate-authority: /tmp/osprey-238319279/cluster_ca.crt

      # Alternatively, a Base64-encoded PEM format certificate.
      # This will override certificate-authority if specified.
      # certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk5vdCB2YWxpZAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==

      # Named map of target Osprey servers to contact for access-tokens
      targets:
        # Target Osprey's environment name.
        # Used for the name of the cluster, context, and users generated
        foo.cluster:
            # hostname:port of the target osprey server
            server: https://osprey.foo.cluster

            #  list of names to generate additional contexts against the target.
            aliases: [foo.alias]

            #  list of names that can be used to logically group different Osprey servers.
            groups: [foo]

            # CA cert to use for HTTPS connections to Osprey.
            # Uses system's CA certs if absent.
            # certificate-authority: /tmp/osprey-238319279/cluster_ca.crt

            # Alternatively, a Base64-encoded PEM format certificate.
            # This will override certificate-authority if specified.
            # certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk5vdCB2YWxpZAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  # Authenticating against Azure AD
  azure:
    - name: (Optional)
      # These settings are required when authenticating against Azure
      tenant-id: your-azure-tenant-id
      server-application-id: azure-ad-server-application-id
      client-id: azure-ad-client-id
      client-secret: azure-ad-client-secret

      # List of scopes to request as part of the token request. This should be an Azure link to the API exposed on
      # the server application.
      scopes:
        - "api://azure-tenant-id/Kubernetes.API.All"

      # This is required for the browser-based authentication flow. The port is configurable, but it must conform to
      # the format: http://localhost:<port>/auth/callback
      redirect-uri: http://localhost:65525/auth/callback
      targets:
          foo.cluster:
              server: http://osprey.foo.cluster
              # If "use-gke-clientconfig" is specified (default false) Osprey will fetch the API server URL and its
              # CA cert from the GKE-specific ClientConfig resource in kube-public. This resource is created automatically
              # by GKE when you enable the OIDC Identity Service. The "api-server" config element is also required.
              # Usually "api-server" would be set to the public API server endpoint; the fetched API server URL will be
              # the internal load balancer that proxies requests through the OIDC service.
              # use-gke-clientconfig: true
              #
              # If "skip-tls-verify" is specified (default false) Osprey will skip TLS verification when attempting
              # to make the connection to the specified server.  This can be used in conjunction with `server` or `api-server`.
              # skip-tls-verify: true
              #
              # If api-server is specified (default ""), Osprey will fetch the CA cert from the API server itself.
              # Overrides "server". A ConfigMap in kube-public called kube-root-ca.crt should be made accessible
              # to the system:anonymous group. This ConfigMap is created automatically with the Kubernetes feature
              # gate RootCAConfigMap which was alpha in Kubernetes v1.13 and became enabled by default in v1.20+
              # api-server: http://apiserver.foo.cluster
              aliases: [foo.alias]
              groups: [foo]
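
Because providers hold a list of configurations in v2, the same provider type can appear more than once. A minimal sketch of the prod/non-prod Azure scenario mentioned above (all values are illustrative):

providers:
  azure:
    - name: nonprod
      tenant-id: nonprod-tenant-id
      server-application-id: nonprod-server-application-id
      client-id: nonprod-client-id
      client-secret: nonprod-client-secret
      scopes: ["api://nonprod-tenant-id/Kubernetes.API.All"]
      redirect-uri: http://localhost:65525/auth/callback
      targets:
        dev.cluster:
          server: https://osprey.dev.cluster
    - name: prod
      tenant-id: prod-tenant-id
      server-application-id: prod-server-application-id
      client-id: prod-client-id
      client-secret: prod-client-secret
      scopes: ["api://prod-tenant-id/Kubernetes.API.All"]
      redirect-uri: http://localhost:65526/auth/callback
      targets:
        prod.cluster:
          server: https://osprey.prod.cluster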

V1 Config (Deprecated)

This is the previously supported format. The fields are the same, but the provider configuration is a map keyed by provider type instead of a list. The config parser uses this format unless apiVersion is set to v2 in the config.

providers:
    osprey:
      targets:
          local.cluster:
              server: https://osprey.local.cluster
          foo.cluster:
              server: https://osprey.foo.cluster
              alias: [foo]
              groups: [foo, foobar]
          bar.cluster:
              server: https://osprey.bar.cluster
              groups: [bar, foobar]

  # Authenticating against Azure AD
    azure:
        tenant-id: your-tenant-id
        server-application-id: api://SERVER-APPLICATION-ID   # Application ID of the "Osprey - Kubernetes APIserver"
        client-id: azure-application-client-id               # Client ID for the "Osprey - Client" application
        client-secret: azure-application-client-secret       # Client Secret for the "Osprey - Client" application
        scopes:
            # This must be in the format "api://" due to non-interactive logins appending this to the audience in the JWT.
            - "api://SERVER-APPLICATION-ID/Kubernetes.API.All"
        redirect-uri: http://localhost:65525/auth/callback   # Redirect URI configured for the "Osprey - Client" application
        targets: ...

The names of the configured targets are used to name the managed clusters, contexts, and users. They can be set up as desired. Use the aliases property of the targets to create alias contexts in the kubeconfig.

The previous configuration will result in the following kubeconfig file for the user jdoe:

$ osprey user login --ospreyconfig /tmp/osprey-238319279/.osprey/config
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: YUhSMGNITTZMeTloY0dselpYSjJaWEl1YzJGdVpHSnZlQzVqYjNOdGFXTXVjMnQ1
    server: https://apiserver.foo.cluster
  name: foo.cluster
contexts:
- context:
    cluster: foo.cluster
    user: foo.cluster
  name: foo.cluster
- context:
    cluster: foo.cluster
    user: foo.cluster
  name: foo.alias
current-context: ""
preferences: {}
users:
- name: foo.cluster
  user:
    auth-provider:
      config:
        client-id: oidc_client_id
        client-secret: oidc_client_secret
        id-token: jdoe_Token
        idp-certificate-authority-data: aHR0cHM6Ly9kZXguc2FuZGJveC5jb3NtaWMuc2t5
        idp-issuer-url: https://dex.foo.cluster
      name: oidc

The client will create/update one instance of cluster, context, and user in the kubeconfig file per target in the ospreyconfig file. We use client-go's config api to manipulate the kubeconfig.

If previous contexts exist in the kubectl config file, they are updated/overridden when performing a login. Values are overridden by name (e.g. cluster.name, context.name, user.name). It is recommended to remove old values the first time you use Osprey for a specific cluster, to keep the config clean.

The names of clusters, users and contexts use the values defined in the Osprey config.
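
For example, stale entries for a target named foo.cluster could be removed with kubectl before the first login (names here are illustrative):

$ kubectl config delete-context foo.cluster
$ kubectl config delete-cluster foo.cluster
$ kubectl config unset users.foo.cluster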

Server

The Osprey server can be started in two different ways:

  • osprey serve cluster-info
  • osprey serve auth

osprey serve cluster-info

Starts an instance of the Osprey server that runs a webserver capable of returning cluster information. In this mode, authentication is disabled. This endpoint is used for service discovery of an osprey target.

This endpoint (/cluster-info) will return the API server URL and the CA for the API server.

In this mode, the required flags are:

  • apiServerCA, the path to the API server CA (defaults to /var/run/secrets/kubernetes.io/serviceaccount/ca.crt) which is the default location of the CA when running inside a Kubernetes cluster.
  • apiServerURL, the API server URL to return to the Osprey client
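
A sketch of starting the server in this mode (flag spelling assumed from the list above; check osprey serve cluster-info --help for the authoritative flags):

$ osprey serve cluster-info \
    --apiServerURL=https://apiserver.foo.cluster \
    --apiServerCA=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt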

Note that since v2.5.0 Osprey client can fetch the CA cert directly from the API server without needing a deployment of Osprey server.

osprey serve auth

Starts an instance of the osprey server that listens for authentication requests. The configuration is done through the command's flags. The Osprey service receives the user's credentials and forwards them to the OIDC provider (Dex) for authentication. On success, it returns the token generated by the provider along with additional information about the cluster, so that the client can generate the kubectl config file.

$ osprey serve auth --help

When Osprey is being used for authentication, the following flags must be set to the same values across the specified components:

  • environment, id of the cluster to be used as a client id
    • Dex: registered client id (managed via the Dex api or staticClients)
    • Kubernetes API server: oidc-client-id flag
  • secret, token to be shared between Dex and Osprey
    • Dex: registered client secret (managed via the Dex api or staticClients)
  • redirectURL, Osprey's callback url
    • Dex: registered client redirectURIs (managed via the Dex api or staticClients)
  • issuerURL, Dex's URL
    • Dex: issuer value
    • Kubernetes API server: oidc-issuer-url flag
  • issuerCA, Dex's CA certificate path
    • Kubernetes API server: oidc-ca-file flag
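
A sketch of how these values might line up across the components (all names and values are illustrative, not taken from a real deployment):

# osprey serve auth flags
--environment=foo.cluster --secret=some-shared-secret \
--redirectURL=https://osprey.foo.cluster/callback \
--issuerURL=https://dex.foo.cluster --issuerCA=/etc/dex/ca.crt

# Dex staticClients entry
staticClients:
- id: foo.cluster             # matches environment / oidc-client-id
  secret: some-shared-secret  # matches secret
  redirectURIs:
  - https://osprey.foo.cluster/callback

# Kubernetes API server flags
--oidc-client-id=foo.cluster
--oidc-issuer-url=https://dex.foo.cluster
--oidc-ca-file=/etc/dex/ca.crt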

The following diagram depicts the authentication flow from the moment the Osprey client requests a token.

                                       +------------------------+
                                       |                        |
                                       |      +----------------------------+
                                       |      |                 |          |
+------------------+                   |  +---v--------------+  |          |
|                  | 1./access-token   |  |                  |  |          |
|  Osprey Client   +---------------------->   Osprey Server  +-----+       |
|                  |                   |  |                  |  |  |       |
+------------------+                   |  +--+--------+------+  |  |       |
                                       |     |        |         |  |       |
                                       |     |        |         |  |       |
                                       |     | 2.     | 3.      |  |       |
                                       |     |/auth   |/login   |  |6. code|exchange
                                       |     |        |         |  |       |
                                       |     |        |         |  |       |
+------------------+                   |  +--v--------v------+  |  |       |
|                  |                   |  |                  |  |  |       |
|       LDAP       | 4. authenticate   |  |       Dex        <-----+       |
|                  <----------------------+                  +-------------+
+------------------+                   |  +------------------+  |  5. /callback
                                       |                        |
                                       |      Environment       |
                                       +------------------------+

After the user enters their credentials through the Osprey Client:

  1. An HTTPS call is made to an Osprey Server per environment configured.
  2. Per environment:
    1. The Osprey Server will make an authentication request to Dex which will return an authentication url to use and a request ID.
    2. The Osprey Server will post the user credentials using the auth request details.
    3. Dex will call LDAP to validate the user.
    4. Upon a successful validation, Dex will redirect the request to the Osprey Server's callback url, with a generated code and the request ID.
    5. The Osprey Server will exchange the code with Dex to get the final token that is then returned to the client.
    6. The Osprey Client updates the kubeconfig file with the updated token.

TLS

Because the Osprey client sends the user's credentials to the server, the communication must always be done securely. The Osprey server has to run using HTTPS, so a certificate and a key must be generated and provided at startup. The client must be configured with the CA used to sign the certificate in order to communicate with the server.

A script to generate a test self-signed certificate, key and CA can be found in the examples.
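
For local testing, a minimal sketch with openssl (the examples script remains the reference; note that recent Go versions require a SAN on the certificate):

$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout osprey.key -out osprey.crt \
    -subj "/CN=osprey.local.cluster" \
    -addext "subjectAltName=DNS:osprey.local.cluster"

The resulting osprey.crt doubles as the CA to configure in the client, since it is self-signed.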

Dex templates and static content

By default Dex searches for web resources in a web folder located in the same directory where the server is started. This location can be overridden in Dex's configuration:

...
frontend:
  dir: /path/to/the/templates
  theme: osprey
...

Dex also requires a web/static folder and a web/themes/<theme> folder for static content. Osprey does not require any of these, but the folders are required to be there, even if empty.

Because the authentication flow does not involve the user, the data exchanged between Dex and Osprey must be in JSON, so the HTML templates need to be customised.

A folder with the required configuration for Osprey can be taken from our test setup. The only theme is osprey and it is empty. All the template files are required to be present, but not all of them are used in the authentication flow.
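
A sketch of the expected layout (template names follow Dex's own templates; the theme directory can be empty):

web/
├── static/
├── templates/
│   ├── approval.html
│   ├── login.html
│   ├── password.html
│   └── ... (remaining Dex templates)
└── themes/
    └── osprey/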

Identity Provider

Osprey doesn't currently support Dex using multiple Identity Providers as the user would be required to select one of them (login.html) before proceeding to the authentication request.

Therefore currently only one Identity Provider can be configured.

Token expiry and Refresh token

Dex allows the token expiry to be configured, and it also provides a refresh token so that a client can request a new token without user interaction.

Given Osprey's current usage, it was decided to discard the refresh token, to prevent a compromised token from being active for more than a configured amount of time. If the need arises, this could be reintroduced and enabled/disabled by configuration.

API server

The Kubernetes API server needs OIDC authentication enabled in order for kubectl requests to be authenticated and then authorised.

Some of those flags have been mentioned in the configuration above.
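
For the Dex-based setup described above, the API server flags would look something like this (values are illustrative; the claim names depend on your Dex connector):

--oidc-issuer-url=https://dex.foo.cluster
--oidc-client-id=foo.cluster
--oidc-ca-file=/etc/dex/ca.crt
--oidc-username-claim=email
--oidc-groups-claim=groups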

Examples

Download and install the Osprey binaries so that the client can be used for the examples.

Kubernetes

A set of example resources has been provided to create the resources required to deploy Dex and Osprey to a Kubernetes cluster. The templates can be found in examples/kubernetes.

  1. Provide the required properties in examples/kubernetes/kubernetes.properties:

    • node, the script uses a NodePort service, so in order to configure Osprey and Dex to talk to each other, a node IP from the target cluster must be provided. A list of IPs to choose from can be obtained via:
      kubectl --context <context> get nodes -o template --template='{{range.items}}{{range.status.addresses}}{{if eq .type "InternalIP"}} {{.address}}:{{end}}{{end}}{{end}}' | tr ":" "\n"
      
    • context, the script uses kubectl to apply the resources and for this it needs a context to target.
    • ospreyNodePort, dexNodePort, dexTelemetryNodePort, the ports where Osprey and Dex (service and metrics) will be available across the cluster. A default value is provided, but if the ports are already in use, they must be changed.
    • ospreyImage, if you want to try a different version for the server.
  2. Run the shell script to render the templates and to deploy the resources to the specified cluster.

    examples/kubernetes/deploy-all.sh </full/path/to/runtime/dir>
    

    To create an Osprey server that serves /cluster-info only, set ospreyAuthenticationDisabled=true in the properties file.

  3. Use the Osprey client

    osprey --ospreyconfig </full/path/to/runtime/dir/>osprey/ospreyconfig --help
    

More properties are available to customize the resources at will.

Local docker containers

Although the Osprey solution is intended to be run in a Kubernetes cluster, with the OIDC Authentication enabled, it is possible to have a local instance of Osprey and Dex to try out and validate a specific configuration.

A set of scripts has been provided to run an end-to-end flow of a user logging in, checking details and logging out.

From the root of the project:

$ mkdir /tmp/osprey_local
$ examples/local/end-to-end.sh /tmp/osprey_local

The end-to-end.sh script will:

  1. Start a Dex server (start-dex.sh)
  2. Start an Osprey server (start-osprey.sh)
  3. Execute the osprey user login command. It will request credentials, use user/pass: [email protected]/doe, [email protected]/doe
  4. Execute the osprey user command
  5. Execute the osprey user logout command
  6. Execute the osprey user command
  7. Shutdown Osprey and Dex

You can also start Dex and Osprey manually with the scripts and play with the Osprey client yourself.

The scripts use templates for the Dex configuration and the Osprey client configuration. The scripts load a properties file to render the templates.

Development

First install the required dependencies:

  • make setup - installs required golang binaries
  • slapd, usually part of the openldap package - needed for end-to-end tests

Then run the tests using make:

$ make

Package structure

  • /cmd contains the cobra commands to start a server or a client.
  • /client contains code for the osprey cli client.
  • /server contains code for the osprey server.
  • /common contains code common to both client and server.
  • /e2e contains the end-to-end tests, and test utils for dependencies.
  • /examples contains scripts to start Dex and Osprey in a Kubernetes cluster or locally.
  • /vendor contains the dependencies of the project.

Server and client

We use Cobra to generate the client and server commands.

E2E tests

The e2e tests are executed against local Dex and LDAP servers.

Note: the Docker image below is only for linux/amd64. You might be able to get it working with other architectures, but it's not officially supported yet. There is a Dockerfile located in the e2e/ directory that will handle the dependencies for you. (This came about due to dependency issues with older versions of openldap in Ubuntu.)

The setup is as follows:

Osprey Client (1) -> (N) Osprey Server (1) -> (1) Dex (N) -> (1) LDAP

Each pair of osprey server-Dex represents an environment (cluster) setup. One osprey client contacts as many osprey-servers as configured in the test setup. Each osprey server will talk to only one Dex instance located in the same environment. All Dex instances from the different environments will talk to the single LDAP instance.

For cloud end-to-end tests, a mocked OIDC server is created and used to authenticate with.

Running E2E tests locally

There's a Dockerfile called Dockerfile.localtest which sets up a test environment similar to the one used by Travis. Travis uses Ubuntu 16.04 (Xenial) as the default build environment unless changed with a dist directive.

To run the tests locally:

  1. Build the local image: cd <osprey root>/e2e && docker build -f Dockerfile.localtest -t local-osprey-e2etest:1 .
  2. Run the container with bash as the entry point: docker run -it -v <osprey root folder>:/osprey local-osprey-e2etest:1
  3. Inside the container, build and run the tests:
     make build
     make test

HTTPS/ProtocolBuffers

Given that AWS ELBs do not support HTTP/2, Osprey needs to run over HTTP. We still use Protocol Buffers for the requests and responses between Osprey and its client.

Any changes made to the proto files should be backwards compatible. This guarantees older clients can continue to work against Osprey, and we don't need to worry about updates to older clients.

To make changes, update common/pb/osprey.proto and then run protoc via make:

$ make proto

Check in the osprey.pb.go file afterwards.

Azure Active Directory setup

The Azure AD application setup requires two applications to be created: one for the Kubernetes API servers to use, and one for the Osprey client to use. The Osprey client is then configured to request access on behalf of the Kubernetes OIDC provider.

Create Osprey Kubernetes Application

  1. Visit https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview and log in using your organisation's credentials.
  2. Select 'App Registrations' from the side-bar and click '+ New registration' on the top menu bar.
  3. Create an application with the following details:
    • Name: "Osprey - Kubernetes API Server"
    • Supported account types: "Accounts in this organizational directory only"
  4. Select 'API permissions' from the side-bar and click '+ Add a permission'. Add the following permissions:
    • Microsoft Graph -> Delegated permissions -> Enable access to "email", "openid" and "profile"
    • Click 'Add permissions' to save.
  5. Select 'Expose an API' from the side-bar and click '+ Add a scope'
  6. Create a scope with an appropriate/descriptive name. e.g. Kubernetes.API.All. The details in this form are what are shown to users when they first authorize the application to log in on their behalf.
  7. Select 'Manifest' from the side-bar and find the field groupMembershipClaims in the JSON. Change this so that its value is "groupMembershipClaims": "All", and not "groupMembershipClaims": null,
  8. The server client-id is the Object ID of this application. This can be found in the Overview panel.

Create Osprey Client Application

  1. Visit https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview and log in using your organisation's credentials.
  2. Select 'App Registrations' from the side-bar and click '+ New registration' on the top menu bar.
  3. Create an application with the following details:
    • Name: "Osprey - Client"
    • Supported account types: "Accounts in this organizational directory only"
    • RedirectURI:
      • Type: Web. RedirectURI: a redirect URI that must be configured to match in both the Azure application config and the Osprey config. It has to be in the http://localhost:<port>/<path> format. This is the port the Osprey client opens a webserver on to listen for callbacks from the login page. We use http://localhost:65525/auth/callback in the example configuration.
  4. Select 'API permissions' from the side-bar and click '+ Add a permission'. Add the following permissions:
    • Microsoft Graph -> Delegated permissions -> Enable access to "openid"
    • Click 'Add permissions' to save.
  5. Click '+ Add a permission' and select 'My APIs' from the top of the pop-out menu.
    • Select the "Osprey - Kubernetes API Server"
    • Click 'Add permissions' to save.
  6. Select 'Certificates & secrets' from the side-bar and click '+ New client secret'
    • Choose an expiry for this secret. When the secret expires, a new one must be generated and the osprey client config updated to include it as the 'client-secret'. Copy this secret as soon as it is created, as it will be hidden when you leave the Azure pane.
  7. The osprey client-id is the Object ID of this application. This can be found in the Overview panel.

The client ID and secrets generated in this section are used to fill out the Osprey config file.

providers:
  azure:
    - tenant-id: your-tenant-id
      server-application-id: api://SERVER-APPLICATION-ID   # Application ID of the "Osprey - Kubernetes APIserver"
      client-id: azure-application-client-id               # Client ID for the "Osprey - Client" application
      client-secret: azure-application-client-secret       # Client Secret for the "Osprey - Client" application
      scopes:
        # This must be in the format "api://" due to non-interactive logins appending this to the audience in the JWT.
        - "api://SERVER-APPLICATION-ID/Kubernetes.API.All"
      redirect-uri: http://localhost:65525/auth/callback   # Redirect URI configured for the "Osprey - Client" application

Kubernetes API server flags:

- --oidc-issuer-url=https://sts.windows.net/<tenant-id>/
- --oidc-client-id=api://9bd903fd-f8df-4390-9a45-ab2fa28673b4
- --oidc-username-claim=unique_name
- --oidc-groups-claim=groups

Dependency management

Dependencies are managed with Go modules. Run go mod download to download all dependencies.

Make sure any Kubernetes dependencies are compatible with kubernetes-1.8.5.

Releasing

Tag the commit in master using an annotated tag and push it to release it. Only maintainers can do this.

Osprey gets released to the skycirrus/osprey repository on Docker Hub.

Code guidelines


osprey's Issues

Change travis build agent to a newer ubuntu release

Currently, travis agents run ubuntu 16.04 (Xenial) by default. This is pretty dated and does not reflect most of our development environments. It would be good to change this to have agents running on newer releases of ubuntu.

This won't be a trivial change, as slapd, which is used for the e2e tests as the LDAP database, will need a different configuration in order to start.

snippet of build failure with ubuntu 22.04
Failure [2.350 seconds]
[BeforeSuite] BeforeSuite 
/home/travis/gopath/src/github.com/sky-uk/osprey/e2e/e2e_suite_test.go:64
  Starts the ldap server
  Expected
      <*exec.ExitError | 0xc0004c2180>: {
          ProcessState: {
              pid: 15373,
              status: 256,
              rusage: {
                  Utime: {Sec: 0, Usec: 3221},
                  Stime: {Sec: 0, Usec: 6443},
                  Maxrss: 63624,
                  Ixrss: 0,
                  Idrss: 0,
                  Isrss: 0,
                  Minflt: 705,
                  Majflt: 0,
                  Nswap: 0,
                  Inblock: 0,
                  Oublock: 0,
                  Msgsnd: 0,
                  Msgrcv: 0,
                  Nsignals: 0,
                  Nvcsw: 1,
                  Nivcsw: 1,
              },
          },
          Stderr: nil,
      }
  to be nil
  /home/travis/gopath/src/github.com/sky-uk/osprey/e2e/e2e_suite_test.go:73
------------------------------
--- Output ---
*** ASYNC COMMAND STARTED
[slapd -d 0 -h ldap://localhost:10389 ldaps://localhost:10636 ldapi://%!F(MISSING)tmp%!F(MISSING)osprey-2515455013%!F(MISSING)ldap%!F(MISSING)ldap.unix -f /tmp/osprey-2515455013/ldap/ldap.conf]
--- End Output ---
Ran 83 of 0 Specs in 2.352 seconds
FAIL! -- 0 Passed | 83 Failed | 0 Pending | 0 Skipped
--- FAIL: TestOspreySuite (2.56s)

As part of this change, the docker file used for local e2e test will also need to be changed to reflect the newer version

Tasks

  1. Make changes to ldap config template in order to get ldap server started. Some of the changes include the password having to be hashed
  2. Change the base image in local e2e dockerfile

failed to create oauth config still passes healthz

failed to create oauth config: unable to create oidc provider "https://{FOO}:5556": Get https://{FOO}:5556/.well-known/openid-configuration: x509: certificate signed by unknown authority.

(that is kind of expected in a new cluster if osprey starts before dex is ready, but I would like osprey to restart by itself in that case)
I would think this should exit on the "unable to create oidc provider" error and/or change the healthz status to 500 unhealthy.

is that reasonable?

Wrong path for golint

The command make setup produces the following error:

== setup
go get -v github.com/golang/lint/golint
github.com/golang/lint (download)
package github.com/golang/lint/golint: code in directory /home/nicolas_ocquidant/go-workspace/src/github.com/golang/lint/golint expects import "golang.org/x/lint/golint"
Makefile:39: recipe for target 'setup' failed
make: *** [setup] Error 1

I needed to change the path in Makefile according to golang/lint#415

diff --git a/Makefile b/Makefile
index b509623..01d170c 100644
--- a/Makefile
+++ b/Makefile
@@ -37,7 +37,7 @@ endif
 
 setup:
        @echo "== setup"
-       go get -v github.com/golang/lint/golint
+       go get -v golang.org/x/lint/golint
        go get golang.org/x/tools/cmd/goimports
        go get github.com/golang/dep/cmd/dep
        dep ensure

Thanks
--nick

It's not possible to install with go install

❯ go install github.com/sky-uk/[email protected]
go install: github.com/sky-uk/[email protected]: github.com/sky-uk/[email protected]: invalid version: module contains a go.mod file, so major version must be compatible: should be v0 or v1, not v2

And go install of @latest picks up the old v1.5.0 version, which panics when running.

go install github.com/sky-uk/osprey@latest
go: downloading github.com/sky-uk/osprey v1.5.0
...
osprey user login
panic: mismatching message name: got pb.LoginResponse, want pb.ConsumeLoginResponse

Also see #45

User configuration erased after log in

Before login

CURRENT   NAME   CLUSTER   AUTHINFO   NAMESPACE
*         a      a.b.c     a.b.c      namespace

After login

*         a      a.b.c     a.b.c

Any config for the contexts that have been logged in is erased. Other contexts are left intact.

Nothing more than an annoyance, but it would be nice if it did not do this.

EDIT: Redacted content by @totahuanocotl

Kubernetes no longer allows RW mounting of config maps

Newer versions of K8S don't allow secrets or config maps to be mounted RW. I ran into a problem with the example dex yamls not starting because the web-templates were mounted RO not RW as expected. "kubectl describe pod" clearly showed the RO error in this case.

A workaround for this is to add an init container to copy the config map to an emptyDir store.

So in dex.yaml I added this volume:

volumes:
- name: dex-web
  emptyDir: {}

Then the init container

  initContainers:
    - name: copy-dex-web-templates
      image: busybox
      command: ['sh', '-c', 'cp /configmap/* /web/templates']
      volumeMounts:
        - name: dex-web-templates
          mountPath: /configmap
        - name: dex-web
          mountPath: /web/templates

and then for the dex container you would replace the 7 web-template mounts with dex-web instead of dex-web-templates. For mine I just replaced them with this one

    - mountPath: /web/templates/
      name: dex-web

That let dex start up and work correctly as it did previously.

Proxy list does not contain release tag v2.0.0

We'd like the ability to always download the latest released tag version of the master branch; therefore we need the proxy list to always be updated with the newest release tag.

This will give the ability to configure the provider object within the osprey Config object.

Currently when we try to update our dependencies with go get -t github.com/sky-uk/osprey, it ends up pulling the v1.5.0, but we have checked that the latest release tag is v2.0.0 from the master branch. However, this isn't in the proxy list below:
https://proxy.golang.org/github.com/sky-uk/osprey/@v/list

How to run osprey using dex along with other login applications

I have osprey working successfully in my lab using an instance of dex with the modified web/templates that replace the html with json. I really like the fact that osprey can collect the username/password from the command line rather than having to launch a browser.

In our production clusters we are already using dex along with auth2_proxy to provide authentication for our dashboard. In this case we need to use the stock dex web/templates because they provide the HTML for the login page.

Is it possible to support both the dashboard and osprey at the same time using a single instance of dex? I haven't come across any way to serve different web templates based on the application/clientId being used, but this would be ideal. Alternatively, is it actually possible to use two different instances of dex, one dedicated to osprey and the other dedicated to the dashboard? I had considered this approach but then I'd have two different certificate issuers and the kubernetes API only allows us to specify a single OIDC issuer url. Even if the two instances use the same certificates, they would have different URLs for discovery, generating and refreshing tokens.

I would really appreciate anyone's feedback on this.. thank you!

2FA

I love Osprey, but we have a new requirement for multi factor authentication.

There are other tools I can use to achieve OTP MFA for Kubernetes with OIDC, but as far as I can see they are all reliant on using a web browser, and I am loath to abandon the clean, CLI-only approach of Osprey.

From a quick look at the Dex code, I don't think it supports MFA.

If an alternative OIDC provider with MFA support offered customisable web templates like Dex does, it would be quite easy to adapt Osprey to work with it and pass an OTP value along with the username / password.

Fix the SSL certificates in the tests

Currently the tests need the setting GODEBUG=x509ignoreCN=0. That's because without that, on newer versions of go, the tests give the following error:

Post \"https://localhost:12984/access-token\": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0

So ideally, the test suite needs to be updated so that it provisions certs that work with the expectations of the new versions of go (using SAN instead of CN, presumably). Then we can remove the GODEBUG workaround from the makefile.

Allow `updateKubeconfig` function to be exported. Therefore, convert to a public function instead of having it as private.

In order to allow the functionality to update the kubeconfig from the cmd package, it would be handy to convert the function into a public function.

The current implementation doesn't allow us to integrate the osprey login strategy with our in-house tooling.

https://github.com/sky-uk/osprey/blob/master/cmd/login.go#L101

func updateKubeconfig(target client.Target, tokenData *client.TargetInfo) {
	err := kubeconfig.UpdateConfig(target.Name(), target.Aliases(), tokenData)
	if err != nil {
		log.Errorf("Failed to update config for %s: %v", target.Name(), err)
		return
	}
	aliases := ""
	if target.HasAliases() {
		aliases = fmt.Sprintf(" | %s", strings.Join(target.Aliases(), " | "))
	}
	log.Infof("Logged in to: %s %s", target.Name(), aliases)
}

to

func UpdateKubeconfig(target client.Target, tokenData *client.TargetInfo) {
	err := kubeconfig.UpdateConfig(target.Name(), target.Aliases(), tokenData)
	if err != nil {
		log.Errorf("Failed to update config for %s: %v", target.Name(), err)
		return
	}
	aliases := ""
	if target.HasAliases() {
		aliases = fmt.Sprintf(" | %s", strings.Join(target.Aliases(), " | "))
	}
	log.Infof("Logged in to: %s %s", target.Name(), aliases)
}

Create a new function to populate a new Target struct object

In reference to https://github.com/sky-uk/osprey/blob/master/client/target.go#L8

We want to have the ability to populate our Target structs fields in order for us to integrate the login strategy of osprey into our in-house tooling.

The current upstream login implementation looks like so,

targetData, err := retriever.RetrieveClusterDetailsAndAuthTokens(target)
            if err != nil {
                if state, ok := status.FromError(err); ok && state.Code() == codes.Unauthenticated {
                    log.Fatalf("Failed to log in to %s: %v", target.Name(), state.Message())
                }
                success = false
                log.Errorf("Failed to log in to %s: %v", target.Name(), err)
                continue
            }
            updateKubeconfig(target, targetData)

A way that we can integrate osprey's login strategy into our in-house tooling is by creating a new constructor of the Target object.

Allow empty tls-key as we might stay behind proxy

When behind an NGINX reverse proxy, for instance, we might not want to specify tls-cert and tls-key. But without these arguments, the osprey server refuses to launch:

time="2019-05-23T11:41:12Z" level=fatal msg="Failed to load tls-cert: failed to read certificate file \"\": open : no such file or directory"

Applying this small change allows the server to run without tls-key:

diff --git a/common/web/client.go b/common/web/client.go
index 893aa88..fe3b9cf 100644
--- a/common/web/client.go
+++ b/common/web/client.go
@@ -15,6 +15,9 @@ import (
 // LoadTLSCert loads a PEM-encoded certificate from file and returns it as a
 // base64-encoded string.
 func LoadTLSCert(path string) (string, error) {
+       if path == "" {
+               return "", nil
+       }
        fileData, err := ioutil.ReadFile(path)
        if err != nil {
                return "", fmt.Errorf("failed to read certificate file %q: %v", path, err)
time="2019-05-23T11:53:46Z" level=info msg="Starting to listen at: http://0.0.0.0:5555"

Thanks
nick

Failed to marshal success response: proto: Marshal called with nil

I'm running Osprey in a Kubernetes 1.10.2 cluster talking to Dex 2.10.0 and I get Failed to marshal success response: proto: Marshal called with nil on the callback from Dex after successful LDAP bind.

I note that in server/web/server.go there's a check for nil on the response, but it continues to attempt to marshal even if response is nil. Is this intended behavior? And if so, any ideas about what could be causing my error?

Osprey should be able to run/start without dex

Osprey should be able to start and keep running without dex, or with dex unavailable.
This means that if dex is unavailable initially, subsequent access-token calls would attempt to re-establish/authenticate with dex; if dex is still unavailable, the request would fail (which is the current behaviour anyway if dex suffers any downtime while serving a request).

The change will potentially involve introducing a go routine to constantly watch and monitor the connection with dex and re-establish when dropped etc.

We could also introduce a /healthz endpoint for osprey server which can be used as a readiness probe in the deployment.

Failed to parse dex response querying auth endpoint

dex version 2.10.0 using ldap connector
osprey version 2.0
both running inside kube

When i try to log in, osprey server calls consumeAuthResponse which does a GET request to the dex endpoint
/dex/auth?client_id=login&redirect_uri=https%3A%2F%2Flogin.example.com%2Fcallback%2Fauth&response_type=code&scope=groups+openid+profile+email+offline_access&....

but the dex response is an HTML page with a login form instead of the JSON that consumeAuthResponse tries to unmarshal into a loginForm

type loginForm struct {
	Action        string `json:"action"`
	LoginField    string `json:"login"`
	LoginValue    string `json:"-"`
	PasswordField string `json:"password"`
	PasswordValue string `json:"-"`
	Invalid       bool   `json:"invalid,omitempty"`
}

the form: (screenshot of the Dex HTML login form)

and that obviously translates to a parsing error in the osprey server:
failed to parse auth response: invalid character '<' looking for beginning of value

have I misconfigured dex or something?

thanks in advance

Flatten commands

Move login and logout command to be root commands, and not subcommands of user.

Allow using env variable to login

In a CI/CD context, we can't use stdin to enter username and password.

Do you mind adding the following changes?

diff --git a/client/credentials.go b/client/credentials.go
index 76a7cf8..47959d3 100644
--- a/client/credentials.go
+++ b/client/credentials.go
@@ -20,6 +20,11 @@ type LoginCredentials struct {
 
 // GetCredentials loads the credentials from the terminal or stdin.
 func GetCredentials() (*LoginCredentials, error) {
+       username := os.Getenv("OSPREY_USERNAME")
+       password := os.Getenv("OSPREY_PASSWORD")
+       if username != "" && password != "" {
+               return &LoginCredentials{Username: username, Password: password}, nil
+       }
        if terminal.IsTerminal(int(syscall.Stdin)) {
                return consumeCredentials(hiddenInput)
        }

Thanks
--nick

e2e tests are flaky when run in parallel

When running the e2e tests locally, I observed failures which were flaky (in the sense that a different test/step would fail each time) but fairly consistent (in the sense that any given test run was pretty likely to fail). Explicitly disabling parallelism with go test -p 1 removed these failures. I suspect tests in different modules are fighting over ports (or some other global resource). If I'm correct in my hypothesis that port numbers are the contended resource, then probably global port allocation needs to be used across the whole test suite like this.

(NB this is not just ports for the go code to bind to, but also potentially ports for the LDAP server we spin up...)

Add target groups in client config

Allow users to configure group targets in the ospreyconfig, and to specify a group target when logging in.
The user should be able to specify a default group.

  • If a group is provided, osprey should only log in to the targets in that group.
    • If the group does not exist, it should display an error.
  • If a group is not provided:
    • If no default group is configured, osprey should log in to any targets not belonging to a group.
    • If a default group is configured, osprey should only log in to the targets in the default group.
  • If there are no targets in a group (or outside a group, for any of the scenarios above), osprey should do nothing.

Create a binary distribution pipeline

Bintray, previously used to distribute open source binaries, has been discontinued (#62). We need to create an alternative channel to release binaries to, which is publicly accessible.

Possible options:

  • Github artifacts
  • ...?

Requirements:

  • Identify a binary host from the possible options above
  • Upload the latest osprey version to those hosts
  • Create tools to assist developers to upload future releases to that platform
  • (optional) upload historic versions to the platform as well

Panic with invalid configuration file

Panic when the configuration file has no "providers".
Encountered this after upgrading osprey from v1 to v2 as the configuration format changed.

Steps to reproduce:

$ > ~/.osprey/config
$ osprey user
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x91363b]

goroutine 1 [running]:
github.com/sky-uk/osprey/client.LoadConfig(0xc0000365c0, 0x1b, 0x8a5627, 0xae4920, 0xc00016b230)
	/home/travis/gopath/src/github.com/sky-uk/osprey/client/config.go:86 +0x1db
github.com/sky-uk/osprey/cmd.user(0x118fc60, 0x11b6a50, 0x0, 0x0)
	/home/travis/gopath/src/github.com/sky-uk/osprey/cmd/user.go:34 +0x4e
github.com/spf13/cobra.(*Command).execute(0x118fc60, 0x11b6a50, 0x0, 0x0, 0x118fc60, 0x11b6a50)
	/home/travis/gopath/pkg/mod/github.com/spf13/[email protected]/command.go:830 +0x2aa
github.com/spf13/cobra.(*Command).ExecuteC(0x1190b60, 0x43d25a, 0x11437a0, 0xc000000180)
	/home/travis/gopath/pkg/mod/github.com/spf13/[email protected]/command.go:914 +0x2fb
github.com/spf13/cobra.(*Command).Execute(...)
	/home/travis/gopath/pkg/mod/github.com/spf13/[email protected]/command.go:864
github.com/sky-uk/osprey/cmd.Execute()
	/home/travis/gopath/src/github.com/sky-uk/osprey/cmd/root.go:16 +0x31
main.main()
	/home/travis/gopath/src/github.com/sky-uk/osprey/main.go:6 +0x20

404 error with kubernetes example deployment

I am getting a "404 page not found" when doing an osprey user login against an osprey server set up with deploy-all.sh.

The only things I have set in kubernetes.properties are node and context.

All that is in the logs is:

kubectl logs deploy/osprey-example-deployment -n osprey-example
time="2018-03-10T12:06:31Z" level=info msg="Starting to listen at: https://0.0.0.0:5555"

And for dex;

kubectl logs deploy/dex-example-deployment -n osprey-example
{"level":"info","msg":"config using log level: debug","time":"2018-03-10T12:06:13Z"}
{"level":"info","msg":"config issuer: https://10.20.20.11:31885","time":"2018-03-10T12:06:13Z"}
{"level":"info","msg":"config storage: sqlite3","time":"2018-03-10T12:06:13Z"}
{"level":"info","msg":"config static client: foo.cluster","time":"2018-03-10T12:06:13Z"}
{"level":"info","msg":"config connector: local passwords enabled","time":"2018-03-10T12:06:13Z"}
{"level":"info","msg":"config skipping approval screen","time":"2018-03-10T12:06:13Z"}
{"level":"info","msg":"config signing keys expire after: 24h0m0s","time":"2018-03-10T12:06:13Z"}
{"level":"info","msg":"config id tokens valid for: 24h0m0s","time":"2018-03-10T12:06:13Z"}
{"level":"info","msg":"keys expired, rotating","time":"2018-03-10T12:06:13Z"}
{"level":"info","msg":"keys rotated, next rotation: 2018-03-11 12:06:13.944287479 +0000 UTC","time":"2018-03-10T12:06:13Z"}
{"level":"info","msg":"listening (https) on 0.0.0.0:5556","time":"2018-03-10T12:06:13Z"}
2018/03/10 12:10:14 http: TLS handshake error from 10.32.0.1:42790: remote error: tls: bad certificate

Support parsing of kubeconfig with extra fields

The way in which some tooling modifies your existing ~/.kube/config doesn't behave well with the unmarshal that happens inside osprey.

It gets confused trying to parse the cluster.extensions blocks and errors out.

e.g. minikube inserts this into ~/.kube/config:

- cluster:
    certificate-authority: /Users/itme/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 03 Aug 2021 15:56:31 BST
        provider: minikube.sigs.k8s.io
        version: v1.21.0
      name: cluster_info
    server: https://127.0.0.1:49960
  name: minikube

Attempting to log into any group after this extra field is introduced causes all parsing to fail.

It can be replicated by adding in any field that osprey isn't aware of, e.g.

- context:
    break-me:
    cluster: cluster-url
    user: cluster-user
  name: cluster-name

Could be related to old versions of some packages used for generating the config being unfamiliar with the new fields, namely:

  • k8s.io/client-go/tools/clientcmd
  • k8s.io/client-go/tools/clientcmd/api

Support multiple Azure OIDC applications

We would like the ability to support multiple Azure applications as it currently only supports one configuration with multiple targets.

This would allow scenarios where a test application is required or a 'non-prod/prod' setup to reside in the same config file.

CA error in osprey pods with kubernetes example

After setting node and context in examples/kubernetes/kubernetes.properties and running examples/kubernetes/deploy-all.sh, the osprey server pods crash loop:

time="2018-03-10T11:44:29Z" level=fatal msg="Failed to create osprey server: failed to create oidc provider "https://10.200.33.11:31885\": Get https://10.200.33.11:31885/.well-known/openid-configuration: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "dex-ca")"
