denbi / perunkeystoneadapter

Perun Keystone Adapter parses data propagated by Perun and modifies a connected Keystone.

Home Page: https://perunkeystoneadapter.readthedocs.io/en/latest/

License: Apache License 2.0


perunkeystoneadapter's Introduction


Perun Keystone Adapter

The Perun Keystone Adapter is a library written in Python that parses data propagated by Perun and modifies a connected OpenStack Keystone.

Additionally, the PKA offers the possibility to set quotas for created projects and to create network resources for new projects.

Features

  • abstracts Keystone to simplify common tasks (create/delete/update/list users and projects)
  • parses SCIM or de.NBI portal compute center propagation data for users and projects
  • modifies Keystone according to the propagated data:
    • creates items (users or projects) in Keystone if they are propagated but do not exist yet
    • modifies item properties if they changed
    • marks items as deleted and disables them in Keystone if they are no longer propagated
    • functionality for deleting (marked and disabled) items is available but not integrated into the normal workflow
  • sets/modifies project quotas (needs a full OpenStack installation like DevStack for testing) [optional]
  • creates a network (router, net and subnet) for new projects [optional]
  • adjusts the default security group to support ssh access [optional]
  • compatible with Python 3.8+

Preparation

Before installing the Perun Keystone Adapter, make sure that the OpenStack domain used for propagation is empty, or that all existing projects and users that also exist in Perun are tagged to avoid naming conflicts; the project names must match the names of the corresponding groups in Perun. By default, everything created by the library is tagged as perun_propagation. This can be overridden in the constructor of the KeyStone class.

To help with this, the assets directory of this repository contains two scripts that set a flag of your choice on a user or a project.

  1. First, install all necessary dependencies (ideally in a virtual environment) by running

    $ pip install -r requirements/default.txt
  2. The scripts expect that you have sourced your OpenStack rc file:

    $ source env.rc
  3. Run one of the Python scripts:

    $ python set_project_flag.py  project_id flag_name

    or

    $ python set_user_flag.py  user_id flag_name

    where

    • user_id and project_id are OpenStack specific IDs
    • flag_name can be any value which is set for the flag attribute. If you do not modify the perunKeystoneAdapter, it expects perun_propagation as the value.
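For illustration, the flagging step could be sketched as follows. This is a hypothetical sketch, not the actual script from the assets directory; it assumes keystoneauth1/keystoneclient are installed and the rc file has been sourced.

```python
import os
import sys


def flag_payload(flag_name, flag_value="perun_propagation"):
    """Build the extra-attribute payload to store on the project."""
    return {flag_name: flag_value}


def main(project_id, flag_name):
    # Imports are local so the pure helper above works without OpenStack libs.
    from keystoneauth1.identity import v3
    from keystoneauth1.session import Session
    from keystoneclient.v3.client import Client

    auth = v3.Password(
        auth_url=os.environ["OS_AUTH_URL"],
        username=os.environ["OS_USERNAME"],
        password=os.environ["OS_PASSWORD"],
        project_name=os.environ["OS_PROJECT_NAME"],
        user_domain_name=os.environ.get("OS_USER_DOMAIN_NAME", "Default"),
        project_domain_id=os.environ.get("OS_PROJECT_DOMAIN_ID", "default"),
    )
    keystone = Client(session=Session(auth=auth))
    # Keystone stores unknown keyword arguments as extra project attributes.
    keystone.projects.update(project_id, **flag_payload(flag_name))


if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```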

Installation

Install a specific version of this library by providing a tag from the releases page.

E.g. for version 0.1.1:

pip install git+https://github.com/deNBI/perunKeystoneAdapter@0.1.1

Usage

Commandline client

The Perun propagation service transfers a gzipped tar file containing a users and a groups file in SCIM format. The following script unzips and untars the propagated information and adds it to the Keystone database. Keystone is addressed by environment variables (sourcing the OpenStack rc file) or directly by passing an environment map (not used in the example). The OpenStack user needs at least permission to modify entries in Keystone.

$ perun_propagation perun_upload.tar.gz
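The unpack step can be illustrated with a short sketch; the helper name and the assumption that every member of the tarball is a JSON document are for illustration only, not the adapter's actual API.

```python
import json
import tarfile


def extract_propagation(archive_path):
    """Unpack a gzipped tarball and parse every regular member as JSON."""
    data = {}
    with tarfile.open(archive_path, "r:gz") as tar:
        for member in tar.getmembers():
            if member.isfile():
                data[member.name] = json.load(tar.extractfile(member))
    return data
```

After this step, the parsed users and groups structures can be handed to the code that modifies Keystone.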

WSGI script

The Python module also contains a built-in server version of the perun_propagation script. The script uses Flask to provide an upload function and runs the library functions in a separate thread. It can be tested by simply starting the Flask built-in web server:

$ perun_propagation_service
 * Serving Flask app "denbi.scripts.perun_propagation_service" (lazy loading)
 * Environment: production
   WARNING: Do not use the development server in a production environment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)

For running this in production, you can for example use gunicorn as follows:

$ gunicorn --workers 1 --bind 127.0.0.1:5000 denbi.scripts.perun_propagation_service:app

Configuration

The Perun Keystone Adapter can be configured in two different ways: by environment variables or by a configuration file.

... by environment

# OpenStack credentials
# ---------------------
export OS_REGION_NAME="XXX"
export OS_PROJECT_DOMAIN_ID="XXX"
export OS_INTERFACE="public"
export OS_AUTH_URL="https://XXX"
export OS_USERNAME="admin"
export OS_PROJECT_ID="XXX"
export OS_USER_DOMAIN_NAME="Default"
export OS_PROJECT_NAME="admin"
export OS_PASSWORD="XXX"
export OS_IDENTITY_API_VERSION="3"

# Perun Keystone Adapter settings
# --------------------------------

# Location for storing propagated data
export PKA_BASE_DIR=/pka
# Location for storing logs, defaults to current working directory
export PKA_LOG_DIR=/log
# Log level, must be one of ERROR, WARNING, INFO, DEBUG, defaults to INFO
export PKA_LOG_LEVEL=INFO
# Do not make any modifications to keystone
export PKA_KEYSTONE_READ_ONLY=False
# Domain to create users and projects in, defaults to 'elixir'
export PKA_TARGET_DOMAIN_NAME=elixir
# Default role to assign to new users, defaults to 'user'
export PKA_DEFAULT_ROLE=user
export PKA_DEFAULT_NESTED=False
export PKA_ELIXIR_NAME=False
# Set quotas for projects
export PKA_SUPPORT_QUOTA=True
# Create router for new projects
export PKA_SUPPORT_ROUTER=True
# Create a default network/subnetwork for new projects
export PKA_SUPPORT_NETWORK=True
# External network used when creating routers for new projects
export PKA_EXTERNAL_NETWORK_ID=16b19dcf-a1e1-4f59-8256-a45170042790
# Add an ssh rule to the default security group
export PKA_SUPPORT_DEFAULT_SSH_SGRULE=True

... by configuration file

PKA supports configuration files in JSON format. An example configuration could look like this:

{
   "OS_REGION_NAME": "XXX",
   "OS_PROJECT_DOMAIN_ID": "XXX",
   "OS_INTERFACE": "public",
   "OS_AUTH_URL": "https://XXXX",
   "OS_USERNAME": "admin",
   "OS_PROJECT_ID": "XXX",
   "OS_USER_DOMAIN_NAME": "Default",
   "OS_PROJECT_NAME": "admin",
   "OS_PASSWORD": "XXX",
   "OS_IDENTITY_API_VERSION": 3,
   "BASE_DIR": "/pka",
   "LOG_DIR": "/log",
   "LOG_LEVEL": "INFO",
   "KEYSTONE_READ_ONLY": false,
   "TARGET_DOMAIN_NAME": "elixir",
   "DEFAULT_ROLE": "user",
   "NESTED": false,
   "ELIXIR_NAME": false,
   "SUPPORT_QUOTAS": true,
   "SUPPORT_ROUTER": true,
   "SUPPORT_NETWORK": true,
   "EXTERNAL_NETWORK_ID": "16b19dcf-a1e1-4f59-8256-a45170042790",
   "SUPPORT_DEFAULT_SSH_SGRULE": true,
   "SSH_KEY_BLOCKLIST": [],
   "CLEANUP": false
}
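The two sources can be combined with a simple precedence rule: an exported PKA_-prefixed variable wins over the value in the JSON file. The helper below is a hypothetical sketch of that rule, not the adapter's actual configuration code.

```python
import json
import os


def load_config(path):
    """Read the JSON configuration file shown above."""
    with open(path) as fh:
        return json.load(fh)


def load_setting(key, config, default=None):
    """Prefer the PKA_-prefixed environment variable, then the JSON file."""
    env_value = os.environ.get("PKA_" + key)
    if env_value is not None:
        return env_value
    return config.get(key, default)
```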

Docker

Build docker container.

docker build -t denbi/pka .

Create an environment file (pka.env):

# OpenStack credentials
# ---------------------
OS_REGION_NAME="XXX"
OS_PROJECT_DOMAIN_ID="XXX"
OS_INTERFACE="public"
OS_AUTH_URL="https://XXX"
OS_USERNAME="admin"
OS_PROJECT_ID="XXX"
OS_USER_DOMAIN_NAME="Default"
OS_PROJECT_NAME="admin"
OS_PASSWORD="XXX"
OS_IDENTITY_API_VERSION="3"

# Perun Keystone Adapter settings
# --------------------------------

# Location for storing propagated data
PKA_BASE_DIR=/pka
# Location for storing logs, defaults to current working directory
PKA_LOG_DIR=/log
# Do not make any modifications to keystone
PKA_KEYSTONE_READ_ONLY=False
# Domain to create users and projects in, defaults to 'elixir'
PKA_TARGET_DOMAIN_NAME=elixir
# Default role to assign to new users, defaults to 'user'
PKA_DEFAULT_ROLE=user
PKA_DEFAULT_NESTED=False
PKA_ELIXIR_NAME=False
# Set quotas for projects
PKA_SUPPORT_QUOTA=True
# Create router for new projects
PKA_SUPPORT_ROUTER=True
# Create a default network/subnetwork for new projects
PKA_SUPPORT_NETWORK=True
# External network used when creating routers for new projects
PKA_EXTERNAL_NETWORK_ID=16b19dcf-a1e1-4f59-8256-a45170042790
# Add an ssh rule to the default security group
PKA_SUPPORT_DEFAULT_SSH_SGRULE=True

and run the container:

docker run --net host --env-file pka.env -v $(pwd)/perun/upload:/pka -v $(pwd)/perun/log:/log denbi/pka

Alternatively, you can use a configuration file (pka.json):

docker run --net host -v $(pwd)/pka.json:/etc/pka.json -v $(pwd)/tmp/base:/pka -v $(pwd)/tmp/log:/log denbi/pka

There are additional deployment options available if you prefer to run WSGI applications with Apache, or other setups.

Logging

The library supports two different logger domains, which can be configured when instantiating the Keystone/Endpoint class (defaults: "denbi" and "report"). All changes concerning the OpenStack database (projects, identities and quotas) are logged to the default ("denbi") domain, everything else is logged to the "report" domain. The loggers are standard Python loggers, so all features of Python's logging API are supported. See the service script for an example of how to configure logging.
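Since these are standard Python loggers, the two domains can be configured independently; the handler choices below are illustrative only (in production they might be file handlers pointing into PKA_LOG_DIR).

```python
import logging

# Logger for changes to the OpenStack database.
denbi_log = logging.getLogger("denbi")
denbi_handler = logging.StreamHandler()
denbi_handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
denbi_log.addHandler(denbi_handler)
denbi_log.setLevel(logging.INFO)

# Logger for everything else.
report_log = logging.getLogger("report")
report_log.addHandler(logging.StreamHandler())
report_log.setLevel(logging.INFO)

denbi_log.info("created project %s", "sample-project")
report_log.info("propagation run finished")
```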

Development

For testing and development it is recommended to configure and use a [DevStack](https://docs.openstack.org/devstack/latest/) that fits your production environment (see test/devstack/README.md).

Unit tests

The library comes with a set of unit tests; a fully functional Keystone is required to perform all tests.

For testing the user/project management alone, a running Keystone is enough. The Makefile included with the project runs a Docker container providing a Keystone server.

It is recommended to configure and use a [DevStack](https://docs.openstack.org/devstack/latest/). In any case it is not recommended to use your production Keystone/setup.

Linting

$ make lint

will run flake8 on the source code directories.

perunkeystoneadapter's People

Contributors

awalende, be-el, erasche, gasses, jkrue, maitai, pbelmann


perunkeystoneadapter's Issues

--dry-run ?

I would like to have an option to perform a dry run without modifying the keystone database.

Consider additional values propagated by Perun

The groups json propagated by Perun contains useful values which are currently not considered by the perunKeystoneAdapter

...
   {
      "blacklisted" : null,
      "denbiVmsRunning" : null,
      "id" : XXX,
      "login-namespace:elixir" : "XXX",
      "login-namespace:elixir-persistent" : "[email protected]",
      "preferredMail" : "XXX",
      "sshPublicKey" : [
         "ssh-rsa XXXXXXXX...."
      ],
      "status" : "VALID"
   },
...

Configurable target domain

Enhancement #5 added support for configurable target domains, but support is missing in frontend python scripts.

Tracking actions (aka logging)

We should add Python logging facilities to provide a simple tracking of actions like user/project create/list/delete.

Maintaining referential integrity in (users|projects)_terminate

Both XYZ_terminate methods remove the Keystone entities (it is not clear whether these methods are actually used in the code...).

Removing a project usually does not remove the project's resources like VMs, networks, routers, floating IPs... All these resources will be orphaned after removal.

Either

  • remove the functions if they are not used. Otherwise developers might want to use them in their own code without checking it...

  • implement the correct functionality. The command line client offers a 'project purge' call, which should do the right[tm] things

  • put a BIG FAT warning in the documentation to inform developers that these methods are not what they are looking for....

And correctly implementing this functionality is kind of difficult, since depending on the cloud setup you might have different kinds of resources (S3/Swift buckets, Heat stacks, Magnum-based Kubernetes clusters...).

Service accounts

In __import_dpcc_userdata all users not currently active in the perun data set are deleted/disabled.

This is particularly bad for service accounts (e.g. monitoring or domain admin access), since these accounts are also deleted/disabled.

Since all users managed by perun are flagged, a simple solution would be filtering the user list in https://github.com/deNBI/perunKeystoneAdapter/blob/master/python/denbi/bielefeld/perun/endpoint.py#L149 for flagged users only.

The same should also be applied to the project handling code.
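A sketch of the proposed filtering, with a hypothetical data layout (a list of dicts carrying the adapter's flag):

```python
PERUN_FLAG = "perun_propagation"


def users_to_disable(keystone_users, propagated_ids):
    """Disable only flagged users that are no longer propagated.

    Unflagged accounts (e.g. monitoring or domain admin service accounts)
    are never touched.
    """
    return [
        user for user in keystone_users
        if user.get("flag") == PERUN_FLAG and user["id"] not in propagated_ids
    ]
```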

Project descriptions should not include non-ascii characters

Perun can easily have project descriptions including non-ascii characters (Umlauts, special characters, etc.). OpenStack components like to choke on these.

Example error message from a description that includes the string "Charité":

"Error: Failed to apply catalog: Execution of '/usr/bin/openstack project list --quiet --format csv --long' returned 1: 'ascii' codec can't encode character u'\\xe9' in position ..."

(hex character e9 is é ...)

The adapter should filter out non-ASCII characters and/or replace them with ASCII equivalents before pushing data into OpenStack.
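One common transliteration approach (an assumption, not the adapter's current behaviour) is NFKD decomposition followed by dropping anything that still is not ASCII:

```python
import unicodedata


def to_ascii(text):
    """Strip accents and drop remaining non-ASCII characters."""
    # NFKD splits accented characters into base letter + combining mark;
    # the combining marks are then removed by the "ignore" error handler.
    decomposed = unicodedata.normalize("NFKD", text)
    return decomposed.encode("ascii", "ignore").decode("ascii")


# to_ascii("Charité") == "Charite"
```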

Extend logging

We should add a separate logger domain for general log messages and "updates" concerning OpenStack. In the current configuration everything is mixed into one logger domain, which could be a bit confusing...

Preexisting users/projects in the same domain could cause trouble ... (Python 2 only)

If the domain used for the Perun propagation service contains users/projects which were not created by the perunKeystoneAdapter, then encoding problems could occur when using Python 2. Since values coming from Keystone are currently not cast to an explicit type, it is not clear whether str or unicode str is used as the underlying type. Since 'abc' is not u'abc' (Python 2), the perunKeystoneAdapter fails to detect existing users/projects in this case. A simple solution could be to cast all values to an explicit type (str or unicode str).
This problem affects only Python 2, since Python 3 str has unicode support.

Projects with the same name

What happens if we try to create a project with a name that already exists? If there is already a test for this scenario, this issue should be closed.

Update Instructions

Update the instructions for tagging projects and users with the perun_propagation flag.

Error on unittest run

Running the following command

python -m "unittest" test_endpoint.py

reports:

FF
======================================================================
FAIL: test_import_denbi_portal_compute_center (test_endpoint.TestEndpoint)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/pbelmann/projects/perunKeystoneAdapter/test/test_endpoint.py", line 159, in test_import_denbi_portal_compute_center
    self._test_project(after_import_projects['9999'],'9999',['50000','50001','50002'])
  File "/home/pbelmann/projects/perunKeystoneAdapter/test/test_endpoint.py", line 43, in _test_project
    self.assertSetEqual(set(denbiproject['members']),set(members))
AssertionError: Items in the second set but not the first:
'50000'
'50002'
'50001'

======================================================================
FAIL: test_import_scim (test_endpoint.TestEndpoint)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/pbelmann/projects/perunKeystoneAdapter/test/test_endpoint.py", line 75, in test_import_scim
    self._test_project(after_import_projects['9845'],'9845',['1','2','3'])
  File "/home/pbelmann/projects/perunKeystoneAdapter/test/test_endpoint.py", line 43, in _test_project
    self.assertSetEqual(set(denbiproject['members']),set(members))
AssertionError: Items in the second set but not the first:
'1'
'3'
'2'

----------------------------------------------------------------------
Ran 2 tests in 1.745s

FAILED (failures=2)

Enhancement: pending deletion report

The adapter currently does not delete projects or users. This is good, and I do not want anything to be deleted automatically (yet).

Nonetheless I need a simple way to clean up and release resources. My approach would be writing a simple script to be used in cron jobs... it just prints the project name/id for all projects that can be deleted according to their state/tags. The site admins can then trigger their site-specific cleanup process (or just paste the output into 'xargs openstack project purge'...).

Comments on this? Time to dedust my python skills?

Working as domain administrator

The current implementation only works as cloud admin (requires project name, requires elevated privileges).

Most of the setup should be possible as domain admin.

Support/Use propagated ssh-keys

Since the portal supports adding a (public) ssh key to an account, this information is also propagated by Perun. We should use this information to add a "key pair" when creating a user.

Support clouds.yaml / application credentials

The current version of pKA uses environment variables (OpenRC file) for authentication. Since the OpenStack SDK has also supported clouds.yaml and application credentials for a decade, we should support them as well.

RFC: Managing quotas

One of the adapter's goals is managing project quotas; the resource definition in a perun project should be reflected by setting the corresponding quotas in openstack.

I've been thinking about this problem for some time now, since we want to write a script to simplify project setup. And it turns out that this script is a little more involved than previously thought...

In an ideal world I would create one account and make it domain admin of the de.NBI domain. This account can be used to create users and projects and to manage quotas. This works, and this is how I want to set up the adapter.

But managing quotas does not work with this account for all types of quotas. It's fine for keystone quotas, but breaks for certain other quotas, notably cinder quotas. Their quota definition is managed by cinder/neutron itself; to set quotas you need to talk with the component's endpoint. This endpoint requires a project/tenant id:

# openstack endpoint list --service volumev3
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                                                                  |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------------------------------+
| 36bb45416ffe48acbfbe8fc056100b08 | RegionOne | cinderv3     | volumev3     | True    | public    | https://cloud.computational.bio.uni-giessen.de:8776/v3/%(tenant_id)s |
| 6b55fb61e5ef431b9b26ca84d92c4299 | RegionOne | cinderv3     | volumev3     | True    | internal  | http://192.168.12.5:8776/v3/%(tenant_id)s                            |
| fa5fd05ed28842cb9f80dd2042757566 | RegionOne | cinderv3     | volumev3     | True    | admin     | http://192.168.12.5:8776/v3/%(tenant_id)s                            |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------------------------------+

Working as domain admin is defined by NOT specifying a project and thus getting a domain scoped keystone token. Without a project you cannot use the cinder endpoint....

I'm not sure about the best method to handle this problem. Using a second keystone instance with a project admin account probably does not work, since project admin A might not be able to modify project B. Cinder (and almost all other components except keystone) does not yet support the domain concept at all.

We are currently aiming at

  • create a project via domain admin account
  • make service account admin of the new project
  • create new keystone session as project admin in the new project
  • modify storage quotas
  • remove service account membership

I haven't tested this yet, so I'm not sure whether it works. In the worst case we either have to change the policies for cinder (and maybe other components), which will open up a can of worms... or we have to use a cloud admin account. The latter would be the absolute last resort...

Any comments on this? How do you handle automatic quota setup at your site?

Add support for individual domain targets

Currently all users and projects are created in the same domain as the account used to authenticate against Keystone. We have to add an option to support creating users and projects in any (existing) domain. E.g. Elixir users and projects should be placed in a domain named 'elixir' instead of 'default'.

User/Project deletions

Current situation

All actions on Keystone are processed immediately.

What does that mean ?

The user / project data is propagated by Perun and processed by the Perun Keystone Adapter.
Any change in the propagated data results in an update of Keystone. For additions and modifications (user_create, user_update, projects_create and project_update) this seems fine. But a user or project that is missing from the propagated data is also removed from Keystone immediately. In my opinion this is dangerous/unsafe behaviour.

Idea for a maybe better (safer) handling:

Instead of immediate processing, we could first deactivate the affected user/project and delete it later:

  1. manually
  2. after a fixed period of time (one day, several days, a week)
  3. after one or more confirmations (which could be the next pushed dataset)
  4. a mixture of that above
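Option 2 could be sketched like this, assuming the adapter records a timestamp when it marks an item as deleted (all names here are hypothetical):

```python
from datetime import datetime, timedelta

GRACE_PERIOD = timedelta(days=7)


def ready_for_deletion(marked_items, now=None):
    """Return items whose grace period has expired.

    Each item is assumed to carry a 'marked_deleted_at' timestamp set when
    the item first disappeared from the propagated data.
    """
    now = now or datetime.utcnow()
    return [
        item for item in marked_items
        if now - item["marked_deleted_at"] >= GRACE_PERIOD
    ]
```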
