cloudigrade / cloudigrade

A tool for tracking and reporting RHEL usage in public clouds

Home Page: https://cloudigra.de

License: GNU General Public License v3.0

Python 98.01% Makefile 0.34% Dockerfile 0.23% Shell 1.35% Jinja 0.07%

cloudigrade's Introduction

cloudigrade

What is cloudigrade?

cloudigrade was an open-source suite of tools for tracking RHEL use in public cloud platforms, but in early 2023, its features were stripped down to support only the minimum needs of other internal Red Hat services. The 1.0.0 tag marks the last fully functional version of cloudigrade.

What did cloudigrade do?

cloudigrade actively checked a user's account in a particular cloud for running instances, tracked when instances were powered on, determined if RHEL was installed on them, and provided the ability to generate reports to see how many cloud compute resources had been used in a given time period.

See the cloudigrade wiki or the following documents for more details:

What are "Doppler" and "Cloud Meter"?

Doppler was an early code name for cloudigrade. Cloud Meter is the productized Red Hat name for its running cloudigrade service. cloudigrade == Doppler == Cloud Meter for all intents and purposes. 😉

cloudigrade's People

Contributors

abaiken, abellotti, adberglund, aweiteka, blentz, brittrh, dependabot[bot], dsesami, elyezer, gtanzillo, hseljenes, infinitewarp, katherine-black, kdelee, kholdaway, maorfr, mbpierce, mirekdlugosz, noahl, pyup-bot, rabajaj0509, renovate-bot, renovate[bot], ruda, werwty


cloudigrade's Issues

Save inventory of currently running instances

As a Doppler admin, I want an inventory of any instances that were already running (see #23) to be saved to a database so that I can review them in a report at a later time.

Acceptance Criteria

  • Verify that the list of instances and AMIs that are currently powered on is written to a database with the time of check as the time the instance was powered on.
  • Verify that the attributes in #23 are in a DB table
  • Verify that "powered on" is the event type
  • Verify that effectively CURRENT_TIMESTAMP is the event time

Technical Assumptions

  • Since we expect the future service to be a web app, we will go ahead and start this using Django even though effectively this is a "simple" CLI.
  • We do not care at this time about deduping, collision checking, updating, etc.
    • This means the app may crash on duplicate row for now.
  • This is to define the same "event" table that will later be used in the CloudTrail log scraping and processing output.
  • The InstanceId from AWS is effectively our primary key or at least a sort-of unique index.
  • Also include on the table:
    • region ID (which we should have from the initial connection/client)
    • AWS user/account ID, which should be a foreign key back to the user/account table that we created in #18
  • All columns in our tables that are references to AWS-based IDs should be strings.
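The assumptions above sketch out an "event" table fairly directly. The following is a minimal illustration using stdlib sqlite3 (standing in for the Django ORM and migrations the story actually calls for); all table and column names are hypothetical:

```python
import sqlite3

# Illustrative schema for the "event" table described above; names are
# hypothetical. All AWS-based IDs (instance, AMI, account) are stored as
# TEXT, per the assumption that AWS ID columns should be strings.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE instance_event (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        instance_id TEXT NOT NULL UNIQUE,  -- AWS InstanceId, sort-of unique index
        ami_id TEXT NOT NULL,              -- AWS ImageId
        region TEXT NOT NULL,              -- from the initial connection/client
        account_id TEXT NOT NULL,          -- references the user/account table from #18
        event_type TEXT NOT NULL,          -- "powered on" for this inventory pass
        event_time TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP  -- time of check
    )
""")
conn.execute(
    "INSERT INTO instance_event (instance_id, ami_id, region, account_id, event_type)"
    " VALUES (?, ?, ?, ?, ?)",
    ("i-0228fecebf85774d5", "ami-0b1e356e", "us-east-1", "123456789012", "powered on"),
)
row = conn.execute("SELECT event_type, event_time FROM instance_event").fetchone()
print(row[0])  # powered on
```

Note the UNIQUE constraint on instance_id: per the assumptions, duplicates simply crash the app for now rather than being deduped.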

zenhub svg broken in readme on github.com

It looks like GitHub isn't serving a good MIME type for SVGs coming out of other repos. Maybe we should use the shields.io version after all. ¯\_(ツ)_/¯

Determine web runner

Do we want to use uWSGI? Gunicorn? bjoern? Wrap it up in nginx? One of these inside a Docker container.

Output of this research is a prototype implementation for Cloudigrade in the preferred technology.

Timebox: 1 day

Ensure customer's IAM role/policy has required privileges enabled

As a Doppler admin, I want to programmatically access customers' accounts so I can retrieve EC2 and CloudTrail data.

Acceptance Criteria

  • Verify that each required privilege is enabled by testing related API calls upon customer setup
  • Verify that when a required privilege is not enabled, the CLI user sees an appropriate error message and causes account setup/save to abort

Tech assumptions

  • For now, we will provide to the customer a JSON blob (defining an IAM policy) and instructions to manually add an IAM user and role so Doppler can access that customer's AWS account.

Get usage report via RESTful HTTP request

As a Doppler user, I want to run the "report_usage" command via an HTTP request so that I can see the usage data in a web browser.

Acceptance Criteria

  • Verify that I can GET /api/v1/report/?account_id={ACCOUNT_ID}&start={START}&end={END} with a customer's Amazon account ID for {ACCOUNT_ID}, and the report start and end dates as ISO-8601 datetimes for {START} and {END}, and I receive the same report output as JSON that I would have received via the CLI.
  • Verify that requesting an ARN that doesn't exist returns a 404.

Tech assumptions

  • We don't need to worry about things like no-cache headers for now. We'll just trust our clients to make new requests as needed since we don't know exactly in what conditions reports will be idempotent.
  • The start/end dates inclusivity/exclusivity behavior matches the CLI.
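The request described in the AC can be sketched by building the query string with stdlib tools. The path and parameter names come from the acceptance criteria above; the helper function itself is a hypothetical convenience, not part of cloudigrade:

```python
from datetime import datetime, timezone
from urllib.parse import urlencode

def report_url(account_id, start, end, base="/api/v1/report/"):
    """Build the report request URL from the AC above; start/end are
    rendered as ISO-8601 datetimes. Hypothetical helper."""
    params = urlencode({
        "account_id": account_id,
        "start": start.isoformat(),
        "end": end.isoformat(),
    })
    return f"{base}?{params}"

url = report_url(
    "123456789012",
    datetime(2018, 1, 1, tzinfo=timezone.utc),
    datetime(2018, 2, 1, tzinfo=timezone.utc),
)
print(url)
```

urlencode percent-encodes the colons and plus signs inside the ISO-8601 values, which the server-side date parsing should already tolerate.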

Support scanning S3-backed AMIs

As a Doppler admin, I want to scan S3-backed AMIs for RHEL so that I can support a broader customer base.

Acceptance Criteria

  • Verify scan works for S3-backed AMIs

JSON report of hourly RHEL usage for customer during time period

As a Doppler admin, I want a report of RHEL usage per customer so that I can ensure that we are billing them appropriately.

Acceptance Criteria

  • Verify that I get JSON in the CLI output that contains total hourly usage aggregated by the combination of RHEL version and EC2 instance size for a customer

Tech assumptions

  • Probably query using Django ORM instead of raw SQL
  • CLI input parameters:
    • start date/time
    • end date/time
    • AWS Account ID from setup/on-boarding step
  • The particulars of the JSON structure are not yet strictly defined

Get list of customer accounts via RESTful HTTP request

As a Doppler user, I want to get a paginated list of customer accounts in Doppler via HTTP requests so that I can see that list in a web browser.

Acceptance Criteria

  • Verify that I can GET /api/v1/account/, and I receive a list of all customer Account instances as JSON in a paginating envelope.
  • Verify that multiple Account instances are returned in created_at ascending sort order. This means the oldest results are returned first.
  • Verify that I can request the second page of results and the results are returned as expected offset and limited by the appropriate amount.

Tech assumptions

  • DRF provides the ability to easily add other sort orders, but we don't care about them for now.
  • We can use DRF's default page size for now.
  • We do not care to override DRF's default behavior around referential IDs/URLs. This may mean clients get unusable URLs for now in some cases.
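The "second page" check above boils down to offset/limit arithmetic over the created_at-ascending list. A hypothetical sketch (DRF's paginators do this themselves in the real service):

```python
def page_slice(results, page, page_size):
    """Return one page of results that are already sorted created_at
    ascending (oldest first), mirroring the pagination behavior in the
    AC above. Hypothetical helper for illustration only."""
    offset = (page - 1) * page_size
    return results[offset:offset + page_size]

# Seven accounts, oldest first; page 2 with a page size of 3 skips the
# first three and returns the next three.
accounts = [f"acct-{n}" for n in range(1, 8)]
print(page_slice(accounts, 2, 3))  # ['acct-4', 'acct-5', 'acct-6']
```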

More aggressive code quality checking

As a Doppler developer, I want even more aggressive code quality checking so that I can rest assured that I am writing clean(er?) code.

Acceptance Criteria

  • Verify that additional quality checks are being run during CI.
  • Verify that failures in the new checks also fail the pipeline.

Tech assumptions

  • Add and configure one or more of:
  • Side note: Prospector is made by the Landscape.io people, and Landscape.io appears to be a dead service. Its website is usually down, but at least Prospector is open source and independently runnable.
  • Included in this story is the effort required to evaluate the default rules for the chosen tools and adjust those rules to fit how we want to develop.

See if AMI has RHEL

As a Doppler admin, I want to know if a given AMI belonging to a customer is using RHEL so that I can report RHEL usage.

Acceptance Criteria

  • Verify CLI output indicates whether RHEL is installed on the AMI
  • Verify that we clean up any images we had to copy to get this information

Tech assumptions

  • EBS only for now
  • Always copy snapshot for now
  • Copy to us-east-1. Attach to our "inspector" instance.
  • Simply check via cat /etc/redhat-release
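The cat /etc/redhat-release check above reduces to a small predicate over the file's contents once the copied snapshot is attached. This is a simplified, hypothetical stand-in for whatever the inspector actually runs:

```python
def looks_like_rhel(redhat_release_text):
    """Return True if the contents of /etc/redhat-release (read from the
    attached snapshot volume) identify the image as RHEL. A typical RHEL
    line looks like 'Red Hat Enterprise Linux Server release 7.4 (Maipo)'.
    Hypothetical helper, simplified from the assumption above."""
    return redhat_release_text.strip().startswith("Red Hat Enterprise Linux")

print(looks_like_rhel("Red Hat Enterprise Linux Server release 7.4 (Maipo)\n"))  # True
print(looks_like_rhel("CentOS Linux release 7.4.1708 (Core)\n"))                 # False
```

Note that other distros (e.g. CentOS, Fedora) also ship an /etc/redhat-release file, so checking the file's contents, not just its existence, matters.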

Do not copy AMI if in same AWS region

As a Doppler admin, I want to only copy AMIs when crossing AWS region boundaries so that I do not incur unnecessary IO.

Acceptance Criteria

  • Verify that scanning an AMI within the same region does not copy AMI
  • Verify that scanning an AMI outside the same region does copy AMI and cleans up afterwards

Research accessing internals of running AMI

As a Cloudigrade user, I want to be able to access the internals of a running AMI so I can figure out if any services I care about are running in it.

AC:

  • Verify that an outside actor can see what files exist in an AMI.

End-user documentation

In order for QE to be effective, we need:

  • architectural overview outlining actors, services, APIs, queues
  • use case paths defined
  • definitions for APIs
  • examples of each REST endpoint with:
    • all optional arguments
    • sample inputs (for POSTs)
    • sample outputs

Parts of this are being covered by:

Make container to attach to AMI and inspect for RHEL (incomplete)

As a Doppler admin, I want to be able to inspect the contents of a customer's AMI so that I can figure out if it's running RHEL.

Acceptance Criteria

  • Verify

Tech assumptions

  • This is a container running in AWS. It probably listens to a queue that receives messages to tell the container to attach another volume (from an AMI we now have ownership access to) and troll through its files.
  • When done, it puts a message on another queue to indicate back to doppler "core" the findings.
  • Contents of messages TBD.

Read CloudTrail metadata for on/off events

As a Doppler admin, I want to know when an EC2 instance is powered on and off so that I can bill customers appropriately.

Acceptance Criteria

  • Verify that a CLI user sees the event metadata that CloudTrail delivers regarding instances powering on and off.
  • Verify that a CLI user does not see events that had an error state; these should have been discarded.

Tech assumptions

  • CLI arguments include:
    • SQS queue URL
  • Event metadata should include:
    • Event type (start/power on, stop/power off)
    • Event time
    • Instance ID
    • Instance size (e.g. micro)
    • AMI ID
    • subnet
  • We need to be careful and explicit about checking for errors
    • For example, we may have a "stop instance" event but the action didn't really complete for some reason.
    • We probably want to just check for a value in the Records[x].errorCode of the event response.
  • We still have very crude or nonexistent error handling.
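The Records[x].errorCode check above can be sketched as a simple filter over a CloudTrail log message. The {"Records": [...]} shape and the errorCode field match the CloudTrail log format; the helper name is hypothetical:

```python
def discard_errored_records(cloudtrail_message):
    """Keep only CloudTrail records without an errorCode, per the
    assumption above that events carrying an errorCode (e.g. a "stop
    instance" that didn't really complete) should be discarded."""
    return [
        record
        for record in cloudtrail_message.get("Records", [])
        if not record.get("errorCode")
    ]

message = {"Records": [
    {"eventName": "StopInstances", "errorCode": "Server.InternalError"},
    {"eventName": "StartInstances"},
]}
good = discard_errored_records(message)
print([r["eventName"] for r in good])  # ['StartInstances']
```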

Save customer account ID

As a Doppler admin, I want to access customers’ AWS accounts via AWS APIs.

This means we give a customer our AWS Account ID, the customer creates a role with our policy JSON, then… TBD? See also: Tutorial: Delegate Access Across AWS Accounts Using IAM Roles

Acceptance Criteria

  • Verify I can give CLI Doppler a customer's AWS ARN.
  • Verify that the DB table contains:
    • TBD… but probably customer account ID
    • ARN

Tech assumptions

  • Since we expect the future service to be a web app, we will go ahead and start this using Django even though effectively this is a "simple" CLI.
  • Store things in a local sqlite DB using Django migrations to build the schema.
  • β€œVerification” here can be done by having a dev reveal the contents of the file/DB since good APIs won’t arrive until later.
  • We do not care at this time about deduping, collision checking, updating, etc.
  • The account ID from AWS is effectively our primary key or at least a sort-of unique index.
  • All columns in our tables that are references to AWS-based IDs should be strings.
  • For now, we plan to access the AWS APIs by using our Doppler account, and the credentials for that account are expected to live in the environment at ~/.aws/. This will later be mounted/substituted/etc. per running environment.
  • All tables in our DB should have auto-populated columns for time created and modified (names TBD, @infinitewarp can provide example of this for Django model)

Hosted project documentation

As a Doppler developer or user, I want publicly accessible documentation so that I don't have to check out the project and read MD files.

Acceptance Criteria

  • Verify our docs live publicly on rtfd.org
  • Verify that when we release, docs are published. (Publishing is part of CD.)

Tech assumptions

  • We will probably use Sphinx for generating the docs.
  • As part of this, someone will need to set up an rtfd account and configure as needed.

Research AWS IAM permissions

As a customer, I want to know what IAM permissions I need to grant so Cloudigrade will be able to get information it needs from my AWS accounts.

AC:

  • Verify that an outside actor can extract instance usage for a machine running in AWS.
  • Verify that an outside actor can extract CloudTrail information and logs for a machine running in AWS.

Ensure working credentials before completing setup

As a Doppler admin, I want to ensure that our Doppler AWS account ID actually has access to the customer's AWS Account ID before confirming the user is done with setup.

Acceptance Criteria

  • Verify that if we cannot access the customer's account, we do not save and the user receives an appropriate error message.
  • Verify that correctly configured account access allows successful saving.

Tech assumptions

  • We will decide which APIs we want to poke to ensure minimum connectivity using the credentials.
  • For now, we only care to verify once synchronously during setup. Maybe later we'll have additional checks to ensure that access hasn't been revoked, but we don't care about that now.

Drop support for the Django management commands

As a Doppler dev, I want to drop the CLI so that I don't need to duplicate functionality unnecessarily.

Acceptance Criteria

  • Verify HTTP APIs are all still working.
  • Verify that the CLI commands are gone.

Add complexity checks

As a Doppler developer, I want cyclomatic complexity checks run against the code so that the code does not become more difficult to test and maintain.

What is cyclomatic complexity? See: Cyclomatic complexity (wikipedia)

Cyclomatic complexity is a software metric, used to indicate the complexity of a program. It is a quantitative measure of the number of linearly independent paths through a program's source code. [...] Cyclomatic complexity is computed using the control flow graph of the program: the nodes of the graph correspond to indivisible groups of commands of a program, and a directed edge connects two nodes if the second command might be executed immediately after the first command.

Acceptance Criteria

  • Verify complexity checks run with each build
  • Verify that very complex code (> 10) fails the build

Tech assumptions

  • We should use the mccabe package with the standard suggested limit of 10.
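flake8 bundles the mccabe complexity checker, so enabling the suggested limit can be as small as one config entry. A sketch, assuming the project configures flake8 via setup.cfg or tox.ini (the actual lint setup may differ):

```ini
; hypothetical placement in setup.cfg or tox.ini
[flake8]
max-complexity = 10
```

With this set, flake8 reports a C901 error for any function whose cyclomatic complexity exceeds 10, which then fails the CI pipeline per the AC.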

Document architectural overview

As a non-developer, I want to understand the architecture around Doppler so that I can explain the tool's field impact.

Acceptance Criteria

  • Verify documentation exists for the overall architecture.
  • Verify documentation includes general definition of actors, services, APIs, queues

Tech assumptions

  • This may initially be written by a "core" developer, but this should also be written with the aid of and polished by a better marketing-ish wordsmith. Maybe talk to Dan. :)
  • This could be prepared entirely in text, and Dan could turn it into an external visual document with lines and boxes and pictures.
  • At a high level, people care about what this "brings to the table" and how this may affect the behavior of other RH teams. In what sense is this going to open up our field teams? Non-Doppler people want to know specifically what Doppler will tell them and how Doppler tells it so they can know how to make better deals with customers.

Notify inspector handler queue when encountering a new AMI

As a Doppler admin, I want Doppler to kick off a process to identify what distro is in a customer's custom AMI when Doppler doesn't know about that AMI so that I can eventually report on distro usage.

Acceptance Criteria

  • Verify that Doppler does not put a message on the inspection queue when Doppler recognizes a new customer's instance that has the platform flag set to "Windows".
  • Verify that Doppler saves the AMI as "windows"-y when Doppler recognizes a new customer's instance that has the platform flag set to "Windows".
  • Verify that Doppler puts a message on a queue for each never-before-seen AMI, with enough information to start inspection, when I register a new customer AWS account that has running non-Windows instances.
  • Verify that the queued messages contain:
    • an identifier for which cloud platform (Amazon, Azure, or Google)
    • the AWS region in which the customer's AMI lives (where the instance was running)
    • the AMI ID

Tech assumptions

  • We only add a message to the queue if this is the first time we have encountered the particular AMI ID.
  • We write skeleton/incomplete information about the AMI to the database so a later process can update it with the distro type and whether we consider it RHEL. This means redefining our models for AMIs.
  • We write some information to mark an instance/AMI in the database as "it's windows!"
  • Use the containerized queue that is set up in #106
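The queued message described in the AC only needs the three fields listed above. A hypothetical sketch of the payload (field names and JSON framing are assumptions, not the project's actual message format):

```python
import json

def new_ami_message(cloud, region, ami_id):
    """Build the inspection-queue message from the AC above: a cloud
    platform identifier, the AWS region where the instance was running,
    and the AMI ID. Field names are hypothetical."""
    return json.dumps({
        "cloud": cloud,      # e.g. "aws", "azure", or "gcp"
        "region": region,
        "ami_id": ami_id,
    })

msg = new_ami_message("aws", "us-east-1", "ami-0b1e356e")
print(msg)
```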

Create customer account via RESTful HTTP request

As a Doppler user, I want to be able to create customer accounts via HTTP so that I can expose this functionality in a web browser instead of using the CLI.

Acceptance Criteria

  • Verify that I can POST /api/v1/account with an ARN (Amazon Resource Name) in the POST payload, and this acts like the "add_account" management command.
  • Verify that if given an ARN that does not have permission, I get an error that explains why things don't work. (probably some not-200 not-201 response code, probably 400)
  • Verify that if given a good ARN, I get a successful message. (probably status 201 created)

Tech assumptions

  • This is effectively the first story where we need a long-running daemonized service.
  • This should include CD, containerizing, etc. as needed.
  • We will define the specific JSON response as we go.
  • DRF = Django REST Framework
  • For now, use the built-in development server, unless we've solved #59 already.
  • We do not care to override DRF's default behavior around referential IDs/URLs. This may mean clients get unusable URLs for now in some cases.

Run scheduled task to record results from hound

As a Doppler admin, I want a scheduled Celery task to check a message queue for results from the hound and write those results to my database so I know those results as soon as possible.

Acceptance Criteria

  • Verify that when there are no messages from houndigrade on the queue, this process does nothing and exits cleanly.
  • Verify that when there are any messages from houndigrade on the queue, this process updates the database with the discovered information about the AMI(s).
    • Verify the update sets the full AMI houndigrade JSON string in the database for the AMI (supporting evidence for tags)
    • Verify the RHEL tag is added to database MachineImage if houndigrade JSON detects RHEL
    • Verify the RHEL tag is not added to database MachineImage if houndigrade JSON does not detect RHEL
  • Verify that the latest houndigrade facts are applied (meaning a repeat scan removes old tags, adds new tags). This is not expected to normally happen.

Tech assumptions

  • This requires cloudigrade to have a Celery worker.
  • This will include model changes so that we can track all the AMI details.
  • Until this point, there may be nulls in some columns on the AMI/image table, implying that the inspector is off doing things. After we have processed results here, there should be no more nulls around the AMI model.

Update report to filter exclusively on RHEL AMIs

As a Doppler user, I want usage reports to only include run times for instances that ran on AMIs that Doppler has identified as using RHEL so that I can bill customers appropriately. This is necessary because the current report has no concept of RHEL vs. not-RHEL.

Acceptance Criteria

  • Verify that the output shows zero run time when I run a report for an account and time period that contains run times of exclusively non-RHEL instances.
  • Verify that the output shows the total sum run time when I run a report for an account and time period that contains run times of exclusively RHEL instances.
  • Verify that the output shows the total sum run time of only the RHEL instances when I run a report for an account and time period that contains run times for a mix of RHEL and non-RHEL instances.

Tech assumptions

  • This changes the general format of the report output. Instead of the dictionary with funky "AMI ID + instance size" keys, just simplify it down to a single "is RHEL" value for now. When we have a better idea of what users want out of the system, we can add more dimensions to the output (e.g. size, version, subnet, distro, raw source details…).
  • TODO: write up a research spike to see if Amazon ever redefines its instance sizes. Why? Because we expect in the future to want to know fine-grained hardware details as part of reporting (e.g. vCPU count, memory), and we do not know if it's "good enough" to just say the EC2 instance size.

Create makefile for common build tasks

As a Doppler developer, I want a makefile so that I don't have to remember various long chained terminal commands for common tasks.

This will be the starting point for future additions, but for now, it would be nice to have commands to:

  • clean the project directory of any scratch files, bytecode, logs, support directories for tox/coverage/etc.
  • drop and recreate the database

Acceptance Criteria

  • Verify the following commands exist and do what you expect:
    • make clean
    • make reinitdb
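A minimal sketch of what those two targets might look like, assuming a PostgreSQL database named "cloudigrade" and a standard Django layout; the paths, database name, and commands are all illustrative, not the project's actual Makefile:

```make
# Hypothetical Makefile sketch -- names and commands are assumptions.
clean:
	rm -rf .tox .coverage htmlcov
	find . -name '__pycache__' -type d -exec rm -rf {} +
	find . -name '*.pyc' -delete

reinitdb:
	dropdb --if-exists cloudigrade
	createdb cloudigrade
	python manage.py migrate
```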

Initial Update

Hi 👊

This is my first visit to this fine repo, but it seems you have been working hard to keep all dependencies updated so far.

Once you have closed this issue, I'll create separate pull requests for every update as soon as I find one.

That's it for now!

Happy merging! 🤖

Get customer account via RESTful HTTP request

As a Doppler user, I want to be able to get a specific account in Doppler via HTTP requests so that I can see that account's details in a web browser.

Acceptance Criteria

  • Verify that I can GET /api/v1/account/{id}/ with our internally generated id primary key, and I receive the corresponding customer Account instance as JSON.
  • Verify that requesting for an id that doesn't exist returns a 404.

Refactor models to support other clouds

As a Doppler developer, I want models designed so that AWS isn't the only first-class cloud provider so that later I can support Azure, GCP, etc.

Acceptance Criteria

  • Verify that database tables have been migrated to new scheme.
  • Verify that HTTP APIs have been updated to reflect flexibility in cloud types.
    • Verify that creating an account requires a "cloud provider" identifier as an input.
    • Verify that the returned account JSON includes the cloud provider in each instance
  • Verify that CLI commands have been updated to reflect flexibility in cloud types.

Tech assumptions

  • We probably want some parent-child class relationship for our models. This could include all of the ones defined so far.
  • Some "cloud provider" ID/value should be included in the account creation, but everything else going forward can continue to just use the IDs that reference back to other models.

Save if AMI has RHEL

As a Doppler admin, I want results of checking an AMI for RHEL to be stored in a database so that I can reference it and run a report for it later.

Acceptance Criteria

  • Verify that records for AMI are stored in DB with true/false for RHEL presence

Authentication/authorization around HTTP API

As a Doppler admin, I want my public APIs protected by internal accounts so that unauthenticated or unauthorized users cannot access or modify my data.

Acceptance Criteria

  • Verify that an unauthenticated request to the API is rejected with an appropriate 401 error. This includes all HTTP APIs defined up to this point and going forward (/api/v1/*) and all HTTP verbs.
  • Verify that an authenticated GET request to the API returns the expected payload.
  • Verify that an authenticated POST request to the API returns the expected payload, creating the resource as applicable.
  • Verify that developers can automatically get a user in the system (maybe Makefile includes django-admin createsuperuser)

Tech assumptions

  • For now, we should follow the pattern used by the Quipucords team. This means using the built-in Django accounts and auth module.
  • How does a user get an authentication token for making authenticated requests? We assume we will do whatever Quipucords does, and the act of verifying the AC implicitly means the user has gotten this token and has set it appropriately for each subsequent request.
  • For now, there is no real "authorization" beyond "are you authenticated?". Actual permissions may come later.
  • See also #61 for additional guidance and findings.

Watch out for this wacky dangler

As a developer, I don't want unused code littering up my repo so that it doesn't become a maintenance problem. This is a potential liability vector.

We have a function that is currently unused by anything else in the code, we should either make use of it, or remove it.

def extract_sqs_message(message, service='s3'):

@adberglund's initial thought is to keep these for now because they are useful, but maybe pull them into a library for reuse.

Be aware of Windows instances/AMIs and act accordingly

As a Cloudigrade administrator, I don't want cloudigrade wasting cycles trying to inspect Windows instances/AMIs so that time could be used instead to inspect Linux instances.

Acceptance Criteria

  • Verify that Windows instance AMIs are automatically stored in an inspected state and marked as a Windows image when they are discovered.

Tech assumptions

  • For AWS we can query the instance platform to find out if the instance is Windows or not. Windows will be marked as "windows", and everything else will be None.
$ aws ec2 describe-instances --query 'Reservations[*].Instances[*].[InstanceId,Platform,Status]' --output text
i-07ebccb011511a47c	None	None
i-0586c3c56cca7d93e	windows	None
i-0bee3c1c7c65b518c	None	None
  • Further research will have to be done later to determine how to do this in other clouds.
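The platform check above can be reduced to a tiny predicate over the Platform field that describe-instances returns: "windows" for Windows instances, absent/None for everything else. A hypothetical sketch:

```python
def is_windows(instance):
    """Return True if the AWS instance should be skipped as Windows,
    per the describe-instances behavior shown above (Platform is
    'windows' for Windows and absent for everything else)."""
    return (instance.get("Platform") or "").lower() == "windows"

# Shapes mirror the describe-instances output above.
instances = [
    {"InstanceId": "i-07ebccb011511a47c"},                         # Platform absent -> inspect
    {"InstanceId": "i-0586c3c56cca7d93e", "Platform": "windows"},  # skip inspection
]
to_inspect = [i["InstanceId"] for i in instances if not is_windows(i)]
print(to_inspect)  # ['i-07ebccb011511a47c']
```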

Save CloudTrail metadata for on/off events

As a Doppler admin, I want customer EC2 instance power on/off events stored in my database so that I can easily report on them later.

Acceptance Criteria

  • Verify that on/off records are stored in the DB
  • Verify that all of the attributes from #27 are included in the DB models
    • Event type (start/power on, stop/power off)
    • Event time
    • Instance ID
    • Instance size (e.g. micro)
    • AMI ID
    • subnet

Technical assumptions

  • The DB models should already exist per a story completed in the previous sprint. See @infinitewarp if you have questions.

README-dev is wrong/incomplete for local development

Trying to install requirements fails at:

  Downloading psycopg2cffi-2.7.7.tar.gz (62kB)
    100% |████████████████████████████████| 71kB 6.6MB/s
    Complete output from command python setup.py egg_info:
    warning: no previously-included files matching 'yacctab.*' found under directory 'tests'
    warning: no previously-included files matching 'lextab.*' found under directory 'tests'
    warning: no previously-included files matching 'yacctab.*' found under directory 'examples'
    warning: no previously-included files matching 'lextab.*' found under directory 'examples'
    zip_safe flag not set; analyzing archive contents...
    pycparser.ply.__pycache__.lex.cpython-36: module references __file__
    pycparser.ply.__pycache__.lex.cpython-36: module MAY be using inspect.getsourcefile
    pycparser.ply.__pycache__.yacc.cpython-36: module references __file__
    pycparser.ply.__pycache__.yacc.cpython-36: module MAY be using inspect.getsourcefile
    pycparser.ply.__pycache__.yacc.cpython-36: module MAY be using inspect.stack
    pycparser.ply.__pycache__.ygen.cpython-36: module references __file__

    Installed /private/var/folders/5k/m8s8qr_s6kj5dkrrxkknh6480000gn/T/pip-build-qnx0jvl5/psycopg2cffi/.eggs/pycparser-2.18-py3.6.egg
    Error: pg_config executable not found.
    Please add the directory containing pg_config to the PATH.

    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/5k/m8s8qr_s6kj5dkrrxkknh6480000gn/T/pip-build-qnx0jvl5/psycopg2cffi/

It looks like we need to add postgresql to the brew install list.

It may also need updated notes for actually running and setting up a psql database, whether local or in a container.

Set up initial "internal" queue

As a Doppler developer, I want my own queue so that I can build services that send messages via a queue without depending on an externally managed service (à la Amazon SQS).

Acceptance Criteria

  • Verify that a queue is running in a locally Dockerized container
    • Verify that you can put a message
    • Verify that you can pop/read a message
  • Verify that documentation has been updated to include setup notes and suggested library usage
  • Verify that the Makefile has been updated to automatically set up/tear down the queue

Tech assumptions

  • Evaluate RabbitMQ, Redis, Red Hat AMQ, etc.
    • Support for queues and subscriptions.
    • Availability of Django support.
    • Ease of setup.
    • Ease of use as a developer.
    • General popularity (see StackOverflow? GitHub stars and forks?)
    • We don't really care as much about high performance at this point as much as ease of use.
  • Take a look at how Celery works.

Do we care about authentication? What's the vision?

Because the initial version of the API is completely public and unauthenticated and unauthorized, we need to have a discussion to figure out how to protect things or if we even care to.

Do we want to use Red Hat's LDAP/SAML thing? Or a local account system? Or integrate with something else? Or nothing at all?

See inventory of currently running instances

As a Doppler admin, I want to see an inventory of any instances that were already running when a customer began using Doppler (at account setup) so that I can track hourly usage for instances that are long-running (i.e. not typically rebooted).

Acceptance Criteria

  • Verify that I see a list of instances and AMIs that are currently powered on for a given customer.
  • Verify that for each instance we see, specifically we see the:
    • InstanceId (e.g. i-0228fecebf85774d5)
    • InstanceType (e.g. t2.micro)
    • ImageId (e.g. ami-0b1e356e)
    • NetworkInterfaces.SubnetId (e.g. subnet-d87eddb0)

Tech assumptions

  • We will simply iterate through all currently known AWS regions and run EC2 "describe instances".

Dockerize our application

As a Doppler developer, I want my application built into a container and uploaded to a container registry so that it can be easily deployed.

Acceptance Criteria

  • Verify you can create and run an instance using the container image built here.
  • Verify the built container exists in the public container registry.
  • Verify the build process is included in our typical pipeline.

Tech assumptions

  • Look into RKT before settling in.
  • Maybe (probably?) put this in the public Docker Hub.
  • Ensure this is baked into our pipeline. See also: DoD.
  • Ideally use python:3-alpine (or close enough)
