Transit benefits enrollment, minus the paperwork.
Requires Docker and Docker Compose.
Clone the repository:

```shell
git clone https://github.com/cal-itp/benefits
cd benefits
```
Create an environment file from the sample:

```shell
cp .env.sample .env
```
Build the Docker image and start the client using Docker Compose:

```shell
docker-compose build [--no-cache] client
docker-compose up [-d] client
```
After initialization, the client is running at `http://localhost:${DJANGO_LOCAL_PORT}` (`http://localhost:8000` by default).

If `DJANGO_ADMIN=true`, the backend administrative interface can be accessed with the superuser you set up at `http://localhost:8000/admin`.
By default, sample data from `data/client.json` is used to initialize Django. Alternatively, you may:

- Modify the sample data file; or
- Point `DJANGO_INIT_PATH` at a different data file; or
- Use production data stored in S3 (see Deployment); or
- (If `DJANGO_ADMIN=true`) use the backend administrative interface to create, read, update, and delete configuration data
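For example, to initialize Django from a different data file, you might override the init path in your `.env` (the variable comes from the sample environment file; the file path shown here is hypothetical):

```shell
# .env — initialize Django from a custom data file instead of data/client.json
# (the path below is an example; substitute your own file)
DJANGO_INIT_PATH=data/my-agency.json
```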
Stop the running services with:

```shell
docker-compose down
```
A basic eligibility verification server is available for testing:

```shell
docker-compose up [-d] --build server
```

The API endpoint is running at `http://localhost:5000/verify`. Sample users and eligibility data can be found in `data/server.json`.
This repository uses `pre-commit` hooks to check and format code.

Ensure you have `pre-commit` installed:

```shell
pip install pre-commit
```

Then run (from the root of this repository):

```shell
pre-commit install
```
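Once installed, the hooks run automatically on `git commit`. You can also invoke them manually across the whole codebase, which is useful after pulling changes to the hook configuration:

```shell
# Run every configured pre-commit hook against all files in the repository
pre-commit run --all-files
```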
This is the recommended development setup.
VS Code can be used together with Docker via the Remote - Containers extension to enable a container-based development environment. This repository includes a `.devcontainer.json` file that configures remote container development and debugging.

With the Remote - Containers extension enabled, open the folder containing this repository inside Visual Studio Code. You should receive a prompt in the Visual Studio Code window; click **Reopen in Container** to run the development environment inside a container.
If you do not receive a prompt, or when you want to start from a fresh environment:

1. Press `Ctrl+Shift+P` to bring up the command palette in Visual Studio Code
2. Type `Remote-Containers` to filter the commands
3. Select **Rebuild and Reopen in Container**
Once running inside a container, press `F5` to attach a debugger to the client at `http://localhost:${DJANGO_LOCAL_PORT}` (`http://localhost:8000` by default) on your host machine.

The test eligibility verification server endpoint is running at `http://localhost:5000/verify` on your host machine. Access the server endpoint from within the Dev Container at `http://server:5000/verify`.

`pre-commit` hooks are also installed and activated within the Dev Container.
To close out of the container and re-open the directory locally in Visual Studio Code:

1. Press `Ctrl+Shift+P` to bring up the command palette in Visual Studio Code
2. Type `Remote-Containers` to filter the commands
3. Select **Reopen Locally**
The Eligibility Verification API uses Signed and Encrypted JSON Web Tokens (JWS, JWE, JWT) as a means of data transfer.
A public/private keypair must be generated by each party (Benefits Client and Eligibility Verification Server). Example keys are included for the test verification server and sample agencies.
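A keypair can be generated with standard tooling; for example, with OpenSSL (a sketch, not the project's prescribed procedure: the RSA algorithm, 2048-bit key size, and file names are assumptions here):

```shell
# Generate a 2048-bit RSA private key; keep this file secret
openssl genrsa -out client-private.pem 2048

# Derive the matching public key to share with the other party
openssl rsa -in client-private.pem -pubout -out client-public.pem
```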
The application is deployed to AWS Elastic Container Service (ECS) using settings provided in the Task Definition template.
A GitHub Action performs the following steps on pushes to `main`:

- Login to Elastic Container Registry (ECR) using AWS credentials stored in repository secrets
- Build and push a new image using the repository's `Dockerfile`, saving the image tag as output
- Using the Task Definition template, fill in the newly built image tag
- Push the new Task Definition to ECS, triggering a re-deploy
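The steps above could be sketched with the official AWS actions roughly as follows. This is a hypothetical outline, not the repository's actual workflow: the action versions, region, file paths, container/service/cluster names, and image name are all placeholders.

```yaml
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Login using AWS credentials stored in repository secrets
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-west-2
      - id: ecr
        uses: aws-actions/amazon-ecr-login@v1
      # Build and push a new image, tagged with the commit SHA
      - run: |
          docker build -t ${{ steps.ecr.outputs.registry }}/benefits:${{ github.sha }} .
          docker push ${{ steps.ecr.outputs.registry }}/benefits:${{ github.sha }}
      # Fill the new image tag into the Task Definition template
      - id: taskdef
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: task-definition.json
          container-name: benefits
          image: ${{ steps.ecr.outputs.registry }}/benefits:${{ github.sha }}
      # Push the new Task Definition to ECS, triggering a re-deploy
      - uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.taskdef.outputs.task-definition }}
          service: benefits
          cluster: benefits
```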
The production configuration data (a version of `data/client.json`) is stored in an AWS S3 bucket. The Task Definition template includes a container definition that uses the AWS CLI Docker image to pull this config file from S3 during the boot-up sequence, storing it in a volume that is mounted into the main application container.
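In effect, the bootstrapping container runs something like the following at startup (the bucket name and destination path are illustrative placeholders, not the real values):

```shell
# Pull the production config file from S3 into the shared volume
aws s3 cp s3://example-config-bucket/client.json /config/client.json
```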
To replicate the production configuration locally, fill in the appropriate values for the AWS configuration in the `.env` file, then run:

```shell
docker-compose run s3config
```