
dispatch-docker's Introduction

Dispatch

Official bootstrap for running your own Dispatch with Docker.

Requirements

  • Docker 17.05.0+
  • Compose 1.19.0+

Minimum Hardware Requirements:

  • You need at least 2400MB RAM

Setup

To get started with all the defaults, simply clone the repo and run ./install.sh in your local check-out.

You may need to modify the included example config files (.env) to suit your needs or your environment (such as adding Google credentials). If so, copy the example files to names without the .example extension and make your edits before running the install.sh script.
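
As a concrete sketch (the example file name below is an assumption; use whatever .example files you find in your check-out):

git clone https://github.com/Netflix/dispatch-docker.git
cd dispatch-docker

# Copy the example config, dropping the .example suffix, then edit it
cp .env.example .env      # assumption: the example file is named .env.example
vi .env                   # e.g. add Google credentials here

./install.sh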

Data

By default, Dispatch does not come with any data. If you're looking for example data, load the Postgres dump file located here.

Note: when running the install.sh script, you will be asked whether to load this database dump or to initialize a new database.
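
If you ever need to load the dump by hand instead, a minimal sketch (this assumes the default postgres service name, the dispatch user/database from this repo's defaults — adjust to your .env — a plain-SQL dump, and dispatch-sample-data.dump as a placeholder for the file linked above):

docker-compose up -d postgres
docker-compose exec -T postgres psql -U dispatch -d dispatch < dispatch-sample-data.dump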

Starting with a clean database

If you decide to start with a clean database, you will need a user. To create a user, go to http://localhost:8000/default/auth/register.

Securing Dispatch with SSL/TLS

If you'd like to protect your Dispatch install with SSL/TLS, there are excellent SSL/TLS proxies such as HAProxy and Nginx. You'll likely want to add such a proxy as a service in your docker-compose.yml file.
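
As a rough, unofficial sketch of what that could look like (the container name, network name, and certificate/config paths below are all assumptions to replace with your own), you could first try a TLS-terminating Nginx container in front of the web service and later fold the equivalent definition into docker-compose.yml:

docker run -d \
      --name dispatch-tls-proxy \
      --network dispatch_default \
      -p 443:443 \
      -v /path/to/certs:/etc/nginx/certs:ro \
      -v /path/to/nginx.conf:/etc/nginx/conf.d/default.conf:ro \
      nginx:stable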

Updating Dispatch

The included install.sh script is meant to be idempotent and to bring you to the latest version; you can and should re-run install.sh whenever you want to upgrade to the latest version available.
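
In practice, upgrading a git check-out of this repo usually looks something like:

cd dispatch-docker
git pull
./install.sh
docker-compose up -d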

Upgrading from an older version of postgres

If you are using an earlier version of Postgres, you may need to perform some manual steps to upgrade to the newest Postgres image.

This assumes that you have not changed the default Postgres data path (/var/lib/postgresql/data) in your docker-compose.yml.

If you have changed it, please replace all occurrences of /var/lib/postgresql/data with your path.

  1. Make a backup of your Dispatch Postgres data dir.
  2. Stop all Dispatch containers, except the postgres one (e.g. use docker stop and not docker-compose stop).
  3. Create a new Postgres container which uses a different data directory:
docker run -d \
      --name postgresnew \
      -e POSTGRES_DB=dispatch \
      -e POSTGRES_USER=dispatch \
      -e POSTGRES_PASSWORD=dispatch \
      -v /var/lib/postgresql/new:/var/lib/postgresql/data:rw \
      postgres:latest
  4. Use pg_dumpall to dump all data from the existing Postgres container to the new Postgres container (replace DISPATCH_DATABASE_CONTAINER_NAME (default is postgres) with the name of the old Postgres container):
docker exec \
    DISPATCH_DATABASE_CONTAINER_NAME pg_dumpall -U postgres | \
    docker exec -i postgresnew psql -U postgres
  5. Stop and remove both Postgres containers:
docker stop DISPATCH_DATABASE_CONTAINER_NAME postgresnew
docker rm DISPATCH_DATABASE_CONTAINER_NAME postgresnew
  6. Edit your docker-compose.yml to use the postgres:latest image for the database container.
  7. Replace the old Postgres data directory with the upgraded data directory:
mv /var/lib/postgresql/data /var/lib/postgresql/old
mv /var/lib/postgresql/new /var/lib/postgresql/data
  8. Delete the old existing containers:
docker-compose rm
  9. Start Dispatch up again:
docker-compose up

That should be it. Your Postgres data has now been migrated to the postgres:latest image.
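
As an optional sanity check (assuming the default compose service name postgres and the dispatch user/database), you can confirm the new server version once the containers are back up:

docker-compose exec postgres psql -U dispatch -d dispatch -c "SELECT version();"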

dispatch-docker's People

Contributors

0x4c6565, aliceloeser, cinojose, dbx12, dctalbot, dlambda, fkrauthan, gargrag, jtorvald, karl-johan-grahn, kevgliss, metroid-samus, mitsuwa, mvilanova, pdehlke, rvandegrift, sarahcthekey, shumbashi, thedahv, ymatsiuk


dispatch-docker's Issues

install.sh fails due to tenacity dependency

With the context fix mentioned in #7, I get the following error:

+ pip install -i https://smartiproxy.mgmt.netflix.net/pypi /tmp/dist/dispatch-0.1.0.dev0-py38-none-any.whl
Looking in indexes: https://smartiproxy.mgmt.netflix.net/pypi
Processing /tmp/dist/dispatch-0.1.0.dev0-py38-none-any.whl
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7f42d5f4a8e0>, 'Connection to smartiproxy.mgmt.netflix.net timed out. (connect timeout=15)')': /pypi/tenacity/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7f42d5f4a580>, 'Connection to smartiproxy.mgmt.netflix.net timed out. (connect timeout=15)')': /pypi/tenacity/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7f42d5f4af70>, 'Connection to smartiproxy.mgmt.netflix.net timed out. (connect timeout=15)')': /pypi/tenacity/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7f42d5f4afa0>, 'Connection to smartiproxy.mgmt.netflix.net timed out. (connect timeout=15)')': /pypi/tenacity/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7f42d5f44d60>, 'Connection to smartiproxy.mgmt.netflix.net timed out. (connect timeout=15)')': /pypi/tenacity/
ERROR: Could not find a version that satisfies the requirement tenacity (from dispatch==0.1.0.dev0) (from versions: none)
ERROR: No matching distribution found for tenacity (from dispatch==0.1.0.dev0)
Removing intermediate container 0cf3c593067a
ERROR: Service 'web' failed to build: The command '/bin/sh -c set -x     && buildDeps=""     && apt-get update     && apt-get install -y --no-install-recommends $buildDeps     && pip install -i https://smartiproxy.mgmt.netflix.net/pypi /tmp/dist/*.whl     && apt-get purge -y --auto-remove $buildDeps     && apt-get install -y --no-install-recommends     pkg-config         && apt-get clean     && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 1

Is smartiproxy.mgmt.netflix.net an internal Netflix service? The timeout suggests to me it might be IP restricted (unless this is intermittent).

install.sh COPY ./docker/docker-entrypoint.sh ./docker/.env failure

From the latest in master I am getting the following when running ./install.sh:

Step 37/44 : COPY ./docker/docker-entrypoint.sh ./docker/.env $DISPATCH_CONF/
ERROR: Service 'web' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builder204368258/docker/.env: no such file or directory
Cleaning up...

Looking at this a bit closer, it isn't quite clear what the purpose of .env and .requirements.txt is in the repo. Should these be handled by .gitignore?

The ensure_file_from_example() function seems to be responsible for creating these files, but by default they already exist. Just a little unclear.

SECRET_KEY=.* does not exist in any of the .env* files, so the sed update doesn't work -- additionally, if it is made to work there may be an additional space after the =.

What does .env-e do?

I think a re-work of how the example files are used and generated would be most helpful for getting this project going.

Install error: "No such service: dispatch"

Installing on macOS I get the following error:

Successfully tagged dispatch-local:latest
Docker images built.
Setting up database...
ERROR: No such service: dispatch
Cleaning up...

macOS: 10.15.2
Docker Desktop: 2.2.0.3
Docker Engine: 19.03.5
Compose: 1.25.4

Please let me know what sort of logs you need for the investigation.

Unable to build docker image

Error when trying to build docker image from develop branch:

Summarized error:

  Building wheel for fbprophet (setup.py): started
  Building wheel for fbprophet (setup.py): finished with status 'error'
  ERROR: Command errored out with exit status 1:
   command: /usr/local/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-ctc12fuu/fbprophet/setup.py'"'"'; __file__='"'"'/tmp/pip-install-ctc12fuu/fbprophet/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-i5h5bea1
       cwd: /tmp/pip-install-ctc12fuu/fbprophet/

Complete output

  Downloading uvloop-0.14.0-cp38-cp38-manylinux2010_x86_64.whl (4.7 MB)
Collecting decorator>=3.4.0
  Downloading decorator-4.4.2-py2.py3-none-any.whl (9.2 kB)
Collecting protobuf>=3.4.0
  Downloading protobuf-3.11.3-cp38-cp38-manylinux1_x86_64.whl (1.3 MB)
Collecting googleapis-common-protos<2.0dev,>=1.6.0
  Downloading googleapis-common-protos-1.51.0.tar.gz (35 kB)
Collecting hyperframe<6,>=5.2.0
  Downloading hyperframe-5.2.0-py2.py3-none-any.whl (12 kB)
Collecting hpack<4,>=3.0
  Downloading hpack-3.0.0-py2.py3-none-any.whl (38 kB)
Collecting cycler>=0.10
  Downloading cycler-0.10.0-py2.py3-none-any.whl (6.5 kB)
Collecting kiwisolver>=1.0.1
  Downloading kiwisolver-1.1.0-cp38-cp38-manylinux1_x86_64.whl (91 kB)
Collecting ephem>=3.7.5.3
  Downloading ephem-3.7.7.1-cp38-cp38-manylinux2010_x86_64.whl (1.2 MB)
Collecting pymeeus<=1,>=0.3.6
  Downloading PyMeeus-0.3.7.tar.gz (732 kB)
Collecting cssselect
  Downloading cssselect-1.1.0-py2.py3-none-any.whl (16 kB)
Collecting async-timeout<4.0,>=3.0
  Downloading async_timeout-3.0.1-py3-none-any.whl (8.2 kB)
Collecting attrs>=17.3.0
  Downloading attrs-19.3.0-py2.py3-none-any.whl (39 kB)
Collecting multidict<5.0,>=4.5
  Downloading multidict-4.7.5-cp38-cp38-manylinux1_x86_64.whl (162 kB)
Collecting yarl<2.0,>=1.0
  Downloading yarl-1.4.2-cp38-cp38-manylinux1_x86_64.whl (253 kB)
Collecting cryptography; extra == "signedtoken"
  Downloading cryptography-2.8-cp34-abi3-manylinux2010_x86_64.whl (2.3 MB)
Collecting pyjwt>=1.0.0; extra == "signedtoken"
  Downloading PyJWT-1.7.1-py2.py3-none-any.whl (18 kB)
Collecting cffi!=1.11.3,>=1.8
  Downloading cffi-1.14.0-cp38-cp38-manylinux1_x86_64.whl (409 kB)
Collecting pycparser
  Downloading pycparser-2.20-py2.py3-none-any.whl (112 kB)
Building wheels for collected packages: SQLAlchemy-Searchable, fbprophet, pypd, python-multipart, pystan, sqlalchemy, alembic, sentry-asgi, SQLAlchemy-Utils, validators, holidays, googleapis-common-protos, pymeeus
  Building wheel for SQLAlchemy-Searchable (setup.py): started
  Building wheel for SQLAlchemy-Searchable (setup.py): finished with status 'done'
  Created wheel for SQLAlchemy-Searchable: filename=SQLAlchemy_Searchable-1.1.0-py3-none-any.whl size=10494 sha256=09e4d647ba4e9a1e720a4ed692ec8be2dd3fd2c4aa61d82c01e1ae91cacba514
  Stored in directory: /tmp/pip-ephem-wheel-cache-kt2tttbz/wheels/ea/e3/98/436bb8578fa6dfe888d1d7d516e9632ead66f88120ff2580f9
  Building wheel for fbprophet (setup.py): started
  Building wheel for fbprophet (setup.py): finished with status 'error'
  ERROR: Command errored out with exit status 1:
   command: /usr/local/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-ctc12fuu/fbprophet/setup.py'"'"'; __file__='"'"'/tmp/pip-install-ctc12fuu/fbprophet/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-i5h5bea1
       cwd: /tmp/pip-install-ctc12fuu/fbprophet/
  Complete output (40 lines):
  running bdist_wheel
  running build
  running build_py
  creating build
  creating build/lib
  creating build/lib/fbprophet
  creating build/lib/fbprophet/stan_model
  Traceback (most recent call last):
    File "<string>", line 1, in <module>
    File "/tmp/pip-install-ctc12fuu/fbprophet/setup.py", line 120, in <module>
      setup(
    File "/usr/local/lib/python3.8/site-packages/setuptools/__init__.py", line 144, in setup
      return distutils.core.setup(**attrs)
    File "/usr/local/lib/python3.8/distutils/core.py", line 148, in setup
      dist.run_commands()
    File "/usr/local/lib/python3.8/distutils/dist.py", line 966, in run_commands
      self.run_command(cmd)
    File "/usr/local/lib/python3.8/distutils/dist.py", line 985, in run_command
      cmd_obj.run()
    File "/usr/local/lib/python3.8/site-packages/wheel/bdist_wheel.py", line 223, in run
      self.run_command('build')
    File "/usr/local/lib/python3.8/distutils/cmd.py", line 313, in run_command
      self.distribution.run_command(command)
    File "/usr/local/lib/python3.8/distutils/dist.py", line 985, in run_command
      cmd_obj.run()
    File "/usr/local/lib/python3.8/distutils/command/build.py", line 135, in run
      self.run_command(cmd_name)
    File "/usr/local/lib/python3.8/distutils/cmd.py", line 313, in run_command
      self.distribution.run_command(command)
    File "/usr/local/lib/python3.8/distutils/dist.py", line 985, in run_command
      cmd_obj.run()
    File "/tmp/pip-install-ctc12fuu/fbprophet/setup.py", line 48, in run
      build_models(target_dir)
    File "/tmp/pip-install-ctc12fuu/fbprophet/setup.py", line 36, in build_models
      from fbprophet.models import StanBackendEnum
    File "/tmp/pip-install-ctc12fuu/fbprophet/fbprophet/__init__.py", line 8, in <module>
      from fbprophet.forecaster import Prophet
    File "/tmp/pip-install-ctc12fuu/fbprophet/fbprophet/forecaster.py", line 14, in <module>
      import numpy as np
  ModuleNotFoundError: No module named 'numpy'
  ----------------------------------------
  ERROR: Failed building wheel for fbprophet
  Running setup.py clean for fbprophet
  Building wheel for pypd (setup.py): started
  Building wheel for pypd (setup.py): finished with status 'done'
  Created wheel for pypd: filename=pypd-1.1.0-py3-none-any.whl size=34279 sha256=673d469c8784efdd707f01d1be891293b0f30e7df5522f837a5c44116bdec209
  Stored in directory: /tmp/pip-ephem-wheel-cache-kt2tttbz/wheels/d9/73/ee/f8f6908a77cb4df12e363ffe6d2083fa6c5113ed3c1e65ec91
  Building wheel for python-multipart (setup.py): started
  Building wheel for python-multipart (setup.py): finished with status 'done'
  Created wheel for python-multipart: filename=python_multipart-0.0.5-py3-none-any.whl size=31669 sha256=ef3ae9c84087c999228f78794aa0f5ea81290f42cff4a10a2500ab9e98547f2a
  Stored in directory: /tmp/pip-ephem-wheel-cache-kt2tttbz/wheels/9e/fc/1c/cf980e6413d3ee8e70cd8f39e2366b0f487e3e221aeb452eb0
  Building wheel for pystan (setup.py): started
  Building wheel for pystan (setup.py): finished with status 'error'
  ERROR: Command errored out with exit status 1:
   command: /usr/local/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-ctc12fuu/pystan/setup.py'"'"'; __file__='"'"'/tmp/pip-install-ctc12fuu/pystan/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-blcufvm0
       cwd: /tmp/pip-install-ctc12fuu/pystan/
  Complete output (1 lines):
  Cython>=0.22 and NumPy are required.
  ----------------------------------------
  ERROR: Failed building wheel for pystan
  Running setup.py clean for pystan
  Building wheel for sqlalchemy (PEP 517): started
  Building wheel for sqlalchemy (PEP 517): finished with status 'done'
  Created wheel for sqlalchemy: filename=SQLAlchemy-1.3.15-cp38-cp38-linux_x86_64.whl size=1236627 sha256=2fa886140a2e7610bf55a42f0e819c27ab7edfef6ba9d6f91ac982f7e61b5387
  Stored in directory: /tmp/pip-ephem-wheel-cache-kt2tttbz/wheels/d1/0c/78/33448c81fd8e458d60897744f30462ca39e682637ec9591c0d
  Building wheel for alembic (PEP 517): started
  Building wheel for alembic (PEP 517): finished with status 'done'
  Created wheel for alembic: filename=alembic-1.4.2-py2.py3-none-any.whl size=159543 sha256=a3aceb9ebdd10fe8534362890f70ba0d4236f9dd9462e12d1c73ad1024db7ece
  Stored in directory: /tmp/pip-ephem-wheel-cache-kt2tttbz/wheels/70/08/70/cea787a7e95817b831469fa42af076046e55a05f7c94657463
  Building wheel for sentry-asgi (setup.py): started
  Building wheel for sentry-asgi (setup.py): finished with status 'done'
  Created wheel for sentry-asgi: filename=sentry_asgi-0.2.0-py3-none-any.whl size=3496 sha256=2c4159aa1926c485f2f764a95f0cbe020f542845d6527ea34cd12ca4ad5b7ff6
  Stored in directory: /tmp/pip-ephem-wheel-cache-kt2tttbz/wheels/de/5d/9d/b2d8e08a3bba147dbebe35b922379a70b5b0882a72b295aee7
  Building wheel for SQLAlchemy-Utils (setup.py): started
  Building wheel for SQLAlchemy-Utils (setup.py): finished with status 'done'
  Created wheel for SQLAlchemy-Utils: filename=SQLAlchemy_Utils-0.36.3-py2.py3-none-any.whl size=89208 sha256=5d0f09010eebaface5365d06720ecbdf022740a513e06b406de4012bf60c537b
  Stored in directory: /tmp/pip-ephem-wheel-cache-kt2tttbz/wheels/e2/21/55/1f3055532df357fad26250e2b39521ac2019d94916afb8d2b4
  Building wheel for validators (setup.py): started
  Building wheel for validators (setup.py): finished with status 'done'
  Created wheel for validators: filename=validators-0.14.2-py3-none-any.whl size=17247 sha256=479de9330a312e4c100260958a8113b35b3e9c86979729b32b6bc4d09c0625ad
  Stored in directory: /tmp/pip-ephem-wheel-cache-kt2tttbz/wheels/b4/f0/85/3f7cb38e7402a7db135351689dc5e0ee72a596c42027348bb8
  Building wheel for holidays (setup.py): started
  Building wheel for holidays (setup.py): finished with status 'done'
  Created wheel for holidays: filename=holidays-0.10.1-py3-none-any.whl size=105065 sha256=f1e8b631aec050bde2546e7076992f257a87e59c1d3f58452c20e9014a724e4f
  Stored in directory: /tmp/pip-ephem-wheel-cache-kt2tttbz/wheels/e5/5b/6b/fd5dc37887f57b4e0c52b98e6e5a3b99f47c50db6e986d91d3
  Building wheel for googleapis-common-protos (setup.py): started
  Building wheel for googleapis-common-protos (setup.py): finished with status 'done'
  Created wheel for googleapis-common-protos: filename=googleapis_common_protos-1.51.0-py3-none-any.whl size=77592 sha256=44d3641af3d8af6b2ef9cf30bc8d3413e3b17cf1057892e125a35dbe8204fb10
  Stored in directory: /tmp/pip-ephem-wheel-cache-kt2tttbz/wheels/0d/81/d7/c82ed88e8977ef82d567716c51defc3fd8775c78fa5c4efc7b
  Building wheel for pymeeus (setup.py): started
  Building wheel for pymeeus (setup.py): finished with status 'done'
  Created wheel for pymeeus: filename=PyMeeus-0.3.7-py3-none-any.whl size=702876 sha256=59278445b377455f1dcfacb8264d9fcf415e36aedf70b8ea7ef25fd792218ef2
  Stored in directory: /tmp/pip-ephem-wheel-cache-kt2tttbz/wheels/5a/68/50/d989a005ecd4f58a7922bede25ff7e391d66395a3090acf97a
Successfully built SQLAlchemy-Searchable pypd python-multipart sqlalchemy alembic sentry-asgi SQLAlchemy-Utils validators holidays googleapis-common-protos pymeeus
Failed to build fbprophet pystan
Installing collected packages: sqlalchemy, six, SQLAlchemy-Utils, decorator, validators, SQLAlchemy-Searchable, cachetools, tabulate, httplib2, pyasn1, pyasn1-modules, rsa, google-auth, google-auth-httplib2, uritemplate, protobuf, certifi, urllib3, chardet, idna, requests, pytz, googleapis-common-protos, google-api-core, google-api-python-client, dnspython, email-validator, starlette, pydantic, fastapi, ecdsa, python-jose, oauth2client, h11, hyperframe, hpack, h2, sniffio, hstspreload, rfc3986, httpx, Cython, python-dateutil, numpy, pandas, cmdstanpy, pystan, pyparsing, cycler, kiwisolver, matplotlib, ephem, LunarCalendar, pymeeus, convertdate, holidays, setuptools-git, fbprophet, sentry-sdk, cssutils, cssselect, lxml, premailer, emails, aiofiles, pypd, python-multipart, async-timeout, attrs, multidict, yarl, aiohttp, slackclient, psycopg2-binary, arrow, sh, sqlalchemy-filters, click, schedule, python-editor, MarkupSafe, Mako, alembic, jinja2, sentry-asgi, pycparser, cffi, cryptography, pyjwt, oauthlib, requests-oauthlib, pbr, defusedxml, requests-toolbelt, jira, catalogue, blis, plac, srsly, murmurhash, cymem, wasabi, preshed, tqdm, thinc, spacy, tenacity, google-auth-oauthlib, websockets, httptools, uvloop, uvicorn, joblib, dispatch
    Running setup.py install for pystan: started
    Running setup.py install for pystan: still running...
    Running setup.py install for pystan: still running...
    Running setup.py install for pystan: finished with status 'error'
    ERROR: Command errored out with exit status 1:
     command: /usr/local/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-ctc12fuu/pystan/setup.py'"'"'; __file__='"'"'/tmp/pip-install-ctc12fuu/pystan/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-7dde_41c/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.8/pystan
         cwd: /tmp/pip-install-ctc12fuu/pystan/
    Complete output (18396 lines):
    Compiling pystan/_api.pyx because it depends on /usr/local/lib/python3.8/site-packages/Cython/Includes/libcpp/string.pxd.
    Compiling pystan/_chains.pyx because it depends on /usr/local/lib/python3.8/site-packages/Cython/Includes/libcpp/vector.pxd.
    [1/2] Cythonizing pystan/_api.pyx
    /usr/local/lib/python3.8/site-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /tmp/pip-install-ctc12fuu/pystan/pystan/_api.pyx
      tree = Parsing.p_module(s, pxd, full_module_name)
    [2/2] Cythonizing pystan/_chains.pyx
    running install
    running build
    running build_py
    creating build
    creating build/lib.linux-x86_64-3.8
    creating build/lib.linux-x86_64-3.8/pystan
    copying pystan/__init__.py -> build/lib.linux-x86_64-3.8/pystan
    copying pystan/lookup.py -> build/lib.linux-x86_64-3.8/pystan
    copying pystan/api.py -> build/lib.linux-x86_64-3.8/pystan
    copying pystan/misc.py -> build/lib.linux-x86_64-3.8/pystan
    copying pystan/plots.py -> build/lib.linux-x86_64-3.8/pystan
    copying pystan/_compat.py -> build/lib.linux-x86_64-3.8/pystan
    copying pystan/model.py -> build/lib.linux-x86_64-3.8/pystan
    copying pystan/chains.py -> build/lib.linux-x86_64-3.8/pystan
    copying pystan/constants.py -> build/lib.linux-x86_64-3.8/pystan
    copying pystan/diagnostics.py -> build/lib.linux-x86_64-3.8/pystan
    creating build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/__init__.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_optimizing_example.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_generated_quantities_seed.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_posterior_means.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_pickle.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_lookup.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_misc.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_rstan_stan_args.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_vb.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_chains.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_covexpquad.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_extract.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_linear_regression.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_extra_compile_args.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_utf8.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_unconstrain_pars.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_misc_args.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_basic_matrix.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_fixed_param.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_basic_array.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_rstan_stanfit.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_user_inits.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_rstan_getting_started.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_rstan_testoptim.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_stanc.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_basic.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_ess.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/helper.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_basic_pars.py -> build/lib.linux-x86_64-3.8/pystan/tests
    copying pystan/tests/test_stan_file_io.py -> build/lib.linux-x86_64-3.8/pystan/tests
    creating build/lib.linux-x86_64-3.8/pystan/experimental
    copying pystan/experimental/__init__.py -> build/lib.linux-x86_64-3.8/pystan/experimental
    copying pystan/experimental/misc.py -> build/lib.linux-x86_64-3.8/pystan/experimental
    creating build/lib.linux-x86_64-3.8/pystan/external
    copying pystan/external/__init__.py -> build/lib.linux-x86_64-3.8/pystan/external
    creating build/lib.linux-x86_64-3.8/pystan/external/pymc
    copying pystan/external/pymc/__init__.py -> build/lib.linux-x86_64-3.8/pystan/external/pymc
    copying pystan/external/pymc/stats.py -> build/lib.linux-x86_64-3.8/pystan/external/pymc
    copying pystan/external/pymc/plots.py -> build/lib.linux-x86_64-3.8/pystan/external/pymc
    copying pystan/external/pymc/trace.py -> build/lib.linux-x86_64-3.8/pystan/external/pymc
    creating build/lib.linux-x86_64-3.8/pystan/external/enum
    copying pystan/external/enum/__init__.py -> build/lib.linux-x86_64-3.8/pystan/external/enum
    copying pystan/external/enum/enum.py -> build/lib.linux-x86_64-3.8/pystan/external/enum
    creating build/lib.linux-x86_64-3.8/pystan/external/scipy
    copying pystan/external/scipy/__init__.py -> build/lib.linux-x86_64-3.8/pystan/external/scipy
    copying pystan/external/scipy/mstats.py -> build/lib.linux-x86_64-3.8/pystan/external/scipy
    running egg_info
    creating pystan.egg-info
    writing pystan.egg-info/PKG-INFO
    writing dependency_links to pystan.egg-info/dependency_links.txt
    writing requirements to pystan.egg-info/requires.txt
    writing top-level names to pystan.egg-info/top_level.txt
    writing manifest file 'pystan.egg-info/SOURCES.txt'
    reading manifest file 'pystan.egg-info/SOURCES.txt'
    reading manifest template 'MANIFEST.in'
    no previously-included directories found matching 'doc/_build'
    warning: no previously-included files matching '*.so' found anywhere in distribution
    warning: no previously-included files matching '*.pyc' found anywhere in distribution
    warning: no previously-included files matching '.git*' found anywhere in distribution
    warning: no previously-included files matching '*.png' found anywhere in distribution
    no previously-included directories found matching 'pystan/stan/make'
    no previously-included directories found matching 'pystan/stan/src/docs'
    no previously-included directories found matching 'pystan/stan/src/doxygen'
    no previously-included directories found matching 'pystan/stan/src/python'
    no previously-included directories found matching 'pystan/stan/src/test'
    no previously-included directories found matching 'pystan/stan/lib/stan_math/doc'
    no previously-included directories found matching 'pystan/stan/lib/stan_math/doxygen'
    no previously-included directories found matching 'pystan/stan/lib/stan_math/lib/cpplint_*'
    no previously-included directories found matching 'pystan/stan/lib/stan_math/lib/gtest_*'
    no previously-included directories found matching 'pystan/stan/lib/stan_math/make'
    no previously-included directories found matching 'pystan/stan/lib/stan_math/test'
    writing manifest file 'pystan.egg-info/SOURCES.txt'
    copying pystan/_api.cpp -> build/lib.linux-x86_64-3.8/pystan
    copying pystan/_chains.cpp -> build/lib.linux-x86_64-3.8/pystan
    copying pystan/_misc.cpp -> build/lib.linux-x86_64-3.8/pystan
    creating build/lib.linux-x86_64-3.8/pystan/stan
    creating build/lib.linux-x86_64-3.8/pystan/stan/src
    creating build/lib.linux-x86_64-3.8/pystan/stan/src/stan
    creating build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang
    copying pystan/stan/src/stan/lang/ast_def.cpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang
    creating build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/bare_type_grammar_inst.cpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/expression07_grammar_inst.cpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/expression_grammar_inst.cpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/functions_grammar_inst.cpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/indexes_grammar_inst.cpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/program_grammar_inst.cpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/semantic_actions_def.cpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/statement_2_grammar_inst.cpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/statement_grammar_inst.cpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/term_grammar_inst.cpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/var_decls_grammar_inst.cpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/whitespace_grammar_inst.cpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/py_var_context.hpp -> build/lib.linux-x86_64-3.8/pystan
    copying pystan/stanc.hpp -> build/lib.linux-x86_64-3.8/pystan
    copying pystan/pystan_writer.hpp -> build/lib.linux-x86_64-3.8/pystan
    copying pystan/stan_fit.hpp -> build/lib.linux-x86_64-3.8/pystan
    copying pystan/stanc.pxd -> build/lib.linux-x86_64-3.8/pystan
    copying pystan/io.pxd -> build/lib.linux-x86_64-3.8/pystan
    copying pystan/stan_fit.pxd -> build/lib.linux-x86_64-3.8/pystan
    copying pystan/_misc.pyx -> build/lib.linux-x86_64-3.8/pystan
    copying pystan/_api.pyx -> build/lib.linux-x86_64-3.8/pystan
    copying pystan/_chains.pyx -> build/lib.linux-x86_64-3.8/pystan
    copying pystan/stanfit4model.pyx -> build/lib.linux-x86_64-3.8/pystan
    creating build/lib.linux-x86_64-3.8/pystan/tests/data
    copying pystan/tests/data/blocker.1.csv -> build/lib.linux-x86_64-3.8/pystan/tests/data
    copying pystan/tests/data/blocker.2.csv -> build/lib.linux-x86_64-3.8/pystan/tests/data
    copying pystan/tests/data/external2.stan -> build/lib.linux-x86_64-3.8/pystan/tests/data
    copying pystan/tests/data/external1.stan -> build/lib.linux-x86_64-3.8/pystan/tests/data
    creating build/lib.linux-x86_64-3.8/pystan/lookuptable
    copying pystan/lookuptable/R.txt -> build/lib.linux-x86_64-3.8/pystan/lookuptable
    copying pystan/lookuptable/stan-functions.txt -> build/lib.linux-x86_64-3.8/pystan/lookuptable
    copying pystan/lookuptable/python.txt -> build/lib.linux-x86_64-3.8/pystan/lookuptable
    copying pystan/stan/src/stan/version.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan
    creating build/lib.linux-x86_64-3.8/pystan/stan/src/stan/optimization
    copying pystan/stan/src/stan/optimization/newton.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/optimization
    copying pystan/stan/src/stan/optimization/lbfgs_update.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/optimization
    copying pystan/stan/src/stan/optimization/bfgs.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/optimization
    copying pystan/stan/src/stan/optimization/bfgs_update.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/optimization
    copying pystan/stan/src/stan/optimization/bfgs_linesearch.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/optimization
    copying pystan/stan/src/stan/lang/compile_functions.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang
    copying pystan/stan/src/stan/lang/parser.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang
    copying pystan/stan/src/stan/lang/function_signatures.h -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang
    copying pystan/stan/src/stan/lang/compiler.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang
    copying pystan/stan/src/stan/lang/generator.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang
    copying pystan/stan/src/stan/lang/rethrow_located.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang
    copying pystan/stan/src/stan/lang/ast.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang
    copying pystan/stan/src/stan/lang/grammars/semantic_actions.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/common_adaptors_def.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/expression07_grammar.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/var_decls_grammar.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/var_decls_grammar_def.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/statement_2_grammar_def.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/expression07_grammar_def.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/program_grammar.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/expression_grammar.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/statement_grammar_def.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/expression_grammar_def.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/statement_grammar.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/program_grammar_def.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/functions_grammar.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/iterator_typedefs.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/bare_type_grammar_def.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/term_grammar_def.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/bare_type_grammar.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/statement_2_grammar.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/term_grammar.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/indexes_grammar_def.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/whitespace_grammar.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/functions_grammar_def.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/indexes_grammar.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    copying pystan/stan/src/stan/lang/grammars/whitespace_grammar_def.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/grammars
    creating build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/generator
    copying pystan/stan/src/stan/lang/generator/generate_cpp.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/generator
    copying pystan/stan/src/stan/lang/generator/generate_function_instantiation_body.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/generator
    copying pystan/stan/src/stan/lang/generator/generate_function_instantiation_name.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/generator
    copying pystan/stan/src/stan/lang/generator/generate_void_statement.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/generator
    copying pystan/stan/src/stan/lang/generator/generate_constrained_param_names_method.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/generator
    copying pystan/stan/src/stan/lang/generator/generate_member_var_decls.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan/lang/generator
    copying pystan/stan/src/stan/lang/generator/printable_visgen.hpp -> build/lib.linux-x86_64-3.8/pystan/stan/src/stan

DISPATCH_UI_URL is missing

Hello,

Can someone help me? I only get an error at the very end.

Thanks!

Setting up database...
Starting dispatch-docker_postgres_1 ... done
Traceback (most recent call last):
  File "/usr/local/bin/dispatch", line 5, in <module>
    from dispatch.cli import entrypoint
  File "/usr/local/lib/python3.8/site-packages/dispatch/cli.py", line 12, in <module>
    from dispatch import version, config
  File "/usr/local/lib/python3.8/site-packages/dispatch/config.py", line 66, in <module>
    DISPATCH_UI_URL = config("DISPATCH_UI_URL")
  File "/usr/local/lib/python3.8/site-packages/starlette/config.py", line 62, in __call__
    return self.get(key, cast, default)
  File "/usr/local/lib/python3.8/site-packages/starlette/config.py", line 75, in get
    raise KeyError(f"Config '{key}' is missing, and has no default.")
KeyError: "Config 'DISPATCH_UI_URL' is missing, and has no default."
[Screenshot: Capture d'écran de 2020-03-24 12-33-26]

relation "application" does not exist

Building with this .env file:

# For configuration details, see: https://hawkins.gitbook.io/dispatch/configuration/app

DISPATCH_UI_URL=localhost

COMPOSE_PROJECT_NAME=dispatch

# General

DATABASE_HOSTNAME=postgres
DATABASE_CREDENTIALS=postgres:dispatch
DATABASE_NAME=dispatch
DATABASE_PORT=5432
POSTGRES_PASSWORD=dispatch
POSTGRES_DB=dispatch
POSTGRES_USER=postgres

DISPATCH_HELP_EMAIL=""
DISPATCH_HELP_SLACK_CHANNEL=""

DISPATCH_PKCE_AUTH_ENABLED="False"
VUE_APP_DISPATCH_PKCE_AUTH="False"
JWKS_URL="None"
DISPATCH_AUTHENTICATION_PROVIDER=""

# Incident configuration
INCIDENT_CONVERSATION_COMMANDS_REFERENCE_DOCUMENT_ID=""
INCIDENT_NOTIFICATION_CONVERSATIONS=""
INCIDENT_NOTIFICATION_DISTRIBUTION_LISTS=""
INCIDENT_STORAGE_ARCHIVAL_FOLDER_ID=""
INCIDENT_STORAGE_DRIVE_ID_SLUG=""
INCIDENT_STORAGE_INCIDENT_REVIEW_FILE_ID=""
INCIDENT_DOCUMENT_INVESTIGATION_SHEET_ID=""
INCIDENT_FAQ_DOCUMENT_ID=""

# Plugin configuration

# Slack
SLACK_APP_USER_SLUG=""
SLACK_API_BOT_TOKEN=""
SLACK_SIGNING_SECRET=""
SLACK_WORKSPACE_NAME=""

# Google
GOOGLE_SERVICE_ACCOUNT_PROJECT_ID=""
GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY_ID=""
GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY=""
GOOGLE_SERVICE_ACCOUNT_CLIENT_EMAIL=""
GOOGLE_SERVICE_ACCOUNT_CLIENT_ID=""
GOOGLE_SERVICE_ACCOUNT_DELEGATED_ACCOUNT=""
GOOGLE_DEVELOPER_KEY=""
GOOGLE_DOMAIN=""

# Jira
JIRA_BROWSER_URL=""
JIRA_PASSWORD=""
JIRA_USERNAME=""
JIRA_API_URL=""
JIRA_PROJECT_KEY=""
JIRA_ISSUE_TYPE_ID=""

# PagerDuty
PAGERDUTY_API_KEY=""
PAGERDUTY_API_FROM_EMAIL=""

Getting:

Setting up database...
Starting dispatch_postgres_1 ... done
Usage: dispatch [OPTIONS] COMMAND [ARGS]...

  Command-line interface to Dispatch.

Options:
  --version  Show the version and exit.
  --help     Show this message and exit.

Commands:
  contact    All commands for contact manipulation.
  database   Container for all dispatch database commands.
  incident   All commands for incident manipulation.
  plugins    All commands for plugin manipulation.
  scheduler  Container for all dispatch scheduler commands.
  server     Container for all dispatch server commands.
  term       All commands for term manipulation.
Starting dispatch_postgres_1 ... done
INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO  [alembic.runtime.migration] Will assume transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> e75e103693f2, Initial migration
INFO  [alembic.runtime.migration] Running upgrade e75e103693f2 -> d0501fc6be89, empty message
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1247, in _execute_context
    self.dialect.do_execute(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 588, in do_execute
    cursor.execute(statement, parameters)
psycopg2.errors.UndefinedTable: relation "application" does not exist


The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/bin/dispatch", line 8, in <module>
    sys.exit(entrypoint())
  File "/usr/local/lib/python3.8/site-packages/dispatch/cli.py", line 835, in entrypoint
    dispatch_cli()
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/dispatch/cli.py", line 612, in upgrade_database
    alembic_command.upgrade(alembic_cfg, revision, sql=sql, tag=tag)
  File "/usr/local/lib/python3.8/site-packages/alembic/command.py", line 298, in upgrade
    script.run_env()
  File "/usr/local/lib/python3.8/site-packages/alembic/script/base.py", line 489, in run_env
    util.load_python_file(self.dir, "env.py")
  File "/usr/local/lib/python3.8/site-packages/alembic/util/pyfiles.py", line 98, in load_python_file
    module = load_module_py(module_id, path)
  File "/usr/local/lib/python3.8/site-packages/alembic/util/compat.py", line 184, in load_module_py
    spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 783, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/usr/local/lib/python3.8/site-packages/dispatch/alembic/env.py", line 84, in <module>
    run_migrations_online()
  File "/usr/local/lib/python3.8/site-packages/dispatch/alembic/env.py", line 78, in run_migrations_online
    context.run_migrations()
  File "<string>", line 8, in run_migrations
  File "/usr/local/lib/python3.8/site-packages/alembic/runtime/environment.py", line 846, in run_migrations
    self.get_context().run_migrations(**kw)
  File "/usr/local/lib/python3.8/site-packages/alembic/runtime/migration.py", line 520, in run_migrations
    step.migration_fn(**kw)
  File "/usr/local/lib/python3.8/site-packages/dispatch/alembic/versions/d0501fc6be89_.py", line 21, in upgrade
    op.add_column("application", sa.Column("created_at", sa.DateTime(), nullable=True))
  File "<string>", line 8, in add_column
  File "<string>", line 3, in add_column
  File "/usr/local/lib/python3.8/site-packages/alembic/operations/ops.py", line 1929, in add_column
    return operations.invoke(op)
  File "/usr/local/lib/python3.8/site-packages/alembic/operations/base.py", line 374, in invoke
    return fn(self, operation)
  File "/usr/local/lib/python3.8/site-packages/alembic/operations/toimpl.py", line 132, in add_column
    operations.impl.add_column(table_name, column, schema=schema, **kw)
  File "/usr/local/lib/python3.8/site-packages/alembic/ddl/impl.py", line 237, in add_column
    self._exec(base.AddColumn(table_name, column, schema=schema))
  File "/usr/local/lib/python3.8/site-packages/alembic/ddl/impl.py", line 140, in _exec
    return conn.execute(construct, *multiparams, **params)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 984, in execute
    return meth(self, multiparams, params)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/ddl.py", line 72, in _execute_on_connection
    return connection._execute_ddl(self, multiparams, params)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1041, in _execute_ddl
    ret = self._execute_context(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1287, in _execute_context
    self._handle_dbapi_exception(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1481, in _handle_dbapi_exception
    util.raise_(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 178, in raise_
    raise exception
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1247, in _execute_context
    self.dialect.do_execute(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 588, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "application" does not exist

[SQL: ALTER TABLE application ADD COLUMN created_at TIMESTAMP WITHOUT TIME ZONE]
(Background on this error at: http://sqlalche.me/e/f405)
Cleaning up...

web container set to listen on 127.0.0.1 - not able to access web UI externally

Hi, not sure if you can double-check the dispatch web image used in your install script; it doesn't appear to be using an image that starts the Dispatch web UI in the container on 0.0.0.0.

The web app container is listening on localhost only rather than 0.0.0.0, so no external traffic is passed through via a Docker port forward, and it can't be reached from other Docker containers either. I did a lot of network troubleshooting before coming to this conclusion, and I managed to get it working in the end.

The Dockerfile in your dispatch repo : https://github.com/Netflix/dispatch/blob/develop/docker/Dockerfile

States...

ENTRYPOINT ["dispatch"]
CMD ["server", "start", "dispatch.main:app", "--host=0.0.0.0"]

So I built the docker image locally using this Dockerfile, then ran:
docker run --detach --env-file ../dispatch-docker/.env dispatch-web

I can now access it externally. So it's an issue in the Docker Compose setup: either it's ignoring --host=0.0.0.0 when starting dispatch, or Docker Compose is building from a Docker image that isn't based on the Dockerfile containing the host config?

Encountering multiple issues with installation

Hello,

Thanks for Dispatch, seems really cool. I can't wait to try it out for my company.
I am, however, encountering a bunch of different issues that prevent me from installing it on my machine, which I will list a bit bluntly.

  • "To get started with all the defaults, simply clone the repo and run ./docker/install.sh in your local check-out. => the file is not located in /docker." I guess it is refering to ./install.sh?
  • There is no Dockerfile
  • I am using Docker version 18.09.2, build 6247962, and docker build must be called with an argument ("docker build ." would work).
  • From looking at the installation script, I think a dispatch folder is missing?
  • SECRET_KEY=$(head /dev/urandom | tr -dc "a-z0-9@#%^&*(-_=+)" | head -c 50 | sed -e 's/[\/&]/\\&/g') is not working on my mac, I guess it was made for linux? (tr: Illegal byte sequence)

Again thanks a lot for the work, I might be able to fork it and work on the install script a little at the end of week.

Cheers

Dispatch Vue problem

Has anybody come to the part where you are testing the Dispatch UI?

I am using Dispatch from Docker after building this repo with some modifications of the env settings:

JWKS_URL="127.0.0.1"
DISPATCH_DOMAIN=""

and with Google credentials configured.

The Chrome console output the following, and no login form appears on the screen:

vue-router.esm.js:2079 TypeError: Failed to construct 'URL': Invalid URL
    at e.xhr (xhr.js:74)
    at Function.t.fetchFromIssuer (authorization_service_configuration.js:53)
    at At (index.js:57)
    at index.js:98
    at p (vue-router.esm.js:2120)
    at r (vue-router.esm.js:1846)
    at Rt (vue-router.esm.js:1854)
    at e.zt.confirmTransition (vue-router.esm.js:2147)
    at e.zt.transitionTo (vue-router.esm.js:2034)
    at he.init (vue-router.esm.js:2734)
c @ vue-router.esm.js:2079
registerServiceWorker.js:8 App is being served from cache by a service worker.
For more details, visit https://goo.gl/AFskqB
registerServiceWorker.js:13 Service worker has been registered.
127.0.0.1/:1 Error while trying to use the following icon from the Manifest: http://127.0.0.1:8000/img/icons/android-chrome-192x192.png (Download error or resource isn't a valid image)
registerServiceWorker.js:19 New content is downloading.
registerServiceWorker.js:22 New content is available; please refresh.
DevTools failed to parse SourceMap: chrome-extension://hdokiejnpimakedhajhdlcegeplioahd/sourcemaps/onloadwff.js.map

Dispatch logs from the container are the following:

INFO:     URL being requested: GET https://www.googleapis.com/discovery/v1/apis/docs/v1/rest?key=xxx
INFO:     URL being requested: GET https://www.googleapis.com/discovery/v1/apis/drive/v3/rest?key=xxx
INFO:     URL being requested: GET https://www.googleapis.com/discovery/v1/apis/drive/v3/rest?key=xxx
INFO:     URL being requested: GET https://www.googleapis.com/discovery/v1/apis/gmail/v1/rest?key=xxx
INFO:     URL being requested: GET https://www.googleapis.com/discovery/v1/apis/admin/directory_v1/rest?key=xxx
INFO:     Started server process [1]                                              
INFO:     Waiting for application startup.                                        
INFO:     Application startup complete.                                           
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)         
INFO:     127.0.0.1:39326 - "GET /static/js/chunk-vendors.93945a9b.js.map HTTP/1.1" 200 OK 
INFO:     127.0.0.1:39328 - "GET /static/js/app.450c65f4.js.map HTTP/1.1" 200 OK 
INFO:     127.0.0.1:39832 - "GET /service-worker.js HTTP/1.1" 200 OK              
INFO:     127.0.0.1:39834 - "GET /manifest.json HTTP/1.1" 200 OK                  
INFO:     127.0.0.1:39832 - "GET /img/icons/android-chrome-192x192.png HTTP/1.1" 200 OK 
INFO:     127.0.0.1:39832 - "GET /service-worker.js HTTP/1.1" 304 Not Modified  
INFO:     127.0.0.1:39832 - "GET /precache-manifest.db85124ecb32bdaa183a4d4a3dd89a1b.js HTTP/1.1" 200 OK 
INFO:     127.0.0.1:39832 - "GET /static/js/chunk-vendors.93945a9b.js.map HTTP/1.1" 304 Not Modified
INFO:     127.0.0.1:39844 - "GET /static/js/app.450c65f4.js.map HTTP/1.1" 304 Not Modified     

Any ideas?

macOS error: tr: Illegal byte sequence

When I run the command to generate the secret key from ./install.sh on macOS, I get this error:

head /dev/urandom | env LC_CTYPE=C tr -dc "a-z0-9@#%^&*(-_=+)" | head -c 50 | sed -e 's/[\/&]/\\&/g'
tr: Illegal byte sequence
Nf
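
A workaround that is commonly suggested for BSD tr on macOS (not an official fix for install.sh; just a sketch) is to force the C locale for the whole pipeline with LC_ALL instead of only LC_CTYPE:

head /dev/urandom | LC_ALL=C tr -dc "a-z0-9@#%^&*(-_=+)" | head -c 50 | sed -e 's/[\/&]/\\&/g'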

Installation fails with a "Can't locate revision identified by" error

Hi,
Installation failed while initializing the database service. Here is the traceback:

Starting dispatch_postgres_1 ... done
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/alembic/script/base.py", line 162, in _catch_revision_errors
    yield
  File "/usr/local/lib/python3.8/site-packages/alembic/script/base.py", line 364, in upgrade_revs
    revs = list(revs)
  File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 765, in iterate_revisions
    requested_lowers = self.get_revisions(lower)
  File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 319, in get_revisions
    return sum([self.get_revisions(id_elem) for id_elem in id_], ())
  File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 319, in <listcomp>
    return sum([self.get_revisions(id_elem) for id_elem in id_], ())
  File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 322, in get_revisions
    return tuple(
  File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 323, in <genexpr>
    self._revision_for_ident(rev_id, branch_label)
  File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 386, in _revision_for_ident
    raise ResolutionError(
alembic.script.revision.ResolutionError: No such revision or branch '895ae7783033'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/bin/dispatch", line 8, in
sys.exit(entrypoint())
File "/usr/local/lib/python3.8/site-packages/dispatch/cli.py", line 845, in entrypoint
dispatch_cli()
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 829, in call
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/dispatch/cli.py", line 622, in upgrade_database
alembic_command.upgrade(alembic_cfg, revision, sql=sql, tag=tag)
File "/usr/local/lib/python3.8/site-packages/alembic/command.py", line 298, in upgrade
script.run_env()
File "/usr/local/lib/python3.8/site-packages/alembic/script/base.py", line 489, in run_env
util.load_python_file(self.dir, "env.py")
File "/usr/local/lib/python3.8/site-packages/alembic/util/pyfiles.py", line 98, in load_python_file
module = load_module_py(module_id, path)
File "/usr/local/lib/python3.8/site-packages/alembic/util/compat.py", line 184, in load_module_py
spec.loader.exec_module(module)
File "", line 783, in exec_module
File "", line 219, in _call_with_frames_removed
File "/usr/local/lib/python3.8/site-packages/dispatch/alembic/env.py", line 84, in
run_migrations_online()
File "/usr/local/lib/python3.8/site-packages/dispatch/alembic/env.py", line 78, in run_migrations_online
context.run_migrations()
File "", line 8, in run_migrations
File "/usr/local/lib/python3.8/site-packages/alembic/runtime/environment.py", line 846, in run_migrations
self.get_context().run_migrations(**kw)
File "/usr/local/lib/python3.8/site-packages/alembic/runtime/migration.py", line 509, in run_migrations
for step in self._migrations_fn(heads, self):
File "/usr/local/lib/python3.8/site-packages/alembic/command.py", line 287, in upgrade
return script._upgrade_revs(revision, rev)
File "/usr/local/lib/python3.8/site-packages/alembic/script/base.py", line 365, in _upgrade_revs
return [
File "/usr/local/lib/python3.8/contextlib.py", line 131, in exit
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.8/site-packages/alembic/script/base.py", line 194, in _catch_revision_errors
compat.raise_from_cause(util.CommandError(resolution))
File "/usr/local/lib/python3.8/site-packages/alembic/util/compat.py", line 308, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=exc_value)
File "/usr/local/lib/python3.8/site-packages/alembic/util/compat.py", line 301, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.8/site-packages/alembic/script/base.py", line 162, in _catch_revision_errors
yield
File "/usr/local/lib/python3.8/site-packages/alembic/script/base.py", line 364, in upgrade_revs
revs = list(revs)
File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 765, in iterate_revisions
requested_lowers = self.get_revisions(lower)
File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 319, in get_revisions
return sum([self.get_revisions(id_elem) for id_elem in id
], ())
File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 319, in
return sum([self.get_revisions(id_elem) for id_elem in id
], ())
File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 322, in get_revisions
return tuple(
File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 323, in
self._revision_for_ident(rev_id, branch_label)
File "/usr/local/lib/python3.8/site-packages/alembic/script/revision.py", line 386, in _revision_for_ident
raise ResolutionError(
alembic.util.exc.CommandError: Can't locate revision identified by '895ae7783033'
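
If it helps anyone hitting the same error: this usually means the alembic_version table inside the existing dispatch-postgres volume points at a migration that the installed Dispatch image does not ship (for example, data left over from a different version). A sketch for checking and, on a throwaway install, starting clean (assumes the default dispatch/dispatch credentials and the dispatch-postgres volume created by install.sh):

# What revision does the database think it is on?
docker-compose exec postgres psql -U dispatch -d dispatch -c "SELECT * FROM alembic_version;"

# If nothing in the volume is worth keeping, recreate it and run the installer again
docker-compose down
docker volume rm dispatch-postgres
docker volume create dispatch-postgres
./install.sh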

Errors in dispatch_web_1 container

I did the following as recommended:

./install.sh
docker-compose up -d
# then went to UI and registered my email ID and password and logged in

I got these logs from the dispatch_web_1 container:
OS: macOS Catalina 10.15.7
Docker version: 19.03.13

Error 1:

[SQL: SELECT workflow_instance.resource_type AS workflow_instance_resource_type, workflow_instance.resource_id AS workflow_instance_resource_id, workflow_instance.weblink AS workflow_instance_weblink, workflow_instance.id AS workflow_instance_id, workflow_instance.workflow_id AS workflow_instance_workflow_id, workflow_instance.incident_id AS workflow_instance_incident_id, workflow_instance.parameters AS workflow_instance_parameters, workflow_instance.run_reason AS workflow_instance_run_reason, workflow_instance.creator_id AS workflow_instance_creator_id, workflow_instance.status AS workflow_instance_status, workflow_instance.created_at AS workflow_instance_created_at, workflow_instance.updated_at AS workflow_instance_updated_at 
FROM workflow_instance 
WHERE %(param_1)s = workflow_instance.incident_id]
[parameters: {'param_1': 1}]
(Background on this error at: http://sqlalche.me/e/13/f405)
ERROR:uvicorn.error:Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py", line 389, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/usr/local/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
    return await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/applications.py", line 111, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 181, in __call__
    raise exc from None
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 26, in __call__
    await response(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/responses.py", line 228, in __call__
    await run_until_first_complete(
  File "/usr/local/lib/python3.8/site-packages/starlette/concurrency.py", line 18, in run_until_first_complete
    [task.result() for task in done]
  File "/usr/local/lib/python3.8/site-packages/starlette/concurrency.py", line 18, in <listcomp>
    [task.result() for task in done]
  File "/usr/local/lib/python3.8/site-packages/starlette/responses.py", line 220, in stream_response
    async for chunk in self.body_iterator:
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 56, in body_stream
    task.result()
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 38, in coro
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/sentry_asgi/middleware.py", line 22, in __call__
    raise exc from None
  File "/usr/local/lib/python3.8/site-packages/sentry_asgi/middleware.py", line 19, in __call__
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 26, in __call__
    await response(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/responses.py", line 228, in __call__
    await run_until_first_complete(
  File "/usr/local/lib/python3.8/site-packages/starlette/concurrency.py", line 18, in run_until_first_complete
    [task.result() for task in done]
  File "/usr/local/lib/python3.8/site-packages/starlette/concurrency.py", line 18, in <listcomp>
    [task.result() for task in done]
  File "/usr/local/lib/python3.8/site-packages/starlette/responses.py", line 220, in stream_response
    async for chunk in self.body_iterator:
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 56, in body_stream
    task.result()
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 38, in coro
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 26, in __call__
    await response(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/responses.py", line 228, in __call__
    await run_until_first_complete(
  File "/usr/local/lib/python3.8/site-packages/starlette/concurrency.py", line 18, in run_until_first_complete
    [task.result() for task in done]
  File "/usr/local/lib/python3.8/site-packages/starlette/concurrency.py", line 18, in <listcomp>
    [task.result() for task in done]
  File "/usr/local/lib/python3.8/site-packages/starlette/responses.py", line 220, in stream_response
    async for chunk in self.body_iterator:
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 56, in body_stream
    task.result()
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 38, in coro
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
    raise exc from None
  File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 71, in __call__
    await self.app(scope, receive, sender)
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 566, in __call__
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 376, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/fastapi/applications.py", line 179, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/applications.py", line 111, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 181, in __call__
    raise exc from None
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
    raise exc from None
  File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 71, in __call__
    await self.app(scope, receive, sender)
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 566, in __call__
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 227, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 41, in app
    response = await func(request)
  File "/usr/local/lib/python3.8/site-packages/fastapi/routing.py", line 190, in app
    response_data = await serialize_response(
  File "/usr/local/lib/python3.8/site-packages/fastapi/routing.py", line 103, in serialize_response
    value, errors_ = await run_in_threadpool(
  File "/usr/local/lib/python3.8/site-packages/starlette/concurrency.py", line 34, in run_in_threadpool
    return await loop.run_in_executor(None, func, *args)
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "pydantic/fields.py", line 579, in pydantic.fields.ModelField.validate
  File "pydantic/fields.py", line 738, in pydantic.fields.ModelField._validate_singleton
  File "pydantic/fields.py", line 745, in pydantic.fields.ModelField._apply_validators
  File "pydantic/class_validators.py", line 310, in pydantic.class_validators._generic_validator_basic.lambda12
  File "pydantic/main.py", line 658, in pydantic.main.BaseModel.validate
  File "pydantic/main.py", line 360, in pydantic.main.BaseModel.__init__
  File "pydantic/main.py", line 981, in pydantic.main.validate_model
  File "pydantic/fields.py", line 590, in pydantic.fields.ModelField.validate
  File "pydantic/fields.py", line 621, in pydantic.fields.ModelField._validate_sequence_like
  File "pydantic/fields.py", line 731, in pydantic.fields.ModelField._validate_singleton
  File "pydantic/fields.py", line 579, in pydantic.fields.ModelField.validate
  File "pydantic/fields.py", line 738, in pydantic.fields.ModelField._validate_singleton
  File "pydantic/fields.py", line 745, in pydantic.fields.ModelField._apply_validators
  File "pydantic/class_validators.py", line 310, in pydantic.class_validators._generic_validator_basic.lambda12
  File "pydantic/main.py", line 662, in pydantic.main.BaseModel.validate
  File "pydantic/main.py", line 566, in pydantic.main.BaseModel.from_orm
  File "pydantic/main.py", line 960, in pydantic.main.validate_model
  File "pydantic/utils.py", line 412, in pydantic.utils.GetterDict.get
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/attributes.py", line 287, in __get__
    return self.impl.get(instance_state(instance), dict_)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/attributes.py", line 723, in get
    value = self.callable_(state, passive)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/strategies.py", line 759, in _load_for_state
    return self._emit_lazyload(
  File "<string>", line 1, in <lambda>
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/strategies.py", line 900, in _emit_lazyload
    q(session)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/ext/baked.py", line 544, in all
    return list(self)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/ext/baked.py", line 444, in __iter__
    return q._execute_and_instances(context)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3560, in _execute_and_instances
    result = conn.execute(querycontext.statement, self._params)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1011, in execute
    return meth(self, multiparams, params)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1124, in _execute_clauseelement
    ret = self._execute_context(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1316, in _execute_context
    self._handle_dbapi_exception(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1510, in _handle_dbapi_exception
    util.raise_(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
    raise exception
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context
    self.dialect.do_execute(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 593, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "workflow_instance" does not exist
LINE 2: FROM workflow_instance 

Error 2:

[SQL: SELECT workflow_instance.resource_type AS workflow_instance_resource_type, workflow_instance.resource_id AS workflow_instance_resource_id, workflow_instance.weblink AS workflow_instance_weblink, workflow_instance.id AS workflow_instance_id, workflow_instance.workflow_id AS workflow_instance_workflow_id, workflow_instance.incident_id AS workflow_instance_incident_id, workflow_instance.parameters AS workflow_instance_parameters, workflow_instance.run_reason AS workflow_instance_run_reason, workflow_instance.creator_id AS workflow_instance_creator_id, workflow_instance.status AS workflow_instance_status, workflow_instance.created_at AS workflow_instance_created_at, workflow_instance.updated_at AS workflow_instance_updated_at 
FROM workflow_instance 
WHERE %(param_1)s = workflow_instance.incident_id]
[parameters: {'param_1': 1}]
(Background on this error at: http://sqlalche.me/e/13/f405)
INFO:     192.168.32.1:52150 - "GET /api/v1/tags/ HTTP/1.1" 500 Internal Server Error
INFO:     192.168.32.1:52156 - "GET /api/v1/incident_priorities/?sortBy[]=view_order&descending[]=false HTTP/1.1" 200 OK
INFO:     192.168.32.1:52152 - "GET /api/v1/documents/?field[]=resource_type&op[]=%3D%3D&value[]=dispatch-incident-faq HTTP/1.1" 200 OK
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py", line 389, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/usr/local/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
    return await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/applications.py", line 111, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 181, in __call__
    raise exc from None
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 26, in __call__
    await response(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/responses.py", line 228, in __call__
    await run_until_first_complete(
  File "/usr/local/lib/python3.8/site-packages/starlette/concurrency.py", line 18, in run_until_first_complete
    [task.result() for task in done]
  File "/usr/local/lib/python3.8/site-packages/starlette/concurrency.py", line 18, in <listcomp>
    [task.result() for task in done]
  File "/usr/local/lib/python3.8/site-packages/starlette/responses.py", line 220, in stream_response
    async for chunk in self.body_iterator:
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 56, in body_stream
    task.result()
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 38, in coro
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/sentry_asgi/middleware.py", line 22, in __call__
    raise exc from None
  File "/usr/local/lib/python3.8/site-packages/sentry_asgi/middleware.py", line 19, in __call__
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 26, in __call__
    await response(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/responses.py", line 228, in __call__
    await run_until_first_complete(
  File "/usr/local/lib/python3.8/site-packages/starlette/concurrency.py", line 18, in run_until_first_complete
    [task.result() for task in done]
  File "/usr/local/lib/python3.8/site-packages/starlette/concurrency.py", line 18, in <listcomp>
    [task.result() for task in done]
  File "/usr/local/lib/python3.8/site-packages/starlette/responses.py", line 220, in stream_response
    async for chunk in self.body_iterator:
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 56, in body_stream
    task.result()
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 38, in coro
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 26, in __call__
    await response(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/responses.py", line 228, in __call__
    await run_until_first_complete(
  File "/usr/local/lib/python3.8/site-packages/starlette/concurrency.py", line 18, in run_until_first_complete
    [task.result() for task in done]
  File "/usr/local/lib/python3.8/site-packages/starlette/concurrency.py", line 18, in <listcomp>
    [task.result() for task in done]
  File "/usr/local/lib/python3.8/site-packages/starlette/responses.py", line 220, in stream_response
    async for chunk in self.body_iterator:
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 56, in body_stream
    task.result()
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 38, in coro
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
    raise exc from None
  File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 71, in __call__
    await self.app(scope, receive, sender)
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 566, in __call__
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 376, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/fastapi/applications.py", line 179, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/applications.py", line 111, in __call__
    await self.middleware_stack(scope, receive, send)
INFO:     192.168.32.1:52154 - "GET /api/v1/plugins/?itemsPerPage=-1&field[]=enabled&op[]=%3D%3D&value[]=true HTTP/1.1" 200 OK
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 181, in __call__
    raise exc from None
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
    raise exc from None
  File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 71, in __call__
    await self.app(scope, receive, sender)
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 566, in __call__
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 227, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 41, in app
    response = await func(request)
  File "/usr/local/lib/python3.8/site-packages/fastapi/routing.py", line 182, in app
    raw_response = await run_endpoint_function(
  File "/usr/local/lib/python3.8/site-packages/fastapi/routing.py", line 135, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "/usr/local/lib/python3.8/site-packages/starlette/concurrency.py", line 34, in run_in_threadpool
    return await loop.run_in_executor(None, func, *args)
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.8/site-packages/dispatch/tag/views.py", line 34, in get_tags
    return search_filter_sort_paginate(
  File "/usr/local/lib/python3.8/site-packages/dispatch/database.py", line 211, in search_filter_sort_paginate
    query, pagination = apply_pagination(query, page_number=page, page_size=items_per_page)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy_filters/pagination.py", line 42, in apply_pagination
    total_results = query.count()
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3803, in count
    return self.from_self(col).scalar()
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3523, in scalar
    ret = self.one()
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3490, in one
    ret = self.one_or_none()
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3459, in one_or_none
    ret = list(self)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3535, in __iter__
    return self._execute_and_instances(context)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3560, in _execute_and_instances
    result = conn.execute(querycontext.statement, self._params)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1011, in execute
    return meth(self, multiparams, params)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1124, in _execute_clauseelement
    ret = self._execute_context(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1316, in _execute_context
    self._handle_dbapi_exception(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1510, in _handle_dbapi_exception
    util.raise_(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
    raise exception
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context
    self.dialect.do_execute(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 593, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedColumn) column tag.tag_type_id does not exist
LINE 2: ...tag_source, tag.discoverable AS tag_discoverable, tag.tag_ty...

I was able to install and then register a new user, but after that I get all sorts of errors just loading the home page. The web UI says something went wrong and that I should contact the administrator.

I also tried creating an incident and got errors saying that fields were missing, although they weren't marked as required in the UI. I didn't set up all the plugins, so I guess that is the cause, although it would be great if it failed gracefully.
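
These UndefinedTable/UndefinedColumn errors look like the database schema is behind the code, i.e. the migrations never ran or only partially ran. A sketch of applying them by hand, assuming the service names from the stock docker-compose.yml (web, postgres):

# Run the pending migrations with the same image the web container uses
docker-compose run --rm web database upgrade --noinput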

./install.sh fails at gosu

ERROR: Service 'web' failed to build: The command '/bin/sh -c set -x     && wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture)"     && wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture).asc"     && gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu     && rm -r /usr/local/bin/gosu.asc     && chmod +x /usr/local/bin/gosu' returned a non-zero code: 2
Cleaning up...
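
A workaround mentioned elsewhere on this page is to build against the develop branch of Netflix/dispatch until the gosu/GPG fix reaches master. A sketch (sed -i.bak is used so it works with both GNU and BSD sed; service names are taken from the stock compose file):

# Repoint both build contexts at develop, then rebuild without cache
sed -i.bak 's|dispatch.git#master|dispatch.git#develop|g' docker-compose.yml
docker-compose build --no-cache web dispatch-scheduler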

install.sh: sed modification broke the installation

Commit:
68fbbb5

Generating secret key... sed: can't read s/^SECRET_KEY=.*/SECRET_KEY="fXq\/9B3DuW?K1Q%:.Et+In#Q9khj:(\\u__1+ZtGehC#TX)vExz"/: No such file or directory Cleaning up...

With:
sed -i 's/^SECRET_KEY=.*/SECRET_KEY=\"$SECRET_KEY\"/' $DISPATCH_CONFIG_ENV

It works fine
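
One note on that fix: with single quotes the $SECRET_KEY inside the expression is not expanded by the shell, so double quotes are needed if the generated key should actually end up in the file. A sketch that also stays portable across GNU and BSD sed (assumes SECRET_KEY has already been generated and had / and & escaped by the earlier pipeline step):

# Double quotes let $SECRET_KEY expand; -i.bak keeps BSD sed happy as well
sed -i.bak "s/^SECRET_KEY=.*/SECRET_KEY=\"${SECRET_KEY}\"/" "$DISPATCH_CONFIG_ENV"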

dispatch-local docker image

I'm a little new to Docker and I'm curious how the dispatch-local docker image is built. Is it hosted on Docker Hub, or is it generated dynamically by the docker-compose file?

Is there an isolated list of instructions to build the dispatch-local image to deploy on ECS?

Unable to run the demo on Windows

I am trying to run the demo on my Windows 10 machine, but I ran into some errors. Can anybody give me some ideas about fixing this?

λ docker-compose up
Creating network "dispatch_default" with the default driver
Creating dispatch_postgres_1 ... done
Creating dispatch_web_1                ... done
Creating dispatch_dispatch-scheduler_1 ... done
Attaching to dispatch_postgres_1, dispatch_dispatch-scheduler_1, dispatch_web_1
postgres_1            | Error: Database is uninitialized and superuser password is not specified.
postgres_1            |        You must specify POSTGRES_PASSWORD to a non-empty value for the
postgres_1            |        superuser. For example, "-e POSTGRES_PASSWORD=password" on "docker run".
postgres_1            |
postgres_1            |        You may also use "POSTGRES_HOST_AUTH_METHOD=trust" to allow all
postgres_1            |        connections without a password. This is *not* recommended.
postgres_1            |
postgres_1            |        See PostgreSQL documentation about "trust":
postgres_1            |        https://www.postgresql.org/docs/current/auth-trust.html
web_1                 | Traceback (most recent call last):
web_1                 |   File "/usr/local/bin/dispatch", line 5, in <module>
web_1                 |     from dispatch.cli import entrypoint
web_1                 |   File "/usr/local/lib/python3.8/site-packages/dispatch/cli.py", line 12, in <module>
web_1                 |     from dispatch import __version__, config
web_1                 |   File "/usr/local/lib/python3.8/site-packages/dispatch/config.py", line 43, in <module>
web_1                 |     DISPATCH_HELP_EMAIL = config("DISPATCH_HELP_EMAIL")
web_1                 |   File "/usr/local/lib/python3.8/site-packages/starlette/config.py", line 62, in __call__
web_1                 |     return self.get(key, cast, default)
web_1                 |   File "/usr/local/lib/python3.8/site-packages/starlette/config.py", line 75, in get
web_1                 |     raise KeyError(f"Config '{key}' is missing, and has no default.")
web_1                 | KeyError: "Config 'DISPATCH_HELP_EMAIL' is missing, and has no default."
dispatch-scheduler_1  | Traceback (most recent call last):
dispatch-scheduler_1  |   File "/usr/local/bin/dispatch", line 5, in <module>
dispatch-scheduler_1  |     from dispatch.cli import entrypoint
dispatch-scheduler_1  |   File "/usr/local/lib/python3.8/site-packages/dispatch/cli.py", line 12, in <module>
dispatch-scheduler_1  |     from dispatch import __version__, config
dispatch-scheduler_1  |   File "/usr/local/lib/python3.8/site-packages/dispatch/config.py", line 43, in <module>
dispatch-scheduler_1  |     DISPATCH_HELP_EMAIL = config("DISPATCH_HELP_EMAIL")
dispatch-scheduler_1  |   File "/usr/local/lib/python3.8/site-packages/starlette/config.py", line 62, in __call__
dispatch-scheduler_1  |     return self.get(key, cast, default)
dispatch-scheduler_1  |   File "/usr/local/lib/python3.8/site-packages/starlette/config.py", line 75, in get
dispatch-scheduler_1  |     raise KeyError(f"Config '{key}' is missing, and has no default.")
dispatch-scheduler_1  | KeyError: "Config 'DISPATCH_HELP_EMAIL' is missing, and has no default."
dispatch_postgres_1 exited with code 1

I have already added the two attributes DISPATCH_HELP_EMAIL and DATABASE_CREDENTIALS. Here is my .env file

COMPOSE_PROJECT_NAME=dispatch
DISPATCH_HELP_EMAIL="[email protected]"
DATABASE_CREDENTIALS="admin:admin"

Is there anything else I need to do to run the minimal version of the demo?
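
The KeyError means the variable never reached the container environment even though it is in the .env file, and the postgres error shows POSTGRES_PASSWORD did not reach that container either. A sketch of quick checks, hedged because the root cause on Windows can vary (the defaults below come from the .env.example shown further down this page):

# Confirm docker-compose is being run from the directory that actually contains the .env file
ls .env docker-compose.yml
# Confirm the variable really makes it into the web container (use grep instead of findstr outside Windows)
docker-compose run --rm --entrypoint env web | findstr DISPATCH_HELP_EMAIL
# The postgres container separately needs these set, consistent with DATABASE_CREDENTIALS
# POSTGRES_USER=dispatch
# POSTGRES_PASSWORD=dispatch
# POSTGRES_DB=dispatch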

Run without Configuring?

I would like to run Dispatch "as is" so I can get a sense of what Dispatch is like when it's running. I'd like to not do any configuration of my G Suite, Slack, etc. When I copy .env.example to .env and run install.sh, I get the following error. Is it possible to run Dispatch as a demo without configuring it?

Checking minimum requirements...

./.env already exists, skipped creation.
./requirements.txt already exists, skipped creation.
Removing network dispatch_default
WARNING: Network dispatch_default not found.

Creating volumes for persistent storage...
Created dispatch-postgres.

Generating secret key...
sed: 1: "./.env": invalid command code .
Cleaning up...
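
That sed error is BSD/macOS sed complaining: it requires an argument (possibly empty) to -i, so without one it consumes the expression as the backup suffix and then tries to parse "./.env" as the sed script, hence "invalid command code .". A sketch of invocations that work on macOS (REPLACE_ME is just a placeholder for the generated key):

# Empty suffix: edit in place with no backup (BSD sed syntax)
sed -i '' -e 's/^SECRET_KEY=.*/SECRET_KEY="REPLACE_ME"/' ./.env
# Non-empty suffix: accepted by both GNU and BSD sed
sed -i.bak -e 's/^SECRET_KEY=.*/SECRET_KEY="REPLACE_ME"/' ./.env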

Noise From Shared Drive From Google Groups Creation

Hello, I'm working on setting up Dispatch for my work environment. However, I am running into a problem where Dispatch is creating multiple shared drives per incident. Unfortunately, the drives remain even after closing an incident. Is there any way to move incident creation to a central shared drive to reduce noise? We use shared drives within our environment fairly regularly, so having them taken up with old incidents would become problematic rather quickly.

Also do you have anything built into the app for cleaning up or removing previously created shared drives?

Image for reference.


[SQL: ALTER TABLE application ADD COLUMN created_at TIMESTAMP WITHOUT TIME ZONE]

Everything is set up in the .env file, but at the end of the installation I'm getting this error:

File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 588, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.UndefinedTable: relation "application" does not exist


Docker images built.
Creating network "dispatch_default" with the default driver
Creating dispatch_postgres_1 ... done
Setting up database...
Starting dispatch_postgres_1 ... done Usage: dispatch [OPTIONS] COMMAND [ARGS]...

Command-line interface to Dispatch.

Options:
--version Show the version and exit.
--help Show this message and exit.

Commands:
contact All commands for contact manipulation.
database Container for all dispatch database commands.
incident All commands for incident manipulation.
plugins All commands for plugin manipulation.
scheduler Container for all dispatch scheduler commands.
server Container for all dispatch server commands.
term All commands for term manipulation.
Starting dispatch_postgres_1 ... done INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> e75e103693f2, Initial migration
INFO [alembic.runtime.migration] Running upgrade e75e103693f2 -> d0501fc6be89, empty message
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1247, in _execute_context
self.dialect.do_execute(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 588, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.UndefinedTable: relation "application" does not exist
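
psycopg2.errors.UndefinedTable during the d0501fc6be89 migration suggests the database is not in the state the migration chain expects, for example a half-initialized database left over from a previous failed run. A sketch for inspecting and, if nothing needs to be kept, resetting just the database (assumes the dispatch/dispatch defaults from .env.example):

# See which tables actually exist after the failed run
docker-compose exec postgres psql -U dispatch -d dispatch -c '\dt'
# Drop and recreate the database so install.sh can migrate from scratch
docker-compose exec postgres dropdb -U dispatch dispatch
docker-compose exec postgres createdb -U dispatch dispatch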

Incident type and Incident Priority not displayed in UI

I ran the install.sh file without making any changes to the .env file. After running it, I can see 3 containers running, and the Dispatch UI page also loads fine. However, when I try to select "Type and Priority" I see "No Data Available". Please help me understand why this happens. Do I need to configure something in .env to see those values?

Installation script exits with `createdb: error: database "dispatch" already exists`

Describe the bug
I ran install.sh after adjusting the .env.example file with some account keys. The script stopped with the following output:

Setting up database...
Do you want to load example data (y/N)?y
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  121k  100  121k    0     0   413k      0 --:--:-- --:--:-- --:--:--  412k
createdb: error: database creation failed: ERROR:  database "dispatch" already exists
Cleaning up...

To Reproduce
Steps to reproduce the behavior:

  1. Clone the repository
  2. Configure .env.example (I used a google test account, removed the rest)
  3. Run ./install.sh
  4. When asked to load example data, enter y
  5. Script fails with createdb: error

Expected behavior
Script continues installation process

Screenshots

  • cat .env.example
# For configuration details, see: https://hawkins.gitbook.io/dispatch/configuration/app

# General
COMPOSE_PROJECT_NAME=dispatch
SECRET_KEY=
DISPATCH_JWT_SECRET=

# Database
# NOTE: Ensure that DATABASE_CREDENTIALS match the values passed to POSTGRES_USER and POSTGRES_PASSWORD
DATABASE_CREDENTIALS=dispatch:dispatch
DATABASE_HOSTNAME=postgres
DATABASE_NAME=dispatch
DATABASE_PORT=5432

# Used by postgres docker
POSTGRES_DB=dispatch
POSTGRES_PASSWORD=dispatch
POSTGRES_USER=dispatch

# Cost
ANNUAL_COST_EMPLOYEE=50000
BUSINESS_HOURS_YEAR=1840

# Incident configuration
INCIDENT_CONVERSATION_COMMANDS_REFERENCE_DOCUMENT_ID=INCIDENT_CONVERSATION_COMMANDS_REFERENCE_DOCUMENT_ID
INCIDENT_ONCALL_SERVICE_ID=None
INCIDENT_DOCUMENT_INVESTIGATION_SHEET_ID=INCIDENT_DOCUMENT_INVESTIGATION_SHEET_ID
INCIDENT_FAQ_DOCUMENT_ID=INCIDENT_FAQ_DOCUMENT_ID
INCIDENT_NOTIFICATION_CONVERSATIONS=incidents
[email protected]
INCIDENT_RESOURCE_CONVERSATION_COMMANDS_REFERENCE_DOCUMENT=google-docs-conversation-commands-reference-document
INCIDENT_RESOURCE_FAQ_DOCUMENT=google-docs-faq-document
INCIDENT_RESOURCE_INCIDENT_REVIEW_DOCUMENT=google-docs-incident-review-document
INCIDENT_RESOURCE_INCIDENT_TASK=google-docs-incident-task
INCIDENT_RESOURCE_INVESTIGATION_DOCUMENT=google-docs-investigation-document
INCIDENT_RESOURCE_INVESTIGATION_SHEET=google-docs-investigation-sheet
INCIDENT_RESOURCE_NOTIFICATIONS_GROUP=google-group-participant-notifications-group
INCIDENT_RESOURCE_TACTICAL_GROUP=google-group-participant-tactical-group
INCIDENT_STORAGE_FOLDER_ID=INCIDENT_STORAGE_FOLDER_ID

# Google
GOOGLE_DEVELOPER_KEY=
GOOGLE_DOMAIN=localhost
GOOGLE_SERVICE_ACCOUNT_CLIENT_EMAIL=
GOOGLE_SERVICE_ACCOUNT_CLIENT_ID=
GOOGLE_SERVICE_ACCOUNT_DELEGATED_ACCOUNT=
GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY=
GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY_ID=
GOOGLE_SERVICE_ACCOUNT_PROJECT_ID=
  • Terminal log output
Successfully built 5798468a9d81
Successfully tagged dispatch-local:latest

Docker images pulled and built.
Creating network "dispatch_default" with the default driver
Creating dispatch_postgres_1 ... done

Setting up database...
Do you want to load example data (y/N)?y
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  121k  100  121k    0     0   413k      0 --:--:-- --:--:-- --:--:--  412k
createdb: error: database creation failed: ERROR:  database "dispatch" already exists
Cleaning up...

Desktop (please complete the following information):

  • OS: macOS Catalina 10.15.7
  • Docker: Docker 2.5.0.1 for macOS
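
The example-data path in install.sh runs createdb, which fails if a previous run already created the dispatch database in the persistent volume. A sketch of clearing it before re-running the script (assumes the dispatch/dispatch defaults from the .env.example above):

# Drop the database created by the earlier run, then run the installer again
docker-compose exec postgres dropdb -U dispatch dispatch
./install.sh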

ERROR: No such service: dispatch. Unable to complete installation

The installation script fails while setting up the database, after building everything successfully and launching this command:

rasca@catastrofe [~/dispatch-docker]> docker-compose run --rm dispatch database upgrade --noinput
ERROR: No such service: dispatch

I wasn't able to find a relation between this failure and what is described here, even though the failure is the same.

I think a user would expect everything to be in place after launching ./install.sh; instead, it seems that some bits and pieces are missing and need to be set up by hand, and I'm still trying to identify all of them.
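
For anyone landing here: the stock docker-compose.yml in this repo defines the services postgres, web and dispatch-scheduler, so there is no service literally named dispatch. The manual equivalent of the failing step is therefore (sketch, assuming those stock service names):

docker-compose run --rm web database upgrade --noinput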

Warning: Build Currently Broken Due To GPG Issues in Netflix/dispatch on master

Hi, really interesting project, thanks for sharing. Just want to point out that due to issues with GPG keys in Netflix/dispatch (partially?) addressed by a patch last night, the compose build fails on gosu GPG key verification for both the web and scheduler builds. I've temporarily pointed L20 and L35 in my local docker-compose.yaml to develop rather than master in an attempt to bypass this issue, but thought I'd add this issue to help others until the fixes on develop hit master in Netflix/dispatch.

Running the docker on a clean VM fails

Describe the bug
When starting from a clean VM, installing Docker and Docker Compose, and then working through the instructions for the Docker installation, it fails.

It seems that the docker-compose.yml is pointing to unknown images.

To Reproduce
run docker-compose up

Expected behavior
Pull all images and run them. Instead, it fails to download the web and core images.

Desktop (please complete the following information):
ubuntu server 20.04 LTS VM
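
Worth noting that web and dispatch-scheduler use the locally built dispatch-local:latest image rather than anything published to a registry, so a plain docker-compose up on a clean VM has nothing to pull. A sketch of the expected flow:

# Build the dispatch-local image (install.sh does this, among other things), then start the stack
./install.sh
docker-compose up -d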

Postgres password, hostname, and DB settings

Apologies, as this is the only forum I am aware of to ask this question. I do not know of any Dispatch-specific forums; if they exist, please let me know.

I cloned the repo, copied .env.example over to .env, made a few adjustments, and ran install.sh.

I am not sure what to set these values to:
DATABASE_CREDENTIALS=creds
DATABASE_HOSTNAME=postgres
DATABASE_NAME=dispatch
DATABASE_PORT=5432

Where are the default creds set? I've tried just putting a value in for the credentials, but I get the error below. Presumably, the install.sh script is building the postgres container, and I would assume that when it did so, it would set a password. So what is the default password for this? Or is there a postgres.conf file (not found) that I need to modify? What is the right thing to do here?


Docker images built.

Setting up database...
Usage: dispatch [OPTIONS] COMMAND [ARGS]...

Command-line interface to Dispatch.

Options:
--version Show the version and exit.
--help Show this message and exit.

Commands:
contact All commands for contact...
database Container for all dispatch
database...

incident All commands for incident...
plugins All commands for plugin...
scheduler Container for all dispatch...
server Container for all dispatch server...
term All commands for term manipulation.
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2285, in _wrap_pool_connect
return fn()
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 303, in unique_connection
return _ConnectionFairy._checkout(self)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 773, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 492, in checkout
rec = pool._do_get()
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/impl.py", line 238, in _do_get
return self._create_connection()
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 308, in _create_connection
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 437, in init
self.__connect(first_connect_check=True)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 657, in connect
pool.logger.debug("Error on connect(): %s", e)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 68, in exit
compat.raise
(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 178, in raise

raise exception
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 652, in __connect
connection = pool._invoke_creator(self)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/strategies.py", line 114, in connect
return dialect.connect(*cargs, **cparams)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 488, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/usr/local/lib/python3.8/site-packages/psycopg2/init.py", line 126, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: fe_sendauth: no password supplied

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/bin/dispatch", line 8, in
sys.exit(entrypoint())
File "/usr/local/lib/python3.8/site-packages/dispatch/cli.py", line 811, in entrypoint
dispatch_cli()
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 829, in call
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/dispatch/cli.py", line 592, in upgrade_database
alembic_command.upgrade(alembic_cfg, revision, sql=sql, tag=tag)
File "/usr/local/lib/python3.8/site-packages/alembic/command.py", line 298, in upgrade
script.run_env()
File "/usr/local/lib/python3.8/site-packages/alembic/script/base.py", line 489, in run_env
util.load_python_file(self.dir, "env.py")
File "/usr/local/lib/python3.8/site-packages/alembic/util/pyfiles.py", line 98, in load_python_file
module = load_module_py(module_id, path)
File "/usr/local/lib/python3.8/site-packages/alembic/util/compat.py", line 184, in load_module_py
spec.loader.exec_module(module)
File "", line 783, in exec_module
File "", line 219, in _call_with_frames_removed
File "/usr/local/lib/python3.8/site-packages/dispatch/alembic/env.py", line 84, in
run_migrations_online()
File "/usr/local/lib/python3.8/site-packages/dispatch/alembic/env.py", line 74, in run_migrations_online
with connectable.connect() as connection:
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2218, in connect
return self._connection_cls(self, **kwargs)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 103, in init
else engine.raw_connection()
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2317, in raw_connection
return self._wrap_pool_connect(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2288, in _wrap_pool_connect
Connection.handle_dbapi_exception_noconnection(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1554, in handle_dbapi_exception_noconnection
util.raise
(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 178, in raise

raise exception
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2285, in _wrap_pool_connect
return fn()
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 303, in unique_connection
return _ConnectionFairy._checkout(self)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 773, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 492, in checkout
rec = pool._do_get()
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/impl.py", line 238, in _do_get
return self._create_connection()
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 308, in _create_connection
return _ConnectionRecord(self)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 437, in init
self.__connect(first_connect_check=True)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 657, in connect
pool.logger.debug("Error on connect(): %s", e)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 68, in exit
compat.raise
(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 178, in raise

raise exception
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 652, in __connect
connection = pool._invoke_creator(self)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/strategies.py", line 114, in connect
return dialect.connect(*cargs, **cparams)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 488, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/usr/local/lib/python3.8/site-packages/psycopg2/init.py", line 126, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) fe_sendauth: no password supplied

(Background on this error at: http://sqlalche.me/e/e3q8)
Cleaning up...
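
To answer the original question: DATABASE_CREDENTIALS is a user:password pair and must match whatever the postgres container is initialized with via POSTGRES_USER/POSTGRES_PASSWORD, and fe_sendauth: no password supplied means the client side ended up with an empty password. A sketch of consistent values, using the defaults from the .env.example shown elsewhere on this page (if the postgres volume was already initialized with different credentials, it has to be recreated for new values to take effect):

# .env
DATABASE_CREDENTIALS=dispatch:dispatch
DATABASE_HOSTNAME=postgres
DATABASE_NAME=dispatch
DATABASE_PORT=5432
POSTGRES_USER=dispatch
POSTGRES_PASSWORD=dispatch
POSTGRES_DB=dispatch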

docker-compose may use wrong context

Running install.sh using the fix provided in #6, I encountered the COPY failed: stat /var/lib/docker/tmp/docker-builder708155561/.nvmrc: no such file or directory error as in Netflix/dispatch#39.

I got past this by updating the docker-compose file to use the repository root as the context:

    context: https://github.com/Netflix/dispatch.git#master
    dockerfile: docker/Dockerfile

Happy to submit a PR, although want to make sure this isn't specific to my version of Docker (Docker Desktop for Mac 2.2.0.3).

Connection Refused by dispatch container

I can't seem to get dispatch running behind a reverse proxy (using Traefik for the sake of simplicity here)

I modified the docker-compose.yml to include traefik:

version: "3.4"
x-restart-policy: &restart_policy
  restart: unless-stopped
x-dispatch-defaults: &dispatch_defaults
  <<: *restart_policy
  build:
    context: ../dispatch/
    dockerfile: docker/Dockerfile
  image: dispatch-local:latest
  depends_on:
    - postgres
  env_file:
    - .env
services:
  postgres:
    <<: *restart_policy
    image: "postgres:9.6"
    volumes:
      - "dispatch-postgres:/var/lib/postgresql/data"
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
  web:
    <<: *dispatch_defaults
    labels:
          - "traefik.enable=true"
          - "traefik.http.routers.dispatch.rule=Host(`dispatch.local`)"
          - "traefik.http.routers.dispatch.entrypoints=web"
  traefik:
    image: "traefik:v2.1"
    container_name: "traefik"
    command:
      - "--log.level=DEBUG"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
  dispatch-scheduler:
    <<: *dispatch_defaults
volumes:
  dispatch-postgres:
    external: true

This brings up Traefik and sets the routes correctly. However, if I try to connect to it I get a 502 Bad Gateway from Traefik, which is caused by a connection refused error from uvicorn.

Uvicorn's default is to bind to 127.0.0.1, and I couldn't find a setting in the dispatch repo that would bind to 0.0.0.0 instead.

So I patched the run.py, according to the docs:

import uvicorn

if __name__ == "__main__":
    uvicorn.run("dispatch.main:app", host="0.0.0.0")

I then exec'd into the dispatch container and installed netstat to figure out what's going on:

root@f341e4afc781:/# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.11:38149        0.0.0.0:*               LISTEN      -                   
tcp        0      0 127.0.0.1:8000          0.0.0.0:*               LISTEN      1/python            
udp        0      0 127.0.0.11:36548        0.0.0.0:*                           -           

Everything looks good there. However, this still does not work.

Any suggestions? What's refusing the connection here?
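
One way to confirm whether the bind address is really the problem is to test the connection from another container on the compose network rather than from inside the web container itself. A diagnostic sketch (the network name dispatch_default is assumed from the compose logs earlier on this page; check docker network ls if yours differs; curlimages/curl is just a convenient throwaway image, not part of this repo):

# From a sibling container, hit the service by its compose DNS name
docker run --rm --network dispatch_default curlimages/curl -sS -o /dev/null -w "%{http_code}\n" http://web:8000/
# "connection refused" here, while 127.0.0.1:8000 answers inside the container,
# means the server is still listening on loopback only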

Improve installation instructions

Is your feature request related to a problem? Please describe.
I'm relatively new to Docker and this is my first project using this tool. I have had IMMENSE frustration with installing this app and getting it to run smoothly. From what I've seen so far, there are two general steps to follow for installation:

  1. Git clone the repo
  2. Run ./install.sh

Although both of these steps have gotten me somewhere, I'm still having trouble in particular with setting up the database. For example, it's not clear whether ./install.sh is something to run within a Docker container or not. When I run the install.sh script, it creates a database on my local machine, and not in Docker. This is something that took me hours to actually grasp, since I thought install.sh would do all the work for me.

From there I've had to rig up a custom solution to install the database within the docker dispatch_postgres_1 container. Even then I'm still experiencing issues with my custom solution. My custom solution is as follows:

  1. Initialize the database in the dispatch_web_1 container via dispatch database init.
  2. Enter psql in the dispatch_postgres_1 container and copy and paste the contents of the sample data dump file into the psql console (see the sketch after this list).
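
A less error-prone variant of step 2 is to pipe the dump straight into psql instead of pasting it. A sketch: the file name is whatever you saved the sample dump as (dispatch-sample-data.sql is a placeholder here), and it assumes a plain-SQL dump plus the default dispatch/dispatch credentials:

# Load the sample data dump into the running postgres container without copy-pasting
docker exec -i dispatch_postgres_1 psql -U dispatch -d dispatch < dispatch-sample-data.sql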

Even with this custom solution, I'm still getting errors such as:

web_1        | sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedColumn) column tag.tag_type_id does not exist
web_1        | LINE 2: ...tag_source, tag.discoverable AS tag_discoverable, tag.tag_ty...

Also, there is documentation on the dispatch CLI tool. Does this tool coincide with the ./install.sh script? Why does it feel like there is overlap between the two tools, especially when it comes to initializing the database?

I have no idea what's going on at this point and am likely going to give up.

Describe the solution you'd like
It would be nice if there was a basic set of point-by-point instructions on how to install this application. The instructions should be designed for docker beginners.

Describe alternatives you've considered
Alternatively, seeing links to a docker tutorial that explains how to install an application like this would be nice. However, I think that may be difficult.

Another approach is to record a screen of a fresh installation. Something like this may likely be faster than writing out the documentation.

Additional context
I understand that some of this ticket may be coming from my lack of knowledge of Docker, but at the same time I can't help but feel like this should be a simple installation, considering you're using Docker in the first place.

Any help would be appreciated. Thank you.

Build context referencing master branch?

The build context in docker-compose.yml is pointing at the master branch of the dispatch repo, but all of the bug fixes seem to be taking place in the develop branch. I imagine this should probably be changed, at least temporarily, so the bug fixes are reflected when building the Docker images.

context: https://github.com/Netflix/dispatch.git#master

Remove unused DISPATCH_IMAGE build arg from docker-compose.yml

The docker-compose file describes a configurable argument for DISPATCH_IMAGE: https://github.com/Netflix/dispatch-docker/blob/master/docker-compose.yml#L8-L9

It appears this isn't used by the associated image at all.

dispatch-docker <master> $ DISPATCH_IMAGE=dispatch-local docker-compose build web 

...

[Warning] One or more build-args [DISPATCH_IMAGE] were not consumed
Successfully built fa22fc260b63
Successfully tagged dispatch-local:latest

Indeed, the Dockerfile this configuration points to makes no mention of that term.

This appears to be unused configuration and a red herring for a developer trying to troubleshoot issues. It should probably be removed.

Unable to import dispatch.plugins.dispatch_google.docs.plugin.GoogleDocsDocumentPlugin

Hi all,

I've been slowly making progress on getting Dispatch up and running through Docker, but now I'm stumped (and haven't found any related issues or solutions)

Here's the current error:

$ docker-compose up
Recreating dispatch-docker_postgres_1 ... done
Recreating dispatch-docker_dispatch-scheduler_1 ... done
Recreating dispatch-docker_web_1                ... done
Attaching to dispatch-docker_postgres_1, dispatch-docker_web_1, dispatch-docker_dispatch-scheduler_1
postgres_1            |
postgres_1            | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_1            |
postgres_1            | LOG:  database system was shut down at 2020-04-03 21:02:11 UTC
postgres_1            | LOG:  MultiXact member wraparound protections are now enabled
postgres_1            | LOG:  database system is ready to accept connections
postgres_1            | LOG:  autovacuum launcher started
web_1                 | No metric providers specified metrics will not be sent.
dispatch-scheduler_1  | No metric providers specified metrics will not be sent.
web_1                 | ERROR:    Unable to import dispatch.plugins.dispatch_google.docs.plugin.GoogleDocsDocumentPlugin. Reason: Could not deserialize key data.
web_1                 | Traceback (most recent call last):
web_1                 |   File "/usr/local/lib/python3.8/site-packages/dispatch/common/managers.py", line 61, in all
web_1                 |     results.append(cls())
web_1                 |   File "/usr/local/lib/python3.8/site-packages/dispatch/plugins/dispatch_google/docs/plugin.py", line 107, in __init__
web_1                 |     self.client = get_service("docs", "v1", scopes).documents()
web_1                 |   File "/usr/local/lib/python3.8/site-packages/dispatch/plugins/dispatch_google/common.py", line 39, in get_service
web_1                 |     credentials = service_account.Credentials.from_service_account_file(
web_1                 |   File "/usr/local/lib/python3.8/site-packages/google/oauth2/service_account.py", line 217, in from_service_account_file
web_1                 |     info, signer = _service_account_info.from_filename(
web_1                 |   File "/usr/local/lib/python3.8/site-packages/google/auth/_service_account_info.py", line 74, in from_filename
web_1                 |     return data, from_dict(data, require=require)
web_1                 |   File "/usr/local/lib/python3.8/site-packages/google/auth/_service_account_info.py", line 55, in from_dict
web_1                 |     signer = crypt.RSASigner.from_service_account_info(data)
web_1                 |   File "/usr/local/lib/python3.8/site-packages/google/auth/crypt/base.py", line 113, in from_service_account_info
web_1                 |     return cls.from_string(
web_1                 |   File "/usr/local/lib/python3.8/site-packages/google/auth/crypt/_cryptography_rsa.py", line 146, in from_string
web_1                 |     private_key = serialization.load_pem_private_key(
web_1                 |   File "/usr/local/lib/python3.8/site-packages/cryptography/hazmat/primitives/serialization/base.py", line 16, in load_pem_private_key
web_1                 |     return backend.load_pem_private_key(data, password)
web_1                 |   File "/usr/local/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 1085, in load_pem_private_key
web_1                 |     return self._load_key(
web_1                 |   File "/usr/local/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 1315, in _load_key
web_1                 |     self._handle_key_loading_error()
web_1                 |   File "/usr/local/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 1373, in _handle_key_loading_error
web_1                 |     raise ValueError("Could not deserialize key data.")
web_1                 | ValueError: Could not deserialize key data.
... <TRIMMED>

The web service is responsive, as shown by these logs:

web_1                 | INFO:     10.0.2.209:35528 - "GET / HTTP/1.1" 200 OK
web_1                 | INFO:     10.0.2.209:35528 - "GET /static/css/chunk-vendors.c5f0af00.css HTTP/1.1" 200 OK
web_1                 | INFO:     10.0.2.209:35530 - "GET /static/css/app.5f23df13.css HTTP/1.1" 200 OK
web_1                 | INFO:     10.0.2.209:35532 - "GET /static/js/chunk-vendors.afdb7475.js HTTP/1.1" 200 OK
web_1                 | INFO:     10.0.2.209:35536 - "GET /static/js/app.bb6e4494.js HTTP/1.1" 200 OK
web_1                 | INFO:     10.0.2.209:35532 - "GET /img/icons/apple-touch-icon-152x152.png HTTP/1.1" 200 OK
web_1                 | INFO:     10.0.2.209:35536 - "GET /img/icons/favicon-32x32.png HTTP/1.1" 200 OK

... but the UI never loads in the browser.

I entered a document ID (located between https://docs.google.com/document/d/ and edit in a Google Doc URL) from the associated Google Drive account in the INCIDENT_DOCUMENT_INVESTIGATION_SHEET_ID field, to no effect.

I'd greatly appreciate any ideas / pointers!
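
One common cause of "Could not deserialize key data.", assuming the service-account credentials are supplied through the .env file, is that GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY does not contain a complete, well-formed PEM block (for example, the newlines from the downloaded JSON key were dropped, or the value is still a placeholder). A rough sketch of what the value should look like; the key material below is a placeholder, and whether Dispatch expects escaped \n sequences or literal newlines is an assumption worth verifying against the plugin's config handling:

# .env (sketch; not a real key)
GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkqhkiG9w0BAQEFAASC...\n-----END PRIVATE KEY-----\n"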

[master] GPG key error during install.sh

Hi!

Trying to deploy this using the latest master, it seems to fail at the GPG key import step:

Step 10/46 : RUN for key in     B42F6819007F00F88E364FD4036A9C25BF357DD4     595E85A6B1B4779EA4DAAEC70B588DFF0527A9B7     94AE36675C464D64BAFA68DD7434390BDBE9B9C5     FD3A5288F042B6850C66B31F09FE44734EB7990E     71DCFD284A79C3B38668286BC97EC7A07EDE3FC1     DD8F2338BAE7501E3DD5AC78C273792F7D83545D     C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8     B9AE9905FFD7803F25714661B63B535A4C206CA9     77984A986EBC2AA786BC0F66B01FBB92821C587A     8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600     4ED778F539E3634C779C87C6D7062848A1AB005C     A48C2BEE680E841632CD4E44F07496B3EB3C1762     B9E2F5981AA6E0CD28160D9FF13993A75599653C     ; do         for server in             keys.openpgp.org             keyserver.ubuntu.com             hkps.pool.sks-keyservers.net         ; do 	        gpg2 --batch --keyserver "$server" --recv-keys "$key";         done     done
 ---> Running in 823885c29a21
gpg: directory '/root/.gnupg' created
gpg: keybox '/root/.gnupg/pubring.kbx' created
gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key 036A9C25BF357DD4: public key "Tianon Gravi <[email protected]>" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg: key 036A9C25BF357DD4: "Tianon Gravi <[email protected]>" 2 new user IDs
gpg: key 036A9C25BF357DD4: "Tianon Gravi <[email protected]>" 8 new signatures
gpg: Total number processed: 1
gpg:           new user IDs: 2
gpg:         new signatures: 8
gpg: keyserver receive failed: No name
gpg: key 9A84159D7001A4E5: new key but contains no user ID - skipped
gpg: Total number processed: 1
gpg:           w/o user IDs: 1
gpg: key 9A84159D7001A4E5: public key "Thomas Orozco <[email protected]>" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg: keyserver receive failed: No name
gpg: key 7434390BDBE9B9C5: new key but contains no user ID - skipped
gpg: Total number processed: 1
gpg:           w/o user IDs: 1
gpg: key 7434390BDBE9B9C5: public key "Colin Ihrig <[email protected]>" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg: keyserver receive failed: No name
gpg: key 09FE44734EB7990E: new key but contains no user ID - skipped
gpg: Total number processed: 1
gpg:           w/o user IDs: 1
gpg: key 09FE44734EB7990E: public key "Jeremiah Senkpiel <[email protected]>" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg: keyserver receive failed: No name
gpg: key C97EC7A07EDE3FC1: new key but contains no user ID - skipped
gpg: Total number processed: 1
gpg:           w/o user IDs: 1
gpg: key C97EC7A07EDE3FC1: public key "keybase.io/jasnell <[email protected]>" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg: keyserver receive failed: No name
gpg: key C273792F7D83545D: public key "Rod Vagg <[email protected]>" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg: key C273792F7D83545D: "Rod Vagg <[email protected]>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
gpg: keyserver receive failed: No name
gpg: key E73BC641CC11F4C8: public key "Myles Borins <[email protected]>" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg: key E73BC641CC11F4C8: "Myles Borins <[email protected]>" 2 new user IDs
gpg: key E73BC641CC11F4C8: "Myles Borins <[email protected]>" 4 new signatures
gpg: Total number processed: 1
gpg:           new user IDs: 2
gpg:         new signatures: 4
gpg: keyserver receive failed: No name
gpg: key B63B535A4C206CA9: public key "Evan Lucas <[email protected]>" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg: key B63B535A4C206CA9: "Evan Lucas <[email protected]>" 1 new user ID
gpg: key B63B535A4C206CA9: "Evan Lucas <[email protected]>" 1 new signature
gpg: Total number processed: 1
gpg:           new user IDs: 1
gpg:         new signatures: 1
gpg: keyserver receive failed: No name
gpg: key B01FBB92821C587A: new key but contains no user ID - skipped
gpg: Total number processed: 1
gpg:           w/o user IDs: 1
gpg: key B01FBB92821C587A: public key "Gibson Fahnestock <[email protected]>" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg: keyserver receive failed: No name
gpg: key 770F7A9A5AE15600: new key but contains no user ID - skipped
gpg: Total number processed: 1
gpg:           w/o user IDs: 1
gpg: key 770F7A9A5AE15600: public key "Michaël Zasso (Targos) <[email protected]>" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg: keyserver receive failed: No name
gpg: key D7062848A1AB005C: new key but contains no user ID - skipped
gpg: Total number processed: 1
gpg:           w/o user IDs: 1
gpg: key D7062848A1AB005C: public key "Beth Griggs <[email protected]>" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg: keyserver receive failed: No name
gpg: key F07496B3EB3C1762: new key but contains no user ID - skipped
gpg: Total number processed: 1
gpg:           w/o user IDs: 1
gpg: key F07496B3EB3C1762: public key "Ruben Bridgewater <[email protected]>" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg: keyserver receive failed: No name
gpg: key F13993A75599653C: new key but contains no user ID - skipped
gpg: Total number processed: 1
gpg:           w/o user IDs: 1
gpg: key F13993A75599653C: public key "Shelley Vohr (security is major key) <[email protected]>" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg: keyserver receive failed: No name
Removing intermediate container 823885c29a21
ERROR: Service 'web' failed to build: The command '/bin/sh -c for key in     B42F6819007F00F88E364FD4036A9C25BF357DD4     595E85A6B1B4779EA4DAAEC70B588DFF0527A9B7     94AE36675C464D64BAFA68DD7434390BDBE9B9C5     FD3A5288F042B6850C66B31F09FE44734EB7990E     71DCFD284A79C3B38668286BC97EC7A07EDE3FC1     DD8F2338BAE7501E3DD5AC78C273792F7D83545D     C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8     B9AE9905FFD7803F25714661B63B535A4C206CA9     77984A986EBC2AA786BC0F66B01FBB92821C587A     8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600     4ED778F539E3634C779C87C6D7062848A1AB005C     A48C2BEE680E841632CD4E44F07496B3EB3C1762     B9E2F5981AA6E0CD28160D9FF13993A75599653C     ; do         for server in             keys.openpgp.org             keyserver.ubuntu.com             hkps.pool.sks-keyservers.net         ; do 	        gpg2 --batch --keyserver "$server" --recv-keys "$key";         done     done' returned a non-zero code: 2
Cleaning up...
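
The repeated gpg: keyserver receive failed: No name messages are DNS failures: hkps.pool.sks-keyservers.net has been decommissioned and no longer resolves, so every lookup against it fails and the RUN step exits with a non-zero code. A possible local workaround, assuming the loop lives in the web image's Dockerfile, is to drop the dead pool from the server list. A sketch (NODE_RELEASE_KEYS is a placeholder for the same key list as in the original RUN instruction):

# sketch: same loop, without the decommissioned hkps.pool.sks-keyservers.net entry;
# "&& break" stops at the first keyserver that succeeds for a given key
for key in $NODE_RELEASE_KEYS; do
    for server in keys.openpgp.org keyserver.ubuntu.com; do
        gpg2 --batch --keyserver "$server" --recv-keys "$key" && break
    done
done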

sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) fe_sendauth: no password supplied

Just by running ./install.sh. I'm on OSX.

Setting up database...
Starting dispatch_postgres_1 ... done
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2285, in _wrap_pool_connect
    return fn()
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 363, in connect
    return _ConnectionFairy._checkout(self)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 773, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 492, in checkout
    rec = pool._do_get()
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/impl.py", line 139, in _do_get
    self._dec_overflow()
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__
    compat.raise_(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 178, in raise_
    raise exception
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/impl.py", line 136, in _do_get
    return self._create_connection()
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 308, in _create_connection
    return _ConnectionRecord(self)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 437, in __init__
    self.__connect(first_connect_check=True)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 657, in __connect
    pool.logger.debug("Error on connect(): %s", e)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__
    compat.raise_(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 178, in raise_
    raise exception
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 652, in __connect
    connection = pool._invoke_creator(self)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/strategies.py", line 114, in connect
    return dialect.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 490, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.8/site-packages/psycopg2/__init__.py", line 127, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: fe_sendauth: no password supplied

.env file

# For configuration details, see: https://hawkins.gitbook.io/dispatch/configuration/app

# General
COMPOSE_PROJECT_NAME=dispatch
[email protected]
DISPATCH_HELP_SLACK_CHANNEL=#general
DISPATCH_UI_URL=https://example.com
SECRET_KEY="iO-S?G0np0=_oA.H7terCFEBI;B^Hnx6(oi3Z\KxWkl0b/5r(\"

# Database
DATABASE_CREDENTIALS=creds
DATABASE_HOSTNAME=postgres
DATABASE_NAME=dispatch
DATABASE_PORT=5432
POSTGRES_DB=dispatch
POSTGRES_PASSWORD=dispatch
POSTGRES_USER=dispatch

# Authentication
DISPATCH_AUTHENTICATION_PROVIDER_PKCE_JWKS=""

# Frontend
VUE_APP_DISPATCH_AUTHENTICATION_PROVIDER_PKCE_OPEN_ID_CONNECT_URL=""
VUE_APP_DISPATCH_AUTHENTICATION_PROVIDER_PKCE_CLIENT_ID=""

# Cost
ANNUAL_COST_EMPLOYEE=50000
BUSINESS_HOURS_YEAR=2080

# Incident configuration
INCIDENT_CONVERSATION_COMMANDS_REFERENCE_DOCUMENT_ID="INCIDENT_CONVERSATION_COMMANDS_REFERENCE_DOCUMENT_ID"
INCIDENT_DAILY_SUMMARY_ONCALL_SERVICE_ID=None
INCIDENT_DOCUMENT_INVESTIGATION_SHEET_ID=""
INCIDENT_DOCUMENT_INVESTIGATION_SHEET_ID="INCIDENT_DOCUMENT_INVESTIGATION_SHEET_ID"
INCIDENT_FAQ_DOCUMENT_ID=""
INCIDENT_FAQ_DOCUMENT_ID="INCIDENT_FAQ_DOCUMENT_ID"
INCIDENT_NOTIFICATION_CONVERSATIONS=""
INCIDENT_NOTIFICATION_CONVERSATIONS=""
INCIDENT_NOTIFICATION_DISTRIBUTION_LISTS=""
INCIDENT_NOTIFICATION_DISTRIBUTION_LISTS=""
INCIDENT_RESOURCE_CONVERSATION_COMMANDS_REFERENCE_DOCUMENT="google-docs-conversation-commands-reference-document"
INCIDENT_RESOURCE_FAQ_DOCUMENT="google-docs-faq-document"
INCIDENT_RESOURCE_INCIDENT_REVIEW_DOCUMENT="google-docs-incident-review-document"
INCIDENT_RESOURCE_INCIDENT_TASK="google-docs-incident-task"
INCIDENT_RESOURCE_INVESTIGATION_DOCUMENT="google-docs-investigation-document"
INCIDENT_RESOURCE_INVESTIGATION_SHEET="google-docs-investigation-sheet"
INCIDENT_RESOURCE_NOTIFICATIONS_GROUP="google-group-participant-notifications-group"
INCIDENT_RESOURCE_TACTICAL_GROUP="google-group-participant-tactical-group"
INCIDENT_STORAGE_ARCHIVAL_FOLDER_ID=""
INCIDENT_STORAGE_ARCHIVAL_FOLDER_ID=INCIDENT_STORAGE_ARCHIVAL_FOLDER_ID
INCIDENT_STORAGE_DRIVE_ID_SLUG=""
INCIDENT_STORAGE_INCIDENT_REVIEW_FILE_ID=""
INCIDENT_STORAGE_INCIDENT_REVIEW_FILE_ID=INCIDENT_STORAGE_INCIDENT_REVIEW_FILE_ID

# Plugin configuration
INCIDENT_PLUGIN_CONTACT_SLUG="slack-contact"
INCIDENT_PLUGIN_CONVERSATION_SLUG="slack-conversation"
INCIDENT_PLUGIN_DOCUMENT_RESOLVER_SLUG="dispatch-document-resolver"
INCIDENT_PLUGIN_DOCUMENT_SLUG="google-docs-document"
INCIDENT_PLUGIN_EMAIL_SLUG="google-gmail-conversation"
INCIDENT_PLUGIN_GROUP_SLUG="group-participant-group"
INCIDENT_PLUGIN_PARTICIPANT_SLUG="dispatch-participants"
INCIDENT_PLUGIN_STORAGE_SLUG="google-drive-storage"
INCIDENT_PLUGIN_TASK_SLUG="google-drive-task"
INCIDENT_PLUGIN_TICKET_SLUG="jira-ticket"

# Slack
SLACK_APP_USER_SLUG=""
SLACK_API_BOT_TOKEN=""
SLACK_SIGNING_SECRET=""
SLACK_WORKSPACE_NAME=""

# Google
GOOGLE_DEVELOPER_KEY=key
GOOGLE_DOMAIN=site.com
[email protected]
GOOGLE_SERVICE_ACCOUNT_CLIENT_ID=id
GOOGLE_SERVICE_ACCOUNT_DELEGATED_ACCOUNT=account
GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY=key
GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY_ID=id
GOOGLE_SERVICE_ACCOUNT_PROJECT_ID=id

# Jira
JIRA_BROWSER_URL=""
JIRA_API_URL=""
JIRA_PROJECT_KEY=""
JIRA_ISSUE_TYPE_ID=""
JIRA_USERNAME=""
JIRA_PASSWORD=""

# PagerDuty
PAGERDUTY_API_KEY=""
PAGERDUTY_API_FROM_EMAIL=""
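
The fe_sendauth: no password supplied error means psycopg2 opened the connection without a password. Assuming Dispatch parses DATABASE_CREDENTIALS as a user:password pair, the value creds above provides neither, while the Postgres container itself is configured with dispatch/dispatch. A sketch of a value that would line up with the POSTGRES_* settings:

# .env (sketch; must match POSTGRES_USER and POSTGRES_PASSWORD)
DATABASE_CREDENTIALS=dispatch:dispatch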

Error when trying to start http://0.0.0.0:8000/incidents/report

This does not look right:

postgres=> \dt
Did not find any relations.

The UI shows "No information found".

When loading the UI, I also see the error below in the logs:

Attaching to dispatch_web_1, dispatch_scheduler_1
scheduler_1  | No metric providers specified metrics will not be sent.
web_1        | No metric providers specified metrics will not be sent.
scheduler_1  | Starting scheduler...
web_1        | WARNING:  Unable to import fbprophet, some metrics will not be usable.
web_1        | INFO:     Started server process [1]
web_1        | INFO:     Waiting for application startup.
web_1        | INFO:     Application startup complete.
web_1        | INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
web_1        | INFO:     172.19.0.1:52170 - "GET /incidents/report HTTP/1.1" 200 OK
web_1        | INFO:     172.19.0.1:52170 - "GET /api/v1/incident_types/?itemsPerPage=50&sortBy[]=name&descending[]=false HTTP/1.1" 401 Unauthorized
web_1        | INFO:     172.19.0.1:52168 - "GET /api/v1/incident_priorities/ HTTP/1.1" 401 Unauthorized
web_1        | INFO:     172.19.0.1:52168 - "GET /static/m.png HTTP/1.1" 200 OK
scheduler_1  | Traceback (most recent call last):
scheduler_1  |   File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1247, in _execute_context
scheduler_1  |     self.dialect.do_execute(
scheduler_1  |   File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 588, in do_execute
scheduler_1  |     cursor.execute(statement, parameters)
scheduler_1  | psycopg2.errors.UndefinedTable: relation "incident" does not exist
scheduler_1  | LINE 2: FROM incident
scheduler_1  |              ^
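
Both symptoms, \dt finding no relations and the scheduler failing on relation "incident" does not exist, point at a database that was started but never migrated. One way forward, assuming the compose file exposes a web service whose entrypoint is the dispatch CLI, is to run the schema upgrade and then restart the stack (a sketch; if the command itself errors out, see the "Fix Database Upgrade step in install.sh" issue below):

docker-compose run --rm web database upgrade   # create/upgrade the schema (service name assumed)
docker-compose up -d                           # restart the stack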

Service 'web' failed to build

ERROR: Service 'web' failed to build: The command '/bin/sh -c set -x && buildDeps="" && apt-get update && apt-get install -y --no-install-recommends $buildDeps && pip install /tmp/dist/*.whl && apt-get purge -y --auto-remove $buildDeps && apt-get install -y --no-install-recommends pkg-config && apt-get clean && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 100
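
An apt-get exit code of 100 during a build is most often a transient fetch failure or a stale package index cached in an earlier layer. A first thing to try, assuming nothing in the Dockerfile itself changed, is a clean rebuild (sketch):

docker-compose build --no-cache web   # rebuild without reusing cached apt layers
docker-compose up -d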

KeyError: "Config 'INCIDENT_STORAGE_FOLDER_ID' is missing, and has no default."

After cloning the repo and running ./install.sh, the setup fails with the following error:

...
Starting dispatch_postgres_1 ... done
No JWT secret specified, this is required if you are using basic authentication.
No JWT secret specified, this is required if you are using basic authentication.
Traceback (most recent call last):
  File "/usr/local/bin/dispatch", line 5, in <module>
    from dispatch.cli import entrypoint
  File "/usr/local/lib/python3.8/site-packages/dispatch/cli.py", line 12, in <module>
    from dispatch import __version__, config
  File "/usr/local/lib/python3.8/site-packages/dispatch/config.py", line 161, in <module>
    INCIDENT_STORAGE_FOLDER_ID = config("INCIDENT_STORAGE_FOLDER_ID")
  File "/usr/local/lib/python3.8/site-packages/starlette/config.py", line 62, in __call__
    return self.get(key, cast, default)
  File "/usr/local/lib/python3.8/site-packages/starlette/config.py", line 75, in get
    raise KeyError(f"Config '{key}' is missing, and has no default.")
KeyError: "Config 'INCIDENT_STORAGE_FOLDER_ID' is missing, and has no default."
Cleaning up...
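
Dispatch's config loader raises as soon as a required key has no value and no default, so the install aborts before anything else runs. A workaround, assuming an empty value is acceptable when the Google Drive plugins are not in use, is to define the missing key in .env and re-run ./install.sh (sketch; use a real Drive folder ID if the Google storage plugin is enabled):

# .env (sketch)
INCIDENT_STORAGE_FOLDER_ID=""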

Fix Database Upgrade step in install.sh

The installation script runs docker-compose run --rm dispatch database upgrade as one of the last steps of setting up Dispatch.

There are a few issues:

  • it references a service called "dispatch" which does not exist
  • changing the service to a service that has DB access (e.g., "web") produces the configuration error below
  • the upgrade command does not appear to recognize the "--no-input" argument
$ docker compose run --rm web database upgrade
Creating network "dispatch_default" with the default driver
Creating dispatch_postgres_1 ... done
Traceback (most recent call last):
  File "/usr/local/bin/dispatch", line 8, in <module>
    sys.exit(entrypoint())
  File "/usr/local/lib/python3.8/site-packages/dispatch/cli.py", line 766, in entrypoint
    dispatch_cli()
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/dispatch/cli.py", line 553, in upgrade_database
    alembic_command.upgrade(alembic_cfg, revision, sql=sql, tag=tag)
  File "/usr/local/lib/python3.8/site-packages/alembic/command.py", line 278, in upgrade
    script = ScriptDirectory.from_config(config)
  File "/usr/local/lib/python3.8/site-packages/alembic/script/base.py", line 126, in from_config
    script_location = config.get_main_option("script_location")
  File "/usr/local/lib/python3.8/site-packages/alembic/config.py", line 293, in get_main_option
    return self.get_section_option(self.config_ini_section, name, default)
  File "/usr/local/lib/python3.8/site-packages/alembic/config.py", line 276, in get_section_option
    raise util.CommandError(
alembic.util.exc.CommandError: No config file '/usr/local/lib/python3.8/site-packages/dispatch/alembic.ini' found, or file has no '[alembic]' section
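
For the first bullet above, a minimal patch is to point the install script at a service that actually exists in docker-compose.yml (a sketch, assuming the service is named web; the missing alembic.ini in the traceback is a packaging problem inside the image and needs a separate fix):

# install.sh (sketch): replace the non-existent "dispatch" service name
docker-compose run --rm web database upgrade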

What is the URL when it's all done?

Hi,
sorry, I'm not a Docker expert. Both ./install.sh and docker-compose up -d complete successfully, but when I try localhost in my browser it doesn't work, and the same thing happens over https.

Can you help me, please?
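
Assuming the default compose setup, the web container serves the UI on port 8000 (the Uvicorn logs elsewhere in this document show "Uvicorn running on http://0.0.0.0:8000"), so the address to try is http://localhost:8000 over plain HTTP; there is no TLS unless you add a proxy yourself. A quick sketch for checking reachability from the host:

docker-compose ps               # is the web service up, and is port 8000 published to the host?
curl -I http://localhost:8000/  # should return an HTTP 200 for the UI's index page
docker-compose logs web         # look for "Uvicorn running on http://0.0.0.0:8000"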

Is it possible to run Dispatch with only a few plugins?

After multiple attempts at setting the .env as in the example https://github.com/Netflix/dispatch-docker/blob/master/.env.example, I wonder if there is a way to set up Dispatch without configuring the Slack, Google, and Zoom plugins, just for testing.

In my current .env, the Google plugin settings are empty:

GOOGLE_DEVELOPER_KEY=
GOOGLE_DOMAIN=
GOOGLE_SERVICE_ACCOUNT_CLIENT_EMAIL=
GOOGLE_SERVICE_ACCOUNT_CLIENT_ID=
GOOGLE_SERVICE_ACCOUNT_DELEGATED_ACCOUNT=
GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY=
GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY_ID=
GOOGLE_SERVICE_ACCOUNT_PROJECT_ID=

But I'm getting an error while trying to create an incident_type (screenshot omitted), and the following error in the container logs:

INFO:     XXX.XXX.XXX.XXX:3936 - "POST /api/v1/incident_types/ HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/uvicorn/protocols/http/httptools_impl.py", line 385, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/usr/local/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
    return await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/applications.py", line 102, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 181, in __call__
    raise exc from None
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 26, in __call__
    await response(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/responses.py", line 197, in __call__
    async for chunk in self.body_iterator:
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 56, in body_stream
    task.result()
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 38, in coro
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/sentry_asgi/middleware.py", line 22, in __call__
    raise exc from None
  File "/usr/local/lib/python3.8/site-packages/sentry_asgi/middleware.py", line 19, in __call__
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 26, in __call__
    await response(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/responses.py", line 197, in __call__
    async for chunk in self.body_iterator:
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 56, in body_stream
    task.result()
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 38, in coro
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 26, in __call__
    await response(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/responses.py", line 197, in __call__
    async for chunk in self.body_iterator:
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 56, in body_stream
    task.result()
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/base.py", line 38, in coro
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
    raise exc from None
  File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 71, in __call__
    await self.app(scope, receive, sender)
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 550, in __call__
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 376, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/fastapi/applications.py", line 146, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/applications.py", line 102, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 181, in __call__
    raise exc from None
  File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
    raise exc from None
  File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 71, in __call__
    await self.app(scope, receive, sender)
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 550, in __call__
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 227, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 41, in app
    response = await func(request)
  File "/usr/local/lib/python3.8/site-packages/fastapi/routing.py", line 204, in app
    response_data = await serialize_response(
  File "/usr/local/lib/python3.8/site-packages/fastapi/routing.py", line 126, in serialize_response
    raise ValidationError(errors, field.type_)
pydantic.error_wrappers.ValidationError: 2 validation errors for IncidentTypeRead
response -> template_document -> weblink
  none is not an allowed value (type=type_error.none.not_allowed)
response -> template_document -> name
  none is not an allowed value (type=type_error.none.not_allowed)
 

need help: install.sh error: Failed to establish a new connection

When I run install.sh, pip reports an error:

Step 39/46 : RUN set -x     && buildDeps=""     && apt-get update     && apt-get install -y --no-install-recommends $buildDeps     && pip install -U /tmp/dist/*.whl     && apt-get purge -y --auto-remove $buildDeps     && apt-get install -y --no-install-recommends     pkg-config postgresql-client        && apt-get clean     && rm -rf /var/lib/apt/lists/*
 ---> Running in 930754714ad6
+ buildDeps=
+ apt-get update
Get:1 http://deb.debian.org/debian buster InRelease [121 kB]
Get:2 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
Get:3 http://security.debian.org/debian-security buster/updates/main amd64 Packages [243 kB]
Get:4 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
Get:5 http://deb.debian.org/debian buster/main amd64 Packages [7906 kB]
Get:6 http://deb.debian.org/debian buster-updates/main amd64 Packages [7856 B]
Fetched 8396 kB in 7min 21s (19.0 kB/s)
Reading package lists...
+ apt-get install -y --no-install-recommends
Reading package lists...
Building dependency tree...
Reading state information...
0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
+ pip install -U /tmp/dist/dispatch-0.1.0.dev0-py38-none-any.whl
Processing /tmp/dist/dispatch-0.1.0.dev0-py38-none-any.whl
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fbb0d0407c0>: Failed to establish a new connection: [Errno -2] Name or service not known')': /simple/oauth2client/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fbb0b4128e0>: Failed to establish a new connection: [Errno -2] Name or service not known')': /simple/oauth2client/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fbb0b412070>: Failed to establish a new connection: [Errno -2] Name or service not known')': /simple/oauth2client/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fbb0b412190>: Failed to establish a new connection: [Errno -2] Name or service not known')': /simple/oauth2client/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fbb0b412250>: Failed to establish a new connection: [Errno -2] Name or service not known')': /simple/oauth2client/
ERROR: Could not find a version that satisfies the requirement oauth2client (from dispatch==0.1.0.dev0) (from versions: none)
ERROR: No matching distribution found for oauth2client (from dispatch==0.1.0.dev0)
Removing intermediate container 930754714ad6
ERROR: Service 'core' failed to build : The command '/bin/sh -c set -x     && buildDeps=""     && apt-get update     && apt-get install -y --no-install-recommends $buildDeps     && pip install -U /tmp/dist/*.whl     && apt-get purge -y --auto-remove $buildDeps     && apt-get install -y --no-install-recommends     pkg-config postgresql-client        && apt-get clean     && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 1
Cleaning up...
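
pip can read the local wheel but cannot resolve pypi.org ([Errno -2] Name or service not known), which usually means the build containers have no working DNS. One workaround, assuming a Linux host with systemd and no existing /etc/docker/daemon.json, is to give the Docker daemon explicit resolvers and rebuild (a sketch; the resolver addresses are placeholders):

# point the Docker daemon at public resolvers (overwrites any existing daemon.json)
echo '{ "dns": ["8.8.8.8", "1.1.1.1"] }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
./install.sh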

Default Installation Issues

Hello, all I am aiming to do at this point is get dispatch-docker up and running with the default files pulled from GitHub (including the .env file). I am not entirely sure what to do after running both of the following:

pip install -r requirements.txt
./install.sh

Any help or a step-by-step guide would be greatly appreciated.

ERROR dispatch not a supported wheel on this platform.

Using docker-compose version

I'm getting this error when trying to run pip install /tmp/dist/dispatch-0.1.0.dev0-py38-none-any.whl

"dispatch-0.1.0.dev0-py38-none-any.whl is not a supported wheel on this platform."

Running in macOS - dispatch not found

After running install.sh on my Mac laptop, the src/dispatch folder is not generated. I also see some errors with apt-get in the logs (apt-get won't work on macOS), and the setup just ends up like below:

  File "/usr/local/bin/dispatch", line 5, in <module>
    from dispatch.cli import entrypoint
  File "/usr/local/lib/python3.8/site-packages/dispatch/cli.py", line 14, in <module>
    from .main import *  # noqa
  File "/usr/local/lib/python3.8/site-packages/dispatch/main.py", line 28, in <module>
    configure_logging()
  File "/usr/local/lib/python3.8/site-packages/dispatch/logging.py", line 11, in configure_logging
    logging.basicConfig(level=LOG_LEVEL)
  File "/usr/local/lib/python3.8/logging/__init__.py", line 1994, in basicConfig
    root.setLevel(level)
  File "/usr/local/lib/python3.8/logging/__init__.py", line 1409, in setLevel
    self.level = _checkLevel(level)
  File "/usr/local/lib/python3.8/logging/__init__.py", line 194, in _checkLevel
    raise ValueError("Unknown level: %r" % level)
ValueError: Unknown level: '"warning"'
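
The Unknown level: '"warning"' value shows that the level string still carries its literal quotes when it reaches logging.basicConfig. Assuming LOG_LEVEL is set in the .env file, writing it unquoted and in the uppercase form the logging module expects (or removing it to fall back to the default) should avoid this:

# .env (sketch)
LOG_LEVEL=WARNING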
