Example Use Case and Dataset repository
Currently, the use cases are stored in several different locations:
- Write up and user facing documentation - Public Github https://github.com/Teradata/product-help/tree/master/UseCases
- Actual data in CSV, JSON, or Parquet files - S3 buckets https://td-usgs-public.s3.amazonaws.com/ https://trial-datasets.s3.amazonaws.com/ /s3/s3.amazonaws.com/alpha-data-store-td/retail_sample_data
- Trials sample data - provisioning code https://github.td.teradata.com/teradata-managed-cloud/icaws-eco-sync/tree/master/packages/trialsdata
- List of use cases to drive the Trials UI - Teradata.com https://www.teradata.com/product-help/UseCases/use_cases.json
- Antares sample data - deployed with the application https://gitlab.teracloud.ninja/teracloud/saas-services/applications/janus/-/tree/master/apps/janus/src/assets/sample_data
Create a centralized public github.com repository to house the use cases and datasets. Provide a structure for developing use cases and datasets that the UI/UX team can use to display the use cases to Antares and Vantage users. Leverage the use case and dataset scripts in this repo to automate as much of the loading and object creation as possible.
A use case would consist of a README.md file containing, in markdown, all of the queries a user runs to execute the use case scenarios. The dataset that the use case relies upon will be recognized in two ways:
- In the README, a new code section called `td-dataset` will be leveraged in the front end to display the load widget properly in markdown.
- In use-cases.json, the `dataset` attribute will reference the dataset that is needed for the use case.
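As a sketch, an entry in use-cases.json might look like the following; only the `dataset` attribute is defined above, so the other field names and paths here are illustrative assumptions, not a settled schema:

```
{
  "name": "ExampleUseCase",
  "readme": "UseCases/ExampleUseCase/README.md",
  "dataset": "ExampleDataSet"
}
```

The README for the same use case would then carry a `td-dataset` code section naming `ExampleDataSet`, which the front end replaces with the load widget.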
A dataset will contain a README.md for documentation, but the definition of the dataset will live in either a scripts.json definition file or a scripts folder (TBD). Convention versus configuration is still an outstanding question. If we use scripts.json, that file determines the order in which the scripts are run.
No matter what, there will be a scripts folder containing the SQL scripts necessary to load the data into the database. There will be an optional (based on space considerations) data folder containing the actual data needed for the dataset. The intention is that these data files will be published to an S3 bucket and that the scripts will reference the S3 location, not this repository.
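A minimal sketch of what a dataset folder could look like, assuming scripts.json is simply an ordered list of script names (the exact schema is the open convention-versus-configuration question above):

```
ExampleDataSet/
  README.md
  scripts.json
  scripts/
    1_create_example_user.sql
    2_create_foreign_tables.sql
    3_create_views.sql
  data/            (optional, based on space; published to S3)
```

with a scripts.json along these lines:

```
{
  "scripts": [
    "1_create_example_user.sql",
    "2_create_foreign_tables.sql",
    "3_create_views.sql"
  ]
}
```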
BODY ->
```
{
  "data_set_name": "ExampleDataSet",
  "parameters": {
    "user": "vijay",
    "password": "SuperMan007"
  }
}
```
RESPONSE ->
```
{
  "use-case-data-set-job-id": 1234567
}
```
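A client could build the job-submission body like this; the payload shape comes from the BODY example above, while the helper itself (and anything about the endpoint or auth handling) is an illustrative assumption:

```python
import json

def build_load_request(data_set_name: str, user: str, password: str) -> str:
    """Serialize the dataset-load request body shown in the BODY example.

    Only the field names are taken from the example; the endpoint,
    transport, and credential handling are still to be designed.
    """
    payload = {
        "data_set_name": data_set_name,
        "parameters": {"user": user, "password": password},
    }
    return json.dumps(payload)

body = build_load_request("ExampleDataSet", "vijay", "SuperMan007")
print(body)
```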
RESPONSE ->
```
{
  "data_set_name": "ExampleDataSet",
  "script_status": [
    {"1_create_example_user.sql": "DONE"},
    {"2_create_foreign_tables.sql": "IN_PROGRESS"},
    {"3_create_views.sql": "NOT_STARTED"}
  ]
}
```
RESPONSE ->
```
{
  "data_set_name": "ExampleDataSet",
  "script_status": [
    {"1_create_example_user.sql": "DONE"},
    {"2_create_foreign_tables.sql": "FAILED", "message": "VIJAY Broke it"},
    {"3_create_views.sql": "NOT_STARTED"}
  ]
}
```
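The per-script statuses in these responses could be folded into a single job-level status for the UI. This is a sketch of one possible precedence rule (FAILED wins, then IN_PROGRESS, then NOT_STARTED, else DONE), not a defined part of the API:

```python
def overall_status(script_status):
    """Collapse a script_status list (shaped like the RESPONSE examples)
    into one job-level status string."""
    # Each entry maps script name -> status, possibly with an extra
    # "message" key on failures, so skip "message" when collecting states.
    states = []
    for entry in script_status:
        for key, value in entry.items():
            if key != "message":
                states.append(value)
    if "FAILED" in states:
        return "FAILED"
    if "IN_PROGRESS" in states:
        return "IN_PROGRESS"
    if all(s == "NOT_STARTED" for s in states):
        return "NOT_STARTED"
    if all(s == "DONE" for s in states):
        return "DONE"
    return "IN_PROGRESS"  # mixed DONE / NOT_STARTED: work remains
```

For example, the second RESPONSE above would collapse to FAILED, and the first to IN_PROGRESS.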
Alternatively, after a period of time the job status is deleted.
Open questions:
1. What happens if DELETE is called during execution?
2. How do we handle failed conditions?
3. Life cycle documentation for the use-case-dataset-job.
4. Could we ever use the public datasets in AWS? https://registry.opendata.aws/