into-cps-association / dtaas
:factory: :left_right_arrow: :busts_in_silhouette: Digital Twin as a Service
Home Page: https://into-cps-association.github.io/DTaaS/
License: Other
Please do
yarn outdated
yarn install --check-files
yarn why
yarn audit
yarn list
yarn licenses list
yarn upgrade-interactive
make sensible choices and submit a PR.
Please post the log of a command if you are in doubt. It is better to discuss before making significant changes.
The lib microservice uses a GraphQL API. The GraphQL queries can be sent as HTTP POST requests over the same API connection.
There are two GraphQL queries for the lib microservice. The equivalent HTTP queries need to be documented.
Use Postman for checking HTTP queries.
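Since the GraphQL queries travel as plain HTTP POST requests, they can also be checked from the command line. Below is a hedged sketch: the port, the endpoint path, and the query body are assumptions for illustration, not the documented API.

```shell
# Build the JSON payload for a GraphQL-over-HTTP request.
# The endpoint URL and the query itself are assumptions, not the
# confirmed lib microservice API.
PAYLOAD='{"query":"query { listDirectory(path: \"user1\") { repository { trees { blobs { edges { node { name } } } } } } }"}'
echo "$PAYLOAD"
# Send it as an HTTP POST (the same request Postman would produce):
# curl -X POST http://localhost:4001/lib \
#   -H 'Content-Type: application/json' \
#   -d "$PAYLOAD"
```

The same payload pasted into Postman's request body (with Content-Type set to application/json) should produce an identical response.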
The existing .eslintrc files in both client and server/lib seem to have downgraded the JavaScript version. Here is a snippet of server/auth/.eslintrc:
{
"env": {
"es2022": true,
"jest": true,
"jest/globals": true
},
"settings": {
"import/resolver": {
"node": {
"paths": ["src"],
"extensions": [".js", ".jsx", ".ts", ".tsx"]
}
}
},
"parser": "@typescript-eslint/parser",
"overrides": [
{
"files": ["*.ts", "*.tsx"], // Your TypeScript files extension
"extends": [
"plugin:@typescript-eslint/recommended",
"plugin:@typescript-eslint/recommended-requiring-type-checking",
"airbnb-base",
"airbnb-typescript/base"
],
"parserOptions": {
"requireConfigFile": false,
"ecmaVersion": 11,
"sourceType": "module",
"project": ["./tsconfig.json"]
}
}
]
}
The plugins seem to be missing as well. Are there any disadvantages to keeping these in the .eslintrc of client/ and server/lib?
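A minimal sketch of the kind of fix under discussion: raising the parser's ecmaVersion so it matches the es2022 environment declared above. Whether 2022 is the right target version is exactly the open question here.

```json
"parserOptions": {
  "requireConfigFile": false,
  "ecmaVersion": 2022,
  "sourceType": "module",
  "project": ["./tsconfig.json"]
}
```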
Publish library microservice as an npm package
Having an npm package that can be installed globally and used as a command-line utility is the target. A good example is npm install -g serve.
The current lib microservice expects a .env file in the directory it starts from. This will have to become:
libms -c <config-file>
libms --config <config-file>
Update installation procedures and instructions in deploy/ for the latest codebase.
data directory
script/git-hooks.bash in script
The functions responsible for pre- and post-processing of: data inputs, data outputs, control outputs. The data science libraries and functions can be used to create useful function assets for the platform.
In some cases, Digital Twin models require calibration prior to their use; functions written by domain experts along with right data inputs can make model calibration an achievable goal. Another use of functions is to process the sensor and actuator data of both Physical Twins and Digital Twins.
The data sources and sinks available to a digital twin. Typical examples of data sources are sensor measurements from Physical Twins, and test data provided by manufacturers for calibration of models. Typical examples of data sinks are visualization software, external users and data storage services. There exist special outputs, such as events and commands, which are akin to control outputs from a Digital Twin. These control outputs usually go to Physical Twins, but they can also go to another Digital Twin.
The model assets are used to describe different aspects of Physical Twins and their environment, at different levels of abstraction. Therefore, it is possible to have multiple models for the same Physical Twin. For example, a flexible robot used in a car production plant may have structural model(s) which will be useful in tracking the wear and tear of parts. The same robot can have behavioural model(s) describing the safety guarantees provided by the robot manufacturer. The same robot can also have functional model(s) describing the part manufacturing capabilities of the robot.
The software tool assets are software used to create, evaluate and analyze models. These tools are executed on top of computing platforms, i.e., an operating system, virtual machines like the Java virtual machine, or docker containers. The tools tend to be platform-specific, making them less reusable than models.
A tool can be packaged to run in local or distributed virtual machine environments, thus allowing selection of the most suitable execution environment for a Digital Twin.
Most models require tools to evaluate them in the context of data inputs.
There exist cases where executable packages are run as binaries in a computing environment. Each of these packages is a pre-packaged combination of models and tools put together to create a ready-to-use Digital Twin.
These are ready-to-use digital twins created by one or more users. These digital twins can be reconfigured later for specific use cases.
Create digital twins from tools provided within user workspaces. Each digital twin will have one directory. It is suggested that users provide one bash shell script to run their digital twin. Users can create the required scripts and other files using the tools provided on the Workbench page.
Digital twins are executed from within user workspaces. The given bash script gets executed from the digital twin directory. Terminal-based digital twins can be executed from VSCode, and graphical digital twins can be executed from the VNC GUI. The results of execution can be placed in the data directory.
The analysis of digital twins requires running the digital twin script from the user workspace. The execution results placed within the data directory are processed by analysis scripts, and the results are placed back in the data directory. These scripts can be executed either from VSCode or, for graphical results, from the VNC GUI.
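The convention described above (one directory per digital twin, one bash script, results placed in a data directory) could look like the following hypothetical run script. The directory layout and the simulation step are placeholders for illustration, not a DTaaS API.

```shell
#!/bin/bash
# Hypothetical lifecycle script for one digital twin directory.
# DT_DIR and the simulation step are placeholders beyond what the
# convention in the text above describes.
set -e
DT_DIR="${DT_DIR:-$PWD}"      # the digital twin's own directory
DATA_DIR="${DT_DIR}/data"     # execution results go here
mkdir -p "$DATA_DIR"
# placeholder for the actual simulation / co-simulation tool invocation
printf 'time,value\n0,1.0\n' > "${DATA_DIR}/results.csv"
echo "results written to ${DATA_DIR}/results.csv"
```

Analysis scripts would then read from and write back to the same data directory.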
client/script/test.bash
client/test/e2e/playwright/auth.setup.js by using the dotenv package.
client/test/README.md explaining the steps required to create a configuration file before running the end-to-end tests.
servers/ codebase.
The workflow badges in the STATUS page need to be updated for the latest GitHub Actions.
Two kinds of updates are required:
deploy/ and script/ directories.
Cross reference to testdouble wiki.
Check all examples as a new user
The husky githooks assume yarn setup at the top-level of the project. If we run git commit from outside the client/ directory, the commit fails. Please see the log below.
vbox@vbox-Linux:~/git/vbox/DTaaS|fix-top-level-issues⚡ ⇒ git status
On branch fix-top-level-issues
Changes not staged for commit:
(use "git add/rm <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
deleted: LICENSE.md
modified: README.md
modified: STATUS.md
modified: client/README.md
deleted: script/gateway.sh
modified: script/install.bash
modified: servers/lib/README.md
Untracked files:
(use "git add <file>..." to include in what will be committed)
LICENSE.txt
docs/DTaaS-Paper-Draft.pdf
no changes added to commit (use "git add" and/or "git commit -a")
vbox@vbox-Linux:~/git/vbox/DTaaS|fix-top-level-issues⚡ ⇒ git add .
vbox@vbox-Linux:~/git/vbox/DTaaS|fix-top-level-issues⚡ ⇒ git commit
yarn run v1.22.19
$ prettier --ignore-path ../.gitignore --write "**/*.{ts,tsx,css,scss}"
/bin/sh: 1: prettier: not found
error Command failed with exit code 127.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
husky - pre-commit hook exited with code 127 (error)
husky - command not found in PATH=/usr/lib/git-core:/usr/local/texlive/2022/bin/x86_64-linux/:/home/vbox/.yarn/bin:/usr/local/texlive/2022/bin/x86_64-linux/:/home/vbox/.yarn/bin:/
usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
vbox@vbox-Linux:~/git/vbox/DTaaS|fix-top-level-issues⚡ ⇒ cd client
vbox@vbox-Linux:~/git/vbox/DTaaS/client|fix-top-level-issues⚡ ⇒ yarn install
yarn install v1.22.19
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
$ cd .. && husky install
husky - Git hooks installed
Done in 20.29s.
vbox@vbox-Linux:~/git/vbox/DTaaS/client|fix-top-level-issues⚡ ⇒ git status
On branch fix-top-level-issues
Changes to be committed:
(use "git restore --staged <file>..." to unstage)
renamed: ../LICENSE.md -> ../LICENSE.txt
modified: ../README.md
modified: ../STATUS.md
modified: README.md
new file: ../docs/DTaaS-Paper-Draft.pdf
modified: ../docs/LIB-MS.md
deleted: ../script/gateway.sh
modified: ../script/install.bash
modified: ../servers/lib/README.md
vbox@vbox-Linux:~/git/vbox/DTaaS/client|fix-top-level-issues⚡ ⇒ git commit
yarn run v1.22.19
$ prettier --ignore-path ../.gitignore --write "**/*.{ts,tsx,css,scss}"
env.d.ts 242ms
playwright.config.ts 14ms
public/static/css/MaterialIcon.css 33ms
...
yarn run v1.22.19
$ script/syntax.bash
Done in 3.58s.
[fix-top-level-issues 8ea4628] Adds documentation and updates READMEs
9 files changed, 128 insertions(+), 114 deletions(-)
rename LICENSE.md => LICENSE.txt (100%)
rewrite README.md (60%)
create mode 100755 docs/DTaaS-Paper-Draft.pdf
delete mode 100755 script/gateway.sh
How about asking husky to run the commands only on certain paths?
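One way to scope the hook, sketched below: only run the client-side checks when the staged files actually touch client/. The helper function and the fixed file list are made up for demonstration; in the real hook the list would come from git diff --cached --name-only.

```shell
# Sketch of a path-scoped pre-commit check (for husky or a plain git hook).
changed_in() {
  # succeeds when stdin (a list of staged file names) contains a path
  # starting with the given prefix
  grep -q "^$1"
}
# In the real hook the list would come from:
#   git diff --cached --name-only | changed_in 'client/' \
#     && (cd client && yarn prettier --write "**/*.{ts,tsx,css,scss}")
# Demonstration with a fixed list:
if printf 'client/src/App.tsx\ndocs/README.md\n' | changed_in 'client/'; then
  echo "client files changed: run client hooks"
fi
```

This keeps commits made from outside client/ from failing on missing client dependencies.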
Create a working test for the runner-nestjs branch. The integration test for execaCMDRunner is a good place to start.
It seems like there are no coverage stats included in the report uploaded to CodeCov. See the snippet below.
This is an extracted snippet of the report uploaded to CodeCov. Everything but 1 test has been included.
It starts from
<<<<<< network
# path=client/playwright-report/results.json
{
"config": {
"forbidOnly": false,
"fullyParallel": false,
"globalSetup": null,
"globalTeardown": null,
"globalTimeout": 600000,
"grep": {},
"grepInvert": null,
"maxFailures": 0,
"metadata": {},
"preserveOutput": "always",
"projects": [...],
"reporter": [
[
"html",
{
"outputFile": "playwright-report/index.html"
}
],
[
"list",
null
],
[
"junit",
{
"outputFile": "playwright-report/results.xml"
}
],
[
"json",
{
"outputFile": "playwright-report/results.json"
}
]
],
"reportSlowTests": {
"max": 5,
"threshold": 15000
},
"configFile": "/home/runner/work/DTaaS-Bachelor-new-GUI/DTaaS-Bachelor-new-GUI/client/playwright.config.ts",
"rootDir": "/home/runner/work/DTaaS-Bachelor-new-GUI/DTaaS-Bachelor-new-GUI/client/test/e2e",
"quiet": false,
"shard": null,
"updateSnapshots": "missing",
"version": "1.32.1",
"workers": 1,
"webServer": null
},
"suites": [
{
"title": "Menu.test.ts",
"file": "Menu.test.ts",
"column": 0,
"line": 0,
"specs": [],
"suites": [
{
"title": "Menu Links from first page (Layout)",
"file": "Menu.test.ts",
"line": 15,
"column": 6,
"specs": [
{
"title": "Menu Links are visible",
"ok": true,
"tags": [],
"tests": [
{
"timeout": 30000,
"annotations": [],
"expectedStatus": "passed",
"projectId": "chromium",
"projectName": "chromium",
"results": [
{
"workerIndex": 0,
"status": "passed",
"duration": 502,
"errors": [],
"stdout": [],
"stderr": [],
"retry": 0,
"startTime": "2023-04-21T12:43:49.727Z",
"attachments": []
}
],
"status": "expected"
}
],
"id": "017baff3819fe8223973-1835883432be1750c98b",
"file": "Menu.test.ts",
"line": 21,
"column": 3
}
]
}
]
}
],
"errors": []
}
There are a few possible reasons worth looking into. I think the third one is definitely the way to go:
Even though we are not using Webpack, there might be something left inside react-scripts. So it might not be necessary to re-wire the application to use a Babel transpiler configuration.
The GitHub Actions need to:
Embedding grafana in iframe of website doesn't work. The setup is as follows.
window.env = {
REACT_APP_ENVIRONMENT: 'test',
REACT_APP_URL_LIB: 'http://foo.com/user1/tree?',
REACT_APP_URL_DT: 'http://foo.com/user1/lab',
REACT_APP_URL_WORKBENCH: 'https://foo.com/vis',
};
service | URL |
---|---|
website | localhost:4000 |
mlworkspace | localhost:8090 |
grafana | localhost:3000 |
http:
routers:
dtaas:
entryPoints:
- http
rule: 'Host(`foo.com`)'
middlewares:
- basic-auth
service: dtaas
user1:
entryPoints:
- http
rule: 'Host(`foo.com`) && PathPrefix(`/user1`)'
middlewares:
- basic-auth
service: user1
vis:
entryPoints:
- http
rule: 'Host(`foo.com`) && PathPrefix(`/vis`)'
service: grafana
# Middleware: Basic authentication
middlewares:
basic-auth:
basicAuth:
usersFile: "/etc/traefik/auth"
services:
dtaas:
loadBalancer:
servers:
- url: "http://localhost:4000"
user1:
loadBalancer:
servers:
- url: "http://localhost:8090"
grafana:
loadBalancer:
servers:
- url: "http://localhost:3000"
With the above setup, the following work:
The following doesn't work:
Not sure if this is related to issue #31.
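One hedged guess worth checking (not confirmed as the cause here): by default Grafana sends an X-Frame-Options header and refuses to render inside an iframe unless embedding is enabled in grafana.ini:

```ini
; grafana.ini - allow dashboards to be embedded in an iframe
[security]
allow_embedding = true
```

If the iframe stays blank while direct access to /vis works, the browser console would show a frame-blocking error in that case.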
The User website page requires updating for the Gitlab OAuth integration.
The end-to-end tests are for the complete application. These must be moved to the test/ directory at the top level. The end-to-end tests are in the following locations at present:
client/test/e2e
servers/lib/test/e2e
Page layout
Developer
Index
- presentation, video
- development workflow
System Engineering
- Architecture (Explanation of Components with links to dedicated pages of components)
- Website (technology stack, development pointers by linking to README.md page of client/, design documents / diagrams)
- Gateway
- Library Microservice
Testing
- Different kinds of tests (links to autolabcli pages)
- Integration and E2E tests in DTaaS
- Integration Server (link to the wiki page)
Notes:
Codeclimate does not use the top-level codeclimate.yml configuration for the servers/lib directory.
The pre-push git-hook searches for the matching branch name at the upstream repository (this one). If a fork tries to push a branch that is not in the upstream repository, the following error appears:
git push prasad-public runner-nestjs-tests prasad@prasad-Linux
fatal: ambiguous argument 'origin/runner-nestjs-tests...HEAD': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
No changes in the client directory. Skipping pre-push hook.
fatal: ambiguous argument 'origin/runner-nestjs-tests...HEAD': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
No changes in the servers/execution/runner/ directory. Skipping pre-push hook.
Enumerating objects: 10, done.
Counting objects: 100% (10/10), done.
Delta compression using up to 3 threads
Compressing objects: 100% (5/5), done.
Writing objects: 100% (6/6), 1.45 KiB | 185.00 KiB/s, done.
Total 6 (delta 2), reused 0 (delta 0)
remote: Resolving deltas: 100% (2/2), completed with 2 local objects.
To https://github.com/prasadtalasila/DTaaS-Public.git
aa5b699..2d9c63e runner-nestjs-tests -> runner-nestjs-tests
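A possible guard for the hook, sketched below: verify that the upstream branch exists before constructing the revision range. The remote name origin and the HEAD fallback are assumptions, not the hook's current behaviour.

```shell
# Sketch: compute a safe revision range for the pre-push diff.
diff_range() {
  # $1 = branch name; prints origin/<branch>...HEAD when the upstream
  # branch exists, otherwise falls back to HEAD
  if git rev-parse --verify --quiet "origin/$1" >/dev/null 2>&1; then
    echo "origin/$1...HEAD"
  else
    echo "HEAD"
  fi
}
# Usage in the hook: git diff --name-only "$(diff_range "$BRANCH")"
```

With the fallback in place, pushing a new branch from a fork no longer triggers the "ambiguous argument" error.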
Fix code quality issues identified by codeclimate
It might be a good occasion to provide for flexible application paths.
So far we are assuming that the application is always hosted at REACT_APP_URL. Most often, applications are hosted at certain URL paths, for example odin.cps.digit.au.dk/mediawiki. In this case, mediawiki is hosted at that URL, which has odin.cps.digit.au.dk as hostname and mediawiki as basepath.
It is possible to make React apps work that way too. For example, we can host DTaaS at REACT_APP_URL/dtaas. Here dtaas is the application basename and becomes the prefix for all the React Router paths. React BrowserRouter has a basename attribute that makes this possible. While doing this, it might be appropriate to separate out the routes array so that the createBrowserRouter() function call looks more readable.
Suggested refactoring is:
const routes = [
  {
    path: '/',
    element: <SignIn />,
  },
  // ....
  {
    path: 'workbench',
    element: (
      <PrivateRoute>
        <WorkBench />
      </PrivateRoute>
    ),
  },
];

const router = createBrowserRouter(routes, {
  basename: '/app', // this basename has to come from env.js
});
Dependabot auto-scans code for vulnerabilities. It can also check for package updates. This configuration is needed for the project.
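A minimal sketch of the needed .github/dependabot.yml; the directory list is an assumption based on the package locations mentioned elsewhere in this document:

```yaml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/client"
    schedule:
      interval: "weekly"
  - package-ecosystem: "npm"
    directory: "/servers/lib"
    schedule:
      interval: "weekly"
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```

Each updates entry tracks one package manifest; more npm directories can be appended as new microservices are added.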
Use the data to create different visualizations in a jupyter notebook.
A bit of explanation on the data:
The following visualizations are required:
These plots need to be saved as PNG and PDF, and the complete notebook needs to be saved as an HTML file.
Please use these sample notebooks as templates.
The React website requires route URLs for different pages. For example, the following URLs are required for the workbench page. The usual convention is to hard-code these URLs in the codebase. It is better to define these hard-coded URL patterns in env.js and let envUtils.ts load them into the codebase.
The variable parts like username can be taken from the logged-in username, but the rest have to come from env.js. React Router is capable of this kind of URL construction. It might be better to combine the env.js URL fragments approach with React Router's Route component.
Both in deploy/ and script/ directories.
The structure of first docker-compose file:
The structure of second docker-compose file: all the platform services
When a unit test fails, test.bash does not exit with any error code.
If the script is adapted to exit 1 when jest . fails, it fixes the issue, but also introduces new issues:
It can possibly be fixed by adding "Continue on error" to the test step in the pipeline and letting the step fail if either the coverage is below threshold or a test fails.
@prasadtalasila what is the preferred behavior in this case?
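For reference, a sketch of how test.bash could propagate the test runner's exit status instead of always exiting 0. The wrapper function name is made up for illustration:

```shell
# Sketch: run the test command and surface its exit status.
run_tests() {
  "$@"                                   # run the test command as given
  local status=$?
  echo "tests exited with status $status"
  return $status
}
# In test.bash this would be: run_tests jest . || exit 1
```

The pipeline step then fails naturally on a failing test, and "Continue on error" can be layered on top if the coverage check should not block.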
I fail to reproduce this issue. I tried with this env.js file in client/public:
window.env = {
REACT_APP_ENVIRONMENT: 'test',
REACT_APP_URL_LIB: 'http://localhost:4000/',
REACT_APP_URL_DT: 'http://localhost:4000/',
REACT_APP_URL_WORKBENCH: 'http://localhost:4000/',
};
These are the commands I ran:
git switch upstream/feature/distributed-demo~1 --detach
yarn clean
yarn
yarn build
yarn start
yarn develop
@prasadtalasila
Was the https / http missing from the URL during the demo? That tricked me to begin with. If that's the case, I'll update the documentation.
The lib microservice tests do not generate a code coverage report. Even though the GitHub Actions seem successful, it is because codecov fails silently.
The updates required to make the application work with basepath (say bar):
REACT_APP_AUTH_AUTHORITY: 'https://gitlab.foo.com/',
REACT_APP_REDIRECT_URI: 'https://foo.com/bar/Library',
REACT_APP_LOGOUT_REDIRECT_URI: 'https://foo.com/bar',
http:
routers:
dtaas:
entryPoints:
- http
rule: 'Host(`foo.com`)' #remember, there is no basepath for this rule
middlewares:
- basic-auth
service: dtaas
user1:
entryPoints:
- http
rule: 'Host(`foo.com`) && PathPrefix(`/bar/user1`)'
middlewares:
- basic-auth
service: user1
# Middleware: Basic authentication
middlewares:
basic-auth:
basicAuth:
usersFile: '/etc/traefik/auth'
removeHeader: true
services:
dtaas:
loadBalancer:
servers:
- url: 'http://localhost:4000'
user1:
loadBalancer:
servers:
- url: 'http://localhost:8090'
Use docs/admin/client/CLIENT.md for an example
Update deploy/install.sh by adding the basepath. For example, WORKSPACE_BASE_URL="bar/" for all user workspaces.
For user1, the docker command becomes:
docker run -d \
-p 8090:8080 \
--name "ml-workspace-user1" \
-v "${TOP_DIR}/files/user1:/workspace" \
-v "${TOP_DIR}/files/common:/workspace/common" \
--env AUTHENTICATE_VIA_JUPYTER="" \
--env WORKSPACE_BASE_URL="bar/user1" \
--shm-size 512m \
--restart always \
mltooling/ml-workspace:0.13.2 || true
The lib microservice doesn't work if it's put behind the Traefik gateway. The codebase for this setup is available in this fork. To test this problem, the following files have to be modified:
deploy/vagrant/single-machine/start.sh
servers/config/gateway/dynamic/fileConfig.yml
Gitlab SSO works, but the React code doesn't protect the routes. The checkAccessTokenValidity() authentication function mentioned in PR #25 (https://github.com/INTO-CPS-Association/DTaaS/pull/25) checks the token validity. The rest of the codebase needs to call this function before providing any functionality to the user.
Since PR #25 is yet to be merged, a good way to move forward is to create a TypeScript interface in client/src/util/authentication.ts and a dummy type implementation, and then integrate it into the rest of the codebase. @KarstenMalle and @Artin13 can change their code to implement this interface.
The end-to-end tests of the lib microservice fail unpredictably. This might have to do with the fixed timeouts of 10,000 milliseconds given in the test code.
@ravvnen , can you please confirm if this is the case? Thanks.
Quite a few microservices shall use the Gitlab GraphQL schema. It would be better to adopt a schema-first approach built on the Gitlab GraphQL schema.
Use the tutorial here to get the complete Gitlab GraphQL schema from the schema explorer.
Software to include:
The platform supported services must be available in user workspaces.
A script providing the SSH local port forwarding needs to be executed in ml-workspace containers as soon as they are launched.
The general idea is to spawn services on different servers and also reuse cloud services. In this case, users expect to use explicit URLs of external services. Thus both names must be supported.
For example, an InfluxDB hosted on influxdb.foo.com and port 8080 needs to be accessible in the user workspace with the following URI end points.
influx.foo.com:8080
localhost:8080
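The local port forwarding described above can be expressed with plain SSH. The gateway host and user below are assumptions for illustration:

```shell
# Sketch: build the SSH local-forward specification so that the external
# InfluxDB (influx.foo.com:8080) appears as localhost:8080 in the workspace.
forward_spec() {
  # $1 = local port, $2 = remote host, $3 = remote port
  echo "-L $1:$2:$3"
}
SPEC="$(forward_spec 8080 influx.foo.com 8080)"
echo "ssh -f -N $SPEC user@gateway.foo.com"
# Running the printed command inside the ml-workspace container sets up
# the forwarding; user@gateway.foo.com is a placeholder.
```

Such a command would have to run in each ml-workspace container right after it is launched, per the requirement above.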
The developer README puts TEST_PATH in the top-level .env file.
TEST_PATH="/Users/<Username>/DTaaS/servers/lib/test/data/test_assets"
This is best pushed to a test-specific environment file placed in servers/lib/test/.env.
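A sketch of the suggested move, creating the test-specific environment file (the TEST_PATH value is the example from the README):

```shell
# Create servers/lib/test/.env holding the test-only variable.
mkdir -p servers/lib/test
cat > servers/lib/test/.env <<'EOF'
TEST_PATH="/Users/<Username>/DTaaS/servers/lib/test/data/test_assets"
EOF
cat servers/lib/test/.env
```

The test setup would then load this file (e.g. via dotenv) instead of the top-level .env.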
The LIB-MS needs to have a yarn build command.
Make sure that all the code in deploy/ and script/ has been correctly ported from the release-v0.2 branch to feature/distributed-demo. (cross-check PR #94) - (Done)
The "Admin" tab is really "Installation", though there may be administrative (post-install) tasks there too. But it leaps into the detail of the installation without explaining that there are multiple stages and different bits of software that need to be installed in stages. This level of description may make sense for members of the development team, but to someone unfamiliar it is very difficult to follow/understand. - (Done)
There needs to be an overview of the installation in terms of "we're going to do this, then we're going to do that" so that an unfamiliar user can follow it more easily. Ideally, you need to be able to take a smart (but unfamiliar) person and put them in a room with this website, and have them follow your instructions to install the product. As it stands, I think they would be very confused very quickly. (cross-check PR #215) - (Done)
The instructions and install.sh need to be updated for basepath. All the ml-workspace containers need to have basepath as well. See issue #88 for more information
Add installation instructions for Gitlab OAuth integration. - (Done)
Add representative network diagrams on all the installation pages (the docs/developer/system/DTaaS.drawio) (cross-check PR #215) - (Done)
Admin --> Installation --> Cookbook page
docs/admin/guides to add notes on modifications to the standard deployment scenarios. Currently known scenarios:
- SSL certificates (ssl/) / LetsEncrypt certificates
Update servers/config/gateway/README.md to include auth in the volume mapping of docker container. - (Done)
Update the servers/config/gateway/README.md for traefik-gateway launch command to use auth. - (Done)
docker run -d \
--name "traefik-gateway" \
--network=host -v "$PWD/traefik.yml:/etc/traefik/traefik.yml" \
-v "$PWD/auth:/etc/traefik/auth" \
-v "$PWD/dynamic:/etc/traefik/dynamic" \
-v /var/run/docker.sock:/var/run/docker.sock \
traefik:v2.5
Document basepath installations and possibility of multiple installations. See issue #88
One Mermaid diagram showing the installation of different software components (cross-check PR #215) - (Done)
Create a new microservice to answer the file system calls.
The following changes are required to make the lib microservice work better:
- yarn install, is it due to not using the latest packages?
- schema.gql from codebase.
- Codecov reports are not being uploaded correctly.
We are using ml-workspace. We would like to replace it with ml-workspace-minimal.
Replace the code text containing mltooling/ml-workspace:0.13.2 with mltooling/ml-workspace-minimal:0.13.2.
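The swap can be sketched as a one-liner over the deploy scripts; the directory list is an assumption:

```shell
# Sketch: replace the full ml-workspace image with the minimal variant.
swap_image() {
  # $1 = file to edit in place (a .bak backup is kept)
  sed -i.bak 's|mltooling/ml-workspace:0.13.2|mltooling/ml-workspace-minimal:0.13.2|g' "$1"
}
# Example over the repository:
#   grep -rl 'mltooling/ml-workspace:0.13.2' deploy/ script/ \
#     | while read -r f; do swap_image "$f"; done
```

The .bak backups make it easy to diff and revert if a container fails to start with the minimal image.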
Use the integration server to check the correct functioning of the application.