
vhive-serverless / vswarm

A suite of representative serverless cloud-agnostic (i.e., dockerized) benchmarks

License: MIT License

JavaScript 3.37% Dockerfile 5.97% Python 43.98% Shell 3.55% Makefile 9.80% Go 29.29% C++ 0.36% Java 1.82% C# 1.66% HTML 0.19%
serverless benchmarking vhive vswarm knative knative-serving knative-eventing faas function-as-a-service


vswarm's Issues

Online-shop: emailservice

ToDo:

  • Add reference to original sources in readme.
  • Porting (move folder, split compose file into compose yaml and knative yaml, create Makefile)
  • Run function locally with docker compose
  • Add support for Knative
  • Continuous integration tests

Update invoker helloworld.proto to invoker.proto and rename functions

Describe the bug
Currently, the invoker's proto file is based on the helloworld.proto spec, which is not ideal because it:

  1. Creates confusion with functions like AES (which happen to use helloworld.proto, but need not be limited to it; indeed, standalone functions like the online shop won't use it).
  2. Limits the invoker to SayHello messages, which is a poor name for what the invoker does.

This will be especially true after the relay is added to vSwarm, because how the invoker speaks to the relay and how the relay speaks to AES (and other standalone functions) will differ.

Expected behavior
A different proto structure for invoker (and subsequently other standalone functions).

My question is: will this end up breaking other functions in vSwarm? How much refactoring would be needed to change this?
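A minimal sketch of one way the decoupling could look on the Go side. The names Invocation and FunctionClient are hypothetical, not the repository's actual API; the idea is that the invoker depends on a small interface and each protocol (helloworld, relay, online shop, ...) supplies an adapter:

package invoker

import "context"

// Invocation is a protocol-agnostic request that the invoker hands
// to a protocol adapter instead of a hard-coded SayHello message.
type Invocation struct {
	Payload []byte
}

// FunctionClient abstracts over helloworld's SayHello and any future
// proto, so adding a protocol does not require touching the invoker core.
type FunctionClient interface {
	Invoke(ctx context.Context, inv Invocation) ([]byte, error)
}

Refactoring toward such an interface would mostly touch the invoker itself; benchmarks that keep serving helloworld.proto could continue to work through a helloworld adapter.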

Design Protocol Relay

Problem: Support for different protocols in functions
To work with the testing framework, all functions currently need to implement the helloworld protocol.
This is inelegant and makes it harder to integrate new and different kinds of functions.
It also has the large drawback of limiting the input, even though functions can often exercise different functionality depending on their input.

Describe the solution you'd like
What we need is a protocol relay that:

  1. Translates a helloworld request into any protocol we choose.
  2. Generates different kinds of input patterns, also of our choosing.

Translation
One needs to implement a helloworld server that gets triggered by a SayHello request.
After this trigger, an input for the downstream protocol is generated and forwarded.
To build this server, one can start with this and the grpcclient implementation: it creates a base interface for any kind of protocol, and with the getGrpc client one can select which client function to implement.

For the server, we want an input flag (-p / -proto) with which one can select which protocol client to use, as in the sketch below.
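A runnable sketch of the flag and client selection; the client types are hypothetical stand-ins for generated gRPC clients, and only the long -proto form is shown (-p would be a second flag bound to the same variable):

package main

import (
	"flag"
	"fmt"
	"log"
)

// Client is the assumed base interface that every downstream
// protocol client implements.
type Client interface {
	Invoke(payload string) (string, error)
}

// Stub clients standing in for generated gRPC clients.
type helloWorldClient struct{}

func (c *helloWorldClient) Invoke(p string) (string, error) { return "Hello " + p, nil }

type fibonacciClient struct{}

func (c *fibonacciClient) Invoke(p string) (string, error) { return "fib(" + p + ")", nil }

// getClient mirrors the proposed getGrpc-style selection of the
// downstream protocol client.
func getClient(name string) (Client, error) {
	switch name {
	case "helloworld":
		return &helloWorldClient{}, nil
	case "fibonacci":
		return &fibonacciClient{}, nil
	default:
		return nil, fmt.Errorf("unknown protocol %q", name)
	}
}

func main() {
	proto := flag.String("proto", "helloworld", "downstream protocol client to use")
	flag.Parse()

	c, err := getClient(*proto)
	if err != nil {
		log.Fatal(err)
	}
	resp, err := c.Invoke("world")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp)
}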

Input generation
We want different parameters to control input-value generation (see the sketch after this list):

  1. Type (single, linear, random (with fixed seed))
  2. Max number of inputs (n)
  3. Range
    With these, one can for example generate n=20 random numbers in the range 20-100 for the fibonacci function.
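A sketch of an input generator under these parameters; the function name and the fixed seed value are assumptions:

package main

import (
	"fmt"
	"math/rand"
)

// generateInputs produces n integer inputs in [lo, hi] according to typ:
// "single" repeats lo, "linear" sweeps evenly from lo to hi, and
// "random" draws uniformly with a fixed seed for reproducibility.
func generateInputs(typ string, n, lo, hi int) []int {
	out := make([]int, 0, n)
	switch typ {
	case "single":
		for i := 0; i < n; i++ {
			out = append(out, lo)
		}
	case "linear":
		step := 0
		if n > 1 {
			step = (hi - lo) / (n - 1)
		}
		for i := 0; i < n; i++ {
			out = append(out, lo+i*step)
		}
	case "random":
		r := rand.New(rand.NewSource(42)) // fixed seed
		for i := 0; i < n; i++ {
			out = append(out, lo+r.Intn(hi-lo+1))
		}
	}
	return out
}

func main() {
	// The example from above: 20 random fibonacci inputs in [20, 100].
	fmt.Println(generateInputs("random", 20, 20, 100))
}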

Note: the relay should be very fast so that it does not limit us in the future; therefore, use Golang and a static instantiation of the downstream protocol.

In addition to do:

  • Create container
  • Makefile
    • Build all protocols
    • Build relay container
  • Support tracing so that we know the relay overhead
  • CI pipeline

gg llvm CI broken

Describe the bug
The LLVM benchmark works when run by hand, but not in the CI. The CI logs show this warning and error:

[warning] socket failure: TinvYAPwlXN7BsTH8QS2bXR0XHsOm00s1IA0YECBHxVk00008993
gg-force: unhandled poller failure happened, job is not finished

The llvm driver pod crashes and restarts shortly after. I think the gg executor (aka gg-port-0) is never invoked.

This is additionally surprising given that the exact same style of CI is used for the other gg examples, which work fine, and the llvm driver function is essentially the same as the other gg drivers too. And again, just to emphasise, this benchmark (same driver and executor functions) works when trying to reproduce it manually.

To Reproduce
Run the CI (file e2e-gg-llvm.yml)

Expected behavior
The CI should complete successfully

Logs
Please see the logs in this CI run for an example; any llvm CI run should currently show similar results.

Building corral benchmark breaks

Describe the bug
CI breaks because the corral container cannot be built; building locally fails with a missing go.sum entry:

go build -o ./bin/word_count .
go: github.com/ease-lab/vSwarm/utils/tracing/[email protected] requires
        github.com/containerd/[email protected]: missing go.sum entry; to add it:
        go mod download github.com/containerd/containerd
make: *** [Makefile:10: word_count] Error 1

To Reproduce
cd into benchmarks/corral
call make

Auth knative throws RevisionMissing error when setting up revision

Describe the bug
In the standalone-functions branch, when you run auth-python on Knative using the harshitgarg22/auth-python Docker image, it doesn't run, and kn service list gives a RevisionMissing error. The invoker can reach the URL, which means the route is created successfully, but the empty rps/latency output means nothing is being invoked.

To Reproduce

  1. Checkout to standalone-functions
  2. cd benchmarks/auth
  3. kn service apply -f ./knative_yamls/auth-python.yaml
  4. kn service list
  5. ../../tools/invoker -port 50061 -dbg -time 10 -rps 1

Expected behavior
It should run and invoker should return a non-empty rps latency file.

Logs
kubectl get revision auth-python-00001 -o yaml gives:

apiVersion: serving.knative.dev/v1
kind: Revision
metadata:
  annotations:
    serving.knative.dev/creator: kubernetes-admin
    serving.knative.dev/routingStateModified: "2022-02-08T07:10:29Z"
  creationTimestamp: "2022-02-07T20:03:24Z"
  generation: 1
  labels:
    serving.knative.dev/configuration: auth-python
    serving.knative.dev/configurationGeneration: "1"
    serving.knative.dev/configurationUID: a97362fb-b6ee-4a13-838c-125cf2d5a0a3
    serving.knative.dev/routingState: reserve
    serving.knative.dev/service: auth-python
    serving.knative.dev/serviceUID: a48bc9b0-b43a-4483-b283-1c25c70d4d94
  managedFields:
  - apiVersion: serving.knative.dev/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:serving.knative.dev/creator: {}
          f:serving.knative.dev/routingStateModified: {}
        f:labels:
          .: {}
          f:serving.knative.dev/configuration: {}
          f:serving.knative.dev/configurationGeneration: {}
          f:serving.knative.dev/configurationUID: {}
          f:serving.knative.dev/routingState: {}
          f:serving.knative.dev/service: {}
          f:serving.knative.dev/serviceUID: {}
        f:ownerReferences: {}
      f:spec:
        .: {}
        f:containerConcurrency: {}
        f:containers: {}
        f:enableServiceLinks: {}
        f:timeoutSeconds: {}
      f:status:
        .: {}
        f:actualReplicas: {}
        f:conditions: {}
        f:containerStatuses: {}
        f:observedGeneration: {}
    manager: controller
    operation: Update
    time: "2022-02-08T07:10:29Z"
  name: auth-python-00001
  namespace: default
  ownerReferences:
  - apiVersion: serving.knative.dev/v1
    blockOwnerDeletion: true
    controller: true
    kind: Configuration
    name: auth-python
    uid: a97362fb-b6ee-4a13-838c-125cf2d5a0a3
  resourceVersion: "6379410"
  uid: e5ea28c1-a885-4e4f-a025-af99115ca7d0
spec:
  containerConcurrency: 0
  containers:
  - image: docker.io/harshitgarg22/auth-python:latest
    imagePullPolicy: Always
    name: user-container
    ports:
    - containerPort: 50061
      name: h2c
      protocol: TCP
    readinessProbe:
      successThreshold: 1
      tcpSocket:
        port: 0
    resources: {}
  enableServiceLinks: false
  timeoutSeconds: 300
status:
  actualReplicas: 0
  conditions:
  - lastTransitionTime: "2022-02-07T20:13:56Z"
    message: The target is not receiving traffic.
    reason: NoTraffic
    severity: Info
    status: "False"
    type: Active
  - lastTransitionTime: "2022-02-07T20:13:26Z"
    message: |
      Container failed with: Traceback (most recent call last):
        File "/app/server.py", line 13, in <module>
          import helloworld_pb2
      ModuleNotFoundError: No module named 'helloworld_pb2'
    reason: ExitCode1
    status: "False"
    type: ContainerHealthy
  - lastTransitionTime: "2022-02-07T20:13:56Z"
    message: Initial scale was never achieved
    reason: ProgressDeadlineExceeded
    status: "False"
    type: Ready
  - lastTransitionTime: "2022-02-07T20:13:56Z"
    message: Initial scale was never achieved
    reason: ProgressDeadlineExceeded
    status: "False"
    type: ResourcesAvailable
  containerStatuses:
  - imageDigest: index.docker.io/harshitgarg22/auth-python@sha256:a2b7e9b0a84f36010374f17d6bdfd5930ded29352863260d67a8574ce4b45203
    name: user-container
  observedGeneration: 1

Notes
Interestingly, I get this error

  - lastTransitionTime: "2022-02-07T20:13:26Z"
    message: |
      Container failed with: Traceback (most recent call last):
        File "/app/server.py", line 13, in <module>
          import helloworld_pb2
      ModuleNotFoundError: No module named 'helloworld_pb2'

But I did not get this error when I ran it locally using docker-compose -f ./compose_yamls/docker-compose-auth-python.yaml up. I have added COPY ./proto/* /app/ in Dockerfile.python, so I expected it to run on Knative as well.

gg CI to use temp s3 subdirectory

The gg CIs should create and use unique subdirectories inside the default S3 bucket, and empty them after use.
We are currently using the same directory for everything, which might create issues later. A sketch of the idea follows below.
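A sketch with the AWS SDK for Go (v1); the bucket name and prefix scheme are assumptions, and pagination of the listing is omitted for brevity:

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession()))
	bucket := "gg-ci-bucket" // assumed default bucket name

	// Unique per-run prefix so concurrent CI runs don't collide.
	prefix := fmt.Sprintf("ci-run-%d/", time.Now().UnixNano())
	fmt.Println("using S3 prefix:", prefix)

	// ... the gg CI writes all of its objects under bucket/prefix ...

	// Cleanup: list everything under the prefix and delete it.
	out, err := svc.ListObjectsV2(&s3.ListObjectsV2Input{
		Bucket: aws.String(bucket),
		Prefix: aws.String(prefix),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, obj := range out.Contents {
		if _, err := svc.DeleteObject(&s3.DeleteObjectInput{
			Bucket: aws.String(bucket),
			Key:    obj.Key,
		}); err != nil {
			log.Fatal(err)
		}
	}
}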

Fibonacci function

ToDo:

  • Run function locally with docker compose
    • python
    • go
    • nodejs
  • Add support for Knative
    • python
    • go
    • nodejs
  • Tracing support (optional)
  • Continuous integration tests

Wrong implementation in Authentication

Describe the bug
Just realized that the Auth server implementations take no arguments.
Please compare all files against AES and make them consistent.

Deployer cannot deploy async (eventing) workflows

As the deployer's deployFunction() is in essence a call to kn service apply, it fails to deploy Knative Eventing (i.e., async) workflows, and it is NOT easily extensible to support them, since those workflows require additional components (brokers, triggers, and quite possibly channels, subscriptions, and even sink bindings) to be set up with specific parameters.

The key question we need to ask ourselves is how we differentiate our deployer from kubectl apply --filename X, and whether a simpler tool that leverages kubectl for deploying and waiting would be a better alternative, as sketched below.
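A minimal sketch of that alternative, shelling out to kubectl for both steps; the manifest path and resource name are placeholders:

package main

import (
	"log"
	"os/exec"
)

// applyAndWait applies a manifest (serving or eventing alike) and then
// blocks until the resource reports Ready, which kn service apply
// cannot do for brokers, triggers, channels, etc.
func applyAndWait(manifest, resource string) error {
	if out, err := exec.Command("kubectl", "apply", "--filename", manifest).CombinedOutput(); err != nil {
		log.Printf("apply failed: %s", out)
		return err
	}
	out, err := exec.Command("kubectl", "wait", "--for=condition=Ready",
		resource, "--timeout=600s").CombinedOutput()
	if err != nil {
		log.Printf("wait failed: %s", out)
	}
	return err
}

func main() {
	if err := applyAndWait("service.yaml", "ksvc/consumer"); err != nil {
		log.Fatal(err)
	}
}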

gg - migrate "Make gg_framework parameterized" commit

The changes made to the original vHive gg port branch in the commit named Make gg_framework parameterized also need to be ported into this repository. Once this is done, the name of the gg executor (defined in this manifest) can be changed back to gg-framework instead of gg-port-0 (a temporary fix while this name was still hard-coded), and the gg driver scripts and CI might need to be updated as well.

xdt-vhive module authentication

The xdt-vhive module breaks (1) container builds and (2) go mod download

Logs:
(1) https://github.com/ease-lab/vSwarm/runs/6060750238?check_suite_focus=true#step:5:174

#17 [builder 8/9] RUN go mod tidy
#17 sha256:4f6cb49384556377e5104c8dfab05b6072043e959c6646f5bd61c203be40524d
#17 2.512 go: github.com/ease-lab/vhive-xdt/sdk/[email protected]: invalid version: git fetch -f origin refs/heads/*:refs/heads/* refs/tags/*:refs/tags/* in /go/pkg/mod/cache/vcs/9a55d04d66b40bef8fcfb41f84ee0a31b34b74f38389df85a9774f7df3d7887b: exit status 128:
#17 2.512 	remote: Invalid username or password.
#17 2.512 	fatal: Authentication failed for '***github.com/ease-lab/vhive-xdt/'
#17 ERROR: executor failed running [/bin/sh -c go mod tidy]: exit code: 1
------
 > [builder 8/9] RUN go mod tidy:
------
executor failed running [/bin/sh -c go mod tidy]: exit code: 1

(2) https://github.com/ease-lab/vSwarm/runs/6180720496?check_suite_focus=true#step:4:29

Running [/home/runner/golangci-lint-1.45.2-linux-amd64/golangci-lint run --out-format=github-actions --path-prefix=./benchmarks/chained-function-serving --timeout 5m] in [/home/runner/work/vSwarm/vSwarm/benchmarks/chained-function-serving] ...
  level=error msg="Running error: context loading failed: failed to load packages: failed to load with go/packages: err: exit status 1: stderr: go: github.com/ease-lab/vhive-xdt/sdk/[email protected]: invalid version: git fetch -f origin refs/heads/*:refs/heads/* refs/tags/*:refs/tags/* in /home/runner/go/pkg/mod/cache/vcs/9a55d04d66b40bef8fcfb41f84ee0a31b34b74f38389df85a9774f7df3d7887b: exit status 128:\n\tfatal: could not read Username for 'https://github.com/': terminal prompts disabled\n"

Cannot deploy functions on single node cluster: RevisionMissing

Describe the bug
I use scripts/cloudlab/start_onenode_vhive_cluster.sh to create a single-node cluster on a QEMU-KVM VM with a fresh Ubuntu 20.04 image. The setup completes and Knative is up. However, when I try to deploy hello-world or other functions, the deployer hangs forever and Knative reports RevisionMissing : Configuration "helloworld-0" is waiting for a Revision to become ready.

To Reproduce

export GITHUB_VHIVE_ARGS="[-dbg] [-snapshots] [-upf]" # specify if to enable debug logs; cold starts: snapshots, REAP snapshots (optional)
scripts/cloudlab/start_onenode_vhive_cluster.sh

Then, in another terminal:

source /etc/profile && pushd ./examples/deployer && go build && popd && ./examples/deployer/deployer

Expected behavior
Hello-world function should be deployed successfully.

Logs
Possibly relevant errors from scripts/cloudlab/start_onenode_vhive_cluster.sh:

Clean up host resources if left after previous runs
W0427 12:19:47.168986  842831 loader.go:221] Config not found: /etc/kubernetes/admin.conf
Error: no kubeconfig has been provided, please use a valid configuration to connect to the cluster
Run 'kn --help' for usage
[preflight] Running pre-flight checks
W0427 12:19:47.264698  842841 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0427 12:19:49.301695  842841 cleanupnode.go:81] [reset] Failed to remove containers: output: time="2022-04-27T12:19:49Z" level=fatal msg="connect: connect endpoint 'unix:///etc/vhive-cri/vhive-cri.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
, error: exit status 1
...
Removed fc-dev-thinpool
device-mapper: remove ioctl on fc-dev-thinpool  failed: No such device or address
Command failed.
rm: cannot remove '/etc/vhive-cri/vhive-cri.sock': No such file or directory
rm: cannot remove '/home/ubuntu/.kube/config': No such file or directory
Cleaning /var/lib/firecracker-containerd/runtime /var/lib/firecracker-containerd/snapshotter
Cleaning /run/firecracker-containerd/*
Creating a fresh devmapper
0 209715200 thin-pool /dev/loop19 /dev/loop18 128 32768 1 skip_block_zeroing
device-mapper: reload ioctl on fc-dev-thinpool  failed: No such device or address
Command failed.
...
horizontalpodautoscaler.autoscaling/broker-ingress-hpa created
horizontalpodautoscaler.autoscaling/broker-filter-hpa created
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                                                                      AGE
istio-ingressgateway   LoadBalancer   10.104.231.15   192.168.1.240   15021:31845/TCP,80:31468/TCP,443:31490/TCP,15012:32595/TCP,15443:32031/TCP   67s
All logs are stored in /tmp/ctrd-logs/

kn service list (I tried both hello-world and decoder):

NAME           URL                                                  LATEST   AGE     CONDITIONS   READY     REASON
decoder        http://decoder.default.192.168.1.240.sslip.io                 10m     0 OK / 3     False     RevisionMissing : Configuration "decoder" does not have any ready Revision.
helloworld-0   http://helloworld-0.default.192.168.1.240.sslip.io            6m38s   0 OK / 3     Unknown   RevisionMissing : Configuration "helloworld-0" is waiting for a Revision to become ready.

tail -n 30 /tmp/ctrd-logs/ctrd.err:

time="2022-04-27T12:34:46.735084996Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:2848b4705f4559208a4124826446a9f27b5d1e7118e056cbbb58996b48234fb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-04-27T12:34:46.786854073Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/vhiveease/video-analytics-decoder@sha256:ac850dc42e6517db91fba40ca0e44b2c071897346422adb072ab48a6ce03e37b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-04-27T12:34:46.788573172Z" level=info msg="PullImage \"index.docker.io/vhiveease/video-analytics-decoder@sha256:ac850dc42e6517db91fba40ca0e44b2c071897346422adb072ab48a6ce03e37b\" returns image reference \"sha256:2848b4705f4559208a4124826446a9f27b5d1e7118e056cbbb58996b48234fb4\""
time="2022-04-27T12:34:55.823288227Z" level=info msg="CreateContainer within sandbox \"a24ab5fa2abb7a203c98e7d93b37c8094bc1604ac377c791c391760f01229363\" for container &ContainerMetadata{Name:user-container,Attempt:29,}"
time="2022-04-27T12:34:58.239102083Z" level=info msg="CreateContainer within sandbox \"a24ab5fa2abb7a203c98e7d93b37c8094bc1604ac377c791c391760f01229363\" for &ContainerMetadata{Name:user-container,Attempt:29,} returns container id \"1647b0b5270ac945062e5af13e13a5074ebc282822dd5d0343f221169720e61d\""
time="2022-04-27T12:34:58.820507776Z" level=info msg="PullImage \"index.docker.io/vhiveease/video-analytics-decoder@sha256:ac850dc42e6517db91fba40ca0e44b2c071897346422adb072ab48a6ce03e37b\""
time="2022-04-27T12:35:00.199846800Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/vhiveease/video-analytics-decoder@sha256:ac850dc42e6517db91fba40ca0e44b2c071897346422adb072ab48a6ce03e37b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-04-27T12:35:00.324660325Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:2848b4705f4559208a4124826446a9f27b5d1e7118e056cbbb58996b48234fb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-04-27T12:35:00.396102718Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/vhiveease/video-analytics-decoder@sha256:ac850dc42e6517db91fba40ca0e44b2c071897346422adb072ab48a6ce03e37b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-04-27T12:35:00.398102089Z" level=info msg="PullImage \"index.docker.io/vhiveease/video-analytics-decoder@sha256:ac850dc42e6517db91fba40ca0e44b2c071897346422adb072ab48a6ce03e37b\" returns image reference \"sha256:2848b4705f4559208a4124826446a9f27b5d1e7118e056cbbb58996b48234fb4\""
time="2022-04-27T12:35:00.403763322Z" level=info msg="CreateContainer within sandbox \"cb1585eeca3ee8af9ae75bd9ecf1309c64d150c29be33d67c5b931c248d81591\" for container &ContainerMetadata{Name:user-container,Attempt:22,}"
time="2022-04-27T12:35:00.404074431Z" level=error msg="get state for cb1585eeca3ee8af9ae75bd9ecf1309c64d150c29be33d67c5b931c248d81591" error="context canceled: unknown"
time="2022-04-27T12:35:00.404172402Z" level=warning msg="unknown status" status=0
time="2022-04-27T12:35:01.952869360Z" level=info msg="CreateContainer within sandbox \"cb1585eeca3ee8af9ae75bd9ecf1309c64d150c29be33d67c5b931c248d81591\" for &ContainerMetadata{Name:user-container,Attempt:22,} returns container id \"24d009e8432f6ee2485bc729a1eaa7e2a385afe54fed63aa9b3429f8f8605eb7\""
time="2022-04-27T12:35:09.824499102Z" level=info msg="CreateContainer within sandbox \"a24ab5fa2abb7a203c98e7d93b37c8094bc1604ac377c791c391760f01229363\" for container &ContainerMetadata{Name:user-container,Attempt:30,}"
time="2022-04-27T12:35:12.106024088Z" level=info msg="CreateContainer within sandbox \"a24ab5fa2abb7a203c98e7d93b37c8094bc1604ac377c791c391760f01229363\" for &ContainerMetadata{Name:user-container,Attempt:30,} returns container id \"8cbd8b68bd11c482708df287b31dddc36c826dd70c42f3df5a9fc524a92f7189\""
time="2022-04-27T12:35:12.824292721Z" level=info msg="PullImage \"index.docker.io/vhiveease/video-analytics-decoder@sha256:ac850dc42e6517db91fba40ca0e44b2c071897346422adb072ab48a6ce03e37b\""
time="2022-04-27T12:35:14.436304578Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/vhiveease/video-analytics-decoder@sha256:ac850dc42e6517db91fba40ca0e44b2c071897346422adb072ab48a6ce03e37b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-04-27T12:35:14.549422361Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:2848b4705f4559208a4124826446a9f27b5d1e7118e056cbbb58996b48234fb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-04-27T12:35:14.574547810Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/vhiveease/video-analytics-decoder@sha256:ac850dc42e6517db91fba40ca0e44b2c071897346422adb072ab48a6ce03e37b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-04-27T12:35:14.575633084Z" level=info msg="PullImage \"index.docker.io/vhiveease/video-analytics-decoder@sha256:ac850dc42e6517db91fba40ca0e44b2c071897346422adb072ab48a6ce03e37b\" returns image reference \"sha256:2848b4705f4559208a4124826446a9f27b5d1e7118e056cbbb58996b48234fb4\""
time="2022-04-27T12:35:21.823556614Z" level=info msg="CreateContainer within sandbox \"a24ab5fa2abb7a203c98e7d93b37c8094bc1604ac377c791c391760f01229363\" for container &ContainerMetadata{Name:user-container,Attempt:31,}"
time="2022-04-27T12:35:24.106022322Z" level=info msg="CreateContainer within sandbox \"a24ab5fa2abb7a203c98e7d93b37c8094bc1604ac377c791c391760f01229363\" for &ContainerMetadata{Name:user-container,Attempt:31,} returns container id \"c23831b0bcff8783b48995041418ccc44f845ff9f18fbc64fba7f139c29a99c4\""
time="2022-04-27T12:35:29.819833382Z" level=info msg="PullImage \"index.docker.io/vhiveease/video-analytics-decoder@sha256:ac850dc42e6517db91fba40ca0e44b2c071897346422adb072ab48a6ce03e37b\""
time="2022-04-27T12:35:31.355462870Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/vhiveease/video-analytics-decoder@sha256:ac850dc42e6517db91fba40ca0e44b2c071897346422adb072ab48a6ce03e37b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-04-27T12:35:31.471350718Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:2848b4705f4559208a4124826446a9f27b5d1e7118e056cbbb58996b48234fb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-04-27T12:35:31.522284700Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/vhiveease/video-analytics-decoder@sha256:ac850dc42e6517db91fba40ca0e44b2c071897346422adb072ab48a6ce03e37b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
time="2022-04-27T12:35:31.523983438Z" level=info msg="PullImage \"index.docker.io/vhiveease/video-analytics-decoder@sha256:ac850dc42e6517db91fba40ca0e44b2c071897346422adb072ab48a6ce03e37b\" returns image reference \"sha256:2848b4705f4559208a4124826446a9f27b5d1e7118e056cbbb58996b48234fb4\""
time="2022-04-27T12:35:32.828161219Z" level=info msg="CreateContainer within sandbox \"a24ab5fa2abb7a203c98e7d93b37c8094bc1604ac377c791c391760f01229363\" for container &ContainerMetadata{Name:user-container,Attempt:32,}"
time="2022-04-27T12:35:34.765950911Z" level=info msg="CreateContainer within sandbox \"a24ab5fa2abb7a203c98e7d93b37c8094bc1604ac377c791c391760f01229363\" for &ContainerMetadata{Name:user-container,Attempt:32,} returns container id \"494c5d284280741d9e93c31abb91d6a6d47f147de438d2d2d007d737fd4443d3\""

tail -n 30 fccd.err:

time="2022-04-27T12:36:12.209974072Z" level=info msg="Attaching drive /var/lib/firecracker-containerd/shim-base/firecracker-containerd/48/ctrstub0, slot MN2HE43UOVRDA, root false." runtime=aws.firecracker
time="2022-04-27T12:36:12.210387236Z" level=info msg="Attached drive /var/lib/firecracker-containerd/shim-base/firecracker-containerd/48/ctrstub0: [PUT /drives/{drive_id}][204] putGuestDriveByIdNoContent " runtime=aws.firecracker
time="2022-04-27T12:36:12.210420813Z" level=info msg="Attaching NIC 48_tap (hwaddr 02:FC:00:00:00:2F) at index 1" runtime=aws.firecracker
time="2022-04-27T12:36:12.238885543Z" level=info msg="startInstance successful: [PUT /actions][204] createSyncActionNoContent " runtime=aws.firecracker
time="2022-04-27T12:36:12.238970308Z" level=info msg="calling agent" runtime=aws.firecracker vmID=48
time="2022-04-27T12:36:12.286834699Z" level=warning msg="firecracker exited: exit status 1" runtime=aws.firecracker
time="2022-04-27T12:36:12.339458965Z" level=error attempt=1 error="non-temporary vsock dial failure: failed to dial \"firecracker.vsock\" within 100ms: dial unix firecracker.vsock: connect: connection refused" runtime=aws.firecracker vmID=48
time="2022-04-27T12:36:12.339617147Z" level=error msg="failed to create VM" error="failed to dial the VM over vsock: non-temporary vsock dial failure: failed to dial \"firecracker.vsock\" within 100ms: dial unix firecracker.vsock: connect: connection refused" runtime=aws.firecracker vmID=48
time="2022-04-27T12:36:12.340282563Z" level=error msg="shim CreateVM returned error" error="rpc error: code = Unknown desc = failed to create VM: failed to dial the VM over vsock: non-temporary vsock dial failure: failed to dial \"firecracker.vsock\" within 100ms: dial unix firecracker.vsock: connect: connection refused"
time="2022-04-27T12:36:12.344061496Z" level=debug msg="shim has been terminated" error="signal: killed" vmID=48
time="2022-04-27T12:36:28.130926714Z" level=debug msg="create VM request: VMID:\"49\" MachineCfg:<MemSizeMib:256 VcpuCount:1 > KernelArgs:\"ro noapic reboot=k panic=1 pci=off nomodules systemd.log_color=false systemd.unit=firecracker.target init=/sbin/overlay-init tsc=reliable quiet 8250.nr_uarts=0 ipv6.disable=1\" NetworkInterfaces:<StaticConfig:<MacAddress:\"02:FC:00:00:00:30\" HostDevName:\"49_tap\" IPConfig:<PrimaryAddr:\"190.128.0.50/10\" GatewayAddr:\"190.128.0.1\" Nameservers:\"8.8.8.8\" > > > TimeoutSeconds:100 "
time="2022-04-27T12:36:28.131072912Z" level=debug msg="using namespace: firecracker-containerd"
time="2022-04-27T12:36:28.131637723Z" level=debug msg="starting containerd-shim-aws-firecracker" vmID=49
time="2022-04-27T12:36:28.182120893Z" level=info msg="starting signal loop" namespace=firecracker-containerd path=/var/lib/firecracker-containerd/shim-base/firecracker-containerd/49 pid=866506
time="2022-04-27T12:36:28.182824480Z" level=info msg="creating new VM" runtime=aws.firecracker vmID=49
time="2022-04-27T12:36:28.183358613Z" level=info msg="Called startVMM(), setting up a VMM on firecracker.sock" runtime=aws.firecracker
time="2022-04-27T12:36:28.198105887Z" level=info msg="refreshMachineConfiguration: [GET /machine-config][200] getMachineConfigurationOK  &{CPUTemplate:T2 HtEnabled:0xc00041b1c3 MemSizeMib:0xc00041b1b8 TrackDirtyPages:false VcpuCount:0xc00041b1b0}" runtime=aws.firecracker
time="2022-04-27T12:36:28.198494344Z" level=info msg="PutGuestBootSource: [PUT /boot-source][204] putGuestBootSourceNoContent " runtime=aws.firecracker
time="2022-04-27T12:36:28.198533415Z" level=info msg="Attaching drive /var/lib/firecracker-containerd/runtime/default-rootfs.img, slot root_drive, root true." runtime=aws.firecracker
time="2022-04-27T12:36:28.199089217Z" level=info msg="Attached drive /var/lib/firecracker-containerd/runtime/default-rootfs.img: [PUT /drives/{drive_id}][204] putGuestDriveByIdNoContent " runtime=aws.firecracker
time="2022-04-27T12:36:28.199118722Z" level=info msg="Attaching drive /var/lib/firecracker-containerd/shim-base/firecracker-containerd/49/ctrstub0, slot MN2HE43UOVRDA, root false." runtime=aws.firecracker
time="2022-04-27T12:36:28.199400536Z" level=info msg="Attached drive /var/lib/firecracker-containerd/shim-base/firecracker-containerd/49/ctrstub0: [PUT /drives/{drive_id}][204] putGuestDriveByIdNoContent " runtime=aws.firecracker
time="2022-04-27T12:36:28.199454442Z" level=info msg="Attaching NIC 49_tap (hwaddr 02:FC:00:00:00:30) at index 1" runtime=aws.firecracker
time="2022-04-27T12:36:28.228479548Z" level=info msg="startInstance successful: [PUT /actions][204] createSyncActionNoContent " runtime=aws.firecracker
time="2022-04-27T12:36:28.228512818Z" level=info msg="calling agent" runtime=aws.firecracker vmID=49
time="2022-04-27T12:36:28.266615417Z" level=warning msg="firecracker exited: exit status 1" runtime=aws.firecracker
time="2022-04-27T12:36:28.329077788Z" level=error attempt=1 error="non-temporary vsock dial failure: failed to dial \"firecracker.vsock\" within 100ms: dial unix firecracker.vsock: connect: connection refused" runtime=aws.firecracker vmID=49
time="2022-04-27T12:36:28.329364130Z" level=error msg="failed to create VM" error="failed to dial the VM over vsock: non-temporary vsock dial failure: failed to dial \"firecracker.vsock\" within 100ms: dial unix firecracker.vsock: connect: connection refused" runtime=aws.firecracker vmID=49
time="2022-04-27T12:36:28.330141862Z" level=error msg="shim CreateVM returned error" error="rpc error: code = Unknown desc = failed to create VM: failed to dial the VM over vsock: non-temporary vsock dial failure: failed to dial \"firecracker.vsock\" within 100ms: dial unix firecracker.vsock: connect: connection refused"
time="2022-04-27T12:36:28.333953679Z" level=debug msg="shim has been terminated" error="signal: killed" vmID=49

tail -n 10 /tmp/ctrd-logs/orch.out:

time="2022-04-27T12:36:58.445290263Z" level=error error="VM config for pod does not exist"
time="2022-04-27T12:37:06.221295742Z" level=error error="failed to provide non empty guest image in user container config"
time="2022-04-27T12:37:06.227014375Z" level=error msg="VM config for pod cb1585eeca3ee8af9ae75bd9ecf1309c64d150c29be33d67c5b931c248d81591 does not exist"
time="2022-04-27T12:37:06.227060064Z" level=error error="VM config for pod does not exist"
time="2022-04-27T12:37:13.145686045Z" level=warning msg="Failed to Fetch k8s dns clusterIP exit status 1\nThe connection to the server localhost:8080 was refused - did you specify the right host or port?\n\n"
time="2022-04-27T12:37:13.145811084Z" level=warning msg="Using google dns 8.8.8.8\n"
time="2022-04-27T12:37:13.421471045Z" level=error msg="coordinator failed to start VM" error="failed to create the microVM in firecracker-containerd: rpc error: code = Unknown desc = failed to create VM: failed to dial the VM over vsock: non-temporary vsock dial failure: failed to dial \"firecracker.vsock\" within 100ms: dial unix firecracker.vsock: connect: connection refused" image="ghcr.io/ease-lab/helloworld:var_workload" vmID=55
time="2022-04-27T12:37:13.421693375Z" level=error msg="failed to start VM" error="failed to create the microVM in firecracker-containerd: rpc error: code = Unknown desc = failed to create VM: failed to dial the VM over vsock: non-temporary vsock dial failure: failed to dial \"firecracker.vsock\" within 100ms: dial unix firecracker.vsock: connect: connection refused"
time="2022-04-27T12:37:13.449648098Z" level=error msg="VM config for pod a24ab5fa2abb7a203c98e7d93b37c8094bc1604ac377c791c391760f01229363 does not exist"
time="2022-04-27T12:37:13.449716879Z" level=error error="VM config for pod does not exist"

Nothing generated from vHive profiling tool

Describe the bug
I ran the vHive profiling tool following this tutorial. However, profile.csv is not generated; I think the profiling step has been skipped somehow. Please refer to the following log section.

To Reproduce
On a Cloudlab node:

git clone https://github.com/ease-lab/vhive.git
cd vhive
then follow the profiling tutorial to launch the experiment

Logs

~/vhive$ sudo env "PATH=$PATH" go test -v -timeout 99999s -run TestProfileSingleConfiguration -args -funcNames helloworld -vm 1 -rps 20 -l 1
INFO[2022-03-09T07:03:02.582939321-07:00] Orchestrator snapshots enabled: false
INFO[2022-03-09T07:03:02.583035286-07:00] Orchestrator UPF enabled: false
INFO[2022-03-09T07:03:02.583058286-07:00] Orchestrator lazy serving mode enabled: false
INFO[2022-03-09T07:03:02.583068413-07:00] Orchestrator UPF metrics enabled: false
INFO[2022-03-09T07:03:02.583080771-07:00] Drop cache: true
INFO[2022-03-09T07:03:02.583095482-07:00] Bench dir: bench_results
INFO[2022-03-09T07:03:02.583108479-07:00] Registering bridges for tap manager
INFO[2022-03-09T07:03:02.584695283-07:00] Creating containerd client
INFO[2022-03-09T07:03:02.585429969-07:00] Created containerd client
INFO[2022-03-09T07:03:02.585449706-07:00] Creating firecracker client
INFO[2022-03-09T07:03:02.585535175-07:00] Created firecracker client
=== RUN   TestProfileSingleConfiguration
    perf_bench_test.go:137: Skipping TestProfileSingleConfiguration
--- SKIP: TestProfileSingleConfiguration (0.00s)
PASS
INFO[2022-03-09T07:03:02.585725163-07:00] waiting for goroutines
INFO[2022-03-09T07:03:02.585740391-07:00] waiting done
INFO[2022-03-09T07:03:02.585749372-07:00] Closing fcClient
INFO[2022-03-09T07:03:02.585760299-07:00] Closing containerd client
INFO[2022-03-09T07:03:02.585778188-07:00] Removing bridges
ok      github.com/ease-lab/vhive       0.234s

New Function: Cart-Service

Add cartservice from Google's Online Boutique.
ToDo:

  • Add reference to original sources in readme.
  • Porting (move folder, split compose file into compose yaml and knative yaml, create Makefile)
  • Run function locally with docker compose
  • Add support for Knative
  • Continuous integration tests

Python tracing tests are failing

Describe the bug
Python tracing tests are failing because of a package error while installing from requirements.txt in utils/tracing/integ-tests: as the logs below show, the futures package (a Python 2 backport of concurrent.futures) fails to build on Python 3.8.

To Reproduce
Run the test on any PR.

Expected behavior
Test should pass.

Logs

Run make all-image
cd ../../../../ && docker build --tag vhiveease/py-tracing-server:latest --target server -f ./utils/tracing/integ-tests/client-server/Dockerfile .
Sending build context to Docker daemon  10.75MB

Step 1/9 : FROM python:3.8-slim-buster as server
3.8-slim-buster: Pulling from library/python
a4b007099961: Pulling fs layer
66684bb0aed0: Pulling fs layer
e25203a91ebd: Pulling fs layer
513486c7a35b: Pulling fs layer
500882a508f7: Pulling fs layer
513486c7a35b: Waiting
500882a508f7: Waiting
66684bb0aed0: Verifying Checksum
66684bb0aed0: Download complete
e25203a91ebd: Verifying Checksum
e25203a91ebd: Download complete
a4b007099961: Verifying Checksum
a4b007099961: Download complete
513486c7a35b: Verifying Checksum
513486c7a35b: Download complete
500882a508f7: Verifying Checksum
500882a508f7: Download complete
a4b007099961: Pull complete
66684bb0aed0: Pull complete
e25203a91ebd: Pull complete
513486c7a35b: Pull complete
500882a508f7: Pull complete
Digest: sha256:b9c6d865d1a0e6c1a0c3e0ee7a15d37ac2e9d0a195c9c23d4ec5e6ccd6e06cc0
Status: Downloaded newer image for python:3.8-slim-buster
 ---> 970d54d92ed9
Step 2/9 : WORKDIR /app
 ---> Running in d30da34ab541
Removing intermediate container d30da34ab541
 ---> 9b8a7a9afa09
Step 3/9 : COPY ./utils/tracing/integ-tests/client-server/requirements.txt .
 ---> 5906f6379809
Step 4/9 : COPY ./utils/tracing/integ-tests/client-server/*.py ./
 ---> 606bb516a93f
Step 5/9 : COPY ./utils/tracing/python/tracing.py .
 ---> 0527a271cb34
Step 6/9 : RUN pip3 install --user -r requirements.txt
 ---> Running in 5bb2bf70c489
Collecting futures
  Downloading futures-3.0.5.tar.gz (25 kB)
  Preparing metadata (setup.py): started
  Preparing metadata (setup.py): finished with status 'error'
  error: subprocess-exited-with-error
  
  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [25 lines of output]
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 14, in <module>
        File "/usr/local/lib/python3.8/site-packages/setuptools/__init__.py", line 18, in <module>
          from setuptools.dist import Distribution
        File "/usr/local/lib/python3.8/site-packages/setuptools/dist.py", line 32, in <module>
          from setuptools.extern.more_itertools import unique_everseen
        File "<frozen importlib._bootstrap>", line 991, in _find_and_load
        File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
        File "<frozen importlib._bootstrap>", line 657, in _load_unlocked
        File "<frozen importlib._bootstrap>", line 556, in module_from_spec
        File "/usr/local/lib/python3.8/site-packages/setuptools/extern/__init__.py", line 52, in create_module
          return self.load_module(spec.name)
        File "/usr/local/lib/python3.8/site-packages/setuptools/extern/__init__.py", line 37, in load_module
          __import__(extant)
        File "/usr/local/lib/python3.8/site-packages/setuptools/_vendor/more_itertools/__init__.py", line 1, in <module>
          from .more import *  # noqa
        File "/usr/local/lib/python3.8/site-packages/setuptools/_vendor/more_itertools/more.py", line 5, in <module>
          from concurrent.futures import ThreadPoolExecutor
        File "/tmp/pip-install-9vnqstn1/futures_7a94e9de7c984882b7827a2c23da7a9c/concurrent/futures/__init__.py", line 8, in <module>
          from concurrent.futures._base import (FIRST_COMPLETED,
        File "/tmp/pip-install-9vnqstn1/futures_7a94e9de7c984882b7827a2c23da7a9c/concurrent/futures/_base.py", line 357
          raise type(self._exception), self._exception, self._traceback
                                     ^
      SyntaxError: invalid syntax
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
The command '/bin/sh -c pip3 install --user -r requirements.txt' returned a non-zero code: 1
make: *** [server-image] Error 1
Makefile:28: recipe for target 'server-image' failed
Error: Process completed with exit code 2.

Link to one of the PRs where it fails

gg Directory cleanup

The gg benchmark directory (./benchmarks/gg/) needs to be brought up to scratch with the other benchmarks.

There are a number of problems currently:

  • The gg driver function is in a subdirectory (./benchmarks/gg/benchmarks/) of the gg executor, but these should be in separate directories. I think a directory structure with ./benchmarks/gg/driver/... and ./benchmarks/gg/executor/... would work best, and the knative manifests could then be collected inside a ./benchmarks/gg/knative_yamls/<TYPE>/ directory. This way, the gg directory would have a structure similar to the other benchmarks.
  • The knative manifests are mixed in with the rest of the functions' files. They should be in a knative_yamls/s3/ folder, similar to all of the other benchmarks.
  • The README for the gg driver and executor needs to be reworked such that it has a similar structure to all of the other benchmarks. Crucially, it needs the standard Running this Benchmark, Instances, and Parameters subsections, and a graph would be good too.

Self-hosted runners are broken

Describe the bug
The runners fail to start due to an apt repository problem.

To Reproduce

On any node that has passwordless SSH to the target node where the runners should be deployed (e.g., your laptop with SSH access to a CloudLab node), run:

ansible-playbook -u `whoami` -i $REMOTE_HOST_NAME, setup-host.yaml
GH_ACCESS_TOKEN=<...> ~/vSwarm/runner/easy-recreate.sh $REMOTE_HOST_NAME 4

Expected behavior
The runners should be set up and started successfully.

Logs

TASK [Create a Knative-KinD Cluster] *******************************************************************************************************************************************************************************************************************************************
changed: [icnals11]

TASK [Copy setup-runner.sh to the remote host] *********************************************************************************************************************************************************************************************************************************
changed: [icnals11]

TASK [Copy setup-runner.sh into the container] *********************************************************************************************************************************************************************************************************************************
changed: [icnals11]

TASK [Setup and start the GitHub runner] ***************************************************************************************************************************************************************************************************************************************
fatal: [icnals11]: FAILED! => {"changed": true, "cmd": ["docker", "exec", "-e", "RUNNER_ALLOW_RUNASROOT=1", "-e", "TOKEN=ADXNHORKAKPZNPOCTDK6J7DCGMSYU", "apricot-lunch-control-plane", "bash", "/setup-runner.sh"], "delta": "0:00:00.434635", "end": "2022-03-17 12:15:51.079989", "msg": "non-zero return code", "rc": 100, "start": "2022-03-17 12:15:50.645354", "stderr": "E: The repository 'http://security.ubuntu.com/ubuntu groovy-security Release' does not have a Release file.\nE: The repository 'http://archive.ubuntu.com/ubuntu groovy Release' does not have a Release file.\nE: The repository 'http://archive.ubuntu.com/ubuntu groovy-updates Release' does not have a Release file.\nE: The repository 'http://archive.ubuntu.com/ubuntu groovy-backports Release' does not have a Release file.", "stderr_lines": ["E: The repository 'http://security.ubuntu.com/ubuntu groovy-security Release' does not have a Release file.", "E: The repository 'http://archive.ubuntu.com/ubuntu groovy Release' does not have a Release file.", "E: The repository 'http://archive.ubuntu.com/ubuntu groovy-updates Release' does not have a Release file.", "E: The repository 'http://archive.ubuntu.com/ubuntu groovy-backports Release' does not have a Release file."], "stdout": "Ign:1 http://security.ubuntu.com/ubuntu groovy-security InRelease\nIgn:2 http://archive.ubuntu.com/ubuntu groovy InRelease\nErr:3 http://security.ubuntu.com/ubuntu groovy-security Release\n  404  Not Found [IP: 91.189.88.152 80]\nIgn:4 http://archive.ubuntu.com/ubuntu groovy-updates InRelease\nIgn:5 http://archive.ubuntu.com/ubuntu groovy-backports InRelease\nErr:6 http://archive.ubuntu.com/ubuntu groovy Release\n  404  Not Found [IP: 91.189.88.152 80]\nErr:7 http://archive.ubuntu.com/ubuntu groovy-updates Release\n  404  Not Found [IP: 91.189.88.152 80]\nErr:8 http://archive.ubuntu.com/ubuntu groovy-backports Release\n  404  Not Found [IP: 91.189.88.152 80]\nReading package lists...", "stdout_lines": ["Ign:1 http://security.ubuntu.com/ubuntu groovy-security InRelease", "Ign:2 http://archive.ubuntu.com/ubuntu groovy InRelease", "Err:3 http://security.ubuntu.com/ubuntu groovy-security Release", "  404  Not Found [IP: 91.189.88.152 80]", "Ign:4 http://archive.ubuntu.com/ubuntu groovy-updates InRelease", "Ign:5 http://archive.ubuntu.com/ubuntu groovy-backports InRelease", "Err:6 http://archive.ubuntu.com/ubuntu groovy Release", "  404  Not Found [IP: 91.189.88.152 80]", "Err:7 http://archive.ubuntu.com/ubuntu groovy-updates Release", "  404  Not Found [IP: 91.189.88.152 80]", "Err:8 http://archive.ubuntu.com/ubuntu groovy-backports Release", "  404  Not Found [IP: 91.189.88.152 80]", "Reading package lists..."]}

Online-shop: shippingservice

ToDo:

  • Add reference to original sources in readme.
  • Porting (move folder, split compose file into compose yaml and knative yaml, create Makefile)
  • Run function locally with docker compose
  • Add support for Knative
  • Continuous integration tests

AES function

ToDo:

  • Run function locally with docker compose
  • Add support for Knative
  • Tracing support
  • Continuous integration tests

Change go tracing module from vhive to vSwarm

Describe the bug
The Go tracing module in utils/tracing/go/go.mod is named github.com/ease-lab/vhive/utils/tracing/go.
Change vhive to vSwarm to keep everything in this repository.

Then also change all benchmarks to use this module instead of the one from the vhive repo. A sketch of the rename follows below.
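A sketch of the rename; the new module path matches the one that already appears in the corral build log above:

// utils/tracing/go/go.mod, after the rename:
module github.com/ease-lab/vSwarm/utils/tracing/go

// Each benchmark's go.mod and imports then change accordingly, e.g.:
//   import tracing "github.com/ease-lab/vSwarm/utils/tracing/go"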

timeout error while deploying chained-function-serving

Describe the bug
After configuring vHive following the quick start tutorial, I tried to deploy the chained-function-serving benchmark, but it cannot be deployed.

To Reproduce
After configuring vHive with the vhive-ubuntu20 profile on CloudLab, I tried to deploy chained-function-serving with the following command, according to the Running benchmarks tutorial:

./tools/kn_deploy.sh benchmarks/chained-function-serving/knative_yamls/inline/*

Expected behavior
Deployment finished successfully

Logs

~/vSwarm$ ./tools/kn_deploy.sh benchmarks/chained-function-serving/knative_yamls/inline/*
++ set -e
++ '[' 3 -eq 0 ']'
++ for pattern in "$@"
++ for file in $pattern
++ echo 'applying benchmarks/chained-function-serving/knative_yamls/inline/service-consumer.yaml'
applying benchmarks/chained-function-serving/knative_yamls/inline/service-consumer.yaml
++ kn service apply -f /dev/fd/63
+++ envsubst
Creating service 'consumer' in namespace 'default':

  0.034s The Route is still working to reflect the latest desired specification.
  0.041s ...
  0.053s Configuration "consumer" is waiting for a Revision to become ready.
Error: timeout: service 'consumer' not ready after 600 seconds
Run 'kn --help' for usage

I also checked the deployment information:

~/vSwarm$ kn service list consumer
NAME            URL                                                   LATEST                AGE   CONDITIONS   READY   REASON
consumer        http://consumer.default.192.168.1.240.sslip.io                              10m   0 OK / 3     False   RevisionMissing : Configuration "consumer" does not have any ready Revision.

Notes
By the way, I noticed you mention "The function deployment can be monitored using kn service list --all" in the Running benchmarks tutorial. However, that command produces an error:

~/vSwarm$ kn service list --all
Error: unknown flag: --all for 'kn service list'
Run 'kn service list --help' for usage

So, do you actually mean the command kn service list --all-namespaces?

Add Server reflection to relay

To make it more convenient for users of grpcurl, we can add server reflection to the relay.
Then no one needs to point to the proto file with -import-path ./tools/invoker/proto -proto helloworld.proto.
See here how to implement it; a sketch follows below.
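A minimal sketch in Go: the grpc-go reflection package does this in one call, assuming the relay already constructs a *grpc.Server (the port is a placeholder):

package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/reflection"
)

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}
	s := grpc.NewServer()
	// ... register the relay's services on s here ...

	// Enable server reflection so grpcurl can discover services
	// without -import-path/-proto flags.
	reflection.Register(s)

	if err := s.Serve(lis); err != nil {
		log.Fatalf("failed to serve: %v", err)
	}
}

With reflection enabled, grpcurl -plaintext <addr> list discovers the services directly.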

Online-shop: paymentservice

ToDo:

  • Add reference to original sources in readme.
  • Porting (move folder, split compose file into compose yaml and knative yaml, create Makefile)
  • Run function locally with docker compose
  • Add support for Knative
  • Continuous integration tests

Online-shop: checkoutservice

ToDo:

  • Add reference to original sources in readme.
  • Porting (move folder, split compose file into compose yaml and knative yaml, create Makefile)
  • Run function locally with docker compose
  • Add support for Knative
  • Continuous integration tests

Onlineshop Application

Create a readme for the online shop application.

The functions I adopted from this app were:

Note that not all of them are in the paper, but we should add all of them and mark the ones not supported by gem5. The ones in the paper are:

  • productcatalogservice ProdL-G
  • currencyservice, Curr-N
  • paymentservice, Pay-N
  • shippingservice, Ship-G
  • emailservice, Email-P
  • recommendationservice, RecO-P

Todo:

Online-shop: Ad-service

ToDo:

  • Add reference to original sources in readme.
  • Porting (move folder, split compose file into compose yaml and knative yaml, create Makefile)
  • Run function locally with docker compose
  • Add support for Knative
  • Continuous integration tests

Online-shop: productcatalogservice

ToDo:

  • Add reference to original sources in readme.
  • Porting (move folder, split compose file into compose yaml and knative yaml, create Makefile)
  • Run function locally with docker compose
  • Add support for Knative
  • Continuous integration tests

Online-shop: cartservice

ToDo:

  • Add reference to original sources in readme.
  • Porting (move folder, split compose file into compose yaml and knative yaml, create Makefile)
  • Run function locally with docker compose
  • Add support for Knative
  • Continuous integration tests

Cleaning up Standalone kernel functions for release

Cleaning
Before the official release we need to clean up the three kernel functions (AES, Auth, Fibonacci).

The following things need to be cleaned up, tested, and documented:

Bullets to do:

  • Documenting tracing. We want new, clear documentation of how the distributed tracing works, independent of vHive.

  • Documenting CI integration

  • Create CI test:

  • Docker build, push

  • Certain input => should result in defined output.

  • One Dockerfile per function

  • One folder containing the protocol for the function. Create a make recipe to compile the protocol for python, nodejs and go.

  • One Readme per function with #50

  • Table with features/status (Support for tracing, CI test, support for Knative, support for docker-compose, support for gem5.)

  • Clean the function code itself

  • Remove getgid (or optional)

  • Arguments for addr, port and zipkin

  • Functions should take arguments and respond correctly:
    E.g. AES-python: invoking aes-python with input (plaintext) xxxx should return the response Ciphertext: YYYY

  • Update global Readme for all functions #50

AES:

  • One docker file

Fibonacci:

  • Tracing for Fibonacci (NodeJS, python and go). All need to be tested


Upgrade otel

We need to upgrade the otel module for tracing from v0.20.0 => v1.4.1

I updated all vhive dependencies from vhive to vSwarm. When I try to update the mod file for the invoker, I get an error (screenshot omitted).

Right now it works because the require still points to the vhive repo, but to make this clean we want to update it. A sketch of the intended go.mod change follows below.
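A minimal sketch, assuming the tracing utilities use the core, SDK, trace, and zipkin-exporter modules (note that exporter import paths also moved between v0.20.0 and v1.x, e.g. exporters/trace/zipkin became exporters/zipkin):

require (
	go.opentelemetry.io/otel v1.4.1
	go.opentelemetry.io/otel/exporters/zipkin v1.4.1
	go.opentelemetry.io/otel/sdk v1.4.1
	go.opentelemetry.io/otel/trace v1.4.1
)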

Auth function

ToDo:

  • Run function locally with docker compose
    • python
    • go
    • nodejs
  • Add support for Knative
    • python
    • go
    • nodejs
  • Tracing support
  • Continuous integration tests

Online-shop: recommendationservice

ToDo:

  • Add reference to original sources in readme.
  • Porting (move folder, split compose file into compose yaml and knative yaml, create Makefile)
  • Run function locally with docker compose
  • Add support for Knative
  • Continuous integration tests

Online-shop: currencyservice

ToDo:

  • Add reference to original sources in readme.
  • Porting (move folder, split compose file into compose yaml and knative yaml, create Makefile)
  • Run function locally with docker compose
  • Add support for Knative
  • Continuous integration tests

Add new standalone serverless functions to Benchmark suite

We plan to add more representative standalone functions to the vHive benchmark suite. The functions should be a representative subset of the functions that are actually deployed in serverless frameworks. In contrast to vhive-serverless/vHive#236, which targets entire apps with complex workflows, these functions should serve a simpler, standalone functionality.

A list of ideas for such functions:

| Function name              | Type      | Prog. Lang.          | Exec. time | Status | BM suite integration | Link |
|----------------------------|-----------|----------------------|------------|--------|----------------------|------|
| Fibonacci                  | example   | Python/Go/NodeJS/C++ |            | Done   | Open                 |      |
| AES                        | example   | Python/Go/NodeJS     |            | Done   | Open                 |      |
| ...                        |           |                      |            |        |                      |      |
| Authentication             |           |                      |            | Done   |                      | link |
| hotel reservation services | uServices | Go                   | ..         |        |                      | link |
| Webshop services           | uServices |                      |            |        |                      | link |

Hotel Reservation Application

Integration of all functions from serverless-perf hotel_resv_svc

These functions come from the hotelReservation microservices app in DeathStarBench, integrated as standalone serverless functions.

The Hotel Reservation app is built up of the following 7 microservices:
[architecture screenshot omitted]

The functions I adopted from this app are listed below.

Not all of them are in the paper, but we should add all of them and mark the ones not supported by gem5. The ones in the paper are:

  • Geo Geo-G
  • Profile Prof-G
  • Rate Rate-G
  • Recommendation RecH-G
  • User User-G

Todo:

  • Add a reference to https://github.com/delimitrou/DeathStarBench
  • Make a table similar to the Onlineshop readme that points out runtime and functionality, but add compose, Knative, and gem5 support columns.
  • Then, similar to AES, write the description of how to run the functions.
