containerd / runwasi

Facilitates running Wasm / WASI workloads managed by containerd

License: Apache License 2.0

containerd kubernetes rust wasi wasm webassembly

runwasi's Introduction


runwasi

Warning: Alpha quality software, do not use in production.

This is a project to facilitate running wasm workloads managed by containerd, either directly (i.e. through ctr) or as directed by Kubelet via the CRI plugin. It is intended to be a (Rust) library that you can take and integrate with your wasm host. Included in the repository is a PoC for running a plain wasi host (i.e. no extra host functions except those supporting wasi system calls).

Community

Usage

runwasi is intended to be consumed as a library to be linked to from your own wasm host implementation.

There are two modes of operation supported:

  1. "Normal" mode where there is 1 shim process per container or k8s pod.
  2. "Shared" mode where there is a single manager service running all shims in process.

In either case you need to implement a trait to teach runwasi how to use your wasm host.

There are two ways to do this:

  • implementing the sandbox::Instance trait
  • or implementing the container::Engine trait

The most flexible, but also most complex, option is the sandbox::Instance trait:

pub trait Instance {
    /// The WASI engine type
    type Engine: Send + Sync + Clone;

    /// Create a new instance
    fn new(id: String, cfg: Option<&InstanceConfig<Self::Engine>>) -> Self;
    /// Start the instance
    /// The returned value should be a unique ID (such as a PID) for the instance.
    /// Nothing internally should be using this ID, but it is returned to containerd where a user may want to use it.
    fn start(&self) -> Result<u32, Error>;
    /// Send a signal to the instance
    fn kill(&self, signal: u32) -> Result<(), Error>;
    /// Delete any reference to the instance
    /// This is called after the instance has exited.
    fn delete(&self) -> Result<(), Error>;
    /// Wait for the instance to exit
    /// The waiter is used to send the exit code and time back to the caller
    /// Ideally this would just be a blocking call with a normal result, however
    /// because of how this is called from a thread it causes issues with lifetimes of the trait implementer.
    fn wait(&self, waiter: &Wait) -> Result<(), Error>;
}

The container::Engine trait provides a simplified API:

pub trait Engine: Clone + Send + Sync + 'static {
    /// The name to use for this engine
    fn name() -> &'static str;
    /// Run a WebAssembly container
    fn run_wasi(&self, ctx: &impl RuntimeContext, stdio: Stdio) -> Result<i32>;
    /// Check that the runtime can run the container.
    /// This check runs after container creation and before the container starts.
    /// By default it checks that the wasi_entrypoint is either:
    /// * a file with the `wasm` filetype header
    /// * a parsable `wat` file.
    fn can_handle(&self, ctx: &impl RuntimeContext) -> Result<()> { /* default implementation*/ }
}

After implementing container::Engine you can use container::Instance<impl container::Engine>, which implements the sandbox::Instance trait.
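For a concrete feel, here is a minimal sketch of an Engine implementation. The example_host module and the stdio.redirect() call are hypothetical stand-ins for your wasm host's loading/invocation API and the shim's stdio plumbing; only the trait items mirror the definition above.

#[derive(Clone, Default)]
struct ExampleEngine;

impl Engine for ExampleEngine {
    fn name() -> &'static str {
        "myshim"
    }

    fn run_wasi(&self, ctx: &impl RuntimeContext, stdio: Stdio) -> Result<i32> {
        // Attach the container's stdio to this process (assumed helper).
        stdio.redirect()?;
        // Hypothetical host API: load the module named by the entrypoint and run it.
        let module = example_host::load(ctx)?;
        let exit_code = example_host::invoke(module)?;
        Ok(exit_code)
    }
}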

To use your implementation in "normal" mode, you'll need to create a binary which has a main that looks something like this:

use containerd_shim as shim;
use containerd_shim_wasm::sandbox::{ShimCli, Instance};

struct MyInstance {
    // ...
}

impl Instance for MyInstance {
    // ...
}

fn main() {
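    // NOTE: `opts` stands for the shim's startup options and is left undefined in this sketch.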
    shim::run::<ShimCli<MyInstance>>("io.containerd.myshim.v1", opts);
}

or when using the container::Engine trait, like this:

use containerd_shim as shim;
use containerd_shim_wasm::{sandbox::ShimCli, container::{Instance, Engine}};

struct MyEngine {
    // ...
}

impl Engine for MyEngine {
    // ...
}

fn main() {
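    // `opts` again stands for the shim's startup options (left undefined here).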
    shim::run::<ShimCli<Instance<MyEngine>>>("io.containerd.myshim.v1", opts);
}

Note that you can implement your own ShimCli if you like, to customize your wasm engine and other things. I encourage you to check out how it is implemented.

The shim binary just needs to be installed into $PATH (as seen by the containerd process) with a binary name like containerd-shim-myshim-v1.

For the shared mode:

use containerd_shim_wasm::sandbox::{Local, ManagerService, Instance};
use containerd_shim_wasm::services::sandbox_ttrpc::{create_manager, Manager};
use std::sync::Arc;
use ttrpc::{self, Server};
// ...

struct MyInstance {
    // ...
}

impl Instance for MyInstance {
    // ...
}

fn main() {
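    // NOTE: `Engine` and `Config` below come from the embedded wasm runtime
    // (e.g. wasmtime); their imports are omitted in this sketch.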
    let s: ManagerService<Local<MyInstance>> =
        ManagerService::new(Engine::new(Config::new().interruptable(true)).unwrap());
    let s = Arc::new(Box::new(s) as Box<dyn Manager + Send + Sync>);
    let service = create_manager(s);

    let mut server = Server::new()
        .bind("unix:///run/io.containerd.myshim.v1/manager.sock")
        .unwrap()
        .register_service(service);

    server.start().unwrap();
    let (_tx, rx) = std::sync::mpsc::channel::<()>();
    rx.recv().unwrap();
}

This will be the host daemon that you start up and manage on your own. You can use the provided containerd-shim-myshim-v1 binary as the shim to specify in containerd.

Shared mode requires precise control over real threads and as such should not be used with an async runtime.

Check out these projects that build on top of runwasi:

Components

  • containerd-shim-[ wasmedge | wasmtime | wasmer ]-v1

This is a containerd shim which runs wasm workloads in WasmEdge, Wasmtime, or Wasmer. You can use it with containerd's ctr by specifying --runtime=io.containerd.[ wasmedge | wasmtime | wasmer ].v1 when creating the container. Make sure the shim binary is in $PATH (that is, the $PATH that containerd sees). Usually you just run make install after make build.

To build the shim with WasmEdge, you need to install the WasmEdge library first.

This shim runs one per pod.

  • containerd-shim-[ wasmedge | wasmtime | wasmer ]d-v1

A CLI used to connect containerd to the containerd-[ wasmedge | wasmtime | wasmer ]d sandbox daemon. When containerd requests that a container be created, it fires up this shim binary, which connects to the containerd-[ wasmedge | wasmtime | wasmer ]d service running on the host. The service returns a path to a unix socket, which this shim binary writes back to containerd, and containerd uses that socket for shim requests. This binary does not serve requests itself; it is only responsible for asking the containerd-[ wasmedge | wasmtime | wasmer ]d daemon to create or destroy sandboxes.

  • containerd-[ wasmedge | wasmtime | wasmer ]d

This is a sandbox manager that enables running one wasm host for the entire node instead of one per pod (or container). When a container is created, a request is sent to this service to create a sandbox. The "sandbox" is a containerd task service that runs in a new thread on its own unix socket, which we return to containerd to connect to.

The Wasmedge / Wasmtime / Wasmer engine is shared between all sandboxes in the service.

To use this shim, specify io.containerd.[ wasmedge | wasmtime | wasmer ]d.v1 as the runtime to use. You will need to make sure the containerd-[ wasmedge | wasmtime | wasmer ]d daemon has already been started.

Contributing

To begin contributing, or to learn how to build and test the project or add a new shim, please read our CONTRIBUTING.md.

Demo

Installing the shims for use with Containerd

Make sure you have installed dependencies and install the shims:

make build
sudo make install

Note: make build will only build one binary. The make install command copies the binary to $PATH and uses symlinks to create all the components described above.

Build the test image and load it into containerd:

make test-image
make load

Demo 1: using a container image that contains a Wasm module

Run it with sudo ctr run --rm --runtime=io.containerd.[ wasmedge | wasmtime | wasmer ].v1 ghcr.io/containerd/runwasi/wasi-demo-app:latest testwasm /wasi-demo-app.wasm echo 'hello'. You should see some output repeated like:

sudo ctr run --rm --runtime=io.containerd.wasmtime.v1 ghcr.io/containerd/runwasi/wasi-demo-app:latest testwasm

This is a song that never ends.
Yes, it goes on and on my friends.
Some people started singing it not knowing what it was,
So they'll continue singing it forever just because...

This is a song that never ends.
Yes, it goes on and on my friends.
Some people started singing it not knowing what it was,
So they'll continue singing it forever just because...

(...)

To kill the process, you can run the following in another session: sudo ctr task kill -s SIGKILL testwasm.

The test binary supports commands for different types of functionality; check crates/wasi-demo-app/src/main.rs to try them out.

Demo 2: using OCI images with custom Wasm layers

The previous demo runs with an OCI container image containing the wasm module in its file system. Another option is to provide a cross-platform OCI image that does not carry the wasm module or components in the file system of the container wrapping the wasmtime/wasmedge process. Such an OCI image with custom Wasm layers can be run across any platform and provides for de-duplication in the containerd content store, among other benefits. To build OCI images using your own images, you can use the oci-tar-builder.

To learn more about this approach, check out the design document.

Note: this requires containerd 1.7.7+ or 1.6.25+. If you do not have these patches for both containerd and ctr, you will end up with an error message such as mismatched image rootfs and manifest layers at the import and run steps. The latest versions of k3s and kind have the necessary containerd versions.

Build and import the OCI image with Wasm layers:

make test-image/oci
make load/oci

Run the image with sudo ctr run --rm --runtime=io.containerd.[ wasmedge | wasmtime | wasmer ].v1 ghcr.io/containerd/runwasi/wasi-demo-oci:latest testwasmoci

sudo ctr run --rm --runtime=io.containerd.wasmtime.v1 ghcr.io/containerd/runwasi/wasi-demo-oci:latest testwasmoci wasi-demo-oci.wasm echo 'hello'
hello
exiting 

Demo 3: using the Wasm OCI Artifact

The CNCF tag-runtime wasm working group has an OCI Artifact format for Wasm. This is a new artifact type that enables usage across projects beyond just runwasi; see https://tag-runtime.cncf.io/wgs/wasm/deliverables/wasm-oci-artifact/#implementations

make test-image/oci
make load/oci
make test/k8s-oci-wasmtime

Note: we use a Kubernetes cluster here since containerd's ctr has a bug that results in ctr: unknown image config media type application/vnd.wasm.config.v0+json

runwasi's People

Contributors

0xe282b0, bokuweb, brendanburns, captainvincent, cpuguy83, danbugs, defims, denis2glez, dependabot[bot], devigned, dierbei, iceber, ipuustin, jprendes, jsturtevant, kate-goldenring, keisku, lengrongfu, mossaka, primoly, rumpl, sachaos, utam0k, vescoc, vyta, yihuaf


runwasi's Issues

child process gets killed by SIGKILL after using cgroup v1 API

Containerd Log

time="2022-12-14T09:30:34.243727563Z" level=info msg="CreateContainer within sandbox \"f450952a49060dbe6756fb3638705b7c66404b38b0877741ac319e4edcb825f9\" for container &ContainerMetadata{Name:traefik,Attempt:0,}"
time="2022-12-14T09:30:34.280628605Z" level=info msg="CreateContainer within sandbox \"f450952a49060dbe6756fb3638705b7c66404b38b0877741ac319e4edcb825f9\" for &ContainerMetadata{Name:traefik,Attempt:0,} returns container id \"2a3087458a40f98ef65bbe454da5d84a379f03c1a1e1b19b9b57fd1e3e9885dc\""
time="2022-12-14T09:30:34.281158713Z" level=info msg="StartContainer for \"2a3087458a40f98ef65bbe454da5d84a379f03c1a1e1b19b9b57fd1e3e9885dc\""
time="2022-12-14T09:30:34.350862636Z" level=info msg="StartContainer for \"2a3087458a40f98ef65bbe454da5d84a379f03c1a1e1b19b9b57fd1e3e9885dc\" returns successfully"
time="2022-12-14T09:30:41.365630407Z" level=info msg="CreateContainer within sandbox \"cb2719f323623808ff663e1d0e409530a160cb62e702d0a1c3bc8670046e57fd\" for container &ContainerMetadata{Name:testwasm,Attempt:2,}"
time="2022-12-14T09:30:41.417023160Z" level=info msg="CreateContainer within sandbox \"cb2719f323623808ff663e1d0e409530a160cb62e702d0a1c3bc8670046e57fd\" for &ContainerMetadata{Name:testwasm,Attempt:2,} returns container id \"6ebb8cc29b333a124661983fba2dec5c82e4fc32c9a484212de41e4a3fa1e06e\""
time="2022-12-14T09:30:41.417626869Z" level=info msg="StartContainer for \"6ebb8cc29b333a124661983fba2dec5c82e4fc32c9a484212de41e4a3fa1e06e\""
[INFO] starting instance
[INFO] preparing module
[INFO] opening rootfs
[INFO] setting up wasi
[INFO] opening stdin
[INFO] opening stdout
[INFO] opening stderr
[INFO] building wasi context
[INFO] wasi context ready
[INFO] loading module from file
[INFO] instantiating instnace
[INFO] getting start function
[INFO] starting wasi instance
[INFO] started wasi instance with tid 1794
time="2022-12-14T09:30:41.559211243Z" level=info msg="StartContainer for \"6ebb8cc29b333a124661983fba2dec5c82e4fc32c9a484212de41e4a3fa1e06e\" returns successfully"
[INFO] child 1794 killed by signal SIGKILL, dumped: false
[INFO] wasi instance exited with status 137
time="2022-12-14T09:30:43.108591141Z" level=info msg="shim disconnected" id=6ebb8cc29b333a124661983fba2dec5c82e4fc32c9a484212de41e4a3fa1e06e
time="2022-12-14T09:30:43.108722243Z" level=warning msg="cleaning up after shim disconnected" id=6ebb8cc29b333a124661983fba2dec5c82e4fc32c9a484212de41e4a3fa1e06e namespace=k8s.io
time="2022-12-14T09:30:43.108732343Z" level=info msg="cleaning up dead shim"
time="2022-12-14T09:30:44.500146327Z" level=info msg="RemoveContainer for \"82de028e9dba19dfe45615e0efaa1e73cf35d05734b09aade8489485c5f48a84\""
time="2022-12-14T09:30:44.517400480Z" level=info msg="RemoveContainer for \"82de028e9dba19dfe45615e0efaa1e73cf35d05734b09aade8489485c5f48a84\" returns successfully"
time="2022-12-14T09:31:12.364643823Z" level=info msg="CreateContainer within sandbox \"cb2719f323623808ff663e1d0e409530a160cb62e702d0a1c3bc8670046e57fd\" for container &ContainerMetadata{Name:testwasm,Attempt:3,}"
time="2022-12-14T09:31:12.398472900Z" level=info msg="CreateContainer within sandbox \"cb2719f323623808ff663e1d0e409530a160cb62e702d0a1c3bc8670046e57fd\" for &ContainerMetadata{Name:testwasm,Attempt:3,} returns container id \"0101352d7327f58fc458166c0df7ce439528db33bd5006da002e69bb33d218d0\""
time="2022-12-14T09:31:12.398916606Z" level=info msg="StartContainer for \"0101352d7327f58fc458166c0df7ce439528db33bd5006da002e69bb33d218d0\""
[INFO] starting instance
[INFO] preparing module
[INFO] opening rootfs
[INFO] setting up wasi
[INFO] opening stdin
[INFO] opening stdout
[INFO] opening stderr
[INFO] building wasi context
[INFO] wasi context ready
[INFO] loading module from file
[INFO] instantiating instnace
[INFO] getting start function
[INFO] starting wasi instance
[INFO] started wasi instance with tid 1862
time="2022-12-14T09:31:12.528460632Z" level=info msg="StartContainer for \"0101352d7327f58fc458166c0df7ce439528db33bd5006da002e69bb33d218d0\" returns successfully"
[ERROR] error waiting for pid 1862: ECHILD: No child processes

Notice that there is a log message that says "[INFO] child 1794 killed by signal SIGKILL, dumped: false".

How to reproduce?

Set up a k3d cluster image following the steps in https://github.com/deislabs/containerd-wasm-shims/tree/main/deployments/k3d. Replace the spin & slight shims with the wasmtime shim in "config.toml.tmpl":

[plugins.cri.containerd.runtimes.wasmtime]
  runtime_type = "io.containerd.wasmtime.v1"

Once the k3d cluster image is created, we can create a k3d cluster by running
k3d cluster create k3s-default --image k3swithshim --api-port 6550 -p "8081:80@loadbalancer" --agents 1

Then apply the following workloads

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime
handler: wasmtime
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wasm
  template:
    metadata:
      labels:
        app: wasm
    spec:
      runtimeClassName: wasmtime
      containers:
        - name: testwasm
          image: docker.io/mossaka/wasmtest:2

Windows support

Currently we cannot use this project on Windows. This is an umbrella issue to track Windows support. There may be more items added as the work progresses.

The project initially didn't build; it builds since #238 🎉

 cargo build
   Compiling zstd-safe v5.0.2+zstd.1.5.2
   Compiling zstd-sys v2.0.4+zstd.1.5.2
   Compiling ittapi-sys v0.3.2
   Compiling wasmtime-runtime v2.0.2
   Compiling cranelift-wasm v0.89.2
   Compiling psm v0.1.21
   Compiling uapi v0.2.10
   Compiling uapi-proc v0.0.5
   Compiling num-traits v0.2.15
   Compiling ttrpc v0.6.1
   Compiling ittapi v0.3.2
   Compiling cap-fs-ext v0.26.1
error[E0425]: cannot find function `geteuid` in crate `libc`
   --> C:\Users\jstur\.cargo\registry\src\github.com-1ecc6299db9ec823\uapi-proc-0.0.5\src\lib.rs:16:34
    |
16  |             root: unsafe { libc::geteuid() == 0 },
    |                                  ^^^^^^^ help: a function with a similar name exists: `getpid`

Some errors have detailed explanations: E0412, E0422, E0425, E0432, E0433.
For more information about an error, try `rustc --explain E0412`.
error: could not compile `ttrpc` due to 58 previous errors
error: failed to run custom build command for `uapi v0.2.10`

At a minimum the following tasks need to be completed:

Use youki libcontainer crate for all shims

The work in #78 enabled youki's libcontainer for the WasmEdge shim.
Issue #110 intends to repeat the same work for the wasmtime shim.
During #78, @rumpl's comment mentioned that we could enable youki's libcontainer at a lower level in runwasi, so that all shims can benefit from it.
Any thoughts?

Distribute using OCI artifacts / allow shim to pull artifact

This is a feature request to allow distribution of apps targeting runwasi using OCI artifacts, rather than as container images.

Currently, all applications targeting runwasi need to be distributed as a container image, with the structure carefully constructed. This is not an ideal long-term solution for a number of reasons, deduplication of Wasm files and static assets being one of them, particularly in the context of bytecodealliance/registry#87 and https://hackmd.io/50rfwV6BTJWN8VZBhdAN_g.

Ideally, pulling the artifact could be done by the actual shim implementation — and implementations could continue to default to wrapping the Wasm app in a container image; however, some shim implementations would benefit greatly from using their existing mechanism of distributing apps using OCI artifacts (see https://developer.fermyon.com/spin/distributing-apps.md#publishing-a-spin-application-to-a-registry).

Thoughts?

cc @rumpl, @squillace, @Mossaka, @devigned.

CI is broken

It looks like the CI is broken and emits the following error message. See this:

 --> crates/thirdparty/src/lib.rs:2:5
  |
2 | use oci_spec::runtime::Mount;
  |     ^^^^^^^^

What paths to expose to the runtime

Note that here https://github.com/ipuustin/runwasi/commit/7e72c3ca10a0454d4e220aa35db26f710fb03a17#diff-36f92d3bdc22f6005e8d13ff459d9d135364b34748de944da89785ecaa8d9e0aR58 is how we could enable rootfs file access for the WasmEdge shim too. We need to have a discussion about what we want to do here: should we only expose /dev and /proc files to the runtime, or to the container too?

Originally posted by @ipuustin in #142 (comment)

wasmedge shim fails with some entry points

The following commands fail:

docker run --rm --platform wasi/wasm --runtime io.containerd.wasmedge.v1 secondstate/rust-example-hello:latest
docker run --rm --platform wasi/wasm --runtime io.containerd.wasmedge.v1 --entrypoint hello.wasm secondstate/rust-example-hello:latest

while the following commands succeed:

docker run --rm --platform wasi/wasm --runtime io.containerd.wasmedge.v1 --entrypoint /hello.wasm secondstate/rust-example-hello:latest
docker run --rm --platform wasi/wasm --runtime io.containerd.wasmedge.v1 --entrypoint ./hello.wasm secondstate/rust-example-hello:latest

The secondstate/rust-example-hello:latest specifies its entrypoint as ["hello.wasm"].

The same 4 commands succeed with the io.containerd.wasmtime.v1 runtime.

Document full set up with k3s

There is a large amount of assumed knowledge and setup in the current instructions, so it would be useful to have documentation of a full run-through of setup and usage with k3s.

I'm working on getting this running in my lab using k3s. If I get it all working, I can write up the commands I used.

Full Linux OCI runtime spec support

Right now we have only partial support for the OCI runtime spec.
While some things in the spec may not make sense for running wasm code itself, it is useful for sandboxing the wasm runtime and/or the execution of the wasm for defense-in-depth, as well as for ensuring fewer surprises for users expecting their settings to actually apply.

Some things missing:

Instance.wait() call semantics are weird

The Instance trait has a wait function which is used to wait for the instance to exit.
Due to issues with threading and lifetimes the call currently takes a channel sender.

Ideally any sort of async behavior should be handled by the caller (e.g. wrap it in a thread and handle the channels at the call site rather than expecting the implementation to).
Lifetimes of the type parameters on Instance make this a little more problematic, and I am currently hesitant to use a 'static lifetime (required by thread::spawn).
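For illustration, a sketch of the shape this points toward; the blocking wait() signature and the spawn_wait helper below are hypothetical, and the 'static bound on the spawned thread is exactly the constraint mentioned above:

use std::sync::mpsc::{channel, Receiver};
use std::thread;
use std::time::SystemTime;

// Hypothetical blocking form: fn wait(&self) -> Result<(u32, SystemTime), Error>;
// The caller would then own any async behavior, e.g.:
fn spawn_wait<I>(instance: I) -> Receiver<(u32, SystemTime)>
where
    I: Instance + Send + 'static, // 'static is the problematic bound
{
    let (tx, rx) = channel();
    thread::spawn(move || {
        if let Ok(exit) = instance.wait() {
            let _ = tx.send(exit);
        }
    });
    rx
}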

Benchmarking

This issue serves as a place for discussing various ways and ideas for benchmarking runwasi and the wasm shims, as proposed by @ipuustin

  • One idea is to write a simple wasm program (Fibonacci) and execute it in runwasi, alongside a native program executing in runc (see the sketch after this list). This provides a base benchmark comparing the performance of a wasi program vs. native runc processes. It is not meant to benchmark the performance of WASI in general.
  • With the base benchmark set, we can observe the performance difference across version increments. For example, we can observe how much speed increases or decreases between version 0.2 and 0.3.
  • Another benchmarking idea is testing how densely we can pack wasm pods on a node. It is often advertised that wasm modules can increase CPU utilization and thus the density of running pods per node. We can verify this by pushing the containerd runtime to the extreme, running thousands of pods at the same time.

Feel free to add ideas and thoughts on this topic! Any suggestion is welcome 🙏
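As a concrete starting point for the first idea above, a minimal Fibonacci workload might look like this sketch; the same source compiles to wasm32-wasi for runwasi and natively for runc (the exact workload is illustrative, not settled):

fn fib(n: u64) -> u64 {
    if n < 2 {
        n
    } else {
        fib(n - 1) + fib(n - 2)
    }
}

fn main() {
    // Deliberately CPU-bound so runtime overhead, not I/O, dominates.
    let n = 40;
    println!("fib({}) = {}", n, fib(n));
}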

Cloning the repository and using `make` fails

Doing a fresh clone of the repository fails with:

 make
cargo build
    Updating crates.io index
   Compiling containerd-shim-wasm v0.1.0 (/home/jstur/projects/runwasi/crates/containerd-shim-wasm)
   Compiling lock_api v0.4.9
   Compiling parking_lot_core v0.9.6
   Compiling wasmedge-sys v0.12.2
   Compiling wasmedge-types v0.3.1
   Compiling oci-tar-builder v0.1.0 (/home/jstur/projects/runwasi/crates/oci-tar-builder)
The following warnings were emitted during compilation:

warning: [wasmedge-sys] Failed to locate lib_dir, include_dir, or header.

error: failed to run custom build command for `wasmedge-sys v0.12.2`

Caused by:
  process didn't exit successfully: `/home/jstur/projects/runwasi/target/debug/build/wasmedge-sys-380b7131222322c7/build-script-build` (exit status: 101)
  --- stdout
  cargo:warning=[wasmedge-sys] Failed to locate lib_dir, include_dir, or header.

  --- stderr
  thread 'main' panicked at '[wasmedge-sys] Failed to locate the required header and/or library file. Please reference the link: https://wasmedge.org/book/en/embed/rust.html', /home/jstur/.cargo/registry/src/github.com-1ecc6299db9ec823/wasmedge-sys-0.12.2/build.rs:30:25
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
make: *** [Makefile:19: build] Error 101

Should wasmedge be behind a feature flag in the crate or should we add something to the makefile to install the correct dependencies?

Instance::new should return a Result type

          This was discussed before: https://github.com/containerd/runwasi/pull/54#issuecomment-1403269766

IMHO we should aim to remove as many unwrap() calls as possible from the shim's "main thread", because a library should not panic easily.

Originally posted by @ipuustin in #142 (comment)
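For illustration, the proposed shape might look like the sketch below (the exact error type is open; the where clause is needed because Result<Self, _> requires a sized Self):

/// Create a new instance, surfacing setup failures instead of panicking
fn new(id: String, cfg: Option<&InstanceConfig<Self::Engine>>) -> Result<Self, Error>
where
    Self: Sized;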

tar.gz files included in releases are empty

It looks like the tar.gz assets included in releases are empty. I tried to look at why this might be, but I'm not familiar with the GitHub Actions syntax and it's hard to test locally. I resorted to building from source, but I assume these were meant to have the pre-built binaries in them?

Others("Device or resource busy (os error 16)"): unknown

After building and running the demo example, I got the following error:

sudo ctr run --rm --runtime=io.containerd.wasmtime.v1 docker.io/library/wasmtest:latest testwasm
ctr: Others("Device or resource busy (os error 16)"): unknown

investigation

The task is marked as CREATED:

sudo ctr task ls
TASK          PID    STATUS
testwasm13    0      CREATED

but get the following error when trying to delete it:

sudo ctr task rm testwasm13
ERRO[0000] unable to delete testwasm13                   error="task must be stopped before deletion: created: failed precondition"
ctr: task must be stopped before deletion: created: failed precondition

Also get a slightly different error when trying to "stop" it:

sudo ctr task kill -s SIGKILL testwasm13
ctr: cannot kill non-running container, current state: Exited(TaskState { s: PhantomData }): failed precondition

other info

There seem to be two issues:

  • the shim and containerd are out of sync on the state of the shim; this leads to not being able to clean up the task/container
  • There is an unhandled exception in
    let res = unsafe { exec::fork(Some(cg.as_ref())) }?;
    This causes the Device or resource busy (os error 16)

versions

containerd version: containerd containerd.io 1.6.7 0197261a30bf81f1ee8e6a4dd2dea0ef95d67ccb
shim version: built from main (e266bbb)
linux version:

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.1 LTS
Release:        22.04
Codename:       jammy

It does work on my WSL instance:

containerd containerd.io 1.6.6 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1    

 lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.4 LTS
Release:        20.04
Codename:       focal

k3s kubectl logs returns empty logs

Running with ctr directly produces the expected stdout logs:

sudo k3s ctr image import --all-platforms target/wasm32-wasi/debug/img.tar # img.tar is built via: cd crates/wasi-demo-app && cargo build && cargo build --features oci-v1-tar && cd ../../
sudo k3s ctr run --rm --runtime=io.containerd.wasmtime.v1 ghcr.io/containerd/runwasi/wasi-demo-app:latest wasi-demo-app # needs containerd-shim-wasmtime-v1 available, by running make && make install

(screenshot: expected stdout from ctr run)

but when I run it with kubectl, I get empty pod logs:

sudo k3s kubectl apply -f wasm.yml # needs containerd configured with the config file below
sudo k3s kubectl get pods
sudo k3s kubectl logs wasi-demo-xxx

(screenshot: empty kubectl logs output)

the wasm.yml is:

---
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime
handler: wasmtime
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasi-demo
  labels:
    app: wasi-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wasi-demo
  template:
    metadata:
      labels:
        app: wasi-demo
    spec:
      runtimeClassName: wasmtime
      containers:
      - name: demo
        image: ghcr.io/containerd/runwasi/wasi-demo-app:latest
        imagePullPolicy: Never

/var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl is:

version = 2

[plugins."io.containerd.internal.v1.opt"]
  path = "/var/lib/rancher/k3s/agent/containerd"
[plugins."io.containerd.grpc.v1.cri"]
  stream_server_address = "127.0.0.1"
  stream_server_port = "10010"
  enable_selinux = false
  enable_unprivileged_ports = true
  enable_unprivileged_icmp = true
  sandbox_image = "rancher/mirrored-pause:3.6"

[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "overlayfs"
  disable_snapshot_annotations = true


[plugins."io.containerd.grpc.v1.cri".cni]
  bin_dir = "/var/lib/rancher/k3s/data/7c994f47fd344e1637da337b92c51433c255b387d207b30b3e0262779457afe4/bin"
  conf_dir = "/var/lib/rancher/k3s/agent/etc/cni/net.d"


[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmedge]
  runtime_type = "io.containerd.wasmedge.v1"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmtime]
  runtime_type = "io.containerd.wasmtime.v1"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true

containerd shims are in /usr/local/bin

Use argument --tcplisten

How do I pass arguments to wasmtime through the shim? I want to use --tcplisten to listen for TCP connections.

I'm trying this command.

ubuntu@wasi:~$ sudo ctr run --rm --runtime=io.containerd.wasmtime.v1 docker.io/martinlinkhorst/wasi:latest wasi10
info: Microsoft.Hosting.Lifetime
      Now listening on: http://localhost:5000
Fatal: TCP accept failed with errno 8. This may mean the host isn't listening for connections. Be sure to pass the --tcplisten parameter.

Shim cannot connect to runtime daemon?

Hi, I'm playing with runwasi in kind by adapting the integration test Dockerfile. I see that the wasmtime shim works for running the docker.io/wasmedge/example-wasi:latest test image, but I cannot run the same workload when using a node image that configures daemon mode. Is there something else that I need to do to get daemon mode working?

Here's the error I see (both wasmedge and wasmtime fail in the same way):

Events:
  Type     Reason                  Age   From               Message
  ----     ------                  ----  ----               -------
  Normal   Scheduled               15s   default-scheduler  Successfully assigned default/wasi-job-demo-wm4cj to kind-worker
  Warning  FailedCreatePodSandBox  14s   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to start shim: start failed: containerd-shim-wasmedged-v1: Ttrpc(RpcStatus(Status { code: NOT_FOUND, message: "/runwasi.services.sandbox.v1.Manager/Connect is not supported", details: [], special_fields: SpecialFields { unknown_fields: UnknownFields { fields: None }, cached_size: CachedSize { size: 0 } } }))
: exit status 1: unknown

I configured the daemon as a part of the containerd systemd service and do see that it is running, and the unix socket is present as well:

root@kind-worker:/# ps -ef
UID          PID    PPID  C STIME TTY          TIME CMD
root           1       0  0 20:22 ?        00:00:00 /sbin/init
root          79       1  0 20:22 ?        00:00:00 /lib/systemd/systemd-journald
message+      90       1  0 20:22 ?        00:00:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
root         113       1  0 20:22 ?        00:00:00 /usr/local/bin/containerd-wasmedged
root         117       1  1 20:22 ?        00:00:05 /usr/local/bin/containerd
root         201       1  1 20:23 ?        00:00:06 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd
root         254       1  0 20:23 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id a63b62567b06b0cd4d17f8c3ba7b870bb9f98d86df803216f26a9df57c88a327 -address /run/containerd/containerd.sock
root         255       1  0 20:23 ?        00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 10ae9a17d1bbe7a0098adb1e27fc296cfe0eaafacf26ba83fc71472aad92cef0 -address /run/containerd/containerd.sock
65535        295     255  0 20:23 ?        00:00:00 /pause
65535        297     254  0 20:23 ?        00:00:00 /pause
root         362     255  0 20:23 ?        00:00:00 /bin/kindnetd
root         387     254  0 20:23 ?        00:00:00 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=kind-worker
root@kind-worker:/# ls -l /var/run/io.containerd.wasmwasi.v1 
total 0
srwxr-xr-x 1 root root 0 Jul  4 20:22 manager.sock

journalctl -u wasmedged.service shows nothing interesting.

containerd config:

root@kind-worker:/# more /etc/containerd/config.toml 
version = 2

[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    restrict_oom_score_adj = false
    sandbox_image = "registry.k8s.io/pause:3.7"
    tolerate_missing_hugepages_controller = true
    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      discard_unpacked_layers = true
      snapshotter = "overlayfs"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = "/etc/containerd/cri-base.json"
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.test-handler]
          base_runtime_spec = "/etc/containerd/cri-base.json"
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.test-handler.options]
            SystemdCgroup = true
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasm]
          runtime_type = "io.containerd.wasmedged.v1"
    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = "/etc/containerd/certs.d"

[proxy_plugins]
  [proxy_plugins.fuse-overlayfs]
    address = "/run/containerd-fuse-overlayfs.sock"
    type = "snapshot"

failed to start shim: start failed: terminate called after throwing an instance of 'std::out_of_range'

run the following:
ctr run --rm --runtime=io.containerd.wasmedge.v1 ghcr.io/containerd/runwasi/wasi-demo-app:latest testwasm /wasi-demo-app.wasm echo 'hello'

And get errors:
ctr: failed to start shim: start failed: terminate called after throwing an instance of 'std::out_of_range' what(): bitset::reset: __position (which is 1) >= _Nb (which is 1) : signal: aborted (core dumped): unknown

OS:
root@VM-0-10-ubuntu:~# cat /etc/issue
Ubuntu 20.04.6 LTS \n \l

docker run with wasmedge shim stops working #81

Issue Highlight

  • The command below fails due to c2c2d0d98237e297e72cbcf9c695a4a215a39bad #81
docker run --rm --runtime=io.containerd.wasmedge.v1 --platform wasi/wasm jorgeprendes420/wasmtest echo 'hello'

The failing commit needs to run with WasmEdge 0.12.1.

  • The command below fails due to ca5260fd34635ca56d81db3f59ee8672fc7fde68 #78
git clone https://github.com/WasmEdge/wasmedge_hyper_demo.git
cd wasmedge_hyper_demo
docker compose up client

The failing commit needs to run with the older WasmEdge 0.11.2.
[Update]
The shim now requires the entrypoint path to be resolved using normal POSIX executable resolution. The new usage restriction is that ENTRYPOINT needs to start with /, and adding a leading / to the path resolves the second failure.

Motivation

Due to the plan of deprecating the fork under Second State, I need to conduct an evaluation of the current functionality of containerd/runwasi before proceeding. I have discovered that using the wasmedge runtime with Docker has been broken for quite some time (all the while ctr works fine). Here, I am providing a document that includes the codebase and process I have been using to test docker run.

make build encountered an error

I tried to use the make build command and got the following error:

(screenshot of the build error)

I think this error may have occurred when installing wasmedge.

I tried to use the official command: curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash -s -- -e all -v 0.12.1 , and the make build was executed successfully.

Now it looks like make bin/wasmedge doesn't install the required dependencies.

thiserror and anyhow

This project uses two libraries for error handling; maybe we can choose just one and remove the other? I'm not sure why both are needed. If I had to choose, I would keep anyhow; I like its context function. WDYT?
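For illustration, a small sketch of the anyhow context ergonomics mentioned above; the function and path are made up:

use anyhow::{Context, Result};

fn read_config(path: &str) -> Result<String> {
    // `with_context` attaches a message to whatever error bubbles up.
    std::fs::read_to_string(path)
        .with_context(|| format!("failed to read config at {}", path))
}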

Prevent wasm functions from being able to access their source.

TL;DR: wasm can currently inspect itself, and it probably shouldn't. A way to prevent this, either by format/tooling or by config, is maybe a good idea.


Opening here despite the possibility of a better repo. I also considered opening this on crun. Feel free to punt me to a better place.

It seems the ENTRYPOINT is a marker to identify the %.wasm file, which is possibly amongst other files in rootFS layers. Code like the line below shows the guest must be inside the rootFS. In other words, the rootFS is mounted, and that same source includes the wasm.

let mod_path = oci::get_root(spec).join(cmd);

I think this is convenient as it allows re-use of tools, but it would be surprising from a black-box perspective, and it differs from how normal wasm runtimes work. Normally the wasm source is specified independently of any filesystem mounts, and it would be surprising, or a mistake, for someone to mount their wasm in a place where functions can accidentally or otherwise inspect it.

In other words, if I had to guess, someone thought about using an existing wasm layer type or a custom one (remember wasm is a single file, so it gains no benefit from layers), but that would require changes to Dockerfile or its successors, and said, nah. Maybe? I really don't know why these choices were made, but it seems reasonable if the goal was to get building with the existing ecosystem.

This said, I think there are a lot of things that will take time to correct. I think a way to not leak the source wasm is worth asking for, either as a runtime-specific feature (here) or in some spec (no idea where).

Copying some people who may have thoughts and who would perhaps act differently based on the outcome:

  • @assambar - VMware runtimes, a builder of the only OCI container I can find published with multiple layers python-wasm
  • @knqyf263 - Trivy, which uses OCI for wasm extensions, but doesn't do it with rootFS layers (rather a wasm one). However, it is a CLI not a service so maybe less concern about this.
  • @giuseppe - crun basically who my colleague @evacchi seems to ping on any low-level container nuance ;)

I intentionally spammed only 3, so yeah, feedback is welcome regardless of from whom. I think we should have a clear rationale on this one, even if reverse-engineered.

Failed to build when compiling wasmedge-sys v0.12.2

Description

When I run make in the root directory, it fails in the step that compiles wasmedge-sys v0.12.2, with this error:

The following warnings were emitted during compilation:

warning: [wasmedge-sys] Failed to locate lib_dir, include_dir, or header.

error: failed to run custom build command for `wasmedge-sys v0.12.2`

Caused by:
  process didn't exit successfully: `/home/vagrant/runwasi/target/debug/build/wasmedge-sys-1a654059db2210f1/build-script-build` (exit status: 101)
  --- stdout
  cargo:warning=[wasmedge-sys] Failed to locate lib_dir, include_dir, or header.

  --- stderr
  thread 'main' panicked at '[wasmedge-sys] Failed to locate the required header and/or library file. Please reference the link: https://wasmedge.org/book/en/embed/rust.html', /home/vagrant/.cargo/registry/src/github.com-1ecc6299db9ec823/wasmedge-sys-0.12.2/build.rs:30:25
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
make: *** [Makefile:19: build] Error 101

Did I miss something?

Expected

Build successfully.

Environment

Ubuntu 20.04.6 LTS

License?

I don't see a license file. Will this get an open-source license?

the network communication between a k3s pod and the container failed

I created a test repository, defims/wasmedge-hyper-server, to reproduce this problem:

environment

wasm pod failed

# run:
sudo kubectl apply -f wasm.yml
sudo curl localhost:30001

# got:
curl: (7) Failed to connect to localhost port 30001 after 1 ms: Connection refused

the img.tar and wasmedge-hyper-server.wasm:
img-and-wasmedge-hyper-server.zip

# unzip img-and-wasmedge-hyper-server.zip and import the image
sudo ctr image import --all-platforms img.tar

/var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl:

version = 2

[plugins."io.containerd.internal.v1.opt"]
  path = "/var/lib/rancher/k3s/agent/containerd"
[plugins."io.containerd.grpc.v1.cri"]
  stream_server_address = "127.0.0.1"
  stream_server_port = "10010"
  enable_selinux = false
  enable_unprivileged_ports = true
  enable_unprivileged_icmp = true
  sandbox_image = "rancher/mirrored-pause:3.6"

[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "overlayfs"
  disable_snapshot_annotations = true


[plugins."io.containerd.grpc.v1.cri".cni]
  bin_dir = "/var/lib/rancher/k3s/data/7c994f47fd344e1637da337b92c51433c255b387d207b30b3e0262779457afe4/bin"
  conf_dir = "/var/lib/rancher/k3s/agent/etc/cni/net.d"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmtime]
  runtime_type = "io.containerd.wasmtime.v1"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmedge]
  runtime_type = "io.containerd.wasmedge.v1"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.spin]
  runtime_type = "io.containerd.spin.v1"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true

the wasm.yml file:

---
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmedge
handler: wasmedge
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasmedge-deployment
  labels:
    app: wasmedge-hyper-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wasmedge-hyper-server
  template:
    metadata:
      labels:
        app: wasmedge-hyper-server
    spec:
      runtimeClassName: wasmedge
      containers:
      - name: wasmedge-hyper-server
        image: ghcr.io/containerd/runwasi/wasmedge-hyper-server:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8089
---
apiVersion: v1
kind: Service
metadata:
  name: wasmedge-service
  labels:
    app: wasmedge-hyper-server
spec:
  type: NodePort
  selector:
    app: wasmedge-hyper-server
  ports:
    - name: http
      protocol: TCP
      port: 8089
      targetPort: 8089
      nodePort: 30001

nginx works

# run:
sudo kubectl apply -f nginx.yml
sudo curl localhost:30000

# got:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

and the nginx.yml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30000

single pod with hostNetwork failed:

# run:
sudo kubectl apply -f pod.yml
sudo curl localhost:8089

# got:
curl: (7) Failed to connect to localhost port 8089 after 0 ms: Connection refused

pod.yml file:

---
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmedge
handler: wasmedge
---
apiVersion: v1
kind: Pod
metadata:
  name: wasmedge-hyper-pod
spec:
  hostNetwork: true
  runtimeClassName: wasmedge
  containers:
  - name: wasmedge-hyper-server
    image: ghcr.io/containerd/runwasi/wasmedge-hyper-server:latest
    imagePullPolicy: Never
    ports:
    - containerPort: 8089

wasm container works:

I'm sure anything other than the network works.

# run:
sudo ctr run --rm --net-host --runtime=io.containerd.wasmedge.v1 ghcr.io/containerd/runwasi/wasmedge-hyper-server:latest wasmedge-hyper-server
sudo curl localhost:8089

# got:
Try POSTing data to /echo such as: `curl localhost:8089/echo -XPOST -d 'hello world'`

runwasi logo idea

I don't think the project has a logo, so I'm proposing the following.

I'm excited about the project, but I'm not a developer, so this is me trying to contribute. The idea is running WASI :-D

runwasi logo idea

I won't lose any sleep if nobody likes it or is too busy to care right now.

how to debug the wasm container

ctr run works well:

sudo ctr run --rm --runtime=io.containerd.wasmedge.v1 docker.io/library/wasmtest:latest testwasm /wasm echo 'hello'

but kubectl apply with the yaml below fails. I don't know how to debug the wasm container (built from scratch), for example how to echo the container logs.

Config added to containerd-template.toml:

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmtime]
      runtime_type = "io.containerd.wasmtime.v1"

k8s.yml:

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wasmtest
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: wasmtest
            port:
              number: 3000
---
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime
handler: wasmtime
---
kind: Service
apiVersion: v1
metadata:
  name: wasmtest
  labels:
    name: wasmtest
spec:
  ports:
  - name: wasmtest3000
    protocol: TCP
    port: 3000
  selector:
    app: wasmtest
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: wasmtest
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wasmtest
  template:
    metadata:
      labels:
        app: wasmtest
    spec:
      runtimeClassName: wasmtime
      containers:
      - name: wasmtest
        image: docker.io/library/wasmtest:latest
        imagePullPolicy: Never 
        ports:
        - containerPort: 3000
microk8s kubectl apply -f k8s.yml

Move wasi impl to separate crate

The repo has a few binaries and a wasi implementation that is fairly tied to wasmtime.
#15 makes the core library runtime agnostic, meaning it does not depend on wasmtime.

In order to completely remove wasmtime as a dependency from the core library it may be useful to move the binaries along with the Wasi instance implementation into a separate crate (of course both crates can be in this repo).

No run time for "wasm" is configuted

I have configured containerd and set the shim binary at the correct path, but when I do kubectl apply -f test/k8s/deploy.yaml I get

wasi-demo-75d6bb666c-479km   0/1     ContainerCreating   0          9s

deploy.yaml

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: "wasmedge"
handler: "wasmedge"

and I describe it

➜  runwasi git:(main) ✗ kubectl describe pod wasi-demo-75d6bb666c-479km         
Name:                wasi-demo-75d6bb666c-479km
Namespace:           default
Priority:            0
Runtime Class Name:  wasmedge
Service Account:     default
Node:                minikube/192.168.49.2
Start Time:          Tue, 11 Jul 2023 19:55:50 +0800
Labels:              app=wasi-demo
                     pod-template-hash=75d6bb666c
Annotations:         <none>
Status:              Pending
IP:                  
IPs:                 <none>
Controlled By:       ReplicaSet/wasi-demo-75d6bb666c
Containers:
  demo:
    Container ID:   
    Image:          ghcr.io/containerd/runwasi/wasi-demo-app:latest
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fb764 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-fb764:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age               From               Message
  ----     ------                  ----              ----               -------
  Normal   Scheduled               18s               default-scheduler  Successfully assigned default/wasi-demo-75d6bb666c-479km to minikube
  Warning  FailedCreatePodSandBox  4s (x2 over 18s)  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox runtime: no runtime for "wasmedge" is configured

and here is my /etc/containerd/config.toml

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmedge]
          runtime_type = "io.containerd.wasmedge.v1"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmedge.options]
            BinaryName = "/usr/bin/containerd-shim-wasmedge-v1"

I have copied the shim binary to /bin, /usr/bin, and /usr/local/bin, but it still does not work.

and my cluster was started by minikube; here is the describe output for the node

➜  runwasi git:(main) ✗ kubectl describe node minikube | grep Container
  Container Runtime Version:  containerd://1.6.20

So which step was wrong? I am so confused.

CI might be broken after #187

Seems like CI might be broken after #187:

   Dirty wasmedge-sys v0.15.0: the file `/usr/lib/llvm-14/lib/clang/14.0.0/include/stdbool.h` is missing
   Compiling wasmedge-sys v0.15.0
     Running `/home/runner/work/runwasi/runwasi/target/debug/build/wasmedge-sys-24c4a5d29523e79f/build-script-build`
error: failed to run custom build command for `wasmedge-sys v0.15.0`
note: To improve backtraces for build dependencies, set the CARGO_PROFILE_DEV_BUILD_OVERRIDE_DEBUG=true environment variable to enable debug information generation.

Caused by:
  process didn't exit successfully: `/home/runner/work/runwasi/runwasi/target/debug/build/wasmedge-sys-24c4a5d29523e79f/build-script-build` (exit status: 1)
  --- stderr
  /home/runner/work/runwasi/runwasi/target/debug/build/wasmedge-sys-24c4a5d29523e79f/build-script-build: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /home/runner/work/runwasi/runwasi/target/debug/build/wasmedge-sys-24c4a5d29523e79f/build-script-build)
  /home/runner/work/runwasi/runwasi/target/debug/build/wasmedge-sys-24c4a5d29523e79f/build-script-build: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by /home/runner/work/runwasi/runwasi/target/debug/build/wasmedge-sys-24c4a5d29523e79f/build-script-build)
  /home/runner/work/runwasi/runwasi/target/debug/build/wasmedge-sys-24c4a5d29523e79f/build-script-build: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /home/runner/work/runwasi/runwasi/target/debug/build/wasmedge-sys-24c4a5d29523e79f/build-script-build)
warning: build failed, waiting for other jobs to finish...

Originally posted by @jsturtevant in #192 (comment)

Add troubleshooting guide

Currently there is no troubleshooting guide. People might find it hard to follow the readme to produce a hello world example.

Known issues include, but are not limited to:

  1. containerd currently only supports Linux, so in order to build runwasi you either need a Linux machine or need to run it in WSL on Windows
  2. docker buildx is a dependency
  3. make load is broken

build pipeline is failing

build pipeline: https://github.com/containerd/runwasi/actions/runs/5590373071/jobs/10220012269?pr=182

Error message:

/home/runner/.cargo/bin/cargo build --all --verbose
    Updating crates.io index
    Updating git repository `https://github.com/containerd/rust-extensions`
 Downloading crates ...
error: failed to download from `https://crates.io/api/v1/crates/cap-fs-ext/1.0.15/download`

Caused by:
  failed to get successful HTTP response from `https://crates.io/api/v1/crates/cap-fs-ext/1.0.15/download` (108.138.64.48), got 421
  debug headers:
  x-amz-cf-pop: IAD12-P1
  x-cache: Error from cloudfront
  x-amz-cf-pop: IAD12-P1
  x-amz-cf-id: JdJaQWKSmms_fHEe1k7N9PQila718nePI_C1gpVEr8lKk-wFNLXVPw==
  body:
  <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
  <HTML><HEAD><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
  <TITLE>ERROR: The request could not be satisfied</TITLE>
  </HEAD><BODY>
  <H1>421 ERROR</H1>
  <H2>The request could not be satisfied.</H2>
  <HR noshade size="1px">
  The distribution does not match the certificate for which the HTTPS connection was established with.
  We can't connect to the server for this app or website at this …

WasmEdge can run only 254 instances?

If I add a unit test which creates, runs, waits on, and deletes a WasmEdge instance 300 times, the test fails on the 255th run:

...
...
...
Running test iteration 253
Running test iteration 254
Running test iteration 255
Error: Any(failed to create container

Caused by:
    0: failed to wait for init ready
    1: failed to wait for init ready
    2: channel connection broken)


failures:
    instance::wasitest::test_wasi_300

It feels to me that we are leaking some resource (such as file descriptors), but I don't know if the problem is in WasmEdge, libcontainer or runwasi (or in the test setup :-). This is the test:

diff --git a/crates/containerd-shim-wasmedge/src/instance.rs b/crates/containerd-shim-wasmedge/src/instance.rs
index 87d0148..31a8e67 100644
--- a/crates/containerd-shim-wasmedge/src/instance.rs
+++ b/crates/containerd-shim-wasmedge/src/instance.rs
@@ -472,6 +472,32 @@ mod wasitest {
         Ok(())
     }

+    #[test]
+    #[serial]
+    fn test_wasi_300() -> Result<(), Error> {
+        if !has_cap_sys_admin() {
+            println!("running test with sudo: {}", function!());
+            return run_test_with_sudo(function!());
+        }
+
+        for i in 1..300 {
+            println!("Running test iteration {}", i);
+
+            let wasmbytes = wat2wasm(WASI_HELLO_WAT).unwrap();
+            let dir = tempdir()?;
+            let path = dir.path();
+            let res = run_wasi_test(&dir, wasmbytes)?;
+
+            assert_eq!(res.0, 0);
+
+            let output = read_to_string(path.join("stdout"))?;
+            assert_eq!(output, "hello world\n");
+
+            reset_stdio();
+        }
+        Ok(())
+    }
+
     #[test]
     #[serial]
     fn test_wasi_error() -> Result<(), Error> {

Cargo test fails with test_cgroup

It does not fail on every machine.

I have both machines where it succeeds and machines where it fails, but I am not sure what information I should provide.
I can help test in my environment if you have any ideas.

So far I know that if I sudo mkdir under /sys/fs/cgroup/memory, the generated folder contents differ between the succeeding and failing machines.

Run

cargo test --all test_cgroup --verbose

And I got the following error logs:

---- sandbox::cgroups::tests::test_cgroup stdout ----
Error: Others("failed to apply cgroup: could not open cgroup file /sys/fs/cgroup/relative/nested/containerd-wasm-shim-test_cgroup/memory.max: No such file or directory (os error 2)")

---- sandbox::cgroups::tests::test_cgroup stdout ----
running test with sudo: sandbox::cgroups::tests::test_cgroup
Error: Stdio(Kind(Other))
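
One possible explanation (my speculation; not confirmed above): memory.max is a cgroup v2 interface file, while a /sys/fs/cgroup/memory directory is characteristic of a cgroup v1 controller mount, so the succeeding and failing machines may simply be on different cgroup hierarchies. A minimal sketch to check which hierarchy a machine uses:

use std::path::Path;

fn main() {
    // On the cgroup v2 unified hierarchy, the file
    // /sys/fs/cgroup/cgroup.controllers exists at the root;
    // on v1 or hybrid setups it does not.
    if Path::new("/sys/fs/cgroup/cgroup.controllers").exists() {
        println!("cgroup v2 (unified hierarchy)");
    } else {
        println!("cgroup v1 or hybrid hierarchy");
    }
}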

MacOS support

I get error[E0425]: cannot find function `prctl` in crate `libc`, which suggests the code is currently Linux-only.
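
A first step toward macOS support would be gating the Linux-only call behind cfg. The sketch below is hypothetical, not runwasi's actual code: the report doesn't show which prctl option is used, so PR_SET_CHILD_SUBREAPER is purely illustrative (requires the libc crate on Linux):

#[cfg(target_os = "linux")]
fn set_child_subreaper() -> std::io::Result<()> {
    // SAFETY: PR_SET_CHILD_SUBREAPER only reads its integer argument.
    let ret = unsafe {
        libc::prctl(
            libc::PR_SET_CHILD_SUBREAPER,
            1 as libc::c_ulong,
            0 as libc::c_ulong,
            0 as libc::c_ulong,
            0 as libc::c_ulong,
        )
    };
    if ret != 0 {
        return Err(std::io::Error::last_os_error());
    }
    Ok(())
}

#[cfg(not(target_os = "linux"))]
fn set_child_subreaper() -> std::io::Result<()> {
    // No direct macOS equivalent of prctl; a real port would need a
    // platform-specific replacement or would skip the call entirely.
    Ok(())
}

fn main() -> std::io::Result<()> {
    set_child_subreaper()
}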

Update wasmtime deps to latest version (6.x)

I tried to update the wasmtime dep to the latest and found that runwasmedge is blocking the upgrade due to a dep on wasmtime-fiber:

[ERROR rust_analyzer::lsp_utils] rust-analyzer failed to load workspace: Failed to read Cargo metadata from Cargo.toml file /home/jstur/projects/runwasi/Cargo.toml, Some(Version { major: 1, minor: 67, patch: 1 }): Failed to run `"cargo" "metadata" "--format-version" "1" "--features" "generate_bindings" "--manifest-path" "/home/jstur/projects/runwasi/Cargo.toml" "--filter-platform" "x86_64-unknown-linux-gnu"`: `cargo metadata` exited with an error:     Blocking waiting for file lock on package cache
    Updating crates.io index
error: failed to select a version for `wasmtime-fiber`.
    ... required by package `runwasmedge v0.1.0 (/home/jstur/projects/runwasi/crates/wasmedge)`
versions that meet the requirements `^2.0` are: 2.0.2, 2.0.1, 2.0.0

the package `wasmtime-fiber` links to the native library `wasmtime-fiber-shims`, but it conflicts with a previous package which links to `wasmtime-fiber-shims` as well:
package `wasmtime-fiber v6.0.0`
    ... which satisfies dependency `wasmtime-fiber = "=6.0.0"` of package `wasmtime v6.0.0`
    ... which satisfies dependency `wasmtime = "^6.0"` of package `runwasmtime v0.1.0 (/home/jstur/projects/runwasi/crates/wasmtime)`
Only one package in the dependency graph may specify the same links value. This helps ensure that only one copy of a native library is linked in the final binary. Try to adjust your dependencies so that only one package uses the links ='wasmtime-fiber' value. For more information, see https://doc.rust-lang.org/cargo/reference/resolver.html#links.

failed to select a version for `wasmtime-fiber` which could resolve this conflict

Since each of these shims is a separate binary, it seems we should be able to have different dependencies, but the way the workspace is currently set up doesn't allow for this.

I think we either need wasmedge to update its dep at https://github.com/WasmEdge/WasmEdge/blob/e27198bf674c18989111c2075758c2ee147556fe/bindings/rust/wasmedge-sys/Cargo.toml#L24, or we need to reorganize the packages so the binaries can link native libraries separately (I'm not 100% sure how to do this)

Enable container + Wasm workloads running within the same pod

Currently, runwasi only knows how to run Wasm workloads. K8s users often want to run sidecars for service meshes and other traditional container injections within the same pod.

It would enable a more idiomatic K8s experience if runwasi were able to run both Wasm workloads and traditional container workloads within the same pod.

wasmedge echo hangs on second execution

To repro, run the following:

sudo ctr run --rm --runtime=io.containerd.wasmedge.v1 ghcr.io/containerd/runwasi/wasi-demo-app:latest testwasm /wasi-demo-app.wasm echo 'hello'
hello
exiting
sudo ctr run --rm --runtime=io.containerd.wasmedge.v1 ghcr.io/containerd/runwasi/wasi-demo-app:latest testwasm /wasi-demo-app.wasm echo 'hello'
hello
exiting
# hangs here

Update rust-extensions project for the next release

PR #81 upgrades TTRPC to use protobuf 3.x to unblock some work. This is a tracking issue so we can update once there is a new release of the https://github.com/containerd/rust-extensions project

> This is taking much longer than anticipated; things are moving along, but slowly. I am finding it difficult to maintain this patch and work on the Windows implementation with the various changes going in for the youki work.

Any thoughts on pinning to a rev vs a release so we can start bumping the other dependencies and moving forward on protobuf 3?

I'm good with pinning to a rev as long as we have an issue to track the follow up work. What do others think?

Originally posted by @devigned in #81 (comment)
