community's Issues

Crate Addition Request: volatile-memory

Crate Name

volatile-memory

Short Description

The volatile-memory crate provides abstractions for memory that can be modified concurrently by the hypervisor and the guest. It includes the Bytes, DataInit and VolatileMemory traits, as well as tools for handling endian representations of integer types.

Why is this crate relevant to the rust-vmm project?

This crate is extracted from memory-model (issue #22). The same abstractions can be used to implement, for example, ring buffers shared across processes, so the functionality is not specific to virtualization and can live in a separate crate.
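To make the abstraction concrete, here is a minimal sketch (a hypothetical helper, not the crate's actual API) of the kind of access a VolatileMemory-style type has to provide: reads the compiler may not elide or reorder, because the other side may change the bytes at any time.

```rust
use std::ptr;

/// Read a little-endian `u32` out of a shared buffer with volatile semantics.
fn volatile_read_u32_le(buf: &[u8], offset: usize) -> Option<u32> {
    // Bounds-check before touching raw pointers.
    let end = offset.checked_add(4)?;
    if end > buf.len() {
        return None;
    }
    let mut bytes = [0u8; 4];
    for (i, b) in bytes.iter_mut().enumerate() {
        // Byte-sized volatile reads are always aligned, so this stays sound
        // even when `offset` is not 4-byte aligned.
        *b = unsafe { ptr::read_volatile(buf.as_ptr().add(offset + i)) };
    }
    Some(u32::from_le_bytes(bytes))
}
```

A write-side helper would mirror this with `ptr::write_volatile`; the real crate additionally has to reason about concurrent mutation, which plain `&[u8]` cannot express.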

Repository/Crate Review Request

Repository Path

https://github.com/andreeaflorescu/kvm-bindings

Short Description

Feature-wise this is a copy of kvm_wrapper. The crate is now named kvm-bindings as the -bindings suffix seems to be commonly used for crates that export Rust FFI bindings.

Notes

Travis CI is not enabled for this repository, but I will add it once we move it to rust-vmm.
After the review process, I will also publish the kvm-bindings crate and update kvm_wrapper as follows:

  • Publish a new version of kvm_wrapper on crates.io with an updated README stating that kvm_wrapper is obsolete and that people should use kvm-bindings instead.
  • Yank all current versions of the kvm_wrapper crate.

New Component: hypervisor-firmware

Name

hypervisor-firmware

Purpose

This component is an ELF binary (compatible with the vmlinux loading convention used by the linux-loader crate) that can load a guest operating system from the guest image without host involvement. This puts the choice of which kernel is booted under the customer's control.

Currently it supports loading a bzImage, initrd and command line from the EFI system partition using a custom naming convention, e.g. \EFI\LINUX\{BZIMAGE, INITRD, CMDLINE}.

In the future the goal is to support loading an EFI application, e.g. \EFI\BOOT\BOOTX64.EFI, which could be GRUB or another EFI-capable bootloader that would then load the operating system itself.

Implementation details

Dependencies:

  • Rust core (unable to use std as we have no OS)
  • cpuio crate for Port I/O
  • r-efi crate for EFI structs (FUTURE)

The "firmware" contains a basic implementation of a virtio block driver (MMIO only currently, PCI soon), a FAT filesystem implementation and a bzImage loader. In the future it will include a PE32+ loader and a basic EFI compatibility environment (enough to run GRUB).
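As a sketch of what the MMIO probe step in such a block driver looks like (core-only, matching the no-std constraint; the offsets follow the virtio-mmio register layout, but the function itself is illustrative, not the firmware's actual code):

```rust
/// Offset and expected value of the virtio-mmio magic register: reading
/// offset 0x000 must return 0x7472_6976 ("virt" in little-endian ASCII).
const VIRTIO_MMIO_MAGIC_OFFSET: usize = 0x000;
const VIRTIO_MMIO_MAGIC: u32 = 0x7472_6976;

/// Probe step: check the magic value at the start of a mapped MMIO region.
///
/// Safety: `base` must point to a readable, 4-byte-aligned mapping that is
/// at least 4 bytes long.
unsafe fn is_virtio_mmio_device(base: *const u8) -> bool {
    let magic =
        core::ptr::read_volatile(base.add(VIRTIO_MMIO_MAGIC_OFFSET) as *const u32);
    magic == VIRTIO_MMIO_MAGIC
}
```

After the magic check, the real driver would go on to read the device ID register and negotiate features before touching the queues.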

Notes

We have this component already developed and working and would like to contribute it to the rust-vmm project. It is currently proceeding through our internal Open Source reviewing process and one of those required steps is to confirm the name.

Crate Addition Request: nvdimm

Crate Name

nvdimm

Short Description

Non-volatile memory is memory that retains its contents even when electrical power is removed, for example from an unexpected power loss, system crash, or normal shutdown.

Why is this crate relevant to the rust-vmm project?

This provides rust-vmm the capability to use nvdimm devices.

vhost-user design

We need a clear story on how vhost-user is going to be supported through rust-vmm crates.
There are three main areas which need to be addressed in order to get it working with current hypervisors (crosvm and firecracker):

  • vhost-user protocol support
  • VM memory sharing support
  • virtio devices consuming those crates

vhost-user protocol

There is some very nice work started by @jiangliu here. The crate looks almost ready. He implemented both the master and the slave side of the protocol, even though we only care about the master side for vhost-user support in any hypervisor; the slave implementation is useful for unit testing, though.
A few tests are still missing to validate that the crate behaves correctly with respect to the vhost-user protocol.

If that's fine with everybody, I think we should create a vhost-user crate under rust-vmm, and @jiangliu could submit his code there, where some real code reviews could happen. What do you all think?

VM memory sharing

Again, @jiangliu submitted some code to the firecracker codebase here, but everybody agreed during the rust-vmm meeting that it should be part of a separate crate that could be leveraged by every hypervisor.
We agreed on the creation of a memory-model crate that would hold the code responsible for how guest memory should be managed.

Such a crate will allow any virtio device using vhost-user to share the memory regions of the guest address space corresponding to the virtqueues that need to be accessed from the vhost-user backend running on the host.

@jiangliu has just created the crate in the rust-vmm organization, but I would suggest that we remove the initial code and submit proper PRs that everybody can review. Any objection here?

virtio devices

The last piece of the puzzle is to exercise the new vhost-user crate in a real use case, and we need to write devices for that. As discussed during the rust-vmm meeting, a vhost-user-net device to be used with DPDK or a vhost-user-fs device to be used with the libfuse daemon would be good starting points.

Feedback

/cc @andreeaflorescu @mcastelino @sameo @rbradford @jiangliu @zachreizner

Please let me know if I missed anything here, and cc more people for whom this could be relevant!

Crate Addition Request: memory-model

Crate Name

memory-model

Short Description

Trying to summarize discussions at #16:

This crate will fundamentally allow for accessing guest memory and translating guest addresses into host memory mapping.

The crate APIs will allow for writing to and reading from the guest memory.
This API should be able to handle raw slices but also DataInit types, i.e. types that can be safely initialized from a byte array.
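The DataInit idea described above can be sketched as follows (hypothetical names, not the crate's settled API): a marker trait asserts that any bit pattern is a valid value, which makes a bounds-checked byte copy out of guest memory safe.

```rust
/// Marker for plain-old-data types: any bit pattern is a valid value, so
/// instances can be materialized directly from guest bytes.
unsafe trait ByteValued: Copy {}
unsafe impl ByteValued for u64 {}

/// Copy a `T` out of the guest mapping at `offset`, bounds-checked.
fn read_obj<T: ByteValued>(mem: &[u8], offset: usize) -> Option<T> {
    let end = offset.checked_add(std::mem::size_of::<T>())?;
    if end > mem.len() {
        return None;
    }
    // Unaligned copy: the guest gives no alignment guarantees.
    Some(unsafe { std::ptr::read_unaligned(mem.as_ptr().add(offset) as *const T) })
}
```

The write direction is symmetric; the real crate would additionally translate guest-physical addresses to offsets in the host mapping before the copy.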

Why is this crate relevant to the rust-vmm project?

Handling guest memory is a fundamental piece of a vmm...

Create a "must have list" for crates in rust-vmm

From my experience, it is very hard to find the time to improve code once you have a working solution. That's why I think we should aim to have crates in good shape before publishing them on crates.io.

I am thinking about having the following as requirements before publishing the crate:

  • High-level documentation for the crate (what it is supposed to do, how it can be used in other projects, usage examples).
  • Mandatory documentation for public functions. To make sure we don't miss anything by mistake, we can enforce this rule with #![deny(missing_docs)].
  • Unit tests
  • Integration tests. For the kvm-ioctls crate I've written some basic integration tests in Python (because it is super easy, no judgement pls) that run cargo build, cargo test, cargo fmt (coding style) and cargo clippy (linter). I am also planning to add kcov, as I think code coverage is important for keeping a high quality bar for our crates. I think we should run the integration tests on each PR.
  • README.md
  • LICENSE

Feel free to add more things. We can also adjust the list if you think it is too much.
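For the documentation rule above, this is what enforcement looks like in practice (the function is a placeholder; the attribute name, missing_docs, is the one rustc actually uses):

```rust
#![deny(missing_docs)]
//! Crate-level docs: with the attribute above, any public item below that
//! lacks a doc comment turns the build into a hard error.

/// Returns whether the environment looks usable (placeholder example).
pub fn is_ready() -> bool {
    true
}
```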

Crate Addition Request: libc-utils

Crate Name

libc-utils

Short Description

A collection of modules that essentially provide helpers and utilities on top of the libc crate:

  • errno
  • fork
  • poll
  • signal
  • ioctl
  • timerfd
  • eventfd
  • syslog
  • terminal

Why is this crate relevant to the rust-vmm project?

Although this crate could be used by many projects not related to VMMs or virtualization at all, this set of tools and helpers makes writing a VMM easier and shorter than using the libc crate directly.
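A sketch of the flavor of helper such a crate provides, shown here for the errno module and built on std only (the real crate would sit directly on top of libc; the names are illustrative):

```rust
use std::io;

/// A typed wrapper around the thread-local OS error code, instead of
/// passing bare integers around.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Errno(i32);

impl Errno {
    /// Snapshot the calling thread's current OS error code.
    fn last() -> Errno {
        Errno(io::Error::last_os_error().raw_os_error().unwrap_or(0))
    }

    /// Convert into a `std::io::Error` for interop with std-based callers.
    fn into_io_error(self) -> io::Error {
        io::Error::from_raw_os_error(self.0)
    }
}
```

The eventfd, timerfd and ioctl modules would follow the same pattern: a thin, safe, typed layer over the raw libc calls.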

Crate Addition Request: vhost

Crate Name

vhost or vhost_rs

Short Description

The crate would provide:

  • A Vhost trait
  • A VhostUserMaster trait for vhost-user master endpoints
  • A VhostUserSlave trait for vhost-user slave endpoints
  • An implementation of the Vhost trait based on linux kernel vhost drivers
  • An implementation of the Vhost trait based on the vhost-user protocol
  • An implementation of the VhostUserMaster trait

Why is this crate relevant to the rust-vmm project?

The vhost crate may be used to implement virtio device backends.

Crate Addition Request: acpi

Crate Name

acpi is taken, so I propose acpi-tables

Short Description

This crate will provide a set of APIs for generating ACPI tables for the guest platform.

Why is this crate relevant to the rust-vmm project?

In order for rust-vmm based VMMs to boot regular, full-blown cloud images, they need to build and load ACPI tables into guest memory.
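One small but universal piece of that work is the table checksum: ACPI requires all bytes of a finished table, including the checksum field itself, to sum to zero modulo 256. A sketch of the helper such a crate would need:

```rust
/// Compute the checksum byte for an ACPI table, given the table bytes with
/// the checksum field left out (or zeroed).
fn acpi_checksum(table_without_checksum: &[u8]) -> u8 {
    let sum = table_without_checksum
        .iter()
        .fold(0u8, |acc, &b| acc.wrapping_add(b));
    // Two's complement: adding this byte back brings the total to 0 mod 256.
    sum.wrapping_neg()
}
```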

Crate Addition Request: arch

Crate Name

arch (currently) or vmm-arch (to relate back to the rust-vmm project)

Short Description

A crate to provide a hypervisor-agnostic interface to the existing arch crate currently used by crosvm/Firecracker, parts of which are also used in libwhp.

Why is this crate relevant to the rust-vmm project?

The functionality provided by the arch crate is shared across multiple projects (Firecracker and crosvm), but currently uses KVM primitives and APIs directly. libwhp (using Hyper-V) uses a small portion of the logic but ports it to Hyper-V. A hypervisor-agnostic solution would allow the same crate to be used across projects regardless of hypervisor.

Design

The proposed arch crate relies on an abstraction of the vCPU to achieve hypervisor agnosticism without sacrificing the performance afforded by hypervisor-specific primitives.

When a hypervisor-agnostic arch crate was initially proposed in the rust-vmm biweekly meeting, some concern was expressed about losing KVM primitives in the abstraction, which could result in an unacceptable performance loss for Firecracker/crosvm. In practice, the KVM-specific primitives in the existing crate rely on direct operations on the KVM VcpuFd. Our design allows the continued use of these primitives by abstracting out the Vcpu functionality (as proposed in [link to Vcpu proposal]), altering the APIs to accept the Vcpu trait as a generic input parameter instead of taking the data structure directly. And since Rust performs static dispatch on generics at compile time, the abstraction has "zero cost".

Proposed Vcpu trait definition (proposed here):

pub trait Vcpu {
    fn set_fpu(&self, fpu: &Fpu) -> Result<()>;
    fn set_msrs(&self, msrs: &MsrEntries) -> Result<()>;
    // . . .
}

Example function from the existing arch crate

(Taking a VcpuFd reference as an input parameter):

pub fn setup_fpu(vcpu: &VcpuFd) -> Result<()> {
    let fpu: kvm_fpu = kvm_fpu {
        fcw: 0x37f,
        mxcsr: 0x1f80,
        ..Default::default()
    };

    vcpu.set_fpu(&fpu).map_err(Error::SetFPURegisters)
}

Refactor of function consuming the trait as a generic:

pub fn setup_fpu<T: Vcpu>(vcpu: &T) -> Result<()> {
    let fpu: Fpu = Fpu {
        fcw: 0x37f,
        mxcsr: 0x1f80,
        ..Default::default()
    };

    vcpu.set_fpu(&fpu).map_err(Error::SetFPURegisters)
}

And code calling the arch function just calls it as normal, minimizing refactoring:

arch::x86_64::regs::setup_fpu(&self.fd).map_err(Error::FPUConfiguration)?;

Access Request to rust-vmm

GitHub Username: @NotBad4U
Hello 😄,
I met @sameo during a Rust meetup and he mentioned the rust-vmm project. I would like to help on this project but I would need a bit of mentoring / pointing for KVM/Virt API stuff.
Currently, I'm a systems developer at CleverCloud, where I use Rust as my main language. I'm a contributor to sozu (a reverse proxy in Rust) and soon serde (I have to finish my PR).

Crate Addition: kvm-ioctls

Crate Name: kvm-ioctls

The kvm name is taken. The proposal is to use kvm-ioctls instead.

Short Description

The kvm-ioctls crate will offer wrappers over the KVM ioctls.
KVM ioctls are of three types:

  • system ioctls -> grouped in the implementation of the Kvm structure
  • VM ioctls -> grouped in the implementation of the Vm structure
  • vCPU ioctls -> grouped in the implementation of the Vcpu structure

Why is this crate relevant to the rust-vmm project?

We need some wrappers for opening /dev/kvm, creating VMs, creating vCPUs and so on.

Crate Addition: vm-device

Crate Name

vm-device

Short Description

vm-device serves as a base crate for the concrete device crate(s) in rust-vmm. It focuses on defining common traits that can/should be used by any device implementation, as well as providing unified interfaces for the rest of the rust-vmm code that works with devices but does not need to know their implementation details.

Why is this crate relevant to the rust-vmm project?

There is already a 'devices' crate in both crosvm and Firecracker. As more and more devices are added to such a crate, it accumulates more and more external dependencies related to device implementation details. On the other hand, not all the devices in this crate will be required by every use case, and some devices may have alternative implementations. Given that, it would be natural to split the crate into several smaller crates at some point. By introducing a base device-model crate, we can restrict dependencies to the smaller scope that really needs to know the implementation of the concrete device, and provide a common API to operate on common device behavior. It also defines what traits a concrete device should follow.
There is already some discussion in #19. As an initial step, we can move bus.rs from the devices crate to this crate; we can also discuss the common code for virtio/vhost/PCI etc. In the long term, the following common functionality can also be abstracted and added:

  • Bus enhancement
  • Get/set device state
  • Resource(e.g. mmio range) management
  • Interrupt management
  • Device lifetime management
  • Device ACPI table generation
  • Hotplug
  • Other common code
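To ground the bus.rs step above, here is a minimal sketch of the dispatch pattern that crate would host (names are illustrative and the write path is elided; the real bus.rs differs in details):

```rust
use std::collections::BTreeMap;

/// The trait side of the proposal: a device handles reads at an offset
/// relative to its own base address.
trait BusDevice {
    fn read(&mut self, offset: u64, data: &mut [u8]);
}

/// The routing side: map a base address to (range length, device) and
/// dispatch accesses by address.
#[derive(Default)]
struct Bus {
    devices: BTreeMap<u64, (u64, Box<dyn BusDevice>)>,
}

impl Bus {
    fn insert(&mut self, base: u64, len: u64, dev: Box<dyn BusDevice>) {
        self.devices.insert(base, (len, dev));
    }

    /// Route a read at `addr` to the device owning that range, if any.
    fn read(&mut self, addr: u64, data: &mut [u8]) -> bool {
        // Find the highest registered base at or below `addr`.
        if let Some((base, entry)) = self.devices.range_mut(..=addr).next_back() {
            let (len, dev) = entry;
            if addr < *base + *len {
                dev.read(addr - *base, data);
                return true;
            }
        }
        false
    }
}

/// A trivial device for demonstration: answers every read with 0xAB.
struct ConstDevice;
impl BusDevice for ConstDevice {
    fn read(&mut self, _offset: u64, data: &mut [u8]) {
        for b in data.iter_mut() {
            *b = 0xAB;
        }
    }
}
```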

Crate Addition Request: linux-loader

Crate Name

linux-loader

Short Description

  • Parsing and loading vmlinux (raw ELF image) and bzImage images
  • Linux command line parsing and generation
  • Definitions and helpers for the Linux boot protocol, multiboot and PVH headers.

Why is this crate relevant to the rust-vmm project?

Direct kernel boot, but this could also be consumed by e.g. bootloaders written in Rust.
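As a sketch of the vmlinux (raw ELF) side of the parsing work (a toy fragment, not the crate's API): the loader has to validate the magic bytes and pull fields such as the entry point out of the header.

```rust
/// Extract the entry point from an ELF64 image. In the ELF64 header layout,
/// `e_entry` is the u64 at byte offset 24, after e_ident (16 bytes),
/// e_type (2), e_machine (2) and e_version (4).
fn elf64_entry_point(image: &[u8]) -> Option<u64> {
    // Magic: 0x7f 'E' 'L' 'F'
    if image.len() < 32 || &image[0..4] != b"\x7fELF" {
        return None;
    }
    let mut e_entry = [0u8; 8];
    e_entry.copy_from_slice(&image[24..32]);
    // Assumes a little-endian image, as for an x86_64 vmlinux.
    Some(u64::from_le_bytes(e_entry))
}
```

A full loader would of course also walk the program headers and copy each PT_LOAD segment into guest memory.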

Access Request to rust-vmm

GitHub Username: bjzhjing

Please add me to this project, I'm very interested in this direction. Thanks!

Repository Addition Request - rust-vmm-dev-container

Repository-name: dev-container (or rust-vmm-dev-container)

Short Description

Container with all dependencies required for running integration tests for the rust-vmm crates.

Why is this crate relevant to the rust-vmm project?

  • Deploy it as part of our CI to manage dependencies across multiple platforms
  • It can also be used during development to test code before submitting PRs, in the same environment in which the CI will test it.
  • You can already find the Dockerfile here.

P.S. I already published it on Docker Hub under the name rust-vmm-dev so I can run my experiments with Buildkite.

We can create a rust-vmm organization on Docker Hub and push the container there.

Crate Addition Request: extend vmm-vcpu to Hypervisor crate

Crate Name

Hypervisor

Short Description

vmm-vcpu has made vCPU handling hypervisor agnostic, but there is still some work to do to make the whole of rust-vmm hypervisor agnostic. So here is a proposal to extend vmm-vcpu into a Hypervisor crate that makes rust-vmm hypervisor agnostic. There is already an issue discussing this: rust-vmm/vmm-vcpu#5.

To reach a larger audience, I created this new issue per Jenny's suggestion.

The Hypervisor crate abstracts the interfaces of different hypervisors (e.g. KVM ioctls) to provide unified interfaces to the upper layer. Each concrete hypervisor (e.g. KVM/Hyper-V) implements the traits to provide its hypervisor-specific functions.

The upper layer (e.g. the Vmm) creates a Hypervisor instance which links to the running hypervisor. It then calls the running hypervisor's interfaces through the Hypervisor instance, which keeps the upper layer hypervisor agnostic.

Why is this crate relevant to the rust-vmm project?

rust-vmm should work with all hypervisors, e.g. KVM/Hyper-V/etc. A hypervisor abstraction crate is therefore necessary to encapsulate the hypervisor-specific operations, so that the upper layers can keep their implementations hypervisor agnostic.

Design

Relationships of crates (diagram in the original issue)

Compilation arguments
The concrete hypervisor instance for Hypervisor users (e.g. the Vmm) is created through a compilation argument, because only one hypervisor runs in a cloud scenario.

Hypervisor crate
The crate itself is simple: it exposes three public traits, Hypervisor, Vm and Vcpu, and is used by KVM/Hyper-V/etc. The interfaces defined below illustrate the mechanism; they are taken from Firecracker and are rather KVM specific. We may change them per requirements.

Note: The Vcpu part refers to [1] and [2] with some changes.

pub trait Hypervisor {
    fn create_vm(&self) -> Box<Vm>;
    fn get_api_version(&self) -> i32;
    fn check_extension(&self, c: Cap) -> bool;
    fn get_vcpu_mmap_size(&self) -> Result<usize>;
    fn get_supported_cpuid(&self, max_entries_count: usize) -> Result<CpuId>;
}

pub trait Vm {
    fn create_vcpu(&self, id: u8) -> Box<Vcpu>;
    fn set_user_memory_region(&self,
                              slot: u32,
                              guest_phys_addr: u64,
                              memory_size: u64,
                              userspace_addr: u64,
                              flags: u32) -> Result<()>;
    fn set_tss_address(&self, offset: usize) -> Result<()>;
    fn create_irq_chip(&self) -> Result<()>;
    fn create_pit2(&self, pit_config: PitConfig) -> Result<()>;
    fn register_irqfd(&self, evt: &EventFd, gsi: u32) -> Result<()>;
}

pub trait Vcpu {
    fn get_regs(&self) -> Result<VmmRegs>;
    fn set_regs(&self, regs: &VmmRegs) -> Result<()>;
    fn get_sregs(&self) -> Result<SpecialRegisters>;
    fn set_sregs(&self, sregs: &SpecialRegisters) -> Result<()>;
    fn get_fpu(&self) -> Result<Fpu>;
    fn set_fpu(&self, fpu: &Fpu) -> Result<()>;
    fn set_cpuid2(&self, cpuid: &CpuId) -> Result<()>;
    fn get_lapic(&self) -> Result<LApicState>;
    fn set_lapic(&self, klapic: &LApicState) -> Result<()>;
    fn get_msrs(&self, msrs: &mut MsrEntries) -> Result<i32>;
    fn set_msrs(&self, msrs: &MsrEntries) -> Result<()>;
    fn run(&self) -> Result<VcpuExit>;
}

[1] While the data types themselves (VmmRegs, SpecialRegisters, etc.) are exposed via the trait with generic names, under the hood they can be kvm_bindings data structures, which are also exposed from the same crate via public redefinitions:

pub use kvm_bindings::kvm_regs as VmmRegs;
pub use kvm_bindings::kvm_sregs as SpecialRegisters;
// ...

Sample code showing how it works

Kvm crate
Below is sample code from the Kvm crate showing how to implement the above traits.

pub struct Kvm {
    kvm: File,
}

impl Hypervisor for Kvm {
    fn create_vm(&self) -> Box<Vm> {
        let ret = unsafe { ioctl(&self.kvm, KVM_CREATE_VM()) };
        let vm_file = unsafe { File::from_raw_fd(ret) };
        Box::new(KvmVmFd { vm: vm_file, ...})
    }

    ...
}

struct KvmVmFd {
    vm: File,
    ...
}

impl Vm for KvmVmFd {
    fn create_irq_chip(&self) -> Result<()> {
        let ret = unsafe { ioctl(self, KVM_CREATE_IRQCHIP()) };
        ...
    }

    fn create_vcpu(&self, id: u8) -> Box<Vcpu> {
        let vcpu_fd = unsafe { ioctl_with_val(&self.vm,
                                              KVM_CREATE_VCPU(),
                                              id as c_ulong) };
        ...
        let vcpu = unsafe { File::from_raw_fd(vcpu_fd) };
        ...
        Box::new(KvmVcpuFd { vcpu, ... })
    }

    ...
}

pub struct KvmVcpuFd {
    vcpu: File,
    ...
}

impl Vcpu for KvmVcpuFd {
    ...
}

Vmm crate
Below is sample code from the Vmm crate showing how to work with the Hypervisor crate.

struct Vmm {
    hyp: Box<Hypervisor>,
    ...
}

impl Vmm {
    fn new(h: Box<Hypervisor>, ...) -> Self {
        Vmm { hyp: h, ... }
    }
    ...
}

pub struct GuestVm {
    fd: Box<Vm>,
    ...
}

impl GuestVm {
    pub fn new(hyp: Box<Hypervisor>) -> Result<Self> {
        let vm_fd = hyp.create_vm();
        ...
        let cpuid = hyp.get_supported_cpuid(MAX_CPUID_ENTRIES);
        ...
        Ok(GuestVm {
            fd: vm_fd,
            supported_cpuid: cpuid,
            guest_mem: None,
        })
    }
    ...
}

pub struct GuestVcpu {
    fd: Box<Vcpu>,
    ...
}

impl GuestVcpu {
    pub fn new(id: u8, vm: &GuestVm) -> Result<Self> {
        let vcpu = vm.fd.create_vcpu(id);
        Ok(GuestVcpu { fd: vcpu, ... })
    }
    ...
}

When the Vmm starts, it creates the concrete hypervisor instance according to the compilation argument, hands it to the Vmm, and starts the flow: create guest vm -> create guest vcpus -> run.
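The compile-time selection in that last step can be sketched with cfg flags (the feature name and backend types here are illustrative): exactly one backend is compiled in, and the Vmm only ever sees the trait.

```rust
/// The unified interface the upper layer sees.
trait Hypervisor {
    fn name(&self) -> &'static str;
}

#[cfg(feature = "kvm")]
struct Kvm;
#[cfg(feature = "kvm")]
impl Hypervisor for Kvm {
    fn name(&self) -> &'static str { "kvm" }
}

#[cfg(not(feature = "kvm"))]
struct HyperV;
#[cfg(not(feature = "kvm"))]
impl Hypervisor for HyperV {
    fn name(&self) -> &'static str { "hyperv" }
}

/// Single construction point the Vmm calls at start-up; which backend it
/// returns is fixed by the build, matching the one-hypervisor assumption.
fn create_hypervisor() -> Box<dyn Hypervisor> {
    #[cfg(feature = "kvm")]
    return Box::new(Kvm);
    #[cfg(not(feature = "kvm"))]
    return Box::new(HyperV);
}
```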

References:
[1] #40
[2] https://github.com/rust-vmm/vmm-vcpu

Crate Addition Request: vm-virtio

Crate Name

The virtio name is already taken on crates.io; that crate is unmaintained and very incomplete.

Instead of virtio, we could use:

  • virtio-ng
  • virtio-vmm
  • Send your suggestion...

Short Description

This crate would provide:

  • A VirtioDevice trait
  • Implementations of VirtioDevice for the block, net, rng and balloon virtio devices
  • Implementations of VirtioDevice for the vsock and net vhost devices
  • Virtio queue and descriptor APIs
  • MMIO and PCI virtio transports

Why is this crate relevant to the rust-vmm project?

rust-vmm needs a virtio implementation.
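To make the "queues and descriptor API" bullet concrete: the split-virtqueue descriptor defined by the virtio spec is a 16-byte entry that the queue code walks via the `next` field. A toy sketch (illustrative, not the proposed crate's API):

```rust
/// The 16-byte split-virtqueue descriptor from the virtio spec; queue code
/// reads these out of guest memory.
#[repr(C)]
#[derive(Clone, Copy, Debug, PartialEq)]
struct VirtqDesc {
    addr: u64,  // guest-physical address of the buffer
    len: u32,   // buffer length in bytes
    flags: u16, // VIRTQ_DESC_F_NEXT set while the chain continues
    next: u16,  // index of the next descriptor in the chain
}

const VIRTQ_DESC_F_NEXT: u16 = 0x1;

/// Collect the indices of a descriptor chain starting at `head`, bounding
/// the walk by the table length so a broken guest cannot loop us forever.
fn walk_chain(table: &[VirtqDesc], head: u16) -> Vec<u16> {
    let mut chain = Vec::new();
    let mut idx = head;
    while chain.len() < table.len() {
        let desc = match table.get(idx as usize) {
            Some(d) => d,
            None => break,
        };
        chain.push(idx);
        if desc.flags & VIRTQ_DESC_F_NEXT == 0 {
            break;
        }
        idx = desc.next;
    }
    chain
}
```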

Crate Addition Request: vmm vcpu

Crate Name

vmm-vcpu

Short Description

A crate to provide a hypervisor-agnostic interface to common Virtual-CPU functionality

Why is this crate relevant to the rust-vmm project?

Regardless of hypervisor, virtualization technologies utilize a virtual CPU, and the functions of a vCPU tend to be shared across the hypervisors used. For example, vCPUs require functions to get and set registers and MSRs, get and set local APIC state, run until the next VM exit, etc. Current VMM implementations (Firecracker, crosvm, libwhp) use hypervisor-specific vCPUs to accomplish this functionality, but a shared vCPU abstraction would allow for more generic VMM code. It would also facilitate clean abstractions for other crates; for example, the proposed arch crate relies on a vCPU trait abstraction to provide a hypervisor-agnostic arch crate without losing the performance-optimized primitives that each technology relies on.

Design

Design 1

The vmm-vcpu crate itself is quite simple, requiring only exposing a public Vcpu trait with the functions that comprise common VCPU functionality:

Note: The signatures below are in progress and subject to change. A design goal is to keep them matching, or as close as possible to, the existing signatures found in Firecracker/crosvm, to minimize code refactoring; some of these may move further in that direction if they haven't already.

pub trait Vcpu {
	fn set_fpu(&self, fpu: &Fpu) -> Result<()>;
	fn set_msrs(&self, msrs: &MsrEntries) -> Result<()>;
	fn set_sregs(&self, sregs: &SpecialRegisters) -> Result<()>;
	fn run(&mut self) -> Result<VcpuExit>;
	fn get_run_context(&mut self) -> &mut RunContext;
	fn setup_regs(&mut self, ip: u64, sp: u64, si: u64) -> Result<()>;
	fn get_regs(&self) -> Result<VmmRegisters>;
	fn set_regs(&self, regs: &VmmRegisters) -> Result<()>;
	fn get_sregs(&self) -> Result<SpecialRegisters>;
	fn get_lapic(&self) -> Result<LApicState>;
	fn set_lapic(&mut self, klapic: &LApicState) -> Result<()>;
	fn set_cpuid(&self, cpuid_entries: &[CpuIdEntry]) -> Result<()>;
}

While the data types themselves (VmmRegisters, SpecialRegisters) are exposed via the trait with generic names, under the hood they can be kvm_bindings data structures, which are also exposed from the same crate via public redefinitions:

pub use kvm_bindings::kvm_regs as VmmRegisters;
pub use kvm_bindings::kvm_sregs as SpecialRegisters; 
// ...

Per-hypervisor implementations of the Vcpu trait would not reside within the vmm-vcpu crate itself, but would be contained either in other rust-vmm crates or in crates hosted elsewhere. For example, the proposed and currently in-PR kvm-ioctls crate, which already refactors the VcpuFd implementation out of its previous home in kvm/src/lib.rs, would require minimal refactoring to utilize the new vmm-vcpu crate: its vCPU implementation moves from a plain VcpuFd impl block to one implementing the trait. For example, from:

pub struct VcpuFd {
    vcpu: File,
    kvm_run_ptr: KvmRunWrapper,
}

impl VcpuFd {
    /// ...
    #[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
    pub fn set_fpu(&self, fpu: &kvm_fpu) -> Result<()> {
        let ret = unsafe {
            // Here we trust the kernel not to read past the end of the kvm_fpu struct.
            ioctl_with_ref(self, KVM_SET_FPU(), fpu)
        };
        if ret < 0 {
            return Err(io::Error::last_os_error());
        }
        Ok(())
    }
}

To:

pub struct VcpuFd {
    vcpu: File,
    kvm_run_ptr: KvmRunWrapper,
}

impl Vcpu for VcpuFd {
    #[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
    fn set_fpu(&self, fpu: &kvm_fpu) -> Result<()> {
        let ret = unsafe {
            // Here we trust the kernel not to read past the end of the kvm_fpu struct.
            ioctl_with_ref(self, KVM_SET_FPU(), fpu)
        };
        if ret < 0 {
            return Err(io::Error::last_os_error());
        }
        Ok(())
    }
    // Other functions of the Vcpu trait
}

Similarly, the Rust Hyper-V crate libwhp would provide its own implementation of the Vcpu trait:

pub struct WhpVcpu {
    partition: Rc<RefCell<PartitionHandle>>,
    index: UINT32,
}

impl Vcpu for WhpVcpu {
    fn set_fpu(&self, fpu: &Fpu) -> Result<(), io::Error> {
        let reg_names: [WHV_REGISTER_NAME; 4] = [
            WHV_REGISTER_NAME::WHvX64RegisterFpControlStatus,
            WHV_REGISTER_NAME::WHvX64RegisterXmmControlStatus,
            WHV_REGISTER_NAME::WHvX64RegisterXmm0,
            WHV_REGISTER_NAME::WHvX64RegisterFpMmx0,
        ];

        let mut reg_values: [WHV_REGISTER_VALUE; 4] = Default::default();
        reg_values[0].Reg64 = fpu.fcw as UINT64;
        reg_values[1].Reg64 = fpu.mxcsr as UINT64;
        reg_values[2].Fp = WHV_X64_FP_REGISTER {
            AsUINT128: WHV_UINT128 {
                Low64: 0,
                High64: 0,
            },
        };
        reg_values[3].Fp = WHV_X64_FP_REGISTER {
            AsUINT128: WHV_UINT128 {
                Low64: 0,
                High64: 0,
            },
        };
        self.set_registers(&reg_names, &reg_values)
            .map_err(|_| io::Error::last_os_error())?;
            
        Ok(())
    }
    // Implement other functions of the Vcpu trait
}

Alternative Design

It was also discussed that a vCPU implementation is unlikely to change between compilations (i.e., a single build is likely to contain a KVM vCPU or a Hyper-V vCPU, but not both). Since traits are most useful when multiple implementations may be present in the same build, we internally discussed whether a trait-based crate is over-engineering for the problem, and whether something like conditional compilation of common APIs would be a better approach. A trait-based crate would enforce the API contract more strictly, but perhaps with some data visibility lost within the functions, since functions consuming trait generics can't access member variables of the implementing structs. We solicit opinions from the rust-vmm community as to whether a trait-based approach, conditional compilation, or even a hybrid (a trait-based implementation to enforce the contract, but conditional compilation calling the methods directly) makes the most sense for this problem space.

Crate Addition Request: CPU model

Crate Name

cpu-model

Short Description

A crate to provide a generic framework which has standard interfaces and flexible mechanism to support customized CPU models.

Why is this crate relevant to the rust-vmm project?

A customized CPU model is necessary for the following reasons:

  1. Avoiding CPU hardware vulnerabilities.
  2. Keeping a stable guest ABI.
  3. It is a hard requirement for live migration.
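Points 1 and 2 come down to feature-bit masking: a named CPU model acts as a whitelist, and the host's CPUID output is masked down to it so guests see identical features on every host. A minimal sketch of the mechanism (names and bit values are illustrative, not a real model definition):

```rust
use std::collections::HashMap;

/// A named CPU model as a whitelist: (CPUID leaf, output register index)
/// mapped to the feature bits the model allows.
type CpuModel = HashMap<(u32, usize), u32>;

/// Mask one host CPUID register value down to what the model permits;
/// anything not explicitly whitelisted is hidden from the guest.
fn apply_model(model: &CpuModel, leaf: u32, reg: usize, host_bits: u32) -> u32 {
    host_bits & model.get(&(leaf, reg)).copied().unwrap_or(0)
}
```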

Access Request to rust-vmm

GitHub Username: yisun-git

Hi,

I am from Intel and interested in this project. May I join? Thanks!

BRs,
Sun Yi
