rust-vmm / community
rust-vmm community content
GitHub Username: liujing2
GitHub Username: Uncho
volatile-memory
The volatile-memory crate provides abstractions for memory that can be modified concurrently by the hypervisor and the guest. It includes the Bytes, DataInit, and VolatileMemory traits, as well as tools for handling endian representations of integer types.
This crate is extracted from memory-model (issue #22). The same abstractions can be used to implement, for example, ring buffers shared across processes, so the crate is not specific to virtualization and can be split out on its own.
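To illustrate why volatile access matters here, a minimal sketch using std's volatile primitives (this is not the crate's actual API, just the underlying idea): every access must actually hit memory, since the other party may change it concurrently, so the compiler may not cache or elide reads and writes.

```rust
use std::ptr;

/// Hypothetical illustration of a volatile cell over a raw byte location.
/// The real crate works over guest memory mappings; this sketch only shows
/// why volatile accesses are used.
pub struct VolatileByte {
    addr: *mut u8,
}

impl VolatileByte {
    pub fn new(addr: *mut u8) -> Self {
        VolatileByte { addr }
    }

    pub fn load(&self) -> u8 {
        // read_volatile forces an actual memory read on every call.
        unsafe { ptr::read_volatile(self.addr) }
    }

    pub fn store(&self, val: u8) {
        // write_volatile forces an actual memory write on every call.
        unsafe { ptr::write_volatile(self.addr, val) }
    }
}
```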
https://github.com/andreeaflorescu/kvm-bindings
Feature-wise this is a copy of kvm_wrapper. The crate is now named kvm-bindings, as the -bindings suffix seems to be commonly used for crates that export Rust FFI bindings.
Travis CI is not enabled for this repository, but I will add it once we move it to rust-vmm.
After the review process, I will also publish the kvm-bindings crate and update kvm_wrapper as follows:
- publish a new kvm_wrapper version on crates.io with an updated readme in which I'll specify that kvm_wrapper is obsolete and people should use kvm-bindings instead
- deprecate the kvm_wrapper crate
GitHub Username: chao-p
Thanks.
GitHub Username: stefano-garzarella
hypervisor-firmware
This component is an ELF binary (compatible with the vmlinux loading convention used by the linux-loader crate) that can load a guest operating system from the guest image without host involvement. This puts the choice of which kernel is booted under the customer's control.
Currently it supports loading a bzImage, initrd & command line from the EFI system partition using a custom naming convention, e.g. \EFI\LINUX\{BZIMAGE, INITRD, CMDLINE}
In the future the goal is to support loading an EFI application, e.g. \EFI\BOOT\BOOTX64.EFI, which could be GRUB or another EFI-capable bootloader that would then load the operating system itself.
Dependencies:
The "firmware" contains a basic implementation of a virtio block driver (MMIO only currently, PCI soon), a FAT filesystem implementation, and a bzImage loader. In the future it will include a PE32+ loader and a basic EFI compatibility environment (enough to run GRUB).
We have this component already developed and working and would like to contribute it to the rust-vmm project. It is currently proceeding through our internal Open Source reviewing process and one of those required steps is to confirm the name.
nvdimm
Non-volatile memory is memory that retains its contents even when electrical power is removed, whether from an unexpected power loss, a system crash, or a normal shutdown.
This provides rust-vmm with the capability to use nvdimm devices.
We need a clear story on how vhost-user is going to be supported through rust-vmm crates.
There are three main areas which need to be addressed in order to get it working with current hypervisors (crosvm and firecracker):
There is some very nice work started by @jiangliu here. Development of this crate looks almost ready. He implemented both the master and the slave side of the protocol, even if we care only about the master side for vhost-user support in any hypervisor. The slave implementation is useful for unit testing, though.
A few tests are still missing to validate the behavior of this crate against the expected vhost-user protocol.
I think, if that's fine with everybody, we would create a crate vhost-user under rust-vmm, and @jiangliu could submit his code there, where some real code reviews could happen. What do you all think?
Again, @jiangliu submitted some code to the Firecracker codebase here, but everybody agreed during the rust-vmm meeting that it should be part of a separate crate that could be leveraged by every hypervisor.
We agreed on the creation of a memory-model crate that would hold the code responsible for how the guest memory should/could be managed.
Such a crate will allow any virtio device using vhost-user to share the memory regions of the guest address space corresponding to the virtqueues that need to be accessed from the vhost-user backend running on the host.
@jiangliu has just created the crate in the rust-vmm organization, but I would suggest that we remove the initial code and submit proper PRs that everybody could review. Any objection here?
The last piece of the puzzle is to try the new vhost-user crate in a real use case, and we need to write devices for that. Having a vhost-user-net device to be used with DPDK, or a vhost-user-fs device to be used with the libfuse daemon, as discussed during the rust-vmm meeting, would be good starting points for such devices.
/cc @andreeaflorescu @mcastelino @sameo @rbradford @jiangliu @zachreizner
Please let me know if I missed anything here, and cc more people for whom this could be relevant!
Please provide access.
GitHub Username: dianpopa
Thanks!
GitHub Username: hannibalhuang
memory-model
Trying to summarize discussions at #16:
This crate will fundamentally allow for accessing guest memory and translating guest addresses into host memory mappings.
The crate APIs will allow for writing to and reading from the guest memory.
This API should be able to handle raw slices but also DataInit types, i.e. types that can be safely initialized from a byte array.
Handling guest memory is a fundamental piece of a vmm...
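As a rough illustration of the address-translation part, a minimal sketch assuming a single contiguous region (the real crate handles multiple regions, slices, and DataInit types; all names here are illustrative):

```rust
/// Minimal sketch of guest-to-host address translation over one contiguous
/// memory region backed by a host buffer.
pub struct GuestRegion {
    guest_base: u64,
    host: Vec<u8>,
}

impl GuestRegion {
    pub fn new(guest_base: u64, size: usize) -> Self {
        GuestRegion { guest_base, host: vec![0; size] }
    }

    /// Translate a guest-physical address into an offset in the host buffer,
    /// returning None if the address falls outside the region.
    fn to_offset(&self, guest_addr: u64) -> Option<usize> {
        let off = guest_addr.checked_sub(self.guest_base)? as usize;
        if off < self.host.len() { Some(off) } else { None }
    }

    /// Write a byte slice into guest memory, bounds-checked.
    pub fn write(&mut self, guest_addr: u64, data: &[u8]) -> Option<()> {
        let off = self.to_offset(guest_addr)?;
        let end = off.checked_add(data.len())?;
        self.host.get_mut(off..end)?.copy_from_slice(data);
        Some(())
    }

    /// Read a byte slice out of guest memory, bounds-checked.
    pub fn read(&self, guest_addr: u64, len: usize) -> Option<&[u8]> {
        let off = self.to_offset(guest_addr)?;
        let end = off.checked_add(len)?;
        self.host.get(off..end)
    }
}
```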
We should specify in the readme what rust-vmm is.
GitHub Username: juntian
GitHub Username: yterrencelau
From my experience, it is very hard to find the time to work on improving code once you have a working solution. That's why I think we should aim at having crates in good shape before publishing them on crates.io.
I am thinking about having the following as requirements before publishing the crate:
Feel free to add more things. We can also adjust the list if you think it is too much.
GitHub Username: zyfjeff
Document how contributors can add a new crate to the rust-vmm organization
libc-utils
A collection of modules that essentially provide helpers and utilities on top of the libc
crate:
errno
fork
poll
signal
ioctl
timerfd
eventfd
syslog
terminal
Although this crate could be used by many projects not related to VMMs or virtualization at all, this set of tools and helpers makes writing a VMM easier and shorter than using the libc crate directly.
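As one example of the kind of helper the errno module could provide, a small sketch built on std (the helper names are assumptions, not the crate's actual API):

```rust
use std::fs::File;
use std::io;

/// Hypothetical errno helper: capture the last OS error as a raw errno value.
pub fn last_errno() -> Option<i32> {
    io::Error::last_os_error().raw_os_error()
}

/// Example use: a failing syscall (opening a path that does not exist)
/// produces an errno that the helper can report.
pub fn open_errno(path: &str) -> Option<i32> {
    match File::open(path) {
        Ok(_) => None,
        Err(e) => e.raw_os_error(), // e.g. ENOENT (2) on Linux
    }
}
```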
vhost or vhost_rs
The crate would provide:
The vhost crate may be used to implement virtio device backends.
The acpi name is taken, so I propose acpi-tables.
This crate will provide a set of APIs for generating ACPI tables for the guest platform.
In order for rust-vmm based VMMs to boot regular, full-blown cloud images, they should build and load ACPI tables in the guest memory.
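As a sketch of one helper such a crate would need: per the ACPI specification, every table must checksum to zero, i.e. the sum of all table bytes including the checksum field is 0 mod 256. A routine computing the checksum byte could look like:

```rust
/// Per the ACPI specification, every table must checksum to zero: the sum
/// of all bytes in the table, including the checksum field itself, must be
/// 0 mod 256. Given the table bytes with the checksum field left at 0,
/// return the value that makes the total wrap to zero.
pub fn acpi_checksum(table: &[u8]) -> u8 {
    let sum = table.iter().fold(0u8, |acc, &b| acc.wrapping_add(b));
    sum.wrapping_neg()
}
```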
GitHub Username: bonzini
I would like to be able to close issues, assign them to myself etc.
GitHub Username: serban300
I am part of the firecracker-microvm org.
arch (currently) or vmm-arch (to relate back to the rust-vmm project)
A crate to provide a hypervisor-agnostic interface to the existing arch crate currently used by crosvm/Firecracker, parts of which are also used in libwhp.
The functionality provided by the arch crate is shared across multiple projects (Firecracker and crosvm) but currently uses KVM primitives and APIs directly. libwhp (using Hyper-V) uses a small portion of the logic, ported to Hyper-V. A hypervisor-agnostic solution would allow the same crate to be used across projects regardless of hypervisor.
The proposed arch crate relies on an abstraction of the VCPU to achieve hypervisor agnosticism without sacrificing the performance of hypervisor-specific primitives.
When a hypervisor-agnostic arch crate was initially proposed in the rust-vmm biweekly meeting, there was some concern about losing KVM primitives in the abstraction, which could result in an unacceptable performance loss for Firecracker/crosvm. In practice, the KVM-specific primitives in the existing crate rely on direct operations on the KVM VcpuFd. Our design allows the continued use of these primitives by abstracting out the Vcpu functionality (as proposed in [link to Vcpu proposal]) and altering the APIs to accept the Vcpu trait generic as an input parameter instead of taking the data structure directly. And since Rust's compilation performs static dispatch here, the abstraction has "zero cost".
pub trait Vcpu {
    fn set_fpu(&self, fpu: &Fpu) -> Result<()>;
    fn set_msrs(&self, msrs: &MsrEntries) -> Result<()>;
    // . . .
}
Current arch crate (taking a VcpuFd reference as an input parameter):
pub fn setup_fpu(vcpu: &VcpuFd) -> Result<()> {
    let fpu: kvm_fpu = kvm_fpu {
        fcw: 0x37f,
        mxcsr: 0x1f80,
        ..Default::default()
    };
    vcpu.set_fpu(&fpu).map_err(Error::SetFPURegisters)
}
Proposed hypervisor-agnostic version (taking the Vcpu trait generic as an input parameter):
pub fn setup_fpu<T: Vcpu>(vcpu: &T) -> Result<()> {
    let fpu: Fpu = Fpu {
        fcw: 0x37f,
        mxcsr: 0x1f80,
        ..Default::default()
    };
    vcpu.set_fpu(&fpu).map_err(Error::SetFPURegisters)?;
    Ok(())
}
And code calling the arch function just calls it as normal, minimizing refactoring:
arch::x86_64::regs::setup_fpu(&self.fd).map_err(Error::FPUConfiguration)?;
GitHub Username: @NotBad4U
Hello 😄,
I met @sameo during a Rust meetup and he mentioned the rust-vmm project. I would like to help on this project, but I would need a bit of mentoring / pointers for the KVM/virtualization API stuff.
Currently, I'm a systems developer at CleverCloud, where I use Rust as my main language. I'm a contributor to sozu (a reverse proxy in Rust) and soon to serde (I have to finish my PR).
The kvm name is taken. The proposal is to use kvm-ioctls instead.
The kvm-ioctls crate will offer wrappers over the KVM ioctls.
KVM ioctls are of three types:
- ioctls on the Kvm structure
- ioctls on the Vm structure
- ioctls on the Vcpu structure
We need wrappers for opening /dev/kvm, creating VMs, creating vCPUs, and so on.
There is a lot of duplication in the binding crates generated from Linux header files; is there any plan to reduce the duplicated code?
vm-device
vm-device serves as a base crate for the concrete device crate(s) in rust-vmm. It focuses on defining common traits that can/should be used by any device implementation, as well as providing unified interfaces for the rest of the rust-vmm code that works on devices but does not necessarily need to know their implementation details.
There is already a 'devices' crate in both crosvm and Firecracker. As more and more devices are added to such a crate, it accumulates more and more external dependencies related to device implementation details. On the other hand, not all the devices in this crate will be required by a specific use case, and some devices may have alternative implementations. Given that, it will be natural to separate the crate into several smaller crates at some point. By introducing a base device model crate, we can restrict the dependencies to the smaller scope that really needs to know the implementation of a concrete device, and provide a common API to operate on the common behavior of devices. It also defines which traits a concrete device should follow.
There is already some discussion in #19. As an initial step, we can move bus.rs from the devices crate to this crate; we can also discuss the common code for virtio/vhost/PCI etc. In the long term, more common functionality can also be abstracted and added.
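As a rough sketch of the bus abstraction such a base crate could host (loosely modeled on the existing bus.rs; trait and type names are assumptions, not the proposed API):

```rust
use std::collections::BTreeMap;

/// Illustrative trait for a device on the bus; the real vm-device crate
/// would define richer traits.
pub trait BusDevice {
    fn read(&mut self, offset: u64, data: &mut [u8]);
    fn write(&mut self, offset: u64, data: &[u8]);
}

/// Minimal bus: maps a base address and length to a device, and routes
/// each access to the device owning the address, passing the offset
/// inside that device's range.
pub struct Bus {
    devices: BTreeMap<u64, (u64, Box<dyn BusDevice>)>,
}

impl Bus {
    pub fn new() -> Self {
        Bus { devices: BTreeMap::new() }
    }

    pub fn insert(&mut self, base: u64, len: u64, dev: Box<dyn BusDevice>) {
        self.devices.insert(base, (len, dev));
    }

    /// Find the device whose [base, base + len) range contains addr.
    fn resolve(&mut self, addr: u64) -> Option<(u64, &mut dyn BusDevice)> {
        let (base, entry) = self.devices.range_mut(..=addr).next_back()?;
        let (len, dev) = entry;
        if addr < *base + *len {
            Some((addr - *base, dev.as_mut()))
        } else {
            None
        }
    }

    /// Route a read; returns false if no device claims the address.
    pub fn read(&mut self, addr: u64, data: &mut [u8]) -> bool {
        match self.resolve(addr) {
            Some((off, dev)) => {
                dev.read(off, data);
                true
            }
            None => false,
        }
    }
}

/// Toy device returning a constant byte, for demonstration only.
pub struct ConstDevice(pub u8);

impl BusDevice for ConstDevice {
    fn read(&mut self, _offset: u64, data: &mut [u8]) {
        for b in data.iter_mut() {
            *b = self.0;
        }
    }
    fn write(&mut self, _offset: u64, _data: &[u8]) {}
}
```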
linux-loader
Direct kernel boot; this could also be consumed by, e.g., bootloaders written in Rust.
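As a small illustration of the first step such a loader performs, a sketch of the ELF identification check done before parsing program headers (simplified; a real loader would go on to parse headers and copy PT_LOAD segments into guest memory):

```rust
/// First step of loading a vmlinux image: validate the ELF identification
/// bytes. e_ident[0..4] holds the magic, e_ident[4] (EI_CLASS) selects
/// 32-bit (1) vs 64-bit (2).
const ELFMAG: [u8; 4] = [0x7f, b'E', b'L', b'F'];
const ELFCLASS64: u8 = 2;

pub fn is_elf64(image: &[u8]) -> bool {
    image.len() >= 5 && image[0..4] == ELFMAG && image[4] == ELFCLASS64
}
```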
GitHub Username: bjzhjing
Please add me to this project, I'm very interested in this direction. Thanks!
Container with all the dependencies required for running integration tests for the rust-vmm crates.
P.S. I already published it on Docker Hub under the name rust-vmm-dev so I can run my experiments with Buildkite.
We can create a rust-vmm organization on Docker Hub and push the container there.
GitHub Username: mswilson
GitHub Username: alexggh
GitHub Username: ...
Thanks in advance!!
Hypervisor
vmm-vcpu has made Vcpu handling hypervisor agnostic, but there is still some work to do to make the whole of rust-vmm hypervisor agnostic. So here is a proposal to extend vmm-vcpu into a Hypervisor crate to make rust-vmm hypervisor agnostic. There has been an issue to discuss this: rust-vmm/vmm-vcpu#5.
To reach a larger audience, I created this new issue per Jenny's suggestion.
The Hypervisor crate abstracts the interfaces of different hypervisors (e.g. the KVM ioctls) to provide unified interfaces to the upper layer. A concrete hypervisor (e.g. KVM/Hyper-V) implements the traits to provide the hypervisor-specific functions.
The upper layer (e.g. the Vmm) creates a Hypervisor instance which links to the running hypervisor. It then calls the running hypervisor's interfaces through the Hypervisor instance, which keeps the upper layer hypervisor agnostic.
rust-vmm should work with all hypervisors, e.g. KVM/Hyper-V/etc. So the hypervisor abstraction crate is necessary to encapsulate the hypervisor-specific operations, allowing the upper layer to keep its implementation hypervisor agnostic.
Compilation arguments
The concrete hypervisor instance is created for Hypervisor users (e.g. the Vmm) through a compilation argument, because only one hypervisor is running in a cloud scenario.
Hypervisor crate
This crate itself is simple: it exposes three public traits, Hypervisor, Vm and Vcpu.
This crate is used by KVM/Hyper-V/etc. The interfaces defined below illustrate the mechanism; they are taken from Firecracker and are rather KVM specific. We may change them per requirements.
Note: The Vcpu part refers to [1] and [2] with some changes.
pub trait Hypervisor {
    fn create_vm(&self) -> Box<dyn Vm>;
    fn get_api_version(&self) -> i32;
    fn check_extension(&self, c: Cap) -> bool;
    fn get_vcpu_mmap_size(&self) -> Result<usize>;
    fn get_supported_cpuid(&self, max_entries_count: usize) -> Result<CpuId>;
}
pub trait Vm {
    fn create_vcpu(&self, id: u8) -> Result<Box<dyn Vcpu>>;
    fn set_user_memory_region(
        &self,
        slot: u32,
        guest_phys_addr: u64,
        memory_size: u64,
        userspace_addr: u64,
        flags: u32,
    ) -> Result<()>;
    fn set_tss_address(&self, offset: usize) -> Result<()>;
    fn create_irq_chip(&self) -> Result<()>;
    fn create_pit2(&self, pit_config: PitConfig) -> Result<()>;
    fn register_irqfd(&self, evt: &EventFd, gsi: u32) -> Result<()>;
}
pub trait Vcpu {
    fn get_regs(&self) -> Result<VmmRegs>;
    fn set_regs(&self, regs: &VmmRegs) -> Result<()>;
    fn get_sregs(&self) -> Result<SpecialRegisters>;
    fn set_sregs(&self, sregs: &SpecialRegisters) -> Result<()>;
    fn get_fpu(&self) -> Result<Fpu>;
    fn set_fpu(&self, fpu: &Fpu) -> Result<()>;
    fn set_cpuid2(&self, cpuid: &CpuId) -> Result<()>;
    fn get_lapic(&self) -> Result<LApicState>;
    fn set_lapic(&self, klapic: &LApicState) -> Result<()>;
    fn get_msrs(&self, msrs: &mut MsrEntries) -> Result<i32>;
    fn set_msrs(&self, msrs: &MsrEntries) -> Result<()>;
    fn run(&self) -> Result<VcpuExit>;
}
[1] While the data types themselves (VmmRegs, SpecialRegisters, etc.) are exposed via the trait with generic names, under the hood they can be kvm_bindings data structures, which are also exposed from the same crate via public redefinitions:
pub use kvm_bindings::kvm_regs as VmmRegs;
pub use kvm_bindings::kvm_sregs as SpecialRegisters;
// ...
Kvm crate
Below is sample code from the Kvm crate showing how to implement the traits above.
pub struct Kvm {
    kvm: File,
}

impl Hypervisor for Kvm {
    fn create_vm(&self) -> Box<dyn Vm> {
        let ret = unsafe { ioctl(&self.kvm, KVM_CREATE_VM()) };
        let vm_file = unsafe { File::from_raw_fd(ret) };
        Box::new(KvmVmFd { vm: vm_file, ... })
    }
    ...
}
struct KvmVmFd {
    vm: File,
    ...
}

impl Vm for KvmVmFd {
    fn create_irq_chip(&self) -> Result<()> {
        let ret = unsafe { ioctl(self, KVM_CREATE_IRQCHIP()) };
        ...
    }
    fn create_vcpu(&self, id: u8) -> Result<Box<dyn Vcpu>> {
        let vcpu_fd = unsafe { ioctl_with_val(&self.vm, KVM_CREATE_VCPU(), id as c_ulong) };
        ...
        let vcpu = unsafe { File::from_raw_fd(vcpu_fd) };
        ...
        Ok(Box::new(KvmVcpuFd { vcpu, ... }))
    }
    ...
}
pub struct KvmVcpuFd {
    vcpu: File,
    ...
}

impl Vcpu for KvmVcpuFd {
    ...
}
Vmm crate
Below is sample code from the Vmm crate showing how to work with the Hypervisor crate.
struct Vmm {
    hyp: Box<dyn Hypervisor>,
    ...
}

impl Vmm {
    fn new(h: Box<dyn Hypervisor>, ...) -> Self {
        Vmm { hyp: h, ... }
    }
    ...
}
pub struct GuestVm {
    fd: Box<dyn Vm>,
    ...
}

impl GuestVm {
    pub fn new(hyp: Box<dyn Hypervisor>) -> Result<Self> {
        let vm_fd = hyp.create_vm();
        ...
        let cpuid = hyp.get_supported_cpuid(MAX_CPUID_ENTRIES);
        ...
        Ok(GuestVm {
            fd: vm_fd,
            supported_cpuid: cpuid,
            guest_mem: None,
        })
    }
    ...
}
pub struct GuestVcpu {
    fd: Box<dyn Vcpu>,
    ...
}

impl GuestVcpu {
    pub fn new(id: u8, vm: &GuestVm) -> Result<Self> {
        let vcpu = vm.fd.create_vcpu(id)?;
        Ok(GuestVcpu { fd: vcpu, ... })
    }
    ...
}
When the Vmm starts, it creates the concrete hypervisor instance according to the compilation argument, sets it on the Vmm, and starts the flow: create guest vm -> create guest vcpus -> run.
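The flow above can be sketched in compilable form with a mock hypervisor standing in for KVM (trait names follow the proposal; the bodies are stand-ins, not the real implementation):

```rust
// Mock trait hierarchy: the Vmm sees only the traits, so it stays
// hypervisor agnostic regardless of which concrete type is plugged in.
pub trait Vcpu {
    fn run(&self) -> &'static str;
}

pub trait Vm {
    fn create_vcpu(&self, id: u8) -> Box<dyn Vcpu>;
}

pub trait Hypervisor {
    fn create_vm(&self) -> Box<dyn Vm>;
}

// A stand-in hypervisor; a real backend would wrap KVM or Hyper-V handles.
struct MockVcpu;
impl Vcpu for MockVcpu {
    fn run(&self) -> &'static str {
        "vmexit"
    }
}

struct MockVm;
impl Vm for MockVm {
    fn create_vcpu(&self, _id: u8) -> Box<dyn Vcpu> {
        Box::new(MockVcpu)
    }
}

pub struct MockHypervisor;
impl Hypervisor for MockHypervisor {
    fn create_vm(&self) -> Box<dyn Vm> {
        Box::new(MockVm)
    }
}

pub struct Vmm {
    hyp: Box<dyn Hypervisor>,
}

impl Vmm {
    pub fn new(hyp: Box<dyn Hypervisor>) -> Self {
        Vmm { hyp }
    }

    /// The proposed flow: create guest vm -> create guest vcpus -> run.
    pub fn boot(&self, ncpus: u8) -> Vec<&'static str> {
        let vm = self.hyp.create_vm();
        (0..ncpus).map(|id| vm.create_vcpu(id).run()).collect()
    }
}
```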
References:
[1] #40
[2] https://github.com/rust-vmm/vmm-vcpu
The virtio name is already taken on crates.io. That crate is unmaintained and really incomplete.
Instead of virtio, we could use:
- virtio-ng
- virtio-vmm
This crate would provide:
- a VirtioDevice trait
- implementations of VirtioDevice for the block, net, rng, balloon virtio devices
- implementations of VirtioDevice for the vsock and net vhost devices
rust-vmm needs a virtio implementation.
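As a sketch of what the VirtioDevice trait could look like, loosely modeled on the trait used in crosvm/Firecracker (the method set, signatures, and the toy device below are assumptions, not the proposed API):

```rust
/// Sketch of a minimal VirtioDevice trait; a real one would also cover
/// queue activation, interrupt signalling, and config space access.
pub trait VirtioDevice {
    /// Virtio device type (e.g. 1 = net, 2 = block, 5 = balloon,
    /// per the virtio specification).
    fn device_type(&self) -> u32;
    /// Feature bits the device offers to the guest driver.
    fn features(&self) -> u64;
    /// Record the feature bits the driver acknowledged.
    fn ack_features(&mut self, value: u64);
    /// Feature bits acknowledged so far.
    fn acked_features(&self) -> u64;
}

/// Toy balloon-like device showing a trait implementation.
pub struct DummyBalloon {
    acked: u64,
}

impl DummyBalloon {
    pub fn new() -> Self {
        DummyBalloon { acked: 0 }
    }
}

impl VirtioDevice for DummyBalloon {
    fn device_type(&self) -> u32 {
        5 // VIRTIO_ID_BALLOON
    }
    fn features(&self) -> u64 {
        1 // offer a single (hypothetical) feature bit for illustration
    }
    fn ack_features(&mut self, value: u64) {
        // Only accept feature bits the device actually offered.
        self.acked |= value & self.features();
    }
    fn acked_features(&self) -> u64 {
        self.acked
    }
}
```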
GitHub Username: bIgBV
GitHub Username: jiazhang0
vmm-vcpu
A crate to provide a hypervisor-agnostic interface to common Virtual-CPU functionality
Regardless of hypervisor, VMMs utilize a virtual CPU, and the functions of a VCPU tend to be shared across the hypervisors used. For example, VCPUs require functions to get and set registers and MSRs, get and set local APIC state, run until the next VM exit, etc. Current implementations (Firecracker, crosvm, libwhp) use hypervisor-specific VCPUs to accomplish this functionality, but a shared VCPU abstraction would allow for more generic VMM code. It would also facilitate clean abstractions for other crates; for example, the proposed arch crate relies on a VCPU trait abstraction to provide a hypervisor-agnostic arch crate without losing the performance-optimized primitives that each technology relies on.
The vmm-vcpu crate itself is quite simple, only exposing a public Vcpu trait with the functions that comprise common VCPU functionality.
Note: The signatures below are in progress and subject to change. A design goal is to keep them matching, or as close as possible to, the existing signatures found in Firecracker/crosvm to minimize code refactoring; some of these may change further in that direction if they haven't already.
pub trait Vcpu {
    fn set_fpu(&self, fpu: &Fpu) -> Result<()>;
    fn set_msrs(&self, msrs: &MsrEntries) -> Result<()>;
    fn set_sregs(&self, sregs: &SpecialRegisters) -> Result<()>;
    fn run(&mut self) -> Result<VcpuExit>;
    fn get_run_context(&mut self) -> &mut RunContext;
    fn setup_regs(&mut self, ip: u64, sp: u64, si: u64) -> Result<()>;
    fn get_regs(&self) -> Result<VmmRegisters>;
    fn set_regs(&self, regs: &VmmRegisters) -> Result<()>;
    fn get_sregs(&self) -> Result<SpecialRegisters>;
    fn get_lapic(&self) -> Result<LApicState>;
    fn set_lapic(&mut self, klapic: &LApicState) -> Result<()>;
    fn set_cpuid(&self, cpuid_entries: &[CpuIdEntry]) -> Result<()>;
}
While the data types themselves (VmmRegisters, SpecialRegisters) are exposed via the trait with generic names, under the hood they can be kvm_bindings data structures, which are also exposed from the same crate via public redefinitions:
pub use kvm_bindings::kvm_regs as VmmRegisters;
pub use kvm_bindings::kvm_sregs as SpecialRegisters;
// ...
Per-hypervisor implementations of the Vcpu trait would not reside within the vmm-vcpu crate itself, but would be contained either in other rust-vmm crates or in crates hosted elsewhere. For example, the proposed and currently in-PR kvm-ioctls, which already refactors out the VcpuFd implementation from its previous home in kvm/src/lib.rs, would require minimal refactoring to utilize the new vmm-vcpu crate, moving its VCPU implementation from a plain VcpuFd implementation to one implementing the trait. For example, from:
pub struct VcpuFd {
    vcpu: File,
    kvm_run_ptr: KvmRunWrapper,
}

impl VcpuFd {
    /// ...
    #[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
    pub fn set_fpu(&self, fpu: &kvm_fpu) -> Result<()> {
        let ret = unsafe {
            // Here we trust the kernel not to read past the end of the kvm_fpu struct.
            ioctl_with_ref(self, KVM_SET_FPU(), fpu)
        };
        if ret < 0 {
            return Err(io::Error::last_os_error());
        }
        Ok(())
    }
}
To:
pub struct VcpuFd {
    vcpu: File,
    kvm_run_ptr: KvmRunWrapper,
}

impl Vcpu for VcpuFd {
    #[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
    fn set_fpu(&self, fpu: &kvm_fpu) -> Result<()> {
        let ret = unsafe {
            // Here we trust the kernel not to read past the end of the kvm_fpu struct.
            ioctl_with_ref(self, KVM_SET_FPU(), fpu)
        };
        if ret < 0 {
            return Err(io::Error::last_os_error());
        }
        Ok(())
    }
    // Other functions of the Vcpu trait
}
Similarly, the Rust Hyper-V crate libwhp would implement its side of the Vcpu trait:
pub struct WhpVcpu {
    partition: Rc<RefCell<PartitionHandle>>,
    index: UINT32,
}

impl Vcpu for WhpVcpu {
    fn set_fpu(&self, fpu: &Fpu) -> Result<(), io::Error> {
        let reg_names: [WHV_REGISTER_NAME; 4] = [
            WHV_REGISTER_NAME::WHvX64RegisterFpControlStatus,
            WHV_REGISTER_NAME::WHvX64RegisterXmmControlStatus,
            WHV_REGISTER_NAME::WHvX64RegisterXmm0,
            WHV_REGISTER_NAME::WHvX64RegisterFpMmx0,
        ];
        let mut reg_values: [WHV_REGISTER_VALUE; 4] = Default::default();
        reg_values[0].Reg64 = fpu.fcw as UINT64;
        reg_values[1].Reg64 = fpu.mxcsr as UINT64;
        reg_values[2].Fp = WHV_X64_FP_REGISTER {
            AsUINT128: WHV_UINT128 {
                Low64: 0,
                High64: 0,
            },
        };
        reg_values[3].Fp = WHV_X64_FP_REGISTER {
            AsUINT128: WHV_UINT128 {
                Low64: 0,
                High64: 0,
            },
        };
        self.set_registers(&reg_names, &reg_values)
            .map_err(|_| io::Error::last_os_error())?;
        Ok(())
    }
    // Implement other functions of the Vcpu trait
}
It was also discussed that a VCPU is unlikely to change between compilations (i.e., a single build is likely to contain a KVM VCPU or a Hyper-V VCPU, but not both). Since traits are most useful when multiple implementations may be present in the same build, we internally discussed whether a trait-based crate is over-engineering for this problem and whether something like conditional compilation of common APIs is a better approach. A trait-based crate would enforce the API contract more strictly, but perhaps with some data visibility lost within the functions, since functions consuming trait generics can't access member variables of the structs implementing the trait. We solicit opinions from the rust-vmm community as to whether a trait-based approach, conditional compilation, or a hybrid (trait-based to enforce the contract, but conditional compilation calling the methods directly) makes the most sense for this problem space.
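To make the conditional-compilation alternative concrete, a sketch (all names illustrative; in a real crate the module selection would be feature-gated rather than hard-wired):

```rust
// Sketch of the conditional-compilation alternative. In a real crate the
// module selection would be driven by cargo features, e.g.:
//     #[cfg(feature = "kvm")] use kvm_vcpu::Vcpu;
//     #[cfg(feature = "whp")] use whp_vcpu::Vcpu;
// Here one module is hard-wired so the sketch compiles.
mod kvm_vcpu {
    pub struct Vcpu {
        pub fcw: u16,
    }

    impl Vcpu {
        pub fn new() -> Self {
            Vcpu { fcw: 0 }
        }

        // Every backend exports the same signature; the body is backend
        // specific and can touch Vcpu's fields directly -- the data
        // visibility that trait generics would not provide.
        pub fn set_fpu(&mut self, fcw: u16) {
            self.fcw = fcw;
        }
    }
}

// The one line that changes per backend; the common code below is untouched.
use kvm_vcpu::Vcpu;

pub fn setup_fpu(vcpu: &mut Vcpu) {
    vcpu.set_fpu(0x37f);
}
```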
GitHub Username: [email protected]
GitHub Username: aylei
Thanks!
'cpu-model'
A crate to provide a generic framework, with standard interfaces and a flexible mechanism, to support customized CPU models.
A customized CPU model is necessary for the reasons below.
GitHub Username: MaciekBielski
GitHub Username: yisun-git
Hi,
I am from Intel and interested in this project. May I join? Thanks!
BRs,
Sun Yi
Please add me into this organization, thanks!
[email protected]
GitHub Username: ChrisMacNaughton
GitHub Username: maccarro
GitHub Username: ochescc