vmware / vic

vSphere Integrated Containers Engine is a container runtime for vSphere.
Home Page: http://vmware.github.io/vic
License: Other
We should use a vendoring tool (e.g. gvt, gb, gpm) to populate the vendor/
directory at the root of the repo. This will allow us to keep only our code in the repo while still being able to grab dependencies at build time.
The existing Bonneville daemon has datastore-related functionality that can be refactored into its own set of utilities. This will require a bit of investigation, starting in daemon/modules/vmware/driver.go:
	imagePath          string
	imageDatastore     *Datastore
	containerPath      string
	containerDatastore *Datastore
	volumePath         string
	volumeDatastore    *Datastore
It may merit its own pkg/vsphere/datastore package or perhaps just a Datastore type in pkg/vmware/object/datastore.go
type Datastore struct {
	*object.Datastore // govmomi/object
	// cache, etc.
	dsType string
	dsPath string
}
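As a hypothetical sketch only (the constructor name and cache handling are assumptions, not existing code), the wrapper above might be populated like so:

// New wraps a govmomi datastore with the type and path metadata we need.
func New(ds *object.Datastore, dsType, dsPath string) *Datastore {
	return &Datastore{Datastore: ds, dsType: dsType, dsPath: dsPath}
}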
Investigate automatic creation & destruction of ESX VMs on vCloud Air for purposes of CI-driven end-to-end testing for VIC.
This may be automatically updated, but make sure we provide support for displaying the container of the VM. Are there actions on the VM that we want to carry over to the container? E.g., deleting the VM should delete the container too.
Investigate how we can directly manipulate VMDKs on ESX. The following high-level tasks are of specific interest:
We need an automated way to benchmark the following Docker workflows with VIC; a timing sketch follows the list. We also need a way to track changes in these workflows across our CI builds.
Docker Create
Docker Start
Docker Stop
Docker Delete
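As a starting point, a minimal sketch in Go of timing these four operations by shelling out to the docker CLI. The busybox image, and pointing DOCKER_HOST at the VCH, are assumptions for illustration:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// timed runs a docker subcommand and returns how long it took.
func timed(args ...string) time.Duration {
	start := time.Now()
	if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("docker %v: %v: %s", args, err, out))
	}
	return time.Since(start)
}

func main() {
	// docker create prints the new container ID on stdout
	start := time.Now()
	out, err := exec.Command("docker", "create", "busybox", "true").Output()
	if err != nil {
		panic(err)
	}
	id := strings.TrimSpace(string(out))
	fmt.Println("create:", time.Since(start))

	fmt.Println("start:", timed("start", id))
	fmt.Println("stop:", timed("stop", id))
	fmt.Println("rm:", timed("rm", id))
}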
In order to see and address breakages early, we want to move to using Photon nightly builds as our base. This depends on #16.
We need a package to handle disk preparation from within the VCH: attach/detach, mount/unmount, format, etc.
There is an existing collection of such methods in bonneville-daemon/daemon/modules/vmware/disk.go
These can be refactored into a new package (pkg/vsphere/disk) that is not tied to the docker daemon.
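A rough sketch of what pkg/vsphere/disk could expose; the method set mirrors the operations listed above, but the names and types are assumptions, not the existing bonneville-daemon code:

package disk

import "context"

// VirtualDisk identifies a VMDK and, once attached, its guest device node.
type VirtualDisk struct {
	DatastorePath string // e.g. "[datastore1] vic/scratch.vmdk"
	DevicePath    string // e.g. "/dev/sda" after attach
}

// Manager covers disk preparation from within the VCH.
type Manager interface {
	Attach(ctx context.Context, d *VirtualDisk) error
	Detach(ctx context.Context, d *VirtualDisk) error
	Format(ctx context.Context, d *VirtualDisk, fstype string) error
	Mount(ctx context.Context, d *VirtualDisk, target string) error
	Unmount(ctx context.Context, d *VirtualDisk) error
}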
If we cannot add VCH tasks into the normal task view, then we need to create this view in the plugin area (see #43).
Imported from BON-283.
If we need to disable ASLR and re-enable it later (e.g., post kernel boot), the Linux kernel provides a proc/sysctl interface that can be leveraged. This needs to be verified, but it is a first step that may just work.
randomize_va_space sysctl
randomize_va_space:
This option can be used to select the type of process address
space randomization that is used in the system, for architectures
that support this feature.
0 - Turn the process address space randomization off. This is the
default for architectures that do not support this feature anyways,
and kernels that are booted with the "norandmaps" parameter.
1 - Make the addresses of mmap base, stack and VDSO page randomized.
This, among other things, implies that shared libraries will be
loaded to random addresses. Also for PIE-linked binaries, the
location of code start is randomized. This is the default if the
CONFIG_COMPAT_BRK option is enabled.
2 - Additionally enable heap randomization. This is the default if
CONFIG_COMPAT_BRK is disabled.
There are a few legacy applications out there (such as some ancient
versions of libc.so.5 from 1996) that assume that brk area starts
just after the end of the code+bss. These applications break when
start of the brk area is randomized. There are however no known
non-legacy applications that would be broken this way, so for most
systems it is safe to choose full randomization.
Systems with ancient and/or broken binaries should be configured
with CONFIG_COMPAT_BRK enabled, which excludes the heap from process
address space randomization.
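A minimal sketch, assuming the tether runs as root inside the containerVM, of toggling this through the proc interface described above:

package main

import "os"

const aslrProc = "/proc/sys/kernel/randomize_va_space"

// setASLR writes 0 (off), 1 (mmap/stack/VDSO), or 2 (full) to the sysctl.
func setASLR(level byte) error {
	return os.WriteFile(aslrProc, []byte{'0' + level}, 0644)
}

func main() {
	if err := setASLR(0); err != nil { // disable early
		panic(err)
	}
	// ... later, post kernel boot, restore full randomization
	if err := setASLR(2); err != nil {
		panic(err)
	}
}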
Not yet clear what to send.
vSphere users should be able to easily monitor containers in the UI. This may initially just mean monitoring the same set of things that can be monitored for a VM.
Create a networks tab on the VCH page to show content similar to the normal networks page of vApps.
Docker Machine will be one of the ways in which a VCH can be created. We need to understand how flexible the plug-in model is so that we can come up with a specification for how Docker Machine might be able to drive both vSphere admin tasks and user tasks.
As discussed in #4 (comment)
When self-provisioning a VCH, access to certain vSphere system resources will need to be granted ahead of time by an admin. The user and the admin need a secure and simple mechanism by which access is granted, presented and validated. The simplest approach is for the vSphere admin to be able to create a binary token, representing access to specific resources, which can be passed as input to VCH creation.
In order to specify this workflow, we need to be clearer about the mechanisms of authentication, authorization and validation that we've chosen. We also need to decide what the scope of the token should be.
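For discussion only, a sketch of one possible token shape: an HMAC-signed grant over a resource list. Every name and field here is an assumption, not a settled design:

package token

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/json"
)

// Grant describes the vSphere resources a VCH creation may use (assumed shape).
type Grant struct {
	Resources []string `json:"resources"` // e.g. resource pool and datastore paths
	Expires   int64    `json:"expires"`   // unix seconds
}

// Sign serializes the grant and prepends an HMAC-SHA256 over it, yielding
// the opaque binary token the admin hands to the user.
func Sign(g Grant, key []byte) ([]byte, error) {
	payload, err := json.Marshal(g)
	if err != nil {
		return nil, err
	}
	mac := hmac.New(sha256.New, key)
	mac.Write(payload)
	return append(mac.Sum(nil), payload...), nil
}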
@jak-atx @hickeng @lweitzman evaluating GitHub issues as a pipeline for UX/UI workflow.
Testing links to clickable workflow: http://ue-sandbox.eng.vmware.com/vic/HTML/
Do we want to add links back to internal artifacts?
How do we add a label? It looks like we might need organization membership to add labels - waiting on Dana Nourie.
When this repo is made public, I assume all these TP issues will be public too.
Create a datastores tab on the VCH page to show content similar to the normal datastores page of vApps.
We're going to want Docker API server bindings:
Update the icons in the left nav and in the inventories section on the home page. (The icons are still being prepared)
We need a well-documented workflow for NSX deployment and DPG creation:
This investigation should document at least (1) and generate additional issues for (2) and (3).
I nuked isolinux.bin and boot.cat from the repo, which breaks the ISO build in the linux/Dockerfile. isolinux.bin can be grabbed from the build container; boot.cat needs to be looked at.
In any case, fix the ISO build.
Imported from BON-274
This is the easiest repo to refactor since it's all our code. The objective is to allow the tether component to be built locally for all the platforms it supports. It currently builds inside build containers, where the Dockerfile copies only the files relevant to the specified platform. This breaks local tooling and makes development hard.
If the build is instead carried out by a Makefile, the Dockerfile is simplified significantly: it does little more than install some build tools, call make, and build the ISO. The base Dockerfile will be subsumed by this single file, and all caching will happen in the Docker context (SRCTOP).
See https://github.com/vmware/vic/pull/4/files#r50573409
Might as well clean up the base Dockerfile too. See https://github.com/vmware/vic/pull/4/files#r50573362
cc @caglar10ur
We should use the STREAM test here.
This should assess the total consumed memory (preferably with a breakdown of ESX/VM/guest) in:
This will allow us to track not only direct memory usage, but also page breaking behaviour.
The appliance must be capable of being restarted while containers are running. This raises a number of questions about the impact:
The Bonneville appliance has a local disk as well as a ramdisk. It may be beneficial to see if we can make the VIC appliance "diskless" - have it boot off the ISO and then have everything else written into memory. The main question marks around this are:
The bootstrap kernel (which we're currently building from source) adds the serial driver to the kernel and directs a console to it. This serial port is file-backed on the ESX datastore, so any logs the kernel writes to the console go to the serial port, and ESX writes them to a log file.
The issue is that the "mainline" Photon kernel does not include serial support. They cite a performance penalty, incurred by all VMs using their kernel, from adding a serial port to the kernel.
We use this console to capture kernel panics, which is what happens if the tether panics. The log ensures you get the tether stack trace preceding the kernel oops, whereas the VGA console will only capture the oops itself (due to log wrap).
There is, however, an initrd which includes the serial module.
So we have two options.
1 - Modify the logger in the tether to log to the serial port rather than the console. This means we no longer need to build our own kernel, and the VGA console is reserved for kernel oopses.
2 - Continue to build our own kernel and log to the serial console.
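A minimal sketch of option 1, assuming /dev/ttyS0 is the file-backed serial port: point the tether's standard logger at the serial device, so messages land in the datastore-backed log rather than on the VGA console:

package main

import (
	"log"
	"os"
)

func main() {
	// /dev/ttyS0 is an assumption: the first serial port, file-backed on
	// the datastore via the containerVM's configuration.
	serial, err := os.OpenFile("/dev/ttyS0", os.O_WRONLY, 0)
	if err != nil {
		log.Fatal(err)
	}
	log.SetOutput(serial)
	log.Println("tether: logging redirected to serial port")
}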
The Port Layer storage APIs will not cover image resolution. They are specifically concerned with how to take a tar file as input (ideally streaming) and then store, index, attach, export it, etc. Given that the Port Layer is supposed to be container-engine- and OS-agnostic, the question of how an image name gets resolved to one or more tar files must be handled by a layer above.
It makes sense to break this capability out into a simple binary that can be driven by the container engine, which, for argument's sake, we can call "imagec". Docker may well split out their own image resolution code into an imagec themselves; if that happens, it would be desirable for us to simply adopt that code.
So this issue will cover the building of an imagec binary. In order to spec this out, we need to understand exactly what metadata is stored, derived and managed by a v2 Docker Registry; how recursive resolution is handled; how and where imagec should buffer the images it's downloading; and what the interface between imagec and the Port Layer should look like.
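To seed the interface discussion, a hedged sketch of the imagec-to-Port-Layer boundary; all names here are assumptions for argument's sake, not a spec:

package imagec

import (
	"context"
	"io"
)

// Resolver turns an image reference (e.g. "library/photon:1.0") into the
// ordered layer IDs that make it up, parent first.
type Resolver interface {
	Resolve(ctx context.Context, ref string) (layerIDs []string, err error)
}

// PortLayer is the slice of the storage API imagec needs: store a layer
// tar stream under an ID, on top of a parent layer.
type PortLayer interface {
	WriteImage(ctx context.Context, id, parent string, tar io.Reader) error
}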
ToDo items:
There are several existing helpers in daemon/modules/vmware/utils.go; however, most depend on internal APIs (#8). Once we have the internal part sorted out, we can refactor these into pkg/vsphere/object/virtual_machine.go.
Add a Drone config for building and testing, and update the README to explain how to use Drone to build locally.
We need full end-to-end performance analysis as part of our automated testing. This would require storing data from performance test runs, along with their analysis and visualization.
This may be Swagger, Swagger used solely for specification rather than generation, or something else.
Currently, the SSH server code is the de facto spec for our communication. This is fragile and makes testing awkward when the client and server code are separated, particularly for the data structures that pass over the wire.
When specifying this, consider it to be a definition of the data that needs to pass from the VCH to the containerVM, not how it's passed. For example, we could pass an IP directly via ssh, or we can place it into the guestinfo for the containerVM so that it's persisted in an infrastructure visible fashion (the latter is preferred).
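A sketch of the preferred (guestinfo) path using govmomi; the package, function, and guestinfo key names are assumptions for illustration:

package guestinfo

import (
	"context"

	"github.com/vmware/govmomi/object"
	"github.com/vmware/govmomi/vim25/types"
)

// PublishIP records the containerVM's IP in its extraConfig, so it is
// persisted and visible to the infrastructure rather than only passed
// over the wire.
func PublishIP(ctx context.Context, vm *object.VirtualMachine, ip string) error {
	spec := types.VirtualMachineConfigSpec{
		ExtraConfig: []types.BaseOptionValue{
			&types.OptionValue{Key: "guestinfo.vch/ip", Value: ip},
		},
	}
	task, err := vm.Reconfigure(ctx, spec)
	if err != nil {
		return err
	}
	return task.Wait(ctx)
}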
We need an automated way to create an OVF of ESX from a specified build/release and upload it to vCloud Air, for deploying ESX/vCenter test harnesses of various versions.
Test container start time across the various configurations we enable:
There is already a tracing facility in the code to time function calls; perhaps we can use that to get a detailed breakdown of startup time.
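For illustration, the shape such a facility typically takes in Go; the names here are assumptions, not the existing code:

package trace

import (
	"log"
	"time"
)

// Begin returns a func that logs elapsed time when invoked; use it as:
//
//	defer trace.Begin("start container")()
func Begin(op string) func() {
	start := time.Now()
	return func() {
		log.Printf("%s took %s", op, time.Since(start))
	}
}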
There are a number of architectural decisions that pivot on whether or not we use an ESX agent to provide very low-level services to multiple tenants.
Possible functions/benefits of an ESX agent:
The investigation work that needs to be done is:
Using a very simple passthrough API for the agents lets us be API-version agnostic w.r.t. the contents of the stream.