vmware / vic

vSphere Integrated Containers Engine is a container runtime for vSphere.

Home Page: http://vmware.github.io/vic

License: Other

Shell 3.74% Go 71.11% Makefile 0.40% RobotFramework 22.99% CSS 0.02% HTML 0.15% Python 0.16% Ruby 1.26% Dockerfile 0.13% Roff 0.03%
vic-engine vsphere vsphere-ui vsphere-networks containers workloads vic-machine docker virtualization golang

vic's Issues

Unify vendoring of repo

We should use a vendoring tool (e.g. gvt, gb, gpm) to populate the vendor/ directory at the root of the repo. This will allow us to keep only our own code in the repo while still being able to fetch dependencies at build time.

Datastore helpers for vsphere

The existing bonneville daemon has datastore-related functionality that can be refactored into its own set of utilities. This will require a bit of investigation, starting in daemon/modules/vmware/driver.go:

    imagePath          string
    imageDatastore     *Datastore
    containerPath      string
    containerDatastore *Datastore
    volumePath         string
    volumeDatastore    *Datastore

It may merit its own pkg/vsphere/datastore package, or perhaps just a Datastore type in pkg/vmware/object/datastore.go.

type Datastore struct {
    *object.Datastore // govmomi/object

    // cache, etc
    dsType string
    dsPath string
}

Direct VMDK manipulation investigation

Investigate how we can directly manipulate VMDKs on ESX. The following high-level tasks are of specific interest (see the sketch after this list):

  • Extract a tar archive onto a VMDK - presuming ext4 as the filesystem format, which should be suitable for a Linux rootfs
  • Expand VMDK capacity, followed by a filesystem resize - this should be a rare operation and could be handled by attaching the block device to a service VM, but we should know whether it's viable as a direct operation.
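
A minimal sketch of the tar-extraction step, assuming the VMDK has already been attached and its ext4 filesystem mounted at a directory; the package and helper names are illustrative, not part of the repo:

    // Package vmdk sketches the tar-onto-VMDK step; names are illustrative.
    package vmdk

    import (
        "archive/tar"
        "io"
        "os"
        "path/filepath"
    )

    // Untar extracts a tar stream onto dest, the mount point of the attached VMDK.
    func Untar(r io.Reader, dest string) error {
        tr := tar.NewReader(r)
        for {
            hdr, err := tr.Next()
            if err == io.EOF {
                return nil // end of archive
            }
            if err != nil {
                return err
            }
            target := filepath.Join(dest, hdr.Name)
            switch hdr.Typeflag {
            case tar.TypeDir:
                if err := os.MkdirAll(target, os.FileMode(hdr.Mode)); err != nil {
                    return err
                }
            case tar.TypeReg:
                f, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(hdr.Mode))
                if err != nil {
                    return err
                }
                if _, err := io.Copy(f, tr); err != nil {
                    f.Close()
                    return err
                }
                f.Close()
            }
        }
    }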

Pull nightly photon builds

In order to see and address breakages early, we want to move to using Photon nightly builds as our base. This has a dependency on #16.

Create disk package for vsphere

We need a package to handle disk preparation from within the VCH, such as attach/detach, mount/unmount, format, etc.

There is an existing collection of such methods in bonneville-daemon/daemon/modules/vmware/disk.go

These can be refactored into a new package (pkg/vsphere/disk) that is not tied to the docker daemon.
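
A rough sketch of what the public surface of such a package might look like; the interface and method names below are illustrative, not taken from the existing bonneville code:

    // Package disk sketches the proposed pkg/vsphere/disk surface; the method
    // set mirrors the operations listed above.
    package disk

    import "context"

    // Manager prepares VMDK-backed disks from within the VCH.
    type Manager interface {
        // Attach attaches the VMDK at the given datastore path to this VM and
        // returns the resulting block device path.
        Attach(ctx context.Context, datastorePath string) (devicePath string, err error)
        // Detach detaches the VMDK backing the given device.
        Detach(ctx context.Context, devicePath string) error
        // Format creates a filesystem (e.g. ext4) on the attached device.
        Format(ctx context.Context, devicePath string) error
        // Mount mounts the device at target; Unmount reverses it.
        Mount(ctx context.Context, devicePath, target string) error
        Unmount(ctx context.Context, target string) error
    }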

Disable ASR at kernel boot and re-enable at tether/init

Imported from BON-283.

If we need to disable ASR at boot and re-enable it later (e.g. post kernel boot), the Linux kernel provides a /proc sysctl interface which can be leveraged. This needs to be verified, but it is a first step that may just work; a sketch of the sysctl toggle follows the quoted documentation below.
The relevant sysctl is randomize_va_space. From the kernel documentation:
This option can be used to select the type of process address
space randomization that is used in the system, for architectures
that support this feature.
0 - Turn the process address space randomization off. This is the
default for architectures that do not support this feature anyways,
and kernels that are booted with the "norandmaps" parameter.
1 - Make the addresses of mmap base, stack and VDSO page randomized.
This, among other things, implies that shared libraries will be
loaded to random addresses. Also for PIE-linked binaries, the
location of code start is randomized. This is the default if the
CONFIG_COMPAT_BRK option is enabled.
2 - Additionally enable heap randomization. This is the default if
CONFIG_COMPAT_BRK is disabled.
There are a few legacy applications out there (such as some ancient
versions of libc.so.5 from 1996) that assume that brk area starts
just after the end of the code+bss. These applications break when
start of the brk area is randomized. There are however no known
non-legacy applications that would be broken this way, so for most
systems it is safe to choose full randomization.
Systems with ancient and/or broken binaries should be configured
with CONFIG_COMPAT_BRK enabled, which excludes the heap from process
address space randomization.
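
A minimal sketch of the re-enable step from tether/init, assuming /proc is mounted and the process has the privilege to write the sysctl (disabling at boot could instead use the norandmaps kernel parameter):

    // Sketch of re-enabling ASR from tether/init by writing the sysctl directly.
    package main

    import (
        "fmt"
        "io/ioutil"
        "os"
    )

    const randomizeVASpace = "/proc/sys/kernel/randomize_va_space"

    // setASR selects the randomization level: 0 (off), 1 (mmap/stack/VDSO),
    // 2 (additionally randomize the heap).
    func setASR(level int) error {
        return ioutil.WriteFile(randomizeVASpace, []byte(fmt.Sprintf("%d\n", level)), 0644)
    }

    func main() {
        if err := setASR(2); err != nil {
            fmt.Fprintf(os.Stderr, "re-enabling ASR failed: %v\n", err)
            os.Exit(1)
        }
    }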

Create VCH networks page

Create a networks tab on the VCH page to show content similar to the normal networks page for vApps.

Specify Docker Machine integration for VCH creation

Docker Machine will be one of the ways in which a VCH can be created. We need to understand how flexible the plug-in model is so that we can come up with a specification for how Docker Machine might be able to drive both vSphere admin tasks and user tasks.

Need a specification and control-flow for VCH self-provision

When self-provisioning a VCH, access to certain vSphere system resources will need to be granted ahead of time by an admin. The user and the admin need a secure and simple mechanism by which access is granted, presented, and validated. The simplest approach is for the vSphere admin to be able to create a binary token, representing access to specific resources, which can be passed as input to VCH creation.

In order to specify this workflow, we need to be clearer about the mechanisms of authentication, authorization and validation that we've chosen. We also need to decide what the scope of the token should be.
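
As a purely illustrative sketch of what such a token might carry (the actual fields, encoding, and signature scheme are exactly what this issue needs to specify):

    // Illustrative shape of a self-provisioning token; fields and scheme are TBD.
    package token

    import "time"

    // ResourceGrant names the vSphere resources the admin is delegating.
    type ResourceGrant struct {
        ResourcePool string
        Datastores   []string
        Networks     []string
    }

    // Token is created by the vSphere admin and passed as input to VCH creation.
    type Token struct {
        Grant     ResourceGrant
        Issuer    string    // admin identity that created the token
        NotAfter  time.Time // expiry
        Signature []byte    // binds the grant to the issuer; scheme TBD
    }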

Create VCH datastores page

Create a datastores tab on the VCH page to show content similar to the normal datastores page for vApps.

Swagger generated Docker API server

We're going to want Docker API server bindings for:

  • FVT tests that don't encompass the Docker code
  • client-side interaction and integration with other products
  • a possible implementation of thin semantic wrappers between the API and the Port Layer abstractions

NSX deployment and distributed port group creation

We need a well-documented workflow for NSX deployment and DPG creation:

  1. manual steps initially
  2. automated for nested test topologies
  3. automation for docker network integration

This investigation should document at least (1) and generate additional issues for (2) and (3).

Fix iso building in container

I nuked isolinux.bin and boot.cat from the repo. This breaks the ISO build in the linux/Dockerfile. isolinux.bin can be grabbed from the build container; boot.cat needs to be looked at.

In any case, fix the ISO build.

Refactor bonneville-container/tether to build locally

Imported from BON-274

This is the easiest repo to refactor since it's all our code. The objective is to allow the tether component to be built locally for all platforms it supports. It currently builds inside build containers whose Dockerfiles copy only the files relevant to the specified platform, which breaks local tooling and makes development hard.

Unify bootstrap Dockerfiles and merge with single vendored makefile

If the build is now carried out by a makefile, this simplifies the Dockerfile significantly: it will do little more than install some build tools, call make, and build the ISO. The base Dockerfile will be subsumed by this single file, and all caching will happen in the docker context (SRCTOP).

Write test to measure memory overhead and run it as part of CI

We should use the STREAM test here.

This should assess the total consumed memory (preferably with a breakdown across ESX/VM/guest) in:

  • the template
  • a minimal live container (snapshot X seconds after start)
  • a minimal live container (idle over time)
  • a minimal live container (defined workload over time)

This will allow us to track not only direct memory usage, but also page breaking behaviour.

Investigate options and benefits of diskless appliance

The appliance must be capable of being restarted while containers are running. This raises a number of questions about the impact:

  • Interactivity with the containers - what control points should continue to work?
  • Whether the appliance can be stateless (see diskless discussion)
  • What state the appliance must re-discover and how/where it gets it from
  • What if the appliance has migrated to a new ESX host?
  • How does it re-authenticate?
  • Do custom attributes on the VM have a role to play?

The Bonneville appliance has a local disk as well as a ramdisk. It may be beneficial to see if we can make the VIC appliance "diskless" - have it boot off the ISO and then have everything else written into memory. The main question marks around this are:

  • What are the benefits? What do we write to the local disk today and does it need to be persisted?
  • Can we go stateless with the appliance? When it gets restarted, can it discover everything it needs?
  • Where does Docker container / image metadata live?

Decide on bootstrap/tether's logging when/if serial is not available

The bootstrap kernel (which we're currently building from source) adds the serial driver to the kernel and directs a console to it. This serial port is file-backed in the ESX datastore, so anything written to the console is written by the kernel to the serial port, and ESX writes it to a log file.

The issue is that the "mainline" Photon kernel does not include the serial driver. They cite a performance penalty, incurred by all VMs using their kernel, from adding a serial port to the kernel.

We use this console to capture kernel panics, which is what happens if the tether panics. The serial log ensures you get the tether stack trace that precedes the kernel oops, whereas the VGA console will only capture the oops itself (due to log wrap).

There is, however, an initrd which includes the serial module.

So we have two options (a sketch of option 1 follows the list):

  1. Modify the logger in the tether to log to the serial port rather than the console. This means we no longer need to build our own kernel, and the VGA console is reserved for kernel oopses.
  2. Continue to build our own kernel and log to the serial console.
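
A minimal sketch of option 1, assuming the file-backed serial device is exposed to the guest as /dev/ttyS0; the device path and fallback behaviour are assumptions:

    // Option 1 sketch: point the tether's logger at the serial port instead of
    // the console.
    package main

    import (
        "log"
        "os"
    )

    func main() {
        serial, err := os.OpenFile("/dev/ttyS0", os.O_WRONLY, 0)
        if err != nil {
            // Fall back to the console if the serial port is unavailable.
            log.Printf("serial port unavailable, logging to console: %v", err)
        } else {
            log.SetOutput(serial)
        }
        log.Println("tether logging initialized")
    }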

Implementation of imagec

The Port Layer storage APIs will not cover image resolution. They are specifically concerned with how to take a tar file as an input (ideally streaming) and then store, index, attach, export etc. Given that the Port Layer is supposed to be container-engine and OS-agnostic, the question of how an image name gets resolved to one or more tar files must be handled by a layer above.

It makes sense to break this capability out into a simple binary that could be driven by the container engine, which, for argument's sake, we can call "imagec". Docker may well split out their own image resolution code into an imagec themselves, and if this happens, it would be desirable for us to simply adopt that code.

So this issue will cover the building of an imagec binary. In order to spec this out, we need to understand exactly what metadata is stored, derived and managed by a v2 Docker Registry; how recursive resolution is handled; how and where imagec should buffer the images it is downloading; and what the interface between imagec and the Port Layer should look like.

ToDo items (a sketch of the HEAD-before-GET flow follows the list):

  • More testing
  • Testing with a hub account
  • Testing with a private registry
  • Integration with port layer
  • Signature checking
  • Content sum checking
  • Need to send HEAD requests before GETs
  • Need to extract JSON metadata
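
A rough sketch of the HEAD-before-GET item against a v2 registry blob endpoint; authentication, retries, resumption, and checksum verification are omitted, and the package layout and file handling are illustrative:

    // Sketch of fetching a layer blob from a v2 registry, HEAD first, then GET.
    package imagec

    import (
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // FetchBlob confirms the blob exists with HEAD, then GETs it into a local file.
    func FetchBlob(registry, repo, digest string) error {
        url := fmt.Sprintf("https://%s/v2/%s/blobs/%s", registry, repo, digest)

        head, err := http.Head(url)
        if err != nil {
            return err
        }
        head.Body.Close()
        if head.StatusCode != http.StatusOK {
            return fmt.Errorf("blob %s not available: %s", digest, head.Status)
        }

        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        f, err := os.Create(digest + ".tar")
        if err != nil {
            return err
        }
        defer f.Close()

        _, err = io.Copy(f, resp.Body)
        return err
    }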

VirtualMachine helpers for vsphere

There are several existing helpers in daemon/modules/vmware/utils.go - however, most depend on internal APIs (#8). Once we have the internal part sorted out, we can refactor these into pkg/vsphere/object/virtual_machine.go.

Drone CI

Add a Drone config for building and testing, and update the README with instructions on how to use Drone to build locally.

Spec for container command & control operations

This may be Swagger, Swagger used solely for specification rather than generation, or something else.

Currently the SSH server code is the de facto spec for our communication. This is fragile and makes testing awkward when the client and server code are separated, particularly when it comes to the data structures that pass over the wire.

When specifying this, consider it to be a definition of the data that needs to pass from the VCH to the containerVM, not how it's passed. For example, we could pass an IP directly via ssh, or we can place it into the guestinfo for the containerVM so that it's persisted in an infrastructure-visible fashion (the latter is preferred).
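
As an illustration of the guestinfo approach, the data could be defined as a plain structure and serialized into the VM's extraConfig; the key prefix and field set below are assumptions, not the agreed spec:

    // Sketch of a containerVM configuration payload delivered via guestinfo.
    package spec

    import (
        "encoding/json"

        "github.com/vmware/govmomi/vim25/types"
    )

    // NetworkConfig is an example of data the VCH hands to the containerVM.
    type NetworkConfig struct {
        IP      string   `json:"ip"`
        Gateway string   `json:"gateway"`
        DNS     []string `json:"dns"`
    }

    // ToExtraConfig serializes the payload into a guestinfo key so that it is
    // persisted on the VM in an infrastructure-visible fashion.
    func ToExtraConfig(cfg NetworkConfig) ([]types.BaseOptionValue, error) {
        b, err := json.Marshal(cfg)
        if err != nil {
            return nil, err
        }
        return []types.BaseOptionValue{
            &types.OptionValue{Key: "guestinfo.vic/network", Value: string(b)},
        }, nil
    }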

Automate ESXi image build for vCA

We need a way to create an OVF of ESX from a specified build/release and upload that OVF to vCloud Air in an automated fashion, for deploying ESX/vCenter test harness(es) of various versions.

Getting more done in GitHub with ZenHub

Hola! @ali5ter has created a ZenHub account for the vmware organization. ZenHub is the leading team collaboration and project management solution built for GitHub.


How do I use ZenHub?

To get set up with ZenHub, all you have to do is download the browser extension and log in with your GitHub account. Once you do, you’ll get access to ZenHub’s complete feature-set immediately.

What can ZenHub do?

ZenHub adds a series of enhancements directly inside the GitHub UI:

  • Real-time, customizable task boards for GitHub issues;
  • Burndown charts, estimates, and velocity tracking based on GitHub Milestones;
  • Personal to-do lists and task prioritization;
  • “+1” button for GitHub issues and comments;
  • Drag-and-drop file sharing;
  • Time-saving shortcuts like a quick repo switcher.

Still curious? See more ZenHub features or read user reviews. This issue was written by your friendly ZenHub bot, posted by request from @ali5ter.

Come up with detailed profile of startup time

Test container start time, across the various configurations we enable:

  • thin disk
  • sesparse disk
  • tags enabled
  • custom attributes enabled

There is already a tracing facility in the code to time function calls; perhaps we can use that to get a detailed startup-time breakdown.
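
The existing tracing facility's API isn't reproduced here; as a rough illustration, a defer-based timing helper of the kind commonly used for this looks like:

    // Illustrative defer-based timing helper; the repo's actual tracing package
    // may differ in naming and output.
    package trace

    import (
        "log"
        "time"
    )

    // Begin records the operation name and start time.
    func Begin(op string) (string, time.Time) {
        return op, time.Now()
    }

    // End logs the elapsed time; use as: defer End(Begin("powerOn")).
    func End(op string, start time.Time) {
        log.Printf("%s took %s", op, time.Since(start))
    }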

Quantify suitability of ESX agent to perform low-level tasks

There are a number of architectural decisions that pivot on whether or not we use an ESX agent to provide very low-level services to multiple tenants.

Possible functions/benefits of an ESX agent:

  • vSocket support for tether interaction. This would eliminate our need to use serial-over-LAN for communications between tether and VCH. Serial-over-LAN currently inhibits vMotion and won't run on free ESX due to license restrictions.
  • Authentication proxy. Not only would we do authentication out-of-band, but it would mean that a VCH could be untrusted (not run in the VC management network). We would need to consider how the authenticating proxy might present in the container guestOS, and a VMOMI gateway may well be the appropriate mechanism (the guest sends SOAP requests to an endpoint which provides validation and authentication before forwarding).
  • Out-of-band VMDK preparation. Currently VMDK prep is a bottleneck in the docker pull path, given the need to attach and detach disks to/from VMs in order to be able to write to them. When we have a viable solution for out-of-band VMDK prep, we will need an endpoint to delegate to.

The investigation work that needs to be done is:

  • Write and deploy a HelloWorld agent to an ESX host. How involved is the toolchain / build process?
  • Investigate mechanisms for installing/uninstalling/upgrading as part of the VIC product. What if a host is added to a cluster? Can the agent be pushed to the host automatically?
  • In addition to this, we need to have some decisions around all of the above functions/benefits (timeframe, basic designs) so that we can decide how critical an agent might be in the short term.
  • Are there other optimizations that the agent may be good for?
  • What implications would the ESX agent have on container vMotion? How would the vMotioned guest re-attach to a new agent on a new host?
  • A significant consideration is how we build a VIB without access to internal tool chains. Are there precedents for this from VMware partners? Are there libraries we can link to? How would we make an SDK available to OSS contributors to build against?

Using a very simple passthrough API for the agents lets us be largely API-version agnostic w.r.t. the contents of the stream.
