opennebula / addon-lxdone
Allows OpenNebula to manage Linux Containers via LXD
Home Page: https://opennebula.org/lxdone-lightweight-virtualization-for-opennebula/
License: Apache License 2.0
Test OpenNebula volatile disks
Hello! Has anyone tried this plugin? Can anyone confirm whether it really works with OpenNebula? Thank you very much.
Insert a donation link for supporting the project
Could somebody tell me whether this plugin works?
The first container on a virtualization node is the only one that gets a graphics session. More info can be found in these LXD issues: https://github.com/lxc/lxd/issues/1129 and lxc/incus#936.
This is not an LXDoNe bug; it is related to LXD. Some configuration change might serve as a workaround.
Any contribution will be appreciated.
I'm using Ubuntu 18.04 on a KVM VM: 16GB RAM, 4 CPUs, 100GB disk.
I was following the instructions to install addon-lxdone and got to the section:
Follow KVM Node Installation, up to step 6.
So following the KVM Node installation I got to:
To add OpenNebula repository on Debian/Ubuntu execute as root:
Since there is no Ubuntu 18.04 instruction, I edited the 17.04 instruction to look like this:
echo "deb https://downloads.opennebula.org/repo/5.4/Ubuntu/18.04 stable opennebula" > /etc/apt/sources.list.d/opennebula.list
Saved the change then did:
sudo apt update
and got the following error:
The following packages have unmet dependencies:
opennebula-node : Depends: opennebula-common (= 5.4.1-1) but 5.6.1-1 is to be installed
E: Unable to correct problems, you have held broken packages.
So I changed /etc/apt/sources.list.d/opennebula.list to:
deb https://downloads.opennebula.org/repo/5.6.1/Ubuntu/18.04 stable opennebula
and then "sudo apt update" worked!
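For anyone hitting the same mismatch, a minimal sketch of the fix — the repo version (5.6.1) is the one from my setup, so adjust it to whatever your frontend runs. The sketch writes to a temp file so it can be run safely; on a real node the target is /etc/apt/sources.list.d/opennebula.list and it must be run as root:

```shell
# Sketch: write an apt source entry whose repo version matches the
# installed opennebula-common (5.6.1 here, from this report).
# Writing to a temp file for safety; on a real node the target is
# /etc/apt/sources.list.d/opennebula.list (run as root).
repo_file=$(mktemp)
echo "deb https://downloads.opennebula.org/repo/5.6.1/Ubuntu/18.04 stable opennebula" > "$repo_file"
cat "$repo_file"
```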
The next documentation problem I saw was with Step 4. Configure Passwordless SSH
Following the documentation, we have not yet created any LXD container "nodes", so I am not sure how someone installing LXDoNe would accomplish this step:
To create the known_hosts file, we have to execute this command as user oneadmin on the Front-end, with all the node names and the Front-end name as parameters:
ssh-keyscan ... >> /var/lib/one/.ssh/known_hosts
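As far as I can tell, the "nodes" here are the LXD virtualization hosts themselves, not the containers, so the known_hosts file can be built before any container exists. A hedged sketch — the hostnames frontend, lxd-node1, and lxd-node2 are made up, and the snippet only assembles the command rather than running it:

```shell
# Sketch: build the ssh-keyscan command for hypothetical node names.
# frontend/lxd-node1/lxd-node2 are placeholders; run the echoed command
# as oneadmin on the Front-end against your real host names.
hosts="frontend lxd-node1 lxd-node2"
cmd="ssh-keyscan $hosts >> /var/lib/one/.ssh/known_hosts"
echo "$cmd"
```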
Next, again I am not sure what someone installing LXDoNe does for:
Step 5. Networking Configuration
Since installing LXD (sudo apt install lxd, or sudo snap install lxd) followed by sudo lxd init usually creates the lxdbr0 bridge, I am not sure whether to create a br0 bridge in Step 5 or not.
However, since step 2.1 (Install required packages) appears to be where LXD gets installed, and step 2.5.1 (Configure the LXD daemon) does run lxd init, I am guessing that lxdbr0 will work.
In section 2.1 Install Required Packages
There needs to be an 18.04 entry identical to the 16.04 one, except replacing xenial with bionic:
apt install lxd lxd-tools python-pylxd/bionic-updates criu bridge-utils python-ws4py python-pip
In section 2.1 it says:
> Check that pylxd is >= 2.0.5 or LXDoNe will not work correctly.
dpkg -s python-pylxd | grep 2.0.5 || echo "ERROR pylxd version not 2.0.5"
But the python-pylxd installed earlier is 2.2.6-0ubuntu1.1, so the check above reports an error incorrectly: it greps for the literal string 2.0.5 instead of doing a real version comparison.
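A version check that would not flag 2.2.6 could be sketched in pure shell with sort -V. The version strings below are the ones from this report; on a real node you would obtain the installed version with dpkg-query:

```shell
# Sketch: check pylxd >= 2.0.5 by version comparison instead of grepping
# for the literal string "2.0.5". Versions are hardcoded from this report;
# on a real node: installed=$(dpkg-query -W -f='${Version}' python-pylxd)
installed="2.2.6-0ubuntu1.1"
required="2.0.5"
# sort -V orders version strings; if the required version sorts first,
# the installed version is new enough.
lowest=$(printf '%s\n' "$required" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
    echo "pylxd $installed satisfies >= $required"
else
    echo "ERROR: pylxd $installed is older than $required"
fi
```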
When Creating a Virtual Network where instructions say:
First IP/MAC address: some private IP available on the network.
I input 10.66.154.1, which is the IP of my lxdbr0 bridge, and get an error:
One or more required fields are missing or malformed
So I thought I'd try 10.66.154.2 ... same error!
Now I am stuck as I don't know what it wants here.
So I'm going to stop here until I get a reply with advice of what to enter here.
I've got to say that overall your instructions are VERY good, apart from the issues I've documented above.
Really looking forward to trying this.
Brian
LXDoNe uses raw block devices instead of the default LXD image tarballs, because OpenNebula's default behavior is to operate with raw and qcow images. Right now we are tricking LXD, along the lines of https://github.com/lxc/lxd/issues/3540, but that is not sane behavior, since this work should be done by LXD itself; of course it's not LXD's fault.
With the new storage drivers, lxc image import path-to-tar.gz would be like uploading the base image to OpenNebula, and creating the container during lxc launch would be the prolog in OpenNebula. Also, the image datastore in OpenNebula would act like a public image server for the LXD virtualization nodes.
This approach would give a more LXD-like user experience and could save time, bandwidth, space and code. It would also support thin-provisioned containers. To develop a storage driver for OpenNebula, read http://docs.opennebula.org/5.4/integration/infrastructure_integration/sd.html#storage-driver. Contributions will be really appreciated.
Use the native LXD and Ceph setup instead of tricking LXD.
Some ideas taken from Proxmox and lxd-latest
Hello,
In a preexisting OpenNebula 5.4 installation, after changing the value of DATASTORE_LOCATION in oned.conf, the default /var/lib/one/datastores/ path is still hardcoded in the LXD addon, so the value in oned.conf is never picked up.
Relevant files are:
addon-lxdone/src/remotes/vmm/lxd/deploy.py
Line 103 in 7bc9b16
and
Thanks!
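Regarding the hardcoded path above, a sketch of how the addon could read the value from oned.conf instead of hardcoding it. The conf line and the /srv/one/datastores value below are invented for illustration, and a temp file stands in for the real /etc/one/oned.conf:

```shell
# Sketch: parse DATASTORE_LOCATION out of oned.conf instead of hardcoding
# /var/lib/one/datastores/. A temp file stands in for /etc/one/oned.conf,
# and /srv/one/datastores is a made-up example value.
conf=$(mktemp)
echo 'DATASTORE_LOCATION = "/srv/one/datastores"' > "$conf"
ds=$(grep -E '^DATASTORE_LOCATION' "$conf" | sed 's/.*=[[:space:]]*//' | tr -d '"')
echo "$ds"
```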
Hi everyone!
I just installed LXDoNe on a new OpenNebula installation, following the installation tutorial, but when I tried to deploy the VM provided in the marketplace (ID: 7dd50db7-33c4-4b39-940c-f6a55432622f), it failed. Can anyone help me, please?
OpenNebula is configured as a standalone server. The KVM driver is working and deploying VMs normally, and the 'lxc launch' command works as well.
VM log:
Mon Sep 25 18:11:28 2017 [Z0][VM][I]: New state is ACTIVE
Mon Sep 25 18:11:28 2017 [Z0][VM][I]: New LCM state is PROLOG
Mon Sep 25 18:11:29 2017 [Z0][VM][I]: New LCM state is BOOT
Mon Sep 25 18:11:29 2017 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/14/deployment.0
Mon Sep 25 18:11:30 2017 [Z0][VMM][I]: Successfully execute transfer manager driver operation: tm_context.
Mon Sep 25 18:11:30 2017 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: Command execution fail: cat << EOT | /var/tmp/one/vmm/lxd/deploy '/var/lib/one//datastores/0/14/deployment.0' 'localhost' 14 localhost
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: deploy.py: ########################################
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: losetup: /var/lib/one/datastores/0/14/disk.0: Warning: file does not fit into a 512-byte sector; the end of the file will be ignored.
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: mount: wrong fs type, bad option, bad superblock on /dev/loop0,
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: missing codepage or helper program, or other error
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]:
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: In some cases useful info is found in syslog - try
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: dmesg | tail or so.
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: Traceback (most recent call last):
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: File "/var/tmp/one/vmm/lxd/deploy.py", line 154, in
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: apply_profile(profile, container)
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: File "/var/tmp/one/vmm/lxd/deploy.py", line 114, in apply_profile
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: lc.storage_context(container, contextiso)
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: File "/var/tmp/one/vmm/lxd/lxd_common.py", line 187, in storage_context
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: container.files.put('/mnt/' + i.name, i.content)
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: File "/usr/lib/python2.7/dist-packages/pylxd/container.py", line 78, in put
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: params={'path': filepath}, data=data)
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: File "/usr/lib/python2.7/dist-packages/pylxd/client.py", line 94, in post
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: self._assert_response(response, allowed_status_codes=(200, 201, 202))
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: File "/usr/lib/python2.7/dist-packages/pylxd/client.py", line 66, in _assert_response
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: raise exceptions.LXDAPIException(response)
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: pylxd.exceptions.LXDAPIException: file does not exist
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: ExitCode: 1
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: Failed to execute virtualization driver operation: deploy.
Mon Sep 25 18:11:31 2017 [Z0][VMM][E]: Error deploying virtual machine
Mon Sep 25 18:11:31 2017 [Z0][VM][I]: New LCM state is BOOT_FAILURE
oneadmin@max:~$ ll datastores/0/14
total 243076
drwxrwxr-x 2 oneadmin oneadmin 54 Sep 25 18:11 ./
drwxr-xr-x 3 oneadmin oneadmin 16 Sep 25 18:11 ../
-rw-rw-r-- 1 oneadmin oneadmin 3305 Sep 25 18:11 deployment.0
-rw-r--r-- 1 oneadmin oneadmin 248529635 Sep 25 18:11 disk.0
-rw-r--r-- 1 oneadmin oneadmin 372736 Sep 25 18:11 disk.1
lxc info:
config:
core.https_address: 0.0.0.0:8443
core.trust_password: true
api_extensions:
- id_map
api_status: stable
api_version: "1.0"
auth: trusted
public: false
environment:
addresses:
- 10.235.2.225:8443
- 192.168.122.1:8443
architectures:
- x86_64
- i686
certificate: |
-----BEGIN CERTIFICATE-----
MIIFPTCCAyWgAwIBAgIQVu8i4gpyY...
-----END CERTIFICATE-----
certificate_fingerprint: e6a7dd45....
driver: lxc
driver_version: 2.0.8
kernel: Linux
kernel_architecture: x86_64
kernel_version: 4.4.0-96-generic
server: lxd
server_pid: 460
server_version: 2.0.10
storage: dir
storage_version: ""
oneadmin@max:~$ lxc list
+--------+---------+------+------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+--------+---------+------+------+------------+-----------+
| one-14 | STOPPED | | | PERSISTENT | 0 |
+--------+---------+------+------+------------+-----------+
oneadmin@max:~$ losetup
NAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE
/dev/loop0 0 0 0 0 /vmfs/datastores/0/14/disk.0
root@max:~# mount /dev/loop0 /mnt/
mount: wrong fs type, bad option, bad superblock on /dev/loop0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
root@max:~# dmesg | tail
[13435.322510] audit: type=1400 audit(1506372931.308:42): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default" pid=63940 comm="apparmor_parser"
[13435.322520] audit: type=1400 audit(1506372931.308:43): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default-cgns" pid=63940 comm="apparmor_parser"
[13435.322526] audit: type=1400 audit(1506372931.308:44): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default-with-mounting" pid=63940 comm="apparmor_parser"
[13435.322532] audit: type=1400 audit(1506372931.308:45): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default-with-nesting" pid=63940 comm="apparmor_parser"
[13574.967436] audit: type=1400 audit(1506373070.948:46): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/bin/lxc-start" pid=453 comm="apparmor_parser"
[13574.980193] audit: type=1400 audit(1506373070.960:47): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default" pid=458 comm="apparmor_parser"
[13574.980205] audit: type=1400 audit(1506373070.960:48): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default-cgns" pid=458 comm="apparmor_parser"
[13574.980213] audit: type=1400 audit(1506373070.960:49): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default-with-mounting" pid=458 comm="apparmor_parser"
[13574.980220] audit: type=1400 audit(1506373070.960:50): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default-with-nesting" pid=458 comm="apparmor_parser"
[15595.453903] audit: type=1400 audit(1506375091.368:51): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxd-one-14_</var/lib/lxd>" pid=24195 comm="apparmor_parser"
lsee@max:~$ sudo mkfs.ext4 /dev/loop0
mke2fs 1.42.13 (17-May-2015)
Discarding device blocks: done
Creating filesystem with 242704 1k blocks and 60720 inodes
Filesystem UUID: 997a0a16-69de-4b2c-b4bd-c56feb56d011
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729, 204801, 221185
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
lsee@max:~$ sudo mount /dev/loop0 /mnt/
lsee@max:~$ ls -la /mnt/
total 13
drwxr-xr-x 3 root root 1024 Sep 25 19:27 .
drwxr-xr-x 23 root root 322 Sep 24 06:53 ..
drwx------ 2 root root 12288 Sep 25 19:27 lost+found
lsee@max:~$ sudo ls -la /mnt/lost+found/
total 13
drwx------ 2 root root 12288 Sep 25 19:27 .
drwxr-xr-x 3 root root 1024 Sep 25 19:27 ..
I don't know whether the following command should work, but I tried it anyway:
oneadmin@max:~$ lxc start one-14
error: Error calling 'lxd forkstart one-14 /var/lib/lxd/containers /var/log/lxd/one-14/lxc.conf': err='Failed to run: /usr/bin/lxd forkstart one-14 /var/lib/lxd/containers /var/log/lxd/one-14/lxc.conf: '
lxc 20170925213131.423 ERROR lxc_conf - conf.c:mount_rootfs:798 - No such file or directory - Failed to get real path for "/var/lib/lxd/containers/one-14/rootfs".
lxc 20170925213131.423 ERROR lxc_conf - conf.c:setup_rootfs:1220 - Failed to mount rootfs "/var/lib/lxd/containers/one-14/rootfs" onto "/usr/lib/x86_64-linux-gnu/lxc" with options "(null)".
lxc 20170925213131.423 ERROR lxc_conf - conf.c:do_rootfs_setup:3899 - failed to setup rootfs for 'one-14'
lxc 20170925213131.423 ERROR lxc_conf - conf.c:lxc_setup:3981 - Error setting up rootfs mount after spawn
lxc 20170925213131.423 ERROR lxc_start - start.c:do_start:811 - Failed to setup container "one-14".
lxc 20170925213131.423 ERROR lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 3)
lxc 20170925213131.424 ERROR lxc_start - start.c:__lxc_start:1358 - Failed to spawn container "one-14".
lxc 20170925213131.998 ERROR lxc_conf - conf.c:run_buffer:416 - Script exited with status 1.
lxc 20170925213131.998 ERROR lxc_start - start.c:lxc_fini:546 - Failed to run lxc.hook.post-stop for container "one-14".
Any idea why I'm getting this problem?
Thanks!
Add support for disk attach/detach.
There are some issues. The thing is, attach_disk modifies the VM template but doesn't update the deployment.X file, which contains the XML. You can request the VM template from the front-end and dump it into a file, which is nice, but what I really need is to do the same thing on a virtualization host. I haven't found a way to do this.
Migrate a container when not RUNNING from a LXD node to another
I followed the tutorial:
https://github.com/OpenNebula/addon-lxdone/blob/master/Setup.md
Unfortunately, after creating a VM with no errors, I get this message in VNC:
Booting from Hard Disk
Boot failed: not a bootable disk
I tried images created from an exported existing LXC container, and also the build-img.sh script provided with addon-lxdone:
https://github.com/OpenNebula/addon-lxdone/blob/master/image-handling/build-img.sh
What did I forget?
Regards
Add support for virtual appliance resize in container images.
Conditions:
1 - The container one-1, managed by OpenNebula, is running
2 - The virtualization node it runs on is forced off (e.g. power failure)
3 - The node restarts
4 - Before powering one-1 back on, the user decides to erase it
Result:
Now you have a container one-1 running in the background that is not managed by OpenNebula.
Hello, I wanted to check whether an estimated timeline is known for releasing the following features, and if so, what it is:
Migration is the biggest feature I'm looking forward to; however, I thought I would ask about a few others I'm interested in using.
Great project, BTW. This is really the future of cloud computing, IMO, and I hope to see this merged into the regular OpenNebula release sooner rather than later.
Migrate from python 2 to python 3
Create a debian/ubuntu package for installing LXDoNe.
Throttle bandwidth per attached NIC
Given an unknown set of circumstances involving power failures, the data stored inside a random virtual image gets deleted. In https://github.com/OpenNebula/addon-lxdone/releases/tag/v5.4-5 some validations were added to deal with this situation. However, the set of conditions remains unknown.
Snapshot a container when not RUNNING
Migrate a container when RUNNING from a LXD node to another
Hello! @dann1 has created a ZenHub account for the OpenNebula organization. ZenHub is a project management tool integrated natively into GitHub. To get set up with ZenHub, download the browser extension and log in with your GitHub account; you will then get access to ZenHub's feature set, which adds a series of enhancements directly inside the GitHub UI. This issue was written by the ZenHub bot, posted by request from @dann1.
Add the extra features to https://github.com/OpenNebula/addon-lxdone/blob/master/Image.md
Also edit the context-related info.
Somehow OpenNebula kills the VNC server started during the deploy phase. Sometimes you may get a Server Disconnection message when logged in via noVNC in Sunstone. This is solved by pressing F5 to reload the tab in your browser. What happens behind the scenes is:
addon-lxdone/src/remotes/vmm/lxd/vnc.bash
Lines 8 to 12 in 7bc9b16
This basically restarts the VNC server associated with a one- container whenever it's killed. The problem is that the VNC server shouldn't need restarting, because it shouldn't be killed in the first place. We don't know why it's being killed, but we think it has something to do with OpenNebula, because a manually started VNC server that is never accessed via Sunstone stays alive. We used the TigerVNC client and it wouldn't die.
In summary, the vnc.bash patch works, but things could be better. Any ideas would be welcome.
Snapshot a container when RUNNING
Image download links are in the setup guide. Feel free to create a pull request if the links are broken
Using qemu-nbd and the nbd kernel module, we could mount qcow2 images on the container directory.
The tool is installed and the module loaded by issuing:
apt install qemu-utils
modprobe nbd
This is an example of mapping an image file into a block device
qemu-nbd -c /dev/nbd0 /var/lib/vz/images/107/vm-107-disk-1.qcow2
Then we could mount it as a regular volume.
To unmap the image file
qemu-nbd -d /dev/nbd0
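Putting the pieces together, the full map/mount/unmap cycle might look like the dry-run sketch below. The image path, mount point, and nbd device are examples, and since the real commands need root, a run helper only prints them here:

```shell
# Dry-run sketch of the whole qcow2 attach cycle. Paths are examples;
# on a real node, drop the run() wrapper and execute the commands as root.
run() { echo "+ $*"; }

run modprobe nbd
run qemu-nbd -c /dev/nbd0 /var/lib/one/datastores/1/disk.0.qcow2
run mount /dev/nbd0 /var/lib/lxd/containers/one-14/rootfs
# ... container uses the mounted filesystem as its rootfs ...
run umount /var/lib/lxd/containers/one-14/rootfs
run qemu-nbd -d /dev/nbd0
```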
Throttle IO per attached disk including rootfs
Enable svncterm argument for password