opennebula / addon-lxdone

Allows OpenNebula to manage Linux Containers via LXD

Home Page: https://opennebula.org/lxdone-lightweight-virtualization-for-opennebula/

License: Apache License 2.0

Languages: Ruby 18.67%, Shell 34.71%, Python 46.62%
Topics: lxd, opennebula, containers, virtualization, lxc, cloud, data-center, infrastructure

addon-lxdone's Introduction

LXDoNe


LXDoNe is an addon that lets OpenNebula manage LXD containers. It fits into the Virtualization and Monitoring Driver section of OpenNebula's architecture and uses the pylxd API for several container tasks. This addon is the continuation of LXCoNe, an addon for LXC. Check the blog entry on OpenNebula's official site.

LXD is a daemon that provides a REST API to drive LXC containers. Containers are lightweight OS-level virtualization instances: they behave like virtual machines but, by sharing the kernel with the host, avoid the processing penalties of hardware emulation. They run at near bare-metal speed; a simple container can boot in about 2 seconds while consuming less than 32 MB of RAM and a minimal fraction of a CPU core. Check out this performance comparison against KVM if you don't know much about LXD.
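To illustrate the kind of request pylxd relays to that REST API, here is a minimal sketch of a container definition in the shape `containers.create()` accepts. The helper function and the values in it are illustrative, not LXDoNe's actual code:

```python
def container_definition(name, image_alias, memory="512MB", cpus="1"):
    """Build a container definition dict in the shape the LXD
    REST API (and pylxd's containers.create) accepts."""
    return {
        "name": name,
        "source": {"type": "image", "alias": image_alias},
        "config": {
            "limits.memory": memory,  # e.g. "512MB"
            "limits.cpu": cpus,       # number of visible cores, as a string
        },
    }

definition = container_definition("one-14", "ubuntu/16.04")
# With a running LXD daemon one could then do something like:
#   from pylxd import Client
#   Client().containers.create(definition, wait=True)
```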

The master branch is subject to change. We recommend using one of the stable releases listed at the top of this page.

Developers

Contributors

  • Akihiko Ota [@sw37th]
  • Devon Hubner [@DevoKun]

Compatibility

LXDoNe is not an update of LXCoNe, so your old containers won't be manageable out of the box. Default compressed LXD images won't work either. For more information, read Virtual Appliance.

Tested OpenNebula versions

OpenNebula 5.4.1

Tested Linux Distributions

Ubuntu

Setup

Check the Setup Guide to deploy a working scenario.

Features

  • Tested with OpenNebula 5.4.1
  • Added validations at several points in the container life cycle
  • Updated setup process
  • Created a script for updating the vmm and im drivers
  • Several minor improvements and bug fixes
  • Fixed permission issues when mounting extra hard disks
  • Base image updated with new context and dotfiles
  • Virtual Appliance generation guide reworked
  • Fixed a minor poll bug
  • Fixed VNC
  • Reworked contextualization
  • Reworked logs
  • Allow use of LXD features in VM Template:
    • privileged/unprivileged containers
    • nesting
  • vmm script execution times reduced by 40-60%
  • NIC Hotplug
  • Virtual Appliance uploaded
  • Enhanced buildimg.sh, thanks @sw37th
    • Bug fixes
    • Included auto-contextualization
  • Virtual Appliance creation script
  • Life cycle control:
    • Start and Poweroff
    • Reboot and Reset
    • Suspend and Resume
  • Monitoring:
    • CPU
    • RAM
    • Status
    • Network Traffic
  • Resource Limitation:
    • RAM
    • CPU
    • VCPU
  • Log script execution times
  • Deploy container with several disks
  • Deploy container with several NICs
  • Storage Backends:
    • Ceph
    • Filesystem
  • VNC (beta)
  • Specify target device for extra disks
  • Contextualization compatibility
  • 802.1Q network driver compatibility
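The resource-limitation items above map naturally onto LXD's `limits.*` configuration keys. A sketch of such a mapping follows; the key names are real LXD configuration keys, but the conversion helper is illustrative, not LXDoNe's actual code:

```python
def one_to_lxd_limits(memory_mb, vcpu, cpu):
    """Translate OpenNebula template values to LXD container limits.

    memory_mb -- MEMORY attribute (in MB)
    vcpu      -- VCPU attribute (cores the container sees)
    cpu       -- CPU attribute (fraction of physical CPU, e.g. 0.5)
    """
    return {
        "limits.memory": "%dMB" % memory_mb,
        "limits.cpu": str(vcpu),
        # limits.cpu.allowance caps CPU time as a percentage
        "limits.cpu.allowance": "%d%%" % int(cpu * 100),
    }
```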

addon-lxdone's People

Contributors

codacy-badger, dann1, jmdelafe, sw37th


addon-lxdone's Issues

Erase background containers

Conditions:
1 - The container one-1, managed by OpenNebula, is running
2 - The virtualization node it runs on is forced off (e.g. a power failure)
3 - The node restarts
4 - Before powering one-1 back on, the user decides to erase it

Result
There is now a container one-1 left in the background, not managed by OpenNebula.
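One way to detect such leftovers is to compare the `one-<ID>` names LXD reports against the VM IDs OpenNebula still knows about. The container-name pattern is the one LXDoNe uses; the function itself is a hypothetical sketch:

```python
import re

def orphaned_containers(lxd_names, one_vm_ids):
    """Return LXD container names that look OpenNebula-managed
    (one-<ID>) but have no matching VM in OpenNebula."""
    orphans = []
    for name in lxd_names:
        match = re.match(r"^one-(\d+)$", name)
        if match and int(match.group(1)) not in one_vm_ids:
            orphans.append(name)
    return orphans

# lxd_names could come from `lxc list --format csv -c n` on the node,
# one_vm_ids from `onevm list` on the front-end.
```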

Missing or Incorrect Documentation in Setup Guide instructions

I'm using Ubuntu 18.04 on a KVM VM. 16GB ram, 4 cpu, 100GB disk

I was following the instructions to install addon-lxdone and got to the section:

Follow KVM Node Installation, up to step 6.

So following the KVM Node installation I got to:

To add OpenNebula repository on Debian/Ubuntu execute as root:

Found there is no Ubuntu 18.04 instruction, so I edited the 17.04 instruction to read:

echo "deb https://downloads.opennebula.org/repo/5.4/Ubuntu/18.04 stable opennebula" > /etc/apt/sources.list.d/opennebula.list

Saved the change then did:

sudo apt update

and got the following error:

The following packages have unmet dependencies:
opennebula-node : Depends: opennebula-common (= 5.4.1-1) but 5.6.1-1 is to be installed
E: Unable to correct problems, you have held broken packages.

so I changed /etc/apt/sources.list.d/opennebula.list to:

deb https://downloads.opennebula.org/repo/5.6.1/Ubuntu/18.04 stable opennebula

and then "sudo apt update" worked!

The next documentation problem I saw was with Step 4. Configure Passwordless SSH

Following the documentation, we've not created any LXD container "nodes" yet! So I am not sure how someone installing LXDoNe would accomplish the:

To create the known_hosts file, we have to execute this command as user oneadmin in the Front-end with all the node names and the Front-end name as parameters:

ssh-keyscan ... >> /var/lib/one/.ssh/known_hosts

Next, again I'm not sure what someone installing LXDoNe does for:

Step 5. Networking Configuration

As installation of LXD (sudo apt install lxd, or sudo snap install lxd) followed by sudo lxd init usually creates the lxdbr0 bridge, I am not sure whether to create a br0 bridge in Step 5 or not.

However, since step 2.1 Install required packages looks like where LXD gets installed, and step 2.5.1 Configure the LXD daemon does run lxd init, I am guessing that lxdbr0 will work.

In section 2.1 Install Required Packages

There needs to be an 18.04 instruction identical to the 16.04 one, except replacing xenial with bionic:

apt install lxd lxd-tools python-pylxd/bionic-updates criu bridge-utils python-ws4py python-pip

In section 2.1 it says:

> Check that pylxd is >= 2.0.5 or LXDoNe will not work correctly.

dpkg -s python-pylxd | grep 2.0.5 || echo "ERROR pylxd version not 2.05"

But the python-pylxd that was installed earlier is 2.2.6-0ubuntu1.1, so I believe the above ERROR check is incorrect!
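Indeed, grepping for the literal string 2.0.5 flags any other version as an error. A more robust check could compare version tuples instead (a sketch; it assumes a dpkg-style version string and is not part of the guide):

```python
def pylxd_version_ok(version_string, minimum=(2, 0, 5)):
    """Return True if a dpkg-style version (e.g. '2.2.6-0ubuntu1.1')
    is at least the minimum pylxd version."""
    upstream = version_string.split("-")[0]  # drop the Debian revision
    parts = tuple(int(piece) for piece in upstream.split("."))
    return parts >= minimum
```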

When Creating a Virtual Network where instructions say:

First IP/MAC address: some private IP available on the network.

I input 10.66.154.1, which is the IP of my lxdbr0 bridge, and got an error:

One or more required fields are missing or malformed

So I thought I'd try 10.66.154.2 ... same error!

Now I am stuck as I don't know what it wants here.

So I'm going to stop here until I get a reply with advice of what to enter here.

I've got to say that overall your instructions are VERY good, apart from the issues I've documented above.

Really looking forward to trying this.

Brian

Getting more done in GitHub with ZenHub

Hello! @dann1 has created a ZenHub account for the OpenNebula organization. ZenHub is the only project-management tool integrated natively into GitHub, created specifically for fast-moving, software-driven teams.


How do I use ZenHub?

To get set up with ZenHub, all you have to do is download the browser extension and log in with your GitHub account. Once you do, you'll get access to ZenHub's complete feature set immediately.

What can ZenHub do?

ZenHub adds a series of enhancements directly inside the GitHub UI:

  • Real-time, customizable task boards for GitHub issues;
  • Multi-Repository burndown charts, estimates, and velocity tracking based on GitHub Milestones;
  • Personal to-do lists and task prioritization;
  • Time-saving shortcuts – like a quick repo switcher, a “Move issue” button, and much more.

Add ZenHub to GitHub

Still curious? See more ZenHub features or read user reviews. This issue was written by your friendly ZenHub bot, posted by request from @dann1.


Disk Hotplug

Add support for disk attach/detach.

There are some issues. The thing is, attach_disk modifies the VM template but doesn't update the deployment.X file, which contains the XML. You can request the VM template on the front-end and dump it into a file, which is nice, but what I really need is to do the same thing on a virtualization host. I haven't found a way to do this.
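A possible workaround sketch: fetch the VM's XML on the front-end (e.g. with `onevm show <ID> --xml`), append the new disk element, and ship the result to the host. The element names below mirror OpenNebula's template structure but the helper is hypothetical, not tested against LXDoNe:

```python
import xml.etree.ElementTree as ET

def attach_disk_to_deployment(xml_text, image_path, target):
    """Append a DISK element to a VM deployment XML and return the result."""
    root = ET.fromstring(xml_text)
    disk = ET.SubElement(root, "DISK")
    ET.SubElement(disk, "SOURCE").text = image_path
    ET.SubElement(disk, "TARGET").text = target
    return ET.tostring(root, encoding="unicode")

updated = attach_disk_to_deployment(
    "<VM><NAME>one-14</NAME></VM>",
    "/var/lib/one/datastores/0/14/disk.2", "vdb")
```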

Migration, live migration and native ceph support

Hello, I wanted to check whether an estimated release timeline is known for the following features, and if so, what it is:

  • Migration
  • Live migration
  • Native LXD Ceph support

Migration is the biggest feature I'm looking forward to, but I thought I would ask about a few others I'm interested in using.

Great project, BTW; this is really the future of cloud computing IMO, and I hope to see this merged into the regular OpenNebula release sooner rather than later.

Cannot deploy container

Hi everyone!
I just installed LXDoNe on a new OpenNebula installation following the installation tutorial, but when I tried to deploy the VM provided in the marketplace (ID: 7dd50db7-33c4-4b39-940c-f6a55432622f) it failed. Can anyone help me, please?

OpenNebula is configured as a standalone server. The KVM driver is working and deploying VMs normally, and the 'lxc launch' command works as well.

VM log:

Mon Sep 25 18:11:28 2017 [Z0][VM][I]: New state is ACTIVE
Mon Sep 25 18:11:28 2017 [Z0][VM][I]: New LCM state is PROLOG
Mon Sep 25 18:11:29 2017 [Z0][VM][I]: New LCM state is BOOT
Mon Sep 25 18:11:29 2017 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/14/deployment.0
Mon Sep 25 18:11:30 2017 [Z0][VMM][I]: Successfully execute transfer manager driver operation: tm_context.
Mon Sep 25 18:11:30 2017 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: Command execution fail: cat << EOT | /var/tmp/one/vmm/lxd/deploy '/var/lib/one//datastores/0/14/deployment.0' 'localhost' 14 localhost
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: deploy.py: ########################################
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: losetup: /var/lib/one/datastores/0/14/disk.0: Warning: file does not fit into a 512-byte sector; the end of the file will be ignored.
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: mount: wrong fs type, bad option, bad superblock on /dev/loop0,
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: missing codepage or helper program, or other error
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]:
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: In some cases useful info is found in syslog - try
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: dmesg | tail or so.
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: Traceback (most recent call last):
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: File "/var/tmp/one/vmm/lxd/deploy.py", line 154, in
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: apply_profile(profile, container)
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: File "/var/tmp/one/vmm/lxd/deploy.py", line 114, in apply_profile
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: lc.storage_context(container, contextiso)
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: File "/var/tmp/one/vmm/lxd/lxd_common.py", line 187, in storage_context
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: container.files.put('/mnt/' + i.name, i.content)
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: File "/usr/lib/python2.7/dist-packages/pylxd/container.py", line 78, in put
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: params={'path': filepath}, data=data)
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: File "/usr/lib/python2.7/dist-packages/pylxd/client.py", line 94, in post
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: self._assert_response(response, allowed_status_codes=(200, 201, 202))
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: File "/usr/lib/python2.7/dist-packages/pylxd/client.py", line 66, in _assert_response
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: raise exceptions.LXDAPIException(response)
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: pylxd.exceptions.LXDAPIException: file does not exist
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: ExitCode: 1
Mon Sep 25 18:11:31 2017 [Z0][VMM][I]: Failed to execute virtualization driver operation: deploy.
Mon Sep 25 18:11:31 2017 [Z0][VMM][E]: Error deploying virtual machine
Mon Sep 25 18:11:31 2017 [Z0][VM][I]: New LCM state is BOOT_FAILURE
oneadmin@max:~$ ll datastores/0/14
total 243076
drwxrwxr-x 2 oneadmin oneadmin        54 Sep 25 18:11 ./
drwxr-xr-x 3 oneadmin oneadmin        16 Sep 25 18:11 ../
-rw-rw-r-- 1 oneadmin oneadmin      3305 Sep 25 18:11 deployment.0
-rw-r--r-- 1 oneadmin oneadmin 248529635 Sep 25 18:11 disk.0
-rw-r--r-- 1 oneadmin oneadmin    372736 Sep 25 18:11 disk.1

lxc info:

config:
  core.https_address: 0.0.0.0:8443
  core.trust_password: true
api_extensions:
- id_map
api_status: stable
api_version: "1.0"
auth: trusted
public: false
environment:
  addresses:
  - 10.235.2.225:8443
  - 192.168.122.1:8443
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIIFPTCCAyWgAwIBAgIQVu8i4gpyY...
    -----END CERTIFICATE-----
  certificate_fingerprint: e6a7dd45....
  driver: lxc
  driver_version: 2.0.8
  kernel: Linux
  kernel_architecture: x86_64
  kernel_version: 4.4.0-96-generic
  server: lxd
  server_pid: 460
  server_version: 2.0.10
  storage: dir
  storage_version: ""
oneadmin@max:~$ lxc list
+--------+---------+------+------+------------+-----------+
|  NAME  |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+--------+---------+------+------+------------+-----------+
| one-14 | STOPPED |      |      | PERSISTENT | 0         |
+--------+---------+------+------+------------+-----------+
oneadmin@max:~$ losetup
NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE
/dev/loop0         0      0         0  0 /vmfs/datastores/0/14/disk.0
root@max:~# mount /dev/loop0 /mnt/
mount: wrong fs type, bad option, bad superblock on /dev/loop0,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.

root@max:~# dmesg | tail
[13435.322510] audit: type=1400 audit(1506372931.308:42): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default" pid=63940 comm="apparmor_parser"
[13435.322520] audit: type=1400 audit(1506372931.308:43): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default-cgns" pid=63940 comm="apparmor_parser"
[13435.322526] audit: type=1400 audit(1506372931.308:44): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default-with-mounting" pid=63940 comm="apparmor_parser"
[13435.322532] audit: type=1400 audit(1506372931.308:45): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default-with-nesting" pid=63940 comm="apparmor_parser"
[13574.967436] audit: type=1400 audit(1506373070.948:46): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/bin/lxc-start" pid=453 comm="apparmor_parser"
[13574.980193] audit: type=1400 audit(1506373070.960:47): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default" pid=458 comm="apparmor_parser"
[13574.980205] audit: type=1400 audit(1506373070.960:48): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default-cgns" pid=458 comm="apparmor_parser"
[13574.980213] audit: type=1400 audit(1506373070.960:49): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default-with-mounting" pid=458 comm="apparmor_parser"
[13574.980220] audit: type=1400 audit(1506373070.960:50): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="lxc-container-default-with-nesting" pid=458 comm="apparmor_parser"
[15595.453903] audit: type=1400 audit(1506375091.368:51): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxd-one-14_</var/lib/lxd>" pid=24195 comm="apparmor_parser"
lsee@max:~$ sudo mkfs.ext4 /dev/loop0
mke2fs 1.42.13 (17-May-2015)
Discarding device blocks: done
Creating filesystem with 242704 1k blocks and 60720 inodes
Filesystem UUID: 997a0a16-69de-4b2c-b4bd-c56feb56d011
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801, 221185

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

lsee@max:~$ sudo mount /dev/loop0 /mnt/
lsee@max:~$ ls -la /mnt/
total 13
drwxr-xr-x  3 root root  1024 Sep 25 19:27 .
drwxr-xr-x 23 root root   322 Sep 24 06:53 ..
drwx------  2 root root 12288 Sep 25 19:27 lost+found
lsee@max:~$ sudo ls -la /mnt/lost+found/
total 13
drwx------ 2 root root 12288 Sep 25 19:27 .
drwxr-xr-x 3 root root  1024 Sep 25 19:27 ..

I don't know if the following command should work, but anyway I tried:

oneadmin@max:~$ lxc start one-14
error: Error calling 'lxd forkstart one-14 /var/lib/lxd/containers /var/log/lxd/one-14/lxc.conf': err='Failed to run: /usr/bin/lxd forkstart one-14 /var/lib/lxd/containers /var/log/lxd/one-14/lxc.conf: '
  lxc 20170925213131.423 ERROR lxc_conf - conf.c:mount_rootfs:798 - No such file or directory - Failed to get real path for "/var/lib/lxd/containers/one-14/rootfs".
  lxc 20170925213131.423 ERROR lxc_conf - conf.c:setup_rootfs:1220 - Failed to mount rootfs "/var/lib/lxd/containers/one-14/rootfs" onto "/usr/lib/x86_64-linux-gnu/lxc" with options "(null)".
  lxc 20170925213131.423 ERROR lxc_conf - conf.c:do_rootfs_setup:3899 - failed to setup rootfs for 'one-14'
  lxc 20170925213131.423 ERROR lxc_conf - conf.c:lxc_setup:3981 - Error setting up rootfs mount after spawn
  lxc 20170925213131.423 ERROR lxc_start - start.c:do_start:811 - Failed to setup container "one-14".
  lxc 20170925213131.423 ERROR lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 3)
  lxc 20170925213131.424 ERROR lxc_start - start.c:__lxc_start:1358 - Failed to spawn container "one-14".
  lxc 20170925213131.998 ERROR lxc_conf - conf.c:run_buffer:416 - Script exited with status 1.
  lxc 20170925213131.998 ERROR lxc_start - start.c:lxc_fini:546 - Failed to run lxc.hook.post-stop for container "one-14".

Any idea why I'm getting this problem?

Thanks!

Package LXDoNe

Create a debian/ubuntu package for installing LXDoNe.

VNC autorestart patch

Somehow OpenNebula kills the VNC server started during the deploy phase. Sometimes you may get a Server Disconnection message when logged in via noVNC in Sunstone. This is solved by pressing F5 to reload the tab in your browser. What happens behind the scenes is:

counter=0
while [[ $(lxc info one-$VM_ID | grep Status | awk '{print $2}') == "Running" ]]; do
    svncterm -timeout 0 -rfbport $VNC_PORT -c lxc exec one-$VM_ID login || ((counter=counter+1))
    if [[ $counter -eq 6 ]]; then
        exit
    fi
done

This basically restarts the VNC server associated with a one- container whenever it's killed. The problem is that the VNC server shouldn't need restarting, because it shouldn't be killed in the first place. We don't know why it's being killed, but we think it has something to do with OpenNebula, because starting the VNC server manually and not connecting via Sunstone keeps it alive. We used the TigerVNC client and it wouldn't die.

In summary, the vnc.bash patch works, but things could be better. Any ideas are welcome.

Storage Drivers for LXD image datastore

LXDoNe uses raw block devices instead of the default LXD image tarballs, because OpenNebula's default behavior is to operate with raw and qcow images. Right now we are tricking LXD, along the lines of https://github.com/lxc/lxd/issues/3540, but that is not sane behavior, because this work should be done by LXD itself (of course it's not LXD's fault).

With the new storage drivers, lxc image import path-to-tar.gz would correspond to uploading the base image to OpenNebula, and creating the container during lxc launch would correspond to the prolog stage in OpenNebula. The image datastore in OpenNebula would also act as a public image server for the LXD virtualization nodes.

This approach would grant a more LXD-like user experience and could save time, bandwidth, space, and code. We would also support thin-provisioned containers. To develop a storage driver for OpenNebula, read http://docs.opennebula.org/5.4/integration/infrastructure_integration/sd.html#storage-driver. Contributions will be really appreciated.

Improve VNC connection

Some ideas taken from Proxmox and lxd-latest

  • sudo lxc-console -n first-lxd-container -P /var/lib/lxd/containers -t 0
  • usr/bin/vncterm -rfbport 5901 -timeout 10 -authpath /vms/100 -perm VM.Console -notls -listen localhost -c /usr/bin/dtach -A /var/run/dtach/vzctlconsole100 -r winch -z lxc-console -n 100

Support for qcow2 block images

Using qemu-nbd and the nbd kernel module, we could mount qcow2 images on the container directory.

The module is loaded by issuing:

apt install qemu-utils
modprobe nbd

This is an example of mapping an image file to a block device:
qemu-nbd -c /dev/nbd0 /var/lib/vz/images/107/vm-107-disk-1.qcow2

Then we could mount it as a regular volume.

To unmap the image file:
qemu-nbd -d /dev/nbd0

LXDoNe production ready?

Hello! Has anyone tried this plugin? Can anyone confirm whether it really works with OpenNebula? Thank you so much.

Hardcoded DATASTORE location

Hello,

In a preexisting installation of OpenNebula 5.4, after changing the value of DATASTORE_LOCATION in oned.conf, the default /var/lib/one/datastores/ path is still hardcoded in the LXD addon, so the value in oned.conf is never used.

Relevant files are:

DS_LOCATION = '/var/lib/one/datastores/' + DS_ID + '/' + VM_ID + '/'

and

disk = "/var/lib/one/datastores/" + DS_ID + \
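A sketch of deriving the path from oned.conf instead of hardcoding it: the DATASTORE_LOCATION attribute is real, but the parsing helper below is illustrative, not a proposed patch:

```python
import re

def datastore_location(oned_conf_text, default="/var/lib/one/datastores"):
    """Extract DATASTORE_LOCATION from oned.conf text, else the default."""
    match = re.search(r'^\s*DATASTORE_LOCATION\s*=\s*"?([^"\n]+)"?',
                      oned_conf_text, re.MULTILINE)
    return match.group(1).rstrip("/") if match else default

conf = 'PORT = 2633\nDATASTORE_LOCATION = "/srv/one/datastores"\n'
```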

Thanks!
