
lavabit / robox

605 stars · 17 watchers · 136 forks · 26.85 MB

The tools needed to robotically create, configure, and provision a large number of operating systems, for a variety of hypervisors, using Packer.

Shell 88.30% Ruby 11.70%
bash packer vagrant freebsd openbsd centos rhel alpine debian arch


robox's Issues

generic/fedora29 (v1.8.40, libvirt) stuck on .device systemd jobs

Hello.

generic/fedora29 (v1.8.40, libvirt)

Image boot hangs for 1.5 minutes with the messages A start job is running for dev-disk...8c3382.device and Dependency failed for Resume from ...8c3382 (this is the swap disk UUID, /dev/sda2), and then hangs indefinitely with the message A start job is running for dev-disk...8243e.device (this is /, /dev/sda3).

However, booting this image manually into rescue mode (from Grub menu) works. After regenerating initrd from rescue mode with dracut --verbose --force, booting to normal mode also works.

Thank you for all your boxes. Your project is probably the only one that builds boxes for parallels, hyperv, virtualbox, libvirt, and vmware_desktop (five platforms at once), which allows a consistent environment across different virtualization engines.

Debian 10 for Hyper-V: why does the latest build sometimes ship with just 3 providers instead of 5?

Hello,

First, thanks a lot for building these Vagrant boxes and providing them to us!

I noticed that the latest debian 10 boxes sometimes don't include a Hyper-V provider. Is this a bug in the build system?
Please see v1.9.19 and v1.9.14, which have neither hyper-v nor parallels:
https://app.vagrantup.com/generic/boxes/debian10
https://app.vagrantup.com/roboxes/boxes/debian10

The latest Debian 10 (buster) box, v1.9.19, is the first based on stable Debian (all previous versions were based on Debian 10 testing). I would very much like to have Hyper-V as a provider for this first stable Debian 10 box.

Is this a bug, or is a new Hyper-V build planned soon?

Thanks !

Request about naming openbsd boxes

Hello,

please name the OpenBSD boxes with the release number, e.g. openbsd64 for OpenBSD 6.4.
OpenBSD release numbers increase by 0.1 from release to release.

Regards

generic/ vs roboxes/ namespace?

Which namespace should users prefer when referencing these boxes from Vagrant Cloud, the "generic" or "roboxes" namespace? Will one become deprecated in favor of the other over time?

More boxes!

Hey, thank you for maintaining so many useful Vagrant boxes online, and supporting many backend providers as well! I'd love to see even more guest OSes available through robox! Feel free to crib a few from my packer templates:

https://github.com/mcandre/packer-templates

  • Illumos: DilOS, SmartOS
  • DragonflyBSD
  • HardenedBSD
  • macOS
  • NetBSD
  • Haiku OS
  • MINIX
  • another musl/Linux with xbps packages: Void Linux
  • Windows

Where do you build your VMware boxes?

Hello

I'm packaging boxes via Packer and chef/bento-based configuration files. VirtualBox works fine, but I hit a wall trying to run my VMware CentOS / RHEL boxes on ESXi using vagrant plus the vagrant-vmware-esxi plugin: the disk seems to be ignored, and the VM boot loops on a PXE/TFTP lookup.

So I'm trying to understand, hence this question. I tried some upstream boxes: centos/6 and centos/7 work, while generic/centos6, centos7, and rhel7 don't, with the same symptoms. The CentOS project builds its boxes on VMware Workstation. Mine are built on ESXi 5.1/5.5. Where and how do you build yours?

Any hints for investigation would be welcome; I've been at this for days and I'm out of ideas!

RHEL8: does not boot with disk_bus=scsi

Hi,

It appears this has already been changed for the vmware variant, but I was unable to boot the libvirt variant of your rhel8 image with the default setting:

disk_bus=scsi

It times out searching for the root volume, and fdisk in rescue mode does not show any disks.
I was able to boot the VM with vagrant-libvirt using

disk_bus="sata"

in the Vagrantfile.
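The reporter's workaround, expressed as a complete Vagrantfile sketch (box name and option taken from the report above; untested here):

```ruby
# Vagrantfile: boot the libvirt variant of the rhel8 box on a SATA bus
# instead of the default SCSI bus.
Vagrant.configure("2") do |config|
  config.vm.box = "generic/rhel8"
  config.vm.provider "libvirt" do |libvirt|
    # The default "scsi" bus times out searching for the root volume.
    libvirt.disk_bus = "sata"
  end
end
```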

Cannot ssh into generic/ubuntu1604 (virtualbox, 1.8.24)

I cannot ssh into generic/ubuntu1604, provider virtualbox, version 1.8.24.

If I log into the console, sshd is running and listening on port 22; however, there are errors in the logs:

error: Could not load host key: /etc/ssh/ssh_host_rsa_key
error: Could not load host key: /etc/ssh/ssh_host_dsa_key
error: Could not load host key: /etc/ssh/ssh_host_ecdsa_key
error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
> vagrant up --provider virtualbox
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'generic/ubuntu1604'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'generic/ubuntu1604' is up to date...
==> default: Setting the name of the VM: generic_ubuntu1604_default_1534863324235_72684
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Warning: Remote connection disconnect. Retrying...
    default: Warning: Remote connection disconnect. Retrying...
    default: Warning: Remote connection disconnect. Retrying...
    default: Warning: Remote connection disconnect. Retrying...
    default: Warning: Remote connection disconnect. Retrying...
    default: Warning: Remote connection disconnect. Retrying...
    default: Warning: Remote connection disconnect. Retrying...
    default: Warning: Remote connection disconnect. Retrying...
    default: Warning: Remote connection disconnect. Retrying...
    default: Warning: Remote connection disconnect. Retrying...
The guest machine entered an invalid state while waiting for it
to boot. Valid states are 'starting, running'. The machine is in the
'unknown' state. Please verify everything is configured
properly and try again.

If the provider you're using has a GUI that comes with it,
it is often helpful to open that and watch the machine, since the
GUI often has more helpful error messages than Vagrant can retrieve.
For example, if you're using VirtualBox, run `vagrant up` while the
VirtualBox GUI is open.

The primary issue for this error is that the provider you're using
is not properly configured. This is very rarely a Vagrant issue.
> ssh vagrant@127.0.0.1 -p 2460
Connection closed by 127.0.0.1 port 2460

Ideas for speeding up builds

robox is a wonderfully comprehensive collection of base boxes covering a wide variety of different operating systems. I gather that time to pack all these boxes is a constraint on further development, so I wonder if we can improve packing time somehow.

For example, is there a significant restriction on build resources, RAM, CPU cores, and so on? Perhaps we could provide a donation link specifically for funding cloud resources used to build our images.

That could help with the hardware constraints on build time, by scaling vertically. Could we also scale horizontally? Perhaps building boxes from a pool of hosts.

Finally, what steps can we take to improve the build time of each particular box? I've done some (premature) optimizations on my own box templates, such as minifying boot_command contents and compressing provisioning media, as keyboard input delivery and FTP can run slowly on certain virtual guests.

Honestly, the main bottleneck of OS installation tends to be the long, uncontrolled process of running the install wizard. But what else can we shave off, even just a few minutes per build? Reducing boot splash timeouts, selecting faster virtual hardware (when guest-compatible), and ensuring that installation media (ISOs, IMGs) are sourced from fast online caches. Any other ideas for accelerating builds?

MirBSD box

Hi, I made a prototype base box for MirBSD, also known as MirOS. It's similar to OpenBSD but with a few quirks. Here are the packer tweaks and tests I implemented to support this VM:

https://github.com/mcandre/packer-templates/tree/master/miros

Basic functionality seems to work, including rsync-based shared folders and provisioning files and scripts. Some notes:

  • ACPI support is missing. packer build is able to (just barely) succeed with the provided boot_command, but only because no complex applications are running at the same time that halt -p executes. The MirBSD devs are not interested in adding ACPI support, and the VirtualBox developers are not interested in improving APM support. Applications will generally work, and the file system is generally safe, as long as sensitive user applications are terminated cleanly prior to vagrant halt. Not an ideal situation, but one we can document for the corner case of people wishing to deploy MirBSD VMs while avoiding accidental file system corruption! If you find that VMware or qemu provides better APM support, please let me know; then it would actually make sense to offer VMware/libvirt providers but not VirtualBox, to safeguard against this issue.
  • Related to the lack of ACPI support, much of the provisioning for the base box is done directly in boot_command that would ordinarily be done at a later phase over SSH (much faster). This is because Packer doesn't understand that the last SSH provisioning script might trigger a final powerdown it cannot observe without ACPI, so in practice Packer stalls the build waiting for an SSH connection that never happens. Fortunately, MirBSD is a fairly lean install (~100 MiB), so the boot_command provisioning is minimal in time and complexity.
  • MirBSD's repository for install media, including the base .NGZ files, is quite slow, taking around half an hour to download a basic set of install files. As a workaround, I have set up my packer build process with a make wrapper that mirrors these files locally with wget. It's up to you whether to do this; for debugging it really speeds up the build cycle. When everything is running smoothly, there's no particular reason to mirror these files locally.
  • MirPorts appears to be broken. pkg_add still works, and I plan to look into pkgsrc later, in order to get more cutting-edge packages like cmake and pip installed in some of my personal downstream boxes.

Buster broken

Now that Debian v10 Buster has graduated from testing to stable, the repository URLs have changed. This leaves the generic/debian10 VM in a broken apt state, unable to install packages. Let's point the buster VMs at the sharp-teeth, unleashed, ultra-beast stable release!

Some images fail to find a rootfs

Hello!

I have a strange issue with some boxes under the libvirt provider. For some reason the box can't find the rootfs after booting up. After spending countless hours rummaging through config files and ending up reinstalling vagrant, libvirt, and qemu, I still haven't managed to find the root cause.

Current behavior:

$ vagrant init generic/arch
$ vagrant up --provider=libvirt
Bringing machine 'default' up with 'libvirt' provider...
==> default: Box 'generic/arch' could not be found. Attempting to find and install...
    default: Box Provider: libvirt
    default: Box Version: >= 0
==> default: Loading metadata for box 'generic/arch'
    default: URL: https://vagrantcloud.com/generic/arch
==> default: Adding box 'generic/arch' (v1.9.6) for provider: libvirt
    default: Downloading: https://vagrantcloud.com/generic/boxes/arch/versions/1.9.6/providers/libvirt.box
==> default: Successfully added box 'generic/arch' (v1.9.6) for 'libvirt'!
==> default: Uploading base box image as volume into libvirt storage...
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings...
==> default:  -- Name:              vagrant_default
==> default:  -- Domain type:       kvm
==> default:  -- Cpus:              2
==> default:  -- Feature:           acpi
==> default:  -- Feature:           apic
==> default:  -- Feature:           pae
==> default:  -- Memory:            2048M
==> default:  -- Management MAC:    
==> default:  -- Loader:            
==> default:  -- Base box:          generic/arch
==> default:  -- Storage pool:      default
==> default:  -- Image:             /var/lib/libvirt/images/vagrant_default.img (32G)
==> default:  -- Volume Cache:      default
==> default:  -- Kernel:            
==> default:  -- Initrd:            
==> default:  -- Graphics Type:     vnc
==> default:  -- Graphics Port:     5900
==> default:  -- Graphics IP:       127.0.0.1
==> default:  -- Graphics Password: Not defined
==> default:  -- Video Type:        cirrus
==> default:  -- Video VRAM:        256
==> default:  -- Sound Type:	
==> default:  -- Keymap:            en-us
==> default:  -- TPM Path:          
==> default:  -- INPUT:             type=mouse, bus=ps2
==> default: Creating shared folders metadata...
==> default: Starting domain.
==> default: Waiting for domain to get an IP address...

This phase never finishes. However, on the serial console the initramfs complains about a missing rootfs:
[screenshot of initramfs error omitted]

dmesg tail:
[screenshot of dmesg output omitted]

I have this issue only on Fedora, the generic/arch box works fine on CentOS 7.

$ cat /etc/fedora-release 
Fedora release 29 (Twenty Nine)
$ rpm -q libvirt vagrant vagrant-libvirt 
libvirt-4.7.0-1.fc29.x86_64
vagrant-2.1.2-3.fc29.noarch
vagrant-libvirt-0.0.40-5.fc29.noarch

I'm under the impression I keep missing something pretty obvious here, as some boxes work (like generic/debian9) and some don't (like generic/arch or generic/centos7).

UTC

When specifying timezones, please select UTC, which results in more predictable behavior on our servers.
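Until the boxes default to UTC, a user-side stopgap is a one-line shell provisioner. This is only a sketch, not the project's fix: the box name is an arbitrary example, and it assumes a systemd guest with timedatectl available.

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "generic/debian10"  # any systemd-based box
  # Force the guest clock to UTC on first boot.
  config.vm.provision "shell", inline: "timedatectl set-timezone UTC"
end
```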

Contributing Documentation

Hey, I stumbled on this project while browsing Vagrant Cloud for boxes and noticed the generic boxes were all done quite well, but I was unable to find some boxes I'd like to see offered. So first off, thank you for this project! I appreciate the work you've put in here.

I made a comment on the other issue where you were asking which distros people would like to see added next. But I would also like to see documentation on how to contribute to the repository, if that's not too hard. That way it wouldn't fall only on you to make boxes; others could follow your steps to add the operating systems they'd like to see. Looking at the repo now, it's a little unclear what needs to happen to get another OS added to the automated box builds.

Fedora boxes do not support vboxsf

I'm trying to use roboxes/fedora29 but I'm running into issues when trying to have a synced folder.

Apparently the Fedora package virtualbox-guest-additions.x86_64 does not supply the vboxsf file system. See this Red Hat bug and this one for discussion.

Currently the vbguest plugin also fails to properly install the guest additions with this bug.

So I'm wondering if there is a way of getting Fedora in a virtual machine with synced folders that keep working across vagrant destroy and vagrant up, without having to roll one's own Fedora box.

Maybe using nfs is a solution, but my networking setup is a bit complex, so I haven't gotten round to that.

I don't know if roboxes could create a box compatible with synced folders, but I thought I should at least report the current state of affairs.
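One way to get synced folders that survive vagrant destroy/up without vboxsf is Vagrant's built-in rsync synced-folder type, which needs no guest additions. A sketch, assuming rsync is installed in the guest:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "roboxes/fedora29"
  # rsync syncing is one-way (host -> guest) and is pushed at boot;
  # run `vagrant rsync` (or `vagrant rsync-auto`) after host-side edits.
  config.vm.synced_folder ".", "/vagrant", type: "rsync"
end
```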

Hardcoded external DNS breaks internal DNS resolver

We use a non-public subdomain inside our internal DNS infrastructure. Hardcoding public DNS servers breaks this (common) setup, rendering all your Ubuntu 18.04 and 18.10 images unusable for many companies.

root@default-u1804:~# dig proxy.internal.ourdomain.com     
; <<>> DiG 9.11.3-1ubuntu1.2-Ubuntu <<>> proxy.internal.ourdomain.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 34613
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;proxy.internal.ourdomain.com.  IN      A

;; Query time: 31 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Mon Nov 19 09:16:11 PST 2018
;; MSG SIZE  rcvd: 57

generic/ubuntu1804 kernel panic on boot with virtualbox

Starting a generic/ubuntu1804 vagrant box in VirtualBox with 256 MB of memory gives a kernel panic and an out-of-memory error on the console; vagrant ssh does not work, and provisioning fails (this happens early in the boot process).

This issue does not occur with bento/ubuntu-18.04 or ubuntu/bionic64. Is a different kernel or different kernel options in use, or is something else causing this?
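Until the root cause is identified, raising the VM's memory avoids the OOM panic. A workaround sketch, not a fix; 1024 MB is an arbitrary value chosen to be comfortably above the failure point:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "generic/ubuntu1804"
  config.vm.provider "virtualbox" do |vb|
    # 256 MB triggers a kernel panic with this box; give it more headroom.
    vb.memory = 1024
  end
end
```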

dns servers hardcoded in generic/ubuntu1804

The following servers are hardcoded in the generic/ubuntu1804 image:

# systemd-resolve --status
Global
         DNS Servers: 4.2.2.1
                      4.2.2.2
                      208.67.220.220

Shouldn't the local DNS servers obtained from DHCP take priority over these? In an environment where access to external DNS servers is blocked, this image requires further work to use, rather than working out of the box like, say, bento/ubuntu-18.04 or ubuntu/bionic64.
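As a user-side workaround until the hardcoded servers are removed, a shell provisioner can point systemd-resolved at internal servers via a drop-in. A sketch only: 10.0.0.53 and internal.example.com are placeholders for your own infrastructure.

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "generic/ubuntu1804"
  config.vm.provision "shell", inline: <<-'SHELL'
    # Override the baked-in global DNS servers with internal ones.
    mkdir -p /etc/systemd/resolved.conf.d
    printf '[Resolve]\nDNS=10.0.0.53\nDomains=internal.example.com\n' \
      > /etc/systemd/resolved.conf.d/internal.conf
    systemctl restart systemd-resolved
  SHELL
end
```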

Shrink boxes

We can strip out some components to reduce total image size, see the cleanup scripts in https://github.com/mcandre/packer-templates for examples. In general, caches should be cleaned, including /tmp mounts and any OS package manager caches.

We can shrink images even further by removing some nonessential software packages, like perl on Ubuntu, that aren't strictly needed to boot and serve SSH commands. This could break some user expectations, of course, so it's up to you how minimal or maximal the base boxes should be.

The `synced_folder` does not work on `generic/centos7`

I'm using generic/centos7 with the following configuration

Vagrant.configure("2") do |config|
    config.vm.define "cent7" do |cent7|
        cent7.vm.box = "generic/centos7"
        cent7.vm.network "private_network", ip: "172.20.120.10"
        cent7.vm.synced_folder "bin/", "/vagrant/bin"
    end
end

But there seems to be some issue related to synced_folder. I see the following error when I run vagrant up cent7:

An error occurred while executing `vmrun`, a utility for controlling
VMware machines. The command and output are below:

Command: ["enableSharedFolders", "/Users/nalluri/Projects/consul-client/.vagrant/machines/cent7/vmware_desktop/f2febc17-913d-48b4-b94a-f5dc84e46f12/generic-centos7-vmware.vmx", {:notify=>[:stdout, :stderr]}]

Stdout: Error: There was an error mounting the Shared Folders file system inside the guest operating system

Stderr:

More architectures!

Could we get boxes for more architectures, to help test applications in different environments? I think most of these boxes are amd64, which is awesome; i386, powerpc, arm, and mips would be great as well!

Unfortunately, most hypervisors do not currently support non-x86 guests. Though I think it is possible to run some non-x86 guest boxes with Vagrant via the vagrant-libvirt plugin, which enables users to run non-x86 guests on x86 hosts, provided the hosts are running (GNU?) Linux natively.

Could we get some libvirt-based ppc, arm, and mips boxes published? These take longer to build and run, but they are worth it for projects that need to test on lots of different architectures.

Stuck at Waiting for SSH to become available...

I'm unable to get generic/ubuntu1804 and generic/ubuntu1604 to work properly with libvirt.

Steps to reproduce:

  1. vagrant init generic/ubuntu1804
  2. vagrant up --provider=libvirt

Vagrant gets stuck at Waiting for SSH to become available... but I can see that the VM has booted successfully in virt-manager.
I have previously used the ubuntu1604 box successfully on this computer and I'm unsure what has happened.
The same problem is not present when using virtualbox as provider.

Environment
Host OS: Ubuntu 18.04
Vagrant: 2.0.1
libvirt: 4.0.0
QEMU emulator version 2.11.1

Missing guest additions in generic/ubuntu1804 box for virtualbox provider

Bringing up a box with a Vagrantfile that shares my home folder with the guest generates the following error:

MacBook-Pro:ubuntu-18.04(master)$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'generic/ubuntu1804' could not be found. Attempting to find and install...
    default: Box Provider: virtualbox
    default: Box Version: >= 0
==> default: Loading metadata for box 'generic/ubuntu1804'
    default: URL: https://vagrantcloud.com/generic/ubuntu1804
==> default: Adding box 'generic/ubuntu1804' (v1.8.38) for provider: virtualbox
    default: Downloading: https://vagrantcloud.com/generic/boxes/ubuntu1804/versions/1.8.38/providers/virtualbox.box
    default: Download redirected to host: vagrantcloud-files-production.s3.amazonaws.com
==> default: Successfully added box 'generic/ubuntu1804' (v1.8.38) for 'virtualbox'!
==> default: Importing base box 'generic/ubuntu1804'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'generic/ubuntu1804' is up to date...
==> default: Setting the name of the VM: ubuntu-1804_default_1541001124673_44044
==> default: Fixed port collision for 22 => 2222. Now on port 2200.
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 80 (guest) => 8081 (host) (adapter 1)
    default: 22 (guest) => 2200 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2200
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: 
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default: 
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Mounting shared folders...
    default: /home/vohi => /Users/vohi
Vagrant was unable to mount VirtualBox shared folders. This is usually
because the filesystem "vboxsf" is not available. This filesystem is
made available via the VirtualBox Guest Additions and kernel module.
Please verify that these guest additions are properly installed in the
guest. This is not a bug in Vagrant and is usually caused by a faulty
Vagrant box. For context, the command attempted was:

mount -t vboxsf -o uid=1000,gid=1000 home_vohi /home/vohi

The error output from the command was:

mount: /home/vohi: wrong fs type, bad option, bad superblock on home_vohi, missing codepage or helper program, or other error.

The Vagrantfile is:

# -*- mode: ruby -*-
# vi: set ft=ruby :

$user = ENV['USER']

Vagrant.configure("2") do |config|
  config.vm.box = "generic/ubuntu1804"
  config.vm.network "forwarded_port", guest: 80, host: 8081, host_ip: "127.0.0.1"
  config.vm.synced_folder "~", "/home/#{$user}"
  config.vm.provision "shell", path: "provision.sh", args: "/home/#{$user}"
end

'vagrant up' fails on generic/alpine39 when a hostname is set

Vagrant version

2.2.4

Host operating system

Ubuntu 18.04 LTS (Bionic Beaver)

Guest operating system

Alpine Linux 3.9

Vagrantfile

Vagrant.configure("2") do |config|  
  config.vm.box = "generic/alpine39"
  config.vm.hostname = "alpine.example.org" 
end

Expected behavior

Alpine VM is up and requested hostname is set.

Actual behavior

vagrant up fails on Setting hostname....

==> default: Setting hostname...                                                                                                                                                              
The following SSH command responded with a non-zero exit status.                                                                                                                              
Vagrant assumes that this means the command failed!                                                                                                                                           
                                                                                                                                                                                              
# Save current hostname saved in /etc/hosts                                                                                                                                                   
CURRENT_HOSTNAME_FULL="$(hostname -f)"                                                                                                                                                        
CURRENT_HOSTNAME_SHORT="$(hostname -s)"                                                                                                                                                       
                                                                                                                                                                                              
# New hostname to be saved in /etc/hosts                                                                                                                                                      
NEW_HOSTNAME_FULL='alpine.exampler.org'                                                                                                                                                       
NEW_HOSTNAME_SHORT="${NEW_HOSTNAME_FULL%%.*}"                                                                                                                                                 
                                                                                                                                                                                              
# Update sysconfig                                                                                                                                                                            
sed -i 's/\(HOSTNAME=\).*/\1alpine.exampler.org/' /etc/sysconfig/network                                                                                                                      
                                                                                                                                                                                              
# Set the hostname - use hostnamectl if available
if command -v hostnamectl; then
  hostnamectl set-hostname --static 'alpine.exampler.org'
  hostnamectl set-hostname --transient 'alpine.exampler.org'
else
  hostname 'alpine.exampler.org'
fi

# Update ourselves in /etc/hosts
if grep -w "$CURRENT_HOSTNAME_FULL" /etc/hosts; then
  sed -i -e "s/( )$CURRENT_HOSTNAME_FULL( )/$NEW_HOSTNAME_FULL/g" -e "s/( )$CURRENT_HOSTNAME_FULL$/$NEW_HOSTNAME_FULL/g" /etc/hosts
fi
if grep -w "$CURRENT_HOSTNAME_SHORT" /etc/hosts; then
  sed -i -e "s/( )$CURRENT_HOSTNAME_SHORT( )/$NEW_HOSTNAME_SHORT/g" -e "s/( )$CURRENT_HOSTNAME_SHORT$/$NEW_HOSTNAME_SHORT/g" /etc/hosts
fi

# Restart network
service network restart


Stdout from the command:

127.0.0.1       localhost.lavabit.com localhost localhost.localdomain localhost
127.0.0.1       localhost.lavabit.com localhost localhost.localdomain localhost
::1             localhost localhost.localdomain


Stderr from the command:

sed: /etc/sysconfig/network: No such file or directory
 * service: service `network' does not exist

Steps to reproduce

Run vagrant up with the provided Vagrantfile

References

The same error is mentioned in hashicorp/vagrant#10584.
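The failing script assumes /etc/sysconfig/network and a `network` service, neither of which exists on Alpine. Until that is handled upstream, one workaround is to skip config.vm.hostname entirely and set the hostname with a shell provisioner instead. A sketch using busybox's `hostname -F`; the hostname value is the example from above:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "generic/alpine39"
  # Deliberately no config.vm.hostname here: Vagrant's built-in
  # hostname script fails on Alpine. Set it manually instead.
  config.vm.provision "shell", inline: <<-'SHELL'
    echo "alpine.example.org" > /etc/hostname
    hostname -F /etc/hostname
  SHELL
end
```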

generic/fedora30 (virtualbox) VB guest update error with vagrant-vbguest

I created a Vagrantfile similar to this:

Vagrant.configure("2") do |config|
  # I have some proxy settings here

  config.vm.hostname = "foo"
  config.vm.box = "generic/fedora30"

  config.vm.box_check_update = false
end

When I run vagrant up (on Linux) it fails:

Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'generic/fedora30'...
==> default: Matching MAC address for NAT networking...
==> default: Setting the name of the VM: client-vm_default_1563189751261_38790
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: 
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default: 
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Configuring proxy environment variables...
==> default: Configuring proxy for Git...
==> default: Configuring proxy for Yum...
Got different reports about installed GuestAdditions version:
Virtualbox on your host claims:   6.0.0
VBoxService inside the vm claims: 6.0.8
Going on, assuming VBoxService is correct...
[default] GuestAdditions seems to be installed (6.0.8) correctly, but not running.
Got different reports about installed GuestAdditions version:
Virtualbox on your host claims:   6.0.0
VBoxService inside the vm claims: 6.0.8
Going on, assuming VBoxService is correct...
bash: line 4: start: command not found
bash: line 4: start: command not found
Got different reports about installed GuestAdditions version:
Virtualbox on your host claims:   6.0.0
VBoxService inside the vm claims: 6.0.8
Going on, assuming VBoxService is correct...
bash: line 4: setup: command not found
==> default: Checking for guest additions in VM...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

 setup

Stdout from the command:



Stderr from the command:

bash: line 4: setup: command not found

I have the vagrant-vbguest plugin installed if that matters.

$ vagrant --version
Vagrant 2.2.4
$ vagrant plugin list
vagrant-disksize (0.1.3, global)
  - Version Constraint: > 0
vagrant-docker-compose (1.3.0, global)
  - Version Constraint: > 0
vagrant-ignition (0.0.3, global)
  - Version Constraint: > 0
vagrant-libvirt (0.0.45, global)
  - Version Constraint: > 0
vagrant-openstack-provider (0.13.0, global)
  - Version Constraint: > 0
vagrant-proxyconf (2.0.1, global)
  - Version Constraint: > 0
vagrant-vbguest (0.18.0, global)
  - Version Constraint: > 0

Box version: generic/fedora30 (virtualbox, 1.9.18)

Update: indeed this problem is related to vagrant-vbguest. Could this vagrant box support vagrant-vbguest? Otherwise the following workaround in the Vagrantfile works (I found it in some other Vagrantfile):

  if Vagrant.has_plugin?("vagrant-vbguest") then
    config.vbguest.auto_update = false
  end

9p and NFS support for Vagrant

Would it be possible to add 9p and NFS support to the Vagrant libvirt boxes by default? VirtualBox shared folders aren't available there, and those are (afaik) the only other options for mounts that don't have to be synced manually.
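In the meantime, both mount types can be requested from the consuming Vagrantfile when using vagrant-libvirt. A minimal sketch, assuming the host runs an NFS server and the guest kernel has 9p support (the box name and paths are illustrative):

```ruby
# Hypothetical Vagrantfile sketch: with vagrant-libvirt, a synced folder can
# use NFS or 9p instead of the (unavailable) VirtualBox shared-folder type.
Vagrant.configure("2") do |config|
  config.vm.box = "generic/debian10"

  # NFS export from the host (requires a running NFS server on the host).
  config.vm.synced_folder "./data", "/vagrant-nfs", type: "nfs"

  # 9p passthrough (libvirt provider only; guest kernel needs 9p modules).
  config.vm.synced_folder "./src", "/vagrant-9p",
    type: "9p", accessmode: "passthrough"
end
```

Baking the guest-side prerequisites (nfs-utils, 9p kernel modules) into the boxes would make these work out of the box.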

Anyone else getting "E1000 NIC" security warnings?

I recently updated Vagrant, and now it complains that the virtual network device that many boxes use by default, E1000, is insecure. Does this happen with the generic/ boxes as well? Can we pack more secure base boxes while Vagrant works on a fix?
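As a stopgap, the NIC type can be overridden per-project without repacking the base boxes. A sketch, assuming the VirtualBox provider (box name is illustrative; `--nictype1` selects the emulated device for adapter 1):

```ruby
# Untested sketch: switch the VirtualBox NIC from the default E1000 to
# virtio-net in the consuming Vagrantfile, pending a fix in the base boxes.
Vagrant.configure("2") do |config|
  config.vm.box = "generic/ubuntu1804"
  config.vm.provider "virtualbox" do |vb|
    # Equivalent to: VBoxManage modifyvm <id> --nictype1 virtio
    vb.customize ["modifyvm", :id, "--nictype1", "virtio"]
  end
end
```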

vagrant upload fails on ubuntu1804 box when uploading a directory

Hi,

I'm managing jobs that I want to execute on a Vagrant-managed VM as subdirectories containing the necessary script files. The idea is that, given a directory jobs/test with a main.sh and other files, I can basically just do

$ vagrant upload jobs/test test ubuntu1804

and then run

$ vagrant ssh -c test/main.sh ubuntu1804

This worked nicely for all sorts of images until recently. It works with version 1.8.54, but with the latest version 1.9.2 I get an error from vagrant:

/opt/vagrant/embedded/gems/2.2.3/gems/net-scp-1.2.1/lib/net/scp.rb:398:in `await_response_state': scp: error: unexpected filename: . (RuntimeError)
	from /opt/vagrant/embedded/gems/2.2.3/gems/net-scp-1.2.1/lib/net/scp.rb:365:in `block (3 levels) in start_command'
	from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/channel.rb:610:in `do_close'
	from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/session.rb:573:in `channel_closed'
	from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/session.rb:682:in `channel_close'
	from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/session.rb:549:in `dispatch_incoming_packets'
	from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/session.rb:249:in `ev_preprocess'
	from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/event_loop.rb:101:in `each'
	from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/event_loop.rb:101:in `ev_preprocess'
	from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/event_loop.rb:29:in `process'
	from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/session.rb:228:in `process'
	from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/session.rb:181:in `block in loop'
	from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/session.rb:181:in `loop'
	from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/session.rb:181:in `loop'
	from /opt/vagrant/embedded/gems/2.2.3/gems/net-ssh-5.1.0/lib/net/ssh/connection/channel.rb:272:in `wait'
	from /opt/vagrant/embedded/gems/2.2.3/gems/net-scp-1.2.1/lib/net/scp.rb:284:in `upload!'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/communicators/ssh/communicator.rb:296:in `block in upload'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/communicators/ssh/communicator.rb:709:in `block in scp_connect'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/communicators/ssh/communicator.rb:489:in `connect'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/communicators/ssh/communicator.rb:707:in `scp_connect'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/communicators/ssh/communicator.rb:293:in `upload'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/commands/upload/command.rb:104:in `block in execute'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/plugin/v2/command.rb:238:in `block in with_target_vms'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/plugin/v2/command.rb:232:in `each'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/plugin/v2/command.rb:232:in `with_target_vms'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/commands/upload/command.rb:69:in `execute'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/cli.rb:58:in `execute'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:291:in `cli'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/bin/vagrant:182:in `<main>'

Given that the only change is the version of the box, I suspect something in the box configuration has broken things.
I can still upload individual files, but that's not what I need :)

Perhaps you are aware of some changes here that could have broken this?

Cheers,
Volker
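One possible workaround while recursive `vagrant upload` misbehaves: stream the directory through `vagrant ssh` with tar instead of scp. A sketch, assuming the jobs/test directory and ubuntu1804 machine name from the report above:

```shell
# Pack the directory on the host and unpack it in the guest's home directory,
# bypassing the net-scp recursive-upload code path entirely.
tar -C jobs -cf - test | vagrant ssh -c 'tar -xf - -C ~' ubuntu1804
```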

Plan 9?

Could we get a Plan 9 box? Would be awesome to quickly build and test apps in a Plan 9 VM!

Parallels Box Error

I tried to run this box and seem to be getting the following error:

There was an error loading a Vagrantfile. The file being loaded
and the error message are shown below. This is usually caused by
a syntax error.

Path: <provider config: parallels>
Line number: 42
Message: ArgumentError: wrong number of arguments (given 4, expected 1..2)
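A guess: this kind of ArgumentError inside the provider config block has been seen with stale vagrant-parallels plugin releases, so updating the plugin may be worth trying before digging further:

```shell
# Update the Parallels provider plugin to the latest release.
vagrant plugin update vagrant-parallels
```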

404 when initialize boxes in vagrant

I keep getting a 404 when I attempt to "vagrant up" after initializing a box. I also get a 404 if I try to add a box directly. I have tried this with Fedora 28, Fedora 29, and Fedora 29 Silverblue:

C:\Users\Jonathan Calloway\vagrant>vagrant box add generic/fedora28
==> box: Loading metadata for box 'generic/fedora28'
box: URL: https://vagrantcloud.com/generic/fedora28
This box can work with multiple providers! The providers that it
can work with are listed below. Please review the list and choose
the provider you will be working with.

  1. hyperv
  2. libvirt
  3. parallels
  4. virtualbox
  5. vmware_desktop

Enter your choice: 4
==> box: Adding box 'generic/fedora28' (v1.8.52) for provider: virtualbox
box: Downloading: https://vagrantcloud.com/generic/boxes/fedora28/versions/1.8.52/providers/virtualbox.box
box: Progress: 0% (Rate: 0/s, Estimated time remaining: --:--:--)
An error occurred while downloading the remote file. The error
message, if any, is reproduced below. Please fix this error and try
again.

The requested URL returned error: 404 Not Found
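To separate a Vagrant Cloud metadata problem from a local proxy or network issue, the artifact URL from the output above can be probed directly; the first response line shows whether the 404 comes from the server itself:

```shell
# Fetch only the response headers for the box artifact and print the status line.
curl -sI https://vagrantcloud.com/generic/boxes/fedora28/versions/1.8.52/providers/virtualbox.box | head -n 1
```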

Which distros should be added next?

@mcandre when I'm able to get RAM/SSDs for the recently donated blades, and get those robots up and running, I plan to add more distros. Devuan, and possibly Minix, are at the top of my personal wish list, but if you have one or two that you think are important, now would be the time to suggest them. MacOS/Windows are also near the top of my list, but they'll require the most work.

Technically speaking, once I get the new blade server working I should have the capacity to add several distros. The bottleneck will be the time it takes to test/troubleshoot each new distro on the 5 different hypervisors I currently target. If you're willing to tweak your templates so they fit into the Robox generic pipeline, and then submit a pull request, that would make it easier for me to accept more of the distros you keep requesting.

If you're interested, I can set up an experimental branch you can work with while you get the new boxes integrated.

freebsd vagrant does not connect

On arch I run:

$ vagrant init generic/freebsd12
$ vagrant up              
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'freebsd/FreeBSD-11.0-STABLE' version '2017.05.11.2' is up to date...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Warning: Connection reset. Retrying...
    default: Warning: Remote connection disconnect. Retrying...
    default: Warning: Connection reset. Retrying...
    default: Warning: Remote connection disconnect. Retrying...
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period.

If you look above, you should be able to see the error(s) that
Vagrant had when attempting to connect to the machine. These errors
are usually good hints as to what may be wrong.

If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.

If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.
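Two things stand out in the output above: the box being checked is 'freebsd/FreeBSD-11.0-STABLE', not generic/freebsd12, which suggests a stale Vagrantfile from an earlier init; and if the VM is genuinely booting but slowly, raising the SSH boot timeout may avoid the failure. A sketch of both fixes in one Vagrantfile:

```ruby
# Sketch: make sure the intended box is actually configured, and give a
# slow-booting guest more time before Vagrant gives up on SSH.
Vagrant.configure("2") do |config|
  config.vm.box = "generic/freebsd12"
  config.vm.boot_timeout = 600   # seconds; the default is 300
end
```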

Offer Version of Boxes without Hardcoded DNS?

Hi,

As mentioned in issue #11, having a hard-coded external DNS breaks our internal DNS resolver. In the meantime, I've taken a bit of time to modify your robox.sh script for our use, without the hard-coded DNS.

Would this be something you'd be open to merge via PR? Now that the robox namespace seems to be preferred, maybe we could utilise the 'generic' namespace for the boxes without dns changes, and the 'robox' namespace for boxes with hard-coded dns?

Thoughts? Perhaps we could even add this as a robox.sh build-time option?

Even more providers!

Thank you for maintaining so many operating system base boxes, for so many providers already. Honestly, this is a massive feature matrix to have completed, kudos!

What if we took the boxes and offered them on even more providers, like Amazon/Google Cloud/Azure/Oracle images, as well as Docker and Triton images (where possible)? This could grow the userbase and possibly funding sources a bit further, plugging gaps where no compatible images currently exist. For Docker and Triton builders, we would get a bonus effect of many GNU/Linux guests available as lightweight containers, as opposed to more heavyweight virtual machines.

http://packer.io/docs/builders/index.html
