oscar-stack / oscar
Easy mode installation of Puppet Enterprise on Vagrant
License: Other
When I have the following setup:
vms.yaml:
vms:
  - name: master
    box: centos-64-x64-vbox4210-nocm
    roles:
      - pe-puppet-master
  - name: first
    box: centos-64-x64-vbox4210-nocm
    roles:
      - pe-puppet-agent
pe_build.yaml:
---
pe_build:
  version: 3.3.0
roles.yaml:
---
roles:
  pe-puppet-master:
    private_networks:
      - {ip: '0.0.0.0', auto_network: true}
    provider:
      type: virtualbox
      customize:
        - [modifyvm, !ruby/sym id, '--memory', 4024]
    provisioners:
      - {type: hosts}
      - {type: pe_bootstrap, role: !ruby/sym master}
  pe-puppet-agent:
    private_networks:
      - {ip: '0.0.0.0', auto_network: true}
    provider:
      type: virtualbox
    provisioners:
      - {type: hosts}
      - {type: pe_bootstrap}
Using Vagrant 1.7.2 with VirtualBox 4.3.24, I get the following error:
Stderr from the command:
!! ERROR: Puppet Master at 'master:8140' could not be reached.
Aborting installation as directed by answer file. Set
'q_fail_on_unsuccessful_master_lookup' to 'n' if installation
should continue despite communication failures.
I haven't been able to use oscar successfully from Windows. Am I missing something in my config?
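One way to narrow down an error like the one above is to test whether the agent VM can reach the master on port 8140 at all before blaming the PE installer. This is a diagnostic sketch, not part of oscar; check_master is a hypothetical helper built on bash's /dev/tcp pseudo-device:

```shell
# Hypothetical helper: probe host:port from inside the agent VM.
# Uses bash's built-in /dev/tcp redirection with a 5-second timeout.
check_master() {
  local host="$1" port="$2"
  if timeout 5 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}

# Probe the master named in the error message on the PE master port.
check_master master 8140
```

If this prints "unreachable", the problem is name resolution or firewalling between the VMs rather than the installer itself.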
It seems like the Node Classifier takes forever and then times out after starting pe-puppetdb. What check is performed to determine whether the Node Classifier has started?
==> master: Notice: /Stage[main]/Puppet_enterprise::Puppetdb::Service/Service[pe-puppetdb]/ensure: ensure changed 'stopped' to 'running'
==> master: Notice: Finished catalog run in 3.06 seconds
==> master:
==> master: Loaded plugins: fastestmirror, security
==> master: Cleaning repos: puppet-enterprise-installer
==> master: Cleaning up Everything
==> master: Cleaning up list of fastest mirrors
==> master: PuppetDB configured.
==> master: Waiting for Node Classifier to start...
==> master: !!! WARNING: The node classifier could not be reached; please check the logs in '/var/log/pe-console-services/' for more information.
The contents of /var/log/pe-console-services/* are below:
[root@master pe-console-services]# tail -20 pe-console-services-daemon.log
07:46:48,324 |-INFO in LogbackRequestLog - Will use configuration file [/etc/puppetlabs/console-services/request-logging.xml]
07:46:48,336 |-INFO in ch.qos.logback.access.joran.action.ConfigurationAction - debug attribute not set
07:46:48,337 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.FileAppender]
07:46:48,337 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [FILE]
07:46:48,337 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.access.PatternLayoutEncoder] for [encoder] property
07:46:48,355 |-INFO in ch.qos.logback.core.FileAppender[FILE] - File property is set to [/var/log/pe-console-services/console-services-access.log]
07:46:48,356 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [FILE] to null
07:46:48,356 |-INFO in ch.qos.logback.access.joran.action.ConfigurationAction - End of configuration.
07:46:48,356 |-INFO in ch.qos.logback.access.joran.JoranConfigurator@2b00d44c - Registering current configuration as safe fallback point
07:46:48,780 |-INFO in LogbackRequestLog - Will use configuration file [/etc/puppetlabs/console-services/request-logging.xml]
07:46:48,781 |-INFO in ch.qos.logback.access.joran.action.ConfigurationAction - debug attribute not set
07:46:48,781 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.FileAppender]
07:46:48,781 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [FILE]
07:46:48,785 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.access.PatternLayoutEncoder] for [encoder] property
07:46:48,786 |-INFO in ch.qos.logback.core.FileAppender[FILE] - File property is set to [/var/log/pe-console-services/console-services-access.log]
07:46:48,786 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [FILE] to null
07:46:48,786 |-INFO in ch.qos.logback.access.joran.action.ConfigurationAction - End of configuration.
07:46:48,786 |-INFO in ch.qos.logback.access.joran.JoranConfigurator@4b160f29 - Registering current configuration as safe fallback point
[root@master pe-console-services]# tail -20 console-services.log
2014-12-31 07:46:49,125 INFO [o.e.j.s.ServerConnector] Started ServerConnector@71f61516{HTTP/1.1}{127.0.0.1:4430}
2014-12-31 07:46:49,149 INFO [o.e.j.s.ServerConnector] Started ServerConnector@18d2543e{SSL-HTTP/1.1}{0.0.0.0:4431}
2014-12-31 07:46:49,309 INFO [m.database] creating migration table 'schema_migrations'
2014-12-31 07:46:49,331 INFO [m.core] Starting migrations
2014-12-31 07:46:49,687 INFO [m.core] Running up for [20140903132700 20140903153000 20141024111137]
2014-12-31 07:46:49,687 INFO [m.core] Up 20140903132700-initial-scheme
2014-12-31 07:46:49,778 INFO [m.core] Up 20140903153000-backup-activity-events
2014-12-31 07:46:49,788 INFO [m.core] Up 20141024111137-drop-commit-uniqueness
2014-12-31 07:46:49,795 INFO [m.core] Ending migrations
2014-12-31 07:46:49,836 INFO [o.e.j.s.h.ContextHandler] Started o.e.j.s.ServletContextHandler@4e57af8b{/activity-api,null,AVAILABLE}
2014-12-31 07:46:49,870 INFO [p.c.class-updater] Requesting environment list from "https://master:8140/v2.0/environments"
2014-12-31 07:46:52,255 INFO [p.c.class-updater] 200 response received for request for environments from "https://master:8140/v2.0/environments"
2014-12-31 07:46:52,264 INFO [p.c.class-updater] Requesting classes in production from "https://master:8140/production/resource_types/*"
2014-12-31 07:47:00,244 INFO [p.c.class-updater] 200 response received for request for classes in production from "https://master:8140/production/resource_types/*"
2014-12-31 07:47:03,225 INFO [p.c.class-updater] Synchronized 105 classes from the Puppet Master in 13 seconds
2014-12-31 08:01:50,012 INFO [p.c.class-updater] Requesting environment list from "https://master:8140/v2.0/environments"
2014-12-31 08:01:53,699 INFO [p.c.class-updater] 200 response received for request for environments from "https://master:8140/v2.0/environments"
2014-12-31 08:01:53,733 INFO [p.c.class-updater] Requesting classes in production from "https://master:8140/production/resource_types/*"
2014-12-31 08:02:01,889 INFO [p.c.class-updater] 200 response received for request for classes in production from "https://master:8140/production/resource_types/*"
2014-12-31 08:02:03,937 INFO [p.c.class-updater] Synchronized 105 classes from the Puppet Master in 14 seconds
[root@master pe-console-services]# tail -20 console-services-access.log
master - - - 31/Dec/2014:08:03:34 -0800 "GET /rbac-api/v1/users HTTP/1.1" 200 421
master - - - 31/Dec/2014:08:03:35 -0800 "POST /rbac-api/v1/users/42bf351c-f9ec-40af-84ad-e976fec7f4bd/password/reset HTTP/1.1" 201 851
master - - - 31/Dec/2014:08:03:37 -0800 "POST /rbac-api/v1/auth/reset HTTP/1.1" 200 2
master - - - 31/Dec/2014:08:03:54 -0800 "POST /classifier-api/v1/classified/nodes/master HTTP/1.1" 200 123
master - - - 31/Dec/2014:08:04:01 -0800 "POST /classifier-api/v1/classified/nodes/master HTTP/1.1" 200 123
master - - - 31/Dec/2014:08:04:32 -0800 "POST /classifier-api/v1/classified/nodes/master HTTP/1.1" 200 123
master - - - 31/Dec/2014:08:04:49 -0800 "POST /classifier-api/v1/classified/nodes/master HTTP/1.1" 200 123
master - - - 31/Dec/2014:08:05:06 -0800 "POST /classifier-api/v1/classified/nodes/master HTTP/1.1" 200 123
master - - - 31/Dec/2014:08:05:13 -0800 "POST /classifier-api/v1/classified/nodes/master HTTP/1.1" 200 123
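I can't say offhand what check the installer performs, but the console-services.log above shows listeners on 127.0.0.1:4430 (plain HTTP) and 0.0.0.0:4431 (SSL), with the classifier served under /classifier-api. A crude probe along the same lines (a sketch only, not PE's actual check; classifier_probe is a hypothetical helper and the endpoint path is taken from the access log above) would be:

```shell
# Hypothetical helper: print the HTTP status code for a URL, ignoring the
# self-signed certificate. "000" means no connection could be made at all.
classifier_probe() {
  curl -sk -o /dev/null -w '%{http_code}' "$1" || true
}

# Run on the master; port 4431 is the SSL listener seen in the log above.
classifier_probe https://127.0.0.1:4431/classifier-api/v1/groups
```

Anything other than "000" means the service is at least accepting connections, which helps separate "classifier is down" from "classifier is slow to answer".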
The fqdn is not being associated with the IP in /etc/hosts during provisioning. This results in errors when installing the master:
2017-04-07 17:39:08,563 - [Error]: Failed to apply catalog: Could not connect to the Node Manager service at https://master.example.com:4433/classifier-api: #<SocketError: getaddrinfo: Name or service not known>
Using auto_network: true in roles.yaml
Using oscar 0.5.3. Tried downgrading to 0.5.2, same result.
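Until the provisioner handles this, one manual workaround is to pin the FQDN yourself inside the VM. This is a sketch: add_host_entry is a hypothetical helper, and the IP and FQDN below are placeholders for the master's auto_network-assigned address.

```shell
# Hypothetical helper: idempotently map an IP to an FQDN in a hosts file.
# On the VM, run as root and omit the third argument (defaults to /etc/hosts).
add_host_entry() {
  local ip="$1" fqdn="$2" file="${3:-/etc/hosts}"
  # Append only if the FQDN is not already present.
  grep -qw "$fqdn" "$file" || echo "$ip $fqdn" >> "$file"
}

# Demonstration against a scratch copy; a second call is a no-op.
tmp=$(mktemp)
add_host_entry 10.20.1.2 master.example.com "$tmp"
add_host_entry 10.20.1.2 master.example.com "$tmp"
cat "$tmp"
```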
The docs just say it can be configured, but not where the config files live. How do I actually edit them?
It'd be awesome if the 'init' commands and such listed the paths they were editing.
Installers available for use:
download_root
config option to a valid vagrant pe-build copy
Is there an easy way to integrate vagrant-r10k with oscar? I tried using the following in config/roles.yaml:
---
roles:
  pe-puppet-master:
    r10k:
      puppet_dir: "puppet"
      puppetfile_path: "puppet/Puppetfile"
      module_path: "puppet/vendor"
This returns the following error on a vagrant up/provision:
master: vagrant-r10k: puppet_dir and/or puppetfile_path not set in config; not running
From vagrant debug:
INFO warden: Calling IN action: #<VagrantPlugins::R10k::Modulegetter:0x0000000390bc28>
INFO interface: detail: vagrant-r10k: puppet_dir and/or puppetfile_path not set in config; not running
INFO interface: detail: master: vagrant-r10k: puppet_dir and/or puppetfile_path not set in config; not running
master: vagrant-r10k: puppet_dir and/or puppetfile_path not set in config; not running
INFO warden: Calling IN action: #<Vagrant::Action::Builtin::ConfigValidate:0x0000000390bc00>
It may be that the vagrant-config_builder is simply not pulling in the defined vagrant-r10k configs.
The goal is to use oscar with r10k to build out a local dev environment and enable local development and testing of locally stored puppet modules.
I'm not sure which oscar plugin this applies to, but in any case, CentOS 7 now has interface names like these for ethernet cards:
[root@pe-371-master ~]# ifconfig
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.2.15 netmask 255.255.255.0 broadcast 10.0.2.255
inet6 fe80::a00:27ff:fefd:47eb prefixlen 64 scopeid 0x20<link>
ether 08:00:27:fd:47:eb txqueuelen 1000 (Ethernet)
RX packets 843 bytes 90927 (88.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 512 bytes 77331 (75.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.20.1.4 netmask 255.255.255.0 broadcast 10.20.1.255
inet6 fe80::a00:27ff:fe0e:36cc prefixlen 64 scopeid 0x20<link>
ether 08:00:27:0e:36:cc txqueuelen 1000 (Ethernet)
RX packets 9 bytes 2680 (2.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 46 bytes 7390 (7.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
In its current state, vagrant oscar init-vms doesn't have much utility, because you can't use it to generate VMs on the fly. Each time you do, you need to restore your download_root: setting in pe_build.yaml.
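One way to avoid re-entering the setting by hand is to keep a pe_build.yaml that pins it, so you can restore the file in one step after regenerating configs. This is a sketch: the version number and file:// path below are placeholders, not values from the original report.

```shell
# Sketch: write config/pe_build.yaml with the PE version and a local
# download_root. The path is a placeholder for wherever PE installers
# are cached; vagrant-pe_build reads pe_build.download_root.
mkdir -p config
cat > config/pe_build.yaml <<'EOF'
---
pe_build:
  version: 3.3.0
  download_root: 'file:///path/to/pe/installers'
EOF
cat config/pe_build.yaml
```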
This happens even when I haven't previously downloaded the box:
[master] VM already created. Booting if it's not already running...
[master] Clearing any previously set forwarded ports...
[master] Forwarding ports...
[master] -- 22 => 2222 (adapter 1)
[master] -- 443 => 20443 (adapter 1)
[master] Creating shared folders metadata...
[master] Clearing any previously set network interfaces...
There was an error executing the following command with VBoxManage: ["hostonlyif", "create"]
For more information on the failure, enable detailed logging with VAGRANT_LOG.
Here's the full log output:
VAGRANT_LOG=INFO vagrant up master
INFO vagrant: `vagrant` invoked: ["up", "master"]
INFO environment: Environment initialized (#)
INFO environment: - cwd: /Users/celia/instapants
INFO environment: Home path: /Users/celia/.vagrant.d
INFO environment: Loading configuration...
INFO provisioner: Provisioner class: Vagrant::Provisioners::Shell
INFO provisioner: Provisioner class: Vagrant::Provisioners::Shell
INFO provisioner: Provisioner class: Vagrant::Provisioners::Shell
INFO provisioner: Provisioner class: Vagrant::Provisioners::Shell
INFO provisioner: Provisioner class: Vagrant::Provisioners::Shell
INFO provisioner: Provisioner class: Vagrant::Provisioners::Shell
INFO provisioner: Provisioner class: Vagrant::Provisioners::Shell
INFO provisioner: Provisioner class: Vagrant::Provisioners::Shell
INFO provisioner: Provisioner class: Vagrant::Provisioners::Shell
INFO provisioner: Provisioner class: Vagrant::Provisioners::Shell
INFO provisioner: Provisioner class: Vagrant::Provisioners::Shell
INFO provisioner: Provisioner class: Vagrant::Provisioners::Shell
INFO provisioner: Provisioner class: Vagrant::Provisioners::Shell
INFO provisioner: Provisioner class: Vagrant::Provisioners::Shell
INFO provisioner: Provisioner class: Vagrant::Provisioners::Shell
INFO provisioner: Provisioner class: Vagrant::Provisioners::Shell
INFO cli: CLI: [] "up" ["master"]
INFO datastore: Created: /Users/celia/instapants/.vagrant
INFO virtualbox_base: VBoxManage path: VBoxManage
INFO subprocess: Starting process: ["VBoxManage", "--version"]
INFO virtualbox: Using VirtualBox driver: Vagrant::Driver::VirtualBox_4_0
INFO virtualbox_base: VBoxManage path: VBoxManage
INFO subprocess: Starting process: ["VBoxManage", "showvminfo", "ca95d645-d8ec-4c8f-ba26-4c073f4c55ee"]
INFO vm: Loading guest: linux
INFO virtualbox_base: VBoxManage path: VBoxManage
INFO subprocess: Starting process: ["VBoxManage", "--version"]
INFO virtualbox: Using VirtualBox driver: Vagrant::Driver::VirtualBox_4_0
INFO virtualbox_base: VBoxManage path: VBoxManage
INFO vm: Loading guest: linux
INFO subprocess: Starting process: ["VBoxManage", "showvminfo", "ca95d645-d8ec-4c8f-ba26-4c073f4c55ee", "--machinereadable"]
INFO up: Booting: master
INFO interface: info: VM already created. Booting if it's not already running...
[master] VM already created. Booting if it's not already running...
INFO subprocess: Starting process: ["VBoxManage", "showvminfo", "ca95d645-d8ec-4c8f-ba26-4c073f4c55ee", "--machinereadable"]
INFO subprocess: Starting process: ["VBoxManage", "showvminfo", "ca95d645-d8ec-4c8f-ba26-4c073f4c55ee", "--machinereadable"]
INFO hosts: Host class: Vagrant::Hosts::BSD
INFO runner: Running action: start
INFO warden: Calling action: #
INFO warden: Calling action: #
INFO subprocess: Starting process: ["VBoxManage", "showvminfo", "ca95d645-d8ec-4c8f-ba26-4c073f4c55ee", "--machinereadable"]
INFO warden: Calling action: #
INFO subprocess: Starting process: ["VBoxManage", "list", "systemproperties"]
INFO warden: Calling action: #
INFO interface: info: Clearing any previously set forwarded ports...
[master] Clearing any previously set forwarded ports...
INFO subprocess: Starting process: ["VBoxManage", "showvminfo", "ca95d645-d8ec-4c8f-ba26-4c073f4c55ee", "--machinereadable"]
INFO subprocess: Starting process: ["VBoxManage", "modifyvm", "ca95d645-d8ec-4c8f-ba26-4c073f4c55ee", "--natpf1", "delete", "dr-jur", "--natpf1", "delete", "ssh"]
INFO warden: Calling action: #
INFO warden: Calling action: #
INFO subprocess: Starting process: ["VBoxManage", "showvminfo", "ca95d645-d8ec-4c8f-ba26-4c073f4c55ee", "--machinereadable"]
INFO subprocess: Starting process: ["VBoxManage", "list", "vms"]
INFO subprocess: Starting process: ["VBoxManage", "showvminfo", "e933f840-6896-4e26-b713-c200403b2684", "--machinereadable"]
INFO subprocess: Starting process: ["VBoxManage", "showvminfo", "def3f785-b1d9-49ac-931d-65b86c9e616c", "--machinereadable"]
INFO subprocess: Starting process: ["VBoxManage", "showvminfo", "d0535fe7-ca42-4dcf-8fdf-8b7c39b65447", "--machinereadable"]
INFO subprocess: Starting process: ["VBoxManage", "showvminfo", "edf79f31-7f57-4a07-917e-87cb77c6ea5a", "--machinereadable"]
INFO subprocess: Starting process: ["VBoxManage", "showvminfo", "8519a929-963d-4eae-8556-901ed27ea9d1", "--machinereadable"]
INFO warden: Calling action: #
INFO interface: info: Forwarding ports...
[master] Forwarding ports...
INFO subprocess: Starting process: ["VBoxManage", "showvminfo", "ca95d645-d8ec-4c8f-ba26-4c073f4c55ee", "--machinereadable"]
INFO interface: info: -- 22 => 2222 (adapter 1)
[master] -- 22 => 2222 (adapter 1)
INFO interface: info: -- 443 => 20443 (adapter 1)
[master] -- 443 => 20443 (adapter 1)
INFO subprocess: Starting process: ["VBoxManage", "modifyvm", "ca95d645-d8ec-4c8f-ba26-4c073f4c55ee", "--natpf1", "ssh,tcp,,2222,,22", "--natpf1", "dr-jur,tcp,,20443,,443"]
INFO warden: Calling action: #
INFO warden: Calling action: #
INFO subprocess: Starting process: ["VBoxManage", "list", "vms"]
INFO warden: Calling action: #
INFO warden: Calling action: #
INFO subprocess: Starting process: ["VBoxManage", "showvminfo", "ca95d645-d8ec-4c8f-ba26-4c073f4c55ee", "--machinereadable"]
INFO subprocess: Starting process: ["VBoxManage", "sharedfolder", "remove", "ca95d645-d8ec-4c8f-ba26-4c073f4c55ee", "--name", "v-root"]
INFO subprocess: Starting process: ["VBoxManage", "sharedfolder", "remove", "ca95d645-d8ec-4c8f-ba26-4c073f4c55ee", "--name", "manifests"]
INFO subprocess: Starting process: ["VBoxManage", "sharedfolder", "remove", "ca95d645-d8ec-4c8f-ba26-4c073f4c55ee", "--name", "modules"]
INFO warden: Calling action: #
INFO interface: info: Creating shared folders metadata...
[master] Creating shared folders metadata...
INFO subprocess: Starting process: ["VBoxManage", "sharedfolder", "add", "ca95d645-d8ec-4c8f-ba26-4c073f4c55ee", "--name", "v-root", "--hostpath", "/Users/celia/instapants"]
INFO subprocess: Starting process: ["VBoxManage", "sharedfolder", "add", "ca95d645-d8ec-4c8f-ba26-4c073f4c55ee", "--name", "manifests", "--hostpath", "/Users/celia/instapants/manifests"]
INFO subprocess: Starting process: ["VBoxManage", "sharedfolder", "add", "ca95d645-d8ec-4c8f-ba26-4c073f4c55ee", "--name", "modules", "--hostpath", "/Users/celia/instapants/modules"]
INFO warden: Calling action: #
INFO warden: Calling action: #
INFO interface: info: Clearing any previously set network interfaces...
[master] Clearing any previously set network interfaces...
INFO subprocess: Starting process: ["VBoxManage", "modifyvm", "ca95d645-d8ec-4c8f-ba26-4c073f4c55ee", "--nic2", "none", "--nic3", "none", "--nic4", "none", "--nic5", "none", "--nic6", "none", "--nic7", "none", "--nic8", "none"]
INFO warden: Calling action: #
INFO subprocess: Starting process: ["VBoxManage", "list", "bridgedifs"]
INFO subprocess: Starting process: ["VBoxManage", "list", "dhcpservers"]
INFO subprocess: Starting process: ["VBoxManage", "list", "hostonlyifs"]
INFO subprocess: Starting process: ["VBoxManage", "hostonlyif", "create"]
ERROR warden: Error occurred: There was an error executing the following command with VBoxManage: ["hostonlyif", "create"] For more information on the failure, enable detailed logging with VAGRANT_LOG.
ERROR warden: Error occurred: There was an error executing the following command with VBoxManage: ["hostonlyif", "create"] For more information on the failure, enable detailed logging with VAGRANT_LOG.
ERROR warden: Error occurred: There was an error executing the following command with VBoxManage: ["hostonlyif", "create"] For more information on the failure, enable detailed logging with VAGRANT_LOG.
ERROR warden: Error occurred: There was an error executing the following command with VBoxManage: ["hostonlyif", "create"] For more information on the failure, enable detailed logging with VAGRANT_LOG.
ERROR warden: Error occurred: There was an error executing the following command with VBoxManage: ["hostonlyif", "create"] For more information on the failure, enable detailed logging with VAGRANT_LOG.
ERROR warden: Error occurred: There was an error executing the following command with VBoxManage: ["hostonlyif", "create"] For more information on the failure, enable detailed logging with VAGRANT_LOG.
ERROR warden: Error occurred: There was an error executing the following command with VBoxManage: ["hostonlyif", "create"] For more information on the failure, enable detailed logging with VAGRANT_LOG.
ERROR warden: Error occurred: There was an error executing the following command with VBoxManage: ["hostonlyif", "create"] For more information on the failure, enable detailed logging with VAGRANT_LOG.
ERROR warden: Error occurred: There was an error executing the following command with VBoxManage: ["hostonlyif", "create"] For more information on the failure, enable detailed logging with VAGRANT_LOG.
ERROR warden: Error occurred: There was an error executing the following command with VBoxManage: ["hostonlyif", "create"] For more information on the failure, enable detailed logging with VAGRANT_LOG.
ERROR warden: Error occurred: There was an error executing the following command with VBoxManage: ["hostonlyif", "create"] For more information on the failure, enable detailed logging with VAGRANT_LOG.
ERROR warden: Error occurred: There was an error executing the following command with VBoxManage: ["hostonlyif", "create"] For more information on the failure, enable detailed logging with VAGRANT_LOG.
ERROR warden: Error occurred: There was an error executing the following command with VBoxManage: ["hostonlyif", "create"] For more information on the failure, enable detailed logging with VAGRANT_LOG.
ERROR warden: Error occurred: There was an error executing the following command with VBoxManage: ["hostonlyif", "create"] For more information on the failure, enable detailed logging with VAGRANT_LOG.
ERROR warden: Error occurred: There was an error executing the following command with VBoxManage: ["hostonlyif", "create"] For more information on the failure, enable detailed logging with VAGRANT_LOG.
ERROR vagrant: Vagrant experienced an error! Details:
ERROR vagrant: #
ERROR vagrant: There was an error executing the following command with VBoxManage: ["hostonlyif", "create"] For more information on the failure, enable detailed logging with VAGRANT_LOG.
ERROR vagrant: /Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/driver/virtualbox_base.rb:261:in `execute'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/driver/virtualbox_4_0.rb:44:in `create_host_only_network'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/vm/network.rb:276:in `create_hostonly_network'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/vm/network.rb:230:in `hostonly_adapter'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/vm/network.rb:36:in `send'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/vm/network.rb:36:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/vm/network.rb:31:in `each'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/vm/network.rb:31:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/warden.rb:33:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/vm/clear_network_interfaces.rb:26:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/warden.rb:33:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/vm/host_name.rb:10:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/warden.rb:33:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/vm/share_folders.rb:20:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/warden.rb:33:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/vm/clear_shared_folders.rb:13:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/warden.rb:33:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/vm/nfs.rb:40:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/warden.rb:33:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/vm/prune_nfs_exports.rb:15:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/warden.rb:33:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/vm/provision.rb:29:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/warden.rb:33:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/vm/forward_ports.rb:24:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/warden.rb:33:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/vm/check_port_collisions.rb:38:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/warden.rb:33:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/env/set.rb:16:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/warden.rb:33:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/vm/clear_forwarded_ports.rb:13:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/warden.rb:33:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/vm/clean_machine_folder.rb:17:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/warden.rb:33:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/vm/check_accessible.rb:18:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/warden.rb:33:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/general/validate.rb:14:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/warden.rb:33:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/builder.rb:92:in `call'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/runner.rb:49:in `run'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/util/busy.rb:19:in `busy'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/action/runner.rb:49:in `run'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/vm.rb:192:in `run_action'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/vm.rb:150:in `start'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/command/up.rb:43:in `execute'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/command/base.rb:100:in `with_target_vms'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/command/base.rb:95:in `each'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/command/base.rb:95:in `with_target_vms'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/command/up.rb:39:in `execute'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/cli.rb:38:in `execute'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/lib/vagrant/environment.rb:156:in `cli'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/gems/vagrant-0.9.7/bin/vagrant:43
/Users/celia/.rvm/gems/ruby-1.8.7-p357/bin/vagrant:19:in `load'
/Users/celia/.rvm/gems/ruby-1.8.7-p357/bin/vagrant:19
INFO interface: error: There was an error executing the following command with VBoxManage: ["hostonlyif", "create"] For more information on the failure, enable detailed logging with VAGRANT_LOG.
There was an error executing the following command with VBoxManage: ["hostonlyif", "create"] For more information on the failure, enable detailed logging with VAGRANT_LOG.
Running with Vagrant 1.9.7 on macOS...
Installing the 'oscar' plugin. This can take a few minutes...
Bundler, the underlying system Vagrant uses to install plugins,
reported an error. The error is shown below. These errors are usually
caused by misconfigured plugin installations or transient network
issues. The error from Bundler is:
conflicting dependencies rb-fsevent (= 0.9.8) and rb-fsevent (= 0.10.2)
Activated rb-fsevent-0.10.2
which does not match conflicting dependency (= 0.9.8)
Conflicting dependency chains:
rb-fsevent (= 0.10.2), 0.10.2 activated
versus:
rb-fsevent (= 0.9.8)
Gems matching rb-fsevent (= 0.9.8):
rb-fsevent-0.9.8
I've been trying to figure out how to add vagrant-cachier support to oscar but if a core oscar developer wanted to race me to the finish line I would be happy to lose that race.
RubyGems.org doesn't report a license for your gem. This is because it is not specified in the gemspec of your last release.
via e.g.
spec.license = 'MIT'
# or
spec.licenses = ['MIT', 'GPL-2']
Including a license in your gemspec is an easy way for rubygems.org and other tools to check how your gem is licensed. As you can imagine, scanning your repository for a LICENSE file or parsing the README, and then attempting to identify the license or licenses is much more difficult and more error prone. So, even for projects that already specify a license, including a license in your gemspec is a good practice. See, for example, how rubygems.org uses the gemspec to display the rails gem license.
There is even a License Finder gem to help companies/individuals ensure all gems they use meet their licensing needs. This tool depends on license information being available in the gemspec. This is an important enough issue that even Bundler now generates gems with a default 'MIT' license.
I hope you'll consider specifying a license in your gemspec. If not, please just close the issue with a nice message. In either case, I'll follow up. Thanks for your time!
Appendix:
If you need help choosing a license (sorry, I haven't checked your readme or looked for a license file), GitHub has created a license picker tool. Code without a license specified defaults to 'All rights reserved', denying others all rights to use of the code.
Here's a list of the license names I've found and their frequencies
p.s. In case you're wondering how I found you and why I made this issue, it's because I'm collecting stats on gems (I was originally looking for download data) and decided to collect license metadata, too, and make issues for gemspecs not specifying a license as a public service :). See the previous link or my blog post about this project for more information.
I managed to get it working with this modification to config/pe_build.yaml:
pe_build:
  version: 3.0.0
  download_root:
Currently, when configuring a Windows puppet agent, we need to manually modify the %WINDIR%\System32\drivers\etc\hosts file and add an entry for the master. This allows the agent to work and receive manifests, etc.
This manual step shouldn't exist.
Ideally vagrant-hosts should support windows and copy the functionality of https://github.com/smdahlen/vagrant-hostmanager
Another way would be to configure the generated roles.yaml or vms.yaml file to use vagrant-hostmanager for Windows boxes. I'm unsure of the syntax needed to get that to work, though.
I tried setting a specific hostname, but it looks as if vagrant-hosts uses the name only.
roles.yaml is the generated default.
vms.yaml
vms:
  - name: master
    hostname: puppet.puppetlabs.com
    box: debian-70rc1-x64-vbox4210-nocm
    roles:
      - pe-puppet-master
  - name: first
    hostname: web.puppetlabs.com
    box: debian-70rc1-x64-vbox4210-nocm
    roles:
      - pe-puppet-agent
Error from the agent:
[first] !! ERROR: Puppet Master at 'master:8140' could not be reached.
Aborting installation as directed by answer file. Set
'q_fail_on_unsuccessful_master_lookup' to 'n' if installation
should continue despite communication failures.
Hi!
This is my first use of your vagrant plugin, so I may be doing something wrong.
Symptom:
The pe-puppet-agent can't contact the pe-puppet-master on port 8140; this is probably blocked by the firewall on the master.
Workaround:
iptables -I INPUT 4 -p tcp --syn --dport 8140 -j ACCEPT
iptables -I INPUT 4 -p tcp --syn --dport 61613 -j ACCEPT
Run these on the master, then destroy the agent and create it again.
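The two rules follow one pattern (8140 carries agent-to-master traffic; 61613 is the ActiveMQ Stomp port PE uses for MCollective). A small generator makes the workaround easy to extend; pe_fw_rules is a hypothetical helper, not part of oscar:

```shell
# Hypothetical helper: print the iptables commands for a list of ports.
# Pipe the output to "sudo sh" on the master to actually apply them.
pe_fw_rules() {
  local port
  for port in "$@"; do
    printf 'iptables -I INPUT 4 -p tcp --syn --dport %s -j ACCEPT\n' "$port"
  done
}

pe_fw_rules 8140 61613
```

Note the rules are not persistent; on CentOS 6 something like "service iptables save" would be needed to keep them across reboots.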
Setup:
mimac:config mikan$ vagrant plugin list
vagrant-auto_network (0.2.2)
vagrant-config_builder (0.7.1)
vagrant-hosts (2.1.3)
vagrant-pe_build (0.8.6)
oscar (0.3.1)
vagrant-cachier (0.7.0)
vagrant-login (1.0.1, system)
vagrant-share (1.0.1, system)
vagrant-triggers (0.3.0)
mimac:config mikan$ cat vms.yaml
---
vms:
- name: master
box: centos-64-x64-vbox4210-nocm
roles:
- pe-puppet-master
- name: dev
box: centos-64-x64-vbox4210-nocm
roles:
- pe-puppet-agent
- name: test
box: centos-64-x64-vbox4210-nocm
roles:
- pe-puppet-agent
- name: prod
box: centos-64-x64-vbox4210-nocm
roles:
- pe-puppet-agent
mimac:config mikan$ cat roles.yaml
---
roles:
pe-puppet-master:
private_networks:
- {ip: '0.0.0.0', auto_network: true}
provider:
type: virtualbox
customize:
- [modifyvm, !ruby/sym id, '--memory', 1024]
provisioners:
- {type: hosts}
- {type: pe_bootstrap, role: !ruby/sym master}
pe-puppet-agent:
private_networks:
- {ip: '0.0.0.0', auto_network: true}
provisioners:
- {type: hosts}
- {type: pe_bootstrap}
mimac:config mikan$ cat pe_build.yaml
---
pe_build:
version: 3.2.3
Sincerely
Mikael
There are a couple of issues that prevent Oscar from working under Vagrant 1.6:
See hashicorp/vagrant#3660 for the root cause. Fixing these issues will require new releases of the affected plugins.
Dear Adrien,
Thanks for developing Oscar for us; I really would like to use it to experiment a bit. But when I try to install it and run "vagrant up", whatever I try, I keep getting this message:
!! ERROR: Puppet Master at 'puppet:8140' could not be reached.
Aborting installation as directed by answer file. Set
'q_fail_on_unsuccessful_master_lookup' to 'n' if installation
should continue despite communication failures.
I also tried changing the roles.yaml file to add the master to the (first and only) agent, but that failed as well.
Thanks in advance,
Filip Moons
I am having difficulty finding the Puppet Enterprise console URL with Vagrant. I am using the default configuration and have stopped the firewall and iptables. Can anyone tell me the URL of the Puppet console?
Thanks
Attempted to use oscar to provision latest PE, 3.7.0.
Provisioning failed.
==> master: Configuring proxy environment variables...
==> master: Configuring proxy for Yum...
==> master: Running provisioner: hosts...
==> master: Running provisioner: hosts...
==> master: Running provisioner: pe_bootstrap...
==> master: Puppet Enterprise is already installed, skipping installation.
==> master: Applying post-install configuration to Puppet Enterprise.
==> master: Notice: Compiled catalog for master in environment production in 0.19 seconds
==> master:
==> master: Error: Could not start Service[pe-httpd]: Execution of '/sbin/service pe-httpd start' returned 1: Starting pe-httpd: pe-httpd.worker: Could not reliably determine the server's fully qualified domain name, using 172.16.0.3 for ServerName
==> master: no listening sockets available, shutting down
==> master: Unable to open logs
==> master: [FAILED]
==> master: Wrapped exception:
==> master: Execution of '/sbin/service pe-httpd start' returned 1: Starting pe-httpd: pe-httpd.worker: Could not reliably determine the server's fully qualified domain name, using 172.16.0.3 for ServerName
==> master: no listening sockets available, shutting down
==> master: Unable to open logs
==> master: [FAILED]
==> master: Error: /Stage[main]/Main/Service[pe-httpd]/ensure: change from stopped to running failed: Could not start Service[pe-httpd]: Execution of '/sbin/service pe-httpd start' returned 1: Starting pe-httpd: pe-httpd.worker: Could not reliably determine the server's fully qualified domain name, using 172.16.0.3 for ServerName
==> master: no listening sockets available, shutting down
==> master: Unable to open logs
==> master: [FAILED]
==> master: Notice: Finished catalog run in 2.21 seconds
==> master:
pe-httpd fails with the following error:
Starting pe-httpd: pe-httpd.worker: Could not reliably determine the server's fully qualified domain name, using 172.16.0.3 for ServerName
no listening sockets available, shutting down
It seems that Oscar should be starting [and looking for] pe-puppetserver, but I'm not certain.
I'm trying to model my prod environment, which has a split install (master/db/console), and I don't see support for that setup in Oscar. Is that something you'd consider implementing?
I would like to use fully qualified domain names as the VM names in my Puppet environment so that I can use Puppet to update /etc/hosts on my Mac (using vagrant hosts puppetize | sudo puppet apply) and keep environments from stomping on each other. It's also a better simulation of real-world setups.
I've tried doing this:
vagrant oscar init
vagrant oscar init-vms \
--master master.workflow.example=puppetlabs/centos-6.6-64-nocm \
--agent git.workflow.example=puppetlabs/centos-6.6-64-nocm \
--agent agent1.workflow.example=puppetlabs/centos-6.6-64-nocm \
--pe-version 2015.2.0
# hack in iptables disabling shell provisioners to config/roles.yaml
# increase memory allocation of master from 1GB to 3GB
vagrant up
This goes along OK until an agent tries to do a puppet run, at which point you get a certificate mismatch error:
[root@agent1 ~]# puppet agent -t
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: Server hostname 'master' did not match server certificate; expected one of master.workflow.example, DNS:master.workflow.example, DNS:puppet
Info: Retrieving pluginfacts
Error: /File[/opt/puppetlabs/puppet/cache/facts.d]: Failed to generate additional resources using 'eval_generate': Server hostname 'master' did not match server certificate; expected one of master.workflow.example, DNS:master.workflow.example, DNS:puppet
Error: /File[/opt/puppetlabs/puppet/cache/facts.d]: Could not evaluate: Could not retrieve file metadata for puppet:///pluginfacts: Server hostname 'master' did not match server certificate; expected one of master.workflow.example, DNS:master.workflow.example, DNS:puppet
Info: Retrieving plugin
Error: /File[/opt/puppetlabs/puppet/cache/lib]: Failed to generate additional resources using 'eval_generate': Server hostname 'master' did not match server certificate; expected one of master.workflow.example, DNS:master.workflow.example, DNS:puppet
Error: /File[/opt/puppetlabs/puppet/cache/lib]: Could not evaluate: Could not retrieve file metadata for puppet:///plugins: Server hostname 'master' did not match server certificate; expected one of master.workflow.example, DNS:master.workflow.example, DNS:puppet
Error: Could not retrieve catalog from remote server: Server hostname 'master' did not match server certificate; expected one of master.workflow.example, DNS:master.workflow.example, DNS:puppet
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Error: Could not send report: Server hostname 'master' did not match server certificate; expected one of master.workflow.example, DNS:master.workflow.example, DNS:puppet
I can work around this by updating the server address in /etc/puppetlabs/puppet/puppet.conf from master to master.workflow.example.
Am I doing this all wrong? How else can you set up vms with qualified domain names as hostnames?
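The puppet.conf workaround above can be scripted. A minimal sketch, shown against a stand-in copy of the file in the current directory (on a real agent the file lives at /etc/puppetlabs/puppet/puppet.conf, and the first line below would be skipped):

```shell
# Stand-in for the agent's puppet.conf; on a real box this file already exists.
printf '[main]\nserver = master\n' > puppet.conf
# Point the agent at the master's FQDN so the hostname it dials matches
# a DNS name in the master's certificate.
sed -i 's/^\(server[[:space:]]*=[[:space:]]*\).*/\1master.workflow.example/' puppet.conf
```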
Two issues: if you remove a VM's entry from vms.yaml while the machine still exists, it no longer shows up, and then you don't know how to uninstall it. E.g.:
Vagrant failed to initialize at a very early stage:
The machine with the name 'tester' was not found configured for
this Vagrant environment.
Your only recourse is to recreate an entry for that VM just to delete it, and then remove the entry.
I presume this is more a Vagrant limitation than an Oscar one, but it would be nice if there were some way to work around it.
I was able to reproduce this on two separate OS X hosts (10.10.2)
After running 'vagrant plugin install oscar' successfully, I run 'vagrant oscar init' and get
fileutils.rb:1375:in `copy': unknown file type: ~/.vagrant.d/gems/gems/oscar-0.4.1/templates/oscar-init-skeleton/vmware_fusion/. (RuntimeError)
Upon inspecting ~/.vagrant.d/gems/gems/oscar-0.4.1/templates/oscar-init-skeleton/ the vmware_fusion directory does not exist.
I'm able to get around this issue by pulling the vmware_fusion templates from github and manually placing them in the correct location.
INFO global: Vagrant version: 1.7.2
INFO global: Ruby version: 2.0.0
INFO global: RubyGems version: 2.0.14
INFO global: VAGRANT_EXECUTABLE="/opt/vagrant/bin/../embedded/gems/gems/vagrant-1.7.2/bin/vagrant"
INFO global: VAGRANT_LOG="debug"
INFO global: VAGRANT_INSTALLER_EMBEDDED_DIR="/opt/vagrant/bin/../embedded"
INFO global: VAGRANT_INSTALLER_VERSION="2"
INFO global: VAGRANT_DETECTED_OS="Darwin"
INFO global: VAGRANT_INSTALLER_ENV="1"
INFO global: VAGRANT_INTERNAL_BUNDLERIZED="1"
INFO global: VAGRANT_NO_PLUGINS="1"
INFO global: VAGRANT_VAGRANTFILE="plugin_command_1423846951"
INFO global: Plugins:
INFO global: - bundler = 1.7.11
INFO global: - json = 1.8.2
INFO global: - mime-types = 1.25.1
INFO global: - rdoc = 4.2.0
INFO global: - rest-client = 1.6.8
INFO global: - vagrant-share = 1.1.3
INFO global: - vagrant-vmware-fusion = 3.2.1
As far as I could tell, there is no documentation for the syntax of synced folders. I scoured the web for hours and finally found it in the captions of @adrienthebo's SlideShare presentation.
http://www.slideshare.net/PuppetLabs/oscar-rapid-iteration-with-vagrant-and-puppet-enterprise
If I get time, I will create a pull request with this addition, along with documenting the requirement of the download_root setting in pe_build.
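For anyone landing here before that documentation exists, the shape below is my best recollection from that presentation; the key names are assumptions and should be verified against the config_builder schema:

```yaml
roles:
  pe-puppet-master:
    synced_folders:
      # host_path/guest_path key names assumed, not verified.
      - {host_path: 'manifests/', guest_path: '/manifests'}
```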
Stderr from the command:
!! ERROR: Could not find response for above question in answer file.
(Variable needed: q_puppet_enterpriseconsole_database_install)
When starting a machine with Vagrant 1.7.4, the following message appears. It's only a warning for now, though. I tried modifying roles.yaml, but a code change is probably necessary within vagrant-config_builder.
WARN config_builder: The provider attribute, set on vm , is deprecated and will be removed in an upcoming release. Use the providers attribute instead.
Oscar only loads config_builder, auto_network, et al. once Oscar.run is invoked. Some of these plugins have initialization hooks that need to run early in the Vagrant process, so they should be loaded as soon as Oscar is loaded by Vagrant.
Hi there,
Tried another Puppet master install on Ubuntu 12.04 LTS and get the same error after "Waiting for Node Classifier to start..." (see the embedded error below). I used the public IP for the certificate name, which seems to break it. If I use the private IP it doesn't fail, but agents can only connect to the public IP, not the private one. The agent checks in to the console screen, listing its fingerprint, but never gets listed, and in the event viewer I see the message "cert doesn't match", as it was expecting a cert matching the private IP. I don't get it. Any pointers?
?? The puppet master's certificate will contain a unique name ("certname");
this should be the main DNS name at which it can be reliably reached.
Puppet master's certname? [Default:ip-172-31-X-X.eu-west-1.compute.internal] ec2-54-77-X-X.eu-west-1.compute.amazonaws.com
?? The puppet master's certificate can contain DNS aliases; agent nodes will
only trust the master if they reach it at its certname or one of these
official aliases. Puppet master's DNS aliases (comma-separated list)?
[Default:ec2-54-77-X-X,ec2-54-77-X-X.eu-west-1.compute.amazonaws.com,puppet,pupp et.eu-west-1.compute.amazonaws.com]
The error:
Waiting for Node Classifier to start...
!!! WARNING: The node classifier could not be reached; please check the logs in '/var/log/puppetlabs/console-services/' for more information.
/opt/puppetlabs/puppet/lib/ruby/2.1.0/net/http.rb:879:in `initialize': Connection refused - connect(2) for "ec2-54-77-X-X.eu-west-1.compute.amazonaws.com" port 4433 (Errno::ECONNREFUSED)
from /opt/puppetlabs/puppet/lib/ruby/2.1.0/net/http.rb:879:in `open'
from /opt/puppetlabs/puppet/lib/ruby/2.1.0/net/http.rb:879:in `block in connect'
from /opt/puppetlabs/puppet/lib/ruby/2.1.0/timeout.rb:76:in `timeout'
from /opt/puppetlabs/puppet/lib/ruby/2.1.0/net/http.rb:878:in `connect'
from /opt/puppetlabs/puppet/lib/ruby/2.1.0/net/http.rb:863:in `do_start'
from /opt/puppetlabs/puppet/lib/ruby/2.1.0/net/http.rb:852:in `start'
from /opt/puppetlabs/puppet/lib/ruby/2.1.0/net/http.rb:1375:in `request'
from /tmp/puppet-enterprise-2015.2.0-ubuntu-12.04-amd64/update-superuser-password.rb:51:in `get_response'
from /tmp/puppet-enterprise-2015.2.0-ubuntu-12.04-amd64/update-superuser-password.rb:96:in `get_user'
from /tmp/puppet-enterprise-2015.2.0-ubuntu-12.04-amd64/update-superuser-password.rb:104:in `main'
from /tmp/puppet-enterprise-2015.2.0-ubuntu-12.04-amd64/update-superuser-password.rb:108:in `<main>'
--------------------------------------------------------------------------------
STEP 5: DONE
Thanks for installing Puppet Enterprise!
To learn more and get started using Puppet Enterprise, refer to the
Puppet Enterprise Quick Start Guide
(http://docs.puppetlabs.com/pe/latest/quick_start.html) and the Puppet
Enterprise Deployment Guide
(http://docs.puppetlabs.com/guides/deployment_guide/index.html).
The console can be reached at the following URI:
* https://ip-172-31-X-X.eu-west-1.compute.internal:3000
================================================================================
## NOTES
Puppet Enterprise has been installed to "/opt/puppetlabs," and its
configuration files are located in "/etc/puppetlabs".
Answers from this session saved to
'/tmp/puppet-enterprise-2015.2.0-ubuntu-12.04-amd64/answers.lastrun.ip-172-31-X-X.eu-west-1.compute.internal'
In addition, auto-generated database users and passwords have been saved
to '/etc/puppetlabs/installer/database_info.install'
!!! WARNING: Do not discard these files! All auto-generated database
users and passwords have been saved in them. You will need this
information to configure the console role during installation.
If you have a firewall running, please ensure the following TCP ports are
open: 3000, 4433, 8140, 61613
!!! WARNING: Installer failed to classify Puppet Enterprise. Puppet
Enterprise will not be able to manage itself because of this. Check
'/var/log/puppetlabs/console-services/' for more information.
!!! WARNING: Installer failed to update superuser password. This leaves
your PE installation at risk. Check
'/var/log/puppetlabs/console-services/' for more information. Log into
the console (user: admin, password: admin) as soon as possible and change
the admin users password through the console.
--------------------------------------------------------------------------------
root@ip-172-31-X-X:/tmp/puppet-enterprise-2015.2.0-ubuntu-12.04-amd64#
In the file: lib/oscar/command/init_vms.rb
Lines 44 and 49 have an incorrect string:
o.on('-m', '--master=val', String, 'The name and basebox for a Puppet master') do |val|
Line 44 should be:
o.on('-m', '--master name=box', String, 'The name and basebox for a Puppet master') do |val|
and on line 49
o.on('-m', '--agent name=box', String, 'The name and basebox for a Puppet agent') do |val|
I don't know Ruby, so I'm not sure whether this is the right syntax, and I can't test it due to issue #10.
Hi,
When I do vagrant up master, it installs and configures the Puppet master server on a CentOS 7 box, but partway through it stops with the following error when it tries to start the puppet server process.
Logs: https://gist.github.com/shreejit13/d03122c8509e0fe9e14f
Details of the VM for the Puppet master:
CentOS 7, 8 GB RAM, 2 CPUs, and 20 GB of disk.
The host I am running Vagrant from is Windows 8.
Can someone tell me what the possible reasons for this error might be?
Also, is there any way to enable debug mode in Oscar?
Hi,
Not so much a bug in Oscar, but this will put off a lot of users. When using Vagrant on Windows, Oscar works fine until the PE installer reads the answer file. Because the file is mounted from Windows onto a Linux box, the line endings are wrong and the installer fails, stating that the answer is null. I'm going to report it to the PE bug list as well.
A simple fix would be to force the answer file to a Unix format in the utilities file in the PE installer. Just add this to the load_answers() function:
load_answers() {
t_load_answers__file="${1?}"
if [ -f "${t_load_answers__file?}" ]; then
# Force the answer file to have Unix line-endings.
tr -d '\15\32' < "${t_load_answers__file?}" > answerfile_placeholder
mv answerfile_placeholder "${t_load_answers__file?}"
...
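The same transformation can be exercised host-side; a small self-contained sketch of what the tr fix does (file names are placeholders):

```shell
# Simulate a CRLF answer file as produced on Windows.
printf 'q_install=y\r\n' > answers.txt
# Strip carriage returns (\15) and Ctrl-Z (\32), as the patched
# load_answers() does, leaving Unix line endings.
tr -d '\15\32' < answers.txt > answers.unix
```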
It would seem the default role sets up 1024 MB of RAM for the Puppet master, which is not enough: the installer fails, at least on CentOS 7.2. I suggest bumping that up to a more reasonable default.
And, while you're at it, it would appear that "provider:" is no longer the preferred form:
WARN config_builder: The provider attribute, set on vm master, is deprecated and will be removed in an upcoming release. Use the providers attribute instead.
... which might simplify your templates?
Tommy
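For reference, the deprecation warning suggests moving to a providers list; the shape below is an assumption inferred from the warning text, not a verified config_builder schema, and also folds in the suggested memory bump:

```yaml
roles:
  pe-puppet-master:
    providers:
      - type: virtualbox
        customize:
          - [modifyvm, !ruby/sym id, '--memory', 4096]
```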
vagrant up fails with the following error when using sles-11sp1-x64-vbox4210-nocm:
[master] Running provisioner: hosts...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
sed -i 's/\(HOSTNAME=\).*/\1master/' /etc/sysconfig/network
This is due to /etc/sysconfig/network being a directory on SLES 11
vagrant-sles-11-x64:/home/vagrant # sed -i 's/\(HOSTNAME=\).*/\1master/' /etc/sysconfig/network
sed: couldn't edit /etc/sysconfig/network: not a regular file
vagrant-sles-11-x64:/home/vagrant # file /etc/sysconfig/network/
/etc/sysconfig/network/: directory
SLES hostnames are set in the /etc/HOSTNAME file.
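A sketch of what a SLES-aware fix would do, demonstrated against stand-in paths in the current directory (on a real SLES 11 box the targets are /etc/sysconfig/network and /etc/HOSTNAME, and root is required):

```shell
# Stand-in for /etc/sysconfig/network, which is a directory on SLES 11,
# so the sed-based edit used for other distros cannot apply there.
mkdir -p network
# Write the hostname where SLES actually reads it (stand-in for /etc/HOSTNAME).
echo master > HOSTNAME
```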
After installing the oscar plugin and executing the following
vagrant oscar init-vms --master master=centos-64-x64-vbox4210-nocm --agent first=centos-64-x64-vbox4210-nocm
I receive the following error:
/Users/<username>/.vagrant.d/gems/gems/vagrant-pe_build-0.4.3/lib/pe_build/release/2_0.rb:17:in `block in <module:Release>': undefined method `template_dir' for PEBuild:Module (NoMethodError)
from /Users/<username>/.vagrant.d/gems/gems/vagrant-pe_build-0.4.3/lib/pe_build/release/instance.rb:14:in `instance_eval'
from /Users/<username>/.vagrant.d/gems/gems/vagrant-pe_build-0.4.3/lib/pe_build/release/instance.rb:14:in `initialize'
from /Users/<username>/.vagrant.d/gems/gems/vagrant-pe_build-0.4.3/lib/pe_build/release.rb:13:in `new'
from /Users/<username>/.vagrant.d/gems/gems/vagrant-pe_build-0.4.3/lib/pe_build/release.rb:13:in `newrelease'
from /Users/<username>/.vagrant.d/gems/gems/vagrant-pe_build-0.4.3/lib/pe_build/release/2_0.rb:3:in `<module:Release>'
from /Users/<username>/.vagrant.d/gems/gems/vagrant-pe_build-0.4.3/lib/pe_build/release/2_0.rb:1:in `<top (required)>'
from /Applications/Vagrant/embedded/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from /Applications/Vagrant/embedded/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from /Users/<username>/.vagrant.d/gems/gems/vagrant-pe_build-0.4.3/lib/pe_build/release.rb:16:in `<module:Release>'
from /Users/<username>/.vagrant.d/gems/gems/vagrant-pe_build-0.4.3/lib/pe_build/release.rb:2:in `<module:PEBuild>'
from /Users/<username>/.vagrant.d/gems/gems/vagrant-pe_build-0.4.3/lib/pe_build/release.rb:1:in `<top (required)>'
from /Applications/Vagrant/embedded/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from /Applications/Vagrant/embedded/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from /Users/<username>/.vagrant.d/gems/gems/oscar-0.3.1/lib/oscar/command/init_vms.rb:16:in `initialize'
from /Users/<username>/.vagrant.d/gems/gems/oscar-0.3.1/lib/oscar/command/helpers.rb:11:in `new'
from /Users/<username>/.vagrant.d/gems/gems/oscar-0.3.1/lib/oscar/command/helpers.rb:11:in `invoke_subcommand'
from /Users/<username>/.vagrant.d/gems/gems/oscar-0.3.1/lib/oscar/command.rb:21:in `execute'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/cli.rb:38:in `execute'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/environment.rb:484:in `cli'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/bin/vagrant:96:in `<top (required)>'
from /Applications/Vagrant/bin/../embedded/gems/bin/vagrant:23:in `load'
from /Applications/Vagrant/bin/../embedded/gems/bin/vagrant:23:in `<main>'
Using Vagrant 1.3.1 and Oscar 0.3.1
Any clues?
I added this to my roles.yaml, but it doesn't seem to be working. What is the correct way to do this?
---
roles:
pe-puppet-master:
private_networks:
- {ip: '0.0.0.0', auto_network: true}
provider:
type: virtualbox
customize:
- [modifyvm, !ruby/sym id, '--memory', 4096]
provisioners:
- {type: shell, inline: 'cd /etc/yum.repos.d; wget -nc http://public-yum.oracle.com/public-yum-ol6.repo'}
- {type: hosts}
- {type: pe_bootstrap, role: !ruby/sym master}
- {type: shell, path: 'scripts/bootstrap.sh'}
- {type: puppet, manifest_file: 'vagrant.pp'}
ssh:
private_key_path: "~/.ssh/mykey_rsa"
forward_agent: true
When trying to follow the instructions in the README, I'm getting the following error when I run "vagrant up":
[:~/tmp] $ vagrant plugin install oscar
Installing the 'oscar' plugin. This can take a few minutes...
Installed the plugin 'oscar (0.3.1)'!
[:~/tmp] $ mkdir oscar
[:~/tmp] $ cd oscar
[:~/tmp/oscar] $ vagrant oscar init
A stub Vagrantfile has been placed in this directory and default configurations
have been placed into the `config` directory. You can now run `vagrant up` to start
your Oscar built environment, courtesy of Vagrant.
[:~/tmp/oscar] $ vagrant oscar init-vms --master master=centos-64-x64-vbox4210-nocm
Your environment has been initialized with the following configuration:
masters:
- ["master", "centos-64-x64-vbox4210-nocm"]
agents:
pe_version: 3.1.0
[:~/tmp/oscar] $ vagrant up
Bringing machine 'master' up with 'virtualbox' provider...
[master] Importing base box 'centos-64-x64-vbox4210-nocm'...
[master] Matching MAC address for NAT networking...
[master] Setting the name of the VM...
[master] Clearing any previously set forwarded ports...
[master] Fixed port collision for 22 => 2222. Now on port 2202.
[master] Creating shared folders metadata...
[master] Clearing any previously set network interfaces...
[master] Assigning "10.20.1.2" to 'aa89107d-8f2b-4feb-bfe0-13dfd1f9b315'
[master] Preparing network interfaces based on configuration...
[master] Forwarding ports...
[master] -- 22 => 2202 (adapter 1)
[master] Running 'pre-boot' VM customizations...
[master] Booting VM...
[master] Waiting for machine to boot. This may take a few minutes...
[master] Machine booted and ready!
[master] Configuring and enabling network interfaces...
[master] Mounting shared folders...
[master] -- /vagrant
[master] Running provisioner: hosts...
[master] Running provisioner: pe_bootstrap...
Cannot fetch installer puppet-enterprise-3.1.0-el-6-x86_64.tar.gz; no download source available.
Installers available for use:
The Puppet Enterprise installer puppet-enterprise-3.1.0-el-6-x86_64.tar.gz
is not available. Please set the `download_root` config option to a valid
mirror, or add the installer yourself by using the `vagrant pe-build copy`
command. Downloads for Puppet Enterprise are available for download at
https://puppetlabs.com/puppet/puppet-enterprise/
The official CentOS Vagrant boxes do not include the VirtualBox guest additions, so rsync is the default sync transport. A vagrant up will then just do an initial rsync for synced folders. There are a few issues with this in Oscar:
1/ after downloading PE installer:
==> master: Running provisioner: pe_bootstrap...
==> master: bash: line 4: /vagrant/.pe_build/puppet-enterprise-2019.2.1-el-7-x86_64/puppet-enterprise-installer: No such file or directory
2/ after installing and trying to puppet apply master.pp:
==> master: Error: Could not run: Could not find file /vagrant/.pe_build/post-install/master.pp
The workaround at each step is to vagrant reload master.
I assume there's a way we can also get /vagrant to be mounted using vagrant-sshfs or nfs. Is there a recommended method of configuring this?
I get this warning when I run it:
Vagrant.require_plugin is deprecated and has no effect any longer.
Use vagrant plugin
commands to manage plugins. This warning will
be removed in the next version of Vagrant.
Hello - I installed PE 3.3.2 on CentOS 6.5 and would like to view the Puppet dashboard. I ran Oscar with the default configuration and have private networking. I tried to ping the IP address from my host, but it doesn't work; I can't get to https://master either.
Does Oscar install the Puppet dashboard by default?
Below are the answer files I used to manually install PE1.
File for master:
q_puppet_symlinks_install=y
q_puppetagent_certname=master
q_puppetagent_install=y
q_puppetagent_pluginsync=y
q_puppetagent_server=master
q_puppetdashboard_database_install=y
q_puppetdashboard_database_name=dashboard
q_puppetdashboard_database_password=test
q_puppetdashboard_database_root_password=test
q_puppetdashboard_database_user=dashboard
q_puppetdashboard_httpd_port=3000
q_puppetdashboard_install=y
q_puppetdashboard_inventory_hostname=master
q_puppetdashboard_inventory_port=8140
q_puppetdashboard_master_hostname=master
q_puppetmaster_certdnsnames=master:puppet
q_puppetmaster_certname=master
q_puppetmaster_dashboard_hostname=localhost
q_puppetmaster_dashboard_port=3000
q_puppetmaster_forward_facts=n
q_puppetmaster_install=y
q_puppetmaster_use_dashboard_classifier=y
q_puppetmaster_use_dashboard_reports=y
q_rubydevelopment_install=y
q_vendor_packages_install=y
agent:
q_puppet_symlinks_install=y
q_puppetagent_certname=agent
q_puppetagent_install=y
q_puppetagent_pluginsync=y
q_puppetagent_server=master
q_puppetdashboard_install=n
q_puppetmaster_install=n
q_rubydevelopment_install=n
q_vendor_packages_install=n
Is the new version of puppet Enterprise 2018 supported?
Would it be possible to get a libvirt provider in addition to VirtualBox and VMware? This would be helpful for those of us using libvirt as a provider. Unfortunately, I do not have enough background to provide the necessary YAML file myself.
thanks
I'm using oscar with VirtualBox on OSX:
# robin at mbp in ~
vagrant --version
Vagrant 1.8.5
# robin at mbp in ~
vagrant plugin list
oscar (0.5.1)
vagrant-hostmanager (1.8.5)
vagrant-librarian-puppet (0.9.2)
vagrant-puppet-install (4.1.0)
vagrant-share (1.1.5, system)
vagrant-vbguest (0.12.0)
I've attached the config I'm using:
oscar-config.tar.gz
The problem I'm seeing is that the auto-selected IPs are assigned to the second NIC on the boxes (eth1), but the interface is never brought up. This causes provisioning to fail, as the guests are not able to connect to the master.
As a workaround, this works:
vagrant up --no-provision
vagrant reload
When the boxes are reloaded, the 2nd network is started and provisioning (which runs on reload because it hasn't run already) completes successfully.
Any idea where the problem might lie?
R.
After initializing two boxes and running vagrant up, the first box (the master) comes up successfully but the agent (first) fails to import.
In the /Users//Workspace/test/.vagrant/machines/ folder the master folder is created but the first folder is missing, causing the stack trace (I think).
[master] Notice: /Stage[main]//Service[pe-httpd]: Triggered 'refresh' from 1 events
[master] Notice: Finished catalog run in 14.12 seconds
[master]
[first] Importing base box 'centos-64-x64-vbox4210-nocm'...
Progress: 90%/Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/machine.rb:204:in `initialize': No such file or directory - /Users/<username>/Workspace/test/.vagrant/machines/first/virtualbox/id (Errno::ENOENT)
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/machine.rb:204:in `open'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/machine.rb:204:in `open'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/machine.rb:204:in `id='
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/plugins/providers/virtualbox/action/import.rb:15:in `call'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/action/warden.rb:34:in `call'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/plugins/providers/virtualbox/action/customize.rb:38:in `call'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/action/warden.rb:34:in `call'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/plugins/providers/virtualbox/action/check_accessible.rb:18:in `call'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/action/warden.rb:34:in `call'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/action/runner.rb:61:in `block in run'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/util/busy.rb:19:in `busy'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/action/runner.rb:61:in `run'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/action/builtin/call.rb:51:in `call'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/action/warden.rb:34:in `call'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/action/builtin/config_validate.rb:25:in `call'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/action/warden.rb:34:in `call'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/action/builtin/call.rb:57:in `call'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/action/warden.rb:34:in `call'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/plugins/providers/virtualbox/action/check_virtualbox.rb:17:in `call'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/action/warden.rb:34:in `call'
from /Users/<username>/.vagrant.d/gems/gems/vagrant-pe_build-0.4.3/lib/pe_build/action/pe_build_dir.rb:16:in `call'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/action/warden.rb:34:in `call'
from /Users/<username>/.vagrant.d/gems/gems/vagrant-auto_network-0.2.1/lib/auto_network/action/load_pool.rb:26:in `call'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/action/warden.rb:34:in `call'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/action/builder.rb:116:in `call'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/action/runner.rb:61:in `block in run'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/util/busy.rb:19:in `busy'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/action/runner.rb:61:in `run'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/machine.rb:147:in `action'
from /Applications/Vagrant/embedded/gems/gems/vagrant-1.3.1/lib/vagrant/batch_action.rb:63:in `block (2 levels) in run'