Comments (62)
@battlemidget Thanks much 😄. That did the trick. The error is gone. Let me resume this and I will report back. 👍
from conjure-up.
Can you add the PPA? I've fixed most of the SSH-related issues:
apt-add-repository ppa:conjure/ppa
apt update
apt upgrade
from conjure-up.
I tried to complete the conjure-up install after adding the PPA, but it got stuck,
so I reinstalled my Ubuntu server from scratch and restarted it.
Now I'm getting another error (see attached).
Regards,
Pau
from conjure-up.
Hi kapde,
I don't think attachments work through email; can you add it to the issue itself?
from conjure-up.
Also you can run:
# hit enter without typing any passwords in
$ ssh-keygen
$ source /usr/share/openstack/bundles/common-openstack/novarc
$ openstack keypair create --public-key $HOME/.ssh/id_rsa.pub ubuntu-keypair
This should get a keypair added without having to redeploy. You can also import keypairs through the openstack dashboard.
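The ssh-keygen step can also be done non-interactively; a minimal sketch, assuming an RSA key at the default path (the KEY variable is just for illustration):

```shell
# non-interactive key generation: -N "" sets an empty passphrase, -q is quiet
KEY="${KEY:-$HOME/.ssh/id_rsa}"
mkdir -p "$(dirname "$KEY")"
[ -f "$KEY" ] || ssh-keygen -t rsa -N "" -f "$KEY" -q

# then, as above, register the public half with Nova:
# source /usr/share/openstack/bundles/common-openstack/novarc
# openstack keypair create --public-key "$KEY.pub" ubuntu-keypair
```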
from conjure-up.
Sorry, it was displaying something like:
Oops there was a problem with your install:
Reason: Creating Juju controller "local.janessa" on lxd/localhost
Bootstrapping model "admin"
Starting new instance for initial controller
Launching instance: starting instance: can't get info for image 'ubuntu-xenial': not found
Regards
from conjure-up.
Ah, interesting. Can you make sure you're using Juju 2.0 beta 6? There were fixes for that in there.
from conjure-up.
This works great; the only issue is that after I deploy on my laptop and reboot, only half of the containers start.
Tried 4-5 times.
root@rhicksted-XPS-15-9550:/home/rhicksted# lxc list
+------------------------------------------------------+---------+----------------------+------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-23f522df-89ab-4fc4-84de-0390ef44f13f-machine-0 | RUNNING | 10.20.158.16 (eth0) | | PERSISTENT | 0 |
+------------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-8131e5ee-a4fc-4ad9-8890-f5fb311e10bc-machine-0 | RUNNING | 10.20.158.19 (eth0) | | PERSISTENT | 0 |
+------------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-8131e5ee-a4fc-4ad9-8890-f5fb311e10bc-machine-1 | RUNNING | 10.20.158.112 (eth0) | | PERSISTENT | 0 |
+------------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-8131e5ee-a4fc-4ad9-8890-f5fb311e10bc-machine-10 | RUNNING | 10.20.158.2 (eth0) | | PERSISTENT | 0 |
+------------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-8131e5ee-a4fc-4ad9-8890-f5fb311e10bc-machine-11 | RUNNING | 10.20.158.198 (eth0) | | PERSISTENT | 0 |
+------------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-8131e5ee-a4fc-4ad9-8890-f5fb311e10bc-machine-12 | RUNNING | 10.20.158.86 (eth0) | | PERSISTENT | 0 |
+------------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-8131e5ee-a4fc-4ad9-8890-f5fb311e10bc-machine-13 | RUNNING | 10.20.158.99 (eth0) | | PERSISTENT | 0 |
+------------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-8131e5ee-a4fc-4ad9-8890-f5fb311e10bc-machine-14 | RUNNING | 10.20.158.106 (eth0) | | PERSISTENT | 0 |
+------------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-8131e5ee-a4fc-4ad9-8890-f5fb311e10bc-machine-15 | RUNNING | 10.20.158.136 (eth0) | | PERSISTENT | 0 |
+------------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-8131e5ee-a4fc-4ad9-8890-f5fb311e10bc-machine-16 | RUNNING | 10.20.158.208 (eth0) | | PERSISTENT | 0 |
+------------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-8131e5ee-a4fc-4ad9-8890-f5fb311e10bc-machine-2 | ERROR | | | PERSISTENT | 0 |
+------------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-8131e5ee-a4fc-4ad9-8890-f5fb311e10bc-machine-3 | STOPPED | | | PERSISTENT | 0 |
+------------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-8131e5ee-a4fc-4ad9-8890-f5fb311e10bc-machine-4 | STOPPED | | | PERSISTENT | 0 |
+------------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-8131e5ee-a4fc-4ad9-8890-f5fb311e10bc-machine-5 | STOPPED | | | PERSISTENT | 0 |
+------------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-8131e5ee-a4fc-4ad9-8890-f5fb311e10bc-machine-6 | STOPPED | | | PERSISTENT | 0 |
+------------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-8131e5ee-a4fc-4ad9-8890-f5fb311e10bc-machine-7 | STOPPED | | | PERSISTENT | 0 |
+------------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-8131e5ee-a4fc-4ad9-8890-f5fb311e10bc-machine-8 | STOPPED | | | PERSISTENT | 0 |
+------------------------------------------------------+---------+----------------------+------+------------+-----------+
| juju-8131e5ee-a4fc-4ad9-8890-f5fb311e10bc-machine-9 | STOPPED | | | PERSISTENT | 0 |
+------------------------------------------------------+---------+----------------------+------+------------+-----------+
Regards
from conjure-up.
@rhickstedjr, interesting. Can you attach /var/log/lxd/container-name/lxc.log for the ones that aren't started?
For example, the last entry in the above output would be:
/var/log/lxd/juju-8131e5ee-a4fc-4ad9-8890-f5fb311e10bc-machine-9/lxc.log
Thanks
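A sketch for collecting those logs in one go (assumes the lxc CLI and its CSV output format; the filter itself is plain awk):

```shell
# print the names of containers whose state is not RUNNING,
# given "name,state" CSV lines on stdin
not_running() {
  awk -F, '$2 != "RUNNING" { print $1 }'
}

# usage against a live LXD (illustrative; requires lxd):
# lxc list --format csv -c ns | not_running | while read -r c; do
#   echo "== $c =="
#   cat "/var/log/lxd/$c/lxc.log"
# done
```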
from conjure-up.
2-9 in order.
Thanks,
Rick
from conjure-up.
Unfortunately I don't think it works by attaching to the email; can you attach it directly to the bug?
from conjure-up.
Here they are via the web page.
Thanks,
Rick
from conjure-up.
Thanks I'll talk with the lxd guys tomorrow
from conjure-up.
Same here, except all servers stay in "Allocating" for more than a day and never change. I started over from a fresh installation and it does the same. It's an HP lab PC, and no ZFS is used.
from conjure-up.
If you're using zfs as a backing store for those containers, there's a possibility that the zfs module didn't load on reboot, so a modprobe of zfs should fix the issue. There's another bug not on github that's been filed for this issue.
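If that is the cause, something like the following should both load the module now and make it load on every boot (a sketch; the modules-load.d path is the standard systemd mechanism, not something specific to this bug):

```shell
# load the zfs kernel module if it isn't already loaded
lsmod | grep -q '^zfs ' || sudo modprobe zfs

# make it load automatically on every boot (systemd reads /etc/modules-load.d/)
echo zfs | sudo tee /etc/modules-load.d/zfs.conf
```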
from conjure-up.
I am using ZFS in the example above, but I also noticed the issue with non-ZFS deployments.
If the module were not loading on reboot, why would half the containers start?
from conjure-up.
If the module was not loading on reboots why would 1/2 the containers start?
Yeah, we're asking that too. I'm still working with the LXD guys on possible reasons why this is happening.
from conjure-up.
Any luck? Can we put a delay between container starts?
Thanks,
Rick
from conjure-up.
Still debugging; there seems to be a race when Juju connects to the LXD endpoint. Because it is difficult to reproduce, we'll need more time.
from conjure-up.
@rhickstedjr, digging through this with LXD guys now, hopefully will have an update soon.
from conjure-up.
Good deal! I would really like to test more of the solution.
Thanks,
Rick
from conjure-up.
@rhickstedjr, I've opened a bug with LXD over at https://github.com/lxc/lxd/issues/2001. Feel free to subscribe and, if possible, provide any additional information they may need.
from conjure-up.
Is there no workaround to stagger the container start times on reboot to avoid the possible race?
Thanks,
Rick
from conjure-up.
Not that I know of, unfortunately.
from conjure-up.
This works:
https://bitsandslices.wordpress.com/2015/08/26/autostarting-lxd-containers/
Set boot.autostart.delay (lxc.start.delay) and stagger it based on the container number,
e.g. machine 16 gets a higher delay than 15, 14, 13, etc.
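A sketch of that staggering, assuming container names ending in the Juju machine number (the 10-second step and the lxc config keys shown are illustrative; check the LXD docs for your version):

```shell
# compute a start delay from the trailing machine number in a container name,
# so machine-16 waits longer than machine-15, which waits longer than machine-14...
delay_for() {
  local name=$1
  local num=${name##*-}     # strip everything up to the last '-'
  echo $(( num * 10 ))      # 10 seconds per machine number (arbitrary step)
}

# apply to every juju-* container (illustrative; requires a live LXD):
# for c in $(lxc list --format csv -c n | grep '^juju-'); do
#   lxc config set "$c" boot.autostart true
#   lxc config set "$c" boot.autostart.delay "$(delay_for "$c")"
# done
```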
Thanks,
Rick
On May 6, 2016, at 10:05 AM, Adam Stokes [email protected] wrote:
Not that I know of unfortunately
—
You are receiving this because you were mentioned.
Reply to this email directly or view it on GitHub
from conjure-up.
So that worked for you? If so, this is great information I can pass along to the right people.
from conjure-up.
Yes, tried 5x reboots.
Thanks,
from conjure-up.
Perfect thanks
from conjure-up.
@rhickstedjr I built a new openstack package in the PPA; could you retry your tests without the delay options and let me know if you still run into issues?
from conjure-up.
I am unable to make this work as well. The installation gets as far as deploying the services and then hangs on Keystone; after a while it fails. The only LXD container deployed is MAAS. I see this in /var/log/openstack.log:
john@openstack-dev:~$ tail /var/log/openstack.log
May 25 09:44:50 openstack-dev openstack: [WARNING]
May 25 09:44:50 openstack-dev openstack: [ERROR] There was an error during the post processing phase, retrying.
May 25 09:44:56 openstack-dev openstack: [WARNING]
May 25 09:44:56 openstack-dev openstack: [ERROR] There was an error during the post processing phase, retrying.
May 25 09:49:23 openstack-dev openstack: [WARNING] pollinate binary not found
May 25 09:49:26 openstack-dev openstack: message repeated 2 times: [ [WARNING] pollinate binary not found]
May 25 09:52:09 openstack-dev openstack: [WARNING] pollinate binary not found
May 25 09:55:23 openstack-dev openstack: message repeated 7 times: [ [WARNING] pollinate binary not found]
May 25 09:57:46 openstack-dev openstack: [WARNING] pollinate binary not found
May 25 09:57:48 openstack-dev openstack: [WARNING] pollinate binary not found
After some googling for the pollinate error, I ran across an issue with MAAS 2.0 and juju 2.0. So is this currently broken and just does not work? Thanks for the help.
References:
#9
https://bugs.launchpad.net/juju-core/+bug/1564577
from conjure-up.
Yeah, MAAS 2.0 doesn't quite work with our stuff yet; see #38 to track its progress. I will be working on getting that going this week and into next.
from conjure-up.
Thank you for the reply. Is it possible to work around this and still use conjure-up to do the install? I am guessing not.
from conjure-up.
@jpmitchell Other than using MAAS 1.9, I don't think there is a way around it for now. Our next release is due May 30th, so I'll try to have this fix in there by then.
from conjure-up.
@battlemidget Sounds good. Thanks again for the quick update!
from conjure-up.
Any update on the release? Thanks!
from conjure-up.
Running behind; hope to do a release soon.
from conjure-up.
Sounds good. Thanks again for the update. Hope it goes well.
from conjure-up.
@jpmitchell We're hoping to do a release today, but it won't have full MAAS 2.0 support yet. I've set that to be done by the release next Monday, June 13th. I'll keep you posted on its progress.
from conjure-up.
LXD OpenStack is failing on localhost as well. I can't get past the "pre processing checks" (right after the commit). I'm on a fresh 16.04 with the PPA, picking everything vanilla.
The machine is an Intel NUC with a Core i7 and 16 GB RAM, entirely standard.
Let me know if there are any logs or stuff I should put here.
Thx,
from conjure-up.
Still waiting on the openstack package to reach xenial-updates; can you upgrade to the openstack package in proposed and see if the issue persists, @SaMnCo?
from conjure-up.
I would get a system with 32 GB RAM; I had real issues at 16.
Note: the NUC can go to 32 GB, but a set of DIMMs costs $300, which is almost the cost of the i7 NUC itself.
Thanks,
from conjure-up.
Packages pushed to xenial-updates should resolve the initial poster's bug.
from conjure-up.
@jpmitchell please track #38 for MAAS 2.0 support; I've got the initial code done and am just doing some testing so it can be included in next Monday's release.
from conjure-up.
Thanks. I am already following it.
from conjure-up.
Hello,
Having a similar problem: trying to perform a single-machine installation and getting an error.
administrator@openstack:~$ tail /var/log/openstack.log
Aug 8 16:10:57 openstack openstack: [WARNING] pollinate binary not found
Aug 8 16:11:10 openstack openstack: message repeated 2 times: [ [WARNING] pollinate binary not found]
Aug 8 16:12:31 openstack openstack: [WARNING] pollinate binary not found
Aug 8 16:12:33 openstack openstack: message repeated 2 times: [ [WARNING] pollinate binary not found]
Aug 8 16:17:11 openstack openstack: [WARNING] pollinate binary not found
From the output of the installation script:
Setting up python3-glanceclient (1:2.0.0-2ubuntu0.16.04.1) ...
update-alternatives: using /usr/bin/python3-glance to provide /usr/bin/glance (glance) in auto mode
Setting up python3-openstackclient (2.3.0-2) ...
update-alternatives: using /usr/bin/python3-openstack to provide /usr/bin/openstack (openstack) in auto mode
Setting up openstack (1.0.7.16.04.1) ...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
Job for lxd-containers.service failed because the control process exited with error code. See "systemctl status lxd-containers.service" and "journalctl -xe" for details.
lxd-containers.service couldn't start.
Job for lxd-containers.service failed because the control process exited with error code. See "systemctl status lxd-containers.service" and "journalctl -xe" for details.
lxd-containers.service couldn't restart.
Exception in ev.run():
Traceback (most recent call last):
File "/usr/share/conjure-up/ubuntui/ev.py", line 83, in run
cls.loop.run()
File "/usr/lib/python3/dist-packages/urwid/main_loop.py", line 278, in run
self._run()
File "/usr/lib/python3/dist-packages/urwid/main_loop.py", line 376, in _run
self.event_loop.run()
File "/usr/lib/python3/dist-packages/urwid/main_loop.py", line 1326, in run
self._loop.run_forever()
File "/usr/lib/python3.5/asyncio/base_events.py", line 345, in run_forever
self._run_once()
File "/usr/lib/python3.5/asyncio/base_events.py", line 1276, in _run_once
event_list = self._selector.select(timeout)
File "/usr/lib/python3.5/selectors.py", line 441, in select
fd_event_list = self._epoll.poll(timeout, max_ev)
KeyboardInterrupt
Traceback (most recent call last):
File "/usr/bin/conjure-up", line 9, in <module>
load_entry_point('conjure-up==0.1.2', 'console_scripts', 'conjure-up')()
File "/usr/share/conjure-up/conjure/app.py", line 222, in main
app.start()
File "/usr/share/conjure-up/conjure/app.py", line 171, in start
EventLoop.run()
File "/usr/share/conjure-up/ubuntui/ev.py", line 83, in run
cls.loop.run()
File "/usr/lib/python3/dist-packages/urwid/main_loop.py", line 278, in run
self._run()
File "/usr/lib/python3/dist-packages/urwid/main_loop.py", line 376, in _run
self.event_loop.run()
File "/usr/lib/python3/dist-packages/urwid/main_loop.py", line 1326, in run
self._loop.run_forever()
File "/usr/lib/python3.5/asyncio/base_events.py", line 345, in run_forever
self._run_once()
File "/usr/lib/python3.5/asyncio/base_events.py", line 1276, in _run_once
event_list = self._selector.select(timeout)
File "/usr/lib/python3.5/selectors.py", line 441, in select
fd_event_list = self._epoll.poll(timeout, max_ev)
KeyboardInterrupt
Not sure if I'm doing something wrong or if it's the script that's at fault.
Thanks
from conjure-up.
Looks like your LXD isn't configured properly
from conjure-up.
Thanks. I found the problem: the /var/lib/lxd/containers path wasn't created. Once I created it before launching the installation, that error went away. Having resolved that, I'm now getting a new error and I have no clue what the problem could be:
Setting up python3-oslo.config (1:3.9.0-3) ...
update-alternatives: using /usr/bin/python3-oslo-config-generator to provide /usr/bin/oslo-config-generator (oslo-config-generator) in auto mode
Setting up python3-keystoneclient (1:2.3.1-2) ...
update-alternatives: using /usr/bin/python3-keystone to provide /usr/bin/keystone (keystone) in auto mode
Setting up python3-cinderclient (1:1.6.0-2) ...
update-alternatives: using /usr/bin/python3-cinder to provide /usr/bin/cinder (cinder) in auto mode
Setting up python3-glanceclient (1:2.0.0-2ubuntu0.16.04.1) ...
update-alternatives: using /usr/bin/python3-glance to provide /usr/bin/glance (glance) in auto mode
Setting up python3-openstackclient (2.3.0-2) ...
update-alternatives: using /usr/bin/python3-openstack to provide /usr/bin/openstack (openstack) in auto mode
Setting up openstack (1.0.7.16.04.1) ...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
Exception in ev.run():
Traceback (most recent call last):
File "/usr/share/conjure-up/ubuntui/ev.py", line 83, in run
cls.loop.run()
File "/usr/lib/python3/dist-packages/urwid/main_loop.py", line 278, in run
self._run()
File "/usr/lib/python3/dist-packages/urwid/main_loop.py", line 376, in _run
self.event_loop.run()
File "/usr/lib/python3/dist-packages/urwid/main_loop.py", line 1326, in run
self._loop.run_forever()
File "/usr/lib/python3.5/asyncio/base_events.py", line 331, in run_forever
self._run_once()
File "/usr/lib/python3.5/asyncio/base_events.py", line 1262, in _run_once
event_list = self._selector.select(timeout)
File "/usr/lib/python3.5/selectors.py", line 441, in select
fd_event_list = self._epoll.poll(timeout, max_ev)
KeyboardInterrupt
Traceback (most recent call last):
File "/usr/bin/conjure-up", line 9, in <module>
load_entry_point('conjure-up==0.1.2', 'console_scripts', 'conjure-up')()
File "/usr/share/conjure-up/conjure/app.py", line 222, in main
app.start()
File "/usr/share/conjure-up/conjure/app.py", line 171, in start
EventLoop.run()
File "/usr/share/conjure-up/ubuntui/ev.py", line 83, in run
cls.loop.run()
File "/usr/lib/python3/dist-packages/urwid/main_loop.py", line 278, in run
self._run()
File "/usr/lib/python3/dist-packages/urwid/main_loop.py", line 376, in _run
self.event_loop.run()
File "/usr/lib/python3/dist-packages/urwid/main_loop.py", line 1326, in run
self._loop.run_forever()
File "/usr/lib/python3.5/asyncio/base_events.py", line 331, in run_forever
self._run_once()
File "/usr/lib/python3.5/asyncio/base_events.py", line 1262, in _run_once
event_list = self._selector.select(timeout)
File "/usr/lib/python3.5/selectors.py", line 441, in select
fd_event_list = self._epoll.poll(timeout, max_ev)
KeyboardInterrupt
administrator@openstack:~$ tail /var/log/openstack.log
Aug 9 10:04:22 openstack openstack: [WARNING] pollinate binary not found
Aug 9 10:04:26 openstack openstack: message repeated 2 times: [ [WARNING] pollinate binary not found]
Aug 9 10:04:56 openstack openstack: [WARNING] pollinate binary not found
Aug 9 10:04:59 openstack openstack: message repeated 2 times: [ [WARNING] pollinate binary not found]
Aug 9 10:08:33 openstack openstack: [WARNING] pollinate binary not found
Thanks for the help.
Luis
from conjure-up.
Hi,
When trying an OpenStack LXD single install on 16.04, all machine states show as error.
Here is the output of $ sudo juju status:
MODEL CONTROLLER CLOUD/REGION VERSION
default localhost lxd/localhost 2.0-beta15
APP VERSION STATUS EXPOSED ORIGIN CHARM REV OS
ceph unknown false jujucharms ceph 4 ubuntu
ceph-radosgw unknown false jujucharms ceph-radosgw 5 ubuntu
cinder unknown false jujucharms cinder 3 ubuntu
cinder-ceph false jujucharms cinder-ceph 2 ubuntu
glance unknown false jujucharms glance 2 ubuntu
keystone unknown false jujucharms keystone 2 ubuntu
mysql unknown false jujucharms percona-cluster 2 ubuntu
neutron-api unknown false jujucharms neutron-api 3 ubuntu
neutron-gateway unknown false jujucharms neutron-gateway 3 ubuntu
neutron-openvswitch false jujucharms neutron-openvswitch 3 ubuntu
nova-cloud-controller unknown false jujucharms nova-cloud-controller 4 ubuntu
nova-compute unknown false jujucharms nova-compute 3 ubuntu
openstack-dashboard unknown false jujucharms openstack-dashboard 2 ubuntu
rabbitmq-server unknown false jujucharms rabbitmq-server 5 ubuntu
RELATION PROVIDES CONSUMES TYPE
mon ceph ceph peer
mon ceph ceph-radosgw regular
ceph ceph cinder-ceph regular
ceph ceph glance regular
ceph ceph nova-compute regular
cluster ceph-radosgw ceph-radosgw peer
identity-service ceph-radosgw keystone regular
cluster cinder cinder peer
storage-backend cinder cinder-ceph subordinate
image-service cinder glance regular
identity-service cinder keystone regular
shared-db cinder mysql regular
cinder-volume-service cinder nova-cloud-controller regular
amqp cinder rabbitmq-server regular
cluster glance glance peer
identity-service glance keystone regular
shared-db glance mysql regular
image-service glance nova-cloud-controller regular
image-service glance nova-compute regular
amqp glance rabbitmq-server regular
cluster keystone keystone peer
shared-db keystone mysql regular
identity-service keystone neutron-api regular
identity-service keystone nova-cloud-controller regular
identity-service keystone openstack-dashboard regular
cluster mysql mysql peer
shared-db mysql neutron-api regular
shared-db mysql nova-cloud-controller regular
cluster neutron-api neutron-api peer
neutron-plugin-api neutron-api neutron-gateway regular
neutron-plugin-api neutron-api neutron-openvswitch regular
neutron-api neutron-api nova-cloud-controller regular
amqp neutron-api rabbitmq-server regular
cluster neutron-gateway neutron-gateway peer
quantum-network-service neutron-gateway nova-cloud-controller regular
amqp neutron-gateway rabbitmq-server regular
neutron-plugin neutron-openvswitch nova-compute regular
amqp neutron-openvswitch rabbitmq-server regular
cluster nova-cloud-controller nova-cloud-controller peer
cloud-compute nova-cloud-controller nova-compute regular
amqp nova-cloud-controller rabbitmq-server regular
neutron-plugin nova-compute neutron-openvswitch subordinate
compute-peer nova-compute nova-compute peer
amqp nova-compute rabbitmq-server regular
cluster openstack-dashboard openstack-dashboard peer
cluster rabbitmq-server rabbitmq-server peer
UNIT WORKLOAD AGENT MACHINE PUBLIC-ADDRESS PORTS MESSAGE
ceph-radosgw/0 unknown allocating 3 Waiting for agent initialization to finish
ceph/0 unknown allocating 0 Waiting for agent initialization to finish
ceph/1 unknown allocating 1 Waiting for agent initialization to finish
ceph/2 unknown allocating 2 Waiting for agent initialization to finish
cinder/0 unknown allocating 4 Waiting for agent initialization to finish
glance/0 unknown allocating 5 Waiting for agent initialization to finish
keystone/0 unknown allocating 6 Waiting for agent initialization to finish
mysql/0 unknown allocating 7 Waiting for agent initialization to finish
neutron-api/0 unknown allocating 8 Waiting for agent initialization to finish
neutron-gateway/0 unknown allocating 9 Waiting for agent initialization to finish
nova-cloud-controller/0 unknown allocating 10 Waiting for agent initialization to finish
nova-compute/0 unknown allocating 11 Waiting for agent initialization to finish
openstack-dashboard/0 unknown allocating 12 Waiting for agent initialization to finish
rabbitmq-server/0 unknown allocating 13 Waiting for agent initialization to finish
MACHINE STATE DNS INS-ID SERIES AZ
0 error pending xenial
1 error pending xenial
2 error pending xenial
3 error pending xenial
4 error pending xenial
5 error pending xenial
6 error pending xenial
7 error pending xenial
8 error pending xenial
9 error pending xenial
10 error pending xenial
11 error pending xenial
12 error pending xenial
13 error pending xenial
Thanks in Advance
Sureshbabu J
from conjure-up.
Your containers aren't starting up for some reason.
from conjure-up.
How can I get rid of this error?
from conjure-up.
Hi all, new entrant here.
When juju starts the bootstrapping process, this error halts the process:
"error: flag provided but not defined: --upload-tools"
I'm trying to get it set up on a bare metal server running Ubuntu 16.04.
from conjure-up.
You'll need to get a newer conjure-up; see http://conjure-up.io for install details.
from conjure-up.
Reporting back to duty. Things have been smooth on my side so far, but I can't seem to get the home run yet 😅
It's failing randomly while at it.
juju show-status
Just picked from the long list:
glance/0* error idle 7 10.239.127.135 hook failed: "install"
neutron-gateway/0* error idle 11 10.239.127.51 hook failed: "install"
from conjure-up.
Can you do:
juju ssh glance/0
Then inside the container run:
pastebinit /var/log/juju/unit-glance-0.log
And provide that link in this bug
from conjure-up.
I just did. I got in via SSH, but I'm not sure if it's my network at the moment. Getting this:
Failed to contact the server: [Errno socket error] The write operation timed out
when running pastebinit /var/log/juju/unit-glance-0.log
from conjure-up.
Are you behind a firewall or anything? That could be another indication of why the installation is failing (a problematic network).
from conjure-up.
I think you're right. The server is in a location with a mighty country-wide government firewall.
from conjure-up.
@battlemidget I moved countries and was able to restart the process on a stable network. Here's output from an LXD container that has failed: http://paste.ubuntu.com/23614089/
from conjure-up.
@battlemidget Same issue as the first:
"error: flag provided but not defined: --upload-tools"
I'm trying to get it set up on a bare-metal server running Ubuntu 16.04.1 (installed as a MAAS region, then added a rack locally on the same server, then installed openstack, then installed conjure-up).
I installed conjure-up as described at http://conjure-up.io.
$ uname -a
Linux mSys 4.4.0-59-generic #80-Ubuntu SMP Fri Jan 6 17:47:47 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
What kind of info can I provide?
Also, it's too difficult to type the whole API key in the dialog. Why doesn't paste work like in a bash console (working over PuTTY (SSH) from a Windows PC)?
from conjure-up.
It's too difficult to type the whole API key in the dialog. Why doesn't paste work like in a bash console (working over PuTTY (SSH) from a Windows PC)?
I think you can press SHIFT+INSERT and it'll paste in PuTTY.
Did you run
sudo snap install conjure-up --classic
?
from conjure-up.
@battlemidget, yes.
snap install conjure-up --classic
reported that the latest version was installed. Tried twice; every selection in the menu (single/nova-LXD/MAAS) shows the same message. I'll try once more later today, but I think it will be the same.
PS: forgot about the keyboard shortcut; Windows habits. Thanks!
from conjure-up.
Make sure you apt-get remove juju-2.0, as the snap already provides Juju.
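For example (a sketch; /snap/bin is where snap binaries live on Ubuntu 16.04, so the expected resolution is an assumption about your PATH):

```shell
# remove the deb-packaged juju so the snap's binary wins on PATH
sudo apt-get remove -y juju-2.0
hash -r               # make the shell forget its cached command locations
command -v juju       # should now resolve to /snap/bin/juju
```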
from conjure-up.