Comments (12)

berendt commented on June 18, 2024

Thanks for the issue. The latest OpenStack Train images on Quay are currently broken. New working images should be available tomorrow during the day.

commented on June 18, 2024

> Thanks for the issue. The latest OpenStack Train images on Quay are currently broken. New working images should be available tomorrow during the day.

Okay, I will try again in the evening then. Thank you

berendt commented on June 18, 2024

> Okay, I will try again in the evening then. Thank you

The Train latest images are usable again. Please test.

commented on June 18, 2024

> > Okay, I will try again in the evening then. Thank you

> The Train latest images are usable again. Please test.

Hi Christian,

This time I ran:

openstack --os-cloud testbed \
  stack create \
  -e heat/environment.yml \
  --parameter deploy_ceph=true \
  --parameter deploy_infrastructure=true \
  --parameter deploy_openstack=true \
  --timeout 150 \
  -t heat/stack.yml testbed

and I also went for ceph-nautilus instead of octopus, because when I tried octopus in the morning, Glance was failing due to "could not find ceph keyrings".

Currently, my heat stack status is "CREATE_FAILED", but it's still bootstrapping OpenStack services at

TASK [neutron : Creating Neutron database user and setting permissions] ********
changed: [testbed-node-0.osism.local -> 192.168.40.10]

TASK [neutron : include_tasks] *************************************************
included: /ansible/roles/neutron/tasks/bootstrap_service.yml for testbed-node-0.osism.local, testbed-node-1.osism.local

TASK [neutron : Running Neutron bootstrap container] ***************************

So I assume it should be finished in the end. Also, I wanted to ask: how can I access the services? I tried sshuttle, and I even tried "sudo route add -net 192.168.40.0/24 gw 10.0.2.12" (the neutron router IP address); however, I still cannot open the services locally. Any advice on that one?

Thank you

berendt commented on June 18, 2024

> and I also went for ceph-nautilus instead of octopus, because when I tried octopus in the morning, Glance was failing due to "could not find ceph keyrings".

Octopus is not yet well tested and upstream is still very active. That's why errors occur here from time to time. Nautilus is therefore set as the default and is what we currently deploy.

> Currently, my heat stack status is "CREATE_FAILED", but it's still bootstrapping OpenStack services at

Stack timeout reached?
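
One way to check is via the standard python-heatclient commands (the cloud name follows the stack create call above):

openstack --os-cloud testbed stack show testbed -c stack_status -c stack_status_reason
openstack --os-cloud testbed stack failures list testbed

If the --timeout limit was hit, the stack_status_reason typically mentions a timeout.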

> So I assume it should be finished in the end. Also, I wanted to ask: how can I access the services? I tried sshuttle, and I even tried "sudo route add -net 192.168.40.0/24 gw 10.0.2.12" (the neutron router IP address); however, I still cannot open the services locally. Any advice on that one?

Run make sshuttle. The APIs/Horizon can then be accessed under 192.168.50.200. Created instances are only accessible via the manager. I'll change that.
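
For reference, make sshuttle presumably wraps something like the following; the dragon user appears later in this thread, and the manager address is a placeholder that depends on your environment:

sshuttle -r dragon@<testbed-manager-address> 192.168.40.0/24 192.168.50.0/24

This forwards traffic for both testbed networks through an SSH connection to the manager, so 192.168.50.200 becomes reachable from the local machine.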

commented on June 18, 2024

> > and I also went for ceph-nautilus instead of octopus, because when I tried octopus in the morning, Glance was failing due to "could not find ceph keyrings".

> Octopus is not yet well tested and upstream is still very active. That's why errors occur here from time to time. Nautilus is therefore set as the default and is what we currently deploy.

> > Currently, my heat stack status is "CREATE_FAILED", but it's still bootstrapping OpenStack services at

> Stack timeout reached?

> > So I assume it should be finished in the end. Also, I wanted to ask: how can I access the services? I tried sshuttle, and I even tried "sudo route add -net 192.168.40.0/24 gw 10.0.2.12" (the neutron router IP address); however, I still cannot open the services locally. Any advice on that one?

> Run make sshuttle. The APIs/Horizon can then be accessed under 192.168.50.200. Created instances are only accessible via the manager. I'll change that.

Hi Christian,

I just deployed the OpenStack testbed environment via the Terraform scripts which you uploaded recently. I used another Linux machine for sshuttle and it worked; however, once I logged in to OpenStack I got http://imgur.com/sYGLd7Fl.png ("You are not authorized to access this page"). I am wondering if this testbed can be used for anything?
My goal was to test Magnum. Kolla's Magnum is very unstable; even though I managed to install it from the master branch with the auto-scaler and auto-healer, it was still very broken.

Using Terraform, "make ssh && sshuttle" is not working; I assume the vars are wrong.
Cockpit has the wrong password in the docs.

P.S. WireGuard would be easier than sshuttle, yeah :)

Thank you

berendt commented on June 18, 2024

> I am wondering if this testbed can be used for anything?

The testbed is fully functional. At least Refstack runs through it completely.

Please try an openstack --os-cloud admin token issue from the testbed-manager.
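
A minimal sketch of that check, assuming the dragon user and the openstackclient container visible in the docker ps output further down in this thread:

ssh dragon@<testbed-manager-address>
sudo docker exec -t openstackclient_openstackclient_1 openstack --os-cloud admin token issue

If a token table is printed, Keystone is reachable through the API endpoint from the manager's point of view.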

> My goal was to test Magnum. Kolla's Magnum is very unstable; even though I managed to install it from the master branch with the auto-scaler and auto-healer, it was still very broken.

Magnum itself is somewhat unstable. I will add the necessary templates to the testbed tomorrow.

> Using Terraform, "make ssh && sshuttle" is not working; I assume the vars are wrong.

I can't confirm that. It works here. What's the error message?

> Cockpit has the wrong password in the docs.

The password is already correct. It just wasn't set. It's fixed (#171).

commented on June 18, 2024

Hi Christian,

I was trying to deploy the testbed again yesterday. The keepalived images were broken again. Is there any way I can change the images from the master to the stable branch?

P.S. Also, I have been reading the docs a lot and I am wondering: what would you say is the main difference between OSISM and Kayobe (Kolla's project)?

Thank you and regards

berendt commented on June 18, 2024

> I was trying to deploy the testbed again yesterday. The keepalived images were broken again. Is there any way I can change the images from the master to the stable branch?

Latest images are working fine. Three hours ago I created an environment with the latest images (built about 48 hours ago). Keepalived and all other images work.

What was the error message?

quay.io/osism/keepalived                  train-latest        83395f55d20a        47 hours ago        212MB

> P.S. Also, I have been reading the docs a lot and I am wondering: what would you say is the main difference between OSISM and Kayobe (Kolla's project)?

Kayobe is a CLI on top of kolla-ansible that has been in development for about a year; parts of it will be integrated into our framework in the future.

commented on June 18, 2024

> > I was trying to deploy the testbed again yesterday. The keepalived images were broken again. Is there any way I can change the images from the master to the stable branch?

> Latest images are working fine. Three hours ago I created an environment with the latest images (built about 48 hours ago). Keepalived and all other images work.

> What was the error message?

> quay.io/osism/keepalived                  train-latest        83395f55d20a        47 hours ago        212MB

> > P.S. Also, I have been reading the docs a lot and I am wondering: what would you say is the main difference between OSISM and Kayobe (Kolla's project)?

> Kayobe is a CLI on top of kolla-ansible that has been in development for about a year; parts of it will be integrated into our framework in the future.

Hi,

Deployment: make deploy-openstack ENVIRONMENT=environment.yml (using Terraform)

TASK [haproxy : Deploy haproxy containers] *************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named docker
failed: [testbed-node-0.osism.local] (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'haproxy', 'enabled': True, 'image': 'quay.io/osism/haproxy:train-latest', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/'], 'dimensions': {}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "haproxy", "value": {"container_name": "haproxy", "dimensions": {}, "enabled": true, "group": "haproxy", "image": "quay.io/osism/haproxy:train-latest", "privileged": true, "volumes": ["/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "haproxy_socket:/var/lib/kolla/haproxy/"]}}, "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 114, in <module>\n  File \"<stdin>\", line 106, in _ansiballz_main\n  File \"<stdin>\", line 49, in invoke_module\n  File \"/tmp/ansible_kolla_docker_payload_SkmE6x/__main__.py\", line 28, in <module>\nImportError: No module named docker\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named docker
failed: [testbed-node-1.osism.local] (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'haproxy', 'enabled': True, 'image': 'quay.io/osism/haproxy:train-latest', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/'], 'dimensions': {}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "haproxy", "value": {"container_name": "haproxy", "dimensions": {}, "enabled": true, "group": "haproxy", "image": "quay.io/osism/haproxy:train-latest", "privileged": true, "volumes": ["/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "haproxy_socket:/var/lib/kolla/haproxy/"]}}, "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 114, in <module>\n  File \"<stdin>\", line 106, in _ansiballz_main\n  File \"<stdin>\", line 49, in invoke_module\n  File \"/tmp/ansible_kolla_docker_payload_ef1aZG/__main__.py\", line 28, in <module>\nImportError: No module named docker\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named docker
failed: [testbed-node-1.osism.local] (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'haproxy', 'enabled': True, 'image': 'quay.io/osism/keepalived:train-latest', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/'], 'dimensions': {}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "keepalived", "value": {"container_name": "keepalived", "dimensions": {}, "enabled": true, "group": "haproxy", "image": "quay.io/osism/keepalived:train-latest", "privileged": true, "volumes": ["/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "/lib/modules:/lib/modules:ro", "haproxy_socket:/var/lib/kolla/haproxy/"]}}, "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 114, in <module>\n  File \"<stdin>\", line 106, in _ansiballz_main\n  File \"<stdin>\", line 49, in invoke_module\n  File \"/tmp/ansible_kolla_docker_payload_VJhsBk/__main__.py\", line 28, in <module>\nImportError: No module named docker\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named docker
failed: [testbed-node-0.osism.local] (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'haproxy', 'enabled': True, 'image': 'quay.io/osism/keepalived:train-latest', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/'], 'dimensions': {}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "keepalived", "value": {"container_name": "keepalived", "dimensions": {}, "enabled": true, "group": "haproxy", "image": "quay.io/osism/keepalived:train-latest", "privileged": true, "volumes": ["/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "/lib/modules:/lib/modules:ro", "haproxy_socket:/var/lib/kolla/haproxy/"]}}, "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 114, in <module>\n  File \"<stdin>\", line 106, in _ansiballz_main\n  File \"<stdin>\", line 49, in invoke_module\n  File \"/tmp/ansible_kolla_docker_payload_ditZPc/__main__.py\", line 28, in <module>\nImportError: No module named docker\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

RUNNING HANDLER [haproxy : Restart haproxy container] **************************

RUNNING HANDLER [haproxy : Restart keepalived container] ***********************

PLAY RECAP *********************************************************************
testbed-manager.osism.local : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-0.osism.local : ok=21   changed=10   unreachable=0    failed=1    skipped=4    rescued=0    ignored=0
testbed-node-1.osism.local : ok=21   changed=10   unreachable=0    failed=1    skipped=4    rescued=0    ignored=0
testbed-node-2.osism.local : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

After this error, the Ceph deployment started. Once the Ceph deployment was finished:

TASK [keystone : Check keystone containers] ************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named docker
failed: [testbed-node-1.osism.local] (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'quay.io/osism/keystone:train-latest', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5000', 'listen_port': '5000'}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'port': '5000', 'listen_port': '5000'}, 'keystone_admin': {'enabled': True, 'mode': 'http', 'external': False, 'port': '35357', 'listen_port': '35357'}}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "keystone", "value": {"container_name": "keystone", "dimensions": {}, "enabled": true, "group": "keystone", "haproxy": {"keystone_admin": {"enabled": true, "external": false, "listen_port": "35357", "mode": "http", "port": "35357"}, "keystone_external": {"enabled": true, "external": true, "listen_port": "5000", "mode": "http", "port": "5000"}, "keystone_internal": {"enabled": true, "external": false, "listen_port": "5000", "mode": "http", "port": "5000"}}, "image": "quay.io/osism/keystone:train-latest", "volumes": ["/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "", "kolla_logs:/var/log/kolla/", "keystone_fernet_tokens:/etc/keystone/fernet-keys"]}}, "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 114, in <module>\n  File \"<stdin>\", line 106, in _ansiballz_main\n  File \"<stdin>\", line 49, in invoke_module\n  File \"/tmp/ansible_kolla_docker_payload_v1B1PP/__main__.py\", line 28, in <module>\nImportError: No module named docker\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named docker
failed: [testbed-node-0.osism.local] (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'quay.io/osism/keystone:train-latest', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5000', 'listen_port': '5000'}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'port': '5000', 'listen_port': '5000'}, 'keystone_admin': {'enabled': True, 'mode': 'http', 'external': False, 'port': '35357', 'listen_port': '35357'}}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "keystone", "value": {"container_name": "keystone", "dimensions": {}, "enabled": true, "group": "keystone", "haproxy": {"keystone_admin": {"enabled": true, "external": false, "listen_port": "35357", "mode": "http", "port": "35357"}, "keystone_external": {"enabled": true, "external": true, "listen_port": "5000", "mode": "http", "port": "5000"}, "keystone_internal": {"enabled": true, "external": false, "listen_port": "5000", "mode": "http", "port": "5000"}}, "image": "quay.io/osism/keystone:train-latest", "volumes": ["/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "", "kolla_logs:/var/log/kolla/", "keystone_fernet_tokens:/etc/keystone/fernet-keys"]}}, "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 114, in <module>\n  File \"<stdin>\", line 106, in _ansiballz_main\n  File \"<stdin>\", line 49, in invoke_module\n  File \"/tmp/ansible_kolla_docker_payload_fWp65m/__main__.py\", line 28, in <module>\nImportError: No module named docker\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named docker
failed: [testbed-node-1.osism.local] (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'quay.io/osism/keystone-ssh:train-latest', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "keystone-ssh", "value": {"container_name": "keystone_ssh", "dimensions": {}, "enabled": true, "group": "keystone", "image": "quay.io/osism/keystone-ssh:train-latest", "volumes": ["/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "kolla_logs:/var/log/kolla/", "keystone_fernet_tokens:/etc/keystone/fernet-keys"]}}, "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 114, in <module>\n  File \"<stdin>\", line 106, in _ansiballz_main\n  File \"<stdin>\", line 49, in invoke_module\n  File \"/tmp/ansible_kolla_docker_payload_x3rOE9/__main__.py\", line 28, in <module>\nImportError: No module named docker\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named docker
failed: [testbed-node-0.osism.local] (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'quay.io/osism/keystone-ssh:train-latest', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "keystone-ssh", "value": {"container_name": "keystone_ssh", "dimensions": {}, "enabled": true, "group": "keystone", "image": "quay.io/osism/keystone-ssh:train-latest", "volumes": ["/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "kolla_logs:/var/log/kolla/", "keystone_fernet_tokens:/etc/keystone/fernet-keys"]}}, "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 114, in <module>\n  File \"<stdin>\", line 106, in _ansiballz_main\n  File \"<stdin>\", line 49, in invoke_module\n  File \"/tmp/ansible_kolla_docker_payload_ILOIUP/__main__.py\", line 28, in <module>\nImportError: No module named docker\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named docker
failed: [testbed-node-1.osism.local] (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'quay.io/osism/keystone-fernet:train-latest', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "keystone-fernet", "value": {"container_name": "keystone_fernet", "dimensions": {}, "enabled": true, "group": "keystone", "image": "quay.io/osism/keystone-fernet:train-latest", "volumes": ["/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "kolla_logs:/var/log/kolla/", "keystone_fernet_tokens:/etc/keystone/fernet-keys"]}}, "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 114, in <module>\n  File \"<stdin>\", line 106, in _ansiballz_main\n  File \"<stdin>\", line 49, in invoke_module\n  File \"/tmp/ansible_kolla_docker_payload_gBy6Ly/__main__.py\", line 28, in <module>\nImportError: No module named docker\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named docker
failed: [testbed-node-0.osism.local] (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'quay.io/osism/keystone-fernet:train-latest', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "keystone-fernet", "value": {"container_name": "keystone_fernet", "dimensions": {}, "enabled": true, "group": "keystone", "image": "quay.io/osism/keystone-fernet:train-latest", "volumes": ["/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "kolla_logs:/var/log/kolla/", "keystone_fernet_tokens:/etc/keystone/fernet-keys"]}}, "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 114, in <module>\n  File \"<stdin>\", line 106, in _ansiballz_main\n  File \"<stdin>\", line 49, in invoke_module\n  File \"/tmp/ansible_kolla_docker_payload_558mz6/__main__.py\", line 28, in <module>\nImportError: No module named docker\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

RUNNING HANDLER [keystone : Restart keystone container] ************************

RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************

RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************

PLAY RECAP *********************************************************************
testbed-manager.osism.local : ok=9    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-0.osism.local : ok=22   changed=6    unreachable=0    failed=1    skipped=7    rescued=0    ignored=0
testbed-node-1.osism.local : ok=20   changed=6    unreachable=0    failed=1    skipped=6    rescued=0    ignored=0
testbed-node-2.osism.local : ok=9    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
TASK [Create test project] *****************************************************
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Failed to discover available identity versions when contacting http://api.osism.local:5000/v3. Attempting to parse version from URL.\nTraceback (most recent call last):\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/connection.py\", line 160, in _new_conn\n    (self._dns_host, self.port), self.timeout, **extra_kw\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/util/connection.py\", line 84, in create_connection\n    raise err\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/util/connection.py\", line 74, in create_connection\n    sock.connect(sa)\nOSError: [Errno 113] No route to host\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py\", line 677, in urlopen\n    chunked=chunked,\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py\", line 392, in _make_request\n    conn.request(method, url, **httplib_request_kw)\n  File \"/usr/lib/python3.6/http/client.py\", line 1264, in request\n    self._send_request(method, url, body, headers, encode_chunked)\n  File \"/usr/lib/python3.6/http/client.py\", line 1310, in _send_request\n    self.endheaders(body, encode_chunked=encode_chunked)\n  File \"/usr/lib/python3.6/http/client.py\", line 1259, in endheaders\n    self._send_output(message_body, encode_chunked=encode_chunked)\n  File \"/usr/lib/python3.6/http/client.py\", line 1038, in _send_output\n    self.send(msg)\n  File \"/usr/lib/python3.6/http/client.py\", line 976, in send\n    self.connect()\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/connection.py\", line 187, in connect\n    conn = self._new_conn()\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/connection.py\", line 172, in _new_conn\n    self, \"Failed to establish a new connection: %s\" % e\nurllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f5c15f78780>: Failed to establish a new connection: [Errno 113] No route to host\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File \"/usr/local/lib/python3.6/dist-packages/requests/adapters.py\", line 449, in send\n    timeout=timeout\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py\", line 725, in urlopen\n    method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]\n  File \"/usr/local/lib/python3.6/dist-packages/urllib3/util/retry.py\", line 439, in increment\n    raise MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='api.osism.local', port=5000): Max retries exceeded with url: /v3/auth/tokens (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f5c15f78780>: Failed to establish a new connection: [Errno 113] No route to host',))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/session.py\", line 1004, in _send_request\n    resp = self.session.request(method, url, **kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/requests/sessions.py\", line 530, in request\n    resp = self.send(prep, **send_kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/requests/sessions.py\", line 643, in send\n    r = adapter.send(request, **kwargs)\n  File 
\"/usr/local/lib/python3.6/dist-packages/requests/adapters.py\", line 516, in send\n    raise ConnectionError(e, request=request)\nrequests.exceptions.ConnectionError: HTTPConnectionPool(host='api.osism.local', port=5000): Max retries exceeded with url: /v3/auth/tokens (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f5c15f78780>: Failed to establish a new connection: [Errno 113] No route to host',))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File \"<stdin>\", line 102, in <module>\n  File \"<stdin>\", line 94, in _ansiballz_main\n  File \"<stdin>\", line 40, in invoke_module\n  File \"/usr/lib/python3.6/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib/python3.6/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_os_project_payload_zmu7qz1l/ansible_os_project_payload.zip/ansible/modules/cloud/openstack/os_project.py\", line 211, in <module>\n  File \"/tmp/ansible_os_project_payload_zmu7qz1l/ansible_os_project_payload.zip/ansible/modules/cloud/openstack/os_project.py\", line 174, in main\n  File \"/usr/local/lib/python3.6/dist-packages/openstack/cloud/_identity.py\", line 99, in get_project\n    domain_id=domain_id)\n  File \"/usr/local/lib/python3.6/dist-packages/openstack/cloud/_utils.py\", line 205, in _get_entity\n    entities = search(name_or_id, filters, **kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/openstack/cloud/_identity.py\", line 84, in search_projects\n    domain_id=domain_id, name_or_id=name_or_id, filters=filters)\n  File \"/usr/local/lib/python3.6/dist-packages/openstack/cloud/_identity.py\", line 56, in list_projects\n    if self._is_client_version('identity', 3):\n  File \"/usr/local/lib/python3.6/dist-packages/openstack/cloud/openstackcloud.py\", line 461, in _is_client_version\n    client = getattr(self, client_name)\n  File \"/usr/local/lib/python3.6/dist-packages/openstack/cloud/_identity.py\", line 32, in _identity_client\n    'identity', min_version=2, max_version='3.latest')\n  File \"/usr/local/lib/python3.6/dist-packages/openstack/cloud/openstackcloud.py\", line 408, in _get_versioned_client\n    if adapter.get_endpoint():\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/adapter.py\", line 282, in get_endpoint\n    return self.session.get_endpoint(auth or self.auth, **kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/session.py\", line 1225, in get_endpoint\n    return auth.get_endpoint(self, **kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/identity/base.py\", line 380, in get_endpoint\n    allow_version_hack=allow_version_hack, **kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/identity/base.py\", line 271, in get_endpoint_data\n    service_catalog = self.get_access(session).service_catalog\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/identity/base.py\", line 134, in get_access\n    self.auth_ref = self.get_auth_ref(session)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/identity/generic/base.py\", line 208, in get_auth_ref\n    return self._plugin.get_auth_ref(session, **kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/identity/v3/base.py\", line 184, in get_auth_ref\n    authenticated=False, 
log=False, **rkwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/session.py\", line 1131, in post\n    return self.request(url, 'POST', **kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/session.py\", line 913, in request\n    resp = send(**kwargs)\n  File \"/usr/local/lib/python3.6/dist-packages/keystoneauth1/session.py\", line 1020, in _send_request\n    raise exceptions.ConnectFailure(msg)\nkeystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to http://api.osism.local:5000/v3/auth/tokens: HTTPConnectionPool(host='api.osism.local', port=5000): Max retries exceeded with url: /v3/auth/tokens (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f5c15f78780>: Failed to establish a new connection: [Errno 113] No route to host',))\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

PLAY RECAP *********************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
testbed-manager.osism.local : ok=3    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
TASK [service-ks-register : heat | Creating services] **************************
FAILED - RETRYING: heat | Creating services (5 retries left).
FAILED - RETRYING: heat | Creating services (4 retries left).
FAILED - RETRYING: heat | Creating services (3 retries left).
FAILED - RETRYING: heat | Creating services (2 retries left).

Thus, it's logical that every single OpenStack service will fail, because the haproxy and keepalived containers are not running.
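
A quick way to confirm that root cause on a node: the ImportError: No module named docker above comes from Ansible's kolla_docker module, which needs the Python Docker SDK on the target host. A sketch of a manual check (the pip install is a hypothetical remedy, not necessarily the supported fix):

ssh dragon@testbed-node-0
python -c "import docker"   # reproduces the ImportError if the SDK is missing
sudo pip install docker     # hypothetical manual fix; re-run the deploy afterwards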

dragon@testbed-node-0:~$ sudo docker ps
CONTAINER ID        IMAGE                               COMMAND                  CREATED             STATUS              PORTS               NAMES
07f8b15064d0        osism/ceph-daemon:nautilus-latest   "/opt/ceph-container…"   5 minutes ago       Up 5 minutes                            ceph-rgw-testbed-node-0-rgw0
1e5d840866ac        osism/ceph-daemon:nautilus-latest   "/opt/ceph-container…"   11 minutes ago      Up 11 minutes                           ceph-mds-testbed-node-0
c094c4e577ce        osism/ceph-daemon:nautilus-latest   "/opt/ceph-container…"   15 minutes ago      Up 15 minutes                           ceph-osd-3
f8cf6abc2adc        osism/ceph-daemon:nautilus-latest   "/opt/ceph-container…"   15 minutes ago      Up 15 minutes                           ceph-osd-1
2f6b1999747e        osism/ceph-daemon:nautilus-latest   "/opt/ceph-container…"   17 minutes ago      Up 17 minutes                           ceph-mgr-testbed-node-0
87a0ce0bb686        osism/ceph-daemon:nautilus-latest   "/opt/ceph-container…"   19 minutes ago      Up 18 minutes                           ceph-mon-testbed-node-0

The manager is fine, though:

dragon@testbed-manager:~$ sudo docker ps
CONTAINER ID        IMAGE                                   COMMAND                  CREATED             STATUS                    PORTS                                           NAMES
ac65f95b174e        osism/cephclient:nautilus               "/usr/bin/dumb-init …"   4 minutes ago       Up 4 minutes                                                              cephclient_cephclient_1
c844d430be1e        osism/openstackclient:train             "/usr/bin/dumb-init …"   33 minutes ago      Up 33 minutes                                                             openstackclient_openstackclient_1
d9b9f7fb6985        osism/phpmyadmin:latest                 "/docker-entrypoint.…"   34 minutes ago      Up 34 minutes (healthy)   192.168.40.5:8110->80/tcp                       phpmyadmin_phpmyadmin_1
9ee8d2b6585c        rpardini/docker-registry-proxy:latest   "/entrypoint.sh"         53 minutes ago      Up 53 minutes (healthy)   80/tcp, 8081/tcp, 192.168.50.5:8000->3128/tcp   registry-proxy
4356eeb879a9        osism/kolla-ansible:train               "/usr/bin/dumb-init …"   59 minutes ago      Up 37 minutes                                                             manager_kolla-ansible_1
7db050107c5d        osism/ceph-ansible:nautilus             "/usr/bin/dumb-init …"   59 minutes ago      Up 37 minutes                                                             manager_ceph-ansible_1
a837cf60d126        osism/osism-ansible:latest              "/usr/bin/dumb-init …"   59 minutes ago      Up 37 minutes                                                             manager_osism-ansible_1
64bd877eb04d        osism/ara-server:latest                 "sh -c '/wait && /ru…"   59 minutes ago      Up 37 minutes (healthy)   192.168.40.5:8120->8000/tcp                     manager_ara-server_1
414c33fd2c5b        osism/mariadb:latest                    "docker-entrypoint.s…"   59 minutes ago      Up 37 minutes (healthy)   3306/tcp                                        manager_database_1
531992da510a        osism/redis:latest                      "docker-entrypoint.s…"   59 minutes ago      Up 37 minutes (healthy)   6379/tcp                                        manager_cache_1

Thank you

berendt commented on June 18, 2024

Please just run a single make deploy (with your environment). Then only the manager and the nodes are prepared; this saves time during debugging.

Then run osism-kolla deploy common, followed by osism-kolla deploy haproxy.
This deploys only HAProxy.

Please look at testbed-node-0 and testbed-node-1 to see what the HAProxy or Keepalived error is.

Alternatively, if the environment is still running, you can purge OpenStack and Ceph (https://docs.osism.de/testbed/usage.html#purge-services). Then you do not have to rebuild completely.

osism-kolla _ purge
osism-ceph purge-container-cluster

from testbed.

berendt commented on June 18, 2024

Please reopen if there is any further need.
