vultr / ansible-collection-vultr
License: GNU General Public License v3.0
Is your feature request related to a problem? Please describe.
When removing UFW and running iptables -F, the SSH connection is dropped.
I'd like to be able to reboot the instance(s) ( https://www.vultr.com/api/#tag/instances/operation/reboot-instance / https://www.vultr.com/api/#tag/instances/operation/reboot-instances ) via the API and do a hard reset, as this is the only option left after removing UFW.
Describe the solution you'd like
Ability to hard reset ( https://www.vultr.com/api/#tag/instances/operation/reboot-instance / https://www.vultr.com/api/#tag/instances/operation/reboot-instances ) via the API
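Until such a state exists, a rough interim sketch of calling the reboot endpoint directly with ansible.builtin.uri; the instance ID and API key variables here are placeholders:
- name: Hard-reboot a Vultr instance via the v2 API (workaround sketch)
  ansible.builtin.uri:
    url: "https://api.vultr.com/v2/instances/{{ vultr_instance_id }}/reboot"
    method: POST
    headers:
      Authorization: "Bearer {{ vultr_api_key }}"
    status_code: 204  # the reboot endpoint is expected to return 204 No Content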
Describe alternatives you've considered
There aren't any
Additional context
No additional context
Describe the bug
The files listed below should not be runnable with Python because they use relative imports. But thanks to their shebangs they are runnable, which is wrong because running them always throws an error.
Listed files:
ansible-collection-vultr/plugins/inventory/vultr.py
ansible-collection-vultr/plugins/modules/account_info.py
ansible-collection-vultr/plugins/modules/block_storage.py
ansible-collection-vultr/plugins/modules/block_storage_info.py
ansible-collection-vultr/plugins/modules/dns_domain.py
ansible-collection-vultr/plugins/modules/dns_domain_info.py
ansible-collection-vultr/plugins/modules/dns_record.py
ansible-collection-vultr/plugins/modules/firewall_group.py
ansible-collection-vultr/plugins/modules/firewall_group_info.py
ansible-collection-vultr/plugins/modules/firewall_rule.py
ansible-collection-vultr/plugins/modules/firewall_rule_info.py
ansible-collection-vultr/plugins/modules/instance.py
ansible-collection-vultr/plugins/modules/instance_info.py
ansible-collection-vultr/plugins/modules/os_info.py
ansible-collection-vultr/plugins/modules/plan_info.py
ansible-collection-vultr/plugins/modules/plan_metal_info.py
ansible-collection-vultr/plugins/modules/region_info.py
ansible-collection-vultr/plugins/modules/reserved_ip.py
ansible-collection-vultr/plugins/modules/snapshot.py
ansible-collection-vultr/plugins/modules/snapshot_info.py
ansible-collection-vultr/plugins/modules/ssh_key.py
ansible-collection-vultr/plugins/modules/ssh_key_info.py
ansible-collection-vultr/plugins/modules/startup_script.py
ansible-collection-vultr/plugins/modules/startup_script_info.py
ansible-collection-vultr/plugins/modules/user.py
ansible-collection-vultr/plugins/modules/user_info.py
ansible-collection-vultr/plugins/modules/vpc.py
ansible-collection-vultr/plugins/modules/vpc_info.py
To Reproduce
Steps to reproduce the behavior:
git clone https://github.com/aueam/ansible-collection-vultr.git
cd ansible-collection-vultr/plugins/inventory/
chmod +x vultr.py
./vultr.py
Traceback (most recent call last):
File "/tmp/ansible-collection-vultr/plugins/inventory/./vultr.py", line 160, in <module>
from ..module_utils.vultr_v2 import VULTR_USER_AGENT
ImportError: attempted relative import with no known parent package
Expected behavior
I expected them not to be runnable.
Is your feature request related to a problem? Please describe.
I'm migrating from the old "vultr_server" module (https://docs.ansible.com/ansible/2.8/modules/vultr_server_module.html) (late to the party) and am looking for an equivalent to private_network_enabled: true.
This is mostly provided by the API v2 options enable_vpc or enable_vpc2 (https://www.vultr.com/api/#tag/instances/operation/create-instance).
Describe the solution you'd like
A simple way to set up a default VPC when creating an instance, using enable_vpc or enable_vpc2 in the API.
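A rough sketch of how this could look on the module, assuming a hypothetical enable_vpc parameter that is not currently part of vultr.cloud.instance; all values are placeholders:
- name: Deploy an instance attached to the region's default VPC (hypothetical enable_vpc parameter)
  vultr.cloud.instance:
    label: web-1
    hostname: web-1
    region: ams
    plan: vc2-1c-1gb
    os: Debian 12 x64
    enable_vpc: true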
Describe alternatives you've considered
Manual setup of VPCs. However, I just need one default VPC (which seems like the simplest use case), and this is what the API provides.
Describe the bug
An instance provisioned with multiple tags only has one of them in the instance.tag field, and there is no instance.tags (or similar) field to hold the rest of the assigned tags. This makes it impossible to coherently organize the Ansible inventory by tags.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
All tags assigned to the server should be reported by ansible-inventory
$ ansible-inventory --version
ansible-inventory [core 2.13.7]
config file = /home/rex/.ansible.cfg
configured module search path = ['/home/rex/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/rex/venv/ansible_env/3.x/lib/python3.8/site-packages/ansible
ansible collection location = /home/rex/.ansible/collections:/usr/share/ansible/collections
executable location = /home/rex/venv/ansible_env/3.x/bin/ansible-inventory
python version = 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
Describe the bug
The Ansible hostvars variable is missing vultr_gateway_v4.
To Reproduce
Steps to reproduce the behavior:
inventory.vultr.yml
plugin: vultr.cloud.vultr
api_key: "[REMOVED]"
compose:
  ansible_host: vultr_internal_ip or vultr_v6_main_ip or vultr_main_ip
keyed_groups:
  - key: vultr_tags | lower
    prefix: ''
    separator: ''
strict: true
info.yml
---
- name: Info playbook
  hosts:
    - localhost
  tasks:
    - name: hostvars
      ansible.builtin.debug:
        var: hostvars
Debug:
"vultr_hostname": "[REMOVED]",
"vultr_id": "[REMOVED]",
"vultr_internal_ip": "[REMOVED]",
"vultr_label": "[REMOVED]",
"vultr_main_ip": "[REMOVED]",
"vultr_plan": "[REMOVED]",
"vultr_region": "[REMOVED]",
"vultr_tags": [
"[REMOVED]"
],
"vultr_v6_main_ip": "[REMOVED]"
Expected behavior
A vultr_gateway_v4 variable with the value of the IPv4 gateway.
Desktop (please complete the following information where applicable):
ansible [core 2.13.4]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/etc/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /etc/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.16 (main, Dec 16 2022, 16:48:31) [Clang 13.0.0 ]
jinja version = 3.1.2
libyaml = True
Notes
gateway_v4 does show up when using vultr.cloud.instance_info.
Is your feature request related to a problem? Please describe.
https://docs.ansible.com/ansible/latest/collections/ngine_io/vultr/vultr_server_module.html#ansible-collections-ngine-io-vultr-vultr-server-module supported reinstallation of existing servers. I'd like to use this feature in CI pipelines for cost optimization of tasks that run for only a few minutes.
Describe the solution you'd like
Implement state: reinstalled.
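A minimal sketch of how this could look in a playbook, assuming a hypothetical state value; label, region, plan, and os are placeholders:
- name: Reinstall an existing instance (hypothetical state value)
  vultr.cloud.instance:
    label: ci-runner-01
    region: fra
    plan: vc2-1c-1gb
    os: Debian 12 x64
    state: reinstalled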
Describe alternatives you've considered
The old collection, but it's too much trouble.
Additional context
I couldn't find anything in the Issues or PRs, so opening this.
Describe the bug
The documentation for instance_info indicates that name is an alias for label; however, it currently behaves as an alias for region instead.
To Reproduce
Steps to reproduce the behavior:
---
- name: Instance
  hosts: localhost
  tasks:
    - name: Deploy Instance
      vultr.cloud.instance:
        label: foo
        region: fra
        plan: vc2-1c-1gb
        os: Ubuntu 22.04 LTS x64
    - name: Get Instance
      vultr.cloud.instance_info:
        name: foo
      register: results
    - debug:
        var: results
    - name: Get Instance
      vultr.cloud.instance_info:
        name: fra
      register: results
    - debug:
        var: results
Additional context
A pull request should be linked to this issue to fix this trivial bug
There is no option for block_type on vultr.cloud.block_storage.
This is important because NVMe (block_type: high_perf) is not available in all regions. For example, I have a VPS in Atlanta and I want to add block storage in the same datacenter. My only option is HDD (block_type: storage_opt), which this module cannot handle. Additionally, cost can be an issue, as NVMe costs four times as much as HDD.
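For illustration, a sketch of the requested option; block_type is hypothetical here and the label, region, and size values are placeholders:
- name: Create HDD block storage in Atlanta (hypothetical block_type parameter)
  vultr.cloud.block_storage:
    label: my-volume
    region: atl
    size_gb: 40
    block_type: storage_opt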
Is your feature request related to a problem? Please describe.
Currently the instance module does not handle VPCs.
Describe the solution you'd like
The instance module should also handle VPCs.
Is your feature request related to a problem? Please describe.
No. It is a "makes life easier" kind of feature.
Describe the solution you'd like
When creating a Vultr instance, you can define a "user_scheme" (see https://www.vultr.com/api/#tag/instances/operation/create-instance). By default it is "root", but I always need "limited" for security reasons.
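A sketch of how this could look, assuming a hypothetical user_scheme parameter mirroring the API field; all other values are placeholders:
- name: Deploy an instance with a limited (non-root) user scheme (hypothetical user_scheme parameter)
  vultr.cloud.instance:
    label: app-1
    hostname: app-1
    region: fra
    plan: vc2-1c-1gb
    os: Debian 12 x64
    user_scheme: limited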
Describe alternatives you've considered
It is easy to set this up afterwards, but doing it upon instance creation is easier.
Describe the bug
fatal: [testhost]: FAILED! => {"changed": false, "fetch_url_info": {"body": "{\"error\":\"Unable to destroy server: Unable to remove VM: Server is currently locked\",\"status\":500}", "connection": "close", "content-type": "application/json", "date": "Tue, 13 Feb 2024 04:11:34 GMT", "msg": "HTTP Error 500: Internal Server Error", "server": "nginx", "status": 500, "strict-transport-security": "max-age=63072000", "transfer-encoding": "chunked", "url": "https://api.vultr.com/v2/instances/3d9ca35e-1cdd-4e46-9db8-b253756abfc3", "x-content-type-options": "nosniff", "x-frame-options": "DENY", "x-robots-tag": "noindex,noarchive", "x-user": "[email protected]"}, "msg": "Failure while calling the Vultr API v2 with DELETE for \"/instances/3d9ca35e-1cdd-4e46-9db8-b253756abfc3\"."}
Expected behavior
Retry while the server is locked; but since the status is 500, it is currently treated as a fatal error.
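Until the module retries on its own, a possible playbook-level workaround sketch; the label and region are placeholders:
- name: Destroy instance, retrying while the server is still locked (workaround sketch)
  vultr.cloud.instance:
    label: testhost
    region: fra
    state: absent
  register: destroy_result
  retries: 10   # retry up to 10 times
  delay: 30     # wait 30 seconds between attempts
  until: destroy_result is not failed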
Describe the bug
When trying to create an instance, it fails with "Backups option must be either enabled or disabled." even though backups: false is set in the playbook.
I have tried setting backups to false, no, and 0, with the same result.
To Reproduce
Steps to reproduce the behavior:
Create vultr.playbook.yml as in the example below.
Run ansible-playbook -i localhost vultr.playbook.yml
...
TASK [create vultr instance] ******************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "fetch_url_info": {"body": "{\"error\":\"Backups option must be either enabled or disabled.\",\"status\":400}", "connection": "close", "content-type": "application/json", "date": "Thu, 27 Oct 2022 19:03:22 GMT", "msg": "HTTP Error 400: Bad Request", "server": "nginx", "status": 400, "strict-transport-security": "max-age=31536000", "transfer-encoding": "chunked", "url": "https://api.vultr.com/v2/instances", "x-content-type-options": "nosniff", "x-frame-options": "DENY", "x-robots-tag": "noindex,noarchive", "x-user": <my user name>"}, "msg": "Failure while calling the Vultr API v2 with POST for \"/instances\"."
...
Expected behavior
Expect the playbook to execute without errors and an instance to be created.
Desktop (please complete the following information where applicable):
ansible --version
ansible [core 2.13.5]
config file = None
configured module search path = ['/home/me/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/me/a5e/ansible-role-nats/venv/lib/python3.8/site-packages/ansible
ansible collection location = /home/me/.ansible/collections:/usr/share/ansible/collections
executable location = /home/me/a5e/ansible-role-nats/venv/bin/ansible
python version = 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
Additional context
---
# vultr.playbook.yml
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Gather Vultr regions information
      vultr.cloud.region_info:
      register: result
    - name: Pick region
      ansible.builtin.set_fact:
        region: "{{ result.vultr_region_info | selectattr('country', '==', 'SE') | first }}"
    - name: Gather Vultr plans information
      vultr.cloud.plan_info:
      register: result
    - name: Pick plan
      ansible.builtin.set_fact:
        plan: "{{ result.vultr_plan_info | sort(attribute='monthly_cost') | first }}"
    - name: pickings
      ansible.builtin.debug:
        msg: "{{ region.id }} {{ plan.id }}"
    - name: create vultr instance
      vultr.cloud.instance:
        label: molecule-test
        hostname: molecule-test
        plan: "{{ plan.id }}"
        ddos_protection: no
        backups: false
        enable_ipv6: no
        ssh_keys:
          - mysshkey
        region: "{{ region.id }}"
        os: Debian 11 x64 (bullseye)
Is your feature request related to a problem? Please describe.
Surfing through https://docs.ansible.com/ansible/latest/collections/vultr/cloud/index.html , it looks like object storage isn't supported currently, unless I misread?
Describe the solution you'd like
For object storage to be supported and available via the Ansible collection
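For illustration only, a sketch of what such a module could look like; the module name, parameters, and cluster value are all hypothetical:
- name: Create an object storage subscription (hypothetical module and parameters)
  vultr.cloud.object_storage:
    label: my-object-store
    cluster: ewr1
    state: present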
Describe alternatives you've considered
There aren't any viable alternatives
When deploying a large number of instances, it's very useful to skip waiting for each instance to become online.
If it takes 1-2 minutes per instance to become available and I'm deploying 100 nodes, that's easily 2 hours of waiting, while I just want to deploy all 100 in one go and wait 2 minutes to have them all online.
for comparison see DO module:
https://docs.ansible.com/ansible/latest/collections/community/digitalocean/digital_ocean_droplet_module.html#parameter-wait
Describe the solution you'd like
add this flag:
wait: true|false
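A sketch of the requested flag in use, assuming a hypothetical wait parameter on vultr.cloud.instance; the other values are placeholders:
- name: Deploy many instances without waiting for each to come online (hypothetical wait parameter)
  vultr.cloud.instance:
    label: "node-{{ item }}"
    hostname: "node-{{ item }}"
    region: ams
    plan: vc2-1c-1gb
    os: Debian 12 x64
    wait: false
  loop: "{{ range(1, 101) | list }}"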
Describe alternatives you've considered
Additional context
Describe the bug
Creating an instance with the vultr.cloud.instance module will not pass any configured startup_scripts to the API. This is possibly due to the module key startup_script_id not matching the API key script_id.
To Reproduce
Steps to reproduce the behavior:
Create a startup script with the vultr.cloud.startup_script module.
Create an instance with the vultr.cloud.instance module, with the previously created script as a parameter.
Expected behavior
On the instance, the /tmp/firstboot.exec file should be rendered as specified and should have been executed on first boot.
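A minimal playbook sketch of the two reproduction steps above; the script name, script contents, and instance values are illustrative:
- name: Create a startup script
  vultr.cloud.startup_script:
    name: firstboot
    script: |
      #!/bin/sh
      echo "first boot" > /tmp/firstboot.exec
- name: Deploy an instance that should run the script on first boot
  vultr.cloud.instance:
    label: firstboot-test
    region: fra
    plan: vc2-1c-1gb
    os: Debian 12 x64
    startup_script: firstboot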
Describe the bug
I've tried to create an instance using SSH keys, but when I log in, .ssh/authorized_keys is empty, and no password information is returned in Ansible.
From /var/log/cloud-init.log:
2022-11-13 23:36:41,103 - util.py[DEBUG]: Changing the ownership of /root/.ssh to 0:0
2022-11-13 23:36:41,103 - util.py[DEBUG]: Writing to /root/.ssh/authorized_keys - wb: [600] 0 bytes
2022-11-13 23:36:41,103 - util.py[DEBUG]: Changing the ownership of /root/.ssh/authorized_keys to 0:0
2022-11-13 23:36:41,104 - util.py[DEBUG]: Reading from /root/.ssh/authorized_keys (quiet=False)
2022-11-13 23:36:41,104 - util.py[DEBUG]: Read 0 bytes from /root/.ssh/authorized_keys
2022-11-13 23:36:41,104 - util.py[DEBUG]: Writing to /root/.ssh/authorized_keys - wb: [600] 0 bytes
2022-11-13 23:36:41,105 - util.py[DEBUG]: Reading from /etc/ssh/sshd_config (quiet=False)
2022-11-13 23:36:41,105 - util.py[DEBUG]: Read 3298 bytes from /etc/ssh/sshd_config
2022-11-13 23:36:41,105 - util.py[DEBUG]: Reading from /root/.ssh/authorized_keys (quiet=False)
2022-11-13 23:36:41,105 - util.py[DEBUG]: Read 0 bytes from /root/.ssh/authorized_keys
2022-11-13 23:36:41,105 - util.py[DEBUG]: Writing to /root/.ssh/authorized_keys - wb: [600] 0 bytes
and
:~# cat .ssh/authorized_keys
:~#
In cloud-config I can see password auth being enabled and roots password being set:
"#cloud-config\n{\"package_upgrade\":true,\"disable_root\":false,\"manage_etc_hosts\":true,\"system_info\":{\"default_user\":{\"name\":\"root\"}},\"ssh_pwauth\":1,\"chpasswd\":{\"list\":[\"root:$6$oH6edejpgFjgnkPT$[etc etc]1\"],\"expire\":false}}",
To Reproduce
- name: Create Instance
  vultr.cloud.instance:
    api_key: "{{ vultr_api_key }}"
    activation_email: "{{ hostvars[instance_hostname].instance_activation_email |default('no') }}"
    enable_ipv6: false
    backups: "{{ hostvars[instance_hostname].instance_backups |default('no') }}"
    firewall_group: "{{ firewall_group_result.vultr_firewall_group.description |default(fallback_vultr_firewall_group) }}"
    hostname: "{{ instance_hostname }}"
    label: "{{ hostvars[instance_hostname].instance_label }}"
    os: "{{ hostvars[instance_hostname].instance_os }}"
    plan: "{{ hostvars[instance_hostname].instance_plan }}"
    region: "{{ hostvars[instance_hostname].deployment_region }}"
    ssh_keys: "{{ hostvars[instance_hostname].instance_ssh_keys }}"
    # startup_script: "{{ hostvars[instance_hostname].startup_script_result }}"
    state: present
  register: instance_result
The ssh keys variable looks like this (not that it's relevant to the point of this issue):
group_vars/all:instance_ssh_keys: [ 'Karl (butter)', 'Monica (laptop)' ]
Edit to note: I tried this with and without (); keys were not installed. Should I raise a separate question about that?
Expected behavior
If SSH keys are not correctly installed, root's password should be returned via Ansible so the next steps can be done via password if required.
Additional context
$ ansible-playbook --version
ansible-playbook [core 2.13.4]
config file = None
configured module search path = ['/home/kgoetz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/kgoetz/.venvs/ansible_core/lib/python3.9/site-packages/ansible
ansible collection location = /home/kgoetz/.ansible/collections:/usr/share/ansible/collections
executable location = /home/kgoetz/.venvs/ansible_core/bin/ansible-playbook
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
jinja version = 3.1.2
libyaml = True
collection version
$ ansible-galaxy collection list |grep vultr
vultr.cloud 1.3.0
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'id'
fatal: [testhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 141, in <module>\n File \"<stdin>\", line 133, in _ansiballz_main\n File \"<stdin>\", line 81, in invoke_module\n File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_vultr.cloud.snapshot_payload_n3vyutrv/ansible_vultr.cloud.snapshot_payload.zip/ansible_collections/vultr/cloud/plugins/modules/snapshot.py\", line 218, in <module>\n File \"/tmp/ansible_vultr.cloud.snapshot_payload_n3vyutrv/ansible_vultr.cloud.snapshot_payload.zip/ansible_collections/vultr/cloud/plugins/modules/snapshot.py\", line 214, in main\n File \"/tmp/ansible_vultr.cloud.snapshot_payload_n3vyutrv/ansible_vultr.cloud.snapshot_payload.zip/ansible_collections/vultr/cloud/plugins/module_utils/vultr_v2.py\", line 302, in present\n File \"/tmp/ansible_vultr.cloud.snapshot_payload_n3vyutrv/ansible_vultr.cloud.snapshot_payload.zip/ansible_collections/vultr/cloud/plugins/module_utils/vultr_v2.py\", line 296, in create_or_update\n File \"/tmp/ansible_vultr.cloud.snapshot_payload_n3vyutrv/ansible_vultr.cloud.snapshot_payload.zip/ansible_collections/vultr/cloud/plugins/modules/snapshot.py\", line 174, in create\n File \"/tmp/ansible_vultr.cloud.snapshot_payload_n3vyutrv/ansible_vultr.cloud.snapshot_payload.zip/ansible_collections/vultr/cloud/plugins/module_utils/vultr_v2.py\", line 275, in wait_for_state\nKeyError: 'id'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
Would it be possible to add a VPC 2.0 parameter for vultr.cloud.instance? At the moment there is just vpcs, which only supports v1.
Thanks & Kind regards,
David
Describe the bug
vultr.cloud is unable to create bare metal instances (perhaps due to the switch to the v2 API?).
To Reproduce
Run a playbook with "plan: vbm-6c-32gb" (or another bare metal offering).
Screenshots
TASK [vultr : create an instance using OS] ************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "fetch_url_info": {"body": "{"error":"Invalid plan chosen.","status":400}", "connection": "close", "content-type": "application/json", "date": "Tue, 08 Aug 2023 04:47:10 GMT", "msg": "HTTP Error 400: Bad Request", "server": "nginx", "status": 400, "strict-transport-security": "max-age=63072000", "transfer-encoding": "chunked", "url": "https://api.vultr.com/v2/instances", "x-content-type-options": "nosniff", "x-frame-options": "DENY", "x-robots-tag": "noindex,noarchive", "x-user": "[email protected]"}, "msg": "Failure while calling the Vultr API v2 with POST for "/instances"."}
Additional context
I can manually create a bare metal instance on the Vultr website or through vultr-cli.
Bare metals need to be created with a POST request to /bare_metals, so requests to /instances won't work.
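As a stopgap, a sketch of creating a bare metal server directly through the API with ansible.builtin.uri; the exact endpoint path, body fields, and os_id should be checked against the API docs, and all values here are placeholders:
- name: Create a bare metal server via the Vultr v2 API (workaround sketch)
  ansible.builtin.uri:
    url: "https://api.vultr.com/v2/bare-metals"  # check the exact path against the API docs
    method: POST
    headers:
      Authorization: "Bearer {{ vultr_api_key }}"
    body_format: json
    body:
      region: ewr
      plan: vbm-6c-32gb
      os_id: 1743        # placeholder OS id; look up the desired OS via the /v2/os endpoint
    status_code: 202     # create calls are expected to return 202 Accepted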
Describe the bug
After reviewing the account that the tests are run in, I noticed a large number of instances that have not been removed. The behavior of the scripts needs to be updated to ensure that all resources are removed at the end of the run.
+ Cleaning instances
- Deleting ansible-test-58206534-fv-az457-575_vm_for_attachment - ad0a1ee7-d1ba-45a4-a749-55b3d60c6216
- Deleting ansible-test-29567499-fv-az241-340_vm_for_attachment - 12ed1fda-ffcc-4a8c-9b5f-8aef1a7a77b3
- Deleting ansible-test-95539053-fv-az264-892_vm_for_attachment - 95e32594-601b-4196-a669-ae54394f806e
- Deleting ansible-test-30742365-fv-az571-443_vm_for_attachment - 26284b71-f9d2-48f0-b650-608219465dad
- Deleting ansible-test-41051901-fv-az399-96_vm_for_attachment - d2cbb0f5-d0ee-416c-a202-014d3a955d91
- Deleting ansible-test-55556826-fv-az259-289_vm_for_attachment - 93ffed8c-cfb3-4000-bae3-6bf2f364388b
- Deleting ansible-test-53849020-fv-az41-963_vm_for_attachment - 5f37ef4e-fb96-4206-b269-1edcfe9364a7
- Deleting ansible-test-57154596-fv-az41-963_vm_for_attachment - 330c427b-b69c-4862-bb28-364fdc2c7215
- Deleting ansible-test-68561648-fv-az264-892_vm_for_attachment - ff51739b-541f-493c-add2-aa4a5f45c7cb
- Deleting ansible-test-98480117-fv-az554-228_vm_for_attachment - faa416e3-bfcd-4683-acfa-889ee1e4a5ef
- Deleting ansible-test-98273835-fv-az213-499_vm_for_attachment - a7fc6088-f78b-48f7-964e-535c74d264aa
- Deleting ansible-test-90207754-fv-az460-75_vm_for_attachment - 691b2843-c4d8-428f-ab69-b9eff6445e32
- Deleting ansible-test-99445897-fv-az180-114_vm_for_attachment - 5cc8b9cd-4ba1-4628-a8d2-781e7ab1e3ed
- Deleting ansible-test-14927640-fv-az341-509_vm_for_attachment - cc1c94f5-9eb8-48e4-bd98-83bf40a79296
- Deleting ansible-test-71858876-fv-az222-453_vm_for_attachment - ce812830-14ea-4b28-b6bd-fe794347c899
- Deleting ansible-test-72022827-fv-az203-353_vm_for_attachment - 8f02071b-7746-4ec2-9e74-b58bc0a06457
- Deleting ansible-test-63756813-fv-az208-571_vm_for_attachment - 66e0338c-6a7a-4644-b10f-21838049fdc7
- Deleting ansible-test-42203396-fv-az83-370_vm_for_attachment - c8fe924e-fb62-4b09-8c50-4970de1ea951
- Deleting ansible-test-36550713-fv-az345-706_vm_for_attachment - b5026d38-d1ae-4724-9f22-ce7341861f45
- Deleting ansible-test-32412766-fv-az77-505_vm_for_attachment - 35c3fd4e-ad41-4619-a0e3-1f969e7650f6
- Deleting ansible-test-47962799-fv-az203-353_vm_for_attachment - cb7812ab-2752-4762-9cba-4c848bfe833c
- Deleting ansible-test-89186448-fv-az377-906_vm_for_attachment - 41f88eec-422c-4602-8b0a-4a2042877bc9
- Deleting ansible-test-15512236-fv-az77-505_vm_for_attachment - ca7d6368-032e-457b-8a40-cbb84defa219
- Deleting ansible-test-44393646-fv-az154-459_os1 - ec76e7b2-94cf-4de0-aa2d-63e410601589
- Deleting ansible-test-59959984-fv-az334-647_os1 - 7d2519f6-e870-4d69-8be0-85a10740cfba
- Deleting ansible-test-83430314-fv-az91-21_os1 - 77ee4563-4ed9-49c1-b9c4-d2c8c616c8cc
- Deleting ansible-test-71595837-fv-az292-792_os1 - b4722340-605d-4117-9a71-dd64470514f4
- Deleting ansible-test-55851971-fv-az391-222_os1 - a97544ad-7e54-48db-bd57-eb6e094e5fef
- Deleting ansible-test-55851971-fv-az391-222_info1 - f7f8ce9c-d464-4de5-b8e9-30eed9867f27
- Deleting ansible-test-55851971-fv-az391-222_info2 - f06734e1-e941-4f4a-99b1-c6a32fa54387
+ Instances cleaned
+ No bare metals to remove
+ No load balancers to remove
+ No snapshots to remove
+ Cleaning block storage
- Deleting ansible-test-fv-az77-681-61434800-volume - d5e4e4b3-7403-40da-8a4d-7176ad46ada3
- Deleting ansible-test-36038483-fv-az127-139-volume - 96d4b858-6381-4db5-8f13-b285d8548825
- Deleting ansible-test-40355126-fv-az246-917-volume - 4e60daec-a2cf-46c9-98ef-7b7f38721680
- Deleting ansible-test-76871637-fv-az194-118-volume - 8c5f28b4-2846-45d0-8109-636aa9d829ac
- Deleting ansible-test-13579941-fv-az183-630-volume - a008662e-f064-4661-b21c-003d8e58f2ed
- Deleting ansible-test-98693036-fv-az246-949-volume - 1dc4d6fd-a5b0-4942-8297-274920c461d4
+ Block Storage cleaned
Expected behavior
Clean the resources created regardless of whether or not the tests passed/failed.
Describe the bug
The api_key value is not being recognised.
To Reproduce
Steps to reproduce the behavior:
./vultr.yaml
---
plugin: vultr.cloud.vultr
api_key: abc123
view_plans.yaml
---
- name: View the plans
  hosts: localhost
  tasks:
    - name: Gather Vultr regions information
      vultr.cloud.region_info:
      register: region
    - name: Print the list of countries
      ansible.builtin.debug:
        var: item.country
Expected behavior
List the plans' countries
Screenshots
$ ansible-playbook view_plans.yaml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not
match 'all'
PLAY [View the plans] **********************************************************************************************
TASK [Gathering Facts] *********************************************************************************************
ok: [localhost]
TASK [Gather Vultr regions information] ****************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "missing required arguments: api_key"}
PLAY RECAP *********************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Desktop (please complete the following information where applicable):
$ brew list ansible --version
ansible 7.2.0
chris@cerebro:test_vultr $ ansible-galaxy collection list vultr.cloud
# /opt/homebrew/lib/python3.11/site-packages/ansible_collections
Collection Version
----------- -------
vultr.cloud 1.7.0
# /Users/${USER}/.ansible/collections/ansible_collections
Collection Version
----------- -------
vultr.cloud 1.7.0
Additional context
I have tried many iterations of yaml and ini files for the vultr plugin, but nothing seems to be working.
I have read the collections module page (https://docs.ansible.com/ansible/latest/collections/vultr/cloud/vultr_inventory.html#ansible-collections-vultr-cloud-vultr-inventory) and it says:
# File endings vultr{,-{hosts,instances}}.y{,a}ml
# All configuration done via environment variables:
plugin: vultr.cloud.vultr
api_key: '{{ lookup("pipe", "./get_vultr_api_key.sh") }}'
I replaced the Jinja with the actual API Key. I named it (as above) vultr.yaml
I have no idea how to get Ansible to recognise the api_key variable... I have been trying for a few hours now and I have exhausted all avenues. I have even gone as far as debugging plugins/inventory/vultr.py to see where exactly this api_key variable is read from, but I've only just started... I thought I'd write this first.
Any help would be immensely appreciated.
Cheers.
Is your feature request related to a problem? Please describe.
I had a cluster and wanted to create a migration playbook to move everything to a new cluster.
I was able to automate every step except creating and deleting the clusters.
Describe the solution you'd like
Implementing Ansible modules for managing VKE.
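A rough sketch of what such a module could look like; the module name and every parameter here are hypothetical:
- name: Create a VKE cluster (hypothetical module and parameters)
  vultr.cloud.vke_cluster:
    label: my-cluster
    region: ams
    version: v1.28.2+1
    node_pools:
      - label: default
        plan: vc2-2c-4gb
        node_quantity: 3
    state: present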
Describe alternatives you've considered
Using a Terraform module in Ansible.
Additional context
I am done with the project for now, but it would be nice for the future.
Describe the bug
When using ansible-inventory --list, the internal_ip no longer shows up in the instance data.
To Reproduce
ansible-inventory --list
Expected behavior
When an instance is attached to a VPC2.0, the "internal_ip" field should be populated.
Screenshots
ansible.cfg:
[default]
collections_paths=/etc/ansible/collections
inventory=/etc/ansible/vultr.yml
[inventory]
enable_plugins=vultr.cloud.vultr
vultr.yml
plugin: vultr.cloud.vultr
api_key: "[redacted]"
compose:
  ansible_host: vultr_v6_main_ip or vultr_main_ip
keyed_groups:
  - key: vultr_tags | lower
    prefix: ''
    separator: ''
strict: true
Desktop (please complete the following information where applicable):
ansible [core 2.13.4]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/etc/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /etc/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.16 (main, Dec 16 2022, 16:48:31) [Clang 13.0.0 ]
jinja version = 3.1.2
libyaml = True
Additional context
curl "https://api.vultr.com/v2/instances/[redacted]" -X GET -H "Authorization: Bearer [redacted]"
correctly shows the "internal_ip"
Based on the community decision to use true/false for boolean values in documentation and examples, we ask that you evaluate booleans in this collection and consider changing any that do not use true/false (lowercase).
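For illustration, the kind of change being asked for in documentation and examples:
# before
enable_ipv6: no
backups: yes
# after
enable_ipv6: false
backups: true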
See documentation block format for more info (specifically, option defaults).
If you have already implemented this or decide not to, feel free to close this issue.
P.S. This is an auto-generated issue; please raise any concerns here.
Describe the bug
When an ICMP firewall rule is created with a port, the API returns an error on subsequent runs.
This behaviour holds true whether the port is -1 or 0.
To Reproduce
Playbook:
- name: Test firewall behaviour
  hosts: localhost
  gather_facts: false
  vars:
    instance_hostname: test.collectiveaccess.au
    default_firewall_rule_entries:
      - subnet: "0.0.0.0"
        port_min: "80"
        port_max: "1024"
        subnet_size: "0"
        ip_gen: v4
      - subnet: "0.0.0.0"
        port_min: "443"
        port_max: "1024"
        subnet_size: "0"
        ip_gen: v4
      - subnet: '0.0.0.0'
        port_min: '-1'
        subnet_size: "0"
        proto: 'icmp'
        ip_gen: v4
  tasks:
    - name: Ensure firewall rule group is configured for this host
      vultr.cloud.firewall_group:
        api_key: "{{ vultr_api_key }}"
        description: "{{ 'Group to hold all rules associated with ' + instance_hostname }}"
        state: present
      register: firewall_group_result
    - name: Ensure firewall rules exist
      vultr.cloud.firewall_rule:
        api_key: "{{ vultr_api_key }}"
        group: "{{ firewall_group_result.vultr_firewall_group.description }}"
        ip_type: "{{ item.ip_gen }}"
        notes: "Access to port range {{ item.port_min }} through {{ item.port_max |default(item.port_min) }} from {{ item.subnet }}/{{ item.subnet_size }}"
        port: "{{ item.port_min + ':' + item.port_max |default(item.port_min) }}"
        protocol: "{{ item.proto |default('tcp') }}"
        state: present
        subnet: "{{ item.subnet }}"
        subnet_size: "{{ item.subnet_size }}"
      when: firewall_group_result.vultr_firewall_group.description is defined
      loop: "{{ default_firewall_rule_entries }}"
      register: firewall_rules_result
      # Ignoring errors until I can come back to this and fix it properly
      ignore_errors: true
Result of ansible-playbook test-firewall.yaml:
PLAY [Test firewall behaviour] *************************************************************************************************************************************************************************************************************
TASK [Ensure firewall rule group is configured for this host] ******************************************************************************************************************************************************************************
ok: [localhost]
TASK [Ensure firewall rules exist] *********************************************************************************************************************************************************************************************************
ok: [localhost] => (item={'subnet': '0.0.0.0', 'port_min': '80', 'port_max': '1024', 'subnet_size': '0', 'ip_gen': 'v4'})
ok: [localhost] => (item={'subnet': '0.0.0.0', 'port_min': '443', 'port_max': '1024', 'subnet_size': '0', 'ip_gen': 'v4'})
failed: [localhost] (item={'subnet': '0.0.0.0', 'port_min': '-1', 'subnet_size': '0', 'proto': 'icmp', 'ip_gen': 'v4'}) => {"ansible_loop_var": "item", "changed": false, "fetch_url_info": {"body": "{\"error\":\"This rule is already defined\",\"status\":400}", "connection": "close", "content-type": "application/json", "date": "Tue, 20 Jun 2023 03:32:20 GMT", "msg": "HTTP Error 400: Bad Request", "server": "nginx", "status": 400, "strict-transport-security": "max-age=63072000", "transfer-encoding": "chunked", "url": "https://api.vultr.com/v2/firewalls/[removed]/rules", "x-content-type-options": "nosniff", "x-frame-options": "DENY", "x-robots-tag": "noindex,noarchive", "x-user": "[removed]"}, "item": {"ip_gen": "v4", "port_min": "-1", "proto": "icmp", "subnet": "0.0.0.0", "subnet_size": "0"}, "msg": "Failure while calling the Vultr API v2 with POST for \"/firewalls/[removed]/rules\"."}
...ignoring
PLAY RECAP *********************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=1
If port is commented out in vultr.cloud.firewall_rule, all rules check OK:
PLAY [Test firewall behaviour] *************************************************************************************************************************************************************************************************************
TASK [Ensure firewall rule group is configured for this host] ******************************************************************************************************************************************************************************
ok: [localhost]
TASK [Ensure firewall rules exist] *********************************************************************************************************************************************************************************************************
ok: [localhost] => (item={'subnet': '0.0.0.0', 'port_min': '80', 'port_max': '1024', 'subnet_size': '0', 'ip_gen': 'v4'})
ok: [localhost] => (item={'subnet': '0.0.0.0', 'port_min': '443', 'port_max': '1024', 'subnet_size': '0', 'ip_gen': 'v4'})
ok: [localhost] => (item={'subnet': '0.0.0.0', 'port_min': '-1', 'subnet_size': '0', 'proto': 'icmp', 'ip_gen': 'v4'})
PLAY RECAP *********************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Expected behavior
The port should be ignored and the ICMP rule check should pass.
TASK [snapshot : setup instance] ***********************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'id'
fatal: [testhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 141, in <module>\n File \"<stdin>\", line 133, in _ansiballz_main\n File \"<stdin>\", line 81, in invoke_module\n File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_vultr.cloud.instance_payload_8qulosg_/ansible_vultr.cloud.instance_payload.zip/ansible_collections/vultr/cloud/plugins/modules/instance.py\", line 712, in <module>\n File \"/tmp/ansible_vultr.cloud.instance_payload_8qulosg_/ansible_vultr.cloud.instance_payload.zip/ansible_collections/vultr/cloud/plugins/modules/instance.py\", line 708, in main\n File \"/tmp/ansible_vultr.cloud.instance_payload_8qulosg_/ansible_vultr.cloud.instance_payload.zip/ansible_collections/vultr/cloud/plugins/module_utils/vultr_v2.py\", line 283, in present\n File \"/tmp/ansible_vultr.cloud.instance_payload_8qulosg_/ansible_vultr.cloud.instance_payload.zip/ansible_collections/vultr/cloud/plugins/modules/instance.py\", line 606, in create_or_update\n File \"/tmp/ansible_vultr.cloud.instance_payload_8qulosg_/ansible_vultr.cloud.instance_payload.zip/ansible_collections/vultr/cloud/plugins/module_utils/vultr_v2.py\", line 261, in wait_for_state\nKeyError: 'id'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
Is your feature request related to a problem? Please describe.
The current version (1.7.0) does not support a bare metal hosts inventory.
Describe the solution you'd like
Add the v2 bare metal API inventory.
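A sketch of how this might be exposed in the inventory config; the option name here is hypothetical:
plugin: vultr.cloud.vultr
api_key: "[redacted]"
# hypothetical option to also include bare metal hosts in the inventory:
include_bare_metals: true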
Additional context
I couldn't find anything in the Issues or PRs, so opening this.
We're using a fair number of vultr hosts managed with ansible and would like to use this plugin for dynamic inventory.
Is your feature request related to a problem? Please describe.
The deprecated ngine_io.vultr allowed you to create instances by feeding in a snapshot name. vultr.cloud.instance does not support provisioning from snapshots. All of our Vultr instances are based on custom snapshots; we are no longer able to deploy new servers without this support.
Describe the solution you'd like
Ability to provide a snapshot name to create the instance from.
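A sketch of the desired usage, assuming a hypothetical snapshot parameter on vultr.cloud.instance; the label, region, plan, and snapshot name are placeholders:
- name: Deploy an instance from a custom snapshot (hypothetical snapshot parameter)
  vultr.cloud.instance:
    label: restored-host
    region: syd
    plan: vc2-1c-1gb
    snapshot: my-golden-image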
Describe alternatives you've considered
Write a custom ansible module to use this part of the Vultr API, or fork this repo to fix. Thought it best to raise a feature request so that the community benefits.
Additional context
The Vultr V2 API supports creation of an instance with a snapshot id:
https://www.vultr.com/api/#operation/create-instance
snapshot_id | string - The Snapshot id to use when deploying the instance.
Disregard, I'm an idiot. Thought this was done via localhost rather than ssh.
While the vultr plugin dynamically builds an inventory for vultr instances, when actually connecting to run ansible tasks the vultr_label is used, rather than main_ip or hostname, so unless the label happens to be a domain or IP that resolves to the server, the connection fails.
This is my 'vultr.yml' inventory:
plugin: vultr.cloud.vultr
api_key: XXX
attributes: disk,features,hostname,id,label,main_ip,os,plan,power_status,ram,region,server_status,status,tags,vcpu_count
keyed_groups:
  # Group by "region"
  - key: vultr_region | lower
    separator: ''
    prefix: region_
  # Group by "tags"
  - key: vultr_tags | lower
    separator: ''
    prefix: ''
When I run ansible-inventory --list I get a dynamic inventory as expected:
"domain1.org": {
"ansible_become": true,
"ansible_user": "testadmin",
"vultr_disk": 55,
"vultr_features": [],
"vultr_hostname": "domain1.org",
"vultr_id": "XXX",
"vultr_label": "mylabel",
"vultr_main_ip": "XXX",
"vultr_os": "Ubuntu 20.04 LTS x64",
"vultr_plan": "vc2-1c-2gb",
"vultr_power_status": "running",
"vultr_ram": 2048,
"vultr_region": "syd",
"vultr_server_status": "ok",
"vultr_status": "active",
"vultr_tags": [
"mytag1"
],
"vultr_vcpu_count": 1
},
However, running a task fails if the label doesn't resolve to the server. Or, as in one case I have, the domain in the vultr_label uses CloudFlare for DNS, so it doesn't resolve to the server directly and no port 22 connection is available. If Ansible was using main_ip or ansible_host it would work.
To Reproduce
Steps to reproduce the behavior:
ansible -a uptime all
domain1.org | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname mylabel: Name or service not known",
"unreachable": true
}
Expected behavior
Ansible should connect via SSH using main_ip or ansible_host and run tasks as usual.
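A possible workaround, along the lines of the compose usage shown in other inventory configs above, is to set ansible_host explicitly; a minimal sketch:
plugin: vultr.cloud.vultr
api_key: XXX
compose:
  ansible_host: vultr_main_ip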
Hello, I just wanted to point out a typo in
https://docs.ansible.com/ansible/latest/collections/vultr/cloud/instance_info_module.html#examples
- name: Get Vultr instance infos of region ams
  vultr.cloud.instances_info:
    region: ams
- name: Get Vultr instance infos of a single host
  vultr.cloud.instances_info:
    label: myhost
- name: Get all Vultr instance infos
  vultr.cloud.instances_info:
  register: results
- name: Print the gathered infos
  ansible.builtin.debug:
    var: results.vultr_instance_info
The issue being there is an s in vultr.cloud.instances_info:, where it should be vultr.cloud.instance_info:
Thank you for this ansible collection!
I need to fetch details about an instance via Ansible so I can read its tags.
There is only a vultr_instance plugin, which is designed for create/delete/start/stop sorts of things. There doesn't seem to be any way to simply read an instance using its ID or some other combination of attributes.
Is your feature request related to a problem? Please describe.
https://docs.ansible.com/ansible/latest/collections/vultr/cloud/vultr_inventory.html has an example using:
filters:
  - '"vpc" in vultr_tags'
Though apparently this does nothing as vultr_tags isn't available anywhere.
Describe the solution you'd like
vultr_tags actually being available, so they can be filtered on.
Describe alternatives you've considered
?
Additional context
No other additional context
Describe the bug
The vultr.cloud.reserved_ip module throws an exception when using it to create (and possibly remove) reserved IPs.
This is because the self.query_list override in reserved_ip.py:AnsibleVultrReservedIp lacks the query_params kwarg.
To Reproduce
Steps to reproduce the behavior:
- name: Create reserved IP
  vultr.cloud.reserved_ip:
    label: "my-label"
    region: ams
    ip_type: v4
    instance_id: ""
Run the playbook with the VULTR_API_KEY environment variable set to your API key.
Expected behavior
The reserved IP is created and the play moves on to the next task.
Observed behavior
It fails with the Python error TypeError: AnsibleVultrReservedIp.query_list() got an unexpected keyword argument 'query_params'
Additional context
It seems like the reserved_ip
module overrides for the query_list
method:
is missing the query_list
keyword argument.
Adding either **kwargs
or query_list=None
to the parameter list for the query_list
method fixes the issue, but I'm not sure how these should be used. That's why this is a bug report rather than a pull request.
It seems like this is the only module which contains a redefinition of the AnsibleVultr.query_list method.
Let me know if you have any questions or need further clarification! :)