ansible-collections / community.libvirt

Manage libvirt with Ansible

Home Page: http://galaxy.ansible.com/community/libvirt

License: GNU General Public License v3.0


community.libvirt's Introduction

community.libvirt Collection


This repo hosts the community.libvirt Ansible Collection.

The collection includes the libvirt modules and plugins supported by the Ansible libvirt community to help manage virtual machines and/or containers via the libvirt API.

This collection is shipped with the ansible package.

Tested with Ansible

  • 2.9
  • 2.10
  • 2.11
  • devel

External requirements

Included content

Modules: virt, virt_net, virt_pool

Inventory: libvirt

Connection: libvirt_qemu, libvirt_lxc

Using this collection

Before using the libvirt collection, you need to install it with the Ansible Galaxy command-line tool:

ansible-galaxy collection install community.libvirt

You can include it in a requirements.yml file and install it via ansible-galaxy collection install -r requirements.yml, using the format:

---
collections:
  - name: community.libvirt

You can also download the tarball from Ansible Galaxy and install the collection manually wherever you need.

Note that if you install the collection from Ansible Galaxy with the command-line tool or tarball, it will not be upgraded automatically when you upgrade the Ansible package. To upgrade the collection to the latest available version, run the following command:

ansible-galaxy collection install community.libvirt --upgrade

You can also install a specific version of the collection, for example, if you need to downgrade when something is broken in the latest version (please report an issue in this repository). Use the following syntax:

ansible-galaxy collection install community.libvirt:==X.Y.Z

See Ansible Using collections for more details.

Contributing to this collection

The content of this collection is made by people just like you, a community of individuals collaborating on making the world better through developing automation software.

We are actively accepting new contributors.

All types of contributions are very welcome.

You don't know how to start? Refer to our contribution guide!

The aspiration is to follow these general guidelines:

  • Changes should include tests and documentation where appropriate.
  • Changes will be lint tested using standard Python lint tests.
  • No changes which do not pass CI testing will be approved/merged.
  • The collection plugins must provide the same coverage of Python support as the supported versions of Ansible.
  • The versions of Ansible supported by the collection must be the same as those in development or under maintenance, as shown in the Ansible Release and Maintenance documentation.

We use the following guidelines:

Local Testing

To learn how to test your pull request locally, refer to the Quick-start guide.

To learn how to test a pull request made by another person in your local environment, refer to the Test PR locally guide.

Collection maintenance

The current maintainers (contributors with write or higher access) are listed in the MAINTAINERS file. If you have questions or need help, feel free to mention them in the proposals.

To learn how to maintain / become a maintainer of this collection, refer to the Maintainer guidelines.

It is necessary for maintainers of this collection to be subscribed to:

  • The collection itself (the Watch button -> All Activity in the upper right corner of the repository's homepage).
  • The "Changes Impacting Collection Contributors and Maintainers" issue.

They also should be subscribed to Ansible's The Bullhorn newsletter.

Publishing New Version

See the Releasing guidelines to learn more.

Communication

To communicate, we use:

We announce important development changes and releases through Ansible's The Bullhorn newsletter. If you are a collection developer, be sure you are subscribed.

We take part in the global quarterly Ansible Contributor Summit virtually or in-person. Track The Bullhorn newsletter and join us.

For more information about communication, refer to the Ansible Communication guide.

Reference

License

GNU General Public License v3.0 or later.

See LICENCE for the full text.

community.libvirt's People

Contributors

adam-dej, andersson007, bturmann, csmart, daveol, dseeley, electrocucaracha, gotmax23, gundalow, jensheinrich, l3n41c, mlow, odyssey4me, pghmcfc, rthill, saito-hideki, stove-panini, thinkl33t, yannik, zhengyi13


community.libvirt's Issues

Fix wrong requirement in the modules doc

SUMMARY

All the modules / plugins should be checked for the wrong requirement python-libvirt, which should be changed to libvirt-python, as spotted in #56 (comment).

ISSUE TYPE
  • Documentation Report
COMPONENT NAME

Please check all the modules / plugins

Add documentation publishing

SUMMARY

Documentation should be automatically published.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME
ADDITIONAL INFORMATION

define might get silently ignored

This was a big source of confusion for me, as I'd been trying to use ansible to resize VMs.

except libvirtError as e:
    if e.get_error_code() != 9:  # 9 means 'domain already exists' error
        module.fail_json(msg='libvirtError: %s' % e.message)

While I expect the definition to be updated (if I use the virsh define command line this works as expected), instead the change is just silently ignored.
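
For context, a minimal libvirt-python sketch of the behavior the reporter expects: defineXML() on an XML whose <name> matches an existing domain replaces the stored definition, just like virsh define. The file path is a placeholder:

import libvirt

with open('domain.xml') as f:      # placeholder path to the updated definition
    new_xml = f.read()

conn = libvirt.open('qemu:///system')
dom = conn.defineXML(new_xml)      # replaces the persistent definition, like `virsh define`
print('redefined', dom.name())
conn.close()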

Unclear failure description: Failed to create temporary directory

SUMMARY

I have two very similar VMs; for one VM Ansible connects normally, but for the other it raises the following error. The traceback is not helpful to me.

The command and output are the following:

ansible-playbook  --extra-vars='repo_root=/var/tmp/tmt/run-070/plans/integration/inhibit-if-kmods-is-not-supported/vm_oracle8/good/discover/default/tests' -c community.libvirt.libvirt_qemu -i c2r_oracle8_template-clone, /home/azhukov/Documents/convert2rhel/tests/ansible_collections/basic_setup.yml -vvv

ansible-playbook 2.10.7
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/azhukov/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/azhukov/.pyenv/versions/3.9.1/lib/python3.9/site-packages/ansible
  executable location = /home/azhukov/.pyenv/versions/3.9.1/bin/ansible-playbook
  python version = 3.9.1 (default, Jan 14 2021, 15:48:15) [GCC 10.2.0]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
Parsed c2r_oracle8_template-clone, inventory source with host_list plugin
Loading callback plugin default of type stdout, v2.0 from /home/azhukov/.pyenv/versions/3.9.1/lib/python3.9/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: basic_setup.yml ******************************************************
Positional arguments: /home/azhukov/Documents/convert2rhel/tests/ansible_collections/basic_setup.yml
verbosity: 4
connection: community.libvirt.libvirt_qemu
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('c2r_oracle8_template-clone,',)
extra_vars: ('repo_root=/var/tmp/tmt/run-070/plans/integration/inhibit-if-kmods-is-not-supported/vm_oracle8/good/discover/default/tests',)
forks: 5
1 plays in /home/azhukov/Documents/convert2rhel/tests/ansible_collections/basic_setup.yml

PLAY [all] *********************************************************************

TASK [Gathering Facts] *********************************************************
task path: /home/azhukov/Documents/convert2rhel/tests/ansible_collections/basic_setup.yml:1
Loading collection community.libvirt from /home/azhukov/.pyenv/versions/3.9.1/lib/python3.9/site-packages/ansible_collections/community/libvirt
<c2r_oracle8_template-clone> CONNECT TO qemu:///system
<c2r_oracle8_template-clone> FIND DOMAIN c2r_oracle8_template-clone
<c2r_oracle8_template-clone> ESTABLISH community.libvirt.libvirt_qemu CONNECTION
<c2r_oracle8_template-clone> EXEC /bin/sh -c 'echo ~ && sleep 0'
<c2r_oracle8_template-clone> GA send: {"execute": "guest-exec", "arguments": {"path": "/bin/sh", "capture-output": true, "arg": ["-c", "echo ~ && sleep 0"]}}
<c2r_oracle8_template-clone> GA return: {'return': {'pid': 1007}}
<c2r_oracle8_template-clone> GA send: {"execute": "guest-exec-status", "arguments": {"pid": 1007}}
<c2r_oracle8_template-clone> GA return: {'return': {'exited': False}}
<c2r_oracle8_template-clone> GA return: {'return': {'exitcode': 0, 'out-data': 'L3Jvb3QK', 'exited': True}}
<c2r_oracle8_template-clone> GA stdout: /root
<c2r_oracle8_template-clone> GA stderr: 
<c2r_oracle8_template-clone> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1617899822.4112039-61006-63082701834685 `" && echo ansible-tmp-1617899822.4112039-61006-63082701834685="` echo /root/.ansible/tmp/ansible-tmp-1617899822.4112039-61006-63082701834685 `" ) && sleep 0'
<c2r_oracle8_template-clone> GA send: {"execute": "guest-exec", "arguments": {"path": "/bin/sh", "capture-output": true, "arg": ["-c", "( umask 77 && mkdir -p \"` echo /root/.ansible/tmp `\"&& mkdir \"` echo /root/.ansible/tmp/ansible-tmp-1617899822.4112039-61006-63082701834685 `\" && echo ansible-tmp-1617899822.4112039-61006-63082701834685=\"` echo /root/.ansible/tmp/ansible-tmp-1617899822.4112039-61006-63082701834685 `\" ) && sleep 0"]}}
<c2r_oracle8_template-clone> GA return: {'return': {'pid': 1008}}
<c2r_oracle8_template-clone> GA send: {"execute": "guest-exec-status", "arguments": {"pid": 1008}}
<c2r_oracle8_template-clone> GA return: {'return': {'exited': False}}
<c2r_oracle8_template-clone> GA return: {'return': {'exitcode': 1, 'err-data': 'bWtkaXI6IGNhbm5vdCBjcmVhdGUgZGlyZWN0b3J5IOKAmC9yb290Ly5hbnNpYmxl4oCZOiBQZXJtaXNzaW9uIGRlbmllZAo=', 'exited': True}}
<c2r_oracle8_template-clone> GA stdout: 
<c2r_oracle8_template-clone> GA stderr: mkdir: cannot create directory ‘/root/.ansible’: Permission denied
fatal: [c2r_oracle8_template-clone]: UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo /root/.ansible/tmp `\"&& mkdir \"` echo /root/.ansible/tmp/ansible-tmp-1617899822.4112039-61006-63082701834685 `\" && echo ansible-tmp-1617899822.4112039-61006-63082701834685=\"` echo /root/.ansible/tmp/ansible-tmp-1617899822.4112039-61006-63082701834685 `\" ), exited with result 1, stderr output: mkdir: cannot create directory ‘/root/.ansible’: Permission denied\n",
    "unreachable": true
}

PLAY RECAP *********************************************************************
c2r_oracle8_template-clone : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0   

I can see from systemctl status qemu-guest-agent that the command was called:

[root@vase inhibit-if-kmods-is-not-supported]# systemctl status qemu-guest-agent
● qemu-guest-agent.service - QEMU Guest Agent
   Loaded: loaded (/usr/lib/systemd/system/qemu-guest-agent.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2021-04-08 16:29:07 UTC; 10min ago
 Main PID: 666 (qemu-ga)
    Tasks: 2 (limit: 10636)
   Memory: 2.9M
   CGroup: /system.slice/qemu-guest-agent.service
           └─666 /usr/bin/qemu-ga --method=virtio-serial --path=/dev/virtio-ports/org.qemu.guest_agent.0 --blacklist= ->

Apr 08 16:37:02 vase.domaci.sit.hvfree.net qemu-ga[666]: info: guest-exec called: "/bin/sh -c echo ~ && sleep 0"
Apr 08 16:37:02 vase.domaci.sit.hvfree.net qemu-ga[666]: info: guest-exec-status called, pid: 1007
Apr 08 16:37:02 vase.domaci.sit.hvfree.net qemu-ga[666]: info: guest-exec-status called, pid: 1007
Apr 08 16:37:02 vase.domaci.sit.hvfree.net qemu-ga[666]: info: guest-exec-status called, pid: 1007
Apr 08 16:37:02 vase.domaci.sit.hvfree.net qemu-ga[666]: info: guest-exec called: "/bin/sh -c ( umask 77 && mkdir -p "`>
Apr 08 16:37:02 vase.domaci.sit.hvfree.net qemu-ga[666]: info: guest-exec-status called, pid: 1008
Apr 08 16:37:02 vase.domaci.sit.hvfree.net qemu-ga[666]: info: guest-exec-status called, pid: 1008
Apr 08 16:37:02 vase.domaci.sit.hvfree.net qemu-ga[666]: info: guest-exec-status called, pid: 1008
Apr 08 16:37:02 vase.domaci.sit.hvfree.net qemu-ga[666]: info: guest-exec-status called, pid: 1008
Apr 08 16:37:02 vase.domaci.sit.hvfree.net qemu-ga[666]: info: guest-exec-status called, pid: 1008

I also ensured that the following line is commented out in the qemu-ga config:

# BLACKLIST_RPC=guest-file-open,guest-file-close,guest-file-read,guest-file-write,guest-file-seek,guest-file-flush,guest-exec,guest-exec-status

The machine also has the following in its configuration:

    <channel type="unix">
      <source mode="bind" path="/var/lib/libvirt/qemu/f16x86_64.agent"/>
      <target type="virtio" name="org.qemu.guest_agent.0"/>
      <address type="virtio-serial" controller="0" bus="0" port="1"/>
    </channel>
ISSUE TYPE
  • Bug Report
COMPONENT NAME

libvirt-qemu

ANSIBLE VERSION
ansible 2.10.7
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/azhukov/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/azhukov/.pyenv/versions/3.9.1/lib/python3.9/site-packages/ansible
  executable location = /home/azhukov/.pyenv/versions/3.9.1/bin/ansible
  python version = 3.9.1 (default, Jan 14 2021, 15:48:15) [GCC 10.2.0]
CONFIGURATION
null
OS / ENVIRONMENT

kvm VM, using ansible libvirt plugin to connect

STEPS TO REPRODUCE
- hosts: all
  vars:
    # to be overridden by the tmt
    repo_root:
  tasks:

    - name: Building rpms
      delegate_to: localhost
      shell: make rpms
      args:
        chdir: ../..

    - name: Copying rpms to kvm remote
      copy:
        dest: "{{ repo_root }}/.rpms"
        src: ../../.rpms/
      when: ansible_facts['virtualization_type'] == "kvm"

    - name: Install el8 convert2rhel package
      yum:
        name: "{{ repo_root }}/.rpms/convert2rhel-0.20-1.el8.noarch.rpm"
        state: present
        disable_gpg_check: true
EXPECTED RESULTS

On another machine it works, but I can't find the difference from the faulty one.

ACTUAL RESULTS

Already provided


Document standards for developers

SUMMARY

There is no documentation for what is required when developing modules, plugins, etc. Add some documentation.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME
ADDITIONAL INFORMATION

Add integration testing for plugins/inventory/libvirt.py

SUMMARY

There is no integration testing for this inventory script. Add integration testing to ensure that the script functions properly with all changes made to it.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME
ADDITIONAL INFORMATION

Automate libvirt in an easier manner

SUMMARY

The current modules rely on XML, which isn't very easy to create manually. I've started creating a few roles for myself to make it easier, mainly based on virt-install. If there is interest, I would be very willing to offer my roles as a feature for this collection.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

https://github.com/ericzolf/libvirt_automated

ADDITIONAL INFORMATION

See the example under https://github.com/ericzolf/libvirt_automated/tree/master/playbooks - I think it shows quite convincingly how much easier it could be to create domains.

Release plan

SUMMARY

(partially copied from ansible-collections/community.crypto#74)

We should decide eventually on how to release this collection (w.r.t. versioning).
Small collections like this one don't need a complex plan like the one for community.general and community.network.
So how about the following?

  1. Release minor and patch releases whenever we want (like after adding new features or fixing bugs). Since this collection is small, there's no need to fix things in advance. Just add features, and after a feature either wait a bit longer for more features/bugs, or make a release.

I suggest releasing without branching https://docs.ansible.com/ansible/devel/community/collection_contributors/collection_release_without_branches.html

Once we release a 2.0.0 (with some breaking change relative to 1.x.y), we can have a stable-1 branch so we can backport bugfixes (or even features) if needed, and release more 1.x.y versions.

ci: update freebsd version for devel

SUMMARY

Projects using Azure Pipelines to test against the ansible-core devel branch should update their test matrix to use the latest available FreeBSD and macOS versions as explained in the checklist below.

The deprecated platforms freebsd/12.2 and macos/11.1 will be removed from the devel version of ansible-test on January 31st, 2022.

See ansible-collections/overview#45 (comment)

Add integration testing for plugins/modules/virt.py

SUMMARY

There is no integration testing for this module. Add integration testing to ensure that the module functions properly with all changes made to it.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME
ADDITIONAL INFORMATION

Add unit testing for plugins/modules/virt.py

SUMMARY

There is no unit testing for this module. Add unit testing to ensure that the module behaves consistently with all changes made to it. Try to add some negative testing too.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME
ADDITIONAL INFORMATION

virt module get_xml command should support flags

migrated from ansible

SUMMARY

libvirt's XMLDesc(flags) takes a flags argument, and it matters: sometimes we want VIR_DOMAIN_XML_SECURE to get sensitive info. In most cases we want to use VIR_DOMAIN_XML_INACTIVE for running VMs; in some cases we want the live state. So far the module uses flags=0, which is the live definition without sensitive data. We need to provide an additional flags parameter for this.

ISSUE TYPE
  • Feature Idea: add a flags parameter to pass into XMLDesc(), it can just be a list of int.
COMPONENT NAME

virt

ADDITIONAL INFORMATION

virt: command=get_xml flags=[...]

- virt:
    command: get_xml
    name: vm1
    flags:
       - 1
       - 2
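
For reference, a minimal libvirt-python sketch of what passing such flags into XMLDesc() could look like. The constants are standard libvirt flags; OR-combining a user-supplied list of ints is an assumption about how the module might implement the parameter:

import libvirt
from functools import reduce

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('vm1')                # 'vm1' is the example domain from above

# e.g. flags: [1, 2] from the task maps to SECURE | INACTIVE
requested = [libvirt.VIR_DOMAIN_XML_SECURE, libvirt.VIR_DOMAIN_XML_INACTIVE]
flags = reduce(lambda a, b: a | b, requested, 0)

xml = dom.XMLDesc(flags)                      # the module currently always passes flags=0
print(xml)
conn.close()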

virt module is not able to undefine a domain with defined nvram file

SUMMARY

The virt module is not able to undefine a domain with a defined nvram file.
migrated from: ansible/ansible#43915

ISSUE TYPE
  • Bug Report
COMPONENT NAME

virt

ANSIBLE VERSION
ansible 2.6.2
python version = 2.7.12
CONFIGURATION
OS / ENVIRONMENT
STEPS TO REPRODUCE
- name: undefine vm with nvram
  virt:
    name: example-vm-nvram
    command: undefine
EXPECTED RESULTS

domain "example-vm-nvram" is undefined

ACTUAL RESULTS
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: libvirt.libvirtError: Requested operation is not valid: cannot delete inactive domain with nvram
fatal: [localhost]: FAILED! => {"changed": false, "failed_when_result": true, "msg": "Requested operation is not valid: cannot delete inactive domain with nvram"}

The libvirt function undefine is not able to remove a domain with nvram; by using undefineFlags it is possible to control the deletion of the nvram file. See:
https://libvirt.org/html/libvirt-libvirt-domain.html#virDomainUndefine
https://libvirt.org/html/libvirt-libvirt-domain.html#virDomainUndefineFlags

With the following command the undefine action is possible.

virsh undefine example-vm-nvram --nvram
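
A minimal libvirt-python sketch of the undefineFlags() call a fix would need, assuming the nvram file should be removed together with the domain (libvirt also offers VIR_DOMAIN_UNDEFINE_KEEP_NVRAM for the opposite choice):

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('example-vm-nvram')

# Plain dom.undefine() fails with "cannot delete inactive domain with nvram";
# undefineFlags() lets the caller decide what happens to the nvram file.
dom.undefineFlags(libvirt.VIR_DOMAIN_UNDEFINE_NVRAM)
conn.close()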

ansible-playbook --check mode/diff support for virt module

SUMMARY

The virt module does not seem to support ansible-playbook --check mode - in check mode, tasks that use the virt module will always return skipped.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

virt

ADDITIONAL INFORMATION

This would allow reviewing changes in "dry-run" mode before actually applying them. Running the virt module in check mode should return the expected ok/changed state, and a diff of the VM XML definition when relevant (for example when command: define).

ansible-playbook playbook.yml --tags libvirt --check --diff
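
For illustration, a minimal sketch of the standard AnsibleModule check-mode pattern the virt module could adopt. Only supports_check_mode, check_mode, exit_json and the diff convention are real Ansible API; compute_change is a hypothetical placeholder for comparing the current and desired domain XML.

from ansible.module_utils.basic import AnsibleModule


def compute_change(params):
    # Placeholder: a real implementation would fetch the current domain XML
    # via libvirt and compare it against params['xml'].
    desired = params.get('xml') or ''
    current = ''                      # pretend nothing is defined yet
    changed = desired != current
    return changed, {'before': current, 'after': desired}


def main():
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(type='str', required=True),
            command=dict(type='str'),
            xml=dict(type='str'),
        ),
        supports_check_mode=True,     # tasks are no longer skipped under --check
    )

    changed, diff = compute_change(module.params)

    if module.check_mode:
        # Report what would happen without touching libvirt.
        module.exit_json(changed=changed, diff=diff)

    # ... apply the change via libvirt here ...
    module.exit_json(changed=changed, diff=diff)


if __name__ == '__main__':
    main()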

Add collection build test

SUMMARY

As part of the testing for the collection, we should verify that it builds correctly. This should be a distinct test.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME
ADDITIONAL INFORMATION

CI for libvirt

Hi,
Have you thought about CI for libvirt?

Collections built around using APIs instead of local resources tend to only need testing on different Python versions, not distros. Examples would include the AWS and VMware collections.

Those using local resources should be tested on different platforms/versions: examples would include the Windows and crypto collections.
For the former, the default test container makes sense since it has all Python versions available.
For the latter, using the various --docker, --remote and --windows options as appropriate makes sense.

I can help get this set up.

Please clarify install instructions and requirements (docs say TBD a lot)

SUMMARY

Please clarify install instructions and requirements. I am struggling to set up this module and cannot find clear documentation anywhere. I am attempting to hack multiple guides together and still not having success. I have tried both Python 2 and Python 3 now.

Here is the current flow I am attempting. It's still not working but I don't know where to go from here.

sudo apt update
sudo apt -y install python3-pip
python3 -m pip install -U pip
python3 -m pip install --user ansible
# I think pkg-config may come with libvirt-dev as well
sudo apt install pkg-config
sudo apt install libvirt-dev
python3 -m pip install libvirt-python

# Update PATH
nano ~/.bashrc
# Add this
export PATH=$PATH:$HOME/.local/bin
# Refresh bash
source ~/.bashrc

This is the error I have now in my playbook (I have read many pages discussing this error):

{"changed": false, "msg": "The `libvirt` module is not importable. Check the requirements."}
ISSUE TYPE
  • Documentation Report
COMPONENT NAME

community.libvirt.virt

ANSIBLE VERSION
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current 
version: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]. This feature will be removed from ansible-core in version 
2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ansible [core 2.11.1] 
  config file = None
  configured module search path = ['/home/ubuntu/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/ubuntu/.local/lib/python3.6/site-packages/ansible
  ansible collection location = /home/ubuntu/.ansible/collections:/usr/share/ansible/collections
  executable location = /home/ubuntu/.local/bin/ansible
  python version = 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
  jinja version = 2.10
  libyaml = True

Missing galaxy.yml

SUMMARY

This repository needs a galaxy.yml file so that ansible-galaxy and other consumers can enumerate the namespace and the name.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

galaxy.yml

virt_net modify only affects the running config

SUMMARY

virt_net with command modify only affects the current/running version of the network being modified and not the actual startup config. There is no way to make changes made by modify permanent.

res = network.update(libvirt.VIR_NETWORK_UPDATE_COMMAND_ADD_LAST,
                     libvirt.VIR_NETWORK_SECTION_IP_DHCP_HOST,
                     -1, xml, libvirt.VIR_NETWORK_UPDATE_AFFECT_CURRENT)

res = network.update(libvirt.VIR_NETWORK_UPDATE_COMMAND_MODIFY,
                     libvirt.VIR_NETWORK_SECTION_IP_DHCP_HOST,
                     -1, xml, libvirt.VIR_NETWORK_UPDATE_AFFECT_CURRENT)
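
For comparison, a minimal libvirt-python sketch of how the same update could be persisted, assuming the desired behavior is to affect both the running network and the stored definition (AFFECT_CONFIG alone would change only the startup config):

import libvirt

conn = libvirt.open('qemu:///system')
network = conn.networkLookupByName('default')
xml = "<host mac='de:ad:be:ef:00:00' name='deadbeef' ip='192.168.122.50'/>"

# AFFECT_LIVE | AFFECT_CONFIG corresponds to `virsh net-update ... --live --config`
network.update(libvirt.VIR_NETWORK_UPDATE_COMMAND_MODIFY,
               libvirt.VIR_NETWORK_SECTION_IP_DHCP_HOST,
               -1, xml,
               libvirt.VIR_NETWORK_UPDATE_AFFECT_LIVE |
               libvirt.VIR_NETWORK_UPDATE_AFFECT_CONFIG)
conn.close()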

ISSUE TYPE
  • Bug Report
COMPONENT NAME

virt_net

ANSIBLE VERSION
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /bin/ansible
  python version = 3.6.8 (default, Jun 12 2019, 01:12:31) [GCC 8.2.1 20180905 (Red Hat 8.2.1-3)]
CONFIGURATION
OS / ENVIRONMENT

Linux 4.18.0-80.11.2.el8_0.x86_64 #1 SMP Sun Sep 15 11:24:21 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

STEPS TO REPRODUCE
call something like the following from a playbook

virt_net:
  command: modify
  name: default
  xml: "<host mac='de:ad:be:ef:00:00' name='deadbeef' ip='192.168.122.50'/>"

'virsh net-dumpxml default' will show the current running config with the modification; however, 'virsh net-edit default' will not reflect the changes, as the update method in the module only affects the running config.

For example:

# virsh net-update --help
  NAME
    net-update - update parts of an existing network's configuration

  SYNOPSIS
    net-update <network> <command> <section> <xml> [--parent-index <number>] [--config] [--live] [--current]

  OPTIONS
    [--network] <string>  network name or uuid
    [--command] <string>  type of update (add-first, add-last (add), delete, or modify)
    [--section] <string>  which section of network configuration to update
    [--xml] <string>  name of file containing xml (or, if it starts with '<', the complete xml element itself) to add/modify, or to be matched for search
    --parent-index <number>  which parent object to search through
    --config         affect next network startup  <--- this
    --live           affect running network
    --current        affect current state of network
see above
EXPECTED RESULTS

That both the startup config and the running/current state be updated, or that there be some way to specify which config to modify.

ACTUAL RESULTS

n/a

Volunteer help wanted

Unfortunately I'm not able to commit as much time as I'd like to the maintenance and continuous improvement of this repository. I would appreciate it if some others could step up to take on any or all of these roles:

  1. Development of improvements - looking at the code base and improving its readability, maintainability and testability.
  2. Development of bug fixes - fixing reported bugs and implementing tests to ensure they do not happen with any future changes.
  3. Triaging of bug reports - validating the bug and providing a test playbook to reproduce it.

Please feel free to reply to this issue to volunteer.

Implement a generic libvirt inventory plugin

SUMMARY

It would be very useful to have an inventory plugin for libvirt which works against both local and remote libvirt URIs. It could produce inventory which designates the host, groups according to type (lxc, qemu, etc.), and provides various details like IP addresses.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME
ADDITIONAL INFORMATION
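
A rough libvirt-python sketch of the enumeration such a plugin would need (domain list, driver type for grouping, guest addresses). interfaceAddresses() relies on DHCP leases or the guest agent, so the address lookup is best-effort; the host dictionary layout is an assumption, not the plugin's actual schema.

import libvirt

uri = 'qemu:///system'                # remote URIs such as qemu+ssh://host/system work the same way
conn = libvirt.open(uri)

group = conn.getType().lower()        # e.g. 'qemu' or 'lxc', usable as a group name
for dom in conn.listAllDomains():
    host = {'name': dom.name(), 'group': group,
            'active': bool(dom.isActive()), 'addresses': []}
    if dom.isActive():
        try:
            ifaces = dom.interfaceAddresses(libvirt.VIR_DOMAIN_INTERFACE_ADDRESSES_SRC_LEASE)
            for iface in ifaces.values():
                host['addresses'] += [a['addr'] for a in (iface.get('addrs') or [])]
        except libvirt.libvirtError:
            pass                      # no leases or agent: leave addresses empty
    print(host)
conn.close()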

virt_net facts requires name to be specified

SUMMARY

In the example in the documentation, it does not require name at all:

# Gather facts about networks
# Facts will be available as 'ansible_libvirt_networks'
- community.libvirt.virt_net:
    command: facts

But in practice it requires name. Even worse, if I have several networks, ansible_libvirt_networks only stores the facts of the last network called with virt_net facts. It should not require a name, and if multiple names are given it should put the facts of all networks in ansible_libvirt_networks.

ISSUE TYPE
  • Bug Report
COMPONENT NAME
ANSIBLE VERSION
both 2.10.9 and 2.11.1
community.libvirt             1.0.1
CONFIGURATION

OS / ENVIRONMENT

Ubuntu 2004, Fedora 33, Centos8

STEPS TO REPRODUCE
EXPECTED RESULTS
ACTUAL RESULTS
{"changed": false, "msg": "missing required arguments: name"}

Using the plugin with Windows VMs

SUMMARY

How do I use the inventory plugin with a Windows VM?
When using ansible-inventory, not many hostvars are shown, only ansible_connection and ansible_libvirt_uri:

ansible-inventory -i kvm.yml --host demo1
{
    "ansible_connection": "community.libvirt.libvirt_qemu",
    "ansible_libvirt_uri": "qemu:///system"

When using Ansible ad-hoc commands or using the inventory plugin in playbooks, it defaults to /bin/sh.

I found #37, where Windows support was added to the connection plugin, but it's unclear to me how to use this.

ISSUE TYPE
  • Documentation Report
COMPONENT NAME

community.libvirt.libvirt_qemu connection plugin

ANSIBLE VERSION
ansible --version
ansible 2.9.26
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/eos/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.6.8 (default, Aug 12 2021, 07:06:15) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]

list_nets and facts commands from virt_net do not work as listed in examples

SUMMARY

Both examples using list_nets and facts return the error: missing required arguments: name
Specifying a name makes the examples work, but then these commands become quite useless.

It seems like something simple to fix, but I am not familiar with Ansible development. I believe the problem is related to line 605 of virt_net.py, but I am not sure.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

community.libvirt.virt_net

ANSIBLE VERSION
ansible 2.9.14
  config file = /root/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.7.9 (default, Aug 19 2020, 17:05:11) [GCC 9.3.1 20200408 (Red Hat 9.3.1-2)]

CONFIGURATION
ANSIBLE_PIPELINING(/root/ansible/ansible.cfg) = True
COLLECTIONS_PATHS(/root/ansible/ansible.cfg) = ['/root/ansible']
DEFAULT_HOST_LIST(/root/ansible/ansible.cfg) = ['/root/ansible/inventory']
DEFAULT_ROLES_PATH(/root/ansible/ansible.cfg) = ['/root/ansible/roles']

OS / ENVIRONMENT

Fedora release 31

STEPS TO REPRODUCE
- name: Test playbook
  hosts: localhost
  tasks:

    - name: list nets
      community.libvirt.virt_net:
        command: list_nets
      register: result
      ignore_errors: yes

    - debug: var=result

    - name: facts
      community.libvirt.virt_net:
        command: facts
      register: result
      ignore_errors: yes

    - debug: var=result
EXPECTED RESULTS

Expected to receive information about all libvirt networks.

ACTUAL RESULTS

These commands require the name argument.

PLAYBOOK: p.yml *****************************************************************************************************************************************************************************************************************
1 plays in p.yml

PLAY [Test playbook] ************************************************************************************************************************************************************************************************************

TASK [Gathering Facts] **********************************************************************************************************************************************************************************************************
task path: /root/ansible/p.yml:2
Using module file /usr/lib/python3.7/site-packages/ansible/modules/system/setup.py
Pipelining is enabled.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 && sleep 0'
ok: [localhost]
META: ran handlers

TASK [list nets] ****************************************************************************************************************************************************************************************************************
task path: /root/ansible/p.yml:12
Using module file /root/ansible/ansible_collections/community/libvirt/plugins/modules/virt_net.py
Pipelining is enabled.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 && sleep 0'
The full traceback is:
  File "/tmp/ansible_community.libvirt.virt_net_payload_nc1t_aa4/ansible_community.libvirt.virt_net_payload.zip/ansible/module_utils/basic.py", line 1655, in _check_required_arguments
    check_required_arguments(spec, param)
  File "/tmp/ansible_community.libvirt.virt_net_payload_nc1t_aa4/ansible_community.libvirt.virt_net_payload.zip/ansible/module_utils/common/validation.py", line 193, in check_required_arguments
    raise TypeError(to_native(msg))
fatal: [localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "command": "list_nets",
            "uri": "qemu:///system"
        }
    },
    "msg": "missing required arguments: name"
}
...ignoring

TASK [debug] ********************************************************************************************************************************************************************************************************************
task path: /root/ansible/p.yml:18
ok: [localhost] => {
    "result": {
        "changed": false,
        "exception": "  File \"/tmp/ansible_community.libvirt.virt_net_payload_nc1t_aa4/ansible_community.libvirt.virt_net_payload.zip/ansible/module_utils/basic.py\", line 1655, in _check_required_arguments\n    check_required_arguments(spec, param)\n  File \"/tmp/ansible_community.libvirt.virt_net_payload_nc1t_aa4/ansible_community.libvirt.virt_net_payload.zip/ansible/module_utils/common/validation.py\", line 193, in check_required_arguments\n    raise TypeError(to_native(msg))\n",
        "failed": true,
        "msg": "missing required arguments: name"
    }
}

TASK [facts] ********************************************************************************************************************************************************************************************************************
task path: /root/ansible/p.yml:20
Using module file /root/ansible/ansible_collections/community/libvirt/plugins/modules/virt_net.py
Pipelining is enabled.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 && sleep 0'
The full traceback is:
  File "/tmp/ansible_community.libvirt.virt_net_payload_mdgxaa3j/ansible_community.libvirt.virt_net_payload.zip/ansible/module_utils/basic.py", line 1655, in _check_required_arguments
    check_required_arguments(spec, param)
  File "/tmp/ansible_community.libvirt.virt_net_payload_mdgxaa3j/ansible_community.libvirt.virt_net_payload.zip/ansible/module_utils/common/validation.py", line 193, in check_required_arguments
    raise TypeError(to_native(msg))
fatal: [localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "command": "facts",
            "uri": "qemu:///system"
        }
    },
    "msg": "missing required arguments: name"
}
...ignoring

TASK [debug] ********************************************************************************************************************************************************************************************************************
task path: /root/ansible/p.yml:26
ok: [localhost] => {
    "result": {
        "changed": false,
        "exception": "  File \"/tmp/ansible_community.libvirt.virt_net_payload_mdgxaa3j/ansible_community.libvirt.virt_net_payload.zip/ansible/module_utils/basic.py\", line 1655, in _check_required_arguments\n    check_required_arguments(spec, param)\n  File \"/tmp/ansible_community.libvirt.virt_net_payload_mdgxaa3j/ansible_community.libvirt.virt_net_payload.zip/ansible/module_utils/common/validation.py\", line 193, in check_required_arguments\n    raise TypeError(to_native(msg))\n",
        "failed": true,
        "msg": "missing required arguments: name"
    }
}

Refactor all plugins for better maintenance

SUMMARY

Each of the modules is an independent piece of code, but we could refactor them all to use common objects, methods, etc., which would make their implementations more consistent and easier to maintain.

There are also some great ideas and samples in ansible/ansible#27905 which are worth looking at.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME
ADDITIONAL INFORMATION

Add code coverage testing

SUMMARY

Currently there is no code coverage testing. We should add it, and require that all PRs include testing for anything they add.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME
ADDITIONAL INFORMATION

virt state shutdown should wait till vm is at shutdown state

SUMMARY

The virt state shutdown does not wait until virsh domstate becomes shut off.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

virt

ANSIBLE VERSION
2.10.6
CONFIGURATION

OS / ENVIRONMENT

Fedora33

STEPS TO REPRODUCE
- name: list active vms
  virt:
    command: list_vms
    state: running
  register: vms_result
- debug:
    var: vms_result
- name: shutdown
  virt:
    state: shutdown
    name: "{{item}}"
  register: vm_shutdown_results
  loop:
    "{{vms_result.list_vms}}"
- debug:
    var: vm_shutdown_results
- name: list inactive vms
  virt:
    command: list_vms
    state: shutdown
  register: inactive_vms_result
- debug:
    var: inactive_vms_result
EXPECTED RESULTS
ACTUAL RESULTS

inactive_vms_result does not show all the VMs; most VMs are still shutting down.
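
As a workaround sketch outside the module, one can poll libvirt until the domain is really off; the domain name and the 120-second timeout are placeholders:

import time
import libvirt


def shutdown_and_wait(conn, name, timeout=120):
    # Ask the guest to shut down and wait until it is actually off.
    dom = conn.lookupByName(name)
    if dom.isActive():
        dom.shutdown()                # the same request the module sends
    deadline = time.time() + timeout
    while dom.isActive():
        if time.time() > deadline:
            raise TimeoutError('%s did not shut off within %ss' % (name, timeout))
        time.sleep(2)


conn = libvirt.open('qemu:///system')
shutdown_and_wait(conn, 'example-vm')
conn.close()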


Add unit testing for plugins/modules/virt_pool.py

SUMMARY

There is no unit testing for this module. Add unit testing to ensure that the module behaves consistently with all changes made to it. Try to add some negative testing too.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME
ADDITIONAL INFORMATION

Update of libvirt network default fails

SUMMARY

I want to update the default network of libvirt (purpose: add IPv6). Ansible reports that no change is necessary: ok: [localhost], although the XML definition is different from what I get from sudo virsh net-dumpxml default.

If I change the name in the ansible code but not in the XML definition, the default network gets updated as wanted.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

community.libvirt.virt_net

ANSIBLE VERSION
ansible 2.9.13
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/walter/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.8/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.8.5 (default, Aug 12 2020, 00:00:00) [GCC 10.2.1 20200723 (Red Hat 10.2.1-1)]
CONFIGURATION
empty output, no change.
OS / ENVIRONMENT

Fedora 32 (Workstation Edition)

STEPS TO REPRODUCE

(1) Create a new file network_default.xml with the current XML definition of the default network.

sudo virsh net-dumpxml default

Modify the file.

(2) Run an ansible playbook containing the tasks:

    - name: Network default is inactive
      community.libvirt.virt_net:
        state: inactive
        name: default
    - name: Default network defined correctly
      community.libvirt.virt_net:
        command: define
        name: default
        xml: '{{ lookup("template", "network_default.xml") }}'

Ansible reports that no change is necessary: ok: [localhost], although the XML definition is different from what I get from sudo virsh net-dumpxml default.

(3) Change the name of the network in the playbook but not in the network_default.xml.

    - name: Network default is inactive
      community.libvirt.virt_net:
        state: inactive
        name: default
    - name: Default network defined correctly
      community.libvirt.virt_net:
        command: define
        name: xyz
        xml: '{{ lookup("template", "network_default.xml") }}'

The default network gets updated as wanted. sudo virsh net-dumpxml default shows the change.

EXPECTED RESULTS
  • Ansible can update the network default.
  • Most likely the network name should not be necessary in the playbook as it is already given in the XML file for the network definition.
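
A small libvirt-python sketch of how the persistent definition could be inspected before declaring the task ok, assuming the idempotence check should compare the stored (inactive) XML rather than just the network name; the comparison here is deliberately naive:

import libvirt

conn = libvirt.open('qemu:///system')
net = conn.networkLookupByName('default')

# The stored definition, i.e. what `virsh net-edit default` would show.
current_xml = net.XMLDesc(libvirt.VIR_NETWORK_XML_INACTIVE)

with open('network_default.xml') as f:        # the file created in step (1)
    desired_xml = f.read()

# Naive comparison; a real check would parse and normalize the XML first.
print('changed' if current_xml.strip() != desired_xml.strip() else 'ok')
conn.close()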

Inclusion of community.libvirt in Ansible 2.10

This collection will be included in Ansible 2.10 because it contains modules and/or plugins that were included in Ansible 2.9. Please review:

DEADLINE: 2020-08-18

The latest version of the collection available on August 18 will be included in Ansible 2.10.0, except possibly newer versions which differ only in the patch level. (For details, see the roadmap). Please release version 1.0.0 of your collection by this date! If 1.0.0 does not exist, the same 0.x.y version will be used in all of Ansible 2.10 without updates, and your 1.x.y release will not be included until Ansible 2.11 (unless you request an exception at a community working group meeting and go through a demanding manual process to vouch for backwards compatibility . . . you want to avoid this!).

Follow semantic versioning rules

Your collection versioning must follow all semver rules. This means:

  • Patch level releases can only contain bugfixes;
  • Minor releases can contain new features, new modules and plugins, and bugfixes, but must not break backwards compatibility;
  • Major releases can break backwards compatibility.

Changelogs and Porting Guide

Your collection should provide data for the Ansible 2.10 changelog and porting guide. The changelog and porting guide are automatically generated from ansible-base, and from the changelogs of the included collections. All changes from the breaking_changes, major_changes, removed_features and deprecated_features sections will appear in both the changelog and the porting guide. You have two options for providing changelog fragments to include:

  1. If possible, use the antsibull-changelog tool, which uses the same changelog fragment as the ansible/ansible repository (see the documentation).
  2. If you cannot use antsibull-changelog, you can provide the changelog in a machine-readable format as changelogs/changelog.yaml inside your collection (see the documentation of changelogs/changelog.yaml format).

If you cannot contribute to the integrated Ansible changelog using one of these methods, please provide a link to your collection's changelog by creating an issue in https://github.com/ansible-community/ansible-build-data/. If you do not provide changelogs/changelog.yml or a link, users will not be able to find out what changed in your collection from the Ansible changelog and porting guide.

Make sure your collection passes the sanity tests

Run ansible-test sanity --docker -v in the collection with the latest ansible-base or stable-2.10 ansible/ansible checkout.

Keep informed

Be sure you're subscribed to:

Questions and Feedback

If you have questions or want to provide feedback, please see the Feedback section in the collection requirements.

(Internal link to keep track of issues: ansible-collections/overview#102)

Add automated collection publishing

SUMMARY

We should automate the publishing of a collection to ansible galaxy.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME
ADDITIONAL INFORMATION

Cannot create a transient domain

SUMMARY

Hi, all. I cannot create a transient libvirt domain from an XML with the create command (1). As discussed here, this is not the usual behavior of libvirt tools.
Also, the module reports that it requires 1 argument: "guest" (2). The output is misleading (2.1): the documentation does not provide information about a guest argument, which I believe is the "name" argument. Furthermore, as stated here, I understand that the module tries to find a domain with the same name, but this information, the domain name, is already provided by the XML used to create it (2.2), and I'm not sure why it is needed for the create command (it is not necessary for define). I believe this already works for the define command, which uses conn.defineXML(xml), while create uses find_vm(vmid).create() instead of conn.createXML(xml). From Virt.py:

def create(self, vmid):
    return self.find_vm(vmid).create()

def define_from_xml(self, xml):
    return self.conn.defineXML(xml)
ISSUE TYPE
  • Bug Report
COMPONENT NAME

Virt

ANSIBLE VERSION
ansible 2.9.12
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 2.7.17 (default, Jul 20 2020, 15:37:01) [GCC 7.5.0]
CONFIGURATION
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/etc/ansible/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s
DEFAULT_BECOME(/etc/ansible/ansible.cfg) = True
DEFAULT_BECOME_METHOD(/etc/ansible/ansible.cfg) = sudo
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /etc/ansible/ansible.log
DEFAULT_VAULT_IDENTITY_LIST(/etc/ansible/ansible.cfg) = [u'dev@~/.ansible_secret/vault_pass_insecure']
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/etc/ansible/ansible.cfg) = /usr/bin/python3
PERSISTENT_CONNECT_TIMEOUT(/etc/ansible/ansible.cfg) = 30
RETRY_FILES_ENABLED(env: ANSIBLE_RETRY_FILES_ENABLED) = False
OS / ENVIRONMENT

Target OS: Ubuntu 18.04.5 LTS

STEPS TO REPRODUCE

(1):

name: create a transient Libvirt domain from a XML
virt:
  name: domain-name
  command: create
  xml: "{{ lookup('file', 'domain.mxl') }}"

Or better (2):

name: create a transient Libvirt domain from a XML
virt:
  command: create
  xml: "{{ lookup('file', 'domain.mxl') }}"
EXPECTED RESULTS

The domain being created.

ACTUAL RESULTS

For (1), using the name argument:

TASK [creates a transient domain] ************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: VMNotFound: virtual machine domain-name not found
fatal: [hv-1]: FAILED! => {"changed": false, "msg": "virtual machine domain-name not found"}

For (2), not using the name argument:

TASK [creates a transient domain] ************************
fatal: [hv-1]: FAILED! => {"changed": false, "msg": "create requires 1 argument: guest"}
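
For reference, a minimal sketch of the direct libvirt-python call the report suggests: createXML() starts a transient domain straight from the XML, like `virsh create`, so no lookup of an existing name is needed (the file path is a placeholder):

import libvirt

with open('domain.xml') as f:                 # placeholder path to the domain definition
    xml = f.read()

conn = libvirt.open('qemu:///system')
dom = conn.createXML(xml, 0)                  # creates and starts a transient domain
print('started transient domain', dom.name())
conn.close()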

Add changelog publishing

SUMMARY

In ansible-collections/overview#18 there is some debate about how collections should publish changelogs. This collection currently has no changelog, so we need to create one using one of the many available methods and publish it.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME
ADDITIONAL INFORMATION

Rename repo to community.libvirt

SUMMARY

Hi,
To be more consistent with the other repos under https://github.com/ansible-collections/, I'd like to rename this GitHub repo to community.libvirt.

  • Old URLs for issues and PRs will still work
  • Old git URLs will work, though it will be good to update them locally
  • We will need to update the URLs in galaxy.yml

Are you happy with this?

ISSUE TYPE
  • Bug Report

Ansible virt module targets wrong KVM guest when ansible-playbook run in parallel

SUMMARY

When ansible-playbook is run in parallel on the same KVM host, acting on different KVM guests, the "virt" Ansible module sometimes gets confused about which KVM guest it is referring to.

During "virt undefine" and "virt destroy" Ansible sometimes fails with Domain not found: no domain with matching id <id> where <id> is not the id of the guest I targeted.
During "virt undefine" Ansible sometimes fails with Domain not found: no domain with matching name <name> where <name> is not name of the guest I targeted.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

virt

ANSIBLE VERSION
zt93k8:~ # ansible --version
ansible 2.9.12
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.6.10 (default, Dec 19 2019, 15:48:40) [GCC]
CONFIGURATION
zt93k8:~ # ansible-config dump --only-changed
lines ?-?/? (END)
zt93k8:~ #
OS / ENVIRONMENT
# uname -a Linux zt93k8 5.3.18-22-default #1 SMP Wed Jun 3 12:16:43 UTC 2020 (720aeba/lp-1a956f1) s390x s390x s390x GNU/Linux
 
Machine Type = Type:  8561  Model:  708  T01 

Host Kernel Level: 
5.3.18-22-default

Host OS:
NAME="SLES"
VERSION="15-SP2"
VERSION_ID="15.2"
PRETTY_NAME="SUSE Linux Enterprise Server 15 SP2"
ID="sles"
ID_LIKE="suse"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles:15:sp2"

Host Qemu Level:
QEMU emulator version 4.2.0 (SUSE Linux Enterprise 15)
Copyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers

Host Libvirt Level:
6.0.0
STEPS TO REPRODUCE
  1. Ensure all guests are running (virsh start)
    • This is done because the errors occur during destroy or undefine, and the playbook intentionally doesn't try to restart the guest.
  2. Run ansible-playbook in parallel on 6 guests (in 6 background processes)
    • Playbook does the following (see full virt_test.yml playbook below):
    1. Collect guest XML
    2. Destroy guest (virt destroy)
    3. Undefine guest (virt undefine)
    4. Define guest from saved XML (virt define)
    5. Start guest (virt start)
    6. Wait for it to start and verify ping and ssh connectivity.
  3. Wait for all processes to be complete
  4. Start over at step 1. Run for 9 iterations.

Full transcript of running the test:

zt93k8:~/lifecycle # iteration=1; for node in zt93k8seg-safron_lifecycle-10-20-112-66 zt93k8seg-safron_lifecycle-10-20-112-68 zt93k8seg-safron_lifecycle-10-20-112-69 zt93k8seg-safron_lifecycle-10-20-112-70 zt93k8seg-safron_lifecycle-10-20-112-73 zt93k8seg-safron_lifecycle-10-20-112-74; do   logfile=/tmp/virt_test_${node}-$iteration.log;   echo $logfile;   virsh list --all > $logfile; ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1 & done
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-66-1.log
[1] 177363
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-68-1.log
[2] 177366
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-69-1.log
[3] 177369
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-70-1.log
[4] 177372
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-73-1.log
[5] 177375
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-74-1.log
[6] 177378

zt93k8:~/lifecycle # for node in `virsh list --name --all |grep safron`; do echo $node; virsh start $node; done
zt93k8seg-safron_lifecycle-10-20-112-68
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-69
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-73
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-66
Domain zt93k8seg-safron_lifecycle-10-20-112-66 started

zt93k8seg-safron_lifecycle-10-20-112-70
Domain zt93k8seg-safron_lifecycle-10-20-112-70 started

zt93k8seg-safron_lifecycle-10-20-112-74
Domain zt93k8seg-safron_lifecycle-10-20-112-74 started

zt93k8:~/lifecycle # iteration=2; for node in zt93k8seg-safron_lifecycle-10-20-112-66 zt93k8seg-safron_lifecycle-10-20-112-68 zt93k8seg-safron_lifecycle-10-20-112-69 zt93k8seg-safron_lifecycle-10-20-112-70 zt93k8seg-safron_lifecycle-10-20-112-73 zt93k8seg-safron_lifecycle-10-20-112-74; do   logfile=/tmp/virt_test_${node}-$iteration.log;   echo $logfile;   virsh list --all > $logfile; ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1 & done
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-66-2.log
[1] 181317
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-68-2.log
[2] 181320
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-69-2.log
[3] 181323
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-70-2.log
[4] 181326
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-73-2.log
[5] 181329
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-74-2.log
[6] 181332
zt93k8:~/lifecycle #
[1]   Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[2]   Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[3]   Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[4]   Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[5]-  Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[6]+  Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
zt93k8:~/lifecycle # for node in `virsh list --name --all |grep safron`; do echo $node; virsh start $node; done
zt93k8seg-safron_lifecycle-10-20-112-70
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-69
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-68
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-74
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-66
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-73
error: Domain is already active

zt93k8:~/lifecycle # iteration=3; for node in zt93k8seg-safron_lifecycle-10-20-112-66 zt93k8seg-safron_lifecycle-10-20-112-68 zt93k8seg-safron_lifecycle-10-20-112-69 zt93k8seg-safron_lifecycle-10-20-112-70 zt93k8seg-safron_lifecycle-10-20-112-73 zt93k8seg-safron_lifecycle-10-20-112-74; do   logfile=/tmp/virt_test_${node}-$iteration.log;   echo $logfile;   virsh list --all > $logfile; ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1 & done
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-66-3.log
[1] 183840
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-68-3.log
[2] 183843
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-69-3.log
[3] 183846
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-70-3.log
[4] 183849
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-73-3.log
[5] 183852
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-74-3.log
[6] 183859
zt93k8:~/lifecycle #
[1]   Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[2]   Exit 2                  ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[3]   Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[4]   Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[5]-  Exit 2                  ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[6]+  Exit 2                  ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
zt93k8:~/lifecycle # for node in `virsh list --name --all |grep safron`; do echo $node; virsh start $node; done
zt93k8seg-safron_lifecycle-10-20-112-66
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-70
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-69
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-68
Domain zt93k8seg-safron_lifecycle-10-20-112-68 started

zt93k8seg-safron_lifecycle-10-20-112-73
Domain zt93k8seg-safron_lifecycle-10-20-112-73 started

zt93k8seg-safron_lifecycle-10-20-112-74
Domain zt93k8seg-safron_lifecycle-10-20-112-74 started

zt93k8:~/lifecycle # iteration=4; for node in zt93k8seg-safron_lifecycle-10-20-112-66 zt93k8seg-safron_lifecycle-10-20-112-68 zt93k8seg-safron_lifecycle-10-20-112-69 zt93k8seg-safron_lifecycle-10-20-112-70 zt93k8seg-safron_lifecycle-10-20-112-73 zt93k8seg-safron_lifecycle-10-20-112-74; do   logfile=/tmp/virt_test_${node}-$iteration.log;   echo $logfile;   virsh list --all > $logfile; ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1 & done
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-66-4.log
[1] 185860
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-68-4.log
[2] 185863
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-69-4.log
[3] 185866
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-70-4.log
[4] 185869
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-73-4.log
[5] 185872
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-74-4.log
[6] 185875
zt93k8:~/lifecycle #
[1]   Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[2]   Exit 2                  ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[3]   Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[4]   Exit 2                  ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[5]-  Exit 2                  ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[6]+  Exit 2                  ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
zt93k8:~/lifecycle # for node in `virsh list --name --all |grep safron`; do echo $node; virsh start $node; done
zt93k8seg-safron_lifecycle-10-20-112-73
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-74
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-66
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-69
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-68
Domain zt93k8seg-safron_lifecycle-10-20-112-68 started

zt93k8seg-safron_lifecycle-10-20-112-70
Domain zt93k8seg-safron_lifecycle-10-20-112-70 started

zt93k8:~/lifecycle # iteration=5; for node in zt93k8seg-safron_lifecycle-10-20-112-66 zt93k8seg-safron_lifecycle-10-20-112-68 zt93k8seg-safron_lifecycle-10-20-112-69 zt93k8seg-safron_lifecycle-10-20-112-70 zt93k8seg-safron_lifecycle-10-20-112-73 zt93k8seg-safron_lifecycle-10-20-112-74; do   logfile=/tmp/virt_test_${node}-$iteration.log;   echo $logfile;   virsh list --all > $logfile; ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1 & done
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-66-5.log
[1] 187503
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-68-5.log
[2] 187506
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-69-5.log
[3] 187509
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-70-5.log
[4] 187512
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-73-5.log
[5] 187515
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-74-5.log
[6] 187518
zt93k8:~/lifecycle #
[1]   Exit 2                  ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[2]   Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[3]   Exit 2                  ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[4]   Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[5]-  Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[6]+  Exit 2                  ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
zt93k8:~/lifecycle # for node in `virsh list --name --all |grep safron`; do echo $node; virsh start $node; done
zt93k8seg-safron_lifecycle-10-20-112-68
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-70
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-73
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-66
Domain zt93k8seg-safron_lifecycle-10-20-112-66 started

zt93k8seg-safron_lifecycle-10-20-112-69
Domain zt93k8seg-safron_lifecycle-10-20-112-69 started

zt93k8seg-safron_lifecycle-10-20-112-74
Domain zt93k8seg-safron_lifecycle-10-20-112-74 started

zt93k8:~/lifecycle # iteration=6; for node in zt93k8seg-safron_lifecycle-10-20-112-66 zt93k8seg-safron_lifecycle-10-20-112-68 zt93k8seg-safron_lifecycle-10-20-112-69 zt93k8seg-safron_lifecycle-10-20-112-70 zt93k8seg-safron_lifecycle-10-20-112-73 zt93k8seg-safron_lifecycle-10-20-112-74; do   logfile=/tmp/virt_test_${node}-$iteration.log;   echo $logfile;   virsh list --all > $logfile; ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1 & done
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-66-6.log
[1] 189495
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-68-6.log
[2] 189498
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-69-6.log
[3] 189501
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-70-6.log
[4] 189504
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-73-6.log
[5] 189507
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-74-6.log
[6] 189510
zt93k8:~/lifecycle #
[1]   Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[2]   Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[3]   Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[4]   Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[5]-  Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[6]+  Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
zt93k8:~/lifecycle # for node in `virsh list --name --all |grep safron`; do echo $node; virsh start $node; done
zt93k8seg-safron_lifecycle-10-20-112-69
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-73
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-66
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-70
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-68
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-74
error: Domain is already active

zt93k8:~/lifecycle # iteration=7; for node in zt93k8seg-safron_lifecycle-10-20-112-66 zt93k8seg-safron_lifecycle-10-20-112-68 zt93k8seg-safron_lifecycle-10-20-112-69 zt93k8seg-safron_lifecycle-10-20-112-70 zt93k8seg-safron_lifecycle-10-20-112-73 zt93k8seg-safron_lifecycle-10-20-112-74; do   logfile=/tmp/virt_test_${node}-$iteration.log;   echo $logfile;   virsh list --all > $logfile; ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1 & done
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-66-7.log
[1] 192059
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-68-7.log
[2] 192062
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-69-7.log
[3] 192065
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-70-7.log
[4] 192068
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-73-7.log
[5] 192071
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-74-7.log
[6] 192074
zt93k8:~/lifecycle #
[1]   Exit 2                  ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[2]   Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[3]   Exit 2                  ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[4]   Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[5]-  Exit 2                  ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[6]+  Exit 2                  ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
zt93k8:~/lifecycle # for node in `virsh list --name --all |grep safron`; do echo $node; virsh start $node; done
zt93k8seg-safron_lifecycle-10-20-112-68
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-70
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-66
Domain zt93k8seg-safron_lifecycle-10-20-112-66 started

zt93k8seg-safron_lifecycle-10-20-112-69
Domain zt93k8seg-safron_lifecycle-10-20-112-69 started

zt93k8seg-safron_lifecycle-10-20-112-73
Domain zt93k8seg-safron_lifecycle-10-20-112-73 started

zt93k8seg-safron_lifecycle-10-20-112-74
Domain zt93k8seg-safron_lifecycle-10-20-112-74 started

zt93k8:~/lifecycle # iteration=8; for node in zt93k8seg-safron_lifecycle-10-20-112-66 zt93k8seg-safron_lifecycle-10-20-112-68 zt93k8seg-safron_lifecycle-10-20-112-69 zt93k8seg-safron_lifecycle-10-20-112-70 zt93k8seg-safron_lifecycle-10-20-112-73 zt93k8seg-safron_lifecycle-10-20-112-74; do   logfile=/tmp/virt_test_${node}-$iteration.log;   echo $logfile;   virsh list --all > $logfile; ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1 & done
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-66-8.log
[1] 194464
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-68-8.log
[2] 194467
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-69-8.log
[3] 194470
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-70-8.log
[4] 194473
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-73-8.log
[5] 194476
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-74-8.log
[6] 194479
zt93k8:~/lifecycle #
[1]   Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[2]   Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[3]   Exit 2                  ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[4]   Exit 2                  ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[5]-  Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
[6]+  Done                    ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1
zt93k8:~/lifecycle # for node in `virsh list --name --all |grep safron`; do echo $node; virsh start $node; done
zt93k8seg-safron_lifecycle-10-20-112-70
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-69
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-68
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-66
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-74
error: Domain is already active

zt93k8seg-safron_lifecycle-10-20-112-73
error: Domain is already active

zt93k8:~/lifecycle # iteration=9; for node in zt93k8seg-safron_lifecycle-10-20-112-66 zt93k8seg-safron_lifecycle-10-20-112-68 zt93k8seg-safron_lifecycle-10-20-112-69 zt93k8seg-safron_lifecycle-10-20-112-70 zt93k8seg-safron_lifecycle-10-20-112-73 zt93k8seg-safron_lifecycle-10-20-112-74; do   logfile=/tmp/virt_test_${node}-$iteration.log;   echo $logfile;   virsh list --all > $logfile; ansible-playbook virt_test.yml -e "guest=$node" >> $logfile 2>&1 & done
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-66-9.log
[1] 196522
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-68-9.log
[2] 196525
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-69-9.log
[3] 196528
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-70-9.log
[4] 196531
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-73-9.log
[5] 196534
/tmp/virt_test_zt93k8seg-safron_lifecycle-10-20-112-74-9.log
[6] 196537
# virt_test.yml playbook
- hosts: localhost
  gather_facts: yes
  gather_subset: 
    - date_time
    - virtual
  vars:
    guest: ""
    guest_userid: rundeck

  tasks:
  - name: lifecycle | Gather guest XML for {{ guest }}
    virt:
      command: get_xml
      name: "{{ guest }}"
    register: guest_xml
    
  - name: lifecycle | Destroy guest {{ guest }}
    virt:
      command: destroy
      name: "{{ guest }}"

  - name: lifecycle | verify vm is stopped
    wait_for:
      path: /var/run/libvirt/qemu/{{ guest }}.pid
      state: absent
      timeout: 20

  - name: lifecycle | Undefine guest {{ guest }}
    virt:
      command: undefine
      name: "{{ guest }}"

  - name: lifecycle | verify vm is undefined
    wait_for:
      path: /etc/libvirt/qemu/{{ guest }}.xml
      state: absent
      timeout: 20

  - name: lifecycle | Define guest from saved XML
    virt:
      command: define
      name: "{{ guest }}"
      xml: "{{ guest_xml['get_xml'] }}"

  - name: lifecycle | Wait 5 seconds for node to finish defining
    wait_for:
      timeout: 5

  - name: lifecycle | verify vm is defined
    wait_for:
      path: /etc/libvirt/qemu/{{ guest }}.xml
      state: present
      timeout: 20

  - name: lifecycle | Start guest {{ guest }}
    virt:
      command: start
      name: "{{ guest }}"
  
  - name: lifecycle verify | Wait for guest to start and get IP address 
    shell: >
      set -o pipefail ;
      virsh domifaddr --source agent
      {{ guest }}
      | grep eth0
      | egrep  '([0-9]{1,3}\.){3}[0-9]{1,3}' 
      | awk '{ print $4 }' 
      | cut -d/ -f1
    register: result_get_ip
    until: result_get_ip.stdout
    retries: 30
    delay: 10

  - name: lifecycle verify | ping guest_ip {{ result_get_ip.stdout }}
    shell: >
      ping -c2 {{ result_get_ip.stdout }}

  - name: lifecycle verify | ssh guest_ip {{ result_get_ip.stdout }}
    shell: >
      ssh {{ guest_userid }}@{{ result_get_ip.stdout }} 'date'
    register: result_ssh
    until: result_ssh.rc == 0
    retries: 30
    delay: 10
EXPECTED RESULTS

I expected each ansible-playbook process to destroy/undefine/define/start the guest it was assigned without referencing or affecting any other guests. Multiple ansible-playbook processes running at the same time for different guests should not interfere with each other.

ACTUAL RESULTS

I attached the full logs for all guests below (plus an additional set of logs containing ansible-playbook -vvvvv output), but here are the logs for 3 of the guests that demonstrate the problems.

  • In the first log the playbook tried to undefine zt93k8seg-safron_lifecycle-10-20-112-66, but the error referenced a different guest name: Domain not found: no domain with matching name 'zt93k8seg-safron_lifecycle-10-20-112-68'.
  • In the second log the playbook tried to undefine zt93k8seg-safron_lifecycle-10-20-112-74 (id 583), but the error referenced a different guest id: Domain not found: no domain with matching id 584.
  • In the third log the playbook tried to destroy zt93k8seg-safron_lifecycle-10-20-112-74 (id 608), but the error referenced a different guest id: Domain not found: no domain with matching id 606 (see the illustrative snippet after this list).
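For reference, a minimal libvirt-python snippet (purely illustrative; it is not part of virt_test.yml and makes no claim about the virt module's internals) showing the two lookup styles that these error messages refer to:

# Illustrative only; the id and name below are taken from the logs in this report.
import libvirt

conn = libvirt.open("qemu:///system")

# Numeric ids only exist while a domain is running; once a domain is destroyed
# the id is gone, so this call raises libvirt.libvirtError ("no domain with
# matching id") if the guest was torn down by a concurrent playbook run.
dom_by_id = conn.lookupByID(584)

# Names (and UUIDs) stay stable across destroy/undefine/define cycles.
dom_by_name = conn.lookupByName("zt93k8seg-safron_lifecycle-10-20-112-74")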
#------  LOG 1
» cat virt_test_zt93k8seg-safron_lifecycle-10-20-112-66-5.log
 Id    Name                                      State
-----------------------------------------------------------
 607   zt93k8seg-safron_lifecycle-10-20-112-73   running
 608   zt93k8seg-safron_lifecycle-10-20-112-74   running
 609   zt93k8seg-safron_lifecycle-10-20-112-66   running
 610   zt93k8seg-safron_lifecycle-10-20-112-69   running
 611   zt93k8seg-safron_lifecycle-10-20-112-68   running
 612   zt93k8seg-safron_lifecycle-10-20-112-70   running
 -     zt93k8seg-bsteen-10-20-112-62             shut off

[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [localhost] *********************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************************************************************
ok: [localhost]

TASK [lifecycle | Gather guest XML for zt93k8seg-safron_lifecycle-10-20-112-66] ******************************************************************************************************************************************
ok: [localhost]

TASK [lifecycle | Destroy guest zt93k8seg-safron_lifecycle-10-20-112-66] *************************************************************************************************************************************************
ok: [localhost]

TASK [lifecycle | verify vm is stopped] **********************************************************************************************************************************************************************************
ok: [localhost]

TASK [lifecycle | Undefine guest zt93k8seg-safron_lifecycle-10-20-112-66] ************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: libvirt.libvirtError: Domain not found: no domain with matching name 'zt93k8seg-safron_lifecycle-10-20-112-68'
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Domain not found: no domain with matching name 'zt93k8seg-safron_lifecycle-10-20-112-68'"}

PLAY RECAP ***************************************************************************************************************************************************************************************************************
localhost                  : ok=4    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

#------ LOG 2
» cat virt_test_zt93k8seg-safron_lifecycle-10-20-112-74-1.log
 Id    Name                                      State
-----------------------------------------------------------
 577   zt93k8seg-safron_lifecycle-10-20-112-66   running
 578   zt93k8seg-safron_lifecycle-10-20-112-69   running
 581   zt93k8seg-safron_lifecycle-10-20-112-68   running
 583   zt93k8seg-safron_lifecycle-10-20-112-74   running
 584   zt93k8seg-safron_lifecycle-10-20-112-70   running
 585   zt93k8seg-safron_lifecycle-10-20-112-73   running
 -     zt93k8seg-bsteen-10-20-112-62             shut off

[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [localhost] *********************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************************************************************
ok: [localhost]

TASK [lifecycle | Gather guest XML for zt93k8seg-safron_lifecycle-10-20-112-74] ******************************************************************************************************************************************
ok: [localhost]

TASK [lifecycle | Destroy guest zt93k8seg-safron_lifecycle-10-20-112-74] *************************************************************************************************************************************************
ok: [localhost]

TASK [lifecycle | verify vm is stopped] **********************************************************************************************************************************************************************************
ok: [localhost]

TASK [lifecycle | Undefine guest zt93k8seg-safron_lifecycle-10-20-112-74] ************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: libvirt.libvirtError: Domain not found: no domain with matching id 584
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Domain not found: no domain with matching id 584"}

PLAY RECAP ***************************************************************************************************************************************************************************************************************
localhost                  : ok=4    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
#------ LOG 3
» cat virt_test_zt93k8seg-safron_lifecycle-10-20-112-74-4.log
 Id    Name                                      State
-----------------------------------------------------------
 603   zt93k8seg-safron_lifecycle-10-20-112-66   running
 604   zt93k8seg-safron_lifecycle-10-20-112-70   running
 605   zt93k8seg-safron_lifecycle-10-20-112-69   running
 606   zt93k8seg-safron_lifecycle-10-20-112-68   running
 607   zt93k8seg-safron_lifecycle-10-20-112-73   running
 608   zt93k8seg-safron_lifecycle-10-20-112-74   running
 -     zt93k8seg-bsteen-10-20-112-62             shut off

[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [localhost] *********************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************************************************************************
ok: [localhost]

TASK [lifecycle | Gather guest XML for zt93k8seg-safron_lifecycle-10-20-112-74] ******************************************************************************************************************************************
ok: [localhost]

TASK [lifecycle | Destroy guest zt93k8seg-safron_lifecycle-10-20-112-74] *************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: libvirt.libvirtError: Domain not found: no domain with matching id 606
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Domain not found: no domain with matching id 606"}

PLAY RECAP ***************************************************************************************************************************************************************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

Full logs from the 9 iterations described above:
virt_test_LOGS.zip

Full logs using ansible-playbook -vvvvv for 1 iteration with a different 6 nodes:
virt_test_VERBOSE_LOGS.zip

KVM host and KVM guest logs:
kvm_LOGS.zip

connection/libvirt_qemu.py: Add capabilities test

SUMMARY

Using the command virsh qemu-agent-command <domain name> '{"execute":"guest-info"}' | jq we are able to get back the set of capabilities the guest agent supports, which allows us to verify whether the connection's prerequisites are met. This would let us fail the host early with a clear message saying the connection cannot work, and list which capabilities are missing.
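For illustration, a rough Python sketch of such a check using libvirt-python's qemuAgentCommand; the required-capability set and the domain name are placeholders, and this is only a sketch of the idea, not the plugin's implementation:

# Sketch only: query guest-info via the QEMU guest agent and report missing capabilities.
import json
import libvirt
import libvirt_qemu

REQUIRED = {"guest-exec", "guest-exec-status", "guest-file-open"}  # assumed prerequisites

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("my-guest")  # placeholder domain name

# Equivalent to: virsh qemu-agent-command my-guest '{"execute":"guest-info"}'
raw = libvirt_qemu.qemuAgentCommand(dom, '{"execute":"guest-info"}', 5, 0)
info = json.loads(raw)["return"]

supported = {c["name"] for c in info["supported_commands"] if c["enabled"]}
missing = REQUIRED - supported
if missing:
    raise RuntimeError(
        "connection cannot work on %s, missing guest agent capabilities: %s"
        % (dom.name(), ", ".join(sorted(missing)))
    )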

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

community.libvirt.libvirt_qemu connection plugin

ADDITIONAL INFORMATION

inventory plugin: add a way to skip dormant domains

SUMMARY

Currently I have some libvirt domains that I created which are meant to be shut down most of the time. This means that running something like ansible -m ping all will always return errors like the following:

libvirt: Domain Config error : Requested operation is not valid: domain is not running
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: libvirt.libvirtError: Requested operation is not valid: domain is not running
nafta.hq.akdev.xyz | FAILED! => {
    "msg": "Unexpected failure during module execution.",
    "stdout": ""
}

This clutters the output, and it probably makes playbooks take slightly longer as Ansible tries and fails each command for domains that aren't running.

My proposal would be to add a new option in the config file:

plugin: community.libvirt.libvirt
uri: qemu:///system
ignore_off: True

With such a configuration the plugin would only return the domains that are currently active (it could warn that other domains are being ignored, for awareness).
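For illustration, here is a minimal Python sketch of how active-only filtering could work against libvirt-python; the ignore_off option name and the overall shape are assumptions taken from this proposal, not the plugin's actual implementation:

# Minimal sketch, assuming a hypothetical ignore_off option; not the plugin's actual code.
import libvirt

conn = libvirt.open("qemu:///system")

ignore_off = True  # would come from the inventory configuration above
flags = libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE if ignore_off else 0

# listAllDomains() with the ACTIVE flag returns only running domains,
# so shut-off guests never make it into the generated inventory.
for dom in conn.listAllDomains(flags):
    print(dom.name())

Filtering at enumeration time would also avoid the per-host failures shown above, because dormant domains are never handed to the connection plugin in the first place.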

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

community.libvirt.libvirt inventory plugin

ADDITIONAL INFORMATION

If the feature is implemented, users would be able to run playbooks without getting errors whenever one or more of their VMs are shut down.

$ ansible -m ping all
libvirt: Domain Config error : Requested operation is not valid: domain is not running
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: libvirt.libvirtError: Requested operation is not valid: domain is not running
nafta.hq.akdev.xyz | FAILED! => {
    "msg": "Unexpected failure during module execution.",
    "stdout": ""
}

I am willing to put some time towards making this happen if no one is against having it. Alternatively, if there is already a way of achieving what I want that I'm unaware of, please let me know.
