
ansible-manage-lvm's People

Contributors

aethylred, cityofships, davidcaste, dependabot[bot], emper0r, faisalnizam, genaumann, jcox10, markgoddard, mikap83, mnasiadka, mrlesmithjr, msielicki, oneswig, rohitkothari, roxyrob, smutel, stefanheimberg, tcharl, zoobert


ansible-manage-lvm's Issues

Device /home/<user>/d not found.

This is a regression introduced by the latest commit.

TASK [ansible-manage-lvm : manage_lvm | creating new LVM volume group(s)] **************************************************************************************************************
failed: [my-server] (item={u'lvnames': [{u'lvname': u'usr', u'create': True, u'filesystem': u'xfs', u'mntp': u'/usr', u'mount': True, u'size': u'2g'}, {u'lvname': u'var', u'create': True, u'filesystem': u'xfs', u'mntp': u'/var', u'mount': True, u'size': u'3g'}, {u'lvname': u'var_log', u'create': True, u'filesystem': u'xfs', u'mntp': u'/var/log', u'mount': True, u'size': u'2g'}], u'vgname': u'vg01', u'disks': u'/dev/vda2', u'create': True}) => {"failed": true, "item": {"create": true, "disks": "/dev/vda2", "lvnames": [{"create": true, "filesystem": "xfs", "lvname": "usr", "mntp": "/usr", "mount": true, "size": "2g"}, {"create": true, "filesystem": "xfs", "lvname": "var", "mntp": "/var", "mount": true, "size": "3g"}, {"create": true, "filesystem": "xfs", "lvname": "var_log", "mntp": "/var/log", "mount": true, "size": "2g"}], "vgname": "vg01"}, "msg": "Device /home/oloc/d not found."}

Role fails on CentOS due to system-storage-manager package

When running this role against a CentOS 6.8 host, I see the following error -

TASK [mrlesmithjr.manage-lvm : centos | installing lvm2] ***********************
fatal: [x.y.z.a]: FAILED! => {"changed": false, "failed": true, "msg": "No Package matching 'system-storage-manager' found available, installed or updated", "rc": 0, "results": []}

When running this role against a CentOS 7 host, it successfully installs the system-storage-manager package; however, it fails at the following task -

TASK [mrlesmithjr.manage-lvm : manage_lvm | creating new LVM volume group(s)] **
failed: [a.b.c.d] (item={u'lvnames': [{u'lvname': u'lvm1', u'create': True, u'filesystem': u'ext4', u'mntp': [], u'mount': False, u'size': u'2g'}, {u'lvname': u'lvm2', u'create': True, u'filesystem': u'ext4', u'mntp': u'/', u'mount': False, u'size': u'4g'}], u'vgname': u'ubuntu-vg', u'disks': u'/dev/xvdf', u'create': True}) => {"failed": true, "item": {"create": true, "disks": "/dev/xvdf", "lvnames": [{"create": true, "filesystem": "ext4", "lvname": "lvm1", "mntp": [], "mount": false, "size": "2g"}, {"create": true, "filesystem": "ext4", "lvname": "lvm2", "mntp": "/", "mount": false, "size": "4g"}], "vgname": "ubuntu-vg"}, "msg": "Failed to find required executable pvs"}

This seems to be because the pvs command is not present on the CentOS machine, since lvm2 was not installed.

It looks like the role attempts to install the system-storage-manager package, yet it does not use the ssm CLI tool that comes with it; instead it uses the standard LVM Ansible modules (lvol, lvg, etc.), which rely on the lvm2 package.

Wondering if system-storage-manager should be replaced with the lvm2 package for CentOS, if there is no hard requirement for system-storage-manager there? Or should the LVM tasks be managed using ssm on CentOS? A sketch of the first option is below.
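If the first option is preferred, a minimal sketch (assuming the RedHat-family install task simply swaps the package name) could look like this:

  - name: centos | installing lvm2 (sketch, replaces system-storage-manager)
    package:
      name: lvm2
      state: present
    become: true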

ansible_distribution used instead of ansible_facts.distribution

Describe the bug
ansible_distribution is being used directly; it should be accessed via ansible_facts.distribution.

By default, Ansible injects a variable for every fact, prefixed with ansible_. This can result in a large number of variables for each host, which at scale can incur a performance penalty. Ansible provides a configuration option (inject_facts_as_vars) that can be set to False to prevent this injection of facts. In that case, facts must be referenced via the ansible_facts dictionary, e.g. ansible_facts.distribution.
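For illustration, a conditional that keeps working when fact injection is disabled could look like this (hypothetical task):

  - name: example | gate a task on the distribution fact
    debug:
      msg: "Running on {{ ansible_facts['distribution'] }}"
    when: ansible_facts['distribution'] == 'CentOS'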

REQ: Set mount permissions, owner / group

I know this can be done using the file module; however, as this role is managing the mounts, it would be nice if it also managed each mount directory's permissions, owner, group, etc. A sketch of what that could look like is below.
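A hedged sketch, assuming hypothetical owner/group/mode keys were added to each lvnames entry (these keys do not exist in the role today):

  - name: manage_mounts | setting mount point ownership and permissions (sketch)
    file:
      path: "{{ item.1.mntp }}"
      state: directory
      owner: "{{ item.1.owner | default(omit) }}"
      group: "{{ item.1.group | default(omit) }}"
      mode: "{{ item.1.mode | default(omit) }}"
    become: true
    with_subelements:
      - "{{ lvm_groups }}"
      - lvnames
    when: item.1.mount | default(false)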

Xfs not working on centos machines

Hi,

The XFS tools are not installed on CentOS-like machines, so creating an XFS mount point fails.

I did a large PR adding Molecule tests and fixing that case.
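For reference, the missing package is xfsprogs; a sketch of the install task:

  - name: centos | installing xfs tools (sketch)
    package:
      name: xfsprogs
      state: present
    become: true
    when: ansible_os_family == 'RedHat'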

Issue with resizefs and swap

Describe the bug
I'm getting the following error:

TASK [mrlesmithjr.manage-lvm : create_fs | creating new filesystem on new LVM logical volume(s)] ***
fatal: [host]: FAILED! => changed=false 
  msg: module does not support resizing swap filesystem yet.

To Reproduce
Steps to reproduce the behavior:
Configure a CentOS 8 host with a swap partition

 - vgname: vg
   disks:
     - /dev/sda5
   create: true
   lvnames:
     - lvname: swap_1
       size: 5g
       create: true
       # Defines filesystem to format lvol as
       filesystem: swap

See the error shown above.

Expected behavior
It should run without error.

Cannot run module with Ansible 2.2

Changes to Ansible deprecated the use of bare vars in playbooks. This meant some of the tasks that used with_* could not be executed. Per Ansible's release notes, the deprecated functionality to use bare vars was removed in Ansible 2.2.

This role cannot be run in its current state because code in manage_lvm.yml uses the bare-variable syntax. Those tasks need to be modified to the new syntax:

  with_subelements:
    - "{{ lvm_groups }}"
    - lvnames

Error with XFS filesystem

Error received when creating XFS filesystems:

failed: [consul-itg-01] (item=[{u'vgname': u'vg02', u'disks': [u'/dev/vdb1'], u'create': True}, {u'lvname': u'consul-data', u'filesystem': u'xfs', u'mntp': u'/opt/consul/data', u'create': True, u'mount': True, u'size': u'5g'}]) => {"ansible_loop_var": "item", "changed": true, "cmd": ["xfs_growfs", "-d", "/dev/vg02/consul-data"], "delta": "0:00:00.005995", "end": "2019-10-14 11:57:37.229978", "item": [{"create": true, "disks": ["/dev/vdb1"], "vgname": "vg02"}, {"create": true, "filesystem": "xfs", "lvname": "consul-data", "mntp": "/opt/consul/data", "mount": true, "size": "5g"}], "msg": "non-zero return code", "rc": 1, "start": "2019-10-14 11:57:37.223983", "stderr": "xfs_growfs: /dev/vg02/consul-data is not a mounted XFS filesystem", "stderr_lines": ["xfs_growfs: /dev/vg02/consul-data is not a mounted XFS filesystem"], "stdout": "", "stdout_lines": []}

xfs_growfs expects a mounted XFS filesystem (i.e. the mount point) as its argument, not the LVM mapper device.
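A possible fix, sketched on the assumption that item.1.mntp holds the mount point (as in the failing item above):

  - name: manage_lvm | resizing xfs filesystem (sketch)
    command: "xfs_growfs -d {{ item.1.mntp }}"
    become: true
    when: item.1.filesystem == 'xfs' and (item.1.mount | default(false))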

extending volume

Hi @mrlesmithjr,
we noticed you removed the XFS resize capability, with the comment

unable to resize xfs: looks like we've to reference the mountpoint instead of the device

in https://github.com/mrlesmithjr/ansible-manage-lvm/blob/master/tasks/create_fs.yml
related, I suppose, to these commits:

17e8ec1
4a1177e

We noted that on CentOS 7 the LV path normally works (e.g. xfs_growfs {LV Path}), while on other versions (probably CentOS 8) it works with the mount point (xfs_growfs /mountpoint).

On CentOS 7 this role no longer extends XFS filesystems at all.

LVM physical disk not updated (pvresize)

This role does not seem to automatically extend the PV after the underlying disk size has been increased.

Reproduce steps:

Run a playbook using this role with this initial configuration (/dev/sdb = size 5GB)

manage_lvm: true
lvm_groups:
  - vgname: 'mysqlvg'
    disks:
      - /dev/sdb
    create: true
    lvnames:
      - lvname: 'mysqllv'
        size: '+100%FREE'
        create: true
        filesystem: xfs
        mount: true
        mntp: '/mysql'

Increasing the /dev/sdb disk size (e.g. an AWS EBS volume or a vSphere VM virtual disk) and running the playbook again does not extend the PV to the new disk size, subsequently leaving the VG, LV and filesystem unexpanded. A sketch of the missing step is below.
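A hedged sketch of the missing step, assuming each entry in disks is a PV device path:

  - name: manage_lvm | extending physical volume(s) to the new disk size (sketch)
    command: "pvresize {{ item.1 }}"
    become: true
    register: pvresize_result
    # heuristic: pvresize reports on stdout how many PVs were resized
    changed_when: "'1 physical volume(s) resized' in pvresize_result.stdout"
    with_subelements:
      - "{{ lvm_groups }}"
      - disks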

About extending functionality

This LVM role is so cool! I noticed in the notes that it has extend functionality, but I'm not sure how to use it. Can you give me a use case? (I'm not sure the role can extend an existing LV without any impact on the files in the LV.)
Thanks!

rescan-scsi-bus.sh fails on NVMe only hosts

Describe the bug
rescan-scsi-bus.sh fails on NVMe-only hosts (no SCSI devices)

Log:
fatal: [node]: FAILED! => {"changed": false, "cmd": ["/usr/bin/rescan-scsi-bus.sh"], "delta": "0:00:00.012365", "end": "2021-11-22 16:31:58.033021", "msg": "non-zero return code", "rc": 1, "start": "2021-11-22 16:31:58.020656", "stderr": "cat: /sys/class/scsi_host/host/proc_name: No such file or directory", "stderr_lines": ["cat: /sys/class/scsi_host/host/proc_name: No such file or directory"], "stdout": "No SCSI host adapters found in sysfs", "stdout_lines": ["No SCSI host adapters found in sysfs"]}

To Reproduce
Steps to reproduce the behavior:

  1. Run the role on an NVMe-only host
  2. See error

Expected behavior
The role should run without error.
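One possible guard, sketched under the assumption that the rescan can simply be skipped when sysfs reports no SCSI host adapters:

  - name: checking for scsi host adapters (sketch)
    find:
      paths: /sys/class/scsi_host
      file_type: any
    register: scsi_hosts

  - name: rescanning for new disks added (sketch)
    command: /usr/bin/rescan-scsi-bus.sh
    become: true
    when: scsi_hosts.matched > 0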

resize disk

Hi,

A few days ago the module worked fine... today I pulled from the repo to get updates and now I get this error.

my config

lvm vars

manage_lvm: true
lvm_disk_mount: "/dockerlvm/docker"
lvm_groups:
  - vgname: 'docker'
    disks:
      - /dev/sdb
    create: true
    lvnames:
      - lvname: 'docker'
        size: 169.95g  # <--- I just grew the disk, adding another 40GB from 129.95
        create: true
        filesystem: ext4
        mount: true
        mntp: "{{ lvm_disk_mount }}"

I increased the VMware disk, ran partprobe, and ran pvresize /dev/sdb2 as I always did before launching the deploy.

error found

TASK [ext-mrlesmithjr.manage-lvm : centos | install xfs tools] **************************************************************************************************************************************
Monday 27 April 2020 15:13:34 +0200 (0:00:00.093) 0:00:59.796 **********
fatal: [docker2]: FAILED! =>
msg: 'Invalid data passed to ''loop'', it requires a list, got this instead: ({u''vgname'': u''docker'', u''disks'': [u''/dev/sdb''], u''create'': True}, {u''lvname'': u''docker'', u''create'': True, u''filesystem'': u''ext4'', u''mntp'': u''/dockerlvm/docker'', u''mount'': True, u''size'': u''169.95g''}). Hint: If you passed a list/dict of just one element, try adding wantlist=True to your lookup invocation or use q/query instead of lookup.'

PLAY RECAP ******************************************************************************************************************************************************************************************
docker2 : ok=4 changed=0 unreachable=0 failed=1 skipped=2 rescued=0 ignored=0

Travis Build

The build is failing at the Docker step.

Can someone please look into it, or can I fix it?

I created a PR for a swap partition change; please accept it.

Fix Galaxy meta info

The following error is being generated:

  molecule lint
  shell: /usr/bin/bash -e {0}
INFO     default scenario test matrix: dependency, lint
INFO     Performing prerun...
INFO     Added ANSIBLE_LIBRARY=/home/runner/.cache/ansible-compat/10b5be/modules:/home/runner/.ansible/plugins/modules:/usr/share/ansible/plugins/modules
INFO     Added ANSIBLE_COLLECTIONS_PATH=/home/runner/.cache/ansible-compat/10b5be/collections:/home/runner/.ansible/collections:/usr/share/ansible/collections
INFO     Added ANSIBLE_ROLES_PATH=/home/runner/.cache/ansible-compat/10b5be/roles:/home/runner/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
ERROR    Computed fully qualified role name of manage-lvm does not follow current galaxy requirements.
Please edit meta/main.yml and assure we can correctly determine full role name:

galaxy_info:
  role_name: my_name  # if absent, the directory name hosting the role is used instead
  namespace: my_galaxy_namespace  # if absent, author is used instead

Namespace: https://galaxy.ansible.com/docs/contributing/namespaces.html#galaxy-namespace-limitations
Role: https://galaxy.ansible.com/docs/contributing/creating_role.html#role-names

As an alternative, you can add 'role-name' to either skip_list or warn_list.

Fix Travis Testing

Currently Travis CI testing is failing after adding Python requirements.

skip setting rescan_scsi_command for debian testing and sid also

Describe the bug
For Debian Testing and Sid, ansible_distribution_major_version contains a non-numeric string, so comparing it against an integer version fails.

Expected behavior
Add an additional condition checking ansible_distribution_release, which should contain the Debian codename. A sketch follows the screenshot below.

PR: #91

Screenshots

TASK [ansible-role-storage : Set rescan_scsi_command for old debian version] ***************************************************************************************************************************
fatal: [htpc-g072]: FAILED! => {"msg": "The conditional check 'ansible_distribution_major_version is version(10, '<=')' failed. The error was: Version comparison failed: '<' not supported between instances of 'str' and 'int'\n\nThe error appears to be in '/home/*****/tasks/main.yml': line 3, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# tasks file for ansible-manage-lvm\n- name: Set rescan_scsi_command for old debian version\n  ^ here\n"}

ansible-galaxy role 404s

Describe the bug
Looks like something went wrong with the latest release, as the role is no longer available on Ansible Galaxy.

To Reproduce
Steps to reproduce the behavior:

  1. Go to https://galaxy.ansible.com/mrlesmithjr/manage-lvm
  2. Get a 404 😁

Reviewers needed

Looking for folks who would be interested in being reviewers for this repo. Please let me know if you are interested.

Syntax error on tasks/centos.yml

Describe the bug
There's a syntax error in the centos | installing sg3_utils task:

The name and state attributes should be indented under the package section. (A corrected sketch follows the error below.)

Screenshots
The following error is shown when executing it:

'state' is not a valid attribute for a Task
The error appears to be in '/tmp/rhel8-8dMdj20220113-2950417-12c2fvd/microenv_roles/manage-lvm/tasks/centos.yml': line 45, column 5, but may be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- block:
  - name: centos | installing sg3_utils
    ^ here
This error can be suppressed as a warning using the "invalid_task_attribute_failed" configuration
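A sketch of the corrected indentation (assuming the task uses the package module, as the role's other install tasks do):

  - name: centos | installing sg3_utils
    package:
      name: sg3_utils
      state: present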

Cleanup Code

The code and vars need a cleanup; they are not cleanly formatted at the moment.

btrfs: extra double quotes

The extra double quote at the end of the line triggers an error.

ERROR! Syntax Error while loading YAML.

The error appears to have been in '/xxx/roles/ansible-manage-lvm/tasks/manage_lvm.yml': line 66, column 57, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- name: manage_lvm | resizing btrfs
  shell: "btrfs filesystem resize max {{ item.1.mntp }}""
                                                        ^ here

Command rescan-scsi-bus does not exist in Debian 11

Describe the bug
The task ext-lvm : debian | rescanning for new disks added uses the rescan-scsi-bus command, which no longer exists in Debian 11.

To Reproduce
Steps to reproduce the behavior:

  1. Deploy the role on Debian 11

Expected behavior
No error reported

Screenshots

TASK [ext-lvm : debian | rescanning for new disks added] **********************************************************************************************************************************************************
fatal: [bastion-app-dc2-01.adm.prx.prod.dwadm.in]: FAILED! => changed=false 
  cmd: /sbin/rescan-scsi-bus
  msg: '[Errno 2] No such file or directory: b''/sbin/rescan-scsi-bus'''
  rc: 2
  stderr: ''
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>

Additional context
In Debian 10, the rescan-scsi-bus command was included in the scsitools package.
In Debian 11, the command is no longer included in the scsitools package.

Before creating a new PR, I need to know whether this command is no longer needed on Debian 11, or whether it should be replaced by another one. One candidate is sketched below.
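One hedged possibility (assumption: on Debian 11 the sg3-utils package ships an equivalent rescan-scsi-bus.sh script):

  - name: debian | installing sg3-utils (sketch)
    apt:
      name: sg3-utils
      state: present
    become: true

  - name: debian | rescanning for new disks added (sketch)
    command: /usr/bin/rescan-scsi-bus.sh
    become: true
    changed_when: false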

Need an additional condition: the disk is not in LVM format and contains data, but is not mounted (see /dev/vde in the lsblk output below, before and after unmounting).

[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 50G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 10G 0 part
└─centos-root 253:0 0 10G 0 lvm /
vdb 252:16 0 10G 0 disk
vdc 252:32 0 10G 0 disk
vdd 252:48 0 10G 0 disk
vde 252:64 0 10G 0 disk /data/vde
vdf 252:80 0 10G 0 disk
vdg 252:96 0 10G 0 disk
vdh 252:112 0 10G 0 disk
vdi 252:128 0 10G 0 disk /data/vdi
[root@localhost ~]# umount /data/vde
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 50G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 10G 0 part
└─centos-root 253:0 0 10G 0 lvm /
vdb 252:16 0 10G 0 disk
vdc 252:32 0 10G 0 disk
vdd 252:48 0 10G 0 disk
vde 252:64 0 10G 0 disk
vdf 252:80 0 10G 0 disk
vdg 252:96 0 10G 0 disk
vdh 252:112 0 10G 0 disk
vdi 252:128 0 10G 0 disk /data/vdi

[root@localhost ~]# lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sr0
vda
├─vda1 xfs 09ac9b3f-91c2-4004-ab9a-93b7333d6b10 /boot
└─vda2 LVM2_member FRkLWF-D13C-WtyC-M07L-Qyoh-vqer-Qdon5q
└─centos-root xfs 44863b3b-5a77-495b-8bd0-98545221a431 /
vdb LVM2_member NfuF65-M1hI-FtgP-JFT0-GKQU-0cJg-rrlrCo
vdc LVM2_member oh4YIm-3qiT-1yas-eSxR-7RCS-ZXBN-8YgjkA
vdd
vde ext4 bdf6e67b-56f6-4f75-b1d3-15838dd9d977
vdf
vdg
vdh
vdi xfs a6d6a9de-c976-431c-9db0-0654a6165b4e /data/vdi
[root@localhost ~]#

(CentOS) No rescan of scsi device resize and scsi bus check script error

Hi @mrlesmithjr,
on CentOS this role does not recognize a partition/disk resize (fdisk -l does not see the new size when one increases a virtual machine's disk), so the role works only after some manually executed commands:

  • Rescan the disk/partition

partprobe, or more specifically partprobe /dev/sd[X][Y] (requires: yum install parted)

An alternative to partprobe is:
echo '1' > /sys/class/scsi_disk/0\:0\:0\:0/device/rescan

  • Extend the physical volume on the enlarged virtual disk containing the VG/LV

pvresize /dev/sdX

Can you add these commands to the role? A sketch is below.
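A hedged sketch of the rescan step as role tasks (the pvresize step could follow the same pattern as the sketch in the pvresize issue above; assumes each disks entry is a device path):

  - name: centos | installing parted (sketch)
    package:
      name: parted
      state: present
    become: true

  - name: centos | rescanning partition tables (sketch)
    command: "partprobe {{ item }}"
    become: true
    changed_when: false
    loop: "{{ lvm_groups | map(attribute='disks') | flatten }}"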

Interested adding thinpool

Hi folks,
Thank you for that wonderful role!

I made a little wrapper role on top of it, adding some more capabilities: thin pool management and autoextend.

If you're interested, please take a look, I can PR the additional code if you want to.

Best regards,
Charlie

XFS filesystems are not extended

Hello,

Currently, XFS filesystems are not extended.
If you create a 1GB XFS logical volume with this role, then update the size to 5GB and redeploy the configuration, the logical volume is increased but the filesystem is not.
After manually running xfs_growfs on the filesystem, the size is correct.

Thanks.

Enhancement request

Hi,

Thanks for a great Ansible role, very useful! However, there is one feature I'd like to see included: the ability to partition the disks. As it is now, the role assumes the disks are already partitioned and ready to be used.

Perhaps the variable declaration could look something like this:

- hosts: test-nodes
  vars:
    lvm_groups:
      - vgname: test-vg
        disks:
          - device: /dev/sdb
            number: 1
          - device: /dev/sdc
            number: 2
        create: true
        lvnames:
          - lvname: test_1
            size: 5g
            create: true
            filesystem: ext4
            mount: true
            mntp: /mnt/test_1
          - lvname: test_2
            size: 10g
            create: true
            filesystem: ext4
            mount: true
            mntp: /mnt/test_2
    manage_lvm: true
    pri_domain_name: 'test.vagrant.local'
  roles:
    - role: ansible-manage-lvm
  tasks:

and corresponding part in the role:

- name: Partition disk
  parted:
    device: "{{ item.1.device }}"
    number: "{{ item.1.number }}"
    flags: [lvm]
    state: present
  with_subelements:
    - "{{ lvm_groups }}"
    - disks
  register: parted

Unfortunately I don't know how to build a list out of that for the pvs option of the lvg module. (A possible approach is sketched below.)
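One way, as an untested sketch against the variable layout above, is to assemble the partition paths with a small Jinja2 loop:

  - name: manage_lvm | creating new LVM volume group(s) (sketch)
    lvg:
      vg: "{{ item.vgname }}"
      # builds e.g. /dev/sdb1,/dev/sdc2 from the device/number pairs above
      pvs: >-
        {%- set pvs = [] -%}
        {%- for disk in item.disks -%}
        {%- set _ = pvs.append(disk.device ~ disk.number) -%}
        {%- endfor -%}
        {{ pvs | join(',') }}
    become: true
    with_items: "{{ lvm_groups }}"
    when: item.create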

Regards,

/Johan

AWS nvme to sd device name support

Hi,

Thanks for this great Ansible role! What about adding support for AWS NVMe device-name abstraction using a temporary /dev/sd (/dev/xvd) device name?

Example:

On new Nitro-based AWS EC2 instances, the NVMe driver is used, and at EC2 startup it renames the defined /dev/sdX (or /dev/xvdX) device names to unpredictable /dev/nvmeXnY device names. Some solutions can be used to recover the /dev/sd (/dev/xvd) mapping from the NVMe block device. This role could support something like this:

  1. Use nvme-cli to create symbolic links for the /dev/sd (or /dev/xvd) names from which each nvmeXnY device was renamed
  2. Use those links in the Ansible configuration to manage LVM (matching the AWS EBS configuration device_name=/dev/sdX)
  3. Remove the temporarily created symbolic links

It would avoid inconsistent NVMe "device names" in favor of fixed /dev/sd|/dev/xvd device names, and would allow this role to be used on AWS EC2 Nitro-based instances without adding separate Ansible (or provisioning, e.g. Terraform) code to do it.

Some refs for finding the mapping between /dev/sd|/dev/xvd and /dev/nvmeXnY device names (a sketch of step 1 follows the refs):
https://github.com/leboncoin/terraform-aws-nvme-example/blob/master/scripts/ebs_alias.sh.tpl
https://russell.ballestrini.net/aws-nvme-to-block-mapping/#nvme-to-block-mapping
https://medium.com/@ripon.banik/how-to-find-ebs-volume-id-for-nvme-volume-1148d7499dc
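A heavily hedged sketch of step 1, following the pattern of the linked ebs_alias.sh script; the vendor-specific byte range, the device path and the output cleanup are all assumptions taken from those references:

  - name: aws | reading original EBS device name from NVMe controller data (sketch)
    shell: |
      # per the refs above, AWS stores the EBS device name (e.g. sdf) in the
      # vendor-specific area of the NVMe Identify Controller data
      nvme id-ctrl --raw-binary /dev/nvme1n1 | cut -c3073-3104 | tr -d ' '
    register: ebs_name
    become: true

  - name: aws | creating a temporary /dev alias for the role to consume (sketch)
    file:
      src: /dev/nvme1n1
      dest: "/dev/{{ ebs_name.stdout | regex_replace('^/dev/', '') }}"
      state: link
    become: true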

manage_lvm | unmounting filesystem(s): extra parenthesis

Hi,

I am using the master version of ansible-manage-lvm and I had an issue with manage_lvm | unmounting filesystem(s) in manage_lvm.yml:

- name: manage_lvm | unmounting filesystem(s)
  mount:
    name: "{{ item[1]['mntp'] }}"
    src: "/dev/{{ item[0]['vgname'] }}/{{ item[1]['lvname'] }}"
    fstype: "{{ item[1]['filesystem'] | default(omit) }}"
    state: "absent"
  become: true
  with_subelements:
    - "{{ lvm_groups }}"
    - lvnames
  when: >
        (item[1] is defined and
        item[1] != 'None') and
        (item[1]['create'] is defined and
        not item[1]['create'] and
        item[1]['filesystem'] != "swap"))

The problem is due to the extra closing parenthesis after "swap". I will submit a pull request with the fix; the corrected condition is shown below.
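For reference, the corrected when: condition with the stray parenthesis removed:

  when: >
        (item[1] is defined and
        item[1] != 'None') and
        (item[1]['create'] is defined and
        not item[1]['create'] and
        item[1]['filesystem'] != "swap")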

Support managing of lv when the vg is already created

The use case is to perform only the creation of LVs, filesystems and mounts; the VG was already created during OS install.

Currently the logic requires that the VG's create is set to true. Otherwise it stops with this error message:

fatal: [ax41.pythonbites.de]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'disks'\n\nThe error appears to be in '/home/gthieleb/.ansible/roles/mrlesmithjr.manage-lvm/tasks/create_vg.yml': line 3, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: manage_lvm | creating new LVM volume group(s)\n  ^ here\n"}

I don't want to have to describe the disks mapping of the underlying volume group.

Task check already converted failed

Hello,

Using the latest version, 0.1.6, I encountered this issue when using this role:

failed: [consul-app-dc1-03.xxxxx] (item={'lvname': 'consul-data', 'size': '6g', 'create': True, 'filesystem': 'xfs', '
mount': True, 'mntp': '/opt/consul/data'}) => changed=false                                                                                  
  ansible_loop_var: lv                                                                                                                       
  cmd: xfs_info /dev/vg02/consul-data | grep -c 'ftype=1'                                                                                    
  delta: '0:00:00.013730'                                                                                                                    
  end: '2020-04-21 11:23:05.168671'                                                                                                          
  lv:                                                                                                                                        
    create: true                                                                                                                             
    filesystem: xfs                                                                                                                          
    lvname: consul-data                                                                                                                      
    mntp: /opt/consul/data
    mount: true
    size: 6g
  msg: non-zero return code
  rc: 1
  start: '2020-04-21 11:23:05.154941'
  stderr: |-
    xfs_info: /dev/vg02/consul-data contains a mounted filesystem
  
    fatal error -- couldn't initialize XFS library
  stderr_lines: <omitted>
  stdout: '0'
  stdout_lines: <omitted>

Reason:

The xfs_info command should be executed on the mount point, not directly on the logical volume device. (A corrected sketch is below.)
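A hedged sketch of the corrected check, assuming the loop variable lv carries mntp as in the log above (note that grep -c returns rc 1 when the count is 0):

  - name: create_fs | checking if xfs filesystem has ftype=1 (sketch)
    shell: "xfs_info {{ lv.mntp }} | grep -c 'ftype=1'"
    register: xfs_ftype
    changed_when: false
    failed_when: false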

Partition resize ?

I might be blind, but I don't see any task that extends the actual partition.
My case: deploying an Azure RHEL 8 image from the marketplace. It's 64GiB big; of course I want a bigger OS disk, so I provision 120GiB.

The way I understand it, the first step would be to extend the actual partition (e.g. /dev/sda2) and then run pvresize, lvextend, etc.

There is a community parted Ansible module for this:

- name: Read device information (always use unit when probing)
  community.general.parted:
    device: /dev/sda
    unit: MiB
  register: sda_info

- name: Extend an existing partition to fill all available space
  become: true
  community.general.parted:
    device: /dev/sda
    number: "{{ sda_info.partitions | length }}"
    part_end: "99%"
    resize: true
    label: gpt
    state: present
  tags:
    - resizePart

README description clarifying extending lvm

Thank you for your work on this. It is wonderful.

I would like to ask: what is the process to extend a volume? Add an additional disk to the list of disks, and increase the size?

I noticed the conditional that runs the resize tasks references lvm['changed']. Is that the lvm_groups?

Just looking for a little clarification. Thanks!

skip resizing swap filesystem

Describe the bug
The Ansible filesystem module doesn't support resizing a swap filesystem.

To Reproduce
Try to change the swap partition size.

Expected behavior
Skip resizing the swap filesystem (a sketch follows the screenshot below).

PR: #89

Screenshots

TASK [ansible-role-storage : create_fs | creating new filesystem on new LVM logical volume(s)] *********************************************************************************************************
fatal: [htpc-g072]: FAILED! => {"changed": false, "msg": "module does not support resizing swap filesystem yet."}
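A hedged sketch of that skip, assuming the create_fs task uses the filesystem module with a resizefs flag:

  - name: create_fs | creating new filesystem on new LVM logical volume(s) (sketch)
    filesystem:
      fstype: "{{ item.1.filesystem }}"
      dev: "/dev/{{ item.0.vgname }}/{{ item.1.lvname }}"
      # resizefs is unsupported for swap, so only request it for other fstypes
      resizefs: "{{ item.1.filesystem != 'swap' }}"
    become: true
    with_subelements:
      - "{{ lvm_groups }}"
      - lvnames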

Deprecation warning

Several deprecation warnings appear from the community.general collection:

[DEPRECATION WARNING]: community.general.system.filesystem has been deprecated. You are using an internal name to access the 
community.general.filesystem modules. This has never been supported or documented, and will stop working in community.general 9.0.0. This 
feature will be removed from community.general in version 9.0.0.
[DEPRECATION WARNING]: community.general.system.lvol has been deprecated. You are using an internal name to access the 
community.general.lvol modules. This has never been supported or documented, and will stop working in community.general 9.0.0. This feature
 will be removed from community.general in version 9.0.0.
[DEPRECATION WARNING]: community.general.system.lvg has been deprecated. You are using an internal name to access the community.general.lvg
 modules. This has never been supported or documented, and will stop working in community.general 9.0.0. This feature will be removed from 
community.general in version 9.0.0.
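The fix is presumably to reference the public module names rather than the internal system.* paths; a sketch of one affected task, assuming the role's usual with_subelements loop:

  - name: manage_lvm | creating new LVM logical volume(s) (sketch)
    community.general.lvol:  # not community.general.system.lvol
      vg: "{{ item.0.vgname }}"
      lv: "{{ item.1.lvname }}"
      size: "{{ item.1.size }}"
    with_subelements:
      - "{{ lvm_groups }}"
      - lvnames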

Rename `create` key to `present`

Is your feature request related to a problem? Please describe.
When I read the lvm_groups variables, the create key causes confusion. Even after reading the role variables, I naturally interpreted the key as "should Ansible create and/or manage the VG or LV or not?": true would manage it, false would not manage it.

Getting this wrong could potentially end up removing a lot of logical volumes.

Describe the solution you'd like
The question I should be asking myself is: should the VG or LV be present on the system? Therefore I would like to request renaming the create key to present (illustrated below).
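A small before/after illustration of the proposal (hypothetical data_vg entry):

  # current
  - vgname: data_vg
    create: true    # false removes the VG; it does not mean "unmanaged"

  # proposed
  - vgname: data_vg
    present: true   # states the desired state explicitly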

Describe alternatives you've considered
You could consider another name than create, as long as it makes clear that the key represents presence, not the act of creating.


XFS filesystems are not created

Describe the bug
XFS filesystem is not created

To Reproduce
Use the following lvm_groups variable

lvm_groups:
  - vgname: test_vg
    disks:
      - /dev/sdb
    create: true
    lvnames:
      - lvname: test_lv
        size: 100%FREE
        create: true
        filesystem: xfs
        mount: true
        mntp: /test

Expected behavior
The XFS filesystem should be created. It seems the xfs_info check (in create_fs) never fails on a host with an XFS root, so the role skips creating the filesystem.

Additional context
Tested on CentOS 7.
