netapp's Introduction

NetApp Ansible Collections

As of May 10, 2021, after releasing 21.6.0 for four collections, we discontinued and archived the ansible-collections/netapp repository.

It is being replaced with 7 new repositories, one for each collection.

This meets a requirement from Red Hat that each repository hosts a single collection, and it makes it easier to version and publish each collection independently.

This is also part of a move to fully comply with semantic versioning.

Need help?

Join our Slack Channel at Netapp.io

netapp's People

Contributors

awesomenameman, carchi8py, chaffelson, gundalow, jamalhad, joshedmonds, kvegh, lonico, matthias-beck, stobias123, zeten30

netapp's Issues

na_ontap_quotas and quota resize?

SUMMARY

We were able to set quotas with this module, but when we add or modify quotas later, it seems that "quota resize" is not run. Is there any way to get this working without some kind of workaround (sending the command via SSH or something like this)?

Or are we doing something wrong?

Example output from an Ansible run showing the options:

 "changed": true, 
    "invocation": { 
        "module_args": { 
            "disk_limit": "65GB", 
            "file_limit": "-", 
            "hostname": "svm-xyz", 
            "http_port": null, 
            "https": true, 
            "ontapi": null, 
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", 
            "policy": "default", 
            "qtree": "", 
            "quota_target": "/vol/test/toast", 
            "set_quota_status": true, 
            "state": "present", 
            "threshold": "-", 
            "type": "tree", 
            "use_rest": "Auto", 
            "username": "user", 
            "validate_certs": false, 
            "volume": "spicetest", 
            "vserver": "svm-xyz" 
        } 
    } 
} 
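For reference, here is a minimal sketch of the SSH-less workaround hinted at above: triggering "quota resize" through the ZAPI directly with netapp_lib. This is illustrative code, not part of the collection, and the quota-resize call name is an assumption to verify against your ONTAP version:

# Illustrative workaround sketch: ask ONTAP to run "quota resize" on a
# volume after quota rules have been added or modified.
from netapp_lib.api.zapi import zapi

def resize_quota(server, volume_name):
    """Equivalent of the CLI command 'volume quota resize' for one volume."""
    request = zapi.NaElement('quota-resize')  # assumed ZAPI call name
    request.add_new_child('volume', volume_name)
    server.invoke_successfully(request, enable_tunneling=True)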

Thanks in advance for your support.

AndySilvia

na_ontap_user fails on existing user with same password

SUMMARY

na_ontap_user module fails if the user already exists with the same password

ISSUE TYPE
  • Bug Report
COMPONENT NAME

na_ontap_user

ANSIBLE VERSION
ansible 2.9.7
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/usr/share/my_modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.17 (default, Apr 15 2020, 17:20:14) [GCC 7.5.0]
CONFIGURATION
DEFAULT_MODULE_PATH(/etc/ansible/ansible.cfg) = [u'/usr/share/my_modules']
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = [u'/etc/ansible/roles']
DEPRECATION_WARNINGS(/etc/ansible/ansible.cfg) = False
OS / ENVIRONMENT

FAS8060 running ONTAP 9.7P3

STEPS TO REPRODUCE

Issue the na_ontap_user task when the user already exists with the correct password.

- name: Create ONTAP User for ActiveIQ Unified Manager
  netapp.ontap.na_ontap_user:
    <<: *clusterlogin
    state: "{{ item.state }}"
    vserver: "{{ cluster_name }}"
    name: "{{ item.ontap_user }}"
    applications: ontapi,ssh,http
    authentication_method: password
    set_password: "{{ item.ontap_user_password }}"
    role_name: admin
  with_items:
    "{{ activeiq_unified_manager }}"
  when:
    activeiq_unified_manager != None
  tags: aiqum_user
EXPECTED RESULTS

The module should handle the "Error while updating user password: {'message': 'New password must be different than the old password.', 'code': '7077925'}" response from ONTAP as being an acceptable result.
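A sketch of the kind of handling being requested, treating that specific ONTAP error code as "already in desired state" (the code is taken from the message above; the function is illustrative, not the module's actual implementation):

# Illustrative sketch: treat "new password equals the old password" as a
# no-op instead of a failure.
SAME_PASSWORD_CODE = '7077925'  # code from the ONTAP response above

def handle_password_error(error_response):
    """Return False (no change needed) for the 'same password' error."""
    if error_response.get('code') == SAME_PASSWORD_CODE:
        return False  # user already exists with the desired password
    raise RuntimeError('Error while updating user password: %s' % error_response)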

ACTUAL RESULTS
<fas-cluster1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<fas-cluster1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1591315679.77-4065-176919232991861 && echo ansible-tmp-1591315679.77-4065-176919232991861="` echo /root/.ansible/tmp/ansible-tmp-1591315679.77-4065-176919232991861 `" ) && sleep 0'
Using module file /root/.ansible/collections/ansible_collections/netapp/ontap/plugins/modules/na_ontap_user.py
<fas-cluster1> PUT /root/.ansible/tmp/ansible-local-40263floEB/tmpPZW3ii TO /root/.ansible/tmp/ansible-tmp-1591315679.77-4065-176919232991861/AnsiballZ_na_ontap_user.py
<fas-cluster1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1591315679.77-4065-176919232991861/ /root/.ansible/tmp/ansible-tmp-1591315679.77-4065-176919232991861/AnsiballZ_na_ontap_user.py && sleep 0'
<fas-cluster1> EXEC /bin/sh -c '/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1591315679.77-4065-176919232991861/AnsiballZ_na_ontap_user.py && sleep 0'
<fas-cluster1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1591315679.77-4065-176919232991861/ > /dev/null 2>&1 && sleep 0'
failed: [fas-cluster1] (item={u'applications': u'ssh,http,ontapi', u'state': u'present', u'role': u'admin', u'name': u'chris', u'password': u'netapp123', u'auth_method': u'password'}) => {
    "ansible_loop_var": "item",
    "changed": false,
    "invocation": {
        "module_args": {
            "applications": [
                "ssh",
                "http",
                "ontapi"
            ],
            "authentication_method": "password",
            "authentication_password": null,
            "authentication_protocol": null,
            "cert_filepath": null,
            "engine_id": null,
            "feature_flags": {},
            "hostname": "10.128.58.113",
            "http_port": null,
            "https": true,
            "key_filepath": null,
            "lock_user": null,
            "name": "scommonitor",
            "ontapi": null,
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "privacy_password": null,
            "privacy_protocol": null,
            "remote_switch_ipaddress": null,
            "role_name": "admin",
            "set_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "state": "present",
            "use_rest": "Auto",
            "username": "admin",
            "validate_certs": false,
            "vserver": "fas-cluster1"
        }
    },
    "item": {
        "applications": "ssh,http,ontapi",
        "auth_method": "password",
        "name": "chris",
        "password": "netapp123",
        "role": "admin",
        "state": "present"
    },
    "msg": "Error while updating user password: {'message': 'New password must be different than the old password.', 'code': '7077925'}"

na_ontap_net_subnet fails on existing subnet

Versions:

  • ONTAP 9.6
  • Ansible 2.8.5
  • Python 3.7.4
  • netapp-lib 2018.11.13

na_ontap_net_subnet fails when run on a subnet which already exists and doesn't have an IP range set.

Execute the following playbook twice in a row; the second time it will fail:

---
- hosts: localhost
  tasks:
  - na_ontap_net_subnet:
      hostname: netappcluster.example
      username: admin
      password: secretpassword
      broadcast_domain: Default
      name: test-subnet
      subnet: 192.168.69.0/24

When an IP range is defined (subnet modify -subnet-name test-subnet -ip-ranges 192.168.69.200-192.168.69.210), the playbook won't fail when run again.

Error message:

fatal: [localhost]: FAILED! => {
    "changed": false,
    "module_stderr": "/home/joel/.ansible/tmp/ansible-tmp-1571389942.8206644-35837320584168/AnsiballZ_na_ontap_net_subnet.py:18: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\n  import imp\nTraceback (most recent call last):\n  File \"/home/joel/.ansible/tmp/ansible-tmp-1571389942.8206644-35837320584168/AnsiballZ_na_ontap_net_subnet.py\", line 114, in <module>\n    _ansiballz_main()\n  File \"/home/joel/.ansible/tmp/ansible-tmp-1571389942.8206644-35837320584168/AnsiballZ_na_ontap_net_subnet.py\", line 106, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/home/joel/.ansible/tmp/ansible-tmp-1571389942.8206644-35837320584168/AnsiballZ_na_ontap_net_subnet.py\", line 49, in invoke_module\n    imp.load_module('__main__', mod, module, MOD_DESC)\n  File \"/usr/lib/python3.7/imp.py\", line 234, in load_module\n    return load_source(name, filename, file)\n  File \"/usr/lib/python3.7/imp.py\", line 169, in load_source\n    module = _exec(spec, sys.modules[name])\n  File \"<frozen importlib._bootstrap>\", line 630, in _exec\n  File \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\n  File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\n  File \"/tmp/ansible_na_ontap_net_subnet_payload_hl7qtz57/__main__.py\", line 323, in <module>\n  File \"/tmp/ansible_na_ontap_net_subnet_payload_hl7qtz57/__main__.py\", line 319, in main\n  File \"/tmp/ansible_na_ontap_net_subnet_payload_hl7qtz57/__main__.py\", line 279, in apply\n  File \"/tmp/ansible_na_ontap_net_subnet_payload_hl7qtz57/__main__.py\", line 177, in get_subnet\nAttributeError: 'NoneType' object has no attribute 'get_children'\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1
}
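The AttributeError in get_subnet() suggests .get_children() is being called on a child element that is simply absent when the subnet has no IP range set. A defensive sketch (the 'ip-ranges' element name is assumed from the ZAPI subnet object; this is not the module's actual code):

# Defensive sketch: tolerate a missing 'ip-ranges' element in the
# subnet-get-iter response instead of raising AttributeError.
def get_ip_ranges(subnet_info):
    ranges = subnet_info.get_child_by_name('ip-ranges')  # None when unset
    if ranges is None:
        return []
    return [child.get_content() for child in ranges.get_children()]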

na_ontap_software_update idempotency

SUMMARY

na_ontap_software_update runs when package_version is the currently running version.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

na_ontap_software_update

ANSIBLE VERSION
ansible 2.9.10
  config file = /ansible/private/ansible.cfg
  configured module search path = [u'/home/user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Sep 26 2019, 13:23:47) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]

CONFIGURATION

OS / ENVIRONMENT

RHEL 7.8
ONTAP 9.7P7
netapp.ontap collection version 20.9.0

STEPS TO REPRODUCE
- hosts: NetApp_Setup
  become: false
  collections:
    - netapp.ontap
  gather_facts: false
  vars:
    login: &login
      username: "{{ username }}"
      password: "{{ password }}"
      https: true
      validate_certs: false
  tasks:
   - na_ontap_software_update:
      state: present
      package_version: "{{ version }}"
      package_url: https://webserver/ontap/{{ version | replace('.', '') }}_q_image.tgz
      download_only: false
      ignore_validation_warning: true
      hostname: "{{ inventory_hostname }}"
      <<: *login
     connection: local
EXPECTED RESULTS

Return "ok" and skip update because version is already installed.

ACTUAL RESULTS

Update was triggered.

na_ontap_info result for the installed release:

    "ontap_info": {
        "ontapi_version": "170",
        "ontap_version": "170",
        "ontap_system_version": {
            "is_clustered": "true",
            "version_tuple": {
                "system_version_tuple": {
                    "generation": "9",
                    "major": "7",
                    "minor": "0"
                }
            },
            "version": "NetApp Release 9.7P7: Thu Aug 27 20:57:05 UTC 2020",
            "build_timestamp": "1598561825"
        }
    }
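A sketch of the idempotency pre-check being asked for, comparing the running release reported above with package_version (the parsing is illustrative; the version string format is taken from the na_ontap_info output):

# Illustrative pre-check: skip the update when the cluster already runs
# the requested release.
import re

def update_needed(ontap_system_version, package_version):
    """True only if the cluster is not already running package_version."""
    # e.g. "NetApp Release 9.7P7: Thu Aug 27 20:57:05 UTC 2020"
    match = re.search(r'NetApp Release (\S+)', ontap_system_version['version'])
    current = match.group(1).rstrip(':') if match else None
    return current != package_version

print(update_needed(
    {'version': 'NetApp Release 9.7P7: Thu Aug 27 20:57:05 UTC 2020'},
    '9.7P7'))  # -> False, so the task should report "ok" and skip the update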

Using the na_ontap_info module fails due to insufficient access with a read-only account

SUMMARY

I'm trying to test that the ONTAP module works correctly before starting development with it. As I'm not a NetApp admin, I've only been given read-only access to the server for initial testing.

Unfortunately using a simple call with na_ontap_info fails due to insufficient write privileges, which makes no sense as I'm not attempting to write anything.

I'm assuming the overall module checks for write access, even when using just na_ontap_info.

Apologies, I am not using the latest collection, but it's a bit too bleeding edge for our environment. I didn't know of any other repo to post this issue to, though. If this behaviour is known and has been fixed in the Ansible collection, please let me know.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

na_ontap_info

ANSIBLE VERSION
ansible 2.9.1
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/ec/test/app/rundecktest/ansible/brocade/library']
  ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.6.8 (default, Sep 26 2019, 11:57:09) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]

CONFIGURATION
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/etc/ansible/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=10s -o LogLevel=QUIET -o PreferredAuthentications=publickey
DEFAULT_MODULE_PATH(env: ANSIBLE_LIBRARY) = ['/ec/test/app/rundecktest/ansible/brocade/library']
DEFAULT_REMOTE_USER(/etc/ansible/ansible.cfg) = root
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 10
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = True
RETRY_FILES_SAVE_PATH(/etc/ansible/ansible.cfg) = /root/ansible
OS / ENVIRONMENT

Ansible control node: Red Hat Enterprise Linux Server release 7.8 (Maipo)
Ansible delegation server: Red Hat Enterprise Linux Server release 7.8 (Maipo)

STEPS TO REPRODUCE

Run the below playbook, using an AD account with only read access

---
- hosts: all
  gather_facts: no
  
  tasks:
    
  - name: Gather facts
    na_ontap_info:
      state: info 
      hostname: 'nas-server'
      username: 'domain\ro-admin'
      password: "{{ NAS_PASS }}"
      https: true 
      validate_certs: false 
      gather_subset: "ontap_system_version"
    register: netapp_facts

  - name: Show facts
    debug: 
      var: netapp_facts
EXPECTED RESULTS

Receive the ontap version

ACTUAL RESULTS

I'm receiving a 13003: Insufficient privileges error.

Paste below has server/username redacted

TASK [Gather facts] ************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: netapp_lib.api.zapi.zapi.NaApiError: NetApp API failed. Reason - 13003:Insufficient privileges: user '*******' does not have write access to this resource
fatal: [*******]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 102, in <module>\n  File \"<stdin>\", line 94, in _ansiballz_main\n  File \"<stdin>\", line 40, in invoke_module\n  File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_na_ontap_info_payload_bc5j08v7/ansible_na_ontap_info_payload.zip/ansible/modules/storage/netapp/na_ontap_info.py\", line 619, in <module>\n  File \"/tmp/ansible_na_ontap_info_payload_bc5j08v7/ansible_na_ontap_info_payload.zip/ansible/modules/storage/netapp/na_ontap_info.py\", line 613, in main\n  File \"/tmp/ansible_na_ontap_info_payload_bc5j08v7/ansible_na_ontap_info_payload.zip/ansible/modules/storage/netapp/na_ontap_info.py\", line 501, in get_all\n  File \"/tmp/ansible_na_ontap_info_payload_bc5j08v7/ansible_na_ontap_info_payload.zip/ansible/module_utils/netapp.py\", line 507, in ems_log_event\n  File \"/usr/local/lib/python3.6/site-packages/netapp_lib/api/zapi/zapi.py\", line 301, in invoke_successfully\n    raise NaApiError(code, msg)\nnetapp_lib.api.zapi.zapi.NaApiError: NetApp API failed. Reason - 13003:Insufficient privileges: user '*******' does not have write access to this resource\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact 
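The traceback shows the failure comes from ems_log_event() in module_utils/netapp.py, a best-effort EMS logging call issued before the actual info gathering. A sketch of how that one call could tolerate the read-only error (error code from the output above; illustrative, not the collection's actual code):

# Illustrative sketch: the EMS autosupport log call is optional, so a
# read-only user's privilege error could be swallowed for it.
from netapp_lib.api.zapi import zapi

INSUFFICIENT_PRIVILEGES = '13003'  # code from the error above

def best_effort_ems_log(server, ems_request):
    """Swallow only the 'no write access' error for the optional EMS log."""
    try:
        server.invoke_successfully(ems_request, enable_tunneling=True)
    except zapi.NaApiError as error:
        if error.code != INSUFFICIENT_PRIVILEGES:
            raise  # any other failure is still fatal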

Test change

This is a test change

SUMMARY

This is a test change

ISSUE TYPE
  • Bug Report
COMPONENT NAME
ANSIBLE VERSION

CONFIGURATION

OS / ENVIRONMENT
STEPS TO REPRODUCE
EXPECTED RESULTS
ACTUAL RESULTS

[RFE] Need module to automate the ontap volume access management commands

SUMMARY

We are looking to automate the following ONTAP volume access management tasks:

  1. Assign/share/grant access to a newly created volume for a host node
  2. Check which nodes a particular volume share is assigned/granted access to
  3. Check which nodes a particular volume share is currently mounted on
ISSUE TYPE
  • Feature Idea
COMPONENT NAME
  • na_ontap

na_ontap_aggregate doesn't add disks to a MCCIP aggregate

SUMMARY

When trying to add disks to an existing aggregate on a MCCIP system, Ansible doesn't add any disks to the existing aggregate.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

na_ontap_aggregate

ANSIBLE VERSION
ansible 2.9.13
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /opt/awx/python-envs/netapp-env/lib64/python3.6/site-packages/ansible
  executable location = /opt/awx/python-envs/netapp-env/bin/ansible
  python version = 3.6.8 (default, Sep 26 2019, 11:57:09) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]

CONFIGURATION

OS / ENVIRONMENT

LSB Version: :core-4.1-amd64:core-4.1-noarch
Distributor ID: RedHatEnterpriseServer
Description: Red Hat Enterprise Linux Server release 7.8 (Maipo)
Release: 7.8
Codename: Maipo

STEPS TO REPRODUCE

For example, we're passing a total disk count of 20 for a pre-existing aggregate that already contains 10 disks. According to the documentation, Ansible should add 10 disks to reach a total of 20.

- name: create or add disks to aggregate
  tags: xy
  delegate_to: localhost
  na_ontap_aggregate:
    <<: *login
    state: present
    service_state: online
    name: "{{ item.aggr_data_name }}"
    nodes: "{{ item.node_name }}"
    disk_count: "{{ item.disk_count }}"
    raid_size: "{{ item.max_raidsize }}"
    wait_for_online: true
    time_out: 300
  loop: "{{ aggregates }}"
EXPECTED RESULTS

Aggregate grows from 10 to 20 disks.

ACTUAL RESULTS

Ansible changes nothing.
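For clarity, the reconciliation the documentation describes would look something like this (illustrative only, not the module's code): disk_count is a desired total, so the module should compute the difference and add that many disks.

def disks_to_add(current_disk_count, desired_disk_count):
    """disk_count is a desired total; only the delta should be added."""
    delta = desired_disk_count - current_disk_count
    if delta < 0:
        raise ValueError('cannot remove disks from an aggregate')
    return delta  # 0 means no change

print(disks_to_add(10, 20))  # -> 10 disks should be added to the aggregate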


na_ontap_lun.py: Wrong size starting from 450g

SUMMARY

When creating a LUN >= 450g, ONTAP doesn't calculate the size 1024-based:

supernetapp::> lun create -vserver Test_SC1 -volume ultraLun -lun sql01.lun -ostype linux -size 450g

Created a LUN of size 450.1g (483247783936)

So when running the playbook a second time, you'll get an error that the module can't shrink the LUN.
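A sketch of a size check that tolerates this rounding, so a second run reports "ok" instead of attempting a destructive shrink (illustrative; ZAPI error 9033 is the shrink error shown in the output below):

def needs_resize(current_bytes, requested_bytes):
    """Grow when too small; never propose a shrink for a slightly larger LUN."""
    if current_bytes < requested_bytes:
        return True   # growing is safe and intended
    return False      # current >= requested: shrinking would risk data loss

requested = 450 * 1024 ** 3                   # 450g, 1024-based
print(needs_resize(483247783936, requested))  # -> False (actual 450.1g is fine)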

ISSUE TYPE
  • Bug Report
COMPONENT NAME
na_ontap_lun
ANSIBLE VERSION
2.9.12
CONFIGURATION
-
OS / ENVIRONMENT

Linux version 4.19.0-8-amd64 ([email protected]) (gcc version 8.3.0 (Debian 8.3.0-6)) #1 SMP Debian 4.19.98-1+deb10u1 (2020-04-27)

VM

STEPS TO REPRODUCE

Just create a LUN with netapp.ontap.na_ontap_lun and a size >= 450GB.

EXPECTED RESULTS

When run a second time, the LUN task result is ok.

ACTUAL RESULTS
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1599651978.7138655-18688-206661629417177 `" && echo ansible-tmp-1599651978.7138655-18688-206661629417177="` echo /root/.ansible/tmp/ansible-tmp-1599651978.7138655-18688-206661629417177 `" ) && sleep 0'
Using module file /root/.ansible/collections/ansible_collections/netapp/ontap/plugins/modules/na_ontap_lun.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-186831lqdpgcm/tmpis00dlfa TO /root/.ansible/tmp/ansible-tmp-1599651978.7138655-18688-206661629417177/AnsiballZ_na_ontap_lun.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1599651978.7138655-18688-206661629417177/ /root/.ansible/tmp/ansible-tmp-1599651978.7138655-18688-206661629417177/AnsiballZ_na_ontap_lun.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1599651978.7138655-18688-206661629417177/AnsiballZ_na_ontap_lun.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1599651978.7138655-18688-206661629417177/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "/tmp/ansible_netapp.ontap.na_ontap_lun_payload_wnpnfpga/ansible_netapp.ontap.na_ontap_lun_payload.zip/ansible_collections/netapp/ontap/plugins/modules/na_ontap_lun.py", line 344, in resize_lun
  File "/usr/local/lib/python3.7/dist-packages/netapp_lib/api/zapi/zapi.py", line 301, in invoke_successfully
    raise NaApiError(code, msg)
netapp_lib.api.zapi.zapi.NaApiError: NetApp API failed. Reason - 9033:Reducing LUN size without coordination with the host system may cause permanent data loss or corruption. Use the force flag to allow LUN size reduction.
failed: [localhost] (item={'lun': 'sql01.lun', 'volume': 'ultraLun', 'vserver': 'Test_SC1', 'size': 800, 'size_unit': 'gb', 'ostype': 'linux', 'path': '/vol/ultraLun/sql01.lun', 'igroup': 'igroup01'}) => {
    "ansible_loop_var": "item",
    "changed": false,
    "invocation": {
        "module_args": {
            "cert_filepath": null,
            "feature_flags": {},
            "flexvol_name": "ultraLun",
            "force_remove": false,
            "force_remove_fenced": false,
            "force_resize": false,
            "hostname": "XXX",
            "http_port": null,
            "https": true,
            "key_filepath": null,
            "name": "sql01.lun",
            "ontapi": null,
            "ostype": "linux",
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "size": 800,
            "size_unit": "gb",
            "space_allocation": false,
            "space_reserve": true,
            "state": "present",
            "use_rest": "auto",
            "username": "admin",
            "validate_certs": false,
            "vserver": "Test_SC1"
        }
    },
    "item": {
        "igroup": "igroup01",
        "lun": "sql01.lun",
        "ostype": "linux",
        "path": "/vol/ultraLun/sql01.lun",
        "size": 800,
        "size_unit": "gb",
        "volume": "ultraLun",
        "vserver": "Test_SC1"
    },
    "msg": "Error resizing lun /vol/ultraLun/sql01.lun: NetApp API failed. Reason - 9033:Reducing LUN size without coordination with the host system may cause permanent data loss or corruption. Use the force flag to allow LUN size reduction."
}

na_ontap_cifs_vserver_security issues

We have identified 3 issues with the na_ontap_cifs_vserver_security module (new in 2.9).

  1. Int and string comparison

The module accepts a number of parameters, 4 of which are of type "int":

kerberos_clock_skew
kerberos_ticket_age
kerberos_renew_age
kerberos_kdc_timeout

However, the cifs_security_get_iter() function contained within that module returns the current values from the cluster as strings rather than ints. This causes the module to fail when calling the get_modified_attributes() function as follows:

Traceback (most recent call last):
  File "/root/debug_dir/ansible/modules/storage/netapp/na_ontap_vserver_cifs_security.py", line 282, in <module>
    main()
  File "/root/debug_dir/ansible/modules/storage/netapp/na_ontap_vserver_cifs_security.py", line 278, in main
    obj.apply()
  File "/root/debug_dir/ansible/modules/storage/netapp/na_ontap_vserver_cifs_security.py", line 255, in apply
    modify = self.na_helper.get_modified_attributes(current, self.parameters)
  File "/usr/local/lib/python3.6/site-packages/ansible/module_utils/netapp_module.py", line 239, in get_modified_attributes
    elif cmp(value, desired[key]) != 0:
  File "/usr/local/lib/python3.6/site-packages/ansible/module_utils/netapp_module.py", line 53, in cmp
    return (a > b) - (a < b)
TypeError: '>' not supported between instances of 'str' and 'int'

The solution was simple: as per the code snippet below, the return built by cifs_security_get_iter() can be updated to cast the strings to int:

if result.get_child_by_name('num-records') and int(result.get_child_content('num-records')) > 0:
    cifs_security_info = result.get_child_by_name('attributes-list').get_child_by_name('cifs-security')
    cifs_security_details['kerberos_clock_skew'] = int(cifs_security_info.get_child_content('kerberos-clock-skew'))
    cifs_security_details['kerberos_ticket_age'] = int(cifs_security_info.get_child_content('kerberos-ticket-age'))
    cifs_security_details['kerberos_renew_age'] = int(cifs_security_info.get_child_content('kerberos-renew-age'))
    cifs_security_details['kerberos_kdc_timeout'] = int(cifs_security_info.get_child_content('kerberos-kdc-timeout'))

  2. String and Boolean comparison

The same module also has several Boolean parameters. When the cifs_security_get_iter() function retrieves the current values from the cluster, it attempts to cast them from string to Boolean. However, this was not working properly: all string values were being cast to a Boolean value of True, regardless of whether the cluster returned the string "true" or "false". This is because bool(some_variable) always evaluates to True as long as some_variable contains a non-empty string; the intent was clearly to cast the string "false" to Boolean False, but Python does not work quite like that (a short demonstration follows this list). The end result is that the module always determined that a change was required even when one wasn't, as the module parameters would always differ from the values returned from the cluster. There are several solutions to this issue, but the simplest, which we implemented, was to use json.loads to perform the cast from string to Boolean, as per the code snippet below:

        cifs_security_details['is_signing_required'] = json.loads(cifs_security_info.get_child_content('is-signing-required'))
        cifs_security_details['is_password_complexity_required'] = json.loads(cifs_security_info.get_child_content('is-password-complexity-required'))
        cifs_security_details['is_aes_encryption_enabled'] = json.loads(cifs_security_info.get_child_content('is-aes-encryption-enabled'))
        cifs_security_details['is_smb_encryption_required'] = json.loads(cifs_security_info.get_child_content('is-smb-encryption-required'))
        cifs_security_details['referral_enabled_for_ad_ldap'] = json.loads(cifs_security_info.get_child_content('referral-enabled-for-ad-ldap'))
        cifs_security_details['use_start_tls_for_ad_ldap'] = json.loads(cifs_security_info.get_child_content('use-start-tls-for-ad-ldap'))
  3. Missing parameter use_ldap_for_ad_ldap

All of the cifs-security-modify API parameters were already available, with the exception of use_ldap_for_ad_ldap, which we have added.
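A minimal demonstration of the Boolean-cast pitfall described in issue 2 above, and of the json.loads fix:

import json

print(bool('false'))        # -> True: any non-empty string is truthy (the bug)
print(json.loads('false'))  # -> False: json.loads maps 'false' correctly
print(json.loads('true'))   # -> True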

Port Flowcontrol and autonegotiate are global

SUMMARY

I want to set flowcontrol and autonegotiate settings per port, not use a global value for all of them.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

na_ontap_cluster_config role.
https://github.com/ansible-collections/netapp/blob/master/ansible_collections/netapp/ontap/roles/na_ontap_cluster_config/tasks/main.yml#L109

STEPS TO REPRODUCE

Run the default role; no option is available to set autonegotiate per port.

EXPECTED RESULTS

I should be able to set these settings per port.

How to create Cluster Peer SVM DR Permissions?

SUMMARY

I am running ONTAP 9.7.
I am also running Ansible 2.9.

I have set up two NetApp nodes, both in a single-node cluster configuration.
With Ansible I can set up the SVMs.
I can also set up the cluster peers.
But if I want to set up an SVM DR, I need to set the Cluster Peer SVM Permissions.
I am not sure how to do that with Ansible. Is this explained in the documentation? I can't find it.

I am using the netapp.ontap.na_ontap_cluster_peer module and the netapp.ontap.na_ontap_svm module.

ISSUE TYPE
  • Documentation Report
COMPONENT NAME

netapp.ontap.na_ontap_cluster_peer
and/or
netapp.ontap.na_ontap_svm

ANSIBLE VERSION
[ansible@automata ~]$ ansible --version 
ansible 2.9.7
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.6.8 (default, Dec  5 2019, 15:45:45) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
[ansible@automata ~]$ 


netapp_elementsw_module.py: get_snapshot() cannot gracefully handle non-existing snapshot id's

SUMMARY

get_snapshot() crashes on a snapshot ID that does not exist, causing tasks to fail midway in a non-graceful manner.

ISSUE TYPE
  • Bug Report
COMPONENT NAME
  • plugins/module_utils/netapp_elementsw_module.py get_snapshot() function
ANSIBLE VERSION
ansible 2.10.2
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/sean/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/sean/.local/lib/python3.6/site-packages/ansible
  executable location = /home/sean/.local/bin/ansible
  python version = 3.6.9 (default, Jul 17 2020, 12:50:27) [GCC 8.4.0]
CONFIGURATION
DEFAULT_DEBUG(env: ANSIBLE_DEBUG) = True
DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = explicit
DEFAULT_LOG_PATH(env: ANSIBLE_LOG_PATH) = /home/sean/.ansible/collections/ansible_collections/netapp/elementsw/ansible.log
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/content']
DEFAULT_STDOUT_CALLBACK(env: ANSIBLE_STDOUT_CALLBACK) = debug
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 30
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(env: ANSIBLE_PYTHON_INTERPRETER) = /usr/bin/python3
OS / ENVIRONMENT
  • Ubuntu 18
  • SolidFire 12
  • Latest NetApp / ElementSW modules
STEPS TO REPRODUCE
  • Pick existing and correct values for everything except src_snapshot_id (use a snapshot ID that does not exist; in my environment, for example, 15330 does not exist):
tasks:
    - name: Copy SF Volume
      na_elementsw_volume_clone:
        hostname: "{{ sf_hostname }}"
        username: "{{ sf_username }}"
        password: "{{ sf_password }}"
        src_volume_id: "{{ sf_vol_id }}"
        src_snapshot_id: "{{ sf_snap_id }}"
        account_id: 9
        name: test2
      register: result
EXPECTED RESULTS

get_snapshot() should handle the API error and return None, or fail immediately (in netapp_elementsw_module.py) since we already know this task won't complete:

[WARNING]: The value "21" (type int) was converted to "'21'" (type string). If this does not look like what you expect,
quote the entire value to ensure it does not change.
[WARNING]: The value "15330" (type int) was converted to "'15330'" (type string). If this does not look like what you
expect, quote the entire value to ensure it does not change.
[WARNING]: The value "9" (type int) was converted to "'9'" (type string). If this does not look like what you expect,
quote the entire value to ensure it does not change.
fatal: [192.168.1.34]: FAILED! => {
    "changed": false
}

MSG:

Snapshot id not found: 15330
ACTUAL RESULTS
  • get_snapshot() can't handle non-existing snapshot IDs:
2020-10-15 22:23:07,252 - solidfire.Element - INFO - {"method": "ListSnapshots", "id": 4, "params": {"volumeID": 21, "snapshotID": "15330"}}
Traceback (most recent call last):
  File "/home/sean/.ansible/tmp/ansible-tmp-1602771785.9104586-2590-94545454477935/AnsiballZ_na_elementsw_volume_clone.py", line 102, in <module>
    _ansiballz_main()
  File "/home/sean/.ansible/tmp/ansible-tmp-1602771785.9104586-2590-94545454477935/AnsiballZ_na_elementsw_volume_clone.py", line 94, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "/home/sean/.ansible/tmp/ansible-tmp-1602771785.9104586-2590-94545454477935/AnsiballZ_na_elementsw_volume_clone.py", line 40, in invoke_module
    runpy.run_module(mod_name='ansible_collections.netapp.elementsw.plugins.modules.na_elementsw_volume_clone', init_globals=None, run_name='__main__', alter_sys=True)
  File "/usr/lib/python3.6/runpy.py", line 205, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmp/ansible_na_elementsw_volume_clone_payload_1x1ajsh3/ansible_na_elementsw_volume_clone_payload.zip/ansible_collections/netapp/elementsw/plugins/modules/na_elementsw_volume_clone.py", line 276, in <module>
  File "/tmp/ansible_na_elementsw_volume_clone_payload_1x1ajsh3/ansible_na_elementsw_volume_clone_payload.zip/ansible_collections/netapp/elementsw/plugins/modules/na_elementsw_volume_clone.py", line 272, in main
  File "/tmp/ansible_na_elementsw_volume_clone_payload_1x1ajsh3/ansible_na_elementsw_volume_clone_payload.zip/ansible_collections/netapp/elementsw/plugins/modules/na_elementsw_volume_clone.py", line 252, in apply
  File "/tmp/ansible_na_elementsw_volume_clone_payload_1x1ajsh3/ansible_na_elementsw_volume_clone_payload.zip/ansible_collections/netapp/elementsw/plugins/modules/na_elementsw_volume_clone.py", line 201, in get_snapshot_id
  File "/tmp/ansible_na_elementsw_volume_clone_payload_1x1ajsh3/ansible_na_elementsw_volume_clone_payload.zip/ansible_collections/netapp/elementsw/plugins/module_utils/netapp_elementsw_module.py", line 137, in get_snapshot
  File "/home/sean/.local/lib/python3.6/site-packages/solidfire/__init__.py", line 3540, in list_snapshots
    since=6.0
  File "/home/sean/.local/lib/python3.6/site-packages/solidfire/common/__init__.py", line 704, in send_request
    response["error"]["message"])
solidfire.common.ApiServerError: ApiServerError(method_name="ListSnapshots", err_json=500 xSnapshotIDDoesNotExist Snapshot 15330 does not exist.)
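A sketch of the expected behaviour: catch the SDK error and return None so the caller can fail with a clear message (attribute and method names are assumed from the solidfire SDK; this is not the module's actual code):

from solidfire.common import ApiServerError

def get_snapshot(sfe, volume_id, snapshot_id):
    """Return the matching snapshot object, or None if it does not exist."""
    try:
        result = sfe.list_snapshots(volume_id=volume_id)
    except ApiServerError:
        return None  # e.g. xSnapshotIDDoesNotExist, as in the traceback above
    for snapshot in result.snapshots:
        if str(snapshot.snapshot_id) == str(snapshot_id):
            return snapshot
    return None  # not found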

na_ontap_user idempotency issue with domain users

SUMMARY

Using na_ontap_user to create a domain user works. Running the playbook again (while the user already exists) fails with:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: NaApiError: NetApp API failed. Reason - 16034:User not found

ISSUE TYPE
  • Bug Report
COMPONENT NAME

na_ontap_user

ANSIBLE VERSION
2.9.10
CONFIGURATION

OS / ENVIRONMENT

RHEL 7.8
ONTAP 9.6P7

STEPS TO REPRODUCE

Run the following play twice.

    na_ontap_user:
      state: present
      vserver: clustername
      name: DOMAIN\usergroup
      lock_user: no
      authentication_method: domain
      applications: http,ontapi,ssh
      role_name: admin
      hostname: "{{ cluster }}"
      username: "{{ username }}"
      password: "{{ password }}"
      https: true
EXPECTED RESULTS

ok on second run

ACTUAL RESULTS
The full traceback is:
Traceback (most recent call last):
  File "/tmp/ansible_na_ontap_user_payload_9Ab18t/ansible_na_ontap_user_payload.zip/ansible/modules/storage/netapp/na_ontap_user.py", line 258, in unlock_given_user
  File "/usr/lib/python2.7/site-packages/netapp_lib/api/zapi/zapi.py", line 301, in invoke_successfully
    raise NaApiError(code, msg)
NaApiError: NetApp API failed. Reason - 16034:User not found
failed: [localhost] (item=clustername) => {
    "ansible_loop_var": "item",
    "changed": false,
    "invocation": {
        "module_args": {
            "applications": [
                "http",
                "ontapi",
                "ssh"
            ],
            "authentication_method": "domain",
            "hostname": "clustername",
            "http_port": null,
            "https": true,
            "lock_user": false,
            "name": "DOMAIN\\usergroup",
            "ontapi": null,
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "role_name": "admin",
            "set_password": null,
            "state": "present",
            "use_rest": "Auto",
            "username": "DOMAIN\\user",
            "validate_certs": true,
            "vserver": "clustername"
        }
    },
    "item": "clustername",
    "msg": "Error unlocking user DOMAIN\\usergroup: NetApp API failed. Reason - 16034:User not found"
}
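A sketch of how the unlock step could tolerate this response for domain users (error code taken from the traceback above; illustrative, not the module's actual fix):

from netapp_lib.api.zapi import zapi

USER_NOT_FOUND = '16034'  # code from the traceback above

def unlock_user_tolerant(server, unlock_request):
    """Ignore 'User not found' on unlock; the domain account demonstrably exists."""
    try:
        server.invoke_successfully(unlock_request, enable_tunneling=True)
    except zapi.NaApiError as error:
        if error.code != USER_NOT_FOUND:
            raise  # anything else is a genuine failure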

ONTAP module for managing LIF service policies

SUMMARY

There is no module for managing service policies (available since ONTAP 9.5). Probably the most common use case is creating a custom service policy or modifying an existing one, as there is no built-in policy in ONTAP that allows management as well as data traffic on a single LIF.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME
  • Proposal: na_ontap_interface_service_policy
ADDITIONAL INFORMATION

Inclusion of netapp.elementsw in Ansible 2.10

This collection will be included in Ansible 2.10 because it contains modules and/or plugins that were included in Ansible 2.9. Please review:

DEADLINE: 2020-08-18

The latest version of the collection available on August 18 will be included in Ansible 2.10.0, except possibly newer versions which differ only in the patch level. (For details, see the roadmap). Please release version 1.0.0 of your collection by this date! If 1.0.0 does not exist, the same 0.x.y version will be used in all of Ansible 2.10 without updates, and your 1.x.y release will not be included until Ansible 2.11 (unless you request an exception at a community working group meeting and go through a demanding manual process to vouch for backwards compatibility... you want to avoid this!).

Follow semantic versioning rules

Your collection versioning must follow all semver rules. This means:

  • Patch level releases can only contain bugfixes;
  • Minor releases can contain new features, new modules and plugins, and bugfixes, but must not break backwards compatibility;
  • Major releases can break backwards compatibility.

Changelogs and Porting Guide

Your collection should provide data for the Ansible 2.10 changelog and porting guide. The changelog and porting guide are automatically generated from ansible-base, and from the changelogs of the included collections. All changes from the breaking_changes, major_changes, removed_features and deprecated_features sections will appear in both the changelog and the porting guide. You have two options for providing changelog fragments to include:

  1. If possible, use the antsibull-changelog tool, which uses the same changelog fragment as the ansible/ansible repository (see the documentation).
  2. If you cannot use antsibull-changelog, you can provide the changelog in a machine-readable format as changelogs/changelog.yaml inside your collection (see the documentation of changelogs/changelog.yaml format).

If you cannot contribute to the integrated Ansible changelog using one of these methods, please provide a link to your collection's changelog by creating an issue in https://github.com/ansible-community/ansible-build-data/. If you do not provide changelogs/changelog.yaml or a link, users will not be able to find out what changed in your collection from the Ansible changelog and porting guide.

Make sure your collection passes the sanity tests

Run ansible-test sanity --docker -v in the collection with the latest ansible-base or stable-2.10 ansible/ansible checkout.

Keep informed

Be sure you're subscribed to:

Questions and Feedback

If you have questions or want to provide feedback, please see the Feedback section in the collection requirements.

(Internal link to keep track of issues: ansible-collections/overview#102)

ipspace param for na_ontap_cluster_peer

SUMMARY

It's not possible to peer clusters whose intercluster LIFs are not in the Default IPspace.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

na_ontap_cluster_peer module

ADDITIONAL INFORMATION

We have created our own IPspace for intercluster communication. But a bit later I found that there is no ipspace parameter for na_ontap_cluster_peer, and the task failed with an error that the source/destination LIFs can't be found in IPspace Default.
Then I read the documentation https://library.netapp.com/ecmdocs/ECMLP2858435/html/resources/cluster_peer.html
and found the description for the property "ipspace - IPspace of the local intercluster LIFs. Assumes Default IPspace if not provided."
So it would be great to add an ipspace parameter to na_ontap_cluster_peer.

na_ontap_info subset "ldap_client" - server info missing

SUMMARY

When using na_ontap_info to gather the subset "ldap_client", the LDAP server information is missing.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

na_ontap_info

ANSIBLE VERSION
ansible 2.9.9
  config file = ...
  configured module search path = [u'/home/user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Sep 26 2019, 13:23:47) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
CONFIGURATION

OS / ENVIRONMENT

RHEL 7.8

STEPS TO REPRODUCE
- name: Gather facts
  na_ontap_info:
    hostname: "{{ item }}"
    username: "{{ username }}"
    password: "{{ password }}"
    https: true
    validate_certs: true
    gather_subset:
      - "ldap_client"
  register: config_out
  with_items: "{{ groups['NetApp_All'] }}"
- name: save config to file
  copy:
    content: '{{ config_out.results }}'
    dest: /tmp/netapp_facts.json
EXPECTED RESULTS

Server information is included in the LDAP client information.

ACTUAL RESULTS

Server information is missing from the LDAP client information. The output looks like this:

"ldap_client": {
    "example-vserver": {
        "referral_enabled": "false",
        "user_dn": "ou=People,dc=test",
        "base_dn": "ou=People,dc=test",
        "group_membership_filter": null,
        "query_timeout": "3",
        "ldap_servers": {
            "string": {

            }
        },
...
}

I queried the ZAPI directly via ZEDI and the result looks OK:

<ldap-client>
		<base-dn>ou=People,dc=test</base-dn>
		<base-scope>subtree</base-scope>
		<bind-as-cifs-server>false</bind-as-cifs-server>
		<group-dn>ou=People,dc=test</group-dn>
		<group-membership-filter/>
		<group-scope>subtree</group-scope>
		<is-netgroup-byhost-enabled>false</is-netgroup-byhost-enabled>
		<is-owner>true</is-owner>
		<ldap-client-config>Filer-LDAP</ldap-client-config>
		<ldap-servers>
			<string>example1.test</string>
			<string>example2.test</string>
		</ldap-servers>
                ...
</ldap-client>
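Given the ZAPI response above, a sketch of how the <string> children could be flattened into a proper list with netapp_lib's NaElement accessors (illustrative, not the module's code):

def extract_ldap_servers(ldap_client_element):
    """Return ['example1.test', 'example2.test'] for the response above."""
    servers = ldap_client_element.get_child_by_name('ldap-servers')
    if servers is None:
        return []
    return [child.get_content() for child in servers.get_children()]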

na_ontap_vscan_on_access_policy missing option to enable the policy

The na_ontap_vscan_on_access_policy module (added in 2.8) doesn't provide an option to enable the newly created vscan on-access policy. The following code snippet shows how to enable the policy:

vscan_policy_status_obj = netapp_utils.zapi.NaElement("vscan-on-access-policy-status-modify")
vscan_policy_status_obj.add_new_child('policy-name', self.parameters['policy_name'])
vscan_policy_status_obj.add_new_child('policy-status', 'true')

However, it isn't quite as simple as just adding the code above. The process for enabling a new on-access policy is as follows:

  1. Disable vscan for the vserver
  2. Disable the currently enabled on access policy
  3. Enable the newly created on access policy
  4. Enable vscan for the vserver

As such, we have written a new module for enabling an on-access vscan policy.
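A sketch of that four-step sequence using netapp_lib. Only vscan-on-access-policy-status-modify is taken from the report above; the vscan-status-modify call name is an assumption and should be verified:

from netapp_lib.api.zapi import zapi

def set_vscan(server, enabled):
    request = zapi.NaElement('vscan-status-modify')  # assumed ZAPI name
    request.add_new_child('is-vscan-enabled', 'true' if enabled else 'false')
    server.invoke_successfully(request, enable_tunneling=True)

def set_policy_status(server, policy_name, enabled):
    request = zapi.NaElement('vscan-on-access-policy-status-modify')
    request.add_new_child('policy-name', policy_name)
    request.add_new_child('policy-status', 'true' if enabled else 'false')
    server.invoke_successfully(request, enable_tunneling=True)

def enable_new_policy(server, old_policy, new_policy):
    set_vscan(server, False)                      # 1. disable vscan on the vserver
    set_policy_status(server, old_policy, False)  # 2. disable the current policy
    set_policy_status(server, new_policy, True)   # 3. enable the new policy
    set_vscan(server, True)                       # 4. re-enable vscan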

na_ontap_vscan_scanner_pool only gets one scanner pool

The current na_ontap_vscan_scanner_pool module (added in 2.8) only gets a single child scanner pool, as opposed to all child scanner pools.

The code snippet below shows the issue:

if result.get_child_by_name('num-records') and int(result.get_child_content('num-records')) >= 1:
    if result.get_child_by_name('attributes-list').get_child_by_name('vscan-scanner-pool-info').get_child_content(
            'scanner-pool') == self.scanner_pool:
        return result.get_child_by_name('attributes-list').get_child_by_name('vscan-scanner-pool-info')

The code snippet below shows the modification we have made to the new module to get all scanner pools, i.e. using get_children():

if result.get_child_by_name('num-records') and int(result.get_child_content('num-records')) >= 1:
    children = result.get_child_by_name('attributes-list').get_children()
    for child in children:
        if child.get_child_content('scanner-pool') == self.scanner_pool:
            return child
    return False

na_ontap_info: add security relevant information

SUMMARY

I would like to gather the information from the following CLI commands:

  • "event notification show"
  • "event notification destination show"
  • "cluster log-forwarding show"
  • "cifs options show"
  • "security login role show"
  • "security login role config show"
ISSUE TYPE
  • Feature Idea
COMPONENT NAME

Module: na_ontap_info

ADDITIONAL INFORMATION

I need it for daily security configuration auditing.

ONTAP modules generate "Unauthorized" audit events

SUMMARY

Usage of the ONTAP modules produces many "Unauthorized" audit events, even though the playbooks run successfully.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

Tested with na_ontap_info and na_ontap_command. Other modules might be affected as well.

ANSIBLE VERSION
ansible 2.9.10
  config file = /ansible/private/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /bin/ansible
  python version = 2.7.5 (default, Mar 20 2020, 17:08:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]

CONFIGURATION

OS / ENVIRONMENT

RHEL 7.9
ONTAP 9.7P7
Collection 20.11

STEPS TO REPRODUCE
  1. Create custom user with admin role in ONTAP.
                                                                 Second
User/Group                 Authentication                 Acct   Authentication
Name           Application Method        Role Name        Locked Method
-------------- ----------- ------------- ---------------- ------ --------------
customuser    console     password      admin            no     none
customuser    http        password      admin            no     none
customuser    ontapi      password      admin            no     none
customuser    ssh         password      admin            no     none
  2. Execute the playbook with the created user.

  3. Check the audit log with "security audit log show"

Mon Nov 09 13:26:25 2020  clustername-01  [kern_audit:info:7967] 8503e8000006c64b :: clustername:ontapi :: 192.168.1.2:33956 :: clustername:unknown :: POST /servlets/netapp.servlets.admin.XMLrequest_filer HTTP/1.1 :: Error: 401 Unauthorized
Mon Nov 09 13:26:25 2020  clustername-01  [kern_audit:info:2509] 8503e8000006c64c :: clustername:ontapi :: 192.168.1.2:33958 :: clustername:customuser :: <netapp xmlns="http://www.netapp.com/filer/admin" version="1.110" vfiler="clustername"><ems-autosupport-log><computer-name>Ansible</computer-name><event-id>12345</event-id><event-source>na_ontap_info</event-source><app-version>20.11.0</app-version><category>Information</category><event-description>setup</event-description><log-level>6</log-level><auto-support>false</auto-support></ems-autosupport-log></netapp> :: Pending:
Mon Nov 09 13:26:25 2020  clustername-01  [kern_audit:info:2509] 8503e8000006c64c :: clustername:ontapi :: 192.168.1.2:33958 :: clustername:customuser :: ems-autosupport-log :: Success:
Mon Nov 09 13:26:25 2020  clustername-01  [kern_audit:info:7967] 8503e8000006c64d :: clustername:ontapi :: 192.168.1.2:33960 :: clustername:unknown :: POST /servlets/netapp.servlets.admin.XMLrequest_filer HTTP/1.1 :: Error: 401 Unauthorized
Mon Nov 09 13:26:25 2020  clustername-01  [kern_audit:info:7967] 8503e8000006c64f :: clustername:ontapi :: 192.168.1.2:33964 :: clustername:unknown :: POST /servlets/netapp.servlets.admin.XMLrequest_filer HTTP/1.1 :: Error: 401 Unauthorized
Mon Nov 09 13:26:26 2020  clustername-01  [kern_audit:info:7967] 8503e8000006c651 :: clustername:ontapi :: 192.168.1.2:33968 :: clustername:unknown :: POST /servlets/netapp.servlets.admin.XMLrequest_filer HTTP/1.1 :: Error: 401 Unauthorized
Mon Nov 09 13:26:26 2020  clustername-01  [kern_audit:info:7967] 8503e8000006c653 :: clustername:ontapi :: 192.168.1.2:33976 :: clustername:unknown :: POST /servlets/netapp.servlets.admin.XMLrequest_filer HTTP/1.1 :: Error: 401 Unauthorized
Mon Nov 09 13:26:26 2020  clustername-01  [kern_audit:info:7967] 8503e8000006c655 :: clustername:ontapi :: 192.168.1.2:33982 :: clustername:unknown :: POST /servlets/netapp.servlets.admin.XMLrequest_filer HTTP/1.1 :: Error: 401 Unauthorized
Mon Nov 09 13:26:26 2020  clustername-01  [kern_audit:info:7967] 8503e8000006c657 :: clustername:ontapi :: 192.168.1.2:33988 :: clustername:unknown :: POST /servlets/netapp.servlets.admin.XMLrequest_filer HTTP/1.1 :: Error: 401 Unauthorized
Mon Nov 09 13:26:26 2020  clustername-01  [kern_audit:info:7967] 8503e8000006c659 :: clustername:ontapi :: 192.168.1.2:33994 :: clustername:unknown :: POST /servlets/netapp.servlets.admin.XMLrequest_filer HTTP/1.1 :: Error: 401 Unauthorized
Mon Nov 09 13:26:26 2020  clustername-01  [kern_audit:info:7967] 8503e8000006c65b :: clustername:ontapi :: 192.168.1.2:33998 :: clustername:unknown :: POST /servlets/netapp.servlets.admin.XMLrequest_filer HTTP/1.1 :: Error: 401 Unauthorized
Mon Nov 09 13:26:26 2020  clustername-01  [kern_audit:info:7967] 8503e8000006c65d :: clustername:ontapi :: 192.168.1.2:34008 :: clustername:unknown :: POST /servlets/netapp.servlets.admin.XMLrequest_filer HTTP/1.1 :: Error: 401 Unauthorized
Mon Nov 09 13:26:26 2020  clustername-01  [kern_audit:info:7967] 8503e8000006c65f :: clustername:ontapi :: 192.168.1.2:34012 :: clustername:unknown :: POST /servlets/netapp.servlets.admin.XMLrequest_filer HTTP/1.1 :: Error: 401 Unauthorized
Mon Nov 09 13:26:26 2020  clustername-01  [kern_audit:info:7967] 8503e8000006c661 :: clustername:ontapi :: 192.168.1.2:34018 :: clustername:unknown :: POST /servlets/netapp.servlets.admin.XMLrequest_filer HTTP/1.1 :: Error: 401 Unauthorized
Mon Nov 09 13:26:26 2020  clustername-01  [kern_audit:info:7967] 8503e8000006c663 :: clustername:ontapi :: 192.168.1.2:34022 :: clustername:unknown :: POST /servlets/netapp.servlets.admin.XMLrequest_filer HTTP/1.1 :: Error: 401 Unauthorized
Mon Nov 09 13:26:27 2020  clustername-01  [kern_audit:info:7967] 8503e8000006c665 :: clustername:ontapi :: 192.168.1.2:34030 :: clustername:unknown :: POST /servlets/netapp.servlets.admin.XMLrequest_filer HTTP/1.1 :: Error: 401 Unauthorized
Mon Nov 09 13:26:27 2020  clustername-01  [kern_audit:info:7967] 8503e8000006c667 :: clustername:ontapi :: 192.168.1.2:34034 :: clustername:unknown :: POST /servlets/netapp.servlets.admin.XMLrequest_filer HTTP/1.1 :: Error: 401 Unauthorized
Mon Nov 09 13:26:27 2020  clustername-01  [kern_audit:info:7967] 8503e8000006c669 :: clustername:ontapi :: 192.168.1.2:34040 :: clustername:unknown :: POST /servlets/netapp.servlets.admin.XMLrequest_filer HTTP/1.1 :: Error: 401 Unauthorized
Mon Nov 09 13:26:27 2020  clustername-01  [kern_audit:info:7967] 8503e8000006c66b :: clustername:ontapi :: 192.168.1.2:34046 :: clustername:unknown :: POST /servlets/netapp.servlets.admin.XMLrequest_filer HTTP/1.1 :: Error: 401 Unauthorized
Mon Nov 09 13:26:27 2020  clustername-01  [kern_audit:info:7967] 8503e8000006c66d :: clustername:ontapi :: 192.168.1.2:34050 :: clustername:unknown :: POST /servlets/netapp.servlets.admin.XMLrequest_filer HTTP/1.1 :: Error: 401 Unauthorized
- hosts: localhost
  become: false
  collections:
    - netapp.ontap
  name: ONTAP facts gatherer
  vars:
    out_path: /tmp
    username: customuser
    password:  pass
  tasks:
  - name: Gather facts
    na_ontap_info:
      hostname: "clustername.dzbank.vrnet"
      username: "{{ username }}"
      password: "{{ password }}"
      https: true
      validate_certs: true
      use_rest: always
      gather_subset:
        - "cifs_server_info"
        - "cifs_vserver_security_info"
        - "cluster_identity_info"
        - "ldap_client"
        - "ldap_config"
        - "net_dev_discovery_info"
        - "net_dns_info"
        - "net_ifgrp_info"
        - "net_port_info"
        - "ntp_server_info"
        - "security_login_account_info"
        - "sis_info"
        - "vserver_info"
    register: config_out
EXPECTED RESULTS

Successful login in the audit log.

ACTUAL RESULTS

Many failed logins in the audit log.


na_ontap_volume errors if Root LS Mirror Volume already exists

SUMMARY

When the SVM Root LS Mirror Volume already exists, the na_ontap_volume module fails with the error "Invalid operation. Reason: Cannot unmount Vserver namespace root volume".

ISSUE TYPE
  • Bug Report
COMPONENT NAME

na_ontap_volume

ANSIBLE VERSION
ansible 2.9.7
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/usr/share/my_modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.17 (default, Apr 15 2020, 17:20:14) [GCC 7.5.0]
CONFIGURATION
DEFAULT_MODULE_PATH(/etc/ansible/ansible.cfg) = [u'/usr/share/my_modules']
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = [u'/etc/ansible/roles']
DEPRECATION_WARNINGS(/etc/ansible/ansible.cfg) = False
OS / ENVIRONMENT

FAS8060 running ONTAP 9.7P3
ONTAP vSIM running 9.5P6

STEPS TO REPRODUCE
- name: Create SVM Root LS Mirror Volume
  netapp.ontap.na_ontap_volume:
    <<: *clusterlogin
    state: "{{ item.state }}"
    vserver: "{{ item.name }}"
    name: "{{ item.name }}_root_m1"
    aggregate_name: "{{ item.ls_mirror_aggr }}"
    size: 20
    size_unit: mb
    type: DP
    policy: default
    junction_path: ""
    space_guarantee: "volume"
    snapshot_policy: none
    volume_security_style: "{{ 'ntfs' if item.protocol.lower() is search('cifs') else 'unix' }}"
  with_items:
    "{{ vservers }}"
  when:
    - item.ls_mirror_aggr is defined
    - vservers != None
  tags: vserver_ls_volume
EXPECTED RESULTS

The module will report OK if the LS Mirror Volume already exists.

ACTUAL RESULTS
TASK [svms : Create SVM Root LS Mirror Volume] ******************************************************************************************************************************************************************************************************************************************************************************
task path: /mnt/c/Users/chrisj1/Documents/GitRepos/Test Cluster Build/roles/svms/tasks/main.yml:27
<fas-cluster1> ESTABLISH LOCAL CONNECTION FOR USER: root
<fas-cluster1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<fas-cluster1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1591317837.6-14779-179076282602759 && echo ansible-tmp-1591317837.6-14779-179076282602759="` echo /root/.ansible/tmp/ansible-tmp-1591317837.6-14779-179076282602759 `" ) && sleep 0'
Using module file /root/.ansible/collections/ansible_collections/netapp/ontap/plugins/modules/na_ontap_volume.py
<fas-cluster1> PUT /root/.ansible/tmp/ansible-local-14740YCh7IS/tmpTUCG_z TO /root/.ansible/tmp/ansible-tmp-1591317837.6-14779-179076282602759/AnsiballZ_na_ontap_volume.py
<fas-cluster1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1591317837.6-14779-179076282602759/ /root/.ansible/tmp/ansible-tmp-1591317837.6-14779-179076282602759/AnsiballZ_na_ontap_volume.py && sleep 0'
<fas-cluster1> EXEC /bin/sh -c '/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1591317837.6-14779-179076282602759/AnsiballZ_na_ontap_volume.py && sleep 0'
<fas-cluster1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1591317837.6-14779-179076282602759/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "/tmp/ansible_netapp.ontap.na_ontap_volume_payload_km7d0uv4/ansible_netapp.ontap.na_ontap_volume_payload.zip/ansible_collections/netapp/ontap/plugins/modules/na_ontap_volume.py", line 1306, in volume_unmount
  File "/usr/local/lib/python3.6/dist-packages/netapp_lib/api/zapi/zapi.py", line 301, in invoke_successfully
    raise NaApiError(code, msg)
netapp_lib.api.zapi.zapi.NaApiError: NetApp API failed. Reason - 13001:Invalid operation. Reason: Cannot unmount Vserver namespace root volume "fas_svm1_root_m1".
failed: [fas-cluster1] (item={u'ls_mirror_aggr': u'sas_aggr_02', u'protocol': u'cifs,nfs,iscsi', u'name': u'fas_svm1', u'subtype': u'default', u'state': u'present', u'snapshot_policy': u'default', u'aggr': u'sas_aggr_01', u'export_policy_clientmatch': u'10.128.57.0/24'}) => {
    "ansible_loop_var": "item",
    "changed": false,
    "invocation": {
        "module_args": {
            "aggr_list": null,
            "aggr_list_multiplier": null,
            "aggregate_name": "sas_aggr_02",
            "atime_update": null,
            "auto_provision_as": null,
            "auto_remap_luns": null,
            "cert_filepath": null,
            "check_interval": 30,
            "comment": null,
            "cutover_action": null,
            "efficiency_policy": null,
            "encrypt": false,
            "feature_flags": {},
            "force_restore": null,
            "force_unmap_luns": null,
            "from_name": null,
            "from_vserver": null,
            "group_id": null,
            "hostname": "10.128.58.113",
            "http_port": null,
            "https": true,
            "is_infinite": false,
            "is_online": true,
            "junction_path": "",
            "key_filepath": null,
            "language": null,
            "name": "fas_svm1_root_m1",
            "nvfail_enabled": null,
            "ontapi": null,
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "percent_snapshot_space": null,
            "policy": "default",
            "preserve_lun_ids": null,
            "qos_adaptive_policy_group": null,
            "qos_policy_group": null,
            "size": 20,
            "size_unit": "mb",
            "snapdir_access": null,
            "snapshot_auto_delete": null,
            "snapshot_policy": "none",
            "snapshot_restore": null,
            "space_guarantee": "volume",
            "space_slo": null,
            "state": "present",
            "tiering_policy": null,
            "time_out": 180,
            "type": "DP",
            "unix_permissions": null,
            "use_rest": "Auto",
            "user_id": null,
            "username": "admin",
            "validate_certs": false,
            "volume_security_style": "ntfs",
            "vserver": "fas_svm1",
            "vserver_dr_protection": null,
            "wait_for_completion": false
        }
    },
    "item": {
        "aggr": "sas_aggr_01",
        "export_policy_clientmatch": "10.128.57.0/24",
        "ls_mirror_aggr": "sas_aggr_02",
        "name": "fas_svm1",
        "protocol": "cifs,nfs,iscsi",
        "snapshot_policy": "default",
        "state": "present",
        "subtype": "default"
    },
    "msg": "Error unmounting volume fas_svm1_root_m1: NetApp API failed. Reason - 13001:Invalid operation. Reason: Cannot unmount Vserver namespace root volume \"fas_svm1_root_m1\". "
}
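
A possible playbook-side workaround, assuming the unmount attempt is triggered by passing junction_path: "" for the DP mirror (a vserver root LS mirror cannot be unmounted), is to omit junction_path entirely. A sketch, not a confirmed fix:

- name: Create SVM Root LS Mirror Volume (workaround sketch, junction_path omitted)
  netapp.ontap.na_ontap_volume:
    <<: *clusterlogin
    state: present
    vserver: "{{ item.name }}"
    name: "{{ item.name }}_root_m1"
    aggregate_name: "{{ item.ls_mirror_aggr }}"
    size: 20
    size_unit: mb
    type: DP
    # junction_path deliberately omitted so the module never attempts volume-unmount
  with_items:
    "{{ vservers }}"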

na_ontap_user: cannot add a publickey for a login account

SUMMARY

Currently it is possible to use the na_ontap_user module to create a password-less login using the parameter authentication_method: publickey, but you cannot inject a public key for the login account. So, I think the module should be able to let a user set the following parameters:

  • publickey
  • index
  • comment
ISSUE TYPE
  • Feature Idea
COMPONENT NAME

ontap/plugins/modules/na_ontap_user.py

ADDITIONAL INFORMATION

The typical use case for a password-less ssh login is to let an application running on a server execute NetApp commands via ssh (for example: snap create, vol clone, etc.).
Currently, the only option for injecting the public key is the CLI or the na_ontap_command module, giving up Ansible's idempotency:

- name: Inject the publickey to myapp user
  na_ontap_command:
    command:
      - security
      - login
      - publickey
      - create
      - -vserver
      - '{{ netapp_svm_name }}'
      - -username
      - '{{ myapp }}'
      - -publickey
      - '{{ mypubkey }}'
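
A sketch of how the module could be invoked once these parameters exist (publickey, index and comment are proposed parameters, not currently supported):

- name: Create a password-less login with an injected public key (proposed)
  na_ontap_user:
    state: present
    name: '{{ myapp }}'
    vserver: '{{ netapp_svm_name }}'
    applications: ssh
    authentication_method: publickey
    role_name: vsadmin
    publickey: '{{ mypubkey }}'       # proposed parameter
    index: 0                          # proposed parameter
    comment: myapp automation key     # proposed parameter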

Inclusion of netapp.aws in Ansible 2.10

This collection will be included in Ansible 2.10 because it contains modules and/or plugins that were included in Ansible 2.9. Please review:

DEADLINE: 2020-08-18

The latest version of the collection available on August 18 will be included in Ansible 2.10.0, except possibly newer versions which differ only in the patch level. (For details, see the roadmap). Please release version 1.0.0 of your collection by this date! If 1.0.0 does not exist, the same 0.x.y version will be used in all of Ansible 2.10 without updates, and your 1.x.y release will not be included until Ansible 2.11 (unless you request an exception at a community working group meeting and go through a demanding manual process to vouch for backwards compatibility . . . you want to avoid this!).

Follow semantic versioning rules

Your collection versioning must follow all semver rules. This means:

  • Patch level releases can only contain bugfixes;
  • Minor releases can contain new features, new modules and plugins, and bugfixes, but must not break backwards compatibility;
  • Major releases can break backwards compatibility.

Changelogs and Porting Guide

Your collection should provide data for the Ansible 2.10 changelog and porting guide. The changelog and porting guide are automatically generated from ansible-base, and from the changelogs of the included collections. All changes from the breaking_changes, major_changes, removed_features and deprecated_features sections will appear in both the changelog and the porting guide. You have two options for providing changelog fragments to include:

  1. If possible, use the antsibull-changelog tool, which uses the same changelog fragments as the ansible/ansible repository (see the documentation).
  2. If you cannot use antsibull-changelog, you can provide the changelog in a machine-readable format as changelogs/changelog.yaml inside your collection (see the documentation of changelogs/changelog.yaml format).
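
For example, a minimal antsibull-changelog fragment placed under changelogs/fragments/ could look like this (the entry text is illustrative):

minor_changes:
  - na_ontap_user - added public key management (illustrative entry).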

If you cannot contribute to the integrated Ansible changelog using one of these methods, please provide a link to your collection's changelog by creating an issue in https://github.com/ansible-community/ansible-build-data/. If you do not provide changelogs/changelog.yaml or a link, users will not be able to find out what changed in your collection from the Ansible changelog and porting guide.

Make sure your collection passes the sanity tests

Run ansible-test sanity --docker -v in the collection with the latest ansible-base or stable-2.10 ansible/ansible checkout.

Keep informed

Be sure you're subscribed to:

Questions and Feedback

If you have questions or want to provide feedback, please see the Feedback section in the collection requirements.

(Internal link to keep track of issues: ansible-collections/overview#102)

na_ontap_command throws : "AttributeError: 'NoneType' object has no attribute 'get_child_by_name'"

SUMMARY

When na_ontap_command is used to list the volumes, it fails with the error below:

{
    "exception": "Traceback (most recent call last):\r\n  File \"/home/cloud-user/.ansible/tmp/ansible-tmp-1607709071.77-46722177438710/AnsiballZ_na_ontap_command.py\", line 102, in <module>\r\n    _ansiballz_main()\r\n  File \"/home/cloud-user/.ansible/tmp/ansible-tmp-1607709071.77-46722177438710/AnsiballZ_na_ontap_command.py\", line 94, in _ansiballz_main\r\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n  File \"/home/cloud-user/.ansible/tmp/ansible-tmp-1607709071.77-46722177438710/AnsiballZ_na_ontap_command.py\", line 40, in invoke_module\r\n    runpy.run_module(mod_name='ansible.modules.storage.netapp.na_ontap_command', init_globals=None, run_name='__main__', alter_sys=True)\r\n  File \"/usr/lib64/python2.7/runpy.py\", line 176, in run_module\r\n    fname, loader, pkg_name)\r\n  File \"/usr/lib64/python2.7/runpy.py\", line 82, in _run_module_code\r\n    mod_name, mod_fname, mod_loader, pkg_name)\r\n  File \"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code\r\n    exec code in run_globals\r\n  File \"/tmp/ansible_na_ontap_command_payload_8a8r0D/ansible_na_ontap_command_payload.zip/ansible/modules/storage/netapp/na_ontap_command.py\", line 228, in <module>\r\n  File \"/tmp/ansible_na_ontap_command_payload_8a8r0D/ansible_na_ontap_command_payload.zip/ansible/modules/storage/netapp/na_ontap_command.py\", line 224, in main\r\n  File \"/tmp/ansible_na_ontap_command_payload_8a8r0D/ansible_na_ontap_command_payload.zip/ansible/modules/storage/netapp/na_ontap_command.py\", line 149, in apply\r\n  File \"/tmp/ansible_na_ontap_command_payload_8a8r0D/ansible_na_ontap_command_payload.zip/ansible/modules/storage/netapp/na_ontap_command.py\", line 117, in run_command\r\n  File \"/tmp/ansible_na_ontap_command_payload_8a8r0D/ansible_na_ontap_command_payload.zip/ansible/modules/storage/netapp/na_ontap_command.py\", line 111, in asup_log_for_cserver\r\n  File \"/tmp/ansible_na_ontap_command_payload_8a8r0D/ansible_na_ontap_command_payload.zip/ansible/module_utils/netapp.py\", line 525, in get_cserver\r\n  File \"/tmp/ansible_na_ontap_command_payload_8a8r0D/ansible_na_ontap_command_payload.zip/ansible/module_utils/netapp.py\", line 519, in get_cserver_zapi\r\nAttributeError: 'NoneType' object has no attribute 'get_child_by_name'\r\n",
    "_ansible_no_log": false,
    "module_stderr": "OpenSSH_7.4p1, OpenSSL 1.0.2k-fips  26 Jan 2017\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 21\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to 10.173.234.25 closed.\r\n",
    "changed": false,
    "module_stdout": "Traceback (most recent call last):\r\n  File \"/home/cloud-user/.ansible/tmp/ansible-tmp-1607709071.77-46722177438710/AnsiballZ_na_ontap_command.py\", line 102, in <module>\r\n    _ansiballz_main()\r\n  File \"/home/cloud-user/.ansible/tmp/ansible-tmp-1607709071.77-46722177438710/AnsiballZ_na_ontap_command.py\", line 94, in _ansiballz_main\r\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n  File \"/home/cloud-user/.ansible/tmp/ansible-tmp-1607709071.77-46722177438710/AnsiballZ_na_ontap_command.py\", line 40, in invoke_module\r\n    runpy.run_module(mod_name='ansible.modules.storage.netapp.na_ontap_command', init_globals=None, run_name='__main__', alter_sys=True)\r\n  File \"/usr/lib64/python2.7/runpy.py\", line 176, in run_module\r\n    fname, loader, pkg_name)\r\n  File \"/usr/lib64/python2.7/runpy.py\", line 82, in _run_module_code\r\n    mod_name, mod_fname, mod_loader, pkg_name)\r\n  File \"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code\r\n    exec code in run_globals\r\n  File \"/tmp/ansible_na_ontap_command_payload_8a8r0D/ansible_na_ontap_command_payload.zip/ansible/modules/storage/netapp/na_ontap_command.py\", line 228, in <module>\r\n  File \"/tmp/ansible_na_ontap_command_payload_8a8r0D/ansible_na_ontap_command_payload.zip/ansible/modules/storage/netapp/na_ontap_command.py\", line 224, in main\r\n  File \"/tmp/ansible_na_ontap_command_payload_8a8r0D/ansible_na_ontap_command_payload.zip/ansible/modules/storage/netapp/na_ontap_command.py\", line 149, in apply\r\n  File \"/tmp/ansible_na_ontap_command_payload_8a8r0D/ansible_na_ontap_command_payload.zip/ansible/modules/storage/netapp/na_ontap_command.py\", line 117, in run_command\r\n  File \"/tmp/ansible_na_ontap_command_payload_8a8r0D/ansible_na_ontap_command_payload.zip/ansible/modules/storage/netapp/na_ontap_command.py\", line 111, in asup_log_for_cserver\r\n  File \"/tmp/ansible_na_ontap_command_payload_8a8r0D/ansible_na_ontap_command_payload.zip/ansible/module_utils/netapp.py\", line 525, in get_cserver\r\n  File \"/tmp/ansible_na_ontap_command_payload_8a8r0D/ansible_na_ontap_command_payload.zip/ansible/module_utils/netapp.py\", line 519, in get_cserver_zapi\r\nAttributeError: 'NoneType' object has no attribute 'get_child_by_name'\r\n",
    "rc": 1,
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    }
}
ISSUE TYPE
  • Bug Report
COMPONENT NAME

na_ontap_command

ANSIBLE VERSION
2.9.6
CONFIGURATION
ansible-playbook 2.9.6
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/var/lib/awx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.7.5 (default, Sep 26 2019, 13:23:47) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
OS / ENVIRONMENT

RHEL 7

STEPS TO REPRODUCE
---
- name: Listing the current TVS volumes
  hosts: all
  gather_facts: false
  tasks:
    - name: run ontap cli command
      na_ontap_command:
        hostname: "{{ v_tvs_virtual_storage }}"
        username: "{{ v_user_name }}"
        password: "{{ v_user_password }}"
        command: ['vol show']
        privilege: 'admin'
        https: true
        validate_certs: false
        use_rest: Always
EXPECTED RESULTS

List the volume

ACTUAL RESULTS
AttributeError: 'NoneType' object has no attribute 'get_child_by_name'\r\n",
    "rc": 1,
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    }

repo name change to just "netapp"

SUMMARY

As this repo now lives in an organization called "ansible-collections", it might be worth changing the name of the repo to just "netapp" and removing the "ansible_collections" prefix.

na_ontap_volume_clone fails on parent-vserver

SUMMARY
"msg": "Error creating volume clone: vol_dati_CIFS_volume_clone: NetApp API failed. Reason - 13115:Extra input: parent-vserver"}
ISSUE TYPE
  • Bug Report
COMPONENT NAME

na_ontap_volume_clone in netapp.ontap collection

ANSIBLE VERSION
ansible 2.9.0
  config file = /opt/users/laurentn-local/ansible/ansible_git/ansible.cfg
  configured module search path = ['/opt/users/laurentn-local/ansible/ansible_git/ansible/lib/ansible/modules/storage/netapp']
  ansible python module location = /opt/users/laurentn-local/ansible/virtualenv37/lib/python3.7/site-packages/ansible
  executable location = /opt/users/laurentn-local/ansible/virtualenv37/bin/ansible
  python version = 3.7.1 (default, Nov 21 2018, 03:40:42) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)]

CONFIGURATION
COLLECTIONS_PATHS(env: ANSIBLE_COLLECTIONS_PATHS) = ['/opt/users/laurentn-local/ansible/ansible_git']
DEFAULT_HOST_LIST(/opt/users/laurentn-local/ansible/ansible_git/ansible.cfg) = ['/opt/users/laurentn-local/ansible/ansi
DEFAULT_MODULE_PATH(/opt/users/laurentn-local/ansible/ansible_git/ansible.cfg) = ['/opt/users/laurentn-local/ansible/an
DEFAULT_MODULE_UTILS_PATH(/opt/users/laurentn-local/ansible/ansible_git/ansible.cfg) = ['/opt/users/laurentn-local/ansi

OS / ENVIRONMENT
STEPS TO REPRODUCE

Using the parent_vserver parameter and running at admin level

    - name: create volume clone on different SVMs
      na_ontap_volume_clone:
       <<: *creds
       state: present
       name: IBM_123_cloned
       parent_volume: IBM_123
       parent_vserver: ansibleSVM
       vserver: trident_svm
EXPECTED RESULTS

The clone should be created, or nothing should happen if already present

ACTUAL RESULTS
"msg": "Error creating volume clone: vol_dati_CIFS_volume_clone: NetApp API failed. Reason - 13115:Extra input: parent-vserver"}

python NetApp-Lib error

SUMMARY

Getting below error when executing the playbook even though NetApp-Lib is installed.
fatal: [localhost]: FAILED! => {"changed": false, "msg": "the python NetApp-Lib module is required"}

ISSUE TYPE
  • Bug Report
COMPONENT NAME

na_ontap_info

ANSIBLE VERSION
# ansible --version
ansible 2.9.13
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Sep 26 2019, 13:23:47) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
[root@rwle020v ~]# 

CONFIGURATION
# ansible-config dump --only-changed
DISPLAY_SKIPPED_HOSTS(/etc/ansible/ansible.cfg) = False
OS / ENVIRONMENT

NAME="Red Hat Enterprise Linux Server"
VERSION="7.8 (Maipo)"

STEPS TO REPRODUCE

I have installed requests and netapp-lib version 2020.7.16, and ran the source command on the "collections-env-setup" file, but I am still receiving the error "fatal: [localhost]: FAILED! => {"changed": false, "msg": "the python NetApp-Lib module is required"}"

Name: netapp-lib
Version: 2020.7.16
Summary: netapp-lib is required for Ansible deployments to interact with NetApp storage systems.
Home-page: https://netapp.io/
Author: NetApp, Inc.
Author-email: [email protected]
License: Apache License, Version 2.0
Location: /usr/local/lib/python3.6/site-packages
Requires: xmltodict, lxml

$ echo $PYTHONPATH
/home/schowd1/.ansible/collections

---
- hosts: localhost
  gather_facts: false
  collections:
    - netapp.ontap

  environment:
    SSL_CERT_FILE: <cert_path>

  vars:
    ansible_python_interpreter: /usr/bin/python3
    vserver: xxxxxxxx
    login: &login
      hostname: xxxxxxxx
      username: admin
      password: admins_password
      https: True
      validate_certs: True

  tasks:
  - name: Info
    na_ontap_info:
      state: info
      vserver: "{{ vserver }}"
      gather_subset: clock_info
      <<: *login
    register: ontap

  - name: print info
    debug:
      var: ontap
EXPECTED RESULTS

Need to show the NTP clock info for the vserver.

ACTUAL RESULTS
$ ansible-playbook test_gatherinfo.yml -vvv
ansible-playbook 2.9.13
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/test/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.7.5 (default, Sep 26 2019, 13:23:47) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin

PLAYBOOK: test_gatherinfo.yml ***************************************************************************************************************************************************************
1 plays in test_gatherinfo.yml

PLAY [localhost] ****************************************************************************************************************************************************************************
META: ran handlers
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: test
<127.0.0.1> EXEC /bin/sh -c 'echo ~test && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/test/.ansible/tmp `"&& mkdir "` echo /home/test/.ansible/tmp/ansible-tmp-1605239571.93-26137-244036756566486 `" && echo ansible-tmp-1605239571.93-26137-244036756566486="` echo /home/test/.ansible/tmp/ansible-tmp-1605239571.93-26137-244036756566486 `" ) && sleep 0'
Using module file /home/test/.ansible/collections/ansible_collections/netapp/ontap/plugins/modules/na_ontap_info.py
<127.0.0.1> PUT /home/test/.ansible/tmp/ansible-local-261274tHnob/tmpv3Gacv TO /home/test/.ansible/tmp/ansible-tmp-1605239571.93-26137-244036756566486/AnsiballZ_na_ontap_info.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/test/.ansible/tmp/ansible-tmp-1605239571.93-26137-244036756566486/ /home/test/.ansible/tmp/ansible-tmp-1605239571.93-26137-244036756566486/AnsiballZ_na_ontap_info.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'SSL_CERT_FILE=<cert_path> /usr/bin/python3 /home/test/.ansible/tmp/ansible-tmp-1605239571.93-26137-244036756566486/AnsiballZ_na_ontap_info.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/test/.ansible/tmp/ansible-tmp-1605239571.93-26137-244036756566486/ > /dev/null 2>&1 && sleep 0'

TASK [Info] *********************************************************************************************************************************************************************************
task path: /home/test/netappplaybooks/test_gatherinfo.yml:21
fatal: [localhost]: FAILED! => {
    "changed": false, 
    "invocation": {
        "module_args": {
            "cert_filepath": null, 
            "continue_on_error": [
                "never"
            ], 
            "desired_attributes": null, 
            "feature_flags": {}, 
            "gather_subset": [
                "clock_info"
            ], 
            "hostname": "xxxxxxx", 
            "http_port": null, 
            "https": true, 
            "key_filepath": null, 
            "max_records": 1024, 
            "ontapi": null, 
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", 
            "query": null, 
            "state": "info", 
            "summary": false, 
            "use_native_zapi_tags": false, 
            "use_rest": "auto", 
            "username": "admin", 
            "validate_certs": true, 
            "volume_move_target_aggr_info": null, 
            "vserver": "xxxxxxxx"
        }
    }, 
    "msg": "the python NetApp-Lib module is required"
}

PLAY RECAP **********************************************************************************************************************************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   
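
Since pip show reports netapp-lib under /usr/local/lib/python3.6/site-packages, it is worth confirming that the interpreter the play actually uses can import it. A diagnostic sketch, using the interpreter path set in the playbook above:

- name: Verify netapp-lib is importable by the play's Python interpreter
  command: /usr/bin/python3 -c 'import netapp_lib'
  changed_when: false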

KeyError: 'registrationState' during Resource provider 'Microsoft.NetApp' used by this operation is not registered

SUMMARY

Executing netapp.azure.azure_rm_netapp_account with a new subscription results in a KeyError when the registrationState field is not yet available; full error output below.
If the command is run again, it succeeds quite quickly.
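
A possible workaround is to pre-register the provider before the first run, here sketched with the Azure CLI through the command module (assumes az is installed and logged in):

- name: Pre-register the Microsoft.NetApp resource provider (workaround sketch)
  command: az provider register --namespace Microsoft.NetApp --wait
  changed_when: false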

ISSUE TYPE
  • Bug Report
COMPONENT NAME

netapp.azure.azure_rm_netapp_account

ANSIBLE VERSION
ansible 2.10.3
  config file = /ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.7.7 (default, Jul 19 2020, 03:57:54) [GCC 8.3.0]

CONFIGURATION
ANSIBLE_SSH_RETRIES(/ansible/ansible.cfg) = 10
COLLECTIONS_PATHS(/ansible/ansible.cfg) = ['/ansible/collections']
DEFAULT_CALLBACK_WHITELIST(/ansible/ansible.cfg) = ['profile_tasks']
DEFAULT_HOST_LIST(/ansible/ansible.cfg) = ['/ansible/inventories']
DEFAULT_ROLES_PATH(/ansible/ansible.cfg) = ['/ansible/roles']
ENABLE_TASK_DEBUGGER(/ansible/ansible.cfg) = True
OS / ENVIRONMENT

Running in alpine linux docker container on osx

STEPS TO REPRODUCE
- name: Create Netapp Storage Account
  netapp.azure.azure_rm_netapp_account:
    resource_group: "{{ fef0_metagroup_name }}"
    name: "{{ fef0_netapp_account_name }}"
    state: "{{ _fef0_state }}"
    location: "{{ fef0_az_region }}"
EXPECTED RESULTS

Storage account should be created

ACTUAL RESULTS
TASK [Create Netapp Storage Account] ********************************************************************************************************
task path: /ansible/1x3_storage.yml:87
Wednesday 16 December 2020  22:35:16 +0000 (0:00:05.632)       0:03:03.901 **** 
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1608158116.656728-1043-211408991222507 `" && echo ansible-tmp-1608158116.656728-1043-211408991222507="` echo /root/.ansible/tmp/ansible-tmp-1608158116.656728-1043-211408991222507 `" ) && sleep 0'
Using module file /ansible/collections/ansible_collections/netapp/azure/plugins/modules/azure_rm_netapp_account.py
<localhost> PUT /root/.ansible/tmp/ansible-local-12ul4fk_2/tmp_qvoymcn TO /root/.ansible/tmp/ansible-tmp-1608158116.656728-1043-211408991222507/AnsiballZ_azure_rm_netapp_account.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1608158116.656728-1043-211408991222507/ /root/.ansible/tmp/ansible-tmp-1608158116.656728-1043-211408991222507/AnsiballZ_azure_rm_netapp_account.py && sleep 0'
<localhost> EXEC /bin/sh -c 'ANSIBLE_HOST_KEY_CHECKING=False AWS_REGION=eu-west-1 /usr/bin/python3.7 /root/.ansible/tmp/ansible-tmp-1608158116.656728-1043-211408991222507/AnsiballZ_azure_rm_netapp_account.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1608158116.656728-1043-211408991222507/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
    "changed": false,
    "module_stderr": "Resource provider 'Microsoft.NetApp' used by this operation is not registered. We are registering for you.\nTraceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1608158116.656728-1043-211408991222507/AnsiballZ_azure_rm_netapp_account.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1608158116.656728-1043-211408991222507/AnsiballZ_azure_rm_netapp_account.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1608158116.656728-1043-211408991222507/AnsiballZ_azure_rm_netapp_account.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible_collections.netapp.azure.plugins.modules.azure_rm_netapp_account', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/usr/lib/python3.7/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib/python3.7/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/usr/lib/python3.7/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_netapp.azure.azure_rm_netapp_account_payload_ry57thov/ansible_netapp.azure.azure_rm_netapp_account_payload.zip/ansible_collections/netapp/azure/plugins/modules/azure_rm_netapp_account.py\", line 193, in <module>\n  File \"/tmp/ansible_netapp.azure.azure_rm_netapp_account_payload_ry57thov/ansible_netapp.azure.azure_rm_netapp_account_payload.zip/ansible_collections/netapp/azure/plugins/modules/azure_rm_netapp_account.py\", line 189, in main\n  File \"/tmp/ansible_netapp.azure.azure_rm_netapp_account_payload_ry57thov/ansible_netapp.azure.azure_rm_netapp_account_payload.zip/ansible_collections/netapp/azure/plugins/modules/azure_rm_netapp_account.py\", line 122, in __init__\n  File \"/tmp/ansible_netapp.azure.azure_rm_netapp_account_payload_ry57thov/ansible_netapp.azure.azure_rm_netapp_account_payload.zip/ansible_collections/netapp/azure/plugins/module_utils/azure_rm_netapp_common.py\", line 23, in __init__\n  File \"/tmp/ansible_netapp.azure.azure_rm_netapp_account_payload_ry57thov/ansible_netapp.azure.azure_rm_netapp_account_payload.zip/ansible_collections/azure/azcollection/plugins/module_utils/azure_rm_common.py\", line 441, in __init__\n  File \"/tmp/ansible_netapp.azure.azure_rm_netapp_account_payload_ry57thov/ansible_netapp.azure.azure_rm_netapp_account_payload.zip/ansible_collections/netapp/azure/plugins/modules/azure_rm_netapp_account.py\", line 181, in exec_module\n  File \"/tmp/ansible_netapp.azure.azure_rm_netapp_account_payload_ry57thov/ansible_netapp.azure.azure_rm_netapp_account_payload.zip/ansible_collections/netapp/azure/plugins/modules/azure_rm_netapp_account.py\", line 153, in create_azure_netapp_account\n  File \"/usr/lib/python3.7/site-packages/azure/mgmt/netapp/operations/_accounts_operations.py\", line 263, in create_or_update\n    **operation_config\n  File \"/usr/lib/python3.7/site-packages/azure/mgmt/netapp/operations/_accounts_operations.py\", line 210, in _create_or_update_initial\n    response = self._client.send(request, stream=False, **operation_config)\n  File \"/usr/lib/python3.7/site-packages/msrest/service_client.py\", line 336, in send\n    pipeline_response = self.config.pipeline.run(request, **kwargs)\n  File \"/usr/lib/python3.7/site-packages/msrest/pipeline/__init__.py\", line 197, in run\n    return first_node.send(pipeline_request, **kwargs)  # type: ignore\n  
File \"/usr/lib/python3.7/site-packages/msrest/pipeline/__init__.py\", line 150, in send\n    response = self.next.send(request, **kwargs)\n  File \"/usr/lib/python3.7/site-packages/msrest/pipeline/requests.py\", line 72, in send\n    return self.next.send(request, **kwargs)\n  File \"/usr/lib/python3.7/site-packages/msrest/pipeline/requests.py\", line 137, in send\n    return self.next.send(request, **kwargs)\n  File \"/usr/lib/python3.7/site-packages/msrest/pipeline/__init__.py\", line 150, in send\n    response = self.next.send(request, **kwargs)\n  File \"/usr/lib/python3.7/site-packages/msrest/pipeline/requests.py\", line 193, in send\n    self.driver.send(request.http_request, **kwargs)\n  File \"/usr/lib/python3.7/site-packages/msrest/universal_http/requests.py\", line 333, in send\n    return super(RequestsHTTPSender, self).send(request, **requests_kwargs)\n  File \"/usr/lib/python3.7/site-packages/msrest/universal_http/requests.py\", line 142, in send\n    **kwargs)\n  File \"/usr/lib/python3.7/site-packages/requests/sessions.py\", line 542, in request\n    resp = self.send(prep, **send_kwargs)\n  File \"/usr/lib/python3.7/site-packages/requests/sessions.py\", line 662, in send\n    r = dispatch_hook('response', hooks, r, **kwargs)\n  File \"/usr/lib/python3.7/site-packages/requests/hooks.py\", line 31, in dispatch_hook\n    _hook_data = hook(hook_data, **kwargs)\n  File \"/usr/lib/python3.7/site-packages/msrest/universal_http/requests.py\", line 275, in user_hook_cb\n    return user_hook(r, *args, **kwargs)\n  File \"/usr/lib/python3.7/site-packages/msrestazure/tools.py\", line 57, in register_rp_hook\n    if not _register_rp(session, url_prefix, rp_name):\n  File \"/usr/lib/python3.7/site-packages/msrestazure/tools.py\", line 102, in _register_rp\n    if rp_info['registrationState'] == 'Registered':\nKeyError: 'registrationState'\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1
}

na_ontap_kerberos_realm doesn't support Microsoft Active Directory

The na_ontap_kerberos_realm module (new in 2.9) doesn't appear to support Microsoft Active Directory. We have created an updated version of the module which has the following modifications:

  1. Case sensitivity issue with Microsoft and Other as kdc_vendor parameter choices, changed to all lower case (so that it also matches the output of the kerberos-realm-get-iter API)
  2. Added ad_server_ip and ad_server_name as parameters
  3. Update required_if Ansible parameter definition so that ad_server_ip and ad_server_name are required parameters if kdc_vendor is set to microsoft
  4. Added ad_server_ip and ad_server_name as optional child elements when making the kerberos-realm-create API call
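
A sketch of the updated module in use (ad_server_ip and ad_server_name are the newly added parameters; all values are illustrative):

- name: Create Kerberos realm for an Active Directory KDC
  na_ontap_kerberos_realm:
    state: present
    vserver: svm1
    realm: EXAMPLE.COM
    kdc_vendor: microsoft
    kdc_ip: 10.0.0.10
    ad_server_ip: 10.0.0.10              # added parameter
    ad_server_name: dc1.example.com      # added parameter
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"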

VLAN step crashes when following docs

Minor typo fix.

SUMMARY

Docs show

vlans:
  - { id: 201, node: cluster-01, parent: a0a }

The step in the role doesn't reference item.parent; it uses item.port instead.

- name: Create VLAN
  na_ontap_net_vlan:
    state: present
    vlanid: "{{ item.id }}"
    node: "{{ item.node }}"
    parent_interface: "{{ item.port }}"
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
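
A corrected task, matching the documented vlans structure, would reference item.parent (sketch):

- name: Create VLAN
  na_ontap_net_vlan:
    state: present
    vlanid: "{{ item.id }}"
    node: "{{ item.node }}"
    parent_interface: "{{ item.parent }}"
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"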
ISSUE TYPE
  • Bug Report
COMPONENT NAME

role na_ontap_cluster_config

ANSIBLE VERSION
ansible 2.9.6
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.6.9 (default, Nov  7 2019, 10:44:02) [GCC 8.3.0]
CONFIGURATION
n/a
OS / ENVIRONMENT

n/a

STEPS TO REPRODUCE

Just run via the docs :p

- name: Create VLAN
  na_ontap_net_vlan:
    state: present
    vlanid: "{{ item.id }}"
    node: "{{ item.node }}"
    parent_interface: "{{ item.port }}"
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
EXPECTED RESULTS

It to run.

ACTUAL RESULTS

Error.


na_ontap_firewall_policy missing portmap as a choice for service parameter

The na_ontap_firewall_policy module (added in 2.7) is missing portmap as a possible choice for the service parameter.

We created an updated version of this which accepts portmap as a choice as per code snippet below:

service=dict(required=False, type='str', choices=['portmap', 'dns', 'http', 'https', 'ndmp',
                                                  'ndmps', 'ntp', 'rsh', 'snmp', 'ssh', 'telnet']),
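
With that change, a playbook can use the new choice; a sketch (policy name, network and SVM are illustrative):

- name: Allow portmap in a firewall policy
  na_ontap_firewall_policy:
    state: present
    policy: data_policy
    service: portmap
    allow_list:
      - 10.0.0.0/24
    vserver: svm1
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"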

na_ontap_command return_dict throws an error

SUMMARY

na_ontap_command return_dict always gives me an error:
parse_xml_to_dict\nValueError: invalid literal for int() with base 10: 'u1'\n"

ISSUE TYPE
  • Bug Report
COMPONENT NAME
ANSIBLE VERSION
ansible 2.9.1
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Aug 29 2016, 10:12:21) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)]
CONFIGURATION
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
OS / ENVIRONMENT

NAME="Red Hat Enterprise Linux Server"
VERSION="7.2 (Maipo)"

target: NetApp Release 9.3P7: Wed Jul 25 10:11:10 UTC 2018

STEPS TO REPRODUCE
- hosts: localhost
  gather_facts: False
  vars:
    hostname: xxxxxxx
    username: xxxxxxx
    password: xxxxxxx

  tasks:
  - name: run ontap cli command
    na_ontap_command:
      hostname: "{{ hostname }}"
      username: "{{ username }}"
      password: "{{ password }}"
      command: "version"
      validate_certs: no
      https: true
      return_dict: yes
    register: command_out

  - name: print output
    debug:
      msg: "{{ command_out.msg }}"
EXPECTED RESULTS

this is what happens when I do it without return_dict:

PLAY [localhost] ****************************************************************************************************************************************************************************

TASK [run ontap cli command] ****************************************************************************************************************************************************************
changed: [localhost]

TASK [print output] *************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "<results xmlns="http://www.netapp.com/filer/admin\" status="passed">NetApp Release 9.3P14: Wed Jul 10 14:38:46 UTC 2019\n\n1"

I expect the same output in dictionary form

ACTUAL RESULTS
PLAY [localhost] ****************************************************************************************************************************************************************************

TASK [run ontap cli command] ****************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ValueError: invalid literal for int() with base 10: 'u1'
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1589532394.97-154348621604878/AnsiballZ_na_ontap_command.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1589532394.97-154348621604878/AnsiballZ_na_ontap_command.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1589532394.97-154348621604878/AnsiballZ_na_ontap_command.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible.modules.storage.netapp.na_ontap_command', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/usr/lib64/python2.7/runpy.py\", line 176, in run_module\n    fname, loader, pkg_name)\n  File \"/usr/lib64/python2.7/runpy.py\", line 82, in _run_module_code\n    mod_name, mod_fname, mod_loader, pkg_name)\n  File \"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code\n    exec code in run_globals\n  File \"/tmp/ansible_na_ontap_command_payload_xYTFFO/ansible_na_ontap_command_payload.zip/ansible/modules/storage/netapp/na_ontap_command.py\", line 228, in <module>\n  File \"/tmp/ansible_na_ontap_command_payload_xYTFFO/ansible_na_ontap_command_payload.zip/ansible/modules/storage/netapp/na_ontap_command.py\", line 224, in main\n  File \"/tmp/ansible_na_ontap_command_payload_xYTFFO/ansible_na_ontap_command_payload.zip/ansible/modules/storage/netapp/na_ontap_command.py\", line 149, in apply\n  File \"/tmp/ansible_na_ontap_command_payload_xYTFFO/ansible_na_ontap_command_payload.zip/ansible/modules/storage/netapp/na_ontap_command.py\", line 135, in run_command\n  File \"/tmp/ansible_na_ontap_command_payload_xYTFFO/ansible_na_ontap_command_payload.zip/ansible/modules/storage/netapp/na_ontap_command.py\", line 187, in parse_xml_to_dict\nValueError: invalid literal for int() with base 10: 'u1'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

PLAY RECAP **********************************************************************************************************************************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
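
The literal 'u1' looks like a Python 2 unicode repr being handed to int() in parse_xml_to_dict (the traceback shows /usr/lib64/python2.7). A possible workaround, an assumption rather than a confirmed fix, is to force a Python 3 interpreter for the play, provided netapp-lib is installed for it:

  vars:
    ansible_python_interpreter: /usr/bin/python3   # path is illustrative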

Description and default interval value issues with SolidFire snapshot schedule module

SUMMARY
  1. The module description is completely wrong (maybe not just this module).
    It says it's for account management.

  2. There's an unnecessary default non-zero value for time_interval_days.

    schedule_type:
        description:
        - Schedule type for creating schedule.
        choices: ['DaysOfWeekFrequency','DaysOfMonthFrequency','TimeIntervalFrequency']

    time_interval_days:
        description: Time interval in days.
        default: 1
ISSUE TYPE
  • Bug Report
COMPONENT NAME
ANSIBLE VERSION
ansible 2.9.10
  config file = None
  configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/vagrant/.local/lib/python3.6/site-packages/ansible
  executable location = /home/vagrant/.local/bin/ansible
  python version = 3.6.9 (default, Apr 18 2020, 01:56:04) [GCC 8.4.0]
CONFIGURATION
  • Element OS 11.7
EXPECTED RESULTS
  • The module description correctly describes its purpose
  • The module does not use unreasonable default values
ACTUAL RESULTS
  • The module description is wrong
  • At least one variable has an unreasonable default value (time_interval_days=1). Actually I don't even understand why time_interval_days is necessary in the first place (there's DaysOfWeekFrequency and DaysOfMonthFrequency as well as time_interval_hours=24), but if it can't be removed, then it should have a reasonable default value like the other two time intervals, namely 0.
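
A sketch of the proposed documentation fix, in the same format as the excerpt above:

    time_interval_days:
        description: Time interval in days.
        default: 0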

na_ontap_qtree : KeyError: No element by given name mode.

SUMMARY

The playbook can run only once: the first run completes, but a second run ends with a module error.

ISSUE TYPE

After the first run, the module na_ontap_qtree looks for a key named "mode" to fill the variable unix_permissions, but this key name does not exist.

Workaround: disabled line 186 in na_ontap_qtree.py, as the key mode does not exist

 return_q = {'export_policy': result['attributes-list']['qtree-info']['export-policy'],
#            'unix_permissions': result['attributes-list']['qtree-info']['mode'],
             'oplocks': result['attributes-list']['qtree-info']['oplocks'],
             'security_style': result['attributes-list']['qtree-info']['security-style']}
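
A playbook-side alternative, an assumption rather than a confirmed fix, is to omit unix_permissions for NTFS-style qtrees so the module never compares against the missing mode field:

    - name: Create Qtree (workaround sketch for NTFS security style)
      na_ontap_qtree:
        hostname: "{{ hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"
        state: "present"
        name: "tq1"
        flexvol_name: "testqtree1"
        export_policy: "nfs-testqtree1"
        security_style: "ntfs"
        oplocks: enabled
        unix_permissions: "{{ omit }}"
        vserver: "svm_test"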
COMPONENT NAME

Line 186 /usr/lib/python2.7/dist-packages/ansible/modules/storage/netapp/na_ontap_qtree.py

ANSIBLE VERSION
ansible 2.9.12
CONFIGURATION
netapp-lib (2018.11.13) # /usr/local/lib/python2.7/dist-packages/netapp_lib

# ansible-config dump --only-changed
COLLECTIONS_PATHS(/etc/ansible/ansible.cfg) = [u'/etc/ansible/collections', u'/etc/ansible/collections', u'/etc/ansible/collections/ansible_collections']
DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = explicit
DEFAULT_REMOTE_USER(/etc/ansible/ansible.cfg) = ansible
DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /etc/ansible/.ansible_vault
DEPRECATION_WARNINGS(/etc/ansible/ansible.cfg) = False
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
OS / ENVIRONMENT
ONTAP: NetApp Release 9.7
OS: Linux ansible01 4.15.0-54-generic #58-Ubuntu SMP
STEPS TO REPRODUCE

Call this task twice, assuming that you have a dummy volume/export_policy already

    - name: Create Qtree
      na_ontap_qtree:
        hostname: "{{ hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"
        state: "present"
        name: "tq1"
        flexvol_name: "testqtree1"
        # Workaround: disable na_ontap_qtree line 186 (unix_permissions = KeyError mode)
        export_policy: "nfs-testqtree1"
        security_style: "ntfs"
        oplocks: enabled
        unix_permissions: ""
        vserver: "svm_test"
EXPECTED RESULTS

On the first run all seems OK.
On the second run I expected it to run without error, as nothing changed,
but I get the module error below.
KeyError: 'No element by given name mode.'

ACTUAL RESULTS
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'No element by given name mode.'
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1608029769.31-131028-78894809548122/AnsiballZ_na_ontap_qtree.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1608029769.31-131028-78894809548122/AnsiballZ_na_ontap_qtree.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1608029769.31-131028-78894809548122/AnsiballZ_na_ontap_qtree.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible.modules.storage.netapp.na_ontap_qtree', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/usr/lib/python2.7/runpy.py\", line 188, in run_module\n    fname, loader, pkg_name)\n  File \"/usr/lib/python2.7/runpy.py\", line 82, in _run_module_code\n    mod_name, mod_fname, mod_loader, pkg_name)\n  File \"/usr/lib/python2.7/runpy.py\", line 72, in _run_code\n    exec code in run_globals\n  File \"/tmp/ansible_na_ontap_qtree_payload_2mH9bD/ansible_na_ontap_qtree_payload.zip/ansible/modules/storage/netapp/na_ontap_qtree.py\", line 303, in <module>\n  File \"/tmp/ansible_na_ontap_qtree_payload_2mH9bD/ansible_na_ontap_qtree_payload.zip/ansible/modules/storage/netapp/na_ontap_qtree.py\", line 299, in main\n  File \"/tmp/ansible_na_ontap_qtree_payload_2mH9bD/ansible_na_ontap_qtree_payload.zip/ansible/modules/storage/netapp/na_ontap_qtree.py\", line 273, in apply\n  File \"/tmp/ansible_na_ontap_qtree_payload_2mH9bD/ansible_na_ontap_qtree_payload.zip/ansible/modules/storage/netapp/na_ontap_qtree.py\", line 186, in get_qtree\n  File \"/usr/local/lib/python2.7/dist-packages/netapp_lib/api/zapi/zapi.py\", line 489, in __getitem__\n    raise KeyError('No element by given name %s.' % key)\nKeyError: 'No element by given name mode.'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

na_ontap_user module - create snmpv3 user (usm)

SUMMARY

I would like to create a user for snmpv3 (usm), but the module does not support parameters like EngineID, authentication protocol, privacy protocol and remote-switch-ipaddress.
Could you add these parameters?
Thank you.
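
A sketch of what such a task could look like (engine_id, authentication_protocol, privacy_protocol and remote_switch_ipaddress are proposed parameters, not currently supported; values are illustrative):

- name: Create an SNMPv3 (usm) user (proposed)
  na_ontap_user:
    state: present
    name: snmpv3user
    vserver: "{{ vserver }}"
    applications: snmp
    authentication_method: usm
    role_name: admin
    engine_id: 8000031505b5971E8C5B9A94   # proposed parameter
    authentication_protocol: sha          # proposed parameter
    privacy_protocol: aes128              # proposed parameter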

ISSUE TYPE
  • Bug Report / Feature Request
COMPONENT NAME

na_ontap_user module

ANSIBLE VERSION
ansible 2.9.7

na_ontap_command requirements are missing console permissions

SUMMARY

The module requirements listed at https://docs.ansible.com/ansible/latest/modules/na_ontap_command_module.html#na-ontap-command-module do not mention that the user needs console access - admin is not enough.

See ansible/ansible#65324 for the original report.

ISSUE TYPE
  • Documentation Report
COMPONENT NAME

na_ontap_command

ANSIBLE VERSION
ansible 2.9.1
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Aug 29 2016, 10:12:21) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)]
CONFIGURATION
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
OS / ENVIRONMENT
ADDITIONAL INFORMATION
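
For reference, granting console access can itself be automated with na_ontap_user; a sketch (user, role and vserver names are illustrative):

- name: Grant console application access to the automation user
  na_ontap_user:
    state: present
    name: ansible_user
    applications: console
    authentication_method: password
    role_name: admin
    vserver: cluster1
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"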

na_ontap_broadcast_domain_ports module falsely supports check mode

SUMMARY

Edit: Further investigation shows that this might be a broken module.

Running in check mode shows "0 changes" in this task...

TASK [na_ontap_cluster_config : Remove ports from Default broadcast domain] ****

and "1 change" in this task...

TASK [na_ontap_cluster_config : create broadcast domain] ***********************
changed: [localhost] => (item={u'mtu': 1500, u'ipspace': u'Default', u'ports': u'cluster01-01:e0M,cluster01-01:e0i,cluster01-02:e0M,cluster01-02:e0i', u'name': u'Default'})

When the playbook runs, I end up seeing the below.

TASK [na_ontap_cluster_config : Remove ports from Default broadcast domain] ****
failed: [localhost] (item={u'node': u'cluster01-01', u'autonegotiate': False, u'flowcontrol': u'none', u'port': u'e0M', u'mtu': 1500}) => {"ansible_loop_var": "item", "changed": false, "item": {"autonegotiate": false, "flowcontrol": "none", "mtu": 1500, "node": "cluster01-01", "port": "e0M"}, "msg": "Error deleting port for broadcast domain Default: NetApp API failed. Reason - 13001:Port \"cluster01-01:e0M\" 
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: NaApiError: NetApp API failed. Reason - 13001:Port "cluster01-02:e0M" cannot be used because it is currently the home port or current port of a LIF. To view LIFs with a specified home port, use the "network interface show -home-node <nodename> -home-port {<netport>|<ifgrp>}" command. To view LIFs with a specified current port, use the "network interface show -curr-node <nodename> -curr-port {<netport>|<ifgrp>}" command.
failed: [localhost] (item={u'node': u'cluster01-02', u'autonegotiate': False, u'flowcontrol': u'none', u'port': u'e0M', u'mtu': 1500}) => {"ansible_loop_var": "item", "changed": false, "item": {"autonegotiate": false, "flowcontrol": "none", "mtu": 1500, "node": "cluster01-02", "port": "e0M"}, "msg": "Error deleting port for broadcast domain Default: NetApp API failed. Reason - 13001:Port \"cluster01-02:e0M\" cannot be used because it is currently the home port or curr…
ISSUE TYPE
  • Bug Report
COMPONENT NAME

ontap_cluster_config role.

na_ontap_broadcast_domain_ports module.

STEPS TO REPRODUCE

Try to modify a broadcast domain with the provided role.

EXPECTED RESULTS

I expect the playbook check mode to reflect behavior of a run.

ACTUAL RESULTS

Check mode shows nothing should be happening in the step where I see a failure.

na_ontap_info throws an unexpected error

SUMMARY

When I try to use na_ontap_info in a playbook, it always fails with an unexpected error

ISSUE TYPE
  • Bug Report
COMPONENT NAME

na_ontap_info

ANSIBLE VERSION
ansible 2.9.1
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Sep 26 2019, 13:23:47) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
CONFIGURATION
ANSIBLE_NOCOWS(env: ANSIBLE_NOCOWS) = True
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 50
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
OS / ENVIRONMENT

NetApp Release 9.3P14: Wed Jul 10 14:38:46 UTC 2019

STEPS TO REPRODUCE
  - name: this fails, open a bug when you get some time
    na_ontap_info:
      username: "{{ env_vars.netapp.username }}"
      password: "{{ env_vars.netapp.password }}"
      hostname: "{{ env_vars.netapp.hostname }}"
      validate_certs: no
    register: command_out
EXPECTED RESULTS

Get ontap info

ACTUAL RESULTS
TASK [this fails, open a bug when you get some time] ****************************************************************************************************************************************
task path: /etc/ansible/nkanov/netapp.yml:14
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1600417818.82-62050145132658 `" && echo ansible-tmp-1600417818.82-62050145132658="` echo /root/.ansible/tmp/ansible-tmp-1600417818.82-62050145132658 `" ) && sleep 0'
Using module file /root/.ansible/collections/ansible_collections/netapp/ontap/plugins/modules/na_ontap_info.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-9883BphQUU/tmp0gKihE TO /root/.ansible/tmp/ansible-tmp-1600417818.82-62050145132658/AnsiballZ_na_ontap_info.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1600417818.82-62050145132658/ /root/.ansible/tmp/ansible-tmp-1600417818.82-62050145132658/AnsiballZ_na_ontap_info.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 /root/.ansible/tmp/ansible-tmp-1600417818.82-62050145132658/AnsiballZ_na_ontap_info.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1600417818.82-62050145132658/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "/root/.ansible/tmp/ansible-tmp-1600417818.82-62050145132658/AnsiballZ_na_ontap_info.py", line 102, in <module>
    _ansiballz_main()
  File "/root/.ansible/tmp/ansible-tmp-1600417818.82-62050145132658/AnsiballZ_na_ontap_info.py", line 94, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "/root/.ansible/tmp/ansible-tmp-1600417818.82-62050145132658/AnsiballZ_na_ontap_info.py", line 40, in invoke_module
    runpy.run_module(mod_name='ansible_collections.netapp.ontap.plugins.modules.na_ontap_info', init_globals=None, run_name='__main__', alter_sys=True)
  File "/usr/lib64/python2.7/runpy.py", line 176, in run_module
    fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 82, in _run_module_code
    mod_name, mod_fname, mod_loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/tmp/ansible_na_ontap_info_payload_zoEHA1/ansible_na_ontap_info_payload.zip/ansible_collections/netapp/ontap/plugins/modules/na_ontap_info.py", line 1700, in <module>
  File "/tmp/ansible_na_ontap_info_payload_zoEHA1/ansible_na_ontap_info_payload.zip/ansible_collections/netapp/ontap/plugins/modules/na_ontap_info.py", line 1692, in main
  File "/tmp/ansible_na_ontap_info_payload_zoEHA1/ansible_na_ontap_info_payload.zip/ansible_collections/netapp/ontap/plugins/modules/na_ontap_info.py", line 1478, in get_all
  File "/tmp/ansible_na_ontap_info_payload_zoEHA1/ansible_na_ontap_info_payload.zip/ansible_collections/netapp/ontap/plugins/modules/na_ontap_info.py", line 1467, in send_ems_event
  File "/tmp/ansible_na_ontap_info_payload_zoEHA1/ansible_na_ontap_info_payload.zip/ansible_collections/netapp/ontap/plugins/module_utils/netapp.py", line 329, in get_cserver
  File "/tmp/ansible_na_ontap_info_payload_zoEHA1/ansible_na_ontap_info_payload.zip/ansible_collections/netapp/ontap/plugins/module_utils/netapp.py", line 299, in get_cserver_zapi
netapp_lib.api.zapi.zapi.NaApiError: NetApp API failed. Reason - Unexpected error:

fatal: [localhost]: FAILED! => {
    "changed": false,
    "module_stderr": "Traceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1600417818.82-62050145132658/AnsiballZ_na_ontap_info.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1600417818.82-62050145132658/AnsiballZ_na_ontap_info.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1600417818.82-62050145132658/AnsiballZ_na_ontap_info.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible_collections.netapp.ontap.plugins.modules.na_ontap_info', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/usr/lib64/python2.7/runpy.py\", line 176, in run_module\n    fname, loader, pkg_name)\n  File \"/usr/lib64/python2.7/runpy.py\", line 82, in _run_module_code\n    mod_name, mod_fname, mod_loader, pkg_name)\n  File \"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code\n    exec code in run_globals\n  File \"/tmp/ansible_na_ontap_info_payload_zoEHA1/ansible_na_ontap_info_payload.zip/ansible_collections/netapp/ontap/plugins/modules/na_ontap_info.py\", line 1700, in <module>\n  File \"/tmp/ansible_na_ontap_info_payload_zoEHA1/ansible_na_ontap_info_payload.zip/ansible_collections/netapp/ontap/plugins/modules/na_ontap_info.py\", line 1692, in main\n  File \"/tmp/ansible_na_ontap_info_payload_zoEHA1/ansible_na_ontap_info_payload.zip/ansible_collections/netapp/ontap/plugins/modules/na_ontap_info.py\", line 1478, in get_all\n  File \"/tmp/ansible_na_ontap_info_payload_zoEHA1/ansible_na_ontap_info_payload.zip/ansible_collections/netapp/ontap/plugins/modules/na_ontap_info.py\", line 1467, in send_ems_event\n  File \"/tmp/ansible_na_ontap_info_payload_zoEHA1/ansible_na_ontap_info_payload.zip/ansible_collections/netapp/ontap/plugins/module_utils/netapp.py\", line 329, in get_cserver\n  File \"/tmp/ansible_na_ontap_info_payload_zoEHA1/ansible_na_ontap_info_payload.zip/ansible_collections/netapp/ontap/plugins/module_utils/netapp.py\", line 299, in get_cserver_zapi\nnetapp_lib.api.zapi.zapi.NaApiError: NetApp API failed. Reason - Unexpected error:\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1
}

PLAY RECAP **********************************************************************************************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

na_ontap_command should support interactive commands

SUMMARY

na_ontap_command should support answering of interactive commands, similar to the prompt/answer options in cli_command module for network devices.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

na_ontap_command module

ADDITIONAL INFORMATION

There are ONTAP commands that always prompt for input, example: "system configuration backup settings set-password"
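
A sketch of the proposed interface, modeled on the prompt/answer options of cli_command (prompt and answer are proposed options, not currently supported):

- name: Set the configuration backup password (proposed)
  na_ontap_command:
    command: ['system configuration backup settings set-password']
    prompt: ['Enter the backup password:']    # proposed option
    answer: ['{{ backup_password }}']         # proposed option
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
    https: true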

na_ontap_cluster issue with pre-cluster APIs

We have detected a couple of issues with the na_ontap_cluster module (added in 2.6).

Firstly, the apply() function makes API calls to get licenses and for autosupport_log. However, those API calls fail if made prior to cluster setup i.e. before running the cluster-create API.

Secondly, the cluster-join API which is used to add additional nodes to an existing cluster is deprecated post ONTAP 9.2 and has been replaced with the cluster-node-add API.

We have written an updated module to use the new API and to only make the autosupport_log API call post cluster create. We have also removed the license components as installing a base license is no longer a mandatory part of creating a cluster post ONTAP 9.2, and therefore can be handled using the na_ontap_license module.

na_ontap_volume fails on already existing data-protection volume with no security-style set

SUMMARY

When creating a volume with type: data-protection, the volume is created and no security style is set, as expected. But rerunning the task checks for the existing security style, and since nothing is set it fails with:

raise KeyError('No element by given name %s.' % key)\nKeyError: 'No element by given name style.'\n",

pointing to:

ansible_collections/netapp/ontap/plugins/modules/na_ontap_volume.py\", line 667, in get_volume

This seems to affect all versions starting from 20.4.1

ISSUE TYPE
  • Bug Report
COMPONENT NAME

na_ontap_volume

ANSIBLE VERSION
ansible 2.9.7

na_ontap_vserver_create failover policy for iscsi lifs

SUMMARY

The na_ontap_vserver_create role creates LIFs for NFS without the proper failover_policy. The task currently uses:

failover_policy: "{{ 'local-only' if item.protocol.lower() is not search('iscsi') else omit }}"

This creates the interface with failover_policy 'local-only' for NFS/CIFS; it looks like there is an extra "not" in the condition. iSCSI LIFs must not have a failover_policy, because multipathing handles path failover.

Changing the line as follows makes it work correctly:
failover_policy: "{{ omit if item.protocol.lower() is search('iscsi') else 'system-defined' }}"
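
In context, the corrected LIF-creation task would look roughly like this (a sketch; the loop variable and other option values mirror the role's conventions and may differ in your copy):

- name: Create Vserver data LIF
  netapp.ontap.na_ontap_interface:
    state: present
    interface_name: "{{ item.name }}"
    vserver: "{{ vserver_name }}"
    home_node: "{{ item.node }}"
    home_port: "{{ item.port }}"
    address: "{{ item.address }}"
    netmask: "{{ item.netmask }}"
    protocols: "{{ item.protocol }}"
    failover_policy: "{{ omit if item.protocol.lower() is search('iscsi') else 'system-defined' }}"
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
  loop: "{{ vserver_lifs }}"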

ISSUE TYPE
  • Bug Report
COMPONENT NAME

na_ontap_vserver_create task list

EXPECTED RESULTS

A system-defined failover policy for non-iSCSI protocols (for example NFS/CIFS), and no failover policy for iSCSI LIFs.

wrong vserver in Create Intercluster Lif task

SUMMARY

The vserver value is wrong in the "- name: Create Intercluster Lif" task.

The line should be "vserver: Intercluster".
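A sketch of the corrected task, assuming the option names the role already uses (not the role's verbatim task; only the vserver value changes):

- name: Create Intercluster Lif
  netapp.ontap.na_ontap_interface:
    state: present
    interface_name: "{{ item.name }}"
    vserver: Intercluster
    role: intercluster
    home_node: "{{ item.node }}"
    home_port: "{{ item.port }}"
    address: "{{ item.address }}"
    netmask: "{{ item.netmask }}"
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
  loop: "{{ intercluster_lifs }}"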

ISSUE TYPE
  • Bug Report
COMPONENT NAME

na_ontap_cluster_config

EXPECTED RESULTS

intercluster lif created

ACTUAL RESULTS

Task fail with error.

I created the LIFs using the web UI, and the intercluster LIFs were associated with a vserver named Intercluster:

****-*****::> network interface show -lif inter_lif_0
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Intercluster
            inter_lif_0  up/up    x.y.z.v/24      ****-*****-01 
                                                                   a0a-212 true

Inclusion of netapp.ontap in Ansible 2.10

This collection will be included in Ansible 2.10 because it contains modules and/or plugins that were included in Ansible 2.9. Please review:

DEADLINE: 2020-08-18

The latest version of the collection available on August 18 will be included in Ansible 2.10.0 (except possibly for newer versions that differ only in the patch level; for details, see the roadmap). Please release version 1.0.0 of your collection by this date! If 1.0.0 does not exist, the same 0.x.y version will be used in all of Ansible 2.10 without updates, and your 1.x.y release will not be included until Ansible 2.11 (unless you request an exception at a community working group meeting and go through a demanding manual process to vouch for backwards compatibility... you want to avoid this!).

Follow semantic versioning rules

Your collection versioning must follow all semver rules. This means:

  • Patch level releases can only contain bugfixes;
  • Minor releases can contain new features, new modules and plugins, and bugfixes, but must not break backwards compatibility;
  • Major releases can break backwards compatibility.

Changelogs and Porting Guide

Your collection should provide data for the Ansible 2.10 changelog and porting guide. The changelog and porting guide are automatically generated from ansible-base, and from the changelogs of the included collections. All changes from the breaking_changes, major_changes, removed_features and deprecated_features sections will appear in both the changelog and the porting guide. You have two options for providing changelog fragments to include:

  1. If possible, use the antsibull-changelog tool, which uses the same changelog fragment format as the ansible/ansible repository (see the documentation); an example fragment is shown after this list.
  2. If you cannot use antsibull-changelog, you can provide the changelog in a machine-readable format as changelogs/changelog.yaml inside your collection (see the documentation of changelogs/changelog.yaml format).
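
For example, a changelog fragment for antsibull-changelog is a small YAML file placed under changelogs/fragments/; the filename and entry below are illustrative only:

# changelogs/fragments/0000-example-bugfix.yaml
bugfixes:
  - na_ontap_volume - fixed KeyError when checking the security style of an existing data-protection volume.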

If you cannot contribute to the integrated Ansible changelog using one of these methods, please provide a link to your collection's changelog by creating an issue in https://github.com/ansible-community/ansible-build-data/. If you do not provide changelogs/changelog.yaml or a link, users will not be able to find out what changed in your collection from the Ansible changelog and porting guide.

Make sure your collection passes the sanity tests

Run ansible-test sanity --docker -v in the collection with the latest ansible-base or stable-2.10 ansible/ansible checkout.

Keep informed

Be sure you're subscribed to:

Questions and Feedback

If you have questions or want to provide feedback, please see the Feedback section in the collection requirements.

(Internal link to keep track of issues: ansible-collections/overview#102)
