
ansible-dcnm's People

Contributors

allenrobel, dependabot[bot], dsx1123, kharicha, kirviswa, mikewiebe, mmudigon, nkshrishail, praveenramoorthy, rost-d


ansible-dcnm's Issues

Policy Description field is set to blank when modify policy is used

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Ansible Version and collection version

ansible 2.9.27
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/risarwar/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Sep 10 2021, 09:13:53) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]

DCNM version

  • V 11.5(3a)

Affected module(s)

  • dcnm_policy

Ansible Playbook

 - name: Configure Multiple Policies
   cisco.dcnm.dcnm_policy:
     fabric: "{{ fabric }}"
     deploy: false
     state: merged
     config:
       - switch:
           - ip: "{{ Switch_name }}"
             policies:
               - name: "{{ item.policy_id }}"
                 create_additional_policy: false
                 priority: "{{ item.priority | default(500) }}"
                 description: "{{ item.description }}"
                 policy_vars: "{{ item.Values | default([]) }}"
   register: policy_update_data
   loop: "{{ Policies }}"
   ignore_errors: yes

Debug Output

Expected Behavior

I expect this play to update the policy with the specified policy ID and not change fields that have not been modified.

Actual Behavior

The description field is set to null (when it was previously set upon creation of the policy). The same result is produced even when the description field is not specified.

Steps to Reproduce

References

Day 0 Easy fabric provisioning using ansible

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Description

Automated provisioning of Easy Fabric deployment.

We have a requirement, as part of a new push for network automation, that all configuration be deployed via automation wherever possible, reducing the need to manually configure DCNM from the GUI.

In our use case, we would like to automate day 0 provisioning of the network infrastructure for VXLAN EVPN, using DCNM to provision all the configuration via Ansible: from onboarding the network switches, to building a new fabric and selecting the Easy Fabric option for provisioning it. It would be good to have a module dedicated to setting all the fields exposed by the Easy Fabric template.

New or Affected modules(s):

dcnm_easy_fabric_provision

DCNM version

  • V 11.5.1 and above.

Potential ansible task config

(screenshots of the Easy Fabric template settings omitted)
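A purely hypothetical sketch, using the module name proposed above, of how such a task could look (every parameter shown is an illustrative placeholder for fields from the Easy Fabric template, not an existing API):

- name: Provision a new Easy Fabric (hypothetical)
  cisco.dcnm.dcnm_easy_fabric_provision:   # module name proposed in this request
    fabric: Site1-VXLAN
    state: merged
    config:
      bgp_as: 65001                        # illustrative Easy Fabric template fields
      anycast_gw_mac: 2020.0000.00aa
      replication_mode: multicast
      vpc_peer_link_vlan: 3600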

Within the fabric template, it would also be helpful to have an option for fields that take freeform text, such as the iBGP peer template configuration and the leaf peer template:
(screenshots omitted)

Similarly for the AAA configuration section:
(screenshots omitted)

Another helpful option would be the ability to select the border leafs and enable vPC fabric peering.

This would allow full day 0 provisioning of a DCNM fabric in minutes.

Short Name example needs to be updated

In the README example that adds the collection via the collections keyword so the short module name can be used, the task needs to be updated to actually use the short name.

Example in README section:

---
- hosts: dcnm_hosts
  gather_facts: false
  connection: httpapi

  collections:
    - cisco.dcnm

  tasks:
    - name: Merge a Switch
      cisco.dcnm.dcnm_inventory:
        ...parameters...

cisco.dcnm.dcnm_inventory should be shortened to dcnm_inventory:
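With that change, the task in the README example would read:

  tasks:
    - name: Merge a Switch
      dcnm_inventory:
        ...parameters...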

query failed with dcnm_interface module

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Ansible Version and collection version

ansible [core 2.12.2]
  config file = /root/.ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /root/virtualenv/ansible/lib/python3.9/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /root/virtualenv/ansible/bin/ansible
  python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
  jinja version = 3.0.2
  libyaml = True

DCNM version

  • V 11.5.3

Affected module(s)

  • dcnm_interface

Ansible Playbook

# Copy-paste your ansible playbook here
- name: query interface
  hosts: dcnm_cylon
  gather_facts: false
  collections:
    - cisco.dcnm
  tasks:
    - name: query interfaces from leaf1
      dcnm_interface:
        fabric: fabric-cylon
        state: query
        config:
          - switch:
            - ip: "172.25.74.61"

Debug Output

ansible-playbook [core 2.12.2]
  config file = /root/.ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /root/virtualenv/ansible/lib/python3.9/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /root/virtualenv/ansible/bin/ansible-playbook
  python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
  jinja version = 3.0.2
  libyaml = True
Using /root/.ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading collection cisco.dcnm from /root/.ansible/collections/ansible_collections/cisco/dcnm
[WARNING]: Collection cisco.dcnm does not support Ansible version 2.12.2
Loading callback plugin default of type stdout, v2.0 from /root/virtualenv/ansible/lib/python3.9/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: query_interface.yml ********************************************************************************************************************************************************************************************************************
Positional arguments: query_interface.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in query_interface.yml

PLAY [query interface] ***************************************************************************************************************************************************************************************************************************
Trying secret FileVaultSecret(filename='/root/.vault_pass') for vault_id=default
META: ran handlers
Loading collection ansible.netcommon from /root/.ansible/collections/ansible_collections/ansible/netcommon

TASK [query interfaces from leaf1] ***************************************************************************************************************************************************************************************************************
task path: /root/workspace/ansible/dcnm/GR/query_interface.yml:7
redirecting (type: connection) ansible.builtin.httpapi to ansible.netcommon.httpapi
<172.25.74.53> attempting to start connection
<172.25.74.53> using connection plugin ansible.netcommon.httpapi
Found ansible-connection at path /root/virtualenv/ansible/bin/ansible-connection
<172.25.74.53> local domain socket does not exist, starting it
<172.25.74.53> control socket path is /root/.ansible/pc/6bff1a41bb
<172.25.74.53> redirecting (type: connection) ansible.builtin.httpapi to ansible.netcommon.httpapi
<172.25.74.53> Loading collection ansible.netcommon from /root/.ansible/collections/ansible_collections/ansible/netcommon
<172.25.74.53> Loading collection cisco.dcnm from /root/.ansible/collections/ansible_collections/cisco/dcnm
<172.25.74.53> local domain socket listeners started successfully
<172.25.74.53> loaded API plugin ansible_collections.cisco.dcnm.plugins.httpapi.dcnm from path /root/.ansible/collections/ansible_collections/cisco/dcnm/plugins/httpapi/dcnm.py for network_os cisco.dcnm.dcnm
<172.25.74.53>
<172.25.74.53> local domain socket path is /root/.ansible/pc/6bff1a41bb
<172.25.74.53> ANSIBLE_NETWORK_IMPORT_MODULES: disabled
<172.25.74.53> ANSIBLE_NETWORK_IMPORT_MODULES: module execution time may be extended
<172.25.74.53> ESTABLISH LOCAL CONNECTION FOR USER: root
<172.25.74.53> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-local-757176tqu3yncw `"&& mkdir "` echo /root/.ansible/tmp/ansible-local-757176tqu3yncw/ansible-tmp-1647278612.521348-757180-274152109484785 `" && echo ansible-tmp-1647278612.521348-757180-274152109484785="` echo /root/.ansible/tmp/ansible-local-757176tqu3yncw/ansible-tmp-1647278612.521348-757180-274152109484785 `" ) && sleep 0'
Using module file /root/.ansible/collections/ansible_collections/cisco/dcnm/plugins/modules/dcnm_interface.py
<172.25.74.53> PUT /root/.ansible/tmp/ansible-local-757176tqu3yncw/tmp88789yfi TO /root/.ansible/tmp/ansible-local-757176tqu3yncw/ansible-tmp-1647278612.521348-757180-274152109484785/AnsiballZ_dcnm_interface.py
<172.25.74.53> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-local-757176tqu3yncw/ansible-tmp-1647278612.521348-757180-274152109484785/ /root/.ansible/tmp/ansible-local-757176tqu3yncw/ansible-tmp-1647278612.521348-757180-274152109484785/AnsiballZ_dcnm_interface.py && sleep 0'
<172.25.74.53> EXEC /bin/sh -c '/root/virtualenv/ansible/bin/python /root/.ansible/tmp/ansible-local-757176tqu3yncw/ansible-tmp-1647278612.521348-757180-274152109484785/AnsiballZ_dcnm_interface.py && sleep 0'
<172.25.74.53> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-local-757176tqu3yncw/ansible-tmp-1647278612.521348-757180-274152109484785/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "/root/.ansible/tmp/ansible-local-757176tqu3yncw/ansible-tmp-1647278612.521348-757180-274152109484785/AnsiballZ_dcnm_interface.py", line 107, in <module>
    _ansiballz_main()
  File "/root/.ansible/tmp/ansible-local-757176tqu3yncw/ansible-tmp-1647278612.521348-757180-274152109484785/AnsiballZ_dcnm_interface.py", line 99, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "/root/.ansible/tmp/ansible-local-757176tqu3yncw/ansible-tmp-1647278612.521348-757180-274152109484785/AnsiballZ_dcnm_interface.py", line 47, in invoke_module
    runpy.run_module(mod_name='ansible_collections.cisco.dcnm.plugins.modules.dcnm_interface', init_globals=dict(_module_fqn='ansible_collections.cisco.dcnm.plugins.modules.dcnm_interface', _modlib_path=modlib_path),
  File "/usr/lib/python3.9/runpy.py", line 210, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/lib/python3.9/runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/tmp/ansible_dcnm_interface_payload_3816sigc/ansible_dcnm_interface_payload.zip/ansible_collections/cisco/dcnm/plugins/modules/dcnm_interface.py", line 3037, in <module>
  File "/tmp/ansible_dcnm_interface_payload_3816sigc/ansible_dcnm_interface_payload.zip/ansible_collections/cisco/dcnm/plugins/modules/dcnm_interface.py", line 2986, in main
  File "/tmp/ansible_dcnm_interface_payload_3816sigc/ansible_dcnm_interface_payload.zip/ansible_collections/cisco/dcnm/plugins/modules/dcnm_interface.py", line 2946, in dcnm_translate_switch_info
  File "/tmp/ansible_dcnm_interface_payload_3816sigc/ansible_dcnm_interface_payload.zip/ansible_collections/cisco/dcnm/plugins/module_utils/network/dcnm/dcnm.py", line 230, in dcnm_get_ip_addr_info
TypeError: inet_pton() argument 2 must be str, not dict
fatal: [172.25.74.53]: FAILED! => {
    "changed": false,
    "module_stderr": "Traceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-local-757176tqu3yncw/ansible-tmp-1647278612.521348-757180-274152109484785/AnsiballZ_dcnm_interface.py\", line 107, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-local-757176tqu3yncw/ansible-tmp-1647278612.521348-757180-274152109484785/AnsiballZ_dcnm_interface.py\", line 99, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-local-757176tqu3yncw/ansible-tmp-1647278612.521348-757180-274152109484785/AnsiballZ_dcnm_interface.py\", line 47, in invoke_module\n    runpy.run_module(mod_name='ansible_collections.cisco.dcnm.plugins.modules.dcnm_interface', init_globals=dict(_module_fqn='ansible_collections.cisco.dcnm.plugins.modules.dcnm_interface', _modlib_path=modlib_path),\n  File \"/usr/lib/python3.9/runpy.py\", line 210, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib/python3.9/runpy.py\", line 97, in _run_module_code\n    _run_code(code, mod_globals, init_globals,\n  File \"/usr/lib/python3.9/runpy.py\", line 87, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_dcnm_interface_payload_3816sigc/ansible_dcnm_interface_payload.zip/ansible_collections/cisco/dcnm/plugins/modules/dcnm_interface.py\", line 3037, in <module>\n  File \"/tmp/ansible_dcnm_interface_payload_3816sigc/ansible_dcnm_interface_payload.zip/ansible_collections/cisco/dcnm/plugins/modules/dcnm_interface.py\", line 2986, in main\n  File \"/tmp/ansible_dcnm_interface_payload_3816sigc/ansible_dcnm_interface_payload.zip/ansible_collections/cisco/dcnm/plugins/modules/dcnm_interface.py\", line 2946, in dcnm_translate_switch_info\n  File \"/tmp/ansible_dcnm_interface_payload_3816sigc/ansible_dcnm_interface_payload.zip/ansible_collections/cisco/dcnm/plugins/module_utils/network/dcnm/dcnm.py\", line 230, in dcnm_get_ip_addr_info\nTypeError: inet_pton() argument 2 must be str, not dict\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1
}

PLAY RECAP ***************************************************************************************************************************************************************************************************************************************
172.25.74.53               : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

Expected Behavior

all interfaces should be returned

Actual Behavior

module crashed as shown above

Steps to Reproduce

References

support POAP when adding new switches

feature request

add POAP support to the dcnm_inventory module

current behavior

When adding new switches to a fabric, dcnm_inventory currently only supports registering switches by their management IP address; users could instead use DCNM's built-in POAP function to onboard new switches. This feature request is to add POAP support to the dcnm_inventory module.
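A hypothetical sketch of how POAP onboarding might be expressed under dcnm_inventory (the poap key and all of its fields below are illustrative placeholders for this request, not options that exist in the module today):

- name: Onboard a switch via POAP (hypothetical)
  cisco.dcnm.dcnm_inventory:
    fabric: fabric1
    state: merged
    config:
      - poap:                           # illustrative key, not an existing option
          - serial_number: FDO12345678  # hypothetical fields for the POAP workflow
            model: N9K-C93180YC-EX
            version: 9.3(7)
            hostname: leaf-101
            ip: 172.31.186.200
            role: leaf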

Fix documentation issues in modules

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Ansible Version and collection version

DCNM version

  • V 1.2.0

Affected module(s)

  • all modules

Ansible Playbook

# Copy-paste your ansible playbook here

Debug Output

Expected Behavior

Actual Behavior

Steps to Reproduce

References

Role [border_gateway] is not a valid role

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Ansible Version and collection version

ansible 2.10.5
  config file = /root/.ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /root/virtualenv/ansible/lib/python3.7/site-packages/ansible
  executable location = /root/virtualenv/ansible/bin/ansible
  python version = 3.7.3 (default, Apr  3 2019, 05:39:12) [GCC 8.3.0]

DCNM version

  • V 11.5.1

Affected module(s)

  • dcnm_inventory

Ansible Playbook

# Copy-paste your ansible playbook here
---
- name: add switches to fabric
  hosts: dcnm
  gather_facts: false
  collections:
    - cisco.dcnm
  tasks:
    - name: add switches to fabric
      dcnm_inventory:
        fabric: "{{ lab109_fabric_1.fabric }}"
        config:
          - seed_ip: 172.31.186.152
            user_name: admin
            password: xxxxx
            max_hops: 0
            role: border_gateway
            preserve_config: false

Debug Output

https://gist.github.com/dsx1123/153690db572ff4fff9e3ed52ace027e9

Checked the API call; the role used in the API is actually "border gateway". However, if the role is set to this string, it does not pass validation. Only tested border gateway, but I suspect the same issue applies to all other multi-word roles.

Expected Behavior

Switch should be added to fabric as border gateway

Actual Behavior

see debug output

Steps to Reproduce

Execute the playbook above to register the switch to an existing fabric.

References

DELETE network failed when network is attached

Ansible version

ansible 2.10.5
  config file = /root/.ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /root/virtualenv/ansible/lib/python3.7/site-packages/ansible
  executable location = /root/virtualenv/ansible/bin/ansible
  python version = 3.7.3 (default, Apr  3 2019, 05:39:12) [GCC 8.3.0]

DCNM Version

11.5.1

playbook

---
- name: delete ansible vrf and network
  hosts: dcnm
  gather_facts: false
  collections:
    - cisco.dcnm
  tasks:
    - name: TASK 01 DELETE NETWORK ANSIBLE
      dcnm_network:
        fabric: fabric-demo
        config:
          - net_name: ANSIBLE_NETWORK
        state: deleted

    - name: TASK 02 DELETE VRF ANSIBLE
      dcnm_vrf:
        fabric: fabric-demo
        config:
          - vrf_name: ANSIBLE
        state: deleted

debug output

debug output

Description

When deleting a network that is currently deployed, the deletion fails.
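A possible workaround, assuming the failure is caused by the network still being attached, is to detach it in a first task and delete it in a second (a sketch only; the attach-handling semantics of the replaced state should be verified against the module documentation):

- name: detach network before deleting (workaround sketch)
  dcnm_network:
    fabric: fabric-demo
    state: replaced
    config:
      - net_name: ANSIBLE_NETWORK
        attach: []           # no attachments listed, so existing ones are detached
        deploy: true

- name: delete the detached network
  dcnm_network:
    fabric: fabric-demo
    state: deleted
    config:
      - net_name: ANSIBLE_NETWORK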

Setting deploy to false still pushes the configuration to the switch

I am trying to ensure that any time I configure DCNM policies on a switch, I can manually re-validate the configuration on all switches before running Save & Deploy.

Following is my playbook.

---
- hosts: dcnm
  connection: ansible.netcommon.httpapi
  gather_facts: false
  tasks:
    - name: "Create different policies"
      cisco.dcnm.dcnm_policy:
        fabric: "{{ ansible_it_fabric }}"
        config:
          # deploy: false
          # state: merged
          - switch:
              - ip: "{{ ansible_switch1 }}"
                policies:
                  - create_additional_policy: false
                    name: feature_bfd
            deploy: false
            state: merged
      no_log: false
      vars:
        ansible_command_timeout: 1000
        ansible_connect_timeout: 1000
  vars:
    ansible_it_fabric: dcnm-fabric
    ansible_switch1: "10.1.1.1"
    ansible_switch2: "10.1.1.2"

==========================================

I have tried false and I have tried no.

ansible-galaxy collection list | grep dcnm
cisco.dcnm 1.1.0

ansible --version
ansible 2.10.8
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]

Would you be able to advise if this is a bug in the latest version please?
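For comparison, the module's documented examples (and the other playbooks in this issue list) place deploy and state as top-level arguments of dcnm_policy rather than inside a config entry; a minimal sketch of that layout, in case the per-config placement is being ignored:

- name: Create different policies (deploy at module level)
  cisco.dcnm.dcnm_policy:
    fabric: "{{ ansible_it_fabric }}"
    deploy: false
    state: merged
    config:
      - name: feature_bfd
        create_additional_policy: false
      - switch:
          - ip: "{{ ansible_switch1 }}"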

maximum recursion depth exceeded error when fabric name does not exist

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Ansible Version and collection version

ansible [core 2.11.8]  (detached HEAD e40051f7c0) last updated 2022/02/10 10:36:01 (GMT -400)
  config file = None
  configured module search path = ['/Users/mwiebe/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /Users/mwiebe/Projects/Ansible/nxos_collection_latest/ansible/lib/ansible
  ansible collection location = /Users/mwiebe/.ansible/collections:/usr/share/ansible/collections
  executable location = /Users/mwiebe/Projects/Ansible/nxos_collection_latest/ansible/bin/ansible
  python version = 3.8.0 (default, Feb 17 2020, 13:58:20) [Clang 11.0.0 (clang-1100.0.33.17)]
  jinja version = 2.11.2
  libyaml = False

DCNM version

  • V 12.0.2f

Affected module(s)

All modules

Ansible Playbook

    - name: MERGED Create L3 network along with all dhcp options
      cisco.dcnm.dcnm_network:
        fabric: fabric_name_does_not_exist
        state: query

Expected Behavior

Playbook run should fail gracefully with a meaningful error message to indicate that the fabric does not exist.

Actual Behavior

ansible.module_utils.connection.ConnectionError: ['Error on attempt to connect and authenticate with DCNM controller: maximum recursion depth exceeded', 'Error on attempt to connect and authenticate with NDFC controller: maximum recursion depth exceeded', 'Error on attempt to connect and authenticate with NDFC controller: maximum recursion depth exceeded']

Steps to Reproduce

Run the playbook above with an invalid fabric name

Ansible Automation Hub Certification and dcnm contact information

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Description

The Ansible Partner Engineering team has received the Cisco dcnm v2.0.1 collection on Automation Hub. Before approval, there are a number of errors in the import log that need to be resolved. The log can be viewed on Automation Hub. The log should present solutions and/or explanations for all errors shown. We can provide recommendations on how to resolve these issues as well. For direct contact with the Ansible PE team to discuss recommendations for the dcnm collection, please reach out to [email protected]. You can also respond to this issue with contact information and we will reach out.

New or Affected modules(s):

DCNM version

  • V 2.0.1

Potential ansible task config

References

Ansible playbook for adding policy template with argument values does not get configured on DCNM

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Ansible Version and collection version

ansible 2.10.8
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]

ansible-galaxy collection list | grep dcnm
cisco.dcnm 1.1.0

DCNM version

  • V 11.5.1

Affected module(s)

  • cisco.dcnm.dcnm_policy

Ansible Playbook

---

- hosts: dcnm
  gather_facts: false
  connection: ansible.netcommon.httpapi

  vars:
    ansible_it_fabric: dcnm-fabric
    ansible_switch1: 10.20.20.1
    ansible_switch2: 10.20.20.2

  tasks:
    - name: Create policy including required variables
      cisco.dcnm.dcnm_policy:
        fabric: "{{ ansible_it_fabric }}"
        deploy: false
        state: merged
        config:
          - name: add_ip_prefix_list              # This must be a valid template name
            create_additional_policy: false  # Do not create a policy if it already exists
            priority: 101
            policy_vars:
              PREFIX_LIST_NAME: ansible-prefix-list
              ACL_NAME: 10.122.84.0/24
              SEQ_NUM: 20
              PREFIX_LIST_ACTION: permit

    
          - switch:
              - ip: "{{ ansible_switch1 }}"

      vars:
          ansible_command_timeout: 1000
          ansible_connect_timeout: 1000

Debug Output

https://gist.github.com/micruzz82/15a0992561c4584f8b962abe2305b32a

Expected Behavior


This is the new custom template created in the template library.

The policy template is for the device.

Following are the variables for template policy "add_ip_prefix_list"

                    "ACL_NAME": "10.122.84.0/24",
                    "PREFIX_LIST_ACTION": "permit",
                    "PREFIX_LIST_NAME": "ansible-prefix-list",
                    "SEQ_NUM": 20

As per the documentation example, once the playbook is run, it should create a new policy on the switch.

Actual Behavior

The playbook runs without any errors, but nothing is configured on DCNM for the switch that needs to be configured.

Steps to Reproduce

Step 1)
Create a new template in template library: "add_ip_prefix_list"

Add following to template:

##template variables

@(DisplayName="Prefix List Name", Description="Name of Prefix List with max. 63 characters, example PL-ICE-")
string PREFIX_LIST_NAME {
maxLength = 63;
};

@(DisplayName="Prefix List Action", Description="Prefix List permit or deny operation")
enum PREFIX_LIST_ACTION {
validValues=permit,deny;
defaultValue=permit;
};

@(DisplayName="Prefix List Sequence Number", Description="Prefix List match sequence number, example 10")
integer SEQ_NUM {
min = 0;
max = 65535;
defaultValue=10;
};

@(DisplayName="ACL Name", Description="Name of ACL with max. 63 characters, example 10.x.x.x/x or 10.x.x.x/x le xx")
string ACL_NAME {
maxLength = 63;
};

##template content

ip prefix-list $$PREFIX_LIST_NAME$$ seq $$SEQ_NUM$$ $$PREFIX_LIST_ACTION$$ $$ACL_NAME$$

Step 2)
Configure the Ansible playbook for the fabric as documented above.

Step 3)
Execute the playbook and verify whether the policy has been applied to the switch.

References

query parameter for vrf and network does not provide information

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Ansible Version and collection version

ansible 2.10.10
collection version 1.2.1

DCNM version

  • V 11.5.1

Affected module(s)

  • dcnm_vrf
  • dcnm_module

Ansible Playbook

---

- hosts: dcnm_controllers
  gather_facts: false

  vars:
    fabric: DevNet_Fabric

  tasks:
    - name: Query VRF deployed from Ansible
      cisco.dcnm.dcnm_vrf:
        fabric: "{{ fabric }}"
        state: query
        config:
        - vrf_name: Ansible_VRF
          vrf_template: Default_VRF_Universal
          vrf_extension_template: Default_VRF_Extension_Universal

Debug Output

Expected Behavior

I'd expect the query to hit the specific URI for the VRF or network and return its configured parameters, similar to what is returned from /top-down/fabrics/{fabric}/vrfs/{vrf-name}:

{
  "fabric": "DevNet_Fabric",
  "vrfName": "Ansible_VRF",
  "vrfTemplate": "Default_VRF_Universal",
  "vrfExtensionTemplate": "Default_VRF_Extension_Universal",
  "vrfTemplateConfig": "{\"advertiseDefaultRouteFlag\":\"true\",\"vrfVlanId\":\"\",\"isRPExternal\":\"false\",\"bgpPassword\":\"\",\"vrfSegmentId\":\"50000\",\"multicastGroup\":\"\",\"configureStaticDefaultRouteFlag\":\"true\",\"advertiseHostRouteFlag\":\"false\",\"trmEnabled\":\"false\",\"trmBGWMSiteEnabled\":\"false\",\"asn\":\"65001\",\"nveId\":\"1\",\"vrfName\":\"Ansible_VRF\"}",
  "tenantName": null,
  "vrfId": 50000,
  "serviceVrfTemplate": null,
  "source": null
}

Actual Behavior

Return looks similar to this

ok: [dcnm] => {
    "changed": false,
    "diff": [],
    "invocation": {
        "module_args": {
            "check_mode": false,
            "config": [
                {
                    "vrf_extension_template": "Default_VRF_Extension_Universal",
                    "vrf_name": "Ansible_VRF",
                    "vrf_template": "Default_VRF_Universal"
                }
            ],
            "fabric": "DevNet_Fabric",
            "state": "query"
        }
    },
    "response": []
}

The same happens when querying a network, just with the network mentioned instead; no details about the VRF/network are given. This differs from the query state of other modules (such as templates and policy), which actually return information about the item.

Steps to Reproduce

Run a query playbook against a pre-configured network or VRF

References

support enable/disable interface with dcnm_interface module

feature request

support enable/disable interface with dcnm_interface module

current behavior

dcnm_interface is currently only used to provision interfaces; it cannot shut down or bring up ('no shutdown') a selected interface. This feature request is to add a new parameter under config to mark a selected interface as enabled (no shutdown) or disabled (shutdown).
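A hypothetical sketch of what the requested parameter could look like (the enabled flag is illustrative only and is not an existing module option):

- name: Shut down a selected interface (hypothetical)
  cisco.dcnm.dcnm_interface:
    fabric: fabric1
    state: merged
    config:
      - name: Ethernet1/10
        type: eth
        switch:
          - "172.31.186.152"
        enabled: false        # illustrative flag: false = shutdown, true = no shutdown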

Native vlan

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Description

New or Affected modules(s):

  • dcnm_XXXXX

DCNM version

  • V 11.5.1

Potential ansible task config

# Copy-paste your ansible playbook

References

Additional context

when the ansible server is using a proxy with authentication, dcnm_rest fails with an exception

ansible version

ansible-playbook 2.10.3
config file = /root/.ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /root/virtualenv/ansible/lib/python3.7/site-packages/ansible
executable location = /root/virtualenv/ansible/bin/ansible-playbook
python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0]

example of playbook

---
- name: create networks
  hosts: dcnm
  gather_facts: false
  tasks:
    - name: create network network_app
      cisco.dcnm.dcnm_rest:
        method: POST
        path: /rest/top-down/fabrics/fabric1/networks
        json_data: |
          {
            "fabric":"fabric1",
            "vrf":"vrf_red",
            "networkName":"network_app",
            "networkId":"30010",
            "networkTemplateConfig":"{\"gatewayIpAddress\":\"10.3.4.1/24\", \"vlanId\":\"\"}",
            "networkTemplate":"Default_Network_Universal",
            "networkExtensionTemplate":"Default_Network_Extension_Universal"
          }

      register: result

    - name: next available segments
      debug:
        msg: "{{ result }}"

Result

ansible-playbook 2.10.3
  config file = /root/.ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /root/virtualenv/ansible/lib/python3.7/site-packages/ansible
  executable location = /root/virtualenv/ansible/bin/ansible-playbook
  python version = 3.7.3 (default, Apr  3 2019, 05:39:12) [GCC 8.3.0]
Using /root/.ansible.cfg as config file
host_list declined parsing /root/workspace/ansible/dcnm/inventory/dcnm/host as it did not pass its verify_file() method
script declined parsing /root/workspace/ansible/dcnm/inventory/dcnm/host as it did not pass its verify_file() method
auto declined parsing /root/workspace/ansible/dcnm/inventory/dcnm/host as it did not pass its verify_file() method
Parsed /root/workspace/ansible/dcnm/inventory/dcnm/host inventory source with ini plugin
redirecting (type: action) cisco.dcnm.dcnm_rest to cisco.dcnm.dcnm
redirecting (type: callback) ansible.builtin.profile_tasks to ansible.posix.profile_tasks
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: dcnm_rest.yml *******************************************************************************************************************************************************************************************
1 plays in dcnm_rest.yml

PLAY [create networks] ********************************************************************************************************************************************************************************************
META: ran handlers
redirecting (type: action) cisco.dcnm.dcnm_rest to cisco.dcnm.dcnm

TASK [create network network_app] *********************************************************************************************************************************************************************************
task path: /root/workspace/ansible/dcnm/dcnm_rest.yml:6
Tuesday 15 December 2020  11:46:38 -0800 (0:00:00.047)       0:00:00.047 ******
redirecting (type: connection) ansible.builtin.httpapi to ansible.netcommon.httpapi
redirecting (type: action) cisco.dcnm.dcnm_rest to cisco.dcnm.dcnm
<172.25.74.53> ESTABLISH LOCAL CONNECTION FOR USER: root
<172.25.74.53> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-local-15572318wdfpm `"&& mkdir "` echo /root/.ansible/tmp/ansible-local-15572318wdfpm/ansible-tmp-1608061600.0660121-15577-281150431964816 `" && echo ansible-tmp-1608061600.0660121-15577-281150431964816="` echo /root/.ansible/tmp/ansible-local-15572318wdfpm/ansible-tmp-1608061600.0660121-15577-281150431964816 `" ) && sleep 0'
<172.25.74.53> Attempting python interpreter discovery
<172.25.74.53> EXEC /bin/sh -c 'echo PLATFORM; uname; echo FOUND; command -v '"'"'/usr/bin/python'"'"'; command -v '"'"'python3.7'"'"'; command -v '"'"'python3.6'"'"'; command -v '"'"'python3.5'"'"'; command -v '"'"'python2.7'"'"'; command -v '"'"'python2.6'"'"'; command -v '"'"'/usr/libexec/platform-python'"'"'; command -v '"'"'/usr/bin/python3'"'"'; command -v '"'"'python'"'"'; echo ENDFOUND && sleep 0'
<172.25.74.53> EXEC /bin/sh -c '/usr/bin/python && sleep 0'
Using module file /root/.ansible/collections/ansible_collections/cisco/dcnm/plugins/modules/dcnm_rest.py
<172.25.74.53> PUT /root/.ansible/tmp/ansible-local-15572318wdfpm/tmpe75w5n8p TO /root/.ansible/tmp/ansible-local-15572318wdfpm/ansible-tmp-1608061600.0660121-15577-281150431964816/AnsiballZ_dcnm_rest.py
<172.25.74.53> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-local-15572318wdfpm/ansible-tmp-1608061600.0660121-15577-281150431964816/ /root/.ansible/tmp/ansible-local-15572318wdfpm/ansible-tmp-1608061600.0660121-15577-281150431964816/AnsiballZ_dcnm_rest.py && sleep 0'
<172.25.74.53> EXEC /bin/sh -c '/usr/bin/python3 /root/.ansible/tmp/ansible-local-15572318wdfpm/ansible-tmp-1608061600.0660121-15577-281150431964816/AnsiballZ_dcnm_rest.py && sleep 0'
<172.25.74.53> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-local-15572318wdfpm/ansible-tmp-1608061600.0660121-15577-281150431964816/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "/root/.ansible/tmp/ansible-local-15572318wdfpm/ansible-tmp-1608061600.0660121-15577-281150431964816/AnsiballZ_dcnm_rest.py", line 102, in <module>
    _ansiballz_main()
  File "/root/.ansible/tmp/ansible-local-15572318wdfpm/ansible-tmp-1608061600.0660121-15577-281150431964816/AnsiballZ_dcnm_rest.py", line 94, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "/root/.ansible/tmp/ansible-local-15572318wdfpm/ansible-tmp-1608061600.0660121-15577-281150431964816/AnsiballZ_dcnm_rest.py", line 40, in invoke_module
    runpy.run_module(mod_name='ansible_collections.cisco.dcnm.plugins.modules.dcnm_rest', init_globals=None, run_name='__main__', alter_sys=True)
  File "/usr/lib/python3.7/runpy.py", line 205, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/lib/python3.7/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmp/ansible_cisco.dcnm.dcnm_rest_payload_d28wkh9t/ansible_cisco.dcnm.dcnm_rest_payload.zip/ansible_collections/cisco/dcnm/plugins/modules/dcnm_rest.py", line 103, in <module>
  File "/tmp/ansible_cisco.dcnm.dcnm_rest_payload_d28wkh9t/ansible_cisco.dcnm.dcnm_rest_payload.zip/ansible_collections/cisco/dcnm/plugins/modules/dcnm_rest.py", line 96, in main
TypeError: '>=' not supported between instances of 'NoneType' and 'int'
fatal: [172.25.74.53]: FAILED! => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "module_stderr": "Traceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-local-15572318wdfpm/ansible-tmp-1608061600.0660121-15577-281150431964816/AnsiballZ_dcnm_rest.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-local-15572318wdfpm/ansible-tmp-1608061600.0660121-15577-281150431964816/AnsiballZ_dcnm_rest.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-local-15572318wdfpm/ansible-tmp-1608061600.0660121-15577-281150431964816/AnsiballZ_dcnm_rest.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible_collections.cisco.dcnm.plugins.modules.dcnm_rest', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/usr/lib/python3.7/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib/python3.7/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/usr/lib/python3.7/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_cisco.dcnm.dcnm_rest_payload_d28wkh9t/ansible_cisco.dcnm.dcnm_rest_payload.zip/ansible_collections/cisco/dcnm/plugins/modules/dcnm_rest.py\", line 103, in <module>\n  File \"/tmp/ansible_cisco.dcnm.dcnm_rest_payload_d28wkh9t/ansible_cisco.dcnm.dcnm_rest_payload.zip/ansible_collections/cisco/dcnm/plugins/modules/dcnm_rest.py\", line 96, in main\nTypeError: '>=' not supported between instances of 'NoneType' and 'int'\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1
}

expected result

The module should fail with a meaningful error, such as 407 Proxy Authentication Required.

Network Attach for Orphan Ports

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

ansible [core 2.11.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/usr/share/ansible']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /Users/robdwyer/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.5 (default, May 4 2021, 03:36:27) [Clang 12.0.0 (clang-1200.0.32.29)]
jinja version = 3.0.1
libyaml = True

ansible-galaxy [core 2.11.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/usr/share/ansible']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /Users/robdwyer/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-galaxy
python version = 3.9.5 (default, May 4 2021, 03:36:27) [Clang 12.0.0 (clang-1200.0.32.29)]
jinja version = 3.0.1
libyaml = True

DCNM version

  • 11.5.3

Affected module(s)

  • cisco.dcnm 2.0.1

Ansible Playbook

- name: Merge l3 networks no dhcp
  cisco.dcnm.dcnm_network:
    fabric: fabric_01
    state: merged
    config:
      - net_name: "{{ item.name }}"
        vrf_name: "{{ item.vrf }}"
        net_id: "{{ item.nid }}"
        net_template: Default_Network_Universal
        net_extension_template: Default_Network_Extension_Universal
        vlan_id: "{{ item.vid }}"
        gw_ip_subnet: "{{ item.gw_sub }}"
        attach:
          - ip_address: "{{ item.switch_name }}"
            ports: "{{ item.port_list }}"
            deploy: true
          - ip_address: "{{ item.swt_peer }}"
            ports: []
            deploy: true
        deploy: true
  loop: "{{ networks }}"
  loop_control:
    extended: yes

Debug Output

https://gist.github.com/dwyerr/e0016969eb6e429bed9dd014ffbd12df

Expected Behavior

Module should be able to attach 'orphan' ports to a network.

Actual Behavior

DCNM is returning a 200 but with an error message - 'No vPC Peer' and no changes are pushed.

"response": [
    {
        "DATA": {
            "ldc-vl1780-[9V15LFHVQ6L/pod01_leaf01]": "Attach Response : Failed : No VPC Peer "
        },
        "MESSAGE": "OK",
        "METHOD": "POST",
        "REQUEST_PATH": "https://10.122.41.138:443/rest/top-down/fabrics/humana_fabric_01/networks/attachments",
        "RETURN_CODE": 200
    },

Steps to Reproduce

Try to attach a port to a network on only a single peer in a vPC pair.

References

Add SVI interface type to dcnm_interface

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Description

For non-VXLAN fabrics, the ability to create an SVI is crucial for building out traditional (or classical) ethernet architectures.

And, of course, support for optionally configuring HSRP on the SVI as well.

New or Affected modules(s):

  • dcnm_interface

DCNM version

  • V 11.5.1

Potential ansible task config

    - name: Create SVI VLAN10 on Host (Source of Truth)
      dcnm_interface:
        fabric: external-fabric
        state: replaced
        config:
          - name: vlan10
            type: svi
            switch: "core1"
            deploy: false
            profile:
              admin_state: true
              interface_vrf: default
              ip: 192.168.10.1
              netmask_length: 24
              mtu: jumbo
              routing_tag: 12345
              disable_ip_redirects: true
              description: "SVI for VLAN 10"
              hsrp:
                ip: 192.168.10.254
                group: 1
                version: 1
                priority: 100
                preempt: true
                virtual_mac: lee7.dead.beef

References

None.


vrf lite configuration does not honor the auto deploy both flag

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Ansible Version and collection version

DCNM version

  • V 11.5.1

Affected module(s)

  • dcnm_vrf

Ansible Playbook

# Copy-paste your anisble playbook here 
---
- name: Configure external connectivity
  hosts: dcnm
  gather_facts: false
  vars_files:
    - vars.yml
  collections:
    - cisco.dcnm
  tasks:
    - name: external | attach vrf blue to border leaf {{ border[0].name }}
      dcnm_vrf:
        fabric: "{{ fabric }}"
        config:
          - vrf_name: blue
            attach:
              - ip_address: "{{ border[0].ip }}"
                vrf_lite:
                  - peer_vrf: blue

And the VRF Lite configuration of the fabric is set as below:
(screenshot of the fabric VRF Lite settings omitted)

Debug Output

debug output

Expected Behavior

The external fabric connected to the border should also be deployed.

Actual Behavior

external fabric is not deployed

Steps to Reproduce

  1. Set the VRF Lite setting of the fabric as below
    (screenshot omitted)
  2. Run the playbook
  3. Check the deployment result; the auto deploy flag is set to false, which causes the external fabric not to be deployed:
    (screenshot omitted)

References

Checked the code; it looks like we do not read the auto deploy flag from the fabric, but instead set it to false no matter what:

                    vrflite_con['VRF_LITE_CONN'][0]['IPV6_NEIGHBOR'] = a_l['neighbor_ipv6']
                    vrflite_con['VRF_LITE_CONN'][0]['AUTO_VRF_LITE_FLAG'] = 'false'
                    vrflite_con['VRF_LITE_CONN'][0]['PEER_VRF_NAME'] = a_l['peer_vrf']

dcnm_interface: for routed interface, specifying mtu generates ansible conversion error

The DCNM Ansible dcnm_interface module documentation says that, for the 'eth' profile, the MTU attribute is supposed to be a string. However, when I specify the MTU attribute for a routed interface (in the example below), I get an Ansible error regarding conversion to 'int'. While the example below specifies mtu 'default', I have verified that the same error is thrown for mtu 'jumbo' (which is supposed to be the default value).

Not specifying mtu at all works just fine (and of course defaults to jumbo: 9216). The example was tested against the dCloud VXLAN Multisite environment (not following the VXLAN script though... just giving you a standard environment upon which to verify it).

Sample Ansible:

- hosts: dcnm
  gather_facts: false
  connection: ansible.netcommon.httpapi

  collections:
    - cisco.dcnm

  vars:
    # Need to extend timeouts because discovery process is slow
    ansible_command_timeout: 1800
    ansible_connect_timeout: 1800

    switch_username: "{{ lookup('env', 'SWITCH_USER') }}"
    switch_password: "{{ lookup('env', 'SWITCH_PASS') }}"

  tasks:
    - name: Create L3 Routed Link between Core and Aggregate
      dcnm_interface:
        fabric: "{{ site1.fabric_name }}"
        check_deploy: yes
        state: merged
        config:
          - name: Ethernet1/2
            deploy: yes
            switch:
              - "{{ site1.spine1_ip }}"
            type: eth
            profile:
              admin_state: yes
              mode: "routed"
              mtu: "default"
              ipv4_addr: 192.168.0.1
              ipv4_mask_len: 30

Output of the error:

PLAY [dcnm] *********************************************************************************************************************************************************************

TASK [Create L3 Routed Link between Core and Aggregate] *************************************************************************************************************************
fatal: [dcloud_dcnm]: FAILED! => {"changed": false, "module_stderr": "<class 'ansible.parsing.yaml.objects.AnsibleUnicode'> cannot be converted to an int", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error"}

PLAY RECAP **********************************************************************************************************************************************************************
dcloud_dcnm                : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   
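Since the author notes that omitting mtu works, a minimal working variant of the same task would simply drop that attribute:

    - name: Create L3 Routed Link between Core and Aggregate
      dcnm_interface:
        fabric: "{{ site1.fabric_name }}"
        check_deploy: yes
        state: merged
        config:
          - name: Ethernet1/2
            deploy: yes
            switch:
              - "{{ site1.spine1_ip }}"
            type: eth
            profile:
              admin_state: yes
              mode: "routed"        # mtu omitted; defaults to jumbo (9216)
              ipv4_addr: 192.168.0.1
              ipv4_mask_len: 30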

dcnm_vrf - Omission of Loopback Routing Tag during VRF creation causes later issue when editing VRF in NDFC

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Ansible Version and collection version

cisco.dcnm        2.0.1  
ansible [core 2.12.4]
  config file = /Users/foobar/tmp_ndfc/ndfc-roles/ansible.cfg
  configured module search path = ['/Users/foobar/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /Users/foobar/py310/lib/python3.10/site-packages/ansible
  ansible collection location = /Users/foobar/.ansible/collections:/usr/share/ansible/collections
  executable location = /Users/foobar/py310/bin/ansible
  python version = 3.10.3 (v3.10.3:a342a49189, Mar 16 2022, 09:34:18) [Clang 13.0.0 (clang-1300.0.29.30)]
  jinja version = 3.0.3
  libyaml = True

DCNM version

  • V 12.0(2f) (NDFC)

Affected module(s)

  • dcnm_vrf

Ansible Playbook

  cisco.dcnm.dcnm_vrf:
    fabric: f1
    state: replaced
    config:
    - vrf_name: v1
      vrf_id: 9003031
      vlan_id: 3031
      vrf_template: Default_VRF_Universal
      vrf_extension_template: Default_VRF_Extension_Universal
      service_vrf_template: null
      attach:
        - ip_address: 1.2.3.4
        - ip_address: 5.6.7.8

Debug Output

Debug Output: dcnm_vrf - Omission of Loopback Routing Tag during VRF creation causes later issue when editing VRF in NDFC

Expected Behavior

A Loopback Routing Tag field should be present in the REST POST sent to the switch, even if the value is an empty string ("").

Actual Behavior

The Loopback Routing Tag field is not present during VRF creation, which causes later (manual) edits of the VRF in NDFC to push a config-profile with unresolved tag parameter. The issue, on Cisco's side, is being tracked with CSCwb74982. However, it would be good to ensure that all required parameters are present to avoid possibly similar issues in the future when a combination of Ansible and GUI (NDFC) are used.

Steps to Reproduce

  1. Run the above playbook, or similar
  2. After the VRF is created, in DCNM/NDFC 12.0.(2f) do the following
  3. Click on Topology in the NDFC sidebar
  4. Double-click on fabric f1
  5. Double-click on VRFs
  6. Double-click on VRF v1
  7. Right-click on a device and select "Edit Attachment"
  8. Click the slider to change the attachment state from Detach to Attach
  9. Enter 111 into the Loopback Id field
  10. Enter 1.2.3.4 into the Loopback IPv4 Address
  11. The other fields do not matter and can be ignored for the purpose of this reproduce
  12. Click Save
  13. Right-click on the device, and select Preview
  14. In the Pending Config column, click the config (should say '36 lines')
  15. Observe that the loopback config looks as follows
  interface loopback111
    vrf member v1
    ip address 1.2.3.4/32 tag $$tag$$

If you try to Deploy this above config, NDFC will return the following error.

CLI command 'refresh profile v1 v1_new overwrite' failed with following error:Delivery failed with message:ERROR: Cannot refresh to parameterized destination profile

Looking at the Detailed History for this error, it occurred when the following CLI was attempted:

refresh profile v1 v1_new overwrite

Manually issuing the above (on the switch) produces the same error.

If the config-profile is manually edited to replace $$tag$$ with '12345', the command succeeds.

References

L2 Checkbox while creating a network

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Ansible Version and collection version

ansible 2.10.3 collection: 1.1.1

DCNM version

  • 11.4

Affected module(s)

  • dcnm_networks

While creating a network in the DCNM GUI, we have the option to create it without a VRF by checking the L2 checkbox.
Can this be achieved using this collection?

Any pointers will be appreciated. Thanks.
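If the collection in use supports it, the dcnm_network module's is_l2only flag appears to correspond to the GUI's L2 checkbox (worth verifying against the documentation of the installed version); a sketch:

- name: Create an L2-only network (no VRF)
  cisco.dcnm.dcnm_network:
    fabric: fabric1
    state: merged
    config:
      - net_name: L2_ONLY_NET
        is_l2only: true       # equivalent of ticking the L2 checkbox in the GUI
        vlan_id: 2301
        net_id: 31120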

support breakout operation in dcnm_interface module

feature request

support breakout operation in dcnm_interface module

current behavior

The dcnm_interface module currently only supports provisioning new interfaces on existing physical interfaces. Since switches can break out certain interfaces into multiple sub-interfaces, dcnm_interface could support a breakout operation with a provided breakout option such as 10g-4x, and should also support an un-breakout operation. See the sketch below.
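A hypothetical sketch of how a breakout operation might be expressed (the breakout type and map option below are illustrative placeholders for this request, not existing module options):

- name: Break out a physical port (hypothetical)
  cisco.dcnm.dcnm_interface:
    fabric: fabric1
    state: merged
    config:
      - name: Ethernet1/49
        type: breakout           # illustrative type, not an existing option
        switch:
          - "172.31.186.152"
        profile:
          map: 10g-4x            # breakout option mentioned in this request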

dcnm_interface Merged issue

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Ansible Version and collection version

ansible version 2.10
cisco.dcnm 1.2.0 and cisco.dcnm 2.0.1

DCNM version

  • V 11.5.1

Affected module(s)

  • cisco.dcnm.dcnm_interface

Ansible Playbook

- name: Add cmd inside freeform config of the interface
  cisco.dcnm.dcnm_interface:
    fabric: PR_DCI_FCT
    state: merged
    config:
      - name: "Ethernet1/12"
        type: eth
        switch:
          - "1.2.3.4"
        deploy: false
        profile:
          mode: trunk
          cmds:
            - 'vpc orphan-port suspend'

Debug Output

Expected Behavior

I expected to be able to add a CLI command to the interface "Freeform Config" field even if other CLI commands are already present in that field.

Actual Behavior

If CLI commands are already present in the interface "Freeform Config" field, the merged state in the playbook replaces the entire "Freeform Config" field with the CLI from the playbook; a possible workaround is sketched below.
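Until merged behaves additively for this field, a possible workaround is to list the complete set of freeform commands (existing plus new) in the playbook, so nothing is lost when the field is overwritten. A minimal sketch, assuming the interface currently carries 'logging event port link-status' as an existing freeform command:

- name: Re-apply the full freeform config including the new command
  cisco.dcnm.dcnm_interface:
    fabric: PR_DCI_FCT
    state: merged
    config:
      - name: "Ethernet1/12"
        type: eth
        switch:
          - "1.2.3.4"
        deploy: false
        profile:
          mode: trunk
          cmds:
            - 'logging event port link-status'   # existing command, repeated on purpose
            - 'vpc orphan-port suspend'          # new command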

Steps to Reproduce

References

support customized template when using dcnm_interface module

feature request

support customized template when using dcnm_interface module

current behavior

In the current dcnm_interface module, only the default templates are supported when deploying an interface. Users can create their own template and should be able to use it as the policy when creating a new interface such as a port-channel; a hypothetical task is sketched below.
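A hypothetical sketch of how a user-defined interface policy template might be referenced; the policy keyword below is illustrative only and is not part of the current module:

- name: Create a port-channel using a custom policy template (hypothetical syntax)
  cisco.dcnm.dcnm_interface:
    fabric: vxlan-fabric
    state: merged
    config:
      - name: po300
        type: pc
        switch:
          - "192.168.0.1"
        profile:
          policy: my_custom_pc_template   # hypothetical: name of a user-created template
          admin_state: true
          members:
            - e1/10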

dcnm_inventory state merged is not idempotent

Community Note

  • Please vote on this issue by adding a ๐Ÿ‘ reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Ansible Version and collection version

Ansible: 2.10.4

Ansible Collection: cisco.dcnm = 1.2.0

Python modules

  • ansible-base 2.10.4
  • ansible 2.10.5

DCNM version

DCNM LAN Fabric: 11.5.1

Affected module(s)

  • dcnm_inventory

Ansible Playbook

- hosts: dcnm
  gather_facts: false
  connection: ansible.netcommon.httpapi

  collections:
    - cisco.dcnm

  vars:
    ansible_command_timeout: 1800
    ansible_connect_timeout: 1800
    switch_username: USERNAME
    switch_password: PASSWORD
    site1_vxlan: VXLAN-Site1
    site1_bgw1: 1.1.3.1
    site1_bgw2: 1.1.3.2
    site2_vxlan: VXLAN-Site2
    site2_bgw1: 2.1.3.1
    site2_bgw2: 2.1.3.2

  tasks:
    - name: Add Border Gateways to "{{ site1_vxlan }}"
      dcnm_inventory:
        fabric: "{{ site1_vxlan }}"
        state: merged
        config:
          - seed_ip: "{{ site1_bgw1 }}"
            max_hops: 0
            preserve_config: false
            role: border_gateway
            auth_proto: MD5
            user_name: "{{ switch_username }}"
            password: "{{ switch_password }}"
          - seed_ip: "{{ site1_bgw2 }}"
            max_hops: 0
            preserve_config: false
            role: border_gateway
            auth_proto: MD5
            user_name: "{{ switch_username }}"
            password: "{{ switch_password }}"

Debug Output

# First run 
% ansible-playbook 01-vxlan_fabric_switches.yaml 

PLAY [dcnm] **********************************************************************************************************

TASK [Add Border Gateways to "VXLAN-Site1"] **************************************************************************
[WARNING]: Adding switches to a VXLAN fabric can take a while.  Please be patient...
changed: [dcpod_dcnm]

PLAY RECAP ***********************************************************************************************************
dcpod_dcnm                 : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

# Second run
(ansible-2.10) timmil@TIMMIL-M-842A ansible-dcnm-examples % ansible-playbook 01-vxlan_fabric_switches.yaml 

PLAY [dcnm] **********************************************************************************************************

TASK [Add Border Gateways to "VXLAN-Site1"] **************************************************************************
[WARNING]: Adding switches to a VXLAN fabric can take a while.  Please be patient...
fatal: [dcpod_dcnm]: FAILED! => {"changed": false, "msg": {"DATA": "Invalid JSON response: The IP is already in inventory: [/10.60.66.131]\n\nDiscovery Failed. \n\nPlease resolve selections and try again.", "MESSAGE": "Internal Server Error", "METHOD": "POST", "REQUEST_PATH": "https://172.18.180.190:443/rest/control/fabrics/VXLAN-Site1/inventory/discover?gfBlockingCall=true", "RETURN_CODE": 500}}

PLAY RECAP ***********************************************************************************************************
dcpod_dcnm                 : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   

Expected Behavior

  • First run should have completed successfully, taking a great deal of time.
  • Second run should have completed with a task status of "ok".
  • Second run should have completed much more QUICKLY.

Actual Behavior

  • Second run failed as reported in the "debug" info above.
  • Second run took a decent amount of time to fail despite having a short list of attributes to check/compare. Although, in fairness, maybe that is because I'm adding 2 switches?
  • Second run actually fails because it made a REST call that was not correct, rather than detecting that no changes were needed.

Steps to Reproduce

Simply run ansible-playbook 01-vxlan_fabric_switches.yaml (YAML copied above) multiple times.
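As an interim workaround until merged is idempotent, the inventory can be queried first and the add task skipped for switches that are already present. A minimal sketch, assuming the query response items expose an ipAddress field (not verified against every collection version):

    - name: Query current fabric inventory
      dcnm_inventory:
        fabric: "{{ site1_vxlan }}"
        state: query
      register: inv

    - name: Add Border Gateway 1 only if it is not already in the fabric
      dcnm_inventory:
        fabric: "{{ site1_vxlan }}"
        state: merged
        config:
          - seed_ip: "{{ site1_bgw1 }}"
            max_hops: 0
            preserve_config: false
            role: border_gateway
            auth_proto: MD5
            user_name: "{{ switch_username }}"
            password: "{{ switch_password }}"
      when: site1_bgw1 not in (inv.response | map(attribute='ipAddress') | list)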

References

Creating / Merging Interfaces Fail with DCNM ver 10.5(1)

ansible 2.10.6
config file = /root/dev/dcnm-ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /root/envs/dcnm-ansible/lib/python3.6/site-packages/ansible
executable location = /root/envs/dcnm-ansible/bin/ansible
python version = 3.6.8 (default, Nov 16 2020, 16:55:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]

DCNM Version 10.5(1)

playbook: see attached
merge-loopback.txt

Expected Result:
TASK [Create loopback interfaces] should return code 200

Actual Result:

TASK [Create loopback interfaces] **********************************************************************************************************
fatal: [10.X.X.X]: FAILED! => {"changed": false, "msg": {"DATA": [{"column": 0, "entity": "", "line": 0, "message": "Interface type is not valid", "reportItemType": "ERROR"}], "MESSAGE": "Internal Server Error", "METHOD": "POST", "REQUEST_PATH": "https://10.X.X.X:443/rest/globalInterface", "RETURN_CODE": 500}}

template indent is not correct after created

Community Note

  • Please vote on this issue by adding a ๐Ÿ‘ reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Ansible Version and collection version

ansible 2.10.5
  config file = /root/.ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /root/virtualenv/ansible/lib/python3.7/site-packages/ansible
  executable location = /root/virtualenv/ansible/bin/ansible
  python version = 3.7.3 (default, Apr  3 2019, 05:39:12) [GCC 8.3.0]

DCNM version

  • V 11.5.1

Affected module(s)

  • dcnm_template

Ansible Playbook

# Copy-paste your anisble playbook here 
---
- name: create template for telemery
  hosts: dcnm
  gather_facts: false
  tasks:
    - name: read template config from file
      set_fact:
        template_content: "{{ lookup('file', './telemetry.cfg')}}"

    - name: create template
      cisco.dcnm.dcnm_template:
        config:
          - name: telegraf_telemetry_configuration
            description: "telemetry configuration"
            content: "{{ template_content}}"

content of telemetry.cfg

feature telemetry

telemetry
  certificate /bootflash/telegraf.crt telegraf
  destination-profile
    use-vrf management
  destination-group 1
    ip address 10.195.225.176 port 57000 protocol gRPC encoding GPB
  sensor-group 1
    data-source DME
    path sys/ch depth unbounded
  sensor-group 2
    data-source DME
    path sys/intf depth unbounded
  sensor-group 3
    data-source DME
    path sys/bgp depth unbounded
  sensor-group 4
    data-source DME
    path sys/procsys/syscpusummary/syscpuhistory-last60seconds
  sensor-group 5
    data-source DME
    path sys/procsys/sysmem/sysmemusage
  sensor-group 6
    data-source DME
    path sys/bd depth unbounded
  sensor-group 7
    data-source DME
    path sys/mac depth unbounded
  sensor-group 8
    data-source DME
    path sys/evpn depth 4
  sensor-group 9
    data-source DME
    path sys/urib depth unbounded query-condition rsp-foreign-subtree=ephemeral
  sensor-group 10
    data-source DME
    path sys/u6rib depth unbounded query-condition rsp-foreign-subtree=ephemeral
  sensor-group 11
    data-source DME
    path sys/bgp/inst/dom-default/af-[l2vpn-evpn] depth unbounded query-condition rsp-foreign-subtree=ephemeral
  subscription 1
    dst-grp 1
    snsr-grp 1 sample-interval 10000
  subscription 2
    dst-grp 1
    snsr-grp 2 sample-interval 10000
  subscription 3
    dst-grp 1
    snsr-grp 3 sample-interval 30000
  subscription 4
    dst-grp 1
    snsr-grp 4 sample-interval 15000
  subscription 5
    dst-grp 1
    snsr-grp 5 sample-interval 15000
  subscription 6
    dst-grp 1
    snsr-grp 6 sample-interval 0
  subscription 7
    dst-grp 1
    snsr-grp 7 sample-interval 10000
  subscription 8
    dst-grp 1
    snsr-grp 8 sample-interval 15000
  subscription 9
    dst-grp 1
    snsr-grp 9 sample-interval 15000
  subscription 10
    dst-grp 1
    snsr-grp 10 sample-interval 15000
  subscription 11
    dst-grp 1
    snsr-grp 11 sample-interval 15000

Debug Output

https://gist.github.com/dsx1123/6f39fc58304c69f8d88570781a9b5ad3

Expected Behavior

The content of the created template should match the content of the file.

Actual Behavior

Here is the actual content after the template is created:

feature telemetry

telemetry
 certificate /bootflash/telegraf.crt telegraf
 destination-profile
 use-vrf management
 destination-group 1
 ip address 10.195.225.176 port 57000 protocol gRPC encoding GPB
 sensor-group 1
 data-source DME
 path sys/ch depth unbounded
 sensor-group 2
 data-source DME
 path sys/intf depth unbounded
 sensor-group 3
 data-source DME
 path sys/bgp depth unbounded
 sensor-group 4
 data-source DME
 path sys/procsys/syscpusummary/syscpuhistory-last60seconds
 sensor-group 5
 data-source DME
 path sys/procsys/sysmem/sysmemusage
 sensor-group 6
 data-source DME
 path sys/bd depth unbounded
 sensor-group 7
 data-source DME
 path sys/mac depth unbounded
 sensor-group 8
 data-source DME
 path sys/evpn depth 4
 sensor-group 9
 data-source DME
 path sys/urib depth unbounded query-condition rsp-foreign-subtree=ephemeral
 sensor-group 10
 data-source DME
 path sys/u6rib depth unbounded query-condition rsp-foreign-subtree=ephemeral
 sensor-group 11
 data-source DME
 path sys/bgp/inst/dom-default/af-[l2vpn-evpn] depth unbounded query-condition rsp-foreign-subtree=ephemeral
 subscription 1
 dst-grp 1
 snsr-grp 1 sample-interval 10000
 subscription 2
 dst-grp 1
 snsr-grp 2 sample-interval 10000
 subscription 3
 dst-grp 1
 snsr-grp 3 sample-interval 30000
 subscription 4
 dst-grp 1
 snsr-grp 4 sample-interval 15000
 subscription 5
 dst-grp 1
 snsr-grp 5 sample-interval 15000
 subscription 6
 dst-grp 1
 snsr-grp 6 sample-interval 0
 subscription 7
 dst-grp 1
 snsr-grp 7 sample-interval 10000
 subscription 8
 dst-grp 1
 snsr-grp 8 sample-interval 15000
 subscription 9
 dst-grp 1
 snsr-grp 9 sample-interval 15000
 subscription 10
 dst-grp 1
 snsr-grp 10 sample-interval 15000
 subscription 11
 dst-grp 1
 snsr-grp 11 sample-interval 1500

Steps to Reproduce

  1. Create the template using the playbook above

References

dcnm_rest module crashes when provided with wrong username

Community Note

  • Please vote on this issue by adding a ๐Ÿ‘ reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Ansible Version and collection version

ansible 2.9.22
config file = /Users/prramoor/Ansible/dcnm-network/ansible.cfg
configured module search path = ['/Users/prramoor/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/prramoor/.pyenv/versions/3.8.0/envs/ansible_collections/lib/python3.8/site-packages/ansible
executable location = /Users/prramoor/.pyenv/versions/ansible_collections/bin/ansible
python version = 3.8.0 (default, Apr 13 2020, 16:49:15) [Clang 11.0.3 (clang-1103.0.32.29)]

DCNM version

  • V 1.1.1

Affected module(s)

  • dcnm_rest

Ansible Playbook

---
- hosts: dcnm
  gather_facts: no
  connection: ansible.netcommon.httpapi

  tasks:

  - name: Verify if fabric is deployed.
    cisco.dcnm.dcnm_rest:
      method: GET 
      path: /rest/control/fabrics/{{ ansible_it_fabric }}
    register: result

Debug Output

TASK [dcnm_network : - Verify if fabric is deployed.] ****************************************************************************************************************************************
task path: /Users/prramoor/Ansible/dcnm-network/collections/ansible_collections/cisco/dcnm/tests/integration/targets/dcnm_network/tests/dcnm/query.yaml:5
<10.122.197.6> attempting to start connection
<10.122.197.6> using connection plugin ansible.netcommon.httpapi
<10.122.197.6> found existing local domain socket, using it!
<10.122.197.6> updating play_context for connection
<10.122.197.6>
<10.122.197.6> local domain socket path is /Users/prramoor/.ansible/pc/a354799df9
<10.122.197.6> ESTABLISH LOCAL CONNECTION FOR USER: prramoor
<10.122.197.6> EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /Users/prramoor/.ansible/tmp/ansible-local-39672777vj8ep"&& mkdir "echo /Users/prramoor/.ansible/tmp/ansible-local-39672777vj8ep/ansible-tmp-1624544019.736255-39752-56998697776135" && echo ansible-tmp-1624544019.736255-39752-56998697776135="echo /Users/prramoor/.ansible/tmp/ansible-local-39672777vj8ep/ansible-tmp-1624544019.736255-39752-56998697776135" ) && sleep 0'
Using module file /Users/prramoor/Ansible/dcnm-network/collections/ansible_collections/cisco/dcnm/plugins/modules/dcnm_rest.py
<10.122.197.6> PUT /Users/prramoor/.ansible/tmp/ansible-local-39672777vj8ep/tmp9hkoz7e7 TO /Users/prramoor/.ansible/tmp/ansible-local-39672777vj8ep/ansible-tmp-1624544019.736255-39752-56998697776135/AnsiballZ_dcnm_rest.py
<10.122.197.6> EXEC /bin/sh -c 'chmod u+x /Users/prramoor/.ansible/tmp/ansible-local-39672777vj8ep/ansible-tmp-1624544019.736255-39752-56998697776135/ /Users/prramoor/.ansible/tmp/ansible-local-39672777vj8ep/ansible-tmp-1624544019.736255-39752-56998697776135/AnsiballZ_dcnm_rest.py && sleep 0'
<10.122.197.6> EXEC /bin/sh -c 'python /Users/prramoor/.ansible/tmp/ansible-local-39672777vj8ep/ansible-tmp-1624544019.736255-39752-56998697776135/AnsiballZ_dcnm_rest.py && sleep 0'
<10.122.197.6> EXEC /bin/sh -c 'rm -f -r /Users/prramoor/.ansible/tmp/ansible-local-39672777vj8ep/ansible-tmp-1624544019.736255-39752-56998697776135/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/Users/prramoor/.ansible/tmp/ansible-local-39672777vj8ep/ansible-tmp-1624544019.736255-39752-56998697776135/AnsiballZ_dcnm_rest.py", line 102, in
_ansiballz_main()
File "/Users/prramoor/.ansible/tmp/ansible-local-39672777vj8ep/ansible-tmp-1624544019.736255-39752-56998697776135/AnsiballZ_dcnm_rest.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/Users/prramoor/.ansible/tmp/ansible-local-39672777vj8ep/ansible-tmp-1624544019.736255-39752-56998697776135/AnsiballZ_dcnm_rest.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible_collections.cisco.dcnm.plugins.modules.dcnm_rest', init_globals=None, run_name='main', alter_sys=True)
File "/Users/prramoor/.pyenv/versions/3.8.0/lib/python3.8/runpy.py", line 205, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/Users/prramoor/.pyenv/versions/3.8.0/lib/python3.8/runpy.py", line 95, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/Users/prramoor/.pyenv/versions/3.8.0/lib/python3.8/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/var/folders/kf/sbbc10rn02d_m5fdgzdbrhfh0000gn/T/ansible_cisco.dcnm.dcnm_rest_payload_pqak23b4/ansible_cisco.dcnm.dcnm_rest_payload.zip/ansible_collections/cisco/dcnm/plugins/modules/dcnm_rest.py", line 103, in
File "/var/folders/kf/sbbc10rn02d_m5fdgzdbrhfh0000gn/T/ansible_cisco.dcnm.dcnm_rest_payload_pqak23b4/ansible_cisco.dcnm.dcnm_rest_payload.zip/ansible_collections/cisco/dcnm/plugins/modules/dcnm_rest.py", line 96, in main
TypeError: '>=' not supported between instances of 'NoneType' and 'int'
fatal: [10.122.197.6]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File "/Users/prramoor/.ansible/tmp/ansible-local-39672777vj8ep/ansible-tmp-1624544019.736255-39752-56998697776135/AnsiballZ_dcnm_rest.py", line 102, in \n _ansiballz_main()\n File "/Users/prramoor/.ansible/tmp/ansible-local-39672777vj8ep/ansible-tmp-1624544019.736255-39752-56998697776135/AnsiballZ_dcnm_rest.py", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File "/Users/prramoor/.ansible/tmp/ansible-local-39672777vj8ep/ansible-tmp-1624544019.736255-39752-56998697776135/AnsiballZ_dcnm_rest.py", line 40, in invoke_module\n runpy.run_module(mod_name='ansible_collections.cisco.dcnm.plugins.modules.dcnm_rest', init_globals=None, run_name='main', alter_sys=True)\n File "/Users/prramoor/.pyenv/versions/3.8.0/lib/python3.8/runpy.py", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File "/Users/prramoor/.pyenv/versions/3.8.0/lib/python3.8/runpy.py", line 95, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File "/Users/prramoor/.pyenv/versions/3.8.0/lib/python3.8/runpy.py", line 85, in _run_code\n exec(code, run_globals)\n File "/var/folders/kf/sbbc10rn02d_m5fdgzdbrhfh0000gn/T/ansible_cisco.dcnm.dcnm_rest_payload_pqak23b4/ansible_cisco.dcnm.dcnm_rest_payload.zip/ansible_collections/cisco/dcnm/plugins/modules/dcnm_rest.py", line 103, in \n File "/var/folders/kf/sbbc10rn02d_m5fdgzdbrhfh0000gn/T/ansible_cisco.dcnm.dcnm_rest_payload_pqak23b4/ansible_cisco.dcnm.dcnm_rest_payload.zip/ansible_collections/cisco/dcnm/plugins/modules/dcnm_rest.py", line 96, in main\nTypeError: '>=' not supported between instances of 'NoneType' and 'int'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}

Expected Behavior

Should have reported a login error

Actual Behavior

Module crashed

dcnm_network requires ports attribute when attaching to switch

Community Note

  • Please vote on this issue by adding a ๐Ÿ‘ reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Ansible Version and collection version

Ansible 2.10.5
DCNM Collection 1.1.1

DCNM version

  • V 11.5.1

Affected module(s)

  • dcnm_network

Ansible Playbook

    - name: Create a Network and Attach to Switches
      dcnm_network:
        fabric: multisite_domain_fabric
        state: replaced
        config:
          - net_name: VLAN10
            vrf_name: VXLAN_OVERLAY
            vlan_id: 10
            gw_ip_subnet: '192.168.10.1/24'
            net_id: 30000
            deploy: true
            attach:
              - ip_address: 10.60.66.131

Debug Output

TASK [Create a Network and Attach to Switches]
*******************************************************************
fatal: [dcpod_dcnm]: FAILED! => {"changed": false, "msg": "Invalid parameters in playbook: ports : Required parameter not found"}

Expected Behavior

The VLAN (and SVI) should be configured on the switch specified without any errors and without the requirement to assign it to a port.

Actual Behavior

The DCNM module has specified that ports is a mandatory attribute for attaching the network to a switch. This behavior does not match the DCNM GUI behavior.

It's also a case of DRY (don't repeat yourself), in that allowed VLANs on trunks (or the access VLAN) already have to be defined when configuring port-channel, vPC, or other interfaces. So, customers (and I) prefer defining VLAN membership on the actual interfaces and not on the networks. A possible workaround is sketched below.
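A possible workaround is to pass an empty ports list, which I believe satisfies the module's input validation while attaching the network to the switch without binding it to any interface:

    - name: Create a Network and Attach to Switches (no ports)
      dcnm_network:
        fabric: multisite_domain_fabric
        state: replaced
        config:
          - net_name: VLAN10
            vrf_name: VXLAN_OVERLAY
            vlan_id: 10
            gw_ip_subnet: '192.168.10.1/24'
            net_id: 30000
            deploy: true
            attach:
              - ip_address: 10.60.66.131
                ports: []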

status of vrf should be included when querying vrf

When querying the VRF information, vrfStatus should be included in the response even though it is not configurable.
Environment:

ansible-playbook 2.9.13
  config file = /root/.ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /root/virtualenv/ansible/lib/python3.7/site-packages/ansible
  executable location = /root/virtualenv/ansible/bin/ansible-playbook
  python version = 3.7.3 (default, Apr  3 2019, 05:39:12) [GCC 8.3.0]

collection version: 1.0.0

example of playbook:

---
- name: verify | check if all VRFs are deployed
  cisco.dcnm.dcnm_vrf:
    fabric: "{{ fabric }}"
    config:
      - vrf_name: vrf_blue 
    state: query
  register: result

- debug:
    msg: "{{ result }}"

result:

PLAY [verify] ********************************************************************************************************************************************************************************************************************************

TASK [test : verify | check if all VRFs are deployed] ****************************************************************************************************************************************************************************************
Wednesday 16 September 2020  21:42:55 -0700 (0:00:00.123)       0:00:00.123 ***
ok: [172.25.74.53]

TASK [test : debug] **************************************************************************************************************************************************************************************************************************
Wednesday 16 September 2020  21:43:00 -0700 (0:00:04.930)       0:00:05.054 ***
ok: [172.25.74.53] => {
    "msg": {
        "ansible_facts": {
            "discovered_interpreter_python": "/usr/bin/python"
        },
        "changed": false,
        "diff": [],
        "failed": false,
        "response": [
            {
                "attach": [],
                "service_vrf_template": null,
                "source": null,
                "vrf_extension_template": "Default_VRF_Extension_Universal",
                "vrf_id": 50000,
                "vrf_name": "vrf_blue",
                "vrf_template": "Default_VRF_Universal"
            }
        ]
    }
}

Expected result:
vrfStatus is included in the query result
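As an interim workaround, the deployment status can be read directly from the controller with dcnm_rest. A minimal sketch, assuming the DCNM 11.x top-down API path below returns a vrfStatus field per VRF:

- name: Read VRFs (including vrfStatus) via the REST API
  cisco.dcnm.dcnm_rest:
    method: GET
    path: /rest/top-down/fabrics/{{ fabric }}/vrfs
  register: vrf_raw

- debug:
    msg: "{{ vrf_raw.response.DATA }}"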

dcnm_inventory - deleted state docs update

Community Note

  • Please vote on this issue by adding a ๐Ÿ‘ reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Affected module(s)

  • dcnm_inventory

Expected Behavior

Docs should describe the dcnm_inventory required fields and provide an example for each state. Currently the deleted state example lists every field as required and includes all of them in the example, when it seems that only the IP is required. Requesting that the example be updated to list only the required fields for each state.

status of network should be included when querying network

When querying the network information, networkStatus should be included in the response even though it is not configurable.
Environment:

ansible-playbook 2.9.13
  config file = /root/.ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /root/virtualenv/ansible/lib/python3.7/site-packages/ansible
  executable location = /root/virtualenv/ansible/bin/ansible-playbook
  python version = 3.7.3 (default, Apr  3 2019, 05:39:12) [GCC 8.3.0]

collection version: 1.0.0

example of tasks:

---
- name:  verify | check if network is deployed
  cisco.dcnm.dcnm_network:
    fabric: "{{ fabric }}"
    config:
      - net_name: network_app
        vrf_name: vrf_blue
    state: query
  register: net_query

- debug:
    msg: "{{ net_query }}"

result:

PLAY [verify] ********************************************************************************************************************************************************************************************************************************

TASK [test : verify | check if network is deployed] ******************************************************************************************************************************************************************************************
Wednesday 16 September 2020  21:50:09 -0700 (0:00:00.117)       0:00:00.117 ***
ok: [172.25.74.53]

TASK [test : debug] **************************************************************************************************************************************************************************************************************************
Wednesday 16 September 2020  21:50:18 -0700 (0:00:08.534)       0:00:08.651 ***
ok: [172.25.74.53] => {
    "msg": {
        "ansible_facts": {
            "discovered_interpreter_python": "/usr/bin/python"
        },
        "changed": false,
        "diff": [],
        "failed": false,
        "response": [
            {
                "attach": [],
                "gw_ip_subnet": "192.168.2.1/24",
                "net_extension_template": "Default_Network_Extension_Universal",
                "net_id": 30001,
                "net_name": "network_app",
                "net_template": "Default_Network_Universal",
                "vlan_id": "",
                "vrf_name": "vrf_blue"
            }
        ]
    }
}

Expected result:
networkStatus is included in the query result

Installation Documentation wrong

Hi,

first of all thanks for the awesome collection.

I just found a small issue on the documentation.
ansible-galaxy collection install cisco.dcnm
Doesn't work by default. I tested it on multiple hosts and Ansible versions just to be sure.
The basic requirements.yml documentation shows version 0.9.0.
If somebody just copies this into their requirements.yml it will also not work.
The correct version needs to be appended as well.
Not a big issue, but it may put off some new Ansible users from pursuing this track since they get frustrated.

collections:
  - name: cisco.dcnm
    version: 0.9.0-dev4
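For completeness, the equivalent install command with the version pinned explicitly (pre-release versions such as 0.9.0-dev4 are not picked up by default):

ansible-galaxy collection install cisco.dcnm:0.9.0-dev4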

docstring of dcnm_network module fails to parse

Error when using ansible-doc to get documentation of dcnm_network module:

# ansible-doc -t module cisco.dcnm.dcnm_network
ERROR! module cisco.dcnm.dcnm_network missing documentation (or could not parse documentation): while scanning for the next token
found character '\t' that cannot start any token
  in "<unicode string>", line 57, column 1:
        note: If not specified in the p ...
    ^

ansible version:

ansible 2.9.10
  config file = /root/.ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /root/virtualenv/ansible/lib/python3.7/site-packages/ansible
  executable location = /root/virtualenv/ansible/bin/ansible
  python version = 3.7.3 (default, Apr  3 2019, 05:39:12) [GCC 8.3.0]

dcnm-collection versoin 0.9.0-dev4

Getting misleading message from dcnm module

I was getting the following error messages when I ran my playbooks against valid fabrics - in my setup, "MSD_Fabric_East" and 'Site-A' are valid fabrics. However, my issue was really a simple IP connectivity issue between the Ansible control node and DCNM. Let me know if you need more info. I am running Ansible 2.10 and the latest Python 3 versions. DCNM is running 11.5(1).

(python3-ansible-venv) [administrator@asharma-linux-js55 Config_FP-VXLAN-Fabric]$ ansible-playbook -i inventory setup_fabric.yml

PLAY [Setup VXLAN fabric configuration for FlexPod MetroCluster CVD] ******************************************************

TASK [Query fabric for list of Spine and Leaf switches] *******************************************************************
fatal: [192.168.160.140]: FAILED! => {"changed": false, "msg": "Unable to find inventories under fabric: MSD_Fabric_East"}

PLAY RECAP ****************************************************************************************************************
192.168.160.140 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0

(python3-ansible-venv) [administrator@asharma-linux-js55 Config_FP-VXLAN-Fabric]$ vi setup_fabric.yml
(python3-ansible-venv) [administrator@asharma-linux-js55 Config_FP-VXLAN-Fabric]$ ansible-playbook -i inventory setup_fabric.yml

PLAY [Setup VXLAN fabric configuration for FlexPod MetroCluster CVD] ******************************************************

TASK [Query fabric for list of Spine and Leaf switches] *******************************************************************
fatal: [192.168.160.140]: FAILED! => {"changed": false, "msg": "Unable to find inventories under fabric: Site-A"}

PLAY RECAP ****************************************************************************************************************
192.168.160.140 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0

(python3-ansible-venv) [administrator@asharma-linux-js55 Config_FP-VXLAN-Fabric]$

******* Detailed Log ******

(python3-ansible-venv) [administrator@asharma-linux-js55 Config_FP-VXLAN-Fabric]$ ansible-playbook -i inventory -vvv setup_fabric.yml
ansible-playbook 2.10.7
config file = None
configured module search path = ['/home/administrator/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/administrator/python3-ansible-venv/lib64/python3.6/site-packages/ansible
executable location = /home/administrator/python3-ansible-venv/bin/ansible-playbook
python version = 3.6.8 (default, Aug 24 2020, 17:57:11) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
No config file found; using defaults
host_list declined parsing /home/administrator/Config_FP-VXLAN-Fabric/inventory as it did not pass its verify_file() method
script declined parsing /home/administrator/Config_FP-VXLAN-Fabric/inventory as it did not pass its verify_file() method
auto declined parsing /home/administrator/Config_FP-VXLAN-Fabric/inventory as it did not pass its verify_file() method
Parsed /home/administrator/Config_FP-VXLAN-Fabric/inventory inventory source with ini plugin
redirecting (type: action) cisco.dcnm.dcnm_inventory to cisco.dcnm.dcnm
redirecting (type: action) cisco.dcnm.dcnm_inventory to cisco.dcnm.dcnm
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: setup_fabric.yml ************************************************************************************************
1 plays in setup_fabric.yml

PLAY [Setup VXLAN fabric configuration for FlexPod MetroCluster CVD] ******************************************************
META: ran handlers
redirecting (type: action) cisco.dcnm.dcnm_inventory to cisco.dcnm.dcnm

TASK [Query fabric for list of Spine and Leaf switches] *******************************************************************
task path: /home/administrator/Config_FP-VXLAN-Fabric/setup_fabric.yml:11
redirecting (type: action) cisco.dcnm.dcnm_inventory to cisco.dcnm.dcnm
<192.168.160.140> ESTABLISH LOCAL CONNECTION FOR USER: administrator
<192.168.160.140> EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /home/administrator/.ansible/tmp/ansible-local-157047_i0u24kb"&& mkdir "echo /home/administrator/.ansible/tmp/ansible-local-157047_i0u24kb/ansible-tmp-1616973903.5716214-157053-120667932400862" && echo ansible-tmp-1616973903.5716214-157053-120667932400862="echo /home/administrator/.ansible/tmp/ansible-local-157047_i0u24kb/ansible-tmp-1616973903.5716214-157053-120667932400862" ) && sleep 0'
Using module file /home/administrator/.ansible/collections/ansible_collections/cisco/dcnm/plugins/modules/dcnm_inventory.py
<192.168.160.140> PUT /home/administrator/.ansible/tmp/ansible-local-157047_i0u24kb/tmpgn4g5sfu TO /home/administrator/.ansible/tmp/ansible-local-157047_i0u24kb/ansible-tmp-1616973903.5716214-157053-120667932400862/AnsiballZ_dcnm_inventory.py
<192.168.160.140> EXEC /bin/sh -c 'chmod u+x /home/administrator/.ansible/tmp/ansible-local-157047_i0u24kb/ansible-tmp-1616973903.5716214-157053-120667932400862/ /home/administrator/.ansible/tmp/ansible-local-157047_i0u24kb/ansible-tmp-1616973903.5716214-157053-120667932400862/AnsiballZ_dcnm_inventory.py && sleep 0'
<192.168.160.140> EXEC /bin/sh -c 'python /home/administrator/.ansible/tmp/ansible-local-157047_i0u24kb/ansible-tmp-1616973903.5716214-157053-120667932400862/AnsiballZ_dcnm_inventory.py && sleep 0'
<192.168.160.140> EXEC /bin/sh -c 'rm -f -r /home/administrator/.ansible/tmp/ansible-local-157047_i0u24kb/ansible-tmp-1616973903.5716214-157053-120667932400862/ > /dev/null 2>&1 && sleep 0'
fatal: [192.168.160.140]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"config": null,
"fabric": "Site-A",
"state": "query"
}
},
"msg": "Unable to find inventories under fabric: Site-A"
}
^C [ERROR]: User interrupted execution
(python3-ansible-venv) [administrator@asharma-linux-js55 Config_FP-VXLAN-Fabric]$

********** playbook module ******
- name: Query fabric for list of Spine and Leaf switches
  cisco.dcnm.dcnm_inventory:
    fabric: Site-A
    state: query
  register: result

dcnm_inventory state overridden is not idempotent

Community Note

  • Please vote on this issue by adding a ๐Ÿ‘ reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Ansible Version and collection version

Ansible: 2.10.4

Ansible Collection: cisco.dcnm = 1.2.0

Python modules

  • ansible-base 2.10.4
  • ansible 2.10.5

DCNM version

DCNM LAN Fabric: 11.5.1

Affected module(s)

  • dcnm_inventory

Ansible Playbook

- hosts: dcnm
  gather_facts: false
  connection: ansible.netcommon.httpapi

  collections:
    - cisco.dcnm

  vars:
    ansible_command_timeout: 1800
    ansible_connect_timeout: 1800
    switch_username: USERNAME
    switch_password: PASSWORD
    site1_vxlan: VXLAN-Site1
    site1_bgw1: 1.1.3.1
    site1_bgw2: 1.1.3.2
    site2_vxlan: VXLAN-Site2
    site2_bgw1: 2.1.3.1
    site2_bgw2: 2.1.3.2

  tasks:
    - name: Add Border Gateways to "{{ site1_vxlan }}"
      dcnm_inventory:
        fabric: "{{ site1_vxlan }}"
        state: overridden
        config:
          - seed_ip: "{{ site1_bgw1 }}"
            max_hops: 0
            preserve_config: false
            role: border_gateway
            auth_proto: MD5
            user_name: "{{ switch_username }}"
            password: "{{ switch_password }}"
          - seed_ip: "{{ site1_bgw2 }}"
            max_hops: 0
            preserve_config: false
            role: border_gateway
            auth_proto: MD5
            user_name: "{{ switch_username }}"
            password: "{{ switch_password }}"

Debug Output

# First run 
% ansible-playbook 01-vxlan_fabric_switches.yaml 

PLAY [dcnm] **********************************************************************************************************

TASK [Add Border Gateways to "VXLAN-Site1"] **************************************************************************
[WARNING]: Adding switches to a VXLAN fabric can take a while.  Please be patient...
changed: [dcpod_dcnm]

PLAY RECAP ***********************************************************************************************************
dcpod_dcnm                 : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

# Second run
(ansible-2.10) timmil@TIMMIL-M-842A ansible-dcnm-examples % ansible-playbook 01-vxlan_fabric_switches.yaml 

PLAY [dcnm] **********************************************************************************************************

TASK [Add Border Gateways to "VXLAN-Site1"] **************************************************************************
[WARNING]: Adding switches to a VXLAN fabric can take a while.  Please be patient...
changed: [dcpod_dcnm]

PLAY RECAP ***********************************************************************************************************
dcpod_dcnm                 : ok=0    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

Expected Behavior

  • First run should have completed successfully, taking a great deal of time.
  • Second run should have completed with a task status of "ok".
  • Second run should have completed much more QUICKLY.

Actual Behavior

First run does take a LONG time to complete but completes successfully, even in scenarios where the switches already existed in the specified roles.

However, subsequent runs take an equally long amount of time and do not simply return "ok".

The documentation clearly says:

If the switch exists, properties that need to be modified and can be modified will be modified

However, each and every run of dcnm_inventory where the state is overridden results in the switches being REMOVED from the fabric and re-discovered, forcing migration mode, a save/deploy cycle, etc. despite the attributes not having changed at all.

Maybe that is somehow (in a non-intuitive way) what "overridden" was intended to mean but that's clearly not what the documentation states.

Steps to Reproduce

Simply run ansible-playbook 01-vxlan_fabric_switches.yaml (YAML copied above) multiple times.

References

Definitely related to #86 in that the idempotency (do nothing if nothing has changed) is missing for dcnm_inventory. Perhaps an overridden "state" is needed to behave as it currently does. However, we need a state (certainly merge, if not more) where, if the Ansible doesn't change and the switch in DCNM hasn't been changed, nothing is applied to DCNM.

Support additional switch role types in dcnm_inventory module

Community Note

  • Please vote on this issue by adding a ๐Ÿ‘ reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Description

Please add support for adding switches to other types of fabrics, such as External Fabric (first priority) and LAN Classic (lesser priority).

Honestly, it seems a bit limiting that the dcnm_inventory module merged the switch discovery and role assignment APIs into a single module when there was no real requirement or dependency enforcing that behavior. As a result, we get this limiting behavior where switches can't be added to DCNM solely because a role isn't in the validated input field.

Perhaps the correct approach to this feature request is simply to split the tasks into simply adding a switch to the fabric (dcnm_inventory) and assigning it a role (dcnm_role)?

New or Affected modules(s):

  • dcnm_inventory

DCNM version

  • V 11.5.1

Potential ansible task config

- name: Merge switch into fabric
    cisco.dcnm.dcnm_inventory:
      fabric: external-fabric
      state: merged
      config:
       - seed_ip: 192.168.0.1
         auth_proto: MD5
         user_name: switch_username
         password: switch_password
         max_hops: 0
         role: access                                  # Valid types:  access, ToR, aggregation, etc. etc.
         preserve_config: False

References

None.

Additional context

dcnm_vrf fails any operation when the number of existing VRFs in DCNM is high (fails with 800; ok with 400)

Community Note

  • Please vote on this issue by adding a ๐Ÿ‘ reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Ansible Version and collection version

ansible 2.10.6
dcnm-collection - 1.2.3

DCNM version

  • V 11.5(2)

Affected module(s)

  • dcnm_vrf

Ansible Playbook

---
- name: Merge vrfs
  hosts: '{{ defaults.hosts }}'
  connection: ansible.netcommon.httpapi
  gather_facts: no

  collections:
    - cisco.dcnm

  vars:
    - leaf_nodes: "{{ lookup('file','./dynamic_inventory/leaf_nodes.json')|from_json }}" 

  vars_files:
    - ./exec_vars/defaults.yaml
    - ./exec_vars/vrfs.yaml

  tasks:
    - name: Dummy set fact for leaf_attach_list
      set_fact:
        leaf_vrf_attach: []

    - name: Build list of VRFs to be deployed
      set_fact:
        vrfs_list: "{{ vrfs_list|default([]) + [{ 'vrf_name': 'TEST_VRF%03d' | format(item), 'deploy': 'no', 'vrf_id': (item | int + 10 + defaults.base_vrf_id) | int, 'vlan_id': (item | int + defaults.base_vlanid) | int, 'attach': leaf_vrf_attach }] }}"
      loop: '{{ range(0, 800) | list }}'
      tags:
        - dev
    - name: Push all VRFs to DCNM
      dcnm_vrf:        
        fabric: '{{ defaults.fabric_name }}'
        state: merged
        config: '{{ vrfs_list }}'
      tags:
        - dev

The variables used in the playbook and declared in "defaults.yaml" vars file are just integers:

  • defaults.base_vlanid = 100
  • defaults.base_vrf_id = 50000

Debug Output

https://gist.github.com/gnakh/f067747d6e0c5ea0c5c2cf2999bfc7e2

Expected Behavior

VRFs are modified, or overridden, or deleted.

Actual Behavior

Ansible task fails

Steps to Reproduce

Run the playbook above and create 800 VRFs (no need to attach them to switches) in DCNM. The playbook will complete successfully.

Run the playbook again (without modification) or modify the last task and change the state to 'deleted' or 'overridden'.

The task will fail.

Creating/deleting/modifying 400 VRFs works just fine. I have not tested beyond that number.
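Until the root cause is found, one possible mitigation (assuming the failure is related to the size of the VRF list handled in a single task) is to push the VRFs in smaller batches:

    - name: Push VRFs to DCNM in batches of 200
      dcnm_vrf:
        fabric: '{{ defaults.fabric_name }}'
        state: merged
        config: '{{ item }}'
      loop: "{{ vrfs_list | batch(200) | list }}"
      tags:
        - dev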

References

Timeout triggered when adding many interfaces

Creating one or two vPC interfaces is no problem.
Adding 40+ seems to work, judging by the interfaces shown in the DCNM web UI, but Ansible triggers a timeout.
Rerunning the same task works fine.
I'll add more data as I find it.
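In case it helps others hitting this, the connection timeouts can be raised for the play while a large number of interfaces is being created; a minimal sketch (the values are arbitrary):

  vars:
    ansible_command_timeout: 3600
    ansible_connect_timeout: 3600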

remote as in vrf lite is hard coded to 65535

Community Note

  • Please vote on this issue by adding a ๐Ÿ‘ reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Ansible Version and collection version

ansible 2.10.5
  config file = /root/.ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /root/virtualenv/ansible/lib/python3.7/site-packages/ansible
  executable location = /root/virtualenv/ansible/bin/ansible
  python version = 3.7.3 (default, Apr  3 2019, 05:39:12) [GCC 8.3.

DCNM version

  • V 11.5.1

Affected module(s)

  • dcnm_vrf

Ansible Playbook

# Copy-paste your anisble playbook here 

same play as issue #65

Debug Output

same output as #65

Expected Behavior

The remote AS should be set to what is configured in the external fabric; in this case, it is 200.

Actual Behavior

After pushing the configuration to the border leaf, this configuration shows up on the border:

 neighbor 10.31.0.2
      remote-as 65535
      address-family ipv4 unicast
        send-community
        send-community extended
        route-map extcon-rmap-filter out

Steps to Reproduce

References

It looks like the AS number is hard coded to 65535:

vrflite_con['VRF_LITE_CONN'][0]['NEIGHBOR_ASN']='65535'
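A rough sketch of the kind of change needed, assuming the neighbor ASN is available from the VRF LITE extension values returned by DCNM (the variable name below is an assumption):

# instead of the hard coded literal
# vrflite_con['VRF_LITE_CONN'][0]['NEIGHBOR_ASN'] = '65535'
vrflite_con['VRF_LITE_CONN'][0]['NEIGHBOR_ASN'] = str(neighbor_asn)  # e.g. 200, taken from the external fabric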

Duplicate Policies created when create_additional_policy is set to false.

Community Note

  • Please vote on this issue by adding a ๐Ÿ‘ reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Ansible Version and collection version

ansible 2.9.27
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/risarwar/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Sep 10 2021, 09:13:53) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]

DCNM version

v2.0.0

Affected module(s)

  • cisco.dcnm.dcnm_policy

Ansible Playbook

- name: Configure Multiple Policies
  cisco.dcnm.dcnm_policy:
    fabric: "{{ fabric }}"
    deploy: false
    state: merged
    config:
      - switch:
          - ip: "{{ Switch_name }}"
            policies:
              - name: "{{item.name}}"
                create_additional_policy: false
                priority: "{{item.priority | default(500)}}"
                description: "{{item.description}}"
                policy_vars: "{{ item.Values | default([])}}"
  register: policy_update_data
  loop: "{{ Policies }}"
  ignore_errors: yes

Debug Output

Expected Behavior

The expected behaviour is that the policies should be created only once; if the playbook is run again with the same input data, no new policies should be created.

Actual Behavior

Duplicate policies are created in DCNM with new Policy ID.

Steps to Reproduce

I have tested this on virtual instances of DCNM version 11.5(3a).

References

Unable to create vpc interfaces

DCNM v11.5.1

Collection Version


ansible.netcommon 1.4.1
cisco.dcnm 1.0.0

---

- hosts: dcnm_controllers
  gather_facts: false
  connection: ansible.netcommon.httpapi

  tasks:
    - name: Create vPC interfaces
      cisco.dcnm.dcnm_interface: &vpc_merge
        fabric: f
        state: merged                         # only choose from [merged, replaced, deleted, overridden, query]
        config:
          - name: vpc1
            type: vpc
            switch:
              - "le1"
              - "le2"
            deploy: true                       # choose from [true, false]
            profile:
              admin_state: true                ## choose from [true, false]
              mode: trunk                      # choose from [trunk, access]
              peer1_pcid: 1                    # choose between [Min:1, Max:4096], if not given, will be VPC port-id
              peer2_pcid: 1                    # choose between [Min:1, Max:4096], if not given, will be VPC port-id
              peer1_members:                   ## member interfaces on peer 1
                - e1/1
              peer2_members:                   ## member interfaces on peer 2
                - e1/1
              pc_mode: 'active'                ## choose from ['on', 'active', 'passive']
              bpdu_guard: true                 ## choose from [true, false, 'no']
              port_type_fast: true             ## choose from [true, false]
              mtu: jumbo                       ## choose from [default, jumbo]
              peer1_allowed_vlans: none        ## choose from [none, all, vlan range]
              peer2_allowed_vlans: none        ## choose from [none, all, vlan range]
              peer1_description: "VPC acting as trunk peer1 - modified"
              peer2_description: "VPC acting as trunk peer2 - modified"

generates the following dry run:

changed: [dcnm] => {
    "changed": true,
    "diff": [
        {
            "deleted": [],
            "deploy": [
                {
                    "ifName": "vPC1",
                    "serialNumber": "serials"
                }
            ],
            "merged": [
                {
                    "interfaces": [
                        {
                            "fabricName": "f",
                            "ifName": "vPC1",
                            "interfaceType": "INTERFACE_VPC",
                            "nvPairs": {
                                "ADMIN_STATE": "true",
                                "BPDUGUARD_ENABLED": "true",
                                "INTF_NAME": "vPC1",
                                "MTU": "jumbo",
                                "PC_MODE": "active",
                                "PEER1_ALLOWED_VLANS": "none",
                                "PEER1_MEMBER_INTERFACES": "e1/1",
                                "PEER1_PCID": "1",
                                "PEER1_PO_CONF": "",
                                "PEER1_PO_DESC": "VPC acting as trunk peer1 - modified",
                                "PEER2_ALLOWED_VLANS": "none",
                                "PEER2_MEMBER_INTERFACES": "e1/1",
                                "PEER2_PCID": "1",
                                "PEER2_PO_CONF": "",
                                "PEER2_PO_DESC": "VPC acting as trunk peer2 - modified",
                                "PORTTYPE_FAST_ENABLED": "true"
                            },
                            "serialNumber": "serials"
                        }
                    ],
                    "policy": "int_vpc_trunk_host_11_1",
                    "skipResourceCheck": "true"
                }
            ],
            "overridden": [],
            "query": [],
            "replaced": []
        }
    ],
    "invocation": {
        "module_args": {
            "config": [
                {
                    "deploy": true,
                    "name": "vpc1",
                    "profile": {
                        "admin_state": true,
                        "bpdu_guard": true,
                        "mode": "trunk",
                        "mtu": "jumbo",
                        "pc_mode": "active",
                        "peer1_allowed_vlans": "none",
                        "peer1_description": "VPC acting as trunk peer1 - modified",
                        "peer1_members": [
                            "e1/1"
                        ],
                        "peer1_pcid": 1,
                        "peer2_allowed_vlans": "none",
                        "peer2_description": "VPC acting as trunk peer2 - modified",
                        "peer2_members": [
                            "e1/1"
                        ],
                        "peer2_pcid": 1,
                        "port_type_fast": true
                    },
                    "switch": [
                        "le1",
                        "le2"
                    ],
                    "type": "vpc"
                }
            ],
            "fabric": "f",
            "state": "merged"
        }
    },
    "response": []
}

But when actually running it, it bails on the interface type:

fatal: [dcnm]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "config": [
                {
                    "deploy": true,
                    "name": "vpc1",
                    "profile": {
                        "admin_state": true,
                        "bpdu_guard": true,
                        "mode": "trunk",
                        "mtu": "jumbo",
                        "pc_mode": "active",
                        "peer1_allowed_vlans": "none",
                        "peer1_description": "VPC acting as trunk peer1 - modified",
                        "peer1_members": [
                            "e1/1"
                        ],
                        "peer1_pcid": 1,
                        "peer2_allowed_vlans": "none",
                        "peer2_description": "VPC acting as trunk peer2 - modified",
                        "peer2_members": [
                            "e1/1"
                        ],
                        "peer2_pcid": 1,
                        "port_type_fast": true
                    },
                    "switch": [
                        "le1",
                        "le2"
                    ],
                    "type": "vpc"
                }
            ],
            "fabric": "f",
            "state": "merged"
        }
    },
    "msg": {
        "DATA": [
            {
                "column": 0,
                "entity": "",
                "line": 0,
                "message": "Interface type is not valid",
                "reportItemType": "ERROR"
            }
        ],
        "MESSAGE": "Internal Server Error",
        "METHOD": "POST",
        "REQUEST_PATH": "https://dcnm:443/rest/globalInterface",
        "RETURN_CODE": 500
    }
}

rest_api_trace.log:

--------------------------------------------------------------------------------
2021.03.19 10:53:23.501  INFO  - REQUEST    [id=36454]
	POST        /rest/globalInterface
	request-id: 36454
	client-ip:  ip
	username:   
	size:       683 B   (683)
	upload time: 
--------------------------------------------------------------------------------
2021.03.19 10:53:23.536  INFO  - RESPONSE    [id=36454]
	POST        /rest/globalInterface
	request-id:  36454
	client-ip:   ip
	username:    admin
	took:        35 ms
	http code:   500
[ {
  "reportItemType" : "ERROR",
  "message" : "Interface type is not valid",
  "entity" : "",
  "line" : 0,
  "column" : 0
} ]
--------------------------------------------------------------------------------

Physical Interface Description of vPC Member Port

Possible feature request?

Is it possible to apply a description to a physical interface that is a member of a vPC? There is a description option present for vPC config but this applies to the port-channel itself rather than the member port.

I am following the example:

- name: Create vPC interfaces
  cisco.dcnm.dcnm_interface: &vpc_merge
    fabric: mmudigon-fabric
    state: merged                         # only choose from [merged, replaced, deleted, overridden, query]
    config:
      - name: vpc750                      # should be of the form vpc<port-id>
        type: vpc                         # choose from this list [pc, vpc, sub_int, lo, eth]
        switch:                           # provide switches of vPC pair
          - ["{{ ansible_switch1 }}",
             "{{ ansible_switch2 }}"]
        deploy: true                      # choose from [true, false]
        profile:
          admin_state: true               # choose from [true, false]
          mode: trunk                     # choose from [trunk, access]
          peer1_pcid: 100                 # choose between [Min:1, Max:4096], if not given, will be VPC port-id
          peer2_pcid: 100                 # choose between [Min:1, Max:4096], if not given, will be VPC port-id
          peer1_members:                  # member interfaces on peer 1
            - e1/24
          peer2_members:                  # member interfaces on peer 2
            - e1/24
          pc_mode: 'active'               # choose from ['on', 'active', 'passive']
          bpdu_guard: true                # choose from [true, false, 'no']
          port_type_fast: true            # choose from [true, false]
          mtu: jumbo                      # choose from [default, jumbo]
          peer1_allowed_vlans: none       # choose from [none, all, vlan range]
          peer2_allowed_vlans: none       # choose from [none, all, vlan range]
          peer1_description: "VPC acting as trunk peer1"
          peer2_description: "VPC acting as trunk peer2"

Thanks...

dcnm_inventory module does not print a "diff" of the config pushed to the device in the run results.

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Description

New or Affected modules(s):

  • dcnm_inventory

DCNM version

  • V x.x.x

Potential ansible task config

# Copy-paste your ansible playbook
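
For illustration only (all values below are placeholders), a task of this shape returns a result today that does not include a per-switch diff of the config pushed to the device; registering and printing the result shows what is currently available:

- name: Add a switch to the fabric and capture the module result
  cisco.dcnm.dcnm_inventory:
    fabric: "{{ fabric }}"                # placeholder fabric name
    state: merged
    config:
      - seed_ip: 192.168.1.10             # placeholder switch management IP
        auth_proto: MD5                   # choose from [MD5, SHA, MD5_DES, MD5_AES, SHA_DES, SHA_AES]
        user_name: admin
        password: "{{ switch_password }}" # placeholder credential
        max_hops: 0
        role: leaf
        preserve_config: false
  register: inventory_result

- name: Inspect the run results (no config diff is present today)
  ansible.builtin.debug:
    var: inventory_result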

References

Additional context
Add any other context or screenshots about the feature request here.

Add support for creating VPC pair

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Description

Aside from spines in a VXLAN fabric, most other switch types (border gateways or leafs) are frequently added as a vPC pair. There is support for adding switches to a fabric, but no support for the next logical step: vPC pairing.

For VXLAN fabrics, the required information is very minimal: namely, which two switches are going to be paired. The remaining settings are derived from fabric settings.

For external fabrics, the vPC domain configuration must be supplied (peer keepalive (PKA), peer-link port channel, etc.).

The example below shows all the options for external fabrics but only the "ip" attribute would be required for VXLAN fabrics. So, as a matter of software, you'd validate the inputs based on fabric type.

New or Affected modules(s):

  • dcnm_vpc_domain

DCNM version

  • V 11.5.1

Potential ansible task config

      dcnm_vpc_domain:
        fabric: "{{ local_fabric }}"
        state: merged
        deploy: false
        config:
          - ip: 10.60.66.231                        # switch management ip (for DCNM lookup)
            pka_ip: 10.60.66.231                    # optional: what PKA source IP to use
            pka_vrf: management                     # optional: provide if non-default
            peerlink_po_id: 1
            peerlink_members: eth1/53-54
            peerlink_description: Ansible created
            peerlink_allowed_vlans: all
          - ip: 10.60.66.232                        # switch management ip (for DCNM lookup)
            pka_ip: 10.60.66.232                    # optional: what PKA source IP to use
            pka_vrf: management                     # optional: provide if non-default
            peerlink_po_id: 1
            peerlink_members: eth1/53-54
            peerlink_description: Ansible created
            peerlink_allowed_vlans: all
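
For a VXLAN fabric, the same proposed task could reduce to just the two switch management IPs, with everything else derived from fabric settings (dcnm_vpc_domain is only the suggested module name here, not an existing module):

      dcnm_vpc_domain:                              # proposed module, does not exist yet
        fabric: "{{ local_fabric }}"
        state: merged
        deploy: false
        config:
          - ip: 10.60.66.231                        # first switch of the pair (for DCNM lookup)
          - ip: 10.60.66.232                        # second switch of the pair (for DCNM lookup)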

References

DCNM 11.5 section on configuring vPC in external fabric

Additional context
