
1. SDDC Import/Export for VMware Cloud on AWS

1.1. Table of Contents

1.2. Overview

The SDDC Import/Export for VMware Cloud on AWS tool enables customers to save and restore their VMware Cloud on AWS (VMC) Software-Defined Data Center (SDDC) configuration.

There are many situations when customers want to migrate from an existing SDDC to a different one. While HCX addresses the data migration challenge, this tool offers customers the ability to copy the configuration from a source to a destination SDDC.

A few example migration scenarios are:

  • SDDC to SDDC migration from one bare-metal instance type (i3) to a different bare-metal type (i3en, i4i)
  • SDDC to SDDC migration from one region (e.g. London) to a different region (e.g. Dublin)

Other use cases are:

  • Backups - save the entire SDDC configuration
  • Lab purposes - customers or partners might want to deploy SDDCs with a pre-populated configuration.
  • DR purposes - deploy a pre-populated configuration in conjunction with VMware Site Recovery or VMware Cloud Disaster Recovery

1.3. Getting Started

1.3.1. Install Python

This tool depends on Python 3. You can find installation instructions for your operating system in the Python documentation.

1.3.2. Download code

If you know git, clone the repo with

git clone https://github.com/vmware-samples/sddc-import-export-for-vmware-cloud-on-aws.git

If you don't know git, you can download the code from the Flings site

1.3.3. Install Python modules and packages

When you navigate to the sddc_import_export folder, you will find a requirements.txt file that lists all the required Python packages. They can all be installed by running the following command on Linux/Mac:

pip3 install -r requirements.txt

On Windows, use

python -m pip install -r requirements.txt

If you get pip installation errors complaining about being unable to install git+https://github.com/vmware/[email protected], and you don't need to use the vCenter folder export feature, you can do the following:

  • Remove line 10 in requirements.txt (git+https://github.com/vmware/[email protected])
  • Comment out line 54 in sddc_import_export.py (import vcenter)

You must uncomment these lines in order to use the vCenter folder export feature. Installing the required library requires you to have git installed on your local system.

1.3.4. Update vmc.ini

Version 1.1 introduces the gov_cloud_urls flag in vmc.ini. The default value is False - change this value to True if you are using GovCloud.

Access to the VMware Cloud on AWS API is dependent on a refresh token. To generate a token for your account, see the Generate API Tokens help article.

The Org ID and SDDC ID can be found on the Support tab of your SDDCs.

# Refresh tokens generated in the VMC console. Users have a separate token in each org
source_refresh_token    = XXXXXXXXXXXXXXX
dest_refresh_token      = XXXXXXXXXXXXXXX

# Organization and SDDC IDs are easily found in the support tab of any SDDC
source_org_id           = XXXXXXXXXXXXXXX
source_sddc_id          = XXXXXXXXXXXXXXX
dest_org_id             = XXXXXXXXXXXXXXX
dest_sddc_id            = XXXXXXXXXXXXXXX

The vmc.ini configuration can also be passed via command line. Use sddc_import_export --help for syntax.
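
As background, the refresh token configured above is exchanged for a short-lived access token before any API calls are made. The snippet below is a minimal sketch of that exchange, not the tool's own code; it assumes the commercial CSP endpoint, while GovCloud uses a different base URL (which is what the gov_cloud_urls flag controls).

import requests

def get_access_token(refresh_token):
    """Exchange a VMware Cloud Services refresh token for a short-lived access token."""
    url = "https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize"
    response = requests.post(url, params={"refresh_token": refresh_token})
    response.raise_for_status()
    return response.json()["access_token"]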

1.3.5. Update config.ini

Config.ini contains configuration sections for import and export.

There are True/False flags that can be set for each configuration option. The default configuration enables all options.

For example, in this section of the configuration, the compute gateway networks would be exported, but the public IP and NAT associations would not be exported.

# Export the networks configured on the compute gateway?
network_export  = True
network_export_filename = cgw-networks.json

# Export the list of public IP addresses?
public_export = False
public_export_filename = public.json

# Export the NAT rules, including Public IP addresses?
nat_export = False
nat_export_filename = natrules.json
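
These flags are plain booleans that can be read with Python's configparser. The sketch below is illustrative only; the [exportConfig] section name is an assumption for the example, not necessarily the tool's actual section name.

import configparser

config = configparser.ConfigParser()
config.read("config.ini")

# getboolean() understands True/False, yes/no, on/off, 1/0
if config.getboolean("exportConfig", "network_export"):
    filename = config.get("exportConfig", "network_export_filename")
    print(f"CGW networks will be exported to {filename}")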

1.3.5.1. Sync Mode

An experimental sync feature was introduced in version 1.2. The configuration flag is named sync_mode and is found in the importConfig section of config.ini. Note: sync does not support public IP or NAT configurations, nor does it sync deletions. It only syncs new object creations and updates to existing objects.

[importConfig]

# Set this to True if you want to do continuous sync operations i.e. a periodic sync of
# DFW rules from a source to a destination SDDC. The default method of import operations 
# is a PUT. Setting this flag to True changes the method to a PATCH
# Not all settings are supported for sync - public IP and NAT mapping are unsupported
sync_mode = False

The default value is False. When set to True, existing objects in the destination SDDC will be overwritten with values from the exported data. This feature is handy if you have multiple SDDCs that serve identical purposes, such as desktop clusters, and want to push identical firewall rules to all of them. Not all settings are supported for sync - public IP and NAT mapping are unsupported.
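
Conceptually, sync mode only changes the HTTP method used when pushing objects. The sketch below illustrates the idea and is not the tool's own code; the proxy URL, object path, and auth header are placeholders for the example.

import requests

def push_object(proxy_url, object_path, payload, access_token, sync_mode=False):
    """PUT replaces an object wholesale; PATCH merges changes into an existing object."""
    headers = {"csp-auth-token": access_token, "Content-Type": "application/json"}
    url = f"{proxy_url}{object_path}"
    if sync_mode:
        return requests.patch(url, json=payload, headers=headers)  # sync_mode = True
    return requests.put(url, json=payload, headers=headers)        # default import behaviour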

1.3.5.2. Exclude List Filtering

Version 1.3 introduced the ability to filter out objects during an import. The following objects can be filtered:

  • CGW firewall rule
  • CGW firewall group
  • CGW network segment
  • Flexible segment
  • MGW firewall rule
  • MGW firewall group

The exclude filter performs a Python regex match on the display name of the object. The default is to have no filter. The CGW exclude list section of config.ini is shown below:

# Python regex match on CGW group display name, pipe-delimited. See README for examples.
cgw_groups_import_exclude_list =
# Python regex match on CGW rule display name, pipe-delimited. See README for examples.
cgw_import_exclude_list =

You must be careful not to filter out a group that a firewall rule is dependent on. The tool does not enforce group dependencies. If you filter out a group that a firewall rule uses, the firewall rule will fail to import.

1.3.5.2.1. Filtering Examples

Exclude all groups that begin with 'abcd' or 'efgh'

cgw_groups_import_exclude_list = abcd*|efgh*

Exclude all network segments that begin with L2E (HCX extended segments)

network_import_exclude_list = L2E*
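
Under the hood, each pipe-delimited entry is treated as a Python regex and matched against the object's display name; anything that matches is skipped. An illustrative sketch (not the tool's own code):

import re

cgw_groups_import_exclude_list = "abcd*|efgh*"
patterns = [p for p in cgw_groups_import_exclude_list.split("|") if p]

def is_excluded(display_name):
    """Return True if the display name matches any exclude pattern."""
    return any(re.match(p, display_name) for p in patterns)

groups = ["abcdef-group", "prod-web", "efgh-lab"]
print([g for g in groups if not is_excluded(g)])  # ['prod-web']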

A sample config file for VCDR is included in this repository.

1.3.6. Update aws.ini (optional)

If you want to use the optional feature to archive exported zipfiles to S3 storage, you must update aws.ini with a bucket name and credentials with write access to the bucket. You must also set the export_type value in config.ini to 's3'.

[awsConfig]

aws_s3_export_access_id = ""
aws_s3_export_access_secret = ""
aws_s3_export_bucket = ""

aws.ini also includes an option to provide credentials for the customer-owned AWS account connected to the SDDC. This allows for automatic acceptance of the resource share for the managed prefix list feature, as well as configuration of multiple VPC route tables. This is separate from the S3 configuration above and must be filled out for those features to work correctly.

aws_import_access_key_id = 
aws_import_secret_access_key = 
aws_import_session_token = 

The aws.ini configuration can also be passed via command line. Use sddc_import_export --help for syntax.
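
For reference, the S3 archive step amounts to a standard object upload. A minimal sketch with boto3 is shown below; the file, bucket, and credential values are placeholders that map to the aws.ini options above.

import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",      # aws_s3_export_access_id (placeholder)
    aws_secret_access_key="...",      # aws_s3_export_access_secret (placeholder)
)
# Upload an export archive to the configured bucket
s3.upload_file("json/sddc-export.zip", "my-export-bucket", "sddc-export.zip")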

1.3.7. Update vcenter.ini (optional)

If you want to use the optional feature to sync your on-prem vCenter folder structure to VMC, you must update vcenter.ini with the appropriate URLs, credentials, and datacenter name. The tool can only export and import a single datacenter object. The required library is commented out by default in requirements.txt, and line 54 of sddc_import_export.py (import vcenter) is also commented out. You must uncomment both lines and run the requirements.txt installation again to pull in the dependencies. This library requires git to be installed on your local system.

[vCenterConfig]

srcvCenterURL = on-prem-vcenter.domain.com
srcvCenterUsername = [email protected]
srcvCenterPassword = x
srcvCenterDatacenter = Datacenter-Name
srcvCenterSSLVerify = False

destvCenterURL = vcenter.sddc-xx-xx-xx-xx.vmwarevmc.com
destvCenterUsername = [email protected]
destvCenterPassword = x
destvCenterDatacenter = SDDC-Datacenter
destvCenterSSLVerify = True
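
For illustration only, the snippet below shows one way to connect with the settings above using pyVmomi and list datacenter objects. The tool itself relies on the VMware SDK pinned in requirements.txt, and the credentials shown here are placeholders.

import ssl
from pyVim.connect import SmartConnect, Disconnect

context = ssl._create_unverified_context()  # equivalent to srcvCenterSSLVerify = False
si = SmartConnect(host="on-prem-vcenter.domain.com",
                  user="administrator@placeholder.local",   # placeholder credentials
                  pwd="x",
                  sslContext=context)
try:
    for dc in si.content.rootFolder.childEntity:   # datacenter objects
        print(dc.name)
finally:
    Disconnect(si)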

1.4. Running the script

1.4.1. Export

Export saves your existing SDDC configuration from your source SDDC to a set of files that can subsequently be used for import.

Run the following command to export:

python3 sddc_import_export.py -o export

If all of the export options are enabled, this will export a set of files:

  • Services.json

  • cgw_groups.json

  • cgw-networks.json

  • cgw.json

  • dfw_details.json

  • dfw.json

  • dhcp-static-bindings.json

  • flex_seg_disc_prof.json

  • flex_seg.json

  • mcgw_fw.json

  • mcgw_static_routes.json

  • mcgw.json

  • mgw_groups.json

  • mgw.json

  • mpl.json

  • natrules.json

  • nsx_adv_fw_policies.json

  • nsx_adv_fw_profiles.json

  • nsx_adv_fw_rules.json

  • nsx_adv_fw_settings.json

  • nsx_adv_fw_sigs.json

  • public_ip_old_new.json

  • public.json

  • ral.json

  • route_config.json

  • s3-service_access.json

  • sddc_info.json

  • service_access.json

  • services.json

  • t1vpn.json

  • t1vpn_service.json

  • t1vpn_le.json

  • vpn-bgp.json

  • vpn-dpd.json

  • vpn-ike.json

  • vpn-l2.json

  • vpn-l3.json

  • vpn-local-bgp.json

  • vpn-tunnel.json

A config.ini flag named 'export_history' allows the JSON files to be zipped for archival purposes. A related configuration option named 'max_export_history_files' lets you control how many zipped archive files are retained.
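
Functionally, these two options amount to zipping the JSON output and pruning older archives. An illustrative sketch is shown below (not the tool's own code; the folder and naming scheme are examples).

import glob, os, zipfile
from datetime import datetime

export_folder = "json"
max_export_history_files = 10

# Zip every exported JSON file into a timestamped archive
archive = os.path.join(export_folder, f"sddc-export-{datetime.now():%Y%m%d-%H%M%S}.zip")
with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
    for json_file in glob.glob(os.path.join(export_folder, "*.json")):
        zf.write(json_file, arcname=os.path.basename(json_file))

# Keep only the newest max_export_history_files archives
archives = sorted(glob.glob(os.path.join(export_folder, "*.zip")), key=os.path.getmtime)
for old in archives[:-max_export_history_files]:
    os.remove(old)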

Export is read-only and will not make any changes to your source SDDC.

1.4.1.1. Exclusions

Manual VM memberships in groups are not supported; they will be filtered out of the export.

1.4.2. Import

Import will import your saved SDDC configuration from a set of exported JSON files to a destination SDDC.

Run the following command to import:

python3 sddc_import_export.py -o import

Before making changes, the script will prompt you for confirmation:

Script is running in live mode - changes will be made to your destination SDDC. Continue? (y/n): y

Two settings in the config.ini file control whether the script will make changes to your destination SDDC. The first is import_mode: if import_mode is set to live, changes will be made to the destination SDDC. The second is import_mode_live_warning: if this flag is set to True, you will be warned that the script is in live mode and given the option to switch back to test mode. If you want to run the script repeatedly and are absolutely sure of your configuration, set import_mode_live_warning to False; the script will then run in live mode without user intervention.

[importConfig]
# Import mode
# import_mode = test
#  - No changes will be made to the destination SDDC
#
# import_mode = live
#  - Changes will be made to the destination SDDC
import_mode = live

# Script will warn, ask for a confirmation before continuing in live mode
# Set this to false if you are absolutely sure you have your script configured correctly and want to run it automatically
import_mode_live_warning = True
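
Taken together, the two settings behave roughly like the sketch below (illustrative only, not the tool's own code):

import configparser

config = configparser.ConfigParser()
config.read("config.ini")
import_mode = config.get("importConfig", "import_mode")
live_warning = config.getboolean("importConfig", "import_mode_live_warning")

if import_mode == "live" and live_warning:
    answer = input("Script is running in live mode - changes will be made to your "
                   "destination SDDC. Continue? (y/n): ")
    if answer.lower() != "y":
        import_mode = "test"   # fall back to test mode, no changes will be made

print(f"Import mode: {import_mode}")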

Example of a successful import:

nvibert-a01:sddc_import_export nicolasvibert$ python3 sddc_import_export.py -o import
Import mode: live
Importing data into org VMC-SET-TEST (b7793958-b6b6-4916-a008-40c5c47ec24c), SDDC AVI-LONDON-SDDC (1eadc044-a195-4f43-8bbd-fa1089544e6d)
Script is running in live mode - changes will be made to your destination SDDC. Continue? (y/n): y
Live import will proceed
Beginning CGW network import:
Added sddc-cgw-network-1
Import results:
+--------------------+---------+-------------+--------------------+
|    Display Name    |  Result | Result Note |     Segment ID     |
+--------------------+---------+-------------+--------------------+
| sddc-cgw-network-1 | SUCCESS |             | sddc-cgw-network-1 |
+--------------------+---------+-------------+--------------------+
Beginning Services import
Service nico has been imported.
Service nico has been imported.
Service nico3 has been imported.
Beginning CGW import...
CGW Group Blue_VMs has been imported.
CGW Group Red_VMs has been imported.
CGW Group tf-group12 has been imported.
CGW Group tf-group13 has been imported.
CGW Group tf-group14 has been imported.
Firewall Rule VMC to AWS has been imported.
Firewall Rule AWS to VMC has been imported.
Firewall Rule Internet out has been imported.
Firewall Rule Default VTI Rule has been imported.
Beginning MGW import...
Firewall Rule ESXi Provisioning has been imported.
Firewall Rule vCenter Inbound has been imported.
Firewall Rule ESXi Outbound has been imported.
Firewall Rule vCenter Outbound has been imported.
Beginning Public IP export...
Previous IP 44.242.3.1 has been imported and remapped to 3.10.11.226.
Previous IP 34.215.71.251 has been imported and remapped to 3.11.245.46.
Previous IP 54.184.3.133 has been imported and remapped to 18.134.166.17.
Beginning NAT import...
NAT Rule test-nat-rule has been imported.
NAT Rule test-nat-second-rule has been imported.
NAT Rule test-nat-third-rule has been imported.
Import has been concluded. Thank you for using SDDC Import/Export for VMware Cloud on AWS.

1.4.3. Export-Import

Some customers want to run an export from the source, then immediately import into the destination. This option is useful if you want to run the script on a schedule to keep two SDDCs in sync.

Run the following command to run an export-import:

python3 sddc_import_export.py -o export-import

1.4.4. Export NSX-T on-prem

To export your DFW configuration, first edit the nsxConfig section of vcenter.ini with your NSX-T manager URL and credentials.

[nsxConfig]

srcNSXmgrURL = 
srcNSXmgrUsername = 
srcNSXmgrPassword = 
srcNSXmgrSSLVerify = False

Then, run the export-nsx command

python3 sddc_import_export.py -o export-nsx

1.4.5. Import NSX-T on-prem

To import your on-prem DFW configuration into VMC on AWS, first run the export-nsx function.

After the export is complete, ensure that you have vmc.ini configured with a dest_refresh_token, dest_org_id, and dest_sddc_id. The import-nsx process uses the same code path as a standard import from an SDDC, including all of the options in config.ini to enable and disable sections.

Finally, run the import-nsx command:

python3 sddc_import_export.py -o import-nsx

1.4.6. Export vCenter

To export your vCenter server folder structure, set the export_vcenter_folders flag in config.ini to True. Then run the export command:

python3 sddc_import_export.py -o export-vcenter

1.4.7. Import vCenter

To import your vCenter server folder structure, set the import_vcenter_folders flag in config.ini to True. Then run the import command:

python3 sddc_import_export.py -o import-vcenter

1.4.8. Import from zip archive

If you enable the 'export_history' flag, a zipfile containing all of the exported JSON will be written to the /json folder. You can pass the filename to the script as shown below to use the archive as the import source.

python3 sddc_import_export.py -o import -i json/path-to_json-export.zip

1.4.9. Running S3 export as a Lambda function

Install all required packages to a folder

mkdir python_req
cd python_req
pip3 install --target . -r ../requirements.txt

Zip python_req and upload it to a Lambda layer

Change export_folder in config.ini to /tmp, because /tmp is the only writable folder in Lambda

Ensure you have configured aws.ini with your S3 bucket settings

Ensure that you have granted the execution role write permissions to your S3 bucket

Add the following files individually to the function code, or zip them up and upload all at once:

  • config_ini/*
  • lambda_handler.py
  • sddc_import_export.py
  • VMCImportExport.py

Change the Handler runtime settings to invoke_lambda.lambda_handler

Execute your Lambda function. Although it is possible to configure values in the config_ini files that you upload to the function code, it might be preferable to pass the required values via command line argument. See invoke_lambda.py for an example.
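
As a rough illustration of the wiring (not the repository's actual handler - see lambda_handler.py and invoke_lambda.py for the real entry points), a handler could invoke the script with command-line arguments along these lines:

import subprocess

def lambda_handler(event, context):
    # config.ini's export_folder must point at /tmp, the only writable path in Lambda
    result = subprocess.run(
        ["python3", "sddc_import_export.py", "-o", "export"],
        capture_output=True, text=True)
    return {"statusCode": 200 if result.returncode == 0 else 500,
            "body": result.stdout[-1000:]}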

1.4.10. Cloud Services Platform Role Sync

You can sync user roles with the rolesync option.

Note: the API tokens you use must have Org Owner permissions. You can sync across orgs by using a different source and destination OrgID. Alternatively, if your sync template and destinations are in the same org, you can configure the source and destination OrgIDs to be identical.

Known issue: if the source user has roles assigned that do not exist in the destination, the sync will fail. The initial release of this feature has no logic to detect invalid roles. For example, a source user is assigned roles for both VMC on AWS and vRealize Automation Cloud, but only VMC on AWS is activated in the destination org - vRealize Automation is not. The sync will fail. The sync would succeed if the source user had only VMC on AWS roles assigned.

Feature limits: in the initial release of this feature, the sync is additive - any roles assigned to the source will be added to the destination. It does not delete any roles in the destination.

First, configure a template account in the Cloud Services Platform, granting it all of the roles you want to synchronize.

Next, configure the role_sync_source_user_email property in config.ini with the email address of your template account - for this example, we set it to [email protected].

Then, configure the role_sync_dest_user_emails property in config.ini with the email address(es) of your destination accounts. These accounts must already exist for the sync to work.

You can then run the sync:

python3 sddc_import_export.py -o rolesync

Example output:

Looking up template user [email protected]
Looking up destination user [email protected]
Role sync success: [email protected]>[email protected]

You can also run the sync directly from the command line without configuring config.ini:

python3 sddc_import_export.py -o rolesync -rss [email protected] -rsd [email protected],[email protected]

Example output:

loaded role sync source user email from command line
Loaded role sync dest user emails from command line
Looking up template user [email protected]
Looking up destination user [email protected]
Role sync success: [email protected]>[email protected]
Looking up destination user [email protected]
Role sync success: [email protected]>[email protected]

1.4.11. Testbed commands

The testbed command lets you create a number of test objects. This is useful when experimenting with API limits in VMC on AWS.

The testbed command always operates on the destination SDDC. It obeys the import_mode setting in config.ini.

This will create 1,500 test Compute Gateway groups, starting with cgw-test-group-0000.

python sddc_import_export.py -o testbed --test-name create-cgw-groups --num-objects 1500

This will delete 1,500 test Compute Gateway groups, starting with cgw-test-group-0000

python sddc_import_export.py -o testbed --test-name delete-cgw-groups --num-objects 1500

You can also pass the --start-num argument. The default is 0. This will create 1,500 test Compute Gateway groups, starting with cgw-test-group-0050

python sddc_import_export.py -o testbed --test-name create-cgw-groups --num-objects 1500 --start-num 50

This will delete 1,500 test Compute Gateway groups, starting with cgw-test-group-0050

python sddc_import_export.py -o testbed --test-name delete-cgw-groups --num-objects 1500 --start-num 50

This will delete ALL CGW GROUPS. Use with extreme caution.

python sddc_import_export.py -o testbed --test-name delete-all-cgw-groups

sddc-import-export-for-vmware-cloud-on-aws's People

Contributors

dependabot[bot], fr0gger03, kremerpatrick, nvibert


sddc-import-export-for-vmware-cloud-on-aws's Issues

Add tags infos on resources

Describe the bug

Missing tags on segments and Tier-1 gateways can be an issue if NSX groups are based on them.

When importing an NSX configuration that uses tags on those objects, the import fails because the object tags are missing.

Reproduction steps

  1. add tag to a segment
  2. create a rule based on tag on a segment
  3. import the configuration to new sddc
  4. nsx group import fails as segment tag is missing
    ...

Expected behavior

Object tags must be added during the import if they are available on the source SDDC.

Tags must be added in the following functions:

  • import_flex_segments
  • import_mcgw

To add tags only if they are present on the source object, code like the following can be used:

if "tags" in f:
    json_data["tags"] = f["tags"]

Additional context

No response

CGW group import bug

CGW groups with longer names appear in the dfw_details.json file with the UID rather than the name. This results in import errors because the UID does not match the name.

Enable SDDC version checking

Is your feature request related to a problem? Please describe.

APIs differ depending on the SDDC version; the Fling must check the version so that the correct API is executed.

Describe the solution you'd like

APIs differ depending on the SDDC version; the Fling must check the version so that the correct API is executed.

Describe alternatives you've considered

No response

Additional context

No response

IPv6 Support

Is your feature request related to a problem? Please describe.

VMC 1.22 added support for IPv6 on flexible segments connected to non-default CGWs. Import/Export support for the following would be great:

  • IPv6 SDDC enablement
  • IPv6 on flexible segments
  • IPv6 route aggregation prefix lists
  • IPv6 prefix lists applied to route configurations
  • IPv6 in MGW policy
  • IPv6 in CGW policy
  • IPv6 in DFW policy

Describe the solution you'd like

Support for the items listed above.

Describe alternatives you've considered

No response

Additional context

No response

Lambda update

Revisit Lambda instructions, should work as CLI option without having to edit any ini files ahead of time.

M16 - NSX-T Advanced Firewall Add-On

With SDDC v1.16, VMware added the NSX-T Advanced Firewall Add-On, which brought the ability to add IDS/IPS, L7 App-ID, FQDN, and User-ID filtering.

Improve error checking

Error checking should be improved across a number of areas

  • Error handling if the ini files are not configured
  • Checks to ensure item is configured before attempting an export

Custom services are not imported/exported for DFW

The code currently checks whether you are exporting the CGW or MGW and automatically exports custom services. It does the same thing on import - services get imported before the CGW or MGW are imported. However, the DFW also relies on custom services, and there is no equivalent check.

This bug only shows up if you import only the DFW, and the DFW references custom services, without also importing the CGW or MGW. If you export only the DFW, the services will not be there, and the subsequent import will fail.

BUG: If no Tier-1 VPN configured on source SDDC, export errors

Describe the bug

If no Tier-1 VPN is configured on the source SDDC, the export errors out if Tier-1 VPN export is enabled in the config.ini file

Reproduction steps

  1. Attempt to export an SDDC config from an SDDC without a Tier-1 VPN configured

...

Expected behavior

If no Tier-1 VPN is configured, the script should detect that and continue on

Additional context

No response

Make services and groups optional

Currently the code forces you to import both services and groups before you're allowed to import firewall rules. This is generally good in that it keeps firewall rules from referencing services or groups that are missing from the destination.

However, customers with extremely large numbers of services and groups have performance problems as the system goes through unnecessary imports. Services rarely change. Groups change more often than services, but not as often as the firewall rules.

The request is to make service and group import an optional configuration item, defaulting to True.

Refresh token timeout inconsistency

Token refreshes are implemented inconsistently. Export functions call the self.check_access_token_expiration() function to ensure the token's expiration time is still current, but most of the import functions do not.

NAT import without public IP

NAT import is meaningless without a public IP mapping. A check needs to be made to force NAT rules and public IPs to be imported together. If there is no public IP mapping, the import currently errors out.

BUG: import errors out in test mode

Describe the bug

When the import is run in "test" mode and Tier-1 GW import is enabled, the script errors out

Reproduction steps

  1. Enable import of MCGW
  2. Place import into test mode

Expected behavior

Script should run through and test the import of the MCGW configs.

Additional context

No response

Flexible segments of type "Disconnected" do not import

Describe the bug

Flexible segments of type "Disconnected" do not import and cause the script to fail.

Reproduction steps

Disconnected segments do not have a "connectivity_path" key in their JSON.

Expected behavior

Import disconnected segments correctly

Additional context

No response

Route Aggregation lists and configuration

Describe the bug

API set changed from M21 to M22; existing functions must incorporate new API and SDDC version checking for proper application.

Reproduction steps

  1. Attempt import/export of route aggregation lists and configurations with M22

Expected behavior

should work properly with M22

Additional context

No response

Support for Managed Prefix List Mode

Is your feature request related to a problem? Please describe.

In VMC 1.18, the ability to configure Managed Prefix List mode for the Connected VPC was enabled. SDDC I/E should check for this to be enabled or disabled and configure it accordingly.

Describe the solution you'd like

Support for Managed Prefix List Mode

Describe alternatives you've considered

No response

Additional context

No response

Fix undesired import behavior

Commit 0c2cfd1 introduced undesired behavior where mgw/cgw/dfw object imports were still required in order to do services or compute/management group imports.

Need to have 2 separate flags for groups: compute and management.

Need to decouple the import operations.

FR - Write to logfile every time the import/export tool is run

Feature request raised during the Tech Weekday call:
Is there a way to write to a log file every time the SDDC import/export tool is run? This would allow for better analysis of things like when the tool was run, and better diagnosis when it fails to run.

DFW scaling issues

DFW rule export/import takes quite a long time when the rule set is large.
Cannot export more than 1,000 NSX services and groups.

Add support for NSX Security Tags

Is your feature request related to a problem? Please describe.

SDDC Import/Export needs the ability to export and import NSX security tags and categories

Describe the solution you'd like

na

Describe alternatives you've considered

No response

Additional context

No response

DFW import error summary

Create a DFW import failure summary to make it easy to see which DFW rules imported and which didn't. When there are hundreds of DFW rules, it is difficult to watch the console output and pick out the errors. A summary at the end would help in SDDCs with large numbers of rules.

Segments imported multiple times in import_flex_segments

Describe the bug

When importing an NSX configuration into an SDDC, the import_flex_segments function imports each segment multiple times because the flex_segment_import_exclude_list handling has an indentation issue.

Reproduction steps

  1. import the nsx configuration
  2. check the output logs when importing the flex segments
  3. "error_message" : "Cannot create an object with path=[/infra/segments/seg-aaa-db] as it already exists."
    ...

Expected behavior

Import each segment only once; do not try to import it multiple times.

Additional context

The error is in VMCImportExport.py at line 1246. The indentation is not correct: the segment PUT code is inside the flex_segment_import_exclude_list loop.
The if skip_network is True check must be at the same level as for e in self.flex_segment_import_exclude_list:

Current code is the following:

        for f in flex_segments:
            skip_network = False
            for e in self.flex_segment_import_exclude_list:
                m = re.match(e,f['display_name'])
                if m:
                    print(f"{f['display_name']}, skipped - matches excluseion regex")
                    skip_network = True
                    break
                if skip_network is True:
                    continue
                result = ""
                result_note = ""
                json_data = {}

It must be the following:

        for f in flex_segments:
            skip_network = False
            for e in self.flex_segment_import_exclude_list:
                m = re.match(e,f['display_name'])
                if m:
                    print(f"{f['display_name']}, skipped - matches excluseion regex")
                    skip_network = True
                    break
            if skip_network is True:
                continue
            result = ""
            result_note = ""
            json_data = {}

Fix handling of groups that cannot be imported

38bf692 resolves a problem where any group that was skipped due to having an unsupported ExternalIDExpression would cause any subsequent groups to be skipped.

However, once these groups are skipped, any firewall rule relying on them errors upon import with dependencies missing. All supported firewall rules still import successfully, but it is a less than optimal user experience for the ones that are missed. The code needs to keep a list of groups that are skipped due to ExternalIDExpression, then skip any subsequent firewall rule containing those expressions with an explanation.

HCX Public IPs

I exported a customer's SDDC and then re-imported it into a new SDDC, and noticed that the HCX public IPs were also included. These should probably be ignored, since HCX will most likely be deployed via the Add-Ons tab.


DPD import errors

DPD configurations are exported but not imported. If a VPN connection has a custom DPD configuration, it will error on import.

bug: Custom BGP prefix lists are not imported

Example error:

Beginning BGP Neighbors...
API Call Status 400, text:{
"httpStatus" : "BAD_REQUEST",
"error_code" : 503047,
"module_name" : "Policy",
"error_message" : "Invalid prefix-list /infra/tier-0s/vmc/prefix-lists/stretch-filter."
}
{"id": "f3eec340-e402-11ea-8356-ab0cb657a449", "neighbor_address": "169.254.111.5", "remote_as_num": "65010", "route_filtering": [{"enabled": true, "address_family": "IPV4", "out_route_filters": ["/infra/tier-0s/vmc/prefix-lists/stretch-filter"]}],
"keep_alive_time": 60, "hold_down_time": 180, "allow_as_in": false, "maximum_hop_limit": 1, "resource_type": "BgpNeighborConfig", "display_name": "f3eec340-e402-11ea-8356-ab0cb657a449", "marked_for_delete": false, "overridden": false}

M18 - Multi-CGW Export/Import

With SDDC v1.18, VMware released the ability to create multiple Tier-1 compute gateways and segments attached to those gateways. We need to add the ability to:

  1. Export multiple tier-1 CGWs and Segments from VMC/A SDDC
  2. Export multiple tier-1 CGWs and Segments from NSX-T On-Prem
  3. Import multiple tier-1 CGWs and Segments into VMC/A SDDC

Support for MCGW based VPNs

Is your feature request related to a problem? Please describe.

VMC 1.18 added the ability to configure IPsec VPNs on customer created CGWs. Support is needed for the following VPN types:
Routed VPNs
Policy VPNs
L2 VPNs

This will require collecting the following configs:
VPN Service
IPsec sessions
Local Endpoints

Describe the solution you'd like

Support for the items above

Describe alternatives you've considered

Click, click, click

Additional context

No response

Make import more resilient when expected config files are missing

This came up when somebody did an export, pulled down new code, then did an import. The import bombed because we added the DFW import feature but there was no dfw file to import from.

It's easily fixable by changing the config file. But instead of bombing out, the code should automatically disable the import flag for that particular section if the file is not found, then print a warning.
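
An illustrative sketch of the suggested behaviour (the helper and file names here are assumptions, not existing code):

import os

def guard_import_flag(flag_enabled, filename, feature):
    """Disable an import flag and warn when its export file is missing."""
    if flag_enabled and not os.path.exists(filename):
        print(f"Warning: {filename} not found - skipping {feature} import.")
        return False
    return flag_enabled

dfw_import = guard_import_flag(True, "json/dfw.json", "DFW")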

Unable to retrieve access token

As of today, I've been getting the following error when running the export command:

Unable to retrieve access token. Server response: None

I've tried this on multiple computers and in multiple orgs to rule out API token or connection issues - all with the same result. I've also verified that these tokens do work, as I'm able to authenticate via other methods (e.g. Terraform) using the same token.

Export/Import of policy based VPN does not register in the Network & Security tab of the SDDC

When I set up a policy-based VPN within an SDDC and export it, the export reports success. However, when I spin up a new SDDC and attempt to import, it does not import correctly. The import process reports success and the BGP ASN does get updated, but the VPN is not listed in the Network & Security tab of the SDDC. If I attempt to re-import the config, it fails saying the VPN configurations are already there.

When looking through the various JSON files, it appears vpn-l3.json is empty, while vpn-bgp.json, vpn-ike.json, vpn-local-bgp.json and vpn-tunnel.json all contain exported information.

DFW import bug

DFW import for rules in sections with a # in the rule name showed no error, but did not import.

M18 SDDC - cannot find NSX-T proxy

SDDC import/export errors out with "Unable to get NSX-T proxy URL. API response:" when targeting an M18 SDDC, likely because ['resource_config']['nsx_api_public_endpoint_url'] has moved.

Improve configloader

The config loader section has become a bit unwieldy when adding new config options. We need to switch it to a more dynamic model.

Token expiration

Today, once the 1800 seconds expire, the token expires and the script fails. We need to support token refreshes.

Prefix filters?

From the Tech Weekday call - does the import/export tool capture prefix filters? - à la Tom Twyman
