equinix / terraform-equinix-metal-openstack

OpenStack Cloud on Equinix Metal

Home Page: https://registry.terraform.io/modules/equinix/openstack/metal/latest

License: Apache License 2.0

HCL 42.56% Shell 43.52% Python 13.91%
openstack baremetal hybrid-cloud packet virtual-machines

terraform-equinix-metal-openstack's Issues

Error Not Found

What I Was Trying To Do:

  • Follow the README.md steps to create a default OpenStack on Equinix Metal environment

What I Expected To Happen:

  • A Minimal Config environment would be created

Actual Outcome:
% terraform apply
null_resource.blank-hostfile: Refreshing state... [id=7851812259129772224]
tls_private_key.ssh_key_pair: Refreshing state... [id=bf9002a16da97b5ccd749e9c2ffc25b2a44d4534]
random_password.os_admin_password: Refreshing state... [id=none]
random_id.cloud: Refreshing state... [id=cFTzTY7Gu_o]
local_file.cluster_public_key: Refreshing state... [id=d689aaf84ea2a2af4ccc8e99e17968c65f06e2af]
local_file.cluster_private_key_pem: Refreshing state... [id=375a0d224f16c8dcc29e74d4f7d80558a2ccefd0]
metal_ssh_key.ssh_pub_key: Refreshing state... [id=25b5ce12-ea7e-48be-90be-7961027c9f39]

Error: Not found

on BareMetal.tf line 2, in resource "metal_project" "project":
2: resource "metal_project" "project" {

Steps to Reproduce:
% export TF_VAR_metal_project_id=YOUR_PROJECT_ID_HERE (replaced with my project id)
% export TF_VAR_metal_auth_token=YOUR_PACKET_TOKEN_HERE (replaced with my packet token)
% git clone https://github.com/equinix/terraform-metal-openstack
% cd terraform-metal-openstack
% terraform init
% cp sample.terraform.tfvars terraform.tfvars
% terraform apply
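
The error above is raised by a metal_project resource even though an existing TF_VAR_metal_project_id was exported. Purely as an illustrative sketch (not the module's actual code), project creation could be gated behind a toggle such as the metal_create_project variable mentioned in the "Missing TF_VAR documentation" issue below; all names and defaults here are placeholders:

    # Illustrative only: skip creating a metal_project when an existing
    # project ID is supplied, and reference whichever one applies.
    variable "metal_create_project" {
      type    = bool
      default = false
    }

    variable "metal_project_id" {
      type    = string
      default = ""
    }

    resource "metal_project" "project" {
      count = var.metal_create_project ? 1 : 0
      name  = "openstack-demo" # placeholder project name
    }

    locals {
      project_id = var.metal_create_project ? metal_project.project[0].id : var.metal_project_id
    }

With metal_create_project left at false, terraform apply would not touch the metal_project resource at all and would only reference the supplied project ID.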

Uniform Standards Request: Experimental Repository

Hello!

We believe this repository is Experimental and therefore needs the following files updated:

If you feel the repository should be Maintained or end-of-life, or that you'll need assistance to create these files, please let us know by filing an issue at https://github.com/packethost/standards.

Packet maintains a number of public repositories that help customers to run various workloads on Packet. These repositories are in various states of completeness and quality, and being public, developers often find them and start using them. This creates problems:

  • Developers using low-quality repositories may infer that Packet generally provides a low-quality experience.
  • Many of our repositories are put online with no formal communication with, or training for, customer success. This leads to a below-average support experience when things do go wrong.
  • We spend a huge amount of time supporting users through various channels when, with better upfront planning, documentation, and testing, much of this support work could be eliminated.

To that end, we propose three tiers of repositories: Private, Experimental, and Maintained.

As a resource and example of a maintained repository, we've created https://github.com/packethost/standards. This is also where you can file any requests for assistance or modification of scope.

The Goal

Our repositories should be the example from which adjacent, competing projects look for inspiration.

A repository should not look entirely different from other repositories in the ecosystem (for example, by using a different layout, testing model, or logging model) without reason or a recommendation from the community's subject matter experts.

We should share our improvements with each ecosystem while seeking and respecting the feedback of these communities.

Whether or not strict guidelines have been provided for the project type, our repositories should ensure that the same components are offered across the board. How these components are provided may vary, based on the conventions of the project type. GitHub provides general guidance on this which they have integrated into their user experience.

Acc Tests fail with "Could not find matching reserved block, all IPs were []"

The automation is failing:

Error: Could not find matching reserved block, all IPs were []

  on ProviderNetwork.tf line 6, in data "metal_precreated_ip_block" "private_ipv4":
   6: data "metal_precreated_ip_block" "private_ipv4" {



Error: Could not find matching reserved block, all IPs were []

  on ProviderNetwork.tf line 24, in data "metal_precreated_ip_block" "public_ipv6":
  24: data "metal_precreated_ip_block" "public_ipv6" {

The testing account may need additional permissions, or the Terraform configuration should be updated to request a reservation.

Originally posted by @displague in #55 (comment)
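
As a rough sketch of what "requesting a reservation" could look like, assuming the provider's metal_reserved_ip_block resource and an existing metal_facility variable (this requests a public IPv4 block, whereas the failing lookups above are for precreated private IPv4 and public IPv6 blocks, so it illustrates the approach rather than being a drop-in fix):

    # Hypothetical: explicitly reserve an IP block for the test project instead
    # of relying on a precreated block already being present.
    resource "metal_reserved_ip_block" "test" {
      project_id = var.metal_project_id
      facility   = var.metal_facility # assumed variable naming the deployment facility
      type       = "public_ipv4"
      quantity   = 8
    }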

CI: keystone fails to add mariadb dependency

keystone fails to add mariadb dependency:

null_resource.controller-keystone (remote-exec): E: The repository 'http://www.ftp.saix.net/DB/mariadb/repo/10.3/ubuntu bionic Release' does not have a Release file.

make services externally accessible

Services (such as glance) are currently configured with the hostname (i.e., controller) rather than a publicly addressable IP or a DNS-resolvable hostname. This prevents external entities from using the API services. Either the hostnames need to be published in DNS, or public IPs need to be published in the service catalog.

Provider Routed Networks

Set up a configuration that uses flat networking (IPv4), with VM addresses served by provider networking off the compute nodes. This would require a Provider Routed Network architecture and would be the start of a multi-facility deployment.

Remove testing files from the project

I ran into a few things when trying to terraform plan this:

  • main.tf includes a TF Cloud backend that is specific to the testing environment. This file prevents users from consuming the module without modification.
    I think we should move the contents of this file to a secret and create it at test time (see the backend.tf sketch after this list):
     - name: Create testing backend config
       run: |
         echo "$BACKEND_FILE" >> backend.tf
       shell: bash
       env:
         BACKEND_FILE: ${{ secrets.BACKEND_FILE }}

  • OpenStack.tf could be renamed to main.tf (since the existing main.tf would become backend.tf).
  • There is a terraform.yml file in the repository root that is nearly identical to the one in .github/; the one in the root adds terraform fmt and terraform plan steps.
  • terraform fmt applies a few changes to the project that should be included in the PR.
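
For the first bullet, a minimal sketch of what the separated backend.tf injected from the BACKEND_FILE secret might contain (organization and workspace names are placeholders):

    # backend.tf (placeholder values) -- generated at test time from the secret
    # rather than being committed to the repository.
    terraform {
      backend "remote" {
        organization = "example-org"

        workspaces {
          name = "terraform-metal-openstack-ci"
        }
      }
    }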

Update branding

There are various references to Packet which should be rebranded to Equinix Metal. Also, the default configurations target Gen2 hardware; these should be tested and then updated to Gen3.

CI: controller_nova index configuration produces warnings

controller_nova index configuration produces warnings:

null_resource.controller-nova (remote-exec): /usr/lib/python3/dist-packages/pymysql/cursors.py:165: Warning: (1831, 'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release.')
null_resource.controller-nova (remote-exec):   result = self._query(query)
null_resource.controller-nova (remote-exec): /usr/lib/python3/dist-packages/pymysql/cursors.py:165: Warning: (1831, 'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release.')
null_resource.controller-nova (remote-exec):   result = self._query(query)

CI needs to be configured with an API token

A METAL_AUTH_TOKEN secret should be added to this project.
This token must be attached to a dedicated account that only has access to this project.

If a Project level API token is sufficient for this build, we should consider using https://github.com/displague/metal-actions-example with a suggested project name (rather than random).

The user account that provisions the project token would need to be similar to the one used for terraform-provider-metal testing (it could perhaps be the same account, but should be a different user token).
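
For reference, the secret would ultimately feed the provider configuration, either through the existing metal_auth_token variable (TF_VAR_metal_auth_token in CI) or, if the provider supports it, through a METAL_AUTH_TOKEN environment variable; a minimal sketch assuming the variable already exists:

    # Sketch: the CI secret is exported as TF_VAR_metal_auth_token and passed
    # to the provider.
    provider "metal" {
      auth_token = var.metal_auth_token
    }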

CI: sample workload common fails to create openstack resources via CLI

Due to an HTTP 500 when running the openstack CLI, sample workload common fails to create:

  • security group rule
  • openstack subnet
null_resource.openstack-sample-workload-common (remote-exec): Error while executing command: HttpException: 500, Request Failed: internal server error while processing your request.
null_resource.openstack-sample-workload-common (remote-exec): usage: openstack security group rule create [-h]
null_resource.openstack-sample-workload-common (remote-exec):                                             [-f {json,shell,table,value,yaml}]
null_resource.openstack-sample-workload-common (remote-exec):                                             [-c COLUMN] [--noindent]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--prefix PREFIX]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--max-width <integer>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--fit-width] [--print-empty]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--remote-ip <ip-address> | --remote-group <group>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--dst-port <port-range>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--protocol <protocol>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--description <description>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--icmp-type <icmp-type>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--icmp-code <icmp-code>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--ingress | --egress]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--ethertype <ethertype>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--project <project>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--project-domain <project-domain>]
null_resource.openstack-sample-workload-common (remote-exec):                                             <group>
null_resource.openstack-sample-workload-common (remote-exec): openstack security group rule create: error: the following arguments are required: <group>
null_resource.openstack-sample-workload-common (remote-exec): usage: openstack security group rule create [-h]
null_resource.openstack-sample-workload-common (remote-exec):                                             [-f {json,shell,table,value,yaml}]
null_resource.openstack-sample-workload-common (remote-exec):                                             [-c COLUMN] [--noindent]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--prefix PREFIX]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--max-width <integer>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--fit-width] [--print-empty]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--remote-ip <ip-address> | --remote-group <group>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--dst-port <port-range>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--protocol <protocol>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--description <description>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--icmp-type <icmp-type>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--icmp-code <icmp-code>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--ingress | --egress]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--ethertype <ethertype>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--project <project>]
null_resource.openstack-sample-workload-common (remote-exec):                                             [--project-domain <project-domain>]
null_resource.openstack-sample-workload-common (remote-exec):                                             <group>
null_resource.openstack-sample-workload-common (remote-exec): openstack security group rule create: error: the following arguments are required: <group>
null_resource.openstack-flavors: Still creating... [10s elapsed]
null_resource.openstack-flavors: Creation complete after 11s [id=5855033389445283733]
null_resource.openstack-sample-workload-common (remote-exec): Error while executing command: HttpException: 500, Request Failed: internal server error while processing your request.
null_resource.openstack-sample-workload-common (remote-exec): usage: openstack subnet create [-h] [-f {json,shell,table,value,yaml}]
null_resource.openstack-sample-workload-common (remote-exec):                                [-c COLUMN] [--noindent] [--prefix PREFIX]
null_resource.openstack-sample-workload-common (remote-exec):                                [--max-width <integer>] [--fit-width]
null_resource.openstack-sample-workload-common (remote-exec):                                [--print-empty] [--project <project>]
null_resource.openstack-sample-workload-common (remote-exec):                                [--project-domain <project-domain>]
null_resource.openstack-sample-workload-common (remote-exec):                                [--subnet-pool <subnet-pool> | --use-prefix-delegation USE_PREFIX_DELEGATION | --use-default-subnet-pool]
null_resource.openstack-sample-workload-common (remote-exec):                                [--prefix-length <prefix-length>]
null_resource.openstack-sample-workload-common (remote-exec):                                [--subnet-range <subnet-range>]
null_resource.openstack-sample-workload-common (remote-exec):                                [--dhcp | --no-dhcp]
null_resource.openstack-sample-workload-common (remote-exec):                                [--dns-publish-fixed-ip | --no-dns-publish-fixed-ip]
null_resource.openstack-sample-workload-common (remote-exec):                                [--gateway <gateway>] [--ip-version {4,6}]
null_resource.openstack-sample-workload-common (remote-exec):                                [--ipv6-ra-mode {dhcpv6-stateful,dhcpv6-stateless,slaac}]
null_resource.openstack-sample-workload-common (remote-exec):                                [--ipv6-address-mode {dhcpv6-stateful,dhcpv6-stateless,slaac}]
null_resource.openstack-sample-workload-common (remote-exec):                                [--network-segment <network-segment>] --network
null_resource.openstack-sample-workload-common (remote-exec):                                <network> [--description <description>]
null_resource.openstack-sample-workload-common (remote-exec):                                [--allocation-pool start=<ip-address>,end=<ip-address>]
null_resource.openstack-sample-workload-common (remote-exec):                                [--dns-nameserver <dns-nameserver>]
null_resource.openstack-sample-workload-common (remote-exec):                                [--host-route destination=<subnet>,gateway=<ip-address>]
null_resource.openstack-sample-workload-common (remote-exec):                                [--service-type <service-type>]
null_resource.openstack-sample-workload-common (remote-exec):                                [--tag <tag> | --no-tag]
null_resource.openstack-sample-workload-common (remote-exec):                                <name>
null_resource.openstack-sample-workload-common (remote-exec): openstack subnet create: error: argument --network: expected one argument
null_resource.openstack-sample-workload-common: Still creating... [10s elapsed]
null_resource.openstack-sample-workload-common (remote-exec): usage: openstack router add subnet [-h] <router> <subnet>
null_resource.openstack-sample-workload-common (remote-exec): openstack router add subnet: error: the following arguments are required: <subnet>

missing ExternalNetwork.sh?

README.md says that to add external floating IPs, you should run the following on the controller:
sudo bash ExternalNetwork.sh <ELASTIC_CIDR>

Where do we find ExternalNetwork.sh? It appears to be missing from the repository.

Missing TF_VAR documentation

README.md needs to include:

export TF_VAR_metal_create_project=false

OR the default needs to include this variable.
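
One way to address the second option is to give the variable an explicit default so the documented flow works without exporting it; a hypothetical sketch (the description text and the choice of false as the default are mine, not the module's):

    variable "metal_create_project" {
      description = "Create a new Equinix Metal project instead of using an existing metal_project_id"
      type        = bool
      default     = false
    }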

external networking with private IPv4 addresses

It would be nice to have some provider networking enabled (perhaps using the private IPv4 address space). Public IPv4 addresses are scarce, so private IPv4 would work better for demos. This would require allocating the space via Terraform and then setting up the provider networks within OpenStack.

CI: controller neutron fails to modify network_id in SQL migration

controller neutron fails to modify network_id in SQL migration:

null_resource.controller-neutron (remote-exec): INFO  [alembic.runtime.migration] Running upgrade 61663558142c -> 867d39095bf4, port forwarding
null_resource.controller-neutron (remote-exec): INFO  [alembic.runtime.migration] Running upgrade 867d39095bf4 -> d72db3e25539, modify uniq port forwarding
null_resource.controller-neutron (remote-exec): INFO  [alembic.runtime.migration] Running upgrade d72db3e25539 -> cada2437bf41
null_resource.controller-neutron (remote-exec): INFO  [alembic.runtime.migration] Running upgrade cada2437bf41 -> 195176fb410d, router gateway IP QoS
null_resource.controller-neutron (remote-exec): INFO  [alembic.runtime.migration] Running upgrade 195176fb410d -> fb0167bd9639
null_resource.controller-neutron (remote-exec): INFO  [alembic.runtime.migration] Running upgrade fb0167bd9639 -> 0ff9e3881597
null_resource.controller-neutron (remote-exec): INFO  [alembic.runtime.migration] Running upgrade 0ff9e3881597 -> 9bfad3f1e780
null_resource.controller-neutron (remote-exec): INFO  [alembic.runtime.migration] Running upgrade 9bfad3f1e780 -> 63fd95af7dcd
null_resource.controller-neutron (remote-exec): INFO  [alembic.runtime.migration] Running upgrade 63fd95af7dcd -> c613d0b82681
null_resource.controller-neutron (remote-exec): Traceback (most recent call last):
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1246, in _execute_context
null_resource.controller-neutron (remote-exec):     cursor, statement, parameters, context
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 581, in do_execute
null_resource.controller-neutron (remote-exec):     cursor.execute(statement, parameters)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 165, in execute
null_resource.controller-neutron (remote-exec):     result = self._query(query)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 321, in _query
null_resource.controller-neutron (remote-exec):     conn.query(q)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 860, in query
null_resource.controller-neutron (remote-exec):     self._affected_rows = self._read_query_result(unbuffered=unbuffered)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1061, in _read_query_result
null_resource.controller-neutron (remote-exec):     result.read()
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1349, in read
null_resource.controller-neutron (remote-exec):     first_packet = self.connection._read_packet()
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1018, in _read_packet
null_resource.controller-neutron (remote-exec):     packet.check_error()
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 384, in check_error
null_resource.controller-neutron (remote-exec):     err.raise_mysql_exception(self._data)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/err.py", line 107, in raise_mysql_exception
null_resource.controller-neutron (remote-exec):     raise errorclass(errno, errval)
null_resource.controller-neutron (remote-exec): pymysql.err.InternalError: (1832, "Cannot change column 'network_id': used in a foreign key constraint 'subnets_ibfk_1'")

null_resource.controller-neutron (remote-exec): The above exception was the direct cause of the following exception:

null_resource.controller-neutron (remote-exec): Traceback (most recent call last):
null_resource.controller-neutron (remote-exec):   File "/usr/bin/neutron-db-manage", line 10, in <module>
null_resource.controller-neutron (remote-exec):     sys.exit(main())
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/neutron/db/migration/cli.py", line 658, in main
null_resource.controller-neutron (remote-exec):     return_val |= bool(CONF.command.func(config, CONF.command.name))
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/neutron/db/migration/cli.py", line 182, in do_upgrade
null_resource.controller-neutron (remote-exec):     desc=branch, sql=CONF.command.sql)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/neutron/db/migration/cli.py", line 83, in do_alembic_command
null_resource.controller-neutron (remote-exec):     getattr(alembic_command, cmd)(config, *args, **kwargs)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/alembic/command.py", line 279, in upgrade
null_resource.controller-neutron (remote-exec):     script.run_env()
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/alembic/script/base.py", line 475, in run_env
null_resource.controller-neutron (remote-exec):     util.load_python_file(self.dir, "env.py")
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/alembic/util/pyfiles.py", line 98, in load_python_file
null_resource.controller-neutron (remote-exec):     module = load_module_py(module_id, path)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/alembic/util/compat.py", line 174, in load_module_py
null_resource.controller-neutron (remote-exec):     spec.loader.exec_module(module)
null_resource.controller-neutron (remote-exec):   File "<frozen importlib._bootstrap_external>", line 678, in exec_module
null_resource.controller-neutron (remote-exec):   File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/neutron/db/migration/alembic_migrations/env.py", line 120, in <module>
null_resource.controller-neutron (remote-exec):     run_migrations_online()
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/neutron/db/migration/alembic_migrations/env.py", line 114, in run_migrations_online
null_resource.controller-neutron (remote-exec):     context.run_migrations()
null_resource.controller-neutron (remote-exec):   File "<string>", line 8, in run_migrations
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/alembic/runtime/environment.py", line 846, in run_migrations
null_resource.controller-neutron (remote-exec):     self.get_context().run_migrations(**kw)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/alembic/runtime/migration.py", line 365, in run_migrations
null_resource.controller-neutron (remote-exec):     step.migration_fn(**kw)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/neutron/db/migration/alembic_migrations/versions/train/expand/c613d0b82681_subnet_force_network_id.py", line 40, in upgrade
null_resource.controller-neutron (remote-exec):     existing_type=sa.String(36))
null_resource.controller-neutron (remote-exec):   File "<string>", line 8, in alter_column
null_resource.controller-neutron (remote-exec):   File "<string>", line 3, in alter_column
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/alembic/operations/ops.py", line 1775, in alter_column
null_resource.controller-neutron (remote-exec):     return operations.invoke(alt)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/alembic/operations/base.py", line 345, in invoke
null_resource.controller-neutron (remote-exec):     return fn(self, operation)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/alembic/operations/toimpl.py", line 56, in alter_column
null_resource.controller-neutron (remote-exec):     **operation.kw
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/alembic/ddl/mysql.py", line 98, in alter_column
null_resource.controller-neutron (remote-exec):     else existing_comment,
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/alembic/ddl/impl.py", line 134, in _exec
null_resource.controller-neutron (remote-exec):     return conn.execute(construct, *multiparams, **params)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 982, in execute
null_resource.controller-neutron (remote-exec):     return meth(self, multiparams, params)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/sqlalchemy/sql/ddl.py", line 72, in _execute_on_connection
null_resource.controller-neutron (remote-exec):     return connection._execute_ddl(self, multiparams, params)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1044, in _execute_ddl
null_resource.controller-neutron (remote-exec):     compiled,
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1250, in _execute_context
null_resource.controller-neutron (remote-exec):     e, statement, parameters, cursor, context
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1474, in _handle_dbapi_exception
null_resource.controller-neutron (remote-exec):     util.raise_from_cause(newraise, exc_info)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 398, in raise_from_cause
null_resource.controller-neutron (remote-exec):     reraise(type(exception), exception, tb=exc_tb, cause=cause)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 152, in reraise
null_resource.controller-neutron (remote-exec):     raise value.with_traceback(tb)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1246, in _execute_context
null_resource.controller-neutron (remote-exec):     cursor, statement, parameters, context
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 581, in do_execute
null_resource.controller-neutron (remote-exec):     cursor.execute(statement, parameters)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 165, in execute
null_resource.controller-neutron (remote-exec):     result = self._query(query)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 321, in _query
null_resource.controller-neutron (remote-exec):     conn.query(q)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 860, in query
null_resource.controller-neutron (remote-exec):     self._affected_rows = self._read_query_result(unbuffered=unbuffered)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1061, in _read_query_result
null_resource.controller-neutron (remote-exec):     result.read()
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1349, in read
null_resource.controller-neutron (remote-exec):     first_packet = self.connection._read_packet()
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1018, in _read_packet
null_resource.controller-neutron (remote-exec):     packet.check_error()
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 384, in check_error
null_resource.controller-neutron (remote-exec):     err.raise_mysql_exception(self._data)
null_resource.controller-neutron (remote-exec):   File "/usr/lib/python3/dist-packages/pymysql/err.py", line 107, in raise_mysql_exception
null_resource.controller-neutron (remote-exec):     raise errorclass(errno, errval)
null_resource.controller-neutron (remote-exec): oslo_db.exception.DBError: (pymysql.err.InternalError) (1832, "Cannot change column 'network_id': used in a foreign key constraint 'subnets_ibfk_1'")
null_resource.controller-neutron (remote-exec): [SQL: ALTER TABLE subnets MODIFY network_id VARCHAR(36) NOT NULL]
null_resource.controller-neutron (remote-exec): (Background on this error at: http://sqlalche.me/e/2j85)
null_resource.controller-neutron: Creation complete after 1m4s [id=9203633230149787266]

HA services

Set up redundant backend services, including a clustered database and message bus. This could be done with BGP via the upstream Packet routers. The cost to run this setup would go up considerably (from 3 to 8? servers), so maybe use smaller devices?

Provision ssh keys with Terraform

The instruction to create an SSH key by hand (https://github.com/equinix/terraform-metal-openstack#deployment-prep) can be avoided by generating the key with Terraform:
https://github.com/equinix/terraform-metal-anthos-on-baremetal/blob/v0.3.0/main.tf#L30-L44

In this way, the SSH key will be registered in userdata and configured by cloud-init.

Lines such as these (which add the SSH key contents to userdata directly) will not be necessary: https://github.com/equinix/terraform-metal-openstack/blob/4c4f30b7e2cd76f9fa19b41d2d3289889b04d500/BareMetal.tf#L22

Instances of key copying can create the remote file using the content property instead of file, following the same pattern found in the anthos project. https://github.com/equinix/terraform-metal-openstack/blob/4c4f30b7e2cd76f9fa19b41d2d3289889b04d500/DistributeKeys.tf#L16-L19

We can also remove these instructions if we adopt this behavior: https://github.com/equinix/terraform-metal-openstack#ensure-that-your-equinix-metal-account-has-an-ssh-key-attached
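
A minimal sketch of that pattern, reusing the resource names that already appear in this module's state (tls_private_key.ssh_key_pair, metal_ssh_key.ssh_pub_key, local_file.cluster_private_key_pem); the key name and file path are placeholders:

    # Generate the keypair in Terraform, register it with Equinix Metal, and
    # write the private key locally for later SSH access.
    resource "tls_private_key" "ssh_key_pair" {
      algorithm = "RSA"
      rsa_bits  = 4096
    }

    resource "metal_ssh_key" "ssh_pub_key" {
      name       = "openstack-cluster-key" # placeholder name
      public_key = chomp(tls_private_key.ssh_key_pair.public_key_openssh)
    }

    resource "local_file" "cluster_private_key_pem" {
      content         = chomp(tls_private_key.ssh_key_pair.private_key_pem)
      filename        = pathexpand("~/.ssh/openstack-cluster-key") # placeholder path
      file_permission = "0600"
    }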

IPv6 Support

Set up IPv6 support.

Since there are no floating IPs for IPv6, the IPv6 subnets would need to be pushed out to the compute hosts. This would likely require provider routed networks when more than one compute host is used.

HA Dashboard

Set up high availability via BGP to the Packet routers to load-balance across redundant dashboards.
