coreos / tectonic-installer

Install a Kubernetes cluster the CoreOS Tectonic Way: HA, self-hosted, RBAC, etcd Operator, and more

License: Apache License 2.0

kubernetes etcd containers container-linux coreos terraform terraform-modules docker oci

tectonic-installer's Introduction

Tectonic Installer

The CoreOS and OpenShift teams are now working together to integrate Tectonic and OpenShift into a converged platform, which will be developed in https://github.com/openshift/installer. We'll consider all feature requests for the new converged platform, but will not be adding new features to this repository.

In the meantime, current Tectonic customers will continue to receive support and updates. Any such bugfixes will take place on the track-1 branch.

See the CoreOS blog for any additional details: https://coreos.com/blog/coreos-tech-to-combine-with-red-hat-openshift

Note: The master branch of the project reflects a work-in-progress design approach that works only on AWS and libvirt. To deploy Tectonic to other platforms, e.g. Azure, bare metal, OpenStack, etc., please check out the track-1 branch of this project, which maintains support for the previous architecture and more platforms.

tectonic-installer's People

Contributors

aaronlevy, alexsomesan, amrutac, athai, bison, brancz, chancez, coverprice, cpanato, dghubble, diegs, enxebre, erikkn, estroz, ethernetdan, ggreer, justaugustus, kans, kyoto, lander2k2, mxinden, philips, quentin-m, rithujohn191, robszumski, sozercan, squat, sym3tri, trawler, zbwright


tectonic-installer's Issues

Wire tectonic_etcd_servers

For each platform, we need to wire the tectonic_etcd_servers:

  • Don't provision etcd
  • Configure the modules to use that list rather than the output of the etcd provisioning

This loosely relies on hashicorp/terraform#12453; there is a dirty workaround that we currently use in some places already.
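
A minimal HCL sketch of what this could look like on AWS, using the join/split workaround for choosing between the two lists (variable, resource, and module names here are illustrative, not the repository's actual ones):

variable "tectonic_etcd_servers" {
  type        = "list"
  default     = []
  description = "Externally managed etcd endpoints; when non-empty, etcd is not provisioned"
}

# Skip etcd provisioning entirely when external servers are supplied.
resource "aws_instance" "etcd" {
  count         = "${length(var.tectonic_etcd_servers) > 0 ? 0 : 3}"
  ami           = "${var.container_linux_ami}"   # hypothetical variable
  instance_type = "t2.medium"
}

# join/split workaround for passing one of two lists onward (hashicorp/terraform#12453).
module "bootkube" {
  source         = "../../modules/bootkube"      # illustrative path
  etcd_endpoints = "${split(",", length(var.tectonic_etcd_servers) > 0 ? join(",", var.tectonic_etcd_servers) : join(",", aws_instance.etcd.*.private_ip))}"
}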

Setup automated builds + smoke tests

As soon as possible, we should come up with a plan and execute on running:

  • Smoke tests for every PR
    • terraform fmt
    • terraform validate
    • orphan variables / parameters check
    • terraform apply on AWS with basic parameters
  • Nightly builds
  • Daily scenarized tests (new VPC, existing VPC, external etcd, self-hosted etcd, ...)
  • E2E tests

For the scenarized / E2E tests, we should sync with @pbx0, @ethernetdan and @bison to see how we can integrate with the existing tooling.

I think it could be interesting to have something like:

tests/scenarios/
tests/scenarios/aws-existing-vpc.tfvars <--- Input variables used to run tectonic-terraform-sdk
tests/scenarios/aws-existing-vpc.tf     <--- Extra infrastructure to deploy before running
                                             tectonic-terraform-sdk (e.g. existing VPCs); its output is used to substitute variables in the associated .tfvars
tests/scenarios/aws-self-hosted-etcd.tfvars
[...]

We have set up a public Jenkins to run all of these.

azure: TPR 404 on prometheus

Workaround was:

sudo bash /opt/tectonic/tectonic.sh kubeconfig /opt/tectonic/tectonic

Error logs:

Mar 20 22:34:35 tectonic-master-000002 bash[2041]: [  331.108351] bootkube[5]:         Pod Status: kube-controller-manager        Running
Mar 20 22:34:35 tectonic-master-000002 bash[2041]: [  331.108610] bootkube[5]: All self-hosted control plane components successfully started
Mar 20 22:34:35 tectonic-master-000002 bash[3102]: Waiting for Kubernetes API...
Mar 20 22:34:40 tectonic-master-000002 bash[3102]: Waiting for Kubernetes components...
Mar 20 22:34:55 tectonic-master-000002 bash[3102]: Creating Tectonic Namespace
Mar 20 22:34:55 tectonic-master-000002 bash[3102]: Creating Initial Roles
Mar 20 22:34:55 tectonic-master-000002 bash[3102]: Creating Tectonic ConfigMaps
Mar 20 22:34:55 tectonic-master-000002 bash[3102]: Creating Tectonic Secrets
Mar 20 22:34:56 tectonic-master-000002 bash[3102]: Creating Tectonic Identity
Mar 20 22:34:56 tectonic-master-000002 bash[3102]: Creating Tectonic Console
Mar 20 22:34:56 tectonic-master-000002 bash[3102]: Creating Tectonic Monitoring
Mar 20 22:34:56 tectonic-master-000002 bash[3102]: Waiting for third-party resource definitions...
Mar 20 22:35:16 tectonic-master-000002 bash[3102]: Failed to create monitoring/prometheus-k8s.json (got 404):
Mar 20 22:35:16 tectonic-master-000002 bash[3102]: {"apiVersion":"monitoring.coreos.com/v1alpha1","kind":"Prometheus","metadata":{"name":"k8s","namespace":"tectonic-system","selfLink":"/apis/monitoring.coreos.com/v1alpha1/namespaces/tectonic-system/prometheuses/k8s","uid":"7dc23af4-0dbd-11e7-b41c-000d3a1648ea","resourceVersion":"1216","creationTimestamp":"2017-03-20T22:35:16Z","labels":{"prometheus":"k8s"}},"spec":{"replicas":1,"resources":{"limits":{"cpu":"400m","memory":"2000Mi"},"requests":{"cpu":"200m","memory":"1500Mi"}},"serviceAccountName":"prometheus-k8s","version":"v1.5.2"}}
Mar 20 22:35:16 tectonic-master-000002 systemd[1]: tectonic.service: Main process exited, code=exited, status=1/FAILURE
Mar 20 22:35:16 tectonic-master-000002 systemd[1]: Failed to start Bootstrap a Tectonic cluster.
Mar 20 22:35:16 tectonic-master-000002 systemd[1]: tectonic.service: Unit entered failed state.
Mar 20 22:35:16 tectonic-master-000002 systemd[1]: tectonic.service: Failed with result 'exit-code'.

openstack: add multi-node etcd

Currently we bootstrap only one etcd node on OpenStack Neutron/Nova. This was fine for the initial bootstrap, but we need to catch up with the other platforms.

openstack: DNS for Nova network

The openstack-nova PoC includes AWS Route 53 for resolving master, worker, and etcd nodes. Apparently we cannot rely on this setup in a Nova-only environment and need a fallback solution for this.

A poor man's solution that comes to mind is to run dnsmasq as a DaemonSet using nodeSelector: master: true and hostNetwork: true, and invoking dnsmasq with --addn-hosts ./hosts, where

$ cat hosts
<ip1> <master1>
<ip2> <master2>
...
<ip3> <etcd1>
<ip4> <etcd2>
...
<ip5> <worker1>
<ip6> <worker2>
...

The above hosts file could maybe be distributed as a ConfigMap to allow updates on worker additions/removals? For a first pass this could be static and generated in Terraform.
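
A minimal sketch of generating the hosts file from Terraform, assuming the local_file resource and OpenStack master instances (resource names are illustrative; worker and etcd entries could be appended the same way with concat()):

# Render an --addn-hosts file from the instances Terraform already knows about.
# formatlist() zips the IP and name lists into "<ip> <hostname>" lines.
resource "local_file" "dnsmasq_hosts" {
  filename = "${path.module}/hosts"
  content  = "${join("\n", formatlist("%s %s", openstack_compute_instance_v2.master.*.access_ip_v4, openstack_compute_instance_v2.master.*.name))}"
}

The rendered file could then be shipped to the masters (e.g. via Ignition) and mounted into the dnsmasq DaemonSet, or later replaced by a ConfigMap to allow updates.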

We'd distribute the kubelet's resolv.conf files on the worker nodes, containing:

$ cat /etc/resolv.conf
nameserver <ip of master1>
nameserver <ip of master2>
nameserver <ip of master3>

Does this sound stupid/reasonable/worth tackling in this PoC?

/cc @philips @alexsomesan

Skip SCP/SSH Step on AWS & Multi-Masters

Per one of the proposals discussed offline, replace the SCP/SSH step on AWS that is used for asset transfer and for bootkube/tectonic execution, so that we can start running multiple controller nodes.

azure: document how to scale workers

Right now if you re-run terraform apply ./platforms/azure you will end up re-running bootkube. We need to document how to scale the workers independently of deploying the cluster.

vmware: support for static IPs

It's common to have statically assigned IP addresses in VMware environments.

It would be great to support a CSV-based (or similar) inventory that assigns nodes their IP addresses. Based on my past experience managing VMware infrastructure, most shops do not enable DHCP on server VLANs.

aws: if tectonic_aws_az_count != etcd node count, "terraform apply" fails

If we set tectonic_aws_az_count=2, terraform apply fails:

Error applying plan:

1 error(s) occurred:

* index 2 out of range for list var.etcd_subnets (max 2) in:

${var.etcd_subnets[count.index]}

This is because we still create three etcd instances, without taking into account that there are only two AZs. We need some modulo logic here.
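
A minimal sketch of that modulo logic: element() wraps its index around the list length, unlike direct indexing (resource and variable names are illustrative):

resource "aws_instance" "etcd" {
  count         = 3
  ami           = "${var.container_linux_ami}"   # hypothetical variable
  instance_type = "t2.medium"
  # element() indexes modulo the list length, so with 2 subnets the 3 nodes land in
  # subnets 0, 1, 0 instead of failing with "index 2 out of range".
  subnet_id     = "${element(var.etcd_subnets, count.index)}"
}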

Documentation - Cleanup [placeholder]

This is a placeholder for me to update the documentation based on experience at a customer site running through openstack-novanet.

  • Add to prerequisites: jq and terraform binaries
  • The make syntax didn't work for me: I went through the Makefile step by step instead. Most likely an issue with my folder layout, but we should probably make this cleaner. There are references in the documentation to the boot folder, the build folder, etc.
  • The CoreOS on OpenStack example (this can potentially go into upstream documentation) mentions --visibility=public for the Glance image upload. My customer did not have rights to publish images outside their project.

azure: put masters and workers into AvailabilitySets and ScaleSets

ScaleSets currently cannot attach different disks to different virtual machines. This means that Kubernetes persistent volumes will not work with our multi-machine setup.

To fix this we need to offer the ability to spin up via AvailabilitySets instead. Should we create a separate module for users who want AvailabilitySets vs. ScaleSets, similar to how we do it with AWS?

cc @alexsomesan

AWS: Upload entire Ignition config rather than individual files

Instead of uploading individual files (e.g. kubeconfig, init-master.sh) to S3 and pulling them back individually, we should upload the whole Ignition stack to S3. Even if we keep uploading these files individually, we will eventually hit the UserData size limit again, and it is also much more complex to manage and maintain. This involves (see the sketch after this list):

  • Adding support for straight up S3 pulls in Ignition (so it can pull the S3 object directly)
  • Adding a new Terraform resource for creating Signed S3 URLs (so the object can be provided to Ignition as a https:// link, which it supports)
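
On the Terraform side, uploading the rendered Ignition config as a single S3 object might look like this (a minimal sketch; the bucket variable and the ignition_file reference are hypothetical, and the signed-URL resource mentioned above does not exist yet):

data "ignition_config" "master" {
  files = ["${data.ignition_file.kubeconfig.id}"]   # hypothetical file entries
}

resource "aws_s3_bucket_object" "ignition_master" {
  bucket  = "${var.assets_bucket}"                   # hypothetical variable
  key     = "ignition_master.json"
  content = "${data.ignition_config.master.rendered}"
}

The instance's stub UserData would then only need to point at that one object, either directly once Ignition can pull from S3, or via a signed https:// URL once such a Terraform resource exists.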

vmware: support vsphere cloud provider

VMware has Kubernetes --cloud-provider=vsphere support, as described in the documentation [0]. Cloud-provider plugins mainly allow integration with VMware's storage stack for block storage, similar to the aws-ebs integration. Apart from the block storage integration, it also supports self-discovery, such as nodes adding metadata about their placement in the VMware infrastructure.

I created a short demo at https://youtu.be/gfjwwkTYNRQ (needs CoreOS SSO).

Opening this issue to discuss the right way to implement it. Per the upstream documentation, the implementation requires: "Provide the cloud config file to each instance of kubelet, apiserver and controller manager via --cloud-config="

Since kube-apiserver and controller-manager are self-hosted, it's possible to inject --cloud-config via a Kubernetes secret. However, the kubelet requires the configuration file to be on disk. Since we need to deploy this file onto disk anyway, it's probably best to have kube-apiserver and controller-manager use hostPath mounts as well. Since the kubelet already mounts /etc/kubernetes, I've added /etc/kubernetes/vsphere.conf via ignition_file and updated kubelet.service to use:

--cloud-config=/etc/kubernetes/vsphere.conf \
--cloud-provider=vsphere \

We also have to modify the assets for the kube-apiserver and controller-manager pods with:

        - --cloud-provider=vsphere
        - --cloud-config=/etc/kubernetes/vsphere.conf
        - mountPath: /etc/kubernetes/vsphere.conf
          name: vsphere
          readOnly: true
        - mountPath: /sys/devices/virtual/dmi/id/product_uuid
          name: vspheredmi

for hostPaths. /sys/devices/virtual/dmi/id/product_uuid [1] is needed for the VMware code to find itself in the virtual machine infrastructure. This was merged in 1.5.3; prior to that, the operator had to manually add the Node's VMware Virtual Machine UUID.

Master and Worker nodes also need disk.enableUUID = "1" as part of their Terraform custom_configuration_parameters.
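
For example, a minimal sketch assuming the legacy vsphere_virtual_machine schema (names, sizes, and the template are illustrative):

resource "vsphere_virtual_machine" "master" {
  name   = "tectonic-master-0"           # illustrative
  vcpu   = 2
  memory = 4096

  disk {
    template = "coreos-production"       # hypothetical template name
  }

  network_interface {
    label = "VM Network"
  }

  # Required so the vSphere cloud provider can look the VM up by its UUID.
  custom_configuration_parameters = {
    "disk.enableUUID" = "1"
  }
}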

Happy to submit a PR if the above sounds acceptable.

[0] https://kubernetes.io/docs/getting-started-guides/vsphere/
[1] https://github.com/kubernetes/kubernetes/blob/v1.5.3/pkg/cloudprovider/providers/vsphere/vsphere.go#L207-L260

openstack: controller manager issues

W0302 05:06:30.768813       1 client_config.go:481] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0302 05:06:30.853536       1 leaderelection.go:247] lock is held by kube-controller-manager-2039395169-p5rh5 and has not yet expired
I0302 05:06:34.309047       1 leaderelection.go:247] lock is held by kube-controller-manager-2039395169-p5rh5 and has not yet expired
I0302 05:06:38.570485       1 leaderelection.go:247] lock is held by kube-controller-manager-2039395169-p5rh5 and has not yet expired
I0302 05:06:42.169622       1 leaderelection.go:247] lock is held by kube-controller-manager-2039395169-p5rh5 and has not yet expired
I0302 05:06:45.223493       1 leaderelection.go:247] lock is held by kube-controller-manager-2039395169-p5rh5 and has not yet expired
I0302 05:06:48.260875       1 leaderelection.go:188] sucessfully acquired lease kube-system/kube-controller-manager
I0302 05:06:48.268460       1 event.go:217] Event(api.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"774ff70d-ff04-11e6-ac9d-fa163e8b3190", APIVersion:"v1", ResourceVersion:"1395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-controller-manager-2039395169-6ntgx became leader
F0302 05:06:48.299171       1 controllermanager.go:280] Cloud provider could not be initialized: could not init cloud provider "openstack": no OpenStack cloud provider config file given

makefile: not idempotent

The Makefile is not idempotent, which caused some issues that were only identified once the nodes were up.

Reported by @alekssaul who can provide more details.

openstack - support spreading the cluster across multiple AZs

A customer is interested in using OpenStack availability zone metadata to spread a Tectonic cluster across multiple AZs. The customer currently treats each rack as an availability zone.

Ideally, allow the Tectonic tiers (etcd/worker/controller planes) to be spread across multiple AZs, and also expose that data so applications can be deployed across multiple AZs.

Terraform seems to have an availability zone argument: https://www.terraform.io/docs/providers/openstack/r/compute_instance_v2.html#availability_zone. Per the customer, the data is available via the OpenStack metadata service (http://169.254.169.254/latest/meta-data/placement/availability-zone) and through the API (https://docs.openstack.org/developer/glance/metadefs-concepts.html). I believe this information is also imported when --cloud-provider=openstack is enabled: https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/openstack/openstack.go#L483-L496
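
A minimal sketch of spreading workers across AZs with that argument (variable names are hypothetical; other required arguments such as networks and key pairs are omitted):

resource "openstack_compute_instance_v2" "worker" {
  count             = "${var.worker_count}"
  name              = "tectonic-worker-${count.index}"
  image_id          = "${var.container_linux_image_id}"   # hypothetical variable
  flavor_name       = "${var.worker_flavor}"              # hypothetical variable
  # Round-robin the instances across the configured availability zones.
  availability_zone = "${element(var.availability_zones, count.index)}"
}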

azure: use DNS labels to glue IPs to LBs

Based on a suggestion from MS folks. This mapping allows more flexibility for when the load balancer IP address changes. Their DNS service will automatically update those records.

The current implementation is probably better for keeping this behavior the same across providers, but I'd put this in the "make it work natively on the platform" bucket.
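
A minimal sketch of the DNS-label approach with the azurerm provider (argument names follow the provider of that era; the label and variables are illustrative):

resource "azurerm_public_ip" "console" {
  name                         = "tectonic-console-ip"        # illustrative
  location                     = "${var.location}"            # hypothetical variable
  resource_group_name          = "${var.resource_group_name}" # hypothetical variable
  public_ip_address_allocation = "Dynamic"
  # Azure keeps <label>.<region>.cloudapp.azure.com pointed at this IP even if the address changes.
  domain_name_label            = "tectonic-console"
}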

azure: re-enable cloud-provider

Currently the cloud provider is crashing the kubelet. We need to figure out:

  1. What config is it expecting to read?
  2. What is the correct recovery path for nil config? AWS has a nil config recovery path in their plugin entry point.

Kubelet log:

Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]: I0320 21:59:57.218818    2628 server.go:312] acquiring file lock on "/var/run/lock/kubelet.lock"
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]: I0320 21:59:57.219230    2628 server.go:317] watching for inotify events for: /var/run/lock/kubelet.lock
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]: I0320 21:59:57.219609    2628 feature_gate.go:189] feature gates: map[]
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]: panic: runtime error: invalid memory address or nil pointer dereference [recovered]
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]:         panic: runtime error: invalid memory address or nil pointer dereference
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0xa7a394]
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]: goroutine 1 [running]:
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]: panic(0x2ea92a0, 0xc420014030)
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]:         /usr/local/go/src/runtime/panic.go:500 +0x1a1
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]: io/ioutil.readAll.func1(0xc4208fe698)
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]:         /usr/local/go/src/io/ioutil/ioutil.go:30 +0x13e
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]: panic(0x2ea92a0, 0xc420014030)
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]:         /usr/local/go/src/runtime/panic.go:458 +0x243
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]: bytes.(*Buffer).ReadFrom(0xc4208fe5e8, 0x0, 0x0, 0xc4202a8c00, 0x0, 0x200)
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]:         /usr/local/go/src/bytes/buffer.go:176 +0x134
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]: io/ioutil.readAll(0x0, 0x0, 0x200, 0x0, 0x0, 0x0, 0x0, 0x0)
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]:         /usr/local/go/src/io/ioutil/ioutil.go:33 +0x147
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]: io/ioutil.ReadAll(0x0, 0x0, 0x5c76860, 0x3435360, 0x5c76860, 0x3435e80, 0x0)
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]:         /usr/local/go/src/io/ioutil/ioutil.go:42 +0x3e
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]: k8s.io/kubernetes/pkg/cloudprovider/providers/azure.NewCloud(0x0, 0x0, 0x7ffebead2f65, 0x5, 0xc4200018f0, 0x1)
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/cloudprovider/providers/azure
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]: k8s.io/kubernetes/pkg/cloudprovider.GetCloudProvider(0x7ffebead2f65, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/cloudprovider/plugins.go:85 +
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]: k8s.io/kubernetes/pkg/cloudprovider.InitCloudProvider(0x7ffebead2f65, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/cloudprovider/plugins.go:111
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]: k8s.io/kubernetes/cmd/kubelet/app.run(0xc4207eb800, 0x0, 0x496ec8, 0xc4208ffa08)
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:362 +0x
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]: k8s.io/kubernetes/cmd/kubelet/app.Run(0xc4207eb800, 0x0, 0xc4208ffa18, 0xc4208ffa28)
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:270 +0x
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]: main.NewKubelet.func1(0xc420c26870, 0xc42019ef00, 0x0, 0xf, 0x0, 0x0)
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/hyperkube/kubelet.go:37 +0x33
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]: main.(*HyperKube).Run(0xc42040ac60, 0xc42000a610, 0xf, 0xf, 0x0, 0x0)
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/hyperkube/hyperkube.go:179 +0
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]: main.(*HyperKube).RunToExit(0xc42040ac60, 0xc42000a600, 0x10, 0x10)
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/hyperkube/hyperkube.go:189 +0
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]: main.main()
Mar 20 21:59:57 tectonic-master-000000 kubelet-wrapper[2628]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/hyperkube/main.go:46 +0x784
Mar 20 21:59:57 tectonic-master-000000 systemd[1]: kubelet.service: Main process exited, code=exited, status=2/INVALIDARGUMENT

document how to run each platform using the Tectonic installer

Due by 3/24. Docs should be in this repo somewhere.

Will essentially be:

  • download installer
  • create a config.json file & fill it out
  • run with special subcommand, something like ./installer asset-gen --platform=azure --input=config.json
  • cp assets.zip into sdk folder
  • unzip assets.zip
  • run ./terraform apply

convert.sh: breaks under osx

diff --git a/convert.sh b/convert.sh
index 3b94a70..1c91f11 100755
--- a/convert.sh
+++ b/convert.sh
@@ -56,11 +56,11 @@ function assets {
             local tectonic_domain=$(jq -r .Resources.TectonicDomain.Properties.Name "${cloud_formation}")

             for f in "${assets}/manifests/kube-apiserver.yaml" "${assets}/manifests/kube-controller-manager.yaml"; do
-                sed -i 's/--cloud-provider=aws/--cloud-provider=openstack/g' $f
+                sed -i -e 's/--cloud-provider=aws/--cloud-provider=openstack/g' $f
             done

             for f in $(find "${assets}" -type f -name "*.yaml"); do
-                sed -i "s/https:\/\/${tectonic_domain}/https:\/\/${tectonic_domain}:32000/g" $f
+                sed -i -e "s/https:\/\/${tectonic_domain}/https:\/\/${tectonic_domain}:32000/g" $f
             done
             ;;
         *)

We need to do some uname magic to pick the right sed invocation for macOS vs. Linux.

all: introduce concept of platform version

Problem: After deploying infrastructure for Kubernetes, a user may need help debugging something or upgrading. But infrastructure is tricky because there are a lot of moving parts deployed by this Platform SDK.

Solution: Introduce a Platform SDK version.

Support customers with Proxy

  • The bootkube script curls 127.0.0.1, so the no_proxy variable needs to include 127.0.0.1
  • Customers MITM SSL connectivity; we need to trust their cert bundle (e.g. copy it to /etc/ssl/certs and run the update-ca-certs script)
  • All systemd units need the proxy environment variables (http_proxy, https_proxy, no_proxy)
  • The OpenStack metadata service needs to be included in no_proxy
  • /etc/environment should contain the environment variables (http_proxy, https_proxy, no_proxy)
