
okd4-upi-lab-setup's Introduction

This tutorial has been deprecated in favor of a new version.

The archived main branch is preserved in the archive branch of this project, and the previous documentation can be found at: https://cgruver.github.io/okd4-upi-lab-setup/

I will not be doing any additional development on this project.

The newest version of my helper scripts is here: https://github.com/cgruver/kamarotos

The replacement for this project is here: Building a Portable Kubernetes Home Lab with OpenShift - OKD4

Follow my Blog here: Upstream - Without A Paddle

okd4-upi-lab-setup's People

Contributors

cgruver, johankok


okd4-upi-lab-setup's Issues

Wow! Just a starter question.

I'm on my way to building my own lab...

First of all! Great work!

I will start with a linux root server I have rented at a public hosting provider.

The specs:
AMD Ryzen™ 9 3900, 12 cores / simultaneous multithreading
2 x 1.92 TB NVMe SSD (Datacenter Edition)
128 GB DDR4 ECC RAM
CentOS 7.8

Because this server is internet-facing, only the SSH port is exposed to the outside world, and I'm not willing to open any other ports :-)

Do you think it makes sense to run the bastion host (DHCP, LB, DNS, PXE, TFTP, RPMs) together with a 3x control-plane, 3x worker cluster on libvirt on the same host?

Invalid image pruner cron

First, thanks a ton for this guide! It helped me understand a few things here and there.

Here's the issue:

E0627 17:27:59.883915      13 controllerimagepruner.go:306] (image pruner) unable to sync: unable to apply objects: failed to update object *v1beta1.CronJob, Namespace=openshift-image-registry, Name=image-pruner: CronJob.batch "image-pruner" is invalid: spec.schedule: Invalid value: "*/0 * * * *": Step of range should be a positive number: */0, requeuing

Is the idea that it should run at every hour mark? If so maybe 0 */1 * * * ?
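In cron syntax a step value must be at least 1, so "*/0" is always rejected; "0 * * * *" (equivalent to the suggested "0 */1 * * *") fires at the top of every hour. A sketch of the fix, assuming the schedule is set on the cluster ImagePruner resource managed by the image registry operator and that you have cluster-admin access:

```shell
# Sketch: replace the invalid "*/0 * * * *" schedule with one that runs
# at the top of every hour. Requires access to a live cluster.
oc patch imagepruner.imageregistry.operator.openshift.io/cluster \
  --type merge -p '{"spec":{"schedule":"0 * * * *"}}'
```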

Error mirroring to nexus

Still working through the existing pages, with CentOS 8.3.

The install YAML file uses OpenShiftSDN; I changed it to the default network type to have a better chance with 4.6.

oc from 4.5 could not pull down the latest oc_client and oc_install, so I switched to the current 4.6 one, which seemed to work.

mirrorOkdRelease.sh failed to mirror to Nexus with this error (domain replaced with XXX):
error: unable to connect to nexus.XXX:5001/origin: Get "https://nexus.XXX:5001/v2/": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0

Defining GODEBUG in the script doesn't help. I guess the earlier instructions need to add a SAN to the Nexus cert.
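For context: Go 1.15 (which oc is built with) stopped honoring the legacy Common Name field, and setting GODEBUG=x509ignoreCN=0 only takes effect if it is exported into the environment of the oc process itself; the escape hatch was removed entirely in later Go releases, so regenerating the certificate with a SAN is the durable fix. A sketch, assuming a self-signed cert and a placeholder hostname nexus.example.com (substitute your own), using OpenSSL 1.1.1+ as shipped with CentOS 8:

```shell
# Sketch: issue a self-signed cert whose SAN matches the registry hostname.
# nexus.example.com is a placeholder -- use your Nexus host's FQDN.
HOST=nexus.example.com
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout nexus.key -out nexus.crt \
  -subj "/CN=${HOST}" \
  -addext "subjectAltName=DNS:${HOST}"

# Confirm the SAN made it into the cert:
openssl x509 -in nexus.crt -noout -ext subjectAltName
```

After installing the new cert in Nexus, the cert also needs to be trusted on the mirror host (copy it to /etc/pki/ca-trust/source/anchors/ and run update-ca-trust) before oc adm release mirror will connect.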

Move workloads to masters

Hi,

My current setup is 3 master and 3 worker nodes. Following https://github.com/cgruver/okd4-upi-lab-setup/blob/master/docs/pages/InfraNodes.md#designate-master-nodes-as-infrastructure-nodes, it's very nice that you can move the ingress pods to the masters (infra) without making them schedulable for regular workloads. However, I am wondering what the right way is to move other workloads (image registry, monitoring, logging) to the master nodes in the same way, keeping the masters without the worker role. Following https://github.com/cgruver/okd4-upi-lab-setup/blob/master/docs/pages/InfraNodes.md#move-workloads-to-the-new-infra-nodes, monitoring and the image registry are stuck in the Pending state due to the master="" role.

Thank you
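For what it's worth, the usual pattern (a sketch based on the cluster-monitoring-operator's documented config keys, not on this guide) is that a workload only lands on masters when it both selects the master role and tolerates the master NoSchedule taint; a nodeSelector alone leaves pods Pending, which matches the symptom above. For monitoring, something like this ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
```

The other monitoring components (alertmanagerMain, and so on) take analogous stanzas, and the image registry has spec.nodeSelector and spec.tolerations fields on configs.imageregistry.operator.openshift.io/cluster.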

3-node minimal cluster without nested virtualisation?

There are a lot of people with home labs who have set up 3-node (or 4-node) Proxmox HA clusters to play around with these concepts at home. If you do it on 3 (or 4) physical nodes, performance can be really good and HA works great.

I'm trying to do the same in OKD/OpenShift, (with Kubevirt and hopefully Ceph/Rook) and your guide looks like the best starting point.

However, I'm still a bit confused about the minimum number of nodes needed, assuming you go with physical nodes. The OpenShift docs seem to suggest you need 3 x master nodes and 2 x worker nodes (as well as a bastion and a bootstrap node).

And assuming you want to run Kubevirt on these and Ceph, I'd like to avoid nested virtualisation if possible.

Is there any way of:

  • Sharing master/worker duties on the same nodes, without nested virtualisation?
  • Running with 1 x master (instead of 3), and 2 x workers?
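For reference (a sketch, not something this guide covers): OKD 4.x supports a compact three-node topology in which the control-plane nodes remain schedulable and also run workloads, so master/worker duties can be shared without nested virtualisation. Declaring zero workers in install-config.yaml triggers this. Running a single master instead of three, however, is not a supported HA topology, because etcd needs a three-member quorum.

```yaml
# Hypothetical install-config.yaml fragment for a compact 3-node cluster:
# with no workers declared, the installer leaves the masters schedulable.
compute:
- name: worker
  replicas: 0
controlPlane:
  name: master
  replicas: 3
```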

Ceph

Works like a charm!

I did the last step and applied the PVC to the registry with the patch... but... hm... not sure it is really bound.
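A quick way to check (a sketch; requires access to the cluster) is to look at the claim's STATUS and at which claim the registry operator thinks it is using:

```shell
# The PVC is only usable once its STATUS column shows Bound:
oc get pvc -n openshift-image-registry

# The claim name the registry operator was patched to use:
oc get configs.imageregistry.operator.openshift.io/cluster \
  -o jsonpath='{.spec.storage.pvc.claim}'
```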

Bare-metal LoadBalancer service

Hello,

I am wondering what can be used to expose non-HTTP TCP traffic. For bare metal, the most popular solution is MetalLB; however, it's well known that its L2 ARP mode is not a great choice. So I am interested in what solution you use in your bare-metal clusters. Is it https://github.com/redhat-cop/keepalived-operator ? Is it MetalLB with BGP? If it's MetalLB with BGP, could you please advise on the correct way of configuring it, because I am currently stuck with it:

  • I have deployed metallb in the cluster.
  • Created pfsense vm with frr installed.
  • Configured AS and neighbours (3 master and 3 worker nodes of the cluster).
  • Added configmap, and then created nginx + lb service.

So then I can see the 172.16.1.160/32 IP advertised for nginx - https://prnt.sc/xfzfkx. The problem is that I can curl it from the (3) master and (3) worker nodes, but from outside the cluster, within the same LAN 172.16.1.0/24, it can't be curl'd, so I can't access it from other VMs. Any advice on that one, please?

Wrong names for CentOS 8 repos

Using CentOS 8.3 as the bastion OS, we have a few issues.

On the Nginx page, in the section on downloading the CentOS repos, the following variable is used:
LOCAL_REPOS="BaseOS AppStream centosplus extras epel epel-modular"

BaseOS and AppStream should be lower case. (I have not found centosplus yet.)

The earlier mkdir commands that create the directories should also use lower-case names.

Errors with the existing names:
Error: Unknown repo: 'BaseOS'
Error: Unknown repo: 'AppStream'
Error: Unknown repo: 'centosplus'

As well, the minimal install ISO no longer exists (it does not exist for CentOS 8 Stream either).

DeployKvmHost.sh incorrect for current CentOS 8.3

For instance, it refers to the Minimal repo, which no longer exists. I would guess the kickstart may need to be changed as well.

The manual install for KVM hosts should be fine, but the PXE boot was a nice feature.

4.5.0-0.okd-2020-06-29-110348-beta6

Just to let you know... no success with the following combinations:

4.5.0-0.okd-2020-06-29-110348-beta6
stable/32.20200615.3.0
stable/32.20200601.3.0
stable/31.20200517.3.0
next/32.20200625.1.0

4.5.0-0.okd-2020-06-30-142627
stable/32.20200615.3.0

4.6.0-0.okd-2020-06-29-123105
stable/32.20200615.3.0
next/32.20200625.1.0

To be honest, I don't think it's a CoreOS issue.

Consider using RHEL instead of CentOS Stream

Just a suggestion.
Now that RH is making 16 copies of RHEL available for production use with a RH developer account (Feb 1, we'll see if it happens), RHEL becomes a candidate.
It really depends on your focus. If the focus is on the OKD part, then a stable OS for the services and the underlying virtualization is helpful. As CentOS used to be downstream of RHEL, it fit the bill. Now CentOS Stream is the incubator for RHEL and can be expected to be (much?) less stable. That is reasonable if you are checking out the changes coming to RHEL, but not an advantage if you are using it as a platform for something else. RHEL 8.3 and CentOS 8 are both on kernel 4.18, so the newest stuff that does not require kernel 5 should be there.
I am looking at moving from CentOS 8 to RHEL 8.3 where I need stability, and at Fedora for what may be coming up (or for running on top of FCOS).
