
corneliusweig / kubernetes-lxd

A step-by-step guide to get kubernetes running inside an LXC container

Shell 100.00%
howto k8s k8s-cluster kubernetes kubernetes-cluster kubernetes-environment lxc-containers lxd


kubernetes-lxd's Issues

How to adapt this for multi-node clusters?

I was heading towards doing something like this when I stumbled upon yours. Sterling effort. Well done for this.

I have not yet read and understood everything, but I have seen indications that the use case is a single LXC container with a single-node k8s cluster running inside it. Is this assumption correct?

If it is, how could it be adapted so that multiple LXC containers act as nodes of a cluster? A generalised overview is enough; no need to delve into detailed implementation aspects.

Thanks in advance.
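The general shape of such an adaptation might look like the following sketch. The container names, image, and the "k8s" profile are assumptions for illustration, not part of the guide:

```shell
# Launch one LXC container per Kubernetes node (names/profile are hypothetical)
for node in kmaster kworker1 kworker2; do
  lxc launch ubuntu:22.04 "$node" --profile k8s
done

# Inside kmaster: initialise the control plane; kubeadm prints a join command
#   kubeadm init --pod-network-cidr=10.244.0.0/16
# Inside each kworker: join the cluster with the printed token
#   kubeadm join <master-ip>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
```

Each container would need the same kernel-module and profile preparation the guide applies to its single container.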

"systemctl restart lxd" is now "systemctl restart snap.lxd.daemon"

https://github.com/corneliusweig/kubernetes-lxd/blame/413e6bfbb3f5dd07722a12902e42f691e15b50f3/README.md#L35
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 23.04
Release: 23.04
Codename: lunar

$ sudo systemctl restart lxd
Failed to restart lxd.service: Unit lxd.service not found.

$ sudo systemctl restart snap.lxd.daemon

$ sudo systemctl status snap.lxd.daemon
● snap.lxd.daemon.service - Service for snap application lxd.daemon
Loaded: loaded (/etc/systemd/system/snap.lxd.daemon.service; static)
Active: active (running) since Fri 2023-05-26 22:00:30 EDT; 4min 4s ago
TriggeredBy: ● snap.lxd.daemon.unix.socket
Main PID: 39274 (daemon.start)
Tasks: 0 (limit: 6889)
Memory: 22.1M
CPU: 853ms
CGroup: /system.slice/snap.lxd.daemon.service
‣ 39274 /bin/sh /snap/lxd/24846/commands/daemon.start
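For scripts that need to work with both the deb- and snap-packaged daemon, one possible sketch is to restart whichever unit actually exists (unit names as shown in the outputs above):

```shell
# Restart LXD regardless of packaging: the snap ships snap.lxd.daemon,
# while the old deb ships lxd.service
if systemctl list-unit-files --no-legend lxd.service | grep -q '^lxd\.service'; then
  sudo systemctl restart lxd
else
  sudo systemctl restart snap.lxd.daemon
fi
```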

ip forwarding

On Ubuntu 18.04, the host OS does not have ip_forwarding enabled by default. I had to enable ip_forwarding on the host to get the Traefik load balancer to work.

Without it I got cryptic errors from Traefik that didn't make any sense. Not sure if anyone else has experienced this, but a quick sysctl -w net.ipv4.ip_forward=1 on the host resolves the issue.

Hope that helps anyone giving this a whirl.
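Note that sysctl -w only lasts until reboot; to make the setting permanent, a drop-in under /etc/sysctl.d can be used (the file name here is an arbitrary choice):

```shell
# Enable IPv4 forwarding now (lost on reboot)
sudo sysctl -w net.ipv4.ip_forward=1

# Persist it across reboots via a sysctl drop-in (file name is arbitrary)
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
sudo sysctl --system   # reload all sysctl configuration
```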

conntrack not found by lxc container with module enabled

I'm following Kubernetes the hard way and made it all the way to configuring the worker nodes: https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/09-bootstrapping-kubernetes-workers.md#start-the-worker-services
At this point, the kube-proxy service fails because

5286 server.go:489] open /proc/sys/net/netfilter/nf_conntrack_max: no such file or directory

which led me to your guide here.
I have added

config:
  linux.kernel_modules: xt_conntrack, nf_conntrack

to my worker node, and conntrack -L yields output.
On the host machine I can ls /proc/sys/net/netfilter/nf_conntrack_max and the file is there.

And yet the kube-proxy service still fails because it cannot find this file. Any advice?

edit: on the worker node this is lsmod | grep conntrack output

root@worker-0:~# lsmod | grep conntrack
nf_conntrack_netlink    45056  0
nfnetlink              16384  10 nf_conntrack_netlink,nf_tables
xt_conntrack           16384  28
nf_conntrack          139264  5 xt_conntrack,nf_nat,xt_nat,nf_conntrack_netlink,xt_MASQUERADE
nf_defrag_ipv6         24576  1 nf_conntrack
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  2 nf_conntrack,nf_nat
x_tables               40960  24 ebtables,ip6table_filter,xt_conntrack,iptable_filter,xt_LOG,xt_multiport,xt_tcpudp,xt_addrtype,xt_CHECKSUM,xt_nat,ip6t_rt,xt_comment,ip6_tables,ipt_REJECT,ipt_rpfilter,iptable_raw,ip_tables,xt_limit,xt_hl,ip6table_mangle,xt_MASQUERADE,ip6t_REJECT,iptable_mangle,xt_mark

I've tried editing the systemd unit file for kube-proxy to include --conntrack-max-per-core=0, which, according to the kube-proxy help text, should disable setting the conntrack value:

root@worker-1:~# kube-proxy --help | grep conntrack
      --conntrack-max-per-core int32                 Maximum number of NAT connections to track per CPU core (0 to leave the limit as-is and ignore conntrack-min). (default 32768)
      --conntrack-min int32                          Minimum number of conntrack entries to allocate, regardless of conntrack-max-per-core (set conntrack-max-per-core=0 to leave the limit as-is). (default 131072)

but this setting seems to be ignored and kube-proxy tries to write to the file anyway.
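One avenue worth trying (a sketch, not verified in this LXC setup): newer kube-proxy versions prefer a KubeProxyConfiguration file over flags, and there the limit lives under conntrack.maxPerCore, where 0 likewise means "leave the host value as-is". The file path here is an arbitrary choice:

```shell
# Write a minimal KubeProxyConfiguration; the path is an arbitrary choice
cfg=kube-proxy-config.yaml
cat > "$cfg" <<'EOF'
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
conntrack:
  maxPerCore: 0
EOF
# Then point kube-proxy at the file instead of using flags:
#   kube-proxy --config=kube-proxy-config.yaml
```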

No mention of swap settings

I'm following this for learning purposes. All is well until the first kubeadm command. I am receiving preflight errors regarding memory and swap:

[ERROR Swap]: running with swap on is not supported. Please disable swap 

Maybe add a note about this in the guide? And any pointers on how to get around this "properly"?

LXD is version 4.0 and Kubernetes is at 1.18.1.
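Two common workarounds, sketched with kubeadm/kubelet flags that exist in that release range (whether skipping the check counts as "proper" is debatable):

```shell
# Option 1: disable swap on the host, which is what kubeadm expects
sudo swapoff -a   # immediate; also comment out swap entries in /etc/fstab

# Option 2: skip the preflight check; the kubelet must then also be started
# with --fail-swap-on=false
sudo kubeadm init --ignore-preflight-errors=Swap
```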

subuid and subgid values why 1000000:1000000000?

Your repo is awesome, as using lxd to test and learn kubernetes feels much more efficient than minikube. Though I am only just starting out and wondering about a few things.

In Arch Linux it was recommended to use

/etc/subuid
root:100000:65536
/etc/subgid
root:100000:65536

but in your guide its
1000000:1000000000
Can you elaborate why?
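For context, each entry in these files has the form <user>:<first-id>:<count>, so (assuming the guide's value is the root mapping root:1000000:1000000000) the two recommendations differ only in where the id window starts and how many ids it spans:

```shell
# /etc/subuid and /etc/subgid entries: <user>:<first-id>:<count>
# Arch's suggestion:   root:100000:65536       (65536 ids from 100000)
# Guide's suggestion:  root:1000000:1000000000 (one billion ids from 1000000)
echo 'root:1000000:1000000000' | sudo tee -a /etc/subuid /etc/subgid
```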

conntrack hashsize alteration fails on large CPU counts

See kubernetes/kubernetes#58610

When the CPU count is large (e.g. I had it at 12 which is the amount on my host), conntrack hashsize may need to be increased when starting kube-proxy during k8s boot. The problem in LXC setups seems to be that the /sys/.../conntrack/hashsize file cannot be edited in any way inside the container, leading to failure if it needs to be altered.

My fix was to limit the container's CPU count to 4 cores, after which no change to the hashsize value was needed.

Maybe add a note about this into the guide?
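Two possible mitigations, sketched (the container name "kmaster" is hypothetical, and the hashsize value is only an example):

```shell
# 1) Cap the container's CPUs so kube-proxy's computed hashsize stays small
lxc config set kmaster limits.cpu 4

# 2) Or raise the hashsize on the host before starting the cluster; the file
#    is writable on the host but read-only inside the container
echo 65536 | sudo tee /sys/module/nf_conntrack/parameters/hashsize
```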
