coredns / deployment
Scripts, utilities, and examples for deploying CoreDNS.
License: Apache License 2.0
This is very slick, I like the approach to installing it. Until I found this I thought I would need to update the kubelet config or use kubeadm upgrade.
Hello, I deployed CoreDNS 1.1.2 on Kubernetes 1.9, which has no auth, so I only deployed the ConfigMap, Deployment, and Service parts of your YAML, removing the serviceAccountName: coredns line in the Deployment.
But CoreDNS keeps crashing and reports "no such file or directory /var/run/token..."
I am new to CoreDNS; do you have any ideas how to solve my problem? Thanks.
My Kubernetes cluster doesn't have service accounts, which the default configuration for CoreDNS assumes.
I want to use a kubeconfig file to reach the API servers, and CoreDNS doesn't have any option for that, which is a blocker for me.
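For reference, later releases of the CoreDNS kubernetes plugin did gain a kubeconfig option; a minimal sketch, with a placeholder path:

kubernetes cluster.local {
    kubeconfig /etc/coredns/kubeconfig.conf
    pods insecure
    fallthrough in-addr.arpa ip6.arpa
}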
It is suggested that googlesource be replaced with github.com:
go_resource "golang.org/x/text" do
url "https://go.googlesource.com/text.git",
:revision => "c01e4764d870b77f8abe5096ee19ad20d80e8075"
end
go_resource "golang.org/x/net" do
url "https://go.googlesource.com/net.git",
:revision => "5561cd9b4330353950f399814f427425c0a26fd2"
end
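For illustration, the same resources pointed at the GitHub mirrors might look like this (assuming the mirrors carry the same revisions):

go_resource "golang.org/x/text" do
  url "https://github.com/golang/text.git",
      :revision => "c01e4764d870b77f8abe5096ee19ad20d80e8075"
end
go_resource "golang.org/x/net" do
  url "https://github.com/golang/net.git",
      :revision => "5561cd9b4330353950f399814f427425c0a26fd2"
end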
Add pod anti-affinity across nodes, now that there are 2 replicas.
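A minimal sketch of what that could look like in the Deployment's pod spec, assuming the pods keep the k8s-app: kube-dns label used elsewhere in these manifests:

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            k8s-app: kube-dns
        topologyKey: kubernetes.io/hostname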
deploy setup:
./deploy.sh -s -r 10.254.0.0/16 -i 10.254.0.2 -d clouster.local > coredns.yaml
kubectl create -f coredns.yaml
kubectl get pods,svc -n kube-system
NAME READY STATUS RESTARTS AGE
pod/coredns-55c956c95c-2k928 1/1 Running 0 10h
pod/coredns-55c956c95c-d685d 1/1 Running 0 10h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.254.0.2 <none> 53/UDP,53/TCP 10h
Another pod can't resolve domains. For example:
[root@k8s-master ~]# kubectl get pods,svc --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default pod/nginx-774d74897-47zbm 1/1 Running 0 11h
default pod/nginx-774d74897-jt6rj 1/1 Running 0 11h
default pod/test-dns-d854db9-8s8w9 1/1 Running 1 10h
kube-system pod/coredns-55c956c95c-2k928 1/1 Running 0 10h
kube-system pod/coredns-55c956c95c-d685d 1/1 Running 0 10h
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/example-service NodePort 10.254.139.43 <none> 80:30163/TCP 11h
default service/kubernetes ClusterIP 10.254.0.1 <none> 443/TCP 16h
kube-system service/kube-dns ClusterIP 10.254.0.2 <none> 53/UDP,53/TCP 10h
[root@k8s-master ~]# kubectl exec -it test-dns-d854db9-8s8w9 /bin/sh
/ # nslookup kubernetes.default
Server: 10.254.0.2
Address: 10.254.0.2:53
** server can't find kubernetes.default: NXDOMAIN
*** Can't find kubernetes.default: No answer
/ # nslookup kube-dns
nslookup: write to '10.254.0.2': Connection refused
;; connection timed out; no servers could be reached
/ # nslookup example-service
Server: 10.254.0.2
Address: 10.254.0.2:53
** server can't find example-service: NXDOMAIN
*** Can't find example-service: No answer
Is my deployment method correct?
Caddy imports are causing trouble for users of the library.
See kubernetes/kubernetes#78033
It would be more palatable if we didn't import Caddy. We can do this by copying parsing functions locally and refactoring to use alternate Caddy funcs (e.g. Caddyfile to/from JSON).
Modify the CoreDNS configuration to use the forward plugin instead of the proxy plugin in its default configuration for Kubernetes deployments.
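In the default Corefile, the change amounts to one line (a sketch; both plugins accept this form):

# before
proxy . /etc/resolv.conf
# after
forward . /etc/resolv.conf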
The kubernetes migration library framework is designed to be general. However, it currently only defines migrations for Kubernetes-related plugins.
The library could be generalized if we add plugin migration definitions for other plugins. This would be a mostly clerical task, albeit a sizable one.
If more plugins are supported, we could move the tool to its own repo.
Moving the tool to its own repo may even make sense now, with only k8s-related plugin support, since it's intended to be used as a library; it could be cloned more cleanly without pulling in everything else in the deployment repo.
Related: #170
I am running the latest kubeadm 1.11, where CoreDNS is the default. I've also installed the weave-net plugin. However, I still cannot get the coredns pods running...
linux-uwkw:~ # kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcdf6894-fv74n 0/1 ContainerCreating 0 6m
coredns-78fcdf6894-pnq22 0/1 ContainerCreating 0 6m
etcd-linux-uwkw.fritz.box 1/1 Running 0 5m
kube-apiserver-linux-uwkw.fritz.box 1/1 Running 0 5m
kube-controller-manager-linux-uwkw.fritz.box 1/1 Running 0 5m
kube-proxy-785pk 1/1 Running 0 6m
kube-scheduler-linux-uwkw.fritz.box 1/1 Running 0 5m
weave-net-8tqdh 2/2 Running 0 6m
Any debug tips?
If the master node(s) have a NoSchedule taint, the current deployment manifest would allow pods to be deployed on master nodes.
Assumptions:
You have a Kubernetes cluster running
The master node(s) kubelet config has register-schedulable=true set.
The master node(s) have the following taint applied:
kubectl taint nodes master1 node-role.kubernetes.io/master="":NoSchedule
Cordon all other nodes (excluding master nodes) in the cluster
kubectl cordon <node name>
Result:
coredns-77b5855fb7-f6ng7 1/1 Running 0 7s 172.16.137.65 master1
coredns-77b5855fb7-pxfjh 1/1 Running 0 7s 172.16.137.65 master1
Solution:
Edit the deployment and remove the following:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
Delete all current coredns pods using:
kubectl delete pods -n kube-system -l k8s-app=coredns
When describing one of the pending pods, the description displays:
Warning FailedScheduling 4s (x6 over 19s) default-scheduler 0/8 nodes are available: 1 PodToleratesNodeTaints, 7 NodeUnschedulable
If you uncordon the worker nodes, the pods start to be scheduled correctly.
Currently, the default Corefile deployed by our deployment script uses a single server block, e.g.:
.:53 {
    errors
    health
    ready
    kubernetes <CLUSTER_DOMAIN> <REVERSE_ZONES...> {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
Let's discuss concrete pros and cons of alternative defaults that are structured differently (e.g. multiple server blocks).
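For the sake of discussion, a hypothetical split into a cluster-zone block and a catch-all block (ignoring the reverse zones for brevity) might look like:

cluster.local:53 {
    errors
    health
    ready
    kubernetes cluster.local {
        pods insecure
    }
    prometheus :9153
    cache 30
    loop
    reload
    loadbalance
}
.:53 {
    errors
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
}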
Solution: quote ${YAML_TEMPLATE} when calling the sed command.
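A sketch of the difference, assuming the variable holds the template file name:

# unquoted: the shell word-splits and glob-expands the value
sed -e "s/CLUSTER_DOMAIN/cluster.local/g" ${YAML_TEMPLATE}
# quoted: the value reaches sed as a single argument
sed -e "s/CLUSTER_DOMAIN/cluster.local/g" "${YAML_TEMPLATE}"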
Upstream k8s is now switching to the apps/v1 API for Deployments.
For example, cidrs is still in here: https://github.com/coredns/deployment/blob/master/kubernetes/coredns-1.6.yaml.sed#L54
We should expand the existing unit-test-oriented cases to be broader.
By unit-test oriented, I mean that each test currently aims to exercise a specific migration operation that we have implemented.
The goal of adding more test cases would be to capture different styles of Corefiles we have seen in the field (or concocted on our own), to make sure there aren't any cases we stumble over.
Thx!
coredns/ci#11 is merged and on the CI server.
Now we can add the webhook to this repo to allow "/integration" comments to trigger tests of the kubernetes deployment script.
This issue is very similar to issue 87; however, the loopback result does not work, with the following return:
can't load package: package github.com/containernetworking/plugins: no buildable Go source files in /root/go/src/github.com/containernetworking/plugins
The output of kubectl describe pods coredns-86c58d9df4- -n kube-system is:
Name: coredns-86c58d9df4-<token>
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: <hostname>/<ip-address>
Start Time: <time>
Labels: k8s-app=kube-dns
pod-template-hash=86c58d9df4
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/coredns-86c58d9df4
Containers:
coredns:
Container ID:
Image: k8s.gcr.io/coredns:1.2.6
Image ID:
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Environment: <none>
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-p2b8x (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-p2b8x:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-p2b8x
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SandboxChanged 3m47s (x371 over 84m) kubelet, <hostname> Pod sandbox changed, it will be killed and re-created.
Flannel was used as the network fabric and runs perfectly fine.
The issue did not exist prior to reinitialising Kubernetes.
Split off from #12. Maybe someone can add an rpm spec file.
(I don't use any rpm distro myself.)
data:
  Corefile: |
    .:53 {
        errors
        log
        health
        kubernetes cluster.local 10.96.0.0/16 10.244.0.0/16
        proxy . /etc/resolv.conf
        cache 30
        etcd k8s.io {
            path /skydns
            endpoint http://192.168.4.106:2379
        }
    }
# docker exec -ti vespace-strategy-etcd1 sh
# etcdctl ls /
#
Any help is appreciated!
Look into adding a readinessProbe to the deployment, probably something similar to the livenessProbe, like...
readinessProbe:
  httpGet:
    path: /health
    port: 8080
    scheme: HTTP
As of now, it may be redundant with the livenessProbe, since we don't (as of yet) ever go "unready" after becoming ready. But without the readinessProbe, k8s may report that a coredns pod is ready before it actually becomes healthy.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
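Newer CoreDNS versions also ship a dedicated ready plugin serving /ready on port 8181, which gives readiness its own endpoint; a sketch:

readinessProbe:
  httpGet:
    path: /ready
    port: 8181
    scheme: HTTP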
.:53 {
    errors
    health
    kubernetes CLUSTER_DOMAIN REVERSE_CIDRS {
        pods insecure
        upstream
        fallthrough in-addr.arpa ip6.arpa
    }FEDERATIONS
    prometheus :9153
    forward . UPSTREAMNAMESERVER
    cache 30
    loop
    reload
    loadbalance
}STUBDOMAINS
is inefficient and should be split up into multiple servers.
-r : Define a reverse zone for the given CIDR. You may specify this option more
than once to add multiple reverse zones. If no reverse CIDRs are defined,
then the default is to handle all reverse zones (i.e. in-addr.arpa and ip6.arpa),
which will kill any reverse lookups.
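For example, a hypothetical invocation defining two reverse zones (the CIDRs here are placeholders):

./deploy.sh -s -r 10.32.0.0/16 -r 10.200.0.0/16 -i 10.32.0.10 -d cluster.local > coredns.yaml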
See coredns/coredns#2593 (comment) for a better one
@lugoues - this installs and works well - but the example Corefile didn't install:
...
==> Checking out revision 5561cd9b4330353950f399814f427425c0a26fd2
==> go build -ldflags -X github.com/coredns/coredns/coremain.gitTag=0.9.9 -o /usr/local/Cellar/coredns/0.9.9/sbin/coredns
Error: No such file or directory @ rb_file_s_rename - (Corefile.example, /usr/local/etc/coredns/Corefile)
ANN-M-JBELAMARIC:~ jbelamaric$ ls /usr/local/etc/
bash_completion.d gitconfig openssl stunnel
Thanks.
Name: coredns-99649559f-wqjf8
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node: k8s-n2/192.168.90.107
Start Time: Tue, 30 Apr 2019 11:56:00 +0800
Labels: k8s-app=kube-dns
pod-template-hash=99649559f
Annotations:
Status: Running
IP: 10.244.5.20
Controlled By: ReplicaSet/coredns-99649559f
Containers:
coredns:
Container ID: docker://aca08cefd09e7844610e2c2bcc7e568faeb1adbf4ddfb4c8577302aba558604e
Image: coredns/coredns:1.2.6
Image ID: docker-pullable://coredns/coredns@sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 30 Apr 2019 11:59:00 +0800
Finished: Tue, 30 Apr 2019 11:59:00 +0800
Ready: False
Restart Count: 5
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-p6l9s (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-p6l9s:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-p6l9s
Optional: false
QoS Class: Burstable
Node-Selectors: beta.kubernetes.io/os=linux
Tolerations: CriticalAddonsOnly
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
Normal Scheduled 3m17s default-scheduler Successfully assigned kube-system/coredns-99649559f-wqjf8 to k8s-n2
Warning BackOff 117s (x10 over 3m14s) kubelet, k8s-n2 Back-off restarting failed container
Normal Pulled 104s (x5 over 3m16s) kubelet, k8s-n2 Container image "coredns/coredns:1.2.6" already present on machine
Normal Created 104s (x5 over 3m16s) kubelet, k8s-n2 Created container
Normal Started 104s (x5 over 3m16s) kubelet, k8s-n2 Started container
In order to fix coredns/coredns#2464 and related crashes due to glog/klog issues, we should just add an emptyDir volume and mount it at /tmp.
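A minimal sketch of the manifest change (the volume name is arbitrary):

spec:
  volumes:
  - name: tmp
    emptyDir: {}
  containers:
  - name: coredns
    volumeMounts:
    - name: tmp
      mountPath: /tmp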
Should we add pods insecure to the manifest here?
Things to consider:
First of all thanks for all that work you are doing.
Just want to point on one moment.
If you are targeting GA in k8s 1.11 (I already use it in prod with 1.10), why not fix the API versions for the k8s resources in the deployment manifests?
- apiVersion: rbac.authorization.k8s.io/v1beta1
+ apiVersion: rbac.authorization.k8s.io/v1
kind: Deployment
- apiVersion: extensions/v1beta1
+ apiVersion: apps/v1
After installing k8s 1.6 via kubeadm, I tried to install coredns to replace kube-dns, and the coredns pod/svc does get successfully deployed. However, inspecting the log of coredns shows the following errors repeating every few seconds:
E0423 18:35:23.759412 1 reflector.go:214] github.com/coredns/coredns/vendor/k8s.io/client-go/1.5/tools/cache/reflector.go:109: Failed to list *api.Namespace: the server does not allow access to the requested resource (get namespaces)
E0423 18:35:23.832581 1 reflector.go:214] github.com/coredns/coredns/vendor/k8s.io/client-go/1.5/tools/cache/reflector.go:109: Failed to list *api.Service: the server does not allow access to the requested resource (get services)
E0423 18:35:23.833121 1 reflector.go:214] github.com/coredns/coredns/vendor/k8s.io/client-go/1.5/tools/cache/reflector.go:109: Failed to list *api.Endpoints: the server does not allow access to the requested resource (get endpoints)
I do have a default service account in the namespace kube-system:
root@radiant1:~# kubectl get serviceaccounts -n kube-system | grep default
default 1 1d
AFAIK, this service account is what coredns uses:
serviceAccount: default
serviceAccountName: default
Any thoughts on why this is not working?
This is super cool. I don't think it only applies to K8s; maybe it should be up-leveled to the deployment directory, or even have its own repo.
Hi guys,
I wonder why the coredns deployment still has k8s-app: kube-dns instead of k8s-app: coredns?
For example, the prometheus-operator looks for coredns using k8s-app: coredns, but obviously it fails to find it.
Any reason for this?
We need to handle cases where multiple proxy plugins exist in a single server block.
We can either error out or convert them into new server blocks.
I think converting them into new server blocks will be challenging, but we should explore it to see if it's feasible to deliver now. If it's not feasible, then we need to make the migration return an error.
Can anyone remove me from this repository? My username is nik786.
kubectl exec -it etcd-0 sh
/ # nslookup etcd.default
nslookup: can't resolve '(null)': Name does not resolve
Name: etcd.default
Address 1: 172.30.243.197 etcd-2.etcd.default.svc.cluster.local
Address 2: 172.30.243.246 etcd-1.etcd.default.svc.cluster.local
Address 3: 172.30.243.255 etcd-0.etcd.default.svc.cluster.local
/ # nslookup www.google.com
nslookup: can't resolve '(null)': Name does not resolve
nslookup: can't resolve 'www.google.com': Try again
kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
kubectl describe svc kube-dns -n kube-system
Name: kube-dns
Namespace: kube-system
Labels: addonmanager.kubernetes.io/mode=Reconcile
k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kube-dns","kubernet...
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.254.0.2
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 172.30.243.253:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 172.30.243.253:53
Session Affinity: None
Events: <none>
I was wondering if you might have an interest in a snap version. I would help make the snap and submit a PR if there's any interest.
Crafting your first snap
The Mac install instructions throw this on Homebrew 2.0.2, because of Homebrew/brew#5477:
$ brew install coredns
==> Installing coredns from coredns/deployment
==> Installing dependencies for coredns/deployment/coredns: go
==> Installing coredns/deployment/coredns dependency: go
==> Downloading https://homebrew.bintray.com/bottles/go-1.11.5.mojave.bottle.tar.gz
######################################################################## 100.0%
==> Pouring go-1.11.5.mojave.bottle.tar.gz
🍺 /usr/local/Cellar/go/1.11.5: 9,298 files, 404.3MB
==> Installing coredns/deployment/coredns
==> Downloading https://github.com/coredns/coredns/archive/v0.9.10.tar.gz
==> Downloading from https://codeload.github.com/coredns/coredns/tar.gz/v0.9.10
######################################################################## 100.0%
Error: An exception occurred within a child process:
MethodDeprecatedError: Calling MacOS.prefer_64_bit? is disabled! There is no replacement.
Please report this to the coredns/deployment tap:
/usr/local/Homebrew/Library/Taps/coredns/homebrew-deployment/HomebrewFormula/coredns.rb:49
I wanted to add a CNAME record in CoreDNS for my cluster. I've edited the CoreDNS configmap in the following way:
$ kubectl describe configmap coredns -n kube-system
Name: coredns
Namespace: kube-system
Labels: eks.amazonaws.com/component=coredns
k8s-app=kube-dns
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","data":{"Corefile":".:53 {\n errors\n health\n kubernetes cluster.local {\n pods insecure\n upstream...
Data
====
Corefile:
----
.:53 {
    errors
    health
    kubernetes cluster.local {
        pods insecure
        upstream
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    proxy . /etc/resolv.conf
    cache 30
}
. {
    rewrite name orderer0.example.com orderer0-example-com.orbix-mvp.svc.cluster.local
    rewrite name peer0.example.com peer0-example-com.orbix-mvp.svc.cluster.local
}
Events: <none>
I restarted CoreDNS, and the hostname is available in the pods:
$ kubectl exec -it default-fabric-container-6cb8df4998-dkvr5 /bin/bash
root@default-fabric-container-6cb8df4998-dkvr5:/# host peer0.example.com
peer0-example-com.orbix-mvp.svc.cluster.local has address 172.20.244.179
This is great.
However, when running a new pod that has a bash script in its args, I get the following:
2019-02-15 10:11:50.314 UTC [nodeCmd] createChaincodeServer -> ERRO 06b Error creating GRPC server: listen tcp: lookup peer0.example.com on 172.20.0.10:53: no such host
2019-02-15 10:11:50.314 UTC [nodeCmd] serve -> CRIT 06c Failed to create chaincode server: listen tcp: lookup peer0.example.com on 172.20.0.10:53: no such host
panic: Failed to create chaincode server: listen tcp: lookup peer0.example.com on 172.20.0.10:53: no such host
So it cannot look up the hostname peer0.example.com. Upon investigation, the 172.20.0.10:53 in the above output is the kube-dns service:
$ kubectl get services -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 172.20.0.10 <none> 53/UDP,53/TCP 24d
I'm really not sure why KubeDNS is being queried, or why it's not providing the record from CoreDNS.
Can someone please help with this? Thanks :)
Should we replace proxy with the forward plugin yet?
- forward could handle about 4% more QPS than proxy, which is probably not compelling on its own.
- proxy has more mileage.
- proxy may be removed from default builds of CoreDNS in the future.
Hi,
I was evaluating using CoreDNS rather than the deprecated kube-dns addon, as now suggested by Kubernetes. However, I found a few issues in the provided manifests:
- Why both coredns.yaml.sed and coredns-1.6.yaml.sed? We are now on Kubernetes 1.8 and soon 1.9. In my understanding, coredns.yaml.sed should be renamed coredns-1.5.yaml.sed and coredns-1.6.yaml.sed should simply be coredns.yaml.sed, or even better: the RBAC resources should be separated (like most projects do).
- The kubernetes.io/cluster-service label has been deprecated, and was only useful to the Addon Manager.
- scheduler.alpha.kubernetes.io/tolerations has been replaced by an actual spec element (see below).
- Using the latest image is generally considered bad practice.
The Kubernetes documentation deprecated most of the cluster addons (even the well-known kube-dns), and now lists very few addons, including CoreDNS (pointing to this repository). However:
If you do want to use CoreDNS in production, please let us know.
Tolerating and selecting the master nodes
tolerations:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
- key: "CriticalAddonsOnly"
  operator: "Exists"
nodeSelector:
  node-role.kubernetes.io/master: ""
Rolling-Update strategy
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1
Thank you.
The PR kubernetes/kubernetes#72203 changes the API version to apps/v1 for the deployment manifest in kube-up deployments. This has already been the case for kubeadm deployments for a while now.
Do we change it in our deployment manifest as well?
The README for the deploy.sh script for Kubernetes has a couple of minor typos in it.
Fallthrough for zones was added in 1.0.3.
Add fallthrough in-addr.arpa ip6.arpa to the config.
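In context, the directive sits inside the kubernetes stanza, as in the default Corefile:

kubernetes cluster.local in-addr.arpa ip6.arpa {
    pods insecure
    fallthrough in-addr.arpa ip6.arpa
}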
Followed the instructions from https://github.com/coredns/deployment/tree/master/kubernetes.
Tried to remove kube-dns using $ kubectl delete --namespace=kube-system deployment kube-dns; it says it was deleted, but within a few minutes it re-appears.
Any assistance would be greatly appreciated!
The content of the two files is inconsistent:
https://github.com/coredns/deployment/blob/master/kubernetes/coredns.yaml.sed#L53
https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns.yaml.sed#L62
Inspired by a tweet. Adding rpm/deb packages would help people deploy and test coredns; it would even be helpful in my use case (amd64 and arm).
Maybe this just needs to be documented, but do you folks have any ideas on how I'd go about finding my service and pod CIDRs for Container Engine?
I found this feature request but couldn't see any tips from it: kubernetes/kubernetes#25533
I looked at the related issues and none of them looked specifically like they'd have the info I'd need.
I guess I could derive a liberal CIDR from the existing service and pod IPs, but that sounds like I'm asking for trouble. The only idea I've had so far is gcloud compute routes list, which does return a bunch of ranges for the cluster that would all fit in a /16.
CoreDNS needs SIGINT for the shutdown handlers to be called. I just implemented lame-ducking in the health middleware, which will only work for SIGINT.
SIGTERM will just stop the process immediately.
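Note that Kubernetes sends SIGTERM on pod termination by default. One hypothetical image-level workaround (not something this repo does) is Docker's STOPSIGNAL instruction, which controls the signal the runtime delivers on stop:

# hypothetical wrapper image that makes the runtime deliver SIGINT
FROM coredns/coredns
STOPSIGNAL SIGINT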
K8s currently recommends using a pod based dns-autoscaler that scales DNS replicas based on compute cluster size (with node:replica and core:replica ratios). The default settings in the dns-autoscaler stand as the only official recommendation/guidance from k8s on how to scale DNS pods in a cluster (that I am aware of).
For various reasons CoreDNS can handle more load with fewer replicas than kube-dns (e.g. multithreading, no in-flight packet limit). However, the same node/core:replica ratio settings are used for both kube-dns and CoreDNS. This was kept the same because, at the time, we didn't have data from very large-scale deployments to suggest we might need to drop the number of replicas. However, subsequent tests in kubernetes/kubernetes#68683 show that we probably should (CoreDNS consumes 30% more memory than kube-dns on a maxed-out 5K-node cluster).
Relating to kubernetes/kubernetes#68613, there is concern over the cluster memory pressure of DNS pods in tightly packed large-scale deployments. We can allay this concern if we reduce the number of replicas in the dns-autoscaler (increasing the default node/core:replica ratios), coupled with a release note explaining that the memory profile of coredns and kube-dns is different (it needs fewer replicas, but each replica uses more memory).
Regardless of the memory usage concerns, we should adjust the dns-autoscaler to reflect the greater compute capability of CoreDNS. As it stands now, compared to kube-dns it is over-deployed. Even if we optimize k8s scale data storage to match/beat kube-dns, we should not be over-deploying CoreDNS.
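For illustration, the node/core:replica ratios live in the cluster-proportional-autoscaler's ConfigMap; raising coresPerReplica and nodesPerReplica reduces the replica count (the values below are placeholders, not a recommendation):

kind: ConfigMap
apiVersion: v1
metadata:
  name: dns-autoscaler
  namespace: kube-system
data:
  linear: |-
    {
      "coresPerReplica": 512,
      "nodesPerReplica": 32,
      "preventSinglePointFailure": true
    }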
Alternatively, we can opt to discard the dns-autoscaler entirely and recommend CPU-based autoscaling with a node/core-based cap instead (if that kind of variable cap is possible).
Hi,
I just tested out coredns and it feels great!
I was kind of surprised that when I made changes to the config, the changes didn't get applied automatically. You can send SIGUSR1 to coredns to reload the config, but it would be nice to reload automatically.
Before finding any other way to implement this, I'm just asking: do you know whether there is any plan to implement this in the coredns Kubernetes integration?
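For context, this later landed as the reload plugin, which the default Corefile shown earlier in this document already enables; a minimal sketch with a 30-second check interval:

.:53 {
    reload 30s
    # ... rest of the server block
}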
I have set up a multi-master Kubernetes cluster. CoreDNS pods are running on only one of the master nodes, and they are not even able to come up on the other master nodes when the CoreDNS pods are killed.