
deployment's Introduction

Deployment

Scripts, utilities, and examples for deploying CoreDNS.

Debian

On a Debian system:

  • Run dpkg-buildpackage -us -uc -b --target-arch ARCH, where ARCH can be any of the released architectures, such as "amd64" or "arm".
  • Most users will just run: dpkg-buildpackage -us -uc -b
  • Note that any existing environment variables will override the default makefile variables in debian/rules.
  • This can be used, for example, to build a particular version by setting the VERSION environment variable (see the example below).
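
A version-override build might look like the following sketch (the version string is only a placeholder):

    # VERSION overrides the default in debian/rules for this invocation only
    VERSION=1.6.9 dpkg-buildpackage -us -uc -b --target-arch amd64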

To install:

  • Run dpkg -i coredns_0.9.10-0~9.20_amd64.deb.

This installs the coredns binary in /usr/bin, adds a coredns user (with its home directory set to /var/lib/coredns), and installs a small Corefile in /etc/coredns.
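
The packaged Corefile is intentionally small; the sketch below illustrates what such a minimal Corefile can look like (an illustration only, not necessarily the exact file the package ships):

    # forward everything to the host's resolvers and log queries
    . {
        forward . /etc/resolv.conf
        log
    }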

Kubernetes

Helm Chart

The Helm chart is maintained in a separate repository:

https://github.com/coredns/helm
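
A sketch of installing the chart with Helm 3 follows (the chart repository URL is assumed from the coredns/helm project; check its README for the authoritative instructions):

    # add the chart repo (URL assumed) and install into kube-system
    helm repo add coredns https://coredns.github.io/helm
    helm --namespace=kube-system install coredns coredns/coredns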

deployment's People

Contributors

ae6rt, antonofthewoods, bardiharborow, brianchristie, carlocolumna, chrisohaver, cuericlee, fhemberger, fkore, fntlnz, haad, jaybeale, johnbelamaric, julianvmodesto, kaysond, klausenbusk, laverya, lugoues, miekg, mrueg, mzaian, ngarindungu, paologallinaharbur, prologic, rajansandeep, sayf-eddine-scality, spdfnet, swade1987, utkonos, zouyee


deployment's Issues

Generalize corefile migration tool; move to new repo

The kubernetes migration library framework is designed to be general. However, it currently only defines migrations for Kubernetes-related plugins.

The library can be generalized if we add plugin migration definitions for other plugins. This would be mostly a clerical task - albeit a sizable one.

If more plugins are supported, we could move the tool to its own repo.

Moving the tool to its own repo may even make sense now, as is, with only k8s-related plugin support: since it's intended to be used as a library, it could then be cloned more cleanly, without pulling in everything else in the deployment repo.

Related: #170

remove nik786

Can anyone remove me from this repository? My username is nik786.

kubernetes: Adjust DNS autoscaling defaults for CoreDNS

K8s currently recommends using a pod-based dns-autoscaler that scales DNS replicas based on compute cluster size (with node:replica and core:replica ratios). The default settings in the dns-autoscaler stand as the only official recommendation/guidance from k8s on how to scale DNS pods in a cluster (that I am aware of).

For various reasons CoreDNS can handle more load with fewer replicas than kube-dns (e.g. multithreading, no in-flight packet limit). However, the same node/core:replica ratio settings are used for both kube-dns and CoreDNS. This was kept the same because, at the time, we didn't have data from very large scale deployments to suggest we might need to drop the number of replicas. However, subsequent tests in kubernetes/kubernetes#68683 show that we probably should (CoreDNS consumes 30% more memory than kube-dns on a maxed-out 5K node cluster).

Relating to kubernetes/kubernetes#68613, there is concern over cluster memory pressure from DNS pods in tightly packed large-scale deployments. We can allay this concern if we reduce the number of replicas in the dns-autoscaler (by increasing the default node/core:replica ratios), coupled with a release note explaining that the memory profiles of CoreDNS and kube-dns are different (it needs fewer replicas, but each replica uses more memory).

Regardless of the memory usage concerns, we should adjust the dns-autoscaler to reflect the greater compute capability of CoreDNS. As it stands now, when compared to kube-dns it is over-deployed. Even if we optimize k8s scale data storage to match/beat kube-dns, we should not be over-deploying CoreDNS.

Alternately we can opt to discard the dns-autoscaler entirely, and recommend CPU based autoscaling with a node/core based cap instead (if that kind of variable cap is possible).
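
For context, the cluster-proportional-autoscaler reads its ratios from a ConfigMap, so changing the defaults is a data-only change. The sketch below shows the shape of that config; the numbers are illustrative placeholders, not the proposed new defaults:

    # placeholders only; raising the per-replica ratios reduces the replica count
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: dns-autoscaler        # name assumed from the upstream addon convention
      namespace: kube-system
    data:
      linear: |-
        {
          "coresPerReplica": 256,
          "nodesPerReplica": 16,
          "preventSinglePointFailure": true
        }

Raising coresPerReplica and nodesPerReplica here is the lever for deploying fewer CoreDNS replicas.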

migration tool is awesome

This is super cool. I don't think it only applies to K8s; maybe it should be up-leveled to the deployment directory, or even given its own repo.

Kubernetes 1.13 not querying CoreDNS

I wanted to add a CNAME record in CoreDNS for my cluster. I've edited the CoreDNS configmap in the following way:

$ kubectl describe configmap coredns -n kube-system
Name:         coredns
Namespace:    kube-system
Labels:       eks.amazonaws.com/component=coredns
              k8s-app=kube-dns
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","data":{"Corefile":".:53 {\n    errors\n    health\n    kubernetes cluster.local {\n      pods insecure\n      upstream...

Data
====
Corefile:
----
.:53 {
    errors
    health
    kubernetes cluster.local {
      pods insecure
      upstream
      fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    proxy . /etc/resolv.conf
    cache 30
}
. {
    rewrite name orderer0.example.com orderer0-example-com.orbix-mvp.svc.cluster.local
    rewrite name peer0.example.com peer0-example-com.orbix-mvp.svc.cluster.local
}

Events:  <none>

I restarted CoreDNS, and the hostname is available in the pods:

$ kubectl exec -it default-fabric-container-6cb8df4998-dkvr5 /bin/bash
root@default-fabric-container-6cb8df4998-dkvr5:/# host peer0.example.com

peer0-example-com.orbix-mvp.svc.cluster.local has address 172.20.244.179

This is great.

However, when running a new pod which has a Bash script in its args, I get the following:

2019-02-15 10:11:50.314 UTC [nodeCmd] createChaincodeServer -> ERRO 06b Error creating GRPC server: listen tcp: lookup peer0.example.com on 172.20.0.10:53: no such host
2019-02-15 10:11:50.314 UTC [nodeCmd] serve -> CRIT 06c Failed to create chaincode server: listen tcp: lookup peer0.example.com on 172.20.0.10:53: no such host
panic: Failed to create chaincode server: listen tcp: lookup peer0.example.com on 172.20.0.10:53: no such host

So it cannot look up the hostname peer0.example.com. Upon investigation, the 172.20.0.10:53 in the above output is the kube-dns service:

$ kubectl get services -n kube-system
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns        ClusterIP   172.20.0.10     <none>        53/UDP,53/TCP   24d

I'm really not sure why KubeDNS is being queried, or why it's not providing the record from CoreDNS.

Can someone please help with this? Thanks :)

This googlesource code repo is not reachable from China

It is suggested that the googlesource URLs be replaced with github.com mirrors. The formula currently references:

go_resource "golang.org/x/text" do
    url "https://go.googlesource.com/text.git",
      :revision => "c01e4764d870b77f8abe5096ee19ad20d80e8075"
  end

  go_resource "golang.org/x/net" do
    url "https://go.googlesource.com/net.git",
      :revision => "5561cd9b4330353950f399814f427425c0a26fd2"
  end
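
A sketch of the suggestion, using the GitHub mirrors of the same packages (the revisions are unchanged; the mirror URLs are assumed to track the googlesource repos):

  # github.com mirrors of the golang.org/x packages (mirror URLs assumed)
  go_resource "golang.org/x/text" do
    url "https://github.com/golang/text.git",
      :revision => "c01e4764d870b77f8abe5096ee19ad20d80e8075"
  end

  go_resource "golang.org/x/net" do
    url "https://github.com/golang/net.git",
      :revision => "5561cd9b4330353950f399814f427425c0a26fd2"
  end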

Kubernetes manifest need rework

Hi,

I was evaluating using CoreDNS rather than the deprecated kube-dns addon, as now suggested by Kubernetes. However, I found a few issues in the provided manifests:

  • It is not clear at first glance which manifest should be used: coredns.yaml.sed, coredns-1.6.yaml.sed? We are now on Kubernetes 1.8 and soon 1.9. In my understanding, coredns.yaml.sed should be renamed coredns-1.5.yaml.sed and coredns-1.6.yaml.sed should simply be coredns.yaml.sed, or even better: the RBAC resource should be separated (like most projects do).
  • coredns-1.6.yaml.sed has an invalid ConfigMap (cidrs)
  • The kubernetes.io/cluster-service label has been deprecated, and was only useful to the Addon Manager
  • scheduler.alpha.kubernetes.io/tolerations has been replaced by an actual spec element (see below).
  • Tolerations should contain the standardized master taint / label (see below)
  • For stability and security purposes (especially with the Node Authorizer admission controller), CoreDNS should only run on master nodes (and should probably have at least 2 replicas by default).
  • The deployment should have a RollingUpdate strategy for availability purposes.
  • Using the latest image is generally considered bad practice.
  • Exposing the metrics port on the service can arguably be considered a threat (especially if Network Policies are being used).

The Kubernetes documentation deprecated most of the cluster addons (even the well-known kubedns), and now lists only a few addons, including CoreDNS (pointing to that repository). However:

  • The manifest does not seem to be strongly maintained
  • The main repository's README file mentions: If you do want to use CoreDNS in production, please let us know.
    This is a little bit worrying considering thousands of users refer to that documentation as a reference, including numerous companies, which sometimes run hundreds or thousands of clusters.

Tolerating and selecting the master nodes

      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        node-role.kubernetes.io/master: ""

Rolling-Update strategy

  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1

Thank you.

how to save coredns data in etcd

ask for help

  • I'm trying to save CoreDNS data in etcd, but it fails (after deploying, I can't find any data in etcd). Here is the config file:
data:
  Corefile: |
    .:53 {
        errors
        log
        health
        kubernetes cluster.local 10.96.0.0/16 10.244.0.0/16
        proxy . /etc/resolv.conf
        cache 30
        etcd k8s.io {
          path /skydns
          endpoint http://192.168.4.106:2379
        }
    }
  • can't find any data in etcd:
# docker exec -ti vespace-strategy-etcd1 sh
# etcdctl ls /
#

Any help is appreciated!

Compliment: Thanks

This is very slick, I like the approach to installing it. Until I found this I thought I would need to update the kubelet config or use kubeadm upgrade.

server does not allow access to the requested resources

After installing k8s 1.6 via kubeadm, I tried to install CoreDNS to replace kube-dns, and the coredns pod/svc does get successfully deployed. However, inspecting the log of coredns shows the following errors repeating every few seconds:

E0423 18:35:23.759412       1 reflector.go:214] github.com/coredns/coredns/vendor/k8s.io/client-go/1.5/tools/cache/reflector.go:109: Failed to list *api.Namespace: the server does not allow access to the requested resource (get namespaces)
E0423 18:35:23.832581       1 reflector.go:214] github.com/coredns/coredns/vendor/k8s.io/client-go/1.5/tools/cache/reflector.go:109: Failed to list *api.Service: the server does not allow access to the requested resource (get services)
E0423 18:35:23.833121       1 reflector.go:214] github.com/coredns/coredns/vendor/k8s.io/client-go/1.5/tools/cache/reflector.go:109: Failed to list *api.Endpoints: the server does not allow access to the requested resource (get endpoints)

I do have a default service account in the namespace kube-system:

root@radiant1:~# kubectl get serviceaccounts -n kube-system | grep default
default                      1         1d

AFAIK, this service account is what coredns uses:

  serviceAccount: default
  serviceAccountName: default

Any thoughts on why this is not working?
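
For reference, these "does not allow access" errors mean the service account CoreDNS runs under cannot list/watch those resources. A minimal sketch of RBAC rules that would grant this (assuming RBAC is what is denying access; older clusters such as 1.6 may need the rbac.authorization.k8s.io/v1beta1 API version, and the names here are illustrative):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: system:coredns        # illustrative name
    rules:
    - apiGroups: [""]
      resources: ["endpoints", "services", "pods", "namespaces"]
      verbs: ["list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: system:coredns        # illustrative name
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:coredns
    subjects:
    - kind: ServiceAccount
      name: default               # the service account shown above
      namespace: kube-system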

kubernetes: proxy -> forward

Should we replace proxy with the forward plugin yet?

  • In performance tests using a local upstream server, forward could handle about 4% more QPS than proxy, which is probably not compelling on its own.
  • proxy has more mileage.
  • proxy may be removed from default builds of CoreDNS in the future.
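
If we do switch, the change in the deployed Corefile is a one-line substitution (a sketch; the upstream shown here mirrors the current default):

    # before
    proxy . /etc/resolv.conf

    # after
    forward . /etc/resolv.conf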

Kubernetes: Config auto-reload?

Hi,
I just tested out CoreDNS and it feels great!
I was kind of surprised that when I made changes to the config, the changes didn't get applied automatically. You can send SIGUSR1 to CoreDNS to reload the config, but it would be nice if it reloaded automatically.

Before finding any other way to implement this, I'm just asking: is there any plan to implement this in the CoreDNS Kubernetes integration?
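
For reference, the default Corefiles later in this repo enable the reload plugin, which re-reads the Corefile and applies changes without restarting the pod; a minimal sketch:

    .:53 {
        kubernetes cluster.local
        forward . /etc/resolv.conf
        # re-read this Corefile when it changes
        reload
    }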

coredns stuck at ContainerCreating

I am running the latest kubeadm 1.11, where CoreDNS is the default. I've also installed the weave-net plugin. However, I still cannot get the coredns pods running:

linux-uwkw:~ # kubectl get pods --namespace=kube-system
NAME                                           READY     STATUS              RESTARTS   AGE
coredns-78fcdf6894-fv74n                       0/1       ContainerCreating   0          6m
coredns-78fcdf6894-pnq22                       0/1       ContainerCreating   0          6m
etcd-linux-uwkw.fritz.box                      1/1       Running             0          5m
kube-apiserver-linux-uwkw.fritz.box            1/1       Running             0          5m
kube-controller-manager-linux-uwkw.fritz.box   1/1       Running             0          5m
kube-proxy-785pk                               1/1       Running             0          6m
kube-scheduler-linux-uwkw.fritz.box            1/1       Running             0          5m
weave-net-8tqdh                                2/2       Running             0          6m

Any debug tips?
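
A couple of generic first steps (a sketch; the pod name is taken from the listing above):

    # show scheduling / sandbox / CNI events for one of the stuck pods
    kubectl -n kube-system describe pod coredns-78fcdf6894-fv74n
    # recent cluster events, oldest first
    kubectl -n kube-system get events --sort-by=.metadata.creationTimestamp

ContainerCreating usually points at pod sandbox or CNI setup rather than CoreDNS itself, so the events are the first place to look.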

kubernetes: default corefile structure

Currently the default Corefile deployed by our deployment script uses a single server block, e.g.:

    .:53 {
        errors
        health
        ready
        kubernetes <CLUSTER_DOMAIN> <REVERSE_ZONES...> {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }

Let's discuss concrete pros and cons of alternative defaults that are structured differently (e.g. multiple server blocks).

Error from brew formula

@Lugoues - this installs and works well - but the example Corefile didn't install:

...
==> Checking out revision 5561cd9b4330353950f399814f427425c0a26fd2
==> go build -ldflags -X github.com/coredns/coredns/coremain.gitTag=0.9.9 -o /usr/local/Cellar/coredns/0.9.9/sbin/coredns
Error: No such file or directory @ rb_file_s_rename - (Corefile.example, /usr/local/etc/coredns/Corefile)
ANN-M-JBELAMARIC:~ jbelamaric$ ls /usr/local/etc/
bash_completion.d	gitconfig		openssl			stunnel

Thanks.

Back-off restarting failed container

Name:               coredns-99649559f-wqjf8
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               k8s-n2/192.168.90.107
Start Time:         Tue, 30 Apr 2019 11:56:00 +0800
Labels:             k8s-app=kube-dns
                    pod-template-hash=99649559f
Annotations:
Status:             Running
IP:                 10.244.5.20
Controlled By:      ReplicaSet/coredns-99649559f
Containers:
  coredns:
    Container ID:  docker://aca08cefd09e7844610e2c2bcc7e568faeb1adbf4ddfb4c8577302aba558604e
    Image:         coredns/coredns:1.2.6
    Image ID:      docker-pullable://coredns/coredns@sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 30 Apr 2019 11:59:00 +0800
      Finished:     Tue, 30 Apr 2019 11:59:00 +0800
    Ready:          False
    Restart Count:  5
    Limits:
      memory:  170Mi
    Requests:
      cpu:      100m
      memory:   70Mi
    Liveness:   http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:  http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-p6l9s (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-p6l9s:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-p6l9s
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  beta.kubernetes.io/os=linux
Tolerations:     CriticalAddonsOnly
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  3m17s                  default-scheduler  Successfully assigned kube-system/coredns-99649559f-wqjf8 to k8s-n2
  Warning  BackOff    117s (x10 over 3m14s)  kubelet, k8s-n2    Back-off restarting failed container
  Normal   Pulled     104s (x5 over 3m16s)   kubelet, k8s-n2    Container image "coredns/coredns:1.2.6" already present on machine
  Normal   Created    104s (x5 over 3m16s)   kubelet, k8s-n2    Created container
  Normal   Started    104s (x5 over 3m16s)   kubelet, k8s-n2    Started container

Kubernetes: pods insecure

Should we add pods insecure to the manifest here?

Things to consider:

  • it’s an insecure option, so for new installs probably not.
  • for upgrades of a k8s cluster that already uses it, we need to include it (or pods verified).
  • I think this behavior is deprecated, per the dns spec.
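
For reference, the option sits inside the kubernetes block of the Corefile; a sketch mirroring the placeholders used elsewhere in this repo:

    kubernetes CLUSTER_DOMAIN REVERSE_CIDRS {
        # 'pods insecure' mimics kube-dns behaviour; 'pods verified' is the stricter alternative
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }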

kubernetes: how do we kill the pod?

CoreDNS needs SIGINT for the shutdown handlers to be called. I just implemented lameducking in the health middleware which will only work for SIGINT.

SIGTERM will just stop the process immediately.

coredns can't resolve domain

deploy setup:

./deploy.sh -s -r 10.254.0.0/16 -i 10.254.0.2 -d clouster.local > coredns.yaml
kubectl create -f coredns.yaml
kubectl get pods,svc -n kube-system 
NAME                           READY   STATUS    RESTARTS   AGE
pod/coredns-55c956c95c-2k928   1/1     Running   0          10h
pod/coredns-55c956c95c-d685d   1/1     Running   0          10h

NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/kube-dns   ClusterIP   10.254.0.2   <none>        53/UDP,53/TCP   10h

another pod can't resolve domains, for example:

[root@k8s-master ~]# kubectl get pods,svc --all-namespaces
NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE
default       pod/nginx-774d74897-47zbm      1/1     Running   0          11h
default       pod/nginx-774d74897-jt6rj      1/1     Running   0          11h
default       pod/test-dns-d854db9-8s8w9     1/1     Running   1          10h
kube-system   pod/coredns-55c956c95c-2k928   1/1     Running   0          10h
kube-system   pod/coredns-55c956c95c-d685d   1/1     Running   0          10h

NAMESPACE     NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       service/example-service   NodePort    10.254.139.43   <none>        80:30163/TCP    11h
default       service/kubernetes        ClusterIP   10.254.0.1      <none>        443/TCP         16h
kube-system   service/kube-dns          ClusterIP   10.254.0.2      <none>        53/UDP,53/TCP   10h
[root@k8s-master ~]# kubectl exec -it test-dns-d854db9-8s8w9 /bin/sh
/ # nslookup kubernetes.default
Server:		10.254.0.2
Address:	10.254.0.2:53

** server can't find kubernetes.default: NXDOMAIN

*** Can't find kubernetes.default: No answer

/ # nslookup kube-dns
nslookup: write to '10.254.0.2': Connection refused
;; connection timed out; no servers could be reached

/ # nslookup example-service
Server:		10.254.0.2
Address:	10.254.0.2:53

** server can't find example-service: NXDOMAIN

*** Can't find example-service: No answer

Is my deployment method correct?

Add CI webhook to this repo

coredns/ci#11 is merged, and on the ci server.
Now we can add the web hook to this repo to allow "/integration" comments to trigger tests of the kubernetes deployment script.

When CoreDNS is rolled back to kube-dns, pods can resolve cluster addresses but can't resolve external addresses

kubectl exec -it etcd-0 sh
/ # nslookup etcd.default
nslookup: can't resolve '(null)': Name does not resolve

Name: etcd.default
Address 1: 172.30.243.197 etcd-2.etcd.default.svc.cluster.local
Address 2: 172.30.243.246 etcd-1.etcd.default.svc.cluster.local
Address 3: 172.30.243.255 etcd-0.etcd.default.svc.cluster.local

/ # nslookup www.google.com
nslookup: can't resolve '(null)': Name does not resolve

nslookup: can't resolve 'www.google.com': Try again

kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

kubectl describe svc kube-dns -n kube-system

Name:              kube-dns
Namespace:         kube-system
Labels:            addonmanager.kubernetes.io/mode=Reconcile
                   k8s-app=kube-dns
                   kubernetes.io/cluster-service=true
                   kubernetes.io/name=KubeDNS
Annotations:       kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kube-dns","kubernet...
Selector:          k8s-app=kube-dns
Type:              ClusterIP
IP:                10.254.0.2
Port:              dns  53/UDP
TargetPort:        53/UDP
Endpoints:         172.30.243.253:53
Port:              dns-tcp  53/TCP
TargetPort:        53/TCP
Endpoints:         172.30.243.253:53
Session Affinity:  None
Events:            <none>

MacOS.prefer_64_bit? deprecated

The Mac install instructions throw this on Homebrew 2.0.2, because of Homebrew/brew#5477:

$ brew install coredns
==> Installing coredns from coredns/deployment
==> Installing dependencies for coredns/deployment/coredns: go
==> Installing coredns/deployment/coredns dependency: go
==> Downloading https://homebrew.bintray.com/bottles/go-1.11.5.mojave.bottle.tar.gz
######################################################################## 100.0%
==> Pouring go-1.11.5.mojave.bottle.tar.gz
🍺  /usr/local/Cellar/go/1.11.5: 9,298 files, 404.3MB
==> Installing coredns/deployment/coredns
==> Downloading https://github.com/coredns/coredns/archive/v0.9.10.tar.gz
==> Downloading from https://codeload.github.com/coredns/coredns/tar.gz/v0.9.10
######################################################################## 100.0%
Error: An exception occurred within a child process:
  MethodDeprecatedError: Calling MacOS.prefer_64_bit? is disabled! There is no replacement.
Please report this to the coredns/deployment tap:
  /usr/local/Homebrew/Library/Taps/coredns/homebrew-deployment/HomebrewFormula/coredns.rb:49

kubernetes: readinessProbe

Look into adding a readinessProbe to the deployment. Probably something similar to livenessProbe like...

        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP

As of now, it may be redundant with livenessProbe, since we don't (as of yet) ever go "unready" after becoming ready. But without the readinessProbe, k8s may report that a coredns pod is ready before it actually becomes healthy.

https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
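
Newer manifests in this repo add a dedicated ready plugin and probe it on port 8181 (as seen in the pod descriptions elsewhere on this page); a sketch of that variant:

        readinessProbe:
          httpGet:
            # served by the 'ready' plugin rather than the health endpoint
            path: /ready
            port: 8181
            scheme: HTTP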

coredns stuck at ContainerCreating

This issue is very similar to issue 87; however, the suggested loopback workaround does not work, returning the following:
can't load package: package github.com/containernetworking/plugins: no buildable Go source files in /root/go/src/github.com/containernetworking/plugins

The return of kubectl describe pods coredns-86c58d9df4- -n kube-system is:

Name:               coredns-86c58d9df4-<token>
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               <hostname>/<ip-address>
Start Time:         <time>
Labels:             k8s-app=kube-dns
                    pod-template-hash=86c58d9df4
Annotations:        <none>
Status:             Pending
IP:
Controlled By:      ReplicaSet/coredns-86c58d9df4
Containers:
  coredns:
    Container ID:
    Image:         k8s.gcr.io/coredns:1.2.6
    Image ID:
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-p2b8x (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-p2b8x:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-p2b8x
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason          Age                    From               Message
  ----    ------          ----                   ----               -------
  Normal  SandboxChanged  3m47s (x371 over 84m)  kubelet, <hostname> Pod sandbox changed, it will be killed and re-created.

Flannel is used as the network fabric and runs perfectly fine.

The issue did not exist prior to reinitialising Kubernetes.

kubernetes migration: proxy -> forward

We need to handle cases where multiple proxy plugins exist in a single server block.

We can either error out or convert them into new server blocks.
I think converting them into new server blocks will be challenging, but we should explore it to see if it's feasible to deliver now. If it's not feasible, then we need to make the migration return an error.
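
For illustration only (a hypothetical input, not actual tool output), the conversion would roughly turn this:

    .:53 {
        # two proxy stanzas with different FROM zones in one server block
        proxy . /etc/resolv.conf
        proxy example.org 10.0.0.10:53
    }

into something like this:

    example.org:53 {
        forward . 10.0.0.10:53
    }
    .:53 {
        forward . /etc/resolv.conf
    }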

coredns is still labeled as kube-dns

Hi Guys,

I wonder why the coredns deployment still has k8s-app: kube-dns instead of k8s-app: coredns?

For example, the prometheus-operator looks for CoreDNS using k8s-app: coredns, but obviously it fails to find it.

Any reason for this?

inefficient corefile

  .:53 {
        errors
        health
        kubernetes CLUSTER_DOMAIN REVERSE_CIDRS {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }FEDERATIONS
        prometheus :9153
        forward . UPSTREAMNAMESERVER
        cache 30
        loop
        reload
        loadbalance
    }STUBDOMAINS

is inefficient and should be split up into multiple server blocks.

r : Define a reverse zone for the given CIDR. You may specify this option more
         than once to add multiple reverse zones. If no reverse CIDRs are defined,
         then the default is to handle all reverse zones (i.e. in-addr.arpa and ip6.arpa)

will kill any reverse lookups.

See coredns/coredns#2593 (comment) for a better one

kubernetes migration: expand test cases

We should expand the existing unit test oriented cases, to be more broad.
By unit tests oriented, I mean that each test currently aims to test a specific migration operation that we have implemented.

The goal of adding more test cases would be to capture different styles of Corefiles we have seen in the field (or concoct on our own), to make sure there aren't any cases we stumble over.

continue asking serviceaccount/token when without serviceaccountname: coredns

Hello, I deployed CoreDNS 1.1.2 in k8s 1.9, which has no auth, so I only deployed the ConfigMap, Deployment, and Service parts of your YAML, removing the serviceAccountName: coredns line in the Deployment.

But CoreDNS keeps crashing and reports "no such file or directory /var/run/token..." (see the attached screenshot).
I am new to CoreDNS; do you have any idea how to solve my problem? Thanks.

rpm package

Split off from #12. Maybe someone can add an rpm spec file.
(I don't use any rpm distro myself.)

deb package

Inspired by a tweet. Adding deb/rpm packages would help people deploy and test CoreDNS; it would even be helpful in my use case (amd64 and arm).

Currently pods can be scheduled to master nodes

If the master node(s) have a NoSchedule taint, the current deployment manifest would still allow pods to be deployed on master nodes.

Assumptions:

  • You have a Kubernetes cluster running

  • The master node(s)' kubelet config has register-schedulable=true set.

  • The master nodes(s) have the following taint applied

    kubectl taint nodes master1 node-role.kubernetes.io/master="":NoSchedule
    
  • Cordon all other nodes (excluding master nodes) in the cluster

    kubectl cordon <node name>
    

Result:

coredns-77b5855fb7-f6ng7                 1/1       Running   0          7s        172.16.137.65    master1
coredns-77b5855fb7-pxfjh                 1/1       Running   0          7s        172.16.137.65    master1

Solution:

Edit the deployment and remove the following:

- key: node-role.kubernetes.io/master
  effect: NoSchedule

Delete all current coredns pods using:

kubectl delete pods -n kube-system -l k8s-app=coredns

When describing one of the pending pods, the description displays:

Warning  FailedScheduling  4s (x6 over 19s)  default-scheduler  0/8 nodes are available: 1 PodToleratesNodeTaints, 7 NodeUnschedulable

If you uncordon the worker nodes, the pods start to be scheduled correctly.

Deploying to GKE

Maybe this just needs to be documented, but do you folks have any ideas on how I'd go about finding my service and pod CIDRs for Container Engine?

I found this feature request but couldn't see any tips from it: kubernetes/kubernetes#25533

I looked at the related issues and none of them looked specifically like they'd have the info I'd need.

I guess I could get a liberal CIDR from the existing service and pod IPs, but that sounds like I'm asking for trouble. The only idea I've had so far is gcloud compute routes list which does return a bunch of ranges for the cluster that would all fit in a /16.
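
One possible approach (a sketch; the field names are assumed from GKE's cluster description output, so verify against your gcloud version):

    # print the pod and service CIDRs for a GKE cluster (field names assumed)
    gcloud container clusters describe CLUSTER_NAME --zone ZONE \
      --format="value(clusterIpv4Cidr,servicesIpv4Cidr)"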

api versions for k8s resources

First of all thanks for all that work you are doing.

Just want to point out one thing.
If you are targeting GA in k8s 1.11 (I already use it in prod with 1.10), why not fix the API versions for the k8s resources in the deployment manifests?

- apiVersion: rbac.authorization.k8s.io/v1beta1
+ apiVersion: rbac.authorization.k8s.io/v1

kind: Deployment
- apiVersion: extensions/v1beta1
+ apiVersion: apps/v1
