pihole-kubernetes's People

Contributors

alesz, allcontributors[bot], andyg-0, billimek, brnl, dependabot[bot], derrockwolf, dtourde, ebcrypto, fernferret, github-actions[bot], imle, jetersen, joshua-nord, jskswamy, konturn, luqasn, mbode, mojo2600, morremeyer, northerngit, putz612, raackley, rafaelgaspar, sam-kleiner, sim22, tdorsey, utkuozdemir, vashiru, wrmilling

pihole-kubernetes's Issues

LoadBalancer won't route traffic to pods

Hi,

First, thank you for this great project and explanation.

I managed to get it to run, but there's an issue with MetalLB.
Both services (TCP / UDP) look all right (screenshot omitted),

but when I navigate to my IP, or telnet to any of the TCP ports, it doesn't work; I get: Network is unreachable.
If I open the node's IP with the NodePort in the browser, it works and I can see Pi-hole's admin web interface.

Here's what I get when I describe the TCP service (screenshot omitted):

How can I troubleshoot it?

Thank you.
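
A few generic checks that may help narrow this down (a sketch; adjust the namespace and label selectors to your setup):

kubectl get svc,endpoints -n pihole
kubectl describe svc pihole-tcp -n pihole
# the MetalLB speaker logs usually say whether the address is being announced
kubectl logs -n metallb-system -l app=metallb,component=speaker --tail=100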

Custom existing PVC name

Please add an extra values.yaml configuration key to set a custom name for the "existing" Persistent Volume Claim. Currently the existing PVC must be named "pihole", which also prevents running more than one Pi-hole deployment in the same namespace.

Also, please consider a refactor of the value keys. For example, Bitnami has its own rule of thumb for these scenarios:
https://github.com/bitnami/charts/tree/master/upstreamed/redmine/#existing-persistentvolumeclaims ✖️
https://github.com/bitnami/charts/tree/master/bitnami/redmine#existing-persistentvolumeclaims ✔️

A possible refactor could be:

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  ## OLD: persistentVolumeClaim.enabled
  enabled: true
  ## A manually managed Persistent Volume Claim
  ## Requires persistence.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  # existingClaim:
  annotations: {}
  ## PiHole data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  accessMode: ReadWriteOnce
  size: "500Mi"

What do you think?
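
For illustration, here is roughly how the claim reference in the deployment template could fall back to the generated name when no existing claim is set. Only a sketch, reusing the chart's existing pihole.fullname helper together with the proposed persistence.existingClaim key:

      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: {{ .Values.persistence.existingClaim | default (include "pihole.fullname" .) }}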

Liveness probe failure

I have been using version 4.4 of Pi-hole for a while now and went to update to version 5. When running it on my cluster I get this error, which keeps Pi-hole from starting.

kubelet, kube-worker-1 Readiness probe failed: Get http://10.42.1.14:80/admin.index.php: dial tcp 10.42.1.14:80: connect: connection refused

However, if I uninstall the failed version 5 and roll back to 4.4, changing only the app version in values.yaml, it runs without issue.

I will provide more details as needed; just trying to figure out this issue.

Formerly working install no longer starts

I had a working installation until this morning when I noticed DNS wasn't working. It appears the pod isn't taking the proper volumes and is erroring out. It's on the latest version of the Helm chart. Did something change in 1.8.23 or other recent versions that would make this incompatible?

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] 01-resolver-resolv: applying...
[fix-attrs.d] 01-resolver-resolv: exited 0.
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 20-start.sh: executing...
 ::: Starting docker specific checks & setup for docker pihole/pihole
  [✓] Update local cache of available packages
  [i] Existing PHP installation detected : PHP version 7.3.19-1~deb10u1

  [i] Installing configs from /etc/.pihole...
  [i] Existing dnsmasq.conf found... it is not a Pi-hole file, leaving alone!
  [✓] Copying 01-pihole.conf to /etc/dnsmasq.d/01-pihole.conf
chown: cannot access '': No such file or directory
chmod: cannot access '': No such file or directory
chown: cannot access '/etc/pihole/dhcp.leases': No such file or directory

Values file is...

adlists:
  - https://raw.githubusercontent.com/jmdugan/blocklists/master/corporations/facebook/all

dnsmasq:
  upstreamServers:
   - server=/my.domain/192.168.15.42

persistentVolumeClaim:
  enabled: true
  storageClass: managed-nfs-storage-persistent

ingress:
  enabled: true
  tls:
    - secretName: pihole-www-tls
      hosts:
        - my.domain
  hosts:
    - my.domain
  path: "/"

serviceWeb:
  loadBalancerIP: 192.168.15.41
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole-svc
  type: LoadBalancer

serviceDns:
  loadBalancerIP: 192.168.15.41
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole-svc
  type: LoadBalancer

DNS1: 1.1.1.2
DNS2: 1.0.0.2

admin:
  existingSecret: pihole-web-password
  passwordKey: WEBPASSWORD

Configuration Help

Thanks for the Helm chart. I'm having an issue and looking for some basic help. I have a single CentOS 8/Kubernetes host for testing, so no external load balancer. I have the following set (below), but the host doesn't seem to forward DNS requests from my test network to the Pi-hole NodePorts for DNS TCP and UDP:

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
  path: /
  hosts:
    - pi.hole.mydomainlocal

persistentVolumeClaim:
  enabled: true
  existingClaim: "pihole"

Then on firewalld I forward ports like:

sudo firewall-cmd --zone="trusted" --add-forward-port=port=53:proto=udp:toport=31111
sudo firewall-cmd --zone="trusted" --add-forward-port=port=53:proto=tcp:toport=31112
sudo firewall-cmd --zone="public" --add-forward-port=port=53:proto=udp:toport=31111
sudo firewall-cmd --zone="public" --add-forward-port=port=53:proto=tcp:toport=31112
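
To double-check that the forwarded ports match what Kubernetes actually assigned, the NodePorts can be listed (a sketch, assuming the release is installed in the pihole namespace):

kubectl get svc -n pihole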

PiHole not reachable after 2-3 hours

I'm running the latest version of the chart on microk8s. Here are my values:

persistentVolumeClaim:
    enabled: true
  
serviceTCP:
    loadBalancerIP: 192.168.1.101
    annotations:
        metallb.universe.tf/allow-shared-ip: pihole-svc
    type: LoadBalancer
  
serviceUDP:
    loadBalancerIP: 192.168.1.101
    annotations:
        metallb.universe.tf/allow-shared-ip: pihole-svc
    type: LoadBalancer

When I install the chart everything works fine and I can see the admin page on http://192.168.1.101/admin. After 2-3 hours it becomes unreachable.

Running:

k -n pihole rollout restart deployment pihole

Brings the application back up again.

I'm aware that this might not even be an issue with this chart but I'd really appreciate any tips on how to debug it. I'll grab the logs from the pod when it falls over again but I didn't see anything wrong the first time around.
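
A couple of generic things that may be worth capturing the next time it falls over (a sketch; adjust the namespace to the release):

kubectl -n pihole get events --sort-by=.lastTimestamp
kubectl -n pihole logs deploy/pihole --previous
kubectl -n pihole describe pod -l app=pihole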

can't change sharing key for "pihole/pihole-tcp", address also in use by pihole/pihole-udp

For some reason, I can't seem to colocate the TCP/UDP services; MetalLB says there is an issue with the sharing key.

~/workspace/kubernetes 
❯ kubectl describe svc pihole-udp --namespace pihole
Name:                     pihole-udp
Namespace:                pihole
Labels:                   app=pihole
                          app.kubernetes.io/managed-by=Helm
                          chart=pihole-1.7.8
                          heritage=Helm
                          release=pihole
Annotations:              meta.helm.sh/release-name: pihole
                          meta.helm.sh/release-namespace: pihole
                          metallb.universe.tf/allow-shared-ip: pihole
Selector:                 app=pihole,release=pihole
Type:                     LoadBalancer
IP:                       10.97.203.194
IP:                       192.168.0.230
LoadBalancer Ingress:     192.168.0.230
Port:                     dns-udp  53/UDP
TargetPort:               dns-udp/UDP
NodePort:                 dns-udp  32314/UDP
Endpoints:                10.244.1.26:53
Port:                     client-udp  67/UDP
TargetPort:               client-udp/UDP
NodePort:                 client-udp  30298/UDP
Endpoints:                10.244.1.26:67
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     30674
Events:
  Type    Reason        Age    From                Message
  ----    ------        ----   ----                -------
  Normal  IPAllocated   3m41s  metallb-controller  Assigned IP "192.168.0.230"
  Normal  nodeAssigned  2m33s  metallb-speaker     announcing from node "hivemind-2"

~/workspace/kubernetes 
❯ kubectl describe svc pihole-tcp --namespace pihole
Name:                     pihole-tcp
Namespace:                pihole
Labels:                   app=pihole
                          app.kubernetes.io/managed-by=Helm
                          chart=pihole-1.7.8
                          heritage=Helm
                          release=pihole
Annotations:              meta.helm.sh/release-name: pihole
                          meta.helm.sh/release-namespace: pihole
                          metallb.universe.tf/allow-shared-ip: pihole
Selector:                 app=pihole,release=pihole
Type:                     LoadBalancer
IP:                       10.108.117.207
IP:                       192.168.0.230
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  32092/TCP
Endpoints:                10.244.1.26:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  31429/TCP
Endpoints:                10.244.1.26:443
Port:                     dns  53/TCP
TargetPort:               dns/TCP
NodePort:                 dns  32296/TCP
Endpoints:                10.244.1.26:53
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type     Reason            Age                    From                Message
  ----     ------            ----                   ----                -------
  Warning  AllocationFailed  3m21s (x2 over 3m44s)  metallb-controller  Failed to allocate IP for "pihole/pihole-tcp": can't change sharing key for "pihole/pihole-tcp", address also in use by pihole/pihole-udp

Relevant values.yaml:

serviceTCP:
  loadBalancerIP: 192.168.0.230
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole
  type: LoadBalancer

serviceUDP:
  loadBalancerIP: 192.168.0.230
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole
  type: LoadBalancer

DHCP leases should use loadbalanced IP if available

Currently the DHCP lease handed out to clients uses the IP of the node where the Pi-hole pod is running as the DNS server. This means that if this node goes down, or the pod is rescheduled to another node, DNS requests from the clients fail.

With bare-metal LBs like MetalLB or PureLB it should be possible to use the load balancer IP of the service as the DNS server in the lease instead.

The chart should be extended with a switch to set the DNS server to the load-balanced IP. I think during startup of the pod this would need to be requested from the Kubernetes API and then pushed into a config file for dnsmasq. A rough sketch of the dnsmasq side is below.
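
If the load-balanced IP is already known up front (as with a static MetalLB assignment), the effect can be approximated today by handing dnsmasq an explicit dhcp-option line. This is only a sketch: it assumes dnsmasq.upstreamServers entries are written verbatim into the chart's custom dnsmasq config (as the server=/my.domain/... example elsewhere on this page suggests), and the address is hypothetical:

dnsmasq:
  upstreamServers:
    # advertise the service's load-balanced IP as the DNS server in DHCP leases
    - dhcp-option=option:dns-server,192.168.1.101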

pihole 5.1.2

I changed the image in my deployment script to pull down 5.1.2 and it seems to be working fine. You may want to update the newest chart to support Pi-hole 5.1.2 rather than pulling 5.1.1 by default.
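
In the meantime, the tag can usually be overridden per release without touching the chart. A sketch, assuming the chart exposes the usual image block (the pod description elsewhere on this page shows the pihole/pihole:v5.1.2 tag format):

image:
  repository: pihole/pihole
  tag: v5.1.2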

LivenessProbe may be too aggressive

Using the current settings I notice that the pihole pods occasionally restart (a few times/day) due to liveness probes failing.

The remedy is to adjust the probe settings to be a little more forgiving. I'll submit a pull request with those adjustments, as well as one to make the probe settings configurable via values; a rough idea of what that could look like is sketched below.
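
A sketch of what configurable probe values could look like, reusing the defaults visible in the pod description elsewhere on this page (the key names are hypothetical until the pull request lands):

probes:
  liveness:
    initialDelaySeconds: 60
    timeoutSeconds: 5
    failureThreshold: 10
  readiness:
    initialDelaySeconds: 60
    timeoutSeconds: 5
    failureThreshold: 3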

Static DHCP entries are not persistent

I am using pihole as a DHCP server for my network.

Once the chart was installed and working, I went ahead and set all the DHCP settings in the web interface.

I then added all the static leases for my network devices.

However, if the pod terminates, the static lease information is not persisted; all the other configs appear to be persisted properly.
It looks like Pi-hole stores the static leases in

./dnsmasq.d/04-pihole-static-dhcp.conf

Is it possible to add persistence for this, or have a section for static leases in values.yaml?
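
As a stopgap, static leases are plain dnsmasq dhcp-host lines, so they could be injected through the chart's dnsmasq configuration. A sketch only, again assuming dnsmasq.upstreamServers entries are rendered verbatim into the custom dnsmasq config, with a made-up MAC/IP/hostname:

dnsmasq:
  upstreamServers:
    # one line per static lease: MAC, IP, hostname
    - dhcp-host=AA:BB:CC:DD:EE:FF,192.168.1.50,nas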

No response for dhcp?

You've written an excellent guide, and with it I've been able to transfer my Pi-hole install for DNS from Docker to Kubernetes. My Docker install was also acting as my home LAN DHCP server, but I have not been able to get a DHCP response out of the Kubernetes-hosted instance. Have you tried to do so, and has there been any success?

Persistent Volume Claim name and namespace in services.

Hi,
Perhaps I'm not doing this right, but I notice the PVC name is set to the template name (claimName: {{ template "pihole.fullname" . }}). If this is the expected behavior, why not allow using an existing PVC name?

Another question: could using a namespace other than default be supported, for example in the services?

Thanks.

Setting externalTrafficPolicy: Local Pihole doesn't work.

After several days of trying to get this working I finally got it, but the problem comes when I set externalTrafficPolicy: to Local. Pi-hole stops working: the HTTP service is fine, but DNS doesn't respond. You can find my service configuration below:

pi@raspserverM:~/k3s/pihole $ cat svc-pihole-tcp.yml

apiVersion: v1
kind: Service
metadata:
  annotations:
    metallb.universe.tf/address-pool: pool-home
    metallb.universe.tf/allow-shared-ip: pihole-svc
  name: pihole-tcp
  labels:
    app: pihole
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 9888
      targetPort: pihole-http
      name: pihole-http
    - protocol: TCP
      port: 53
      targetPort: pihole-dns-tcp
      name: pihole-dns-tcp
  selector:
    app: pihole

pi@raspserverM:~/k3s/pihole $ cat svc-pihole-udp.yml

apiVersion: v1
kind: Service
metadata:
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole-svc
  name: pihole-udp
  labels:
    app: pihole
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
    - protocol: UDP
      port: 53
      targetPort: pihole-dns-udp
      name: pihole-dns-udp
  selector:
    app: pihole

And the Metallb config as well:

pi@raspserverM:~/k3s/pihole $ cat ../core/metallb-configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: pool-home
      protocol: layer2
      addresses:
      - 192.168.1.120-192.168.1.125

I'm using plain manifests instead of the Helm chart.

Liveness and Readiness probes are incorrectly defined

With the values.yaml file from master, helm template -f values.yaml pihole yields the following:

unknown field "livenessProbe" in io.k8s.api.core.v1.ContainerPort, ValidationError(Deployment.spec.template.spec.containers[0].ports[4]): unknown field "readinessProbe" in io.k8s.api.core.v1.ContainerPort

kubectl/k8s version 1.16
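
For reference, the error means the probes ended up nested under an entry of the container's ports list; they belong one level up, as direct fields of the container. A minimal sketch of the intended placement (path and port values are for illustration only):

containers:
  - name: pihole
    ports:
      - name: http
        containerPort: 80
    livenessProbe:
      httpGet:
        path: /admin/index.php
        port: http
    readinessProbe:
      httpGet:
        path: /admin/index.php
        port: http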

Restart fails for pihole

So this is an issue I have been having since I first started using this container. Whenever my container bricks and is restarted, it keeps restarting because it fails while loading my adlists. Using kubectl logs (podName) -f I can see the container going out and loading the blocklists; after adding a few it fails and reboots, not always at the same list. Sometimes it loads them all and then the container reboots. If I delete my persistent volume it starts up just fine. I then change the folder permissions (this is the www-data #39 user issue we've discussed here a lot), add the lists manually through the web interface, and everything is fine until it restarts. Once it restarts, it fails until I repeat this process.

exposing metrics through service

Currently, if metrics need to be scraped, the config below works; it references the pod IP directly.

    additionalScrapeConfigs:
    - job_name: 'pihole'
      static_configs:
        - targets: ['10.244.0.196:9617']

NAME                          READY   STATUS    RESTARTS   AGE     IP             NODE
pod/pihole-77875bf5f4-w4jvj   2/2     Running   1          7h40m   10.244.0.196

But the pod IP can easily change on various events, so it would be better to create a Service for it so that the scrape target stays stable. A rough sketch is below.
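
A sketch of such a Service for the exporter sidecar, reusing the app/release labels visible in the service descriptions elsewhere on this page and port 9617 from the scrape config above (the Service name is made up):

apiVersion: v1
kind: Service
metadata:
  name: pihole-metrics
  labels:
    app: pihole
spec:
  selector:
    app: pihole
    release: pihole
  ports:
    - name: metrics
      port: 9617
      targetPort: 9617
      protocol: TCP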

Client IP is only IP of node the pod is running on and assigning a new IP from metallb

Hi,

Thanks for providing these configs. It's a great help.

I have everything up and running but am noticing that the Client IP is the IP of the node in the cluster that the pod is running on. I definitely have spec.externalTrafficPolicy set to Local.

My Kubernetes versions are below:

Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.4", GitCommit:"f49fa022dbe63faafd0da106ef7e05a29721d3f1", GitTreeState:"clean", BuildDate:"2018-12-14T06:59:37Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}

I'm using Rancher 2.1.6 and Metallb also for the load balancing.

I also have noticed that if I assign an IP to the node and then restart the node, I can't connect to it using the same IP (even if I remove and re-add the service). The only way to get it working again is to assign a new IP to the service.

Any ideas on how to debug this further?

Rework services

Right now there is a tcp and a udp service.

I think it makes more sense to have three services:

  1. webservice
  2. dhcp service
  3. dns service

I will make the MR if you agree, but I don't really know the background behind the separation into TCP and UDP, so I thought I would ask first.

Enable rewrite for ingress

I'd like an option, outside of manually writing annotations, which allows me to set a rewrite rule so it will rewrite /admin/ to /.

Unable to install pihole because of resolve errors

Problem

I followed Jeff Geerling's guide to install Pi-hole, but I can't figure out what the problem is. When trying to install the Helm chart, one container fails because it can't pull the image.

Events / Logs

Name: pihole-9cf8cd796-6hg94
Namespace: pihole
Priority: 0
Node: slave1/192.168.1.201
Start Time: Tue, 17 Nov 2020 21:14:07 +0000
Labels: app=pihole
pod-template-hash=9cf8cd796
release=pihole
Annotations: checksum.config.adlists: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546
checksum.config.blacklist: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546
checksum.config.dnsmasqConfig: b8db33b1edc0c6d931e44ddb1f551bef2185bdfbad893d40b1c946479abdbfc
checksum.config.regex: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546
checksum.config.whitelist: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546
Status: Pending
IP: 10.42.1.102
IPs:
IP: 10.42.1.102
Controlled By: ReplicaSet/pihole-9cf8cd796
Containers:
pihole:
Container ID:
Image: pihole/pihole:v5.1.2
Image ID:
Ports: 80/TCP, 53/TCP, 53/UDP, 443/TCP, 67/UDP
Host Ports: 0/TCP, 0/TCP, 0/UDP, 0/TCP, 0/UDP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Limits:
cpu: 200m
memory: 256Mi
Requests:
cpu: 100m
memory: 128Mi
Liveness: http-get http://:http/admin.index.php delay=60s timeout=5s period=10s #success=1 #failure=10
Readiness: http-get http://:http/admin.index.php delay=60s timeout=5s period=10s #success=1 #failure=3
Environment:
WEB_PORT: 80
VIRTUAL_HOST: pi.hole
WEBPASSWORD: <set to the key 'password' in secret 'pihole-password'> Optional: false
DNS1: 8.8.8.8
DNS2: 8.8.4.4
Mounts:
/etc/addn-hosts from custom-dnsmasq (rw,path="addn-hosts")
/etc/dnsmasq.d/02-custom.conf from custom-dnsmasq (rw,path="02-custom.conf")
/etc/pihole from config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mfw4h (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pihole
ReadOnly: false
custom-dnsmasq:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: pihole-custom-dnsmasq
Optional: false
default-token-mfw4h:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mfw4h
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s

Events:
Type Reason Age From Message

Normal Scheduled default-scheduler Successfully assigned pihole/pihole-9cf8cd796-6hg94 to slave1
Normal Pulling 54s (x3 over 103s) kubelet, slave1 Pulling image "pihole/pihole:v5.1.2"
Warning Failed 48s (x3 over 98s) kubelet, slave1 Failed to pull image "pihole/pihole:v5.1.2": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/pihole/pihole:v5.1.2": failed to resolve reference "docker.io/pihole/pihole:v5.1.2": failed to do request: Head https://registry-1.docker.io/v2/pihole/pihole/manifests/v5.1.2: dial tcp: lookup registry-1.docker.io: Try again
Warning Failed 48s (x3 over 98s) kubelet, slave1 Error: ErrImagePull
Normal BackOff 9s (x5 over 97s) kubelet, slave1 Back-off pulling image "pihole/pihole:v5.1.2"
Warning Failed 9s (x5 over 97s) kubelet, slave1 Error: ImagePullBackOff

nslookup

nslookup https://registry-1.docker.io/v2/pihole/pihole/manifests/v5.1.2
Server: 1.1.1.1
Address: 1.1.1.1#53

** server can't find https://registry-1.docker.io/v2/pihole/pihole/manifests/v5.1.2: NXDOMAIN

curl

curl -I https://registry-1.docker.io/v2/pihole/pihole/manifests/v5.1.2
HTTP/1.1 401 Unauthorized
Content-Type: application/json
Docker-Distribution-Api-Version: registry/2.0
Www-Authenticate: Bearer realm="https://auth.docker.io/token",service="registry.docker.io",scope="repository:pihole/pihole:pull"
Date: Tue, 17 Nov 2020 21:28:59 GMT
Content-Length: 156
Strict-Transport-Security: max-age=31536000

I hope these outputs help; I have no clue where the problem is.
Any help is appreciated 👍
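
One small observation on the checks above: nslookup was given the full manifest URL, but it expects a bare hostname, which is why it returns NXDOMAIN even when DNS is healthy. A more telling check from the affected node might be (sketch):

nslookup registry-1.docker.io
nslookup registry-1.docker.io 1.1.1.1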

StatefulSet?

Currently, the chart deploys pihole as a Deployment. After experimentation, I determined that it isn't a good idea to run more than one replica of pihole due to the shared nature of the storage used by multiple instances.

In cases where someone wants a more highly-available pihole workload (i.e. more than one instance running), I wonder if leveraging StatefulSet could help. See this primer for a comparison between the different types.

Running pihole as a StatefulSet would mean the following:

  • It would still be possible to run just one instance if desired
  • When running more than one instance, each 'instance' would have a distinct copy of the filesystem that pihole uses. This means that query log collection, settings, etc would all be separated from each instance and if you are hitting the service or ingress for pihole, you may see different results depending on which stateful instance happens to serve the request
  • It should, in theory, allow for running multiple instances of pihole for a more highly-available deployment

Thoughts?
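
For concreteness, the mechanics would hinge on volumeClaimTemplates, which give each replica its own PVC. A minimal sketch only (names, sizes and image tag are illustrative, most fields omitted):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pihole
spec:
  serviceName: pihole
  replicas: 2
  selector:
    matchLabels:
      app: pihole
  template:
    metadata:
      labels:
        app: pihole
    spec:
      containers:
        - name: pihole
          image: pihole/pihole:v5.1.2
          volumeMounts:
            - name: config
              mountPath: /etc/pihole
  volumeClaimTemplates:
    - metadata:
        name: config
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 500Mi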

Maintainers wanted :)

Hey everybody,

is there anybody willing to help maintain this repository? My work focus has shifted away from Kubernetes and I'm not on top of the latest updates and features anymore. I still have my cluster here and will continue to use and improve this chart, but maybe there is somebody with more experience in Kubernetes and Helm who wants to join in and help review and merge new changes?

Thank you!

Christian

adlist isn't applying lists

I have the following information in my values file:

adlists:
  - https://raw.githubusercontent.com/jmdugan/blocklists/master/corporations/facebook/all

It creates the adlist ConfigMap and the /etc/pihole/adlist.list file but I don't see the adlist showing up in the web UI.

Local DNS Records aren't reachable

As soon as I create a local DNS entry, the page/server is no longer accessible locally. When I install pi-hole on a single RPI, local entries work fine.

Adding possibility to attach bash script/configmap in Deployment

Hi.
With the working https://gitlab.com/grublets/youtube-updater-for-pi-hole I would like to add a bash script via a ConfigMap (which can be created separately), but I would also need a way to mount the corresponding volumes and ConfigMaps in the Deployment.

Right now I have to override the Deployment each time the pod is killed (and redeployed via Helm).

Or simply allow attaching some scripts directly into the Pi-hole image so they can be run automatically 👍

Thanks a lot
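
If the chart grew generic hooks for this, the values could look roughly like the following. Purely a sketch: extraVolumes/extraVolumeMounts are hypothetical key names here, not confirmed chart options, and the ConfigMap name is made up:

extraVolumes:
  - name: youtube-updater
    configMap:
      name: youtube-updater-script
      defaultMode: 0755
extraVolumeMounts:
  - name: youtube-updater
    mountPath: /opt/youtube-updater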

Pi-Hole in HA mode (multi-pod)

Hi,

I wanted to get PiHole working as a 2-pod HA-cluster, so I can distribute them over multiple hosts. I found that the nginx persistent cookie could help. For your information, here is a working values file, based on chart version 1.7.17.

You need central storage that is available to all pods, like NFS.

# values for Pi-Hole HA for nginx ingress example
replicaCount: 2

persistentVolumeClaim:
  enabled: true
  accessModes:
    - ReadWriteMany
  storageClass: nfs-client

ingress:
  enabled: true
  hosts:
    # Set your favorite hostname. It must resolve to the ingress IP-address. You could add this in PiHole itself
    # once you use it as your DNS-server. Until then, please add it to your hosts file.
    - pihole.local
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"

This should also work with Traefik, which is the default on K3s, but I didn't test this:

# values for Pi-Hole HA for traefik ingress example
replicaCount: 2

persistentVolumeClaim:
  enabled: true
  accessModes:
    - ReadWriteMany
  storageClass: nfs-client

ingress:
  enabled: true
  hosts:
    # Set your favorite hostname. It must resolve to the ingress IP-address. You could add this in PiHole itself
    # once you use it as your DNS-server. Until then, please add it to your hosts file.
    - pihole.local
  annotations:
    kubernetes.io/ingress.class: traefik

serviceTCP:
  annotations:
    traefik.ingress.kubernetes.io/affinity: "true"
    traefik.ingress.kubernetes.io/session-cookie-name: "sticky"

Additionally you can force kubernetes to schedule the pods on different hosts:

antiaff:
  enabled: true
  # Here you can set the pihole release (you set in `helm install <releasename> ...`)
  # you want to avoid
  avoidRelease: pihole1
  # Here you can choose between preferred or required
  strict: true

Hope this helps!

DHCP enabled

Hi,
First of all, thank you for the hard work, this chart is perfect.

My router doesn't allow me to configure a DNS server, so I decided to enable the Pi-hole DHCP server according to this post and disable my router's DHCP server.

The problem is that it doesn't work at all.

I have a pretty similar config to yours: MetalLB provides Pi-hole two unique IPs to expose the UDP and TCP ports, and I can reach those ports from anywhere on the network, but no device can pick up an IP from the Pi-hole DHCP server.

So I was wondering if you or anyone else has a similar setup that is working, and where my issue could be?

Thanks in advance.

Greg

installation using helm3 fails

helm operator reports this error:

ts=2020-01-13T20:47:52.764681657Z caller=release.go:217 component=release release=pihole targetNamespace=pihole resource=pihole:helmrelease/pihole helmVersion=v3 error="Helm release failed" revision=49ea46486a904d1b5a491f8d223b7b9c92eacc24 err="failed to upgrade chart for release [pihole]: template: pihole/templates/_helpers.tpl:15:14: executing \"pihole.fullname\" at <.Values.fullnameOverride>: can't evaluate field Values in type string"

This is my config

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: pihole
  namespace: pihole
spec:
  releaseName: pihole
  helmVersion: v3
  chart:
    git: git@github.com:MoJo2600/pihole-kubernetes.git
    path: pihole
  values:
    name: pihole
    ingress:
      enabled: true
      annotations:
        kubernetes.io/tls-acme: "true"
        traefik.ingress.kubernetes.io/frontend-entry-points: http,https
        traefik.ingress.kubernetes.io/redirect-entry-point: https
      hosts:
         - pihole.example.com
    serviceTCP:
      metallb.universe.tf/allow-shared-ip: pihole-svc
    serviceUDP:
      metallb.universe.tf/allow-shared-ip: pihole-svc
    doh:
      enabled: true
    resources:
      limits:
        memory: 64Mi
      requests:
        cpu: 50m
        memory: 64Mi

Does it work for you?
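
Unrelated to the template error itself, but one thing that stands out compared with the other values examples on this page: the MetalLB annotation is normally nested under annotations: rather than placed directly under the service key, e.g. (sketch):

serviceTCP:
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole-svc
serviceUDP:
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole-svc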

Does not Expose on port 53

Hello, I don't know if this is the right channel for support/help, but here it goes. Everything works great with the configuration I have. The only thing is that the ports (i.e. 53, 80) are exposed on random NodePorts. I can live with port 80 living on a random port, as this is only the admin portal, but in order for me to point my router to Pi-hole using the router's DNS setting, the port exposed on the Kubernetes node must be 53.

service/pihole-tcp   LoadBalancer   10.110.201.40    <pending>     80:30181/TCP,443:31614/TCP,53:30634/TCP   2m46s
service/pihole-udp   LoadBalancer   10.110.195.142   <pending>     53:30699/UDP,67:30840/UDP                 2m46s

As you can see above, port 53 on TCP is assigned NodePort 30634. How do I make it so that port 53 is mapped to port 53?

Thank you

Can't run TCP and UDP on same port with metalLB

When setting up the TCP and UDP services I run into this issue:
Warning FailedScheduling: X node(s) didn't have free ports for the requested pod ports.
Using MetalLB v0.7.3

Did you experience anything similar, or what version of MetalLB do you use?

Readonly Pod on NFS

If I deploy a Pi-hole pod with Helm on my k3s cluster using the local storage class, everything works well.
If I deploy it on my NFS storage class, I can't add DNS entries or blacklist entries.
I always get this error:

Error, something went wrong!
While executing INSERT OT IGNORE: attempt to write a readonly database
Added 0 out of 1 domains

I use the NFS provisioner from
https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
and it was installed via Helm.

For the Pi-hole installation I used this command line:
helm install pihole mojo2600/pihole --set DNS1="1.1.1.1" --set DNS2="8.8.8.8" --set adminPassword="asdf" --set replicaCount="1" --set serviceDns.type="LoadBalancer" --set serviceWeb.type="LoadBalancer" --set persistentVolumeClaim.enabled="true"

I don't get any useful information from the logs.

What could be wrong?
Thank you.

doh enabled pods do not stay ready

Hiya!
The chart works great with Pi-hole; however, if I set doh to enabled (leaving the rest of the config as-is), the container does not become ready and DNS resolution does not work in Pi-hole.
If I describe the container I get:

Container cloudflared failed liveness probe, will be restarted

All there is in the container log for cloudflared is:

INFO[2020-12-22T17:45:09Z] Adding DNS upstream - url: https://1.1.1.1/dns-query
INFO[2020-12-22T17:45:09Z] Adding DNS upstream - url: https://1.0.0.1/dns-query
INFO[2020-12-22T17:45:09Z] Starting DNS over HTTPS proxy server on: dns://0.0.0.0:5053
INFO[2020-12-22T17:45:09Z] Starting metrics server on [::]:49312/metrics

ingress templating is too cute

The ingress templating is trying to be too fancy with appending virtualHost.

I'd like to have a single-host ingress, as it would be impossible to get a Let's Encrypt certificate for pi.hole.

ingress:
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
  enabled: true
  hosts:
  - pihole.lan.jetersen.dev
  tls:
  - secret: pihole.lan.jetersen.dev-tls
    hosts:
    - pihole.lan.jetersen.dev

virtualHost: pihole.lan.jetersen.dev

becomes:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: pihole
  labels:
    app: pihole
    chart: pihole-1.7.7
    release: pihole
    heritage: Helm
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
spec:
  tls:
    - hosts:
        - "pihole.lan.jetersen.dev"
        - "pihole.lan.jetersen.dev"
      secretName:
  rules:
    - host: "pihole.lan.jetersen.dev"
      http:
        paths:
          - path: /
            backend:
              serviceName: pihole-tcp
              servicePort: http
    - host: "pihole.lan.jetersen.dev"
      http:
        paths:
          - path: /
            backend:
              serviceName: pihole-tcp
              servicePort: http

Impossible to add more domains to whitelist or blocklist.

Hello,

I'm unable to add more domains to the adlist or whitelist files using the Pi-hole web interface. I had pre-filled the values template with some domains, and I always get the message: "read-only filesystem". It's strange because if I leave the values template without any domains, it works fine and I can add or remove domains.

Regards

PersistentVolumeClaim error

Hi. I am following this tutorial: https://kauri.io/68-selfhost-pihole-on-kubernetes-and-block-ads-and/5268e3daace249aba7db0597b47591ef/a

In the "Deployment" section, point 4a, the changes made to the values.yml file somehow can't be installed through Helm. This is the error I am facing:

Error: Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Volumes: []v1.Volume: v1.Volume.VolumeSource: PersistentVolumeClaim: v1.PersistentVolumeClaimVolumeSource.ClaimName: ReadString: expects " or n, but found t, error found in #10 byte of ...|aimName":true}},{"co|..., bigger context ...|e":"config","persistentVolumeClaim":{"claimName":true}},{"configMap":{"defaultMode":420,"name":"piho|...

When I perform a "kubectl get pvc -n pihole" this is the output:

NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pihole   Bound    pihole   500Mi      RWO            manual         3h30m

PS: I checked out the commit from March and it is working, so the latest commit appears to be the problem.
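
A guess based on the error text: ClaimName: ReadString: expects " or n, but found t suggests the claim name was rendered as the boolean true instead of a string, which can happen if a claim-name value (such as existingClaim) is set to an unquoted true. If an existing claim is meant, it has to be the claim's name as a string, e.g. (sketch, using the existingClaim key shown elsewhere on this page):

persistentVolumeClaim:
  enabled: true
  existingClaim: "pihole"   # a PVC name string, not a boolean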

svclb-pihole-tcp pods stuck in pending status

This is probably a config error on my part but would appreciate any help.

All three pods for svclb-pihole-tcp are stuck in pending status.

Describe states:
Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports.

netstat --listen on my 3 nodes indicates nothing listening on ports 53, 80, or 443.

What might I be missing or messed up? Thanks!

chown: cannot access '/etc/pihole/dhcp.leases': No such file or directory

So, Pi-hole seems to have started up all right, but I noticed this while going through the logs:

chown: cannot access '': No such file or directory
chmod: cannot access '': No such file or directory
chown: cannot access '/etc/pihole/dhcp.leases': No such file or directory

Is there something that I missed that maybe should've worked?

Another question: how do I set up DoH?

I was following this - https://kauri.io/68-selfhost-pihole-on-kubernetes-and-block-ads-and/5268e3daace249aba7db0597b47591ef/a - and it doesn't seem to explain that. Is there anywhere I can read up on how to configure DoH? Thank you.
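
On the DoH question: the chart appears to expose a doh block (it shows up in the HelmRelease example elsewhere on this page), so enabling it in values and checking the chart's values.yaml for the related options is probably the starting point. Sketch:

doh:
  enabled: true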

can't set DNS2 to empty or no

The docker-pi-hole configuration allows you to set DNS2 to no if you want a single DNS server (which I do, because I'm running CoreDNS on k8s with the Pi, and that should be my only DNS).

When I set DNS2 to "no" in values, it fails the checks.

When I set it to no (no quotation marks) it passes through, but then Google DNS ends up as the second DNS, presumably because unquoted no is parsed as a YAML boolean rather than the string the container expects.

I'm new to k8s and haven't figured this out yet.

https://github.com/pi-hole/docker-pi-hole/blob/master/README.md#environment-variables

Conditional DNS isn't working

I enabled conditional DNS forwarding and it seems to create the file 02-custom.conf properly. However, when I logged into the PiHole interface the checkbox for "Conditional Forwarding" wasn't enabled. I set it manually using the web interface and found it added the server= entry to 01-pihole.conf instead of 02-custom.conf.
