k8s-at-home / charts


⚠️ Deprecated: Helm charts for applications you run at home

Home Page: https://docs.k8s-at-home.com

License: Apache License 2.0

Smarty 47.40% Ruby 20.79% Mustache 8.99% Shell 22.82%
helm charts kubernetes k8s homelab

charts's Introduction

⚠️ Deprecation and Archive Notice

This repo is being deprecated; please read this issue

Helm charts


Usage

Helm must be installed to use the charts. Please refer to Helm's documentation to get started.

Once Helm is set up properly, add the repo as follows:

helm repo add k8s-at-home https://k8s-at-home.com/charts/

You can then run helm search repo k8s-at-home to see the charts.
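Individual charts can then be installed from the repo, for example (the chart name here is only an illustration; any chart returned by the search works the same way):

helm install my-release k8s-at-home/home-assistant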

Support

We have a few outlets for getting support with our projects:

Contributing

See CONTRIBUTING.md

License

Apache 2.0 License

Contributors ✨

Thanks goes to these wonderful people (emoji key):


Fabian Zimmermann

💻

Vegetto

💻

Travis Lyons

💻

Bᴇʀɴᴅ Sᴄʜᴏʀɢᴇʀs

💻

Kjeld Schouten-Lebbing

💻

Rolf Berkenbosch

💻

auricom

💻

Aaron Johnson

💻

Anders Brujordet

💻

Antoine Bertin

💻

ᗪєνιη ᗷυнʟ

💻

Ardetus

💻

Chris Golden

💻

Fabio Brito d'Araujo e Oliveira

💻

Allen Porter

💻

Rasmus Hermansen

💻

Dennis Zhang

💻

Clemens Bergmann

💻

Arnaud Lemaire

💻

Julen Dixneuf

💻

Nicholas St. Germain

💻

Ryan Walter

💻

Chip Wolf ‮

💻

jr0dd

💻

Aleksandr Beshkenadze

💻

Yusuke Nakamura

💻

Brandon Clifford

💻

Nat Allan

💻

Jack Maloney

💻

Andrew Zammit

💻

Ryan Draga

💻

Jan-Otto Kröpke

💻

Chris Sanders

💻

Alex Waibel

💻

Simon Caron

💻

Karan Samani

💻

Markus Reiter

💻

Paul N

💻

Varac

💻

CoolMintChocolate

💻

Jonas Janz

💻

Thibault Cohen

💻

Dang Mai

💻

Christopher Larivière

💻

Winston R. Milling

💻

Arthur

💻

Skyler Mäntysaari

💻

Dis

💻

Roger Rumao

💻

Marcello Ceschia

💻

Roberto Santalla

💻

Greg Haskins

💻

jlrgraham

💻

Lukas Wingerberg

💻

TheDJVG

💻

Rickard Schoultz

💻

Taylor Vories

💻

Jonathan

💻

Johannes Kastl

💻

David Young

💻

Bikramjeet Singh

💻

Gerald Wu

💻

Ivan Gregurić Ortolan

💻

Luca Calcaterra

💻

Omar Pakker

💻

Noel Georgi

💻

lanquarden

💻

This project follows the all-contributors specification. Contributions of any kind welcome!

charts's People

Contributors

allcontributors[bot], angelnu, billimek, bjw-s, carpenike, cubic3d, dcplaya, dirtycajunrice, dynamicat, flipenergy, halkeye, ishioni, jmmaloney4, jonnobrow, jr0dd, kimi450, mkilchhofer, mr-onion-2, nicholaswilde, nolte, onedr0p, patricol, reitermarkus, renovate[bot], rwaltr, rytislt, samip5, somerandow, truxnell, wrmilling


charts's Issues

Where is the code of cloudflare dyndns?

Hello,
I am missing some config options for the cloudflare dyndns chart, specifically CF_APITOKEN. This is used to provide a more restricted token so the app has much more limited access.
Because it is not documented, I decided to take a look at the chart code, but I was not able to find it.
Where is it? And will you consider adding this configuration option?
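To illustrate the kind of configuration I am after, assuming the chart exposes a generic env block (that is a guess on my part, since I could not find the chart code):

env:
  # scoped API token instead of the global API key; variable name as referenced above
  CF_APITOKEN: "<scoped-token>"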
Thanks and regards

External Domain

I'm probably missing something obvious, but I don't see a way to specify the public-facing domain for callbacks from third parties. I do have ingress.hosts set, but when I add Plex as a vendor, that third party site tries to redirect me back to https://<local_ip>:8123/auth/plex/<etc>. If I swap out the local_ip:8123 with the proper domain, I can continue the auth flow.
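For context, what I effectively need to configure is Home Assistant's own external URL. In plain configuration.yaml terms that would be something like the following (hostnames are placeholders); what I am unsure about is where, or whether, the chart exposes it:

homeassistant:
  # public URL that third parties (e.g. the Plex auth flow) should redirect back to
  external_url: "https://hass.example.com"
  # address used on the local network
  internal_url: "http://192.168.1.50:8123"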

[common] Nodeselector and Tolerations don't render properly

When adding the following to the values:

nodeSelector:
  kubernetes.io/hostname: k8s-staticwan
tolerations:
- effect: NoSchedule
  operator: Exists

it gets rendered as:

nodeSelector:        kubernetes.io/hostname: k8s-staticwan
tolerations:        - effect: NoSchedule
  operator: Exists

Gotta fix that indenting 😮
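For reference, the usual Helm pattern that avoids this is to pipe the value through toYaml with nindent so every line gets re-indented, roughly like the sketch below (the indent width of 8 is just an example and depends on where the block sits in the template):

{{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
{{- end }}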

[media-common] Annotations and Labels

As far as I can tell, there is currently no way to add annotations or labels to the statefulsets or deployments managed by the media-common chart, is that correct? (A rough sketch of what I mean follows the use cases below.)

Use case:

  • organizing resources
  • 3rd party tools that read annotations/labels (e.g. keel)
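Something along these lines in the values would cover both use cases; the key names below are hypothetical, since as far as I can tell the chart does not expose them yet:

# hypothetical values keys, not currently in media-common
podAnnotations:
  keel.sh/policy: minor
podLabels:
  app.kubernetes.io/part-of: media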

[plex] - Plex fails to start because of readiness checks

When Plex is managing large libraries, it can take a really long time to start (so readiness checks fail and restart the pod). This is because it tries to chown everything inside of /config, and that directory might be big (in terms of the number of files) if you have a large library.

I've created a PR in upstream plex Docker image which should hopefully speed it up a bit.

I've also resorted to using CHANGE_CONFIG_DIR_OWNERSHIP set to false (this brought up a templating issue, for which I'll create a PR), which works but imho isn't ideal.

I was thinking the better approach might be to create an init container or possibly even a job that sets the ownership correctly... This way the readiness/liveness probes only are dealing with actual Plex starting up, not any preamble that might need to happen.

I would be interested in ideas.
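As a rough illustration of the init-container idea (the image, IDs and volume name below are placeholders, not taken from the chart):

initContainers:
  - name: config-ownership
    image: busybox:1.32
    # chown the config volume up front so the readiness probe only has to wait for Plex itself
    command: ["sh", "-c", "chown -R 1000:1000 /config"]
    volumeMounts:
      - name: config
        mountPath: /config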

[calibre-web] Pod terminates with [s6-finish] sending all processes the TERM signal.

I install the chart with the default values but the pod constantly restarts. It comes up, but after [s6-finish] waiting for services. the pod logs [s6-finish] sending all processes the TERM signal. and exits.

kubectl -n calibre logs calibre-calibre-web-58c874b9d8-5fssm calibre-web -f
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 01-envfile: executing... 
[cont-init.d] 01-envfile: exited 0.
[cont-init.d] 10-adduser: executing... 

-------------------------------------
          _         ()
         | |  ___   _    __
         | | / __| | |  /  \ 
         | | \__ \ | | | () |
         |_| |___/ |_|  \__/


Brought to you by linuxserver.io
-------------------------------------

To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------

User uid:    1001
User gid:    1001
-------------------------------------

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 30-config: executing... 
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.

Thus web-test-connection always fails

kubectl -n calibre logs calibre-calibre-web-test-connection
Connecting to calibre-calibre-web:8083 (10.43.52.188:8083)

[media-common] Breaking ingress change

Several of my deploys have been failing recently, and I just dug into what's going on. It appears media-common had some updates to the ingress to meet new 1.19 requirements. That change is here.

The change bumped the Minor version from 1.1.1 to 1.2.0. Since this is not backwards compatible, I believe it should have bumped the major version to 2.0.0 instead.

The charts repository then landed this change, which switches media-common from "~" to "^" versioning, allowing Minor and Patch upgrades instead of just Patch upgrades.

The result is that all media-common charts with the 2nd change (sonarr, radarr, organizr) have a backwards-incompatible change to their values.yaml. The ingress used in those charts requires that entries under the 'paths' key be maps rather than the plain list of paths previously in the chart.

It's a good change; I believe the error here is actually in the first commit, which should have bumped the Major version so that anyone using the supporting charts with ~ or ^ matching wouldn't receive a backwards-incompatible change. For now I've pinned my chart directly to the release from a few days ago.

I think rolling back the 1.2.0 media-common, bumping the change to 2.0 and then bumping the umbrella charts major versions would be the correct way to version this change such that existing charts are aware this is a non-backwards compatible change and won't pull it in automatically.
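For anyone hitting this, the shape of the values change is roughly as follows (a simplified sketch, not the exact chart schema):

ingress:
  hosts:
    - host: sonarr.example.com
      paths:
        # previously a plain list of path strings, e.g. "- /"
        # now each entry is a map:
        - path: /
          # pathType: Prefix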

[node-red] Define 'CredentialSecret' at launch

Exploring the 'how' to do this now. Looks like one of the options in settings.js needs to be updated with a value.

At startup Node-Red sends this warning:

---------------------------------------------------------------------
Your flow credentials file is encrypted using a system-generated key.

If the system-generated key is lost for any reason, your credentials
file will not be recoverable, you will have to delete it and re-enter
your credentials.

You should set your own key using the 'credentialSecret' option in
your settings file. Node-RED will then re-encrypt your credentials
file using your chosen key the next time you deploy a change.
---------------------------------------------------------------------

Adguard-home - Liveness probe failed / Connection failed

Hi

Apologies if this is not the right place to ask, since this is probably more due to my lack of experience with Helm rather than any issue with this chart.

I am running a clean k3s environment on a Pi4. Just the one node for now.

The only values I have overridden in the adguard-home chart are the image tag (armhf-latest) and the existingClaim.

I can port-forward to the pod on port 3000 and access and complete the setup wizard at /install.html, but after this it says the connection was reset. Then eventually the liveness probe fails with e.g. Get http://10.42.0.70:3000/login.html: dial tcp 10.42.0.70:3000: connect: connection refused.

Am I missing something in my environment setup?

Environment:
Raspberry Pi4
5.4.73-v8+ #1360 SMP PREEMPT Thu Oct 29 16:00:37 GMT 2020 aarch64 GNU/Linux
Running k3s version v1.18.9+k3s1 (630bebf9)

Many thanks

Remove appVersion in Chart.yaml

@billimek how do you feel if we remove this field from the Chart.yaml? It seems like the field is useless for a bunch of applications that do not require a chart update when an image is updated.

My line of thinking is that helm list --all-namespaces -a, or the helm-exporter, will not return the correct appVersion if we only update the image. That makes this metadata wrong.

Of course, the proper way to handle this is to release a new chart version on every image update, but we don't really have the time to do that since new versions get released all the time, almost daily.

Arm support via linuxserver/plex

As this project largely serves home projects, a fair number of the audience may be using arm based SBC clusters (especially with there now being an 8GB Raspberry Pi).

I am aware that the Plex maintained docker images only cater to x86, however the linuxserver.io images include arm images.

While it may require significant effort to migrate cleanly, my proposal is to do so in order to cater to a wider audience.

While it would mean moving away from the official images, linuxserver.io has a good reputation for maintaining high quality images.

https://hub.docker.com/r/linuxserver/plex/

[unifi] Enable external deployment of MongoDB

As the title says: a feature request to make it possible to use an external database, either as a separate pod (e.g. a Helm chart dependency) or completely external via e.g. host/user/pass configuration.
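To make the request concrete, the values could end up looking something like this (every key below is hypothetical, just sketching the two modes):

mongodb:
  # hypothetical: deploy the database as a chart dependency
  enabled: false

externalMongodb:
  # hypothetical: point the controller at an existing database instead
  enabled: true
  host: mongodb.example.com
  port: 27017
  username: unifi
  password: changeme
  database: unifi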

[home-assistant] Homeassistant fails to start with AppDaemon enabled

Describe the bug
I'm instantiating HASS in a kubernetes cluster using the following definition. https://github.com/billimek/billimek-charts/tree/master/charts/home-assistant

Homeassistant fails to start with AppDaemon enabled.

Version of Helm and Kubernetes:
Helm: v3.2.1
Kubernetes: v1.18.2

What happened:

➜  kubernetes git:(master) ✗ kubectl -n homeassistant logs homeassistant-home-assistant-8ccc9d79-f22bk -c appdaemon 
2020-06-14 16:35:45.284536 INFO AppDaemon Version 3.0.5 starting
2020-06-14 16:35:45.284770 INFO Configuration read from: /conf/appdaemon.yaml
2020-06-14 16:35:45.286422 INFO AppDaemon: Starting Apps
2020-06-14 16:35:45.289228 INFO AppDaemon: Loading Plugin HASS using class HassPlugin from module hassplugin
2020-06-14 16:35:45.329504 INFO AppDaemon: HASS: HASS Plugin Initializing
2020-06-14 16:35:45.329869 INFO AppDaemon: HASS: HASS Plugin initialization complete
2020-06-14 16:35:45.330040 INFO Starting Dashboards
2020-06-14 16:35:45.330182 WARNING ------------------------------------------------------------
2020-06-14 16:35:45.330251 WARNING Unexpected error during run()
2020-06-14 16:35:45.330311 WARNING ------------------------------------------------------------
2020-06-14 16:35:45.330992 WARNING Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/appdaemon/admain.py", line 82, in run
    self.rundash = rundash.RunDash(self.AD, loop, self.logger, self.access, **hadashboard)
  File "/usr/local/lib/python3.6/site-packages/appdaemon/rundash.py", line 130, in __init__
    dash_net = url.netloc.split(":")
TypeError: a bytes-like object is required, not 'str'

2020-06-14 16:35:45.331065 WARNING ------------------------------------------------------------
2020-06-14 16:35:45.331140 INFO AppDeamon Exited

Relevant Configuration Section

appdaemon:
  enabled: true

  ## code-server container image                                                                                                              
  ##                                                                                                                                          
  image:
    repository: acockburn/appdaemon
    tag: 3.0.5
    pullPolicy: IfNotPresent

  ## Home Assistant API token                                                                                                                 
  # haToken:                                                                                                                                  

  ## Additional hass-vscode container environment variable                                                                                    
  ## For instance to add a http_proxy                                                                                                         
  ##                                                                                                                                          
  extraEnv: {}

  ingress:
    enabled: false
    annotations: {}
      # kubernetes.io/ingress.class: nginx                                                                                                    
      # kubernetes.io/tls-acme: "true"                                                                                                        
    path: /
    hosts:
      - appdaemon.local
    tls: []
    #  - secretName: appdaemon-tls                                                                                                            
    #    hosts:                                                                                                                               
    #      - appdaemon.local

  service:
    type: ClusterIP
    port: 5050
    annotations: {}                                                                                                                           
    labels: {}                                                                                                                                
    clusterIP: ""
    ## List of IP addresses at which the hass-appdaemon service is available                                                                  
    ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips                                                                      
    ##                                                                                                                                        
    externalIPs: []
    loadBalancerIP: ""
    loadBalancerSourceRanges: []
    # nodePort: 30000

[media-common] Image values separate organisation

In the media-common chart the image definitions are as follows:

image:
  organization: ""
  repository: ""
  pullPolicy: IfNotPresent
  tag: ""

Which separates the organisation part from the image name part. For update automation, Flux requires the repository value to be in the owner/image-name format. It would be possible to work around that by creating a dummy value for the repository and pointing Flux at it, but that is hacky, and one would need to maintain three repetitive fields for custom images plus additional annotations for Flux. Were there any specific design decisions behind separating them?
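For comparison, the shape Flux's image automation expects is simply the combined owner/image-name in a single field, e.g.:

image:
  # owner and image name combined in one field
  repository: "linuxserver/sonarr"
  pullPolicy: IfNotPresent
  tag: ""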

[question] additionalVolumeMounts usage

It seems that the behavior of additional mounts has changed from the original use case on things like ombi/sonarr/etc. Can someone add a description of (or potentially fix) additionalVolumeMounts?

Previous method:

persistence: 
  extraExistingClaimMounts:
      - name: "downloads"
        mountPath: "/downloads"
        existingClaim: "downloads"
        readOnly: false

New method:

persistence:
  additionalVolumeMounts:
     - name: "downloads"
       mountPath: "/downloads"
       existingClaim: "downloads"
       readOnly: false

That results in:
Error: UPGRADE FAILED: error validating "": error validating data: [ValidationError(Deployment.spec.template.spec.containers[0].volumeMounts[1]): unknown field "existingClaim" in io.k8s.api.core.v1.VolumeMount

Removing the existingClaim results in the mount not being found. I'm sure I have poor formatting but this is not my expertise.
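From the error it looks like additionalVolumeMounts maps directly onto the container's volumeMounts, so the claim itself has to be declared separately as a volume. A sketch of what I would expect to work, assuming the chart also exposes an additionalVolumes list (that key name is my guess):

persistence:
  additionalVolumes:
    # assumed key: declares the volume backed by the existing claim
    - name: downloads
      persistentVolumeClaim:
        claimName: downloads
  additionalVolumeMounts:
    # the mount only references the volume by name; no existingClaim here
    - name: downloads
      mountPath: /downloads
      readOnly: false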

[bitwardenrs] HPA target does not account for Deployment/Statefulset

Details

Helm chart name and version: [email protected]

Container name and tag: bitwardenrs/server:latest

What steps did you take and what happened:
Deploying the chart with the default persistence option (statefulset) yields the following error:
Screenshots from Lens (not reproduced here).

Looking at the chart, the target for the HPA is always:

spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment

This means that the HPA looks for a target deployment when scraping metrics, even if the chart is deployed as a StatefulSet.

What did you expect to happen: HPA to reference the StatefulSet when deploying this way.

Anything else you would like to add:
Impending PR to resolve this.

Additional Information:
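A minimal sketch of the direction the PR will take: derive the target kind from the same value that controls how the workload is deployed (the values key and helper name below are assumed, not taken from the chart):

spec:
  scaleTargetRef:
    apiVersion: apps/v1
    {{- if eq .Values.persistence.type "statefulset" }}
    kind: StatefulSet
    {{- else }}
    kind: Deployment
    {{- end }}
    name: {{ include "bitwardenrs.fullname" . }}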

Deduplicating documentation

Currently there is a significant amount of documentation duplication, which inevitably leads to drift, copy/paste errors, and toil.
Is there a better way? Current thoughts:

Possible remediation options:

  • Readmes containing only unique content, with common documentation in another location
    • docs folder
    • github wiki
    • gh-pages
  • Duplicated documentation, but with the common parts stitched together by a bot, similar to how lsio does their readmes.

In addition, as we lean into artifacthub.io, we will also have a metadata file which will contain basically all of the above information and more. How do we keep the toil down there?

[question] new "common" chart vs. "The Common Helm Helper Chart"

Hi guys, just saw that there is a new common library chart that replaces media-common.

There's also something called The Common Helm Helper Chart in the Helm incubator repository. Maybe this could be used and extended by the k8s-at-home chart? It could reduce the amount of code in k8s-at-home chart and at the same time ensure we follow Helm best practices.

There would be a name clash, so the k8s-at-home chart would need a new unique name (as common is already used by The Common Helm Helper Chart).

[common] Enhancements / bug fixes to common library chart

Created this issue as a container issue, to more easily keep track of the things we need/want to fix on the common library chart but that are not strictly necessary right away and/or don't really warrant an issue of their own.

  • {{ template common.whatever . }} pattern is best replaced by {{ include common.whatever . }}, according to official Helm docs (see the example after this list)
  • Helm named templates should all have a docstring (for lack of a better word) that describes what it does and what the expected input is (if applicable)
  • service.port.targetPort is a string in the provided values.yaml file. Merging in user overrides could cause an error when an integer is given due to conflicting data types.
  • README.md improvements are required
  • Add support for hostNetwork
  • Add support for hostAliases
  • Add support for daemonset controller (34df5b8)
  • Add support for specifying and creating chart-specific serviceAccount (836b5de)
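On the first point, the practical difference is that include returns a string that can be piped through other functions, while the output of template cannot be captured or re-indented; for example (the named template here is only illustrative):

# cannot be piped or re-indented:
{{ template "common.labels" . }}

# preferred: the output can be piped, e.g. to control indentation
metadata:
  labels:
    {{- include "common.labels" . | nindent 4 }}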

Possibility of adding https://github.com/guillaumedsde/alpine-qbittorrent-openvpn

Hey! I was trying to add a chart with this image.
I was able to modify your qbittorrent chart with some success. The issue is that I was able to curl from one of the nodes but not the other (I have a 2-node setup using k3s; curl on the master node gave an operation timeout, whereas curl on the worker node worked just fine for the same clusterIP). As a result, the nginx ingress was timing out.
Other charts seems to be working fine under the same conditions.

Hope you can help me. Thank you.

[home-assistant] included esphome fails to start on multi-node clusters

Hello,

I have a 4 node RPi K3S cluster and I'm running your chart for home-assistant. When setting esphome.enabled: true, it tries by default to mount the home-assistant PVC so that it can access the home-assistant secrets.yaml. This does not work if the esphome pod is scheduled on a node where the home-assistant PVC does not exist.

I was able to work around this with the following settings to the home-assistant values:

esphome:
  enabled: true
  extraVolumes: null
  extraVolumeMounts: null

If you decide to keep this cross-mounting behavior in the home-assistant chart, you should also add an affinity rule when esphome is enabled so that the two pods co-locate (roughly as sketched below). I think that ideally the secrets.yaml would be provided via a kubernetes secret, which would allow the file to be mounted into both containers with no affinity concerns.
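For illustration, the kind of affinity rule I mean would be roughly the following (the label is a placeholder for whatever the chart sets on the home-assistant pod, and whether the esphome values currently expose affinity is another question):

affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: home-assistant
        topologyKey: kubernetes.io/hostname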

RFC: most charts should be StatefulSets, but are Deployments

I believe that charts which deploy stateful applications, like for example Home Assistant, radarr et al., should deploy StatefulSets instead of Deployments. I know that one can get almost the same result using a chart-hardcoded replica number and a deployment strategy that guards against 2 copies accessing the same files from a PVC (like on a new version rollout). I also realise that most of these applications have no business being deployed in more than one copy from the same deployment, since they don't cluster in any meaningful way.

Meanwhile, StatefulSets have these things built in. They also have a neat optional bonus (or a hindrance, depending on one's use case): because they bring PVC templates instead of PVCs, Helm will not delete the PVC outright on uninstall.

This is not a critique, it's a call for comments: are people using Deployments instead of StatefulSets because of familiarity, or is there something I'm not seeing?

[unifi] Add support for configurable securityContext and proper runAsNonRoot

A couple of things could be done in this area; I could have a stab at it when/if I have time, but I'm reporting it here for now (a rough values sketch follows the list):

  1. Enable running completely as non-root (not only RUNAS_UID0=false) by configuring securityContext with runAsUser and runAsGroup.
  2. Make it configurable by exposing securityContext in values.yaml, so one can also do things like making the root FS read-only (not tested specifically with unifi, however).
  3. Make SETFCAP configurable instead (by exposing the complete securityContext), as it would not be needed when running properly as non-root (the code that needs it is not run in the entrypoint script in that case).
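Roughly what exposing it in values.yaml could look like (the layout and IDs are illustrative):

securityContext:
  runAsNonRoot: true
  runAsUser: 999
  runAsGroup: 999
  readOnlyRootFilesystem: true
  capabilities:
    drop:
      - ALL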

zwave2mqtt doesn't work on first install

Setting my configuration in the zwave2mqtt UI results in an exception.

Kubernetes version v1.17.0
Cluster of 3 bare metal nodes running ubuntu server 18.04. Aeotec z-stick gen 5, tested on two different nodes.

Deploying with
helm install --namespace=default zwave2mqtt -f ./helm-configs/zwave2mqtt-values.yaml billimek/zwave2mqtt

Deploying the helm chart with default values except I have:

  • upped the image version to 3.0.2
  • enabled persistence.
  • Set affinity for the node with my z-stick
  • Added ingress

I can provide zwave2mqtt-values.yaml in full if needed.

Setting my configurations in the zwave2mqtt UI results in the following log in the container:

2020-04-04T18:56:23.404Z z2m:Zwave Connecting to /dev/ttyACM0
2020-04-04 18:56:23.408 Always, OpenZwave Version 1.6-1061-g14f2ba74 Starting Up
2020-04-04 18:56:23.408 Warning, Unable to load Localization file /usr/local/etc/openzwave/Localization.xml: Failed to open file
2020-04-04 18:56:23.409 Warning, Exception: Localization.cpp:794 - 1 - Cannot Create Localization Class! - Missing/Invalid Config File?

If I downgraded to 2.2.0 I would get the gateway working, and nodes would report values, but the gateway would not recognize devices. E.g. an Aeotec Siren 6 would show up as "unknown" and would not expose all expected entities. The log would show warnings related to reading xml files.

Could this be related to some missing securityContext? It seems it's having issues reading expected configuration that should have been written at some point, but perhaps OpenZWave was not allowed to write it?

The /usr/local/etc/openzwave/ dir is empty.
I assume I'm hitting this line in OpenZWave: https://github.com/OpenZWave/open-zwave/blob/e3bae88f29139032c736144b67e7c67d3a764b2b/cpp/src/Localization.cpp#L794

Chart migration to use media-common

Candidate charts that would work as an umbrella templated media-common chart

  • bazarr
  • nzbget
  • sabnzbd
  • dashmachine

(are there more? will add to the list as needed)

[home-assistant] Missing ability to control image arguments for vscode

The codercom/code-server image doesn't have support for ARM, and I'm trying to install this on a k3s Raspberry Pi cluster. It would be nice to have the option to forgo the arguments passed to the image, or to define your own; that way you could use your own image or have greater customization. In my case I'm trying to use the linuxserver image.

[powerdns] PowerDNS chart does not work with Postgres

The PowerDNS chart does not work with a Postgres database.

When configuring the chart to use a postgres database, for example with the following values:

powerdns:
  postgres:
    username: pdns
    password: mysecret
    database: pdns
    
mariadb:
  enabled: false

postgres:
  enabled: true
  postgresqlUsername: pdns
  postgresqlPassword: mysecret
  postgresqlDatabase: pdns

The powerdns pod ends up with the following errors in its logs:

ERROR 2005 (HY000): Unknown MySQL server host 'mysql' (-2)
Waiting for database to come up

It looks like the underlying image used by this chart (psitrax/powerdns) only supports MySQL: psi-4ward/docker-powerdns#22

[traefik-forward-auth] Incorrect probe configuration

Traefik-forward-auth does not currently implement health or readiness HTTP handlers. The configured probes at "/" actually trigger a forward-auth check, which calls the auth provider and fails due to incorrect headers. Instead, these probes should use tcpSocket probes rather than httpGet handlers.
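Roughly what the corrected probes would look like in the deployment template (port name and timings are placeholders):

livenessProbe:
  tcpSocket:
    port: http
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  tcpSocket:
    port: http
  initialDelaySeconds: 5
  periodSeconds: 10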

[plex] unable to attach or mount volumes

I'm new to kubernetes, so I'm not sure if this is expected or not. I'm trying to add the plex chart to my cluster, and I have only a single PV and PVC. After starting the chart it's stuck at ContainerCreating because it cannot attach the volumes:

Unable to attach or mount volumes: unmounted volumes=[data transcode], unattached volumes=[data config transcode shared default-token-8s7wb]: timed out waiting for the condition

It seems to me that it might be because the deployment.yaml file has hardcoded values for volumeMounts that don't exist in my cluster, e.g. https://github.com/billimek/billimek-charts/blob/master/charts/plex/templates/deployment.yaml#L196

Is there a reason we are hardcoding those values? Do I need to set up extra PVCs named as they are?

There is a similar issue here which seems to have the same problem.
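Is the expectation that I define claims for each of these via the chart's persistence values? Guessing at the key names, something like:

persistence:
  config:
    existingClaim: media-ssd
  data:
    existingClaim: media-ssd
  transcode:
    existingClaim: media-ssd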

Here is the kubectl describe pod ...

Name:           plex-69fb48c94f-tf56c
Namespace:      media
Priority:       0
Node:           mona-2/192.168.0.224
Start Time:     Tue, 18 Aug 2020 20:57:35 +0200
Labels:         app.kubernetes.io/instance=plex
                app.kubernetes.io/name=plex
                pod-template-hash=69fb48c94f
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/plex-69fb48c94f
Containers:
  plex:
    Container ID:   
    Image:          plexinc/pms-docker:1.19.1.2645-ccb6eb67e
    Image ID:       
    Ports:          32400/TCP, 32469/TCP, 1900/UDP, 32410/UDP, 32412/UDP, 32413/UDP, 32414/UDP
    Host Ports:     0/TCP, 0/TCP, 0/UDP, 0/UDP, 0/UDP, 0/UDP, 0/UDP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:32400/identity delay=15s timeout=5s period=10s #success=1 #failure=5
    Readiness:      http-get http://:32400/identity delay=15s timeout=5s period=10s #success=1 #failure=5
    Environment:
      TZ:                    Europe/London
      PLEX_CLAIM:            REDACTED
      PMS_INTERNAL_ADDRESS:  http://plex:32400
      PMS_IMAGE:             plexinc/pms-docker:1.19.1.2645-ccb6eb67e
      KUBE_NAMESPACE:        media (v1:metadata.namespace)
      TRANSCODE_PVC:         media-ssd
      DATA_PVC:              media-ssd
      CONFIG_PVC:            media-ssd
      PLEX_UID:              1000
      PLEX_GID:              1000
    Mounts:
      /config from config (rw)
      /data from data (rw)
      /shared from shared (rw)
      /transcode from transcode (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8s7wb (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  media-ssd
    ReadOnly:   false
  config:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  media-ssd
    ReadOnly:   false
  transcode:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  media-ssd
    ReadOnly:   false
  shared:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  default-token-8s7wb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-8s7wb
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason       Age                  From               Message
  ----     ------       ----                 ----               -------
  Normal   Scheduled    <unknown>            default-scheduler  Successfully assigned media/plex-69fb48c94f-tf56c to hydra-2
  Warning  FailedMount  22m (x3 over 35m)    kubelet, hydra-2   Unable to attach or mount volumes: unmounted volumes=[data transcode], unattached volumes=[shared default-token-8s7wb data config transcode]: timed out waiting for the condition

[home-assistant] RFC: Enable External Management of secrets.yml

Was IM'ing with @onedr0p on Discord and chatting on how we could somehow lock down the secrets.yml file so that everything can be stored in public git.

Considering the use of an initContainer that pulls down an entire hass config from a public git repo. The secrets.yml file would be encrypted with either git-crypt (https://github.com/AGWA/git-crypt) or SOPS (https://github.com/mozilla/sops) and would be unlocked as part of the init process.

Would require us to maintain an initContainer that could perform most of this heavy lifting, although I anticipate that will be small.

Thoughts?
