

billimek-charts's People

Contributors

bezerker, billimek, bjw-s, blackjid, blakeblackshear, carpenike, cscheib, damacus, dcplaya, dewet22, dirtycajunrice, halkeye, hall, imduffy15, ishioni, jwalker343, kamuelafranco, marshallford, masantiago, mkilchhofer, onedr0p, radum, rickcoxdev, runecalico, somerandow, st0rmingbr4in, w4, wrmilling, yasn77, zer0tonin


billimek-charts's Issues

Blocky RollingUpdate strategy

@billimek what do you think about adding a RollingUpdate deployment strategy for zero-downtime updates?

Maybe something like this can be added to the deployment

strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 25%
    maxSurge: 1
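
One possible way to wire this up (a sketch only; the strategy value below is an assumption, not an existing chart option) is to expose it in values.yaml and render it into the deployment template:

# values.yaml (hypothetical new value)
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 25%
    maxSurge: 1

# templates/deployment.yaml (sketch)
spec:
  {{- with .Values.strategy }}
  strategy:
    {{- toYaml . | nindent 4 }}
  {{- end }}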

Fix links and image links

Now that this is on Helm Hub, some of the links and images use relative paths and do not resolve properly.

Update the markdown files to properly resolve URLs.

Jackett and NZBHydra2 chart liveness and readiness probes

I don't have more time to work on this tonight, so I'll create this issue to remind me.

nzbhydra2 is a Java Spring application, and I might need to set the timeout a lot higher as it runs database migrations on start. Currently on a Raspberry Pi this takes a while.

Jackett also seems to take longer than a minute to start up.

Logs show the container being killed.

@billimek do you have any suggestions? Do you think just adjusting the probes should do it, or maybe removing them?

Edit: I should have mentioned that the probe PR didn't fix the issue; I'll need to figure out the best values...
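
For reference, a minimal sketch of what more forgiving probe settings could look like (the key names and numbers are guesses for a slow Raspberry Pi start, not confirmed chart values):

# sketch only: generous startup windows for slow-starting apps
probes:
  liveness:
    initialDelaySeconds: 180   # nzbhydra2 runs database migrations on start
    periodSeconds: 15
    failureThreshold: 10
  readiness:
    initialDelaySeconds: 180
    periodSeconds: 15
    failureThreshold: 10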

migrate helm/charts back to this repo?

Given the deprecation of the helm/charts stable repo, it stands to reason that a new home should be found for some of the charts.

Charts I'm an owner (or co-owner) of right now in the helm/charts repo:

  • unifi
  • home-assistant
  • node-red
  • minecraft
  • nextcloud

unifi, home-assistant, and node-red originated in this repo, so it shouldn't be a big deal to move them back. I will need to check with the other co-owners of the minecraft and nextcloud charts to see what they want to do.

Create qbittorrent-prune Chart

https://gitlab.com/onedr0p/qbittorrent-prune

Script to delete torrents from qBittorrent that have a tracker error like "Torrent not Registered" or "Unregistered torrent". This script currently only supports monitoring up to 3 categories in qBittorrent to check for tracker errors.

Opening this issue for me to work on.

Why not automate all the things? :)

Endless loop of provisioning when deploying Unifi Controller with helm chart.

Hi @billimek, just a couple of questions:

  1. Have you had trouble with endless loops of re-provisioning config with the unifi controller when deployed in Kubernetes? I tried your chart but I ended up in an endless loop that kept re-provisioning all devices until I stopped the controller.

  2. How do you handle STUN communication? What kind of ingress controller do you use? Is it necessary for the unifi controller to have working STUN communication? I'm using Traefik, and it doesn't support UDP traffic, so I can't expose the STUN port (see the sketch below).
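
For context, a hedged sketch of one way STUN (UDP 3478) could be exposed outside the ingress controller entirely, via a separate UDP Service; the selector labels are assumptions about the chart's pod labels:

apiVersion: v1
kind: Service
metadata:
  name: unifi-stun
spec:
  type: LoadBalancer          # or NodePort, since Traefik can't proxy the UDP traffic here
  selector:
    app.kubernetes.io/name: unifi   # assumption: match the chart's pod labels
  ports:
    - name: stun
      protocol: UDP
      port: 3478
      targetPort: 3478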

Persistent Volume Claims for /downloads and /movies should be located on the same volume with one PVC

By default, radarr and sonarr are set to rename movies and create hardlinks. Hardlinks can only be created when both file locations are on the same file system. Even though the /downloads and /movies PVs could come from the same share, the container has two mount points and will not recognize that they are on the same file system. Instead of creating hardlinks, radarr and sonarr will copy the file.
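
A hedged sketch of the single-PVC layout this implies: mount one shared claim at a single path and point the applications at subdirectories of it, so downloads and movies live on one mount and hardlinks work. The names below are illustrative, not existing chart options.

volumes:
  - name: media
    persistentVolumeClaim:
      claimName: media
containers:
  - name: radarr
    volumeMounts:
      - name: media
        mountPath: /media   # then use /media/downloads and /media/movies inside the app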

Help: Issue with cloudflare-dyndns chart

Sorry, I am still a bit of a newbie working with Kubernetes and charts. I am having an issue deploying this chart.

# cloudflare-dyndns-helm-values.txt
cloudflare:
  user: "$CLOUDFLARE_USER"
  token: "$CLOUDFLARE_APIKEY"
  zones: "$CLOUDFLARE_ZONES"
  hosts: "$CLOUDFLARE_HOSTS"
  record_types: "$CLOUDFLARE_RECORDTYPES"
# cloudflare-dyndns.yaml
---
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: cloudflare-dyndns
  namespace: default
  annotations:
    fluxcd.io/automated: "false"
spec:
  releaseName: cloudflare-dyndns
  chart:
    repository: https://billimek.com/billimek-charts/
    name: cloudflare-dyndns
    version: 1.0.0
  values:
    image:
      repository: hotio/cloudflare-ddns
      tag: stable-47b759b
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 500m
        memory: 128Mi
  valueFileSecrets:
  - name: "cloudflare-dyndns-helm-values"

The chart defines the env variables like so:

...
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: CF_USER
              valueFrom:
                secretKeyRef:
                  name: {{ template "cloudflare-dyndns.fullname" . }}
                  key: cloudflare-dyndns-user
            - name: CF_APIKEY
              valueFrom:
                secretKeyRef:
                  name: {{ template "cloudflare-dyndns.fullname" . }}
                  key: cloudflare-dyndns-token
            - name: CF_ZONES
              valueFrom:
                secretKeyRef:
                  name: {{ template "cloudflare-dyndns.fullname" . }}
                  key: cloudflare-dyndns-zones
            - name: CF_HOSTS
              valueFrom:
                secretKeyRef:
                  name: {{ template "cloudflare-dyndns.fullname" . }}
                  key: cloudflare-dyndns-hosts
            - name: CF_RECORDTYPES
              value: "{{ .Values.cloudflare.record_types }}"
            - name: DETECTION_MODE
              value: "{{ .Values.cloudflare.detection_mode }}"
            - name: LOG_LEVEL
              value: "{{ .Values.cloudflare.log_level }}"
            - name: SLEEP_INTERVAL
              value: "{{ .Values.cloudflare.sleep_interval }}"
...

Do you think I should update the chart to not use valueFrom.secretKeyRef? It seems as though they could all just be regular values (see below), since the valueFileSecrets will drop them in anyway.

          env:
            - name: CF_USER
              value: "{{ .Values.cloudflare.user }}"
            - name: CF_APIKEY
              value: "{{ .Values.cloudflare.token }}"
            - name: CF_ZONES
              value: "{{ .Values.cloudflare.zones }}"
            - name: CF_HOSTS
              value: "{{ .Values.cloudflare.hosts }}"
            - name: CF_RECORDTYPES
              value: "{{ .Values.cloudflare.record_types }}"
            - name: DETECTION_MODE
              value: "{{ .Values.cloudflare.detection_mode }}"
            - name: LOG_LEVEL
              value: "{{ .Values.cloudflare.log_level }}"
            - name: SLEEP_INTERVAL
              value: "{{ .Values.cloudflare.sleep_interval }}"

Again sorry and thanks for your help!

Migrate Metallb chart

@billimek what do you think about migrating the MetalLB chart here, since the developers have no interest in maintaining one themselves?

deleting charts issue

When hard-deleting a chart, the following considerations must be made:

The chart-releaser-action doesn't currently support or handle a chart deletion. The reason for this is that it looks for all 'changed' charts since the last successful release (based on tags in the repo). When a chart is removed, the releaser sees that it was 'changed' and tries to release it. It can't, of course, because there is nothing to release.

A workaround for this could be to not hard-delete a chart but instead mark it as deprecated. See this slack thread for some more context.

If it is necessary to actually delete a chart, the following actions can be taken:

  • create a new 'tag' on the delete commit and push it to the repo:

For a given commit (e.g. 67167f7), tag it:

git tag -a remove_cloudflare-ddns 67167f7 -m "removing cloudflare-ddns chart"

Next, push the tag to the remote repo:

git push --tags

... now subsequent invocations of the chart releaser should work as expected.

How to install modem-stats

Thank you for these charts. They make for a great opportunity to learn with a huge payoff at the end.

I had to jump through a few hoops to get modem-stats to install and run.

  1. Add the bitnami repo (I have to think everyone has it by now)
  2. helm install -n influxdb-influxdb bitnami/influxdb
  3. Get the admin password by running the export ADMIN_PASSWORD command shown in the output. Copy it to the clipboard.
  4. helm install -n modem-stats billimek/modem-stats --set config.influxdb.username="admin",config.influxdb.password=""

Only after following those steps did modem-stats deploy successfully. It took me a while to realize that modem-stats was looking for influxdb using the name "influxdb-influxdb". Then it failed because it required a username. Finally it said the table doesn't exist and that it would try to create one.

So, not sure if you'd like to beef up the docs a bit, but these are the hoops I had to jump through to get modem-stats to install.
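
For reference, the same settings in values-file form (a sketch; config.influxdb.host is an assumption about the chart's key names, while the username/password keys come from the --set flags in step 4):

config:
  influxdb:
    # host: influxdb-influxdb      # assumption: only needed if the release name differs
    username: admin
    password: "<ADMIN_PASSWORD from step 3>"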

Thanks!

Fix github actions issues

The GitHub Actions workflow produces two 'push' events when a pull request is merged, which causes one of the two duplicate runs to fail. There shouldn't be two push events at the same time when a pull request is merged.

Implement revisionHistoryLimit in deployments

Currently it doesn't appear there is a history limit on the deployments (the default is 10), so we're seeing old ReplicaSets starting to accumulate:

replicaset.apps "radarr-55d94f7557" deleted
replicaset.apps "radarr-589cbccb4b" deleted
replicaset.apps "radarr-5b9c6c569b" deleted
replicaset.apps "radarr-6dfb689658" deleted
replicaset.apps "radarr-6fb4c9fc9d" deleted
replicaset.apps "radarr-74bfc48555" deleted
replicaset.apps "radarr-757cc79b67" deleted
replicaset.apps "radarr-85b85fbfc9" deleted
replicaset.apps "radarr-8d6c78bbf" deleted
replicaset.apps "radarr-dd6fc4bf" deleted
replicaset.apps "sonarr-54b697b96f" deleted
replicaset.apps "sonarr-5847d5b49c" deleted
replicaset.apps "sonarr-5bd56dcff4" deleted
replicaset.apps "sonarr-6796958fb" deleted
replicaset.apps "sonarr-6bfc8b4b86" deleted
replicaset.apps "sonarr-77d565879b" deleted
replicaset.apps "sonarr-78cd6f4b46" deleted
replicaset.apps "sonarr-7d4ff87c5d" deleted

According to https://www.weave.works/blog/how-many-kubernetes-replicasets-are-in-your-cluster- we can limit this...

I suggest 3 is good enough:

revisionHistoryLimit: 3

We likely don't need to bump the chart version or make this option configurable.
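
For reference, a minimal sketch of where the field sits in the rendered Deployment (plain Kubernetes, nothing chart-specific; the image is illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: radarr
spec:
  revisionHistoryLimit: 3   # keep only the 3 most recent old ReplicaSets
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: radarr
  template:
    metadata:
      labels:
        app.kubernetes.io/name: radarr
    spec:
      containers:
        - name: radarr
          image: linuxserver/radarr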

Sometimes a Blocky pod experiences CrashLoopBackOff

I have Blocky scaled to 3 replicas, and once in a while one of these Pods becomes unhealthy. This is the second time it has happened within 2 weeks. The workaround is scaling Blocky down to 0 and then scaling it back up, but that's not a fix, as this will happen again.

@billimek have you noticed anything like this?

Maybe it could be related to this issue: 0xERR0R/blocky#20

My Blocky Pods

devin@Gaming-PC ~/C/k3s-gitops> k get po | grep blocky
blocky-558f8966b6-88jgx                 1/1     Running            2          7d10h
blocky-558f8966b6-rp49r                 1/1     Running            1          7d10h
blocky-558f8966b6-5glnc                 0/1     CrashLoopBackOff   139        7d10h

Description of failed Pod

devin@Gaming-PC ~/C/k3s-gitops> k describe pod/blocky-558f8966b6-5glnc
Name:         blocky-558f8966b6-5glnc
Namespace:    default
Priority:     0
Node:         k3s-worker-b/192.168.42.13
Start Time:   Sat, 21 Mar 2020 22:22:39 -0400
Labels:       app.kubernetes.io/instance=blocky
              app.kubernetes.io/name=blocky
              pod-template-hash=558f8966b6
Annotations:  prometheus.io/port: monitoring
              prometheus.io/scrape: true
Status:       Running
IP:           10.42.2.160
IPs:
  IP:           10.42.2.160
Controlled By:  ReplicaSet/blocky-558f8966b6
Containers:
  blocky:
    Container ID:   docker://b6bd9e81565ff6ec9b0c2f35d36e17668918f6232c60807e4af91acdbcdb79ad
    Image:          spx01/blocky:v0.5
    Image ID:       docker-pullable://spx01/blocky@sha256:51bb1df868cb5ace0a275abda6e1856681749df92910ce5af71d6c187d2ec755
    Ports:          4000/TCP, 53/TCP, 53/UDP
    Host Ports:     0/TCP, 0/TCP, 0/UDP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Message:      OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449:
container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/kubelet/pods/f001ba3a-afe4-491a-bd47-88f86e9f1362/volume-subpa
ths/config/blocky/0\\\" to rootfs \\\"/var/lib/docker/overlay2/8b372dd20d86217ae1f3bb6458eefb152db9ca0dfcfc66c3402f3706e925ee08/
merged\\\" at \\\"/var/lib/docker/overlay2/8b372dd20d86217ae1f3bb6458eefb152db9ca0dfcfc66c3402f3706e925ee08/merged/app/config.ym
l\\\" caused \\\"no such file or directory\\\"\"": unknown
      Exit Code:    128
      Started:      Sun, 29 Mar 2020 08:53:09 -0400
      Finished:     Sun, 29 Mar 2020 08:53:09 -0400
    Ready:          False
    Restart Count:  139
    Limits:
      cpu:     1
      memory:  500Mi
    Requests:
      cpu:      50m
      memory:   275Mi
    Liveness:   http-get http://:monitoring/metrics delay=0s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:monitoring/metrics delay=0s timeout=1s period=10s #success=1 #failure=5
    Environment:
      TZ:  America/New_York
    Mounts:
      /app/config.yml from config (rw,path="config.yml")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-777n4 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      blocky
    Optional:  false
  default-token-777n4:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-777n4
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason   Age                     From                   Message
  ----     ------   ----                    ----                   -------
  Normal   Pulled   47m (x129 over 11h)     kubelet, k3s-worker-b  Container image "spx01/blocky:v0.5" already present on machine
  Warning  BackOff  2m50s (x3151 over 11h)  kubelet, k3s-worker-b  Back-off restarting failed container

From that, it looks like this is the error:

      Message:      OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449:
container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/kubelet/pods/f001ba3a-afe4-491a-bd47-88f86e9f1362/volume-subpa
ths/config/blocky/0\\\" to rootfs \\\"/var/lib/docker/overlay2/8b372dd20d86217ae1f3bb6458eefb152db9ca0dfcfc66c3402f3706e925ee08/
merged\\\" at \\\"/var/lib/docker/overlay2/8b372dd20d86217ae1f3bb6458eefb152db9ca0dfcfc66c3402f3706e925ee08/merged/app/config.ym
l\\\" caused \\\"no such file or directory\\\"\"": unknown
[teslamate-chart] Error 426 upgrade required on kubernetes

Hi,

I'm trying to install the teslamate chart but I can't get the websockets to work. Are there any extra steps needed to make this work?

WebSocket connection to 'ws://tesla.xx.xxx/live/websocket?baseUrl=http%3A%2F%2Ftesla.xx.xxx&referrer=&vsn=2.0.0' failed: Error during WebSocket handshake: Unexpected response code: 426

I'm using the nginx ingress controller with no special changes, forwarding all paths, including /ws/, to port 4000 in the pod. I can see the panel, but the websocket is giving errors.

Is there a special configuration needed in the NGINX ingress config? Or am I missing something else?

Thank you!
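
Not a confirmed fix, but since ingress-nginx normally passes WebSocket upgrade requests through unchanged, it may be worth double-checking that the /live/websocket path reaches the same backend service and port; for long-lived connections the standard ingress-nginx proxy-timeout annotations are also commonly raised, e.g.:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"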

Create ser2sock chart

Create a chart for ser2sock that behaves much like the frigate chart, whereby it will only schedule to a node with a particular label indicating which node has the special USB device connected to it.
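
A rough sketch of that scheduling constraint (the label key and value are illustrative): label the node that has the USB device attached, then pin the pod to it with a nodeSelector.

# kubectl label node <node-name> ser2sock-device=true
nodeSelector:
  ser2sock-device: "true"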

decide what to do with kube-plex chart

The kube-plex chart is hosted here because there is no upstream chart registry currently hosting that excellent chart.

Currently it's a submodule and not handled all that well in this repo.

Personally speaking, using flux I can just reference the chart via a git URL and don't need it to be packaged into a chart repo.

I need to figure out a better way to host this, or convince upstream to host a chart registry instead; that would be the better outcome. There is an existing issue, almost a year old, attempting to support this. Perhaps I can make the necessary changes as a PR (using GitHub Actions) to have munnerz package it as a chart repo.

explore github actions

With the re-launch of GitHub Actions, it may be time to attempt to use it again instead of CircleCI.

Update Nextcloud to 15?

I'm very new to Helm and charts but would love to deploy Nextcloud 15. I see the chart is at 13. Would you be willing to update to 15 (or show me how)? :)

Inconsistent Unifi Templates

Hello

I am testing out your helm chart with a new deployment, and for my deployment I need to use the annotations on the services; however, they are only included in the template files for the GUI and Ingress. There is a configuration option for them, and they are included in values.yaml, but they never get rendered into the services from the templates. Please let me know if you need any additional information.

Thanks for your work on this project!

InterDir Nzbget PVC

Looking to move my InterDir off of the NFS share and onto a block device from Ceph. This should allow for much faster processing of small files, with only the completed files being copied over to the NFS share that sonarr/radarr pick up.

Do you think it makes more sense to add this as a separate stanza in the nzbget chart, or to use the extravolumemounts: [] block? (A sketch of the latter follows.)
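
A hedged sketch of the second option, assuming the chart exposes extra volume hooks (the extraVolumes / extraVolumeMounts key names are assumptions, not confirmed chart values):

extraVolumes:
  - name: interdir
    persistentVolumeClaim:
      claimName: nzbget-interdir   # a Ceph-block-backed PVC
extraVolumeMounts:
  - name: interdir
    mountPath: /intermediate       # point nzbget's InterDir setting here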

add pod annotations to charts

Need to add pod annotations to the following charts where they are missing (a sketch of the usual pattern follows the list):

  • nzbget
  • rtorrent-flood
  • comcast
  • modem-stats
  • speedtest
  • uptimerobot
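
The usual pattern (a sketch of the common Helm convention, not the exact templates in this repo) is to expose podAnnotations in values.yaml and render it into the pod template metadata:

# values.yaml
podAnnotations: {}

# templates/deployment.yaml (sketch)
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}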

refactor speedtest chart

CIFS Mount

Can you share the syntax or a sample .yaml showing how to configure Plex to use a host mount pointing to a CIFS share with content for Plex to play? Thanks!
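
A hedged sketch of one common approach, assuming the CIFS share is already mounted on the node itself (e.g. via /etc/fstab) and handed to the pod as a hostPath volume; the key names below are illustrative, not confirmed chart values:

extraVolumes:
  - name: media
    hostPath:
      path: /mnt/media     # CIFS share already mounted on the node
      type: Directory
extraVolumeMounts:
  - name: media
    mountPath: /data       # library path configured inside Plex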
