quickwit-oss / helm-charts
Helm charts for Quickwit
Home Page: https://helm.quickwit.io
License: MIT License
Hello, it's not really clear what to put in values.yaml because of the s3 values, and there are no storage values in the default values.yaml, such as a MinIO example or something similar.
We should add the rest config section here: https://github.com/quickwit-oss/helm-charts/blob/main/charts/quickwit/templates/configmap.yaml#L13
{{- with .Values.config.rest }}
rest:
{{- toYaml . | nindent 6 }}
{{- end }}
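For reference, a hypothetical values.yaml entry that the snippet above would render into the node config (this assumes a Quickwit version whose node config has a rest section; extra_headers is a documented rest setting, and the header shown is a placeholder):

config:
  rest:
    extra_headers:
      x-custom-header: some-value  # placeholder header, for illustration only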
I deployed several quickwit clusters in different namespaces... and all nodes joined together to form only one cluster :)))
@guilload mystery resolved
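For anyone hitting this, a minimal safeguard sketch, assuming the chart passes config through to the node config: cluster_id is a standard Quickwit setting, and nodes should refuse to gossip with peers advertising a different cluster_id, so giving each namespace its own ID keeps the clusters apart (the value shown is a placeholder):

config:
  cluster_id: quickwit-staging  # unique per namespace/deployment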
This is an issue as most of our users have a PostgreSQL metastore.
By default, pointing to edge is too dangerous. The helm chart should target 0.4.0 once it is released:
version: 0 -> version: 0.4
Currently the install cannot work because of this:
Installing chart "quickwit => (version: \"0.1.8\", path: \"charts/quickwit\")"...
Creating namespace "quickwit-66x11pe6x8"...
namespace/quickwit-66x11pe6x8 created
Error: INSTALLATION FAILED: execution error at (quickwit/templates/secret.yaml:9:24): A valid config.postgres.password is required!
========================================================================================================================
........................................................................................................................
==> Events of namespace quickwit-66x11pe6x8
........................................................................................................................
No resources found in quickwit-66x11pe6x8 namespace.
........................................................................................................................
<== Events of namespace quickwit-66x11pe6x8
........................................................................................................................
========================================================================================================================
Deleting release "quickwit-66x11pe6x8"...
Error deleting Helm release: failed waiting for process: exit status 1
Deleting namespace "quickwit-66x11pe6x8"...
namespace "quickwit-66x11pe6x8" deleted
Error: failed installing charts: failed processing charts
Namespace "quickwit-66x11pe6x8" terminated.
------------------------------------------------------------------------------------------------------------------------
✖︎ quickwit => (version: "0.1.8", path: "charts/quickwit") > failed waiting for process: exit status 1
------------------------------------------------------------------------------------------------------------------------
failed installing charts: failed processing charts
Error: Process completed with exit code 1.
I guess we can fix it by providing some default values in the CI.
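For example, chart-testing picks up values files under a chart's ci/ directory, so a hypothetical charts/quickwit/ci/ci-values.yaml with a dummy password (key path taken from the error above) should unblock the install:

config:
  postgres:
    password: dummy-ci-password  # placeholder for CI only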
... to avoid confusing users. Right now indexers/searchers are getting the default file metastore URI.
Is it possible to add a flag to enable the KEDA autoscaler for the searcher and indexer pods? Something we could enable in values.yaml, like this:
keda:
  indexer:
    enabled: true
    # metrics used for scaling
  searcher:
    enabled: true
    # metrics used for scaling
Thanks
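A sketch of the ScaledObject such a flag could render, assuming a Deployment target and using KEDA's built-in cpu trigger purely for illustration (names and thresholds are hypothetical):

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: quickwit-searcher
spec:
  scaleTargetRef:
    name: quickwit-searcher  # the searcher workload
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "75"  # scale out above 75% average CPU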
Hi, I'm using Quickwit in a distributed setup with an S3 file-based metastore.
Server 1 has the following components:
Server 2 is using the helm chart to add the searcher UI with those components:
Since values.yaml includes this part:

control_plane:
  replicaCount: 1

I assumed it is possible to disable the control-plane completely in such a distributed setup. The docs also say that the control-plane only handles indexing tasks. However, the replicaCount value doesn't seem to have any effect; it isn't used in the control-plane deployment file, which uses a hard-coded replicas: 1.
Would it make sense to have a deployment with only the metastore, janitor, and searchers, without a control-plane? Would it even make sense to have multiple control-planes, or would a simple enabled: true/false suffice?
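A minimal sketch of the fix, reusing the values key that already exists (the enabled gate is a hypothetical addition):

# templates/control-plane-deployment.yaml, replacing the hard-coded count:
  replicas: {{ .Values.control_plane.replicaCount }}

Gating the whole manifest behind something like .Values.control_plane.enabled would also answer the disable question.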
We should add some integration tests to ensure that the helm chart works as expected with the Prometheus Operator. This requires a cluster with the operator installed in the default namespace (see the operator docs).
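For context, a sketch of the kind of ServiceMonitor the chart would need to render for those tests (the label selector and service port name are assumptions about the chart's conventions; /metrics on the REST port is where Quickwit exposes Prometheus metrics):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: quickwit
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: quickwit  # assumed chart label
  endpoints:
    - port: rest       # assumed service port name
      path: /metrics   # Quickwit's Prometheus endpoint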
I really don't like having to define secrets in values.yaml. Bitnami charts solve this issue by defining an optional existingSecret variable. Let's do that.
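A sketch of the Bitnami-style pattern, with hypothetical helper and key names:

# templates/_helpers.tpl
{{- define "quickwit.postgres.secretName" -}}
{{- if .Values.config.postgres.existingSecret -}}
{{- .Values.config.postgres.existingSecret -}}
{{- else -}}
{{- include "quickwit.fullname" . -}}
{{- end -}}
{{- end -}}

Templates would then reference {{ include "quickwit.postgres.secretName" . }} instead of a hard-coded name, and the chart would only render its own Secret when existingSecret is unset.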
kubectl describe pod returns the following:
Name:             quickwit-indexing-demo-index-gh-archive-structured-b5mpv
Namespace:        indexing-demo
Priority:         0
Service Account:  default
Node:             ip-10-28-90-116.ec2.internal/10.28.90.116
Start Time:       Sat, 12 Nov 2022 12:07:49 +0100
Labels:           app.kubernetes.io/instance=quickwit-indexing-demo
                  app.kubernetes.io/managed-by=Helm
                  controller-uid=b9df73d9-4f2e-4025-8f29-f7030c751b0e
                  helm.sh/chart=quickwit-0.1.7
                  job-name=quickwit-indexing-demo-index-gh-archive-structured
Annotations:      kubernetes.io/psp: eks.privileged
Status:           Pending
IP:
IPs:              <none>
Controlled By:    Job/quickwit-indexing-demo-index-gh-archive-structured
Containers:
  quickwit:
    Container ID:
    Image:          quickwit/quickwit:edge
    Image ID:
    Port:           <none>
    Host Port:      <none>
    Command:
      /bin/bash
      -c
      quickwit index describe --index gh-archive-structured || quickwit index create --index-config gh-archive-structured.yaml
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
      NAMESPACE:                        indexing-demo (v1:metadata.namespace)
      POD_NAME:                         quickwit-indexing-demo-index-gh-archive-structured-b5mpv (v1:metadata.name)
      POD_IP:                           (v1:status.podIP)
      QW_CONFIG:                        node.yaml
      QW_S3_ENDPOINT:                   https://s3.us-east-1.amazonaws.com
      AWS_REGION:                       us-east-1
      AWS_ACCESS_KEY_ID:                xxxx
      AWS_SECRET_ACCESS_KEY:            xxx
      QW_NODE_ID:                       $(POD_NAME)
      QW_PEER_SEEDS:                    quickwit-indexing-demo-headless
      QW_ADVERTISE_ADDRESS:             $(POD_IP)
      OTEL_EXPORTER_JAEGER_AGENT_HOST:  jaeger-agent.monitoring.svc.cluster.local
      OTEL_EXPORTER_JAEGER_AGENT_PORT:  6831
      QW_ENABLE_JAEGER_EXPORTER:        true
      POSTGRES_HOST:                    xxx
      POSTGRES_PORT:                    xxx
      POSTGRES_DATABASE:                indexing_demo_db
      POSTGRES_USERNAME:                indexing_demo
      POSTGRES_PASSWORD:                <set to the key 'postgres.password' in secret 'quickwit-indexing-demo'>  Optional: false
      QW_METASTORE_URI:                 postgres://$(POSTGRES_USERNAME):$(POSTGRES_PASSWORD)@$(POSTGRES_HOST):$(POSTGRES_PORT)/$(POSTGRES_DATABASE)
    Mounts:
      /quickwit/gh-archive-structured.yaml from index (rw,path="gh-archive-structured.yaml")
      /quickwit/node.yaml from config (rw,path="node.yaml")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g9cjn (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      quickwit-indexing-demo
    Optional:  false
  index:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      quickwit-indexing-demo-bootstrap
    Optional:  false
  kube-api-access-g9cjn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:        BestEffort
Node-Selectors:   <none>
Tolerations:      node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                      From     Message
  ----     ------       ---                      ----     -------
  Warning  FailedMount  21m (x99 over 6h40m)     kubelet  Unable to attach or mount volumes: unmounted volumes=[config index], unattached volumes=[config index kube-api-access-g9cjn]: timed out waiting for the condition
  Warning  FailedMount  5m54s (x205 over 6h46m)  kubelet  MountVolume.SetUp failed for volume "index" : configmap "quickwit-indexing-demo-bootstrap" not found
  Warning  FailedMount  62s (x41 over 6h44m)     kubelet  Unable to attach or mount volumes: unmounted volumes=[index config], unattached volumes=[index kube-api-access-g9cjn config]: timed out waiting for the condition
I'm a bit new to helm + kubectl, but I can't get everything running locally following the docs. I'm pretty sure the issue below is because of my inexperience, but perhaps the docs are missing an important step.
I'm running Docker on my M1 Mac with Kubernetes enabled, and I've installed Helm too. I have successfully run helm install qw quickwit/quickwit -f helm-values.yaml with the example config from the docs. Then I get this:
NAME: qw
LAST DEPLOYED: Thu Mar 23 16:37:47 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=quickwit,app.kubernetes.io/instance=qw" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
So I copy those commands and paste them, and I get:
Visit http://127.0.0.1:8080 to use your application
Forwarding from 127.0.0.1:8080 -> 7280
Forwarding from [::1]:8080 -> 7280
Great! Now I visit that URL and I get:
E0323 16:45:11.414813 29778 portforward.go:407] an error occurred forwarding 8080 -> 7280: error forwarding port 7280 to pod d7fe0a2b320fdba75861f6aff1da35838816ddcee44c182b395a58bd92eaee3f, uid : exit status 1: 2023/03/23 15:45:11 socat[50953] E connect(16, AF=2 127.0.0.1:7280, 16): Connection refused
E0323 16:45:11.415399 29778 portforward.go:233] lost connection to pod
Not sure what I forgot to do or where I made a mistake.
UPDATE: I guess it's because one of the containers crashes, since I don't have Postgres running. I want to use S3 storage.
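If it helps anyone, a hypothetical helm-values.yaml for an S3-only setup with no PostgreSQL, assuming the chart passes config through to the node config (metastore_uri and default_index_root_uri are standard Quickwit node-config keys; the bucket is a placeholder):

config:
  default_index_root_uri: s3://my-bucket/quickwit/indexes  # placeholder bucket
  metastore_uri: s3://my-bucket/quickwit/indexes           # file-backed metastore on S3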
commented index config with version 0.4
With 100 nodes, it takes too long to wait for each node to start or terminate one after the other.
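A minimal sketch of a fix, assuming the slowness comes from the StatefulSet's default OrderedReady policy, which starts and terminates pods one at a time:

# in the searcher StatefulSet spec
spec:
  podManagementPolicy: Parallel  # launch and terminate pods concurrently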
Right now our ingress routes requests to searchers. It would probably be more efficient to send ingest requests to indexers. We can probably cook something up by adding a route and the annotation:
nginx.ingress.kubernetes.io/use-regex: "true"
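A sketch of the extra rule, assuming the nginx ingress controller and a hypothetical quickwit-indexer service name (the path regex matches Quickwit's /api/v1/{index}/ingest endpoint, and 7280 is Quickwit's default REST port):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - http:
        paths:
          - path: /api/v1/.*/ingest
            pathType: ImplementationSpecific
            backend:
              service:
                name: quickwit-indexer  # hypothetical indexer service name
                port:
                  number: 7280          # Quickwit's default REST port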
With a local metastore, the post-installation hook does not work. The hook simply attempts to modify a local file metastore when it should modify the files in the metastore pod. Ideally, we should use a REST API (which doesn't exist yet) to create the index/sources in a safe manner.
When setting

image:
  tag: v0.5.0
I get
Failed to pull image "quickwit/quickwit:v0.5.0": rpc error: code = Unknown desc = no matching manifest for linux/arm64/v8 in the manifest list entries
Setting the tag to latest fixed the issue.