
Magda Config

This is a simple boilerplate that allows you to quickly set up a Magda instance - the idea is that you can fork this config, commit changes but keep merging in master in order to stay up to date. If you are new to Magda, you might also be interested in our tutorial repo.

⚠️ Warning: Compatibility Issues ⚠️

We have upgraded our Terraform module to work with Magda v0.0.57 or later (using the Helm 3 Terraform provider).

If you need the Terraform module to deploy an older version of Magda (v0.0.56-RC or earlier), please check out the v0.0.56-RC6 branch and use the Terraform module there.
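
For example, mirroring the clone command used later in this guide:

git clone --single-branch --branch v0.0.56-RC6 https://github.com/magda-io/magda-config.git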

⚠️ Warning: Work in Progress ⚠️

With this repo we're trying to make it as easy as possible to get started with Magda... but we're not there yet. Setting up Magda in a configuration similar to data.gov.au (i.e. an openly-available, pure-open-data search engine) is fairly simple, but using other features (e.g. Add Dataset, the Admin UI) will almost certainly get you stuck in some way that requires Kubernetes skills to get out of.

This doesn't mean you shouldn't try, and we're happy to answer any questions you have on our Github Discussions Forum. Just be aware that at best, this repo works a bit like a Linux installer - it can get you started easily, but if you want to mess around you'll still have to learn how it works.

Getting Started

How you get started with Magda will depend on where you're starting from:

  • I have nothing already set up, and I'm happy to run everything on Google Cloud through Terraform: Please use the instructions below.
  • I already have a kubernetes cluster, or want to use a local environment/cloud environment other than Google Cloud, or I just don't like Terraform: Please have a look at our tutorial repo.

NOTE: Since version v0.0.57, Magda requires Helm v3 to deploy. The Terraform helm provider has been upgraded to version 1.1.1 to support Helm v3. If you previously deployed an older version of Magda (e.g. v0.0.56-RC6), please refer to this migration document before using Terraform to upgrade your existing release to a newer version.

Quickstart Instructions - Terraform

For new users setting up Magda for the first time, we recommend these instructions. They use Terraform to stand up an instance on Google Cloud very quickly (about 5 minutes of entering commands / editing config and 20 minutes of waiting) and give you a basic instance; another 30-60 minutes of waiting will get HTTPS working on your own domain.

1. Clone this repo

git clone --single-branch --branch master https://github.com/magda-io/magda-config.git

or download it with the "Clone or download" button in Github.

2. Install Terraform

Go to https://learn.hashicorp.com/terraform/getting-started/install.html for instructions

3. Install Helm

Go to https://helm.sh/docs/intro/install/ for instructions

Version 3.2.0 or higher is required.

You can test your install with:

helm version

This should tell you the version of Helm installed.
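
The output should look something like this (purely illustrative - your version, commit and Go details will differ):

version.BuildInfo{Version:"v3.2.0", GitCommit:"...", GitTreeState:"clean", GoVersion:"go1.13.10"}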

4. Install Google Cloud SDK

Go to https://cloud.google.com/sdk/docs/downloads-interactive for instructions.

Once the Google Cloud SDK is installed, you also need to install the gcloud beta components with the following command:

gcloud components install beta

5. Create a Google Cloud Project

Before you start the deployment process, you need to create a Google Cloud project via the Google Cloud Console and note down the project ID. Note that this isn't necessarily exactly the same as the ID you specified - if it's already been taken, Google will append some numbers to it. Check the "Select a Project" dialog in Google Cloud to make sure:

Google Cloud Select a Project Dialog
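
If you prefer to stay on the command line, you can also create the project with gcloud - my-magda-project below is a placeholder for your own ID:

gcloud projects create my-magda-project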

6. Set Default Project

Set the project ID you noted down as an environment variable, because you'll need it in a few places - this will work in bash. If you're using another shell, use the equivalent command or just manually replace $PROJECT_ID with your project ID.

export PROJECT_ID=[your-project-id]

Then set it as the default in Google Cloud

gcloud config set project $PROJECT_ID

7. Enable required services & APIs for your project

gcloud services enable compute.googleapis.com
gcloud services enable container.googleapis.com

8. Create service account for the deployment

gcloud iam service-accounts create magda-robot

Feel free to use a name other than magda-robot if you like.

9. Find the service account email

You need the email address of your newly created service account, as it's used as the identifier in other commands.

To do so, first list all service accounts:

gcloud iam service-accounts list

Find the row for your service account. The service account email should look something like magda-robot@[your-project-id].iam.gserviceaccount.com. You'll need this a few times, so it's worth saving it to an environment variable - once again, if you're not using a shell that supports this you can just manually replace $SERVICE_ACCOUNT_EMAIL with the email address itself.

export SERVICE_ACCOUNT_EMAIL=[your-service-account-email]
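
Since service account emails always follow the pattern [name]@[project-id].iam.gserviceaccount.com, you can also construct the variable directly - assuming you kept the magda-robot name from step 8 and set $PROJECT_ID in step 6:

export SERVICE_ACCOUNT_EMAIL="magda-robot@${PROJECT_ID}.iam.gserviceaccount.com"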

10. Create an access key for your service account

First go to the terraform/magda directory inside your cloned version of this repository.

cd magda-config/terraform/magda
gcloud iam service-accounts keys create key.json --iam-account=$SERVICE_ACCOUNT_EMAIL

You will now have a key.json file in terraform/magda containing a private key. We suggest you put this somewhere safe, like a password manager. DO NOT CHECK IT INTO SOURCE CONTROL.
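
If you're committing your fork back to source control, one simple guard (assuming your fork doesn't already ignore the key) is:

echo "key.json" >> .gitignore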

11. Grant service account permission

Grant editor role to your service account:

gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:$SERVICE_ACCOUNT_EMAIL --role roles/editor

Grant k8s admin role to your service account:

gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:$SERVICE_ACCOUNT_EMAIL --role roles/container.admin
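
You can verify that both bindings took effect with gcloud's standard filter/format flags:

gcloud projects get-iam-policy $PROJECT_ID --flatten="bindings[].members" --filter="bindings.members:serviceAccount:$SERVICE_ACCOUNT_EMAIL" --format="table(bindings.role)"

This should list both roles/editor and roles/container.admin.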

12. Initialize Terraform

To do so, run:

terraform init

After a bit of waiting you should get this message:

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

13. Edit terraform config

Edit terraform/magda/terraform.tfvars and supply the following parameters (an example follows the list):

  • project id: the ID of the Google Cloud project you created (echo $PROJECT_ID)
  • deploy region: the region you want to deploy Magda to
  • credential_file_path: the path of the service account key file (key.json) that we just generated
  • namespace: the Kubernetes namespace you want to deploy Magda to (generally this should just be "default")
  • external_domain: optional: the domain you want the Magda server to be accessed from (which requires a bit of extra configuration). Leave it blank to access your instance through a temporary domain; you can set this later if necessary.
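
As a rough illustration, a filled-in terraform.tfvars might look like the following - the first two key names here are our assumption, so check the comments in the file itself for the exact spelling:

project_id = "my-magda-project"
deploy_region = "australia-southeast1"
credential_file_path = "./key.json"
namespace = "default"
external_domain = "magda.example.com"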

You can find the full list of other configurable options, along with their default values if not set, here.

14. Edit default helm config

Look at values.yaml. It has reasonable defaults, but you might want to edit something - as-is, it will give you a new instance with a standard colour scheme/logos and no datasets (yet).
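
Once you're happy with the config, you can optionally preview what Terraform is about to create - the plan step prints the execution plan without changing anything:

terraform plan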

15. Deploy!

terraform apply -auto-approve

This will take quite a while (like 20 minutes), but it should update you about its progress. Take this opportunity to make a cup of tea or stretch!

Once the deployment is complete, you should get a bunch of output including something like this:

Apply complete! Resources: 12 added, 0 changed, 0 destroyed.

Outputs:

external_access_url = http://34.98.120.7.xip.io/
external_ip = 34.98.120.7

You should be able to go to http://[external_ip] right away and see your Magda homepage come up. If you didn't specify external_domain, then the external_access_url will also work, otherwise see below:

Use Your Own Domain

If you specified external_domain, you need to create a DNS A record in your DNS registrar's system, pointing at the external_ip value that was generated when deploying Magda.
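
You can check that the record has propagated before waiting on the certificate - for example with dig, where magda.example.com stands in for your own domain:

dig +short magda.example.com

This should print the same address as external_ip.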

SSL / HTTPS Access

As long as you specified external_domain in the config file terraform/magda/terraform.tfvars and you've set an A record from that domain to the value that came back in external_ip, an SSL certificate will be automatically generated and set up for you. The process takes 30 to 60 minutes, as specified by Google:

With a correct configuration the total time for provisioning certificates is likely to take from 30 to 60 minutes.

Upgrade Your Site for SSL / HTTPS Access

If you didn't supply a value for external_domain config field during your initial deployment, you can edit the config file and update your deployment by re-running:

terraform apply -auto-approve

16. What now?

Start playing around!

  • If you want to get some datasets into your system, set the connectors tag to true in values.yaml and re-run terraform apply -auto-approve. A connector job will be created and start pulling datasets from data.gov.au... or you can modify connectors: in values.yaml to pull in datasets from somewhere else.
  • In the Google Cloud console, go to Kubernetes Engine / Clusters and click the "Connect" button, then use the kubectl command (should be installed along with the Google Cloud command line) to look at your new Magda cluster.

Google Kubernetes Engine Connect Button

Use kubectl get pods to see all of the running containers and kubectl logs -f <container name> to tail the logs of one. You can also use kubectl port-forward combined-db-0 5432 to open a tunnel to the database, then use psql, PgAdmin or equivalent to investigate it - you can find the password in terraform.tfstate.
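
For example, a minimal sketch of connecting with psql - the postgres username here is an assumption, so adjust it to whatever your deployment uses:

kubectl port-forward combined-db-0 5432:5432

and then, in another terminal:

psql -h localhost -p 5432 -U postgres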

  • Sign up for an API key on Facebook or Google, and put your client secret in terraform.tfvars and your client id in values.yaml to enable signing in via OAuth.
  • Configure an SMTP server in terraform.tfvars and values.yaml and switch the correspondence flag to true in order to be able to send emails from the app.
  • Set scssVars in values.yaml to change the colours
  • Ask us questions on https://github.com/magda-io/magda/discussions
  • Send us an email at [email protected] to tell us about your new Magda server.

You might also be interested in our tutorial repo, which will not only help you get familiar with more advanced configuration but will also give you a quick registry API tour.

FAQ

How do I make myself an admin?

This is harder than it should be at this point.

  1. Use kubectl port-forward combined-db-0 5432 -n <your-namespace> to get a connection to the database.
  2. Get your db password out of the db-passwords secret - in bash (use base64 -d instead of -D on Linux) you can use
kubectl get secrets db-passwords -o yaml -n <your namespace> | grep authorization-db: | awk '{print $2}' | base64 -D

or you can just use kubectl get secrets db-passwords -o yaml -n <your namespace> to get the secret, then base64-decode it to get the password.
  3. Use acs-cmd to set / unset a user as an admin (see the sketch below).
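
A minimal sketch of step 3, assuming the admin subcommand shape in the @magda/acs-cmd package - check acs-cmd --help for the exact syntax:

npm install --global @magda/acs-cmd
acs-cmd admin set <user-id>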

Where's the admin UI?

After logging in as an admin user, you will see the Admin button on your account details page.

How do I authorise API access?

Please refer to the How to create an API key doc for more information on accessing APIs with an API key.

How do I add a new dataset?

After logging in as an admin user, you will see a button for creating a new dataset on the home page.

Troubleshooting

  • If something goes wrong, often you can fix it by just running terraform apply again.
  • If that fails, and you got as far as the helm release stage, you can try marking the helm release and related resources for recreation by running:
terraform taint helm_release.magda_helm_release
terraform taint kubernetes_secret.auth_secrets
terraform taint kubernetes_secret.db_passwords
terraform taint kubernetes_namespace.magda_namespace
terraform taint kubernetes_namespace.magda_openfaas_namespace
terraform taint kubernetes_namespace.magda_openfaas_fn_namespace

And then terraform apply again. Note that this will probably destroy any data you've entered so far.

  • If that fails, you can start the entire process from scratch by running terraform destroy and re-running terraform apply. This will definitely destroy any data you've entered so far.

magda-config's People

Contributors

alexgilleran, kring, sajidanower23, soyarsauce, t83714


magda-config's Issues

failed to install CRD crds/crd.yaml

Hello,
Could you please help with the error I am receiving when trying to install Magda on minikube? When I run the last step of the installation

helm upgrade --install --timeout 9999s --debug magda ./chart

I receive the error message below.

Error: failed to install CRD crds/crd.yaml: unable to recognize "": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
helm.go:88: [debug] unable to recognize "": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
failed to install CRD crds/crd.yaml
helm.sh/helm/v3/pkg/action.(*Install).installCRDs
        helm.sh/helm/v3/pkg/action/install.go:143
helm.sh/helm/v3/pkg/action.(*Install).RunWithContext
        helm.sh/helm/v3/pkg/action/install.go:207
main.runInstall
        helm.sh/helm/v3/cmd/helm/install.go:265
main.newUpgradeCmd.func2
        helm.sh/helm/v3/cmd/helm/upgrade.go:124
github.com/spf13/cobra.(*Command).execute
        github.com/spf13/[email protected]/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/[email protected]/command.go:974
github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/[email protected]/command.go:902
main.main
        helm.sh/helm/v3/cmd/helm/helm.go:87
runtime.main
        runtime/proc.go:225
runtime.goexit
        runtime/asm_amd64.s:1371

Also, when I apply the role binding

kubectl apply -f role-binding.yaml

I receive an error message saying

The connection to the server localhost:8080 was refused - did you specify the right host or port?

Enabling multi-tenancy fails

Hi,

I'm trying to use the pre-alpha multi-tenancy feature on Magda by setting the following option in values.yaml:

  enableMultiTenants: true

Observed results:

  • In the browser: a blank page with the message
Unable to process magda.domain right now. Please try again shortly.
  • In the browser developer tools:
The character encoding of a plain text document has not been declared. The document will be displayed with incorrect characters for some browser configurations if the document contains characters outside the US-ASCII range. The character encoding of the file must be declared in the transfer protocol or the file must use a byte order mark (BOM) as the encoding signature.
  • In the Kubernetes Magda gateway pod logs (errors present):
ERROR admin Error: getaddrinfo ENOTFOUND admin-api admin-api:80
ERROR correspondence Error: getaddrinfo ENOTFOUND correspondence-api correspondence-api:80
  • In the Kubernetes Magda namespace events (errors present):
17m         Warning   Unhealthy                pod/storage-api-6bdcb54f98-jlqkg                  Readiness probe failed: Get http://10.2.10.227:80/v0/status/ready: dial tcp 10.2.10.227:80: connect: connection refused
19m         Warning   Unhealthy                pod/tenant-api-7f6f68c846-rthmm                   Readiness probe failed: Get http://10.2.7.5:80/v0/status/ready: dial tcp 10.2.7.5:80: connect: connection refused
  • In the Kubernetes Magda gateway pod events (no problems):
Events:
  Type    Reason     Age        From                                                Message
  ----    ------     ----       ----                                                -------
  Normal  Scheduled  <unknown>  default-scheduler                                   Successfully assigned magda/gateway-7b44c984f5-f2sw7 to node-ec756107-8cf7-47e7-97ab-f1e6d87b6233
  Normal  Pulling    5m58s      kubelet, node-ec756107-8cf7-47e7-97ab-f1e6d87b6233  Pulling image "docker.io/data61/magda-gateway:0.0.57"
  Normal  Pulled     5m56s      kubelet, node-ec756107-8cf7-47e7-97ab-f1e6d87b6233  Successfully pulled image "docker.io/data61/magda-gateway:0.0.57"
  Normal  Created    5m56s      kubelet, node-ec756107-8cf7-47e7-97ab-f1e6d87b6233  Created container gateway
  Normal  Started    5m56s      kubelet, node-ec756107-8cf7-47e7-97ab-f1e6d87b6233  Started container gateway

Any clues? Is this a problem on my side, or a bug?
Thanks a lot!

Configure HTTPS Ingress with cert-manager via own ClusterIssuer

Hi all, I'm trying to configure an Ingress with cert-manager for HTTPS access.

In the values.yaml file, I changed the following:

tags:
  ingress: true
...
magda:
  magda-core:
    ingress:
      hostname: magda.<domain-name>
      # ipName:
      # ingressClass: gce
      enableTls: true
      tlsSecretName: "magda-cert-tls"
      domains:
        - magda.<domain-name>

I'd like to be able to add an annotation to the Ingress to use my own cert-manager ClusterIssuer.
Is this possible (I can't find a way)? Will it be possible in the near future?
Thanks

Create Staging cluster & tested upgrade to v60

  • Create Staging cluster
  • Upgrade k8s api & node pool to latest stable channel
  • Tested upgrade to v60
  • Update Magda config
  • Test against v8 national map test site

create-secrets executable binary needs update

create-secrets executable binary needs an update.
We used to have an automated script that updated this from the main Magda repo, but it didn't trigger for the recent release.
We will look into that issue as well.

Can't install create-secrets

Hi,

I was following the tutorial to run this locally on macOS (Darwin 19.6.0), but I can't install create-secrets.
It is step 6 of the following: https://github.com/magda-io/magda-config/blob/master/existing-k8s.md

It gives me the error below:

Thanks.

 npm install --global @magda/create-secrets                       
npm ERR! code ENOENT
npm ERR! syscall chmod
npm ERR! path /Users/miguel.silvestre/.nvm/versions/node/v14.15.0/lib/node_modules/@magda/create-secrets/bin/create-secrets.js
npm ERR! errno -2
npm ERR! enoent ENOENT: no such file or directory, chmod '/Users/miguel.silvestre/.nvm/versions/node/v14.15.0/lib/node_modules/@magda/create-secrets/bin/create-secrets.js'
npm ERR! enoent This is related to npm not being able to find a file.
npm ERR! enoent

npm ERR! A complete log of this run can be found in:
npm ERR!     /Users/miguel.silvestre/.npm/_logs/2020-11-11T12_23_53_421Z-debug.log

Elastic search failing on clean k3d install

I am performing a clean install under k3d v5.2.2 (with k3s v1.21.7-k3s1). Everything seems to deploy fine, and I can open the main page.

However, the Organisations page shows an error.

After setting up an admin user and digging a little more, I found that I cannot select a Country when attempting to create a data set, and then I cannot select an Organization, so I cannot complete a data set upload either.

Digging into the k8s logs, I see the following from anything that needs the search-api:

kubectl -n magda logs deployment/search-api

03:06:16,229 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]                                                                                                                             
03:06:16,229 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]                                                                                                                               
03:06:16,229 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [jar:file:/app/lib/au.csiro.data61.magda-search-api-0.0.59.jar!/logback.xml]                                                           
03:06:16,230 |-WARN in ch.qos.logback.classic.LoggerContext[default] - Resource [logback.xml] occurs multiple times on the classpath.                                                                                                         
03:06:16,230 |-WARN in ch.qos.logback.classic.LoggerContext[default] - Resource [logback.xml] occurs at [jar:file:/app/lib/au.csiro.data61.magda-search-api-0.0.59.jar!/logback.xml]                                                          
03:06:16,230 |-WARN in ch.qos.logback.classic.LoggerContext[default] - Resource [logback.xml] occurs at [jar:file:/app/lib/au.csiro.data61.magda-scala-common-0.0.59.jar!/logback.xml]                                                        
03:06:16,242 |-INFO in ch.qos.logback.core.joran.spi.ConfigurationWatchList@5adb0db3 - URL [jar:file:/app/lib/au.csiro.data61.magda-search-api-0.0.59.jar!/logback.xml] is not of type file                                                   
03:06:16,416 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - debug attribute not set                                                                                                                                      
03:06:16,417 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]                                                                                          
03:06:16,423 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDOUT]                                                                                                                                          
03:06:16,455 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.classic.AsyncAppender]                                                                                         
03:06:16,457 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [ASYNC]                                                                                                                                           
03:06:16,457 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to ch.qos.logback.classic.AsyncAppender[ASYNC]                                                                                  
03:06:16,457 |-INFO in ch.qos.logback.classic.AsyncAppender[ASYNC] - Attaching appender named [STDOUT] to AsyncAppender.                                                                                                                      
03:06:16,457 |-INFO in ch.qos.logback.classic.AsyncAppender[ASYNC] - Setting discardingThreshold to 51                                                                                                                                        
03:06:16,458 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [ch.qos.logback] to ERROR                                                                                                                   
03:06:16,458 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.sksamuel] to ERROR                                                                                                                     
03:06:16,458 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to INFO                                                                                                                            
03:06:16,458 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [ASYNC] to Logger[ROOT]
03:06:16,458 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
03:06:16,459 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@3f270e0a - Registering current configuration as safe fallback point

[INFO] [01/08/2022 03:06:16.462] [main] [MagdaApp$(akka://search-api)] Starting Search API on port 80
[INFO] [01/08/2022 03:06:16.498] [search-api-akka.actor.default-dispatcher-6] [akka.actor.ActorSystemImpl(search-api)] Elastic Client server Url: elasticsearch://elasticsearch:9200?cluster.name=myesdb
[INFO] [01/08/2022 03:06:16.499] [search-api-akka.actor.default-dispatcher-6] [akka.actor.ActorSystemImpl(search-api)] Elastic Client maxRetryTimeout: 30000
[INFO] [01/08/2022 03:06:16.525] [search-api-akka.actor.default-dispatcher-6] [akka.actor.ActorSystemImpl(search-api)] Elastic Client connectTimeout: 30000
[INFO] [01/08/2022 03:06:16.525] [search-api-akka.actor.default-dispatcher-6] [akka.actor.ActorSystemImpl(search-api)] Elastic Client socketTimeout: 30000
[INFO] [01/08/2022 03:06:17.206] [search-api-akka.actor.default-dispatcher-6] [akka.actor.ActorSystemImpl(search-api)] Successfully made initial contact with the ES client (this doesn't mean we're fully connected yet!)
[ERROR] [01/08/2022 20:04:07.105] [search-api-akka.actor.default-dispatcher-27] [akka.actor.ActorSystemImpl(search-api)] Exception when searching
au.csiro.data61.magda.search.elasticsearch.Exceptions.ESException: search_phase_execution_exception: all shards failed
        at au.csiro.data61.magda.search.elasticsearch.Exceptions.ESGenericException$.unapply(Exceptions.scala:109)
        at au.csiro.data61.magda.search.elasticsearch.ElasticSearchQueryer.$anonfun$searchOrganisations$3(ElasticSearchQueryer.scala:963)
        at scala.concurrent.Future.$anonfun$flatMap$1(Future.scala:307)
        at scala.concurrent.impl.Promise.$anonfun$transformWith$1(Promise.scala:41)
        at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
        at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
        at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:92)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)
        at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:92)
        at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:49)
        at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

The logs for the ES stateful set show a similar, but not identical, error:

kubectl -n magda logs statefulset/es-data

[2022-01-08T20:04:07,101][WARN ][r.suppressed             ] [r1lL6eJ] path: /datasets46/_search, params: {index=datasets46}
org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed                                                                                                                                                             
        at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:293) [elasticsearch-6.5.4.jar:6.5.4]                                                                                      
        at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:133) [elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:254) [elasticsearch-6.5.4.jar:6.5.4]                                                                                         
        at org.elasticsearch.action.search.InitialSearchPhase.onShardFailure(InitialSearchPhase.java:101) [elasticsearch-6.5.4.jar:6.5.4]                                                                                                    
        at org.elasticsearch.action.search.InitialSearchPhase.access$100(InitialSearchPhase.java:48) [elasticsearch-6.5.4.jar:6.5.4]                                                                                                         
        at org.elasticsearch.action.search.InitialSearchPhase$2.lambda$onFailure$1(InitialSearchPhase.java:222) [elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.action.search.InitialSearchPhase.maybeFork(InitialSearchPhase.java:176) [elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.action.search.InitialSearchPhase.access$000(InitialSearchPhase.java:48) [elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.action.search.InitialSearchPhase$2.onFailure(InitialSearchPhase.java:222) [elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.action.search.SearchExecutionStatsCollector.onFailure(SearchExecutionStatsCollector.java:73) [elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:51) [elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.action.search.SearchTransportService$ConnectionCountingHandler.handleException(SearchTransportService.java:464) [elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1130) [elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.transport.TransportService$DirectResponseChannel.processException(TransportService.java:1247) [elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:1221) [elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.transport.TaskTransportChannel.sendResponse(TaskTransportChannel.java:66) [elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.action.support.HandledTransportAction$ChannelActionListener.onFailure(HandledTransportAction.java:112) [elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.search.SearchService$2.onFailure(SearchService.java:347) [elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:341) [elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:335) [elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.search.SearchService$4.doRun(SearchService.java:1082) [elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:723) [elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) [elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.5.4.jar:6.5.4]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
        at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.elasticsearch.search.SearchContextException: no mapping found for `publisher.aggKeywords.keyword` in order to collapse on
        at org.elasticsearch.search.collapse.CollapseBuilder.build(CollapseBuilder.java:234) ~[elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.search.SearchService.parseSource(SearchService.java:916) ~[elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.search.SearchService.createContext(SearchService.java:616) ~[elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:592) ~[elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:367) ~[elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.search.SearchService.access$100(SearchService.java:121) ~[elasticsearch-6.5.4.jar:6.5.4]
        at org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:339) ~[elasticsearch-6.5.4.jar:6.5.4]
        ... 9 more

Logs for the other components all seem fine I think, but I could have missed something since the error messages above seem vague to me.

Any ideas on what to try? It seems I am very close to getting everything working at this point...

How to set password for the admin user

I successfully deployed magda using terraform.
But I couldn't find a setting for the admin password, and I can't access /auth/admin or /admin.
Is there a way to change the password for the admin user?

Helm / Tiller Terminates Hook Job too quickly before it's complete

This only happens after #7 has occurred (i.e. redeploying to an interrupted release).

This seems to be an issue with Helm since 2.15:

helm/helm#6767

To reproduce:

  • Deploy using terraform apply
  • Once you see error as described in #7 , re-deploy using terraform apply
  • Go to cloud console and kubectl -n magda get pods -w
  • you will see that the migrators are created quite late and are terminated before completing (within a few seconds)

Could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request

When deploying with terraform, you may see the following error from terraform apply:

Error: rpc error: code = Unknown desc = Could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request

  on main.tf line 95, in resource "helm_release" "magda_helm_release":
  95: resource "helm_release" "magda_helm_release" {

This error will go away on the next terraform apply attempt (against the same cluster).

It may have something to do with this helm issue:

helm/helm#6361

It could also be a temporary GKE issue, as per here:

For GKE users, Google is having issues with heapster and metric-server. This is what is causing the helm failures and explains why it works sometimes and not others.

Event Start: 10/30/19
Affected Products:
Cloud Services
Description:
The issue with Google Kubernetes Engine experiencing an elevated rate of errors for heapster autoscaling is in the process of being mitigated and our Engineering Team is working to deploy new versions with a fix.
Once the fixed versions become available affected customers will be able to upgrade their clusters to receive the fix.
We will provide an update on the status of the fix by Wednesday, 2019-11-13 16:30 US/Pacific with current details. In the interim, if you have questions or are impacted, please open a case with the Support Team and we will work with you until this issue is resolved.
Steps to Reproduce:
Heapster deployment may be crashing due to inaccurate resource values and then fail to resize due to an invalid name reference in the heapster-nanny container. The logs for an affected cluster will show errors like the below under the heapster-nanny container logs:
ERROR: logging before flag.Parse: E1030 14:50:59.147245 1 nanny_lib.go:110] deployments.extensions "heapster-v1.7.X" not found
Workaround:
Manually add requests/limits to the heapster container under the heapster deployment:
kubectl -n kube-system edit deployment heapster
These values can be calculated as:
* cpu: 80m + 0.5m * number of nodes
* memory: 140Mi + 4Mi * number of nodes
