
appdynamics-charts's Introduction

AppDynamics Charts

Welcome to the repository of Helm charts for deployments of AppDynamics agents.

Get the repo

$ helm repo add appdynamics-charts https://ciscodevnet.github.io/appdynamics-charts

Machine Agent

The AppDynamics Machine Agent offers application-centric server monitoring. It helps proactively isolate and resolve application performance issues faster with actionable, correlated application and server metrics.

  • Install the chart:
helm install --namespace=appdynamics \
--set controller.accessKey=<controller-key> \
--set controller.host=<*.saas.appdynamics.com> \
--set controller.port=443 --set controller.ssl=true \
--set controller.accountName=<account-name> \
--set controller.globalAccountName=<global-account-name> \
--set analytics.eventEndpoint=https://analytics.api.appdynamics.com \
--set agent.netviz=true serverviz appdynamics-charts/machine-agent

For a detailed list of configuration settings, refer to the chart documentation.

To remove the chart, run the following command:

$ helm delete serverviz --namespace=appdynamics

ClusterAgent

โš ๏ธ Deprecation Notice This is a notice for sunsetting this public facing repository containing AppDynamics Cluster Agent helm charts in Github.

This repository will be maintained until October 31st, 2023, after which it will no longer be updated. After this date, the latest Helm charts will be made available via the AppDynamics production Artifactory.

AppDynamics ClusterAgent provides insights into the health of Kubernetes and OpenShift clusters and helps differentiate application anomalies from issues with cluster configuration and performance.

  • Install the chart:
helm install --namespace=appdynamics \
--set controller.dns=saas.appdynamics.com \
--set controller.port=443 --set controller.ssl=true \
--set controller.accountName=customer1 \
--set controller.accessKey=f37b760f-962a-4280-b8b3-e85dcc016967 \
k8s-agent appdynamics-charts/cluster-agent

For a detailed list of configuration settings, refer to the chart documentation.

To remove the chart, run the following command:

$ helm delete k8s-agent --namespace=appdynamics

appdynamics-charts's People

Contributors

ajohnstone, appdagents, sashapm, subhashiyer07


appdynamics-charts's Issues

Remove "lookup" logic from the cluster-agent secret

The Problem

Right now the cluster-agent has this logic in the "secret-cluster-agent.yaml":

{{ $secret := (lookup "v1" "Secret" .Release.Namespace "cluster-agent-secret") }}

This is a query to the Kubernetes API that checks whether the "cluster-agent-secret" exists. This is an anti-pattern for Helm and should be avoided because it creates unnecessary race conditions. The only way this works is if I guarantee that the secret exists before the chart is rendered and executed, which is a bad approach because Helm is not a good tool for defining the order in which resources are deployed. Basically, my deployer now needs logic to say "first deploy the secret, and then deploy the cluster agent". This type of ordering logic is not supported by Helm or other Kubernetes deployers like ArgoCD, Kustomize or Kaptain. So basically we're stuck with a manual step, or we have to write custom logic just to make this automatic.

Our Helm CI is also breaking, because we validate the configurations with helm template. The template will always break there because it has no access to the cluster API, so the lookup function always returns an empty map.

The solution

  1. Simply ask for a variable in the values.yaml called "customSecretName", so I can say something like:
customSecretName: my-secret

I can create this secret in the same automation, in another Helm chart dependency, as another ArgoCD resource... many options. The point is that I'm creating my own secret and letting the chart know the secret name.

  2. Then in "secret-cluster-agent.yaml", just add the logic saying "if .Values.customSecretName is defined, do not render the cluster-agent-secret".

  3. In the deployment, add the logic "if .Values.customSecretName is defined, use that secret to inject secrets into the container; otherwise, use the cluster-agent-secret". A sketch of this template logic follows.
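A minimal sketch of steps 2 and 3, assuming the chart-managed secret keeps its current name (cluster-agent-secret) and the key comes from the chart's controllerInfo.accessKey value:

# secret-cluster-agent.yaml: only render the chart-managed secret
# when no custom secret is supplied
{{- if not .Values.customSecretName }}
apiVersion: v1
kind: Secret
metadata:
  name: cluster-agent-secret
type: Opaque
data:
  controller-key: {{ .Values.controllerInfo.accessKey | b64enc }}
{{- end }}

# deployment: reference whichever secret applies
secretName: {{ .Values.customSecretName | default "cluster-agent-secret" }}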

Why is this a solution?

Because now the race condition is solved by the deployment itself. The container won't start until the secret mount exists, so it doesn't matter whether the custom secret is created before or after the deployment; the deployment will wait until it exists.

Look at the way other charts do this: https://artifacthub.io/packages/helm/wso2/mysql
That is the official MySQL chart; look at the variable existingSecret: https://artifacthub.io/packages/helm/wso2/mysql?modal=template&template=secrets.yaml

AppDynamics cluster agent auto-instrumentation breaks other deployed Helm releases

The AppDynamics cluster agent appears to work by running an init container that copies the AppDynamics language-specific agent and instrumentation shim into the application container. It does so by creating an emptyDir volume named "appd-agent-repo-nodejs" (in our case, anyways) that is mounted by the init container, and the application container. This appears to be accomplished by modifying the deployment spec to define the volume and have the application container mount the volume.

A problem arises when attempting to deploy the service again, as it will generate the following error --

Error: UPGRADE FAILED: an error occurred while rolling back the release. original upgrade error: cannot patch "foo" with kind Deployment: Deployment.apps "foo" is invalid: spec.template.spec.initContainers[0].volumeMounts[0].name: Not found: "appd-agent-repo-nodejs": release foo failed: timed out waiting for the condition

This is because the deployment spec has been updated and no longer matches the application Helm chart. One workaround was to add the "appd-agent-repo-nodejs" emptyDir volume manually to each service Helm chart, which seemed to work, at least for deploying the chart. However, when the containers actually tried to start, the AppDynamics cluster agent seemed to skip performing the instrumentation, and the services failed to start with this error --

internal/modules/cjs/loader.js:796
    throw err;
    ^

Error: Cannot find module '/opt/appdynamics-nodejs/shim.js'
Require stack:
- internal/preload
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:793:17)
    at Function.Module._load (internal/modules/cjs/loader.js:686:27)
    at Module.require (internal/modules/cjs/loader.js:848:19)
    at Module._preloadModules (internal/modules/cjs/loader.js:1133:12)
    at loadPreloadModules (internal/bootstrap/pre_execution.js:443:5)
    at prepareMainThreadExecution (internal/bootstrap/pre_execution.js:62:3)
    at internal/main/run_main_module.js:7:1 {
  code: 'MODULE_NOT_FOUND',
  requireStack: [ 'internal/preload' ]
}

Are username and password values required?

The documentation says that the following is required --

controllerInfo:
  url: <controller-url>
  account: <controller-account>
  username: <controller-username>
  password: <controller-password>
  accessKey: <controller-accesskey>

Are the username and password fields really required, or is the access key sufficient, or vice-versa?

Using Auto-instrumentation

Can I use the auto-instrumentation feature with this chart? If yes, can you please point me in the right direction?

Update broken links

  1. The repo link in the readme should be updated to the new repo
  2. URLs in the index.yaml should be updated
  3. The link in the repo description should be updated
    cc @jeffaholmes

Use of "latest" and "win-latest" tags in helm charts does not follow best-practice

We are a large AppD customer, looking to expand our use of the cluster-agent.

Currently we have to patch our values file with replacements for "latest" and "win-latest". Using floating image versions is not Helm chart or Kubernetes best practice.

References to "latest" and "win-latest" should be updated to the current release when the helm chart is released.

This way the Helm chart footprint is known and tested. We test and whitelist all images that are pulled to our clusters; the use of non-version-aligned tags breaks this pattern.

instead of

imageInfo:
  agentImage: docker.io/appdynamics/cluster-agent
  agentTag: 22.9.0
  operatorImage: docker.io/appdynamics/cluster-agent-operator
  operatorTag: 22.9.0
  imagePullPolicy: Always                               # Will be used for operator pod
  machineAgentImage: docker.io/appdynamics/machine-agent
  machineAgentTag: latest
  machineAgentWinImage: docker.io/appdynamics/machine-agent-analytics
  machineAgentWinTag: win-latest
  netVizImage: docker.io/appdynamics/machine-agent-netviz
  netvizTag: latest

it would be better to have a tested version of machineAgentTag, machineAgentWinTag and netvizTag:

imageInfo:
  agentImage: docker.io/appdynamics/cluster-agent
  agentTag: 22.9.0
  operatorImage: docker.io/appdynamics/cluster-agent-operator
  operatorTag: 22.9.0
  imagePullPolicy: Always                               # Will be used for operator pod
  machineAgentImage: docker.io/appdynamics/machine-agent
  machineAgentTag: 22.9.0
  machineAgentWinImage: docker.io/appdynamics/machine-agent-analytics
  machineAgentWinTag: 22.9.0
  netVizImage: docker.io/appdynamics/machine-agent-netviz
  netvizTag: 21.3.0

tolerations taking an array instead of list

Hi Team,
The syntax for daemonset tolerations is shown as a map instead of a list. How are we supposed to define multiple tolerations for the daemonset in that case?
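For reference, standard Kubernetes syntax expresses multiple tolerations as a YAML list, so the values would presumably need to look something like this (a sketch; the taint keys and values are illustrative):

daemonset:
  tolerations:
    # tolerate the master taint
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
    # tolerate a custom "dedicated" taint
    - key: dedicated
      operator: Equal
      value: appd
      effect: NoExecute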

Let me know in case any clarification is needed from my end.

Thanks,
Rajeev

infraViz.nodeOS condition not working as intended

Hello,

I am trying to set up the cluster-agent chart on an AKS cluster using both Linux and Windows nodes. The default Linux value for nodeOS works just fine. When I try to use any of the other options (windows and all), this is the resulting message: Error: UPGRADE FAILED: YAML parse error on cluster-agent/templates/infraviz.yaml: error converting YAML to JSON: yaml: line 10: mapping values are not allowed in this context
Looking at the infraviz template, I do not see any glaring issues. I made sure to update the Helm repo and destroy the CRDs before trying again, just in case, but to no avail. Manually updating the infraviz resource with what the conditional was supposed to output does work for me. Let me know if there is any information you need for this; I'm not 100% sure what to provide.

Add cluster-agent nodeAntiAffinity

Hello guys,

I'm trying to deploy the cluster-agent Helm chart in an EKS cluster running with some Fargate instances, and I see some pods in Pending state because they can't run on Fargate.
I can't see any way to add a nodeAntiAffinity rule to the cluster-agent daemonset that is managed by the Operator.

Is there some way to do this? If not, would it be possible to add this feature?
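For context, a node-affinity rule that would keep daemonset pods off Fargate nodes might look roughly like this (a sketch using the standard EKS compute-type node label; the chart currently exposes no field for it):

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            # Fargate nodes carry eks.amazonaws.com/compute-type=fargate
            - key: eks.amazonaws.com/compute-type
              operator: NotIn
              values:
                - fargate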

Documentation to Configure the Server Cluster Name in ClusterAgent Chart

There does not seem to be any documentation regarding how the Kubernetes server name is set when looking in the AppDynamics Portal at Servers > Clusters.

For me, this was simply showing as appdynamics-cluster-agent-appdynamics in the Portal.
After looking at the following line in the templates, I see that this needs to be set using the optional value of .Values.clusterAgent.appName.

name: {{ (cat (.Values.clusterAgent.appName | default (cat .Release.Name "-" "appdynamics-cluster-agent" | nospace)) "-" .Release.Namespace | nospace) | trunc 63 }}
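Based on that template line, the displayed name can apparently be set from the values file, e.g.:

clusterAgent:
  appName: my-cluster

(The release namespace is still appended to it and the result is truncated to 63 characters.)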

I would have expected it to at least be listed here: Install the Cluster Agent with Helm Charts - Configuration Options

Add ability to set generic env entries that can refer to prior env entries

Kubernetes allows environment entries to reference any entry defined earlier in the list; see https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/#define-an-environment-dependent-variable-for-a-container.

I would like to be able to utilize this feature to set the APPDYNAMICS_AGENT_UNIQUE_HOST_ID like so...

In the values.yaml

  daemonset:
    envValueFrom:
      MY_HOST_IP:
        fieldRef:
          fieldPath: status.hostIP
    additionalEnv:
      - name: APPDYNAMICS_AGENT_UNIQUE_HOST_ID
        value: "dev-$(MY_HOST_IP)"

This would allow prefixing the names of the resources in the AppDynamics console, making them easier to find with a more human-readable option than the node name currently being assigned.
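Per the linked Kubernetes documentation, the daemonset would ultimately need to render the plain interdependent-env syntax, roughly like this (a sketch of the desired output, not current chart output):

env:
  - name: MY_HOST_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  # $(MY_HOST_IP) resolves because MY_HOST_IP is defined earlier in the list
  - name: APPDYNAMICS_AGENT_UNIQUE_HOST_ID
    value: "dev-$(MY_HOST_IP)"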

I've tried this method using a config map reference:

{{- range $value := .Values.daemonset.envFromConfigMap }}

but that doesn't seem to be supported by Kubernetes.

CR and CRB for appdynamics-operator

Hi all,
I have deployed the operator on OCP 4.11.9. The k8s-cluster-agent logs reported errors about listing resources at cluster scope, like the one below:
failed to list *v1.Deployment: deployments.apps is forbidden: User "system:serviceaccount:appdynamics:appdynamics-operator" cannot list resource "deployments" in API group "apps" at the cluster scope
This error is repeated for different resources (deployments, pods...) in different API groups.
Checking the resources created by the chart, there were a Role and a RoleBinding, but no ClusterRole (CR) and ClusterRoleBinding (CRB). I created these resources and the errors are gone.

Is there a reason these resources aren't being created, or is it an oversight? If it's the latter, I can add the missing files and open a PR.
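For reference, the missing resources might look roughly like this (a sketch; the names and the exact rule list are assumptions inferred from the error above):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: appdynamics-operator            # assumed name
rules:
  # grant the cluster-scope list/watch access the operator log complains about
  - apiGroups: ["apps"]
    resources: ["deployments", "daemonsets", "replicasets", "statefulsets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods", "events", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: appdynamics-operator            # assumed name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: appdynamics-operator
subjects:
  - kind: ServiceAccount
    name: appdynamics-operator
    namespace: appdynamics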

Best regards,
Daniele

Unwelcome change in behavior for Cluster Agent Name by using `clusterAgent.appName` value

Version v1.9.0 of the Cluster Agent Helm chart used to build the Cluster Agent Name using the syntax "%s-%s" .Release.Name "appdynamics-cluster-agent"; this was changed in v1.10.0 to derive the name from .Values.clusterAgent.appName.

Kubernetes does not allow uppercase letters in the resource name, and we are passing in an uppercase value from an external source. This change in behavior therefore caused the upgrade to fail. I suggest that this be handled in the template by forcing the value to lowercase, rather than requiring external systems to do this in advance.

https://github.com/CiscoDevNet/appdynamics-charts/blob/12257197ab9f27488c7b46dc796f0200b7450bc6/cluster-agent/templates/cluster-agent.yaml#LL5C8-L5C8

Error

Error: UPGRADE FAILED: failed to create resource: Clusteragent.cluster.appdynamics.com "MHRDEV02-K8S-01-appdynamics" is invalid: metadata.name: Invalid value: "MHRDEV02-K8S-01-appdynamics": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is 'a-z0-9?(.a-z0-9?)*')

However, I don't believe that the Cluster Agent Name should be derived from this value at all. In our scenario, the clusterAgent.appName is different per cluster, and we do not want the Cluster Agent Name changing based on this value.

Why not do something similar to other popular charts, which use _helpers.tpl and a fullname function (see popular Helm charts like prometheus and argo-cd as examples)?
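A minimal sketch of that conventional pattern, which would also solve the casing problem (the helper name is illustrative):

{{/* _helpers.tpl */}}
{{- define "cluster-agent.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | lower | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name "appdynamics-cluster-agent" | lower | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}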

Service file for machine-agent needs an update as it has wrong selector

We are trying to configure an Analytics agent for AppDynamics, but we are getting a "No route to host" error.
After troubleshooting further, we found that the machine agent service file has the following selector:
selector:
  app.kubernetes.io/instance: machine-agent
  app.kubernetes.io/name: machine-agent
But none of the machine agent pods are running with the labels mentioned above; the pods have the following labels:

app=machine-agent,controller-revision-hash=6cf5558c6d,linkerd.io/control-plane-ns=linkerd,linkerd.io/proxy-daemonset=machine-agent,linkerd.io/workload-ns=appdynamics-machine,pod-template-generation=2,release=machine-agent

As you can clearly see, it doesn't match with the selector mentioned in the service file.

I made the change manually (changing the selector as shown below) and it all started working:
selector:
  app: machine-agent

Allow Dynamic Access Key Retrieval and SecretProviderClass Integration

Currently, the appdynamics-charts requires the access key to be hardcoded, limiting the flexibility and security of the integration. To adhere to best practices and enhance the overall security posture, we need to implement the ability to retrieve access keys dynamically and integrate with the SecretProviderClass. This will allow us to securely fetch secrets from external sources without hardcoding them within the repository.
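To illustrate the request, a rough sketch of what a SecretProviderClass integration could look like (the Secrets Store CSI driver API is assumed; the provider, parameters, and object names are placeholders):

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: appd-access-key
spec:
  provider: vault                  # or aws/azure/gcp, depending on the backend
  parameters: {}                   # backend-specific settings omitted here
  secretObjects:                   # optionally sync into a Kubernetes Secret
    - secretName: cluster-agent-secret
      type: Opaque
      data:
        - objectName: controller-key
          key: controller-key

The chart would then reference the synced secret instead of a hardcoded access key.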

tolerations configuration doesn't work.

I can't figure out any way to get the chart to take tolerations:

daemonset:
  tolerations:
    - effect: NoSchedule
      operator: Exists

this does not work

Error: Failed to render chart: exit status 1: coalesce.go:220: warning: cannot overwrite table with non table for machine-agent.daemonset.tolerations (map[])
Error: YAML parse error on machine-agent/templates/daemonset.yaml: error converting YAML to JSON: yaml: line 74: did not find expected key

Machine Agent Service Points to Port 80 Internally

The most recent machine-agent-analytics container, docker.io/appdynamics/machine-agent-analytics, listens on port 9090 by default per its configuration.

Values available for these helm charts include an analytics.port value (which is also set to 9090 in the default values file):
https://github.com/Appdynamics/appdynamics-charts/blob/8616f2fffcfa6128a731eaaf0dc9f9875ef945f9/machine-agent/values.yaml#L68

The targetPort in the current service is set to http:
https://github.com/Appdynamics/appdynamics-charts/blob/8616f2fffcfa6128a731eaaf0dc9f9875ef945f9/machine-agent/templates/service.yaml#L13

This results in a refused connection, since nothing is listening on port 80 in the indicated machine-agent-analytics container.

(Note: to first establish a connection to the machine agent containers with this service, the selector must also be corrected as indicated in issue #16.)

Changing the targetPort to 9090 resolves the issue.
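A sketch of the corrected port mapping in the service template (surrounding fields abbreviated; the selector fix from issue #16 is included for completeness):

spec:
  selector:
    app: machine-agent                          # per issue #16
  ports:
    - port: {{ .Values.analytics.port }}        # 9090 in the default values file
      targetPort: {{ .Values.analytics.port }}  # was `http`, i.e. port 80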

Chart missing nameOverride and fullnameOverride

Hi there,

I don't know if this is more of a question than an issue, but when trying to install the cluster agent chart using Terraform, I can't find any option to override its name, so the pod ends up with a very "default" name.

terraform code

resource "helm_release" "appdynamics" {
  name = "k8s"
  namespace = "appdynamics"
  repository = "https://ciscodevnet.github.io/appdynamics-charts"
  chart = "cluster-agent"

  values = [
    file("${path.module}/appdynamics-values.yaml")
  ]
}

values

nameOverride: "test"
fullnameOverride: "test"

deploymentMode: PRIMARY

imageInfo:
  agentImage: docker.io/appdynamics/cluster-agent
  agentTag: 20.9.0
  operatorImage: docker.io/appdynamics/cluster-agent-operator
  operatorTag: latest
  imagePullPolicy: IfNotPresent
  machineAgentImage: docker.io/appdynamics/machine-agent-analytics
  machineAgentTag: 21.9.0
  machineAgentWinTag: 21.9.0-win-ltsc2019  

controllerInfo:
  url: https://apm-foobar.pl.appdynamics.com:443
  account: itau-dev                   
  username: FOO
  password: BAR
  accessKey: FOO123BAR456

clusterAgent:
  nsToMonitor: [default,kube-node-lease,kube-public,kube-system,appdynamics]
  clusterMetricsSyncInterval: 60
  metadataSyncInterval: 60
  eventUploadInterval: 10
  httpClientTimeout: 30
  podBatchSize: 6
  imagePullSecret: ""
  containerProperties:
    containerBatchSize: 5
    containerParallelRequestLimit: 1
    containerRegistrationInterval: 120
  logProperties:
    logFileSizeMb: 5
    logFileBackups: 3
    logLevel: INFO
  metricProperties:
    metricsSyncInterval: 30
    metricUploadRetryCount: 2
    metricUploadRetryIntervalMilliSeconds: 5

install:
  metrics-server: true  

pods

$ kubectl get pods -n appdynamics
NAMESPACE       NAME                                             READY   STATUS    RESTARTS   AGE
appdynamics     appdynamics-operator-646866b895-7nqd8            1/1     Running   0          56s
appdynamics     k8s-appdynamics-cluster-agent-84b466c677-v2p4t   1/1     Running   0          50s
appdynamics     k8s-metrics-server-d65b5b4c-wwzzd                1/1     Running   0          56s

I wanted to be able to change the cluster agent pod's name. Is that possible?
Thanks

Adding releases / tags in repo

SUMMARY
Having tagged releases would enable pulling an exact version with the Terraform helm provider's helm_release resource:
https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release

ADDITIONAL INFORMATION
Without a release, a solution that we found is to get the whole repo content:

module "appdynamics_charts_repo" {
  source = "git::https://github.com/Appdynamics/appdynamics-charts.git"
}

output "cluster_agent_file" {
  value = "${path.module}/cluster-agent-0.1.16.tgz"
}
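With tagged chart releases published to the Helm repo, the provider could instead pin an exact version directly (a sketch):

resource "helm_release" "cluster_agent" {
  name       = "k8s-agent"
  repository = "https://ciscodevnet.github.io/appdynamics-charts"
  chart      = "cluster-agent"
  version    = "0.1.16"   # exact, tested chart version
  namespace  = "appdynamics"
}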

Cluster agent crashes when CronJob runs

When a CronJob runs, the cluster agent crashes. It looks like when a CronJob creates a new container, the agent tries to add the Job, but crashes because it has already added it?

I have tested this with various CronJobs and it can be reliably replicated.

Here's the stack trace:

time="2019-08-29T13:10:05Z" level=debug msg="Added Job: xxxxxxxx\n"
E0829 13:10:05.677312 1 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 129 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x12e7d20, 0x22c6250)
/go/src/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65 +0x82
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/src/k8s.io/apimachinery/pkg/util/runtime/runtime.go:47 +0x82
panic(0x12e7d20, 0x22c6250)
/usr/local/go/src/runtime/panic.go:513 +0x1b9
github.com/appdynamics/cluster-agent/workers.(*JobsWorker).processObject(0xc0002e84b0, 0xc001758d80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/usr/local/go/src/github.com/appdynamics/cluster-agent/workers/jobs.go:216 +0x4e6
github.com/appdynamics/cluster-agent/workers.(*JobsWorker).onNewJob(0xc0002e84b0, 0x1436ce0, 0xc001758d80)
/usr/local/go/src/github.com/appdynamics/cluster-agent/workers/jobs.go:92 +0x142
github.com/appdynamics/cluster-agent/workers.(*JobsWorker).onNewJob-fm(0x1436ce0, 0xc001758d80)
/usr/local/go/src/github.com/appdynamics/cluster-agent/workers/jobs.go:70 +0x3e
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(0xc00057d080, 0xc00057d0b0, 0xc00057d0a0, 0x1436ce0, 0xc001758d80)
/go/src/k8s.io/client-go/tools/cache/controller.go:196 +0x49
k8s.io/client-go/tools/cache.(*processorListener).run.func1.1(0x0, 0xc000269e00, 0xc0004d65f0)
/go/src/k8s.io/client-go/tools/cache/shared_informer.go:608 +0x21d
k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff(0x989680, 0x3ff0000000000000, 0x3fb999999999999a, 0x5, 0x0, 0xc0003c1e18, 0x42d752, 0xc0004d6620)
/go/src/k8s.io/apimachinery/pkg/util/wait/wait.go:284 +0x51
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
/go/src/k8s.io/client-go/tools/cache/shared_informer.go:602 +0x79
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc000319f68)
/go/src/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x54
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0003c1f68, 0xdf8475800, 0x0, 0x12c9301, 0xc000660720)
/go/src/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xbe
k8s.io/apimachinery/pkg/util/wait.Until(0xc000319f68, 0xdf8475800, 0xc000660720)
/go/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
k8s.io/client-go/tools/cache.(*processorListener).run(0xc00033ae00)
/go/src/k8s.io/client-go/tools/cache/shared_informer.go:600 +0x8d
k8s.io/client-go/tools/cache.(*processorListener).run-fm()
/go/src/k8s.io/client-go/tools/cache/shared_informer.go:444 +0x2a
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000480760, 0xc0005e2550)
/go/src/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x4f
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
/go/src/k8s.io/apimachinery/pkg/util/wait/wait.go:69 +0x62
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x11668e6]
goroutine 129 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/src/k8s.io/apimachinery/pkg/util/runtime/runtime.go:54 +0x108
panic(0x12e7d20, 0x22c6250)
/usr/local/go/src/runtime/panic.go:513 +0x1b9
github.com/appdynamics/cluster-agent/workers.(*JobsWorker).processObject(0xc0002e84b0, 0xc001758d80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/usr/local/go/src/github.com/appdynamics/cluster-agent/workers/jobs.go:216 +0x4e6
github.com/appdynamics/cluster-agent/workers.(*JobsWorker).onNewJob(0xc0002e84b0, 0x1436ce0, 0xc001758d80)
/usr/local/go/src/github.com/appdynamics/cluster-agent/workers/jobs.go:92 +0x142
github.com/appdynamics/cluster-agent/workers.(*JobsWorker).onNewJob-fm(0x1436ce0, 0xc001758d80)
/usr/local/go/src/github.com/appdynamics/cluster-agent/workers/jobs.go:70 +0x3e
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(0xc00057d080, 0xc00057d0b0, 0xc00057d0a0, 0x1436ce0, 0xc001758d80)
/go/src/k8s.io/client-go/tools/cache/controller.go:196 +0x49
k8s.io/client-go/tools/cache.(*processorListener).run.func1.1(0x0, 0xc000269e00, 0xc0004d65f0)
/go/src/k8s.io/client-go/tools/cache/shared_informer.go:608 +0x21d
k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff(0x989680, 0x3ff0000000000000, 0x3fb999999999999a, 0x5, 0x0, 0xc0003c1e18, 0x42d752, 0xc0004d6620)
/go/src/k8s.io/apimachinery/pkg/util/wait/wait.go:284 +0x51
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
/go/src/k8s.io/client-go/tools/cache/shared_informer.go:602 +0x79
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc000319f68)
/go/src/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x54
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0003c1f68, 0xdf8475800, 0x0, 0x12c9301, 0xc000660720)
/go/src/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xbe
k8s.io/apimachinery/pkg/util/wait.Until(0xc000319f68, 0xdf8475800, 0xc000660720)
/go/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
k8s.io/client-go/tools/cache.(*processorListener).run(0xc00033ae00)
/go/src/k8s.io/client-go/tools/cache/shared_informer.go:600 +0x8d
k8s.io/client-go/tools/cache.(*processorListener).run-fm()
/go/src/k8s.io/client-go/tools/cache/shared_informer.go:444 +0x2a
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000480760, 0xc0005e2550)
/go/src/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x4f
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
/go/src/k8s.io/apimachinery/pkg/util/wait/wait.go:69 +0x62

Machine-Agent installation fails using helm chart

Hello Sasha,

There seems to be something wrong with the Helm chart for the machine agent. Here is what I get:

biswajit.nanda@bnanda-macbook:~/AppDynamics/InfraVisibility-EKS/bnanda-ekscluster-MachineAgent$ helm install -f helmchart.yml serverviz appdynamics-charts/machine-agent
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: unknown object type "nil" in ConfigMap.data.APPDYNAMICS_AGENT_GLOBAL_ACCOUNT_NAME

As https://appdynamics.github.io/appdynamics-charts/ is public facing, a lot of customers are trying to use it and creating support tickets when it fails. This is creating significant overhead for the already loaded Customer Engineering and Support teams at AppDynamics.
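The error indicates the ConfigMap rendered a nil value for APPDYNAMICS_AGENT_GLOBAL_ACCOUNT_NAME, which suggests controller.globalAccountName was not set in helmchart.yml. A likely workaround (an assumption based on the machine-agent install example earlier in this README) is to provide it explicitly:

# helmchart.yml
controller:
  globalAccountName: <global-account-name>   # value rendered into the ConfigMap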
