
k8ssandra-terraform's Introduction

[DEPRECATED]

This project is deprecated and replaced by k8ssandra-operator

Read this blog post to see what differences exist between K8ssandra and k8ssandra-operator, and why we decided to build an operator.
Follow our migration guide to migrate from K8ssandra (and Apache Cassandra®) to k8ssandra-operator.

K8ssandra

K8ssandra is a simple-to-manage, production-ready distribution of Apache Cassandra and Stargate for Kubernetes. It is built on a foundation of rock-solid open-source projects covering both the transactional and operational aspects of Cassandra deployments. This project is distributed as a collection of Helm charts. Feel free to fork the repo and contribute. If you're looking to install K8ssandra, head over to the Quickstarts.
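Before the deprecation, installation boiled down to adding the Helm repository and installing the chart; a minimal sketch (the release name and namespace below are placeholders):

  # Add the K8ssandra Helm repo and install the chart
  helm repo add k8ssandra https://helm.k8ssandra.io/stable
  helm repo update
  helm install k8ssandra k8ssandra/k8ssandra -n k8ssandra --create-namespace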

Components

K8ssandra is composed of a number of sub-charts each representing a component in the K8ssandra stack. The default installation is focused on developer deployments with all of the features enabled and configured for running with a minimal set of resources. Many of these components may be deployed independently in a centralized fashion. Below is a list of the components in the K8ssandra stack with links to the appropriate projects.

Apache Cassandra

K8ssandra packages and deploys Apache Cassandra via the cass-operator project. Each Cassandra container has the Management API for Apache Cassandra (MAAC) and Metrics Collector for Apache Cassandra (MCAC) pre-installed and configured to come up automatically.

Stargate

Stargate provides a collection of horizontally scalable API endpoints for interacting with Cassandra databases. Developers may leverage REST and GraphQL alongside the traditional CQL interfaces. With Stargate, operations teams gain the ability to independently scale the coordination (Stargate) and data (Cassandra) layers. In some use cases, this has resulted in a lower TCO and a smaller infrastructure footprint.
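As a rough illustration of the REST surface, a request against Stargate typically looks like the following; the host, credentials, and the default auth (8081) and REST (8082) ports are assumptions here:

  # Obtain an auth token from the Stargate auth service
  curl -s -X POST http://stargate-host:8081/v1/auth \
    -H "Content-Type: application/json" \
    -d '{"username":"user","password":"pass"}'
  # Use the returned authToken to list keyspaces via the REST v2 API
  curl -s http://stargate-host:8082/v2/schemas/keyspaces \
    -H "X-Cassandra-Token: <authToken>"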

Monitoring

Monitoring includes the collection, storage, and visualization of metrics. Along with the previously mentioned MCAC, K8ssandra utilizes Prometheus and Grafana for the storage and visualization of metrics. Installation and management of these pieces is handled by the Kube Prometheus Stack Helm chart.

Repairs

The Last Pickle Reaper is used to schedule and manage repairs in Cassandra. It provides a web interface to visualize repair progress and manage activity.

Backup & Restore

Another project from The Last Pickle, Medusa, manages the backup and restore of K8ssandra clusters.

Next Steps

If you are looking to run K8ssandra in your Kubernetes environment, check out the Getting Started guide, which has follow-up details for developers and site reliability engineers.

We are always looking for contributions to the docs, Helm charts, and underlying components. Check out the code contribution guide and the docs contribution guide.

If you are a developer interested in working with the K8ssandra code, here is a guide that will give you an introduction to:

  • Important technologies and learning resources
  • Project components
  • Project processes and resources
  • Getting up and running with a basic IDE environment
  • Deploying to a local docker-based cluster environment (kind)
  • Understanding the K8ssandra project structure
  • Running unit tests
  • Troubleshooting tips

Dependencies

For information on the packaged dependencies of K8ssandra and their licenses, check out our open source report.

k8ssandra-terraform's People

Contributors

bradfordcp, chaitu6022, chrislovecnm, hadesarchitect, jdonenine


k8ssandra-terraform's Issues

Create TF for AWS

chaitu6022, can you flesh this out and add a tentative completion date?

┆Issue is synchronized with this Jiraserver Task by Unito
┆Issue Number: K8SSAND-178
┆Priority: Medium

Google Kubernetes Engine unhealthy backend services

Load balancer health checks are failing due to the wrong health check request path. These load balancer health checks are created automatically. We need to find a way to update these health check paths.
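If these are plain HTTP health checks, updating the request path manually would look roughly like this (the health check name and path are placeholders, and whether GKE later recreates the checks still needs to be verified):

  # List the automatically created health checks
  gcloud compute health-checks list
  # Update the request path on an HTTP health check
  gcloud compute health-checks update http <health-check-name> --request-path=/healthz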

┆Issue is synchronized with this Jiraserver Bug by Unito
┆Issue Number: K8SSAND-603
┆Priority: Medium

Incorrect company name in license header.

Currently, the license header in each file indicates DataStax LLC; it should say DataStax, Inc. Consider the following, copied from the DataStax Java Driver.

/*
 * Copyright DataStax, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

Azure

Covers the initial effort of Azure Terraform integration as part of the cloud-readiness work.

┆Issue is synchronized with this Jira Task by Unito
┆epic: Cloud Provider Smoke Test Automation
┆friendlyId: K8SSAND-1315
┆priority: Medium

Add CI

We should at least:

  1. Check headers
  2. Run Terraform lint
  3. Lint anything else we can

Action Items from meeting

  1. Merge two PRs
  2. Update GKE ticket and ETA on health check yaml
  3. Double-check AWS access
  4. Status on AWS and Azure

┆Issue is synchronized with this Jiraserver Bug by Unito
┆Issue Number: K8SSAND-591
┆Priority: Medium

RedHat ubi container not updating

We are doing this:

https://github.com/cockroachdb/cockroach-operator/blob/a65d8e9dcb100035010bbd8ec294529bb29837a7/Makefile#L239

Because the ubi image here:

https://github.com/cockroachdb/cockroach-operator/blob/a65d8e9dcb100035010bbd8ec294529bb29837a7/WORKSPACE#L99

does not always get pulled. We need to investigate whether we should pin it to a digest.
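Pinning the image by digest rather than a floating tag would look roughly like this (the digest value is a placeholder):

  # Floating tag: the content behind it can change between pulls
  docker pull registry.access.redhat.com/ubi8/ubi:latest
  # Digest pin: always resolves to exactly the same image
  docker pull registry.access.redhat.com/ubi8/ubi@sha256:<digest>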

┆Issue is synchronized with this Jiraserver Bug by Unito
┆Issue Number: K8SSAND-595
┆Priority: Medium

Wrong path used in GCP init.sh script

The gcp init command is failing with the following error:

% make init "provider=gcp"
bash /Users/adejanovski/projets/cassandra/adejanovski/k8ssandra-terraform/gcp/scripts/init.sh
Following variables are configured
TF_VAR_environment: dev
TF_VAR_name: k8ssandra
TF_VAR_region: us-central-1
TF_VAR_project_id: us-central-1/Users/adejanovski/projets/cassandra/adejanovski/k8ssandra-terraform/gcp/scripts/init.sh: line 34: /Users/adejanovski/projets/cassandra/adejanovski/k8ssandra-terraform/gcp/gcp/scripts/make_bucket.py: No such file or directory
make: *** [init] Error 1

It looks like the /gcp/ part of the path should be removed on the following line as it's already part of ${ROOT}:

source "${ROOT}/gcp/scripts/make_bucket.py"

┆Issue is synchronized with this Jiraserver Task by Unito
┆Issue Number: K8SSAND-478
┆Priority: Medium

[GCP] Destroy command fails

Running destroy fails with the following error:

% make destroy "provider=gcp"
bash /Users/adejanovski/projets/cassandra/adejanovski/k8ssandra-terraform/gcp/scripts/destroy.sh
Following variables are configured
TF_VAR_environment: dev
TF_VAR_name: k8ssandra
TF_VAR_region: us-central-1
TF_VAR_project_id: us-central-1
/Users/adejanovski/projets/cassandra/adejanovski/k8ssandra-terraform/gcp/scripts/destroy.sh: line 44: end: command not found
make: *** [destroy] Error 127

It looks like the following line is responsible due to a call to an end command that doesn't exist:

┆Issue is synchronized with this Jiraserver Task by Unito
┆Issue Number: K8SSAND-480
┆Priority: Medium

[GCP] Incorrect output of env variables

${TF_VAR_region} is passed twice when logging the configured env variables; the final argument should be ${TF_VAR_project_id}:

printf "This step requires to export the the following variables \nTF_VAR_environment: %s \nTF_VAR_name: %s \nTF_VAR_region: %s \nTF_VAR_project_id: %s" "${TF_VAR_environment}" "${TF_VAR_name}" "${TF_VAR_region}" "${TF_VAR_region}"
exit 1
else
printf "Following variables are configured \nTF_VAR_environment: %s \nTF_VAR_name: %s \nTF_VAR_region: %s \nTF_VAR_project_id: %s" "${TF_VAR_environment}" "${TF_VAR_name}" "${TF_VAR_region}" "${TF_VAR_region}"

┆Issue is synchronized with this Jiraserver Task by Unito
┆Issue Number: K8SSAND-477
┆Priority: Medium

[GCP] TF_* vars don't seem to be taken into account

Despite setting the required env variables using the GCP_module branch, they do not seem to be taken into account:

% env
...
...
TF_VAR_environment=dev
TF_VAR_name=k8ssandra
TF_VAR_region=us-east1
TF_VAR_project_id=community-ecosystem

The generated variables.tf file contains the following values:

variable "name" {
  description = "Name of the cluster resources"
  default     = "k8ssandra"
}

variable "environment" {
  description = "The environment of the infrastructure being built."
}

variable "region" {
  description = "The region in which to create the VPC network"
  type        = string
  default     = "us-central1"
}

variable "project_id" {
  description = "The GCP project in which the components are created."
  type        = string
  default     = "k8ssandra-testing"
}

variable "zone" {
  description = "The zone in which to create the Kubernetes cluster. Must match the region"
  type        = string
  default     = "us-central-1a"
}

Are those supposed to be set automatically based on the env variables?
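For what it's worth, Terraform does not rewrite variables.tf from the environment: TF_VAR_* variables are read at plan/apply time and override the defaults shown above. A quick way to confirm they are being picked up:

  export TF_VAR_region=us-east1
  export TF_VAR_project_id=community-ecosystem
  # The values reported in the plan should be the exported ones,
  # not the defaults from variables.tf
  terraform plan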

┆Issue is synchronized with this Jiraserver Task by Unito
┆Issue Number: K8SSAND-498
┆Priority: Medium

Unit testing master issue

This issue is a list of all of the unit tests we need

  • Actor order test
  • Decommission test verifying that PVCs are not deleted

┆Issue is synchronized with this Jiraserver Bug by Unito
┆Issue Number: K8SSAND-596
┆Priority: Medium

Create GHA CI workflow to test k8ssandra-terraform AWS module

A CI workflow should be created in GHA to test the AWS Terraform module for K8ssandra.
The workflow should be scheduled every week and would provision an EKS cluster on which the latest stable version of K8ssandra would be deployed.
Smoke tests should be implemented to verify that all the components have started successfully (a rough sketch follows the list):

  • Check that the cassdc object reaches "Ready" state
  • Check that Cassandra is accessible by running cqlsh
  • Check that Stargate is accessible through its REST API
  • Check that Reaper is accessible through its REST API
  • Check that Grafana is accessible through its Web UI
    Medusa is assumed to be in a working state if Cassandra is accessible.
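A rough sketch of what such smoke tests could look like; the release name, datacenter name, pod name, hosts, and ports below are all assumptions:

  # Wait for the CassandraDatacenter to report Ready
  kubectl wait cassandradatacenter/dc1 --for=condition=Ready --timeout=30m
  # Verify Cassandra is reachable with cqlsh from a Cassandra pod
  kubectl exec k8ssandra-dc1-default-sts-0 -c cassandra -- \
    cqlsh -u <username> -p <password> -e "DESCRIBE KEYSPACES"
  # Verify Stargate, Reaper, and Grafana answer over HTTP
  curl -sf http://<stargate-host>:8082/v2/schemas/keyspaces -H "X-Cassandra-Token: <token>"
  curl -sf http://<reaper-host>:8080/webui/
  curl -sf http://<grafana-host>:3000/login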

┆Issue is synchronized with this Jiraserver Task by Unito
┆Epic: k8ssandra-terraform integration tests
┆Issue Number: K8SSAND-574
┆Priority: Medium

Instance Sizing

An open issue for documenting GKE instance sizing

┆Issue is synchronized with this Jira Task by Unito
┆friendlyId: K8SSAND-175
┆priority: Medium

AWS

Covers the AWS Terraform integration as part of the cloud-readiness effort.

┆Issue is synchronized with this Jira Task by Unito
┆epic: Cloud Provider Smoke Test Automation
┆friendlyId: K8SSAND-1316
┆priority: Medium

Create a Makefile for Google cloud Kubernetes cluster module.

Create a Makefile for the Google Cloud Kubernetes cluster module.

make init - initialize the backend
make plan - validate and plan resources
make apply - apply the resources
make destroy - destroy the resources and tear down the environment
make lint - check linting for the files in the repository
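As a sketch, these targets would presumably wrap the standard Terraform CLI commands (exact flags and backend configuration are assumptions):

  terraform init                        # make init: initialize the backend
  terraform validate && terraform plan  # make plan: validate and plan resources
  terraform apply                       # make apply: apply the resources
  terraform destroy                     # make destroy: tear down the environment
  terraform fmt -check -recursive       # make lint: formatting check (tflint is another option)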

Error with sensitive values in outputs

╷
│ Error: Output refers to sensitive values
│ 
│   on ../modules/iam/outputs.tf line 22:
│   22: output "service_account_key" {
│ 
│ Expressions used in outputs can only refer to sensitive values if the sensitive attribute is
│ true.
╵

┆Issue is synchronized with this Jiraserver Bug by Unito
┆Issue Number: K8SSAND-626
┆Priority: Medium

Create GHA CI workflow to test k8ssandra-terraform Azure module

A CI workflow should be created in GHA to test the Azure Terraform module for K8ssandra.
The workflow should be scheduled every week and would provision an AKS cluster on which the latest stable version of K8ssandra would be deployed.
Smoke tests should be implemented to verify that all the components have started successfully:

  • Check that the cassdc object reaches "Ready" state
  • Check that Cassandra is accessible by running cqlsh
  • Check that Stargate is accessible through its REST API
  • Check that Reaper is accessible through its REST API
  • Check that Grafana is accessible through its Web UI
    Medusa is assumed to be in a working state if Cassandra is accessible.

┆Issue is synchronized with this Jiraserver Task by Unito
┆Epic: k8ssandra-terraform integration tests
┆Issue Number: K8SSAND-576
┆Priority: Medium

terraform init question

Hi Krishna (chaitu6022). I recently started over with a new GKE cluster (jsmart-k8ssandra-130) in my gcp-techpubs project. While following our own instructions at https://docs.k8ssandra.io/install/gke/, all was well until I tried the terraform init step:

...
john.smart@jsmart-rmbp15 env % terraform init
zsh: command not found: terraform
john.smart@jsmart-rmbp15 env % pwd
/Users/john.smart/github/k8ssandra-terraform/gcp/env

Note that before that, I did:

...
john.smart@jsmart-rmbp15 gcp % gcloud config set compute/region us-central1
Updated property [compute/region].

john.smart@jsmart-rmbp15 gcp % gcloud config set compute/zone us-central1-c
Updated property [compute/zone].

john.smart@jsmart-rmbp15 gcp % gcloud config set project "gcp-techpubs"
Updated property [core/project].

john.smart@jsmart-rmbp15 gcp % export TF_VAR_environment=prod
john.smart@jsmart-rmbp15 gcp % export TF_VAR_name=k8ssandra
john.smart@jsmart-rmbp15 gcp % export TF_VAR_project_id=gcp-techpubs
john.smart@jsmart-rmbp15 gcp % export TF_VAR_region=us-central1

Any idea why terraform init wouldn't work from my client? Things to check?
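One likely cause, judging from the "zsh: command not found" message, is simply that the Terraform CLI isn't installed or isn't on the PATH; something like the following would confirm and fix that on macOS (the Homebrew tap is one common install route):

  which terraform || echo "terraform not found on PATH"
  brew tap hashicorp/tap
  brew install hashicorp/tap/terraform
  terraform version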

┆Issue is synchronized with this Jiraserver Bug by Unito
┆Issue Number: K8SSAND-609
┆Priority: Medium

Backups fail with 403 errors with EKS clusters backing up to S3.

Bug Report

Describe the bug
Attempting to back up an EKS cluster to S3 shows 403 errors in the Medusa logs. I've tried both IAM roles attached to the EC2 instances and manually specifying credentials in the Medusa storage secret.

To Reproduce
Steps to reproduce the behavior:

EC2 Roles

  1. Create an EKS cluster with k8ssandra/terraform
  2. Create an empty secret for Medusa
  3. Install K8ssandra
  4. Create a backup

Manual IAM

  1. Create an EKS cluster with k8ssandra/terraform
  2. Create an IAM role with permissions to the S3 bucket generated via Terraform
  3. Create a Medusa secret with the IAM credentials
  4. Install K8ssandra
  5. Create a backup

Expected behavior
I expect the S3 bucket to contain the files generated by the backup operation.

Environment (please complete the following information):

  • Helm charts version info
$ helm ls -A
NAME          	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART          	APP VERSION
prod-k8ssandra	default  	1       	2021-04-30 15:17:09.031862215 -0400 EDT	deployed	k8ssandra-1.1.0	           
test          	default  	1       	2021-04-30 15:33:39.283343328 -0400 EDT	deployed	backup-0.26.0  	 
  • Helm charts user-supplied values

    cassandra:
    # Version of Apache Cassandra to deploy
    version: "3.11.10"

    # Configuration for the /var/lib/cassandra mount point
    cassandraLibDirVolume:
      # AWS provides this storage class on EKS clusters out of the box. Note we
      # are using `gp2` here as it has `volumeBindingMode: WaitForFirstConsumer`
      # which is important during scheduling.
      storageClass: gp2
    
      # The recommended live data size is 1 - 1.5 TB. A 2 TB volume supports this
      # much data along with room for compactions. Consider increasing this value
      # as the number of provisioned IOPs is directly related to the volume size.
      size: 2048Gi
    
    heap:
     size: 8G
     newGenSize: 31G
    
    resources:
      requests:
        cpu: 4000m
        memory: 32Gi
      limits:
        cpu: 4000m
        memory: 32Gi
    
    # This key defines the logical topology of your cluster. The rack names and
    # labels should be updated to reflect the Availability Zones where your GKE
    # cluster is deployed.
    datacenters:
    - name: dc1
      size: 3
      racks:
      - name: us-east-1a
        affinityLabels:
          topology.kubernetes.io/zone: us-east-1a
      - name: us-east-1b
        affinityLabels:
          topology.kubernetes.io/zone: us-east-1b
      - name: us-east-1c
        affinityLabels:
          topology.kubernetes.io/zone: us-east-1c
    

    stargate:
    enabled: true
    replicas: 3
    heapMB: 1024
    cpuReqMillicores: 1000
    cpuLimMillicores: 1000

    medusa:
    enabled: true
    storage: s3

    # Reference the Terraform output for the correct bucket name to use here.
    bucketName: prod-k8ssandra2-s3-bucket
    
    # The secret here must align with the value used in the previous section.
    storageSecret: prod-k8ssandra-medusa-key
    
    storage_properties:
      region: us-east-1

Medusa Secret (note: the key currently has to be named medusa_s3_credentials):

apiVersion: v1
kind: Secret
metadata:
  name: prod-k8ssandra-medusa-key
type: Opaque
stringData:
  medusa_s3_credentials: |-
    [default]
    aws_access_key_id = AK..................
    aws_secret_access_key = ......
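For reference, such a secret is commonly created from a local credentials file, roughly like this (the file and secret names follow the values above; this mirrors the manifest shown rather than replacing it):

  # medusa_s3_credentials is a local file in AWS credentials format
  kubectl create secret generic prod-k8ssandra-medusa-key \
    --from-file=medusa_s3_credentials=./medusa_s3_credentials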


  • Kubernetes version information:

Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.8-eks-96780e", GitCommit:"96780e1b30acbf0a52c38b6030d7853e575bcdf3", GitTreeState:"clean", BuildDate:"2021-03-10T21:32:29Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.21) and server (1.19) exceeds the supported minor version skew of +/-1


  • Kubernetes cluster kind:
EKS

Additional context
Medusa logs

$ kubectl logs prod-k8ssandra-dc1-us-east-1b-sts-0 -c medusa
MEDUSA_MODE = GRPC
sleeping for 0 sec
Starting Medusa gRPC service
/home/cassandra/.local/lib/python3.6/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't match a supported version!
RequestsDependencyWarning)
INFO:root:Init service
[2021-04-30 19:18:32,688] INFO: Init service
DEBUG:root:Loading storage_provider: s3
[2021-04-30 19:18:32,688] DEBUG: Loading storage_provider: s3
DEBUG:root:Reading AWS credentials from /etc/medusa-secrets/medusa_s3_credentials
[2021-04-30 19:18:32,689] DEBUG: Reading AWS credentials from /etc/medusa-secrets/medusa_s3_credentials
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): s3.amazonaws.com:443
[2021-04-30 19:18:33,061] DEBUG: Starting new HTTPS connection (1): s3.amazonaws.com:443
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "HEAD /prod-k8ssandra2-s3-bucket HTTP/1.1" 200 0
[2021-04-30 19:18:33,103] DEBUG: https://s3.amazonaws.com:443 "HEAD /prod-k8ssandra2-s3-bucket HTTP/1.1" 200 0
INFO:root:Starting server. Listening on port 50051.
[2021-04-30 19:18:33,453] INFO: Starting server. Listening on port 50051.
INFO:root:Performing backup test
[2021-04-30 19:25:45,382] INFO: Performing backup test
INFO:root:Monitoring provider is noop
[2021-04-30 19:25:45,383] INFO: Monitoring provider is noop
DEBUG:root:Loading storage_provider: s3
[2021-04-30 19:25:45,383] DEBUG: Loading storage_provider: s3
DEBUG:root:Reading AWS credentials from /etc/medusa-secrets/medusa_s3_credentials
[2021-04-30 19:25:45,383] DEBUG: Reading AWS credentials from /etc/medusa-secrets/medusa_s3_credentials
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): s3.amazonaws.com:443
[2021-04-30 19:25:45,725] DEBUG: Starting new HTTPS connection (1): s3.amazonaws.com:443
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "HEAD /prod-k8ssandra2-s3-bucket HTTP/1.1" 200 0
[2021-04-30 19:25:45,800] DEBUG: https://s3.amazonaws.com:443 "HEAD /prod-k8ssandra2-s3-bucket HTTP/1.1" 200 0
DEBUG:root:This server has systemd: False
[2021-04-30 19:25:46,152] DEBUG: This server has systemd: False
WARNING:root:is ccm : 0
[2021-04-30 19:25:46,153] WARNING: is ccm : 0
DEBUG:root:Blob prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql was not found in cache.
[2021-04-30 19:25:46,164] DEBUG: Blob prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql was not found in cache.
DEBUG:root:[Storage] Getting object prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql
[2021-04-30 19:25:46,164] DEBUG: [Storage] Getting object prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "HEAD /prod-k8ssandra2-s3-bucket HTTP/1.1" 200 0
[2021-04-30 19:25:46,171] DEBUG: https://s3.amazonaws.com:443 "HEAD /prod-k8ssandra2-s3-bucket HTTP/1.1" 200 0
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "HEAD /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql HTTP/1.1" 404 0
[2021-04-30 19:25:46,193] DEBUG: https://s3.amazonaws.com:443 "HEAD /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql HTTP/1.1" 404 0
DEBUG:root:Process psutil.Process(pid=1, name='python3', status='sleeping', started='19:18:32') was set to use only idle IO and CPU resources
[2021-04-30 19:25:46,194] DEBUG: Process psutil.Process(pid=1, name='python3', status='sleeping', started='19:18:32') was set to use only idle IO and CPU resources
INFO:root:Saving tokenmap and schema
[2021-04-30 19:25:46,194] INFO: Saving tokenmap and schema
DEBUG:cassandra.cluster:Connecting to cluster, contact points: ['10.0.2.189']; protocol version: 4
[2021-04-30 19:25:46,195] DEBUG: Connecting to cluster, contact points: ['10.0.2.189']; protocol version: 4
DEBUG:cassandra.pool:Host 10.0.2.189:9042 is now marked up
[2021-04-30 19:25:46,195] DEBUG: Host 10.0.2.189:9042 is now marked up
DEBUG:cassandra.cluster:[control connection] Opening new connection to 10.0.2.189:9042
[2021-04-30 19:25:46,196] DEBUG: [control connection] Opening new connection to 10.0.2.189:9042
DEBUG:cassandra.connection:Sending initial options message for new connection (140047656107200) to 10.0.2.189:9042
[2021-04-30 19:25:46,196] DEBUG: Sending initial options message for new connection (140047656107200) to 10.0.2.189:9042
DEBUG:cassandra.io.libevreactor:Starting libev event loop
[2021-04-30 19:25:46,196] DEBUG: Starting libev event loop
DEBUG:cassandra.connection:Received options response on new connection (140047656107200) from 10.0.2.189:9042
[2021-04-30 19:25:46,197] DEBUG: Received options response on new connection (140047656107200) from 10.0.2.189:9042
DEBUG:cassandra.connection:No available compression types supported on both ends. locally supported: odict_keys([]). remotely supported: ['snappy', 'lz4']
[2021-04-30 19:25:46,197] DEBUG: No available compression types supported on both ends. locally supported: odict_keys([]). remotely supported: ['snappy', 'lz4']
DEBUG:cassandra.connection:Sending StartupMessage on <LibevConnection(140047656107200) 10.0.2.189:9042>
[2021-04-30 19:25:46,197] DEBUG: Sending StartupMessage on <LibevConnection(140047656107200) 10.0.2.189:9042>
DEBUG:cassandra.connection:Sent StartupMessage on <LibevConnection(140047656107200) 10.0.2.189:9042>
[2021-04-30 19:25:46,198] DEBUG: Sent StartupMessage on <LibevConnection(140047656107200) 10.0.2.189:9042>
DEBUG:cassandra.connection:Got AuthenticateMessage on new connection (140047656107200) from 10.0.2.189:9042: org.apache.cassandra.auth.PasswordAuthenticator
[2021-04-30 19:25:46,198] DEBUG: Got AuthenticateMessage on new connection (140047656107200) from 10.0.2.189:9042: org.apache.cassandra.auth.PasswordAuthenticator
DEBUG:cassandra.connection:Sending SASL-based auth response on <LibevConnection(140047656107200) 10.0.2.189:9042>
[2021-04-30 19:25:46,198] DEBUG: Sending SASL-based auth response on <LibevConnection(140047656107200) 10.0.2.189:9042>
DEBUG:cassandra.connection:Connection <LibevConnection(140047656107200) 10.0.2.189:9042> successfully authenticated
[2021-04-30 19:25:46,278] DEBUG: Connection <LibevConnection(140047656107200) 10.0.2.189:9042> successfully authenticated
DEBUG:cassandra.cluster:[control connection] Established new connection <LibevConnection(140047656107200) 10.0.2.189:9042>, registering watchers and refreshing schema and topology
[2021-04-30 19:25:46,278] DEBUG: [control connection] Established new connection <LibevConnection(140047656107200) 10.0.2.189:9042>, registering watchers and refreshing schema and topology
DEBUG:cassandra.cluster:[control connection] Refreshing node list and token map using preloaded results
[2021-04-30 19:25:46,284] DEBUG: [control connection] Refreshing node list and token map using preloaded results
INFO:cassandra.policies:Using datacenter 'dc1' for DCAwareRoundRobinPolicy (via host '10.0.2.189:9042'); if incorrect, please specify a local_dc to the constructor, or limit contact points to local cluster nodes
[2021-04-30 19:25:46,284] INFO: Using datacenter 'dc1' for DCAwareRoundRobinPolicy (via host '10.0.2.189:9042'); if incorrect, please specify a local_dc to the constructor, or limit contact points to local cluster nodes
DEBUG:cassandra.cluster:[control connection] Found new host to connect to: 10.0.1.64:9042
[2021-04-30 19:25:46,284] DEBUG: [control connection] Found new host to connect to: 10.0.1.64:9042
INFO:cassandra.cluster:New Cassandra host <Host: 10.0.1.64:9042 dc1> discovered
[2021-04-30 19:25:46,284] INFO: New Cassandra host <Host: 10.0.1.64:9042 dc1> discovered
DEBUG:cassandra.cluster:Handling new host <Host: 10.0.1.64:9042 dc1> and notifying listeners
[2021-04-30 19:25:46,284] DEBUG: Handling new host <Host: 10.0.1.64:9042 dc1> and notifying listeners
DEBUG:cassandra.cluster:Done preparing queries for new host <Host: 10.0.1.64:9042 dc1>
[2021-04-30 19:25:46,284] DEBUG: Done preparing queries for new host <Host: 10.0.1.64:9042 dc1>
DEBUG:cassandra.pool:Host 10.0.1.64:9042 is now marked up
[2021-04-30 19:25:46,284] DEBUG: Host 10.0.1.64:9042 is now marked up
DEBUG:cassandra.cluster:[control connection] Found new host to connect to: 10.0.3.128:9042
[2021-04-30 19:25:46,285] DEBUG: [control connection] Found new host to connect to: 10.0.3.128:9042
INFO:cassandra.cluster:New Cassandra host <Host: 10.0.3.128:9042 dc1> discovered
[2021-04-30 19:25:46,285] INFO: New Cassandra host <Host: 10.0.3.128:9042 dc1> discovered
DEBUG:cassandra.cluster:Handling new host <Host: 10.0.3.128:9042 dc1> and notifying listeners
[2021-04-30 19:25:46,285] DEBUG: Handling new host <Host: 10.0.3.128:9042 dc1> and notifying listeners
DEBUG:cassandra.cluster:Done preparing queries for new host <Host: 10.0.3.128:9042 dc1>
[2021-04-30 19:25:46,285] DEBUG: Done preparing queries for new host <Host: 10.0.3.128:9042 dc1>
DEBUG:cassandra.pool:Host 10.0.3.128:9042 is now marked up
[2021-04-30 19:25:46,285] DEBUG: Host 10.0.3.128:9042 is now marked up
DEBUG:cassandra.cluster:[control connection] Finished fetching ring info
[2021-04-30 19:25:46,285] DEBUG: [control connection] Finished fetching ring info
DEBUG:cassandra.cluster:[control connection] Rebuilding token map due to topology changes
[2021-04-30 19:25:46,285] DEBUG: [control connection] Rebuilding token map due to topology changes
DEBUG:cassandra.cluster:Control connection created
[2021-04-30 19:25:46,297] DEBUG: Control connection created
DEBUG:cassandra.pool:Initializing connection for host 10.0.2.189:9042
[2021-04-30 19:25:46,298] DEBUG: Initializing connection for host 10.0.2.189:9042
DEBUG:cassandra.pool:Initializing connection for host 10.0.1.64:9042
[2021-04-30 19:25:46,298] DEBUG: Initializing connection for host 10.0.1.64:9042
DEBUG:cassandra.connection:Sending initial options message for new connection (140047656034992) to 10.0.2.189:9042
[2021-04-30 19:25:46,299] DEBUG: Sending initial options message for new connection (140047656034992) to 10.0.2.189:9042
DEBUG:cassandra.connection:Received options response on new connection (140047656034992) from 10.0.2.189:9042
[2021-04-30 19:25:46,300] DEBUG: Received options response on new connection (140047656034992) from 10.0.2.189:9042
DEBUG:cassandra.connection:Sending initial options message for new connection (140047616862584) to 10.0.1.64:9042
[2021-04-30 19:25:46,300] DEBUG: Sending initial options message for new connection (140047616862584) to 10.0.1.64:9042
DEBUG:cassandra.connection:No available compression types supported on both ends. locally supported: odict_keys([]). remotely supported: ['snappy', 'lz4']
[2021-04-30 19:25:46,300] DEBUG: No available compression types supported on both ends. locally supported: odict_keys([]). remotely supported: ['snappy', 'lz4']
DEBUG:cassandra.connection:Sending StartupMessage on <LibevConnection(140047656034992) 10.0.2.189:9042>
[2021-04-30 19:25:46,300] DEBUG: Sending StartupMessage on <LibevConnection(140047656034992) 10.0.2.189:9042>
DEBUG:cassandra.connection:Sent StartupMessage on <LibevConnection(140047656034992) 10.0.2.189:9042>
[2021-04-30 19:25:46,301] DEBUG: Sent StartupMessage on <LibevConnection(140047656034992) 10.0.2.189:9042>
DEBUG:cassandra.connection:Got AuthenticateMessage on new connection (140047656034992) from 10.0.2.189:9042: org.apache.cassandra.auth.PasswordAuthenticator
[2021-04-30 19:25:46,301] DEBUG: Got AuthenticateMessage on new connection (140047656034992) from 10.0.2.189:9042: org.apache.cassandra.auth.PasswordAuthenticator
DEBUG:cassandra.connection:Sending SASL-based auth response on <LibevConnection(140047656034992) 10.0.2.189:9042>
[2021-04-30 19:25:46,301] DEBUG: Sending SASL-based auth response on <LibevConnection(140047656034992) 10.0.2.189:9042>
DEBUG:cassandra.connection:Received options response on new connection (140047616862584) from 10.0.1.64:9042
[2021-04-30 19:25:46,302] DEBUG: Received options response on new connection (140047616862584) from 10.0.1.64:9042
DEBUG:cassandra.connection:No available compression types supported on both ends. locally supported: odict_keys([]). remotely supported: ['snappy', 'lz4']
[2021-04-30 19:25:46,302] DEBUG: No available compression types supported on both ends. locally supported: odict_keys([]). remotely supported: ['snappy', 'lz4']
DEBUG:cassandra.connection:Sending StartupMessage on <LibevConnection(140047616862584) 10.0.1.64:9042>
[2021-04-30 19:25:46,302] DEBUG: Sending StartupMessage on <LibevConnection(140047616862584) 10.0.1.64:9042>
DEBUG:cassandra.connection:Sent StartupMessage on <LibevConnection(140047616862584) 10.0.1.64:9042>
[2021-04-30 19:25:46,302] DEBUG: Sent StartupMessage on <LibevConnection(140047616862584) 10.0.1.64:9042>
DEBUG:cassandra.connection:Got AuthenticateMessage on new connection (140047616862584) from 10.0.1.64:9042: org.apache.cassandra.auth.PasswordAuthenticator
[2021-04-30 19:25:46,303] DEBUG: Got AuthenticateMessage on new connection (140047616862584) from 10.0.1.64:9042: org.apache.cassandra.auth.PasswordAuthenticator
DEBUG:cassandra.connection:Sending SASL-based auth response on <LibevConnection(140047616862584) 10.0.1.64:9042>
[2021-04-30 19:25:46,304] DEBUG: Sending SASL-based auth response on <LibevConnection(140047616862584) 10.0.1.64:9042>
DEBUG:cassandra.connection:Connection <LibevConnection(140047656034992) 10.0.2.189:9042> successfully authenticated
[2021-04-30 19:25:46,382] DEBUG: Connection <LibevConnection(140047656034992) 10.0.2.189:9042> successfully authenticated
DEBUG:cassandra.pool:Finished initializing connection for host 10.0.2.189:9042
[2021-04-30 19:25:46,382] DEBUG: Finished initializing connection for host 10.0.2.189:9042
DEBUG:cassandra.cluster:Added pool for host 10.0.2.189:9042 to session
[2021-04-30 19:25:46,382] DEBUG: Added pool for host 10.0.2.189:9042 to session
DEBUG:cassandra.pool:Initializing connection for host 10.0.3.128:9042
[2021-04-30 19:25:46,382] DEBUG: Initializing connection for host 10.0.3.128:9042
DEBUG:cassandra.connection:Sending initial options message for new connection (140047616863984) to 10.0.3.128:9042
[2021-04-30 19:25:46,384] DEBUG: Sending initial options message for new connection (140047616863984) to 10.0.3.128:9042
DEBUG:cassandra.connection:Received options response on new connection (140047616863984) from 10.0.3.128:9042
[2021-04-30 19:25:46,385] DEBUG: Received options response on new connection (140047616863984) from 10.0.3.128:9042
DEBUG:cassandra.connection:No available compression types supported on both ends. locally supported: odict_keys([]). remotely supported: ['snappy', 'lz4']
[2021-04-30 19:25:46,385] DEBUG: No available compression types supported on both ends. locally supported: odict_keys([]). remotely supported: ['snappy', 'lz4']
DEBUG:cassandra.connection:Sending StartupMessage on <LibevConnection(140047616863984) 10.0.3.128:9042>
[2021-04-30 19:25:46,385] DEBUG: Sending StartupMessage on <LibevConnection(140047616863984) 10.0.3.128:9042>
DEBUG:cassandra.connection:Sent StartupMessage on <LibevConnection(140047616863984) 10.0.3.128:9042>
[2021-04-30 19:25:46,385] DEBUG: Sent StartupMessage on <LibevConnection(140047616863984) 10.0.3.128:9042>
DEBUG:cassandra.connection:Got AuthenticateMessage on new connection (140047616863984) from 10.0.3.128:9042: org.apache.cassandra.auth.PasswordAuthenticator
[2021-04-30 19:25:46,387] DEBUG: Got AuthenticateMessage on new connection (140047616863984) from 10.0.3.128:9042: org.apache.cassandra.auth.PasswordAuthenticator
DEBUG:cassandra.connection:Sending SASL-based auth response on <LibevConnection(140047616863984) 10.0.3.128:9042>
[2021-04-30 19:25:46,387] DEBUG: Sending SASL-based auth response on <LibevConnection(140047616863984) 10.0.3.128:9042>
DEBUG:cassandra.connection:Connection <LibevConnection(140047616862584) 10.0.1.64:9042> successfully authenticated
[2021-04-30 19:25:46,389] DEBUG: Connection <LibevConnection(140047616862584) 10.0.1.64:9042> successfully authenticated
DEBUG:cassandra.pool:Finished initializing connection for host 10.0.1.64:9042
[2021-04-30 19:25:46,389] DEBUG: Finished initializing connection for host 10.0.1.64:9042
DEBUG:cassandra.cluster:Added pool for host 10.0.1.64:9042 to session
[2021-04-30 19:25:46,389] DEBUG: Added pool for host 10.0.1.64:9042 to session
DEBUG:cassandra.connection:Connection <LibevConnection(140047616863984) 10.0.3.128:9042> successfully authenticated
[2021-04-30 19:25:46,471] DEBUG: Connection <LibevConnection(140047616863984) 10.0.3.128:9042> successfully authenticated
DEBUG:cassandra.pool:Finished initializing connection for host 10.0.3.128:9042
[2021-04-30 19:25:46,471] DEBUG: Finished initializing connection for host 10.0.3.128:9042
DEBUG:cassandra.cluster:Added pool for host 10.0.3.128:9042 to session
[2021-04-30 19:25:46,471] DEBUG: Added pool for host 10.0.3.128:9042 to session
DEBUG:cassandra.cluster:Not starting MonitorReporter thread for Insights; not supported by server version 3.11.10 on ControlConnection host 10.0.2.189:9042
[2021-04-30 19:25:46,471] DEBUG: Not starting MonitorReporter thread for Insights; not supported by server version 3.11.10 on ControlConnection host 10.0.2.189:9042
DEBUG:cassandra.cluster:Started Session with client_id 2f1315fa-6d59-4882-a15a-f870fccdb75d and session_id dce52b2e-32d1-4cf2-acd7-9c504289e01e
[2021-04-30 19:25:46,471] DEBUG: Started Session with client_id 2f1315fa-6d59-4882-a15a-f870fccdb75d and session_id dce52b2e-32d1-4cf2-acd7-9c504289e01e
DEBUG:root:Checking datacenter...
[2021-04-30 19:25:46,474] DEBUG: Checking datacenter...
DEBUG:root:Resolved 10.0.2.189 to prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local
[2021-04-30 19:25:46,475] DEBUG: Resolved 10.0.2.189 to prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local
DEBUG:root:Checking host 10.0.2.189 against 10.0.2.189/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local
[2021-04-30 19:25:46,475] DEBUG: Checking host 10.0.2.189 against 10.0.2.189/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local
DEBUG:root:Resolved 10.0.1.64 to 10-0-1-64.prod-k8ssandra-dc1-all-pods-service.default.svc.cluster.local
[2021-04-30 19:25:46,477] DEBUG: Resolved 10.0.1.64 to 10-0-1-64.prod-k8ssandra-dc1-all-pods-service.default.svc.cluster.local
DEBUG:root:Resolved 10.0.2.189 to prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local
[2021-04-30 19:25:46,477] DEBUG: Resolved 10.0.2.189 to prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local
DEBUG:root:Resolved 10.0.3.128 to 10-0-3-128.prod-k8ssandra-dc1-all-pods-service.default.svc.cluster.local
[2021-04-30 19:25:46,477] DEBUG: Resolved 10.0.3.128 to 10-0-3-128.prod-k8ssandra-dc1-all-pods-service.default.svc.cluster.local
DEBUG:cassandra.io.libevreactor:Closing connection (140047656034992) to 10.0.2.189:9042
[2021-04-30 19:25:46,477] DEBUG: Closing connection (140047656034992) to 10.0.2.189:9042
DEBUG:cassandra.io.libevreactor:Closed socket to 10.0.2.189:9042
[2021-04-30 19:25:46,477] DEBUG: Closed socket to 10.0.2.189:9042
DEBUG:cassandra.io.libevreactor:Closing connection (140047616862584) to 10.0.1.64:9042
[2021-04-30 19:25:46,477] DEBUG: Closing connection (140047616862584) to 10.0.1.64:9042
DEBUG:cassandra.io.libevreactor:Closed socket to 10.0.1.64:9042
[2021-04-30 19:25:46,478] DEBUG: Closed socket to 10.0.1.64:9042
DEBUG:cassandra.io.libevreactor:Closing connection (140047616863984) to 10.0.3.128:9042
[2021-04-30 19:25:46,478] DEBUG: Closing connection (140047616863984) to 10.0.3.128:9042
DEBUG:cassandra.io.libevreactor:Closed socket to 10.0.3.128:9042
[2021-04-30 19:25:46,478] DEBUG: Closed socket to 10.0.3.128:9042
DEBUG:cassandra.cluster:Shutting down Cluster Scheduler
[2021-04-30 19:25:46,478] DEBUG: Shutting down Cluster Scheduler
DEBUG:cassandra.cluster:Shutting down control connection
[2021-04-30 19:25:46,478] DEBUG: Shutting down control connection
DEBUG:cassandra.io.libevreactor:Closing connection (140047656107200) to 10.0.2.189:9042
[2021-04-30 19:25:46,478] DEBUG: Closing connection (140047656107200) to 10.0.2.189:9042
DEBUG:cassandra.io.libevreactor:Closed socket to 10.0.2.189:9042
[2021-04-30 19:25:46,478] DEBUG: Closed socket to 10.0.2.189:9042
DEBUG:cassandra.io.libevreactor:All Connections currently closed, event loop ended
[2021-04-30 19:25:46,478] DEBUG: All Connections currently closed, event loop ended
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
[2021-04-30 19:25:46,592] DEBUG: https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
DEBUG:urllib3.connectionpool:Resetting dropped connection: s3.amazonaws.com
[2021-04-30 19:26:06,609] DEBUG: Resetting dropped connection: s3.amazonaws.com
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
[2021-04-30 19:26:06,665] DEBUG: https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
DEBUG:urllib3.connectionpool:Resetting dropped connection: s3.amazonaws.com
[2021-04-30 19:26:46,705] DEBUG: Resetting dropped connection: s3.amazonaws.com
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
[2021-04-30 19:26:46,762] DEBUG: https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
DEBUG:urllib3.connectionpool:Resetting dropped connection: s3.amazonaws.com
[2021-04-30 19:28:06,844] DEBUG: Resetting dropped connection: s3.amazonaws.com
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
[2021-04-30 19:28:06,898] DEBUG: https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
DEBUG:urllib3.connectionpool:Resetting dropped connection: s3.amazonaws.com
[2021-04-30 19:30:06,991] DEBUG: Resetting dropped connection: s3.amazonaws.com
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
[2021-04-30 19:30:07,050] DEBUG: https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
DEBUG:urllib3.connectionpool:Resetting dropped connection: s3.amazonaws.com
[2021-04-30 19:32:07,136] DEBUG: Resetting dropped connection: s3.amazonaws.com
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
[2021-04-30 19:32:07,212] DEBUG: https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
INFO:root:Performing backup test
[2021-04-30 19:33:40,258] INFO: Performing backup test
INFO:root:Monitoring provider is noop
[2021-04-30 19:33:40,258] INFO: Monitoring provider is noop
DEBUG:root:Loading storage_provider: s3
[2021-04-30 19:33:40,258] DEBUG: Loading storage_provider: s3
DEBUG:root:Reading AWS credentials from /etc/medusa-secrets/medusa_s3_credentials
[2021-04-30 19:33:40,259] DEBUG: Reading AWS credentials from /etc/medusa-secrets/medusa_s3_credentials
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): s3.amazonaws.com:443
[2021-04-30 19:33:40,600] DEBUG: Starting new HTTPS connection (1): s3.amazonaws.com:443
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "HEAD /prod-k8ssandra2-s3-bucket HTTP/1.1" 200 0
[2021-04-30 19:33:40,693] DEBUG: https://s3.amazonaws.com:443 "HEAD /prod-k8ssandra2-s3-bucket HTTP/1.1" 200 0
DEBUG:root:This server has systemd: False
[2021-04-30 19:33:41,028] DEBUG: This server has systemd: False
WARNING:root:is ccm : 0
[2021-04-30 19:33:41,029] WARNING: is ccm : 0
DEBUG:root:Blob prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql was not found in cache.
[2021-04-30 19:33:41,040] DEBUG: Blob prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql was not found in cache.
DEBUG:root:[Storage] Getting object prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql
[2021-04-30 19:33:41,040] DEBUG: [Storage] Getting object prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "HEAD /prod-k8ssandra2-s3-bucket HTTP/1.1" 200 0
[2021-04-30 19:33:41,046] DEBUG: https://s3.amazonaws.com:443 "HEAD /prod-k8ssandra2-s3-bucket HTTP/1.1" 200 0
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "HEAD /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql HTTP/1.1" 404 0
[2021-04-30 19:33:41,087] DEBUG: https://s3.amazonaws.com:443 "HEAD /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql HTTP/1.1" 404 0
DEBUG:root:Process psutil.Process(pid=1, name='python3', status='sleeping', started='19:18:32') was set to use only idle IO and CPU resources
[2021-04-30 19:33:41,088] DEBUG: Process psutil.Process(pid=1, name='python3', status='sleeping', started='19:18:32') was set to use only idle IO and CPU resources
INFO:root:Saving tokenmap and schema
[2021-04-30 19:33:41,088] INFO: Saving tokenmap and schema
DEBUG:cassandra.cluster:Connecting to cluster, contact points: ['10.0.2.189']; protocol version: 4
[2021-04-30 19:33:41,088] DEBUG: Connecting to cluster, contact points: ['10.0.2.189']; protocol version: 4
DEBUG:cassandra.pool:Host 10.0.2.189:9042 is now marked up
[2021-04-30 19:33:41,089] DEBUG: Host 10.0.2.189:9042 is now marked up
DEBUG:cassandra.cluster:[control connection] Opening new connection to 10.0.2.189:9042
[2021-04-30 19:33:41,089] DEBUG: [control connection] Opening new connection to 10.0.2.189:9042
DEBUG:cassandra.connection:Sending initial options message for new connection (140047616713896) to 10.0.2.189:9042
[2021-04-30 19:33:41,089] DEBUG: Sending initial options message for new connection (140047616713896) to 10.0.2.189:9042
DEBUG:cassandra.io.libevreactor:Starting libev event loop
[2021-04-30 19:33:41,089] DEBUG: Starting libev event loop
DEBUG:cassandra.connection:Received options response on new connection (140047616713896) from 10.0.2.189:9042
[2021-04-30 19:33:41,090] DEBUG: Received options response on new connection (140047616713896) from 10.0.2.189:9042
DEBUG:cassandra.connection:No available compression types supported on both ends. locally supported: odict_keys([]). remotely supported: ['snappy', 'lz4']
[2021-04-30 19:33:41,090] DEBUG: No available compression types supported on both ends. locally supported: odict_keys([]). remotely supported: ['snappy', 'lz4']
DEBUG:cassandra.connection:Sending StartupMessage on <LibevConnection(140047616713896) 10.0.2.189:9042>
[2021-04-30 19:33:41,090] DEBUG: Sending StartupMessage on <LibevConnection(140047616713896) 10.0.2.189:9042>
DEBUG:cassandra.connection:Sent StartupMessage on <LibevConnection(140047616713896) 10.0.2.189:9042>
[2021-04-30 19:33:41,090] DEBUG: Sent StartupMessage on <LibevConnection(140047616713896) 10.0.2.189:9042>
DEBUG:cassandra.connection:Got AuthenticateMessage on new connection (140047616713896) from 10.0.2.189:9042: org.apache.cassandra.auth.PasswordAuthenticator
[2021-04-30 19:33:41,091] DEBUG: Got AuthenticateMessage on new connection (140047616713896) from 10.0.2.189:9042: org.apache.cassandra.auth.PasswordAuthenticator
DEBUG:cassandra.connection:Sending SASL-based auth response on <LibevConnection(140047616713896) 10.0.2.189:9042>
[2021-04-30 19:33:41,091] DEBUG: Sending SASL-based auth response on <LibevConnection(140047616713896) 10.0.2.189:9042>
DEBUG:cassandra.connection:Connection <LibevConnection(140047616713896) 10.0.2.189:9042> successfully authenticated
[2021-04-30 19:33:41,171] DEBUG: Connection <LibevConnection(140047616713896) 10.0.2.189:9042> successfully authenticated
DEBUG:cassandra.cluster:[control connection] Established new connection <LibevConnection(140047616713896) 10.0.2.189:9042>, registering watchers and refreshing schema and topology
[2021-04-30 19:33:41,171] DEBUG: [control connection] Established new connection <LibevConnection(140047616713896) 10.0.2.189:9042>, registering watchers and refreshing schema and topology
DEBUG:cassandra.cluster:[control connection] Refreshing node list and token map using preloaded results
[2021-04-30 19:33:41,176] DEBUG: [control connection] Refreshing node list and token map using preloaded results
INFO:cassandra.policies:Using datacenter 'dc1' for DCAwareRoundRobinPolicy (via host '10.0.2.189:9042'); if incorrect, please specify a local_dc to the constructor, or limit contact points to local cluster nodes
[2021-04-30 19:33:41,176] INFO: Using datacenter 'dc1' for DCAwareRoundRobinPolicy (via host '10.0.2.189:9042'); if incorrect, please specify a local_dc to the constructor, or limit contact points to local cluster nodes
DEBUG:cassandra.cluster:[control connection] Found new host to connect to: 10.0.1.64:9042
[2021-04-30 19:33:41,177] DEBUG: [control connection] Found new host to connect to: 10.0.1.64:9042
INFO:cassandra.cluster:New Cassandra host <Host: 10.0.1.64:9042 dc1> discovered
[2021-04-30 19:33:41,177] INFO: New Cassandra host <Host: 10.0.1.64:9042 dc1> discovered
DEBUG:cassandra.cluster:Handling new host <Host: 10.0.1.64:9042 dc1> and notifying listeners
[2021-04-30 19:33:41,177] DEBUG: Handling new host <Host: 10.0.1.64:9042 dc1> and notifying listeners
DEBUG:cassandra.cluster:Done preparing queries for new host <Host: 10.0.1.64:9042 dc1>
[2021-04-30 19:33:41,177] DEBUG: Done preparing queries for new host <Host: 10.0.1.64:9042 dc1>
DEBUG:cassandra.pool:Host 10.0.1.64:9042 is now marked up
[2021-04-30 19:33:41,177] DEBUG: Host 10.0.1.64:9042 is now marked up
DEBUG:cassandra.cluster:[control connection] Found new host to connect to: 10.0.3.128:9042
[2021-04-30 19:33:41,177] DEBUG: [control connection] Found new host to connect to: 10.0.3.128:9042
INFO:cassandra.cluster:New Cassandra host <Host: 10.0.3.128:9042 dc1> discovered
[2021-04-30 19:33:41,177] INFO: New Cassandra host <Host: 10.0.3.128:9042 dc1> discovered
DEBUG:cassandra.cluster:Handling new host <Host: 10.0.3.128:9042 dc1> and notifying listeners
[2021-04-30 19:33:41,177] DEBUG: Handling new host <Host: 10.0.3.128:9042 dc1> and notifying listeners
DEBUG:cassandra.cluster:Done preparing queries for new host <Host: 10.0.3.128:9042 dc1>
[2021-04-30 19:33:41,177] DEBUG: Done preparing queries for new host <Host: 10.0.3.128:9042 dc1>
DEBUG:cassandra.pool:Host 10.0.3.128:9042 is now marked up
[2021-04-30 19:33:41,177] DEBUG: Host 10.0.3.128:9042 is now marked up
DEBUG:cassandra.cluster:[control connection] Finished fetching ring info
[2021-04-30 19:33:41,177] DEBUG: [control connection] Finished fetching ring info
DEBUG:cassandra.cluster:[control connection] Rebuilding token map due to topology changes
[2021-04-30 19:33:41,177] DEBUG: [control connection] Rebuilding token map due to topology changes
DEBUG:cassandra.cluster:Control connection created
[2021-04-30 19:33:41,193] DEBUG: Control connection created
DEBUG:cassandra.pool:Initializing connection for host 10.0.2.189:9042
[2021-04-30 19:33:41,194] DEBUG: Initializing connection for host 10.0.2.189:9042
DEBUG:cassandra.pool:Initializing connection for host 10.0.1.64:9042
[2021-04-30 19:33:41,194] DEBUG: Initializing connection for host 10.0.1.64:9042
DEBUG:cassandra.connection:Sending initial options message for new connection (140047616714960) to 10.0.2.189:9042
[2021-04-30 19:33:41,195] DEBUG: Sending initial options message for new connection (140047616714960) to 10.0.2.189:9042
DEBUG:cassandra.connection:Received options response on new connection (140047616714960) from 10.0.2.189:9042
[2021-04-30 19:33:41,196] DEBUG: Received options response on new connection (140047616714960) from 10.0.2.189:9042
DEBUG:cassandra.connection:No available compression types supported on both ends. locally supported: odict_keys([]). remotely supported: ['snappy', 'lz4']
[2021-04-30 19:33:41,196] DEBUG: No available compression types supported on both ends. locally supported: odict_keys([]). remotely supported: ['snappy', 'lz4']
DEBUG:cassandra.connection:Sending StartupMessage on <LibevConnection(140047616714960) 10.0.2.189:9042>
[2021-04-30 19:33:41,196] DEBUG: Sending StartupMessage on <LibevConnection(140047616714960) 10.0.2.189:9042>
DEBUG:cassandra.connection:Sent StartupMessage on <LibevConnection(140047616714960) 10.0.2.189:9042>
[2021-04-30 19:33:41,196] DEBUG: Sent StartupMessage on <LibevConnection(140047616714960) 10.0.2.189:9042>
DEBUG:cassandra.connection:Sending initial options message for new connection (140047615680584) to 10.0.1.64:9042
[2021-04-30 19:33:41,196] DEBUG: Sending initial options message for new connection (140047615680584) to 10.0.1.64:9042
DEBUG:cassandra.connection:Got AuthenticateMessage on new connection (140047616714960) from 10.0.2.189:9042: org.apache.cassandra.auth.PasswordAuthenticator
[2021-04-30 19:33:41,197] DEBUG: Got AuthenticateMessage on new connection (140047616714960) from 10.0.2.189:9042: org.apache.cassandra.auth.PasswordAuthenticator
DEBUG:cassandra.connection:Sending SASL-based auth response on <LibevConnection(140047616714960) 10.0.2.189:9042>
[2021-04-30 19:33:41,197] DEBUG: Sending SASL-based auth response on <LibevConnection(140047616714960) 10.0.2.189:9042>
DEBUG:cassandra.connection:Received options response on new connection (140047615680584) from 10.0.1.64:9042
[2021-04-30 19:33:41,198] DEBUG: Received options response on new connection (140047615680584) from 10.0.1.64:9042
DEBUG:cassandra.connection:No available compression types supported on both ends. locally supported: odict_keys([]). remotely supported: ['snappy', 'lz4']
[2021-04-30 19:33:41,199] DEBUG: No available compression types supported on both ends. locally supported: odict_keys([]). remotely supported: ['snappy', 'lz4']
DEBUG:cassandra.connection:Sending StartupMessage on <LibevConnection(140047615680584) 10.0.1.64:9042>
[2021-04-30 19:33:41,199] DEBUG: Sending StartupMessage on <LibevConnection(140047615680584) 10.0.1.64:9042>
DEBUG:cassandra.connection:Sent StartupMessage on <LibevConnection(140047615680584) 10.0.1.64:9042>
[2021-04-30 19:33:41,199] DEBUG: Sent StartupMessage on <LibevConnection(140047615680584) 10.0.1.64:9042>
DEBUG:cassandra.connection:Got AuthenticateMessage on new connection (140047615680584) from 10.0.1.64:9042: org.apache.cassandra.auth.PasswordAuthenticator
[2021-04-30 19:33:41,200] DEBUG: Got AuthenticateMessage on new connection (140047615680584) from 10.0.1.64:9042: org.apache.cassandra.auth.PasswordAuthenticator
DEBUG:cassandra.connection:Sending SASL-based auth response on <LibevConnection(140047615680584) 10.0.1.64:9042>
[2021-04-30 19:33:41,200] DEBUG: Sending SASL-based auth response on <LibevConnection(140047615680584) 10.0.1.64:9042>
DEBUG:cassandra.connection:Connection <LibevConnection(140047616714960) 10.0.2.189:9042> successfully authenticated
[2021-04-30 19:33:41,277] DEBUG: Connection <LibevConnection(140047616714960) 10.0.2.189:9042> successfully authenticated
DEBUG:cassandra.pool:Finished initializing connection for host 10.0.2.189:9042
[2021-04-30 19:33:41,277] DEBUG: Finished initializing connection for host 10.0.2.189:9042
DEBUG:cassandra.cluster:Added pool for host 10.0.2.189:9042 to session
[2021-04-30 19:33:41,277] DEBUG: Added pool for host 10.0.2.189:9042 to session
DEBUG:cassandra.pool:Initializing connection for host 10.0.3.128:9042
[2021-04-30 19:33:41,277] DEBUG: Initializing connection for host 10.0.3.128:9042
DEBUG:cassandra.cluster:Not starting MonitorReporter thread for Insights; not supported by server version 3.11.10 on ControlConnection host 10.0.2.189:9042
[2021-04-30 19:33:41,277] DEBUG: Not starting MonitorReporter thread for Insights; not supported by server version 3.11.10 on ControlConnection host 10.0.2.189:9042
DEBUG:cassandra.cluster:Started Session with client_id 360cac2f-94b6-4450-8ce8-a3b91ff5f330 and session_id f22a4a5f-bef4-4d4c-b693-885ac3662515
[2021-04-30 19:33:41,278] DEBUG: Started Session with client_id 360cac2f-94b6-4450-8ce8-a3b91ff5f330 and session_id f22a4a5f-bef4-4d4c-b693-885ac3662515
DEBUG:root:Checking datacenter...
[2021-04-30 19:33:41,281] DEBUG: Checking datacenter...
DEBUG:cassandra.connection:Sending initial options message for new connection (140047615677608) to 10.0.3.128:9042
[2021-04-30 19:33:41,281] DEBUG: Sending initial options message for new connection (140047615677608) to 10.0.3.128:9042
DEBUG:root:Resolved 10.0.2.189 to prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local
[2021-04-30 19:33:41,282] DEBUG: Resolved 10.0.2.189 to prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local
DEBUG:root:Checking host 10.0.2.189 against 10.0.2.189/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local
[2021-04-30 19:33:41,282] DEBUG: Checking host 10.0.2.189 against 10.0.2.189/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local
DEBUG:cassandra.connection:Connection <LibevConnection(140047615680584) 10.0.1.64:9042> successfully authenticated
[2021-04-30 19:33:41,282] DEBUG: Connection <LibevConnection(140047615680584) 10.0.1.64:9042> successfully authenticated
DEBUG:cassandra.pool:Finished initializing connection for host 10.0.1.64:9042
[2021-04-30 19:33:41,283] DEBUG: Finished initializing connection for host 10.0.1.64:9042
DEBUG:cassandra.connection:Received options response on new connection (140047615677608) from 10.0.3.128:9042
[2021-04-30 19:33:41,283] DEBUG: Received options response on new connection (140047615677608) from 10.0.3.128:9042
DEBUG:cassandra.cluster:Added pool for host 10.0.1.64:9042 to session
[2021-04-30 19:33:41,283] DEBUG: Added pool for host 10.0.1.64:9042 to session
DEBUG:cassandra.connection:No available compression types supported on both ends. locally supported: odict_keys([]). remotely supported: ['snappy', 'lz4']
[2021-04-30 19:33:41,283] DEBUG: No available compression types supported on both ends. locally supported: odict_keys([]). remotely supported: ['snappy', 'lz4']
DEBUG:cassandra.connection:Sending StartupMessage on <LibevConnection(140047615677608) 10.0.3.128:9042>
[2021-04-30 19:33:41,283] DEBUG: Sending StartupMessage on <LibevConnection(140047615677608) 10.0.3.128:9042>
DEBUG:cassandra.connection:Sent StartupMessage on <LibevConnection(140047615677608) 10.0.3.128:9042>
[2021-04-30 19:33:41,283] DEBUG: Sent StartupMessage on <LibevConnection(140047615677608) 10.0.3.128:9042>
DEBUG:root:Resolved 10.0.1.64 to 10-0-1-64.prod-k8ssandra-dc1-all-pods-service.default.svc.cluster.local
[2021-04-30 19:33:41,283] DEBUG: Resolved 10.0.1.64 to 10-0-1-64.prod-k8ssandra-dc1-all-pods-service.default.svc.cluster.local
DEBUG:root:Resolved 10.0.2.189 to prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local
[2021-04-30 19:33:41,284] DEBUG: Resolved 10.0.2.189 to prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local
DEBUG:root:Resolved 10.0.3.128 to 10-0-3-128.prod-k8ssandra-dc1-all-pods-service.default.svc.cluster.local
[2021-04-30 19:33:41,284] DEBUG: Resolved 10.0.3.128 to 10-0-3-128.prod-k8ssandra-dc1-all-pods-service.default.svc.cluster.local
DEBUG:cassandra.connection:Got AuthenticateMessage on new connection (140047615677608) from 10.0.3.128:9042: org.apache.cassandra.auth.PasswordAuthenticator
[2021-04-30 19:33:41,284] DEBUG: Got AuthenticateMessage on new connection (140047615677608) from 10.0.3.128:9042: org.apache.cassandra.auth.PasswordAuthenticator
DEBUG:cassandra.connection:Sending SASL-based auth response on <LibevConnection(140047615677608) 10.0.3.128:9042>
[2021-04-30 19:33:41,284] DEBUG: Sending SASL-based auth response on <LibevConnection(140047615677608) 10.0.3.128:9042>
DEBUG:cassandra.connection:Connection <LibevConnection(140047615677608) 10.0.3.128:9042> successfully authenticated
[2021-04-30 19:33:41,365] DEBUG: Connection <LibevConnection(140047615677608) 10.0.3.128:9042> successfully authenticated
DEBUG:cassandra.pool:Finished initializing connection for host 10.0.3.128:9042
[2021-04-30 19:33:41,365] DEBUG: Finished initializing connection for host 10.0.3.128:9042
DEBUG:cassandra.cluster:Added pool for host 10.0.3.128:9042 to session
[2021-04-30 19:33:41,365] DEBUG: Added pool for host 10.0.3.128:9042 to session
DEBUG:cassandra.io.libevreactor:Closing connection (140047616714960) to 10.0.2.189:9042
[2021-04-30 19:33:41,365] DEBUG: Closing connection (140047616714960) to 10.0.2.189:9042
DEBUG:cassandra.io.libevreactor:Closed socket to 10.0.2.189:9042
[2021-04-30 19:33:41,366] DEBUG: Closed socket to 10.0.2.189:9042
DEBUG:cassandra.io.libevreactor:Closing connection (140047615680584) to 10.0.1.64:9042
[2021-04-30 19:33:41,366] DEBUG: Closing connection (140047615680584) to 10.0.1.64:9042
DEBUG:cassandra.io.libevreactor:Closed socket to 10.0.1.64:9042
[2021-04-30 19:33:41,366] DEBUG: Closed socket to 10.0.1.64:9042
DEBUG:cassandra.io.libevreactor:Closing connection (140047615677608) to 10.0.3.128:9042
[2021-04-30 19:33:41,366] DEBUG: Closing connection (140047615677608) to 10.0.3.128:9042
DEBUG:cassandra.io.libevreactor:Closed socket to 10.0.3.128:9042
[2021-04-30 19:33:41,366] DEBUG: Closed socket to 10.0.3.128:9042
DEBUG:cassandra.cluster:Shutting down Cluster Scheduler
[2021-04-30 19:33:41,366] DEBUG: Shutting down Cluster Scheduler
DEBUG:cassandra.cluster:Shutting down control connection
[2021-04-30 19:33:41,366] DEBUG: Shutting down control connection
DEBUG:cassandra.io.libevreactor:Closing connection (140047616713896) to 10.0.2.189:9042
[2021-04-30 19:33:41,366] DEBUG: Closing connection (140047616713896) to 10.0.2.189:9042
DEBUG:cassandra.io.libevreactor:All Connections currently closed, event loop ended
[2021-04-30 19:33:41,366] DEBUG: All Connections currently closed, event loop ended
DEBUG:cassandra.io.libevreactor:Closed socket to 10.0.2.189:9042
[2021-04-30 19:33:41,366] DEBUG: Closed socket to 10.0.2.189:9042
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
[2021-04-30 19:33:41,429] DEBUG: https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
DEBUG:urllib3.connectionpool:Resetting dropped connection: s3.amazonaws.com
[2021-04-30 19:34:01,437] DEBUG: Resetting dropped connection: s3.amazonaws.com
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
[2021-04-30 19:34:01,494] DEBUG: https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
DEBUG:urllib3.connectionpool:Resetting dropped connection: s3.amazonaws.com
[2021-04-30 19:34:07,296] DEBUG: Resetting dropped connection: s3.amazonaws.com
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
[2021-04-30 19:34:07,350] DEBUG: https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
ERROR:root:backup failed
Traceback (most recent call last):
File "/home/cassandra/medusa/service/grpc/server.py", line 51, in Backup
medusa.backup_node.main(self.config, request.name, None, False, "differential")
File "/home/cassandra/medusa/backup_node.py", line 227, in main
config
File "/home/cassandra/medusa/utils.py", line 35, in handle_exception
raise exception
File "/home/cassandra/medusa/backup_node.py", line 190, in main
node_backup.schema = schema
File "/home/cassandra/medusa/storage/node_backup.py", line 137, in schema
self._storage.storage_driver.upload_blob_from_string(self.schema_path, schema)
File "/home/cassandra/.local/lib/python3.6/site-packages/retrying.py", line 49, in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
File "/home/cassandra/.local/lib/python3.6/site-packages/retrying.py", line 212, in call
raise attempt.get()
File "/home/cassandra/.local/lib/python3.6/site-packages/retrying.py", line 247, in get
six.reraise(self.value[0], self.value[1], self.value[2])
File "/home/cassandra/.local/lib/python3.6/site-packages/six.py", line 703, in reraise
raise value
File "/home/cassandra/.local/lib/python3.6/site-packages/retrying.py", line 200, in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
File "/home/cassandra/medusa/storage/abstract_storage.py", line 65, in upload_blob_from_string
object_name=str(path)
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 753, in upload_object_via_stream
storage_class=ex_storage_class)
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 989, in _put_object_multipart
headers=headers)
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 573, in _initiate_multipart
headers=headers, params=params)
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/common/base.py", line 655, in request
response = responseCls(**kwargs)
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/common/base.py", line 166, in init
message=self.parse_error(),
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 138, in parse_error
raise InvalidCredsError(self.body)
libcloud.common.types.InvalidCredsError: '<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>C8QKKGB0B3090Z1D</RequestId><HostId>s3R0e+Rgl+U6j4/sPtdpSjP7nHgb46O50ltwWDzsFmOj8qJjxzOB9Nb/nKLsIThm4K2gbK+8OfI=</HostId></Error>'
[2021-04-30 19:34:07,350] ERROR: backup failed
Traceback (most recent call last):
File "/home/cassandra/medusa/service/grpc/server.py", line 51, in Backup
medusa.backup_node.main(self.config, request.name, None, False, "differential")
File "/home/cassandra/medusa/backup_node.py", line 227, in main
config
File "/home/cassandra/medusa/utils.py", line 35, in handle_exception
raise exception
File "/home/cassandra/medusa/backup_node.py", line 190, in main
node_backup.schema = schema
File "/home/cassandra/medusa/storage/node_backup.py", line 137, in schema
self._storage.storage_driver.upload_blob_from_string(self.schema_path, schema)
File "/home/cassandra/.local/lib/python3.6/site-packages/retrying.py", line 49, in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
File "/home/cassandra/.local/lib/python3.6/site-packages/retrying.py", line 212, in call
raise attempt.get()
File "/home/cassandra/.local/lib/python3.6/site-packages/retrying.py", line 247, in get
six.reraise(self.value[0], self.value[1], self.value[2])
File "/home/cassandra/.local/lib/python3.6/site-packages/six.py", line 703, in reraise
raise value
File "/home/cassandra/.local/lib/python3.6/site-packages/retrying.py", line 200, in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
File "/home/cassandra/medusa/storage/abstract_storage.py", line 65, in upload_blob_from_string
object_name=str(path)
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 753, in upload_object_via_stream
storage_class=ex_storage_class)
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 989, in _put_object_multipart
headers=headers)
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 573, in _initiate_multipart
headers=headers, params=params)
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/common/base.py", line 655, in request
response = responseCls(**kwargs)
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/common/base.py", line 166, in init
message=self.parse_error(),
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 138, in parse_error
raise InvalidCredsError(self.body)
libcloud.common.types.InvalidCredsError: '<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>C8QKKGB0B3090Z1D</RequestId><HostId>s3R0e+Rgl+U6j4/sPtdpSjP7nHgb46O50ltwWDzsFmOj8qJjxzOB9Nb/nKLsIThm4K2gbK+8OfI=</HostId></Error>'
DEBUG:urllib3.connectionpool:Resetting dropped connection: s3.amazonaws.com
[2021-04-30 19:34:41,515] DEBUG: Resetting dropped connection: s3.amazonaws.com
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
[2021-04-30 19:34:41,570] DEBUG: https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
DEBUG:urllib3.connectionpool:Resetting dropped connection: s3.amazonaws.com
[2021-04-30 19:36:01,633] DEBUG: Resetting dropped connection: s3.amazonaws.com
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
[2021-04-30 19:36:01,699] DEBUG: https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
DEBUG:urllib3.connectionpool:Resetting dropped connection: s3.amazonaws.com
[2021-04-30 19:38:01,800] DEBUG: Resetting dropped connection: s3.amazonaws.com
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
[2021-04-30 19:38:01,854] DEBUG: https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
DEBUG:urllib3.connectionpool:Resetting dropped connection: s3.amazonaws.com
[2021-04-30 19:40:01,953] DEBUG: Resetting dropped connection: s3.amazonaws.com
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
[2021-04-30 19:40:02,009] DEBUG: https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
DEBUG:urllib3.connectionpool:Resetting dropped connection: s3.amazonaws.com
[2021-04-30 19:42:02,057] DEBUG: Resetting dropped connection: s3.amazonaws.com
DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
[2021-04-30 19:42:02,118] DEBUG: https://s3.amazonaws.com:443 "POST /prod-k8ssandra2-s3-bucket/prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql?uploads= HTTP/1.1" 403 None
ERROR:root:backup failed
Traceback (most recent call last):
File "/home/cassandra/medusa/service/grpc/server.py", line 51, in Backup
medusa.backup_node.main(self.config, request.name, None, False, "differential")
File "/home/cassandra/medusa/backup_node.py", line 227, in main
config
File "/home/cassandra/medusa/utils.py", line 35, in handle_exception
raise exception
File "/home/cassandra/medusa/backup_node.py", line 190, in main
node_backup.schema = schema
File "/home/cassandra/medusa/storage/node_backup.py", line 137, in schema
self._storage.storage_driver.upload_blob_from_string(self.schema_path, schema)
File "/home/cassandra/.local/lib/python3.6/site-packages/retrying.py", line 49, in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
File "/home/cassandra/.local/lib/python3.6/site-packages/retrying.py", line 212, in call
raise attempt.get()
File "/home/cassandra/.local/lib/python3.6/site-packages/retrying.py", line 247, in get
six.reraise(self.value[0], self.value[1], self.value[2])
File "/home/cassandra/.local/lib/python3.6/site-packages/six.py", line 703, in reraise
raise value
File "/home/cassandra/.local/lib/python3.6/site-packages/retrying.py", line 200, in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
File "/home/cassandra/medusa/storage/abstract_storage.py", line 65, in upload_blob_from_string
object_name=str(path)
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 753, in upload_object_via_stream
storage_class=ex_storage_class)
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 989, in _put_object_multipart
headers=headers)
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 573, in _initiate_multipart
headers=headers, params=params)
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/common/base.py", line 655, in request
response = responseCls(**kwargs)
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/common/base.py", line 166, in init
message=self.parse_error(),
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 138, in parse_error
raise InvalidCredsError(self.body)
libcloud.common.types.InvalidCredsError: '<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>B6FPBNY9TPPHEMSE</RequestId><HostId>7De4yhrVyAB0ZCJGNYyfTUVHdAr0ARKobV/UyAvjv6Qx/dmkXg/fdQit2AM7j/hdTspC5S3L9Kw=</HostId></Error>'
[2021-04-30 19:42:02,118] ERROR: backup failed
Traceback (most recent call last):
File "/home/cassandra/medusa/service/grpc/server.py", line 51, in Backup
medusa.backup_node.main(self.config, request.name, None, False, "differential")
File "/home/cassandra/medusa/backup_node.py", line 227, in main
config
File "/home/cassandra/medusa/utils.py", line 35, in handle_exception
raise exception
File "/home/cassandra/medusa/backup_node.py", line 190, in main
node_backup.schema = schema
File "/home/cassandra/medusa/storage/node_backup.py", line 137, in schema
self._storage.storage_driver.upload_blob_from_string(self.schema_path, schema)
File "/home/cassandra/.local/lib/python3.6/site-packages/retrying.py", line 49, in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
File "/home/cassandra/.local/lib/python3.6/site-packages/retrying.py", line 212, in call
raise attempt.get()
File "/home/cassandra/.local/lib/python3.6/site-packages/retrying.py", line 247, in get
six.reraise(self.value[0], self.value[1], self.value[2])
File "/home/cassandra/.local/lib/python3.6/site-packages/six.py", line 703, in reraise
raise value
File "/home/cassandra/.local/lib/python3.6/site-packages/retrying.py", line 200, in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
File "/home/cassandra/medusa/storage/abstract_storage.py", line 65, in upload_blob_from_string
object_name=str(path)
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 753, in upload_object_via_stream
storage_class=ex_storage_class)
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 989, in _put_object_multipart
headers=headers)
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 573, in _initiate_multipart
headers=headers, params=params)
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/common/base.py", line 655, in request
response = responseCls(**kwargs)
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/common/base.py", line 166, in init
message=self.parse_error(),
File "/home/cassandra/.local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 138, in parse_error
raise InvalidCredsError(self.body)
libcloud.common.types.InvalidCredsError: '<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>B6FPBNY9TPPHEMSE</RequestId><HostId>7De4yhrVyAB0ZCJGNYyfTUVHdAr0ARKobV/UyAvjv6Qx/dmkXg/fdQit2AM7j/hdTspC5S3L9Kw=</HostId></Error>'
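
The 403s above are S3 rejecting the CreateMultipartUpload request (the POST with "?uploads=") that Medusa issues through libcloud when uploading schema.cql. A minimal way to confirm the denial outside Medusa, assuming the AWS CLI is available with the same credentials the Medusa container uses:

# Reproduce the failing CreateMultipartUpload outside Medusa (bucket and key
# are copied from the log above; this is a diagnostic sketch, not Medusa code).
aws s3api create-multipart-upload \
  --bucket prod-k8ssandra2-s3-bucket \
  --key prod-k8ssandra-dc1-us-east-1b-sts-0.prod-k8ssandra-dc1-service.default.svc.cluster.local/test/meta/schema.cql

# Check which identity those credentials actually resolve to, and compare it
# against the IAM/bucket policy granted to the Medusa storage secret.
aws sts get-caller-identity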




┆Issue is synchronized with this [Jiraserver Task](https://k8ssandra.atlassian.net/browse/K8SSAND-172) by [Unito](https://www.unito.io)
┆Issue Number: K8SSAND-172
┆Priority: Medium

Create GHA CI workflow to test k8ssandra-terraform GCP module

A CI workflow should be created in GHA to test the GCP Terraform module for K8ssandra.
The workflow should run on a weekly schedule, provision a GKE cluster, and deploy the latest stable version of K8ssandra onto it.
Smoke tests should be implemented to verify that all of the components have started successfully (a rough shell sketch of these checks follows the list below):

  • Check that the cassdc object reaches the "Ready" state
  • Check that Cassandra is accessible by running cqlsh
  • Check that Stargate is accessible through its REST API
  • Check that Reaper is accessible through its REST API
  • Check that Grafana is accessible through its web UI

Medusa is assumed to be in a working state if Cassandra is accessible.
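
A rough shell sketch of these smoke tests, assuming a kubeconfig pointing at the freshly provisioned GKE cluster; the release name, namespace, labels, and service names below are illustrative assumptions, not the final workflow:

#!/usr/bin/env bash
set -euo pipefail

NS="${NS:-default}"
RELEASE="${RELEASE:-prod-k8ssandra}"
DC="${DC:-dc1}"

# 1. The cassdc object reaches the "Ready" condition.
kubectl -n "$NS" wait "cassandradatacenter/$DC" --for=condition=Ready --timeout=20m

# 2. Cassandra answers cqlsh, using the superuser secret created by the chart
#    (secret name and pod label are assumptions).
CASS_USER=$(kubectl -n "$NS" get secret "$RELEASE-superuser" -o jsonpath='{.data.username}' | base64 -d)
CASS_PASS=$(kubectl -n "$NS" get secret "$RELEASE-superuser" -o jsonpath='{.data.password}' | base64 -d)
POD=$(kubectl -n "$NS" get pods -l "cassandra.datastax.com/datacenter=$DC" -o jsonpath='{.items[0].metadata.name}')
kubectl -n "$NS" exec "$POD" -c cassandra -- cqlsh -u "$CASS_USER" -p "$CASS_PASS" -e "DESCRIBE KEYSPACES"

# 3-5. Stargate, Reaper, and Grafana answer over HTTP. Service names and ports
#      are assumptions; adjust them to whatever the chart actually creates.
probe() { # probe <service> <local_port:service_port> <path>
  kubectl -n "$NS" port-forward "svc/$1" "$2" >/dev/null &
  local pf=$!
  sleep 5
  curl -fsS "http://127.0.0.1:${2%%:*}$3" >/dev/null
  kill "$pf"
}
probe "$RELEASE-$DC-stargate-service"  8084:8084 /checker/readiness   # Stargate health checker
probe "$RELEASE-reaper-reaper-service" 8080:8080 /ping                # Reaper REST API
probe "$RELEASE-grafana"               3000:80   /api/health          # Grafana web UI

echo "All smoke tests passed."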

┆Issue is synchronized with this Jiraserver Task by Unito
┆Epic: k8ssandra-terraform integration tests
┆Issue Number: K8SSAND-575
┆Priority: Medium

Update GitHub Repo

Can we enable the repository setting that prevents a PR from being merged unless it has been rebased on (is up to date with) the base branch? I do not seem to have access to that setting.

┆Issue is synchronized with this Jiraserver Task by Unito
┆Issue Number: K8SSAND-171
┆Priority: Medium

Create TF for Azure

chaitu6022, when you are done with AWS, flesh this out and add a due date.

┆Issue is synchronized with this Jiraserver Task by Unito
┆Issue Number: K8SSAND-177
┆Priority: Medium

Ingress YAML

For each cloud provider, provide YAML examples of provider-specific Ingress resources for Reaper and Grafana.

┆Issue is synchronized with this Jira Task by Unito
┆friendlyId: K8SSAND-174
┆priority: Medium

After running terraform destroy IAM service account is still visible in the GCP console

After destroying resources with terraform destroy, I expect all resources to be removed. Upon logging in to the cloud console this morning and looking at the IAM -> Service Accounts screen, I saw a service account that should not exist, although the UI indicated that it was either disabled or that its keys could not be loaded.

It is worth noting that while writing out this issue the offending entry is no longer visible in the GCP UI. Maybe it's an eventual consistency thing? Feel free to close this bug if you cannot reproduce.
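
If this recurs, a quick CLI cross-check (hypothetical commands; the project variable and the "medusa" naming filter are assumptions) can show whether the account truly survived the destroy or is just lingering in the console view:

# List any Medusa-related service accounts still present after `terraform destroy`.
gcloud iam service-accounts list --project "$PROJECT_ID" \
  --filter="email ~ medusa" --format="value(email, disabled)"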

Hashicorp Terraform 1.0.x bug generating invalid gcp_medusa_key.json

Bug Report

Describe the bug
Following the instructions in our Install K8ssandra on GKE topic, we noticed that the Medusa container in C* pods would not start. Looking further, it appears that HashiCorp's Terraform generates extraneous newline characters in the private key within gcp_medusa_key.json.

This was a bug in Terraform 0.13 (hashicorp/terraform#25986) that was supposedly fixed, but it appears to still be an issue in Terraform 1.0.0. johnsmartco filed a new bug report with HashiCorp: hashicorp/terraform#29079.

To Reproduce

Expected behavior
HashiCorp Terraform, e.g. via this command, should generate a valid key for Medusa:

terraform output -json service_account_key > medusa_gcp_key.json
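
As a workaround sketch while the upstream bug is open (not a verified fix, and it assumes the service_account_key output is a string holding the JSON key document itself), the -raw form avoids the extra JSON string encoding that -json applies, and jq can confirm the result parses:

# Workaround sketch, not a confirmed fix for hashicorp/terraform#29079.
terraform output -raw service_account_key > medusa_gcp_key.json

# Sanity check: the file must parse as JSON and contain a private_key field
# with real PEM line breaks rather than doubled "\n" escapes.
jq -e .private_key medusa_gcp_key.json >/dev/null && echo "medusa_gcp_key.json looks structurally valid"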

Environment (please complete the following information):

  • Helm charts version info

NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
prod-k8ssandra default 1 2021-07-06 18:41:56.898509262 +0000 UTC deployed k8ssandra-1.2.0

  • Helm charts user-supplied values

From my version of gke.values.yaml:

USER-SUPPLIED VALUES:
cassandra:
  cassandraLibDirVolume:
    size: 2048Gi
    storageClass: standard-rwo
  datacenters:
  - name: dc1
    racks:
    - affinityLabels:
        topology.kubernetes.io/zone: us-central1-f
      name: us-central1-f
    - affinityLabels:
        topology.kubernetes.io/zone: us-central1-a
      name: us-central1-a
    - affinityLabels:
        topology.kubernetes.io/zone: us-central1-c
      name: us-central1-c
    size: 3
  heap:
    newGenSize: 3G
    size: 8G
  resources:
    limits:
      cpu: 5000m
      memory: 50Gi
    requests:
      cpu: 5000m
      memory: 50Gi
  version: 3.11.10
medusa:
  bucketName: prod-k8ssandra-storage-bucket
  enabled: true
  storage: google_storage
  storageSecret: prod-k8ssandra-medusa-key
stargate:
  cpuLimMillicores: 1000
  cpuReqMillicores: 1000
  enabled: true
  heapMB: 1024
  replicas: 3
  • Kubernetes version information:

Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.11", GitCommit:"c6a2f08fc4378c5381dd948d9ad9d1080e3e6b33", GitTreeState:"clean", BuildDate:"2021-05-12T12:27:07Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.10-gke.1600", GitCommit:"7b8e568a7fb4c9d199c2ba29a5f7d76f6b4341c2", GitTreeState:"clean", BuildDate:"2021-05-07T09:18:53Z", GoVersion:"go1.15.10b5", Compiler:"gc", Platform:"linux/amd64"}

  • Kubernetes cluster kind:
    GCP GKE

Additional context
It's a bug in HashiCorp Terraform, but we are entering this issue here so we can track it. cc: jdonenine.

┆Issue is synchronized with this Jiraserver Bug by Unito
┆Issue Number: K8SSAND-666
┆Priority: Medium
