import issues about terraform-provider-kafka (OPEN, 13 comments)

mongey commented on August 20, 2024
from terraform-provider-kafka.

Comments (13)

xcjs commented on August 20, 2024

@Mongey It may work for you because you are using Consul as a source for your bootstrap_servers, whereas @jonhoare is using the sample code for getting the bootstrap_servers from the newly created Kafka cluster.

I mention this because I am having the same issue - bootstrap_servers is not set when creating both a Kafka cluster and Kafka topics from within Terraform.

@jonhoare - were you able to resolve your issue?

Mongey commented on August 20, 2024

Perhaps we're doing the provider validation too early in the process... I'll need to investigate how other providers handle this.

jonhoare commented on August 20, 2024

Is there any update on this issue?

I am having the same issue as Mongey/terraform-provider-confluentcloud#13, whereby if my confluentcloud kafka cluster hasn't yet been created, the provider throws an error.

I have tried to use depends_on = [confluentcloud_kafka_cluster.myinstance] against my kafka_topic, hoping this would delay initializing my kafka provider until the resource exists, but this still does not work.

Mongey commented on August 20, 2024

@jonhoare Have you tried with v0.2.4?

jonhoare commented on August 20, 2024

@Mongey Yes I am using v0.2.4.

I've only just set this up today and so I am using the latest versions.

  • terraform-provider-kafka (v0.2.4)
  • terraform-provider-confluent-cloud (v0.0.1)

provider "confluentcloud" {}

resource "confluentcloud_kafka_cluster" "myinstance" {
  name = "myinstance"
  service_provider  = "azure"
  region = "westeurope"
  availability = "LOW"
  environment_id = "env-id"
}

resource "confluentcloud_api_key" "management" {
  cluster_id = confluentcloud_kafka_cluster.myinstance.id
  environment_id = "env-id"
}

locals {
  bootstrap_servers = [replace(confluentcloud_kafka_cluster.myinstance.bootstrap_servers, "SASL_SSL://", "")]
}

provider "kafka" {
  bootstrap_servers = local.bootstrap_servers

  tls_enabled = true
  sasl_username = confluentcloud_api_key.management.key
  sasl_password = confluentcloud_api_key.management.secret
  sasl_mechanism = "plain"
}

resource "kafka_topic" "mytopic" {
  depends_on = [confluentcloud_kafka_cluster.myinstance]
  name = "mytopic"
  replication_factor = 3
  partitions = 1

  config = {
    "cleanup.policy" = "delete"
    "compression.type" = "producer"
    "delete.retention.ms" = "86400000"
    "file.delete.delay.ms" = "60000"
    "flush.messages" = "9223372036854775807"
    "flush.ms" = "9223372036854775807"
    "index.interval.bytes" = "4096"
    "max.message.bytes" = "2097164"
    "message.format.version" = "1.0-IV0"
    "message.timestamp.difference.max.ms" = "9223372036854775807"
    "message.timestamp.type" = "CreateTime"
    "min.cleanable.dirty.ratio" = "0.5"
    "min.insync.replicas" = "2"
    "preallocate" = "false"
    "retention.bytes" = "1000000000"
    "retention.ms" = "43200000"
    "segment.bytes" = "1073741824"
    "segment.index.bytes" = "10485760"
    "segment.jitter.ms" = "0"
    "segment.ms" = "604800000"
    "unclean.leader.election.enable" = "false"
  }
}

Mongey commented on August 20, 2024

This should be fixed...
This works for me:

provider "consul" {}

data "consul_keys" "kafka_servers" {
  datacenter = "dc1"

  # Read the Kafka bootstrap servers from Consul
  key {
    name = "kafka"
    path = "kafka"
  }
}
provider "kafka" {
  bootstrap_servers = [data.consul_keys.kafka_servers.var.kafka]

  ca_cert     = file("../secrets/snakeoil-ca-1.crt")
  client_cert = file("../secrets/kafkacat-ca1-signed.pem")
  client_key  = file("../secrets/kafkacat-raw-private-key.pem")
  tls_enabled = true
}

# Make sure we don't lock ourselves out on the first run of terraform.
# First grant ourselves admin permissions, then add an ACL for the topic.
resource "kafka_acl" "global" {
  resource_name       = "*"
  resource_type       = "Topic"
  acl_principal       = "User:*"
  acl_host            = "*"
  acl_operation       = "All"
  acl_permission_type = "Allow"
}

resource "kafka_topic" "syslog" {
  name               = "syslog"
  replication_factor = 1
  partitions         = 4

  config = {
    "segment.ms"   = "4000"
    "retention.ms" = "86400000"
  }

  depends_on = [kafka_acl.global]
}

resource "kafka_acl" "test" {
  resource_name       = "syslog"
  resource_type       = "Topic"
  acl_principal       = "User:Alice"
  acl_host            = "*"
  acl_operation       = "Write"
  acl_permission_type = "Deny"

  depends_on = [kafka_acl.global]
}

VipulZopSmart commented on August 20, 2024

The same problem exists when using SASL, like this:

provider "kafka" {
  bootstrap_servers = split(",", aws_msk_cluster.msk_cluster.bootstrap_brokers_sasl_scram)
  sasl_username     = var.kafka_admin_user
  sasl_password     = random_password.scram_password.result
  sasl_mechanism    = "scram-sha512"
}
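A workaround often suggested for this chicken-and-egg situation (not taken from this thread, so treat it as a sketch): create the MSK cluster in a first, targeted apply so that its bootstrap_brokers_sasl_scram attribute exists in state before the kafka provider is configured, then run a full apply. The resource address below matches the snippet above.

```sh
# Phase 1: create only the MSK cluster (and its dependencies),
# so bootstrap_brokers_sasl_scram becomes known in state.
terraform apply -target=aws_msk_cluster.msk_cluster

# Phase 2: full apply; the kafka provider can now resolve
# bootstrap_servers from the existing cluster.
terraform apply
```

Note that -target is intended for exceptional use only, and Terraform prints a warning whenever it is used.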

VipulZopSmart commented on August 20, 2024
Error: No bootstrap_servers provided
│ 
│   with kafka_topic.kafka_topics["test-topic"],
│   on topics.tf line 18, in resource "kafka_topic" "kafka_topics":
│   18: resource "kafka_topic" "kafka_topics" {

itaykat commented on August 20, 2024

@jonhoare were you able to solve your issue?
I'm experiencing the exact same issue on the latest version... this issue should remain open, as it has not been resolved yet.
As @xcjs mentioned, the scenario described by @Mongey is a different one and assumes that the bootstrap_servers already exist.
This is a core issue that must be resolved, as it prevents the creation of a new cluster from scratch (given only the confluent service credentials, username + password).

dahooligan commented on August 20, 2024

Are there any updates on this issue? I am experiencing the same problem as @VipulZopSmart, where an aws_msk_cluster is created in the same terraform definition as resources from the Mongey/kafka provider.
Is this something that can even be resolved, or is it a won't-fix? Thanks in advance :)

Mongey commented on August 20, 2024

@dahooligan can you provide the full example?

dahooligan commented on August 20, 2024

Hi @Mongey, thanks for your reply. I'll provide you with a (non-)working minimal example ASAP.

meisfrancis commented on August 20, 2024

@Mongey, In my example, the error occurs when the provider block is placed in the referred module.

# source dir
my-tf/
|__msk/v1/
|  |__files/
|  |  |__topics.yaml
|  |__main.tf
|  |__msk_topic.tf
|__core/
|  |__kafka-topic/v1/
|     |__main.tf
|     |__variables.tf
# topics.yaml
hello_from_cis:
  partitions: 1
hello_from_bob:
  partitions: 1
# msk_topic.tf
locals {
  topics = yamldecode(file("${path.module}/files/topics.yaml"))
}

module "msk_topics" {
  source                = "../../../../../modules/aws/msk-topic/v1"
  msk_bootstrap_servers = split(",", module.msk.bootstrap_brokers[0])
  topics                = local.topics
}
# kafka-topic/v1/main.tf
terraform {
  required_providers {
    kafka = {
      source  = "Mongey/kafka"
      version = "~> 0.5"
    }
  }
}

provider "kafka" {
  bootstrap_servers = var.msk_bootstrap_servers
  tls_enabled       = false
}

resource "kafka_topic" "topics" {
  for_each           = var.topics
  name               = each.key
  partitions         = lookup(each.value, "partitions", 1)
  replication_factor = lookup(each.value, "replication_factor", 2)

  config = merge({
    "retention.ms"    = 3600000
    "retention.bytes" = 250000000
  }, lookup(each.value, "config", {}))
}
terraform init; terraform plan
...
Plan: 2 to add, 0 to change, 0 to destroy.
╷
│ Error: Missing required argument
│ 
│ The argument "bootstrap_servers" is required, but was not set.
╵

Other than that, I've found that moving the provider block from kafka-topic/v1/main.tf to msk_topic.tf makes it work.

After examining the error following the plan result, my hypothesis is that the provider needs time to initialize. However, msk_topic.tf invokes kafka-topic/v1/main.tf, which causes the resource "kafka_topic" "topics" to initialize prior to the provider, resulting in the error.
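The working layout described above matches Terraform's own guidance: provider blocks belong in the root module, and a reusable module should only declare required_providers and receive an already-configured provider via the providers meta-argument. A minimal sketch of that layout, reusing the names from this example (hedged, untested):

```hcl
# msk_topic.tf (root module) -- the provider block lives here,
# not inside the reusable module.
provider "kafka" {
  bootstrap_servers = split(",", module.msk.bootstrap_brokers[0])
  tls_enabled       = false
}

module "msk_topics" {
  source = "../../../../../modules/aws/msk-topic/v1"
  topics = local.topics

  # Hand the configured provider to the child module explicitly.
  providers = {
    kafka = kafka
  }
}

# kafka-topic/v1/main.tf then keeps only its required_providers block
# and the kafka_topic resources; its own provider "kafka" block is removed.
```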
