
Comments (6)

mmajis commented on August 20, 2024

Thanks again for your help!

It appears Confluent Cloud (Professional, single AZ) enforces a topic policy that requires replication_factor = 3; any other value triggers the policy error.

Having fixed that, I set out to reproduce the unknown error. It turns out it shows up if I use literal booleans like "preallocate" = false: those get converted to "preallocate" = "0" and don't work, while "preallocate" = "false" does.
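For illustration, the two forms side by side (a minimal sketch; only the preallocate entry is shown, the rest of the topic config is unchanged):

# Fails: Terraform coerces the bare boolean to the string "0", which the broker rejects
config = {
  "preallocate" = false
}

# Works: pass the value as a string
config = {
  "preallocate" = "false"
}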

Furthermore, trying to edit an existing topic to add a bad preallocate value gives a clear error:

* kafka_topic.s3_connector_test_terraform_5: Invalid config value for resource Resource(type=TOPIC, name='s3_connector_test_terraform_5'}: Invalid value 0 for configuration preallocate: Expected value to be either true or false

Including the incorrect value during topic creation leads to the unknown error.
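Putting both findings together, here's a minimal sketch of a topic definition that gets past the policy (the topic name and partition count are just illustrative):

resource "kafka_topic" "example" {
  name               = "example"
  partitions         = 12
  replication_factor = 3 # the Confluent Cloud policy rejects any other value

  config = {
    # string form; a bare boolean would be coerced to "0" and fail at creation
    "preallocate" = "false"
  }
}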

I'm all set, got my topic created! Love your Terraform provider and your world-class support effort! 👍


mmajis commented on August 20, 2024

Looks like Confluent Cloud supports TLSv1, TLSv1.1, and TLSv1.2.


Mongey commented on August 20, 2024

Hi @mmajis, sorry you're running into issues.

Can you run TF_LOG=debug terraform apply and supply the logs?

It might be worth removing the empty config value, just in case that's causing issues.

resource "kafka_topic" "test_terraform" {
  name               = "test_terraform"
  replication_factor = 2
  partitions         = 10
}

> Any ideas? Should it be possible to specify ciphers somehow, or do the defaults need fixing?

It doesn't look like a TLS error to me.
Reading a bit about the specific error, it looks like Confluent Cloud probably has a custom create.topic.policy.class.name set, which requires certain config values to be set for every topic.

e.g. you need to set:

config = {
  "retention.ms" = "100000"
}


mmajis commented on August 20, 2024

Thanks for having a look at this!

I added more config based on what Confluent Control Center uses when creating a topic, and got rid of the policy error. However, here's the next error, with debug output:

Terraform will perform the following actions:


  + kafka_topic.s3_connector_test_terraform_2
      id:                                         <computed>
      config.%:                                   "22"
      config.cleanup.policy:                      "delete"
      config.compression.type:                    "producer"
      config.delete.retention.ms:                 "86400000"
      config.file.delete.delay.ms:                "60000"
      config.flush.messages:                      "9223372036854775807"
      config.flush.ms:                            "9223372036854775807"
      config.index.interval.bytes:                "4096"
      config.max.message.bytes:                   "2097164"
      config.message.format.version:              "1.0-IV0"
      config.message.timestamp.difference.max.ms: "9223372036854775807"
      config.message.timestamp.type:              "CreateTime"
      config.min.cleanable.dirty.ratio:           "0.5"
      config.min.compaction.lag.ms:               "0"
      config.min.insync.replicas:                 "2"
      config.preallocate:                         "0"
      config.retention.bytes:                     "1000000000"
      config.retention.ms:                        "43200000"
      config.segment.bytes:                       "1073741824"
      config.segment.index.bytes:                 "10485760"
      config.segment.jitter.ms:                   "0"
      config.segment.ms:                          "604800000"
      config.unclean.leader.election.enable:      "0"
      name:                                       "s3_connector_test_terraform_2"
      partitions:                                 "12"
      replication_factor:                         "2"


Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

2019/01/23 17:50:38 [INFO] terraform: building graph: GraphTypeApply
[... truncated terraform.InstanceDiff dump for kafka_topic.s3_connector_test_terraform_2; the attribute diffs match the plan output above ...]
2019/01/23 17:50:38 [DEBUG] Resource state not found for "kafka_topic.s3_connector_test_terraform_2": kafka_topic.s3_connector_test_terraform_2
2019/01/23 17:50:38 [TRACE] Graph after step *terraform.AttachStateTransformer:

kafka_topic.s3_connector_test_terraform_2 - *terraform.NodeApplyableResource
2019/01/23 17:50:38 [DEBUG] ReferenceTransformer: "kafka_topic.s3_connector_test_terraform_2" references: []
2019/01/23 17:50:38 [DEBUG] ReferenceTransformer: "provider.kafka" references: []
2019-01-23T17:50:38.763+0200 [DEBUG] plugin.terraform-provider-kafka: 2019/01/23 17:50:38 [DEBUG] 0:Converting redacted:9092 to string
2019-01-23T17:50:38.763+0200 [DEBUG] plugin.terraform-provider-kafka: 2019/01/23 17:50:38 [DEBUG] configuring provider with Brokers @ &[redacted:9092]
2019-01-23T17:50:38.763+0200 [DEBUG] plugin.terraform-provider-kafka: 2019/01/23 17:50:38 [DEBUG] Config @ &{0xc00050b000 120    true true redacted redacted}
2019-01-23T17:50:38.763+0200 [DEBUG] plugin.terraform-provider-kafka: 2019/01/23 17:50:38 [INFO] configuring bootstrap_servers &{0xc00050b000 120    true true redacted redacted}
2019-01-23T17:50:38.763+0200 [DEBUG] plugin.terraform-provider-kafka: 2019/01/23 17:50:38 [WARN] skipping TLS client config
2019-01-23T17:50:38.763+0200 [DEBUG] plugin.terraform-provider-kafka: 2019/01/23 17:50:38 [WARN] no CA file set skipping
2019-01-23T17:50:39.295+0200 [DEBUG] plugin.terraform-provider-kafka: 2019/01/23 17:50:39 [INFO] Checking the diff!
2019-01-23T17:50:39.295+0200 [DEBUG] plugin.terraform-provider-kafka: 2019/01/23 17:50:39 [INFO] Partitions have changed!
2019-01-23T17:50:39.295+0200 [DEBUG] plugin.terraform-provider-kafka: 2019/01/23 17:50:39 Partitions is changing from 0 to 12
2019-01-23T17:50:39.296+0200 [DEBUG] plugin.terraform-provider-kafka: 2019/01/23 17:50:39 [INFO] Checking the diff!
2019-01-23T17:50:39.296+0200 [DEBUG] plugin.terraform-provider-kafka: 2019/01/23 17:50:39 [INFO] Partitions have changed!
2019-01-23T17:50:39.296+0200 [DEBUG] plugin.terraform-provider-kafka: 2019/01/23 17:50:39 Partitions is changing from 0 to 12
kafka_topic.s3_connector_test_terraform_2: Creating...
  config.%:                                   "" => "22"
  config.cleanup.policy:                      "" => "delete"
  config.compression.type:                    "" => "producer"
  config.delete.retention.ms:                 "" => "86400000"
  config.file.delete.delay.ms:                "" => "60000"
2019-01-23T17:50:39.297+0200 [DEBUG] plugin.terraform-provider-kafka: 2019/01/23 17:50:39 [DEBUG] Brokers 0 , redacted:9092
2019-01-23T17:50:39.297+0200 [DEBUG] plugin.terraform-provider-kafka: 2019/01/23 17:50:39 [DEBUG] Timeout is 2m0s
  config.flush.messages:                      "" => "9223372036854775807"
  config.flush.ms:                            "" => "9223372036854775807"
  config.index.interval.bytes:                "" => "4096"
  config.max.message.bytes:                   "" => "2097164"
  config.message.format.version:              "" => "1.0-IV0"
  config.message.timestamp.difference.max.ms: "" => "9223372036854775807"
  config.message.timestamp.type:              "" => "CreateTime"
  config.min.cleanable.dirty.ratio:           "" => "0.5"
  config.min.compaction.lag.ms:               "" => "0"
  config.min.insync.replicas:                 "" => "2"
  config.preallocate:                         "" => "0"
  config.retention.bytes:                     "" => "1000000000"
  config.retention.ms:                        "" => "43200000"
  config.segment.bytes:                       "" => "1073741824"
  config.segment.index.bytes:                 "" => "10485760"
  config.segment.jitter.ms:                   "" => "0"
  config.segment.ms:                          "" => "604800000"
  config.unclean.leader.election.enable:      "" => "0"
  name:                                       "" => "s3_connector_test_terraform_2"
  partitions:                                 "" => "12"
  replication_factor:                         "" => "2"
2019/01/23 17:50:39 [DEBUG] plugin: waiting for all plugin processes to complete...

Error: Error applying plan:

1 error(s) occurred:

* kafka_topic.s3_connector_test_terraform_2: 1 error(s) occurred:

2019-01-23T17:50:39.748+0200 [DEBUG] plugin.terraform-provider-kafka-connect: 2019/01/23 17:50:39 [ERR] plugin: plugin server: accept unix /var/folders/dj/kmdjnztx64v3rdmr3q7n0yf40000gn/T/plugin158399668: use of closed network connection
2019-01-23T17:50:39.748+0200 [DEBUG] plugin.terraform-provider-kafka-connect: 2019/01/23 17:50:39 [ERR] plugin: stream copy 'stderr' error: stream closed
2019-01-23T17:50:39.749+0200 [DEBUG] plugin.terraform-provider-kafka: 2019/01/23 17:50:39 [ERR] plugin: plugin server: accept unix /var/folders/dj/kmdjnztx64v3rdmr3q7n0yf40000gn/T/plugin686870369: use of closed network connection
* kafka_topic.s3_connector_test_terraform_2: kafka server: Unexpected (unknown?) server error.


mmajis commented on August 20, 2024

And yeah, sorry about the TLS chatter; I got that mixed up with a previous issue I had.


Mongey commented on August 20, 2024

I've tried the following topic config locally and it seems to work:

resource "kafka_topic" "test_terraform" {
  name               = "s3_connector_test_terraform_2"
  partitions         = 12
  replication_factor = 2

  config = {
    "cleanup.policy"                      = "delete"
    "compression.type"                    = "producer"
    "delete.retention.ms"                 = "86400000"
    "file.delete.delay.ms"                = "60000"
    "flush.messages"                      = "9223372036854775807"
    "flush.ms"                            = "9223372036854775807"
    "index.interval.bytes"                = "4096"
    "max.message.bytes"                   = "2097164"
    "message.format.version"              = "1.0-IV0"
    "message.timestamp.difference.max.ms" = "9223372036854775807"
    "message.timestamp.type"              = "CreateTime"
    "min.cleanable.dirty.ratio"           = "0.5"
    "min.insync.replicas"                 = "2"
    "preallocate"                         = "false"
    "retention.bytes"                     = "1000000000"
    "retention.ms"                        = "43200000"
    "segment.bytes"                       = "1073741824"
    "segment.index.bytes"                 = "10485760"
    "segment.jitter.ms"                   = "0"
    "segment.ms"                          = "604800000"
    "unclean.leader.election.enable"      = "false"
  }
}

> * kafka_topic.s3_connector_test_terraform_2: kafka server: Unexpected (unknown?) server error.

😅 It looks like the upstream library, Sarama, doesn't support all of the error codes returned by Kafka, so I can't see the actual error returned by the broker.

Going by what's not implemented, I'm going to guess that the error should be:

kafka server: The requesting client does not support the compression type of given partition.

Can you try leaving out the "compression.type" = "producer" line and see if it works ... or just comment out different combinations of config entries until terraform apply works? 😬
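For example, a trimmed sketch with the suspect entry disabled (the remaining entries from the full config above would stay in place):

resource "kafka_topic" "test_terraform" {
  name               = "s3_connector_test_terraform_2"
  partitions         = 12
  replication_factor = 2

  config = {
    # "compression.type" = "producer"  # suspect entry, commented out for this run
    "cleanup.policy" = "delete"
    "retention.ms"   = "43200000"
  }
}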

