
Comments (4)

ewencp avatar ewencp commented on July 24, 2024

@tony-lijinwen This is intentional, and that config is intended to be managed by the framework. The intent is that the framework leverages the existing consumer group functionality and manages balancing work amongst tasks itself. It can be a bit confusing to a connector developer since framework-level and connector-level configs are a bit conflated, but this keeps things a lot simpler and easier to understand for users.

Do you have a reason why you need to split the subscriptions up so they are different among the tasks? At a bare minimum, the framework would at least have to make sure the original subscription was covered, which could actually be quite difficult once we add support for regex subscriptions (which the underlying consumers support, but which has not been extended to Kafka Connect yet). This is a request I haven't seen before, so I'm curious about the use case.
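For context, the framework-managed config being discussed is the connector-level `topics` setting: it is declared once per connector, and Connect itself divides the resulting partitions among the tasks. A sketch of an HDFS sink connector config (the connector name, topic names, and HDFS URL here are hypothetical; `connector.class`, `tasks.max`, `topics`, and `hdfs.url` are standard keys for this connector):

```json
{
  "name": "hdfs-sink-example",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "3",
    "topics": "orders,audit,metrics",
    "hdfs.url": "hdfs://namenode:8020"
  }
}
```

Note that nothing in this config says which task consumes which topic; that assignment is made by the framework's consumer group machinery, which is the behavior the original question is about.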

from kafka-connect-hdfs.

tony-lijinwen avatar tony-lijinwen commented on July 24, 2024

@ewencp Thanks for your reply. The reason I want to split the subscriptions is: I have three topics. Two of them carry huge messages, but those messages are not urgent; one carries small messages, but those messages are urgent. As far as I know, if one consumer subscribes to more than one topic, it consumes the messages in FIFO order (i.e. if the other topics contain huge messages, the urgent messages will be blocked behind them). So I want to split the subscriptions to ensure the urgent messages can be handled as soon as possible.
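The head-of-line blocking concern can be made concrete with a toy simulation (this is not Kafka code; the topic names and cost units are invented, and "cost" stands in for per-message processing time). It compares one consumer draining all topics in arrival order against a dedicated consumer per topic:

```python
# Toy model of head-of-line blocking: a shared FIFO consumer vs. one
# dedicated consumer per topic. Each message is (topic, id, cost), where
# cost approximates how long the message takes to process.

def drain_fifo(messages):
    """One consumer for all topics: process strictly in arrival order."""
    clock, finish = 0, {}
    for topic, msg_id, cost in messages:
        clock += cost
        finish[(topic, msg_id)] = clock
    return finish

def drain_per_topic(messages):
    """One dedicated consumer per topic: each topic's clock runs independently."""
    clocks, finish = {}, {}
    for topic, msg_id, cost in messages:
        clocks[topic] = clocks.get(topic, 0) + cost
        finish[(topic, msg_id)] = clocks[topic]
    return finish

# Two huge bulk messages (cost 100 each) arrive before one small urgent one (cost 1).
arrivals = [("bulk", 1, 100), ("bulk", 2, 100), ("urgent", 1, 1)]

print(drain_fifo(arrivals)[("urgent", 1)])       # 201: urgent waits behind bulk
print(drain_per_topic(arrivals)[("urgent", 1)])  # 1: urgent handled immediately
```

Under the shared consumer the urgent message finishes at time 201; with a dedicated consumer it finishes at time 1, which is the latency difference motivating the request.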


sv2000 avatar sv2000 commented on July 24, 2024

I have the same requirement as the OP. I don't know how the OP resolved this issue with Kafka Connect. It would be nice to have a partition assignment scheme that assigns one topic to each sink task, given that the SinkConnector cannot change the assignment among the tasks.


cotedm avatar cotedm commented on July 24, 2024

@tony-lijinwen @sv2000 I'm curious as to why you would not just have separate connector instances if you need the consumption of the topic data to be handled in a different way. Thinking about this outside of the Connect-related concepts: if I needed to consume data with different sizes and levels of urgency, I would logically expect to have different consumer configurations to accommodate those needs.

This is really more of a general Connect API question than something specific to the HDFS connector, so I would propose moving this discussion to a Kafka JIRA if you would like to continue it. You can open a JIRA detailing the needs that can't currently be accommodated here: https://issues.apache.org/jira/browse/KAFKA
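The separate-instances suggestion can be sketched as two independent connector configs, each posted to the Connect REST API as its own connector (the names, topics, and HDFS URL below are hypothetical). Each connector gets its own consumer group, so the urgent topic is never queued behind the bulk topics:

```json
{
  "name": "hdfs-sink-urgent",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "1",
    "topics": "urgent-events",
    "hdfs.url": "hdfs://namenode:8020"
  }
}
```

```json
{
  "name": "hdfs-sink-bulk",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "2",
    "topics": "bulk-events-a,bulk-events-b",
    "hdfs.url": "hdfs://namenode:8020"
  }
}
```

This also gives each connector its own tuning knobs (e.g. `tasks.max`, flush settings), which matches the idea of different consumer configurations for different urgency levels.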

