Comments (4)
@tony-lijinwen This is intentional; that config is meant to be managed by the framework. The intent is that the framework leverages the existing consumer group functionality and manages balancing work among tasks itself. This can be confusing to a connector developer, since framework-level and connector-level configs are somewhat conflated, but it keeps things a lot simpler and easier to understand for users.
Do you have a reason why you need to split the subscriptions so they differ among the tasks? At a bare minimum, the framework would have to make sure the original subscription was still covered, which could become quite difficult once we add support for regex subscriptions (which the underlying consumers support, but which has not been extended to Kafka Connect yet). This is a request I haven't seen before, so I'm curious about the use case.
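For a concrete picture of what "managed by the framework" means here: a sink connector's subscription is declared once, at the connector level, and Connect divides the resulting topic partitions across tasks via the consumer group protocol. A hypothetical sink connector config (all names and values here are illustrative, not taken from this thread) might look like:

```json
{
  "name": "hdfs-sink-example",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "3",
    "topics": "topic-a,topic-b,topic-c",
    "hdfs.url": "hdfs://namenode:8020"
  }
}
```

Note that `topics` appears once for the whole connector; the framework, not the connector, decides which task consumes which partitions of those topics.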
from kafka-connect-hdfs.
@ewencp Thanks for your reply. The reason I split the subscriptions is: I have three topics. Two of them carry huge messages that are not urgent; one carries small messages that are urgent. As far as I know, if one consumer subscribes to more than one topic, it consumes the messages in FIFO order (i.e., if the other topics contain huge messages, the urgent messages will be blocked behind them). So I want to split the subscriptions to ensure the urgent messages can be handled as soon as possible.
I have the same requirement as the OP, and I don't know how the OP resolved this issue with Kafka Connect. It would be nice to have a partition assignment scheme that assigns one topic to each sink task, given that the SinkConnector cannot change the assignment among the tasks.
@tony-lijinwen @sv2000 I'm curious why you would not just run separate connector instances if the topic data needs to be consumed in different ways. Thinking about this outside of the Connect-related concepts: if I needed to consume data with different sizes and levels of urgency, I would logically expect to have different consumer configurations to accommodate those needs.
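The separate-instances suggestion can be sketched as two connector configs (all names and values here are hypothetical): one instance dedicated to the urgent topic and another handling the bulk topics.

```json
{
  "name": "hdfs-sink-urgent",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "1",
    "topics": "urgent-small-messages",
    "hdfs.url": "hdfs://namenode:8020"
  }
}
```

```json
{
  "name": "hdfs-sink-bulk",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "2",
    "topics": "bulk-topic-1,bulk-topic-2",
    "hdfs.url": "hdfs://namenode:8020"
  }
}
```

Each connector gets its own consumer group (named after the connector by default), so slow consumption from the bulk topics cannot block consumption of the urgent topic, and each instance can be sized and tuned independently.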
This is really more of a general Connect API question than something specific to the HDFS connector, so I would propose moving this discussion to a Kafka JIRA if you would like to continue it. You can open a JIRA detailing the needs that can't currently be accommodated at https://issues.apache.org/jira/browse/KAFKA
from kafka-connect-hdfs.