Comments (7)
Hi @amitmalagi, I'm not sure we want to skip records on error. Do you have a good idea of how the invalid record entered the topic? What are you using to produce to the topic?
Hello @jcustenborder, the producer is a Node.js application. I am using the 'kafka-node' module to post messages to Kafka and the 'avsc' module to encode messages into Avro format. In some cases, if the message size is bigger than the buffer allocated for Avro encoding, the resulting record is invalid.
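For anyone else hitting this: a minimal sketch of the failure mode with avsc (the schema here is illustrative, not the one from the original application). `type.encode(val, buf, pos)` writes into a caller-supplied buffer and returns a negative number when the buffer is too small, so a producer that ignores that return value can publish a truncated, undecodable record; `type.toBuffer(val)` sizes the buffer itself and avoids the problem.

```js
const avro = require('avsc');

// Illustrative schema; the original application's schema is not shown in the thread.
const type = avro.Type.forSchema({
  type: 'record',
  name: 'Event',
  fields: [{ name: 'body', type: 'string' }],
});

// Safe: toBuffer() allocates a buffer sized for this record, so it cannot truncate.
function encodeSafely(record) {
  return type.toBuffer(record);
}

// Risky pattern: encoding into a fixed buffer. A negative return value means the
// buffer was too small by -pos bytes; sending buf anyway is how truncated records
// end up on the topic.
function encodeIntoFixedBuffer(record, buf) {
  const pos = type.encode(record, buf, 0);
  if (pos < 0) {
    throw new Error(`buffer too small by ${-pos} bytes`);
  }
  return buf.subarray(0, pos);
}
```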
@amitmalagi, I encountered a similar issue. Did you resolve it?
@tony-lijinwen, I addressed this issue in our producer application.
I am facing the same issue. Please share the fix.
[2016-09-26 13:51:42,569] INFO WorkerSinkTask{id=elasticsearch-schema-sink-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSinkTask)
[2016-09-26 13:52:34,437] INFO WorkerSinkTask{id=elasticsearch-schema-sink-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSinkTask)
[2016-09-26 13:52:34,446] ERROR Task elasticsearch-schema-sink-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
org.apache.kafka.connect.errors.DataException: Failed to deserialize data to Avro:
    at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:109)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:357)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:226)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:170)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:142)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id -1
Caused by: org.apache.kafka.common.errors.SerializationException: Unknown magic byte!
That error occurs because your producer application sends messages in a format the consumer does not expect. The Confluent serializers use a specific wire format: the first byte is 0 (the "magic byte"), the next 4 bytes are the id of the schema in Schema Registry (the schema must be registered for the same topic the messages are sent to), and after that comes the Avro-encoded message itself.
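For reference, a sketch of framing a message that way from Node.js with avsc; the schema and the schema id below are placeholders (in practice the id comes back from registering the schema with Schema Registry under the topic's subject, e.g. "<topic>-value"):

```js
const avro = require('avsc');

// Placeholder schema for illustration.
const type = avro.Type.forSchema({
  type: 'record',
  name: 'Event',
  fields: [{ name: 'body', type: 'string' }],
});

// Placeholder: the real id is returned by Schema Registry on registration.
const schemaId = 1;

function toConfluentWireFormat(record) {
  const payload = type.toBuffer(record);    // Avro-encoded body
  const framed = Buffer.alloc(5 + payload.length);
  framed.writeUInt8(0, 0);                  // magic byte, always 0
  framed.writeInt32BE(schemaId, 1);         // 4-byte big-endian schema id
  payload.copy(framed, 5);                  // Avro payload follows the header
  return framed;
}
```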
@Tseretyan is correct, and this format is documented here. I'm not seeing anything outstanding here, so I'm closing this out.