
Comments (4)

ewencp avatar ewencp commented on July 24, 2024

@krisskross This seems like basically what the Partitioner classes and the partitioner.class config were designed for. They don't close the files explicitly, but they allow you to divide data across files, and once you hit rotation, the file will be closed anyway.

Is there a specific case you're thinking of where you need to close the file sooner? I think one common example might be a time-based partitioner where timestamps are guaranteed to be monotonically increasing, in which case you know when it is safe to rotate a file (although note that the requirement that it is monotonically increasing is harder to guarantee in practice than you may think). Is there some other example you're thinking of that isn't addressed by the existing approach, or are you just trying to reduce the latency of delivery for the final file?
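The time-based partitioner mentioned above can be sketched as follows. This is an illustrative stand-in, not the actual kafka-connect-hdfs Partitioner interface (which operates on SinkRecord objects): the core idea is just mapping a record's timestamp to a directory-style partition string, so that records for different hours land in different files and rotation eventually closes each one.

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

// Hedged sketch of a time-based partitioner: timestamp -> encoded partition
// directory. Class and method names are illustrative, not the connector's API.
class HourlyPathSketch {
    private static final DateTimeFormatter FMT =
        DateTimeFormatter.ofPattern("'year='yyyy/'month='MM/'day='dd/'hour='HH")
                         .withZone(ZoneOffset.UTC);

    // Analogue of encodePartition(SinkRecord): epoch millis -> partition path.
    static String encodePartition(long timestampMs) {
        return FMT.format(Instant.ofEpochMilli(timestampMs));
    }

    public static void main(String[] args) {
        // 2024-07-24T10:15:00Z
        System.out.println(encodePartition(1721816100000L));
        // prints year=2024/month=07/day=24/hour=10
    }
}
```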

from kafka-connect-hdfs.

krisskross avatar krisskross commented on July 24, 2024

The use case I'm seeking is closing files earlier so that they can be processed by batch jobs as early as possible. Also, it might be possible to get larger files (ideally one per partition), which have the benefits of more efficient compression and faster batch processing.

We use a modified version of HourlyPartitioner that looks at the event time (monotonically increasing) of the log, so we can determine when no more events will arrive for a certain partition/hour by looking at the event time (with exceptions). But there is no mechanism to trigger the close and move the file to the final directory.
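The completeness check described above can be sketched like this. It assumes monotonically increasing event time: once the stream's event time has moved past a bucket's end (plus a small grace period for the noted exceptions), that hour bucket can be declared complete. All names here are illustrative, not the connector's API.

```java
// Hedged sketch: track the maximum event time seen, and report an hour bucket
// as complete once event time has passed the bucket's end plus a grace period.
class HourBucketTracker {
    static final long HOUR_MS = 3_600_000L;
    private final long graceMs;
    private long maxEventTimeMs = Long.MIN_VALUE;

    HourBucketTracker(long graceMs) { this.graceMs = graceMs; }

    // Feed each record's event time; returns true once the given hour bucket
    // (identified by its start timestamp) can no longer receive events.
    boolean observe(long eventTimeMs, long bucketStartMs) {
        maxEventTimeMs = Math.max(maxEventTimeMs, eventTimeMs);
        return maxEventTimeMs >= bucketStartMs + HOUR_MS + graceMs;
    }
}
```

A partitioner holding such a tracker would still need the connector-side hook discussed in this thread to actually close and commit the finished file.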


ewencp avatar ewencp commented on July 24, 2024

Sure, so maybe a way to implement this would be to add a method to the Partitioner interface that lets it indicate which files are now safe to close, and have the connector invoke it after each record? I think it'll make tracking the necessary state a bit more complicated (we normally commit all outstanding data, but in this case we'd change several states to handle only a subset of the outstanding data), but probably not by much.
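The proposed per-record hook could look roughly like this. This is a hypothetical sketch, not the actual kafka-connect-hdfs interface; the method name and signature are assumptions. With monotonically increasing event time, crossing an hour boundary means the previous hour's partition can be reported as closable.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Hedged sketch of the proposal: a hook the connector would invoke after
// writing each record, returning encoded partitions now safe to close.
// Names are illustrative, not the real API.
class ClosingHourlyPartitioner {
    private static final long HOUR_MS = 3_600_000L;
    private long currentHour = Long.MIN_VALUE;

    // Stand-in for encodePartition(SinkRecord): hour index as partition key.
    static String encode(long hourIndex) { return "hour=" + hourIndex; }

    // Hypothetical per-record hook: when the event-time hour advances, the
    // previous hour's file can no longer receive records and may be closed.
    Collection<String> onRecord(long eventTimeMs) {
        long hour = eventTimeMs / HOUR_MS;
        List<String> closable = new ArrayList<>();
        if (currentHour != Long.MIN_VALUE && hour > currentHour) {
            closable.add(encode(currentHour));
        }
        currentHour = Math.max(currentHour, hour);
        return closable;
    }
}
```

The connector would then close and commit the returned partitions' files, rather than waiting for size- or time-based rotation.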


krisskross avatar krisskross commented on July 24, 2024

Yes, that sounds like a reasonable way forward. Maybe as a default method on the interface, in order not to break backward compatibility?


