Comments (4)
I've had a bit of a poke around with this in the course of the streams/object-store experiments, and it works pretty well so far. At the moment I've got a JSON object producing a schema repr (some kind of schema summary felt necessary vs. just guessing column names), and a `Vec<String>` -> `ProjectionMask` plugged into `ParquetRecordBatchStreamBuilder`'s `with_projection`.
So, design questions/opinions:
- At the moment, I'm silently ignoring any input columns that just don't exist. Options:
  a. warn, display a succinct repr of the schema, and move on; throw if there are absolutely no valid input columns
  b. throw if any of them are invalid (with the schema and a bit of highlighting)
- Rich reprs for the schema - flipping the `serde` feature of `arrow_schema` seems to be enough.
- Nested schemas - these work perfectly with dot separators (they're embedded in the `ColumnDescriptor`), though a bit of heads-up documentation would probably be warranted, namely that selecting subfields just drops siblings from the parent type. So a table consisting of a struct column `foo { a, b, c }`, after filtering with `['foo.a']`, results in `foo { a }`, not `a`.
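A toy sketch of those subfield-selection semantics (the `project` helper here is purely illustrative, not part of the actual API): selecting `'foo.a'` keeps the struct wrapper and drops siblings, rather than hoisting `a` to the top level.

```javascript
// Illustrative only: apply dotted-path projection to a plain JS object,
// mirroring the behaviour described above for nested Parquet schemas.
function project(row, paths) {
  const out = {};
  for (const path of paths) {
    const keys = path.split(".");
    let src = row, dst = out;
    for (let i = 0; i < keys.length; i++) {
      const k = keys[i];
      if (!(k in src)) break; // missing columns are skipped here
      if (i === keys.length - 1) {
        dst[k] = src[k]; // leaf: copy the value
      } else {
        dst[k] = dst[k] ?? {}; // keep the struct wrapper, drop siblings
        src = src[k];
        dst = dst[k];
      }
    }
  }
  return out;
}

const row = { foo: { a: 1, b: 2, c: 3 } };
console.log(project(row, ["foo.a"])); // { foo: { a: 1 } }, not { a: 1 }
```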
The API looks like this:

```js
let instance = await new AsyncParquetFile("<targetFile>");
const stream = await instance.select([
  'STATE', 'foo_struct.STATE_FIPS'
]).stream(4);
for await (const chunk of stream) {
  console.log(chunk);
  /* rows (after passing into parseRecordBatch) look like
   * {"STATE": "ME", "foo_struct": {"STATE_FIPS": "23"}}
   */
}
```
Tangent

Exposing the `with_row_filter` part of the builder interface would complete the transformation of the `AsyncParquetFile` struct into a (very) primitive equivalent of Polars' `LazyFrame` (without the join abilities, of course).

It would be fascinating to see this in the context of some of the less expensive boolean spatial predicates from geoarrow. Provided they can be squeezed into the constraints imposed by `ArrowPredicate`/`Fn` (which it looks like they can), that would get you full-blown spatial predicate pushdown for... <<10 MB of wasm (more or less instantly paid off by dint of all the saved bandwidth).
from parquet-wasm.
> - At the moment, I'm silently ignoring any input columns that just don't exist. Options:
>   a. warn, display a succinct repr of the schema, and move on; throw if there are absolutely no valid input columns
>   b. throw if any of them are invalid (with the schema and a bit of highlighting)
I'd prefer to throw. The usual workflow I'd expect would be fetching the Parquet metadata first, letting the user pick which columns, and then fetching the rest of the data. It's too easy to be off by one character and miss a ton of data.
> - Rich reprs for the schema - flipping the `serde` feature of `arrow_schema` seems to be enough.
That seems fine. Ideally the repr won't add much bundle size.
Do you think we could reuse arrow JS's repr? I.e. not make our own and only direct users to inspect the repr from an arrow JS schema object? You should be able to get a schema across FFI by treating it as a struct of fields.
> - Nested schemas - these work perfectly with dot separators (they're embedded in the `ColumnDescriptor`), though a bit of heads-up documentation would probably be warranted, namely that selecting subfields just drops siblings from the parent type. So a table consisting of a struct column `foo { a, b, c }`, after filtering with `['foo.a']`, results in `foo { a }`, not `a`.
>
> The API looks like this:
>
> ```js
> let instance = await new AsyncParquetFile("<targetFile>");
> const stream = await instance.select([
>   'STATE', 'foo_struct.STATE_FIPS'
> ]).stream(4);
> for await (const chunk of stream) {
>   console.log(chunk);
>   /* rows (after passing into parseRecordBatch) look like
>    * {"STATE": "ME", "foo_struct": {"STATE_FIPS": "23"}}
>    */
> }
> ```
That all seems reasonable. It's risky to pull columns up to the top level when the leaf names could collide with something else at the root.

I tend to like dot separators, although I was reminded recently that there's no restriction preventing a dot in a column name, right? Would it be better to have something like `{columns: [['foo', 'a'], ['foo', 'b']]}`? It's more verbose; not sure.
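A sketch of why path arrays sidestep the dot ambiguity (the schema shape and `resolve` helper here are hypothetical, not the parquet-wasm API): each path segment is matched literally, so a column whose name literally contains a dot is just a one-element path.

```javascript
// Resolve a path (array of segments) against a toy schema tree.
// Segments are matched literally, so a column named "foo.a" at the root
// cannot be confused with field `a` inside struct `foo`.
function resolve(schema, path) {
  let node = schema;
  for (const segment of path) {
    if (!node.children || !(segment in node.children)) return null;
    node = node.children[segment];
  }
  return node;
}

// Hypothetical schema: a struct `foo` plus a top-level column named "foo.a".
const schema = {
  children: {
    "foo": { children: { "a": { type: "utf8" }, "b": { type: "int64" } } },
    "foo.a": { type: "float64" },
  },
};

console.log(resolve(schema, ["foo", "a"]).type); // "utf8" (nested field)
console.log(resolve(schema, ["foo.a"]).type);    // "float64" (literal name)
// The dotted string "foo.a" could not distinguish these two.
```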
> Tangent
>
> Exposing the `with_row_filter` part of the builder interface would complete the transformation of the `AsyncParquetFile` struct into a (very) primitive equivalent of Polars' `LazyFrame` (without the join abilities, of course). It would be fascinating to see this in the context of some of the less expensive boolean spatial predicates from geoarrow - provided they can be squeezed into the constraints imposed by `ArrowPredicate`/`Fn` (which it looks like they can), that would get you full-blown spatial predicate pushdown for... <<10 MB of wasm (more or less instantly paid off by dint of all the saved bandwidth).
IIRC the row filter only happens after the data is materialized in memory, right? In that case I'd tend to think that parquet-wasm should handle row-group filtering but leave the individual row filtering to other tools, or another function (even in the same memory space). I wrote down some ideas on composition of wasm libraries here
> IIRC the row filter only happens after the data is materialized in memory, right?
Nope, the row filter occurs just before that. The flow is more or less:

1. Do a fetch of the row group with just the columns required for the predicate (the predicates have to provide an explicit `ProjectionMask`) and the row selection mask - e.g. just the state column for `r.state == 'CA'`.
2. Evaluate the predicate and update the row selection mask.
3. Repeat 1-2 in a loop, with each successive iteration taking into account the results of the previous ones - so a predicate that evaluates to false for all rows short-circuits IO for subsequent predicates.
4. If there are any rows that passed all the predicates, spit out byte ranges for all data pages touched by the selected rows × selected columns (the top-level projection mask), then do the bulk (hopefully) of the IO + decoding. [This is where the per-row-group, per-column requests problem came from - no attempt is made to coalesce byte ranges here.]
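That loop can be sketched in miniature (plain JS standing in for the Rust internals; `fetchColumn` and the shapes here are hypothetical, not the parquet crate's actual types):

```javascript
// Toy model of the predicate-pushdown loop: each predicate fetches only its
// own columns for rows still selected, narrows the selection mask, and an
// all-false mask short-circuits IO for the remaining predicates.
function evaluateRowGroup(numRows, predicates, fetchColumn) {
  let selection = new Array(numRows).fill(true);
  for (const pred of predicates) {
    if (!selection.some(Boolean)) break; // nothing left to read: stop fetching
    // 1. fetch only the predicate's column, only for selected rows
    const col = fetchColumn(pred.column, selection);
    // 2. evaluate and intersect with the running selection mask
    selection = selection.map((keep, i) => keep && pred.test(col[i]));
  }
  return selection; // rows passing all predicates drive the main fetch
}

// e.g. r.state == 'CA' over a 5-row group
const states = ["CA", "ME", "CA", "TX", "CA"];
const mask = evaluateRowGroup(
  5,
  [{ column: "state", test: (v) => v === "CA" }],
  (name, sel) => states.map((v, i) => (sel[i] ? v : null)),
);
console.log(mask); // [ true, false, true, false, true ]
```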
A simple hardcoded filter I ran (`r.state == 'CA'`) cut the transfer size and time by 60%.
I agree that specifying the construction of the row filters should be external, but it would have to be provided in some form to the record batch stream builder. The host (by which I mean the code passing in the row filter) would have to be in the same memory space too, likely written in Rust.
Other than geoarrow-wasm[full], geoparquet-wasm makes a lot of sense as a consumer of this extension point - there are several quite high-value, low-cost (in terms of bundle size) hardcoded IO-integrated filters that anyone using the module would want (or not care about paying for):
- bounding box intersection (I used this through pyogrio the other day to skip reading ~90% of the Overture Maps dataset)
- index equality (H3, S2, Quad, Geohash - identical signature really, given they're all uint64s and you're just specifying a column identifier)
- polygon intersection (harder, and in a lot of cases requires loading the column's contents anyway)
Outside of that, if someone wants more elaborate expressions in the IO phase, they can build their own wrapper module easily enough, provided `with_row_filter(RowFilter)` is public (but not wasm-bindgen'd).
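As an illustration of the bounding-box case, a row-group-level filter might look roughly like this (the statistics shape here is hypothetical, not GeoParquet's actual metadata layout): compare the query bbox against per-row-group min/max statistics and skip groups that cannot intersect, before any geometry bytes are fetched.

```javascript
// Standard interval-overlap test on two axis-aligned bounding boxes.
function bboxIntersects(a, b) {
  return a.xmin <= b.xmax && a.xmax >= b.xmin &&
         a.ymin <= b.ymax && a.ymax >= b.ymin;
}

// Return the indices of row groups whose bbox statistics could intersect
// the query; only these need to be fetched and decoded at all.
function rowGroupsToFetch(rowGroupStats, queryBbox) {
  return rowGroupStats
    .map((stats, i) => (bboxIntersects(stats, queryBbox) ? i : -1))
    .filter((i) => i >= 0);
}

// Three row groups with illustrative bbox statistics:
const stats = [
  { xmin: -125, xmax: -114, ymin: 32, ymax: 42 }, // California-ish
  { xmin: -80, xmax: -70, ymin: 40, ymax: 47 },   // New England-ish
  { xmin: -100, xmax: -90, ymin: 25, ymax: 35 },  // Gulf coast-ish
];
const query = { xmin: -122, xmax: -121, ymin: 37, ymax: 38 }; // SF Bay-ish
console.log(rowGroupsToFetch(stats, query)); // [ 0 ]
```

This is the same shape of win as the pyogrio example above: entire row groups are skipped purely from metadata.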
I need to read through that a couple more times to get it, but as a quick note:
> - bounding box intersection (I used this through pyogrio the other day to skip reading ~90% of the Overture Maps dataset)
Yes, I definitely think geoparquet-wasm should have native support for bounding box filtering (ref opengeospatial/geoparquet#191); I think I just wasn't sure whether this filtering should happen at the row group or row level.