kylebarron / parquet-wasm

Rust-based WebAssembly bindings to read and write Apache Parquet data

Home Page: https://kylebarron.dev/parquet-wasm/

License: Apache License 2.0

Languages: Rust 70.42% · JavaScript 6.87% · HTML 0.31% · Python 3.80% · Shell 2.33% · TypeScript 16.26%
Topics: webassembly, wasm, rust, parquet, javascript, arrow, apache-arrow, apache-parquet

parquet-wasm's Issues

Use `&self` on methods on wasm-bindgen structs

When you make a method:

    #[wasm_bindgen]
    pub fn version(self) -> i32 {
        self.0.version()
    }

the `self` receiver consumes the instance, and you can't use it again. So if you try to call `.version()` twice from JS you see:

/Users/kyle/github/rust/parquet-wasm/tmp/arrow1.js:403
        if (this.ptr == 0) throw new Error('Attempt to use a moved value');
                           ^

Error: Attempt to use a moved value
    at FileMetaData.version (/Users/kyle/github/rust/parquet-wasm/tmp/arrow1.js:403:34)
    at evalmachine.<anonymous>:1:10
    at Script.runInThisContext (node:vm:129:12)
    at Object.runInThisContext (node:vm:305:38)
    at run ([eval]:1020:15)
    at onRunRequest ([eval]:864:18)
    at onMessage ([eval]:828:13)
    at process.emit (node:events:520:28)
    at emit (node:internal/child_process:938:14)
    at processTicksAndRejections (node:internal/process/task_queues:84:21)

Instead, you can take a shared reference to `self`:

    #[wasm_bindgen]
    pub fn version(&self) -> i32 {
        self.0.version()
    }

And now `.version()` doesn't consume the instance.

You might also want to consider using a mutable `self` here, instead of making a new instance every time (see the sketch after this snippet):

    pub fn set_writer_version(self, value: WriterVersion) -> Self {
        Self(self.0.set_writer_version(value.to_arrow1()))
    }

Update Readme

Update footnote

[^0]: I originally decoded Parquet files to the Arrow IPC File format, but Arrow JS occasionally threw errors such as `Error: Expected to read 1901288 metadata bytes, but only read 644` when parsing using `arrow.tableFromIPC`. When testing the same buffer in Pyarrow, `pa.ipc.open_file` succeeded but `pa.ipc.open_stream` failed, leading me to believe that the Arrow JS implementation has some bugs in deciding when `arrow.tableFromIPC` should internally use the `RecordBatchStreamReader` vs the `RecordBatchFileReader`.

after the investigation in #19.

Split functions into non-wasm-bindgen helpers

For a while, there will probably be issues with the APIs, either in the wasm bindings, my bindings, or the underlying libraries. It will be necessary to debug these problems outside of the web environment, at least to the extent possible.

To that end I think it'll be very helpful to have a debug CLI, where essentially the exact same binding code is run, but locally instead of in wasm.

This means:

  • Decoupling any JS-specific code from `read_parquet` and `write_parquet`. They should take Rust slices and buffers as input and output, and return Rust errors, not JS errors. (Maybe read up on how `method()?` works, which would make the code a lot cleaner.)
  • Creating an optional feature with a `main.rs` which would provide a CLI over these four functions (see the sketch after the snippet below).
// lib.rs

#[cfg(feature = "arrow1")]
#[wasm_bindgen(js_name = readParquet1)]
pub fn read_parquet(parquet_file: &[u8]) -> Result<Uint8Array, JsValue> {
  match crate::arrow1::read_parquet(parquet_file) {
    // The inner function returns a Rust Vec<u8> that is copied into a Uint8Array here
    Ok(buffer) => {
      let out = Uint8Array::new_with_length(buffer.len() as u32);
      out.copy_from(&buffer);
      Ok(out)
    }
    Err(error) => Err(JsValue::from_str(&format!("{}", error))),
  }
}
// main.rs
// CLI that wraps crate::arrow1::read_parquet and writes output to a local file
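A minimal sketch of that CLI, assuming the shared reader has a signature like `crate::arrow1::read_parquet(&[u8]) -> Result<Vec<u8>, ParquetError>`; the binary name, feature name, and argument handling are illustrative:

    // main.rs, behind an optional feature (e.g. `cli`; the feature name is an assumption)
    use std::fs;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let usage = "usage: parquet-wasm <input.parquet> <output.arrows>";
        let mut args = std::env::args().skip(1);
        let input = args.next().ok_or(usage)?;
        let output = args.next().ok_or(usage)?;

        // Run exactly the same binding code as the wasm path, just natively.
        let parquet_bytes = fs::read(input)?;
        let arrow_ipc = parquet_wasm::arrow1::read_parquet(&parquet_bytes)?;
        fs::write(output, arrow_ipc)?;
        Ok(())
    }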

Document debug CLI

cargo run --example parquet_read --features io_parquet,io_parquet_compression -- 1-partition-lz4.parquet

Return iterator of arrow record batches to JS

Motivation: Parquet and Arrow are chunked formats. Therefore we shouldn't need to wait for the entire dataset to load/parse before getting some data back.

However, I'm still not aware of a way to return an iterable or an async iterable from Rust to JS. To get around this, I think we can "drive" the iteration from JS. Essentially this:

import * as wasm from 'parquet-wasm';

const arr = new Uint8Array(); // Parquet bytes
// name readSchema to align with pyarrow api?
const parquetFile = new wasm.ParquetFile(arr);
const schemaIPC = parquetFile.schema();
for (let i = 0; i < parquetFile.numRowGroups; i++) {
  const recordBatchIPC = parquetFile.readRowGroup(i);
}

And ideally we'll have an async version of this too.
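A rough sketch of what the Rust side might look like, with the iteration driven call-by-call from JS; the struct shape, method names, and stubbed bodies are assumptions, not an existing API:

    use js_sys::Uint8Array;
    use wasm_bindgen::prelude::*;

    #[wasm_bindgen]
    pub struct ParquetFile {
        buffer: Vec<u8>,
        num_row_groups: usize,
    }

    #[wasm_bindgen]
    impl ParquetFile {
        #[wasm_bindgen(constructor)]
        pub fn new(parquet_file: &[u8]) -> Result<ParquetFile, JsValue> {
            // Would parse just the footer metadata to learn the row group count.
            unimplemented!()
        }

        /// Exposed to JS as the `numRowGroups` property.
        #[wasm_bindgen(getter, js_name = numRowGroups)]
        pub fn num_row_groups(&self) -> usize {
            self.num_row_groups
        }

        /// Would return the Arrow schema as Arrow IPC bytes.
        pub fn schema(&self) -> Result<Uint8Array, JsValue> {
            unimplemented!()
        }

        /// Would decode a single row group and return it as Arrow IPC bytes.
        #[wasm_bindgen(js_name = readRowGroup)]
        pub fn read_row_group(&self, i: usize) -> Result<Uint8Array, JsValue> {
            unimplemented!()
        }
    }

An async variant could follow the same shape, returning Promises via wasm-bindgen-futures.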

Helper to copy `Vec<u8>` to `Uint8Array`

I.e. each public binding should be able to change from

pub fn read_parquet(parquet_file: &[u8]) -> Result<Uint8Array, JsValue> {
    let buffer = match crate::arrow1::reader::read_parquet(parquet_file) {
        // This function would return a rust vec that would be copied to a Uint8Array here
        Ok(buffer) => buffer,
        Err(error) => return Err(JsValue::from_str(format!("{}", error).as_str())),
    };
    let return_len = match (buffer.len() as usize).try_into() {
        Ok(return_len) => return_len,
        Err(error) => return Err(JsValue::from_str(format!("{}", error).as_str())),
    };
    let return_vec = Uint8Array::new_with_length(return_len);
    return_vec.copy_from(&buffer);
    return Ok(return_vec);
}

to

pub fn read_parquet(parquet_file: &[u8]) -> Result<Uint8Array, JsValue> {
    let buffer = crate::arrow1::reader::read_parquet(parquet_file)?;
    Ok(crate::utils::copy_vec_to_uint8array(buffer))
}

There should be some way to make the `?` work in this context; you might need to add an impl for converting from `ParquetError` to a JS error.
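A sketch of both pieces, assuming arrow1 is backed by the parquet crate. A direct `impl From<ParquetError> for JsValue` is blocked by the orphan rule (both types are foreign), so one option is a local newtype; all names here are assumptions:

    // utils.rs
    use js_sys::Uint8Array;
    use wasm_bindgen::JsValue;

    pub fn copy_vec_to_uint8array(buffer: Vec<u8>) -> Uint8Array {
        // Allocate on the JS heap and copy, rather than viewing wasm memory.
        let out = Uint8Array::new_with_length(buffer.len() as u32);
        out.copy_from(&buffer);
        out
    }

    /// Local wrapper so an error-to-JsValue conversion can be written here;
    /// the orphan rule forbids implementing From between two foreign types.
    pub struct WasmParquetError(pub parquet::errors::ParquetError);

    impl From<WasmParquetError> for JsValue {
        fn from(err: WasmParquetError) -> JsValue {
            JsValue::from_str(&format!("{}", err.0))
        }
    }

If `crate::arrow1::reader::read_parquet` returned `Result<Vec<u8>, WasmParquetError>`, the `?` in the binding above would convert the error into a `JsValue` automatically.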

Arrow-rs debugging (Error: Expected to read 2166784 metadata bytes, but only read 486.) [Solved]

For a while, before switching to arrow2/parquet2, (i.e. up until this commit) I was using the arrow and parquet crates from https://github.com/apache/arrow-rs. I repeatedly had an issue with some files, where the Parquet file would be readable in Rust, and then the generated Arrow IPC data wouldn't be readable in JS. This caused a ton of frustration, and switching to Arrow2/Parquet2 seemed to solve it, but I didn't know why.

With more debugging (the crucial step was logging the vector in Rust right before returning, and the Uint8Array in JS), I realized that the data wasn't being transferred back to JS correctly! E.g. when testing at this commit with the test file 1-partition-snappy.parquet, the arrays on the JS and Rust sides had the same length but different data.

It appears the entire issue was the reliance on unsafe { Uint8Array::view(&file) }. When I instead create a new Uint8Array and copy the file into the newly created Uint8Array, the array in JS and in Rust matches, and the file is read successfully by Arrow JS.

From the wasm-bindgen docs:

Views into WebAssembly memory are only valid so long as the backing buffer isn’t resized in JS. Once this function is called any future calls to Box::new (or malloc of any form) may cause the returned value here to be invalidated. Use with caution!

Additionally the returned object can be safely mutated but the input slice isn’t guaranteed to be mutable.

Finally, the returned object is disconnected from the input slice’s lifetime, so there’s no guarantee that the data is read at the right time.

To be honest, I'm not entirely sure where I was violating these principles (or maybe it was some internals of the arrow FileWriter). So it makes sense (at least for now) to remove the unsafe code and create a new Uint8Array to solve this 🙂.

Note that creating another Uint8Array buffer puts more memory pressure on WebAssembly, which seems to run out of memory after using about 1GB, but that's a problem for the future (ideally we'll be able to return a stream of record batches to JS).

TypeError: wasm.__wbindgen_add_to_stack_pointer is not a function

Hi,

I was looking for a JS library for reading parquet files on the browser, and I found parquet-wasm.

My current setup is the following:

import { readParquet } from "parquet-wasm";

const parquetFile = files[0]; // File picked from input tag
const fileData = new Blob([parquetFile]);
const promise = new Promise(getBuffer(fileData));

promise
	.then(function (data) {
		const arrowStream = readParquet(data);
	})
	.catch(function (err) {
		console.log("Error: ", err);
	});

function getBuffer(fileData) {
	return function (resolve) {
		const reader = new FileReader();
		reader.readAsArrayBuffer(fileData);
		reader.onload = function () {
			const arrayBuffer = reader.result;
			const bytes = new Uint8Array(arrayBuffer);
			resolve(bytes);
		};
	};
}

Unfortunately I get the following error: `Error: TypeError: wasm.__wbindgen_add_to_stack_pointer is not a function`.

Do you have any suggestions? Thanks for the help.

Docstrings for exported functions

Any wasm-bindgen function annotated with a `///` doc comment before the function (it seems to work when it's on the line before `#[wasm_bindgen]`) becomes a JSDoc comment when built 🙀 😍. So it will be nice to copy all documentation into the function docstrings.
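For example (a minimal sketch; the function and its name are illustrative, not part of the current API):

    use wasm_bindgen::prelude::*;

    /// Returns the version of this library.
    ///
    /// This doc comment shows up as JSDoc on the generated JS binding
    /// and in the emitted `.d.ts` typings.
    #[wasm_bindgen(js_name = libraryVersion)]
    pub fn library_version() -> String {
        env!("CARGO_PKG_VERSION").to_string()
    }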

Create debug CLI

Use the same underlying read/write functions, but add a CLI through `main.rs`. This should be helpful when debugging why a file crashes in the browser.
