doyaaaaaken / kotlin-csv

Pure Kotlin CSV Reader/Writer

Home Page: https://kenta-koyama-biz.gitbook.io/kotlin-csv/

License: Apache License 2.0

Kotlin 100.00%
kotlin csv dsl kotlin-csv kotlin-multiplatform

kotlin-csv's Introduction

kotlin-csv


Pure Kotlin CSV Reader/Writer.

Design goals

1. Simple interface

  • easy to set up
  • DSL-based, so code is easy to read

2. Automatic handling of I/O

  • in Java, files always need to be closed explicitly; this is boilerplate code and unfriendly to non-JVM users.
  • kotlin-csv provides interfaces that close files automatically, so you don't have to think about it.

3. Multiplatform

  • Supports Kotlin Multiplatform projects.

Usage

Download

Gradle

// Gradle Kotlin DSL
implementation("com.github.doyaaaaaken:kotlin-csv-jvm:1.9.3") // for JVM platform
implementation("com.github.doyaaaaaken:kotlin-csv-js:1.9.3") // for Kotlin JS platform

// Gradle Groovy DSL
implementation 'com.github.doyaaaaaken:kotlin-csv-jvm:1.9.3' // for JVM platform
implementation 'com.github.doyaaaaaken:kotlin-csv-js:1.9.3' // for Kotlin JS platform

Maven

<dependency>
  <groupId>com.github.doyaaaaaken</groupId>
  <artifactId>kotlin-csv-jvm</artifactId>
  <version>1.9.3</version>
</dependency>
<dependency>
  <groupId>com.github.doyaaaaaken</groupId>
  <artifactId>kotlin-csv-js</artifactId>
  <version>1.9.3</version>
</dependency>
@file:DependsOn("com.github.doyaaaaaken:kotlin-csv-jvm:1.9.3") // for JVM platform
@file:DependsOn("com.github.doyaaaaaken:kotlin-csv-js:1.9.3") // for Kotlin JS platform

Examples

CSV Read examples

Simple case

You can read CSV data from a String, a java.io.File, or a java.io.InputStream object.
No I/O handling is needed (no need to call the use, close, or flush methods).

// read from `String`
val csvData: String = "a,b,c\nd,e,f"
val rows: List<List<String>> = csvReader().readAll(csvData)

// read from `java.io.File`
val file: File = File("test.csv")
val rows: List<List<String>> = csvReader().readAll(file)

Read with header

val csvData: String = "a,b,c\nd,e,f"
val rows: List<Map<String, String>> = csvReader().readAllWithHeader(csvData)
println(rows) //[{a=d, b=e, c=f}]

Read as Sequence

The Sequence type allows lazy execution:
it starts processing each row before all of the row data has been read.

Learn more about the Sequence type on Kotlin's official documentation.

csvReader().open("test1.csv") {
    readAllAsSequence().forEach { row: List<String> ->
        //Do something
        println(row) //[a, b, c]
    }
}

csvReader().open("test2.csv") {
    readAllWithHeaderAsSequence().forEach { row: Map<String, String> ->
        //Do something
        println(row) //{id=1, name=doyaaaaaken}
    }
}

NOTE: readAllAsSequence and readAllWithHeaderAsSequence methods can only be called within the open lambda block. The input stream is closed after the open lambda block.
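Since the stream is closed when the block ends, materialize the result inside the block if you still need it afterwards. A minimal sketch (open returns the value of its lambda, as in the issue example further below):

val rows: List<List<String>> = csvReader().open("test1.csv") {
    readAllAsSequence().toList() // consume the sequence before the stream is closed
}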

Read line by line

If you want to process a file line by line, use the open method and then call the readNext method inside the lambda block to read one row at a time.

csvReader().open("test.csv") {
    readNext()
}
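A sketch of a simple read loop, assuming readNext returns null at the end of the file:

csvReader().open("test.csv") {
    var row: List<String>? = readNext()
    while (row != null) {
        println(row)
        row = readNext()
    }
}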

Read in a Suspending Function

csvReader().openAsync("test.csv") {
    val container = mutableListOf<List<String>>()
    delay(100) //other suspending task
    readAllAsSequence().asFlow().collect { row ->
        delay(100) // other suspending task
        container.add(row)
    }
}

Note: openAsync can only be called from within a coroutine or another suspending function.

Customize

When you create a CsvReader, you can configure its read options:

// options for reading a TSV file
val tsvReader = csvReader {
    charset = "ISO_8859_1"
    quoteChar = '"'
    delimiter = '\t'
    escapeChar = '\\'
}
| Option | Default value | Description |
| --- | --- | --- |
| logger | no-op | Logger instance for logging debug information at runtime. |
| charset | UTF-8 | Charset encoding. The value must be supported by java.nio.charset.Charset. |
| quoteChar | " | Character used to quote fields. |
| delimiter | , | Character used as the delimiter between fields. Use '\t' when reading a TSV file. |
| escapeChar | " | Character used to escape a quote inside a field string. Normally you don't have to change this option; see the detailed comment on ICsvReaderContext. |
| skipEmptyLine | false | Whether to skip empty lines or error out on them. |
| autoRenameDuplicateHeaders | false | Whether to automatically rename duplicate headers or throw an exception. |
| skipMissMatchedRow | false | Deprecated: replace with appropriate values of excessFieldsRowBehaviour and insufficientFieldsRowBehaviour (e.g. both set to IGNORE). Whether to skip an invalid row. If ignoreExcessCols is true, only rows with fewer than the expected number of columns are skipped. |
| excessFieldsRowBehaviour | ERROR | Behaviour when a row has more fields (columns) than expected: ERROR (default), IGNORE (skip the row), or TRIM (remove the excess fields at the end of the row to match the expected number of fields). |
| insufficientFieldsRowBehaviour | ERROR | Behaviour when a row has fewer fields (columns) than expected: ERROR (default), IGNORE (skip the row), or EMPTY_STRING (replace missing fields with an empty string). |
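For example, a reader that tolerates ragged rows and blank lines might be configured roughly like this (a sketch; the option names follow the table above, while the enum import paths are assumptions):

import com.github.doyaaaaaken.kotlincsv.dsl.csvReader
// assumed import paths for the behaviour enums
import com.github.doyaaaaaken.kotlincsv.dsl.context.ExcessFieldsRowBehaviour
import com.github.doyaaaaaken.kotlincsv.dsl.context.InsufficientFieldsRowBehaviour

val lenientReader = csvReader {
    skipEmptyLine = true
    excessFieldsRowBehaviour = ExcessFieldsRowBehaviour.TRIM                     // drop extra trailing fields
    insufficientFieldsRowBehaviour = InsufficientFieldsRowBehaviour.EMPTY_STRING // pad short rows with ""
}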

CSV Write examples

Simple case

You can start writing a CSV file in one line; no I/O handling is needed (no need to call the use, close, or flush methods):

val rows = listOf(listOf("a", "b", "c"), listOf("d", "e", "f"))
csvWriter().writeAll(rows, "test.csv")

// to append data to the end of the file, pass `append = true`.
csvWriter().writeAll(rows, "test.csv", append = true)

// You can also write into an OutputStream.
csvWriter().writeAll(rows, File("test.csv").outputStream())

You can also write a CSV file line by line with the open method:

val row1 = listOf("a", "b", "c")
val row2 = listOf("d", "e", "f")

csvWriter().open("test.csv") {
    writeRow(row1)
    writeRow(row2)
    writeRow("g", "h", "i")
    writeRows(listOf(row1, row2))
}

Write in a Suspending Function

val rows = listOf(listOf("a", "b", "c"), listOf("d", "e", "f")).asSequence()
csvWriter().openAsync(testFileName) {
    delay(100) //other suspending task
    rows.asFlow().collect {
        delay(100) // other suspending task
        writeRow(it)
    }
}

Write as String

val rows = listOf(listOf("a", "b", "c"), listOf("d", "e", "f"))
val csvString: String = csvWriter().writeAllAsString(rows) //a,b,c\r\nd,e,f\r\n

Long-running write (manual control of file close)

If you want to close the file writer manually for performance reasons (e.g. a streaming scenario), you can use openAndGetRawWriter to get a raw CsvFileWriter.
DO NOT forget to close the writer!

val row1 = listOf("a", "b", "c")

@OptIn(KotlinCsvExperimental::class)
val writer = csvWriter().openAndGetRawWriter("test.csv")
writer.writeRow(row1)
writer.close()

Customize

When you create a CsvWriter, you can choose write options.

val writer = csvWriter {
    charset = "ISO_8859_1"
    delimiter = '\t'
    nullCode = "NULL"
    lineTerminator = "\n"
    outputLastLineTerminator = true
    quote {
        mode = WriteQuoteMode.ALL
        char = '\''
    }
}
| Option | Default value | Description |
| --- | --- | --- |
| charset | UTF-8 | Charset encoding. The value must be supported by java.nio.charset.Charset. |
| delimiter | , | Character used as the delimiter between fields. Use '\t' when writing a TSV file. |
| nullCode | (empty string) | String written when a field value is null. |
| lineTerminator | \r\n | String used as the line terminator. |
| outputLastLineTerminator | true | Whether to output a line break at the end of the file. |
| prependBOM | false | Whether to output a BOM (Byte Order Mark) at the beginning of the file. |
| quote.char | " | Character used to quote each field. |
| quote.mode | CANONICAL | Quote mode. CANONICAL: only quote fields containing special characters (quoteChar, delimiter, line feed), as the CSV specification requires. ALL: quote all fields. NON_NUMERIC: quote non-numeric fields (e.g. 1,"a",2.3). |
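As an illustration of these options, here is a sketch of a writer that quotes non-numeric fields and writes null values as NULL (the WriteQuoteMode import path is an assumption):

import com.github.doyaaaaaken.kotlincsv.dsl.csvWriter
import com.github.doyaaaaaken.kotlincsv.dsl.context.WriteQuoteMode // assumed import path

val customWriter = csvWriter {
    nullCode = "NULL"
    lineTerminator = "\n"
    quote {
        mode = WriteQuoteMode.NON_NUMERIC
    }
}
val csv: String = customWriter.writeAllAsString(listOf(listOf(1, "a", null)))
// per the option descriptions above, numeric fields stay unquoted, strings are quoted,
// and the null field is written using nullCode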

Links

Documents

Libraries which use kotlin-csv

Miscellaneous

๐Ÿค Contributing

Contributions, issues and feature requests are welcome! If you have questions, ask away in Kotlin Slack's kotlin-csv room.

💻 Development

git clone git@github.com:doyaaaaaken/kotlin-csv.git
cd kotlin-csv
./gradlew check

Show your support

Give a ⭐️ if this project helped you!

๐Ÿ“ License

Copyright © 2019 doyaaaaaken.
This project is licensed under Apache 2.0.


This project is inspired ❤️ by scala-csv.

This README was generated with ❤️ by readme-md-generator

kotlin-csv's People

Contributors

atsuki, blackmo18, doyaaaaaken, floern, jomof, jul1u5, juliansudendorf, kengotoda, kev3978, leocolman, mykhaylo-, pgebert, popcornac, satou-aaaaa, skaldebane, starsep, t45k, thijsiez, tmdgusya, vanniktech, xeruf


kotlin-csv's Issues

use suspend function inside lambda of `open` method

In the code below, a compile error occurs because a suspend function cannot be called inside the lambda of the open function.
So make it callable.

suspend fun processRow(row: List<String>): List<String> {
    return row.map { "prefix-$it" }
}

val rows: List<List<String>> = csvReader().open("test.csv") {
    readAllAsSequence()
        .map { row -> processRow(row) } // compile error: processRow is a suspend function, so it cannot be called inside this lambda
        .toList()
}

Discussion: https://kotlinlang.slack.com/archives/CMAL3470A/p1601651001001000

ASCII NULL characters between CSV strings

Hey!
I've parsed a CSV and found that there is an ASCII NULL character between every character.

I've made an HTTP request with a ByteArrayInputStream as follows:

val result = httpClient.get<HttpStatement> {
  url(blobUrl)
}.execute() { response: HttpResponse ->
  val channel: ByteReadChannel = response.receive()
  val byteIn = ByteArrayInputStream(channel.toByteArray())
  csvReader {
    delimiter = ','
    skipEmptyLine = true
    skipMissMatchedRow = true
  }.readAll(byteIn)
}

The output is a List<List<String>>, which is totally correct.

// when I go through all elements like:
map { list ->
    list[0].forEach { c: Char ->
    println(c.toInt())
     }
}
// the output is:
0
50
0
48
0
50
0
49
0
45
0
48
0
49
0
45
0
48
0
49
0

This only happens with one CSV response, and I'm not sure why. It's a report from the Google Play Store.
The same code works perfectly with other CSV files.

I've solved it by replacing the NULL char manually, like:

list[0].replace(Char.MIN_VALUE.toString(), "")

// e.g.
list[0].replace(Char.MIN_VALUE.toString(), "").forEach { c: Char ->
  println(c.toInt())
}

which returns:

50
48
50
49
45
48
49
45
48
49

I'm not sure if it would be interesting to handle this out of the box?

Have a nice day!

EROFS (Read-only file system)

I'm getting this error, please help:
Caused by: java.io.FileNotFoundException: test.csv: open failed: EROFS (Read-only file system)

Problem with parsing CSV file with spaces and colon

Good morning,

I'm trying to use kotlin-csv on a comma-delimited file (input stream), but I think there is a problem with how spaces and colons are handled.

In particular, these are the first lines of the file:

Device serial,Date ,Temperature 51 (Medium) °C
11869,2021-02-09 00:14:59,7.2
11869,2021-02-09 00:30:01,7.1
11869,2021-02-09 00:44:59,7.2
11869,2021-02-09 00:59:59,7.4
11869,2021-02-09 01:14:58,7.5
11869,2021-02-09 01:29:58,7.5
11869,2021-02-09 01:44:58,7.3
11869,2021-02-09 01:59:58,7.2
11869,2021-02-09 02:14:58,7.2
11869,2021-02-09 02:29:58,7.2
11869,2021-02-09 02:44:58,7.2
11869,2021-02-09 02:59:57,7.3
11869,2021-02-09 03:14:57,7.2
11869,2021-02-09 03:29:57,7.3
11869,2021-02-09 03:44:57,7.2
11869,2021-02-09 03:59:57,7.4

...while this is the script:

data class DataClass(
    val FirstColumn: String,
    val SecondColumn: String,
    val ThirdColumn: String
)

fun parse(data: InputStream): List<DataClass>? {
    val list = mutableListOf<DataClass>()

    try {
        val rows: List<List<String>> = csvReader().readAll(data)

        for (i in rows) {
            var firstColumn: String = i[0]
            var secondColumn: String = i[1]
            var thirdColumn: String = i[2]

            list.add(DataClass(FirstColumn = firstColumn, SecondColumn = secondColumn, ThirdColumn = thirdColumn))
        }
    } catch (e: Exception) {
        e.printStackTrace()
    }
    return list
}

Unfortunately only the first column of each row is correctly identified in the output; for example (first row):
first column: 11869
second column: 2021-02-09
third column: 0
No other columns are detected.

Where is my mistake?
Thank you

Handle duplicated headers

Hey @doyaaaaaken thank you very much for this library!

val duplicated = findDuplicate(headers)
if (duplicated != null) throw MalformedCSVException("header '$duplicated' is duplicated")

When reading all lines with headers, a duplicate check is performed and a MalformedCSVException is thrown.
Since columns in a row are accessed by their index, headers could simply be deduplicated by appending something like an occurrence indicator to the header name.

Example:

| just | some | example | example | headers |
| --- | --- | --- | --- | --- |
| A | B | C | D | E |

The header example appears twice. According to the suggestion, the second occurrence could be named example_01.
The main benefit would be that no runtime exception is thrown and there is no need to rename columns in the original file to have it processed.

I know this would introduce some "magic" and I hope it wouldn't collide with your goal of having a simple library.
The required functionality would be just a few lines of code and I'd gladly do a PR myself; I just wanted to get your thoughts on this first.
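For reference, the reader options listed earlier include an autoRenameDuplicateHeaders flag, which appears to cover this request; a minimal sketch of using it (the file name is just an example):

val rows: List<Map<String, String>> = csvReader {
    autoRenameDuplicateHeaders = true
}.readAllWithHeader(File("example.csv"))
// duplicate header names are renamed instead of MalformedCSVException being thrown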

Option to skip rows with a mismatched number of fields?

Is it possible to add an option to skip (or even read) rows with a different number of fields than the other rows, instead of throwing an exception? I'm willing to PR it myself if you're open to adding this option.

Let me know what your thoughts are on this.

CsvFileReader#readAllAsSequence fails on non-equal sized rows

Describe the bug
If rows are not equal-sized, an exception is thrown:
com.github.doyaaaaaken.kotlincsv.util.CSVFieldNumDifferentException: Fields num seems to be 4 on each row, but on 2th csv row, fields num is 3.

To Reproduce

csvWriter().open(csvFile) {
    writeRow(listOf("a"))
    writeRow(listOf("a", "b"))
}
csvReader().open(csvFile) {
    readAllAsSequence()
}

Expected behavior
Missing cells are treated as nulls or empty strings.

Environment

  • kotlin-csv version 0.11.0
  • OS: Android 10

Screenshots
N/A

java.lang.NoClassDefFoundError: mu/KotlinLogging

Describe the problem

It's not quite a bug, but I'm stuck on this exception:

Caused by: java.lang.NoClassDefFoundError: mu/KotlinLogging
	at com.github.doyaaaaaken.kotlincsv.client.CsvFileReader.<init>(CsvFileReader.kt:21)
	at com.github.doyaaaaaken.kotlincsv.client.CsvReader.open(CsvReader.kt:129)
	at com.github.doyaaaaaken.kotlincsv.client.CsvReader.readAll(CsvReader.kt:48)

It happens only on Ubuntu; macOS is still fine.

Environment

  • kotlin-csv version: 0.11.0
  • java version: java8
  • kotlin version: 1.4.10
  • OS: Ubuntu 18.04.5

Any suggestion on this? Thank you!

Error AsSequence

val file = File("file.csv") // format ZoneDateTime,BigDecimal
val values = csvReader.open(file) {
    val listStr = readAll()
    val size = listStr.map { ZonedDateTime.parse(it[0]) to BigDecimal(it[1]) }.size
    println(size)
    readAllAsSequence().map { ZonedDateTime.parse(it[0]) to BigDecimal(it[1]) }.toMap()
}

println(values.size)

output:

1490
0

PS: Sorry, I realized the error: the data has already been consumed by readAll, so it needs to be read again.

Adding the kotlin-csv-jvm dependency pulls testing libraries into the runtime

Describe the bug
When building an application (using the Gradle distZip task from the application plugin) that depends on kotlin-csv, test libraries are pulled into the created application artifact.

To Reproduce
Create an empty basic Kotlin project
and add the dependency implementation("com.github.doyaaaaaken:kotlin-csv-jvm:0.10.1")

Expected behavior
No testing libraries in the artifact. To check:

  • run gradle dependencies

    • runtimeClasspath should not contain kotlin-test library
  • run gradle distZip with application plugin

    • created zip should not contain testing libraries

Environment

  • kotlin-csv: 0.10.1
  • java version: java8
  • gradle: 6.5
  • OS: Win

Feature request: Differentiate between empty string and null value

readAllWithHeader yields a List<Map<String, String>>, and hence empty columns are read as empty strings, so we get "" for both col1 and col2 in the following example:

"col1","col2"
"",

I'd really like to get null for col2 here (this of course only makes sense if all strings are quoted, otherwise it wouldn't be clear how to interpret empty columns). I understand that you can't change the result to List<Map<String, String?>> now, but maybe you could add a nullCode option for reading, as it already exists for writing. The default value would be an empty string "" (= current behavior). I could then simply do

val nullCode = "NULL"
val rows = csvReader(nullCode=nullCode).readAllWithHeader(inputStream)
    .map { row -> row.mapValues { col -> if (col.value == nullCode) null else col.value } }

At first glance it seems that it would only require changing

delimiter -> {
    flushField()
    state = ParseState.DELIMITER
}
'\n', '\u2028', '\u2029', '\u0085' -> {
    flushField()
    state = ParseState.END
}
'\r' -> {
    if (nextCh == '\n') pos += 1
    flushField()
    state = ParseState.END
}

to

                    delimiter -> {
                        field.append(nullCode)
                        flushField()
                        state = ParseState.DELIMITER
                    }
                    '\n', '\u2028', '\u2029', '\u0085' -> {
                        field.append(nullCode)
                        flushField()
                        state = ParseState.END
                    }
                    '\r' -> {
                        if (nextCh == '\n') pos += 1
                        field.append(nullCode)
                        flushField()
                        state = ParseState.END
                    }

and the same for

delimiter -> {
    flushField()
    state = ParseState.DELIMITER
}
'\n', '\u2028', '\u2029', '\u0085' -> {
    flushField()
    state = ParseState.END
}
'\r' -> {
    if (nextCh == '\n') pos += 1
    flushField()
    state = ParseState.END
}

but I didn't check it thoroughly.

Long-running write

Please allow writing to a CSV file without having to close it.
I have a streaming scenario where I need to write each piece of data I receive to CSV. Closing and reopening the file after each batch would be suboptimal.

Thanks
David

number of fields in a row has to be based on the header

To reproduce

Have a CSV file with a header row with 3 columns and two data rows: the first data row with two columns, the second with three. Like this:

First name,Last name,Citizenship
John,Bobkins
Michael,Pepkins,US

While invoking readAllWithHeaderAsSequence on this file, a CSVFieldNumDifferentException is thrown saying that two columns are expected but three are found. It happens because the fieldsNum variable in CsvFileReader.kt is initialized from the first data row, while it should be initialized from the header row.

Expected behavior
The following code has to return two rows:

csvReader().open(filePath) {
    readAllWithHeaderAsSequence().forEach {
        // . . .
    }
}

Environment

  • kotlin-csv version 0.11.1
  • java version - java8
  • OS: Windows 10

java.lang.NoClassDefFoundError: com/github/doyaaaaaken/kotlincsv/dsl/CsvReaderDslKt

Describe the bug
Cannot find the dsl

To Reproduce

plugins {
	kotlin("jvm") version "1.6.10"
        ...
}
...
dependencies {
	implementation("com.github.doyaaaaaken:kotlin-csv-jvm:1.2.0")
        ...
}
...
val rows = csvReader().readAll(inputStream) // throws error

java.lang.NoClassDefFoundError: com/github/doyaaaaaken/kotlincsv/dsl/CsvReaderDslKt


Environment

  • kotlin-csv version: 1.6.10
  • java version: 11.0.3
  • OS: MacOS


Write Without Header

With the current implementation, whenever we write rows, the first row is automatically added as a header. There should be an option to say that no header is needed.

Remove logger 3rd party library

Quickly looking at the code, it seems like there's only one log statement:

logger.info { "skip miss matched row. [csv row num = ${idx + 1}, fields num = ${row.size}, fields num of first row = $fieldsNumInRow]" }

Do we really need to pull an entire library for logging?

implementation("io.github.microutils:kotlin-logging:2.0.11")

I'm an Android user and currently that log would go basically nowhere.

Introduce BOM for Microsoft applications

Hey there,

thank you very much for this great project.

Microsoft applications, for some reason, seem to require a BOM to parse, for example, UTF-8 files correctly, even though there is no byte order in UTF-8 like there is in UTF-16/32. In order to open a created CSV file correctly, I suggest adding this special BOM (for UTF-8 it is the three bytes 0xEF, 0xBB and 0xBF at the start of the file), even when the csvWriter is configured with Charsets.UTF_8.name().

Why this is undocumented, and why Excel seems to require a BOM for UTF-8, I don't know; these might be good questions for the Excel team at Microsoft.

What do you think, or do you have any suggestions for solving this problem?
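For reference, the write options listed earlier include a prependBOM flag; a minimal sketch of using it (the file name and data are just examples):

import com.github.doyaaaaaken.kotlincsv.dsl.csvWriter

val rows = listOf(listOf("col1", "col2"), listOf("a", "b"))
csvWriter {
    charset = Charsets.UTF_8.name()
    prependBOM = true // writes 0xEF 0xBB 0xBF at the start of the file so Excel detects UTF-8
}.writeAll(rows, "report.csv")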

Improvement: Read one row at a time

Currently the only way to interact with a CSV is to parse all rows. Two use-cases that this does not cover are:

  • Reading only the header. This is useful if you wish to provide a breakdown of what is included in the file. While it should be trivial to do without a library, the existence of this library and its parsing logic supports the position that this is a non-trivial task.
  • Reading row-by-row, which is arguably a superset of the former use-case. This would be helpful when interacting with asynchronous workflows. One could attempt to read a single row from a piped input stream, and the library throws an exception when another line cannot be read in its entirety (as it does now with the full text). The producer can then continue to populate the input stream as data becomes available. The end result would be an asynchronous stream of rows (which I am not suggesting should be included in this library, but these changes would make this possible).

Ability to read line by line file with header

It is very common for CSV files to have headers. However, I can't always (or don't always need to) store the whole file in memory. Right now it is possible either to read a file line by line, or to read all of the data with the header. It would be really useful to be able to do both at the same time.
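For reference, the readAllWithHeaderAsSequence method shown earlier streams rows as maps without loading the whole file into memory; a minimal sketch (handleRow is a hypothetical callback):

csvReader().open("large.csv") {
    readAllWithHeaderAsSequence().forEach { row: Map<String, String> ->
        handleRow(row) // process one row at a time
    }
}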

Reuse config between reader and writer

When reading and writing you usually want to use the same config, e.g. charset, quoteChar, etc.

It would be great if we could write this config once, and then reuse in both read and write.

Something like:

val context = CsvContext {
    charset = "UTF-8"
}

csvReader(context)...
csvWriter(context)...

Functions that use lambdas should be inlined where possible

I noticed that functions using lambdas aren't utilizing Kotlin's inline keyword. This could have avoidable performance impacts.

Take, for example, this function:

fun csvReader(init: CsvReaderContext.() -> Unit = {}): CsvReader {
    val context = CsvReaderContext().apply(init)
    return CsvReader(context)
}

The JVM doesn't have higher-order functions, so Kotlin must generate a class (a "SAM type") with the lambda's code in a single method. This doesn't matter too much if Kotlin can generate a singleton object, but in this case it can't, as CsvReaderContext() is captured in the closure. So, every time this function is invoked the lambda's class must be instantiated with CsvReaderContext() in a field, invoked, then garbage collected right after. (Correct me if I'm wrong)

*Small correction: this is a bad example. Looking at the bytecode, the reader context is passed as a method parameter.

I'm not sure how this works on other platforms, but on the JVM this impacts performance.

To avoid this, Kotlin provides inline functions, which inline the function's bytecode into where it's used. This mitigates the performance issues above at the expense of bytecode size being larger. If internal types or functions are used, you can add the @PublishedApi annotation to allow them to be accessed by the function, which makes whatever it's applied to public in the bytecode but obeyed by Kotlin.
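As an illustration of the proposal (a sketch, not the library's actual code), the factory function shown above could be marked inline so the lambda body is compiled into each call site:

// hypothetical inlined variant of the csvReader factory function
inline fun csvReader(init: CsvReaderContext.() -> Unit = {}): CsvReader {
    val context = CsvReaderContext().apply(init)
    return CsvReader(context)
}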

A more impactful example would be the open functions in CsvReader.kt and CsvWriter.kt

Now, whether this is that big of a deal in this case is debatable.

Skip empty line option on reading

According to the CSV specification, an empty line between CSV rows is not allowed.
But there is demand for reading that kind of file.

So we want to be able to set it as a csvReader option, like below.

val reader = csvReader {
    skipEmptyLine = true
}

val str = """a,b,c

d,e,f
"""
//can read csv containing empty line.
reader.read(str)
