joshclose / csvhelper

Library to help reading and writing CSV files

Home Page: http://joshclose.github.io/CsvHelper/

License: Other


csvhelper's Introduction

CsvHelper


A library for reading and writing CSV files. Extremely fast, flexible, and easy to use. Supports reading and writing of custom class objects.

Install

Package Manager Console

PM> Install-Package CsvHelper

.NET CLI Console

> dotnet add package CsvHelper

Documentation

http://joshclose.github.io/CsvHelper/

Building the Documentation

Run the build-docs.cmd file.

License

Dual licensed

Microsoft Public License (MS-PL)

http://www.opensource.org/licenses/MS-PL

Apache License, Version 2.0

http://opensource.org/licenses/Apache-2.0

Contributing

Want to contribute? Great! Here are a few guidelines.

  1. If you want to do a feature, post an issue about the feature first. Some features are intentionally left out, some features may already be in the works, or I may have some advice on how I think it should be done. I would feel bad if time was spent on some code that won't be used.
  2. If you want to do a bug fix, it might not be a bad idea to post about it too. I've had the same bug fixed by multiple people at the same time before.
  3. All code should have a unit test. If you make a feature, there should be significant tests around the feature. If you do a bug fix, there should be a test specific to that bug so it doesn't happen again.
  4. Pull requests should have a single commit. If you have multiple commits, squash them into a single commit before requesting a pull.
  5. Try to follow the code styling already in place. If you have ReSharper, there is a dotsettings file included and things should be formatted automatically for you.

Credits

Contributors

This project exists thanks to all the people who contribute. [Contribute].

Backers

Thank you to all our backers! πŸ™ [Become a backer]

Sponsors

A huge thanks to the .NET on AWS Open Source Software Fund for sponsoring CsvHelper!

Support this project by becoming a sponsor through GitHub Sponsors or Open Collective. Your logo will show up here with a link to your website.


csvhelper's Issues

Cannot read if last line is missing newline

Hi!

I'm not sure if this is new or if it's already been fixed. I looked through the issues but couldn't find anything that looked related. Anyway, the following sample fails to run: it hangs and just seems to consume CPU and a lot of memory. The problem appears to be that the last line of the input is missing the newline character.

namespace Test
{
    using System;
    using System.IO;
    using System.Text;

    using CsvHelper;
    using CsvHelper.Configuration;

    public static class Program
    {
        public static void Main(string[] args)
        {
            try
            {
                Csv.Run();
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex);
            }
        }
    }

    public class ActionRow
    {
        public string Id { get; set; }
    }

    public sealed class ActionRowClassMap : CsvClassMap<ActionRow>
    {
        public ActionRowClassMap()
        {
            this.Map(a => a.Id);
        }
    }

    public class Csv
    {
        public static void Run()
        {
            const string Contents = "Id\n1";
            var buffer = Encoding.ASCII.GetBytes(Contents);
            var stream = new MemoryStream(buffer);
            var csvConfiguration = new CsvConfiguration();
            using (var sr = new StreamReader(stream))
            using (var reader = new CsvReader(sr, csvConfiguration))
            {
                foreach (ActionRow record in reader.GetRecords(typeof(ActionRow)))
                {
                    Console.WriteLine(record.Id);
                }
            }
        }
    }
}

A failing test in last commit: local culture & invariant culture

The last commit (SHA: 73bfc21) started breaking on my local machine; one test is failing.

The reason is DateTime parsing: the unit test formats with the local culture while the GetField method forces the invariant culture. My local culture doesn't use the same date delimiters as the invariant one.

Details:

Test method CsvHelper.Tests.CsvReaderTests.GetTypeTest threw exception:
System.FormatException: 25.07.2011 14:30:51 is not a valid value for DateTime. ---> System.FormatException: String was not recognized as a valid DateTime.
at System.DateTimeParse.Parse(String s, DateTimeFormatInfo dtfi, DateTimeStyles styles)
at System.ComponentModel.DateTimeConverter.ConvertFrom(ITypeDescriptorContext context, CultureInfo culture, Object value)
--- End of inner exception stack trace ---
at System.ComponentModel.DateTimeConverter.ConvertFrom(ITypeDescriptorContext context, CultureInfo culture, Object value)
at CsvHelper.CsvReader.GetField(Int32 index, TypeConverter converter) in CsvReader.cs: line 412
at CsvHelper.CsvReader.GetField(Int32 index, TypeConverter converter) in CsvReader.cs: line 207
at CsvHelper.CsvReader.GetField(Int32 index) in CsvReader.cs: line 176
at CsvHelper.Tests.CsvReaderTests.GetTypeTest() in CsvReaderTests.cs: line 103
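The mismatch can be reproduced outside CsvHelper entirely. A stand-alone sketch using only the BCL (not CsvHelper code): parsing with an explicit format string is culture-safe, while format-free parsing depends on the culture's date separator and day/month order, which is exactly what trips the test above on machines that format dates as "25.07.2011".

```csharp
using System;
using System.Globalization;

class CultureDemo
{
    static void Main()
    {
        const string s = "25.07.2011 14:30:51";

        // Parsing with an explicit format works regardless of the machine's
        // current culture:
        var dt = DateTime.ParseExact(s, "dd.MM.yyyy HH:mm:ss",
                                     CultureInfo.InvariantCulture);
        Console.WriteLine(dt.ToString("yyyy-MM-dd", CultureInfo.InvariantCulture));

        // Parsing the same string *without* a format depends on the culture:
        // a test that formats with the local culture but parses with the
        // invariant one breaks on machines whose date separator is ".".
        bool invariantOk = DateTime.TryParse(s, CultureInfo.InvariantCulture,
                                             DateTimeStyles.None, out _);
        Console.WriteLine(invariantOk);
    }
}
```

One common fix is to format the test's expected values with the same culture the reader uses for parsing, rather than the machine's current culture.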

CsvWriter calls .Flush() after every line?

Hi, we're writing a big CSV to a network share (134 columns, 100k rows, 50MB on disk) and the performance is really slow compared to the local file system (much slower than expected... we have a gige lan and a big SAN, it's normally pretty quick copying big files around).

Digging around a bit, I notice CsvWriter.NextRecord() calls TextWriter.Flush() after every record.

  1. This seems a little bit excessive in terms of IO... wouldn't it be better to never call .Flush() and let the FileStream automatically decide when to flush itself? (Every 4KB by default I believe)
  2. Because of the manual .Flush() calls it is currently not possible for me to configure how often CsvHelper writes to disk, even if I pass my own buffered stream in. If you let FileStream do the flushing automatically, I could specify my buffer size in the stream constructor (say 500KB) and it would automatically flush every time this limit was met.

Sorry I don't have a pull request for this, I'm at work.

csvHelper.Reader.Configuration.Delimiter doesn't take effect.

I have a CSV file like this:

column1;column2;column3
rec11;rec12;rec13
rec21;rec22;rec23
rec31;rec32;rec33

I use CsvHelper like this:

FileStream fs = new FileStream("fileName.csv", FileMode.Open);
CsvHelper.CsvHelper csv = new CsvHelper.CsvHelper(fs);
csv.Reader.Configuration.Delimiter = ';';
csv.Configuration.Delimiter = ';';
List records = csv.Reader.GetRecords().ToList();

Result: three records, all empty. Looking in debug mode, csv.Reader.Parser.Configuration.Delimiter is still ','. How can I change this delimiter? csv.Reader.Parser is non-public.

entityRec is :

public class entityRec
{
    [CsvField(Name = "column1")]
    public string col1 { get; set; }

    [CsvField(Name = "column2")]
    public string col2 { get; set; }

    [CsvField(Name = "column3")]
    public string col3 { get; set; }
}

Fluent Mappings and Writing returns Exception

Hello -

I am having some difficulty using a custom fluent mapping and then writing those records into a file using the fluent mappings. Before, I approached the solution by using Custom Attributes, but I pulled the class back into a library to be reused by others, so I needed the use of the fluent mappings to alleviate customizing the export and manipulating the inheritance chain.

Here was my code before, which worked beautifully:

var csv = new CsvWriter(new StreamWriter(fileName));

csv.WriteRecords(myList);

csv.Dispose();
return fileName;

I added a custom class map like so:

public sealed class ContactsExport: CsvClassMap<ContactModel>
{
    public ContactsExport()
    {
        Map(m => m.ID).Ignore();
        Map(m => m.IsActive).Ignore();
        Map(m => m.L1GroupID).Ignore();
        Map(m => m.L2GroupID).Ignore();
        Map(m => m.PersonID).Ignore();
    }
}        

and then modified the first section of code above to incorporate the configuration:

var csv = new CsvWriter(new StreamWriter(fileName));
csv.Configuration.ClassMapping(new ContactsExport());
csv.WriteRecords(myList);
csv.Dispose();

return fileName;

I also tried to use the overloaded method like so, replacing:
csv.Configuration.ClassMapping(new ContactsExport());
with:
csv.Configuration.ClassMapping<ContactsExport, ContactModel>();

This is the error I receive:

Object reference not set to an instance of an object. (NullReferenceException)
CsvHelper.CsvWriter.WriteRecords[T](IEnumerable`1 records)

C:\Projects\CsvHelper\src\CsvHelper\CsvWriter.cs line 193
Stack Trace:
NullReferenceException at CsvHelper.CsvWriter.WriteRecords[T](IEnumerable`1 records) in C:\Projects\CsvHelper\src\CsvHelper\CsvWriter.cs:line 193

The List<ContactModel> called myList is, in both cases, instantiated and does have records. In my case, three. If I comment out the ClassMapping line, it works fine.

Any pointers would be helpful.

Allowing a nullable *last* column when there is no header row

Not sure if you want to classify this as a bug or a feature request. I think I'm leaning more towards a feature request. Say I'm importing data for cars.

Column 1: Year
Column 2: Make
Column 3: Model
Column 4: Trim (optional)

Use this to map:

public class Car
{
    [CsvField(Index = 0)]
    public int Year { get; set; }

    [CsvField(Index = 1)]
    public string Make { get; set; }

    [CsvField(Index = 2)]
    public string Model { get; set; }

    [CsvField(Index = 3)]
    public string Trim { get; set; }
}

For this test, assume Trim is always empty in the CSV I'm importing. If I have a header row, it would look like this:

Year,Make,Model,Trim
2006,Toyota,Tacoma,
1992,Ford,Mustang,

Excel saves the file with trailing commas after the Model column, because there is a row at the top with data in 4 cells. Now if you don't have a header row, Excel will save it as:

2006,Toyota,Tacoma
1992,Ford,Mustang

When running .GetRecords<Car>(), the program will break because it tries to access a 4th column, which technically doesn't exist, since we didn't add a Trim for any of the cars in the CSV file.

I'm not 100% sure what the best solution would be, as a "nullable" column doesn't really make sense, since this is only an issue with the last column. A work around would be to move the column that may contain nulls to the left, leaving a column at the end that will always have data, thus never breaking the parser. Also, adding a header row fixes it, due to how Excel will save the file. But it would be nice to not have to do either of those. Maybe a boolean property saying the final column may or may not be there?
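The missing-trailing-field case above can be sketched outside the library with plain string splitting (hypothetical illustration, not CsvHelper code): the row Excel saves without the trailing comma simply has one fewer field, and a length check turns the absent field into null instead of an out-of-range access.

```csharp
using System;

class LastColumnDemo
{
    static void Main()
    {
        // A row saved by Excel without a trailing comma: only 3 fields.
        string line = "2006,Toyota,Tacoma";
        string[] parts = line.Split(',');

        // Treat a missing trailing field as null instead of indexing past
        // the end of the array, which is what breaks the import.
        string trim = parts.Length > 3 ? parts[3] : null;

        Console.WriteLine(trim ?? "<no trim>");
    }
}
```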

Hit me up if you need anymore information about this. Again, love the library!

Populate properties that contain custom classes

A CSV file may have a bunch of different data that is flat. If we use Person as an example, a person may have a work address, a mailing address, etc, as a part of the data. The Person custom class object may represent this data as single properties of type Address for MailingAddress, WorkAddress, etc.

ex:

public class Person
{
    public string Name { get; set; }
    public Address MailingAddress { get; set; }
    public Address WorkAddress { get; set; }
}

public class Address
{
    public string StreetAddress { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string Zip { get; set; }
}

CSV file:

Name,MailingStreetAddress,MailingCity,MailingZip,WorkStreetAddress,WorkCity,WorkState,WorkZip

Bug in reading a field if buffer is full

Hi, I've found a bug that returns a wrong field when the field sits in the middle of a change to the next buffer and its first character is a space (line 143 in CsvParser). Per RFC 4180, a CSV field may contain leading spaces, so trimming them is wrong: "spaces before and after delimiter commas may not be trimmed. This is required by RFC 4180."

It might also be interesting (I've seen it in other CSV libraries) to allow commenting out a line (using the # character): my file starts with a commented line (I need some information in it), so header detection goes wrong.

If needed, I can submit an example file that reproduces the bug.

Best regards and thank you

Add dynamic support

For quick-and-dirty CSV reading, a dynamic object would be sufficient. This would let people read a CSV file in three lines of code:

var reader = new CsvReader("mycsv.csv");
var records = reader.GetRecords<dynamic>();
var firstRecord = records.First();

// Get value via name
var foo = firstRecord.ColumnX;

// Get via index (column order)
var bar = firstRecord[0];
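A stand-alone sketch of how such dynamic rows could be backed, using only the BCL's ExpandoObject keyed by header name (an assumption for illustration, not how CsvHelper implements it; the index-style firstRecord[0] access would need something beyond ExpandoObject, such as a custom DynamicObject with an indexer):

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;

class DynamicDemo
{
    static void Main()
    {
        // Hypothetical backing for a dynamic row: members are created from
        // the header row at run time via ExpandoObject's dictionary view.
        string[] header = { "ColumnX", "ColumnY" };
        string[] row = { "foo", "bar" };

        dynamic record = new ExpandoObject();
        var fields = (IDictionary<string, object>)record;
        for (int i = 0; i < header.Length; i++)
            fields[header[i]] = row[i];

        // Access by name, as in the proposal above:
        Console.WriteLine(record.ColumnX);
    }
}
```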

wrapping output in "Quotes"

Is there a way when writing a csv file that I can wrap each column in quotes? I am trying to prevent Excel from removing the leading zero from numbers when opening the csv.

I was hoping the output would be:

"val1","val2","val3"

Thanks,
Steve.
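In the meantime, the desired output can be produced by quoting each field by hand before writing it; a minimal sketch (plain .NET, not CsvHelper code) that also doubles any embedded quote characters, as RFC 4180 requires:

```csharp
using System;

class QuoteDemo
{
    // Wrap a field in quotes, doubling any embedded quote characters
    // per RFC 4180.
    static string Quote(string field) =>
        "\"" + field.Replace("\"", "\"\"") + "\"";

    static void Main()
    {
        string[] fields = { "val1", "val2", "val3" };
        // prints "val1","val2","val3"
        Console.WriteLine(string.Join(",", Array.ConvertAll(fields, Quote)));
    }
}
```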

Documentation for 1.7.0

Hi Josh, has the documentation changed? I cannot find the GetRecords method.

var csv = new CsvReader( new StreamReader( "Actors.csv" ) );
var actorsList = csv.GetRecords<Actor>();

Steve

Exception: "Operation could destabilize the runtime" when running in Medium Trust

I'm calling the "CsvWriter.WriteRecords()" function from an ASP.NET IHttpHandler (.ashx) running in Medium Trust, and it's throwing this exception:

[VerificationException: Operation could destabilize the runtime.]
lambda_method(ExecutionScope , CsvWriter , UserInfo ) +51
System.Action`2.Invoke(T1 arg1, T2 arg2) +0
CsvWriter.WriteRecords(IEnumerable`1 records) +353

"UserInfo" is just a C# class of mine that holds some user properties (username, email, etc.)

parsing from a point in the file

hi there,

I really like CsvHelper however a use case has arisen where we need to process a file in batches.

Our plan is to store the offset that we have processed up to and then seek to that point when we resume. I've kind of got a clunky version working. To do so, I had to subclass the CsvParser class and use some reflection to look into the internals of CsvParser.

I was wondering if you had any other ideas for a better way to be able to use CsvParser but be able to seek to a position in the file (and query the position that we're up to after we process some records)

cheers
Paul.

Position property not always correct for unicode sources

Hi Josh,

Thanks for adding the Position property on the parser. I've been experimenting with some of our data files and I noticed that in some cases, it was reporting the wrong value (that is, seeking back to that 'position' in the stream does not take me back to the real position it was up to).

The reason, I found, was due to some unicode characters using more than 1 byte. An example line is this:

    [email protected],"Test Co., Ltd",Choi,Mr,,ε΄”ι’Ÿι“‰,"Test Co., Ltd"

Each of those kanji characters actually takes up 3 bytes in the data file.

I was able to get it working by changing the line:

this.Position++;

to

this.Position+= Encoding.UTF8.GetBytes(new[]{c}).Length;

however I know that this is not a proper solution because not all streams will be UTF8.

I was wondering if you had any other ideas on how to handle this (eg: is there a way to ask the stream reader what the encoding is? and use that to calculate the position)?
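The character/byte mismatch is easy to demonstrate with the BCL alone, and StreamReader does expose its encoding as the CurrentEncoding property, which could be used instead of hard-coding UTF-8 when advancing a byte position (a sketch of the idea, not CsvHelper code):

```csharp
using System;
using System.Text;

class ByteCountDemo
{
    static void Main()
    {
        // ASCII characters are 1 byte in UTF-8, but CJK characters take 3,
        // so counting characters understates the byte position.
        Console.WriteLine(Encoding.UTF8.GetByteCount("a"));  // 1
        Console.WriteLine(Encoding.UTF8.GetByteCount("ε΄”"));  // 3

        // A position counter could advance by the encoded width of each
        // character, using the reader's actual encoding:
        // position += reader.CurrentEncoding.GetByteCount(new[] { c });
    }
}
```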

Make it possible to control construction

I'd like to be able to specify a way to construct classes during reading from csv. I have a class that needs some fields to be initialized conditionally and it would be nice if I had that ability. Something like this:

public sealed class ActionRowClassMap : CsvClassMap<ActionRow>
{
    public ActionRowClassMap(string firstPlanningMarket, string secondPlanningMarket)
    {
        this.ConstructUsing(() => new ActionRow(firstPlanningMarket, secondPlanningMarket));
        this.Map(a => a.Field1, "Field1"); // etc
    }
}

Problem writing out records.

I have created a simple VS2010 MVC 3 c# solution located at https://www.dropbox.com/s/45veyjnoqy9to9i/CSVHelperTest.zip and would really appreciate it if you could have a quick look.

There is a csv file included (Book1.csv) which you place in c:\ and then when running the app go to localhost:xxxx/Home/Process

You can view the Process action in the HomeController. It creates a new .csv file in c:\ and simply writes out the records that it reads in from Book1.csv. The result, though, is a blank file. The Process action also returns a view that outputs the loaded records to the screen.

Thanks,
Steve.

Multiple Types (WriteRecords)

This doesn't work:

csvhelper.WriteRecords(MyType1);

csvhelper.WriteRecords(MyType2);

We get the following error (on the second type) regardless of order:

Property 'System.String RecordType' is not defined for type 'MyType2'
.. and of course it is. If I reverse the order, the error will happen with 'MyType1'

Do we have to flush/clear/reset the csv writer somehow? I need these records in the same file even though they are different types. What am I missing? Thanks!

Character encoding

Hi,

Is there support for Turkish characters? ( Creating CSV )
Γ‡ Γ§ Δ± Δ° Γ– ΓΆ Ş ş Ü ΓΌ
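These characters are fine as long as the output stream is created with an encoding that covers them, such as UTF-8 (the issue is the writer's encoding, not the CSV layer). A stand-alone round-trip sketch using only the BCL:

```csharp
using System;
using System.IO;
using System.Text;

class EncodingDemo
{
    static void Main()
    {
        string turkish = "Γ‡ Γ§ Δ± Δ° Γ– ΓΆ Ş ş Ü ΓΌ";

        // Write and read back through UTF-8; the characters round-trip
        // intact. A legacy ANSI code page would mangle some of them.
        var ms = new MemoryStream();
        using (var w = new StreamWriter(ms, Encoding.UTF8, 1024, leaveOpen: true))
            w.Write(turkish);

        ms.Position = 0;
        using (var r = new StreamReader(ms, Encoding.UTF8))
            Console.WriteLine(r.ReadToEnd() == turkish);  // True
    }
}
```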

CsvWriter.Delimiter is read only

The documentation states that this is a writable property but it only provides read access. It would be nice to be able to modify the delimiter value.

Datetime format

How does one control the DateTime format of a property? I have tried the following:

[TypeConverter(typeof(CustomDateTimeTypeConverter))]
public DateTime? Dob
{
    get { return _dob; }
    set { _dob = value; }
}

public class CustomDateTimeTypeConverter : DateTimeConverter
{
    public override object ConvertTo(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value, Type destinationType)
    {
        if (destinationType == typeof(string))
        {
            return ((DateTime)value).ToString("dd-MM-yyyy");
        }

        return base.ConvertTo(context, CultureInfo.InvariantCulture, value, destinationType);
    }
}

Add properties to CsvReaderException to format custom error messages

Currently CsvReaderException.Message contains useful information like the row number and column name, but one has to parse the string to display a custom message to the user. My suggestion is to add these properties (row number, column name/index, etc.) to CsvReaderException.

Also, the current error message contains details of an inner exception (e.g. FormatException), but it seems that the InnerException property is not set.

GetExceptionMessage fails when CSV file input doesn't have the correct number of columns

An IndexOutOfRangeException is thrown on line 901 of src/CsvHelper/CsvReader.cs:

error.AppendFormat( "Field Value: '{0}'", currentRecord[currentIndex] ).AppendLine();

To duplicate:

Class:

public class ImportObject
{
    [CsvField(Index = 0)]
    public int Id { get; set; }
    [CsvField(Index = 1)]
    public DateTime CreationDate { get; set; }
    [CsvField(Index = 2)]
    public string Description { get; set; }
}

CSV:

1,9/24/2012

My implementation:

IList<ImportObject> data = null;
using (var streamReader = new StreamReader(fileUpload.PostedFile.InputStream))
{
    using (var csvReader = new CsvReader(streamReader))
    {
        try
        {
            csvReader.Configuration.HasHeaderRecord = false;
            data = csvReader.GetRecords<ImportObject>().ToList();
        }
        catch (CsvReaderException cex)
        {
            ParseCsvReaderException(cex);
        }
    }
}

Since there is no trailing comma on the first line of the sample CSV, currentRecord does not have a 3rd column, so the collection is only of count 2. GetExceptionMessage tries to access the data from that column after the import fails for the exception, and dies. It looks like this only happens if the first row of the CSV file doesn't have enough columns. I have not tried this with a CSV with a header row.

Dev Environment:
Windows 7 Professional SP1, x64
Visual Studio 2010 Professional, SP1
CsvHelper 1.8

Please let me know if there's any other information that would be helpful. Thanks for the great library!

Feature Request: Default(value) & Boolean Converter alteration

Keeping with the mapping system (which is great, nice and FluentNHibernate style) it would be great if there were two extra extensions.

Default(value)

Default Value to allow you to set a default if null.

The boolean converter should probably accept 1/0 as True/False

Problem with nuget configuration for dotNet v4.5 project

Hi,

I used Nuget to add CsvHelper to a fresh ASP.NET MVC project in VS2012 (which uses dotNet framework v4.5).

For some reason, it chose the net2.0 version of CsvHelper.dll, which meant that the fluent configuration types weren't available. This had me scratching my head for a while.

I've manually updated the ref to the net40-client version of CsvHelper.dll, so I'm good to go now.

I'm not a Nuget expert to know how VS2012 decides which version of a Nuget library to use, but thought I would post this in case it's something you can control (as package author), or at least it may help someone else out in future.

Thanks for the library, it looks good!

Provide non-generic overloads for generic methods

I need to parse several CSV files without knowing what type they map to at compile time. When my application is executed, it determines which of my types match the file based on the file name. It would be ideal if there were overloads of GetRecord<T>() and GetRecords<T>() that took a Type as a parameter.

For example:

var type = GetType(typeName);

using (var fileReader = OpenFile(path))
using (var parser = new CsvReader(fileReader))
{
    IEnumerable<object> records = parser.GetRecords(type);
}

writer.Configuration.QuoteAllFields exists; a writer.Configuration.QuoteStringFields is needed

Hi Josh,
First off, great lib; it saved me a LOT of work.
Was wondering if it would be possible to implement something like writer.Configuration.QuoteAllFields, but only for string-type fields.
Also, how could I set things up so that the first (header) row does not receive the quote identifier when QuoteAllFields = true?

The problem is that the file I am trying to create must only have quotes around the text fields (irrespective of whether they actually require them according to CSV rules, i.e. there's a space or delimiter character in the field value), and must also not quote-wrap the header fields.

I use the WriteField(FieldName) method to write each field independently, as I need to control the format per field (some decimal and date fields are formatted differently within the same file, in different columns).

I have looked at the code for WriteField; my suggestion would be to add a parameter indicating whether the field should be quoted (true/false), and, on the last line where the field is added to the current record, either leave the data as-is or, if the parameter says the field should not be quoted, strip the quotes from the field data before adding it.

I would greatly appreciate any help, even if I need to create a custom class that inherits from yours if you do not have time to implement or do not want to add to your lib.

Many thanks
Theo Jacobs
[email protected]

Add support to automatically skip over blank rows

We have an issue where the CSV file might contain either blank lines, or a row with all cells being empty.

The former might be a scenario where the user is formatting the document for readability (separating lines into groups).

The latter occurs when exporting a CSV from Excel where there is a space several lines below the last visible line.

My workaround is to provide the following parser to the constructor of the CsvReader:

/// <summary>
/// The <see cref="FilteredCsvParser"/>
///   class is used to provide CSV data that filters out blank rows.
/// </summary>
[CLSCompliant(false)]
public class FilteredCsvParser : CsvParser
{
    public FilteredCsvParser(TextReader reader)
        : base(reader)
    {
    }

    protected override string[] ReadLine()
    {
        var nextLine = base.ReadLine();

        if (nextLine == null)
        {
            return null;
        }

        var data = string.Join(string.Empty, nextLine);

        if (string.IsNullOrWhiteSpace(data) == false)
        {
            return nextLine;
        }

        // Recurse
        return ReadLine();
    }
}

Add custom formatting handlers for double/datetime/... parsing

When I have a POCO with double or DateTime properties, I want to be able to format them myself (and not use the default culture, or hacks that involve changing the current culture).

I didn't find any reference on this. My apologies if it's already possible.

CSV Read of empty delimited field

When I read a csv which is comma separated with quoted fields and I read a field which is blank, e.g.

"XYZ101", "", "ABC"

The second field is returned as containing a quote when it should be empty.

Can I map class properties at runtime?

Hi, I'm trying to work out how (if possible) to map class properties at runtime. Hopefully the code below demonstrates what I'm trying to achieve:

I want to allow the user to choose which fields from the CSV map to which fields in the model. The code below receives the mappings as post data.

        StreamReader reader = new StreamReader("file.csv");

        var csv = new CsvReader(reader);

        Data d = new Data();
        csv.Configuration.PropertyMap<Data>(p => p.DataID).Ignore();

        foreach (var key in postdata.AllKeys)
        {
            if (postdata[key.ToString()] != null)
            {
                PropertyInfo dMember = d.GetType().GetProperty(postdata[key].ToString());

                csv.Configuration.PropertyMap<Data>(p => dMember.Name).Name(postdata[key.ToString()].ToString());
            }
            else
            {
                PropertyInfo dMember = d.GetType().GetProperty(postdata[key].ToString());

                csv.Configuration.PropertyMap<Data>(p => dMember.Name).Ignore();
            }
        }

        IEnumerable<Data> theData = csv.GetRecords<Data>();

Please help.

Steve.

Support fixed width parsing

I had a developer yesterday ask me how to easily parse a fixed-width formatted file. I thought this library could work with a little customisation of the CsvParser. Perhaps fixed-width support could be added by allowing definition of a ColumnWidth on the CsvField attribute (and obviously ignoring the comma delimiter).

NuGet.exe in NuGet Package

Just noticed the NuGet exe was in the packages folder when updating to 1.1. I don't think the .exe was added on purpose? See commit 8fed783

MetadataType

Would it be possible to enhance this so that we could set TypeConverter and CsvField in a MetadataType? This would be useful for people (me) who want to export CSV files from Entity Framework models.

i.e.

[MetadataType(typeof(PersonMetaData))]
public partial class Person {    }

public class PersonMetaData
{
    [CsvField(Ignore = true)]
    public string PersonId { get; set; }

    public string FullName { get; set; }
}

Add better read failure notification

Currently read failures do not provide information that helps an end user fix their csv data.

In an example that my tester just hit, an empty field was provided for a field mapped to an int property. It would be good if any read exceptions could identify the row number, column number, column name and column value along with the details of the exception.

Ability to identify badly formatted Quote characters

I have a source file where the fields are not enclosed in the quote character ("), and when a quote character appears as part of a value, the reader reads everything until the next occurrence of the quote character.

For example, I have the value 4'6", where the quote character is part of the value but is treated as a quote because it was not escaped properly.

Right now, I fix these externally using a regex before passing the data to CsvHelper.

Class Properties not found in CSV file will throw Exceptions

Class Properties (defined in the C# class for a single record/row) that are not found in CSV file (as a field/column) will throw Exceptions.

I added a couple of simple ifs to CsvReader.cs at lines 331 and 338; snippet below.
(Line 331: modified the else to an else if.)
else if (namedIndexes.ContainsKey(property.Name)) // [email protected] - check if it exists in the CSV file headers
{
// Get the field using the property name.
var method = GetType().GetProperty("Item", new[] { typeof(string) }).GetGetMethod();
fieldExpression = Expression.Call(readerParameter, method, Expression.Constant(property.Name, typeof(string)));
}

(Line 338: wrapped an if-not-null block around the rest, so that bindings.Add(...) is not called if fieldExpression is null.)

                if (fieldExpression != null) // [email protected]
                {

ICsvReader.TryGetField<DateTime> should fail for an invalid DateTime (blank strings, null etc.)

reader.TryGetField<DateTime>(fieldName)

...should return false (i.e. fail to parse) for fields that are blank, null, invalid dates, etc. Instead it returns the date 1/1/0001 00:00:00 (DateTime.MinValue). Note that other value types (int, bool, etc.) work correctly: .TryGetField will return false for blank/null fields.

E.g. consider the following CSV

IntType, DateTimeType
lalala,"    "
1234,"2011-12-20"

For the first row:
reader.TryGetField("IntType", out value) will return false, but
reader.TryGetField("DateTimeType", out value) will return true and 'value' will be DateTime.MinValue

Both return correct results for the second row.

Note: I think the behaviour of TryGetField should be in line with the DateTime.TryParse() method in the .NET Framework.
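For comparison, DateTime.TryParse itself rejects blank and invalid input, which is the behaviour the report asks TryGetField<DateTime> to match:

```csharp
using System;
using System.Globalization;

class TryParseDemo
{
    static void Main()
    {
        DateTime value;

        // Whitespace-only input: TryParse returns false and value is
        // left at DateTime.MinValue.
        Console.WriteLine(DateTime.TryParse("    ", CultureInfo.InvariantCulture,
            DateTimeStyles.None, out value)); // False

        // A valid date parses successfully.
        Console.WriteLine(DateTime.TryParse("2011-12-20", CultureInfo.InvariantCulture,
            DateTimeStyles.None, out value)); // True
        Console.WriteLine(value.ToString("yyyy-MM-dd", CultureInfo.InvariantCulture)); // 2011-12-20
    }
}
```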

PS: Excellent library though, keep up the good work :-)

Basic example seems wrong

I'm just attempting a proof of concept using your library (it looks great, by the way), but the basic example doesn't seem to work with the NuGet-sourced package.

var csv = new CsvHelper( File.OpenRead( "Actors.csv" ) );
var actorsList = csv.Reader.GetRecords<Actor>();

Is the wiki for this out of date with the version?

WriteRecords -> Object reference not set to an instance of an object

Hi Guys,

I get an object reference error when I try to write the records.

This is my Code:

SearchResultTest test1 = new SearchResultTest();
test1.Id = "1";
test1.Name = "name1";

SearchResultTest test2 = new SearchResultTest();
test2.Id = "2";
test2.Name = "name2";

var testResults = new List<object>();
testResults.Add(test1);
testResults.Add(test2);

FileStream fileStream = File.OpenWrite(@"c:\Share\ipc.csv");

using (var csv = new CsvHelper.CsvHelper(fileStream))
{
    csv.Writer.Configuration.Delimiter = Convert.ToChar(";");
    csv.Writer.WriteRecords(testResults); // -> error occurs
}

So testResults is not null, and the FileStream is also fine (CanWrite = true). My class also seems all right.

public class SearchResultTest
{
    [CsvField(Index = 0)]
    public string Id { get; set; }

    [CsvField(Index = 1)]
    public string Name { get; set; }
}

Any ideas?

Using .NET Framework 3.5, CsvHelper version 1.1.2.40563

Thx,

Bart

CsvHelper Cannot Work with Read-only Streams

I am trying to use CsvHelper to parse a read-only Stream, but the CsvHelper constructor throws an exception when a read-only Stream is passed to it, because it attempts to create a StreamWriter over the given Stream. Could we check whether Stream.CanWrite is true before creating the CsvWriter?
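The failure mode can be reproduced without CsvHelper at all; wrapping a non-writable stream in a StreamWriter throws, which is why a Stream.CanWrite check before constructing the CsvWriter would help:

```csharp
using System;
using System.IO;

class ReadOnlyStreamDemo
{
    static void Main()
    {
        // A read-only stream reports CanWrite == false.
        var readOnly = new MemoryStream(new byte[0], writable: false);
        Console.WriteLine(readOnly.CanWrite); // False

        try
        {
            // StreamWriter requires a writable stream and throws otherwise.
            var writer = new StreamWriter(readOnly);
            Console.WriteLine("no exception");
        }
        catch (ArgumentException)
        {
            Console.WriteLine("ArgumentException: stream was not writable");
        }
    }
}
```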

Wiki: Basics and Attribute Mapping

Examples are not working. What's going on? I'm sure I'm missing something simple here. I'm using VS2012 (VS11?) and my project is an MVC4 Internet Application.

BASICS:
var csv = new CsvReader( new StreamReader( "Actors.csv" ) );
var actorsList = csv.GetRecords<Actor>();

Design Time Error: 'CsvHelper.CsvReader' does not contain a definition for 'GetRecords' and no extension method 'GetRecords' accepting a first argument of type 'CsvHelper.CsvReader' could be found (are you missing a using directive or an assembly reference?)

I don't think so. Am I?
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using MyWebAppName.Models;
using CsvHelper;
using System.IO;

ATTRIBUTE MAPPING:
public class CustomClass
{
    [CsvField( Name = "String Field" )]
    public string StringProperty { get; set; }

    [CsvField( Name = "Guid Field" )]
    public Guid GuidProperty { get; set; }

    [CsvField( Name = "Int Field" )]
    public int IntProperty { get; set; }
}

Design Time Error: The type or namespace name 'CsvField' could not be found (are you missing a using directive or an assembly reference?)

I don't think so. Am I? I even added a bunch to try, here is the list:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Data.Entity;
using CsvHelper;
using CsvHelper.Configuration; //Added to try
using System.ComponentModel.DataAnnotations; //Added to try
using System.ComponentModel.Design; //Added to try
using System.ComponentModel.DataAnnotations.Schema; //Added to try
using System.ComponentModel.DataAnnotations.Resources; //Added to try
using Attribute; //Added to try

HOWEVER:
I do have the following working but would much rather use the .GetRecords method:
HttpPostedFileBase file = Request.Files[0];
ICsvParser csvParser = new CsvParser(new StreamReader(file.InputStream));
CsvReader csvReader = new CsvReader(csvParser);
string[] headers = { };
List<string[]> rows = new List<string[]>();
string[] row;
while (csvReader.Read())
{
    // Gets headers if they exist
    if (csvReader.FieldHeaders.Any() && !headers.Any())
    {
        headers = csvReader.FieldHeaders;
    }
    row = new string[headers.Count()];
    for (int i = 0; i < headers.Count(); i++)
    {
        row[i] = csvReader.GetField(i);
    }
    rows.Add(row);
}
List<ImportItem> ImportItems = new List<ImportItem>();
for (int i = 0; i < rows.Count(); i++)
{
    ImportItem importItem = new ImportItem
    {
        ID = i + 1,
        StringProperty = rows[i].ElementAt(Array.IndexOf(headers, "StringProperty")).ToString(),
        DecimalProperty = Decimal.Parse(rows[i].ElementAt(Array.IndexOf(headers, "DecimalProperty"))),
    };
    ImportItems.Add(importItem);
}
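One small improvement to the workaround above: build the header-to-index map once instead of calling Array.IndexOf for every field of every row. A standalone sketch with made-up header and row data, independent of CsvHelper:

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;

class HeaderMapDemo
{
    static void Main()
    {
        // Map each header name to its column index once, up front.
        string[] headers = { "StringProperty", "DecimalProperty" };
        var index = new Dictionary<string, int>();
        for (int i = 0; i < headers.Length; i++)
        {
            index[headers[i]] = i;
        }

        // Field lookups are then O(1) per field instead of a linear scan.
        string[] row = { "hello", "12.5" };
        string s = row[index["StringProperty"]];
        decimal d = decimal.Parse(row[index["DecimalProperty"]], CultureInfo.InvariantCulture);

        Console.WriteLine(s + " " + d.ToString(CultureInfo.InvariantCulture)); // hello 12.5
    }
}
```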

Any help would be greatly appreciated. Like I said, I just know I'm missing something silly here.
