
Planned functionality (datacleaner issue, 19 comments, open)

rhiever commented on August 16, 2024
Planned functionality


Comments (19)

rhiever commented on August 16, 2024

See the docs you linked and the categorical_features parameter.


rhiever commented on August 16, 2024

Running autoclean multiple times might be the easier solution. It might be a useful extension to autoclean to allow the user to pass multiple preprocessors in a list.
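A minimal sketch of what that proposed extension could look like (the `apply_preprocessors` helper and the sample frame are hypothetical, not part of datacleaner): each preprocessor in the list is applied in sequence to the frame's values.

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler, StandardScaler

def apply_preprocessors(df, preprocessors):
    """Hypothetical extension: run a list of sklearn-style
    preprocessors over a DataFrame, one after another."""
    values = df.values
    for p in preprocessors:
        values = p.fit_transform(values)
    return pd.DataFrame(values, columns=df.columns, index=df.index)

df = pd.DataFrame({"a": [1.0, 2.0, 3.0], "b": [10.0, 20.0, 30.0]})
out = apply_preprocessors(df, [StandardScaler(), MinMaxScaler()])
```

After standardizing, the MinMaxScaler rescales each column into [0, 1], so the two steps compose without the user calling autoclean twice.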


jaumebp commented on August 16, 2024

In my experience it is worth identifying ordinal variables (e.g. numerical grades) and handling them separately. In many cases they can be treated as continuous variables, but sometimes it is necessary to treat them as discrete. One example is missing-value imputation: if you treat them as continuous, you may end up injecting fake values that mislead the downstream analysis.
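A toy illustration of that point (the grade data is made up): imputing an ordinal grade column with the mean injects a value that is not on the legal scale, while a discrete strategy such as the mode stays on it.

```python
import pandas as pd

# Toy ordinal column (grades 1-5) with one missing value.
grades = pd.Series([1, 2, 2, 4, 5, None])

# Treated as continuous: the mean (2.8) is not a legal grade.
as_continuous = grades.fillna(grades.mean())

# Treated as discrete: the mode stays within the valid set.
as_discrete = grades.fillna(grades.mode()[0])
```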

Thanks for the project! I tested it on some of my biomedical datasets and compared the PCA before and after cleaning. The only case with differences was a dataset with discrete variables (exome sequencing), specifically in the columns where some of the values were '0'. pandas emitted the following warning:
sys:1: DtypeWarning: Columns (6,19,131,225,404,416,515,651,833,945,975,986,1265,1327,1387,1494,1541,1558,1715,1737,1854,1875,1947,1980,2015,2024,2111,2132,2140,2165,2426,2652,2667,2668,2871,2943,2978,2997,3165,3335,3634,3807,3945,4010,4018,4177,4191,4196,4243,4245,4389,4463,4553,4772,4814,4841,4962) have mixed types. Specify dtype option on import or set low_memory=False.
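That warning means pandas inferred different dtypes in different chunks of the file. A small sketch of the fix the warning itself suggests (the column names here are hypothetical): declare the dtype at read time, or pass low_memory=False.

```python
import io
import pandas as pd

# A column mixing numbers and strings is what triggers DtypeWarning
# when a large file is parsed in chunks.
csv = io.StringIO("id,exome\n1,0\n2,A/T\n3,0\n")

# Fix: pin the dtype up front (or use low_memory=False).
df = pd.read_csv(csv, dtype={"exome": str})
```

With the dtype pinned, the '0' entries stay strings instead of being parsed as integers in some chunks and strings in others.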


rhiever commented on August 16, 2024

Indeed, which is why I'm trying to work out how to automatically identify ordinal vs. continuous variables. I posted this question on StackOverflow to brainstorm.


jaumebp commented on August 16, 2024

In our software we went with a much simpler approach: letting the user specify a list of attributes to be treated as ordinal. Of course, an automatic solution is far more elegant :)


westurner commented on August 16, 2024

"Convenience function: Detect if there are non-numerical features and encode them as numerical features" EpistasisLab/tpot#61


westurner commented on August 16, 2024

Do I have to do get_dummies() all by myself?
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html

... get_dummies() accepts a number of kwargs
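For instance, the `columns` kwarg restricts the encoding to the named columns while numeric columns pass through untouched (toy frame for illustration):

```python
import pandas as pd

df = pd.DataFrame({"color": ["red", "blue", "red"], "size": [1, 2, 3]})

# Only `color` is expanded into indicator columns; `size` is kept as-is.
dummies = pd.get_dummies(df, columns=["color"], prefix="color")
```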


westurner commented on August 16, 2024

> Do I have to do get_dummies() all by myself?

I think it's illogical to e.g. average Exterior1st in the Kaggle House Prices dataset: the average of ImStucc and Wd Sdng seems nonsensical?


westurner commented on August 16, 2024

CSVW as JSONLD may be a good way to specify a dataset header with the relevant metadata for such operations? pandas-dev/pandas#3402


rhiever commented on August 16, 2024

You should be able to use the sklearn OneHotEncoder to get the equivalent of the pandas get_dummies().


westurner commented on August 16, 2024

> You should be able to use the sklearn OneHotEncoder to get the equivalent of the pandas get_dummies().

http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html

Is there a way to specify that I only need certain columns to be expanded into multiple columns w/ OneHotEncoder?
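In older scikit-learn releases the categorical_features parameter served that role; in current releases it is gone, and the usual idiom for encoding only some columns is ColumnTransformer. A rough sketch under that assumption (toy frame, and whether the output is sparse depends on the version and data):

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({"color": ["red", "blue", "red"], "size": [1.0, 2.0, 3.0]})

# Expand only `color` into indicator columns; pass `size` through.
ct = ColumnTransformer(
    [("onehot", OneHotEncoder(), ["color"])],
    remainder="passthrough",
)
encoded = ct.fit_transform(df)  # columns: color_blue, color_red, size
```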


westurner commented on August 16, 2024

Do I need to write a FunctionTransformer to stack multiple preprocessing modules?


westurner commented on August 16, 2024

> Do I need to write a FunctionTransformer to stack multiple preprocessing modules?

i.e. for different columns. Or just run autoclean multiple times?
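One hedged way to do exactly that (everything below is an illustration, not datacleaner's API): chain FunctionTransformers in a Pipeline, where each step edits its own columns and hands the whole DataFrame to the next step.

```python
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer

def encode_color(df):
    # Encode a string column as integer category codes.
    out = df.copy()
    out["color"] = out["color"].astype("category").cat.codes
    return out

def fill_size(df):
    # Impute a numeric column with its median.
    out = df.copy()
    out["size"] = out["size"].fillna(out["size"].median())
    return out

pipe = Pipeline([
    ("encode", FunctionTransformer(encode_color, validate=False)),
    ("impute", FunctionTransformer(fill_size, validate=False)),
])

df = pd.DataFrame({"color": ["red", "blue", "red"], "size": [1.0, None, 3.0]})
clean = pipe.fit_transform(df)
```

validate=False keeps the DataFrame from being coerced to a bare array between steps, so each function can address columns by name.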


westurner commented on August 16, 2024

> It might be a useful extension to autoclean to allow the user to pass multiple preprocessors in a list.

The DataFrameMapper in https://github.com/paulgb/sklearn-pandas supports various combinations of columns and transformations.


westurner commented on August 16, 2024

It may be worth noting that pandas Categoricals have an ordered=True parameter. http://pandas.pydata.org/pandas-docs/stable/categorical.html#sorting-and-order

Does specifying the Categoricals have a different effect than inferring the ordinals from the happenstance sequence of strings in a given dataset?
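To make the distinction concrete (toy data): an explicit ordered Categorical pins the intended scale, whereas inference would order the category strings alphabetically.

```python
import pandas as pd

s = pd.Series(["low", "high", "medium", "low"])

# Inferred categories would be alphabetical (high < low < medium),
# which is wrong as an ordinal scale; declaring them fixes that.
cat = pd.Categorical(s, categories=["low", "medium", "high"], ordered=True)
```

The resulting codes follow the declared order (low=0, medium=1, high=2), so downstream encoders see the true ordinal ranking.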


adrose commented on August 16, 2024

Any plans to impute NAs rather than replacing continuous variables with the median value?


rhiever commented on August 16, 2024

@adrose, do you mean via model-based imputation?


adrose commented on August 16, 2024

@rhiever sorry, I should have been a lot more specific, but yes: something similar to what the Amelia package does in R, i.e. bootstrapped linear regression.

Happy to expand on it more, and I'd be excited to hear any thoughts you have on whether this functionality might be applicable.
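A minimal single-column sketch of the idea (plain regression imputation, without Amelia's bootstrapping; the data and column names are made up): fit a model on the complete rows and predict the missing entries instead of filling with the median.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({
    "x": [1.0, 2.0, 3.0, 4.0, 5.0],
    "y": [2.1, 4.0, np.nan, 8.1, 9.9],  # roughly y = 2x
})

# Fit on the observed rows, then predict the missing ones.
missing = df["y"].isna()
model = LinearRegression().fit(df.loc[~missing, ["x"]], df.loc[~missing, "y"])
df.loc[missing, "y"] = model.predict(df.loc[missing, ["x"]])
```

The imputed value tracks the relationship with x (close to 6 here) rather than the column median, which is the core difference from datacleaner's current behavior.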


westurner commented on August 16, 2024
  • https://en.wikipedia.org/wiki/Imputation_(statistics) :

    In statistics, imputation is the process of replacing missing data with substituted values. When substituting for a data point, it is known as "unit imputation"; when substituting for a component of a data point, it is known as "item imputation". There are three main problems that missing data causes: missing data can introduce a substantial amount of bias, make the handling and analysis of the data more arduous, and create reductions in efficiency.[1] Because missing data can create problems for analyzing data, imputation is seen as a way to avoid pitfalls involved with listwise deletion of cases that have missing values. That is to say, when one or more values are missing for a case, most statistical packages default to discarding any case that has a missing value, which may introduce bias or affect the representativeness of the results. Imputation preserves all cases by replacing missing data with an estimated value based on other available information. Once all missing values have been imputed, the data set can then be analysed using standard techniques for complete data.[2] Imputation theory is constantly developing and thus requires consistent attention to new information regarding the subject. There have been many theories embraced by scientists to account for missing data but the majority of them introduce large amounts of bias. A few of the well known attempts to deal with missing data include: hot deck and cold deck imputation; listwise and pairwise deletion; mean imputation; regression imputation; last observation carried forward; stochastic imputation; and multiple imputation. [emphasis added]

  • http://scikit-learn.org/stable/modules/preprocessing.html#imputation-of-missing-values
  • http://scikit-learn.org/stable/modules/classes.html#module-sklearn.preprocessing
  • http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Imputer.html
    • class sklearn.preprocessing.Imputer(..., strategy='mean' | 'median' | 'most_frequent', ...)
  • http://scikit-learn.org/stable/auto_examples/missing_values.html :

    Imputing missing values before building an estimator
    This example shows that imputing the missing values can give better results than discarding the samples containing any missing value. Imputing does not always improve the predictions, so please check via cross-validation. Sometimes dropping rows or using marker values is more effective.
    Missing values can be replaced by the mean, the median or the most frequent value using the strategy hyper-parameter. The median is a more robust estimator for data with high magnitude variables which could dominate results (otherwise known as a 'long tail').
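Note that sklearn.preprocessing.Imputer has since been removed (in scikit-learn 0.22); the current equivalent is sklearn.impute.SimpleImputer with the same strategy choices:

```python
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0], [2.0], [np.nan], [100.0]])

# strategy='median' fills with 2.0, resisting the long-tailed 100.0
# that would drag the mean up to about 34.3.
filled = SimpleImputer(strategy="median").fit_transform(X)
```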

