
Final Project Purwadhika

This final project is one of the requirements for graduating from the Job Connector Data Science and Machine Learning program at Purwadhika Start-up and Coding School.

So, what's happening here?



What's happening here is a gradient boosting framework called 'Light GBM', which builds on the old and well-known 'Decision Tree' learning algorithm under the broader umbrella term 'Machine Learning'. I've trained a model on the SBA Loans dataset and now put myself in the position of a bank officer in the USA who receives a loan application from a small business. To make the decision, I use the Loan Predictor to classify the application as a high-risk or low-risk loan.
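A minimal sketch of how such a classifier can be trained and used to score a single application is shown below. The file name and the assumption that the features are already numeric are illustrative, not taken from the repository.

# Minimal sketch: train a LightGBM classifier and score one new loan application.
import lightgbm as lgb
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("sba_loans_clean.csv")            # assumed pre-processed, numeric feature set
X = df.drop(columns=["MIS_Status"])                # features
y = df["MIS_Status"]                               # 0 = CHGOFF (high risk), 1 = PIF (low risk)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = lgb.LGBMClassifier(random_state=42)
model.fit(X_train, y_train)

# Classify one incoming application as a high-risk or low-risk loan.
new_application = X_test.iloc[[0]]
print("low risk" if model.predict(new_application)[0] == 1 else "high risk")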

Looking deeper at the Loan Predictor's evaluation, the model's accuracy is ~90% and its AUC score is 0.97, which is close to the ideal situation: when the two class distributions barely overlap, the model has an excellent measure of separability and can almost perfectly distinguish the positive class from the negative class. One limitation remains: the predictor only works for loans guaranteed by the U.S. Small Business Administration.
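For reference, the accuracy and AUC figures above can be computed from predicted probabilities with scikit-learn. Variable names follow the sketch above and are assumptions.

from sklearn.metrics import accuracy_score, roc_auc_score

proba_pif = model.predict_proba(X_test)[:, 1]       # probability of class 1 (PIF)
print("Accuracy :", accuracy_score(y_test, model.predict(X_test)))
print("AUC score:", roc_auc_score(y_test, proba_pif))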

The U.S. SBA was founded in 1953 on the principle of promoting and assisting small enterprises in the U.S. credit market. Small businesses have been a primary source of job creation in the United States; therefore, fostering small business formation and growth has social benefits by creating job opportunities and reducing unemployment. One way SBA assists these small business enterprises is through a loan guarantee program which is designed to encourage banks to grant loans to small businesses. SBA acts much like an insurance provider to reduce the risk for a bank by taking on some of the risk through guaranteeing a portion of the loan. In the case that a loan goes into default, SBA then covers the amount they guaranteed. There have been many success stories of start-ups receiving SBA loan guarantees such as FedEx and Apple Computer. However, there have also been stories of small businesses and/or start-ups that have defaulted on their SBA-guaranteed loans. The rate of default on these loans has been a source of controversy for decades.[Reference]


Steps of Work


1. Data Preprocessing and Exploratory Data Analysis (EDA)

I started by importing the SBA Loan dataset. I then cleaned the data: removed unnecessary symbols, restricted the data to what will actually be used based on the reference and the EDA, converted NAICS codes to industrial sectors, fixed NaN values, removed outliers, etc.
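A rough sketch of these cleaning steps is shown below. The file name, column names, sector mapping, and outlier threshold are illustrative assumptions; the notebook may differ.

import pandas as pd

df = pd.read_csv("SBAnational.csv", low_memory=False)   # assumed raw file name

# Strip currency symbols and thousands separators, then cast to float.
for col in ["DisbursementGross", "GrAppv", "SBA_Appv"]:
    df[col] = df[col].str.replace(r"[\$,]", "", regex=True).astype(float)

# Map the first two NAICS digits to an industrial sector label (subset shown).
naics_to_sector = {"11": "Agriculture", "23": "Construction", "44": "Retail Trade"}
df["Sector"] = df["NAICS"].astype(str).str[:2].map(naics_to_sector)

# Handle missing values and trim extreme outliers (placeholder threshold).
df = df.dropna(subset=["MIS_Status"])
df = df[df["DisbursementGross"] < df["DisbursementGross"].quantile(0.99)]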

2. Feature Engineering

In feature engineering, I added columns that might be useful for prediction, such as a real-estate flag. I also one-hot encoded categorical columns such as state and NAICS (industrial sector), dropped columns that I judged would not help the modelling, checked the correlation between features, etc.
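A sketch of these steps, with an assumed real-estate rule (loan term of at least 240 months) and illustrative column names:

# Flag loans likely backed by real estate (assumed rule: term of 20 years or more).
df["RealEstate"] = (df["Term"] >= 240).astype(int)

# One-hot encode categorical columns such as state and industrial sector.
# Assumes the target MIS_Status has already been mapped to 0/1.
df_encoded = pd.get_dummies(df, columns=["State", "Sector"], drop_first=True)

# Check how strongly each numeric feature correlates with the target.
corr = df_encoded.select_dtypes("number").corr()
print(corr["MIS_Status"].sort_values(ascending=False).head(10))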

3. Modelling

I started modelling by standardizing the continuous data, then ran cross validation on five algorithms, Logistic Regression, Decision Tree, Random Forest, Light GBM, and KNN, for both the normal data and the oversampled data (using SMOTE). The two best algorithms both came from the normal data (without SMOTE), so I tuned their hyperparameters next. The cross-validation results are shown below.
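A sketch of that comparison, on both the normal data and a SMOTE-oversampled copy (default hyperparameters and the F1 scoring metric are assumptions here):

from imblearn.over_sampling import SMOTE
from lightgbm import LGBMClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Standardize the continuous features, then build an oversampled copy with SMOTE.
X_scaled = StandardScaler().fit_transform(X_train)
X_smote, y_smote = SMOTE(random_state=42).fit_resample(X_scaled, y_train)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(),
    "Light GBM": LGBMClassifier(),
    "KNN": KNeighborsClassifier(),
}

for name, clf in models.items():
    normal = cross_val_score(clf, X_scaled, y_train, cv=5, scoring="f1").mean()
    smote = cross_val_score(clf, X_smote, y_smote, cv=5, scoring="f1").mean()
    print(f"{name:20s} normal={normal:.3f}  SMOTE={smote:.3f}")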



4. Hyperparameter Tuning

After finding the two best algorithms, Random Forest and Light GBM, I tuned their hyperparameters using GridSearchCV. The tuning results show that Light GBM performs better than Random Forest. Below are the parameters I tuned, the classification report, and the best parameters for Light GBM:

Parameters
param_model_2 = {
    'max_depth' : [-1,7,12,14,17],
    'num_leaves' : [31,70,120],
    'min_data_in_leaf' : [60,100,120],
    'learning_rate' : [0.001,0.01,0.05],
    'num_iterations' :[200,400,600]
}
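
A sketch of how this grid can be searched with GridSearchCV, scoring by F1 as in the report below (data names assumed from the earlier sketches):

from lightgbm import LGBMClassifier
from sklearn.model_selection import GridSearchCV

grid = GridSearchCV(
    estimator=LGBMClassifier(random_state=42),
    param_grid=param_model_2,
    scoring="f1",
    cv=5,
    n_jobs=-1,
)
grid.fit(X_train, y_train)

print(grid.best_params_)
best_lgbm = grid.best_estimator_      # tuned model used in the later steps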
Evaluation
===== CLASSIFICATION REPORT SCORING FROM F1 SCORE =====
              precision    recall  f1-score   support

           0       0.84      0.81      0.83     14561
           1       0.96      0.97      0.96     71177

    accuracy                           0.94     85738
   macro avg       0.90      0.89      0.90     85738
weighted avg       0.94      0.94      0.94     85738

tn :  11838  fp :  2723  fn :  2276  tp :  68901

======== BEST PARAMETERS SCORING FROM F1 SCORE ========
{'learning_rate': 0.05, 'max_depth': 12, 'min_data_in_leaf': 60, 'num_iterations': 600, 'num_leaves': 120}

5. Set Threshold

The next step is to check the ROC AUC of Light GBM with the best parameters. I found that it predicts 0 (CHGOFF) better when the threshold is set to 0.24. It is more important to avoid cases where the status is actually CHGOFF but we predict 1 (PIF) than the other way around. Because I want to minimize CHGOFF loans being predicted as PIF, I need to increase recall for 0 (CHGOFF) and precision for 1 (PIF).
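Applying the custom threshold could look roughly like this. The sketch assumes the 0.24 threshold is applied to the predicted probability of CHGOFF, and reuses names from the earlier sketches; the notebook's exact rule may differ.

import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

THRESHOLD = 0.24

# Flag a loan as 0 (CHGOFF, high risk) whenever P(CHGOFF) reaches the threshold.
proba_chgoff = best_lgbm.predict_proba(X_test)[:, 0]
y_pred = np.where(proba_chgoff >= THRESHOLD, 0, 1)

print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))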

Information : CHGOFF = Charge Off , PIF = Paid in Full

6. Performance Evaluation

For performance evaluation, I checked the feature importances, classification report, and confusion matrix. Below are the classification report and confusion matrix when the threshold is set to 0.24; a sketch of the feature-importance check follows the report.

======== CLASSIFICATION REPORT WITH THRESHOLD ========
              precision    recall  f1-score   support

           0       0.75      0.91      0.82     14561
           1       0.98      0.94      0.96     71177

    accuracy                           0.93     85738
   macro avg       0.87      0.92      0.89     85738
weighted avg       0.94      0.93      0.94     85738


tn :  13196  fp :  1365  fn :  4387  tp :  66790
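The feature-importance check mentioned above could be done like this (assuming X_train is a DataFrame, as in the earlier sketches):

import pandas as pd

# Rank features by their importance in the tuned Light GBM model.
importances = pd.Series(best_lgbm.feature_importances_, index=X_train.columns)
print(importances.sort_values(ascending=False).head(10))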

7. Model Validation

Model validation is very useful for checking the stability of the model that has been built. To validate the model with the 0.24 threshold, I used KFold with 5 folds, and the results show good stability across folds. The validation results are shown below:

F1 Scores       :  [0.957, 0.958, 0.956, 0.957, 0.957]

Accuracy Scores :  [0.93, 0.932, 0.928, 0.931, 0.931]

** NOTES : The classification report for each fold (1-5) and all the code are available in the Jupyter Notebook, please have a look here.
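A sketch of this 5-fold stability check, reusing the tuned parameters and the 0.24 threshold (names assumed from the earlier sketches):

import numpy as np
from lightgbm import LGBMClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import KFold

kf = KFold(n_splits=5, shuffle=True, random_state=42)
f1_scores, acc_scores = [], []

for train_idx, test_idx in kf.split(X):
    clf = LGBMClassifier(**grid.best_params_, random_state=42)
    clf.fit(X.iloc[train_idx], y.iloc[train_idx])

    # Apply the fixed 0.24 threshold on P(CHGOFF) in every fold.
    proba_chgoff = clf.predict_proba(X.iloc[test_idx])[:, 0]
    pred = np.where(proba_chgoff >= 0.24, 0, 1)

    f1_scores.append(round(f1_score(y.iloc[test_idx], pred), 3))
    acc_scores.append(round(accuracy_score(y.iloc[test_idx], pred), 3))

print("F1 Scores       : ", f1_scores)
print("Accuracy Scores : ", acc_scores)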



Dashboard


Home Page :




Prediction Page :




Result Page :


When the result is a low-risk loan



When the result is a high-risk loan




About Page :




Data Visualization and Data Table Page :





NOTES : On the dashboard, the final dataset can be loaded in one of two ways: using SQLAlchemy to read it from a SQL database, or loading it manually with pandas. Look here.
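A sketch of the two loading options; the connection string, table name, and file name are placeholders.

import pandas as pd
from sqlalchemy import create_engine

USE_SQL = True   # switch between the two loading options

if USE_SQL:
    # Option 1: load the final dataset from a MySQL database via SQLAlchemy.
    engine = create_engine("mysql+pymysql://user:password@localhost/sba_loans")
    df_final = pd.read_sql("SELECT * FROM loan_final", con=engine)
else:
    # Option 2: load the same dataset manually from a CSV file with pandas.
    df_final = pd.read_csv("loan_final.csv")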


Keywords : Loan, Prediction, Machine Learning, Light GBM, U.S. Small Business Administration, MySQL
