The goal of this project is to build a model that detects automobile insurance fraud. Several models are evaluated to find the best-performing one.
Dataset source: https://www.kaggle.com/buntyshah/auto-insurance-claims-data
The dataset consists of 1000 auto incidents and insurance claims from Ohio, Illinois, and Indiana, covering 1 January 2015 to 1 March 2015. Before any cleaning or feature engineering, it has 40 variables and 1000 samples.
The dataset is highly imbalanced, with 247 fraudulent and 753 non-fraudulent claims: 24.7% of the claims are frauds, while 75.3% are non-fraudulent.
The dataset is balanced using SMOTE (Synthetic Minority Oversampling Technique), which balances an imbalanced dataset by oversampling the minority class. After SMOTE, the dataset is balanced, with 526 samples in each class.
From the above plots it can be seen that property_claim, policy_annual_premium, and age have some outliers.
Here we can see some correlation between age and months_as_customer; apart from that, there is no major correlation between the features.
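The correlation check can be sketched with a pandas correlation matrix. The data below is synthetic and the column names (months_as_customer, age, policy_annual_premium) are assumed from the dataset description; age is generated to grow with tenure so that the positive correlation noted above appears.

```python
# Sketch of the correlation inspection using a synthetic frame whose
# columns mimic the dataset's (assumed) names. Only the method matters.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
months = rng.integers(0, 480, size=200)
df = pd.DataFrame({
    "months_as_customer": months,
    # age tends to grow with tenure, producing a strong correlation
    "age": 19 + months // 12 + rng.integers(-3, 4, size=200),
    "policy_annual_premium": rng.normal(1250, 240, size=200),
})

corr = df.corr()
print(corr.loc["age", "months_as_customer"])  # strongly positive
```

In the project, such a matrix is typically rendered as a heatmap, which is where the observation above comes from.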
Results on training data:
- Accuracy = 0.7956273764258555
- Precision = 0.7961936939383735
- Recall = 0.7956273764258555
Results on testing data:
- Accuracy = 0.7866666666666666
- Precision = 0.8178682345846525
- Recall = 0.7866666666666666
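In the result blocks above and below, recall always equals accuracy. That is the signature of weighted averaging: support-weighted recall reduces algebraically to overall accuracy. A minimal sketch of how these metrics would be computed with scikit-learn, on toy labels (the `average="weighted"` setting is an assumption inferred from the reported numbers, not stated in the source):

```python
# Sketch of the metric computation. Toy labels; average="weighted" is
# an inference from recall matching accuracy in the reported results.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [0, 0, 0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 0, 1, 0, 1, 0, 1, 0, 1, 0]

acc = accuracy_score(y_true, y_pred)
prec = precision_score(y_true, y_pred, average="weighted")
rec = recall_score(y_true, y_pred, average="weighted")

# Weighted recall is the support-weighted mean of per-class recalls,
# which equals the fraction of correct predictions, i.e. accuracy.
print(acc, rec)  # the two values are identical (0.8 here)
```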
Results on training data:
- Accuracy = 1.0
- Precision = 1.0
- Recall = 1.0
Results on testing data:
- Accuracy = 0.8266666666666667
- Precision = 0.823645147123408
- Recall = 0.8266666666666667
Results on training data:
- Accuracy = 0.8336501901140685
- Precision = 0.8349686305957829
- Recall = 0.8336501901140685
Results on testing data:
- Accuracy = 0.7633333333333333
- Precision = 0.7890935068512016
- Recall = 0.7633333333333333
Results on training data:
- Accuracy = 1.0
- Precision = 1.0
- Recall = 1.0
Results on testing data:
- Accuracy = 0.78
- Precision = 0.7779851159357895
- Recall = 0.78
Results on training data:
- Accuracy = 1.0
- Precision = 1.0
- Recall = 1.0
Results on testing data:
- Accuracy = 0.8266666666666667
- Precision = 0.8222902654111418
- Recall = 0.8266666666666667
Results on training data:
- Accuracy = 1.0
- Precision = 1.0
- Recall = 1.0
Results on testing data:
- Accuracy = 0.8433333333333334
- Precision = 0.8440751813760663
- Recall = 0.8433333333333334
Out of all the models, the CatBoost classifier gives the best results, with 100% accuracy on the training data and 84% accuracy on the test data.