In this competition the quality of wine was predicted from a Kaggle wine dataset. Wine quality was given on a scale of 1-10, but we framed the task as a binary classification: all wines with a quality score of 7 or higher were deemed "good" and all other wines were deemed "bad".
The original dataset was modified to convert the quality score into this "good/bad" rating. The property most strongly correlated with wine quality was alcohol content (the higher, the better), while the most weakly correlated property was residual sugar content.
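A minimal sketch of that label conversion, assuming a pandas DataFrame with the usual Kaggle wine column names (`quality`, `alcohol`, `residual sugar`) — the sample rows and column names here are illustrative, not the actual dataset:

```python
import pandas as pd

# Hypothetical sample rows standing in for the Kaggle wine data.
df = pd.DataFrame({
    "alcohol":        [9.4, 12.8, 10.5, 13.0],
    "residual sugar": [1.9, 2.5, 6.1, 2.2],
    "quality":        [5, 7, 6, 8],
})

# Wines scoring 7 or higher become "good" (1); all others "bad" (0).
df["good"] = (df["quality"] >= 7).astype(int)
print(df["good"].tolist())  # [0, 1, 0, 1]
```

The same thresholding works on the full dataset; correlations against `quality` can then be checked with `df.corr()`.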
For this competition, my friend Adam and I competed to get the best prediction score in 30 minutes. We used the Kaggle project shown here.
In the end, I had a precision score of 0.90 and Adam had 0.91, so he won (barely!). We both used a gradient boosted random forest as the classification method, but he used more estimators, which led to a better prediction.
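A sketch of the kind of model comparison described above, using scikit-learn's `GradientBoostingClassifier` with synthetic data standing in for the wine features; the train/test split, `n_estimators` values, and random seeds are assumptions, not the settings from our actual notebooks:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the wine features (the real data has 11 columns).
X, y = make_classification(n_samples=1000, n_features=11, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The number of boosting rounds (n_estimators) was the difference
# between our two models: more rounds, better precision (up to a point).
for n in (50, 200):
    clf = GradientBoostingClassifier(n_estimators=n, random_state=0)
    clf.fit(X_train, y_train)
    print(n, precision_score(y_test, clf.predict(X_test)))
```

Raising `n_estimators` adds boosting rounds, each fitting a new tree to the previous ensemble's errors; it usually improves the score until the model starts to overfit.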