Real-world data is rarely clean. In this project, Python and its libraries are used to gather data from a variety of sources and in a variety of formats, assess its quality and tidiness, and then clean it. This process is known as data wrangling. The wrangling efforts are documented in a Jupyter Notebook, and the results are presented through analyses and visualizations using Python (and its libraries) and SQL.

The dataset wrangled (and analyzed and visualized) is the tweet archive of Twitter user @dog_rates, also known as WeRateDogs. WeRateDogs is a Twitter account that rates people's dogs with humorous commentary. These ratings almost always have a denominator of 10. The numerators, though? Almost always greater than 10: 11/10, 12/10, 13/10, etc. Why? Because "They're good dogs, Brent." WeRateDogs has over 4 million followers and has received international media coverage.
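The gather-assess-clean cycle described above can be sketched in a few lines of pandas. This is a minimal illustration, not the project's actual code: the tiny `archive` DataFrame is a made-up stand-in for the tweet archive, and the assessment and cleaning steps shown (duplicate detection, deriving a normalized rating) are just examples of the kinds of operations the notebook performs.

```python
import pandas as pd

# Hypothetical mini-archive mimicking the WeRateDogs rating columns
archive = pd.DataFrame({
    "tweet_id": [1, 2, 2, 3],
    "rating_numerator": [13, 12, 12, 11],
    "rating_denominator": [10, 10, 10, 10],
})

# Assess: count duplicate tweet IDs (a common quality issue)
duplicates = archive["tweet_id"].duplicated().sum()

# Clean: drop the duplicates and derive a normalized rating column
clean = archive.drop_duplicates(subset="tweet_id").copy()
clean["rating"] = clean["rating_numerator"] / clean["rating_denominator"]
```

In the real notebook each assessed issue gets its own define/code/test step; this sketch only shows the shape of one such step.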
wrangle_act.ipynb : This file contains the code for gathering, assessing, cleaning, analyzing, and visualizing the data.
tweet_json.txt : This file contains tweet data gathered via the Twitter API.
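A file like tweet_json.txt is typically written with one JSON object per line, so it can be parsed line by line into a DataFrame. The snippet below is a sketch under that assumption; the two-line `sample` stands in for the real file, and the field names (`id`, `retweet_count`, `favorite_count`) are illustrative.

```python
import io
import json
import pandas as pd

# Hypothetical two-line sample standing in for tweet_json.txt,
# assuming one JSON object per line (as an API query loop would write it)
sample = io.StringIO(
    '{"id": 1, "retweet_count": 5, "favorite_count": 20}\n'
    '{"id": 2, "retweet_count": 3, "favorite_count": 15}\n'
)

# Parse each non-empty line and assemble the records into a DataFrame
records = [json.loads(line) for line in sample if line.strip()]
tweets = pd.DataFrame(records, columns=["id", "retweet_count", "favorite_count"])
```

With the real file, `io.StringIO(...)` would be replaced by `open("tweet_json.txt")`.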
image_predictions : Contains tweet IDs and URLs to the dog images.
act_report.pdf : A PDF documenting the analyses and insights drawn from the dataset.
wrangle_report.pdf : This file documents the data wrangling steps: gathering, assessing, and cleaning the data.
twitter_archive_master.csv : This file contains the cleaned and combined data.
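Producing a combined master file like twitter_archive_master.csv usually comes down to merging the separate sources on the tweet ID. The sketch below assumes that join key and uses made-up two-row frames in place of the real archive and API data.

```python
import pandas as pd

# Hypothetical stand-ins for the archive and the API-derived tweet data
archive = pd.DataFrame({"tweet_id": [1, 2], "rating_numerator": [13, 12]})
api_data = pd.DataFrame({"tweet_id": [1, 2], "favorite_count": [20, 15]})

# Join the sources on tweet_id to build the combined master table
master = archive.merge(api_data, on="tweet_id", how="inner")

# The real notebook would then persist it:
# master.to_csv("twitter_archive_master.csv", index=False)
```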
twitter-archive-enhanced.csv : This file was provided in the Udacity classroom.
This project was completed as part of Udacity's Data Analyst Nanodegree program.