This lesson summarizes the topics we'll be covering in section 23 and why they'll be important to you as a data scientist.
You will be able to:
- Understand and explain what is covered in this section
- Understand and explain why the section will help you to become a data scientist
In this section we'll be introducing some additional statistical techniques that will be important as we move into module 3 and beyond.
We kick off the section with a recap of some of the key probability distributions - uniform, exponential, normal and Poisson.
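As a quick preview, all four of these distributions can be sampled with NumPy's random generator (the parameters below are arbitrary, chosen just for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Draw 1,000 samples from each of the four distributions covered here
uniform_draws = rng.uniform(low=0, high=1, size=1000)
exponential_draws = rng.exponential(scale=2.0, size=1000)  # mean = 2
normal_draws = rng.normal(loc=0, scale=1, size=1000)       # standard normal
poisson_draws = rng.poisson(lam=5, size=1000)              # mean = 5

# With 1,000 draws, each sample mean lands near its theoretical mean
print(uniform_draws.mean(), exponential_draws.mean(),
      normal_draws.mean(), poisson_draws.mean())
```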
One of the most common assumptions for many machine learning algorithms is that your data set is normally distributed. In this lesson, we introduce the Kolmogorov-Smirnov test, which can be used to test whether a data set is normally distributed.
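A minimal sketch of the idea, using SciPy's `kstest` to compare a clearly normal sample and a clearly skewed one against the standard normal distribution (the sample sizes and seed here are arbitrary; note that `kstest` compares against a fully specified distribution rather than one with estimated parameters):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
normal_data = rng.normal(loc=0, scale=1, size=500)
skewed_data = rng.exponential(scale=1.0, size=500)

# Compare each sample's empirical CDF against the standard normal CDF.
# D is the largest gap between the two CDFs.
stat_norm, p_norm = stats.kstest(normal_data, 'norm')
stat_skew, p_skew = stats.kstest(skewed_data, 'norm')

# A large p-value means we cannot reject normality;
# a tiny one means the sample is very unlikely to be normal
print(f"normal sample: D={stat_norm:.3f}, p={p_norm:.3f}")
print(f"skewed sample: D={stat_skew:.3f}, p={p_skew:.3g}")
```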
Sometimes you just want to test the efficiency or performance of an algorithm with a certain type of data. When that is the case, you need to be able to generate a data set meeting a particular set of requirements. So next, we give you some hands-on experience of generating synthetic data sets.
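One common pattern is to generate data from a known ground truth so you can check how well a fitting procedure recovers it. Here is a small sketch (the coefficients and noise level are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression data with a known ground truth:
# y = 3x + 5 plus Gaussian noise
n = 200
x = rng.uniform(0, 10, size=n)
noise = rng.normal(0, 2, size=n)
y = 3 * x + 5 + noise

# Because we chose the true coefficients ourselves, we can measure
# how close a simple least-squares fit comes to recovering them
slope, intercept = np.polyfit(x, y, deg=1)
print(slope, intercept)
```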
Next up, we look at techniques for taking repeated subsamples from a sample - bootstrapping, the jackknife, and permutation tests - to better estimate the precision of your sample statistics or to validate models using random subsets.
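To give a flavor of the bootstrap specifically, here is a sketch that resamples one observed sample with replacement to build a confidence interval for its mean (sample size, seed, and number of resamples are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
sample = rng.normal(loc=100, scale=15, size=50)  # one observed sample

# Bootstrap: resample WITH replacement many times and recompute
# the statistic, to estimate its sampling variability
n_boot = 5000
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(n_boot)
])

# 95% percentile confidence interval for the mean
lower, upper = np.percentile(boot_means, [2.5, 97.5])
print(f"bootstrap 95% CI for the mean: ({lower:.1f}, {upper:.1f})")
```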
We finish up the section by introducing the idea of Monte Carlo simulations for running large numbers of simulations with varying inputs to produce distributions of possible output values.
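As a hypothetical sketch of the idea: simulate a profit figure many times while the inputs vary randomly, and look at the resulting distribution of outcomes rather than a single point estimate. All the numbers below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo simulation: each run draws uncertain inputs at random
n_sims = 100_000
units_sold = rng.normal(loc=1000, scale=100, size=n_sims)  # uncertain demand
unit_price = rng.uniform(9.0, 11.0, size=n_sims)           # uncertain price
fixed_costs = 4000
variable_cost = 4.0

profit = units_sold * (unit_price - variable_cost) - fixed_costs

# The result is a full distribution of outcomes, not a single number
print(f"mean profit: {profit.mean():.0f}")
print(f"5th-95th percentile: ({np.percentile(profit, 5):.0f}, "
      f"{np.percentile(profit, 95):.0f})")
```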
In this section we continue to introduce foundational statistical concepts that will be critical when working with various machine learning models in modules 3 and 4.