
Coursera

My coursework solutions and quiz answers

Deep Learning Specialization

Course 1 - Neural Networks and Deep Learning

Week 1 - Introduction to Deep Learning Quiz

Analyze the major trends driving the rise of deep learning, and give examples of where and how it is applied today.

Week 2 - Neural Networks Basics Quiz

Set up a machine learning problem with a neural network mindset and use vectorization to speed up your models.

Python Basics With Numpy

Logistic Regression with a Neural Network mindset
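As a taste of the vectorization theme, here is a minimal numpy sketch (not the assignment's exact code) of one forward/backward pass of logistic regression; the shapes and names are illustrative.

```python
import numpy as np

def propagate(w, b, X, Y):
    """One vectorized pass of logistic regression.

    w: (n_x, 1) weights, b: scalar bias,
    X: (n_x, m) examples as columns, Y: (1, m) labels.
    """
    m = X.shape[1]
    A = 1 / (1 + np.exp(-(np.dot(w.T, X) + b)))   # sigmoid activations, shape (1, m)
    cost = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m
    dw = np.dot(X, (A - Y).T) / m                 # gradient w.r.t. w, no Python loop
    db = np.sum(A - Y) / m                        # gradient w.r.t. b
    return dw, db, cost
```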

Week 3 - Shallow Neural Networks Quiz

Build a neural network with one hidden layer, using forward propagation and backpropagation.

Planar data classification with one hidden layer
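A hedged sketch of the forward and backward passes for a one-hidden-layer classifier like the planar-data model; the tanh/sigmoid pairing follows the course convention, but variable names are illustrative.

```python
import numpy as np

def forward(X, W1, b1, W2, b2):
    """X: (n_x, m); W1: (n_h, n_x); b1: (n_h, 1); W2: (1, n_h); b2: (1, 1)."""
    Z1 = np.dot(W1, X) + b1
    A1 = np.tanh(Z1)                              # hidden layer: tanh activation
    Z2 = np.dot(W2, A1) + b2
    A2 = 1 / (1 + np.exp(-Z2))                    # output layer: sigmoid probability
    return A1, A2

def backward(X, Y, A1, A2, W2):
    """Backpropagation for the same network under cross-entropy loss."""
    m = X.shape[1]
    dZ2 = A2 - Y
    dW2 = np.dot(dZ2, A1.T) / m
    db2 = np.sum(dZ2, axis=1, keepdims=True) / m
    dZ1 = np.dot(W2.T, dZ2) * (1 - A1 ** 2)       # tanh'(z) = 1 - tanh(z)^2
    dW1 = np.dot(dZ1, X.T) / m
    db1 = np.sum(dZ1, axis=1, keepdims=True) / m
    return dW1, db1, dW2, db2
```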

Week 4 - Deep Neural Networks Quiz

Analyze the key computations underlying deep learning, then use them to build and train deep neural networks for computer vision tasks.

Building your Deep Neural Network Step by Step

Deep Neural Network Application
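The same idea extends to L layers. A rough sketch of the forward loop, assuming parameters stored under keys 'W1'..'WL' and 'b1'..'bL' (a naming convention borrowed from the assignment, not guaranteed to match it exactly):

```python
import numpy as np

def relu(Z):
    return np.maximum(0, Z)

def L_model_forward(X, params, L):
    """Forward pass through L-1 ReLU layers followed by a sigmoid output.

    X holds examples as columns; params maps 'Wl'/'bl' to layer l's weights.
    """
    A = X
    for l in range(1, L):
        A = relu(np.dot(params['W' + str(l)], A) + params['b' + str(l)])
    ZL = np.dot(params['W' + str(L)], A) + params['b' + str(L)]
    return 1 / (1 + np.exp(-ZL))   # probabilities for binary classification
```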

Course 2 - Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization

Week 1 - Practical Aspects of Deep Learning Quiz

Discover and experiment with a variety of different initialization methods, apply L2 regularization and dropout to avoid model overfitting, then apply gradient checking to identify errors in a fraud detection model.

Initialization

Regularization

Gradient Checking
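The core trick in the gradient-checking assignment is comparing backprop's gradient with a two-sided numerical estimate. A minimal sketch, assuming the parameters are flattened into a single 1-D vector:

```python
import numpy as np

def gradient_check(J, grad, theta, eps=1e-7):
    """Compare an analytic gradient against a centered-difference estimate.

    J: callable mapping a 1-D parameter vector to a scalar cost;
    grad: the analytic gradient at theta, same shape as theta.
    """
    num_grad = np.zeros_like(theta)
    for i in range(theta.size):
        plus, minus = theta.copy(), theta.copy()
        plus[i] += eps
        minus[i] -= eps
        num_grad[i] = (J(plus) - J(minus)) / (2 * eps)   # two-sided difference
    # relative difference; values well below 1e-7 suggest a correct backprop
    return np.linalg.norm(grad - num_grad) / (np.linalg.norm(grad) + np.linalg.norm(num_grad))
```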

Week 2 - Optimization Algorithms Quiz

Develop your deep learning toolbox by adding more advanced optimizations, random minibatching, and learning rate decay scheduling to speed up your models.

Optimization methods
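Two of the week's building blocks, sketched with illustrative defaults rather than the assignment's exact interfaces: random minibatching over column-wise data, and inverse-time learning-rate decay.

```python
import numpy as np

def minibatches(X, Y, batch_size=64, seed=0):
    """Shuffle the (column-wise) dataset and cut it into random minibatches."""
    rng = np.random.default_rng(seed)
    m = X.shape[1]
    perm = rng.permutation(m)
    X, Y = X[:, perm], Y[:, perm]
    for k in range(0, m, batch_size):
        yield X[:, k:k + batch_size], Y[:, k:k + batch_size]

def decayed_lr(lr0, epoch, decay_rate=1.0):
    """Inverse time decay: the step size shrinks as training progresses."""
    return lr0 / (1 + decay_rate * epoch)
```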

Week 3 - Hyperparameter Tuning, Batch Normalization and Programming Frameworks Quiz

Explore TensorFlow, a deep learning framework that allows you to build neural networks quickly and easily, then train a neural network on a TensorFlow dataset.

TensorFlow Introduction
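A minimal tf.keras sketch in the spirit of the assignment; the layer sizes and the 12288-dimensional input (64x64x3 images, flattened) are placeholders, not the notebook's actual architecture.

```python
import tensorflow as tf

# A small fully connected multi-class classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(25, activation="relu", input_shape=(12288,)),
    tf.keras.layers.Dense(12, activation="relu"),
    tf.keras.layers.Dense(6, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, epochs=10)  # train_ds: a tf.data.Dataset of (image, label)
```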

Course 3 - Structuring Machine Learning Projects

Week 1 - Bird Recognition in the City of Peacetopia (Case Study) Quiz

Streamline and optimize your ML production workflow by implementing strategic guidelines for goal-setting and applying human-level performance to help define key priorities.

Week 2 - Autonomous Driving (Case Study) Quiz

Develop time-saving error analysis procedures to evaluate the most worthwhile options to pursue and gain intuition for how to split your data and when to use multi-task, transfer, and end-to-end deep learning.

Course 4 - Convolutional Neural Networks

Week 1 - The Basics of ConvNets Quiz

Implement the foundational layers of CNNs (pooling, convolutions) and stack them properly in a deep network to solve multi-class image classification problems.

Convolution model Step by Step

Convolution Model Application
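The step-by-step assignment builds these layers from scratch, and at the heart of each is a single-window step. A sketch, with illustrative names:

```python
import numpy as np

def conv_single_step(a_slice, W, b):
    """Apply one filter to one slice of the input volume.

    a_slice, W: (f, f, n_C_prev); b: scalar. Returns one scalar of the
    output activation map: an elementwise product summed up, plus the bias.
    """
    return np.sum(a_slice * W) + float(b)

def pool_single_step(a_slice, mode="max"):
    """One window of a pooling layer: reduce a slice to a single number."""
    return np.max(a_slice) if mode == "max" else np.mean(a_slice)
```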

Week 2 - Deep Convolutional Models Quiz

Discover some powerful practical tricks and methods used in deep CNNs, straight from the research papers, then apply transfer learning to your own deep CNN.

Residual Networks

Transfer learning with MobileNet v1
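One of the paper-derived tricks is the residual skip connection. A hedged Keras sketch of an identity block (it assumes the input already has `filters` channels so the shapes match; not the notebook's exact code):

```python
import tensorflow as tf
from tensorflow.keras import layers

def identity_block(x, filters, kernel_size=3):
    """A minimal residual 'identity' block: the shortcut is added back in
    before the final activation, so gradients can flow around the convs."""
    shortcut = x
    y = layers.Conv2D(filters, kernel_size, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, kernel_size, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([y, shortcut])        # the skip connection
    return layers.Activation("relu")(y)
```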

Week 3 - Detection Algorithms Quiz

Apply your new knowledge of CNNs to one of the hottest (and most challenging!) fields in computer vision: object detection.

Car detection with YOLO

Image Segmentation with U-Net v2
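A building block behind the YOLO car-detection assignment is intersection over union, used both for scoring predicted boxes against ground truth and for non-max suppression. A minimal sketch with corner-format boxes:

```python
def iou(box1, box2):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    xi1, yi1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    xi2, yi2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)       # overlap area, if any
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)
```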

Week 4 - Face Recognition & Neural Style Transfer Quiz

Explore how CNNs can be applied to multiple fields, including art generation and face recognition, then implement your own algorithm to generate art and recognize faces!

Face Recognition

Art Generation with Neural Style Transfer
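The face-recognition notebook centers on the triplet loss. A stripped-down numpy version for single embedding vectors (the real one operates on batches of network outputs):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Pull the anchor toward the positive (same identity) and push it
    at least `alpha` farther from the negative, in squared distance."""
    pos_dist = np.sum((anchor - positive) ** 2)
    neg_dist = np.sum((anchor - negative) ** 2)
    return max(pos_dist - neg_dist + alpha, 0.0)
```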

Reference Papers


Reinforcement Learning Specialization

Course 1 - Fundamentals of Reinforcement Learning

Week 1 - An Introduction to Sequential Decision-Making Quiz

For the first week of this course, you will learn how to understand the exploration-exploitation trade-off in sequential decision-making, implement incremental algorithms for estimating action-values, and compare the strengths and weaknesses of different algorithms for exploration. For this week’s graded assessment, you will implement and test an epsilon-greedy agent.

Bandits and Exploration/Exploitation
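A minimal sketch of an epsilon-greedy bandit agent with incremental action-value updates; the constants and names are illustrative, not the autograder's API.

```python
import numpy as np

class EpsilonGreedyAgent:
    """Incremental action-value estimation with epsilon-greedy exploration."""

    def __init__(self, num_actions, epsilon=0.1, step_size=0.1, seed=0):
        self.q = np.zeros(num_actions)     # current action-value estimates
        self.epsilon = epsilon
        self.step_size = step_size
        self.rng = np.random.default_rng(seed)

    def select_action(self):
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(len(self.q)))   # explore
        return int(np.argmax(self.q))                    # exploit

    def update(self, action, reward):
        # incremental update: Q <- Q + alpha * (R - Q)
        self.q[action] += self.step_size * (reward - self.q[action])
```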

Week 2 - Markov Decision Processes Quiz

When you’re presented with a problem in industry, the first and most important step is to translate that problem into a Markov Decision Process (MDP). The quality of your solution depends heavily on how well you do this translation. This week, you will learn the definition of MDPs, understand goal-directed behavior and how it can be obtained by maximizing scalar rewards, and understand the difference between episodic and continuing tasks. For this week’s graded assessment, you will create three example tasks of your own that fit into the MDP framework.

Week 3 - Value Functions & Bellman Equations Quiz 1 Quiz 2

Once the problem is formulated as an MDP, finding the optimal policy is more efficient when using value functions. This week, you will learn the definition of policies and value functions, as well as Bellman equations, which are the key technology that all of our algorithms will use.

Week 4 - Dynamic Programming Quiz

This week, you will learn how to compute value functions and optimal policies, assuming you have the MDP model. You will implement dynamic programming to compute value functions and optimal policies and understand the utility of dynamic programming for industrial applications and problems. Further, you will learn about Generalized Policy Iteration as a common template for constructing algorithms that maximize reward. For this week’s graded assessment, you will implement an efficient dynamic programming agent in a simulated industrial control problem.

Optimal Policies with Dynamic Programming
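The heart of the DP assignment is sweeping the Bellman expectation update. A compact sketch, assuming the MDP is given as dense arrays:

```python
import numpy as np

def policy_evaluation(P, R, pi, gamma=0.9, theta=1e-8):
    """Iterative policy evaluation for a known MDP.

    P[s, a, s']: transition probabilities; R[s, a]: expected reward;
    pi[s, a]: policy. Sweeps the Bellman expectation update until the
    largest change falls below theta.
    """
    V = np.zeros(P.shape[0])
    while True:
        # expected one-step reward plus discounted next-state values
        q = R + gamma * P @ V              # shape (n_states, n_actions)
        V_new = np.sum(pi * q, axis=1)     # average over pi's action choices
        if np.max(np.abs(V_new - V)) < theta:
            return V_new
        V = V_new
```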

Course 2 - Sample-based Learning Methods

Week 1 - Monte Carlo Methods for Prediction & Control Quiz

This week you will learn how to estimate value functions and optimal policies, using only sampled experience from the environment. This module represents our first step toward incremental learning methods that learn from the agent’s own interaction with the world, rather than a model of the world. You will learn about on-policy and off-policy methods for prediction and control, using Monte Carlo methods---methods that use sampled returns. You will also be reintroduced to the exploration problem, but more generally in RL, beyond bandits.

Blackjack
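A rough sketch of first-visit Monte Carlo prediction, assuming each episode is recorded as (state, reward) pairs where the reward is the one received on leaving that state:

```python
import numpy as np
from collections import defaultdict

def mc_prediction(episodes, gamma=1.0):
    """First-visit Monte Carlo estimation of a state-value function."""
    returns = defaultdict(list)
    for episode in episodes:
        first_visit = {}
        for t, (state, _) in enumerate(episode):
            if state not in first_visit:
                first_visit[state] = t      # remember the first visit only
        G = 0.0
        for t in reversed(range(len(episode))):
            state, reward = episode[t]
            G = gamma * G + reward          # sampled return from time t
            if first_visit[state] == t:
                returns[state].append(G)
    return {s: float(np.mean(g)) for s, g in returns.items()}
```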

Week 2 - Temporal Difference Learning Methods for Prediction Quiz

This week, you will learn about one of the most fundamental concepts in reinforcement learning: temporal difference (TD) learning. TD learning combines some of the features of both Monte Carlo and Dynamic Programming (DP) methods. TD methods are similar to Monte Carlo methods in that they can learn from the agent’s interaction with the world, and do not require knowledge of the model. TD methods are similar to DP methods in that they bootstrap, and thus can learn online---no waiting until the end of an episode. You will see how TD can learn more efficiently than Monte Carlo, due to bootstrapping. For this module, we first focus on TD for prediction, and discuss TD for control in the next module. This week, you will implement TD to estimate the value function for a fixed policy, in a simulated domain.

Policy Evaluation with Temporal Difference Learning
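The TD(0) update itself is one line: move the current estimate toward a bootstrapped target, with no need to wait for the episode to end. A sketch (V can be a dict or an array):

```python
def td0_update(V, state, reward, next_state, alpha=0.1, gamma=1.0, done=False):
    """One TD(0) update toward the bootstrapped target r + gamma * V(s')."""
    target = reward + (0.0 if done else gamma * V[next_state])
    V[state] += alpha * (target - V[state])
```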

Week 3 - Temporal Difference Learning Methods for Control Quiz

This week, you will learn about using temporal difference learning for control, as a generalized policy iteration strategy. You will see three different algorithms based on bootstrapping and Bellman equations for control: Sarsa, Q-learning and Expected Sarsa. You will see some of the differences between the methods for on-policy and off-policy control, and that Expected Sarsa is a unified algorithm for both. You will implement Expected Sarsa and Q-learning, on Cliff World.

Q-Learning and Expected Sarsa
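The two control updates side by side, sketched for a tabular Q of shape (states, actions); with a greedy target policy, Expected Sarsa's update reduces exactly to Q-learning's:

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.5, gamma=1.0):
    """Off-policy TD control: bootstrap from the best next action."""
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])

def expected_sarsa_update(Q, s, a, r, s_next, policy, alpha=0.5, gamma=1.0):
    """Bootstrap from the expectation over the policy's next action.

    policy(s) returns a vector of action probabilities for state s.
    """
    expected_q = np.dot(policy(s_next), Q[s_next])
    Q[s, a] += alpha * (r + gamma * expected_q - Q[s, a])
```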

Week 4 - Planning, Learning & Acting Quiz

Up until now, you might think that learning with and without a model are two distinct, and in some ways, competing strategies: planning with Dynamic Programming versus sample-based learning via TD methods. This week we unify these two strategies with the Dyna architecture. You will learn how to estimate the model from data and then use this model to generate hypothetical experience (a bit like dreaming) to dramatically improve sample efficiency compared to sample-based methods like Q-learning. In addition, you will learn how to design learning systems that are robust to inaccurate models.

Dyna-Q and Dyna-Q+
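The distinctive part of Dyna-Q is the planning loop: replaying remembered transitions through ordinary Q-learning updates. A hedged sketch, with the model as a dict from (state, action) to (reward, next state):

```python
import numpy as np

def dyna_q_planning(Q, model, rng, n_steps=10, alpha=0.5, gamma=0.95):
    """Dyna-Q planning: each step samples a previously observed (s, a),
    asks the learned model for (r, s'), and applies a Q-learning update,
    squeezing extra learning out of past experience."""
    keys = list(model.keys())
    for _ in range(n_steps):
        s, a = keys[rng.integers(len(keys))]
        r, s_next = model[(s, a)]
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
```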

Course 3 - Prediction and Control with Function Approximation

Week 1 - On-policy Prediction with Approximation Quiz

This week you will learn how to estimate a value function for a given policy, when the number of states is much larger than the memory available to the agent. You will learn how to specify a parametric form of the value function, how to specify an objective function, and how gradient descent can be used to estimate values from interaction with the world.

Semi-gradient TD(0) with State Aggregation
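With state aggregation, each group of states shares one weight, so the semi-gradient TD(0) update touches a single component. A sketch with an illustrative group_size parameter:

```python
def agg_value(w, state, group_size):
    """State aggregation: all states in a group share one learned weight."""
    return w[state // group_size]

def semi_gradient_td0(w, s, r, s_next, group_size,
                      alpha=0.1, gamma=1.0, done=False):
    """Semi-gradient TD(0): the gradient is taken only through the
    estimate of V(s), never through the bootstrapped target."""
    target = r + (0.0 if done else gamma * agg_value(w, s_next, group_size))
    w[s // group_size] += alpha * (target - agg_value(w, s, group_size))
```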

Week 2 - Constructing Features for Prediction Quiz

The features used to construct the agent’s value estimates are perhaps the most crucial part of a successful learning system. In this module we discuss two basic strategies for constructing features: (1) fixed basis functions that form an exhaustive partition of the input, and (2) adapting the features while the agent interacts with the world via neural networks and backpropagation. In this week’s graded assessment you will solve a simple but infinite-state prediction task with a neural network and TD learning.

Semi-gradient TD with a Neural Network

Week 3 - Control with Approximation Quiz

This week, you will see that the concepts and tools introduced in modules two and three allow straightforward extension of classic TD control methods to the function approximation setting. In particular, you will learn how to find the optimal policy in infinite-state MDPs by simply combining semi-gradient TD methods with generalized policy iteration, yielding classic control methods like Q-learning and Sarsa. We conclude with a discussion of a new problem formulation for RL---average reward---which will undoubtedly be used in many applications of RL in the future.

Function Approximation and Control

Week 4 - Policy Gradient Quiz

Every algorithm you have learned about so far estimates a value function as an intermediate step towards the goal of finding an optimal policy. An alternative strategy is to directly learn the parameters of the policy. This week you will learn about these policy gradient methods, and their advantages over value-function based methods. You will also learn how policy gradient methods can be used to find the optimal policy in tasks with both continuous state and action spaces.

Average Reward Softmax Actor-Critic using Tile-coding
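A tabular sketch of the average-reward softmax actor-critic update (the assignment uses tile coding for features; here theta[s] holds raw action preferences for clarity, which is an illustrative simplification):

```python
import numpy as np

def softmax(prefs):
    z = prefs - np.max(prefs)          # shift for numerical stability
    e = np.exp(z)
    return e / np.sum(e)

def actor_critic_step(theta, w, s, a, r, s_next,
                      alpha_theta=0.1, alpha_w=0.1, avg_reward=0.0, beta=0.01):
    """One step of a differential (average-reward) actor-critic.

    theta: (states, actions) action preferences; w: (states,) state values.
    Returns the updated average-reward estimate.
    """
    delta = r - avg_reward + w[s_next] - w[s]   # differential TD error
    avg_reward += beta * delta
    w[s] += alpha_w * delta                     # critic update
    pi = softmax(theta[s])
    grad_log = -pi
    grad_log[a] += 1.0                          # grad of log softmax at chosen action
    theta[s] += alpha_theta * delta * grad_log  # actor update
    return avg_reward
```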


Practical Time Series Analysis

Weeks 1-6

Week 1 - Basic Statistics Quiz 1 Quiz 2

During this first week, we show how to download and install R on Windows and the Mac. We review those basics of inferential and descriptive statistics that you'll need during the course.

Measuring Linear Association with the Correlation Function
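The course works in R, but the correlation computation translates directly. A numpy sketch with made-up data, checking the by-hand formula against np.corrcoef:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # roughly y = 2x: strong linear association

# Pearson correlation by the definition: covariance scaled by both sds
r = np.sum((x - x.mean()) * (y - y.mean())) / (
    np.sqrt(np.sum((x - x.mean()) ** 2)) * np.sqrt(np.sum((y - y.mean()) ** 2)))

print(r, np.corrcoef(x, y)[0, 1])   # both ~0.999
```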

Week 2 - Visualizing Time Series, and Beginning to Model Time Series Quiz 1 Quiz 2 Quiz 3

This week, we begin to explore and visualize time series available as acquired data sets. We also take our first steps in developing the mathematical models needed to analyze time series data.

Introduction to Time Series

Week 3 - Stationarity, MA(q) and AR(p) processes Quiz 1 Quiz 2 Quiz 3 Quiz 4

In Week 3, we introduce a few important notions in time series analysis: stationarity, the backward shift operator, invertibility, and duality. We begin to explore autoregressive processes and Yule-Walker equations.

Series and Series Representations

Stationarity: Intuition and Definition

Stationarity: White Noise, Random Walks and Moving Averages

Stationarity: ACF of a Moving Average

Autoregressive Processes: Definition and First Examples

Autoregressive Processes: Backshift Operator and the ACF

Yule-Walker Equations
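For an AR(2) process, the Yule-Walker equations reduce to a 2x2 linear system in the coefficients. A sketch taking hypothetical sample autocorrelations r1 and r2 as inputs:

```python
import numpy as np

def yule_walker_ar2(r1, r2):
    """Solve the Yule-Walker equations for AR(2) given rho(1), rho(2):

        rho(1) = phi1 + phi2 * rho(1)
        rho(2) = phi1 * rho(1) + phi2
    """
    R = np.array([[1.0, r1],
                  [r1, 1.0]])       # Toeplitz autocorrelation matrix
    phi1, phi2 = np.linalg.solve(R, np.array([r1, r2]))
    return phi1, phi2
```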

Week 4 - AR(p) processes, Yule-Walker equations, PACF Quiz 1 Quiz 2 Quiz 3

This week, partial autocorrelation is introduced. We work more with Yule-Walker equations, and apply what we have learned so far to a few real-world datasets.

Yule-Walker Equations in matrix form

Partial Autocorrelation and the PACF First Examples

Week 5 - Akaike Information Criterion (AIC), Mixed Models, Integrated Models Quiz 1 Quiz 2 Quiz 3 Quiz 4

In Week 5, we start working with the Akaike Information Criterion as a tool to judge our models, introduce mixed models such as ARMA and ARIMA, and model a few real-world datasets.

ARIMA processes

Akaike Information Criterion and Model Quality

ARMA Properties and Examples

ARMA Models and a Little Theory
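The course fits these models in R, but the AIC-based comparison looks much the same in Python. A hedged sketch with stand-in random-walk data, assuming statsmodels is installed:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=300))   # stand-in data: a random walk

# Fit a few candidate orders and prefer the lowest AIC
for order in [(1, 1, 0), (0, 1, 1), (1, 1, 1)]:
    res = ARIMA(y, order=order).fit()
    print(order, round(res.aic, 1))
```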

Week 6 - Seasonality, SARIMA, Forecasting Quiz 1 Quiz 2 Quiz 3

In the last week of our course, another model is introduced: SARIMA. We fit SARIMA models to various datasets and start forecasting.

SARIMA processes

Forecasting Using Simple Exponential Smoothing

Forecasting Using Holt-Winters for Trend (Double Exponential)

Forecasting Using Holt-Winters for Trend and Seasonality (Triple Exponential)
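Simple exponential smoothing is the base case of the Holt-Winters family: the forecast is a geometrically weighted average of past observations. A tiny numpy-style sketch:

```python
def simple_exp_smoothing(y, alpha=0.3):
    """Simple exponential smoothing: the level blends each new observation
    with the old level; the one-step-ahead forecast is the final level."""
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level   # forecast for the next period
```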

