💼 My Recent Projects Include:
Research: Reinforcement Learning for V2V Communication
- An artificially intelligent (AI) approach to electronic communication between a vehicle and everything around it (V2X)
Machine Learning Projects:
1. Regression with Multilayer Perceptron (MLP) Modeling of the Saddle and Ackley Functions
- Generate a data set with the simple Saddle Point or the Ackley Function
- Add uniform random noise and visualize the 3D meshgrid
- Reshape the generated data to be a tensor input vector (shape will be: sample rows by feature columns)
- Regression MLP Model Parameters:
- Stochastic Gradient Descent optimizer
- Neural Network Architecture:
- Input Layer = 10 neurons with a sigmoid activation function
- Output Layer = 1 neuron
- Learning Rate = 0.1
- Exponential Decay Factor = 0
- Momentum = 0.1
- Train Duration: 50 Epochs
- Batch Size = 10
- Mean Square Error Loss Function
- Predict the Saddle point and Ackley function surfaces with the trained regression MLP
- Plot the Results
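The steps above can be sketched end to end. The original project presumably uses a deep-learning framework, but as a self-contained illustration the same architecture and hyperparameters (10 sigmoid hidden neurons, one linear output, SGD with learning rate 0.1 and momentum 0.1, MSE loss, 50 epochs, batch size 10) can be implemented directly in NumPy; the grid range, noise amplitude, and seed here are illustrative assumptions:

```python
import numpy as np

def ackley(x, y):
    # standard 2-D Ackley function
    return (-20 * np.exp(-0.2 * np.sqrt(0.5 * (x**2 + y**2)))
            - np.exp(0.5 * (np.cos(2 * np.pi * x) + np.cos(2 * np.pi * y)))
            + np.e + 20)

rng = np.random.default_rng(0)
g = np.linspace(-2, 2, 30)
X1, X2 = np.meshgrid(g, g)
Z = ackley(X1, X2) + rng.uniform(-0.1, 0.1, X1.shape)  # add uniform noise

# reshape the meshgrid to a (sample rows, feature columns) tensor
X = np.column_stack([X1.ravel(), X2.ravel()])  # (900, 2)
y = Z.ravel().reshape(-1, 1)                   # (900, 1)

# tiny MLP: 2 -> 10 sigmoid -> 1 linear, SGD with momentum, MSE loss
W1 = rng.normal(0, 0.5, (2, 10)); b1 = np.zeros(10)
W2 = rng.normal(0, 0.5, (10, 1)); b2 = np.zeros(1)
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)
lr, mom, epochs, batch = 0.1, 0.1, 50, 10

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

for _ in range(epochs):
    idx = rng.permutation(len(X))
    for s in range(0, len(X), batch):
        xb, yb = X[idx[s:s + batch]], y[idx[s:s + batch]]
        h = sigmoid(xb @ W1 + b1)
        pred = h @ W2 + b2
        err = pred - yb                              # d(MSE)/d(pred), up to a constant
        gW2 = h.T @ err / len(xb); gb2 = err.mean(0)
        dh = err @ W2.T * h * (1 - h)                # backprop through sigmoid
        gW1 = xb.T @ dh / len(xb); gb1 = dh.mean(0)
        for p, v, gr in ((W2, vW2, gW2), (b2, vb2, gb2),
                         (W1, vW1, gW1), (b1, vb1, gb1)):
            v *= mom; v -= lr * gr; p += v           # SGD with momentum (in place)

mse = float(np.mean((sigmoid(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

The predicted surface can then be reshaped back to the 30x30 grid and plotted against the true function as in the cross-section figures below.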
Multilayer Perceptron (MLP) Model
Results of Saddle Function Predictions
- Real vs. Predicted Saddle
- z-x cross section @ y = 2
- z-x cross section @ y = 0
- Model Loss Vs. Epochs
- Topological Heat Map
Results are shown in the image above, left to right, top to bottom
Results of Ackley Function Predictions
- Real vs. Predicted Ackley
- z-x cross section @ y = 2
- z-x cross section @ y = 0
- Model Loss Vs. Epochs
- Topological Heat Map
Results are shown in the image above, left to right, top to bottom
2. Real Estate Valuation of Housing Prices in Taipei, Taiwan
- Google Colab Notebook: Jupyter Notebook
- GitHub Repository: Repository
- Paper: Real Estate Valuation of Housing Prices in Taipei, Taiwan
- We use the same sequential MLP model as in the Saddle Point and Ackley Function predictions.
Multilayer Perceptron (MLP) Model
We will be examining real estate valuation which will help us understand where people tend to live in a city. The higher the price, the greater the demand to live in the property. Predicting real estate valuation can help urban design and urban policies, as it could help identify what factors have the most impact on property prices. Our aim is to predict real estate value, based on several features.
- Regression MLP Machine Learning on Taipei Taiwan Algorithm:
- Load the Real estate valuation data set
- Independent feature vector containing:
- X2 house age
- X3 distance to the nearest MRT station
- X4 number of convenience stores
- X5 latitude
- X6 longitude
- Train/Test split the data at an 80:20 ratio
- Min/Max Scale the dataset with a range of 0 to 1
- Normalise the scaled features
- Regression MLP Model Parameters:
- Neural Network Architecture:
- Input Layer = 10 neurons with a sigmoid activation function
- Output Layer = 1 neuron
- Stochastic Gradient Descent optimizer
- Learning Rate = 0.1
- Exponential Decay Factor = 0
- Momentum = 0.1
- Mean Square Error Loss Function
- Train Duration: 50 Epochs
- Batch Size = 10
- Predict the house price per unit area with the trained regression MLP
- Plot the Results
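The loading, splitting, and scaling steps above can be sketched as follows, using a synthetic stand-in for the UCI Real estate valuation data set (the feature ranges, sample count, and seed below are illustrative assumptions, not the real data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 414  # size of the UCI Real estate valuation data set

# synthetic stand-in for the independent feature vector
X = np.column_stack([
    rng.uniform(0, 40, n),          # X2 house age (years)
    rng.uniform(20, 6500, n),       # X3 distance to the nearest MRT station (m)
    rng.integers(0, 11, n),         # X4 number of convenience stores
    rng.uniform(24.93, 25.02, n),   # X5 latitude
    rng.uniform(121.47, 121.57, n), # X6 longitude
])
y = rng.uniform(7, 118, (n, 1))     # price per unit area (placeholder target)

# 80:20 train/test split
idx = rng.permutation(n)
cut = int(0.8 * n)
tr, te = idx[:cut], idx[cut:]

# Min/Max scale the features to [0, 1] using training-set statistics only
lo, hi = X[tr].min(0), X[tr].max(0)
X_tr = (X[tr] - lo) / (hi - lo)
X_te = (X[te] - lo) / (hi - lo)
```

Scaling with the training-set minima and maxima (rather than the full data set) avoids leaking test statistics into training; test-set values can then fall slightly outside [0, 1], which is expected.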
3. Classification of the MNIST Data Set of 70,000 Handwritten Digit (0-9) Images
- Google Colab Notebook: Jupyter Notebook
- GitHub Repository: Repository
- Paper: Classification of the MNIST Data Set of 70,000 Handwritten Digit (0-9) Images
- Categorical Cross Entropy Algorithm:
- Load the Modified National Institute of Standards and Technology (MNIST) Handwritten digits 0-9 data set
- Train/Test split the data at a ratio of 6:1 (60,000 training and 10,000 testing images)
- Reshape the images from 28x28 pixels to 784x1 pixels
- Normalise the image pixels by dividing by the maximum grayscale intensity level L (255 for 8-bit images)
- Create 10 Categories for the 10 digits 0-9 to be classified
- Categorical Cross Entropy (CE) Model Parameters:
- categorical cross entropy (CE) Loss Function:
- Model Accuracy:
- Neural Network Architecture:
- Input Layer = 16 hyperbolic tangent activation (tanh) neurons with an input shape of 784x1
- Hidden Layer = 16 hyperbolic tangent activation (tanh) neurons with an input shape of 16x1
- Output Layer = 10 softmax neurons
- Stochastic Gradient Descent optimizer
- Learning Rate = 0.4
- Exponential Decay Factor = 0
- Momentum = 0.5
- Train Duration: 10 Epochs
- Batch Size = 128
- training samples = 60,000
- testing samples = 10,000
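As an illustration of the forward pass and loss just described, here is a NumPy sketch of the 784 → 16 tanh → 16 tanh → 10 softmax network evaluated on a random stand-in batch (the real project loads MNIST; the random inputs and weight scales here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in batch: 128 flattened 28x28 images with labels 0-9
xb = rng.random((128, 784))
labels = rng.integers(0, 10, 128)
tb = np.eye(10)[labels]  # one-hot target vectors for the 10 classes

# 784 -> 16 tanh -> 16 tanh -> 10 softmax
W1 = rng.normal(0, 0.1, (784, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 16));  b2 = np.zeros(16)
W3 = rng.normal(0, 0.1, (16, 10));  b3 = np.zeros(10)

def forward(x):
    h1 = np.tanh(x @ W1 + b1)
    h2 = np.tanh(h1 @ W2 + b2)
    logits = h2 @ W3 + b3
    e = np.exp(logits - logits.max(1, keepdims=True))  # numerically stable softmax
    return e / e.sum(1, keepdims=True)

s = forward(xb)
# categorical cross-entropy averaged over the batch
ce = -np.mean(np.sum(tb * np.log(s + 1e-12), axis=1))
```

At random initialisation the softmax output is nearly uniform, so the loss starts close to ln(10) ≈ 2.30 and training drives it down from there.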
Creating 10 classes for the 10 handwritten digits 0-9
In the formula above, t_i refers to the i-th element of the target vector, s_i to the i-th element of the model's output vector, and C to the number of classes.
Visualization of Log Loss (Cross Entropy)
Cross Entropy between probability distributions for each Class
Here M is the number of samples in the dataset, t_k is the target vector for the k-th sample, and s_k is the model's output vector for the k-th sample.
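Written out explicitly, the per-sample and dataset-level cross-entropy losses described in the text are:

```latex
% per-sample categorical cross-entropy over C classes
\mathrm{CE} = -\sum_{i=1}^{C} t_i \,\log(s_i)

% averaged over all M samples in the data set
\mathcal{L} = -\frac{1}{M} \sum_{k=1}^{M} \mathbf{t}_k \cdot \log(\mathbf{s}_k)
```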
- Show Results:
Visualization of Model Loss and Accuracy (0.1532 and 95.49% Respectively)
Visualization of First Layer Weights W1 from Neural Network Architecture
Visualization of Second Layer Weights W2 from Neural Network Architecture
Visualization of Third Layer Weights W3 from Neural Network Architecture
4. Classification of the Fashion MNIST Image Data Set
- Google Colab Notebook: Jupyter Notebook
- GitHub Repository: Repository
- Paper: Classification of the Fashion MNIST Image Data Set
- Categorical Cross Entropy Algorithm:
- Load the Fashion MNIST data set (a drop-in replacement for the MNIST handwritten digits data set)
- Train/Test split the data at a ratio of 6:1 (60,000 training and 10,000 testing images)
- Reshape the images from 28x28 pixels to 784x1 pixels
- Normalise the image pixels by dividing by the maximum grayscale intensity level L (255 for 8-bit images)
- Create 10 Categories for class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
- Categorical Cross Entropy (CE) Model Parameters:
- categorical cross entropy (CE) Loss Function:
- Model Accuracy:
- Neural Network Architecture:
- Input Layer = 64 ReLU activation neurons with an input shape of 784x1
- Hidden Layer = 64 ReLU activation neurons with an input shape of 64x1
- Output Layer = 10 softmax neurons
- Stochastic Gradient Descent optimizer
- Learning Rate = 0.1
- Exponential Decay Factor = 0
- Momentum = 0
- Train Duration: 10 Epochs
- Batch Size = 128
- training samples = 60,000
- testing samples = 10,000
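The class set-up above can be sketched as follows: one-hot targets built from the ten class names, pixel normalisation by L = 255, and a single ReLU layer, all on random stand-in data (the real project loads Fashion MNIST; the batch size and weight scale are assumptions):

```python
import numpy as np

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, 6)            # stand-in integer labels
targets = np.eye(10)[labels]               # one-hot vector per clothing class
names = [class_names[i] for i in labels]   # human-readable class labels

# stand-in 8-bit images, normalised by the max grayscale level L = 255
imgs = rng.integers(0, 256, (6, 28, 28))
x = imgs.reshape(6, 784) / 255.0

def relu(a):
    return np.maximum(a, 0.0)              # activation used in both 64-neuron layers

W1 = rng.normal(0, 0.05, (784, 64))
h1 = relu(x @ W1)                          # first-layer activations, all >= 0
```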
Creating 10 classes for the 10 types of clothing in the Image Data Set
In the formula above, t_i refers to the i-th element of the target vector, s_i to the i-th element of the model's output vector, and C to the number of classes.
Visualization of Log Loss (Cross Entropy)
Cross Entropy between probability distributions for each Class
Here M is the number of samples in the dataset, t_k is the target vector for the k-th sample, and s_k is the model's output vector for the k-th sample.
- Show Results:
Visualization of Model Loss and Accuracy (0.3090 and 88.66% Respectively)
Visualization of First Layer Weights W1 from Neural Network Architecture
Visualization of Second Layer Weights W2 from Neural Network Architecture
Visualization of Third Layer Weights W3 from Neural Network Architecture
- Pattern Recognition Projects:
- a) Bayesian Binary Classification on Multidimensional Multivariate Distributions: Paper, Jupyter Notebook
- b) Density Estimation and Applications in Estimation, Clustering and Segmentation: Paper, Jupyter Notebook
- c) Feature Extraction and Object Recognition: Paper, Jupyter Notebook
- d) Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA): Paper, Jupyter Notebook