Quantifying what neural networks don't know, and when they should abstain from making predictions, is an important goal for safe real-world decision-making. This project involves designing algorithms that quantify uncertainty in neural networks and exploring their applications to noisy labels, outlier detection, interpretability, and robustness.
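To make the abstention idea concrete, here is a minimal, hypothetical sketch (not code from this repository): a classifier abstains when the predictive entropy of its softmax output exceeds a threshold. The function name and threshold value are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict_or_abstain(logits, threshold=0.5):
    """Return the predicted class index, or None to abstain
    when predictive entropy (in nats) exceeds the threshold.
    The threshold is an illustrative choice, not from the project."""
    probs = softmax(np.asarray(logits, dtype=float))
    entropy = -(probs * np.log(probs + 1e-12)).sum()
    return None if entropy > threshold else int(probs.argmax())

# Sharply peaked logits yield low entropy -> predict class 0.
print(predict_or_abstain([5.0, 0.1, 0.2]))   # -> 0
# Near-uniform logits yield high entropy -> abstain.
print(predict_or_abstain([1.0, 1.0, 1.1]))   # -> None
```

In practice, richer uncertainty estimates (e.g. from ensembles or Monte Carlo dropout) can replace the single softmax here, but the abstention logic stays the same.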
chenblair / uncertainty-deep-learning