Safety-Oriented Stability Biases for Continual Learning
M. Math. Thesis
A Gaurav
[link] [code]

Simple Continual Learning Strategies for Safer Classifiers
Workshop on AI Safety, AAAI 2020
A Gaurav, S Vernekar, J Lee, V Abdelzad, K Czarnecki, S Sedwards
[paper] [code]

Out-of-distribution Detection in Classifiers via Generation
Safety & Robustness in Decision Making Workshop, NeurIPS 2019
S Vernekar, A Gaurav, V Abdelzad, T Denouden, R Salay, K Czarnecki
[paper] [arXiv] [code]

WiseMove: A Framework to Investigate Safe Deep Reinforcement Learning for Autonomous Driving
Quantitative Evaluation of SysTems (QEST) 2019
J Lee*, A Balakrishnan*, A Gaurav*, K Czarnecki, S Sedwards*
[paper] [arXiv] [code]

Analysis of Confident Classifiers for Out-of-distribution Detection
SafeML Workshop, ICLR 2019
S Vernekar*, A Gaurav*, T Denouden, B Phan, V Abdelzad, R Salay, K Czarnecki
[paper] [arXiv] [code]

Design Space of Behaviour Planning for Autonomous Driving
M Ilievski, S Sedwards, A Gaurav, A Balakrishnan, A Sarkar, J Lee, F Bouchard, R De Iaco, K Czarnecki