I am interested in creating safe Machine Learning programs. This includes research into techniques for formally verifying properties of Deep Neural Networks and into safe Reinforcement Learning.
Currently, I have several ongoing projects: abstraction of Neural Networks (smarter than in my thesis), using machine learning (especially Decision Trees) for better — faster or more explainable — strategy generation on MDPs, explainable and certifiable regression models, and improving reinforcement learning via updates. I am always happy to collaborate with students. If you are interested in any of the ongoing projects, or if you have another cool idea, feel free to contact me. I'll be happy to meet you.
Pranav Ashok, Vahid Hashemi, Jan Křetínský, Stefanie Mohr. DeepAbstract: Neural Network Abstraction for Accelerating Verification. Accepted at ATVA 2020. (pre-print, link)
Vahid Hashemi, Jan Křetínský, Stefanie Mohr, Emmanouil Seferis. Gaussian-Based Runtime Detection of Out-of-Distribution Inputs for Neural Networks. Accepted at Runtime Verification 2021. (link, PDF)
Stefanie Mohr, Konstantina Drainas, Jürgen Geist. Assessment of Neural Networks for Stream-Water-Temperature Prediction. Accepted at ICMLA 2021. (pre-print, link)