The 1st edition was held as a satellite event of ETAPS 2017 on April 29, 2017, in Uppsala, Sweden. It featured 12 presentations and an invited talk by Kim G. Larsen (Aalborg University), who has an ongoing ERC Advanced Grant LASSO (Learning, analysis, synthesis and optimization of cyber-physical systems). The 2nd edition was held as a satellite event of ETAPS 2018 on April 20, 2018, in Thessaloniki, Greece. Apart from regular presentations, it featured two invited talks on verifying neural networks, by Guy Katz (Stanford / Hebrew University) and Krishnamurthy Dvijotham (Google DeepMind). The 3rd edition will be held as a satellite event of ETAPS 2019 on April 6, 2019, in Prague, Czech Republic.
Abstract: Reinforcement learning algorithms discover policies that maximize reward, but they do not necessarily guarantee safety during the learning or execution phases. In this talk, we discuss an approach to learning optimal policies while enforcing properties expressed in temporal logic. To this end, given a temporal logic specification, we synthesize a reactive system called a shield. The shield monitors the actions chosen by the learner and corrects them only if the chosen action would cause a violation of the specification. Beyond safety, learned controllers have further limitations: (1) they are monolithic, so new features cannot be added without retraining, and (2) when facing untrained, unexpected behavior, their performance may degrade severely or even result in complete system failure. We address these issues by formalizing deviations from optimal controller performance using quantitative run-time measurements, and we synthesize quantitative shields that ensure both optimal performance and safe behavior.
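The core mechanism of the talk, a monitor that passes the learner's actions through unchanged unless they would violate the specification, can be illustrated with a toy sketch. This is a hypothetical example on a 1-D corridor world with an invented safety predicate, not the shield-synthesis procedure presented in the talk:

```python
# A minimal sketch of a safety shield for a reinforcement learner,
# on an assumed toy world: an agent moves left (-1) or right (+1)
# along positions 0..9 and must never leave that range.

def unsafe(state):
    """Safety specification: positions outside [0, 9] are forbidden."""
    return state < 0 or state > 9

def step(state, action):
    """Deterministic transition of the toy world."""
    return state + action

def shield(state, proposed_action, actions=(-1, +1)):
    """Let the learner's action through unless it would violate the
    specification; otherwise substitute any safe alternative."""
    if not unsafe(step(state, proposed_action)):
        return proposed_action
    for a in actions:
        if not unsafe(step(state, a)):
            return a
    raise RuntimeError("no safe action available")

# At the left boundary the learner proposes to move left;
# the shield overrides it. Elsewhere, actions pass through.
assert shield(0, -1) == +1   # corrected by the shield
assert shield(5, -1) == -1   # safe action left unchanged
```

Note that the shield only intervenes at the boundary of the safe region, so the learner still explores freely everywhere else, which is what lets learning and safety enforcement coexist.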
Bettina Könighofer works at Graz University of Technology in the Secure and Correct System Group led by Prof. Roderick Bloem. Her research interests include synthesis, verification, bridging the gap between formal methods and AI, logic, and applications to verification and security.
Abstract: Symmetry is the essential element of lifted inference, which has recently demonstrated the possibility of performing very efficient inference in highly connected but symmetric probabilistic models, also known as relational probabilistic models. This raises the question of whether the same holds for optimization problems in general. In this talk I shall demonstrate that for a large class of mathematical programs it actually does. More precisely, I shall introduce the concept of fractional symmetries of linear and convex quadratic programs (QPs), which lie at the heart of many machine learning approaches, and exploit it to lift, i.e., to compress, them. These lifted QPs can then be tackled with the usual optimization toolbox (off-the-shelf solvers, cutting-plane algorithms, stochastic gradients, etc.): if the original QP exhibits symmetry, then the lifted one will generally be more compact, and hence its optimization is likely to be more efficient. This talk is based on joint work with Martin Mladenov, Martin Grohe, Leonard Kleinhans, Pavel Tokmakov, Babak Ahmadi, Amir Globerson, and many others.
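The idea that symmetry lets a QP be compressed can be seen in a tiny hand-picked example (my own illustration, not from the talk): in min ½(x1² + x2²) subject to x1 + x2 = 1, the two variables are interchangeable, so by convexity an optimum exists on the symmetric subspace x1 = x2 = y, and the lifted program min y² subject to 2y = 1 has a single variable.

```python
# A toy numeric check that the lifted (compressed) QP recovers the
# optimum of the full symmetric QP. Problem: min 1/2 x^T Q x  s.t. A x = b.
import numpy as np

# Full QP solved via its KKT system: [[Q, A^T], [A, 0]] [x; lam] = [0; b]
Q = np.eye(2)                       # objective 1/2 (x1^2 + x2^2)
A = np.array([[1.0, 1.0]])          # constraint x1 + x2 = 1
b = np.array([1.0])
K = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([np.zeros(2), b]))
x_full = sol[:2]                    # -> [0.5, 0.5]

# Lifted QP: substitute x1 = x2 = y, giving min y^2 s.t. 2y = 1,
# a one-variable program solved directly.
y = b[0] / 2.0
x_lifted = np.array([y, y])

assert np.allclose(x_full, x_lifted)
```

Here the compression is trivial (two variables become one), but the same principle applied to large relational models, where whole orbits of variables are interchangeable, is what makes the lifted program substantially smaller than the original.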
Kristian Kersting is a Professor (W3) of Machine Learning at TU Darmstadt, Germany. After receiving his Ph.D. from the University of Freiburg in 2006, he was with MIT, Fraunhofer IAIS, the University of Bonn, and TU Dortmund University. His main research interests are statistical relational artificial intelligence (AI) and probabilistic deep learning. Kristian has published over 160 peer-reviewed technical papers and co-authored a book on statistical relational AI. He regularly serves on the PC (often at senior level) of several top conferences and co-chaired the PCs of ECML PKDD 2013 and UAI 2017. He is the Specialty Chief Editor for Machine Learning and AI of Frontiers in Big Data, and is/was an action editor of TPAMI, JAIR, AIJ, DAMI, and MLJ.
In case of any questions, please contact the organizer, Jan Kretinsky, at <name>.<surname>@tum.de
Looking forward to seeing you LiVe in Prague!