Pavel Osinenko studied control engineering at the Bauman Moscow State Technical University from 2003 through 2009 and then began his postgraduate education in vehicle control systems at the same university. In 2011, Pavel moved to the Dresden University of Technology after receiving a Presidential Grant. He obtained a PhD degree in 2014 with a dissertation on vehicle optimal control and identification. Pavel has work experience in the German private sector and at the Fraunhofer Institute for Transportation and Infrastructure Systems. In 2016, he moved to the Chemnitz University of Technology as a research scientist, where he was responsible for project and research coordination, doctorate supervision, teaching, and administration. Pavel's areas of research include reinforcement learning, especially its safety and connections to control theory, and computational aspects of dynamical systems.
Reinforcement learning (RL) can be understood as an AI-based control method for agents in dynamical environments. Despite its growing popularity, industry hesitates to recognize it as a viable substitute for classical control methods. The main reason, in our view, is the lack of safety and stability guarantees. We investigate ways of integrating safety-ensuring machinery into reinforcement learning to give rise to new methods. Specifically, two general bases are possible: Lyapunov-based safe RL and generalized predictive agents with learned stage costs (read: rewards). Notice that ISO/IEC JTC 1/SC 42, the subcommittee on AI standardization, puts safety and reliability among the crucial requirements for AI systems. We integrate tools with built-in constraints into RL agents and set up policies so as to avoid dangerous actions. Such a setup is novel: in contrast to existing methods, which simply rely on a sufficiently complex neural-network architecture, this approach first creates a framework for safe RL within which various neural networks and learning algorithms can be employed. Introducing safety constraints, however, may require giving up some of the RL optimality. There is currently no study of the sub-optimality induced by additional constraints in RL; we rigorously analyze this degree of sub-optimality. Applications of RL at the Lab include robotics; agriculture; macroeconomic optimization based on, e.g., dynamic stochastic general equilibrium or stock-flow consistent models; and predictive control based on model estimation from visual data.
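To make the Lyapunov-based idea concrete, here is a minimal, hypothetical sketch in Python of a safety filter an RL agent could use: the policy's candidate action is accepted only if it decreases a Lyapunov-like function along a one-step prediction, and a fallback controller takes over otherwise. The function names (`lyapunov_fn`, `dynamics_fn`) and the toy scalar integrator in the usage snippet are illustrative assumptions, not the Lab's actual implementation.

```python
import numpy as np

def lyapunov_safety_filter(state, candidate_action, safe_action,
                           lyapunov_fn, dynamics_fn, decay_rate=0.1):
    """Accept the RL policy's candidate action only if it decreases a
    Lyapunov-like function along a one-step prediction; otherwise fall
    back to a known safe action.

    lyapunov_fn(state)         -> float, candidate Lyapunov function value
    dynamics_fn(state, action) -> predicted next state (model or estimate)
    """
    v_now = lyapunov_fn(state)
    v_next = lyapunov_fn(dynamics_fn(state, candidate_action))
    # Safety/stability constraint: require a minimal decay of the Lyapunov value.
    if v_next <= (1.0 - decay_rate) * v_now:
        return candidate_action  # constraint satisfied: keep the RL action
    return safe_action           # constraint violated: override with the fallback


# Toy usage on a scalar integrator x_{k+1} = x_k + a with V(x) = x^2
# (all numbers are made up for illustration).
state = np.array([2.0])
policy_action = np.array([1.0])   # hypothetical action proposed by the RL policy
fallback_action = -0.5 * state    # simple stabilizing fallback controller
action = lyapunov_safety_filter(
    state, policy_action, fallback_action,
    lyapunov_fn=lambda x: float(x @ x),
    dynamics_fn=lambda x, a: x + a,
)
print(action)  # the unsafe proposal is rejected; the fallback [-1.] is returned
```

The point of such a construction is that the safety guarantee lives in the filter, not in the learning algorithm, so any neural-network policy or learning rule can be plugged in behind it.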
Computational aspects of dynamical systems are of particular interest at the Lab. Computational uncertainty (also called numerical or algorithmic uncertainty) is the discrepancy between a mathematical result and its implementation on a computer. In control engineering and machine learning, computational uncertainty is usually either neglected or considered subsumed under some other type of uncertainty, such as noise, and addressed via robust methods. However, even the latter may be compromised when the mathematical objects involved in the respective algorithms fail to exist in exact form and consequently fail to satisfy the required properties. We seek methods of system analysis that address computational uncertainty explicitly. This paves the way to automated agent synthesis via program extraction from proof assistants.
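As a simple, assumption-laden toy illustration (not the Lab's methodology): even for the scalar system dx/dt = -x with the exact solution x(t) = x0·exp(-t), a forward-Euler implementation in finite-precision arithmetic produces a trajectory that deviates from the mathematical one; this gap is one concrete face of computational uncertainty.

```python
import numpy as np

def euler_trajectory(x0, step, horizon):
    """Forward-Euler integration of dx/dt = -x up to the given horizon."""
    x = x0
    for _ in range(int(round(horizon / step))):
        x = x + step * (-x)   # x_{k+1} = x_k + h * f(x_k)
    return x

x0, horizon = 1.0, 5.0
exact = x0 * np.exp(-horizon)          # mathematical result: x(5) = exp(-5)
for step in (0.5, 0.1, 0.01):
    computed = euler_trajectory(x0, step, horizon)
    # The difference below is the computational uncertainty of this implementation.
    print(f"h = {step:5.2f}: |exact - computed| = {abs(exact - computed):.2e}")
```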
(2020-2021)
Term 5 (1) | Sep.-Oct. | Reinforcement learning |
Term 4 (4) | Apr.-May | Safety aspects of AI |