Pavel Osinenko

Assistant Professor
Center for Digital Engineering
Pavel Osinenko studied control engineering at the Bauman Moscow State Technical University from 2003 to 2009 and began his postgraduate education in vehicle control systems at the same university. In 2011, Pavel moved to the Dresden University of Technology after receiving a Presidential Grant. He obtained a PhD degree in 2014 with a dissertation on vehicle optimal control and identification. Pavel has work experience in the German private sector and at the Fraunhofer Institute for Transportation and Infrastructure Systems. In 2016, he moved to the Chemnitz University of Technology as a research scientist, where he was responsible for project and research coordination, doctoral supervision, teaching, and administration. Pavel's areas of research include reinforcement learning, especially its safety and connections to control theory, and computational aspects of dynamical systems.

Reinforcement learning (RL) can be understood as an AI-based control method for agents in dynamical environments. Despite its growing popularity, industry hesitates to recognize it as a viable substitute for classical control methods. The main reason, in our view, is the lack of safety and stability guarantees. We investigate ways of integrating safety-ensuring machinery into reinforcement learning to give rise to new methods. Specifically, two general bases are possible: Lyapunov-based safe RL and generalized predictive agents with learning stage costs (read: rewards). Notice that ISO/IEC JTC 1/SC 42, the subcommittee on AI standardization, puts safety and reliability among the crucial requirements for AI systems. We integrate tools with built-in constraints into RL agents and set up policies so as to avoid dangerous actions. Such a setup is novel: in contrast to existing methods, which simply rely on a sufficiently complex neural network architecture, this approach first creates a framework for safe RL within which various neural networks and learning algorithms can be employed. Introducing safety constraints, however, may give up some of the RL optimality. There is currently no study of the sub-optimality induced by additional constraints in RL, and we rigorously analyze this sub-optimality degree. Applications of RL at the Lab include robotics; agriculture; macroeconomic optimization based on, e.g., dynamic stochastic general equilibrium or stock-flow consistent models; and predictive control based on model estimation from visual data.
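The Lyapunov-based idea can be sketched roughly as follows (a minimal illustration, not the Lab's actual method: the scalar dynamics, the gains, and the Lyapunov function V(x) = x² are all assumptions made for the example). An exploratory RL action is accepted only if it certifiably decreases V; otherwise a known stabilizing fallback controller overrides it.

```python
# Hedged sketch of a Lyapunov-based safety filter around an RL agent.
# System x+ = a*x + b*u, Lyapunov function V(x) = x**2, fallback u = -k*x
# are illustrative assumptions, not the method from the text.

def lyapunov_filter(x, u_rl, a=1.2, b=1.0, k=2.0, decay=0.9):
    """Return u_rl if it yields a certified decrease of V along the
    assumed dynamics; otherwise return the safe fallback action."""
    V = x * x
    x_next_rl = a * x + b * u_rl
    if x_next_rl ** 2 <= decay * V:   # certified Lyapunov decrease
        return u_rl
    return -k * x                     # stabilizing fallback overrides

# Usage: an unsafe exploratory action gets overridden by the fallback,
# while an action that keeps V decreasing passes through unchanged.
x = 1.0
u_unsafe = lyapunov_filter(x, u_rl=5.0)    # fallback fires: returns -2.0
u_safe = lyapunov_filter(x, u_rl=-1.2)     # accepted: returns -1.2
```

The learning algorithm and the function approximator are free to be anything; the filter alone carries the safety guarantee, which mirrors the framework-first philosophy described above.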

Computational aspects of dynamical systems are of particular interest at the Lab. Computational uncertainty (also called numerical or algorithmic uncertainty) is the discrepancy between a mathematical result and its implementation in a computer. In control engineering and machine learning, computational uncertainty is usually either neglected or considered submerged into some other type of uncertainty, such as noise, and addressed via robust methods. However, even the latter may be compromised when the mathematical objects involved in the respective algorithms fail to exist in exact form and subsequently fail to satisfy the required properties. We seek methods of system analysis that address computational uncertainty explicitly. This paves the way toward automated agent synthesis via program extraction from proof assistants.
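A trivial example (ours, not the Lab's) shows how computational uncertainty arises even in elementary arithmetic: a mathematically exact identity fails once it is evaluated in floating point.

```python
# Minimal illustration of computational uncertainty: mathematically,
# summing 0.1 ten times equals 1.0 exactly, but the floating-point
# implementation of that sum computes a slightly different number.

total = sum(0.1 for _ in range(10))
exact = 1.0
discrepancy = abs(total - exact)

# The discrepancy is tiny but nonzero: the computed object is not the
# mathematical object, which is exactly the gap such methods address.
print(discrepancy)
```

For a single sum the error is negligible, but in iterative algorithms such as those of control and learning these discrepancies can accumulate, which is why treating them explicitly, rather than as generic noise, matters.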

Publications (2020-2021)

  • P. Osinenko, K. Biegert, R. McCormick, T. Göhrt, G. Devadze, J. Streif and S. Streif. Application of non-destructive sensors and big-data analysis to predict physiological storage disorders and fruit firmness in ‘Braeburn’ apples. Computers and Electronics in Agriculture, 2021 (accepted)
  • P. Osinenko and S. Streif. On constructive extractability of measurable selectors of set-valued maps. IEEE Transactions on Automatic Control, 2020 (early access)
  • P. Osinenko, G. Devadze and S. Streif. Constructive analysis of eigenvalue problems in control under numerical uncertainty. International Journal of Control, Automation and Systems, vol. 18, pp. 2177–2185, 2020
  • P. Osinenko, A. Kobelski and S. Streif. Experimental verification of an online traction parameter identification method. Control Engineering Practice, 2021 (accepted)
  • P. Osinenko, P. Schmidt and S. Streif. Nonsmooth stabilization and its computational aspects. IFAC World Congress, 2020 (in print)
  • T. Göhrt, F. Griesing-Scheiwe, P. Osinenko and S. Streif. A reinforcement learning method with closed-loop stability guarantee for systems with unknown parameters. IFAC World Congress, 2020 (in print)
  • Russwurm, P. Osinenko and S. Streif. Optimal control of centrifugal spreader. IFAC World Congress, 2020 (in print)
  • P. Osinenko, L. Beckenbach, T. Göhrt and S. Streif. A reinforcement learning method with real-time closed-loop stability guarantee. IFAC World Congress, 2020 (in print)
  • P. Osinenko, A. Kobelski and S. Streif. A method of online traction parameter identification and mapping. IFAC World Congress, 2020 (in print)
  • L. Beckenbach, P. Osinenko and S. Streif. On closed-loop stability of model predictive controllers with learning costs. European Control Conference, 2020
  • L. Beckenbach, P. Osinenko and S. Streif. A Q-learning predictive control scheme with guaranteed stability. European Journal of Control, vol. 56, pp. 167–178, 2020

Awards
  • Runner‑up for the best student paper award – IEEE International Conference on Fuzzy Systems (2014)
  • Grant for East‑European students – Walter Stauß‑Stiftung (2014)
  • Erasmus Mundus Action 2 scholarship – EACEA (2012-2013)
  • Grant of President of Russia – Russian Ministry of Education and Science (2011‑2012)
  • Golubitsky‑Scholarship for gifted students – “Centr‑Telekom” Ltd. and Kaluga Research Center (2008)

Lab members
  • Ksenia Makarova, PhD student
  • Alexander Rubashevskii, PhD student
  • Grigory Yaremenko, MSc student
  • Yana Khassan, MSc student
  • Nina Aleskerova, MSc student
  • Nicholas Babaev, MSc student
  • Maxim Pankratov, MSc student
  • Ilya Osokin, Junior Research Scientist

Courses

  • Reinforcement learning: Term 5 (1), Sep.-Oct.
  • Safety aspects of AI: Term 4 (4), Apr.-May