Dmitry Yudin studied applied mathematics and physics at the Moscow Institute of Physics and Technology (Moscow, Russia) and received his PhD from Uppsala University (Uppsala, Sweden) in 2015.
During his PhD studies and shortly thereafter, he gained substantial research experience in developing advanced numerical algorithms for studying various aspects of strongly correlated systems, including approaches based on machine learning and variational quantum circuits.
After defending his PhD, he took a position as a Research Fellow at Nanyang Technological University (Singapore) and spent extended periods as a visiting researcher at the University of California, San Diego (USA) and at the Max Planck Institute of Quantum Optics (Germany).
He subsequently moved back to Russia, where he worked as an independent researcher supported by the Russian Foundation for Basic Research and the Russian Science Foundation.
In 2018, Dmitry joined Skoltech as a Research Scientist at the Center for Computational Data-Intensive Science and Engineering (CDISE) and was appointed Assistant Professor in 2020. Since 2022, he has been working with the Skoltech Project Center for Next Generation Wireless and IoT, where he pursues research on 6G enabling technologies and edge intelligence.
Recent publications:
R. Konlechner, A. Allagui, V. N. Antonov, and D. Yudin. A superstatistics approach to the modelling of memristor current–voltage responses // Physica A 614, 128555 (2023)
D. Prodan, G. Paradezhenko, D. Yudin, and A. Pervishko. An ab initio approach to anisotropic alloying into the Si(001) surface // Physical Chemistry Chemical Physics 25, 5501 (2023)
G. V. Paradezhenko, A. A. Pervishko, and D. Yudin. Searching topological magnetic textures with machine learning // Physics Uspekhi (2023)
J. Park, T. Kim, G. W. Kim, V. Bessonov, A. Telegin, I. G. Iliushin, A. A. Pervishko, D. Yudin, A. Yu. Samardak, A. V. Ognev, A. S. Samardak, J. Cho, Y. K. Kim. Compositional gradient induced enhancement of Dzyaloshinskii–Moriya interaction in Pt/Co/Ta heterostructures modulated by Pt–Co alloy intralayers // Acta Materialia 241, 118383 (2022)
G. Paradezhenko, A. Pervishko, N. Swain, P. Sengupta, and D. Yudin. Spin-hedgehog-derived electromagnetic effects in itinerant magnets // Physical Chemistry Chemical Physics 24, 24317 (2022)
A. S. Kardashin, A. V. Vlasova, A. A. Pervishko, D. Yudin, and J. D. Biamonte. Quantum-machine-learning channel discrimination // Physical Review A 106, 032409 (2022)
A. S. Samardak, A. V. Ognev, A. G. Kolesnikov, M. E. Stebliy, V. Yu. Samardak, I. G. Iliushin, A. A. Pervishko, D. Yudin, M. Platunov, T. Ono, F. Wilhelm, and A. Rogalev. XMCD and ab initio study of interface-engineered ultrathin Ru/Co/W/Ru films with perpendicular magnetic anisotropy and strong Dzyaloshinskii–Moriya interaction // Physical Chemistry Chemical Physics 24, 8225 (2022)
A. A. Pervishko and D. Yudin. Microscopic approach to the description of spin torques in two-dimensional Rashba anti- and ferromagnets // Physics Uspekhi 65, 215 (2022)
N. Swain, M. Shahzad, G. V. Paradezhenko, A. A. Pervishko, D. Yudin, and P. Sengupta. Skyrmion-driven topological Hall effect in a Shastry-Sutherland magnet // Physical Review B 104, 235156 (2021)
G. V. Paradezhenko, D. Yudin, and A. A. Pervishko. Random iron-nickel alloys: From first principles to dynamic spin fluctuation theory // Physical Review B 104, 245102 (2021)
A. Kardashin, A. Pervishko, J. Biamonte, and D. Yudin. Numerical hardware-efficient variational quantum simulation of a soliton solution // Physical Review A 104, L020402 (2021)
I. I. Vrubel, D. Yudin, and A. A. Pervishko. On the origin of the electron accumulation layer at clean InAs(111) surfaces // Physical Chemistry Chemical Physics 23, 4811 (2021)
A. Macarone Palmieri, E. Kovlakov, F. Bianchi, D. Yudin, S. Straupe, J. D. Biamonte, and S. Kulik. Experimental neural network enhanced quantum tomography // npj Quantum Information 6, 20 (2020)
A. Kardashin, A. Uvarov, D. Yudin, and J. Biamonte. Certified variational quantum algorithms for eigenstate preparation // Physical Review A 102, 052610 (2020)
A. A. Pervishko, D. Yudin, V. Kumar Gudelli, A. Delin, O. Eriksson, and G.-Y. Guo. Localized surface electromagnetic waves in CrI3-based magnetophotonic structures // Optics Express 28, 29155 (2020)
A. Uvarov, J. D. Biamonte, and D. Yudin. Variational quantum eigensolver for frustrated quantum systems // Physical Review B 102, 075104 (2020)
A. Berezutskii, M. Beketov, D. Yudin, Z. Zimborás, and J. D. Biamonte. Probing criticality in quantum spin chains with neural networks // Journal of Physics: Complexity 1, 03LT01 (2020)
I. I. Vrubel, A. A. Pervishko, D. Yudin, B. Sanyal, O. Eriksson, and P. A. Rodnyi. Oxygen vacancy in ZnO-w phase: pseudohybrid Hubbard density functional study // Journal of Physics: Condensed Matter 32, 315503 (2020)
I. I. Vrubel, A. A. Pervishko, H. Herper, B. Brena, O. Eriksson, and D. Yudin. An ab initio perspective on STM measurements of the tunable Kondo resonance of the TbPc2 molecule on a gold substrate // Physical Review B 101, 125106 (2020)
M. Baglai, R. J. Sokolewicz, A. Pervishko, M. I. Katsnelson, O. Eriksson, D. Yudin, and M. Titov. Giant anisotropy of Gilbert damping in a Rashba honeycomb antiferromagnet // Physical Review B 101, 104403 (2020)
We are currently looking for prospective PhD and MS students as well as postdocs to work in the following fields:
Neuromorphic computing is nowadays considered one of the most promising approaches to resolving the critical problems that conventional CMOS technology faces with continual miniaturization and ever-increasing power consumption. Owing to their low-power operation and brain-inspired massively parallel computing principles, a large number of bio-inspired algorithms and devices have been applied to complex pattern recognition, image processing, and data mining. Intensive research has been directed at learning-based artificial synapses and neurons that reproduce the behavior of these two fundamental building blocks of biological neural networks. The memristor, a passive circuit element whose resistance changes with the electric charge that has passed through it, has been proposed as a suitable artificial synapse and neuron for emerging computing systems. In this project, we aim to adapt single-crystalline silicon memristors with alloyed conducting channels for neuromorphic realizations of on-chip deep learning. We investigate how these memristors can be tailored into novel concept devices that secure stable and controllable operation, thus enabling large-scale implementation of neuromorphic computing. We will propose machine learning algorithms suited to this particular platform and evaluate their performance, including data storage, parallel updating of weights, and matrix multiplication.
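As a minimal sketch of the analog matrix multiplication such a memristive platform performs, the snippet below models an idealized crossbar in which row voltages multiply a matrix of conductances and the column wires sum the resulting currents. All names, sizes, and parameter values here are illustrative assumptions, not the project's actual devices or code.

```python
import numpy as np

# Idealized memristive crossbar: Ohm's law per device, Kirchhoff's current
# law per column wire. Conductances G encode the (hypothetical) weights.
rng = np.random.default_rng(0)
n_inputs, n_outputs = 4, 3

# Device conductances in siemens, bounded by an assumed on/off window.
G_MIN, G_MAX = 1e-6, 1e-4
G = rng.uniform(G_MIN, G_MAX, size=(n_inputs, n_outputs))

# Input voltages applied to the rows (volts).
V = rng.uniform(0.0, 0.2, size=n_inputs)

# Each column collects the currents of its memristors: I_j = sum_i V_i * G_ij.
I = V @ G
print("column currents (A):", I)

# A crude "parallel weight update": nudge all conductances toward a target
# pattern at once and clip them back into the allowed conductance window.
target = rng.uniform(G_MIN, G_MAX, size=G.shape)
G = np.clip(G + 0.1 * (target - G), G_MIN, G_MAX)
```

In a physical device the conductance update would be driven by programming pulses and constrained by device nonidealities, which is exactly what the stable and controllable operation mentioned above is meant to address.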
Fifth-generation (5G) mobile communications, now being rapidly deployed all around the world, are expected to bring reduced latency, enhanced energy efficiency, and higher data rates. In practice, with peak speeds of about 10 Gb/s and channel bandwidths of 0.1-1 GHz, the use of mm-wave carriers becomes unavoidable. However, 5G will hardly remain an adequate solution in the short run owing to the growing demand for machine connectivity, e.g., the Internet of Things (IoT). The emerging sixth generation (6G) is still in its germinal phase and lacks a settled definition; it is clear, however, that switching to terahertz-frequency electromagnetic waves (0.1-10 THz) is an essential prerequisite. Metasurfaces, arrays of artificial unit cells each characterized by its own electromagnetic response, provide a unique tool for wave manipulation, particularly in the terahertz spectrum. In this project, we explore index modulation, considered one of the possible routes toward next-generation wireless communications, in a multiple-input multiple-output (MIMO) array built on programmable metasurfaces.
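As a toy illustration of the index-modulation principle (not the project's actual signal model), the sketch below maps part of a bit stream to the index of the active array element and the remaining bits to a conventional QPSK symbol carried by that element; the array size and all names are assumptions.

```python
import numpy as np

# Index modulation toy example: two bits choose WHICH of four elements
# (e.g. metasurface unit cells or antennas) is active, and two more bits
# are carried by a QPSK symbol on that element.
N_ELEMENTS = 4                                            # log2(4) = 2 index bits
QPSK = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))    # 2 bits per symbol

def im_modulate(bits):
    """Map 4 bits to a length-N transmit vector: 2 index bits + 2 QPSK bits."""
    idx = bits[0] * 2 + bits[1]          # active element index
    sym = QPSK[bits[2] * 2 + bits[3]]    # symbol on the active element
    x = np.zeros(N_ELEMENTS, dtype=complex)
    x[idx] = sym                         # only one element radiates
    return x

print(im_modulate([1, 0, 1, 1]))         # element 2 active, carrying QPSK symbol 3
```

The appeal of the scheme is that the extra bits conveyed by the element index cost no additional transmit power, which is one reason it is discussed for metasurface-based MIMO arrays.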
Do not hesitate to reach out to us!
Master students
PhD students
Neuromorphic Computing
Term 5 AY 2021/2022
MS Program: Advanced Computational Science
Over the past decades, the concept of neuromorphic computing, which relies on imitating and exploiting mechanisms inherent to the biological nervous system, has evolved into an interdisciplinary research area at the boundary between advanced computing and computational neuroscience. It is widely considered one of the most promising approaches to resolving the critical problems that come with continual miniaturization and the ever-increasing power consumption of CMOS technology. A vast number of brain-inspired algorithms and architectures, featuring low power requirements and massively parallel computing principles, have been applied to complex pattern recognition, image processing, and data mining. In parallel, intensive research has been conducted on the practical implementation of learning-based artificial synapses and neurons, the two fundamental building blocks of biological neural networks. The course is designed to provide students with a basic understanding of, and familiarize them with recent achievements in, neuromorphic engineering as implemented in artificial and spiking neural networks. In the course we will address:
Emerging Technologies for Next Generation Wireless Communications
Term 2 AY 2021/2022
MS Program: Internet of Things and Wireless Technologies
Fifth-generation mobile communications, now being rapidly deployed around the globe, promise to outperform existing solutions in terms of latency, energy efficiency, and data rates. However, with peak speeds of about 10 Gb/s and channel bandwidths of 0.1-1 GHz, this technology will prove inadequate to meet the explosive growth of machine connectivity in the short run. This course is designed to provide students with basic knowledge of terahertz technology, edge AI hardware design, and reconfigurable intelligent surfaces, which are customarily identified as enabling technologies for wireless mobile communications beyond 5G. We will specifically touch upon reconfigurable intelligent surfaces, which can potentially enhance the energy and spectrum efficiency of wireless communications. In particular, we will elaborate on space-time-coding digital metasurfaces, composed of a programmable array of artificial unit cells, each characterized by its own electromagnetic response.
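To give a feel for the coding-metasurface idea touched on in the course, the sketch below computes the far-field array factor of a one-dimensional 1-bit digital metasurface for a periodic coding sequence. The pitch, carrier frequency, and coding pattern are illustrative assumptions, not course material.

```python
import numpy as np

# Toy 1-bit digital (coding) metasurface strip: each unit cell reflects with
# a phase of 0 or pi, and the far field is the phased sum over the cells.
c = 3e8
f = 0.3e12                               # assumed 0.3 THz carrier
lam = c / f
d = lam / 2                              # assumed unit-cell pitch
N = 16                                   # number of cells in the strip

coding = np.tile([0, 0, 1, 1], N // 4)   # "0011 0011 ..." coding sequence
phases = np.pi * coding                  # 1-bit phase states: 0 or pi

theta = np.radians(np.linspace(-90, 90, 721))
k = 2 * np.pi / lam
n = np.arange(N)

# Array factor: |sum_n exp(j * (k * d * n * sin(theta) + phi_n))|
af = np.abs(np.exp(1j * (k * d * np.outer(np.sin(theta), n) + phases)).sum(axis=1))

pos = theta > 0
print("main beam steered to (deg):", round(float(np.degrees(theta[pos][np.argmax(af[pos])])), 1))
```

With this periodic "0011" pattern the specular reflection is suppressed and two symmetric beams appear near ±30°; cycling such coding sequences in time is what gives space-time-coding metasurfaces their additional control over the scattered frequency harmonics.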
As part of Startup Village 2021, we are organizing a session
SESSION: TINYML. MACHINE LEARNING
TIME: May 25, 2021, 10:00 – 11:30
It is well known that the widespread adoption of artificial intelligence technologies in end devices is severely limited by their computing resources and calls for new scientific and engineering approaches. The session features talks on the new and promising TinyML technology (machine learning on microcontrollers or on dedicated ultra-low-power ML chips/accelerators). TinyML makes it possible to run machine learning algorithms on the microcontrollers present in almost every device. TinyML runs locally on the very microcontroller that can perform real-time analytics and control the attached sensors and actuators. This keeps the data private, and, in addition, on-device learning takes place at ultra-low power consumption. TinyML makes AI ubiquitous and accessible to consumers, turning the millions of devices people use every day into intelligent ones and enabling efficient data mining.
Leading experts will give an overview of the current technological capabilities of TinyML, business projects, and investment opportunities. A presentation will be given on how to build a startup in this area.
The business session will also discuss the TinyML Foundation (https://tinyml.org), a non-profit global organization and ecosystem for advancing AI on microcontrollers. The association was founded in 2019 in Silicon Valley. Today it has 28 chapters in 22 countries, including the USA, the UK, Japan, and China. Its general sponsors include Qualcomm, ARM, Samsung, Lattice Semiconductor, Brainchip, and others. Giants such as Google, Microsoft, and Amazon take part in the association's events and sponsorship.
The TinyML Foundation carries out the following activities in the field of AI on microcontrollers and on dedicated ultra-low-power ML chips/accelerators:
• growing a global community of scientists, engineers, designers, product managers, and business people in hardware, software, and systems;
• engaging both experts and newcomers in the development of cutting-edge ultra-low-power machine learning;
• facilitating and stimulating the open exchange of knowledge between researchers and industry to accelerate progress in the field;
• connecting technologies and innovations to a variety of products and business opportunities that create value for the entire ecosystem and for industry verticals.
A chapter of this organization has been established in Russia, and the session will discuss its main functions and opportunities. The chapter is expected to help establish new partnerships and collaborations, organize conferences, seminars, and master classes, support startups in the field of AI on microcontrollers, and serve as a focal point for the development of this technology in Russia.
SESSION PROGRAM:
(1) Overview talk on TinyML, 25-30 min
(2) Panel discussion, 30 min
(3) Overview talk on educational programs and resources, 30 min
KEYNOTE SPEAKERS:
(1) Evgeni Gousev // Senior Director at Qualcomm AI Research, the United States
(2) Yuri Panchul // Juniper Networks, the United States
SPEAKERS:
(1) Wei Xiao // NVIDIA, the United States
(2) Alessandro Grande // Edge Impulse, the United States
(3) Blair Newman // Neuton.ai, the United States