Professor: Andrzej Cichocki
Julia received her Master’s degree in Mathematics and her Ph.D. in Probability Theory and Statistics from Lomonosov Moscow State University. She also completed a Master’s-level program in Computer Science and Data Analysis at the Yandex School of Data Analysis. She currently works as a research scientist at the Skolkovo Institute of Science and Technology in the laboratory “Tensor networks and deep learning for applications in data mining”, where she closely collaborates with Prof. Ivan Oseledets and Prof. Andrzej Cichocki.
Julia’s recent research deals with the compression and acceleration of computer vision models (classification, object detection, segmentation), as well as the analysis of neural networks using low-rank methods such as tensor decompositions and active subspaces. She is also active in audio processing; in particular, she participates in a project on speech synthesis and voice conversion. Some of her earlier projects were related to medical data processing (EEG, ECG) and included human disease detection, artifact removal, and fatigue detection.
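To give a flavor of the low-rank compression methods mentioned above, here is a minimal, generic sketch (not taken from any of the papers below): a dense layer's weight matrix is approximated by a truncated SVD, the simplest member of the low-rank decomposition family, so that one large matrix product is replaced by two smaller ones. All shapes and the rank are illustrative assumptions.

```python
import numpy as np

# Hypothetical dense-layer weight matrix (256 outputs, 128 inputs).
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128))

# Truncated SVD: keep only the top `rank` singular components.
rank = 16
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * s[:rank]   # 256 x 16 factor (singular values folded in)
B = Vt[:rank]                # 16 x 128 factor

# One matmul with W becomes two cheaper matmuls with B, then A.
x = rng.standard_normal(128)
y_full = W @ x
y_low = A @ (B @ x)

# Parameter count drops from 256*128 to 256*16 + 16*128.
params_full = W.size
params_low = A.size + B.size
print(params_full, params_low)  # 32768 6144
```

In practice, methods like those in the publications below apply analogous factorizations (CP, Tucker, tensor-train) to 4D convolutional kernels and fine-tune the factors afterwards to recover accuracy; this sketch only shows the core size/speed trade-off.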
Deep Learning, Computer Vision, Signal Processing, Speech Technologies, Unsupervised Learning, Transfer Learning, Tensor Decompositions, Machine Learning
Phan A., Sobolev K., Sozykin K., Ermilov D., Gusak J., Tichavsky P., Glukhov V., Oseledets I., Cichocki A. (2020). Stable Low-rank Tensor Decomposition for Compression of Convolutional Neural Network. // European Conference on Computer Vision (ECCV).
Gusak J., Kholiavchenko M., Ponomarev E., Markeeva L., Blagoveschensky P., Cichocki A., Oseledets I. (2019). Automated Multi-Stage Compression of Neural Networks. // IEEE/CVF International Conference on Computer Vision Workshop (ICCVW).
Gusak J., Markeeva L., Daulbaev T., Katrutsa A., Cichocki A., Oseledets I. (2020). Towards Understanding Normalization in Neural ODEs. // International Conference on Learning Representations (ICLR), Workshop on Integration of Deep Neural Models and Differential Equations.
Daulbaev T., Katrutsa A., Markeeva L., Gusak J., Cichocki A., Oseledets I. (2020). Interpolated Adjoint Method for Neural ODEs. // arXiv preprint arXiv:2003.05271.
Gusak J., Daulbaev T., Ponomarev E., Cichocki A., Oseledets I. (2019). Reduced-Order Modeling of Deep Neural Networks. // arXiv preprint arXiv:1910.06995 (accepted to Computational Mathematics and Mathematical Physics).
Cui C., Zhang K., Daulbaev T., Gusak J., Oseledets I., Zhang Z. (2019). Active Subspace of Neural Networks: Structural Analysis and Universal Attacks. // arXiv preprint arXiv:1910.13025 (accepted to Computational Mathematics and Mathematical Physics).
Bulinskaya E., Gusak J. (2018). Insurance Models Under Incomplete Information. // Springer Proceedings in Mathematics & Statistics, vol. 231. Springer, Cham.
Gusak J. (2017). On Stability of the Solution in the Optimal Reinsurance Problem. // Vestnik Moskovskogo Universiteta. Seriya 1. Matematika. Mekhanika. Issue 2, pp. 58-61.
Bulinskaya E., Gusak J. (2016). Optimal Control and Sensitivity Analysis for Two Risk Models. // Communications in Statistics – Simulation and Computation, Taylor & Francis. Volume 45, Issue 5, pp. 1451-1466.
Bulinskaya E., Gusak J., Muromskaya A. (2015). Discrete-time Insurance Model with Capital Injections and Reinsurance. // Methodology and Computing in Applied Probability, Springer US. Volume 17, Issue 4, pp. 899-914.
Ph.D. in Probability Theory and Statistics, Lomonosov Moscow State University, Faculty of Mechanics and Mathematics, Department of Probability Theory.
Specialist in Mathematics. Specialization: Actuarial and Financial Analysis. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics, Department of Probability Theory. Diploma with Honours.
Master’s-level program in Computer Science and Data Analysis, the Yandex School of Data Analysis.