Joint Chinese-Russian Mathematical Online Colloquium

PROGRAM
11:00 (GMT+3)
Ivan Oseledets
Skoltech, AIRI, INM RAS

Bio: Ivan Oseledets is the Director of the Center for Artificial Intelligence Technology and Head of the Laboratory of Computational Intelligence at Skoltech. Ivan’s research covers a broad range of topics. He proposed the tensor-train decomposition, a new decomposition of high-dimensional arrays (tensors), and developed many efficient algorithms for solving high-dimensional problems. These algorithms are used in different areas of chemistry, biology, data analysis and machine learning. His current research focuses on the development of new algorithms in machine learning and artificial intelligence, such as the construction of adversarial examples, the theory of generative adversarial networks and the compression of neural networks. Ivan Oseledets has received several awards for his research and industrial cooperation, including two gold medals from the Russian Academy of Sciences (for students in 2005 and for young researchers in 2009), the SIAM Outstanding Paper Prize (2018), the Russian President Award for young researchers in science and innovation (2018), the Moscow Government Prize for Young Scientists (2023), the Best Professor award from Skoltech (2019), and the best cooperation project leader award from Huawei (2015, 2017).

Practical challenges in non-convex optimization.

In this talk, I will discuss several topics. The first is optimization over low-rank matrix and tensor manifolds, which often appears in applications. Low-rank approximation of matrices is one of the rare examples of a non-convex problem that can be solved in a numerically exact way by using the singular value decomposition (SVD). There also exists a large class of methods for solving optimization problems with low-rank constraints.
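To make the SVD remark concrete, here is a minimal NumPy sketch (an illustration added for this page, not material from the talk) of the Eckart-Young theorem: truncating the SVD solves the non-convex best rank-k approximation problem exactly. The matrix sizes and random seed are arbitrary choices.

    import numpy as np

    def best_rank_k(A, k):
        # Best rank-k approximation of A (Eckart-Young): truncate the SVD.
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 40))
    Ak = best_rank_k(A, 10)
    s = np.linalg.svd(A, compute_uv=False)
    # The spectral-norm error equals the first discarded singular value:
    print(np.linalg.norm(A - Ak, 2), s[10])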

In the second part of the talk (if time permits), I will discuss the peculiarities of optimization with deep neural networks. The theory of such optimization is still largely a mystery, with many empirical results and with theoretical results that hold only under unrealistic assumptions. Here I plan to highlight the main points and research directions.


12:00 (GMT+3)
Yu-Hong Dai
AMSS CAS

Bio: Yu-Hong Dai is a Professor of Mathematical Optimization at the Academy of Mathematics and Systems Science (AMSS) of the Chinese Academy of Sciences (CAS). Currently, he is the President of the Association of Asia-Pacific Operational Research Societies (APORS), President of the Operations Research Society of China, and Director of the Center for Optimization and Applications of AMSS of CAS. His research interests include continuous optimization, integer programming and applied optimization. In particular, he is known for the Dai-Yuan nonlinear conjugate gradient method and the perfect non-convergence example for the BFGS quasi-Newton method. He is also interested in building software and tackling practical optimization problems. He has received many honors, including the Shiing-Shen Chern Mathematics Award, the Keng Kang Prize of Scientific Computing and the Xiao Shutie Applied Mathematics Award. He was also an invited speaker at ICM 2022.

Optimization with Least Constraint Violation.

Studies of theory and algorithms for nonlinear programming usually assume that the problem is feasible. However, for many important practical nonlinear programming problems it is not known whether the feasible region is nonempty. This leads to a class of problems called optimization with least constraint violation.
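To fix ideas (an illustrative formulation added for this page; the precise violation measure used in the talk may differ), this class can be written as a bilevel problem: minimize the objective over the set of points that minimize a measure of constraint violation,

    \min_{x} f(x) \quad \text{s.t.} \quad x \in \operatorname*{arg\,min}_{y} \bigl\| \max\{ g(y), 0 \} \bigr\|,

where g(y) <= 0 are the original inequality constraints. If the problem is feasible, the inner minimum is zero and the usual nonlinear program is recovered.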

Firstly, the optimization problem with least constraint violation is proved to be a Lipschitz equality constrained optimization problem, and an elegant necessary optimality condition, named the L-stationary condition, is established. Properties of the classical penalty method for this Lipschitz minimization problem are developed, and the proximal gradient method for the penalized problem is studied.
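As a rough illustration of the penalty idea (a toy sketch added for this page, not the speaker's method): the one-dimensional problem below is deliberately infeasible, since it imposes x >= 2 and x <= 1 simultaneously, and for a large enough penalty parameter the minimizer of the penalized objective lands on the least-violation solution x = 1. The penalty values, step sizes and iteration count are arbitrary.

    import numpy as np

    def subgrad_penalized(x, rho):
        # Subgradient of x**2 + rho * (max(2 - x, 0) + max(x - 1, 0)),
        # i.e. the objective plus an l1 penalty on the violation of the
        # jointly infeasible constraints x >= 2 and x <= 1.
        g = 2 * x
        if x < 2:
            g -= rho
        if x > 1:
            g += rho
        return g

    for rho in [1.0, 5.0, 50.0]:
        x = 5.0
        for k in range(1, 20001):
            x -= 0.1 / np.sqrt(k) * subgrad_penalized(x, rho)
        print(rho, x)  # for rho > 2 the iterates approach x = 1

In this toy the l1 penalty is exact once rho exceeds a finite threshold, which hints at why penalty methods are natural for least-violation problems.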

Secondly, the optimization problem with least constraint violation is reformulated as an MPCC (mathematical program with complementarity constraints), and a local minimizer of the MPCC is proved to be an M-stationary point. The smoothing Fischer-Burmeister function method is constructed and analyzed for solving the related MPCC.
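For reference, one widely used smoothed Fischer-Burmeister function (the specific smoothing used in the talk is not stated here, so this particular form is an assumption) is phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2*mu) with mu > 0; as mu -> 0 it recovers the Fischer-Burmeister function, whose zeros are exactly the complementary pairs a >= 0, b >= 0, a*b = 0. A quick numerical check:

    import numpy as np

    def fb_smoothed(a, b, mu):
        # Smoothed Fischer-Burmeister function: smooth for mu > 0; as
        # mu -> 0 its zeros characterize complementarity (a, b >= 0, a*b = 0).
        return a + b - np.sqrt(a**2 + b**2 + 2 * mu)

    a, b = 3.0, 0.0  # a complementary pair, so the exact FB value is 0
    for mu in [1.0, 1e-2, 1e-4, 1e-8]:
        print(mu, fb_smoothed(a, b, mu))  # tends to 0 as mu -> 0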

Thirdly, the solvability of the dual of the optimization problem with least constraint violation is investigated. The optimality conditions for the problem with least constraint violation are established in terms of the augmented Lagrangian. Moreover, it is proved that the augmented Lagrangian method can find an approximate solution to the optimization problem with least constraint violation and has a linear rate of convergence under an error-bound condition.
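To illustrate the augmented Lagrangian iteration mentioned above (a minimal sketch added for this page, on a feasible toy problem of my choosing; it is not the speaker's algorithm for the least-violation setting): the outer loop alternates between minimizing the augmented Lagrangian in the primal variables and updating the multiplier.

    import numpy as np
    from scipy.optimize import minimize

    # Toy problem: min (x - 2)^2 + (y - 1)^2  s.t.  x + y = 1
    f = lambda z: (z[0] - 2)**2 + (z[1] - 1)**2
    c = lambda z: z[0] + z[1] - 1  # equality constraint c(z) = 0

    lam, rho = 0.0, 10.0
    z = np.zeros(2)
    for _ in range(20):
        # Inner step: minimize the augmented Lagrangian in z.
        aug = lambda z: f(z) + lam * c(z) + 0.5 * rho * c(z)**2
        z = minimize(aug, z).x
        lam += rho * c(z)  # multiplier update
    print(z, lam, c(z))  # approx. [1, 0], lam approx. 2, violation approx. 0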

Finally, the constrained convex optimization problem with least constraint violation is considered and analyzed under a general measure function. Several other related works on optimization with least constraint violation will also be mentioned.


The meeting will be held as a webinar on the Zoom platform.

Pre-registration for the event is not required.

Link to the conference:

https://us06web.zoom.us/j/86111798110?pwd=QUN1clpqL0F6TXlYY0Z0SDNqdUg0Zz09

Meeting ID: 861 1179 8110

Passcode: 987654