# Program

| Week | Day | 8:30 -- 9:00 | 9:00 -- 10:30 | 10:30 -- 11:00 | 11:00 -- 12:30 | 12:30 -- 14:00 | 14:00 -- 15:30 | 15:30 -- 16:00 | 16:00 -- 17:00 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Week 1 | Tuesday | Registration & Opening | IC1 | Coffee Break | IC2 | Lunch Break | IC1 | Coffee Break | P1 |
| | Wednesday | | IC2 | Coffee Break | IC1 | Lunch Break | IC2 | Coffee Break | P1 |
| | Thursday | | C1 | Coffee Break | C2 | Lunch Break | C1 | Coffee Break | P1 |
| | Friday | | C2 | Coffee Break | C1 | Lunch Break | C2 | Coffee Break | CF1 |
| | Saturday | | C1 | Coffee Break | C2 | Lunch Break | C1 | | |
| Week 2 | Sunday | | Excursion | | | | | | |
| | Monday | | C3 | Coffee Break | C4 | Lunch Break | C3 | Coffee Break | CF2 |
| | Tuesday | | C5 | Coffee Break | C3 | Lunch Break | C4 | Coffee Break | CF3 |
| | Wednesday | | C5 | Coffee Break | C3 | Lunch Break | C4 | Coffee Break | CF4 |
| | Thursday | | C5 | Coffee Break | C4 | Lunch Break | C5 | Coffee Break | Closing |

(C = advanced course, IC = introductory course, P = practical, CF = conference.)

## Advanced Courses

**Course 1:** **Efficient algorithms for matrix geometric means**

Professor Bruno Iannazzo, Università degli Studi di Perugia, Italy.

Matrix geometric means have been introduced for two reasons: on one hand, the mathematical wish to generalize concepts as far as possible; on the other, the demand from applications for suitable models of matrix averages. Both motivate the need for efficient algorithms for matrix geometric means. We review the computational problems related to matrix means. First, we consider the geometric mean of two matrices, both in the dense, moderate-size case and in the large-scale case. Then, we consider the geometric mean of more than two matrices, for which no explicit expression is known and which can be obtained only as the limit of certain sequences. The computation of matrix geometric means requires a judicious application of customary techniques in numerical linear algebra, together with advanced techniques in matrix computation, such as optimization on matrix manifolds and rational Krylov subspace approximation.
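For two symmetric positive-definite matrices the geometric mean has the well-known closed form \( A \# B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2} \). A minimal NumPy sketch of this formula (the helper based on the eigendecomposition and the example matrices are our own illustration, not code from the course):

```python
import numpy as np

def spd_sqrt(M):
    # Principal square root of a symmetric positive-definite matrix,
    # computed from its eigendecomposition M = V diag(w) V^T.
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

def geometric_mean(A, B):
    # A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}
    As = spd_sqrt(A)
    Asi = np.linalg.inv(As)
    return As @ spd_sqrt(Asi @ B @ Asi) @ As

A = np.diag([2.0, 8.0])
B = np.diag([4.0, 2.0])
G = geometric_mean(A, B)   # for commuting A, B this equals (A B)^{1/2}
```

This dense approach costs a few eigendecompositions; the course's large-scale techniques are aimed precisely at avoiding such full factorizations.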

**Course 2:** **Monotonic iterations for nonlinear matrix equations**

Professor Federico Poloni, University of Pisa, Italy.

We study several nonlinear matrix equations arising in applications, mainly from probability, and fixed-point iterations to solve them. We shall see how convergence of the iterations and properties of the solutions can often be proved by relying on positivity and ordering properties (in either the componentwise or the Löwner ordering). We will start from matrix equations such as the ones coming from binary trees and quasi-birth-death models (e.g., \( AX^2 + BX + C = 0 \)), and then move on, time permitting, to other Riccati-type equations with a richer linear algebraic structure.
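As a toy illustration of such a fixed-point iteration for \( AX^2 + BX + C = 0 \), rewritten as \( X = -B^{-1}(A X^2 + C) \) and started from \( X_0 = 0 \) (the coefficients below are hypothetical, chosen only so that the iteration converges and the iterates stay componentwise nonnegative):

```python
import numpy as np

# Hypothetical coefficients for A X^2 + B X + C = 0, chosen so that the
# map X -> -B^{-1}(A X^2 + C) is a contraction near the minimal solution.
A = np.diag([0.1, 0.2])
B = -np.eye(2)
C = np.diag([0.5, 0.3])

Binv = np.linalg.inv(B)
X = np.zeros((2, 2))
for _ in range(200):
    X = -Binv @ (A @ X @ X + C)   # X_{k+1} = -B^{-1}(A X_k^2 + C)

residual = np.linalg.norm(A @ X @ X + B @ X + C)
```

Starting from zero, the iterates form a componentwise nondecreasing sequence converging to the minimal nonnegative solution, which is the monotonicity phenomenon the course studies in general.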

**Course 3:** **The Bezout matrix**

Professor Gema Maria Diaz-Toca, Universidad de Murcia, Spain.

This course is devoted to the Bezout matrix. We will present Barnett's method via Bezoutians, which allows one to compute the gcd of several univariate polynomials. Two different uses of this method will be discussed: first, an algorithm for parameterizing the gcd of several polynomials; second, the problem of computing an approximate gcd. The application of the Bezout matrix to the solution of zero-dimensional bivariate polynomial systems will also be presented.
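To make the central object concrete: the Bezout matrix of \( f \) and \( g \) (degrees at most \( n \)) collects the coefficients of \( \bigl(f(x)g(y) - f(y)g(x)\bigr)/(x - y) \), and its rank deficiency equals the degree of \( \gcd(f, g) \). A small sketch, dividing by \( x - y \) directly on coefficient arrays (the function and example are our own illustration):

```python
import numpy as np

def bezout_matrix(f, g):
    # f, g: coefficient arrays (lowest degree first), zero-padded to equal
    # length n+1. Returns the n x n matrix Bz with
    #   sum_{i,j} Bz[i,j] x^i y^j = (f(x) g(y) - f(y) g(x)) / (x - y).
    n = len(f) - 1
    P = np.outer(f, g) - np.outer(g, f)   # coefficients of f(x)g(y) - f(y)g(x)
    Q = np.zeros((n, n))
    for i in range(n, 0, -1):             # peel off powers of x while dividing by (x - y)
        for j in range(n):
            prev = Q[i, j - 1] if (i < n and j > 0) else 0.0
            Q[i - 1, j] = P[i, j] + prev
    return Q

# f = x^2 - 1, g = x - 1: gcd is x - 1, so the 2x2 Bezout matrix has rank 2 - 1 = 1.
Bz = bezout_matrix(np.array([-1.0, 0.0, 1.0]), np.array([-1.0, 1.0, 0.0]))
```

The matrix is always symmetric, and here its rank deficiency of one reflects the common root at \( x = 1 \).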

**Course 4:** **Structured matrix computation and polynomial algebra**

Professor Bernard Mourrain, Inria, Sophia Antipolis Méditerranée, France.

There is a strong relationship between polynomials and structured matrices. In this course, we will consider multivariate polynomials and related matrix structures such as Toeplitz, Hankel and Vandermonde matrices. We will show how these types of matrices appear in methods for solving polynomial systems, such as resultant constructions and Gröbner and border basis computations. We will analyze their properties and the relations between these different structures. We will describe matrix-based methods for solving zero-dimensional systems of equations. Hankel-structured matrices are also present in multivariate polynomial algebra: using duality, we will describe them as operators on polynomials, analyze their properties and, in particular, detail their connection with Gorenstein algebras. This will lead us to a method for decomposing series as polynomial-exponential functions. We will apply this method in different contexts, such as the sparse representation of symbols of convolution operators, sparse interpolation, and tensor decomposition. Explicit computations and examples will illustrate the notions introduced in the presentation.
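In the simplest, univariate instance of such matrix-based root-finding, the operator of multiplication by \( x \) in \( \mathbb{R}[x]/(p) \) is the companion matrix of \( p \), and its eigenvalues are exactly the roots of \( p \). A minimal sketch (the example polynomial is our own):

```python
import numpy as np

def companion(p):
    # Companion matrix of a monic polynomial p (coefficients lowest degree
    # first, p[-1] == 1): the matrix of multiplication by x in R[x]/(p).
    n = len(p) - 1
    C = np.zeros((n, n))
    C[1:, :n - 1] = np.eye(n - 1)    # subdiagonal of ones
    C[:, -1] = -np.asarray(p[:-1])   # last column holds minus the coefficients
    return C

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
p = [-6.0, 11.0, -6.0, 1.0]
roots = np.sort(np.linalg.eigvals(companion(p)).real)
```

The multivariate methods of the course generalize this idea: multiplication operators modulo an ideal are built from resultant, Gröbner or border basis computations, and their joint eigenstructure yields the solutions.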

**Course 5:** **Riemannian and information geometries of positive-definite matrices and their applications**

Professor Maher Moakher, University of Tunis El Manar, Tunisia.

The importance of the cone of symmetric positive-definite matrices can hardly be exaggerated. Such matrices are omnipresent and play fundamental roles in several disciplines, such as mathematics, numerical analysis, probability and statistics, and the engineering sciences. Nowadays, as some applications deliver data that are constrained to live on this set, it has become even more essential to understand its geometric structure. Starting from a potential function, we introduce a Riemannian metric and give explicit expressions for the different notions of differential geometry, such as the covariant derivative, Christoffel symbols, curvature, geodesics and the distance function, as well as various differential operators. From the same potential function, we also introduce divergence functions that define the information geometry. Then, we introduce averages of symmetric positive-definite matrices based on the different distance and divergence functions. Some applications of the Riemannian and information geometries to the processing of symmetric positive-definite matrix data will be presented.
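For instance, the affine-invariant Riemannian metric on this cone yields the distance \( d(A, B) = \lVert \log(A^{-1/2} B A^{-1/2}) \rVert_F \). A small NumPy sketch (the eigendecomposition-based helpers and examples are our own illustration):

```python
import numpy as np

def spd_logm(M):
    # Principal logarithm of a symmetric positive-definite matrix.
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

def spd_dist(A, B):
    # Affine-invariant Riemannian distance: || log(A^{-1/2} B A^{-1/2}) ||_F.
    w, V = np.linalg.eigh(A)
    Asi = (V / np.sqrt(w)) @ V.T       # A^{-1/2}
    M = Asi @ B @ Asi
    return np.linalg.norm(spd_logm((M + M.T) / 2))  # symmetrize against round-off
```

This distance is invariant under congruences \( A \mapsto M A M^T \), which is one reason it is natural for covariance-matrix data.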

## Introductory Courses

**Course 1:** **Introduction to matrix analysis**

Dr Zeineb Chebbi, Université de Carthage, Tunisia.

Matrix analysis has a wide range of applications in many disciplines, such as engineering, physics, numerical analysis, statistics, data mining and, recently, deep learning. We start by recalling the basic concepts from linear algebra, such as vectors, matrices, determinants, eigenvalues, eigenvectors, etc. We then present the reduction of square matrices to simpler forms, including the spectral decomposition, the Jordan canonical form, the polar decomposition and the singular value decomposition. Vector norms and matrix norms are reviewed and bounds for eigenvalues are presented. Some matrix functions, including the matrix exponential, the matrix principal logarithm and the matrix power function, are studied. Finally, matrix differential calculus is introduced and several examples of differentiation of matrix functions are worked out.
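As a small taste, for a symmetric matrix each of these matrix functions reduces to applying the scalar function to the eigenvalues in the spectral decomposition \( A = V \operatorname{diag}(w) V^T \) (the example matrix is our own illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # symmetric, eigenvalues 1 and 3

w, V = np.linalg.eigh(A)          # spectral decomposition A = V diag(w) V^T
expA = (V * np.exp(w)) @ V.T      # matrix exponential exp(A)
logA = (V * np.log(w)) @ V.T      # principal matrix logarithm log(A)
sqrtA = (V * np.sqrt(w)) @ V.T    # matrix square root A^{1/2}
```

The same recipe \( f(A) = V \operatorname{diag}(f(w)) V^T \) works for any function defined on the spectrum; nonsymmetric matrices require the Jordan form or dedicated algorithms instead.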

**Course 2:** **Introduction to numerical linear algebra**

Dr Nadia Chouaieb, University of Tunis El Manar, Tunisia.

We study the numerical solution of systems of linear equations, least squares problems, eigenvalue problems, and some of their generalizations and applications. Techniques for dense, sparse and structured problems will be presented. We will cover direct methods based on matrix decompositions, such as the LU, Cholesky and QR decompositions, and the QR algorithm for eigenvalues, as well as iterative methods such as the Jacobi, Gauss-Seidel and SOR methods. We will also study methods from numerical optimization theory, such as the conjugate gradient method.
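As one concrete example of these iterative methods, the Jacobi method splits \( A = D + R \) into its diagonal and off-diagonal parts and iterates \( x_{k+1} = D^{-1}(b - R x_k) \). A minimal sketch on a hypothetical diagonally dominant system (which guarantees convergence):

```python
import numpy as np

# Hypothetical strictly diagonally dominant system A x = b.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

d = np.diag(A)               # diagonal part D (as a vector)
R = A - np.diag(d)           # off-diagonal part R
x = np.zeros(3)
for _ in range(100):
    x = (b - R @ x) / d      # Jacobi step x_{k+1} = D^{-1} (b - R x_k)
```

Gauss-Seidel and SOR follow the same splitting idea but use updated components within each sweep, typically converging faster.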

## Practical

**Practical 1:** **Practical numerical linear algebra with Python**

Dr Nadia Chouaieb, University of Tunis El Manar, Tunisia.
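The practical's materials are not reproduced here; purely as an illustration of the flavor of such a hands-on Python session, a least-squares line fit with NumPy (the data are made up):

```python
import numpy as np

# Fit y = c0 + c1 * t by least squares on synthetic data lying exactly
# on the line y = 1 + 2 t.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])
Amat = np.column_stack([np.ones_like(t), t])       # design matrix [1, t]
coef, *_ = np.linalg.lstsq(Amat, y, rcond=None)    # solves min ||Amat c - y||_2
```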

## Conferences

**Conference 1:** **Conference 1**

Professor

**Conference 2:** **Conference 2**

Professor

**Conference 3:** **Conference 3**

Professor

**Conference 4:** **Conference 4**

Professor