POPILSS 2026
The POPILSS days aim to bring together students and researchers from Lyon, Saint-Étienne, and Annecy who are interested in optimization, inverse problems, and sparsity. They seek to foster regional collaboration by encouraging interdisciplinary exchanges and identifying opportunities for cooperation.
The slides from this second edition can be found here:
| Speaker | Talk title | Link |
|---|---|---|
| Antoine Collas | Adapting learning models to distribution shifts: the role of normalization | |
| Julie Digne | Implicit Neural Representation for Geometry Processing | |
| Rodolphe Le Riche | Bayesian optimization with derivatives acceleration | |
| Romain Vo | Memory-efficient reconstruction and task-based evaluation for sparse-view X-ray CT | |
Speakers
- Julie Digne, CNRS Research Director, LIRIS, Lyon
- Rodolphe Le Riche, CNRS Research Director, LIMOS, EMSE, Saint-Étienne
- Romain Vo, Postdoctoral researcher, Laboratoire de Physique, ENS Lyon
- Antoine Collas, Postdoctoral researcher, Inria Saclay
Date and venue
Thursday, June 19, 2025
Room B120, Polytech Annecy
Bus access: 15 min on Line 1 of the SIBRA bus network. From the "Gare d'Annecy Quai Sud" stop, take the bus towards "Parc des Glaisins" and get off at the "Campus" stop.
Organizing committee
- Yassine Mhiri, LISTIC
- Argheesh Bhanot, LISTIC
- Jérémy Cohen, CNRS, CREATIS
- Jordan Patracone, LabHC
- Maxime Guillaud, Inria, CITI
- Nelly Pustelnik, CNRS, ENS Lyon
- Laurent Seppecher, ICJ
- Julián Tachella, CNRS, ENS Lyon
Sponsors: We thank the Institut Rhônalpin des Systèmes Complexes (IXXI), the Université Savoie Mont Blanc, and Polytech Annecy-Chambéry for supporting this edition of the POPILSS day.
Program - Thursday, June 19, 2025
| Time | Session |
|---|---|
| 9:30-9:55 | Welcome |
| 9:55-10:00 | Opening remarks |
| 10:00-10:50 | Antoine Collas - Adapting learning models to distribution shifts: the role of normalization |
| 10:50-11:30 | Coffee break + posters |
| 11:30-12:20 | Julie Digne - Implicit Neural Representation for Geometry Processing |
| 12:20-14:20 | Buffet lunch + posters |
| 14:20-15:10 | Rodolphe Le Riche - Bayesian optimization with derivatives acceleration |
| 15:10-16:00 | Romain Vo - Memory-efficient reconstruction and task-based evaluation for sparse-view X-ray CT |
Rodolphe Le Riche
Bayesian optimization with derivatives acceleration
Bayesian optimization algorithms form an important class of methods for minimizing functions that are costly to evaluate, which is a very common situation. These algorithms iteratively infer Gaussian processes (GPs) from past observations of the function and decide where new observations should be made by maximizing an acquisition criterion. Often, particularly in engineering practice, the objective function is defined on a compact set such as a hyper-rectangle of a d-dimensional real space, with bounds chosen wide enough that the optimum lies inside the search domain. In this situation, this work provides a way to build into the acquisition criterion the a priori information that these functions, once modeled as GP trajectories, should be evaluated at their minima, rather than at arbitrary points as usual acquisition criteria do. We propose an adaptation of the widely used Expected Improvement acquisition criterion that accounts only for GP trajectories whose first-order partial derivatives are zero and whose Hessian matrix is positive definite. The new acquisition criterion retains an analytical, computationally efficient expression. It is found to improve Bayesian optimization on a test bed of functions made of Gaussian process trajectories in dimensions 2, 3, and 5. The addition of first- and second-order derivative information is particularly useful for multimodal functions.
This talk corresponds to the paper: Guillaume Perrin and Rodolphe Le Riche, "Bayesian optimization with derivatives acceleration", Transactions on Machine Learning Research, ISSN 2835-8856, August 2024.
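For orientation, the standard Expected Improvement criterion that the paper adapts has a well-known closed form. Below is a minimal NumPy/SciPy sketch of plain EI for minimization, not the derivative-augmented criterion of the talk (which additionally conditions on zero gradients and a positive-definite Hessian); the candidate points and posterior values are made up for illustration.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    """Closed-form EI for minimization under a GP posterior N(mu, sigma^2)."""
    sigma = np.maximum(sigma, 1e-12)      # guard against zero posterior std
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Illustrative GP posterior at three candidate points (made-up numbers)
mu = np.array([0.5, 0.0, -0.2])           # posterior means
sigma = np.array([0.1, 0.3, 0.05])        # posterior standard deviations
f_min = 0.1                               # best observed value so far

ei = expected_improvement(mu, sigma, f_min)
best = int(np.argmax(ei))                 # candidate to evaluate next
```

EI trades off the predicted value against the posterior uncertainty: the third candidate wins here because its mean lies well below the incumbent.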
Julie Digne
Implicit Neural Representation for Geometry Processing
Implicit Neural Representations are powerful tools for representing 3D shapes. They encode an implicit field in the parameters of a neural network, leveraging the power of auto-differentiation to optimize the implicit function and avoiding the need for manually crafted basis functions. They can work on single shapes, or be conditioned by a latent shape space. In this talk, I will explain the principles behind implicit neural representations and show several applications to shape analysis, skeleton extraction, and shape deformation.
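As a toy illustration of encoding an implicit field in learned parameters, the sketch below fits a 2D signed distance function with fixed random sinusoidal features and a linear readout trained by least squares. This is a simplified stand-in: a real implicit neural representation trains all layers of an MLP by auto-differentiation. It only shows that a small parameter vector can represent the field and be queried at arbitrary coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)

def sdf_circle(p):
    """Ground-truth implicit field: signed distance to the unit circle."""
    return np.linalg.norm(p, axis=-1) - 1.0

# Sample the field at random 2D coordinates
P = rng.uniform(-2.0, 2.0, size=(1000, 2))
y = sdf_circle(P)

# Tiny "network": fixed random sinusoidal first layer + trained linear readout
W1 = rng.normal(0.0, 1.5, size=(2, 128))
b1 = rng.uniform(-np.pi, np.pi, size=128)

def features(p):
    return np.c_[np.sin(p @ W1 + b1), np.ones(len(p))]

# Fit the readout weights by least squares (stand-in for gradient training)
w, *_ = np.linalg.lstsq(features(P), y, rcond=None)

def f_theta(p):
    """The learned implicit function: query the field at any coordinate."""
    return features(p) @ w

mse = float(np.mean((f_theta(P) - y) ** 2))
vals = f_theta(np.array([[1.0, 0.0], [0.0, -1.0]]))  # points on the circle
```

The zero level set of the learned function approximates the unit circle, so `vals` should be close to zero at the queried on-circle points.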
Antoine Collas
Adapting learning models to distribution shifts: the role of normalization
Distribution shifts between source and target datasets pose a major challenge in machine learning. In this talk, I will present recent approaches to adapt models to such shifts, with a particular focus on normalization techniques. I will discuss the often overlooked role of normalization in both generalization and adaptation, showing how appropriate design choices can significantly improve transfer performance. These ideas will be illustrated with concrete examples, notably in neuroscience, where signal characteristics vary greatly across subjects and recording devices.
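A minimal illustration of why normalization matters under distribution shift: per-domain standardization removes domain-specific first- and second-order statistics, so source and target features become comparable. The domains and statistics below are synthetic; the adaptation methods discussed in the talk go well beyond this.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic source/target domains: same features, shifted mean and scale
source = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))
target = rng.normal(loc=3.0, scale=2.0, size=(1000, 4))

def zscore(x):
    """Per-domain standardization: remove domain-specific mean and scale."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

source_n, target_n = zscore(source), zscore(target)

# First-order statistics of the two domains are aligned after normalization
shift_before = float(np.abs(source.mean(axis=0) - target.mean(axis=0)).max())
shift_after = float(np.abs(source_n.mean(axis=0) - target_n.mean(axis=0)).max())
```

A model trained on `source_n` sees `target_n` inputs with matching statistics, which is the basic mechanism by which normalization choices affect transfer.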
Romain Vo
Memory-efficient reconstruction and task-based evaluation for sparse-view X-ray CT
X-ray computed tomography (CT) involves the reconstruction of the 3D image of an object from a set of measurements called radiographs. It is an essential imaging technique in the medical field, as well as in the non-destructive testing of industrial components. In both applications, there is a common need to produce reliable, high-quality images from a minimal number of projections. Unfortunately, reducing the number of measurements leads to artifacts in the reconstructed image, significantly affecting its quality. In this talk, I will present a memory-efficient procedure based on implicit neural representation for reconstructing high-quality 3D images from minimal projections. I will also discuss an evaluation framework that goes beyond standard distortion metrics to assess the quality of reconstructed images, ensuring they are suitable for specific applications using the observer framework.
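To illustrate why few projections cause artifacts, the sketch below runs a classical Landweber iteration on a toy tomography problem with only horizontal and vertical ray sums. The measurements are fit almost exactly, yet the reconstruction differs from the true phantom because the sparse-view system is underdetermined. This is generic iterative reconstruction on made-up data, not the memory-efficient INR method of the talk.

```python
import numpy as np

# Toy sparse-view tomography: x is a flattened 8x8 image and each row of A
# sums the pixels along one "ray" (here: one image row or one image column)
n = 8
x_true = np.zeros((n, n))
x_true[2:6, 2:6] = 1.0                       # square phantom
x_true = x_true.ravel()

rows = np.kron(np.eye(n), np.ones((1, n)))   # horizontal ray sums
cols = np.kron(np.ones((1, n)), np.eye(n))   # vertical ray sums
A = np.vstack([rows, cols])                  # only 16 measurements, 64 unknowns
b = A @ x_true                               # noiseless "sinogram"

# Landweber iteration: x <- x + tau * A^T (b - A x)
tau = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n * n)
for _ in range(500):
    x += tau * A.T @ (b - A @ x)

residual = float(np.linalg.norm(A @ x - b))      # data are fit almost exactly
recon_error = float(np.linalg.norm(x - x_true))  # ...but the image is not recovered
```

The gap between a near-zero data residual and a large image error is exactly the sparse-view ambiguity that priors (such as learned implicit representations) are meant to resolve.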



