INRIA Exploratory Action (AEx) ExODE

Coordinator: Jonathan Rouzaud-Cornabas (INRIA Beagle, LIRIS)

Participants: Samuel Bernard (INRIA Dracula, Institut Camille Jordan), Thierry Gautier (Avalon)

Date: 2019-2022

In biology, the vast majority of systems can be modeled as ordinary differential equations (ODEs). Modeling biological objects more finely increases the number of equations, as does simulating ever larger systems. As a result, the size of the ODE systems to be solved is exploding. A major bottleneck is that ODE numerical solvers are limited to a few thousand equations because of prohibitive computation times. The AEx ExODE tackles this bottleneck through 1) the introduction of new numerical methods that take advantage of mixed precision, which combines several floating-point precisions within a numerical scheme, and 2) the adaptation of these methods to next-generation computers, which are highly hierarchical and heterogeneous and composed of large numbers of CPUs and GPUs. Over the past year, a new Deep Learning approach has proposed replacing Recurrent Neural Networks (RNNs) with ODE systems. The numerical and parallel methods of ExODE will be evaluated and adapted in this setting to improve the performance and accuracy of these new approaches.
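To make the mixed-precision trade-off concrete, here is a minimal sketch (not ExODE's actual methods) of a classical Runge-Kutta 4 integrator whose working precision is a parameter, so the accuracy of single- and double-precision runs can be compared on a system with a known solution:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta 4 step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, y0, t0, t1, n, dtype=np.float64):
    """Integrate y' = f(t, y) from t0 to t1 with n RK4 steps,
    carrying the state and step size in the given precision."""
    y = np.asarray(y0, dtype=dtype)
    h = dtype((t1 - t0) / n)
    t = dtype(t0)
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t = t + h
    return y

# Test problem: y' = -y, exact solution y(1) = exp(-1).
f = lambda t, y: -y
exact = np.exp(-1.0)
y32 = integrate(f, [1.0], 0.0, 1.0, 1000, dtype=np.float32)
y64 = integrate(f, [1.0], 0.0, 1.0, 1000, dtype=np.float64)
```

In double precision the error is dominated by the tiny RK4 truncation term, while in single precision the accumulated rounding error dominates; a mixed-precision scheme would aim to keep most arithmetic in the cheaper format while preserving the accuracy of the expensive one.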

Slices – Design Study

PRACE 6th Implementation Phase Project


PRACE, the Partnership for Advanced Computing in Europe, is the permanent pan-European High Performance Computing service providing world-class systems for world-class science. Systems at the highest performance level (Tier-0) are deployed by Germany, France, Italy, Spain and Switzerland, providing researchers with more than 17 billion core hours of compute time. HPC experts from 25 member states enable users from academia and industry to ascertain leadership and remain competitive in the global race. Currently PRACE is finalizing the transition to PRACE 2, the successor of the initial five-year period. The objectives of PRACE-6IP are to build on and seamlessly continue the successes of PRACE and to start new innovative and collaborative activities proposed by the consortium. These include: assisting the development of PRACE 2; strengthening the internationally recognised PRACE brand; continuing and extending advanced training, which so far has provided more than 36 400 person·training days; preparing strategies and best practices towards Exascale computing and working on forward-looking software solutions; coordinating and enhancing the operation of the multi-tier HPC systems and services; and supporting users to exploit massively parallel systems and novel architectures. A high-level Service Catalogue is provided. The proven project structure will be used to achieve each of the objectives in 7 dedicated work packages. The activities are designed to increase Europe’s research and innovation potential especially through: seamless and efficient Tier-0 services and a pan-European HPC ecosystem including national capabilities; promoting take-up by industry and new communities, with special offers to SMEs; assistance to PRACE 2 development; proposing strategies for deployment of leadership systems; and collaborating with ETP4HPC, the CoEs and other European and international organisations on future architectures, training, application support and policies. This will be monitored through a set of KPIs.


Live VM migration scheduling with a dependency graph

Optimizing resource management in a data center is crucial for economic and ecological reasons.
One of the key points is to consolidate all running virtual machines onto a minimum number of physical servers. With this in mind, we study an optimization problem: the migration of a set of virtual machines under various constraints in a data center. The aim is to determine the best migration sequence for a set of virtual machines from an initial state to a final state, minimizing the total migration time, with or without intermediate migrations. The problem can be modeled by a state graph.
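As a toy illustration of the graph view (not the project's actual method; all names and the dependency encoding are hypothetical), migrations whose destination server only gains capacity once an earlier migration has freed it can be represented as a dependency graph and ordered by a topological sort; a cycle signals that no direct sequence exists and an intermediate migration is required:

```python
from collections import deque

def migration_order(migrations, depends_on):
    """Kahn's algorithm: return an order in which all migrations can run
    directly, or None if the dependencies are cyclic (meaning an
    intermediate migration would be needed to break the cycle)."""
    indeg = {m: 0 for m in migrations}
    succ = {m: [] for m in migrations}
    for m, deps in depends_on.items():
        for d in deps:
            succ[d].append(m)   # d must finish before m starts
            indeg[m] += 1
    ready = deque(m for m in migrations if indeg[m] == 0)
    order = []
    while ready:
        m = ready.popleft()
        order.append(m)
        for n in succ[m]:
            indeg[n] -= 1
            if indeg[n] == 0:
                ready.append(n)
    return order if len(order) == len(migrations) else None

# vm2 and vm3 can only move once vm1 has freed capacity on their target.
order = migration_order(["vm1", "vm2", "vm3"],
                        {"vm2": ["vm1"], "vm3": ["vm1"]})
```

Minimizing the total migration time on top of such an ordering (e.g. choosing which migrations to run in parallel, or whether an intermediate hop is cheaper than waiting) is the harder optimization problem the project studies.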

Duration: 2019-2021

Project in cooperation with LIRIS.



Europe is undergoing a major transition in its energy generation and supply infrastructure. The urgent need to halt carbon dioxide emissions and prevent dangerous global temperature rises has received renewed impetus following the unprecedented international commitment to enforcing the 2016 Paris Agreement on climate change. Rapid adoption of solar and wind power generation by several EU countries has demonstrated that renewable energy can competitively supply significant fractions of local energy needs in favourable conditions. These and other factors have combined to create a set of irresistible environmental, economic and health incentives to phase out power generation by fossil fuels in favour of decarbonised, distributed energy sources. While the potential of renewables can no longer be questioned, ensuring reliability in the absence of constant conventionally powered baseload capacity is still a major challenge.

The EoCoE-II project will build on its unique, established role at the crossroads of HPC and renewable energy to accelerate the adoption of production, storage and distribution of clean electricity. How will we achieve this? In its proof-of-principle phase, the EoCoE consortium developed a comprehensive, structured support pathway for enhancing the HPC capability of energy-oriented numerical models, from simple entry-level parallelism to fully-fledged exascale readiness. At the top end of this scale, promising applications from each energy domain have been selected to form the basis of 5 new Energy Science Challenges in the present successor project, EoCoE-II, which will be supported by 4 Technical Challenges.


Project Information
EoCoE-II is an H2020 RIA European project, call H2020-INFRAEDI-2018-1.

Duration: 3 years, Jan 1st 2019 to Dec 31st 2021.

Avalon Members: T. Gautier, C. Perez



Inria Project Lab HAC-SPECIS

HAC SPECIS: Inria project lab on High-performance Application and Computers: Studying PErformance and Correctness In Simulation (2016-2020)

The goal of the HAC SPECIS (High-performance Application and Computers: Studying PErformance and Correctness In Simulation) project is to answer the methodological needs of HPC application and runtime developers and to make it possible to study real HPC systems from both the correctness and the performance points of view. To this end, we gather experts from the HPC, formal verification and performance evaluation communities.


Start Date: June 2016

Duration: 4 years

Avalon Members: F. Suter, L. Lefevre


Laboratory of excellence in mathematics and fundamental computer science.

MILYON federates the mathematics and computer science communities of Lyon around three axes: research excellence, notably in areas at the interface of the two disciplines or of other sciences; education, through support for innovative, research-oriented curricula; and society, through outreach bringing scientific culture to the general public and technology transfer to industry.

It gathers more than 350 researchers and three joint research units of the Université de Lyon: the Institut Camille Jordan, the Laboratoire de l’Informatique du Parallélisme and the Unité de Mathématiques Pures et Appliquées.

More information on the MILYON website.

Start Date:

Duration: Until 2024

Avalon Members:

Inria-Illinois-ANL-BSC-JSC-Riken/AICS Joint Laboratory on Extreme Scale Computing

In June 2014, the University of Illinois at Urbana-Champaign, Inria (the French national computer science institute), Argonne National Laboratory, Barcelona Supercomputing Center, Jülich Supercomputing Centre and the RIKEN Advanced Institute for Computational Science formed the Joint Laboratory for Extreme Scale Computing, a follow-up of the Inria-Illinois Joint Laboratory for Petascale Computing.

Research areas include:

  • Scientific applications (big compute and big data) that are the drivers of the research in the other topics of the joint laboratory.
  • Modeling and optimizing numerical libraries, which are at the heart of many scientific applications.
  • Novel programming models and runtime systems, which allow scientific applications to be updated or reimagined to take full advantage of extreme-scale supercomputers.
  • Resilience and fault-tolerance research, which reduces the negative impact when processors, disk drives, or memory fail in supercomputers that have tens or hundreds of thousands of those components.
  • I/O and visualization, which are an important part of parallel execution for numerical simulations and data analytics.
  • HPC clouds, which may execute a portion of the HPC workload in the near future.

More on the lab website

Start Date: 2014

End date: 2022 (extended for 4 years in 2019)

Avalon Members: T. Gautier, L. Lefevre, C. Perez