Production Level Software
DIET (Distributed Interactive Engineering Toolbox) is a middleware designed for high-performance computing in a heterogeneous and distributed environment (workstations, clusters, grids, clouds).
Huge problems can now be computed over the Internet thanks to Grid Computing environments like Globus or Legion. Because most current applications are numerical, the use of libraries such as BLAS, LAPACK, ScaLAPACK, or PETSc is mandatory. Integrating such libraries into high-level applications written in languages like Fortran or C is far from easy. Moreover, the computational power and memory that such applications need may not be available on every workstation. RPC therefore seems to be a good candidate for building Problem Solving Environments on the Grid. Several tools follow this approach, such as Netsolve, NINF, NEOS, or RCS. The aim of the DIET project is to develop a set of tools for building computational servers.
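The RPC pattern underlying this approach can be sketched with Python's standard xmlrpc module (a minimal didactic sketch, not DIET's actual API; the `dot` routine and port handling are illustrative). A computational server exposes a numerical routine, and the client invokes it as if it were a local call; GridRPC middleware such as DIET adds scheduling, data management, and server selection on top of this basic idea.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# A toy "computational server": exposes one numerical routine.
def dot(a, b):
    # Stand-in for a call into a numerical library on the server side.
    return sum(x * y for x, y in zip(a, b))

# Bind to port 0 so the OS picks a free port.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(dot, "dot")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client sees a plain function call; the computation runs remotely.
proxy = ServerProxy(f"http://localhost:{port}")
print(proxy.dot([1, 2, 3], [4, 5, 6]))  # 32
```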
See more on
Execo offers a Python API for asynchronous control of local or remote, standalone or parallel, unix processes. It is especially well suited for quickly and easily scripting workflows of parallel/distributed operations on local or remote hosts: automating a scientific workflow, conducting computer science experiments, performing automated tests, etc. The core Python package is execo. The execo_g5k package provides a set of tools and extensions for the Grid'5000 testbed. The execo_engine package provides tools to ease the development of computer science experiments.
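The style of scripting this enables can be illustrated with the Python standard library alone (the code below is a stand-in built on subprocess and thread pools, not execo's actual API, which offers richer Process/Remote abstractions): several unix processes are launched asynchronously, then their results are gathered.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Illustrative stand-in for asynchronous process control:
# start several local unix processes in parallel, then gather results.
def run(cmd):
    p = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return p.returncode, p.stdout.strip()

commands = ["echo step-1", "echo step-2", "echo step-3"]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(run, commands))  # map preserves order

for code, out in results:
    print(code, out)
```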
See more on
libKOMP is a runtime support for OpenMP compatible with different compilers: GNU gcc/gfortran, Intel icc/ifort or clang/llvm. It is based on source code initially developed by Intel for its own OpenMP runtime, with extensions from Kaapi softwares (task representation, task scheduling). Our goal was to produce a robust OpenMP library with very low overhead in task scheduling and loop scheduling for a wide range of applications and architectures, including NUMA architecture and many-core such as Intel Xeon Phi.
Moreover it contains an OMPT module for recording trace of execution now available as an independent software called Tikki (https://gitlab.inria.fr/openmp/tikki).
See more on
SimGrid is a toolkit that provides core functionalities for the simulation of distributed applications in heterogeneous distributed environments. The simulation engine uses algorithmic and implementation techniques toward the fast simulation of large systems on a single machine. The models are theoretically grounded and experimentally validated.
The results are reproducible, enabling better scientific practices.
Its models of networks, CPUs, and disks are adapted to (Data)Grids, P2P systems, Clouds, Clusters, and HPC, allowing multi-domain studies. It can be used to simulate algorithms and prototypes of applications, to emulate real MPI applications through the virtualization of their communications, or to formally assess algorithms and applications that can run in the framework.
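The core idea of simulating a distributed system on a single machine can be sketched as a discrete-event loop (a didactic toy, not SimGrid's actual engine or models; the latency and bandwidth figures are illustrative): a platform model assigns each pending communication a completion time, and the simulator advances virtual time event by event.

```python
import heapq

# Toy discrete-event engine: events are (virtual_time, description) pairs.
# A simple latency + bandwidth model gives each message its completion
# time, in the spirit of (but far simpler than) validated network models.
LATENCY, BANDWIDTH = 1e-4, 1e9  # seconds, bytes/s

def transfer_time(size_bytes):
    return LATENCY + size_bytes / BANDWIDTH

events = []
for i, size in enumerate([1e6, 4e6, 2e6]):
    heapq.heappush(events, (transfer_time(size), f"msg-{i} received"))

clock = 0.0
while events:
    # Pop the next event in virtual-time order and advance the clock.
    clock, what = heapq.heappop(events)
    print(f"t={clock:.6f}s  {what}")
```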
See more on
Tikki is an OpenMP OMPT tool that helps create DAGs of tasks, record traces, and gather performance counters from an OpenMP application. It is a runtime tool, so no recompilation of the monitored OpenMP application is needed (though an OpenMP runtime that implements the OMPT API is required). The LLVM OpenMP runtime matches this requirement and provides a compatibility layer for GCC's OpenMP runtime (which means that if you compiled your code with GCC, Clang, or ICC, you can use that runtime to execute your application).
See more on
XKBLAS is yet another BLAS (Basic Linear Algebra Subroutines) library, targeting multi-GPU architectures thanks to the XKaapi runtime and block algorithms from the PLASMA library. XKBLAS is able to exploit large multi-GPU nodes with a sustained high level of performance. It embeds an extended version of XKaapi with specialized heuristics that take data locality and the memory hierarchy between the host and the GPUs into account. The library offers a wrapper library able to capture calls to BLAS (C or Fortran). The internal API is based on asynchronous invocations, in order to enable the overlapping of communication with computation and to better compose sequences of calls to BLAS.
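The benefit of an asynchronous front-end can be sketched with futures (illustrative only; XKBLAS's actual interface is a C/Fortran BLAS wrapper, not this Python code, and `gemm` here is a naive stand-in for a BLAS kernel): calls return immediately, so independent operations are in flight concurrently and the caller synchronizes only when results are needed.

```python
from concurrent.futures import ThreadPoolExecutor

# Naive stand-in for a BLAS matrix product on tiny lists-of-rows matrices.
def gemm(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 0], [0, 1]]
B = [[2, 3], [4, 5]]

with ThreadPoolExecutor() as pool:
    # Asynchronous submission: both products are scheduled at once,
    # so transfers/compute for one can overlap with the other.
    f1 = pool.submit(gemm, A, B)
    f2 = pool.submit(gemm, B, B)
    C1, C2 = f1.result(), f2.result()  # synchronize only here

print(C1)  # [[2, 3], [4, 5]] since A is the identity
```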
See more on
Research Prototype Software
COMET is a component model that enables the efficient composition of independent parallel codes using a task graph for multi-core shared-memory machines. It comes with a source-to-source compiler that generates L2C components and an L2C assembly from a COMET description. Some generated components embed OpenMP directives to create and submit tasks.
Concerto is a component-based reconfiguration model focusing on modelling and coordinating the life-cycle of the interacting parts of a system (e.g., a software module or a resource). Typically, each module of the system is modeled with a control component type, which contains information about the module's life-cycle and its dependencies. Concerto represents the current configuration of the system as an assembly, i.e., a set of control component instances connected together.
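The notion of an assembly of life-cycle-aware instances can be sketched as follows (a hypothetical toy rendering, not Concerto's actual syntax or API; the class, state names, and `deploy` method are invented for illustration): each instance carries a life-cycle state, and an instance may only run once its declared dependencies do.

```python
# Hypothetical toy model of a control component instance: it has a
# life-cycle state and declared dependencies on other instances.
class Component:
    def __init__(self, name, deps=()):
        self.name, self.deps, self.state = name, list(deps), "uninstalled"

    def deploy(self):
        # A component may only start once its dependencies are running.
        for d in self.deps:
            if d.state != "running":
                d.deploy()
        self.state = "running"

# A two-instance assembly: the web app depends on the database.
db = Component("database")
app = Component("web-app", deps=[db])

app.deploy()
print(app.state, db.state)  # running running
```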
HLCM is a component model that aims at providing high-level abstractions (composites, connectors, etc.) while enabling the derivation of a high-performance implementation, in particular through the merging of connectors. A second goal of HLCM is to understand the relationship between a high-level model and the complexity of deriving an efficient low-level model.
L2C is a software component model targeted at use cases where overhead matters, such as High Performance Computing. It is used as a backend for HLCM and COMET, and it aims at being minimalist. It is written in C++ and supports the MPI and CORBA communication technologies.