XLcloud

XLcloud: Design, Develop and Integrate the Software Elements of a High Performance Cloud Computing (HPCC) System. The XLcloud collaborative project is developing a cloud management platform that addresses the specific requirements of high performance cloud computing applications and users. XLcloud is designed for computationally intensive workloads in collaborative applications with interactive remote visualization capabilities.

XLcloud

XLcloud aims to define and demonstrate the principles of High Performance Computing (HPC) as a Service for applications that involve highly intensive calculations. XLcloud is designed as a collaborative tool that enables users to work together on highly sophisticated software in the cloud, thus sidestepping the need for individuals to purchase expensive software on their own. XLcloud combines the expertise of companies and academics that are innovative in the fields of high performance computer architectures, HD/3D flow visualization, and video.

More information on the XLcloud web page.

EU FP7 PRACE-2IP – Second Implementation Project of the Partnership for Advanced Computing in Europe

The purpose of the PRACE Research Infrastructure (RI) is to provide a sustainable, high-quality infrastructure for Europe that can meet the most demanding needs of European HPC user communities, through the provision of user access to the most powerful HPC systems available worldwide at any given time. In tandem with access to Tier-0 systems, the PRACE-2IP project fosters coordination between national HPC resources (Tier-1 systems) to best meet the needs of the European HPC user community. To ensure that European scientific and engineering communities have access to leading-edge supercomputers in the future, the PRACE-2IP project evaluates novel architectures, technologies, systems, and software. Optimizing and scaling applications for Tier-0 and Tier-1 systems is a core service of PRACE.

Within this project (September 2011 – August 2014), Avalon contributed in particular to the work package on novel programming techniques, which aimed to perform research and development on auto-tuned runtime environments for future multi-petascale and exascale systems.

Start date: September 2011

Duration: 3 years

More information on the PRACE-2IP website.

ANR MapReduce

This project is devoted to using the MapReduce programming paradigm on clouds and hybrid infrastructures. Partners: Argonne National Laboratory (USA), the University of Illinois at Urbana-Champaign (USA), the UIUC-INRIA Joint Laboratory for Petascale Computing, IBM France, IBCP, MEDIT (an SME), and the GRAAL/AVALON INRIA project-team.

ANR MapReduce

This project aims to overcome the limitations of current Map-Reduce frameworks such as Hadoop, thereby enabling highly scalable Map-Reduce-based data processing on various physical platforms such as clouds and desktop grids, as well as on hybrid infrastructures built by combining these two types of infrastructure (a minimal sketch of the paradigm itself follows the list below). To meet this global goal, several critical aspects will be investigated:

  • Data storage and sharing architecture. We will explore advanced techniques for scalable, high-throughput, concurrency-optimized data and metadata management, based on recent preliminary contributions of the partners.
  • Scheduling. We will investigate various scheduling issues related to large executions of Map-Reduce instances. In particular, we will study how the scheduler of the Hadoop implementation of Map-Reduce can scale over heterogeneous platforms; other issues include dynamic data replication and fair scheduling of multiple parallel jobs.
  • Fault tolerance and security. We intend to explore techniques to improve the execution of Map-Reduce applications on large-scale infrastructures with respect to fault tolerance and security.
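
As a minimal illustration of the Map-Reduce paradigm targeted by the project, the following self-contained Python sketch expresses word counting as user-supplied map and reduce functions, with a shuffle step grouping intermediate keys between the two phases. This is a toy single-process sketch of the programming model, not code from Hadoop or from any of the partners' systems:

```python
from collections import defaultdict

def map_phase(documents, map_fn):
    """Apply the user-supplied map function to every input record."""
    for doc in documents:
        yield from map_fn(doc)

def shuffle(pairs):
    """Group intermediate (key, value) pairs by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reduce_fn):
    """Apply the user-supplied reduce function to each group."""
    return {key: reduce_fn(key, values) for key, values in groups.items()}

# Word count expressed in the Map-Reduce style.
def word_map(doc):
    for word in doc.split():
        yield word, 1

def word_reduce(word, counts):
    return sum(counts)

docs = ["the quick brown fox", "the lazy dog", "the fox"]
print(reduce_phase(shuffle(map_phase(docs, word_map)), word_reduce))
# {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```

A framework such as Hadoop distributes exactly these three phases across many machines; the storage, scheduling, and fault-tolerance questions listed above concern what happens when that distribution spans heterogeneous or hybrid platforms.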

Our global goal is to explore how combining these techniques can improve the behavior of Map-Reduce-based applications on the target large-scale infrastructures. To this purpose, we will rely on recent preliminary contributions of the partners associated in this project, illustrated through the following main building blocks:

  • BlobSeer, a new approach to distributed data management designed by the KerData team at INRIA Rennes – Bretagne Atlantique to enable scalable, efficient, fine-grain access to massive, distributed data under heavy concurrency.
  • BitDew, a data-sharing platform designed by the GRAAL team (INRIA Grenoble – Rhône-Alpes, at ENS Lyon) to explore the specificities of desktop grid infrastructures.
  • Nimbus, a reference open-source cloud management toolkit developed at the University of Chicago and Argonne National Laboratory (USA) to facilitate the operation of clusters as Infrastructure-as-a-Service (IaaS) clouds.

More information on the MapReduce website.

INRIA Project Lab Héméra

A French project built around the Grid’5000 testbed.

INRIA Project Lab Héméra

Héméra was an INRIA Large Scale Initiative (2010-2014) that aimed to demonstrate ambitious up-scaling techniques for large-scale distributed computing by carrying out several dimensioning experiments on the Grid’5000 infrastructure, to animate the scientific community around Grid’5000, and to enlarge that community by helping newcomers make use of the testbed.

More information on the Héméra website.

European Desktop Grid Initiative (EDGI)

European Desktop Grid Initiative – European FP7 project

European Desktop Grid Initiative (EDGI)

EDGI develops middleware to support European Grid Initiative (EGI) and National Grid Initiative user communities that are heavy users of Distributed Computing Infrastructures (DCIs) and require an extremely large number of CPUs and cores. EDGI goes beyond existing DCIs, which are typically cluster grids and supercomputer grids, and extends them with public and institutional Desktop Grids (DGs) and clouds. EDGI integrates software components of ARC, gLite, UNICORE, BOINC, XWHEP, 3G Bridge, and cloud middleware such as OpenNebula and Eucalyptus into SG-DG-Cloud platforms for service provision.

The Avalon team’s task is to provide instantly available additional resources to DG systems when an application has QoS requirements that cannot be satisfied by the resources currently available in the DG system.
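
A minimal sketch of that decision logic, with illustrative names and a deliberately simple capacity model (this is not EDGI's actual API): when the desktop grid cannot drain the remaining work before the application's deadline, the missing capacity is requested from a cloud.

```python
import math

def extra_cloud_nodes_needed(pending_tasks: int,
                             avg_task_seconds: float,
                             dg_nodes_available: int,
                             seconds_to_deadline: float) -> int:
    """Estimate how many cloud nodes must supplement the desktop grid
    for all pending tasks to finish before the QoS deadline.

    Assumes independent tasks and one task at a time per node."""
    # How many tasks a single node can run before the deadline.
    tasks_per_node = max(1, int(seconds_to_deadline // avg_task_seconds))
    # Total number of nodes required to drain the queue in time.
    nodes_required = math.ceil(pending_tasks / tasks_per_node)
    # The desktop grid covers part of that; the cloud covers the rest.
    return max(0, nodes_required - dg_nodes_available)

# Example: 500 tasks of ~10 minutes each, 20 volunteer nodes,
# 2 hours until the deadline -> 22 cloud nodes must be provisioned.
print(extra_cloud_nodes_needed(500, 600.0, 20, 7200.0))  # -> 22
```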

Start date: 01/06/2010

Duration: 24 months

More information on the EDGI website.

ANR COOP

Multi-level Cooperative Resource Management

ANR COOP

The problem addressed by the COOP project (Dec. 2009 – May 2013) was to reconcile two layers – Programming Model Frameworks (PMFs) and Resource Management Systems (RMSs) – with respect to a number of tasks that they both try to handle independently. A PMF needs knowledge of the resources to select the most efficient transformation of abstract programming concepts into executable ones. However, the actual management of resources is done by the RMS in an opaque way, based on a simple abstraction of applications.
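
As a hypothetical sketch of the kind of cooperation COOP investigated (the interface and names below are illustrative, not the project's actual API), the RMS could publish a structured view of the current allocation, which the PMF then consults before deciding how to concretize its abstract constructs:

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class ResourceView:
    """What the RMS agrees to expose about the current allocation."""
    nodes: int
    cores_per_node: int
    memory_per_node_gb: float
    interconnect: str  # e.g. "infiniband" or "ethernet"

class ResourceManager(Protocol):
    """Cooperative RMS side: publishes a view instead of staying opaque."""
    def current_allocation(self) -> ResourceView: ...

def choose_mapping(rms: ResourceManager) -> str:
    """PMF side: select a concrete execution strategy for the abstract
    program, using the resource knowledge the RMS now shares."""
    view = rms.current_allocation()
    if view.nodes == 1:
        return "shared-memory tasks"       # stay inside a single node
    if view.interconnect == "infiniband":
        return "one MPI rank per core"     # communication is cheap
    return "coarse-grained processes"      # minimize communication

@dataclass
class FakeRMS:
    """Trivial RMS stub for demonstration."""
    view: ResourceView

    def current_allocation(self) -> ResourceView:
        return self.view

print(choose_mapping(FakeRMS(ResourceView(16, 32, 128.0, "infiniband"))))
# -> one MPI rank per core
```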

More details are available on the ANR COOP website.

Inria-Illinois-ANL Joint Laboratory for Petascale Computing

From June 2009 to June 2014, the University of Illinois at Urbana-Champaign and INRIA, the French national computer science institute, operated the Joint Laboratory for Petascale Computing. The Joint Laboratory was based at Illinois and included researchers from INRIA, Illinois’ Center for Extreme-Scale Computation, and the National Center for Supercomputing Applications. It focused on software challenges found in complex high-performance computers.

Early focus areas included:

  • Modeling and optimizing numerical libraries, which are at the heart of many scientific applications.
  • Fault-tolerance research, which reduces the negative impact when processors, disk drives, or memory fail in supercomputers that have tens or hundreds of thousands of those components.
  • Novel programming models, which allow scientific applications to be updated or reimagined to take full advantage of extreme-scale supercomputers.

More information on the lab website.

ANR SPADES

SPADES will propose solutions for the management of distributed schedulers in Desktop Computing environments, within a co-scheduling framework.

ANR SPADES

Today’s emergence of petascale architectures and the evolution of both research grids and computational grids greatly increase the number of potential resources. However, existing infrastructures and access rules do not make it possible to take full advantage of these resources.

One key idea of the SPADES project is to propose a non-intrusive but highly dynamic environment able to take advantage of available resources without disturbing their native use. In other words, the SPADES vision is to adapt the desktop grid paradigm by replacing user machines at the edge of the Internet with volatile resources. These volatile resources are in fact obtained through batch schedulers, via reservation mechanisms that are limited in time or subject to preemption (best-effort mode).
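
A minimal sketch of that acquisition model, assuming a generic batch scheduler with a best-effort queue (the classes below are placeholders, not a real scheduler API): reservations add nodes to the pool, and preemption shrinks the pool without being treated as an error.

```python
class FakeBatchSystem:
    """Stand-in for a real batch scheduler, for illustration only."""
    def __init__(self):
        self._next_id = 0

    def submit(self, nodes, queue):
        """Submit a job and return its id; a real system would queue it."""
        self._next_id += 1
        return self._next_id

class VolatilePool:
    """A pool of compute resources backed by preemptible reservations."""
    def __init__(self, batch_system):
        self.batch = batch_system
        self.active = {}  # job id -> number of nodes

    def grow(self, nodes):
        # Best-effort queue: the job runs on otherwise idle nodes and
        # may be killed whenever a regular job claims them back.
        job = self.batch.submit(nodes=nodes, queue="besteffort")
        self.active[job] = nodes
        return job

    def on_preemption(self, job):
        # Preemption is the normal case, not an error: the pool simply
        # shrinks, and displaced work is rescheduled elsewhere.
        self.active.pop(job, None)

    def capacity(self):
        return sum(self.active.values())

pool = VolatilePool(FakeBatchSystem())
j1 = pool.grow(8)
pool.grow(16)
pool.on_preemption(j1)   # the batch system reclaimed those 8 nodes
print(pool.capacity())   # -> 16
```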

One of the priorities of SPADES is to support platforms at very large scale; petascale environments are therefore of particular interest. Nevertheless, these next-generation architectures still suffer from a lack of expertise regarding their accurate and relevant use.

One of the SPADES goals is to show how to take advantage of the power of such architectures. Another challenge of SPADES is to provide a software solution for a service discovery system able to cope with a highly dynamic platform. This system will be deployed over volatile nodes and thus must tolerate failures. Such an experimental development requires an interface with batch submission systems that can make reservations transparently for users, and that can also communicate with these batch systems to obtain the information required by our schedulers.
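
A minimal sketch of such a failure-tolerant discovery service, under the simplifying assumption that liveness is tracked with heartbeats and a time-to-live (an illustration of the principle, not the system SPADES built): a node that stops heartbeating, whether crashed or preempted, silently disappears from lookups.

```python
import time

class ServiceRegistry:
    """Heartbeat-based service discovery over volatile nodes.

    A node that misses heartbeats for longer than `ttl` seconds is
    treated as failed (or preempted) and dropped from lookup results."""
    def __init__(self, ttl=30.0):
        self.ttl = ttl
        self.entries = {}  # service name -> {node: last heartbeat time}

    def heartbeat(self, service, node, now=None):
        now = time.time() if now is None else now
        self.entries.setdefault(service, {})[node] = now

    def lookup(self, service, now=None):
        now = time.time() if now is None else now
        alive = {n: t for n, t in self.entries.get(service, {}).items()
                 if now - t <= self.ttl}
        self.entries[service] = alive   # garbage-collect dead nodes
        return sorted(alive)

reg = ServiceRegistry(ttl=30.0)
reg.heartbeat("solver", "node-a", now=0.0)
reg.heartbeat("solver", "node-b", now=0.0)
reg.heartbeat("solver", "node-a", now=25.0)   # node-b stays silent
print(reg.lookup("solver", now=40.0))         # -> ['node-a']
```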

More information on the SPADES website.

ANR USS SimGrid

The USS-SimGrid project aims at Ultra Scalable Simulations with SimGrid. SimGrid is a leading tool for the simulation of HPC settings, and the main goal of this project is to enable its use for the simulation of desktop grids and peer-to-peer settings.

ANR USS SimGrid

Computer science differs from other experimental sciences, such as biology or physics, in the way experimental results are presented in articles. In those disciplines, articles always begin with a detailed presentation of the methods used to produce the results, which often rely on previously described and acknowledged procedures. In computer science, and more particularly in the field of application simulation, only a short description of a (sometimes unavailable) ad-hoc simulation framework is provided. This prevents the reproducibility of published results and thus objective comparisons between new research results and the state of the art. To reduce this gap between computer science and other experimental sciences, there is a need for powerful, validated, available, and well-advertised tools and methods.

The general goal of this project is to provide such an application simulation framework, one that meets the needs of both the High Performance Computing and the Large Scale Distributed Computing communities. SimGrid is recognized in the HPC community as one of the most prominent simulation environments, as shown by its large community of users and the number of publications that use it. This project will extend SimGrid to target the Large Scale Distributed Computing community, increase simulation realism, and provide useful tools for test campaign management.
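
To make the role of such a framework concrete, here is a deliberately tiny discrete-event sketch in the same spirit (an illustration of the principle only, not SimGrid's actual API): the platform and workload are explicit, deterministic inputs, so rerunning the same script anywhere reproduces the same simulated result, which is exactly the reproducibility argument made above.

```python
import heapq

def simulate(tasks, hosts):
    """Simulate independent tasks (sizes in flop) on hosts (speeds in
    flop/s), greedily assigning each task to the host that frees up
    first. Returns the simulated makespan in seconds."""
    # Priority queue of (time when the host becomes free, host speed).
    free_at = [(0.0, speed) for speed in hosts]
    heapq.heapify(free_at)
    makespan = 0.0
    for flop in sorted(tasks, reverse=True):   # largest tasks first
        t, speed = heapq.heappop(free_at)
        t += flop / speed                      # simulated, not wall-clock
        makespan = max(makespan, t)
        heapq.heappush(free_at, (t, speed))
    return makespan

# Four 1-Gflop tasks and one 10-Gflop task on two unequal hosts.
print(simulate([1e9] * 4 + [1e10], hosts=[1e9, 2e9]))  # -> 10.0 seconds
```

Note that this greedy policy sends the large task to the slow host, a suboptimal outcome that a simulator makes visible and repeatable; studying such scheduling behaviors at much larger scales is the kind of experiment such a framework enables.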

More information on the USS-SimGrid website.