vHGW (virtual Home Gateway) project

vHGW (virtual Home Gateway), or how to save energy by running thousands of HGs on one server.

According to current studies, the telecom infrastructure is a major contributor to the ever-increasing energy demand of the ICT sector and accounts for a significant part of its carbon footprint. Surprisingly, more than 80% of this share is consumed by Home Gateways (HGs).

In this preliminary work, we have therefore explored the possibility of relocating some of the functionalities of a HG into a vHGW (virtual Home Gateway) hosted by a node located on NSP premises. In our experiment, a single server machine consuming around 100 W could host up to 1000 vHGWs, and our results showed that the number of vHGWs hosted on the server has no significant effect on its energy consumption. We also confirmed that a vHGW can provide the same network and application level services as a HG, such as routing, DHCP, firewalling and NAT.
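
As a rough illustration of how thousands of lightweight gateways could share one machine, the sketch below creates one Linux network namespace per subscriber, with NAT towards the server's uplink. This is a hypothetical reconstruction in Python, not the project's actual implementation; the interface names, the 10.x.y.0/24 address plan and the `eth0` uplink are assumptions, and a per-namespace DHCP server (e.g. dnsmasq) would complete the picture.

```python
import subprocess

def sh(cmd):
    """Run a shell command, raising on failure (requires root on Linux)."""
    subprocess.run(cmd, shell=True, check=True)

def create_vhgw(gw_id: int, uplink: str = "eth0"):
    """Create one hypothetical vHGW as a network namespace with NAT."""
    ns = f"vhgw{gw_id}"
    host_if, ns_if = f"veth{gw_id}h", f"veth{gw_id}n"
    subnet = f"10.{gw_id // 256}.{gw_id % 256}"   # assumed address plan

    sh(f"ip netns add {ns}")
    sh(f"ip link add {host_if} type veth peer name {ns_if}")
    sh(f"ip link set {ns_if} netns {ns}")
    sh(f"ip addr add {subnet}.1/24 dev {host_if}")
    sh(f"ip link set {host_if} up")
    sh(f"ip netns exec {ns} ip addr add {subnet}.2/24 dev {ns_if}")
    sh(f"ip netns exec {ns} ip link set {ns_if} up")
    sh(f"ip netns exec {ns} ip route add default via {subnet}.1")
    # NAT the subscriber's subnet onto the server's uplink interface.
    sh(f"iptables -t nat -A POSTROUTING -s {subnet}.0/24 -o {uplink} -j MASQUERADE")

if __name__ == "__main__":
    for i in range(1, 1001):   # up to 1000 vHGWs, as in the experiment
        create_vhgw(i)
```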

Consider replacing the current HG with a quasi-passive device (consuming around 1 W), and suppose that end users have triple-play services over a fiber link (FTTH). By pulling the network and application level services into vHGWs and using a server machine that can host around 1000 of them (and probably more in the near future), we can obtain about 300% energy savings in the overall wireline telecom network. The result of our experiment is therefore aligned with the recommendations set by the GreenTouch project (http://greentouch.org).
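
As a back-of-the-envelope check using only the figures quoted above (a 1 W quasi-passive device and a 100 W server shared by 1000 vHGWs), per-subscriber consumption works out as follows; the 10 W figure for a conventional HG is our assumption, not a number from the text.

```python
# Figures quoted in the text above.
server_watts = 100.0       # one server hosting the vHGWs
vhgws_per_server = 1000
quasi_passive_watts = 1.0  # replacement device in the home

# Per-subscriber consumption after relocation:
per_subscriber = quasi_passive_watts + server_watts / vhgws_per_server
print(f"{per_subscriber:.1f} W per subscriber")  # 1.1 W

# The saving relative to a conventional HG depends on its power draw,
# which the text does not quantify; with a hypothetical 10 W HG:
conventional_hgw_watts = 10.0  # assumption, not from the text
print(f"saving: {1 - per_subscriber / conventional_hgw_watts:.0%}")  # 89%
```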

The result of this study thus shows the benefit of relocating HG services: it significantly reduces the overall energy consumption of a wireline network and minimizes the sector's impact on the environment.

For more information about this research work, please visit the vHGW Web page.

Celtic+ Seed4C

Seed4C: Secured Embedded Element for Cloud

From security in the cloud to security of the cloud. The value proposition of secure elements to protect software execution on a personal computer or on a server no longer needs to be demonstrated. Nowadays, the emergence of cloud computing has led to a growing number of use case scenarios where one has to deal not with a single computer but with a group of connected computers. In this case the challenge is not only to secure the software running on one single machine, but to manage and guarantee the security of a group of computers seen as a single entity. The main idea is to evolve from security in the cloud (with isolated points of enforcement for security, the state of the art) to security of the cloud (with cooperative points of enforcement for security, the innovation proposed by this project).

This value proposition of cooperative points of enforcement of security is proposed under the concept of Networks of Secure Elements (NoSEs). NoSEs are made of individual secure elements attached to computers, user or network appliances, possibly pre-provisioned with initial secret keys. They can establish security associations, communicate together to set up a trusted network of computers, and propagate centrally defined security conditions to a group of machines. The range of use cases addressed by this concept is very broad: NoSEs can be used to lock the execution of software to a group of specific machines, a particular application of this being tying virtual machine execution to specific servers. NoSEs can also be used to improve the security of distributed computing, not only by making sure that only trusted nodes can take part in the computation, but also by certifying the integrity of the results returned by each one of them. Secure elements located in user appliances featuring a user interface (such as a mobile handset) can be part of a NoSE and help secure server-side operations using two-factor authentication.

The project will study the impact of NoSEs on the different layers of the architecture, from hardware to service, in order to define how trust can be propagated from the lower layers to the upper ones. At the lower level, the form factor and physical interfaces of secure elements to the host will be studied, as well as the management of their life cycle. At an upper level, the definition and implementation of security, access control and privacy policies involving the secure elements will be specified, as well as the middleware solutions to interface with the corresponding functional blocks. Finally, an important part of the project will focus on specific use cases, including those mentioned above, where the use of NoSEs can provide interesting solutions. One particular aspect will address privacy and identity management.
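
As a toy illustration of two of these ideas, pre-provisioned secret keys and certified results, the sketch below models a secure element that signs the results its node returns, so that a coordinator accepts output only from trusted nodes. All names are hypothetical; a real NoSE would keep the key inside tamper-resistant hardware rather than in Python memory.

```python
import hmac, hashlib, os

class SecureElement:
    """Toy stand-in for a NoSE node: holds a pre-provisioned secret key
    and certifies the results its host returns (illustrative sketch)."""

    def __init__(self, node_id: str, key: bytes):
        self.node_id = node_id
        self._key = key  # in real hardware, this never leaves the chip

    def certify(self, result: bytes) -> bytes:
        """Return a MAC binding this node's identity to a result."""
        return hmac.new(self._key, self.node_id.encode() + result,
                        hashlib.sha256).digest()

class Coordinator:
    """Central point that provisioned the keys and verifies results."""

    def __init__(self, keys):
        self._keys = keys  # node_id -> pre-provisioned key

    def verify(self, node_id: str, result: bytes, tag: bytes) -> bool:
        expected = hmac.new(self._keys[node_id], node_id.encode() + result,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

# Usage: only a node holding a provisioned key produces accepted results.
key = os.urandom(32)
node = SecureElement("node-42", key)
coord = Coordinator({"node-42": key})
result = b"output of a distributed task"
assert coord.verify("node-42", result, node.certify(result))
```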

More on the SEED4C web site.

ANR SONGS

The last decade has brought tremendous changes to the characteristics of large-scale distributed computing platforms. Large grids processing terabytes of information a day and peer-to-peer technology have become common, even though understanding how to efficiently use such platforms still raises many challenges. As demonstrated by the USS SimGrid project funded by the ANR in 2008, simulation has proved to be a very effective approach for studying such platforms. Although even more challenging, we think the issues raised by petaflop/exaflop computers and emerging cloud infrastructures can be addressed using a similar simulation methodology.

The goal of the SONGS project is to extend the applicability of the SimGrid simulation framework from Grids and Peer-to-Peer systems to Clouds and High Performance Computing systems. Each type of large-scale computing system will be addressed through a set of use cases and led by researchers recognized as experts in this area.

Any sound study of such systems through simulation relies on the following pillars of simulation methodology: an efficient simulation kernel; sound and validated models; simulation analysis tools; and simulation campaign management.
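
To make the first pillar concrete, here is a toy discrete-event simulation kernel of the kind such frameworks are built on: a virtual clock and a priority queue of timestamped events. This is a didactic sketch, not SimGrid's actual API.

```python
import heapq

class SimKernel:
    """Minimal discrete-event simulation loop: a priority queue of
    (timestamp, event) pairs processed in virtual-time order."""

    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = 0  # tie-breaker for events at the same timestamp

    def schedule(self, delay, action):
        """Schedule `action(kernel)` to fire `delay` virtual seconds from now."""
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self):
        """Process events in timestamp order until none remain."""
        while self._queue:
            self.now, _, action = heapq.heappop(self._queue)
            action(self)

# Usage: model a host sending a message whose transfer takes 0.5 s.
k = SimKernel()
k.schedule(0.0, lambda k: k.schedule(0.5, lambda k: print(f"[{k.now}] received")))
k.run()
```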

For more information, please visit the project website.

XLcloud

XLcloud: Design, Develop and Integrate the Software Elements of a High Performance Cloud Computing (HPCC) System. The XLcloud collaborative project is developing a cloud management platform that addresses the specific requirements of high performance cloud computing applications and users. XLcloud is designed for computationally intensive workloads in collaborative applications with interactive remote visualisation capabilities.

XLcloud aims to define and demonstrate the principles of HPC as a Service (High Performance Computing) for all those applications that involve highly intensive calculations. XLcloud is designed as a collaborative tool that enables users to work together on highly sophisticated software in the Cloud, thus sidestepping the need for individuals to purchase expensive software on their own. XLcloud combines the expertise of companies and academics that are innovative in the field of high performance computer architectures, HD/3D flow visualization and video.

More information on the XLcloud web page.

EU FP7 PRACE-2IP – Second Implementation Project of the Partnership for Advanced Computing in Europe

The purpose of the PRACE Research Infrastructure (RI) is to provide a sustainable high-quality infrastructure for Europe that can meet the most demanding needs of European HPC user communities through the provision of user access to the most powerful HPC systems available worldwide at any given time. In tandem with access to Tier-0 systems, the PRACE-2IP project fosters coordination between national HPC resources (Tier-1 systems) to best meet the needs of the European HPC user community. To ensure that European scientific and engineering communities have access to leading-edge supercomputers in the future, the PRACE-2IP project evaluates novel architectures, technologies, systems, and software. Optimizing and scaling applications for Tier-0 and Tier-1 systems is a core service of PRACE.

Within this project (September 2011 – August 2014), Avalon participated in particular in the work package on novel programming techniques, which aims to perform research and development on auto-tuned runtime environments for future multi-petascale and exascale systems.
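
The sketch below illustrates the basic idea behind auto-tuned runtimes: timing several interchangeable implementations of a kernel and keeping the fastest for the current machine. It is a minimal hypothetical example, not code from the project.

```python
import time

def autotune(variants, workload, repeats=3):
    """Empirically pick the fastest variant of a kernel on this machine:
    the core idea behind auto-tuned runtime environments."""
    best, best_time = None, float("inf")
    for name, fn in variants.items():
        t0 = time.perf_counter()
        for _ in range(repeats):
            fn(workload)
        elapsed = (time.perf_counter() - t0) / repeats
        if elapsed < best_time:
            best, best_time = name, elapsed
    return best

# Usage: two interchangeable implementations of the same computation.
variants = {
    "genexpr-sum": lambda xs: sum(x * x for x in xs),
    "map-sum": lambda xs: sum(map(lambda x: x * x, xs)),
}
print(autotune(variants, list(range(100_000))))
```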

Start date: September 2011

Duration: 3 years

More information on the PRACE-2IP website.

ANR MapReduce

This project is devoted to using the MapReduce programming paradigm on clouds and hybrid infrastructures. Partners: Argonne National Laboratory (USA), the University of Illinois at Urbana-Champaign (USA), the UIUC-INRIA Joint Lab on Petascale Computing, IBM France, IBCP, MEDIT (SME) and the GRAAL/AVALON INRIA project-team.

This project aims to overcome the limitations of current Map-Reduce frameworks such as Hadoop, thereby enabling highly scalable Map-Reduce-based data processing on various physical platforms such as clouds, desktop grids, or hybrid infrastructures built by combining these two. To meet this global goal, several critical aspects will be investigated.

Data storage and sharing architecture. First, we will explore advanced techniques for scalable, high-throughput, concurrency-optimized data and metadata management, based on recent preliminary contributions of the partners.

Scheduling. Second, we will investigate various scheduling issues related to large executions of Map-Reduce instances. In particular, we will study how the scheduler of the Hadoop implementation of Map-Reduce can scale over heterogeneous platforms; other issues include dynamic data replication and fair scheduling of multiple parallel jobs.

Fault tolerance and security. Finally, we intend to explore techniques to improve the execution of Map-Reduce applications on large-scale infrastructures with respect to fault tolerance and security.
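
For readers unfamiliar with the paradigm, the toy single-process sketch below shows the three phases (map, shuffle, reduce) that frameworks such as Hadoop distribute, and that the project's storage and scheduling work targets; it is purely didactic.

```python
from collections import defaultdict
from itertools import chain

def map_reduce(inputs, mapper, reducer):
    """Didactic single-process MapReduce: map, shuffle by key, reduce.
    Distributed frameworks parallelize exactly these three phases."""
    # Map phase: each input record yields (key, value) pairs.
    pairs = chain.from_iterable(mapper(record) for record in inputs)
    # Shuffle phase: group all values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    # Reduce phase: fold each key's values into a result.
    return {key: reducer(key, values) for key, values in groups.items()}

# Usage: the classic word count.
docs = ["the quick brown fox", "the lazy dog"]
counts = map_reduce(
    docs,
    mapper=lambda doc: ((w, 1) for w in doc.split()),
    reducer=lambda word, ones: sum(ones),
)
print(counts["the"])  # 2
```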

Our global goal is to explore how combining these techniques can improve the behavior of Map-Reduce-based applications on the target large-scale infrastructures. To this purpose, we will rely on recent preliminary contributions of the partners associated in this project, illustrated through the following main building blocks:

BlobSeer, a new approach to distributed data management designed by the KerData team at INRIA Rennes – Bretagne Atlantique to enable scalable, efficient, fine-grain access to massive, distributed data under heavy concurrency.

BitDew, a data-sharing platform currently designed by the GRAAL team of INRIA Grenoble – Rhône-Alpes at ENS Lyon, with the goal of exploring the specificities of desktop grid infrastructures.

Nimbus, a reference open-source cloud management toolkit developed at the University of Chicago and Argonne National Laboratory (USA) with the goal of facilitating the operation of clusters as Infrastructure-as-a-Service (IaaS) clouds.

More information on the MapReduce web site.

INRIA Project Lab Héméra

French Project around the Grid’5000 testbed.

Héméra was an INRIA Large Scale Initiative (2010-2014) that aimed at demonstrating ambitious up-scaling techniques for large-scale distributed computing by carrying out several dimensioning experiments on the Grid’5000 infrastructure, at animating the scientific community around Grid’5000, and at enlarging the Grid’5000 community by helping newcomers make use of Grid’5000.

More information on the Hemera website.

European Desktop Grid Initiative (EDGI)

European Desktop Grid Initiative – European FP7 project

EDGI develops middleware to support European Grid Initiative (EGI) and National Grid Initiative user communities that are heavy users of Distributed Computing Infrastructures (DCIs) and require an extremely large number of CPUs and cores. EDGI goes beyond existing DCIs, which are typically cluster grids and supercomputer grids, and extends them with public and institutional Desktop Grids (DGs) and clouds. EDGI integrates software components of ARC, gLite, Unicore, BOINC, XWHEP, 3G Bridge, and cloud middleware such as OpenNebula and Eucalyptus into SG-DG-Cloud platforms for service provision.

The Avalon team’s task is to provide instantly available additional resources to DG systems when an application has QoS requirements that cannot be satisfied by the resources available in the DG system.
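
A hypothetical sketch of that decision logic: estimate whether the desktop grid alone can finish a bag of tasks before its QoS deadline and, if not, how many cloud nodes to add. The model (identical tasks, homogeneous nodes, no failures) is deliberately idealized.

```python
import math

def extra_cloud_nodes(tasks, task_seconds, deadline_seconds,
                      dg_nodes, cloud_node_speedup=1.0):
    """Estimate how many cloud nodes must supplement a desktop grid so a
    bag of identical tasks finishes before its deadline."""
    total_work = tasks * task_seconds
    dg_capacity = dg_nodes * deadline_seconds
    if dg_capacity >= total_work:
        return 0  # the DG alone satisfies the QoS requirement
    shortfall = total_work - dg_capacity
    return math.ceil(shortfall / (deadline_seconds * cloud_node_speedup))

# Usage: 10,000 one-minute tasks, a 1-hour deadline, 100 DG nodes.
print(extra_cloud_nodes(10_000, 60, 3600, 100))  # -> 67 cloud nodes
```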

Start date: 01/06/2010

Duration: 24 months

More information on the EDGI website.

ANR COOP

Multi-level Cooperative Resource Management

The problem addressed by the COOP project (Dec. 2009 – May 2013) was to reconcile two layers – Programming Model Frameworks (PMFs) and Resource Management Systems (RMSs) – with respect to a number of tasks that they both try to handle independently. A PMF needs knowledge of the resources to select the most efficient transformation of abstract programming concepts into executable ones. However, the actual management of resources is done by the RMS in an opaque way, based on a simple abstraction of applications.
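
A toy sketch of the cooperation COOP aimed at: instead of an opaque allocation, the RMS exposes a resource description that the PMF queries before mapping abstract tasks onto executable ones. The types and field names are illustrative assumptions, not the project's API.

```python
from dataclasses import dataclass

@dataclass
class ResourceOffer:
    """What an RMS could expose to a PMF instead of an opaque allocation."""
    nodes: int
    cores_per_node: int
    gpus_per_node: int

class Framework:
    """Toy PMF that chooses a task mapping based on the resource offer."""

    def plan(self, n_tasks: int, offer: ResourceOffer) -> dict:
        total_cores = offer.nodes * offer.cores_per_node
        if offer.gpus_per_node > 0:
            return {"backend": "gpu", "tasks_per_node": n_tasks // offer.nodes}
        return {"backend": "cpu", "workers": min(n_tasks, total_cores)}

# Usage: the same program adapts to two different allocations.
fw = Framework()
print(fw.plan(1000, ResourceOffer(nodes=4, cores_per_node=16, gpus_per_node=0)))
print(fw.plan(1000, ResourceOffer(nodes=4, cores_per_node=16, gpus_per_node=2)))
```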

More details are available on the ANR COOP website.

ANR SPADES

SPADES will propose solutions for the management of distributed schedulers in Desktop Computing environments, within a co-scheduling framework.

ANR SPADES

Today’s emergence of petascale architectures and the evolution of both research grids and computational grids greatly increase the number of potential resources. However, existing infrastructures and access rules do not allow users to take full advantage of these resources.

One key idea of the SPADES project is to propose a non-intrusive but highly dynamic environment able to take advantage of available resources without disturbing their native use. In other words, the SPADES vision is to adapt the desktop grid paradigm by replacing users at the edge of the Internet with volatile resources. These volatile resources are in fact submitted via batch schedulers to reservation mechanisms that are limited in time or susceptible to preemption (best-effort mode).

One of the priorities of SPADES is to support platforms at a very large scale; petascale environments are therefore of particular interest. Nevertheless, these next-generation architectures still suffer from a lack of expertise for accurate and relevant use.

One of the SPADES goals is to show how to take advantage of the power of such architectures. Another challenge of SPADES is to provide a software solution for a service discovery system able to cope with a highly dynamic platform. This system will be deployed over volatile nodes and thus must tolerate "failures". Implementing such an experimental development requires an interface with batch submission systems able to make reservations transparently for users, and also to communicate with these batch systems in order to obtain the information required by our schedulers.
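
A minimal sketch of the best-effort mode described above, assuming checkpointing: the job is resubmitted transparently after each preemption and resumes from where it stopped. The `Reservation` stand-in is hypothetical, not a real batch-system API.

```python
import random

class Reservation:
    """Stand-in for a best-effort batch reservation that may be preempted."""
    def run(self, work_left: float) -> float:
        granted = random.uniform(0.0, 1.0)    # compute time before preemption
        return max(0.0, work_left - granted)  # remaining work; 0.0 means done

def run_best_effort(work: float, submit, max_resubmits: int = 100) -> bool:
    """Drive a job to completion over volatile best-effort reservations,
    resubmitting transparently after each preemption and resuming from
    the last checkpoint rather than restarting from scratch."""
    for _ in range(max_resubmits):
        work = submit().run(work)
        if work == 0.0:
            return True
    return False

# Usage: a job needing 5 units of compute over preemptible slots.
print(run_best_effort(5.0, submit=Reservation))
```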

More information on the SPADES website.