vHGW (virtual Home Gateway) project

vHGW (virtual Home Gateway), or how to save energy by running thousands of HGs on one server.

According to current studies, the telecom infrastructure is the major contributor to the ever-increasing energy demand of the ICT sector and accounts for a large part of its carbon footprint. Surprisingly, more than 80% of this share is consumed by Home Gateways (HGs).

Hence, in this preliminary work, we have explored the possibility of relocating some of the functionalities of an HG into a vHGW (virtual Home Gateway) hosted by a node located on NSP premises. In our experiments, it was possible to host up to 1000 vHGWs on a single server machine consuming around 100 W, and the number of vHGWs hosted on the server did not significantly affect its energy consumption. We also confirmed that a vHGW can provide the same network- and application-level services as an HG, such as routing, DHCP, firewalling and NAT.

Consider replacing the current HG with a quasi-passive device (consuming around 1 W), and suppose that end users have triple-play services over a fiber link (FTTH). By pulling the network- and application-level services into vHGWs and using a server machine that can host around 1000 vHGWs (and probably more in the near future), we can reduce the energy consumption of the overall wireline telecom network by roughly a factor of three. The result of our experiment is therefore in line with the recommendations set by the GreenTouch project (http://greentouch.org).
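The per-household arithmetic behind this argument can be sketched as follows. Note that the conventional HG power draw used here is an assumed illustrative figure, not a number from the text; the server power, vHGW density and quasi-passive device draw are those stated above. The network-wide saving quoted above also depends on other equipment, so this sketch only covers the gateway side.

```python
# Back-of-envelope comparison of the two deployment models described above.
HG_WATTS = 10.0          # assumed draw of a conventional HG (illustrative, not from the text)
SERVER_WATTS = 100.0     # one server hosting the vHGWs (from the text)
VHGW_PER_SERVER = 1000   # vHGWs hosted per server (from the text)
PASSIVE_WATTS = 1.0      # quasi-passive device replacing each HG (from the text)

def per_household_watts():
    """Power per household: its passive device plus its share of the server."""
    return PASSIVE_WATTS + SERVER_WATTS / VHGW_PER_SERVER

def saving_factor():
    """How many times less gateway-side energy the vHGW model uses per household."""
    return HG_WATTS / per_household_watts()

print(f"{per_household_watts():.2f} W vs {HG_WATTS:.1f} W "
      f"-> {saving_factor():.1f}x less")  # 1.10 W vs 10.0 W -> 9.1x less
```

Under these assumptions, each household's gateway footprint drops from the assumed 10 W to 1.1 W; the factor obviously scales with the real HG draw.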

Hence, this study shows the benefit of relocating HG services: it significantly reduces the overall energy consumption of a wireline network and minimizes the sector's impact on the environment.

For more information about this research work, please visit the vHGW Web page.

Inria Project Lab C2S@Exa

Computer and Computational Sciences at Exascale INRIA Large Scale Initiative

The C2S@Exa INRIA large-scale initiative is concerned with the development of numerical modeling methodologies that fully exploit the processing capabilities of modern massively parallel architectures in the context of a number of selected applications related to important scientific and technological challenges for the quality and the security of life in our society. Avalon is a core-team member, co-leading Pole 4 on Programming models.

Start Date: 2013

Duration: 4 years

Avalon Members: T. Gautier, C. Perez, J. Richard

More information on the C2S@Exa website


LEXISTEMS develops Xact.ai, a solution providing universal access to knowledge in natural language, with no limit on the data or its structuring.

For organizations, Xact.ai is the most effective way to monetize data assets, whatever the nature and volume of the knowledge bases.

LEXISTEMS’ solutions streamline the use and analysis of natural language in business and personal applications.
A new era is opening. Users are empowered, and organizations leverage the true value of their data assets.

LEXISTEMS and Avalon collaborate on the design and development of NLP algorithms and high-level data structuration.


Start Date: September 2016


Avalon Members: Marcos Assuncao, Eddy Caron and Thomas Pellisier-Tanon

More information on the LEXISTEMS website

EU FP7 PaaSage

PaaSage delivers an open and integrated platform to support model based lifecycle management of Cloud applications.

The platform and the accompanying methodology allow model-based development, configuration, optimisation, and deployment of existing and new applications independently of the existing Cloud infrastructures.


Start Date:


Avalon Members:

More on the PaaSage website.

Celtic+ Seed4C

Seed4C Secured Embedded Element for Cloud


From security in the cloud to security of the cloud. The value of secure elements for protecting software execution on a personal computer or a server is well established. Nowadays, the emergence of cloud computing has led to a growing number of use-case scenarios where one has to deal not with a single computer but with a group of connected computers. In this case the challenge is not only to secure the software running on one single machine, but to manage and guarantee the security of a group of computers seen as a single entity. The main idea is to evolve from security in the cloud (isolated points of enforcement for security, the state of the art) to security of the cloud (cooperative points of enforcement for security, the innovation proposed by this project).

This cooperative enforcement of security is proposed under the concept of a Network of Secure Elements (NoSEs). A NoSE is made of individual secure elements attached to computers, users or network appliances, possibly pre-provisioned with initial secret keys. These elements can establish security associations, communicate to set up a trusted network of computers, and propagate centrally defined security conditions to a group of machines. The range of use cases addressed by this concept is very broad: NoSEs can be used to lock the execution of software to a group of specific machines, one particular application being tying virtual-machine execution to specific servers. NoSEs can also improve the security of distributed computing, not only by making sure that only trusted nodes can take part in the computation, but also by certifying the integrity of the results returned by each of them. Secure elements located in user appliances featuring a user interface (such as a mobile handset) can be part of a NoSE and help secure server-side operations using two-factor authentication.
The project will study the impact of NoSEs upon the different layers of the architecture, from hardware to service, in order to define how trust can be propagated from the lower layers to the upper ones. At the lower level, the form factor and physical interfaces of secure elements to the host will be studied, as well as the management of their life cycle. At an upper level, the definition and implementation of security, access-control and privacy policies involving the secure elements will be specified, as well as the middleware solutions interfacing to the corresponding functional blocks. Finally, an important part of the project will focus on specific use cases, including those mentioned above, where the use of NoSEs can provide interesting solutions. One particular aspect will address privacy and identity management.
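One of the NoSE ideas above, certifying the integrity of the results returned by each compute node, can be sketched as a simple MAC exchange. This is only an illustrative toy: the key table, message layout and function names are assumptions, not Seed4C interfaces, and HMAC over a software key stands in for what a real secure element would do in hardware.

```python
import hmac
import hashlib

# Each node's secure element holds a pre-provisioned secret; it tags ("certifies")
# the result it returns, and the coordinator checks both origin and integrity.
NODE_KEYS = {"node-a": b"pre-provisioned-secret-a"}  # illustrative key store

def certify(node_id: str, result: bytes) -> bytes:
    """Secure-element side: tag the result with the node's secret key."""
    return hmac.new(NODE_KEYS[node_id], result, hashlib.sha256).digest()

def verify(node_id: str, result: bytes, tag: bytes) -> bool:
    """Coordinator side: accept the result only if the tag checks out."""
    expected = hmac.new(NODE_KEYS[node_id], result, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

tag = certify("node-a", b"42")
assert verify("node-a", b"42", tag)       # genuine result accepted
assert not verify("node-a", b"43", tag)   # tampered result rejected
```

In the project's setting, the secret never leaves the secure element, so only machines whose elements were provisioned into the NoSE can produce acceptable tags.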

More on the SEED4C web site.


The last decade has brought tremendous changes to the characteristics of large-scale distributed computing platforms. Large grids processing terabytes of information a day and peer-to-peer technology have become common, even though understanding how to efficiently exploit such platforms still raises many challenges. As demonstrated by the USS SimGrid project funded by the ANR in 2008, simulation has proved to be a very effective approach for studying such platforms. Although even more challenging, we think the issues raised by petaflop/exaflop computers and emerging cloud infrastructures can be addressed with a similar simulation methodology.

The goal of the SONGS project is to extend the applicability of the SimGrid simulation framework from Grids and Peer-to-Peer systems to Clouds and High Performance Computing systems. Each type of large-scale computing system will be addressed through a set of use cases led by researchers recognized as experts in this area.

Any sound study of such systems through simulation relies on four pillars of simulation methodology: an efficient simulation kernel; sound and validated models; simulation analysis tools; and simulation campaign management.
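The first pillar, the simulation kernel, can be illustrated with a minimal discrete-event engine. This is a generic sketch of the technique, not the SimGrid API: events sit in a priority queue ordered by timestamp, and firing one event may schedule further events in the simulated future.

```python
import heapq

class Kernel:
    """Minimal discrete-event simulation kernel (illustrative sketch)."""

    def __init__(self):
        self.now = 0.0    # current simulated time
        self._seq = 0     # tie-breaker so same-time events fire in FIFO order
        self._queue = []  # heap of (time, seq, action)

    def schedule(self, delay, action):
        """Schedule action(kernel) to fire `delay` simulated seconds from now."""
        self._seq += 1
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))

    def run(self):
        """Fire all events in timestamp order, advancing simulated time."""
        while self._queue:
            self.now, _, action = heapq.heappop(self._queue)
            action(self)

log = []
k = Kernel()
k.schedule(2.0, lambda k: log.append(("recv", k.now)))
k.schedule(1.0, lambda k: (log.append(("send", k.now)),
                           k.schedule(3.0, lambda k: log.append(("ack", k.now)))))
k.run()
# log == [("send", 1.0), ("recv", 2.0), ("ack", 4.0)]
```

A production kernel adds the other pillars on top of this loop: validated platform models decide the delays, and analysis tooling consumes the event trace.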

For more information, please visit the project website.


XLcloud: Design, Develop and Integrate the Software Elements of a High Performance Cloud Computing (HPCC) System. The XLcloud collaborative project is developing a cloud management platform that addresses the specific requirements of high performance cloud computing applications and users. XLcloud is designed for computationally intensive workloads in collaborative applications with interactive remote visualization capabilities.


XLcloud aims to define and demonstrate the principles of HPC (High Performance Computing) as a Service for all those applications that involve highly intensive calculations. XLcloud is designed as a collaborative tool that enables users to work together on highly sophisticated software in the Cloud, thus sidestepping the need for individuals to purchase expensive software on their own. XLcloud combines the expertise of companies and academics that are innovative in the fields of high performance computer architectures, HD/3D flow visualization and video.

More information on the XLCLOUD web page.

EU FP7 PRACE-2IP – Second Implementation project of Partnership for Advanced Computing in Europe

The purpose of the PRACE RI is to provide a sustainable high-quality infrastructure for Europe that can meet the most demanding needs of European HPC user communities through the provision of user access to the most powerful HPC systems available worldwide at any given time. In tandem with access to Tier-0 systems, the PRACE-2IP project will foster the coordination between national HPC resources (Tier-1 systems) to best meet the needs of the European HPC user community. To ensure that European scientific and engineering communities have access to leading-edge supercomputers in the future, the PRACE-2IP project evaluates novel architectures, technologies, systems, and software. Optimizing and scaling of applications for Tier-0 and Tier-1 systems is a core service of PRACE.

Within this project (September 2011 – August 2014), Avalon participated in particular in the work package on novel programming techniques, which aimed to perform research and development on auto-tuned runtime environments for future multi-petascale and exascale systems.

Start date: September 2011

Duration: 3 years

More information on the PRACE-2IP website.

ANR MapReduce

This project is devoted to using the MapReduce programming paradigm on clouds and hybrid infrastructures. Partners: Argonne National Lab (USA), the University of Illinois at Urbana-Champaign (USA), the UIUC-INRIA Joint Lab on Petascale Computing, IBM France, IBCP, MEDIT (SME) and the GRAAL/AVALON INRIA project-team.


This project aims to overcome the limitations of current Map-Reduce frameworks such as Hadoop, thereby enabling highly scalable Map-Reduce-based data processing on various physical platforms such as clouds, desktop grids, or hybrid infrastructures combining these two. To meet this global goal, several critical aspects will be investigated.

Data storage and sharing architecture. First, we will explore advanced techniques for scalable, high-throughput, concurrency-optimized data and metadata management, based on recent preliminary contributions of the partners.

Scheduling. Second, we will investigate various scheduling issues related to large executions of Map-Reduce instances. In particular, we will study how the scheduler of the Hadoop implementation of Map-Reduce can scale over heterogeneous platforms; other issues include dynamic data replication and fair scheduling of multiple parallel jobs.

Fault tolerance and security. Finally, we intend to explore techniques to improve the execution of Map-Reduce applications on large-scale infrastructures with respect to fault tolerance and security.

Our global goal is to explore how combining these techniques can improve the behavior of Map-Reduce-based applications on the target large-scale infrastructures. To this purpose, we will rely on recent preliminary contributions of the partners associated in this project, illustrated through the following main building blocks. BlobSeer: a new approach to distributed data management designed by the KerData team from INRIA Rennes – Bretagne Atlantique to enable scalable, efficient, fine-grain access to massive distributed data under heavy concurrency. BitDew: a data-sharing platform currently being designed by the GRAAL team from INRIA Grenoble – Rhône-Alpes at ENS Lyon, with the goal of exploring the specificities of desktop grid infrastructures. Nimbus: a reference open-source cloud management toolkit developed at the University of Chicago and Argonne National Laboratory (USA) with the goal of facilitating the operation of clusters as Infrastructure-as-a-Service (IaaS) clouds.
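For readers unfamiliar with the paradigm the project targets, here is a minimal in-process sketch of Map-Reduce (the classic word count). Frameworks like Hadoop distribute the map, shuffle and reduce phases across many machines; here the three phases run sequentially in one process for clarity.

```python
from itertools import groupby
from operator import itemgetter

def map_phase(documents):
    """Map: emit an intermediate (key, value) pair per word occurrence."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    """Shuffle (group pairs by key), then reduce each group to one value."""
    shuffled = sorted(pairs, key=itemgetter(0))
    return {word: sum(count for _, count in group)
            for word, group in groupby(shuffled, key=itemgetter(0))}

counts = reduce_phase(map_phase(["a b a", "b a"]))
# counts == {"a": 3, "b": 2}
```

The project's research questions live precisely in what this sketch hides: where the intermediate pairs are stored (BlobSeer, BitDew), how map and reduce tasks are scheduled across heterogeneous nodes, and what happens when a node fails or misbehaves.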

More information on the MapReduce web site.

INRIA Project Lab Héméra

A French project around the Grid’5000 testbed.


Héméra was an INRIA Large Scale Initiative (2010-2014) that aimed to demonstrate ambitious up-scaling techniques for large-scale distributed computing by carrying out several dimensioning experiments on the Grid’5000 infrastructure, to animate the scientific community around Grid’5000, and to enlarge the Grid’5000 community by helping newcomers make use of Grid’5000.

More information on the Héméra website