WG – Cyril Seguin: Elasticity in Distributed File Systems.

2018-05-29

Title: Elasticity in Distributed File Systems.

Speaker: Cyril Seguin

Abstract: For several decades, distributed file systems have been increasingly used as a storage solution for distributed infrastructures.
They offer efficient, reliable and easy access to huge amounts of shared data by federating several storage resources and by replicating each data item across these resources.
In parallel, the advent of cloud computing platforms, and especially infrastructure-as-a-service platforms that offer users thousands of resources on demand, makes it possible to acquire inexpensive distributed infrastructures.
The elasticity and pay-per-use characteristics of clouds allow users to dynamically extend or reduce the number of resources they use according to their needs, paying exactly for what they use.
Deploying a distributed file system on a cloud computing platform can thus give users the possibility of adapting the number of resources to the platform activity while taking advantage of a distributed file system’s performance.
However, new challenges arise concerning data availability and the trade-off between the number of resources used and performance.
This talk focuses on solving these issues in a static context, in which the platform activity is known, and in a dynamic one, in which it is not.
We show that new data placement strategies, adapting the number of replicas of each data item to its access frequency, and balancing the request load across the resources in use answer these issues.
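
The minimal Python sketch below (not the speaker’s implementation; names, bounds and the linear scaling rule are assumptions made for illustration) shows the idea of deriving a replica count from access frequency and routing each request to the least-loaded replica.

    # Illustrative sketch only: frequency-driven replication plus least-loaded routing.

    def target_replicas(access_count, total_accesses, min_replicas=1, max_replicas=10):
        """More frequently accessed data gets more replicas, within fixed bounds."""
        if total_accesses == 0:
            return min_replicas
        share = access_count / total_accesses  # popularity of this data item
        wanted = min_replicas + share * (max_replicas - min_replicas)
        return max(min_replicas, min(max_replicas, round(wanted)))

    def route_request(replica_nodes, load):
        """Send a read request to the least-loaded node holding a replica."""
        return min(replica_nodes, key=lambda node: load[node])

    # A hot item receiving 60% of the traffic gets several replicas,
    # while a cold item keeps the minimum.
    print(target_replicas(600, 1000))  # -> 6
    print(target_replicas(5, 1000))    # -> 1
    print(route_request(["n1", "n2"], {"n1": 0.8, "n2": 0.2}))  # -> n2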

WG – Hadrien Croubois: A Cloud-aware autonomous workflow engine and its application to Gene Regulatory Networks inference.

2018-03-13

Title: A Cloud-aware autonomous workflow engine and its application to Gene Regulatory Networks inference.

Speaker: Hadrien Croubois

Abstract: With the recent development of commercial Cloud offers, Cloud solutions are today the obvious choice for many computing use-cases. However, high-performance scientific computing is still among the few domains where the Cloud raises more issues than it solves. Notably, combining the workflow representation of complex scientific applications with the dynamic allocation of resources in a Cloud environment is still a major challenge. In the meantime, users with monolithic applications face challenges when trying to move from classical HPC hardware to elastic platforms. In this work, we present the structure of an autonomous workflow manager dedicated to IaaS-based Clouds (Infrastructure as a Service) with DaaS storage services (Data as a Service). The proposed solution fully handles the execution of multiple workflows on a dynamically allocated shared platform. As a proof of concept, we validate our solution on a biological application with the WASABI workflow.
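
As a rough illustration of the bookkeeping such an engine performs, the Python sketch below (hypothetical task names and elasticity rule, not the engine presented in the talk) tracks task readiness in a workflow DAG and derives a naive worker count for a dynamically allocated platform.

    # Illustrative sketch only: a workflow is a DAG of tasks; a task becomes ready
    # when all of its dependencies are done, and the naive elasticity rule below
    # provisions one worker per ready task up to a budget. All names are assumptions.

    workflow = {          # task -> tasks it depends on
        "download": [],
        "preprocess": ["download"],
        "infer_A": ["preprocess"],
        "infer_B": ["preprocess"],
        "merge": ["infer_A", "infer_B"],
    }

    def ready_tasks(workflow, done):
        """Tasks whose dependencies are all completed and that are not done yet."""
        return [t for t, deps in workflow.items()
                if t not in done and all(d in done for d in deps)]

    def workers_needed(n_ready, max_workers=8):
        """Naive rule: one IaaS worker per ready task, capped by a budget."""
        return min(n_ready, max_workers)

    done = set()
    while len(done) < len(workflow):
        batch = ready_tasks(workflow, done)
        print(f"running {batch} on {workers_needed(len(batch))} worker(s)")
        done.update(batch)  # pretend the whole batch finished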

WG – Alba Cristina Magalhaes Alves de Melo: Parallel Sequence Alignment of Whole Chromosomes with Hundreds of GPUs and Pruning

2018-02-28

Title: Parallel Sequence Alignment of Whole Chromosomes with Hundreds of GPUs and Pruning

Speaker: Alba Cristina Magalhaes Alves de Melo

Abstract: Biological Sequence Alignment is a very basic operation in Bioinformatics, used routinely worldwide. Smith-Waterman is the exact algorithm used to compare two sequences, obtaining the optimal alignment in quadratic time and space. In order to accelerate Smith-Waterman, many GPU-based strategies have been proposed in the literature. However, aligning DNA sequences of millions of characters, or Millions of Base Pairs (MBP), is still a very challenging task. In this talk, we discuss related work in the area of parallel biological sequence alignment and present our multi-GPU strategy to align DNA sequences with up to 249 million characters on 384 GPUs. In order to achieve this, we propose an innovative speculation technique, which is able to parallelize a phase of the Smith-Waterman algorithm that is inherently sequential. We combined our speculation technique with sophisticated buffer management and fine-grain linear-space matrix processing strategies to obtain our parallel algorithm. As far as we know, this is the first implementation of Smith-Waterman able to retrieve the optimal alignment between sequences with more than 50 million characters. We will also present a pruning technique for one GPU that is able to prune more than 50% of the Smith-Waterman matrix and still retrieve the optimal alignment. We will show the results obtained on the Keeneland cluster (USA), where we compared all the human x chimpanzee homologous chromosomes (ranging from 26 MBP to 249 MBP). The human x chimpanzee chromosome 5 comparison (180 MBP x 183 MBP) attained 10.35 TCUPS (Trillions of Cells Updated per Second) using 384 GPUs. In this case, we processed 45 petacells, producing the optimal alignment in 53 minutes and 7 seconds, with a speculation hit ratio of 98.2%.
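
For reference, the quadratic-time Smith-Waterman recurrence that the talk starts from can be sketched in a few lines of Python; the multi-GPU, speculative, pruned and linear-space strategies presented in the talk go far beyond this toy serial version, and the scoring parameters below are arbitrary.

    # Serial Smith-Waterman local alignment score (quadratic time and space).
    # Reference sketch only; scoring parameters are arbitrary.

    def smith_waterman(a, b, match=1, mismatch=-1, gap=-2):
        """Return the optimal local alignment score between sequences a and b."""
        rows, cols = len(a) + 1, len(b) + 1
        H = [[0] * cols for _ in range(rows)]
        best = 0
        for i in range(1, rows):
            for j in range(1, cols):
                diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
                best = max(best, H[i][j])
        return best

    print(smith_waterman("GATTACA", "GCATGCU"))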

Speaker Bio: Alba Cristina Magalhaes Alves de Melo obtained her PhD degree in Computer Science from the Institut National Polytechnique de Grenoble (INPG), France, in 1996. In 2008, she did a postdoc at the University of Ottawa, Canada; in 2011, she was invited as a Guest Scientist at Université Paris-Sud, France; and in 2013 she did a sabbatical at the Universitat Politècnica de Catalunya, Spain. Since 1997, she has worked in the Department of Computer Science at the University of Brasilia (UnB), Brazil, where she is now a Full Professor. She is also a CNPq Research Fellow level 1D in Brazil. She was the Coordinator of the Graduate Program in Informatics at UnB for several years (2000-2002, 2004-2006, 2008, 2010, 2014) and she coordinated international collaboration projects with the Universitat Politècnica de Catalunya, Spain (2012, 2014-2016) and with the University of Ottawa, Canada (2012-2015). In 2016, she received the Brazilian Capes Award for “Advisor of the Best PhD Thesis in Computer Science”. Her research interests are High Performance Computing, Bioinformatics and Cloud Computing. She has advised 2 postdocs, 4 PhD theses and 22 MSc dissertations. Currently, she advises 4 PhD students and 2 MSc students. She is a Senior Member of the IEEE and a Member of the Brazilian Computer Society. She has given invited talks at the Universität Karlsruhe, Germany, Université Paris-Sud, France, Universitat Politècnica de Catalunya, Spain, the University of Ottawa, Canada, and the Universidad de Chile, Chile. She currently has 91 papers listed at DBLP (www.informatik.uni-trier.de/~ley/db/indices/a-tree/m/Melo:Alba_Cristina_Magalhaes_Alves_de.html).

WG – Prof. Rajkumar Buyya: New Frontiers in Cloud Computing for Big Data and Internet-of-Things (IoT) Applications

2018-02-27

Title: New Frontiers in Cloud Computing for Big Data and Internet-of-Things (IoT) Applications

Speaker: Prof. Rajkumar Buyya
Director, Cloud Computing and Distributed Systems (CLOUDS) Lab,
The University of Melbourne, Australia

CEO, Manjrasoft Pvt Ltd, Melbourne, Australia

Abstract: Computing is being transformed to a model consisting of services that are commoditised and delivered in a manner similar to utilities such as water, electricity, gas, and telephony. Several computing paradigms have promised to deliver this utility computing vision. Cloud computing has emerged as one of the buzzwords in the IT industry and turned the vision of “computing utilities” into a reality.  Clouds deliver infrastructure, platform, and software (application) as services, which are made available as subscription-based services in a pay-as-you-go model to consumers. Cloud application platforms need to offer (1) APIs and tools for rapid creation of elastic applications and (2) a runtime system for deployment of applications on geographically distributed computing infrastructure in a seamless manner.
The Internet of Things (IoT) paradigm enables seamless integration of the cyber and physical worlds and opens up opportunities for creating a new class of applications for domains such as smart cities. The emerging Fog computing paradigm extends Cloud computing to edge resources for latency-sensitive IoT applications.
This keynote presentation will cover (a) the 21st century vision of computing and the various IT paradigms promising to deliver the vision of computing utilities; (b) opportunities and challenges for utility and market-oriented Cloud computing; (c) an innovative architecture for creating market-oriented and elastic Clouds by harnessing virtualisation technologies; (d) Aneka, a Cloud Application Platform, for rapid development of Cloud/Big Data applications and their deployment on private/public Clouds with resource provisioning driven by SLAs; (e) experimental results on deploying Cloud and Big Data/Internet-of-Things (IoT) applications in engineering, health care, satellite image processing, and smart cities on elastic Clouds; and (f) directions for delivering our 21st century vision along with pathways for future research in Cloud and Fog computing.
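
As a purely hypothetical illustration of point (d), SLA-driven provisioning can be as simple as sizing a VM pool from a deadline; the Python sketch below is not Aneka’s API, just a back-of-the-envelope heuristic with assumed names and parameters.

    # Hypothetical deadline-driven provisioning heuristic (not Aneka's API):
    # choose the smallest number of equally fast VMs that can finish the
    # remaining tasks before the SLA deadline.
    import math

    def vms_for_deadline(remaining_tasks, seconds_per_task, seconds_to_deadline, max_vms=100):
        """Smallest VM count that meets the deadline, capped by a budget."""
        if seconds_to_deadline <= 0:
            return max_vms  # deadline already missed: scale out as far as allowed
        needed = math.ceil(remaining_tasks * seconds_per_task / seconds_to_deadline)
        return max(1, min(needed, max_vms))

    # Example: 500 tasks of about 30 s each with one hour left -> 5 VMs.
    print(vms_for_deadline(500, 30, 3600))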

Speaker Bio: Dr. Rajkumar Buyya is a Redmond Barry Distinguished Professor and Director of the Cloud Computing and Distributed Systems (CLOUDS) Laboratory at the University of Melbourne, Australia. He is also serving as the founding CEO of Manjrasoft, a spin-off company of the University, commercializing its innovations in Cloud Computing. He served as a Future Fellow of the Australian Research Council during 2012-2016. He has authored over 625 publications and seven textbooks, including “Mastering Cloud Computing”, published by McGraw Hill, China Machine Press, and Morgan Kaufmann for the Indian, Chinese and international markets respectively. He also edited several books, including “Cloud Computing: Principles and Paradigms” (Wiley Press, USA, Feb 2011). He is one of the most highly cited authors in computer science and software engineering worldwide (h-index=114, g-index=245, 67,600+ citations). Dr. Buyya was recognized as a “Web of Science Highly Cited Researcher” in 2016 and 2017 by Thomson Reuters, is a Fellow of the IEEE, and received the Scopus Researcher of the Year 2017 award, with the Excellence in Innovative Research Award from Elsevier, for his outstanding contributions to Cloud computing.
Software technologies for Grid and Cloud computing developed under Dr. Buyya’s leadership have gained rapid acceptance and are in use at several academic institutions and commercial enterprises in 40 countries around the world. Dr. Buyya has led the establishment and development of key community activities, including serving as foundation Chair of the IEEE Technical Committee on Scalable Computing and of five IEEE/ACM conferences. These contributions and Dr. Buyya’s international research leadership were recognized through the award of the “2009 IEEE Medal for Excellence in Scalable Computing” from the IEEE Computer Society TCSC.
Manjrasoft’s Aneka Cloud technology, developed under his leadership, received the “2010 Frost & Sullivan New Product Innovation Award”. Recently, Dr. Buyya received the “Mahatma Gandhi Award” along with Gold Medals for his outstanding and extraordinary achievements in the Information Technology field and services rendered to promote greater friendship and India-International cooperation. He served as the founding Editor-in-Chief of the IEEE Transactions on Cloud Computing. He is currently serving as Co-Editor-in-Chief of the Journal of Software: Practice and Experience, which was established over 45 years ago. For further information on Dr. Buyya, please visit his cyber home: www.buyya.com

WG – Laércio LIMA PILLA: Current Efforts in Global Scheduling and Fault Tolerance for HPC Systems

2018-01-23

Title: Current Efforts in Global Scheduling and Fault Tolerance for HPC Systems

Speaker: Laércio LIMA PILLA

Abstract: Performance, energy efficiency, and reliability are important objectives and challenges for current and future computing systems. In this context, our approach has been based on understanding the details of the computing system architecture and the behavior of applications, in order to combine this information, identify issues and propose new solutions. In this presentation, I will discuss our experience with the development of new architecture-aware global scheduling algorithms for multiprocessor and multicomputer systems, and with fault tolerance mechanisms for radiation-induced errors in parallel accelerators. I will also present some future global scheduling plans to handle the inclusion of non-volatile random-access memories (NVRAMs) in computing systems.
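
To make the notion of a global scheduling decision concrete, the Python sketch below shows a generic greedy mapping of tasks to cores; it is illustrative only, whereas the architecture-aware algorithms discussed in the talk additionally weigh topology and communication costs.

    # Generic greedy load balancing: map tasks, heaviest first, to the currently
    # least-loaded core. Purely illustrative; not the algorithms from the talk.

    def greedy_map(task_loads, n_cores):
        """Return a task-to-core placement and the resulting per-core loads."""
        core_load = [0.0] * n_cores
        placement = {}
        for task, load in sorted(task_loads.items(), key=lambda kv: -kv[1]):
            core = min(range(n_cores), key=lambda c: core_load[c])
            placement[task] = core
            core_load[core] += load
        return placement, core_load

    placement, loads = greedy_map({"t1": 5.0, "t2": 3.0, "t3": 3.0, "t4": 1.0}, 2)
    print(placement)  # heaviest tasks are spread across the two cores
    print(loads)      # per-core load after mapping: [6.0, 6.0]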

WG – Victor Allombert: Programming Multi-BSP Algorithms in ML

2018-01-15

Title: Programming Multi-BSP Algorithms in ML

Speaker: Victor Allombert

Abstract: From personal computers with an increasing number of cores to supercomputers with millions of computing units, parallel architectures are the current standard. High-performance architectures are usually referred to as hierarchical, as they are composed of clusters of multi-processors of multi-cores. Programming such architectures is notoriously difficult: writing parallel programs is, most of the time, difficult in both the algorithmic and the implementation phases. To address these concerns, many structured models and languages have been proposed in order to increase both expressiveness and efficiency. Among other models, Multi-BSP is a bridging model dedicated to hierarchical architectures that ensures efficiency, execution safety, scalability and cost prediction. It is an extension of the well-known BSP model, which handles flat architectures. We introduce the Multi-ML language, which allows programming Multi-BSP algorithms “à la ML” and thus guarantees the properties of the Multi-BSP model and execution safety, thanks to an ML type system. To deal with the multi-level execution model of Multi-BSP, we define a formal semantics which describes the valid evaluation of an expression. To ensure the execution safety of Multi-ML programs, we also propose a typing system that preserves replicated coherence. An abstract machine is defined to formally describe the evaluation of a Multi-ML program on a Multi-BSP architecture. An implementation of the language is available as a compilation toolchain. It is thus possible to generate efficient parallel code from a program written in Multi-ML and execute it on any hierarchical machine.
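
For readers unfamiliar with BSP cost modelling, the Python sketch below computes the textbook cost of flat BSP supersteps (w + g·h + L); Multi-BSP applies this kind of accounting recursively at every level of the hierarchy, each level having its own parameters. This is a simplified illustration, not the Multi-ML cost analysis.

    # Textbook flat BSP cost: each superstep costs max local work + g * max
    # h-relation + synchronization latency L. Simplified illustration only.

    def superstep_cost(work_per_proc, msgs_per_proc, g, L):
        """Cost of one BSP superstep on a flat machine with parameters g and L."""
        return max(work_per_proc) + g * max(msgs_per_proc) + L

    def bsp_cost(supersteps, g, L):
        """Total cost of a BSP program given per-superstep (work, messages) vectors."""
        return sum(superstep_cost(w, h, g, L) for w, h in supersteps)

    # Example: two supersteps on 4 processors.
    steps = [([100, 120, 90, 110], [10, 8, 12, 9]),
             ([200, 180, 220, 210], [5, 5, 5, 5])]
    print(bsp_cost(steps, g=2.0, L=50.0))  # -> 474.0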

Inria Project Lab Discovery

Distributed and COoperative management of Virtual Environments autonomousLY

The DISCOVERY initiative aims at exploring a new way of operating Utility Computing (UC) resources.

To accommodate the ever-increasing demand for Utility Computing (UC) resources, while taking into account both energy and economic issues, the current trend consists in building larger and larger data centers in a few strategic locations. Although such an approach enables UC providers to cope with the current demand while continuing to operate UC resources through a centralized software system, it is far from delivering sustainable and efficient UC infrastructures. We claim that a disruptive change in UC infrastructures is required: UC resources should be managed differently, considering locality as a primary concern. To this aim, we propose to leverage any facility available through the Internet in order to deliver widely distributed UC platforms that can better match the geographical dispersal of users as well as the unending demand. Critical to the emergence of such locality-based UC (LUC) platforms is the availability of appropriate operating mechanisms. We advocate the implementation of a unified system driving the use of resources at an unprecedented scale by turning a complex and diverse infrastructure into a collection of abstracted computing facilities that is both easy to operate and reliable.

Start Date: January 2015

Duration: 4 years

Avalon Members: J. Darrous, G. Fedak, C. Perez

More information on Discovery website

Inria Project Lab C2S@Exa

Computer and Computational Sciences at Exascale INRIA Large Scale Initiative

The C2S@Exa INRIA large-scale initiative is concerned with the development of numerical modeling methodologies that fully exploit the processing capabilities of modern massively parallel architectures in the context of a number of selected applications related to important scientific and technological challenges for the quality and the security of life in our society. Avalon is a core-team member, co-leading Pole 4 on Programming models.

Start Date: 2013

Duration: 4 years

Avalon Members: T. Gautier, C. Perez, J. Richard

More information on C2S@Exa website

PIA ELCI

ELCI is a French software project that brings together academic and industrial partners to design and provide a software environment for the next generation of HPC systems. The project is funded by the participating partners and by the French FSN (“Fonds pour la Société Numérique”).

The principal objective of the project is to facilitate the development of a software environment that meets the demands of the new generation of HPC architectures. This will cover the whole software stack (system and programming environments), numerical solvers, and pre-, post- and co-processing software.

A co-design approach is employed that covers the software environment for computer architectures and the requirements of more demanding applications, and that is adapted to future hardware architectures (multicore/many-core processors, high-speed networks and data storage).

These developments will be validated according to their capacity to deal with the new exascale challenges: larger scalability, higher resiliency, greater security, improved modularity, and better abstraction and interactivity for application cases.

Start Date: September 2014

Duration: 3 years

Avalon Members: T. Gautier, L. Lefevre, C. Perez, I. Rais, J. Richard

More information on the ELCI web site.

LEXISTEMS

LEXISTEMS develops Xact.ai, a solution that provides universal access to knowledge in natural language, with no limits on the nature or structure of the data.

For organizations, Xact.ai is the most effective way to monetize data assets, whatever the nature and volume of their knowledge bases.

LEXISTEMS’ solutions streamline the use and analysis of natural language in business and personal applications.
A new era is opening. Users are empowered, and organizations leverage the true value of their data assets.

LEXISTEMS and Avalon collaborate on the design and development of NLP algorithms and high-level data structuring.


Start Date: September 2016

Duration:

Avalon Members: Marcos Assuncao, Eddy Caron and Thomas Pellisier-Tanon

More information on website: LEXISTEMS