WG – Alba Cristina Magalhaes Alves de Melo: Parallel Sequence Alignment of Whole Chromosomes with Hundreds of GPUs and Pruning

2018-02-28

Title: Parallel Sequence Alignment of Whole Chromosomes with Hundreds of
GPUs and Pruning

Speaker: Alba Cristina Magalhaes Alves de Melo

Abstract: Biological sequence alignment is a very basic operation in Bioinformatics, used routinely worldwide. Smith-Waterman is the exact algorithm used to compare two sequences, obtaining the optimal alignment in quadratic time and space. Many GPU-based strategies have been proposed in the literature to accelerate Smith-Waterman. However, aligning DNA sequences of millions of characters, or base pairs (MBP), is still a very challenging task. In this talk, we discuss related work in the area of parallel biological sequence alignment and present our multi-GPU strategy to align DNA sequences of up to 249 million characters on 384 GPUs. To achieve this, we propose an innovative speculation technique that parallelizes a phase of the Smith-Waterman algorithm which is inherently sequential. We combined this speculation technique with sophisticated buffer management and fine-grain linear-space matrix processing strategies to obtain our parallel algorithm. As far as we know, this is the first implementation of Smith-Waterman able to retrieve the optimal alignment between sequences of more than 50 million characters. We will also present a pruning technique for a single GPU that is able to prune more than 50% of the Smith-Waterman matrix and still retrieve the optimal alignment. We will show the results obtained on the Keeneland cluster (USA), where we compared all the human x chimpanzee homologous chromosomes (ranging from 26 MBP to 249 MBP). The human x chimpanzee chromosome 5 comparison (180 MBP x 183 MBP) attained 10.35 TCUPS (Trillions of Cells Updated per Second) using 384 GPUs. In this case, we processed 45 petacells and produced the optimal alignment in 53 minutes and 7 seconds, with a speculation hit ratio of 98.2%.
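
For readers unfamiliar with the recurrence being accelerated, below is a minimal serial sketch of Smith-Waterman local-alignment scoring. It is only illustrative: the scoring parameters are made up, and the full quadratic-space matrix is kept, unlike the speculation, pruning and linear-space strategies discussed in the talk.

    # Minimal serial Smith-Waterman scoring sketch (illustrative parameters,
    # not the talk's implementation): fills the full quadratic matrix and
    # returns the best local-alignment score and the cell where it occurs.
    def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
        rows, cols = len(a) + 1, len(b) + 1
        H = [[0] * cols for _ in range(rows)]
        best, best_cell = 0, (0, 0)
        for i in range(1, rows):
            for j in range(1, cols):
                diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                up = H[i - 1][j] + gap
                left = H[i][j - 1] + gap
                H[i][j] = max(0, diag, up, left)       # local alignment: never below 0
                if H[i][j] > best:
                    best, best_cell = H[i][j], (i, j)
        return best, best_cell

    # Example: smith_waterman("GATTACA", "GCATGCA") returns the best local score.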

Speaker Bio: Alba Cristina Magalhaes Alves de Melo obtained her PhD degree in Computer Science from the Institut National Polytechnique de Grenoble (INPG), France, in 1996. In 2008, she did a postdoc at the University of Ottawa, Canada; in 2011, she was invited as a Guest Scientist at Université Paris-Sud, France; and in 2013 she did a sabbatical at the Universitat Politècnica de Catalunya, Spain. Since 1997, she has worked at the Department of Computer Science at the University of Brasilia (UnB), Brazil, where she is now a Full Professor. She is also a CNPq Research Fellow level 1D in Brazil. She was the Coordinator of the Graduate Program in Informatics at UnB for several years (2000-2002, 2004-2006, 2008, 2010, 2014), and she coordinated international collaboration projects with the Universitat Politècnica de Catalunya, Spain (2012, 2014-2016) and with the University of Ottawa, Canada (2012-2015). In 2016, she received the Brazilian Capes Award for “Advisor of the Best PhD Thesis in Computer Science”. Her research interests are High Performance Computing, Bioinformatics and Cloud Computing. She has advised 2 postdocs, 4 PhD theses and 22 MSc dissertations. Currently, she advises 4 PhD students and 2 MSc students. She is a Senior Member of the IEEE and a Member of the Brazilian Computer Society. She has given invited talks at the Universität Karlsruhe, Germany, Université Paris-Sud, France, the Universitat Politècnica de Catalunya, Spain, the University of Ottawa, Canada, and the Universidad de Chile, Chile. She currently has 91 papers listed at DBLP (www.informatik.uni-trier.de/~ley/db/indices/a-tree/m/Melo:Alba_Cristina_Magalhaes_Alves_de.html).

DIET goes fishing at Rutgers University

The DIET workflow engine plays with the Fish Detection application

In the context of SUSTAM (an associated joint team between Avalon and the RDI2 Lab at Rutgers University), Daniel Balouek-Thomert (RDI2), Eddy Caron (Avalon), Hadrien Croubois (Avalon) and Alireza Zamani (RDI2) worked on the deployment of an application from the Ocean Observatories Initiative (OOI) project on Grid’5000 using the DIET middleware.

Moreover, on Friday, December 1st, Eddy and Hadrien gave a talk about DIET and elastic workflows.

Abstract: Cloud platforms have emerged as a leading solution for giving everybody access to computational resources. However, high-performance scientific computing is still among the few domains where the Cloud raises more questions than it solves. While it is today possible to use existing approaches to deploy scientific workflows on virtualized platforms, these approaches do not benefit from many of the new features offered by Cloud providers. Among these features is the possibility of having an elastic platform that reacts to the needs of the users, therefore offering an improved Quality of Service (QoS) while reducing the deployment cost. In this talk, we will present the DIET toolbox, a middleware designed for high-performance computing in a heterogeneous and distributed environment, and our recent work on elastic workflows.
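
As a toy illustration of the elasticity idea mentioned in the abstract (this is a hypothetical sketch, not DIET's API; the function name and parameters are made up), a platform can periodically compare the pending workload against the currently provisioned workers and decide whether to grow or shrink the deployment:

    # Hypothetical auto-scaling sketch (not DIET's API): decide how many
    # workers to add or release so pending tasks can be absorbed within a
    # target delay, trading QoS against deployment cost.
    def scaling_decision(pending_tasks, workers, tasks_per_worker_per_min, target_minutes):
        # Workers needed to drain the queue within the target delay (ceiling division).
        needed = -(-pending_tasks // (tasks_per_worker_per_min * target_minutes))
        if needed > workers:
            return f"scale up: start {needed - workers} worker(s)"
        if needed < workers:
            return f"scale down: release {workers - needed} worker(s)"
        return "keep current deployment"

    print(scaling_decision(pending_tasks=120, workers=2,
                           tasks_per_worker_per_min=5, target_minutes=10))
    # -> "scale up: start 1 worker(s)"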


Energy Efficient Traffic Engineering in Software Defined Networks [Radu Carpa, PhD thesis]

PhD defense of Radu Carpa

Thursday, October 26. Amphi B. 2:00 pm

Abstract

This work aims to improve the energy efficiency of networks by switching off a subset of links through an SDN approach. We distinguish ourselves from the many works in this field by an increased reactivity to variations in network conditions. This was made possible by a reduced computational complexity and by particular attention to the overhead induced by data exchanges.
The software architecture we propose, “SegmenT Routing based Energy Efficient Traffic Engineering” (STREETE), relies on dynamic re-routing of traffic according to the network load. Thanks to load-balancing methods, we obtain a near-optimal placement of flows in the network.
STREETE was validated on a real SDN platform. This allowed us to point out improvements to be taken into account in order to avoid instabilities caused by uncontrolled switching of network flows between alternative paths.
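
The link switch-off idea can be illustrated by a toy greedy sketch (this is not the STREETE algorithm itself, which relies on segment routing and load balancing; the link names, loads and demands below are made up, and capacity constraints are omitted): remove the least-loaded links one by one as long as every demand remains routable on the remaining topology.

    # Illustrative greedy sketch (not STREETE): switch off the least-loaded
    # links while keeping every (source, destination) demand connected.
    from collections import deque

    def reachable(links, src, dst):
        # Breadth-first search over the set of active undirected links.
        adj = {}
        for u, v in links:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
        seen, queue = {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                return True
            for nxt in adj.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    def switch_off_links(links, load, demands):
        # Consider links in increasing order of load; keep a link whenever
        # removing it would disconnect some demand.
        active = set(links)
        for link in sorted(links, key=lambda l: load[l]):
            candidate = active - {link}
            if all(reachable(candidate, s, d) for s, d in demands):
                active = candidate
        return active

    links = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")]
    load = {("a", "b"): 0.6, ("b", "c"): 0.5, ("c", "d"): 0.4,
            ("d", "a"): 0.7, ("a", "c"): 0.1}
    demands = [("a", "c"), ("b", "d")]
    print(sorted(switch_off_links(links, load, demands)))
    # -> [('a', 'b'), ('b', 'c'), ('d', 'a')]  (two links switched off)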

Jury members

  • Andrzej DUDA, Professor – Grenoble INP-Ensimag (Examiner)
  • Frédéric GIROIRE, Research Scientist – CNRS Sophia Antipolis (Examiner)
  • Brigitte JAUMARD, Professor – Concordia University, Canada (Reviewer)
  • Béatrice PAILLASSA, Professor – Institut national polytechnique de Toulouse (Reviewer)
  • Laurent LEFEVRE, Research Scientist – Inria, ENS Lyon (Advisor)
  • Olivier GLUCK, Associate Professor – UCBL Lyon 1 (Co-advisor)

ANR MapReduce

This project is devoted to using the MapReduce programming paradigm on clouds and hybrid infrastructures. Partners: Argonne National Lab (USA), the University of Illinois at Urbana-Champaign (USA), the UIUC-INRIA Joint Lab on Petascale Computing, IBM France, IBCP, MEDIT (SME) and the GRAAL/AVALON INRIA project-team.

This project aims to overcome the limitations of current Map-Reduce frameworks such as Hadoop, thereby enabling highly scalable Map-Reduce-based data processing on various physical platforms such as clouds, desktop grids, or hybrid infrastructures built by combining these two types of infrastructures. To meet this global goal, several critical aspects will be investigated:

  • Data storage and sharing architecture. First, we will explore advanced techniques for scalable, high-throughput, concurrency-optimized data and metadata management, based on recent preliminary contributions of the partners.
  • Scheduling. Second, we will investigate various scheduling issues related to large executions of Map-Reduce instances. In particular, we will study how the scheduler of the Hadoop implementation of Map-Reduce can scale over heterogeneous platforms; other issues include dynamic data replication and fair scheduling of multiple parallel jobs.
  • Fault tolerance and security. Finally, we intend to explore techniques to improve the execution of Map-Reduce applications on large-scale infrastructures with respect to fault tolerance and security.
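
As a reminder of the programming paradigm the project targets, here is a minimal in-memory word-count sketch of Map-Reduce in plain Python (not tied to Hadoop or to any of the project's frameworks): map emits (key, value) pairs, a shuffle groups them by key, and reduce aggregates each group.

    # Minimal in-memory Map-Reduce sketch (word count), independent of
    # Hadoop or the project's frameworks.
    from collections import defaultdict

    def map_phase(document):
        # Emit one (word, 1) pair per occurrence.
        for word in document.split():
            yield word.lower(), 1

    def shuffle(pairs):
        # Group all emitted values by key.
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(groups):
        # Aggregate each group into a final count.
        return {key: sum(values) for key, values in groups.items()}

    documents = ["map reduce on clouds", "map reduce on desktop grids"]
    pairs = [pair for doc in documents for pair in map_phase(doc)]
    print(reduce_phase(shuffle(pairs)))   # {'map': 2, 'reduce': 2, 'on': 2, ...}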

Our global goal is to explore how combining these techniques can improve the behavior of Map-Reduce-based applications on the target large-scale infrastructures. To this purpose, we will rely on recent preliminary contributions of the partners associated in this project, illustrated through the following main building blocks:

  • BlobSeer, a new approach to distributed data management being designed by the KerData team from INRIA Rennes – Bretagne Atlantique to enable scalable, efficient, fine-grain access to massive, distributed data under heavy concurrency.
  • BitDew, a data-sharing platform currently being designed by the GRAAL team from INRIA Grenoble – Rhône-Alpes at ENS Lyon, with the goal of exploring the specificities of desktop grid infrastructures.
  • Nimbus, a reference open source cloud management toolkit developed at the University of Chicago and Argonne National Laboratory (USA) with the goal of facilitating the operation of clusters as Infrastructure-as-a-Service (IaaS) clouds.

More information is available on the MapReduce web site.

ANR COOP

Multi-level Cooperative Resource Management

The problem addressed by the COOP project (Dec. 2009 to May 2013) was to reconcile two layers, Programming Model Frameworks (PMF) and Resource Management Systems (RMS), with respect to a number of tasks that they both try to handle independently. A PMF needs knowledge of the resources to select the most efficient transformation of abstract programming concepts into executable ones. However, the actual management of resources is done by the RMS in an opaque way, based on a simple abstraction of applications.

More details are available on the ANR COOP website.