WG – Carlos Cardonha: Network Models for Multi-Objective Discrete Optimization

2018-06-29

Title: Network Models for Multi-Objective Discrete Optimization

Speaker: Carlos Cardonha

Abstract: This work provides a novel framework for solving multi-objective discrete optimization problems with an arbitrary number of objectives. Our framework formulates these problems as network models, in which enumerating the Pareto frontier amounts to solving a multi-criteria shortest path problem in an auxiliary network. We design tools and techniques for exploiting the network model in order to accelerate the identification of the Pareto frontier, most notably a number of operations that simplify the network by removing nodes and arcs while preserving the set of nondominated solutions. We show that the proposed framework yields orders-of-magnitude performance improvements over existing state-of-the-art algorithms on four problem classes containing both linear and nonlinear objective functions.
This is joint work with David Bergman, Merve Bodur, and André Ciré.
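The central object of the abstract, the set of nondominated solutions, can be made concrete with a small sketch. The snippet below is an illustrative Pareto filter for minimization objectives, not the paper's algorithm (which works on an auxiliary network); the function names are hypothetical.

```python
def dominates(a, b):
    """True if objective vector a dominates b (minimization): a is no worse
    in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_frontier(points):
    """Return the nondominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

For example, among the vectors (1, 5), (2, 4), (3, 3), (2, 6), and (4, 4), the last two are dominated and would be pruned; the paper's network-reduction operations play an analogous role at the level of nodes and arcs.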

Mini-bio: Carlos Cardonha is a Research Staff Member of the Optimization under Uncertainty Group at IBM Research Brazil, with a Ph.D. in Mathematics (T.U. Berlin) and with a Bachelor’s and a Master’s degree in Computer Science (Universidade de São Paulo). His primary research interests are mathematical programming and theoretical computer science, with a focus on the application of techniques from mixed integer linear programming, combinatorial optimization, and algorithm design and analysis to real-world and/or operations research problems.

WG – Alexandre da Silva Veith: Latency-Aware Placement of Data Stream Analytics on Edge Computing

2018-06-26

Title: Latency-Aware Placement of Data Stream Analytics on Edge Computing

Speaker: Alexandre da Silva Veith

Abstract: The interest in processing data events under stringent time constraints as they arrive has led to the emergence of architectures and engines for data stream processing. Edge computing, initially designed to minimize the latency of content delivered to mobile devices, can be used for executing certain stream processing operations. Moving operators from the cloud to the edge, however, is challenging, as operator-placement decisions must consider the application requirements and the network capabilities. In this work, we introduce strategies to create placement configurations for data stream processing applications whose operator topologies follow series-parallel graphs. We consider the operator characteristics and requirements to improve the response time of such applications. Results show that our strategies can improve the response time by up to 50% for application graphs comprising multiple forks and joins, while transferring less data and making better use of resources.
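The series-parallel structure mentioned in the abstract admits a simple recursive latency model, which is one way such placement decisions can be evaluated. The sketch below is a hypothetical illustration, not the paper's strategy: node fields and latency figures are invented, with series stages summing and a join waiting for its slowest parallel branch.

```python
def response_time(node):
    """Recursively evaluate end-to-end latency of a series-parallel
    operator graph. Each operator's cost is its processing time plus
    the network latency of the site (edge or cloud) hosting it."""
    kind = node["kind"]
    if kind == "op":
        return node["proc"] + node["net"]
    if kind == "series":
        # stages in series execute one after another
        return sum(response_time(c) for c in node["children"])
    if kind == "parallel":
        # a join must wait for the slowest branch
        return max(response_time(c) for c in node["children"])
    raise ValueError(f"unknown node kind: {kind}")

# Hypothetical application: a source, a fork into two branches, a sink.
app = {"kind": "series", "children": [
    {"kind": "op", "proc": 2.0, "net": 1.0},        # source operator on the edge
    {"kind": "parallel", "children": [
        {"kind": "op", "proc": 5.0, "net": 1.0},    # branch kept on the edge
        {"kind": "op", "proc": 3.0, "net": 20.0},   # branch offloaded to the cloud
    ]},
    {"kind": "op", "proc": 1.0, "net": 1.0},        # join/sink operator
]}
```

Under these made-up numbers the cloud branch dominates the fork, so the end-to-end response time is 3 + max(6, 23) + 2 = 28; a latency-aware placement would weigh exactly this kind of trade-off when deciding which operators to keep at the edge.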