Distributed & Parallel Computing
Research in parallel computing took off vigorously in the past two decades with the advent of multi-core and many-core architectures. There is now renewed interest in designing and engineering efficient parallel algorithms targeted at a multitude of parallel architectures and platforms. Several problems from domains including graph algorithms, the natural sciences, optimization, and machine learning are of particular interest owing to their wide range of applications.
Fundamental outcomes of current-generation research in parallel computing include tools for analyzing and modeling GPU computations, techniques for optimizing GPU computations, and mechanisms and algorithms that aim to overlap communication with computation, along with a host of libraries, domain-specific languages, and automatic compiler techniques. Parallel computing has also enabled application domains such as genomics to address critical scaling challenges.
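As a concrete illustration of the overlap idea, the following is a minimal, hypothetical Python sketch that hides communication latency behind computation via double buffering; in a GPU setting the same pattern is realized with asynchronous copies on separate streams. The functions `fetch_chunk` and `compute` are illustrative stand-ins, not part of any real API.

```python
# A minimal sketch of overlapping communication and computation via
# double buffering. The "communication" here is a simulated fetch
# (time.sleep stands in for transfer latency); in a GPU setting this
# role is played by asynchronous copies. All names are illustrative.
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_chunk(i):
    """Simulated communication: fetch the i-th data chunk."""
    time.sleep(0.1)            # stands in for transfer latency
    return list(range(i * 4, (i + 1) * 4))

def compute(chunk):
    """Simulated computation on one chunk."""
    time.sleep(0.1)            # stands in for kernel execution
    return sum(x * x for x in chunk)

def pipelined(num_chunks):
    total = 0
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fetch_chunk, 0)   # prefetch the first chunk
        for i in range(num_chunks):
            chunk = future.result()            # wait for the current chunk
            if i + 1 < num_chunks:             # start the next transfer early,
                future = pool.submit(fetch_chunk, i + 1)  # overlapping it with compute
            total += compute(chunk)
    return total

if __name__ == "__main__":
    start = time.perf_counter()
    print(pipelined(8), f"{time.perf_counter() - start:.2f}s")
```

With perfect overlap the pipelined loop takes roughly one transfer plus eight compute steps, versus eight of each when run serially.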
The center's current focus in this direction is to study parallel algorithms for static and dynamic graph problems, including centrality measures, connectivity, and shortest paths. In addition, we are focusing on applications to genomics.
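For instance, level-synchronous breadth-first search is a basic building block for parallel shortest-path and connectivity computations. The sketch below is a sequential Python illustration with an assumed adjacency-list layout; it highlights the structure parallel implementations exploit, namely that every vertex in the current frontier can be expanded independently (e.g., one GPU thread per frontier vertex, with atomics guarding concurrent distance updates).

```python
# A minimal sketch of level-synchronous BFS, the pattern underlying
# many parallel shortest-path and connectivity implementations.
# The graph and its layout here are illustrative.

def bfs_levels(adj, source):
    """adj: dict mapping vertex -> list of neighbours."""
    dist = {source: 0}
    frontier = [source]
    level = 0
    while frontier:
        level += 1
        # Each expansion below is independent of the others; in a
        # parallel implementation, one thread per frontier vertex
        # (or per edge) does this work, with atomic updates to dist.
        next_frontier = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = level
                    next_frontier.append(v)
        frontier = next_frontier
    return dist

if __name__ == "__main__":
    adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
    print(bfs_levels(adj, 0))   # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```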
A closely related area of research is that of distributed algorithms, where the nodes that collaborate to perform a computation do not have access to shared memory and must instead rely on message passing. Popular models include the LOCAL and CONGEST models: in the former, nodes may send and receive messages of unlimited size to and from their neighbors, whereas in the latter, message sizes are bounded to be logarithmic in the number of nodes. Fundamental algorithms have been developed in these models for a variety of problems, including symmetry breaking in graphs, facility location, minimum spanning trees, and network decomposition. Decades of progress on some of these problems have resulted in a plethora of techniques for obtaining upper and lower bounds in a variety of models of distributed computing.
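As an example of symmetry breaking with small messages, the following is a sketch of a Luby-style randomized maximal independent set (MIS) computation, written as synchronous rounds in which each node exchanges only its random value and its membership decision with neighbors, so messages stay within CONGEST-size bounds. The simulation below is sequential and the graph is illustrative; only the per-node logic reflects what a distributed node would run.

```python
# A minimal sketch of Luby-style randomized MIS as synchronous
# message-passing rounds. Per round, a node "sends" only its random
# value (O(log n) bits) to neighbours, matching CONGEST constraints.
import random

def luby_mis(adj, seed=0):
    rng = random.Random(seed)
    alive = set(adj)                 # nodes still undecided
    mis = set()
    while alive:
        # Round 1: every alive node draws and sends a random value.
        val = {u: rng.random() for u in alive}
        # Round 2: a node joins the MIS if it beats all alive neighbours.
        joined = {u for u in alive
                  if all(val[u] < val[v] for v in adj[u] if v in alive)}
        mis |= joined
        # Joined nodes and their neighbours drop out of the computation.
        dropped = set(joined)
        for u in joined:
            dropped |= {v for v in adj[u] if v in alive}
        alive -= dropped
    return mis

if __name__ == "__main__":
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
    print(luby_mis(adj))   # an independent set that is maximal
```

Each iteration removes at least the node holding the globally smallest value together with its neighborhood, and the standard analysis shows the algorithm terminates in O(log n) rounds with high probability.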
In recent years, models of distributed computing such as the k-machine model and the Massively Parallel Computation (MPC) model have risen to prominence owing to their ability to model practical cloud computing systems. These models are built on the observation that in big-data computations on the cloud, communication cost, rather than local computation, is the predominant factor. Hence, they aim to minimize the number of rounds of communication needed to solve a problem while keeping message sizes bounded.
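A toy illustration of round counting in this setting: the sketch below partitions the edges of a graph across k simulated machines and computes connected-component labels by iterated label propagation, merging (vertex, label) proposals once per communication round. This naive scheme needs a number of rounds proportional to the graph diameter; genuine k-machine and MPC algorithms work hard to shrink exactly this round count. The partitioning scheme and all names are illustrative.

```python
# A minimal round-counting simulation in the spirit of the k-machine /
# MPC models: edges live on k simulated machines, machines propose
# (vertex, label) pairs from their local edges, and one merge of all
# proposals stands in for a communication round. Rounds, not local
# work, are what these models charge for.

def mpc_components(edges, num_vertices, k=4):
    parts = [edges[i::k] for i in range(k)]       # static edge partition
    label = {v: v for v in range(num_vertices)}
    rounds = 0
    while True:
        rounds += 1
        # Each machine proposes new labels from its local edges only.
        proposals = {}
        for machine in parts:
            for u, v in machine:
                lo = min(label[u], label[v])
                for w in (u, v):
                    proposals[w] = min(proposals.get(w, lo), lo)
        # One communication round: merge proposals into the labels.
        changed = False
        for w, lab in proposals.items():
            if lab < label[w]:
                label[w] = lab
                changed = True
        if not changed:
            return label, rounds

if __name__ == "__main__":
    edges = [(0, 1), (1, 2), (3, 4)]
    labels, rounds = mpc_components(edges, 5)
    print(labels, "rounds:", rounds)   # two components, {0,1,2} and {3,4}
```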
The current focus of our distributed computing research is to design efficient algorithms in newly emerging models such as the k-machine model and the MPC model, with symmetry breaking algorithms being of particular interest.
Current Projects:
- Parallel and Fault-resilient Programming Primitives and Algorithms for Temporal Graph Processing, National Supercomputing Mission, 2021-2023.
- Design and Implementation of Parallel Algorithms and Systems for Dynamic Graphs, Department of Science and Technology, Government of India, 2018-2021.
- Efficient Symmetry Breaking in the CONGEST Model, Department of Science and Technology, Government of India, 2018-2021.
- Efficient K-mer Counting Using GPUs, TiH Data Science, IIIT Hyderabad, 2021-2022.