Research Interests:
My research program focuses on high-performance computing systems and architectures, with themes spanning the development of resource management strategies across the layers of cloud computing systems, the design of reconfigurable hardware architectures for reusable systems, and the restructuring of computationally challenging algorithms to achieve high performance on field programmable gate array (FPGA) and graphics processing unit (GPU) hardware architectures. Below is an overview of ongoing research activities in my Reconfigurable Computing Laboratory.

Resource Management:

  • HPC Scale:
    The future generation of extreme-scale computing systems is expected to operate under dynamically changing system-wide power constraints, with resource demand exceeding the available resources. To measure HPC productivity, we use a monotonically decreasing, time-dependent value function that represents the value to an organization of completing a job. We are currently developing power-aware, value-based resource management algorithms to improve the productivity of a power-constrained and oversubscribed HPC system.

  • SoC Scale:
    Domain-Focused Advanced Software-Reconfigurable Heterogeneous System on Chip (DASH-SoC): we are developing a full suite of design-time, compile-time, and run-time software capabilities to enable efficient implementation of applications on the target SoC, improving throughput, latency, power efficiency, and flexibility with reduced expert intervention.

  • Neuromorphic Computing:
    The field of neuromorphic computing has emerged with the goal of creating biologically inspired non-von Neumann architectures that emphasize a neural network's strengths: low power, high parallelism, and fast complex computations. We are currently building an open-source field programmable gate array (FPGA) based emulator to further understand neuromorphic computing platforms such as TrueNorth, and to make them accessible to researchers and application developers through a programming and testing environment.

  • High Performance Scientific Computing:
    The ingenuity of parallelizing a model or an algorithm lies in balancing the use of computation and memory resources on the target hardware and managing memory down to the byte level. My lab has extensive experience, with considerable success, in developing algorithms that require fine-grained parallelism and careful memory management on reconfigurable computing (FPGA) and graphics processing unit (GPU) platforms. These developments were based on an evaluation of the challenges posed by the hardware architecture, as well as the software modifications needed to map the program architecture onto the target hardware. Over the years, these projects have helped me build a strong foundation in many-core computing systems and develop expertise in finding unique ways to parallelize a wide range of algorithms for these systems.

    Most recently, my lab has contributed to the scientific computing domain by developing novel methods to accelerate T-cell receptor (TCR) synthesis for studying the immune systems of complex organisms. We reduced the simulation time scale from 52 weeks to 13 days and, for the first time, presented a model of the mouse TCR β repertoire to an extent that enabled us to evaluate the Convergent Recombination Hypothesis (CRH) comprehensively at the peta-scale.
