Xiao Qin's Research

Auburn University

Final Report

Multicore-Based Disks for Data-Intensive Computing (2009 - )





Research and Educational Activities


1.1 Developing Multicore-Embedded Smart Disks

In this study, we developed a multicore-embedded smart disk system that improves the performance of data-intensive applications by offloading data processing to multicore processors embedded in disk drives. Compared with traditional storage devices, next-generation disks will have computing capability that reduces the computational load of host processors or CPUs. With advances in processor and memory technologies, smart disks are promising devices for performing complex on-disk operations, because they avoid moving huge amounts of data back and forth between storage systems and host processors.

To enhance the performance of data-intensive applications, we designed a smart disk called McSD, in which a multicore processor is embedded. We implemented a programming framework for data-intensive applications running on a computing system coupled with McSD; the framework aims at balancing load between host CPUs and multicore-embedded smart disks. To fully utilize the multicore processors in smart disks, we implemented the MapReduce model for McSDs to handle parallel computing. A prototype of McSD has been implemented in a PC cluster connected by Gigabit Ethernet. McSD significantly reduces the execution times of word count, string matching, and matrix multiplication. Overall, we conclude that, integrated with MapReduce, multicore-embedded smart disk systems are a promising approach to improving the I/O performance of data-intensive applications.
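To illustrate the MapReduce model that McSD implements for on-disk parallel computing, the following is a minimal word-count sketch in Python. The function names and thread-pool setup are assumptions made for illustration; they are not the actual McSD programming framework.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def map_phase(chunk):
    # Map: emit a (word, 1) pair for every word in this chunk.
    return [(word, 1) for word in chunk.split()]

def word_count(chunks, workers=4):
    # Run map tasks in parallel, the way the cores of an embedded
    # multicore processor would each process a separate data chunk.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        mapped = pool.map(map_phase, chunks)
    # Reduce: sum the counts emitted for each word.
    counts = defaultdict(int)
    for pairs in mapped:
        for word, n in pairs:
            counts[word] += n
    return dict(counts)
```

In a smart-disk setting, the map phase would run on the cores embedded in the drive, so only the small reduced result (the final counts) crosses the I/O bus to the host.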



1.2 Improving MapReduce Performance through Data Placement

MapReduce has become an important distributed processing model for large-scale data-intensive applications like data mining and web indexing. Hadoop, an open-source implementation of MapReduce, is widely used for short jobs requiring low response time. The current Hadoop implementation assumes that the computing nodes in a cluster are homogeneous. Data locality is not taken into account when launching speculative map tasks, because most map tasks are assumed to be data-local. Unfortunately, neither the homogeneity assumption nor the data-locality assumption holds in virtualized data centers. We show that ignoring the data-locality issue in heterogeneous environments can noticeably degrade MapReduce performance. In this research task, we address the problem of how to place data across nodes so that each node has a balanced data-processing load. Given a data-intensive application running on a Hadoop MapReduce cluster, our data placement scheme adaptively balances the amount of data stored on each node to improve data-processing performance. Experimental results on two real data-intensive applications show that our data placement strategy consistently improves MapReduce performance by rebalancing data across the nodes of a heterogeneous Hadoop cluster before a data-intensive application runs.
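The core idea of capacity-proportional placement can be sketched as follows. The function and its inputs (per-node processing capacities, e.g. in blocks per second) are hypothetical illustrations of the principle, not the scheme's actual implementation in Hadoop.

```python
def place_data(total_blocks, capacities):
    """Assign file blocks to nodes in proportion to each node's
    processing capacity, so that all nodes finish their local map
    tasks at roughly the same time in a heterogeneous cluster."""
    total_cap = sum(capacities)
    shares = [total_blocks * c / total_cap for c in capacities]
    placement = [int(s) for s in shares]
    # Hand leftover blocks to the nodes with the largest
    # fractional shares so the total is preserved.
    leftover = total_blocks - sum(placement)
    order = sorted(range(len(capacities)),
                   key=lambda i: shares[i] - placement[i],
                   reverse=True)
    for i in order[:leftover]:
        placement[i] += 1
    return placement
```

For example, a node twice as fast as its peers receives twice as many blocks, which is what keeps map tasks data-local instead of forcing slow nodes to pull unprocessed data over the network.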



1.3 An Offloading Framework for I/O-Intensive Applications on Clusters

In this study, we propose an offloading framework that can be applied with minor effort to existing as well as newly developed I/O-intensive applications. In particular, we not only present the core principles of designing an offloading program, such as the structure and methods of offloading programs and the control of execution paths, but also discuss several essential implementation issues, including configuration, the offloading workflow, programming interfaces, and data sharing. To compare the performance of offloaded applications with that of their original versions, we applied offloading to five programs and measured them on a typical cluster. The experimental results show that the offloaded applications run much faster than the originals, and that the systems on which they execute carry a remarkably lower network burden than those running the original applications.
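A sketch of the kind of decision such a framework must make, namely whether a computation is worth offloading to a storage node, is shown below. The rate parameters and the simple cost model are illustrative assumptions, not the framework's actual policy.

```python
def should_offload(data_mb, host_rate, disk_rate, link_mbps):
    """Return True if processing data in place on the storage node
    beats shipping it to the host.  host_rate and disk_rate are
    processing speeds in MB/s; link_mbps is network bandwidth in MB/s."""
    # Host path: move the data across the network, then process it
    # on the (faster) host CPU.
    host_time = data_mb / link_mbps + data_mb / host_rate
    # Offload path: process the data on the storage node's (slower)
    # processor; only a small result crosses the network, which we
    # neglect here.
    offload_time = data_mb / disk_rate
    return offload_time < host_time
```

Under this model, offloading wins whenever the network transfer dominates, which is consistent with the observation above that offloaded applications place a much lower burden on the network.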



1.4 Using Active Storage to Improve Bioinformatics Application Performance

Active storage is an effective technique for improving applications’ end-to-end performance by offloading data processing to storage nodes. In this research task, we present a pipelining mechanism that leverages active storage to maximize the throughput of data-intensive applications on a high-performance cluster. The mechanism overlaps data processing in active storage with parallel computations on the cluster, thereby allowing the cluster and its active storage nodes to perform computations in parallel. To demonstrate the effectiveness of the mechanism, we implemented a parallel pipelined application called pp-mpiBLAST, which extends mpiBLAST, an open-source parallel BLAST tool. Our pp-mpiBLAST relies on active storage to filter unnecessary data and format databases, which are then forwarded to the cluster running mpiBLAST. We developed an analytic model to study the scalability of pp-mpiBLAST on large-scale clusters. Measurements made from a working implementation suggest that this method reduces mpiBLAST’s overall execution time by up to 50%.
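The benefit of overlapping the two stages can be seen with a simple timing comparison. The two-stage pipeline formula below is a standard pipelining identity used here as an illustrative assumption; it is not the report's actual analytic model.

```python
def pipeline_times(n_batches, t_filter, t_search):
    """Compare serial and pipelined execution of n_batches, where
    active storage spends t_filter seconds filtering/formatting a
    batch and the cluster spends t_search seconds searching it."""
    # Serial: each batch is filtered, then searched, one after another.
    serial = n_batches * (t_filter + t_search)
    # Pipelined: storage filters batch i+1 while the cluster searches
    # batch i; in steady state the slower stage sets the pace.
    pipelined = t_filter + t_search + (n_batches - 1) * max(t_filter, t_search)
    return serial, pipelined
```

With roughly equal stage times, pipelining approaches a 2x speedup as the number of batches grows, which is consistent with the up-to-50% reduction in execution time reported above.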



1.5 Mini Conference in the Advanced Operating Systems Class

A mini-conference model was used to motivate and educate graduate students to conduct research projects in the areas of storage systems, energy-efficient computing, and prefetching/caching for file systems. By the end of the Spring 2010 semester, in which the Comp7500 Advanced Operating Systems class was taught, each graduate student was required to write a research paper and submit it to a mini-conference. All the student papers were reviewed, and each student gave a 20-minute presentation followed by a 5-minute question-and-answer session. The PI also gave constructive comments and suggestions on each student's research project. Through this mini-conference model, the graduate students taking the Comp7500 class improved their presentation and communication skills. After we receive feedback from the graduate students, we will formally evaluate this class next semester.