  1. Jie Zhang (Univ. of Texas at Dallas)
  2. Wonil Choi* (Seoul National Univ./Samsung)
  3. Minwoo Lee (DSU/Samsung)
  4. Gieseo Park (Georgia Tech/Seagate)
  5. Mustafa Shihab (Auburn Univ. Alabama)
  6. Shuwen Gao (Sun Yat-sen University)
  7. Lubaba Abid (Univ. of Texas at Dallas/TI)
  8. Sukmin Kang

* co-advised student

Intern:

  1. Karl Taht


Computer architecture and systems researchers are traditionally accustomed to a conference-centric publication system. The acceptance rate of top-tier conferences in these fields is around 10%~20%, decided in a National Science Foundation (NSF) panel style by a program committee of roughly 20 members. Submitted papers are usually 10~12 pages in length with detailed results, and go through about five double-blind reviews and a rebuttal process (not all conferences offer a rebuttal, but most top conferences do) with top experts on the topic.

Every year we target four top-tier architecture conferences (ISCA, MICRO, ASPLOS, and HPCA) and systems conferences such as SIGMETRICS and the USENIX venues (FAST, ATC, HotStorage). Note that these conferences deal with issues and research related to microarchitecture, large-scale computing systems, programming languages, and operating systems, rather than circuit design or electron-device issues.

While it would be beneficial to understand the entire computing stack from bottom to top, if your background is closer to a much lower-level research topic such as analog/digital circuits, VLSI, or surface-sensing technologies, we strongly encourage you to become familiar with architectural approaches and system solutions before landing in the computer architecture and systems research area -- all the conferences we target have proceedings, and many of their papers are available online.



Please note that there are many research resources and tools you need to be familiar with, in addition to the following items. These items are not related to research itself, but they are the minimum requirements to keep body and soul together in our research fields.


We typically use many different types of simulation frameworks for computer architecture and systems research. While the appropriate simulation methodology may vary with your research topic, we recommend that you become familiar with diverse simulation methodologies. The most popular simulators we use are as follows:

All of these simulators are available under open-source licenses, and all of their framework code is free to download. Managing and learning these simulation tools requires solid programming knowledge as well as a strong background in computer architecture and systems. We often modify these simulators to examine a conceptual idea or a new approach. We also integrate these frameworks with one another to simulate a larger computing system and observe more details (e.g., power, energy, and data movement between heterogeneous devices).


In addition to simulation-based studies, we often analyze and characterize diverse real products and memory devices (e.g., SSDs, GPUs, etc.). This fundamentally requires strong programming skills and deep knowledge of the underlying devices themselves. For example, to evaluate those devices well, you might need to develop a microbenchmark or characterization tool that exercises them with various workloads and access patterns. On the other hand, since handling these devices also involves managing device drivers to some extent, you need knowledge of OS architecture and kernel implementation, such as process creation, context switching, memory allocation, synchronization mechanisms, interprocess communication, I/O buffering, and file systems -- this eye-catching kernel map is helpful for understanding kernel driver issues and checking the corresponding kernel source code. We use both the Windows Driver Model (WDM) and the Linux kernel driver model.
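As a flavor of what such a characterization tool looks like, here is a minimal, purely illustrative Python sketch that times block reads over a file under sequential versus random access patterns. A real tool would target the raw device, bypass the page cache (e.g., with O_DIRECT), and control queue depth; none of that is attempted here, so the numbers it produces are only a toy demonstration of the idea.

```python
import os
import time
import random
import tempfile

BLOCK = 4096  # 4 KiB read granularity (a common device page size)

def make_test_file(size_mb=2):
    """Create a temporary file of random bytes; return its path and block count."""
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(os.urandom(size_mb * 1024 * 1024))
    return path, (size_mb * 1024 * 1024) // BLOCK

def timed_reads(path, block_indices):
    """Read one BLOCK at each index in order; return total elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        for idx in block_indices:
            f.seek(idx * BLOCK)
            f.read(BLOCK)
    return time.perf_counter() - start

path, nblocks = make_test_file()
seq = list(range(nblocks))            # sequential access pattern
rnd = random.sample(seq, len(seq))    # random permutation of the same blocks
t_seq = timed_reads(path, seq)
t_rnd = timed_reads(path, rnd)
print(f"sequential: {t_seq:.4f}s  random: {t_rnd:.4f}s")
os.remove(path)
```

Varying the access pattern, request size, and read/write mix in this style is the essence of device characterization; the interesting engineering lies in defeating the caches and buffers between your program and the device.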


You might want to be aware of diverse benchmark/evaluation tools (e.g., SPEC, Intel Iometer, Unix disk I/O, and other parallel I/O tools), runtime libraries (e.g., Boost, MapReduce, MPI, GPU-CUDA), and version control systems (e.g., git and svn). All of these tools are often used for simulation and empirical evaluation studies. In addition, it would be good if you can freely use a scripting language like Python. Usually, both simulation and empirical evaluation generate a tremendous amount of data, so analyzing that data by hand is prone to human error and takes quality time away from your research. Scripting tools help you accelerate data analysis by automatically parsing the structure of raw data and collecting the results. Lastly, the following tools and the corresponding documentation (published by Brendan Gregg) will help you reduce the effort of developing performance measurement and evaluation tools.



  1. Mahmut Kandemir (Professor, Pennsylvania State University)
  2. Chita Das (Professor, Pennsylvania State University)
  3. John Shalf (CTO, Lawrence Berkeley National Laboratory)
  4. Sung Kyu Lim (Professor, Georgia Institute of Technology)
  5. Hsien-Hsin S. Lee (Professor, Georgia Institute of Technology)
  6. Nam Sung Kim (Professor, University of Wisconsin–Madison)
  7. David Donofrio (Staff Scientist, Lawrence Berkeley National Laboratory)
  8. Michael Swift (Professor, University of Wisconsin–Madison)
  9. Umit V. Catalyurek (Vice Chair, Ohio State University)
  10. Chao Yang (Staff Scientist, Lawrence Berkeley National Laboratory)
  11. Jim Ang (Manager, Sandia National Labs)
  12. Erik Saule (Professor, University of North Carolina at Charlotte)
  13. H. Metin Aktulga (Professor, Michigan State University)
  14. Ellis Wilson (Researcher, Panasas)
  15. Cy Chan (Lawrence Berkeley National Laboratory)
  16. Farzad Fatollahi-Fard (Lawrence Berkeley National Laboratory)
  17. Georgios Michelogiannakis (Lawrence Berkeley National Laboratory)