Files in this item

Adrian_Nistor.pdf (application/pdf, 469 kB)
Title: Understanding, detecting, and repairing performance bugs
Author(s): Nistor, Adrian
Director of Research: Marinov, Darko; Lu, Shan
Doctoral Committee Chair(s): Marinov, Darko
Doctoral Committee Member(s): Lu, Shan; Torrellas, Josep; Xie, Tao
Department / Program: Computer Science
Discipline: Computer Science
Degree Granting Institution: University of Illinois at Urbana-Champaign
Subject(s): Performance bugs
Abstract: Software performance is critical to how end users perceive the quality of software products. Performance bugs, programming errors that cause performance degradation, lead to poor user experience and low system throughput. Despite advances in profiling techniques, performance bugs still escape into production runs. There are two key reasons why performance bugs are not effectively detected during in-house testing. First, there is little available data about how performance bugs are discovered, reported, and fixed in practice; such data is needed to design effective techniques for addressing performance bugs. Second, current techniques for detecting performance bugs detect only slow computation and do not address other important parts of the testing process, such as automated oracles or bug fixing.

This dissertation makes three contributions. The first contribution is a study of how performance bugs are discovered, reported to developers, and fixed by developers, and how these results compare with the results for non-performance bugs. The study considers performance and non-performance bugs from three popular code bases: Eclipse JDT, Eclipse SWT, and Mozilla. First, we find little evidence that fixing performance bugs has a higher chance of introducing new functional bugs than fixing non-performance bugs, which implies that developers need not be overly concerned about fixing performance bugs. Second, although fixing performance bugs is about as error-prone as fixing non-performance bugs, it is more difficult, indicating that developers need better tool support for fixing performance bugs and testing performance bug patches. Third, unlike many non-performance bugs, a large percentage of performance bugs are discovered through code reasoning, not through users observing the negative effects of the bugs (e.g., performance degradation) or through profiling. These results suggest that techniques to help developers reason about performance, better test oracles, and better profiling techniques are needed for discovering performance bugs.

The second contribution is TODDLER, a novel automated oracle for performance bugs, which enables testing for performance bugs to use the well-established and automated process of testing for functional bugs. TODDLER reports code loops whose computation has repetitive and partially similar memory-access patterns across loop iterations. Such repetitive work is likely unnecessary and can be done faster. TODDLER was implemented for Java and evaluated on 9 popular Java code bases. Experiments with 11 previously known, real-world performance bugs show that TODDLER finds these bugs with higher accuracy than the standard Java profiler. TODDLER also found 42 new bugs in six Java projects: Ant, Google Core Libraries, JUnit, Apache Collections, JDK, and JFreeChart. Based on the corresponding bug reports, developers have so far fixed 10 of these bugs and confirmed 6 more as real bugs.

The third contribution is LULLABY, a novel static technique that detects and fixes performance bugs that have non-intrusive fixes likely to be adopted by developers. Each performance bug detected by LULLABY is associated with a loop and a condition: when the condition becomes true during the loop execution, all the remaining computation performed by the loop is wasted. Developers typically fix such performance bugs because these bugs waste computation in loops and have non-intrusive fixes: when the condition becomes true dynamically, simply break out of the loop. Given a program, LULLABY detects such bugs statically and gives developers a potential source-level fix for each bug. LULLABY was evaluated on real-world applications, including 11 popular Java applications (e.g., Groovy, Log4J, Lucene, Struts, and Tomcat) and 4 widely used C/C++ applications (Chromium, GCC, Mozilla, and MySQL). LULLABY finds 61 new performance bugs in the Java applications and 89 new performance bugs in the C/C++ applications. Based on the corresponding bug reports, developers have so far fixed 51 and 65 performance bugs in the Java and C/C++ applications, respectively. Most of the remaining bugs are still under consideration by developers.
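To make the loop-wasted-computation pattern concrete, here is a minimal Java sketch of the kind of bug the abstract describes; the class and method names are hypothetical illustrations, not code from the dissertation's benchmarks. The buggy version keeps iterating after its condition becomes true, so every remaining iteration is wasted work; the fixed version applies the non-intrusive "break out of the loop" repair.

```java
import java.util.List;

public class WastedLoopExample {
    // Buggy pattern: once `found` becomes true, no later iteration can
    // change the result, yet the loop still scans the rest of the list.
    static boolean containsNegativeBuggy(List<Integer> values) {
        boolean found = false;
        for (int v : values) {
            if (v < 0) {
                found = true; // condition becomes true here...
            }
            // ...but the loop continues over all remaining elements
        }
        return found;
    }

    // Non-intrusive fix: leave the loop as soon as the condition holds
    // (returning here is equivalent to setting `found` and breaking).
    static boolean containsNegativeFixed(List<Integer> values) {
        for (int v : values) {
            if (v < 0) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<Integer> values = List.of(3, -1, 7, 9);
        // Both versions compute the same answer; the fixed one stops early.
        System.out.println(containsNegativeBuggy(values));
        System.out.println(containsNegativeFixed(values));
    }
}
```

The repair changes control flow only at the point where the condition is already known, which is why such fixes are non-intrusive and, per the abstract, likely to be adopted by developers.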
Issue Date: 2014-05-30
Rights Information: Copyright 2014 Adrian Nistor
Date Available in IDEALS: 2014-05-30
Date Deposited: 2014-05
