Files in this item: Francesco_Sorrentino.pdf (PDF, 4MB)

Title: Algorithmic techniques for predictive testing of concurrent programs and distributed systems
Author(s): Sorrentino, Francesco
Director of Research: Parthasarathy, Madhusudan
Doctoral Committee Chair(s): Parthasarathy, Madhusudan
Doctoral Committee Member(s): Marinov, Darko; Roşu, Grigore; Qadeer, Shaz
Department / Program: Computer Science
Discipline: Computer Science
Degree Granting Institution: University of Illinois at Urbana-Champaign
Abstract: The rise of multicore hardware platforms has led to a new era of computing. To take full advantage of multicore processors, developers need to write concurrent code. Unfortunately, because of the non-determinism introduced by thread scheduling, multi-threaded programs can exhibit different behaviors even for a single input. Errors in concurrent programs often occur under subtle interleaving patterns that the programmer had not foreseen, and there are far too many interleavings to explore, even for a fixed test input, making concurrency testing a hard problem. Current testing techniques such as stress testing (running the program under test repeatedly with randomized sleep statements and varying parameters on different platforms) have proved largely inadequate at exposing such subtle interleavings. Among the various techniques for testing concurrent programs, prediction-based testing is one of the most valuable: starting from one arbitrary concurrent execution of the program under test, it predicts alternate interleavings that are likely to contain bugs. In our research, we explore prediction algorithms based on a combination of static analysis and logical constraint solving to test concurrent programs efficiently and effectively. The strength of our research lies in the fact that the techniques we propose are general enough to predict various kinds of concurrency errors, with a high proportion of the predicted interleavings being actually feasible. We provide evidence that this approach is promising: we have implemented our techniques in a framework, Penelope, evaluated it over benchmark programs, and found scores of null-pointer dereferences, data races, atomicity violations, and deadlocks using only a single test run as the prediction seed for each benchmark.
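The prediction idea can be illustrated with a toy sketch. This is not Penelope's actual algorithm, and every name here (`merges`, `replay`, the bank-balance invariant) is invented for illustration: from the events of one observed run, enumerate alternate interleavings that preserve each thread's program order, replay each one deterministically, and flag schedules that break an atomicity invariant.

```python
# Toy sketch of prediction-based testing (hypothetical names; not
# Penelope's actual algorithm). From one observed run we enumerate
# alternate interleavings and replay each one deterministically,
# flagging schedules that violate an atomicity invariant.

def merges(a, b):
    """Yield every interleaving of event lists a and b that preserves
    each thread's program order."""
    if not a:
        yield list(b)
        return
    if not b:
        yield list(a)
        return
    for rest in merges(a[1:], b):
        yield [a[0]] + rest
    for rest in merges(a, b[1:]):
        yield [b[0]] + rest

def replay(schedule):
    """Deterministically replay a schedule of (thread, step) events.

    Each thread performs a non-atomic check-then-act withdrawal of 100
    from a shared balance of 100. Returns (final_balance, withdrawn);
    withdrawn > 100 means the check-then-act atomicity was violated.
    """
    shared, withdrawn, local = 100, 0, {}
    for thread, step in schedule:
        if step == "read":                  # the "check": read the balance
            local[thread] = shared
        elif local[thread] >= 100:          # the "act": withdraw based on
            shared = local[thread] - 100    # a possibly stale read
            withdrawn += 100
    return shared, withdrawn

# Seed run: each thread's events in program order.
t1 = [("t1", "read"), ("t1", "write")]
t2 = [("t2", "read"), ("t2", "write")]

# Predict: which alternate interleavings over-withdraw?
bad = [s for s in merges(t1, t2) if replay(s)[1] > 100]
```

Of the six order-preserving interleavings of this seed run, the four in which both threads read before either writes withdraw 200 from a balance of 100; only the two serial orders are safe. These are exactly the subtle schedules that stress testing rarely hits, and that a prediction engine can enumerate directly.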
We also address the challenge of carrying our experience in predictive testing of concurrent programs over to distributed systems. We use supervised machine learning to model the system's behavior in response to perturbations, based on observations recorded in a pseudo-distributed (small-scale) setting. From the learned model, we predict the next system state given the current state and the applied perturbation. In a perturbation-based testing framework, accurate prediction helps shorten the waiting time between consecutive perturbations. Moreover, from the learned model we can reconstruct a possible sequence of perturbations from a given sequence of observed system states, for diagnosis. We demonstrate the usefulness of our approach in a case study of a distributed system based on ZooKeeper and SolrCloud.
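A minimal sketch of the learned-model idea, with heavy caveats: the state and perturbation names below are invented, and a majority-vote lookup table stands in for the actual supervised learner. It shows both uses described above: fitting a transition model from recorded (state, perturbation, next-state) observations, predicting the next state, and inverting the model to reconstruct a perturbation sequence from an observed state trace.

```python
# Hypothetical sketch of the learned transition model (invented state
# and perturbation names; a majority-vote table stands in for the real
# supervised learner). The model supports both prediction and, run
# backwards, diagnosis.
from collections import Counter, defaultdict

def fit(observations):
    """Learn (state, perturbation) -> most frequently observed next state."""
    counts = defaultdict(Counter)
    for state, perturbation, nxt in observations:
        counts[(state, perturbation)][nxt] += 1
    return {key: c.most_common(1)[0][0] for key, c in counts.items()}

def predict(model, state, perturbation):
    """Predict the next system state, or None for an unseen situation."""
    return model.get((state, perturbation))

def diagnose(model, states):
    """Reconstruct a perturbation sequence consistent with a state trace."""
    inverse = {(s, nxt): p for (s, p), nxt in model.items()}
    return [inverse.get(pair) for pair in zip(states, states[1:])]

# Observations recorded in a small-scale (pseudo-distributed) run of an
# invented three-state ensemble.
obs = [
    ("healthy",  "kill_leader",  "degraded"),
    ("healthy",  "kill_leader",  "degraded"),
    ("degraded", "restart_node", "healthy"),
    ("degraded", "kill_leader",  "down"),
    ("down",     "restart_node", "degraded"),
]
model = fit(obs)
```

With an accurate model, the tester can inject the next perturbation as soon as the predicted state is reached instead of waiting out a fixed timeout, and `diagnose` runs the same table backwards to explain an observed failure trace.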
Issue Date: 2014-05-30
Rights Information: Copyright 2014 Francesco Sorrentino
Date Available in IDEALS: 2014-05-30
Date Deposited: 2014-05