The Freeze Free algorithm for process migration
Roush, Ellard Thomas
Doctoral Committee Chair(s): Campbell, Roy H.
Department of Study:
Degree Granting Institution: University of Illinois at Urbana-Champaign
This thesis introduces the Freeze Free process migration algorithm, which uses the following six techniques to dramatically reduce overhead and complexity.
The Freeze Free algorithm eliminates all request and response messages from the process migration latency period. The first data message implicitly signals the start of a process migration. The current stack page message implicitly tells the new host to resume execution. The old host blasts the combined process control and execution state, the current code page, the current heap page, and the current stack page to the new host without delay. This information cannot be further reduced while still supporting general process migration.
The program counter identifies the current code page, and the stack pointer identifies the current stack page. A heuristic identifies the current heap page by examining the instruction stream. The algorithm delivers the first critical pages without page faults. The system truncates the top stack page to the used portion.
The design separates process control and communication state, which allows process migration and message receipt to proceed in parallel. The design effectively eliminates the message freeze time, which plagued prior systems.
The design separates process control and file state, which allows the process to resume execution while the system flushes data to the file server.
The algorithm preallocates and partially initializes data structures for use at process migration time.
The design reorganizes data structures so that object information appears only within that same object. This drastically reduces the cost of extracting and inserting state.
These techniques reduce the process migration latency time by an order of magnitude, while simultaneously supporting processor fault tolerance and effectively eliminating message freeze times. Furthermore, the latency cost does not change with process size. The latency time is 13.9 ms on a 4 kB page system, 20.8 ms on an 8 kB page system, and 36.9 ms on a 16 kB page system.
An analysis shows that process migration latency costs are now a small fraction of the cost of demand-paging pages across the network.
The thesis shows the negative impact of increasing overhead on system load-sharing speedup: small overhead is essential for good speedup.