Reducing the Overhead of Dynamic Compilation

Chandra Krintz, David Grove, Vivek Sarkar, and Brad Calder

Abstract

The execution model for mobile, dynamically-linked, object-oriented programs has evolved from fast interpretation to a mix of interpreted and dynamically compiled execution. The primary motivation for dynamic compilation is that compiled code executes significantly faster than interpreted code. However, dynamic compilation, which is performed while the application is running, introduces execution delay. In this paper we present two dynamic compilation techniques that enable high-performance execution while reducing the effect of this compilation overhead. The techniques fall into two categories: 1) decreasing the amount of compilation performed, and 2) overlapping compilation with execution.

We first present and evaluate Lazy Compilation, an approach used in most dynamic compilation systems in which individual methods are compiled on demand upon their first invocation. This is in contrast to Eager Compilation, in which all methods in a class are compiled when the class is loaded. We describe our experience with eager compilation, as well as the implementation of and transition to lazy compilation, and we empirically evaluate the effectiveness of this transition. Our experimental results using the SpecJVM Java benchmarks and the Jalapeño JVM show that, compared to eager compilation, lazy compilation compiles 57% fewer methods and reduces total time (compilation plus execution time) by 14% to 26%.
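To illustrate the mechanism, the following is a minimal Java sketch of lazy compilation; the names (MethodSlot, CompiledCode, compile) are illustrative assumptions, not the Jalapeño implementation. Each method slot starts with no compiled code, the first invocation triggers compilation and caches the result, and methods that are never invoked are never compiled, which is the source of lazy compilation's savings over eager compilation.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of lazy compilation: each method slot initially holds no
// compiled code; the first invocation compiles the method and backpatches
// the slot so later calls run the compiled code directly.
public class LazyCompilationSketch {
    interface CompiledCode { void invoke(); }

    static final class MethodSlot {
        private final String methodName;
        private CompiledCode code;                // null until first invocation

        MethodSlot(String methodName) { this.methodName = methodName; }

        void invoke() {
            if (code == null) {                   // first call: pay compilation cost now
                code = compile(methodName);
            }
            code.invoke();                        // subsequent calls skip compilation
        }
    }

    // Stand-in for the dynamic compiler; a real JVM would produce machine code here.
    static CompiledCode compile(String methodName) {
        System.out.println("compiling " + methodName);
        return () -> System.out.println("running compiled " + methodName);
    }

    public static void main(String[] args) {
        Map<String, MethodSlot> methods = new HashMap<>();
        methods.put("Foo.bar", new MethodSlot("Foo.bar"));
        methods.put("Foo.baz", new MethodSlot("Foo.baz"));

        // Only Foo.bar is ever invoked, so Foo.baz is never compiled.
        methods.get("Foo.bar").invoke();
        methods.get("Foo.bar").invoke();
    }
}
```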

Next, we present profile-driven background compilation, a technique that augments lazy compilation by using idle cycles in multiprocessor systems to overlap compilation with application execution. With this approach, compilation occurs on a thread separate from the application threads, reducing intermittent, and possibly substantial, delays in execution. Profile information is used to prioritize methods as candidates for background compilation, and methods are compiled in priority order so that performance-critical methods are invoked using optimized code as soon as possible. Our results indicate that background compilation can achieve the performance of off-line compiled applications and mask almost all compilation overhead. We show significant reductions in total time of 14% to 71% over lazy compilation.
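The following is a minimal Java sketch of this prioritization scheme under assumed, illustrative names (CompilationRequest, profilePriority); it is not the system described in the paper. A separate compiler thread drains a priority queue of compilation requests ordered by profile-derived importance, so the hottest methods receive optimized code first while application threads continue to run.

```java
import java.util.concurrent.PriorityBlockingQueue;

// Minimal sketch of profile-driven background compilation: a background
// compiler thread drains a priority queue of methods ordered by profile
// importance, overlapping compilation with application execution.
public class BackgroundCompilationSketch {
    static final class CompilationRequest implements Comparable<CompilationRequest> {
        final String methodName;
        final double profilePriority;             // e.g., fraction of profiled execution time

        CompilationRequest(String methodName, double profilePriority) {
            this.methodName = methodName;
            this.profilePriority = profilePriority;
        }

        // Higher-priority (hotter) methods come out of the queue first.
        public int compareTo(CompilationRequest other) {
            return Double.compare(other.profilePriority, this.profilePriority);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        PriorityBlockingQueue<CompilationRequest> queue = new PriorityBlockingQueue<>();

        // Background compiler thread, intended to run on an otherwise idle processor.
        Thread compiler = new Thread(() -> {
            try {
                while (true) {
                    CompilationRequest req = queue.take();
                    System.out.println("optimizing " + req.methodName
                            + " (priority " + req.profilePriority + ")");
                    // ... install optimized code for req.methodName here ...
                }
            } catch (InterruptedException e) {
                // exit when the application shuts down
            }
        });
        compiler.setDaemon(true);
        compiler.start();

        // Application thread enqueues methods using profile data; the hottest
        // method is compiled first regardless of insertion order.
        queue.put(new CompilationRequest("Parser.parse", 0.10));
        queue.put(new CompilationRequest("Loop.kernel", 0.55));
        queue.put(new CompilationRequest("Util.format", 0.02));

        Thread.sleep(200);                        // let the sketch's compiler thread drain the queue
    }
}
```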