Friday, October 30, 2015

Jank Busters Part One

Jank, or visible stuttering, can be noticed when Chrome fails to render a frame within 16.66 ms (disrupting 60-frames-per-second motion). As of today most of V8's garbage collection work is performed on the main rendering thread, cf. Figure 1, often resulting in jank when too many objects need to be maintained. Eliminating jank has always been a high priority for the V8 team [1, 2, 3]. In this blog post we will discuss a few optimizations that were implemented between M41 and M46 which significantly reduce garbage collection pauses, resulting in a better user experience.

Figure 1: Garbage collection performed on the main thread.

A major source of jank during garbage collection is processing various bookkeeping data structures. Many of these data structures enable optimizations that are unrelated to garbage collection. Two examples are the list of all ArrayBuffers and each ArrayBuffer’s list of views. These lists allow for an efficient implementation of the DetachArrayBuffer operation without imposing any performance hit on access to an ArrayBuffer view. However, in situations where a web page creates millions of ArrayBuffers (e.g., WebGL-based games), updating those lists during garbage collection causes significant jank. In M46, we removed these lists and instead detect detached buffers by inserting checks before every load and store to ArrayBuffers. This amortizes the cost of walking the big bookkeeping list during GC by spreading it throughout program execution, resulting in less jank. Although the per-access checks can theoretically slow down the throughput of programs that heavily use ArrayBuffers, in practice V8's optimizing compiler can often remove redundant checks and hoist remaining checks out of loops, resulting in a much smoother execution profile with little or no overall performance penalty.
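To make the terminology concrete, the following is a minimal JavaScript-side sketch of how a buffer ends up detached; the worker script name is just a placeholder, and the detection described above happens inside V8, not in page code.

// Transferring an ArrayBuffer to a worker detaches it in the sender.
var buffer = new ArrayBuffer(16);
var view = new Uint8Array(buffer);
view[0] = 42;

var worker = new Worker('worker.js');  // placeholder worker script
worker.postMessage(buffer, [buffer]);  // the transfer list detaches `buffer`

console.log(buffer.byteLength);  // 0: the buffer no longer owns its data
console.log(view.length);        // 0: views over a detached buffer are empty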

Another source of jank is the bookkeeping associated with tracking the lifetimes of objects shared between Chrome and V8. Although the Chrome and V8 memory heaps are distinct, they must be synchronized for certain objects, like DOM nodes, that are implemented in Chrome's C++ code but accessible from JavaScript. V8 creates an opaque data type called a handle that allows Chrome to manipulate a V8 heap object without knowing any of the details of the implementation. The object's lifetime is bound to the handle: as long as Chrome keeps the handle around, V8's garbage collector won't throw away the object. V8 creates an internal data structure called a global reference for each handle it passes back out to Chrome through the V8 API, and these global references are what tell V8's garbage collector that the object is still alive. For WebGL games, Chrome may create millions of such handles, and V8, in turn, needs to create the corresponding global references to manage their lifecycle. Processing these huge amounts of global references in the main garbage collection pause is observable as jank. Fortunately, objects communicated to WebGL are often just passed along and never actually modified, enabling simple static escape analysis. In essence, for WebGL functions that are known to usually take small arrays as parameters, the underlying data is copied on the stack, making a global reference obsolete. The result of such a mixed approach is a reduction of pause time by up to 50% for rendering-heavy WebGL games.
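For illustration only, the calling pattern that this optimization targets looks roughly like the following; the WebGL context gl and the uniform location are assumed to have been obtained elsewhere, and the stack-copying itself happens inside Chrome and V8 rather than in page code.

// A small typed array passed to WebGL is read once and never retained,
// so its contents can be copied to the stack instead of being tracked
// through a handle and a global reference.
var color = new Float32Array([1.0, 0.5, 0.25, 1.0]);
gl.uniform4fv(colorLocation, color);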

Most of V8’s garbage collection is performed on the main rendering thread. Moving garbage collection operations to concurrent threads reduces the waiting time for the garbage collector and further reduces jank. This is an inherently complicated task, since the main JavaScript application and the garbage collector may simultaneously observe and modify the same objects. Until now, concurrency was limited to sweeping the old generation of the regular object JS heap. Recently, we also implemented concurrent sweeping of the code and map space of the V8 heap. Additionally, we implemented concurrent unmapping of unused pages to reduce the work that has to be performed on the main thread, cf. Figure 2.

Figure 2: Some garbage collection operations performed on the concurrent garbage collection threads.

The impact of the discussed optimizations is clearly visible in WebGL-based games, for example Turbulenz' Oort Online demo. The following video compares Chrome M41 to M46.


We are currently in the process of making more garbage collection components incremental, concurrent, and parallel, to shrink garbage collection pause times on the main thread even further. Stay tuned as we have some interesting patches in the pipeline.

Posted by the jank busters: Jochen Eisinger, Michael Lippautz, and Hannes Payer

Wednesday, October 14, 2015

V8 Release 4.7

Roughly every six weeks, we create a new branch of V8 as part of our release process. Each version is branched from V8’s git master immediately before Chrome branches for a Chrome Beta milestone. Today we’re pleased to announce our newest branch, V8 version 4.7, which will be in beta until it is released in coordination with Chrome 47 Stable. V8 4.7 is filled with all sorts of developer-facing goodies, so we’d like to give you a preview of some of the highlights in anticipation of the release in several weeks.

Improved ECMAScript 2015 (ES6) support

Rest operator

The rest operator enables the developer to pass an indefinite number of arguments to a function. It is similar to the arguments object, but unlike arguments it collects the trailing arguments into a real Array, so array methods can be used on it directly.

// Without the rest operator, arguments must be extracted from `arguments`
function concat() {
  var args = Array.prototype.slice.call(arguments);
  return args.join("");
}

// With the rest operator, the arguments are collected directly into an array
function concatWithRest(...strings) {
  return strings.join("");
}
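Both helpers join all of their string arguments, so the following calls behave identically:

concat("foo", "bar", "baz")         // "foobarbaz"
concatWithRest("foo", "bar", "baz") // "foobarbaz"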

Support for upcoming ES features

Array.prototype.includes

The new Array.prototype.includes method, currently a stage 3 proposal for inclusion in ES2016, provides a terse way to determine whether or not an element is in a given array, returning a boolean value.

[1, 2, 3].includes(3) // true
["apple", "banana", "cherry"].includes("apple") // true
["apple", "banana", "cherry"].includes("peach") // false

Ease the pressure on memory while parsing

Recent changes to the V8 parser greatly reduce the memory consumed by parsing files with large nested functions. In particular, this allows V8 to run larger asm.js modules than previously possible.

V8 API

Please check out our summary of API changes. This document is regularly updated a few weeks after each major release. Developers with an active V8 checkout can use 'git checkout -b 4.7 -t branch-heads/4.7' to experiment with the new features in V8 4.7. Alternatively, you can subscribe to Chrome's Beta channel and try out the new features yourself soon.

Posted by the V8 team