Saturday, March 26, 2011

Controlling the Tracing of an Interpreter With Hints, Part 4: Benchmarks

This is part 4 and the final part of the series on how to speed up an interpreter written with PyPy by adding JIT hints to the interpreter. Part 1 described how to control the extent of tracing. Part 2 described how to influence the optimizer with promotion and pure functions. Part 3 described a simple object model and how it can be optimized by doing small rewrites. In this (short) post I present some benchmarks.

Benchmarks

For the benchmarks I ran a subset of the benchmarks on http://speed.pypy.org with CPython and four different executables of PyPy's Python interpreter (all with a JIT). The executables contain all combinations of enabling maps (which make instance attributes fast) and type versions (which make method lookup fast).

  • pypy-slow: contains neither maps nor type versions.
  • pypy-map: contains maps but not type versions.
  • pypy-version: contains type versions but not maps.
  • pypy-full: contains both maps and type versions.

The results are as follows:

The graph shows the speedup over CPython's numbers. The results are quite interesting. Maps by themselves do not speed up much over the bare JIT, whereas type versions alone improve on the JIT baseline in many cases. However, maps are not useless. In combination with type versions they add a nice improvement over just type versions in a number of benchmarks (most notably raytrace-simple and richards but also in crypto-pyaes, django and go).

It's clear that type versions can be arbitrarily effective. A method lookup on a class can be arbitrarily slow, if the inheritance hierarchy becomes deeper and deeper. The full lookup is replaced by one promotion if type versions are enabled.

Maps on the other hand always replace one dict lookup with one promotion. Since dict lookups are already very fast, this by itself does not lead to a gigantic improvement. Only in combination with type versions do they show their full potential.

Tuesday, March 22, 2011

A thank you to the PSF

This year's PyCon was an incredible time; several members of the PyPy team were there, and we'll be blogging more about our experiences in the coming days. However, we quickly wanted to extend a thank you to the Python Software Foundation (PSF).

As you may have heard, on Friday morning at PyCon Jesse Noller handed the PyPy team a check for $10,000, on behalf of the PSF. This was in recognition of our success over the past few years in bringing PyPy from a research project to a fast, compliant, production-ready Python implementation, and to allow us to continue our work on making it faster and more up-to-date with upstream version changes.

Beyond the large check, we're grateful for the endorsement this represents, not only of our work on PyPy, but also of all alternative Python VMs. The PSF has shifted its focus from representing just CPython to representing the Python language, regardless of its implementation, something we are very appreciative of.

From left to right, PyPy people present at PyCon 2011: Maciej Fijałkowski, Armin Rigo, Alex Gaynor, Laura Creighton and Jacob Hallén

Thank you, PSF.

Monday, March 21, 2011

Controlling the Tracing of an Interpreter With Hints, Part 3: Putting it All Together

This is part 3 of the series on how to speed up an interpreter written with PyPy by adding JIT hints to the interpreter. Part 1 described how to control the extent of tracing. Part 2 described how to influence the optimizer with promotion and pure functions. In this post I describe a worked-out example of a small object model for a dynamic language and how to make it efficient using the hints described in the previous posts.

A Simple Object Model

To implement a dynamic language efficiently, the operations on its objects need to be fast. Most dynamic languages have object models that are made by using dictionaries everywhere. Let's look at an example of how the JIT can be made to optimize such operations.

For the purpose of this blog post we will use a very simple and bare-bones object model that just supports very simple classes and instances, without any inheritance or any fancy features. The model has classes, which contain methods. Instances have a class. Instances have their own attributes. When looking up an attribute on an instance, the instance's attributes are searched. If the attribute is not found there, the class' attributes are searched.

To implement this object model, we could use the following RPython code as part of the interpreter source code:

class Class(object):
    def __init__(self, name):
        self.name = name
        self.methods = {}

    def instantiate(self):
        return Instance(self)

    def find_method(self, name):
        result = self.methods.get(name)
        if result is not None:
            return result
        raise AttributeError(name)

    def change_method(self, name, value):
        self.methods[name] = value


class Instance(object):
    def __init__(self, cls):
        self.cls = cls
        self.attributes = {}

    def getfield(self, name):
        result = self.attributes.get(name)
        if result is not None:
            return result
        raise AttributeError(name)

    def write_attribute(self, name, value):
        self.attributes[name] = value

    def getattr(self, name):
        try:
            return self.getfield(name)
        except AttributeError:
            return self.cls.find_method(name)

In this straightforward implementation the methods and attributes are just stored in dictionaries on the classes/instances. While this object model is very simple it already contains all the hard parts of Python's object model. Both instances and classes can have arbitrary fields, and they are changeable at any time. Moreover, instances can change their class after they have been created.

When using this object model in an interpreter, a huge amount of time will be spent doing lookups in these dictionaries. To make the language efficient using a tracing JIT, we need to find a way to get rid of these dictionary lookups somehow.

Let's assume we trace through code that sums three attributes, such as:

inst.getattr("a") + inst.getattr("b") + inst.getattr("c")

The trace could look like this:

# inst.getattr("a")
attributes1 = inst.attributes
result1 = dict.get(attributes1, "a")
guard(result1 is not None)

# inst.getattr("b")
attributes2 = inst.attributes
v1 = dict.get(attributes2, "b")
guard(v1 is None)
cls1 = inst.cls
methods1 = cls1.methods
result2 = dict.get(methods1, "b")
guard(result2 is not None)
v2 = result1 + result2

# inst.getattr("c")
attributes3 = inst.attributes
v3 = dict.get(attributes3, "c")
guard(v3 is None)
cls2 = inst.cls
methods2 = cls2.methods
result3 = dict.get(methods2, "c")
guard(result3 is not None)

v4 = v2 + result3
return(v4)

In this example, the attribute a is found on the instance, but the attributes b and c are found on the class. The trace indeed contains five calls to dict.get, which are slow.

Making Instance Attributes Faster Using Maps

The first step in making getattr faster in our object model is to optimize away the dictionary lookups on the instances. The hints we have looked at in the two earlier blog posts don't seem to help with the current object model. There is no pure function to be seen, and the instance is not a candidate for promotion, because there tend to be many instances.

This is a common problem when trying to apply hints. Often, the interpreter needs a small rewrite to expose the pure functions and nearly-constant objects that are implicitly there. In the case of instance fields this rewrite is not entirely obvious. The basic idea is as follows. In theory instances can have arbitrary fields. In practice, however, many instances share their layout (i.e. their set of keys) with many other instances.

Therefore it makes sense to factor the layout information out of the instance implementation into a shared object. This shared layout object is called a map. Maps are an old idea that comes originally from the SELF language. They are also used by many JavaScript implementations such as V8. I've written about maps before, so I won't explain them fully again.

The rewritten Instance class using maps looks like this:

class Map(object):
    def __init__(self):
        self.attribute_indexes = {}
        self.other_maps = {}

    @purefunction
    def getindex(self, name):
        return self.attribute_indexes.get(name, -1)

    @purefunction
    def new_map_with_additional_attribute(self, name):
        if name not in self.other_maps:
            newmap = Map()
            newmap.attribute_indexes.update(self.attribute_indexes)
            newmap.attribute_indexes[name] = len(self.attribute_indexes)
            self.other_maps[name] = newmap
        return self.other_maps[name]


EMPTY_MAP = Map()

class Instance(object):
    def __init__(self, cls):
        self.cls = cls
        self.map = EMPTY_MAP
        self.storage = []

    def getfield(self, name):
        map = hint(self.map, promote=True)
        index = map.getindex(name)
        if index != -1:
            return self.storage[index]
        raise AttributeError(name)

    def write_attribute(self, name, value):
        map = hint(self.map, promote=True)
        index = map.getindex(name)
        if index != -1:
            self.storage[index] = value
            return
        self.map = map.new_map_with_additional_attribute(name)
        self.storage.append(value)

    def getattr(self, name):
        try:
            return self.getfield(name)
        except AttributeError:
            return self.cls.find_method(name)

Instances no longer use dictionaries to store their fields. Instead, they have a reference to a map, which maps field names to indexes into a storage list. The storage list contains the actual field values. The maps are shared between objects with the same layout. Therefore they have to be immutable, which means that their getindex method is a pure function. When a new attribute is added to an instance, a new map needs to be chosen, which is done with the new_map_with_additional_attribute method on the previous map. Now that we have introduced maps, it is safe to promote the map everywhere, because we assume that the number of different instance layouts is small.
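The layout sharing can be seen directly with a condensed version of the classes above, runnable as plain Python (the JIT hints are left out, since they have no effect when running untraced):

```python
class Map(object):
    def __init__(self):
        self.attribute_indexes = {}
        self.other_maps = {}

    def getindex(self, name):
        return self.attribute_indexes.get(name, -1)

    def new_map_with_additional_attribute(self, name):
        # Memoized in other_maps, so the same insertion order
        # always yields the same Map object.
        if name not in self.other_maps:
            newmap = Map()
            newmap.attribute_indexes.update(self.attribute_indexes)
            newmap.attribute_indexes[name] = len(self.attribute_indexes)
            self.other_maps[name] = newmap
        return self.other_maps[name]

EMPTY_MAP = Map()

class Instance(object):
    def __init__(self):
        self.map = EMPTY_MAP
        self.storage = []

    def write_attribute(self, name, value):
        index = self.map.getindex(name)
        if index != -1:
            self.storage[index] = value
            return
        self.map = self.map.new_map_with_additional_attribute(name)
        self.storage.append(value)

# Two instances that receive attributes in the same order
# end up sharing the very same Map object.
a = Instance(); a.write_attribute("x", 1); a.write_attribute("y", 2)
b = Instance(); b.write_attribute("x", 3); b.write_attribute("y", 4)
assert a.map is b.map
assert a.storage == [1, 2]

# A different insertion order produces a different layout.
c = Instance(); c.write_attribute("y", 5)
assert c.map is not a.map
```

This sharing is exactly what makes promoting the map cheap: however many instances exist, the number of distinct Map objects stays small.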

With this changed instance implementation, the trace we had above changes to the following, where 0xb74af4a8 is the memory address of the Map instance that has been promoted:

# inst.getattr("a")
map1 = inst.map
guard(map1 == 0xb74af4a8)
index1 = Map.getindex(map1, "a")
guard(index1 != -1)
storage1 = inst.storage
result1 = storage1[index1]

# inst.getattr("b")
map2 = inst.map
guard(map2 == 0xb74af4a8)
index2 = Map.getindex(map2, "b")
guard(index2 == -1)
cls1 = inst.cls
methods1 = cls1.methods
result2 = dict.get(methods1, "b")
guard(result2 is not None)
v2 = result1 + result2

# inst.getattr("c")
map3 = inst.map
guard(map3 == 0xb74af4a8)
index3 = Map.getindex(map3, "c")
guard(index3 == -1)
cls2 = inst.cls
methods2 = cls2.methods
result3 = dict.get(methods2, "c")
guard(result3 is not None)

v4 = v2 + result3
return(v4)

The calls to Map.getindex can be optimized away, because they are calls to a pure function and they have constant arguments. That means that index1/2/3 are constant and the guards on them can be removed. All but the first guard on the map will be optimized away too, because the map cannot have changed in between. The optimized trace looks like this:

# inst.getattr("a")
map1 = inst.map
guard(map1 == 0xb74af4a8)
storage1 = inst.storage
result1 = storage1[0]

# inst.getattr("b")
cls1 = inst.cls
methods1 = cls1.methods
result2 = dict.get(methods1, "b")
guard(result2 is not None)
v2 = result1 + result2

# inst.getattr("c")
cls2 = inst.cls
methods2 = cls2.methods
result3 = dict.get(methods2, "c")
guard(result3 is not None)

v4 = v2 + result3
return(v4)

The index 0 that is used to read out of the storage array is the result of the constant-folded getindex call. This trace is already much better than the original one. Now we are down from five dictionary lookups to just two.

Versioning of Classes

Instances were optimized under the assumption that the total number of Instance layouts is small compared to the number of instances. For classes we will make an even stronger assumption. We simply assume that it is rare for classes to change at all. This is not totally reasonable (sometimes classes contain counters or similar things) but for this simple example it is good enough.

What we would really like is if the Class.find_method method were pure. But it cannot be, because it is always possible to change the class itself. Every time the class changes, find_method can potentially return a new value.

Therefore, we give every class a version number, which is increased every time a class gets changed (i.e., the content of the methods dictionary changes). This means that the result of methods.get() for a given (name, version) pair will always be the same, i.e. it is a pure operation. To help the JIT to detect this case, we factor it out into a helper method which is explicitly marked as @purefunction. The refactored Class looks like this:

class VersionTag(object):
    pass

class Class(object):
    def __init__(self, name):
        self.name = name
        self.methods = {}
        self.version = VersionTag()

    def find_method(self, name):
        self = hint(self, promote=True)
        version = hint(self.version, promote=True)
        result = self._find_method(name, version)
        if result is not None:
            return result
        raise AttributeError(name)

    @purefunction
    def _find_method(self, name, version):
        return self.methods.get(name)

    def change_method(self, name, value):
        self.methods[name] = value
        self.version = VersionTag()

What is interesting here is that _find_method takes the version argument but it does not use it at all. Its only purpose is to make the call pure (because when the version number changes, the result of the call might be different than the previous one).
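The invalidation mechanism can be sketched in plain Python: every change_method call replaces the version object, so a trace that guarded on the identity of the old VersionTag will fail its guard the next time it runs.

```python
class VersionTag(object):
    pass

class Class(object):
    def __init__(self, name):
        self.name = name
        self.methods = {}
        self.version = VersionTag()

    def change_method(self, name, value):
        self.methods[name] = value
        # Every change gets a fresh tag; traces that guarded on the
        # old version object are thereby invalidated.
        self.version = VersionTag()

cls = Class("A")
v0 = cls.version
cls.change_method("f", "code-for-f")
v1 = cls.version
assert v0 is not v1           # a guard on v0 would now fail
cls.change_method("g", "code-for-g")
assert cls.version is not v1  # every change bumps the version
```

Note that the tag carries no data at all; only its identity matters, which is what makes guarding on it so cheap.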

The trace with this new class implementation looks like this:

# inst.getattr("a")
map1 = inst.map
guard(map1 == 0xb74af4a8)
index1 = Map.getindex(map1, "a")
guard(index1 != -1)
storage1 = inst.storage
result1 = storage1[index1]

# inst.getattr("b")
map2 = inst.map
guard(map2 == 0xb74af4a8)
index2 = Map.getindex(map2, "b")
guard(index2 == -1)
cls1 = inst.cls
guard(cls1 == 0xb7aaaaf8)
version1 = cls1.version
guard(version1 == 0xb7bbbb18)
result2 = Class._find_method(cls1, "b", version1)
guard(result2 is not None)
v2 = result1 + result2

# inst.getattr("c")
map3 = inst.map
guard(map3 == 0xb74af4a8)
index3 = Map.getindex(map3, "c")
guard(index3 == -1)
cls2 = inst.cls
guard(cls2 == 0xb7aaaaf8)
version2 = cls2.version
guard(version2 == 0xb7bbbb18)
result3 = Class._find_method(cls2, "c", version2)
guard(result3 is not None)

v4 = v2 + result3
return(v4)

The calls to Class._find_method can now be optimized away; so can the promotions of the class and of the version, except for the first ones. The final optimized trace looks like this:

# inst.getattr("a")
map1 = inst.map
guard(map1 == 0xb74af4a8)
storage1 = inst.storage
result1 = storage1[0]

# inst.getattr("b")
cls1 = inst.cls
guard(cls1 == 0xb7aaaaf8)
version1 = cls1.version
guard(version1 == 0xb7bbbb18)
v2 = result1 + 41

# inst.getattr("c")
v4 = v2 + 17
return(v4)

The constants 41 and 17 are the results of folding the _find_method calls. This final trace is now very good. It no longer performs any dictionary lookups. Instead it contains several guards. The first guard checks that the map is still the same. This guard will fail if the same code is executed with an instance that has another layout. The second guard checks that the class of inst is still the same. It will fail if the trace is executed with an instance of another class. The third guard checks that the class did not change since the trace was produced. It will fail if somebody calls the change_method method on the class.

Real-World Considerations

The techniques used above for the simple object model are used for the object model of PyPy's Python interpreter too. Since Python's object model is considerably more complex, some additional work needs to be done.

The first problem that needs to be solved is that Python supports (multiple) inheritance. Therefore looking up a method in a class needs to consider the whole method resolution order. This makes the versioning of classes more complex. If a class is changed, its version changes. At the same time, the versions of all the classes inheriting from it need to be changed as well, recursively. This makes class changes expensive, but they should be rare. On the other hand, a method lookup in a complex class hierarchy is as optimized in the trace as in our object model here.

A downside of class versioning that we haven't yet fixed in PyPy is that some classes do change a lot. An example would be a class that keeps a counter of how many instances have been created so far. This is very slow right now, but we have ideas about how to fix it in the future.

Another optimization is that in practice the shape of an instance is correlated with its class. In our code above, we allow both to vary independently. In PyPy's Python interpreter we act somewhat more cleverly. The class of an instance is not stored on the instance itself, but on the map. This means that we get one fewer promotion (and thus one fewer guard) in the trace, because the class doesn't need to be promoted after the map has been.

More General Patterns

The techniques we used above to make instance and class lookups faster are applicable in more general cases than the one we developed them for. A more abstract view of maps is that of splitting a data-structure into a part that changes slowly, and a part that changes quickly. In the concrete example of maps we split the original dictionary into the map (the slow-changing part) and the storage array (the quick-changing part). All the computation on the slow-changing part can be constant-folded during tracing so that only the manipulation of the quick-changing part remains.

Similarly, versions can be used to constant-fold arbitrary functions of large data structures. The version needs to be updated carefully every time the result of this function can change. Therefore this is useful only if the data structure is expected to change slowly.
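This general pattern can be sketched as a cache keyed on (version, arguments); the Versioned class and cached_lookup helper below are hypothetical names, for illustration only:

```python
class Versioned(object):
    """A slowly-changing structure paired with a version token."""
    def __init__(self, data):
        self.data = dict(data)
        self.version = object()   # fresh token per mutation

    def update(self, key, value):
        self.data[key] = value
        self.version = object()   # invalidates all old cache entries

_cache = {}

def cached_lookup(struct, key):
    # The (version, key) pair fully determines the result, so the
    # lookup may be treated as pure for a fixed version.
    token = (struct.version, key)
    if token not in _cache:
        _cache[token] = struct.data.get(key)
    return _cache[token]

cfg = Versioned({"depth": 3})
assert cached_lookup(cfg, "depth") == 3
cfg.update("depth", 5)   # new version, old cached result no longer used
assert cached_lookup(cfg, "depth") == 5
```

The cost model is the same as for class versions: mutations are made more expensive (a new token, cold caches) in exchange for free lookups in between.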

Conclusion

In this post I showed how to use purefunction and promote to make a small but still relevant dynamic object model no longer use any dictionary lookups after tracing. Instead a number of guards are inserted into the trace to check whether the assumptions about the objects are still true. This makes operations on objects seriously faster. I plan to write another small post that shows the speed benefits for PyPy's Python interpreter for exactly these operations.

Tuesday, March 15, 2011

Controlling the Tracing of an Interpreter With Hints, Part 2: Controlling Optimization

This is part 2 of a series on how to speed up an interpreter written with PyPy by adding JIT hints to the interpreter. Part 1 described how to control the extent of tracing. In this post I will describe how to add hints that influence the optimizer. If applied correctly these techniques can give really big speedups by pre-computing parts of what happens at runtime. On the other hand, if applied incorrectly they might lead to code bloat, thus making the resulting program actually slower.

Background

Before sending the trace to the backend to produce actual machine code, it is optimized. The optimizer applies a number of techniques to remove or reduce the number of operations: most of these are well-known compiler optimization techniques, with the difference that they are easier to apply in a tracing JIT because it only has to deal with linear traces. Among them are constant folding and allocation removal.

In some places it turns out that if the interpreter author rewrites some parts of the interpreter with these optimizations in mind the traces that are produced by the optimizer can be vastly improved.

In this post I will describe two hints that allow the interpreter author to increase the optimization opportunities for constant folding. For constant folding to work, two conditions need to be met:

  • the arguments of an operation actually need to all be constant, i.e. statically known by the optimizer
  • the operation needs to be pure, i.e. always yield the same result given the same arguments.

The PyPy JIT generator automatically detects the majority of these conditions. However, for the cases in which the automatic detection does not work, the interpreter author can apply hints to improve the optimization opportunities. There is one kind of hint for each of the two conditions above.

Note: These hints are written by an interpreter developer and applied to the RPython source of the interpreter. Normal Python users will never see them.

Where Do All the Constants Come From

It is worth clarifying what a "constant" is in this context. A variable of the trace is said to be constant if its value is statically known by the optimizer.

The simplest example of constants are literal values. For example, if in the RPython source code we have a line like y = x + 1, the second operand will be a constant in the trace.

However, the optimizer can statically know the value of a variable even if it is not a constant in the original source code. For example, consider the following fragment of RPython code:

if x == 4:
    y = y + x

If the fragment is traced with x being 4, the following trace is produced:

guard(x == 4)
y = y + x

In the trace above, the value of x is statically known thanks to the guard. Remember that a guard is a runtime check. The above trace will run to completion when x == 4. If the check fails, execution of the trace is stopped and the interpreter continues to run.

There are cases in which it is useful to turn an arbitrary variable into a constant value. This process is called promotion and it is an old idea in partial evaluation (it's called "the trick" there). Promotion is also heavily used by Psyco and by all older versions of PyPy's JIT. Promotion is a technique that only works well in JIT compilers; in static compilers it is significantly less applicable.

Promotion is essentially a tool for trace specialization. In some places in the interpreter it would be very useful if a variable were constant, even though it could have different values in practice. In such a place, promotion is used. The typical reason to do that is if there is a lot of computation depending on the value of that variable.

Let's make this more concrete. If we trace a call to the following function:

def f1(x, y):
    z = x * 2 + 1
    return z + y

We get a trace that looks like this:

v1 = x * 2
z = v1 + 1
v2 = z + y
return(v2)

Observe how the first two operations could be constant-folded if the value of x were known. Let's assume that the value of x can vary, but does so rarely, i.e. only takes a few different values at runtime. If this is the case, we can add a hint to promote x, like this:

def f2(x, y):
    x = hint(x, promote=True)
    z = x * 2 + 1
    return z + y

The meaning of this hint is that the tracer should pretend that x is a constant in the code that follows. When just running the code, the hint has no effect, as it simply returns its first argument. When tracing, some extra work is done. Let's assume that this changed function is traced with the arguments 4 and 8. The trace will be the same, except for one operation at the beginning:

guard(x == 4)
v1 = x * 2
z = v1 + 1
v2 = z + y
return(v2)

The promotion is turned into a guard operation in the trace. The guard captures the value of x as it was at runtime. From the point of view of the optimizer, this guard is not any different than the one produced by the if statement in the example above. After the guard, the rest of the trace can assume that x is equal to 4, meaning that the optimizer will turn this trace into:

guard(x == 4)
v2 = 9 + y
return(v2)

Notice how the first two arithmetic operations were constant folded. The hope is that the guard is executed quicker than the multiplication and the addition that was now optimized away.

If this trace is executed with values of x other than 4, the guard will fail, and execution will continue in the interpreter. If the guard fails often enough, a new trace will be started from the guard. This other trace will capture a different value of x. If it is e.g. 2, then the optimized trace looks like this:

guard(x == 2)
v2 = 5 + y
return(v2)

This new trace will be attached to the guard instruction of the first trace. If x takes on even more values, a new trace will eventually be made for all of them, linking them into a chain. This is clearly not desirable, so we should promote only variables that don't vary much. However, adding a promotion hint will never produce wrong results. It might just lead to too much assembler code.

Promoting integers, as in the examples above, is not used that often. However, the internals of dynamic language interpreters often have values that are variable but vary little in the context of parts of a user program. An example would be the types of variables in a user function. Even though in principle the argument to a Python function could be any Python type, in practice the argument types tend not to vary much. Therefore it is possible to promote the types. In the next blog post I will give a complete example of how this works.

Declaring New Pure Operations

In the last section we saw a way to turn arbitrary variables into constants. All pure operations on these constants can be constant-folded. This works great for constant folding of simple types, e.g. integers. Unfortunately, in the context of an interpreter for a dynamic language, most operations actually manipulate objects, not simple types. The operations on objects are often not pure and might even have side-effects. If one reads a field out of a constant reference to an object this cannot necessarily be folded away because the object can be mutated. Therefore, another hint is needed.

As an example, take the following class:

class A(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def f(self, val):
        self.y = self.compute() + val

    def compute(self):
        return self.x * 2 + 1

Tracing the call a.f(10) of some instance of A yields the following trace (note how the call to compute is inlined):

x = a.x
v1 = x * 2
v2 = v1 + 1
v3 = v2 + val
a.y = v3

In this case, adding a promote of self in the f method to get rid of the computation of the first few operations does not help. Even if a is a constant reference to an object, reading the x field does not necessarily always yield the same value. To solve this problem, there is another annotation, which lets the interpreter author communicate invariants to the optimizer. In this case, she could decide that the x field of instances of A is immutable, and therefore compute is a pure function. To communicate this, there is a purefunction decorator. If the code in compute should be constant-folded away, we would change the class as follows:

class A(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def f(self, val):
        self = hint(self, promote=True)
        self.y = self.compute() + val

    @purefunction
    def compute(self):
        return self.x * 2 + 1

Now the trace will look like this:

guard(a == 0xb73984a8)
v1 = compute(a)
v2 = v1 + val
a.y = v2

Here, 0xb73984a8 is the address of the instance of A that was used during tracing. The call to compute is not inlined, so that the optimizer has a chance to see it. Since the compute function is marked as pure, and its argument is a constant reference, the call will be removed by the optimizer. The final trace looks like this:

guard(a == 0xb73984a8)
v2 = 9 + val
a.y = v2

(assuming that the x field's value is 4).

On the one hand, the purefunction annotation is very powerful. It can be used to constant-fold arbitrary parts of the computation in the interpreter. However, the annotation also gives you ample opportunity to mess things up. If a function is annotated to be pure, but is not really, the optimizer can produce subtly wrong code. Therefore, a lot of care has to be taken when using this annotation.

Observably Pure Functions

Why can't we simply write an analysis to find out that the x field of A instances is immutable, and deduce that compute is a pure function, since it only reads the x field and has no side effects? This might be possible in this particular case, but in practice the functions that are annotated with the purefunction decorator are usually more complex. The easiest example is a function that uses memoization to cache its results. If you analyze this function, it looks like it has side effects, because it changes the memoizing dictionary. However, because this side effect is not externally visible, the function is pure when viewed from the outside. This is a property that is not easily detectable by analysis. Therefore, the purity of such functions needs to be annotated.
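A minimal sketch of such a function (plain Python, illustrative only): the cache write is a side effect, but one that can never be observed through the function's results.

```python
_fib_cache = {}

def fib(n):
    # Writing to _fib_cache is a side effect, but not an externally
    # visible one: fib always returns the same result for the same
    # argument, so it can safely be declared pure.
    if n in _fib_cache:
        return _fib_cache[n]
    result = n if n < 2 else fib(n - 1) + fib(n - 2)
    _fib_cache[n] = result
    return result

assert fib(10) == 55
```

A static analysis only sees the dictionary mutation; that the mutation is unobservable is a fact about the programmer's intent, which is exactly what the annotation communicates.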

Immutable Fields

One of the most common cases of pure functions is reading immutable values out of objects. Since this is so common, we have special syntactic sugar for it. An RPython class can have a class attribute _immutable_fields_ set to a list of strings, listing the fields that cannot be changed. This is equivalent to using getters and annotating them with purefunction.
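Applied to the class from above, this looks roughly like the following sketch (when run as plain Python the declaration is simply inert; it only matters to the RPython toolchain):

```python
class A(object):
    _immutable_fields_ = ['x']   # the JIT may constant-fold reads of self.x

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def f(self, val):
        # No explicit purefunction-marked getter is needed:
        # reading self.x is known to be pure.
        self.y = self.x * 2 + 1 + val

a = A(4, 0)
a.f(10)
assert a.y == 19   # 4 * 2 + 1 + 10
```

The effect on the trace is the same as in the compute example: once a is promoted, the read of the immutable x field folds to a constant.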

Conclusion

In this blog post I explained two more hints that can be used in the source code of the interpreter. They are used to influence what the optimizer does with the trace. I realize the examples given here are a bit too small; in the next installment I will give a worked-out example that puts all the pieces together.

Saturday, March 12, 2011

Controlling the Tracing of an Interpreter With Hints, Part 1: Controlling the Extent of Tracing

The question I was asked most often during my recent US trip was how exactly the hints work that interpreter authors can use to improve the execution speed of the programs running on their interpreters. Since those hints are not really documented all that well, I decided to write blog posts about them. This is the first one.

Background

First, let's recap some basics: PyPy's approach to implementing dynamic languages is to write an interpreter for the language in RPython. This interpreter can be translated to C and then further to machine code. The interpreter consists of code in the form of a large number of generated C functions and some data. Similarly, the user program consists of functions in the language the interpreter executes.

As was explained in a blog post and a paper two years ago, PyPy's JIT is a meta-tracer. Since we want to re-use our tracer for a variety of languages, we don't trace the execution of the user program, but instead trace the execution of the interpreter that is running the program. This means that the traces don't contain the bytecodes of the language in question, but RPython-level operations that the interpreter did to execute the program.

On the other hand, the loops that are traced by the tracer are the loops in the user program. This means that the tracer stops tracing after one iteration of the loop in the user function that is being considered. At this point, it can have traced many iterations of the interpreter main loop.

Here's a diagram of this process:

On the left you see the levels of execution. The CPU executes the binary of PyPy's Python interpreter, which consists of RPython functions that have been compiled first to C, then to machine code. Some of these functions contain loops, others don't. The interpreter runs a Python program written by a programmer (the user). If the tracer is used, it traces operations on the level of the interpreter. However, the extent of the trace is determined by the loops in the user program.

How Far Should Tracing Go

When the tracer encounters a function call at the interpreter level, e.g. the interpreter main loop calling a helper function, it can do one of two things:

  1. it can trace into the helper function, effectively inlining it into the trace.
  2. it can refrain from tracing into the function and instead record a call to it as an operation in the trace. Such a call operation in the trace is sometimes called a residual call.
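To make the two options concrete, here is a toy illustration in plain Python. This is not PyPy's actual tracer; the helper, the trace format, and the operation names are invented for this sketch:

```python
# Toy illustration (invented for this post, not PyPy's real tracer):
# when the tracer reaches a call to `helper`, it can either inline the
# helper's operations into the trace or record one residual call.

def helper(x):
    return x * 2 + 1

def trace_inlined(x):
    # Option 1: trace into the helper. Its individual operations show
    # up in the trace, which gives the optimizer more to work with.
    trace = [("int_mul", "x", 2), ("int_add", "t1", 1)]
    return trace, x * 2 + 1

def trace_residual(x):
    # Option 2: don't trace into the helper. The trace contains only a
    # single opaque call operation, hiding the helper's internals.
    trace = [("residual_call", "helper", "x")]
    return trace, helper(x)
```

Both variants compute the same result; the difference is only in how much of the helper's behaviour is visible to the optimizer afterwards.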

As a default, the tracer will try to trace into the helper because that will give more information to the optimizer, allowing it to do a better job. This is particularly important for the allocation removal optimization, because if a freshly allocated object is passed as an argument to a residual call, its allocation cannot be optimized away.

There is a problem however if the helper function itself contains a loop. The tracer records the linear sequence of operations that are being executed. Thus when it encounters a loop on the interpreter level it records all the operations of every iteration of the loop itself, with the net effect of unrolling it. The only place where the tracer stops and tries to close the trace is in the main loop of the interpreter. When the tracer encounters the main loop, it also checks whether the original user loop has been closed, and thus whether it can stop tracing.

For most helper functions in the interpreter that contain loops, fully unrolling does not make sense. If a loop is unrolled, the trace is specific to the number of iterations that was seen during tracing. If the trace is later executed with a different number of iterations, the trace will be left via a guard failure, which is inefficient. Therefore the default behaviour of the tracer is to never trace into a function on the interpreter level that contains a loop, but to trace into all non-looping helper functions.

This default behaviour is essentially a heuristic, but one that usually makes sense. We want to produce just enough traces to make the resulting code efficient, but not more. Therefore we trace as much as possible (everything by default) except for those functions with loops, where tracing would produce code that is less general than it could be.

As an example of a helper with a loop, take string concatenation. It loops over the characters of both arguments and copies them over into the result string. It does not make sense to unroll the loops in this function. If we do that, the resulting trace can only be used for strings of the length that was seen during tracing. In practice, the string lengths are usually different each run, meaning that the trace with unrolling is not run to completion in most cases.
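The following toy sketch (again invented for illustration, not PyPy's real trace format) shows why unrolling the copy loop specializes the trace on the argument lengths:

```python
# Toy sketch: "tracing" string concatenation with the copy loops
# unrolled records one copy operation per character. The resulting
# trace is therefore only valid for the exact lengths seen during
# tracing; different lengths would fail a guard.

def traced_concat(a, b):
    trace = []
    result = []
    for ch in a:
        trace.append("copy_char")   # one trace op per iteration: unrolled
        result.append(ch)
    for ch in b:
        trace.append("copy_char")
        result.append(ch)
    return "".join(result), trace

# Tracing the concatenation of a 2-char and a 3-char string produces a
# trace with 5 copy operations, specialized to those two lengths.
s, trace = traced_concat("ab", "xyz")
```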

Influencing the Default Behaviour

Sometimes the default behaviour is not actually what is wanted. This is something the interpreter author has to decide, usually by looking at the traces that are produced and deciding that they should be improved. There are two ways in which the default is wrong:

  • false negatives: a helper function that does contain a loop would benefit from being traced into, with the loop unrolled.
  • false positives: a helper function that does not contain a loop is inlined into the trace, but the interpreter author judges that this is not helpful.

If the interpreter author finds false negatives or false positives, she can fix that by applying a hint to the tracer. These hints take the form of function decorators (which both live in the pypy.rlib.jit module). In the next two subsections I will describe these two function decorators and their use.

Unrolling Functions With Loops

The first decorator, used to fix false negatives, is the unroll_safe decorator. It is used to tell the tracer to always trace into a function that has a loop, effectively unrolling the loop. This decorator should be used only if the loop in the helper function is expected to always run for the same number of iterations. This sounds like a strong restriction, but in practice it is less severe: the number of iterations only needs to be the same in the context that the helper function is traced from.

It is easiest to understand this condition via an example. Let's look at the BUILD_TUPLE bytecode in Python. It takes one argument, the length n of the tuple being built. The bytecode pops n arguments from the stack, turns them into a tuple and pushes that tuple on the stack. Thus the function that implements BUILD_TUPLE in PyPy's Python interpreter calls a helper popvalues which pops n values from the stack and returns them in a list. This helper is implemented with a loop and would thus not be traced into by default. The loop in the helper can run for very different numbers of iterations, because it is used in a variety of places. However, for every concrete BUILD_TUPLE bytecode, the argument will be constant. Therefore it is safe (and even necessary) to annotate popvalues with the unroll_safe decorator.
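A hedged sketch of what such a helper could look like. The real popvalues lives in PyPy's RPython source and the real unroll_safe comes from pypy.rlib.jit; here a no-op stand-in decorator is used so the snippet runs under plain Python, and the stack is modelled as a plain list:

```python
def unroll_safe(func):
    # no-op stand-in for pypy.rlib.jit.unroll_safe, for illustration only
    return func

@unroll_safe
def popvalues(stack, n):
    # Pops n values off the stack and returns them in push order.
    # For each concrete BUILD_TUPLE bytecode, n is a constant, so
    # unrolling this loop during tracing is safe (and necessary).
    values = [None] * n
    for i in range(n - 1, -1, -1):
        values[i] = stack.pop()
    return values
```

After tracing a BUILD_TUPLE with n = 2, the trace then simply contains two pop operations instead of a loop or a residual call.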

A different example is the implementation of the isinstance builtin. It is used to check whether an object a is an instance of a class B like this: isinstance(a, B). The second argument of the function can also be a tuple of classes to check whether an object is an instance of one of a number of classes: isinstance(a, (A, B, C, D)). To implement this second case, the implementation of isinstance contains a loop iterating over the elements of the tuple. The number of loop iterations can vary, but is usually fixed for each individual call site which typically just lists a few classes in the source code. Therefore it is also safe to annotate the implementation of isinstance with the unroll_safe decorator.
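The tuple case of isinstance could be sketched like this (again with a hypothetical no-op stand-in for the RPython decorator; the real implementation in PyPy operates on wrapped objects and looks different):

```python
def unroll_safe(func):
    # no-op stand-in for pypy.rlib.jit.unroll_safe, for illustration only
    return func

@unroll_safe
def isinstance_any(obj, classes):
    # The loop runs once per class in the tuple. The tuple length is
    # fixed at each call site (it is spelled out in the source code),
    # so unrolling this loop during tracing is safe.
    for cls in classes:
        if isinstance(obj, cls):
            return True
    return False
```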

Preventing the Tracing of Functions

The second decorator dont_look_inside is used to fix false positives. It tells the JIT to never trace into the decorated function and just always produce a residual call instead. This decorator is in many ways less important than the unrolling one (except for a special situation that I will describe in a follow-up post). It is used if tracing into a function is not expected to yield any speed benefits, because the optimizer will not be able to improve it much. This is often the case if the called helper function does not contain any "dynamic" behaviour. In such a situation it is better to just leave the function call in the trace, because that produces less code.

An example would be the import mechanism in Python. It's very unlikely that any performance improvement can be had by turning part of it into assembler. Therefore we hide it from the tracer by annotating the relevant functions with dont_look_inside.
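A minimal sketch of the pattern, with a hypothetical helper standing in for a piece of the import machinery (the real dont_look_inside comes from pypy.rlib.jit; a no-op stand-in is used here so the snippet runs under plain Python):

```python
def dont_look_inside(func):
    # no-op stand-in for pypy.rlib.jit.dont_look_inside, illustration only
    return func

@dont_look_inside
def lookup_module(name, registry):
    # Stands in for a complex, non-"dynamic" helper such as module
    # resolution during import: the optimizer could not improve the
    # traced operations much, so recording one residual call keeps
    # the trace short instead of bloating it with inlined code.
    return registry.get(name)
```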

Conclusion

In this post we discussed two hints that can be used to control precisely which parts of the interpreter should be meta-traced. If these hints are used carefully, they can go a long way toward making the interpreter produce traces that contain exactly the interesting part of the execution, along with calls to the functions that cannot be optimized by tracing techniques.

In the next part of this series I will discuss a different set of hints that can be used to strongly optimize traces.

Friday, March 11, 2011

Bay Area 2011 Tour Summary

We spent the week in the San Francisco Bay Area showing off PyPy. Here are notes and photos of the tour.

Day 1: Google SF

Google has offices in downtown San Francisco. They are at a beautiful place and the views are spectacular. We thank Wesley Chun and Guido van Rossum for organizing this meeting. Between 25 and 30 engineers showed up. Some of them were Python programmers, but others were C++ programmers; and they all seem to have real problems that they want to solve with PyPy. We didn't have slides prepared yet, so we mostly ran demos and talked. As predicted, Google would love SWIG support. They suggested that we rename the translation toolchain (as we vaguely thought too) to separate it more from PyPy's Python interpreter; up until today, many had no idea that they could use PyPy for other languages. All in all, it was very positive and people looked forward to meeting up at PyCon.

Day 2: Stanford

This was the most academically-oriented talk. You can find the abstract, the slides (PgUp/PgDown to navigate) and the video here. There were around 35 people in the audience, and maybe 1000 real-time video watchers (who didn't get to ask questions). The live audience seemed to be a mixture of students, professors, and people from the local industry. We thank David Allison and Andy Freeman for organizing it. It has been two or three years since they invited me (Armin) and I finally managed to get here :-)

The slides are longer than the talk; we focused on the JIT because that was what the audience was most interested in. They were really impressed at the stability, the tests, and that we don't have lots of bugs reported in the JIT of our latest public release. We later found out that many who came to the talk believed that they were going to get a talk about how we jitted a subset of Python because real Python is too hard -- impossible to do. They came to heckle with examples of how Python was impossible. So they were amazed when the first slide of Armin's presentation was "Python is complicated", and the next slide "Python is messy". It was a positive outcome. We made new fans :-)

Day 3: Yelp

As you can see in the image, tons of people showed up -- ~140. Thanks to Grace Law, who is the coordinator for the SF Python Meet-up, and to Jimmy Retzlaff and Ashley King-Bishof from Yelp. Yelp is also located in downtown San Francisco. This looks like the place to be if you are a start-up in California (and not in Silicon Valley): lots of enthusiastic young people are here, and they are hiring. Yelp has an enormous open space, suitable for huge parties, and the coolest beer dispensers on the planet, made as a hack-a-thon project by three Yelp engineers (pictured below):

By the way, their management structure seems to be flat. There are almost no line managers, i.e. managers for the engineering staff; instead they self-organize into teams. This is not what you expect for the USA; things appear to have changed a lot.

The talk was in two sections, "PyPy from the user's point of view" and "How the JIT works". Good feedback; impressed that we support all of Python 2.7 (including all the modules that are in C in the stdlib), and impressed that the Python 3.0 conversion is not considered a big deal by us, although we have no precise date yet. The plan is, of course, just to tweak the interpreter until it supports both (by adding the necessary conditions); the other aspects like GC and the JIT will not be affected at all.

Day 4: Dropbox

This was another place full of excited, successful young people. The CTO looks like he turned 30 last week, and he's been CTO for 4 years now. The three of us were quite obviously the oldest people there. We felt old. They have another great big open barn complex. It's loud. Very loud. Loud refrigerators, loud street noise, loud machinery in the walls doing who knows what, loudly.

This was the first tech talk at Dropbox. Thanks to Rian Hunter for organizing it. They have a big kitchen, and we held the talk in there. There was a skylight, which made the room too bright, so it was harder to read the slides than would otherwise be the case. They were jazzed about our visit, and wanted copies of all the pictures Jacob took before he left.

They seemed familiar with Google V8, and thought that how long it took to build PyPy was a great incentive for us to make PyPy faster. They are very interested in fast ctypes, fast SWIG, fast Cython. They were pleased and surprised that we don't have too much JIT bloat (typically ~10% of the total RAM usage).

The mobile developers want a smaller Python more than a faster one. Python takes too much memory given the tiny amount available on a lot of cell phones. Not that we have an answer to this problem now.

They were pleased to learn that we will soon be able to JIT ctypes code. And the fact that Armin knows many ways to segfault CPython was a bit of a shock. We talked for an hour after the presentation. Again, a very positive outcome.

Days 5 and 6: Noisebridge sprint

About six people showed up for the sprint. (Late. Californians really do start the day at 11.) Noisebridge is a very eclectic place; people show up to do pretty much everything from sewing to breaking apart equipment to making robots and beer. It's donation-driven. Thanks to Jim Stockford for volunteering the space and arranging this and helping us set up for the sprint.

During the sprint, we did a little bit of everything; there was no clear pattern. Ademan worked on sqlite, Greg Price looked to see if his software could run on PyPy, Will worked on the documentation, and a few of us fixed some more 2.7 tests. Alex Gaynor and Fijal joined us, too.

Day 7: Google Mountain View and Mozilla

We gave two talks on the 7th day of our trip so we were already quite exhausted. Fortunately new people joined, so the talks were actually split between multiple people. We would like to thank Peter Norvig and Ben Bayer for inviting us to Google and Andreas Gal, Brendan Eich and Dave Herman for inviting us to Mozilla. Both talks should hopefully appear online at some point soon, but as of now we don't have a link.

It was pretty incredible to find ourselves at Mozilla talking with at least 15 people who deeply understood the ideas of tracing JITs and also understood why we undertook the decision to generate our JIT instead of writing it. They suffered from having to write JavaScript JIT (even multiple ones) by hand, as Armin did with Psyco. He deeply sympathizes. The discussion afterwards was very successful and we're looking forward to cooperating with them. Many exciting things were discussed as possibilities.

The next day we went to PyCon, which is ongoing and a topic for yet another blog post.

Wednesday, March 2, 2011

US Trip Report: POPL, Microsoft, IBM

Some notes from my recent trip (from 23rd of January to 17th of February) to the US, where I presented PyPy at various scientifically oriented places. In summary, there seems to be quite a bit of interest in PyPy within the research community; details below.

PEPM/POPL/STOP

From the 24th to the 29th of January I was in Austin, Texas at the POPL conference, where I gave a talk at one of the workshops, PEPM (Partial Evaluation and Program Manipulation). The title of our paper is "Allocation Removal by Partial Evaluation in a Tracing JIT", the abstract is:

The performance of many dynamic language implementations suffers from high allocation rates and runtime type checks. This makes dynamic languages less applicable to purely algorithmic problems, despite their growing popularity. In this paper we present a simple compiler optimization based on online partial evaluation to remove object allocations and runtime type checks in the context of a tracing JIT. We evaluate the optimization using a Python VM and find that it gives good results for all our (real-life) benchmarks.

The talk (slides) seemed to be well-received and there was a good discussion afterwards. PEPM in general was a very enjoyable workshop with many interesting talks on partial evaluation (which I am very interested in) and a great keynote by Olivier Danvy about "A Walk in the Semantic Park".

POPL itself was a bit outside of the area I am most knowledgeable in, most of the talks being on formal topics. Some of the talks that stuck in my mind:

  • "The Design of Kodu: A Tiny Visual Programming Language for Children on the Xbox 360", the keynote by Matthew MacLaurin from Microsoft Research. I didn't know about Kodu before, and was very impressed by it.
  • "Automating String Processing in Spreadsheets using Input-Output Examples" (paper) by Sumit Gulwani (also from MS Research) describes a plugin to Excel that can automate many common string processing tasks by giving a couple of examples, which are then abstracted into a generic string manipulation. Very cool.
  • "Dynamic Inference of Static Types for Ruby" (paper) by Michael Furr, Jong-hoon (David) An, Jeffrey S. Foster and Michael Hicks describes an approach to type inference that works by observing the actual types seen during unit-testing. Similar things have been done a few times before, however, the paper actually gives a correctness result.
  • "The Essence of Compiling with Traces" (paper) by Shu-Yu Guo and Jens Palsberg describes a formalization of a simple imperative language and proves that executing it using trace compilation will do exactly the same thing as using an interpreter. It also looks at what conditions an optimization on traces must fulfill to still produce valid results.

After the main conference, I took part in the STOP (Scripts to Programs) workshop. It had a great keynote "Scripting in a Concurrent World" by John Field about the Thorn language and a few interesting other talks.

Microsoft Research

After POPL I went to Redmond to visit Microsoft Research for a week, specifically the RiSE group. This is the group that did the SPUR project, a meta-tracing JIT for C# applied to a JavaScript interpreter in C#. I compared PyPy to SPUR last year. I am very grateful for Microsoft for inviting me there.

At Microsoft I gave a talk about "PyPy's Approach to Implementing Dynamic Languages Using a Tracing JIT Compiler", the slides of which can be found here. The talk was filmed and is online. People seemed to be impressed with the "product qualities" of PyPy, e.g. the buildbot infrastructure and speed tracking website.

The rest of the time I discussed with various researchers in the RiSE group, particularly with Nikolai Tillmann. We talked a lot about similarities and differences between SPUR and PyPy and tried to understand our respective projects better. SPUR is a really great project and I learned a lot in the discussions, for example about the optimizations and heuristics their trace compiler uses.

Another very cool project done by the RiSE group that I learned more about is PEX. PEX is a unit test generator for C# that tries to produce unit tests for so-far untested execution paths within methods. There is an online puzzle version of it, if you want to get an impression of the technology (including a very impressive C# IDE in the browser).

IBM

For the last part of the trip I stayed in New York City for two weeks, mostly as a vacation. However, I also visited IBM Watson Research Center for two days, to which I had been invited by David Edelsohn.

The first day I gave the same presentation I had given at Microsoft (with some improvements to the slides), again it was quite well received. The rest of the time I spent in (very fruitful) discussions with various people and teams, among them the Liquid Metal team and the Thorn team.

The second day I met with members of the FIORANO group, who are working on dynamic compilation for dynamic languages and Java. They explored various ways to speed up Python, both by improving the CPython interpreter as well as with JIT compilation techniques.

Another of their projects is to add a trace compiler to IBM's J9 JVM, about which the paper "A Trace-based Java JIT Compiler Retrofitted from a Method-based Compiler" is going to appear at CGO. I discussed tracing JITs with Peng Wu, one of the authors of that paper. Peng tries to systematically look at the various heuristics found in the different VMs that use tracing JITs. This is a very different perspective from the one I usually have, focusing on how to improve PyPy's specific heuristics. Therefore that discussion helped me think about the issues more generally.

Another goal of the group is to try to find benchmarks that are representative for typical Python workloads, which is something that has been done very carefully for Java e.g. when developing the DaCapo benchmark suite. The benchmarks that the Python community uses have not been selected in such a careful and measured way, so I think that trying to be more systematic there is a very worthwhile endeavour.