
Simplest pathological program for supercompilers

May 16, 2015 - Tagged as: en, supercompilation.

While working on one of my program transformation ideas, I’ve found a very simple program that is apparently pathological for most supercompilers:

loop x = loop (x + 1)

I tried this program on several supercompilers: “supercompilation-by-evaluation”¹, “supero”, and “sc-mini”.

In the ideal case a supercompiler would just generate the same program, without making any changes.

There’s one supercompiler that I couldn’t try: chsc (the Cambridge Haskell Supercompiler). I wasted a lot of time trying to make it work, but failed. If you’re able to run it, please post the results in the comments section below.

If you know any other supercompilers that I can test, please tell me about those too.


[May 17, 2015] UPDATE: I thought about why this is a bad case for supercompilers and found some explanations.

First of all, this was not a fair comparison: in the case of “supercompilation-by-evaluation” and “supero” I didn’t use Peano definitions; I used integers and the primitive operation (+) instead. I fixed this, and “supercompilation-by-evaluation” produced this program:

let
  h0 = \i_u18 -> h1 i_u18
  h1 = \i_u18 -> let
                   loop_u15 = h2
                   a_u48 = h3 i_u18
                 in loop_u15 a_u48
  h2 = \i_u52 -> h1 i_u52
  h3 = \i_u18 -> let a_u35 = h4 i_u18
                 in S a_u35
  h4 = \i_u18 -> let a_u20 = h5 i_u18
                 in S a_u20
  h5 = \i_u18 -> S i_u18
in h0

When simplified, it turns into:

loop = loop . S . S . S

So it still has the problem of unfolding the loop for a while.

I didn’t try this on “supero”, but in any case it shouldn’t loop forever; that’s probably a bug in the implementation. Both supercompilers use the same termination criterion, so I’d expect “supero” to behave the same way.

“sc-mini” is deliberately kept simple. It checks the size of the term and blows the whistle² if it’s larger than some fixed amount (set to 40 in the source code). Indeed, the term produced by sc-mini contains 39 S applications and a variable. In the paper³, the author mentions “homeomorphic embedding” and refers the reader to some papers that describe it.
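The size check is simple enough to sketch in a few lines (my own illustration, in Python for brevity; only the threshold of 40 comes from sc-mini’s source, and the nested-tuple encoding of terms is an assumption):

```python
SIZE_LIMIT = 40  # the fixed amount set in sc-mini's source code

def size(term):
    """Count the nodes of a term encoded as ("head", subterm, ...)."""
    return 1 + sum(size(sub) for sub in term[1:])

def whistle(term):
    return size(term) > SIZE_LIMIT

# Unfold loop's argument starting from a variable: a term with 39 S
# applications and a variable has size 40, so the whistle only blows on
# the next unfolding, leaving exactly the residual term described above.
t = ("x",)
while not whistle(("S", t)):   # would the next unfolding still be allowed?
    t = ("S", t)
print(size(t))  # 40: 39 S applications plus the variable
```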

I think a supercompiler that uses homeomorphic embedding would stop earlier than “supercompilation-by-evaluation”. I’d imagine something like this:
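For illustration, here is a small sketch of that whistle (my own code, again in Python, not taken from any of the supercompilers above; the nested-tuple term encoding is an assumption). The standard homeomorphic embedding check has two cases: coupling (same head, arguments pairwise embedded) and diving (embedded in some subterm):

```python
def embedded(a, b):
    """True if term `a` is homeomorphically embedded in term `b`.
    Terms are tuples: ("head", subterm, subterm, ...)."""
    # Coupling: same head symbol and arity, arguments pairwise embedded.
    if a[0] == b[0] and len(a) == len(b) and \
            all(embedded(x, y) for x, y in zip(a[1:], b[1:])):
        return True
    # Diving: `a` is embedded in one of `b`'s immediate subterms.
    return any(embedded(a, sub) for sub in b[1:])

# Driving loop Z one step gives loop (S Z); the earlier state is embedded
# in the new one, so the whistle blows after a single unfolding.
s0 = ("loop", ("Z",))         # loop Z
s1 = ("loop", ("S", ("Z",)))  # loop (S Z)
print(embedded(s0, s1))  # True: stop here and residualize
```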

This would compile our program to loop (S Z), which is maybe not ideal (it still takes a redundant step) but better than what the others produce.

Quoted from the paper “Supercompilation by Evaluation”:

Much of the supercompilation literature makes use of the homeomorphic embedding test for ensuring termination. Users of this test uniformly report that testing the termination condition makes up the majority of their supercompiler’s runtime⁴. The tag-bag criteria appears to be much more efficient in practice, as our supercompiler spends only 6% of its runtime testing the criteria.

Quoted from the paper “Rethinking Supercompilation”:

In some cases, our rule is certainly less restrictive than the homeomorphic embedding. The example in §2.6.4 would have stopped one step earlier with a homeomorphic embedding. Under a fairly standard interpretation of variable names and let expressions, we can show that our rule is always less restrictive than the homeomorphic embedding – although other differences in our treatment of expressions mean such a comparison is not necessarily meaningful. However, we did not choose our termination criteria to permit more expressions – it was chosen for both simplicity and compilation speed.

So it seems that the tag-bag approach, compared to homeomorphic embedding, is 1) faster and 2) less restrictive (meaning it sometimes allows more steps to be taken before stopping evaluation). This is probably why it evaluates the loop a couple of times where homeomorphic embedding would stop after just one evaluation.
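To see concretely why tag-bags can take an extra step on this example, here is a rough sketch (my simplified reading of the tag-bag test from the paper; the tag names are made up): the whistle blows against an earlier state only when the later state has the same *set* of tags and at least as many of them.

```python
from collections import Counter

def tag_bag_whistle(earlier, later):
    """Sketch of the tag-bag test: blow the whistle when the later state's
    tag set equals the earlier one's and its tag bag is at least as big."""
    a, b = Counter(earlier), Counter(later)
    return set(a) == set(b) and sum(b.values()) >= sum(a.values())

# States of our loop, with one (made-up) tag per subterm:
#   loop Z         -> {loop, Z}
#   loop (S Z)     -> {loop, S, Z}
#   loop (S (S Z)) -> {loop, S, S, Z}
print(tag_bag_whistle(["loop", "Z"], ["loop", "S", "Z"]))
# False: the tag sets differ, so driving continues
print(tag_bag_whistle(["loop", "S", "Z"], ["loop", "S", "S", "Z"]))
# True: same tag set, bigger bag; the whistle blows one unfolding later
# than a homeomorphic embedding would
```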


  1. This supercompiler comes with this paper. I highly recommend the paper if you’re interested in supercompilation.↩︎

  2. I don’t know the original source, but apparently “whistle” is the traditional term for the heuristic that tells a supercompiler to stop.↩︎

  3. This supercompiler comes with this paper. I highly recommend the paper if you’re interested in supercompilation.↩︎

  4. References two papers: “Positive supercompilation for a higher order call-by-value language” (apparently the author later wrote his PhD thesis on this topic) and “A supercompiler for core Haskell”.↩︎