Wednesday, April 29, 2009
The Python Tail Call Hullabaloo
Guido van Rossum has done it again, causing a great blogging uproar over his decision to keep tail call elimination out of Python. The argument boils down to van Rossum feeling that tail calls encourage a non-obvious style of programming and make debugging more complicated anyway (you can read the whole thing here). Those few (none) of you who have followed this blog might guess, given my penchant for functional languages, that I am in the pro-tail-call-elimination camp and, I suppose, I probably am1. However, my position is perhaps more nuanced, at least enough that I thought it worth explicating.

The problem might be cast into relief by pointing out that we'd really like, as programmers of an abstract bent, to ignore the existence of the computer our code is running on entirely. Even the Turing machine, arguably the least abstract conception of computing, has an infinite amount of tape. The lambda calculus doesn't have any machine aspect at all - no memory, no processors, none of that crap. In a completely abstract world, the difference between a tail call and a call in a non-tail position is immaterial - you have an infinite amount of stack (or whatever it is) to accumulate, and all that really matters is whether the computation terminates (and not even that, if you are working on streams or some other meaningfully infinite data structure).

In real life, much to the chagrin of us abstract types, computers have a finite amount of call stack and all sorts of other limitations. But the elimination of tail calls lets us pretend, in certain cases and for certain carefully crafted algorithms, that the limitation of the stack does not exist. There is a grace and elegance to this which is very attractive, which is why we like it. But it is also clunky and weird. Why, after all, should a tail call be different from a call in a non-tail position?
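To make the stack limitation concrete, here is a small sketch in Python itself (the function names are my own, invented for illustration). Because CPython does no tail call elimination, even a call in tail position pushes a fresh stack frame, so deep recursion fails where the equivalent loop does not:

```python
def count_down(n):
    # The recursive call is in tail position, but CPython still
    # allocates a new stack frame for it -- no elimination happens.
    if n == 0:
        return "done"
    return count_down(n - 1)

def count_down_loop(n):
    # The same algorithm as a loop runs in constant stack space.
    while n > 0:
        n -= 1
    return "done"

# A depth well past CPython's default recursion limit (around 1000).
try:
    count_down(10_000)
except RecursionError:
    print("recursive version blew the stack")

print(count_down_loop(10_000))
```

With tail call elimination, the first version would behave exactly like the second; without it, the "abstractly identical" program dies at a depth the loop shrugs off.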
In Scheme, nothing distinguishes a tail call syntactically or semantically (at least not without bursting the bubble of abstraction around the language). As far as the abstract nature of programming is concerned, tail and non-tail calls are identical. So even though tail calls let us code in a way which seems abstract, we are really coding to an optimization. In other words, the most abstract of us are spending a lot of time thinking about a non-abstract question anyway. The tail call is a sort of farce. It is this very reality which I think the non-abstract crowd claims is confusing, and when I really think about it, I can see their point. I'd still like it in Python, but I guess I might just have moved on to other languages.

1 I'd rather have proper lambdas first.
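As an aside on what "coding to the optimization" looks like when the language won't do it for you: a common workaround (not discussed in van Rossum's post - this is just a sketch with made-up names) is a trampoline, where the tail call is reified as a thunk and a plain loop supplies the constant-stack behavior:

```python
def trampoline(fn, *args):
    # Keep calling thunks until a non-callable final value appears.
    # The loop, not the call stack, carries the "recursion".
    result = fn(*args)
    while callable(result):
        result = result()
    return result

def count_to_zero(n):
    # Instead of making the tail call directly, return a thunk
    # describing it; each thunk runs in a fresh, shallow frame.
    if n == 0:
        return 0
    return lambda: count_to_zero(n - 1)

print(trampoline(count_to_zero, 100_000))  # prints 0, no stack overflow
```

Which rather proves the point: the programmer ends up doing by hand, visibly and clunkily, exactly the bookkeeping that tail call elimination would have hidden.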