(b?f:g)(x)
Tongue in cheek, of course. Control flow corresponds to determining what (part of a) program you will run next and, of course, you can always compile down to lambda calculus in an abstract sense.
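The quip can be made concrete in a few lines of Python (a hypothetical sketch; `f`, `g`, and `branch` are illustrative names, not anything from Egel): branching amounts to selecting which function to run next, then applying it.

```python
def f(x):
    return x + 1   # the "then" branch

def g(x):
    return x * 2   # the "else" branch

def branch(b, x):
    # (b ? f : g)(x): first pick the next piece of program, then run it
    return (f if b else g)(x)

print(branch(True, 10))   # 11
print(branch(False, 10))  # 20
```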

Those who have watched the critically acclaimed "Egel - The Movie" (8 upvotes on lobste.rs) know that I've been dabbling with a small rewriter with a trivialized semantics. The observant viewer might have noticed an off-the-cuff remark in that movie that Egel, since it rewrites, has less need for tail-call optimization: some things come for free.
Let's revisit a small recursive function, factorial: fac 1 = 1 | fac n = n * fac (n-1).
Those who know some compiler theory will recognize the runtime behaviour of that function when implemented according to the Algol convention for stack machines: for every invocation, a stack frame is created with a dynamic and static link and some space for local variables. The program then performs its operations within the context of that frame, and for each recursive call a new stack frame is pushed onto the stack.
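For illustration, here is a hypothetical Python sketch (not Algol, but Python follows the same convention) that measures how deep the call stack grows while computing factorial recursively; each invocation pushes one more frame.

```python
import inspect

base = len(inspect.stack())  # stack depth before the computation starts

def fac(n):
    # at the bottom of the recursion, count how many frames were pushed
    if n == 1:
        return 1, len(inspect.stack()) - base
    result, depth = fac(n - 1)
    return n * result, depth

value, depth = fac(5)
print(value, depth)  # 120, with 5 frames pushed for n = 5
```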
Egel, in contrast, rewrites a redex to a number of other redexes. For a stack machine, that corresponds to removing the callee's stack frame entirely and pushing several stack frames with work still to be done onto the stack. Hence my remark that there's less need for tail-call optimization: for simple recursive functions that consume constant stack space, you get that for free. Also noteworthy, tail-call optimization modulo cons comes for free too.
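That rewriting view can be sketched with an explicit work stack in Python (a hypothetical illustration of the idea, not Egel's actual machinery): the redex 'fac n' is replaced by two new work items, 'fac (n-1)' and a pending multiplication, so the native call stack never grows with n.

```python
def fac(n):
    # explicit stack of redexes still to rewrite; results go on a value stack
    work = [("fac", n)]
    values = []
    while work:
        tag, k = work.pop()
        if tag == "fac":
            if k == 1:
                values.append(1)
            else:
                # rewrite the redex 'fac k' into two new work items:
                # first compute 'fac (k-1)', then multiply its result by k
                work.append(("mul", k))
                work.append(("fac", k - 1))
        else:  # "mul"
            values.append(k * values.pop())
    return values.pop()

print(fac(10))  # 3628800
```

Note that the machine's memory use still grows with n here (the work stack is on the heap), but for tail calls the callee's frame is simply gone before new work is pushed.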
And that raises the question of whether other calling conventions are wrong, at least for ML-like languages.
using System
data tick
def do_tick = [K -> throw (tick K)]
def handle_tick = [N (tick K) -> try K N catch handle_tick (N+1)]
def main = try do_tick [N -> do_tick ((+) N)] catch handle_tick 1
Sometimes I post about Egel, but more often than not I am confronted with people who -rightfully so- wonder what the point of it all is.
Egel, the language, is a minimal front-end to an alternative operational model, showcasing that term rewriting can be achieved by repeatedly rewriting the top of a directed acyclic graph. It is an alternative to the SECD machine, the CAM, or graph rewriting machinery like GHC's spineless tagless G-machine. I suspect that confuses people, since that kind of work on operational semantics hasn't really been done for the last twenty years and this is a trivialization effort, whereas most of the field has moved on to studying elaborate type systems that provide ever more complex static guarantees.
The keyword here is 'trivialization' and let me share my thoughts on that. I like to compare this effort to work on the Pythagorean theorem, which started off with a proof of more than twenty pages, and through insights during the last two millennia has been reduced to a one-liner.
Trivialization is important. At every step, the Pythagorean theorem didn't change: it was still true, and nothing noteworthy changed. Similarly, Egel doesn't really show anything new; everything can already be done with other machinery, and nothing unexpectedly exciting is happening. Through trivialization you only get something smaller, maybe clearer. But the importance of such an effort becomes clear when one asks oneself where we would be if we were still stuck with a proof of twenty pages.
We live in an era of informatics where baroque abstraction castles are built atop ever more complex machinery, and at the same time there is always a need to go back to the fundamentals and see whether something less complex is there, something one can build upon to create new -maybe better- castles.