Sunday, September 29, 2024

Namespaces, revisited.

I conservatively extended namespaces, but that took an internal overhaul of how scopes are treated. Shorthands are syntactic sugar for full-blown expressions.
using System
The above is sugar for:
using System (c0, c1, c2, ...)
Where c0, c1, c2, ... are all the combinators in the System namespace. And that is sugar for:
using ε = System (c0, c1, c2, ...)
Which is again sugar for a number of renames.
using ε = System (c0 = c0, c1 = c1, c2 = c2, ...)
The renames all introduce bound names into the current scope. E.g.,
using Sys = System (p = print)
This will bind 'Sys::p' to 'System::print'.

Docstrings

I added docstrings to the Egel language some time ago. You can add documentation to either a file or a combinator.
@"""A multiline docstring for a file"""

def f =
    @"this is the fabulous 'f' combinator"
    [X -> 42]
Given docstrings, you can now ask for the documentation interactively in the REPL.
>> help 'f'
{"this is the fabulous 'f' combinator"}
'Help' will do a best-match against all combinator definitions.

Saturday, July 27, 2024

Thread local Stopgap

I made the stopgap solution thread local. It's still expensive.
   ~VMObjectArray() {
#ifdef GC_STOPGAP
        if (deferring) {
            for (int i = 0; i < _size; i++) {
                defer.push(_array[i]);
            }
            delete[] _array;
        } else {
            deferring = true;
            for (int i = 0; i < _size; i++) {
                defer.push(_array[i]);
            }
            delete[] _array;
            while (!defer.empty()) {
                defer.pop();
            }
            deferring = false;
        }
#else
        delete[] _array;
#endif
    }
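
For reference, the snippet above assumes two thread-local variables along the following lines; the actual declarations in the Egel sources may differ.

thread_local std::stack<VMObjectPtr> defer;   // objects whose destruction is deferred
thread_local bool deferring = false;          // set while a destructor is already draining 'defer'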

Tuesday, July 16, 2024

Stopgap solution to Cascading Free

Egel runtime objects are managed with C++'s std::shared_ptr. Because Egel objects can be deeply nested, it is not uncommon for large structures to go out of scope and result in cascading frees that blow up the C stack.

I.e., the following short Egel program would core dump:

# rewrite a list of a million numbers to none
from_to 0 1000000 |> [_ -> none]   
I implemented a costly stopgap solution. First, we need a collector for trash. It has an in stack and an out stack, and its sole intention is to retain objects for a while so that they don't go out of scope recursively and blow up the C stack.

#include <mutex>
#include <stack>

// Trash collector: briefly retains objects so that their destructors don't recurse.
class GC {

public:
    GC() {
    }

    void push(const VMObjectPtr &o) {
        std::lock_guard<std::mutex> lock(mtx_in);
        _in.push(o);
    }

    void copy() {
        std::scoped_lock lock(mtx_in, mtx_out);
        while (!_in.empty()) {
            _out.push(_in.top());
            _in.pop();
        }
    }

    bool done() {
        std::scoped_lock lock(mtx_in, mtx_out);
        return _in.empty() && _out.empty();
    }

    void empty() {
        std::lock_guard<std::mutex> lock(mtx_out);
        while (!_out.empty()) _out.pop();
    }

    // Drain both stacks until no new trash arrives; popping objects in
    // empty() may push their members onto _in again.
    void clear() {
        while (!done()) {
            copy();
            empty();
        }
    }

private:
    mutable std::mutex mtx_in;
    mutable std::mutex mtx_out;
    std::stack<VMObjectPtr> _in;
    std::stack<VMObjectPtr> _out;
};

inline GC garbage_collector;
Now when an array goes out of scope, we place its members in the collector.
    ~VMObjectArray() {
        for (int i = 0; i < _size; i++) {
            garbage_collector.push(_array[i]);
        }
        delete[] _array;
    }

And during rewrites we then empty the queues.
    while ((trampoline != nullptr) && (*run != HALTED)) {
        if (*run == RUNNING) {
            trampoline = f->reduce(trampoline);
            garbage_collector.clear();
        } else if (*run == SLEEPING) {
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        } else {  // *run == HALTED
        }
    }
I am not happy with this solution. First, I consider it a std::shared_ptr bug that it can blow up the C stack, and second, the performance cost is way too steep for my taste.

I'd rather convince C++ developers that this is their bug; for all I know, cascading frees are a normal thing in reference-counted garbage collection and their implementation should be able to handle that.

Monday, June 3, 2024

Temporal Logic Synthesis

I wanted to show that Egel can be used to create small tools, and decided upon a temporal logic synthesizer. After reading a bit I trimmed that down to: do something useful with LTL and Z3. The former because LTL is relatively small, the latter since I want Z3 bindings for Advent of Code anyway.

Synthesis is a bit of a hopeless subject since the best you can hope for is to derive small automata with maybe five states. Moreover, things already exploded on translating LTL to negation normal form, the ostensibly minor first step in the whole process.

But I remain committed for now.

Tuesday, February 6, 2024

E-Graphs trivially in PSPACE

Some people pointed me at E-graphs, a manner for representing and then heuristically minimizing terms. Of course, Boolean circuits are terms too and minimizing circuits is PSPACE. Very much so, even.

So, it's unclear how much minimization you can actually achieve with E-graphs.

Friday, February 2, 2024

Control Flow Revisited

Two forms, (b?f:g)(x) and f(b?x:y), should be enough for all (functional) control flow analysis. In a very abstract sense, application and choice are all you need.

After I commented on a small example that contification just equals hoisting, someone on the interwebs made a smart comment that 'f' cannot be hoisted in the following example but can be contified.

let f = fun arg -> body in 
match thingy with 
| A -> f x 
| B -> f y 
| C -> g foo

Of course, you just need a rewrite to a term of the form b0? f(b1?x:y) : g foo to see that 'f' is still hoisted. But it won't be pretty, so that's an issue.
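
Spelled out in C++, with hypothetical stand-ins for 'thingy', 'f', 'g', 'x', 'y', and 'foo' (none of these come from the original thread), the rewrite looks roughly like this:

enum class Thingy { A, B, C };

// b0 decides whether 'f' is applied at all, b1 selects its argument.
int example(Thingy thingy, int x, int y, int foo) {
    auto f = [](int arg) { return arg + 1; };  // stand-in for 'fun arg -> body'
    auto g = [](int arg) { return arg - 1; };  // stand-in for 'g'
    bool b0 = (thingy == Thingy::A) || (thingy == Thingy::B);
    bool b1 = (thingy == Thingy::A);
    return b0 ? f(b1 ? x : y) : g(foo);
}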

Thursday, February 1, 2024

NAND rewriting

I think somewhere between '01 and '04 I was interested in SAT problems. At that point in time there was an unspoken vibrancy in the field, a sense that NP problems might be tackled.

At that time I concentrated on NAND, or Sheffer stroke, rewriting. I figured that if people were close to breaking some barrier, then there should be a rewrite strategy towards a solution that didn't suffer from exponential blowup along the way.

NAND terms are beautiful little mathematical expressions, and the rewrites I came up with had pleasant forms, so I wrote several small programs.

One of the things I stumbled upon was a 3SAT formula after looking for a while at the truth table for NAND, with z = x NAND y.

x y z
0 0 1
0 1 1
1 0 1
1 1 0


Every row has at least one zero, giving the clause (-x+-y+-z), and certain pairs have at least one one, giving (x+z) and (y+z).

I checked the conjunction and realized I had a linear translation from circuits to 3SAT.
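
As a rough sketch (not the program I wrote back then), the translation produces three clauses per NAND gate plus one unit clause asserting the circuit's output, with literals in the usual signed-integer (DIMACS-style) encoding:

#include <vector>

// A NAND gate: out = NAND(in0, in1). Variables are positive integers;
// a negative literal denotes negation.
struct Gate { int in0, in1, out; };

// Linear translation from a NAND circuit to 3SAT: three clauses per gate,
// plus a unit clause forcing the circuit's output variable to true.
std::vector<std::vector<int>> to_3sat(const std::vector<Gate> &circuit, int output) {
    std::vector<std::vector<int>> clauses;
    for (const auto &g : circuit) {
        clauses.push_back({-g.in0, -g.in1, -g.out});  // every row has at least one zero
        clauses.push_back({g.in0, g.out});            // (x + z)
        clauses.push_back({g.in1, g.out});            // (y + z)
    }
    clauses.push_back({output});                      // assert the output of the circuit
    return clauses;
}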

Meanwhile, I was reading satlive.org, noticed that people seemed unaware of this, and posted the solution there, thinking Armin Biere would at least pick it up.

Years later I learned that I had rediscovered a result of Tseitin's from the sixties or so, years before SAT solvers became relevant. Nowadays I think everyone who studies the problem a bit would've come up with it.

The NAND rewriting strategy never panned out; stuff kept exploding. Moreover, rewriting is extremely slow compared to the cheap array manipulation of very fast (learning) local search.

I can still somewhat rewrite NAND terms in my head or play pebble games on them, but that doesn't seem to be a marketable skill.

But, ah well, I comfort myself that I was in the good company of Knuth who, for at least a while, thought that P=NP.

Tuesday, January 30, 2024

All the control flow you'll ever need

 

(b?f:g)(x)


 Tongue in cheek, of course. Control flow corresponds to determining what (part of a) program you will run next and, of course, you can always compile down to lambda calculus in an abstract sense.

 

Monday, January 29, 2024

Why compiling with continuations is wrong(-ish)

There are two axes along which to plot programming languages: eager/lazy and strict/permissive. Eager/lazy are two evaluation models; strict/permissive determines how much a language sticks to the chosen model.


Most languages are very eager, usually with the exception of short-circuiting boolean operators, and reasonably strict: the order of evaluation is mostly known. Non-strictness is often exploited by compiler writers to reorder expressions for performance; i.e., one might not know precisely in what order '2*x+3*y' is evaluated, but it will be fast.

Lazy languages like Haskell and Clean are mostly lazy and permissive: entire program fragments might be reordered or made eager thanks to purity, all for performance reasons.

Order of evaluation is of course important; all programmers need to deal with it, albeit less in pure languages. Compiler writers need to deal with it too: a popular manner of writing a Scheme compiler is to transform to continuation passing style, since that makes the order of evaluation explicit but also allows for control flow operators.

Let's dive into that a bit with a small snippet of code.

pyth x y = sqrt (add (pow2 x) (pow2 y))

After transformation to continuation passing style, one ends up with:

pyth' x y cont = pow2' x (\x2 -> pow2' y (\y2 -> add' x2 y2 
                                 (\anb -> sqrt' anb cont)))

This transformation is of course global: all functions get a continuation object as an extra argument, and all functions are defined as complex expressions that pass along continuations. It would also be a pretty bad manner of writing a program, were it not that this style serves well as an intermediate representation for compilation to a backend, since it makes the order of evaluation and temporary values explicit. Fantastic, done!

The Egel interpreter internally also uses an intermediate representation prior to outputting to bytecode. Let's look at that.

pyth x y =
    let y2 = pow2 y in
    let x2 = pow2 x in
    let anb = add x2 y2 in sqrt anb

The intermediate representation hoists reducible expressions, redexes, and reducible expressions only, out of the term and binds the results, often called reducts. And, similar to continuation passing style or administrative normal form, it makes the order of evaluation and the use of temporary values explicit.

But only redexes, i.e. work, are hoisted out. And control flow is nice, but it was, and will always be, mostly about communicating clearly to your backend what work needs to be done; in what order is the detail.

And that's where I think continuation passing style, or administrative normal form, went wrong-ish as a backend solution for rewriters: they are answers that will function, but also almost correct yet strangely wrong answers to a trivial question, namely what work needs to be done. Instead of answering that, they concentrate on the order or get the work statement wrong; no wonder that naive continuation passing style will state too much work to be done. Meanwhile, this intermediate representation also gives a precise answer to which functions consume constant stack space and can be optimized further, i.e., those are the functions that rewrite to exactly one redex.

TL;DR: When developing a rewriter it becomes important what work needs to be done and in what order. Hoisting out redexes makes that clear, whereas continuation passing style and administrative normal form give almost correct but wonky answers.



Saturday, January 27, 2024

Are All Algol-like Calling Conventions Wrong?

Those who have watched the critically acclaimed "Egel - The Movie" (8 upvotes on lobste.rs) know that I've been dabbling with a small rewriter with a trivialized semantics. The observant viewer might have noticed an off-the-cuff remark in that movie that Egel, since it rewrites, has less need for tail-call optimization, some stuff comes for free.

Let's revisit a small recursive function, factorial: fac 1 = 1 | fac n = n * fac (n-1).

Those who know some compiler theory will know the runtime behaviour of that function when implemented according to the Algol convention for stack machines: for every invocation, a stack frame is created with a dynamic and static link and some space for local variables. The program subsequently performs its operations within the context of that frame, and for each recursive call, a new stack frame is pushed onto the stack.

Egel, in contrast, will rewrite a redex to a number of other redexes. For a stack machine that corresponds to removing the callee's stack frame entirely and pushing several stack frames with work to be done onto the stack. Hence my remark that there's less need for tail-call optimization: for simple recursive functions that consume constant stack space you get it for free. Also noteworthy, tail-call optimization modulo cons comes for free too.

And that raises the question of whether other calling conventions are wrong, at least for ML-like languages.


Friday, January 19, 2024

Modelling Effects

A brief experiment involving 'effects.' Presented below is the translation of a program 'tick + tick' with 'tick' serving as an effect.

The key approach involves isolating the effects through a continuation and utilizing try/catch as a handler or control flow mechanism.

I acknowledge that the final outcome may seem convoluted; however, its purpose is to demonstrate the semantic model's capacity to accommodate such constructs.
using System

data tick

def do_tick = [K -> throw (tick K)]

def handle_tick = [N (tick K) -> try K N catch handle_tick (N+1)]

def main = try do_tick [N -> do_tick ((+) N)] catch handle_tick 1

Wednesday, January 17, 2024

Random Thoughts On the New Release

I published a new performance release of the Egel interpreter named "v.0.1.10: Egel, let's go." I am now at a fork in my thoughts.

Egel successfully shows that an operational semantics where the root of a directed acyclic graph is repeatedly rewritten can serve as the back-end of a small functional scripting language. However, Egel is notably slow, approximately eight times slower than Python according to various benchmarks.

Egel is also rather rudimentary as a scripting language, sufficient for solving Advent of Code puzzles but lacking essential features, presenting an overall unpolished experience. What I want to do, minimally, is add date/time objects as primitives and incorporate support for applicatives and lenses.

Despite these potential improvements, it's challenging to justify investing significant effort into a tool that will not find widespread use.

Wednesday, January 10, 2024

On Trivialization

Sometimes I post about Egel, but more often than not I am confronted with people who -rightfully so- wonder what the point of it all is.

Egel, the language, is a minimal front-end to an alternative operational model that showcases that term rewriting can be achieved by repeatedly rewriting the top of a directed acyclic graph. An alternative to SECD, the CAM, or graph rewriting machinery like GHC's spineless tagless G-machine. I imagine that confuses people, since that kind of work on operational semantics hasn't really been done for the last twenty years and this is a trivialization effort, whereas most of the field has moved on to studying elaborate type systems that provide ever more complex static guarantees.

The keyword here is 'trivialization' and let me share my thoughts on that. I like to compare this effort to work on the Pythagorean theorem, which started off with a proof of more than twenty pages, and through insights during the last two millennia has been reduced to a one-liner.

Trivialization is important. At every step, the Pythagorean theorem didn't change, it was still true, no noteworthy things changed; similarly, Egel doesn't really show anything new, everything can already be done with other machinery, and no real unexpected exciting things are happening. Through trivialization you only get something smaller, maybe clearer. But the importance of such an effort becomes clear when one asks oneself where we would be if we were still stuck with a proof of twenty pages.

We live in an era of informatics where baroque abstraction castles are built atop ever more complex machinery, and at the same time there's always a need to go back to the fundamentals and see whether something less complex is there, something one can build upon to create new, maybe better, castles.

Thursday, January 4, 2024

Emitting a Graph

I've been examining a new emitter that involves rewriting directed acyclic graphs (DAGs). Specifically, when Egel rewrites a combinator, the bytecode of the combinator needs to produce another segment of the DAG.

In light of this, I've been sketching out an algorithm to address this requirement. Below is an example of a new graph that could be the outcome of rewriting a combinator node.


In the representation, solid lines indicate constants, and dotted lines point to redexes that must be rewritten in a particular sequence. Exception handlers are omitted for clarity.

As the rewritten graph node serves as the apex of a DAG, it seems likely that I'll need some form of stack to systematically process AST nodes in a specific order. This processing order is crucial for generating bytecode that effectively constructs the desired graph.
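
A minimal sketch of the kind of stack-driven traversal I have in mind, over a toy node type rather than Egel's actual AST (all names here are made up): children are processed before their parent, so every node's arguments are built before the node itself. Sharing of subgraphs is ignored for brevity.

#include <iostream>
#include <memory>
#include <stack>
#include <string>
#include <vector>

// Toy DAG node, standing in for an Egel AST node.
struct Node {
    std::string name;
    std::vector<std::shared_ptr<Node>> children;
};

// Post-order traversal with an explicit stack: emit the code that builds a
// node only after emitting the code that builds its arguments.
void emit(const std::shared_ptr<Node> &root) {
    std::stack<std::pair<std::shared_ptr<Node>, bool>> work;
    work.push({root, false});
    while (!work.empty()) {
        auto [node, expanded] = work.top();
        work.pop();
        if (expanded) {
            std::cout << "build " << node->name << " from "
                      << node->children.size() << " arguments\n";
        } else {
            work.push({node, true});
            for (auto it = node->children.rbegin(); it != node->children.rend(); ++it) {
                work.push({*it, false});
            }
        }
    }
}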

It's a neat puzzle.