Sunday, May 14, 2017


I've been reading up on how WanaDecrypt0r works, but details are scarce. It looks like a heap spray^1 with an unlink exploit, but there's no confirmation of that anywhere.

No wonder Google is investing in Fuchsia?

^1: Actually, that came from some remark I read but I disagree. It might be as trivial as unlink. That is, set up non-paged memory in a deterministic manner, exploit a buffer overrun in SMB to corrupt a part of the kernel heap, unlink into the payload. I don't see why it would need a heap spray.

EDIT: It turns out to have been a memmove with an overflowed calculated size.

Thursday, May 11, 2017

Log 051117

I am trying to figure out why the constructor of a combinator is passed a pointer to the VM. It's not a bad design decision; any combinator might want to poll the VM for information for various reasons. It might have been for operator overloading, where you needed to inspect a global table? But, of course, combinators which reference the current runtime are hard to ship. I might want to cut that, but at the risk of needing to introduce it again at some later stage.

Stuff you know you don't know, stuff you don't know you don't know. And all that.

Wednesday, May 10, 2017

Log 051017

Well. Apart from dreaming, mobile code is pushed into the far future because there are still too many mundane tasks to do. I didn't even implement string unescaping yet, which is why Hello World doesn't work right. The more pressing matter after that is implementing a sane way to link in C++-defined combinators and some kind of library for primitive IO and standard operators. That's at least about half a year's work before I even get to the point where I can start thinking about linking in matrix operations.

So, I better get coding.

Monday, May 8, 2017

Not so bad

It dawned on me that a lot can be solved by extracting portable combinators from the runtime instead of making all combinators portable. That would remove the worry that I might 'mess up' a perfectly fine small language.

Saturday, May 6, 2017

So for future's sake

Well great. So, what to do? I have an eager combinator evaluation system. What do you naively need to make it mobile:

  1. The term being reduced (elsewhere) must be transportable, and
  2. the combinators it invokes must be transportable. Moreover, 
  3. the end-result is a term which must be transported back.

The term is a tree of combinators not referencing anything else (if I got the invariants right), so that just needs a means of serialization.

Naively, a combinator is just a piece of code, so once you're able to serialize that you're halfway there. The other half is that combinators might reference other combinators, but that's easy to solve with standard compiler technology. A combinator can be described as a self-contained piece of code referencing external symbols, and all you need to do is sweep, ship, and link.

Problem solved, right? However: it's verbose, computationally intense, might end up sweeping the entire program, and possibly too fine-grained in that you need to gather and link myriads of combinators. It also doesn't solve distributed scheduling and coordination, let alone local exceptions...

And then there's the stuff I didn't think of yet.

Worst case: I manage to corrupt a perfectly fine small language in such a way that it becomes unusable for both stand-alone and distributed computing.

So. Cheers.

Friday, May 5, 2017

Scrap mobile code? For now? For ever?

Right. So the master plan is to develop a language which combines ideas of statistical computing and mobile code. Mobile R, in short.

Great ideas!

But I've been reading up on mobile code, and either a) there's no market for it, or b) it falls squarely into the category of academically uninteresting things to do while technologically being one of the hardest stunts to pull off.

That is, if you walk into a university these days the response will likely be "So you can migrate combinators, huh? Neat. My paragliding functor does the same and I've published tons of papers about that."

But it's ridiculously hard to implement the functionality in such a way that you actually get to the point you have a tool people will be willing to use.

Even after trivializing the heck out of it, I am looking at, what, three years of development?

Tuesday, May 2, 2017

Log 050217

I committed the changes which cut operator overloading. I wasn't very sure about it, because I saw it as a means of solving the expression problem for this language. However, the expression problem occurs in larger languages where you don't have all sources under your control, because some of the code, the modules, is maintained by others.

This language doesn't have users; I want to grow it into something domain-specific, and all source code is fully under the control of its one user, which is me. The expression problem shouldn't trouble me or my users, since I don't intend the language to be used for general software engineering purposes.

As such, it was a solution to an entirely academic problem which cost a steep number of CPU cycles per lookup. And, in the unforeseen event that I actually grow Egel into a language which attracts end users, I imagine they care more about performance than about solutions to problems which don't exist.

Tuesday, April 25, 2017

Cutting operator overloading

I decided to cut operator overloading. What I want can be expressed from within the language, it doesn't seem to be a big deal, and it looks like in an untyped language you want to handle operator overloading differently. That is, you want to handle arithmetic with a big case switch in a built-in.

Also, it simplifies the runtime.

Monday, April 24, 2017

Dare I cut more?

Egel is a melting pot of ideas, lots of which I needed to cut in the end. So. Operators.

I have a scheme where one can overload operator definitions to work on different 'types' of operands. But the computational cost is high, maybe too high for such a simple language.

Maybe it's enough to define an operator just once? It's a small language, and an expensive lookup for every arithmetic operator applied is hefty, certainly since operators can be disambiguated by hand using namespaces.

Friday, April 21, 2017

Log 042117

Four months later... Well, I implemented the new scheme and got an order-of-magnitude speed-up out of it. That's less than I hoped for but, ah well, from 40 to 3 seconds isn't bad I guess.

I'll upload the v0.1 interpreter tomorrow.

Monday, January 2, 2017

Log 010217

Well, happy New Year to everyone, I guess. I can't say I am terribly amused with progress over the last month or so. For one, the slight change in semantic processing is something I should be able to code in two or three days, and somehow nearly a month has passed so far. I really wanted my source code on GitHub by now! But I also hurt my wrist, which took two weeks to recover, so there is that too.

But I am back to programming, still in the middle of implementing the data structure (a simple multiply-linked nested symbol table), and discovered I probably need to overhaul it once again. Reason: I forgot a design decision in my former approach, which uses one map for global symbols and one map for local analysis. I made that decision because I want to go from source code to evaluation fast, and using one static symbol table plus another for local analysis means I'll be able to parallelize more in the long run.

So, now I need to figure out whether it is worth it to preemptively code a defensive global-static plus local-dynamic table approach.

Pff. Design decisions over not even 0.5 KLoC of code. It's a good thing this is only a hobby project, because as a manager I would never hire myself to write any code.