Say I allow native constants in Egel through a 'const' construct.
const FIVE = 5
Conventionally, this would introduce a syntactic equivalence; the symbol 'FIVE' and the value '5' should be interchangeable in source code. The difference between such a construct and a normally named abstraction is small but significant. In expressions, you hardly notice.
FIVE + 5
That should evaluate to '10'.
In a pattern match, the difference becomes apparent.
[ FIVE -> "five" ] 5
Conventionally, you don't match against the symbol 'FIVE' here but against the value '5'; i.e., the expression above reduces to "five".
I looked at other languages. C/C++ support constants in header files and C++ has a notion of const expressions. Java has static final variables. Haskell doesn't have a notion of syntactic equivalence. Python defaults to using variables. Lisp has constant expressions, no doubt in the form of a macro.
In the end, none of them really handle syntactic equivalence well. And there's a reason for that.
There are a number of ways to support this construct: through a preprocessor, through a macro extension, through syntactic sugar, or by making it native to the language. For Egel, a preprocessor would be nice, macros would be nice, syntactic sugar would be the easiest, and making it native would mean a small overhaul.
Syntactic sugar is easy but conflicts with another goal I have: I want to be able to use source files and object files (which are C++ compiled dynamic libraries) interchangeably. In the future, it should be possible to compile any source file to an object file and load it. But byte code doesn't carry, and probably shouldn't carry, syntactic information (or some notion of the AST), so substitution on the source code can't be implemented.
That would imply making constants native to the language, but at the same time it's hard to implement a notion of syntactic equivalence if all you have are C++ defined combinators. Making constants fully native to the language would mostly imply overhauling the byte code or byte code generation. I.e., since the difference with normal combinators is in the pattern match, the simplest solution would be to generate code like "match against the definition of this symbol, not its name." I am not terribly comfortable with such a change at the moment.
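The "match against the definition, not the name" idea amounts to a substitution pass before code generation: wherever a known constant symbol appears in a pattern, inline its value. A very loose sketch (the AST shape and names are invented, not Egel's actual internals):

```python
# Hypothetical pre-codegen pass: rewrite constant symbols inside
# patterns into their defined values, so the emitted byte code
# matches against a literal instead of a symbol name.

CONSTS = {"FIVE": 5}  # gathered from 'const' declarations

def substitute(pattern):
    """Recursively replace known constant symbols with their values."""
    if isinstance(pattern, str) and pattern in CONSTS:
        return CONSTS[pattern]
    if isinstance(pattern, list):   # compound pattern, e.g. a constructor
        return [substitute(p) for p in pattern]
    return pattern

# the pattern [ FIVE -> ... ] would compile as a match against 5
print(substitute(["cons", "FIVE", "xs"]))   # ['cons', 5, 'xs']
```

The catch, as noted above, is that this pass needs the definition available at compile time, which is exactly what separately compiled object files don't guarantee.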
A preprocessor or macro support would be nice but, like syntactic sugar, conflicts with the goal of interchangeable dynamic modules.
And there you have it. Roughly, one can support constant definitions at the text, AST, or code level. Code is the most portable, but hardly anyone implements a scheme that lets you reason back from generated code to a syntactic equivalence.
But the problem I still have is in binding with C/C++ modules, which normally introduce large numbers of constants through macros or enumerations. Some form of FFI support or C++ code generation would greatly reduce the effort needed.
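One loose direction for the code-generation route: scrape the simple integer '#define's out of a header and emit constant definitions from them. A minimal sketch, where the regex, the sample header, and the emitted 'def' syntax are all my own illustrative assumptions:

```python
# Hypothetical binding generator: find simple integer #defines in a
# C header and emit one constant definition per macro.
import re

HEADER = """
#define SIGINT  2
#define SIGKILL 9
"""

DEFINE = re.compile(r"#define\s+(\w+)\s+(\d+)")

def emit_constants(header: str) -> list[str]:
    return [f"def {name} = {value}"
            for name, value in DEFINE.findall(header)]

for line in emit_constants(HEADER):
    print(line)
# def SIGINT = 2
# def SIGKILL = 9
```

This only covers the trivial cases, of course; macros that expand to expressions or enumerations with implicit values need a real C parser rather than a regex.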
Maybe C++ macros or templates could fix this? I should look into that.