Well, I'm sure that since you're actually having to use Arc, and implement it in a multi-threaded environment, you'd know much more about the problems a module system is likely to have than I do. How does Erlang do theirs? I found a short whitepaper on the subject, but it didn't have many details. I'm sure that whatever they use is perfectly safe for whatever you'd be doing in SNAP, though the implementation/syntax might not be the nicest.
About macros, hasn't there been some interest for a while in getting macros to be "first-class" and so forth? How are they implemented, exactly, that makes this so hard? Is it really that hard for the reader to take the output of a function and automatically interpret it as code?
Also, you mention some effort being made into allowing arc to see if the form in head position resolves to a macro. How hard is that to do? I really don't know how macros work, though I understand what they do.
Aren't they just functions tagged to let the interpreter know that they can a) be expanded at compile time, because the output is not dependent upon the value of the input, and b) return forms that need to be evaluated?
How hard would it be to let normal functions do this? Suppose we had a syntactical feature on function definitions, such that (fn x 'a) prequoted a, so that the form located at a is captured instead of its value. Would that be enough to "simulate" macros? Or does the interpreter need to know that the forms that come out the other end probably need evaluation?
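The "prequoting" idea can be sketched in Python, with forms represented as nested tuples (all names here are invented for illustration, not Arc's actual internals). Note the crucial extra step: the evaluator has to know to evaluate the form the operator returns, which is exactly point (b):

```python
# Hypothetical sketch of a "prequoted argument" feature: operators
# wrapped in Fexpr receive the raw forms of their arguments instead
# of their values. Forms are nested tuples, e.g. ("+", 1, 2).

class Fexpr:
    def __init__(self, fn):
        self.fn = fn

def ev(form, env):
    if isinstance(form, str):              # a variable reference
        return env[form]
    if not isinstance(form, tuple):        # a literal
        return form
    op = ev(form[0], env)
    if isinstance(op, Fexpr):
        # Pass the raw sub-forms, then evaluate the form that comes
        # back -- the re-evaluation step that plain quoting alone
        # doesn't give you.
        return ev(op.fn(*form[1:]), env)
    return op(*[ev(a, env) for a in form[1:]])

env = {
    "+": lambda *xs: sum(xs),
    # (twice form) expands to (+ form form), macro-style:
    "twice": Fexpr(lambda f: ("+", f, f)),
}
print(ev(("twice", ("+", 1, 2)), env))  # → 6
```

This is essentially what's traditionally called an "fexpr": it works fine in an AST interpreter, at the cost of keeping source forms around at run time.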
I should think that all a macro is is a tag that guarantees to the compiler that its output is completely independent of its input, and can thus be expanded at compile time.
I know we kind of went through something like this before, but what am I missing? If you clear this up, maybe I'll finally understand how macros really work ;)
And good luck on SNAP. I would love to help you, as Lisp + Erlang (feature-wise) is something I am very interested in. Unfortunately, it's just way beyond my abilities at this point. I shall look forward to reading your blog!
Is the problem with the sockets due to the fact that you're trying to maintain compatibility with the present arc-on-mzscheme? Or is it something more fundamental than that?
The difference is that Arc evaluates forms from a loaded file one expression at a time. 'def in Arc is simply an assignment to a global variable; technically, 'load doesn't load a module, it executes a program (which in most cases just assigns functions to global names).
Erlang source files, on the other hand, are a set of function definitions - they aren't executed at the time you load. There are no globals in Erlang (although the function names are effectively equivalent to global variables that can't be mutated normally). Each Erlang source file is compiled as a single unit, meaning one source file == one Erlang module.
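A loose Python analogy (my own, not how either language is actually implemented): Arc-style loading is like exec'ing statements one at a time against a shared table of globals, where a definition is just an assignment:

```python
# Arc-style 'load: execute a program one expression at a time,
# mutating a table of globals. A 'def is just a global assignment,
# so it can be freely re-run and redefined mid-program.

arc_globals = {}
arc_program = [
    "double = lambda x: 2 * x",   # like (def double (x) ...)
    "double = lambda x: x + x",   # ...redefinition is just reassignment
]
for expr in arc_program:
    exec(expr, arc_globals)

print(arc_globals["double"](21))  # → 42
```

An Erlang-style module, by contrast, is compiled as one sealed unit; its function definitions aren't mutable bindings you can reassign as the program runs.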
> About macros, hasn't there been some interest for a while in getting macros to be "first-class" and so forth?
I presume you mean something like this:
  (let my-macro (annotate 'mac
                  (fn (x y)
                    (+ "my-macro says " x " and " y)))
    (my-macro "hmm" "haw"))
> How are they implemented, exactly, that makes this so hard?
It isn't how macros per se are implemented that makes this hard; it's how efficient interpreters are implemented.
One of the slowest ways to implement an interpreter is as what's called an "AST traverser". Basically, the interpreter simply walks the list-like tree structure of the code and executes it. In a Lisp-like, the AST is the list structure built by the s-expression syntax. This is what macros fool around with.
The slowness of this is usually because it needs to enter each sub-AST (i.e. a sub-expression, e.g. in (foo bar (qux quux)), (qux quux) is a sub-AST) and then return to the parent AST (in the example, it has to return to (foo bar _)).
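An AST traverser in miniature, sketched in Python (forms as nested tuples; the names are invented for illustration): notice how evaluation recurses into each sub-AST and then returns to the parent.

```python
# Minimal AST-walking interpreter sketch: forms are nested tuples,
# e.g. ("foo", "bar", ("qux", "quux")). The environment maps names
# to Python values.

def ast_eval(form, env):
    if isinstance(form, str):          # a variable reference
        return env[form]
    if not isinstance(form, tuple):    # a literal (number, etc.)
        return form
    # A call: evaluate the head and each argument, recursing into
    # each sub-AST and returning to the parent frame every time.
    fn = ast_eval(form[0], env)
    args = [ast_eval(arg, env) for arg in form[1:]]
    return fn(*args)

env = {"+": lambda *xs: sum(xs), "x": 2}
print(ast_eval(("+", "x", ("+", 1, 2)), env))  # → 5
```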
However a faster way to do it is to pre-traverse the syntax tree and create a sequence of simple instructions. This is usually called a "bytecode" implementation, but take note that it doesn't have to be a byte code.
For example (foo bar (qux quux)) would become:
  (call qux quux) ; puts the return value in 'it
  (call foo bar it)
The increase in speed per se is not big (you just lose the overhead of the AST-traversal stack while retaining the overhead of the function-call stack), but it opens up opportunities for optimization. For example, since the code is now a straight linear sequence of simple instructions, the interpreter loop can be very tight (and relatively dumb, so there's very little overhead). In addition, it's also possible to transform the linear sequence of simple instructions into even simpler instructions... such as assembly language.
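That pre-traversal can be roughly sketched in Python (using numbered temporaries instead of the single 'it' register, since a form with several nested arguments needs more than one slot; the instruction format is made up for illustration):

```python
# Flatten a nested call form (a tuple) into a linear instruction
# list. Each instruction is ("call", dest, fn, arg...); nested
# sub-expressions are compiled first, so their results land in
# temporaries that the enclosing call then consumes.

def compile_form(form, code):
    """Flatten `form` into `code`; return the slot holding its value."""
    if not isinstance(form, tuple):
        return form                      # variable or literal: use as-is
    args = [compile_form(sub, code) for sub in form[1:]]
    dest = f"t{len(code)}"               # a fresh temporary
    code.append(("call", dest, form[0]) + tuple(args))
    return dest

code = []
compile_form(("foo", "bar", ("qux", "quux")), code)
for instr in code:
    print(instr)
# ('call', 't0', 'qux', 'quux')
# ('call', 't1', 'foo', 'bar', 't0')
```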
However, consider the above sequence if 'foo turns out to be a macro. If it is, then it's too late: the program has already executed 'qux. If it were part of, say, a 'w/link macro, then it shouldn't have executed yet. Also, recreating the original form is at best difficult and in general highly intractable, and remember that the macro expects the original form.
So in general, for efficient execution, most Lisplike systems force macros to execute before pre-traversing the AST into the bytecoded form. This also means that macros aren't truly first-class, because they must be executed during compilation.
In short: most lisplikes (mzscheme included) do not execute the AST form (i.e. the list structures). They preprocess it into a bytecode. But macros work on the AST form. So by the time the code is executed, macros should not exist anymore.
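That pipeline can be sketched as a pass that rewrites the AST before any compilation or execution happens (Python, forms as nested tuples; 'when2' is a made-up macro):

```python
# Expand macros on the list/tuple AST *before* compiling, which is
# why macros no longer exist by run time. `macros` maps a name to a
# function from the raw (unevaluated) argument forms to a new form.

macros = {
    # hypothetical: (when2 test body) => (if test body nil)
    "when2": lambda test, body: ("if", test, body, None),
}

def macroexpand(form):
    if not isinstance(form, tuple):
        return form
    # keep expanding while the head names a macro
    while isinstance(form, tuple) and form and form[0] in macros:
        form = macros[form[0]](*form[1:])   # called with raw forms
    if not isinstance(form, tuple):
        return form
    return tuple(macroexpand(sub) for sub in form)

print(macroexpand(("when2", ("even?", "x"), ("print", "x"))))
# ('if', ('even?', 'x'), ('print', 'x'), None)
```

Only the expanded result ever reaches the bytecode compiler; the macro itself is gone by then.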
> Also, you mention some effort being made into allowing arc to see if the form in head position resolves to a macro. How hard is that to do?
Trivial: just add a few lines in ac.scm. However, rntz didn't push it to Anarki, which suggests that the modification hasn't been very well tested yet. See http://arclanguage.com/item?id=7451 , but the patch itself has been lost T.T . I think it'll work, but I haven't tried the patch myself either ^^.
> And good luck on SNAP. I would love to help you, as lisp + erlang (feature wise) is something I am very interested in.
Ah, I see now. How naive of me to presume that lisp actually worked with the AST like it says it does. Oh well.
Is there any way to optimize the interpreter without sacrificing AST interpretation? Or should I write my own language that says "interpreted languages are supposed to be slow; don't worry about it" for the sake of more powerful (in theory) macros? ^^
Or is there actually no difference between the qualities of the two macro systems? Would you care to enumerate the pros and cons of each system? You can do it on a new thread, if you like.
So, how does that work, exactly? Does macrolet tell lisp that since the macro is only defined in that scope, it should search more carefully for it, because it doesn't have to worry about slowing down the whole program?
Err, no. It simply means that the particular symbol for it is bound only within the scope of the 'macrolet form. In practice, most of the time, the desire for first-class macros is really just the desire to bind a particular symbol to a macro within just a particular scope, and 'macrolet does that.
For other cases where a macro expansion should be used more often than just a particular scope, then usually the module or whatever is placed within a package and a package-level macro is used.