"However, there’s a certain kind of seamlessness Fexpress won’t attempt: Racket’s and can’t be passed into Racket’s map, and sometimes this surprises people who expect macros to act like functions. In languages with fexprs as the default abstraction, it tends to be easy to implement and and map in such a way that this interaction succeeds."
Do you have an fexpr-aware version of `map` at the moment? Or `and`? Or do you have to switch both?
The point I'm making in that paragraph is that there are reasons for `map` and `and` not to work together even in an fexpr-capable language. So I'm not sure what you're asking for; I'd say Racket's `map` and `and` already are "fexpr-aware."
Are you asking for clarification on how "this interaction succeeds" in a language where fexprs are more pervasive?
Let's see... I found an online PicoLisp runner.
As an initial test, I checked that `prin` prints and returns its string argument, and that anything non-NIL counts as truthy.
(PicoLisp doesn't evaluate a rest parameter like `args`, even though it does evaluate all the other arguments in the argument list, and that's by design. This is how we can write custom fexprs.)
It appears that what our fexpr observes is a list of what PicoLisp calls "anonymous symbols" (probably like gensyms), which apparently are bound to the already-evaluated argument values.
Incidentally, notice how this `mapcar` call fully evaluates the list arguments, including the part that prints "oops." That makes these two programs arguably inconsistent:
: (mapcar and (list T NIL) (list NIL (prin "oops^J")))
oops
-> (NIL NIL)
: (list (and T NIL) (and NIL (prin "oops^J")))
-> (NIL NIL)
When the transformation passed to `mapcar` is a pure function, refactoring a `mapcar` call this way makes no difference. When the transformation is a side-effecting procedure, it makes a slight difference in ordering: `mapcar` evaluates all of its list arguments before any transformation call runs, whereas in the second form each call's arguments are evaluated just before that call. And here, when the transformation is `and`, the refactoring makes a bigger difference, because `and` usually gets to decide whether its arguments' effects are performed at all, but `mapcar` doesn't let it have that control.
That more or less makes sense operationally, but it means that in PicoLisp, I could imagine `mapcar` being more "fexpr-aware" than it is now. A more fexpr-aware design for `mapcar` would either:
- Allow `and` to affect the evaluation of `mapcar`'s own arguments. Now we get to implement a way to evaluate a list-valued expression without evaluating the list elements, so that we can pass the elements along to `and`. And once we're done with that, what should happen when we map fexprs that aren't even procedure-like, such as `(mapcar let ...)` or `(mapcar setq ...)`? Is there even a point to doing any of this?
- With full awareness of fexprs, make the decision to explicitly disallow non-procedure fexprs from being used with `mapcar`. That way, we don't even open up that can of worms.
Racket's `map` essentially accomplishes the latter.
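For comparison, here's roughly how that plays out in plain Racket (nothing Fexpress-specific, just a quick sketch):

```racket
#lang racket

;; `and` is a macro, so using it as an expression is a syntax error:
;;   (map and (list #t #f) (list #f #t))
;;   ; => and: bad syntax
;;
;; The usual workaround is to wrap it in a procedure, which gives up
;; short-circuiting across the list, just like PicoLisp's `mapcar`:
(map (lambda (a b) (and a b))
     (list #t #f)
     (list #f (begin (displayln "oops") #t)))
;; prints "oops" and evaluates to '(#f #f)
```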
By the way, if you'd be interested in a version of `and` that can be used as both a Racket macro with short-circuiting behavior and a first-class procedure without it, you can do that in Racket (using essentially a symbol macro):
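Something along these lines (a sketch of one way to do it; the helper name `my-and-procedure` and the choice of `syntax-case` are mine, not necessarily what the original test used):

```racket
#lang racket

;; Non-short-circuiting, first-class version: by the time this is
;; called, the caller has already evaluated every argument.
(define (my-and-procedure . args)
  (andmap values args))

;; Identifier macro: in operator position, `my-and` keeps `and`'s
;; short-circuiting macro behavior; as a bare identifier, it
;; evaluates to the first-class procedure.
(define-syntax (my-and stx)
  (syntax-case stx ()
    [_ (identifier? stx) #'my-and-procedure]
    [(_ e ...) #'(and e ...)]))

;; Macro behavior: short-circuits, so nothing is printed.
(my-and #f (displayln "not printed"))   ; => #f

;; First-class behavior: `map` receives the procedure.
(map my-and (list #t #f) (list #f #t))  ; => '(#f #f)
```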
In the context of Fexpress, this punning can be taken to another level, letting the first-class version of `my-and` be both a procedure and an Fexpress fexpr at the same time, while the second-class calls to it have Racket macro behavior. I've put together a test to demonstrate it:
In the Fexpress proof of concept, there's no way to convert between Fexpress fexprs and Racket macros in either direction. That makes this punning pretty pointless, except perhaps as a technique for publishing libraries that can be used roughly the same way by both Racket and Fexpress programmers.
Sorry, I do know that fexpr lisps can combine `map` and `and`. It sounds like you're proposing a different approach in Fexpress, so I was wondering what sort of vocabulary you are imagining. Were you thinking you'd have something called `map/f` that can take non-functions as its first argument? And so there'd be fexpr-aware variants for all higher-order functions? (Thinking harder, an `and/f` doesn't really make sense here.)
"Were you thinking you'd have something called `map/f` that can take non-functions as its first argument?"
No, I was thinking `map` should only take a procedure.
I don't think there's a particular way to "map" an fexpr that stands out as a compelling abstraction. I propped up a certain ambitious "evaluate a list-valued expression without evaluating the list elements" vision for `map` above, but this is really more of a lazy function map than an fexpr map.
The `map` operation is basically about the list data structure. Since it's a data structure that holds its elements eagerly, we can transform it with an eager function. If it held its elements lazily, we could use a lazy function. If it held its elements unevaluated, and if the elements didn't even have to be expressions, then the abstractions we could map over these lists might be fexpr-like, but I don't know of a way to make that make sense.
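To make the lazy variant concrete, here's a small hypothetical Racket sketch using promises (`lazy-and` and the whole setup are made up for illustration):

```racket
#lang racket/base
(require racket/promise)

;; A list that holds its elements lazily (as promises) can be mapped
;; with a "lazy function" that decides which elements to force.
(define (lazy-and a b)
  (and (force a) (force b)))  ; b is only forced if a is truthy

(map lazy-and
     (list (delay #t) (delay #f))
     (list (delay #f) (delay (begin (displayln "oops") #t))))
;; => '(#f #f), and "oops" is never printed, unlike the eager
;;    PicoLisp `mapcar` example above
```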
If you're thinking "but then what do you do with an fexpr once you have one?" I don't know. I've only found so many compelling uses for fexprs, which have to do with places where the code of the call site is needed at run time, possibly because run time and compile time are tangled up:
- Defining custom syntaxes in mutually recursive ways that involve invoking some of the syntaxes before some of the others have finished being defined.
- Making macro systems where redefinitions of the macros automatically propagate to change the behavior of previously defined things that call them.
- Writing abstractions that take care of their own JIT behavior. In order to strategically recompile their use sites according to usage statistics collected at run time, they need access to both the source code and the ability to observe run-time conditions.
In each case, fexprs would be serving as a language front-end layer, not as internal building blocks for general-purpose computation. I think source code is second-class in an essential way; although we can write macro abstraction layers that fabricate source code as they go along, this makes it harder to report informative errors in terms of the user's original code. Macros and fexprs take source code as input, so they're tied to the language front-end in a way that procedures aren't.
And when the source code isn't something we're abstracting over much, then neither is the environment of macros or fexprs that are in scope there. (Like, the environment is more or less an annotation on the source code, and if we focus on having only one original source codebase to annotate this way, then we only have one set of annotations to produce.) So a lot of the best techniques for macros or fexprs are likely to be self-contained abstractions that keep their first-class manipulations of macros and fexprs behind the scenes, rather than whole programming styles which make pervasive use of macros or fexprs as first-class values.
Thank you, that is very helpful! And this also matches some of my growing ambivalence with macros.
When I was building Wart[1], I repeatedly ran into situations that seemed very difficult to debug. Looking back, I tend to debug by simulating computation in my head, and macros made it easy to create code that was more complicated than I could visualize[2]. When I built Mu[3] I hoped that backing away from first-class macros, along with pervasive support for traces[4], would help. That's still in early days, but the evidence is quite mixed so far.
I think macros can get pretty complicated fast. In some ways it's at least as hard to write well-polished macros (with good error messages, editor support, debugging steppers, pretty printers, and so on) as it is to build a well-polished language without macros. I think the reason it feels easy to write macros in many languages is that those languages don't set a high bar for a polished experience.
I don't think taking macros out is an answer I'm very interested in, though. I want people to be able to use my stuff without having to write code the same way I do, and vice versa. Some kind of syntactic extensibility is important to me.
And I don't think taking macros out even really eliminates the "tied to the front end" non-compositionality issues. Every abstraction that can be used has a user experience, and macros just take more control of that experience than functions do. Every time a programmer tries to report a friendly error message from a function, they're trying to take some control over that experience, but the capabilities of a function only give them so much information to work with.
A language without macros can supply a one-size-fits-all kind of polished experience, which may indeed be a much better experience than what most macros provide in practice. But a language like that also imposes a practical limit on how much more polished it can get, especially for domain-specific applications that don't have the attention of the language developers.
To be clear, I still use macros. I'm skeptical of the benefit of fexprs, but it totally makes sense to support them as a sharp-edged, high-power capability intended to be used rarely.
I don't have much use for fexprs myself. I just had certain ideas about how they fit in with other concepts and how they could be made compatible with a compilation-favoring workflow.
I consider it basically a mistake to use the same syntax for macro calls and function calls... but not because they're different things. When we want to invoke a function like a macro, we can just conceptualize it as a macro that expands into a function call.
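In code, that conceptualization is about as small as it gets (a throwaway sketch; `double` and `double/m` are made-up names):

```racket
#lang racket

(define (double x) (* 2 x))

;; Invoking the function "like a macro": a macro whose only job is to
;; expand into an ordinary call to the function.
(define-syntax-rule (double/m e)
  (double e))

(double/m 5)  ; => 10, same as (double 5)
```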
I think it's the same for fexprs. Macro calls can more or less expand into fexpr calls by simply preserving all the code they received under a `quote` in their expansion result (there's a rough sketch of this after the list below). This doesn't quite get us a lexical-scope-respecting version of fexprs unless we also capture the local lexical environment, and that's not necessarily possible depending on the macro system, but it's not much of a stretch:
- Racket's macroexpander keeps track of the set of local variables in scope, but it doesn't quite expose it to user-defined macros.
- Arc's macroexpander `ac` keeps track of the set `env` of Arc local variables in scope, but it doesn't expose it to user-defined macros.
- In Guile, the local environment can be captured using `(procedure-environment (lambda () '()))`.
- Fexpress currently lets compilation-friendly fexprs do this using the `depends-on-env?` field of a `compilation-result?` data structure. If a subexpression depends on the lexical environment this way, `(fexpress-clambda ...)` forms surrounding that subexpression expand differently so that they create a run-time representation of the lexical scope. This way, no one step in the code has to traverse or build up the whole environment, so the cost is spread out. Since Fexpress's variables are immutable, variable accesses don't even have to go through this run-time environment; they can be Racket variable accesses.
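To illustrate the quoting trick from the paragraph before this list, here's a deliberately naive sketch (not Fexpress's actual mechanism; `my-fexpr` and `my-fexpr-impl` are made up, and it punts on lexical scope in exactly the way these bullet points are about):

```racket
#lang racket

;; The run-time "fexpr": it receives its call site's argument forms
;; as quoted data instead of evaluated values.
(define (my-fexpr-impl quoted-args)
  (printf "received source: ~s\n" quoted-args)
  ;; With no captured lexical environment, the best it can do is
  ;; evaluate the forms in a fresh top-level namespace, so local
  ;; variables from the call site are not visible.
  (for/list ([form (in-list quoted-args)])
    (eval form (make-base-namespace))))

;; The second-class front end: a macro whose expansion preserves all
;; the code it received under a `quote`.
(define-syntax-rule (my-fexpr arg ...)
  (my-fexpr-impl '(arg ...)))

(my-fexpr (+ 1 2) (* 3 4))
;; prints: received source: ((+ 1 2) (* 3 4))
;; => '(3 12)
```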
Anyhow, I basically consider fexprs to be one of the things a macro-capable language is theoretically capable of, even if people don't commonly prefer to use that functionality and macro systems don't always quite offer it.
I've been interested in exploring the concept of typed macros in general for the purposes of designing module systems which have both macros and typed API signatures. And I've had typed fexprs on my mind as an optimization approach since the Eight thread way back when, and I expect these two trains of thought to be on their way to the same destination.
With Fexpress, I explored the typed fexpr side, and I kept a really narrow focus on types for optimization rather than API boundary delineation so that I wouldn't make it any more complicated than it had to be. I don't want to be like "oh, by the way, in this complex type system for macros, you can find fexprs if you look for them," because some people might recoil at that. :-p And if I said something like that, but I didn't actually have an fexprs-by-default language ready to show, that would be quite a tease.
The actual languages I build with these ideas will probably not have fexpr calls as the default behavior for calling a local variable. Personally, I prefer not to have any default; it's not too much work to write out `funcall` unless I have a whole DSL's worth of local variables I'm invoking. But if it's a typed language, a lot of the local variables will probably have Lisp-1-style behavior anyway, because a variable of function type could be known to have function-call-like macro behavior and so on. And if some of the types people use turn out to declare that their variables have fexpr-like call behavior, I'll consider that to be an interesting outcome. The types, if they do any kind of soundness enforcement (unlike Fexpress's unsound hints), can keep those fexprs from sneaking into other people's code unless they're ready to receive them.
I know API enforcement wouldn't necessarily be your favorite feature in those designs. :-p But I think that's basically the picture of where Fexpress-style fexprs slot into my long-term goals: as one part of the possibility space that macros have more room to explore with the help of a type system.