By the way... certain user conventions in my language mostly obviate the need for the syntax highlighting scheme that he came up with. In my language:
1. Vaus are prefixed with $ (like in Kernel)
2. Predicates end with ? (like in Kernel)
3. Non-referentially-transparent things end with ! (similar to Kernel)
4. Local variables start with a capital letter (like in Shen)
These rules combined mean that it's trivial to write a syntax highlighter for my language, and that the information is available even if you don't have syntax highlighting. To demonstrate, here's a 1-to-1 translation of the "factorial" function (shown in the video) into my language:
$def factorial; N ->
  $loop: Cnt N
         Acc 1
    $if: is? Cnt 0
      Acc
      $recur; --Cnt; Acc * Cnt
Naturally I wouldn't define it that way in my language, but... you get the idea.
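As an aside, to back up the "trivial to write a syntax highlighter" claim, here's roughly what the classification boils down to -- a Python sketch with made-up category names, not the actual highlighter:

import re

# Classify a token purely by the naming conventions above.
def classify(token):
    if token.startswith("$"):
        return "vau"          # $def, $if, $loop, ...
    if token.endswith("?"):
        return "predicate"    # is?, empty?, ...
    if token.endswith("!"):
        return "side-effect"  # set!, ...
    if re.match(r"[A-Z]", token):
        return "local"        # N, Cnt, Acc, ...
    return "global"           # factorial, fact-acc, ...

for tok in ["$def", "factorial", "N", "is?", "Cnt", "$recur"]:
    print(tok, "->", classify(tok))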
Certainly there are some nice ideas in there, like giving special colors to external variables and so forth... but I think languages should look nice and be usable even without any syntax highlighting at all, with syntax highlighting as just an extra convenience, nothing more.
---
By the way... here's the idiomatic way to write "fact" in my language:
$def fact
  0 -> 1
  X -> X * (fact X - 1)
Inefficient (because it isn't tail recursive), but even so, it's simple. If you want a tail-recursive version...
$def fact-acc
  0 Acc -> Acc
  X Acc -> fact-acc X - 1; X * Acc

$def fact: X -> fact-acc X 1
Or maybe you want to be like Haskell[1] and define it as...
---
The colors on this machine aren't quite right, but it shows what I care about:
a) Comments since they're never evaluated
b) Literals since they eval to themselves
c) Parens and ssyntax -- mostly as delimiters, but with backquotes distinguished
Everything else is unhighlighted. If the language does its job I really shouldn't be thinking about whether something's a macro. And local variables ought to be the default, so why add a little salience to Every Single One?
---
Wart comes with the vim settings for this highlighting: http://github.com/akkartik/wart/blob/2e01126102/vimrc.vim. It's very smart about ssyntax. The colors really indicate precedence. Notice in the second statement how some colons are colored like ssyntax, but not others. Or how the exclamation in mac! at the bottom isn't colored like ssyntax.
But after all that I don't want to make too many assumptions about how a new reader will view one's code. It needs to be visually balanced even without highlighting. Your typography rules remind me a little of early wart. See the if macro at the end of http://www.arclanguage.org/item?id=15137 -- and your comment on http://www.arclanguage.org/item?id=15140 :)
"If the language does its job I really shouldn't be thinking about whether something's a macro."
This is the same argument we had before... it's just not true. Macros/vaus behave fundamentally differently from functions; they are not the same thing. Making them stand out gives your eyes something to grab onto.
Humans are wonderfully good at noticing patterns, but only if there's enough information there to pattern match on. If you don't provide this information in the syntax, it adds additional mental overhead.
You now have to memorize whether something is a vau or not (for common things like $let this isn't a problem, but for things less commonly used it can be a pain). The same goes for locals: if it isn't apparent in the syntax whether a variable is local or not, you have to mentally scan up the scope chain every single time you glance at code.
One of the problems with Lisp is that, due to its lack of syntax, there are very few patterns that your mind can pick up on, so you have to do a full-blown mental parse of the source code just to determine whether something is a local or a vau or whatever.
It might not seem like much, but all the tiny extra mental overheads do add up. I believe that once you get used to my syntax, it's easier to read source code, because just by glancing at it your mind can notice all the little patterns.
Of course, there might be better criteria than fn/vau/global/local/predicate/mutation... if so, I'd be interested in hearing it[1]. But I do think, whatever criteria you choose, it's important that they be visually apparent so our poor human brains don't have to work so hard to parse our code. We are visual creatures; let's give our minds some visual feedback to chew on.
---
"And local variables ought to be the default, so why add a little salience to Every Single One?"
You forget that most functions/vaus are global. In fact, in my language, roughly 1/2 of the variables are globals, with the other 1/2 being locals. Out of those globals, roughly 1/3 are vaus, with the other 2/3 being fns.
You're right, there is a fine line between adding syntax to make the source code more readable, and adding syntax so it ends up looking like Perl. I've tried to add in syntax only when I feel there's a significant benefit from doing so.
I'll note that my language doesn't have the particular problem mentioned in that post, because my language doesn't have quasiquote/unquote/unquote-splicing, so that example would just be `@Body`.
---
By the way, I used to be really off-put by how Kernel uses `$?!` in symbols to give them special meaning, and I also really disliked how Shen has local variables start with a capital letter... but after trying it out for a while, I got used to it and found that it actually wasn't that bad after all. Now I think it's an overall net win.
---
For comparison, here's how my syntax highlighting for my language currently looks:
* [1]: In particular, I just realized that it might be better to use $ for constructs that introduce additional binding names. This might be more useful than a general vau/fn distinction.
Then again, after looking through the source code, there were only a handful of vaus that didn't introduce new bindings: and, catch, hook, if, or, and quote
So, given how most vaus apparently exist for name binding, I think it's best to just use the general vau/fn distinction.
"Macros/vaus behave fundamentally different from functions, they are not the same thing. ..it adds additional mental overhead."
Functions, macros, they're just ways to get certain behavior in the most readable way possible. Perhaps they add mental overhead in Kernel because it's concerned about hygiene and such abstract matters.
Wanting to track your macros is OCD, just like wanting to avoid namespace pollution is OCD. Just relax, use what you need, remove what you don't need, and the function/macro distinction will fade into the background.
"Functions, macros, they're just ways to get certain behavior in the most readable way possible."
Sure. And their behavior is different: macros/vaus don't evaluate their arguments, functions do. That's because they're used for different purposes, so distinguishing between them is important and/or useful.
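To make that concrete, here's a rough Python sketch (Python only has functions, so the operand-evaluation difference has to be faked with its built-in conditional -- this is just an illustration, not how vaus actually work):

# A function receives already-evaluated arguments, so it can't control
# evaluation -- both branches run before my_if even starts:
def my_if(test, then, alt):
    return then if test else alt

def loud(x):
    print("evaluating", x)
    return x

my_if(True, loud("then"), loud("else"))           # prints both "then" and "else"

# The built-in conditional behaves like an operative: only one branch runs.
result = loud("then") if True else loud("else")   # prints only "then"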
---
"Perhaps they add mental overhead in kernel because it's concerned about hygiene and such abstract matters."
I don't see what hygiene has to do with it... we're discussing making it easy to tell at a glance whether a particular variable is a function or a vau, that's all. That's true regardless of whether the vau is hygienic or not.
I'll also note that I have not actually programmed in Kernel, so all my talk about "mental overhead" is actually referring to Arc, which is a distinctly unhygienic language.
In any case, my gut says that making a distinction between vaus and functions is important, so that's what I'm doing.
---
"Wanting to track your macros is OCD like wanting to avoid namespace pollution is OCD. Just relax, use what you need, remove what you don't need, and the function/macro distinction will fade into the background."
I do indeed worry about namespaces, which is why my language is going to have fantastic namespace support, most likely built on top of first-class environments.
Not only does this allow people to write solid libraries that don't need to worry about collisions, but it also has the major benefit that you know exactly what a variable refers to, because each module can be studied in isolation. You can't do that when everything is in one namespace.
So this has the same benefits that lexical scope and referential transparency give you: you can study different subparts of the system in isolation without worrying about what another part is doing.
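As a toy illustration of what I mean (plain Python dicts standing in for first-class environments, with made-up names -- not how the real module system will work):

# Toy model: a module is just a first-class mapping from names to values,
# and every lookup goes through that mapping.
module_a = {"length": lambda s: len(s)}                        # string length
module_b = {"length": lambda v: sum(x * x for x in v) ** 0.5}  # vector length

def lookup(env, name):
    return env[name]

# Both modules define "length", yet there's no collision, and each module
# can be read in isolation to know exactly what its "length" means.
print(lookup(module_a, "length")("hello"))   # 5
print(lookup(module_b, "length")([3, 4]))    # 5.0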
Incidentally, that's why dynamic scope is so bad: it's not enough to understand what a single function is doing, you also need to understand what the rest of the program is doing, because some other random part of the program might change the dynamic variable.
That's why "lexical by default, marking certain variables as dynamic" is superior to "dynamic by default": it increases locality because you don't need to jump around everywhere trying to figure out what everything does, you can just focus on one part of the system at a time.
That's the whole point of functional programming, and my language is intentionally designed as a functional language. In fact, I plan for all the built-in data types to be immutable as well, for the exact same reasons. This should also help immensely with concurrency, similar to Clojure.
"I'll also note that I have not actually programmed in Kernel, so all my talk about "mental overhead" is actually referring to Arc, which is a distinctly unhygienic language."
That is really interesting, that our respective experiences are so different.
I'm with you on "lexical by default" -- I'm not totally crazy :) But the simplest possible mechanism that provides similar advantages to namespaces is to just warn when a variable conflict is detected, i.e. when a global is defined a second time.
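Something like this, sketched in Python (the names are made up; the point is just how little machinery the warning needs):

import warnings

globals_table = {}

# One table of globals plus a warning when a name is defined a second time.
def define(name, value):
    if name in globals_table:
        warnings.warn("redefining global '%s'" % name)
    globals_table[name] = value

define("fact", lambda n: 1)
define("fact", lambda n: n)   # emits the warning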
I'm trying hard to introspect here, and I think the difference between lexical scope and namespaces for me is that when I'm programming by myself I don't need a second namespace, but I do still find dynamic scope to be error-prone. My entire belief system stems from that: that one should program as if one were working alone. Everything that helps that is good; anything that isn't needed is chaff.
I like the highlighting; the color makes the typography less jarring. But you're right, it's one of those things one should familiarize oneself with before judging.
Oh it's very simple. The operators + - * / < > <= >= are the only infix operators (for now). They have the usual precedence rules that other languages use.
How they work is, they take one expression on the left and one expression on the right, and then wrap them in a list, so that `X + Y` becomes `(add X Y)`, and then "add" is the actual add function.
So they're just syntax sugar for common infix operations, that's all. That's why the last example passed "mul" to "sum" rather than "*".
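If it helps, here's a rough Python sketch of that kind of rewrite (the operator names and precedence levels are illustrative, not my language's actual table):

# Toy precedence-climbing rewrite from infix tokens to prefix forms, in the
# spirit of `X + Y` becoming (add X Y).
PREC = {"<": 1, ">": 1, "<=": 1, ">=": 1, "+": 2, "-": 2, "*": 3, "/": 3}
NAME = {"+": "add", "-": "sub", "*": "mul", "/": "div",
        "<": "lt", ">": "gt", "<=": "lte", ">=": "gte"}

def parse(tokens, min_prec=0, pos=0):
    lhs, pos = tokens[pos], pos + 1
    while pos < len(tokens) and tokens[pos] in PREC and PREC[tokens[pos]] >= min_prec:
        op = tokens[pos]
        rhs, pos = parse(tokens, PREC[op] + 1, pos + 1)   # left-associative
        lhs = (NAME[op], lhs, rhs)
    return lhs, pos

expr, _ = parse(["X", "+", "Y", "*", "Z"])
print(expr)   # ('add', 'X', ('mul', 'Y', 'Z'))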