3 points by shader 2410 days ago

I think the TN/ETN parsing model is somewhat neat in its simplicity, which means it will probably have some longevity.

However, most of the work you have done is just a simplification of the syntax; it has no relation to the semantics whatsoever, and as such is unlikely to cause a major paradigm shift.

Perhaps the coolest part of your notation is the concept of constant validity, which in this case you achieved by simplifying the notation until it matched the medium: every atomic operation on the text (adding a character, a newline, or a space) is also a valid atomic operation on the tree. It's especially nice that this works with any text editor, rather than requiring a fancy semantically (or at least syntactically) aware editor. However, I think any true advances in programming will require improvements in the semantics.
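
To make that concrete, here's a rough TypeScript sketch of a Tree Notation-style parser (the names and details are my guesses, not from your spec). The notable thing is that there is no error branch at all, so every intermediate state of the text is a valid tree:

    // Sketch of a TN-style parser: newline = new node, leading spaces =
    // depth, first word = node type. Illustrative only.
    interface TreeNode {
      word: string;        // first token on the line
      rest: string[];      // remaining tokens
      children: TreeNode[];
    }

    function parseTree(text: string): TreeNode[] {
      const roots: TreeNode[] = [];
      const stack: { depth: number; node: TreeNode }[] = [];
      for (const line of text.split("\n")) {
        const depth = line.length - line.trimStart().length;
        const [word, ...rest] = line.trim().split(" ");
        const node: TreeNode = { word, rest, children: [] };
        // Pop back to this node's parent; any indentation is acceptable.
        while (stack.length && stack[stack.length - 1].depth >= depth) stack.pop();
        if (stack.length) stack[stack.length - 1].node.children.push(node);
        else roots.push(node);
        stack.push({ depth, node });
      }
      return roots; // no failure path: every string parses to some tree
    }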



1 point by breck 2409 days ago

Thanks for the feedback!

> However, most of the work you have done is just a simplification of the syntax; it has no relation to the semantics whatsoever,

Agreed. However, one thing that is starting to emerge from our data (17 useful ETNs now compiling to JavaScript, Rust, TypeScript, Logo, Haskell, C++, LLVM IR, SQL, HTML, CSS, JSON, and Regular Expressions) is how well this Tree Notation syntax works for every programming paradigm (functional, imperative, declarative, dataflow, object-oriented, logic, stack...). Perhaps it is best explained as a universal syntax. The neat thing is that once you learn the TN syntax, you know the complete syntax for languages with very different semantics. So while I agree we aren't changing semantics here yet, instead just leveraging the semantics and VMs of existing languages, this universal syntax could be big in that it can lead to better cross-language static tools and enable developers who generally stick to one or two paradigms to make use of more.
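
To show how little syntax there is, here are two hypothetical mini-programs (invented for this comment, not shipped ETNs) as TypeScript strings. Three rules cover both: a newline starts a node, one extra leading space makes a node a child of the node above, and spaces separate a node's words:

    // Two hypothetical programs with different semantics but one syntax:
    // newline = node, one extra leading space = child, space = word break.
    const dataflowProgram = "load sales.csv\n filter region west\n plot line";
    const markupProgram = "html\n body\n  h1 Hello world";

    // Nesting falls out of the leading-space count alone:
    const depth = (line: string) => line.length - line.trimStart().length;
    console.log(markupProgram.split("\n").map(depth)); // [0, 1, 2]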

> Perhaps the coolest part of your notation is the concept of constant validity

Agreed! The elimination of parse errors is one of my favorite features. Of course, the user can still make errors at the ETN level, like mistyping a word or passing invalid parameters to a node. To help catch and fix these kinds of errors, I just launched version 5.0 of Ohayo (Ohayo is still shitty, but the core is getting really solid), which includes a revamped compiler-compiler that supports 100% type checking of every word in your program. It makes it easy to create, as you say above, "well defined input spaces".
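
Roughly, the idea in TypeScript (the grammar format below is invented for illustration; it is not Ohayo's actual format) is that every node type declares a cell type for each parameter word, so the checker can visit literally every word:

    // Hypothetical word-level type checking: each keyword declares the
    // cell type of every parameter word that may follow it.
    type CellType = "int" | "word";

    const grammar: Record<string, CellType[]> = {
      add: ["int", "int"],
      print: ["word"],
    };

    function checkLine(line: string): string[] {
      const [keyword, ...params] = line.trim().split(" ");
      const expected = grammar[keyword];
      if (!expected) return [`unknown node type "${keyword}"`];
      return params.flatMap((word, i) => {
        const cellType = expected[i];
        if (!cellType) return [`extra word "${word}"`];
        if (cellType === "int" && !/^-?\d+$/.test(word)) {
          return [`"${word}" is not an int`];
        }
        return [];
      });
    }

    console.log(checkLine("add 1 two")); // [ '"two" is not an int' ]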

-----

2 points by shader 2406 days ago

> this universal syntax could be big in that it can lead to better cross-language static tools and enable developers who generally stick to one or two paradigms to make use of more.

An alternate syntax will not allow you to use any additional paradigms unless you also provide alternate semantics. It might enable more powerful editing tools or more effective macros and metaprogramming, though.

-----

2 points by breck 2405 days ago

> An alternate syntax will not allow you to use any additional paradigms unless you also provide alternate semantics.

Right. The syntax for ETNs is the same, but the semantics are different. For example, I have a language called "Flow" that is a dataflow language, passing a matrix through a series of nodes. I also have a logic language called "Project" that can solve relational issues among nodes. Different semantics, identical syntax.

Right now, to use different paradigms, a user generally has to learn different semantics and different syntaxes. This eliminates the latter.
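
A contrived TypeScript sketch of that (the keywords below are invented for illustration; they aren't Flow's or Project's real node types): the same program text means two very different things depending on which evaluator it's handed to.

    // One program text, two semantics. Keywords invented for illustration.
    const program = "start 10\ndouble\ndouble";

    // Dataflow-style semantics: each line transforms the value flowing through.
    function runAsFlow(src: string): number {
      let value = 0;
      for (const line of src.split("\n")) {
        const [op, arg] = line.trim().split(" ");
        if (op === "start") value = Number(arg);
        if (op === "double") value = value * 2;
      }
      return value;
    }

    // Logic-style semantics: each line is a fact to assert, not a step to run.
    function runAsLogic(src: string): string[] {
      return src
        .split("\n")
        .map(line => `assert(${line.trim().split(" ").join(", ")})`);
    }

    console.log(runAsFlow(program));  // 40
    console.log(runAsLogic(program)); // [ 'assert(start, 10)', 'assert(double)', 'assert(double)' ]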

-----

3 points by akkartik 2405 days ago

Is that a good thing, though? A classic design principle is that similar things should look similar and different things should look different. Imagine a project with both Flow and Project files. Wouldn't it be nice to be able to tell them apart at a glance?

-----

2 points by breck 2364 days ago

> A classic design principle is that similar things should look similar and different things should look different.

Agreed, but I think context eliminates such a need. If this comment were about cooking, it would look the same. We reuse one writing system.

> Imagine a project with both Flow and Project files. Wouldn't it be nice to be able to tell them apart at a glance?

Ah, good point! So far it hasn't been a problem, but I imagine there may be issues as the number of Tree Languages (note--I took the feedback and dropped the "ETN" acronym) and combinations grows. It might emerge that there are some universal best practices, so semantics won't change too markedly from one language to the next. But it could be that semantics vary a lot. Right now I have some languages where flow goes forwards (top down), some where it goes backwards (children up), and some that are stack-based, parallel, synchronous, et cetera. I personally haven't had trouble keeping them straight just from knowing the context, but that is not necessarily a predictor of how it will go for other people (or even for me) in the future. We shall see.

Another similar problem is when you have a file with both Flow and Project code (something that actually comes up a lot).

What happens when two languages use the same keyword but with different semantics, and a third language embeds them both? It might cause some confusion. Or take even just the basic problem of doing color highlighting for one language in a node of another--how do you ensure the color schemes don't conflict? Perhaps a border or something would do the trick. Problems to solve in the future.

-----

3 points by akkartik 2364 days ago

> If this comment were about cooking, it would look the same. We reuse one writing system.

That's true, but the fact that we're both able to make analogies just suggests that analogies aren't a good defense for your system. It isn't self-evident that "eliminating different syntaxes" is always a good thing. You need to actually take the trouble to motivate it.

In my experience the hard part of dealing with polyglot systems is juggling the different semantics. Syntax is in the noise. Should it be the same or different? It just doesn't seem worth thinking about.

Don't get me wrong, I find Lisp's uniform syntax very helpful. But Lisp is also helpful because of its (relatively) uniform semantics. While adding Lisp syntax atop, say, Erlang seems useful, mixing LFE and regular Scheme would be a nightmare.

> What happens when two languages use the same keyword but with different semantics, and a third language embeds them both?

Yes, I can relate to this question. For example, here's a fragment from the Mu codebase where I embed tests containing Mu programs in my C++ implementation: http://akkartik.github.io/mu/html/040brace.cc.html#366. The Mu instruction is `return-if`, but because it's in a C++ file, just the `return` is highlighted. Super ugly.

My takeaway from all this: polyglot systems are a bad idea. Mu's implementation being in C++ is hopefully a temporary state of affairs. We shouldn't be picking "the right tool for the job". Software is more malleable than past tools. We should be tweaking our one language to do everything the job needs.

So rather than try to come up with solutions for polyglot programming, I'd just discourage it altogether.

Edit: Heh, see slide 9 of http://dev.stephendiehl.com/nearfuture.pdf

-----