The macro definition of with creates a function with all the names as parameters and the body of the with as the body of the function. The newly created function is then called with the definition of each name, so the definitions are evaluated independently, effectively in independent namespaces, and none of them can refer to the others.
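For instance, a with form roughly expands into a single function call (sketch, not the literal macroexpansion):

```
(with (x 1 y 2)
  (+ x y))

; expands to roughly:

((fn (x y) (+ x y))
 1 2)

; the expressions 1 and 2 are evaluated before either x or y
; is bound, so neither definition can refer to the other.
```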
The withs definition, however, recursively calls itself so that each succeeding name sees the definitions of the previous names.
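A sketch of the resulting nesting (again illustrative, not the literal expansion):

```
(withs (x 1 y (+ x 1))
  y)

; roughly equivalent to nested single bindings:

((fn (x)
   ((fn (y) y)
    (+ x 1)))
 1)

; => 2, because the expression for y is evaluated inside the
; scope where x is already bound. With plain with, (+ x 1)
; would not see the new x.
```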
I believe the difference is historical, due to with being faster. In modern programming it probably makes sense to use withs everywhere and only switch to with in places where optimization is necessary.
I actually tend to the opposite: use with everywhere unless I need withs. The reason isn't performance. It tends to make code more elegant to not rely on the order in which things are defined. And when I'm reading code, with gives me the warm fuzzies that the code is going to be cleaner. When I see withs I slow down to look at the dependencies between the bindings.
Similarly to this, when I'm writing Java, I use `final`^1 everywhere I can. It's nice to know that anywhere later in the scope of a variable declared final, it will have the same value it had at the point it was set. I don't need to look through any code to see whether it's been rebound; I know it hasn't been.
[1] "final" is kind of like "const", if I understand `const` right. `final int x = 3;` means that it is an error to later have the line of code `x = 4;`.
I've been trying this syntax on example programs for several weeks. Maybe months, actually. It doesn't yield startling results in compression if you count x.y as 3 nodes, but it does make programs easier to read, particularly when you use it for structure access.
What if x.y!z expanded to ((x y) z)? Then if x were '(1 2 (3 4) 5), x.2!1 would give you 4. If (x 1 3) did work like cut, then you could do x.1.3!1!1, which would also give 4 in this case.
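Written out as plain s-expressions, spelling out what those proposed expansions would evaluate to (cut is used explicitly for the three-argument call, since that part is only a "what if"):

```
(= x '(1 2 (3 4) 5))

((x 2) 1)               ; what x.2!1 would expand to:
                        ; (x 2) => (3 4), then ((3 4) 1) => 4

(((cut x 1 3) 1) 1)     ; what x.1.3!1!1 would mean if (x 1 3) worked like cut:
                        ; (cut x 1 3) => (2 (3 4)), then => (3 4), then => 4
```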