I'm submitting this because I found the first half of it to be a pretty interesting point of view. The latter half isn't bad, but it retreads a lot of the same ground.
I had more to say on this all along, but I didn't have the time to write it out.
---
Performance-wise, it's interesting to keep CSS's abstractive power relatively simple, because (naively[1]) the relationship between N CSS rules and M DOM nodes has an overall complexity on the order of (N * M). Since it's pretty easy to understand the optimization of minimizing file sizes, programmers will just so happen to optimize this N factor without thinking about it. But if CSS had abstractions, the file size wouldn't correlate with N anymore.
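As a toy illustration of that (N * M) relationship, here is a naive matcher sketch. The rule and node representations are invented and far simpler than real CSS; the point is only that every rule gets checked against every node:

```python
# Toy model: naive CSS matching checks every rule against every DOM node,
# so the work grows on the order of N rules * M nodes.
# (Representations here are invented for illustration, not real CSS.)

rules = [  # N rules, each a (selector_tag, declarations) pair
    ("p", {"color": "black"}),
    ("h1", {"font-size": "2em"}),
    ("div", {"margin": "8px"}),
]

dom_nodes = [  # M nodes, each just a tag name in this toy model
    "html", "body", "div", "h1", "p", "p", "div",
]

def match_all(rules, dom_nodes):
    """Compute styles naively; performs len(rules) * len(dom_nodes) checks."""
    checks = 0
    styles = []
    for node in dom_nodes:              # M iterations
        computed = {}
        for selector, decls in rules:   # N iterations per node
            checks += 1
            if selector == node:        # trivial stand-in for selector matching
                computed.update(decls)
        styles.append((node, computed))
    return styles, checks

styles, checks = match_all(rules, dom_nodes)
print(checks)  # 3 rules * 7 nodes = 21 checks
```

Real engines index rules (e.g. by rightmost selector) precisely to beat this naive N * M bound, which is the memoization/indexing trade-off footnote [1] alludes to.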
Programming-in-the-small-wise, CSS properties are designed in a natural-language-like way. What I mean is, there are lots of defaults, and there's often more than one way to specify the same formal behavior, so it's relatively easy to get something to work by taking small steps and guessing. Visual feedback further supports this style of trial-and-error programming.
Programming-in-the-large-wise, CSS is already full of ad hoc special-case interactions between properties to support the defaulting behavior I just described. This means it would be easy to make leaky abstractions. Even the meaning of "90%" depends on what property it appears in, so number-valued variables would likely have a few gotchas all on their own. If abstractions are leaky in practice, their programmer-given names will often be deceptive about their actual functionality, making large programs even harder to comprehend.
So adding abstractions to CSS would tempt people into an in-the-large programming style that not only fails to solve in-the-large maintenance problems, but also complicates performance and puts pressure against having a smooth CSS learning curve.
[1] I expect it's much more expensive than that due to text flowing, but also somewhat time-cheaper (albeit more space-expensive) if indexes or memoization are used.
---
For language design in general, I've recently been interested in finding ways to program without discrete abstraction boundaries or steep learning curves. I've often found CSS inspiring as an example of a language that invasively fills in details of another structured program (in this case a DOM), not to mention that it's an example of a continuous reactive language.
On the other hand, I'm interested in Turing-complete programming, I'm interested in programs that combine the efforts of independent library developers, and I'm interested in network services that have synergy despite having independent maintainers. Some kind of abstraction seems necessary for those things, but I'd rather find abstractions that leak in the right ways to correspond to the abstraction leaks in reality.
For instance, independent library developers or service maintainers aren't strictly independent as long as any user cares about both of them at once. It's often feasible for the user to prod the two projects to work together, or for the user to create or promote another project that makes that interaction irrelevant. Open source ideology and reverse engineering play into this, making it easier for users to manage these collisions. Abstractions, on the other hand, typically work against it. If an abstraction can be leaky in just the right way to be easy to reverse-engineer, that would be an interesting option here.
I have been embracing less precise ways of thinking this year, but I've been using logic as a stepping stone toward vagueness for a long time.
If an abstraction leaks, for instance by allowing reverse engineering, we can usually fix this without any code changes: Just change the language docs to say that abstractions like this one are explicitly noncommittal about whether people can reverse-engineer them. If we don't like the abstraction once its idiosyncrasies are out in the open like that, we can develop an abstraction we do like and use that instead.
I don't think I ever liked these leaks, but elegance and convenience have kept me away from exploring alternative abstractions myself.
The use of mathematical formalism is a practice that has a certain cultural context, and that context is often unfair. Furthermore, formalized ideas are as rigid and neutral as they can possibly be, so it's highly questionable to say people "should" express ideas this way, as though being rigid and neutral is morally right. There's no easy answer, because the cultural process of deciding what's fair or moral can also have effects that are unfair and immoral. For these reasons, it's quite understandable to pursue alternatives to mathematics, and this year I've been particularly interested in how those alternatives might work and succeed. This might explain the shift you're seeing.
Personally, I was very literal-minded as a kid. If someone couldn't tell me exactly what they wanted from me, right down to a fully executable program, I assumed they were either being maliciously equivocal or that someday they'd make a serious error: Genies, robots, aliens, and cartoon tricksters don't allow much tolerance for vagueness, so what cushy precedent would it set if I allowed vagueness to go unchecked?
So I may at some points build up to or rationalize vagueness in terms of precise reasoning, but I don't mean that as an endorsement of precise reasoning. It's a quirk of where I'm coming from.
I've been confirming my biases :) along similar lines this year, and ended up with some surprising conclusions.
It's easy to forget, surrounded as we are with computers, that mathematical proofs were not intended to be "rigid" the way programmers understand the term. Proofs were intended to persuade other mathematicians. Standards of rigor varied depending on who the audience was.
When mathematics made its way to software the notion of rigor hardened to a sort of math envy. Endless man-years have been spent trying to "prove programs correct" rather than to do computer-aided proofs as a human activity. But the best practitioners have always known that you can't take the human out of the proof life-cycle. Just look at ACL2.
The most harmful result of this slippery slope has been to enshrine and ossify notation as something most programmers are to be prevented from changing. A particularly startling experience was to reread Ken Iverson's great Turing Award lecture, "Notation as a Tool of Thought" (http://www.eecg.toronto.edu/~jzhu/csc326/readings/iverson.pd...), keeping in mind what I wrote earlier. Before programming existed, if a mathematician wanted to change his notation, he did so with the stroke of a pen and without a second thought. Now here's a great programmer, at the very outset, simultaneously holding math up as the gold standard for notation and claiming the flexibility of its notation to be self-evidently deficient.
I'm mostly preaching to the choir about syntax enhancement, given this is a lisp forum. But I'd argue that even lisp doesn't go far enough. We're still obsessed with finding the ideal visual notation to let a program be taken in at a glance. We don't pay nearly enough attention to the fact that large programs can't be taken in at a glance, and so at large scales the visual effects of notation must be subordinated to a tactile feedback-based environment.
Lately I've been building such an environment from scratch. It'll support programming in a sort of assembly language, not just to ease my implementation burden but also to explore if a good environment might reduce the need for a high-level declarative language. Concretely, the environment will log scenarios tried out during development, recording inputs and outputs for all function calls. The binary log will augment the version control history of a project, so that you can clone a repo, download its log and then start seeing warnings anytime you make a change, something like "hey, somebody tried this function with these inputs last week, but got a different output. Would you like to a) override that output, b) pin it as a unit test that must always pass, or c) tag this function as stateful so you don't see these warnings again?"
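The warning flow described above might be sketched like this. It's a toy in-memory version: the persistent binary log, version-control integration, and a/b/c prompt are hypothetical, and a plain dict stands in for the log:

```python
# Minimal sketch of the scenario-logging idea: record each call's inputs and
# output, and warn when a later call with the same inputs produces a
# different output. An in-memory dict stands in for the binary log.

import functools

call_log = {}  # (function name, args) -> previously observed output

def logged(fn):
    @functools.wraps(fn)
    def wrapper(*args):
        key = (fn.__name__, args)
        result = fn(*args)
        if key in call_log and call_log[key] != result:
            print(f"warning: {fn.__name__}{args} previously returned "
                  f"{call_log[key]!r}, now returns {result!r}")
        call_log[key] = result
        return result
    return wrapper

@logged
def double(x):
    return x * 2

double(3)  # logged: ('double', (3,)) -> 6
double(3)  # same inputs, same output: no warning
```

If `double` were later edited to return `x * 3`, the next `double(3)` would trigger the warning; at that point the environment could offer to override the logged output, pin it as a unit test, or tag the function as stateful.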
Beyond that initial use case, I want to make it easy to do things like:
a) Create a test with a single keystroke or gesture,
b) Visualize the space of input for a function as examples organized by different code paths taken,
c) Explore how the set of inputs tried has grown over the version history of a function,
..and so on.
I'm quite surprised to find myself with this goal; nobody started out more notation-obsessed than I did.
"When mathematics made its way to software the notion of rigor hardened to a sort of math envy. Endless man-years have been spent trying to "prove programs correct" rather than to do computer-aided proofs as a human activity. But the best practitioners have always known that you can't take the human out of the proof life-cycle."
Reflexively, I don't like drawing a distinction between human activity and mathematics. What else in the universe does mathematics except humans?
Any field is going to keep humans in the loop or else it won't be useful to humans. Any field is going to strive to minimize humans' responsibilities in the loop or else it won't be useful to humans.
Right now I'm in the mood to drill down on "field" and "useful." A "field" implies there's some kind of shared research going on, and "useful" implies that it comes in handy for people's naive needs. (Who knows if something that's useful is really beneficial?)
Shared research, in itself, crucially involves summaries and citations to cash in on prior work. But when we snapshot these summaries and citations (e.g. writing them on paper), the discreteness of our snapshots permeates into discreteness of the citation expressions, which then permeates into discrete methods for locally assessing the value of the research, which then permeates into discrete valuations of the research. So I suspect this one useful application domain -- that is, the meta-theory of snapshotted shared research -- is sufficient to give a strong identity to the field of logic[1], regardless of what culture currently surrounds it and what other application domains intersect with it.
[1] I thought I was going to say "mathematics," but "logic" is a culturally closer term for what I described. The impression I've gotten is like this: Logic gives you a canvas for reasoning. Mathematics equips that canvas with intermediate scaffolding for the subjects you care about. It's paint by number! :-p
---
"Just look at ACL2."
Er, what about ACL2?
---
"The most harmful result of this slippery slope has been to enshrine and ossify notation as something most programmers are to be prevented from changing."
For at least this reply, I'd like to break down "harmful" into "less useful" and "less beneficial."
A programmer will find it more useful to be able to use the notation of their choice, especially notation of their own design. However, if many programmers each willingly stray from standard notations, they make the world of program code less useful in at least one way, to at least one audience: Newcomers face a steep learning curve, encountering different notations everywhere they look.
As for whether this is "beneficial" one way or the other, I don't know. I just wanted to cast doubt on the moral connotations of "useful" again.
---
"We don't pay nearly enough attention to the fact that large programs can't be taken in at a glance, and so at large scales the visual effects of notation must be subordinated to a tactile feedback-based environment."
I don't know if they can be completely "taken in," but making them interactive is great. Logic (in the sense I described above) is already essentially interactive because of the process of tracking down citations to convince oneself of any pieces that are hard to believe. It would be pretty useful to be able to navigate a program to see various views that are bite-size enough to take in.
Ooh, as an extrapolation of your unit-test-based ideas, it would be especially nifty to view a codebase as a fractal pattern where every possible run-time state corresponds to one location in the fractal. XD Do just enough fuzzing to draw a good density of pixels for the overall program fractal, render the errors in red, and then do more localized fuzzing when the user zooms in. Some design flaws and corner cases may be visually apparent without even being proper errors!
If you're starting with a conventional assembly language, you can probably display each subroutine call as a pyramid of the calls that happen inside, with each level being some fraction as tall as the previous one. Conditionals would be treated similarly to calls, except that they don't always happen, so sometimes they can be omitted from the inner levels of a pyramid. Non-call, non-conditional statements would be omitted from the fractal altogether to save room.
Sorry, I guess I got carried away on a specific idea. XD Your ideas are already pretty nice too.
"you can probably display each subroutine call as a pyramid of the calls that happen inside"
Oops, just after I posted this I realized that the "calls that happen inside" a subroutine may vary depending on the program state or arguments. If these separate possibilities are not shown as separate locations in the fractal, then zooming in won't be sufficient to narrow down these parameters in the view. If they are shown as separate locations, then a mere 8-bit parameter will shrink each sub-pyramid by a factor of 256.
It's likely the zooming interface will need to be accompanied by a timeline scrubber or tableau view to visualize number-like parameters.
Pointer-like parameters would get out of hand either way, let alone strings, lists, trees, and functions.
Just that it's interactive. To prove anything non-trivial you have to guide the system with the right lemmas. My example wasn't meant to be particularly insightful..
"Any field is going to strive to minimize humans' responsibilities in the loop or else it won't be useful to humans."
You: "Just that it's interactive. To prove anything non-trivial you have to guide the system with the right lemmas. My example wasn't meant to be particularly insightful.."
Oops, thanks for assuming I was making a rhetorical point or something. I actually asked because I didn't know even that much about ACL2. :)
---
Me: ""Any field is going to strive to minimize humans' responsibilities in the loop or else it won't be useful to humans."
You: "Lots of ways to do so have long-term ill-effects."
Yeah. I said "useful" and "humans" in a way that I hoped would cast doubt on the long-term effects. :)
The author has more faith in the morality of human values than I do. I think there's quite a lot that's just an accident of the culture we're currently in.
On the other hand, the author has far less confidence in the resilience of art, science, and philosophy than I do. I think any competitive process will spend some effort on research to make sure it's spending the rest of its effort wisely. In fact, I strongly suspect that we have no innate goal to pursue, so all our effort eventually serves the needs of research.
What's the problem here? The fact that people are being bombarded with lies? Consider the alternative:
- If we tell the same "truth" to everyone, then everyone will find themselves persuaded by the same few marketing pitches, and the wealth gap increases.
- If we educate everyone well, we develop technology faster, and that technology further disrupts our social ties to each other, displaces jobs, increases the risk of world annihilation, accelerates our population growth, and introduces new intelligent beings who may have even more severe problems to deal with than humans do.
- If we take our claims seriously enough not to include a snide hint of "you'd better fact-check this yourself," then the burden of proof remains with the speaker, so fewer people will be able to invest in the task of passing along the message.
- If we try to tell the whole truth to someone, but fail to discuss each topic in the proper proportion, then they may be frustrated by non-sequiturs, or they may mistake certain topics to be more relevant than they are.
There's probably no way to tell the truth to humans without being deceptive and exploitative. There's probably no such thing as truth at all, only the natural selection of processes that are good at doing research.
That doesn't mean we won't care about this, but the result of our caring is to bombard humans with lies.
I think if we stop encoding our claims using discrete snapshots, we can stop judging them on the discrete scale of whether they're "true" or not, and we might find a better overall quality of communication. (Of course, I don't mean we could stop altogether. I mean we could use discreteness less frequently.)
To return to your statement that "any field is going to strive to minimize humans' responsibilities in the loop or else it won't be useful to humans," this isn't a simple gradient descent problem, where there's a single tightly constrained way to add automation and we always minimize responsibility in the exact same way. There are lots of different ways to automate, and we think up new ways all the time. My claim is that it's worth being thoughtful and restrained about what automation we introduce, thinking hard about what its effects may be, and how we may roll it back if we find overly high unanticipated costs.
BTW, I encountered another example of runaway automation since I wrote my earlier comment. http://www.foreignaffairs.com/articles/141729/francis-fukuya... was a great narration of the recent history of the US in using people as automation by creating rules and institutions, and the incredibly poor consequences of some of those decisions.
I don't know :) I struggled to share some half-baked ideas in my post above, but yeah it's very hard to do rollbacks in the real world when people start to rely on infrastructure. Even my example of forcing people to sell houses in some situations is probably politically infeasible.
I'm far more confident :) just pointing at these problems to motivate the idea that certain kinds of constructs might be counter-productive in software because they encourage the wrong kinds of dynamics. For example, I think namespaces and modules are socially counter-productive even when implemented in a technically impeccable manner, because they encourage division of labor. In even the smallest of projects, without thinking about it, we often start out with the goal of working with someone, and proceed to do so by defining an interface and then working apart. That is no fun, and it is gradually self-limiting. I think we do this because it's so easy to create interfaces and silos, to isolate ourselves from each other's changes using namespaces and similar constructs. By creating these constructs, by making them too easily available, we forget to ask if we should use them. This is, I think, an example of taking the human out of the loop. But even here, the only way I can think of to do a rollback is by trying to build a software stack from scratch.
"For example, I think namespaces and modules are socially counter-productive even when implemented in a technically impeccable manner, because they encourage division of labor. [...] it's so easy to create interfaces and silos [...]"
Well, I think module systems are an essentially flawed idea just like everything else, and "socially counter-productive" is something I might say alluding to specific ways they're used, but I don't think they're implicated like that.
These days it's very easy to whip up ad hoc module systems with precise encapsulation properties, because we've already had such a push toward OO encapsulation; shared-nothing concurrency; proof systems with cut elimination; a concept of independent free will of each human; etc. However, some of the easiest ad hoc systems have obvious shallow flaws, like allowing accidental namespace collisions.
On a good day, encapsulation is great for vagueness: If I have to encode a vague concept in a precise programming language anyway, I'm going to think of more than one way to formalize it, and encapsulation gives me a buffer for trying several possible formalizations without changing the rest of the system. Encapsulation lowers the cost to tackle vague projects. It's arguably the kind of rollback you're looking for!
A technically impeccable module system would lower the cost to create technically impeccable encapsulations, thus facilitating vague projects better than ever... on a good day.
However, module systems will always have incremental upgrades to work on, especially regarding performance and logical completeness, not to mention keeping up with innovations in how we choose to share our vague concepts or how we choose to audit them back to their authors. And module systems will at best be a bit of a siren's song, since they facilitate the irresponsible practice of making things.
---
"In even the smallest of projects, without thinking about it, we often start out with the goal of working with someone, and proceed to do so by defining an interface and then working apart ."
Yeah, even in cases where people specifically try to work together more seamlessly, it's hard. To be blunt about it, human bodies are already silos, and our natural language utterances are practically discrete snapshots already.
As a way forward, I'm interested in encapsulation methods that don't lie about how encapsulated they really are in practice, and I'm interested in finding a more "direct manipulation" style of programming rather than the text-editing/tree-editing we use now.
I'm kind of liking David Barbour's plan and your plan of keeping a history recording. Perhaps this recording could be the primary thing that is actually "made," and programs can be invoked from examples there rather than meticulously authored.
That's interesting. I guess I'm only opposed to namespacing once the experiments stop ^_^. But it seems hard to draw that line..
"I'm kind of liking David Barbour's plan and your plan of keeping a history recording. Perhaps this recording could be the primary thing that is actually "made," and programs can be invoked from examples there rather than meticulously authored."
Interesting. I'm not sure how that would work, but in the past week I thought of a maybe-similar hypothesis: perhaps we need notational help only for scenarios. So when you open a new codebase, the first thing you should see is example runs. If you're lucky they'll be curated by the previous programmer, guiding your attention first to the most basic functionality. But regardless, you see some notational representation for scenarios, while the code itself that implements that functionality remains in a sort of assembly-like language. I'm not opposed to high-level languages, but if we have a limited amount of bandwidth for managing notations, perhaps we should spend it on tests before code.
And yeah, I too noticed that my attention has moved from notation to tools, just like David Barbour. It happened without conscious desire to imitate him, as far as I am aware. Perhaps that means we're on the right track.
Upon further thought, I'd take a world where users and forkers can create new namespaces, but every individual project restricts itself to a single one.
I'm a bit confused about where this is going. I was hoping you'd stay skeptical of division of labor and either:
- bring up some alternatives to making things;
- bring up some inexact techniques for buffering an unstable vague subproject from the rest of a project, rather than using precise modularity/encapsulation; or
- bring up some more specific technical arguments against module systems, so that I know what to look for when I'm contemplating inexact techniques myself.
I don't want to treat you as a vending machine of arguments, but if you just so happen to have ideas in these categories, I'd find them particularly interesting. :-p
---
Regarding example runs: What if the only way to share a program were by example? In fact, is that true right now? What would distinguish program notation from scenario notation?
I'm probably at the limit of my ideas :) I don't want to overstate how confident I feel about them. So far the best way I can think of to prevent module overuse from making codebases harder to understand is: don't support namespaces.
But I hadn't considered inexact alternatives, and I still don't understand what that entails. But it might well work better than dropping namespaces entirely. So my previous comment was in the vein of vague encouragement. Just throwing out ideas as they occur to me.
---
"What if the only way to share a program were by example?"
There might well be something here! We'd need a system that can integrate multiple scenarios into a single function. But I have no idea how to build such a system, and it seems far harder than what I'm trying to build. Then again, maybe that will change as we talk about it.
---
"What would distinguish program notation from scenario notation?"
I don't know yet. I suspect every program would need to invent its own notations mimicking its target domain. A browser may need notations like "when click(...) ...", while a server might need a notation like "when received(...) ...". Maybe those two can be integrated.
The general template for a scenario seems to be when X then Y. X would include events and fakes to insert into the system at various points, either carefully orchestrated or randomized as in QuickCheck. Y would include assertions of constraints of various kinds, either on the output, or on the trace generated by the system, or on invariants at various static points in the code. That's what I have so far. Can you think of anything else?
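That when-X-then-Y template could be sketched as a tiny harness. Everything here is invented for illustration (the scenario format, the `system_under_test` stand-in); it just shows X as a randomized input generator and Y as a predicate on the output:

```python
# Sketch of the "when X then Y" scenario template: X supplies inputs (fixed
# or randomized, QuickCheck-style), Y asserts a constraint on the output.

import random

def system_under_test(a, b):
    return a + b  # stand-in for the real system

def run_scenario(when, then, trials=100, seed=0):
    """when: rng -> inputs tuple; then: (inputs, output) -> bool.
    Returns the list of (inputs, output) pairs that violated the constraint."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        inputs = when(rng)
        output = system_under_test(*inputs)
        if not then(inputs, output):
            failures.append((inputs, output))
    return failures

# Scenario: when given two non-negative ints, then the sum dominates each input.
failures = run_scenario(
    when=lambda rng: (rng.randint(0, 100), rng.randint(0, 100)),
    then=lambda inputs, out: out >= inputs[0] and out >= inputs[1],
)
print(len(failures))  # 0
```

Constraints on traces or on invariants at static points in the code would need the harness to instrument the system rather than just observe its output, which is where it gets harder.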
"But I hadn't considered inexact alternatives, and I still don't understand what that entails. But it might well work better than dropping namespaces entirely."
What's your goal with dropping namespaces? You keep bringing up namespaces as though they're what I'm talking about, but I don't even know what kind of namespaces you have in mind. :)
I am indeed interested in dropping certain approaches almost entirely in favor of inexact alternatives, but until I understand what those alternatives are, I can't claim it'll work very well. :-p
One example of an inexact encapsulation technique is physical separation: Two people build Lego structures on separate corners of a table, and sometimes they may construct bridges linking the two, but they can return to working on separate parts at any time. (Unfortunately, this is still an example of making things.)
There are probably lessons in architecture, improvisational theatre, sociology, and a lot of other fields where the boundaries are soft. Just because the boundaries are soft doesn't mean they can't go fast enough to compete with the automation power of logical precision... or does it?
One thing that might help in the meantime is to think not about making things but about augmenting ourselves. This way we may still be dealing with discrete this-person/that-person divisions, but at least we aren't dealing with a discrete creator/creation divide on top of that.
(David Barbour has been talking about encouraging the self-augmenting task of programming the programmer-UI, but I'm not sure I remember his motivation. Maybe I'm getting this from him.)
---
"The general template for a scenario seems to be when X then Y . [...] That's what I have so far. Can you think of anything else?"
That actually sounds a lot like the Era module system. Not in the idiosyncratic details, but in the fact that we're both managing top-level sets of things that do Y when hedged under some modality X. I'm thinking of them as programs with imports, and you're thinking of them as testing scenarios with initial conditions. Perhaps scenarios are a special case of programs?
I'd say "perhaps they're complements," except that it's hard to be specific about any concept without it turning into a program at some point.
This might be why I'm interested in understanding programs as a special case of scenarios, if that's possible.
Sorry, no offense intended. I think I'm ascribing to you something closer to the mainstream view than my own, so you're a stand-in for the world :) Not entirely without reason; you haven't said anything in this thread but at other times you've wanted to define your modules in such a way that they'd continue to work regardless of what surrounding modules contain. I'm not sure how far you've moved away from such ideas, and I don't mean to attack you, so where I imply 'you', please assume 'everyone but me'. :)
My radical approach is: "let the caller hack my library to get it to work." Have you been converted quite that far? :)
Er, sorry, I think I must talk in a decisive way that makes me look offended. I'm so indecisive in general, I have to take what I can get. :)
I don't know what would have offended me, so don't worry about it.
---
"I think I'm ascribing to you something closer to the mainstream view than my own, so you're a stand in for the world :)"
As someone who has spent this thread denouncing the evils of truth and making things, when I hear you calling me mainstream, it's heartwarming. :)
---
"Not entirely without reason; you haven't said anything in this thread but at other times you've wanted to define your modules in such a way that they'd continue to work regardless of what surrounding modules contain."
Yes, that's something I still consider an essential feature of a module system. And I think module systems themselves are a matter of course if we have a certain culture around collecting discrete snapshots (research).
But I'm interested in how much we can avoid discrete snapshots in the first place, as well as what different cultural attitudes we could adopt.
---
"My radical approach is: "let the caller hack my library to get it to work." Have you been converted quite that far? :) "
That depends on what context we're discussing this in.
- In the here and now, I'm taking this day to day. I don't have a strong opinion on how much I should indulge my fondness of making things, what I should make or procure for whom, and how much I should work closely with whom. Nor do I have a strong opinion on what you should do. :-p Or maybe it's more accurate to say that I might have strong opinions if presented with more specific situations.
- For thousands of years down the line, I hope we get to "let the caller communicate with me if they want to, under the ethical and logistical supervision of our local peers." There's rarely a "library" involved, and if there is, it's the one who's saying "me" here.
- In the projects I've been working on and pondering, I already have a (very incomplete) plan to allow Era modules to break each other's encapsulation. The idea is that one module should have special privileges with another module as long as it can prove it already knows that module's implementation code or has that module author's permission. A user must have the implementation code of a module in order to install it in the first place, so they can of course hack on it and break compatibility and find themselves rewriting a whole chain of dependencies to use their customization, but this way they can sometimes just add a new module that takes advantage of having extra insider knowledge about the original.
In fact, now that I think about it, this "extra insider knowledge" access is similar to the extra information you're trying to access from your scenarios concept. Hey, we might be able to design a formal system together. :) Did you ever get around to looking at linear type systems?
EDIT to add: I was encouraged to think about this "extra insider knowledge" plan thanks to a recent LtU thread about "private" and "public" access controls: http://lambda-the-ultimate.org/node/4965. The Era module system has only one top-level definition per module, so a private definition doesn't make sense unless there's some way to break into it. Before that thread, I was already worried about what it would take to allow an author to prove extra things about the other definitions they had already published, but that's where I started thinking about a solution.
"Did you ever get around to looking at linear type systems?"
I was (and still am) waiting for you to teach them to me :p
My recent explorations are hardening my bias for operational/ugly/imperative approaches over declarative/elegant/functional ones. The trouble with being elegant is that the implementation is harder to tinker with without causing subtle issues. Later readers have to understand global properties at a deep level.
Don't get me wrong, there's room for high-level languages in my world-view. They're just harder to communicate the big picture of, and I want to start with simpler problems. High-level languages and type systems are hard crystals, and I'm currently more interested in building squishy cells.
"I'm currently more interested in building squishy cells."
I respect that, but just to get it written down, here's what I've come up with so far:
All lambda expressions must be lambda-lifted so they're at the top level of some module. (I guess this might count as a lambda-free logical framework.)
Every function type (X -> Y) is annotated as (X -> [C] Y) with the implementation code C of that function's source module. A simple export/import of an X-to-Y function will actually bind an import variable of the type (existscode C. (SatisfiesSignature S C * (X -> [C] Y))), where S is the import signature.
If a module has access to installed implementation code C and it knows SatisfiesSignature S C and SatisfiesSignature S C', then it can conclude C = C'. It can take advantage of this knowledge to get access to fuller knowledge about how an (X -> [C'] Y) function behaves.
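Restating the scheme in rough type-theoretic notation (this shorthand is my own, not Era's actual syntax):

```latex
% Annotated function types: C is the implementation code of the source module.
X \to [C]\, Y

% A simple export/import of an X-to-Y function binds an import variable of type
\exists C.\; \mathsf{SatisfiesSignature}(S, C) \times (X \to [C]\, Y)

% Given access to installed implementation code C, matching signatures pin
% down the implementation, unlocking fuller knowledge of how the function behaves:
\mathsf{SatisfiesSignature}(S, C) \wedge \mathsf{SatisfiesSignature}(S, C')
  \;\Longrightarrow\; C = C'
```

The existential hides C from ordinary clients, while the equality rule is exactly the "extra insider knowledge" escape hatch: a module that already possesses C can discharge the existential.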
---
"I was (and still am) waiting for you to teach them to me :p"
Really? I don't remember where we left off, but let's get in touch on this.
"..if many programmers each willingly stray from standard notations, they make the world of program code less useful in at least one way, to at least one audience: Newcomers face a steep learning curve, encountering different notations everywhere they look."
But if you rely less on notation and more on interactive feedback, this might be less of a problem :)