Aaron Hsu discusses the cumulative effects of complexity and the value of simplicity; it reminded me a lot of our recent discussions here.
One of his main focal points is that "Generalized pointers are the refined sugar of programming."
He points out the size and complexity of an expression tree rendered as pointer-linked nodes: the nodes are large, the memory layout is unknown and unpredictable, and we have to rely on GC to manage it all.
He proposes an alternative structure, drawn from APL, using a "depth list" format: the nodes are stored in one flat list, and each node's depth in the tree is recorded in a parallel list.
This makes memory usage and layout predictable, and makes the structure much more amenable to generalized memory operations.
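To make that concrete, here's a minimal sketch of the idea as I understand it, in Rust rather than APL and with names of my own invention: the expression (1 + 2) * 3 stored in preorder as two parallel flat vectors.

    // A sketch of the depth-list idea, not Hsu's actual representation.
    // Payloads and depths live in two flat, parallel vectors, in preorder.

    #[derive(Debug)]
    enum Node {
        Op(char),
        Num(i64),
    }

    struct DepthList {
        nodes: Vec<Node>, // payloads in preorder
        depth: Vec<u32>,  // depth of each node; a node's parent is the
                          // nearest preceding entry with depth one less
    }

    fn main() {
        // (1 + 2) * 3 as a depth list:
        //   *   depth 0
        //   +   depth 1
        //   1   depth 2
        //   2   depth 2
        //   3   depth 1
        let tree = DepthList {
            nodes: vec![
                Node::Op('*'),
                Node::Op('+'),
                Node::Num(1),
                Node::Num(2),
                Node::Num(3),
            ],
            depth: vec![0, 1, 2, 2, 1],
        };

        // Structural queries become flat array passes with no pointer
        // chasing. For example, a node is a leaf iff the next entry is
        // not deeper than it:
        for i in 0..tree.nodes.len() {
            let is_leaf = tree
                .depth
                .get(i + 1)
                .map_or(true, |&d| d <= tree.depth[i]);
            println!("{:?} depth={} leaf={}", tree.nodes[i], tree.depth[i], is_leaf);
        }
    }

Note how the whole tree occupies two contiguous allocations of known size, which is where the predictability comes from.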
I watched just enough of this to confirm my initial suspicion that I'd seen it a few months ago. It's certainly in my neighborhood, but there are a lot of words in the first few minutes that mean different things to different people. It seems possible that what he means by obesity is just "running slower than optimal". I don't think that matters much. He talks about waste and excess, but it's actually nice to be able to ball up a sheet of paper and start afresh when you're writing a novel or a paper. Waste isn't always a bad thing. Civilizations are in some ways defined by what they waste (https://www.ribbonfarm.com/2012/08/23/waste-creativity-and-g...). So I wish I had a more concrete motivation for what he's aiming towards, so I could assess whether the waste he's concerned about is something I'm concerned about.
On one hand I'm glad to see radical ideas like this. As I've struggled to make heap allocations safe and thought about how Rust does it, I've often felt acutely uncomfortable that things have to be as they are. So maybe he's right and pointers are refined sugar that we can thrive without.
But I'm not yet convinced by this particular presentation. Depth lists seem to be basically manually allocated memory, managed by array indexes rather than addresses. All the benefits derive from the nodes having a single, consistent lifetime, and that gives up a lot of the flexibility of heap pointers! Rather than frame this as "here's a mechanism that is applicable everywhere", which seems patently false, I'd like to see more of an argument that yes, there are programs you can't write without pointers, but you don't actually need those programs. From this perspective, Rust's position in https://rust-unofficial.github.io/too-many-lists seems more honest:
"I hate linked lists. With a passion. Linked lists are terrible data structures. Now of course there's several great use cases for a linked list... But all of these cases are super rare... Linked lists are as niche and vague of a data structure as a trie."
(Even this is inadequate. Rust is not just giving up linked lists; it gives up any data structure that may have two pointers to a single allocation. Doubly linked lists. Trees with a parent pointer. And on and on. Maybe all these data structures are super rare. But it gives me the warm fuzzies to know my language can support them, and I need a stronger argument to give them up.)
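For what it's worth, the usual workaround in safe Rust for exactly these structures is the index-based arena, which circles back to my point above: indexes instead of addresses, one allocation, one lifetime. A minimal sketch, with names of my own:

    // A tree with parent links, expressed without a second pointer to
    // any allocation: the parent link is a Vec index. The whole tree
    // shares one allocation and one lifetime, and a stale index is a
    // logic bug rather than a use-after-free.

    struct TreeNode {
        value: i64,
        parent: Option<usize>, // index into Tree::nodes, not a pointer
        children: Vec<usize>,  // likewise
    }

    struct Tree {
        nodes: Vec<TreeNode>,
    }

    impl Tree {
        fn new() -> Self {
            Tree { nodes: Vec::new() }
        }

        // Append a node and wire up the parent link in both directions.
        fn add(&mut self, value: i64, parent: Option<usize>) -> usize {
            let id = self.nodes.len();
            self.nodes.push(TreeNode { value, parent, children: Vec::new() });
            if let Some(p) = parent {
                self.nodes[p].children.push(id);
            }
            id
        }
    }

    fn main() {
        let mut tree = Tree::new();
        let root = tree.add(1, None);
        let child = tree.add(2, Some(root));

        // Walking up the tree needs no unsafe code and no Rc/RefCell.
        assert_eq!(tree.nodes[child].parent, Some(root));
        println!("child of node {}: value {}", root, tree.nodes[child].value);
    }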
The trade-off Mu makes is different: you can have any data structure you want, but the refined sugar will cost you performance to ensure safety, and you'll have to deal with a little additional low-level complexity to juggle two kinds of pointers. I prefer this trade-off to anything else I've seen, but I'm still not quite happy with it. I wish there were something better, or some argument that would persuade me to settle for one of these solutions.
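To be concrete about what "two kinds of pointers" can look like: the following is not Mu's actual mechanism, just a hedged sketch of the general pattern its trade-off suggests to me, with all names my own. A safe handle carries a generation that is checked on every access, so the performance cost buys you stale accesses that fail loudly instead of becoming use-after-free.

    // Hypothetical sketch of a checked handle alongside raw indexes.

    #[derive(Clone, Copy, PartialEq)]
    struct Handle {
        index: usize,
        generation: u64, // must match the slot's generation to be valid
    }

    struct Slot<T> {
        generation: u64,
        value: Option<T>,
    }

    struct Slab<T> {
        slots: Vec<Slot<T>>,
    }

    impl<T> Slab<T> {
        fn new() -> Self {
            Slab { slots: Vec::new() }
        }

        fn alloc(&mut self, value: T) -> Handle {
            self.slots.push(Slot { generation: 0, value: Some(value) });
            Handle { index: self.slots.len() - 1, generation: 0 }
        }

        fn free(&mut self, h: Handle) {
            let slot = &mut self.slots[h.index];
            if slot.generation == h.generation {
                slot.generation += 1; // invalidate outstanding handles
                slot.value = None;
            }
        }

        // The runtime check every safe access pays for.
        fn get(&self, h: Handle) -> Option<&T> {
            let slot = self.slots.get(h.index)?;
            if slot.generation == h.generation {
                slot.value.as_ref()
            } else {
                None // stale handle: caught, not a use-after-free
            }
        }
    }

    fn main() {
        let mut slab = Slab::new();
        let h = slab.alloc("hello");
        assert_eq!(slab.get(h), Some(&"hello"));
        slab.free(h);
        assert_eq!(slab.get(h), None); // dangling access detected
    }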