Oh yeah, I never posted an update. I finished porting HN to elisp. That project's readme should now be "This is a mostly-complete implementation of Paul Graham's Arc in Emacs Lisp."
It’s wildly hilarious to see HN running in emacs. It loads up all 500 laarc stories and user profiles in a few seconds, which is much lower overhead than I thought. Then you can run `(login-page "foo" ..)` and it spits out all the right HTML. If it weren’t so weird to do networking in elisp it could even become a real server. </hack>
If anyone's curious, here's an email I sent to a friend about this:
--
I finished porting HN to emacs. One advantage of the elisp runtime is that closures have a printed representation. That means you can write them, read them back, and evaluate them after reading. Which implies serialization.
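Here's the whole trick in a nutshell (a minimal sketch; assumes lexical-binding is on, and the exact printed form varies across Emacs versions):

```elisp
;; -*- lexical-binding: t; -*-
;; A closure prints as readable data, so print -> read -> funcall
;; round-trips. (Interpreted closures; the printed form differs
;; between Emacs versions, but stays readable.)
(let* ((y 10)
       (fn (lambda (x) (+ x y)))        ; closure capturing y
       (printed (prin1-to-string fn))   ; e.g. "(closure ((y . 10) t) (x) (+ x y))"
       (revived (read printed)))        ; read it back from the string
  (funcall revived 32))                 ;=> 42
```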
It also means you can deduplicate them. I notice that laarc has 20k fnids. When I deduped the fns* table on my local elisp version of HN, the size of the table went from 49 to 2.
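The dedup itself is almost trivial once closures compare as data. Something like this sketch (my names, not the actual fns* layout):

```elisp
;; Sketch of the dedup idea: key each stored closure by its printed
;; representation, so structurally identical closures collapse into
;; a single entry. `my/fn-pool' and `my/intern-fn' are made-up names.
(defvar my/fn-pool (make-hash-table :test #'equal))

(defun my/intern-fn (fn)
  "Return a canonical copy of closure FN, pooling duplicates."
  (let ((key (prin1-to-string fn)))
    (or (gethash key my/fn-pool)
        (puthash key fn my/fn-pool))))
```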
From looking at the closures, I don’t think there is any reason they can’t be persisted to disk and run later. I can’t find any examples of fnids that capture lexical context that mutates before the fnid is called. That matters because printing a closure snapshots its captured environment; any mutation after the snapshot would be lost on revival. So the lack of mutation is the key point that should enable serialization... I think/hope.
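Concretely, persisting one should be just write + read. A sketch (the file path is made up, and this assumes nothing mutates the captured environment in between):

```elisp
;; -*- lexical-binding: t; -*-
;; Persist a closure to disk and revive it later. Only safe if the
;; captured environment isn't mutated between save and load, which
;; is the caveat above. The path is just for illustration.
(let ((greeting "hello")
      (file "/tmp/saved-closure.el"))
  ;; Save the printed closure.
  (with-temp-file file
    (prin1 (lambda (name) (format "%s, %s" greeting name))
           (current-buffer)))
  ;; ...later, possibly in a fresh Emacs session...
  (funcall (with-temp-buffer
             (insert-file-contents file)
             (read (current-buffer)))
           "laarc"))                     ;=> "hello, laarc"
```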
The takeaway is that there doesn't seem to be a reason to get rid of the fnid system as laarc scales. I started replacing fnids with explicit endpoints for a few pages, but it’s crazy how much work that is compared to spinning up an fnid. With explicit endpoints, HN has to detect whether a request is a POST or a GET, and respond differently. It's good to have that in general -- in fact, every web framework except arc can differentiate between POST and GET. But it really speaks to the power of this technique that it's possible to do without!
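For anyone who hasn't read the Arc source, the fnid mechanism boils down to something like this toy sketch (my own names, not Arc's actual code):

```elisp
;; Toy sketch of the fnid trick: rendering a link or form stores a
;; continuation closure under a fresh id, and the request handler
;; just looks the id up and calls it. All names here are made up.
(defvar my/fnids (make-hash-table :test #'equal))

(defun my/new-fnid (fn)
  "Register closure FN; return the id to embed in a URL or form."
  (let ((id (format "%08x" (random (expt 2 32)))))
    (puthash id fn my/fnids)
    id))

(defun my/handle-fnid (id req)
  "Run the closure stored under ID on REQ, or complain."
  (let ((fn (gethash id my/fnids)))
    (if fn (funcall fn req) "Unknown or expired link.")))
```

Every generated link gets a fresh id, so the server is effectively storing continuations keyed by URL -- which is why spinning one up is so much less work than writing a real endpoint.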
I need to figure out the easiest way to keep memory usage down on https://www.laarc.io as we scale.
HN's server specs are monstrous, and it’s a good reminder that cloud hosting is still limited if you absolutely cannot scale horizontally. Colocation might be the way to go for us later on.
The site’s memory usage has been steadily creeping up: 76MB was the baseline in the early days, and now it’s above 100MB but below 150MB. That’s a worrying sign. I can bump the droplet from 1GB to 2GB, and from 2 to 4, but that won’t work forever.
I was going to say “This probably won’t be a problem for 6 months or so.” But that’s not true. If laarc gets picked up on TechCrunch, I need to be able to handle an infinite amount of traffic. For sufficiently small values of infinity.
Yeah, memory usage was a big reason I stopped trying to maintain servers in Arc (and went down my insane yak-shaving rabbit hole of trying to reinvent the entire computing stack from scratch).