I will say: I did a page reload and noticed the color picker was still in place, and I had to go hunt for the x. I expected it to reset, but I'm not sure if you see that as a bug or a feature :)
also, it looks like the client side is pure js... Is Lumen not up to that task? just curious if you tried that as it might be nice if, someday, we could get to the point where this kind of stuff could be done in arc.
We started out by making an EventSource endpoint and using JS EventSource object to get updates. Seemed to work great.
Pushed it live.
To everyone on the site. Not just to the people on /l/place. Oops. Mistake #1.
Mistake #2: I was aware that Arc kills threads that take longer than 30 seconds, but I was unaware that EventSource automatically tries to reconnect. Welcome to the DoS party.
With no way to tell the clients to stop. Mistake #3.
It was only live for about 10 minutes, but it took around two hours before the last few clients finally refreshed their browser and stopped DoSing the endpoint. The server absorbed the extra traffic like a champ, but the server logs were ridiculous for about an hour. Lots of "client took too long" error messages spat out by Arc every second.
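Mistake #3 above (no way to tell clients to stop) has a purely client-side mitigation worth noting: since EventSource reconnects automatically on error, the client can give itself an error budget and close the connection when it's exhausted. A minimal sketch, with assumed names (`subscribeWithBackoff`, `maxErrors`) and the EventSource constructor injectable so it can run outside a browser:

```javascript
// Wrap EventSource so a client stops retrying after a few consecutive
// errors, instead of reconnecting forever against a dead endpoint.
// EventSourceImpl is injectable for testing outside a browser.
function subscribeWithBackoff(url, { maxErrors = 3, EventSourceImpl } = {}) {
  const ES = EventSourceImpl ||
    (typeof EventSource !== "undefined" ? EventSource : null);
  if (!ES) throw new Error("no EventSource implementation available");
  const es = new ES(url);
  let errors = 0;
  es.addEventListener("error", () => {
    // EventSource reconnects automatically after errors; give up once
    // the budget is spent so the server stops getting hammered.
    if (++errors >= maxErrors) es.close();
  });
  // Any successful message resets the budget.
  es.addEventListener("message", () => { errors = 0; });
  return es;
}
```

Had something like this shipped with the original endpoint, the stragglers would have stopped on their own instead of DoSing the server for two hours.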
Sooo yeah, submitting changes via JS is easy. Getting updates is the hard part.
Could poll. But then that's still hammering the server, more so than it already is.
It's probably a feature that it's painful to use. Only the determined will use it for art.
That said, if anyone has ideas for how to tweak arc for performance for this scenario – and remember to calculate how much memory is used by how many simultaneous sockets are open – I'd be willing to try again.
> I was aware that Arc kills threads that take longer than 30 seconds...
> Sooo yeah, submitting changes via JS is easy. Getting updates is the hard part.
That's why I suggested a post request...
Just deliver the page and create an API endpoint on the server that gives you the last x changes from the change log.
With that ajax request, you could return these net updates for the client and use client side js to apply them to the dom. You could even create a trivial js debounce[1] algo to throttle requests if you wanted to go even further.
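The debounce mentioned above really is trivial; here's a sketch (the `fetchUpdates` name is a stand-in, not from the actual codebase):

```javascript
// Minimal debounce: collapse a burst of calls into a single call that
// fires after `wait` ms of quiet. Each new call cancels the pending one.
function debounce(fn, wait) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
}

// e.g. const pollForChanges = debounce(fetchUpdates, 500);
```

With this, even if DOM events trigger update requests rapidly, the server only sees one request per quiet period.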
why not go all the way and render client-side in a canvas as well?
As it is, the more "pixels" you have, the slower the page loads. You could probably have just a single canvas and a single event handler.
You could preserve functionality for non-js users by using an imagemap (assuming browsers still even support those) or sticking the existing code in noscript tags.
OK... I don't know where I got "single event handler" from; that obviously wouldn't be true, but it would still be simpler than having each pixel be a separate HTML form....
Let me actually see if I can come up with a POC first, at least for the canvas and javascript. Realistically, since it's just GET requests that don't need authentication, I shouldn't even need laarc running, I can just send the same requests by AJAX.
The last change is that pairwise expressions like (< 1 2) now return #t or #f, not 't or '(). Meaning you can pass arc predicates like `even` into racket functions that expect predicates.
It's pretty convenient to call any racket function without worrying about interop.
Yeah, it's one more char, but I think it makes the code more explicit, understandable, and also extensible (i.e. '.racket' also becomes an option too - not that it's needed).
>(i.e. '.racket' also becomes an option too - not that it's needed).
.arc / .racket (or .rkt) seems more intuitive than .arc / $
We could also keep the dollar sign in both cases, which I prefer aesthetically: being familiar with JavaScript and C-type languages, seeing a dot alone like that just seems weird.
In the longer term, how do you figure it'd be good for Anarki to control whether something gets overwritten or not?
Definitely leave it up to the user.
Some variables (like constants) should always be overwritten, and so the user should write `(= foo* 42)` for those.
Other variables (like tables containing state) should only be set once on startup.
Like, maybe we'd eventually want a `load-force` operation that loads a file in an "overwriting way." At that time, if we simply have `load-force` interpret all `or=` as `=`, then it might clobber too many uses of `or=`, so my advice to use `or=` for this will have turned out to be regrettable.
Perhaps, though FWIW I haven't needed a force-reload type operation. That's accomplished via restarting the server.
The only drawback for `or=` is that if you have code like this:
(= foo* (table)
   bar* (table)
   ...)
then you'd have to reindent the whole expression if you change from `=` to `or=`.
That's not a big deal though. I think I prefer `or=`.
Thanks for answering. That makes the intentions pretty clear. :)
Another thing... Have you considered initializing the application state in one file and the hardcoded constants and functions in another, so that when you're changing the code, you can reload the constants file without the state file getting involved at all?
Sure, but losing data you don't want to lose because you reloaded a source code file seems like more of an architectural issue than a language issue. It would be a code smell in any other language.
My comment was slightly facetious but the more I think about it the more I'm wondering whether something like redis or php's apc wouldn't be a good idea - and not just as a lib file but integrated into Racket's processes for dealing with arc data directly.
It could serve both as a global data store and a basis for namespacing code in the future (see my other rambling comment about namespaces), since a "namespace" could just be a table key internally.
Otherwise hitching the code to a third-party db, as a requirement, would really limit what could be done with the language and would create all kinds of problems. You would be locked into the db platform as a hardened limitation. You would inherit the complexity of external forces (i.e. what if some other app deletes or messes with the db?). And what about securing the access/ports on the db, etc.
It's always possible, but I think you would have to implement something internal where you can properly isolate and fully support all platforms the language does.
Seems like namespaces would solve these problems the right way.
Yes. Currently, the options we have for stateful data are file I/O, which doesn't work perfectly, or tables that can lose their state if the file they're in gets reloaded. I'm suggesting something like Redis or APC, but implemented in Arc at the language level, to separate that state from the source code.
I was also thinking (in vague, "sketch on the back of a coffee-stained napkin" detail) that it could also be used to flag variables for mutability and for namespacing. In that if you added "x" from foo.arc it would automatically be namespaced by filename and accessible elsewhere as "foo!x", so it wouldn't conflict with "x" in bar.arc.
>Otherwise hitching the code to a third-party db, as a requirement, would really limit what could be done with the language and would create all kinds of problems.
Yeah, but to be fair, Arc is already hitched to Racket, which appears to support SQL and SQLite, so maybe third party wrappers for that wouldn't be a bad idea as well... sometime in the future when things are organized enough that we can have a robust third party ecosystem.
> Arc is already hitched to Racket, which appears to support SQL and SQLite...
Well, racket supports SQL and SQLite as an option, but racket can also run on platforms that don't support them, so it's not 'hitched' - i.e. compiling to run on micro-controllers, mobile devices, etc.
Languages that have decent bindings to a database also have global variables that still have uses, and that can be lost when you restart the server or do other sorts of loading manipulations. There's a category of state that you want coupled to the state of the codebase.
Yes, you can definitely try to make these different categories of state less error-prone by architectural changes. But I don't think other languages do this well either. Mainstream languages, at least. I know there's research on transparent persistence where every global mutation is automatically persisted, and that's interesting. But I'm not aware of obvious and mature tooling ideas here that one can just bolt on to Arc.
All that said, database bindings would certainly be useful to bolt on to Arc.
Arguably it's an interesting failed experiment. But unfortunately that was not the conclusion Aristotle's successors derived from works like the Metaphysics. [9] Soon after, the western world fell on intellectual hard times. Instead of version 1s to be superseded, the works of Plato and Aristotle became revered texts to be mastered and discussed. And so things remained for a shockingly long time. It was not till around 1600 (in Europe, where the center of gravity had shifted by then) that one found people confident enough to treat Aristotle's work as a catalog of mistakes. And even then they rarely said so outright.
If Arc is to attain widespread adoption – an aim we're focusing on with laarc – I believe that it will need to transition away from the rigid abstractions of the past and give users what they want: true, false, empty list, and a value that means "undefined". Lua calls this nil.
JS also has a value called null. It's very useful to use null as a placeholder. Without null, it's probably impossible to differentiate between calling an arrow function without arguments vs passing in `undefined`. This gave the language flexibility going forward, and gave it an escape hatch that Lua doesn't have: the ability to use a null value as a sentinel value. You'll notice it's quite impossible to store nil in tables in Lua. (It's ... not strictly impossible, but it makes interop with libraries a nightmare.)
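The arrow-function point above is concrete and easy to demonstrate: JS default parameter values fire on `undefined` (including a missing argument) but not on `null`, which is exactly what lets `null` act as an explicit sentinel. A small illustration (the `greet` function is made up for the example):

```javascript
// Default parameters trigger on undefined, but null passes through.
const greet = (name = "anonymous") => name;

greet();          // "anonymous" - no argument, the default kicks in
greet(undefined); // "anonymous" - indistinguishable from the call above
greet(null);      // null        - an explicit "no value" sentinel
```

So a caller who wants to say "I really mean no value here, don't substitute the default" has `null` as an escape hatch, which Lua's single `nil` can't express.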
Users need JSON. JSON has true, false, null, strings, numbers, arrays, tables, and empty arrays.
Users also want to work with existing libraries. They don't want FFI wrappers. There's just no time for any of that when you're in a mad dash to hammer out features. It's partly why pg gave up on working on the core language and focused solely on HN for some years.
To that end, I propose several changes:
- the result of a predicate is either false or something other than false. This gives us interop with Scheme, because it means we can e.g. pass (fn (a b) (< a.0 b.0)) to racket's SORT function as the comparator, and everything will work fine. We don't get this now, because of the t/nil constraints.
- The result of (if false 'foo) is void, not false. This gives users an explicit way to avoid boolean values in the cases that it matters. Which is rarely, but those rare cases can be excruciating without this.
- The value of the symbol t is changed to #t. This works fine out of the box right now.
- The value of the symbol nil becomes void, not an empty list. (I need to research this one, but I am ~70% confident it can work.)
- car, cdr, and cons are updated to work with the above. I have already done this in a fork of laarc and know this is the key that makes all the other pieces fall in place.
There are many advantages of this model. One is that you'll get the entire racket ecosystem basically for free. It can be up to users whether to rely on racket or whether to stick with pure Arc. (Arc itself should probably stay away from using Racket so as to ease porting the language, but it's a blurred line to decide what counts as "core arc" vs "libraries supporting arc". That said, users need Racket features, and need to be able to evaluate racket code on demand to use these features.)
Another advantage is keyword arguments. This is one of my primary areas of research with respect to arc, and I have a prototype version that handles common cases. It will take some legwork to handle all of the cases correctly, but this gives users the ability to at least call racket functions via keyword arguments without having to resort to #:foo syntax. (#:foo was an aesthetic mistake; I think the original paper basically says "We polled some users and no one had strong feelings, so we went with this." Good luck typing #:foo into your phone's Arc REPL in under two seconds.)
(def sorted (xs (o scorer <) (o :key idfn))
  ((seval 'sort) xs scorer key: key))
In the above example, :key is a keyword argument. You can call sorted like this:
> (sorted '(c a b))
(a b c)
> (sorted '(c a b) >)
(c b a)
> (sorted '((c 1) (a 0) (b 2)) key: car)
((a 0) (b 2) (c 1))
This gives a way to port a huge quantity of Arc code in news.arc from the old style to the new keyword style incrementally. You can transition each argument over to be a keyword when it makes sense to do so. This sort of gradual evolutionary path is important when introducing fundamental changes.
There is no way to protect users from the fact that if they want to transpile, or want to interop with existing libraries, and they have no language for expressing the semantics of those interfaces, then they are mostly screwed and will quietly switch to something more effective.
Arc is the embodiment of an exciting idea: that a modern lisp can be fun, and so powerful that you can run circles around competitors before they understand what's happening. (The "suggest title" feature in laarc is still my favorite thing to point to vs HN.) And the core of that mindset is exploration. If this seems like a good idea, we should transition to it. Choosing to roll back later is always an option, even if it would be a little painful.
But I think we'll find that these features make it possible to handle all of the existing cases in Arc without problems, that things tend to "just work by accident" instead of forcing the user to think through unexpected problems, and that you'll be able to write and ship features very quickly -- and perhaps even to more runtimes than just Racket, like in-browser support for the Arc compiler. You'll notice https://docs.ycombinator.lol/ is basically the Arc library, even though it's illustrating functions from Lumen. (whenlet x 42 (prn x)) works, for example.
IMH and unqualified O, I think that what holds back widespread Arc adoption (other than the existence of Clojure) is the lack of effective namespacing and unhygienic macros, which make modular code and things like proper package management/libraries infeasible if not impossible. Also, the way the dependencies in news are engineered makes it very difficult to disentangle or update what has become an obsolescent web application from the core language without risking breaking everyone else's code.
Ironically I've found (others may feel differently) that while pg may have wanted the purpose of Arc to be exploratory programming, he seems to have done so with a number of his own assumptions baked in to the language, implicitly limiting exploration to what he considers to be correct, and to what correlates with his personal style. It's like Arc is Henry Ford's Model T: you can have it in any color you like, as long as you like black.
But changing these aspects of Arc would make it no longer Arc, at least philosophically.
I agree with some of your first point, but the second not so much. I think pg released arc as a first cut with one application baked in to act as a marker for a stable version. Obviously I can only guess, but I also think he expected a larger community of interest, one that could take it to the next level. That never happened, and with only a handful of keeners, 10 years later, the things you mention in your first point are non-existent or half-baked experiments. It's possible Arc may get there, and I hope it does, but at this rate it may take a hundred years. :)
More interestingly, though, shawn is bringing some interest back (at least for me) and making substantial changes that could breathe new life into Arc. I don't agree with the empty list - nil change, but the table changes and reader changes are good. I do think the more seamless the racket interop is and the more racket can be leveraged, the better. Clojure has good interop with java and that's what made Clojure explosive. If we can do that with Arc/Racket then we are better off for it.
"Clojure has good interop with java and that's what made Clojure explosive. If we can do that with Arc/Racket then we are better off for it."
Do we ever expect Anarki values to be somehow better than Racket values are? If so, then they shouldn't be the same values. (The occasional "interop headaches" are a symptom of Anarki values being more interchangeable with Racket values than they should be, giving people false hope that they'll be interchangeable all the time.)
I think this is why Arc originally tossed out Racket's macro system, its structure type system, and its module system. Arc macros, values, and libraries could potentially be better than Racket's, somehow, someday. If they didn't already have a better module system in mind, then maybe they were just optimistic that experimentation would get them there.
Maybe that's a failed experiment, especially in Anarki where we've had years to form consensus on better systems than Racket's, and aligning the language with Racket is for the best.
But I have a related but different experience with Cene; I have more concrete reasons to break interop there.
I'm building Cene largely because no other language has the kind of extensibility I want, even the languages I'm implementing it in. So it's not a surprise that Cene's modules aren't going to be able to interoperate with Racket's modules (much less JavaScript's modules) as peers. And since the design of user-defined macros and user-defined types ties into the design of the modules they're defined in, Cene can't really reuse Racket's macro system or first-class values either.
My comment is only a remark to "what holds back widespread Arc adoption".
If your goal is for arc to have widespread adoption then being able to leverage racket in a meaningful way will help get you there.
Currently the ability to drop into racket is not getting people to use arc, it still seems people would rather just use racket. http://arclanguage.org/item?id=20781
IMO, it would be better if arc had implicit methods that provide access to racket capabilities. In Clojure, having libraries, namespaces, and a seamless interface to java translated into a plethora of libraries for Clojurians to utilize. Can we not do the same? Well, if the goal is "widespread adoption" then we need to.
Isn't Common Lisp a language with a package system and unhygienic macros?
Common Lisp's approach is that the way a symbol is read incorporates information about the current namespace. That way, all symbols, even quoted ones, can usually only have collisions if they have collisions within the same file, and this makes hygiene problems easier to debug on a per-file basis.
I don't think it's my favorite approach, but it could very well be a viable approach for Arc. I was using an approach somewhat like this in Lathe's namespace system, although instead of qualifying symbols at read time, I was qualifying each of them individually as needed, using Arc macros.
Good question, but ns.arc manipulates what Racket calls namespaces, which are data structures that carry certain state and variable bindings we might usually think of as "global," particularly the definitions of top-level variables.
What Common Lisp and Clojure call namespaces are like prefixes that get prepended to every symbol in a file, changing them from unqualified names into qualified names.
I think namespaces are a fine approach for Arc. If Anarki's going to have both, it's probably best to rename Anarki's interactions with Racket namespaces (like in ns.arc) so they're called "environments" or something, to reduce confusion. I think they will essentially fit the role of what Common Lisp calls environments.
Of course, people doing Racket interop will still need to know they're called namespaces on the Racket side. Is there another name we can use for Common Lisp style namespaces? "Qualifications" seems like it could work.
I don't know, but I can't see how it would be feasible when any module or package could arbitrarily and globally redefine existing symbols, functions, operators, etc.
I haven't really yet; so far it's just a set of files of code. After I make some headway on my current projects I plan to turn my attention back to it, and they'll each be packages.
and ar-nil is falsy. So your example will work unmodified.
... oh. And now that I check, you're right about void:
arc> (seval '(if (void) 1 2))
1
I foolishly assumed that (void) in racket is falsy. But it's truthy. That rules out using racket's (void). `(null ? 1 : 2)` gives 2 in JS, and `if nil then 1 else 2 end` gives 2 in Lua, so it's surprising that `(if (void) 1 2)` gives 1 in Racket.
For what it's worth, in an experimental version, using #f for ar-nil and #t for ar-t worked. It's a bit of a strange idea, but it helps interop significantly due to being able to pass arc predicates right into racket.
It'd be better for me to show a working prototype with ar-nil set to #f rather than try to argue in favor of it here. But to your original question: yes, anything other than |nil| would be great, since that gets rid of the majority of interop headaches.
One thing that might be worth pointing out: The lack of void means it's back to the old question of "how do you express {"a": false, "b": []} in arc?" Choosing between #f and () for nil implies a choice between forcing {"a": false} or {"b": []} to be the only possible hash table structures, since one of them would be excluded from hash tables. But that might be a tangent.
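The JSON side of that question is easy to make concrete: JSON keeps `false`, `null`, and `[]` as three distinct values, so a host language needs three distinct representations to round-trip such a document losslessly. A quick demonstration:

```javascript
// JSON distinguishes false, the empty array, and null; a language that
// conflates any two of them cannot store this object faithfully.
const doc = JSON.parse('{"a": false, "b": [], "c": null}');

console.log(doc.a === false);       // true
console.log(Array.isArray(doc.b));  // true
console.log(doc.c === null);        // true

// Round-trips without loss:
console.log(JSON.stringify(doc));   // {"a":false,"b":[],"c":null}
```

If `false` and `()` are the same value in the host language, then one of `"a"` or `"b"` comes back wrong after a round trip, which is the ambiguity described above.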
Yes, the keyword section was poorly explained. My comment should have been prefixed with "some thoughts on arc, in case it's helpful" rather than "here is a proposal." And then I should have taken that comment and put it in a different thread, since keyword arguments are unrelated to the question of nil becoming (). I was mostly just excited for the possibility of leveraging more racket now that denil/niltree might be cut soon.
But why should an empty list be falsy? An empty list can be as valid a form of list as a non-empty one. It also seems to me that an empty list shouldn't be nil, since to me, nil should mean "undefined", and an empty list is well defined as an empty list.
Would disambiguation here really make Arc programs less terse? Is that a decision that should be enforced by the language or left to the author?
In my example above, making an empty list truthy would cause this change:
(def map1 (f xs)
  "Returns a list containing the result of function 'f' applied to every element of 'xs'."
-  (if xs
+  (if (~empty? xs)
     (cons (f car.xs)
           (map1 f cdr.xs))))
We can argue how important this is, but disambiguation does definitely make Arc programs less terse.
- the assumption baked into this argument is that cdr of an empty list returns an empty list. Switching nil to #f and letting empty list be truthy avoids this problem.
- Good names are important. ~empty? isn't really a fair characterization. Lumen uses (some? xs). There is also another way: Update `no` to be (or (is x nil) (empty x)), and then use (~no xs).
For what it's worth, my approach here is pattern-matching. In Lathe Comforts for Racket I implement a macro `expect` which expands to Racket's `match` like so:
If Arc came with a similar pattern-matching DSL and `expect`, we could write this:
(def map1 (f xs)
  (expect xs (cons x xs) ()
    (cons (f x) (map1 f xs))))
The line "expect xs (cons x xs) ()" conveys "If xs isn't a cons cell, finish with an empty list. Otherwise, proceed with x and xs bound to its car and cdr."
I agree; an empty list is a value. And when you consider interop with other langs, they will infer it to be some object too, where predicates will see it as a value not the lack of one.
I actually like `#<void>`, because it makes more of a distinction between pure and impure functions.
I read a good blog post[0] recently on how not distinguishing makes it difficult to guess the behaviour of simple and short code snippets (in JavaScript, but the same could apply to Arc).
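The JS version of that point fits in two lines: without any void-like marker, a value-returning call and a side-effect-only call look identical at the call site, so you have to memorize which is which.

```javascript
// map returns a new array; forEach exists only for side effects and
// always returns undefined. The call syntax gives no hint of this.
const doubled = [1, 2, 3].map(x => x * 2);     // [2, 4, 6]
const nothing = [1, 2, 3].forEach(x => x * 2); // undefined
```

A distinct `#<void>` return for effect-only code makes that difference visible instead of something you have to remember.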
There's definitely a tension between being a concise language and being a safe language. Arc doesn't try to help newcomers avoid simple mistakes. It gives them enough rope to hang themselves, like with unhygienic macros. That's partly why I stopped using Arc to teach my students programming (http://akkartik.name/post/mu).
This is off-topic, but pg is shockingly Eurocentric; he seems, from that excerpt, to be completely oblivious to (or willing to elide) the generations of advances made in almost every field by the luminaries of the Muslim empires, including things like... ALGEBRA!
Oh yeah, I never posted an update. I finished porting HN to elisp. That project's readme should now be "This is a mostly-complete implementation of Paul Graham's Arc in Emacs Lisp."
It’s wildly hilarious to see HN running in emacs. It loads up all 500 laarc stories and user profiles in a few seconds, which is much lower overhead than I thought. Then you can run `(login-page "foo" ..)` and it spits out all the right HTML. If it weren’t so weird to do networking in elisp it could even become a real server. </hack>
If anyone's curious, here's an email I sent to a friend about this:
--
I finished porting HN to emacs. One advantage of the elisp runtime is that closures have printed representation. That means you can write them, read them, and evaluate after reading. Which implies serialization.
It also means you can deduplicate them. I notice that laarc has 20k fnids. When I deduped the fns* table on my local elisp version of HN, the size of the table went from 49 to 2.
From looking at the closures, I don’t think there is any reason they can’t be persisted to disk and run later. I can’t find any examples of fnids that capture lexical context which mutates before the fnid is called. The lack of mutation is a key point that should enable serialization... I think/hope.
The takeaway is that there doesn't seem to be a reason to get rid of the fnid system as laarc scales. I started doing that for a few endpoints, but it’s crazy how much work it is in comparison to spinning up an fnid. HN has to detect whether a request is a POST or GET, and respond differently. It's good to have that in general -- in fact, every web framework except arc has the ability to differentiate between POST vs GET. But it really speaks to the power of this technique that it's possible to do without!
I need to figure out the easiest thing to do to keep https://www.laarc.io memory usage down as we scale.
HN's server specs are monstrous, and it’s a good reminder that cloud hosting is still limited if you absolutely cannot scale horizontally. Maybe colocation could be the way to go for us later on.
The site’s memory usage has been steadily creeping up: 76MB was the baseline in the early days, and now it’s >100MB and <150MB. That’s a worrying sign. I can bump the droplet from 1GB to 2, and 2 to 4, but that won’t work forever.
I was going to say “This probably won’t be a problem for 6 months or so.” But that’s not true. If laarc gets picked up on techcrunch, I need to be able to handle an infinite amount of traffic. For sufficiently small values of infinity.
Yeah, memory usage was a big reason I stopped trying to maintain servers in Arc (and went down my insane yak-shaving rabbithole of trying to reinvent the entire computing stack from scratch).
paulgraham: "Really? You've been mad at me for years for writing a new Lisp dialect? But new dialects are so common in the history of Lisp. I've probably used 20 in my life. And why be so attached to CL specifically?
In the old days, Lisp hackers always used multiple dialects, and basically tried to program as close to the platonic form of Lisp as they could modulo the flaws of whatever one they happened to be using. Don't things work that way now? Are there lots of people who are attached to CL specifically rather than Lisp generally?"
---
demoss: "What is annoying is that for 6 years now you have been building a following of people who go "Lisp is theoretically nice, but all the existing ones are SO full of onions! I'm going to wait for Arc to come out before I learn Lisp!""
---
death: "The cardinal rule of Lisp: don't reinvent, integrate."
paulgraham: "I don't know where you picked this up, but it seems the very opposite of the Lisp spirit to me. E.g. Steele and Sussman. Are you sure you didn't mean the cardinal rule of Java or something?"