I noticed the Arc web server uses only one core when it runs. If you open hundreds of connections to the web server, all of them seem to reuse the core that started the web server. Starting different threads with the (thread) function doesn't seem to improve performance at all.
It's hard to believe that using a single core is a problem of mzScheme itself. So there has to be a way to make Arc use more cores.
"It sounds hard to believe using a single core is a problem of mzScheme."
(Psst, if you say "mzScheme," I feel the need to remind you that Arc works on the latest versions of Racket, which have long since dropped the name "mzscheme".)
Generally, threads are for concurrency, not necessarily parallelism. They're a workaround for an imperative, sequence-of-side-effects model of computation, which would otherwise force us to choose which subcomputation should come first. In Racket, this kind of workaround is their only purpose.
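To make the distinction concrete, here's a tiny Racket sketch (the name `worker` is just made up for this example): two threads interleave their output, but they're taking turns on one core rather than running on two.

    #lang racket
    ;; Two Racket threads interleaving on a single core:
    ;; concurrency, not parallelism.
    (define (worker name)
      (thread (lambda ()
                (for ([i (in-range 3)])
                  (printf "~a: ~a\n" name i)
                  (sleep 0.1)))))

    (for-each thread-wait (list (worker "a") (worker "b")))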
Racket has two features for parallelism, and they're called "futures" and "places":
I'm finding out about these for the first time, but I'll summarize anyway.
Futures are a lot like threads, but they're specifically for speculative parallelism. They're allowed to break some invariants that threads would have preserved, and (as per the nature of speculative parallelism) some of their computations may be thrown away.
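As far as I can tell from the docs, using them looks roughly like this (`fib` is just a stand-in workload I made up):

    #lang racket
    (require racket/future)
    ;; Sketch: `future` may run its thunk on another core, and `touch`
    ;; waits for (or finishes) it. `fib` is a stand-in workload.
    (define (fib n) (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2)))))

    (define f (future (lambda () (fib 32))))  ; possibly on a second core
    (define a (fib 32))                        ; meanwhile, on this one
    (displayln (+ a (touch f)))

One caveat the docs stress: only "future-safe" operations actually run in parallel; anything else makes the future block until it's touched.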
Places use shared-nothing concurrency with message passing, and each place runs in parallel.
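A minimal places sketch, again just from skimming the docs (`start-squarer` is a made-up example):

    #lang racket
    (require racket/place)
    ;; Sketch: each place is a separate Racket instance running in
    ;; parallel; it shares nothing and talks over a place channel.
    (define (start-squarer)
      (place ch
        (let loop ()
          (define n (place-channel-get ch))
          (place-channel-put ch (* n n))
          (loop))))

    (module+ main
      (define p (start-squarer))
      (place-channel-put p 12)
      (displayln (place-channel-get p)))  ; prints 144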
So although news.arc does spawn a thread to handle every server request, it would probably need to use Racket's futures to take advantage of multi-core systems--and even then, I'm guessing it would need some fine-tuning to avoid wasting resources. If it used Racket's places, that could make its resource usage easier to reason about (but not necessarily better!), but it would require even more substantial refactoring.
You would have to make sure that all the variables Arc uses are protected by locks when shared across cores. That's a big enough problem that it hasn't been attempted, to my knowledge.
I think you're telling fibs. :-p I double-checked srv.arc (which defines 'defop), and the code there opens a thread for every request. This is true in Anarki, in official Arc 3.1, and even way back in Arc0.
Even without parallelism, this would come in handy to prevent I/O operations from pausing the whole server.
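In Racket terms the pattern is roughly this (a hypothetical sketch, not the actual srv.arc code; `serve` and `handle` are made-up names):

    #lang racket
    ;; Thread-per-connection sketch: each accepted connection gets its
    ;; own thread, so one slow client can't stall the rest of the server.
    (define (serve port)
      (define listener (tcp-listen port 64 #t))
      (let loop ()
        (define-values (in out) (tcp-accept listener))
        (thread (lambda ()
                  (handle in out)
                  (close-input-port in)
                  (close-output-port out)))
        (loop)))

    ;; A made-up handler: echo the request line back.
    (define (handle in out)
      (define line (read-line in))
      (fprintf out "HTTP/1.0 200 OK\r\n\r\nyou said: ~a\r\n"
               (if (eof-object? line) "" line))
      (flush-output out))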
Arc does have threads, yes, but it also has a style of mutating in-memory globals willy-nilly. As a result, all its mutator primitives run in atomic sections (http://arclanguage.github.io/ref/atomic.html#atomic) at a deep level. The net effect is as if the HN server has only one thread, since most threads will be blocked most of the time.
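To make that concrete, the idea is roughly this (a simplified sketch, not the real ac.scm code, which also handles a thread re-entering atomic):

    #lang racket
    ;; One global semaphore that every mutator grabs, so only one
    ;; thread can be inside a mutation at a time.
    (define the-lock (make-semaphore 1))

    (define (atomic-invoke thunk)
      (call-with-semaphore the-lock thunk))

    ;; Example: two threads bumping a shared counter take turns.
    (define counter 0)
    (define (bump!)
      (atomic-invoke (lambda () (set! counter (+ counter 1)))))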
I can't find links at the moment, but pg has repeatedly said that HN runs on a single extremely beefy server with lots of caching for performance.
Edit: Racket's docs say that "Threads run concurrently in the sense that one thread can preempt another without its cooperation, but threads do not run in parallel in the sense of using multiple hardware processors." (http://docs.racket-lang.org/guide/concurrency.html) So arc's use of atomic wouldn't matter in this case. It does prevent HN from using multiple load-balanced servers.
Looking back, I see that I did indeed inaccurately answer zck's question about "running single-threaded". I'd like to amend my answer to "No, it runs multi-threaded, but the threads use a single core." Rocketnia is right that arc has concurrency but not parallelism.
"The net effect is as if the HN server has only one thread, since most threads will be blocked most of the time."
Well, in JavaScript, I do concurrency by manually using continuation-passing style and building my own arbiter/trampoline... and using it for all my code. If I ever do something an easier way, I have to rewrite it eventually. Whenever I want to try out a different arbiter/trampoline technique, I have to rewrite all my code.
Arc's threading semantics are at least more automatic than that. Naive Arc code is pretty much always usable as a thread, and it just so happens it's especially useful if it doesn't use expensive 'atomic blocks (or the mutators that use them).
"Automatic" doesn't necessarily mean automatic in a good way for all applications. Even if I were working in Arc, I still might resort to building my own arbiters, trampolines, and such, because concurrency experiments are part of what I'm doing.
All in all, what I mean to say is, Arc's threads are sometimes helpful. :-p
Absolutely. I was speaking only in the context of making use of multiple cores.
I see now that I overstated how bad things are when I said it's as if there's only one thread. Since I/O in one thread doesn't block the others, and accessing in-memory data is super fast, atomic isn't as bad as I've thought it was for the past 5 years.