1 point by conanite 5419 days ago | link | parent

They're correct, but inefficient - it would be preferable to have a lock per queue.
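
A rough sketch of the idea in MzScheme, with one semaphore per queue so that operations on different queues don't serialize against each other (the queue representation and names here are made up for illustration, not Arc's actual implementation):

  ; each queue carries its own lock: (lock . boxed list of items)
  (define (make-locked-queue)
    (cons (make-semaphore 1) (box '())))

  (define (enq! q item)
    (call-with-semaphore (car q)
      (lambda ()
        (set-box! (cdr q) (append (unbox (cdr q)) (list item))))))

  (define (deq! q)                          ; returns #f if the queue is empty
    (call-with-semaphore (car q)
      (lambda ()
        (let ((items (unbox (cdr q))))
          (and (pair? items)
               (begin (set-box! (cdr q) (cdr items))
                      (car items)))))))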


1 point by akkartik 5419 days ago | link

Fo' shizzle. The timing of this item is great, because I've been struggling with performance in my Arc-based webapp. I have one thread bringing new items into the index, and it's constantly contending with the UI threads, causing unacceptable latencies.

-----

1 point by aw 5419 days ago | link

MzScheme only uses one CPU, so using atomic doesn't affect performance.

Unless you're wrapping an I/O operation in atomic -- don't do that :-)
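
To make the hazard concrete, here's a toy MzScheme sketch, assuming (as the parent comment suggests) that all atomic sections share one global lock; the slow operation is simulated with sleep rather than real I/O:

  (define global-atomic-lock (make-semaphore 1))

  (define (atomically thunk)
    (call-with-semaphore global-atomic-lock thunk))

  (define counter 0)

  ; thread A holds the lock while "doing I/O" (simulated with sleep)
  (thread (lambda ()
            (atomically (lambda () (sleep 5)))))

  ; thread B's quick in-memory update now waits ~5 seconds for the lock
  (thread (lambda ()
            (atomically (lambda () (set! counter (+ counter 1))))))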

-----

1 point by akkartik 5419 days ago | link

I don't use atomic myself. But I find that enabling the back-end thread slows down the UI threads. I've been struggling to create a simple program to illustrate this.

In fact, wrapping parts of my UI threads in atomic makes them more responsive[1]. I think that's because instead of incurring 200 thread switches with their associated overhead, a single thread runs to 'completion'.

[1] You're right that throughput may go down, though I try to eliminate I/O in atomic blocks. But latency is more of a concern to me than wasted CPU: it's not the end of the world if new data takes a few seconds longer to make it into the index, but a few seconds of delay in the front end can make or break the user experience.
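
For concreteness, a sketch of the effect in MzScheme terms: if both the UI handler and the indexing loop go through the same lock (which is roughly what wrapping them in atomic gives you), a request runs to completion relative to the indexer instead of being interleaved with it. The handler and indexer bodies below are stand-ins, not real webapp code:

  (define the-lock (make-semaphore 1))      ; shared, like the global atomic lock

  (define (handle-request req)              ; latency-sensitive path
    (call-with-semaphore the-lock
      (lambda ()
        (format "<html>~a</html>" req))))   ; CPU-only work; keep I/O outside

  (define (index-one-item item)             ; background work, one item at a time
    (call-with-semaphore the-lock
      (lambda ()
        (expt item 1000))))                 ; stand-in for the real indexing step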

-----

1 point by aw 5419 days ago | link

Wow, that's interesting!

"vector-set-performance-stats!" in http://docs.plt-scheme.org/reference/runtime.html returns the number of thread context switches that have been performed, so you might try calling that at the beginning and at the end of an UI operation so you can see how many thread switches happened in between.

-----

1 point by akkartik 5418 days ago | link

Thanks for that tip! With its help I was able to figure out that my background thread was just doing a lot more work than I thought.

-----