3 points by i4cu 1892 days ago | link | parent

I agree that order matters. It matters in pretty much every application that uses data. Even HN orders its data by newest, rank (best), etc. And Arc has a ton of functionality for sorting and comparing data to let you do that very thing.

But I don't see how the need to order data in your application equates to a need for tables that preserve insertion order.

I'm not going to say what you're doing is wrong, because it's not, but I am going to say it's non-standard, and I think there are other ways to manage your data that don't require ordered tables (which have downsides as the data grows). It's just my opinion, but there's not much you can't do by storing table records the standard way and maintaining your own indexes.
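
Something like this is all I mean; a minimal sketch (recs*, order*, rec-put and rec-seq are names I just made up): a plain table, plus a list you maintain alongside it that records insertion order.

  (= recs* (table) order* nil)

  (def rec-put (k v)
    (unless (recs* k) (push k order*))   ; remember first-insertion order
    (= (recs* k) v))

  (def rec-seq ()                        ; records, oldest first
    (map [list _ (recs* _)] (rev order*)))

And because order* is just a list, you can sort it by whatever you like (rank, date, etc.) without touching the table itself.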

That said, if your data load is always low with little to no growth, it could very well be a good fit.



2 points by kinnard 1892 days ago | link

This may have no bearing on how tables are implemented in Arc, but a more efficient dict implementation (PyPy's) happened to be ordered:

https://morepypy.blogspot.com/2015/01/faster-more-memory-eff...

"One surprising part is that the new design, besides being more memory efficient, is ordered by design: it preserves the insertion order."

-----

1 point by i4cu 1891 days ago | link

> "being more memory efficient"

It's only more memory efficient than its previous incarnation.

It's an interesting implementation, though. The use of sparse arrays to index the entries is compelling. Still, measuring this kind of stuff can be non-trivial, as you also have to account for gc (and/or compaction) kicking in at different points within your application. There are always trade-offs.
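
For anyone who hasn't read the post, here's the rough shape of that design in Arc terms (a sketch only; entries*, idx*, oput and oget are made-up names, and an ordinary table stands in for the sparse hash-indexed array): the entries sit in one dense, append-only sequence, and the index only stores positions into it, which is why iteration falls out in insertion order.

  (= entries* nil idx* (table))

  (def oput (k v)
    (aif (idx* k)
         (= ((entries* it) 1) v)           ; existing key: overwrite its value in place
         (do (= (idx* k) (len entries*))   ; new key: index remembers its position,
             (= entries* (join entries* (list (list k v)))))))  ; entry appended to the dense store

  (def oget (k)
    (awhen (idx* k) ((entries* it) 1)))

  ; walking entries* directly gives the pairs in insertion order

The real thing packs the index into a small sparse array keyed by hash rather than a table, but the split between "where things are" and "what order they arrived in" is the same.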

Personally, I haven't compared the various implementations (Clojure vs. Racket vs. Python) closely enough to give you any real insight. I do know Clojure's (or rather Java's) array-maps are costly for iterative lookups within large data sets, because I've benchmarked them.

The best bet is to see how an implementation works for your app with your data.
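
For example, a rough sketch of what I mean using Arc's time macro (bench is a made-up name): build the same n records as a table and as an alist (which, like an array-map, does a linear scan per lookup), then time iterative lookups over both.

  (def bench (n)
    (let ks (map [coerce _ 'string] (range 1 n))
      (with (tb (table) al nil)
        (each k ks
          (= (tb k) t)
          (push (list k t) al))
        (pr "table lookups: ") (time (each k ks (tb k)))
        (pr "alist lookups: ") (time (each k ks (alref al k))))))

At small n the alist can easily hold its own; the gap (and the gc behaviour) only really shows up once n and the number of lookups get large, which is why you want to run it with your data.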

-----

1 point by kinnard 1891 days ago | link

> "The microbenchmarks that we did show large improvements on large and very large dictionaries (particularly, building dictionaries of at least a couple 100s of items is now twice faster) and break-even on small ones (between 20% slower and 20% faster depending very much on the usage patterns and sizes of dictionaries)."

-----

2 points by i4cu 1891 days ago | link

relative to... its previous incarnation.

Microbenchmarks against a previous version only mean they've made a relative improvement.

The benchmarks that would matter to us (or at least me) are:

1. How does it compare to an equivalent implementation that doesn't have to maintain insertion order?

2. How does it hold up under stress (larger data sets, under heavy load, where gc/compaction have to kick in)?

Obviously none of this should matter to you, as you've said your data load is low with no growth. So Bob's your uncle.

-----

1 point by kinnard 1892 days ago | link

I do think the data load for this would always be low.

I feel like implementing tables with indices would basically be like implementing ordered dictionaries (new nomenclature I've arrived at per http://arclanguage.org/item?id=21037), no?

-----