Hacker News
Scalable, Robust and Standard Java Web Services with Fibers (paralleluniverse.co)
28 points by pron on May 29, 2014 | 7 comments


This is pretty cool. I'd love to see some benchmarks comparing it to Akka and other async Java libraries.

Also, just thinking out loud here, it's cool that this increases capacity by ~10X but since it doesn't actually improve latency, it's sort of like a call center putting you on hold instead of a busy signal. Is that a fair characterization?

As such my first focus would probably be on fault tolerance and auto-scaling, i.e. spin up more servers in response to increased latencies. And if I do that, then I'd have to ask myself whether it's really worth it to stitch this library deep into my app just to optimize for a situation (busy signals) that I've now architected to avoid in the first place.

Haven't thought that through very much, so consider it a straw man argument... any thoughts?


It's not like a call center putting you on hold. Clients that requested only microservice A now get a response immediately instead of a timeout. As for fault tolerance, that's right, but you'd still prefer that a fault in one microservice not affect unrelated services.


I would assume that other async libraries would give similar results, but only Comsat/Quasar lets you keep writing simple, blocking code, using the same standard APIs.

As to scaling, in the case of a simple web service like this, Comsat would increase your capacity by a lot, in a manner similar to buying more hardware (though I would rather use a free library than buy a lot more servers). But once things get more complex and your system more distributed, utilizing your machines' resources better gives you a much bigger boost than additional hardware. A simple example: if you're using a distributed write-through cache, packing more work into fewer machines will have a very big impact on your latencies.


I see, so in your example service B is modeling a cache miss.

My initial reaction was, service B is "failing", so wouldn't the fault tolerance and autoscaling you ought to have in place be good enough to deal with it? But a write-through cache is a good example where there can be a relatively massive disparity in latencies depending on whether it's a hit or a miss, so a swarm of misses could tie up the server's connections. Getting an extra ~10X connections to work with makes a big difference there.
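The "swarm of misses ties up connections" point follows directly from Little's law (requests in flight = arrival rate × latency). A quick sketch with made-up numbers (the 2 ms / 200 ms latencies and the 1000 req/s rate are illustrative assumptions, not figures from the article):

```java
// Little's law: L = lambda * W, i.e. concurrent requests in flight
// equal the arrival rate times the per-request latency.
public class LittlesLaw {
    public static void main(String[] args) {
        double rate = 1000.0;        // requests per second (assumed)
        double hitLatency = 0.002;   // 2 ms on a cache hit (assumed)
        double missLatency = 0.200;  // 200 ms on a cache miss (assumed)
        System.out.printf("hits:   %.0f connections in flight%n", rate * hitLatency);
        System.out.printf("misses: %.0f connections in flight%n", rate * missLatency);
    }
}
```

With the same traffic, a run of misses needs 100× the concurrent connections of a run of hits, which is why a larger connection/fiber budget helps even when average latency doesn't change.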

I guess my next thought would be, how broadly applicable are those service characteristics? And is it worth the "risk" of stitching in a library that does bytecode manipulation and whatnot as a general practice, or should I use it in a more targeted fashion?

Again, just thinking out loud... I'm very impressed with the work. Kudos for a library that minimizes the complexity of async programming.


Actually, service B is modeling a failure. What I meant in my previous comment was that adding more machines for scaling is not only more expensive, but can add significant latencies if those machines are using a distributed cache (as each would invalidate the others' caches, creating more misses). It's always preferable to utilize fewer machines better.

I would say that the risk Quasar brings is no different from that of any one-year-old library: it's young, so test it well and introduce it carefully and gradually into production. The bytecode manipulation Quasar performs is vastly simpler than, say, JRebel's. Quasar does not change object layouts or anything of the sort; it simply injects code into suspendable methods to copy local variables into an array and back.
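To make the "copy locals into an array and back" mechanism concrete, here is a hand-written sketch of what such injected code does in principle. This is plain Java mimicking the idea, not Quasar's actual generated bytecode; the frame stack, the phase flag, and the method names are all illustrative:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of suspendable-method instrumentation: on suspension the method's
// live locals are spilled into an Object[] frame; on resumption they are
// restored and execution continues past the yield point.
public class InstrumentationSketch {
    // Per-"fiber" stack of saved frames (illustrative stand-in).
    static final Deque<Object[]> frames = new ArrayDeque<>();

    // resuming == false: run up to the suspend point, save locals, bail out.
    // resuming == true:  restore locals and run the code after the yield.
    static int compute(boolean resuming) {
        int a;
        int b;
        if (resuming) {
            Object[] f = frames.pop();          // restore locals from the frame
            a = (Integer) f[0];
            b = (Integer) f[1];
        } else {
            a = 6;                              // normal execution...
            b = 7;
            frames.push(new Object[]{a, b});    // spill locals, then "suspend"
            return -1;                          // control returns to the scheduler
        }
        return a * b;                           // code after the yield point
    }

    public static void main(String[] args) {
        int first = compute(false);   // hits the suspend point
        int second = compute(true);   // "resumed" later by the scheduler
        System.out.println(first + " " + second);  // -1 42
    }
}
```

The real instrumentation does this at the bytecode level for every suspendable call site, which is why it doesn't need to touch object layouts at all.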


I'm highly interested in all of this, but I can't understand a damn thing from the article.

How does Quasar implement fibers? How is code transformed to CSP (if at all)? Where are the yield points, etc.?


There's a lot of information on the blog[1] and in the documentation.

While fibers support CSP just as in Go (and can run Erlang-like actors), the code here doesn't use that. It simply uses fiber-blocking (rather than thread-blocking) IO: the servlet container assigns a fiber per request, and the call to the microservices (via the Apache HTTP client) blocks the fiber but not the thread.

[1]: In particular: http://blog.paralleluniverse.co/2014/02/06/fibers-threads-st...
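For intuition about why blocking a fiber rather than a thread raises capacity: under the hood a parked fiber holds no OS thread while its IO is pending, which is what this explicit async sketch shows. This is plain-JDK code modeling the effect (the 50 ms timer stands in for the slow microservice; none of this is Quasar's API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// 100 "requests" each wait on a slow downstream call, yet only one
// scheduler thread is ever needed to run their continuations, because
// a pending call holds no thread -- the effect fibers give you while
// letting you keep writing blocking-style code.
public class FiberBlockingSketch {
    public static void main(String[] args) throws Exception {
        ScheduledExecutorService io = Executors.newScheduledThreadPool(1);
        ExecutorService scheduler = Executors.newSingleThreadExecutor();
        Set<String> threadsUsed = ConcurrentHashMap.newKeySet();

        List<CompletableFuture<String>> requests = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            final int id = i;
            CompletableFuture<String> resp = new CompletableFuture<>();
            // "Call" the microservice: completes 50 ms later, holding no thread.
            io.schedule(() -> resp.complete("B-" + id), 50, TimeUnit.MILLISECONDS);
            requests.add(resp.thenApplyAsync(r -> {
                threadsUsed.add(Thread.currentThread().getName());
                return r;
            }, scheduler));
        }
        CompletableFuture.allOf(requests.toArray(new CompletableFuture[0])).join();
        System.out.println("requests served: " + requests.size());
        System.out.println("threads used for continuations: " + threadsUsed.size());
        io.shutdown();
        scheduler.shutdown();
    }
}
```

The fiber version of this keeps the sequential, blocking-style source code; the scheduler-and-callbacks plumbing above is what the runtime does for you.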



