
Azul's Zing Benchmarks - Way Better Than Native JVMs

10.21.2010
Update: Terracotta emailed me a clarification regarding some of the comments Azul makes below about Terracotta's BigMemory technology:

"The response times (cache operation latencies) Terracotta sees in its tests are fractions of milliseconds. Their benchmark delivers mean latencies of about one tenth of a millisecond even with cache sizes up to 350GB. Furthermore, by being off-heap, Terracotta's product is not subject to full GC pauses even with standard JVMs.

It is important to note that it is a bit like comparing apples with oranges because Ehcache with BigMemory is a cache - so response times in this context would typically be understood as cache operation latencies. Azul is a JVM - so response times would typically be GC cycle times.

We just wanted to be sure that readers understood the difference between the products and the response times, as the current sentence makes it appear that Terracotta's product is slower than Azul's, which it isn't."

Azul Systems backed up the impressive response time claims for its Zing platform this week as the software stack went GA. Zing was unveiled in June, and Azul has now released benchmarks to support the numbers it showed during the beta.

Zing is essentially a virtualized, software-only port of the innovative techniques used in Azul's Vega 3 hardware appliances. The virtual appliance makes many of Vega 3's benefits available on any x86 hardware. Its main goal is to resolve the scalability and response time issues that have long been inherent in Java applications. It also optimizes tier 1 and tier 2 Java applications for virtual and multi-tenant cloud environments. Azul says that Zing will enable new levels of innovation in Java applications, allowing faster development cycles and large amounts of in-memory data. With Zing we could start to see large, hyper-interactive, real-time business operation apps. You can read more about Zing here.

Benchmarks

Azul CTO Gil Tene used a demo application built on Liferay Portal 5.2.3 running on JBoss AS 5.1. The load test used a single JVM and had an SLA requiring 99.9% of users to receive a response in five seconds or less. You can see from the graph below that a native JVM was able to support only 45 concurrent users, while Zing could support over 800 users and keep its response time under one second.

[Graph: concurrent users supported within the five-second SLA, native JVM vs. Zing]

The full benchmark report has been attached for more in-depth analysis.
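
An SLA like this is evaluated against a high percentile of the response-time distribution rather than the average, which is exactly where GC pauses hurt. The snippet below is a minimal, self-contained sketch of such a percentile check against recorded latencies; it is not Azul's benchmark harness, and the sample data is fabricated.

    import java.util.Arrays;

    // Minimal sketch of a percentile-based SLA check (not Azul's harness).
    // Pass/fail depends on the 99.9th percentile of response times, not the mean.
    public class SlaCheck {
        static double percentile(double[] latenciesMs, double pct) {
            double[] sorted = latenciesMs.clone();
            Arrays.sort(sorted);
            int index = (int) Math.ceil((pct / 100.0) * sorted.length) - 1;
            return sorted[Math.max(0, Math.min(index, sorted.length - 1))];
        }

        public static void main(String[] args) {
            double[] latenciesMs = new double[10000];
            for (int i = 0; i < latenciesMs.length; i++) {
                latenciesMs[i] = 50 + Math.random() * 200;   // fabricated sample latencies
            }
            latenciesMs[1234] = 8000;                        // one artificial slow outlier

            double p999 = percentile(latenciesMs, 99.9);
            System.out.printf("99.9th percentile: %.0f ms -> 5-second SLA %s%n",
                    p999, p999 <= 5000 ? "met" : "missed");
        }
    }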

Zing and BigMemory - Differences

Both Terracotta's BigMemory and Azul's Zing platform are used to address inefficiencies in the Java runtime, particularly around garbage collection. Azul said that Zing's JVM, with its support for heaps up to 1TB, could improve the performance of a caching architecture where BigMemory might otherwise be used. Azul claims that while BigMemory, an off-heap memory store, was tested at cache sizes of 350GB with response times of a few seconds, Zing's response times are down to a few milliseconds. "Our worst case scenarios are 30 milliseconds," says George Gould, Azul VP of Marketing. Azul's Co-Founder and VP of Engineering, Shyam Pillalamarri, adds that BigMemory avoids going to a persistence layer, while Azul goes after the collector itself: "We actually attack that problem [Garbage Collection] fundamentally and take it away completely." When you want very large heap sizes, Pillalamarri says, attacking the problem fundamentally is far more beneficial. If the solution is just a cache rather than functionality the JVM itself provides, he adds, you have to change the application and work through APIs to get the benefit.
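
For context on the terminology: an off-heap store such as BigMemory keeps cached bytes in memory the garbage collector never scans, at the cost of serializing values in and out, while Zing keeps everything on a normal (very large) Java heap and attacks the collector itself. The snippet below is only a minimal sketch of the general off-heap idea using java.nio direct buffers; it is not Terracotta's implementation or API.

    import java.nio.ByteBuffer;
    import java.nio.charset.Charset;

    // Minimal sketch of the off-heap idea: the bytes live in native memory that the
    // garbage collector never scans, but values must be serialized in and out.
    public class OffHeapSketch {
        public static void main(String[] args) {
            Charset utf8 = Charset.forName("UTF-8");
            ByteBuffer offHeap = ByteBuffer.allocateDirect(1024); // native memory, not Java heap

            byte[] value = "cached-value".getBytes(utf8);
            offHeap.putInt(value.length);                         // write: length prefix + payload
            offHeap.put(value);

            offHeap.flip();                                       // read it back
            byte[] out = new byte[offHeap.getInt()];
            offHeap.get(out);
            System.out.println(new String(out, utf8));
        }
    }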

Open Source Contributions

Azul is also contributing several software components from Vega 3 and Zing to the Managed Runtime Initiative, which was unveiled a few weeks before Zing. George Gould said we can expect a major website update for the MRI and a significant push of that code into projects like OpenJDK, IcedTea, and Kernel.org. Azul also plans to simplify the build process for the runtime, which currently requires several compilers. The MRI will include the algorithm that allows Azul's JVM to circumvent Java's garbage collection problems.
Attachment: Azul_WEBeTailer_Benchmark.doc (186 KB)

Comments

Spentmoretime M... replied on Thu, 2010/10/21 - 9:44pm

That improvement is so drastic it's hard to believe... I'd like to see a PowerPC port. We've got a lot of hardware locked up on IBM blades and we're ready for a divorce with AIX. Switch to linux, add some zing, and bam...

Mike P(Okidoky) replied on Fri, 2010/10/22 - 1:16am

At first glance, this claims that the "ZingJVM" is hundreds of times faster than Sun's (Oracle's) JVM. I know that Sun's JVM compiles to machine code and simple things often run faster than C code. This Zing thing smells like utter bullshit to me. Either that, or their marketing is just plain stupid.

Niklas Mehner replied on Fri, 2010/10/22 - 3:04am in response to: Mike P(Okidoky)

This is probably not the average time of a request, but the maximum time (worst case).

In this case garbage collection can cause some problems. But the question remains: what kind of garbage collection did they use? With the default settings you will get 5-second pauses for GC when using a large heap, but after GC tuning this should not happen.

Another question when dealing with a web application is: do you really care all that much if 1 out of 10,000 requests is handled slowly?

P.S.: In the document they write "No use of tuning that would only delays pauses without eliminating them was allowed." So this is the argument for not tuning the default VM.

These settings (http://java.sun.com/performance/reference/whitepapers/tuning.html#section4.2.6) are quite good, but they MAY cause a longer pause when doing a full GC (which in practice rarely, if ever, happens).
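
(For reference, a concurrent-collector configuration along those lines looks roughly like the following; these are illustrative HotSpot flags of that era, with placeholder heap sizes and jar name, not the exact settings from the linked page or from Azul's report:)

    java -Xms8g -Xmx8g \
         -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
         -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly \
         -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
         -jar myapp.jar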

So yes, I think this paper is bullshit. I still think Azul is doing a great job and probably their GC is a great improvement. But they should not need to write stuff like this.


Artur Biesiadowski replied on Fri, 2010/10/22 - 4:44am

At 800 users on Zing the application was peaking at approximately 80 transactions per sec (which was 17x higher than native and represented >3.2 GB/sec of sustained allocation rates).

Please note that those users have a '10 seconds think time', so in effect they amount to only about 80 concurrent users.
Isn't a 50GB heap and 3.2GB of garbage per second (roughly 40 MB of allocation per transaction) a bit excessive for 80 transactions per second???

They could stop pretending it is a 'real life' benchmark and just write a two-page pure-GC test and publish the results. The improvements they have made in GC are quite impressive, and I don't think there is any point in hiding that behind rigged benchmarks. Following their assumptions, normal Java applications (running on a 3GB heap) would not be able to support more than 3-4 transactions per second...

Peter Veentjer replied on Fri, 2010/10/22 - 5:59am in response to: Mike P(Okidoky)

I guess that for number crunching there will not be a big difference. But when the application generates a lot of objects, a normal JVM doesn't scale terribly well.

I posted a blog entry some time ago with a benchmark where objects are created, and one where they are pooled:

http://pveentjer.wordpress.com/2010/06/28/java-extreme-performance-part-2-object-pooling/


If you go to the end of the post, you can find a diagram.
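
The gist of it, in a much-simplified form (a rough sketch of the idea, not the code from the post):

    // Simplified sketch contrasting per-iteration allocation with reuse of a pooled
    // instance; not the benchmark from the linked post, and modern JITs may partly
    // optimize the allocating loop via escape analysis.
    public class AllocVsPool {
        static final class Point { double x, y; }

        public static void main(String[] args) {
            final int iterations = 50000000;

            long t0 = System.nanoTime();
            double sumA = 0;
            for (int i = 0; i < iterations; i++) {
                Point p = new Point();          // fresh allocation -> garbage every iteration
                p.x = i; p.y = i * 2;
                sumA += p.x + p.y;
            }
            long allocNanos = System.nanoTime() - t0;

            long t1 = System.nanoTime();
            double sumB = 0;
            Point pooled = new Point();         // one reused instance, no per-iteration garbage
            for (int i = 0; i < iterations; i++) {
                pooled.x = i; pooled.y = i * 2;
                sumB += pooled.x + pooled.y;
            }
            long poolNanos = System.nanoTime() - t1;

            System.out.printf("alloc: %d ms, pooled: %d ms (sums %.1f / %.1f)%n",
                    allocNanos / 1000000, poolNanos / 1000000, sumA, sumB);
        }
    }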

Mitch Pronschinske replied on Fri, 2010/10/22 - 11:06am in response to: Mike P(Okidoky)

@ Mike P

You should probably also know that Cliff Click, one of the key designers of Sun's HotSpot JVM, currently works for Azul and designed the JVM used in Zing as well.

Scott Sellers replied on Sun, 2010/10/24 - 2:31pm

Additional detailed information about the Liferay+JBoss benchmark being discussed is available here: http://www.azulsystems.com/products/zing/performance

We made every attempt to make the comparison truly apples-to-apples, including tuning the native garbage collector as well as possible, using a variety of different GC algorithms (both parallel and CMS) as well as additional tuning with the multitude of knobs that are available. By its very nature, benchmarking is always subject to great debate (how indicative it is of real-world apps, how different apps may behave, etc.), so we always encourage anyone who needs a more scalable, highly available Java runtime platform that eliminates garbage collection to give Zing a try: http://www.azulsystems.com/products/zing/trial

>> I'd like to see a PowerPC port. We've got a lot of hardware locked up on IBM blades and we're ready for a divorce with AIX. Switch to linux, add some zing, and bam...

We don't plan a native PowerPC port, however with Zing's Java virtualization technology (http://www.azulsystems.com/products/zing/java-virtualization) you can transparently offload Java apps off existing servers (e.g. running SPARC Solaris, AIX, etc) onto a Zing virtual appliance running on a fast, cheap commodity x86 server.

Scott Sellers
President & CEO
Azul Systems

Liezel Jane Jandayan replied on Mon, 2011/08/22 - 8:51pm

The two key components of their technology are a pauseless garbage collection algorithm and a zero-overhead diagnostic/monitoring tool. Until now the pauseless GC algorithm has required dedicated hardware in the form of Azul's Vega appliances, but Zing, which is generally available from today, comprises a software-only port of Azul's entire technology stack optimized for x86 processors from Intel and AMD.

http://twitter.com/yjberkowitz
