Azul's Zing Benchmarks - Way Better Than Native JVMs
"The response times (cache operation latencies) Terracotta sees in its tests are fractions of milliseconds. Their benchmark delivers mean latencies of about one tenth of a millisecond even with cache sizes up to 350GB. Furthermore, by being off-heap, Terracotta's product is not subject to full GC pauses even with standard JVMs.
It is important to note that it is a bit like comparing apples with oranges because Ehcache with BigMemory is a cache - so response times in this context would typically be understood as cache operation latencies. Azul is a JVM - so response times would typically be GC cycle times.
We just wanted to be sure that readers understood the difference between the products and the response times, as the current sentence makes it appear that Terracotta's product is slower than Azul's, which it isn't."
Azul Systems backed up its impressive Zing platform response time claims this week as the software stack went GA. Zing was unveiled in June, and Azul has now released benchmarks to support the numbers it showed during the beta stages.
Zing is essentially a virtualized, software-only port of the innovative techniques used within Azul's Vega 3 hardware appliances. The virtual appliance makes many of Vega 3's benefits available on any x86 hardware. Its main goal is to resolve the scalability and response time issues that have long been inherent in Java applications. It also optimizes tier 1 and tier 2 Java applications for virtual and multi-tenant cloud environments. Azul says that Zing will enable new levels of innovation in Java applications, allowing a faster development cycle and large amounts of in-memory data. With Zing we could start to see large, hyper-interactive, real-time business operation apps.
Benchmarks

Azul CTO Gil Tene used a demo application from Liferay Portal 5.23 running on JBoss AS 5.1. The load test used a single JVM and had an SLA requiring that 99.9% of users receive a response in five seconds or less. You can see from the graph below that a native JVM was able to support only 45 concurrent users, while Zing could support over 800 users and keep its response time under one second.
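An SLA like the one in this load test (99.9% of responses within five seconds) is a percentile check over measured latencies. The sketch below shows one way such a check can be computed, using the nearest-rank percentile method; the class and method names are illustrative and are not part of Azul's or Liferay's tooling.

```java
import java.util.Arrays;

public class SlaCheck {
    // Return the p-th percentile (0 < p <= 1) of the latencies, in ms,
    // using the nearest-rank method on a sorted copy of the samples.
    static long percentile(long[] latenciesMs, double p) {
        long[] sorted = latenciesMs.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p * sorted.length); // 1-based nearest rank
        return sorted[rank - 1];
    }

    // True if the chosen percentile of the samples is within the limit,
    // e.g. meetsSla(samples, 0.999, 5000) for "99.9% within 5 seconds".
    static boolean meetsSla(long[] latenciesMs, double p, long limitMs) {
        return percentile(latenciesMs, p) <= limitMs;
    }

    public static void main(String[] args) {
        long[] sample = {120, 95, 4800, 300, 150, 90, 6000, 110, 130, 100};
        System.out.println("SLA met: " + meetsSla(sample, 0.999, 5000));
    }
}
```

Note that a single outlier (such as one full-GC pause) is enough to fail a 99.9th-percentile target, which is why pause behavior, not average latency, dominates results in tests like this one.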
The full benchmark report has been attached for more in-depth analysis.
Zing and BigMemory - Differences

Both Terracotta's BigMemory and Azul's Zing platform address inefficiencies in the Java runtime, particularly around garbage collection. Azul said that Zing's JVM, with heap sizes of up to 1TB, could improve the performance of a caching architecture where BigMemory might otherwise be used. While BigMemory, an off-heap memory store, was tested at cache sizes of 350GB with response times down to a few seconds, Azul says its response times are down to a few milliseconds. "Our worst case scenarios are 30 milliseconds," says George Gould, Azul VP of Marketing. Azul's Co-Founder and VP of Engineering, Shyam Pillalamarri, adds that BigMemory merely avoids going to a persistence layer: "We actually attack that problem [garbage collection] fundamentally and take it away completely." When you want very large heap sizes, Pillalamarri says, attacking the problem fundamentally is far more beneficial. If a product offers only caching rather than the full functionality of the JVM, you have to change the application and work through its APIs to benefit, he adds.
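The off-heap approach can be illustrated with plain JDK direct buffers. This is only a minimal sketch of the general idea, not BigMemory's actual implementation: data kept outside the Java heap is invisible to the garbage collector, which is why it escapes full-GC pauses, but it must be written and read as raw bytes rather than object references.

```java
import java.nio.ByteBuffer;

public class OffHeapSketch {
    public static void main(String[] args) {
        // A direct buffer is allocated outside the Java heap, so the
        // bytes it holds are never scanned or moved by the garbage
        // collector; only the small ByteBuffer wrapper lives on-heap.
        ByteBuffer offHeap = ByteBuffer.allocateDirect(1024 * 1024); // 1 MB

        // Data must be serialized into raw bytes by hand; there are no
        // object references in the buffer for the GC to follow.
        offHeap.putLong(0, 42L);
        long value = offHeap.getLong(0);
        System.out.println("read back: " + value);
    }
}
```

This trade-off is the crux of Pillalamarri's point: an off-heap store sidesteps GC at the cost of an explicit serialization API, whereas a JVM with a pauseless collector lets the application keep working with ordinary on-heap objects.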
Open Source Contributions

Azul is also contributing several software components of the Vega 3 and Zing software to the Managed Runtime Initiative, which was unveiled a few weeks before Zing. George Gould said that we could expect a major website update for the MRI and a significant push of that code into projects like OpenJDK, IcedTea, and Kernel.org. Azul also plans to simplify the build process for the runtime, which currently requires several compilers. The MRI will include the algorithm that allows Azul's JVM to circumvent Java garbage collection problems.