Java is a great language. It manages memory for you, teaches
object-oriented programming, and makes you a better coder as you use it.
Plus, it really is a ‘write once, run anywhere’ language. Nonetheless,
Java applications can run into a few common performance challenges that
developers and application owners should be familiar with.
One of the great benefits of Java is that it takes care of the
memory model for you. When objects are no longer in use, Java helps you
out by doing the cleanup. Older languages require you to manage memory
manually, but you would rather spend time focusing on core
application logic than worrying about memory allocation.
That said, Java’s memory management doesn’t guarantee zero memory
problems. Java allocates objects on the heap and destroys them once they
are unused, that is, once nothing references them. Memory leaks
typically happen as a result of improper programming, usually when the
developer didn’t release all references to an object. Those objects can
never be collected, so the heap builds up and your app comes to a
grinding halt.
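A minimal sketch of this pattern (the class and field names are illustrative): a collection with static lifetime keeps growing because entries are added but never removed, so the garbage collector can never reclaim them.

```java
import java.util.ArrayList;
import java.util.List;

public class LeakyCache {
    // Static collection: lives as long as the class is loaded.
    private static final List<byte[]> CACHE = new ArrayList<>();

    public static void handleRequest() {
        // Each "request" adds 1 MB that is never removed, so the GC
        // can never reclaim it and the heap grows without bound.
        CACHE.add(new byte[1024 * 1024]);
    }

    public static int retainedEntries() {
        return CACHE.size();
    }
}
```

The usual fix is an eviction policy (a bounded size, weak references, or a proper cache library) so entries can eventually be released.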
Most people use heap dumps and/or profilers to diagnose memory leaks. A
heap dump lets you see which objects are holding references to large
collections. It gives you an idea of where the collection is, but
doesn’t tell you who is accessing the collection, or enough about their
characteristics to drill down to root cause. Heap dumps are also
usually quite large, often gigabytes, and it takes significant resources
to open and analyze a heap dump and identify the issue.
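For completeness, here is one way to capture such a dump programmatically on a HotSpot JVM, via the `com.sun.management.HotSpotDiagnosticMXBean` platform MXBean (the class name `HeapDumper` is illustrative; the same dump can be produced externally with `jmap`):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    // Writes an .hprof file readable by standard heap-dump analyzers.
    public static void dump(String path, boolean liveObjectsOnly) {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        try {
            // liveObjectsOnly = true triggers a GC first, so only
            // reachable objects end up in the dump.
            bean.dumpHeap(path, liveObjectsOnly);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```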
The second method, combining a heap dump with a profiler, gets you
a little bit closer, but not much. Memory profilers help you analyze
your heap dump. They show live data, so now you know who is
creating the objects, but you still don’t know what’s actually causing
the leak.
Both heap dumps and profilers can be helpful in development and
pre-production, but once your apps are out in the wild, profilers just
aren’t usable. One of the most effective ways to isolate and address
memory leaks is through transaction and code path analysis. By taking a
snapshot of the transaction, you can get a better idea of where the
issue is and who is causing it, which usually means less downtime and
faster resolution.
Almost every Java application talks to a database through JDBC. A very
common problem with applications is badly performing SQL, whether from
fields not being indexed, too much data being fetched, or various other
causes. This hurts application performance because most applications
issue multiple SQL invocations per user request.
Slow SQL can have many causes, but one in particular stands out: the Object-Relational Mapper (ORM).
The ORM has become a method of choice for bringing together the two
foundational technologies that we base business applications on today –
object-oriented applications (Java, .NET) and relational databases
(Oracle, MySQL, PostgreSQL, etc.). Most applications today use a
relational database. For many developers, this technology can eliminate
the need to drill-down into the intricacies of how these two
technologies interact. However, ORMs can place an additional burden on
applications, significantly impacting performance while everything looks
fine on the surface.
In the majority of cases, the time and resources taken to retrieve data
are orders of magnitude greater than what’s required to process it, so
it is no surprise that performance considerations should always include
how data is accessed and stored.
While intuitive for an application developer to use (they do hide the
translation complexities), an ORM can also be a significant weight on an
application’s performance. Make sure you understand what’s going on
under the hood.
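One concrete way an ORM can burden an application is the classic “N+1 select” pattern: one query loads the parent rows, and lazy loading then fires one additional query per row. The sketch below fakes the data layer with a counter (all names are illustrative) just to show how the round trips add up:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class NPlusOneDemo {
    static final AtomicInteger QUERIES = new AtomicInteger();

    // Stand-in for an ORM-generated "SELECT id FROM orders".
    static List<Integer> loadOrderIds() {
        QUERIES.incrementAndGet();
        return IntStream.range(0, 10).boxed().collect(Collectors.toList());
    }

    // Stand-in for "SELECT * FROM line_items WHERE order_id = ?".
    static void loadLineItems(int orderId) {
        QUERIES.incrementAndGet();
    }

    public static int naiveFetch() {
        QUERIES.set(0);
        for (int id : loadOrderIds()) {
            loadLineItems(id); // lazy loading fires one query per order
        }
        return QUERIES.get(); // 1 + N round trips to the database
    }
}
```

Most ORMs offer eager or join fetching precisely to collapse this back into one or two queries, which is the kind of under-the-hood behavior worth checking.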
Issues arising from synchronization are often hard to recognize and their impact on performance can become significant.
The fundamental need to synchronize lies with Java’s support for
concurrency. This is implemented by allowing the execution of code by
separate threads within the same process. Separate threads can share the
same resources, such as objects in memory. While this is a very
efficient way to get more work done (while one thread waits for an I/O
operation to complete, another thread gets the CPU to run a
computation), it also exposes the application to interference and
consistency problems.
To prevent such problems, programmers use the “synchronized” keyword to
force order on concurrent thread execution. A “synchronized” block or
method can be entered by only one thread at a time, which prevents
threads from manipulating the same object simultaneously and introducing
data inconsistencies.
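A minimal illustration (class and method names are made up for the example): without `synchronized`, threads incrementing a shared counter can interleave their read-modify-write steps and lose updates; with it, only one thread runs the method at a time.

```java
public class Counter {
    private long value = 0;

    public synchronized void increment() {
        value++; // read-modify-write is atomic with respect to other callers
    }

    public synchronized long get() {
        return value;
    }

    // Runs several threads against one shared Counter and returns the total.
    public static long countWithThreads(int threads, int perThread) {
        Counter c = new Counter();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) c.increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) {
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return c.get();
    }
}
```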
In practice, however, this simple mechanism comes with substantial side
effects. Modern business applications are typically highly
multi-threaded. Many threads execute concurrently, and consequently
“contend” heavily for shared objects. Synchronization effectively forces
concurrent processing back into sequential execution.
There isn’t a silver bullet for addressing thread and synchronization
issues today. Some developers rely on ‘defensive’ coding practices like
locking, while others may rely on Software Transactional Memory Systems
(STM) to help mitigate the issue. The best development organizations are
the ones that can walk the fine line of balancing code review/rewrite
burdens and concessions to performance.
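As one example of such a concession, `java.util.concurrent` offers structures that reduce contention rather than funneling every thread through a single lock. A sketch using `LongAdder`, which keeps per-thread cells and sums them on read, so heavily contended increments no longer serialize on one monitor:

```java
import java.util.concurrent.atomic.LongAdder;

public class AdderDemo {
    // Same workload as a synchronized counter, but without a shared lock:
    // each thread mostly updates its own cell inside the LongAdder.
    public static long parallelCount(int threads, int perThread) {
        LongAdder total = new LongAdder();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) total.increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) {
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return total.sum(); // sum() aggregates the per-thread cells
    }
}
```

The trade-off is that `sum()` is not an atomic snapshot under concurrent updates, which is acceptable for counters and statistics but not for every use case.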
These are just a few of the application performance issues Java
developers face on a daily basis. There are a variety of helpful
application performance tools and vendors out there that can help reduce
these issues.