
Debasish specializes in leading the delivery of enterprise-scale solutions for clients ranging from small companies to Fortune 500 firms. He is the technology evangelist of Anshin Software (http://www.anshinsoft.com) and takes pride in institutionalizing best practices in software design and programming. He loves to program in Java, Ruby, Erlang and Scala and has been trying desperately to get out of the unmanaged world of C++. Debasish is a DZone MVB, not an employee of DZone, and has posted 55 posts at DZone. You can read more from him at his website.

Are ORMs Really a Thing of the Past?

10.19.2009

Stephan Schmidt has blogged about ORMs being a thing of the past. While he emphasizes ORMs' performance concerns and dismisses them as leaky abstractions that throw LazyInitializationException, he does not present any concrete alternative. In his concluding section on alternatives he mentions ...

"What about less boiler plate code due to ORMs? Good DAOs with standard CRUD implementations help there. Just use Spring JDBC for databases. Or use Scala with closures instead of templates. A generic base dao will provide create, read, update and delete operations. With much less magic than the ORM does."

Unfortunately, all of these things work only on small projects with a handful of tables. Throw in a large project with a complex domain model, requirements for relational persistence and the usual stack of demands that today's enterprise applications bring, and you will soon discover that your home-made, less-boilerplate machinery goes for a toss. In most cases you will end up either rolling your own ORM or building a concoction of domain models invaded by indelible persistence concerns. In the former case, your ORM will obviously not be as performant or efficient as the likes of Hibernate. In the latter case, you will either end up with an ActiveRecord model whose domain objects mirror your relational tables, or, if you are less fortunate, with an even bigger, unmanageable bloat.

It's very true that none of the ORMs on the market today is without its pains. You need to know their internals to make them generate efficient queries, you need to understand all the nuances to make use of their caching behaviors, and above all you need to manage the reams of jars that they come with.
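The classic example of the "know their internals" point is the N+1 select problem. Here is a sketch with hypothetical PurchaseOrder and OrderLine entities and a lazy one-to-many association; telling the ORM what you will navigate changes the SQL it generates dramatically.

    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import javax.persistence.ManyToOne;
    import javax.persistence.OneToMany;
    import java.util.List;

    // Hypothetical entities; the 'lines' collection is lazy, the default for collections.
    @Entity
    class PurchaseOrder {
        @Id Long id;
        @OneToMany(mappedBy = "order")
        List<OrderLine> lines;
    }

    @Entity
    class OrderLine {
        @Id Long id;
        @ManyToOne PurchaseOrder order;
    }

    class OrderQueries {
        // Naive version: one SELECT for the orders plus one more per order the
        // first time its lines are touched -- the N+1 problem.
        static int countLinesNaive(EntityManager em) {
            List<PurchaseOrder> orders =
                    em.createQuery("select o from PurchaseOrder o", PurchaseOrder.class)
                      .getResultList();
            int total = 0;
            for (PurchaseOrder o : orders) {
                total += o.lines.size();
            }
            return total;
        }

        // Declaring the navigation up front lets the ORM generate a single
        // joined SELECT instead of N+1 statements.
        static int countLinesWithFetchJoin(EntityManager em) {
            List<PurchaseOrder> orders = em.createQuery(
                    "select distinct o from PurchaseOrder o join fetch o.lines",
                    PurchaseOrder.class).getResultList();
            int total = 0;
            for (PurchaseOrder o : orders) {
                total += o.lines.size();
            }
            return total;
        }
    }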

Yet, in the Java stack, Hibernate and JPA are still the best options when we talk about big persistent domain models. Here are my points in support of this claim ...

  • If you are not designing an ActiveRecord-based model, it's of paramount importance that you keep your domain model decoupled from the persistence model, and ORMs offer the most pragmatic route to this (a minimal sketch of such a mapping follows this list). I know people will say that this is difficult to achieve in the real world and that in typical situations compromises need to be made. Yet I think that if you have to compromise, for performance or other reasons, it is only an exception. Ultimately you will find that the majority of your domain model is decoupled enough for a clean evolution.

  • ORMs save you from writing tons of SQL code. One of the compelling advantages I have found with an ORM is that my Java code is not littered with SQL that becomes impossible to refactor when my schema changes. Again, there will be situations when your ORM does not churn out the best-optimized SQL and you will have to do that manually. But, as I said before, that's an exception, and decisions cannot be made based on exceptions alone.

  • ORMs help you virtualize your data layer, and this can yield huge gains in scalability. Have a look at how grids like Terracotta can use distributed caches like EhCache to scale out your data layer seamlessly. Without the virtualization an ORM provides, you may still achieve scalability using vendor-specific data grids, but that comes at the price of lots of $$ and vendor lock-in.
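Here is the sketch referred to above: a hypothetical entity whose mapping and caching live entirely in metadata, so the class carries no SQL or JDBC plumbing. The annotations are standard JPA plus Hibernate's @Cache; the entity itself is illustrative.

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Table;
    import org.hibernate.annotations.Cache;
    import org.hibernate.annotations.CacheConcurrencyStrategy;

    // Hypothetical domain entity: all mapping knowledge lives in the annotations
    // (or an external XML mapping), keeping the domain class free of persistence code.
    @Entity
    @Table(name = "accounts")
    @Cache(usage = CacheConcurrencyStrategy.READ_WRITE) // second-level cache region; with
                                                        // Hibernate it can be backed by
                                                        // EhCache and clustered via Terracotta
    public class Account {

        @Id
        @GeneratedValue
        private Long id;

        @Column(name = "account_no", nullable = false, unique = true)
        private String accountNo;

        @Column(name = "owner_name")
        private String owner;

        protected Account() { }                 // no-arg constructor required by JPA

        public Account(String accountNo, String owner) {
            this.accountNo = accountNo;
            this.owner = owner;
        }

        public Long getId()          { return id; }
        public String getAccountNo() { return accountNo; }
        public String getOwner()     { return owner; }
    }

Queries are then expressed in JPQL against the object model (e.g. select a from Account a where a.owner = :owner), so a column rename only touches the @Column annotation above rather than SQL strings scattered through the code.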

Stephan also feels that the future of ORMs is jeopardized by the advent of polyglot persistence and NoSQL data stores. The fact is that the use cases NoSQL datastores address are largely orthogonal to those served by relational databases. Key/value lookups over semi-structured data, eventual consistency, and efficient processing of web-scale networked data backed by the power of map/reduce paradigms are not what your online transactional enterprise application with strict ACID requirements calls for. For too long we have tried to shoehorn every form of data processing into the single hammer of the relational database. It's indeed very refreshing to see the onset of the NoSQL paradigm and to see it already in use in production systems. But ORMs will still have their role to play in the complementary set of use cases.

From http://debasishg.blogspot.com

Published at DZone with permission of Debasish Ghosh, author and DZone MVB.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)

Comments

Mark Thornton replied on Mon, 2009/10/19 - 3:54am

Performance isn't the only source of compromise with ORMs. It is also frequently necessary to weaken the object model to accommodate limitations of the ORM. I'm still looking for an ORM that can manage truly immutable entities.
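By "truly immutable" I mean a class like the sketch below (names are illustrative): every field final, all state fixed at construction. JPA requires a no-arg constructor and mainstream ORMs populate fields reflectively after instantiation, so a class like this has to be weakened before an ORM will manage it.

    // A truly immutable value: no setters, no no-arg constructor, every field final.
    public final class Money {
        private final String currency;
        private final long amountInCents;

        public Money(String currency, long amountInCents) {
            this.currency = currency;
            this.amountInCents = amountInCents;
        }

        // State never changes; operations return new instances instead.
        public Money add(Money other) {
            if (!currency.equals(other.currency)) {
                throw new IllegalArgumentException("currency mismatch");
            }
            return new Money(currency, amountInCents + other.amountInCents);
        }

        public String getCurrency()    { return currency; }
        public long getAmountInCents() { return amountInCents; }
    }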

Michal Jemala replied on Mon, 2009/10/19 - 6:15am

In my opinion ORMs are a logical transition between plain JDBC access and pure object persistence supported by an OODBMS. However, I would agree with Stephan Schmidt and suggest abandoning ORMs. I believe that plain object persistence, without any need to transform an OO model into relational tables, will gain popularity very soon ... I hope so :-)

Ovidiu Guse replied on Mon, 2009/10/19 - 6:52am

"I believe that plain object persistence wihout any need to transform an OO model into the realational tables will gain on popularity very soon..hope so :-)"

 You know ... that was before, the starting point of ORM ... :).

I agree with Debasish.

Artur Sobierajczyk replied on Mon, 2009/10/19 - 12:41pm

I agree that the time of classic ORMs is ending. They were another candidate for a silver bullet for DB-OO programming, but they failed. What failed is what Michal writes: "In my opinion ORMs are a logical transition between plain JDBC access and pure object persistence supported by an OODBMS." It failed because it doesn't work in the real world. A database isn't a low-level implementation for the persistence of objects; it's a first-class citizen of any software architecture that works. So it's not good to hide the database as an implementation detail. The object-oriented model isn't THE model of reality but only one of its views, and for persistence in an RDBMS it must be converted to a relational model.

Marc Stock replied on Mon, 2009/10/19 - 2:13pm

The problem is that the referenced post refers to implementation problems in Hibernate more than to ORMs in general. I used to use TopLink (aka EclipseLink) and we never had the LazyInitializationException. In fact, it was a far more pleasant experience overall than Hibernate. So blame the implementation, not the concept. He is right that it can be overkill, though.
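For anyone who hasn't hit it, this is roughly how the Hibernate exception in question arises; a hedged sketch, with made-up Customer/CustomerOrder entities.

    import java.util.Set;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.ManyToOne;
    import javax.persistence.OneToMany;
    import org.hibernate.Session;
    import org.hibernate.SessionFactory;

    // Hypothetical entities; the orders collection is lazy, the default for collections.
    @Entity
    class Customer {
        @Id Long id;
        @OneToMany(mappedBy = "customer")
        Set<CustomerOrder> orders;
    }

    @Entity
    class CustomerOrder {
        @Id Long id;
        @ManyToOne Customer customer;
    }

    class LazyInitDemo {
        static int countOrders(SessionFactory sessionFactory, Long customerId) {
            Session session = sessionFactory.openSession();
            Customer customer = (Customer) session.get(Customer.class, customerId);
            session.close();

            // The lazy collection was never initialized while the session was open,
            // so touching it now throws Hibernate's LazyInitializationException.
            return customer.orders.size();
        }
    }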

cowwoc replied on Mon, 2009/10/19 - 4:51pm

The author argues that as your domain model grows, ORMs become more and more attractive. I'd actually argue the opposite. ORMs work fine for trivial object mappings, but as your project grows you end up finding what Ted Neward blogged about so well: http://blogs.tedneward.com/2006/06/26/The+Vietnam+Of+Computer+Science.aspx

 

Leaky abstractions are *extremely* painful. The more I deal with ORMs, the more I hate them. I don't know what the solution is, but I can tell you ORMs are not it. Perhaps object-oriented DBs are the solution, perhaps not. I'd personally rather spend my time working around OODBMS flaws than around ORM flaws. At least OODBMSs like http://www.neodatis.org/ let me work with clean object designs first and optimize second. ORMs seem to have it the wrong way around.

Andy Jefferson replied on Tue, 2009/10/20 - 1:46am

Whatever database you choose, for whatever reasons, you should be able to use the same object-oriented API for persistence and retrieval of Java objects. That is why DataNucleus provides persistence to RDBMS, ODBMS, HBase (Hadoop), Amazon S3, Google BigTable, XML, Excel, ODF, JSON and LDAP using standards-based persistence APIs. Hence swapping datastores at a later date becomes trivial.
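As a rough sketch of what that looks like with JDO as the standards-based API: Product is a made-up persistable class, and the datastore-specific settings (connection URL, store plugin) are assumed to be supplied in the properties rather than hard-wired into the code.

    import java.util.Properties;
    import javax.jdo.JDOHelper;
    import javax.jdo.PersistenceManager;
    import javax.jdo.PersistenceManagerFactory;
    import javax.jdo.annotations.PersistenceCapable;
    import javax.jdo.annotations.PrimaryKey;

    // Hypothetical persistable class; nothing in it refers to any particular datastore.
    @PersistenceCapable
    class Product {
        @PrimaryKey String sku;
        String name;

        Product(String sku, String name) {
            this.sku = sku;
            this.name = name;
        }
    }

    class ProductStore {
        // The persistence code is written once against the JDO API. Which datastore
        // it hits is decided by the supplied properties (typically the connection URL
        // plus the store plugin on the classpath), not by anything in this method.
        static void save(Properties datastoreProps, Product product) {
            PersistenceManagerFactory pmf =
                    JDOHelper.getPersistenceManagerFactory(datastoreProps);
            PersistenceManager pm = pmf.getPersistenceManager();
            try {
                pm.currentTransaction().begin();
                pm.makePersistent(product);
                pm.currentTransaction().commit();
            } finally {
                if (pm.currentTransaction().isActive()) {
                    pm.currentTransaction().rollback();
                }
                pm.close();
            }
        }
    }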

Mladen Girazovski replied on Wed, 2009/10/21 - 5:22am

 

"The problem is that the referenced post refers to implementation problems in Hibernate more than to ORMs in general. I used to use TopLink (aka EclipseLink) and we never had the LazyInitializationException. In fact, it was a far more pleasant experience overall than Hibernate. So blame the implementation, not the concept. He is right that it can be overkill, though."

Yep, people say ORM but they mean Hibernate; almost every blog here referring to ORM talks about Hibernate (which is outdated in many aspects) and is therefore mislabeled, including this one.

George Farmer replied on Wed, 2009/10/21 - 2:16pm in response to: Andy Jefferson

Swapping datastores at a later date is never trivial. Moreover, in reality, for a large enough application it is almost impossible, no matter what persistence technology you use.

Andy Jefferson replied on Wed, 2009/10/21 - 3:43pm in response to: George Farmer

For ample numbers of DataNucleus clients, swapping datastores is a reality that is both achievable and not excessive in effort. The fact is that you need to change nothing in your model classes, as it should be, and the persistence technology takes care of the rest. The only possible addition is where you want extra handling for features specific to the new datastore, and that is not a hard requirement for the majority of datastore changes.

George Farmer replied on Wed, 2009/10/21 - 4:02pm in response to: Andy Jefferson

Come on, if your data is structured to be "friendly" to an RDBMS, it will take you a while to restructure it to follow BigTable. More likely, it will never happen.

Mark Unknown replied on Wed, 2009/10/21 - 7:52pm in response to: cowwoc

Hmm. I must be blind and stupid, because as my projects grow I am SO glad I am using an ORM. I am not sure what Ted was talking about. He might be smart, but he is not always right.

Are ORMs perfect? No. Would an ODBMS be better? Possibly.

Mark Unknown replied on Wed, 2009/10/21 - 8:00pm in response to: George Farmer

"Come on, if your data is structured to be 'friendly' to an RDBMS, it will take you a while to restructure it to follow BigTable. More likely, it will never happen."

http://code.google.com/appengine/docs/java/datastore/usingjpa.html

Mark Unknown replied on Wed, 2009/10/21 - 8:03pm in response to: Artur Sobierajczyk

"A database isn't a low-level implementation for the persistence of objects"

"Machine language isn't a low-level implementation for high-level languages"

 

Mark Unknown replied on Wed, 2009/10/21 - 8:04pm in response to: Mark Thornton

"Performance isn't the only source of compromise with ORMs"

What compromise?

 

Andy Jefferson replied on Thu, 2009/10/22 - 12:05am in response to: George Farmer

"Come on" ? Now it "takes a while" ... previously you said it was impossible.

If you have persistence metadata for persisting to an RDBMS and then want to move to another datastore, you can clearly do it, subject to what the other datastore plugin supports at that time. Since you picked a plugin that is still immature (and labelled beta, FYI), there are always going to be things that aren't fully supported; when Google finally dedicates resources to their DataNucleus BigTable plugin, it will support the vast majority of ORM metadata components. If you had picked a mature plugin, like, say, XML or NeoDatis, the persistence would work with very little left to do.

George Farmer replied on Thu, 2009/10/22 - 9:45am in response to: Andy Jefferson

I said "almost impossible" to be precise. Try to imagine what is involved in swapping RDBMS to BigTable (even when its plugin matures). Non-trivial remapping, data conversion, etc. It's gonna be a nightmare. That's why this possibility is purely theoretical to me. Btw, my post is not directed against DataNucleus, I just feel uneasy when people try to trivialize things non-trivial by nature.

Alexander Ashitkin replied on Sun, 2009/11/08 - 2:17pm

From my personal experience on a previous project: when I joined it, it was 37 KLOC, with no dependency injection and DAO access through plain JDBC. After I refactored it to Spring + Hibernate it came down to 25 KLOC, even though functionality was added. The database access code had been bloated with dozens of mappers and DTO objects for different fetching strategies. Managed relations allowed me to remove a lot of methods from the DAO layer and made it clearer. If I need a finely tuned query (not a common situation) I will find a way, but I will reach for JDBC only as a last resort.
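A minimal sketch of what I mean by managed relations, with a made-up Invoice/InvoiceLine pair: with cascading enabled the lines travel with the invoice, so the DAO layer needs no separate save/delete methods for them.

    import javax.persistence.CascadeType;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.ManyToOne;
    import javax.persistence.OneToMany;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical parent/child pair: persisting or removing the Invoice cascades
    // to its lines, so no InvoiceLine-specific DAO methods are needed.
    @Entity
    class Invoice {
        @Id @GeneratedValue
        Long id;

        @OneToMany(mappedBy = "invoice", cascade = CascadeType.ALL)
        List<InvoiceLine> lines = new ArrayList<InvoiceLine>();

        void addLine(InvoiceLine line) {
            line.invoice = this;
            lines.add(line);     // saved automatically along with the invoice
        }
    }

    @Entity
    class InvoiceLine {
        @Id @GeneratedValue
        Long id;

        @ManyToOne
        Invoice invoice;
    }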
