
Allan Kelly has held just about every job in the software world, from sys admin to development manager. Today he provides training and coaching to teams in the use of Agile and Lean techniques. He is the author of "Changing Software Development: Learning to Become Agile" (2008) and "Business Patterns for Software Developers" (2012), and a frequent conference speaker. Allan is a DZone MVB and is not an employee of DZone.

SOA is the Software Equivalent of a Fast Breeder Nuclear Reactor

02.17.2013

A fast breeder nuclear reactor is a wonderful idea. Basically, you put in used nuclear fuel from a conventional reactor, burn it, produce useful electricity and at the end of the process the used fuel has changed into a form you can put back into a conventional reactor.

Alternatively, the final product can also be put into nuclear bombs, but we don’t like to mention that.

Fast breeders have been shown to work but have failed to take over the world. They are very expensive to build and operate, pose security dangers and are hideously complex to operate - as you might imagine of a device cooled by liquid sodium, lead or mercury.

In other words: they are not commercially viable.

They aren’t even sensible for governments.

I have come to the conclusion that in the software world Service Oriented Architecture, SOA for short, is the equivalent of a fast breeder.

SOA works in the laboratory. The technology can be shown to work by big service companies - IBM, Accenture and co. It seems like a great idea.

But… SOA doesn’t make commercial sense.

Let me explain why I believe this.

SOA is all about Reuse. Reusable code and systems.

Reuse does not come for free; you have to write code for reuse. Figures on the cost of reuse vs. the cost of single use are not common. As a general rule I refer to Fred Brooks’s 1975 observation in The Mythical Man-Month that it costs about three times more. Admittedly not a solid reference, but the best I know.

The first problem is: do you know it is going to be reused?

If you write an SOA system and then find it is used once then it is very expensive.

In order to know it will be used more than once you need to accept requirements from multiple sources.

Which means your requirements costs go up, response times go up, and responsiveness goes down. Which means you lose time and money.

Worse still, the loss of focus leads to distracted teams, complicated stakeholder management and competing interests. You risk producing a camel rather than a horse.

There is also an assumption here that there is enough commonality that can be factored out for reuse to make the whole thing viable.

Now SOA, and reuse, are sexy. It’s something all developers want to do. And they want to do it properly. And such projects tend to be technically driven.

So they lose their business focus and get absorbed in details.

Then there is the matter of testing. Testing costs also go up.

Add in maintenance: fixing a bug in one system is going to hit other systems, all of them need testing. (“Write once, test many” as we used to say in the early days of Java.)

And who pays for this?

If it comes out of the IT budget we again lose business drivers and increase technical focus. But if one group pays for it, they are paying far more than they would need to for single use. And if you apportion costs, you are going to spend a lot of time arguing.

In other words: SOA works in the lab but not in commercial environments.

My advice, as with any type of reuse: write it for the problem at hand, get something that works, and have plenty of automated tests. And wait. When someone comes along with a problem that looks similar (a real candidate for “reuse”), modify the thing you have just enough to cope with it, with tests. Then wait and repeat.

This way you only pay the cost of reuse when someone actually wants it.
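The wait-and-modify strategy can be sketched in code. This is a hypothetical Python example (the function names and the report format are invented for illustration): write the single-use version first, then widen it just enough when a second caller actually turns up, keeping the old behaviour as the default so existing tests still pass.

```python
# Original single-use version: written for one caller, comma-separated output.
def format_report(rows):
    return "\n".join(",".join(str(field) for field in row) for row in rows)

# Later, a second caller needs tab-separated output. Rather than designing a
# generic "report service" up front, we generalise the existing function just
# enough, defaulting to the old behaviour so the first caller is unaffected.
def format_report_v2(rows, sep=","):
    return "\n".join(sep.join(str(field) for field in row) for row in rows)
```

The point of the default argument is that the cost of generalising is paid only at the moment a real second use appears, and the first caller's tests keep passing unchanged.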

SOA and fast breeder reactors both belong to the class of technologies which, while possible, even fascinating, don’t stack up commercially. Actually, come to think of it, that covers most forms of software reuse and nuclear power.

Published at DZone with permission of Allan Kelly, author and DZone MVB. (source)

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)


Comments

Juergen Kress replied on Mon, 2013/02/18 - 9:10am

 

SOA re-usability can be achieved – yes, at higher cost – by good architecture and strong governance.

SOA is first and foremost used as a company-wide integration platform and data mediation layer. Customization and changes are much easier and quicker in standard or custom-built applications based on SOA technology. SOA can also be the foundation for BPM, and SOA services could be used to build mobile business applications.

Jürgen Kress Oracle SOA Partner Community Leader

twitter.com/soacommunity  blog soacommunity.wordpress.com

Mark Unknown replied on Mon, 2013/02/18 - 7:38pm

"SOA is all about Reuse. Reusable code and systems." No, that is just part of SOA.  

Bolting SOA on after is difficult and expensive.  Doing it from the ground up is not. You don't need to build everything - just build in such a way that you can easily do it later. 

"Add in maintenance: fixing a bug in one system is going to hit other systems, all of them need testing." Actually, with SOA done right, this is reduced. Without SOA you will have these issues for sure.

I suggest you look at what Netflix is doing with SOA.

Allan Kelly replied on Tue, 2013/02/19 - 5:52am

Mark,

Thanks for your comments.  Do you have a link to something about the Netflix SOA you say is right?

I'll agree SOA is not all about reuse, however it is often presented as that - especially to developers and probably to the business too.  Perhaps the first thing to get straight when you look at SOA is: Why? - what are you hoping to achieve?

The SOA initiatives I have seen are bolting-on-afterwards exercises. In particular: bolting the mainframe to the web. I've actually started to wonder whether SOA is really just a cover story for this.

As for maintenance: if you have separate teams they can at least handle issues themselves. When you link teams together you need to coordinate these fixes. At that point diseconomies of scale really cut in.

Allan Kelly replied on Tue, 2013/02/19 - 5:56am in response to: Juergen Kress

Jurgen,

Your reply doesn't seem to address any of the points raised in my piece.  Rather it reads like a regurgitation of the usual arguments for SOA.

You mention BPM but I prefer the original name: BPR. Let's be honest, Business Process Reengineering was an utter disaster. If you are saying SOA supports BPM/BPR, I'll add that to my list of reasons NOT to adopt SOA.

Mark Unknown replied on Tue, 2013/02/19 - 8:13pm in response to: Allan Kelly

There is quite a bit on the Netflix Tech Blog - techblog.netflix.com. But here is one post in particular - http://techblog.netflix.com/2013_01_01_archive.html

Why SOA? Because it creates a foundation for the single responsibility principle. I hope to achieve testability, and to reduce duplication. With modern applications, we typically have more than one "user" interface. You might not start with more than one, but you more than likely will end up with more. I say "user" because it doesn't have to be a visual interface. If you are creating unit tests, that is another interface.

"As for maintenance: if you have separate teams they can at least handle issues themselves.  When you link teams together you need to coordinate these fixes.  At that point dis-economies  of scale really cut in. "  

Read through the Netflix Techblog. Their experience is the total opposite.

I would like to hear more about your experience with SOA. I can only think that the problem is that it was being done wrong (i.e. bolting it on - that is NOT SOA; that ignores the architecture part).

Allan Kelly replied on Wed, 2013/02/20 - 4:32am

I've also been told about an example at Amazon

http://apievangelist.com/2012/01/12/the-secret-to-amazons-success-internal-apis/


Although I immediately have reservations about any strategy that puts a gun to people's heads: do it or be fired.


I think you're onto something about not bolting it on afterwards; build it with hooks, services, APIs, call it what you will.

Second, a common theme, maybe, from Netflix and Amazon is that it was wide: many teams, if not all, did this. The SOA I've seen is a team sat in the corner, tasked with coming up with services for others.

Mark Unknown replied on Wed, 2013/02/20 - 11:17pm in response to: Allan Kelly

"build it with hooks, services, APIs, call it what you will." That is SOA.

" The SOA I've seen is a team sat in the corner and tasked with coming up with services for others." And that is not SOA. That is probably Web Services (assuming they are using SOAP).  

And therein lies the problem. SOA is not Web Services (or something like that). It is an architecture on which you base your system/application. It is ground up, not something bolted on the outside.

I am going to guess that in your experience, an architect team is something that sits off to the side and doesn't do development.


Florin Jurcovici replied on Thu, 2013/02/21 - 1:26am

I dunno, one of us doesn't get it right. For me, SOA is mainly about decoupling; reuse is just a beneficial side effect. Since about 90% of programming effort always goes into maintenance, and maintenance costs usually grow exponentially over time, and also grow exponentially with complexity, as design errors accumulate and become too costly to fix specifically due to coupling (i.e. you can't touch just one part of the system because it would require changes to too many other parts), a SOA is likely to significantly reduce costs in the long run.

There are two articles IMO worth reading in the context of SOA. One is ancient: http://laputan.org/mud/mud.html#BigBallOfMud. The other is newer: http://steverant.pen.io/. Not going SOA will eventually result in a big ball of mud. Going SOA doesn't guarantee that you won't get balls of mud - in fact, even done right, you will get them, early and often. But they will be many small balls, instead of one huge ball, and cleaning up the mess will be an effort linearly related to complexity, and mostly constant over time, rather than an effort growing exponentially with system age and complexity for the same piece of maintenance work.

I think one of the co-founders of CA said, many years ago, that in the future (i.e. now) our worst enemy would be complexity. Even MS figured out that complexity kills (products). The systems we build are becoming increasingly complex. Complexity can't really be made to go away. The only thing you can do about it is split it into manageable chunks. That's exactly what SOA allows you to do. It might not look very attractive, but I can't see a better solution for what we've come to need to manage, in terms of complexity.



Allan Kelly replied on Thu, 2013/02/21 - 5:24am

 Florin,

Thanks for the comments, I think there is much to what you say. Particularly about complexity: I don't know who made the comment originally, but it is repeated and re-invented in many forms. The enemy is complexity.

Big Ball of Mud is a very deep pattern, and it's even debatable how far you can prevent it, or even whether you want to. My Encapsulated Context pattern (http://www.allankelly.net/patterns/encapsulatedcontext.html) had something to say about that. (Continue the argument far enough and it becomes Dick Gabriel's "Worse is Better" discussion.)

Back to SOA. I think most of what you say applies to large systems and modularization/decoupling in general. SOA is just the latest, and perhaps largest, tool to be thrown at this issue.

While I am sure that for you, and many other programmers, SOA - and web APIs in general - are really about decoupling, I don't think that is how the debate plays out at the highest level. I don't think CIOs and CFOs sign off on multi-million dollar efforts to decouple their systems. I think they do sign off on multi-million dollar efforts to reuse services and expose the mainframe to the web.

And that is exactly where I see the problem. First, you have mixed understanding of what SOA is for (programmers see it as decoupling; those paying the bills see something else). Second, I don't think spending millions and millions of dollars on it will pay back, specifically in reuse terms.

As per the other comments: I stand my ground when we are talking about bolting on SOA, retro-fitting it to an existing system.

When it's a case of building a system from the ground up with exposed APIs, whether we call it SOA, web services, or whatever, the case is more complex and may make commercial sense.

Lund Wolfe replied on Sun, 2013/02/24 - 1:10am

I have to agree with you about designing for reuse increasing the cost and complexity of a project.  Designing for testability also increases the cost and time/effort, even if it does force a better design and reduced maintenance.  XP says Do The Simplest Thing That Could Possibly Work and You Aren't Gonna Need It.  Refactor for reuse when you have two different callers with slightly different needs.  Do it in baby steps (iteratively) and assume you don't know all the requirements up front.  Let it unfold naturally.

SOA is certainly about reuse at the client interface level, but it is more about having multiple apps or domains versus having a monolithic app or mainframe underneath.  Slapping web services on top of your application to provide reusable standardized platform independent access is the whole point of web services (ideally, it should have been designed to be user centric underneath in the first place).  That is the easy part of reuse.  Adding or refactoring the code underneath to support your new web services may be the challenge (if you spent more time in analysis and design for SOA then this should be much less of a problem due to reduced complexity so reuse is easier to see and implement), and it is likely not something you could have anticipated when designing for reuse (assuming that you didn't know all the requirements up front).

Mark Unknown replied on Sun, 2013/02/24 - 11:11am in response to: Lund Wolfe

"increases the cost ... and reduced maintenance". It typically increases initial cost. That is the problem with most software: it was never designed or architected to be maintainable. So, over the life of the software, it actually costs less. And really, it does not take that much more to do it. It just takes longer to learn it initially. Once you know it, it becomes natural. The problem is that most developers are not really capable.

YAGNI is not an excuse to not do the things you KNOW will be needed over the lifetime of the software. There are always tradeoffs, and there are different types of software and usage. All of that must be considered up front. You then have to hedge your bets.

"Slapping web services on top of your application to provide reusable standardized platform independent access is the whole point of web services". The problem, and what the original poster is expressing, is that this does not work - not well. WS should just be a way to expose business functionality in a "standard" way. For most systems, you either must rewrite a bunch of code, add a bunch of new code, tightly couple yourself to a layer you shouldn't be coupled to (i.e. the persistence layer), and/or make your system inflexible.

"it is likely not something you could have anticipated when designing for reuse". My experience is that you can. 100%? No. A very high percentage? Yes. And the cost is not really much more, even in the initial development.

Many current "applications" consist of at least two "interfaces". They probably have a web interface and a mobile interface. If you write any unit tests, that is another "interface". And if you need to share info with, or get info from, another system, you have additional interfaces.

That being said, SOA is really bigger than just one app. What I have been saying above is really about an app being prepared to participate in SOA.


Florin Jurcovici replied on Sun, 2013/02/24 - 2:21pm in response to: Mark Unknown

IMO, you shouldn't unit-test interfaces which are to be exposed as services. Services should be coarser-grained than class interfaces, and be tested via functional/integration tests, not unit tests.

You're right, it is possible to predict pretty accurately what will be needed in future functionality. But very few people do it right. And even if you do, the 10% you predict wrongly are an unnecessary cost. Rather, design for extensibility; then you both get away without the 10% you may predict wrongly, and also stay open for future changes. Then again, designing for extensibility is something few people get right too.

YAGNI ... right, but you actually can't get away without doing the things which are needed. Then again, if you can get away without doing them, you actually don't need them. You may believe you absolutely need them, and your app will be limping without them, but IME that's more often than not just a case of not enough experience to design and code the simplest thing that could possibly work.



Mark Unknown replied on Sun, 2013/02/24 - 10:39pm in response to: Florin Jurcovici

I guess it depends on how you define services. I unit test Spring services; those can be exposed as web services. I am not sure what you mean by class interfaces (or rather what you are trying to say). Either way, let's just say "tests".

BTW, I don't mean trying to predict future functionality (i.e. business functionality). I do mean predicting the ways your system will be used (i.e. integration with other systems). Again, this is almost always dependent on things like: is this an internal app or a commercial app for sale?

"IME that's more often than not just a case of not enough experience to design and code the simplest thing that could possibly work." We probably are actually agreeing. But sadly, IME, people use this as an excuse to create fragile systems, rather than as a reason to leave out non-needed business functionality.

Florin Jurcovici replied on Mon, 2013/02/25 - 3:07am in response to: Mark Unknown

Class interfaces: simply everything public exposed by a class.

Unit test: a test which provides a class with mocks/stubs/fakes for all of its collaborators, and then proceeds to test individual public methods in isolation. Although I sometimes test protected methods too, via subclasses defined in the test itself - but this happens when I'm not smart enough to come up with a proper structure of my code.

Integration test: a test which tests the complete system via its interface, be it a web page, a set of RESTful web services, a rich client running on a desktop, a command line interface or a socket on which you can talk to the application via a proprietary protocol, in an environment in which all infrastructure (hardware, network, databases, consumed services, application configuration etc.) resembles the one that will be used in production.

IMO the two types of tests are clearly distinct species, and are best written using different tools and technologies. Which is why I wanted to emphasize the difference.
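To make the distinction concrete, here is a minimal sketch (in Python, with unittest.mock; the class and its collaborators are invented for illustration) of a unit test in the sense defined above: every collaborator is replaced by a mock, so only the one class's own logic is exercised, with no database or network in sight.

```python
from unittest.mock import Mock

class OrderService:
    """Hypothetical class under test; repo and mailer are its collaborators."""
    def __init__(self, repo, mailer):
        self.repo = repo
        self.mailer = mailer

    def place(self, order_id):
        self.repo.save(order_id)
        self.mailer.send(f"order {order_id} placed")
        return True

# Unit test: both collaborators are mocks, so no real persistence or mail
# infrastructure is touched -- we verify only OrderService's own behaviour.
repo, mailer = Mock(), Mock()
svc = OrderService(repo, mailer)
assert svc.place("A-1") is True
repo.save.assert_called_once_with("A-1")
mailer.send.assert_called_once_with("order A-1 placed")
```

An integration test, by contrast, would drive the deployed system through its real interface (web page, REST endpoint, socket), with real infrastructure behind it, which is why the two are best kept as separate suites with separate tooling.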

Spring services: you mean like exposing something via Spring's HTTP invoker, RMI or the like? I'd rather test the class I wrap and expose than write a test on top of the HTTP invoker, or on top of whatever protocol I choose for exposing the service. Besides, other than JAX-RPC or similar standards-based mechanisms, I think it's not nice to tie your services to something Spring- or Java-specific.

Also, indiscriminately exposing classes as services is a dangerous path: you might unintentionally end up exposing implementation classes, simply because it's too easy to do, instead of carefully designing your services interface. (I have seen this happening. While the result was still usable, and way better than a monolithic implementation, it wasn't easily comprehensible and my guess is it will become increasingly difficult to maintain, as the services interface gets more bloated. Had they started out with the idea of two separate applications - one just exposing reusable services to whoever may want to use them, on top of which a user interface is to be built as a distinct application - I think that wouldn't have happened.)


Lund Wolfe replied on Sun, 2013/03/03 - 5:11am in response to: Florin Jurcovici

I think you and Mark are actually in agreement on the unit test.  I have built apps myself with multiple "interfaces" (console app, swing app, unit test class) calling the same core code.  The unit test is more of a functional test.  The interfaces are not being tested.

Mark Unknown replied on Sun, 2013/03/03 - 12:47pm in response to: Florin Jurcovici

I guess the problem is that the industry overloads the term "service". When I talk about Spring, I use it in the Domain-Driven sense (e.g. http://lostechies.com/jimmybogard/2008/08/21/services-in-domain-driven-design/ ). It seems when you hear "services" you think "Web Services" or something like that. I mean that too... when I talk about those kinds of services.

What I have been saying is that using DDD (and thus DDD services) makes building the other kind much easier.

A Spring "service" does not need to be exposed via RMI, HTTP invoker, etc. It is a concrete class that should be unit tested with any dependencies mocked (some of those dependencies being "services" themselves).

You can expose a Spring service as a Web Service (via JAX-RPC or similar standards-based mechanisms), which is nice, but only if appropriate. But if you are using Java on both ends, Spring's own remoting options make it much easier and are much better than (standards-based) Web Services.

"Integration test: a test which tests the complete system". We are on the same page here, other than that it is not always a complete-system test - you might just be testing some things "unmocked".

"Also, indiscriminately exposing classes as services is a dangerous path". I didn't suggest that.   

"one just exposing reusable services to whoever may want to use them, on top of which a user interface is to be built as a distinct application". What I have run into is that different "user interfaces" have different needs that are not easily predetermined. What I typically do is encapsulate "business logic" and expose "services" in different forms as I discover needs. Of course, if I was building a SaaS, I would not have this luxury.

I really think we are on the same page. Just looking at the same things from different perspectives.

Florin Jurcovici replied on Mon, 2013/03/04 - 1:30am in response to: Mark Unknown

"I really think we are on the same page." Almost, it seems. But not really. I'm a bit of an extremist in some regards, maybe this explains the differences.

Services (like in web services) should be exposed as a generic interface, on top of which each client is free to build its own adapter - and absolutely must do so and not expect this adaptation from the service. The reason is that I'd rather push complexity to clients, where complexity can only bite the client containing it, instead of forcing it upon the service implementation, where it may bite several clients - building various adapters on top of the core business logic and packing everything into a single execution context can have this effect. OTOH, well thought out services should be easy to consume, and having to adapt the service in many distinct ways for different clients might well be an indication that your service isn't doing it right, or that the client tries to use it for something it wasn't meant to provide. This strict policy of not adapting on the service side might lead to some code duplication, when distinct clients require similar adaptation, but IMO it aids robustness. Besides, this kind of duplication can be avoided by publishing a client library along with the service - you'd still get duplication in deployment, but IMO that's most often not a significant concern.
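The push-complexity-to-the-client policy can be sketched as follows (a hypothetical Python example; the service, units and names are invented): the service exposes one generic, canonical interface, and each client that wants a different shape builds its own adapter on top.

```python
class TemperatureService:
    """Generic service: always answers in one canonical unit (Celsius).
    It does not adapt itself to any particular client."""
    def reading(self, sensor_id):
        # Stand-in for a real lookup; fixed value keeps the sketch runnable.
        return {"sensor": sensor_id, "celsius": 20.0}

class FahrenheitClientAdapter:
    """Adapter owned by one client. If this conversion logic has a bug,
    it bites only this client, not every consumer of the service."""
    def __init__(self, service):
        self.service = service

    def reading_f(self, sensor_id):
        reading = self.service.reading(sensor_id)
        return reading["celsius"] * 9 / 5 + 32
```

A second client needing, say, rounded integer readings would write its own small adapter rather than asking TemperatureService to grow a new mode; the duplication this causes can be mitigated by shipping a client library alongside the service, as noted above.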

As for DDD's concept of a service: IMO it's a dangerous concept. Unit tests should IMO focus on the code that implements more complex logic, and where errors are more likely to hide, plus such parts where unit tests are a really helpful tool for documenting what that part is expected to do. Since the service layer, in DDD, is intended to be a thin layer of bringing together the smartness packed inside domain objects, not really much more than glue code, this layer IMO most often should not actually need unit tests - the simplest of integration tests should easily reveal any errors in the services, and unit tests should flush out the more serious errors in domain objects. If services implement complex logic which justifies unit tests, I'd rather spend time trying to discover why this is, instead of writing unit tests for services, and then refactor appropriately until I no longer feel the need to unit-test my services.

