
Practical Complex Event Processing Using JBoss

08.11.2008

Complex Event Processing (CEP) enables enterprises to achieve agile business processes through intelligent correlation of seemingly unrelated events. In this presentation, Chief Information Officer Max Yankelevich of Freedom Open Source Solutions walks attendees through a practical case study of a large Trust/Wealth Management institution and demonstrates how CEP helped solve a complex business problem -- charging their customers more money in fees. The JBoss Application Platform, including JBoss Rules, jBPM and JBoss Messaging, was leveraged to implement the CEP architecture, earning the client an extra $20M over the course of five years.

The complete transcript of this presentation has been provided below. The slide deck for this presentation can be downloaded here.

Good afternoon, everybody. Today, we're going to talk about Complex Event Processing. I don't know how many of you are familiar with it, but it's a pretty hot topic among a lot of business folks, and really considered to be the next wave in service-oriented architecture. So, let's get started. We're going to walk through a case study, a real-life example. But first, we're going to talk about what Complex Event Processing is by itself.

So, CEP (Complex Event Processing) is actually a technology that provides the means to define a specific logical event -- really, a complex event -- which is made up of many fine-grained physical events over a period of time. So, it's stateful event processing. I don't know how many of you folks are familiar with event-driven architecture; it's the next iteration of that. SOA has always led developers, engineers and architects to worry about events, and to introduce more events into the enterprise.

The problem has always been that there is a plethora of events. There are a lot of these really little things firing in the enterprise, but the information that the business wants is actually a conglomeration, or aggregation, of these events. With the recognition of that, this whole pattern, which is called Complex Event Processing, evolved within the last year and a half and really became a space unto itself.

So, CEP is specifically important in the context of SOA because it promotes event-driven architecture. The way businesses work is actually event-driven. For instance, insurance or banking rely on interactions with customers, and rely on events. Financial institutions rely on events to actually do something, to kick off a business process. And again, the events are not the fine-grained events of something changing in the database. It's a conglomeration, or aggregation, of these multiple things happening that carries business meaning.

So for instance, again, back to the real world, what are the applications of CEP in enterprises? Well, medical pandemic detection is actually a very interesting case where, based on events -- for instance, the numbers of patient cases -- and some other external data, you can detect an outbreak of some kind of pandemic. Fraud detection is another very common use case where, again, multiple actions over time signal some kind of fraud activity. For instance, in credit cards and banking, all of these things going on might look OK by themselves, but taken in aggregation, present a very serious fraud case.

Algorithmic trading. Traffic hot-spot detection. I'm sure you guys have heard about RFID events, those fine-grained events that are generated by RFID sensors. For instance, when a product arrives inside the store, and then is bought and carried out the door -- again, those two events taken together within some kind of time period represent a very meaningful marketing event, which basically indicates how well the product is selling; and the same applies to stock trading, and so forth. So, as event-driven architecture (EDA) becomes more and more prevalent, as SOA implementations generate more and more events, we're going to get more CEP implementations, and we're going to derive more business value out of these events.

So, let's talk about a very specific case where CEP was implemented, and you'll see the value. You'll see, actually, a compare and contrast with standard technology such as a data warehouse. So, we'll talk about the business case overview. The client was US Trust, now owned by Bank of America, but it was owned not that long ago by Charles Schwab, which is headquartered here in San Francisco.

I think it's the largest wealth management trust/bank/financial institution in the United States, with about $200 billion under management. Most of their revenues are driven by fees. They deal with ultra-wealthy individuals; I think their minimum threshold for an account is $20 million and up.

Essentially, all of their revenues are driven by fees. Therefore, fees are highly customized and highly complex, and based on various things that you wouldn't normally see in your normal bank. So, it's not "write a check and you get charged for it." It's things like -- we'll go into it a little bit later. But the business had a fee system, an older mainframe system, which essentially worked in batch mode, generating fee statements every six months. And the business really wanted an up-to-the-minute view of fees to project revenues.

They wanted the ability to forecast. They wanted to go to a shorter billing cycle for the customers. Some of the fee examples here show the high level of complexity that these represented for US Trust. An example: if my portfolio contains more than 20% Red Hat stock, and my expensive artwork goes up in value 10%, and I'm late on my $20 million mortgage payments, and the Fed has lowered interest rates, and it's after May 1st, then my fee is about 1% of my real estate portfolio. Make sure that you send the bill to my summer residence.
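
For illustration, a compound condition like this might look roughly as follows in the JBoss Rules (Drools) rule language that comes up later in the talk. Every fact type, field, and threshold below is invented for the sketch; the client's actual rules are not public.

```java
// Hypothetical compound fee rule, written as JBoss Rules (Drools) DRL and
// kept as a Java string constant for brevity. All fact types and fields
// (Portfolio, Artwork, Mortgage, MarketData, BillingDate, Fee) are invented.
public class CompoundFeeRule {
    public static final String DRL =
        "package fees\n" +
        "rule \"Real estate fee for concentrated, delinquent portfolio\"\n" +
        "when\n" +
        "    Portfolio( redHatStockPct > 0.20 )\n" +              // > 20% Red Hat stock
        "    Artwork( appreciation > 0.10 )\n" +                  // artwork up 10% in value
        "    Mortgage( principal >= 20000000, paymentLate == true )\n" +
        "    MarketData( fedLoweredRates == true )\n" +
        "    BillingDate( afterMayFirst == true )\n" +
        "then\n" +
        "    insert( new Fee( \"REAL_ESTATE\", 0.01, \"SUMMER_RESIDENCE\" ) );\n" +
        "end\n";
}
```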

So, this is some of the complexity that went into calculating fees for these ultra-wealthy clients. And the way they traditionally did it is they actually had feeds coming out of all these multiple sources -- essentially, all of the systems that service the clients. All of these were either commercial off-the-shelf (COTS) products or home-grown.

So, things like portfolio management, trading, banking, mortgage systems, market data and so forth essentially generated feeds, which allowed the mainframe to churn this into the fees. So again, the existing implementation was a bunch of mainframe and COBOL programs. But it was not good enough for the business, because they didn't get real-time visibility, and the company's financials were actually impacted as well.

What would be the solution to get real-time visibility, real-time reporting and forecasting in place for fees? Well, first, an event-driven architecture platform had to be put in place. So, event streams were derived from legacy and COTS applications and distributed platforms. We started out with publishing just the raw events, by creating technology event adapters for funky things like IBM IMS, which is a hierarchical database that IBM still sells and which contained a lot of their data; MS SQL Server, which some of the products ran on; batch file feeds; and CRM.

So, the first thing that we needed to do was to have real-time events generated from these systems in some kind of a canonical event format. Getting these fine-grained events -- when a customer's information changes, when a trade has been done, when a portfolio change has been made -- all of these things had to come as an event stream in real time, in order to be able to analyze them. And the second step was actually establishing a CEP platform architecture, which would intelligently analyze that event stream coming from different places and be able to aggregate it into a coherent fee structure.
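
The talk doesn't show the canonical schema itself, but as a rough sketch, each adapter might normalize its source-specific change into a shared envelope along these lines; every field name below is an assumption.

```java
// Hypothetical canonical event envelope: one shared shape for events coming
// from IMS, SQL Server, batch feeds, CRM, etc. All field names are invented.
public final class CanonicalEvent {
    private final String eventType;      // e.g. "TRADE_EXECUTED", "CUSTOMER_CHANGED"
    private final String sourceSystem;   // e.g. "IMS", "MSSQL", "CRM", "BATCH"
    private final long occurredAtMillis; // when the underlying change happened
    private final String xmlPayload;     // the normalized industry-standard XML body

    public CanonicalEvent(String eventType, String sourceSystem,
                          long occurredAtMillis, String xmlPayload) {
        this.eventType = eventType;
        this.sourceSystem = sourceSystem;
        this.occurredAtMillis = occurredAtMillis;
        this.xmlPayload = xmlPayload;
    }

    public String getEventType()      { return eventType; }
    public String getSourceSystem()   { return sourceSystem; }
    public long getOccurredAtMillis() { return occurredAtMillis; }
    public String getXmlPayload()     { return xmlPayload; }
}
```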

So, some of the components of the CEP platform architecture were: JBossMQ messaging and a message bus, which contained topics segregated by business event type and by what each one of these systems produced; JBoss Cache with a property change listener, wired into JBoss Rules; JBoss Rules itself; and, for CEP, a domain-specific language (DSL).

And if you take a look at the CEP product domain, or CEP space, it's very difficult to express CEP logic in a standard rules-definition file or standard SQL, so there is a hybrid called CQL that's being worked on right now. There are some standards in the works. But again, complex event processing requires its own domain-specific language. That's something the JBoss Rules platform allowed us to do out of the box: create this domain-specific language.

JGroups was used for cache replication, and then we'll get into how JGroups and JBoss Cache work together to create the inverted-database concept. And everything ran on top of a JBoss application cluster.

So here's a picture of how this thing worked together. Essentially, as we see, all of these legacy systems were producing events that were consumed on the JBossMQ message bus -- again, segregated by topic. And this is where the heart of CEP processing actually takes place. In order to accomplish CEP, you need two things: you need an inverted fact database, and you need some kind of an inference engine that's actually going to be able to take these events and aggregate them in some intelligent manner, or drop some events.

And then, obviously, the aggregated event, the actual complex event, was propagated out, again, through the messaging infrastructure, through JMS, to back-end systems -- things like forecast systems, back-office systems, portfolio management and investment-policy statements -- so, client-facing systems. So: take all of this fine-grained stuff, analyze it, aggregate it, calculate it, and publish a coherent business event out to the outside world -- which is pretty difficult to do if you don't have that infrastructure in place.

So, to go a little bit deeper into this setup: a lot of the events were stored in JBoss Cache. That's where we leveraged the property change listener feature of JBoss Rules, where, as an event arrives, the rules are fired off if it meets certain criteria. So the rules were applied every time a change in any of these areas was triggered. Essentially, we would recalculate the fees, if necessary, and be able to publish them out to the real world.
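
A minimal sketch of that wiring, assuming the standard java.beans listener interface and a Drools 4-era JBoss Rules session; the actual JBoss Cache plumbing is elided and the fact handling is simplified.

```java
// Sketch of the push-based wiring described above: a PropertyChangeListener
// pushes every change into a rules session and fires the rules immediately,
// instead of anything polling for changes.
import java.beans.PropertyChangeEvent;
import java.beans.PropertyChangeListener;
import org.drools.StatefulSession; // Drools 4-era JBoss Rules API

public class RuleFiringListener implements PropertyChangeListener {
    private final StatefulSession session;

    public RuleFiringListener(StatefulSession session) {
        this.session = session;
    }

    @Override
    public void propertyChange(PropertyChangeEvent evt) {
        // Push the changed object into working memory; any matching rules
        // (e.g. fee recalculations) fire as a consequence -- push, not poll.
        session.insert(evt.getNewValue());
        session.fireAllRules();
    }
}
```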

CEP DSL -- this is the domain-specific language -- was where analysts, after the fact, after the system was implemented, would actually be able to change the rules. Because the rules were highly complex, they would be able to change and adjust the rules based on the new business policies the company has.

Questions on this?

Audience Member: Do the external systems use real-time, or is it done in batch?

Max: The question was whether the external systems were real-time. Most of them were real-time, because for the databases -- for instance, IMS and MS SQL Server -- we leveraged trigger technology or exit technology to get the real-time events. Some of the things, like Metavante, which is a well-known banking system, were still batch; they produce batch events. But again, it architecturally prepares you for a paradigm where it's all event-driven. In the real world, you will rarely get everything real-time, but at least you can kind of shoot for that.

Here's an example of, really, the CEP domain-specific language. This is what the rules look like. Again, when you dive deeper into CEP, it has to have constructs for time definitions, time windows. It has to have constructs that are SQL-like, to be able to aggregate the data. So, again, JBoss Rules allowed us to author a domain-specific language that the developers and, actually, the analysts used to define new rules and new events.

So, you see, it's a combination: SQL-like -- "select average from stream" -- and it has two important things: an SQL-like expression as well as a time range, because you want to be able to define windows on your events. That's another thing that's highly important. These are just more examples of that.
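
The slide itself isn't reproduced in this transcript. As a rough stand-in, here is what an SQL-like aggregate over a sliding time window looks like in the later, publicly documented Drools Fusion syntax -- not the client's proprietary DSL. The Trade event type and its value field are invented.

```java
// Hypothetical time-windowed aggregation rule in Drools Fusion syntax,
// kept as a Java string constant. "Trade" and "value" are invented.
public class WindowedAverageRule {
    public static final String DRL =
        "package cep\n" +
        "declare Trade\n" +
        "    @role( event )\n" +                 // treat Trade facts as timestamped events
        "end\n" +
        "rule \"Average trade value over the last five minutes\"\n" +
        "when\n" +
        "    $avg : Number() from accumulate(\n" +
        "        Trade( $v : value ) over window:time( 5m ),\n" + // sliding 5-minute window
        "        average( $v ) )\n" +
        "then\n" +
        "    System.out.println( \"5-minute average: \" + $avg );\n" +
        "end\n";
}
```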

OK. So, what was the result of being able to receive data in real time, aggregate it, and fire it off in real time? Well, first of all, we were able to give the customer up-to-the-minute company revenue visibility for the management, for them to make real-time decisions -- so they understood how much was being earned in fees and could make financial decisions accordingly.

And the other thing was, really, they gained about $5 million in revenue per year in addition -- I wouldn't say additional revenue; it's revenue that they were losing due to the mainframe system not being able to fire off all the proper rules. So they got visibility into the rules. The management of the rules became more visible to them; a mainframe programmer didn't have to do it anymore. So they were able to fine-tune these fee rules to earn another $5 million per year.

Well, the other thing is they actually sunset their legacy fees application, which you would expect.

Now, the interesting thing is that, once the CEP platform was in place, they were able to leverage the same platform for fraud monitoring and tax lots, which are the other two problems that they were facing and had been addressing in a batch mode. They were able to detect client fraud just by noticing, for example, if somebody's doing a $20 million wire out of the bank and, at the same time, tries to commit a trade -- they were able to catch these things happening in correlation, versus discovering them after the fact in their old batch world. OK. And tax lots is a very similar scenario.

So, you establish two things: first the event-driven architecture, which is the first layer, and then, once these events are in place, the complex event processing platform -- because the possibilities pretty much become endless. Again, aggregating the events, and acting intelligently on those events, is a very important thing. And I think one of the promises of SOA is accomplished through complex event processing, right? The flexibility and the agility of the business is achieved, really, this way.

Questions? Confused everybody?

[audience member asks a question]

Max: Nah. This is about us. This is a marketing slide. We're called Freedom Open Source Solutions, a premier partner of Red Hat JBoss, and really, our focus is working with the JBoss-Red Hat family of products. We have a lot of good engineers and architects that are actually committed to productionizing and really implementing systems that are scalable and fast, which is important in working with professional open source, because a lot of confidence from the clients comes from having systems that run on professional open source, are well-supported, and run properly. So we address a lot of these issues.

Our core practice areas are JBoss, obviously; Red Hat Professional Service; practical service-oriented architecture -- and this is really a case study from that practice; technical architecture; SWAT professional service consulting; continuous improvement; and agile application development. So these are all the things that we do as an organization.

Questions?

Audience Member: SQL Server, how did the SQL Server put the event into JBossMQ?

Max: Going back. So, for every one of the existing legacy systems, we had to implement a technology adapter which actually knew how to interact with that particular legacy system. For instance, the MS SQL adapter knew how to set up triggers and how to publish the triggered changes onto the messaging platform in a canonical-model format. So it actually sat in the same infrastructure as the SQL Server and had a shadow table; the data was inserted by a trigger that was set up on a set of SQL Server tables, and the adapter took that data and published it out to the JMS world.
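
A minimal sketch of that adapter pattern, assuming standard JDBC and JMS APIs: a trigger fills a shadow table, and a co-located adapter drains it and publishes each row to a JMS topic as canonical XML. All JNDI names, table names, credentials, and the XML shape here are assumptions.

```java
// Hypothetical shadow-table adapter: drain trigger-written rows and publish
// them to a JMS topic in a canonical XML wrapper.
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.naming.InitialContext;

public class ShadowTableAdapter {
    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("ConnectionFactory");
        Topic topic = (Topic) jndi.lookup("topic/tradeEvents"); // hypothetical topic name

        java.sql.Connection db = DriverManager.getConnection(
            "jdbc:sqlserver://dbhost;databaseName=fees", "adapter", "secret");
        javax.jms.Connection jms = factory.createConnection();
        Session session = jms.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(topic);

        // Drain the rows that the INSERT/UPDATE trigger wrote into the shadow table.
        Statement select = db.createStatement();
        ResultSet rows = select.executeQuery("SELECT id, payload FROM trade_shadow");
        long lastId = -1;
        while (rows.next()) {
            lastId = rows.getLong("id");
            TextMessage message = session.createTextMessage(
                "<tradeEvent id=\"" + lastId + "\">" + rows.getString("payload")
                + "</tradeEvent>"); // canonical XML wrapper (shape assumed)
            producer.send(message);
        }
        // Remove what was published so the next pass only sees new changes.
        db.createStatement().executeUpdate(
            "DELETE FROM trade_shadow WHERE id <= " + lastId);

        jms.close();
        db.close();
    }
}
```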

So every one of these systems had their own kinks, their own APIs; the Oracle Siebel product has its own API. So again, an adapter usually has to be created per system, if it's a packaged system, or there can be a generic technology adapter, for instance, that knows how to interact with an Oracle database, or with a batch system, that can take a file and break it up into events.

And that's why you kind of segregate your architecture, where these guys take care of the generation of events, and the complex event processing platform takes care of the aggregation and intelligence around these events.

Audience Member: Can we talk about your inverted fact database?

Max: Yeah, the inverted fact database is actually a concept of a database turned upside-down, essentially. It's not a polling type of interaction with the database where, let's say, in a traditional warehouse world, you would have the data in a warehouse and then you would issue continual SQL statements against the data and try to pull it.

The inverted database concept is more that you store your queries, and as data arrives in real time, those queries are executed -- basically, the data is pushed to you as it happens. It's a push versus a pull. So, JBoss Cache -- remember, I talked about the property change listener -- JBoss Cache was wired up as the property change listener for JBoss Rules. So as data arrived on the bus, JBoss Cache became the inverted database.

So essentially, you would insert the data into JBoss Cache, and, it being a property change listener, JBoss Rules would automatically execute, because it knows that the object or set of objects has changed, and the rules would fire automatically. So that way, again, you're reactive; you're avoiding polling for the data.

Audience Member: Yeah, you're saying you have canonical events on that data. What kind of format do you have that in?

Max: It was in XML format. We selected the OAGIS [sp] XML format, so we went with an industry standard. It was an XML standard.

The legacy systems did not produce a canonical model for us. The legacy systems were actually not aware of sending the events. The adapter, which is the custom piece here, was aware of the legacy format as well as the canonical XML format, and would transform the system's proprietary format into the XML format.

Audience Member: Yeah, I understand. [inaudible]

Max: Well, there are two steps. The payload itself was an OAGIS BOD [sp], which is an industry standard for, let's say, financial order processing and everything else. Now, within the industry there's something called WS-Eventing, and there are a couple of others, but there is currently no standard format for these events, so we had a WS-Eventing wrapper, essentially, with an OAGIS BOD payload.

Audience Member: How did the analyst actually get access to the CEP to be able to change the rules?

Max: Could you repeat that?

Audience Member: How did the analyst actually get access to the CEP to be able to change the rules?

Max: So the question was, how did the analysts actually get access to the CEP DSL to be able to change the rules? The short answer is they had the ability to change the rules in the browser, because we were able to introduce a Seam-based front end that manipulated the CEP rules.

The longer answer is you obviously wouldn't put those changes straight into production. So there was a whole process, powered by jBPM, which is a workflow tool, that migrated the rule changes from environment to environment and ran all the automated sanity checks. So the answer is: they had a user interface, which was JBoss Seam-based, to be able to change the rules, and then jBPM powered the promotion process.
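
A rough sketch of what such a jBPM-powered promotion flow might look like, using the jBPM 3-era API; the process definition, state names, and signaling steps are all invented, since the real workflow isn't shown in the talk.

```java
// Hypothetical jBPM 3-style promotion process: rule changes move from
// sanity checks, to staging, to production, one signal at a time.
import org.jbpm.graph.def.ProcessDefinition;
import org.jbpm.graph.exe.ProcessInstance;

public class RulePromotionProcess {
    public static void main(String[] args) {
        ProcessDefinition definition = ProcessDefinition.parseXmlString(
            "<process-definition name='rule-promotion'>" +
            "  <start-state><transition to='sanity-checks'/></start-state>" +
            "  <state name='sanity-checks'><transition to='staging'/></state>" +
            "  <state name='staging'><transition to='production'/></state>" +
            "  <state name='production'><transition to='end'/></state>" +
            "  <end-state name='end'/>" +
            "</process-definition>");

        ProcessInstance instance = new ProcessInstance(definition);
        instance.signal(); // leave start-state -> run automated sanity checks
        instance.signal(); // checks passed -> promote rules to staging
        instance.signal(); // staging verified -> promote to production
        instance.signal(); // reach the end state
        System.out.println("promotion complete: " + instance.hasEnded());
    }
}
```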

Max: OK. Anyone? Sure.

Audience Member: How big is the system? How many customers, what kind of volume does it span?

Max: Well, the customer, again the front end customer...

Audience Member: How many users have 29+?

Max: The interesting thing is the event volume was huge, because even though the customer count was small, they have very large portfolios. So, on this side we were getting, I would say, between 50 and 100 million events per day, because some of this had to do with trading data -- these portfolios are very large -- plus things like mortgages and banking transactions.

Audience Member: How many primary keys do you end up having? How many customers do those events translate to?

Max: I would say that the customer number was about 100,000, the total customer number. I'm not sure about the primary key.

Audience Member: That's what I mean by primary key.

Max: Oh, just how many customers. For instance, in CRM, there were about 50 to 100,000 customers. But again, the events flowing through this pipe are fine-grained events. For instance, every small change in the market -- let's say IBM went down -- would definitely trigger maybe 50,000 events, because it propagated across the portfolios.

So, on this side the number of events was huge, because it's really raw data that you're dealing with. On this side, we would publish out maybe 20,000 events per day. So, we're talking about 50 to 100 million coming in, and this is the whole power of complex event processing: the front-end systems do not deal with this myriad of data. They actually get a very coherent event.

Audience Member: Right.

Max: So, coming in, it was large.

Audience Member: Did Bank of America pay them to make the change?

Max: Absolutely not. So again, the objective was -- we cannot... it's a low-intrusion type of approach. That is why the adapters were created. They were actually co-located with these systems, but we didn't make any core changes to the systems themselves. They knew, for instance, how to interact with Siebel, or they knew how to interact with Metavante, either through an API, by understanding the file formats, or by understanding their database systems, their schema. So again, it's low-intrusion. It's not feasible to go out to all the vendors and ask them to publish this. Somebody gave me three -- oh, OK, this is three minutes. Any more questions? We have three minutes.

Audience Member: [Inaudible].

Max: If you give me your card, it will be sent to you, but I'm sure it's going to be put up on the site someplace, in PDF format.

Audience Member: OK.

Max: So, if you guys take away anything from this conversation: if you are doing SOA, or if you are doing any type of event-driven architecture, CEP is definitely the next generation of that.

Audience Member: What kind of tool do you use to handle the event and the DSL?

Max: Say it again. The question is?

Audience Member: What kind of tool? The kind of tool you have used for handling this stuff?

Max: Oh, the DSL?

Audience Member: And the JBoss rule.

Max: OK. So JBoss Rules gives you the ability to create a DSL, which is essentially a domain-specific language. It essentially has an API and some format files with which you can create this domain-specific language. What it does is take this domain-specific language in and transform it back into its own rules language. And it just comes with that out of the box.
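
A minimal sketch of that mechanism, assuming the Drools 4-era PackageBuilder API: a DSL mapping translates analyst-friendly phrases into DRL, and the builder expands a DSL-written rule before compiling it for the engine. All phrases, fact types, and helper calls below are invented.

```java
// Hypothetical DSL mapping plus a rule written in that DSL, expanded and
// compiled with the Drools 4-era JBoss Rules API.
import java.io.StringReader;
import org.drools.RuleBase;
import org.drools.RuleBaseFactory;
import org.drools.compiler.PackageBuilder;

public class DslLoadingSketch {
    // DSL mapping: analyst-friendly phrases on the left, DRL on the right.
    private static final String DSL =
        "[when]the portfolio holds more than {pct}% Red Hat stock=" +
            "Portfolio( redHatStockPct > {pct} )\n" +
        "[then]charge a fee of {rate}% on real estate=" +
            "feeService.charge( \"REAL_ESTATE\", {rate} );\n";

    // A rule written in the DSL (a .dslr file in real projects).
    private static final String DSLR =
        "package fees\n" +
        "rule \"Concentration fee\"\n" +
        "when\n" +
        "    the portfolio holds more than 20% Red Hat stock\n" +
        "then\n" +
        "    charge a fee of 1% on real estate\n" +
        "end\n";

    public static void main(String[] args) throws Exception {
        PackageBuilder builder = new PackageBuilder();
        // Expand the DSL rule text against the mapping, then compile it
        // down to the engine's native (Rete) representation.
        builder.addPackageFromDrl(new StringReader(DSLR), new StringReader(DSL));
        RuleBase ruleBase = RuleBaseFactory.newRuleBase();
        ruleBase.addPackage(builder.getPackage());
    }
}
```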

Audience Member: Would we need a compiler to talk to the DSL?

Max: Oh, it's not a compilation. It's pretty much a metadata type of language.

Audience Member: The DSL?

Max: Yeah. This is metadata, so essentially the DSL itself is not compilable. JBoss Rules is like a virtual machine that essentially takes this stuff in and compiles it down to its standard rules -- Rete algorithm nodes -- to make it easier. Again, one of the goals was to have business analysts and developers work on this, so this is really a tool for them, so they can worry only about complex event processing and not, let's say, about the Drools-specific language, the JBoss Rules-specific language.

Max: OK. No more questions? All right. Thank you.

Published at DZone with permission of its author, Nitin Bharti.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)