On a recent call with JP Garbani at Forrester, he asked me: “which of five common transaction monitoring techniques does INETCO Insight use?” My answer: “The best one, of course.”
Was I having a bit of fun? Sure. Was I being facetious? Absolutely not.
OK, let’s rewind a second. The five transaction monitoring techniques most commonly used (credit: JP Garbani, Forrester Research) are:
- Tag and Trace. Tagging transactions (by adding headers, injecting unique IDs, etc.) and tracing them using agents at each hop.
- CMDB Mapping. Using an accurate, up-to-date CMDB (oxymoron, I know) to overlay relationships on top of historical log data collected from each tier.
- Flow Analysis. Mapping “conversations” between components using network traffic samples and statistical techniques.
- Manual Debug. Turning on debug modes in deep dive profiling tools.
- Packet re-assembly. Capturing raw packets and reconstructing transactions in real-time.
INETCO Insight uses technique number five, which is the best one when it comes to business transaction management (BTM), especially when you enhance it the way we do. Here’s why.
In a modern application, transactions slow down for one of three reasons:
- An application component is slow
- The server or virtual server infrastructure is over-committed
- The network is too busy or poorly used by the application
Tag and trace is great for spotting slow application components and understanding why. It will also tell you when the problem isn’t in your application components (i.e., it lies in the server, virtualization, network, or 3rd party services). But if the problem resides outside your application components, you’re on your own to isolate where it is occurring and why. You get lots of application layer intelligence, no infrastructure layer intelligence, and you have to deploy a lot of agentry and instrumentation to monitor transactions. Verdict: lots of power in one area, zero power in others, and hard to deploy. Deploying tag and trace is like making Mexican food – it’s a lot of work, but you always end up with the same taste in the end.
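The tagging half of this technique can be sketched in a few lines: a unique ID is injected at the edge, and a lightweight agent at each hop records timing against that ID. The header name, hop names, and in-memory span list below are all illustrative; a real product would ship spans to a central collector.

```python
import time
import uuid

TRACE_HEADER = "X-Trace-Id"  # illustrative header name, not a standard
spans = []  # in a real system, each agent ships spans to a collector

def tag_request(headers):
    """Edge tier: inject a unique transaction ID if one isn't present."""
    headers.setdefault(TRACE_HEADER, uuid.uuid4().hex)
    return headers

def traced_hop(name, headers, work):
    """Per-hop agent: time the work and record it against the trace ID."""
    start = time.perf_counter()
    result = work()
    spans.append((headers[TRACE_HEADER], name, time.perf_counter() - start))
    return result

# One transaction flowing through two hypothetical tiers
headers = tag_request({})
traced_hop("web", headers, lambda: traced_hop("db", headers, lambda: sum(range(1000))))

# All spans share the same trace ID, so they can be stitched into one transaction
trace_ids = {tid for tid, _, _ in spans}
assert len(trace_ids) == 1
```

Note that every tier has to carry the header and host an agent, which is exactly the deployment burden described above.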
CMDB Mapping is great for spotting systemic bottlenecks or failure points in a stable application. However, this technique struggles to cope with the unpredictable nature of a Cloud or virtual environment, or one that uses 3rd party services. You get plenty of infrastructure layer intelligence, limited application layer intelligence, and you have to commit to a lot of ongoing maintenance to ensure accuracy and relevance. Verdict: reasonable power, hard to deploy and use. Deploying CMDB mapping tools is like planting a vineyard – with years of care and ideal conditions, you’ll get something truly wonderful – mess up any part of it and all you’ve got is foul grape juice.
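As a rough illustration of the overlay idea, here is a toy dependency map combined with per-tier log timings to locate a systemic bottleneck. The topology and latency numbers are made up, and a real CMDB is far richer than a dict; the point is that the answer is only as good as the map.

```python
# Hypothetical sketch of CMDB mapping: a (hand-maintained) dependency map
# is overlaid on per-tier timings pulled from historical logs.
cmdb = {"web": ["app"], "app": ["db"], "db": []}  # illustrative topology
avg_latency_ms = {"web": 40, "app": 35, "db": 310}  # from historical logs

def slowest_dependency(root):
    """Walk the dependency map from root and return the slowest tier."""
    seen, stack = set(), [root]
    while stack:
        tier = stack.pop()
        if tier not in seen:
            seen.add(tier)
            stack.extend(cmdb[tier])
    return max(seen, key=avg_latency_ms.get)

slowest_dependency("web")  # identifies "db" as the bottleneck here
```

If the map drifts from reality, as it tends to in a Cloud or virtual environment, the walk simply misses the tier that actually hurts.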
Flow analysis is great for network types who want a basic view of application performance at the protocol level (e.g. HTTP response times). You get decent infrastructure layer intelligence, and next to no application layer intelligence. Verdict: limited power, easy to deploy. Flow analysis is like gathering nuts and berries – sure, you can survive on them, but given any other choice you probably wouldn’t.
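A toy version of the technique: aggregate sampled conversation records into per-conversation, protocol-level stats. The sample records below are invented; note that nothing in them tells you *which* transaction was slow or why, only that a conversation was.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical flow samples: (source, destination, protocol, response time s)
samples = [
    ("10.0.0.5", "10.0.0.9", "HTTP", 0.120),
    ("10.0.0.5", "10.0.0.9", "HTTP", 0.340),
    ("10.0.0.7", "10.0.0.9", "HTTP", 0.095),
]

# Group samples into "conversations" between component pairs
conversations = defaultdict(list)
for src, dst, proto, rt in samples:
    conversations[(src, dst, proto)].append(rt)

# Per-conversation average response time: protocol-level visibility only
stats = {conv: round(mean(rts), 3) for conv, rts in conversations.items()}
```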
Manual debug is great for application developers who need deep application (even code) level visibility. However, debug profiling typically comes at a high cost to performance, meaning you can only use these tools periodically. You get deep application layer intelligence, no infrastructure layer intelligence, and you compromise performance (further) every time you turn them on. Verdict: lots of power (too much?), really hard to use. Deploying manual debug tools is like harvesting a field of wheat with pinking shears – you’ll get results, but it’s not exactly the most efficient way to go about it.
OK, so we’re down to the last one. Packet re-assembly has two massive advantages:
1. You see every single transaction (instead of samples or periodic debug captures), and
2. You don’t need much instrumentation to get all this information.
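The basic mechanics are worth seeing: captured TCP segments can arrive out of order, and re-assembly means sorting them by sequence number and stitching the byte stream back together before any application-layer parsing is possible. The segments below are hand-crafted for illustration.

```python
# Hypothetical out-of-order TCP segments captured off the wire,
# as (sequence number, payload) pairs.
segments = [
    (26, b"saction_id=42"),
    (0, b"POST /pay HTTP/1.1\r\n\r\ntran"),
]

# Re-assembly: reorder by sequence number and concatenate the payloads
stream = b"".join(payload for _, payload in sorted(segments))

# With the byte stream restored, application-layer fields are recoverable
assert b"transaction_id=42" in stream
```

Real re-assembly also has to handle retransmissions, overlaps, and gaps, but the reordering step is the heart of it.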
The knock on packet re-assembly has classically been the lack of comprehensible application layer intelligence. Using these tools has been a bit like trying to reverse engineer the ingredients of a casserole, blindfolded.
This is where INETCO Insight excels. We have developed a unique and powerful processing engine that reconstructs business transactions from raw network traffic. This engine automatically makes sense of application layer information using advanced decoding, semantic analysis, and correlation capabilities. The result is that you get more usable power than tag and trace, paired with much easier deployment. It’s the best of both worlds, and it’s why I’m not being facetious when I say INETCO Insight uses the best transaction monitoring technique.