Posts Tagged ‘qpid’

Sometimes Using Less Abstraction is Better

March 24, 2009

Last week I was working on getting the Apache Qpid unit tests running on Windows. The unit tests are arranged to take advantage of the fact that the bulk of the Qpid client and broker is built as shared/dynamic libraries: they invoke capabilities directly in those libraries, which makes testing easier. Most of the work needed to get the tests building on Windows was already handled by the effort to build the libraries as DLLs. However, there was a small but important piece remaining that posed a challenge.

Because Qpid is a networked system, its tests need to verify that it correctly handles situations where the network or a network peer fails or behaves in some unexpected way. The Qpid unit tests have a useful little class named SocketProxy which sits between the client and broker. SocketProxy relays network traffic in each direction, but it can also be told to drop pieces of traffic in one or both directions, or to drop the socket in one or both directions. Getting this SocketProxy class to run on Windows was a challenge. SocketProxy uses the Qpid common Poller class to know when network data is available in one or both directions, then directly performs the socket recv() and send() as needed. This use of Poller, ironically, was what caused me problems. Although the Windows port includes an implementation of Poller, it doesn’t work in the same fashion as the Linux implementation.

In Qpid proper, the Poller class is designed to work in concert with the AsynchIO class; Poller detects and multiplexes events and AsynchIO performs I/O. The upper level frame handling in Qpid interacts primarily with the AsynchIO class. Below that interface there’s a bit of difference from Linux to Windows. On Linux, Poller indicates when a socket is ready, then AsynchIO performs the I/O and hands the data up to the next layer. However, the Windows port uses overlapped I/O and an I/O completion port; AsynchIO initiates I/O, Poller indicates completions (rather than I/O ready-to-start), and AsynchIO gets control to hand the resulting data to the next layer. So, the interface between the frame handling and I/O layers in Qpid is the same for all platforms, but the way that Poller and AsynchIO interact can vary between platforms as needed.
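
To make the difference concrete, here is a rough sketch of the two callback shapes involved. These interfaces are purely illustrative (they are not Qpid’s actual Poller/AsynchIO declarations), but they capture the readiness-versus-completion distinction:

    // Illustrative only: not Qpid's real Poller/AsynchIO interfaces, just the
    // two interaction styles described above.
    #include <cstddef>

    // Readiness style (Linux): the poller reports that a socket can be read;
    // the I/O layer then performs the recv() itself and pushes the data upward.
    class ReadinessHandler {
    public:
        virtual ~ReadinessHandler() {}
        virtual void onReadable(int fd) = 0;    // handler does the recv() here
    };

    // Completion style (Windows overlapped I/O plus an I/O completion port):
    // the I/O layer starts the read first; the poller later reports that the
    // OS has already filled the buffer.
    class CompletionHandler {
    public:
        virtual ~CompletionHandler() {}
        virtual void onReadComplete(const char* buffer, std::size_t bytes) = 0;
    };

    // Either way, the frame-handling layer above sees the same thing: a
    // callback delivering a buffer of received bytes.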

My initial plan for SocketProxy was to take it up a level, abstraction-wise. After all, abstracting away behavior is often a good way to make better use of existing, known-to-work code and avoid complexities. So my first approach was to replace SocketProxy’s direct event-handling code and socket send/recv operations with the AsynchIO and Poller combination used in Qpid proper.

The AsynchIO-Poller arrangement’s design and place in Qpid involves some dynamic allocation and release of memory related to sockets, and a nice mechanism to do orderly cleanup of sockets regardless of which end initiates the socket close. Ironically, it is this nice cleanup arrangement which tanked its use in the SocketProxy case. Recall that SocketProxy’s usefulness is its ability to interrupt sockets in messy ways, but not be messy itself in terms of leaking handles and memory. My efforts to get AsynchIO and Poller going in SocketProxy resulted in memory leaks, sockets not getting interrupted as abruptly as needed for the test, and connections not getting closed properly. It was a mess.

The solution? Rather than go up a level of abstraction, go down. Use the least common denominator for what’s needed in a very limited use case. I used select() and fd_set. This is just what I advise customers not to do. Did I lose my mind? Sell out to time pressure? No. In this case, using less abstraction was the correct approach – I just didn’t recognize it immediately.
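
For the curious, the core of a select()-based relay loop looks roughly like the following. This is a simplified sketch, not the actual SocketProxy code: error handling, the drop/interrupt controls, and the Windows (Winsock) header differences are all omitted.

    // Pump bytes between two sockets using select() and fd_set.
    #include <sys/types.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <algorithm>

    // Copy whatever is readable on 'from' to 'to'; false means 'from' closed
    // or the copy failed, so the relay loop should stop.
    static bool relay(int from, int to) {
        char buf[8192];
        ssize_t n = ::recv(from, buf, sizeof(buf), 0);
        if (n <= 0)
            return false;                        // peer closed or error
        // Treat a short or failed send as fatal; fine for a sketch.
        return ::send(to, buf, static_cast<size_t>(n), 0) == n;
    }

    // Relay data between client and broker until either side closes.
    void proxyLoop(int clientFd, int brokerFd) {
        const int maxFd = std::max(clientFd, brokerFd) + 1;
        for (;;) {
            fd_set readable;
            FD_ZERO(&readable);
            FD_SET(clientFd, &readable);
            FD_SET(brokerFd, &readable);
            if (::select(maxFd, &readable, 0, 0, 0) <= 0)
                break;                           // interrupted or failed
            if (FD_ISSET(clientFd, &readable) && !relay(clientFd, brokerFd))
                break;
            if (FD_ISSET(brokerFd, &readable) && !relay(brokerFd, clientFd))
                break;
        }
        ::close(clientFd);
        ::close(brokerFd);
    }

A real proxy also needs the hooks to discard traffic and to slam connections shut on command, but the loop above is essentially all the event-handling machinery there is; there is nothing to leak and no cleanup mechanism to fight with.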

So what made this situation different from “normal”? Why was it a proper place to use less abstraction?

  • The use case is odd. Poller and AsynchIO are very well designed for running the I/O activities in Qpid, correctly handling all socket activity quickly and efficiently. They’re not designed to force failures, and that’s what was needed. It makes no sense to redesign foundational classes in order to make a test harness more elegant.
  • The use is out of the way. It’s a test harness, not the code that has to be maintained and relied on for efficient, correct performance in deployed environments.
  • Its needs are limited and isolated. SocketProxy handles only two sockets at a time. Performance is not an issue.

Sometimes less is more – it works in abstractions too. The key is to know when it really is best.

Things I Like About Working on Apache Qpid

March 4, 2009


I’ve mentioned before that I’ve been working on the Apache Qpid project, particularly on its port to Windows, first available in the M4 release. I also work on other open source projects related to networked applications programming (primarily, ACE). Since no two customers’ needs are the same, it pays to have multiple tools in order to get the best solution for the situation.

Although Qpid graduated as an Apache top-level project (TLP) some time ago, Apache issued a press release about it this week. As I was reading the release, I was reminded of some of the reasons I really enjoy working with the Apache Qpid team.

  • Meritocracy: the Apache way of working forces contributors to demonstrate their skills and desire to contribute over time before being admitted to the set of people who can commit to the source code repository. Prior to that step, code must be vetted by more senior members, who assist with integrating and testing as well as helping newcomers along on the path to committership.
  • Intellectual property rights handling: Lawyers get the sweats when dealing with some open source projects because of the fear of intellectual property rights issues which may arise down the road. Apache has put a very nice system in place for ensuring that contributions to the project have all intellectual property rights properly assigned so there are no issues that users need to worry about.
  • Quality: I’ve been very impressed by the experience, skill, and professionalism of the Apache Qpid development and project team. I’m proud to be a part of this team and they inspire me daily to reach higher and make Qpid the best it can be.

I’m pleased to have customers that see the value in sponsoring work on Qpid because the resulting product provides fantastic enterprise messaging functionality without the exorbitant enterprise messaging price tag. I’m currently working to improve the Qpid user experience for Windows developers as well as reworking the build system to make it easier to keep the Windows and Linux/UNIX builds in sync. Many of the Windows improvements (building the libraries as DLLs, producing an installable prebuilt kit) will be available in the next release (M5) in a month or so. The build improvements will get worked into the development stream after M5.

There’s No Substitute for Experience with TCP/IP Sockets

December 31, 2008

The number of software development tools and aids available to us as we begin 2009 is staggering. IDEs, code generators, component and class libraries, design and modeling tools, high-level protocols, etc. were just speculation and dreams when I began working with TCP/IP in 1985. TCP and IP were not yet even approved MIL-STDs and the company I worked for had to get US Department of Defense permission to connect to the fledgling Internet. The “Web” was still 10 years away. If you wanted to use TCP/IP for much more than FTP, Telnet, or email you had to write the protocol and the code to run it yourself. The Sockets API was the highest level access we had at the time. That is a whole area of difficulty and complexity in and of itself, which C++ Network Programming addresses. But the API is more of a usage and programming efficiency issue – what I’m talking about today is the necessity of experience and understanding what’s going on between the API and the wire when working with TCP/IP, regardless of the toolkit or language or API you use.

A lot of software running on what many people consider “the net” piggy-backs on HTTP in one form or another. There are also many helpful libraries, such as .NET and ACE, to assist in writing networked applications at a number of levels. More specific problem areas have useful targeted solutions as well, such as message queuing systems like Apache Qpid. And, like most programming tasks, when everything’s ideal, it’s not too hard to get some code running pretty well. It’s when things don’t work as well as you planned that the way forward becomes murky. That’s when experience is needed. These are some examples of issues I’ve assisted people with lately:

  1. Streaming data transfer would periodically stop for 200 msec, then resume
  2. Character strings transferred would intermittently be bunched together or split apart
  3. Asynchronous I/O-based code stopped working when ported to Linux

The tendency when problems like these come up is to find out who, or what, is to blame. In my experience, blame usually lands first on the most recent addition to the programming toolset, the piece that is trusted the least and is usually closest to the application being written. For ACE programs, that’s usually why I get involved so early.

I’ve spent many years debugging applications and network protocol code. I spent way too much time trying to blame the layer below me, or the OS, or the hardware. The biggest lesson I learned is that when something goes wrong with code I wrote, it’s usually my problem and it’s usually a matter of some concept or facility I don’t understand enough to see the problem clearly or find the way to a solution. That’s why it’s so important to understand the features and functionality you are making use of – there’s no substitute for experience.

Helping my clients solve the three problems I mentioned above involved experience. Knowing where to target further diagnosis and gathering the right information made the difference between solving the problem that day and staring at code for days wondering what’s going on. Curious about what the problems were?

  1. A slow-start peculiarity on the receiver; disable Nagle’s algorithm on the receiving side.
  2. That’s the streaming nature of TCP: you need to mark the string boundaries and check for them on receive (a framing sketch follows this list).
  3. Linux silently converts asynchronous socket I/O operations to synchronous ones and executes them in order; the fix is to restructure the order of operations in a very restricted way, or switch paradigms on Linux.
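
For the second problem, the usual remedy is explicit message framing, because TCP delivers a byte stream with no built-in message boundaries. Here is a minimal length-prefix sketch; the helper names are mine, purely for illustration, and error handling is kept to the bare minimum.

    // Length-prefixed framing over a TCP stream socket (POSIX sockets).
    #include <arpa/inet.h>      // htonl()/ntohl()
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <cstdint>
    #include <string>

    // Read exactly 'len' bytes, looping because recv() may return short reads.
    static bool recvAll(int fd, void* buf, size_t len) {
        char* p = static_cast<char*>(buf);
        while (len > 0) {
            ssize_t n = ::recv(fd, p, len, 0);
            if (n <= 0) return false;            // closed or error
            p += n;
            len -= static_cast<size_t>(n);
        }
        return true;
    }

    // Write exactly 'len' bytes, looping because send() may accept less.
    static bool sendAll(int fd, const void* buf, size_t len) {
        const char* p = static_cast<const char*>(buf);
        while (len > 0) {
            ssize_t n = ::send(fd, p, len, 0);
            if (n <= 0) return false;
            p += n;
            len -= static_cast<size_t>(n);
        }
        return true;
    }

    // One message = a 4-byte network-order length followed by that many bytes.
    bool sendMessage(int fd, const std::string& msg) {
        uint32_t len = htonl(static_cast<uint32_t>(msg.size()));
        return sendAll(fd, &len, sizeof(len)) && sendAll(fd, msg.data(), msg.size());
    }

    bool recvMessage(int fd, std::string& msg) {
        uint32_t len = 0;
        if (!recvAll(fd, &len, sizeof(len))) return false;
        msg.resize(ntohl(len));
        return msg.empty() || recvAll(fd, &msg[0], msg.size());
    }

The first problem’s fix is even smaller: disabling Nagle’s algorithm is a one-line setsockopt() call with IPPROTO_TCP and the TCP_NODELAY option on the socket in question.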

Although each client initially targeted blame at the same place, the real solution was in a different layer for each case. And none involved the software initially thought to be at fault.

When you are ready to begin developing a new networked application, or you’re spending yet another night staring at code and network traces, remember: there’s a good chance you need a little more clarity on something. Take a step back, assume the tools you’re using are probably correct, and begin to challenge your assumptions about how you think it’s all supposed to work. A little more understanding and experience will make it clear.

Apache Qpid graduates incubator; now a top-level project

December 11, 2008

The Apache Qpid project has been in incubation at the Apache Software Foundation for quite a while now, and has delivered at least three releases during that time. Recently the Apache Software Foundation board of directors voted to graduate the project from the incubator as a new top-level project (TLP) at Apache. This is a major milestone for Qpid and is based on:

  • A proven ability to manage and coordinate development and to release a product
  • A cultivated community of developers with sufficient diversity

I joined the Apache Qpid project this past summer, primarily to lead the port to Windows. I’ve been impressed with the development team’s professionalism, experience, and commitment to quality.

Congratulations to the Apache Qpid team on this great accomplishment!