Analysis of ACE_Proactor Shortcomings on UNIX

January 22, 2009

I’ve been looking into two related issues in the ACE development stream:

  1. SSL_Asynch_Stream_Test times out on HP-UX (I recently made a number of fixes to the test itself, so it runs as well as it can on Linux, but it still times out on HP-UX)
  2. Proactor_Test shows a stray, intermittent diagnostic on HP-UX: EINVAL returned from aio_suspend()

Although I’ve previously discussed use of ACE_Proactor on Linux (https://stevehuston.wordpress.com/2008/11/25/when-is-it-ok-to-use-ace-proactor-on-linux/), the issues on HP-UX are of a different sort. Linux never gets far enough to hit them, but if the Linux aio issues discussed in that post are ever resolved, the same problems I’m seeing on HP-UX may well surface there too. I also suspect that the failures these tests show on Solaris are of the same nature, though the symptoms are a bit different.

The symptoms are that the proactor event loop either fails to detect completions, or it gets random errors that smell like the aiocb list is damaged. I believe I’ve got a decent idea of what’s going on, and it’s basically two issues:

  1. If all of the completion dispatch threads are blocked waiting for completions when new I/O is initiated, the new operation(s) are not taken into account by the threads waiting for completions (see the sketch after this list). This is exactly what happens in the SSL_Asynch_Stream_Test timeout on HP-UX – all the completion-detecting threads are already running before any I/O is initiated, so no completions are ever detected.
  2. The completion and initiation activities modify the aiocb list used to detect completions directly, without interlocks, and without consideration of what effect the change may have (or not) on the threads waiting for completions.
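
To make issue #1 concrete, here is a simplified sketch (my own names, not the actual ACE source) of why a thread already blocked in aio_suspend() cannot see operations started after it built its wait list:

    #include <aio.h>

    // Simplified sketch: a dispatch thread snapshots the aiocb list and
    // then blocks. An aio_read()/aio_write() started by another thread
    // after the snapshot is invisible to this waiter -- nothing here
    // ever wakes it up to rebuild its list.
    void completion_wait_loop (const aiocb *const list[], int count)
    {
      for (;;)
        {
          // Blocks until one of the *snapshotted* operations completes.
          // Operations initiated after this call starts are not in
          // 'list', so their completions are never noticed.
          if (aio_suspend (list, count, 0) == -1)
            continue; // EINTR; or EINVAL if the list changed under us
          // ... locate the completed aiocb via aio_error()/aio_return(),
          // dispatch its handler, rebuild the list, and loop ...
        }
    }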

The ACE_Reactor framework uses internal notifications to handle the need to unblock waiting demultiplexing threads so they can re-examine the handle set as needed; something similar is needed for the ACE_Proactor to remedy issue #1 above. There is a notification pipe facility in the proactor code, but I need to see if it can be used in this case. I hope so…
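
If the notification pipe can be pressed into service, the mechanics would presumably resemble the classic self-pipe trick, adapted to aio: keep a one-byte aio_read permanently outstanding on the pipe’s read end and include its aiocb in every wait list, so that writing a byte wakes every blocked waiter. A hypothetical sketch (my names, not ACE’s internals):

    #include <aio.h>
    #include <unistd.h>

    // Hypothetical self-pipe wake-up for aio_suspend() waiters.
    struct Notify_Pipe
    {
      int fds_[2];   // fds_[0] = read end, fds_[1] = write end
      char buf_;
      aiocb cb_;

      bool open (void)
      {
        if (pipe (this->fds_) == -1)
          return false;
        this->cb_ = aiocb ();
        this->cb_.aio_fildes = this->fds_[0];
        this->cb_.aio_buf = &this->buf_;
        this->cb_.aio_nbytes = 1;
        // Always-pending read; its aiocb goes into every wait list.
        return aio_read (&this->cb_) == 0;
      }

      // The initiating thread calls this after queueing new I/O: the
      // byte completes cb_, so every aio_suspend() waiter wakes up and
      // rebuilds its list to include the new operations.
      void wake_waiters (void)
      {
        char b = 1;
        write (this->fds_[1], &b, 1);
      }

      // The waiter that consumed the notification re-arms the read.
      void rearm (void) { aio_read (&this->cb_); }
    };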

The other issue, concurrent access to the aiocb list by threads that are waiting for completions and threads that are modifying the list, is a much larger problem. Fixing it requires a more fundamental change to the innards of the POSIX Proactor implementation.
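
Whatever shape that change takes, the core of it is probably serialization: guard every mutation of the shared list with a lock and force waiters to rebuild their snapshot afterwards. A purely hypothetical sketch, building on the Notify_Pipe sketch above (not a patch to ACE):

    #include <aio.h>
    #include <pthread.h>
    #include <vector>

    // Hypothetical sketch only: mutations of the shared list are
    // serialized, and waiters are woken so no thread ever scans a
    // stale snapshot.
    struct Aiocb_List
    {
      pthread_mutex_t lock_;
      std::vector<aiocb *> ops_;

      void add (aiocb *op, Notify_Pipe &np)
      {
        pthread_mutex_lock (&this->lock_);
        this->ops_.push_back (op);
        pthread_mutex_unlock (&this->lock_);
        np.wake_waiters (); // blocked waiters drop out and re-snapshot
      }

      // Waiters build the array they hand to aio_suspend() from this
      // copy instead of touching ops_ directly.
      std::vector<aiocb *> snapshot (void)
      {
        pthread_mutex_lock (&this->lock_);
        std::vector<aiocb *> copy = this->ops_;
        pthread_mutex_unlock (&this->lock_);
        return copy;
      }
    };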

Note that there are a number of POSIX Proactor flavors inside ACE (section 8.5 in C++NPv2 describes most of them). The particular shortcomings I’ve noted here affect only the ACE_POSIX_AIOCB_Proactor and the ACE_POSIX_SIG_Proactor, which is largely based on the ACE_POSIX_AIOCB_Proactor. The newest one, ACE_POSIX_CB_Proactor, is much less affected, but is not as widely available.

So, the Proactor situation on UNIX platforms is generally not too good for demanding applications. Again, Proactor on Windows is very good, and recommended for high-performance, highly scalable networked applications. On Linux, stick to ACE_Reactor using the ACE_Dev_Poll_Reactor implementation; on other systems, stick with ACE_Reactor and ACE_Select_Reactor or ACE_TP_Reactor depending on your need for multithreaded dispatching.
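
For the Linux case, selecting ACE_Dev_Poll_Reactor is a one-line decision at reactor-construction time; for example:

    #include "ace/Reactor.h"
    #include "ace/Dev_Poll_Reactor.h"

    // Install an epoll-based reactor as the process-wide singleton.
    // The 'true' flags tell ACE to delete the implementation (and the
    // reactor) on close.
    void install_dev_poll_reactor (void)
    {
      ACE_Reactor *r = new ACE_Reactor (new ACE_Dev_Poll_Reactor, true);
      ACE_Reactor::instance (r, true);
    }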

When is it ok to use ACE Proactor on Linux?

November 25, 2008

The ACE Proactor framework (see C++NPv2 chapter 8, APG chapter 8) allows multiple I/O operations to be initiated and completed by a single thread. The idea is a good one: it allows a small number of threads to execute more I/O operations than they could synchronously, since the OS handles the actual transfers in the background. Many Windows programmers use this paradigm with overlapped I/O very effectively.

The overlapped I/O facility is what ACE Proactor uses on Windows, and when the time comes to port their ACE-based applications to Linux (or Solaris, or HP-UX, or AIX, or…), many developers naturally gravitate toward carrying the Proactor model along. It seems safe, since Linux offers the aio facility, so off they go.

And then it happens: I/O locks up and all progress stops. Why? Because the aio facility upon which ACE Proactor builds is very restricted for socket I/O on Linux (at least through the Linux versions I’ve worked on). The issue is that aio operations initiated by the application are silently converted to synchronous operations and executed in order, per handle. To see why this is a problem, consider the following common asynch I/O idiom:

  1. Open socket
  2. Initiate a read (whether expecting data or desiring to sense a broken connection)
  3. Initiate a write that should immediately complete

When the aio operations are converted to synchronous, the read executes first, in a blocking manner. The write (which really has data to send) will not execute until the read completes. Now consider the situation where the peer is waiting for an initial protocol exchange before sending any data: the peer won’t send until the local end sends, but the local end’s data won’t actually go out until the peer sends. Neither ever happens, and we have deadlock.
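
Here’s roughly what that idiom looks like with the ACE Proactor classes (a sketch: error handling and message cleanup are elided, and “Peer” is my name, not something from the tests):

    #include "ace/Asynch_IO.h"
    #include "ace/Message_Block.h"

    // Steps 1-3 from the list above. On Linux the aio requests behind
    // these calls run synchronously, in order: the read blocks, the
    // write never goes out, and the peer waits forever.
    class Peer : public ACE_Handler
    {
    public:
      int start (ACE_HANDLE h)   // step 1: 'h' is the connected socket
      {
        if (this->reader_.open (*this, h) != 0
            || this->writer_.open (*this, h) != 0)
          return -1;

        ACE_Message_Block *rb = new ACE_Message_Block (1024);
        if (this->reader_.read (*rb, rb->space ()) != 0)   // step 2
          return -1;

        ACE_Message_Block *wb = new ACE_Message_Block (64);
        wb->copy ("HELLO\n");    // the greeting the peer is waiting for
        return this->writer_.write (*wb, wb->length ()); // step 3: stuck
      }

    private:
      ACE_Asynch_Read_Stream reader_;
      ACE_Asynch_Write_Stream writer_;
    };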

The only way to make a Proactor-based application work on Linux is to follow a strict lock-step protocol. A ping-pong model, if you will. Each side may only have one operation outstanding on the socket at a time. This is fairly fragile and doesn’t suit many applications well. But if you do have such an application model, you can safely use ACE Proactor on Linux.
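
In ACE terms, the ping-pong model means each completion handler initiates the next, opposite-direction operation, so exactly one aio request is ever outstanding. A skeleton of the idea (my names; see Proactor_Test.cpp, mentioned below, for the real thing):

    #include "ace/Asynch_IO.h"
    #include "ace/Message_Block.h"

    // Half-duplex skeleton: one operation outstanding at a time, so
    // Linux's in-order synchronous execution of aio requests can never
    // wedge. Setup and error handling are elided.
    class Ping_Pong : public ACE_Handler
    {
    public:
      // A read completed: now (and only now) initiate the write.
      virtual void handle_read_stream
        (const ACE_Asynch_Read_Stream::Result &result)
      {
        ACE_Message_Block &mb = result.message_block ();
        this->writer_.write (mb, mb.length ());
      }

      // The write completed: the socket is idle, start the next read.
      virtual void handle_write_stream
        (const ACE_Asynch_Write_Stream::Result &result)
      {
        ACE_Message_Block &mb = result.message_block ();
        mb.reset ();
        this->reader_.read (mb, mb.space ());
      }

    private:
      ACE_Asynch_Read_Stream reader_;
      ACE_Asynch_Write_Stream writer_;
    };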

Note that the aio facility (and, thus, ACE Proactor) on HP-UX, AIX, etc. does not suffer from this “silently converts ops to synchronous” problem, so these restrictions don’t apply there.

For an example of how to program this lock-step protocol arrangement, see the ACE_wrappers/tests/Proactor_Test.cpp program – it has an option to run half-duplex.