Revised ACE_Dev_Poll_Reactor Fixes Multithread Issues (and more!) on Linux

When ACE 5.7 is released this week, it will contain an important fix (a number of them, actually) for use cases that rely on multiple threads running the Reactor event loop concurrently on Linux. The major fix areas for ACE_Dev_Poll_Reactor in ACE 5.7 are:

  • Dispatching events from multiple threads concurrently
  • Properly handling changes in handle registration during callbacks
  • Change in suspend/resume behavior to be more ACE_TP_Reactor-like

At the base of these fixes was a foundational change in the way ACE_Dev_Poll_Reactor manages events returned from Linux epoll. Prior to this change, ACE would obtain all ready events from epoll, and then each thread executing the event loop would in turn pick the next event from that set and dispatch it. This design was, I suppose, more or less borrowed from the ACE_Select_Reactor event demultiplexing strategy. In that case it made sense, since select() is relatively expensive and avoiding repeated scans of all the watched handles is a good thing. Also, ACE_Select_Reactor (and ACE_TP_Reactor, which inherits from it) has a mechanism to note that something in the handle registrations changed, signifying that select() must be called again. This mechanism was lacking in ACE_Dev_Poll_Reactor.

However, unlike with select(), it’s completely unnecessary to try to avoid calls to epoll_wait(). Epoll is much more scalable than select(), and letting epoll manage the event queue, passing back only one event at a time, is much simpler than the previous design and much easier to get correct. So that was the first change: obtain one event per call to epoll_wait(), letting Linux manage the event queue and weed out events for handles that are closed, etc. The second change was to add the EPOLLONESHOT option bit to the event registration for each handle. The effect of this is that once an event for a particular handle is delivered from epoll_wait(), that handle is effectively suspended: no more events for the handle will be delivered until the handle’s event mask is re-enabled via epoll_ctl(). These two changes were used to fix and extend ACE_Dev_Poll_Reactor as follows.
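To make the mechanism concrete, here is a minimal sketch of the pattern in raw epoll terms. This is not ACE’s actual internal code; the dispatch() function is a hypothetical stand-in for the handler upcall:

    #include <sys/epoll.h>

    // Hypothetical upcall into the application's handler for this fd.
    void dispatch(int fd, unsigned int events);

    // One event-loop iteration: ask epoll for a single ready event,
    // dispatch it, then re-arm the handle. EPOLLONESHOT guarantees that
    // no other thread can receive an event for the same fd until the
    // re-arm happens, so the handle is effectively suspended during
    // the upcall.
    void event_loop_iteration(int epfd)
    {
        struct epoll_event ev;
        int n = epoll_wait(epfd, &ev, 1, -1);  // request exactly one event
        if (n <= 0)
            return;  // interrupted or error; the caller retries

        int fd = ev.data.fd;
        dispatch(fd, ev.events);

        // Re-enable event delivery for this fd; the kernel disabled it
        // when the EPOLLONESHOT event was delivered.
        ev.events = EPOLLIN | EPOLLONESHOT;
        epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
    }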

Dispatching Events from Multiple Threads Concurrently

The main defect in the previous scheme was the possibility that events obtained from epoll_wait() could be delivered to an ACE_Event_Handler object that no longer existed. This was the primary driver for fixing ACE_Dev_Poll_Reactor. Another less likely, but still possible, situation was that callbacks for a handler could be called out of order, triggering time-sensitive ordering problems that are very difficult to track down. Both of these situations are resolved by obtaining only one I/O event per ACE_Reactor::handle_events() iteration. A side-effect of this change is that the concurrency behavior of ACE_Dev_Poll_Reactor changes from being similar to ACE_WFMO_Reactor (simultaneous callbacks to the same handler are possible) to being similar to ACE_TP_Reactor (only one I/O callback for a particular handle at a time). Since epoll’s behavior regarding when a handle becomes available for more events differs from that of Windows’s WaitForMultipleObjects, the old multiple-concurrent-callbacks-per-handle behavior couldn’t be done correctly anyway, so the new ACE_Dev_Poll_Reactor behavior leads to easier coding and programs that are much more likely to be correct when changing reactor use between platforms.
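For illustration, here is a sketch of the thread-pool use case this fix supports: several threads concurrently running the event loop of one ACE_Reactor backed by ACE_Dev_Poll_Reactor. The thread count and overall structure are illustrative, not a recommendation:

    #include "ace/Reactor.h"
    #include "ace/Dev_Poll_Reactor.h"
    #include "ace/Thread_Manager.h"
    #include "ace/OS_main.h"

    // Each pool thread runs the same reactor's event loop.
    static ACE_THR_FUNC_RETURN event_loop (void *arg)
    {
      ACE_Reactor *reactor = static_cast<ACE_Reactor *> (arg);
      reactor->run_reactor_event_loop ();
      return 0;
    }

    int ACE_TMAIN (int, ACE_TCHAR *[])
    {
      // Wrap an ACE_Dev_Poll_Reactor implementation; the second argument
      // tells ACE_Reactor to delete the implementation on destruction.
      ACE_Reactor reactor (new ACE_Dev_Poll_Reactor, 1);

      // ... register event handlers with the reactor here ...

      // Spawn a pool of threads that all dispatch events concurrently.
      ACE_Thread_Manager::instance ()->spawn_n (4, event_loop, &reactor);
      ACE_Thread_Manager::instance ()->wait ();
      return 0;
    }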

Properly Handling Changes in Handle Registration During Callbacks

A difficult problem to track down sometimes arose in the previous design when a callback handler changed handle registration. In such a case, if the reactor made a subsequent callback to the original handler (for example, if the callback returned -1 and the handler needed to be removed), the callback could be made to the wrong handler: the newly registered handler instead of the one originally called. This problem was fixed by changes and additions to the dispatching data structures and code, and it is no longer an issue.

Change in Suspend/Resume Behavior to Be More ACE_TP_Reactor-like

An important aspect of ACE_TP_Reactor’s ability to support complicated use cases arising in systems such as TAO is that a dispatched I/O handler is suspended around the upcall. This prevents multiple events from being dispatched to the same handler simultaneously. As previously mentioned, the changes to ACE_Dev_Poll_Reactor also effectively suspend a handler around an upcall. However, a feature once available only with ACE_TP_Reactor is that an application can specify that the application, not the ACE reactor, will resume the suspended handler. This capability is important to properly supporting the nested upcall capability in TAO, for example. The revised ACE_Dev_Poll_Reactor now also has this capability. Once the epoll changes were made to effectively suspend a handler around an upcall, taking advantage of the existing suspend/resume setting in ACE_Event_Handler was pretty straightforward.
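As a sketch of how an application opts into this, a handler overrides ACE_Event_Handler::resume_handler() to claim responsibility for resuming, and later calls the reactor’s resume_handler() when its processing is done. The class and the processing details here are hypothetical:

    #include "ace/Event_Handler.h"
    #include "ace/Reactor.h"

    class My_Handler : public ACE_Event_Handler
    {
    public:
      // Tell the reactor that the application, not the reactor, will
      // resume this handler after a dispatched upcall.
      virtual int resume_handler (void)
      {
        return ACE_Event_Handler::ACE_APPLICATION_RESUMES_HANDLER;
      }

      virtual int handle_input (ACE_HANDLE h)
      {
        // ... hand the work off for processing; the handler stays
        // suspended, so no further events arrive for this handle ...

        // When processing completes (often from another thread), the
        // application resumes the handler to re-enable event delivery:
        this->reactor ()->resume_handler (h);
        return 0;
      }
    };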

So, if you’ve been holding off on using ACE_Dev_Poll_Reactor on Linux because it was unstable with multiple threads, or you didn’t like the concurrency behavior and the instability it could bring, I encourage you to re-evaluate this area when ACE 5.7 is released this week. And if you’ve ever wondered what good professional support services look like, I did this work for a support customer who is very happy they didn’t have to pay hourly for it. And many more people will be happy that, since I wasn’t billing for time, I could freely fix tangential issues not in the original report, such as the application-resume feature. Everyone wins: the customer’s problem is resolved and ACE’s overall product quality and functionality are improved. Enjoy!


9 Responses to “Revised ACE_Dev_Poll_Reactor Fixes Multithread Issues (and more!) on Linux”

  1. Sergio Ruiz Says:

    Hi Steve,
After I upgraded my server app from ACE 5.6.2 to 5.7, I noticed high CPU utilization (> 100%). My server uses multiple worker threads to run the ACE_Dev_Poll_Reactor. I got this behavior on CentOS 4.x and 5.x.
    My app works fine if I revert back to v5.6.2 or if I run it with TP_Reactor.
    Has anybody reported this issue yet?
    Any ideas what may be causing it?
    Thanks,
    Sergio.

  2. stevehuston Says:

Hey Sergio… right, I saw your problem report on ace_users. This hasn’t come up anywhere else, and I haven’t had free time to examine the test program you sent. If you can collect some profiling info to help narrow down the search, that may help.

  3. Sergio Ruiz Says:

    Hi Steve,
I’ve sent you the strace output and the output from my test app using ACE_TRACE. The trace shows a large number of system calls to futex, probably related to the reactor’s token.
    Thanks,
    Sergio

  4. Herb Gillman Says:

I too am seeing this odd behavior with the dev_poll reactor. It’s bizarre. When I start my app, the CPU utilization shown in top goes to > 100% even though nothing is really running. When I run under the debugger, all the Reactor threads from my ThreadPool exit and everything appears to run on one thread on one CPU, and the CPU utilization shown in top on startup is 0 (as expected).

I am running on 3 GHz AMD quad-core boxes, one with Linux RHEL4.5 and one with Linux RHEL5.3.
    ACE version is 5.7.3-2 (downloaded and built from http://dist.bonsai.com/ken/ace_tao_rpm/SRC/).

  5. stevehuston Says:

OK, thanks Herb… there’s clearly something going wrong somewhere. I’ve been very busy with customers, but if you and Sergio can get together and work out a fix, I’ll try to help apply it when I have free time. Or, if a customer raises this issue, it’ll get attention quickly.

  6. Resolving the CPU-bound ACE_Dev_Poll_Reactor Problem, and more « Steve Huston's Networked Programming Blog Says:

    […] the CPU-bound ACE_Dev_Poll_Reactor Problem, and more By stevehuston I previously wrote about improvements to ACE_Dev_Poll_Reactor I made for ACE 5.7. The improvements were important for […]

  7. stevehuston Says:

    Herb, Sergio… the CPU-bound problems have been resolved; please see ACE 5.7.7 or https://stevehuston.wordpress.com/2010/02/05/resolving-the-cpu-bound-ace_dev_poll_reactor-problem-and-more/

  8. nana Says:

    This is good news!
    Thank you.

    please fix the misspellings.
    side-affect -> side-effect.
