Archive for the ‘threads’ Category

Diagnosing Stack/Heap Collision on AIX

April 29, 2011

I was recently confronted with a program that mysteriously aborted (Trace/BPT trap) at run time on AIX 7.1 (but not on AIX 6.1). Usually, anyway: it didn’t fail on every system or with every build setting.

This program is the ACE Message_Queue_Test; in particular, the stress test I added to it to ensure that blocks are counted properly when enqueues and dequeues happen in different combinations from different threads. The problem ended up not being particular to ACE, but I did add a change to the test’s build settings to account for it. But I’m getting ahead of myself…

The symptoms were that after the queue writer threads had been running a while and the reader threads started to exit, a writer thread would hit a Trace/BPT trap. The ACE_Task in that thread had its members all zeroed out, including the message queue pointer, leading to the trap. I tried setting debug watchpoints on the task contents but still got no real clues.

Yes, the all-zeroes contents of the wiped stack should have tipped me off, but hindsight is always 20/20.

The other source of confusion was that the same program built on AIX 6.1 would run fine there. But copy it over to AIX 7.1, and crash! So I opened a support case with IBM AIX support to report the broken binary compatibility from AIX 6.1 to 7.1. “There. That’s off my hands and into IBM’s,” I thought. “I hope it isn’t a total pain to get a fix from them. Let’s see what Big Blue can do.”

If you’ve been reading this blog for a while you may recall another support experience I related here, from a different support-providing company that wears hats of a different color than Big Blue. As you may recall, I was less than impressed.

Within hours I got a response that IBM had reproduced the problem, although they could crash my program on both AIX 7.1 and 6.1. They wanted a test case with preprocessed source to get more information. I responded that they could download the whole 12 MB ACE source kit – the source is in there. Meanwhile I set off to narrow the code down into a small test case, imagining the whole AIX support team laughing hysterically about this joker who wanted them to download a 12 MB tarball to help diagnose a case.

I came back from lunch yesterday gearing up to get my test case together. There was email from IBM support. “Is this where they remind me that they want a small test case?” I wondered.

Nope. The email contained the dbx steps they used to diagnose the problem (which was mine), three ways to resolve it, and pointers to the AIX docs that explain all the background.

Wow.

AIX support rocks. I mean, under ACE support I very often help customers diagnose problems that turn out to be in the customer’s own code, but I’ve never experienced that from any other company. Really. Outstanding.

So what was the problem in the end? The segment 2 memory area, which holds both the heap and the process stacks, was overflowing. The program was allocating enough memory to cause the heap to run over the stacks. (Remember the zeroed-out stack content? The newly allocated memory was being cleared.)

This is how the diagnosis went:

(dbx) run

Trace/BPT trap in ACE_Task<ACE_MT_SYNCH>::putq(ACE_Message_Block*,ACE_Time_Value*) at line 39 in file "Task_T.inl" ($t12)
39     return this->msg_queue_->enqueue_tail (mb, tv);
(dbx) list 36,42
36   ACE_Task<ACE_SYNCH_USE>::putq (ACE_Message_Block *mb, ACE_Time_Value *tv)
37   {
38     ACE_TRACE ("ACE_Task<ACE_SYNCH_USE>::putq");
39     return this->msg_queue_->enqueue_tail (mb, tv);
40   }
41
42   template <ACE_SYNCH_DECL> ACE_INLINE int

(dbx) 0x10000f20/12 i
0x10000f20 (ACE_Task<ACE_MT_SYNCH>::putq(ACE_Message_Block*,ACE_Time_Value*))  7c0802a6        mflr   r0
0x10000f24 (ACE_Task<ACE_MT_SYNCH>::putq(ACE_Message_Block*,ACE_Time_Value*)+0x4)  9421ffc0        stwu   r1,-64(r1)
0x10000f28 (ACE_Task<ACE_MT_SYNCH>::putq(ACE_Message_Block*,ACE_Time_Value*)+0x8)  90010048         stw   r0,0x48(r1)
0x10000f2c (ACE_Task<ACE_MT_SYNCH>::putq(ACE_Message_Block*,ACE_Time_Value*)+0xc)  90610058         stw   r3,0x58(r1)
0x10000f30 (ACE_Task<ACE_MT_SYNCH>::putq(ACE_Message_Block*,ACE_Time_Value*)+0x10)  9081005c         stw   r4,0x5c(r1)
0x10000f34 (ACE_Task<ACE_MT_SYNCH>::putq(ACE_Message_Block*,ACE_Time_Value*)+0x14)  90a10060         stw   r5,0x60(r1)
0x10000f38 (ACE_Task<ACE_MT_SYNCH>::putq(ACE_Message_Block*,ACE_Time_Value*)+0x18)  80610058         lwz   r3,0x58(r1)
0x10000f3c (ACE_Task<ACE_MT_SYNCH>::putq(ACE_Message_Block*,ACE_Time_Value*)+0x1c)  0c430200      twllti   r3,0x200
0x10000f40 (ACE_Task<ACE_MT_SYNCH>::putq(ACE_Message_Block*,ACE_Time_Value*)+0x20)  80610058         lwz   r3,0x58(r1)
0x10000f44 (ACE_Task<ACE_MT_SYNCH>::putq(ACE_Message_Block*,ACE_Time_Value*)+0x24)  806300a4         lwz   r3,0xa4(r3)
0x10000f48 (ACE_Task<ACE_MT_SYNCH>::putq(ACE_Message_Block*,ACE_Time_Value*)+0x28)  0c430200      twllti   r3,0x200
0x10000f4c (ACE_Task<ACE_MT_SYNCH>::putq(ACE_Message_Block*,ACE_Time_Value*)+0x2c)  80630000         lwz   r3,0x0(r3)

(dbx) 0x2FF2289C/4 x
0x2ff2289c:  0000 0000 0000 0000

(dbx) malloc
The following options are enabled:

Implementation Algorithm........ Default Allocator (Yorktown)

Statistical Report on the Malloc Subsystem:
Heap 0
heap lock held by................ pthread ID 0x200248e8
bytes acquired from sbrk().......    267402864 <***!!!
bytes in the freespace tree......        15488
bytes held by the user...........    267387376
allocations currently active.....      4535796
allocations since process start..      9085824

The Process Heap
Initial process brk value........ 0x2001e460
current process brk value........ 0x2ff222d0 <***!!!
sbrk()s called by malloc.........       4071

*** Heap has reached the upper limit of segment 0x2 and
collided with the initial thread's stack.
Changing the executable to a 'large address model' 32bit
exe should resolve the problem (in other words give
it more heap space).

# ldedit -b maxdata:0x20000000 MessageQueueTest
ldedit:  File MessageQueueTest updated.
# dump -ov MessageQueueTest

MessageQueueTest:

***Object Module Header***
# Sections      Symbol Ptr      # Symbols       Opt Hdr Len     Flags
6      0x004cde82         142781                72     0x1002
Flags=( EXEC DYNLOAD DEP_SYSTEM )
Timestamp = "Apr 23 14:51:24 2011"
Magic = 0x1df  (32-bit XCOFF)

***Optional Header***
Tsize        Dsize       Bsize       Tstart      Dstart
0x001b7244  0x0001d8ec  0x000007b8  0x10000178  0x200003bc

SNloader     SNentry     SNtext      SNtoc       SNdata
0x0004      0x0002      0x0001      0x0002      0x0002

TXTalign     DATAalign   TOC         vstamp      entry
0x0007      0x0003      0x2001cc40  0x0001      0x20017f7c

maxSTACK     maxDATA     SNbss       magic       modtype
0x00000000  0x20000000  0x0003      0x010b        1L
# ./MessageQueueTest
#                     <-- NO CRASH!

Summary: Increasing the heap space from the default of approximately 256 MB to 512 MB resolved the problem. IBM gave me three ways to do it:

  1. Edit the executable as above with ldedit
  2. Relink the executable with -bmaxdata:0x20000000
  3. Set environment variable LDR_CNTRL=MAXDATA=0x20000000

I ended up changing the Message_Queue_Test’s MPC description to add -bmaxdata to the build. That was the easiest way to always get it correct and make it easier for the regression test suite to execute the program.

Lastly, here’s the link IBM gave me for the ‘large address model’:

http://publib.boulder.ibm.com/infocenter/aix/v6r1/index.jsp?topic=/com.ibm.aix.genprogc/doc/genprogc/lrg_prg_support.htm

Bottom line – the test is running, the project is done, I have a sunny afternoon to write this blog entry and enjoy the nice New England spring day – instead of narrowing down a test case. Thanks, IBM!


Resolving the CPU-bound ACE_Dev_Poll_Reactor Problem, and more

February 5, 2010

I previously wrote about improvements to ACE_Dev_Poll_Reactor I made for ACE 5.7. The improvements were important for large-scale uses of ACE_Dev_Poll_Reactor, but introduced a problem where some applications went CPU bound, particularly on CentOS. I have made further improvements in ACE_Dev_Poll_Reactor to resolve the CPU-bound issue as well as to further improve performance. These changes will be in the ACE 5.7.7 micro release; the customer that funded the improvements is running load and performance tests on them now.

Here’s what was changed to improve the performance:

  • Change the notify handler so it’s not suspended/resumed around callbacks like normal event handlers are.
  • Delay resuming an auto-suspended handle until the next call to epoll_wait().

I’ll describe more about each point separately.

Don’t Suspend/Resume the Notify Handler

All of the Reactor implementations in ACE have an event handler that responds to reactor notifications. Most of the implementations (such as ACE_Select_Reactor and ACE_TP_Reactor) pay special attention to the notify handler when dispatching events because notifications are always dispatched before I/O events. However, the ACE_Dev_Poll_Reactor does not make the same effort to dispatch notifications before I/O; they’re intermixed as the epoll facility dequeues events in response to epoll_wait() calls. Thus, there was little special-cased code for the notify handler in the event dispatching path. When event handler dispatching was changed to automatically suspend and resume handlers around upcalls, the notify handler was also suspended and resumed. This is where the CPU-bound issue came in: when a dispatched callback returned to the reactor, the dispatching thread needed to reacquire the reactor token so it could update the internal reactor state required to verify the handler and resume it. Acquiring the reactor token can involve a reactor notification if another thread is currently executing the event dispatching loop. (Can you see it coming?) It was possible for the notify handler to be resumed, which caused a notify, which dispatched the notify handler, which required another resume, which caused a notify, which… ad infinitum.

The way I resolved this was to simply not suspend/resume the notify handler. This removed the source of the infinite notifications and CPU times came back down quickly.

Delay Resuming an Auto-Suspended Handle

Before beginning the performance improvement work, I wrote a new test, Reactor_Fairness_Test. This test uses a number of threads to run the reactor event loop and drives traffic at a set of TCP sockets as fast as possible for a fixed period of time. At the end of the time period, the number of data chunks received at each socket is compared; the counts should all be pretty close. I ran this test with ACE_Select_Reactor (one dispatching thread), ACE_TP_Reactor, and ACE_Dev_Poll_Reactor initially. This was important because the initial customer issue I was working on was related to fairness in dispatching events. ACE_Dev_Poll_Reactor’s fairness is very good but the performance needed to go up.

With the notify changes from above, the ACE_Dev_Poll_Reactor performance went up to slightly better than ACE_TP_Reactor (and the test uses a relatively small number of sockets). However, while examining strace output for the test run I noticed that there were still many notifications, which caused a lot of extra event dispatching that slowed the test down.

As I described above, when the reactor needs to resume a handler after its callback completes, it must acquire the reactor token (the token is released during the event callback to the handler). This often requires a notify, but even when it doesn’t, the dispatching thread needs to wait for the token just to change some state, then release the token, then go around the event processing loop again which requires it to wait for the token again – a lot of token thrashing that would be great to remove.

The plan I settled on was to keep a list of handlers that needed to be resumed; instead of immediately resuming the handler upon return from the upcall, add the handler to the to-be-resumed list. This only requires a mutex instead of the reactor token, so there’s no possibility of triggering another notify, and there’s little contention for the mutex in other parts of the code. The dispatching thread could quickly add the entry to the list and get back in line for dispatching more events.

The second part of the scheme is that a thread about to call epoll_wait() for the next event will first walk the to-be-resumed list (while holding the reactor token it already acquired in order to reach epoll_wait()) and resume any handlers on the list that are still valid; they may have been canceled or explicitly resumed by the application in the meantime.
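Here is a minimal sketch of the deferred-resume idea. This is not the actual ACE_Dev_Poll_Reactor code; the class and member names are illustrative, and it uses a standard C++ mutex where ACE would use its own wrappers. The two points it tries to show are that scheduling a resume needs only a plain mutex (so it can never trigger a notify), and that the handles are actually re-armed by a thread that already holds the reactor token on its way into epoll_wait().

#include <list>
#include <mutex>

// Illustrative stand-in for the relevant pieces of ACE_Dev_Poll_Reactor.
class Dev_Poll_Dispatcher
{
public:
  // Called by a dispatching thread right after a handler upcall returns.
  // Needs only the mutex, not the reactor token, so it cannot trigger
  // another reactor notification.
  void schedule_resume (int handle)
  {
    std::lock_guard<std::mutex> guard (this->to_be_resumed_lock_);
    this->to_be_resumed_.push_back (handle);
  }

  // Called by the thread that is about to call epoll_wait(), while it
  // already holds the reactor token: re-arm every handle that is still
  // registered and still suspended.
  void resume_pending_handles ()
  {
    std::list<int> pending;
    {
      std::lock_guard<std::mutex> guard (this->to_be_resumed_lock_);
      pending.swap (this->to_be_resumed_);
    }
    for (std::list<int>::iterator i = pending.begin (); i != pending.end (); ++i)
      if (this->still_registered_and_suspended (*i))
        this->reenable_in_epoll (*i);   // epoll_ctl(EPOLL_CTL_MOD) re-arms EPOLLONESHOT
  }

private:
  // Placeholders for the real registration bookkeeping and epoll calls.
  bool still_registered_and_suspended (int /* handle */) { return true; }
  void reenable_in_epoll (int /* handle */) {}

  std::mutex to_be_resumed_lock_;
  std::list<int> to_be_resumed_;
};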

After this improvement was made, my reactor fairness test still showed excellent fairness for ACE_Dev_Poll_Reactor, but with about twice the throughput and roughly half the CPU usage. These results were gathered with less-than-scientific measurements and a specific usage pattern – your mileage may vary. But if you’ve been scared away from ACE_Dev_Poll_Reactor by the discussions of CPU-bound applications getting poor performance, it’s time to take another look at ACE_Dev_Poll_Reactor.

Revised ACE_Dev_Poll_Reactor Fixes Multithread Issues (and more!) on Linux

June 15, 2009

When ACE 5.7 is released this week it will contain an important fix (a number of them, actually) for use cases that rely on multiple threads running the Reactor event loop concurrently on Linux. The major fix areas involved for ACE_Dev_Poll_Reactor in ACE 5.7 are:

  • Dispatching events from multiple threads concurrently
  • Properly handling changes in handle registration during callbacks
  • Change in suspend/resume behavior to be more ACE_TP_Reactor-like

At the base of these fixes was a foundational change in the way ACE_Dev_Poll_Reactor manages events returned from Linux epoll. Prior to this change, ACE would obtain all ready events from epoll and then each event loop-executing thread in turn would pick the next event from that set and dispatch it. This design was, I suppose, more or less borrowed from the ACE_Select_Reactor event demultiplexing strategy. In that case it made sense, since select() is relatively expensive and avoiding repeated scans of all the watched handles is a good thing. Also, the ACE_Select_Reactor (and ACE_TP_Reactor, which inherits from it) has a mechanism to note that something in the handle registrations changed, signifying that select() must be called again. This mechanism was lacking in ACE_Dev_Poll_Reactor.

However, unlike with select(), it’s completely unnecessary to try to avoid calls to epoll_wait(). Epoll is much more scalable than is select(), and letting epoll manage the event queue, only passing back one event at a time, is much simpler than the previous design, and also much easier to get correct. So that was the first change: obtain one event per call to epoll_wait(), letting Linux manage the event queue and weed out events for handles that are closed, etc. The second change was to add the EPOLLONESHOT option bit to the event registration for each handle. The effect of this is that once an event for a particular handle is delivered from epoll_wait(), that handle is effectively suspended. No more events for the handle will be delivered until the handle’s event mask is re-enabled via epoll_ctl(). These two changes were used to fix and extend ACE_Dev_Poll_Reactor as follows.
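Before going through each fix, here is a small standalone sketch of the pattern just described, in plain Linux epoll terms rather than the actual ACE_Dev_Poll_Reactor code (the dispatch callback is a placeholder): register each handle with EPOLLONESHOT, ask epoll_wait() for exactly one event, dispatch it, and only then re-arm the handle with epoll_ctl(EPOLL_CTL_MOD).

#include <sys/epoll.h>
#include <unistd.h>

// Register a socket for one-shot input events.
void register_handle (int epfd, int fd)
{
  epoll_event ev = {};
  ev.events = EPOLLIN | EPOLLONESHOT;   // delivered once, then the fd is effectively suspended
  ev.data.fd = fd;
  ::epoll_ctl (epfd, EPOLL_CTL_ADD, fd, &ev);
}

// Re-arm the handle after its callback completes; until this is done,
// no thread can receive another event for this fd.
void resume_handle (int epfd, int fd)
{
  epoll_event ev = {};
  ev.events = EPOLLIN | EPOLLONESHOT;
  ev.data.fd = fd;
  ::epoll_ctl (epfd, EPOLL_CTL_MOD, fd, &ev);
}

// Event loop run concurrently by multiple threads: each call to epoll_wait()
// asks for exactly one event, so the kernel keeps the queue and no two
// threads ever dispatch the same handle at the same time.
void event_loop (int epfd, bool (*dispatch) (int fd))
{
  for (;;)
    {
      epoll_event ev;
      int const n = ::epoll_wait (epfd, &ev, 1 /* one event at a time */, -1);
      if (n == 1)
        {
          if (dispatch (ev.data.fd))      // upcall; handle stays suspended meanwhile
            resume_handle (epfd, ev.data.fd);
          else
            ::close (ev.data.fd);         // handler asked to be removed
        }
    }
}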

Dispatching Events from Multiple Threads Concurrently

The main defect in the previous scheme was the possibility that events obtained from epoll_wait() could be delivered to an ACE_Event_Handler object that no longer existed. This was the primary driver for fixing ACE_Dev_Poll_Reactor. However, another less likely, but still possible, situation was that callbacks for a handler could be called out of order, triggering time-sensitive ordering problems that are very difficult to track down. Both of these situations are resolved by obtaining only one I/O event per ACE_Reactor::handle_events() iteration. A side effect of this change is that the concurrency behavior of ACE_Dev_Poll_Reactor changes from being similar to ACE_WFMO_Reactor (simultaneous callbacks to the same handler are possible) to being similar to ACE_TP_Reactor (only one I/O callback for a particular handle at a time). Since epoll’s behavior with respect to when a handle becomes available for more events differs from that of Windows’ WaitForMultipleObjects(), the old multiple-concurrent-callbacks-per-handle behavior couldn’t be done correctly anyway, so the new ACE_Dev_Poll_Reactor behavior leads to easier coding and programs that are much more likely to remain correct when changing reactor use between platforms.

Properly Handling Changes in Handle Registration During Callbacks

A difficult-to-track-down problem sometimes arose in the previous design when a callback handler changed handle registration. In such a case, if the reactor made a subsequent callback to the original handler (for example, if the callback returned -1 and the handler needed to be removed), the callback might be made to the wrong handler – the newly registered handler instead of the originally called one. This was fixed by making some changes and additions to the dispatching data structures and code, and it is no longer an issue.

Change in Suspend/Resume Behavior to Be More ACE_TP_Reactor-like

An important aspect of ACE_TP_Reactor’s ability to support complicated use cases arising in systems such as TAO is that a dispatched I/O handler is suspended around the upcall. This prevents multiple events from being dispatched to the same handler simultaneously. As previously mentioned, the changes to ACE_Dev_Poll_Reactor also effectively suspend a handler around an upcall. However, a feature once only available with ACE_TP_Reactor is that an application can specify that the application, not the ACE reactor, will resume the suspended handler. This capability is important to properly supporting the nested upcall capability in TAO, for example. The revised ACE_Dev_Poll_Reactor now also has this capability. Once the epoll changes were made to effectively suspend a handler around an upcall, taking advantage of the existing suspend/resume setting in ACE_Event_Handler was pretty straightforward.
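For example, a handler that wants the application to control its resumption tells the reactor so by overriding ACE_Event_Handler::resume_handler(). A minimal sketch, assuming the usual ACE suspend/resume hooks (the handler class is illustrative, and the resume call at the end only indicates where the application would eventually resume the handle):

#include "ace/Event_Handler.h"
#include "ace/Reactor.h"

class My_Handler : public ACE_Event_Handler
{
public:
  // Tell the reactor that the application, not the reactor, will resume
  // this handler after an I/O upcall.
  virtual int resume_handler (void)
  {
    return ACE_Event_Handler::ACE_APPLICATION_RESUMES_HANDLER;
  }

  virtual int handle_input (ACE_HANDLE /* h */)
  {
    // ... process the I/O, possibly handing longer work to another thread ...
    // The handler remains suspended until the application resumes it.
    return 0;
  }
};

// Later, when the application has finished the work tied to this handler
// (for example, a nested upcall has completed), it resumes it explicitly:
//   reactor->resume_handler (handler->get_handle ());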

So, if you’ve been holding off on using ACE_Dev_Poll_Reactor on Linux because it was unstable with multiple threads, or you didn’t like the concurrency behavior and the instability it could bring, I encourage you to re-evaluate this area when ACE 5.7 is released this week. And if you’ve ever wondered what good professional support services are worth, I did this work for a support customer who is very happy they didn’t have to pay hourly for it. And many more people will be happy that, since I wasn’t billing for time, I could freely fix tangential issues that weren’t in the original report, such as the application-resume feature. Everyone wins: the customer’s problem is resolved and ACE’s overall product quality and functionality are improved. Enjoy!

Analysis of ACE_Proactor Shortcomings on UNIX

January 22, 2009

I’ve been looking into two related issues in the ACE development stream:

  1. SSL_Asynch_Stream_Test times out on HP-UX (I recently made a bunch of fixes to the test itself so it runs as well as it can on Linux, but it still times out on HP-UX)
  2. Proactor_Test shows a stray, intermittent diagnostic on HP-UX: EINVAL returned from aio_suspend()

Although I’ve previously discussed use of ACE_Proactor on Linux (https://stevehuston.wordpress.com/2008/11/25/when-is-it-ok-to-use-ace-proactor-on-linux/), the issues on HP-UX are of a different sort. If the Linux aio issues discussed there are ever resolved inside Linux, the same problem I’m seeing on HP-UX may arise there as well, but for now Linux doesn’t get that far. Also, I suspect that the issues arising from these tests’ execution on Solaris are of the same nature, though the symptoms are a bit different.

The symptoms are that the proactor event loop either fails to detect completions, or it gets random errors that smell like the aiocb list is damaged. I believe I’ve got a decent idea of what’s going on, and it’s basically two issues:

  1. If all of the completion dispatch threads are blocked waiting for completions when new I/O is initiated, the new operation(s) are not taken into account by the threads already waiting for completions. This is basically the case in the SSL_Asynch_Stream_Test timeout on HP-UX – all the completion-detecting threads are already running before any I/O is initiated, and no completions are ever detected.
  2. The completion and initiation activities modify the aiocb list used to detect completions directly, without interlocks, and without consideration of what effect this may have (or not) on the threads waiting for completions.

The ACE_Reactor framework uses internal notifications to handle the need to unblock waiting demultiplexing threads so they can re-examine the handle set as needed; something similar is needed for the ACE_Proactor to remedy issue #1 above. There is a notification pipe facility in the proactor code, but I need to see if it can be used in this case. I hope so…
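To illustrate the kind of internal notification needed to fix issue #1, here is a hedged sketch (this is not the ACE proactor’s actual notification pipe code; the class and member names are mine): keep an aio_read() outstanding on a pipe and include that aiocb in every list passed to aio_suspend(). A thread that initiates new I/O writes a byte to the pipe; that completes the pipe read, aio_suspend() returns, and the waiting threads can rebuild their aiocb lists to include the new operations.

#include <aio.h>
#include <unistd.h>
#include <cstring>

// Illustrative wakeup mechanism for threads blocked in aio_suspend().
class Aio_Wakeup_Pipe
{
public:
  Aio_Wakeup_Pipe ()
  {
    ::pipe (this->fds_);
    this->rearm ();
  }

  // The entry that must be included in every aiocb list given to aio_suspend().
  const aiocb *wait_entry () const { return &this->cb_; }

  // Called by a thread that has just initiated new I/O: completing the pipe
  // read causes aio_suspend() in the waiting threads to return.
  void wake ()
  {
    char b = 1;
    ::write (this->fds_[1], &b, 1);
  }

  // Called by an awakened thread (after aio_error() on wait_entry() reports
  // completion) before it rebuilds its list with the new operations included.
  void rearm ()
  {
    std::memset (&this->cb_, 0, sizeof (this->cb_));
    this->cb_.aio_fildes = this->fds_[0];
    this->cb_.aio_buf = this->buf_;
    this->cb_.aio_nbytes = sizeof (this->buf_);
    ::aio_read (&this->cb_);
  }

private:
  int fds_[2];     // [0] read end, [1] write end
  aiocb cb_;       // the always-outstanding read on the pipe
  char buf_[1];
};

A waiting thread would pass wait_entry() along with its real aiocbs to aio_suspend(); when aio_error() on the pipe entry no longer returns EINPROGRESS, the thread consumes the completion with aio_return(), calls rearm(), and rebuilds its wait list before waiting again.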

The other problem, concurrent access to the aiocb list by threads that are simultaneously waiting for completions and modifying the list, is much larger. It requires a more fundamental change to the innards of the POSIX Proactor implementation.

Note that there are a number of POSIX Proactor flavors inside ACE (section 8.5 in C++NPv2 describes most of them). The particular shortcomings I’ve noted here only affect the ACE_POSIX_AIOCB_Proactor and ACE_POSIX_SIG_Proactor, which is largely based on the ACE_POSIX_AIOCB_Proactor. The newest one, ACE_POSIX_CB_Proactor, is much less affected, but is not as widely available.

So, the Proactor situation on UNIX platforms is generally not too good for demanding applications. Again, Proactor on Windows is very good, and recommended for high-performance, highly scalable networked applications. On Linux, stick to ACE_Reactor using the ACE_Dev_Poll_Reactor implementation; on other systems, stick with ACE_Reactor and ACE_Select_Reactor or ACE_TP_Reactor depending on your need for multithreaded dispatching.

There’s No Substitute for Experience with Threads

January 5, 2009

When your system performance is not all you had hoped it would be, are you tempted to think that adding more threads will speed things up? When your customers complain that they upgraded to the latest multicore processor but your application doesn’t run any faster, what answers do you have for them?

Even if race conditions, synchronization bottlenecks, and atomicity are part of your normal vocabulary, the world of multithreaded programming is one where one must really understand what’s below the surface of the API to get the full picture of why some piece of code is, or is not, working. I was reminded of this, and the deep truths of how deep your understanding must be, while catching up on some reading this week.

I was introduced to threads (DECthreads, the precursor to Pthreads, for you history buffs) in the early 1990s. Neat! I can do multiple things at the same time! Fortunately, I spent a fair amount of time in my formative programming years working on an operating system for the Control Data 3600 (anyone remember the TV show “The Bionic Man”? The large console in the bionics lab was a CDC-3600). I learned the hard way that the world can change in odd ways between instructions. So I wasn’t completely fooled by the notion of magically being able to do multiple things at the same time, but threading libraries make the whole area of threads much more approachable. But with power comes responsibility – the responsibility to know what you’re doing with that power tool.

I’ve been working on multithreaded code for many years now and find multithreading a powerful tool for building high-performance networked applications. So I was eager to read the Dr. Dobb’s article “Lock-Free Code: A False Sense of Security” by Herb Sutter (his blog entry related to the article is here). His “Effective Concurrency” column is a regular favorite of mine. The article is a diagnosis of an earlier article describing a lock-free queue implementation. I had previously read the article being diagnosed and, although I only skimmed it since I had no immediate need for a single-writer, single-reader queue at the time, I didn’t catch anything really wrong with it. So I was anxious to see what I missed.

Boy, I missed a few things. Now that I see them explained, it’s like “ah, of course,” but I probably wouldn’t have thought about those issues until I was trying to figure out what was wrong at run time. Some may say the issues are sort of esoteric and machine-specific, and I might agree, but it doesn’t matter – it’s a case of understanding your environment and tools, and another situation where experience makes all the difference between banging your head on the wall and getting the job done.

I’m thankful that I can get more understanding by reading the works of smart people who’ve trodden before me. I’m sure that knowledge will save me some time at some point when debugging some odd race condition. And that’s what it’s all about – learn, experience, save time. Thanks Herb.