mersenneforum.org Synchronization

2006-11-27, 22:15   #1
Prime95
P90 years forever!

Aug 2002
Yeehaw, FL

7,309 Posts

Synchronization

The new multi-threaded carry code must make sure that blocks are processed in order. Before entering the carry code each thread calls a routine that does this:

Code:
/* Wait until the next normalization block to process is the block */
/* that this thread is working on. */
for ( ; ; ) {
	gwmutex_lock (&gwdata->thread_lock);
	if (gwdata->pass1_norm_block == asm_data->this_block) break;
	gwmutex_unlock (&gwdata->thread_lock);
	gwevent_wait (&gwdata->pass1_norm_event, 0);
}

/* We're entering the normalization code.  Reset pass1_norm_event so */
/* that other threads wait for us to finish with the normalization code */
/* before they can continue */
gwevent_reset (&gwdata->pass1_norm_event);
gwmutex_unlock (&gwdata->thread_lock);
The mutex ensures only one thread at a time accesses the common data structure. The gwevent_wait uses Windows manual-reset events; I'm fairly certain there is a Linux pthreads equivalent. The above makes sure that only the thread processing pass1_norm_block enters the carry code. When a thread is done with the carry code it bumps pass1_norm_block and signals pass1_norm_event.

Here is my worry. Suppose we'll do block 2 next (pass1_norm_block is 2) and the threads processing blocks 2 and 3 are waiting to enter the carry code (this_block = 2 and 3). The thread processing block 1 finishes and signals the event. The OS happens to be running some other jobs and has only one CPU available for prime95 right now. The OS decides to schedule the this_block = 3 thread. That thread sees it's not its turn to enter the carry code and waits again. The event, however, is still in the signalled state, so the OS could keep the this_block = 3 thread running in a CPU-bound loop. That loop ends only when the OS happens to schedule the this_block = 2 thread.

Synchronization programming has always given me a headache. Is there a better way to implement this?

Will the OS (Windows, Linux, FreeBSD, etc.) guarantee that it won't go into a CPU-bound loop above?

Last fiddled with by Prime95 on 2006-11-27 at 22:16
2006-11-27, 22:43   #2
retina
Undefined

"The unspeakable one"
Jun 2006
My evil lair

3²·23·29 Posts

Quote:
 Originally Posted by Prime95

Here is my worry: Suppose we'll do block 2 next (pass1_norm_block is 2) and the threads processing blocks 2 and 3 are waiting to enter the carry code (this_block = 2 and 3). The thread processing block 1 finishes and signals the event. The OS happens to be running some other jobs and has only one CPU for prime95 right now. The OS randomly decides to schedule the this_block = 3 thread. The thread sees it's not time for it to enter the carry code and waits again. The event however is still in the signalled state and the OS could keep the this_block = 3 thread running in a CPU bound loop. The CPU bound loop ends only when the OS happens to schedule the this_block = 2 thread.

Synchronization programming has always given me a headache. Is there a better way to implement this? Will the OS (Windows, Linux, FreeBSD, etc.) guarantee that it won't go into a cpu bound loop above?
In my experience with locks (under Win32 only), if a thread locks a mutex (or critical section), checks something, then unlocks and sleeps, I found that the scheduler will try to find another thread in the same process to run during the allocated time slice. I suspect this would solve the particular worry you show above. However, I don't know how Linux (or other OSes) do their thing in this case.

PS: I think you are using only one process. If that is so, then using a mutex under Win32 is a little less efficient than a critical section lock.

2006-11-28, 01:09   #3
Prime95
P90 years forever!

Aug 2002
Yeehaw, FL

7,309 Posts

Thanks retina. I've reimplemented the gwmutex_* routines with critical sections. I added a Sleep (0) to the wait loop, though I'm hoping someone comes up with a superior solution. On Linux, sleep(0) is not guaranteed to yield to other waiting threads, though the current kernel seems to do so.
2006-11-28, 03:38   #4
ColdFury

Aug 2002

2⁶·5 Posts

Quote:
 I've reimplemented the gwmutex_* routines with critical sections
Instead of rolling your own synchronization primitives, just use pthreads. It's available on both Win32 and Linux, and it has already solved all of this.

Last fiddled with by ColdFury on 2006-11-28 at 03:40

2006-11-28, 10:33   #5
retina
Undefined

"The unspeakable one"
Jun 2006
My evil lair

3²·23·29 Posts

On Win32 there is also the fiber (CreateFiber) option. More cumbersome, but it makes it possible to fully control the execution of threads as you please. Although, once again, Linux may not support it.
2006-11-28, 11:19   #6
ColdFury

Aug 2002

101000000₂ Posts

Quote:
 Although, once again, Linux may not support it.
It's a Win32 API, so definitely not.

Using pthreads is really the best solution. You get automatic portability and it's very efficient. You don't have to worry about busy-waiting, because the implementation will handle locks in the most efficient way for the platform. For instance, on Linux, pthreads uses the futex system call to provide very fast locks.

In fact, user-level mutexes should never have to busy-wait at all; the kernel should make sure of this, provided the mutex is implemented correctly.

