mersenneforum.org  

Old 2014-10-31, 18:08   #1
xilman ("π’‰Ίπ’ŒŒπ’‡·π’†·π’€­", May 2003, Down not across)

A curious asymmetry

I've just spotted something very curious which I can't explain after investigating.

Background: I'm running factMsieve.pl on two machines, one with six cores and the other with eight. The only difference between the two Perl scripts is that one has $NUM_CPUS=6 and the other $NUM_CPUS=8. The six-core machine is running ../factMsieve.pl c738.poly 2 2 & and the other ../factMsieve.pl c738.poly 1 2 &, and of course the configuration files (fb, poly and ini) are identical on each system.

The Perl script correctly allocates interleaved ranges of special-q of the size appropriate for each system, and the ggnfs diagnostic output indicates that each thread on each machine is sieving the correct, non-overlapping range. So all appears to be working perfectly and there should be nothing to worry about.
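The interleaved allocation described above can be sketched as follows; this is a hypothetical illustration of a "client n of m" scheme, not factMsieve.pl's actual code:

```python
# Hypothetical sketch of a "client n of m" special-q allocator: each
# client takes every m-th block of size qstep, so the clients' ranges
# interleave and never overlap.  Illustration only, not factMsieve.pl.
def client_ranges(q0, qstep, client, num_clients, count):
    """Return `count` disjoint [lo, hi) special-q ranges for `client`
    (1-based) out of `num_clients`."""
    ranges = []
    for i in range(count):
        block = i * num_clients + (client - 1)
        lo = q0 + block * qstep
        ranges.append((lo, lo + qstep))
    return ranges

# Client 1 of 2 and client 2 of 2 cover interleaved, disjoint blocks:
print(client_ranges(27700001, 500000, 1, 2, 2))
# [(27700001, 28200001), (28700001, 29200001)]
print(client_ranges(27700001, 500000, 2, 2, 2))
# [(28200001, 28700001), (29200001, 29700001)]
```

Note that the q0/qstep values here are just the ones visible in the logs below; the point is only that the blocks tile the q-line without overlap.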

Except consider this output from the 8-core system:
Code:
=>"cat" spairs.out >> c738.dat
Found 10576089 relations, need at least 92944917 to proceed.
-> Q0=31200001, QSTEP=500000.
-> makeJobFile(): q0=31700000, q1=32200000.
-> makeJobFile(): Adjusted to q0=31700000, q1=32200000.
-> Lattice sieving rational q-values from q=31700000 to 32200000.
=> "../bin//gnfs-lasieve4I14e" -k -o spairs.out.T1 -v -n1 -r c738.job.T1
and compare with
Code:
=>"cat" spairs.out2 >> spairs.add.2
-> Q0=27700001, QSTEP=500000.
-> makeJobFile(): q0=28200000, q1=28700000.
-> makeJobFile(): Adjusted to q0=28200000, q1=28700000.
-> Lattice sieving rational q-values from q=28200000 to 28700000.
=> "..//gnfs-lasieve4I14e" -k -o spairs.out2.T1 -v -n2 -r c738.job.2.T1

# Deletia ...

wc spairs.add.2
 31727093   31727093 3319325527 spairs.add.2
One system has sieved a larger area but has produced only a third as many relations. I am presently at a loss as to how to explain this!
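For what it's worth, the two counts quoted above really are in about a 3:1 ratio; a crude comparison that ignores how much special-q area each file actually covers:

```python
# Rough ratio of the two relation counts quoted above.  This ignores the
# differing special-q areas, so it is only a sanity check on "a third".
rels_first = 10_576_089   # "Found 10576089 relations" in the first log
rels_second = 31_727_093  # line count of spairs.add.2 in the second log
print(round(rels_second / rels_first, 2))  # 3.0
```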

Any ideas?
Old 2014-10-31, 18:23   #2
Batalov ("Serge", Mar 2008)

Different siever binaries?
Quote:
=> "../bin//gnfs-lasieve4I14e" -k -o spairs.out.T1 -v -n1 -r c738.job.T1
and compare with
=> "..//gnfs-lasieve4I14e" -k -o spairs.out2.T1 -v -n2 -r c738.job.2.T1
Old 2014-10-31, 18:41   #3
xilman ("π’‰Ίπ’ŒŒπ’‡·π’†·π’€­", May 2003, Down not across)

Quote:
Originally Posted by Batalov
Different siever binaries?
Not as far as I know. The paths on each machine are different for hysterical reasons. Both are 64-bit sievers, and both were built from source with $CFLAGS appropriate for their architectures (Xeon and Phenom-II). This is the first time I've ever run multi-system multi-core using the client n of m mechanism, but in the past I have run the two machines on the same factorization by hand-choosing disjoint ranges of special-q, without ever seeing this asymmetry.

Still mysterious. Perhaps very close examination of the relations themselves will turn up something.
Old 2014-11-01, 17:01   #4
chris2be8 (Sep 2009)

The screen output from the sievers should be something like:
Code:
gnfs-lasieve4I12e (with asm64): L1_BITS=15, SVN $Revision$
FBsize 52010+0 (deg 5), 63950+0 (deg 1)
total yield: 84447, q=660001 (0.00211 sec/rel)
1501 Special q, 8512 reduction iterations
reports: 242371341->26536062->22669357->5220418->4101959->3636891
Number of relations with k rational and l algebraic primes for (k,l)=:

Total yield: 84447
milliseconds total: Sieve 64420 Sched 0 medsched 41390
TD 35820 (Init 3600, MPQS 9720) Sieve-Change 36340
TD side 0: init/small/medium/large/search: 720 1340 2110 1960 4220
sieve: init/small/medium/large/search: 2850 12670 2370 9580 4270
TD side 1: init/small/medium/large/search: 1150 2520 2570 2270 3400
sieve: init/small/medium/large/search: 3100 13470 2490 9900 3720
Check what version it says it is on the two systems, and check what yield the siever reports. That should tell you whether it's the siever or the script causing the different yield.
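If the screen output has been captured to a file, the version banner and the per-range yields can be pulled out mechanically. A minimal sketch, with patterns matched against the sample output above (real logs may vary slightly):

```python
import re

# Sketch of extracting the siever's version banner and per-range yields
# from captured screen output.  The patterns are based on the sample
# output quoted above; real logs may differ slightly.
def summarize_log(text):
    banner = re.search(r'^gnfs-lasieve\S+ \(.*?\): .*$', text, re.M)
    # Per-range lines look like "total yield: 84447, q=660001 ...";
    # the final "Total yield:" summary line is deliberately excluded
    # (case-sensitive match) to avoid double counting.
    yields = [int(y) for y in re.findall(r'^total yield: (\d+)', text, re.M)]
    return (banner.group(0) if banner else None, sum(yields))

sample = """gnfs-lasieve4I12e (with asm64): L1_BITS=15, SVN $Revision$
FBsize 52010+0 (deg 5), 63950+0 (deg 1)
total yield: 84447, q=660001 (0.00211 sec/rel)
Total yield: 84447"""
print(summarize_log(sample))
# ('gnfs-lasieve4I12e (with asm64): L1_BITS=15, SVN $Revision$', 84447)
```

Running this over logs from both machines would show at a glance whether the binaries and their yields actually differ.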

The 8-core system has a larger Q0, which could produce a lower yield, but probably not three times lower.

Are they both working on their own drives, or on a shared drive on the network? With a shared drive the master system should be gathering relations from both systems.

I've written later versions of factMsieve.pl designed for several systems to work in a shared directory. The biggest benefit is that they can share polynomial searching. But it also lets systems with different speeds easily co-ordinate sieving. I posted them in the factoring projects forum: http://mersenneforum.org/showthread.php?t=15662&page=2

Chris
Old 2014-11-03, 17:22   #5
xilman ("π’‰Ίπ’ŒŒπ’‡·π’†·π’€­", May 2003, Down not across)

The two factMsieve.pl scripts claimed to have found enough relations between them while I was away at the weekend. Filtering indicated 60M duplicates and 29M uniques after 0.7M free relations had been added. Although the distribution of dups hasn't yet been analysed, I strongly suspect the second client of two, the one fired up with 6 cores, is responsible.
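One cheap way to test that suspicion, should further analysis be wanted, is to key each relation on its leading a,b pair and count which input file supplies the repeats. A minimal sketch with made-up file names and toy relation lines:

```python
from collections import Counter

# Sketch of attributing duplicate relations to their source file, keyed
# on the "a,b" prefix of each relation line.  The file names and toy
# relations below are made up for illustration.
def duplicates_by_source(files):
    """files: {name: iterable of relation lines}.  A relation whose
    a,b key has already been seen elsewhere is charged to the file
    that supplied the repeat."""
    first_seen, dup_count = {}, Counter()
    for name, lines in files.items():
        for line in lines:
            key = line.split(':', 1)[0]        # "a,b" prefix
            if key in first_seen:
                dup_count[name] += 1           # repeat: charge this file
            else:
                first_seen[key] = name
    return dup_count

files = {
    'spairs.add.1': ['3,5:aa:bb', '7,2:cc:dd'],
    'spairs.add.2': ['3,5:aa:bb', '9,4:ee:ff'],
}
print(duplicates_by_source(files))  # Counter({'spairs.add.2': 1})
```

A skewed count against one client's files would confirm (or refute) the suspicion without rerunning the filtering step.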

Not sure whether to analyse more deeply or to write it off to experience. To be fair, the script does say that the multi-host mechanism isn't known to be highly reliable.