mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   NFS@Home (https://www.mersenneforum.org/forumdisplay.php?f=98)
-   -   2020 “small” 15e post processing reservations and results (https://www.mersenneforum.org/showthread.php?t=25936)

pinhodecarlos 2020-09-07 20:11

2020 “small” 15e post processing reservations and results
 
Taking 7m3_1071L.

pinhodecarlos 2020-09-11 21:28

7m3_1071L with LA underway.

pinhodecarlos 2020-09-11 22:14

Please reserve 8p7_320 and 7p3_344.

swellman 2020-09-13 16:50

Reserving C182_M19_k94 for postprocessing.

Dylan14 2020-09-16 22:09

Reserving 8p3_870M for postprocessing.

pinhodecarlos 2020-09-18 17:27

[QUOTE=pinhodecarlos;556757]7m3_1071L with LA underway.[/QUOTE]

Should have this done in 2 days.

pinhodecarlos 2020-09-20 20:11

1 Attachment(s)
[QUOTE=pinhodecarlos;556366]Taking 7m3_1071L.[/QUOTE]


[CODE]Sun Sep 20 21:04:58 2020 p59 factor: 32259627287265089267855311069399608930082347651491058747619
Sun Sep 20 21:04:58 2020 p173 factor: 29603924782136668094455417188078478568355174391567263676042860780537825582426197864670448511254304441772497192717530441974172149836810175091601787138620879014730598000565531[/CODE]


[url]https://pastebin.com/wJAySPhZ[/url]

pinhodecarlos 2020-09-21 10:14

Forgot to share 8p7_320 is on LA.

VBCurtis 2020-09-21 18:46

5_2_779m factors
 
5*2^779-1 is factored:
[code]p91 factor: 2786439502475124609387033054543157113255332993551540466442074957474066878758986269325512321
p103 factor: 2318225374174092224906045403193747790961851647680912446723516387914025452782142166626536345526687134071
[/code]

Log at [url]https://pastebin.com/G5TgMpZB[/url]

160M raw 31LP relations (136M unique) were enough to build a 5.9M matrix at TD 100. This took 8 hours to solve on a 12-core Haswell-Xeon.

Dylan14 2020-09-24 13:24

Also reserving f45_151m1 for post-processing.

VBCurtis 2020-09-24 21:47

I'll handle postprocessing on 5_2_782m1.

richs 2020-09-25 16:51

Taking 2212657_37m1

pinhodecarlos 2020-09-26 06:44

1 Attachment(s)
[QUOTE=pinhodecarlos;557462]Forgot to share 8p7_320 is on LA.[/QUOTE]


[CODE]Sat Sep 26 06:16:16 2020 p108 factor: 718444299676339976046358538151903759225888478618505665128659138778217042280231477753550658872045234531411841
Sat Sep 26 06:16:16 2020 p114 factor: 155382073690041462452731080227552995970106878994732522275343584821419240189721285246016676042522604589355858866561[/CODE]




[url]https://pastebin.com/6riJHw4X[/url]

pinhodecarlos 2020-09-26 10:10

12 days for 7p3_344.

chris2be8 2020-09-28 15:54

f38_158p1 has built a matrix. About 57 hours remain so I should have a result on Thursday.

Chris

pinhodecarlos 2020-09-28 19:50

Taking 3p2_1446M.

RichD 2020-09-29 00:24

Taking 24571_53m1.

richs 2020-09-30 04:02

[QUOTE=richs;557859]Taking 2212657_37m1[/QUOTE]

2212657_37m1 in LA. ETA 2+ days.

RichD 2020-09-30 14:02

24571_53m1 factored
 
[CODE]p69 factor: 747959920136887643034088789010470661733528918600968871646703508804673
p129 factor: 160465987182349970785371694598829568930536027954590849160194584876089531675928570463273856107918266082168555990844733996150633489[/CODE]
112M unique relations built a 5.1M matrix using TD=132.
Solve time just over 18 hours. (-t 4)

Log at: [url]https://pastebin.com/kcvz9wu0[/url]

VBCurtis 2020-10-01 16:26

Factors for 5_2_782m1
 
[QUOTE=VBCurtis;557793]I'll handle postprocessing on 5_2_782m1.[/QUOTE]

Factors:
[code]p53 factor: 37854558203120151564768330817860451683134767970605069
p175 factor: 1972617552080689279917029408762129141343119004536169039707125591436188593558895151075194741044381811999314102965671213861315012139642069228106507630521233491529321499834443979[/code]

log at [url]https://pastebin.com/BDTUbjQc[/url]

215M raw relations produced 179M unique relations, good for a 4.77M matrix at TD=104. This took about 6 hours to solve on 10 cores of Haswell-Xeon. This was a lucky poly, with a rather low duplicate rate- even 204M raw relations were enough to produce a 5.1M matrix.

Factors reported to mklasson.com; I'll get them to factordb this weekend.

richs 2020-10-02 14:43

2212657_37m1 factored
 
1 Attachment(s)
[QUOTE=richs;558314]2212657_37m1 in LA. ETA 2+ days.[/QUOTE]

[CODE]p92 factor: 52568116820060424516485189265615991248627113060978294658832304287421910868305778213282143713
p127 factor: 4644601264161203214617316115172098106784326025641779317375588017099278960041230185567953517045524949318160105133854925891428603[/CODE]

Approximately 54 hours on 6 threads of a Core i7-10510U with 12 GB memory for a 6.82M matrix at TD=130.

Log attached and at [URL="https://pastebin.com/WRShF7iU"]https://pastebin.com/WRShF7iU[/URL]

Factors added to FDB.

Dylan14 2020-10-03 14:22

8p3_870M was factored:

[code]p73 factor: 1113395195181441232897635316868625150150474892498911434248545375214552261
p120 factor: 658923352204302315568900804053649974309468075340525524052731980292674835437651013256721338336339708234674682238426747461[/code]

TD 110 built a 12.55M matrix (TD 130 and TD 120 failed; I don't have an exact solve time since I started and stopped the processing several times). Log posted at [url]https://pastebin.com/PaTtwwzy[/url] and [url]https://github.com/Dylan1496/nfs-at-home-logs/blob/master/8p3_870M.log[/url].

pinhodecarlos 2020-10-07 09:51

Shall have factors for 7p3_344 tomorrow morning.

pinhodecarlos 2020-10-08 06:58

1 Attachment(s)
[QUOTE=pinhodecarlos;559149]Shall have factors for 7p3_344 tomorrow morning.[/QUOTE]


[CODE]Thu Oct 8 01:31:39 2020 p90 factor: 940178235247332289120926771690327661238990851387961713089990970947156178623557951982496321
Thu Oct 8 01:31:39 2020 p92 factor: 75176320690912731953147427500077633687450135529477862363109762778916193160940823984886406097[/CODE]


[url]https://pastebin.com/2MSRpWD8[/url]

pinhodecarlos 2020-10-08 13:42

3p2_1446M on LA, 5 days.

RichD 2020-10-09 23:00

Taking 63601_53m1.

Dylan14 2020-10-12 14:25

f45_151m1 was factored:

[code]p82 factor: 8333609684301502260065482359988445887380053452775916486741008483855806118032252769
p100 factor: 8210553077808817660641683814864623033993314973732496673187078167513741337455366179837738471320775259[/code]
TD 130 built an 8.72M matrix, which took ~36 hours to solve. Log posted at [url]https://pastebin.com/uPgmi2HN[/url] and [url]https://github.com/Dylan1496/nfs-at-home-logs/blob/master/f45_151m1.log[/url].

Taking 428__749_19m1 next.

chris2be8 2020-10-13 05:59

f45_151p1 seems to be broken. I downloaded the relations and set msieve -nc1 going. From relation 82596681 onwards they all seem to get error -15 (I've not read the whole log; it's 72 GB). It got to relation 1232422380 before I killed msieve; the log was threatening to fill my hard disk.

Can someone please have a look at it? I think it's been sieving the wrong poly, or something similar, from relation 82596681 onwards.

Chris

swellman 2020-10-13 15:20

[QUOTE=chris2be8;559718]f45_151p1 seems to be broken. I downloaded the relations and set msieve -nc1 going. From relation 82596681 onwards they all seem to get error -15 (I've not read the whole log; it's 72 GB). It got to relation 1232422380 before I killed msieve; the log was threatening to fill my hard disk.

Can someone please have a look at it? I think it's been sieving the wrong poly, or something similar, from relation 82596681 onwards.

Chris[/QUOTE]

This is neither the only nor the first case of a data file with a high percentage of noise content. The root cause is unknown to me, but the job suddenly switching to the wrong poly file seems unlikely. Greg recently commented that “[URL="https://www.mersenneforum.org/showpost.php?p=558891&postcount=5"]sometimes the project gets odd garbage in returned files as we use a forgiving validation check[/URL]”. Maybe Greg will weigh in here.

Chris, have you tried running remdups on the dataset? At least we can get a sense of how many good/unique relations survive filtering without generating a ridiculously large log file.
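For anyone curious what that filtering pass does, a remdups-style pass can be sketched in a few lines of Python. This is a hedged illustration of the good/duplicate/bad accounting only, keyed on each relation's leading a,b pair; it is not the actual remdups implementation:

```python
# Hedged sketch (not the real remdups): drop duplicate and malformed
# relation lines, keying each relation on the "a,b" pair before the
# first colon. Returns the surviving lines plus duplicate/bad counts.
def dedup(lines):
    seen, unique, dups, bad = set(), [], 0, 0
    for line in lines:
        head, sep, _ = line.partition(":")
        if not sep:        # no colon at all: count it as a bad relation
            bad += 1
            continue
        if head in seen:
            dups += 1
        else:
            seen.add(head)
            unique.append(line)
    return unique, dups, bad

unique, dups, bad = dedup(["1,2:a:b", "3,4:c:d", "1,2:a:b", "garbage"])
```

Real relation files run to tens of gigabytes, so the real tool hashes into fixed-size tables rather than holding every key in memory; the point here is only the accounting that shows up in the remdups summary lines.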

If you’re using Linux, gzrecover is worth trying.

One other thing - I failed to sieve this on the -a side when queueing it as [URL="https://www.mersenneforum.org/showpost.php?p=557854&postcount=12"]you requested[/URL]. Don’t think that caused this geyser of error messages but it’s still incorrect. [I]Mea culpa[/I].

If possible, in the future please add this line to any poly requiring sieving on the algebraic side
[CODE]lss: 0[/CODE]

Worst case, I can submit this job again for sieving on the algebraic side (likely with a smaller Q range) so that you can eventually download and combine data files, filter and build a decent matrix.

chris2be8 2020-10-13 16:13

We have progress:
[code]
chris@sirius:~/factordb/f45_151p1$ gzrecover -p f45_151p1.dat.gz | remdups 1250 -v >f45_151p1.dat.cut2
Starting program at Tue Oct 13 16:55:26 2020
allocated 2621384 bytes for pointers
allocated 3276800000 bytes for arrays
Tue Oct 13 16:55:28 2020 0.5M unique relns 0.00M duplicate relns (+0.00M, avg D/U ratio in block was 0.1%)
Tue Oct 13 16:55:28 2020 1.0M unique relns 0.00M duplicate relns (+0.00M, avg D/U ratio in block was 0.2%)
. . . snip . . .
Tue Oct 13 17:06:15 2020 204.5M unique relns 32.59M duplicate relns (+0.15M, avg D/U ratio in block was 23.7%)
Tue Oct 13 17:06:17 2020 205.0M unique relns 32.75M duplicate relns (+0.16M, avg D/U ratio in block was 24.0%)
Found 205005109 unique, 32748632 duplicate (13.8% of total), and 282825 bad relations.
Largest dimension used: 746 of 1250
Average dimension used: 625.6 of 1250
Terminating program at Tue Oct 13 17:06:17 2020
[/code]

Using gunzip -c instead of gzrecover it failed at about relation 82M (gzrecover with -v produced so much output I lost the exact messages). But the key message was:
[code]
chris@sirius:~/factordb/f45_151p1$ gunzip -cv f45_151p1.dat.gz | wc -l
f45_151p1.dat.gz:
gzip: f45_151p1.dat.gz: invalid compressed data--format violated
82596767
[/code]
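As a small illustration of why gunzip gave up partway while gzrecover could keep going: a strict decompressor abandons the whole remainder of the stream after one bad byte. A hedged sketch on synthetic data (not the actual relations file):

```python
import gzip

def decompress_or_error(blob):
    """Return (payload, None) on success, or (None, error) on failure."""
    try:
        return gzip.decompress(blob), None
    except Exception as exc:  # zlib.error, BadGzipFile, EOFError, ...
        return None, exc

# Build a fake relations file and flip one byte mid-stream.
lines = b"".join(b"relation %d\n" % i for i in range(10000))
good = gzip.compress(lines)
corrupt = bytearray(good)
corrupt[len(corrupt) // 2] ^= 0xFF

payload, err = decompress_or_error(good)
assert payload == lines and err is None

payload, err = decompress_or_error(bytes(corrupt))
# The strict decoder gives up here; gzrecover-style tools instead skip
# the damaged block and resynchronize, salvaging the later relations.
```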

Now I'll try to build a matrix (205005109 unique relations should be enough).

Chris

chris2be8 2020-10-13 20:06

And I've built a matrix for f45_151p1. About 62 hours remain so I should have a result on Friday.

Chris

swellman 2020-10-13 20:56

[QUOTE=chris2be8;559783]And I've built a matrix for f45_151p1. About 62 hours remain so I should have a result on Friday.

Chris[/QUOTE]

Great news!

pinhodecarlos 2020-10-14 06:55

1 Attachment(s)
[QUOTE=pinhodecarlos;559255]3p2_1446M on LA, 5 days.[/QUOTE]


[CODE]Wed Oct 14 00:16:32 2020 p58 factor: 2525863113019784866485159026033476058673368058897447200161
Wed Oct 14 00:16:32 2020 p64 factor: 4092548937881641137005120020566114770332136530455541914336089917
Wed Oct 14 00:16:32 2020 p68 factor: 21987511569017746397101754644242877125552157934707832399108483908297[/CODE]


[url]https://pastebin.com/3F9suyTx[/url]

RichD 2020-10-14 12:43

Taking 3p2_1446L.

chris2be8 2020-10-14 15:58

[QUOTE=chris2be8;559718]f45_151p1 seems to be broken. I downloaded the relations and set msieve -nc1 going. From relation 82596681 onwards they all seem to get error -15 (I've not read the whole log; it's 72 GB). It got to relation 1232422380 before I killed msieve; the log was threatening to fill my hard disk.

Chris[/QUOTE]

Inserting commas into the number of relations I get 1,232,422,380, which is much larger than the number of relations that should be in the file ([url]https://escatter11.fullerton.edu/nfs/crunching_es.php[/url] says it has 251,784,825 relations). So the zlib code called by msieve must have been confused by the corrupt data in the file; it was probably looping, returning empty records. But now I know how to get around this sort of problem with gzrecover.

Chris

RichD 2020-10-14 19:26

[QUOTE=RichD;559852]Taking 3p2_1446L.[/QUOTE]

Similar problem with this job, but gzrecover got me 195M unique. Not enough to build at TD=120, but there are still 3M+ rels outstanding. I'll try again later after more rels come in.

chris2be8 2020-10-16 15:43

f45_151p1 is done:
[code]
p95 factor: 31472543189852063608351206619774185000278966152980269366831825030893781824906736840787066198347
p154 factor: 2981260391694580775932710417268166930144843052325242034357827561830240677103684864557972346141873612166884930120810769757003210409841682193109422450721023
[/code]

Log at [url]https://pastebin.com/apBaaPp7[/url]. But for no obvious reason I get a warning about "potentially offensive content" when viewing it, which seems a bit strange.

Posted to factordb and myfactors.mooo.com

Chris

jyb 2020-10-22 06:46

Taking 5m2_415.

RichD 2020-10-22 12:38

63601_53m1 factored
 
[CODE]p95 factor: 15939163736021965072303525816670599679763007724791479363456934792792078066493858947700251031579
p130 factor: 4250511210012933936926743211069755279447637601323234114222731823357219205455928560601978913446813642113365006641849201832005639541[/CODE]
200M unique relations built a 15.2M matrix using TD=120. (124 failed)
Solve time about 222 hours. (-t 4)

Log at: [url]https://pastebin.com/FQyk4h9q[/url]

RichD 2020-10-22 12:39

Taking 467_97m1.

RichD 2020-10-25 22:50

Taking 9p8_305.

RichD 2020-10-26 12:28

3p2_1446L factored
 
[CODE]p79 factor: 8216950503934210834225885598887128532598250771152129072532752177866627797484873
p109 factor: 1144520607301647536266379350787362826332654340993988071518885180769389357264530417991183789452305002846481653[/CODE]
207M unique relations built an 11.4M matrix using TD=120. (124 failed)
Solve time about 108 hours. (-t 3)

Log at: [url]https://pastebin.com/0XfrLwKC[/url]

chris2be8 2020-10-27 16:59

f48_148p1 has built a matrix. About 63 hours remain so I should have a result on Friday.

Chris

jyb 2020-10-27 20:48

5m2_415 factored
 
[code]
Tue Oct 27 13:28:05 2020 p64 factor: 1881521115835592636704229207284490834976445908718569894604726391
Tue Oct 27 13:28:05 2020 p149 factor: 35202352282950023796605785187277301222327000597564896671351610556689887560152216278397966441580668537736856326207577521047499142600032974420273040351
[/code]

[url]https://pastebin.com/rwLhUcsL[/url]

RichD 2020-10-28 08:33

467_97m1 factored
 
[CODE]p62 factor: 35763250671830350884599049782557639827540854752266849834323807
p68 factor: 49018024603611474215623129150812214480072978820698174652428722108083
p96 factor: 169988761808750535461267570873115819425868057231137432558034400456298312388524079785316318027097[/CODE]
201M unique relations built a 10.8M matrix using TD=132.
Solve time about 98 hours. (-t 4)

Log at: [url]https://pastebin.com/xPWyD9j4[/url]

chris2be8 2020-10-30 16:58

f48_148p1 is done:
[code]
p71 factor: 21781584803282614013436186382551182461797562077889268493465243209928097
p75 factor: 107022033753823650148038121598552035287647202954618506893196568900691646113
p98 factor: 53848725060195188925589460030472668171025625254460785453024930018814032130964468669152507523098561
[/code]

Log at [url]https://pastebin.com/WKUs84u7[/url] (I get a warning about "potentially offensive content" viewing it. Does anyone else get it?)

Posted to factordb and myfactors.mooo.com

Chris

jyb 2020-10-30 17:07

[QUOTE=chris2be8;561568]f48_148p1 is done:

[snip]

Log at [url]https://pastebin.com/WKUs84u7[/url] (I get a warning about "potentially offensive content" viewing it. Does anyone else get it?)

[/QUOTE]

I see that warning, but it's easy to click through and still see the log. I wonder if the word "report" (part of the communication with FactorDB) is the source of the problem.

jyb 2020-10-30 19:59

Anybody know what's going on with 5_2_791m1? It has had over 15000 work units pending for many hours, with none being received.

swellman 2020-10-30 20:16

[QUOTE=jyb;561581]Anybody know what's going on with 5_2_791m1? It has had over 15000 work units pending for many hours, with none being received.[/QUOTE]

I’ve alerted Greg.

pinhodecarlos 2020-10-30 20:30

Just wait...it might be bunkering for the challenge.

VBCurtis 2020-10-30 20:45

The huge number of pending WUs across the queues is consistent with these challenge BOINCers storing up a ton of work, to be submitted when the challenge window opens. At least, that's my grasp of what Carlos means when he says "bunkering".

When these events happen, we just try to keep WUs available, let the chaos play out, and then add Q to some jobs after the challenge ends because some of the BOINC work doesn't match up (bad relations, I mean).

pinhodecarlos 2020-10-30 20:47

[QUOTE=swellman;561583]I’ve alerted Greg.[/QUOTE]

Please tell him not to do anything. The challenge is underway. I can guarantee you that people are holding work to dump later; this is a fireworks game. Just enjoy the show.

pinhodecarlos 2020-10-30 20:49

Yes, to bunker is to queue and process work, then dump it later within the timeframe of the challenge, down to the last minute. There's a lot of strategy and guesswork in the sprint challenges. Not sure if you guys noticed, but since the middle of the year people have been trying to predict when this NFS challenge would happen, so they were bunkering 5-6 days prior to the project sprint announcement in order to release on the first day of the challenge. WUs have a 7-day limit plus a grace period...

jyb 2020-10-30 21:09

[QUOTE=jyb;561581]Anybody know what's going on with 5_2_791m1? It has had over 15000 work units pending for many hours, with none being received.[/QUOTE]

[QUOTE=pinhodecarlos;561584]Just wait...it might be bunkering for the challenge.[/QUOTE]

Yes, but 13_2_873m1 started being handed out later, and it has already received some work units back. I've been assuming that the large number of pending units was because of this challenge, but I find it very unlikely that one particular number would have no units at all come back just because of that.

pinhodecarlos 2020-10-30 21:26

Just wait then until they are reassigned; WUs have a 7-day limit.

jyb 2020-10-30 21:45

[QUOTE=pinhodecarlos;561602]Just wait then until they are reassigned, wus have a 7 day limit.[/QUOTE]

If I thought there were no returns because of bunkering, then yes, that would be a fine suggestion. My whole point is that I find it very doubtful that out of more than 15000 work units, every single one of them is being held onto for this reason. If that were true, then we wouldn't expect to see any returns from 13_2_873m1 either. Yet there they are.

Shouldn't we consider the possibility that there's an actual problem of some kind with 5_2_791m1 which is preventing work units from being processed correctly on return? It's not like that sort of thing hasn't happened occasionally in the past.

pinhodecarlos 2020-10-30 21:58

[QUOTE=jyb;561606]If I thought there were no returns because of bunkering, then yes, that would be a fine suggestion. My whole point is that I find it very doubtful that out of more than 15000 work units, every single one of them is being held onto for this reason. If that were true, then we wouldn't expect to see any returns from 13_2_873m1 either. Yet there they are.

Shouldn't we consider the possibility that there's an actual problem of some kind with 5_2_791m1 which is preventing work units from being processed correctly on return? It's not like that sort of thing hasn't happened occasionally in the past.[/QUOTE]


I don't see the problem regarding your first paragraph. I know some of those big hitters have enough cores or virtual machines set up to host more than 15,000 WUs in a row.
On the second paragraph, maybe. Just wait, please; let the challenge finish before troubleshooting what's going on. All I am asking for now is that nobody mess around during the challenge window.

VBCurtis 2020-10-30 22:01

The poly is bad: I cut and pasted from my Linux terminal, not noticing that it truncated the n: line, leaving a $ at the end of the visible line.

So, as jyb correctly surmised, those 15000 WUs are bad. :-/

I'll re-post the corrected poly file in the queue management thread.

Charybdis caught this same mistake on my Aliquot C196 to 16e queue thread, and I forgot and did it again after doing 5_784 and 13_873 correctly.

Mea Culpa.
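For what it's worth, a truncated n: line is detectable before any sieving: in a valid poly file, f and g = Y1*x + Y0 must share a root mod n, i.e. f(m) ≡ 0 (mod n) for m = -Y0/Y1 mod n. A hedged Python sketch of such a sanity check (a hypothetical helper, not part of msieve or the NFS@Home tooling):

```python
def poly_consistent(n, f_coeffs, Y0, Y1):
    """Check that f and g = Y1*x + Y0 share a root mod n.

    f_coeffs are c0..cd, lowest degree first. A truncated or mistyped
    n: line makes this check fail immediately.
    """
    m = (-Y0 * pow(Y1, -1, n)) % n  # root of the linear poly mod n
    return sum(c * pow(m, i, n) for i, c in enumerate(f_coeffs)) % n == 0

# Toy example: n = 100^3 + 3 written in base m = 100 gives f = x^3 + 3,
# g = x - 100, so f(100) = n ≡ 0 (mod n).
assert poly_consistent(100**3 + 3, [3, 0, 0, 1], -100, 1)
# Simulate the terminal chopping digits off the n: line:
assert not poly_consistent(100**3, [3, 0, 0, 1], -100, 1)
```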

pinhodecarlos 2020-10-30 22:06

[QUOTE=VBCurtis;561609]The poly is bad: I cut and pasted from my Linux terminal, not noticing that it truncated the n: line, leaving a $ at the end of the visible line.

So, as jyb correctly surmised, those 15000 WUs are bad. :-/

I'll re-post the corrected poly file in the queue management thread.

Charybdis caught this same mistake on my Aliquot C196 to 16e queue thread, and I forgot and did it again after doing 5_784 and 13_873 correctly.

Mea Culpa.[/QUOTE]


This will mess things up for the guys bunkering; they will see all their WUs giving errors. Nevertheless, it is possible to bunker 15,000 WUs, BTW. I can do that with my old laptop by tricking the server into seeing it as a 512-core machine, for example. Glad you managed to detect the issue.

VBCurtis 2020-10-30 22:15

I fully believe that they could grab that many workunits; but, as jyb said, zero coming back while all other numbers still had a trickle returning suggested that investigation was necessary.

The problem was simple to catch, but that doesn't change the status of those 15,000 workunits.

pinhodecarlos 2020-10-30 22:19

Fair enough.

frmky 2020-10-31 03:03

I canceled the WUs on the server. When they next connect to the server, those WUs will be canceled locally.

RichD 2020-11-01 16:24

9p8_305 factored
 
[CODE]p86 factor: 19652183625441417364455036701749227215746653115421156493850613911841785292330548132931
p100 factor: 7074552900597229236192695114691356909702349100114562034686087633821472243730999357334985850406421321[/CODE]
207M unique relations built a 12.0M matrix using TD=120. (124 failed)
Solve time about 114 hours. (-t 3)

Log at: [url]https://pastebin.com/VEnWGs4g[/url]

chris2be8 2020-11-01 17:24

f57_142p1_2nd_try has built a matrix. About 66 hours remain so I should have a result on Wednesday.

Chris

jyb 2020-11-02 19:11

Taking 7p4_302.

chris2be8 2020-11-04 16:41

f57_142p1_2nd_try is done:
[code]
p63 factor: 167552745124874086437111677817378592021218235263605211795460501
p127 factor: 1418723206877549581259386845930718578719177439135569583383952710748369719861830037858937637046396185716684523600568438957459509
[/code]

Log at [url]https://pastebin.com/2BjBtKCn[/url]

Reported to factordb and [url]http://myfactors.mooo.com/[/url]

And f62_139m1 has built a matrix. About 69 hours remain so I should have a result on Saturday.

Chris

VBCurtis 2020-11-06 02:17

5_2_784m1 is done:
[code]p72 factor: 792486262333263266852580611673461399472464874579580769344350954928178731
p113 factor: 87667478780724818772000946978613182918502677733022737650424060500523668082714122070425544279625540252423176111411[/code]

Log at [url]https://pastebin.com/X6N0rDha[/url]

The relation count was a little light for this 31/32-bit job at 193M raw / 160M unique, but a 6.1M matrix built at TD=84.

VBCurtis 2020-11-06 06:41

13_2_873m1 is in linear algebra. 18.4M matrix@TD=110, should take 5 days or so.

chris2be8 2020-11-07 16:40

f62_139m1 is done:
[code]
p79 factor: 1794002212391093481109942470193527269059906327620839422549145847924328278841621
p157 factor: 7907310984525998305201332734839909417347966313667996156840926177448714554289539194383789997490456699970853693800396749958811498769979568348619291185145860713
[/code]

Log at [url]https://pastebin.com/s0CG38Jh[/url]

Reported to factordb and [url]http://myfactors.mooo.com/[/url]

Chris

swellman 2020-11-08 20:22

Reserving 23_193m1.

jyb 2020-11-10 22:59

7p4_302 factored
 
[code]
p76 factor: 2557396306858622536833198950269232920231851223586758960283894639279364999101
p149 factor: 22639248684868954289733233802668424664059153811804739804773401038232525121918262134224744291695632804024963717087595482763937362387463755935382845493
[/code]

[url]https://pastebin.com/27hhfqvi[/url]

VBCurtis 2020-11-11 02:07

13_2_873m1 is factored:
[code]p91 factor: 1264197058378758747278443083127972624527445420617298860114673757412081889080676460471691167
p136 factor: 2641274307471058139661508183510721736349216346991920991864866640353539874057702394176616387549301934317269999288637509439861345230139949[/code]

406M raw 32LP relations were enough to build an 18.4M matrix at TD=110. This took a bit under 5 days to solve on 10 threads of Haswell-Xeon.

Log at [url]https://pastebin.com/Nw0VvHD8[/url]

Dylan14 2020-11-13 23:35

428__749_19m1 was factored:
[code]p65 factor: 87068987214238717276344079692716779463986237652028200858012718259
p68 factor: 64933503260370506846776791166099887878358697080264220966191908254307
p93 factor: 185341622217198422360237167288720038520330857577268512439957314555898253999724271216920018163[/code]

TD 100 produced a 17.34M matrix (TD 130, 120 and 110 failed). Log posted at [url]https://pastebin.com/KrT45Bwb[/url] and [url]https://github.com/Dylan1496/nfs-at-home-logs/blob/master/428__749_19m1.log[/url].

chris2be8 2020-11-19 16:54

f91_127m1 has built a matrix. About 47 hours remain so I should have a result on Saturday.

Chris

VBCurtis 2020-11-20 19:39

5*2^791-1 has factors:
[code]p57 factor: 151880995471420485563260391583287247457005251131509944599
p67 factor: 5080420584742637766232124038334332110925922349448403765063448735173
p102 factor: 144906772795320153308270539610867131801288246035066449539969694523816327427690090512592143476246788159[/code]

210M raw relations yielded 169M unique. TD=110 produced a 6.1M matrix, which took about 11hr to solve on 10 cores of Xeon-Haswell.

Log at [url]https://pastebin.com/TxsBk1ee[/url]

chris2be8 2020-11-21 18:19

f91_127m1 is done:
[code]
p54 factor: 630945082975691656959512602591389470959316859699955789
p147 factor: 112095791171392000596360680411819432053360999591306319256507634583140806480682776028724105893635741875783971200344049072717660935570931051415392781
[/code]

Reported to factordb and [url]http://myfactors.mooo.com/[/url]

Log at [url]https://pastebin.com/AsVT9Rka[/url]

Chris

PS. Is there any news about 23_193m1 ?

swellman 2020-11-21 20:04

[QUOTE]

PS. Is there any news about 23_193m1 ?[/QUOTE]

Yes. The data file is corrupt. I will attempt repairs with gzrecover once my Linux box finishes another job in a day or two.

chris2be8 2020-11-23 16:55

f20_193p1 has built a matrix. About 49 hours remain so I should have a result on Wednesday or Thursday depending on how many square root steps it takes.

Chris

swellman 2020-11-23 22:22

[QUOTE=swellman;563969]Yes. The data file is corrupt. I will attempt repairs with gzrecover once my Linux box finishes another job in a day or two.[/QUOTE]

23_193m1 has been repaired; it lost ~10M rels in the process. 257M of the surviving rels are unique.

Failed to build a matrix with TD of 120, attempting 114 now. I’ll keep dropping TD until this job goes into LA.

swellman 2020-11-24 12:04

23_193m1
 
23_193m1 finally made it into LA @TD=108, ETA 918 hrs (~Jan 1).

unconnected 2020-11-24 12:22

C172_785232_11530.dat.gz is broken, running gzrecover.

VBCurtis 2020-11-24 18:01

[QUOTE=swellman;564193]23_193m1 finally made it into LA @TD=108, ETA 918 hrs (~Jan 1).[/QUOTE]

Sounds like way too big a matrix for an e-small job. Maybe add 10MQ, if you haven't already moved it to post-processing?
Or, we could do some sieving locally. Let's knock off at least a week from that matrix solve-time- I'm game for an experiment about the tradeoff between sieving and matrix here, e.g. does a machine-week of sieving shorten the matrix by a week?

jyb 2020-11-24 18:13

[QUOTE=VBCurtis;564230]Sounds like way too big a matrix for an e-small job. Maybe add 10MQ, if you haven't already moved it to post-processing?
Or, we could do some sieving locally. Let's knock off at least a week from that matrix solve-time- I'm game for an experiment about the tradeoff between sieving and matrix here, e.g. does a machine-week of sieving shorten the matrix by a week?[/QUOTE]

I was actually wondering if the problem went the other way: 257M unique relations sounds like an awful lot for a 31-bit job. Is it possible that this is too heavily oversieved? Can you build a matrix with higher density if you cut down the relations to, e.g., 210M? It's counterintuitive, but I've seen this sort of thing work before.

RichD 2020-11-24 18:26

It is a 31.5 bit job or 31/32 hybrid. I think around 270M unique might be the sweet spot.

See [url]https://www.mersenneforum.org/showpost.php?p=561549&postcount=32[/url].

jyb 2020-11-24 18:43

[QUOTE=RichD;564238]It is a 31.5 bit job or 31/32 hybrid. I think around 270M unique might be the sweet spot.

See [url]https://www.mersenneforum.org/showpost.php?p=561549&postcount=32[/url].[/QUOTE]

Ah, indeed. I hadn't noticed that. Mea culpa.

swellman 2020-11-24 19:19

23_193m1
 
Well, based on results to date, adding 20M+ Q should get this job to 270M uniques. I’ll keep bumping Q if yield degrades along the way but I suspect this job will reach that goal by the weekend.

Then I’ll start at TD=120 and drop by 4 until the job enters LA. We can explore unique rels vs TD and the resulting ETA.
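That drop-by-4 search is mechanical enough to script. A hedged sketch, where `try_build` is a hypothetical callback standing in for an msieve filtering run at a given target density:

```python
def find_working_td(try_build, start=120, step=4, floor=80):
    """Walk target density downward until a matrix builds.

    try_build(td) should return True if filtering at that TD produces
    a matrix. Returns the first working TD, or None if none works
    before hitting the floor.
    """
    td = start
    while td >= floor:
        if try_build(td):
            return td
        td -= step
    return None
```

For example, find_working_td(lambda td: td <= 108) fails at 120, 116 and 112 and returns 108.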

If more rels are warranted then I’ll bump Q some more and repeat the process.

Shall I proceed?

VBCurtis 2020-11-24 19:48

Definitely.
I think I'll do a little 10kQ test-sieve to see how much CPU time the extra sieving takes, to compare to matrix time saved.

The new queues and new organization rules are working nicely, but I remain interested in conserving boinc resources so that we can get more numbers factored. This sort of test helps give us a sense of how much public computation we're trading for our savings in private matrix-solving computation.

swellman 2020-11-24 19:59

For the record, Q range was 20-190M. Just raised the upper end to 210M.

RichD 2020-11-25 17:44

Taking 7489_67m1.

chris2be8 2020-11-25 20:54

f20_193p1 is done:
[code]
p123 factor: 743395129164316869525665577319693817035097284345238034426961878946172470558944727352947390616022972186685505892718904201101
p127 factor: 8041742398366322937161083588696100777859712569255579301447955135030659742669949210483995348991092040964461849900673572243143281
[/code]

Log at [url]https://pastebin.com/UpcLGEkd[/url]

Added to factordb and [url]http://myfactors.mooo.com/[/url]

Chris

unconnected 2020-11-27 21:52

C172_785232_11530 completed: 39 hours for a 7M matrix (TD=140).
[CODE]p95 factor: 91940046764727798958776978304066969326579392672373359442440554774482543811639792023965054565901
p77 factor: 21939798229290874048628197715215699239074246721610961212419318907790122003667
[/CODE]Factors were found on the 9th dependency(!)
[URL]https://pastebin.com/VwBuy0Kd[/URL]

richs 2020-11-28 14:57

Sorry the previous two posts are in the wrong thread. They should have been in the 14e thread.


Edit: I tried to do this. Let me know if I messed it up.

Thanks, Ed!

swellman 2020-11-29 00:44

23_193m1
 
[QUOTE=swellman;564258]For the record, Q range was 20-190M. Just raised the upper end to 210M.[/QUOTE]

Downloaded the data and managed to get it into LA. Details follow:

- 363M (est by NFS@Home)

- 353M raw (after D/L and gzrecover)

- 277M unique

TD of 116 got into LA (120 failed)

ETA ~695 hrs (still stabilizing)

I’ll continue with postprocessing unless someone wants to see what more relations buys.

VBCurtis 2020-11-29 00:51

20MQ is on the order of 4000-4500 core-hours of sieving. It saved you 220 hr of matrix time; if I guess that you're on a 6-core machine, that's a bit worse than a 3:1 ratio. I'm glad the extra relations saved you more than a week, but I'm a bit disappointed the matrix didn't shrink even further!
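Spelling that trade out explicitly (the 918 → 695 hr ETA drop for 23_193m1 and the 6-core guess are both from this thread; nothing else is assumed):

```python
# Core-hours of extra sieving vs. core-hours of matrix time saved.
sieve_core_hours = (4000 + 4500) / 2   # rough cost of the extra 20MQ
matrix_hours_saved = 918 - 695         # ETA drop reported for 23_193m1
cores_guessed = 6                      # guess at the solver machine

matrix_core_hours_saved = matrix_hours_saved * cores_guessed
ratio = sieve_core_hours / matrix_core_hours_saved
print(f"traded ~{ratio:.1f} core-hours of sieving per core-hour saved")
```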

swellman 2020-11-29 01:36

It’s on a quad core plus HT, so 8(ish) threaded.

Just letting it play out now.

jyb 2020-11-29 18:29

Taking 7p5_302.

jyb 2020-12-04 16:36

7+5_302 factored
 
[code]
p82 factor: 2662904328553964805743785730951982862253245773503902750041322049968738541001702673
p121 factor: 3366160535509774257081346403562279255697200313767137740835745419815053211892068808831024036007601105644084961657039552793
[/code]

[url]https://pastebin.com/gXac1AWL[/url]

