mersenneforum.org (https://www.mersenneforum.org/index.php)
-   NFS@Home (https://www.mersenneforum.org/forumdisplay.php?f=98)
-   -   2020 “small” 15e post processing reservations and results (https://www.mersenneforum.org/showthread.php?t=25936)

pinhodecarlos 2020-10-07 09:51

Shall have factors for 7p3_344 tomorrow morning.

pinhodecarlos 2020-10-08 06:58

1 Attachment(s)
[QUOTE=pinhodecarlos;559149]Shall have factors for 7p3_344 tomorrow morning.[/QUOTE]


[CODE]Thu Oct 8 01:31:39 2020 p90 factor: 940178235247332289120926771690327661238990851387961713089990970947156178623557951982496321
Thu Oct 8 01:31:39 2020 p92 factor: 75176320690912731953147427500077633687450135529477862363109762778916193160940823984886406097[/CODE]


[url]https://pastebin.com/2MSRpWD8[/url]

pinhodecarlos 2020-10-08 13:42

3p2_1446M on LA, 5 days.

RichD 2020-10-09 23:00

Taking 63601_53m1.

Dylan14 2020-10-12 14:25

f45_151m1 was factored:

[code]p82 factor: 8333609684301502260065482359988445887380053452775916486741008483855806118032252769
p100 factor: 8210553077808817660641683814864623033993314973732496673187078167513741337455366179837738471320775259[/code]
TD 130 built an 8.72M matrix, which took ~36 hours to solve. Log posted at [url]https://pastebin.com/uPgmi2HN[/url] and [url]https://github.com/Dylan1496/nfs-at-home-logs/blob/master/f45_151m1.log[/url].
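
(For reference, a minimal sketch of how a target density like that is usually passed to msieve's filtering step; the file names here are placeholders, not necessarily what was used for this job:)

[code]
# filtering only (-nc1); target_density trades extra filtering work for a denser, smaller matrix
msieve -v -s f45_151m1.dat -nf f45_151m1.fb -l f45_151m1.log -nc1 "target_density=130"
[/code]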

Taking 428__749_19m1 next.

chris2be8 2020-10-13 05:59

f45_151p1 seems to be broken. I downloaded the relations and set msieve -nc1 going. From relation 82596681 onwards they all seem to get error -15 (I've not read the whole log; it's 72 GB). It got to relation 1232422380 before I killed msieve, as the log was threatening to fill my hard disk.

Can someone please have a look at it? I think it's been sieving the wrong poly or something similar from relation 82596681 onwards.

Chris
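
(As a sanity check before handing a download like this to msieve, something along these lines can flag a corrupted archive early; this is only a sketch, assuming a single gzipped .dat file and the usual "a,b:..." relation format, not what Chris actually ran:)

[code]
# test the gzip stream end-to-end; truncation/corruption is reported here
gzip -t f45_151p1.dat.gz

# rough noise estimate: count decompressed lines that do NOT look like "a,b:..."
gunzip -c f45_151p1.dat.gz | grep -Evc '^-?[0-9]+,[0-9]+:'
[/code]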

swellman 2020-10-13 15:20

[QUOTE=chris2be8;559718]f45_151p1 seems to be broken. I downloaded the relations and set msieve -nc1 going. From relation 82596681 onwards they all seem to get error -15 (I've not read the whole log; it's 72 GB). It got to relation 1232422380 before I killed msieve, as the log was threatening to fill my hard disk.

Can someone please have a look at it? I think it's been sieving the wrong poly or something similar from relation 82596681 onwards.

Chris[/QUOTE]

This is neither the first nor the only case of a data file with a high percentage of noise content. The root cause is unknown to me, but the job suddenly using the wrong poly file seems unlikely. Greg recently commented that “[URL="https://www.mersenneforum.org/showpost.php?p=558891&postcount=5"]sometimes the project gets odd garbage in returned files as we use a forgiving validation check[/URL]”. Maybe Greg will weigh in here.

Chris, have you tried running remdups on the dataset? At least we can get a sense of how many good/unique relations survive filtering without generating a ridiculously large log file.

If you’re using Linux, gzrecover is worth trying.

One other thing - when queueing this job I failed to sieve it on the -a side as [URL="https://www.mersenneforum.org/showpost.php?p=557854&postcount=12"]you requested[/URL]. I don't think that caused this geyser of error messages, but it's still incorrect. [I]Mea culpa[/I].

If possible, in the future please add this line to any poly requiring sieving on the algebraic side:
[CODE]lss: 0[/CODE]
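
(For anyone preparing such requests later, a sketch of where that line sits in a GGNFS-style job/poly file; every value below is a placeholder rather than a real parameter set:)

[CODE]n: <composite to be factored>
skew: <skew from the polynomial search>
c5: <algebraic coefficients, c5 down to c0>
...
c0: <...>
Y1: <rational coefficient>
Y0: <rational coefficient>
lss: 0
rlim: <rational factor base bound>
alim: <algebraic factor base bound>
lpbr: <rational large prime bits>
lpba: <algebraic large prime bits>
mfbr: <rational cofactor bound bits>
mfba: <algebraic cofactor bound bits>
rlambda: <rational lambda>
alambda: <algebraic lambda>[/CODE]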

Worst case, I can submit this job again for sieving on the algebraic side (likely with a smaller Q range) so that you can eventually download and combine data files, filter and build a decent matrix.

chris2be8 2020-10-13 16:13

We have progress:
[code]
chris@sirius:~/factordb/f45_151p1$ gzrecover -p f45_151p1.dat.gz | remdups 1250 -v >f45_151p1.dat.cut2
Starting program at Tue Oct 13 16:55:26 2020
allocated 2621384 bytes for pointers
allocated 3276800000 bytes for arrays
Tue Oct 13 16:55:28 2020 0.5M unique relns 0.00M duplicate relns (+0.00M, avg D/U ratio in block was 0.1%)
Tue Oct 13 16:55:28 2020 1.0M unique relns 0.00M duplicate relns (+0.00M, avg D/U ratio in block was 0.2%)
. . . snip . . .
Tue Oct 13 17:06:15 2020 204.5M unique relns 32.59M duplicate relns (+0.15M, avg D/U ratio in block was 23.7%)
Tue Oct 13 17:06:17 2020 205.0M unique relns 32.75M duplicate relns (+0.16M, avg D/U ratio in block was 24.0%)
Found 205005109 unique, 32748632 duplicate (13.8% of total), and 282825 bad relations.
Largest dimension used: 746 of 1250
Average dimension used: 625.6 of 1250
Terminating program at Tue Oct 13 17:06:17 2020
[/code]

Using gunzip -c instead of gzrecover, it failed at about relation 82M (gzrecover with -v produced so much output that I lost the exact messages). But the key message was:
[code]
chris@sirius:~/factordb/f45_151p1$ gunzip -cv f45_151p1.dat.gz | wc -l
f45_151p1.dat.gz:
gzip: f45_151p1.dat.gz: invalid compressed data--format violated
82596767
[/code]

Now I'll try to build a matrix (205005109 unique relations should be enough).

Chris
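
(For anyone following along, once -nc1 has built the matrix the remaining msieve stages look roughly like this; file names and thread count are placeholders:)

[code]
# linear algebra on the matrix produced by filtering (the multi-day step)
msieve -v -s f45_151p1.dat -l f45_151p1.log -t 4 -nc2

# square root stage, which writes the final factors to the log
msieve -v -s f45_151p1.dat -l f45_151p1.log -nc3
[/code]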

chris2be8 2020-10-13 20:06

And I've built a matrix for f45_151p1. About 62 hours remain, so I should have a result on Friday.

Chris

swellman 2020-10-13 20:56

[QUOTE=chris2be8;559783]And I've built a matrix for f45_151p1. About 62 hours remain, so I should have a result on Friday.

Chris[/QUOTE]

Great news!

pinhodecarlos 2020-10-14 06:55

1 Attachment(s)
[QUOTE=pinhodecarlos;559255]3p2_1446M on LA, 5 days.[/QUOTE]


[CODE]Wed Oct 14 00:16:32 2020 p58 factor: 2525863113019784866485159026033476058673368058897447200161
Wed Oct 14 00:16:32 2020 p64 factor: 4092548937881641137005120020566114770332136530455541914336089917
Wed Oct 14 00:16:32 2020 p68 factor: 21987511569017746397101754644242877125552157934707832399108483908297[/CODE]


[url]https://pastebin.com/3F9suyTx[/url]

