#67
Jul 2003
So Cal
2,621 Posts
Quote:
https://gitlab.inria.fr/enge/cm/
https://gitlab.inria.fr/enge/cm/-/co...2b479edd45cd16

Last fiddled with by frmky on 2022-12-14 at 04:02
#68
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
10093₁₀ Posts
#69
Sep 2002
Database er0rr
10706₈ Posts
What would also be good is saving the class number calculations, even if they run to gigabytes, because recomputing them can take several hundred core hours every time one starts stage 1. Reading them back into memory on resumption would take far less time.
Last fiddled with by paulunderwood on 2022-12-14 at 06:40 |
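The save-and-reload idea above can be sketched in C. This is only an illustration of binary checkpointing, not CM's actual mechanism; the file layout (a count followed by raw values) and the function names are assumptions.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical sketch: dump an array of computed class numbers to a
   binary checkpoint file, and read it back when stage 1 resumes.
   The format (count, then raw unsigned longs) is illustrative only. */

static int save_class_numbers(const char *path,
                              const unsigned long *h, size_t n)
{
    FILE *f = fopen(path, "wb");
    if (f == NULL)
        return -1;
    int ok = fwrite(&n, sizeof n, 1, f) == 1
          && fwrite(h, sizeof h[0], n, f) == n;
    fclose(f);
    return ok ? 0 : -1;
}

static unsigned long *load_class_numbers(const char *path, size_t *n)
{
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return NULL;
    unsigned long *h = NULL;
    if (fread(n, sizeof *n, 1, f) == 1) {
        h = malloc(*n * sizeof h[0]);
        if (h != NULL && fread(h, sizeof h[0], *n, f) != *n) {
            free(h);
            h = NULL;
        }
    }
    fclose(f);
    return h;
}
```

Reading a multi-gigabyte file like this is I/O-bound, so it takes minutes rather than the hundreds of core hours the original computation needs.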
#70
Jul 2003
So Cal
2,621 Posts
Quote:
https://gitlab.inria.fr/enge/cm/-/co...56d9c57e5e8454

Edit: For the run I just completed, the class numbers file is 32GB, and the primorials (also optionally saved and loaded) are 11GB.

Last fiddled with by frmky on 2022-12-14 at 06:56
#71
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
10011101101101₂ Posts
Gigabytes are pennies these days. Our disks necessarily total over a petabyte (at a genome sequencing center).

Saving a temp image of a few gigabytes is certainly fine. One could draw the limit at, say, a terabyte or two. Surely the good idea is to start the ecpp.ini file with some limit settings, so everyone could set them for themselves.
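A minimal sketch of what parsing such a limit setting could look like, in C. The file name `ecpp.ini` comes from the post above, but the key name `max_tmp_gb` and the line format are hypothetical, not anything CM actually reads.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of the suggested ecpp.ini limit settings:
   given one line of the file, parse "key=value" (e.g. "max_tmp_gb=2048")
   and fall back to a caller-supplied default when the key does not
   match.  The key name and format are illustrative assumptions. */

static long parse_limit_line(const char *line, const char *key, long dflt)
{
    size_t klen = strlen(key);
    if (strncmp(line, key, klen) == 0 && line[klen] == '=')
        return strtol(line + klen + 1, NULL, 10);
    return dflt;
}
```

With a default in place, users who never touch the file get sensible behavior, while anyone with petabyte-class storage can raise the cap.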
#72
Bamboozled!
"๐บ๐๐ท๐ท๐ญ"
May 2003
Down not across
5²×7×67 Posts
Quote:
#73
Bamboozled!
"๐บ๐๐ท๐ท๐ญ"
May 2003
Down not across
5²·7·67 Posts
Quote:
Here is a GNFS Cado run, only half-way completed and using only two machines sieving. Chicken feed, IOW.

Code:
pcl@horus:~/nums/cado-nfs/work$ du -h
11M     ./client/horus.work
11M     ./client/horus+4.work
40M     ./client/download
11M     ./client/horus+5.work
9.7M    ./client/horus+3.work
11M     ./client/horus+2.work
11M     ./client/horus+6.work
101M    ./client
14G     ./GW3_619.upload
4.7G    ./GW3_619.dup1/0
4.7G    ./GW3_619.dup1/1
9.3G    ./GW3_619.dup1
26G     .
pcl@horus:~/nums/cado-nfs/work$ df -h .
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdc1       917G  235G  636G  27% /home
pcl@horus:~/nums/cado-nfs/work$

Essentially all of that is temporary files, in that the relations are superfluous after the square root computation has finished.
#74
Sep 2002
Database er0rr
2×5²×7×13 Posts
Quote:
Code:
mkdir R_class
export CM_ECPP_TMPDIR="R_class"

Last fiddled with by paulunderwood on 2022-12-15 at 19:57
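On the program side, honoring an environment variable like the `CM_ECPP_TMPDIR` set above boils down to a `getenv` call. This is a generic sketch, not CM's actual code; the fallback to the current directory is an assumption.

```c
#include <stdlib.h>

/* Sketch: resolve the temporary-file directory from the
   CM_ECPP_TMPDIR environment variable, falling back to the current
   directory when it is unset or empty.  The fallback is illustrative. */

static const char *ecpp_tmpdir(void)
{
    const char *dir = getenv("CM_ECPP_TMPDIR");
    return (dir != NULL && dir[0] != '\0') ? dir : ".";
}
```

Keeping the large checkpoint files under a dedicated directory, as in the shell commands above, also makes them easy to clean up after the proof completes.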
#75
Jun 2015
Vallejo, CA/.
3×5×7×11 Posts
The seven largest primes in the ECPP category have all been discovered in the last 10 months (Mar-Dec 2022).
#76
Jul 2003
So Cal
2,621 Posts
I am aware this will soon be exceeded, but 104824^5+5^104824, at 73,269 digits, is prime. Stage 1 took 32 days on 20 24-core computers using GWNUM. Stage 2 took 27 days on 8 20-core computers. A few steps with large prime factors of h took most of the time in stage 2. I will explore the effects of further limiting the largest prime factor of h. Thanks again to Andreas for creating CM and Paul for adding support for GWNUM.
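The quantity being discussed above is the largest prime factor of the class number h, which drives the cost of the slow stage 2 steps. A minimal sketch of computing it by trial division, so that one could imagine skipping discriminants whose h exceeds a chosen bound; the cutoff policy itself is an assumption, not anything CM is confirmed to do this way.

```c
/* Sketch: largest prime factor of h by trial division.  A limit on
   this value could be used to reject expensive discriminants; the
   selection policy here is hypothetical. */

static unsigned long largest_prime_factor(unsigned long h)
{
    unsigned long largest = 1;
    for (unsigned long p = 2; p * p <= h; p++)
        while (h % p == 0) {
            largest = p;   /* record each prime divisor found */
            h /= p;        /* strip it out before moving on */
        }
    return h > 1 ? h : largest;  /* leftover h > 1 is itself prime */
}
```

Trial division is fine here because the class numbers involved are tiny compared to the numbers being proven prime.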
#77
Bamboozled!
"๐บ๐๐ท๐ท๐ญ"
May 2003
Down not across
5²×7×67 Posts
Quote:
Any estimate of when the first 100K-digit prime will be proven?
Thread | Thread Starter | Forum | Replies | Last Post |
ECPP-DJ | danaj | Computer Science & Computational Number Theory | 59 | 2020-10-10 04:57 |
Comparison to ECPP? | carpetpool | PARI/GP | 2 | 2020-03-11 01:07 |
Can I just leave this here? (ECPP) | trhabib | Miscellaneous Math | 6 | 2011-08-19 16:34 |
Looking for ECPP software | nuggetprime | Software | 14 | 2010-03-07 17:09 |
new ECPP article | R. Gerbicz | GMP-ECM | 2 | 2006-09-13 16:24 |