#507
"Ed Hall"
Dec 2009
Adirondack Mtns
3×7×263 Posts
Just to brag a bit: I actually had a successful factoring session with Colab, using the GPU branch of GMP-ECM for stage 1 and a local machine for stage 2.

It was only a 146-digit number and it took quite a while, but still, it worked! Colab connected me to a T4, which gave me 2560 cores on which I ran stage 1 with the -save option. The local machine "watched" for the residue file, using the tunneling setup by chalsall described elsewhere, and then ran stage 2 with ecm.py by WraithX. A minor session, but it proved the concept.
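For anyone wanting to try the same thing, here's a minimal sketch of what the local "watcher" side could look like. The drop-point path, the B1 bound, the thread count, and the exact ecm.py arguments are illustrative assumptions, not my exact setup:

Code:
#!/usr/bin/env python3
"""Poll for the stage-1 residue file pushed over the tunnel, then
hand it to ecm.py for stage 2. A sketch only: paths, bounds, and
flags below are illustrative assumptions."""
import subprocess
import time
from pathlib import Path

RESIDUES = Path("/tmp/tunnel/residues.txt")  # hypothetical drop point
B1 = "110000000"                             # should match the GPU stage-1 B1

# Wait until the GPU side has delivered its residues.
while not RESIDUES.exists():
    time.sleep(30)

# ecm.py (WraithX) takes GMP-ECM-style arguments plus -threads;
# -resume feeds the saved stage-1 residues into stage 2.
subprocess.run(
    ["python3", "ecm.py", "-threads", "8", "-resume", str(RESIDUES), B1],
    check=True,
)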
#508
"Ed Hall"
Dec 2009
Adirondack Mtns
3·7·263 Posts
GMP-ECM has the option -one to tell ECM to stop after the first factor is found. But when running on a GPU, stage 2 is performed on all the residues from stage 1 instead of stopping when a factor is found. Since GMP-ECM still seems to be single-threaded*, with lots of cores it takes a lot longer than it needs to. I can use separate external programs, such as ecm.py, but that would make my scripts even more complicated.

Any help?

*I had thought at some point that GMP-ECM introduced multi-threading, but I can't find anything about it. Memory fluctuations?
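The workaround I can script in the meantime is roughly what ecm.py does internally: split the saved residue file (one residue per line) into chunks and run one ecm -resume process per chunk, each with -one so it stops at its first factor. A sketch, with illustrative bounds, worker count, and file names:

Code:
#!/usr/bin/env python3
"""Split a GMP-ECM stage-1 residue file across several
'ecm -resume' workers so stage 2 uses more than one core.
Bounds and file names are illustrative assumptions."""
import subprocess
from pathlib import Path

B1, B2 = "110000000", "1e12"  # illustrative stage-2 bounds
N = 8                         # number of CPU workers

lines = Path("residues.txt").read_text().splitlines()
chunks = [lines[i::N] for i in range(N)]  # round-robin split

procs = []
for i, chunk in enumerate(chunks):
    if not chunk:
        continue
    part = Path(f"residues.{i}.txt")
    part.write_text("\n".join(chunk) + "\n")
    # -one makes each worker stop after its first factor
    procs.append(subprocess.Popen(
        ["ecm", "-one", "-resume", str(part), B1, B2]))

for p in procs:
    p.wait()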
#509
"Oliver"
Sep 2017
Porta Westfalica, DE
7·223 Posts
For P-1 stage 2, GMP-ECM can be configured to use OpenMP. Everything else is single-threaded.
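If you want to try it: that path has to be compiled in, and the thread count then comes from the usual OpenMP environment variable. A minimal sketch, assuming a build configured with ./configure --enable-openmp and illustrative bounds:

Code:
#!/usr/bin/env python3
"""Run P-1 with an OpenMP-enabled GMP-ECM build so stage 2 can
use several threads. Assumes './configure --enable-openmp';
bounds and thread count are illustrative."""
import os
import subprocess

env = dict(os.environ, OMP_NUM_THREADS="8")  # threads for P-1 stage 2
with open("number.txt") as f:                # candidate number to factor
    subprocess.run(["ecm", "-pm1", "1e9", "1e14"],
                   stdin=f, env=env, check=True)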
#510
"Ed Hall"
Dec 2009
Adirondack Mtns
5523₁₀ Posts
#511
"Seth"
Apr 2019
17×29 Posts
There's also some code under multiecm.c in gmp-ecm. I've never used it (I prefer WraithX's ecm.py); the header is:
Code:
/* multiecm.c - ECM with many curves with many torsion and/or in parallel
   Author: F. Morain */
#512
"Ed Hall"
Dec 2009
Adirondack Mtns
3×7×263 Posts
#513
Sep 2009
2454₁₀ Posts
#514
"Ed Hall"
Dec 2009
Adirondack Mtns
3×7×263 Posts
That seems familiar! I'm sure that's what I was thinking of. Thanks for finding it!

My next issue is the one you reference. I'm currently sending residues to a second machine while tasking the GPU machine with the next level of B1. But if stage 2 is successful on the second machine, I still need to wait for the GPU to finish its current B1. I've tried pkill ecm, but it doesn't seem to do anything when called.
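One thing I plan to try: pkill matches the short process name unless given -f (which matches the full command line), so perhaps my wrapper is hiding the name. Alternatively, hold the process handle myself and terminate the exact PID when the second machine signals success. A sketch of that, where the factor.flag file is just a made-up convention and the ecm arguments are illustrative:

Code:
#!/usr/bin/env python3
"""Start the GPU stage-1 run under our own control and kill it
as soon as the stage-2 box signals a factor. The flag file and
the ecm arguments are illustrative assumptions."""
import subprocess
import time
from pathlib import Path

FLAG = Path("/tmp/tunnel/factor.flag")  # written by the stage-2 machine

with open("number.txt") as f:
    gpu = subprocess.Popen(
        ["ecm", "-gpu", "-save", "residues.txt",
         "110000000", "1"],             # B2=1 to skip CPU stage 2 here (assumption)
        stdin=f)

while gpu.poll() is None:               # still running?
    if FLAG.exists():
        gpu.terminate()                 # SIGTERM to the exact PID, no name matching
        break
    time.sleep(30)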
#515
Apr 2010
100000110₂ Posts
#516
"Ed Hall"
Dec 2009
Adirondack Mtns
5523₁₀ Posts
#517
"Seth"
Apr 2019
17×29 Posts