mersenneforum.org > Factoring Projects > GMP-ECM
2019-11-23, 15:22   #485
storm5510
Quote:
Originally Posted by Karl M Johnson View Post
Windows binary wanted
A "working" 64-bit Windows variant would be nice...
2019-12-20, 16:46   #486
EdH
Does the GPU branch allow for multi-threading stage 2? I can't seem to find anything in the docs.

My Colab sessions only get two Xeon cores, but using both would "double" the throughput for stage 2.
2019-12-20, 17:02   #487
PhilF
Quote:
Originally Posted by EdH View Post
Does the GPU branch allow for multi-threading stage 2? I can't seem to find anything in the docs.

My Colab sessions only get two Xeon cores, but using both would "double" the throughput for stage 2.
I don't think so. Stage 2 is run on the CPU, not the GPU, so I don't think anything about stage 2 gets changed when the program is compiled with the --enable-gpu option.

This is in the readme.gpu file:

Quote:
It will compute step 1 on the GPU, and then perform step 2 on the CPU (not in parallel).
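For illustration, the two phases can be driven separately (a minimal sketch, not from the readme; N, the B1 value, and the save-file name are placeholders, and the options are the -gpu/-save/-resume flags used elsewhere in this thread):
Code:
# Sketch: stage 1 on the GPU with stage 2 skipped (B2=0), saving residues,
# then stage 2 on the CPU by resuming the save file.
echo "$N" | ./ecm -gpu -save stage1.save 11e7 0   # stage 1 only, on the GPU
./ecm -resume stage1.save 11e7                    # stage 2 on the CPU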
2019-12-20, 17:38   #488
EdH
Quote:
Originally Posted by PhilF View Post
I don't think so. Stage 2 is run on the CPU, not the GPU, so I don't think anything about stage 2 gets changed when the program is compiled with the --enable-gpu option.

This is in the readme.gpu file:
Yeah, I saw that, but I thought I had read somewhere that the latest version had multi-threading. Even if I can't use both CPU cores in conjunction with a GPU run, I could run stage 1 with B2=0 and save the residues, then rerun ECM with both threads against the residue file.

I may have to explore ecm.py and see if there is a way I can both run the GPU branch and keep the CPU filled for stage 2...
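Something like this two-core split might work (a minimal sketch, assuming GNU split and a residue file produced by a -save run; all file names are placeholders):
Code:
# Hypothetical sketch: split the stage-1 residues in two and resume each half in parallel.
split -n l/2 residues.save part.     # two pieces, split on line boundaries (GNU split)
for f in part.*; do
  ecm -resume "$f" 11e7 &            # one stage-2 process per core
done
wait                                 # let both stage-2 runs finish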
2019-12-21, 16:32   #489
chris2be8
Quote:
Originally Posted by EdH View Post
Does the GPU branch allow for multi-threading stage 2? I can't seem to find anything in the docs.
You need to script it. Basically, run stage 1 on the GPU, saving its residues to a file, split that file into as many pieces as you have CPU cores, then run an ecm task for each piece.

My latest script (not yet fully tested) is:
Code:
#!/bin/bash

# Script to run ecm on 2 or more cores against the number in $NAME.poly or $NAME.n aided by the gpu doing stage 1.
# It's intended to be called from factMsieve.factordb.pl which searches the logs for factors.

# The GPU can do stage 1 in about 1/2 the time the CPU takes to do stage 2 on one core.

# It expects 5 parms: the filename prefix, the log suffix, the B1 to resume, the B1 for the GPU to use, and the number of cores to use.
# The .ini file should have already been created by the caller

#set -x

NAME=$1
LEN=$2
OLDB1=$3
NEWB1=$4
CORES=$5

INI=$NAME.ini
if [[ ! -f $INI ]]; then echo "Can't find .ini file"; exit;fi
if [[ -z $LEN ]]; then echo "Can't tell what to call the log";exit;fi
if [[ -z $OLDB1 ]]; then echo "Can't tell previous B1 to use";exit;fi
if [[ -z $NEWB1 ]]; then echo "Can't tell what B1 to use";exit;fi
if [[ -z $CORES ]]; then echo "Can't tell how many cores to use";exit;fi

SAVE=$NAME.save
if [[ ! -f $SAVE ]]; then echo "Can't find save file from last run"; exit;fi

LOG=$NAME.ecm$LEN.log

# First split the save file from the previous run and start running them, followed by standard ecm until the GPU has finished.
# /home/chris/ecm-6.4.4/ecm was compiled with --enable-shellcmd to make it accept -idlecmd.
date "+  %c ecm to $LEN digits starts now" >> $LOG 

rm -f save.*   # -f: don't complain if no old pieces exist
split -nr/$CORES $NAME.save save.
rm $NAME.save
for FILE in save.*
 do
  date "+  %c ecm stage 2 with B1=$OLDB1 starts now"  >> $NAME.ecm$LEN.$FILE.log
  (nice -n 19 /home/chris/ecm-gpu/trunk/ecm -resume $FILE $OLDB1;nice -n 19 /home/chris/ecm-6.4.4/ecm -c 999 -idlecmd 'ps -ef | grep -q [-]save' -n $NEWB1 <$INI )  | tee -a $NAME.ecm$LEN.$FILE.log | grep actor &
 done

# Now start running stage 1 on the gpu
/home/chris/ecm.2741/trunk/ecm -gpu -save $NAME.save $NEWB1 1 <$INI | tee -a $LOG | grep actor
date "+  %c ecm to $LEN digits stage 1 ended" >> $LOG
wait # for previous ecm's to finish

date "+  %c Finished" | tee -a $NAME.ecm$LEN.save.* >> $LOG

grep -q 'Factor found' $LOG $NAME.ecm$LEN.save.* # Check if we found a factor
exit $? # And pass RC back to caller
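For reference, a call might look like this (hypothetical values and script name; the five positional parms are as documented in the script's header):
Code:
# Hypothetical invocation: prefix C145, log suffix t50, resume residues done at
# B1=11e7, run new GPU curves at B1=26e7, and use 4 CPU cores for stage 2.
./gpu-ecm.sh C145 t50 11e7 26e7 4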
But I've never used Colab, so I don't know how to run things on it.

Chris
2019-12-22, 18:50   #490
EdH
Quote:
Originally Posted by chris2be8 View Post
You need to script it. Basically, run stage 1 on the GPU, saving its residues to a file, split that file into as many pieces as you have CPU cores, then run an ecm task for each piece. ...
Thanks! I'm looking it over to see how I can incorporate some of the calls. I'm bouncing around between an awful lot of things ATM, which is probably causing some of my difficulties.
2020-01-04, 11:55   #491
Fan Ming
GPU-ECM for CC2.0

Would anyone who has the Windows development toolkits set up be interested in compiling a Windows binary of a relatively new version (for example, 7.0.4-dev, 7.0.4, or 7.0.5-dev) of GPU-ECM for a CC 2.0 card? It would be good to run it on old notebooks. Thanks.
2020-04-13, 12:28   #492
EdH
Revisions >3076 of Dev No Longer Work With CUDA 10.x Due To "unnamed structs/unions" in cuda.h

It was recently reported to me that my GMP-ECM-GPU branch instructions for a Colab session no longer work. In verifying the trouble, I too received the following during compilation:
Code:
configure: Using cuda.h from /usr/local/cuda-10.0/targets/x86_64-linux/include
checking cuda.h usability... no
checking cuda.h presence... yes
configure: WARNING: cuda.h: present but cannot be compiled
configure: WARNING: cuda.h:     check for missing prerequisite headers?
configure: WARNING: cuda.h: see the Autoconf documentation
configure: WARNING: cuda.h:     section "Present But Cannot Be Compiled"
configure: WARNING: cuda.h: proceeding with the compiler's result
configure: WARNING:     ## ------------------------------------------------ ##
configure: WARNING:     ## Report this to ecm-discuss@lists.gforge.inria.fr ##
configure: WARNING:     ## ------------------------------------------------ ##
checking for cuda.h... no
configure: error: required header file missing
Makefile:807: recipe for target 'config.status' failed
make: *** [config.status] Error 1
Further research, at the ECM Team's request, showed the following from config.log:
Code:
In file included from conftest.c:127:0:
/usr/local/cuda-10.0/targets/x86_64-linux/include/cuda.h:432:10:  warning: ISO C99 doesn't support unnamed structs/unions [-Wpedantic]
         };
          ^
/usr/local/cuda-10.0/targets/x86_64-linux/include/cuda.h:442:10:  warning: ISO C99 doesn't support unnamed structs/unions [-Wpedantic]
         };
          ^
configure:15232: $? = 0
configure: failed program was:
| /* confdefs.h */
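For anyone who wants to reproduce configure's header check by hand, something like this shows the same warnings (a sketch; the include path is copied from the log above, and whether the warnings become hard errors depends on the flags configure added, e.g. -pedantic-errors):
Code:
# Re-run configure's cuda.h compile test manually (include path from the log above).
cat > conftest.c <<'EOF'
#include <cuda.h>
int main(void) { return 0; }
EOF
gcc -std=c99 -Wpedantic \
    -I/usr/local/cuda-10.0/targets/x86_64-linux/include -c conftest.c
# Emits "ISO C99 doesn't support unnamed structs/unions"; with -pedantic-errors
# the same test fails outright, matching "present but cannot be compiled".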
2020-04-13, 16:35   #493
EdH
On the off-chance I could solve this simply by adding a name to the unions referenced above, I tried:
Code:
union noname {
But alas, no joy:
Code:
| #include <cuda.h>
configure:15308: result: no
configure:15308: checking cuda.h presence
configure:15308: x86_64-linux-gnu-gcc -E -I/usr/local/cuda-10.0/targets/x86_64-linux/include -I/usr/local//include -I/usr/local//include conftest.c
configure:15308: $? = 0
configure:15308: result: yes
configure:15308: WARNING: cuda.h: present but cannot be compiled
configure:15308: WARNING: cuda.h:     check for missing prerequisite headers?
configure:15308: WARNING: cuda.h: see the Autoconf documentation
configure:15308: WARNING: cuda.h:     section "Present But Cannot Be Compiled"
configure:15308: WARNING: cuda.h: proceeding with the compiler's result
configure:15308: checking for cuda.h
configure:15308: result: no
configure:15315: error: required header file missing
2020-04-19, 13:59   #494
EdH
Quote:
Originally Posted by EdH View Post
It was recently reported to me that my GMP-ECM-GPU branch instructions for a Colab session no longer work. ...
GMP-ECM has been updated to revision 3081 and this is now working in my Colab instances.

"Thanks!" go out to the GMP-ECM Team.
2020-08-09, 13:11   #495
RichD

I have had mixed results using ECM-GPU on Colab. Not that Colab is the problem; it may be the way I am using it. I run sets of 1024 curves at B1=11e7 on the GPU, then transfer the results file to my local system to run stage 2. I've noticed the sigmas are generated consecutively. Is that enough variety, or should I break it down and run twice as many sets of 512 curves each?

Running three sets of 1024 curves at 11e7 failed to find a p43. Another run of two sets of 1024 at 11e7 failed to find a p46. Lastly, the very first set of 1024 curves at 11e7 found a p53.

On the GPU I perform:
Code:
echo <number> | ecm -v -save Cxxx.txt -gpu -gpucurves 1024 11e7
After transferring the 1024-line result file, I break it into four pieces using “head” and “tail” (a concrete split is sketched below). Then each 256-line file is run by:
Code:
ecm -resume Cxxxy.txt -one 11e7
where y is a suffix from a to d representing the four smaller files.
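For concreteness, that head/tail split might look like this (file names follow the Cxxx/Cxxxy pattern above; the line counts assume exactly 1024 residues):
Code:
# Split a 1024-line save file into four 256-line pieces with head and tail.
head -n 256 Cxxx.txt                > Cxxxa.txt
head -n 512 Cxxx.txt | tail -n 256  > Cxxxb.txt
head -n 768 Cxxx.txt | tail -n 256  > Cxxxc.txt
tail -n 256 Cxxx.txt                > Cxxxd.txt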

The p53 may be a lucky hit, but the p43 and p46 are big-time misses. Should I run more of the smaller sets to get a better “spread” of sigmas?