mersenneforum.org  

Old 2021-02-25, 20:25   #34
Prime95

Quote:
Originally Posted by PhilF
I didn't know that! So, would one just manually reserve an LL-DC exponent, PRP test it, and then manually submit the result?
Yes, get an LL-DC assignment and then PRP it. Upload your PRP result and proof file as you normally would (either by prime95 or gpuowl's python script).
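For example, the hand edit of the worktodo entry looks like this (the assignment ID, exponent, and factored-to bits are placeholders; the trailing PRP fields are factored-to bits, tests saved, PRP base 3, and residue type 1, per the standard worktodo format, as in the PRP= line quoted later in this thread):
Code:
DoubleCheck=<aid>,58834309,76,1
becomes
PRP=<aid>,1,2,58834309,-1,76,0,3,1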
Old 2021-02-25, 20:27   #35
kriesel

Quote:
Originally Posted by tdulcet
All the Tesla GPUs on Colab have ECC memory enabled, so Jacobi and Gerbicz error checking is not needed. You can see this from the ECC Support? line near the top of the CUDALucas output. Adding Jacobi error checking to CUDALucas is listed in the Contributing section of the main README, but it would have no effect on Colab.
To support George's statement that while ECC RAM helps, it does not make a hardware/software system error-immune, here is an existence proof of a CUDALucas error despite ECC RAM. The following is from my own bad-residue log.
Code:
2020-05-12 bad ll 122743793 manual, condor quadro 2000; diverges May 1 2020 after 107M 87.1% by 108M 88%. run was almost 3 months Feb 21 to May 11. Roundoff error was a very comfortable 0.12; no error messages in the logged console output.
CUDALucas v2.06 log excerpts:
|  May 01  04:57:06  | M122743793 107000000  0x0f01f93746501744  |  6912K  0.10742  55.9884  559.88s  |  10:04:52:44  87.17%  |
ok to here
bad from here 
|  May 01  20:30:15  | M122743793 108000000  0x9b21e398524e0ebe  |  6912K  0.11475  55.9881  559.88s  |   9:13:19:29  87.98%  |
see https://mersenneforum.org/showpost.p...69&postcount=9 for interim residues from a matched run
Condor is a dual-Xeon HP Z600 workstation with ECC system RAM. Quadro 2000 GPUs have ECC GPU RAM. The GPU is mounted directly in the PCIe socket (no extender involved). I think ECC RAM protects against memory errors, but not against certain firmware bugs (Pentium FDIV, anyone?), PCIe bus transmission errors, excessive FFT roundoff error, coding errors in the GPU application or the libraries it may call, etc.

Quote:
Note that the latest version of GpuOwl is v7.2, although it no longer supports any LL tests or the Jacobi error check.
I'm well aware.
Quote:
This would add a lot of complexity to our GPU notebook, if it were to support GpuOwl, as it would have to download and build both v6 and v7 to support both LL DC and PRP tests respectively, and then someone would have to write a wrapper to run the correct version based on the next assignment in the worktodo file.
Or run ~v6.11-364 with separate P-1 and PRP tasks. That would still outperform CUDALucas by about 2:1 overall. The marginal utility of a first LL primality test is about zero: it necessitates either a lengthy LL DC or a PRP/GEC/proof run, and the latter will finish faster than the CUDALucas LL or LL DC on the same hardware. I know of no reason to believe that the combined efforts of Mihai Preda, George Woltman, and many others (including lots of testers) to make gpuowl's FFTs efficient and reliable at the various lengths succeeded on all documented GPU models capable of running it, yet somehow delivered less than half of CUDALucas's iterations per second on exactly the Tesla models Colab happens to offer, for which timings are not yet posted at your specified FFT length. (Such an FFT length is not implemented in gpuowl!) But I have queued up a PRP task for the T4. It uses an older, slower gpuowl version, but should still make the point that gpuowl PRP/proof on a free Colab T4 beats CUDALucas LL & LL DC & occasional LL TC on the same hardware.

As far as I know, no version of gpuowl has a 6272K FFT transform. But a relatively recent version has higher reach with the 6M transform; here, for v7.2-53 and similar, is an excerpt from the help output:
Code:
FFT    6M [ 37.75M -  116.51M]  1K:12:256 1K:6:512 1K:3:1K 256:12:1K 512:12:512 512:6:1K 4K:3:256
FFT 6.50M [ 40.89M -  125.95M]  1K:13:256 256:13:1K 512:13:512
For PRP=(aid),1,2,115545511,-1,78,0,3,1
and a 7M FFT, the older gpuowl version (probably one of Fan Ming's compiles; Google Drive file date Jan 21, 2020) is producing iteration times of 7.75-8.46 ms/iter on a T4. An old, less optimized version of gpuowl, running a longer FFT length, is still a little faster than the Colab CUDALucas T4 timings posted recently:
Quote:
Tesla T4: 7.95 - 8.48 ms/iter
UTC time stamped Colab gpuowl log excerpt:
Code:
2021-02-25 18:27:24 config.txt: -user kriesel -cpu colab/TeslaT4 -yield -maxAlloc 15000 -use NO_ASM
2021-02-25 18:27:25 config.txt: 
2021-02-25 18:27:25 colab/TeslaT4 115545511 FFT 7168K: Width 256x4, Height 64x8, Middle 7; 15.74 bits/word
2021-02-25 18:27:26 colab/TeslaT4 OpenCL args "-DEXP=115545511u -DWIDTH=1024u -DSMALL_HEIGHT=512u -DMIDDLE=7u -DWEIGHT_STEP=0x1.322aaa7d291efp+0 -DIWEIGHT_STEP=0x1.ac1b50a86d588p-1 -DWEIGHT_BIGSTEP=0x1.306fe0a31b715p+0 -DIWEIGHT_BIGSTEP=0x1.ae89f995ad3adp-1 -DNO_ASM=1  -I. -cl-fast-relaxed-math -cl-std=CL2.0"
2021-02-25 18:27:28 colab/TeslaT4 

2021-02-25 18:27:28 colab/TeslaT4 OpenCL compilation in 2109 ms
2021-02-25 18:27:46 colab/TeslaT4 115545511 OK     1000   0.00%; 7753 us/sq; ETA 10d 08:50; 947a2638dcd5659d (check 4.25s)
2021-02-25 18:34:34 colab/TeslaT4 115545511       50000   0.04%; 8324 us/sq; ETA 11d 03:03; 2abe8c5a456c9248
2021-02-25 18:40:42 colab/TeslaT4 Stopping, please wait..
2021-02-25 18:40:47 colab/TeslaT4 115545511 OK    93500   0.08%; 8455 us/sq; ETA 11d 07:09; 94321be129778fdc (check 4.62s)
2021-02-25 18:40:47 colab/TeslaT4 Exiting because "stop requested"
2021-02-25 18:40:47 colab/TeslaT4 Bye
2021-02-25 18:48:30 config.txt: -user kriesel -cpu colab/TeslaT4 -yield -maxAlloc 15000 -use NO_ASM
2021-02-25 18:48:30 config.txt: 
2021-02-25 18:48:30 colab/TeslaT4 115545511 FFT 7168K: Width 256x4, Height 64x8, Middle 7; 15.74 bits/word
2021-02-25 18:48:30 colab/TeslaT4 OpenCL args "-DEXP=115545511u -DWIDTH=1024u -DSMALL_HEIGHT=512u -DMIDDLE=7u -DWEIGHT_STEP=0x1.322aaa7d291efp+0 -DIWEIGHT_STEP=0x1.ac1b50a86d588p-1 -DWEIGHT_BIGSTEP=0x1.306fe0a31b715p+0 -DIWEIGHT_BIGSTEP=0x1.ae89f995ad3adp-1 -DNO_ASM=1  -I. -cl-fast-relaxed-math -cl-std=CL2.0"
2021-02-25 18:48:30 colab/TeslaT4 

2021-02-25 18:48:30 colab/TeslaT4 OpenCL compilation in 5 ms
2021-02-25 18:48:49 colab/TeslaT4 115545511 OK    94500   0.08%; 7770 us/sq; ETA 10d 09:11; 802418424467173d (check 4.22s)
2021-02-25 18:49:32 colab/TeslaT4 115545511      100000   0.09%; 7829 us/sq; ETA 10d 11:03; eec0fc882a58923c
2021-02-25 18:56:30 colab/TeslaT4 115545511      150000   0.13%; 8346 us/sq; ETA 11d 03:31; 857fa1746622daba
2021-02-25 19:03:32 colab/TeslaT4 115545511      200000   0.17%; 8442 us/sq; ETA 11d 06:29; 07065de43d5d6667
2021-02-25 19:10:39 colab/TeslaT4 115545511 OK   250000   0.22%; 8445 us/sq; ETA 11d 06:29; a491206a633e11cd (check 4.58s)
2021-02-25 19:17:41 colab/TeslaT4 115545511      300000   0.26%; 8450 us/sq; ETA 11d 06:31; 7dd17f25c99a3c46
2021-02-25 19:18:11 colab/TeslaT4 Stopping, please wait..
2021-02-25 19:18:15 colab/TeslaT4 115545511 OK   303500   0.26%; 8452 us/sq; ETA 11d 06:33; 6154addd71f541a2 (check 4.56s)
2021-02-25 19:18:15 colab/TeslaT4 Exiting because "stop requested"
2021-02-25 19:18:15 colab/TeslaT4 Bye
Judging by executable file size and date, this was produced by gpuowl v6.11-11 from November 2019. There's been a lot of optimization since, which would be better represented by, say, v6.11-366: https://www.mersenneforum.org/showpo...postcount=1020
And v6.11-366 would run 115M at the 6M FFT length, per https://www.mersenneforum.org/showpo...36&postcount=9, picking up additional speed from the shorter FFT.
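As a quick check on those reach figures: gpuowl's logged bits/word is simply the exponent divided by the FFT length in words (the 15.74 in the 7168K log excerpt above matches). A small Python sketch; the ~18.5 bits/word ceiling for the 6M transform is inferred from the 116.51M reach in the help excerpt, not a documented constant:
Code:
exponent = 115_545_511
for fft_kwords in (6 * 1024, 6656, 7 * 1024):  # 6M, 6.5M, 7M FFTs, in K-words
    words = fft_kwords * 1024
    print(f"{fft_kwords}K: {exponent / words:.2f} bits/word")
# 6M gives ~18.37 bits/word, just under the ~18.52 implied by the
# 116.51M upper reach quoted above, so 115.5M fits at 6M.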

Quote:
When our extension is set to automatically run the first cell of the notebook (disabled by default), it will check if the cell is running every minute by default. This is configurable, but I would not recommend that users use a value less than one minute to prevent Google from thinking they/we are DoSing their servers.
There's also the possibility that the smart folks at Google are, for now, tolerating activity that is not really allowed under their offer of free use, as a learning exercise; if it becomes a concern to them later, it could morph into a software arms race. I've seen a recent drop in CPU-only session duration, from the 12-hour maximum I was getting to under 7 hours lately.

Bot: "a computer program that performs automatic repetitive tasks" https://www.merriam-webster.com/dictionary/bot
That seems to me to match the behavior you described for your software: reactivating tabs at regular intervals, dismissing prompts when they appear, etc. I don't mean to be dismissive, pejorative, or otherwise negative, but I'm also not ignoring the details of how the provider wants the service to be used.

Gpuowl does indeed support lower proof powers. (Confirmed by both source-code inspection and a short test run on a small exponent.) I'm not sure how low a power the PrimeNet server and verification process support. Please use a reasonably high proof power for efficiency: each reduction of proof power by one doubles the verification effort.
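To put rough numbers on that doubling, here is a sketch assuming certification costs on the order of exponent / 2^power squarings (my understanding of the proof scheme, not an official figure):
Code:
exponent = 115_545_511
for power in (9, 8, 7, 6):
    # approximate CERT squarings needed to verify a proof of this power
    print(f"proof power {power}: ~{exponent // 2**power:,} squarings")
# Each step down in proof power doubles the certification work.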

Last fiddled with by kriesel on 2021-02-25 at 21:19
Old 2021-02-26, 03:15   #36
danc2

Bot and Ethics Discussion
Quote:
Bot: "a computer program that performs automatic repetitive tasks"
By this definition of a bot, one must also realize that the GIMPS project itself is a bot and, by extension, every computer program running on Google Colab that performs some repetitive task (which is many computer programs) is allegedly violating Google's terms of service...

However, I think it's important to note that "bot" is not mentioned anywhere in the terms of service as being discouraged (only crypto mining). Please, anyone, quote the terms of service here to dispel any misinformation I am spouting if I am missing this.

Consider the following when deciding whether an extension breaches an ethical boundary:
✅ Colab is for research use (translation: should be used for research purposes, as often as is allowed)
✅ Colab is unique in that its hardware is not always available (translation: Google wants people to use their machines for research, but also wants to make a higher ROI whenever possible. Reconnecting & auto-starting does not go against this goal.).
✅ Colab has not banned or made public mention of any of the extensions that exist thus far to automatically reconnect or run Colab notebooks (to my knowledge) (translation: Google is unaware, lazy, collecting data for some grand purpose, or does not care about automatic use of its machines so long as they are used for research purposes)
✅ Colab, if they are doing an experiment/collecting data of some kind, should be thankful to someone who made an extension as they are getting free data. (translation: Google is happy whether an extension is violating an unwritten rule or not)
✅ Colab has never replied to Chris's requests to confirm whether running the GPU72 project is okay (translation: they likely do not care)

GpuOwl
All this info on GpuOwl is really intriguing. It sounds like GpuOwl is the preferred way to go for some people. I wonder how long it would take to add GpuOwl to this project; maybe not that long. The GpuOwl vs. CUDALucas performance comparison is interesting, but since we already have CUDALucas implemented as the cruncher for GPU work, we could add GpuOwl alongside it (as opposed to swapping it out) and give users the ability to decide which cruncher to use (I have not talked to Teal about that yet, though we already said we wanted to add GpuOwl). This would also be nice for testing, as we would be comparing on Colab machines with identical GPUs and a presumably identical environment.

If we didn't make it clear before in the README or elsewhere in the forum: we would love to use GpuOwl. All that constrains us is time and resources, as we both have jobs and other projects. We want our project to be used by as many people as possible and, one day, to find a prime number (or more) and maybe even be on the front page of mersenne.org. If anyone wants to be involved in an upgrade from CUDALucas to GpuOwl, please contact us here or elsewhere.

Last fiddled with by danc2 on 2021-02-26 at 03:17
Old 2021-02-26, 12:21   #37
kriesel

Quote:
Originally Posted by Prime95
Yes, get an LL-DC assignment and then PRP it. Upload your PRP result and proof file as you normally would (either by prime95 or gpuowl's python script).
Manual submission for the gpuowl case yielded this response on https://www.mersenne.org/manual_result/:
Code:
processing: PRP (not-prime) for M58834309
Result  type (150=PRP_COMPOSITE) inappropriate for the assignment type  (101=LL_PRIME). Processing result but not deleting assignment.
CPU credit is 124.6960 GHz-days.
https://www.mersenne.org/report_exponent/?exp_lo=58834309&full=1 shows the LL DC assignment marked as expired the day after it was issued.


Quote:
Originally Posted by danc2
Bot and Ethics Discussion
I believe the posted points are expressly contradicted, or made irrelevant, by Colab's occasional output of the attachment shown in https://mersenneforum.org/showpost.p...2&postcount=28

And "we're not the only ones doing it or providing a tool for it" is not a credible defense for something Google does not allow. Google offers Colab for interactive use of notebooks specifically by humans, and not by interactive use by programs/robots. Who or what runs the notebook is the distinction I think they are making.

Last fiddled with by kriesel on 2021-02-26 at 12:54
Old 2021-02-26, 17:18   #38
chalsall ("Chris Halsall")

Quote:
Originally Posted by danc2
Please, anyone, quote the terms of service here to dispel any misinformation I am spouting if I am missing this. ...

✅ Colab has never replied to Chris's requests to confirm whether running the GPU72 project is okay (translation: they likely do not care)
While it is true that Colab has never replied to any of my (multiple) or anyone else's (multiple) attempts to reach them, that's not really unexpected. Google is famous for not engaging in "Human-to-Human" interaction unless you're spending a *lot* of money with them. Usually, we're the product, not the customer...

My personal opinion on the whole automation thing: given the multiple "prove you're a human" challenges we've all faced using their instances over the several months we've been playing with this, the intent is clearly to have the human (slave) in the loop.

If people want to try to get around this, it's at their own risk. Personally, I just manually restart the instances when I happen to flip to that virtual desktop during my day.
Old 2021-02-26, 17:45   #39
kriesel

Quote:
Originally Posted by Prime95
Yes, get an LL-DC assignment and then PRP it. Upload your PRP result and proof file as you normally would (either by prime95 or gpuowl's python script).
If someone had the time and inclination, adding a choice to the manual assignment page to generate PRP DC worktodo lines for LL DC candidates would help us humans avoid rewriting the lines by hand and introducing our own errors.
Old 2021-02-26, 18:19   #40
Uncwilly

Quote:
Originally Posted by kriesel
If someone had the time and inclination, adding a choice to the manual assignment page to generate PRP DC worktodo lines for LL DC candidates would help us humans avoid rewriting the lines by hand and introducing our own errors.
I bet if you ask James real nicely, he would add that functionality to mersenne.ca:
Drop in the LL-DC lines from your worktodo, get PRP lines out.
I could set you up with an Excel or Google Sheets sheet for this.
[edit] I just did it in Excel; it imports into Google Sheets with no problem and works there. See attachment. [/edit]
Attached Files
File Type: zip LLDC to PRPDC Worktodo converter.zip (8.2 KB, 14 views)
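For those who would rather script the conversion, a minimal Python sketch (the field layout is assumed from the standard worktodo formats, and the trailing 0,3,1 mirrors the PRP= line quoted earlier in the thread):
Code:
import re

def lldc_to_prpdc(line: str) -> str:
    """Rewrite a DoubleCheck= worktodo line as an equivalent PRP= line.

    Assumes DoubleCheck=AID,exponent,factored_to,p1_done and emits
    PRP=AID,1,2,exponent,-1,factored_to,0,3,1 (base 3, residue type 1).
    """
    m = re.fullmatch(r"DoubleCheck=([0-9A-Fa-f]+),(\d+),(\d+),(\d+)", line.strip())
    if not m:
        return line  # pass through anything that isn't an LL-DC entry
    aid, exponent, factored, _p1done = m.groups()
    return f"PRP={aid},1,2,{exponent},-1,{factored},0,3,1"

with open("worktodo.txt") as f:
    for line in f:
        print(lldc_to_prpdc(line.rstrip("\n")))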

Last fiddled with by Uncwilly on 2021-02-26 at 19:01
Old 2021-02-26, 19:33   #41
kriesel
Ooppss

Quote:
Originally Posted by kriesel
Quadro 2000 gpus have ECC gpu ram.
The Quadro 2000 and 4000, while designed for pro use, do not have ECC; the Quadro 5000 has ECC VRAM.
Old 2021-02-27, 00:29   #42
Prime95

Quote:
Originally Posted by kriesel
If someone had the time and inclination, adding a choice to the manual assignment page to generate PRP DC worktodo lines for LL DC candidates would help us humans avoid rewriting the lines by hand and introducing our own errors.
Try mersenne.org's manual assignment page.
Old 2021-02-27, 15:52   #43
kriesel

Quote:
Originally Posted by Prime95
Try mersenne.org's manual assignment page.
Thanks, George, for making that more efficient and reliable by minimizing the middleman's work. The first try worked fine with gpuowl v7.2-63-ge47361b, a drag & drop of the resulting proof onto prime95 v30.4b9's working folder, and a subsequently quick Cert completion, except for the claim that an LL DC assignment expired the day after one wasn't really assigned (a PRP was). Is it practical to do a similar PRP substitution for PrimeNet API LL DC candidates, for prime95/mprime v30.3 or above, preferably without requiring a client software modification and end-user software updates times n systems? (Given some pending assignments, and that the occasional P-1 stage 2 will restart from the beginning, wait until it's done to upgrade; warning: rollouts take weeks.)

Last fiddled with by kriesel on 2021-02-27 at 16:07
Old 2021-03-07, 16:51   #44
tdulcet ("Teal Dulcet")

Quote:
Originally Posted by tdulcet
Yes, that would be a trivial change we will consider for the next version of our notebooks.
As requested by @LaurV, both our notebooks now support all the worktypes for the CPU that MPrime currently supports! Feedback is welcome.

Quote:
Originally Posted by Prime95
The PrimeNet server will happily accept a PRP test with proof for LL-DC work. So, you only need to download one gpuowl version.
Another gpuowl advantage is it will run P-1 if necessary, potentially saving a lengthy PRP test altogether.

Also, in prime95 you can cut the amount of disk space required in half. I'll bet gpuowl has a similar option.
Unfortunately, even half the disk space would still not work for many Colab Pro users or people doing 100-million-digit tests. Some Colab Pro users are running eight or more notebooks with a total of 12 or more GIMPS program instances. If they were doing first-time PRP tests with proofs (such as eight instances of MPrime and four instances of GpuOwl), that would still require over 21 GiB of space. A free Colab user running two notebooks with a total of three GIMPS program instances (two instances of MPrime and one instance of GpuOwl) doing 100-million-digit PRP tests with proofs would still require over 16.5 GiB of space. Remember that most users' accounts are limited to just 15 GiB, which is also shared by the Gmail, Drive and Photos services. This is why our GPU notebook, if it were to support GpuOwl, would need to download and build two versions to support both LL DC and PRP tests respectively.
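For a rough sense of where that space goes, here is a back-of-the-envelope model assuming proof generation temporarily stores about 2^power residues of exponent/8 bytes each (an approximation of the dominant cost; actual file layouts vary by program and version):
Code:
def proof_temp_gib(exponent: int, power: int = 8, halved: bool = False) -> float:
    """Approximate temporary disk needed to generate a PRP proof, in GiB."""
    residue_bytes = exponent / 8            # one full residue
    total = 2**power * residue_bytes        # residues saved during the run
    if halved:
        total /= 2                          # e.g. prime95's space-saving option
    return total / 2**30

# ~115M first-time exponent vs. a 100-million-digit exponent (~332.2M)
print(f"{proof_temp_gib(115_545_511, halved=True):.1f} GiB per instance")  # ~1.7
print(f"{proof_temp_gib(332_200_000, halved=True):.1f} GiB per instance")  # ~5.0
Twelve instances of the first is about 21 GiB, and three of the second about 15 GiB, in the same ballpark as the totals above.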

Quote:
Originally Posted by kriesel
for a 7M FFT, the older gpuowl version (probably one of Fan Ming's compiles; Google Drive file date Jan 21, 2020) is producing iteration times of 7.75-8.46 ms/iter on a T4. An old, less optimized version of gpuowl, running a longer FFT length, is still a little faster than the Colab CUDALucas T4 timings posted recently
OK, I was able to compile and run GpuOwl 6.11 commit 5c5dc6669d748460c57ff1962fdbbbc599bac0d0 (the last version that successfully compiles with GCC 7) on Colab. On the Tesla V100 GPU, GpuOwl is 638 us/iter and CUDALucas is around 1,140 us/iter, so I can confirm that GpuOwl is about 78.6% faster. Here are the results from 100,000 iterations of an LL test:
Code:
2021-03-04 06:34:54 Tesla V100-SXM2-16GB-0 106928347 LL        0 loaded: 0000000000000004
2021-03-04 06:35:57 Tesla V100-SXM2-16GB-0 106928347 LL   100000   0.09%;  638 us/it; ETA 0d 18:56; 95920d6941eafe3f
2021-03-04 06:35:57 Tesla V100-SXM2-16GB-0 waiting for the Jacobi check to finish..
2021-03-04 06:36:45 Tesla V100-SXM2-16GB-0 106928347 OK   100000 (jacobi == -1)
and 100,000 iterations of a PRP test:
Code:
2021-03-04 06:37:33 Tesla V100-SXM2-16GB-0 106928347 OK        0 loaded: blockSize 400, 0000000000000003
2021-03-04 06:37:34 Tesla V100-SXM2-16GB-0 106928347 OK      800   0.00%;  638 us/it; ETA 0d 18:57; 7d85dc41e3222beb (check 0.41s)
2021-03-04 06:38:37 Tesla V100-SXM2-16GB-0 Stopping, please wait..
2021-03-04 06:38:38 Tesla V100-SXM2-16GB-0 106928347 OK   100000   0.09%;  639 us/it; ETA 0d 18:59; 4d66b4eed5ea9ab3 (check 0.42s)
They were both using the 6M FFT length. We of course still need to test with the other Tesla GPUs available on Colab and with the latest version of GpuOwl. If anyone wants to reproduce these results, here are the commands to download, build and run this version of GpuOwl on Colab:
Code:
# Install the GNU MP library headers GpuOwl needs to build
sudo apt-get update
sudo apt-get install libgmp3-dev -y
# Fetch and unpack this specific commit (the last that builds with GCC 7)
wget -nv https://github.com/preda/gpuowl/archive/5c5dc6669d748460c57ff1962fdbbbc599bac0d0.tar.gz
tar -xzvf 5c5dc6669d748460c57ff1962fdbbbc599bac0d0.tar.gz
cd gpuowl-5c5dc6669d748460c57ff1962fdbbbc599bac0d0
# GCC 7 only provides <experimental/filesystem>, so patch the sources
sed -i 's/<filesystem>/<experimental\/filesystem>/' *.h *.cpp
sed -i 's/std::filesystem/std::experimental::filesystem/' *.h *.cpp
# Build with debug symbols and -O3 instead of the default -O2
sed -i 's/-Wall -O2/-Wall -g -O3/' Makefile
make -j "$(nproc)"
# Show the help, then run 100,000-iteration LL and PRP timing tests
./gpuowl -h
./gpuowl -ll 106928347 -iters 100000
./gpuowl -prp 106928347 -iters 100000
Daniel and I have been testing various iterations of our two notebooks and new PrimeNet script for over nine months. They were extremely well tested before Daniel officially announced them in this thread. I expect that adding GpuOwl support would take several more months to implement and thoroughly test. Here is everything that we think would need to be done in order for our GPU notebook to support GpuOwl (let us know if we are missing anything):
  • Update our PrimeNet Python script to support GpuOwl.
    • Update it to report LL/PRP results; GpuOwl seems to use the same JSON format as Mlucas.
    • Update it to report progress; the format seems to differ slightly between LL and PRP (see above) and between versions.
    • Add support for reporting P-1 results, including standalone assignments, when done before an LL test and when combined with a PRP test.
    • Add support for uploading PRP proofs.
    • Replace the existing -g/--gpu option with new --cudalucas and -g/--gpuowl options.
  • Create new GpuOwl install script.
    • Needs to handle AMD, Nvidia and Intel GPUs (not necessary for Colab, but still important so that it can be independently tested and used).
    • Needs to download, build and run both the latest version of GpuOwl for PRP tests with proofs and version ~6.11 for LL DC and standalone P-1 tests.
    • Needs to support both the GCC and Clang compilers, detect the compiler version and dynamically modify the GpuOwl code based on that (see first two sed commands above).
    • Needs to detect if the GNU Multiple Precision (GMP) library and OpenCL are already installed and if not, install them.
  • Create new wrapper script to run the correct version of GpuOwl (see the sketch after this list).
    • Needs to run the correct version based on the next assignment in the worktodo file, probably by maintaining two worktodo and two results files.
    • For LL assignments, needs to first start a P-1 factoring test if that has not yet been done.
    • Needs to automatically handle any idiosyncrasies between the versions for the user.
  • Update our GPU notebook to use GpuOwl (requires all of the above).
As Daniel and I said before, pull requests are welcome if anyone wants to help do any of this!
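For the wrapper item above, here is a rough sketch of the dispatch logic in Python (the directory names, the one-entry-at-a-time handoff, and the bare ./gpuowl invocation are illustrative assumptions, not how a finished notebook would do it):
Code:
import subprocess
from pathlib import Path

# Hypothetical install locations of the two builds
GPUOWL_LATEST = Path("gpuowl-v7")   # PRP tests with proofs
GPUOWL_V6 = Path("gpuowl-v6.11")    # LL DC and standalone P-1

def build_for(entry: str) -> Path:
    """Route PRP work to the latest build and LL/P-1 work to ~v6.11."""
    worktype = entry.split("=", 1)[0].strip()
    if worktype == "PRP":
        return GPUOWL_LATEST
    if worktype in ("Test", "DoubleCheck", "Pfactor"):
        return GPUOWL_V6
    raise ValueError(f"unrecognized worktodo entry: {entry!r}")

def run_all(master: Path = Path("worktodo.txt")) -> None:
    """Hand each assignment to the right build's own worktodo file."""
    for entry in master.read_text().splitlines():
        entry = entry.strip()
        if not entry:
            continue
        build_dir = build_for(entry)
        # each version keeps its own worktodo/results pair
        (build_dir / "worktodo.txt").write_text(entry + "\n")
        subprocess.run(["./gpuowl"], cwd=build_dir, check=True)

if __name__ == "__main__":
    run_all()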

We would also obviously need the latest version of GpuOwl to always build successfully on Colab. To achieve this, I submitted a pull request to the GpuOwl repository, which adds Continuous Integration (CI) to automatically build GpuOwl on Linux (with both GCC and Clang) and Windows on every commit and pull request. It was merged a few days ago. This allows users to see directly at the top of the GpuOwl README whether the latest version of GpuOwl builds, by checking the badges. It should also eventually eliminate the need for @kriesel to manually build and upload binaries for Windows users. See my pull request for more info.

Quote:
Originally Posted by Prime95
Try mersenne.org's manual assignment page.
Quote:
Originally Posted by kriesel
Is it practical to do a similar PRP substitution for PrimeNet API LL DC candidates, for prime95/mprime v30.3 or above
This added a new 155 worktype to the manual assignment page. I am curious whether the worktype would also work with the PrimeNet API, as we could trivially add support for it to our PrimeNet script if/when we add support for uploading PRP proofs.