2021-02-23, 19:55  #23  
Dec 2019
23_{16} Posts 
~1 Month Results
Quote:
The results from the `Oracle*`, `pdxEmail`, and `Windows` CPUs can be ignored as they are not from Colab, but everything else is. Please also note that I was without power for 5 days and thus missed out on those days of results. In total, the sum of primality results returned up to the current date in the timeframe of January 16 to February 23 is 46. This is significant because most of these are not small DCs or CERTs. 

2021-02-23, 21:20  #24  
If I May
"Chris Halsall"
Sep 2002
Barbados
24652_{8} Posts 
Quote:
1. A reasonable amount of compute can be "harvested" from Colab.
2. There seem to be quite a few "dimensions" to the compute allotments.
3. While those running the GPU72 Notebook were shut out, others were reporting 12 hours or so of GPU.
3.1. The Google Gods (which may simply be Humans directing machines) act in mysterious ways.
4. Recently, those running the GPU72 Notebook have been getting a bit of compute each day.
4.1. My thirteen (13) instances (spread across five machines in three countries) have to be interacted with, but they always get at least CPU compute for at least 20 minutes.

To be honest, I've been as fascinated with watching the experimenters experiment with the Subjects as with anything else. (I'm reminded of Douglas Adams, and the Mice and the Dolphins (or was it the whales?).) 

2021-02-23, 22:26  #25 
6809 > 6502
"""""""""""""""""""
Aug 2003
10100111110011_{2} Posts 

2021-02-23, 22:48  #26 
If I May
"Chris Halsall"
Sep 2002
Barbados
2·5,333 Posts 

2021-02-24, 15:54  #27  
"Teal Dulcet"
Jun 2018
71 Posts 
Quote:
Quote:
Quote:


2021-02-24, 17:50  #28  
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
15272_{8} Posts 
In support of chalsall's statement that Google offers Colab for interactive use, not bot use, the image interpretation task used to occur at least daily on one of my several Colab free accounts; same account every time. It doesn't happen often now, but it still comes up.
Re: gpuowl faster than CUDALucas: Quote:
Compare LL on CUDALucas to PRP on gpuowl: same exponent, same host, same gpu, same hour, same environmental and clocking conditions, a GTX1080 for this quick benchmark.

CUDALucas v2.06, May 5 2017 version compiled by flashjh; Windows 10 run environment. Code:
Starting M240110503 fft length = 13824K
|   Date     Time    |  Test Num     Iter       Residue        |    FFT    Error    ms/It    Time   |      ETA      Done  |
|  Feb 23  16:14:28  |  M240110503   10000  0x5b6b7cbec1bdc015 |  13824K  0.08594  13.4883  134.88s |  37:11:35:46  0.00% |
|  Feb 23  16:16:43  |  M240110503   20000  0xde34ff2ddb2080a4 |  13824K  0.08789  13.5358  135.35s |  37:13:08:45  0.00% |
|  Feb 23  16:18:58  |  M240110503   30000  0x14e2c4cd92c29164 |  13824K  0.09180  13.5395  135.39s |  37:13:43:10  0.01% |
|  Feb 23  16:21:14  |  M240110503   40000  0x5256dd82035447c4 |  13824K  0.08594  13.5488  135.48s |  37:14:08:29  0.01% |
|  Feb 23  16:23:29  |  M240110503   50000  0xe89ddd5520561b21 |  13824K  0.08594  13.5361  135.36s |  37:14:12:38  0.02% |

ETA: 240110503 * .0135297 sec / 3600 / 24 days/sec =~ 37.600 days

Gpuowl v6.11-380 excerpt, mid-run of PRP/GEC/proof, 13M fft (1K:13:512): Code:
2021-02-23 15:37:13 asr3/gtx1080 240110503 OK 131700000 54.85%; 11875 us/it; ETA 14d 21:36; a5f295da6eddc0a1 (check 5.17s)
2021-02-23 15:47:13 asr3/gtx1080 240110503 OK 131750000 54.87%; 11877 us/it; ETA 14d 21:30; f20a694bd0c842de (check 5.71s)
2021-02-23 15:57:12 asr3/gtx1080 240110503 OK 131800000 54.89%; 11883 us/it; ETA 14d 21:31; 7ddaab01bbd26fcd (check 5.20s)
2021-02-23 16:07:11 asr3/gtx1080 240110503 OK 131850000 54.91%; 11866 us/it; ETA 14d 20:50; 38b6acb7773f3896 (check 5.28s)

ETA, start to finish: 240110503 * .011875 sec / 3600 / 24 days/sec =~ 33.001 days

Raw iteration speed ratio gpuowl PRP / CUDALucas LL = 37.6 / 33.001 =~ 1.1394. The fft length difference (13.5M CUDALucas vs 13M gpuowl) only accounts for ~4% of the observed 14% difference favoring gpuowl (like getting 8 days per week!).

What's omitted above is the slightly more than 2:1 overall project speed advantage of PRP/GEC/proof vs. LL, LL DC, and typically 4% LL TC, that's lost by using CUDALucas. Also lost is the error checking; there's not even the relatively weaker Jacobi symbol check in CUDALucas, unless you've added it in your builds. The higher the exponent, the longer the run, and the less likely a run will complete correctly without GEC.

In P-1, you could perhaps compare my CUDAPm1 fft and threads file timings and estimate P-1 run times. If you try running P-1 tests on Colab I'd be interested in learning how to resolve the zero-residue issue I ran into: https://www.mersenneforum.org/showpo...28&postcount=5

Gpuowl P-1 run time scaling for various gpus, including 2 Colab models, can be found here. Benchmarking on the V100 has been a non-issue since I don't recall ever encountering one. Lately it's almost entirely T4s, more suitable for TF. 
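The ETA and ratio arithmetic above can be sanity-checked in a few lines of Python; all inputs (exponent, ms/iter, fft lengths) are taken from the logs quoted above:

```python
# Recompute the quoted ETAs and speed ratio from the logged per-iteration times.
exponent = 240110503            # LL/PRP iteration count is ~= the exponent

def eta_days(ms_per_iter: float) -> float:
    """Full-run ETA in days: iterations * sec/iter / 86400 sec/day."""
    return exponent * ms_per_iter / 1000 / 86400

cudalucas_days = eta_days(13.5297)   # ms/iter from the CUDALucas log
gpuowl_days = eta_days(11.875)       # ms/iter from the gpuowl log

print(f"CUDALucas LL ETA: {cudalucas_days:.3f} days")   # ~37.600
print(f"gpuowl PRP ETA:   {gpuowl_days:.3f} days")      # ~33.001
print(f"speed ratio:      {cudalucas_days / gpuowl_days:.3f}")  # ~1.14

# The fft-length share of that gap: 13824K (13.5M) vs 13312K (13M)
print(f"fft length ratio: {13824 / 13312:.4f}")  # ~1.0385, i.e. ~4%
```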

2021-02-24, 19:31  #29  
Dec 2019
5×7 Posts 
Quote:
GPUOwl stuff: Yes, it would be great if we could use GPUOwl instead of CUDALucas, as it sounds like there is more that can be done, as great as CUDALucas is.

Last fiddled with by danc2 on 2021-02-24 at 19:31 

2021-02-25, 12:46  #30 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
2·11·311 Posts 
GTX1060 gpuowl vs. CUDALucas, ~58M LL DC
Executive summary: gpuowl 5.8 ms/iter with Jacobi check; CUDALucas 6.25-6.5 ms/iter (no Jacobi check).
Gpuowl v6.11-380 on GTX1060, ~5.806 ms/iter on a 58.75M LL DC with Jacobi check: Code:
2021-02-22 21:04:36 condor/gtx1060 58755607 FFT: 3M 1K:6:256 (18.68 bpw)
2021-02-22 21:04:36 condor/gtx1060 Expected maximum carry32: 50550000
2021-02-22 21:04:36 condor/gtx1060 OpenCL args "-DEXP=58755607u -DWIDTH=1024u -DSMALL_HEIGHT=256u -DMIDDLE=6u -DPM1=0 -DMM_CHAIN=1u -DMM2_CHAIN=2u -DMAX_ACCURACY=1 -DWEIGHT_STEP_MINUS_1=0x8.01304be8dc228p-5 -DIWEIGHT_STEP_MINUS_1=0xc.ce52411c70cep-6 -cl-unsafe-math-optimizations -cl-std=CL2.0 -cl-finite-math-only "
2021-02-22 21:04:39 condor/gtx1060 
2021-02-22 21:04:39 condor/gtx1060 OpenCL compilation in 2.52 s
2021-02-22 21:04:39 condor/gtx1060 58755607 LL 0 loaded: 0000000000000004
2021-02-22 21:06:00 condor/gtx1060 102714151 P2 GCD: no factor
2021-02-22 21:06:00 condor/gtx1060 {"status":"NF", "exponent":"102714151", "worktype":"PM1", "B1":"1000000", "B2":"30000000", "fftlength":"5767168", "program":{"name":"gpuowl", "version":"v6.11-380-g79ea0cc"}, "user":"kriesel", "computer":"condor/gtx1060", "aid":"7DAA6CA7DFF308D0DF638276AF9B5028", "timestamp":"2021-02-23 03:06:00 UTC"}
2021-02-22 21:14:20 condor/gtx1060 58755607 LL 100000 0.17%; 5807 us/it; ETA 3d 22:37; 39c251c47f602a3d
2021-02-22 21:24:01 condor/gtx1060 58755607 LL 200000 0.34%; 5807 us/it; ETA 3d 22:28; eb46c0fb8d0e94f8
2021-02-22 21:33:41 condor/gtx1060 58755607 LL 300000 0.51%; 5807 us/it; ETA 3d 22:18; ed993c4bb040ddef
2021-02-22 21:43:22 condor/gtx1060 58755607 LL 400000 0.68%; 5807 us/it; ETA 3d 22:07; 54e2c2904288419d
2021-02-22 21:53:03 condor/gtx1060 58755607 LL 500000 0.85%; 5808 us/it; ETA 3d 21:59; 16657e0fba393f7f
2021-02-22 22:02:43 condor/gtx1060 58755607 LL 600000 1.02%; 5808 us/it; ETA 3d 21:49; 7ca0fe4b4db9c724
2021-02-22 22:02:43 condor/gtx1060 58755607 OK 500000 (jacobi == 1)
2021-02-22 22:12:24 condor/gtx1060 58755607 LL 700000 1.19%; 5808 us/it; ETA 3d 21:40; 22aa1cb83c55294c
... 
2021-02-25 05:41:26 condor/gtx1060 58755607 LL 35100000 59.74%; 5805 us/it; ETA 1d 14:09; 7810938d88993295
2021-02-25 05:41:26 condor/gtx1060 58755607 OK 35000000 (jacobi == 1)
2021-02-25 05:51:06 condor/gtx1060 58755607 LL 35200000 59.91%; 5804 us/it; ETA 1d 13:59; 5d55d69ab7ca60a9
2021-02-25 06:00:46 condor/gtx1060 58755607 LL 35300000 60.08%; 5804 us/it; ETA 1d 13:49; 5635fb50dc776ab9
2021-02-25 06:10:27 condor/gtx1060 58755607 LL 35400000 60.25%; 5804 us/it; ETA 1d 13:39; 2ef462f9a00916b2
2021-02-25 06:14:25 condor/gtx1060 Stopping, please wait..
2021-02-25 06:14:25 condor/gtx1060 58755607 LL 35441000 60.32%; 5813 us/it; ETA 1d 13:39; bdb95405e8027916
2021-02-25 06:14:25 condor/gtx1060 waiting for the Jacobi check to finish..
2021-02-25 06:15:12 condor/gtx1060 58755607 OK 35441000 (jacobi == 1)

CUDALucas: 10:51 / 100k iterations = 6.51 msec/iter, 12% longer than gpuowl, and no Jacobi check: Code:
Using threads: square 512, splice 128.
Starting M58755607 fft length = 3200K
|   Date     Time    |  Test Num    Iter       Residue        |   FFT    Error    ms/It    Time   |     ETA      Done  |
|  Feb 25  06:20:15  |  M58755607   50000  0x6b790995614a3aa2 |  3200K  0.19189  6.2483  312.41s |  4:05:53:31  0.08% |
Resetting fft. Using threads: square 512, splice 32.
Continuing M58755607 @ iteration 50001 with fft length 3136K, 0.09% done
Round off error at iteration = 51500, err = 0.35938 > 0.35, fft = 3136K.
Restarting from last checkpoint to see if the error is repeatable.
Using threads: square 512, splice 32.
Continuing M58755607 @ iteration 50001 with fft length 3136K, 0.09% done
Round off error at iteration = 51500, err = 0.35938 > 0.35, fft = 3136K.
The error persists. Trying a larger fft until the next checkpoint.
Using threads: square 512, splice 128.
Continuing M58755607 @ iteration 50001 with fft length 3200K, 0.09% done
|  Feb 25  06:25:45  |  M58755607  100000  0x39c251c47f602a3d |  3200K  0.18750  6.2484  312.41s |  4:05:48:26  0.17% |
Resetting fft. Using threads: square 512, splice 32.
Continuing M58755607 @ iteration 100001 with fft length 3136K, 0.17% done
Round off error at iteration = 100700, err = 0.35156 > 0.35, fft = 3136K.
Restarting from last checkpoint to see if the error is repeatable.
Using threads: square 512, splice 32.
Continuing M58755607 @ iteration 100001 with fft length 3136K, 0.17% done
Round off error at iteration = 100700, err = 0.35156 > 0.35, fft = 3136K.
The error persists. Trying a larger fft until the next checkpoint.
Using threads: square 512, splice 128.
Continuing M58755607 @ iteration 100001 with fft length 3200K, 0.17% done
|  Feb 25  06:31:06  |  M58755607  150000  0x71a49982b1d8c05d |  3200K  0.17969  6.2493  312.46s |  4:05:44:06  0.25% |
Resetting fft. Using threads: square 512, splice 32.
Continuing M58755607 @ iteration 150001 with fft length 3136K, 0.26% done
Round off error at iteration = 158700, err = 0.375 > 0.35, fft = 3136K.
Restarting from last checkpoint to see if the error is repeatable.
Using threads: square 512, splice 32.
Continuing M58755607 @ iteration 150001 with fft length 3136K, 0.26% done
Round off error at iteration = 158700, err = 0.375 > 0.35, fft = 3136K.
The error persists. Trying a larger fft until the next checkpoint.

Last fiddled with by kriesel on 2021-02-25 at 12:47 
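For the GTX1060 comparison, the quoted per-iteration times pin down the "12% longer" figure directly; a quick check of the arithmetic, with both values taken from the logs above:

```python
# gpuowl held ~5.805 ms/iter with the Jacobi check enabled (log above).
gpuowl_ms = 5.805

# CUDALucas: 10:51 wall time per 100,000 iterations (log above) -> ms/iter.
cudalucas_ms = (10 * 60 + 51) / 100_000 * 1000

slowdown = cudalucas_ms / gpuowl_ms - 1
print(f"CUDALucas ms/iter:   {cudalucas_ms:.2f}")   # 6.51
print(f"CUDALucas slower by: {slowdown:.0%}")       # 12%
```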
2021-02-25, 14:44  #31  
"Teal Dulcet"
Jun 2018
47_{16} Posts 
Quote:
Note that there are existing add-ons that claim to be able to automatically solve these reCAPTCHAs (I have never tried any of them), such as Buster: Captcha Solver for Humans, which could potentially be used if this ever becomes problematic in Colab. Quote:
For a wavefront first-time primality test (with an exponent up to 115,080,019), here are the ms/iter speeds with CUDALucas on Colab using our GPU notebook (all 6272K FFT length):
Quote:
Quote:
Quote:
Quote:
When our extension is set to automatically run the first cell of the notebook (disabled by default), it will check whether the cell is running every minute by default. This is configurable, but I would not recommend using a value of less than one minute, to prevent Google from thinking they/we are DoSing their servers. 
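The watchdog behaviour described above can be sketched as follows. This is only a hedged illustration, not the extension's actual code (the real thing is a browser add-on): `is_cell_running` and `start_cell` are hypothetical stand-ins, and only the check-every-interval loop and the one-minute floor come from the post.

```python
import time

MIN_INTERVAL_S = 60  # recommended floor: never poll more than once a minute


def clamp_interval(requested_s: float) -> float:
    """Enforce the one-minute floor so we don't hammer Google's servers."""
    return max(requested_s, MIN_INTERVAL_S)


def is_cell_running() -> bool:
    """Hypothetical stand-in for the extension's check of the first cell."""
    raise NotImplementedError


def start_cell() -> None:
    """Hypothetical stand-in for (re)starting the first cell."""
    raise NotImplementedError


def watch(interval_s: float = 60) -> None:
    """Periodically re-run the notebook's first cell if it has stopped."""
    interval_s = clamp_interval(interval_s)
    while True:
        if not is_cell_running():
            start_cell()
        time.sleep(interval_s)
```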

2021-02-25, 16:59  #32  
P90 years forever!
Aug 2002
Yeehaw, FL
29×277 Posts 
Quote:
Quote:
Another gpuowl advantage is that it will run P-1 if necessary, potentially saving a lengthy PRP test altogether. Also, in prime95 you can cut the amount of disk space required in half. I'll bet gpuowl has a similar option.

Last fiddled with by Prime95 on 2021-02-25 at 17:01 

2021-02-25, 19:04  #33 
"6800 descendent"
Feb 2005
Colorado
5^{2}×29 Posts 
