mersenneforum.org thread: "I'm getting maybe 10 or 15 minutes with Colab before having to reset"

2021-06-17, 03:54   #12
slandrum

Jan 2021
California

3·5·17 Posts

I'm not running GPUto72, but I've been playing around with Colab accounts for a few months. For each account I can run a CPU-only instance and a GPU instance.

The CPU instance can run around the clock; as soon as it ends I can restart it immediately. At 6 hours plus a random 2 to 45 minutes it puts up a reCaptcha dialog to verify I'm still there. Lately, maybe 1/5 of the time, it also shows one of the "are you really a human" picture puzzles. If I respond in time, the session lasts approximately 12 hours total; if I don't, it terminates some time before the 7-hour mark, but I can restart it immediately afterward (I'll get a different machine).

While the CPU session is (nearly) continuously running I can also run a GPU session. GPU session length varies from 2 hours (the shortest was actually 1 hour 15 minutes) to 14 hours. If a session lasts longer than 3 hours, the reCaptcha pops up within 2 minutes of passing the 3-hour mark. Lately the GPU sessions have been kicking me off at under 3 hours (2:25 to 2:50 is what I've been getting for about a week), but the week before that I averaged about 4 hours per session, and the week before that just over 3 hours per session. One of my accounts has now gone 5 days without getting a GPU session at all (a new behavior); all the other accounts get one GPU session per 24-hour period. These are all free accounts.

I usually get a T4 for the GPU session, sometimes a K80, and rarely a P100. If I get a P4 (even rarer than the P100) I kill the session and start a new one to get a different GPU; for what I've been running, the P4 gives the worst results, 25% slower than a T4. With the K80 I get results comparable to the T4, maybe 5-10% faster on average, but each instance has different timing and the best T4 timings are better than the worst K80 timings. With the P100 I get results 4.5x faster than with the T4 on average.

Last fiddled with by slandrum on 2021-06-17 at 03:55
2021-06-17, 04:22   #13
petrw1
1976 Toyota Corona years forever!

"Wayne"
Nov 2006
Saskatchewan, Canada

22×35×5 Posts

I'm considering switching from P-1 to TF if all I can get is T4s. I just need a build of mfaktc for Colab. Actually my preference would be a script that asks for a GPU, then:
IF T4: run mfaktc
IF P100: run GPUOwl P-1
ELSE: try again
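The branching petrw1 describes could be sketched as a small dispatcher. A minimal sketch, assuming the GPU model is read via `nvidia-smi --query-gpu=name`; the returned worker names here are just labels standing in for the actual mfaktc/GPUOwl launch commands, not real CLI invocations:

```python
import subprocess

def detect_gpu():
    """Return the GPU model name reported by nvidia-smi, or None if no GPU is assigned."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return out or None
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None

def choose_worker(gpu_name):
    """Map a GPU model string to the action petrw1 described."""
    if gpu_name is None:
        return "retry"       # no GPU assigned: reset the runtime and ask again
    if "T4" in gpu_name:
        return "mfaktc"      # trial factoring on the T4
    if "P100" in gpu_name:
        return "gpuowl-pm1"  # P-1 factoring on the P100
    return "retry"           # K80, P4, etc.: kill the session, try for another GPU
```

A driver loop would call `choose_worker(detect_gpu())` and either launch the worker or restart the Colab runtime.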
2021-06-17, 05:14   #14
moebius

"CharlesgubOB"
Jul 2009
Germany

2×313 Posts

Quote:
 Originally Posted by petrw1 Haven't been able to get P100 for the last 3 days
Me too, but I was able to get more Tesla K-80 runtime than usual.

2021-06-17, 06:45   #15
kriesel

"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

17×349 Posts

Quote:
 Originally Posted by petrw1 I'm considering switching from P-1 to TF if all I can get is T4s. I just need a build of mfaktc for Colab. Actually my preference would be a script that asks for a GPU, then: IF T4, run mfaktc; IF P100, run GPUOwl P-1; ELSE try again.
Have a look at the script at https://www.mersenneforum.org/showpo...5&postcount=16 and adapt it to do what you want. Mfaktc at https://www.mersenneforum.org/showpo...9&postcount=25 or https://download.mersenne.ca/mfaktc/mfaktc-0.21

Last fiddled with by kriesel on 2021-06-17 at 06:47

2021-06-17, 12:55   #16
moebius

"CharlesgubOB"
Jul 2009
Germany

2·313 Posts

Quote:
 Originally Posted by petrw1 Actually my preference would be a script that asks for a GPU, then: IF T4, run mfaktc; IF P100, run GPUOwl P-1; ELSE try again.
And what about the K-80? It is definitely good for double checks, and then probably also for P-1.
Code:
2021-06-17 12:21:40 gpuowl v6.11-380-g79ea0cc
2021-06-17 12:21:41 config: -carry short -use CARRY32,ORIG_SLOWTRIG,IN_WG=128,IN_SIZEX=16,IN_SPACING=4,OUT_WG=128,OUT_SIZEX=16,OUT_SPACING=4 -nospin -block 100 -maxAlloc 10000 -B1 750000 -rB2 20
2021-06-17 12:21:41 device 0, unique id ''
2021-06-17 12:21:41 Tesla K80-0 57641347 FFT: 3M 1K:6:256 (18.32 bpw)
2021-06-17 12:21:41 Tesla K80-0 Expected maximum carry32: 3ED80000
2021-06-17 12:21:42 Tesla K80-0 OpenCL args "-DEXP=57641347u -DWIDTH=1024u -DSMALL_HEIGHT=256u -DMIDDLE=6u -DPM1=0 -DWEIGHT_STEP_MINUS_1=0x1.32332220cd858p-1 -DIWEIGHT_STEP_MINUS_1=-0x1.7f37b40db846ep-2 -DCARRY32=1 -DIN_SIZEX=16 -DIN_SPACING=4 -DIN_WG=128 -DORIG_SLOWTRIG=1 -DOUT_SIZEX=16 -DOUT_SPACING=4 -DOUT_WG=128  -cl-unsafe-math-optimizations -cl-std=CL2.0 -cl-finite-math-only "
2021-06-17 12:21:45 Tesla K80-0 OpenCL compilation in 3.02 s
2021-06-17 12:21:46 Tesla K80-0 57641347 LL 39500000 loaded: 4d3e186ee9074c1e
2021-06-17 12:25:12 Tesla K80-0 57641347 LL 39600000  68.70%; 2055 us/it; ETA 0d 10:18; 8cbc7bdc53b01d7b
2021-06-17 12:28:38 Tesla K80-0 57641347 LL 39700000  68.87%; 2061 us/it; ETA 0d 10:16; 9dad6557559689e3
2021-06-17 12:32:04 Tesla K80-0 57641347 LL 39800000  69.05%; 2061 us/it; ETA 0d 10:13; 3e7b66ea3779bd0b
2021-06-17 12:35:30 Tesla K80-0 57641347 LL 39900000  69.22%; 2061 us/it; ETA 0d 10:09; 0bc408c38dce350a
2021-06-17 12:38:56 Tesla K80-0 57641347 LL 40000000  69.39%; 2061 us/it; ETA 0d 10:06; 31799786a944acae
2021-06-17 12:42:22 Tesla K80-0 57641347 LL 40100000  69.57%; 2061 us/it; ETA 0d 10:03; 40a53371c5d36a44
2021-06-17 12:42:22 Tesla K80-0 57641347 OK 40000000 (jacobi == -1)
2021-06-17 12:45:48 Tesla K80-0 57641347 LL 40200000  69.74%; 2062 us/it; ETA 0d 09:59; 64bc9996665bee17
2021-06-17 12:49:14 Tesla K80-0 57641347 LL 40300000  69.92%; 2061 us/it; ETA 0d 09:56; 5338e9b91692daa3
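As a sanity check on the log above: the ETA gpuowl prints is just the remaining iterations times the per-iteration time (an LL test of M_p runs roughly p iterations, so the iteration count runs up to the exponent). A quick calculation against the last log line:

```python
def eta_seconds(exponent, done_iters, us_per_iter):
    """Remaining LL iterations (about p total for M_p) times seconds per iteration."""
    return (exponent - done_iters) * us_per_iter / 1e6

# Last log line: iteration 40300000 of exponent 57641347 at 2061 us/it
eta = eta_seconds(57641347, 40300000, 2061)
hours, rem = divmod(int(eta), 3600)
# roughly 9 h 56 min remaining, matching the log's "ETA 0d 09:56"
```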

2021-06-17, 14:22   #17
petrw1
1976 Toyota Corona years forever!

"Wayne"
Nov 2006

22×35×5 Posts

Quote:
 Originally Posted by moebius And what about the K-80? it is definitely good for double checks, then probably also for P-1.
Using GPUOwl, it is 4 times slower than a P100 for P-1.

2021-06-17, 14:23   #18
petrw1
1976 Toyota Corona years forever!

"Wayne"
Nov 2006

22·35·5 Posts

Quote:
 Originally Posted by kriesel Have a look at the script at https://www.mersenneforum.org/showpo...5&postcount=16 and adapt it to do what you want. Mfaktc at https://www.mersenneforum.org/showpo...9&postcount=25 or https://download.mersenne.ca/mfaktc/mfaktc-0.21
Thanks

2021-06-17, 16:37   #19
LaurV
Romulan Interpreter

"name field"
Jun 2011
Thailand

22·11·223 Posts

Quote:
 Originally Posted by moebius And what about the K-80? It is definitely good for double checks, and then probably also for P-1.
You don't get a full K80. K80 is a dual-chip card, and you only get half of it. The rest, as Wayne said.

2021-06-17, 19:16   #20
moebius

"CharlesgubOB"
Jul 2009
Germany

2×313 Posts

Quote:
 Originally Posted by LaurV You don't get a full K80. K80 is a dual-chip card, and you only get half of it. The rest, as Wayne said.
gpuowl is good, but I don't think that a gpuowl instance can use both GPUs of the Tesla K-80 at the same time. The values I have are for only one instance of gpuowl.

Last fiddled with by moebius on 2021-06-17 at 19:18

2021-06-17, 19:26   #21
moebius

"CharlesgubOB"
Jul 2009
Germany

2×313 Posts

Quote:
 Originally Posted by petrw1 Using GPUOwl, it is 4 times slower than a P100 for P-1.
Yes, a K-80 instance is only slightly slower in the LL DC range than an 8-core AMD Ryzen 3700X.

2021-06-17, 20:00   #22
kriesel

"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

17×349 Posts

Quote:
 Originally Posted by moebius gpuowl is good, but I don't think that a gpuowl instance can use both GPUs of the Tesla K-80 at the same time. The values I have are for only one instance of gpuowl.
Gpuowl can't use more hardware than is present.
The K80 is a dual-GPU card. The Colab free sessions offer a CPU core (often with HT) and sometimes a single GPU (e.g. one GPU with 12 GB of GPU RAM: half the K80 card).
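One way to see this from inside a Colab session is to count the devices `nvidia-smi -L` lists; a half-K80 allocation shows a single GPU even though the physical card has two. A small sketch (the helper function is hypothetical, not part of any tool mentioned above):

```python
import subprocess

def count_gpus(listing=None):
    """Count devices in `nvidia-smi -L` output; each device prints one 'GPU n:' line."""
    if listing is None:
        listing = subprocess.run(["nvidia-smi", "-L"],
                                 capture_output=True, text=True).stdout
    return sum(1 for line in listing.splitlines() if line.startswith("GPU "))

# On a Colab K80 assignment this reports 1: you get one of the card's two GPUs.
```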
