#12
slandrum
Jan 2021
California
2×211 Posts
I'm not running GPUto72, but I've been playing around with Colab accounts for a few months.

For each account, I can run one CPU-only instance and one GPU instance. The CPU instance can run around the clock: as soon as it ends, I can restart it immediately. At 6 hours plus a random 2 to 45 minutes, it puts up a reCAPTCHA dialog to verify that I'm still there. Lately it will frequently (maybe 1/5 of the time) also present one of the "are you really a human" puzzles where you have to pick a set of pictures. If I respond in time, the session lasts approximately 12 hours total. If I don't respond, the session is terminated some time before the 7-hour mark. I can restart immediately after it's been terminated (I'll get a different machine).

While the CPU session is (nearly) continuously running, I can also run a GPU session. The time I get on the GPU varies from 2 hours (the shortest was actually 1 hour 15 minutes) to 14 hours. If the session lasts longer than 3 hours, the reCAPTCHA pops up within 2 minutes of the session passing the 3-hour mark. Lately the GPU sessions have been kicking me off at under 3 hours (2:25 to 2:50 is what I've been getting for about a week), but the week prior to that I was averaging about 4 hours per session, and the week before that just over 3 hours per session. One of my accounts has now gone 5 days without getting a GPU session at all (this is new behavior); all the other accounts get one GPU session per 24-hour period. These are all free accounts.

I usually get a T4 for the GPU session, sometimes a K80, and rarely a P100. If I get a P4 (even rarer than the P100) I kill the session and start a new one to get a different GPU: for what I've been running, the P4 gives the worst results, 25% slower than a T4. With the K80 I get results comparable to the T4, maybe 5-10% faster on average, but each instance has different timing and the best T4 timings are better than the worst K80 timings. With the P100 I get results 4.5× faster than with the T4 on average.

Last fiddled with by slandrum on 2021-06-17 at 03:55
#13
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
2²×3×433 Posts
I'm considering switching from P-1 to TF if all I can get is T4s.

I just need a build of mfaktc for Colab. Actually, my preference would be a script that asks for a GPU and then: if it's a T4, run mfaktc; if it's a P100, run GPUOwl P-1; else, try again.
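A minimal Python sketch of that dispatch idea, assuming mfaktc and gpuowl binaries sit in the working directory (the `./mfaktc` and `./gpuowl` paths and the flags are placeholders, not a tested setup):

```python
import subprocess

def gpu_name():
    """Return the name of the first GPU visible to nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.splitlines()[0].strip()

def pick_command(name):
    """Map a GPU name to the worker to launch; None means 'try again'."""
    if "T4" in name:
        return ["./mfaktc"]                    # TF on a T4 (path assumed)
    if "P100" in name:
        return ["./gpuowl", "-B1", "750000"]   # P-1 on a P100 (path/flags assumed)
    return None  # K80, P4, etc.: kill the session and request a new GPU
```

In a Colab cell one would call `pick_command(gpu_name())`, then either `subprocess.run()` the result or restart the runtime when it returns `None`.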
#14
"CharlesgubOB"
Jul 2009
Germany
1156₈ Posts
#15
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
6567₁₀ Posts

Quote:

Last fiddled with by kriesel on 2021-06-17 at 06:47
#16
"CharlesgubOB"
Jul 2009
Germany
2×311 Posts

Quote:

Code:
2021-06-17 12:21:40 gpuowl v6.11-380-g79ea0cc
2021-06-17 12:21:40 Note: not found 'config.txt'
2021-06-17 12:21:41 config: -carry short -use CARRY32,ORIG_SLOWTRIG,IN_WG=128,IN_SIZEX=16,IN_SPACING=4,OUT_WG=128,OUT_SIZEX=16,OUT_SPACING=4 -nospin -block 100 -maxAlloc 10000 -B1 750000 -rB2 20
2021-06-17 12:21:41 device 0, unique id ''
2021-06-17 12:21:41 Tesla K80-0 57641347 FFT: 3M 1K:6:256 (18.32 bpw)
2021-06-17 12:21:41 Tesla K80-0 Expected maximum carry32: 3ED80000
2021-06-17 12:21:42 Tesla K80-0 OpenCL args "-DEXP=57641347u -DWIDTH=1024u -DSMALL_HEIGHT=256u -DMIDDLE=6u -DPM1=0 -DWEIGHT_STEP_MINUS_1=0x1.32332220cd858p-1 -DIWEIGHT_STEP_MINUS_1=-0x1.7f37b40db846ep-2 -DCARRY32=1 -DIN_SIZEX=16 -DIN_SPACING=4 -DIN_WG=128 -DORIG_SLOWTRIG=1 -DOUT_SIZEX=16 -DOUT_SPACING=4 -DOUT_WG=128 -cl-unsafe-math-optimizations -cl-std=CL2.0 -cl-finite-math-only "
2021-06-17 12:21:45 Tesla K80-0
2021-06-17 12:21:45 Tesla K80-0 OpenCL compilation in 3.02 s
2021-06-17 12:21:46 Tesla K80-0 57641347 LL 39500000 loaded: 4d3e186ee9074c1e
2021-06-17 12:25:12 Tesla K80-0 57641347 LL 39600000 68.70%; 2055 us/it; ETA 0d 10:18; 8cbc7bdc53b01d7b
2021-06-17 12:28:38 Tesla K80-0 57641347 LL 39700000 68.87%; 2061 us/it; ETA 0d 10:16; 9dad6557559689e3
2021-06-17 12:32:04 Tesla K80-0 57641347 LL 39800000 69.05%; 2061 us/it; ETA 0d 10:13; 3e7b66ea3779bd0b
2021-06-17 12:35:30 Tesla K80-0 57641347 LL 39900000 69.22%; 2061 us/it; ETA 0d 10:09; 0bc408c38dce350a
2021-06-17 12:38:56 Tesla K80-0 57641347 LL 40000000 69.39%; 2061 us/it; ETA 0d 10:06; 31799786a944acae
2021-06-17 12:42:22 Tesla K80-0 57641347 LL 40100000 69.57%; 2061 us/it; ETA 0d 10:03; 40a53371c5d36a44
2021-06-17 12:42:22 Tesla K80-0 57641347 OK 40000000 (jacobi == -1)
2021-06-17 12:45:48 Tesla K80-0 57641347 LL 40200000 69.74%; 2062 us/it; ETA 0d 09:59; 64bc9996665bee17
2021-06-17 12:49:14 Tesla K80-0 57641347 LL 40300000 69.92%; 2061 us/it; ETA 0d 09:56; 5338e9b91692daa3
#17
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
5196₁₀ Posts
#18
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
144C₁₆ Posts

Quote:
#19
Romulan Interpreter
"name field"
Jun 2011
Thailand
9973₁₀ Posts
#20
"CharlesgubOB"
Jul 2009
Germany
1156₈ Posts
gpuowl is good, but I don't think a single gpuowl instance can use both GPUs of the Tesla K80 at the same time. The values I have are for one gpuowl instance only.
Last fiddled with by moebius on 2021-06-17 at 19:18 |
#21
"CharlesgubOB"
Jul 2009
Germany
2×311 Posts
#22
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
6567₁₀ Posts
Quote:

The K80 is a dual-GPU card. The free Colab sessions offer a CPU core (often with hyperthreading) and sometimes a single GPU (e.g., one GPU with 12 GB of GPU RAM: half of a K80 card).
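A quick way to check how many GPUs a session actually exposes is to count the device lines from `nvidia-smi -L`. This is a minimal sketch with a parsing helper so the logic can be tried offline; the sample strings in the test are illustrative, not captured from a real session:

```python
import subprocess

def visible_gpus(listing=None):
    """Return the device lines from `nvidia-smi -L` output.

    On a free Colab session with a K80, this typically shows a single
    GPU: one half of the dual-GPU card.
    """
    if listing is None:  # query the real driver when no sample is given
        listing = subprocess.run(
            ["nvidia-smi", "-L"], capture_output=True, text=True
        ).stdout
    return [line for line in listing.splitlines() if line.startswith("GPU ")]
```

Running two workers, one per GPU of a full K80, would only be possible if the session exposed both halves, i.e. if this returned two entries.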