#12 |
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
139C₁₆ Posts
Assuming a VM instance is obtained, and the script connects to and runs from a Google Drive folder, there are several steps to making that Google Drive connection from the Colaboratory script. Note: in the attached screen captures, my account names and some other data were blacked out afterward; yours won't be when you use Google.
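The connection step can also be done in code. A minimal sketch, assuming the standard google.colab drive API; the Drive folder name "gimps" is hypothetical, and the fallback branch only exists so the cell degrades gracefully outside Colab:

```python
# Hedged sketch of the Drive connection step from a Colab code cell.
# The folder name "gimps" is an illustrative assumption, not from the post.
try:
    from google.colab import drive  # only importable inside a Colab VM
    drive.mount('/content/drive')   # prompts for an authorization code on first run
    workdir = '/content/drive/My Drive/gimps'  # hypothetical work folder
except ImportError:
    workdir = '.'  # not running in Colab; fall back to the local directory
print(workdir)
```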
But as of 2020-02-10, there's this, which should streamline things considerably: https://twitter.com/GoogleColab/stat...29213560610818 (Thanks for the notification, kracker.)

Top of this reference thread: https://www.mersenneforum.org/showthread.php?t=24839
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2020-02-18 at 01:26
#13
Either from overuse by one account or due to general high demand, a Colaboratory gpu, or even a VM, may sometimes be unavailable.

A runtime configuration that requires a gpu will generate a message asking whether to connect to and use a backend with no accelerator (gpu). Even if no gpu is available and the script does not require one, it may still be the case that the script cannot be run, because no backend is available at all. Screen captures of the corresponding messages are attached. I hit both conditions, in October 2019 and since, on my second account in use, while the first sometimes continues to run a script; sometimes neither can obtain a VM at all for many hours.

Launching a script that requires a gpu when one is not available may leave a terminated script and a hidden background task running mprime on the cpu. This can be monitored with !top -d 60 or similar in a separate code section.

Top of this reference thread: https://www.mersenneforum.org/showthread.php?t=24839
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2020-02-09 at 17:43
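Besides !top -d 60, one way to watch for that hidden mprime task is to filter process-listing output in a separate code cell. A sketch under stated assumptions: the function name and the sample listing are my own illustration, not from the post:

```python
def runaway_tasks(ps_output, name='mprime'):
    """Return the lines of ps/top style output that mention `name`."""
    return [line for line in ps_output.splitlines() if name in line]

# In a real Colab cell one might feed it live output, e.g.:
#   import subprocess
#   out = subprocess.run(['ps', '-ef'], capture_output=True, text=True).stdout
#   print(runaway_tasks(out))
sample = "UID  PID CMD\nroot 101 bash\nroot 202 ./mprime -d\n"
print(runaway_tasks(sample))  # → ['root 202 ./mprime -d']
```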
#14
The Google Colaboratory project uses github for issue reporting and tracking. See https://github.com/googlecolab/colab...ssue+is%3Aopen

For questions, there is stackoverflow with the tag google-colaboratory. See https://stackoverflow.com/questions/...-colaboratory/ My question about preferring or requiring a specific gpu model has had dozens of views but no replies yet: https://stackoverflow.com/questions/...-or-to-require

Per the Colab FAQ, the way to send feedback is: in a Colab notebook, select the Help menu, then "Send feedback..." Example feedback (which can also include a screen shot, although this one did not):

It could sometimes be very useful to be able to determine in a script whether a gpu was available, and if so which one, and branch based on that. See post #16 in this thread for detect & branch code. It would also be useful to be able to require or request a particular gpu model; there is no solution in the Colab menus for that yet, but the detect-and-branch-in-python approach could be used. See also https://stackoverflow.com/questions/...-or-to-require

Connecting as chalsall describes in https://www.mersenneforum.org/showpo...&postcount=943 seems to help when Colab and Google Drive are operating normally; I see runs up to 10 or 12 hours with it, and under 7 hours (occasionally only minutes) without it.

Top of this reference thread: https://www.mersenneforum.org/showthread.php?t=24839
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2021-02-22 at 16:56
#15
The gpu model allocated to a session may be any of the following four. There is currently no way to select a model or to indicate a gpu-model preference or requirement. (There are reports of V100 in the paid tier, but I have no data on it from the free tier.)
I think all prices below are for used hardware, except the Radeon VII.
Code:
Tesla P100  https://www.techpowerup.com/gpu-specs/tesla-p100-pcie-16-gb.c2888
  16 GB HBM2, 732 GB/sec, dual-slot, 250 W, FP64 4.763 TFLOPS (1/2)
  1175 GHzD/day TF, 173.4 LL (95M); $2150 on eBay
  indicates 0 of 16280 MiB allocated at Colab notebook launch

Tesla P4    https://www.techpowerup.com/gpu-specs/tesla-p4.c2879
  8 GB, 192 GB/sec, single-slot, 75 W, FP64 178.2 GFLOPS (1/32)
  512 GHzD/day TF, 32.5 LL (95M); $1900 on eBay
  indicates 0 of 7611 MiB allocated at Colab notebook launch

Tesla K80   (note: dual gpu; specs below are per card, not per gpu)
  12 GB x2, 240.6 GB/sec x2, dual-slot, 300 W, FP64 1371 GFLOPS (1/3)
  766.7 GHzD/day TF, 115.1 LL (95M); $325 on eBay
  indicates 0 of 11441 MiB allocated at Colab notebook launch

Tesla T4
  16 GB, 320 GB/sec, single-slot, 70 W, FP64 254.4 GFLOPS (1/32)
  2467. GHzD/day TF, 59.3 LL (95M); $1600 on eBay
  indicates 0 of 15079 MiB allocated at Colab notebook launch

Tesla V100  https://www.techpowerup.com/gpu-specs/tesla-v100-sxm2-16-gb.c3018
  16 GB HBM2, 897 GB/sec, mezzanine or dual-slot, 250 W, FP64 7.834 TFLOPS (1/2)
  4162. GHzD/day TF, 221 LL (95M); $2900 on eBay
  (never seen one of these in Colab myself)
For comparison:
Code:
Tesla C2075
  6 GB, 144 GB/sec, dual-slot, 247 W, FP64 515.2 GFLOPS (1/2)
  282.2 GHzD/day TF, 22.2 LL (95M); $80 on eBay

Radeon VII
  16 GB HBM2, 1024 GB/sec, dual-slot, 295 W, FP64 3.36 TFLOPS (1/4)
  1113.6 GHzD/day TF, 280.9 LL (95M); the PRP king; currently $800+ on eBay

RTX 2080
  8 GB, 448 GB/sec, dual-slot, 215 W, FP64 314.6 GFLOPS (1/32)
  2703 GHzD/day TF, 65 LL (95M); $500 on eBay
Note: in gpuowl, use -maxAlloc m, where m is the megabyte limit per gpuowl instance, g is free megabytes on the idle gpu, n is the number of gpuowl instances per gpu, and b = 1000 (or perhaps more if there are problems at 1000); m <= (g - b)/n. Or go higher when using multiple instances per gpu with memlock and -pool in gpuowl v7.

Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2020-10-22 at 17:55 Reason: add V100, V7 mem, misc edits
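The -maxAlloc arithmetic m <= (g - b)/n can be sketched as a small helper. The function name is my own; the free-MiB figures in the examples are the at-launch values listed above:

```python
def max_alloc_mb(free_mb, instances=1, reserve_mb=1000):
    """Per-instance gpuowl -maxAlloc limit: m <= (g - b) / n,
    with g = free megabytes on the idle gpu, n = instances per gpu,
    b = reserve (1000, or more if there are problems at 1000)."""
    return (free_mb - reserve_mb) // instances

print(max_alloc_mb(16280))     # P100, one instance: 15280
print(max_alloc_mb(15079, 2))  # T4, two instances: 7039
```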
#16
Thanks to a bit of code included in the "Taking advantage of colab pro" page, it's possible to have a Google Colaboratory script detect, and branch based on, the gpu model or the no-gpu condition. This allows making use of whatever gpu model one gets, with separate per-model folders, ini file, config.txt, etc. tuned to or benchmarked for the specific gpu model, and avoids some error messages from the no-gpu-available case.

The attached example script implements that branching, running gpu and cpu work as available and identifying what's happening, and it gives the user the choice to proceed with a cpu-only session or quit. It also warns at startup when the worktodo file for whichever gpu model is detected is smaller than a settable threshold. (Mprime is presumed to be PrimeNet-connected and getting new work that way as needed.) A companion script section for building the latest commit of gpuowl is also included.

This could probably be tidied up some with use of Python functions. If you see ways to make this more efficient, please respond in the discussion thread https://www.mersenneforum.org/showthread.php?t=23383, clearly, in the form: change "whatever" to "newstuff".

Top of this reference thread: https://www.mersenneforum.org/showthread.php?t=24839
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2020-09-18 at 18:20
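A minimal sketch of the detect-and-branch idea (not the attached script itself), assuming nvidia-smi is on the path; the function names and the 100-byte worktodo threshold are illustrative assumptions:

```python
import os
import subprocess

def detect_gpu():
    """Return the gpu model string reported by nvidia-smi, or None if no gpu."""
    try:
        out = subprocess.run(
            ['nvidia-smi', '--query-gpu=name', '--format=csv,noheader'],
            capture_output=True, text=True, timeout=10)
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return None  # no nvidia-smi binary: the no-gpu-available case
    return out.stdout.strip() or None

def worktodo_low(path, min_bytes=100):
    """True when the worktodo file is missing or below the size threshold."""
    try:
        return os.path.getsize(path) < min_bytes
    except OSError:
        return True

model = detect_gpu()
if model is None:
    print('no gpu detected; cpu-only (mprime) branch')
else:
    # e.g. select a per-model folder such as 'Tesla_T4'
    print('gpu branch; per-model folder could be', model.replace(' ', '_'))
```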
#17
dylan14 posted several versions of scripts for getting work assignments from, and submitting results to, mersenne.ca for mfaktc TF above exponent 1,000,000,000, including:

https://www.mersenneforum.org/showpo...&postcount=598
https://www.mersenneforum.org/showpo...&postcount=600
https://www.mersenneforum.org/showpo...&postcount=650
https://www.mersenneforum.org/showpo...&postcount=654
https://www.mersenneforum.org/showpo...&postcount=711
https://www.mersenneforum.org/showpo...&postcount=768
https://www.mersenneforum.org/showpo...&postcount=915

Top of this reference thread: https://www.mersenneforum.org/showthread.php?t=24839
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2021-02-22 at 16:47
#18
Fan Ming posted a compile of GMP-ECM for gpu. It's not useful for wavefront GIMPS work, and I haven't tried it, but there it is, at https://www.mersenneforum.org/showpo...&postcount=729

Top of this reference thread: https://www.mersenneforum.org/showthread.php?t=24839
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2021-02-22 at 17:00
#19
Without persistent storage, long runs would not be possible, and short runs might be repeated from the start.
Google Drive works. Other cloud storage might or might not; I haven't tried any other than free Google Drive. Google Drive free capacity is 15 GB, including its trash folder. Note that Google offers multiple free mail, storage, etc. accounts per person, so one's personal or other email and other cloud storage can be segregated by account, allowing multiple Colab-only accounts to be set up, each using its full free 15 GB. Mprime and gpuowl clean up after themselves. Cleaning out the trash: https://mersenneforum.org/showpost.p...postcount=1025

"If you'd like to purchase more Drive space, visit Google Drive. Note that purchasing more space on Drive will not increase the amount of disk available on Colab VMs. Subscribing to Colab Pro will." https://research.google.com/colaboratory/faq.html

Standard plan Google One (100 GB) is $20/year; Advanced (200 GB) $30/year; Premium (2 TB) $100/year. https://one.google.com/about#upgrade

Top of this reference thread: https://www.mersenneforum.org/showthread.php?t=24839
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2021-02-22 at 17:13
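Since the free 15 GB includes the trash folder, it can be handy to check remaining space from a code cell. A small sketch using only the standard library; the 1 GiB warning threshold and the function name are my own choices:

```python
import shutil

def free_gib(path='.'):
    """Free space in GiB at `path` (e.g. a mounted Drive folder)."""
    return shutil.disk_usage(path).free / 2**30

# On a 15 GB free-tier Drive, warn when nearly full:
if free_gib('.') < 1.0:
    print('warning: under 1 GiB free; consider emptying the Drive trash folder')
print(round(free_gib('.'), 1), 'GiB free')
```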