mersenneforum.org > Great Internet Mersenne Prime Search > PrimeNet

Old 2019-04-05, 19:34   #1
kriesel
 
 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

5,807 Posts
Default manual result submission page feature request

If the code that interprets manual result submissions were extended to recognize and process records like the following example record from CUDALucas 2.06, it would enable progress reporting for CUDALucas on GPUs. This progress reporting could be optional. It would help reduce expirations, reassignments, and duplicate work for tasks that are progressing on GPUs, and it would also increase the collection of interim residues for comparison in double checks.
Field descriptions:
Code:
|   Date     Time    |   Test Num     Iter        Residue        |    FFT   Error     ms/It     Time  |       ETA      Done   |
Example record:
Code:
|  Mar 30  07:52:38  |  M84270409  40000000  0xd6cd73de2029c850  |  4608K  0.21094   5.6984  569.84s  |   2:22:21:27  47.46%  |
This is the default console output record style.
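For concreteness, here is a rough sketch (in Python, purely illustrative, not actual PrimeNet server code) of a pattern that could recognize this default record; the group names are my own labels taken from the field-description header above.

```python
import re

# Sketch only: match the default CUDALucas 2.06 console record shown above.
# Group names are illustrative labels, not anything CUDALucas defines.
CUDALUCAS_LINE = re.compile(
    r"\|\s*(?P<date>\w+\s+\d+)\s+(?P<time>[\d:]+)\s*\|"
    r"\s*M(?P<exponent>\d+)\s+(?P<iteration>\d+)\s+0x(?P<residue>[0-9a-f]{16})\s*\|"
    r"\s*(?P<fft>\d+K)\s+(?P<error>[\d.]+)\s+(?P<ms_per_iter>[\d.]+)\s+(?P<block_secs>[\d.]+)s\s*\|"
    r"\s*(?P<eta>[\d:]+)\s+(?P<done>[\d.]+)%\s*\|"
)

line = ("|  Mar 30  07:52:38  |  M84270409  40000000  0xd6cd73de2029c850  |"
        "  4608K  0.21094   5.6984  569.84s  |   2:22:21:27  47.46%  |")
m = CUDALUCAS_LINE.search(line)
print(m.group("exponent"), m.group("iteration"), m.group("done"))
```

A submission parser built this way could simply ignore lines that do not match, so existing result records would be unaffected.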

If it worked out for CUDALucas, it could be extended to accommodate some version of gpuowl as well. (Unfortunately there's a lot of variation in gpuowl's console and log output format from version to version, and depending on the fft length, different versions of gpuowl are fastest.)

Please try adding this for CUDALucas (without breaking the existing functionality). I'd be happy to field test it when ready for that.

Last fiddled with by kriesel on 2019-04-05 at 19:40
Old 2019-04-05, 19:53   #2
chalsall
If I May
 
 
"Chris Halsall"
Sep 2002
Barbados

2·17·293 Posts
Default

Quote:
Originally Posted by kriesel View Post
Please try adding this for CUDALucas (without breaking the existing functionality). I'd be happy to field test it when ready for that.
Old 2019-04-05, 21:29   #3
kriesel
 
 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

1011010101111₂ Posts
Default ETA field size varies

A later example, with 0 days 6 hr 44 min 26 sec remaining. Eventually it gets down to minutes and seconds, and "0:00" ETA can occur, and did during my DC of M118000081.
Code:
|  Apr 01  22:35:07  |  M84270409  80000000  0x042f3884c19038dd  |  4608K  0.22070   5.6551  565.51s  |      6:44:26  94.93%  |
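A parser would need to tolerate that varying width. One way to handle it, weighting the colon-separated fields right-to-left as seconds, minutes, hours, days (an assumption based on the examples in this thread):

```python
def eta_to_seconds(eta):
    """Convert a CUDALucas ETA field to seconds. The field shrinks as the
    run progresses (D:HH:MM:SS, then H:MM:SS, then M:SS), so the parts are
    weighted right-to-left; the format itself is inferred from the examples
    in this thread."""
    parts = [int(p) for p in eta.split(":")]
    weights = (1, 60, 3600, 86400)  # seconds, minutes, hours, days
    return sum(p * w for p, w in zip(reversed(parts), weights))

print(eta_to_seconds("2:22:21:27"))  # 2 days 22 h 21 min 27 s
print(eta_to_seconds("6:44:26"))
print(eta_to_seconds("0:00"))
```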
Old 2020-04-04, 19:49   #4
kriesel
 
 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

16AF₁₆ Posts
Default

Now that so many performance improvements have been made in Gpuowl v6.11, I think previous versions of Gpuowl are of little relevance. So supporting interim residues from Gpuowl v6.11 or CUDALucas v2.06 alone would accomplish a lot.
Sample gpuowl console line:
Code:
2020-04-03 15:34:04 condorella/rx550 131500093 OK  6000000   4.56%; 29302 us/it; ETA 42d 13:30; 1ba5fe1928b87e3e (check 11.94s)
Note: the line could also include an error count at the right, e.g. " 1 errors".
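As an illustration of what recognizing this format might involve (a sketch only; group names are mine, and other gpuowl versions use different formats, so a real parser would need per-version handling):

```python
import re

# Illustrative pattern for the gpuowl v6.11 progress line quoted above.
GPUOWL_LINE = re.compile(
    r"(?P<stamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+(?P<host>\S+)\s+"
    r"(?P<exponent>\d+)\s+OK\s+(?P<iteration>\d+)\s+(?P<done>[\d.]+)%;\s+"
    r"(?P<us_per_iter>\d+) us/it;\s+ETA (?P<eta>\d+d \d+:\d+);\s+"
    r"(?P<res64>[0-9a-f]{16})"
)

line = ("2020-04-03 15:34:04 condorella/rx550 131500093 OK  6000000"
        "   4.56%; 29302 us/it; ETA 42d 13:30; 1ba5fe1928b87e3e (check 11.94s)")
m = GPUOWL_LINE.search(line)
print(m.group("exponent"), m.group("done"), m.group("eta"), m.group("res64"))
```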

Last fiddled with by kriesel on 2020-04-04 at 19:56
Old 2021-09-02, 14:41   #5
tdulcet
 
 
"Teal Dulcet"
Jun 2018

28₁₆ Posts
Default New PrimeNet script

Quote:
Originally Posted by kriesel View Post
If the code that interprets manual result submissions was extended to be able to recognize and process records like the following example record from CUDALucas 2.06, it would enable progress reporting for CUDALucas on gpus. This progress reporting could be optional. It would be useful for reducing expiration, reassignment, and duplicate work for tasks that are progressing on gpus. It would also increase the collection of interim residues for comparison in double checks.
I am not sure if you saw our dedicated thread, but @danc2 and I created a new PrimeNet Python script, which supports assignment progress reporting using the v5 PrimeNet API, similar to Prime95/MPrime. It has supported CUDALucas 2.06 for over a year and I just recently added support for GpuOwl. It supports reporting the current percentage and estimated completion date, although not yet the interim residues. This is mainly because the log files generally do not include the residues for the needed iterations. Interim residue reporting also seems to be superseded by the introduction of proofs, at least for PRP tests.

Quote:
Originally Posted by kriesel View Post
If it worked out for CUDALucas, it could be extended to accommodate some version of gpuowl also. (Unfortunately there's a lot of variation in console or log output format for gpuowl versus version, and depending on the fft length, different versions of gpuowl are fastest.)
For CUDALucas, the user can rearrange the columns in the output and also change the precision. We tried to support that, but obviously can provide no guarantees if people use a non-default output format, so we recommend that users keep the default output format. To use the script with CUDALucas, just redirect its output to a file and then pass the filename to our script's --cudalucas option. For example, if the output file is cudalucas.out, just use --cudalucas cudalucas.out the first time you run it.

For GpuOwl, yes, there is a huge variation in the log output format. I specifically targeted and tested with GpuOwl v6.11-380 and the latest version (currently v7.2-70), although it likely also works with the versions in between. It supports all the worktypes supported by those versions. For GpuOwl v6.11, it supports LL/PRP and P-1 assignment progress, including standalone P-1 and when done before an LL/PRP test. (GpuOwl does not support doing P-1 before LL tests, but users can easily modify it to do that.) For GpuOwl v7.2, it supports PRP assignment progress, including P-1 when combined with a PRP test. To use the script with GpuOwl, just use the -g/--gpuowl option the first time you run it. It will get the needed information from GpuOwl's gpuowl.log file.

For P-1 stage 2, particularly with v6.11, there is not enough information in the log file to accurately calculate the estimated time to completion, so it currently just uses the -t/--timeout option's value. I am open to better ideas if anyone has any. With v7.2, I am aware that there is an ETA value that it could parse, and I may do that in a future version, but I would prefer to directly calculate it.

Quote:
Originally Posted by kriesel View Post
Please try adding this for CUDALucas (without breaking the existing functionality). I'd be happy to field test it when ready for that.
I am guessing you no longer have interest in CUDALucas, although any help testing the assignment progress reporting with GpuOwl would be very much appreciated. Feedback is also welcome! For historical reasons and simplicity, many of the options listed in our script's -h/--help output make references to CPUs, such as --cpu_model or -c/--cpu_num. When using our script with a GPU, users can just read those occurrences of CPU as if they said GPU.

Note that PRP proof uploading is not yet supported. I have implemented most of the needed code in my local copy of the script, but I have not yet collected enough PRP proofs from GpuOwl to properly test it. The GPU I am currently using is extremely slow and takes over two months per first-time PRP test... As soon as I am able to finish that, I will likely officially announce GpuOwl support in one of the GpuOwl threads.

Another minor note: the script uses the PRPDC= prefix for PRP DC assignments. This standard is supported by Prime95/MPrime and Mlucas, but not GpuOwl. If anyone wants to do PRP DC assignments, they would need to replace the kind == "PRP" on this line of GpuOwl with kind == "PRP" or kind == "PRPDC" before compiling it. Linux users could also just run this command to do that:
Code:
sed -i 's/kind == "PRP"/kind == "PRP" or kind == "PRPDC"/' Worktodo.cpp
Old 2021-09-02, 15:29   #6
kriesel
 
 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

5,807 Posts
Default

Yes, I'm aware. That's an option for some, who are willing to install Python on however many systems they run GIMPS GPU applications on, and your primenet.py in however many places. I choose not to install Python on each of dozens of systems for a single use, install and configure primenet.py in each GPU application instance folder, learn more Python and do a careful code review of your Python script beforehand, abandon use and development of my own log analysis/results-gathering script, etc.

I know I am not alone in the GIMPS user community in being somewhat Python-resistant. (Not saying Python's a bad choice, just there is resistance. For reasons that seem to make sense to those resistant.)

And it would be only a workaround, not a solution, for the server's shortcoming of not accepting manual reports of progress occurring. Which is what this thread was created about.

CUDALucas is still in limited justifiable use, mostly for low category 4 DC on old GPUs incapable of running Gpuowl, although those GPUs might be better employed for production P-1 or TF, or retired due to poor power efficiency per unit performance. First-time LL tests are unjustifiable for several reasons. CUDALucas is slow compared to Gpuowl on GPUs capable of running both (which includes all Colab GPUs), even before considering the >2:1 advantage of PRP & proof over LL & DC & occasional TC, etc. (Yes, there are some storage space concerns on Colab in some cases, discussed previously, but solvable.) See https://www.mersenneforum.org/showpo...35&postcount=1
See the reference info for a variety of benchmarks and much other data on the major GIMPS applications.

Last fiddled with by kriesel on 2021-09-02 at 16:12
Old 2021-09-03, 05:04   #7
danc2
 
Dec 2019

5·7 Posts
Default

Really nice work @tdulcet! Updated Primenet software for GpuOwl is sorely needed. Considering other posts that make mention of GpuOwl being quicker at processing (at least in some cases) than CUDALucas, this is welcome news.
Old 2021-09-03, 15:37   #8
tdulcet
 
 
"Teal Dulcet"
Jun 2018

2³·5 Posts
Default

Quote:
Originally Posted by kriesel View Post
Yes, @danc2 originally created the PrimeNet script for his Colab GPU notebook, so users could use the notebook without having to provide their PrimeNet password. However, it fully supports Mlucas and CUDALucas and will soon fully support GpuOwl. It is designed to replace all of the previous PrimeNet Python scripts for primality testing, including those by Loïc Le Loarer, @ewmayer, @Mark Rose, @TeknoHog, @preda and many others. It is the only third party tool that currently supports fully using the v5 PrimeNet API. It is also the only script I am aware of that supports multiple GIMPS programs. Instead of having a separate PrimeNet script with different authors for every GIMPS program, I wanted a single script that worked with all of the third party primality-testing programs. We also wanted users of those third party primality testing programs to be able to enjoy the same advantages as users of Prime95/MPrime, like lower category assignments, being able to monitor their progress online and not having to provide their password.

Quote:
Originally Posted by kriesel View Post
That's an option for some, who are willing to install Python on however many systems they run GIMPS GPU applications upon, and your primenet.py in however many places. I choose not to install Python on each of dozens of systems for a single use, install and configure primenet.py in each GPU application instance folder, learn more Python & do a careful code review of your Python script beforehand, abandon use and development of my own log analysis/results gathering script, etc.
There are lots of ways to bundle scripts with Python into a single executable for Windows users. If there is sufficient interest in this, I would be happy to look into it further. Our script does not require installing or configuring anything besides Python and the Requests library (which is usually included with Python 3). Even if you are running GpuOwl on Windows, you can still run the script from within the WSL, where Python 3 is likely already installed. (You can actually run both your Windows build of GpuOwl and our PrimeNet script from within the WSL if that is easier.) Once you have registered the first computer with PrimeNet, you can even copy the local.ini config file to the other computers; just delete the ComputerGUID, HardwareGUID and ComputerID lines first, and the script will automatically register the other computers with the same options. While you are more than welcome to review the code, using our script does not require learning or knowing anything about Python. The script should also work fine alongside any of your existing scripts, as it will not modify any of your existing files, except of course for adding assignments to your worktodo file when needed.
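That copy-and-delete step could itself be scripted. A hypothetical sketch (the [PrimeNet] section name and exact file layout are my assumptions for illustration; the key names come from the post above):

```python
# Drop the machine-specific lines from a copied local.ini so the PrimeNet
# script re-registers the new computer. The section name and layout are
# assumptions; the key names are from the post.
DROP_KEYS = ("ComputerGUID", "HardwareGUID", "ComputerID")

def strip_machine_ids(ini_text):
    kept = []
    for line in ini_text.splitlines():
        key = line.split("=", 1)[0].strip()
        if key not in DROP_KEYS:
            kept.append(line)
    return "\n".join(kept)

sample = "[PrimeNet]\nComputerGUID=abc123\nusername=kriesel\nComputerID=host1\n"
print(strip_machine_ids(sample))
```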

As you may recall, I have a Bash CUDALucas install script for Linux users that downloads, builds, sets up and runs CUDALucas and our PrimeNet script. I am also working on a GpuOwl install script to automate the installation of GpuOwl and our PrimeNet script on Linux, which I will likely release along with the PrimeNet script. Both of these scripts automate everything, so users only need to run a single command, which is great when installing on multiple computers.

Quote:
Originally Posted by kriesel View Post
I know I am not alone in the GIMPS user community in being somewhat Python-resistant. (Not saying Python's a bad choice, just there is resistance. For reasons that seem to make sense to those resistant.)
I completely understand this. Python is not my favorite programming language either, mainly because it can be very slow and it is a nightmare to support multiple Python versions from a single script. It is also dynamically typed, so most errors do not occur until runtime. However, Python includes an extensive set of built-in modules that makes it easy to support multiple operating systems from a single script, which is why I think it is the best choice for this project. I believe that is also likely why all of the previous authors of the script chose to use Python, although you would of course have to ask them...

Quote:
Originally Posted by kriesel View Post
And it would be only a workaround, not a solution, for the server's shortcoming of not accepting manual reports of progress occurring. Which is what this thread was created about.
Uploading a whole log file, which could be hundreds or thousands of lines, just to report the current assignment progress seems extremely inefficient. The PrimeNet v5 API was already created for this purpose, and it is of course what Prime95/MPrime uses. We implemented the progress reporting for CUDALucas and GpuOwl as you requested; our script just does the parsing of the output/log files on the client side instead of on the PrimeNet server. It then submits the progress to the PrimeNet server using the Assignment Progress endpoint of the v5 API, the same as Prime95/MPrime. Using the v5 API has several advantages compared to manual testing:
  • The systems can get much smaller Category 0 and 1 exponents.
  • The stats for the individual systems are listed on the GIMPS website, instead of being combined into a single "Manual testing" system.
  • Users can then monitor the progress of their systems on the GIMPS website CPUs page. Other users can also monitor the progress for the exponents the user is testing.
  • Users can change their worktype preference for each CPU/GPU/worker of each system directly on the GIMPS website, which is nice for users with many systems.
  • Users can choose to be automatically notified by the PrimeNet server if a system does not check in when it was supposed to.
  • If a user returns a bad or suspect LL DC result, it is much easier to identify the problematic system.
  • The script does not have to queue up a fixed amount of work in advance, since it knows approximately how long the current assignment(s) will take on the system and can get only what is needed. On slow systems, users do not have to remember to manually request extensions on their assignments.
  • Users do not have to trust the script with their PrimeNet password.
  • Users can unreserve assignments directly from the system, in addition to the GIMPS website.
Anyway, I was just opening it up to you or anyone who wanted to help us test the progress reporting for GpuOwl feature before we officially announced GpuOwl support.

Quote:
Originally Posted by danc2 View Post
Really nice work @tdulcet!
Thanks Daniel!
Old 2021-09-03, 18:17   #9
kriesel
 
 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

5,807 Posts
Default

Quote:
Originally Posted by tdulcet View Post
Uploading a whole log file, which could be hundreds or thousands of lines, just to report the current assignment progress seems extremely inefficient.
I'm unaware of anyone ever having proposed that prior to your post. What I suggested is that the manual submission web form could be modified to accept the current last line of output as a progress indication for a manual assignment.

Quote:
you can still run the script from within the WSL where Python 3 is likely already installed.
Ugh, so install WSL AND Linux AND Python AND the script everywhere? No.
I just checked "which python" in Ubuntu Linux atop WSLx on my sub-fleet for Mlucas build/test systems on several differing hardware models.
1 of 6 Ubuntu 18.04 LTS on WSL1 produced a non-null response (/usr/bin/python); 0 of 1 Ubuntu 20.04 LTS on WSL2. (Most systems don't have WSL or Linux.) The Canonical Ubuntu distro for WSL installation does not even install gcc or make. Perl however is present in every WSL install I had available to check. I've done some Windows-side perl installs, but IIRC not any on Ubuntu/WSL, so perl apparently came with the base install.

Quote:
It is also the only script I am aware of that supports multiple GIMPS programs.
You haven't seen my script, which would be much more complete and released if I hadn't gotten distracted by compiling reference info and other things. Oh well. Choices are good.

Quote:
There are lots of ways to bundle scripts with Python into a single executable for Windows users.
Yes. But they tend to be quite large, much larger than the attachments limit when posting to this forum, and slow to load.

Last fiddled with by kriesel on 2021-09-03 at 19:12
Old 2021-09-04, 14:33   #10
tdulcet
 
 
"Teal Dulcet"
Jun 2018

2³·5 Posts
Default

Quote:
Originally Posted by kriesel View Post
I'm unaware of anyone ever having proposed that prior to your post. The current last one line is what I suggested the manual submission web form could be modified to accept as progress indication for a manual assignment.
Oh, OK. Note that the last line does not always provide enough information. For both Mlucas and GpuOwl, our script has to traverse the logs back to when the program and/or exponent was first started to get some needed information, like the FFT length or, for P-1 tests, the number of bits (stage 1) and the buffers/blocks (stage 2). Our script also takes the average of up to the last five ms/iter (or us/iter in GpuOwl) values to more accurately compute the estimated completion date. That is why I think it is better to do the parsing of the log files on the client side and then send the values to the PrimeNet server using the v5 API.
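The averaging described above might look something like this (a sketch with made-up numbers, not the script's actual code):

```python
# Sketch of the ETA computation: average the last (up to) five
# per-iteration timings and project over the remaining iterations.
# All numbers here are fabricated for illustration.
def estimate_seconds_left(ms_per_iter_samples, current_iter, total_iters):
    recent = ms_per_iter_samples[-5:]           # last up to five samples
    avg_ms = sum(recent) / len(recent)          # smoothed ms per iteration
    return (total_iters - current_iter) * avg_ms / 1000.0

samples = [5.70, 5.69, 5.66, 5.65, 5.66, 5.65]  # fabricated ms/iter history
print(estimate_seconds_left(samples, 80_000_000, 84_270_409))
```

Averaging smooths out transient slowdowns (other load on the GPU, thermal throttling) that would otherwise make a single-sample ETA jumpy.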

If you want to try the v5 API, you can run this command to register a computer (this assumes your PrimeNet User ID is kriesel):
Code:
curl -sS "http://v5.mersenne.org/v5server/?v=0.95&px=GIMPS&t=uc&g=3e779463f8a4fc328c82e466b9f8bda6&hg=3e779463f8a4fc328c82e466b9f8bda6&wg=&a=Linux64,GpuOwl,v7.2&c=cpu.unknown&f=&L1=8&L2=512&np=1&hp=0&m=0&s=1000&h=24&r=0&u=kriesel&cn=example&ss=19191919&sh=ABCDABCDABCDABCDABCDABCDABCDABCD"
and then this command to report the progress of one of your manual assignments (replace the four things in angle brackets with their actual value):
Code:
curl -sS "http://v5.mersenne.org/v5server/?v=0.95&px=GIMPS&t=ap&g=3e779463f8a4fc328c82e466b9f8bda6&k=<ASSIGNMENT ID>&p=<PROGRESS (PERCENTAGE)>&d=86400&e=<ETA (SECONDS)>&stage=<STAGE (S1 or S2 or LL or PRP)>&ss=19191919&sh=ABCDABCDABCDABCDABCDABCDABCDABCD"
cURL is available on both Windows 10 and Linux, so these commands should work on either. You would then have a new system called "example" listed on the CPUs page with a single assignment, which shows its progress. That is basically what our PrimeNet script does, although it either automatically determines most of those parameters or lets the user change them with command-line options, along with some additional optional parameters. See the v5 API documentation for more information about the parameters.
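For anyone who prefers to build the same request programmatically, here is a standard-library Python equivalent of the progress-report ("t=ap") curl command above. The parameter names are copied from the post; the GUID, security fields, and placeholder values are of course fake.

```python
# Build the same assignment-progress URL as the curl command above.
from urllib.parse import urlencode

params = {
    "v": "0.95", "px": "GIMPS", "t": "ap",
    "g": "3e779463f8a4fc328c82e466b9f8bda6",  # computer GUID from registration
    "k": "ASSIGNMENT-ID-GOES-HERE",          # placeholder assignment ID
    "p": "47.46",       # percent complete
    "d": "86400",       # seconds until the next expected update
    "e": "253287",      # ETA in seconds
    "stage": "LL",      # S1, S2, LL or PRP
    "ss": "19191919", "sh": "ABCDABCDABCDABCDABCDABCDABCDABCD",
}
url = "http://v5.mersenne.org/v5server/?" + urlencode(params)
print(url)  # pass to urllib.request.urlopen(url) to actually submit it
```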

Quote:
Originally Posted by kriesel View Post
Ugh, so install WSL AND Linux AND Python AND the script everywhere? No.
I was just noting that it would be an option if you already had the WSL enabled and did not want to install Python directly on Windows...

Quote:
Originally Posted by kriesel View Post
I just checked "which python" in Ubuntu Linux atop WSLx on my sub-fleet for Mlucas build/test systems on several differing hardware models.
1 of 6 Ubuntu 18.04 LTS on WSL1 produced a non-null response (/usr/bin/python); 0 of 1 Ubuntu 20.04 LTS on WSL2. (Most systems don't have WSL or Linux.) The Canonical Ubuntu distro for WSL installation does not even install gcc or make. Perl however is present in every WSL install I had available to check. I've done some Windows-side perl installs, but IIRC not any on Ubuntu/WSL, so perl apparently came with the base install.
python (i.e. /usr/bin/python) usually points to Python 2 on Linux (run python -V to see), which was discontinued in 2020 and is not installed by default on most newer distributions. You would need to run which python3. Our PrimeNet script supports both Python 2 and 3, but we recommend users use Python 3 if they can. The WSL distributions are extremely minimal and do not include some of the common packages by default, although they do usually include Python 3, and users can of course install anything they want from the respective package manager. On Ubuntu/Debian, you can always run sudo apt install python3 python3-pip -y to install it. For GCC and Make, you would need to run sudo apt install build-essential -y. WSL 1 vs 2 does not matter, as users can switch their distributions between versions at any time.

Quote:
Originally Posted by kriesel View Post
Yes. But they tend to be quite large, much larger than the attachments limit when posting to this forum, and slow to load.
True, there are tradeoffs. If I added this, I would set it up to be automatically created by the CI service on every commit to my repository, similar to what I added to the GpuOwl repository. Regarding your link, when I finish the proof uploading support, it will automatically upload any proofs by default in a background thread after reporting the results. There will also be an --upload_proofs option:
Code:
  --upload_proofs       Report assignment results, upload all PRP proofs and
                        exit.
Old 2021-09-05, 21:02   #11
kruoli
 
 
"Oliver"
Sep 2017
Porta Westfalica, DE

3³×5² Posts
Default

Quote:
Originally Posted by tdulcet View Post
python (i.e. /usr/bin/python) usually points to Python 2 on Linux (run python -V to see), which was discontinued in 2020 and is not installed by default on most newer distributions.
To make python3 the new default, install the python-is-python3 package, available on both Ubuntu and Debian.