Old 2020-06-17, 16:43   #12
M344587487
"Composite as Heck"
Oct 2017
2³·79 Posts

Quote:
Originally Posted by kriesel
  • Yes to your first point; that has been discussed in the technical thread.
  • Assume 50MB per PRP proof. Eyeballing the average primality tests returned per day pre-COVID-19: about 900/day (https://www.mersenne.org/primenet/graphs.php). So some small N times 50 x 900 MB; assume N=4. Client to server, server to AWS verifier, and server to volunteer verifier for the DC make 3 transfers (PrimeNet verification result reports are smaller and can be neglected); allow hefty overhead, say a pessimistic 33% for ECC and other additions to transmission size, equivalent to a fourth transfer. 4 x 50 x 900 MB = 180GB/day of traffic, all of which flows into or out of the PrimeNet server in this assumed configuration: ~16.7Mbit/s average rate. Consumer fiber or cable links could handle that, although the provider may take exception to the large regular load. Madpoo or someone should weigh in on the economics and terms of PrimeNet's network service. Maybe the contemplated configuration is that the PrimeNet server hands out assignments to clients (small messages, so low traffic volume) and the clients all deal with their share of the high-volume traffic directly with the contemplated AWS service; that cuts N by one or somewhat more. An unfortunate dialup modem user probably won't be happy: at a ballpark 20 kbit/s upload rate without compression, one 50MB PRP proof file takes 5.5 hours if it works, and download at the full US-ideal 53.3 kbit/s rate is a bit over 2 hours each. He may want to haul them to the local library or McDonalds on a laptop and use the multi-Mbit/s free wireless instead.
  • Re scheduling the uploads: prime95 already spools ordinary-length results and sends them when it can, and has schedulable memory usage. Making an option available for scheduling the big verification uploads sounds like a good suggestion to me. Having tens or hundreds of MB coincide with and heavily load an uplink can make interactive use sluggish. Not good while many are working from home via remote desktop, VPN, teleconferencing, etc.
  • I trust the PrimeNet server to be well managed and secured, and any additional AWS service employed, FAR more than I trust all 5000+ users' client systems to be sufficiently free of malicious stuff. Requiring torrent peering among them all may be an authorization showstopper for a lot of users and systems; employers may be very unenthusiastic about allowing that.
To the second point: 180GB/day (sanity-checked in the sketch at the end of this post) seems doable for the server, not that I know anything about the logistics, but I was talking from the user POV. In the end 50MiB per test is negligible for the majority of users, who might do one test per day if that; the big boys churning out dozens of tests a day are invested enough in GIMPS to deal with any problems the new requirements might bring. After thinking about it I don't think it's a big deal.

To the last point: the torrent protocol is as trustworthy as direct upload. The worst a malicious user can do (aside from trying to submit invalid verifications by any means) is poison a torrent by sending invalid chunks to waste resources; the chunks will be found invalid and discarded, but the point is to waste the bandwidth of anyone trying to download legitimate chunks of that torrent. A user's access to the tracker's P2P connecting ability can be tied to their PrimeNet account, and malicious conduct flagged and banned. That is academic, however; you're right that torrents are high on the list of things blocked on locked-down networks, so it's a non-starter.
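
The sketch referenced above, reproducing the arithmetic in kriesel's quote (every input is kriesel's assumption, not a measurement):

Code:
# Reproduces the quoted estimate: N transfers x 50MB x ~900 tests/day.
mb, tests, n = 50, 900, 4
gb_day = mb * tests * n / 1000
print(gb_day)                        # 180.0 GB/day
print(gb_day * 8e9 / 86400 / 1e6)    # ~16.7 Mbit/s sustained average

# Dialup, per 50MB proof file: ~20 kbit/s up, ~53.3 kbit/s down
print(mb * 8e6 / 20e3 / 3600)        # ~5.6 hours to upload
print(mb * 8e6 / 53.3e3 / 3600)      # ~2.1 hours to download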

Old 2020-06-17, 17:04   #13
Prime95
P90 years forever!
Aug 2002
Yeehaw, FL
7165₁₀ Posts

A PRP proof file is in the neighborhood of 120MB to 150MB (not 50MB). So, triple the 180GB/day estimate. As exponents get larger over time, so does the proof file size.

IIRC, the server has 1000TB/month before overage charges kick in.
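
For scale, a quick back-of-envelope sketch (reusing kriesel's earlier 180GB/day figure, which assumed 4 effective transfers per proof and ~900 tests/day):

Code:
# 3x the earlier 180GB/day estimate vs. the quoted monthly allowance.
gb_per_day = 3 * 180
tb_per_month = gb_per_day * 30 / 1000
print(gb_per_day, tb_per_month)   # 540 GB/day -> 16.2 TB/month

Even tripled, that is well under 2% of the quoted 1000TB/month allowance.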

Any plan must take into account Ben Delo. His work is on AWS. If we also have a verifier running on AWS, is it possible to send proof files directly to the AWS verifier, saving a great deal of server bandwidth? Again, I have no idea if an AWS verifier is cost-prohibitive. I do love the fact that it eliminates any trust issues arising from handing the verification out as a work assignment to random users.

Timeline -- I can't imagine this going live to users for at least a year. That gives us time to figure out the logistics. No prime95 code has been written; gpuowl is closer to being ready.

Last fiddled with by Prime95 on 2020-06-17 at 17:04

Old 2020-06-17, 17:55   #14
chalsall
If I May
"Chris Halsall"
Sep 2002
Barbados
2×4,657 Posts

Quote:
Originally Posted by Prime95
Again, I have no idea if an AWS verifier is cost-prohibitive.
When assessing the viability of AWS, be sure to include the bandwidth in the costing -- it can be surprisingly high.

Old 2020-06-17, 18:57   #15
kriesel
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
2²·3·7·53 Posts

Ok, it looks pretty good for well-connected end users. Five 150MB proofs uploaded per day amounts to 1 minute at 100Mbit/s upload speed. Low-speed asymmetric DSL like I used to have would be ugly though: 13 hours daily at 128Kbit/s, and at 9/day, 23.4 hours daily. Sustainable satellite internet data rates are too low or too costly.
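
The arithmetic behind those figures, as a small sketch:

Code:
# Daily upload burden for n proof files/day on a given link (150MB each).
def hours_per_day(n, link_bps, proof_mb=150):
    return n * proof_mb * 8e6 / link_bps / 3600

print(hours_per_day(5, 100e6) * 60)  # ~1 minute at 100 Mbit/s
print(hours_per_day(5, 128e3))       # ~13.0 hours at 128 kbit/s
print(hours_per_day(9, 128e3))       # ~23.4 hours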

Old 2020-06-17, 19:06   #16
R. Gerbicz
"Robert Gerbicz"
Oct 2005
Hungary
1398₁₀ Posts

Quote:
Originally Posted by Fan Ming
But if my understanding is correct, if we set r1=r2=...=1 in the verification process (as mentioned in this post: https://www.mersenneforum.org/showpo...9&postcount=14), is it just like doing a single "weak" GEC? We indeed need a person to look into this deeply and judge whether it's actually safe.
You're right; one big advantage of setting r_i=1 is that it gives a nice "ladder" scheme, enabling a fast product calculation and (multiple) error checking.
With the proof you are basically doing one (or multiple) such strong error checks, where you have roughly an O(1/N) chance to fake it (if you don't know the h_i random values), which is a very small chance even for smallish p~10000. It is approximately the same probability the GEC can reach, but it makes sense when the PRP tester is independent from the verifier.
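
For anyone following along: below is a toy sketch of the Pietrzak-style halving argument this kind of proof builds on, with a hash standing in for the random h_i and a small toy modulus. It is illustrative only, not the actual prime95/gpuowl proof format. The prover claims y = x^(2^T) mod N, reveals the midpoint of the squaring chain, and both sides fold the two half-claims into one; after log2(T) rounds a single squaring remains to check.

Code:
# Toy Pietrzak-style halving protocol (small numbers only; the real
# scheme works mod 2^p-1 with residues saved during the PRP run).
import hashlib

def challenge(x, y, mu):
    # deterministic stand-in for the random h_i (Fiat-Shamir style)
    return int.from_bytes(hashlib.sha256(f"{x},{y},{mu}".encode()).digest()[:8], "big")

def prove(x, T, N):
    y = pow(x, 1 << T, N)              # honest prover has this from the run
    mus, xi, yi = [], x, y
    while T > 1:
        T //= 2
        mu = pow(xi, 1 << T, N)        # midpoint of the remaining squaring chain
        mus.append(mu)
        h = challenge(xi, yi, mu)
        xi = pow(xi, h, N) * mu % N    # fold the two half-claims into one
        yi = pow(mu, h, N) * yi % N
    return y, mus

def verify(x, y, T, mus, N):
    xi, yi = x, y
    for mu in mus:
        T //= 2
        h = challenge(xi, yi, mu)
        xi = pow(xi, h, N) * mu % N
        yi = pow(mu, h, N) * yi % N
    return T == 1 and yi == pow(xi, 2, N)   # one cheap final squaring

N, x, T = (1 << 127) - 1, 3, 1024           # tiny toy modulus, T a power of two
y, mus = prove(x, T, N)
print(verify(x, y, T, mus, N))              # True, using only log2(T) midpoints

Forcing every h_i to 1 collapses the folds into the simple product ladder discussed above; the unpredictable h_i are exactly what pushes a cheater's success chance down to roughly O(1/N).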

Old 2020-06-17, 20:04   #17
ewmayer
2ω=0
Sep 2002
República de California
2CFE₁₆ Posts

Quote:
Originally Posted by ATH
I have not read and tried to understand the math in the other thread, but I followed the discussion a bit, and you talk a lot about security against people who want to fake the work and results, which is important.

But will this test also ensure that the calculation itself and the final residue are correct against hardware and software errors during the test?
A useful analogy: in rigorous general-form primality testing (e.g. using Primo or whatever), a crucial concept is that the typically compute-intensive test generates a primality certificate, based on agreed-upon mathematical rules which guarantee correctness of the result (if correctly implemented in software). The certificate allows any independent party to use their own software implementation to check the result drastically more quickly, starting from the certificate generated by the original prover.

What Mihai, George et al have been working on is effectively a nonfactorial-primality-test (PRP) analog of the same principle. In our case the numbers being tested are massively larger than those in general-form primality tests, so the resulting certificate is also much larger.

I've not yet had time to catch up in detailed fashion on recent developments in the VDF thread; I need to do so with a focus on how to ensure the certificate has been correctly checked and how to ensure no fakery is possible. The first is probably less of an issue: since certificate checking is supposed to be very fast, using e.g. 2 independent software implementations to check each certificate would not seem a burdensome requirement, especially compared to the likely much greater effort of moving the large certificate data files around.

One thing concerns me: how will such a scheme affect manual testers? Many of those folks work in sneakernet mode (a non-net-connected machine generates results, and the user copies them to a dedicated connected device to report to the server) out of security/privacy concerns, legitimate or not. The net access needed to upload a current single-line-of-text result is around 5 orders of magnitude less than what the proposed scheme will require.

Last fiddled with by ewmayer on 2020-06-17 at 20:05

Old 2020-06-18, 03:24   #18
kriesel
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
2²×3×7×53 Posts

Quote:
Originally Posted by ewmayer
One thing concerns me: how will such a scheme affect manual testers? Many of those folks work in sneakernet mode (a non-net-connected machine generates results, and the user copies them to a dedicated connected device to report to the server) out of security/privacy concerns, legitimate or not. The net access needed to upload a current single-line-of-text result is around 5 orders of magnitude less than what the proposed scheme will require.
A 64GB memory stick / 150MB (prime95's estimated size for current-wavefront proof files) = ~426 proofs. That would last most of us a while, and quite a few of those sticks fit in one pocket. They will take a long time to write, read, and erase in sequence, though.

Old 2020-06-18, 03:47   #19
Runtime Error
Sep 2017
USA
2⁴×3² Posts

This is very exciting!

Quote:
Originally Posted by S485122
Concerning the PRP/LL ratio: most participants are not very active; they set and forget, which is why very old versions of Prime95 are still reporting results.
How about giving double GHz-days credit for these, since they don't need to be double-checked? Judging by the success of that auction thread, it might incentivize some folks to update.

Quote:
Originally Posted by kriesel
A 64GB memory stick / 150MB (prime95's estimated size for current-wavefront proof files) = ~426 proofs.
If there are bits shared between proof files in a batch, they could be compressed even further, right? They could be unzipped server-side.

Last fiddled with by Runtime Error on 2020-06-18 at 03:48

Old 2020-06-18, 08:49   #20
M344587487
"Composite as Heck"
Oct 2017
2³×79 Posts

Quote:
Originally Posted by Prime95
A PRP proof file is in the neighborhood of 120MB to 150MB (not 50MB). So, triple the 180GB/day estimate. As exponents get larger over time, so does the proof file size.
...
That's pushing what spotty and slow DSL connections can manage in one go, at least on standard websites using whatever standard websites normally use for uploads. Smart partial upload/resume in the programs would solve it for live users. If robust resume is as much of a pain to implement on websites as it seems, allowing split archives on the manual upload page would let manual users manage the issue themselves.
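
A minimal sketch of what client-side resume could look like, assuming a hypothetical upload endpoint that can report how many bytes it already holds; the URL, query parameter, and headers below are made up for illustration, not an actual PrimeNet API:

Code:
# Sketch of a resumable chunked upload; the /proof endpoint and its
# offset-reporting behaviour are hypothetical.
import os
import requests

CHUNK = 4 * 1024 * 1024  # 4 MiB: small enough that a retry is cheap on slow DSL

def upload_proof(path, url="https://example.org/proof"):
    size = os.path.getsize(path)
    name = os.path.basename(path)
    # ask the (assumed) server how far a previous attempt got
    offset = int(requests.get(url, params={"name": name}).text)
    with open(path, "rb") as f:
        f.seek(offset)
        while offset < size:
            chunk = f.read(CHUNK)
            r = requests.put(url, data=chunk, headers={
                "Content-Range": f"bytes {offset}-{offset + len(chunk) - 1}/{size}",
                "X-File-Name": name,
            })
            r.raise_for_status()  # on failure, rerun: resumes from the server's offset
            offset += len(chunk)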

Old 2020-06-18, 11:00   #21
Xyzzy
"Mike"
Aug 2002
1E1E₁₆ Posts

We have a network connection that is so slow it is probably a violation of the Geneva Conventions.

It would greatly help if the client had a built-in adjustable throttling mechanism for uploads.


Old 2020-06-18, 15:31   #22
kriesel
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
1000101100100₂ Posts

Quote:
Originally Posted by Xyzzy
We have a network connection that is so slow it is probably a violation of the Geneva Conventions.

It would greatly help if the client had a built-in adjustable throttling mechanism for uploads.

You have our condolences; been there, suffered through that. Throttling and scheduling would be a good combination, and since prime95 already has these features in other contexts, there's precedent and a little somewhat-reusable code. Maybe (optional?) compression would help some too, although it adds CPU overhead on both ends. Modern web browsers and ftp clients detect transmission errors and can sometimes resume from partial transfers. Let's see what preda and prime95 say about all that.
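
For what an adjustable upload cap might look like, a rough sketch; the send callback is a stand-in for whatever I/O the client actually does:

Code:
# Rough sketch of an adjustable upload throttle: cap the average rate by
# sleeping between chunks. `send` is a stand-in for the client's real I/O.
import time

def throttled_upload(send, data, limit_bps, chunk=64 * 1024):
    start, sent = time.monotonic(), 0
    for i in range(0, len(data), chunk):
        send(data[i:i + chunk])
        sent += len(data[i:i + chunk])
        # how long the transfer *should* have taken so far at the cap
        target = sent * 8 / limit_bps
        sleep = target - (time.monotonic() - start)
        if sleep > 0:
            time.sleep(sleep)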

Last fiddled with by kriesel on 2020-06-18 at 15:36