mersenneforum.org  

Old 2020-06-17, 14:08   #122
patnashev ("Pavel Atnashev")

No, I was talking about the "trick" preda was going to implement, which actually saves nothing and adds network traffic.
Old 2020-06-17, 16:20   #123
kriesel ("TF79LL86GIMPS96gpu17")

Quote:
Originally Posted by patnashev View Post
No, I was talking about the "trick" preda was going to implement, which actually saves nothing and adds network traffic.
I think, if you're referring to Mihai's post 119 in this thread, that what he's describing is entirely local to proof production on one system, involves no network traffic, and is an optimization: use one mode of computation during much of the proof production while it is the faster one, then switch to the other mode for the remainder of that one proof generation when it becomes the faster one. That saves proof-generation time for the PRP tester's system and user. Or maybe I've misunderstood you, him, or both. Please clarify.
And thanks for the Efficient Proth/PRP Test Proof Scheme thread https://mersenneforum.org/showthread...633#post538633

Last fiddled with by kriesel on 2020-06-17 at 16:39
Old 2020-06-17, 16:37   #124
patnashev ("Pavel Atnashev")

He proposes to calculate the Pietrzak VDF u_i values from stored residues until reaching a certain depth, which is fast. But then he proposes to go deeper by calculating the next u value (the "middle") by directly performing half of the remaining iterations. This step decreases the (future) verifier's computation by exactly that same half of the iterations. Zero sum. But that u value needs to be uploaded, and that's an additional 10 MB of network traffic.
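For readers following along, the halving recursion under discussion can be sketched as follows. This is a toy model with hypothetical names and a small modulus, not GIMPS code: real proofs work mod 2^p - 1 with multi-megabyte residues, and the exact challenge derivation differs.

```python
import hashlib

def squarings(x, t, N):
    """t repeated modular squarings: x -> x^(2^t) mod N."""
    for _ in range(t):
        x = x * x % N
    return x

def challenge(cx, cy, mu, N):
    """Toy Fiat-Shamir challenge, bound to the whole round state."""
    data = b"|".join(str(v).encode() for v in (cx, cy, mu, N))
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

def prove(x, T, N, E):
    """Pietrzak-style proof that x^(2^T) == y mod N, with E halving rounds.
    Returns [y, u_1, ..., u_E]: each extra round adds one residue to the
    proof but halves the verifier's remaining squarings."""
    y = squarings(x, T, N)
    proof, cx, cy, t = [y], x, y, T
    for _ in range(E):
        t //= 2
        mu = squarings(cx, t, N)          # the "middle" residue u_i
        proof.append(mu)
        h = challenge(cx, cy, mu, N)
        cx = pow(cx, h, N) * mu % N       # fold the claim:
        cy = pow(mu, h, N) * cy % N       # (cx^h * mu)^(2^t) == mu^h * cy
    return proof

def verify(x, T, N, proof):
    cx, cy, t = x, proof[0], T
    for mu in proof[1:]:
        t //= 2
        h = challenge(cx, cy, mu, N)
        cx = pow(cx, h, N) * mu % N
        cy = pow(mu, h, N) * cy % N
    return squarings(cx, t, N) == cy      # only T / 2^E squarings remain
```

With stored checkpoints the prover gets the shallow u_i almost for free; once the recursion passes the checkpoint spacing, producing the next u_i costs the direct squarings described above, while shaving the same amount off the verifier's final `squarings(cx, t, N)` check.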
Old 2020-06-17, 19:19   #125
R. Gerbicz ("Robert Gerbicz")

Quote:
Originally Posted by preda View Post
Let's see if we extremely limit the range of "h", let's say we always have h==1. Then the implication above obviously does not hold anymore:
(A*M)^q==M*B does not imply (A^q==M && M^q==B)
It doesn't imply it for any h value: to pass the test (for E=1) you need
(A^h*M)^(2^(top/2)) == M^h*B mod N

but if you fix M then B is determined uniquely, so there are N solutions among the N^2 possible (M,B) pairs. That is still fine for us: it means the test can be faked with probability only O(1/N).

Quote:
Originally Posted by preda View Post
What needs uploading is the proof. The proof contains "only" (power + 1) residues.
I still don't understand why you write about such small proofs, when you and George also talk about 2^10 residues, not 10 residues, for power=E=10. Maybe I'm missing something.
I can verify the proof using O(E*p) memory for N=2^p-1, but I need the 2^E residues
3^(2^(t*i/2^E)) mod N for i=1,...,2^E, where t~p (the next multiple of 2^E after p).

Last fiddled with by R. Gerbicz on 2020-06-17 at 19:21 Reason: small typo
Old 2020-06-17, 19:36   #126
patnashev ("Pavel Atnashev")

Quote:
Originally Posted by R. Gerbicz View Post
Still don't understand why you are writing so small proofs, when you and George also talk about 2^10 residues, not 10 residues for power=E=10. Maybe I'm missing something.
That's the power of the Pietrzak VDF. It lets the prover securely compute partial products on its side, effectively "compressing" the upload to the log of the number of intermediate points. Here's a simple infographic I made to illustrate this.
[Attached image ProofInfoP.png (ID 22604): infographic illustrating the proof compression]
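As a back-of-envelope illustration of that compression (using patnashev's earlier ~10 MB/residue figure for current GIMPS-sized exponents; the exact size depends on p):

```python
RESIDUE_MB = 10  # rough size of one residue at current exponents (assumption)

for power in (8, 10, 12):
    uploaded = power + 1        # residues actually sent in the proof
    covered = 2 ** power        # checkpoint intervals the proof certifies
    print(f"power={power}: ~{uploaded * RESIDUE_MB} MB uploaded, "
          f"standing in for ~{covered * RESIDUE_MB} MB of raw checkpoints")
```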
Old 2020-06-17, 20:19   #127
R. Gerbicz ("Robert Gerbicz")

Quote:
Originally Posted by patnashev View Post
That's the power of Pietrzak VDF. It allows to securely compute partial products on the prover side, effectively "compressing" upload size to log of number of intermediate points. Here's a simple infographic I made to illustrate this.
Attachment 22604
Nice colorful picture, though there are very few formulas (zero) in that pic. And I've already shown that it does not prove that all residues are correct.

Last fiddled with by R. Gerbicz on 2020-06-17 at 20:20 Reason: typo
Old 2020-06-17, 20:36   #128
patnashev ("Pavel Atnashev")

Quote:
Originally Posted by R. Gerbicz View Post
It doesn't imply for any h value: to pass the test (for E=1) you need
(A^h*M)^(2^(top/2)) == M^h*B mod N

but if you fix M then you can choose B in a unique way, so it has N solutions, but that is still nice for us, that means you can fake it with O(1/N) probability, since you have N^2 pairs for (M,B).
Even simpler, you can set M=1 and then B = A^(h*2^(top/2)). Or you can take a random M and B = (A^h*M)^(2^(top/2)) / M^h.
All of this is trivial to compute unless h = hash(B), which is the cornerstone of Pietrzak VDF security.
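Gerbicz's uniqueness observation and the M=1 forgery are easy to check numerically. A toy sketch with hypothetical values (the real protocol prevents this by deriving h from a hash the prover cannot anticipate):

```python
N = (1 << 89) - 1           # toy Mersenne-prime modulus
A, top = 3, 256
half = 2 ** (top // 2)      # the exponent 2^(top/2)
h = 12345                   # challenge, assumed known to the attacker in advance

def round_check(A, M, B, h, half, N):
    # the E=1 check quoted above: (A^h * M)^(2^(top/2)) == M^h * B (mod N)
    return pow(pow(A, h, N) * M % N, half, N) == pow(M, h, N) * B % N

# Simplest forgery: M = 1 and B = A^(h * 2^(top/2))
assert round_check(A, 1, pow(A, h * half, N), h, half, N)

# More generally, for ANY fixed M the matching B is uniquely determined:
M = 123456789
B = pow(pow(A, h, N) * M % N, half, N) * pow(M, -h, N) % N  # mod inverse, Python 3.8+
assert round_check(A, M, B, h, half, N)
# ...even though M is not the true midpoint A^(2^(top/2)) mod N
assert M != pow(A, half, N)
```

Once h is a hash the prover must commit to its values before learning h, and the forged pairs above can no longer be constructed.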
Old 2020-06-17, 23:23   #129
preda ("Mihai Preda")

Quote:
Originally Posted by patnashev View Post
He proposes to calculate Pietrzak VDF u_i values using stored residues until reaching a certain depth. This is fast. But then he proposes to go deeper by calculating the next u value (the "middle") by directly performing half of iterations. This step decreases computation of the (future) verifier by exactly the same half of iterations. Zero sum. But that u value needs to be uploaded, and that's an additional 10MB of network traffic.
Yes, your observation is correct: all the "trick" does is move some effort from the verifier to the producer (while increasing the proof size a bit in the process). But please consider these two elements, which may make that transfer worthwhile:

1. The proof is produced once but verified multiple times, at least 2 times or maybe more.
2. There might be some asymmetry between the capacity of the producer and that of the verifier. You may have seen in the discussion that the push for high-power proofs (up to power==12) is driven by the desire for a centralized verifier ("the server"), which removes the need for discrete participants to download the proofs for verification. In this setup it is worthwhile to push some work to the producer just to make things easier for the server.
Old 2020-06-18, 03:50   #130
patnashev ("Pavel Atnashev")

Okay, this really comes down to the architectural choices you make. I just want to discuss the alternatives, so you can make the choice knowing the options.

You can certainly calculate the proof to a great depth and store it for all eternity, to show to anyone willing to verify the test. But I see no practical reason to do so if there is a central authority ("the server") coordinating the search for primes. Storing hundreds of MB per test looks like a burden.

Also, a centralized verifier requires significant computational power and may not scale well. Did you run tests to see how much time it takes to compute (x_12, y_12) from the u_i and to verify x_12 -> y_12? One option to decrease the load on the server is to offload the verification of (x_t, y_t) to a user, who really needs to download only x_t and perform the necessary number of iterations.

Personally, I see no reason to go deeper than 8. The amount of data involved becomes huge without really speeding up the search for primes. It's really an optimization problem: I see bandwidth, storage and server load as parameters important enough to optimize. And the optimal strategy for numbers with 30M digits can definitely differ from that for numbers with 1M digits.
Old 2020-06-18, 04:59   #131
axn

Quote:
Originally Posted by patnashev View Post
Also, having a centralized verifier requires significant computation power and may not scale well.
Which is why we want to put more burden on the author, to reduce the burden on the single central verifier.
Quote:
Originally Posted by patnashev View Post
One option to decrease the load on server is to offload the verification of (x_t, y_t) to a user, who really needs to download only x_t and perform the necessary amount of iterations.
Then we're back to square one. With the central verifier approach, we are able to avoid "trust" as a component of the verification. If you can trust the verifier, you might as well trust the author and only use GEC for error checking.

The only downside of higher values is the amount of bandwidth consumed while uploading the files. EDIT: For regular users it might be OK, especially if you spread out the uploads during the course of the test. But for institutional users with 100s of clients, this could be a major issue.

Last fiddled with by axn on 2020-06-18 at 05:01
Old 2020-06-18, 05:11   #132
patnashev ("Pavel Atnashev")

Quote:
Originally Posted by axn View Post
Then we're back to square one. With the central verifier approach, we are able to avoid "trust" as a component of the verification. If you can trust the verifier, you might as well trust the author and only use GEC for error checking.
You do not need to trust the verifier; he makes no decisions. He only computes x_t^distance. It's the server that compares that value with the stored y_t.

A malicious verifier can only produce a false negative. But the server just sends x_t to the next verifier until consensus is reached. And if the result is still negative, which is extremely rare, some action needs to be taken on the server to verify it.

A malicious verifier can't produce a false positive. There are additional steps to ensure that.
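The division of labour described here can be sketched like this (hypothetical names; the point is that the comparison, and hence the decision, never leaves the server):

```python
def helper_compute(x_t, distance, N):
    # untrusted client: downloads only x_t and performs `distance` squarings,
    # returning x_t^(2^distance) mod N
    for _ in range(distance):
        x_t = x_t * x_t % N
    return x_t

def server_check(reported, stored_y_t):
    # the trust-relevant comparison stays on the server; a wrong report is
    # only a false negative, resolved by re-dispatching x_t to another helper
    return reported == stored_y_t

# Honest run: the helper's answer matches the server's stored y_t
N = (1 << 107) - 1
x_t, distance = 3, 1000
y_t = pow(3, pow(2, distance), N)       # what the server has on file
assert server_check(helper_compute(x_t, distance, N), y_t)
```

An honest helper makes the check pass; a malicious one can only waste a re-dispatch, never certify a bad result.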

Quote:
Originally Posted by axn View Post
especially if you spread out the uploads during the course of the test.
You can't do that. All the proof data can be computed only at the end of the test. It's critically important.