mersenneforum.org > Math VDF (Verifiable Delay Function) and PRP

 2020-06-17, 14:08 #122 patnashev   "Pavel Atnashev" Mar 2020 43 Posts No, I was talking about the "trick" preda was going to implement, which actually saves nothing and adds network traffic.
2020-06-17, 16:20   #123
kriesel

"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

2^3·19·29 Posts

Quote:
 Originally Posted by patnashev No, I was talking about the "trick" preda was going to implement, which actually saves nothing and adds network traffic.
I think if you're referring to Mihai's post 119 in this thread, what he's describing is entirely local to proof production on one system, involving no network traffic. It's an optimization: use one mode of computation while it's the faster one, during much of the proof production, then switch to the other mode of computation when it becomes the faster one, for that one generation of a proof. That saves proof-generation time for the PRP tester's system and user. Or maybe I've misunderstood you, him, or both. Please clarify.

Last fiddled with by kriesel on 2020-06-17 at 16:39

2020-06-17, 16:37 #124 patnashev   "Pavel Atnashev" Mar 2020 43 Posts He proposes to calculate the Pietrzak VDF u_i values using stored residues until reaching a certain depth. This is fast. But then he proposes to go deeper by calculating the next u value (the "middle") by directly performing half of the remaining iterations. This step decreases the computation of the (future) verifier by exactly that same half of the iterations. Zero sum. But that u value needs to be uploaded, and that's an additional 10 MB of network traffic.
2020-06-17, 19:19   #125
R. Gerbicz

"Robert Gerbicz"
Oct 2005
Hungary

7×199 Posts

Quote:
 Originally Posted by preda Let's see if we extremely limit the range of "h", let's say we always have h==1. Then the implication above obviously does not hold anymore: (A*M)^q==M*B does not imply (A^q==M && M^q==B)
The implication fails for every h value: to pass the test (for E=1) you need
(A^h*M)^(2^(top/2)) == M^h*B mod N

but if you fix M, then B is determined uniquely, so there are only N passing pairs among the N^2 possible (M,B) pairs. That is still nice for us: it means you can fake the test with only O(1/N) probability.
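As a quick numeric sanity check of the E=1 identity above: here is a sketch with toy values (N, top, A and h are all made up; a real run would use N = 2^p - 1 for a GIMPS-sized p and a hash-derived h).

```python
# One folding round of the Pietrzak-style check, toy numbers throughout.
N = (1 << 127) - 1                # toy modulus
top = 1 << 10                     # total squaring count (a power of two)
half = 1 << (top // 2)            # 2^(top/2)
A = 3                             # starting residue
M = pow(A, half, N)               # honest midpoint A^(2^(top/2)) mod N
B = pow(M, half, N)               # honest endpoint B = A^(2^top) mod N
h = 0x1234ABCD                    # challenge value

lhs = pow(pow(A, h, N) * M % N, half, N)   # (A^h * M)^(2^(top/2))
rhs = pow(M, h, N) * B % N                 # M^h * B
assert lhs == rhs                          # honest values pass for any h
```

Both sides equal A^(h*2^(top/2))*B when M and B are honest, which is why the check passes for every h; the surrounding discussion is about dishonest (M,B) pairs that also pass unless h is bound to the residues.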

Quote:
 Originally Posted by preda What needs uploading is the proof. The proof contains "only" (power + 1) residues.
I still don't understand why you are describing such small proofs, when you and George also talk about 2^10 residues, not 10 residues, for power=E=10. Maybe I'm missing something.
I can verify the proof using O(E*p) memory for N=2^p-1, but I need 2^E residues for this: the residues
3^(2^(t*i/2^E)) mod N, for i = 0..2^E, where t~p (the next multiple of 2^E after p).
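For concreteness, the checkpoint residues in that formula can be generated like this (a sketch with a toy p of 89; a real p is in the tens of millions, and E, the loop structure and variable names are illustrative):

```python
# Generate the 2^E + 1 checkpoint residues 3^(2^(t*i/2^E)) mod N.
E = 4
p = 89
N = (1 << p) - 1                  # N = 2^p - 1 with a toy p
t = ((p >> E) + 1) << E           # next multiple of 2^E after p
step = t >> E                     # t/2^E squarings between checkpoints

res = [3 % N]                     # the i = 0 checkpoint
for _ in range(1 << E):
    r = res[-1]
    for _ in range(step):
        r = r * r % N             # one modular squaring
    res.append(r)                 # checkpoint for the next i

assert res[-1] == pow(3, 1 << t, N)   # i = 2^E gives 3^(2^t) mod N
```

This is exactly 2^E residues beyond the starting value, which is the storage cost being questioned above.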

Last fiddled with by R. Gerbicz on 2020-06-17 at 19:21 Reason: small typo

2020-06-17, 19:36   #126
patnashev

"Pavel Atnashev"
Mar 2020

43 Posts

Quote:
 Originally Posted by R. Gerbicz Still don't understand why you are writing so small proofs, when you and George also talk about 2^10 residues, not 10 residues for power=E=10. Maybe I'm missing something.
That's the power of the Pietrzak VDF. It allows the prover to securely compute partial products on its side, effectively "compressing" the upload to the logarithm of the number of intermediate points. Here's a simple infographic I made to illustrate this.
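The "compression" can be sketched end to end: a proof of y = x^(2^T) mod N that contains only E+1 residues instead of 2^E, by repeatedly folding the statement in half. This is a sketch only; the challenge derivation and names are illustrative, not the exact GIMPS/gpuowl scheme, and the real prover builds each midpoint from stored residues via partial products rather than by direct exponentiation as `pow` does here.

```python
import hashlib

def challenge(m):
    """Toy Fiat-Shamir-style challenge; real schemes hash more context."""
    return int.from_bytes(hashlib.sha256(str(m).encode()).digest()[:8],
                          'big') | 1

def prove(x, T, N, E):
    """Proof that y = x^(2^T) mod N, as a list of E+1 residues."""
    y = pow(x, 1 << T, N)
    proof = [y]
    for _ in range(E):
        T //= 2
        m = pow(x, 1 << T, N)        # midpoint of the current segment
        h = challenge(m)
        x = pow(x, h, N) * m % N     # fold the left half
        y = pow(m, h, N) * y % N     # fold the right half
        proof.append(m)
    return proof

def verify(x, T, N, proof):
    """Replay the folds, then do only T/2^E squarings."""
    y, mids = proof[0], proof[1:]
    for m in mids:
        T //= 2
        h = challenge(m)
        x = pow(x, h, N) * m % N
        y = pow(m, h, N) * y % N
    return y == pow(x, 1 << T, N)

N = (1 << 127) - 1                   # toy modulus
proof = prove(3, 1 << 10, N, 6)      # 7 residues cover 1024 squarings
assert len(proof) == 7 and verify(3, 1 << 10, N, proof)
```

The invariant y = x^(2^T) survives each fold (new y' = m^h*y = x^(h*2^(T/2))*x^(2^T) = (x^h*m)^(2^(T/2)) = x'^(2^(T/2))), so the verifier finishes with only T/2^E squarings.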

2020-06-17, 20:19   #127
R. Gerbicz

"Robert Gerbicz"
Oct 2005
Hungary

7·199 Posts

Quote:
 Originally Posted by patnashev That's the power of Pietrzak VDF. It allows to securely compute partial products on the prover side, effectively "compressing" upload size to log of number of intermediate points. Here's a simple infographic I made to illustrate this. Attachment 22604
Nice colorful picture, though there are very few formulas (zero) in that pic. And I've already shown that it does not prove that all residues are correct.

Last fiddled with by R. Gerbicz on 2020-06-17 at 20:20 Reason: typo

2020-06-17, 20:36   #128
patnashev

"Pavel Atnashev"
Mar 2020

101011_2 Posts

Quote:
 Originally Posted by R. Gerbicz It doesn't imply for any h value: to pass the test (for E=1) you need (A^h*M)^(2^(top/2)) == M^h*B mod N but if you fix M then you can choose B in a unique way, so it has N solutions, but that is still nice for us, that means you can fake it with O(1/N) probability, since you have N^2 pairs for (M,B).
Even simpler, you can set M=1 and then B = A^(h*2^(top/2)). Or you can pick a random M and set B = (A^h*M)^(2^(top/2)) / M^h.
All of this is trivial to compute unless h = hash(B), which is the cornerstone of Pietrzak VDF security.
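The M=1 forgery is easy to demonstrate numerically. A sketch with toy values (N, top, A, h are all illustrative); the point is that when the attacker knows h before choosing B, the check passes even though B was not produced by 2^top honest squarings:

```python
# Forging the E=1 check when h does not depend on the residues.
N = (1 << 89) - 1
top = 1 << 8
half = 1 << (top // 2)
A = 3
h = 5                          # fixed challenge, known in advance
M = 1
B = pow(A, h * half, N)        # forged endpoint A^(h*2^(top/2)) mod N

lhs = pow(pow(A, h, N) * M % N, half, N)   # (A^h * M)^(2^(top/2))
rhs = pow(M, h, N) * B % N                 # M^h * B
assert lhs == rhs              # forgery passes, yet in general
                               # B != A^(2^top) mod N

# The random-M variant: B = (A^h*M)^(2^(top/2)) / M^h also passes.
M2 = 1234567
B2 = lhs2 = pow(pow(A, h, N) * M2 % N, half, N)
B2 = B2 * pow(M2, -h, N) % N   # divide out M2^h (Python 3.8+ inverse)
assert lhs2 == pow(M2, h, N) * B2 % N
```

With h = hash(B) the attacker can no longer choose B after seeing h, which is exactly the binding being described.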

2020-06-17, 23:23   #129
preda

"Mihai Preda"
Apr 2015

29×41 Posts

Quote:
 Originally Posted by patnashev He proposes to calculate Pietrzak VDF u_i values using stored residues until reaching a certain depth. This is fast. But then he proposes to go deeper by calculating the next u value (the "middle") by directly performing half of iterations. This step decreases computation of the (future) verifier by exactly the same half of iterations. Zero sum. But that u value needs to be uploaded, and that's an additional 10MB of network traffic.
Yes, your observation is correct: all the "trick" does is move some effort from the verifier to the producer (while increasing the proof size a bit in the process). But please consider these two elements, which may make the transfer worthwhile:

1. The proof is produced once, but is verified multiple times, at least 2 times or maybe more.
2. There might be some asymmetry between the capacity of the producer and that of the verifier. You may have seen in the discussion that the push for high-power proofs (up to power==12) is driven by the desire to have a centralized verifier ("the server"), which removes the need to download the proofs for verification by discrete participants. In this setup it is worthwhile to push some work to the producer just to make life easier for the server.

2020-06-18, 03:50 #130 patnashev   "Pavel Atnashev" Mar 2020 43 Posts Okay, this really comes down to the architectural choices you make. I just want to discuss alternatives, so you can make the choice knowing the options.

You definitely can calculate the proof to a great depth and store it for all eternity, to show to anyone willing to verify the test. Although I see no practical reason to do so if there is a central authority ("the server") which coordinates the search for primes. Storing hundreds of MB per test looks like a burden.

Also, having a centralized verifier requires significant computational power and may not scale well. Did you run tests to see how much time it takes to compute (x_12, y_12) from the u_i and verify x_12 -> y_12 ? One option to decrease the load on the server is to offload the verification of (x_t, y_t) to a user, who really needs to download only x_t and perform the necessary number of iterations.

Personally, I see no reason to go deeper than 8. The amount of data involved becomes huge without really speeding up the search for primes.

It's really an optimization problem. I see bandwidth, storage and server load as parameters important enough to optimize. And the optimal strategy for numbers with 30M digits can definitely be different from that for numbers with 1M digits.
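The bandwidth side of this trade-off is easy to put numbers on. Per the earlier posts, a proof holds power+1 residues of roughly p bits each; the exponent below and the resulting sizes are order-of-magnitude illustrations only, not measured figures:

```python
# Back-of-envelope proof upload sizes: (power + 1) residues of ~p bits.
def proof_size_mb(p_bits, power):
    return (power + 1) * p_bits / 8 / 1e6

p = 100_000_000          # a ~30M-digit exponent, order of magnitude only
sizes = {power: proof_size_mb(p, power) for power in (8, 10, 12)}
# power 8 -> 112.5 MB, power 10 -> 137.5 MB, power 12 -> 162.5 MB
```

Each step of power adds one more ~12.5 MB residue to the upload, while halving the squarings left to the verifier.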
2020-06-18, 04:59   #131
axn

Jun 2003

13×19^2 Posts

Quote:
 Originally Posted by patnashev Also, having a centralized verifier requires significant computation power and may not scale well.
Which is why we want to put more of the burden on the author, to reduce the burden on the single central verifier.
Quote:
 Originally Posted by patnashev One option to decrease the load on server is to offload the verification of (x_t, y_t) to a user, who really needs to download only x_t and perform the necessary amount of iterations.
Then we're back to square one. With the central verifier approach, we are able to avoid "trust" as a component of the verification. If you can trust the verifier, you might as well trust the author and only use GEC for error checking.

The only downside with higher values is the amount of bandwidth consumed while uploading the files. EDIT:- For regular users, it might be ok, especially if you spread out the uploads during the course of the test. But for institutional users with 100s of clients, this might be a major system issue.

Last fiddled with by axn on 2020-06-18 at 05:01

2020-06-18, 05:11   #132
patnashev

"Pavel Atnashev"
Mar 2020

43 Posts

Quote:
 Originally Posted by axn Then we're back to square one. With the central verifier approach, we are able to avoid "trust" as a component of the verification. If you can trust the verifier, you might as well trust the author and only use GEC for error checking.
You do not need to trust the verifier; he makes no decisions. He only computes x_t^(2^distance). It's the server who compares that value with the stored y_t.

A malicious verifier can only produce a false negative. But the server just sends x_t to the next verifier until consensus is reached. And if the result is still negative, which should be extremely rare, the server has to take some extra steps to verify it.

A malicious verifier can't produce a false positive. There are additional steps to ensure that.
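The division of labor described here can be sketched in a few lines: the untrusted user only exponentiates, and the server, which holds y_t, makes the comparison. Function names and all values are illustrative:

```python
# Offloaded final check: user squares, server decides.
def user_side(x_t, distance, N):
    """Perform the remaining `distance` squarings starting from x_t."""
    r = x_t
    for _ in range(distance):
        r = r * r % N
    return r

def server_side(claimed, y_t):
    """Compare the user's result with the stored endpoint y_t; on a
    mismatch the server would re-dispatch x_t to another verifier."""
    return claimed == y_t

N = (1 << 61) - 1                     # toy modulus
y_t = pow(3, 1 << 1000, N)            # stored endpoint x_t^(2^distance)
assert server_side(user_side(3, 1000, N), y_t)
```

A dishonest user can return a wrong value (a false negative), but since the server owns y_t and the comparison, the user cannot make a wrong result look correct without actually hitting y_t.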

Quote:
 Originally Posted by axn especially if you spread out the uploads during the course of the test.
You can't do that. All the proof data can be computed only at the end of the test. It's critically important.

