mersenneforum.org > Math VDF (Verifiable Delay Function) and PRP

 2022-10-03, 20:46 #254 kruoli     "Oliver" Sep 2017 Porta Westfalica, DE 3033₈ Posts Thank you, this is basically what I was trying to propose; some kind of GEC for P-1 stage 1. Last fiddled with by kruoli on 2022-10-03 at 20:47 Reason: Clarification.
2022-10-04, 06:55   #255
preda

"Mihai Preda"
Apr 2015

5AB₁₆ Posts

Quote:
 Originally Posted by paulunderwood https://arxiv.org/pdf/2209.15623.pdf has details of an "improved" GEC+VDF. Is it of any use to the Mersenne Prime/cofactors searches?
The paper is concerned with producing a "certificate" (or proof) of correctness of modular exponentiation to an arbitrary power. But implicitly it also shows the way to a generalization of the GEC to arbitrary powers. This generalization of GEC is almost as efficient as the GEC that we use for PRPs on mersenne numbers. The immediate application that I see is in adding GEC error checking to the P-1 first stage -- very strong error checking at a cost of under 1%.

The general GEC is obtained when the "random" weights w[i] in the paper are all equal to 1. (see the end of Section 2 in the paper)

The difference from the classic GEC is in the verification step, where an additional powering with the sum(block_bits[i]) is added.

As in GEC, we choose a "block size" B, let's say B=400. We split [the binary representation of] the general exponent N into blocks of B bits, starting at the MSB ("left"), let's call them block_bits[i].

We proceed with a normal left-to-right binary exponentiation, and we also keep an accumulator A, that we update every B iterations by multiplying the current result of the exponentiation into A.

Whenever we want to do a check (at any iteration multiple of B), we compare the "new A" (i.e. A updated in the usual way) with the alternate way of computing it, which is:

A' = (old_A ^ (2^B)) * 3^(sum(block_bits[i]))
(this is equivalent to the formula at the end of section 2 in the paper).

and check for equality.

That sum(block_bits[i]) has only a few more bits than the block size B, let's say B+20 bits. So practically the verification step takes 2*B modular squarings. The "classic" GEC only needed B modular squarings for the verification, as the sum(block_bits[i]) above was always equal to 1 in that case (i.e. mersenne exponentiation).

L-to-R exponentiation: https://en.wikipedia.org/wiki/Modula..._binary_method
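The scheme above can be sketched concretely. A minimal Python sketch, assuming a toy exponent N and modulus M (far smaller than anything GIMPS-scale); the names B, block_bits and A follow the post, and each loop iteration condenses B square-and-multiply steps into built-in `pow` calls:

```python
# Generalized GEC sketch: compute 3^N mod M by left-to-right blocks of
# B bits, keeping an accumulator A that is cross-checked the alternate
# way every block. N, M and B here are illustrative toy values.

def powmod_with_gec(N, M, B=8):
    bits = bin(N)[2:]
    # pad on the left so the bit string splits into whole B-bit blocks
    bits = bits.zfill(-(-len(bits) // B) * B)
    block_bits = [int(bits[i:i+B], 2) for i in range(0, len(bits), B)]

    x = 1          # running result of the exponentiation
    A = 1          # GEC accumulator: product of the per-block results
    bit_sum = 0    # sum(block_bits[i]) over the blocks folded into A
    for blk in block_bits:
        old_A = A
        # B squarings plus the multiplies for this block's bits,
        # condensed into two pow() calls for brevity
        x = pow(x, 1 << B, M) * pow(3, blk, M) % M
        A = A * x % M                    # usual accumulator update
        bit_sum += blk
        # verification: A' = old_A^(2^B) * 3^(sum(block_bits[i]))
        A_check = pow(old_A, 1 << B, M) * pow(3, bit_sum, M) % M
        assert A == A_check, "GEC mismatch: a computation error occurred"
    return x
```

Each check costs roughly 2*B squarings (B for old_A^(2^B), and about B more for 3^(bit_sum) since bit_sum has only a few more bits than B), matching the cost estimate in the post.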

Last fiddled with by preda on 2022-10-04 at 07:52

 2023-02-23, 07:49 #256 Graeme Willough   Feb 2023 110 Posts

A simpler VDF check variant

Following on from preda's VDF work, I have found a variant that allows a simplified check and possibly much greater proof powers to be used. The main difference is that where preda powers-then-multiplies, I multiply-then-power. I have not seen anything similar in this thread, but I cannot imagine that I am the first to go down this route; perhaps it was tried and dismissed. I present two checks based on this: one insecure, suitable only for self-checks, good against random errors but easily subverted; the other is (I think) equivalent to preda's check and uses raising the intermediate values to random powers. These checks are both clearly based on preda's work; all mistakes of understanding and presentation are mine.

Let:
- p be the power of the proof,
- n be the highest multiple of 2^p below our number to be checked (preda's TopK),
- m(i) be the value of the iteration at the i-th checkpoint,
- m(1) = A, the starting value, and m(n) = Y, the final value.

As before, we iterate the function n/2^p times; I call this a transformation. So A (=m(1)) transforms to m(2), m(2) transforms to m(3), ..., m(n-1) transforms to m(n) = Y. From now on I'll use an arrow --> to mean "transforms to", e.g. A --> m(2).

There is a simple check, equivalent to an insecure version of preda's. We can multiply the m(i)'s together and then transform the product as defined above. Each m transforms to its successor, so the product of the first n-1 m's should transform to the product of the 2nd to nth m's:

A * m(2) * m(3) * ... * m(n-1) --> m(2) * m(3) * m(4) * ... * Y

We can set the product m(2)*m(3)*...*m(n-1) = M and rewrite the above transformation as

A * M --> M * Y

This is the simple check: using M and Y as defined above, check that the above assertion holds. The assertion depends on each and every transform being correct.
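The simple check A * M --> M * Y is easy to demonstrate on a toy transform. A sketch, assuming T is k repeated squarings mod N (so T is multiplicative: T(a*b) = T(a)*T(b)); the modulus, k and the number of checkpoints are illustrative values, not GIMPS-sized:

```python
# Toy demonstration of the simple check A * M --> M * Y.

N = (1 << 89) - 1        # small Mersenne modulus, for illustration
k = 5                    # squarings per transform ("n / 2^p" in the post)
steps = 16               # number of checkpoints m(1) .. m(steps)

def T(z):                # one transform: z -> z^(2^k) mod N
    for _ in range(k):
        z = z * z % N
    return z

A = 3
m = [A]
for _ in range(steps - 1):
    m.append(T(m[-1]))   # m(i+1) = T(m(i))
Y = m[-1]

M = 1
for v in m[1:-1]:        # product of the interior checkpoints m(2)..m(n-1)
    M = M * v % N

# since T is multiplicative and each m transforms to its successor,
# T(A * M) must equal M * Y
assert T(A * M % N) == M * Y % N
```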
It is a good self-check against random errors, but it's trivial to bypass on purpose (e.g. pick a random M, transform A*M as normal, then divide by M to get a Y, and report that M and Y). So it's not suitable for reporting; we need a secure version.

Secure check

Multiply all the odd m's except A (=m(1)) together to get m(3)*m(5)*...*m(n-1) = M1.
Multiply all the even m's except Y together to get m(2)*m(4)*...*m(n-2) = M2.

M1 and M2 are related like this:

A*M1*M2 --> M1*M2*Y (we are just factoring M into M1*M2), and
A*M1 --> M2*Y (all the odd m's transform to the next m, which are all the even m's).

What happens when we raise M2 to a power r and then transform it?

M2^r = m(2)^r * m(4)^r * ... * m(n-2)^r --> m(3)^r * m(5)^r * ... * m(n-1)^r = M1^r

i.e. M2^r --> M1^r.

Combining all of these gives our new check. Given M1, M2 and Y defined as above, pick a random r and check that the following transformation holds:

A * M1 * M2^r --> M1^r * M2 * Y (1)

A is a known constant; generate Y as normal, and derive M1 and M2 from the odd and even m's respectively. We send M1, M2 and Y to the proof checker, who randomly generates their own value of r and checks the assertion. Again, the assertion depends on all the intermediate transforms being correct. The random (unpredictable) value of r plays the same role as in preda's work.

We always and only ever send those three values (plus the tail of the original number) to the checker, whatever the power of the proof. Further, we can do a rolling update of M1 and M2 and don't need to keep the intermediate m's, assuming we don't want them for other reasons of course (rollback, for example). This should allow larger proof powers to be used.
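The secure check can be sketched under the same toy assumptions: T is k squarings mod N, so it commutes with products and powers (T(z^r) = T(z)^r). All sizes are illustrative, and the number of checkpoints n is even so that m(1) = A is odd-indexed and m(n) = Y is even-indexed:

```python
# Toy demonstration of the secure check A * M1 * M2^r --> M1^r * M2 * Y.

import random

N = (1 << 107) - 1
k = 4                                # squarings per transform
n = 16                               # even number of checkpoints

def T(z):                            # one transform: z -> z^(2^k) mod N
    for _ in range(k):
        z = z * z % N
    return z

A = 3
m = [A]
for _ in range(n - 1):
    m.append(T(m[-1]))               # m(i+1) = T(m(i))
Y = m[-1]

M1 = 1
for v in m[2:-1:2]:                  # odd m's except A: m(3), m(5), ..., m(n-1)
    M1 = M1 * v % N
M2 = 1
for v in m[1:-2:2]:                  # even m's except Y: m(2), m(4), ..., m(n-2)
    M2 = M2 * v % N

r = random.randrange(1, 1 << 64)     # the checker's random exponent
lhs = T(A * M1 % N * pow(M2, r, N) % N)
rhs = pow(M1, r, N) * M2 % N * Y % N
assert lhs == rhs                    # check (1) holds for an honest run
```

The identity follows because T(A) = m(2), T(M1) = (M2/m(2)) * Y, and T(M2^r) = M1^r, so the left side collapses to M1^r * M2 * Y.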
2023-03-15, 11:42   #257
preda

"Mihai Preda"
Apr 2015

1,451 Posts

Quote:
 Originally Posted by Graeme Willough Secure check: multiply all the odd m's except A (=m(1)) together to get m(3)*m(5)*...*m(n-1) = M1; multiply all the even m's except Y together to get m(2)*m(4)*...*m(n-2) = M2. [...] Combining all of these gives our new check. Given M1, M2 and Y defined as above, pick a random r and check that the following transformation holds: A * M1 * M2^r --> M1^r * M2 * Y (1). [...] We send M1, M2 and Y to the proof checker, who randomly generates their own value of r and checks the assertion. [...] Further, we can do a rolling update of M1 and M2 and don't need to keep the intermediate m's. [...] This should allow larger proof powers to be used.
I agree that if M1, M2, Y are "true" (i.e. computed correctly), then the check of the form A * M1 * M2^r --> M1^r * M2 * Y holds.

But is it the case that if the check A * M1 * M2^r --> M1^r * M2 * Y holds, then Y must be "true"?

Let's see: is it possible to produce "fake" M1, M2, Y such that the check holds?

What if we did the computation as usual (A iterating towards Y, computing M1, M2 along the way as you describe), but instead of repeating for "n" iterations, we stopped at some earlier point? (Or alternatively, we went past "n".) Then the check would still hold (right?), but the Y is "fake".

Thus, the check holding does not imply that Y is correct. (Unfortunately.)
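This stop-early scenario can be illustrated on a toy transform (T taken as k squarings mod N, all sizes illustrative): an honest run truncated to fewer checkpoints still satisfies the check A * M1 * M2^r --> M1^r * M2 * Y, yet reports the wrong Y, because the identity only uses the relation m(i+1) = T(m(i)), never the total count n.

```python
# Stop-early attack: run 14 checkpoints instead of 16. The check still
# passes, but the reported Y is not the true final value.

import random

N = (1 << 107) - 1
k = 4

def T(z):                            # one transform: z -> z^(2^k) mod N
    for _ in range(k):
        z = z * z % N
    return z

def run(n):                          # honest computation with n checkpoints
    m = [3]
    for _ in range(n - 1):
        m.append(T(m[-1]))
    M1 = M2 = 1
    for v in m[2:-1:2]:              # odd m's except A
        M1 = M1 * v % N
    for v in m[1:-2:2]:              # even m's except Y
        M2 = M2 * v % N
    return M1, M2, m[-1]

def check(M1, M2, Y, r):             # the verifier's test of check (1)
    lhs = T(3 * M1 % N * pow(M2, r, N) % N)
    return lhs == pow(M1, r, N) * M2 % N * Y % N

r = random.randrange(1, 1 << 64)
true_Y = run(16)[2]
M1, M2, fake_Y = run(14)             # stopped two checkpoints early
assert check(M1, M2, fake_Y, r)      # the check passes...
assert fake_Y != true_Y              # ...but Y is wrong
```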

2023-03-15, 17:31   #258
R.D. Silverman

"Bob Silverman"
Nov 2003
North of Boston

2×19×199 Posts

Quote:
 Originally Posted by preda I agree that if M1, M2, Y are "true" (i.e. computed correctly), then the check of the form A * M1 * M2^r --> M1^r * M2 * Y holds. But is it the case that if the check A * M1 * M2^r --> M1^r * M2 * Y holds, then Y must be "true"? Let's see: is it possible to produce "fake" M1, M2, Y such that the check holds? What if we did the computation as usual (A iterating towards Y, computing M1, M2 along the way as you describe), but instead of repeating for "n" iterations, we stopped at some earlier point? (Or alternatively, we went past "n".) Then the check would still hold (right?), but the Y is "fake". Thus, the check holding does not imply that Y is correct. (Unfortunately.)
Yes. This is related to homomorphic attacks on RSA.

