20210623, 15:40  #1 
Jun 2021
3×17 Posts 
Why does this code converge?
p is any composite number.
(For small p, the inner loop bound may be less than 250; it's a heuristic.) Code:
{p=1237*1234577; c=ceil(sqrt(p));
for(n=c, c+10000,
  b=lift(Mod(n^2,p)); a=lift(Mod(b^2,p));
  for(y=1,250,
    t=ceil(sqrt(b^2-a));
    b=lift(Mod(t^2,p)); a=lift(Mod(b^2,p));
    if(b<c, break()););
  localprec(7); b=b/c/1.;
  if(b<.1, print(b)););} Roman
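For readers without PARI/GP, here is a rough Python translation of the loop above (my own sketch; the function name `saw_search` is mine, and it assumes the subtraction in `b^2-a` that the forum rendering dropped):

```python
from math import isqrt

def ceil_sqrt(x):
    """Ceiling of the square root of a nonnegative integer."""
    r = isqrt(x)
    return r if r * r == x else r + 1

def saw_search(p, span=10000, steps=250):
    """For each start n in [c, c+span], iterate t = ceil(sqrt(b^2 - a))
    and record (t, b) when the residue b ends up below 0.1*sqrt(p)."""
    c = ceil_sqrt(p)
    hits = []
    for n in range(c, c + span + 1):
        b = n * n % p
        a = b * b % p
        for _ in range(steps):
            t = ceil_sqrt(b * b - a)   # b^2 - a is a multiple of p
            b = t * t % p
            a = b * b % p
            if b < c:                  # sub-sqrt residue reached
                break
        if b < 0.1 * c:
            hits.append((t, b))
    return hits

p = 1237 * 1234577
hits = saw_search(p)
```

Every recorded pair satisfies t^2 mod p = b < 0.1*sqrt(p) by construction, matching what the GP code prints.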
20210623, 18:35  #2 
Jun 2021
3·17 Posts 
It is easy to see that t in this code is the x-coordinate of the bottom point of one of the "teeth of the saw" in the graph of x^2 mod p (t-1 gives the upper point).
In the second cycle we just jump to the next such "tooth" and go on. Curiously, this process has two outcomes: either some closed loop (a ring) and zero, or a sub-sqrt value of t^2 mod p one "jump" before. I don't know whether this heuristic is new or old. But it looks like a sieve for n values, and hypothetically it can point to some dependency in x^2 mod p
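The "ring or sub-sqrt" dichotomy can be made concrete with a small Python sketch (the function name `classify` and the step cap are my own choices, not from the thread): track visited residues and report which outcome a given start falls into.

```python
from math import isqrt

def ceil_sqrt(x):
    """Ceiling of the square root of a nonnegative integer."""
    r = isqrt(x)
    return r if r * r == x else r + 1

def classify(p, n, max_steps=100000):
    """Follow the saw jumps from start n. Returns ('subsqrt', b) once
    b < sqrt(p), or ('ring', b) when a residue repeats (a closed loop).
    The map is deterministic in b alone, since a = b^2 mod p."""
    c = ceil_sqrt(p)
    b = n * n % p
    seen = set()
    while len(seen) < max_steps:
        if b < c:
            return ('subsqrt', b)
        if b in seen:
            return ('ring', b)
        seen.add(b)
        a = b * b % p
        t = ceil_sqrt(b * b - a)   # jump to the next "tooth"
        b = t * t % p
    return ('undecided', b)

p = 1237 * 1234577
outcome, b = classify(p, ceil_sqrt(p) + 1)
```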
20210625, 13:06  #3 
Jun 2021
3·17 Posts 
Most interesting: we can try the same method (saw jumps) for x^a mod p with a>2, and (with hope) find such x that x^a mod p < p^(a).
Dreams, only dreams of course)) The curse of sqrt is too strong. Last fiddled with by RomanM on 20210625 at 13:24 Reason: dreams
20210722, 14:05  #4 
Jun 2021
3×17 Posts 
Once again: why does it work? (Personally, I don't know yet))
Code:
\p300
{p=233108530344407544527637656910680524145619812480305449042948611968495918245135782867888369318577116418213919268572658314913060672626911354027609793166341626693946596196427744273886601876896313468704059066746903123910748277606548649151920812699309766587514735456594993207;
c=ceil(sqrt(p));
for(n=1,p,
  u=c+n; \\ initial values
  b=lift(Mod(u^2,p)); a=lift(Mod(b^2,p));
  for(y=1,250, \\ The riddle is here, in this cycle.
    t=ceil((b^2-a)^(1/2));
    b=lift(Mod(t^2,p)); a=lift(Mod(b^2,p));
    if(b<c, break());); \\ break at sub-sqrt residual
  localprec(7); z=(b/c/1.);
  if(z<1, print(z," ",t)););}
20210727, 19:06  #5 
Jun 2021
33_{16} Posts 
No one here? I'm talking to myself)) Let p=p+1260; then for the same t from the code above the residuals will be *slightly* different, and this p is prime.
How small can the residuals be in this case? In other words: why can't we find small residuals for a prime p, while they stay small for our composite p?
20210727, 19:34  #6 
"Viliam Furík"
Jul 2018
Martin, Slovakia
19·41 Posts 
Okay, I'm here. I've been watching occasionally since the first post.
Could you explain what it is supposed to do, what it does, and what the question is?
20210727, 19:41  #7 
Jun 2021
3×17 Posts 
Factorization of numbers.

20210727, 19:58  #8 
"Viliam Furík"
Jul 2018
Martin, Slovakia
779_{10} Posts 
That's too vague.
Please explain the code by answering the three questions asked:
1. What is it supposed to do? -> Describe the code and the algorithm step by step.
2. What does it do? -> You may skip this part if the observed behaviour is the same as the expected behaviour.
3. What is the question? -> Explain the question "Why does this code converge?", i.e. what you mean by it, and what an answer should look like.
20210728, 20:21  #9 
Jun 2021
3×17 Posts 
Ok!
1. The code finds values of t > sqrt(p) (p is any number, prime or composite) for which mod(t^2,p) < sqrt(p), and it does this in a very unusual way, far from the common approach. The algorithm is quite simple. Take some integer u > sqrt(p), and set b = mod(u^2,p), a = mod(b^2,p) = mod(u^4,p). [From (b-y)^2 == 0 mod p we get b^2 - 2*b*y + y^2 == 0 mod p, i.e. a - 2*b*y + y^2 == 0. The solutions are y = b - sqrt(b^2-a) and y = b + sqrt(b^2-a); using the first, make y an integer and compute t = b - y = ceil(sqrt(b^2-a)).] So t = ceil(sqrt(b^2-a)), and t is an integer. Let u = t, and go through all this again, in a cycle. After a few steps the value of b becomes less than sqrt(p) (or the cycle falls into some ring).
2. See 3.
3. Why the hell does this even work???
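The steps above can be checked numerically; a minimal Python sketch of one iteration (u and p are just example values, u = c+5 is my arbitrary pick):

```python
from math import isqrt

p = 1237 * 1234577          # the composite from post #1
c = isqrt(p) + 1            # ceil(sqrt(p)); p is not a perfect square here
u = c + 5                   # some integer u > sqrt(p)

b = u * u % p               # b = mod(u^2, p)
a = b * b % p               # a = mod(b^2, p) = mod(u^4, p)

# Key identity: b^2 - a is a nonnegative multiple of p, so
# y = b - sqrt(b^2 - a) nearly solves (b - y)^2 == 0 (mod p).
assert (b * b - a) % p == 0

s = isqrt(b * b - a)
t = s if s * s == b * b - a else s + 1   # t = b - y = ceil(sqrt(b^2 - a))

new_b = t * t % p           # the next residue in the cycle
```

Since t^2 exceeds the multiple b^2 - a by at most 2*sqrt(b^2 - a) + 1, the next residue is exactly t^2 - (b^2 - a) here, which is why it can drop below sqrt(p).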
20210729, 01:22  #10 
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
9,929 Posts 
Well, how do you know that it actually works? Did you try it on all input values up to some limit, so that you could declare "I tested it for all values up to 10^9 and it converged every time"?
Maybe if you had done that, then there could have been some discussion. Before you do that, how would you convince people that there is anything worth spending their time on, or even worth considering reading past the first post? All you are showing is "I tested this on one number! Look!" Everyone says "meh" and moves on to reading something else.
20210729, 02:22  #11 
Apr 2020
1101010111_{2} Posts 
Write \(\left\lceil \sqrt{b^2-a} \right\rceil\) as \(\sqrt{b^2-a}+\epsilon\), where \(\epsilon\) is between 0 and 1. Let's see what happens when we square this. We get \(b^2-a+2\epsilon\sqrt{b^2-a}+\epsilon^2\).
We know that \(b^2-a\) is a multiple of p, so the value of b on the next iteration is at most (and turns out to be equal to) \(2\epsilon\sqrt{b^2-a}+\epsilon^2\), which is around \(2\epsilon b\) (until b gets close to sqrt(p), when it will tend to be smaller; for b < sqrt(p) it will be 0). In other words, we multiply b by \(2\epsilon\) to get the new value of b.

If \(\epsilon\) behaved like a uniformly random number between 0 and 1, then we would expect values of b to decrease in the long term (exercise: what is the expected rate of decrease?). From the Taylor expansion we see that \(\epsilon\) is roughly equal to the fractional part of \(\frac{a}{2b}\). For small b this does behave essentially like a uniformly random number between 0 and 1; for some larger b it is skewed towards the smaller end, but this still means we expect b to fall.

There may be a neater explanation; this is just the first thing I came up with. Last fiddled with by charybdis on 20210729 at 02:35
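This analysis can be sanity-checked numerically. A Python sketch (the helper name `step` and the sampling range are mine): for b well above sqrt(p), the next residue equals \(t^2-(b^2-a)\) modulo p, and \(\epsilon\) tracks the fractional part of \(\frac{a}{2b}\).

```python
import random
from math import isqrt, sqrt

def step(b, p):
    """One 'saw jump': return (t, next_b, eps) where t = ceil(sqrt(b^2-a)),
    next_b = t^2 mod p, and eps = t - sqrt(b^2 - a) is in [0, 1)."""
    a = b * b % p
    m = b * b - a                    # an exact multiple of p
    s = isqrt(m)
    t = s if s * s == m else s + 1
    return t, t * t % p, t - sqrt(m)

p = 1237 * 1234577
random.seed(42)
# sample b in [p/4, p/2]: large enough for the frac(a/2b) approximation
samples = [random.randrange(p // 4, p // 2) for _ in range(50)]
```

The comparison with frac(a/2b) must be done modulo 1, since values near 0 and near 1 are the same tooth offset.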