Python script to search for factors of M1277 using random k-intervals
[CODE]import numpy as np
from random import randint

f = open("m1277.txt", "a")


def tf1277(k, r):
    # Candidate factors have the form q = (k + a) * 10216 + e, where
    # 10216 = 8 * 1277 and e runs over the four residues described
    # in the post below the code.
    p = 1277
    residues = [2555, 3832, 7663, 8940]
    expos = np.zeros((4, r), int)
    y = 0
    z = 0

    # Sieve: mark every offset a whose candidate is divisible by a small prime.
    for x, e in enumerate(residues):
        for prime in [3, 5, 7, 11, 13, 17, 19, 23, 29]:
            for a in range(prime + 5):
                if ((a + k) * 10216 + e) % prime == 0:
                    expos[x][a::prime] = 1
                    break

    # Test the survivors: q divides M1277 iff 2^1277 == 1 (mod q).
    for x, e in enumerate(residues):
        for a in range(r):
            if expos[x][a] == 0:
                kp = (k + a) * 10216 + e
                if pow(2, p, kp) == 1:
                    print("")
                    print("factor = ", kp)
                    print("k = ", (kp - 1) // 2 // 1277)
                    f.write("\nfactor = " + str(kp) + "\n")
                    f.write("k = " + str((kp - 1) // 2 // 1277) + "\n")
                    quit()
                if y == 1000:
                    y = 0
                    z += 1
                    print(z * 1000)
                y += 1


while True:
    tf1277(randint(58000000000000000, 2 ** (1277 // 2) // 1277 // 2), 1000000)
[/CODE] Hello, I have made the above Python script, which is called with a random starting k value and then tests about 1263000 sieved-out k's above that value. It uses a grid of zeroes to sieve the k's, and it combines the facts that factors must be of the form 2*k*p + 1 and also 1 or 7 (mod 8), so it only tests candidate factors that are 2555, 3832, 7663, or 8940 (mod 10216). If there is a mistake (either logical or mathematical) in my script, I'd be grateful if you pointed it out. I would also be grateful if somebody could help me with making the code runnable on an Nvidia GPU with CUDA (or in OpenCL, if that's easier), without necessarily using FFT or Barrett multiplication. (But if somebody is willing to explain, I'll be happy to listen, or read...) And lastly, if you want, you can try to find the factor yourself with this code. It takes my CPU about 12 seconds on average to go through a random range of size 10216000000 (that is the difference between the largest and smallest k tested). So if I were to randomly check the equivalent number of ranges (the untested range from 2^67 to 2^639 divided by the range size 10216000000), it would take me... about 2*10^172 years :down: . But if more people engage in the search, the probability increases (unfortunately it's only a slight chance). So if you want to help, you can run the script any time you like. If you want to help more, you can help with improving the speed of the script.
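For anyone sanity-checking the approach: the divisibility test the script relies on can be verified on a small known case (my own illustration, not part of the original script), M11 = 2047 = 23 * 89:

```python
# Sanity check (my illustration, not part of the script above): the
# conditions the script relies on, verified on the known case
# M11 = 2047 = 23 * 89.
p = 11
for q, k in [(23, 1), (89, 4)]:
    assert q == 2 * k * p + 1    # factors of M_p have the form 2*k*p + 1
    assert q % 8 in (1, 7)       # and are congruent to 1 or 7 (mod 8)
    assert pow(2, p, q) == 1     # q divides 2^p - 1 = 2047
print("both known factors of M11 pass the tests")
```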
Running this code is a fantastic way to waste power/time/money/energy. :tu:
Random TF for this number is pointless. And trying to "optimise" a scripting language is equally pointless. You need another method (not TF), and you need another language (not a scripting language). 
[QUOTE=Viliam Furik;556632]if more people engage in the search, the probability increases (unfortunately it's only a slight chance).
So if you want to help, you can run the script any time you like. If you want to help more, you can help with improving the speed of the script.[/QUOTE] You'd probably have a better chance of being struck by lightning 10 times and winning the lottery all on the same day. The only hope for M1277, with presently known factoring techniques, is a mountain of ECM or (eventually) SNFS. 
Fun! I have a machine that is RAM-bottlenecked for Prime95. So far, running this too doesn't seem to affect the iterations per second. I'll leave it on for a week or so and perhaps lightning will strike! (Or strike 10 times, per the comment above. But I will probably not buy the requisite lotto ticket.)

How long will it take this program to search for all 70-digit factors? Or most of them? How about 1% of the 70-digit range?
ECM on this number has ruled out factors smaller than about 67 digits, and a 70-digit or smaller factor is unlikely. So I'm not even sure you should set it to look at 70 digits; maybe 73 is smarter. How many k's are there that make 73-digit candidate factors?
We are looking at a 245-bit factor? Hmmm.
TF from 66 to 67 bits took 11,704 GHz-days, and the cost doubles with each bit, so 245 - 67 = 178. Sooo 2^178 * 11,704 = 4.484e57 GHz-days for the last bit alone. Not counting all that comes before it, which doubles it. Let's handwave this a bit and it comes to 9e57 GHz-days. Problem: the fastest current TF card will give you 6e3 GHz-days per day. So, AFAIK, it would take 1.5e54 days, about 3e41 times the duration since the beginning of the universe. That's by TF only. P-1 is almost excluded because of its requirements.
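The back-of-the-envelope numbers above can be reproduced in a few lines (my arithmetic, using the figures quoted in the post):

```python
# Reproducing the back-of-envelope TF estimate above (my arithmetic,
# using the figures quoted in the post).
ghzdays_66_to_67 = 11704                          # reported cost of the 66 -> 67 bit level
last_bit = 2 ** (245 - 67) * ghzdays_66_to_67     # cost of the 244 -> 245 bit level alone
total = 2 * last_bit                              # all lower bit levels roughly double it
days = total / 6e3                                # at 6e3 GHz-days/day on a fast TF card
print(f"last bit: {last_bit:.3e} GHz-days, total time: {days:.1e} days")
```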
So, one fast GPU can spend 1e54 days to have roughly a 1 in 250 chance to find a factor.
Or, you could run regular ECM with large bounds (B1 = 3e9 or bigger) and actually contribute non-useless work.
Is there enough data yet to suggest that Mersenne non-primes have a higher likelihood of having exactly 3 factors vs. exactly 2?

[QUOTE=Uncwilly;556682]Is there enough data yet to suggest that Mersenne non-primes have a higher likelihood of having exactly 3 factors vs. exactly 2?[/QUOTE]
Many more Mersenne non-primes have a thousand factors. And even more have a million factors! :rolleyes: [SPOILER]Infinity is a rather large thing.[/SPOILER]
[QUOTE=firejuggler;556679]Problem: The fastest current TF card will give you [COLOR=Red][B]1.2e4[/B][/COLOR] GHz-days per day.[/QUOTE]
Fixed it for you. Not that it would change much, haha. A one-line pari "search by q" (as posted many times in the forum) with a parfor and a random start would be faster than the posted script.
[QUOTE=Uncwilly;556682]Is there enough data yet to suggest that Mersenne non-primes have a higher likelihood of having exactly 3 factors vs. exactly 2?[/QUOTE]
[i]Exactly[/i] three versus [i]exactly[/i] two prime factors, not sure. Exactly two versus more than two, long-known data suggest "no contest." I'll assume, for the sake of discussion, that you mean Mersenne numbers M[sub]p[/sub] with [i]prime[/i] exponent p. For composite exponents n, the algebraic factorization alone is very likely to provide more than two proper factors, which, in turn, are very likely to provide more than two prime factors. From an old factorization table of 2[sup]n[/sup] - 1 for odd n < 1200 I have on file, a quick scan finds that M[sub]p[/sub] was known to be prime for 13 odd prime p, known to have exactly two prime factors for 38 prime p, and known to have 3 or more prime factors for 143 odd prime p. The table lists M[sub]1061[/sub] as a C320, so the number of factors was only known to be 2 or more at the time. It seems that 2[sup]1061[/sup] - 1 was subsequently found to be the product of two prime factors, so M[sub]p[/sub] is known to be prime for 13 odd prime p < 1200, known to have exactly two prime factors for 39 odd primes p < 1200, and known to have 3 or more prime factors for 143 odd prime p < 1200. That accounts for all 195 odd primes p < 1200. So I would say that the data suggest that more than two factors is much more likely than exactly two. There are only about 400 primes p for which the [i]exact[/i] number of factors of 2[sup]p[/sup] - 1 is known or "probably" known. I'm too lazy to check for how many p the number of factors is known to be [i]at least[/i] 3.
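For small prime exponents, tallies like the ones above are easy to reproduce, using the thread's own fact that every prime factor q of 2^p - 1 (p an odd prime) has the form 2*k*p + 1. This is a sketch of mine; trial division like this is fine for p up to about 60, and of course hopeless for anything like M1277:

```python
# Sketch (my own illustration): factor 2^p - 1 for small odd prime p by
# trial division over candidates q = 2*k*p + 1, the only possible form of
# a prime factor of M_p.
def mersenne_prime_factors(p):
    """Return the prime factors of 2^p - 1 in ascending order."""
    n = 2 ** p - 1
    factors = []
    q = 2 * p + 1
    while q * q <= n:
        while n % q == 0:
            factors.append(q)
            n //= q
        q += 2 * p
    if n > 1:
        factors.append(n)  # remaining cofactor is prime
    return factors

for p in [11, 23, 29, 37, 41, 43, 47, 53, 59]:
    fs = mersenne_prime_factors(p)
    print(f"M{p}: {len(fs)} prime factors: {fs}")
```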
From M1 to M500, 83 M(prime) are fully factored, not counting the primes themselves. 30 have 2 factors. 17 have 3 factors. Therefore, 36 have 4 or more factors.

Ok. Based upon the ECM data, we are looking at 5 or fewer factors for M[M]1277[/M].

[URL="https://www.mersenne.org/report_exponent/?exp_lo=1277&exp_hi=&full=1"]M1277[/URL] is 385 decimal digits long, if memory serves. If it were "smooth," as some here like to say, this might have been done and over with a long time ago. Four LL tests say it is composite. A P-1 test back in 2017 used a B1 bound of (5 trillion + 3); the B2 was (400 trillion + 241).
One individual I am aware of spent months running Stage 1 ECM with [I]Prime95[/I] and Stage 2 with [I]GMP-ECM[/I]. Unless somebody could coax Google into trying this on their quantum supercomputer, M1277 will likely remain an enigma for quite some time to come. It must have one, or more, really large factors which we do not have the tech to reach currently.
[QUOTE=storm5510;557611]It must have one, or more, really large factors which we do not have the tech to reach currently.[/QUOTE]
It has little to do with tech, lots to do with patience and willingness to pay the power bill. There is still plenty of ECM to do on this number before it's "ready" for SNFS. Anyone can fire up curves at, say, B1 = 6e9 or bigger and have a go and few of us would be surprised if someone found a factor in just that way. If another quarter million or so (I didn't actually calculate how many) such curves fail to find a factor, then we head off to SNFS when someone feels like starting it. We have the tech to do SNFS on it right now, but not the patience. It would take a cluster to solve the matrix, but those exist too. The sieving is a really long task, which is why nobody has bothered to try (and also why more ECM is worth the effort) but CADO can do it. So, no, it's not true that we don't have the tech. 
[QUOTE=VBCurtis;557614][COLOR=Gray]It has little to do with tech, lots to do with patience and willingness to pay the power bill.
There is still plenty of ECM to do on this number before it's "ready" for SNFS.[/COLOR] Anyone can fire up curves at, say, B1 = 6e9 or bigger and have a go, and few of us would be surprised if someone found a factor in just that way. [COLOR=Gray]If another quarter million or so (I didn't actually calculate how many) such curves fail to find a factor, then we head off to SNFS when someone feels like starting it. [/COLOR] [COLOR=gray]We have the tech to do SNFS on it right now, but not the patience. It would take a cluster to solve the matrix, but those exist too. The sieving is a really long task, which is why nobody has bothered to try (and also why more ECM is worth the effort) but CADO can do it. [/COLOR][COLOR=gray] So, no, it's not true that we don't have the tech.[/COLOR][/QUOTE] 6,000,000,000 for B1. I believe the rule of thumb is B2 = B1 * 100. Of course, that is not set in stone. A person could go higher if they choose. I have an older machine that sits in a corner I could do this with. It is not fast or elegant, but it gets the job done. It still has the 29.x version of [I]Prime95[/I] on it. I could let this run for months, even years. I will give this a go. The only requirement is to not let it run out of work. :smile:
Actually, I assumed GMP-ECM for the runs, as it is dramatically faster than P95 for this very small (by GIMPS standards) number. You'll find that B2 is much, much more than 100*B1 when using GMP-ECM. Memory use is also higher, though.
There's another M1277 thread where the procedure for using P95 for stage 1 and GMP-ECM for stage 2 is laid out; that's the fastest way for this number. If you're doing more than a handful of curves, I strongly suggest you use GMP-ECM (Windows or Linux) for stage 2. I think I ran 500 curves in just this way a couple years back; I left an old Core 2 Quad on it for a few months.
[QUOTE=VBCurtis;557648]Actually, I assumed GMP-ECM for the runs, as it is dramatically faster than P95 for this very small (by GIMPS standards) number. You'll find that B2 is much, much more than 100*B1 when using GMP-ECM. Memory use is also higher, though.
There's another M1277 thread where the procedure for using P95 for stage 1 and GMP-ECM for stage 2 is laid out; that's the fastest way for this number. If you're doing more than a handful of curves, I strongly suggest you use GMP-ECM (Windows or Linux) for stage 2. I think I ran 500 curves in just this way a couple years back; I left an old Core 2 Quad on it for a few months.[/QUOTE] In my case, it is a 3 GHz Core 2 Duo. I am familiar with the GMP-ECM procedure. A line in [I]prime.txt[/I] needs to be [C]GmpEcmHook=1[/C] to generate the files needed for stage 2. This old machine only has 4 GB of RAM. I am not sure if this would be enough. I would simply have to try it and see how it behaves. I would not necessarily have to run GMP-ECM there. I could do it on another machine with a lot more RAM. I probably have the entire setup in an archive here somewhere. This will give me something to do and to think about. I hit the big 65 in 13 days. [U]Thank you for the feedback[/U]. :smile:
If memory is not sufficient, you could try [c]ecm -maxmem 3072[/c]. That will limit it to 3 GB of RAM usage.
For me, GMP-ECM reports [c]Estimated memory usage: 8.08GB[/c] with B2 = 51,985,969,455,438 (the ECM default at B1 = 2e9). With [c]ecm -maxmem 3072[/c] it says [c]Estimated memory usage: 1.93GB[/c], so this should be totally fine for that system.
Sorry, I went for the wrong B1.
For me, GMP-ECM reports [c]Estimated memory usage: 16.50GB[/c] with B2 = 262,752,699,834,252 (the ECM default at B1 = [B]6e9[/B]). With [c]ecm -maxmem 3072[/c] it says [c]Estimated memory usage: 1.93GB[/c], so this should be totally fine for that system.
[QUOTE=kruoli;557653]Sorry, I went for the wrong B1.
For me, GMP-ECM reports [c]Estimated memory usage: 16.50GB[/c] with B2 = 262,752,699,834,252 (the ECM default at B1 = [B]6e9[/B]). With [c]ecm -maxmem 3072[/c] it says [c]Estimated memory usage: 1.93GB[/c], so this should be totally fine for that system.[/QUOTE] I needed to go back and look through the parameters again anyway. I was unaware RAM usage could be restricted. Thanks! 14 hours for each stage-one curve on that machine with B1 = 6e9. I am loading it in groups of 10, a single curve in each work line. 5.8 days for the group. Then I will go to GMP-ECM. Once finished, repeat the process.
Since you have a dual core CPU, you should be able to increase efficiency by doing stage 1 of a set [$]A[/$] and stage 2 of a set [$]B[/$] in parallel, if you like! :smile:
Prime95's parallelization is not efficient at all at those tiny FFTs. The OpenMP functionality of GMP-ECM currently only works in stage 2 (I'd like to be corrected), and even then only helps in certain sub-steps.
[QUOTE=storm5510;557658]I needed to go back and look through the parameters again anyway. I was unaware RAM usage could be restricted. Thanks!
14 hours for each stage-one curve on that machine with B1 = 6e9. I am loading it in groups of 10, a single curve in each work line. 5.8 days for the group. Then I will go to GMP-ECM. Once finished, repeat the process.[/QUOTE] Once you finish the first batch of 10 on stage 1, one core can do stage 2 while the other does the next batch of stage 1. Stage 1 doesn't take nearly as much memory. The "k" parameter in GMP-ECM controls how many chunks of work stage 2 is divided into. Restricting memory simply cuts the job into more chunks: a small loss in time, but the same job gets done. Using -maxmem tells GMP-ECM to choose a k that keeps memory below your designated cutoff.
[QUOTE=VBCurtis;557676]Once you finish the first batch of 10 on stage 1, one core can do stage 2 while the other does the next batch of stage 1. Stage 1 doesn't take nearly as much memory.
The "k" parameter in GMP-ECM controls how many chunks of work stage 2 is divided into. Restricting memory simply cuts the job into more chunks: a small loss in time, but the same job gets done. Using -maxmem tells GMP-ECM to choose a k that keeps memory below your designated cutoff.[/QUOTE] I found my archive. The original line, like below, is in a short batch file. I did it this way so I would not have to try to remember all the parameters. I modified it to match the current work. [CODE]echo "2^1277-1" | ecm -v -maxmem 3072 -resume results.txt -savea gmp_results.txt 6e9-6e9 [/CODE]Batch language can sometimes be strange. However, this worked. Not specifying B2 opens the door for something really big.
I scrapped the Core2Duo and replaced it with a Xeon Quad. Much better.
I looked at the detail for [B]M1277[/B] on [I]mersenne.org[/I]. A few people are running ECM on it. What bounds, it does not display. I also looked at the page titled [URL="https://www.mersenne.org/report_ecm/"]PrimeNet ECM Progress[/URL], and I see the last group of curves, 360,000 in all, were run with B1 at 8e8 (800,000,000). Adding all the curve requirements across the page comes to 552,500, if I did not miss anything. I created a test batch file for GMP-ECM using M1861. It has four factors, two of which are relatively small. In every case, it found one or the other of the small factors regardless of the sigma value. If I ran it in looping mode, like this: [C]-c 100[/C], then it would catch one of the factors on the first pass and keep on going without finding anything else. When I added [C]-one[/C] to the command line, it would stop at the first factor and then drop out to the command prompt. The [C]-one[/C] parameter is mentioned in a paragraph in the [I]readme[/I] file. A few years ago, [B]PhilF[/B] ran a series of tests beginning with having [I]Prime95[/I] run Stage 1 to 8e8 with [C]GmpEcmHook=1[/C] in batches of 50. He then ran Stage 2 with GMP-ECM. If he [U]did not have[/U] [C]-one[/C] in his command line, he could have missed something. He ran this round the clock. Anything found with [I]Prime95[/I], he would have known about because it would be in the submitted portion of the results. I feel it may be remotely possible something was missed by those who used GMP-ECM and did not pay close attention to the contents of the [I]readme[/I]. People do not like reading documentation and will often stop when they see what they need to get by...
I don't follow what you're saying. How would he have missed something?
-one is a flag that asks GMP-ECM to stop when any factor is found. What does setting that have to do with missing a factor?
Maybe he is a bit confused by the fact that some factorization programs will not display anything until the input is fully factored; they lack a "pre-factor" or "partial factor" function. For example, the "factorint(n)" function in pari/gp will not return until n is fully factored. Therefore, if you start it and let it run for a while, then abort it with ctrl+c (because it didn't display nor log any factor), you "lose" the factors it found in the meantime. Fortunately, not the case for ECM.

[QUOTE=LaurV;560273]Maybe he is a bit confused by the fact that some factorization programs will not display anything until the input is fully factored; they lack a "pre-factor" or "partial factor" function. For example, the "factorint(n)" function in pari/gp will not return until n is fully factored. Therefore, if you start it and let it run for a while, then abort it with ctrl+c (because it didn't display nor log any factor), you "lose" the factors it found in the meantime. Fortunately, not the case for ECM.[/QUOTE]
For a workaround type "? factorint" in gp and then pay close attention to what you can do with the optional flag argument. 
I know that. It was just an example, trying to explain VB's dilemma :razz:. In spite of that, there may be cases when factorint() "knows" a factor internally, but you lose it if you quit with ctrl+c; for example, with multiple ECM factors, one is found very fast but it is not written out, and the ECM continues with the cofactor. Therefore, if you "break", you may want to print n before exiting from the break environment. You may be surprised to find it is smaller than your initial n.
(However, indeed, some people don't know how to properly use pari functions, so your comment is welcome).
[QUOTE=VBCurtis]I don't follow what you're saying. How would he have missed something?
-one is a flag that asks GMP-ECM to stop when any factor is found. What does setting that have to do with missing a factor?[/QUOTE] If he was [U]not[/U] using it on long runs. Not using it would allow a found factor to roll off the top of the screen as the program continued to run. A minor workaround would be to use something to capture every line appearing on the screen and write it into a text file. I believe Linux may have something like this built in. In the past, I have used a little program called [I]mtee[/I]. It does the same thing. Then a person would have to do a manual search of each capture file. [QUOTE=xilman]For a workaround type "? factorint" in gp and then pay close attention to what you can do with the optional flag argument. [/QUOTE] I am not sure I follow. "ecm ? factorint" produces an error message. [U]Edit[/U]: I just sent a message to PhilF to see if he was using the [C]-one[/C] option in his processes.
On Linux you can use something like:
[code] ecm ... | tee -a ecm.log | grep -i factor [/code] That writes all output to ecm.log (or whatever you call the log file) and writes just the lines containing "factor" to the screen. I would have as many running as I have cores (each with its own log), all writing to one terminal window if they find a factor. Chris
[QUOTE=storm5510;560315]If he was [U]not[/U] using it on long runs. Not using it would allow a found factor to roll off the top of the screen as the program continued to run.
[/QUOTE] Hardly any of us use ECM with solely screen output. In windows or linux, put > outputfile.txt at the end of your ECM invocation. It's rather challenging to keep records of ECM levels without logs, you know? Or, manage ECM with ecm.py. That way, just one log for any ECM work. 
[QUOTE=VBCurtis;560341]Hardly any of us use ECM with solely screen output. In windows or linux, put > outputfile.txt at the end of your ECM invocation. It's rather challenging to keep records of ECM levels without logs, you know?
Or, manage ECM with ecm.py. That way, just one log for any ECM work.[/QUOTE] Or better still, use tee 
May I ask you to discuss here only the topic of the thread, please?

[QUOTE=Viliam Furik;560365]May I ask you to discuss here only the topic of the thread, please?[/QUOTE]
The topic of the thread as suggested in the first post is so utterly hopeless that the rest of the discussion is the only useful content. OP's idea was debunked by multiple people, who gave data and probabilities. 
[QUOTE=Viliam Furik;560365]May I ask you to discuss here only the topic of the thread, please?[/QUOTE]
This is a discussion about [B]M1277[/B]. The OP introduced it. Anything reasonable and related should go. The downside to [I]mtee[/I] is that [U]everything[/U] written to the screen is captured. This can result in many megabytes of text. [I]GMP-ECM[/I] places a group of asterisks before any line containing found-factor information. This makes for a quick search with something like [I]Notepad++[/I].
[QUOTE=storm5510;560315]If he was [U]not[/U] using it on long runs. Not using it would allow a found factor to roll off the top of the screen as the program continued to run.
A minor workaround would be to use something to capture every line appearing on the screen and write it into a text file. I believe Linux may have something like this built in. In the past, I have used a little program called [I]mtee[/I]. It does the same thing. Then a person would have to do a manual search of each capture file. I am not sure I follow. "ecm ? factorint" produces an error message. [U]Edit[/U]: I just sent a message to PhilF to see if he was using the [C]-one[/C] option in his processes.[/QUOTE]Please pay attention. I specifically wrote [b]in gp[/b].
[QUOTE=xilman;560398]Please pay attention.
I specifically wrote [B]in gp[/B].[/QUOTE] Apologies. I am not familiar with [B]gp[/B]. :blush: 
[QUOTE=VBCurtis;560368]The topic of the thread as suggested in the first post is so utterly hopeless that the rest of the discussion is the only useful content. OP's idea was debunked by multiple people, who gave data and probabilities.[/QUOTE]
Maybe a mod can change the topic to "Python script to needlessly waste energy and compute cycles". 
[QUOTE=mathwiz;560418]Maybe a mod can change the topic to "Python script to needlessly waste energy and compute cycles".[/QUOTE]
That's not nice of you... 
[QUOTE=Viliam Furik;560426]That's not nice of you...[/QUOTE]
Perhaps a bit snarky, but is the topic of this thread "general efforts to factor M1277" or just the original Python script? If the latter, then wasting energy and compute is all that would be accomplished. 
For fun, I just timed factor5 (the old version from 2009, I don't have a newer one) and it is about a few [U]thousand[/U] times faster than the script (slow laptop, 6 threads on 2 cores to feed the CPU 100%, P95 running in the background, i.e. doing nothing due to priority limit, computer still normally responsive).
Edit: that was for fun! TF on this exponent is totally pointless. Unless you consider yourself extremely lucky guy (like we Romanians would say, you stepped on a shit or put your hand in it, or one shit fell on your head, or [URL="https://www.premierhotels.co.za/7_goodlucksuperstitions/"]something[/URL]). 
1 Attachment(s)
[QUOTE=LaurV;560498]For fun, I just timed factor5 (the old version from 2009, I don't have a newer one) and it is about a few [U]thousand[/U] times faster than the script (slow laptop, 6 threads on 2 cores to feed the CPU 100%, P95 running in the background, i.e. doing nothing due to priority limit, computer still normally responsive).
[/QUOTE] I believe M1277 has been trial-factored to 2^67. I have both the old and the new [I]factor5[/I] here somewhere. The new one doesn't work in terms of 2^x; it uses [I]k[/I] only. Somewhere in the notebook I keep, there is a conversion formula I scratched down a long time ago. Someone here presented the formula. Who or when, I cannot remember. After doing a little searching in my notes, I believe I found it. It looks like so: [B]k = 2[SUP]x[/SUP] / 2p[/B]. The x would be replaced with 67, then 68. 2p is 2*1277. If this is correct, the lower [I]k[/I] would be 57,781,500,622,426,160, and the upper is 115,563,001,244,852,320. I do not believe it multithreads, so it could take decades, if not centuries, to run. I found the archive. It is attached below if anyone wants to mess with it. It produces a results file, but no interim screen output. [U]Luigi Morelli[/U] wrote this in 2018. I believe many here know who Luigi is.
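The conversion above is easy to check (a quick sketch using the formula as quoted, with integer division):

```python
# Quick check of the bit-level -> k conversion quoted above: k = 2^x / (2p).
p = 1277
k_lo = 2 ** 67 // (2 * p)   # lowest k at the 67-bit level
k_hi = 2 ** 68 // (2 * p)   # lowest k at the 68-bit level
print(k_lo)  # 57781500622426160
print(k_hi)  # 115563001244852320
```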
This is (as the readme says) the version optimized for large p and small k. It is too slow for small p, and it would not be worth it anyhow, even if TF were indicated (but as I said, and everybody said, it is NOT! Don't waste your time with TF).
I downloaded it and deleted it without launching it (why the exe only, and not the sources? Rhetorical question, no need to answer).
[QUOTE=LaurV;560498]Unless you consider yourself extremely lucky guy (like we Romanians would say, you stepped on a shit or put your hand in it, or one shit fell on your head, or [URL="https://www.premierhotels.co.za/7_goodlucksuperstitions/"]something[/URL]).[/QUOTE]
We Canucks say: "Stepped in a pile of :poop: and came out smelling like a Rose" 
[QUOTE=LaurV;560561]I downloaded it and deleted it without launch (why the exe only, and not the sources?  rhetoric question, no need answer).[/QUOTE]
One word: [U]Windows[/U]. There appears to be a Linux variant [URL="https://download.mersenne.ca/factor5/factorj_1.01.zip"]here[/URL]. After some pondering, I remembered Luigi wrote this for [B]James Heinrich[/B] for use in his TF>1000M project. [I]mfaktc[/I] is limited to exponents no larger than 2^32-1. James wanted something to filter out composites beyond that. I believe his target was somewhere around 10 billion.
[QUOTE=storm5510;560554]I believe M1277 has been trial-factored to 2^67. I have both the old and the new [I]factor5[/I] here somewhere. The new one doesn't work in terms of 2^x; it uses [I]k[/I] only. Somewhere in the notebook I keep, there is a conversion formula I scratched down a long time ago. Someone here presented the formula. Who or when, I cannot remember.
After doing a little searching in my notes, I believe I found it. It looks like so: [B]k = 2[SUP]x[/SUP] / 2p[/B]. The x would be replaced with 67, then 68. 2p is 2*1277. If this is correct, the lower [I]k[/I] would be 57,781,500,622,426,160, and the upper is 115,563,001,244,852,320. I do not believe it multithreads, so it could take decades, if not centuries, to run. I found the archive. It is attached below if anyone wants to mess with it. It produces a results file, but no interim screen output. [U]Luigi Morelli[/U] wrote this in 2018. I believe many here know who Luigi is.[/QUOTE] Nah, no decades, and certainly not centuries. The range 67-68 bits for M1277 represents 23,407 GHz-days, which, if mfaktc supported such low exponents, could be done in under a week with a 2080 Ti (possibly; it depends on what the throughput would be). Unless we are talking specifically about this program, then maybe.
It really doesn't matter how long it takes any program to get to 68 or 69 bits, when ECM has ruled out factors below ~67-68 DIGITS.
LaurV is just trolling. 
[QUOTE=VBCurtis;560605]... when ECM has ruled out factors below ~67-68 DIGITS.
[/QUOTE] Do we know a specific biggest factor digit size that is ruled out for sure? Or at least a 1-in-too-much-to-matter probability that there is an unnoticed factor?
[QUOTE=Viliam Furik;560608]Do we know a specific biggest factor digit size that is ruled out for sure? Or at least a 1-in-too-much-to-matter probability that there is an unnoticed factor?[/QUOTE]
It's ECM; that number is always fuzzy. When a T-level has been completed, there is a 1/e chance of missing a factor of that size (if one exists). So, when T65 is done, a 65-digit factor is missed 1/e of the time. By the time 2*T65 is complete, the chance of a missed 65-digit factor is (1/e)^2. We have done far more than T65 on M1277; I don't have the exact count, but I imagine somewhere around half a T70 = 3*T65. So, a 65-digit factor or smaller can be ruled out with something like 1 - (1/e)^3 certainty, and a 67-digit factor is unlikely. Ryan Propper doesn't always report his ECM work, so I would not be surprised to learn a full T70 or more has been completed. Similarly, I would be quite surprised if a factor below 69 digits turns up for this number.
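The miss-probability arithmetic above can be sketched in a couple of lines (assuming, as the post does, a 1/e miss chance per completed T-level):

```python
import math

# Chance that a factor at the targeted digit size has NOT been missed
# after t completed T-levels, assuming a 1/e miss probability per T-level.
for t in (1, 2, 3):
    print(f"{t} T-levels: {1 - math.exp(-t):.4f} confidence")
```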
M1277 is 385 decimal digits long. It may have a factor less than 100 digits in length; then again, it may not. Anyone could stab at it with ECM for a very long time and find nothing. Someone here made mention of using SNFS. [I]YAFU[/I] does SNFS, but it may not be capable of handling an input this large. So, for now, we wait.

[QUOTE=storm5510;560628] So, for now, we wait.[/QUOTE]
Wait for what? We haven't done nearly enough ECM to justify the time SNFS would take. You can wait for others to do the ECM, or you can contribute if you wish. I've done both, myself: I ran a couple of CPU-years of ECM, and have waited since. I have a large-memory machine available now, so perhaps I'll restart a little large-bound ECM and contribute more than just posts.
[QUOTE=VBCurtis;560605]LaurV is just trolling.[/QUOTE]
I am not. hihi. :w00t: I said in every post, sometimes twice, that the TF is not indicated/recommended/wanted. I just did a comparison run between factor5 (a compiled, reasonable optimized toy, that could, in theory, factor this exponent) and the OP script (isn't this what he requested?). 
[QUOTE=VBCurtis;560649]I've done both, myself[/QUOTE]
Me too, for M[M]1061[/M]. When the SNFS factors came out, I saw how futile my ECM and "tremendously high P-1" were (I don't remember the exact B1/B2 values, but they should be in the PrimeNet DB). So, now I am healed of considering myself lucky (well... in this domain :smile:). Probabilistically, the smallest factor is somewhere at 140+ digits, sooooo...
[QUOTE=VBCurtis;560649]Wait for what?
We haven't done nearly enough ECM to justify the time SNFS would take. You can wait for others to do the ECM, or you can contribute if you wish. I've done both, myself: I ran a couple of CPU-years of ECM, and have waited since. I have a large-memory machine available now, so perhaps I'll restart a little large-bound ECM and contribute more than just posts.[/QUOTE]You've got to ask yourself one question. Do I feel lucky? Well, do ya, punk?
[QUOTE=VBCurtis;560649]Wait for what?
We haven't done nearly enough ECM to justify the time SNFS would take. You can wait for others to do the ECM, or you can contribute if you wish. I've done both, myself: I ran a couple of CPU-years of ECM, and have waited since. I have a large-memory machine available now, so perhaps I'll restart a little large-bound ECM and contribute more than just posts.[/QUOTE] [U]There is no need to be abrasive[/U]. I am running groups of these with [I]Prime95[/I] on the replacement machine using four workers. I have it set to [U]not[/U] report these to PrimeNet. Reason: I am not running these in a conventional way. I took the last B1 on the [I]mersenne.org[/I] table and doubled it, making 16e8. I will run it this way for a while, then increment it to 24e8, then 32e8, and so on, adding 8e8 as I go. As for [I]GMP-ECM[/I], I do not know if I can keep it 'fed' with just four workers. If not, then I can run another instance of [I]Prime95[/I] on my primary machine, taking the total number of workers to eight. I would allow [I]GMP-ECM[/I] to pick its own B2 by not specifying it. Lastly, it is a foregone conclusion some of you will say that I am wasting my time. The response is: it is my time to waste, not yours, so do not bug me about it...
[QUOTE=VBCurtis;560610]Ryan Propper doesn't always report his ECM work, so I would not be surprised to learn a full T70 or more has been completed. Similarly, I would be quite surprised if a factor below 69 digits turns up for this number.[/QUOTE]
I've unfortunately lost a complete count of curves, but I believe I did a full T65 as well as T70 on this number. 
Thanks for confirming, Ryan!
T70 is often represented by curves at B1=2.9e9 or so. If that level is complete, then T75 curves are what is useful now; that's B1=7e9 or bigger. That's one curve per core per day range.... yeesh. I'll give a curve at B1=1e10 a try, and see what ecm -v reports for B2, memory use, and # of curves to a T70 or T75.
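The "one curve per core per day" figure can be sanity-checked with a rough model: GMP-ECM stage-1 time grows roughly linearly in B1, and a stage-1 time of about 67 s at B1=7e6 is reported later in this thread (extrapolating that single data point is an assumption; actual timings vary by machine and parameters):

```python
# Rough linear extrapolation of ECM stage-1 time in B1.
base_b1, base_secs = 7e6, 67.0   # measured stage-1 time at B1=7e6
target_b1 = 1e10                 # proposed T75-range curve

est_secs = base_secs * (target_b1 / base_b1)
print(round(est_secs / 3600, 1), "hours per stage-1 curve")  # 26.6 hours
```

Roughly 27 hours of stage 1 per curve, i.e. the "one curve per core per day" ballpark, before stage 2 is even counted.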
[QUOTE=storm5510;560719]Lastly, it is a foregone conclusion some of you will say that I am wasting my time. The response is: It is my time to waste, not yours, so do not bug me about it...[/QUOTE]
More ECM on this number is not a waste, but it's becoming increasingly difficult. :smile: And it's recommended that you or anyone else do curves at B1=76e8 or higher, maybe 1e10 or 2e10 now. 
[QUOTE=VBCurtis;560744]Thanks for confirming, Ryan!
T70 is often represented by curves at B1=2.9e9 or so. If that level is complete, then T75 curves are what is useful now; that's B1=7e9 or bigger. That's one curve per core per day range.... yeesh. I'll give a curve at B1=1e10 a try, and see what ecm -v reports for B2, memory use, and # of curves to a T70 or T75.[/QUOTE] I will run some tests using AVX-ECM. Scaled tests, at B1=7e6, show that first-stage curve throughput is about 2.4x larger than GMP-ECM. [CODE]echo "2^1277-1" | ../../ecm-7.0.4-linux/ecm -v 7000000
GMP-ECM 7.0.4 [configured with GMP 6.2.0, --enable-asm-redc] [ECM]
Tuned for x86_64/k8/params.h
Input number is 2^1277-1 (385 digits)
Using special division for factor of 2^1277-1
Using B1=7000000, B2=17125579390, polynomial Dickson(12), sigma=0:12224056895737954441
dF=16384, k=6, d=158340, d2=11, i0=34
Expected number of curves to find a factor of n digits:
35      40      45      50      55      60      65      70      75      80
167     1024    7351    60402   558826  5744532 6.5e+07 7.8e+08 1.1e+10 2.8e+11
Step 1 took 67016ms[/CODE] vs. [CODE]./yafu "ecm(2^1277-1,8)" -v -v -B1ecm 7000000

10/22/20 15:17:47 v1.35-beta @ cpu, System/Build Info:
Using GMP-ECM 7.0, Powered by GMP 6.2.0
detected Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
detected L1 = 32768 bytes, L2 = 28835840 bytes, CL = 64 bytes
measured cpu frequency ~= 2500.116400
using 1 random witnesses for Rabin-Miller PRP checks
===============================================================
======= Welcome to YAFU (Yet Another Factoring Utility) =======
=======             bbuhrow@gmail.com                   =======
=======     Type help at any time, or quit to quit      =======
===============================================================
cached 664579 primes. pmax = 9999991

>> process id is 141455

commencing parallel ecm on 2601983048666099770481310081841021384653815561816676201329778087600902014918340074503059860433081046210605403488570251947845891562080866227034976651419330190731032377347305086443295837415395887618239855136922452802923419286887119716740625346109565072933087221327790207134604146257063901166556207972729700461767055550785130256674608872183239507219512717434046725178680177638925792182271
ECM has been configured with DIGITBITS = 52, VECLEN = 8, GMP_LIMB_BITS = 64
Choosing MAXBITS = 1456, NWORDS = 28, NBLOCKS = 7 based on input size 1277
linesieve took 0.022753 seconds
cached 5761455 primes < 99999989
Input has 1277 bits, using 1 threads (8 curves/thread)
Processing in batches of 100000000 primes
Initialization took 0.1013 seconds.
Building curves took 0.0007 seconds., B2=100*B1
commencing Stage 1 @ prime 2
Stage 1 took 219.2920 seconds[/CODE] So, 219.3/8 = 27.4 sec/curve versus 67 sec/curve. AVX-ECM stage 2 took 118 sec, so 14.75 sec/curve, but these are just the standard continuation up to 100*B1. It should scale linearly up to 7e9. But I've never run AVX-ECM with B1 anywhere close to that large. My bet is it crashes... but I'll test and see what happens.
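The speedup arithmetic in that comparison checks out (a quick sanity check using the stage-1 timings quoted above):

```python
# AVX-ECM ran 8 curves' stage 1 in 219.3 s; GMP-ECM took 67 s for one curve.
avx_per_curve = 219.3 / 8          # seconds per curve, AVX-ECM
gmp_per_curve = 67.0               # seconds per curve, GMP-ECM
speedup = gmp_per_curve / avx_per_curve
print(round(avx_per_curve, 1), round(speedup, 2))  # 27.4 2.44
```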
[QUOTE=ryanp;560745][COLOR=Gray]More ECM on this number is not a waste, but it's becoming increasingly difficult. [/COLOR]:smile: And it's recommended that you or anyone else do curves at B1=76e8 or higher, maybe 1e10 or 2e10 now.[/QUOTE]
Thank you for the reply. You are the guy at the top of the ECM producers table. :grin: It takes 155 minutes to run four curves in tandem at B1=16e8 on the machine where I have [I]Prime95[/I] running now. A single test with GMP-ECM, with no limit on B2, takes 50 minutes. At 76e8, I would eventually get into this area. I could decrease my increment interval to one day, for example, until I reach the higher bounds. It is all about the time...
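For a sense of throughput at those numbers, a back-of-the-envelope estimate (assuming the quoted 155 minutes per batch of four curves holds steady around the clock):

```python
# Four curves every 155 minutes -> curves per day on that machine.
minutes_per_batch = 155
curves_per_batch = 4
curves_per_day = curves_per_batch * (24 * 60) / minutes_per_batch
print(round(curves_per_day, 1))  # 37.2
```

About 37 curves a day at B1=16e8, which puts the curve counts needed for the higher T-levels into perspective.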
[U]Disregard my above...
[/U] A while back, [B]VBCurtis[/B] suggested I start at 6e9. I have everything running at this level now. After a period of time, at least a month, I will go to 7e9, then continue this progression in monthly intervals. I made a notation in my notebook with a red Sharpie marker. I would like to be able to run 500 tests with GMP-ECM on each group. Whether this many is possible, I will have to wait and see.