[QUOTE=Maciej Kmieciak;520078]Should I use mfakto for AMD GPU?[/QUOTE]Assuming from your quote that you're on Windows, you don't need to compile anything, just download the precompiled binary from [url]https://download.mersenne.ca/mfakto[/url] (latest version is v0.15pre6). Once unzipped, put your worktodo.txt in the directory and run mfakto.exe; your results will appear in results.txt
Note that there is no integrated automated work-fetch or result-submission; you'll either need to do it manually, or set up something to do it for you. I've heard some people use [url=https://www.mersenneforum.org/forumdisplay.php?f=103]MISFIT[/url] (I haven't used it myself); for my own [url=https://www.mersenne.ca/tf1G.php]>1000M TF project[/url] I provide example scripts to automate work fetching and result submission. If you have more specific mfakto questions you can ask them in [url=https://www.mersenneforum.org/showthread.php?t=15646]the mfakto thread[/url].
Thanks a lot! Everything working fine. Now I feel like a donkey...

Great you got it working. Now you can free up your CPU for other GIMPS work that is better suited to the CPU, such as PRP, or perhaps P-1 if you have a lot of RAM available and want to find factors.

[QUOTE=James Heinrich;520082](...)GIMPS work that is better suited to CPU, such as PRP, or perhaps P-1 if you have a lot of RAM available and want to find factors.[/QUOTE]
Or ECM in very low ranges if you don't have that much mem available... :whistle: Hope you'll find many new factors with the GPU! They are really much, much more efficient at TF than CPUs.
For fetching work and submitting results under windoze, you may install MISFIT as James said. You need to configure it at the beginning: where to take work from, how much, and where to report the results. Then you can forget about it; it will do its job by itself.
If you have more than one GPU in your system (or network, whatever you use), then MISFIT is a must-have: it fetches and distributes the work to all GPUs equitably, according to their speed, etc.

If you want to continue working in the high range (900M or so), select fetching work directly from GIMPS. You will find a lot of factors there; there are still a lot of "low-hanging fruits". If you want to switch to working at the "LL front" (in the 80M+), you should consider making an account with gpu72.com, and set MISFIT to fetch work from there. You will not find so many factors, and assignments will take longer, but your GPU will work more efficiently (I assume you have at least a midrange AMD card; the low-range ones work about the same in the 900M), and your contribution to the project will be more appreciated, due to the fact that you help the people who do immediate LL/PRP tests.

The 900M range will not be LL/PRP tested for the next 20 years or so, and working there is not so "valuable" (besides the satisfaction of finding a lot of factors). I use the quotes because "valuing" it is very subjective. ANY work is valuable, and 101 km/h is faster than 100 km/h, but we don't know what the next 20 years will bring, on the hardware side or the theory side. Maybe the hardware in 20 years will be so fast it can do in a day the factoring we now do in a year, or maybe some mathematical breakthrough in factoring will make our TF activity futile, who knows? That is why some of us consider working at the LL/PRP front "more valuable". But that is up to you. It is your hardware, your money, your headache :razz:
another one from ryan 5443
[QUOTE] [url]https://www.mersenne.org/report_exponent/?exp_lo=5443&full=1[/url] [/QUOTE] 
[URL="https://www.mersenne.ca/exponent/1073741827"]M1,073,741,827[/URL] has a factor: 16084529043983099051873383
This exponent is just outside the range of Primenet, and is relevant to the (trivial) "[URL="http://mprime.s3-website-us-west-1.amazonaws.com/new_mersenne_conjecture.html"]New Mersenne Conjecture[/URL]". The factor is 84 bits; it was found with mfaktc 99.7% of the way through the 83-84 bit range.
Thank you for the advice. I will consider it.
Meanwhile, I found my first P-1 factor: 110778360181007451990785681 of [URL="https://www.mersenne.org/report_exponent/?exp_lo=91909661&full=1"]M91909661[/URL]
P-1 found a factor in stage #2, B1=695000, B2=12336250.
UID: Jwb52z/Clay, M91819087 has a factor: 9230049330437009568775992019241 (P-1, B1=695000, B2=12336250), 102.864 bits.
My second P-1 factor has 124.548 bits!
[URL="https://www.mersenne.org/report_exponent/?exp_lo=91818157&full=1"]M91818157[/URL] / 31098974498726581899484794487625862449 
[QUOTE=Maciej Kmieciak;520872]My second P-1 factor has [url=https://www.mersenne.ca/exponent/91818157]124.548 bits[/url]![/QUOTE]Wow, that's impressive! I've found nearly 2000 P-1 factors over 12 years, and [url=https://www.mersenne.ca/pm1user/74130]yours[/url] would be #2 on [url=https://www.mersenne.ca/pm1user/1311]my[/url] all-time list. Congrats! :bow:

Double Dipping....Lucky 7
My 7th Double Factor:
[CODE]Manual testing 41962471 F 2019-07-07 00:45 0.0 Factor: 8804098022939114966479 / TF: 72-73 43.2748
Manual testing 41962471 F 2019-07-07 00:45 0.0 Factor: 4747769406485693763313 / TF: 72-73 0.1764[/CODE] These previously: 38603501 40206629 40288393 74094589 80171381 83016971
P-1 found a factor in stage #2, B1=700000, B2=12250000.
UID: Jwb52z/Clay, M91749521 has a factor: 619229042237586676196401 (P-1, B1=700000, B2=12250000), 79.035 bits.
Found my first double factor in a long time:
[QUOTE][Mon Jul 15 16:23:48 2019] P-1 found a factor in stage #1, B1=900000.
UID: ixfd64/gamepc10, M57051389 has a factor: 15917772157554034111985227636117525826653839990155819807 (P-1, B1=900000)[/QUOTE] [QUOTE]Composite factor 15917772157554034111985227636117525826653839990155819807 = 41498102240422994071951283743 * 383578315589782564436654849[/QUOTE]
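For anyone curious, the split is easy to sanity-check in a few lines of Python (a sketch using only the numbers quoted above):

```python
# Sanity check on the double factor above: the reported composite splits
# into the two primes shown, and each piece has the mandatory 2*k*p + 1
# form for a factor of M57051389.

p = 57051389
composite = 15917772157554034111985227636117525826653839990155819807
f1 = 41498102240422994071951283743
f2 = 383578315589782564436654849

assert f1 * f2 == composite        # the split is consistent
for f in (f1, f2):
    assert f % (2 * p) == 1        # any factor of M(p) is 1 mod 2p
    assert pow(2, p, f) == 1       # and indeed divides 2^p - 1
```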
[QUOTE=ixfd64;521691]Found my first double factor in a long time:[/QUOTE]What's really odd... you shouldn't have found that, because the person who did the previous P-1 in Jan 2012 should have already found the double factor, even using their smaller bounds and stage 1 only:
[url]https://www.mersenne.ca/exponent/57051389[/url] 
Yeah, it looks like the original P1 run was bad.
Aaron could probably check whether that computer has returned other results. 
found 2 factors while rendering an animated movie :)
M100117163 has a factor: 6010855553648221341239 [TF:72:73:mfaktc 0.21 barrett76_mul32_gs]
M100117243 has a factor: 8149287616660300579927 [TF:72:73:mfaktc 0.21 barrett76_mul32_gs]
[QUOTE=matzetoni;521755]found 2 factors while rendering an animated movie :)[/QUOTE]
What a waste of compute resources! Who cares about movies? 
[QUOTE=chalsall;521761]What a waste of compute resources! Who cares about movies?[/QUOTE]
Haha, it only took 46 h to render 44 sec of footage, so let that sink in :) But the GPU was unused throughout, so I thought let's run some TF tasks in parallel. I would never waste these resources!
[QUOTE=matzetoni;521765]Haha, it only took 46 h to render 44 sec of footage, so let that sink in :) but the GPU was unused throughout, so I thought let's run some TF tasks in parallel. I would never waste these resources![/QUOTE]But wouldn't the GPU normally be better for the rendering?

[QUOTE=Uncwilly;521770]But, wouldn't the GPU be better for the rendering normally?[/QUOTE]
Yeah, I'm pretty new to 3D modeling and stuff, and thought today's rendering algorithms utilize GPUs too. But somehow the program I used (Maya's Arnold) is only CPU-based :B
Some small inane factor in aliquot(1893980) ...
[CODE]GMP-ECM 7.0.4 [configured with GMP 6.1.2, --enable-asm-redc] [ECM]
Input number is 254002514263338585905509765345696287668987072012638950799287126646368537264285121189205430300041674211729140634486241142246181381327989627 (138 digits)
Run 25 out of 300:
Using B1=11000000, B2=35133391030, polynomial Dickson(12), sigma=1:691934274
Step 1 took 35229ms
Step 2 took 13979ms
********** Factor found in step 2: 8161718039011207199392336849039284518972178201323540591
Found prime factor of 55 digits: 8161718039011207199392336849039284518972178201323540591
Prime cofactor 31121206717661985178571150402021045939613390353377963868618468583534443573643625397 has 83 digits
[/CODE]
ryan factored 2671
[url]https://www.mersenne.org/report_exponent/?exp_lo=2671&full=1[/url] 
[QUOTE=srow7;522377]ryan factored 2671[/QUOTE]And an impressive [url=https://www.mersenne.ca/exponent/2671]240+ bits[/url] it is :cool:

[QUOTE=srow7;522377]ryan factored 2671
[url]https://www.mersenne.org/report_exponent/?exp_lo=2671&full=1[/url][/QUOTE] #9 I think. 
P-1 found a factor in stage #2, B1=700000, B2=12425000.
UID: Jwb52z/Clay, M92245541 has a factor: 9670219730590440223397530337 (P-1, B1=700000, B2=12425000), 92.966 bits.
[url]https://www.mersenne.org/report_exponent/?exp_lo=187825411&full=1[/url]

Persistence pays... :cool:

[QUOTE=lycorn;522528]Persistence pays... :cool:[/QUOTE]
caught in the corner :philmoore:
P-1 found a factor in stage #2, B1=700000, B2=12425000.
UID: Jwb52z/Clay, M92287621 has a factor: 718108499477696020719338125254737 (P-1, B1=700000, B2=12425000), 109.146 bits!
[QUOTE=Jwb52z;522982]109.146 bits![/QUOTE]By my count that's [url=https://www.mersenne.ca/pm1user/789]your #4[/url] all-time biggest :smile:

P-1 found a factor in stage #2, B1=575000, B2=18687500, E=12.
UID: harlee/i5-5250U_1600, M9970603 has a factor: 14724706906047922109582216266621697 (P-1, B1=575000, B2=18687500, E=12) 113.504 bits!
[QUOTE=harlee;523240]113.504 bits![/QUOTE]And with a very smooth k, it should actually have been found with the previous P-1 run waay back in July 2000:
[url]https://www.mersenne.ca/exponent/9970603[/url] 
Now a small one
[URL="https://www.mersenne.org/report_exponent/?exp_lo=92303513&full=1"]M92303513[/URL] / [URL="http://www.mersenne.ca/factor/379742139514967992231801"]379742139514967992231801[/URL] 78.329 bits. Almost TF range. 
I got one!
2^284.508.079-1 has a factor of 965.748.892.491.730.331.561
[url]https://www.mersenne.ca/exponent/284508079[/url] 
M273803 has a factor.
Factor: 14621477683807587588436848342429799 / (ECM curve 14, B1=250000, B2=25000000), 113.494 bits.
Quite large for the bounds used.
Another one larger than expected:
M275399 has a factor.
Factor: 157348431025891503797857466195609 / (ECM curve 59, B1=250000, B2=25000000), 106.956 bits
P-1 found a factor in stage #2, B1=710000, B2=12602500.
UID: Jwb52z/Clay, M93609431 has a factor: 2176845263049295036367884529 (P-1, B1=710000, B2=12602500), 90.814 bits.
P-1 found a factor in stage #1, B1=820000.
UID: Jwb52z/Clay, M94891409 has a factor: 997387267742849938693943 (P-1, B1=820000) 79.722 bits.
Found one for [M]99111179[/M]  [URL="https://www.mersenne.ca/exponent/99111179"] 13267822998683836364728039[/URL] 83.456 bits which I thought was pretty cool.

What if you had found this one?
11510125 :whistle: 
[QUOTE=lycorn;525782]What if you had found this one?
11510125[/QUOTE]I don't understand. :confused: 
P-1 found a factor in stage #2, B1=885000, B2=18585000.
UID: Jwb52z/Clay, M96078113 has a factor: 40150180980878122799159 (P-1, B1=885000, B2=18585000) 75.088 bits.
P-1 found a factor in stage #1, B1=820000.
UID: Jwb52z/Clay, M94654033 has a factor: 7507521220789479758248769 (P-1, B1=820000) 82.635 bits.
Found one for [M]7111127[/M]  [URL="https://www.mersenne.ca/exponent/7111127"] 786612060695816024641268553407[/URL] 99.312 bits

P-1 found a factor in stage #1, B1=890000.
UID: Jwb52z/Clay, M96570209 has a factor: 24529069813789561422559319 (P-1, B1=890000) 84.343 bits.
I think this is the largest I've found so far [M]3333397[/M] [URL="https://www.mersenne.ca/exponent/3333397"]5987402934250702953071699518972409[/URL] 112.206 bits

[QUOTE=mrh;526924]I think this is the largest I've found so far [M]3333397[/M] [URL="https://www.mersenne.ca/exponent/3333397"]5987402934250702953071699518972409[/URL] 112.206 bits[/QUOTE]It is the largest of the 5 you've found: [url]https://www.mersenne.ca/pm1user/19538[/url]

Hey, I read that GPUs are more efficient at higher bit-levels. So why are there fewer GHz-days/day in 73-74 than in 71-73?
[CODE]no factor for M909985499 from 2^71 to 2^72 [mfakto 0.14-Win cl_barrett15_73_gs_2]
tf(): total time spent: 1m 9.138s (656.78 GHz-days/day)
no factor for M909985499 from 2^72 to 2^73 [mfakto 0.14-Win cl_barrett15_73_gs_2]
tf(): total time spent: 2m 17.558s (660.21 GHz-days/day)
no factor for M909985499 from 2^73 to 2^74 [mfakto 0.14-Win cl_barrett15_82_gs_2]
tf(): total time spent: 5m 6.655s (592.31 GHz-days/day)
no factor for M909985451 from 2^71 to 2^72 [mfakto 0.14-Win cl_barrett15_73_gs_2]
tf(): total time spent: 1m 8.872s (659.32 GHz-days/day)
no factor for M909985451 from 2^72 to 2^73 [mfakto 0.14-Win cl_barrett15_73_gs_2]
tf(): total time spent: 2m 17.148s (662.19 GHz-days/day)
no factor for M909985451 from 2^73 to 2^74 [mfakto 0.14-Win cl_barrett15_82_gs_2]
tf(): total time spent: 5m 5.676s (594.21 GHz-days/day)[/CODE]
[QUOTE=Maciej Kmieciak;527251]Hey, I read that GPUs are more efficient at higher bit-levels. So why are there fewer GHz-days/day in 73-74 than in 71-73?[/QUOTE]
Short answer: for those longer factors, mfakto needs to use a different GPU kernel that is less efficient; in other words, one that uses more instructions for the same operation.

Longer answer: instead of division, mfakto (and mfaktc) use Barrett reduction, which basically turns division into multiplication. Because of the relatively small multipliers available in the GPU cores, some tricks are needed to extend the precision. There are further optimization tricks that can be done, but these have the side effect of eating into this extended precision, so each more-optimized GPU kernel has a lower corresponding maximum bit-level. The GHz-days formula doesn't take these changes into account, as it is supposed to be related to the time a CPU takes to factor something to a given bit-level, not a GPU.

Now, I'm not familiar with AMD or mfakto specifics: are the barrett15_* kernels really still more efficient than barrett32_* even on more modern cards?
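To make the Barrett idea above concrete, here is a toy sketch in plain Python (my own illustration of the math only; the real kernels work on 15- or 32-bit limbs with the precision tricks described, and none of the names below come from mfakto):

```python
# Toy Barrett reduction: trade the division in "x mod n" for a multiply
# and a shift, using a per-modulus precomputed constant.

def barrett_setup(n, kbits):
    """Precompute mu = floor(2^(2*kbits) / n) for a modulus n < 2^kbits."""
    return (1 << (2 * kbits)) // n

def barrett_reduce(x, n, kbits, mu):
    """Compute x mod n for 0 <= x < n^2 without dividing at runtime."""
    q = (x * mu) >> (2 * kbits)   # estimate of x // n (may be slightly low)
    r = x - q * n                 # r is congruent to x mod n
    while r >= n:                 # at most a couple of correction steps
        r -= n
    return r

# Example with a trial-factor-shaped modulus 2*k*p + 1 (hypothetical values)
n = 2 * 12345 * 123456789 + 1
kbits = n.bit_length()
mu = barrett_setup(n, kbits)
x = (n - 3) ** 2                  # e.g. a square produced while powering mod n
assert barrett_reduce(x, n, kbits, mu) == x % n
```

The precomputation is done once per factor candidate; after that every modular squaring needs only multiplies, shifts, and subtractions, which is what makes the approach GPU-friendly.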
[QUOTE=Maciej Kmieciak;527251]Hey, I read that GPUs are more efficient at higher bit-levels. So why are there fewer GHz-days/day in 73-74 than in 71-73?[/QUOTE]
Short-short-shortest (to paraphrase the anteposter) answer: because you need more time to count to 13 than you need to count to 11. And if you have a task that requires repeatedly counting to 13, then you will do fewer of these tasks per day, compared with a task that asks you to repeatedly count to 11.

Correct answer (which correctly implies that my previous answer, as well as both answers from the anteposter, are wrong :razz:): because the formula to calculate the credits is wrong, in the sense that it is only an approximation, based on empirical evidence, derived from the medieval times when only CPUs could do TF. The GHz-day/day (as a measuring unit) should be the amount of work that a single-core 32-bit CPU running at 1 GHz can do in one day. This is (approximately) how it was defined long ago. It should have nothing to do with the exponents, bit-levels, TF, LL, whatever. But the invention of multicores, 64-bit CPUs, GPUs, airplanes, the flying spaghetti monster, and the other alien stuff which has invaded us lately heavily changed the odds. And anyhow, such a measurement, even if we could make it extremely accurate, would not be useful (think about it! if your card always showed 562.73 GHzD/D regardless of what you were doing with it, what would be the point?). The actual calculation could be altered by many things, including a "stimulation" for people to do a certain type of work (yes, you might get more credit doing this or that bit-level, in this or that range, because that is most needed now. Well, dream on! That would be the ideal, wouldn't it?).

Of course, everybody can use his/her cards and electricity money to do whatever type of work fits them better.
GHz-days credit for TF is broadly based on bit-level, derived from how well an Intel CPU of decades ago could process TF work using Prime95 ([url=https://www.mersenne.ca/throughput.php?cpu1=Intel%28R%29+Pentium%28R%29+III+processor%7C256%7C0&mhz1=600]example[/url]). The credit is scaled according to 3 ranges: up to 62-bit is "easy" and given 62.58% credit, 63-64 bit is "slightly easier" and given 95.15% credit, and >=65-bit gets 100% credit.

Since all the TF done now is >65-bit, the credit given is linear, but is subject to architectural efficiencies of the GPU (or whatever device you're using) and the software running the calculation. Broadly speaking, higher bit depths require more bits to be played with at once and therefore slow the calculations down somewhat, which is why mfakt[i]x[/i] will choose the smallest kernel that can process the current assignment, since that will be the fastest.

For historical interest, for the old-timers, I found these old notes in the code:[code]CPU credit - background information:

In Primenet v4 we used a 90 MHz Pentium CPU as the benchmark machine for calculating CPU credit. The official unit of measure became the P90 CPU year.

In 2007, not many people own a plain Pentium CPU, so we adopted a new benchmark machine - a single core of a 2.4 GHz Core 2 Duo. Our official unit of measure became the C2GHD (Core 2 GHz Day). That is, the amount of work produced by the single core of a hypothetical 1 GHz Core 2 Duo machine. A 2.4 GHz should be able to produce 4.8 C2GHD per day.

To compare P90 CPU years to C2GHDs, we need to factor in both the raw speed improvements of modern chips and the architectural improvements of modern chips. Examining prime95 version 24.14 benchmarks for 640K to 2048K FFTs from a P-100, PII-400, P4-2000, and a C2D-2400 and compensating for speed differences, we get the following architectural multipliers:

One core of a C2D = 1.68 P4s.
A P4 = 3.44 PIIs.
A PII = 1.12 Pentiums.

Thus, a P90 CPU year = 365 days * 1 C2GHD * (90MHz / 1000MHz) / 1.68 / 3.44 / 1.12 = 5.075 C2GHDs[/code]
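For concreteness, the scaling and the historical conversion can be written out in a few lines (a sketch based only on the numbers quoted in this post, not actual PrimeNet server code):

```python
# Sketch of the two credit calculations mentioned above, restating only
# the numbers quoted in this post.

def tf_credit_scale(bit_level):
    """TF credit scaling factor by bit-level, per the three ranges above."""
    if bit_level <= 62:
        return 0.6258   # "easy" range: 62.58% credit
    elif bit_level <= 64:
        return 0.9515   # 63-64 bit: 95.15% credit
    else:
        return 1.0      # >= 65 bit: full credit

# The historical P90-CPU-year to Core-2-GHz-day conversion from the notes:
p90_year_in_c2ghd = 365 * 1 * (90 / 1000) / 1.68 / 3.44 / 1.12

print(round(p90_year_in_c2ghd, 3))  # -> 5.075, matching the old comment
```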
yeah, that's what we said... :razz: ye only made it (too) technical... hehe

One more newbie question. Why is TF on bigger exponents faster than on smaller ones? It seems counterintuitive

[QUOTE=Maciej Kmieciak;527298]One more newbie question. Why is TF on bigger exponents faster than on smaller ones? It seems counterintuitive[/QUOTE]
Recall that any factor of a Mersenne number must have the form 2*k*p+1, where p is the (prime) exponent. Clearly, if p is larger, then a smaller k is needed to get to the same bit level. Case in point: for M22040009 you need to test to k = 26782920567714 to get to 2^70, whereas for M220400143 you only have to test to k = 2678291412717 to get to the same point.
[QUOTE=Maciej Kmieciak;527298]One more newbie question. Why is TF on bigger exponents faster than on smaller ones? It seems counterintuitive[/QUOTE]Mersenne factors are of the form 2*k*<exponent>+1, where k is more-or-less any number from 1 to very big (there are some restrictions on what k can actually be, but ignore those for now).
The smallest possible factor for any Mersenne number is where k=1. Take for example [url=https://www.mersenne.ca/exponent/83]M83[/url], which does indeed have such a factor: (2 * 1 * 83) + 1 = 167 (roughly 7.4 bits), and 167 is a factor of M83. But if we take a larger exponent, say [url=https://www.mersenne.ca/exponent/830000063]M830,000,063[/url], then the smallest possible factor is (2 * 1 * 830000063) + 1 = 1660000127 (roughly 30.6 bits). So you can see that the larger the exponent, by necessity the factors are also bigger.

TF software doesn't work by bit-level directly (that's just a convenient measure of progress) but by checking all the valid k values between the two bit levels. Due to the relationship between exponent, k, and bit-size, you have fewer possible k values at a given bit-level for larger exponents. To illustrate, consider three exponents that you want to TF from 2[sup]40[/sup] to 2[sup]41[/sup]:
for M10,000 k is between 54975581 and 109951162, giving 54,975,581 candidates to test
for M10,000,000 k is between 54975 and 109951, giving 54,975 candidates to test
for M1,000,000,000 k is between 549 and 1099, giving 549 candidates to test

So even though the bit-size of the factors that might be found is constant, as the exponents get larger there are fewer possible candidates that need to be checked. Does that make sense?
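The k arithmetic above is easy to reproduce (my own sketch of the same three exponents; real TF software additionally skips k values ruled out by the restrictions mentioned earlier):

```python
# Candidate factors of M(p) have the form 2*k*p + 1, so TF between two
# bit-levels just means enumerating k in the corresponding range.

def k_range(p, bit_lo, bit_hi):
    """Approximate k range so that 2*k*p + 1 lies in [2^bit_lo, 2^bit_hi]."""
    k_min = (1 << bit_lo) // (2 * p)
    k_max = (1 << bit_hi) // (2 * p)
    return k_min, k_max

for p in (10_000, 10_000_000, 1_000_000_000):
    k_min, k_max = k_range(p, 40, 41)
    print(f"M{p}: k from {k_min} to {k_max} ({k_max - k_min} candidates)")
```

The candidate count shrinks in direct proportion to the exponent, which is exactly why TF on a 1G exponent is so much faster than on a 10M one at the same bit-level.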
[QUOTE]Does that make sense?[/QUOTE]
Yeah, now I understand. Thank you. [QUOTE]for M10,000 k is between 54975581 and 109951162, giving [B]54,975,581[/B] candidates to test
for M10,000,000 k is between 54975 and 109951, giving [B]54,975[/B] candidates to test
for M1,000,000,000 k is between 549 and 1099, giving [B]549[/B] candidates to test[/QUOTE] And the chance of finding a factor remains 1/40 regardless of the number of candidates?
[QUOTE=Maciej Kmieciak;527309]And the chance of finding a factor remains 1/40 regardless of the number of candidates?[/QUOTE]Yes, the chance of finding a factor should be roughly 1/<bit-depth> regardless.
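That 1/40 figure follows from the standard GIMPS heuristic that the chance of a factor lying between 2^b and 2^(b+1) is roughly 1/b, independent of the exponent. A quick sketch of what it implies (heuristic only, my own illustration):

```python
# Heuristic: chance of a Mersenne factor between 2^b and 2^(b+1) ~ 1/b,
# so the expected yield of a TF run is just a sum of 1/b terms.

def expected_factors(bit_lo, bit_hi):
    """Expected factors when trial-factoring from 2^bit_lo up to 2^bit_hi."""
    return sum(1.0 / b for b in range(bit_lo, bit_hi))

print(round(expected_factors(40, 41), 3))  # one bit-level near 2^40: -> 0.025
print(round(expected_factors(65, 77), 3))  # a typical modern TF stretch
```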

P-1 found a factor in stage #2, B1=915000, B2=21045000, E=12.
UID: js2010/SATURN, M97388611 has a factor: 107653599012618660746761 (B1=915000, B2=21045000)
[QUOTE=Jan S;527828]P-1 found a factor in stage #2, B1=915000, B2=21045000, E=12.
UID: js2010/SATURN, M97388611 has a factor: 107653599012618660746761 (B1=915000, B2=21045000)[/QUOTE]76.5107 bits
Is that just outside the TF depth?
[QUOTE=retina;527830]76.5107 bits
Is that just outside the TF depth?[/QUOTE] Inside. GPU to 72 is taking this range to 77 bits. 
[QUOTE=petrw1;527832]Inside. GPU to 72 is taking this range to 77 bits.[/QUOTE]That's what I initially thought. So a TF failure. Someone has a bad GPU card?

[QUOTE=retina;527834]That's what I initially thought. So a TF failure. Someone has a bad GPU card?[/QUOTE]
A look at the history [url]https://www.mersenne.org/report_exponent/?exp_lo=97388611&exp_hi=&full=1[/url] shows that GPUs had only taken the exponent to 2^74 
[QUOTE=Prime95;527836]A look at the history [url]https://www.mersenne.org/report_exponent/?exp_lo=97388611&exp_hi=&full=1[/url] shows that GPUs had only taken the exponent to 2^74[/QUOTE]
Yup... A few weeks ago someone with some ***serious*** compute behind them reserved tens of thousands of P-1 assignments. This has resulted in a situation where P-1'ing is now often done well before TF'ing has been done "optimally". Not the end of the world; the TF'ers will take any candidates still standing (read: not factored) up to the optimal TF level before they're handed out to LL'ers.
P-1 found a factor in stage #1, B1=905000.
UID: Jwb52z/Clay, M97453399 has a factor: 163141033136105093126137 (P-1, B1=905000), 77.110 bits.
[QUOTE=chalsall;527837]Yup... A few weeks ago someone with some ***serious*** compute behind them reserved tens of thousands of P-1 assignments...[/QUOTE]
[B]James Heinrich[/B] has this going on with his project. His latest stats show he has nearly 2.6 million reserved exponents. That's simply ridiculous. His 10,000-assignment limit is being bypassed, I believe. A looping batch file can do this. I know because I tried it, but with only 10 exponents in every pass. He has a fetch example on his page. Modify it a little, put a timeout of a few seconds in the batch, then run it. That's all it takes. The only way I know of that he could possibly prevent this is by tracking IP addresses. Other than that, I have no idea.
It's quite different for the TF1G project, where assignments can still take under a second each... and anything that is reserved will expire within 10 days. And yes, I reserve quite a bit more than 10k at a time, with a fetch loop much like you described, and have been doing it for months now. It is not rocket science, really. The reason I do it like this is that both machines have a bit of a flaky network connection, one more than the other, so if there is a network outage, my script will just log what happened and continue with the next block. If I run the provided script as is (fetch work, run it through mfaktc, report results, rinse and repeat) there will be times when mfaktc runs out of work and the card will then run idle until network connectivity is restored. It's quite rare that I don't finish what I reserve. A couple hardware failures, some power outages and a few silly human errors here and there, but stuff happens.
Sure, there would be reason to frown upon this behaviour if someone did big reservations without a proven track record, but the way things are running now, I don't see the problem? I haven't logged the total amount of work done, and as you know, there are no credits for TF1G work anyway, but split between two cards I should be above 30 million factorization attempts already, mostly from 67 to 68 bits. I've only kept logs on one machine, and even there only since June 19th; the statistics are now 209384 factors found on 14074599 attempts. Of all the assigned exponents, I seem to have about 1.4 million, but that's only 8 days of work for those two cards. Too much at a time? Maybe... but is it really a problem for the >1G work? 
[QUOTE=nomead;528228]but is it really a problem for the >1G work?[/QUOTE]Not at all. You're welcome to take as many assignments as you can reasonably complete in a reasonable time (standard assignments expire after 10 days, but you should generally be able to return them much sooner since they're so short-running).
The large number of assignments out at any given time is largely due to a single user who has a large amount of GPU power available, but offline. He has special dispensation to get assignments for longer than 10 days, and grabs a million or so assignments and returns them approx once per month. You can see on the graph at the bottom of the [URL="https://www.mersenne.ca/tf1G.php"]page[/URL] the monthly spikes of about 200,000 GHzdays of results being submitted at once. You are welcome (indeed encouraged) to loop through assignment requests to get as many as you want, the 10k limit is just to be nice to my server and ensure assignment requests are returned in a timely manner. 
[QUOTE=nomead;528228]...Of all the assigned exponents, I seem to have about 1.4 million, but that's only 8 days of work for those two cards. Too much at a time? Maybe... but is it really a problem for the >1G work?[/QUOTE]
I really did not understand the logic for getting so many. You have the capability of running these en masse. A problem for the >1G work? No, I don't see any. James doesn't see any either, so let it hammer away. :smile:
P-1 found a factor in stage #1, B1=905000.
UID: Jwb52z/Clay, M97735681 has a factor: 22459699301317591337180449 (P-1, B1=905000) 84.216 bits.
P-1 found a factor in stage #2, B1=700000, B2=12425000.
UID: Jwb52z/Clay, M92430739 has a factor: 1439732501488765602199388883281 (P-1, B1=700000, B2=12425000) 100.184 bits.
[CODE]UID: storm5510/7700_Kaby_Lake, M5789947 has a factor: 113935630231502890065274318991 (P-1, B1=720000, B2=11520000, e=12, n=324K, aid=6A11....5607 CUDAPm1 v0.22)[/CODE]30 digits, 96.524 bits. My personal best is 39 digits. This one is worth a mention. I do not often find one of this size.

P-1 found a factor in stage #2, B1=705000, B2=12513750.
UID: Jwb52z/Clay, M92655257 has a factor: 92250276233360973210007799 (P-1, B1=705000, B2=12513750), 86.254 bits.
I have been doing a fair bit of P-1 recently and had yet to find a factor. I started to wonder what was going on. And this morning I see that a new (newly 'infected') machine found a 112-bit factor of a number in the 95,000,000 range. :fusion:
Meanwhile: another machine running some ECM found an 85.8-bit factor of a number in the 255,000 range. (6th factor overall for that number.) :lavalamp: I am trying to up my lifetime ECM ranking (goal is to be 99th percentile) and current P-1 ranking.
[QUOTE=Uncwilly;535710]...I am trying to up my lifetime ECM ranking (goal is to be 99th percentile) and current P1 ranking.[/QUOTE]
99th percentile. I had to look this up. Top 1%. I really do not see this as a competition so I rarely look at those pages on [I]mersenne.org[/I]. However, I just made an exception. #84 in TF, #30 in ECM, and #91 in P1. Even though I did not join this forum until 2009, I started running [I]Prime95[/I] in 2005, I believe it was. I had an HP desktop with a Pentium 4 CPU back then. 15 years is a long time, and numbers accumulate. 
[QUOTE=storm5510;535717]99th percentile. I had to look this up. Top 1%. I really do not see this as a competition so I rarely look at those pages on [I]mersenne.org[/I]. However, I just made an exception. #84 in TF, #30 in ECM, and #91 in P1.
Even though I did not join this forum until 2009, I started running [I]Prime95[/I] in 2005, I believe it was. I had an HP desktop with a Pentium 4 CPU back then. 15 years is a long time, and numbers accumulate.[/QUOTE]I am looking at this: [url]https://www.mersenne.org/account/?details=1[/url] I did a bunch on some Core2Duo machines and various others over time. 
[QUOTE=Uncwilly;535737]I am looking at this:
[URL]https://www.mersenne.org/account/?details=1[/URL] I did a bunch on some Core2Duo machines and various others over time.[/QUOTE] That is amazing! After years, I still keep seeing pages which I did not know existed and are only a mouse click away. :smile: 
[QUOTE=storm5510;535717]99th percentile. I had to look this up. Top 1%. I really do not see this as a competition so I rarely look at those pages on [I]mersenne.org[/I]. However, I just made an exception. #84 in TF, #30 in ECM, and #91 in P1.
Even though I did not join this forum until 2009, I started running [I]Prime95[/I] in 2005, I believe it was. I had an HP desktop with a Pentium 4 CPU back then. 15 years is a long time, and numbers accumulate.[/QUOTE] I've been in this a *long* time... 
Well, you actually "compete" with about 5% of the participants. Many people in that list have not been active for years; they just joined when some prime announcement made GIMPS (in)famous for a while, then gave up when they found that the work is hard and that you can't get rich overnight. Many other people have just an average computer or laptop which they use for work or home, which runs P95 or similar in the background, and it is not even turned on all the time (they stop it overnight, etc.).

Normal people do not have 50 computers and 100 GPUs, and do not waste their time on forums; guys like us are not the norm, we are the exception :razz:

OTOH, for some "tops", you have no chance to be in the 1% unless you are first in the list; the second position is already 2%. I think the ECM lists are such a case (I didn't look there for some time). That is because there are few participants in that type of work, and those at the top of the list really work hard for that work type (like, a lot of manual work too, for example doing stage 1 with P95 and then moving the whole workshop to GMP-ECM or similar for stage 2, because that is faster; these "artifacts" need work, passion, patience...), so the only chance for you (general you) to reach the 1% is to convince a lot of other participants to join the respective type of work. This way the list grows larger and the 1% limit drops down. Or well... if you are Ben D. or Ryan P. :razz:
[QUOTE=Gordon;535750]I've been in this a *long* time...[/QUOTE]
You have, indeed. Also, if memory serves, there is a "work type" missing in the list...
[QUOTE=lycorn;535952]...there is a “work type“ missing in the list...[/QUOTE]
Could this be because common types are grouped? I have a list, with the work type numbers, which is somewhat longer. 
Not really, no. Have a look at the list of known primes, namely Mp #36... :smile:

P-1 on [M]111111769[/M] found an 82-bit factor. Nice to know I'm not in another factoring drought.

M[URL="https://www.mersenne.org/report_exponent/?exp_lo=5675981&full=1"]5675981[/URL] divisible by 6851541559687876319452875506143
M[URL="https://www.mersenne.org/report_exponent/?exp_lo=5676563&full=1"]5676563[/URL] divisible by 4898483840548246495459492282559 Both factors are ~102 bits. 
[QUOTE=evenhash;537314]M[URL="https://www.mersenne.org/report_exponent/?exp_lo=5675981&full=1"]5675981[/URL] divisible by 6851541559687876319452875506143
M[URL="https://www.mersenne.org/report_exponent/?exp_lo=5676563&full=1"]5676563[/URL] divisible by 4898483840548246495459492282559 Both factors are ~102 bits.[/QUOTE] These seem like small P-1 hits. Were they missed?
[QUOTE=LaurV;537329]These seem like small P-1 hits. Were they missed?[/QUOTE]Both factors are outside the 3-4 previous P-1 bounds for each exponent:
[url]https://www.mersenne.ca/exponent/5675981[/url] [url]https://www.mersenne.ca/exponent/5676563[/url]
(note: the data for the P-1 runs that did find these factors won't appear on my site until after midnight UTC, in ~12 hours, so you're currently only seeing the P-1s that didn't find the factors)
P-1 found a factor in stage #2, B1=735000, B2=13230000.
UID: Jwb52z/Clay, M96368791 has a factor: 2926504435995523506078247 (P-1, B1=735000, B2=13230000) 81.275 bits.
P-1 found a factor in stage #2, B1=735000, B2=13230000.
UID: Jwb52z/Clay, M96418117 has a factor: 725730092857843730561503 (P-1, B1=735000, B2=13230000) 79.264 bits.
A nice find with P-1 stage 1 on an 8M Mersenne:
[CODE]P-1 found a factor in stage #1, B1=240000.
UID: Dylan14/laptop-i7-8750, M8362931 has a factor: 249176169757007063770688569 (P-1, B1=240000)[/CODE]87.687 bits
[URL="https://www.mersenne.org/M97408327"]M97408327[/URL] / [URL="http://www.mersenne.ca/exponent/97408327"]1527528764305335771693072611713[/URL] (P-1, B1=760000, B2=14820000, E=12)
A nice size of 100.269 bits 
P-1 found a factor in stage #2, B1=745000, B2=13410000.
UID: Jwb52z/Clay, M97561063 has a factor: 23221054947100904854335911 (P-1, B1=745000, B2=13410000), 84.264 bits.
P-1 found a factor in stage #2, B1=745000, B2=13410000.
UID: Jwb52z/Clay, M97686661 has a factor: 1285546011136469252128903201 (P-1, B1=745000, B2=13410000), 90.054 bits.
P-1 found a factor in stage #1, B1=745000.
UID: Jwb52z/Clay, M97694633 has a factor: 1155835127550830735937122641 (P-1, B1=745000), 89.901 bits.
P-1 found a factor in stage #1, B1=750000.
UID: Jwb52z/Clay, M97703429 has a factor: 4342095128232239235633241 (P-1, B1=750000), 81.845 bits.
P-1 found a factor in stage #2, B1=700000, B2=11900000.
UID: Jwb52z/Clay, M97952927 has a factor: 3539363147625806385904417 (P-1, B1=700000, B2=11900000), 81.550 bits.
ECM found a factor in curve #2, stage #2
Sigma=6529679877461107, B1=50000, B2=5000000.
M14001437 has a factor: 3529669213575415060009 (ECM curve 2, B1=50000, B2=5000000)
P-1 found a factor in stage #2, B1=500000, B2=20000000, E=12.
M15040721 has a factor: 10135277717869225760959 (P-1, B1=500000, B2=20000000, E=12)
2 × 3[SUP]2[/SUP] × 7 × 31 × 11467 × 15040721 × 15044749 + 1
74 bits (73.10)
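That decomposition of f-1 is exactly why stage 2 caught this factor: with f = 2*k*p + 1, P-1 succeeds when every prime factor of k is at most B1, except possibly one prime in (B1, B2]. A quick sketch checking that against the quoted numbers (my own illustration, not Prime95 code):

```python
# Verify the smoothness structure behind this P-1 hit.

p = 15040721
f = 10135277717869225760959
B1, B2 = 500000, 20000000

# prime factors of k = (f-1)/(2*p), read off the decomposition above
k_primes = [3, 3, 7, 31, 11467, 15044749]

k = 1
for q in k_primes:
    k *= q
assert 2 * k * p + 1 == f                              # decomposition checks out

largest = max(k_primes)
assert all(q <= B1 for q in k_primes if q != largest)  # B1-smooth part (stage 1)
assert B1 < largest <= B2                              # one big prime (stage 2)
```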