mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Prime Gap Searches (https://www.mersenneforum.org/forumdisplay.php?f=131)
-   -   Prime Gap News (https://www.mersenneforum.org/showthread.php?t=20379)

robert44444uk 2015-07-26 14:59

Prime Gap News
 
I am a bit surprised that there is no thread in particular for prime gaps. The purpose of this thread is to discuss and post information on prime gaps. The main focus here is likely to be adding to the record gaps kept at Prof Nicely's site, but also perhaps for discussions on the relative merits of gap finding techniques.

This is a continuation of the general posts made in the programming thread recently.

danaj 2015-08-15 17:41

[QUOTE=robert44444uk;407993] I think it is better to find these oddballs naturally. Somebody has to be last.[/QUOTE]

I was thinking of something like [URL="http://ntheory.org/gaps/stats.pl"]this page[/URL]. So we don't have to keep posting these big lists. If I was ambitious I could crawl gapcoin's results, and have some way to submit data.

Other things to display:
[LIST]
[*]Overall top-20 gaps
[*]Record holders, with numbers of gaps and percent of total
[*]Stats about my new un-submitted gaps
[*]Graphs
[*]Interesting stats about gaps in the last month?
[*]Something else?
[/LIST]

I also threw together a [URL="http://ntheory.org/gaps/top20.pl"]simple dynamic top-20 page[/URL]. I can fill the other columns in with some simple mods.


In other news, I just found a small gap of merit 33.575. Another entry for the overall top-20.

robert44444uk 2015-08-16 07:01

[QUOTE=danaj;408023]I was thinking of something like [URL="http://ntheory.org/gaps/stats.pl"]this page[/URL]. So we don't have to keep posting these big lists. If I was ambitious I could crawl gapcoin's results, and have some way to submit data.

Other things to display:
[LIST]
[*]Overall top-20 gaps
[*]Record holders, with numbers of gaps and percent of total
[*]Stats about my new un-submitted gaps
[*]Graphs
[*]Interesting stats about gaps in the last month?
[*]Something else?
[/LIST]

I also threw together a [URL="http://ntheory.org/gaps/top20.pl"]simple dynamic top-20 page[/URL]. I can fill the other columns in with some simple mods.


In other news, I just found a small gap of merit 33.575. Another entry for the overall top-20.[/QUOTE]

Well done on your new result. One of these days I will find another 30, but I realise these are like gold dust.

It would be great to maintain your suggested web pages, either dynamically or updated weekly. If you are doing that, it would also be good to provide the top 20 in columnar form so that the relative positions can be related easily to the champion in that category.

danaj 2015-08-16 15:51

[QUOTE=robert44444uk;408063]It would be great to maintain your suggested web pages either dynamically or updated weekly.[/QUOTE]They are dynamic in that the page is a Perl script that processes the merits file and my new gaps file, then generates the HTML output. So this morning I scp'd the new merits file, and both pages immediately have the new data (your entry for 157194, a new #1, is there now). Any holes that were filled by your new entries should be gone.

[quote]If you are doing that, it would also be good to provide the top 20 in columnar form so that the relative positions can be related easily to the champion in that category.[/quote]I was thinking of listing the overall top 20 similar to the first table here: [URL="http://primerecords.dk/primegaps/gaps20.htm#top20merit"]JKA's Top-20 list[/URL]

Did you have something different in mind, or did you mean a change in my current top20 page?

I'm out for a week so probably won't be doing any major updates.

danaj 2015-08-21 19:43

I see my stats page is updating while I'm out of town, but apparently I mis-typed something before I left as it isn't adding any of my new gaps. It's uploading the new file, but the file has the same gaps as when I left. :)

danaj 2015-08-23 19:27

Construction while I was gone cut power to a machine I was using as a file repo (and gap searcher), and it not being available made my script stop. All fixed for now (though sadly construction over the next month will keep cutting power to random machines).

I updated the [URL="http://ntheory.org/gaps/top20.pl"]top20 page[/URL] to bold the #1 entries and fill in the full names. I still haven't started using the allgaps file to get the other two columns.

The [URL="http://ntheory.org/gaps/stats.pl"]stats page[/URL] also uses full names now. It has more info on it. All the tables take into account the unsubmitted gaps I have, but I haven't written the code to scrape the unsubmitted gapcoin results yet.

Rob, you will probably take position 5 for number of gaps found next week, passing Spielauer. It looks like Antonio Key started submitting k*n#/(2*3*5*11*m) this month, with 160 found so far.

danaj 2015-08-30 06:42

New [URL="http://ntheory.org/gaps/stats.pl"]stats page[/URL] update.

I changed the old "holes and merits < 10" table to a different format. This gives some of the data you wrote (gaps with merit < 20, merit < 15, etc.). I could add more columns. Looks like my new gaps set has found a fair number of these (indicated with the strike-throughs).

High wind today, lost power for 8 hours. My machine had been running for over 400 days and the UPS was good for 3 hours. Sigh. Restarting everywhere. No restart mechanism means a day or so of lost progress and a PITA. My girls' Windows 8 machines have been driving me crazy with this lately, with a combination of no UPS (construction takes them down), Windows 8 updates with restarts, and occasionally "my game hung so I rebooted."

I will be submitting my set of 2931 gaps tomorrow, after ECPP check of endpoints goes a little farther.

robert44444uk 2015-09-02 06:35

[QUOTE=danaj;409152]I will be submitting my set of 2931 gaps tomorrow, after ECPP check of endpoints goes a little farther.[/QUOTE]

It's not a record unless you submit :ermm:

That's why I do that weekly. Then at least some of them last as a record for a week or two.

danaj 2015-09-02 07:37

I find it works better for me to do 3-4 weeks between submissions. I have enough searches going on in the same space that I get a lot of overlap of gaps just within that time period.

Things I should work on include:

- restart. Some of the tasks take > 4 days just to catch back up. I need a way to either skip forward in the first entry or, better yet, give it the output file and let it figure it out.

- threaded verifier. 1 thread can keep up with weekly new gaps and make a little progress on the backlog, but not much progress. Running 8 threads at a time for a while would help.

- graph of gaps to 100k.

robert44444uk 2015-09-02 12:57

[QUOTE=danaj;409399]Things I should work on include:

- restart. Some of the tasks take > 4 days just to catch back up. I need a way to either skip forward in the first entry or, better yet, give it the output file and let it figure it out.[/QUOTE]

I can't figure out what your problem might be. I have to restart a fair bit (especially on my work laptop) and I only have to go back to the start of the last factorial tackled. I run only 75 multipliers (my end minus my beginning) in any thread, whereas I think you run 300 multipliers, even for very high factorials. This way I rarely have to duplicate more than 1 hour of work.

danaj 2015-09-02 15:27

[QUOTE=robert44444uk;409407]I can't figure out what your problem might be. I have to restart a fair bit (especially on my work laptop) and I only have to go back to the start of the last factorial tackled. I run only 75 multipliers (my end minus my beginning) in any thread, whereas I think you run 300 multipliers, even for very high factorials. This way I rarely have to duplicate more than 1 hour of work.[/QUOTE]

Yes, some of those are easy, and the tests with k=1..1 start pretty quickly, although they have grown to where the time per test is not insignificant.

k*n#/30 where k = 100M to 200M can take a while even with very small n... The computer I have doing that range is too flakey to handle it so I dropped it down to 10M at a time.

Even just 1..25 when n is almost 40000 can take a long time -- it takes hours just to test one number.

You are also not running the k*n#/(235*m) version, which adds an inner loop. As n grows, so does the number of divisors. So 75 multipliers becomes tens of thousands.

Doing runs like k*n#/30 where n is fixed and k continues to grow indefinitely, which is what most of Rosenthal's patterns are, is super easy to restart. Just start with k being one higher than the last record found, and very little time is lost.
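The restart-by-last-k idea can be sketched like this (the "k gap merit" output-file format here is a hypothetical stand-in, not the actual format any of the scripts in this thread use):

```python
import os

def resume_k(output_file, default_start=1):
    """Return the k to restart a fixed-n search from: one past the largest
    k already logged. The 'k gap merit' line format is hypothetical."""
    if not os.path.exists(output_file):
        return default_start
    best = default_start - 1
    with open(output_file) as fh:
        for line in fh:
            parts = line.split()
            if not parts:
                continue              # skip blank lines
            try:
                best = max(best, int(parts[0]))
            except ValueError:
                pass                  # skip headers / comments
    return best + 1
```

With an output file whose last record is at k=12, `resume_k` returns 13, so at most the partially-searched k is repeated.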


I have stuff running all over the place, not just a single fixed pattern. I believe with k*n#/30 I've run or have running:

1-25
26-50
50-100
100-300
300-10k
10k-20k
20k-30k
30k-40k
40k-50k
50k-100k
100k-1M
1M-10M
10M-100M
100M-200M

For the 23, 235, 2357, and 235711 patterns I don't have as many levels, but there are a few. Some of them are still catching up after being started 3+ days ago (again, because of the extra inner loop, a range of 1000 can become 1M tests).

Different computers run different sets, and some have been put on hold. The 1-25 range has gone very far, and the others move at slower rates. I guess it depends on where you want to concentrate. Records under gap=10k are harder to find since so much more searching has taken place, but the programs will still regularly produce results (which Spielauer and Gapcoin may take away). Up in the 200k range the tests are slow but almost anything you find is a record since it's very sparse. Looking higher has more possibility of a "ooo aaa a top-20 result" at the expense of slow tests. The sweet spot for production seems to be in the 60k-100k range where the testing-effort vs. ease-of-getting-a-record curve is most favorable.

I've been not worrying so much about where they fall, but just test lots of levels so I get broad coverage. It also gives me more to work with in looking at the performance of my code, which was really the point of all of this for me, rather than the gaps themselves.

danaj 2015-09-08 03:57

2 Attachment(s)
Similar to your graph. To avoid extending it out very far I lump all the 300k+ into one bucket. I did this by putting together all the submission data, not looking at which records are still held, and not removing ones that were later overwritten by myself or others. 19 submissions in 2014, 12 in 2015.


Since I'm running so many different search types, the shape of mine looks very different than yours. In 2014 I had a dip in the 50-80k range for some reason (speculation: this may be a reflection of 70k-120k just being easier to get records that year). In 2015 I've had more searches going on in that range, but now a big dip in the 36k-46k range. I suspect that's because most of my very long running searches have moved up to the 50k+ range, but I started a few new ones that have pushed up the small sizes.

danaj 2015-09-08 04:31

You said you were getting 125 gaps/week/core, which is about 18 gaps/day/core.

My best two searches are at 19-21 gaps/day/core. They're right in that good spot of 80-110k now.

I have one running in the 20k range getting 11 gaps/day/core which I think is fantastic given the ~20 minimum merit for a record.

Another two are getting about 10-11 gaps/day/core running a bit before and a bit after that most-efficient spot.

15 searches are in the 2-4 gaps/day/core range, mostly stuck in the 30k to 60k range.
11 searches are in the 1-2 gaps/day/core range, also in that 30kish range.

Here's a fun one. I have 4 active searches that have found nothing in the last 6 days. I'll move them to something more productive.

robert44444uk 2015-09-12 14:14

[QUOTE=danaj;410110]

I'm not sure what to think of the "running code based on Perl" after my name :). I wrote GMP code plus an interface to Perl, and various scripts that use it in Perl. Perhaps everyone else should have "running code based on C, libc, libm, and GMP/gwnum" added :). Of course big kudos to Torbjörn and the other GMP devs, plus Baillie, Wagstaff, Selfridge, Crandall, Pomerance, Cohen, etc. without whom I'd have nothing. Plus encouragement of other open source devs/writers like Kim Walisch, Wojciech Izykowski, David Cleaver, Mario Roy, the Pari/GP team, CRG IV, TOeS, etc. The list could go on a [i]long[/i] time.[/QUOTE]

[B][SIZE="4"]If I have seen further than others, it is by standing upon the shoulders of giants[/SIZE][/B]

[I]- Isaac Newton ([url]http://www.brainyquote.com/quotes/authors/i/isaac_newton.html[/url])[/I]

Antonio 2015-09-16 16:23

[QUOTE=danaj;410110]There sure are a lot more frequent updates now than before the threads here. I'm not sure if Antonio Key is using his own software or, since he's on this forum and started submitting after my post, using my code. Dr. Nicely may not care any more -- he used to be more strict (e.g. everyone using TOeS's codes has TOS in their abbreviated name, similar with JKA).
[/QUOTE]

:blush: OK I admit it! I'm using variations on your Perl script Primegap1.pl :blush:

I thought I had informed Dr. Nicely of this in my first submission to him, but on checking back I had not, sorry. This has been corrected in my email to him today.

danaj 2015-09-16 16:56

I want to make sure I get that EFF check!

What? That's for something different? Well there goes [I]that[/I] retirement plan....

robert44444uk 2015-09-29 17:23

Interesting to see the winners and losers since I started my searching:

[CODE]Finder      Then    Now  Change
Jacobsen   34162  39977    5815
Rosnthal    2649   8333    5684
MJPC&JKA   10076   6816   -3260
M.Jansen    7067   5164   -1903
RobSmith       0   3634    3634
Spielaur    2931   2575    -356
PierCami    2240   1571    -669
Gapcoin      722   1082     360
TorAlmJA    1061    643    -418
Toni_Key       0    800     800
Andersen     241    153     -88
Be.Nyman     121    121       0
RP.Brent     120    120       0
TRNicely      95     95       0
LndrPrkn      72     72       0
Yng&Ptlr      71     71       0
TOeSilva      70     70       0
HrzogTOS      52     52       0
Glaisher      43     43       0
DHLehmer      38     38       0
JLGPardo      27     19      -8
MrtnRaab      20     15      -5
GABandAR      12     12       0
PardiTOS       9      9       0
RosntlJA       8      8       0
DonKnuth       6      6       0
RosntlJF       7      6      -1
LLnhardy       4      4       0
Weslwski       5      4      -1
PDeGeest       3      3       0
AEWestrn       3      3       0
H.Dubner       5      2      -3
ML.Brown       2      2       0
CKernTOS       2      2       0
JamesFry       2      2       0
JRdrgTOS       2      2       0
MJandJKA       2      2       0
APinhTOS       1      1       0
ATeixTOS       1      1       0
CBastTOS       1      1       0
Euclid         1      1       0
JFNSTOeS       1      1       0
TAlmFMJA       1      1       0
YPPauloR       1      1       0
[/CODE]

The thing is, come back in 100 years, and some of those with very few will still be there! If the top 10 finders stopped now, the chances of them holding, in total, no more than 50 records are quite high.

danaj 2015-09-29 20:09

All but one of the ones going down are no longer doing submissions. JKA has a bazillion projects going on -- I keep finding his records or work in many areas of this hobby (the latest being prime clusters / constellations / k-tuples).

Rosenthal did a huge submission all at once, and I presume he's still running more. There's a lot of overlap in his searches and ours.

Spielauer is an interesting one, as it seems most of his current work is in the 30-70 digit range, but he had quite a few larger results that are getting hit by Gapcoin. I occasionally take one (looks like 22 total over the last 2 weeks). The current records in this range are fairly high so while each test is very fast, finding records is hard.

robert44444uk 2015-10-01 07:54

The new list is out - movers:

[CODE]Toni_Key    934
RobSmith    879
Gapcoin      -1
JLGPardo     -1
Andersen     -4
PierCami    -16
TorAlmJA    -22
M.Jansen    -45
MJPC&JKA    -79
Jacobsen   -385
Rosnthal   -541

New gaps    719
[/CODE]

danaj 2015-10-08 00:16

1 Attachment(s)
New graph. Arguably too many people on it to follow.

Gapcoin has two major lines. You can see Rosenthal's lines (k*M#/30 with each line being a particular M) as well as a few of my similar ones. While each update sees MJPCJKA drop a few more, there are still substantial clouds of their results.

robert44444uk 2015-10-08 10:21

[QUOTE=danaj;412198]New graph. Arguably too many people on it to follow.

Gapcoin has two major lines. You can see Rosenthal's lines (k*M#/30 with each line being a particular M) as well as a few of my similar ones. While each update sees MJPCJKA drop a few more, there are still substantial clouds of their results.[/QUOTE]

Massively beautiful.

Another good graph would show the changes say since the start of 2015 to show where work is happening.

Antonio 2015-10-08 16:58

2 Attachment(s)
[QUOTE=robert44444uk;412228]Massively beautiful.

Another good graph would show the changes say since the start of 2015 to show where work is happening.[/QUOTE]

Agreed, another good one would be with Dana's massive contribution removed - give us a chance to see where others are working more easily :smile:.

Here's one just showing my work, a lot of which is off to the right of Dana's graph.

Edited to add a graph of Rob's work.

danaj 2015-10-12 15:32

I wrote the script and the Perl module, including the GMP code for sieving, next_prime, and the BPSW test.

There shouldn't be any overlap unless p=11 in the first case. Most of those searches are very low. I have a couple looking for all square-free divisors below 500k but that is only haphazardly running so go for it.

[quote]I will likely only be a casual searcher as I have been unable to get the script working with gmp optimizations on windows.[/quote]Ouch, really? It would seem that without GMP the programs would run absurdly slow.

For PFGW, you could use just a threshold appropriate for the gap size, then use a second little Perl script to cull the candidates based on actual merit.

PFGW's nextprime is quite slow unless it has been changed since 3.7.7. nextprime(10^2000) is over 2 minutes vs. 13 seconds for my code. It looks like it increments by two from the previous odd, checks divisibility by the first 500 primes using trial division, then calls David Cleaver's strong BPSW function. The latter is called far more often and the actual test is close to 2x slower than my BPSW code.
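That incremental strategy (step by two, cheap trial division by the first 500 primes, then a strong probable-prime battery) can be sketched as follows. This is a hedged stand-in, not danaj's or PFGW's actual code: strong-PRP rounds replace the BPSW call, and the trial-division depth is taken from the description above.

```python
def _sieve(n):
    """Primes up to n (simple byte sieve)."""
    s = bytearray([1]) * (n + 1)
    s[:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if s[i]:
            s[i * i::i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if s[i]]

TRIAL = _sieve(3571)          # 3571 is the 500th prime

def _strong_prp(n, a):
    """One strong probable-prime round to base a."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

def next_prime(n):
    """PFGW-style next (probable) prime: step by 2, trial-divide by the
    first 500 primes, then run a strong-PRP battery (BPSW would go here)."""
    c = n + 1 if n % 2 == 0 else n + 2
    while True:
        if c <= 3571:
            if c in TRIAL:
                return c
        elif all(c % p for p in TRIAL):
            if all(_strong_prp(c, a) for a in (2, 3, 5, 7, 11, 13)):
                return c
        c += 2
```

For large inputs the result is a probable prime only; the speed difference danaj describes comes from the cost balance between the trial-division filter and the PRP calls.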

OTOH, if you're really not running my GMP code, then it would be faster. I'm again surprised it doesn't work with GMP.

[code]perl -Mntheory=:all -E "say prime_get_config->{gmp}"[/code]from the DOS prompt should output 1.

henryzz 2015-10-12 16:52

I couldn't install Math::Prime::Util::GMP into ActivePerl; it needs GMP installed manually. I will try to get that working at some point.
Currently I am running on Linux, but I spend most of my time in Windows.

danaj 2015-10-12 17:16

[QUOTE=henryzz;412518]I couldn't install Math::Prime::Util::GMP into ActivePerl; it needs GMP installed manually. I will try to get that working at some point.
Currently I am running on Linux, but I spend most of my time in Windows.[/QUOTE]

Ah, ActiveState. No GMP libraries on their build machines, so they don't build the module. Strawberry Perl for Windows includes the GMP libraries. It's far better for developers than ActiveState (it even includes a C compiler and make so you can install your own packages rather than relying on ActiveState to do it for you).

henryzz 2015-10-12 20:43

[QUOTE=danaj;412520]Ah, ActiveState. No GMP libraries on their build machines, so they don't build the module. Strawberry Perl for Windows includes the GMP libraries. It's far better for developers than ActiveState (it even includes a C compiler and make so you can install your own packages rather than relying on ActiveState to do it for you).[/QUOTE]

I had just downloaded Strawberry Perl before reading this. Installing now. Hopefully this will do everything I need.

henryzz 2015-10-12 21:37

[QUOTE=henryzz;412531]I had just downloaded Strawberry Perl before reading this. Installing now. Hopefully this will do everything I need.[/QUOTE]

Installed and working.

I will stop my Linux runs soon and move them to windows.

Antonio 2015-10-14 07:30

Here are my stats for this week's submission (checked against merits.txt dated 2015-10-14).
Maintaining >18 records/day/core, despite taking one (of four) core out to try a few search modification ideas.
Nothing spectacular to report, but I live in hope :smile:

[CODE]Owner - Gap entries improved
Rosnthal - 143
Jacobsen - 83
RobSmith - 21
M.Jansen - 17
Toni_Key - 7
PierCami - 4
TorAlmJA - 4
Andersen - 2
JLGPardo - 1
MJPC&JKA - 1
Gapcoin - 1
Total - 284
523 new entries for merits.txt
239 first time gaps found
Smallest merit increase = 0.009355
Largest merit increase = 11.372329
Largest merit found = 25.683840
Largest merit (first time gap) = 21.487940
Smallest gap recorded = 13046
Largest gap recorded = 219390
[/CODE]

robert44444uk 2015-10-14 17:14

[QUOTE=Antonio;412633]Here are my stats for this week's submission (checked against merits.txt dated 2015-10-14).
Maintaining >18 records/day/core, despite taking one (of four) core out to try a few search modification ideas.
Nothing spectacular to report, but I live in hope :smile:

[/QUOTE]

I had to wait a while for anything, but with only 4 cores working on this for 3 months, I have some really pleasing results.

What value are you placing on "my $delta"?

If you have this at zero you are doing twice the work you really need to. If you are really interested in merits >20, you should set it to a minimum of 8. You get through the range twice as quickly, and are unlikely to miss many merits >20.
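As I read the thread, $delta is a lower bound on the prevprime-side merit before the (equally expensive) nextprime search is attempted; that reading is inferred from the posts, not taken from the actual script. A minimal sketch of the filter:

```python
from math import log

def worth_next_search(center, prev_prime, delta):
    """Launch the expensive nextprime search only when the prevprime side
    already carries at least `delta` units of merit. This reading of
    $delta is inferred from the thread, not from the actual script."""
    prev_merit = (center - prev_prime) / log(center)
    return prev_merit >= delta
```

With 999999937 the largest prime below 10^9, a center at 10^9 has a prevprime-side merit of about 3.04, so delta = 3 keeps it and delta = 8 drops it, which is how a large delta roughly halves the work.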

danaj 2015-10-14 17:58

Nice results, and an overall top-20 from Spielauer this week.

I'm still using delta=3. It still gives a nice boost without dropping too many. I am not getting close to that efficiency on most of my searches. Once again contemplating dropping some of the 2*3*5*p / 2*3*5*7*p / 2*3*5*7*11*p ones in favor of something simpler.

Antonio 2015-10-14 19:46

I'm using delta = 2, after some trial runs to determine the effect various deltas would have.

I found that for deltas of 1, 2 and 3 the percentages of gaps with merit >10 that were missed were 6.6%, 9.7% and 18.2% respectively, and I decided to opt for losing less than 10%.

robert44444uk 2015-10-15 12:20

[QUOTE=Antonio;412701]I'm using delta = 2, after some trial runs to determine the effect various deltas would have.

I found that for deltas of 1, 2 and 3 the percentages of gaps with merit >10 that were missed were 6.6%, 9.7% and 18.2% respectively, and I decided to opt for losing less than 10%.[/QUOTE]

That is for merit 10, so effectively for gaps of 100k or more - at lower gap targets, too many of the gaps found with merit <10 are not records.

It would be interesting to look at all of the gaps with merit >20 you have found and see what the actual prevprime merits were (total merit = prevprime merit + nextprime merit). My bet is that almost all of the prevprime merits would be >8.

Antonio 2015-10-15 14:08

[QUOTE=robert44444uk;412751]That is for merit 10, so effectively for gaps of 100k or more - at lower gap targets, too many of the gaps found with merit <10 are not records.

It would be interesting to look at all of the gaps with merit >20 you have found and see what the actual prevprime merits were (total merit = prevprime merit + nextprime merit). My bet is that almost all of the prevprime merits would be >8.[/QUOTE]

A casual inspection of my results indicates that this is almost certainly true. When I started I was more interested in filling in the missing gaps than in finding record merits. I may well move a core over to record merit searching in a week or two, just to see what happens :smile:

robert44444uk 2015-10-16 07:33

[QUOTE=Antonio;412760]A casual inspection of my results indicates that this is almost certainly true. When I started I was more interested in filling in the missing gaps than in finding record merits. I may well move a core over to record merit searching in a week or two, just to see what happens :smile:[/QUOTE]

After you get 5,000 new records, I think your thirst may be satiated. Then it is a question of how many of those survive for a while. This is why I won't post anything smaller than merit 10, and have set myself up to maximise the >20s. These will survive for a long time.

Another one this morning:

>338,000 gap, merit 21.7

henryzz 2015-10-20 11:50

Does getting high merits get harder when you get to larger gaps?

danaj 2015-10-20 16:06

[QUOTE=henryzz;413150]Does getting high merits get harder when you get to larger gaps?[/QUOTE]

Primality tests take longer, so the whole search process takes longer. My searches with 11k-digit numbers are very slow.

Empirically in the 100-8000 digit range, the BPSW test is about O(log^2.5(n)). 2x larger size is 5-6x longer time. The larger size also means a longer range for a large merit, which means more tests. Presumably log(n) growth. There is a complicating factor of the partial sieve that has a dynamic log^2(n) depth.
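The "2x larger size is 5-6x longer" figure follows directly from the empirical exponent, since 2^2.5 ≈ 5.66:

```python
def bpsw_time_ratio(d1, d2, exponent=2.5):
    """Predicted per-test time ratio going from d1-digit to d2-digit
    numbers, assuming time ~ log(n)^exponent (the empirical fit above)."""
    return (d2 / d1) ** exponent
```

So doubling from 100 to 200 digits predicts a factor of 2**2.5 ≈ 5.66, squarely inside the quoted "5-6x".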

Usually I see the tradeoff as: small sizes run faster but are better covered, hence need high merits to get a record. Large sizes (200k+) are slow but so sparse that almost anything you find is a record. The sweet spot this year seems to be the 70-90k range for efficiency of generating records; there are lots of gaps there with merit under 10.

My stats page ([URL]http://ntheory.org/gaps/stats.pl[/URL]) has a section that shows the gaps with merit < 20, < 15, < 10, and no gap. The strike-through values are ones my unsubmitted gap set has found. I usually submit every 3 weeks or so.

robert44444uk 2015-10-20 17:50

I barely have a positive score for this week, looking at the number of my records that Danaj has clawed back :buddy:
The good news is that the new ones are of better quality - almost everything I am looking for is 150k plus, so they should last a while.:max:

henryzz 2015-10-20 21:07

So basically the merit isn't a good measure of how hard it is to find them. Maybe we should correct the formula.

mart_r 2015-10-20 21:52

[QUOTE=henryzz;413185]So basically the merit isn't a good measure of how hard it is to find them. Maybe we should correct the formula.[/QUOTE]

Before I jumped on the bandwagon with the (m*p#)/(d*q#)±x kind of sequences, I wrote a small code that tells me how many candidates there are left to check after a trial division up to p.
This gives sort of an "effective" merit, as displayed in this example:

[CODE]center number = 2000003# / 13#

              numbers without             effective
             factor <= 2000003              merit
             - side     + side        - side   + side
merit ± 1      2550       2527          0.03     0.03
merit ± 2      3218       3199          0.04     0.04
merit ± 3     21172      21119          0.27     0.27
merit ± 4     38603      38594          0.50     0.50
merit ± 5     64610      64486          0.84     0.83
merit ± 6     90082      90090          1.16     1.16
merit ± 7    127014     127067          1.64     1.64
merit ± 8    163654     163684          2.12     2.12
merit ± 9    204374     204397          2.64     2.64
merit ±10    244814     244884          3.17     3.17[/CODE]Depending on the parameters, you can choose which merit you want to find, then take exp(effective merit) to have a rough estimate of the number of different tests you might need until an example is found.
If e.g. you aim for a merit >10 in this region (± 5), after four attempts there is a >50% chance that an example is found. (I loosely calculate this 50%-chance by using the factor log(2), so exp(0.84+0.83)*log(2) ~ 3.7 attempts)
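The closing arithmetic can be reproduced directly (the merit ± 5 row's effective merits of 0.84 and 0.83 give about 3.7 attempts for even odds):

```python
from math import exp, log

def attempts_for_even_odds(eff_minus, eff_plus):
    """mart_r's rule of thumb: exp(total effective merit) * log(2) attempts
    give a ~50% chance of finding the target gap."""
    return exp(eff_minus + eff_plus) * log(2)
```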

danaj 2015-10-20 22:39

A little experiment looking at the time taken and the number of gaps with merit >= 5.0 found using k*p#/30-b, where k=1..10000 skipping multiples of 2, 3, 5.

p=20: 1.7s 102 found = 60/s (28-30 digits)

p=40: 4.1s 236 found = 58/s (69-71 digits)

p=80: 19.6s 515 found = 26/s (166-169 digits)

p=160 235s 985 found = 4/s (392-395 digits)

Interestingly, with this form the number we find with merit >= 5 goes up as we get larger, but the time taken goes up quite a bit faster, which would explain why we see the shape of the graph of current records (high at the beginning, dropping off as gap size increases).
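The experiment can be sketched at a much smaller scale in pure Python. Two assumptions here: Miller-Rabin stands in for the BPSW/GMP code, and `p` is read as the count of primes in the primorial (the first 20 primes give 71# and 28-30 digit centers, which matches the quoted sizes):

```python
from math import log

def _is_prime(n):
    """Deterministic Miller-Rabin for n < 3.4e12 (enough for this demo);
    a stand-in for the BPSW test used in the thread."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def surrounding_gap(c):
    """Consecutive primes lo < c < hi surrounding a composite c."""
    lo = c - 1
    while not _is_prime(lo):
        lo -= 1
    hi = c + 1
    while not _is_prime(hi):
        hi += 1
    return lo, hi

def merit_search(num_primes, kmax, min_merit):
    """Centers k * (P/30), P = primorial of the first num_primes primes,
    skipping k divisible by 2, 3 or 5; returns (k, gap, merit) triples."""
    primes, n = [], 2
    while len(primes) < num_primes:
        if _is_prime(n):
            primes.append(n)
        n += 1
    P = 1
    for p in primes:
        P *= p
    base = P // 30
    found = []
    for k in range(1, kmax + 1):
        if k % 2 == 0 or k % 3 == 0 or k % 5 == 0:
            continue
        c = k * base
        lo, hi = surrounding_gap(c)
        merit = (hi - lo) / log(c)
        if merit >= min_merit:
            found.append((k, hi - lo, merit))
    return found
```

Something like `merit_search(10, 300, 2.0)` runs in seconds; the thread's p=20, k up to 10000, merit >= 5 case needs the GMP-backed code to be practical.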

As discussed on the other prime gap thread, it's certainly possible that a different method of selecting the search points would be more efficient, and it's also possible to improve the speed of this or other methods vs. doing prev/next prime with my GMP code. For example with numbers larger than ~3000 digits using gwnum would be faster than GMP. gapcoin uses a different method, but it's not obvious to me how to get exact efficiency comparisons.


From [URL="http://mathworld.wolfram.com/PrimeGaps.html"]Mathworld[/URL]: "The merit of a prime gap compares the size of a gap to the local average gap [...]" which doesn't take into account computational complexity nor the state of current records.

robert44444uk 2015-10-21 07:02

[QUOTE=henryzz;413185]So basically the merit isn't a good measure of how hard it is to find them. Maybe we should correct the formula.[/QUOTE]

There are as many gaps as there are primes...so gaps are not hard to find!

Gaps can be arbitrarily long, so a proportionately large gap is what makes a gap rare, and there is a limit to gap size for a given pair of consecutive primes.

Setting higher limits to reporting them, such as a merit of >10 does at least provide some discrimination.

At any size of prime, beyond the 6, 30, 210, 2310 rule, in general the larger the gap the rarer it is, and the form of number we are searching - near the primorials - does lend itself to larger gaps. However, we are way off getting to a gap merit of 37. I live in hope.

LaurV 2015-10-21 07:58

[QUOTE=robert44444uk;413219]There are as many gaps as there are primes, [I]minus one[/I][/QUOTE]
Fixed it for you.

Joking apart (or not), the last discussion made me think of an analogy with the "difficulty" concept in bitcoin pool mining, where there is a "bound" for reporting. For example, if you mine in a pool, the pool will pay you according to your efforts, and the only way it can check how much effort you made is if you report your results that are "better than the bound". So the pool counts how many such results are sent by each participant, and they constitute "shares" earned by the participant. The pool gets rewarded (paid) by the bitcoin community when one participant is lucky enough to find a result better than the (higher-bound) "difficulty" set by the bitcoin community, and in that case the benefit (money) is shared with all participants in the pool according to their "shares".

Next step here is to make a "gapcoin". :razz:
Then there will be a lot of participants....

danaj 2015-10-21 16:01

[QUOTE=LaurV;413220]Next step here is to make a "gapcoin".[/QUOTE] [I][SIZE=2](apologies if I missed the sarcasm/humor tag)

[/SIZE][/I]There is such a thing, and it's called gapcoin. [URL="https://bitcointalk.org/index.php?topic=822498.0"]Forum[/URL] / [URL="http://gapcoin.org/index.php"]Site[/URL]. They even have a GPU miner. Over the last year (today is their 1-year anniversary) it has found 1517 record gaps. Antonio has found more than that using a single computer over 3 months. We don't know what resources are being used for Gapcoin. However, Gapcoin is searching for smaller gaps, and has 2 top-20-overall records. The participants also get a coin, whereas we don't.

You could make an argument that they're looking for higher quality gaps. For people with >100 gaps, average merit:

27.3 Nyman
26.8 Gapcoin
26.2 Spielaur
25.4 Gapcoin unsubmitted
18.7 Jacobsen
18.7 MJPC&JKA
17.9 Jacobsen unsubmitted
17.4 Brent
16.4 TonyKey
15.7 Jansen
15.4 RobSmith
14.9 Rosnthal
14.1 Cami
13.8 Andersen
13.1 TorAlmJA

LaurV 2015-10-22 02:29

It wasn't sarcasm. I use different icons for that :razz:
Now I am thinking it was a bit silly of me not to google it before posting, but robert44's talk about "setting higher limits to reporting them" was crying out in my brain for a similarity with pool mining, from which it was a very small step to the name itself. Now it seems only natural that someone else thought of it before me... :sad: Tragedy of my life...

OTOH, I like their page (the one you linked). Good and elementary explanations of both the coining system and the maths of prime gaps, for everybody to understand. As an "old salt" bitcoin folk, I will give their GPU miner a try during the long weekend we have here (the 23rd is a national holiday), to see how fast it is. I have no real feeling (apart from this thread) for what the numbers you posted really mean in terms of effort. Seeing you on the list, it may not be easy to get some coins...

Edit2: "there is no pool mining at present". Time to make one? :razz:

robert44444uk 2015-10-23 06:11

[QUOTE=mart_r;413188]Before I jumped on the bandwagon with the (m*p#)/(d*q#)±x kind of sequences, I wrote a small code that tells me how many candidates there are left to check after a trial division up to p.
This gives sort of an "effective" merit, as displayed in this example:

[CODE]center number = 2000003# / 13#

             numbers without
            factor <= 2000003       effective merit
            - side      + side      - side   + side
merit ± 1     2550        2527        0.03     0.03
merit ± 2     3218        3199        0.04     0.04
merit ± 3    21172       21119        0.27     0.27
merit ± 4    38603       38594        0.50     0.50
merit ± 5    64610       64486        0.84     0.83
merit ± 6    90082       90090        1.16     1.16
merit ± 7   127014      127067        1.64     1.64
merit ± 8   163654      163684        2.12     2.12
merit ± 9   204374      204397        2.64     2.64
merit ±10   244814      244884        3.17     3.17[/CODE]Depending on the parameters, you can choose which merit you want to find, then take exp(effective merit) to have a rough estimate of the number of different tests you might need until an example is found.
If e.g. you aim for a merit >10 in this region (± 5), after four attempts there is a >50% chance that an example is found. (I loosely calculate this 50%-chance by using the factor log(2), so exp(0.84+0.83)*log(2) ~ 3.7 attempts)[/QUOTE]

Thanks mart_r, this is very instructive. I am being really thick though - how is "effective merit" calculated?

mart_r 2015-10-23 19:57

[QUOTE=robert44444uk;413427] how is "effective merit" calculated?[/QUOTE]

Let W(p)=[TEX]\prod_{q\ prime,\ q \le p} \frac {q-1}{q}[/TEX]

Then the number of numbers without a factor <=p must be divided by log(p#)*W(p) to get the "effective merit".

In my example then, one prime is expected every 77338th number without a factor <= 2000003 (log 2000003# * W(2000003) = 1998602.23 * 0.0386962947 = 77338.5009...).
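Mart_r's recipe can be sketched in a few lines of Python (a hypothetical sketch, not mart_r's actual code; the function name `merit_scale` is my own): sieve the primes up to p, accumulate log(p#) = θ(p) and W(p), and divide a candidate count by log(p#)·W(p). This reproduces the 77338.5 scale quoted for p = 2000003.

```python
from math import log

def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(2, n + 1) if sieve[i]]

def merit_scale(p):
    """Return log(p#) * W(p) = theta(p) * prod_{q<=p, q prime} (q-1)/q.
    Dividing a count of unsieved candidates by this gives the
    "effective merit" described in the post."""
    theta, w = 0.0, 1.0
    for q in primes_up_to(p):
        theta += log(q)
        w *= (q - 1) / q
    return theta * w

scale = merit_scale(2000003)
print(round(scale, 1))          # about 77338.5, matching the post
print(round(64610 / scale, 2))  # about 0.84, the "- side" effective merit at +/-5
```

With the same scale, the attempts estimate from the earlier post follows directly: exp(0.84 + 0.83) * log(2) ≈ 3.7 attempts for a >50% chance of a merit-10 gap in that region.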

On second thought, I should have explained this earlier... my bad.

By the way, does anyone know of a formula to get a sufficiently accurate value for W(p) without having to calculate it directly (e.g. if p is large), preferably using known values of Li(p)-[TEX]\pi[/TEX](p)?
I put together something which can be used with known values of Chebychev's theta:
W(p) ~ [TEX]\left[e^\gamma \left(\log p + \frac {2}{\sqrt p} - \frac {p-\theta (p)}{p}\right)\right]^{-1}[/TEX]
I wonder if this can be improved somehow.
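For what it's worth, the approximation checks out numerically when the bracketed expression is read as an estimate of 1/W(p) (note W(2000003) ≈ 0.0387 while e^γ log p ≈ 25.8). A quick sanity check using only the values quoted earlier in this post:

```python
from math import log, sqrt, exp

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

p = 2000003
theta = 1998602.23          # Chebyshev theta(p), value quoted above
w_exact = 0.0386962947      # W(p), value quoted above

# mart_r's formula, read as an estimate of 1/W(p) and inverted
w_approx = 1.0 / (exp(GAMMA) * (log(p) + 2 / sqrt(p) - (p - theta) / p))
print(w_approx)             # agrees with w_exact to several digits
```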


As you may notice, I'm also still actively searching for gaps from time to time, I only just gathered all data from the past twelve months and was overwhelmed that there were a total of 150 gaps for Mr Nicely's list! I was expecting maybe 50 or thereabouts...

robert44444uk 2015-10-31 14:23

[QUOTE=mart_r;413495]


As you may notice, I'm also still actively searching for gaps from time to time, I only just gathered all data from the past twelve months and was overwhelmed that there were a total of 150 gaps for Mr Nicely's list! I was expecting maybe 50 or thereabouts...[/QUOTE]

And what results - three gaps of a million plus, the first additions to that list for a while. There were none in 2014:

1176666 C?P MrtnRaab 2015 12.9561 39443 91199#/46473256830 - 547454
1217460 C?P MrtnRaab 2015 13.4036 39448 91229#/46093437390 - 495038
1462522 C?P MrtnRaab 2015 16.1016 39448 91229#/46056680670 - 853776

robert44444uk 2015-11-12 16:11

Here are some statistics, banded by gap size, by discoverer and by year. 2008 was not a good year for surviving gaps! 2001 is the earliest year from which gaps >2K have survived; Pardo and Dubner seemed to be the only folks looking at larger gaps back then.

Danaj has over 90% of gaps in the 30-35K range but none >1,000K. Helmut Spielaur has almost 100% of the 2-4K range.

Here are the discoverers

[CODE]


Name Total 0-2K 2-4K 4-6K 6-8K 8-10K 10-15K 15-20K 20-25K 25-30K 30-35K 35-40K 40-45K 45-50K 50-55K 55-60K 60-70K 70-80K 80-100K 100-150K 150-200K 200-1000K >1000K

Jacobsen 43502 0 3 24 262 639 1617 2009 2111 2201 2255 1901 1788 1917 1908 1696 3034 2768 5833 6637 2295 2604 0
Rosnthal 6111 0 4 3 17 37 215 77 38 9 13 49 80 33 42 58 283 526 717 3185 706 19 0
MJPC&JKA 5911 0 0 0 0 0 0 41 40 99 92 300 347 337 341 478 999 971 1148 220 48 448 2
M.Jansen 4638 0 0 2 17 36 193 227 182 164 114 213 196 130 110 123 212 88 108 569 669 1283 2
RobSmith 5308 0 0 0 2 8 37 20 25 15 5 2 3 7 7 14 100 279 1124 2266 899 495 0
Spielaur 2448 270 986 426 492 242 32 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
PierCami 1415 0 0 0 0 0 0 1 1 1 0 2 5 5 13 32 75 84 168 355 254 417 2
Gapcoin 1056 0 0 530 127 12 335 31 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0
TorAlmJA 524 0 4 3 13 9 8 1 0 0 0 2 10 3 2 1 9 28 74 208 101 48 0
Toni_Key 3837 0 0 1 4 3 40 89 82 7 11 26 56 65 74 95 278 235 437 2019 307 8 0
Andersen 128 0 0 2 3 1 4 0 0 0 0 0 0 0 0 0 7 5 5 55 24 22 0
Be.Nyman 121 121 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
RP.Brent 120 120 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
TRNicely 95 95 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
LndrPrkn 72 72 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Yng&Ptlr 71 71 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
TOeSilva 70 70 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
HrzogTOS 52 52 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Glaisher 43 43 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
DHLehmer 38 38 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
JLGPardo 14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 9 4 1 0
MrtnRaab 170 0 0 6 63 13 19 3 0 4 10 5 15 3 3 3 3 0 0 4 5 8 3
GABandAR 12 12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Other 64 36 3 3 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 2 1 14 4

Total 75820 1000 1000 1000 1000 1000 2500 2500 2500 2500 2500 2500 2500 2500 2500 2500 5000 4984 9614 15529 5313 5367 13
[/CODE]

[CODE]


Name Total 0-2K 2-4K 4-6K 6-8K 8-10K 10-15K 15-20K 20-25K 25-30K 30-35K 35-40K 40-45K 45-50K 50-55K 55-60K 60-70K 70-80K 80-100K 100-150K 150-200K 200-1000K >1000K

Jacobsen 57.4% 0.0% 0.3% 2.4% 26.2% 63.9% 64.7% 80.4% 84.4% 88.0% 90.2% 76.0% 71.5% 76.7% 76.3% 67.8% 60.7% 55.5% 60.7% 42.7% 43.2% 48.5% 0.0%
Rosnthal 8.1% 0.0% 0.4% 0.3% 1.7% 3.7% 8.6% 3.1% 1.5% 0.4% 0.5% 2.0% 3.2% 1.3% 1.7% 2.3% 5.7% 10.6% 7.5% 20.5% 13.3% 0.4% 0.0%
MJPC&JKA 7.8% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 1.6% 1.6% 4.0% 3.7% 12.0% 13.9% 13.5% 13.6% 19.1% 20.0% 19.5% 11.9% 1.4% 0.9% 8.3% 15.4%
M.Jansen 6.1% 0.0% 0.0% 0.2% 1.7% 3.6% 7.7% 9.1% 7.3% 6.6% 4.6% 8.5% 7.8% 5.2% 4.4% 4.9% 4.2% 1.8% 1.1% 3.7% 12.6% 23.9% 15.4%
RobSmith 7.0% 0.0% 0.0% 0.0% 0.2% 0.8% 1.5% 0.8% 1.0% 0.6% 0.2% 0.1% 0.1% 0.3% 0.3% 0.6% 2.0% 5.6% 11.7% 14.6% 16.9% 9.2% 0.0%
Spielaur 3.2% 27.0% 98.6% 42.6% 49.2% 24.2% 1.3% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
PierCami 1.9% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.1% 0.2% 0.2% 0.5% 1.3% 1.5% 1.7% 1.7% 2.3% 4.8% 7.8% 15.4%
Gapcoin 1.4% 0.0% 0.0% 53.0% 12.7% 1.2% 13.4% 1.2% 0.8% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
TorAlmJA 0.7% 0.0% 0.4% 0.3% 1.3% 0.9% 0.3% 0.0% 0.0% 0.0% 0.0% 0.1% 0.4% 0.1% 0.1% 0.0% 0.2% 0.6% 0.8% 1.3% 1.9% 0.9% 0.0%
Toni_Key 5.1% 0.0% 0.0% 0.1% 0.4% 0.3% 1.6% 3.6% 3.3% 0.3% 0.4% 1.0% 2.2% 2.6% 3.0% 3.8% 5.6% 4.7% 4.5% 13.0% 5.8% 0.1% 0.0%
Andersen 0.2% 0.0% 0.0% 0.2% 0.3% 0.1% 0.2% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.1% 0.1% 0.1% 0.4% 0.5% 0.4% 0.0%
Be.Nyman 0.2% 12.1% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
RP.Brent 0.2% 12.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
TRNicely 0.1% 9.5% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
LndrPrkn 0.1% 7.2% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
Yng&Ptlr 0.1% 7.1% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
TOeSilva 0.1% 7.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
HrzogTOS 0.1% 5.2% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
Glaisher 0.1% 4.3% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
DHLehmer 0.1% 3.8% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
JLGPardo 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.1% 0.1% 0.0% 0.0%
MrtnRaab 0.2% 0.0% 0.0% 0.6% 6.3% 1.3% 0.8% 0.1% 0.0% 0.2% 0.4% 0.2% 0.6% 0.1% 0.1% 0.1% 0.1% 0.0% 0.0% 0.0% 0.1% 0.1% 23.1%
GABandAR 0.0% 1.2% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
Other 0.1% 3.6% 0.3% 0.3% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.3% 30.8%

Total 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0%
[/CODE]

robert44444uk 2015-11-12 16:14

And here are the year breakdowns:

[CODE]

Year Total 0-2K 2-4K 4-6K 6-8K 8-10K 10-15K 15-20K 20-25K 25-30K 30-35K 35-40K 40-45K 45-50K 50-55K 55-60K 60-70K 70-80K 80-100K 100-150K 150-200K 200-1000K >1000K

2015 44603 10 473 347 384 565 1277 1522 1324 999 1466 1090 775 993 1379 1498 3226 3203 6921 12014 2953 2181 3
2014 16419 59 236 351 166 223 975 687 927 1233 829 894 1170 1043 662 376 483 612 1183 2098 1259 953 0
2013 7097 49 100 177 275 114 37 67 99 138 99 302 348 336 352 487 1017 974 1180 253 71 619 3
2012 3934 66 103 78 143 69 138 196 125 72 63 136 89 45 59 83 148 65 41 400 603 1209 3
2011 1075 94 64 38 15 19 58 25 24 57 43 74 104 76 35 30 49 17 39 142 55 17 0
2010 704 52 15 3 0 0 0 2 0 0 0 0 0 0 0 4 37 55 98 293 87 57 1
2009 577 9 5 1 1 0 0 0 1 1 0 2 1 4 9 20 24 25 65 46 128 235 0
2008 18 14 0 0 0 0 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
2007 340 17 3 0 5 3 2 0 0 0 0 1 5 2 1 2 8 19 45 129 63 35 0
2006 173 25 0 1 4 4 1 0 0 0 0 0 5 0 2 0 0 4 12 52 45 17 1
2005 18 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0
2004 250 22 0 3 4 2 4 1 0 0 0 1 3 1 1 0 8 9 27 89 44 29 2
2003 44 15 1 1 3 1 5 0 0 0 0 0 0 0 0 0 0 1 3 1 0 13 0
2002 32 19 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 8 4 1 0
2001 26 23 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3 0 0 0
2000 32 32 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1999 36 36 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1998 17 17 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1997 16 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1996 36 36 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1995 12 12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1994 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1993 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Other 361 361 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

Total 75820 1000 1000 1000 1000 1000 2500 2500 2500 2500 2500 2500 2500 2500 2500 2500 5000 4984 9614 15529 5313 5367 13

[/CODE]

[CODE]

Year Total 0-2K 2-4K 4-6K 6-8K 8-10K 10-15K 15-20K 20-25K 25-30K 30-35K 35-40K 40-45K 45-50K 50-55K 55-60K 60-70K 70-80K 80-100K 100-150K 150-200K 200-1000K >1000K

2015 58.8% 1.0% 47.3% 34.7% 38.4% 56.5% 51.1% 60.9% 53.0% 40.0% 58.6% 43.6% 31.0% 39.7% 55.2% 59.9% 64.5% 64.3% 72.0% 77.4% 55.6% 40.6% 23.1%
2014 21.7% 5.9% 23.6% 35.1% 16.6% 22.3% 39.0% 27.5% 37.1% 49.3% 33.2% 35.8% 46.8% 41.7% 26.5% 15.0% 9.7% 12.3% 12.3% 13.5% 23.7% 17.8% 0.0%
2013 9.4% 4.9% 10.0% 17.7% 27.5% 11.4% 1.5% 2.7% 4.0% 5.5% 4.0% 12.1% 13.9% 13.4% 14.1% 19.5% 20.3% 19.5% 12.3% 1.6% 1.3% 11.5% 23.1%
2012 5.2% 6.6% 10.3% 7.8% 14.3% 6.9% 5.5% 7.8% 5.0% 2.9% 2.5% 5.4% 3.6% 1.8% 2.4% 3.3% 3.0% 1.3% 0.4% 2.6% 11.3% 22.5% 23.1%
2011 1.4% 9.4% 6.4% 3.8% 1.5% 1.9% 2.3% 1.0% 1.0% 2.3% 1.7% 3.0% 4.2% 3.0% 1.4% 1.2% 1.0% 0.3% 0.4% 0.9% 1.0% 0.3% 0.0%
2010 0.9% 5.2% 1.5% 0.3% 0.0% 0.0% 0.0% 0.1% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.2% 0.7% 1.1% 1.0% 1.9% 1.6% 1.1% 7.7%
2009 0.8% 0.9% 0.5% 0.1% 0.1% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.1% 0.0% 0.2% 0.4% 0.8% 0.5% 0.5% 0.7% 0.3% 2.4% 4.4% 0.0%
2008 0.0% 1.4% 0.0% 0.0% 0.0% 0.0% 0.1% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
2007 0.4% 1.7% 0.3% 0.0% 0.5% 0.3% 0.1% 0.0% 0.0% 0.0% 0.0% 0.0% 0.2% 0.1% 0.0% 0.1% 0.2% 0.4% 0.5% 0.8% 1.2% 0.7% 0.0%
2006 0.2% 2.5% 0.0% 0.1% 0.4% 0.4% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.2% 0.0% 0.1% 0.0% 0.0% 0.1% 0.1% 0.3% 0.8% 0.3% 7.7%
2005 0.0% 1.6% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
2004 0.3% 2.2% 0.0% 0.3% 0.4% 0.2% 0.2% 0.0% 0.0% 0.0% 0.0% 0.0% 0.1% 0.0% 0.0% 0.0% 0.2% 0.2% 0.3% 0.6% 0.8% 0.5% 15.4%
2003 0.1% 1.5% 0.1% 0.1% 0.3% 0.1% 0.2% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.2% 0.0%
2002 0.0% 1.9% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.1% 0.1% 0.0% 0.0%
2001 0.0% 2.3% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
2000 0.0% 3.2% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
1999 0.0% 3.6% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
1998 0.0% 1.7% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
1997 0.0% 1.6% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
1996 0.0% 3.6% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
1995 0.0% 1.2% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
1994 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
1993 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
Other 0.5% 36.1% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%

Total 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0% 100.0%


[/CODE]

danaj 2015-11-12 17:12

Interesting tables, thanks for compiling and sharing! There has been a huge amount of activity this year compared to previous years, from what I see.

The small gaps are interesting. I kind of want to see what an AWS instance churning on small numbers could do. It's not as exciting as the 60-100k range though, where every hour sees visible results. :)

For 1000k+, I stopped my largest search quite a while back, which is why my largest ones are now 300-400kish. Above 4k digits or so, a different library should be used -- gwnum is better than GMP for this. I was debating writing a script that would take as input something like '1 * 37993# / 30' and do the presieve with my code to get the list of candidates, then call OpenPFGW on each one to test compositeness until a PRP is found (which can then be tested with BPSW or Paul's gwnum-Frobenius routine). More polished would be a C program that pulls all that in.

I've debated running it anyway just to get some results, but it seems wrong to run code that I know is 2-10x slower than other methods. I keep hoping GMP will do something to narrow the distance. Version 6.1.0 just got released, with support for ADX on Broadwell and Skylake (none of my machines are that new) and "Tuned values for FFT multiplications are provided for larger number on many platforms" which could be helpful. I really need to try it out.
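The presieve-then-test pipeline described above can be sketched in pure Python, scaled down to a small analogue of '1 * 37993# / 30'. All names here are hypothetical, and the base-2 Fermat test is only a stand-in for the OpenPFGW call; the real version would shell out to pfgw and use the faster sieving code mentioned.

```python
def small_primes(limit):
    """Primes up to limit via a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(2, limit + 1) if sieve[i]]

def presieve(center, radius, primes):
    """Offsets k in [1, radius] such that center+k has no factor in primes.
    One modular reduction of center per prime, then strided marking."""
    alive = bytearray([1]) * (radius + 1)
    for p in primes:
        k = (-center) % p      # smallest k >= 0 with center + k == 0 (mod p)
        if k == 0:
            k = p
        alive[k :: p] = bytearray(len(alive[k :: p]))
    return [k for k in range(1, radius + 1) if alive[k]]

def is_prp(n):
    """Stand-in for the OpenPFGW compositeness test: a base-2 Fermat test."""
    return pow(2, n - 1, n) == 1

# small analogue of '1 * 37993# / 30'
center = 1
for q in small_primes(379):
    center *= q
center //= 30

candidates = presieve(center, 10000, small_primes(1000))
next_prp_offset = next(k for k in candidates if is_prp(center + k))
print(len(candidates), next_prp_offset)
```

The point of the split is that the cheap sieve removes the bulk of the offsets, so the expensive compositeness test (PFGW in the real pipeline) only runs on the survivors.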

robert44444uk 2015-11-13 10:21

[QUOTE=danaj;415939]

For 1000k+, I stopped my largest search quite a while back, which is why my largest ones are now 300-400kish. Above 4k digits or so, a different library should be used -- gwnum is better than GMP for this. I was debating writing a script that would take as input something like '1 * 37993# / 30' and do the presieve with my code to get the list of candidates, then call OpenPFGW on each one to test compositeness until a PRP is found (which can then be tested with BPSW or Paul's gwnum-Frobenius routine). More polished would be a C program that pulls all that in.

I've debated running it anyway just to get some results, but it seems wrong to run code that I know is 2-10x slower than other methods. I keep hoping GMP will do something to narrow the distance. Version 6.1.0 just got released, with support for ADX on Broadwell and Skylake (none of my machines are that new) and "Tuned values for FFT multiplications are provided for larger number on many platforms" which could be helpful. I really need to try it out.[/QUOTE]

You should write this stuff!

robert44444uk 2015-11-13 10:53

And here are the last two summary stat tables:

[CODE]

merit Total 0-2K 2-4K 4-6K 6-8K 8-10K 10-15K 15-20K 20-25K 25-30K 30-35K 35-40K 40-45K 45-50K 50-55K 55-60K 60-70K 70-80K 80-100K 100-150K 150-200K 200-1000K >1000K

35 2 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
34 5 4 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
33 22 11 1 2 6 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
32 43 17 0 3 15 0 0 0 1 1 1 0 0 1 1 0 1 1 0 1 0 0 0
31 100 36 6 3 38 0 2 3 1 3 2 2 2 1 0 0 1 0 0 0 0 0 0
30 178 38 24 54 14 2 7 3 5 5 9 2 3 1 1 3 2 1 2 1 1 0 0
29 367 55 60 114 10 11 20 19 17 11 13 6 8 3 4 3 2 5 5 1 0 0 0
28 688 63 156 151 27 14 29 46 45 38 25 22 18 11 8 6 10 10 5 2 1 1 0
27 1262 124 279 175 51 40 109 97 111 64 51 39 37 8 17 5 15 21 10 7 2 0 0
26 2005 138 336 186 108 100 212 204 177 135 90 109 55 36 18 8 37 23 15 11 3 4 0
25 2898 75 122 171 187 179 396 435 321 257 168 205 96 48 39 30 77 38 34 10 5 4 1
24 4303 30 15 113 303 285 622 632 507 411 375 315 123 109 61 69 143 72 67 40 6 5 0
23 5109 30 1 26 192 248 622 647 596 589 590 385 207 155 129 149 253 116 79 72 14 9 0
22 4902 34 0 2 47 104 382 323 488 523 658 376 368 217 257 287 357 202 138 106 18 15 0
21 4436 23 0 0 2 17 95 80 204 357 363 347 404 405 427 448 476 297 248 181 38 24 0
20 4365 26 0 0 0 0 3 9 26 97 120 343 399 508 598 415 675 381 410 267 50 38 0
19 4668 28 0 0 0 0 0 1 1 8 30 225 394 580 508 357 824 465 617 463 107 60 0
18 4406 26 0 0 0 0 0 0 0 0 5 91 296 302 242 348 675 655 788 717 193 68 0
17 4202 20 0 0 0 0 0 0 0 0 0 30 85 99 133 239 561 749 943 926 312 105 0
16 4189 24 0 0 0 0 0 0 0 0 0 3 5 14 48 84 408 611 1158 1259 429 145 1
15 4282 17 0 0 0 0 0 0 0 0 0 0 0 2 9 37 290 498 1162 1451 556 258 2
14 4182 23 0 0 0 0 0 0 0 0 0 0 0 0 0 9 102 334 1292 1861 215 345 1
13 4245 20 0 0 0 0 0 0 0 0 0 0 0 0 0 2 52 226 913 2300 247 484 1
12 3782 19 0 0 0 0 0 0 0 0 0 0 0 0 0 1 20 158 513 2120 323 625 3
11 3295 15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 12 73 377 1642 470 704 2
10 2513 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4 16 306 893 602 675 1
9 1637 15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 9 370 252 362 628 0
8 1327 14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 10 111 240 416 536 0
7 951 11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 5 11 195 409 318 1
6 734 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4 8 198 324 190 0
5 411 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 12 173 103 112 0
4 215 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 11 99 81 14 0
3 75 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 8 32 26 0 0
2 17 7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 9 0 0 0
1 4 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

Total 75820 1000 1000 1000 1000 1000 2500 2500 2500 2500 2500 2500 2500 2500 2500 2500 5000 4984 9614 15529 5313 5367 13


[/CODE]

[CODE]
merit Total 0-2K 2-4K 4-6K 6-8K 8-10K 10-15K 15-20K 20-25K 25-30K 30-35K 35-40K 40-45K 45-50K 50-55K 55-60K 60-70K 70-80K 80-100K 100-150K 150-200K 200-1000K >1000K

35 0.0% 0.1% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
34 0.0% 0.4% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
33 0.0% 1.1% 0.1% 0.2% 0.6% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
32 0.1% 1.7% 0.0% 0.3% 1.5% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
31 0.1% 3.6% 0.6% 0.3% 3.8% 0.0% 0.1% 0.1% 0.0% 0.1% 0.1% 0.1% 0.1% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
30 0.2% 3.8% 2.4% 5.4% 1.4% 0.2% 0.3% 0.1% 0.2% 0.2% 0.4% 0.1% 0.1% 0.0% 0.0% 0.1% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
29 0.5% 5.5% 6.0% 11.4% 1.0% 1.1% 0.8% 0.8% 0.7% 0.4% 0.5% 0.2% 0.3% 0.1% 0.2% 0.1% 0.0% 0.1% 0.1% 0.0% 0.0% 0.0% 0.0%
28 0.9% 6.3% 15.6% 15.1% 2.7% 1.4% 1.2% 1.8% 1.8% 1.5% 1.0% 0.9% 0.7% 0.4% 0.3% 0.2% 0.2% 0.2% 0.1% 0.0% 0.0% 0.0% 0.0%
27 1.7% 12.4% 27.9% 17.5% 5.1% 4.0% 4.4% 3.9% 4.4% 2.6% 2.0% 1.6% 1.5% 0.3% 0.7% 0.2% 0.3% 0.4% 0.1% 0.0% 0.0% 0.0% 0.0%
26 2.6% 13.8% 33.6% 18.6% 10.8% 10.0% 8.5% 8.2% 7.1% 5.4% 3.6% 4.4% 2.2% 1.4% 0.7% 0.3% 0.7% 0.5% 0.2% 0.1% 0.1% 0.1% 0.0%
25 3.8% 7.5% 12.2% 17.1% 18.7% 17.9% 15.8% 17.4% 12.8% 10.3% 6.7% 8.2% 3.8% 1.9% 1.6% 1.2% 1.5% 0.8% 0.4% 0.1% 0.1% 0.1% 7.7%
24 5.7% 3.0% 1.5% 11.3% 30.3% 28.5% 24.9% 25.3% 20.3% 16.4% 15.0% 12.6% 4.9% 4.4% 2.4% 2.8% 2.9% 1.4% 0.7% 0.3% 0.1% 0.1% 0.0%
23 6.7% 3.0% 0.1% 2.6% 19.2% 24.8% 24.9% 25.9% 23.8% 23.6% 23.6% 15.4% 8.3% 6.2% 5.2% 6.0% 5.1% 2.3% 0.8% 0.5% 0.3% 0.2% 0.0%
22 6.5% 3.4% 0.0% 0.2% 4.7% 10.4% 15.3% 12.9% 19.5% 20.9% 26.3% 15.0% 14.7% 8.7% 10.3% 11.5% 7.1% 4.1% 1.4% 0.7% 0.3% 0.3% 0.0%
21 5.9% 2.3% 0.0% 0.0% 0.2% 1.7% 3.8% 3.2% 8.2% 14.3% 14.5% 13.9% 16.2% 16.2% 17.1% 17.9% 9.5% 6.0% 2.6% 1.2% 0.7% 0.4% 0.0%
20 5.8% 2.6% 0.0% 0.0% 0.0% 0.0% 0.1% 0.4% 1.0% 3.9% 4.8% 13.7% 16.0% 20.3% 23.9% 16.6% 13.5% 7.6% 4.3% 1.7% 0.9% 0.7% 0.0%
19 6.2% 2.8% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.3% 1.2% 9.0% 15.8% 23.2% 20.3% 14.3% 16.5% 9.3% 6.4% 3.0% 2.0% 1.1% 0.0%
18 5.8% 2.6% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.2% 3.6% 11.8% 12.1% 9.7% 13.9% 13.5% 13.1% 8.2% 4.6% 3.6% 1.3% 0.0%
17 5.5% 2.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 1.2% 3.4% 4.0% 5.3% 9.6% 11.2% 15.0% 9.8% 6.0% 5.9% 2.0% 0.0%
16 5.5% 2.4% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.1% 0.2% 0.6% 1.9% 3.4% 8.2% 12.3% 12.0% 8.1% 8.1% 2.7% 7.7%
15 5.6% 1.7% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.1% 0.4% 1.5% 5.8% 10.0% 12.1% 9.3% 10.5% 4.8% 15.4%
14 5.5% 2.3% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.4% 2.0% 6.7% 13.4% 12.0% 4.0% 6.4% 7.7%
13 5.6% 2.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.1% 1.0% 4.5% 9.5% 14.8% 4.6% 9.0% 7.7%
12 5.0% 1.9% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.4% 3.2% 5.3% 13.7% 6.1% 11.6% 23.1%
11 4.3% 1.5% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.2% 1.5% 3.9% 10.6% 8.8% 13.1% 15.4%
10 3.3% 1.6% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.1% 0.3% 3.2% 5.8% 11.3% 12.6% 7.7%
9 2.2% 1.5% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.2% 3.8% 1.6% 6.8% 11.7% 0.0%
8 1.8% 1.4% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.2% 1.2% 1.5% 7.8% 10.0% 0.0%
7 1.3% 1.1% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.1% 0.1% 1.3% 7.7% 5.9% 7.7%
6 1.0% 1.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.1% 0.1% 1.3% 6.1% 3.5% 0.0%
5 0.5% 1.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.1% 1.1% 1.9% 2.1% 0.0%
4 0.3% 0.8% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.1% 0.6% 1.5% 0.3% 0.0%
3 0.1% 0.8% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.1% 0.2% 0.5% 0.0% 0.0%
2 0.0% 0.7% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.1% 0.0% 0.0% 0.0%
1 0.0% 0.4% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%


[/CODE]

danaj 2015-11-17 16:22

The range would be 20 to 8700 digits (realistically 700 to 8700) -- I meant that we want every entry to have a merit at least 5. 6 would do, but so would 10, 15, 20, 25, 30, and we'd welcome 35 with joyous hearts.

danaj 2015-12-03 18:03

[QUOTE=Antonio;418098]As it has been so long since the merits file has been updated, I've started to keep a local copy updated with my own results to prevent me from submitting spurious data.[/QUOTE]Don't you have to do this anyway to prevent duplicates (two records for the same gap)? It is quite common for me to see duplicates during a week, and the 235 run I'm doing now will sometimes spit out two dups within minutes (out of 165 records output, there are 141 unique gap lengths).

This is one reason I put off submitting every week. It's not unusual for the next week to find quite a few better results, so putting it off means fewer intermediates. But I have been dropping down to every 2-3 weeks. I figure when I have 2k-3k new records I should get them pushed.

I submitted 2857 gaps on Nov 21. Min 3388, Max 522892, max merit 30.481935.

My current set is 1764 gaps. Min 4162, Max 521074, max merit 31.846851.

I have some searches going on in the sub 10k range, which makes for nice merits, but it is definitely slower in gaps/day than the 70k+ range.

Antonio 2015-12-03 20:22

[QUOTE=danaj;418123]Don't you have to do this anyway to prevent duplicates (two records for the same gap)? It is quite common for me to see duplicates during a week, and the 235 run I'm doing now will sometimes spit out two dups within minutes (out of 165 records output, there are 141 unique gap lengths).

This is one reason I put off submitting every week. It's not unusual for the next week to find quite a few better results, so putting it off means fewer intermediates. But I have been dropping down to every 2-3 weeks. I figure when I have 2k-3k new records I should get them pushed.

I submitted 2857 gaps on Nov 21. Min 3388, Max 522892, max merit 30.481935.

My current set is 1764 gaps. Min 4162, Max 521074, max merit 31.846851.

I have some searches going on in the sub 10k range, which makes for nice merits, but it is definitely slower in gaps/day than the 70k+ range.[/QUOTE]

While the merits file was being updated once or more a week, I was checking my weeks work for duplicates and then against the latest merits file just before submitting. I was not, however, checking against my earlier submissions as they were already in the latest merits file.
I have also modified the search script so that it re-reads the merits.txt file every 12 hrs. (at a convenient point in the program, so this is only approximate) to reduce redundant output if/when the merits file is updated. This may change to once every 24 hrs., but it only takes a second or two, and is much more convenient than stopping and re-starting the script.
I backup the results files from all four threads each morning and this now automatically also updates my local merits.txt, so now each thread 'knows' about the other threads results within 12 hrs. of the backup.

I was submitting about 500-600 results each week, but since starting the search for the missing gaps < 100k this has dropped and is now at about 350 per week.
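Antonio's periodic re-read can be sketched as a small cache that reloads the merits file when it is stale, at whatever point the search loop finds convenient. This is a hypothetical sketch (his script and the exact merits.txt layout are not shown in the thread; the 'gap merit' line format below is an assumption).

```python
import os
import tempfile
import time

RELOAD_SECONDS = 12 * 3600  # roughly every 12 hours, as described above

def read_merits(path):
    """Parse a merits file into {gap: best known merit}.
    Assumed (hypothetical) format: whitespace-separated 'gap merit' per line."""
    best = {}
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if len(parts) >= 2:
                gap, merit = int(parts[0]), float(parts[1])
                best[gap] = max(merit, best.get(gap, 0.0))
    return best

class MeritsCache:
    """Re-read the merits file at most once every RELOAD_SECONDS."""
    def __init__(self, path):
        self.path = path
        self.loaded_at = float("-inf")
        self.best = {}

    def get(self):
        if time.time() - self.loaded_at >= RELOAD_SECONDS:
            self.best = read_merits(self.path)
            self.loaded_at = time.time()
        return self.best

# tiny demo with a throwaway file
demo = os.path.join(tempfile.gettempdir(), "merits_demo.txt")
with open(demo, "w") as fh:
    fh.write("112 10.5\n16378 20.1\n112 11.2\n")
cache = MeritsCache(demo)
print(cache.get())  # {112: 11.2, 16378: 20.1}
```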

danaj 2015-12-03 21:54

Thanks for the explanation. I see now.

I put my submitted file (with merits) in a sentgaps.txt file, and my post-processing scripts all read that in first. I've thought about having the search scripts pull in new merits files, but haven't gotten around to it because it isn't a big issue with my process. I have to pull in results from multiple machines, including Windows boxes that communicate by occasionally copying results files to my media PC's shared drive. The process that sucks them all in filters for just the new results (ensuring the log has no duplicate lines from anything we've processed before). The next one filters against the latest and greatest merits file and the previous submission, and double-checks that nothing is screwed up (including re-verifying the gaps for small entries) -- that turns my log file into the new gaps file.

There's some duplicate work there:
- the search program found the gap.
- that last script may redo the prev/next prime step, or just verify the endpoints. Also checks the merit calculation.
- sending to Dr. Nicely results in him running cglp4 to check the gap.
- I have a double checker that goes through every gap in my gaps file plus all submitted gaps, in digit size order, and does a slightly more strenuous version of cglp4 (ES BPSW plus a Frobenius test on the endpoints). This is somewhere north of 5k digits, but has to get restarted every now and then to process newly found gaps.
- Another process goes through every gap as above in digit size order and runs an ECPP proof on the endpoints. Much slower, at 1200 or so digits and won't move much farther without dedicating a lot more processing power.
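The filtering steps in the list above boil down to one comparison per result: keep a gap only if its merit beats everything already known for that gap length (on Dr. Nicely's list, the record for a given gap length is its smallest known occurrence, which for a fixed gap means the largest merit). A toy sketch, with hypothetical names and data:

```python
def filter_new_records(found, *known_sources):
    """found: iterable of (gap, merit) search results.
    known_sources: dicts {gap: merit}, e.g. the official merits file and
    previously submitted gaps. Returns surviving records, keeping only the
    best in-batch result per gap length (in-batch dedup)."""
    best = {}
    for gap, merit in found:
        known = max((src.get(gap, 0.0) for src in known_sources), default=0.0)
        if merit > known and merit > best.get(gap, 0.0):
            best[gap] = merit
    return best

# hypothetical example data
merits_file = {9430: 21.9, 60000: 15.0}   # official records
submitted   = {9430: 22.4}                # our pending submission
results_log = [(9430, 22.1), (9430, 23.0), (60000, 14.2), (70000, 18.8)]

print(filter_new_records(results_log, merits_file, submitted))
# {9430: 23.0, 70000: 18.8}
```

Here (9430, 22.1) loses to the pending 22.4, (60000, 14.2) loses to the official 15.0, and 70000 has no record at all, so it survives automatically.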

Antonio 2015-12-04 07:39

[QUOTE=danaj;418138]Thanks for the explanation. I see now.

I put my submitted file (with merits) in a sentgaps.txt file, and my post-processing scripts all read that in first. I've thought about having the search scripts pull in new merits files, but haven't gotten around to it because it isn't a big issue with my process. I have to pull in results from multiple machines, including Windows boxes that communicate by occasionally copying results files to my media PC's shared drive. The process that sucks them all in filters for just the new results (ensuring the log has no duplicate lines from anything we've processed before). The next one filters against the latest and greatest merits file and the previous submission, and double-checks that nothing is screwed up (including re-verifying the gaps for small entries) -- that turns my log file into the new gaps file.

There's some duplicate work there:
- the search program found the gap.
- that last script may redo the prev/next prime step, or just verify the endpoints. Also checks the merit calculation.
- sending to Dr. Nicely results in him running cglp4 to check the gap.
- I have a double checker that goes through every gap in my gaps file plus all submitted gaps, in digit size order, and does a slightly more strenuous version of cglp4 (ES BPSW plus a Frobenius test on the endpoints). This is somewhere north of 5k digits, but has to get restarted every now and then to process newly found gaps.
- Another process goes through every gap as above in digit size order and runs an ECPP proof on the endpoints. Much slower, at 1200 or so digits and won't move much farther without dedicating a lot more processing power.[/QUOTE]

With only limited resources I don't normally do any double checks, but I have included an optional call to cglp4 as part of the results-file checking process, which I invoke when I have made changes to the script (just to check that I haven't messed anything up).

As a side note I have just received an email to say that my results submitted on Nov. 19th have been included in the merits list, since then I submitted 411 results on Nov. 26th and 376 results on Dec. 3rd. So it looks like keeping an updated merits file locally is worthwhile, especially if TRN's illness continues to be a problem.

Antonio 2015-12-04 12:54

[QUOTE=Antonio;418179] So it looks like keeping an updated merits file locally is worthwhile, especially if TRN's illness continues to be a problem.[/QUOTE]

Just to note I have received the following from TRN this morning:-
[QUOTE]Sorry for the delays---health problems and travels
have interfered. I will have your gaps of
26 November and 03 December processed
and posted as time and health allow. Thanks for
the submissions and for your patience.

TRN
[/QUOTE]

robert44444uk 2016-01-15 15:21

I've made a request to the mods to change the priority of this thread so it appears on the front page. This will also give us a chance to spread our wings a bit and split the correspondence into technically different areas - some thoughts are:
[LIST][*]Achievements[*]Theory[*]Programming[*] Very small gaps - 1,300 - 4,000[*]Small gaps 4,000 - 60,000[*]Medium gaps 60,000 - 500,000[*]Large gaps >500,000[*]Collaboration requests[/LIST]
What do others think?

firejuggler 2016-01-18 23:22

A link to the programs and how to set them up would be nice.

danaj 2016-01-19 03:22

I'm traveling for the next 3 weeks, but when I get back home I'll see about making a web page with programs and instructions. Someone else could likely collect the various programs for posting here in the interim.

I've been running the program [URL="http://www.mersenneforum.org/showpost.php?p=414830&postcount=82"]in this post[/URL] on most of my machines these days, as it is easy to run and deal with restarts. I don't run on AWS any more.

One thing I believe would help my use at least is a little web program to accept results. So instead of writing to a file that I collect in batches, the search program could just open a URL with the gap and name.

danaj 2016-02-07 16:00

A recent comment on Gapcoin's thread was that "[...]Gapcoin seems to find prime gaps which are considered of higher merit than most others." Indeed Gapcoin has found a number of large merits, including a 33.49.

Average merit of highest 1595 results (since Gapcoin has that many records):
27.79 Jacobsen
27.48 Nyman
27.38 Spielauer
26.49 Gapcoin
23.14 MJPCJKA
23.03 Martin Raab
23.02 Michiel Jansen
21.99 Toni Key
21.24 Rosenthal
18.94 Rob Smith
14.55 Pierre Cami
13.72 Torbjörn Alm

Nyman has only 127 results (some of which are real maximal results). My result could be argued unfair since I'm picking the top 4% of my results.


There are a lot of goals people are pursuing. Feel free to point out more.

Number of gaps. The easiest way to increase this, I believe, is to look a bit past the frontier. This used to be the 60-80k range; now it's around 120k. That region has lots of open spaces, so most gaps have no record, and those that do are relatively small, so there is a decent chance of beating the existing result.

Filling in the frontier. There has been some effort to fill in all the remaining gap lengths under 100k, and also to see if we can get all non-maximal lengths under 100k to have a merit >= 10. I have some searches going through these ranges, and I believe Antonio and Rosenthal are also doing this.

Large gap sizes. At this point I think anything 300k+. These take a lot of time per result. Martin Raab is finding huge gaps, while Rob and I are finding a few. The chances of finding a 30+ merit here are quite small given the time required to find any gap. Rosenthal found the largest recently: merit 31.78 at length 309030. The next largest is Rob Smith's 30.38 at length 157194.

Small gap sizes. I don't believe anyone is doing exhaustive searches any more, but there are quite a few people looking for results in the sub-2k range. This is a big effort for very few results (but IMO useful ones!). In the last year it looks like Leif Leonhardy, Bertil Nyman, Helmut Spielauer, and Rob Smith are doing this and getting results.

Large merits. I've found that searches in the 4k to 12k range push these out quite rapidly; finding 30+ merit gaps here seems not too difficult. Under 4k doesn't produce much for me, and of course the searches take longer as the length rises. In theory, doing more searches in the 2k-4k range, perhaps with an all-GMP C program to minimize overhead, would be more efficient. The 5k-8k range is where Gapcoin seems to have concentrated, with their newer miner also looking out to ~24k (concentrated in 12k-16k). Most of their high merits are in the 6k-8k range. I started a couple of processes last month and liked the results, so a few days ago I dedicated my workstation to a search in this range. Martin Raab and Antonio Key have also been finding results recently.

danaj 2016-06-05 20:55

1 Attachment(s)
Here's an animated GIF showing gaps from 1300 to 100k, comparing 2015-10-07 with 2016-06-04.

They're not labeled, but you should be able to see the rise over the 8 months. This would be much cooler with more steps in between, but it's using LibreOffice for the graph, which really doesn't like charts with this many points, so it takes quite a while to make.

As before, it's too busy and too easy to get some people lost in all the colors and tiny dots. Antonio probably loses out the most here, as he has a huge number of gaps but the color doesn't stand out enough.

Some interesting things I see
[LIST][*]The 5k to 15k range all bumped up a lot -- visually it looks like 3+ merits for the median. Rosenthal's lines down there are almost gone, and Spielauer has a little competition in the lower range. A few new points in the 33-34 merit range.[*]20k to 60k or so doesn't look like it moved much in terms of the median merit, but the lower merits got pushed up.[*]The low merits in the 70k+ range are almost all gone. Very little under merit 10 left.[*]Some new lines from constant-p searches. I think the new one of mine from 60k on is a k*4139#/30 search (k up to 5 million).[*]Gapcoin's strong lines at 5k-6k and 13k-15k are still there. They gained 540 from Oct to Dec as they submitted, but then lost 740 this year from broken records. As usual, though, it's mostly the small merits that go away when records are broken. Their unsubmitted records are picking up in quantity, and those are not shown here.[/LIST]

robert44444uk 2016-06-06 10:58

Those graphs are very beautiful!

It would be good to see the same for the next 100k, as there has been massive effort there in the last year.

robert44444uk 2017-07-04 07:44

It looks as if mart_r has broken the largest gap record - this from Dr. Nicely's site:

6582144 C?? MrtnRaab 2017 13.1829 216841 499973#/30030 - 4509212

Dr Nicely writes:

NEW LARGEST KNOWN PRIME GAP
Martin Raab has discovered a new first (and largest) known occurrence prime gap of measure G=6582144 following the 216841-digit prime P1=499973#/30030 - 4509212 (where 499973# indicates the product of all primes from 2 through 499973 inclusive). This gap was first reported by Raab on 01 July 2017. The endpoints have passed Miller's test (base 2) for probabilistic primality; the interior integers remain to be checked independently for compositeness. A test for deterministic certification of primality is at present out of the question. The gap has merit M=13.182884.

danaj 2017-12-26 19:33

New record merit gap from Gapcoin:

8350 41.93878373 293703234068022590158723766104419463425709075574811762098588798217895728858676728143227

The first one with merit 40+.
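The merit figure can be reproduced directly from the gap and the starting prime (merit = gap / ln(p1)); a quick check of the record above:

```python
import math

# Merit = gap / ln(p1), where p1 is the prime that starts the gap.
# Endpoint and gap are the Gapcoin record quoted above.
p1 = 293703234068022590158723766104419463425709075574811762098588798217895728858676728143227
gap = 8350

merit = gap / math.log(p1)  # math.log handles arbitrary-size ints
print(round(merit, 5))  # should land very close to the reported 41.93878373
```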

robert44444uk 2017-12-28 17:45

[QUOTE=danaj;474919]New record merit gap from Gapcoin:

8350 41.93878373 293703234068022590158723766104419463425709075574811762098588798217895728858676728143227

The first one with merit 40+.[/QUOTE]

Oh my - this is a very significant result and will be very hard to beat - 40 was a major target and this result blows it away.

Maybe this will be the next challenge for the PGS when we get to 2^64.

George M 2018-01-01 00:52

I love how we all think the prime numbers are random when really....
 
I don’t think a lot of people know about the Prime Gap Equation but I found it on a Wikipedia Article and it just shows that every prime number determines the following prime number. So, WHAT ARE WE ALL TALKING ABOUT?! heh. And I read a book by Australian mathematician and stand-up comedian Matt Parker, called “Things to Make and Do in the Fourth Dimension” and he says that for some prime number p, there exists another prime number q that ranges from (p, p + 5414). Well, this is how I’m phrasing it, but he simply said in the book that each gap between two adjacent (neighbouring) prime numbers have an upper bound of 5414.

VBCurtis 2018-01-01 02:08

[QUOTE=George M;475715]Well, this is how I’m phrasing it, but he simply said in the book that each gap between two adjacent (neighbouring) prime numbers have an upper bound of 5414.[/QUOTE]
We don't pay much attention to false claims, so you're right that we don't know about this equation nor his false claim.

For your own education, find the next prime after this number:
293703234068022590158723766104419463425709075574811762098588798217895728858676728143227

Hint: the next prime is more than 5414 greater than this number, which disproves the hooey you cite.

axn 2018-01-01 03:52

[QUOTE=VBCurtis;475733]We don't pay much attention to false claims, so you're right that we don't know about this equation nor his false claim.[/QUOTE]

I think OP misunderstood what is being claimed.

Basically, there was a result that there are infinitely many prime pairs p,q such that the gap q-p is bounded by a small number. They successively improved the upper bound on that gap (which at one point stood at 5414 -- See [url]http://michaelnielsen.org/polymath1/index.php?title=Timeline_of_prime_gap_bounds[/url])

danaj 2018-01-01 11:39

I don't think we need to add any more on the recent topic. Based on George's other posts today, I think it was an early New Year's celebration that included random posts to lots of threads.

On topic, 10 of the top 14 merits were found in 2017, including the top 5. Before I moved resources over to the PGS exhaustive search, I'd put a fair amount into smaller P1s, leading to a lot more large-merit finds. Gapcoin may do even more in 2018, given the popularity of cryptocoins these days.

George M 2018-01-01 13:05

[QUOTE=VBCurtis;475733]We don't pay much attention to false claims, so you're right that we don't know about this equation nor his false claim.

For your own education, find the next prime after this number:
293703234068022590158723766104419463425709075574811762098588798217895728858676728143227

Hint: the next prime is more than 5414 greater than this number, which disproves the hooey you cite.[/QUOTE]

But... but... GOD DAMMIT. Let’s just keep the upper bound of the gap at 70,000,000 where it originally was at..

George M 2018-01-01 14:02

Prime Gap Hystory
 
On 13 May 2013, an upper bound of prime gaps was proven to be 63,374,611 (rounding to 70 million). This was done by Yitang (Tom) Zhang.

Then Tim Trudgian brought it down to 59,874,594 with Scott Morrison bringing it further down to 59,470,640 around late May. At 31 May however, it was brought down to 42,342,946.

Then a mathematician called Terence Tao who learnt algebra at aged 3, completed his maths degree at aged 16, got a maths PhD and won a Fields Medal in 2006, brought down the bound to 42,342,924. Terence Tao is known as the “hyper-genius” at maths with an IQ of 220 (world’s highest).

He and another Fields Medalist, Tim Gower, then started an open project as part of Polymath where mathematicians could join together and collaborate to bring this bound down. As of 20 July 2013, the upper bound was brought down to 5414.

Doesn’t sound hooey to me, but if you say so...

10metreh 2018-01-01 14:11

[QUOTE=George M;475830]On 13 May 2013, an upper bound of prime gaps was proven to be 63,374,611 (rounding to 70 million). This was done by Yitang (Tom) Zhang.

Then Tim Trudgian brought it down to 59,874,594 with Scott Morrison bringing it further down to 59,470,640 around late May. At 31 May however, it was brought down to 42,342,946.

Then a mathematician called Terence Tao who learnt algebra at aged 3, completed his maths degree at aged 16, got a maths PhD and won a Fields Medal in 2006, brought down the bound to 42,342,924. Terence Tao is known as the “hyper-genius” at maths with an IQ of 220 (world’s highest).

He and another Fields Medalist, Tim Gower, then started an open project as part of Polymath where mathematicians could join together and collaborate to bring this bound down. As of 20 July 2013, the upper bound was brought down to 5414.

Doesn’t sound hooey to me, but if you say so...[/QUOTE]

These are not upper bounds on gaps. Zhang proved that there are infinitely many prime gaps smaller than 70,000,000. This does NOT mean that all gaps are smaller than 70,000,000.

In fact arbitrarily large gaps exist: n!+m is divisible by m for m ≤ n, so there are n-1 consecutive composite numbers from n!+2 to n!+n. This gives a gap of size at least n.
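The factorial construction above can be checked directly in a few lines:

```python
import math

def composite_run(n):
    """Return the n-1 consecutive composites n!+2 .. n!+n with witness divisors."""
    f = math.factorial(n)
    # m divides n! for 2 <= m <= n, so m also divides n! + m.
    return [(f + m, m) for m in range(2, n + 1)]

run = composite_run(10)
for value, divisor in run:
    assert value % divisor == 0  # each entry is visibly composite
print(len(run))  # n - 1 = 9 consecutive composites
```

Note this only gives a lower bound on the gap containing n!+2: the actual surrounding primes can be much farther apart.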

George M 2018-01-02 07:19

[QUOTE=10metreh;475831]These are not upper bounds on gaps. Zhang proved that there are infinitely many prime gaps smaller than 70,000,000. This does NOT mean that all gaps are smaller than 70,000,000.

In fact arbitrarily large gaps exist: n!+m is divisible by m for m ≤ n, so there are n-1 consecutive composite numbers from n!+2 to n!+n. This gives a gap of size at least n.[/QUOTE]

Oh. And I misspelt “history” btw... but anyway, thanks for that clarification.

mart_r 2018-01-03 19:35

2 Attachment(s)
The new Gapcoin discovery is a marvel. It's reminiscent of the Nyman gap of 1132.
To top it off: if the left-hand bounding prime were composite, the gap would expand to one of merit 46.71 - an even more mind-blowing result.

And all that without the benefit of a large primorial. I've attached a graph showing that the count of numbers in the primeless interval that are coprime to about 200# (the order of magnitude of the primes themselves) is even a bit above average. The graph also shows that Gapcoin indeed uses "random" numbers, that is to say, without using primorials to cancel out a lot of small factors. (I've just come up with the term "coprime profile" for it - catchy/appropriate?) For comparison, the second graph shows the same for a gap that utilizes a primorial.
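A rough version of that "coprime profile" tally can be sketched as follows; the primes up to 200 stand in for "about 200#", and the endpoint is the Gapcoin record quoted earlier in the thread.

```python
from math import isqrt

# Count integers strictly inside the Gapcoin gap that have no prime factor
# <= 200 (i.e. are coprime to 200#). Every one of them is composite, yet
# survives the small-prime sieve -- the point mart_r's graph illustrates.
p1 = 293703234068022590158723766104419463425709075574811762098588798217895728858676728143227
gap = 8350

limit = 200
flags = [True] * (limit + 1)
flags[0] = flags[1] = False
for i in range(2, isqrt(limit) + 1):
    if flags[i]:
        for j in range(i * i, limit + 1, i):
            flags[j] = False
small_primes = [i for i in range(2, limit + 1) if flags[i]]  # 46 primes

survivors = sum(1 for k in range(1, gap) if all((p1 + k) % q for q in small_primes))
print(survivors)
```

By Mertens' theorem roughly a 1/ln(200) * e^(-gamma) fraction (about 10%) of the interval should survive; "a bit above average" here means somewhat more than that.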

CRGreathouse 2018-01-03 19:57

[QUOTE=10metreh;475831]These are not upper bounds on gaps. Zhang proved that there are infinitely many prime gaps smaller than 70,000,000. This does NOT mean that all gaps are smaller than 70,000,000.

In fact arbitrarily large gaps exist: n!+m is divisible by m for m ≤ n, so there are n-1 consecutive composite numbers from n!+2 to n!+n. This gives a gap of size at least n.[/QUOTE]

Right. To drive the point home: asymptotically more than 99% of primes are followed by gaps of length more than 70 million.

rudy235 2018-01-04 01:28

[QUOTE=CRGreathouse;476237]Right. To drive the point home: asymptotically more than 99% of primes are followed by gaps of length more than 70 million.[/QUOTE]

I would say "[I]almost all[/I]" (i.e all the elements except a countable set) rather than "more than 99%" :smile:

CRGreathouse 2018-01-04 02:05

[QUOTE=rudy235;476292]I would say "[I]almost all[/I]" (i.e all the elements except a countable set) rather than "more than 99%" :smile:[/QUOTE]

I would say that too (or else "measure 1") if I were talking with other mathematicians, but for general audiences I usually prefer the weaker version -- what it lacks in precision it makes up in understandability.

rudy235 2018-01-04 02:19

[QUOTE=CRGreathouse;476296]I would say that too (or else "measure 1") if I were talking with other mathematicians, but for general audiences I usually prefer the weaker version -- what it lacks in precision it makes up in understandability.[/QUOTE]

I've never been called a mathematician before, but thank you!:smile:

VBCurtis 2018-01-04 18:22

[QUOTE=rudy235;476292]I would say "[I]almost all[/I]" (i.e all the elements except a countable set) rather than "more than 99%" :smile:[/QUOTE]

Isn't this even weaker, since the set of all primes is a countable set? The 99% could go the other way, and "all but a countable set" is still true.

CRGreathouse 2018-01-04 19:11

[QUOTE=VBCurtis;476400]Isn't this even weaker, since the set of all primes is a countable set? The 99% could go the other way, and "all but a countable set" is still true.[/QUOTE]

I think "all but a set of density 0" was intended.

rudy235 2018-01-04 19:24

What I meant to say was that it is more likely to be approximated by 99.9999999% than by 99.9%.

robert44444uk 2018-06-21 09:59

I've just sent in to Dr Nicely details of the 4th largest gap with merit >10:

3870360 C?C RobSmith 2018 12.41 135376 x*y#/46410-z

Just outside the medals.

robert44444uk 2018-06-25 18:03

A very nice find - the highest merit by far at this level

512226 29.71746 x*y#/46410-z

Dr Nicely has just posted some new million plus gaps for me including the 4th highest

3870360 C?P RobSmith 2018 12.4164 135376 2267*312583#/46410 - 1157722

rudy235 2018-06-26 16:19

BERTIL NYMAN
 
I don't know if this is the right place to post this but, on the other hand, I don't think I should open a new thread for this.

In an email I got from Dr. Nicely I was made aware that Bertil Nyman is no longer with us. I have no additional information in this regard.

If you look at Dr. Nicely's prime gap pages you can see how prolific his presence was there:
112 CFCs and 4 C?Cs in the first 500 gaps (1-1998). These 4 C?Cs will almost certainly become CFCs when the search reaches 2^64.

As to maximal gaps, he had 3 and is bound to get two more, completing 5.

May he rest in peace.

robert44444uk 2018-06-27 09:41

That's sad, if it is the case. Perhaps here would be a good place:

[url]http://www.mersenneforum.org/showthread.php?t=16103[/url]

Thomas11 2018-06-28 13:10

Seems that he already passed away in 2016, aged 75:
[URL="http://www.unt.se/familj/minnesord/bertil-nyman-4103825.aspx"]http://www.unt.se/familj/minnesord/bertil-nyman-4103825.aspx[/URL]

The newspaper article mentions the following:
[QUOTE]Another great hobby was finding large prime gaps, using computers and his own programs: sequences of consecutive integers none of which is prime. The prime number 1693182318746371 is followed by 1132 numbers that are not prime. This is the first instance of a so-called kilo gap, i.e. at least 1000 in a row without a prime, and was found in 1999 by Bertil.[/QUOTE]

robert44444uk 2018-07-03 10:31

[QUOTE=robert44444uk;490532]A very nice find - the highest merit by far at this level

512226 29.71746 x*y#/46410-z

[/QUOTE]

And here it is:

512226 C?P RobSmith 2018 29.72 7486 7613*17393#/46410 - 253388

Bobby Jacobs 2018-10-23 18:20

It has been a while since anybody has posted here, but there is some big news. We now know that the gaps of 1530 and 1550 are maximal prime gaps. That is very great!

robert44444uk 2019-05-08 08:38

I'm happy to announce the discovery of a huge gap of 203890 with merit 35.64 following the prime 140207*5813#/46410-86644. :smile:

This is by far the largest gap with merit above 35, beating Michiel Jansen's 2012 find of length 66520, which had merit 35.42. As far as I can see, only 12 gaps with greater merit have ever been discovered, the largest of which is a gap of 26892 found by danaj in 2016. The greatest merit ever found is 41.93, by Gapcoin in 2017.

ATH 2019-05-08 10:50

Congratulations!!!

Very high merit for such a large gap.

MJansen 2019-05-11 07:15

Congratulations!


Michiel Jansen

Bobby Jacobs 2019-06-07 16:21

[QUOTE=ATH;516128]Congratulations!!!

Very high merit for such a large gap.[/QUOTE]

Big gaps usually have large merit.

rudy235 2019-07-07 04:33

[QUOTE=Bobby Jacobs;518806]Big gaps usually have large merit.[/QUOTE]

Not "usually". On the contrary.

A few of them may have unusually large merits, but that is just the tail end of the distribution. The most frequent merit is around 1 or below.

Another (totally different) thing is that we do not usually make lists of gaps with merit 1; instead we try to find large gaps with merit greater than 10. Case in point: [URL="http://primerecords.dk/primegaps/gaps20.htm#top20meritabove10"]http://primerecords.dk/primegaps/gaps20.htm#top20meritabove10[/URL]
But I assure you there are many more gaps with merit less than 10 than gaps with merit greater than 10.

Extremely small merits are also infrequent; for instance, a gigantic twin prime like 33218925 * 2[SUP]169690[/SUP] +/- 1 has a merit of 5E-6.
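That small merits dominate can be checked empirically on small primes; a sketch, sieving up to one million and tallying the merit of every gap:

```python
import math

# Merits (gap / ln p) of every gap between consecutive primes below one
# million. A quick empirical look at the distribution: small merits dominate,
# and merits above 10 are (at this size) vanishingly rare or absent.
N = 1_000_000
sieve = bytearray([1]) * (N + 1)
sieve[0] = sieve[1] = 0
for i in range(2, math.isqrt(N) + 1):
    if sieve[i]:
        sieve[i * i :: i] = bytearray(len(range(i * i, N + 1, i)))
primes = [i for i in range(2, N + 1) if sieve[i]]

merits = [(q - p) / math.log(p) for p, q in zip(primes, primes[1:])]
low = sum(m <= 2 for m in merits) / len(merits)
high = sum(m > 10 for m in merits) / len(merits)
print(f"share with merit <= 2: {low:.1%}; share with merit > 10: {high:.4%}")
```

Under the usual Cramér-style heuristic the merit is roughly exponentially distributed with mean 1, which is consistent with what this tally shows.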

robert44444uk 2019-07-08 09:32

I have another monster gap to report - this one will get posted at Dr. Nicely's site soon. It is one of the series - "no larger gaps discovered to date with larger merit"

26.67 614640 ???*????#/46410-??????

axn 2019-07-08 10:20

[QUOTE=robert44444uk;520993]no larger gaps discovered to date with larger merit[/QUOTE]
These are the others (> 200000) in that series currently
[CODE]203890 C?P RobSmith 2019 35.64 2485 140207*5813#/46410 - 86644
309030 C?P Rosnthal 2015 31.78 4223 1111111111111111111*9787#/(7#*641) - 130308
512226 C?P RobSmith 2018 29.72 7486 7613*17393#/46410 - 253388
556982 C?P RobSmith 2019 26.84 9014 2089*20963#/46410 - 267908
1113106 C?C MJPC&JKA 2013 25.9045 18662 587*43103#/2310 - 455704
1286500 C?P Rosnthal 2016 25.8571 21608 1111111111111111111*49999#/(510510*499) - 525318
2332764 C?P MrtnRaab 2017 25.7927 39279 90823#/510510 - 1065962
4680156 C?P MrtnRaab 2016 20.3767 99750 230077#/2229464046810 - 3131794
6582144 C?P MrtnRaab 2017 13.1829 216841 499973#/30030 - 4509212
[/CODE]

