2020-12-16, 00:30  #34 
Just call me Henry
"David"
Sep 2007
Cambridge (GMT/BST)
16BE_{16} Posts 
I will have a go at this over the Christmas break (which starts Friday for me). I have some git experience, so I should be able to work that out. Part of my issue will be my lack of Python knowledge, although long term it is probably a language I should learn.

2020-12-16, 21:13  #35 
May 2018
2^{4}×13 Posts 
Can we use this code to finally find maximal prime gaps greater than 2^{64}?

2020-12-16, 23:10  #36 
Just call me Henry
"David"
Sep 2007
Cambridge (GMT/BST)
2×41×71 Posts 

2020-12-30, 20:55  #37 
Just call me Henry
"David"
Sep 2007
Cambridge (GMT/BST)
2·41·71 Posts 
I am having a few issues with predictions not matching reality.
Why is sum(prob_minmerit) being overestimated so much in this case? I am searching m * 9973#/208110 + x, and min_merit has been set quite high at 20. Does the min_merit I used for the stats step make any difference? Code:
sum(prob_minmerit): 114.86, 73.7/day   found: 3
sum(prob_record):    58.263, 37.4/day  found: 58 
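A quick way to judge whether a found-count like this is noise or a real model error: if the per-m events are roughly independent (an assumption on my part, not something the tools assert), the predicted sum of probabilities behaves like a Poisson mean, and you can compute how likely the observed count is. A sketch using the numbers from the post above:

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), summed directly (fine for small k)."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

# Predicted vs. found, from the post above.
print(poisson_cdf(3, 114.86))    # astronomically small -> a genuine mismatch
print(poisson_cdf(58, 58.263))   # ~0.5 -> prediction and reality agree
```

Seeing 3 when 114.86 were expected is far outside any plausible fluctuation, while 58 against 58.263 is exactly what a correct model looks like.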
2020-12-31, 02:51  #38  
"Seth"
Apr 2019
2^{2}×3^{2}×7 Posts 
Quote:
I am glad to see prob_records closely matching predictions; I'm running into a lot of issues with it not working in my runs, for various real but frustrating-to-fix reasons. Sorry the graphs are broken: I made an optimization not to record the probabilities of small gaps when sieve_length is > 100,000 in gap_stats (see here). This makes gap_stats faster at the cost of breaking this graph. You can disable that code by changing line 1034 to `size_t j = 0`; this will be slower but will always record the probabilities. 

2020-12-31, 11:54  #39  
"Seth"
Apr 2019
374_{8} Posts 
Quote:
In the future, when using a very large SL, the graphs will still be truncated, but all the probability for the truncated values is still included, so they will be normalized correctly. See the attached photo. 

2021-01-03, 10:21  #40 
"Seth"
Apr 2019
252_{10} Posts 
Took me a couple of days to understand this: you are running with --one-side-skip (or maybe more precisely, you are running without --no-one-side-skip). This means that 99% of the time you skip finding the gap to next_prime (because the gap to prev_prime is small), which skews the observed gaps to be much larger. I updated the code so it changes the label, which hopefully makes this more apparent, and normalizes by the number of m's tested.
I'm not sure if there's something better I could do, but they should now roughly match, with large gaps being slightly over-represented (because we are finding more than expected). 
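The skew is easy to see in a toy model. Assume (for illustration only; this is not the project's actual gap distribution) that each one-sided gap is exponential, and that the next_prime side is only computed when the prev_prime side is already large:

```python
import random

random.seed(1)
MEAN = 1.0        # toy one-sided gap: exponential with mean 1
THRESHOLD = 2.0   # only look for next_prime when the prev side is this large

all_gaps, recorded = [], []
for _ in range(100_000):
    prev_gap = random.expovariate(1 / MEAN)
    next_gap = random.expovariate(1 / MEAN)
    all_gaps.append(prev_gap + next_gap)
    if prev_gap >= THRESHOLD:          # one-side-skip keeps only these
        recorded.append(prev_gap + next_gap)

print(sum(all_gaps) / len(all_gaps))   # ~2.0: the true mean gap
print(sum(recorded) / len(recorded))   # ~4.0: the recorded gaps skew large
```

The recorded sample conditions on one side being large, so its mean is roughly double the true mean here, which is the same shape of bias described above.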
2021-01-04, 11:56  #41 
"Seth"
Apr 2019
2^{2}×3^{2}×7 Posts 
Medium improvement
I optimized the handling of the medium-primes section of the code and got a 30-40% improvement! For long searches (m_inc > 1M) this is probably more than a 10% overall speedup, which I'm very excited about!
I also added short flags: `--save` instead of `--save-unknowns` and `-u` instead of `--unknown-filename`. There are a handful of other changes: better combined_sieve time estimation, better plotting (mentioned above), the largest record found with record_check, warnings if sieve_length is unreasonably sized, and a bunch of other things. I'd encourage everyone to `git pull` for the newest version. 
2021-01-23, 01:11  #42 
"Seth"
Apr 2019
374_{8} Posts 
I made a number of improvements over the last couple of weeks.
Most importantly, `combined_sieve` and `gap_stats` are now multithreaded! (you might need to `sudo apt install libomp-dev`) I test very long intervals (e.g. m_inc > 10 million) which generally take more than a day to finish, which left me juggling multiple combined_sieve / gap_stats / gap_test runs at the same time to keep all the threads on my computer active. Now you can sieve, stats, and test an interval with one command: `./misc/run.sh -t 4 -u 907_210_1_15000_s7554_l1000M.txt`. I've also slightly improved `misc/record_check.py` to include the largest and smallest record and the number of unique records. 
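For anyone who prefers to script the chain themselves, the one-command wrapper is roughly equivalent to running the three tools in sequence and stopping on the first failure. A hypothetical Python sketch (the flag spellings are taken from this thread and may not match the script exactly):

```python
import subprocess

def pipeline(unknown_fn, threads=4, dry_run=False):
    """Sieve -> stats -> test in sequence, stopping if any step fails."""
    steps = [
        ["./combined_sieve", "-t", str(threads), "--save", "-u", unknown_fn],
        ["./gap_stats", "-t", str(threads), "--save", "-u", unknown_fn],
        ["./gap_test.py", "-t", str(threads), "-u", unknown_fn],
    ]
    for cmd in steps:
        if dry_run:
            print(" ".join(cmd))            # just show what would run
        else:
            subprocess.run(cmd, check=True)  # raise (and stop) on failure
    return steps

pipeline("907_210_1_15000_s7554_l1000M.txt", dry_run=True)
```

`check=True` is what gives the "stop the chain" behavior: a non-zero exit from combined_sieve prevents gap_stats from running on incomplete data.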
2021-02-04, 12:17  #43  
Jun 2003
Oxford, UK
2^{3}×241 Posts 
Quote:


2021-02-05, 10:51  #44  
"Seth"
Apr 2019
252_{10} Posts 
Quote:
Code:
time ./combined_sieve -t $THREADS --save -u "$UNKNOWN_FN"
time ./gap_stats -t $THREADS --save -u "$UNKNOWN_FN"
time ./gap_test.py -t $THREADS -u "$UNKNOWN_FN"
I can add support for `--prp-top-percent` and `--min-merit` this week. It doesn't have any additional resume behavior (`combined_sieve` has none, but if complete it wouldn't rerun; `gap_stats` is generally quite fast and doesn't rerun if already finished; `gap_test.py` caches all its progress and resumes). 
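For anyone curious what "caches all its progress and resumes" can look like, here is a minimal checkpoint pattern (a hypothetical sketch; the filename and JSON structure are illustrative, not gap_test.py's actual format):

```python
import json
import os

CHECKPOINT = "progress.json"   # hypothetical cache file

def load_done():
    """Return the set of m values already tested in earlier runs."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return set(json.load(f))
    return set()

def run(m_values):
    done = load_done()
    for m in m_values:
        if m in done:
            continue                         # resume: skip finished work
        # ... test the gap at this m here ...
        done.add(m)
        with open(CHECKPOINT, "w") as f:
            json.dump(sorted(done), f)       # checkpoint after each m
    return done

print(sorted(run(range(5))))   # first run tests everything: [0, 1, 2, 3, 4]
print(sorted(run(range(5))))   # second run finds nothing left to do
```

Checkpointing after every m means an interrupted run loses at most one unit of work, which is why the Python step can be killed and restarted freely.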
