5000 < k < 6000
I plan on beginning a massive search for this block for n<1M with a novel approach.
I know that some individual k have been searched fairly deeply in this range, and I have not yet decided whether to use this search as a doublecheck or to remove those previously searched ranges. (Leaning more toward a DC.) I think it is critical that we shift away from searching individual k and more toward RPS/PG/NPLB-style searches of large, easily delineated blocks. My approach will be quite different from how these drives are normally organized, but it should yield some interesting and efficient results. Please let me know if there are any major drives that overlap this range.
Just out of curiosity, how does your approach differ?

Hi Justin,
Please check here which k's are already reserved: [url]http://www.rieselprime.de/Data/04000.htm[/url]. If you plan to search the entire range (500 k's), there's no harm in including the already-searched k's in the sieve as a doublecheck. For a range up to n=1M, the sieving alone would take months for a single person with an ordinary number of cores available. As cipher said, tell us something more about your strategy for this hard work.
I am currently in discussion with Justin on the searching of this range. I'll post here what is decided.
Gary 
Update
Just posting an update on this project. I have been sieving for some time; the sieve is at around 3T and climbing. I talked with lavalamp a bit to develop this "new approach" to prime searching.
What it is not is a speedup or improvement over conventional searching; technically it should take just as long. What it is, is a different approach that should generate more statistics on the results and perhaps an easier way to "predict" what lies ahead. The LLRing is done up to n=75,000 and the project has begun. The idea is to test the candidates in RANDOM order within a large range. That way the tests are roughly evenly dispersed, so you can predict (based on how many you have tested already) how many will be prime and how many tests it will take to find a new prime. Instead of finding a lot of primes early and then getting sparse later, they should come at a constant rate throughout the project. More details and stats are available at [url]www.bodang.com/sieve/table.php[/url]. If anyone has any thoughts or ideas, please contact me here or (better) at [email]vanklein@gmail.com[/email]
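The prediction idea can be sketched numerically. If candidates are drawn in uniformly random order from the sieve file, the primes found so far are a uniform sample of all primes in the range, so the observed hit rate extrapolates linearly to the rest. A minimal sketch in Python, plugging in the figures quoted later in the thread (581,491 tests, 661 primes, and the ~18.1M candidate estimate, all treated here as illustrative assumptions):

```python
def project_totals(tested, primes_found, total_candidates):
    """Extrapolate prime counts when candidates are tested in
    uniformly random order over the whole sieve file.

    Because each test is a uniform draw, primes arrive at a roughly
    constant rate per test, so the observed rate scales linearly to
    the untested remainder."""
    fraction_done = tested / total_candidates
    est_total_primes = primes_found / fraction_done
    est_remaining = est_total_primes - primes_found
    tests_per_prime = tested / primes_found
    return est_total_primes, est_remaining, tests_per_prime

# Figures mentioned later in the thread, used here only to illustrate:
# 581,491 tests, 661 primes, ~18.1M candidates left in the sieve file.
total, remaining, rate = project_totals(581_491, 661, 18_100_000)
print(f"~{total:.0f} primes projected, ~{rate:.0f} tests per prime")
```

The caveat, raised later in the thread, is that the extrapolation only holds while the per-test hit rate is constant; testing the n<75,000 block exhaustively first skews the sample toward the prime-rich low end.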
Please give some more specific details:
- I'm missing the test/sieve range in [i]n[/i]! Is it still n=1M? So there are about 18.1M candidates left to test?
- I'm just LLRnetting and I got an n=468010! When is the stats page updated?
- You give 581491 tested n with 661 primes found (for n>50k, if I'm right). Which range was tested here? Most k seem tested up to n=75k (as you mentioned above), so why is k=5295 tested higher (up to n=130k)? Are others tested higher?
- The average of 878 tests to find a prime therefore mainly reflects n up to 75k; in higher ranges it takes many more tests per prime (about 2000 to 4000, I think)!
PS: the title of the page is "12121search"!
[QUOTE=kar_bon;183328]please make some more specific details: [...][/QUOTE]
Haha, I just noticed that 12121 thing and updated it. Oops. Yes, the range is k = 5000-6000 with n up to 1M. The stats page is updated every time I load the results from prpnet (I do so daily), but I am working on making it auto-update; with LLRnet and the website hosted on different servers, that's a bit tricky. You are right, those numbers are for ALL n tested greater than n=50,000, and right now the test is complete up to n=75,000 with some random candidates after that. The statistics about prime frequency are definitely skewed, because the majority of the results are n=50,000-75,000 and there are many more primes in that range than in, say, 900,000-1,000,000. The plan is to switch the reported information to n > 75,000 once there are enough completed results above that point. At that point (I estimate 1 week or so) we will have a real idea of what the prime frequency should be. (Yes, I agree, it will be close to 2000-4000 tests per prime, probably higher, but that also depends on sieve depth.) I still have not decided how much more I will sieve, but at 3T there definitely is more to go; sieving to at least 5T can only improve the prime frequency. The sieve effort is on hold for now, but I will get back to it soon. (Any volunteers to help?) Some k are tested higher because I personally LLR tested all k up to n=60,000 and then used LLRnet for n=60,000-75,000.
On some k I was too busy or forgot to stop LLR, but these are fairly few, they do not go much past n=120,000, and they are all k with few candidates, so they should not affect the stats much. User stats coming soon.
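For a rough sense of how much the tests-per-prime figure should rise with n, a standard heuristic (not stated in this thread, an assumption on my part) is that a sieve survivor k*2^n-1 is prime with probability roughly proportional to 1/n, so tests per prime grow roughly linearly in n at a fixed sieve depth. A hedged sketch comparing the two bands mentioned above:

```python
import math

def relative_tests_per_prime(n_lo, n_hi):
    """Expected tests per prime over the band [n_lo, n_hi], up to an
    overall constant, assuming a sieve survivor is prime with
    probability c/n.  The band average of 1/n over [n_lo, n_hi] is
    log(n_hi/n_lo) / (n_hi - n_lo); tests per prime is its inverse."""
    mean_inv_n = math.log(n_hi / n_lo) / (n_hi - n_lo)
    return 1.0 / mean_inv_n

low_band = relative_tests_per_prime(50_000, 75_000)
high_band = relative_tests_per_prime(900_000, 1_000_000)
ratio = high_band / low_band  # roughly 15x more tests per prime
```

By this naive scaling the 900k-1M band needs roughly 15x more tests per prime than the 50k-75k band, which supports the point that the early average of 878 badly understates what lies ahead; deeper sieving at high n raises each survivor's primality chance and pulls the figure back down somewhat.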
Another point to check and consider is the sieve depth!
I've done some tests via LLRnet, and there you get a candidate with a random n-value; I had a pair with n=970k. As you mentioned, you sieved to only 3T or a bit higher, and that's way too low for an n-range up to n=1M. Perhaps Gary can tell you a better value, but I think a sieve depth of 50T to 100T would be the minimum before testing n-values over 900k! It's a new idea to present test candidates at random, but there are aspects against doing so:
1. the sieve depth has to be quite high from the start
2. verifying the whole range (k=5000-6000 and n=75k-1000k), i.e. checking that all candidates were tested, is tremendous work
3. presenting the results (which k got which prime n's) in a table like yours would stay open until all tests are done, and this could take years!
What about sieving and testing smaller ranges in n (say 100k ranges)? While one range is tested via LLRnet, the sieving for the next range could be pushed higher.
PS: the HTML version of the page contains a link to itself!? Should this be the link to the PHP version? The HTML version shows the completed range better, as [x]. Another hint: it would be easier to split the column 'n tested (prime)' into two, so one could better summarize both columns (for example by copying the page and pasting it into Excel)!
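The "sieve deeper before testing large n" advice follows from the usual break-even rule of thumb: keep sieving while an hour of sieving removes more candidates than an hour of LLR could test. A minimal sketch with entirely hypothetical rates (neither figure is a measurement from this project):

```python
def worth_sieving(removals_per_hour, llr_seconds_per_test):
    """Break-even rule of thumb: sieving is still profitable while it
    removes candidates faster than LLR could dispatch them."""
    llr_tests_per_hour = 3600.0 / llr_seconds_per_test
    return removals_per_hour > llr_tests_per_hour

# Hypothetical figures: suppose near p = 3T the sieve still strips
# ~40 candidates/hour, while an n ~ 900k LLR test takes ~900 s
# (4 tests/hour) -- sieving deeper clearly still wins.
print(worth_sieving(40.0, 900.0))
# At only 2 removals/hour it is time to stop sieving and LLR.
print(worth_sieving(2.0, 900.0))
```

A common refinement weights each removal by the LLR time at its n, since a factor found at n=950k saves far more work than one at n=80k.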
[QUOTE=kar_bon;183346]another point to check and consider is the sieve depth! [...][/QUOTE]
You are right about the sieving, but if we break it off into chunks, then we are just doing the test the "old way". 100T may be a tad high for a range of this size, but it definitely has quite a bit more sieving to go, which will be done concurrently. I don't think the numbers are big enough (n = 999,999 would take <15 min to test on a modern core) to justify putting everything on hold for the months it would take to sieve deep enough. The HTML page shouldn't be accessed; it is old. Did I link to it somewhere? Sorry. Originally the PHP page was very slow to load due to sloppy database calls, so there was a link to the static HTML page for speed, which was just a copy (link included) of the PHP page.
Now that the PHP page is fast enough, that isn't necessary and it will be deleted. I understand your point about using Excel, but the page is dynamically generated, so I can split the columns on the fly pretty easily. I don't really think a column just for the number of primes over 50,000 is necessary; in fact I wasn't going to list it at all, because there already is a column listing the primes, and if one really wanted to know how many there were, they could count. But if enough people deem it necessary, I suppose #p>50,000 could get its own column. The primes are a subset of the results, so for now I think it is OK for them to stay that way.
As for the comment about the [x] completed-range marker being better, I agree, but considering the range is searched randomly, the likelihood of that marker ever advancing, or having any significance for that matter, is minimal.
Oh, and how did you get the number of tests remaining? Did you really add up all the columns? :smile:
[QUOTE=justinsane;183359]Oh, and how did you get the number of tests remaining? did you really add up all the columns? :smile:[/QUOTE]
That's what I did with Excel immediately! Perhaps you could display this value at the top of the page too, so I don't have to count again :grin: