2018-06-19, 17:32  #1 
Jun 2018
7_{8} Posts 
Is this a viable way to find prime numbers fast?
I've been working, as a hobby, on a method that can test for primes and generate them using only a fraction of the original number, and decided to write a document about it (as best I could; I'm not a mathematician or programmer, so for now it's a draft).
If you could kindly take a look and give some feedback, or contribute to its development, that would be awesome :) http://nbviewer.jupyter.org/github/P...er/index.ipynb Excuse the abundance of tables, but I considered that the sets of tables reinforce the explanation and the procedure. If you have questions, suggestions or comments, please go ahead :) Thanks in advance :) 
2018-06-19, 19:38  #2 
Aug 2006
1763_{16} Posts 
It looks like this method requires storing on the order of sqrt(x) numbers in the sequences[] array, which means that it will be very slow and fill memory for numbers around 22 digits (assuming ~100 gigabits of RAM), and won't work for numbers much larger. You seem to be doing some variant of trial division.
But checking a 100-digit prime should only take a few milliseconds (I don't know what is state-of-the-art). 
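For scale, a standard probabilistic test like Miller-Rabin (not the method in the linked notebook; just a hedged illustration of the "few milliseconds" claim) handles numbers of this size almost instantly in plain Python:

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test.
    False means definitely composite; True means almost certainly prime."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

# The 157-digit Mersenne prime 2^521 - 1 passes in well under a second:
print(is_probable_prime(2**521 - 1))  # True
```

The key is that each round costs only one modular exponentiation, so the work grows with the number of digits, not with anything like sqrt(x).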
2018-06-19, 20:11  #3 
Jun 2018
7_{10} Posts 
Thanks for your feedback :)
Sure, it is a form of trial division, but since in the worst case it is at most 1/3 of the original, I thought it would be easier to process. It still has to try all the odd numbers between 3 and the middle, but I observed that (maybe using some supporting sieve?) it would be possible to test only the numbers in the range that are actually prime; still, that would add complexity and might end up being slower... And yeah, I tested a set of 6-digit numbers and the tables quickly grew the RAM usage, so maybe loop through the values to be tested within the range without actually storing them, keeping only whether each one is true or false? Also, for generating, maybe have a set of arrays holding boolean values, where an entry is marked false whenever the remainder is 0 (this could also be done by looping, I guess). The set of arrays should be RAM-friendly, since it will be about middle * (middle / 2) bits. What do you think? 
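A minimal sketch of the "loop without storing" idea. Note this uses the standard trial-division bound of sqrt(n) rather than the middle of the number, and it is my own toy version, not the notebook's exact procedure:

```python
from math import isqrt

def is_prime_trial(n):
    """Trial division by 2 and the odd numbers up to sqrt(n).
    Nothing is stored: each candidate divisor is tested and discarded."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    for d in range(3, isqrt(n) + 1, 2):
        if n % d == 0:
            return False  # composite: discarded at the first divisor found
    return True

print([p for p in range(2, 30) if is_prime_trial(p)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Memory use here is constant regardless of n, which is exactly the advantage over keeping the whole table of sequences in RAM.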
2018-06-19, 21:57  #4 
Aug 2006
5,987 Posts 
Definitely looping through without storing would be faster. It still won't scale to big numbers but it will be doable for larger numbers.

2018-06-19, 23:34  #5 
Jun 2018
7 Posts 
Thanks, I'll think about how to implement that looping and see if I can skip some values, or think of something else to speed it up :) If you have any advice, please let me know :)
About programming, I have 2 questions. I'm hesitant about using gmpy2 for further development: I'm not sure if it is compatible with the MIT license, and whether I can use it to improve the speed; I heard it can match C speed when calculating with arbitrary-precision numbers. Do you know if it is safe to use in my project? And would it be a good idea to keep developing in Python, or to pick another language like C/C++ or Rust? Also, one of the things I'm unsure about is the "cost" of processor cycles. For example, I mentioned there, as one general rule, that if I increment the values from right to left, from where the rows converge until reaching the middle, I get the same result as the trial division made starting from the middle and dividing to the right. In summary, on one side there is a square root plus trial division, and on the other a division plus simple additions and subtractions, but more of them. Which could be more efficient? 
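Questions like "is one division cheaper than many additions?" can be settled empirically with Python's timeit module instead of guessing. This is my own toy comparison (the numbers and the ten-addition chain are arbitrary stand-ins, not the notebook's actual sequences):

```python
import timeit

n = 10**50 + 151   # an arbitrary 51-digit odd number
d = 10**25 + 7     # an arbitrary candidate divisor

# Cost of one trial-division step: a single big-integer modulo.
t_mod = timeit.timeit(lambda: n % d, number=100_000)

# Cost of replacing it with, say, ten additions/subtractions.
def add_chain():
    acc = d
    for _ in range(10):
        acc = acc + d - 1
    return acc

t_add = timeit.timeit(add_chain, number=100_000)

print(f"modulo: {t_mod:.3f}s  additions: {t_add:.3f}s")
```

On CPython, a single division on numbers of this size is usually cheaper than a loop of interpreted additions, because each interpreted operation carries fixed overhead; but the only reliable answer is to time your actual sequences on your actual machine.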
2018-06-20, 21:16  #6  
Aug 2006
5,987 Posts 
For the project at its current state there's no advantage to moving to a lower-level language like C or Rust. Right now, working in a high-level language like Python that lets you make big changes easily is appropriate, because you want to be able to change things around a lot rather than worrying about optimizing the code.


2018-06-20, 23:02  #7  
If I May
"Chris Halsall"
Sep 2002
Barbados
11,087 Posts 
Law is simply code, written by humans for humans. Stallman might not be easy to deal with, but he understands code.... 

2018-06-21, 00:07  #8 
Jun 2018
7 Posts 
Oh OK, I'll keep going with Python :) It sure is more versatile and simpler to write, and the fact that Jupyter shows the output right there is a big advantage XD
I won't hesitate to use gmpy2 then; hopefully no one will complain, and I guess if I really, really need to I'll make a change to the license. Sure, in the case of a prime the whole sequence has to run, but if a non-prime can be discarded earlier, that could improve the overall run time when testing a range. Maybe I'll make it accept parameters to let you pick which method to use, and measure both times :) I'm still going to try to find a shorter way somehow. Thanks again for your feedback :) Seeing law as code for humans is certainly accurate, but unfortunately, IMHO, it isn't simple enough for most humans to find that kind of code clear; there wouldn't be law schools otherwise :o 
2018-06-21, 00:14  #9 
If I May
"Chris Halsall"
Sep 2002
Barbados
11,087 Posts 

2018-06-21, 01:53  #10 
Jun 2018
7 Posts 
I can understand why most people might be afraid; I know some very mean lawyers :o But it's better to have one handy if you're doubtful about legal stuff; one never knows when a question asked in time could save you from trouble, especially before signing things :o

2018-06-21, 12:14  #11 
Aug 2006
5987_{10} Posts 
