mersenneforum.org Pari hash table lookup

 2009-02-21, 01:52 #1 Joshua2     Sep 2004 13·41 Posts Pari hash table lookup I was wondering how to recompute a table using pari. I want to create a table of a^x for 'a' between 2 and 100,000 and x between 2 and 1000, for example. Then in my program I can check to see if a result is in the table. I discovered how to hard code an array into pari and how to use indexes, but not how to have an array created at runtime.
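[Editor's note: a minimal gp sketch of building such a table at runtime, with tiny illustrative ranges in place of the poster's 100,000 × 1000:]

```
\\ build a 2-D table T[i][j] = (i+1)^(j+1) at runtime,
\\ i.e. a = 2..10 and x = 2..5 (tiny ranges for illustration)
T = vector(9, i, vector(4, j, (i+1)^(j+1)));
T[2][2]  \\ entry for a = 3, x = 3, i.e. 3^3 = 27
```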
2009-02-22, 01:58   #2
m_f_h

Feb 2007

660₈ Posts

Quote:
 Originally Posted by Joshua2 I was wondering how to recompute a table using pari. I want to create a table of a^x for 'a' between 2 and 100,000 and x between 2 and 1000, for example. Then in my program I can check to see if a result is in the table. I discovered how to hard code an array into pari and how to use indexes, but not how to have an array created at runtime.
These are different things.

First, creating an array at runtime is not any different from "hard coding" it. And anyway, since you know the limits for a and x, you can create it once at the beginning.

But are you sure that you want to, and have enough memory to, store all 100,000 × 1000 values? (The few duplicate values do not significantly reduce the amount of space needed.)
Recall that an array of arbitrary precision numbers will not just require 4 or 8 bytes for each of the integers. (a^x for a=10^5 and x=10^3 has 5000 decimal digits...)

"recomputing a table" (which is probably not a good idea here) can of course be done in the straightforward way, e.g. [to add an element to the "table"], T=concat(T,x) or S=setunion(S,Set(x)).

You can also use lists, which will be much more efficient (but not enough for your problem, I think). (cf listcreate, listinsert)
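[Editor's note: a minimal sketch of the list approach; listcreate is the old interface, recent PARI/GP versions use List() and listput:]

```
\\ lists grow in place, avoiding the full copy that concat() makes
L = List();
listput(L, 2^10);
listput(L, 3^7);
#L   \\ number of stored entries (here 2)
```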

For the problem at hand, I suggest implementing a more intelligent approach.

In other situations, you might want to "emulate" a memoization ("remember") table. You can do so using a set (defined as global variable) to which you incrementally add vectors with the arguments and the computed result (i.e. here [a,x,a^x]). You can use the setsearch() function to determine where the result for given arguments is (or would be) stored.
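[Editor's note: a sketch of such a remember table. It uses Map(), available in recent PARI/GP; in 2009 one would emulate it with a sorted set of [a, x, a^x] vectors and setsearch(), as described above:]

```
M = Map();                      \\ global memo: key [a, x] -> a^x
pw(a, x) = {
  my(v);
  if (mapisdefined(M, [a, x], &v), return(v));  \\ cached result
  v = a^x;
  mapput(M, [a, x], v);         \\ remember it for next time
  v;
}
pw(3, 5)   \\ computed: 243
pw(3, 5)   \\ second call is answered from M
```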
But again, given the way sets are implemented, this is not very efficient if you intend to compute many values.

 2009-02-22, 23:59 #3 Joshua2     Sep 2004 13×41 Posts I am planning on storing a^x modulo a large prime such as 2^32-5, so I will not need arbitrary precision, and I will create a second table modulo another large prime. I have a 64-bit processor with 4 GB of memory, and if I need more I will upgrade to 8 GB. If I can only do 50,000 × 1000 that would be fine as well.

I will not need to add anything to this table, but I will be doing a large number of lookups into it. 95+% of my processing time will be spent doing lookups, so it must be as efficient as possible. I do not understand your "more intelligent approach."

I did think of splitting the a^x table into several runs: one table of 1–20,000 × 1000, a second of 20,000–40,000 × 1000, and so on. This should fix the memory constraint. I believe it may also speed up the search, since I will be looking in smaller tables, and a hit is more likely to be genuine because there are fewer numbers in each table (a match modulo a prime, even modulo two primes, is not a guarantee). I tried numbers up to 200 × 100 with my modulo approach and there were no errors compared to arbitrary precision.
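[Editor's note: the modular scheme might look like this in gp. The prime 2^32-5 is the poster's; the code and the small test ranges are a sketch:]

```
p = 2^32 - 5;
\\ table of c^z mod p for c = 2..200, z = 2..100 (small test ranges)
T = Set(concat(vector(199, c, vector(99, z, lift(Mod(c+1, p)^(z+1))))));
\\ lookup: does a^x + b^y match some c^z mod p?  (here 2^10 + 3^5)
\\ a hit is only a probable match; confirm with a second prime
\\ or with exact arithmetic
setsearch(T, lift(Mod(2, p)^10 + Mod(3, p)^5))
```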
2009-02-23, 10:14   #4
ldesnogu

Jan 2008
France

2·5²·11 Posts

Quote:
 Originally Posted by Joshua2 I am planning on storing a^x modulo a large prime such as 2^32-5, so I will not need arbitrary precision, and I will create a second table modulo another large prime. I have a 64-bit processor with 4 GB of memory, and if I need more I will upgrade to 8 GB. If I can only do 50,000 × 1000 that would be fine as well. I will not need to add anything to this table, but I will be doing a large number of lookups into it. 95+% of my processing time will be spent doing lookups, so it must be as efficient as possible. I do not understand your "more intelligent approach."
Sometimes it's more efficient to (partly or fully) recompute things than to store them. If your tables get too large, you'll end up waiting hundreds of cycles for RAM to return the data, while recomputation could be an order of magnitude faster.
Of course that depends on the complexity of (re)computing your data.

 2009-02-23, 11:43 #5 fivemack (loop (#_fork))     Feb 2006 Cambridge, England 6452₁₀ Posts Umm, why not use the pari ispower() function rather than a large hash table for recognising whether numbers are perfect powers? I think it uses quite clever algorithms (Bernstein has invented some ridiculously fancy ones), and it even returns the power ...
Code:
? ispower(17^51)
%1 = 51
? ispower(16^64)
%2 = 256
? ispower(17^51+3)
%3 = 0
 2009-02-23, 22:59 #6 Joshua2     Sep 2004 13·41 Posts I am actually doing a^x + b^y = c^z. (I was calling c^z a^x in earlier postings.) Arbitrary precision is slow, so I work modulo a smaller number that maximizes the speed of my 64-bit processor. I don't believe I can call ispower(a^x+b^y, mod 2^32) or code something equivalent myself. I think hash tables and this method would be faster. I wrote something like that yesterday in C# and it takes ~40 s to run for all a,b,c < 200, x,y,z < 100. That is about the time it takes pari to do a,b,c < 100, x,y < 10. That method is more efficient in that it tries all z that would work.
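[Editor's note: a direct gp version of such a search, using ispower to recover (c, z) instead of a lookup table. This is a hedged sketch; the function name and the bounds A, X are illustrative:]

```
\\ print all solutions of a^x + b^y = c^z with 2 <= a <= b <= A
\\ and exponents in 2..X; ispower returns the largest z with s = c^z
search(A, X) = {
  for (a = 2, A, for (b = a, A,
    for (x = 2, X, for (y = 2, X,
      my(s = a^x + b^y, c, z = ispower(s, , &c));
      if (z, print([a, x, b, y, c, z]))))));
}
search(10, 5)
```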

