Thread: SIQS on GPU
2016-10-21, 13:37   #2
bsquared
"Ben"
Feb 2007

Quote:
Originally Posted by cgy606
Hi,

I came across an interesting thesis written by somebody at the University of Bath in the UK about the prospect of factorization using SIQS implemented on a GPU. I have provided the hyperlink below:

http://www.cs.bath.ac.uk/~mdv/course...on-2009-10.pdf

Pretty sure somebody on here has read this. I can understand most of it (a little hazy on the very technical stuff because this isn't my research field) and am wondering if anybody has actually played around with this guy's, or somebody else's, SIQS code for a GPU? It would be interesting to compare the timing for, say, a C140 using a modern-day GPU for SIQS against NFS using a GPU (for poly selection) along with a CPU (for the other steps).
Yes, I've seen it. Unfortunately the thesis is inconclusive. The code apparently never produced any factorization results, mostly because of a number of memory-access-related issues that prevented any real speedup over a CPU. The report lists several todo items to address these, but I have no idea whether anyone is actually working on them. It is now six years old, and we have yet to see any working SIQS code for a GPU...
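
For anyone curious where the memory trouble comes from: a straightforward GPU sieve assigns one thread per factor-base prime, and each thread then scatters log(p) contributions into sieve locations p apart. Below is a minimal CUDA sketch of that pattern (purely illustrative, all names hypothetical, not the thesis's code); the atomicAdd targets are strided and data-dependent, so the writes cannot be coalesced and the kernel ends up limited by memory traffic rather than arithmetic.

Code:
#include <cstdio>
#include <cuda_runtime.h>

// Naive sieving: one thread per factor-base prime, scattering log(p)
// across the sieve interval. The write pattern is the problem.
__global__ void sieve_kernel(int *sieve, int interval_len,
                             const int *primes, const int *roots,
                             const int *logp, int num_primes)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= num_primes) return;

    int p = primes[i];
    // Walk this prime's arithmetic progression across the interval
    // (a real SIQS sieve has two roots per prime per polynomial).
    for (int j = roots[i]; j < interval_len; j += p) {
        // Scattered atomic adds into global memory: neighbouring
        // threads hit unrelated cache lines, so nothing coalesces.
        atomicAdd(&sieve[j], logp[i]);
    }
}

int main()
{
    const int interval_len = 1 << 20;
    // Toy factor base; a real one has thousands of primes with
    // roots derived from the current polynomial.
    int h_primes[] = {2, 3, 5, 7, 11, 13};
    int h_roots[]  = {0, 1, 2, 3, 4, 5};
    int h_logp[]   = {1, 2, 2, 3, 3, 4};
    const int num_primes = 6;

    int *d_sieve, *d_primes, *d_roots, *d_logp;
    cudaMalloc(&d_sieve,  interval_len * sizeof(int));
    cudaMalloc(&d_primes, num_primes * sizeof(int));
    cudaMalloc(&d_roots,  num_primes * sizeof(int));
    cudaMalloc(&d_logp,   num_primes * sizeof(int));
    cudaMemset(d_sieve, 0, interval_len * sizeof(int));
    cudaMemcpy(d_primes, h_primes, sizeof(h_primes), cudaMemcpyHostToDevice);
    cudaMemcpy(d_roots,  h_roots,  sizeof(h_roots),  cudaMemcpyHostToDevice);
    cudaMemcpy(d_logp,   h_logp,   sizeof(h_logp),   cudaMemcpyHostToDevice);

    sieve_kernel<<<(num_primes + 127) / 128, 128>>>(d_sieve, interval_len,
                                                    d_primes, d_roots,
                                                    d_logp, num_primes);
    cudaDeviceSynchronize();

    int sample;
    cudaMemcpy(&sample, d_sieve, sizeof(int), cudaMemcpyDeviceToHost);
    printf("sieve[0] = %d\n", sample);

    cudaFree(d_sieve); cudaFree(d_primes); cudaFree(d_roots); cudaFree(d_logp);
    return 0;
}

The usual way to tame this is some form of bucket sieving, collecting hits per sub-interval in shared memory before touching global memory, but that is exactly where the extra complexity (and the bugs) tend to creep in.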