Hi,
I came across an interesting thesis by someone at the University of Bath in the UK on the prospect of factorization using SIQS implemented on a GPU. The link is below:
http://www.cs.bath.ac.uk/~mdv/course...on-2009-10.pdf
Pretty sure somebody on here has read this. I can follow most of it (a little hazy on the very technical parts, since this isn't my research field), and I'm wondering whether anybody has actually played around with this author's, or somebody else's, SIQS code for a GPU. It would be interesting to compare the timing on, say, a C140 using a modern GPU for SIQS against NFS with a GPU (for poly selection) plus a CPU (for the other steps).
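For anyone who hasn't looked at how the sieving step maps onto a GPU: below is a minimal toy sketch in CUDA of the quadratic sieve's inner loop, which is the part such a thesis would be offloading. To be clear, this is NOT the thesis's code; it's my own illustration, all names are hypothetical, it skips the "self-initializing" polynomial switching entirely, and it finds roots by brute force instead of Tonelli-Shanks. Each GPU thread takes one (prime, root) pair and walks that prime's arithmetic progression across the sieve interval, accumulating approximate log contributions.

[CODE]
// sieve_sketch.cu -- toy GPU sieve for the QS inner loop (illustration only,
// not the thesis's implementation; SIQS polynomial switching is omitted).
#include <cstdio>
#include <cmath>
#include <vector>
#include <cuda_runtime.h>

// One thread per (prime, root) pair: p divides Q(x) exactly at
// x = root, root+p, root+2p, ..., so add ~log2(p) at each such position.
__global__ void sieve_kernel(const int *primes, const int *roots,
                             const int *logs, int n_pairs,
                             int *interval, int interval_len)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_pairs) return;
    for (int x = roots[i]; x < interval_len; x += primes[i])
        atomicAdd(&interval[x], logs[i]);  // different primes can hit the same x
}

int main()
{
    const long long N = 15347;   // toy composite: 103 * 149
    const long long m = 124;     // ceil(sqrt(N)); sieve Q(x) = (x+m)^2 - N
    const int LEN = 4096;        // sieve interval [0, LEN)

    // Build (prime, root, log) triples: for each small prime, find every
    // x in [0, p) with Q(x) == 0 (mod p). Real code uses Tonelli-Shanks.
    std::vector<int> primes, roots, logs;
    for (int p : {2, 3, 5, 7, 11, 13, 17, 19, 23, 29})
        for (int x = 0; x < p; ++x)
            if ((((x + m) * (x + m) - N) % p) == 0) {
                primes.push_back(p);
                roots.push_back(x);
                logs.push_back((int)lround(log2((double)p)));
            }
    int n = (int)primes.size();

    int *d_p, *d_r, *d_l, *d_iv;
    cudaMalloc(&d_p, n * sizeof(int));
    cudaMalloc(&d_r, n * sizeof(int));
    cudaMalloc(&d_l, n * sizeof(int));
    cudaMalloc(&d_iv, LEN * sizeof(int));
    cudaMemcpy(d_p, primes.data(), n * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(d_r, roots.data(),  n * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(d_l, logs.data(),   n * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemset(d_iv, 0, LEN * sizeof(int));

    sieve_kernel<<<(n + 127) / 128, 128>>>(d_p, d_r, d_l, n, d_iv, LEN);
    cudaDeviceSynchronize();

    std::vector<int> interval(LEN);
    cudaMemcpy(interval.data(), d_iv, LEN * sizeof(int), cudaMemcpyDeviceToHost);

    // Positions whose accumulated logs come close to log2(Q(x)) are candidate
    // smooth relations; real code trial-divides these to confirm.
    for (int x = 0; x < LEN; ++x) {
        long long q = (x + m) * (x + m) - N;
        if (interval[x] >= (int)log2((double)q) - 2)
            printf("candidate x=%d  Q(x)=%lld  logs=%d\n", x, q, interval[x]);
    }

    cudaFree(d_p); cudaFree(d_r); cudaFree(d_l); cudaFree(d_iv);
    return 0;
}
[/CODE]

Note the parallelization choice: one thread per (prime, root) pair is the simplest mapping, but the atomics contend and small primes do far more work than large ones, so a serious GPU implementation would presumably do something smarter (e.g. blocking the interval per thread block). That load-balancing question is exactly the kind of thing I'd hope the thesis measures.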