mersenneforum.org  

Old 2011-12-04, 19:41   #1
AnoNYStudent
 

4,051 Posts
Parallelizing Pollard's P-1 Algorithm

I would like to somehow parallelize Pollard's P-1 algorithm over multiple machines. I am currently writing a project proposal, and I don't know whether it is possible or not.

I heard about the GIMPS project and, since it is distributed, I figured that you all would know best if this project is even possible.

For Pollard's P-1, does GIMPS distribute a single job over multiple machines (in order to get speedup)? Or, does it give each machine a *different* factoring job?

Is Pollard's P-1 algorithm parallelizable?
Old 2011-12-04, 20:27   #2
Mr. P-1
 
Jun 2003

1169₁₀ Posts

Quote:
Originally Posted by AnoNYStudent View Post
For Pollard's P-1, does GIMPS distribute a single job over multiple machines (in order to get speedup)? Or, does it give each machine a *different* factoring job?
In GIMPS, each machine gets a different factoring job, and the same machine does both stages, or stage 2 is not done at all.

In principle, the two stages could be done by different machines, or a stage could be started on one machine and completed on another. GIMPS does not do this, largely because of the difficulty of transferring, storing, and tracking large amounts of data.

Quote:
Is Pollard's P-1 algorithm parallizable?
Stage 1 isn't parallelisable. Stage 2 is. Stage 1 must be complete before stage 2 starts.
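For concreteness, here is a toy two-stage P-1 sketch in Python (not GIMPS code; the bounds and numbers below are invented for illustration). It shows the shape Mr. P-1 describes: stage 1 is one sequential chain of modular exponentiations, while stage 2 tests each prime in (B1, B2] independently against the stage-1 residue:

```python
from math import gcd

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(2, n + 1) if sieve[i]]

def p_minus_1(N, B1, B2, a=2):
    # Stage 1: raise a to every prime power <= B1. Each step consumes
    # the previous step's output, so this chain is sequential.
    for p in primes_up_to(B1):
        pk = p
        while pk * p <= B1:
            pk *= p
        a = pow(a, pk, N)
    g = gcd(a - 1, N)
    if 1 < g < N:
        return g
    # Stage 2: each prime q in (B1, B2] is an independent test on the
    # stage-1 residue a, so these iterations could go to different machines.
    for q in primes_up_to(B2):
        if q > B1:
            g = gcd(pow(a, q, N) - 1, N)
            if 1 < g < N:
                return g
    return None
```

For example, `p_minus_1(1009 * 47, 20, 40)` finds 1009 in stage 1 (1008 = 2⁴·3²·7 is 20-smooth), and `p_minus_1(59 * 47, 20, 40)` only succeeds in stage 2. Real implementations replace the per-prime stage-2 exponentiations with much cheaper difference tables.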
Old 2011-12-04, 21:28   #3
Dubslow
Basketry That Evening!
 
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88

3×29×83 Posts

I would say that rather than parallelizing over multiple computers, you might try parallelizing it for GPUs; they have literally hundreds of processors running at once, and GIMPSters would certainly love a GPU P-1 program.

Mr. P-1, when you say S1 isn't parallelizable, do you mean in the same way as LL, meaning you'll lose efficiency but it's possible, or do you truly mean not at all?
Old 2011-12-05, 02:50   #4
cheesehead
 
"Richard B. Woods"
Aug 2002
Wisconsin USA

2²×3×641 Posts

Quote:
Originally Posted by Mr. P-1 View Post

Stage 1 isn't parallelisable.
Why not?
Old 2011-12-05, 03:50   #5
axn
 
Jun 2003

1533₁₆ Posts

Quote:
Originally Posted by cheesehead View Post
Why not?
Stage1 is essentially a serial computation. You _can_ use multiple threads to accelerate the stage1 multiplications (a la prime95), but there is a limit to how much speedup you can achieve that way.

OTOH, you can use multiple independent processes to parallelize stage2, with virtually no limit. Stage2 is what you call "embarrassingly" parallel.
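To illustrate axn's point with a hypothetical sketch (this is not how prime95 actually organizes stage 2): each stage-2 worker needs only the stage-1 residue a and N. It folds the primes in its own slice into a single product of (a^q − 1) mod N and takes one gcd at the end; the slices share no state, which is what makes this embarrassingly parallel. The toy numbers below are invented for illustration:

```python
from math import gcd

def stage2_chunk(a, N, prime_chunk):
    # One worker's share of stage 2: accumulate prod(a^q - 1) mod N over
    # its slice of primes, then one gcd. Needs only (a, N) -- no shared state.
    acc = 1
    for q in prime_chunk:
        acc = acc * (pow(a, q, N) - 1) % N
    return gcd(acc, N)

# Toy example: N = 59 * 47. Stage 1 with B1 = 20 leaves the residue
# a = 2^E mod N, where E is the product of all prime powers <= 20.
N = 59 * 47
E = 16 * 9 * 5 * 7 * 11 * 13 * 17 * 19
a = pow(2, E, N)

# Hand the primes in (20, 40] to two independent "workers":
print(stage2_chunk(a, N, [23, 31]))   # this slice turns up the factor 47
print(stage2_chunk(a, N, [29, 37]))   # this slice turns up the factor 59
```

Either worker alone would have sufficed; neither needed to wait for, or talk to, the other.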

Old 2011-12-05, 04:14   #6
cheesehead
 
"Richard B. Woods"
Aug 2002
Wisconsin USA

2²×3×641 Posts

Quote:
Originally Posted by axn View Post
Stage1 is essentially a serial computation.
No, it is _not_ essentially serial. It's just a bunch of modular multiplications, with the final product _not_ dependent on the order in which the multiplications are done!

In L-L, there's that pesky -2 in each iteration that prevents us from figuring out, far enough in advance, the exact values to be squared in later iterations, so they can't be usefully parallelized. There's no such barrier to forecasting any given value to be multiplied in P-1 stage 1, that I know of.

Quote:
You _can_ use multiple threads to accelerate the stage1 multiplications (a la prime95),
Isn't that equivalent to an admission that stage 1 can be parallelized?

Quote:
OTOH, you can use multiple independent processes to parallelize stage2, with virtually no limit. Stage2 is what you call "embarrassingly" parallel.
Either I've never seen, or have forgotten, an explanation of why this same approach cannot be applied to stage 1.

Please explain, exactly, the supposed impossibility that prevents parallelizing stage 1's multiplications.

Old 2011-12-05, 04:24   #7
axn
 
Jun 2003

3⁴×67 Posts

Quote:
Originally Posted by cheesehead View Post
No, it is _not_ essentially serial. It's just a bunch of multiplications, with the final product _not_ dependent on the order in which the multiplications are done.
It is a bunch of exponentiations. And unfortunately, exponentiation is neither associative nor commutative. So, unlike multiplications, which can be "divided and conquered", you're SOOL with exponentiation ==> inherently serial.
a*b*c*d = (a*b) * (c*d) -- the parts in brackets can execute in parallel.
((a^b)^c)^d -- strictly serial.

Quote:
Originally Posted by cheesehead View Post
In L-L, there's that pesky -2 in each iteration that prevents us from figuring out, far enough in advance, the exact values to be squared in later iterations, so they can't be usefully parallelized. There's no such barrier to forecasting any given value to be multiplied in P-1.
Who are you and what have you done with cheesehead?! This is such a confused statement, I don't even know where to start!

Even without the -2, LL test is a long series of squarings, with each iteration dependent on the outcome of the previous -- inherently serial.
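The recurrence axn describes is easy to see in code. A toy Lucas-Lehmer sketch for small exponents (nothing like the FFT-based real thing): every iteration consumes the previous iteration's output, so there is nothing to hand to a second machine:

```python
def lucas_lehmer(p):
    """Toy LL test: M_p = 2^p - 1 is prime iff s_(p-2) == 0,
    where s_0 = 4 and s_(k+1) = s_k^2 - 2 (mod M_p)."""
    M = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % M   # depends on the previous s: strictly serial
    return s == 0
```

For example, `lucas_lehmer(13)` is True (8191 is prime), while `lucas_lehmer(11)` is False (2047 = 23 · 89).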
Old 2011-12-05, 04:42   #8
AnonNYStudent
 

9,001 Posts

Quote:
Originally Posted by Dubslow View Post
I would say that rather than parallelizing over multiple computers, you might try parallelizing it for GPUs; they have literally hundreds of processors running at once, and GIMPSters would certainly love a GPU P-1 program.
This is definitely a possibility next quarter when I take GPU programming.

But at the moment I have two options:
1) Find some problem to solve using multiple CPU cores or
2) Find some problem to solve by using multiple machines across the network.

I would really love to write a GPU program, but I have no experience with CUDA. We are using OpenMP for our project. Hopefully, after gaining experience with writing parallel programs, I will be able to write them for the GPU next.

Thanks for all the replies. They give me more confidence that it is at least possible to parallelize some parts of Pollard's algorithm. I wanted to pick a cool problem to solve but I didn't know if it was too advanced.
Old 2011-12-05, 04:46   #9
frmky
 
Jul 2003
So Cal

2×7×181 Posts

Quote:
Originally Posted by cheesehead View Post
Why not?
Stage 1 is basically a big modular exponentiation. The time is dominated by two large multiplications per step assuming Montgomery or Barrett, so for GIMPS-sized numbers GPUs should be decent. The multiplications can be done using cuFFT as in CudaLucas. Perhaps I'm overlooking something?
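frmky's "one big modular exponentiation" view can be checked with a toy computation (the numbers below are invented for illustration): collapse all of stage 1 into a single exponent E, the product of every prime power up to B1, and do one pow:

```python
from math import gcd

def stage1_exponent(B1):
    # E = product of all prime powers p^k <= B1
    # (trial division is fine at toy sizes).
    primes, E = [], 1
    for m in range(2, B1 + 1):
        if all(m % p for p in primes):
            primes.append(m)
            pk = m
            while pk * m <= B1:
                pk *= m
            E *= pk
    return E

# Toy example: N = 101 * 107, and 101 - 1 = 2^2 * 5^2 is 25-smooth.
N = 101 * 107
E = stage1_exponent(25)
g = gcd(pow(2, E, N) - 1, N)   # the whole of stage 1 in one pow()
print(g)                       # finds the factor 101
```

This is also why stage 1 resists distribution: its runtime is the ~log₂ E squarings inside that one pow, a single dependency chain that multithreading (à la prime95) can speed up per-multiplication but cannot split across machines.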
Old 2011-12-05, 04:52   #10
cheesehead
 
"Richard B. Woods"
Aug 2002
Wisconsin USA

2²·3·641 Posts

Perhaps the argument that y'all should be making is not that stage 1 isn't parallelizable, but that the degree of parallelization required to accomplish a net savings in elapsed time (vs. serial processing) is exponential (or something), and thus soon becomes impractical in the general case?

Old 2011-12-05, 04:57   #11
cheesehead
 
"Richard B. Woods"
Aug 2002
Wisconsin USA

2²·3·641 Posts

Quote:
Originally Posted by axn View Post
It is a bunch of exponentiations. And unfortunately, exponentiation is neither associative nor commutative. So, unlike multiplications, which can be "divided and conquered", you're SOOL with exponentiation ==> inherently serial.
a*b*c*d = (a*b) * (c*d) -- the parts in brackets can execute in parallel.
((a^b)^c)^d -- strictly serial.
You wouldn't be trying to convince me that ((3^2)^3)^5 is not equal to ((3^5)^2)^3 or to (3^15)*(3^15), would you?

:-)
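cheesehead's identity is easy to sanity-check numerically (the modulus below is arbitrary): with a fixed base, (a^b)^c = a^(bc), and the exponent product commutes. What is serial is not the order of the exponents but the square-and-multiply chain inside the one resulting exponentiation:

```python
N = 10**9 + 7   # any modulus works for this check

x1 = pow(pow(pow(3, 2, N), 3, N), 5, N)   # ((3^2)^3)^5
x2 = pow(pow(pow(3, 5, N), 2, N), 3, N)   # ((3^5)^2)^3
x3 = pow(3, 15, N) * pow(3, 15, N) % N    # (3^15) * (3^15)
x4 = pow(3, 30, N)                        # 3^30 directly

print(x1 == x2 == x3 == x4)               # all four agree
```

So the tower collapses to a single exponent, and building that exponent parallelizes fine; the sequential part is evaluating the one big pow.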
