2005-07-07, 00:12   #1
Peter Nelson (Oct 2004)

Using PCI-E based network hardware for faster math clusters

Thinking out loud....

Several projects discussed here, such as large Number Field Sieve attempts on RSA numbers, use a cluster of machines that communicate with each other over a network.

Unlike Prime95, which is "trivially" parallel with no inter-processor communication requirement, some other algorithms require a great deal of inter-processor communication.

From what Bob Silverman has said, when you scale up to many processors the network communication becomes the bottleneck, rather than how fast each processor can churn through its share of the work.

Network performance has two properties: throughput (bandwidth) and latency (the delay for a message to get from one node, through whatever switching architecture sits in between, to another).
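
As a rough worked example (ballpark figures of my own, not measurements): the time to deliver a message is approximately latency + size/bandwidth. Over gigabit ethernet a 64-byte message occupies the wire for well under a microsecond, but the end-to-end latency through the NICs, driver stacks and a switch is typically some tens of microseconds, so latency dominates completely. A 1 MB transfer, on the other hand, takes about 8 ms at wire speed, so there bandwidth is what matters and the latency is noise.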

Until now most network connections have been PCI cards or, where integrated on the motherboard, have still shared the PCI bus.

However, PCI is not full duplex and its bandwidth is shared between all the devices on the bus.
Hence the offloading of graphics onto its own fast bus, AGP.

Now that AGP is largely being replaced by the relatively new "PCI Express" standard, there are alternatives for faster networking too.

x1, x4, x8 or x16 slots may be available on the motherboard, and in addition the onboard LAN MIGHT be attached via PCI-E as well, depending on the design.
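
To put some rough numbers on why that matters (my own back-of-envelope, assuming classic 32-bit/33 MHz PCI and first-generation PCI-E): plain PCI offers about 133 MB/s of theoretical bandwidth, shared between every device on the bus and between both directions of traffic. A gigabit port running full duplex at wire speed wants roughly 125 MB/s each way, i.e. up to 250 MB/s in total, so a single NIC can already swamp the bus before you add a second port or a busy disk controller. A PCI-E x1 lane, by contrast, provides about 250 MB/s in each direction simultaneously and is dedicated to the one device.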

I have certainly read of benchmark improvements for InfiniBand (a high-speed interconnect network for building clusters): moving from the traditional PCI-based cards to the new PCI-Express-based cards increased performance by some 30-40% and reduced latency.

This led me to wonder how large the corresponding improvement would be for gigabit ethernet, and how much that would help math clusters, perhaps making higher computational limits possible.

Now, I know that some new motherboards hang their onboard LAN off plain PCI while others use PCI-E for it. Motherboards with dual gigabit ports may put both on PCI, both on PCI-E, or, commonly, one port on PCI-E and the other on PCI.

Obviously PCI-E interfacing on both is desirable. You sometimes have to dig hard to find out how the interfaces are implemented, because motherboard manufacturers are not yet touting "direct network connection via PCI-E" as a feature.

Another consideration with gigabit ethernet is whether "jumbo frames" support is enabled. Jumbo frames carry more data per frame, so fewer interrupts are generated (and less CPU is used), but many switches do not support them.
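
As a rough illustration (assuming a standard 1500-byte MTU versus a common 9000-byte jumbo MTU): at full gigabit wire speed a stream of 1500-byte frames arrives at roughly 81,000 frames per second, whereas 9000-byte jumbo frames arrive at only about 14,000 per second, so the per-frame overhead (interrupts, header processing) drops by roughly a factor of six for the same payload rate. Interrupt coalescing in the NIC blurs this somewhat, but the direction of the saving is the same.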

I have at least three questions.....

a) Are there any reviews or benchmarks comparing gigabit ethernet interfaces on PCI-E with those on PCI (total throughput, latency, CPU utilisation)? How closely does the gigabit improvement mirror that seen when InfiniBand moved to the new interface?
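
In case nothing published turns up, this is the sort of minimal microbenchmark I have in mind: a TCP ping-pong between two nodes that reports the average round-trip time for small messages, run once with a PCI NIC and once with a PCI-E NIC. It is only a rough sketch of my own, with an arbitrary port number and message size; an established tool such as netperf would do the same job more carefully.

Code:
/* tcp_pingpong.c -- rough TCP round-trip latency sketch (not a polished tool).
 * Server:  ./tcp_pingpong server
 * Client:  ./tcp_pingpong client <server-ip>
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <arpa/inet.h>

#define PORT   5555      /* arbitrary port */
#define MSG    64        /* small message: latency-dominated */
#define ITERS  10000

/* read or write exactly len bytes (TCP may split them) */
static void rw_all(int fd, char *buf, int len, int do_write)
{
    int done = 0, n;
    while (done < len) {
        n = do_write ? write(fd, buf + done, len - done)
                     : read(fd, buf + done, len - done);
        if (n <= 0) { perror("io"); exit(1); }
        done += n;
    }
}

int main(int argc, char **argv)
{
    char buf[MSG];
    int one = 1, i, fd;
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(PORT);

    if (argc >= 2 && strcmp(argv[1], "server") == 0) {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        if (bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }
        listen(lfd, 1);
        fd = accept(lfd, NULL, NULL);
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
        /* echo loop: receive each message and send it straight back */
        for (i = 0; i < ITERS; i++) {
            rw_all(fd, buf, MSG, 0);
            rw_all(fd, buf, MSG, 1);
        }
    } else if (argc >= 3 && strcmp(argv[1], "client") == 0) {
        struct timeval t0, t1;
        double usec;
        fd = socket(AF_INET, SOCK_STREAM, 0);
        addr.sin_addr.s_addr = inet_addr(argv[2]);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("connect"); return 1; }
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
        gettimeofday(&t0, NULL);
        for (i = 0; i < ITERS; i++) {      /* send, then wait for the echo */
            rw_all(fd, buf, MSG, 1);
            rw_all(fd, buf, MSG, 0);
        }
        gettimeofday(&t1, NULL);
        usec = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
        printf("avg round trip: %.1f us for %d-byte messages\n", usec / ITERS, MSG);
    } else {
        fprintf(stderr, "usage: %s server | client <ip>\n", argv[0]);
        return 1;
    }
    close(fd);
    return 0;
}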

b) In typical or specific math applications such as variants of NFS, what are the characteristics of the network traffic (perhaps carried over MPI)?

e.g. is it small packets that are very latency dependent?
Or is the total bulk of data to be thrown around between the nodes typically the limiting factor (e.g. sending at full wire rate)?
e.g. what is the topology: one-to-one, one-to-many? (A sketch of the kind of test I have in mind follows this list.)
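
For what it is worth, a message-size sweep with a simple MPI ping-pong between two nodes would show directly where the traffic crosses over from latency-bound to bandwidth-bound. A rough sketch of my own, using nothing beyond standard MPI_Send/MPI_Recv and MPI_Wtime (iteration count and sizes picked arbitrarily):

Code:
/* mpi_pingpong.c -- sweep message sizes to see where latency stops dominating.
 * Build: mpicc mpi_pingpong.c -o mpi_pingpong
 * Run:   mpirun -np 2 ./mpi_pingpong   (one process on each node)
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, iters = 1000, size, i;
    char *buf = malloc(1 << 20);            /* up to 1 MB messages */
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (size = 4; size <= (1 << 20); size *= 4) {
        double t0, t1;
        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < iters; i++) {
            if (rank == 0) {                 /* rank 0 sends, then waits for the echo */
                MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
            } else if (rank == 1) {          /* rank 1 echoes everything back */
                MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
                MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();
        if (rank == 0)
            printf("%8d bytes: %8.1f us round trip, %7.1f MB/s one way\n",
                   size, (t1 - t0) * 1e6 / iters,
                   size / ((t1 - t0) / (2.0 * iters)) / 1e6);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Plotting the reported one-way MB/s against message size should make it fairly obvious which regime a given application's traffic falls into.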

For instance, would two or four gigabit ports used together in a meshed topology suit the characteristics of the traffic? Traditionally multi-port cards have been limited by PCI bus bandwidth, but now that they can have their own PCI-E lanes this will no longer be the limiting factor.

What is the largest realistic number of nodes at present if the cluster's sieving (etc.) performance is constrained by a gigabit ethernet infrastructure?

c) Given answers to the above, how much improvement in performance of the cluster application is likely to be delivered by the speedups in (a)?
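
My own back-of-envelope way of thinking about this (plain Amdahl's law, with made-up numbers purely for illustration): if a fraction f of the wall-clock time is spent waiting on the network and the PCI-E NICs make the network part s times faster, the overall speedup is 1 / ((1 - f) + f/s). So if a job were 40% network-bound and the interconnect improved by the 30-40% reported for InfiniBand (say s = 1.35), the whole job would only run about 11-12% faster; but the same network gain buys proportionally more as node counts grow and f rises.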

Does this mean additional nodes could usefully be added to the cluster before the network bogs it down? (You may extrapolate from, or borrow, experience of cluster sizes using InfiniBand or other high-performance networks.)

Assume that InfiniBand remains a specialist, expensive solution and that gigabit ethernet remains a commodity (even integrated) interconnect at minimal cost.

You may assume any switches used are non-blocking (can handle what you throw at them without dropping packets).

You may additionally consider whether adding a 10-gigabit switch and/or cards would provide further benefit, OR whether the bottleneck would then shift back to the processors in the cluster.

Assume cluster nodes range from >= 3 GHz Pentium 4 up to AMD dual-core X2 spec, with dual-channel PC3200 memory giving decent bandwidth on the clients. Memory per node can be, say, up to 4 GB.
*****

For your info: although Intel's plug-in boards are currently PCI based, they will be introducing some PCI-E based network products later this year. 3Com etc. may have similar products.

*****

I do not expect you to have precise answers, but I wanted to share my thought journey. I have not been able to find much on gigabit ethernet over PCI-E latency and the expected benefits for math applications, but I believe this to be an important development, with significant benefits to computation if a cluster is built using recently introduced commodity hardware.

Last fiddled with by Peter Nelson on 2005-07-07 at 00:21