#1
Dec 2002
2×5²×17 Posts
L.S.,
Nice, almost-empty forum space here. Time to request a server feature. I would like the status page on the Mersenne website - the table and the associated files listed below it - to be server-generated, say once a day. It will save some valuable time for George and make its availability more consistent.

YotN, Henk.
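A daily regeneration could be as small as a cron-driven script; here is a minimal sketch, with the file names, cron schedule, and table contents all invented purely to show the shape of it:

```python
# regenerate_status.py -- run once a day from cron, e.g.:
#   0 4 * * * python regenerate_status.py
# File names and data below are placeholders, not the real status format.

from datetime import datetime, timezone

ROWS = [("33219281", "LL testing"), ("20996011", "verified")]  # stand-in data

with open("status.html", "w") as f:
    f.write("<table>\n")
    for exponent, state in ROWS:
        f.write(f"<tr><td>{exponent}</td><td>{state}</td></tr>\n")
    f.write("</table>\n")
    f.write(f"<p>Generated {datetime.now(timezone.utc):%Y-%m-%d %H:%M} UTC</p>\n")
```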
#2
"William"
May 2003
New Haven
2,371 Posts
Is there any thought about making it possible for projects to run their own "slave server" that can be configured to work on different tasks but consolidate results back to a common master server? Tim Charron's ECM Server/Client works this way.
There are projects that use the Prime95 program in a distributed but manually coordinated way. Some, like LMH, are directly part of the search for Mersenne primes. Others, like GIMPS ECM/P-1 and ElevenSmooth, are factoring Mersenne numbers known to be composite. Slave servers might also solve the bottleneck problems caused by large manual submissions: such a project could set up a slave server that accepts the results and then reports them to the master using a protocol that the master can throttle.
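A minimal sketch of what the forwarding side of such a slave server might look like; the master URL, the Retry-After throttle convention, and the payload format are all assumptions made for illustration, not the real PrimeNet or ECM Server protocol:

```python
import queue
import time
import urllib.request

# Hypothetical master endpoint -- a stand-in, not the real protocol.
MASTER_URL = "http://master.example.org/submit"

results = queue.Queue()  # filled by the side that accepts manual submissions


def forward_results():
    """Trickle buffered results to the master at a rate the master controls."""
    delay = 1.0  # seconds between submissions; the master may adjust this
    while True:
        result = results.get()  # block until a result has been buffered
        req = urllib.request.Request(MASTER_URL, data=result.encode())
        with urllib.request.urlopen(req) as resp:
            # Assumed convention: the master throttles us via Retry-After.
            retry = resp.headers.get("Retry-After")
            if retry:
                delay = float(retry)
        time.sleep(delay)
```

The point of the design is that the buffer absorbs a large manual submission instantly, while the master sets the pace at which it actually receives the results.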
#3
Sep 2003
Borg HQ, Delta Quadrant
2·3³·13 Posts
^^ I'd like a feature like that for PrimeNet. That is, a "mini-server" that can be set up inside a network and that connects to the PrimeNet server to get, update, and return exponents. PCs inside that network would then connect to the mini-server to get and return their exponents. This would be very useful, especially for large farms.
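A rough sketch of the bookkeeping such a mini-server would need, assuming stand-in functions for the PrimeNet side (the real assignment protocol is different); the point is only the checkout/refill logic:

```python
import threading
from collections import deque


def fetch_from_primenet(n):
    # Stand-in for requesting n assignments from the real PrimeNet server.
    return [20_000_000 + i for i in range(n)]


def report_to_primenet(exponent, result):
    # Stand-in for returning a completed result upstream.
    pass


class WorkCache:
    """Hands cached exponents to LAN clients, refilling from PrimeNet."""

    def __init__(self, low_water=5, batch=20):
        self.available = deque()
        self.checked_out = {}       # exponent -> client id
        self.low_water = low_water  # refill when the cache runs this low
        self.batch = batch          # assignments fetched per refill
        self.lock = threading.Lock()

    def get_assignment(self, client_id):
        with self.lock:
            if len(self.available) < self.low_water:
                self.available.extend(fetch_from_primenet(self.batch))
            exponent = self.available.popleft()
            self.checked_out[exponent] = client_id
            return exponent

    def return_result(self, exponent, result):
        with self.lock:
            del self.checked_out[exponent]
        report_to_primenet(exponent, result)
```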
#4
Aug 2002
Texas
5·31 Posts
#5
Sep 2003
Borg HQ, Delta Quadrant
2·3³·13 Posts
Quote:
#6
Jan 2003
Altitude>12,500 MSL
1100101₂ Posts
For clarity, let's assume we are discussing GIMPS clients that have PrimeNet-integrated networking support.

The best strategy is in fact the one already in use, specifically: (a) each program batch-queues/buffers enough of its own work to keep busy for at least as long as it expects it could be out of contact with the server under routine operations, and (b) each program manages its own state synchronization.

Grid designs that move large volumes of data - where the cost of data movement in dollars or time is significant, on the order of 100 MB or more - can benefit from distributed caching and/or peer-to-peer, logically local (intra-LAN, etc.) data exchanges. For GIMPS, adding an intermediate 'mini-server' holding stateful data adds nothing useful: the data exchanges are too small and zippy, and the work units too long (hours or more), to benefit from a more complicated design.

A non-caching proxy, however, performs a useful function by concentrating traffic from identically configured clients through a single point that can be monitored and logged as an authorized portal. That the proxy does not change the client batch-queuing/buffering and state-synchronization model is an excellent property of the overall design. For what real reason would a small farm or cluster need anything else? A proxy needs a single, many-to-one network connection to PrimeNet, the same as a 'mini-server' would. Conversely, an episode of relatively long-term server unavailability would affect similarly configured work-queued clients and a work-caching mini-server identically - sooner or later both run out of work.

As such, the only case that could drive a need for a stateful 'mini-server' is a farm or cluster with no network connection at all - unless the person running it felt compelled to stay 'in the loop', refusing to trust PrimeNet to manage those resources.
#7
Sep 2003
Borg HQ, Delta Quadrant
2×3³×13 Posts
Quote:

Quote:

I hope you're getting what I'm trying to say here...
#8
Jan 2003
Altitude>12,500 MSL
1100101₂ Posts
How would increasing the client work queue fail to achieve the identical result? By falling back on an appeal to limited server availability, you have already agreed that the result is invariant. Moreover, a failover design supports disaster risk management, not routine operations.

I assert that the correct course of action is the one with the greatest leverage, and that in this case high server availability - something we need to provide anyway - makes any other requirement vanish.
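The leverage argument reduces to a one-line calculation; a sketch, with the figures invented purely for illustration:

```python
import math

days_per_work_unit = 7      # e.g., one LL test per week on a typical PC
outage_tolerance_days = 30  # the longest server outage we want to survive

# Queue this many work units and a month-long outage costs nothing.
queue_depth = math.ceil(outage_tolerance_days / days_per_work_unit)
print(queue_depth)  # -> 5
```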
#9
Sep 2003
Borg HQ, Delta Quadrant
2·3³·13 Posts
Quote:
I thought the idea of failover is that if the primary server fails, the secondary server takes over. While v5 will undoubtedly be more stable than v4, I see that as disaster risk management AND routine operations. However, since you are in charge of developing v5 while I will probably have no impact on it whatsoever, the decision is ultimately yours. Since I can see that you are set against this, I will end this discussion now and we can just agree to disagree on this issue.
#10
"William"
May 2003
New Haven
100101000011₂ Posts
Quote:
But making Prime95 a powerful, simple tool for "whatever people want to work on" isn't really the GIMPS charter, so I understand completely if the decision is to ignore these ancillary applications of Prime95 - but I wanted it to be a conscious design decision, not an accidentally overlooked opportunity.

Last fiddled with by wblipp on 2003-12-12 at 02:01
#11
Aug 2002
337₈ Posts
There is also the trust level of the proxy server for returned work. Having control of the box and the code is safer for the project. The server trusts the client communicating with it; why should it trust some queue proxy?

Remember, the majority of the work is still done by people who "set it and forget it", and the core is designed with that in mind. The flexibility in the queuing for the client now is great; it lets teams and single crunchers do what they will with their own work queues. I'd rather spend some time on anti-poaching features. :)

One plus of a queuing server, at least for TPR, would be automating the work type specifically for the team, to optimize its production. The ability to check in/out, for example, P-1-ready exponents, or exponents at certain FFT ranges to specific processor types, would be very useful. But is this for the core server, or could someone make a separate queue and client to handle editing the local worktodo.ini files? If you're at the level of manually editing your worktodo, having another program that adds work for you is no hardship. But let a trusted client handle request/result transactions with the core server.

Maybe once the core is done, there could be secondary project server(s), on a different server or IP port, settable via prime.ini, for things like ECM, so the logs can be consolidated with a single stats server. But the primary purpose isn't supporting every math project available (Prime95 is very nice in allowing work beyond GIMPS); this is about GIMPS. The donated time/money/space/electricity/backups are for GIMPS.

Failover at the core could be handled via clustering and a web-accelerator FEP.
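That separate queue-and-client idea could start as small as a script that appends entries to worktodo.ini. A sketch, assuming the old `Test=exponent,how_far_factored,has_been_pminus1ed` line format - check that against your client version before trusting it, and note the exponents below are arbitrary examples, not real assignments:

```python
def add_ll_tests(exponents, path="worktodo.ini", bits=66):
    """Append LL-test lines to a local worktodo.ini.

    Assumes the Test=exponent,how_far_factored,has_been_pminus1ed
    format; verify against your Prime95 version before using.
    """
    with open(path, "a") as f:
        for exp in exponents:
            f.write(f"Test={exp},{bits},0\n")


# Arbitrary example exponents, not real assignments.
add_ll_tests([22123477, 22123493])
```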