**#738** · Nov 2008 · 2×3³×43 Posts
**#739** · Jun 2003 · 2×3²×269 Posts
Not really. I did some work at the beginning of the 1M range (but that was a long time ago) and took some individual sequences to 100+ digits (long sequences). Definitely _not_ the blocks starting at 1.4M and 2M.
**#740** · Nov 2008 · 4422₈ Posts
Aliquot sequence 11040 is broken: only the last 20 lines are present.
Last fiddled with by 10metreh on 2010-03-20 at 07:43
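For context (a sketch, not the DB's actual code): an aliquot sequence such as 11040's iterates the sum-of-proper-divisors map s(n) = σ(n) − n until it hits a prime, a cycle, or grows beyond reach. A minimal pure-Python step:

```python
def aliquot_step(n):
    """s(n) = sigma(n) - n: sum of proper divisors of n (assumes n > 1)."""
    total = 1  # 1 divides every n > 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:      # avoid double-counting a square root divisor
                total += n // d
        d += 1
    return total
```

For example, s(12) = 1 + 2 + 3 + 4 + 6 = 16, and s(57) = 1 + 3 + 19 = 23.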
**#741** · Account Deleted ("Tim Sorbera") · Aug 2006 · San Antonio, TX USA · 17·251 Posts
> Aliquot sequence 11040 is broken: only the last 20 lines are present.

I see from 5621 to 6073. More than 20 lines, but not the whole sequence.
**#742** · "Rich" · Aug 2002 · Benicia, California · 2×5×7×17 Posts
I have noticed a couple of HP numbers that previously terminated in a prime now revert back to a line where the database has become corrupted for the composites 52 and 157. The remainder of the sequence, which had previously been completed to a prime, disappears.
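For readers unfamiliar with HP (home prime) sequences: each step factors n and concatenates the prime factors in increasing order (with multiplicity), stopping when the result is prime. A rough sketch using plain trial division, fine for small n:

```python
def home_prime_step(n):
    """Concatenate the prime factors of n, smallest first, with multiplicity."""
    parts, d = [], 2
    while d * d <= n:
        while n % d == 0:
            parts.append(str(d))
            n //= d
        d += 1
    if n > 1:                      # leftover prime factor
        parts.append(str(n))
    return int("".join(parts))
```

For 52 mentioned above: 52 = 2·2·13, so one step gives 2213, which is prime, and the sequence terminates there.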
**#743** · "Ed Hall" · Dec 2009 · Adirondack Mtns · 3,541 Posts
Is there somewhere I could see the details of the db workings? I'm curious about several things:

1. Why would a 9-digit (or smaller) number need computation to prove primality, when it could be looked up programmatically in any of several lists of primes within a few milliseconds?

2. I've noticed that "workers connected" shows 0 (probably because of the "Worker script disabled!" message). I would guess this could explain the lack of progress. What actually are workers? Are they algorithms, or persons/networks?

3. Is there a way to become "certified" or "trusted" and be able to help correct some of the issues that have appeared lately, such as 57 and 159 needing to be identified properly in sequences?

Although I have limited time, like the rest of the members, and am a complete unknown, I would like to offer assistance in correcting those issues I can fully understand. Of course, I would need to know the "how" for the corrections, not just that something is amiss.

I know that 57 is composite, but why does the db show it as an aliquot factor, and what is needed to resolve it? The db clearly shows 57 as a composite number, composed of 3 and 19. What would cause its misrepresentation within an aliquot sequence? Would that info be available for review? Is this trouble at a programming level within the db code?

Take Care,
Ed
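On the first question: no lookup table is even needed. For all n below 3,215,031,751 (which covers every 9-digit number), a Miller-Rabin test with the fixed witness bases 2, 3, 5, 7 is provably deterministic, so primality can be settled in microseconds. A sketch:

```python
def is_prime_small(n):
    """Deterministic primality for n < 3,215,031,751:
    Miller-Rabin with witnesses 2, 3, 5, 7 is provably correct there."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:              # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for a in (2, 3, 5, 7):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False           # a is a witness: n is composite
    return True
```

For example, `is_prime_small(999999937)` confirms the largest 9-digit prime instantly.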
**#744** · A Sunny Moo · Aug 2007 · USA (GMT-5) · 3·2,083 Posts
> What actually are workers? Are they algorithms, or persons/networks?
Initially, user-initiated ECM jobs were handled on a last-come, first-served basis. That is, if a largish ECM job was in progress and somebody else came along and queued one of their own, the first person would be bumped off. This wasn't too bad at first, though eventually you'd have to essentially babysit a job in order to actually get anything done on it, and even then it could be tricky if you happened to be doing that at the same time as somebody else.

So, Syd added a queue system for user-initiated ECM jobs. (This is not to be confused with the queue system which already existed for non-user-initiated jobs; those still had their own queue and took priority, but that wasn't too big a deal since they were relatively quick and small in number.) In keeping with the earlier last-come, first-served model, jobs were added to the front of the queue when they were requested--the idea being that a quick ECM job could finish right there while a person waited. It worked for a while, but as the DB grew in popularity, the same problem came into play as before: if you didn't babysit a job, it generally just sank farther and farther down the queue until it would essentially never happen. (Also somewhere in here other jobs such as P-1, P+1, and QS for numbers <70 digits were made available.)

At this time Syd came up with the idea of remote workers over the internet--that is, people could contribute to cleaning through the backlog by connecting their own machines as workers to do ECM, P-1, P+1, and sieve tasks. This helped quite a bit, but it wasn't all rosy: on Windows the worker application would never actually run the application to do the work and instead just reported work done immediately after it was assigned. This led to a lot of false ECM work being reported and a lot of mess.
Since there was no way to tell whether a client was a Linux box and actually helping, or a Windows box and not doing any good (usually unintentionally), Syd disabled the remote worker interface and left the jobs to his original local workers (now about 6-8 of them most of the time). Not long after that, Syd did a major DB upgrade to the current system, with registered users having a point system to "buy" worker time, thus hopefully alleviating the load. Users could replenish their points by running workers of their own. But the new worker script for the upgraded DB didn't work either, so Syd had to yet again disable the remote worker interface.

And now we're left where we are now--despite the point system, somehow the queue has yet again gotten completely out of hand. The only jobs that actually get done any more are the non-user-specific jobs, which (thankfully) still are prioritized, so the local workers keep them running. That's why primality proofs always get done eventually, if not right away. However, it seems TF/initial ECM jobs are not quite highest priority, hence why you have to run Quick ECM to take out tiny factors on a recently-submitted composite.

Quick ECM was added with the big DB upgrade as a way to bypass the queue by sort of returning to the old "last-come, first-served" system. What it does is run auto-increasing curves--with a little P-1 and P+1 mixed in--for a set amount of time, 30 seconds I think, on the webserver itself. I don't believe you can bump somebody off; rather, jobs run concurrently. So far the Quick ECM system has worked pretty well and avoids backlog due to the hard time limitation. At this point, it's essentially the only useful type of work one can get out of the worker system, and due to the way it works one can often "get lucky" with it and strike a 30- or even 35-digit factor.

Despite all this, there's still a continual influx of new "big ECM" jobs into the chock-full worker queue. Some people just don't get it...
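The time-boxed, auto-increasing idea behind Quick ECM can be sketched like this. This is illustrative only: it uses a simple Pollard P-1 stage 1 as a stand-in for real ECM curves, and the `budget_seconds` value and starting bound are invented parameters, not the DB's actual settings.

```python
import math
import time

def pminus1(n, B1):
    """One Pollard P-1 'stage 1' attempt with smoothness bound B1."""
    a = 2
    for p in range(2, B1 + 1):
        if all(p % d for d in range(2, math.isqrt(p) + 1)):  # p is prime
            pe = p
            while pe * p <= B1:      # largest power of p not exceeding B1
                pe *= p
            a = pow(a, pe, n)
    g = math.gcd(a - 1, n)
    return g if 1 < g < n else None

def quick_factor(n, budget_seconds=2.0):
    """Run attempts with auto-increasing bounds until the time budget
    runs out, mimicking the time-boxed behaviour described above."""
    deadline = time.monotonic() + budget_seconds
    B1 = 100
    while time.monotonic() < deadline:
        f = pminus1(n, B1)
        if f:
            return f
        B1 *= 2                      # auto-increase the bound each pass
    return None                      # time's up, no factor found
```

The hard deadline is what keeps such jobs from piling up: whatever happens, each request costs the server a bounded amount of CPU time.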
Anyway, hopefully that explains things a bit.

> Is there a way to become "certified" or "trusted" and be able to help correct some of the issues that have appeared lately?
I don't believe there has ever been any privilege system for correcting some of the other errors, such as things set as "fully factored" or "composite w/no factors" when they're clearly not. Syd has to do all of those manually.
**#745** · "Ed Hall" · Dec 2009 · Adirondack Mtns · 3,541 Posts
Thank you very much for the detailed explanation. I have been around long enough to see several of these issues crop up, and I had believed it was intentional. Then I started wondering whether it really was a worm, or possibly a hardware/software issue.
One day I asked a coworker in an adjacent office why he was working on a recent submission ahead of an older one that was a rather big project. He had full scheduling authority for his workload. He explained that if he knocked out the simple ones, the task board looked cleaner, more resources were freed up, and the overall appearance was that he had less backlog. This also led to less anxiety.

I wonder how a similar approach would affect the db queue - knock out all the trivial operations first and then move to the more complex. Some of those trivial items might even be affecting the more complex, later operations. Maybe the cutoffs for the bigger ECM jobs should be based on the worker queue - as the queue fills, the accepted ECM sizes decrease. Once the queue works down, the ECM size can increase again. Maybe once a month, put a block on new jobs until the queue is completely flushed. Otherwise, a lot of them would never get processed.

I definitely appreciate Syd's efforts in supplying the db and keeping it running, and I hate to have to point out errors, etc. I'm sure Syd dislikes hearing about them and having to deal with each individually. I suppose at this point, I can only help by not tasking the workers with big jobs.

Thanks again for the details.

Take Care,
Ed
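The adaptive-cutoff idea above could be as simple as making the accepted ECM job size a decreasing function of queue length. A hypothetical sketch - the digit limits 140/70 and the queue cap of 100 are invented for illustration, not actual DB policy:

```python
def max_ecm_digits(queue_len, hi=140, lo=70, cap=100):
    """Accepted ECM size (in digits) shrinks linearly as the queue fills;
    once the queue drains, larger jobs are accepted again."""
    if queue_len >= cap:
        return lo                          # queue full: only small jobs
    return hi - (hi - lo) * queue_len // cap
```

With these made-up numbers, an empty queue accepts up to 140-digit jobs, a half-full queue only up to 105 digits, and a full queue just 70.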
**#746** · Account Deleted ("Tim Sorbera") · Aug 2006 · San Antonio, TX USA · 10AB₁₆ Posts
It'd be nice if Syd added a "Verify status" button that would verify the composite/prime status of the number (not necessarily do a full primality test on large numbers, but at least do a sanity check via a Quick ECM) and check that each factor divides the number. Do you think you could find some time to do this, Syd? I think this would solve a great majority of the problems we're having lately, and wouldn't really have any potential for abuse (unlike allowing users to set numbers as prime).
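A "Verify status" check along the lines described here could be very cheap: confirm the stored factors multiply back to the number, that each one divides it, and that each claimed prime at least passes a probable-prime test. This is a sketch of the idea, not actual factordb code:

```python
import random
from math import prod

def is_probable_prime(n, rounds=16):
    """Miller-Rabin probable-prime test with random witnesses."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def verify_entry(n, factors):
    """factors: {prime: exponent}, the way a DB entry might store them."""
    if prod(p**e for p, e in factors.items()) != n:
        return False                # factors don't multiply back to n
    return all(n % p == 0 and is_probable_prime(p) for p in factors)
```

For instance, `verify_entry(57, {3: 1, 19: 1})` passes, while an entry claiming 57 itself is prime fails the check.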
**#747** · Oct 2009 · Oulu, Finland · 368 Posts
What is wrong with the DB?

19^981-1 has a 1171-digit composite cofactor. This cofactor has some known factors too - the DB knows them - but for some unknown reason the 1171-digit cofactor is marked in red ("status unknown"). I'm unable to fix this situation.

http://factordb.com/search.php?id=163998287

Last fiddled with by rekcahx on 2010-03-24 at 17:31. Reason: Wrong length
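A quick sanity check on the sizes in this report: 19^981 - 1 has 1255 decimal digits in total, so a 1171-digit cofactor leaves roughly 84-85 digits accounted for by the known factors.

```python
# sanity-check the digit counts from the report
N = 19**981 - 1
print(len(str(N)))           # total digits of N: 1255
print(len(str(N)) - 1171)    # digits covered by known factors: roughly 84
```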
**#748** · Mar 2006 · Germany · 2×1,433 Posts
**Similar Threads**

| Thread | Thread Starter | Forum | Replies | Last Post |
|---|---|---|---|---|
| Database for k-b-b's | 3.14159 | Miscellaneous Math | 325 | 2016-04-09 17:45 |
| Factoring database issues | Mini-Geek | Factoring | 5 | 2009-07-01 11:51 |
| database.zip | HiddenWarrior | Data | 1 | 2004-03-29 03:53 |
| Database layout | Prime95 | PrimeNet | 1 | 2003-01-18 00:49 |
| Is there a performance database? | Joe O | Lounge | 35 | 2002-09-06 20:19 |