mersenneforum.org (https://www.mersenneforum.org/index.php)
-   GPU to 72 (https://www.mersenneforum.org/forumdisplay.php?f=95)
-   -   Future requests? (https://www.mersenneforum.org/showthread.php?t=16727)

Dubslow 2015-08-17 19:40

[QUOTE=Mark Rose;408175]P is for Python, no?[/QUOTE]

Beat me to it :smile:

As for the indentation, it forces good style habits (i.e. is easier to read), as well as being simpler to write and learn.

Disabling code by putting it in an always-false conditional is (arguably) a bad habit. If you want to pretend that code isn't there, comment it, don't leave it sitting there looking for all the world like legitimate code if you either fail to notice or forget the false conditional.

(Also what's this nonsense about IDEs? :razz:)

chalsall 2015-08-17 20:05

[QUOTE=Dubslow;408183]Disabling code by putting it in an always-false conditional is (arguably) a bad habit. If you want to pretend that code isn't there, comment it, don't leave it sitting there looking for all the world like legitimate code if you either fail to notice or forget the false conditional.[/QUOTE]

LOL...

Who said anything about always-false in the conditional?

On the other hand, there are those who sometimes remove sections of code temporarily.

Python has no such option. You can either remove the code without the reminder of what was originally there, or you indent around the conditional, or you add a hash mark for each line.

Regardless, you are at the sufferance of the editor / compiler. Something I'm not willing to do.

Dubslow 2015-08-17 20:38

[QUOTE=chalsall;408184]

Regardless, you are at the sufferance of the compiler.[/QUOTE]

That's true of any language, including Perl.

chalsall 2015-08-17 20:44

[QUOTE=Dubslow;408186]That's true of any language, including Perl.[/QUOTE]

Agreed.

On the other hand, no other compiler (that I know of) gets upset because of indentation.

LaurV 2015-08-18 05:45

grrrr, use #if 0/#endif, which avoids tricks with multi-line comments. Clever editors will even show this in a different color, and/or you can hide it completely from view. Personally, I never use /*...*/, but stick to //-style line comments and doxygen.

As a fun story, I also "invented" some tricks like

/* //
code lines or code blocks come here;
// */ optional alternate code line comes here;

This way, by adding or removing a second slash at the front of the first line, I can select whether the code block or (if present) the alternate code line gets used. This is pure laziness: instead of using #ifdef's, I only have to press a single key to add or remove that second slash. Of course, the editor's parser will show the commented-out part in a different color. And of course, the same can be done with an "if" line, or with #ifdef/#ifndef lines... hehe.

And many other tricks like that...
In the domain I work in, I always fight for a few bytes or a few microseconds, and that means a lot of "alternating" paths, even for elementary calculations. For example, going up in a circular menu with 5 positions: should I write "if(x!=0)x--;else x=4", or "x=(x==0?4:x-1)", or "x=(x+4)%5", etc.? You would be surprised how different MCUs/compilers/architectures will give better/worse code for one or the other. But that is a different story...

bloodIce 2015-08-18 06:14

[QUOTE=chalsall;408184]or you add a hash mark for each line.[/QUOTE]

Eh, come on, the triple-quoted string matching only the first indentation is not that bad, is it? Let's not pretend that multi-line comments weren't thought about. Indentation means readability, and if someone ever looks at your code, indentation will make it easy to understand where the chunks are. Even for yourself, months/years later.

P.S. Frankly speaking, when I am touching C(++), I spend more time counting brackets than writing code; I find that unnecessarily unproductive.

LaurV 2015-08-18 06:41

[QUOTE=bloodIce;408204]P.S. Frankly speaking, when I am touching C(++), I spend more time counting brackets than writing code; I find that unnecessarily unproductive.[/QUOTE]
Huh? What have you been smoking? :w00t:

I never counted brackets in 30++ years of programming, except for Lisp in school (where the parentheses accumulate at the end, and you have to close 20-30 or so, though even there a square bracket at the end will close all open lists). After school I never wrote complicated Lisp programs besides simple "scripts", for example in AutoLisp/AutoCAD.

Every IDE nowadays has [B]pairing[/B] (CTRL+E will jump between open/close parenthesis/bracket/begin/end of block, for example, in uVision IDE which I have in front of my eyes in this very moment), and/or [B]highlighting[/B] (of all, and/or of the mismatched only), and/or [B]shrinking[/B] of the blocks/functions, and/or [B]indentation[/B] etc. All help with the "brackets" so you never need to "count" them.

snme2pm1 2015-08-18 08:45

[QUOTE=James Heinrich;408181]it would make more sense to just run this off mersenne.org and pull from the real data. Perhaps when Aaron has some free time he can help me figure out the best way to pull that kind of data out, I can assemble a front-end once the data is available.[/QUOTE]

I want to applaud James for taking up the challenge.
It would be most helpful to readily observe the TF explored depth of ranges of unfactored exponents.
As a secondary consideration, it would also be lovely to have a historic perspective for information presented.
I will not assume that quite the same view of information will be convenient.
But with no disrespect to Chris, the exercise of daily migration of statistics must have been a burden that will be alleviated with some degree of similar facility hosted at PrimeNet.
By the way, this thread has ventured way off topic and is now inconsistent with the parent forum.
I'm not about to start talking about my source code programming habits here.
I was inclined to suggest that further discussion about this proposed new report be relocated to a more proper location like [url]http://www.mersenneforum.org/showthread.php?t=19716[/url] "PrimeNet web design" some days ago, but I didn't want to inhibit the flow of comment.
Maybe it is time now to do that?

LaurV 2015-08-18 10:01

We also salute James picking up the glove. :smile:
And we say sorry for divagation from the topic. From [U]our[/U] topic (the royal our). Because, see, what James has to do is written in the first few posts of this topic, from 3 years ago. By me (and a few others). The issues were never completely solved, and got worse over time, until what we have today. I don't blame anyone, just stating the facts; I am perfectly aware that people have real lives too, I also have a real life :razz:

chalsall 2015-08-18 19:19

[QUOTE=LaurV;408209]We also salute James picking up the glove. :smile:[/QUOTE]

Indeed.

As I said before a few times, mersenne.info was created mostly for myself when I was doing LMH TF'ing several years ago. The spidering code was a bit fragile when it came to correctly dealing with connectivity problems, which was exacerbated by the horrible connectivity and power issues we suffer here in Barbados (although things are slowly getting better -- a monkey hasn't caused an island- / country-wide power failure for a while now, for example...).

[QUOTE=LaurV;408209]And we say sorry for divagation from the topic.[/QUOTE]

Again, indeed.

I should have known better; programming language wars can take on the fervor of religious debates. At the end of the day everyone's correct and everyone's wrong! :smile:

James: if you want a copy of my code (spidering / summary / rendering) you're welcome to it. But this is likely a situation of "[URL="https://en.wikiquote.org/wiki/Fred_Brooks"]Plan to throw one away; you will, anyhow.[/URL]"

P.S. Just to be clear (as there has been some confusion about this in the past), mersenne.info and GPU72 are /completely/ separate. The latter evolved from the former, but they share no infrastructure.

James Heinrich 2015-08-18 19:40

I decided to try again, and while incomplete, the start is now here:
[url]http://www.mersenne.ca/status/tf[/url]

It only goes to 1.0M resolution, not 0.01M resolution as mersenne.info does.
Comparison between dates is planned, but not yet implemented (partly because there's only 1 day of history in the database :smile:)

edit: Consider this alpha, I know there's glaring bugs so don't even worry about bug reports yet, please :sirrobin:

James Heinrich 2015-08-18 21:41

I forgot I hadn't yet updated the live database with the 1M-resolution in a certain table, vs 10M resolution that was previously there...
[code]Query OK, 182907459 rows affected (2 hours 55 min 23.28 sec)
Rows matched: 203280221 Changed: 182907459 Warnings: 0[/code]:w00t:

chalsall 2015-08-19 00:03

[QUOTE=James Heinrich;408252]I forgot I hadn't yet updated the live database with the 1M-resolution in a certain table, vs 10M resolution that was previously there...[/QUOTE]

And we programmers sometimes think we have a difficult time...

I'm about to have a few tonnes of concrete poured, and wanted to know how long I should let it set before I cover it in water.

After much Googling, I was led [URL="http://www.ccil.com/assets/best_practice_guideline_bcrmca_ccil_csa_2014_edition_201410107.pdf"]here[/URL].

Didn't actually answer my question, but OMG! (Other research leads me to think I should keep it wet as soon as it is applied, and cover it with water as soon as the men are off the site.)

James Heinrich 2015-08-19 00:17

[QUOTE=chalsall;408259]After much Googling, I was led [URL="http://www.ccil.com/assets/best_practice_guideline_bcrmca_ccil_csa_2014_edition_201410107.pdf"]here[/URL].[/QUOTE]You can send him to a warm place, but he's still inexorably drawn back to BC... :tu:[quote]...This document was prepared by representatives of the ready-mixed concrete suppliers in British Columbia...[/quote]

chalsall 2015-08-19 00:36

[QUOTE=James Heinrich;408260]You can send him to a warm place, but he's still inexorably drawn back to BC... :tu:[/QUOTE]

What can I say? They do chemistry better there.... (:wink:)

snme2pm1 2015-08-20 11:14

[QUOTE=James Heinrich;408244]Consider this alpha, I know there's glaring bugs so don't even worry about bug reports yet, please :sirrobin:[/QUOTE]

Ok, so regarded.
Initial view showed some positive elements, and some numbers that were plainly incorrect, but nevertheless somewhat approximate.
But perhaps you know that, and perhaps I need not mention it.
Gosh, one of your quotes speaks about significant processing work (2 hours 55 min 23.28 sec); that's a scary burden for whatever that component was.
Hopefully there will not be a daily burden approaching that magnitude on any piece of equipment, or is that a misguided belief?
The response to requests during the past 24 hours has produced an exception message, and perhaps that is likewise known and similarly unmentionable.

LaurV 2015-08-20 11:20

On the same direction as the previous poster... :razz:

[QUOTE]Notice: Undefined offset: 0 in /var/www/vhosts/mersenne.ca/httpdocs/visualization.php on line 86 Warning: Invalid argument supplied for foreach() in /var/www/vhosts/mersenne.ca/httpdocs/visualization.php on line 86 [/QUOTE]

James Heinrich 2015-08-20 12:16

[QUOTE=snme2pm1;408373]some numbers that were plainly incorrect, but nevertheless somewhat approximate.[/quote]Please give more details as to which numbers you saw as incorrect (and what you think they should be, and where you got the correct number from).

[QUOTE=snme2pm1;408373]Hopefully there will not be a daily burden approaching that magnitude on any piece of equipment, or is that a misguided belief?[/quote]No, daily query is more like[code]Query OK, 16496 rows affected (2 min 38.99 sec)[/code]

[QUOTE=snme2pm1;408373]The response to requests during the past 24 hours has produced an exception message[/QUOTE]Apparently the aforementioned daily query didn't run like it was supposed to. I ran it manually just now, I'll need to investigate why it didn't run on schedule.

Please still consider it alpha, but you can start telling me about bugs now :smile:

James Heinrich 2015-08-20 13:35

[QUOTE=snme2pm1;408373]some numbers that were plainly incorrect, but nevertheless somewhat approximate.[/QUOTE]Probably relevant: mersenne.ca is still chewing through a list of approx 700,000 factors that may or may not already be known to mersenne.ca (about 95% were not previously recorded). This processing will take at least another 2 days, and of course until that's done the number of known factors in any particular range is most likely inaccurate.

chalsall 2015-08-20 13:42

[QUOTE=James Heinrich;408376]Please still consider it alpha, but you can start telling me about bugs now :smile:[/QUOTE]

Looking good James! Thanks for taking this on! :smile:

Please let me know when you get the weekly deltas display implemented (I'm imagining in about a week... :wink:) and I'll put back the links for the "Weekly Progress Reports" on GPU72.

James Heinrich 2015-08-20 13:50

[QUOTE=chalsall;408383](I'm imagining in about a week... :wink:)[/QUOTE]A week from now I'll be peregrinating the Kettle Valley far away from thoughts of Mersenne, so it'll likely be at least two weeks (the data is structured appropriately, but I haven't yet written anything to display interval-delta data). I'll let you know when it's working.

manfred4 2015-08-20 13:52

[QUOTE=James Heinrich;408376]Please give more details as to which numbers you saw as incorrect (and what you think they should be, and where you got the correct number from).
[/QUOTE]

I thought the same - for example, there are over 230k exponents at 65 bits between 400M and 600M [URL="http://www.mersenne.ca/status/tf/0/1/0"]here[/URL], whereas there should be none at all; see [URL="http://www.mersenne.org/report_factoring_effort/?exp_lo=400000000&exp_hi=600000000&bits_lo=65&bits_hi=&tftobits=72"]here[/URL].

But for the rest of it the look is really nice already ;)

James Heinrich 2015-08-20 14:00

[QUOTE=manfred4;408385]I thought the same - for example there are over 230k Exponents at 65 bits between 400M and 600M on [URL="http://www.mersenne.ca/status/tf/0/1/0"]here[/URL], whereas there should be none at all, see [URL="http://www.mersenne.org/report_factoring_effort/?exp_lo=400000000&exp_hi=600000000&bits_lo=65&bits_hi=&tftobits=72"]here[/URL][/QUOTE]I'll wait until it's finished chewing on the unprocessed known factors before I take a closer look, although a quick glance suggests that alone is not sufficient to explain the difference.

Ideally when Aaron is back on the case I can get a 1-time export of all unfactored PrimeNet exponents and their TF level to bring my master table in sync with PrimeNet (all subsequent changes should be propagated correctly, but I likely have some historical inaccuracies).

alpertron 2015-08-20 15:27

Excellent!!!

One small problem is that there is an extra row at the end. For instance if I want to see the range 0 to 1000M (see [URL="http://www.mersenne.ca/status/tf/0/1/0"]here[/URL]), there is a row 1000M with data from 1000M to 1001M.

James Heinrich 2015-08-20 16:01

[QUOTE=alpertron;408392]One small problem is that there is an extra row at the end. For instance if I want to see the range 0 to 1000M (see [URL="http://www.mersenne.ca/status/tf/0/1/0"]here[/URL]), there is a row 1000M with data from 1000M to 1001M.[/QUOTE]Thanks, fixed.

chalsall 2015-08-20 16:01

[QUOTE=James Heinrich;408384]A week from now I'll be peregrinating the Kettle Valley...[/QUOTE]

Lucky man!

If you have the time, I highly recommend visiting Peachland. There are many unique wineries there as well.

James Heinrich 2015-08-20 16:05

I lived in Kelowna for a [url=https://en.wikipedia.org/wiki/2003_Okanagan_Mountain_Park_Fire]year[/url] so I'm not unfamiliar with the area :smile:

chalsall 2015-08-20 17:41

[QUOTE=chalsall;408264]What can I say? They do chemistry better there.... (:wink:)[/QUOTE]

Just to tangent back here and rant for a bit if I may...

So, our contractor finally shows up to do the last of the snagging (just) before we pour the concrete tomorrow.

He announces that we need more concrete, and more Xypex Additive and Xypex Concentrate (two separate products).

Wanting to confirm this (Xypex is /very/ expensive), Linda calls the local supplier to ask what the proper mix ratio is. The answer was "Oh, three PET (soda drink) bottles, cut off at the top of the label, full, per bag of cement."

A new measurement standard I had never heard of before!

Sigh....

petrw1 2015-08-20 18:43

I'm liking it so far....
 
Could we get another level or 2 of zoom in?

James Heinrich 2015-08-20 20:11

[QUOTE=petrw1;408425]Could we get another level or 2 of zoom in?[/QUOTE]No? :whistle:

In all seriousness, it can be done, but at the expense of (much) increased data storage requirements and somewhat increased time to generate the data each day. More explicitly, at the 1M level I track about 16k data points per day, and the query takes ~2.5 minutes. At 0.01M (10k) resolution that increases to 900k data points and a 30-minute query. I can probably optimize the query a bit, but 900k data points per day is... a lot. Mind you that's across the full M2[sup]32[/sup] range, not the 1000M PrimeNet range.

I'm rerunning the query now to see how many datapoints I come up with for 100k resolution, which might be a feasible compromise.
edit: answer: about 100k, which is still a pile, but not completely impractical.

Prime95 2015-08-20 20:43

[QUOTE=James Heinrich;408429]it can be done, but at the expense (much) increased data storage requirements [/QUOTE]

There is another option. At fine granularity run the query on the Primenet server. The server should be able to sum up small intervals without too much server load. Historical data though would be very hard or impossible to calculate.

James Heinrich 2015-08-20 21:34

A more complex compromise I haven't properly thought through could be something like 0.01M resolution in (0M-100M); 0.1M resolution in (100M-1000M); 1M resolution in (1000M-4294M). This would give the most detail where people are likely to care without too many spurious data points.

Running a quick test, that works out to 75k data points and a not-unreasonable ~3-minute query

petrw1 2015-08-20 21:38

[QUOTE=James Heinrich;408433]A more complex compromise I haven't properly thought through could be something like 0.01M resolution in (0M-100M); 0.1M resolution in (100M-1000M); 1M resolution in (1000M-4294M). This would give the most detail where people are likely to care without too many spurious data points.

Running a quick test, that works out to 75k data points and a not-unreasonable ~3-minute query[/QUOTE]

Sounds reasonable...

snme2pm1 2015-08-20 21:49

[QUOTE=James Heinrich;408376]Please give more details as to which numbers you saw as incorrect (and what you think they should be, and where you got the correct number from).
[/QUOTE]

As an example, the unfactored 5M region has been fully explored out 65 bits, [url]http://www.mersenne.org/report_factoring_effort/?exp_lo=5000000&exp_hi=5999999&bits_lo=1&bits_hi=64[/url].
However, [url]http://www.mersenne.ca/status/tf/0/3/0#[/url] shows 375 below 65 bits.
Is it perhaps including factored exponents in the tally?
By the way, the links on those rows are broken, all saying onClick="location = 'http://www.mersenne.org/report_factoring_effort/?exp_lo=0000000&exp_hi=0999999';"
Similarly: the 4M region has been explored out 66 bits, but the preview report shows 942 at only 65 depth.

manfred4 2015-08-20 22:10

Another thing, since this has become your "new visualization tool" thread: [URL="http://www.mersenne.ca/graphs/factor_bits_384M/factor_bits_384M_20150820.png"]this picture[/URL] seems to have stopped updating properly 6 days ago; the progress in the last 6 days seems way too small compared to the time intervals before. Did you mess around there as well?

snme2pm1 2015-08-20 23:02

[QUOTE=snme2pm1;408435]Similarly: the 4M region has been explored out 66 bits, but the preview report shows 942 at only 65 depth.[/QUOTE]

Further probing of mersenne.org suggests that the 66 bit column figure for 4M should be 21629, which is 942+20687, i.e. some of the tally has been splintered into the preceding bucket.
p.s. Later columns look correct.

James Heinrich 2015-08-20 23:27

[QUOTE=James Heinrich;408386]I'll wait until it's finished chewing on the unprocessed known factors before I take a closer look, although a quick glance suggests that alone is not sufficient to explain the difference.[/quote]Update: this list is now down to about 525000 factors to check and absorb. Another couple days.

[QUOTE=James Heinrich;408386]... I can get a 1-time export of all unfactored PrimeNet exponents and their TF level to bring my master table in sync with PrimeNet[/QUOTE]George has just provided me with the data I need on this front, importing it now. As suspected above, there's a significant number of discrepancies in the 500M range and elsewhere. Broken down more, these are the ranges and counts that were off in my data:[code] [000] => 490
[001] => 1711
[002] => 9164
[003] => 5145
[005] => 230
[007] => 13707
[008] => 9998
[009] => 4733
[010] => 5063
[011] => 443
[201] => 1
[219] => 1
[232] => 1
[300] => 1
[387] => 1
[428] => 1
[439] => 3
[456] => 3
[528] => 231
[532] => 22
[533] => 128
[536] => 78
[537] => 2
[541] => 241
[576] => 4577
[577] => 21983
[578] => 20578
[579] => 21684
[580] => 8846
[581] => 10122
[582] => 18635
[583] => 13978
[584] => 14295
[585] => 21140
[586] => 22344
[587] => 16286
[588] => 10010
[589] => 19486
[946] => 12850[/code]

James Heinrich 2015-08-20 23:44

[QUOTE=manfred4;408436]Another thing, since this has become your "new visualization tool" thread: [URL="http://www.mersenne.ca/graphs/factor_bits_384M/factor_bits_384M_20150820.png"]this picture[/URL] seems to have stopped updating properly 6 days ago...[/QUOTE]Not intentionally, but it's not impossible that I broke something unintentionally. Keep an eye on it and let me know if it continues to misbehave over the next few days (or sooner if it gets really wonky).

James Heinrich 2015-08-21 00:11

[QUOTE=James Heinrich;408433]A more complex compromise I haven't properly thought through could be something like 0.01M resolution in (0M-100M); 0.1M resolution in (100M-1000M); 1M resolution in (1000M-4294M). This would give the most detail where people are likely to care without too many spurious data points.[/QUOTE]I have just implemented this approach to see how it goes.

chalsall 2015-08-21 01:16

[QUOTE=James Heinrich;408445]I have just implemented this approach to see how it goes.[/QUOTE]

Just to put this on the table: it used to take my spider about one hour of work to get the current data, and then the summary scripts about another four.

Not a terrific Internet connection, but not a terribly bad server either.

Happy to share my historical datasets, James (though they're empirically noisy).

LaurV 2015-08-21 14:15

The right arrow in the lower right side does not work (the pointer is in the weeds).

(edit: and I am terribly upset by the <65 column. Could it not be split into a few, at least for views of <1M expos? Well... this is nitpicking, you can ignore it, but here in this part of the world we would like to see the split :razz:)

James Heinrich 2015-08-21 15:05

[QUOTE=LaurV;408467]The right arrow in the lower right side does not work (the pointer is in the weeds).[/quote]I fixed what I think you meant.

[QUOTE=LaurV;408467]and I am terrible upset by the <65 column. Could not be split in few, at leas for views of <1M expos?[/QUOTE]No problem, it's just a display consideration. I've lowered the min to 61.

LaurV 2015-08-21 16:19

b-e-a-utiful! Thanks! I knew you were the man! :tu:

snme2pm1 2015-08-21 22:21

[QUOTE=LaurV;408467]Could it not be split into a few, at least for views of <1M expos?[/QUOTE]

I didn't imagine you were keen on that region.
In a few weeks there might not be anything below the 62 column, anywhere.
I hope it is recognised that the numbers remain inconsistent with PrimeNet.

lycorn 2015-08-22 00:22

There are several of us keen on that region...
As of now, the correct count is

61 - 898
62 - 15084
63 - 114
64 - 2

Total < 65 bits - 16098

All exponents TFed to less than 65 bits are < 1 M.

James Heinrich 2015-08-22 02:56

[QUOTE=snme2pm1;408495]I hope it is recognised that the numbers remain inconsistent with PrimeNet.[/QUOTE]Very likely. It'll be another couple days before I have hope of being in sync. I will let you know when I think that is the case and [I]then[/I] you can start pointing out what doesn't match what.

James Heinrich 2015-08-22 14:14

[QUOTE=James Heinrich;408441]George has just provided me with the data I need on this front, importing it now. As suspected above, there's a significant number of discrepancies in the 500M range and elsewhere.[/QUOTE]As it turns out, the data file I got was incomplete. I have re-run the update with the now-complete data file and there was a large number of additional updates to my data. Unfortunately I can't give a breakdown of counts by range (or even overall counts) since my connection timed out during the update :sad: But it was [i]many[/i]. I had to restart the run from 831M and the last part of the update looked like this:[code][832] => 18190
[833] => 19477
[834] => 16172
[835] => 18614
[836] => 18242
[837] => 20867
[838] => 18160
[839] => 19710
[840] => 10414
[841] => 16051
[842] => 19450
[843] => 10041
[844] => 18929
[845] => 20201
[846] => 15946
[847] => 22085
[848] => 9124
[849] => 12634
[850] => 8684
[851] => 20088
[852] => 20368
[853] => 3680
[854] => 17773
[855] => 14069
[856] => 17622
[857] => 15487
[858] => 18112
[859] => 16923
[860] => 3982
[861] => 15053
[862] => 1525
[863] => 10259
[864] => 19286
[865] => 19889
[866] => 18642
[867] => 21487
[868] => 19872
[869] => 11385
[870] => 20105
[871] => 21791
[872] => 22159
[873] => 5382
[874] => 18284
[875] => 4120
[876] => 8365
[877] => 13381
[878] => 4000
[879] => 5006
[880] => 5944
[881] => 16342
[882] => 17080
[883] => 12851
[884] => 1862
[885] => 14264
[886] => 17130
[888] => 14323
[889] => 14128
[890] => 18256
[891] => 16852
[892] => 16896
[893] => 14307
[894] => 18943
[895] => 19340
[896] => 9353
[898] => 15794
[899] => 1240
[900] => 1
[901] => 5854
[902] => 8240
[903] => 16026
[904] => 10980
[905] => 12724
[906] => 22555
[907] => 2287
[908] => 12077
[909] => 18991
[910] => 1
[911] => 5391
[912] => 3
[913] => 8443
[914] => 103
[915] => 124
[916] => 2019
[917] => 1
[918] => 1
[919] => 16
[920] => 204
[926] => 2441
[928] => 26
[929] => 24
[930] => 3
[932] => 1
[934] => 2105
[936] => 20
[937] => 27
[938] => 14992
[939] => 548
[940] => 15
[941] => 25
[942] => 1
[943] => 192
[944] => 1
[945] => 27
[946] => 7987
[947] => 1232
[948] => 927
[950] => 15140
[951] => 14177
[952] => 2
[955] => 1
[956] => 1
[957] => 24
[958] => 12
[959] => 7
[960] => 947
[961] => 13218
[962] => 2986
[963] => 1918
[964] => 899
[965] => 822
[966] => 1864
[967] => 2232
[968] => 1501
[969] => 1242
[970] => 14713
[971] => 15157
[972] => 9364
[973] => 12541
[974] => 20195
[975] => 8107
[976] => 10098
[977] => 6152
[978] => 1688
[979] => 1039
[980] => 11962
[981] => 13431
[982] => 15277
[983] => 15145
[984] => 8413
[985] => 12304
[986] => 12808
[987] => 9691
[988] => 15647
[989] => 11218
[990] => 204
[993] => 62
[999] => 376[/code]

[QUOTE=manfred4;408436]Another thing, since this has become your "new visualization tool" thread: [URL="http://www.mersenne.ca/graphs/factor_bits_384M/factor_bits_384M_20150820.png"]this picture[/URL] seems to have stopped updating properly 6 days ago...[/QUOTE]I have also discovered a problem that may be related to that. I have fixed the code; unfortunately I need to reparse a large amount of data. Hopefully I can complete that before the end of today, we'll see how it goes.

Madpoo 2015-08-22 16:24

[QUOTE=James Heinrich;408399]I lived in Kelowna for a [url=https://en.wikipedia.org/wiki/2003_Okanagan_Mountain_Park_Fire]year[/url] so I'm not unfamiliar with the area :smile:[/QUOTE]

The fires in Okanogan are bad news (I live in the Seattle area, so the sunrise this morning was spectacular though). I've been looking at property in that area and it's sad to know that some of the beautiful areas I've seen on my drives will be scorched.

EDIT: and of course I just saw that your link mentioned Okanagan (in BC), not Okanogan (in WA). :smile:

Madpoo 2015-08-22 16:36

[QUOTE=James Heinrich;408521]...
I have also discovered a problem that may be related to that. I have fixed the code, unfortunately I need to reparse a large amount of data. Hopefully I can complete that before the end of today, we'll see how it goes.[/QUOTE]

As George had mentioned, if it would be any faster to run certain queries on Primenet itself, let me know.

James Heinrich 2015-08-22 19:31

[QUOTE=Madpoo;408537]As George had mentioned, if it would be any faster to run certain queries on Primenet itself, let me know.[/QUOTE]No, I think it's all good in that department. It's just a matter of bringing my data into full sync with PrimeNet, and once full sync is established then all the daily changes should be parsed from the daily XML dumps and things should (in theory) stay in sync.

I've readjusted all the no-factor TF levels, I'm currently sync'ing B1/B2 PM1 bounds, and I still need to check for any exponents that PrimeNet thinks are factored but my database doesn't know about. Once that's all done I think it should be getting reasonably close.

James Heinrich 2015-08-22 22:06

I have now finished sync'ing all the data I think I need to. So the numbers should be pretty close (keeping in mind my data is delayed 24h from PrimeNet).

lycorn 2015-08-22 22:18

It's looking nice now.
Thx a lot for your work! :bow:

manfred4 2015-08-22 22:30

Great Job! Finally having this tool again!

But there is one small thing that still bugs me a bit: if you zoom in on some region like 300M and then zoom out again, you get a table starting at 300M and ending at 1200M, not, as I would expect, 0M to 900M again.

Haven't found anything out of order data-wise anymore, good job!

James Heinrich 2015-08-22 22:34

[QUOTE=manfred4;408565]But there is one small thing that still bugs me a bit: if you zoom in on some region like 300M and then zoom out again, you get a table starting at 300M and ending at 1200M, not, as I would expect, 0M to 900M again.[/QUOTE]That is one issue I still need to address. Another is the data range zoomed to on mersenne.org when zooming in beyond the data I have.

snme2pm1 2015-08-22 23:18

[QUOTE=James Heinrich;408562](keeping in mind my data is delayed 24h from PrimeNet).[/QUOTE]
Maybe at some convenient point, your schedule for that can be exposed for clarity.

[QUOTE=James Heinrich;408566]the data range zoomed to on mersenne.org when zooming in beyond the data I have.[/QUOTE]
When in a mode of presenting purely recent numbers, why not use an agent at mersenne.org for a live tally, as George suggested?

At the moment, when using the Control key with links, browsers are not opening a fresh tab.
Also I didn't try to work out what time zone mersenne.ca operates with for messages.
The red disclaimer patch you had some hours ago used a non-universal style of date, but was that UTC time or something else?
p.s. I wasn't suggesting a live tally across vast ranges at the high end of the space.

James Heinrich 2015-08-23 02:47

[QUOTE=manfred4;408565]But there is one small thing that still bugs me a bit: if you zoom in on some region like 300M and then zoom out again, you get a table starting at 300M and ending at 1200M, not, as I would expect, 0M to 900M again.[/QUOTE]The behaviour should be more intuitive now.

[QUOTE=snme2pm1;408569]Maybe at some convenient point, your schedule for that can be exposed for clarity.
Also I didn't try to work out what time zone mersenne.ca operates with for messages.[/QUOTE]PrimeNet generates its daily XML dump at midnight UTC, completing within a couple of minutes.
mersenne.ca currently checks for the XML dump at 00:27h, with the caveat that the server actually runs in the Eastern (EDT) timezone with daylight saving time, so that would be 01:27h between November and April. However, I have now set mersenne.ca to operate in UTC, since that is what the timestamp data from PrimeNet uses. I have added a data-current-as-of date at the top of the table.
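Scheduling the check in UTC rather than server-local time sidesteps the daylight-saving drift entirely. A minimal sketch of the idea (hypothetical; the actual mersenne.ca cron setup is not public):

```python
from datetime import datetime, timedelta, timezone

def next_import_time(now: datetime) -> datetime:
    """Next 00:27 UTC import slot at or after `now` (a timezone-aware datetime)."""
    now_utc = now.astimezone(timezone.utc)
    candidate = now_utc.replace(hour=0, minute=27, second=0, microsecond=0)
    if candidate < now_utc:
        candidate += timedelta(days=1)  # today's slot has passed; use tomorrow's
    return candidate
```

A job pinned to 00:27 local Eastern time would shift by an hour relative to the PrimeNet dump twice a year; pinning it to UTC keeps the 27-minute gap constant.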

[QUOTE=snme2pm1;408569]At the moment, when using Control key with links, browsers are not opening a fresh tab.[/QUOTE]The range labels at the start of the rows are now <a href> links that can be ctrl-clicked, right-clicked and so forth as normal links; the bulk of the table row still has the onclick event for convenience.

LaurV 2015-08-23 14:48

[QUOTE=James Heinrich;408566]That is one issue I still need to address.[/QUOTE]
Or maybe not! I think it is better as it is. Maybe I want to see the data from 2M to 12M and leave aside the lower exponents (this was just an example).

James Heinrich 2015-08-23 15:05

[QUOTE=LaurV;408601]Or maybe not! I think it is better as it is. Maybe I want to see the data from 2M to 12M and leave aside the lower exponents (this was just an example).[/QUOTE]You still can. The behaviour change I made affects the "zoom out" link. You can still go directly to view the range you want ([URL="http://www.mersenne.ca/status/tf/0/3/200"]2M-12M[/URL] in your example), currently by fiddling with the URL, but I'll try to expose that through a friendlier GUI.

LaurV 2015-08-23 15:15

Brilliant! I got it! I can use it like this until the interface is ready; you can make it low priority on the list.
A very well done job!

Waiting for the "change" reports :razz: hehe

James Heinrich 2015-08-23 18:16

[QUOTE=LaurV;408606]Waiting for the "change" reports :razz: hehe[/QUOTE][url]http://www.mersenne.ca/status/tf/1/1/0[/url]

snme2pm1 2015-08-23 22:44

Illumination
 
[QUOTE=James Heinrich;408613][url]http://www.mersenne.ca/status/tf/1/1/0[/url][/QUOTE]
Thanks very much.

snme2pm1 2015-08-24 01:09

Something looks fishy
 
5.43M shows 65:1, 66:218 // mersenne.org says 65:0
5.44M shows 65:220, 66:219 // that's too many
At 2015-08-23 21:37, in 5.44M region I lodged 219 no factor, 1 factor
The 5.44M differences suggest +219 unfactored. Isn't that supposed to be non-positive?

LaurV 2015-08-24 02:43

[QUOTE=James Heinrich;408613][URL]http://www.mersenne.ca/status/tf/1/1/0[/URL][/QUOTE]Yarrrr! :chappy:

James Heinrich 2015-08-24 03:00

[QUOTE=snme2pm1;408632]5.43M shows 65:1, 66:218 // mersenne.org says 65:0
5.44M shows 65:220, 66:219 // that's too many
At 2015-08-23 21:37, in 5.44M region I lodged 219 no factor, 1 factor[/QUOTE]My best guess at this point is that it's related to my server's asynchronous processing of the PrimeNet data and absorbing the day's newly discovered factors -- the TF level data is generated immediately upon import from PrimeNet, but the TF-level table wasn't yet updated for the newly-discovered factors. I have changed that, and tomorrow the numbers might match more closely.

edit: now that the factors are absorbed, I re-ran today's numbers and at very quick glance it seems to look better? I'm sure someone will point out where it's not.

snme2pm1 2015-08-24 03:36

[QUOTE=James Heinrich;408642]I re-ran today's numbers and at very quick glance it seems to look better?[/QUOTE]
Yes.
Note that the factor in the 5.43M brick was lodged more than 22 hours before midnight:
5433577 F 2015-08-23 01:37
Strange that it wasn't settled after such a long time.

I noticed that around 00:30, the presented pages advised the new day's date, but not revised data.
Then I saw one page with revised data, while other requests timed out, until a few minutes later.

ric 2015-08-24 13:16

Huh?
 
[URL="http://www.mersenne.ca/status/tf/1/1/0"]http://www.mersenne.ca/status/tf/1/1/0[/URL]

[CODE]Notice: Undefined offset: 0 in /var/www/vhosts/mersenne.ca/httpdocs/visualization.php on line 125 Warning: Invalid argument supplied for foreach() in /var/www/vhosts/mersenne.ca/httpdocs/visualization.php on line 125 Data as of 2015-08-23T23:59:00+00:00
Notice: Undefined offset: 0 in /var/www/vhosts/mersenne.ca/httpdocs/visualization.php on line 188 Warning: Invalid argument supplied for foreach() in /var/www/vhosts/mersenne.ca/httpdocs/visualization.php on line 188 Notice: Undefined index: exponents in /var/www/vhosts/mersenne.ca/httpdocs/visualization.php on line 57[/CODE]

James Heinrich 2015-08-24 15:12

[QUOTE=snme2pm1;408646]Note that the factor in the 5.43M brick was lodged more than 22 hours before midnight:
Strange that it wasn't settled after such a long time.[/QUOTE]Sorry, perhaps I didn't explain well. PrimeNet runs an export job at 00hUTC (or a few seconds after) exporting all the data submitted on the date that just ended. About 25 minutes later I import that data into mersenne.ca. Since each day's data can contain several thousand factors, and it takes a small but non-trivial amount of real time to verify each factor, during the import I simply pass off all the day's submitted factors to my batch factor processing job, which chews on a few factors every minute or so and (eventually) imports all the data and updates the tables. This works very well; unfortunately, in the case of this new report I need to have the factored-or-not flag set correctly in the exponents table [i]before[/i] generating the day's data, so when the PrimeNet data contains any factors I now immediately update that flag, although the factor processing is still done in the queue as before. As far as the site visitor goes, this means basically [U]nothing[/U]; the above was just a babbling explanation for those who like such things.
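The flag-now/verify-later flow described above can be sketched roughly as follows. This is a hypothetical reconstruction with made-up names (`exponent_flags`, `factor_queue`), not the actual mersenne.ca code:

```python
from collections import deque

factor_queue = deque()   # (exponent, submitted factor) pairs awaiting verification
exponent_flags = {}      # exponent -> factored-or-not flag used by report generation

def import_daily_dump(results):
    """results: iterable of (exponent, factor-or-None) from the daily dump."""
    for exponent, factor in results:
        if factor is not None:
            # Flip the flag synchronously so the day's report is consistent...
            exponent_flags[exponent] = True
            # ...but defer the expensive verification to the background queue.
            factor_queue.append((exponent, factor))

def process_queue_batch(n=5):
    """Background job: verify a few queued factors per run."""
    for _ in range(min(n, len(factor_queue))):
        exponent, factor = factor_queue.popleft()
        # A submitted factor must divide 2^exponent - 1.
        assert pow(2, exponent, factor) == 1
```

The design point is that the cheap bookkeeping (the flag) happens before the report is generated, while the per-factor work stays asynchronous.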

[QUOTE=snme2pm1;408646]I noticed that around the 00:30 time, the presented pages advised the new day date, but not revised data.
Then I saw one page with revised data, and other requests time-out, until a few minutes later.[/QUOTE]Good observation. The data-date was being pulled from the table that's imported directly from PrimeNet so would show the new date as soon as the import was complete, but there would be about a half-hour window where the new data was imported, but not yet processed for the visualization table data. This is fixed now, the date is stored elsewhere and only updated once the visualization tables are updated.

James Heinrich 2015-08-24 15:25

[QUOTE=ric;408677]Notice: Undefined offset: 0[/QUOTE]I needed to fiddle some things around. It should be back to normal now.

ric 2015-08-24 15:33

[QUOTE=James Heinrich;408691]It should be back to normal now.[/QUOTE]

Well done! Thx

James Heinrich 2015-08-24 16:01

[QUOTE=LaurV;408606]I can use it till the interface is ready, you can make it low priority in the list.[/QUOTE]I've put a crude GUI up now, it'll need some refining later but it'll get you started.

James Heinrich 2015-08-25 14:12

Hopefully all is running reasonably well now. I'm off on vacation for 10 days, whatever is currently broken will remain so until I return.

chalsall 2015-08-25 14:28

[QUOTE=James Heinrich;408746]Hopefully all is running reasonably well now. I'm off on vacation for 10 days, whatever is currently broken will remain so until I return.[/QUOTE]

Great job James! Thanks a lot!

Once your system has collected a week's worth of delta data, I'll put the links back on GPU72 over to your tables.

Enjoy your vacation! I have to say I really miss BC's summers (but not its winters!).

LaurV 2015-08-25 14:46

[QUOTE=James Heinrich;408746]Hopefully all is running reasonably well now. I'm off on vacation for 10 days, whatever is currently broken will remain so until I return.[/QUOTE]
Enjoy the holiday, you're worth it!

Madpoo 2015-08-25 20:33

[QUOTE=James Heinrich;408689]Sorry, perhaps I didn't explain well. PrimeNet runs an export job at 00hUTC (or a few seconds after) exporting all the data submitted on the date that just ended.[/QUOTE]

Yeah, it's set to run at 00:01 UTC daily, but there's a fudge factor of up to 30 seconds built in (that's in case some other task might be set to run at that same minute). It takes a little time to parse the day's logs and generate the XML, so it'll typically be done by 00:03 UTC at the latest. In general it'll be done by 00:02:30 :smile:

[QUOTE=James Heinrich;408689]...small but non-trivial amount of realtime to verify each factor...[/QUOTE]

Wha.... ? You don't trust Primenet to have already run that check when the factor was checked in? :smile: It does check, but the steps involved there are unfamiliar to me, so... yeah... I guess trust-but-verify if you're not sure either.

PS - I got your request to include additional resolution in the time stamps of the logs... as soon as my brain catches up with me I'll add that in. It should be trivial to include the seconds in that data, so then it becomes a matter of anyone else looking at those XMLs, make sure you can handle hh:mm:ss and not just hh:mm
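A consumer of the XML can future-proof its parsing against that change by accepting both forms. A minimal sketch, assuming the date portion stays `YYYY-MM-DD` (the exact field layout of the dump is an assumption here):

```python
from datetime import datetime

def parse_primenet_timestamp(s: str) -> datetime:
    """Accept both hh:mm and hh:mm:ss timestamps, per the planned change."""
    for fmt in ("%Y-%m-%d %H:%M:%S", "%Y-%m-%d %H:%M"):
        try:
            return datetime.strptime(s, fmt)
        except ValueError:
            continue
    raise ValueError(f"unrecognized timestamp: {s!r}")
```

Trying the longer format first means existing hh:mm data keeps working unchanged when seconds start appearing.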

petrw1 2015-08-25 20:39

[url]http://www.mersenne.ca/status/tf/0/5/33210[/url]

I only see 1 line 332.10M (not the usual 11) ...

James Heinrich 2015-08-25 20:55

[QUOTE=Madpoo;408774]You don't trust Primenet to have already run that check when the factor was checked in? :smile:[/QUOTE]I do, but re-checking never hurts. Most importantly, though, PrimeNet result lines record the factor as-submitted, which can be a composite of smaller prime factors. It's genuinely a factor of the Mersenne number, just not a prime factor, so I check everything for that :smile:
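Both checks mentioned above (does the submitted value divide 2^p-1, and is it prime or a product of smaller primes) can be sketched like this. This is a toy illustration using trial division; a real factor-processing job would presumably use a proper factoring/primality routine for large inputs:

```python
def verify_mersenne_factor(p: int, f: int) -> list[int]:
    """Check that f divides 2^p - 1, then split f into its prime factors.

    A submitted factor may be composite (a product of smaller prime
    factors of 2^p - 1); this returns the prime factorization.
    """
    if pow(2, p, f) != 1:  # modular exponentiation: cheap even for huge p
        raise ValueError(f"{f} does not divide 2^{p} - 1")
    primes, d = [], 3
    while d * d <= f:
        while f % d == 0:
            primes.append(d)
            f //= d
        d += 2  # factors of 2^p - 1 are odd, so skip even candidates
    if f > 1:
        primes.append(f)
    return primes
```

For example, 2047 = 23 x 89 is a genuine (but composite) factor of 2^11 - 1.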

[QUOTE=Madpoo;408774]PS - I got your request to include additional resolution in the time stamps of the logs...[/QUOTE]Thanks!

James Heinrich 2015-08-25 20:57

[QUOTE=petrw1;408775][url]http://www.mersenne.ca/status/tf/0/5/33210[/url]
I only see 1 line 332.10M (not the usual 11) ...[/QUOTE]Working as expected.
0M - 100M has 0.01M resolution
100M-1000M has 0.1M resolution
1000M-4294M has 1.0M resolution

Check a few posts/pages back for the rationale (mostly saving server time and space, concentrating the data points where they're most interesting).
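The three resolution tiers above amount to a simple mapping from exponent range to bin width; a sketch (illustrative only, not the site's PHP):

```python
def range_resolution(exponent_m: float) -> float:
    """Bin width in millions for a given exponent (also in millions)."""
    if exponent_m < 100:
        return 0.01   # 0M-100M: 0.01M bins
    if exponent_m < 1000:
        return 0.1    # 100M-1000M: 0.1M bins
    return 1.0        # 1000M-4294M: 1.0M bins
```

This keeps the total row count bounded while concentrating detail where most of the activity is, which is why 332.10M shows a single 0.1M row rather than the eleven 0.01M rows seen below 100M.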

petrw1 2015-08-25 21:19

[QUOTE=James Heinrich;408779]Working as expected.
0M - 100M has 0.01M resolution
100M-1000M has 0.1M resolution
1000M-4294M has 1.0M resolution

Check a few posts/pages back for the rationale (mostly saving server time and space, concentrating the data points where they're most interesting).[/QUOTE]

Right....I remember now

James Heinrich 2015-08-26 00:43

[QUOTE=Madpoo;408774]Yeah, it's set to run at 00:01 UTC daily, but there's a fudge factor of up to 30 seconds built in (that's in case some other task might be set to run at that same minute). It takes a little time to parse the day's logs and generate the XML, so it'll typically be done by 00:03 UTC at the latest. In general it'll be done by 00:02:30 :smile:[/QUOTE]And today, for the first time, your job croaked. The .xml.bz2 file is only 37 bytes and contains no data.

LaurV 2015-08-26 02:08

That's because nobody did any trial factoring job today! Back to work you lazy monkeys! :razz:

kladner 2015-08-26 03:24

[QUOTE=LaurV;408806]That's because nobody did any trial factoring job today! Back to work you lazy monkeys! [/QUOTE]

I have never done any trial factoring. My graphics cards do it for me. :razz:

srow7 2015-08-31 03:41

[QUOTE=James Heinrich;408779]Working as expected.
0M - 100M has 0.01M resolution
100M-1000M has 0.1M resolution
1000M-4294M has 1.0M resolution

Check a few posts/pages back for the rationale (mostly saving server time and space, concentrating the data points where they're most interesting).[/QUOTE]

"Zoom Out" not working as expected
click "40M", then zoom out, works as expected.
click "50M", then zoom out, it goes to a strange page
[URL]http://www.mersenne.ca/status/tf/0/3/5000[/URL]

LaurV 2015-08-31 04:29

Indeed.
But this is minor; as long as the data in the table is right, we will find the ranges that interest us after a few clicks and/or editing the link.

VictordeHolland 2015-09-03 13:48

Good job, James!
Me like!

James Heinrich 2015-09-04 22:25

[QUOTE=srow7;409239]"Zoom Out" not working as expected
click "40M", then zoom out, works as expected.
click "50M", then zoom out, it goes to a strange page
[URL]http://www.mersenne.ca/status/tf/0/3/5000[/URL][/QUOTE]Thanks, I've changed the nice simple rounding code to something unfortunately more complex, but I think it should do what you would reasonably expect it to now. Please let me know if this broke anything else.
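A plausible reconstruction of the fix (hypothetical; the actual rounding code isn't shown in the thread): widening the view isn't enough, the new start must also be snapped down to the coarser grid, otherwise zooming out from 50M can land on an off-grid page like /0/3/5000.

```python
def zoom_out(start_m: int, span_m: int) -> tuple[int, int]:
    """Widen the view 10x and re-align its start to the new, coarser grid."""
    new_span = span_m * 10
    new_start = (start_m // new_span) * new_span  # snap down to a span multiple
    return new_start, new_span
```

With alignment, zooming out from a 10M-wide view at 50M gives 0M-100M instead of an unaligned 50M start.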

ramgeis 2015-09-04 23:09

Thanks James for your work, it helps a lot! :tu:

Uncwilly 2015-09-04 23:38

I think that there is an error somewhere in this range here:
[url]http://www.mersenne.ca/status/tf/0/3/33000[/url]

The 2 that are listed at 66 and 67 don't seem to exist in the drill down to PrimeNet.
:beer:

James Heinrich 2015-09-05 02:10

[QUOTE=Uncwilly;409616]I think that there is an error somewhere in this range here:
[url]http://www.mersenne.ca/status/tf/0/3/33000[/url]
The 2 that are listed at 66 and 67 don't seem to exist in the drill down to PrimeNet.[/QUOTE]The two exponents in question are[code]+-----------+------+--------+
| exponent | tf | mrange |
+-----------+------+--------+
| 336938033 | 66 | 336 |
| 337911631 | 67 | 337 |
+-----------+------+--------+
2 rows in set (0.20 sec)[/code]They are indeed as described on PrimeNet:
[url]http://www.mersenne.org/M336938033[/url]
[url]http://www.mersenne.org/M337911631[/url]

But my drill-down is not setting the proper range when sending you to PrimeNet. I'll take a look over the weekend and fix it.


edit: give that a shot, should be showing the correct range now for exponents > 100M

Uncwilly 2015-09-05 03:26

[QUOTE=James Heinrich;409622]edit: give that a shot, should be showing the correct range now for exponents > 100M[/QUOTE]
Cool! Works now. I am taking those 2 to a more proper level as we speak.

manfred4 2015-09-07 20:18

Noticed one more thing trying to go back a full week in your history:
[URL="http://www.mersenne.ca/status/tf/7/2/0"]http://www.mersenne.ca/status/tf/7/2/0[/URL]
There I see the additional numbers in "factored" not matching the negative numbers in "unfactored". It works for everything less than 7 days back, but that one, and 8 or 9 days back, are broken.
Was there some corrupt data on your side?

LaurV 2015-09-08 03:48

There was an initial phase of "collecting data". If you go back day by day you will see totally missing data (11-12 days ago), then matching data again (20 days ago or so), then mismatched data again, then no data again (from before the site existed). You should ignore that. In time, as days pass, it will be OK. :smile:

James Heinrich 2015-09-16 19:12

[QUOTE=manfred4;409818]Noticed one more thing trying to go back a full week in your history:
There I see the additional numbers in factored not matching the negative number in unfactored - it works with all less than 7 days but that one and 8 or 9 days before are broken. Was there some corrupt data on your side?[/QUOTE]I'll blame some PrimeNet weirdness in the daily reports (which of course happened when I was on vacation for a week). The data for the first few days (Aug 28/29/30) doesn't seem sane, but from 31-Aug-2015 onwards it seems OK. Simple solution: I've deleted the data prior to 31-Aug. :smile:

Sanity restored! :cmd:

chalsall 2015-09-16 19:26

[QUOTE=James Heinrich;410511]Sanity restored! :cmd:[/QUOTE]

"Or am I so sane that you just blew your mind?" - Kramer :wink:

Thanks again for doing this James.

One additional request, if I may... Could you add a "Totals" row at the bottom of each table?

Separately, I probably just missed it, but are you also doing temporal deltas for LLs and DCs? Mersenne.info never did P-1 deltas; are/could you?

James Heinrich 2015-09-16 19:54

[QUOTE=chalsall;410516]One additional request, if I may... Could you add a "Totals" row at the bottom of each table?[/QUOTE]Oops, didn't realize I'd missed that (was there for single-day table, was missing for delta table). Should show up now.

[QUOTE=chalsall;410516]Separately, I probably just missed it, but are you also doing temporal deltas for LLs and DCs? Mersenne.info never did P-1 deltas; are/could you?[/QUOTE]I am in fact not (currently) tracking anything other than TF. I hadn't thought of it; I didn't know mersenne.info had done so.
I suppose I have enough data available now to track P-1, probably LL as well, but I'm not sure if my data reliably knows which exponents have been double-checked. I'm waiting for some additional data export for when Aaron gets some free time which would help me in that regard.

I'm on the verge of moving so if you don't see me tracking P-1 and LL in, say, 3 weeks, remind me :smile:

lycorn 2015-09-16 23:01

Would it be feasible, without too much fuss, to remove the column that shows the count of exponents TFed to 61 bits, now that there are none, and make the leftmost column "<62"? And while you are at it, adding one to "81" and making the rightmost ">81".
Thx again for all your much appreciated work.

James Heinrich 2015-09-16 23:56

Easily done.

LaurV 2015-09-17 02:11

[QUOTE=James Heinrich;410556]Easily done.[/QUOTE]
Ha! You did it for him and didn't do it for me! :tantrum:
(see the milestones thread for details)


All times are UTC. The time now is 00:08.
