I can tell it was late at night when I posted that. I posted a link to the 64-bit asm siever.

[QUOTE=Batalov;325457]No, this is only for the CPUs that have 64 KB of L1 cache, i.e. AMD CPUs (log[SUB]2[/SUB] 64K = 16). Don't change the L1 bits for Intel CPUs.
How much of the q area have you already sieved? Which side have you sieved on? There's really no need for a project of this size to go over q>2^30. Try to cover the area from q=10^7 to your current lower limit (where you started, 238450000). Even if you go over 2^30, the yield will get smaller and smaller. You may get a better yield by repeating some of the most productive (lower-q) areas with the parameters that Tom (fivemack) suggested earlier. Have you used 3LP? Like
[code]
lpbr: 33
lpba: 33
mfba: 66
mfbr: 96
alambda: 2.55
rlambda: 3.7
[/code][/QUOTE]
Yeah, that's my current approach: re-sieving starting from q=10^7 with the parameters Tom suggested...

[QUOTE=Batalov;325457]Have you tried to filter your existing set of relations? Last but not least, do you have a computer (or a set of computers) to solve the resulting >40M matrix? (As the saying goes, take no offense, it's not the size (of the cluster), it's how you use it that matters.) Have you done an snfs ~270-280 before doing this snfs-290?[/QUOTE]
Yes, I'm aware that the matrix solving will be challenging and slow... I have a C250 (not sure exactly what the SNFS difficulty is) going on a multi-core machine that's estimated to take about a month of matrix solving. My cluster isn't really easy to set up with MPI, but it's an option I'm considering...
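As a rough way to read those 3LP parameters (this is my own sketch, not siever code): lpb* is the large-prime bound in bits, mfb* caps the bit-size of the cofactor left after sieving, and mfb exceeding 2*lpb is what allows a side's cofactor to split into three large primes.

```python
# Sketch (my own helper, not part of ggnfs/msieve): with lpb* the
# large-prime bound in bits and mfb* the max cofactor size in bits,
# a side allows three large primes ("3LP") when mfb exceeds 2*lpb.
params = {"lpbr": 33, "lpba": 33, "mfba": 66, "mfbr": 96}

def uses_3lp(lpb_bits, mfb_bits):
    """True if cofactors on this side may split into three large primes."""
    return mfb_bits > 2 * lpb_bits

print("algebraic 3LP:", uses_3lp(params["lpba"], params["mfba"]))  # False (66 = 2*33)
print("rational  3LP:", uses_3lp(params["lpbr"], params["mfbr"]))  # True  (96 > 66)
```

So with these parameters the algebraic side stays 2LP while the rational side is the 3LP one.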
[COLOR=black][FONT=Verdana]Well, it is going to be a hard project. You can find the snfs difficulty level in the logs of that other project, and you can roughly project that every 30 digits of snfs difficulty translate into 10 times more time for the project. A matrix of this size will definitely need MPI (and for MPI, you will need a very fast interconnect, or else the cluster's productivity will be less than that of the single strongest node). Here's the size of the 6,373 matrix (a similarly sized project, run by B.Dodson on an MPI cluster; the parameters above were from it; it was a 33-bit 3LP project):[/FONT][/COLOR]
[CODE]
Sat Jul 21 12:36:44 2012  matrix is 39095900 x 39096100 (11555.1 MB) with weight 3371381266 (86.23/col)
Sat Jul 21 12:36:44 2012  sparse part has weight 2638144630 (67.48/col)
[/CODE]
If you run from low q's with the 33-bit lpbr/a and mfba: 66, mfbr: 96, then you may get enough relations before reaching 300M. The bits were increased exactly to avoid running away too far on the q axis, definitely not to 1000M. The relations should be combinable.
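The "every 30 digits of difficulty is roughly 10x the time" rule of thumb above can be written as a one-liner; this is purely a back-of-envelope sketch, not anything from the tools.

```python
# Back-of-envelope: every ~30 digits of SNFS difficulty costs ~10x more
# total effort, i.e. time2/time1 ~ 10**((d2 - d1) / 30).
def snfs_time_ratio(d1, d2):
    return 10 ** ((d2 - d1) / 30)

print(round(snfs_time_ratio(260, 290), 1))  # snfs-290 vs snfs-260: ~10x
print(round(snfs_time_ratio(275, 290), 1))  # snfs-290 vs snfs-275: ~3.2x
```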
[QUOTE=Batalov;325478]Well, it is going to be a hard project. You can find the snfs difficulty level in the logs of that other project, and you can roughly project that every 30 digits of snfs difficulty translate into 10 times more time for the project. A matrix of this size will definitely need MPI (and for MPI, you will need a very fast interconnect, or else the cluster's productivity will be less than that of the single strongest node).[/QUOTE]
So, here's the good news: msieve was able to generate a matrix for (2801^831)/2800 with about 461M input relations (368M unique). Here's a snippet of the log, in case anyone is interested (the forum won't let me post more than this...)
[code]
Tue Jan 22 14:32:07 2013  begin with 179606350 relations and 160161527 unique ideals
Tue Jan 22 14:34:22 2013  reduce to 179416277 relations and 154697630 ideals in 7 passes
Tue Jan 22 14:34:22 2013  max relations containing the same ideal: 174
Tue Jan 22 14:35:31 2013  removing 7091785 relations and 5091785 ideals in 2000000 cliques
Tue Jan 22 14:35:36 2013  commencing in-memory singleton removal
Tue Jan 22 14:35:49 2013  begin with 172324492 relations and 154697630 unique ideals
Tue Jan 22 14:37:10 2013  reduce to 172138029 relations and 149417492 ideals in 6 passes
Tue Jan 22 14:37:10 2013  max relations containing the same ideal: 168
Tue Jan 22 14:38:16 2013  removing 6922412 relations and 4922412 ideals in 2000000 cliques
Tue Jan 22 14:38:20 2013  commencing in-memory singleton removal
Tue Jan 22 14:38:32 2013  begin with 165215617 relations and 149417492 unique ideals
Tue Jan 22 14:40:03 2013  reduce to 165016535 relations and 144293839 ideals in 6 passes
Tue Jan 22 14:40:03 2013  max relations containing the same ideal: 165
Tue Jan 22 14:41:15 2013  removing 6862286 relations and 4862286 ideals in 2000000 cliques
Tue Jan 22 14:41:20 2013  commencing in-memory singleton removal
Tue Jan 22 14:41:31 2013  begin with 158154249 relations and 144293839 unique ideals
Tue Jan 22 14:42:58 2013  reduce to 157961804 relations and 139237031 ideals in 7 passes
Tue Jan 22 14:42:58 2013  max relations containing the same ideal: 160
Tue Jan 22 14:44:02 2013  removing 6724377 relations and 4724377 ideals in 2000000 cliques
Tue Jan 22 14:44:06 2013  commencing in-memory singleton removal
Tue Jan 22 14:44:17 2013  begin with 151237427 relations and 139237031 unique ideals
Tue Jan 22 14:45:35 2013  reduce to 151031091 relations and 134303757 ideals in 6 passes
Tue Jan 22 14:45:35 2013  max relations containing the same ideal: 156
Tue Jan 22 14:46:38 2013  removing 6670225 relations and 4670225 ideals in 2000000 cliques
Tue Jan 22 14:46:42 2013  commencing in-memory singleton removal
Tue Jan 22 14:46:52 2013  begin with 144360866 relations and 134303757 unique ideals
Tue Jan 22 14:48:03 2013  reduce to 144148234 relations and 129418117 ideals in 6 passes
Tue Jan 22 14:48:03 2013  max relations containing the same ideal: 151
Tue Jan 22 14:49:04 2013  removing 6603405 relations and 4603405 ideals in 2000000 cliques
Tue Jan 22 14:49:08 2013  commencing in-memory singleton removal
Tue Jan 22 14:49:18 2013  begin with 137544829 relations and 129418117 unique ideals
Tue Jan 22 14:50:20 2013  reduce to 137319456 relations and 124586283 ideals in 6 passes
Tue Jan 22 14:50:20 2013  max relations containing the same ideal: 147
Tue Jan 22 14:51:14 2013  removing 6568551 relations and 4568551 ideals in 2000000 cliques
Tue Jan 22 14:51:18 2013  commencing in-memory singleton removal
Tue Jan 22 14:51:31 2013  begin with 130750905 relations and 124586283 unique ideals
Tue Jan 22 14:52:43 2013  reduce to 130514605 relations and 119778057 ideals in 6 passes
Tue Jan 22 14:52:43 2013  max relations containing the same ideal: 140
Tue Jan 22 14:53:36 2013  removing 6536343 relations and 4536343 ideals in 2000000 cliques
Tue Jan 22 14:53:39 2013  commencing in-memory singleton removal
Tue Jan 22 14:53:49 2013  begin with 123978262 relations and 119778057 unique ideals
Tue Jan 22 14:55:10 2013  reduce to 123722752 relations and 114982284 ideals in 6 passes
Tue Jan 22 14:55:10 2013  max relations containing the same ideal: 136
Tue Jan 22 14:56:06 2013  removing 6560987 relations and 4560987 ideals in 2000000 cliques
Tue Jan 22 14:56:10 2013  commencing in-memory singleton removal
Tue Jan 22 14:56:19 2013  begin with 117161765 relations and 114982284 unique ideals
Tue Jan 22 14:57:27 2013  reduce to 116893302 relations and 110148493 ideals in 6 passes
Tue Jan 22 14:57:27 2013  max relations containing the same ideal: 132
Tue Jan 22 14:58:12 2013  removing 6556656 relations and 4556656 ideals in 2000000 cliques
Tue Jan 22 14:58:16 2013  commencing in-memory singleton removal
Tue Jan 22 14:58:26 2013  begin with 110336646 relations and 110148493 unique ideals
Tue Jan 22 14:59:35 2013  reduce to 110043331 relations and 105293451 ideals in 7 passes
Tue Jan 22 14:59:35 2013  max relations containing the same ideal: 125
Tue Jan 22 15:00:18 2013  removing 6618713 relations and 4618713 ideals in 2000000 cliques
Tue Jan 22 15:00:22 2013  commencing in-memory singleton removal
Tue Jan 22 15:00:29 2013  begin with 103424618 relations and 105293451 unique ideals
Tue Jan 22 15:01:26 2013  reduce to 103106567 relations and 100351046 ideals in 7 passes
Tue Jan 22 15:01:26 2013  max relations containing the same ideal: 120
Tue Jan 22 15:02:15 2013  removing 4488221 relations and 3261291 ideals in 1226930 cliques
Tue Jan 22 15:02:17 2013  commencing in-memory singleton removal
Tue Jan 22 15:02:24 2013  begin with 98618346 relations and 100351046 unique ideals
Tue Jan 22 15:03:13 2013  reduce to 98473321 relations and 96943205 ideals in 5 passes
Tue Jan 22 15:03:13 2013  max relations containing the same ideal: 117
Tue Jan 22 15:04:15 2013  relations with 0 large ideals: 50400
Tue Jan 22 15:04:15 2013  relations with 1 large ideals: 10132
Tue Jan 22 15:04:15 2013  relations with 2 large ideals: 133132
Tue Jan 22 15:04:15 2013  relations with 3 large ideals: 995810
Tue Jan 22 15:04:15 2013  relations with 4 large ideals: 4302185
Tue Jan 22 15:04:15 2013  relations with 5 large ideals: 11705716
Tue Jan 22 15:04:15 2013  relations with 6 large ideals: 20984820
Tue Jan 22 15:04:15 2013  relations with 7+ large ideals: 60291126
Tue Jan 22 15:04:15 2013  commencing 2-way merge
Tue Jan 22 15:05:23 2013  reduce to 71534639 relation sets and 70004523 unique ideals
Tue Jan 22 15:05:23 2013  commencing full merge
Tue Jan 22 15:24:09 2013  memory use: 8280.4 MB
Tue Jan 22 15:24:19 2013  found 38628551 cycles, need 38416723
Tue Jan 22 15:24:39 2013  weight of 38416723 cycles is about 2689414920 (70.01/cycle)
Tue Jan 22 15:24:39 2013  distribution of cycle lengths:
Tue Jan 22 15:24:39 2013  1 relations: 4501188
Tue Jan 22 15:24:39 2013  2 relations: 5120135
Tue Jan 22 15:24:39 2013  3 relations: 5404626
Tue Jan 22 15:24:39 2013  4 relations: 5022230
Tue Jan 22 15:24:39 2013  5 relations: 4474123
Tue Jan 22 15:24:39 2013  6 relations: 3769913
Tue Jan 22 15:24:39 2013  7 relations: 3034947
Tue Jan 22 15:24:39 2013  8 relations: 2310051
Tue Jan 22 15:24:39 2013  9 relations: 1670837
Tue Jan 22 15:24:39 2013  10+ relations: 3108673
Tue Jan 22 15:24:39 2013  heaviest cycle: 20 relations
Tue Jan 22 15:24:53 2013  commencing cycle optimization
Tue Jan 22 15:25:43 2013  start with 186046555 relations
Tue Jan 22 15:29:51 2013  pruned 5360265 relations
Tue Jan 22 15:29:52 2013  memory use: 6028.8 MB
Tue Jan 22 15:29:52 2013  distribution of cycle lengths:
Tue Jan 22 15:29:52 2013  1 relations: 4501188
Tue Jan 22 15:29:52 2013  2 relations: 5249050
Tue Jan 22 15:29:52 2013  3 relations: 5634416
Tue Jan 22 15:29:52 2013  4 relations: 5175687
Tue Jan 22 15:29:52 2013  5 relations: 4610943
Tue Jan 22 15:29:52 2013  6 relations: 3822774
Tue Jan 22 15:29:52 2013  7 relations: 3034347
Tue Jan 22 15:29:52 2013  8 relations: 2242055
Tue Jan 22 15:29:52 2013  9 relations: 1574214
Tue Jan 22 15:29:52 2013  10+ relations: 2572049
Tue Jan 22 15:29:52 2013  heaviest cycle: 18 relations
Tue Jan 22 15:31:08 2013  RelProcTime: 14912
Tue Jan 22 15:31:08 2013  elapsed time 04:08:35
[/code]
The bad news: as you, I, and everyone else expected, it's huge, and it's going to take about 2 months to solve without MPI.
:)
[code]
Tue Jan 22 16:09:42 2013  building initial matrix
Tue Jan 22 16:26:47 2013  memory use: 13546.6 MB
Tue Jan 22 16:29:10 2013  read 38416723 cycles
Tue Jan 22 16:29:18 2013  matrix is 38416545 x 38416723 (11588.0 MB) with weight 3463583491 (90.16/col)
Tue Jan 22 16:29:18 2013  sparse part has weight 2615145298 (68.07/col)
Tue Jan 22 16:35:02 2013  filtering completed in 2 passes
Tue Jan 22 16:35:11 2013  matrix is 38411484 x 38411662 (11587.7 MB) with weight 3463450692 (90.17/col)
Tue Jan 22 16:35:11 2013  sparse part has weight 2615116589 (68.08/col)
Tue Jan 22 16:37:29 2013  matrix starts at (0, 0)
Tue Jan 22 16:37:36 2013  matrix is 38411484 x 38411662 (11587.7 MB) with weight 3463450692 (90.17/col)
Tue Jan 22 16:37:36 2013  sparse part has weight 2615116589 (68.08/col)
Tue Jan 22 16:37:36 2013  saving the first 48 matrix rows for later
Tue Jan 22 16:37:41 2013  matrix includes 64 packed rows
Tue Jan 22 16:37:46 2013  matrix is 38411436 x 38411662 (11154.4 MB) with weight 2753602949 (71.69/col)
Tue Jan 22 16:37:46 2013  sparse part has weight 2539934136 (66.12/col)
Tue Jan 22 16:37:46 2013  using block size 65536 for processor cache size 20480 kB
Tue Jan 22 16:39:57 2013  commencing Lanczos iteration (32 threads)
Tue Jan 22 16:39:57 2013  memory use: 18349.5 MB
Tue Jan 22 16:43:56 2013  linear algebra at 0.0%, ETA 1600h46m
Tue Jan 22 16:45:07 2013  checkpointing every 30000 dimensions
[/code]
So I guess I'll be checking back here in about 2 months unless I get MPI set up or find a kind soul who has an MPI cluster they can lend for a bit... hehe.
Well, it [I]is[/I] good news; you ran it as a 32-bit project, so indeed 368M unique relations would do. Two months of LA is not a death sentence now, with orthogonality checks. Do back up the whole dataset somewhere safe once, and then back up the .chk file every few days.

[QUOTE=Batalov;325539]Well, it [I]is[/I] good news; you ran it as a 32-bit project, so indeed 368M unique relations would do. Two months of LA is not a death sentence now, with orthogonality checks. Do back up the whole dataset somewhere safe once, and then back up the .chk file every few days.[/QUOTE]
Managed to improve things a bit: with a bunch more sieving and running the msieve filtering step with 'D 100', I now have this matrix:
[code]
Wed Jan 23 17:35:02 2013  matrix is 31290418 x 31290595 (12587.7 MB) with weight 3718456345 (118.84/col)
Wed Jan 23 17:35:02 2013  sparse part has weight 2955591831 (94.46/col)
Wed Jan 23 17:41:36 2013  filtering completed in 2 passes
Wed Jan 23 17:41:45 2013  matrix is 31289277 x 31289453 (12587.6 MB) with weight 3718418753 (118.84/col)
Wed Jan 23 17:41:45 2013  sparse part has weight 2955581343 (94.46/col)
Wed Jan 23 17:44:46 2013  matrix starts at (0, 0)
Wed Jan 23 17:44:53 2013  matrix is 31289277 x 31289453 (12587.6 MB) with weight 3718418753 (118.84/col)
Wed Jan 23 17:44:53 2013  sparse part has weight 2955581343 (94.46/col)
Wed Jan 23 17:44:53 2013  saving the first 48 matrix rows for later
Wed Jan 23 17:44:59 2013  matrix includes 64 packed rows
Wed Jan 23 17:45:05 2013  matrix is 31289229 x 31289453 (12151.2 MB) with weight 3104613918 (99.22/col)
Wed Jan 23 17:45:05 2013  sparse part has weight 2872469612 (91.80/col)
Wed Jan 23 17:45:05 2013  using block size 65536 for processor cache size 20480 kB
Wed Jan 23 17:47:19 2013  commencing Lanczos iteration (32 threads)
Wed Jan 23 17:47:19 2013  memory use: 17684.1 MB
Wed Jan 23 17:50:58 2013  linear algebra at 0.0%, ETA 1199h45m
Wed Jan 23 17:52:09 2013  checkpointing every 30000 dimensions
[/code]
So, 50 days' work... not great, but not terrible either.
When you have a .chk file, save it; then you can carefully kill* the job and try to [B][SIZE="7"]ncr[/SIZE][/B] with a different number of threads. (Avoid accidentally repeating the last command line from shell history, i.e. nc2. Do ncr.)
On different systems, different numbers of threads (t #) turn out to be best. (A smaller number of threads may each run longer on an atomic portion, but then the sync may happen faster. On a 2 x 6-core Xeon workstation, I've tried many times, and the best number of threads was 8 or 9. And setting thread affinities only made things worse.)
_________
[SIZE=1]*"Carefully" involves some seemingly strange rites. Don't kill the job at a dimension that is within ~+1000 of a number divisible by 5000. This is because an orthogonality check too close to a save file will report failure (even though everything is within norm). There's a detailed explanation in the msieve threads, but just follow this rule of thumb: press ^C only when the dimension's 4th digit from the right (the thousands digit) is 1, 2, 3 or 6, 7, 8.[/SIZE]
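That footnote's rule of thumb can be encoded directly; the helper below is my own sketch (not msieve code), and just checks that the thousands digit of the current dimension is 1-3 or 6-8:

```python
# Hypothetical helper encoding the ^C rule of thumb above: only interrupt
# block Lanczos when the dimension's thousands digit is 1,2,3,6,7 or 8,
# i.e. safely away from a multiple of 5000, where an orthogonality check
# too close to a save file can report a spurious failure.
def safe_to_interrupt(dimension):
    thousands = (dimension // 1000) % 5  # position within each 5000 block
    return thousands in (1, 2, 3)

for dim in (30000, 31500, 34200, 36800):
    print(dim, "safe" if safe_to_interrupt(dim) else "wait")
```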
I see that an invisible HWMNBN got a bit emotional and [COLOR=black][FONT=Verdana]beautified[/FONT][/COLOR] my message. Ah, memories, memories... [STRIKE]We all[/STRIKE] HWMNBN apparently has got a few. :razz:

[QUOTE=Batalov;325736]I see that an invisible HWMNBN got a bit emotional and [COLOR=black][FONT=Verdana]beautified[/FONT][/COLOR] my message. Ah, memories, memories... [STRIKE]We all[/STRIKE] HWMNBN apparently has got a few. :razz:[/QUOTE]
Maybe nc[B][SIZE="7"]r[/SIZE][/B] would have been even better. :smile: 
BTW, can you clarify what you meant when you said "you ran it as a 32-bit project"? Are you referring to the fact that I used the 32-bit gnfs-lasieve* binaries, or something else?
What would have been different if it had been run as a 64-bit project? Would I have needed more or fewer relations, would I have gotten better yields from the lattice sievers, etc.?
I'm fairly certain he's referring to the S*LPMAX values; they are both 2^32. In ggnfs notation, they would be written as "lpb*: 32" ("the large-prime bound is 2^32"). The number of relations required is asymptotically ~pi(lpb), i.e. the count of primes below the large-prime bound (equivalently, pi(SLPMAX)), so if you had used a larger prime bound of 2^33 you would have needed more relations, while on the other hand getting more relations per q due to the easier cofactor splitting.
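The pi(lpb) heuristic above can be sketched numerically; this is my own illustration (not from ggnfs or msieve), using the crude approximation pi(x) ~ x/ln x:

```python
import math

# Rough illustration of the pi(lpb) heuristic: relations needed scale
# roughly with the number of primes below the large-prime bound.
# pi(x) ~ x/ln(x) is a crude underestimate (actual pi(2^32) is
# 203280221, i.e. ~2.03e8), but it shows the scaling.
def approx_primes_below(lpb_bits):
    x = 2.0 ** lpb_bits
    return x / math.log(x)

ratio = approx_primes_below(33) / approx_primes_below(32)
print(f"~{approx_primes_below(32):.2e} primes below 2^32")
print(f"~{approx_primes_below(33):.2e} primes below 2^33")
print(f"so a 33-bit bound needs roughly {ratio:.2f}x the relations")
```

That factor of roughly 2x per extra bit is why bumping the large-prime bound trades fewer relations-per-q problems for a bigger relation target.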
On Windows, the 32-bit binary is slightly faster, while on Linux there is inline assembly that makes the 64-bit version significantly faster than either 32-bit version. (It seems that over the last few months Brian Gladman has been trying to port the asm to Windows, but I don't know how successful he's been.)