Factoring Projects > Msieve

2012-11-10, 11:59   #1
include
Nov 2012
216 Posts

Segmentation fault in msieve.

Hello, all!

I tried to factor a number with msieve-1.50 and received the error below.
I built msieve with the option MPI=1.
If you need additional information, contact me and I will try to provide it.

[msieve-1.50]$ ./msieve 58423868674360640853764652174229357809650619008996889226248823243442030054249 -q
[include-laptop:04187] *** Process received signal ***
[include-laptop:04187] Signal: Segmentation fault (11)
[include-laptop:04187] Signal code: Address not mapped (1)
[include-laptop:04187] Failing at address: 0xa0
[include-laptop:04187] [ 0] [0xb77a040c]
[include-laptop:04187] [ 1] /usr/lib/openmpi/ [0xb752b746]
[include-laptop:04187] [ 2] ./msieve() [0x8095962]
[include-laptop:04187] [ 3] ./msieve() [0x806c290]
[include-laptop:04187] [ 4] ./msieve() [0x805946a]
[include-laptop:04187] [ 5] ./msieve() [0x804cb71]
[include-laptop:04187] [ 6] ./msieve() [0x804b942]
[include-laptop:04187] [ 7] ./msieve() [0x804b178]
[include-laptop:04187] [ 8] /lib/ [0xb73053d5]
[include-laptop:04187] [ 9] ./msieve() [0x804b4b9]
[include-laptop:04187] *** End of error message ***
Segmentation fault

Best regards,
include
2012-11-10, 17:58   #2
Batalov
Mar 2008

9,463 Posts

Looks like you have two incompatible MPI libraries: built against one, running with the other (the system one). You can link against the static library to check whether this is the reason.
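One way to check this theory is to look at which libmpi the dynamic linker actually resolves for the binary. A sketch (the ./msieve path is from the OP's session; the /bin/ls fallback is only there so the commands are runnable anywhere for illustration):

```shell
# Show which libmpi a dynamically linked binary resolves at run time.
# Substitute the path to your own msieve binary; /bin/ls is used as a
# fallback here only so the snippet runs for illustration.
BIN=./msieve
[ -x "$BIN" ] || BIN=/bin/ls
ldd "$BIN" | grep -i 'libmpi' || echo "no dynamic MPI dependency found"
```

If ldd reports a different libmpi than the one msieve was compiled against (for example the system OpenMPI under /usr/lib/openmpi/, which appears in the backtrace above), that would confirm the mismatch; linking statically pins the library at build time.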
2012-11-10, 20:10   #3
jrk
May 2008

1095 Posts

I think the problem is that the MPQS code does not set up the MPI grid, and this example uses MPQS. That leads to an error in the lanczos code, which always uses MPI when available.

When I try to run the example with an MPI-aware msieve, it fails in block_lanczos() in common/lanczos/lanczos.c, on this line:
	/* tell all the MPI processes whether a post lanczos matrix
	   was constructed */

	MPI_TRY(MPI_Bcast(&have_post_lanczos, 1, MPI_INT, 0,
And outputs:
[atlas:12145] *** An error occurred in MPI_Bcast
[atlas:12145] *** on communicator MPI_COMM_WORLD
[atlas:12145] *** MPI_ERR_COMM: invalid communicator
[atlas:12145] *** MPI_ERRORS_ARE_FATAL (goodbye)
2012-11-13, 20:17   #4
include
Nov 2012

2 Posts

Originally Posted by Batalov
Looks like you have two incompatible MPI libraries: built against one, running with the other (the system one). You can link against the static library to check whether this is the reason.

This is the output of the find command in the /usr/lib folder:
find /usr/lib -name \*libmpi\*

I tried updating openmpi and rebuilding msieve. Nothing changed.

Sorry for the slow replies.

Last fiddled with by include on 2012-11-13 at 20:17
2012-11-14, 00:59   #5
jasonp
Tribal Bullet
Oct 2004

2·29·61 Posts

jrk is right: building with MPI and running the QS code will never work. It's not that difficult to *make* it work, but there's no point in doing so for such a small input (one thread will finish the resulting matrix in about one second).
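The practical takeaway for an input this small is to keep a non-MPI binary around for the QS range. A hedged sketch of the rebuild, assuming msieve's stock Makefile (MPI=1 is the flag the OP enabled, and make all is the usual build target; verify both against your copy of the source):

```shell
# Rebuild msieve without MPI support for inputs that land in MPQS.
# Run from the msieve-1.50 source directory.
make clean
make all        # note: no MPI=1 this time
./msieve 58423868674360640853764652174229357809650619008996889226248823243442030054249 -q
```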



Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.
A copy of the license is included in the FAQ.