2012-11-10, 20:10   #3
jrk

I think the problem is that the MPQS code does not set up the MPI grid, and this example uses MPQS. That leads to an error in the Lanczos code, which always uses MPI when it is available.

When I try to run the example with an MPI-aware msieve, it fails in block_lanczos() in common/lanczos/lanczos.c, on this line:
Code:
	/* tell all the MPI processes whether a post lanczos matrix
	   was constructed */

	MPI_TRY(MPI_Bcast(&have_post_lanczos, 1, MPI_INT, 0,
			obj->mpi_la_col_grid))
And outputs:
Code:
[atlas:12145] *** An error occurred in MPI_Bcast
[atlas:12145] *** on communicator MPI_COMM_WORLD
[atlas:12145] *** MPI_ERR_COMM: invalid communicator
[atlas:12145] *** MPI_ERRORS_ARE_FATAL (goodbye)