Sending and receiving 2D array over MPI
Just to amplify Joel's points a bit:
This goes much easier if you allocate your arrays so that they're contiguous (something C's "multidimensional arrays" don't give you automatically):
int **alloc_2d_int(int rows, int cols) {
    int *data = (int *)malloc(rows*cols*sizeof(int));
    int **array = (int **)malloc(rows*sizeof(int*));
    for (int i=0; i<rows; i++)
        array[i] = &(data[cols*i]);
    return array;
}
/*...*/
int **A;
/*...*/
A = alloc_2d_int(N,M);
Then, you can do sends and receives of the entire NxM array with
MPI_Send(&(A[0][0]), N*M, MPI_INT, destination, tag, MPI_COMM_WORLD);
and when you're done, free the memory with
free(A[0]);
free(A);
Also, MPI_Recv is a blocking receive, and MPI_Send can be a blocking send. One thing that means, as per Joel's point, is that you definitely don't need barriers. Further, it means that if you have a send/receive pattern as above, you can get yourself into a deadlock situation -- everyone is sending, no one is receiving. Safer is:
if (myrank == 0) {
    MPI_Send(&(A[0][0]), N*M, MPI_INT, 1, tagA, MPI_COMM_WORLD);
    MPI_Recv(&(B[0][0]), N*M, MPI_INT, 1, tagB, MPI_COMM_WORLD, &status);
} else if (myrank == 1) {
    MPI_Recv(&(A[0][0]), N*M, MPI_INT, 0, tagA, MPI_COMM_WORLD, &status);
    MPI_Send(&(B[0][0]), N*M, MPI_INT, 0, tagB, MPI_COMM_WORLD);
}
Another, more general, approach is to use MPI_Sendrecv:
int *sendptr, *recvptr;
int sendtag, recvtag;
int neigh = MPI_PROC_NULL;

if (myrank == 0) {
    sendptr = &(A[0][0]);  recvptr = &(B[0][0]);
    sendtag = tagA;        recvtag = tagB;
    neigh = 1;
} else {
    sendptr = &(B[0][0]);  recvptr = &(A[0][0]);
    sendtag = tagB;        recvtag = tagA;
    neigh = 0;
}
MPI_Sendrecv(sendptr, N*M, MPI_INT, neigh, sendtag,
             recvptr, N*M, MPI_INT, neigh, recvtag,
             MPI_COMM_WORLD, &status);
or nonblocking sends and/or receives.
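A nonblocking version of the same exchange might look like the sketch below (untested here; it reuses sendptr, recvptr and neigh from the snippet above, and uses a single hypothetical tag for both directions to keep the matching simple):

```c
/* Post the receive and the send without blocking, then wait on both.
   Because neither call blocks, the ordering cannot deadlock. */
MPI_Request reqs[2];
const int tag = 0;   /* one tag for both directions */

MPI_Irecv(recvptr, N*M, MPI_INT, neigh, tag, MPI_COMM_WORLD, &reqs[0]);
MPI_Isend(sendptr, N*M, MPI_INT, neigh, tag, MPI_COMM_WORLD, &reqs[1]);
MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
```

Note that the buffers must not be touched between posting the operations and the MPI_Waitall.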
How to send a 2D array in MPI with variation for each processor
The problem here is that all MPI calls expect memory to be contiguous. Your memory is only contiguous within a given row, and what you are referring to as a 2D array is really an array of pointers. Pointers are not portable, so trying to send or broadcast an array of pointers to another process makes no sense, and MPI itself doesn't support deep copy, so this approach won't work.
However, if you change your array allocation to something like this:
float** C;
float* C_buff;
C = (float**)malloc(N * sizeof(float*));          // row pointers (32 particles)
C_buff = (float*)malloc(N * dim * sizeof(float)); // contiguous buffer for particles
float* p = &C_buff[0];
for (i = 0; i < N; i++) {
    C[i] = p;
    p += dim;   // pointer arithmetic on float* advances in units of float
}
[disclaimer: written in browser, totally untested, use at own risk]
so that C_buff represents the contiguous memory for your 2D array, and C contains row pointers into the C_buff allocation, then you can use your existing code for initialisation, but then do something like this:
MPI_Send(&C_buff[0], N*dim, MPI_FLOAT, i, 10+i, MPI_COMM_WORLD);
i.e. use C_buff for the MPI calls, and it should work.
Sharing a dynamically allocated 2D array with MPI
Each row of array is allocated separately in your code, so a simple
MPI_Send(&(array[0][0]), x*y, MPI_INT, 1, 0, MPI_COMM_WORLD);
won't work in this case.
A simple solution is to allocate a single block of memory like this:
array = malloc(x * sizeof(int*));
array[0] = malloc(y * x * sizeof(int));
for (int i = 1; i < x; i++)
{
    array[i] = array[0] + y * i;
}
And freeing this array will be:
free(array[0]);
free(array);
Do not free array[1], array[2], ... in this case, because they are already freed by free(array[0]).
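A quick way to convince yourself that this layout really is one contiguous block (plain C, no MPI involved; the helper names here are made up for the illustration):

```c
#include <stdlib.h>

/* Allocate x rows of y ints backed by a single contiguous block,
   exactly as in the answer above (helper names are hypothetical). */
int **alloc_contig_2d(int x, int y) {
    int **array = malloc(x * sizeof(int*));
    array[0] = malloc((size_t)y * x * sizeof(int));
    for (int i = 1; i < x; i++)
        array[i] = array[0] + y * i;
    return array;
}

void free_contig_2d(int **array) {
    free(array[0]);  /* releases the whole data block */
    free(array);     /* releases only the row pointers */
}
```

Writing through array[i][j] and reading back through the flat pointer array[0] shows that the rows sit back to back in memory, which is why a single MPI_Send of x*y elements starting at &(array[0][0]) works.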
Sending 2D Int Array with MPI_Send and Recv
This line looks suspicious:
pixels[j][k] = pixelChunk[j - (xPixels * i - 1)][k];
For example, say we have np = 2
, so we're left with a single chunk, then
i = 1;
xStart = 0;
j = 0;
xPixels = 600;
pixelChunk[0 - (600 * 1 - 1)][k] == pixelChunk[-599][k]
Doesn't look right, does it?
How about this instead?
pixels[j][k] = pixelChunk[j - xPixels * (i - 1)][k];
The send/recv code is probably all right.
Sending a column of a dynamically allocated 2d array in MPI
If I understand correctly what you are trying to achieve, then you should invoke MPI_Send() and MPI_Recv() with count=1 instead of count=im_local.
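Sending the elements one at a time works but costs one message per element. A common alternative (not what this answer proposes, just a standard MPI idiom) is to describe the strided column with a derived datatype and send it in a single call. A sketch, assuming a contiguous row-major array grid with im_local rows and a hypothetical row length row_len:

```c
/* Successive column elements are row_len apart in memory, so describe
   the column as im_local blocks of 1 element with stride row_len. */
MPI_Datatype column_t;
MPI_Type_vector(im_local, 1, row_len, MPI_DOUBLE, &column_t);
MPI_Type_commit(&column_t);

/* Send column 'col' in one message; its first element is &grid[0][col]. */
MPI_Send(&grid[0][col], 1, column_t, dest, tag, MPI_COMM_WORLD);

MPI_Type_free(&column_t);
```

This only works when the underlying storage is one contiguous block, as discussed in the answers above.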
Sending Static Memory 2D Array using MPI and C++
The error is in the line MPI_Send(&positions, SIZE, MPI_INT, r, 1, MPI::COMM_WORLD);. You should not use &positions but positions instead. positions is of type int *, so &positions is of type int **, and you are really just sending garbage in this case instead of the real array.
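The difference is easy to demonstrate in plain C (no MPI needed; the buffer below is a stand-in for positions): copying from positions copies the elements, while copying from &positions copies only the pointer value itself.

```c
#include <stdlib.h>
#include <string.h>

/* Returns 1 if the demonstration behaves as described: reading from
   'positions' yields the data, reading from '&positions' yields just
   the address stored in the pointer variable. */
int demo_pointer_vs_data(void) {
    int *positions = malloc(4 * sizeof(int));
    for (int i = 0; i < 4; i++)
        positions[i] = i + 1;

    /* What sending 'positions' transmits: the elements themselves. */
    int elems[4];
    memcpy(elems, positions, 4 * sizeof(int));

    /* What sending '&positions' transmits: the pointer's own bytes,
       i.e. an address that is meaningless in another process. */
    int *copied;
    memcpy(&copied, &positions, sizeof positions);

    int ok = (elems[0] == 1 && elems[3] == 4) && (copied == positions);
    free(positions);
    return ok;
}
```

The receiving process would interpret those pointer bytes as integers, which is the "garbage" the answer refers to.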