MPI Gatherv not working (message truncated)
Hello, I have a problem with MPI_Gatherv: it is not able to gather the values, and it returns:
Fatal error in MPI_Gatherv: Message truncated, error stack:
MPI_Gatherv failed(sbuf=0x000001E0AAE36920, scount=16, MPI_INT,
rbuf=0x000001E0AAE367E0, rcnts=0x000001E0AAE18500,
displs=0x0000005A09F6F9D8, MPI_INT, root=0, MPI_COMM_WORLD) failed
Message truncated; 16 bytes received but buffer size is 16
The code is in C:
#include "stdio.h"
#include "mpi.h"
#include <stdlib.h>
int* multiply(int* x, int xLength, int* y, int yLength) {
int* resultMatrix = (int *) malloc(xLength*yLength * sizeof(int));
int r = 0;
for (int i = 0; i < xLength; i++) {
for (int j = 0; j < yLength; j++) {
resultMatrix[r] = x[i] * y[j];
printf("nresult[%d]: %d", r, resultMatrix[r]);
r++;
}
}
return resultMatrix;
}
int* countOfValuesOfProcess(int matrixLength, int numOfProcesses) {
    int* countOfValuesOfProcess = (int*) malloc(numOfProcesses);
    for (int i = 0; i < numOfProcesses; i++) {
        if (i == numOfProcesses - 1) {
            countOfValuesOfProcess[i] = (matrixLength / numOfProcesses) + (matrixLength % numOfProcesses);
        } else countOfValuesOfProcess[i] = matrixLength / numOfProcesses;
    }
    return countOfValuesOfProcess;
}
int main(int argc, char *argv[])
{
    int x[] = { 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16 };
    int y[] = { 2, 3, -1, 4 };
    int* result;
    int size, rank;
    int* recieveInt;
    MPI_Status status;
    MPI_Init(NULL, NULL);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int xSize = sizeof(x) / sizeof(x[0]);
    int ySize = sizeof(y) / sizeof(y[0]);
    result = (int *) malloc((xSize * ySize) * sizeof(int));
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int* numOfValuesPerProcess = countOfValuesOfProcess(xSize, size);
    int displs[4];
    recieveInt = (int *) malloc(numOfValuesPerProcess[rank] * sizeof(int));
    int* resultPart = (int *) malloc((numOfValuesPerProcess[rank] * ySize) * sizeof(int));
    //displs count
    if (rank == 0) {
        displs[0] = 0;
        for (int i = 1; i < size; i++) {
            displs[i] = (displs[i - 1] + numOfValuesPerProcess[i - 1]);
        }
    }
    MPI_Scatterv(x, numOfValuesPerProcess, displs, MPI_INT, recieveInt, numOfValuesPerProcess[rank], MPI_INT, 0, MPI_COMM_WORLD);
    resultPart = multiply(recieveInt, numOfValuesPerProcess[rank], y, ySize);
    MPI_Gatherv(resultPart, numOfValuesPerProcess[rank]*ySize, MPI_INT, result, numOfValuesPerProcess, displs, MPI_INT, 0, MPI_COMM_WORLD);
    if (rank == 0) {
        for (int i = 0; i < xSize * ySize; i++)
            printf("gathered matrix[%d]: %d\n", i, result[i]);
        free(resultPart);
        free(recieveInt);
    }
    MPI_Finalize();
    return(0);
}
When I replace numOfValuesPerProcess[rank]*ySize with just numOfValuesPerProcess[rank] in the MPI_Gatherv call, it runs, but the result is:
gathered matrix[0]: 2
gathered matrix[1]: 3
gathered matrix[2]: -1
gathered matrix[3]: 4
gathered matrix[4]: 10
gathered matrix[5]: 15
gathered matrix[6]: -5
gathered matrix[7]: 20
gathered matrix[8]: 18
gathered matrix[9]: 27
gathered matrix[10]: -9
gathered matrix[11]: 36
gathered matrix[12]: 26
gathered matrix[13]: 39
gathered matrix[14]: -13
gathered matrix[15]: 52
gathered matrix[16]: -842150451
gathered matrix[17]: -842150451
gathered matrix[18]: -842150451
gathered matrix[19]: -842150451
gathered matrix[20]: -842150451
gathered matrix[21]: -842150451
gathered matrix[22]: -842150451
gathered matrix[23]: -842150451
gathered matrix[24]: -842150451
gathered matrix[25]: -842150451
gathered matrix[26]: -842150451
gathered matrix[27]: -842150451
gathered matrix[28]: -842150451
gathered matrix[29]: -842150451
gathered matrix[30]: -842150451
gathered matrix[31]: -842150451
gathered matrix[32]: -842150451
gathered matrix[33]: -842150451
gathered matrix[34]: -842150451
gathered matrix[35]: -842150451
gathered matrix[36]: -842150451
gathered matrix[37]: -842150451
gathered matrix[38]: -842150451
gathered matrix[39]: -842150451
gathered matrix[40]: -842150451
gathered matrix[41]: -842150451
gathered matrix[42]: -842150451
gathered matrix[43]: -842150451
gathered matrix[44]: -842150451
gathered matrix[45]: -842150451
gathered matrix[46]: -842150451
gathered matrix[47]: -842150451
gathered matrix[48]: -842150451
gathered matrix[49]: -842150451
gathered matrix[50]: -842150451
gathered matrix[51]: -842150451
gathered matrix[52]: -842150451
gathered matrix[53]: -842150451
gathered matrix[54]: -842150451
gathered matrix[55]: -842150451
gathered matrix[56]: -842150451
gathered matrix[57]: -842150451
gathered matrix[58]: -842150451
gathered matrix[59]: -842150451
gathered matrix[60]: -842150451
gathered matrix[61]: -842150451
gathered matrix[62]: -842150451
gathered matrix[63]: -842150451
As we can see, the first 16 numbers are gathered but the rest are missing (because only part of the full result was gathered). I don't know where the problem is; I tried allocating more memory for int* result, but that did not work.
Where could the problem be?
Thanks for any advice.
c mpi
asked Nov 22 at 16:14 by Noro96
Should you call countOfValuesOfProcess(xSize*ySize, size) instead? – Gilles Gouaillardet, Nov 22 at 22:28
You mean MPI_Gatherv(resultPart, numOfValuesPerProcess[rank], MPI_INT, result, countOfValuesOfProcess(xSize*ySize, size), displs, MPI_INT, 0, MPI_COMM_WORLD);? Same result: from index 16 on it is the same, gathered matrix[16]: -842150451. – Noro96, Nov 22 at 23:28
I mean you do not scatter enough data, and you end up working with uninitialized data. – Gilles Gouaillardet, Nov 23 at 1:09
1 Answer (accepted)
In your MPI_Gatherv call, you send numOfValuesPerProcess[rank]*ySize elements from each rank, but only reserve space for numOfValuesPerProcess[rank] elements per rank on the receiving side. After the multiplication you are sending/receiving ySize times more data, so the recvcounts and displs arguments of the MPI_Gatherv call need to account for that ySize factor.
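Concretely, with 4 ranks here each rank sends 4 * 4 = 16 MPI_INTs while the root's recvcounts only allow 4 from each, which is exactly the "message truncated" error; and with the smaller sendcount, the untouched tail of result keeps its debug-heap fill pattern (-842150451 is 0xCDCDCDCD, the value MSVC's debug heap writes into uninitialized allocations). The rule MPI_Gatherv enforces is that recvcounts[i] on the root must cover the sendcount actually used by rank i. Below is a minimal, self-contained sketch of that invariant; it is a hypothetical toy example (its names and sizes are not from the poster's program), where rank i contributes i + 1 ints and the root sizes recvcounts and displs to match:

/* gatherv_demo.c -- hypothetical minimal example of matching
   sendcounts to recvcounts/displs in MPI_Gatherv. */
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main(int argc, char *argv[]) {
    int size, rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int sendcount = rank + 1;                 /* rank i contributes i+1 elements */
    int *sendbuf = malloc(sendcount * sizeof(int));
    for (int i = 0; i < sendcount; i++) sendbuf[i] = rank;

    int *recvcounts = NULL, *displs = NULL, *recvbuf = NULL;
    if (rank == 0) {                          /* these arguments only matter at the root */
        recvcounts = malloc(size * sizeof(int));
        displs = malloc(size * sizeof(int));
        int total = 0;
        for (int i = 0; i < size; i++) {
            recvcounts[i] = i + 1;            /* must cover what rank i actually sends */
            displs[i] = total;                /* contiguous segments, one per rank */
            total += recvcounts[i];
        }
        recvbuf = malloc(total * sizeof(int));
    }

    MPI_Gatherv(sendbuf, sendcount, MPI_INT,
                recvbuf, recvcounts, displs, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < size * (size + 1) / 2; i++)
            printf("recvbuf[%d] = %d\n", i, recvbuf[i]);
        free(recvcounts); free(displs); free(recvbuf);
    }
    free(sendbuf);
    MPI_Finalize();
    return 0;
}

If any recvcounts[i] is smaller than what rank i sends, the implementation may raise exactly the truncation error shown in the question; the same matching rule applies in reverse to MPI_Scatterv's sendcounts.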
As an aside, you also seem to have many memory leaks, with not enough frees for the number of mallocs. Learn to use a tool like valgrind to help find and fix these.
Updated code:
#include "stdio.h"
#include "mpi.h"
#include <stdlib.h>
int* multiply(int* x, int xLength, int* y, int yLength) {
int* resultMatrix = malloc(xLength*yLength * sizeof(int));
int r = 0;
for (int i = 0; i < xLength; i++) {
for (int j = 0; j < yLength; j++) {
resultMatrix[r] = x[i] * y[j];
//printf("nresult[%d]: %d", r, resultMatrix[r]);
r++;
}
}
return resultMatrix;
}
int* countOfValuesOfProcess(int matrixLength, int numOfProcesses) {
int* countOfValuesOfProcess = malloc(numOfProcesses * sizeof(int));
for (int i = 0; i < numOfProcesses; i++)
{
if (i == numOfProcesses - 1) {
countOfValuesOfProcess[i] = (matrixLength / numOfProcesses) + (matrixLength % numOfProcesses);
}
else
{
countOfValuesOfProcess[i] = matrixLength / numOfProcesses;
}
}
return countOfValuesOfProcess;
}
int main(int argc, char *argv)
{
int x = { 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16};
int y = { 2, 3, -1, 4 };
int* result;
int size, rank;
int* recieveInt;
MPI_Status status;
MPI_Init(NULL, NULL);
MPI_Comm_size(MPI_COMM_WORLD, &size);
int xSize = sizeof(x) / sizeof(x[0]);
int ySize = sizeof(y) / sizeof(y[0]);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
int* numOfValuesPerProcess = countOfValuesOfProcess(xSize, size);
int displs[size];
//displs count
if (rank == 0) {
displs[0] = 0;
for (int i = 1; i < size; i++) {
displs[i] = (displs[i - 1] + numOfValuesPerProcess[i - 1]);
}
}
recieveInt = malloc(numOfValuesPerProcess[rank] * sizeof(int));
MPI_Scatterv(x, numOfValuesPerProcess, displs, MPI_INT, recieveInt, numOfValuesPerProcess[rank], MPI_INT, 0, MPI_COMM_WORLD);
int* resultPart = multiply(recieveInt, numOfValuesPerProcess[rank], y, ySize);
for (int i = 0; i < size; i++)
{
numOfValuesPerProcess[i] *= ySize;
displs[i] *= ySize;
}
result = (int *) malloc((xSize * ySize) * sizeof(int));
MPI_Gatherv(resultPart, numOfValuesPerProcess[rank], MPI_INT, result, numOfValuesPerProcess, displs, MPI_INT, 0, MPI_COMM_WORLD);
if (rank == 0)
{
for (int i = 0; i < xSize*ySize; i++)
printf("result[%d]: %dn", i, result[i]);
}
free(resultPart);
free(recieveInt);
free(numOfValuesPerProcess);
free(result);
MPI_Finalize();
return(0);
}
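As a quick sanity check (the file name here is just an assumption), the above can be built and run with the standard MPI wrappers, e.g. mpicc gatherv.c -o gatherv followed by mpirun -np 4 ./gatherv. With 4 ranks the output should start with result[0]: 2, result[1]: 3, result[2]: -1, result[3]: 4, matching the first block of the question's output, and end with result[63]: 64, i.e. 16 * 4, the last element of x times the last element of y.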
answered Nov 22 at 23:33 (edited Nov 23 at 2:33) – Savithru
Could you please show what to change? I am totally new to MPI + allocation in C and pointers, not my flavor :). I would appreciate it if someone could show me a fixed version, to see which parts were wrong. For someone who understands the power of MPI + C it takes 5 minutes to fix, for me hour(s)... but thank you anyway, tomorrow I will try to fix it based on your tips :) – Noro96, Nov 22 at 23:36
displs is only relevant on the root rank. – Gilles Gouaillardet, Nov 23 at 1:08
Yes, thanks Gilles Gouaillardet, you are correct. I have updated my answer. – Savithru, Nov 23 at 2:28
I still believe the main issue is that only a part of the matrix is scattered. – Gilles Gouaillardet, Nov 23 at 10:23
It is working :) you can try it @GillesGouaillardet – Noro96, Nov 23 at 16:15