!
!   This introductory example illustrates running PETSc on a subset
!   of processes
!
! -----------------------------------------------------------------------

      program main
#include <petsc/finclude/petscsys.h>
      use petscmpi  ! or mpi or mpi_f08
      use petscsys
      implicit none

      PetscErrorCode ierr
      PetscMPIInt    rank, size, grank, zero, two
      PetscReal      globalrank

!  We must call MPI_Init() first, making us, not PETSc, responsible for MPI

      PetscCallMPIA(MPI_Init(ierr))
#if defined(PETSC_HAVE_ELEMENTAL)
      PetscCallA(PetscElementalInitializePackage(ierr))
#endif

!  We can now change the communicator universe for PETSc

      zero = 0
      two  = 2
      PetscCallMPIA(MPI_Comm_rank(MPI_COMM_WORLD,rank,ierr))
      PetscCallMPIA(MPI_Comm_split(MPI_COMM_WORLD,mod(rank,two),zero,PETSC_COMM_WORLD,ierr))

!  Every PETSc program should begin with the PetscInitialize() routine;
!  since no command-line arguments are processed here, we use the
!  no-arguments variant.

      PetscCallA(PetscInitializeNoArguments(ierr))

!  The following MPI calls return the number of processes being used
!  and the rank of this process in the group.

      PetscCallMPIA(MPI_Comm_size(PETSC_COMM_WORLD,size,ierr))
      PetscCallMPIA(MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr))

!  Here we would like to print only one message that represents all
!  the processes in the group. Sleep so that the IO from different
!  ranks does not get mixed up; note that this is not an ideal solution.

      PetscCallMPIA(MPI_Comm_rank(MPI_COMM_WORLD,grank,ierr))
      globalrank = grank
      PetscCallA(PetscSleep(globalrank,ierr))
      if (rank .eq. 0) write(6,100) size,rank
 100  format('No of Procs = ',i4,' rank = ',i4)

!  Always call PetscFinalize() before exiting a program. This routine
!  finalizes the PETSc libraries and provides summary and diagnostic
!  information if certain runtime options are chosen (e.g., -log_view).
!  Because we, not PETSc, initialized MPI, PetscFinalize() does not
!  call MPI_Finalize() for us. See the PetscFinalize() manpage for
!  more information.

      PetscCallA(PetscFinalize(ierr))
      PetscCallMPIA(MPI_Comm_free(PETSC_COMM_WORLD,ierr))
#if defined(PETSC_HAVE_ELEMENTAL)
      PetscCallA(PetscElementalFinalizePackage(ierr))
#endif
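
!  PETSC_COMM_WORLD here is the communicator we created above with
!  MPI_Comm_split(), so we are also responsible for freeing it; the
!  MPI_Comm_free() must come after PetscFinalize(), which still
!  operates on that communicator, and before MPI_Finalize().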

!  Since we initialized MPI, we must call MPI_Finalize()

      PetscCallMPIA(MPI_Finalize(ierr))
      end

!/*TEST
!
!   test:
!      nsize: 5
!      filter: sort -b
!      filter_output: sort -b
!      requires: !cuda !saws
!
!TEST*/
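
!  A minimal run sketch (the executable name ex4f90 is an assumption,
!  not taken from this file), using the 5-rank configuration from the
!  test block above:
!
!      mpiexec -n 5 ./ex4f90
!
!  mod(rank,two) splits MPI_COMM_WORLD into two PETSc universes: ranks
!  {0,2,4} form a PETSC_COMM_WORLD of size 3 and ranks {1,3} one of
!  size 2. Rank 0 of each subcommunicator prints one line, so the
!  output is, in either order (hence the sort filters above):
!
!      No of Procs =    3 rank =    0
!      No of Procs =    2 rank =    0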