Merge branch 'maint'
If only some processes had no local entries in the matrix, MatSeqXAIJPreallocation() changed the value of aij->nonew on only those processes, resulting in deadlock in MatAssemblyEnd_MPIXAIJ() since some processes wanted to update the matrix nonzero state while others did not.

Reported-by: Eric Chamberland <Eric.Chamberland@giref.ulaval.ca>
fix violations of PETSc style guide: Usage of SETERRQ and NULL
Added missing MatMissingDiagonal() implementations
MatHeaderReplace() corrupted the -objects_dump array.

Reported-by: Torquil Macdonald Sørensen <torquil@gmail.com>

MatHeaderReplace() and MatHeaderMerge() destroy the second matrix argument; therefore pass it as a pointer so it can be zeroed and not mistakenly reused.
Merge branch 'jed/mat-assembly-perf' of bitbucket:petsc/petsc

VecAssembly and MatAssembly now use a scalable exchange pattern based on PetscCommBuildTwoSided. This feature can be controlled with the options -vec_assembly_bts 0 or 1 (default 0) and -matstash_bts 0 or 1 (default 0). The rationale is that the new implementation with scalable data structures can be slightly slower than the old version at small process counts. The default here could be changed to depend on the process count (leading to possibly-confusing scaling performance diagnostics), or the implementation could learn to take a fast path.

* 'jed/mat-assembly-perf' of bitbucket:petsc/petsc: (49 commits)
  MatStash: fix -Wsign-compare by using size_t for loop index when max is also size_t
  Sys BuildTwoSided test: fix for non-POD std::complex
  MatStash: cast to satisfy non-structural MPI type tag check
  mpiuni: fix compile error /sandbox/petsc/petsc.clone-2/arch-linux-uni/lib/libpetsc.so: undefined reference to `MPI_Type_create_resized'
  Vec: Silence compiler warning
  Vec: fix typo in comment
  Sys: fix C89 compiler warning
  VecStash BTS: fix block stash InsertMode accounting
  VecStash BTS: fix indexing bug counting sends to rank 0
  MatStash BTS: fix memory leak on MAT_SUBSET_OFF_PROC_ENTRIES
  VecAssemblyEnd_MPI_BTS: fix donotstash code path
  VecAssemblyEnd_MPI_BTS: fix C++ conversion to InsertMode
  Sys: fix datatypes test using MPI_Type_create_resized
  MatStash BTS: work around lack of offsetof() for non-POD (std::complex)
  MatAssembly: move check for InsertMode consistency into MatStashScatter impls
  MatStash BTS: fix memory leak on reassembly with MAT_SUBSET_OFF_PROC_ENTRIES
  MatStash BTS: small simplification to row ownership calculation
  MatStash BTS: add MAT_SUBSET_OFF_PROC_ENTRIES, impl with neighbor-only comm
  MatStash: initial BTS (BuildTwoSided) implementation
  MatStash: add extension point for new BTS implementation
  ...
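A typical invocation enabling the new assembly path might look as follows; the option names are those listed in the merge message above, while the executable name and process count are placeholders:

```shell
# Run a PETSc application with the BTS assembly implementations enabled
# (both default to 0, i.e. the old implementation).
mpiexec -n 4 ./app -vec_assembly_bts 1 -matstash_bts 1
```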
Merge branch 'master' into barry/fix-petscviewer-attempt-2
Merge branch 'barry/add-concurrencykit'
Merge branch 'barry/fix-nonew-notcollective/maint' into jed/mat-assembly-perf

Jed, this 6+ month running thing in next that is not in master is a royal pain and an abuse of git. I think it is not justified to ever have anything in next for more than a few weeks at most. Either take it out of next if it is broken, or put it in master if it is not broken. Hanging around in next but not master forever is not a good policy.
MatShift_MPI/SeqXAIJ() could hang if some processes had no entries while others did, because some processes would attempt a parallel preallocation and the others would not.

Fixed by first checking whether any preallocation was done, and doing it if not. Otherwise preallocation is done, if appropriate, by each process only on the diagonal block portion of the matrix, thus not requiring all processes that share the matrix to call the parallel preallocation routine.

Reported-by: Patrick Lacasse <patrick.m.lacasse@gmail.com>
Replaced PetscViewerASCIISynchronizedAllow() with PetscViewerASCIIPushSynchronized() and PetscViewerASCIIPopSynchronized()
Merge branch 'barry/add-concurrencykit' into barry/fix-petscviewer-attempt-2
fixes for missing ierr = around PetscLogFlops()
merged PetscViewerGetSingleton() and PetscViewerGetSubcomm() into PetscViewerGetSubViewer()

Does not currently work; needs fixes to work correctly recursively
Merged in PR312: karpeev/ksp-pcgasm-overhaul.
Merge branch 'barry/propagate-pcsetup-failures'
Fix nonzerostate tracking in all MATMPI types.
MATBAIJ: fix for MatGetSubmatrix

Use the maximum of a->mbs and a->nbs to allocate the vary and iary arrays; change the upper bound of the columns loop to a->nbs.
Infrastructure that allows failures in PCSetUp(), PCApply(), MatMult(), etc. due to, for example, a zero pivot or a function evaluation outside its domain, to propagate up and become a KSP_DIVERGED instead of generating an error that stops the program. In response to Issue 96. This includes failures in MatCreateSNESMF() applications due to domain errors.

The mechanism to propagate some errors is by setting Inf or NaN into the output vector and using the norm or inner-product reductions in SNES or KSP to propagate the error condition to all processes, then handling it immediately after the norm or inner product. This allows, for example, ODE integrators to try again with a smaller time step if the PCSetUp() failed, instead of requiring a complete restart of the run with other options.

Currently some error conditions, such as a function domain error in a line search, may not get propagated up using the correct SNESConvergedReason. See src/snes/examples/tests/ex69.c for the handling of several conditions.
MatGetSubMatricesParallel —> MatGetSubMatricesMPI. Implementation rewritten, tests added.
stripped out all PETSc threadcomm code
fix for a previously introduced bug that broke the MPIBAIJ MatGetSubmatrices
updated PETSc directory layout to match standard packaging strategies

include/petsc: finclude, private, mpiuni
lib/petsc: conf
bin/petsc*
rm unused variables