General fixes
Merge remote-tracking branch 'origin/release'
Merge branch 'jed/pcmg-transpose' into 'master'
  PCMG: implement PCApplyTranspose_MG
  See merge request petsc/petsc!3539
Documentation fixes
PCHPDDM: fix for KSPLSQR
PCMG: implement PCApplyTranspose_MG
  A sample run:
  $PETSC_ARCH/tests/ts/tutorials/advection-diffusion-reaction/ex5adj \
    -pc_type mg -mg_levels_pc_type jacobi
PCBDDC: fix the benign case with MKL_PARDISO
  Schur computation is buggy for symmetric indefinite factorizations coming from saddle points; use LU and adapt pivot perturbation
PCGAMGOptProlongator_AGG: use CURAND if possible
Add -pc_hypre_euclid_droptolerance and -pc_hypre_euclid_bj options
  Add test cases for Euclid
  Commit-type: error-checking, feature
  /spend 50m
  Reported-by: Chen Gang <569615491@qq.com>
PCJACOBI: use VecAbs if present
PCBDDC: add events for solves
PCGAMG: symmetrize using AXPY; scale graph only if needed
PCGAMG: add matmat logging events; remove useless GAMG_USE_LOG macro
Fix improper use of %D
PCGAMG: clear products intermediate data when no longer needed
MatAIJCUSPARSESetGenerateTranspose: convenience function for seq and mpi
PCGAMG: improve RAP reuse; remove setup_count from data structure
PCBDDC: viennacl runs are buggy with cuda 11, switch to cusparse
PCGAMG: comments for GPU optimizations
Minor
PCGAMG: fix view operation
PCGAMG: always generate the transpose by default with CUDA
Convert MPI error type to PETSc error with string message for all MPI calls
  Now PETSc examples will ONLY return PETSc error codes and never MPI error codes directly, so we can understand and post-process their errors better.
  The test harness will now automatically retry tests that fail with MPI; this may help with Intel MPI, which produces seemingly random failures.
  Commit-type: error-checking
  /spend 30m
checkbadSource: apply rules to *.cu *.cpp sources, and expand CHKERRQ check to CHKERR(Q|MPI|CUDA|CUBLAS|CUSPARSE)