KSP Object: 1 MPI process
  type: gmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using DEFAULT norm type for convergence test
PC Object: 1 MPI process
  type: gamg
  PC has not been set up so information may be incomplete
    type is MULTIPLICATIVE, levels=0 cycles=unknown
      Cycles per PCApply=0
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level =
        Threshold scaling factor for each level not specified = 1.
        AGG specific options
          Number of levels of aggressive coarsening 1
          Square graph aggressive coarsening
          Coarsening algorithm not yet selected
          Number smoothing steps to construct prolongation 1
        Complexity: grid = 0. operator = 0.
  linear system matrix = precond matrix:
  Mat Object: 1 MPI process
    type: seqaij
    rows=16, cols=16
    total: nonzeros=64, allocated nonzeros=64
    total number of mallocs used during MatSetValues calls=0
      not using I-node routines
KSP Object: 1 MPI process
  type: gmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI process
  type: gamg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level = -1. -1.
        Threshold scaling factor for each level not specified = 1.
        AGG specific options
          Number of levels of aggressive coarsening 1
          Square graph aggressive coarsening
          MatCoarsen Object: (pc_gamg_) 1 MPI process
            type: mis
          Number smoothing steps to construct prolongation 1
        Complexity: grid = 1.1875 operator = 1.14062
  Coarse grid solver -- level 0 -------------------------------
    KSP Object: (mg_coarse_) 1 MPI process
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_coarse_) 1 MPI process
      type: bjacobi
        number of blocks = 1
        Local solver information for first block is in the following KSP and PC objects on rank 0:
        Use -mg_coarse_ksp_view ::ascii_info_detail to display information for all blocks
      KSP Object: (mg_coarse_sub_) 1 MPI process
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (mg_coarse_sub_) 1 MPI process
        type: lu
          out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
          matrix ordering: nd
          factor fill ratio given 5., needed 1.
            Factored matrix follows:
              Mat Object: (mg_coarse_sub_) 1 MPI process
                type: seqaij
                rows=3, cols=3
                package used to perform factorization: petsc
                total: nonzeros=9, allocated nonzeros=9
                  using I-node routines: found 1 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object: (mg_coarse_sub_) 1 MPI process
          type: seqaij
          rows=3, cols=3
          total: nonzeros=9, allocated nonzeros=9
          total number of mallocs used during MatSetValues calls=0
            using I-node routines: found 1 nodes, limit used is 5
      linear system matrix = precond matrix:
      Mat Object: (mg_coarse_sub_) 1 MPI process
        type: seqaij
        rows=3, cols=3
        total: nonzeros=9, allocated nonzeros=9
        total number of mallocs used during MatSetValues calls=0
          using I-node routines: found 1 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 1 MPI process
      type: chebyshev
        Chebyshev polynomial of first kind
        eigenvalue targets used: min 1.06112, max 11.6723
        eigenvalues provided (min 0.311583, max 10.6112) with transform: [0. 0.1; 0. 1.1]
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_1_) 1 MPI process
      type: jacobi
        type DIAGONAL
      linear system matrix = precond matrix:
      Mat Object: 1 MPI process
        type: seqaij
        rows=16, cols=16
        total: nonzeros=64, allocated nonzeros=64
        total number of mallocs used during MatSetValues calls=0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 1 MPI process
    type: seqaij
    rows=16, cols=16
    total: nonzeros=64, allocated nonzeros=64
    total number of mallocs used during MatSetValues calls=0
      not using I-node routines
KSP Object: 1 MPI process
  type: gmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI process
  type: gamg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level = -1. -1.
        Threshold scaling factor for each level not specified = 1.
        AGG specific options
          Number of levels of aggressive coarsening 1
          Square graph aggressive coarsening
          MatCoarsen Object: (pc_gamg_) 1 MPI process
            type: mis
          Number smoothing steps to construct prolongation 1
        Complexity: grid = 1.1875 operator = 1.14062
  Coarse grid solver -- level 0 -------------------------------
    KSP Object: (mg_coarse_) 1 MPI process
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_coarse_) 1 MPI process
      type: bjacobi
        number of blocks = 1
        Local solver information for first block is in the following KSP and PC objects on rank 0:
        Use -mg_coarse_ksp_view ::ascii_info_detail to display information for all blocks
      KSP Object: (mg_coarse_sub_) 1 MPI process
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (mg_coarse_sub_) 1 MPI process
        type: lu
          out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
          matrix ordering: nd
          factor fill ratio given 5., needed 1.
            Factored matrix follows:
              Mat Object: (mg_coarse_sub_) 1 MPI process
                type: seqaij
                rows=3, cols=3
                package used to perform factorization: petsc
                total: nonzeros=9, allocated nonzeros=9
                  using I-node routines: found 1 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object: (mg_coarse_sub_) 1 MPI process
          type: seqaij
          rows=3, cols=3
          total: nonzeros=9, allocated nonzeros=9
          total number of mallocs used during MatSetValues calls=0
            using I-node routines: found 1 nodes, limit used is 5
      linear system matrix = precond matrix:
      Mat Object: (mg_coarse_sub_) 1 MPI process
        type: seqaij
        rows=3, cols=3
        total: nonzeros=9, allocated nonzeros=9
        total number of mallocs used during MatSetValues calls=0
          using I-node routines: found 1 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 1 MPI process
      type: chebyshev
        Chebyshev polynomial of first kind
        eigenvalue targets used: min 0.159372, max 1.75309
        eigenvalues estimated via gmres: min 0.406283, max 1.59372
        eigenvalues estimated using gmres with transform: [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_1_esteig_) 1 MPI process
          type: gmres
            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using a noisy random number generated right-hand side
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_1_) 1 MPI process
      type: jacobi
        type DIAGONAL
      linear system matrix = precond matrix:
      Mat Object: 1 MPI process
        type: seqaij
        rows=16, cols=16
        total: nonzeros=64, allocated nonzeros=64
        total number of mallocs used during MatSetValues calls=0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 1 MPI process
    type: seqaij
    rows=16, cols=16
    total: nonzeros=64, allocated nonzeros=64
    total number of mallocs used during MatSetValues calls=0
      not using I-node routines
KSP Object: 1 MPI process
  type: gmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI process
  type: gamg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level = -1. -1.
        Threshold scaling factor for each level not specified = 1.
        AGG specific options
          Number of levels of aggressive coarsening 1
          Square graph aggressive coarsening
          MatCoarsen Object: (pc_gamg_) 1 MPI process
            type: mis
          Number smoothing steps to construct prolongation 1
        Complexity: grid = 1.1875 operator = 1.14062
  Coarse grid solver -- level 0 -------------------------------
    KSP Object: (mg_coarse_) 1 MPI process
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_coarse_) 1 MPI process
      type: bjacobi
        number of blocks = 1
        Local solver information for first block is in the following KSP and PC objects on rank 0:
        Use -mg_coarse_ksp_view ::ascii_info_detail to display information for all blocks
      KSP Object: (mg_coarse_sub_) 1 MPI process
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (mg_coarse_sub_) 1 MPI process
        type: lu
          out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
          matrix ordering: nd
          factor fill ratio given 5., needed 1.
            Factored matrix follows:
              Mat Object: (mg_coarse_sub_) 1 MPI process
                type: seqaij
                rows=3, cols=3
                package used to perform factorization: petsc
                total: nonzeros=9, allocated nonzeros=9
                  using I-node routines: found 1 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object: (mg_coarse_sub_) 1 MPI process
          type: seqaij
          rows=3, cols=3
          total: nonzeros=9, allocated nonzeros=9
          total number of mallocs used during MatSetValues calls=0
            using I-node routines: found 1 nodes, limit used is 5
      linear system matrix = precond matrix:
      Mat Object: (mg_coarse_sub_) 1 MPI process
        type: seqaij
        rows=3, cols=3
        total: nonzeros=9, allocated nonzeros=9
        total number of mallocs used during MatSetValues calls=0
          using I-node routines: found 1 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 1 MPI process
      type: chebyshev
        Chebyshev polynomial of first kind
        eigenvalue targets used: min 0.160581, max 1.76639
        eigenvalues estimated via gmres: min 0.394193, max 1.60581
        eigenvalues estimated using gmres with transform: [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_1_esteig_) 1 MPI process
          type: gmres
            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using a noisy random number generated right-hand side
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_1_) 1 MPI process
      type: jacobi
        type DIAGONAL
      linear system matrix = precond matrix:
      Mat Object: 1 MPI process
        type: seqaij
        rows=16, cols=16
        total: nonzeros=64, allocated nonzeros=64
        total number of mallocs used during MatSetValues calls=0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 1 MPI process
    type: seqaij
    rows=16, cols=16
    total: nonzeros=64, allocated nonzeros=64
    total number of mallocs used during MatSetValues calls=0
      not using I-node routines