xref: /petsc/src/ksp/ksp/tutorials/output/ex7_gamg_cuda_nsize-1.out (revision 70646cd191a02c3aba559ba717dac5da7a8a1e20)
  0 KSP Residual norm 4.58465
  1 KSP Residual norm 0.189322
  2 KSP Residual norm 0.0172992
  3 KSP Residual norm 0.000973548
  4 KSP Residual norm 0.000107509
  5 KSP Residual norm 3.97587e-06
KSP Object: 1 MPI process
  type: gmres
    restart=30, using classical (unmodified) Gram-Schmidt orthogonalization with no iterative refinement
    happy breakdown tolerance=1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI process
  type: gamg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using externally computed Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level =   -1.   -1.
        Threshold scaling factor for each level not specified = 1.
        AGG specific options
          Number of levels of aggressive coarsening 1
          Square graph aggressive coarsening
          MatCoarsen Object: (pc_gamg_) 1 MPI process
            type: mis
          Number smoothing steps to construct prolongation 1
        Complexity:    grid = 1.25    operator = 1.3
        Per-level complexity: op = operator, int = interpolation
            #equations  | #active PEs | avg nnz/row op | avg nnz/row int
                  6            1              5                0
                 24            1              5                3
  Coarse grid solver -- level 0 -------------------------------
    KSP Object: (mg_coarse_) 1 MPI process
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      not checking for convergence
    PC Object: (mg_coarse_) 1 MPI process
      type: bjacobi
        number of blocks = 1
        Local solver information for first block is in the following KSP and PC objects on rank 0:
        Use -mg_coarse_ksp_view ::ascii_info_detail to display information for all blocks
        KSP Object: (mg_coarse_sub_) 1 MPI process
          type: preonly
          maximum iterations=1, initial guess is zero
          tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
          left preconditioning
          not checking for convergence
        PC Object: (mg_coarse_sub_) 1 MPI process
          type: lu
            out-of-place factorization
            tolerance for zero pivot 2.22045e-14
            using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
            matrix ordering: nd
            factor fill ratio given 5., needed 1.06667
              Factored matrix:
                Mat Object: (mg_coarse_sub_) 1 MPI process
                  type: seqaijcusparse
                  rows=6, cols=6
                  package used to perform factorization: cusparse
                  total: nonzeros=32, allocated nonzeros=32
                    using I-node routines: found 3 nodes, limit used is 5
          linear system matrix, which is also used to construct the preconditioner:
          Mat Object: (mg_coarse_sub_) 1 MPI process
            type: seqaijcusparse
            rows=6, cols=6
            total: nonzeros=30, allocated nonzeros=30
            total number of mallocs used during MatSetValues calls=0
              not using I-node routines
      linear system matrix, which is also used to construct the preconditioner:
      Mat Object: (mg_coarse_sub_) 1 MPI process
        type: seqaijcusparse
        rows=6, cols=6
        total: nonzeros=30, allocated nonzeros=30
        total number of mallocs used during MatSetValues calls=0
          not using I-node routines
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 1 MPI process
      type: chebyshev
        Chebyshev polynomial of first kind
        eigenvalue targets used: min 0.185488, max 2.04037
        eigenvalues provided (min 0.145059, max 1.85488) with transform: [0. 0.1; 0. 1.1]
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      not checking for convergence
    PC Object: (mg_levels_1_) 1 MPI process
      type: jacobi
        type DIAGONAL
      linear system matrix, which is also used to construct the preconditioner:
      Mat Object: 1 MPI process
        type: seqaijcusparse
        rows=24, cols=24
        total: nonzeros=100, allocated nonzeros=120
        total number of mallocs used during MatSetValues calls=0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix, which is also used to construct the preconditioner:
  Mat Object: 1 MPI process
    type: seqaijcusparse
    rows=24, cols=24
    total: nonzeros=100, allocated nonzeros=120
    total number of mallocs used during MatSetValues calls=0
      not using I-node routines
Norm of error 4.24117e-06 iterations 5