xref: /petsc/doc/manual/mat.md (revision 7f296bb328fcd4c99f2da7bfe8ba7ed8a4ebceee)
(ch_matrices)=

# Matrices

PETSc provides a variety of matrix implementations because no single
matrix format is appropriate for all problems. Currently, we support
dense storage and compressed sparse row storage (both sequential and
parallel versions) for CPU- and GPU-based matrices, as well as several specialized formats. Additional
specialized formats can be easily added.

This chapter describes the basics of using PETSc matrices in general
(regardless of the particular format chosen) and discusses tips for
efficient use of the several simple uniprocess and parallel matrix
types. The use of PETSc matrices involves the following actions: create
a particular type of matrix, insert values into it, process the matrix,
use the matrix for various computations, and finally destroy the matrix.
The application code does not need to know or care about the particular
storage formats of the matrices.

(sec_matcreate)=

## Creating matrices

As with vectors, PETSc has APIs that allow the user to specify the exact details of the matrix
creation process but also `DM`-based creation routines that handle most of the details automatically
for specific families of applications. This is done with

```
DMCreateMatrix(DM dm,Mat *A)
```

The type of matrix created can be controlled with either

```
DMSetMatType(DM dm,MatType <MATAIJ or MATBAIJ or MATAIJCUSPARSE etc>)
```

or with

```
DMSetFromOptions(DM dm)
```

and the options database option `-dm_mat_type <aij or baij or aijcusparse etc>`. Matrices can be created for CPU usage, for GPU usage, and for usage on
both the CPUs and GPUs.

The creation of `DM` objects is discussed in {any}`sec_struct`, {any}`sec_unstruct`, and {any}`sec_network`.
## Low-level matrix creation routines

When using a `DM` is not practical for a particular application, one can create matrices directly
using

```
MatCreate(MPI_Comm comm,Mat *A)
MatSetSizes(Mat A,PetscInt m,PetscInt n,PetscInt M,PetscInt N)
```

This sequence generates a sequential matrix when running on one process and
a parallel matrix for two or more processes; the particular matrix
format is set by the user via options database commands. The user
specifies either the global matrix dimensions, given by `M` and `N`,
or the local dimensions, given by `m` and `n`, while PETSc completely
controls memory allocation. This approach facilitates switching among
various matrix types, for example, to determine the format that is most
efficient for a certain application. By default, `MatCreate()` employs
the sparse AIJ format, which is discussed in detail in
{any}`sec_matsparse`. See the manual pages for further
information about available matrix formats.
## Assembling (putting values into) matrices

To insert or add entries to a matrix on CPUs, one can call a variant of
`MatSetValues()`, either

```
MatSetValues(Mat A,PetscInt m,const PetscInt idxm[],PetscInt n,const PetscInt idxn[],const PetscScalar values[],INSERT_VALUES);
```

or

```
MatSetValues(Mat A,PetscInt m,const PetscInt idxm[],PetscInt n,const PetscInt idxn[],const PetscScalar values[],ADD_VALUES);
```

This routine inserts or adds a logically dense subblock of dimension
`m*n` into the matrix. The integer arrays `idxm` and `idxn`,
respectively, indicate the global row and column numbers to be inserted.
`MatSetValues()` uses the standard C convention, where the row and
column matrix indices begin with zero *regardless of the programming language
employed*. The array `values` is logically two-dimensional, containing
the values that are to be inserted. By default the values are given in
row-major order, which is the opposite of the Fortran convention,
meaning that the value to be put in row `idxm[i]` and column
`idxn[j]` is located in `values[i*n+j]`. To allow the insertion of
values in column-major order, one can call the command

```
MatSetOption(Mat A,MAT_ROW_ORIENTED,PETSC_FALSE);
```

*Warning*: Several of the sparse implementations do *not* currently
support the column-oriented option.

This notation should not be a mystery to anyone. For example, to insert
one matrix into another when using MATLAB, one uses the command
`A(im,in) = B;` where `im` and `in` contain the indices for the
rows and columns. This action is identical to the calls above to
`MatSetValues()`.

When using the block compressed sparse row matrix format (`MATSEQBAIJ`
or `MATMPIBAIJ`), one can insert elements more efficiently using the
block variant, `MatSetValuesBlocked()` or
`MatSetValuesBlockedLocal()`.

The function `MatSetOption()` accepts several other inputs; see the
manual page for details.
After the matrix elements have been inserted or added into the matrix,
they must be processed (also called “assembled”) before they can be
used. The routines for matrix processing are

```
MatAssemblyBegin(Mat A,MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(Mat A,MAT_FINAL_ASSEMBLY);
```

By placing other code between these two calls, the user can perform
computations while messages are in transit. Calls to `MatSetValues()`
with the `INSERT_VALUES` and `ADD_VALUES` options *cannot* be mixed
without intervening calls to the assembly routines. For such
intermediate assembly calls the second routine argument typically should
be `MAT_FLUSH_ASSEMBLY`, which omits some of the work of the full
assembly process. `MAT_FINAL_ASSEMBLY` is required only in the last
matrix assembly before a matrix is used.

Even though one may insert values into PETSc matrices without regard to
which process eventually stores them, for efficiency reasons we usually
recommend generating most entries on the process where they are destined
to be stored. To help the application programmer with this task for
matrices that are distributed across the processes by ranges, the
routine

```
MatGetOwnershipRange(Mat A,PetscInt *first_row,PetscInt *last_row);
```

informs the user that all rows from `first_row` to `last_row-1`
(since the value returned in `last_row` is one more than the global
index of the last local row) will be stored on the local process.

If the `Mat` was obtained from a `DM` with `DMCreateMatrix()`, then the range values are determined by the specific `DM`.
If the `Mat` was created directly, the range values are determined by the local sizes passed to `MatSetSizes()` or `MatCreateAIJ()` (and similar low-level functions for other `MatType`s).
If `PETSC_DECIDE` was passed as the local size, then the matrix uses default values for the range, computed with `PetscSplitOwnership()`.
For certain `DM`s, such as `DMDA`, it is better to use `DM`-specific routines, such as `DMDAGetGhostCorners()`, to determine
the local values in the matrix. See {any}`sec_matlayout` for full details on row and column layouts.

In the sparse matrix implementations, once the assembly routines have
been called, the matrices are compressed and can be used for
matrix-vector multiplication, etc. Any space for preallocated nonzeros
that was not filled by a call to `MatSetValues()` or a related routine
is compressed out by assembling with `MAT_FINAL_ASSEMBLY`. If you
intend to use that extra space later, be sure to insert explicit zeros
before assembling with `MAT_FINAL_ASSEMBLY` so the space will not be
compressed out. Once the matrix has been assembled, inserting new values
will be expensive since it will require copies and possibly memory
allocation.

One may repeatedly assemble matrices that retain the same
nonzero pattern (such as within a nonlinear or time-dependent problem).
Where possible, data structures and communication
information will be reused (instead of regenerated) during successive
steps, thereby increasing efficiency. See
<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/ksp/ksp/tutorials/ex5.c.html">KSP Tutorial ex5</a>
for a simple example of solving two linear systems that use the same
matrix data structure.

For matrices associated with a `DMDA` there is a higher-level interface for providing
the numerical values based on the concept of stencils. See the manual page of `MatSetValuesStencil()` for usage.

For GPUs the routines `MatSetPreallocationCOO()` and `MatSetValuesCOO()` should be used for efficient matrix assembly
instead of `MatSetValues()`.

We now introduce the various families of PETSc matrices. `DMCreateMatrix()` manages
the preallocation process (introduced below) automatically, so many users do not need to
worry about the details of preallocation.
(sec_matlayout)=

### Matrix and Vector Layouts and Storage Locations

The layout of PETSc matrices across MPI ranks is defined by two things:

- the layout of the two compatible vectors in the computation of the matrix-vector product y = A * x, and
- the memory where various parts of the matrix are stored across the MPI ranks.

PETSc vectors always have a contiguous range of vector entries stored on each MPI rank. The first rank has entries from 0 to `rend1` - 1, the
next rank has entries from `rend1` to `rend2` - 1, etc. Thus the ownership range on each rank is from `rstart` to `rend`; these values can be
obtained with `VecGetOwnershipRange(Vec x,PetscInt *rstart,PetscInt *rend)`. Each PETSc `Vec` has a `PetscLayout` object that contains this information.

All PETSc matrices have two `PetscLayout`s, which define the vector layouts for y and x in the product y = A * x. Their ownership range information
can be obtained with `MatGetOwnershipRange()`, `MatGetOwnershipRangeColumn()`, `MatGetOwnershipRanges()`, and `MatGetOwnershipRangesColumn()`.
Note that `MatCreateVecs()` provides two vectors whose layouts are compatible with a given matrix.

For most PETSc matrices, excluding `MATELEMENTAL` and `MATSCALAPACK`, the row ownership range obtained with `MatGetOwnershipRange()` also defines
where the matrix entries are stored; the matrix entries for rows `rstart` to `rend - 1` are stored on the corresponding MPI rank. For other matrices
the rank where each matrix entry is stored is more complicated; information about the storage locations can be obtained with `MatGetOwnershipIS()`.
Note that for
most PETSc matrices the values returned by `MatGetOwnershipIS()` are the same as those returned by `MatGetOwnershipRange()` and
`MatGetOwnershipRangeColumn()`.

The PETSc object `PetscLayout` contains the ownership information that is provided by `VecGetOwnershipRange()`, `MatGetOwnershipRange()`, and `MatGetOwnershipRangeColumn()`. Each vector has one layout, which can be obtained with `VecGetLayout()`, and each matrix has two, which can be obtained with `MatGetLayouts()`. Layouts support the routines `PetscLayoutGetLocalSize()`, `PetscLayoutGetSize()`, `PetscLayoutGetBlockSize()`, `PetscLayoutGetRanges()`, and `PetscLayoutCompare()`, as well as a variety of creation routines. These are used by `Vec` and `Mat` internally and so are rarely needed directly. Finally, `PetscSplitOwnership()` is a utility routine that does the same splitting of ownership ranges as `PetscLayout`.

(sec_matsparse)=

### Sparse Matrices

The default matrix representation within PETSc is the general sparse AIJ
format (also called the compressed sparse
row format, CSR). This section discusses tips for *efficiently* using
this matrix format for large-scale applications. Additional formats
(such as block compressed row and block symmetric storage, which are
generally much more efficient for problems with multiple degrees of
freedom per node) are discussed below. Beginning users need not concern
themselves initially with such details and may wish to proceed directly
to {any}`sec_matoptions`. However, when an application code
progresses to the point of tuning for efficiency and/or generating
timing results, it is *crucial* to read this information.

#### Sequential AIJ Sparse Matrices

In the PETSc AIJ matrix formats, we store the nonzero elements by rows,
along with an array of corresponding column numbers and an array of
pointers to the beginning of each row. Note that the diagonal matrix
entries are stored with the rest of the nonzeros (not separately).

To create a sequential AIJ sparse matrix, `A`, with `m` rows and
`n` columns, one uses the command

```
MatCreateSeqAIJ(PETSC_COMM_SELF,PetscInt m,PetscInt n,PetscInt nz,PetscInt *nnz,Mat *A);
```

where `nz` or `nnz` can be used to preallocate matrix memory, as
discussed below. The user can set `nz=0` and `nnz=NULL` for PETSc to
control all matrix memory allocation.

The sequential and parallel AIJ matrix storage formats by default employ
*i-nodes* (identical nodes) when possible. We search for consecutive
rows with the same nonzero structure, thereby reusing matrix information
for increased efficiency. Related options database keys are
`-mat_no_inode` (do not use i-nodes) and `-mat_inode_limit <limit>`
(set the i-node limit; the maximum is 5). Note that problems with a single degree
of freedom per grid node will automatically not use i-nodes.

The internal data representation for the AIJ formats employs zero-based
indexing.
#### Preallocation of Memory for Sequential AIJ Sparse Matrices

The dynamic process of allocating new memory and copying from the old
storage to the new is *intrinsically very expensive*. Thus, to obtain
good performance when assembling an AIJ matrix, it is crucial to
preallocate the memory needed for the sparse matrix. The user has two
choices for preallocating matrix memory via `MatCreateSeqAIJ()`.

One can use the scalar `nz` to specify the expected number of nonzeros
for each row. This is generally fine if the number of nonzeros per row
is roughly the same throughout the matrix (or as a quick and easy first
step for preallocation). If one underestimates the actual number of
nonzeros in a given row, then during the assembly process PETSc will
automatically allocate additional needed space. However, this extra
memory allocation can slow the computation.

If different rows have very different numbers of nonzeros, one should
attempt to indicate (nearly) the exact number of elements intended for
the various rows with the optional array `nnz` of length `m`, where
`m` is the number of rows, for example

```
PetscInt nnz[m];
nnz[0] = <nonzeros in row 0>
nnz[1] = <nonzeros in row 1>
....
nnz[m-1] = <nonzeros in row m-1>
```

In this case, the assembly process will require no additional memory
allocations if the `nnz` estimates are correct. If, however, the
`nnz` estimates are incorrect, PETSc will automatically obtain the
additional needed space, at a slight loss of efficiency.

Using the array `nnz` to preallocate memory is especially important
for efficient matrix assembly if the number of nonzeros varies
considerably among the rows. One can generally set `nnz` either by
knowing in advance the problem structure (e.g., the stencil for finite
difference problems on a structured grid) or by precomputing the
information by using a segment of code similar to that for the regular
matrix assembly. The overhead of determining the `nnz` array will be
quite small compared with the overhead of the inherently expensive
`malloc`s and moves of data that are needed for dynamic allocation
during matrix assembly. Always guess high if an exact value is not known
(extra space is cheaper than too little).

Thus, when assembling a sparse matrix with very different numbers of
nonzeros in various rows, one could proceed as follows for finite
difference methods:

1. Allocate an integer array `nnz`.
2. Loop over the grid, counting the expected number of nonzeros for the
   row(s) associated with the various grid points.
3. Create the sparse matrix via `MatCreateSeqAIJ()` or an alternative.
4. Loop over the grid, generating matrix entries and inserting them into the matrix
   via `MatSetValues()`.

For (vertex-based) finite element type calculations, an analogous
procedure is as follows:

1. Allocate an integer array `nnz`.
2. Loop over vertices, computing the number of neighbor vertices, which
   determines the number of nonzeros for the corresponding matrix
   row(s).
3. Create the sparse matrix via `MatCreateSeqAIJ()` or an alternative.
4. Loop over elements, generating matrix entries and inserting them into the matrix
   via `MatSetValues()`.

The `-info` option causes the routines `MatAssemblyBegin()` and
`MatAssemblyEnd()` to print information about the success of the
preallocation. Consider the following example for the `MATSEQAIJ`
matrix format:

```
MatAssemblyEnd_SeqAIJ:Matrix size 10 X 10; storage space:20 unneeded, 100 used
MatAssemblyEnd_SeqAIJ:Number of mallocs during MatSetValues is 0
```

The first line indicates that the user preallocated 120 spaces but only
100 were used. The second line indicates that the user preallocated
enough space so that PETSc did not have to internally allocate
additional space (an expensive operation). In the next example the user
did not preallocate sufficient space, as indicated by the fact that the
number of mallocs is very large (bad for efficiency):

```
MatAssemblyEnd_SeqAIJ:Matrix size 10 X 10; storage space:47 unneeded, 1000 used
MatAssemblyEnd_SeqAIJ:Number of mallocs during MatSetValues is 40000
```

Although at first glance such procedures for determining the matrix
structure in advance may seem unusual, they are actually very efficient
because they alleviate the need for dynamic construction of the matrix
data structure, which can be very expensive.

#### Parallel AIJ Sparse Matrices

Parallel sparse matrices with the AIJ format can be created with the
command

```
MatCreateAIJ(MPI_Comm comm,PetscInt m,PetscInt n,PetscInt M,PetscInt N,PetscInt d_nz,PetscInt *d_nnz, PetscInt o_nz,PetscInt *o_nnz,Mat *A);
```

`A` is the newly created matrix, while the arguments `m`, `M`, and
`N` indicate the number of local rows and the number of global rows
and columns, respectively. In the PETSc partitioning scheme, all the
matrix columns are local and `n` is the number of columns
corresponding to the local part of a parallel vector. Either the local or
global parameters can be replaced with `PETSC_DECIDE`, so that PETSc
will determine them. The matrix is stored with a fixed number of rows on
each process, given by `m`, or determined by PETSc if `m` is
`PETSC_DECIDE`.

If `PETSC_DECIDE` is not used for the arguments `m` and `n`, then
the user must ensure that they are chosen to be compatible with the
vectors. To do this, one first considers the matrix-vector product
$y = A x$. The `m` that is used in the matrix creation routine
`MatCreateAIJ()` must match the local size used in the vector creation
routine `VecCreateMPI()` for `y`. Likewise, the `n` used must
match that used as the local size in `VecCreateMPI()` for `x`.

The user must set `d_nz=0`, `o_nz=0`, `d_nnz=NULL`, and
`o_nnz=NULL` for PETSc to control dynamic allocation of matrix memory
space. Analogous to `nz` and `nnz` for the routine
`MatCreateSeqAIJ()`, these arguments optionally specify nonzero
information for the diagonal (`d_nz` and `d_nnz`) and off-diagonal
(`o_nz` and `o_nnz`) parts of the matrix. For a square global
matrix, we define each process’s diagonal portion to be its local rows
and the corresponding columns (a square submatrix); each process’s
off-diagonal portion encompasses the remainder of the local matrix (a
rectangular submatrix). The rank in the MPI communicator determines the
absolute ordering of the blocks. That is, the process with rank 0 in the
communicator given to `MatCreateAIJ()` contains the top rows of the
matrix; the i$^{th}$ process in that communicator contains the
i$^{th}$ block of the matrix.

#### Preallocation of Memory for Parallel AIJ Sparse Matrices

As discussed above, preallocation of memory is critical for achieving
good performance during matrix assembly, as this reduces the number of
allocations and copies required. We present an example for three
processes to indicate how this may be done for the `MATMPIAIJ` matrix
format. Consider the 8 by 8 matrix, which is partitioned by default with
three rows on the first process, three on the second, and two on the
third.

$$
\left( \begin{array}{cccccccccc}
1  & 2  & 0  & | & 0  & 3  & 0  & |  & 0  & 4  \\
0  & 5  & 6  & | & 7  & 0  & 0  & |  & 8  & 0 \\
9  & 0  & 10 & | & 11 & 0  & 0  & |  & 12 & 0  \\
\hline \\
13 & 0  & 14 & | & 15 & 16 & 17 & |  & 0  & 0  \\
0  & 18 & 0  & | & 19 & 20 & 21 & |  & 0  & 0 \\
0  & 0  & 0  & | & 22 & 23 & 0  & |  & 24 & 0 \\
\hline \\
25 & 26 & 27 & | & 0  & 0  & 28 & |  & 29 & 0 \\
30 & 0  & 0  & | & 31 & 32 & 33 & |  & 0  & 34
\end{array} \right)
$$

The “diagonal” submatrix, `d`, on the first process is given by

$$
\left( \begin{array}{ccc}
1  & 2  & 0  \\
0  & 5  & 6  \\
9  & 0  & 10
\end{array} \right),
$$

while the “off-diagonal” submatrix, `o`, is given by

$$
\left( \begin{array}{ccccc}
 0  & 3  & 0   & 0  & 4  \\
 7  & 0  & 0   & 8  & 0  \\
 11 & 0  & 0   & 12 & 0  \\
\end{array} \right).
$$

For the first process one could set `d_nz` to 2 (since each row has 2
nonzeros) or, alternatively, set `d_nnz` to $\{2,2,2\}$. The
`o_nz` could be set to 2 since each row of the `o` matrix has 2
nonzeros, or `o_nnz` could be set to $\{2,2,2\}$.

For the second process the `d` submatrix is given by

$$
\left( \begin{array}{ccc}
 15 & 16 & 17 \\
 19 & 20 & 21 \\
 22 & 23 & 0
\end{array} \right).
$$

Thus, one could set `d_nz` to 3, since the maximum number of nonzeros
in each row is 3, or alternatively one could set `d_nnz` to
$\{3,3,2\}$, thereby indicating that the first two rows will have
3 nonzeros while the third has 2. The corresponding `o` submatrix for
the second process is

$$
\left( \begin{array}{ccccc}
13 & 0  & 14 &  0  & 0  \\
0  & 18 & 0  &  0  & 0 \\
0  & 0  & 0  &  24 & 0 \\
\end{array} \right)
$$

so that one could set `o_nz` to 2 or `o_nnz` to $\{2,1,1\}$.

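The `d`/`o` split for a row can be computed directly from the global column indices and the range of columns owned by the process. The following plain-C sketch (independent of PETSc) counts the entries of one row that land in each block; these counts are exactly what `d_nnz` and `o_nnz` describe:

```c
/* Count how many entries of one matrix row fall in the "diagonal" block
   (global columns in [cstart, cend), the columns owned by this process)
   versus the "off-diagonal" block. */
void count_row_split(const int *cols, int ncols, int cstart, int cend,
                     int *d_count, int *o_count)
{
  *d_count = 0;
  *o_count = 0;
  for (int j = 0; j < ncols; j++) {
    if (cols[j] >= cstart && cols[j] < cend) (*d_count)++;
    else (*o_count)++;
  }
}
```

For the first row of the example above (nonzeros in global columns 0, 1, 4, and 7, with the first process owning columns 0 through 2), this yields 2 diagonal and 2 off-diagonal entries, matching `d_nnz[0] = 2` and `o_nnz[0] = 2`.
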
Note that the user never directly works with the `d` and `o`
submatrices, except when preallocating storage space as indicated above.
Also, the user need not preallocate exactly the correct amount of space;
as long as a sufficiently close estimate is given, the high efficiency
of matrix assembly will remain.

As described above, the option `-info` will print information about
the success of preallocation during matrix assembly. For the
`MATMPIAIJ` and `MATMPIBAIJ` formats, PETSc will also list the
number of elements owned by each process that were generated on a
different process. For example, the statements

```
MatAssemblyBegin_MPIAIJ:Stash has 10 entries, uses 0 mallocs
MatAssemblyBegin_MPIAIJ:Stash has 3 entries, uses 0 mallocs
MatAssemblyBegin_MPIAIJ:Stash has 5 entries, uses 0 mallocs
```

indicate that very few values have been generated on different
processes. On the other hand, the statements

```
MatAssemblyBegin_MPIAIJ:Stash has 100000 entries, uses 100 mallocs
MatAssemblyBegin_MPIAIJ:Stash has 77777 entries, uses 70 mallocs
```

indicate that many values have been generated on the “wrong” processes.
This situation can be very inefficient, since the transfer of values to
the “correct” process is generally expensive. By using the command
`MatGetOwnershipRange()` in application codes, the user should be able
to generate most entries on the owning process.

*Note*: It is fine to generate some entries on the “wrong” process.
Often this can lead to cleaner, simpler, less buggy codes. One should
never make code overly complicated in order to generate all values
locally. Rather, one should organize the code in such a way that *most*
values are generated locally.

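A typical assembly loop restricted to locally owned rows might look like the following sketch (error checking omitted; `ComputeRow()` is a hypothetical application routine that fills the column indices and values for one row):

```
PetscInt rstart, rend, row, ncols;

MatGetOwnershipRange(A, &rstart, &rend);
for (row = rstart; row < rend; row++) {
  ComputeRow(row, &ncols, cols, vals);  /* hypothetical application routine */
  MatSetValues(A, 1, &row, ncols, cols, vals, INSERT_VALUES);
}
MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
```
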
The routine `MatCreateAIJCUSPARSE()` allows one to create GPU-based matrices for NVIDIA systems.
`MatCreateAIJKokkos()` can create matrices for use with CPU, OpenMP, NVIDIA, AMD, or Intel GPU-based systems.

It is sometimes difficult to compute the required preallocation information efficiently, so PETSc provides a
special `MatType`, `MATPREALLOCATOR`, that makes computing this information more straightforward. One first creates a matrix of this type and then, using the same
code that one would use to compute the matrix's numerical values, calls `MatSetValues()` for this matrix, without needing to provide any
preallocation information (and without needing to provide the numerical values). Once this is complete, one uses `MatPreallocatorPreallocate()` to
transfer the accumulated preallocation information to
the actual matrix that will be used for the computations. We hope to simplify this process in the future by removing `MATPREALLOCATOR`
and instead using its efficient insertion process automatically during the first assembly of any matrix type, without
requiring the detailed preallocation information.

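A sketch of this two-pass workflow (error checking omitted; `FillMatrix()` stands in for the application's assembly loop, which makes the same `MatSetValues()` calls in both passes):

```
Mat preallocator, A;

/* Pass 1: record the nonzero pattern with a MATPREALLOCATOR matrix */
MatCreate(PETSC_COMM_WORLD, &preallocator);
MatSetType(preallocator, MATPREALLOCATOR);
MatSetSizes(preallocator, m, n, M, N);
MatSetUp(preallocator);
FillMatrix(preallocator);
MatAssemblyBegin(preallocator, MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(preallocator, MAT_FINAL_ASSEMBLY);

/* Pass 2: transfer the pattern to the real matrix, then assemble it */
MatCreate(PETSC_COMM_WORLD, &A);
MatSetType(A, MATAIJ);
MatSetSizes(A, m, n, M, N);
MatPreallocatorPreallocate(preallocator, PETSC_TRUE, A);
MatDestroy(&preallocator);
FillMatrix(A);
MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
```
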
See {any}`doc_matrix` for a table of the matrix types available in PETSc.

(sec_matlmvm)=

#### Limited-Memory Variable Metric (LMVM) Matrices

Variable metric methods, also known as quasi-Newton methods, are
frequently used for root finding problems and approximate Jacobian
matrices or their inverses via sequential nonlinear updates based on the
secant condition. The limited-memory variants do not store the full
explicit Jacobian, and instead compute forward products and inverse
applications based on a fixed number of stored update vectors.

```{eval-rst}
.. list-table:: PETSc LMVM matrix implementations.
  :name: tab_matlmvmimpl
  :header-rows: 1

  * - Method
    - PETSc Type
    - Name
    - Property
  * - "Good" Broyden :cite:`keyprefix-griewank2012broyden`
    - ``MATLMVMBrdn``
    - ``lmvmbrdn``
    - Square
  * - "Bad" Broyden :cite:`keyprefix-griewank2012broyden`
    - ``MATLMVMBadBrdn``
    - ``lmvmbadbrdn``
    - Square
  * - Symmetric Rank-1 :cite:`keyprefix-nocedal2006numerical`
    - ``MATLMVMSR1``
    - ``lmvmsr1``
    - Symmetric
  * - Davidon-Fletcher-Powell (DFP) :cite:`keyprefix-nocedal2006numerical`
    - ``MATLMVMDFP``
    - ``lmvmdfp``
    - SPD
  * - Dense Davidon-Fletcher-Powell (DFP) :cite:`keyprefix-ErwayMarcia2017`
    - ``MATLMVMDDFP``
    - ``lmvmddfp``
    - SPD
  * - Broyden-Fletcher-Goldfarb-Shanno (BFGS) :cite:`keyprefix-nocedal2006numerical`
    - ``MATLMVMBFGS``
    - ``lmvmbfgs``
    - SPD
  * - Dense Broyden-Fletcher-Goldfarb-Shanno (BFGS) :cite:`keyprefix-ErwayMarcia2017`
    - ``MATLMVMDBFGS``
    - ``lmvmdbfgs``
    - SPD
  * - Dense Quasi-Newton
    - ``MATLMVMDQN``
    - ``lmvmdqn``
    - SPD
  * - Restricted Broyden Family :cite:`keyprefix-erway2017solving`
    - ``MATLMVMSymBrdn``
    - ``lmvmsymbrdn``
    - SPD
  * - Restricted Broyden Family (full-memory diagonal)
    - ``MATLMVMDiagBrdn``
    - ``lmvmdiagbrdn``
    - SPD
```

PETSc implements ten different LMVM matrices, listed in the
table above. They can be created using the
`MatCreate()` and `MatSetType()` workflow, and share a number of
common interface functions. We will review the most important ones
below:

- `MatLMVMAllocate(Mat B, Vec X, Vec F)` – Creates the internal data
  structures necessary to store nonlinear updates and compute
  forward/inverse applications. The `X` vector defines the solution
  space while `F` defines the function space for the history of
  updates.
- `MatLMVMUpdate(Mat B, Vec X, Vec F)` – Applies a nonlinear update to
  the approximate Jacobian such that $s_k = x_k - x_{k-1}$ and
  $y_k = f(x_k) - f(x_{k-1})$, where $k$ is the index for
  the update.
- `MatLMVMReset(Mat B, PetscBool destructive)` – Flushes the
  accumulated nonlinear updates and resets the matrix to the initial
  state. If `destructive = PETSC_TRUE`, the reset also destroys the
  internal data structures and necessitates another allocation call
  before the matrix can be updated and used for products and solves.
- `MatLMVMSetJ0(Mat B, Mat J0)` – Defines the initial Jacobian to
  apply the updates to. If no initial Jacobian is provided, the updates
  are applied to an identity matrix.

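The interface above can be put together as in the following sketch (error checking omitted; the vectors `x`, `f`, and `dx` and the sizes `n` and `N` are assumed to come from the application):

```
Mat B;

MatCreate(PETSC_COMM_WORLD, &B);
MatSetType(B, MATLMVMBFGS);
MatSetSizes(B, n, n, N, N);
MatSetUp(B);
MatLMVMAllocate(B, x, f);

/* inside the nonlinear iteration: */
MatLMVMUpdate(B, x, f);   /* accumulate (s_k, y_k) from successive (x, f) pairs */
MatSolve(B, f, dx);       /* apply the approximate inverse Jacobian */
```
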
LMVM matrices can be applied to vectors in forward mode via
`MatMult()` or `MatMultAdd()`, and in inverse mode via
`MatSolve()`. They also support `MatCreateVecs()`, `MatDuplicate()`
and `MatCopy()` operations.

The Restricted Broyden Family, DFP and BFGS methods, including their dense
versions, additionally implement special Jacobian initialization and
scaling options available via
`-mat_lmvm_scale_type <none,scalar,diagonal>`. We describe these
choices below:

- `none` – Sets the initial Jacobian to be equal to the identity
  matrix. No extra computations are required when obtaining the search
  direction or updating the approximation. However, the number of
  function evaluations required to converge the Newton solution is
  typically much larger than what is required when using other
  initializations.

- `scalar` – Defines the initial Jacobian as a scalar multiple of the
  identity matrix. The scalar value $\sigma$ is chosen by solving
  the one-dimensional optimization problem

  $$
  \min_\sigma \|\sigma^\alpha Y - \sigma^{\alpha - 1} S\|_F^2,
  $$

  where $S$ and $Y$ are the matrices whose columns contain
  a subset of the update vectors $s_k$ and $y_k$, and
  $\alpha \in [0, 1]$ is defined by the user via
  `-mat_lmvm_alpha` and has a different default value for each LMVM
  implementation (e.g., the default $\alpha = 1$ for BFGS produces
  the well-known $y_k^T s_k / y_k^T y_k$ scalar initialization).
  The number of updates to be used in the $S$ and $Y$
  matrices is 1 by default (i.e., the latest update only) and can be
  changed via `-mat_lmvm_sigma_hist`. This technique is inspired by
  Gilbert and Lemarechal {cite}`keyprefix-gilbert-lemarechal`.

- `diagonal` – Uses a full-memory restricted Broyden update formula
  to construct a diagonal matrix for the Jacobian initialization.
  Although the full-memory formula is utilized, the actual memory
  footprint is restricted to only the vector representing the diagonal
  and some additional work vectors used in its construction. The
  diagonal terms are also re-scaled with every update as suggested in
  {cite}`keyprefix-gilbert-lemarechal`. This initialization requires
  the most computational effort of the available choices but typically
  results in a significant reduction in the number of function
  evaluations taken to compute a solution.

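For the single-update case with $\alpha = 1$, the minimization above reduces to $\sigma = y^T s / y^T y$. A plain-C illustration of this special case (independent of PETSc):

```c
/* Scalar initial-Jacobian scaling for the alpha = 1, single-update case:
   minimizing ||sigma*y - s||^2 over sigma gives sigma = (y.s)/(y.y).
   The initial Jacobian is then sigma times the identity matrix. */
double lmvm_scalar_sigma(const double *s, const double *y, int n)
{
  double ys = 0.0, yy = 0.0;
  for (int i = 0; i < n; i++) {
    ys += y[i] * s[i];
    yy += y[i] * y[i];
  }
  return ys / yy;
}
```

For example, with $s = (2, 4)$ and $y = (1, 2)$ this gives $\sigma = 10/5 = 2$.
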
The dense implementations are numerically equivalent to DFP and BFGS,
but they try to minimize memory transfer at the cost of storage
{cite}`keyprefix-ErwayMarcia2017`. Generally, the dense formulations of DFP
and BFGS, `MATLMVMDDFP` and `MATLMVMDBFGS`, should be faster than the
classical recursive versions on both CPU and GPU. It should be noted
that `MatMult()` of dense BFGS and `MatSolve()` of dense DFP require a
Cholesky factorization, which may be numerically unstable if a Jacobian
initialization other than `none` is used. Therefore, the default is to
use the classical recursive algorithms for these operations, avoiding
the Cholesky factorization. This behavior can be toggled via
`-mat_lbfgs_recursive` and `-mat_ldfp_recursive`.

Dense Quasi-Newton, `MATLMVMDQN`, is an implementation that uses the
`MatSolve()` of `MATLMVMDBFGS` for its `MatSolve()`, and the
`MatMult()` of `MATLMVMDDFP` for its `MatMult()`. It can be
seen as a hybrid implementation that avoids both the recursive implementation
and the Cholesky factorization, trading numerical accuracy for performance.

Note that a user-provided initial Jacobian set via `MatLMVMSetJ0()`
overrides and disables all built-in initialization methods.

(sec_matdense)=

### Dense Matrices

PETSc provides both sequential and parallel dense matrix formats, where
each process stores its entries in a column-major array in the usual
Fortran style. To create a sequential dense PETSc matrix `A` of
dimensions `m` by `n`, the user should call

```
MatCreateSeqDense(PETSC_COMM_SELF,PetscInt m,PetscInt n,PetscScalar *data,Mat *A);
```

The variable `data` enables the user to optionally provide the
location of the data for matrix storage (intended for Fortran users who
wish to allocate their own storage space). Most users should merely set
`data` to `NULL` for PETSc to control matrix memory allocation. To
create a parallel dense matrix `A`, the user should call

```
MatCreateDense(MPI_Comm comm,PetscInt m,PetscInt n,PetscInt M,PetscInt N,PetscScalar *data,Mat *A)
```

The arguments `m`, `n`, `M`, and `N` indicate the number of
local rows and columns and the number of global rows and columns,
respectively. Either the local or global parameters can be replaced with
`PETSC_DECIDE`, so that PETSc will determine them. The matrix is
stored with a fixed number of rows on each process, given by `m`, or
determined by PETSc if `m` is `PETSC_DECIDE`.

PETSc does not provide parallel dense direct solvers, instead
interfacing to external packages that provide these solvers. Our focus
is on sparse iterative solvers.

(sec_matnest)=

### Block Matrices

Block matrices arise when coupling variables with different meaning,
especially when solving problems with constraints (e.g. incompressible
flow) and “multi-physics” problems. Usually the number of blocks is
small and each block is partitioned in parallel. We illustrate for a
$3\times 3$ system with components labeled $a,b,c$. With
some numbering of unknowns, the matrix could be written as

$$
\left( \begin{array}{ccc}
    A_{aa} & A_{ab} & A_{ac} \\
    A_{ba} & A_{bb} & A_{bc} \\
    A_{ca} & A_{cb} & A_{cc}
  \end{array} \right) .
$$

There are two fundamentally different ways that this matrix could be
stored, as a single assembled sparse matrix where entries from all
blocks are merged together (“monolithic”), or as separate assembled
matrices for each block (“nested”). These formats have different
performance characteristics depending on the operation being performed.
In particular, many preconditioners require a monolithic format, but
some that are very effective for solving block systems (see
{any}`sec_block_matrices`) are more efficient when a nested
format is used. In order to stay flexible, we would like to be able to
use the same code to assemble block matrices in both monolithic and
nested formats. Additionally, for software maintainability and testing,
especially in a multi-physics context where different groups might be
responsible for assembling each of the blocks, it is desirable to be
able to use exactly the same code to assemble a single block
independently as to assemble it as part of a larger system. To do this,
we introduce the four spaces shown in {numref}`fig_localspaces`.

- The monolithic global space is the space in which the Krylov and
  Newton solvers operate, with collective semantics across the entire
  block system.
- The split global space splits the blocks apart, but each split still
  has collective semantics.
- The split local space adds ghost points and separates the blocks.
  Operations in this space can be performed with no parallel
  communication. This is often the most natural, and certainly the most
  powerful, space for matrix assembly code.
- The monolithic local space can be thought of as adding ghost points
  to the monolithic global space, but it is often more natural to use
  it simply as a concatenation of split local spaces on each process.
  It is not common to explicitly manipulate vectors or matrices in this
  space (at least not during assembly), but it is useful for
  declaring which part of a matrix is being assembled.

:::{figure} /images/manual/localspaces.svg
:alt: The relationship between spaces used for coupled assembly.
:name: fig_localspaces

The relationship between spaces used for coupled assembly.
:::

The key to format-independent assembly is the function

```
MatGetLocalSubMatrix(Mat A,IS isrow,IS iscol,Mat *submat);
```

which provides a “view” `submat` into a matrix `A` that operates in
the monolithic global space. The `submat` transforms from the split
local space defined by `iscol` to the split local space defined by
`isrow`. The index sets specify the parts of the monolithic local
space that `submat` should operate in. If a nested matrix format is
used, then `MatGetLocalSubMatrix()` finds the nested block and returns
it without making any copies. In this case, `submat` is fully
functional and has a parallel communicator. If a monolithic matrix
format is used, then `MatGetLocalSubMatrix()` returns a proxy matrix
on `PETSC_COMM_SELF` that does not provide values or implement
`MatMult()`, but does implement `MatSetValuesLocal()` and, if
`isrow,iscol` have a constant block size,
`MatSetValuesBlockedLocal()`. Note that although `submat` may not be
a fully functional matrix and the caller does not even know a priori
which communicator it will reside on, it always implements the local
assembly functions (which are not collective). The index sets
`isrow,iscol` can be obtained using `DMCompositeGetLocalISs()` if
`DMCOMPOSITE` is being used. `DMCOMPOSITE` can also be used to create
matrices, in which case the `MATNEST` format can be specified using
`-prefix_dm_mat_type nest` and `MATAIJ` can be specified using
`-prefix_dm_mat_type aij`. See
<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/snes/tutorials/ex28.c.html">SNES Tutorial ex28</a>
for a simple example using this interface.

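Format-independent assembly of one coupling block might look like the following sketch (error checking omitted; the index sets `isa`, `isb` and the arrays `rows`, `cols`, `vals` are assumed to come from the application):

```
Mat A, Aab;

/* Assemble the coupling block A_{ab} without knowing whether A is
   stored as MATNEST or as a monolithic MATAIJ */
MatGetLocalSubMatrix(A, isa, isb, &Aab);
MatSetValuesLocal(Aab, nrows, rows, ncols, cols, vals, ADD_VALUES);
MatRestoreLocalSubMatrix(A, isa, isb, &Aab);
MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
```
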
(sec_matoptions)=

## Basic Matrix Operations

Table {any}`2.2 <fig_matrixops>` summarizes basic PETSc matrix operations.
We briefly discuss a few of these routines in more detail below.

The parallel matrix can multiply a vector with `n` local entries,
returning a vector with `m` local entries. That is, to form the
product

```
MatMult(Mat A,Vec x,Vec y);
```

the vectors `x` and `y` should be generated with

```
VecCreateMPI(MPI_Comm comm,n,N,&x);
VecCreateMPI(MPI_Comm comm,m,M,&y);
```

By default, if the user lets PETSc decide the number of components to be
stored locally (by passing in `PETSC_DECIDE` as the second argument to
`VecCreateMPI()` or using `VecCreate()`), vectors and matrices of
the same dimension are automatically compatible for parallel
matrix-vector operations.

Along with the matrix-vector multiplication routine, there is a version
for the transpose of the matrix,

```
MatMultTranspose(Mat A,Vec x,Vec y);
```

There are also versions that add the result to another vector:

```
MatMultAdd(Mat A,Vec x,Vec y,Vec w);
MatMultTransposeAdd(Mat A,Vec x,Vec y,Vec w);
```

These routines, respectively, produce $w = A*x + y$ and
$w = A^{T}*x + y$. In C it is legal for the vectors `y` and
`w` to be identical. In Fortran, this situation is forbidden by the
language standard, but we allow it anyway.

One can print a matrix (sequential or parallel) to the screen with the
command

```
MatView(Mat mat,PETSC_VIEWER_STDOUT_WORLD);
```

Other viewers can be used as well. For instance, one can draw the
nonzero structure of the matrix into the default X-window with the
command

```
MatView(Mat mat,PETSC_VIEWER_DRAW_WORLD);
```

Also one can use

```
MatView(Mat mat,PetscViewer viewer);
```

where `viewer` was obtained with `PetscViewerDrawOpen()`. Additional
viewers and options are given in the `MatView()` man page and
{any}`sec_viewers`.

```{eval-rst}
.. list-table:: PETSc Matrix Operations
  :name: fig_matrixops
  :header-rows: 1

  * - Function Name
    - Operation
  * - ``MatAXPY(Mat Y, PetscScalar a, Mat X, MatStructure s);``
    - :math:`Y = Y + a*X`
  * - ``MatAYPX(Mat Y, PetscScalar a, Mat X, MatStructure s);``
    - :math:`Y = a*Y + X`
  * - ``MatMult(Mat A,Vec x, Vec y);``
    - :math:`y = A*x`
  * - ``MatMultAdd(Mat A,Vec x, Vec y,Vec z);``
    - :math:`z = y + A*x`
  * - ``MatMultTranspose(Mat A,Vec x, Vec y);``
    - :math:`y = A^{T}*x`
  * - ``MatMultTransposeAdd(Mat A, Vec x, Vec y, Vec z);``
    - :math:`z = y + A^{T}*x`
  * - ``MatNorm(Mat A,NormType type, PetscReal *r);``
    - :math:`r = \|A\|_{type}`
  * - ``MatDiagonalScale(Mat A,Vec l,Vec r);``
    - :math:`A = \text{diag}(l)*A*\text{diag}(r)`
  * - ``MatScale(Mat A,PetscScalar a);``
    - :math:`A = a*A`
  * - ``MatConvert(Mat A, MatType type, Mat *B);``
    - :math:`B = A`
  * - ``MatCopy(Mat A, Mat B, MatStructure s);``
    - :math:`B = A`
  * - ``MatGetDiagonal(Mat A, Vec x);``
    - :math:`x = \text{diag}(A)`
  * - ``MatTranspose(Mat A, MatReuse, Mat* B);``
    - :math:`B = A^{T}`
  * - ``MatZeroEntries(Mat A);``
    - :math:`A = 0`
  * - ``MatShift(Mat Y, PetscScalar a);``
    - :math:`Y =  Y + a*I`
```

```{eval-rst}
.. list-table:: Values of MatStructure
  :name: fig_matstructure
  :header-rows: 1

  * - Name
    - Meaning
  * - ``SAME_NONZERO_PATTERN``
    - the matrices have an identical nonzero pattern
  * - ``DIFFERENT_NONZERO_PATTERN``
    - the matrices may have a different nonzero pattern
  * - ``SUBSET_NONZERO_PATTERN``
    - the second matrix has a subset of the nonzeros in the first matrix
  * - ``UNKNOWN_NONZERO_PATTERN``
    - there is nothing known about the relation between the nonzero patterns of the two matrices
```

The `NormType` argument to `MatNorm()` is one of `NORM_1`,
`NORM_INFINITY`, and `NORM_FROBENIUS`.

933*7f296bb3SBarry Smith(sec_matrixfree)=
934*7f296bb3SBarry Smith
935*7f296bb3SBarry Smith## Application Specific Custom Matrices
936*7f296bb3SBarry Smith
937*7f296bb3SBarry SmithSome people like to use matrix-free methods, which do
938*7f296bb3SBarry Smithnot require explicit storage of the matrix, for the numerical solution
939*7f296bb3SBarry Smithof partial differential equations.
940*7f296bb3SBarry SmithSimilarly, users may already have a custom matrix data structure and routines
941*7f296bb3SBarry Smithfor that data structure and would like to wrap their code up into a `Mat`;
942*7f296bb3SBarry Smiththat is, provide their own custom matrix type.
943*7f296bb3SBarry Smith
To use the PETSc-provided matrix-free matrix, which uses finite
differencing to approximate the matrix-vector product, use
`MatCreateMFFD()`; see {any}`sec_nlmatrixfree`.
To provide your own matrix operations (such as `MatMult()`),
use the following command to create a `Mat` structure
without ever actually generating the matrix:

```
MatCreateShell(MPI_Comm comm,PetscInt m,PetscInt n,PetscInt M,PetscInt N,void *ctx,Mat *mat);
```

Here `M` and `N` are the global matrix dimensions (rows and
columns), `m` and `n` are the local matrix dimensions, and `ctx`
is a pointer to data needed by any user-defined shell matrix operations;
the manual page has additional details about these parameters. Most
matrix-free algorithms require only the application of the linear
operator to a vector. To provide this action, the user must write a
routine with the calling sequence

```
UserMult(Mat mat,Vec x,Vec y);
```

and then associate it with the matrix, `mat`, by using the command

```
MatShellSetOperation(mat,MATOP_MULT,(void (*)(void)) UserMult);
```

Here `MATOP_MULT` is the name of the operation for matrix-vector
multiplication. Within each user-defined routine (such as
`UserMult()`), the user should call `MatShellGetContext()` to obtain
the user-defined context, `ctx`, that was set by `MatCreateShell()`.
This shell matrix can be used with the iterative linear equation solvers
discussed in the following chapters.

The routine `MatShellSetOperation()` can be used to set any other
matrix operation as well. The file
`$PETSC_DIR/include/petscmat.h` (<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/include/petscmat.h.html">source</a>)
provides a complete list of matrix operations, which have the form
`MATOP_<OPERATION>`, where `<OPERATION>` is the name (in all capital
letters) of the user interface routine (for example, `MatMult()`
$\to$ `MATOP_MULT`). All user-provided functions have the same
calling sequence as the usual matrix interface routines, since the
user-defined functions are intended to be accessed through the same
interface, e.g., `MatMult(Mat,Vec,Vec)` $\to$
`UserMult(Mat,Vec,Vec)`. The final argument of
`MatShellSetOperation()` needs to be cast to `void (*)(void)`, since
(depending on the `MatOperation`) it could be a pointer to one of a
variety of different function types.

Note that `MatShellSetOperation()` can also be used as a “backdoor”
means of introducing user-defined changes in matrix operations for other
storage formats (for example, to override the default LU factorization
routine supplied within PETSc for the `MATSEQAIJ` format). However, we
urge anyone who introduces such changes to use caution, since it would
be very easy to accidentally create a bug in the new routine that could
affect other routines as well.

See also {any}`sec_nlmatrixfree` for details on one set of
helpful utilities for using the matrix-free approach for nonlinear
solvers.

(sec_mattranspose)=

## Transposes of Matrices

PETSc provides several ways to work with the transpose of a matrix.

```
MatTranspose(Mat A,MatReuse MAT_INITIAL_MATRIX or MAT_INPLACE_MATRIX or MAT_REUSE_MATRIX,Mat *B)
```

forms the matrix transpose explicitly, either in-place or out-of-place. After it has been called
with `MAT_INITIAL_MATRIX` it may be called again with `MAT_REUSE_MATRIX`, and it will recompute the transpose if the A
matrix has changed. Internally it keeps track of whether the nonzero pattern of A has changed and
reuses the symbolic transpose when possible for efficiency.

```
MatTransposeSymbolic(Mat A,Mat *B)
```

only performs the symbolic transpose of the matrix. After it has been called, `MatTranspose()` may be called with
`MAT_REUSE_MATRIX` to compute the numerical transpose.

Occasionally one may already have a B matrix with the needed sparsity pattern to store the transpose and want to reuse that
space instead of creating a new matrix with `MatTranspose(A,MAT_INITIAL_MATRIX,&B)`. In that situation one cannot simply call
`MatTranspose(A,MAT_REUSE_MATRIX,&B)`; instead one calls `MatTransposeSetPrecursor(A,B)` followed by
`MatTranspose(A,MAT_REUSE_MATRIX,&B)`. The first routine provides B with the metadata it needs to compute the numerical
transpose efficiently.

The routine `MatCreateTranspose(A,&B)` provides a surrogate matrix B that behaves like the transpose of A without forming
the transpose explicitly. For example, `MatMult(B,x,y)` will compute the product of the transpose of A with x.

(sec_othermat)=

## Other Matrix Operations

In many iterative calculations (for instance, in a nonlinear equations
solver), it is important for efficiency purposes to reuse the nonzero
structure of a matrix, rather than determining it anew every time the
matrix is generated. To retain a given matrix but reinitialize its
contents, one can employ

```
MatZeroEntries(Mat A);
```

This routine will zero the matrix entries in the data structure but keep
all the data that indicates where the nonzeros are located. In this way
a new matrix assembly will be much less expensive, since no memory
allocations or copies will be needed. Of course, one can also explicitly
set selected matrix elements to zero by calling `MatSetValues()`.

By default, if new entries are made in locations where no nonzeros
previously existed, space will be allocated for the new entries. To
prevent the allocation of additional memory and simply discard those new
entries, one can use the option

```
MatSetOption(Mat A,MAT_NEW_NONZERO_LOCATIONS,PETSC_FALSE);
```

Once the matrix has been assembled, one can factor it numerically
without repeating the ordering or the symbolic factorization. This
option can save some computational time, although it does require that
the factorization is not done in-place.

In the numerical solution of elliptic partial differential equations, it
can be cumbersome to deal with Dirichlet boundary conditions. In
particular, one would like to assemble the matrix without regard to
boundary conditions and then at the end apply the Dirichlet boundary
conditions. In numerical analysis classes this process is usually
presented as moving the known boundary conditions to the right-hand side
and then solving a smaller linear system for the interior unknowns.
Unfortunately, implementing this requires extracting a large submatrix
from the original matrix and creating its corresponding data structures.
This process can be expensive in terms of both time and memory.

One simple way to deal with this difficulty is to replace the rows of
the matrix associated with known boundary conditions by rows of the
identity matrix (or some scaling of it). This can be done with
the command

```
MatZeroRows(Mat A,PetscInt numRows,PetscInt rows[],PetscScalar diag_value,Vec x,Vec b),
```

or equivalently,

```
MatZeroRowsIS(Mat A,IS rows,PetscScalar diag_value,Vec x,Vec b);
```

For sparse matrices this removes the data structures for those rows of
the matrix. If `diag_value` is zero, it even removes the diagonal
entry. Otherwise, it places the given value in the diagonal entry of
each eliminated row.

One nice feature of this approach is that when solving a nonlinear
problem such that at each iteration the Dirichlet boundary conditions
are in the same positions and the matrix retains the same nonzero
structure, the user can call `MatZeroRows()` in the first iteration.
Then, before generating the matrix in the second iteration the user
should call

```
MatSetOption(Mat A,MAT_NEW_NONZERO_LOCATIONS,PETSC_FALSE);
```

From that point, no new values will be inserted into those (boundary)
rows of the matrix.

The functions `MatZeroRowsLocal()` and `MatZeroRowsLocalIS()` can
also be used if, for each process, one provides the Dirichlet locations
in the local numbering of the matrix. A drawback of `MatZeroRows()` is
that it destroys the symmetry of a matrix. Thus one can use

```
MatZeroRowsColumns(Mat A,PetscInt numRows,PetscInt rows[],PetscScalar diag_value,Vec x,Vec b),
```

or equivalently,

```
MatZeroRowsColumnsIS(Mat A,IS rows,PetscScalar diag_value,Vec x,Vec b);
```

Note that, for a given assembled matrix, each of these routines can be
called only once to update the x and b vectors. They cannot be used when
solving multiple right-hand-side problems with the same matrix, since
the matrix entries needed for updating the b vector are removed on
first use.

Once the zeroed rows are removed, the resulting matrix may have many
rows with only a diagonal entry, which affects the parallel load
balance. The `PCREDISTRIBUTE` preconditioner removes all the zeroed
rows (and the associated columns, adjusting the right-hand side based on
the removed columns) and then rebalances the rows of the resulting
smaller matrix across the processes. Thus one can use `MatZeroRows()`
to set the Dirichlet points and then solve with the preconditioner
`PCREDISTRIBUTE`. Note that if the original matrix was symmetric, the
smaller solved matrix will also be symmetric.

Another matrix routine of interest is

```
MatConvert(Mat mat,MatType newtype,Mat *M)
```

which converts the matrix `mat` to a new matrix, `M`, with either
the same or a different format. Set `newtype` to `MATSAME` to copy
the matrix, keeping the same matrix format. See
`$PETSC_DIR/include/petscmat.h` (<a href="PETSC_DOC_OUT_ROOT_PLACEHOLDER/include/petscmat.h.html">source</a>)
for other available matrix types; standard ones are `MATSEQDENSE`,
`MATSEQAIJ`, `MATMPIAIJ`, `MATSEQBAIJ` and `MATMPIBAIJ`.

In certain applications it may be necessary for application codes to
directly access elements of a matrix. This may be done by using the
command (for local rows only)

```
MatGetRow(Mat A,PetscInt row,PetscInt *ncols,const PetscInt *cols[],const PetscScalar *vals[]);
```

The argument `ncols` returns the number of nonzeros in that row, while
`cols` and `vals` return the column indices (with indices starting
at zero) and values in the row. If only the column indices are needed
(and not the corresponding matrix elements), one can use `NULL` for
the `vals` argument. Similarly, one can use `NULL` for the `cols`
argument. The user can only examine the values extracted with
`MatGetRow()`; the values *cannot* be altered. To change the matrix
entries, one must use `MatSetValues()`.

Once the user has finished using a row, he or she *must* call

```
MatRestoreRow(Mat A,PetscInt row,PetscInt *ncols,const PetscInt *cols[],const PetscScalar *vals[]);
```

to free any space that was allocated during the call to `MatGetRow()`.

(sec_symbolic_numeric)=

## Symbolic and Numeric Stages in Sparse Matrix Operations

Many sparse matrix operations can be optimized by dividing the computation into two stages: a symbolic stage that
creates any required data structures and does all the computations that do not require the matrices' numerical values, followed by one or more uses of a
numerical stage that uses the symbolically computed information. Examples of such operations include `MatTranspose()`, `MatCreateSubMatrices()`,
`MatCholeskyFactorSymbolic()`, and `MatCholeskyFactorNumeric()`.
PETSc uses two different APIs to take advantage of these optimizations.

The first approach explicitly divides the computation in the API. This approach is used, for example, with `MatCholeskyFactorSymbolic()` and `MatCholeskyFactorNumeric()`.
The caller can take advantage of their knowledge of changes in the nonzero structure of the sparse matrices to call the appropriate routines as needed. In fact, they can
use `MatGetNonzeroState()` to determine if a new symbolic computation is needed. The drawback of this approach is that the caller of these routines has to
manage the creation of new matrices when the nonzero structure changes.

The second approach, exemplified by `MatTranspose()`, does not expose the two stages explicitly in the API; instead a flag, `MatReuse`, is passed through the
API to indicate whether a symbolic data structure is already available or needs to be computed. Thus `MatTranspose(A,MAT_INITIAL_MATRIX,&B)` is called first, and then
`MatTranspose(A,MAT_REUSE_MATRIX,&B)` can be called repeatedly with new numerical values in the A matrix. In theory, if the nonzero structure of A changes, the
symbolic computations for B could be redone automatically inside the same B matrix whenever the nonzero state of the A matrix changes. In practice, in PETSc,
`MAT_REUSE_MATRIX` for most PETSc routines works only if the nonzero structure does not change, and the code may crash otherwise. The advantage of this approach
(when the nonzero structure changes are handled correctly) is that the calling code does not need to keep track of the nonzero state of the matrices; everything
"just works". However, the caller must still know when it is the first call to the routine so that the flag `MAT_INITIAL_MATRIX` can be used. If the underlying implementation language supported detecting a yet-to-be-initialized variable at run time, the `MatReuse` flag would not be needed.

PETSc uses two approaches because the same programming problem was solved in two different ways during PETSc's early development.
A better model would combine both approaches: an explicit
separation of the stages and a unified operation that internally utilizes the two stages appropriately and also handles changes to the nonzero structure. Code could be simplified in many places with this approach; in most places the use of the unified API would replace the use of the separate stages.

See {any}`sec_matsub` and {any}`sec_matmatproduct`.

(sec_graph)=

## Graph Operations

PETSc has four families of graph operations that treat a sparse `Mat` as representing a graph.

```{eval-rst}
.. list-table::
   :widths: auto
   :align: center
   :header-rows: 1

   * - Operation
     - Type
     - Available methods
     - User guide
   * - Ordering to reduce fill
     - N/A
     - ``MatOrderingType``
     - :any:`sec_matfactor`
   * - Partitioning for parallelism
     - ``MatPartitioning``
     - ``MatPartitioningType``
     - :any:`sec_partitioning`
   * - Coloring for parallelism or Jacobians
     - ``MatColoring``
     - ``MatColoringType``
     - :any:`sec_fdmatrix`
   * - Coarsening for multigrid
     - ``MatCoarsen``
     - ``MatCoarsenType``
     - :any:`sec_amg`
```

(sec_partitioning)=

## Partitioning

For almost all unstructured grid computations, the distribution of
portions of the grid across the processes' workloads and memory can have
a very large impact on performance. In most PDE calculations the grid
partitioning and distribution across the processes can (and should) be
done in a “pre-processing” step before the numerical computations.
However, this does not mean it need be done in a separate, sequential
program; rather, it should be done before one sets up the parallel grid
data structures in the actual program. PETSc provides an interface to
ParMETIS (developed by George Karypis; see
[the PETSc installation instructions](https://petsc.org/release/install/)
for directions on installing PETSc to use ParMETIS) to allow the
partitioning to be done in parallel. PETSc does not currently provide
direct support for dynamic repartitioning, load balancing by migrating
matrix entries between processes, etc. For problems that require mesh
refinement, PETSc uses the “rebuild the data structures” approach, as
opposed to the “maintain dynamic data structures that support the
insertion/deletion of additional vector and matrix rows and columns”
approach.

Partitioning in PETSc is organized around the `MatPartitioning`
object. One first creates a parallel matrix that contains the
connectivity information about the grid (or other graph-type object)
that is to be partitioned. This is done with the command

```
MatCreateMPIAdj(MPI_Comm comm,int mlocal,PetscInt n,const PetscInt ia[],const PetscInt ja[],PetscInt *weights,Mat *Adj);
```

The argument `mlocal` indicates the number of rows of the graph being
provided by the given process, and `n` is the total number of columns,
equal to the sum of all the `mlocal` values. The arguments `ia` and
`ja` are the row pointers and column indices for the given rows; these
are in the usual format for parallel compressed sparse row storage,
using indices starting at 0, *not* 1.

:::{figure} /images/manual/usg.*
:alt: Numbering on Simple Unstructured Grid
:name: fig_usg

Numbering on Simple Unstructured Grid
:::

This, of course, assumes that one has already distributed the grid
(graph) information among the processes. The details of this initial
distribution are not important; it could simply be determined by
assigning to the first process the first $n_0$ nodes from a file,
the second process the next $n_1$ nodes, etc.

For example, we demonstrate the form of the `ia` and `ja` for a
triangular grid where we

1. partition by element (triangle)

- Process 0: `mlocal = 2`, `n = 4`, `ja = {2,3, 3}`,
  `ia = {0,2,3}`
- Process 1: `mlocal = 2`, `n = 4`, `ja = {0, 0,1}`,
  `ia = {0,1,3}`

Note that elements are not connected to themselves and we only indicate
edge connections (in some contexts single vertex connections between
elements may also be included). We use a space above to denote the
transition between rows in the matrix; and

2. partition by vertex.

- Process 0: `mlocal = 3`, `n = 6`,
  `ja = {3,4, 4,5, 3,4,5}`, `ia = {0, 2, 4, 7}`
- Process 1: `mlocal = 3`, `n = 6`,
  `ja = {0,2, 4, 0,1,2,3,5, 1,2,4}`,
  `ia = {0, 3, 8, 11}`.

Once the connectivity matrix has been created the following code will
generate the renumbering required for the new partition

```
MatPartitioningCreate(MPI_Comm comm,MatPartitioning *part);
MatPartitioningSetAdjacency(MatPartitioning part,Mat Adj);
MatPartitioningSetFromOptions(MatPartitioning part);
MatPartitioningApply(MatPartitioning part,IS *is);
MatPartitioningDestroy(MatPartitioning *part);
MatDestroy(Mat *Adj);
ISPartitioningToNumbering(IS is,IS *isg);
```

The resulting `isg` contains for each local node the new global number
of that node. The resulting `is` contains the new process number that
each local node has been assigned to.

Now that a new numbering of the nodes has been determined, one must
renumber all the nodes and migrate the grid information to the correct
process. The command

```
AOCreateBasicIS(isg,NULL,&ao);
```

generates (see {any}`sec_ao`) an `AO` object that can be
used in conjunction with the `is` and `isg` to move the relevant
grid information to the correct process and renumber the nodes, etc. In
this context, the new ordering is the “application” ordering, so
`AOPetscToApplication()` converts old global indices to new global
indices and `AOApplicationToPetsc()` converts new global indices back
to old global indices.

PETSc does not currently provide tools that completely manage the
migration and node renumbering, since it will be dependent on the
particular data structure you use to store the grid information and the
type of grid information that you need for your application. We do plan
to include more support for this in the future, but designing the
appropriate general user interface and providing a scalable
implementation that can be used for a wide variety of different grids
requires a great deal of time.

See {any}`sec_fdmatrix` and {any}`sec_matfactor` for discussions on performing graph coloring and computing graph reorderings to
reduce fill in sparse matrix factorizations.

```{eval-rst}
.. bibliography:: /petsc.bib
   :filter: docname in docnames
   :keyprefix: keyprefix-
   :labelprefix: ref-
```