xref: /libCEED/doc/sphinx/source/releasenotes.md (revision 7b63f5c6881a9a0bb827bb6972a33367d9223442)
# Changes/Release Notes

On this page we provide a summary of the main API changes, new features, and examples for each release of libCEED.

(main)=

## Current `main` branch

### Interface changes

- Added {c:func}`CeedOperatorSetName` for more readable {c:func}`CeedOperatorView` output.

### Bugfix

- Fix storing of indices for `CeedElemRestriction` on the host with GPU backends.
- Fix `CeedElemRestriction` sizing for {c:func}`CeedOperatorAssemblePointBlockDiagonal`.

### Examples

- Added various performance enhancements for {ref}`example-petsc-navier-stokes`.

(v0-10-1)=

## v0.10.1 (Apr 11, 2022)

### Interface changes

- Added {c:func}`CeedQFunctionSetUserFlopsEstimate` and {c:func}`CeedOperatorGetFlopsEstimate` to facilitate estimating FLOPs in operator application.

### Bugfix

- Install JiT source files in install directory to fix GPU functionality for installed libCEED.

(v0-10)=

## v0.10 (Mar 21, 2022)

### Interface changes

- Update {c:func}`CeedQFunctionGetFields` and {c:func}`CeedOperatorGetFields` to include number of fields.
- Promote to the public API: QFunction and Operator field objects, `CeedQFunctionField` and `CeedOperatorField`, and associated getters, {c:func}`CeedQFunctionGetFields`; {c:func}`CeedQFunctionFieldGetName`; {c:func}`CeedQFunctionFieldGetSize`; {c:func}`CeedQFunctionFieldGetEvalMode`; {c:func}`CeedOperatorGetFields`; {c:func}`CeedOperatorFieldGetElemRestriction`; {c:func}`CeedOperatorFieldGetBasis`; and {c:func}`CeedOperatorFieldGetVector`.
- Clarify and document the conditions under which `CeedQFunction` and `CeedOperator` become immutable and no further fields or suboperators can be added.
- Add {c:func}`CeedOperatorLinearAssembleQFunctionBuildOrUpdate` to reduce object creation overhead in assembly of CeedOperator preconditioning ingredients.
- Promote {c:func}`CeedOperatorCheckReady` to the public API to facilitate interactive interfaces.
- Warning added when compiling the OCCA backend to alert users that this backend is experimental.
- `ceed-backend.h`, `ceed-hash.h`, and `ceed-khash.h` removed. Users should use `ceed/backend.h`, `ceed/hash.h`, and `ceed/khash.h`.
- Added {c:func}`CeedQFunctionGetKernelName`; refactored {c:func}`CeedQFunctionGetSourcePath` to exclude function kernel name.
- Clarify documentation for {c:func}`CeedVectorTakeArray`; this function will error if {c:func}`CeedVectorSetArray` with `copy_mode == CEED_USE_POINTER` was not previously called for the corresponding `CeedMemType`.
- Added {c:func}`CeedVectorGetArrayWrite` that allows access to uninitialized arrays; require initialized data for {c:func}`CeedVectorGetArray`.
- Added {c:func}`CeedQFunctionContextRegisterDouble` and {c:func}`CeedQFunctionContextRegisterInt32` with {c:func}`CeedQFunctionContextSetDouble` and {c:func}`CeedQFunctionContextSetInt32` to facilitate easy updating of {c:struct}`CeedQFunctionContext` data by user-defined field names.
- Added {c:func}`CeedQFunctionContextGetFieldDescriptions` to retrieve user-defined descriptions of fields that are registered with `CeedQFunctionContextRegister*`.
- Renamed `CeedElemTopology` entries for clearer namespacing between libCEED enums.
- Added type `CeedSize` equivalent to `ptrdiff_t` for array sizes in {c:func}`CeedVectorCreate`, {c:func}`CeedVectorGetLength`, `CeedElemRestrictionCreate*`, {c:func}`CeedElemRestrictionGetLVectorSize`, and {c:func}`CeedOperatorLinearAssembleSymbolic`. This is a breaking change.
- Added {c:func}`CeedOperatorSetQFunctionUpdated` to facilitate QFunction data re-use between operators sharing the same quadrature space, such as in a multigrid hierarchy.
- Added {c:func}`CeedOperatorGetActiveVectorLengths` to get the shape of a CeedOperator.

### New features

- `CeedScalar` can now be set as `float` or `double` at compile time.
- Added JiT utilities in `ceed/jit-tools.h` to reduce duplicated code in GPU backends.
- Added support for JiT of QFunctions with `#include "relative/path/local-file.h"` statements for additional local files. Note that files included with `""` are searched relative to the current file first, then by compiler paths (as with `<>` includes). To use this feature, one should adhere to relative paths only, not compiler flags like `-I`, which the JiT will not be aware of.
- Remove need to guard library headers in QFunction source for code generation backends.
- `CeedDebugEnv()` macro created to provide debugging outputs when a Ceed context is not present.
- Added {c:func}`CeedStringAllocCopy` to reduce repeated code for copying strings internally.
- Added {c:func}`CeedPathConcatenate` to facilitate loading kernel source files with a path relative to the current file.
- Added support for non-tensor H(div) elements, including CPU backend implementations and the {c:func}`CeedBasisCreateHdiv` convenience constructor.
- Added {c:func}`CeedQFunctionSetContextWritable` and read-only access to `CeedQFunctionContext` data as an optional feature to improve GPU performance. By default, calling the `CeedQFunctionUser` during {c:func}`CeedQFunctionApply` is assumed to write into the `CeedQFunctionContext` data, consistent with the previous behavior. Note that if a user asserts that their `CeedQFunctionUser` does not write into the `CeedQFunctionContext` data, they are responsible for the validity of this assertion.
- Added support for element matrix assembly in GPU backends.

### Maintainability

- Refactored preconditioner support internally to facilitate future development and improve GPU completeness/test coverage.
- `Include-what-you-use` makefile target added as `make iwyu`.
- Create backend constant `CEED_FIELD_MAX` to reduce magic numbers in the codebase.
- Put GPU JiTed kernel source code into separate files.
- Dropped legacy version support in PETSc-based examples to better utilize PETSc DMPlex and Mat updates to support libCEED; the current minimum PETSc version for the examples is v3.17.

(v0-9)=

## v0.9 (Jul 6, 2021)

### Interface changes

- Minor modification in error handling macro to silence pedantic warnings when compiling with Clang, with no functional impact.

### New features

- Add {c:func}`CeedVectorAXPY` and {c:func}`CeedVectorPointwiseMult` as a convenience for stand-alone testing and internal use.
- Add `CEED_QFUNCTION_HELPER` macro to properly annotate QFunction helper functions for code generation backends.
- Add `CeedPragmaOptimizeOff` macro for code that is sensitive to floating point errors from fast math optimizations.
- Rust support: split `libceed-sys` crate out of `libceed` and [publish both on crates.io](https://crates.io/crates/libceed).

### Performance improvements

### Examples

- Solid mechanics mini-app updated to explore the performance impacts of various formulations in the initial and current configurations.
- Fluid mechanics example adds GPU support and improves modularity.

### Deprecated backends

- The `/cpu/self/tmpl` and `/cpu/self/tmpl/sub` backends have been removed. These backends were initially added to test the backend inheritance mechanism, but this mechanism is now widely used and tested in multiple backends.

(v0-8)=

## v0.8 (Mar 31, 2021)

### Interface changes

- Error handling improved to include enumerated error codes for C interface return values.
- Installed headers that will follow semantic versioning were moved to the {code}`include/ceed` directory. These headers have been renamed from {code}`ceed-*.h` to {code}`ceed/*.h`. Placeholder headers with the old naming schema are currently provided, but these headers will be removed in the libCEED v0.9 release.

### New features

- Julia and Rust interfaces added, providing a nearly 1-1 correspondence with the C interface, plus some convenience features.
- Static libraries can be built with `make STATIC=1` and the pkg-config file is installed accordingly.
- Add {c:func}`CeedOperatorLinearAssembleSymbolic` and {c:func}`CeedOperatorLinearAssemble` to support full assembly of libCEED operators.

### Performance improvements

- New HIP MAGMA backends for hipMAGMA library users: `/gpu/hip/magma` and `/gpu/hip/magma/det`.
- New HIP backends for improved tensor basis performance: `/gpu/hip/shared` and `/gpu/hip/gen`.

### Examples

- {ref}`example-petsc-elasticity` example updated with traction boundary conditions and improved Dirichlet boundary conditions.
- {ref}`example-petsc-elasticity` example updated with Neo-Hookean hyperelasticity in current configuration as well as improved Neo-Hookean hyperelasticity exploring storage vs computation tradeoffs.
- {ref}`example-petsc-navier-stokes` example updated with isentropic traveling vortex test case, an analytical solution to the Euler equations that is useful for testing boundary conditions, discretization stability, and order of accuracy.
- {ref}`example-petsc-navier-stokes` example updated with support for performing convergence studies and plotting order of convergence by polynomial degree.

(v0-7)=

## v0.7 (Sep 29, 2020)

### Interface changes

- Replace limited {code}`CeedInterlaceMode` with more flexible component stride {code}`compstride` in {code}`CeedElemRestriction` constructors.
  As a result, the {code}`indices` parameter has been replaced with {code}`offsets` and the {code}`nnodes` parameter has been replaced with {code}`lsize`.
  These changes improve support for mixed finite element methods.
- Replace various uses of {code}`Ceed*Get*Status` with {code}`Ceed*Is*` in the backend API to match common nomenclature.
- Replace {code}`CeedOperatorAssembleLinearDiagonal` with {c:func}`CeedOperatorLinearAssembleDiagonal` for clarity.
- Linear operators can be assembled as point-block diagonal matrices with {c:func}`CeedOperatorLinearAssemblePointBlockDiagonal`, provided in row-major form in a {code}`ncomp` by {code}`ncomp` block per node.
- Diagonal assembly interface changed to accept a {ref}`CeedVector` instead of a pointer to a {ref}`CeedVector` to reduce memory movement when interfacing with calling code.
- Added {c:func}`CeedOperatorLinearAssembleAddDiagonal` and {c:func}`CeedOperatorLinearAssembleAddPointBlockDiagonal` for improved future integration with codes such as MFEM that compose the action of {ref}`CeedOperator`s external to libCEED.
- Added {c:func}`CeedVectorTakeArray` to sync and remove libCEED read/write access to an allocated array and pass ownership of the array to the caller.
  This function is recommended over {c:func}`CeedVectorSyncArray` when the {code}`CeedVector` has an array owned by the caller that was set by {c:func}`CeedVectorSetArray`.
- Added {code}`CeedQFunctionContext` object to manage user QFunction context data and reduce copies between device and host memory.
- Added {c:func}`CeedOperatorMultigridLevelCreate`, {c:func}`CeedOperatorMultigridLevelCreateTensorH1`, and {c:func}`CeedOperatorMultigridLevelCreateH1` to facilitate creation of multigrid prolongation, restriction, and coarse grid operators using a common quadrature space.

### New features

- New HIP backend: `/gpu/hip/ref`.
- CeedQFunction support for user `CUfunction`s in some backends.

### Performance improvements

- OCCA backend rebuilt to facilitate future performance enhancements.
- PETSc BPs suite improved to reduce noise due to multiple calls to {code}`mpiexec`.

### Examples

- {ref}`example-petsc-elasticity` example updated with strain energy computation and more flexible boundary conditions.

### Deprecated backends

- The `/gpu/cuda/reg` backend has been removed, with its core features moved into `/gpu/cuda/ref` and `/gpu/cuda/shared`.

(v0-6)=

## v0.6 (Mar 29, 2020)

libCEED v0.6 contains numerous new features and examples, as well as expanded
documentation in [this new website](https://libceed.org).

### New features

- New Python interface using [CFFI](https://cffi.readthedocs.io/) provides a nearly
  1-1 correspondence with the C interface, plus some convenience features.  For instance,
  data stored in the {cpp:type}`CeedVector` structure are available without copy as
  {py:class}`numpy.ndarray`.  Short tutorials are provided in
  [Binder](https://mybinder.org/v2/gh/CEED/libCEED/main?urlpath=lab/tree/examples/tutorials/).
- Linear QFunctions can be assembled as block-diagonal matrices (per quadrature point,
  {c:func}`CeedOperatorAssembleLinearQFunction`) or to evaluate the diagonal
  ({c:func}`CeedOperatorAssembleLinearDiagonal`).  These operations are useful for
  preconditioning ingredients and are used in libCEED's multigrid examples.
- The inverse of separable operators can be obtained using
  {c:func}`CeedOperatorCreateFDMElementInverse` and applied with
  {c:func}`CeedOperatorApply`.  This is a useful preconditioning ingredient,
  especially for Laplacians and related operators.
- New functions: {c:func}`CeedVectorNorm`, {c:func}`CeedOperatorApplyAdd`,
  {c:func}`CeedQFunctionView`, {c:func}`CeedOperatorView`.
- Make public accessors for various attributes to facilitate writing composable code.
- New backend: `/cpu/self/memcheck/serial`.
- QFunctions using variable-length array (VLA) pointer constructs can be used with CUDA
  backends.  (Single source is coming soon for OCCA backends.)
- Fix some missing edge cases in CUDA backend.

### Performance Improvements

- MAGMA backend performance optimization and non-tensor bases.
- No-copy optimization in {c:func}`CeedOperatorApply`.

### Interface changes

- Replace {code}`CeedElemRestrictionCreateIdentity` and
  {code}`CeedElemRestrictionCreateBlocked` with more flexible
  {c:func}`CeedElemRestrictionCreateStrided` and
  {c:func}`CeedElemRestrictionCreateBlockedStrided`.
- Add arguments to {c:func}`CeedQFunctionCreateIdentity`.
- Replace ambiguous uses of {cpp:enum}`CeedTransposeMode` for L-vector identification
  with {cpp:enum}`CeedInterlaceMode`.  This is now an attribute of the
  {cpp:type}`CeedElemRestriction` (see {c:func}`CeedElemRestrictionCreate`) and no
  longer passed as `lmode` arguments to {c:func}`CeedOperatorSetField` and
  {c:func}`CeedElemRestrictionApply`.

### Examples

libCEED-0.6 contains greatly expanded examples with {ref}`new documentation <Examples>`.
Notable additions include:

- Standalone {ref}`ex2-surface` ({file}`examples/ceed/ex2-surface`): compute the area of
  a domain in 1, 2, and 3 dimensions by applying a Laplacian.

- PETSc {ref}`example-petsc-area` ({file}`examples/petsc/area.c`): computes surface area
  of domains (like the cube and sphere) by direct integration on a surface mesh;
  demonstrates geometric dimension different from topological dimension.

- PETSc {ref}`example-petsc-bps`:

  - {file}`examples/petsc/bpsraw.c` (formerly `bps.c`): transparent CUDA support.
  - {file}`examples/petsc/bps.c` (formerly `bpsdmplex.c`): performance improvements
    and transparent CUDA support.
  - {ref}`example-petsc-bps-sphere` ({file}`examples/petsc/bpssphere.c`):
    generalizations of all CEED BPs to the surface of the sphere; demonstrates geometric
    dimension different from topological dimension.

- {ref}`example-petsc-multigrid` ({file}`examples/petsc/multigrid.c`): new p-multigrid
  solver with algebraic multigrid coarse solve.

- {ref}`example-petsc-navier-stokes` ({file}`examples/fluids/navierstokes.c`; formerly
  `examples/navier-stokes`): unstructured grid support (using PETSc's `DMPlex`),
  implicit time integration, SU/SUPG stabilization, free-slip boundary conditions, and
  quasi-2D computational domain support.

- {ref}`example-petsc-elasticity` ({file}`examples/solids/elasticity.c`): new solver for
  linear elasticity, small-strain hyperelasticity, and globalized finite-strain
  hyperelasticity using p-multigrid with algebraic multigrid coarse solve.

(v0-5)=

## v0.5 (Sep 18, 2019)

For this release, several improvements were made. Two new CUDA backends were added to
the family of backends, of which the new `cuda-gen` backend achieves state-of-the-art
performance using single-source {ref}`CeedQFunction`. From this release, users
can define Q-Functions in a single source code independently of the targeted backend
with the aid of a new macro `CEED_QFUNCTION` to support JIT (Just-In-Time) and CPU
compilation of the user-provided {ref}`CeedQFunction` code. To allow a unified
declaration, the {ref}`CeedQFunction` API has undergone a slight change:
the `QFunctionField` parameter `ncomp` has been changed to `size`. This change
requires setting the previous value of `ncomp` to `ncomp*dim` when adding a
`QFunctionField` with eval mode `CEED_EVAL_GRAD`.

Additionally, new CPU backends
were included in this release, such as the `/cpu/self/opt/*` backends (which are
written in pure C and use partial **E-vectors** to improve performance) and the
`/cpu/self/ref/memcheck` backend (which relies upon the
[Valgrind](http://valgrind.org/) Memcheck tool to help verify that user
{ref}`CeedQFunction` code has no undefined values).
This release also included various performance improvements, bug fixes, new examples,
and improved tests. Among these improvements, vectorized instructions for
{ref}`CeedQFunction` code compiled for CPU were enhanced by using `CeedPragmaSIMD`
instead of `CeedPragmaOMP`, an implementation of a {ref}`CeedQFunction` gallery and
identity Q-Functions were introduced, and the PETSc benchmark problems were expanded
to include handling of unstructured meshes. For this expansion, the prior version of
the PETSc BPs, which only included data associated with structured geometries, was
renamed `bpsraw`, and the new version of the BPs, which can handle data associated
with any unstructured geometry, was called `bps`. Additionally, other benchmark
problems, namely BP2 and BP4 (the vector-valued versions of BP1 and BP3, respectively),
and BP5 and BP6 (the collocated versions---for which the quadrature points are the same
as the Gauss-Lobatto nodes---of BP3 and BP4, respectively) were added to the PETSc
examples. Furthermore, another standalone libCEED example, called `ex2`, which
computes the surface area of a given mesh, was added to this release.

Backends available in this release:

| CEED resource (`-ceed`)  | Backend                                             |
|--------------------------|-----------------------------------------------------|
| `/cpu/self/ref/serial`   | Serial reference implementation                     |
| `/cpu/self/ref/blocked`  | Blocked reference implementation                    |
| `/cpu/self/ref/memcheck` | Memcheck backend, undefined value checks            |
| `/cpu/self/opt/serial`   | Serial optimized C implementation                   |
| `/cpu/self/opt/blocked`  | Blocked optimized C implementation                  |
| `/cpu/self/avx/serial`   | Serial AVX implementation                           |
| `/cpu/self/avx/blocked`  | Blocked AVX implementation                          |
| `/cpu/self/xsmm/serial`  | Serial LIBXSMM implementation                       |
| `/cpu/self/xsmm/blocked` | Blocked LIBXSMM implementation                      |
| `/cpu/occa`              | Serial OCCA kernels                                 |
| `/gpu/occa`              | CUDA OCCA kernels                                   |
| `/omp/occa`              | OpenMP OCCA kernels                                 |
| `/ocl/occa`              | OpenCL OCCA kernels                                 |
| `/gpu/cuda/ref`          | Reference pure CUDA kernels                         |
| `/gpu/cuda/reg`          | Pure CUDA kernels using one thread per element      |
| `/gpu/cuda/shared`       | Optimized pure CUDA kernels using shared memory     |
| `/gpu/cuda/gen`          | Optimized pure CUDA kernels using code generation   |
| `/gpu/magma`             | CUDA MAGMA kernels                                  |

Examples available in this release:

:::{list-table}
:header-rows: 1
:widths: auto
* - User code
  - Example
* - `ceed`
  - * ex1 (volume)
    * ex2 (surface)
* - `mfem`
  - * BP1 (scalar mass operator)
    * BP3 (scalar Laplace operator)
* - `petsc`
  - * BP1 (scalar mass operator)
    * BP2 (vector mass operator)
    * BP3 (scalar Laplace operator)
    * BP4 (vector Laplace operator)
    * BP5 (collocated scalar Laplace operator)
    * BP6 (collocated vector Laplace operator)
    * Navier-Stokes
* - `nek5000`
  - * BP1 (scalar mass operator)
    * BP3 (scalar Laplace operator)
:::

(v0-4)=

## v0.4 (Apr 1, 2019)

libCEED v0.4 was again made publicly available in the second full CEED software
distribution, release CEED 2.0. This release contained notable features, such as
four new CPU backends, two new GPU backends, CPU backend optimizations, initial
support for operator composition, performance benchmarking, and a Navier-Stokes demo.
The new CPU backends in this release came in two families. The `/cpu/self/*/serial`
backends process one element at a time and are intended for meshes with a smaller number
of high order elements. The `/cpu/self/*/blocked` backends process blocked batches of
eight interlaced elements and are intended for meshes with higher numbers of elements.
The `/cpu/self/avx/*` backends rely upon AVX instructions to provide vectorized CPU
performance. The `/cpu/self/xsmm/*` backends rely upon the
[LIBXSMM](http://github.com/hfp/libxsmm) package to provide vectorized CPU
performance. The `/gpu/cuda/*` backends provide GPU performance strictly using CUDA.
The `/gpu/cuda/ref` backend is a reference CUDA backend, providing reasonable
performance for most problem configurations. The `/gpu/cuda/reg` backend uses a simple
parallelization approach, where each thread treats a finite element. Using just-in-time
compilation, provided by nvrtc (NVIDIA Runtime Compiler), and runtime parameters, this
backend unrolls loops and maps memory addresses to registers. The `/gpu/cuda/reg` backend
achieves good peak performance for 1D, 2D, and low-order 3D problems, but performance
deteriorates very quickly when threads run out of registers.

A new explicit time-stepping Navier-Stokes solver was added to the family of libCEED
examples in the `examples/petsc` directory (see {ref}`example-petsc-navier-stokes`).
This example solves the time-dependent Navier-Stokes equations of compressible gas
dynamics in a static Eulerian three-dimensional frame, using structured high-order
finite/spectral element spatial discretizations and explicit high-order time-stepping
(available in PETSc). Moreover, the Navier-Stokes example was developed using PETSc,
so that the pointwise physics (defined at quadrature points) is separated from the
parallelization and meshing concerns.

Backends available in this release:

| CEED resource (`-ceed`)  | Backend                                             |
|--------------------------|-----------------------------------------------------|
| `/cpu/self/ref/serial`   | Serial reference implementation                     |
| `/cpu/self/ref/blocked`  | Blocked reference implementation                    |
| `/cpu/self/tmpl`         | Backend template, defaults to `/cpu/self/blocked`   |
| `/cpu/self/avx/serial`   | Serial AVX implementation                           |
| `/cpu/self/avx/blocked`  | Blocked AVX implementation                          |
| `/cpu/self/xsmm/serial`  | Serial LIBXSMM implementation                       |
| `/cpu/self/xsmm/blocked` | Blocked LIBXSMM implementation                      |
| `/cpu/occa`              | Serial OCCA kernels                                 |
| `/gpu/occa`              | CUDA OCCA kernels                                   |
| `/omp/occa`              | OpenMP OCCA kernels                                 |
| `/ocl/occa`              | OpenCL OCCA kernels                                 |
| `/gpu/cuda/ref`          | Reference pure CUDA kernels                         |
| `/gpu/cuda/reg`          | Pure CUDA kernels using one thread per element      |
| `/gpu/magma`             | CUDA MAGMA kernels                                  |

Examples available in this release:

:::{list-table}
:header-rows: 1
:widths: auto
* - User code
  - Example
* - `ceed`
  - * ex1 (volume)
* - `mfem`
  - * BP1 (scalar mass operator)
    * BP3 (scalar Laplace operator)
* - `petsc`
  - * BP1 (scalar mass operator)
    * BP3 (scalar Laplace operator)
    * Navier-Stokes
* - `nek5000`
  - * BP1 (scalar mass operator)
    * BP3 (scalar Laplace operator)
:::

(v0-3)=

## v0.3 (Sep 30, 2018)

Notable features in this release include an active/passive field interface, support for
non-tensor bases, backend optimization, and an improved Fortran interface. This release
also focused on providing improved continuous integration, and many new tests with code
coverage reports of about 90%. This release also provided a significant change to the
public interface: a {ref}`CeedQFunction` can take any number of named input and output
arguments while {ref}`CeedOperator` connects them to the actual data, which may be
supplied explicitly to `CeedOperatorApply()` (active) or separately via
`CeedOperatorSetField()` (passive). This interface change enables reusable libraries
of CeedQFunctions and composition of block solvers constructed using
{ref}`CeedOperator`. A concept of blocked restriction was added to this release and
used in an optimized CPU backend. Although this is typically not visible to the user,
it enables effective use of arbitrary-length SIMD while maintaining cache locality.
This CPU backend also implements an algebraic factorization of tensor product gradients
to perform fewer operations than standard application of interpolation and
differentiation from nodes to quadrature points. This algebraic formulation
automatically supports non-polynomial and non-interpolatory bases, and is thus more
general than the more common derivation in terms of Lagrange polynomials on the
quadrature points.

Backends available in this release:

| CEED resource (`-ceed`) | Backend                                             |
|-------------------------|-----------------------------------------------------|
| `/cpu/self/blocked`     | Blocked reference implementation                    |
| `/cpu/self/ref`         | Serial reference implementation                     |
| `/cpu/self/tmpl`        | Backend template, defaults to `/cpu/self/blocked`   |
| `/cpu/occa`             | Serial OCCA kernels                                 |
| `/gpu/occa`             | CUDA OCCA kernels                                   |
| `/omp/occa`             | OpenMP OCCA kernels                                 |
| `/ocl/occa`             | OpenCL OCCA kernels                                 |
| `/gpu/magma`            | CUDA MAGMA kernels                                  |

Examples available in this release:

:::{list-table}
:header-rows: 1
:widths: auto
* - User code
  - Example
* - `ceed`
  - * ex1 (volume)
* - `mfem`
  - * BP1 (scalar mass operator)
    * BP3 (scalar Laplace operator)
* - `petsc`
  - * BP1 (scalar mass operator)
    * BP3 (scalar Laplace operator)
* - `nek5000`
  - * BP1 (scalar mass operator)
    * BP3 (scalar Laplace operator)
:::

(v0-21)=

## v0.21 (Sep 30, 2018)

A MAGMA backend (which relies upon the
[MAGMA](https://bitbucket.org/icl/magma) package) was integrated in libCEED for this
release. This initial integration set up the framework of using MAGMA and provided the
libCEED functionality through MAGMA kernels as one of libCEED's computational backends.
As with any other backend, the MAGMA backend provides extended basic data structures for
{ref}`CeedVector`, {ref}`CeedElemRestriction`, and {ref}`CeedOperator`, and implements
the fundamental CEED building blocks to work with the new data structures.
In general, the MAGMA-specific data structures keep the libCEED pointers to CPU data
but also add corresponding device (e.g., GPU) pointers to the data. Coherency is handled
internally, and thus seamlessly to the user, through the functions/methods that are
provided to support them.

Backends available in this release:

| CEED resource (`-ceed`) | Backend                         |
|-------------------------|---------------------------------|
| `/cpu/self`             | Serial reference implementation |
| `/cpu/occa`             | Serial OCCA kernels             |
| `/gpu/occa`             | CUDA OCCA kernels               |
| `/omp/occa`             | OpenMP OCCA kernels             |
| `/ocl/occa`             | OpenCL OCCA kernels             |
| `/gpu/magma`            | CUDA MAGMA kernels              |

Examples available in this release:

:::{list-table}
:header-rows: 1
:widths: auto
* - User code
  - Example
* - `ceed`
  - * ex1 (volume)
* - `mfem`
  - * BP1 (scalar mass operator)
    * BP3 (scalar Laplace operator)
* - `petsc`
  - * BP1 (scalar mass operator)
* - `nek5000`
  - * BP1 (scalar mass operator)
:::

(v0-2)=

## v0.2 (Mar 30, 2018)

libCEED was made publicly available in the first full CEED software distribution,
release CEED 1.0. The distribution was made available using the Spack package manager
to provide a common, easy-to-use build environment, where the user can build the CEED
distribution with all dependencies. This release included a new Fortran interface for
the library. This release also contained major improvements in the OCCA backend
(including a new `/ocl/occa` backend) and new examples. The standalone libCEED example
was modified to compute the volume of a given mesh (in 1D, 2D, or 3D) and placed in an
`examples/ceed` subfolder. A new `mfem` example to perform BP3 (with the application
of the Laplace operator) was also added to this release.

Backends available in this release:

| CEED resource (`-ceed`) | Backend                         |
|-------------------------|---------------------------------|
| `/cpu/self`             | Serial reference implementation |
| `/cpu/occa`             | Serial OCCA kernels             |
| `/gpu/occa`             | CUDA OCCA kernels               |
| `/omp/occa`             | OpenMP OCCA kernels             |
| `/ocl/occa`             | OpenCL OCCA kernels             |

Examples available in this release:

:::{list-table}
:header-rows: 1
:widths: auto
* - User code
  - Example
* - `ceed`
  - * ex1 (volume)
* - `mfem`
  - * BP1 (scalar mass operator)
    * BP3 (scalar Laplace operator)
* - `petsc`
  - * BP1 (scalar mass operator)
* - `nek5000`
  - * BP1 (scalar mass operator)
:::

(v0-1)=

## v0.1 (Jan 3, 2018)

Initial low-level API of the CEED project. The low-level API provides a set of finite
element kernels and components for writing new low-level kernels. Examples include:
vector and sparse linear algebra, element matrix assembly over a batch of elements,
partial assembly and action for efficient high-order operators like mass, diffusion,
advection, etc. The main goal of the low-level API is to establish the basis for the
high-level API. Also, identifying such low-level kernels and providing a reference
implementation for them serves as the basis for specialized backend implementations.
This release contained several backends: `/cpu/self`, and backends which rely upon the
[OCCA](http://github.com/libocca/occa) package, such as `/cpu/occa`,
`/gpu/occa`, and `/omp/occa`.
It also included several examples, in the `examples` folder:
a standalone code that shows the usage of libCEED (with no external
dependencies) to apply the Laplace operator, `ex1`; an `mfem` example to perform BP1
(with the application of the mass operator); and a `petsc` example to perform BP1
(with the application of the mass operator).

Backends available in this release:

| CEED resource (`-ceed`) | Backend                         |
|-------------------------|---------------------------------|
| `/cpu/self`             | Serial reference implementation |
| `/cpu/occa`             | Serial OCCA kernels             |
| `/gpu/occa`             | CUDA OCCA kernels               |
| `/omp/occa`             | OpenMP OCCA kernels             |

Examples available in this release:

| User code             | Example                           |
|-----------------------|-----------------------------------|
| `ceed`                | ex1 (scalar Laplace operator)     |
| `mfem`                | BP1 (scalar mass operator)        |
| `petsc`               | BP1 (scalar mass operator)        |