| fd0828ac | 01-May-2014 | Matthew G. Knepley <knepley@gmail.com> |
Merge branch 'knepley/feature-plex-hdf5-parallel-load'
* knepley/feature-plex-hdf5-parallel-load:
  HDF5: Fixed Xdmf script
  Plex+HDF5: Do not use vectors for periodic visualization
  Plex+HDF5: Ignore negative sequence numbers
  DMPlex: Now Plex output is parallel
    - We now write visualization-specific topology in /viz/topology
    - We output a point reordering, coneSizes, cones, and orientations
    - No longer need to interpolate on load
  DMPlex: Use the presence of faceGeometry in the DM to signal we are using FVM
    - This will become a PetscFVM object soon
  DMPlex: Preserve the block size of the coordinate vector after distribution
  Plex+HDF5: Moved all HDF5 to a separate file, and mapped 2D periodic mesh to the cylinder
    - Now visualization-specific things are in /viz
  HDF5: Made petsc_gen_xdmf.py executable
  Viewer+Options: Added Fortran interface for PetscObjectViewFromOptions()
  IS: Stupid mistake. Damn you, compiler
  DMPlex: Now parallel HDF5 label output does not fail
    - However, it is also now clear that we will have to write the full interpolation connectivity in order for these to be meaningful
  DMPlex: Added DMPlexCreatePointNumbering()
    - Made DMPlexCreateNumbering_Private() more flexible
  IS: Added ISSortRemoveDups()
  DMPlex: Added DMPlexInvertCell_Internal()
    - Stupid type matching
  HDF5: Added petsc_gen_xdmf.py, which processes PETSc HDF5 output and produces an Xdmf file
  DMPlex: Force a serial load in DMLoad_HDF5(), after which we call DMPlexDistribute()
    - Eventually we would load into a naive partition
  IS+HDF5: Corrected code for new block size interface where it is never negative
  Viewer+HDF5: Added PetscViewerHDF5ReadSizes()
    - This allows me to check the size of a Vec or IS to be loaded
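The merged commits describe a checkpoint/restart cycle for unstructured meshes: DMView() writes the Plex to HDF5, DMLoad() (per the note above) performs a serial read, and DMPlexDistribute() then spreads the mesh across ranks. Below is a minimal sketch of that cycle, assuming present-day PETSc signatures and the PetscCall() error-checking macro (both have changed since this 2014 merge); the file name mesh.h5 and the options-driven mesh creation are illustrative, not taken from these commits.

    #include <petscdmplex.h>
    #include <petscviewerhdf5.h>

    int main(int argc, char **argv)
    {
      DM          dm, dmDist;
      PetscViewer viewer;

      PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

      /* Build a mesh from options (e.g. run with -dm_plex_box_faces 4,4)
         and write it to an HDF5 file */
      PetscCall(DMCreate(PETSC_COMM_WORLD, &dm));
      PetscCall(DMSetType(dm, DMPLEX));
      PetscCall(DMSetFromOptions(dm));
      PetscCall(PetscViewerHDF5Open(PETSC_COMM_WORLD, "mesh.h5", FILE_MODE_WRITE, &viewer));
      PetscCall(DMView(dm, viewer));
      PetscCall(PetscViewerDestroy(&viewer));
      PetscCall(DMDestroy(&dm));

      /* Load it back: a serial read, followed by an explicit redistribution */
      PetscCall(DMCreate(PETSC_COMM_WORLD, &dm));
      PetscCall(DMSetType(dm, DMPLEX));
      PetscCall(PetscViewerHDF5Open(PETSC_COMM_WORLD, "mesh.h5", FILE_MODE_READ, &viewer));
      PetscCall(DMLoad(dm, viewer));
      PetscCall(PetscViewerDestroy(&viewer));
      PetscCall(DMPlexDistribute(dm, 0, NULL, &dmDist));
      if (dmDist) { /* NULL when nothing was distributed, e.g. on one rank */
        PetscCall(DMDestroy(&dm));
        dm = dmDist;
      }
      PetscCall(DMDestroy(&dm));
      PetscCall(PetscFinalize());
      return 0;
    }

The resulting mesh.h5 is the kind of file petsc_gen_xdmf.py consumes to emit an Xdmf description for visualization tools.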
|