Implement svFSI using C++ #19

Closed
ktbolt opened this issue Jan 11, 2021 · 119 comments

@ktbolt
Contributor

ktbolt commented Jan 11, 2021

While Fortran is still used for scientific and engineering problems, we feel that it would be better to implement the FSI solver in C++ to provide a more extensible framework (e.g., plugins for custom BCs) and better capabilities for text file processing (e.g., XML).

Although Fortran is easier to learn than C++ and has nice array support, most students don't have any experience programming in Fortran.

We will perform the conversion in a series of steps. The first step is to implement a C++ framework providing a Fortran-callable interface used to store and retrieve data. Each Fortran module or other independent section of code will then be incrementally converted to C++.
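
As a rough, hedged sketch (the names here are hypothetical, not the actual svFSI interface), such a Fortran-callable interface would export extern "C" functions that hand Fortran an opaque handle it can hold via ISO_C_BINDING:

// A minimal sketch of a Fortran-callable C++ interface; all names are
// illustrative, not the actual svFSI code.
#include <string>

class Simulation {
  public:
    void read_parameters(const std::string& file_name) { /* parse the XML file */ }
};

extern "C" {

// Fortran holds the returned pointer as a type(c_ptr) from ISO_C_BINDING.
void* simulation_create() {
  return new Simulation();
}

// Fortran passes a null-terminated file name.
void simulation_read_parameters(void* handle, const char* file_name) {
  static_cast<Simulation*>(handle)->read_parameters(file_name);
}

}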

ktbolt added the enhancement (New feature or request) label Jan 11, 2021
ktbolt self-assigned this Jan 11, 2021
@ktbolt
Contributor Author

ktbolt commented Jan 22, 2021

I've written some Fortran code to prototype ways to interface the svFSI Fortran code to a C++ framework.

I've implemented a Fortran simulation_interface module as an interface to the C++ Simulation class used to read an FSI XML file and VTK mesh files.

The code below shows how to use the simulation_interface module to create a Simulation object, use it to read in an svFSI XML input parameters file, load the mesh (VTK files), and get an array of nodal coordinates for that mesh.

! Test interfacing to the Simulation C++ class.
!
program test_sim_interface

  use, intrinsic :: iso_c_binding, only : c_double
  use simulation_interface
  implicit none
  type(simulation) :: sim_interface
  real(c_double), pointer :: node_coords(:,:)
  character (len=32) :: in_file_name
  integer :: i, num_coords

  ! Get the solver input xml file name.
  call getarg(1, in_file_name)

  ! Create a C++ Simulation object.
  sim_interface = simulation()

  ! Read in the solver input xml file.
  call sim_interface % read_parameters(in_file_name)

  ! Load the mesh.
  call sim_interface % load_mesh()

  ! Get node coordinates.
  call sim_interface % get_node_coords(node_coords)

  num_coords = ubound(node_coords, 1)
  do i = 1, num_coords
      print *, "[main] node_coords(1:): ", node_coords(i,1), node_coords(i,2), node_coords(i,3)
  end do

end program test_sim_interface

The node_coords array can then be used in the svFSI code as it normally would be if it were defined in the current svFSI code using

!   Position coordinates
REAL(KIND=RKIND), ALLOCATABLE :: x(:,:)

@ktbolt
Contributor Author

ktbolt commented Mar 9, 2021

I've integrated the C++ prototype code into svFSI, creating a C++ Simulation object and reading in a solver XML file

      PROGRAM MAIN

      USE COMMOD
      USE ALLFUN
      use compare_cpp_fortran

      IMPLICIT NONE
.
.
.
      ! Read in the solver input XML file.
      !
      ! Assume that the XML file has the same name as the .txt 
      ! input solver file.
      !
      call getarg(1, in_file_name)
      str_len = len(trim(in_file_name))
      write (*,*) '[main] Len input file name: ', str_len
      in_file_name = trim(in_file_name(1:str_len-3)) // 'xml'
      write (*,*) '[main] Input XML file name: ', in_file_name
      call sim_interface % read_parameters(in_file_name)

      ! Load the mesh.
      call sim_interface % load_mesh()

      ! Create auxiliary data needed for a simulation. 
      call sim_interface % create_aux_data()

      ! Read in solver input file, storing parameter values in 'COMMOD'. 
      !
      print *, "[MAIN] Call READFILES -->"
 101  CALL READFILES
      print *, "[MAIN] --> READFILES return"
      print *, "[MAIN] nsd: ", nsd

      call compare_mesh()

I've also added a compare_cpp_fortran module used to compare Fortran and C++ data.

@vvedula22
Contributor

@ktbolt I merged Chi's P2P1 stuff for fluid and FSI. There were some other modifications in which I extended P2P1 for non-Newtonian flows and weakly applied Dirichlet BCs. Please use the commit e258b9f for your C++ conversion. I have tested this commit on all fluid and FSI cases in the svFSI-Tests repository. All the code additions we do subsequently, including Martin's G&R stuff, any additions to cardiac mechanics, shells, IB, etc., could be converted later.

@ktbolt
Contributor Author

ktbolt commented Aug 2, 2021

@vvedula22 Great! I will get back to the conversion.

@ktbolt
Contributor Author

ktbolt commented Aug 10, 2021

I've redone the XML file format to mimic the layout and names currently used by svFSI

<GeneralSimulationParameters>
  <Continue_previous_simulation> 0 </Continue_previous_simulation>
  <Number_of_spatial_dimensions> 3 </Number_of_spatial_dimensions>
  <Number_of_time_steps> 800 </Number_of_time_steps>
  <Warning> 0 </Warning>
  <Debug> 0 </Debug>
</GeneralSimulationParameters>

<Add_mesh name="msh" >
  <Mesh_file_path> mesh-complete/mesh-complete.mesh.vtu </Mesh_file_path>
  <Add_face name="lumen_outlet">
      <Face_file_path> mesh-complete/mesh-surfaces/lumen_outlet.vtp </Face_file_path>
  </Add_face>
  <Add_face name="lumen_wall">
      <Face_file_path> mesh-complete/mesh-surfaces/lumen_wall.vtp </Face_file_path>
  </Add_face>
</Add_mesh>

<Add_equation name="fluid" >
   <Density> 1.06 </Density>
   <Viscosity model="Constant" >
     <Value> 0.04 </Value>
   </Viscosity>
   <Coupled> true </Coupled>
   <Min_iterations> 3 </Min_iterations>
.
.
.
</Add_equation>

I also rewrote the XML reader to remove the jumble of parameter names and such.

@ktbolt
Contributor Author

ktbolt commented Aug 17, 2021

I've created an Implement-svFSI-using-cpp_19 branch in my svFSI fork, integrated the new XML reader, and added the previous C++ interface code.

I'm now creating C++ classes that duplicate the Fortran modules, retaining the same names and such to make it easy to match the Fortran data. I also created a simple C++ Array class to store matrices.

I've converted the types in the COMMOD module, the mother of all modules, into C++ classes. For example

TYPE fsType
       LOGICAL lShpF
       INTEGER(KIND=IKIND) :: eType = eType_NA
       INTEGER(KIND=IKIND) eNoN
...
       REAL(KIND=RKIND), ALLOCATABLE :: N(:,:)
       REAL(KIND=RKIND), ALLOCATABLE :: Nb(:,:)
END TYPE fsType

is converted to

class fsType {
  // Members are public to mirror the Fortran derived type's components.
  public:
    bool lShpF;
    int eType;
    int eNoN;
...
    Array<double> N;
    Array<double> Nb;
};

I've started to duplicate the Fortran subroutines to maintain a similar flow of control and calling sequence.

@ktbolt
Contributor Author

ktbolt commented Aug 21, 2021

I've added new XML elements to the parser to support FSI domains and more complicated BCs

<Add_equation name="FSI" >
   <Coupled> true </Coupled>
   <Min_iterations> 3 </Min_iterations>
   <Max_iterations> 10 </Max_iterations>
   <Tolerance> 1e-3 </Tolerance>

   <Domain id="0" >
       <Equation> fluid </Equation>
       <Density> 1.0 </Density>
       <Viscosity model="Constant" >
           <Value> 0.04 </Value>
       </Viscosity>
       <Backflow_stabilization_coefficient> 0.2 </Backflow_stabilization_coefficient>
   </Domain>

   <Add_BC name="wall_inlet" >
      <Type> Dir </Type>
      <Value> 0.0 </Value>
      <Impose_on_state_variable_integral> true </Impose_on_state_variable_integral>
      <Zero_out_perimeter> false </Zero_out_perimeter>
      <Effective_direction> (0, 0, 1) </Effective_direction>
   </Add_BC>

I've added returning mesh data from C++ to Fortran for comparison. Nodal coordinates match fine, but the element connectivity was wrong: the elements were ordered differently because I missed an important later step where the element connectivity is reordered and node IDs are incremented by 1, but not in the same place.

So now I've reorganized the C++ code to better replicate the Fortran organization and flow of control, replicating where each Fortran subroutine is called from and its name.

@ktbolt
Contributor Author

ktbolt commented Aug 25, 2021

I've reorganized the code and have replicated the Fortran code I've come across so far as I track the control flow for a CFD simulation.

consts.h, consts.cpp
load_msh.h, load_msh.cpp
nn.h, nn.cpp, nn_elem_gip.h, nn_elem_gnn.h, nn_elem_nn_bnds.h, nn_elem_props.h
read_files.h, read_files.cpp
read_msh.h, read_msh.cpp
vtk_xml.h, vtk_xml.cpp
vtk_xml_parser.h, vtk_xml_parser.cpp

It's very easy to get mixed up here, so I'm documenting all of this in https://github.com/ktbolt/svFSI/blob/Implement-svFSI-using-cpp_19/Code/Source/svFSI_cinterface/README.md.

Even though a lot of the element code is screaming "object orient me", I'm mostly following the Fortran implementation. I did remove a lot of the element type and element node case statements, such as SELECT CASE (lM%eNoN) and SELECT CASE (lFa%eNoN), that clutter up the code, replacing them with maps of lambda functions (see the sketch below). All of this will be replaced with element classes later on.
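
As a rough illustration, a hedged sketch of the map-of-lambdas pattern; the type and function names here are made up, not the actual svFSI code.

#include <functional>
#include <map>
#include <stdexcept>

// Hypothetical element data holder standing in for the real mesh type.
struct ElementData {
  int eNoN;   // number of element nodes
};

// Map an element node count to a lambda that sets its properties,
// replacing a Fortran SELECT CASE (lM%eNoN).
static const std::map<int, std::function<void(ElementData&)>> set_element_props = {
  { 4, [](ElementData&) { /* set linear tetrahedron data */ }},
  { 8, [](ElementData&) { /* set linear hexahedron data */ }},
  {10, [](ElementData&) { /* set quadratic tetrahedron data */ }}
};

void set_props(ElementData& elem) {
  auto it = set_element_props.find(elem.eNoN);
  if (it == set_element_props.end()) {
    throw std::runtime_error("Unsupported number of element nodes.");
  }
  it->second(elem);
}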

@ktbolt
Contributor Author

ktbolt commented Aug 28, 2021

I've finished converting all the Fortran code called from LOADMSH: reading meshes and faces, checking and reordering element connectivity, setting Gauss integration and shape function element data.

Subtracting 1 from array and vector indices, etc. is of course totally error prone, but I have Array and Vector class checks to track mistakes down.

I've been adding Array and Vector class operators as needed to try to replicate some of what Fortran supports. For example, the Fortran code

xl = lM % x(:, lM % gIEN(:,e))
v(:,1) = xl(:,2) - xl(:,1)
v(:,2) = xl(:,3) - xl(:,2)
v(:,3) = xl(:,4) - xl(:,3)
v(:,4) = CROSS(v(:,1:2))
sn(1)  = SGN(SUM(v(:,3)*v(:,4)))

is similarly implemented like this in C++

auto elem_conn = mesh.gIEN.col(e);     
xl = mesh.x.get_cols(elem_conn);
v1 = xl[1] - xl[0];
v2 = xl[2] - xl[1];
v3 = xl[3] - xl[2];
auto v4 = v1.cross(v2);
int sn = utils::sign(v3.dot(v4));

And the award for the most obscure Fortran subroutine name (so far) goes to READENDNLFF.

@vvedula22
Contributor

And the award for the most obscure Fortran subroutine name (so far) goes to READENDNLFF.

Ha ha! This refers to reading the end (or boundary) nodes of a 1D fiber mesh from a file generated using the SV Purkinje plugin :) You will see many more later :)

@ktbolt
Contributor Author

ktbolt commented Sep 3, 2021

I've finished converting the SETPROJECTOR(), FINDFACE(), MATCHFACES(), PULLSTACK() and PUSHSTACK() functions. This code is tricky because of the use of integer indexes into arrays storing node IDs, array bounds going from [0,N-1], and such.

Need to be careful with lines like this in Fortran

stk%n = stk%n + 1
stk%v(stk%n) = iVal

to get the indexing right in C++.
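
A minimal sketch, assuming a hypothetical Stack type (not the actual code), of how that push maps to 0-based C++ indexing:

#include <vector>

// Hypothetical stand-in for the Fortran stack type.
struct Stack {
  int n = 0;             // number of stored values
  std::vector<int> v;    // storage
};

// The 1-based Fortran increments n and then writes at the new n;
// the 0-based C++ writes at the old n and then increments.
void push_stack(Stack& stk, int iVal) {
  if (stk.n == static_cast<int>(stk.v.size())) {
    stk.v.push_back(iVal);   // grow if needed
  } else {
    stk.v[stk.n] = iVal;
  }
  stk.n += 1;
}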

I've added definitions to the Vector class to support STL iterator operations, for example for a Vector values you can do

// Iterate over values.
for (auto v : values) {
}

// Search for a value.
bool not_found = std::find(values.begin(), values.end(), value) == values.end();

I've replaced using std::vector in Array and other code with Vector to provide more uniform operations. For example

Vector<Vector<T>> get_cols(const Vector<int>& columns) const

Vector<T> rows(const int row, const Vector<int>& indexes) const

Note that some Fortran functions like PULLSTACK() and PUSHSTACK() can be replaced with the std::stack container. I've implemented them using Vector for now to replicate what's being done in Fortran.
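
For reference, a minimal example of how the same push/pull could look with std::stack once the Fortran-matching phase is over:

#include <stack>

int main() {
  std::stack<int> stk;
  stk.push(42);            // PUSHSTACK equivalent
  int val = stk.top();     // PULLSTACK reads the top value ...
  stk.pop();               // ... and removes it
  return (val == 42) ? 0 : 1;
}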

@ktbolt
Contributor Author

ktbolt commented Sep 12, 2021

I've finished converting the 500 line SUBROUTINE READMSH and the 12 subroutines that it calls to read in all sorts of things, and added a bunch of parameters for prestress, initial field values, fiber directions, etc.

I've converted solver .inp files to .xml and tested on the following svFSI-Tests cases

07-fsi/ale/03-pipe_3D/
07-fsi/ale/04-pipe_prestress/02-deform-ale/02.2-fsi-ALE
06-ustruct/03-LV-Guccione-active
04-fluid/01-pipe3D_RCR/

The Fortran and C++ data seem to match so far.

@ktbolt
Contributor Author

ktbolt commented Sep 23, 2021

I'm converting READEQ now and also converting svFSI_GenBC.inp to XML for testing; I will need to add a new parameter for Couple_to_genBC.

And it finally dawned on me what the l means in variable names like lEq, lM, etc. ... local! Am I slow?

@ktbolt
Contributor Author

ktbolt commented Oct 2, 2021

I've converted READEQ, READDOMAIN, READLS, READCEP, READMATMODEL, READVISCMODEL and FFT.

I've also added a bunch of enum classes to replicate the Fortran PARAMETER definitions in CONSTS.f. The Fortran

      INTEGER(KIND=IKIND), PARAMETER :: prop_NA = 0, fluid_density = 1,
     2   solid_density = 2, solid_viscosity = 3, elasticity_modulus = 4,
     3   poisson_ratio = 5, conductivity = 6, f_x = 7, f_y = 8, f_z = 9,
     4   backflow_stab = 10, source_term = 11, damping = 12,
     5   shell_thickness = 13, ctau_M = 14, ctau_C = 15

is implemented in C++ like this

enum class PhysicalProperyTypes
{
  NA = 0,
  fluid_density = 1,
  solid_density = 2,
  solid_viscosity = 3,
  elasticity_modulus = 4,
  poisson_ratio = 5,
  conductivity = 6,
  f_x = 7,
  f_y = 8,
  f_z = 9,
  backflow_stab = 10,
  source_term = 11,
  damping = 12,
  shell_thickness = 13,
  ctau_M = 14,
  ctau_C = 15
};

These enums are referenced using PhysicalProperyTypes::fluid_density for example.
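
Since scoped enum values don't implicitly convert to int, comparing against integer property values needs an explicit cast; a small illustrative example (the function and variable names are made up):

// Illustrative only: prop_id stands for an integer property value as
// stored by the Fortran-era data structures.
bool is_fluid_density(int prop_id)
{
  // enum class values don't implicitly convert to int, so compare explicitly.
  return prop_id == static_cast<int>(PhysicalProperyTypes::fluid_density);
}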

I've also converted more solver .inp files to .xml for testing.

@ktbolt
Contributor Author

ktbolt commented Oct 5, 2021

@vvedula22 I'm testing setting CMM parameters using https://github.com/SimVascular/svFSI-Tests/blob/master/07-fsi/cmm/01-pipe_RCR/02-deform-wall_inflate/01-inflate/svFSI.inp.

The mesh is defined using

Add mesh: wall {
   Set mesh as shell: t
   Mesh file path: ../../mesh/walls_combined.vtp
}

The Fortran code is calling SUBROUTINE READVTU(lM, fName) to read in the VTP file

     CALL loadVTK(vtu, fName, iStat)
     IF (iStat .LT. 0) err = "VTU file read error (init)"

This is confusing because the name and implementation of READVTU implies that it should only read and process VTU files but it seems to be reading and processing VTP files as well. What's going on here?

@vvedula22
Contributor

@ktbolt This is a special case because we are loading the mesh as a shell surface where the triangulated wall surface is treated as a svFSI mesh and not a face. Likewise, the svFSI face for the shell surface would be the edges of the shell. Ideally, while creating the example, I should have copied the walls_combined file to a .vtu extension to maintain consistency. However, I didn't want to recreate files when not needed.

Regarding READVTU's ability to process VTP files, it is certainly possible because the Fortran VTK library is used to process only basic information in the vtu/vtp files such as position coordinates, connectivity, and any field variables. We don't do any other VTK operation that is specific to the data type, e.g., computing normals, extracting faces, etc.

Internally, the vtkXMLMod still loads the walls_combined file as .vtp file and looks for all the VTP commands such as NumberOfPolys instead of NumberOfCells, etc. However, READVTU only extracts coordinates and connectivity from this structure. It will not access other VTP data such as GlobalElementID.

To summarize, one can load a VTP file using READVTU as long as you don't need the additional vtp-specific field variables such as GlobalElementID or GlobalNodeID. Likewise, one can also load a VTU file using READVTP provided you have these additional field variables in the .vtu file and are required and appropriately processed for the problem.

@ktbolt
Contributor Author

ktbolt commented Oct 5, 2021

@vvedula22 Thanks for the detailed reply!

I implemented READVTU to read just .vtu files; I will think about redoing that, or better yet I will convert the walls_combined.vtp file to walls_combined.vtu for my testing.

It's not clear why SV even uses .vtp files to store mesh data; it just overcomplicates processing mesh files. We should use .vtu for all mesh data.

@ktbolt
Contributor Author

ktbolt commented Oct 9, 2021

I've converted the 500 line PARTMSH Fortran function that partitions and distributes a mesh across a set of processors. A most brutal experience: a 0-based Fortran array, figuring out which variables are counters and which are element IDs, redoing the MPI calls, and converting the CMMOD module implemented in COMU.f.

It will be interesting to reimplement this code not just following the Fortran implementation but using C++ containers; maybe this code can be simplified. Just using descriptive variable names and code decomposition would help a lot.

@ktbolt
Contributor Author

ktbolt commented Oct 22, 2021

@vvedula22 What is the format of the Fourier coefficients file used in Fourier coefficients file path: ? I don't see an example for this in svFSI-Tests. Can you send me an example file that I can use to test?

@vvedula22
Contributor

OK, I will. You also asked for another example during the last solver meeting. Could you remind me what test case we talked about then? I will upload both of these cases.

@CZHU20
Member

CZHU20 commented Oct 23, 2021

Hi @ktbolt, I saw you are creating templates for 2D/3D arrays. Is it possible to take advantage of existing libraries such as Eigen3, Boost, etc. to handle the matrix/vector operations for us?

@ktbolt
Contributor Author

ktbolt commented Oct 23, 2021

@vvedula22 During the solver meeting I asked for a test case for variable wall properties.

@ktbolt
Contributor Author

ktbolt commented Oct 23, 2021

@CZHU20 I wrote my own array and vector templates so I could easily reproduce Fortran arrays and debug indexing problems. I've been looking at Eigen and other packages for matrix/vector operations, would like to use something for the real svFSI implementation. I will probably use Eigen for the Python svZeroDSolver that I will help reimplement in C++.

What is your experience with matrix/vector packages? Do you have a preference?

@CZHU20
Member

CZHU20 commented Oct 24, 2021

Hi @ktbolt, thanks for the explanation. I am only familiar with Eigen. Since it's just a bunch of header files, I think Eigen can also be easily packaged into svFSI for distribution.
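
As a minimal illustration of the kind of Eigen usage being discussed (a generic dense solve, not svFSI code):

#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::Matrix3d A = Eigen::Matrix3d::Random();
  Eigen::Vector3d b(1.0, 2.0, 3.0);

  // Solve A x = b with a dense partial-pivoting LU factorization.
  Eigen::Vector3d x = A.partialPivLu().solve(b);
  std::cout << "x = " << x.transpose() << "\n";
  return 0;
}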

@vvedula22
Contributor

@ktbolt Added two cases in svFSI-Tests:

  • Fourier coefficients example is included in the pipe3D_RCR example
  • CMM with variable wall properties is included in the vein-graft example

Also added README files to describe the problem for both these cases. Let me know if you have any questions on these.

@ktbolt
Contributor Author

ktbolt commented Oct 26, 2021

Thanks @vvedula22 !

@ktbolt
Contributor Author

ktbolt commented Oct 27, 2021

The way svFSI currently preprocesses input mesh, face, and BC data for a simulation is not very efficient, especially on an HPC cluster

  • submit a job to the cluster
  • wait for it to start
  • execute svFSI using MPI
  • process the mesh
  • partition the mesh
  • detect an error in the solver input file or other data
  • job fails

Maybe what we need to do is preprocess the input data and write it to a file that is guaranteed to be valid. This file can then be used for the simulation.

@ktbolt
Contributor Author

ktbolt commented Oct 28, 2021

I discovered that I skipped converting the code that processes boundary conditions. This is probably important, so I converted the 500 line READBC subroutine, added more XML parameters for BCs, code to read in various .dat files, etc.

@ktbolt
Contributor Author

ktbolt commented Oct 17, 2022

I've converted the ustruct equation, added functions

construct_usolid()
ustruct_2d_c()
ustruct_3d_c()
ustruct_2d_m()
ustruct_3d_m()
g_vol_pen()
get_tau()

I've tested 06-ustruct/01-block-compression/P1P1_VMS/h320MPa_nu0.4999999/N032; the C++ and Fortran results match for both serial and MPI simulations.

@ktbolt
Contributor Author

ktbolt commented Oct 19, 2022

I've now tested

06-ustruct/03-LV-Guccione-active
06-ustruct/01-block-compression/P1P1_VMS/h320MPa_nu0.4999999/N032

I found a bug in the TET20 shape functions.

Serial and MPI results match.

@ktbolt
Contributor Author

ktbolt commented Oct 21, 2022

I've added the heatf equation, added functions

construct_heatf()
heatf_3d()
heatf_2d()

I've now tested 04-fluid/02-dye_AD; the results match Fortran.

@ktbolt
Contributor Author

ktbolt commented Oct 24, 2022

I've added precond_rcs(); it was a bit involved reproducing Fortran intrinsics like

Wr(1,Ac) = MAXVAL(ABS(Val(1:3,a:b)))

Wc(1,a) = MAX(MAXVAL(ABS(Val(1:7:3,i))), Wc(1,a))

reproduced in C++ by adding an Array<T>::get_values method to get a range of row/column values and using the lambda function max_func

auto max_func = [](const double& a, const double& b) { return fabs(a) < fabs(b); };

auto vals1 = Val.get_values({0,2}, {a,b});
Wr(0,Ac) = fabs(*std::max_element(vals1.begin(), vals1.end(), max_func));

auto vals2 = Val.get_values({0,6}, {i,i}, 3);
Wc(0,a) = std::max(fabs(*std::max_element(vals2.begin(), vals2.end(), max_func)), Wc(0,a));

I've also added several Array<T> operators (e.g. sqrt) to reproduce Fortran array operations so the C++ looks the same as the Fortran

  Wr = Wr - 0.5;
  Wr = Wr / abs(Wr);
  Wr = (Wr + abs(Wr)) * 0.5;

  Wr = 1.0 / sqrt(Wr);
  Wc = 1.0 / sqrt(Wc);
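
A hedged sketch of how such an element-wise operator can be provided as a free function (assuming simple Array<T> accessors; not the actual implementation):

#include <cmath>

// Element-wise square root returning a new array, so code can write
// Wr = 1.0 / sqrt(Wr); as in the Fortran.
template <typename T>
Array<T> sqrt(const Array<T>& A) {
  Array<T> result(A.nrows(), A.ncols());
  for (int j = 0; j < A.ncols(); j++) {
    for (int i = 0; i < A.nrows(); i++) {
      result(i, j) = std::sqrt(A(i, j));
    }
  }
  return result;
}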

I've tested 04-fluid/06-channel-flow-2D, which uses precond_rcs(); serial and MPI results match Fortran.

@ktbolt
Contributor Author

ktbolt commented Oct 25, 2022

I've now tested 04-fluid/07-cyl2D-weak-DirBC; there were like 1000 mistakes in bw_fluid_2d(), I must have converted it on a Monday. Serial and MPI results match Fortran.

I've also tested 04-fluid/03-driven-cavity-2D/Re1000/N64_nonunif and 04-fluid/05-nonNewtonian, and fixed some problems setting the viscosity type. Serial and MPI results match Fortran.

@ktbolt
Contributor Author

ktbolt commented Oct 25, 2022

I've tested 04-fluid/04-closed-loop; it parses the Couple_to_cplBC section parameters correctly and calls the set_bc::cplBC_Integ_X function to interface to the 0D solver.

The Couple_to_cplBC section parameters look like this

   <Couple_to_cplBC type="SI" >
      <Number_of_unknowns> 39 </Number_of_unknowns>
      <ZeroD_code_file_path> ./cplBC/cplBC.exe </ZeroD_code_file_path>
      <Unknowns_initialization_file_path> ./cplBC/foo_c.ini </Unknowns_initialization_file_path>
      <File_name_for_0D_3D_communication> CPLBC_0D_3D.tmp </File_name_for_0D_3D_communication>
      <File_name_for_saving_unknowns> cplBC_AllData </File_name_for_saving_unknowns>
      <Number_of_user_defined_outputs> 31 </Number_of_user_defined_outputs>
   </Couple_to_cplBC>

We can probably remove most of these parameters; we really just need the path to the 0D solver shared library and the JSON file. It would be good to not have to specify <Number_of_unknowns> and <Number_of_user_defined_outputs>.

@ktbolt
Contributor Author

ktbolt commented Oct 27, 2022

I've added writing/reading binary restart files and initialization from vtu and binary restart files. The binary restart file format is the same as the Fortran format.

I also fixed a problem with not writing the last time step to a vtk file, and had some issues trying to reproduce Fortran code that uses 0-size arrays.

@ktbolt
Contributor Author

ktbolt commented Oct 27, 2022

The last big task is to convert the code for remeshing. This is going to be a lot of work: lots of IF (ALLOCATED(colPtr)) DEALLOCATE(colPtr) kind of thing going on, interpolation, reading TetGen files, etc.

I will also need to restructure the C++ code a bit to loop over the initialization code; the Fortran just does a goto. A sketch of the restructuring is shown below.
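
A hedged sketch (with illustrative function names, not the actual code) of how the Fortran goto back to the initialization label can map onto a loop:

// Hypothetical stand-ins for the real initialization and time-stepping code.
void initialize() { /* set up meshes, BCs, and solver state */ }
bool run_time_stepping() { return false; /* true if remeshing was triggered */ }

void run_simulation() {
  // The Fortran jumps back to an initialization label with a goto after
  // remeshing; in C++ the same control flow becomes a loop.
  bool remesh = true;
  while (remesh) {
    initialize();
    remesh = run_time_stepping();
  }
}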

@vvedula22
Contributor

@ktbolt The remeshing algorithm needs to be updated on the Fortran side too. I wrote this code during my earliest days as a postdoc and was still learning the code then. This part of the code is not optimized, and I think the remeshing followed by projection could be handled better. Would it be possible for you to come back to this in a month or so, ideally in Dec?

@ktbolt
Contributor Author

ktbolt commented Oct 27, 2022

@vvedula22 I think it is fine for me to come back to this in Dec or even next year. I want to go through the code to determine if there are sections that have not been tested, and to increase the C++ solver performance.

@vvedula22
Contributor

Sounds good.

@ktbolt
Contributor Author

ktbolt commented Nov 1, 2022

I've been reviewing svFSI-Tests simulation results and have found some incomplete tests.

The 05-struct/02-LV-Guccione-passive test was producing NaNs; I fixed a couple of bugs in the Guccione material model and the results now look ok.

I ran the 07-fsi/ale/01-channel-leaflets_2D test using MPI; the results match.

@ktbolt
Contributor Author

ktbolt commented Nov 2, 2022

I found some problems testing 07-fsi/cmm/01-pipe_RCR/02-deform-wall_inflate/01-inflate

  • Fixed the XML Add_BF format

  • Added code to read shells from a .vtp file for a mesh:
        <Set_mesh_as_shell> true </Set_mesh_as_shell>
        <Mesh_file_path> walls_combined.vtp </Mesh_file_path>

  • Fixed some bugs in the cmm code

Serial and MPI results match.

@ktbolt
Contributor Author

ktbolt commented Nov 3, 2022

I've fixed a bug in the code that computes divergence for output; it can now write out Divergence

   <Output type="Spatial" >
     <Displacement> true </Displacement>
     <Velocity> true </Velocity>
     <Pressure> true </Pressure>
     <Stress> true </Stress>
     <Jacobian> true </Jacobian>
     <Divergence> true </Divergence>
   </Output>

The Divergence values do differ by a few percent from Fortran. The intermediate arrays in the divergence computation agree to around 1e-14, but these values, of order 1e-17, are then divided by each other, so small absolute differences are magnified into large relative ones.

@ktbolt
Contributor Author

ktbolt commented Nov 4, 2022

Computing wss and vorticity results was giving NaNs; I fixed a couple of bugs in post() and get_viscosity(), and those results now look good.

@ktbolt
Contributor Author

ktbolt commented Nov 9, 2022

I started testing with Trilinos again; I had to rebuild on my Mac and fool around with MUMPS.

I added init_dir_and_coup_neu() and fixed some problems calling trilinos_linear_solver functions which are implemented to be called from Fortran. I also fixed trilinos_lhs_create() so it can be used with C++ 0-based indexing.

I tested 06-ustruct/03-LV-Guccione-active using the Trilinos-ILUT preconditioner, results match Fortran.

@ktbolt
Contributor Author

ktbolt commented Nov 15, 2022

I've added missing C++ xml files and ran simulations for a bunch of different mesh/element tests under

03-stokes/01-manufactured-solution
02-lelas/01-beam_Euler-Bernoulli
05-struct/01-block-compression

I actually found a new indexing bug doing this.

I've now run and compared serial and MPI simulations for all 68 tests (excluding shells). All results look good.

I will now look at code coverage, maybe using gcov, or maybe just comparing what's in svFSI_master.inp to the test .inp files.

@vvedula22 @CZHU20 Do you think svFSI-Tests tests all of the svFSI code?

@alisonmarsden

alisonmarsden commented Nov 15, 2022 via email

@ktbolt
Contributor Author

ktbolt commented Nov 19, 2022

I've re-implemented parts of the svFSI XML file parsing code; it is now much cleaner, easier to add new parameters to, has better error checking, and is 1000 lines smaller.

@ktbolt
Contributor Author

ktbolt commented Nov 27, 2022

I've finished filling in a bunch of holes in the new svFSI XML file parsing code and integrated it into the svFSI code. I then re-ran all of the tests and checked the results to make sure the new parser is getting all the parameters correctly, fixed a few problems, and now all the tests look good.

@ktbolt
Contributor Author

ktbolt commented Dec 1, 2022

For linear elasticity the solution looks like this using Trilinos

 LE 1-1  5.353e+01  [0 1.000e+00 1.000e+00 9.950e-04]  [293 -9 94]
 LE 1-2s 1.042e+02  [-60 9.950e-04 9.950e-04 9.728e-04]  [267 -4 93]

When not using Trilinos the linear solve has a large number of iterations for both C++ and Fortran

 LE     1-1  3.50E1  [   0 1.00000 1.00000 3.94E-1]  !5100    0  99!
 LE     1-2  6.98E1  [  -8 3.94E-1 3.94E-1 9.38E-1]  !5100    0  99!
 LE     1-3  1.04E2  [  -8 3.70E-1 3.70E-1 9.42E-1]  !5100    0  99!
 LE     1-4  1.42E2  [  -9 3.49E-1 3.49E-1 9.42E-1]  !5100    0  99!
 LE     1-5  1.77E2  [  -9 3.29E-1 3.29E-1 9.42E-1]  !5100    0  99!
 LE     1-6  2.11E2  [ -10 3.10E-1 3.10E-1 9.42E-1]  !5100    0  99!
 LE     1-7  2.47E2  [ -10 2.92E-1 2.92E-1 9.42E-1]  !5100    0  99!
 LE     1-8  2.84E2  [ -11 2.75E-1 2.75E-1 9.42E-1]  !5100    0  99!
 LE     1-9  3.20E2  [ -11 2.60E-1 2.60E-1 9.42E-1]  !5100    0  99!
 LE     1-10s 3.56E2  [ -12 2.45E-1 2.45E-1 9.42E-1]  !5100    0  99!

The C++ svFSILS linear solver gmresv() code is about three times slower than the Fortran code. The Fortran code uses sub-arrays a lot to modify 3-arrays, for example

CALL ADDBCMUL(lhs, BCOP_TYPE_ADD, dof, u(:,:,i), u(:,:,i+1))

which I reproduce in C++ by creating copies of sub-arrays, modifying them, and copying them back, for example

auto u_slice = u.slice(i);
auto u_slice_1 = u.slice(i+1);
add_bc_mul::add_bc_mul(lhs, BcopType::BCOP_TYPE_ADD, dof, u_slice, u_slice_1);
u.set_slice(i+1, u_slice_1);

All this copying slows down the gmresv() code, I think. There are two options

  1. Change the way the arguments are passed to functions called by gmresv(), pass in index with the 3-array
  2. Extract references to the data in the 3-array

Option 1) would require a lot of changes to all the code that uses these functions.

Option 2) seems to be the best choice, although it might produce some scary C++ code.

@ktbolt
Contributor Author

ktbolt commented Dec 2, 2022

I've implemented an Array<T> Array3::rslice(const int slice) method that returns an Array object whose data points to a sub-section of the Array3 data, so modifying the Array object's data modifies the Array3 object's data. An Array class member variable flags that its data is not to be deleted when the object is destroyed. Crude and simple, and not too dangerous (maybe).
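
A hedged sketch of the idea (the member names are assumed, not the actual implementation):

// Return an Array that aliases one slice of the Array3 storage; writing
// through the returned object writes into the Array3 data.
template <typename T>
Array<T> Array3<T>::rslice(const int slice) const
{
  Array<T> view;
  view.nrows_ = nrows_;
  view.ncols_ = ncols_;
  view.data_ = data_ + slice * nrows_ * ncols_;  // alias, no copy
  view.is_reference_ = true;  // destructor must not delete aliased data
  return view;
}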

I've modified the C++ gmres_v() function to use rslice() so the above C++ code is now

add_bc_mul::add_bc_mul(lhs, BcopType::BCOP_TYPE_ADD, dof, u.rslice(i), u.rslice(i+1));

which looks and behaves like the Fortran code.

The time spent in the C++ gmres_v() function went from 99 sec to 35 sec, which is the same as the Fortran code. The C++ and Fortran simulations now run in the same amount of time and the results match.

@ktbolt
Contributor Author

ktbolt commented Dec 6, 2022

I've been timing 07-fsi/ale/03-pipe_3D simulations (204,705 fluid elements and 85,220 solid elements). For a single processor and a single time step the C++ code (1m13s) is about 1.5 times slower than the Fortran code (0m45s).

The time difference is primarily from the FS equation matrix assembly

C++      8.15 s
Fortran  2.57 s

I think the additional time in the C++ code comes from creating millions of small Array objects in the nonlinear solid mechanics construct LHS and material model code; the get_pk2cc() function creates lots of new Array objects each time it is called.

I've done some optimization for this and reduced the C++ code to 1.2 times slower: 55 s slower for 10 time steps using four processors on my old Mac.
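
A hedged sketch (illustrative names and sizes, not the actual code) of the kind of optimization involved: reusing preallocated work buffers instead of allocating temporaries on every call.

#include <vector>

// Work buffers sized once and reused across element calls.
struct Workspace {
  std::vector<double> S;    // stress work buffer
  std::vector<double> CC;   // material stiffness work buffer
};

void get_pk2cc(Workspace& ws) {
  // Fill ws.S and ws.CC in place; no allocations in the hot loop.
  ws.S.assign(ws.S.size(), 0.0);
  ws.CC.assign(ws.CC.size(), 0.0);
}

void assemble(int num_elems) {
  Workspace ws;
  ws.S.resize(6);    // e.g., symmetric stress components
  ws.CC.resize(36);  // e.g., 6x6 material stiffness
  for (int e = 0; e < num_elems; e++) {
    get_pk2cc(ws);   // reuses the same buffers every call
  }
}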

We will need to keep this sort of thing in mind when implementing the next-stage C++ code.

@ktbolt
Contributor Author

ktbolt commented Dec 7, 2022

I've tested the latest changes on Ubuntu, looks good.

I also tested the XML parsing code, which uses some new C++17 features (variants), on Windows; I found a small problem, fixed it, and now it works.

@ktbolt
Contributor Author

ktbolt commented Dec 11, 2022

I'm now cleaning up the code, removing all of my "careful with indexing" comments, commented-out Fortran code, and such.

I will leave in all of the conditionally compiled print statements that are useful for debugging.

@ktbolt
Contributor Author

ktbolt commented Dec 14, 2022

I have finished cleaning up the code.

I will now have a look around the code to see if there is anything I might have missed or not implemented.

@ktbolt
Contributor Author

ktbolt commented Dec 17, 2022

I have created the https://github.com/SimVascular/svFSIplus repository for the C++ code and added a main.cpp file that executes the svFSI program with the name of the solver command file, just like the Fortran version.

Both serial and MPI simulations work fine.

I also added messages to better diagnose errors in the XML input file, for example

[svFSI] ERROR The svFSI program has failed.
[svFSI] ERROR The following error occured while reading the solver parameters file 'svFSI.xml'.
[svFSI] ERROR Error=XML_ERROR_MISMATCHED_ELEMENT ErrorID=14 (0xe) Line number=5: XMLElement name=Continue_previous_simulation

@ktbolt
Contributor Author

ktbolt commented Jan 11, 2023

I've finished making changes to my svFSI branch https://github.com/ktbolt/svFSI/tree/Implement-svFSI-using-cpp_19 and am now moving all new development to https://github.com/SimVascular/svFSIplus.
