Documentation of the AMRCART code
A brief glossary by
3. Data-structure, variables, and resources
5. Source code
6. Compiling the code
7. How to start a simulation?
8. How to implement a problem?
The new user may first run the code without the adaptive mesh, which makes it as handy as any other hydro code. To familiarize new users with the handling of the code, with the input and output data formats, and with the graphics interface, we provide a couple of exercises to work through first. The exercises comprise some well-known test examples, like Sod's shock-tube problem, as well as some astrophysical examples.
Even users who already have broad experience in hydro- and MHD-simulations, as well as in code-handling, will need some time to make full use of all the different options of the codes. With some 30'000 lines and about 150 subroutines (3D version), the code has considerable complexity, mostly due to the adaptive mesh with its associated data structure and run-time data management. With this glossary, we hope to provide some support for the use of AMRCART in research and teaching. We wish you much fun and success in your research.
We would like to ask everybody who makes use of the codes, in research and teaching, to refer to the authors and to the following paper:
Rolf Walder and Doris Folini
A-MAZE: A code package to compute 3D magnetic flows, 3D NLTE radiative transfer, and synthetic spectra
In: H.J.G.L.M. Lamers and A. Sapar (eds),
Thermal and Ionization Aspects of Flows from Hot Stars: Observations and Theory,
ASP Conference Series 204, 281-284, 2000.
We also appreciate being included on the author list of the corresponding publications if it is felt that the code has contributed considerably to the results. This is, however, not mandatory.
The code compiles without problems under Fortran 90/95 compilers when the flags for Fortran 77 source code are set.
Although the basic routines of the code are identical for the MHD and hydro versions, and similar for the 1D, 2D, and 3D versions, so far we prefer to provide them separately for each version.
3. Data-structure, variables, and resources
3.1 Data storage and -management
3.2 Linked list of blocks
3.3 Irregular cells
3.4 Change of global parameters
3.5 Changing parameters when restarting
AMRCART uses a hierarchical, block-structured mesh together with time-dependent adaptive mesh refinement. More details on the algorithms and on hierarchical block-structured meshes can be found in section 9, in the slides of some lectures given in the frame of an A-MAZE workshop, as well as in the literature given in section 9 and on the A-MAZE home page.
Adaptive meshes require run-time allocation of memory. The hierarchical block-structured mesh requires the use of pointers. Objects and classes would facilitate programming.
Fortran 77 provides none of these commodities. Fortran 90, C, or C++ would do better here in some respects. However, so far we have found it too costly to rewrite the codes. The shortcomings of Fortran 77 can be overcome to some degree by consistent object-oriented-style programming (as far as possible), by explicitly programming some sort of run-time memory allocation, and by linked objects. We have tried our best to follow these guidelines.
3.0 Global parameters and variables

Global parameters and variables are declared in every subroutine. We do not make use of any include-statements, although nowadays most f77-compilers would support this. We do, however, provide perl-scripts and targets in the Makefile which allow global changes of the values of global parameters and common-block entries.
3.0.1 Global parameters
3.0.1 Global variables
A. Variables related to number of stored data
3.0.2 Global variables stored in common-blocks
A. General data-structure
B. How to integrate
C. Governing physics and problem under consideration
D. Mesh refinement
3.1 Data storage and -management (alloc-array), memory requirements
3.1.1 Data storage: the `alloc'-array
A. Basics

AMRCART stores all (relevant) data in one big array alloc(mwords), which is part of the common block 'calloc'. Pointers refer to the addresses at which the data is stored. The pointers are stored in the node-array, which is part of the common block 'nodal'.
The `ordinary' user, however, is not forced to consider this speciality: in all subroutines which this user normally has to touch, the data-array storing the conserved quantities is of the form:
Here, nvar is the total number of conserved quantities and mitot (maxip1), mjtot (maxjp1), mltot (maxlp1) is the size of the data-array in x-, y-, and z-direction respectively.
Suppose, the data-block under consideration has, in x-direction, maxi-1 integration cells and maxi interfaces confining these cells.
nvarbas is the number of conserved quantities necessary for describing the dynamics of a single component fluid.
ncomp is the number of fluid components. It is always greater than (multi-component fluid) or equal to 1. nvar is the total number of conserved quantities, including possible different species: nvar = nvarbas + (ncomp-1)
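As a worked example of this formula (the value nvarbas = 5, appropriate for 3D hydro with density, three momenta, and energy, is our assumption for illustration):

```python
# Worked example of nvar = nvarbas + (ncomp - 1); nvarbas = 5
# (3D hydro: density, three momenta, energy) is an assumption.
def total_conserved(nvarbas, ncomp):
    """Each extra fluid component adds one conserved species density."""
    assert ncomp >= 1
    return nvarbas + (ncomp - 1)

print(total_conserved(5, 1))  # single-component fluid -> 5
print(total_conserved(5, 3))  # three components -> 7
```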
For the different cases (hydro or mhd; 1D, 2D, or 3D), nvarbas has the following values:
B. Order of storage

As already said, in the subroutines the conserved quantities are stored in the array
The conserved quantities are stored in the following order:
C. The link between alloc and val

All data is generally stored in one big, one-dimensional array, the data array alloc, which is part of the common-block `calloc'. The data of one particular block within the adaptive tree is stored at a certain address `loc', which is stored in the node-array.
In all user-defined subroutines, the data array is visible as described in section 3.1.A. The link between the two forms is by a subroutine-call of the following form:
with the following subroutine statements:
implicit double precision(a-h,o-z)
Thus, in alloc the data in x-direction is stored first, followed by the y-direction, the z-direction, and the conserved quantities.
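This column-major storage order can be illustrated with a short Python sketch of the offset arithmetic (the 0-based offsets and the function name are our own illustration, not AMRCART code):

```python
# Sketch of the Fortran column-major layout of val(mitot,mjtot,mltot,nvar)
# inside the flat alloc array: x runs fastest, then y, z, and the
# conserved quantity. Offsets are 0-based here; AMRCART itself is 1-based.
def flat_index(i, j, l, ivar, mitot, mjtot, mltot):
    """0-based offset of val(i,j,l,ivar) relative to the block's
    starting address 'loc' in alloc."""
    return (i - 1) + mitot * ((j - 1) + mjtot * ((l - 1) + mltot * (ivar - 1)))

mitot, mjtot, mltot = 24, 24, 24
# consecutive i are adjacent in memory:
assert flat_index(2, 1, 1, 1, mitot, mjtot, mltot) == 1
# stepping j jumps by mitot, stepping the variable index by mitot*mjtot*mltot:
assert flat_index(1, 2, 1, 1, mitot, mjtot, mltot) == mitot
assert flat_index(1, 1, 1, 2, mitot, mjtot, mltot) == mitot * mjtot * mltot
```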
This very same procedure is used for integer data as well. After having allocated space for the array, the subroutine call is by
with the following subroutine statements:
implicit double precision(a-h,o-z)
Most modern compilers issue a warning indicating a type mismatch related to this procedure. However, since the integer data are only used within the subroutines, this warning is not relevant. See section 2 and section 3.1.A for further comments on this issue.
D. How to allocate memory?

Generally, the procedure described below is only necessary for somebody who wants to dive deeper into the code.
You can allocate memory in the data-array `alloc' by the statement
loc = igetsp(mwords)
where mwords is the number of words you want to allocate. The function `igetsp' returns the location 'loc' at which storage is possible. `igetsp' uses a simple 'look at the next place' strategy. If you want to access this memory in a different subroutine, you have to store the pointer location as a global variable.
Memory can be released by the statement
Make sure that you properly release the memory when you do not need it anymore, otherwise you get a memory leak.
The data management is essentially done via a free list, 'lfree(maxlsp,2)', storing the already occupied positions of the data-array alloc as well as the number of words occupied behind each of these positions. Here, the global parameter `maxlsp' is the maximal number of data chunks which can be stored simultaneously in the `alloc'-array.
This data structure is initialized in the subroutine ./src/AmrModule/stst1.f. Memory is allocated via the function ./src/AmrModule/igetsp.f. Memory is released by the routine ./src/AmrModule/reclam.f.
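The following toy Python sketch mimics the free-list scheme described above; the class, its methods, and the first-fit details are our own simplified illustration, not the actual igetsp/reclam implementation:

```python
# Toy free-list allocator mimicking the igetsp/reclam scheme: a list of
# (position, nwords) entries records occupied stretches of the flat array.
class AllocPool:
    def __init__(self, mwords):
        self.mwords = mwords
        self.used = []          # list of (loc, nwords) occupied stretches

    def igetsp(self, nwords):
        """Return the first location with nwords free ('look at the
        next place' strategy); raise if the pool is exhausted."""
        loc = 0
        for start, length in sorted(self.used):
            if start - loc >= nwords:
                break           # found a large enough hole
            loc = max(loc, start + length)
        if loc + nwords > self.mwords:
            raise MemoryError("increase mwords")
        self.used.append((loc, nwords))
        return loc

    def reclam(self, loc, nwords):
        """Release a previously allocated stretch (else: memory leak)."""
        self.used.remove((loc, nwords))

pool = AllocPool(mwords=100)
a = pool.igetsp(40)      # first allocation starts at 0
b = pool.igetsp(40)      # next free place: 40
pool.reclam(a, 40)       # free the first stretch
c = pool.igetsp(30)      # reuses the hole at 0
```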
3.1.2 Memory requirements

Since the mesh is adaptive and time-varying, it is not easy to estimate the effective memory requirements. However, to start the simulation, you have to fix the value of the global parameter `mwords', since in Fortran 77 arrays have to be dimensioned at compile time.
Due to the regular block-structure and the Cartesian grids, the geometry and the mesh require nearly no memory. Thus, essentially only the flow field of the different blocks needs to be stored.
'alloc' is by far the largest array. Thus mwords is almost equivalent to the entire memory-resources being occupied on the computer by the code.
Assume a single block with a size of 20x20x20 cells. Including two boundary ghost cells on each side, the effective block will contain 24x24x24 cells. To store nvar conserved variables, (13'824*nvar) words or (110'592*nvar) bytes are needed for this block.
For estimating the memory requirements one should have the following in mind:
After estimation of the memory requirements one can set the value of the global parameter 'mwords' by the procedure described in section 3.4.
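A back-of-the-envelope estimate can be sketched as follows (Python; the block count and the nvar value are hypothetical, only the 24x24x24 arithmetic comes from the example above):

```python
# Back-of-the-envelope estimate of mwords: sum the words needed by the
# blocks you expect. Block counts and nvar below are made-up examples.
GHOST = 2           # ghost cells on each side, as in the 20 -> 24 example
BYTES_PER_WORD = 8  # double precision

def block_words(nx, ny, nz, nvar, ghost=GHOST):
    """Words needed to store nvar conserved variables on one block."""
    return (nx + 2 * ghost) * (ny + 2 * ghost) * (nz + 2 * ghost) * nvar

# the 20x20x20 example from the text, per conserved variable:
w = block_words(20, 20, 20, 1)
print(w)                       # 13824 words per variable
print(w * BYTES_PER_WORD)      # 110592 bytes per variable

# crude total for, say, 50 such blocks with a hypothetical nvar = 8:
mwords_estimate = 50 * block_words(20, 20, 20, 8)
```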
3.2 Linked list of blocks (rnode- and node-array)

AMRCART uses a tree of Cartesian grids in the form of regular blocks. It constructs (within the frame you specify in the input-file `tape5') more and more blocks of finer and finer meshes.
3.2.1 Structure of the linked list of blocks

Blocks are adaptively allocated at different levels of refinement. Each block carries an arbitrary number, 'mptr'. The first block with number mptr at refinement level 'level' is found by the statement
mptr = lstart(level).
Every next block of the same level of refinement (carrying number 'mptr1') is then found by the statement
mptr1 = node(10,mptr).
The list closes if node(10,mptr)=0. See further details of the array node in section 3.2.2.
A full list of blocks is thus obtained by the following loops:
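The traversal can be sketched as follows (Python; the dictionaries merely stand in for the Fortran arrays lstart and node, with made-up block numbers):

```python
# Sketch of the block traversal: for each refinement level, follow the
# linked list starting at lstart(level) via node(10, mptr) until the
# sentinel 0. All block numbers here are purely illustrative.
maxlv = 2
lstart = {1: 5, 2: 9}          # first block number on each level
node10 = {5: 7, 7: 0, 9: 0}    # node(10, mptr): next block, 0 = end of list

def all_blocks():
    blocks = []
    for level in range(1, maxlv + 1):
        mptr = lstart[level]
        while mptr != 0:
            blocks.append((level, mptr))
            mptr = node10[mptr]   # next block of the same level
    return blocks

print(all_blocks())   # [(1, 5), (1, 7), (2, 9)]
```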
3.2.2 Block-linked data-array

There are essentially two nodal arrays, `rnode' and `node', which carry block-related data. Apart from them, there are the level-based arrays hxposs(maxlv), hyposs(maxlv), hzposs(maxlv), kposs(maxlv), storing the x-, y-, and z-size of the Cartesian cells, and the time step on the different levels of refinement.
A. The float part: rnode

All floats related to a block of any level of refinement are stored in the rnode-array. rnode is thus a 2-dimensional array, rnode(.,maxgr), where the global parameter maxgr is the maximum allowed number of blocks. We now define the first dimension of rnode, using the following syntax:
rnode(28,maxgr) in 3D
rnode(12,maxgr) in 2D
rnode(5,maxgr) in 1D
B. The integer part: node

All integers related to a block of any level of refinement are stored in the node-array. node is thus a 2-dimensional array: node(.,maxgr), where the global parameter maxgr is the maximum allowed number of blocks. In all dimensions, node(.,maxgr) is defined as follows:
3.3 Irregular cells

The bound for irregular cells (see Section 8.2.2) is given by the parameter maxirr. No further explanation on irregular cells is given in this glossary.
3.4 How to change the global parameters throughout the code?

The current values of the global parameters are
Since no 'include'-statements are used, these parameters have to be changed throughout the code. In which subroutines a parameter occurs you can check by the following commands (here looking up the parameter mwords):
grep 'mwords=' ./src/*/*.f
grep 'mwords=' ./src/*/*/*.f
grep 'mwords=' ./src/*/*/*/*.f
Based on perl, we provide scripts which can change the global parameters throughout the entire code.
Version 1.* of AMRCART:

In order to change, e.g., the parameter `mwords' from 20'000'000 to 1'000'000, you should edit the Makefile of the main directory. Fill the entry 'change' as follows:
perl -pi -e 's/mwords=20000000/mwords=1000000/g' *.f;\
Invoking the 'change' target then globally changes the parameter and re-compiles the code (with the flags given in the Makefiles of the sub-directories). To suppress the re-compiling, omit the statement `make' in the above line of the Makefile.
Version 2.* of AMRCART:

In order to change, e.g., the parameter `mwords' from 20'000'000 to 1'000'000, you should edit the Makefile of the main directory. Edit the settings of oldstring and newstring in the header of the file; in this example set:
The corresponding make target then globally changes the parameter and re-compiles the code (with the dbx-flags 'FFLAGS' set in the header of the Makefile), whereas the command
only exchanges the strings. Consult for this also the section on compiling.
3.5 Changing global parameters when restarting the simulation

It is possible to increase the values of global parameters before restarting the code from a checkfile dumped by the code in a previous run.
For the parameter `mwords' this is easy: just increase its value, re-compile the code, and restart the simulation.
When changing other parameters, the procedure is slightly more complicated.
1. change the parameter as described in Section 3.4
2. in the subroutine ./src/amr/restart.f set the corresponding local parameter (maxlvo, maxgro, mairro, malspo, etc.) to the old value of the parameter.
3. compile the code.
4. restart the simulation.
NOTE : If you eventually want to restart your simulation again, you have to adapt the values of the local parameters (i.e. maxlvo, maxgro, mairro, malspo) to the new values.
5. Source code
Each subroutine or function corresponds to exactly one file. The name of the file is the name of the subroutine plus an extension indicating the version of the code. E.g., the filename of the subroutine physbd is physbd_hydro_3d.f in the 3D hydro version.
./src/amrcart.f or ./src/amrmhd.f
Contains the heart of the code. Not necessary to touch for running a problem.
All related to boundary conditions (see section 8.3).
Problem dependent initialization routines. Several different problems can be implemented simultaneously (see section 8.2).
All about integration.
Different integrators for the conservation law (e.g. Euler equations, MHD equations).
Different integrators to handle ordinary differential equations (ODE). Used for the integration of the source terms.
Integrator to solve for the Poisson-equation (only 1D so far).
Integrator and routines describing the sources.
Output-routines for the formats we support (see section 4 and section 10 for details)
Routines describing the source-terms of the equations.
Some auxiliary routines, problem dependent or for debugging purposes
There are makefiles in the main directory as well as in each subdirectory of the tree. All subroutines in one subdirectory can be catted into one file by executing the command 'make cat' in the corresponding subdirectory.
The entire code can be catted to one file by executing the command 'make cat' (Versions 1.*) or 'make one_file' (Versions 2.*) in the main directory of the tree (in ./).
6. Compiling the code
The generated executables can be started directly or with a script (see section 7).
6.1 Makefile

In version 1, the optimized and the debugging version of the code are generated by running the csh-scripts 'makeopt' and 'makedbx'. The scripts invoke make; however, all makefiles in the entire ./src-tree as well as in the working directory must be consistently edited by hand.
From version 2 on, all makefiles are generated automatically. There are 3 files.
Thus, it is only necessary to edit the header of the main Makefile. There you must set the name of your compiler and the compiler flags you wish to use.
So far, the following make-commands are implemented. Type 'make <command>' to invoke them.
7. How to start a simulation?
We strongly recommend starting a simulation with the provided script 'run.single_sim'. This automatically provides you with a run-logfile 'simulation.logfile.$DATE'. Some other useful information (date, machine, code version) is written into the code-logfile 'tape6' as well.
The script 'run.single_sim' can be edited to provide the correct flags or nice parameters with which you want to run the code.
We provide scripts (see the A-MAZE home-page) which automatically process a series of jobs. The scripts are also able to automatically annotate the output files, ftp them to another machine, process graphics, and store the data-files on a mass-storage device.
You may start a debugging session with the dbx-debugger using the script 'rundbx'. The scripts guarantee that all source files are found as long as you leave the tree of the source files untouched.
8. How to implement a problem?
8.1 Already implemented problems
You can select one of these problems by setting the variable 'iprob' in the input file 'tape5' to the corresponding problem number. For example, setting iprob=4 in 'tape5' makes the code simulate the wind dynamics in a binary star system.
For each problem, there are associated input-files containing its physical parameters. These input-files of the different problems are located in associated directories
For the problem number 4, there is one single input-file: 'input4.binary_wind'.
After having read the specific input-file, the code does some pre-processing in the subroutine ./src/InitializationModule/init4.f (e.g. setting correct boundary conditions at the domain boundaries and the stars, normalizing the variables, setting switches determining which sources should be computed, etc.). After having set the initial conditions in the subroutine ./src/InitializationModule/finit.f, the code enters the generic part and executes the problem.
8.1 A new problem?

To implement a new problem, select an unoccupied problem number and set
- the initial conditions (see section 8.2 'How to implement initial conditions?')
- the boundary conditions (see section 8.3 'How to implement boundary conditions?')
- the sources (see section 8.4 'How to implement source terms?')
appropriately. The code should then be able to process your problem.
Below, you find some notes on how to implement a new problem. Studying an already implemented problem may greatly ease the implementation of a new problem.
8.2 How to implement initial conditions

This point concerns the routines:
./src/InitializationModule/init.f and ./src/InitializationModule/init#.f,
where # is the problem number (optional preprocessing).
In all pre-implemented problems we use scaled units to compute. The numbers set in the initialization routines must correspond to these numbers. In particular, in all MHD codes, the magnetic field is scaled by a factor of 1/sqrt(4 pi)!
A. Initialization

The initial conditions are set in the subroutine ./src/InitializationModule/finit.f.
The initialization starts below the label corresponding to the problem number, e.g. the initialization for the supernova remnant starts below the statement
c # Sedov point blast problem
The corresponding problem labels are selected by the first statement of this routine
go to (1,2,3,4) iprob
where the variable iprob is set in the input-file 'tape5' accordingly to the problem number.
This subroutine is called for each block of any level of refinement once or twice (in the case of refinement).
The data of the blocks are filled by loops like
do 110 l = 1,maxlp1
do 110 j = 1,maxjp1
do 110 i = 1,maxip1
val(i,j,l,1) = ...
val(i,j,l,2) = ...
val(i,j,l,nvar) = ...
where maxip1, maxjp1, maxlp1 stand for the integer-lengths of the blocks. The coordinates of the cell-centers (x,y,z) can be constructed by
x = corn1 + (i-1.5)*dx
y = corn2 + (j-1.5)*dy
z = corn3 + (l-1.5)*dz
where corn1, corn2, corn3 stand for the coordinates of the lower, left, front corner of the blocks and i,j,l denotes the cell numbers in x-, y-, and z-direction.
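A quick numeric check of the cell-center formula (Python sketch; the corn1 and dx values are arbitrary):

```python
# Check of the cell-center formula x = corn1 + (i - 1.5)*dx used in
# finit: with one ghost cell filled, the cell with index i = 1 lies
# half a cell outside the block corner.
corn1, dx = 0.0, 0.1

def x_center(i):
    return corn1 + (i - 1.5) * dx

print(x_center(1))   # -0.05: ghost-cell center at corn1 - 0.5*dx
print(x_center(2))   # 0.05: first interior cell, half a cell inside
```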
NOTE 1 :
When entering the subroutine, the calling routine passes the generic variables corn1, corn2, corn3, dx, dy, dz, maxip1, maxjp1, maxlp1.
Problem-dependent parameters and variables can be set directly in this subroutine. However, we strongly recommend writing your own pre-processing initialization routine, following the already implemented problems. Relevant parameters may then be passed from the pre-processing routine to finit by a common block /indat#/, where # stands for the problem number.
In most of the already implemented problems we use scaled units. Thus the numbers set in finit do not necessarily correspond to any of the customary units used in astrophysics.
Consult also the comment lines in the subroutine itself for further details.
NOTE 2 :
In `finit' you fill one boundary ghost-cell as well. Thus the cell-center with x-integer-index i=1 is located at corn1-0.5*dx.
NOTE 3 :
As already mentioned, more boundary cells are needed at each block-interface to integrate one block. Thus, in all the integration routines, the mesh is constructed as
x = corn1 + (i- lWIDTH -0.5)*dx
y = corn2 + (j- lWIDTH -0.5)*dy
z = corn3 + (l- lWIDTH -0.5)*dz
where lWIDTH is the number of boundary cells.
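A small consistency check (Python sketch, arbitrary values): for lWIDTH = 1 this formula reduces to the formula used in finit above.

```python
# For lWIDTH = 1 the integration-routine formula
#   x = corn1 + (i - lWIDTH - 0.5)*dx
# reduces to the finit formula
#   x = corn1 + (i - 1.5)*dx.
corn1, dx, lwidth = 0.0, 0.1, 1

for i in range(1, 5):
    x_integ = corn1 + (i - lwidth - 0.5) * dx
    x_finit = corn1 + (i - 1.5) * dx
    assert abs(x_integ - x_finit) < 1e-14
```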
B. Pre-processing (./src/InitializationModule/init.f and ./src/InitializationModule/init#.f)

Perhaps you want to do some pre-processing, e.g. reading physical input variables from a file, scaling the variables appropriately, or writing the input parameters to a file. Following the already implemented problems, you can do this in a subroutine ./src/InitializationModule/init#.f, where # stands for the problem number.
You then must call your newly created pre-processing subroutine from ./src/InitializationModule/init.f. We strongly suggest using the format already defined by the implemented problems when making this call. We also suggest passing the relevant output from your preprocessing routines to the initialization routine finit by a common block. Again, take one of the already implemented problems as a guideline.
If you use graphics provided by the A-MAZE package, i.e. the IDL or NCAR-graphics macros or the AVS/express data-reader ReadADG3, you have to include some well-formatted problem-dependent headers in the plotfile 'plot.3'. Again, you may do this in a routine ./src/InitializationModule/init#.f. The headers are dumped by
write(3,'(a)') '*PLANAR RIEMANN-PROBLEM : '
write(3,'(a)') '*PHYSICAL INFORMATION '
Note that the scale-factors 'sfac' have an ordering for which you have to consult one of the already implemented preprocessing routines. Via the 'physpar' array some other important quantities can be passed to the graphics. In particular, the following should be set:
physpar(2,18) = sgascon
physpar(2,19) = amass
physpar(2,20) = gamma
8.3 How to implement boundary conditions

There are two different kinds of boundaries: domain boundaries along the edges of the computational domain, and body boundaries along physical bodies within the domain. Note:
In all pre-implemented problems we use scaled units to compute. The numbers set in the boundary routines must correspond to these numbers. In particular, in all MHD codes, the magnetic field is scaled by a factor of 1/sqrt(4 pi)!
8.3.1 Domain boundaries

These are set in the subroutines
physbd treats the already implemented boundary conditions: reflecting, free-streaming (no gradients in the conserved variables), and periodic.
These boundary conditions can be set by setting the corresponding logicals of the common block /phbdpa/ to .true. (reflecting, free-streaming) or to the correct values (periodic). As an example, you may consult the files
along with the subroutine setting the boundary,
Inflow boundary conditions or any other conditions must be defined in
8.3.2 Body boundaries

Boundaries of a body sitting in the computational domain are technically much more difficult to handle than domain boundaries. So far, we have implemented supersonic wind-shedding boundaries in ./src/BoundaryModule/bndwind.f (HYDRO: 2D/3D) and accreting boundaries (HYDRO: 2D/3D) in ./src/BoundaryModule/bndacc.f. See the details in these routines or construct new ones which fit your problem.
In 2D only, a more advanced boundary treatment based on irregular, body-fitted cells is implemented. We do not further comment on this possibility here.
8.4 How to implement source terms

Several source-terms are already implemented (see below).
In all pre-implemented problems we use scaled units to compute. The numbers set in the following routines must correspond to these numbers. In particular, in all MHD codes, the magnetic field is scaled by a factor of 1/sqrt(4 pi)!
To implement your particular source term:
- Design your own subroutine computing this source.
- call your subroutine within the subroutine
All sources are collected in the array 'psi(mitot,nvar)'. After your subroutine has been executed, feed the computed sources into this array by psi(.,.) = psi(.,.) + your source. 'psi' is then passed to the ODE-integrator.
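The accumulation pattern can be sketched in Python (the cooling law, the array sizes, and all names here are hypothetical illustrations, not AMRCART code):

```python
# Sketch of the source accumulation: each source routine adds its
# contribution into psi(mitot, nvar) before the ODE integrator is
# called. The cooling law and all names are made up for illustration.
mitot, nvar = 8, 5
psi = [[0.0] * nvar for _ in range(mitot)]

def my_cooling_source(val):
    """Hypothetical source: radiative losses acting on the energy
    (here: last) conserved component only."""
    src = [[0.0] * nvar for _ in range(mitot)]
    for i in range(mitot):
        src[i][nvar - 1] = -0.1 * val[i][nvar - 1]
    return src

val = [[1.0] * nvar for _ in range(mitot)]
source = my_cooling_source(val)
for i in range(mitot):              # psi(.,.) = psi(.,.) + your source
    for k in range(nvar):
        psi[i][k] += source[i][k]
# psi is then handed to the ODE integrator
```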
For details see the already implemented source routines.
The following sources are already implemented. They are switched on by setting the corresponding variable l_cool, l_heat, or l_sgv of the common-block /lsourc/ to one, or by setting the number of plane-parallel forces, nplanforce, or the number of radial forces, nradforce, appropriately.
All these variables are set to 0 by default.
9.1 Time-marching scheme

The main driver routine for the time marching is ./src/amr/tick.f.
This routine organizes:
We refer to the papers of Marsha Berger for the general description of the mesh refinement part.
9.1.1 Integration

Subroutine ./src/amr/advanc.f is the driver for the integration of all data-blocks of a particular level of refinement.
All blocks of this level of refinement are integrated one after the other (loop at label 5 in subroutine ./src/amr/advanc.f). In principle, this loop could be executed in parallel. (Note, however, a small data-dependence in subroutine ./src/amr/fluxad.f in the current implementation.)
Boundaries for each block must be constructed on the basis of other blocks of the same level of refinement or blocks carrying coarser grids. This is done before the integration in the loop carrying label 3 in ./src/amr/advanc.f.
The driver for the integration of one block is ./src/amr/stepgrid.f. A fractional step scheme is used to solve the inhomogeneous Euler or MHD equations: sources and conservation laws are integrated alternatingly. See section 9.2 for more details on the applied integrators.
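The idea of such a fractional step scheme can be sketched on a scalar model problem (Python; the model equation and all names are our illustration, not AMRCART code):

```python
# Minimal sketch of a fractional-step scheme: advance the homogeneous
# conservation law and the source ODE alternatingly over each step dt.
# The scalar model (constant state, decay rate k) is purely illustrative.
import math

def cl_step(u, dt):
    """Stand-in for the conservation-law integrator (here trivial:
    a spatially constant state is unchanged by advection)."""
    return u

def source_step(u, dt, k=0.5):
    """Stand-in for the source ODE u' = -k*u, integrated exactly."""
    return u * math.exp(-k * dt)

u, dt = 1.0, 0.1
for _ in range(10):               # sources and conservation law
    u = cl_step(u, dt)            # are integrated alternatingly
    u = source_step(u, dt)
print(u)                          # ~ exp(-0.5) after t = 1
```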
9.1.2 Regridding

The driver ./src/amr/regrid.f organizes the regridding of blocks at level `lev' after `kcheck' time-steps of the next coarser level `(lev-1)'. The variable `kcheck' can be fixed in the input-file `tape5'. See the references in Section 11 for strategies of regridding and for error-estimation as a basis for adaptive meshes.
9.1.3 Updating

The driver ./src/amr/update.f adjusts (makes consistent) the fluxes at coarse-grid -- fine-grid boundaries. See the references in Section 11 on how to ensure conservation on grid boundaries.
9.2 Integrators

In principle, you can exchange any of the implemented integrators for integrators of your choice.
9.2.1 Conservation law

The integration of the conservation law is organized by the driver ./src/cl_integrator/cl_solv.f. Presently, the following methods to integrate the conservation law are implemented.
- Multidimensional high-resolution finite volume integrator by Philip Colella based on the exact solution of Riemann-problems (hydro codes only, so far).
- Multidimensional high-resolution finite volume integrator by Randy LeVeque based on approximate or exact Riemann solvers (only 2D MHD, so far).
- High-resolution finite volume integrator with dimensional splitting based on a modified Lax-Friedrichs method proposed by Barmin, Kulikovskiy, and Pogorelov (all codes).
NOTES on the MHD version.
- The question of divB=0 is discussed in Section 9.3.
- Based on very limited experience only, we notice big problems with Riemann-solver based integrators, in particular in connection with strong shocks (carbuncle, odd-even decoupling, etc.).
All the integration routines to solve the conservation law can be found in ./src/cl_integrator/ (general routines).
l_clmeth=1 : Multidimensional integrator by Colella (hydro codes only)
l_clmeth=1, to be set in input-file `tape5'.
Multi-dimensional finite volume method based on the exact solution of Riemann problems (the algebraic equation determining the intermediate states of the Riemann problem is solved by a Newton method).
Reference: Philip Colella
Multidimensional upwind methods for hyperbolic conservation laws
Journal of Computational Physics 87, 171 (1990)
l_clmeth=2 : Multidimensional integrator by LeVeque (2D flat MHD only (nvarbas=6))
l_clmeth=2, to be set in input-file `tape5'.
Multi-dimensional finite volume method based on the exact or approximate solution of Riemann problem and wave propagation.
Randall J. LeVeque
Wave propagation algorithms for multi-dimensional hyperbolic systems
Journal of Computational Physics, 131, 327-353 (1997).
l_clmeth=-1 : Modified Lax-Friedrichs method based on dimensional splitting
l_clmeth=-1, to be set in input-file `tape5'.
Reference: A.A. Barmin, A.G. Kulikovskiy, N.V. Pogorelov
Shock Capturing Approach and Nonevolutionary Solutions in Magnetohydrodynamics,
Journal of Computational Physics 126, 77-90 (1996)
9.3 The question of div B = 0

Three different methods have been proposed to cope with the condition divB=0.
- staggered grids (e.g. ZEUS code by Stone and Norman)
- projection methods
- conservative formulation including source terms
We use the last possibility, mainly because it is by far the simplest method to implement in connection with adaptive grids and general geometries. In this method, divB is zero only up to truncation errors. To guarantee that divB does not explode, it is essential to treat the source terms in the equations appropriately.
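For reference, the source-term form proposed by Powell et al. (cited below) reads, in our transcription for the conserved state (rho, rho u, E, B); consult the original paper for the precise statement:

```latex
% Powell et al. 8-wave source-term form (our transcription of the
% cited paper); F denotes the usual ideal-MHD flux tensor.
\frac{\partial}{\partial t}
\begin{pmatrix} \rho \\ \rho\mathbf{u} \\ E \\ \mathbf{B} \end{pmatrix}
+ \nabla\cdot\mathbf{F}
= -\,(\nabla\cdot\mathbf{B})
\begin{pmatrix} 0 \\ \mathbf{B} \\ \mathbf{u}\cdot\mathbf{B} \\ \mathbf{u} \end{pmatrix}
```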
This method is implemented in the main driver routine of the conservation law integration ./src/cl_integrator/cl_solv.f by calling ./src/cl_integrator/sou_bmono.f and ./src/cl_integrator/get_divb.f.
- K.G. Powell, P.L. Roe and T.J. Linde
A solution adaptive upwind scheme for ideal MHD
Journal of Computational Physics 154, 284-309, 1999
- S.A.E.G. Falle, S.S. Komissarov and P. Joarder
A multidimensional upwind scheme for MHD
Monthly Notices of the Royal Astronomical Society, 297, 265-277, 1998
9.4 ODE integrators

So far, a second-order Runge-Kutta method is implemented to integrate the source terms (./src/ode_integrator/rk_2nd.f). This routine can easily be exchanged for any other ODE-integration routine, e.g. an implicit routine.
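A second-order Runge-Kutta step of the midpoint type can be sketched as follows (Python; this is a generic illustration and may differ in detail from rk_2nd.f):

```python
# Generic second-order Runge-Kutta (midpoint) step for u' = f(t, u),
# tested here on the decay equation u' = -u with exact solution exp(-t).
import math

def rk2_step(f, u, t, dt):
    """One midpoint-rule step: second-order accurate in dt."""
    k1 = f(t, u)
    k2 = f(t + 0.5 * dt, u + 0.5 * dt * k1)
    return u + dt * k2

u, t, dt = 1.0, 0.0, 0.01
for _ in range(100):              # integrate u' = -u up to t = 1
    u = rk2_step(lambda tt, uu: -uu, u, t, dt)
    t += dt
print(abs(u - math.exp(-1.0)))    # global error ~ O(dt^2)
```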
9.5 References

M.J. Berger and J. Oliger
Adaptive Mesh Refinement for Hyperbolic Partial Differential Equations,
Journ. of Comp. Phys. 53, 484--512, 1983

M.J. Berger
Data Structures for Adaptive Grid Generation,
SIAM J. Sci. Stat. Comput. 7, 904--916, 1986

M.J. Berger
On Conservation at Grid Interfaces,
SIAM J. Numer. Anal. 24, 967--984, 1987
M.J. Berger and P. Colella
Local Adaptive Mesh refinement for Shock Hydrodynamics
Journ. of Comp. Phys. 82, 64--84, 1989
Marsha J. Berger and Randall J. LeVeque
Adaptive Mesh Refinement using Wave-Propagation Algorithms for Hyperbolic Systems
SIAM J. Numer. Anal., 35, 2298-2316 (1998).
10. Graphics and Visualization
10.1 Interface to graphics packages
10.1.1 IDL (PV-WAVE) graphics for 1D visualization

We have developed `master1d', a series of IDL macros (which run under PV-WAVE as well) to visualize 1D simulations. These macros are self-explanatory and come with the AMRCART_1D download.
10.1.2 NCAR graphics for multi-d visualization

For 2D and 3D visualization, we provide modules based on the NCAR Graphics package (NCAR Graphics - UNIX Version 3.1.3a, 1991 and higher). You will need to install the NCAR Graphics package on your machine to make use of these subroutines. NCAR Graphics - from the US National Center for Atmospheric Research, Boulder, Colorado - is freeware under the GNU public license. It is available for several different Unix platforms and machine architectures. It can be downloaded from
NCAR Graphics provides routines to visualize 1D data as well. However, we have not yet written an interface to make use of them.
Our graphics macros based on the ncar-graphics package can be downloaded from the A-MAZE page.
10.1.3 AVS/express and AVS5

Together with Dr. Jean Favre of the Swiss Center of Scientific Computing (CSCS), Manno, Switzerland (http://www.cscs.ch), we have developed a series of modules for the commercial graphics packages AVS/express and AVS5. These modules support our multi-block data and allow fast interactive visualization as well as the production of videos. It is still under debate with CSCS Switzerland under which conditions these modules can be provided. Please let us know your interest via email. Before you send your request, you should have a license for the AVS/express developer edition. This license is not cheap, though.
10.2 Graphics Utilities

Together with this code we provide simple tools for quick visualization.
INTERPOLATE_REGULAR_ARRAY and EXTRACT_SINGLE_BLOCKS
See the Notes in these directories about how to use them.