Documentation of the AMRCART code

A brief glossary by
Doris Folini and Rolf Walder







Contents

1. Introduction
2. Basics
3. Data-structure, variables, and resources
4. I/O-files
5. Source code
6. Compiling the code
7. How to start a simulation?
8. How to implement a problem?
9. Algorithms
10. Graphics



1. Introduction

This brief glossary, How to run AMRCART?, is intended to support colleagues in the use of the adaptive mesh hydro- and mhd-codes which are part of the A-MAZE code-package. In addition to this glossary, the interested reader may also consult the A-MAZE web-pages, which cover a variety of research topics investigated with the help of AMRCART as well as a series of talks on the implemented algorithms and graphics concepts.

The new user may first run the code without the adaptive mesh, which makes it as handy as any other hydro code. To familiarize the new user with the handling of the code, with the input- and output-data formats, and with the graphics interface, we provide a couple of exercises to work through first. The exercises comprise some well-known test examples, like Sod's shock-tube problem, as well as some astrophysical examples.

Even users who already have extensive experience with hydro- and mhd-simulations, as well as with code-handling, will need some time to make full use of all the different options of the codes. With some 30'000 lines and about 150 subroutines (3D-version), the code has considerable complexity, mostly due to the adaptive mesh with its associated data-structure and run-time data-management. With this glossary, we hope to provide some support for the use of AMRCART in research and teaching. We wish you much fun and success in your research.

We would like to ask everybody who makes use of the codes, in research or teaching, to refer to the authors and to the following paper:

Rolf Walder and Doris Folini
A-MAZE: A code package to compute 3D magnetic flows, 3D NLTE radiative transfer, and synthetic spectra
In: H.J.G.L.M. Lamers and A. Sapar (eds),
Thermal and Ionization Aspects of Flows from Hot Stars: Observations and Theory,
ASP Conference Series 204, 281-284, 2000.

We also appreciate being included on the author list of the corresponding publications if it is felt that the code has contributed considerably to the results. This is, however, not mandatory.


2. Basics

AMRCART is written in standard Fortran 77 and should run without changes to the source code on most computational platforms. So far, the code has been used on workstations (DEC-alpha, SUN, HP, SGI), PCs running under LINUX, as well as high-performance computers such as CRAY, NEC, and IBM SP RS/6000.
The code also compiles without problems under Fortran 90/95 compilers when the flags for Fortran 77 source code are set.

NOTE 1 Throughout the code, variables are implicitly typed in the 'old' Fortran way:

implicit double precision(a-h,o-z)

Here, 'implicitly' means that variables whose names start with the letters i,j,k,l,m,n are integers; all other variables are double precision.
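
For illustration, a minimal sketch of the consequence of this rule (the variable names are chosen for this example only):

c     xmass starts with 'x' and is thus double precision
      xmass = 1.5d0
c     ncell starts with 'n' and is thus an integer
      ncell = 20
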
NOTE 2 Some compilers may give warning-messages related to type-mismatches in subroutine calls. Just ignore them or use the corresponding flags to suppress them. Despite these warnings, the code works flawlessly. See section 3.1 for more details on this issue.
NOTE 3 The size of a floating point number is 8 bytes (double precision). The size of an integer is also 8 bytes! Always compile the code with the '-i8' flag (or its equivalent)! Please consult as well section 6 and the provided makefiles, which include correct compiler flags for a variety of common compilers.
NOTE 4 Before using any of the scripts described in this glossary you must adapt the shell-paths to fit your machine.


Although the basic routines of the code are identical for the MHD- and the hydro-version, and similar for the 1D, 2D, and 3D versions, we so far prefer to provide them separately for each version.


3. Data-structure, variables, and resources

3.0 Global parameters and variables
3.1 Data storage and -management
3.2 Linked list of blocks
3.3 Irregular cells
3.4 Change of global parameters
3.5 Changing parameters when restarting

AMRCART uses a hierarchical block-structured mesh, together with time-dependent adaptive mesh refinement. More details on the algorithms and hierarchical block structured meshes can be taken from section 9, from the slides of some lectures in the frame of an A-MAZE workshop, as well as from the literature given in section 9 and on the A-MAZE home page.

Adaptive meshes require run-time allocation of memory. The hierarchical block-structured mesh requires the use of pointers. Objects and classes would facilitate the programming.

Fortran 77 provides none of these commodities. Fortran 90, C, or C++ would do better here in some respects. However, so far we have found it too costly to rewrite the codes. The shortcomings of Fortran 77 can be overcome to some degree by consistent object-oriented programming (as far as possible), by explicitly programming some sort of run-time memory allocation, and by linked objects. We have tried our best to follow these guidelines.

3.0 Global parameters and variables

Global parameters and variables are declared in every subroutine. We do not make use of any include-statements, although nowadays most f77-compilers would support this. We do, however, provide perl-scripts and targets in the Makefile which allow you to globally change the values of global parameters and common-block entries.

3.0.1 Global parameters

mwords number of allocated words in data array alloc.
Note: This limits the amount of stored data to 'mwords' words.
maxgr maximum number of blocks
maxlv maximum number of refinement levels
maxirr maximum number of irregular cells (`advanced body boundaries')
nloops maximum number of in-domain bodies
maxvar maximum number of conserved variables
maxlsp maximum number of different memory locations within `alloc'
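
Since no include-statements are used (see above), these parameters are repeated as parameter-statements in every subroutine. A minimal sketch of what such a declaration looks like, using the 3D default values listed in section 3.4:

      parameter (mwords=5000000)
      parameter (maxgr=2092, maxlv=20, maxirr=4189)
      parameter (nloops=10, maxvar=10, maxlsp=490)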

3.0.2 Global variables

A. Variables related to number of stored data

nvarbas number of basic conserved quantities
(e.g. =5 for 3D Euler flows, see below)
ncomp number of fluid components, always greater than or equal to 1
nvar total number of conserved quantities, nvar = nvarbas + (ncomp-1)
npoiss number of quantities in connection to the Poisson-solver
(only in 1D so far)
ndata number of other quantities you decide to store
(only in 1D so far)

3.0.3 Global variables stored in common-blocks

A. General data-structure

Name task
/nodal/ block- and tree-related data and pointers
/calloc/ data + mesh
/space/ data-management
/stats/ float-statistics
/print/ governs output into logfile

B. How to integrate

/tistep/ time-stepping
/f_clmeth/ which solver for conservation law
/coll_meth/ parameters for Colella integrator
/LF_meth/ parameters for modified Lax-Friedrichs integrator
/f_source/ parameters for source-term integrator

C. Governing physics and problem under consideration

/userdt/ some problem-dependent variables
/phys_mod/ which equations to integrate
/l_sourc/ which implemented source-terms to handle
/thermo_dyn/ flags, parameters and constants related to the thermodynamics
/isoth/ stores sound speed in isothermal computations
/PP_scal/ physical constants and parameters in code units

D. Mesh refinement

/ref_par/ parameters for automatic mesh-refinement
/refin/ parameters for mesh-refinement `by hand'

E. Boundaries

/boundar/ spherically symmetric bodies (e.g. stars)
/cirr/ advanced body-boundaries based on non-Cartesian cells (not documented so far)
/cloops/ ditto
/phbdpa/ domain-boundaries
/dorbit/ moving boundaries along the orbit of a binary star


3.1 Data storage and -management (alloc-array), memory requirements

3.1.1 Data storage: the `alloc'-array

A. Basics

AMRCART stores all (relevant) data in one big array alloc(mwords), which is part of the common block 'calloc'. Pointers refer to the addresses at which the data is stored. The pointers are stored in the node-array, which is part of the common block 'nodal'.

NOTE Integer data are also stored in the array alloc. That is why some compilers give warning messages indicating type-mismatches (see Section 2 and 3.1.1 C).
Again: These warnings are not relevant, since within the subroutines all integers are properly defined.

The `ordinary' user, however, need not be concerned with this peculiarity, since in all subroutines such a user normally has to touch, the data-array storing the conserved quantities has the form:

val(maxip1,maxjp1,maxlp1,nvar)
or
val(mitot,mjtot,mltot,nvar)

Here, nvar is the total number of conserved quantities, and mitot (maxip1), mjtot (maxjp1), mltot (maxlp1) are the sizes of the data-array in x-, y-, and z-direction respectively.

Suppose the data-block under consideration has, in x-direction, maxi-1 integration cells and maxi interfaces confining these cells.
1. maxi, together with the corresponding maxj (in y-direction) and maxl (in z-direction), is stored for each block in the node-array, which is part of the common block 'nodal'.
2. In the subroutines, these data are stored in the following way:

All routines with the exception of the integration routines

Data-array contains one additional boundary cell at each side
(maxip1 = maxi + 1, maxjp1 = maxj + 1, maxlp1 = maxl + 1)
maxip1 number of cells in x-direction
maxjp1 number of cells in y-direction
maxlp1 number of cells in z-direction
Integration routines

lWIDTH boundary cells are needed on each side: mitot = maxip1 + 2*(lWIDTH-1).
Typically, lWIDTH is equal to 2 for a second-order integration routine
of the conservation law.
mitot maximum number of cells in x-direction
mjtot maximum number of cells in y-direction
mltot maximum number of cells in z-direction


Note:
nvarbas is the number of conserved quantities necessary for describing the dynamics of a single-component fluid.
ncomp is the number of fluid components. It is always greater than (multi-component fluid) or equal to 1. nvar is the total number of conserved quantities, including possibly different species: nvar = nvarbas + (ncomp-1). For example, a two-component 3D polytropic hydro flow has nvarbas=5 and ncomp=2, and thus nvar=6.
For the different cases (hydro or mhd; 1D, 2D, or 3D), nvarbas has the following values:

HYDRO
Dimension   isothermal    polytropic EOS
1           nvarbas = 2   nvarbas = 3
2           nvarbas = 3   nvarbas = 4
3           nvarbas = 4   nvarbas = 5

MHD
Dimension   isothermal    polytropic EOS
1 *         nvarbas = 3   nvarbas = 4
2 *         nvarbas = 5   nvarbas = 6
3           nvarbas = 7   nvarbas = 8


NOTE * In 1D and 2D, MHD can also be computed with the full-size magnetic and velocity vectors (nvarbas=7 for the isothermal case, nvarbas=8 for the adiabatic case). This allows one to compute Alfven waves propagating in one or two dimensions. Alfven waves are suppressed if one restricts the vectors to live in 1 or 2 space dimensions only.


B. Order of storage

As already said, in the subroutines the conserved quantities are stored in the array

val(maxip1,maxjp1,maxlp1,.).

The conserved quantities are stored in the following order:

3D MHD (nvarbas=8)
val(.,.,.,1) density
val(.,.,.,2) x-momentum
val(.,.,.,3) y-momentum
val(.,.,.,4) z-momentum
val(.,.,.,5) x-field
val(.,.,.,6) y-field
val(.,.,.,7) z-field
val(.,.,.,8) Total energy
3D HYDRO (nvarbas=5)
val(.,.,.,1) density
val(.,.,.,2) x-momentum
val(.,.,.,3) y-momentum
val(.,.,.,4) z-momentum
val(.,.,.,5) Total energy
2D MHD (nvarbas=6)
val(.,.,.,1) density
val(.,.,.,2) x-momentum
val(.,.,.,3) y-momentum
val(.,.,.,4) x-field
val(.,.,.,5) y-field
val(.,.,.,6) Total energy
2D HYDRO (nvarbas=4)
val(.,.,.,1) density
val(.,.,.,2) x-momentum
val(.,.,.,3) y-momentum
val(.,.,.,4) Total energy
1D MHD (nvarbas=4)
val(.,.,.,1) density
val(.,.,.,2) x-momentum
val(.,.,.,3) x-field
val(.,.,.,4) Total energy
1D HYDRO (nvarbas=3)
val(.,.,.,1) density
val(.,.,.,2) x-momentum
val(.,.,.,3) Total energy


NOTE 1 In MHD, you have the possibility to compute the full 3D velocity- and magnetic field even in 1D or 2D calculations (see also the note above). If you want to do that, just set

nvarbas=8.

Then use the numbering of the variables as indicated in the 3D MHD case.
NOTE 2 In isothermal computations, the storage of the energy is omitted.
NOTE 3 val(.,.,.,nvarbas+i), where i=1,...,nvar-nvarbas contains the relative mass-fraction of possible different fluid components.
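
As an illustration of this ordering, a minimal sketch recovering primitive quantities from the conserved variables of a 3D hydro block (nvarbas=5; gamma denotes the adiabatic exponent and is assumed here for the example):

c     density and velocity components of cell (i,j,l)
      rho = val(i,j,l,1)
      vx = val(i,j,l,2)/rho
      vy = val(i,j,l,3)/rho
      vz = val(i,j,l,4)/rho
c     internal energy density: total energy minus kinetic energy
      eint = val(i,j,l,5) - 0.5d0*rho*(vx*vx + vy*vy + vz*vz)
c     pressure for a polytropic EOS
      p = (gamma - 1.d0)*eint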


C. The link between alloc and val

All data is generally stored in one big, one-dimensional array, the data array alloc, which is part of the common-block `calloc'. The data of one particular block within the adaptive tree is stored at a certain address `loc', which is stored in the node-array.

In all user-defined subroutines, the data array is visible as described in section 3.1.A. The link between the two forms is by a subroutine-call of the following form:

call sb(alloc(loc),mitot,mjtot,mltot,nvar,...),

with the following subroutine statements:

subroutine sb(val,mitot,mjtot,mltot,nvar,...)

implicit double precision(a-h,o-z)

dimension val(mitot,mjtot,mltot,nvar)

Thus, in alloc the data in x-direction is stored first, followed by the y-direction, the z-direction, and finally the quantities.
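
This is the usual Fortran column-major ordering. As a minimal sketch, the element val(i,j,l,m) of a block stored at address loc is found at

      index = loc + (i-1) + mitot*((j-1) + mjtot*((l-1) + mltot*(m-1)))
      value = alloc(index)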


Important note
This very same procedure is used for integer data as well. After space for the array has been allocated, the subroutine is called by

call sb(alloc(iloc),mitot,mjtot,mltot,nvar,...),

with the following subroutine statements:

subroutine sb(ival,mitot,mjtot,mltot,nvar,...)

implicit double precision(a-h,o-z)

dimension ival(mitot,mjtot,mltot,nvar)

Again:
Most modern compilers give you a warning indicating a type-mismatch related to this procedure. However, since the integer data are only used within the subroutines, this warning is not relevant. See section 2 and section 3.1.1 A for further comments related to this issue.


D. How to allocate memory?

Generally, the procedure described below is only necessary for somebody who wants to dive deeper into the code.

You can allocate memory in the data-array `alloc' by the statement

loc = igetsp(nwords)

where nwords is the number of words you want to allocate (not to be confused with the global parameter mwords, the total size of alloc). The function `igetsp' returns the location 'loc' at which storage is possible. `igetsp' uses a simple 'look at the next place' strategy. If you want to access this memory in a different subroutine, you have to store the pointer location as a global variable.

Memory can be released by the statement

call reclam(loc,nwords)

Make sure that you properly release the memory when you do not need it anymore; otherwise you get a memory leak.

The data management is essentially done via a free list, 'lfree(maxlsp,2)', storing the already occupied positions of the data-array alloc as well as the number of words occupied behind these positions. Here, the global parameter `maxlsp' is the maximum number of data-bunches which can be stored simultaneously in the `alloc'-array.

This data structure is initialized in the subroutine ./src/AmrModule/stst1.f. Memory is allocated via the function ./src/AmrModule/igetsp.f. Memory is released by the routine ./src/AmrModule/reclam.f.
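
A minimal sketch of the complete allocate-use-release cycle for scratch space holding one block of data (sb is a placeholder subroutine, as in section 3.1.1 C):

c     number of words needed for one block
      nwords = mitot*mjtot*mltot*nvar
c     allocate the space, work on it, and release it again
      locs = igetsp(nwords)
      call sb(alloc(locs),mitot,mjtot,mltot,nvar)
      call reclam(locs,nwords)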

3.1.2 Memory requirements

Since the mesh is adaptive and time-varying, it is not easy to estimate the effective memory requirements. However, to start the simulation, you have to fix the value of the global parameter `mwords', since in Fortran 77 the arrays have to be dimensioned at compile time.

Due to the regular block-structure and the Cartesian grids, the geometry and the mesh require nearly no memory. Thus, essentially only the flow field of the different blocks needs to be stored.

'alloc' is by far the largest array. Thus, mwords is nearly equivalent to the entire memory occupied on the computer by the code.

Assume a single block with a size of 20x20x20 cells. Including two boundary ghost cells on each side, the effective block contains 24x24x24 cells. To store nvar conserved variables, (13'824*nvar) words or (110'592*nvar) bytes are needed for this block.

For estimating the memory requirements one should have the following in mind:
- Turning on the mesh-refinement requires storing all blocks twice (at time t and at time t+dt), with the exception of the blocks on the finest level of refinement.
- During the integration of the blocks of a particular level of refinement, the data of all blocks of this level are doubled. (So the whole storage required by this level is doubled.)
- The integration procedure of one block needs auxiliary memory of approximately 4-5 times the memory allocated for this block.


After estimating the memory requirements, you can set the value of the global parameter 'mwords' by the procedure described in section 3.4.
NOTE 1 The value of 'mwords' can be increased between runs of the simulation (see section 3.5). It can, however, not be decreased!
NOTE 2 On many computers, memory has to be shared with other users. These users will appreciate careful handling of your memory allocation.
NOTE 3 On some machines, e.g. most machines running under Unix, the main memory of your computer need not necessarily have this size. However, if the code starts to swap (make sure that the swap space is big enough), its performance usually shrinks dramatically. Make sure that at least the data of the blocks of the two finest levels of refinement can be loaded into the main memory.
NOTE 4 The checkfile fort.10 (see Section 4) will occupy about the same amount of memory (mwords*8bytes+small) on your disk. Please make sure that this amount is indeed available before you start the simulation.



3.2 Linked list of blocks (rnode- and node-array)

AMRCART uses a tree of Cartesian grids in the form of regular blocks. It constructs (within the frame you specify in the input-file `tape5') more and more blocks with finer and finer meshes.

3.2.1 Structure of the linked list of blocks

Blocks are adaptively allocated at different levels of refinement. Each block carries an arbitrary number, 'mptr'. The first block (with number mptr) at refinement-level 'level' is found by the statement

mptr = lstart(level).

Every next block of the same level of refinement (carrying number 'mptr1') is then found by the statement

mptr1 = node(10,mptr).

The list closes if node(10,mptr)=0. See further details of the array node in section 3.2.2.

A full traversal of all blocks is thus obtained by the following loop:

      do level = 1,maxlv
         mptr = lstart(level)
 10      continue
         if (mptr .ne. 0) then
c           ... operate on block mptr ...
            mptr = node(10,mptr)
            goto 10
         endif
      enddo


3.2.2 Block-linked data-array

There are essentially two nodal arrays, `rnode' and `node', which carry block-related data. Apart from them, there are the level-based arrays hxposs(maxlv), hyposs(maxlv), hzposs(maxlv), and kposs(maxlv), storing the x-, y-, and z-size of the Cartesian cells and the time step on the different levels of refinement.

A. The float part: rnode

All floats related to a block of any level of refinement are stored in the rnode-array. rnode is thus a 2-dimensional array, rnode(29,maxgr), where the global parameter maxgr is the maximum allowed number of blocks. We now define the first dimension of rnode, using the following syntax:

left minimum x
right maximum x
low minimum y
up maximum y
front minimum z
back maximum z


rnode(29,maxgr) in 3D

rnode(1 ,.) x-coordinate of left, lower, front corner of the block
rnode(2 ,.) y-coordinate of left, lower, front corner of the block
rnode(3 ,.) z-coordinate of left, lower, front corner of the block
rnode(4 ,.) x-coordinate of left, upper, front corner of the block
rnode(5 ,.) y-coordinate of left, upper, front corner of the block
rnode(6 ,.) z-coordinate of left, upper, front corner of the block
rnode(7 ,.) x-coordinate of right, upper, front corner of the block
rnode(8 ,.) y-coordinate of right, upper, front corner of the block
rnode(9 ,.) z-coordinate of right, upper, front corner of the block
rnode(10,.) x-coordinate of right, lower, front corner of the block
rnode(11,.) y-coordinate of right, lower, front corner of the block
rnode(12,.) z-coordinate of right, lower, front corner of the block
rnode(13,.) x-coordinate of left, lower, back corner of the block
rnode(14,.) y-coordinate of left, lower, back corner of the block
rnode(15,.) z-coordinate of left, lower, back corner of the block
rnode(16,.) x-coordinate of left, upper, back corner of the block
rnode(17,.) y-coordinate of left, upper, back corner of the block
rnode(18,.) z-coordinate of left, upper, back corner of the block
rnode(19,.) x-coordinate of right, upper, back corner of the block
rnode(20,.) y-coordinate of right, upper, back corner of the block
rnode(21,.) z-coordinate of right, upper, back corner of the block
rnode(22,.) x-coordinate of right, lower, back corner of the block
rnode(23,.) y-coordinate of right, lower, back corner of the block
rnode(24,.) z-coordinate of right, lower, back corner of the block
rnode(25,.) dx = hxposs(level)
rnode(26,.) dy = hyposs(level)
rnode(27,.) dz = hzposs(level)
rnode(28,.) dt = kposs(level)
rnode(29,.) current time
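
A minimal sketch of reading the geometry of a 3D block mptr from this array:

c     lower, left, front corner of block mptr
      xlow = rnode(1,mptr)
      ylow = rnode(2,mptr)
      zlow = rnode(3,mptr)
c     cell sizes, time step, and current time of the block
      dx = rnode(25,mptr)
      dy = rnode(26,mptr)
      dz = rnode(27,mptr)
      dt = rnode(28,mptr)
      time = rnode(29,mptr)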


rnode(12,maxgr) in 2D

rnode(1 ,.) x-coordinate of left, lower corner of the block
rnode(2 ,.) y-coordinate of left, lower corner of the block
rnode(3 ,.) x-coordinate of left, upper corner of the block
rnode(4 ,.) y-coordinate of left, upper corner of the block
rnode(5 ,.) x-coordinate of right, upper corner of the block
rnode(6 ,.) y-coordinate of right, upper corner of the block
rnode(7 ,.) x-coordinate of right, lower corner of the block
rnode(8 ,.) y-coordinate of right, lower corner of the block
rnode(9,.) dx = hxposs(level)
rnode(10,.) dy = hyposs(level)
rnode(11,.) dt = kposs(level)
rnode(12,.) current time


rnode(5,maxgr) in 1D

rnode(1 ,.) x-coordinate of left corner of the block
rnode(2 ,.) x-coordinate of right corner of the block
rnode(3,.) dx = hxposs(level)
rnode(4,.) dt = kposs(level)
rnode(5,.) current time

B. The integer part: node

All integers related to a block of any level of refinement are stored in the node-array. node is thus a 2-dimensional array: node(.,maxgr), where the global parameter maxgr is the maximum allowed number of blocks. In all dimensions, node(.,maxgr) is defined as follows:

node(1 ,.) not used in this version
node(2,.) pointer to temporary data storage (for integration)
node(3,.) not used in this version
node(4 ,.) nestlevel to which block belongs
node(5 ,.) number of cell-interfaces in x-direction (maxi)
node(6 ,.) number of cell-interfaces in y-direction (maxj) (not used in 1D)
node(7 ,.) number of cell-interfaces in z-direction (maxl) (not used in 1D/2D)
node(8 ,.) pointer to location of flow data of block (new time t+delta t)
node(9 ,.) pointer to location of flow data of block (old time t)
node(10,.) number of next block (grid) of this level
node(11,.) not used in this version
node(12,.) not used in this version
node(13,.) pointer to temporary data storage of coarsened grids
(for error estimation)
node(14,.) pointer to irreg array of block
node(15,.) pointer to boundary list (coarse grid aspects)
node(16,.) pointer to boundary list (fine grid aspects)
node(17,.) pointer to start of irregular list
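
A minimal sketch combining the node-array with the block traversal of section 3.2.1 (sb is again a placeholder subroutine; the stored data carry one ghost cell per side, see section 3.1):

      level = node(4,mptr)
      maxi = node(5,mptr)
      maxj = node(6,mptr)
      maxl = node(7,mptr)
c     pointer to the flow data at the new time t+dt
      loc = node(8,mptr)
      call sb(alloc(loc),maxi+1,maxj+1,maxl+1,nvar)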

3.3 Irregular cells

The bound for irregular cells (see Section 8.2.2) is given by the parameter maxirr. No further explanation on irregular cells is given in this glossary.

3.4 How to change the global parameters throughout the code?

The current values of the global parameters are
variable 3D 2D 1D
mwords 5000000 5000000 5000000
maxgr 2092 4092 4092
maxlv 20 20 20
nloops 10 10 10
maxvar 10 10 10
maxlsp 490 490 490
maxirr 4189 2933 ----


Since no 'include'-statements are used, these parameters have to be changed throughout the code. In which subroutines they occur you can check with the following commands (here looking up the parameter mwords):

grep 'mwords=' ./src/*/*.f
grep 'mwords=' ./src/*/*/*.f
grep 'mwords=' ./src/*/*/*/*.f

We provide perl-based scripts which can change the global parameters in the entire code.

Version 1.* of AMRCART:

In order to change e.g. the parameter `mwords' from 20'000'000 to 1'000'000, you should edit the Makefile of the main directory. Fill the entry 'change' as follows:

( ...
perl -pi -e 's/mwords=20000000/mwords=1000000/g' *.f;\
make) ;\


The command
'make change'
then globally changes the parameter and re-compiles the code (with the flags given in the Makefiles of the sub-directories). To suppress the re-compilation, omit the statement `make' in the above lines of the Makefile.

Version 2.* of AMRCART:

In order to change e.g. the parameter `mwords' from 20'000'000 to 1'000'000, you should edit the Makefile of the main directory: edit the settings of oldstring and newstring in the header of the file; in this example, set:

oldstring=20000000
newstring=1000000

The command
'make change_comp'
then globally changes the parameter and re-compiles the code (with the dbx-flags 'FFLAGS' set in the header of the Makefile), whereas the command
'make change'
only exchanges the strings. Consult also the section on compiling.

3.5 Changing global parameters when restarting the simulation

It is possible to increase the values of global parameters before restarting the code from a checkfile dumped by the code in a previous run.

For the parameter `mwords' this is easy: just increase its value, re-compile the code, and restart the simulation.

When changing other parameters, the procedure is slightly more complicated.

1. change the parameter as described in Section 3.4
2. in the subroutine ./src/amr/restart.f set the corresponding local parameter (maxlvo, maxgro, mairro, malspo, etc.) to the old value of the parameter.
3. compile the code.
4. restart the simulation.

NOTE: If you later want to restart your simulation again, you have to adapt the values of the local parameters (i.e. maxlvo, maxgro, mairro, malspo) to the new values.


4. I/O-files

Input Comments
tape5 Passes all technical parameters to the code:
mesh-size, time step, parameters for the different algorithms, etc.
input*: Input files for the physical parameters of the implemented problems.
Create your own file for your own problem.
Normally, this file is formatted as well.
fort.9: This is an optional file to restart from.
The file is only mandatory if the restart-flag is set to true in the file tape5.
This is a binary file by default and it has to be dumped by a previous run of AMRCART.

NOTE that different hardware platforms use different binary formats.



Output Comments
tape6 This is the code-logfile.
It is dumped in ascii format by default.
Nearly every subroutine throughout the code may write into this file.

NOTE: The amount of output written into the logfile depends on the variables `lprint', `lprint1', and `lprintr'. The value of these variables can be set in the file tape5.
You can find an example of a tape6 logfile in your AMRCART directory.
plot.3 This is the data file:
It contains information on the mesh and the data of all blocks on all levels of refinement.
The file is dumped in the routines
./src/output/basic.f (mesh-info)
./src/output/outvar.f (data of single block)
The dump is in ascii by default.
You can find an example of a plot.3 file in your AMRCART directory.
NOTE 1 You may write this file in binary format as well. Note, however, that the binary formats of your crunching and your post-processing machine should be the same or you should be able to convert the formats appropriately.
NOTE 2 More than one time-step can be dumped into this file.
NOTE 3 The size of this file can hardly be estimated in advance. It depends highly on the evolution of the simulation. If the parameter `mwords' (see section 3) is optimally adapted to the problem, the file can become as big as (mwords*(8bytes)/2) per dumped time-step.
fort.10 checkfile
Dumped in binary format by default by the subroutine ./src/amr/check.f (see also fort.9 in the section input-files)

NOTE:
This file will have a size slightly bigger than (mwords*8bytes) (See section 3).



5. Source code

You can find the source code in the subtree ./src/ .
NOTE:
Each subroutine or function corresponds to exactly one file. The name of the file is the name of the subroutine plus an extension indicating the version of the code, e.g. the filename of the subroutine physbd is physbd_hydro_3d.f in the 3D hydro version.

./src/amrcart.f or ./src/amrmhd.f
main program

./src/AmrModule/
Contains the heart of the code. It is not necessary to touch it for running a problem.

./src/BoundaryModule/
All related to boundary conditions (see section 8.3).

./src/InitializationModule/
Problem dependent initialization routines. Several different problems can be implemented simultaneously (see section 8.2).

./src/IntegrationModules/
All about integration.

./src/IntegrationModules/ClIntegrationModules
Different integrators for the conservation law (e.g. Euler equations, MHD equations).

./src/IntegrationModules/OdeIntegrationModules
Different integrators to handle ordinary differential equations (ODE). Used for the integration of the source terms.

./src/IntegrationModules/PoissonIntegrationModule
Integrator to solve for the Poisson-equation (only 1D so far).

./src/IntegrationModules/SrcIntegrationModule
Integrator and routines describing the sources.

./src/OutputModule/
Output-routines for the formats we support (see section 4 and section 10 for details)

./src/SourcesModule/
Routines describing the source-terms of the equations.

./src/UtilitiesModule/
Some auxiliary routines, problem dependent or for debugging purposes

There are makefiles in the main directory as well as in each subdirectory of the tree. All subroutines in one subdirectory can be catted into one file by executing the command 'make cat' in the corresponding subdirectory.
The entire code can be catted to one file by executing the command 'make cat' (Versions 1.*) or 'make one_file' (Versions 2.*) in the main directory of the tree (in ./).


6. Compiling the code

You can generate an executable of the code using the provided Makefiles and scripts. Note that there are makefiles in all subdirectories of the tree as well as in the main directory.

The generated executables can be started directly or with a script (see section 7).

NOTE 1 Before using the scripts described below you must adapt the shell-paths according to your machine setup.
NOTE 2 Before using the Makefiles you have to adapt the compiler-flags in the main makefile Makefile in the main directory (./).

Version 2 of AMRCART:
All makefiles in the src-tree are generated automatically by the perl-script list_script_make.
All files with the extension '.f' in the src-tree are taken as source files.

Version 1 of AMRCART:
All makefiles in the src-tree must be edited by hand: setting the compiler flags and defining the source files to be compiled and the object-files to be generated.

6.1 Makefile

In version 1, the optimized and the debugging version of the code are generated by executing the csh-scripts 'makeopt' and 'makedbx'. The scripts invoke make; however, all makefiles in the entire ./src-tree as well as in the working directory must be consistently edited by hand.

From version 2, all makefiles are generated automatically. There are 3 files.

filename location task
Makefile ./ (main dir) main Makefile, driver of code generation process
Makefile.src.template ./compile.scripts template for Makefile to compile the source files in all directories of the ./src-tree.
list_script_make ./compile.scripts script which correctly adapts the Makefile.src
- to the compiler-flags chosen in the main Makefile
- by including all files with extension .f in the compiling process.


Thus, it is only necessary to edit the header of the main Makefile. It is there where you must set the name of your compiler and the compiler flags which you wish to use.

So far, the following make-commands are implemented. Type 'make <command>' to invoke them.

opt_code Produces a fully optimized code with the sources found in the ./src-tree. Compiler flags $OFLAGS are used in the compilation and linking process. The name of the code is $CODE_OPT. $CODE_OPT and $OFLAGS are set in the header of the makefile.
By default $CODE_OPT is set to xhydro_3d.opt or xmhd_3d.opt for the 3D hydro- or mhd-code respectively.

$OFLAGS is preset for a series of architectures and different compilers. You must, however, set the environment-variable machine accordingly, e.g. 'setenv machine linux' or 'setenv machine alpha'.

At the same time, all source-files found in the ./src-tree are catted into the file cart.3d_mhd.f.$DATE (in case of 3D MHD) which can be found in the directory ./src . $DATE has the format YY:MM:DD.HH.MM,
where YY: year, MM: month, DD: day, HH: hour, MM: minutes.

NOTE
This command generates a code built with the $OFLAGS from scratch, using all source files in the ./src-tree. Thus, it makes no use of object-files possibly present in the ./src-tree. In this way, you can still build an optimized code although the generated object files may have been compiled with debugger options.
dbx_code Produces a debug (dbx)-version to be used together with your favorite debugger. The name of the code is $CODE_DBX, the flags used in the compiling process are $FFLAGS, the ones in the linking process the $LFLAGS. $CODE_DBX, $FFLAGS, $LFLAGS are set in the Makefile.
By default $CODE_DBX is set to xhydro_3d.dbx or xmhd_3d.dbx for the 3D hydro- or mhd-code respectively.

$FFLAGS and $LFLAGS are preset for a series of architectures and different compilers. You must, however, set the environment-variable machine accordingly: 'setenv machine linux' or 'setenv machine alpha'.

NOTE
This command generates a code relying on the object-files already present in the ./src-tree. Objects are automatically rebuilt if make detects that the source files are updated since the last compiling process.
objects produces object-files for each source-file individually. Compilation is by default with the debugging flags.
change_string exchanges $oldstring by $newstring in all source files of the src-tree. $oldstring and $newstring can be set in the head of the makefile.
change_comp exchanges $oldstring by $newstring in all source files of the src-tree and re-compiles all source-files with debugging flags. $oldstring and $newstring can be set in the head of the makefile.

Note:
although the source-files are re-compiled, you still have to relink the code.
rmobj Removes all object files in the src-tree.
one_file Cats all source-files of the src-tree in one file cart.3d_mhd.f_all (in the case of 3D MHD).



7. How to start a simulation?

To run the code you need an executable (e.g. xcart3.opt), the input-file 'tape5', the restart-file 'fort.9' (when restarting from a previous run), and possibly your problem-dependent input-file(s) (see section 4).

We strongly recommend starting a simulation with the provided script 'run.single_sim'. This automatically provides you with a run-logfile 'simulation.logfile.$DATE'. Some other useful information (date, machine, code version) is written into the code-logfile 'tape6' as well.

The script 'run.single_sim' can be edited to provide the correct flags or nice parameters with which you want to run the code.

HINT :
We provide scripts (see the A-MAZE home-page) which automatically process a series of jobs. The scripts are also able to automatically annotate the output files, ftp the output files to another machine, process graphics, and store the data-files on a mass-storage device.

NOTE :
You may start a debugging session with the dbx-debugger using the script 'rundbx'. The scripts guarantee that you find all the source files as long as you leave the tree of the source files untouched.


8. How to implement a problem?

8.1 Already implemented problems

iprob   code        dimensions   problem description
1       HYDRO       1D/2D/3D     Supernova remnant (Sedov-Taylor blast wave)
2       HYDRO/MHD   1D/2D/3D     Planar Riemann-problem (e.g. Sod's shock tube problem, colliding flows, piston-driven flow, etc.)
3       HYDRO/MHD   1D/2D/3D     Stellar wind into a uniform environment or predecessor wind
4       HYDRO       2D/3D        Binary wind dynamics: colliding winds and accretion
5       HYDRO       1D/2D/3D     Accretion onto a single star: variants of Bondi-Hoyle-Lyttleton accretion
8       HYDRO/MHD   2D/3D        Jet into environment
9       HYDRO/MHD   1D           Self-gravitating molecular cloud


You can select one of these problems by setting the variable 'iprob' in the input file 'tape5' equal to the corresponding problem number. For example, setting iprob=4 in 'tape5' makes the code simulate the wind-dynamics in a binary star system.

For each problem, there are associated input-files containing the physical parameters of the problem. These input-files are located in associated directories:
./input.problems/input_files.iprob=$iprob/

For the problem number 4, there is one single input-file: 'input4.binary_wind'.

After having read the specific input-file, the code does some pre-processing in the subroutine ./src/InitializationModule/init4.f (e.g. setting the correct boundary conditions at the domain boundaries and the stars, normalizing the variables, setting the switches determining which sources should be computed, etc.). After having set the initial conditions in the subroutine ./src/InitializationModule/finit.f, the code enters the generic part and executes the problem.

8.1.1 A new problem?

To implement a new problem, select an unoccupied problem number and set

- the initial conditions (see section 8.2 'How to implement initial conditions?')

- the boundary conditions (see section 8.3 'How to implement boundary conditions?')

- the sources (see section 8.4 'How to implement source terms?')

appropriately. The code should then be able to process your problem.

Below, you find some notes on how to implement a new problem. Studying an already implemented problem may greatly ease the implementation of a new problem.

8.2 How to implement initial conditions

This point concerns the routines :
./src/InitializationModule/finit.f
./src/InitializationModule/init.f and ./src/InitializationModule/init#.f,
where # is the problem number (optional preprocessing).

Note:
In all pre-implemented problems we compute in scaled units. The numbers set in the initialization routines must correspond to these units. In particular, in all MHD codes, the magnetic field is scaled by a factor of 1/sqrt(4 pi)!

A. Initialization

The initial conditions are set in the subroutine ./src/InitializationModule/finit.f.

The initialization starts below the label corresponding to the problem number, e.g. the initialization for the supernova remnant starts below the statement

1 continue
c # Sedov point blast problem


The corresponding problem labels are selected by the first statement of this routine

go to (1,2,3,4) iprob

where the variable iprob is set in the input-file 'tape5' accordingly to the problem number.

Note
This subroutine is called once for each block of any level of refinement, or twice in the case of refinement.

The data of the blocks are filled by loops like

      do 110 l = 1,maxlp1
         do 110 j = 1,maxjp1
            do 110 i = 1,maxip1

               val(i,j,l,1) = ...
               val(i,j,l,2) = ...
               ...
               val(i,j,l,nvar) = ...

  110 continue


where maxip1, maxjp1, maxlp1 stand for the integer-lengths of the blocks. The coordinates of the cell-centers (x,y,z) can be constructed by

x = corn1 + (i-1.5)*dx
y = corn2 + (j-1.5)*dy
z = corn3 + (l-1.5)*dz


where corn1, corn2, corn3 stand for the coordinates of the lower, left, front corner of the block and i,j,l denote the cell numbers in x-, y-, and z-direction.

NOTE 1 :
When entering the subroutine, the calling routine passes the generic variables like corn1, corn2, corn3, dx, dy, dz, maxip1, maxjp1, maxlp1.

Problem-dependent parameters and variables can be set directly in this subroutine. However, we strongly recommend writing your own pre-processing initialization routine, following the already implemented problems. Relevant parameters may then be passed from the pre-processing routine to finit by a common block /indat#/, where # stands for the problem number.

In most of the already implemented problems we use scaled units. Thus the numbers set in finit do not necessarily correspond to any of the handy units used in astrophysics.
Consult also the comment lines in the subroutine itself for further details.

NOTE 2 :
In `finit', you also fill one boundary ghost-cell on each side. Thus the cell-center with x-integer-index i=1 is located at corn1-0.5*dx.

NOTE 3 :
As already mentioned, more boundary cells are needed at each block-interface to integrate one block. Thus, in all the integration routines, the mesh is constructed as

x = corn1 + (i- lWIDTH -0.5)*dx
y = corn2 + (j- lWIDTH -0.5)*dy
z = corn3 + (l- lWIDTH -0.5)*dz


where lWIDTH is the number of boundary cells.
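
To tie the pieces of this subsection together, a minimal sketch of a finit-style initialization for 3D hydro (nvarbas=5): a uniform ambient state plus a dense sphere of radius r0 around the origin. The quantities rhoamb, rhosph, pamb, r0, and gamma are hypothetical example parameters.

c     fill the block, including one ghost cell per side
      do 110 l = 1,maxlp1
         z = corn3 + (l-1.5d0)*dz
         do 110 j = 1,maxjp1
            y = corn2 + (j-1.5d0)*dy
            do 110 i = 1,maxip1
               x = corn1 + (i-1.5d0)*dx
               r = dsqrt(x*x + y*y + z*z)
               rho = rhoamb
               if (r .lt. r0) rho = rhosph
               val(i,j,l,1) = rho
               val(i,j,l,2) = 0.d0
               val(i,j,l,3) = 0.d0
               val(i,j,l,4) = 0.d0
c              gas at rest: total energy = thermal energy only
               val(i,j,l,5) = pamb/(gamma - 1.d0)
  110 continue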

B. Pre-processing (./src/InitializationModule/init.f and ./src/InitializationModule/init#.f)

Perhaps you want to do some pre-processing, e.g. reading physical input variables from a file, scaling the variables appropriately, or writing the input parameters to a file. Following the already implemented problems, you can do this in a subroutine ./src/InitializationModule/init#.f, where # stands for the problem number.

You then must call your newly created pre-processing subroutine from ./src/InitializationModule/init.f. We strongly suggest using the format already defined by the implemented problems when making this call. We also suggest passing the relevant output from your preprocessing routines to the initialization routine finit by a common block. Again, take one of the already implemented problems as a guideline.

NOTE
If you use the graphics provided by the A-MAZE package, i.e. the IDL or NCAR-graphics macros or the AVS/express data-reader ReadADG3, you have to include some well-formatted problem-dependent headers in the plotfile 'plot.3'. Again, you may do this in a routine ./src/InitializationModule/init#.f. The headers are dumped by

      write(3,'(a)') '*PLANAR RIEMANN-PROBLEM : '
      write(3,'(a)') '*PHYSICAL INFORMATION '
      write(3,9000) (physpar(1,i),i=1,10)
      write(3,9000) (physpar(1,i),i=11,20)
      write(3,9000) (physpar(2,i),i=1,10)
      write(3,9000) (physpar(2,i),i=11,20)
      write(3,9000) (sfac(i),i=1,10)
      write(3,9000) (sfac(i),i=11,20)
      write(3,9000) (dinpar(i),i=1,10)
      write(3,9000) (dinpar(i),i=11,20)
 9000 format(10(e20.10))

Note that the scale-factors 'sfac' have an ordering for which you have to consult one of the already implemented preprocessing routines. Using the 'physpar' array, some other important quantities can be passed to the graphics. In particular, you should set

physpar(2,18) = sgascon
physpar(2,19) = amass
physpar(2,20) = gamma


8.3 How to implement boundary conditions

There are two different kinds of boundaries: domain boundaries along the edges of the computational domain, and body boundaries along physical bodies within the domain.

Note:
In all pre-implemented problems we compute in scaled units. The numbers set in the boundary routines must correspond to these units. In particular, in all MHD codes, the magnetic field is scaled by a factor of 1/sqrt(4 pi)!

8.3.1 Domain boundaries

These are set in the subroutines
./src/BoundaryModule/physbd.f,
and
./src/BoundaryModule/phbd_user.f.

physbd treats the already implemented boundary conditions: reflecting, free-streaming (no gradients in the conserved variables), and periodic.

These boundary conditions are chosen by setting the corresponding logicals of the common block /phbdpa/ to .true. (reflecting, free-streaming) or to the correct values (periodic). As an example, you may consult the files
./src/InitializationModule/init2.f,
./input.problems/input_files.iprob=2/input2.coll_wind_planar,
along with the subroutine setting the boundary,
./src/BoundaryModule/physbd.f.
Inflow boundary conditions or any other conditions must be defined in
./src/BoundaryModule/phbd_user.f.

8.3.2. Body boundaries

Boundaries of a body sitting in the computational domain are technically much more difficult to handle than domain boundaries. So far, we have implemented supersonic wind-shedding boundaries in ./src/BoundaryModule/bndwind.f (HYDRO: 2D/3D) and accreting boundaries (HYDRO: 2D/3D) in ./src/BoundaryModule/bndacc.f. See the details in these routines or construct new ones which fit your problem.

In 2D only, a more advanced boundary treatment based on irregular, body-fitted cells is implemented. We do not further comment on this possibility here.

8.4 How to implement source terms

Several source-terms are already implemented (see below).

Note:
In all pre-implemented problems we compute in scaled units. The numbers set in the following routines must correspond to these units. In particular, in all MHD codes, the magnetic field is scaled by a factor of 1/sqrt(4 pi)!

To implement your particular source term:

- Design your own subroutine computing this source.

- call your subroutine within the subroutine
./src/SourcesModule/source.f .

All sources are collected in the array 'psi(mitot,nvar)'. After your subroutine has been executed, feed the computed sources into this array by psi(.,.) = psi(.,.) + (your source). 'psi' is then passed to the ode-integrator.

For details see the already implemented source routines.
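
A minimal sketch of this accumulation step (the routine mysrc and the arrays q and src are hypothetical placeholders for your own source computation):

c     compute your own source terms from the current state q
      call mysrc(q,src,mitot,nvar)
c     accumulate them into psi, which is passed to the ode-integrator
      do 20 m = 1,nvar
         do 10 i = 1,mitot
            psi(i,m) = psi(i,m) + src(i,m)
 10      continue
 20   continue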

The following sources are already implemented. They are switched on by setting the corresponding variable l_cool, l_heat, or l_sgv of the common-block /lsourc/ to one, or by setting the number of plane-parallel forces, nplanforce, or the number of radial forces, nradforce, appropriately.
Note:
all these variables are set to 0 by default.

Optically thin radiative cooling in parameterized form (l_cool = 1)


dE/dt = - Lambda(T)*N^2 where N is the mean particle density and Lambda(T) the radiative loss function

The source term is computed in ./src/SourcesModule/s_cool.f with the radiative loss_functions defined in ./src/SourcesModule/s_rloss1.f, ../rloss2.f, or ../rlossc.f respectively. You can define your own radiative loss-function in a format which can be read by the routine ./src/SourcesModule/s_rlossr.f .

NOTE 1: This routine requires the universal gas-constant R (normalized to the mean atomic weight) in the units you use in the code:

p = R rho T = (k_B / mu) rho T

where k_B is the Boltzmann constant and mu the mean atomic weight in units of the hydrogen atomic weight.

The gas-constant and mu are variables of the common block /thermo_dyn/ and can be filled in your pre-processing routine 'init#'.

NOTE 2: You can define a threshold temperature below which cooling is switched off. If you want to choose this option, set the variable thr_cool of the common-block /coo_he/ to the appropriate value.
Radiative heating in parameterized form (l_heat = 1)


This is computed in ./src/SourcesModule/s_heat.f. Only a very simplified model is implemented: it just heats the gas up to a certain threshold. Again, set the variable thr_heat of the common-block /coo_he/ to the appropriate value if you want to choose this option.
NOTE: Also in this subroutine the gas-constant is required.

Constant forces in direction of a vector dir (mplanforce.ne.0)


This is computed in ./src/SourcesModule/s_planforce.f. You can define the forces in your pre-processing routine, using the common-block /planfrc/

parameter (nplanforce=10)
common /planfrc/ dir(3,nplanforce),ivmult(nplanforce),mplanforce

where
nplanforce : total number of possible planar forces
mplanforce : actual number of planar forces (e.g. mplanforce=1)
dir(3,.) : force vector in units of the simulation, e.g.
dir(1,1) = 0.d0
dir(2,1) = 0.d0
dir(3,1) = -20.d0
ivmult : scalar force-multiplier (e.g. ivmult(1) = 10)

After having defined the entries of this common-block, the code takes the implemented forces into account.

Gravitation with respect to different (point)-masses (mradforce.ne.0)


This is computed in ./src/SourcesModule/rgrav.f. You can define the forces in your pre-processing routine, using the common-block /radfrc/

parameter (nradforce=10)
common /radfrc/ rmass(nradforce),l_radforce(nradforce),mradforce

where
nradforce : total number of possible centers of gravity
mradforce : actual number of radial forces (e.g. mradforce=1)
l_radforce : =1, dummy for the moment
rmass : mass of the object imposing the force
Self-gravity (l_sgv = 1)


Turns on self-gravity (only in 1D so far).



9. Algorithms

9.1 Time-marching scheme

The main driver routine for the time marching is ./src/amr/tick.f .
This routine organizes:

- integration
- regridding
- updating

We refer to the papers of Marsha Berger for the general description of the mesh refinement part.

9.1.1 Integration

Subroutine ./src/amr/advanc.f is the driver for the integration of all data-blocks of a particular level of refinement.
All blocks of this level of refinement are integrated one after the other (loop at label 5 in subroutine ./src/amr/advanc.f). In principle, this loop could be executed in parallel. (Note, however, a small data-dependence in subroutine ./src/amr/fluxad.f in the current implementation.)

NOTE:
Boundaries for each block must be constructed on the basis of other blocks of the same level of refinement or blocks carrying coarser grids. This is done before the integration in the loop carrying label 3 in ./src/amr/advanc.f.

The driver for the integration of one block is ./src/amr/stepgrid.f . A fractional step scheme is used to solve the inhomogeneous Euler or MHD equations: sources and conservation laws are integrated alternatingly. See section 9.2 for more details on the applied integrators.

9.1.2 Regridding

The driver ./src/amr/regrid.f organizes the regridding of the blocks at level `lev' after `kcheck' time-steps of the next coarser level `(lev-1)'. The variable `kcheck' can be fixed in the input-file `tape5'. See the references in Section 9.5 for strategies of regridding and for error-estimation as a basis for adaptive meshes.

9.1.3 Updating

The driver ./src/amr/update.f adjusts (makes consistent) the fluxes at coarse grid -- fine grid boundaries. See the references in Section 9.5 on how to ensure conservation at grid boundaries.

9.2 Integrators

In principle, you can exchange any of the implemented integrators for integrators of your choice.

9.2.1 Conservation law

The integration of the conservation law is organized by the driver ./src/cl_integrator/cl_solv.f. Presently, the following methods to integrate the conservation law are implemented.

- Multidimensional high-resolution finite volume integrator by Philip Colella based on the exact solution of Riemann-problems (hydro codes only, so far).

- Multidimensional high-resolution finite volume integrator by Randy LeVeque based on approximate or exact Riemann solvers (only 2D MHD, so far).

- High-resolution finite volume integrator with dimensional splitting based on a modified Lax-Friedrichs method proposed by Barmin, Kulikovskiy, and Pogorelov (all codes).

NOTES on the MHD version.

- The question of divB=0 is discussed in Section 9.3.
- Based on very limited experience, we notice big problems with Riemann-solver based integrators, in particular in connection with strong shocks (carbuncle phenomenon, odd-even decoupling, etc.).

All the integration routines to solve the conservation law can be found in ./src/cl_integrator/ .

General routines:
cl_solv.f driver to solve the conservation law
reset.f checks whether negative density and/or pressure occurred after the time-update; stops execution or applies one of several 'methods of repair'. See input-file 'tape5'.



l_clmeth=1 : Multidimensional integrator by Colella (hydro codes only)

l_clmeth=1, to be set in input-file `tape5'.

Multi-dimensional finite volume method based on the exact solution of Riemann problems (the algebraic equation determining the intermediate states of the Riemann problem is solved by a Newton method).

Reference: Philip Colella
Multidimensional upwind methods for hyperbolic conservation laws
Journal of computational physics 87,171 (1990)

subroutines

methmulti.f driver (may be used for other multidimensional methods as well)
vrm.f vector (exact) Riemann solver
vctoprm.f translates from conserved to primitive variables
vprmtoc.f translates from primitive to conserved variables
vslopex.f computes limited slopes in x-direction
vslopey.f computes limited slopes in y-direction
vslopez.f computes limited slopes in z-direction
vpredx.f computes predictor in x-direction
vpredy.f computes predictor in y-direction
vpredz.f computes predictor in z-direction
reset1.f, reset2.f, reset3.f check whether negative density and/or pressure occurred after the predictor; execution is then stopped or the original value is held.


l_clmeth=2 : Multidimensional integrator by LeVeque (2D flat MHD only (nvarbas=6))

l_clmeth=2, to be set in input-file `tape5'.

Multi-dimensional finite volume method based on the exact or approximate solution of Riemann problems and on wave propagation.

Reference:
Randall J. LeVeque
Wave propagation algorithms for multi-dimensional hyperbolic systems
Journal of Computational Physics, 131, 327-353 (1997).

subroutines

m_LeVeque_mhd_2d.f multid-driver of the method
fl6_unsp_mhd_2d.f 1D-driver to compute fluxes
limiter_mhd_2d.f Set up limited slopes
philim_mhd_2d.f All the limiters
rpn6mhd_mhd_2d.f Riemann solver normal to the interface
rpt6mhd_mhd_2d.f transverse Riemann solver


l_clmeth=-1 : Modified Lax-Friedrichs method based on dimensional splitting

l_clmeth=-1, to be set in input-file `tape5'.

Reference A.A. Barmin, A.G. Kulikovskiy, N.V. Pogorelov
Shock capturing Approach and Nonevolutionary Solutions in Magnetohydrodynamics,
Journal of Computational Physics 126, 77-90 (1996)

subroutines

methsplit.f driver for split methods (may be used for other split methods as well)
meth_LF.f driver for modified LAX-Friedrichs
mLF_3ord.f 3rd-order method, based on primitive variables
mLFp_2ord.f 2nd-order method, based on primitive variables
mLFc_2ord.f 2nd-order method, based on conserved variables
flux_euler.f computes Euler flux
flux_MHD.f computes MHD flux

9.3 The question of div B = 0

Three different methods have been proposed to cope with the condition divB=0.
- staggered grids (e.g. ZEUS code by Stone and Norman)
- projection methods
- conservative formulation including source terms

We use the last possibility, mainly because it is by far the simplest method to implement in connection with adaptive grids and general geometry. In this method, divB is zero only up to truncation errors. To guarantee that divB does not explode, it is essential to treat the source terms in the equations appropriately.

This method is implemented in the main driver routine of the conservation law integration ./src/cl_integrator/cl_solv.f by calling ./src/cl_integrator/sou_bmono.f and ./src/cl_integrator/get_divb.f.

NOTE 1 We are using an 8-wave method as proposed by Powell et al. According to Falle et al., 7-wave methods may give as good a result as 8-wave methods.
NOTE 2 In the entire code, fields are scaled by the factor 1/sqrt(4 pi). In these units, the magnetic pressure is just 0.5*B^2, etc., which simplifies the flux- and wave-formulae considerably. You have to apply this scaling in your pre-processing routines. And if you intend to use the accompanying graphics packages, you have to include this scaling factor in sfac(10).
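
A minimal sketch of this scaling step in a pre-processing routine (bphys is a hypothetical field value in physical units; bx, by, bz are the scaled components):

c     4*pi, computed as 16*atan(1)
      pi4 = 16.d0*datan(1.d0)
c     field in code units, as required throughout the MHD code
      bcode = bphys/dsqrt(pi4)
c     the magnetic pressure in code units is then simply
      pmag = 0.5d0*(bx*bx + by*by + bz*bz)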
References:

- K.G. Powell, P.L. Roe and T.J. Linde
A solution adaptive upwind scheme for ideal MHD
Journal of Computational Physics 154, 284-309, 1999

- S.A.E.G. Falle, S.S. Komissarov and P. Joarder
A multidimensional upwind scheme for MHD
Monthly notices of the Royal Astronomical Society, 297,265-277, 1998

9.4 ODE integrators

So far, a second-order Runge-Kutta method is implemented to integrate the source terms (./src/ode_integrator/rk_2nd.f). This routine can easily be exchanged for any other ODE-integration routine, e.g. an implicit one.

9.5. References

M.J. Berger and J. Oliger
Adaptive mesh refinement for Hyperbolic Partial Differential Equations,
Journ. of Comp. Phys. 53, 484--512, 1983

M.J. Berger,
Data structures for adaptive grid generation
SIAM J.Sci. Stat. Comput. 7, 904--916, 1986

M.J. Berger
On conservation at grid interfaces
SIAM J.Num.Anal., 967--984, 24, 1987

M.J. Berger and P. Colella
Local Adaptive Mesh refinement for Shock Hydrodynamics
Journ. of Comp. Phys. 82,64--84, 1989

Marsha J. Berger and Randall J. LeVeque
Adaptive Mesh Refinement using Wave-Propagation Algorithms for Hyperbolic Systems
SIAM J. Numer. Anal., 35, 2298-2316 (1998).
(ftp://amath.washington.edu/pub/rjl/papers/mjb-rjl:amr.ps.gz)


10. Graphics and Visualization

10.1 Interface to graphics packages

10.1.1 IDL (PV-WAVE) graphics for 1D visualization

We have developed `master1d', a series of IDL macros (which also run under PV-WAVE), to visualize 1D simulations. These macros are self-explanatory and come with the AMRCART_1D download.

10.1.2 NCAR graphics for multi-d visualization

For 2D and 3D visualization, we provide modules based on the NCAR Graphics package (NCAR Graphics - UNIX Version 3.1.3a, 1991 and higher). You will need to install the NCAR Graphics package on your machine to make use of these subroutines. NCAR Graphics - from the US National Center for Atmospheric Research, Boulder, Colorado - is freeware under the GNU public license. It is available for several different Unix-platforms and machine-architectures. It can be downloaded from
http://ngwww.ucar.edu.

NOTE:
NCAR Graphics provides routines to visualize 1D data as well. However, we have not yet written an interface to make use of them.
Our graphics macros based on the NCAR Graphics package can be downloaded from the A-MAZE page.

10.1.3 AVS/express and AVS5

Together with Dr. Jean Favre of the Swiss Center for Scientific Computing (CSCS), Manno, Switzerland (http://www.cscs.ch), we have developed a series of modules for the commercial graphics packages AVS/express and AVS5. These modules support our multi-block data and allow fast interactive visualization as well as the production of videos. It is still under debate with CSCS under which conditions these modules can be provided. Please let us know your interest via email. Before you send your request, you should have a license for the AVS/express developer edition. This license is not cheap, though.

10.2 Graphics Utilities

Together with this code, we provide simple tools for basic visualization.

INTERPOLATE_REGULAR_ARRAY and EXTRACT_SINGLE_BLOCKS

See the Notes in these directories about how to use them.

Rolf Walder and Doris Folini
Last modified: Sun Mar 28 21:23:49 CEST 2010