Figure 1 shows the geometry taken from the report "H*(10) dose rate evolution with some different shielding". Targets are located 100 cm from a Co60 point source with activity 10^9 Bq[1]. The dose at a depth of 1.0 cm was calculated for a bare target and with three shield configurations. The setup illustrates two points I made in the last article about H*(10) calculations:

- The results are not sensitive to the detector geometry. In this case, a 4.0 cm radius sphere of tissue equivalent material is used rather than the ICRU 15.0 cm radius sphere.
- The radiation weighting factor (*Wr*) is taken as unity for the photons and secondary electrons and there is no organ correction for the phantom (*Wt* = 1.0). In this case, the biological dose (Sv) equals the physical dose (Gy).

I limited my calculations to the bare target and to a target behind a 2 cm lead shield. Here are the results quoted in the report for several codes. The values are given in units of μGy/hour.

|              | Dosimex-G | Mercurad | Microshield | MCNPX | RayXpert |
|--------------|-----------|----------|-------------|-------|----------|
| No shielding | 349       | 353      | 347         | 355   | 365      |
| 2 cm lead    | 143       | 152      | 141         | 146   | 149      |

The targets subtend only a small fraction of the solid angle surrounding the source, so it would be foolish to use an isotropic source. Instead, this is an ideal opportunity to use the **GamBet** *Radiation Source Tool*, illustrated in Fig. 2. After the user enters values in the dialog fields, the program creates a standard source file. The idea is to fill a circle of a specified radius in the direction of interest with a large, random distribution of particles. The flux assigned to each particle is calculated to represent the specified source activity. In this case, I created a distribution of 200,000 photons of energy 1.17 MeV and an equal number at 1.33 MeV. The distribution filled a circle of radius 4.0 cm at a distance of 92.0 cm from the source, ensuring complete irradiation of the detector. The energy deposition was symmetric about the line from the source through the detector, so I further reduced run time by using the option for cylindrical scoring in **GamBet**. The air region was represented by the standard **Penelope** dry-air model and the tissue-equivalent target was represented by the custom material definition discussed in the previous article:

```
Material
  Name TissueEquivalent
  Component O 1.0000
  Component C 0.1940
  Component H 2.1032
  Component N 0.0390
  Density 1.00
  Insulator
End
```
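The disc-filling idea behind the *Radiation Source Tool* is simple enough to sketch. The fragment below is illustrative only, not the tool’s actual code (the real source-file format is documented in the **GamBet** manual): it scatters points uniformly over a disc of radius 4.0 cm at a distance of 92.0 cm and aims each photon from a point source at the origin through its disc point.

```fortran
! Illustrative sketch of the Radiation Source Tool geometry: photons
! from a point source at the origin, aimed at random points that
! fill a disc of radius RDisc at distance ZDisc.
PROGRAM DiscSource
  IMPLICIT NONE
  REAL(8), PARAMETER :: RDisc = 4.0D0, ZDisc = 92.0D0
  INTEGER, PARAMETER :: NPhoton = 200000
  REAL(8) :: Xi1, Xi2, X, Y, Mag, Ux, Uy, Uz
  INTEGER :: N
  N = 0
  DO WHILE (N < NPhoton)
    ! Rejection method: pick points in the bounding square,
    ! keep only those inside the disc
    CALL RANDOM_NUMBER(Xi1)
    CALL RANDOM_NUMBER(Xi2)
    X = RDisc*(2.0D0*Xi1 - 1.0D0)
    Y = RDisc*(2.0D0*Xi2 - 1.0D0)
    IF (X*X + Y*Y > RDisc*RDisc) CYCLE
    N = N + 1
    ! Unit direction vector from the source through the disc point
    Mag = SQRT(X*X + Y*Y + ZDisc*ZDisc)
    Ux = X/Mag
    Uy = Y/Mag
    Uz = ZDisc/Mag
    ! Position on the disc and unit direction (placeholder output)
    WRITE(*,'(6ES12.4)') X, Y, ZDisc, Ux, Uy, Uz
  END DO
END PROGRAM DiscSource
```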

Figure 3 shows the dose distribution calculated by **GamBet** for the bare detector with photons entering from the left[2]. Even for this simple case, there is quite a lot going on. The dose at the surface is lower than the peak dose in the material. This is the result of the buildup of the secondary-electron distribution. The energy-loss rate for electrons is much higher than that of photons of the same energy. The depth to reach the equilibrium dose is about 0.6 cm, hence the choice of 1.0 cm as the H*(10) measurement depth for γ rays in the MeV range. The dose rate decreases gradually through the material because of γ-ray attenuation. Note the difference in the air dose rate at the entrance and exit of the target. The enhanced downstream dose rate results from electrons driven out of the material. The H*(10) dose rate determined by **GamBet** was 338 μGy/hour, in good absolute agreement with the values quoted in the report.

Figure 4 shows the dose-rate distribution with a 2.0 cm thick lead shield placed a short distance in front of the target. The shield extends to the outer radius — the portion irradiated by the 4.0 cm radius beam is clearly visible. The dose buildup depth in lead is shorter than the element size, so it is not visible[3]. Also, note the absence of a buildup region in the target. Most of the equilibrium secondary-electron distribution from the lead crosses the small air gap and enters the target. The **GamBet** H*(10) prediction is 146 μGy/hour, again comparable to the values determined by the other codes. Finally, Figure 5 shows dose-rate scans along the diameter of the spherical target for the two cases. The dashed line marks the H*(10) point.

One conclusion of this study is that all radiation-shielding codes give reasonable results. This is not surprising because the physics and mathematics base for Monte Carlo radiation codes is over half a century old. The question within this constraint is: why choose **GamBet**? Beyond its low price, **GamBet** has several advantages, some of which are apparent in this study:

- Zoning in **GamBet** is performed with conformal finite-element meshes, allowing plots such as those of Figures 3 and 4. Complex processes may enter even the simplest radiation calculations, so detailed pictures of dose distributions are essential for evaluating results.
- **GamBet** is fast for two reasons: 1) the code is designed to encourage division of solutions into stages and 2) the code supports efficient parallel processing. For this example, the run time with 1.6 million incident model photons (4 parallel processes with 400,000 photons each) was 7 minutes.
- The element approach in **GamBet** is more effective than combinatorial solid geometry for representing complex systems. The advantage is not apparent in this calculation, but it is overwhelming when representing something like a human-body phantom.
- Although all Monte Carlo physics engines are about equal for MeV γ rays, **Penelope** has far more detailed support for low-energy X-rays.
- **GamBet** has several advanced features, such as integration of calculated 3D electric and magnetic fields and direct coupling to electron-beam and thermal-transport programs.

**Footnotes**

[1] This article reviews the decay scheme of Co60: *Modeling radioactive sources with GamBet*.

[2] **GamBet** uses standard units of Gy/s. The conversion factor to μGy/hour is 3.6E9.

[3] The element size in **GamBet** determines the display resolution and the accuracy for fitting shapes, but has no effect on the physics of the radiation interactions.


In applications to personnel shielding, it is not sufficient simply to know the radiation field at a location. By radiation field, I mean the fluxes of energetic electrons and photons along with their energy spectra. The critical issue is how the particles interact with tissue and how the dose (deposited energy divided by mass) builds up inside the organism. A further complication is that the same dose may have different biological effects, depending on the radiation and tissue type. The standard unit of physical dose is the gray (Gy), equal to 1 joule/kg of deposited energy. The unit of biological dose is the sievert (Sv), which also has units of J/kg. The difference is that the biological dose includes weighting factors that depend on the radiation type (*Wr*) and the tissue type (*Wt*) to indicate the relative potential for biological harm. The dose values are related by

*D*(Sv) = *D*(Gy) × *Wr* × *Wt*

The radiation weighting factor is *Wr* = 1.0 for photons and electrons, so we need not worry about it in **GamBet** calculations. The tissue weighting factor *Wt* is difficult to estimate, and therefore there is considerable disagreement about the values. For whole-body irradiation, the convention is to take *Wt* = 1.0. The implication is that for most shielding calculations at energies of interest for medical applications, sieverts are equivalent to grays.

The critical issue is the dose buildup through interactions with tissue. To standardize measurements and calculations, the ICRU (International Commission on Radiation Units and Measurements) defined the *Ambient Dose Equivalent*, H*(d). The quantity applies to radiation moving predominantly in one direction, such as X-rays outside a radiation shield. A different quantity, Hp (the personal dose equivalent), applies to approximately isotropic radiation fields, like the time that Mr. Spock was trapped in the Propulsion Chamber.

The ideal calculation or measurement defined by the ICRU uses a 30-cm-diameter sphere of tissue-equivalent plastic with a density of 1 g/cm^3 and a mass composition of 76.2% oxygen, 11.1% carbon, 10.1% hydrogen and 2.6% nitrogen. The quantity H*(d) is the biological dose at a depth *d* below the surface of the sphere in the direction of the radiation (Figure 1). The most common choice of *d* is 10.0 mm, hence the term H*(10). The reasoning is as follows. When a flux of gamma rays enters a material, the dose increases moving away from the surface because of the generation of secondary electrons (*e.g.*, Compton electrons). The electrons deposit energy more rapidly than photons. The secondary-electron density grows with distance until the production rate equals the absorption rate. At this equilibrium point, the dose reaches a maximum. For gamma rays in the 1 MeV range (typical of radioactive sources) in a material with density 1 g/cm^3 (*e.g.*, water), the equilibrium depth is 10.0 mm or less. The following article describes benchmark calculations that illustrate this effect.

To conclude, here are a couple of practical suggestions for H*(10) calculations in **GamBet**. First, in defining custom materials, **GamBet** follows the Penelope convention of using the stoichiometric composition (the relative number of atoms in an equivalent *molecule*) rather than mass fractions. To make the conversion, assume that *FO*, *FC*, *FH* and *FN* are the stoichiometric fractions for oxygen, carbon, hydrogen and nitrogen. Because the quantities are relative, we take *FO* = 1.00. Using the atomic weights and the mass fraction of 0.762 for oxygen, we can determine the other quantities from these equations:

*FC* × 12.011/15.994 = 0.111/0.762
*FH* × 1.00794/15.994 = 0.101/0.762
*FN* × 14.00674/15.994 = 0.026/0.762
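As a quick check, the arithmetic is easy to script. A minimal sketch (atomic weights and mass fractions taken from the values above):

```fortran
! Convert ICRU tissue mass fractions to Penelope stoichiometric
! fractions, normalized to FO = 1 for oxygen.
PROGRAM TissueComp
  IMPLICIT NONE
  REAL(8), PARAMETER :: AO = 15.994D0, AC = 12.011D0
  REAL(8), PARAMETER :: AH = 1.00794D0, AN = 14.00674D0
  REAL(8), PARAMETER :: MO = 0.762D0, MC = 0.111D0
  REAL(8), PARAMETER :: MH = 0.101D0, MN = 0.026D0
  WRITE(*,'(A,F7.4)') 'FC = ', (MC/MO)*(AO/AC)   ! 0.1940
  WRITE(*,'(A,F7.4)') 'FH = ', (MH/MO)*(AO/AH)   ! 2.1032
  WRITE(*,'(A,F7.4)') 'FN = ', (MN/MO)*(AO/AN)   ! 0.0390
END PROGRAM TissueComp
```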

Here’s the result as it would appear in the *Composition* section of a **GamBet** script:

```
Material
  Name TissueEquivalent
  Component O 1.0000
  Component C 0.1940
  Component H 2.1032
  Component N 0.0390
  Density 1.00
  Insulator
End
```

Finally, note that it is not necessary to use a 30 cm sphere of the tissue-equivalent material in calculations. Any block of material will give about the same answer, as long as the object has depth and width greater than *d*.

The next article describes a walkthrough example that gives some good insights into the physics of the H*(10) calculation. The example illustrates several **GamBet** techniques and new features and provides an opportunity to make comparisons with several other radiation-shielding codes.


The term *variance reduction*[1] applies to methods to optimize Monte Carlo calculations to gain an edge on the 1/√*N* limit. Variance reduction is not a formalized mathematical method, but rather a set of common-sense fixes that can go a long way toward reducing run time. The essence is finding ways not to waste time on particles that will not contribute to critical results. The success and validity of variance reduction depend strongly on user judgement. **GamBet** uses the **Penelope** package for interaction physics. **Penelope** has built-in features for variance reduction that are implemented in **GamBet** with the following commands:

ENHANCE NReg NSplit [ElecP, PhotP, PosiP, ElecS, PhotS, PosiS]

**GamBet** is unique among Monte Carlo codes in its use of finite-element conformal meshes to represent the solution volume. Users can divide the space into a number of regions to represent different materials or sections of an object. The *Enhance* command improves statistics in a critical region (such as a detector). Particles entering the region are split into *NSplit* particles with statistical weight 1/*NSplit*. With the optional string parameters, the operation may be limited to specific particle types (primary or secondary electrons, photons and positrons).

REDUCE NReg NKill [ElecP, PhotP, PosiP, ElecS, PhotS, PosiS]

The *Reduce* command is the inverse of *Enhance*. The number of particles entering a specified non-critical region is reduced, while the statistical weight of the survivors is increased to compensate.

FORCE [ELEC,PHOT,POSI] [HELAS,...] Factor [NReg]

This command instructs the program to increase the probability of low-probability interactions like bremsstrahlung emission. The statistical weight of reaction products is decreased to preserve the correct energy balance between reactions.

Beyond the **Penelope** techniques, **GamBet** is structured to help achieve short run times. The code was designed to encourage the division of calculations into manageable segments. For example, an initial segment could address a radioactive source with shielding and collimation, while a second segment could address the interaction of forward-directed radiation with tissue. The segments are connected by an *escape file* which records the set of model particles that reach the boundary of the first segment. The key to variance reduction in **GamBet** is filtering and transforming escape files for optimal performance in a following segment. The escape distribution can be modified within a **GamBet** run with the following command:

ESCAPEFILTER Condition01 Condition02 ...

The conditions are strings like *X>0.15*, *T<5.0E6*, …. Particles must meet the combined conditions to be included in the escape file. Conditions may apply to spatial locations, kinetic energy and particle type. The idea is to limit particles to those that will play a role in the following segment and to limit the size of the escape file. Recently we expanded the *EscapeFilter* conditions to include particle direction: *Ux>0.1*, *Uz>0.95*, *Ur<0.25*, …. Here, the quantities are the components of a unit vector pointing along the direction of the velocity. One application is to limit particles to those that are aimed toward a detector or target.
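For instance, a command like the following (the numeric values are illustrative) keeps only particles beyond a given *x* position that are aimed within about 18° of the *z* axis:

```
ESCAPEFILTER X>0.15 Uz>0.95
```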

The **GamBet** package includes **GenDist**, a powerful utility to create or modify large particle distributions. **GenDist** can act as an additional stage between calculation segments (Figure 1) to optimize particle properties for reduced variance. The basic sequence is to read an escape file, filter or transform the particle parameters, and write a modified file to be used as the source for a following calculation segment. In the past, the operations were controlled interactively by the user in a program window. We have recently added a script capability for autonomous production runs. Here is a summary of the new script commands:

**READ FPrefix.SRC**

Load an escape file

**WRITE FPrefix.SRC**

Write a file of transformed particle parameters, applying any filter conditions that have been set.

**AXIS [X,Y,Z]**

Set a reference axis for evaluating transverse velocity in transformations and filters.

**FILTER Condition Value**

Set any number of filter conditions. The set of conditions is the same as those in the **GamBet** *EscapeFilter* command.

**XFORM GENERAL XShift YShift ZShift XRotate YRotate ZRotate**

Move or rotate the particles to match the coordinate system of the next computational segment.

**XFORM UNIDIST Dist**

**XFORM NORMPLANE Pos**

**XFORM CLOSETOLINE HLine YLine**

Move particles in ballistic orbits following their velocity vectors. The options shift particles 1) a uniform distance backward or forward, 2) to a plane normal to the current axis, or 3) to their positions closest to a line parallel to the current axis. One main application is to find the effective radius of a bremsstrahlung source for X-ray imaging applications by back-projection to the target.

**BEAMSECT2D NThet**

**BEAMSECT3D X 0 X X**

The first command converts a distribution from a 2D cylindrical calculation to one suitable for a 3D calculation. The second command limits 3D beam distributions to specific transverse quadrants or mirrors a beam distribution.

**SCALE Fact**

Change the size of a particle beam or convert spatial units to match calculation segments. For example, a source calculation may use units of mm, but units of m may be more appropriate for the detector calculation.
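Putting the commands together, a script for an autonomous **GenDist** run might look like the sketch below. The file names, filter condition and numeric values are illustrative, and the exact argument formats should be checked against the **GenDist** manual:

```
READ Shield01.SRC
AXIS Z
FILTER Uz>0.90
XFORM GENERAL 0.0 0.0 -25.0 0.0 0.0 0.0
SCALE 0.001
WRITE Detector01.SRC
```

The sequence loads an escape file, keeps forward-directed particles, shifts the distribution to the coordinate system of the next segment and converts units of mm to m before writing the new source file.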

For more details, use these links to download the **GamBet** and **GenDist** manuals. Free updates to the new programs are available to **GamBet** and **Xenos** users.

**Footnotes**

[1] The term *standard deviation reduction* would be a better choice, but it is a less compelling phrase.



**HeatWave**, a component of our **Xenos** suite for X-ray source design, performs 3D thermal-conduction calculations. Energy-deposition profiles determined by the **GamBet** Monte Carlo code can be imported as thermal source distributions. The distributions can be modulated with user-specified functions to represent pulsed or periodic beams. This capability can provide, at best, a rough approximation to a rotating target.

The **HeatWave** solution is performed on a stationary mesh — it would be difficult and unwieldy to introduce a moving mesh. On the other hand, it is relatively easy to move the heat source through the mesh, achieving the same result. Accordingly, we have introduced a new capability in **HeatWave** to generate exact moving-target simulations. This article summarizes its operation.

Power distributions from the programs **Aether** (microwave fields), **RFE3** (time-dependent electric fields) and **GamBet** (Monte Carlo X-rays and electrons in matter) can be imported into **HeatWave** with the *SourceFile* command. The only restriction is that the field and particle solutions must be performed on the same mesh as that used for the thermal solution. The result is that the elements of the **HeatWave** solution have assigned source power densities. File sources may be used in both static and dynamic solutions. In the dynamic mode, a time variation may be associated with the file source. The variation may be defined by a table of [*t*, *f(t)*] values using the *SourceMod* command. Here, *f(t)* is a multiplication factor. The factor may also be defined by an algebraic expression.

In the new version of **HeatWave**, the element power distribution is initially copied to a reference mesh variable. A time-dependent displacement vector is defined with the new commands *XDisp*, *YDisp* and/or *ZDisp*. As with the modulation functions, the time-dependent vector components may be defined by either a table of values (*e.g.*, [*t*, *x(t)*]) or an algebraic function. The element power densities from the reference mesh are periodically mapped to the computational mesh using the current value of the displacement vector [*x(t)*, *y(t)*, *z(t)*]. The mapping algorithm conserves energy. Depending on the displacement, some source energy may be mapped outside the source region. As an example, heating of a rotating target could be modeled using a sawtooth function for *x(t)*.
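As an illustration of the sawtooth idea, the displacement for a target of circumference *W* rotating with period *T* could be built from a fractional-part operation. The function below is only a sketch of the functional form (in practice, the variation would be entered as a table or algebraic expression in the **HeatWave** script):

```fortran
! Sawtooth displacement for a rotating-target model: x(t) ramps
! from 0 to W over each period T, then resets to 0.
FUNCTION XDispSaw(t, W, T) RESULT(x)
  IMPLICIT NONE
  REAL(8), INTENT(IN) :: t, W, T
  REAL(8) :: x
  x = W*MOD(t/T, 1.0D0)   ! fractional part of t/T, scaled by W
END FUNCTION XDispSaw
```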

Figure 1 shows the results of a demonstration calculation. A 4 MeV electron beam of radius 5.0 mm with current 23.9 μA impinges on an aluminum target of thickness 10.0 mm. The top section shows the temperature distribution in the center of the target (*z* = 5.0 mm) with a total irradiation time of 10 s and a fixed source distribution. In contrast, the lower section shows a solution where the source moves to the right 20.0 mm during the 10 s interval. Note that the peak temperature drops from 505.1 °C to 338.2 °C.

**HeatWave** with the expanded functionality will be included in all future distributions of **Xenos**. A free upgrade is available to current users.


Two facts make it possible to estimate magnetization curves beyond the range of measured data:

- Magnetic saturation occurs smoothly (*i.e.*, there are no abrupt changes).
- We know the variation at extremely high fields.

The key is choosing a good method to plot the data.

First, some background. The typical measurement setup is a torus of material with a drive coil winding. The quantity *H* (in A/m) is the magnetic field produced by the coil in the absence of the material. A useful quantity is *B0* = μ0 × *H* (in tesla), the magnetic flux density produced by the coil inside the torus with no material. The quantity *B* is the total flux density in the torus with the material present. In this case, alignment of atomic currents adds to the field value so that *B* > *B0*. In a soft magnetic material (*i.e.*, no permanent magnetization), both *B0* and *B* equal zero when there is no drive current. The alignment of magnetic domains increases as the drive current increases; therefore, *B* grows faster than *B0*. The relative magnetic permeability is defined as μr = *B/B0*. At high values of drive current, all the material domains have been aligned. In this state, the material makes a maximum contribution to the total flux density of *Bs* (the saturation flux density). This contribution does not change with higher drive current. For high values of *B0*, the total flux density is approximated by

*B* ≅ *B0* + *Bs*. (1)

To illustrate the estimation procedure, we’ll consider the specific example of Magnifer 50 RG, a nickel alloy with a high value of *Bs*. Figure 1 shows a graph from a data sheet supplied by VDM Metals. The sheet lists the saturation flux density as *Bs* = 1.55 T. The plot shows *B* (in mT) versus *H* (mA/cm) at several frequencies. Because we are interested in the static properties, we’ll consider only curve 1. The data extend to a peak value of *H* = 200 A/m. At this point, μr > 1000, so the material is well below saturation. I have an application where the material is driven well into saturation by applied fields up to *H* = 5000 A/m, 25 times the highest tabulated value! Is it possible to make calculations with confidence?

The first step is to convert the graphics data to a number set. The **FP Universal Scale** is the ideal tool for this task. After setting the correct log scales, I could record a set of points with simple mouse clicks, including the conversion factors, to create a list of *B* versus *B0* in units of tesla. In this case, the relative magnetic permeability is the ratio μr = *B/B0*.

The key to estimating the missing values is to create plots of the material behavior at the two extremes: the tabulated values at low *B0* and predictions from Eq. 1 at high *B0*. To ensure the validity of Eq. 1, I picked *B0* values corresponding to highly saturated material: 0.1, 0.2 and 0.5 T. The art is picking the right type of plot. Figure 2 shows *B* versus *B0* with log-log scales. With the requirement of a smooth variation, clearly the unknown values must lie close to the dashed red line connecting the data extremes. Accordingly, I used the **Universal Scale** to find several points along the line. I combined the interpolated values with the low field tabulated values and the high-field predictions to build a data set that spans the complete range of behavior for Magnifer 50. The new data are available on our magnetic materials page.
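For reference, the high-field anchor points follow directly from Eq. 1 with the quoted *Bs* = 1.55 T. A minimal sketch:

```fortran
! High-field anchor points for Magnifer 50: B = B0 + Bs (Eq. 1)
! and the corresponding relative permeability MuR = B/B0.
PROGRAM HighField
  IMPLICIT NONE
  REAL(8), PARAMETER :: Bs = 1.55D0
  REAL(8) :: B0(3) = (/0.1D0, 0.2D0, 0.5D0/)
  REAL(8) :: B, MuR
  INTEGER :: I
  DO I = 1, 3
    B = B0(I) + Bs
    MuR = B/B0(I)
    WRITE(*,'(A,F5.2,A,F6.3,A,F8.3)') 'B0 = ', B0(I), '  B = ', B, '  MuR = ', MuR
  END DO
END PROGRAM HighField
```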

Finally, Figure 3 shows alternate plots to Fig. 2: *B0* versus μr and *B* versus μr. In all cases, the variation over the unknown saturation region is well approximated by a simple straight-line fit.


Four types of particles may be generated in radioactive events:

- Fast electrons and positrons (beta rays)
- Photons (gamma rays)
- Heavy charged particles (protons, alpha rays,…)
- Neutrons.

This article deals with the first two types. Radioactive sources of beta and gamma rays have extensive applications in areas such as medical treatments, food irradiation and detector calibration.

The activity of a radioactive source is determined by the law of radioactive decay (Eq. 1). In the equation, the quantity *N* equals the total number of nuclei in the source. The left-hand side is the number of nuclei that decay per second. The quantity λ (with units of 1/s) is the decay constant. It depends on the energy state and quantum barrier of the nucleus. Accordingly, sources exhibit huge variations of λ. The historical unit of activity for a source is the curie (Ci). One curie equals 3.7 × 10^10 decays/s (approximately equal to the activity of 1 gram of Ra226). The modern standard unit is the becquerel (Bq), equal to 1 decay/s (1 Bq = 2.703 × 10^-11 Ci).

We can also interpret the decay constant in terms of a single nucleus. The probability that a nucleus has not decayed after a time *t* is given by Eq. 2. The *average lifetime* (the mean of the distribution) is 1/λ. The *halflife* is another useful quantity. It equals the time for half of the nuclei present in a source at *t* = 0.0 to decay. Equation 2 leads to the expression for the halflife of Eq. 3.
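For reference, the standard relations that Eqs. 1-3 describe are (reconstructed here from the definitions in the text):

$$\frac{dN}{dt} = -\lambda N \quad (1), \qquad P(t) = e^{-\lambda t} \quad (2), \qquad t_{1/2} = \frac{\ln 2}{\lambda} \quad (3)$$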

In comparison to particle accelerators, the main advantage of radioactive isotopes as sources is that they do not require power input and expensive ancillary equipment (*e.g.*, power supplies, vacuum systems,…). Many isotopes are produced by exposure in a nuclear reactor and may be relatively inexpensive when reactors are available. The disadvantages of radioactive sources are that they run continuously and produce a broad energy spectrum of electrons and positrons.

The most important nuclear processes for the production of beta and gamma rays are *beta decay* and *electron capture*. Figure 1 shows the atomic mass of the most stable isotopes as a function of atomic number *Z*. Isotopes above the line have an excess of neutrons — their usual route toward stability is to emit a β- particle (electron), converting a neutron to a proton while preserving the number of nucleons. In other words, the nucleus changes its elemental identity while preserving its mass number. Similarly, isotopes with an excess of protons emit β+ particles (positrons). Both forms of nuclear transformation are called beta decay.

First, consider β- emission. There are two isotopes commonly used in research and industry: Cs137 and Co60. Nuclear processes are commonly illustrated with energy-level diagrams — Fig. 2a shows the decay scheme for Cs137. The horizontal axis represents isomer identity and the vertical axis shows energy levels. Dark lines indicate a nucleus in the ground state and light lines designate an excited state. The arrows indicate the directions of transformations. The starting point is the ground state of Cs137. The figure 30.17 years is the halflife for decay. Decay events of type ß- convert the nucleus to the more stable isomer, Ba137. The arrows indicate that there are two decay paths. In 94.6% of the decays, the emission process carries off 0.512 MeV (shared between the emitted electron and an antineutrino) and leaves the Ba137 nucleus in an excited state. The state decays with a halflife of 2.55 minutes, resulting in emission of a 0.662 MeV gamma ray. In 5.4% of the events, the ß- particle and antineutrino carry off 1.174 MeV, leaving the product nucleus in the ground state.

The emission process does not produce a single ß- particle of energy 0.512 or 1.174 MeV, but rather a broad spectrum of electrons with kinetic energy spread between zero and the maximum. The reason is the condition of conservation of spin. The spin of the nucleus changes by an integer multiple of h/2π in the decay, while the electron carries spin ½(h/2π). For balance, an additional particle is required with half-integer spin. In his theory of beta decay, Fermi postulated the existence of neutrinos and antineutrinos, neutral particles with spin ½(h/2π) and very small mass, and therefore almost undetectable. In a ß- decay, the available energy is partitioned between the electron, the nucleus and an antineutrino. The theory to determine the spectrum is complex — all ß- decays give rise to a spectrum similar to that of Fig. 3. The spectrum is skewed toward lower energy by the effect of Coulomb attraction as the electron escapes from the nucleus. Generally, Cs137 is used as a source of 0.662 MeV gamma rays because the ß- particles are preferentially absorbed by the source and surrounding structure and the antineutrinos pass away with no effect.

We next consider proton-rich isotopes that approach the stability line through emission of positrons. The mechanism is similar to ß- emission, with the exceptions that a neutrino is emitted and the positron emission spectrum is shifted toward higher energies because of Coulomb repulsion from the nucleus. Figure 2*b* shows the energy-level diagram for Na22, a positron emitter. The halflife for all decay processes is 2.60 years. There are several decay pathways. The most likely event (90.33% probability) is that a proton changes to a neutron by emission of a positron, leaving the product isotope Ne22 in an excited state. A gamma ray of energy 1.275 MeV is released almost immediately as the nucleus relaxes to the ground state. In this case, the maximum positron energy is 0.545 MeV. In rare instances, a positron with energy ≤1.82 MeV is released, leaving the product nucleus in the ground state. A third process that may occur is electron capture. In 9.62% of the decays, an inner orbital electron is captured by the nucleus, again resulting in the conversion of a proton to a neutron. The Ne22 nucleus is left in the same excited state as with ß+ emission, again followed by the release of a 1.275 MeV γ. The difference from ß+ decay is that no positron or neutrino is emitted. Electron capture leaves a vacancy in the *K* or *L* shell of the electron cloud, so characteristic X-rays are also emitted as the atom relaxes.

The table in the full report[1] lists useful commercial radioactive sources of electrons, photons and positrons. A common feature is a halflife of one to a few years. For isotopes with lower values, it would be necessary to produce and use them quickly. A long halflife, on the other hand, means reduced activity.

We’ll now turn to **GamBet** modeling techniques, in particular how to create a particle input file to represent a radioactive source. There are some challenges:

- Particles are emitted over an extended spatial region, the volume of the source.
- Electrons and positrons have broad energy distributions.
- Often, we want to normalize particle flux to represent a specific source activity.

Particle file creation is greatly facilitated through the use of statistical codes like **R** (see the comprehensive short course, with examples, on using **R** with **GamBet**).

Dealing with the finite source size is relatively easy. If the activity is uniform over the source volume, then the probability density for emission is uniform over the volume. As an example, consider a cylindrical source of length *L* and radius *R*. Given a routine that creates a random variable ξ in the range 0 ≤ ξ ≤ 1.0, then values of the *z* coordinate (along the cylinder axis) are assigned according to Eq. 4. We can use the *rejection method* to determine coordinates in the *x-y* plane. We assign coordinates by Eqs. 5 and 6 and keep only instances that satisfy Eq. 7.
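A minimal sketch of the position sampling, following the recipe just described (Eq. 4 for *z*, Eqs. 5-7 for the rejection step):

```fortran
! Sample one emission point uniformly inside a cylinder of length L
! and radius R with its axis along z.
SUBROUTINE CylPoint(L, R, X, Y, Z)
  IMPLICIT NONE
  REAL(8), INTENT(IN)  :: L, R
  REAL(8), INTENT(OUT) :: X, Y, Z
  REAL(8) :: Xi
  CALL RANDOM_NUMBER(Xi)
  Z = L*Xi                       ! uniform along the axis (Eq. 4)
  DO
    CALL RANDOM_NUMBER(Xi)
    X = R*(2.0D0*Xi - 1.0D0)     ! Eq. 5
    CALL RANDOM_NUMBER(Xi)
    Y = R*(2.0D0*Xi - 1.0D0)     ! Eq. 6
    IF (X*X + Y*Y <= R*R) EXIT   ! keep points inside the circle (Eq. 7)
  END DO
END SUBROUTINE CylPoint
```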

With regard to energy distributions, photons from sources like Cs137 and Na22 are essentially monoenergetic. In contrast, the β particles have an energy distribution like that of Fig. 3. In principle, thin films could be used as sources of electrons or positrons. In this case, it would be necessary to represent the spectrum and to determine the effect of energy loss in the film. The spectral shape and endpoint energy vary with the type of isotope. Chapter 10 of the reference *Using R for GamBet Statistical Analysis* discusses methods for creating arbitrary distributions. In practice, an exact model may not be necessary and the data may not even be available. In applications such as estimating shield effectiveness, it may be sufficient to model the β decay spectrum with a simple function like that of Eq. 8. In the equation, *Emax* is the maximum β energy. Taking the integral gives the cumulative probability distribution (*i.e.*, the probability that a β has energy less than or equal to *E*) of Eq. 9. Values of *P(E)* range from 0.0 to 1.0. We can obtain the desired distribution by assigning energy from a random-uniform variable ξ using Eq. 10, the inverse of Eq. 9. Figure 4 shows the result with 10,000 particles having endpoint energy *Emax* = 0.512 MeV.
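As an illustration of the inverse-CDF recipe, assume a simple triangular spectrum *p(E)* ∝ (1 − *E/Emax*), which is skewed toward low energy. This is an assumed stand-in; the report's Eq. 8 may use a different functional form. Its cumulative distribution is *P(E)* = 2*u* − *u*^2 with *u* = *E/Emax*, and inverting gives the sampling rule below:

```fortran
! Sample a beta kinetic energy from an assumed triangular spectrum
! p(E) ~ (1 - E/Emax) by inverting its cumulative distribution.
FUNCTION BetaEnergy(Emax) RESULT(E)
  IMPLICIT NONE
  REAL(8), INTENT(IN) :: Emax
  REAL(8) :: E, Xi
  CALL RANDOM_NUMBER(Xi)
  E = Emax*(1.0D0 - SQRT(1.0D0 - Xi))   ! inverse of P = 2u - u**2
END FUNCTION BetaEnergy
```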

To conclude, we’ll address how to create a **GamBet** source file to represent a given source activity. We’ll follow a specific example — a Co60 source with activity 10 Ci. This figure corresponds to a disintegration rate of *Rd* = 3.7 × 10^11 1/s. Figure 5 shows an energy-level diagram. The isotope decays through β- decay with a halflife of 5.27 years. Almost all events result in an excited state of the Ni60 nucleus that relaxes to the stable ground state by almost instantaneous emission of γ rays of energy 1.17 and 1.33 MeV. A source assembly typically consists of the source combined with shielding and collimators to create a directional photon flux. A goal of a calculation could be to compare radiation fluxes in the forward and reverse directions.

We specify *Np* = 1000 model emission points uniformly distributed over the source volume using techniques like those discussed previously. At each emission point, we generate *Ng* = 500 photons of energy 1.17 MeV and *Ng* photons of energy 1.33 MeV. The photons are randomly distributed over 4π steradians of solid angle. Equations 11 and 12 can be used to pick the azimuthal and polar angles. In the continuous-beam mode of **GamBet**, each photon in the file should be assigned a flux value given by Eq. 13. In this case, **GamBet** gives absolute values of particle flux through, and deposited dose in, structures surrounding the source assembly. Note that this case is relatively simple because almost all events follow the same decay path. In the case of Cs137 (Fig. 2*a*), we would need to multiply *Rd* by 0.946 to get the correct absolute flux of 0.662 MeV γ rays.
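A sketch of the direction and flux bookkeeping for one model photon follows. The normalization *Rd*/(*Np*·*Ng*) is one plausible reading of the text (one photon of each line per decay, shared among the *Np*·*Ng* model photons of that line); the report's Eqs. 11-13 should be consulted for the exact forms:

```fortran
! Isotropic emission direction and per-photon flux for the Co60
! example: Np emission points, Ng model photons per gamma line.
PROGRAM Co60Flux
  IMPLICIT NONE
  REAL(8), PARAMETER :: Pi = 3.141592653589793D0
  REAL(8), PARAMETER :: Rd = 3.7D11       ! disintegrations/s for 10 Ci
  INTEGER, PARAMETER :: Np = 1000, Ng = 500
  REAL(8) :: Xi, Phi, CosTheta, SinTheta, Ux, Uy, Uz, Flux
  Flux = Rd/DBLE(Np*Ng)              ! photons/s carried by each model photon
  CALL RANDOM_NUMBER(Xi)
  Phi = 2.0D0*Pi*Xi                  ! azimuthal angle, uniform in [0, 2 Pi)
  CALL RANDOM_NUMBER(Xi)
  CosTheta = 1.0D0 - 2.0D0*Xi        ! uniform in cos(theta) over 4 Pi
  SinTheta = SQRT(1.0D0 - CosTheta**2)
  Ux = SinTheta*COS(Phi)
  Uy = SinTheta*SIN(Phi)
  Uz = CosTheta
  WRITE(*,'(4ES12.4)') Ux, Uy, Uz, Flux
END PROGRAM Co60Flux
```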

The procedure as described may be inefficient for calculating forward photon flux or shielding leakage, because most of the model particles would not contribute. A simple variance-reduction technique is to limit the range of solid angle dΩ so that photons are preferentially directed toward the measurement point. The solid angle should be large enough to include the possibility of scattering from the shield or collimator. To properly normalize the calculation, the photon flux values should be adjusted by a factor dΩ/4π.

**Footnotes**

[1] Use this link for a copy of the full report in PDF format: **Modeling radioactive sources with GamBet**.



- In the transport-equation approach to the two-dimensional random walk, the idea is to seek average quantities *n* or **J** and to find relationships between them (like Fick’s first and second laws). These relationships are accurate when there are large numbers of particles. To illustrate the meaning of large, note that the number of electrons in one cubic micrometer of aluminum equals 3 × 10^15. When averages are taken over such large numbers, the transport equations are effectively deterministic.
- In the Monte Carlo method, the idea is to follow individual particles based on a knowledge of their interaction mechanisms. A practical computer simulation may involve millions of model particles, orders of magnitude below the actual particle number. Therefore, each model particle represents the average behavior of a large group of actual particles. In contrast to transport equations, the accuracy of Monte Carlo calculations is dominated by statistical variations.

An additional benefit of transport equations is that they often have closed-form solutions that lead to scaling relationships like Eq. 22 of the previous article. We could extract an approximation to the relationship from Monte Carlo results, although at the expense of some labor.

Despite the apparently favorable features of the transport equations, Monte Carlo is the primary tool for electron/photon transport. Let’s understand why. One advantage is apparent when comparing the relative effort in the demonstration solutions — the Monte Carlo calculation is much easier to understand. A clear definition of the physical properties of particle collisions was combined with a few simple rules. The only derivation required was that for the mean free path. The entire physical model was contained in a few lines of code. In contrast, the transport model required considerable insight and the derivation of several equations. In addition, it was necessary to introduce additional results like the divergence theorem. Most of us feel more comfortable staying close to the physics with a minimum of intervening mathematical constructions. This attitude represents good strategy, not laziness. Less abstraction means less chance for error. A computer calculation that closely adheres to the physics is called a *simulation*. Program managers and funding agents have a warm feeling for simulations.

Beyond the emotional appeal, there is an over-riding practical reason to apply Monte Carlo to electron/photon transport in matter. Transport equations become untenable when the interaction physics becomes complex. For example, consider the following scenario for a demonstration calculation:

In 20% of collisions, a particle splits into two particles with velocity 0.5*v0* and 0.2*v0*. The two particles are emitted at a random angles separated by 60°. Each secondary particle has its own cross section for interaction with the background obstacles.

It would be relatively easy to modify the code of the first article to represent this history and even more complex ones. On the other hand, it would require considerable effort and theoretical insight to modify a transport equation. As a second example, suppose the medium were not uniform but had inclusions with different cross sections and with dimensions less than λ. In this case, the derivation of Fick’s first law is invalid. A much more complex relationship would be needed. Again, it would be relatively simple to incorporate such a change in a Monte Carlo model. Although these scenarios may sound arbitrary, they are precisely the type of processes that occur in electron/photon showers.
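To make the first point concrete, here is roughly how the collision step of the first article’s code might change for the splitting scenario. This is a sketch under the stated 20%/60° rules, not a complete program:

```fortran
! Collision outcome for the hypothetical splitting scenario: in 20%
! of collisions the particle splits into daughters with speeds
! 0.5*V0 and 0.2*V0, emitted at random angles 60 degrees apart.
SUBROUTINE Collide(V0, DTwoPi, NOut, V, Angle)
  IMPLICIT NONE
  REAL(8), INTENT(IN)  :: V0, DTwoPi
  INTEGER, INTENT(OUT) :: NOut
  REAL(8), INTENT(OUT) :: V(2), Angle(2)
  REAL(8) :: Xi
  CALL RANDOM_NUMBER(Xi)
  IF (Xi < 0.20D0) THEN
    NOut = 2
    V(1) = 0.5D0*V0                      ! each daughter has its own speed
    V(2) = 0.2D0*V0                      !   (and its own cross section)
    CALL RANDOM_NUMBER(Xi)
    Angle(1) = DTwoPi*Xi                 ! random emission direction
    Angle(2) = Angle(1) + DTwoPi/6.0D0   ! second daughter 60 degrees away
  ELSE
    NOut = 1                             ! ordinary isotropic bounce
    V(1) = V0
    CALL RANDOM_NUMBER(Xi)
    Angle(1) = DTwoPi*Xi
  END IF
END SUBROUTINE Collide
```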

In summary, the goal in collective physics is to describe behavior of huge numbers of particles. We have discussed two approaches:

- **Monte Carlo method**. Define a large but reasonable set of model particles, where each model particle represents the behavior of a group of real particles with similar properties. Propagate the model particles as single particles using known physics and probabilities of interactions. Then, take averages to infer the group behavior.
- **Transport equation method**. Define macroscopic quantities, averages over particle distributions. Derive and solve differential equations that describe the behavior of the macroscopic quantities.

The choice of method depends on the nature of the particles and their interaction mechanisms. Practical calculations often use a combination of the two approaches. For example, consider the three types of calculations required for the design of X-ray devices (supported in our **Xenos** package):

- **Radiation transport in matter**. Photons may be treated with the Monte Carlo technique, but mixed methods are necessary for electrons and positrons. In addition to discrete events (hard interactions) like Compton scattering, energetic electrons in matter undergo small-angle scattering and energy loss with a vast number of background electrons (soft interactions). It would be impossible to model each interaction individually. Instead, averages based on transport calculations are used.
- **Heat transfer**. Here, the particles are the energy transferred from one atom to an adjacent one. Because the interaction model is simple and the mean free path is extremely small, transport equations are clearly the best choice.
- **Electric and magnetic fields**. The standard approach is through the Maxwell equations. They are transport-type equations, derived by taking averages over a large number of charges. On the other hand, we employ Monte-Carlo-type methods to treat contributions to fields from high-current electron beams.

**Footnotes**

[1] Use this link for a copy of the full report in PDF format: Monte Carlo method report.


- Although the density may vary in space, the distribution of particle velocities is the same at all points. Particles all have constant speed *v0* and there is an isotropic distribution of direction vectors.
- There is a uniform-random background density of scattering objects.
- Equation 8 of the previous article gives the probability distribution of *a* (the distance particles travel between collisions) in terms of the mean free path λ.

We want to find how the density changes as particles perform their random walk. Changes occur if, on the average, there is a flow of particles (a *flux*) from one region of space to another. If the density *n* is uniform, the same number of particles flow in one direction as the other, so the average flux is zero. Therefore, we expect that fluxes depend on gradients of the particle density. We can find the dependence using the construction of Fig. 2. Assume that the particle density varies in *x* near a point *x0*. Using a coordinate system with origin at *x0*, the first-order density variation is given by Eq. 9. The goal is to find an expression for the number of particles per second passing through the line element Δy. To carry out the derivation, we assume the following two conditions:

- The material is homogeneous. Equivalently, λ has the same value everywhere.
- Over a scale length λ, relative changes in *n* are small.

Using the polar coordinates shown, centered on the line element, consider an element in the plane of area (*r* Δθ)(Δ*r*). We want to find how many particles per second originating from this region pass through Δ*y*. We can write the quantity as the product *Jx* Δ*y*, where *Jx* is the linear flow density in units of particles/m-s. On the average, every particle in the calculation volume has the same average number of collisions per second, given by Eq. 10. The rate of scattering events in the area element equals ν times the number of particles in the area (Eq. 11). The fraction of scattered particles aimed at the segment is given by Eq. 12.

Finally, the probability that a particle scattered out of the area element reaches the line element was given in the previous article as exp(−*r*/λ). Combining this expression with Eqs. 10 and 11, we can determine the current density from all elements surrounding the line segment. Taking the density variation in the form of Eq. 13 leads to the expression of Eq. 14. The integral of the first term in brackets equals zero, so only the term proportional to the density gradient contributes. Carrying out the integrals, the linear current density is given by Eq. 15. The planar *diffusion coefficient* (with units m^2/s) is given by Eq. 16. Generalizing to possible variations in both *x* and *y*, we can write Eq. 15 in the form of Eq. 17. This relationship between the vector current density and the gradient of density is called Fick’s first law. Equation 18 lists Fick’s second law, a statement of conservation of particles. In the equation, the quantity ∇•**J** is the divergence of flux from a point and *S* is the source of particles at that point (particles/m^2-s). Equation 18 is the *diffusion equation* for particles in a plane. It states that the density at a point changes in time if there is a divergence of flux or a source or sink.
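Since the equations themselves appeared as figures in the original post, here is a reconstruction of the key results from the verbal description (the diffusion coefficient is consistent with the numerical check at the end of the article):

$$D = \frac{v_0 \lambda}{2} \quad (16), \qquad \mathbf{J} = -D\,\nabla n \quad (17), \qquad \frac{\partial n}{\partial t} + \nabla\cdot\mathbf{J} = S \quad (18)$$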

We are now ready to compare the predictions of the model with the Monte Carlo results of the previous section. Equation 19 gives the solution to the diffusion equation for particle emission from the origin of the plane. The quantity *r* equals √(x^2 + y^2). We can verify Eq. 19 by direct substitution, using the cylindrical form of the divergence and gradient operators and taking *D* as uniform in space. In order to make a comparison with the Monte Carlo calculation, we pick a time value *t0* = *Nc* λ/*v0* and evaluate *A* based on the condition of Eq. 20. The resulting expression for the density at time *t0* is given by Eq. 21. The prediction of Eq. 21 is plotted as the solid line in Fig. 1. The results from the two methods show close absolute agreement.

Finally, we can determine the theoretical 1/e radius of the particle cloud from Eq. 21 to yield Eq. 22. In a random walk, the particle spread increases as the square root of the number of transits between collisions. For *Nc* = 100, the value is *re*/λ ≅ 14.1.
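Again reconstructing from the description: the solution of Eq. 19 has the Gaussian form *n* ∝ exp(−*r*^2/4*Dt*), so the 1/e radius at time *t0* = *Nc*λ/*v0* is

$$r_e = \sqrt{4 D t_0} = \lambda\sqrt{2 N_c} \quad (22)$$

which gives *re*/λ = √200 ≅ 14.1 for *Nc* = 100, matching the quoted value.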

**Footnotes**

[1] Use this link for a copy of the full report in PDF format: Monte Carlo method report.


In the Monte Carlo method, the full set of particles is represented by a calculable set of model particles. In this case, each model particle represents a group. We follow detailed histories of model particles as they undergo random events like collisions with atoms. Characteristically, we use a random-number generator with a known probability distribution to determine the outcomes of the events. In the end, the core assumption is that averages over model particles represent the average behavior of the entire group. The alternative to this approach is the derivation and solution of *moment* (or *transport*) equations. The following article covers this technique.

Instead of an abstract discussion, we’ll address a specific example to illustrate the Monte Carlo method. Consider a random walk in a plane. As shown in Fig. 1, particles emerge from a source at the origin with uniform speed *v0*. They move freely over the surface unless they strike an obstacle. The figure represents the obstacles as circles of diameter *w*. The obstacles are distributed randomly and drift about, so we can never be sure of their positions. The velocity of the obstacles is much smaller than *v0*. If a particle strikes an obstacle, we’ll assume it bounces off in a random direction with no change in speed. The obstacles are unaffected by the collisions.

In a few sentences, we have set some important constraints on the physical model:

- The nature of the particles (constant speed *v0*).
- The nature of the obstacles (diameter *w*, high mass compared to the particles).
- The nature of the interaction (elastic collision with isotropic emission from the collision point).
The same type of considerations applies to calculations of radiation transport. The differences are that 1) the model particles have the properties of photons and electrons, 2) the obstacles are the atoms of materials and 3) there are more complex collision models based on experimental data and theory. To continue, we need to firm up the features of the calculation. Let’s assume that 10^10 particles are released at the origin at time *t* = 0. Clearly, there are too many particles to handle on a computer. Instead, we start *Np* = 10,000 model particles and assume that they will give a good idea of the average behavior. In this case, each model particle represents 10^6 real particles. We want to find the approximate distribution of particle positions after they make *Nc* collisions. The logic of a Monte Carlo calculation for this problem is straightforward. The first model particle starts from the origin moving in a random direction. We follow its history through *Nc* collisions and record its final position. We continue with the other *Np* − 1 model particles and then interpret the resulting distribution of final positions.

The source position is *x* = 0, *y* = 0. To find the emission direction, we use a random-number generator, a component of all programming languages and spreadsheets. Typically, the generator returns a random number ξ equally likely to occur anywhere over the interval of Eq. 1. Scaling the values to span the range 0 → 2π, the initial unit direction vector is given by Eq. 2.

The particle moves a distance *a* from its initial position and then has its first collision. The question is, how do we determine *a*? It must be a random quantity because we are uncertain how the obstacles are lined up at any time. In this case, we seek the distribution of expectations that the particle has a collision at distance *a*, where the distance may range from 0 to ∞. To answer the question, we’ll make a brief excursion into probability theory.

Let *P(a)* equal the probability that the particle moves a distance *a* without a collision with an object. By convention, a probability value of 0.0 corresponds to an impossible event and 1.0 indicates a certain event. Therefore, *P*(0) = 1.0 (there is no collision if the particle does not move) and P(∞) = 0.0 (a particle traveling an infinite distance must encounter an object). We can calculate *P(a)* from the construction of Figure 2. The probability that a particle reaches *a* + Δ*a* equals the probability that the particle reaches *a* times the probability that it passes through the layer of thickness Δ*a* without a collision. The second quantity equals 1.0 minus the probability of a collision.

To find the probability of a collision in the layer, consider a segment of height *h*. If the average surface density of obstacles is *N* particles/m^2, then the segment is expected to contain *Nh* Δ*a* obstacles. Each obstacle is a circle of diameter *w*. The distance range for an interaction with an obstacle is called the cross-section σ. In this case, we will associate the interaction width with the obstacle diameter, or σ = *w*. The fraction of the height of the segment obscured by obstacles is given by Eq. 3. The exit probability is given by Eq. 4.

A first-order Taylor expansion (Eq. 5) leads to Eq. 6. Equation 6 defines another useful quantity, the *macroscopic cross section* Σ = *N*σ, with dimensions 1/m. Solving Eq. 6 leads to Eq. 7. The new quantity in Eq. 7 is the mean free path, λ. It equals the average value of *a* for the exponential probability distribution. The ideas of cross section, macroscopic cross section and mean free path are central to particle transport.
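Reconstructing Eqs. 6 and 7 from the definitions in the text (the exponential survival probability and the mean free path):

$$\frac{dP}{da} = -N\sigma P = -\Sigma P \quad (6), \qquad P(a) = e^{-a/\lambda}, \quad \lambda = \frac{1}{\Sigma} = \frac{1}{N\sigma} \quad (7)$$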

We can now solidify our procedure for a Monte Carlo calculation. The first step is to emit a particle at the origin in the direction determined by Eq. 2. Then we move the particle forward a distance *a* consistent with the probability function of Eq. 7. One practical question is, how do we create an exponential distribution with a random number generator that produces only a uniform distribution in the interval of Eq. 1? The plot of the probability distribution of Eq. 7 in Fig. 3 suggests a method. Consider the 10% of particles with collision probabilities between *P* = 0.3 and *P* = 0.4. The corresponding range of path lengths extends from *a*/λ = −ln(0.4) = 0.9163 to *a*/λ = −ln(0.3) = 1.204. If we assign path lengths from the uniform random variable according to Eq. 8 (*a* = −λ ln ξ), then we can be assured that, on the average, 10% will lie in the range *a*/λ = 0.9163 to 1.204. By extension, if we apply the transformation of Eq. 8 to a uniform distribution, the resulting distribution will be exponential. To confirm, the lower section of Fig. 3 shows a random distribution calculation with 5000 particles.

To continue the Monte Carlo procedure, we stop the particle at a collision point a distance *a* from the starting point determined by Eq. 8 and then generate a new random number ξ to determine the new direction according to Eq. 2. Another call to the random-number generator gives a new propagation distance *a* from Eq. 8. The particle is moved to the next collision point. After *Nc* events, we record the final position and start the next particle. The simple programming task with the choice λ = 1 is performed by the following code:

```fortran
PROGRAM RandomWalk
  IMPLICIT NONE
  INTEGER, PARAMETER :: NShower = 10000, NStep = 100
  REAL(8), PARAMETER :: DTwoPi = 6.283185307179586D0
  REAL(8) :: Xi, Angle, Length, X, Y, XOld, YOld
  INTEGER :: Np, Nc

  DO Np = 1, NShower
    ! Start from the center
    XOld = 0.0D0
    YOld = 0.0D0
    ! Loop over steps
    DO Nc = 1, NStep
      ! Random direction (Eq. 2)
      CALL RANDOM_NUMBER(Xi)
      Angle = DTwoPi*Xi
      ! Random length from the exponential distribution (Eq. 8)
      CALL RANDOM_NUMBER(Xi)
      Length = -LOG(1.0D0 - Xi)   ! (1 - Xi) guards against LOG(0.0)
      ! Add the vector
      X = XOld + Length*COS(Angle)
      Y = YOld + Length*SIN(Angle)
      XOld = X
      YOld = Y
    END DO
    ! Record the final position (X,Y) of this particle here
  END DO
END PROGRAM RandomWalk
```

Figure 4 shows the results for λ = 1 (equivalently, the plot is scaled in units of mean free paths). The left-hand side shows the trajectories of 10 particles for *Nc* = 100 steps. With only a few particles, there are large statistical variations, making the distribution in angle skewed. We expect that the distribution will become more uniform as the number of particles increases because there is no preferred emission direction. The right-hand side is a plot of final positions for *Np* = 10,000 particles. The distribution is relatively symmetric, clustered within roughly 15 mean free paths of the origin. In comparison, the average total distance traversed by each particle is 100 mean free paths.

Beyond the visual indication of Fig. 4, we want quantitative information about how far particles move from the origin. To determine density as a function of radius, we divide the occupied region into radial shells of thickness Δ*r*, count the number of final particle positions in each shell, and divide by the area of the shell. Figure 5 shows the results. The circles indicate the relative density of particles in shells of width 0.8λ. Such a plot is called a *histogram* and the individual shells (containers) are called *bins*. Histograms are one of the primary methods of displaying Monte Carlo results. Note that the points follow a smooth variation at large radius, but they have noticeable statistical variations at small radius. The reason is that the shells near the origin have smaller areas, and therefore contain fewer particles to contribute to the average. Statistical variations are the prime concern for the accuracy of Monte Carlo calculations.
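A sketch of the binning step (array names and sizes are illustrative):

```fortran
! Radial histogram: count final positions in shells of width Dr
! and divide by each shell's area to get the relative density.
SUBROUTINE RadialDensity(NP, X, Y, NBin, Dr, Density)
  IMPLICIT NONE
  INTEGER, INTENT(IN)  :: NP, NBin
  REAL(8), INTENT(IN)  :: X(NP), Y(NP), Dr
  REAL(8), INTENT(OUT) :: Density(NBin)
  REAL(8), PARAMETER :: Pi = 3.141592653589793D0
  REAL(8) :: R, Area
  INTEGER :: N, M
  Density = 0.0D0
  DO N = 1, NP
    R = SQRT(X(N)**2 + Y(N)**2)
    M = INT(R/Dr) + 1                 ! shell (bin) index
    IF (M <= NBin) Density(M) = Density(M) + 1.0D0
  END DO
  DO M = 1, NBin
    Area = Pi*Dr*Dr*DBLE(2*M - 1)     ! annulus area between shells
    Density(M) = Density(M)/Area
  END DO
END SUBROUTINE RadialDensity
```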

**Footnotes**

[1] Use this link for a copy of the full report in PDF format: Monte Carlo method report.



The utility collection started with the **FP Universal Scale**. It grew out of my frustration with conventional screen rulers, which were either rigidly referenced to screen pixels or to absolute units like inches or centimeters. A more useful approach is to reference the ruler to the units of the graph or photograph to be measured. Accordingly, after several years of thought I set out to create an on-screen version of the much-loved Gerber Variable Scale. The implementation involved intensive interactions with the Windows API, so I decided to use **RealBasic** with purchased plugins to handle screen overlays. During development, the program expanded from a simple screen ruler to a complete screen-digitization system for scientists and engineers.

There were three motivations for the next utility, the **FP File Organizer**:

- In comparison to sophisticated two-window file managers like **Free Commander**, I wanted a simple, clean interface that supported the functions I use every work day.
- Our technical programs involve extended file organization. In discussing file management in tutorials, I wanted a standard reference environment.
- I needed a general file-manager unit for my MIDI programs.

**FP File Organizer** has several nice features like fast file searches, full-path copy to the clipboard, definable tools, special folders and desktop-shortcut creation. I use the program for all my work except for multi-GB file transfers. For these, I use xcopy or robocopy.

The **Cecil_B** program converts an organized set of BMP files into an AVI movie. I developed it in response to a customer request to make animations of solutions in time-domain programs like **TDiff** and **HeatWave**. I created the final two utilities, **Computer Task Organizer** (**CTO**) and **Boilerplate**, to reduce frustrations I have noticed over the last 30 years of using Windows. With regard to **CTO**, I found that most of my work day involved running the same programs with the same documents or going to the same websites repetitively. The program reduces the 100 tasks that I perform every day to single button clicks.

The new utility **Boilerplate** (Figure 1) expands the functions of the Windows clipboard in two ways:

- You can build a library of standard text selections (*i.e.*, boilerplate) that can be transferred to the clipboard with a single button press — ready to paste into a document.
- You can recall items previously on the clipboard.

The second feature deals with an irritating limit of the clipboard — it stores only one item at a time. **Boilerplate** keeps a running record of the last twenty clipboard texts — they can be recalled to the clipboard with a single button click. I got the idea from the old utility **Clipboard Magik**. The program had a lot of potential, but was difficult to utilize in practice.

