Algorithm of nuclide identification
The following sections describe the individual steps taken by the nuclide identification algorithm.
Loading of basic data
In this phase we collect all the information necessary for the actual nuclide identification. This includes:
Peak list, which contains the spectrum peaks to be identified. Every peak has energy, energy uncertainty, area and area uncertainty values.
List of components to be searched for.
Radiations belonging to each component. Every radiation must have energy, energy uncertainty, intensity and intensity uncertainty values.
Detector efficiency curve.
Preliminary search and filtering of peaks and components
The process of matching peaks to library lines is the following:
Searching for all radiations which may belong to a spectrum peak. This is done by finding radiations where the uncertainty region around the radiation energy, Erad, overlaps the uncertainty region around the peak energy, Epeak. The uncertainty region is ESigMult * PeakEUnc for the peak, and ESigMult * RadEUnc for the radiation. ESigMult is an uncertainty multiplier; its value is 4 by default.
This search process gives a very broad list of matching radiations, based purely on the energies.
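The overlap test above can be sketched as follows. This is a minimal illustration, not the tool's actual API; the field names and data layout are assumptions.

```python
# Hypothetical sketch of the energy-window overlap test between a
# spectrum peak and a library radiation. Field names are assumptions.
ESIG_MULT = 4.0  # default uncertainty multiplier

def energies_overlap(e_peak, peak_unc, e_rad, rad_unc, esig_mult=ESIG_MULT):
    """True if the uncertainty regions around peak and radiation energies overlap."""
    return abs(e_peak - e_rad) <= esig_mult * peak_unc + esig_mult * rad_unc

def match_radiations(peak, radiations, esig_mult=ESIG_MULT):
    """Return all library radiations whose energy window overlaps the peak's."""
    return [rad for rad in radiations
            if energies_overlap(peak["energy"], peak["energy_unc"],
                                rad["energy"], rad["energy_unc"], esig_mult)]
```

Because both windows are scaled by the same multiplier, widening ESigMult makes the preliminary match deliberately permissive; later steps filter out spurious candidates.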
Calculating the 'spectroscopic strength' value, which is a normalized sum of the PeakInt * Efficiency values. This gives a rough descriptive number about the presence of the important library lines for each component. If the spectroscopic strength is below a threshold value (10% by default), the component is excluded from further analysis.
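One plausible reading of this measure is the intensity-times-efficiency sum over library lines matched to spectrum peaks, normalized by the same sum over all lines of the component. The exact normalization used by the algorithm is not stated, so the following is only an illustrative guess:

```python
# Sketch of a 'spectroscopic strength' estimate. The normalization
# (matched lines over all lines) is an assumption, not the documented formula.
STRENGTH_THRESHOLD = 0.10  # 10% default cut

def spectroscopic_strength(lines, efficiency, matched_energies):
    """lines: [(energy, intensity)]; matched_energies: energies found as peaks."""
    total = sum(inten * efficiency(e) for e, inten in lines)
    found = sum(inten * efficiency(e) for e, inten in lines
                if e in matched_energies)
    return found / total if total > 0 else 0.0
```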
Peak merging: the algorithm now merges peaks whose distance is less than MaxFwhmContract * FWHM channels (default MaxFwhmContract = 1.5). The identification is much more robust if unnecessarily split doublets are joined again; their summed areas are then split by the nuclide identification algorithm.
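The merging pass could look like the sketch below. The area-weighted centroid for the joined peak position is an assumption; only the distance criterion and the area summation come from the description above.

```python
# Sketch of the doublet-merging pass: adjacent peaks closer than
# MaxFwhmContract * FWHM channels are joined and their areas summed.
MAX_FWHM_CONTRACT = 1.5

def merge_peaks(peaks, fwhm, max_fwhm_contract=MAX_FWHM_CONTRACT):
    """peaks: list of (position_channel, area). Returns the merged list."""
    merged = []
    for pos, area in sorted(peaks):
        if merged and pos - merged[-1][0] < max_fwhm_contract * fwhm:
            prev_pos, prev_area = merged[-1]
            total = prev_area + area
            # area-weighted centroid of the joined doublet (assumed choice)
            merged[-1] = ((prev_pos * prev_area + pos * area) / total, total)
        else:
            merged.append((pos, area))
    return merged
```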
Determination of component clusters. If no overlapping peaks exist between two components, their activities may be determined independently, which reduces the computation time and increases stability. For this reason, overlap detection is performed for component clusters (sets of components), not just single components; each cluster can then be identified quantitatively in an independent way.
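Cluster determination amounts to finding the connected components of the "shares a peak" relation. A minimal sketch, assuming each component carries the set of peak ids it was matched to:

```python
# Sketch of component clustering: components sharing at least one
# spectrum peak end up in the same cluster (connected components of
# the sharing graph). The data layout is an assumption.

def find_clusters(component_peaks):
    """component_peaks: {component_name: set(peak_ids)} -> sorted clusters."""
    names = list(component_peaks)
    parent = {n: n for n in names}

    def find(n):  # union-find with path halving
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if component_peaks[a] & component_peaks[b]:  # shared peak
                parent[find(a)] = find(b)

    clusters = {}
    for n in names:
        clusters.setdefault(find(n), set()).add(n)
    return sorted(sorted(c) for c in clusters.values())
```

Each cluster can then be fitted on its own, which keeps the linear systems small.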
Fitting intensities of components
If a peak is not found in the spectrum at the position of a library line, a virtual peak is created. Its area is assumed to be zero, while its area uncertainty is computed from the efficiency curve, the FWHM and the actual background under the missing peak. This way a missing peak also takes part in the activity fit of the component.
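A rough sketch of the virtual-peak construction, assuming a Poisson (square-root) uncertainty on the background counts under the expected peak region; the region width of 3 FWHM is an illustrative assumption, not the documented value:

```python
import math

# Sketch of a virtual peak for a missing library line: zero area,
# with an uncertainty from the background under the expected peak.
# The sqrt(background) model and the 3-FWHM region are assumptions.
def virtual_peak(energy, fwhm, background_per_channel, region_fwhm=3.0):
    width = region_fwhm * fwhm            # channels spanned by the missing peak
    bkg_counts = background_per_channel * width
    return {"energy": energy, "area": 0.0,
            "area_unc": math.sqrt(bkg_counts)}
```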
Together with the activity fitting, the correlation between the components of the cluster is also determined. If two components share at least one spectrum peak, they are correlated. If the correlation is very high, exceeding the MaxCompCorr value (0.99 by default), then the two or more components are treated as indistinguishable. An example would be two nuclides, each having only one significant gamma line at nearly the same energy; in this case, independent activities cannot be calculated from the spectrum by any means. Such highly correlated components are collected into component groups and determined together.
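The grouping of highly correlated components can be sketched as below, assuming the pairwise correlation coefficients from the fit are available in a dictionary; that input format is an assumption.

```python
# Sketch of grouping highly correlated components: pairs whose fitted
# activity correlation exceeds MaxCompCorr are merged into one group
# and their activities are determined together.
MAX_COMP_CORR = 0.99

def correlation_groups(names, corr, max_corr=MAX_COMP_CORR):
    """corr: {(name_a, name_b): correlation coefficient} -> sorted groups."""
    groups = [{n} for n in names]
    for (a, b), c in corr.items():
        if abs(c) > max_corr:
            ga = next(g for g in groups if a in g)
            gb = next(g for g in groups if b in g)
            if ga is not gb:
                ga |= gb
                groups.remove(gb)
    return sorted(sorted(g) for g in groups)
```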
Another fit is performed on the linear equation system, which tries to describe the measured spectrum peaks and also accounts for the background under the missing peaks.
Checking whether all of the spectrum peaks are well described by the sum of the counts originating from the components' gamma lines. If there is a significant difference between the peak area and the sum of library line intensities for a peak (the significance threshold is 4 by default), then the problem is re-fitted, allowing a large discrepancy at that peak.
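The consistency check itself is straightforward: a peak is flagged when the fitted sum deviates from the measured area by more than the threshold, measured in units of the area uncertainty. A minimal sketch, with assumed data layout:

```python
# Sketch of the peak-consistency check: flag peaks where the fitted
# line sum deviates from the measured area by more than the
# significance threshold (in units of the area uncertainty).
SIGNIFICANCE = 4.0

def inconsistent_peaks(peaks, predicted, significance=SIGNIFICANCE):
    """peaks: [(area, area_unc)]; predicted: fitted count sum per peak."""
    flagged = []
    for i, ((area, unc), pred) in enumerate(zip(peaks, predicted)):
        if unc > 0 and abs(area - pred) / unc > significance:
            flagged.append(i)
    return flagged
```

The flagged indices drive the re-fit in which a large discrepancy is tolerated at those peaks.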
Final nuclide identification steps
Checking the fitted activities of the components. In case of unphysical values (zero or negative activities), the component in question is dropped and the fit is performed again.
Filtering the final component list by the activity uncertainties: if the uncertainty is higher than a threshold value (35% by default), the component is dropped and a re-fit is performed. The components are then sorted by the uncertainties of their activity values.
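The final filtering and sorting steps can be sketched as follows. In the real algorithm a drop triggers a re-fit of the remaining components; this sketch only shows the filter and sort criteria, with an assumed tuple layout.

```python
# Sketch of the final filtering pass: drop components whose relative
# activity uncertainty exceeds the threshold, then sort the survivors
# by relative uncertainty. Dropping zero/negative activities mirrors
# the unphysical-value check of the previous step.
MAX_ACT_UNC = 0.35  # 35% default threshold

def filter_components(components, max_rel_unc=MAX_ACT_UNC):
    """components: [(name, activity, activity_unc)] -> filtered, sorted list."""
    kept = [c for c in components
            if c[1] > 0 and c[2] / c[1] <= max_rel_unc]
    return sorted(kept, key=lambda c: c[2] / c[1])
```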