Updates

This section gathers updates to some elements discussed in the main text, made between the submission of the manuscript to the jury and the defense of the HDR.

Placing the updates in a separate section seems appropriate, first to maintain the cohesion of the main text, and second to show the progression of the work.

Exfor entry 41245.022

The questions raised in the section “Inelastic reactions results” about the Exfor entry 41245.022 may find answers now that the original paper [2] has been located by collaborators [3] and [4]. Although the article is in Russian, the result referred to in the Exfor entry can be found in its Table 1, where it is listed as pertaining to 4 transitions in different isotopes of tungsten, see Figure 70.

../_images/yk_2_59_1996_table1_excerpt.png

Figure 70 Excerpt from Table 1 of [2].

The quoted cross section may therefore be interpreted as a weighted sum of \((\text{n},~\text{n}'~\gamma)\) reactions in a natural tungsten sample.
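Written out, this working assumption reads

\(\sigma_{\text{quoted}} \simeq \sum_{i} a_i \, \sigma_i(\text{n},~\text{n}'~\gamma)\)

where the sum runs over the transitions listed in Table 1, \(a_i\) is the natural abundance of the tungsten isotope hosting transition \(i\), and \(\sigma_i\) is the corresponding \(\gamma\)-production cross section.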

Future work will soon be undertaken to verify this hypothesis with Talys calculations and, eventually, with the experimental data gathered with Grapheme.

Fit script improvement

The 354 keV results pointed to deficiencies in the fitting scripts, which are discussed in the main text (Fit improvements perspectives).

A small but effective amount of work has been done on the fitting procedure. The update is twofold:

Background guess

The initial guess for the background value was previously taken from the first (leftmost) bin of the histogram to fit. This method is simple, but can be far off when the background is very noisy. The improvement consists in using the average value of the histogram bins instead, which is better suited to cases where the peaks of interest sit on top of a large background.

The code in Listing 28 shows the change related to the background guess in the fit script.

Listing 28 diff display of the change in the adaptive_fit.py script in order to improve the background guess.
@@ -74,14 +93,19 @@ def adaptive_fit(
#   .. Background is either linear or constant
+    bg_guess = sum(the_histo)/(1.+ len(the_histo))
+    # print(f"# {bg_guess=}")
     if bg_type == "lin":
         functions_to_fit.append(lambda x, B, SL: B + SL * (x - xpeak))
-        variables_guess.extend([the_histo[0].count, 0.0])
+        variables_guess.extend([
+            bg_guess,
+            0.0
+        ])
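As a standalone illustration of the change (using a plain Python list in place of the pyhisto histogram object), the first bin is at the mercy of a single fluctuation, whereas the average uses all the bins:

# Toy bin contents: a peak on a large background, with the first bin fluctuating low
bins = [20.0, 230.0, 210.0, 700.0, 950.0, 680.0, 230.0, 190.0]

old_guess = bins[0]                        # previous strategy: leftmost bin only
new_guess = sum(bins) / (1. + len(bins))   # updated strategy, as in Listing 28

print(f"{old_guess=}, {new_guess=:.1f}")   # 20.0 versus ~356.7: a more robust starting value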
Fall back to single peak fit

In the case of a single-peak fit (as is the case for the 354 keV transition), the original fit method was (and still is) falling back on the pyhisto.tools.GaussRealFit method provided by the pyhisto package.

The GaussRealFit method relied on statistical estimators (averages, standard deviation, …) to extract first guesses from the histogram to fit. This very simple guessing strategy can fail when the histogram is very noisy, or in any situation where the mean or standard deviation of the peak of interest differs significantly from the values computed by simple statistics.
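For illustration, such a purely statistical first guess could look like the sketch below (the actual pyhisto implementation may differ in detail); when a sizeable fraction of the counts belongs to the background rather than to the peak, both estimators drift away from the true peak position and width:

import math

def statistical_first_guess(centers, counts):
    # Weighted mean and standard deviation of the bin contents
    total = sum(counts)
    mean = sum(x * c for x, c in zip(centers, counts)) / total
    var = sum(c * (x - mean) ** 2 for x, c in zip(centers, counts)) / total
    return mean, math.sqrt(var)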

In order to maintain the transmission of value guesses from the whole-range fit (see the analysis flowcharts), the pyhisto module was updated to version 0.1.2, where GaussRealFit can now take external mean and standard deviation guesses as arguments. Listing 29 shows the corresponding update in that code. The call in adaptive_fit.py is updated accordingly, as shown in Listing 30.

Listing 29 Changes in the GaussRealFit method in the pyhisto package, in order to take as call argument initial guesses for the mean and standard deviation values.
@@ -43,7 +43,9
class GaussRealFit:
    '''From an histogram, extract Gaussian 'fitted' parameters '''
    def __init__(self,
-                 h: Histogram):
+                 h: Histogram,
+                 _xguess: float = None,
+                 _stdevguess: float = None):
        '''computes sum, average x, stddev, max, bin'''
        self.h1 = h
        _simplefit = GaussFit(self.h1)
@@ -56,13 +58,13
        first_guess = [_simplefit.integral,
-                       _simplefit.mean,
-                       _simplefit.stdev,
-                       self.h1[0].count,
+                       _xguess or _simplefit.mean,
+                       _stdevguess or _simplefit.stdev,
+                       sum(self.h1)/(1. + len(self.h1)), # background guess
                      0.0]
Listing 30 Changes in the adaptive_fit.py script to pass initial guesses for the mean and standard deviation values to the GaussRealFit fallback method.
@@ -53,7 +68,11 @@ def adaptive_fit(
    # if no parasites,
    if parasites is None or len(parasites) == 0:
        # fall back to realfit
-        the_fit = GaussRealFit(the_histo)
+        the_fit = GaussRealFit(
+            the_histo,
+            _xguess=xpeak,
+            _stdevguess=guessstdev,
+        )

Consequences of the fit improvement

As a consequence of the fitting procedure improvement, the 354 keV transition in the inelastic scattering channel now returns much more reasonable values, as shown in Figure 71.

../_images/inel354_updated.svg

Figure 71 Updated experimental cross section for the 353.9 keV \(\gamma\) ray in the \((\text{n},~\text{n}')\) reaction on 183W. See Figure 58 for full description and the original (faulty) results.

context module in task files

A context module was added at the top of the task files. The goal of the context module is to simplify the imports in the task files and avoid copy-pasting the loading of the detectors list.

As an example, Listing 31 shows a diff of the top of the do_start_new_iter.py file. The listing makes it clear that the use of the context module greatly improves the readability of the code.

Listing 31 Edits to the do_start_new_iter.py file that implement the context module. The from tasks import fn_start_new_iter as fns line imports functions of interest for the tasks, implemented on the same model as the context module (but specific to this task file).
from doit.tools import create_folder
+from context import the_transitions, Ge_detectors, iter_type
+from tasks import fn_start_new_iter as fns
-from doit.action import CmdAction
-from doit import get_var
-
-import yaml
-import pathlib
-from itertools import product, chain
-import uuid
-import datetime
-
-from random import gauss as rand_normal
-from random import uniform as rand_flat
-
-# Load mapping and transitions data
-the_mapping = yaml.load(open("etc/mapping.yaml"), Loader=yaml.Loader)
-detectors = the_mapping["detectors"]
-FC_detectors = list(filter(lambda x: x.startswith("U"), detectors))
-Ge_detectors = list(filter(lambda x: not x.startswith("U"), detectors))
-the_transitions = yaml.load(open("etc/transitions_to_look_at.yaml"), Loader=yaml.Loader)
-
-
-# ID of the iteration (load if exists)
-UUID = "000000"
-if pathlib.Path("etc/uuid.txt").exists():
-    UUID = open("etc/uuid.txt", "r").read().strip()
-
-# Determine if it's a Monte Carlo iteration or not
-iter_type = "mc" if get_var("iter", "mc") == "mc" else "center"
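From the removed lines above, the content of the context module itself can be inferred; a plausible sketch of context.py (the actual file may differ in detail) would be:

# context.py -- shared definitions for the task files (reconstructed sketch)
import pathlib
import yaml
from doit import get_var

# Load mapping and transitions data once for all task files
the_mapping = yaml.load(open("etc/mapping.yaml"), Loader=yaml.Loader)
detectors = the_mapping["detectors"]
FC_detectors = list(filter(lambda x: x.startswith("U"), detectors))
Ge_detectors = list(filter(lambda x: not x.startswith("U"), detectors))
the_transitions = yaml.load(open("etc/transitions_to_look_at.yaml"), Loader=yaml.Loader)

# ID of the iteration (load if it exists)
UUID = "000000"
if pathlib.Path("etc/uuid.txt").exists():
    UUID = open("etc/uuid.txt", "r").read().strip()

# Monte Carlo iteration or central-value iteration
iter_type = "mc" if get_var("iter", "mc") == "mc" else "center"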

Use of the semi-Monte Carlo code on intermediate data

As explained in Full Monte Carlo data analysis to produce uncertainties and covariances, a partial Monte Carlo code (called resimu), running on intermediate analysis results, has been developed with F. Claeys during his thesis in order to apply the Monte Carlo method to previous results, mainly to produce correlation matrices [5], [6] and [7].

This other code has a much reduced Monte Carlo loop that runs in seconds. It was developed following some open source principles (using git, relying on configuration files, testing, …) in order to be usable with any data set from Grapheme. Indeed, it has so far been used on 238U, 233U, and 232Th data [6].

We can now add 183W to the list. Plugging resimu into the analysis pipeline was very easy, as all the required input files already exist (\(\gamma\) count files for each HPGe detector, neutron flux integrated over energy / time-of-flight windows, efficiencies, …). It was just a matter of gathering the input data, doing a little reformatting (order of the data columns in the files, …) and writing the configuration file (which mostly lists the input files), all of which can be fully automated.
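As an illustration of how little glue this requires, an automated version of that step could look like the sketch below (all file names and configuration keys are invented for the example and do not reflect the actual resimu interface):

import numpy as np
import yaml

# Reorder the columns of a gamma-count file to the layout expected by resimu
counts = np.loadtxt("out/gamma_counts_det01.txt")
np.savetxt("resimu_in/gamma_counts_det01.txt", counts[:, [1, 0, 2]])

# Write the configuration file, which mostly lists the input files
config = {
    "gamma_counts": ["resimu_in/gamma_counts_det01.txt"],
    "neutron_flux": "resimu_in/flux_tof_windows.txt",
    "efficiencies": "resimu_in/efficiencies.txt",
}
with open("resimu_config.yaml", "w") as f:
    yaml.dump(config, f)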

The result (at the moment only available for the 268 keV transition) is compared to the one from the full Monte Carlo code in Figure 72. The updated figure shows a very high degree of compatibility between the full Monte Carlo and the semi-Monte Carlo codes.

../_images/inel268_resimu.svg

Figure 72 Updated experimental cross section for the 268.1 keV \(\gamma\) ray in the \((\text{n},~\text{n}')\) reaction on 183W. See Figure 52 for full description.

Figure 73 compares the correlation matrices obtained with both methods. Strong similarities can be observed, in particular the negative correlation at the lowest energies. Some differences can still be noticed, but this is to be expected, as the correlation matrix strongly depends on the way the results were computed, and the two methods (full or partial MC) differ significantly in some input parameters (in particular the \(\gamma\) counts and the treatment of their uncertainty and normalization).

../_images/inel268_correlation_compare.png

Figure 73 Correlation matrix for the 268.1 keV \(\gamma\) ray in the \((\text{n},~\text{n}')\) reaction on 183W, from the full Monte Carlo code on the left (Figure 53), and from the semi-Monte Carlo analysis on the right.
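For reference, once the Monte Carlo samples are available, the correlation matrix itself amounts to a few lines of numpy; in the sketch below the array layout is an assumption, with each row of samples holding the cross sections of one Monte Carlo iteration and each column one energy bin:

import numpy as np

# samples[k, j]: cross section in energy bin j at Monte Carlo iteration k
samples = np.random.normal(loc=1.0, scale=0.05, size=(500, 30))  # placeholder values
correlation = np.corrcoef(samples, rowvar=False)                 # (30, 30) correlation matrix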

The ease of implementing the semi-MC procedure, along with the high compatibility of the results, encourages us to make use of this fast analysis method in the future, at least as a complement to, and a check of, the full Monte Carlo procedure.

Other updates

Other updates and changes have been implemented:

Minimization method in fitting procedures

Changing from the default method trf to dogbox in the fitting procedure did not significantly improve the speed of the script or the accuracy of the results.
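Assuming the fits rely on scipy.optimize.curve_fit (trf and dogbox are the names of scipy's least-squares solvers), such a change amounts to a single keyword; a sketch with placeholder model and data:

import numpy as np
from scipy.optimize import curve_fit

def gauss_plus_bg(x, amp, mean, sigma, bg):
    # Gaussian peak on a constant background
    return amp * np.exp(-0.5 * ((x - mean) / sigma) ** 2) + bg

x = np.linspace(340.0, 370.0, 61)
y = gauss_plus_bg(x, 100.0, 354.0, 1.5, 10.0) + np.random.normal(0.0, 2.0, x.size)

popt, pcov = curve_fit(gauss_plus_bg, x, y,
                       p0=[80.0, 353.0, 2.0, 5.0],
                       method="dogbox")  # instead of "trf"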

Using last modification timestamp instead of file md5 checksum in job dependencies.

This change did not significantly impact the iteration running time.
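If doit's built-in mechanism is used for this, the switch is a one-line global configuration change; a minimal sketch (the check_file_uptodate option accepts "md5", the default, and "timestamp"):

# dodo.py -- global doit configuration
DOIT_CONFIG = {
    "check_file_uptodate": "timestamp",  # instead of the default "md5"
}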

Footnotes