Dead-Time Free Higher Moments of the Count Rate

Erez Gilad¹, Yael Neumeier¹, Chen Dubi²,³
¹The Unit of Nuclear Engineering, Ben-Gurion University of the Negev
²Department of Physics, NRCN
³Department of Mathematics, Ben-Gurion University of the Negev

INTRODUCTION

Dead time losses in neutron detection, caused by both the detector and the electronics dead time, are a highly nonlinear effect, known to introduce significant bias in physical experiments as the power grows beyond a certain threshold. Analytic modeling of dead time losses is a complicated task, due to the different nature of the dead time in the different components of the monitoring system (paralyzing vs. non-paralyzing) and the stochastic nature of the fission chains.

The most basic analytic model for a paralyzing dead time correction assumes an uncorrelated source, resulting in an exponential model for the dead time correction. While this model is widely used and very useful for correcting the average count rate at low count rates, it is impractical in noise experiments and the so-called Feynman-α experiments. In the present study, a new technique for dead time corrections is introduced, based on backward extrapolation to zero of the losses created by imposing increasing artificial dead times on the data.
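For an uncorrelated (Poisson) source of true rate r, the classical paralyzing model gives a measured rate m = r·exp(−rτ) for dead time τ; the exponential correction amounts to inverting this relation. A minimal numerical sketch of that inversion (function name and example rates are ours, for illustration only):

```python
import numpy as np
from scipy.optimize import brentq

def invert_paralyzing(m, tau):
    """Recover the true rate r from a measured rate m under the classical
    paralyzing model for an uncorrelated source, m = r * exp(-r * tau).
    Returns the physical (low-rate) root, r < 1/tau."""
    if m >= 1.0 / (np.e * tau):
        raise ValueError("measured rate exceeds the model maximum 1/(e*tau)")
    # r * exp(-r*tau) increases monotonically on [0, 1/tau], so the
    # bracket [m, 1/tau] contains exactly one root: the low-rate branch.
    return brentq(lambda r: r * np.exp(-r * tau) - m, m, 1.0 / tau)

# e.g. a measured 9.0e4 cps with a 1 microsecond dead time:
# invert_paralyzing(9.0e4, 1.0e-6)  ->  roughly 9.9e4 cps
```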

METHODS

The new correction method inherently accounts for all higher moments of the count rate (CPS). The method is a generalization of the Backward Extrapolation (BEX) method, originally aimed at dead time correction of the average count rate. From a theoretical point of view, the ideas in the present study are very much the same as in the original BEX paper. On the practical side, there are two important differences. First, the observable being corrected does not depend only on the total count rate, but also on the time correlations between consecutive detections. Second, unlike in the original BEX work, the correction is of a function-valued observable, i.e., the Feynman-Y curve, rather than of a scalar (the CPS).
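The two basic ingredients of such a procedure can be sketched as follows (all function names and the binning scheme are ours, not the authors'): a paralyzing dead-time filter applied to a sorted array of detection timestamps, and a single-gate Feynman-Y estimate computed from the filtered train.

```python
import numpy as np

def impose_paralyzing_dead_time(timestamps, tau):
    """Impose an artificial paralyzing dead time tau on a sorted pulse
    train: every arriving pulse restarts the dead period, so a pulse
    survives only if its gap to the preceding pulse is at least tau."""
    gaps = np.diff(timestamps)
    keep = np.concatenate(([True], gaps >= tau))  # first pulse always survives
    return timestamps[keep]

def feynman_y(timestamps, gate):
    """Variance-to-mean-minus-one of the counts in consecutive gates of
    width `gate`, i.e. a single point of the Feynman-Y curve."""
    n_gates = int((timestamps[-1] - timestamps[0]) / gate)
    edges = timestamps[0] + gate * np.arange(n_gates + 1)
    counts, _ = np.histogram(timestamps, bins=edges)
    return counts.var(ddof=1) / counts.mean() - 1.0

# Sweep of artificial dead times for one gate width (values illustrative):
# taus = np.linspace(0.5e-6, 5.0e-6, 10)
# ys = [feynman_y(impose_paralyzing_dead_time(ts, t), 1.0e-3) for t in taus]
```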

RESULTS

The method was implemented on actual detection signals recorded at the MINERVE Zero Power Reactor (ZPR), and its performance was evaluated. Since the original data recorded at the ZPR suffered negligible dead time (less than 100 ns), it was used (in its entirety) as a reference, while the method itself was applied to manipulated data, on which a paralyzing dead time was inflicted.

CONCLUSIONS

A new method for performing dead time corrections on the Feynman-Y variance-to-mean ratio is introduced. The method is based on the simple idea of imposing artificial dead times of increasing durations to construct the functional dependence of the Feynman-Y curve on the dead time, and then extrapolating the function back to zero dead time. This approach was previously used to perform dead time corrections on the count rate, and the present study is a natural extension to dead time correction of higher moments of the count distribution.
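The extrapolation step might look like the following sketch: for each gate width, the Y values obtained under increasing artificial dead times are fitted and evaluated at τ = 0. The low-order polynomial model below is our choice for illustration; the paper's actual extrapolation model may differ.

```python
import numpy as np

def bex_extrapolate(taus, y_values, order=2):
    """Backward-extrapolate Feynman-Y values measured under artificial
    dead times `taus` to tau = 0 via a least-squares polynomial fit.
    The constant term of the fit is the dead-time-free estimate."""
    coeffs = np.polyfit(taus, y_values, order)
    return np.polyval(coeffs, 0.0)  # equals coeffs[-1], the constant term

# Applied per gate width T, this rebuilds the whole dead-time-free curve:
# y_corrected = [bex_extrapolate(taus, ys_for_gate[T]) for T in gate_widths]
```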

The method was implemented on a set of 12 signals, all created from the signals of 4 in-pile noise experiments by imposing artificial dead times.

The performance and accuracy of the method are tested with respect to two criteria: the deviation from the reference Feynman-Y curve, and the deviation from the reference reactivity and prompt decay constant α. Results indicate good performance and accuracy in both respects, with an average difference of 1.6% between the reference and BEX-estimated values of α, and an average difference of 15 pcm between the reference and BEX-estimated reactivities. Since the overall propagated statistical uncertainty on the reactivity is about 30 pcm, it is safe to state that, for all practical purposes, the method successfully reconstructs a Feynman-Y curve adequately corrected for dead time.
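For reference, α is conventionally extracted by fitting the corrected curve with the standard one-group point-kinetics shape Y(T) = Y∞[1 − (1 − e^(−αT))/(αT)], from which the reactivity follows. A hedged sketch of that fit (parameter names and the initial guess are ours):

```python
import numpy as np
from scipy.optimize import curve_fit

def feynman_model(T, y_inf, alpha):
    """Standard one-group point-kinetics Feynman-Y shape:
    Y(T) = y_inf * (1 - (1 - exp(-alpha*T)) / (alpha*T))."""
    aT = alpha * T
    return y_inf * (1.0 - (1.0 - np.exp(-aT)) / aT)

# gate_widths and y_corrected come from the BEX-corrected curve above;
# p0 is an illustrative initial guess for (y_inf, alpha).
# (y_inf_hat, alpha_hat), _ = curve_fit(feynman_model, gate_widths,
#                                       y_corrected, p0=(1.0, 100.0))
```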
