Dynamic light scattering (DLS), also called photon correlation spectroscopy (PCS), was pioneered in the early 1960s [R. Pecora, "Doppler shifts in light scattering from pure liquids and polymer solutions," J. Chem. Phys. 40(6), 1604–1614 (1964); N. C. Ford, Jr. and G. B. Benedek, "Observation of the spectrum of light scattered from a pure fluid near its critical point," Phys. Rev. Lett. 15(16), 649–653 (1965)] and has been widely used in many scientific and engineering fields since. Following its first use for particle size measurement in 1972 [S. P. Lee, W. Tscharnuter, and B. Chu, "Calibration of an optical self-beating spectrometer by polystyrene latex spheres, and confirmation of the Stokes-Einstein formula," J. Polym. Sci. 10, 2453–2459 (1972)], it has become a standard method for submicron particle sizing [ISO 13321: Particle size analysis – Photon correlation spectroscopy (1996); ISO 22412: Particle size analysis – Dynamic light scattering (DLS) (2008)]. In addition to the size and size distribution of the suspended particles, the technique provides information on almost any parameter that causes the scattered light intensity to change with time. For example, rotation or vibration of the scattering centers, or fluctuations in refractive index, entropy or thermal diffusivity, all give rise to intensity fluctuations with time and can be monitored.
The principles of DLS and the instrumentation used are well known [J. C. Thomas, "Photon correlation spectroscopy: technique and instrumentation," Proc. SPIE 1430, 2–18 (1991)]. In a DLS experiment, the information on the dynamics of the scatterers is extracted from the time fluctuations of the scattered intensity, I(t). The intensity is usually measured by a small acceptance angle detector, such as a photomultiplier tube with appropriate apertures. The fluctuations are quantified by calculating the intensity time autocorrelation function, which in normalized form is

g^(2)(τ) = ⟨I(t)I(t + τ)⟩ / ⟨I(t)⟩²,

where ⟨·⟩ denotes an average over time and τ is the time delay. The electric field autocorrelation function, g^(1)(τ), is related to the intensity autocorrelation function by the Siegert relationship,

g^(2)(τ) = 1 + β|g^(1)(τ)|²,

where β ≤ 1 is an instrumental coherence factor.
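As a numerical illustration (our own sketch, not part of the original experiments), the normalized intensity autocorrelation and the Siegert inversion can be computed directly from an intensity record; all names and parameter values below are illustrative:

```python
import numpy as np

def g2(intensity, max_lag):
    """Normalized intensity autocorrelation g2(tau) = <I(t)I(t+tau)> / <I>^2."""
    x = np.asarray(intensity, dtype=float)
    mean_sq = x.mean() ** 2
    return np.array([
        np.mean(x[:len(x) - lag] * x[lag:]) / mean_sq
        for lag in range(1, max_lag + 1)
    ])

def g1_from_siegert(g2_vals, beta):
    """Invert the Siegert relation: |g1(tau)| = sqrt((g2(tau) - 1) / beta)."""
    return np.sqrt(np.clip((np.asarray(g2_vals) - 1.0) / beta, 0.0, None))

# Simulate a fluctuating, positive "intensity" with a correlation time of
# ~50 samples (AR(1) process), then correlate it.
rng = np.random.default_rng(0)
n = 200_000
rho = np.exp(-1.0 / 50.0)        # per-sample correlation factor
x = np.empty(n)
x[0] = 0.0
noise = rng.normal(size=n)
for i in range(1, n):
    x[i] = rho * x[i - 1] + np.sqrt(1.0 - rho**2) * noise[i]
intensity = (1.0 + 0.5 * x) ** 2
corr = g2(intensity, max_lag=100)                 # decays towards 1 with lag
field = g1_from_siegert(corr, beta=corr[0] - 1.0)
```

Here β is estimated from the zero-lag excess of g^(2); in a real instrument it is fixed by the detection optics.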
The time average reduces the noise in the autocorrelation signal and improves the statistics. The detected scattered light signals are very weak (usually less than ~10^−15 W), so noise reduction is important in DLS measurements. An understanding of noise and noise reduction is of interest for improving the performance of the DLS particle sizing method, particularly when data are recorded for short times or at low count rates. Such situations arise when working with small particle sizes, weakly scattering particles, low particle concentrations, particles with a small refractive index, or with evolving systems or online applications where the measurement time may be limited. Studies on noise in DLS experiments range from optical aspects [J. B. Lastovka, "Light mixing spectroscopy and the spectrum of light scattered by thermal fluctuations in liquids," PhD thesis, Massachusetts Institute of Technology (1968); V. Degiorgio and J. B. Lastovka, "Intensity correlation spectroscopy," Phys. Rev. A 4(5), 2033–2050 (1971)] to, more recently, the distortion in the correlation data [K. Schätzel, "Noise in photon correlation and photon structure functions," Opt. Acta (Lond.) 30, 155–166 (1983); W. Brown, Dynamic Light Scattering: The Method and Some Applications (Oxford Science Publications, 1993), p. 149] and various other aspects of the technique [B. Lou, "The influence and corresponding solution of noise in DLS particle sizing experiment," J. Light Scattering 20(4), 310–313 (2008); H. Yang, G. Zheng, and M. Li, "A discussion of noise in dynamic light scattering for particle sizing," Part. Part. Syst. Charact. 25(5-6), 406–413 (2008)]. These studies mainly analyzed the noise sources, the influence of noise on the measurement, and experimental methods to minimize noise. A question that has received very little attention is: given that the light scattering signals contain noise, what can be done to suppress or remove it? In a preliminary study [J. Shen, J. C. Thomas, X. Zhu, and Y. Wang, "Wavelet noise reduction in dynamic light scattering," International Conference on Information Science and Technology (CSIE 2011), to be published], we used a wavelet transform method to reduce noise in DLS data and found that denoising with suitable thresholds gives more accurate particle size values than are obtained from the same raw light scattering data. Very few workers have used wavelet denoising in DLS. One reason is that wavelet denoising must be applied directly to the light scattering signals, before autocorrelation. In most DLS experiments the light scattering signals are processed directly by a hardware correlator [J. C. Thomas, Proc. SPIE 1430, 2–18 (1991)] and the time history of the scattered intensity or photon counts is not recorded. We therefore used a high speed digital storage oscilloscope (DSO) to record the time-history data and investigate wavelet packet denoising of DLS signals.
2. Wavelet packet denoising method
The wavelet packet transform, which evolved from wavelet analysis, is a method of signal analysis that decomposes a signal into low and high frequency components simultaneously. Using a wavelet packet transform, the signal band can be divided into multiple levels of both low and high frequency sub-bands. For three levels of decomposition, the wavelet packet transform can be represented by the decomposition tree shown in Fig. 1.
Fig. 1 Three level wavelet packet decomposition.
It can be seen that the signal, S, is equal to the sum of the components at the deepest level:

S = AAA3 + DAA3 + ADA3 + DDA3 + AAD3 + DAD3 + ADD3 + DDD3.

Here A denotes an approximation term (the large amplitude, low frequency component) and D a detail term (the low amplitude, high frequency component).
The wavelet packet decomposition algorithm is

d_l^(j+1, 2n) = Σ_k a_(k−2l) d_k^(j, n),    d_l^(j+1, 2n+1) = Σ_k b_(k−2l) d_k^(j, n),

and the wavelet packet reconstruction algorithm is

d_l^(j, n) = Σ_k [ h_(l−2k) d_k^(j+1, 2n) + g_(l−2k) d_k^(j+1, 2n+1) ],

where the d_l^(j, n) denote the coefficients of the wavelet packet decomposition, and {a_k, b_k} and {h_k, g_k} represent the conjugate filters defined in multi-resolution analysis. Here j is the decomposition layer number, k is the time series shift index, l is the wavelet packet coefficient sequence index and n is the wavelet packet node number.
In a practical measurement, the i-th sample of the signal with noise can be written as

s(i) = f(i) + δ e(i),

where f(i) is the noise-free signal, e(i) the noise and δ the noise intensity. Because the wavelet transform is linear, the wavelet transform of s(i) is the sum of the wavelet transforms of the signal f(i) and the noise δe(i). Using this property, we can perform a multi-scale wavelet packet transform, extract the wavelet coefficients, remove the noise coefficients as far as possible at each scale and, finally, reconstruct the signal using the wavelet packet reconstruction algorithm.
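The decompose–threshold–reconstruct procedure can be sketched as follows. This is our own minimal illustration using a Haar filter pair for clarity (the experiments below use a Daubechies 5 wavelet); the function names are ours.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def haar_split(x):
    # One analysis step: approximation (low-pass) and detail (high-pass),
    # each at half the input length. Input length must be even.
    return (x[0::2] + x[1::2]) / SQRT2, (x[0::2] - x[1::2]) / SQRT2

def haar_merge(a, d):
    # Exact inverse of haar_split.
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / SQRT2
    x[1::2] = (a - d) / SQRT2
    return x

def wp_decompose(x, levels):
    # Full wavelet packet tree: every node is split at every level.
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        nodes = [half for node in nodes for half in haar_split(node)]
    return nodes

def wp_reconstruct(nodes):
    # Merge sibling nodes pairwise back up the tree.
    while len(nodes) > 1:
        nodes = [haar_merge(nodes[i], nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

def wp_denoise(x, levels, threshold):
    # Hard-threshold the packet coefficients in every node except the
    # all-approximation node, which carries the low-frequency bulk of the signal.
    nodes = wp_decompose(x, levels)
    kept = [nodes[0]] + [np.where(np.abs(c) >= threshold, c, 0.0) for c in nodes[1:]]
    return wp_reconstruct(kept)
```

With a zero threshold the transform reconstructs the input exactly; hard-thresholding the higher frequency nodes removes most of the white noise while leaving a slowly varying signal largely intact.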
3. Experiments and results
DLS experiments were carried out using a Brookhaven Instruments BI-200SM stepper-motor-controlled goniometer, a BI-2030AT 72-channel correlator and a Lexel 85-2 2W Ar+ laser operating at a wavelength of 488nm. The time-history scattered light signals were captured with a DSO which had 500MHz bandwidth, 2.5 GS/s sample rate and a 10 megapoint record length (Tektronix MSO4054 Mixed Signal Oscilloscope).
For the experiments, we diluted standard latex sphere suspensions in distilled, de-ionized water from a Millipore Milli-Q/Milli-Rho system. We used Duke Scientific polystyrene latex standards; their nominal sizes are shown in Table 1. During the experiments, the temperature of the sample cell was held at 25°C and measurements were made at laser powers of 13, 25, 50 and 100mW at a scattering angle of 90°. Data were collected by the correlator and the DSO simultaneously. For the correlator, 3 measurements of 5 minutes duration were recorded for each sample. For the DSO, 5 records of 10M samples with a sampling time of 40ns were taken for each sample. The oscilloscope record is only 400ms long, very short compared with the 300s correlator measurement, because the correlator works in real time. The photon pulses were extracted from the DSO waveform records in software and integrated to give the number of photon counts within the chosen unit time (equivalent to the sample time of the correlator), yielding the photon counting time-history data. These data were denoised using different thresholds, the signals were reconstructed and autocorrelation was performed on the reconstructed signals. Because the DSO records correspond to very short duration experiments, the autocorrelation data are noisy. The total number of photon pulses differed between measurements depending on the scattering intensity received by the photomultiplier tube, and the number of pulses within a correlation channel sample time varied even more because different sample times were used for the various particle sizes. The autocorrelation data from both the correlator and the denoised DSO records were fitted using the method of cumulants [D. E. Koppel, "Analysis of macromolecular polydispersity in intensity correlation spectroscopy: the method of cumulants," J. Chem. Phys. 57(11), 4814–4820 (1972)] to determine the mean particle size and the root-mean-square fitting error (RMSE) of the cumulants fit. The RMSE gives an indication of the noise on the autocorrelation data.
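A second-order cumulants fit of this kind can be sketched as follows (our own minimal implementation, not the authors' code): fit a quadratic to ln g^(1)(τ) and convert the recovered decay rate Γ to a hydrodynamic diameter through the Stokes–Einstein relation. The water parameters used (n = 1.33, η = 0.89 mPa·s at 25°C) are assumed nominal values.

```python
import numpy as np

def cumulant_fit(tau, g1):
    """Second-order cumulants fit: ln g1 = ln b - Gamma*tau + (mu2/2)*tau^2.
    Returns (Gamma, mu2, rms fitting error in ln g1)."""
    y = np.log(g1)
    c2, c1, c0 = np.polyfit(tau, y, 2)
    resid = y - np.polyval([c2, c1, c0], tau)
    return -c1, 2.0 * c2, np.sqrt(np.mean(resid ** 2))

def diameter_from_gamma(gamma, wavelength, n_medium, angle_deg, temp_K, viscosity):
    """Gamma = D q^2 and D = k_B T / (3 pi eta d)  ->  solve for d."""
    k_B = 1.380649e-23
    q = 4.0 * np.pi * n_medium * np.sin(np.radians(angle_deg) / 2.0) / wavelength
    D = gamma / q ** 2
    return k_B * temp_K / (3.0 * np.pi * viscosity * D)

# Noise-free synthetic monodisperse decay with Gamma = 968 s^-1
# (the value quoted below for the 300nm sample).
tau = np.linspace(20e-6, 2e-3, 100)          # delay times, s
g1 = np.exp(-968.0 * tau)
gamma, mu2, rmse = cumulant_fit(tau, g1)
d = diameter_from_gamma(gamma, 488e-9, 1.33, 90.0, 298.15, 0.89e-3)
# gamma recovers 968 s^-1 and d comes out near 300 nm (~2.97e-7 m here)
```

On noisy correlation data the returned rms residual plays the role of the RMSE quoted in the text.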
Table 1 Particle Sizing Results from the Correlator

Nominal size /nm    Catalog No.    Measured size /nm
105                 3100A          110 ± 1
300                 3300A          324 ± 2
503                 3500A          520 ± 4
1035                4010A          1070 ± 10
The correlator results were taken as the "true" particle sizes (shown in Table 1). These were compared with the results from the DSO data, which were denoised using the wavelet packet threshold approach. For the denoising, a four-layer wavelet packet decomposition was implemented using a Daubechies 5 (db5) wavelet with thresholds of 0 (no denoising), 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.8, 1.0 and 2.0. The threshold determines which wavelet packet coefficients are significant and are retained in the signal reconstruction. A typical field autocorrelation function before and after denoising is shown in Fig. 2. Typical particle sizes from the denoised signals at the different laser powers are shown in Fig. 3(a), and the RMSE in Fig. 3(b). In the calculation of the correlation function, the sample time (channel spacing) was 20µs or 40µs and the corresponding number of samples used in the correlation function calculation was 20k or 10k. The coherence times (the decay times of the electric field autocorrelation function) and the decay constants, Γ, under the experimental conditions for the 105nm, 300nm, 503nm and 1035nm particles are 344µs (Γ = 2905s^−1), 1033µs (Γ = 968s^−1), 1739µs (Γ = 575s^−1) and 3564µs (Γ = 281s^−1) respectively. From these we can calculate the average number of photon counts detected in a coherence time, n̄_c. The average total photon counts, N̄, the average photon counts in a sample time, n̄_s, and the average photon counts in a coherence time, n̄_c, for the particles at the different laser powers are shown in Table 2. It can be seen from Table 2 that very few photon counts occur in these measurements. Even at 100mW laser power, the total photon counts recorded vary from ~16k for the 1035nm particles to ~267k for the 105nm particles. These are very short experiments and the data are consequently noisy.
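As a consistency check (our own sketch; the refractive index and viscosity of water, n = 1.33 and η = 0.89 mPa·s at 25°C, are assumed nominal values), the quoted decay constants can be compared with Γ = Dq², where D = k_BT/(3πηd) is the Stokes–Einstein diffusion coefficient and q = (4πn/λ)sin(θ/2) the scattering vector magnitude:

```python
import math

def decay_constant(d_nm, wavelength=488e-9, n_medium=1.33, angle_deg=90.0,
                   temp_K=298.15, viscosity=0.89e-3):
    """Predicted DLS decay constant Gamma = D q^2 (s^-1) for a sphere of
    diameter d_nm (in nm), via the Stokes-Einstein relation."""
    k_B = 1.380649e-23
    q = 4.0 * math.pi * n_medium * math.sin(math.radians(angle_deg) / 2.0) / wavelength
    diffusion = k_B * temp_K / (3.0 * math.pi * viscosity * d_nm * 1e-9)
    return diffusion * q ** 2

for d in (105, 300, 503, 1035):
    print(d, round(decay_constant(d)), "s^-1")
# Agrees with the quoted Gamma values to within ~6%
# (within ~1% for the 300nm, 503nm and 1035nm spheres).
```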
Fig. 2 The effect of denoising on autocorrelation functions from 300nm latex spheres.
Fig. 3 (a) Measured particle size for 300nm spheres as a function of denoising threshold; (b) root-mean-square (RMS) fitting error as a function of denoising threshold for 300nm spheres.
Table 2 Mean Photon Count as a Function of Laser Power
Nominal size /nm    Laser power /mW
4. Discussion and conclusions
Figure 2 shows the effect of denoising with different thresholds on the intensity autocorrelation functions. With increasing filtering threshold, the autocorrelation function becomes smoother as high frequency noise is removed.
Figure 3(a) shows the variation with filtering threshold of the particle size recovered from the autocorrelation functions for the 300nm particles over a range of incident laser powers. Similar behavior was observed for the other particles, with the exception of the 105nm diameter particles. For laser powers of 25 to 100mW, the recovered size of the 300nm particles remains steady or decreases slightly as the threshold increases up to ~0.5, at which point the particle size is closest to the "true" value. Further increases in the threshold give a further small decrease in particle size. The root-mean-square fitting error is plotted in Fig. 3(b). It increases as the laser power decreases and the autocorrelation data become noisier, and it shows a pronounced decrease to a broad minimum at thresholds of ~0.5–0.6. We conclude that denoising with filtering thresholds of ~0.5–0.6 gives the best estimate of the particle size.
We note here that denoising had no effect on the data taken at 13mW laser power: the scattered intensity was so low, and the data so noisy, that denoising could not improve them. The results for the 105nm particles also did not improve with denoising; thresholds ≤ ~0.4 had no effect, and larger thresholds increased the recovered particle size slightly. This is due to the relatively large number of photon counts in a sample time for this sample, as explained below.
Comparing the denoising results with the average total photon counts, N̄, the average photon counts per sample time, n̄_s, and the average photon counts per coherence time, n̄_c, for the particles at the different laser powers, we found that threshold denoising is effective only when n̄_s is larger than ~0.8. In other words, the method only works when the photon count in the sampling time is ~1 or greater.
For no denoising (threshold 0), the noisy DLS signal does not give good results, because the number of samples used to compute the correlation function was only 20k or 10k and the data are noisy. However, denoising with thresholds in the range 0.5–0.8 gives both more accurate particle size values and a smaller root-mean-square error for the 300nm, 503nm and 1035nm particles, provided the number of photon counts in the sampling time is ~1 or greater. In practice this condition is easily satisfied.
The data for the 105nm particles behaved differently: 20k samples were sufficient to give good results without denoising, and as the denoising threshold increased the recovered particle size became larger and moved further from the expected value. We believe good results are obtained for the 105nm particles with only 20k samples because the relatively high average photon count rate (about an order of magnitude greater than for the other samples) gave much less noisy data, so there is little to gain from denoising. Moreover, wavelet threshold denoising may bias the data by removing high frequency components that are genuinely due to the 105nm particles. In other words, there is a loss of information for small particles because their characteristic higher frequency components are suppressed, which may explain why the recovered particle size grows with increasing denoising threshold.
In dynamic light scattering, when the average number of photon counts detected in a coherence time, n̄_c, is much less than 1, the statistics are dominated by the shot noise of the detection process, and when n̄_c is much greater than 1 they are dominated by the intensity fluctuations of the detected signal [J. N. Shaumeyer, M. E. Briggs, and R. W. Gammon, "Statistical fitting accuracy in photon correlation spectroscopy," Appl. Opt. 32(21), 3871–3879 (1993)]. Clearly, the method used here is effective only against the noise caused by the intensity fluctuations of the detected signal, and not against the shot noise of the detection process, because all our experiments were carried out with n̄_c > 1, above the shot noise regime.
Denoising, as we have seen here for the data from the 105nm spheres, has no benefit if the count rate is sufficiently high. Likewise, if the measurement time can be increased to capture a large enough number of photons, the data will be smooth and will not benefit from denoising. Furthermore, this denoising technique is technically demanding. To collect 300s of data, a relatively short experiment for a real-time correlator, a DSO with a 40ns sample time would require 7.5×10^9 samples; such a record length is not available in a DSO. These data must then be processed to find the photons and build up the time history, filtered and correlated, all of which ensures the technique will not approach real-time performance. We therefore do not expect our technique to compete with typical DLS measurements except in the regime of short duration, low count rate experiments.
In conclusion, we have demonstrated that denoising of DLS data can be performed using wavelet packet filtering and that this provides more accurate particle size results for short duration, low count rate measurements.