In this paper, a new lifting-wavelet-domain audio watermarking algorithm based on the statistical characteristics of sub-band coefficients is proposed. First, the original audio signal is segmented and each segment is divided into two sections. A Barker code is used for synchronization: the lifting wavelet transform (LWT) is performed on each section, and a synchronization code and a watermark are embedded into the first and second sections, respectively, by modifying the statistical average of the sub-band coefficients. The embedding strength is determined adaptively according to the auditory masking property. Experiments show that the embedded watermark is more robust against common signal-processing attacks than existing LWT-based algorithms and, in particular, can resist random cropping.
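A minimal sketch of the embedding idea follows. It uses a one-level Haar lifting transform and a mean-quantization rule (shift the approximation sub-band so its mean falls in a quantization cell whose parity encodes the bit); both the choice of Haar lifting steps and the quantization rule with step `delta` are illustrative assumptions, not the paper's exact scheme.

```python
def haar_lifting(x):
    """One-level Haar lifting: split, predict, update -> (approx, detail)."""
    even, odd = x[0::2], x[1::2]
    detail = [o - e for o, e in zip(odd, even)]          # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update step
    return approx, detail

def inverse_haar_lifting(approx, detail):
    """Undo the lifting steps and re-interleave the samples."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out

def embed_bit(section, bit, delta=0.1):
    """Shift the approximation sub-band so its mean lands on a
    quantization cell whose index parity equals `bit` (0 or 1).
    `delta` stands in for the adaptively chosen embedding strength."""
    approx, detail = haar_lifting(section)
    mean = sum(approx) / len(approx)
    k = round(mean / delta)
    if k % 2 != bit:
        k += 1
    shift = k * delta - mean
    approx = [a + shift for a in approx]
    return inverse_haar_lifting(approx, detail)

def extract_bit(section, delta=0.1):
    """Recover the bit from the parity of the quantized sub-band mean."""
    approx, _ = haar_lifting(section)
    mean = sum(approx) / len(approx)
    return round(mean / delta) % 2
```

Because the bit lives in the *average* of a whole sub-band, it degrades gracefully under sample-level distortions, which is consistent with the robustness the abstract claims.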
Coordinate measuring machines (CMMs) are widely used to inspect mechanical parts with high accuracy. To improve the efficiency and precision of profile measurement for aviation engine blades, a sampling method based on the firefly algorithm is proposed. In a simulation, the proposed algorithm is compared with the equal-arc-length sampling (EAS) and equi-parametric sampling (EPS) algorithms and shows better sampling quality than both. Finally, the effectiveness of the method is verified experimentally on a blade profile. Both simulated and experimental results show that the proposed method maintains measurement accuracy while measuring fewer points.
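The firefly algorithm underlying the sampling method is a population metaheuristic in which dimmer solutions move toward brighter (lower-cost) ones with distance-decaying attraction plus a small random walk. The sketch below minimizes a stand-in objective; in the paper the objective would be a blade-profile sampling quality measure, and all parameter values here (`beta0`, `gamma`, `alpha`, bounds) are illustrative assumptions.

```python
import math
import random

def firefly_minimize(f, dim, n=20, iters=100, beta0=1.0, gamma=1.0,
                     alpha=0.2, bounds=(-5.0, 5.0), seed=0):
    """Generic firefly algorithm: returns (best_position, best_cost)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    light = [f(p) for p in pop]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:  # firefly j is brighter (lower cost)
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # attraction decays with distance
                    pop[i] = [min(hi, max(lo,
                              xi + beta * (xj - xi) + alpha * (rng.random() - 0.5)))
                              for xi, xj in zip(pop[i], pop[j])]
                    light[i] = f(pop[i])
        alpha *= 0.97  # gradually cool the random walk
    best = min(range(n), key=lambda i: light[i])
    return pop[best], light[best]

# Usage with a stand-in objective (2-D sphere function):
best_pos, best_cost = firefly_minimize(lambda x: sum(v * v for v in x), dim=2)
```

For the sampling problem, `dim` would correspond to the parameter locations of the measurement points along the profile, so minimizing the fitting error directly trades point count against accuracy.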
This paper proposes a speech enhancement method using the multi-scale, multi-threshold auditory perception wavelet transform, which is suitable for low-SNR (signal-to-noise ratio) environments. The method reduces noise by thresholding the auditory perception wavelet coefficients of the speech signal according to the human ear's auditory masking effect. To prevent high-frequency loss during noise suppression, a voicing decision is first made on the speech signal; the unvoiced and voiced segments are then processed with different thresholds and decision rules. Finally, objective and subjective tests are performed on the enhanced speech. The results show that, compared with spectral subtraction methods, the proposed method keeps the unvoiced components intact while suppressing residual and background noise, so the enhanced speech has better clarity and intelligibility.
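The voicing-dependent thresholding step can be sketched as follows, using soft thresholding of wavelet coefficients and a zero-crossing rate as a simple voicing proxy. The ZCR cutoff and the threshold values are illustrative assumptions; the paper derives its thresholds from the auditory masking effect rather than fixed constants.

```python
import math

def soft_threshold(coeffs, thr):
    """Shrink each wavelet coefficient toward zero by thr (soft thresholding)."""
    return [math.copysign(max(abs(c) - thr, 0.0), c) for c in coeffs]

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs with a sign change."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / max(len(frame) - 1, 1)

def denoise_coeffs(coeffs, frame, zcr_cutoff=0.3,
                   thr_voiced=0.1, thr_unvoiced=0.04):
    """Unvoiced speech is noise-like and high-frequency, so apply a gentler
    threshold there to keep its components intact, per the paper's idea."""
    voicing_is_unvoiced = zero_crossing_rate(frame) > zcr_cutoff
    thr = thr_unvoiced if voicing_is_unvoiced else thr_voiced
    return soft_threshold(coeffs, thr)
```

The key design point the abstract highlights is exactly this asymmetry: a single global threshold would erase unvoiced consonants along with the noise, while per-segment thresholds preserve them.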