Abstract

Purpose: We propose a new dictionary learning algorithm (AK-SVD) based on K-SVD. AK-SVD can denoise CBCT images and does not require the noise information as prior knowledge.

Methods: AK-SVD has two steps: signal sparse representation, followed by dictionary optimization. The CBCT image is sparse, with only a limited number of large representation coefficients; the remaining coefficients are zero or near zero. In the sparse representation step of traditional K-SVD, the noise variance is used as the threshold for selecting the large coefficients. This increases the complexity of the algorithm, and the denoising result is also affected by the accuracy of the noise-variance estimate, especially under non-Gaussian noise. In AK-SVD we instead use the average of the large coefficients already selected as the threshold. Each newly found coefficient is compared with this threshold: if it is larger, it is accepted as a large coefficient, added to the set of selected coefficients, and the search continues; if it is smaller, the search stops. This threshold does not depend on the noise variance, and with this rule we modify the traditional K-SVD.

Results: In synthetic experiments on learning a dictionary from synthetic signals, the recovery rate of the dictionary learned by AK-SVD was similar to the ideal result of K-SVD with the noise variance known. However, AK-SVD did not need to estimate the noise variance, so it had lower computational complexity and wider applicability. In a denoising experiment on a CBCT image corrupted by non-Gaussian noise, AK-SVD preserved texture better.

Conclusion: AK-SVD works well when the noise variance is unknown, and it has lower computational complexity and wider applicability than K-SVD.

This work was jointly supported by the National Natural Science Foundation of China (61471226), the Natural Science Foundation for Distinguished Young Scholars of Shandong Province (JQ201516), the China Postdoctoral Science Foundation (2015T80739, 2014M551949), and research funding from Jinan (201401221).
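To make the stopping rule in the Methods section concrete, the sketch below shows a greedy (OMP-style) sparse coding step in which the search terminates when a candidate coefficient falls below the mean magnitude of the coefficients already accepted, rather than below a noise-derived threshold. This is only an illustration assembled from the abstract's description, not the authors' code; the function name adaptive_omp, the argument max_atoms, and the choice to compare the candidate correlation against the mean of the accepted coefficients are assumptions.

    # Hypothetical sketch of the adaptive stopping rule described in the abstract.
    # Standard OMP stops when the residual drops below a noise-variance-derived
    # threshold; here the newest candidate is compared against the mean magnitude
    # of the coefficients already selected, so no noise estimate is needed.
    import numpy as np

    def adaptive_omp(D, y, max_atoms=None):
        """Greedy sparse coding of signal y over dictionary D (columns = unit-norm atoms)."""
        n_atoms = D.shape[1]
        if max_atoms is None:
            max_atoms = n_atoms
        support = []                 # indices of accepted ("large") atoms
        coeffs = np.zeros(n_atoms)
        residual = y.copy()

        for _ in range(max_atoms):
            # Correlate the residual with every atom and pick the strongest candidate.
            correlations = D.T @ residual
            k = int(np.argmax(np.abs(correlations)))
            candidate = np.abs(correlations[k])

            # Adaptive threshold: average magnitude of the coefficients accepted so far.
            if support:
                threshold = np.mean(np.abs(coeffs[support]))
                if candidate < threshold:
                    break            # the new coefficient is "small": stop the search

            support.append(k)
            # Re-fit all accepted coefficients by least squares on the current support.
            Ds = D[:, support]
            coeffs_s, *_ = np.linalg.lstsq(Ds, y, rcond=None)
            coeffs[:] = 0.0
            coeffs[support] = coeffs_s
            residual = y - Ds @ coeffs_s

        return coeffs

In a full K-SVD-style pipeline this routine would replace the noise-variance-thresholded sparse coding stage, with the dictionary update step left unchanged.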
