Abstract

To implement spectral-based quantitative ultrasound (QUS), it is currently necessary to acquire a reference scan from a well-characterized tissue-mimicking material for each scanner setting used. The purpose of this study was to evaluate whether a reference-free approach could be adopted, which would eliminate the need for multiple reference scans while maintaining the ability to objectively classify different tissue states. Specifically, we used a convolutional neural network (CNN) to classify tissues and tissue-mimicking phantoms without acquiring a reference for each setting and compared its performance to that of conventional QUS approaches based on a reference phantom. Rabbits maintained on a high-fat diet for 0, 1, 2, 3, or 6 weeks, with five rabbits per diet group (total N = 30), were scanned ultrasonically and classified into two groups based on their liver lipid levels: low fat (≤ 8%) and high fat (> 8%). An L9-4 array transducer with a center frequency of 4 MHz was used to acquire radio-frequency (RF) backscattered data in vivo from the rabbits. In the conventional QUS approach, the RF signals were calibrated using a reference phantom and used to estimate an average backscatter coefficient (BSC) for each rabbit. In the reference-free approach, a CNN was trained on the time-domain RF signals to classify the rabbit livers. To assess the reliability of the CNN's classification when the scanner settings were adjusted, five tissue-mimicking phantoms with different but known properties were scanned under different system settings: power, time-gain compensation, and number of transmit foci. The CNN was first trained on one system setting and then tested on data acquired from each phantom with the other settings; this was repeated for each individual setting. The testing accuracy of in vivo rabbit liver classification using the reference-free CNN was 73%, compared with 60% for conventional QUS. These results demonstrate that a CNN can provide accurate and robust classification without requiring a reference scan for each setting. This work was supported by a grant from the NIH (R21 EB020766).
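To make the two approaches concrete, a minimal sketch of the conventional reference-phantom calibration follows. It is not the authors' exact pipeline; it only illustrates the standard idea that dividing the sample's average echo power spectrum by that of a reference phantom with a known BSC cancels system-dependent transfer functions. All variable names and the sampling rate are illustrative assumptions.

    import numpy as np

    def estimate_bsc(rf_sample, rf_reference, bsc_reference):
        """Reference-phantom BSC estimate (illustrative sketch).

        rf_sample, rf_reference : 2-D arrays (time samples x scan lines),
                                  gated to the same depth window
        bsc_reference           : known BSC of the phantom at the FFT bin
                                  frequencies (assumed to be provided)
        """
        # Average power spectra across scan lines to reduce variance
        ps_sample = np.mean(np.abs(np.fft.rfft(rf_sample, axis=0)) ** 2, axis=1)
        ps_reference = np.mean(np.abs(np.fft.rfft(rf_reference, axis=0)) ** 2, axis=1)
        # The spectral ratio cancels transducer and system effects,
        # leaving the sample BSC scaled by the known reference BSC
        return bsc_reference * ps_sample / ps_reference

For the reference-free approach, the abstract specifies only that a CNN was trained on time-domain RF signals; the architecture is not given. The sketch below, assuming PyTorch, shows one plausible shape of such a classifier (a small 1-D CNN over RF segments); layer counts and kernel sizes are illustrative assumptions, not the authors' network.

    import torch
    import torch.nn as nn

    class RFClassifier(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=15, stride=2), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=15, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),  # pool over time -> fixed-length feature
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):  # x: (batch, 1, n_rf_samples)
            return self.classifier(self.features(x).squeeze(-1))

    # Example: logits for a batch of 8 RF segments of 2048 samples each
    logits = RFClassifier()(torch.randn(8, 1, 2048))

Because such a network consumes raw RF samples rather than calibrated spectra, it has no explicit reference step, which is what the robustness tests across power, time-gain compensation, and transmit-focus settings were designed to probe.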
