Abstract

Introduction: Complex soft tissues, such as the knee meniscus, play a crucial role in mobility and joint health but are exceptionally difficult to repair or replace when damaged. This difficulty stems from the highly hierarchical and porous nature of these tissues, which in turn gives rise to their unique mechanical properties, providing joint stability, load redistribution, and friction reduction. To design tissue substitutes, the internal architecture of the native tissue must be understood and replicated.

Methods: We explore a combined audiovisual, so-called transperceptual, approach to generating artificial architectures that mimic the native ones. The proposed methodology uses both traditional imagery and sound generated from each image to rapidly compare and contrast the porosity and pore size within the samples. We trained and tested a generative adversarial network (GAN) on 2D image stacks of a knee meniscus. To understand how the resolution of the training images affects the similarity of the artificial dataset to the original, we trained the GAN on two datasets. The first consists of 478 pairs of audio and image files, with the images downsampled to 64 × 64 pixels. The second contains 7,640 pairs of audio and image files, retaining the full resolution of 256 × 256 pixels but dividing each image into 16 square sections to respect the 64 × 64 pixel limit imposed by the GAN.

Results: We reconstructed the 2D stacks of the artificially generated datasets into 3D objects and ran image analysis algorithms to statistically characterize the architectural parameters (pore size, tortuosity, and pore connectivity). Comparison with the original dataset showed that the artificial dataset generated from the downsampled images performs best in terms of parameter matching, reproducing the mean pixel grayscale value, mean porosity, and pore size of the native dataset to within 4% to 8%.

Discussion: Our audiovisual approach has the potential to be extended to larger datasets to explore how similarities and differences can be audibly recognized across multiple samples.
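As a concrete illustration of the two dataset-preparation routes described in the Methods, the sketch below shows (a) downsampling a 256 × 256 slice to 64 × 64 pixels and (b) tiling it into sixteen 64 × 64 patches. It assumes grayscale slices held as NumPy arrays; the function names and the bilinear resampling filter are illustrative choices, not the paper's documented pipeline.

```python
# Sketch of the two preprocessing routes: (a) downsample each 256x256 slice
# to 64x64, or (b) tile it into sixteen 64x64 patches at full resolution.
# Assumes grayscale slices as 2D uint8 NumPy arrays; names are illustrative.
import numpy as np
from PIL import Image

def downsample_slice(img: np.ndarray, size: int = 64) -> np.ndarray:
    """Route (a): reduce a 256x256 slice to size x size pixels."""
    return np.asarray(Image.fromarray(img).resize((size, size), Image.BILINEAR))

def tile_slice(img: np.ndarray, tile: int = 64) -> list[np.ndarray]:
    """Route (b): split a slice into (h/tile) * (w/tile) non-overlapping tiles,
    i.e. 16 tiles for a 256x256 slice."""
    h, w = img.shape
    return [img[r:r + tile, c:c + tile]
            for r in range(0, h, tile)
            for c in range(0, w, tile)]

# Route (a) keeps one image per slice; route (b) multiplies the count by 16,
# which is how the second, larger training dataset arises from the same scan.
```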
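The abstract does not specify how audio is generated from each image, so the following is one plausible, purely illustrative sonification and should not be read as the paper's actual scheme: each image column is rendered as a short frame of summed sine tones, with row index mapped to frequency and pixel brightness to amplitude. Under this mapping, differences in porosity and pore size surface as differences in spectral texture on playback.

```python
# Illustrative image-to-audio mapping (assumed, not the paper's method):
# each column of the image becomes a short spectral frame; row index sets
# the sine frequency and pixel brightness sets its amplitude.
import wave
import numpy as np

def sonify(img: np.ndarray, sr: int = 22050, frame_dur: float = 0.02,
           f_lo: float = 200.0, f_hi: float = 4000.0) -> np.ndarray:
    rows, cols = img.shape
    freqs = np.linspace(f_lo, f_hi, rows)           # one oscillator per row
    n = int(sr * frame_dur)                         # samples per column frame
    t = np.arange(n) / sr
    bank = np.sin(2 * np.pi * freqs[:, None] * t)   # (rows, n) sine bank
    amps = img.astype(float) / 255.0                # brightness -> amplitude
    audio = (amps.T @ bank).reshape(-1)             # concatenate column frames
    return audio / max(np.abs(audio).max(), 1e-9)   # normalize to [-1, 1]

def write_wav(path: str, audio: np.ndarray, sr: int = 22050) -> None:
    pcm = (audio * 32767).astype(np.int16)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(sr)
        w.writeframes(pcm.tobytes())

# Hypothetical usage with a slice file name chosen for illustration:
# from PIL import Image
# slice_img = np.asarray(Image.open("slice_000.png").convert("L"))
# write_wav("slice_000.wav", sonify(slice_img))
```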
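Likewise, a minimal sketch of the statistical comparison reported in the Results: threshold each voxel to separate pore space from tissue, then compute percent differences in mean grayscale value and porosity between the native and generated stacks. The fixed threshold and the percent-difference metric are assumptions; the paper's full analysis also covers pore size, tortuosity, and pore connectivity.

```python
# Assumed comparison pipeline: a global threshold separates pore space from
# tissue, and metrics are compared as percent differences. Stacks are 3D
# uint8 arrays of shape (slices, height, width); the loader is hypothetical.
import numpy as np

def porosity(stack: np.ndarray, threshold: int = 128) -> float:
    """Fraction of voxels darker than the threshold, taken as pore space."""
    return float((stack < threshold).mean())

def percent_difference(native: float, generated: float) -> float:
    """Relative deviation of the generated metric from the native one."""
    return 100.0 * abs(native - generated) / native

# native, generated = load_stack(...), load_stack(...)   # hypothetical loader
# print(percent_difference(porosity(native), porosity(generated)))
# print(percent_difference(float(native.mean()), float(generated.mean())))
```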
