Abstract

Keeping unsuitable web content from young eyes is a challenge, given the wide-open environment of the internet. Research was conducted into selecting an Artificial Intelligence service that could be trained to classify whether specific images contained explicit material. Ten potential vendors were reviewed, and Google Cloud AutoML® was selected for training and verification testing. Unfortunately, it proved difficult to obtain a sufficiently large archive of approved images to complete the originally envisioned training and testing program. A modest-sized image database was eventually secured, and the code was successfully tested with a small data set, although the results did not contain enough samples to establish the commercial-level reliability required for further testing.
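For context, the sketch below illustrates one way a trained Google Cloud AutoML Vision classification model could be queried to decide whether an image is explicit. It is not the authors' code; the project ID, model ID, and score threshold are hypothetical placeholders, and the call pattern follows the public google-cloud-automl v1 Python client.

```python
# Minimal sketch (assumptions: a trained AutoML Vision classification model
# already exists; PROJECT_ID and MODEL_ID are hypothetical placeholders).
from google.cloud import automl

PROJECT_ID = "my-project"       # hypothetical project ID
MODEL_ID = "ICN1234567890"      # hypothetical AutoML Vision model ID


def classify_image(image_path: str) -> list:
    """Return (label, score) pairs predicted by the trained model."""
    client = automl.PredictionServiceClient()
    model_name = automl.AutoMlClient.model_path(PROJECT_ID, "us-central1", MODEL_ID)

    # Read the raw image bytes and wrap them in an AutoML payload.
    with open(image_path, "rb") as f:
        payload = automl.ExamplePayload(image=automl.Image(image_bytes=f.read()))

    # score_threshold filters low-confidence labels on the server side.
    response = client.predict(
        request={
            "name": model_name,
            "payload": payload,
            "params": {"score_threshold": "0.5"},
        }
    )
    return [(r.display_name, r.classification.score) for r in response.payload]


if __name__ == "__main__":
    for label, score in classify_image("sample.jpg"):
        print(f"{label}: {score:.3f}")
```

In a deployment such as the one described above, the returned label scores would be compared against a reliability threshold chosen during verification testing before an image is blocked or allowed.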
