Abstract

This paper investigates an approach that uses the cognitive architecture Soar to improve the performance of an automated robotic system that combines vision and force sensing to remove screws from laptop cases. Soar's long-term semantic memory module was used to retain information about laptop models and screw holes. The system was trained on multiple laptop models, and the way in which Soar was used to facilitate screw removal was varied to determine the best-performing configuration. In all cases, Soar correctly identified the laptop model and the orientation in which it was placed in the system. Soar was also used to remember which of the explored circle locations contained screws and which did not. Remembering the hole locations decreased trial time by over 60%. The system performed best when the number of training trials used to explore circle locations was limited, which decreased total trial time by over 10% for most laptop models and orientations.

Note to Practitioners — Although the amount of discarded electronic waste in the world is rapidly increasing, efficient methods for handling it in an automated, non-destructive fashion have not been developed. Screws are a common fastener on electronic products such as laptops and must be removed during non-destructive disassembly. In this paper, we focus on using the cognitive architecture Soar to facilitate the disassembly step of removing these screws from the backs of laptops. Soar is able to differentiate between laptop models and store the screw locations for each model, improving disassembly time when the same laptop model is encountered again. Currently, this work uses only one of Soar's long-term memory modules (semantic memory) and a single screwdriver tool. However, it can be extended to multiple tools by using other features available in Soar, such as the remaining long-term memory modules and substates.
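The core mechanism the abstract describes is a cache: screw-hole locations discovered during exploratory trials are stored in Soar's semantic memory, keyed by laptop model and orientation, and recalled on later trials so the robot visits only known screw locations. The following minimal Python sketch illustrates that idea only; all names (`ScrewMemory`, `record_hole`, `known_screws`) are hypothetical stand-ins, since the actual system encodes this knowledge as Soar semantic-memory structures rather than Python objects.

```python
from dataclasses import dataclass, field

@dataclass
class ScrewMemory:
    """Illustrative stand-in for the role Soar's semantic memory plays here:
    remembers which explored circle locations held screws, keyed by
    (laptop model, orientation), so repeat trials skip empty holes."""
    # (model, orientation) -> {(x, y): True if the circle held a screw}
    _store: dict = field(default_factory=dict)

    def record_hole(self, model: str, orientation: str,
                    location: tuple, has_screw: bool) -> None:
        # Exploratory trial: record the outcome of probing one circle.
        self._store.setdefault((model, orientation), {})[location] = has_screw

    def known_screws(self, model: str, orientation: str) -> list:
        # Later trial: return only locations previously found to hold screws.
        holes = self._store.get((model, orientation), {})
        return [loc for loc, hit in holes.items() if hit]

memory = ScrewMemory()
# First trial on a recognized model: probe every detected circle, record results.
memory.record_hole("model-A", "face-up", (12.5, 40.0), has_screw=True)
memory.record_hole("model-A", "face-up", (80.0, 15.5), has_screw=False)
# Repeat trial on the same model and orientation: go straight to known screws.
print(memory.known_screws("model-A", "face-up"))  # [(12.5, 40.0)]
```

Skipping the empty holes on repeat trials is what the paper credits for the reported reduction in trial time of over 60%.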
