Abstract
Humans can perform a wide variety of tasks using only visual data as input. Artificial intelligence that makes effective use of visual data would allow controllers to operate from a single camera and to interact with computer games by reading only the screen render. In this research, we use the Quake II game environment to compare techniques for training neural network (NN) controllers to perform a variety of behaviors using only raw visual input. First, we find that a humanlike retina, with greater acuity in the center and less in the periphery, is more useful than a uniform-acuity retina when learning to attack a moving opponent in a visually simple room, both retinas having the same number of inputs and feeding the same NN structure. Next, we use the same humanlike retina and NN in a more visually complex room; finding that it is unable to learn successfully, we apply a Lamarckian learning algorithm in which a nonvisual hand-coded controller acts as a supervisor, training the visual controller via backpropagation. Finally, we replace the hand-coded nonvisual supervisor with an evolved nonvisual NN controller, eliminating the human element from supervision, and this approach solves a problem for which no solution was previously known.
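To make the retina comparison concrete, the sketch below shows one common way (not necessarily the paper's exact scheme) to build a foveated retina: pixels are sampled along rays at geometrically increasing radii, so acuity is high near the fixation point and falls off in the periphery while the total number of inputs stays fixed. All names and parameters here (`foveated_sample`, `n_rays`, `n_rings`, `base`) are illustrative assumptions.

```python
import math

def foveated_sample(image, center, n_rays=8, n_rings=4, base=2.0):
    """Illustrative foveated retina (assumed scheme, not the paper's code).

    Samples one pixel per (ray, ring). Ring radii grow geometrically
    (1, 2, 4, 8, ... pixels), so sampling is dense near the center and
    sparse in the periphery, yet the output length is always
    n_rays * n_rings -- the same NN input size as a uniform grid with
    that many cells.
    """
    h, w = len(image), len(image[0])
    cr, cc = center
    samples = []
    for k in range(n_rings):
        radius = base ** k  # distance of this ring from the fixation point
        for ray in range(n_rays):
            theta = 2 * math.pi * ray / n_rays
            # Clamp to the image bounds so peripheral rings stay valid.
            r = min(max(int(round(cr + radius * math.sin(theta))), 0), h - 1)
            c = min(max(int(round(cc + radius * math.cos(theta))), 0), w - 1)
            samples.append(image[r][c])
    return samples
```

A uniform-acuity retina with the same input count would instead average the image into an evenly spaced grid; the comparison in the paper holds the input count and NN structure fixed and varies only this sampling pattern.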
IEEE Transactions on Computational Intelligence and AI in Games