Look-Compute-Move models for a set of autonomous robots have been thoroughly studied for over two decades. We consider the standard Asynchronous Luminous Robots (ALR) model, where robots are located on the vertices of a graph G. Each robot repeatedly Looks at its surroundings and obtains a snapshot containing the vertices of G where the robots are located; based on this snapshot, it Computes a vertex (adjacent to its current position) and then Moves to it. Robots have visible lights, allowing them to communicate more information than just their current positions, and they move asynchronously, meaning that each one runs at its own arbitrary speed. We are also interested in a case that has been barely explored: the robots need not all be present initially; they may appear asynchronously. We call this the Extended Asynchronous Appearing Luminous Robots (EALR) model. A central problem in the area of mobile robots is bringing the robots to the same vertex. We study several versions of this problem, in which the robots must move to the same vertex or to vertices close to each other. We concentrate on the requirement that each robot executes a finite number of Look-Compute-Move cycles, independently of how its cycles interleave with those of the other robots, and then stops. Our main result establishes direct connections between the ALR and EALR models and the asynchronous wait-free multiprocess read/write shared memory (WFSM) model. We also define general robot tasks in a graph, which include several versions of gathering. Finally, using the connection between the EALR model and the WFSM model, we present a combinatorial topology characterization of the solvable robot tasks.
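
To make the Look-Compute-Move cycle concrete, the following is a minimal sketch (in Python, not taken from the paper) of robots with visible lights running Look, Compute, and Move steps on a small path graph. All names (Snapshot, Robot, the light values) and the toy gathering rule that heads toward the smallest occupied vertex are illustrative assumptions, not the algorithms or models studied in this work; a round-robin scheduler merely stands in for the arbitrary asynchronous interleaving.

```python
from dataclasses import dataclass

# The graph is an adjacency map: vertex -> set of adjacent vertices.
Graph = dict[int, set[int]]


@dataclass
class Snapshot:
    """What a Look returns: robot positions and their visible lights."""
    positions: dict[int, int]  # robot id -> vertex
    lights: dict[int, str]     # robot id -> visible light value


@dataclass
class Robot:
    rid: int
    vertex: int
    light: str = "off"
    done: bool = False

    def look(self, robots: list["Robot"]) -> Snapshot:
        # Look: obtain a snapshot of the whole current configuration.
        return Snapshot(
            positions={r.rid: r.vertex for r in robots},
            lights={r.rid: r.light for r in robots},
        )

    def compute(self, graph: Graph, snap: Snapshot) -> tuple[int, str]:
        # Compute: a toy rule that moves toward the smallest occupied vertex.
        # It relies on vertex labels reflecting distance, which holds only on
        # the path graph used below; it is NOT the paper's algorithm.
        target = min(snap.positions.values())
        if self.vertex == target:
            return self.vertex, "done"  # stay put and signal termination
        candidates = graph[self.vertex] | {self.vertex}
        nxt = min(candidates, key=lambda v: abs(v - target))
        return nxt, "moving"

    def move(self, destination: int, light: str) -> None:
        # Move: go to the computed adjacent vertex and set the light.
        self.vertex = destination
        self.light = light
        self.done = light == "done"


# Path graph 0-1-2-3; an adversarial scheduler could interleave cycles
# arbitrarily, here a simple round-robin loop stands in for asynchrony.
graph: Graph = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
robots = [Robot(0, 3), Robot(1, 0), Robot(2, 2)]

while not all(r.done for r in robots):
    for r in robots:
        if r.done:
            continue  # this robot has finished its finite number of cycles
        snap = r.look(robots)
        dest, light = r.compute(graph, snap)
        r.move(dest, light)

print({r.rid: r.vertex for r in robots})  # all robots end up gathered on vertex 0
```

In this sketch each robot performs only finitely many Look-Compute-Move cycles and then stops, which mirrors the termination requirement stated above, although the real models allow arbitrary asynchronous interleavings and robots that appear over time.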