Autonomous robotic systems depend on the perception and understanding of their environment for informed decision-making. One of the goals of the Semantic Web is to make knowledge on the Web machine-readable, which can significantly aid robots by providing background knowledge and thereby support their understanding. In this paper, we present a reasoning system that uses the Ontology for Robotic Knowledge Acquisition (ORKA) to integrate the sensory data and perception algorithms of a robot, enhancing its autonomous capabilities. This reasoning system is then employed to retrieve and integrate information from the Semantic Web, improving the robot's comprehension of its environment. To achieve this, the system employs a Perceived-Entity Linking (PEL) pipeline that associates regions in the sensory data of the robotic agent with concepts in a target knowledge graph. As a use case for the linking process, the Perceived-Entity Typing task is used to determine a more fine-grained subclass of each perceived entity. Specifically, we analyze the performance of different knowledge graph embedding methods on this task, using annotated observations and Wikidata as the target knowledge graph. The experiments indicate that relying on pre-trained embedding methods increases performance when TransE is used as the embedding method for the robot's observations. This contribution demonstrates the potential of integrating Semantic Web technologies with robotic perception, enabling more nuanced and context-aware decision-making in autonomous systems.