Accessing the meaning of words to produce and understand language requires the activation of semantic representations. These representations are stored in semantic memory, which organizes concepts according to semantic attributes (e.g., the cat mews) and semantic categories (e.g., the cat is an animal). Concerning semantic attributes, non-living concepts (e.g., tools) are processed preferentially according to functional features (e.g., a saw cuts) rather than visual features, whereas living concepts (e.g., animals) are processed preferentially according to visual features (e.g., the cat is small, with sharp ears) rather than functional ones (Warrington and Shallice, 1984). Among these semantic attributes, some are widely shared, independently of our personal history (e.g., the cat mews), and some are linked to our autobiographical memory (e.g., I had a cat during my childhood). Moreover, most concepts have an emotional connotation which, whether widely shared (e.g., a black cat brings misfortune) or linked to our personal history in a self-centered manner (e.g., I love cats because mine was so soft), constitutes a semantic attribute, i.e., a defining characteristic (Cato Jackson and Crosson, 2006). Therefore, not only the semantic category and “cold” widely shared semantic attributes, but also “warm” emotion-related attributes, should be activated to produce or understand a word. Even if the meaning of words may be accessed through “cold” semantic processing alone, words with an emotional connotation (widely shared and/or personal) are processed more quickly and efficiently (Bock and Klinger, 1986), allowing faster and more accurate lexical access than for neutral words (Scott et al., 2009; Mendez-Bertolo et al., 2011; Kissler and Herbert, 2013). 
It is worth noting that processing word emotional connotation “differs from the actual experience of emotion: emotional connotation refers to knowledge about the emotional property of an object” (Cato Jackson and Crosson, 2006) and that “emotion modulates word production at several processing stages” (Hinojosa et al., 2010). Semantic representations forming concepts are more than the simple summation of defining features (Lambon-Ralph et al., 2010). However, how these semantic representations are organized at the neural level remains poorly understood. While some models suggest a distributed organization across a number of interacting cortical associative regions (Turken and Dronkers, 2011), an alternative model proposes a unified, amodal organization of semantic representations in the anterior temporal lobes (ATLs), which receive integrated information from different modality-specific cortical areas. In this latter framework, the ATLs are termed “amodal hubs” (Patterson et al., 2007; Lambon-Ralph et al., 2009). Here, in the light of our clinical observations during picture naming in glioma patients who underwent awake surgery, we offer new insight into how semantic and personal-emotional information are integrated at the brain-systems level, enabling well-rounded and efficient semantic processing in order to achieve a complete noetic experience.