Philippe Schyns is Professor of Visual Neuroscience, Director of the Institute of Neuroscience and Psychology and Head of the School of Psychology at the University of Glasgow. He did his undergraduate training in Psychology at the University of Liège (Belgium) and in Computer Science at the University of Louvain-la-Neuve (Belgium). In 1992, he completed a PhD in Cognitive Science at Brown University. Following a post-doc with Tommaso Poggio in the Department of Brain and Cognitive Sciences at MIT, he joined the faculty at the University of Montreal. A year later, he moved to the University of Glasgow. He was a visiting scientist at the Advanced Telecommunication Research Institute International (Japan) in 1994 and 1995. He leads a team researching the behavioural and neural aspects of information processing in vision.

What turned you on to science in general, and to visual neuroscience in particular, in the first place?

It happened in primary school, on a Friday afternoon, and it was an epiphany. We had a Friday hour during which our teacher would entertain us with stories, to keep our attention levels high and our misbehaving low. On this particular day, he started to describe a small quiet pond as a casual observer would experience it: a still spread of water with a few water lilies and a variety of insects swarming near the surface. Without transition, he switched from this pastoral description to a brutal rendering of the rich life and the fight for survival happening beneath still waters. As the hour ran out, he stopped the story and left it unfinished. He promised to resume the description of nature's battlefield the following Friday, but it never happened. I waited expectantly for the story to finish, literally every single Friday of the year, but it never did.

In my turbulent high-school years, I was very fortunate that a patient and understanding biology teacher gave me a textbook on the biology of the cell to read and report on. This generated a genuine fascination with detailed and complex mechanisms. Clockwork also interested me, and I spent considerable time dismantling my family's 19th-century watches to understand how they worked. I extended this to motorbike engines and seriously considered a career as a racing engineer. Over the summer before entering university, I read a book on clinical psychology and realized that the torments of the brain were not entirely unlike those in the pond. So I decided to study psychology.

It was when I read Douglas Hofstadter's Gödel, Escher, Bach that everything came together. At the time, I was a junior research assistant in psychology in Liège (Belgium). What fascinated me was how apparently serial thoughts could emerge from so many lower-level neuronal mechanisms: we have so many neurons, so why do we have so very few thoughts? Virtual machines in computer science offered a metaphor for this ‘reduction’ between cognition and the brain, the connectionist framework offered some modelling tools, and philosophy some food for thought on the forms that this reduction could take. This convinced me to study computer science in Belgium, with a great interest in automata theory and the constraints it places on what is computable, and therefore on what can be modelled and ultimately understood as a mechanism. Ultimately, I left for the US. It was then the only country where I could pursue my interdisciplinary interests.
My PhD in Cognitive Science at Brown University offered a truly rewarding intellectual experience and had a big impact on me, with lectures from Dan Kersten (computational vision), Jim Anderson (neural network models), Gregory Murphy (concept learning and categorization), Jaegwon Kim (philosophy of mind), and Ulf Grenander and Stuart Geman (statistical pattern recognition). Brown was my first experience of a rich environment with a free flow of ideas between Applied Mathematics, Neuroscience, Cognitive Science and Philosophy. I relished those years and have striven ever since to develop a similar research culture. Looking back, I was fortunate to experience the benefits of the US educational system in my formative years. It was then well ahead of Europe in the opportunities it offered (and still offers) for genuine interdisciplinary training, and one could easily argue that Europe still has some ground to cover to bridge disciplines in undergraduate and graduate teaching. Research often suffers from this gap. The Institute of Neuroscience and Psychology that I direct at the University of Glasgow seeks to bridge it by straddling the physical, information, medical and life sciences.

Do you have a favourite paper?

Some of my favourite studies are summarized in Niko Tinbergen's book The Study of Instinct (1951). I read this book when I was a psychology undergraduate and began to appreciate the importance of understanding vision as an information processing problem, cast in terms of the specific information that is most relevant to the observer. This highlights the question: what is observer-relevant information? Tinbergen's study of the territorial stickleback fish illustrates the point eloquently. A territorial male would attack a wooden object more vigorously than it would another male fish if the object's underside was redder, isolating the red colour as the information relevant to the territorial male stickleback. A formal complement to this empirical work is the theoretical framework of Shannon's Information Theory. If Tinbergen's identification of relevant information was critical, Shannon's formalization of information exerted a profound (if latent) influence when I was a graduate student. Finally, building on Chomsky's generative grammars, Grenander's Pattern Theory provides a general statistical framework to formalize the idea that the visual brain can recognize the patterns of information that it can synthesize. Though this requires determining what the information is (to understand what patterns it can form), the idea of recognition as synthesis is a truly profound one whose ramifications are often neglected.

What advice would you offer young graduate students?

First, do not listen to me but go and read Ramón y Cajal's Advice for a Young Investigator. Most of what young investigators really need to know to be good scientists was written by Cajal a century ago; the gist has not changed. Second, researchers are conceptually, methodologically and formally bounded by what they know. What they do not know, and cannot even conjure up as a possibility, represents their essential limits. I would therefore advise young investigators to develop a comprehensive toolbox of knowledge and practical skills to frame their research questions in the most informed way.
Finally, all of the above implies that a young investigator should always think constructively outside their supervisor's toolbox, before their own brain acquires its own set of scientific patterns. My most enjoyable moments with young and senior investigators have been those when their thinking pushed me outside my own toolbox; the least enjoyable, those when square wheels were presented as novel.

If you could do it over, would you pursue the same research career?

Yes, but I would spend even more time expanding my own toolbox with mathematics, probability theory, statistical modelling, physics, logic and philosophy. These disciplines provide the tools to see and formalize the patterns that visual neuroscientists must find in their high-dimensional data. In fact, much of the history of cognitive neuroimaging methods follows that of machine learning theory, with some lag.

How do you combine teaching and research?

Actually, I don't. I spend a lot of my free time outside research administering a large institute of neuroscience and psychology at the University of Glasgow. So my teaching is minimal, and I am grateful to be very well supported by my university and senior colleagues, who enable me to maintain an active research career.

What was your biggest thrill in science?

It must have been the first application of ‘Bubbles’ to brain data. Frédéric Gosselin (now at the University of Montreal) and I invented Bubbles to model the receptive fields of behavioural responses and brain signals in visual perception tasks; it is a modern take on the classic studies of Tinbergen and of Hubel and Wiesel. I was then on sabbatical and had convinced my university to pay for an EEG system to test Bubbles with the voltage responses of EEG sensors. This was my first study with brain signals; it was considered risky, and EEG was therefore a cheap way to go. Most people whose opinion I asked told me unanimously that the high variance of the EEG voltage response precluded such a detailed study of information sensitivity. So, when I saw the first picture representing the receptive field of the N170 event-related potential (ERP), I literally fell off my chair. With this began a line of projects on the dynamics of information processing that has fascinated me ever since (for example, Schyns et al., ‘Dynamics of visual information integration in the brain for categorizing facial expressions’, Curr. Biol. 17, 1580–1585). A nice side effect is that my university understood the potential of this research and provided the funds to develop, together with colleagues, the Centre for Cognitive Neuroimaging (CCNi) in Glasgow. It has now extended well beyond EEG measurements, with fMRI, MEG and TMS. Living through the development of CCNi from literally the first brick to its considerable success has certainly been another thrill of my career.

What has been your biggest mistake in research?

Really big mistakes in research can occur in one of two ways: one is to ask the wrong research question, and the other is to make a massive error in interpreting the data. The latter has become a significant worry since I have incorporated progressively more dimensions of cognitive neuroimaging into my psychophysics and visual cognition research. Psychophysics and psychophysically inspired visual cognition pride themselves on tight control of stimuli and on simple but powerful behavioural measures.
As I have incorporated more brain measurements into my designs, some of the unexpected discoveries about the computing architecture of the visual brain have been genuinely thrilling, but at the same time I have become very aware of the assumptions that go into interpreting the brain measurements themselves. To illustrate this, several colleagues and I recently showed that the brain, much like a radio tuner, appears to multiplex the coding of visual features in different oscillatory bands of cortical activity, augmenting its coding capacity. For example, beta oscillations would code fine-scale features, such as the eyes of a face, whereas theta oscillations would code larger-scale features, such as the mouth. Furthermore, we found that the phase of cortical oscillations, not so much their power, performs the lion's share of visual information coding. Finally, we used TMS to interact directly with beta versus theta oscillations and thereby influence the perception of fine versus global features, respectively (Romei et al., ‘Rhythmic TMS over parietal cortex links distinct brain frequencies to global versus local visual processing’, Curr. Biol. 21, 334–337). If true, this interpretation of the coding architecture of the visual brain is significant. But I always worry that future experiments will point out a mistake in our interpretation of the brain measurements: our evidence depends on patterns derived from oscillatory analysis of highly integrative measurements of neural activity, whereas the veridical patterns exist at a much finer grain.

What do you think are the big questions to be answered in your field?

The big question remains to understand the workings of the brain as a multi-layered, dynamic, information-processing system. One facet is to model the information critically required to resolve the variety of visual tasks our brains perform with deceptive ease. For example, in a fraction of a second we can accurately categorize a complex visual input as an outdoor scene full of skyscrapers, including the intricate art-deco details of the Chrysler building; there is a considerable amount of hierarchical knowledge in this single example. Yet the brain cannot recognize what it cannot synthesize. Thus, the complementary facet is to understand how neural circuits rapidly deliver these categorizations by dynamically synthesizing the patterns of internal information (knowledge) to match patterns of information in the outside world. Our brains are such compulsive categorizers that they even see meaningful patterns in random white noise. As a result, white noise can be used to capture the patterns of internal information the brain synthesizes to categorize the outside world (Smith et al., ‘Measuring internal representations from behavioural and brain data’, Curr. Biol. 22, 191–196).

What is your greatest ambition in research?

Simply stated, I would wish to write a compiler that translates the continuous brain activity of a person recognizing faces, objects and scenes into the various algorithms of a ‘cognitive machine’, and vice versa.
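An aside on the noise-based logic mentioned above, which underlies both Bubbles and the internal-representation study cited by Schyns: the core computation is reverse correlation, in which random stimulus variation is sorted by the observer's responses. The sketch below is only a minimal illustration of that logic, not the authors' method or code; the simulated observer, its hidden template and all variable names are assumptions made for the example.

# Toy reverse-correlation ('classification image') sketch, illustrative only.
import numpy as np

rng = np.random.default_rng(0)
size, n_trials = 32, 5000

# Hypothetical internal template the simulated observer 'looks for'
# (two bright patches standing in for the eye region of a face).
template = np.zeros((size, size))
template[10:14, 7:13] = 1.0
template[10:14, 19:25] = 1.0

# White-noise stimuli, one per trial.
noises = rng.normal(0.0, 1.0, (n_trials, size, size))

# Simulated decision on each trial: respond 'present' when the noise
# happens to correlate with the internal template, plus internal noise.
evidence = (noises * template).sum(axis=(1, 2)) + rng.normal(0.0, 5.0, n_trials)
responses = evidence > 0

# Classification image: mean noise on 'present' trials minus mean noise
# on 'absent' trials.
classification_image = (noises[responses].mean(axis=0)
                        - noises[~responses].mean(axis=0))

# Sanity check: the recovered image should correlate with the hidden template.
print(np.corrcoef(classification_image.ravel(), template.ravel())[0, 1])

With enough trials, sorting and averaging the noise by response recovers an image proportional to the observer's template, which is the basic intuition behind estimating internal representations from behavioural (and, with suitable measures, brain) data.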
