You are placed in front of a screen that is black save for one spot of red in the center, and are told to focus strictly and solely upon that red dot. You think, “This will be no problem. As a college-educated, long-time subscriber to, and very occasional reader of, the most sophisticated periodicals, I can surely count on myself to focus on a single red dot.” So, you’re looking at the dot when suddenly a bright yellow triangle appears at the bottom left of the screen and zips to the bottom right. Your eyes betray you, and dart to the enticing new event, a betrayal that lasts a fraction of a second, but is perhaps all the more profound for its brevity.
What happened? Why did your focus waver, against your will? It was a question that, a century and a half ago, we could only broadly lunge after, armed with the self-reflective insights of philosophy and the nascent science of psychology. Over the ensuing years, however, there has emerged an entrancing dance between two disciplines, mathematics and neurobiology, that has allowed us to harness the magisterial modeling capabilities of math to the breathtaking complexity of the human mind in order to finally start rigorously answering such seemingly basic questions as, “Just what am I doing, when I’m ‘focusing’ on something?”
The history of computational neuroscience is filled with colorful characters, of mathematicians who dared to leave the safe fields of abstraction to wade into the squishy, goopy irregularities of biology, and of equally stalwart biologists willing to forego the beautiful chaotic messiness of their chosen field and see the value in idealized models. With one foot in each of two highly demanding intellectual fields, computational neuroscientists are a rare breed, but rarer still are those who can not only do the research which deepens our understanding of how billions of neurons add up to make a brain, but can effectively communicate those insights to those of us who don’t happen to be multi-faceted geniuses.
Dr. Grace Lindsay is one such individual, though given her list of activities my working hypothesis is that she is in fact three such individuals. Her 2021 book, Models of the Mind: How Physics, Engineering and Mathematics Have Shaped Our Understanding of the Brain is a grand tour through the history of computational neuroscience, from its humble beginnings in information theory and neuron structure up to its modern manifestations harnessing supercomputers to run large scale convolutional neural networks that model important brain systems. It is a profound book which has already entered the pantheon of classic general reader neuroscience texts like LeDoux’s Synaptic Self or Montague’s Why Choose This Book? which don’t shy away from close detailing of important experiments but also don’t lean on unapproachable jargon to convey their nature or import.
That ability, to rigorously explain complicated ideas to a general audience, is one Lindsay honed over four years of co-hosting the podcast Unsupervised Thinking, which in forty-nine episodes from 2015 to 2019 dove into the deep history and promising future of neuroscience. Those episodes are all still available and worth listening to not only for the insights they reveal into the state of modern computational neuroscience, but as an object lesson in how scientific podcasts should be done. Neither a slickly-produced but wafer-thin science appetizer, nor a stodgy exercise in impenetrable academic posturing, UT is exactly what you want – three colleagues sitting around, talking honestly about the thing they’ve devoted their lives to, showing us not only how new ideas and procedures are formed, but the more fundamental issue of how to pose new questions and evaluate their potential fruitfulness. It’s the sort of skill we don’t tend to teach very well in our high school curricula, and just listening to Lindsay and UT’s other hosts working through how to best formulate the questions they would like to see answered is a great example of fundamental scientific thinking that should be required listening in science classes.
Good SciComm is one of the most important things a gifted scientific writer can do in these troubled times, but it’s only one string in Lindsay’s bow, for primarily she is a researcher, one who caught the computational neuroscience bug at the University of Pittsburgh when, as an undergraduate, she attended a lecture by Dr. Brent Doiron. Doiron demonstrated how neurons can be modeled by mathematical functions, thereby revealing to his audience a new way to think about why the brain is able to do the things it does, in terms of the flow of information and the physical connections needed to optimize the equations that describe it.
After some time at the Bernstein Center in Freiburg, where she worked to narrow down the particular area of neuroscience she wanted to model, she arrived at Columbia University and turned her focus to the mechanisms behind the phenomenon of “attention,” which brings us squarely back to the red dot we began with – what do our brains do when we try to “focus” on the red dot, and how are those processes potentially derailed by the arrival of new events?
It turns out that our brains are the scene of a rich and continuous tug of war between “top-down” and “bottom-up” processes – between the areas of the brain which want to direct what we should be paying attention to, and those which are predisposed to react to certain features of the environment and excitedly pass that information upwards regardless of what the official directives from the top might be. Those bottom-level neurons which react preferentially to particular features and events – to, for example, lines and motion – are fascinating in their own right and hopefully we’ll get to talk more about them when I finally get around to writing about the work of Jennifer Groh, but Lindsay’s work on attention has been primarily concerned with how the brain is able to “pay attention” to a particular object or feature, and how to build computational models which are able to investigate how attention is organized and what benefits it produces.
That work centers upon evaluating the Feature Similarity Gain Model (FSGM) of Stefan Treue and Julio Martinez-Trujillo, which holds that one way your brain organizes “attention” is by giving preferential weights to those neurons which react more strongly to the features that characterize whatever the brain is trying to get us to pay attention to. If you’re staring at a screen and need to press a button every time a green blip shows up, your brain would do well to make sure that any neurons which respond particularly strongly to green get amplified, while those that do not get subdued.
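The core intuition of that gain idea can be sketched in a few lines of Python. To be clear, this is a toy illustration, not Treue and Martinez-Trujillo’s actual formulation – the feature vectors, the gain parameter `beta`, and the firing rates are all invented for the example.

```python
# Toy sketch of the Feature Similarity Gain Model's central idea:
# a neuron's gain rises when its preferred feature matches the
# attended feature, and falls when it mismatches. All names and
# numbers here are illustrative assumptions.

def fsgm_gain(preferred, attended, beta=0.5):
    """Return a multiplicative gain for a neuron.

    Features are unit vectors; their dot product (similarity)
    ranges from -1 (opposite feature) to +1 (identical feature).
    """
    similarity = sum(p * a for p, a in zip(preferred, attended))
    return 1.0 + beta * similarity

# Two hypothetical neurons, tuned to "green" and "red",
# represented as toy RGB-like unit vectors.
green_neuron = (0.0, 1.0, 0.0)
red_neuron = (1.0, 0.0, 0.0)
attend_green = (0.0, 1.0, 0.0)  # the task: watch for green blips

baseline = 10.0  # spikes/s before attention; an arbitrary value
print(baseline * fsgm_gain(green_neuron, attend_green))  # amplified: 15.0
print(baseline * fsgm_gain(red_neuron, attend_green))    # unchanged: 10.0
```

A neuron tuned *against* the attended feature (negative similarity) would see its gain drop below 1.0 – the “subdued” half of the story above.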
Lindsay put this theory to the test by building a convolutional neural network (or CNN – and if every time you read the phrase “convolutional neural network” from here on out you do so in the voice of James Earl Jones, you’re doing it right), a vast computer model that is often used to mimic the connectivity of mammalian visual systems. She gave it the task of identifying whether particular objects were located in blended or compound pictures. Without any extra “attention”-type architecture built in, the CNN was still able to, more often than not, respond correctly to the images fed into it. However, after building in a preferential tuning system, and experimenting with different layers of implementation for that system, she was able to produce a neural network that responds to visual stimuli and categorizes them with near-human levels of accuracy, thereby teaching us more both about how we pull off the neat chemical trick of focusing on particular features, and about how we can build machines with better optical recognition functionality.
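In the spirit of that experiment, here is a minimal Python sketch of what “building in a preferential tuning system” can mean inside one layer of a network: multiply each feature channel’s activations by an attentional gain. The channel names, gain values, and activations below are hypothetical; a real CNN would apply such gains to learned convolutional feature maps at a chosen layer.

```python
# Toy sketch of FSGM-style attention applied inside a network
# layer: every activation in a feature channel is scaled by that
# channel's attentional gain. Channel names, gains, and activation
# values are invented for illustration.

def apply_attention(feature_maps, gains):
    """Scale each channel's activations by its attentional gain."""
    return {channel: [gains[channel] * act for act in acts]
            for channel, acts in feature_maps.items()}

# Hypothetical activations from one layer for a blended image:
layer_out = {
    "green-sensitive": [0.2, 0.8, 0.4],
    "red-sensitive":   [0.6, 0.1, 0.3],
}

# Attending to "green": amplify the matching channel, subdue the rest.
attended = apply_attention(layer_out, {"green-sensitive": 1.5,
                                       "red-sensitive": 0.5})
```

The interesting experimental question – which Lindsay probed by varying where the tuning was applied – is at which layer such gains help classification most.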
Lindsay’s work continues at University College London, where she has recently taken Josh Merel’s virtual rodent, developed to understand motor systems, and employed it in the study of different theories of how appropriate behaviors are developed through various learning methods, and how the transfer of previous knowledge and behavior developed to confront one type of task aids an individual in the performance of new, similar tasks. It’s exciting work from one of this generation’s most promising minds, and most gifted communicators, whose words and ideas we can all look forward to guiding us to a better knowledge of our squishy, squishy brains and the exquisite mathematics underlying them in the years and decades to come.
FURTHER READING: Obviously, Models of the Mind is your first place to go, and then, after that has sufficiently whetted your appetite, I’d recommend this paper, which very clearly lays out some of Lindsay’s work on attention, this one, which is a larger survey of the history of attention studies, and this paper on her recent work with learning methods, which is less accessible, but just as rad (to use the scientific terminology).
Lead photo: Courtesy of Grace Lindsay and published on Women You Should Know with Dr. Lindsay’s express permission.
Want to know about more awesome Women in Science? Check out my WYSK column archive and my books, Illustrated Women in Science – Volume 1, Volume 2 and Volume 3.
The post Achtung, Brainy: Grace Lindsay And The Mathematical Modeling Of The Human Brain appeared first on Women You Should Know®.