Thinking about Thinking
A New Model for Algorithmic Intelligence
Jeff Hawkins does a lot of thinking. Over the past 25 years, his thinking has led to the founding of Palm Computing and Handspring, major developments in mobile computing technology, and the Graffiti handwriting recognition system. In his recent book, “On Intelligence,” written with Sandra Blakeslee, Mr. Hawkins reveals he also does a lot of thinking about brains. While employed as an electrical engineer at Intel in 1980, he wrote a letter to Intel’s chairman, Gordon Moore, proposing the creation of a research program into the workings of the brain. At the time, Intel’s chief scientist, Dr. Ted Hoff, who had performed early work on neural network theory, suggested too little was known about brain function to make significant progress in the foreseeable future; a prediction that proved correct, as more than two decades later, scientists are only now beginning to form fundamental models of the brain.

Undeterred, Mr. Hawkins took courses in human physiology and biophysics. One day in 1986, feeling swamped by the myriad details of studies on brain structure and philosophical arguments over the definition of intelligence, he contemplated what it means to “understand” something. He was puzzled by what brains do when they are not generating behavior in response to stimulus input. Looking at the familiar objects in his office, he asked what would happen if a new object, one he had never seen before, suddenly appeared in view. He reasoned the new object would “catch his attention” and stand out as not belonging in the scene he expected, or more precisely, the scene his brain “predicted.” He therefore proposed that the thing brains “do” is continuously load memorized patterns into the neurons responsible for our senses of sight, touch, hearing, taste, and smell, and compare these predictions to actual future inputs. If the input matches the prediction, the input is “understood” and additional predictions are made. If the input does not match the prediction, an error is generated, the brain’s “attention” is focused on the mismatch, and processes are called into service to correlate this new pattern with stored patterns recalled from memory. Truly new input is “learned” and stored in memory within the contextual framework of previously learned models.
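The predict-compare-learn loop described above can be caricatured in a few lines of code. This is only a toy sketch of the control flow; the names and the set-based “memory” are invented for illustration and are in no way Hawkins’ actual algorithm.

```python
# Toy sketch of the predict-compare-learn loop: known inputs are
# "understood"; unknown inputs draw "attention" and are learned.
# All names here are invented for illustration only.

memory = {"coffee mug", "stapler", "telephone"}  # previously learned patterns

def perceive(observation):
    if observation in memory:       # input matches a stored prediction
        return f"understood: {observation}"
    else:                           # mismatch draws attention;
        memory.add(observation)     # the new pattern is learned
        return f"attention! learning new pattern: {observation}"

print(perceive("stapler"))      # understood: stapler
print(perceive("flamingo"))     # attention! learning new pattern: flamingo
print(perceive("flamingo"))     # understood: flamingo
```

A second encounter with the once-novel “flamingo” is understood, because the mismatch itself updated the memory.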
Mr. Hawkins’ description of brain function and intelligence is not a philosophical argument; rather, it is a model grounded in the neurophysiology of the mammalian brain. The organ inside our skull is a collection of regions having specific structure and function, which neuroscientists have cataloged according to the bodily functions associated with each. These components regulate our autonomic systems, such as blood circulation, digestion, breathing, and our senses. Enveloping these “old brain” components is a thin, pinkish-gray layer of neural tissue called the neocortex, which in humans is about the size of a large dinner napkin and contains an estimated 30 billion neurons. Brain scientists have discovered the neocortex is responsible for “intelligent behaviors,” including the processes of perception, language, imagination, mathematics, art, and music. As it is the outermost layer of the brain, it can be viewed as the highest level of brain function and is therefore a logical place to look for the biological model of intelligence.
In 1978, Vernon Mountcastle, a neuroscientist at Johns Hopkins University, emphasized the uniform appearance of the neocortex across regions known to control processes as disparate as vision, movement, and hearing, and suggested that every region of the neocortex performs the same basic function. At the time, anatomists recognized the obvious similarity of neocortex structure but attributed the likeness to a lack of sophisticated analytical instrumentation, and set out to catalogue diminutive differences between regions rather than formulate a grand unifying model of fundamental neocortex function. On closer inspection, those differences appear to be variations in the density of connections between the neocortex and the brain subsystems requiring different synaptic bandwidth. Mr. Hawkins hypothesizes that the universal basic function of the neocortex is to store patterns in an invariant form, store sequences of patterns, store them in a hierarchy, and recall the patterns auto-associatively.
Each of these four processes has been separately implemented in digital computers. Computer memory and off-line storage have been around for decades. Curve-fitting and modeling represent patterns in a basic invariant form, databases easily store temporal sequences of models, data-mining techniques such as classification and association create hierarchies, and Hopfield neural networks implement auto-associative memories. In August 2002, Jeff Hawkins created the Redwood Neuroscience Institute to explore the integration of these existing technologies and their suitability as a model for memory and cognition in the brain. [Later, in 2005, he co-founded Numenta to commercialize the institute’s findings.] If successful, the current field of “artificial intelligence” will produce cortex-like memory systems that evolve into machines with true “algorithmic intelligence.” Such machines would not only spur the birth of entire industries but also challenge the philosophical foundations of our society. But that’s merely a prediction.
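The auto-associative recall mentioned above, recovering a complete stored pattern from a noisy or partial cue, is exactly what a Hopfield network does. The following is a minimal sketch, with pattern sizes and values chosen purely for the example, not a production implementation.

```python
import numpy as np

# Minimal Hopfield network: an auto-associative memory that recalls a
# complete stored pattern from a corrupted cue. Patterns are vectors of +/-1.

def train(patterns):
    """Hebbian learning: W accumulates the outer products of stored patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, cue, steps=10):
    """Synchronously update the state until it settles on a stored pattern."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1  # break ties toward +1
    return s

# Store two 8-unit patterns, then recall the first from a corrupted cue.
patterns = np.array([[ 1, -1,  1, -1,  1, -1,  1, -1],
                     [ 1,  1, -1, -1,  1,  1, -1, -1]])
W = train(patterns)

cue = patterns[0].copy()
cue[0] = -1  # flip one bit: a "noisy" input
print(recall(W, cue))  # settles back to [ 1 -1  1 -1  1 -1  1 -1]
```

With the single flipped bit, the network converges back to the stored pattern in one update, a small-scale illustration of recalling a memory from an imperfect sensory input.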
This material originally appeared as a Contributed Editorial in Scientific Computing and Instrumentation 22:2 January 2005, pg. 12.
William L. Weaver is an Associate Professor in the Department of Integrated Science, Business, and Technology at La Salle University in Philadelphia, PA USA. He holds a B.S. Degree with Double Majors in Chemistry and Physics and earned his Ph.D. in Analytical Chemistry with expertise in Ultrafast LASER Spectroscopy. He teaches, writes, and speaks on the application of Systems Thinking to the development of New Products and Innovation.

