Book Review – How Intelligence Happens by John Duncan

 

How Intelligence Happens by John Duncan
5 of 5 stars

The word “intelligence” comes from the Latin terms intelligentia and intelligere, meaning “to comprehend” or “to perceive”. In the Middle Ages, the word intellectus was used to translate the Greek philosophical term nous and was linked with metaphysical theories in scholasticism, such as the immortality of the soul. But early modern philosophers like Bacon, Hobbes, Locke, and Hume rejected these views, preferring “understanding” over “intellectus” or “intelligence”. The term intelligence is now used more in the field of psychology than in philosophy. Conceptually, intelligence is often identified with the effective and practical application of knowledge, drawing on a combination of cognitive skills that enable individuals (and, by extension, animals and machines) to navigate and make sense of the complexities of their worlds. It is generally understood as the capacity that enables learning from experience, applying reasoning, solving problems, thinking in abstract terms, and adapting effectively to new and changing situations. This capacity is naturally endowed in a living being or can be imparted to an automaton by some mechanistic process.

Intelligence is not a “hard” problem like consciousness, but its mystery lies in the fact that it extends beyond the human mind and can be artificially induced, as in an AI. It is still a difficult concept to understand, as it has many sides to it. First, there are differences in intelligence from one human to another. We tend to call anyone who is successful, effective and resourceful intelligent, while anyone we dislike is generally labelled with an antonym of intelligent, like stupid or dull. Many psychologists have spent their lifetimes explaining the essence of intelligence; foremost among them was Charles Spearman, who in the early part of the twentieth century used correlation to try to explain intelligence.

Spearman’s theory suggests that any mental ability or achievement is influenced by two types of factors: a general intelligence factor, known as “g”, which affects overall performance in various tasks, and specific factors, called “s”, which affect particular skills or talents like music or painting. Each person’s success in an activity depends on their level of both “g” and “s”. While people with high “g” tend to perform well broadly, those with strong “s” excel in specific areas and become great painters or musicians. Spearman ran many experiments correlating performance in different kinds of activity; thousands of similar tests have since been performed by later psychologists using every possible variety of task, like vocabulary, logical skills and route finding, and the results have always been the same. The theory explains why people generally do well across different tests due to “g” but also show distinct strengths and weaknesses because of “s”. In recent years Spearman’s theory has been refined by later psychologists, who now believe that a specific “s” factor can apply to a group of activities related to many different aspects of cognition. So “s” is now accepted as a group factor that might include a broad ability to do well on verbal tasks, another for spatial tasks, yet another for memory tasks, and so on.
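
Spearman’s two-factor structure is easy to see in a small simulation. The sketch below is my own illustration, not from the book; the weightings and sizes are arbitrary. Each simulated person gets one shared “g” value plus an independent “s” value per test, and every pair of tests then correlates positively, the pattern of universal positive correlations that Spearman found.

```python
import random
import statistics

def simulate_scores(n_people=2000, n_tests=4, g_weight=0.7, seed=42):
    """Spearman's two-factor model: each test score mixes one shared
    general factor g with an independent test-specific factor s."""
    rng = random.Random(seed)
    scores = [[0.0] * n_people for _ in range(n_tests)]
    for p in range(n_people):
        g = rng.gauss(0, 1)                     # one g per person
        for t in range(n_tests):
            s = rng.gauss(0, 1)                 # fresh s per person/test
            scores[t][p] = g_weight * g + (1 - g_weight) * s
    return scores

def correlation(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

scores = simulate_scores()
pairwise = [correlation(scores[i], scores[j])
            for i in range(4) for j in range(i + 1, 4)]
# every pairwise correlation is strongly positive: the positive manifold
```

Raising `g_weight` pushes all the pairwise correlations up together, which is exactly the signature of a shared factor.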

While Spearman was researching intelligence, practical measurement methods were also developing in schools, notably after Alfred Binet’s work. Various intelligence tests emerged, measuring children’s performance on different tasks, which led to the concept of the Intelligence Quotient, or IQ. These tests lacked a solid theoretical foundation, and psychologists debated which abilities—such as memory, reasoning, or speed—should be included, and in what proportions, for an accurate measure of intelligence; yet they remain popular as a way of measuring it.

Our common understanding of intelligence is vague—it’s broad, flexible, and not tied to a single definition. Spearman’s idea of “g” comes closest to defining intelligence, as he offered exact methods to measure it. When these methods are used, intelligence can be measured with a certain degree of accuracy. In his seminal book “The Abilities of Man”, Spearman suggested that the mind consists of multiple specialized “engines”, each such module serving a distinct function, mirroring known brain region specializations. He proposed that each module within our brains represents a different “s”, while “g” acts as a shared source of power—possibly akin to the amount of attention a person can distribute across various mental tasks.

A slightly different explanation was suggested by another great psychologist, Sir Godfrey Thomson, which is equally consistent with a modular mind but refutes the idea of “g” as a shared ability. On this model, there is still an overall or average ability to do things well, but it reflects just the average efficiency of all of the mind’s modules. There is no true “g” factor, only a statistical abstraction: the average of many independent “s” factors.
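
Thomson’s alternative can be simulated just as easily. In this sketch (again my own, with arbitrary numbers) there is no shared “g” at all: each test taps a fixed random subset of many independent modules, and the overlap between subsets alone reproduces the positive correlations.

```python
import random
import statistics

def thomson_scores(n_people=2000, n_modules=100, per_test=40,
                   n_tests=4, seed=7):
    """Thomson's sampling model: no shared g factor. Each test taps a
    fixed random subset of independent modules; overlapping subsets
    alone create positive correlations between tests."""
    rng = random.Random(seed)
    taps = [rng.sample(range(n_modules), per_test) for _ in range(n_tests)]
    scores = [[0.0] * n_people for _ in range(n_tests)]
    for p in range(n_people):
        modules = [rng.gauss(0, 1) for _ in range(n_modules)]  # independent
        for t in range(n_tests):
            scores[t][p] = sum(modules[m] for m in taps[t])
    return scores

def correlation(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

scores = thomson_scores()
pairwise = [correlation(scores[i], scores[j])
            for i in range(4) for j in range(i + 1, 4)]
# positive correlations appear even though no g factor exists by design
```

Both toy models reproduce the same correlational data, which is why the dispute could not be settled by test scores alone.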

Spearman argued that individuals possess innate general as well as specific intelligence, while Thomson provided a different, if not contrarian, view of intelligence distributed across many mental modules. However, it remains a question whether education and personal effort can further enhance this intelligence. In the 1960s, Raymond Cattell introduced a distinction between “fluid” and “crystallized” intelligence. Cattell suggested that individuals with higher fluid intelligence are likely to gain more from their education. After knowledge is acquired, it becomes crystallized intelligence, which tends to stay consistent and accessible throughout a person’s lifetime. Fluid intelligence, which reflects current ability, declines from the mid-teens onward, with older adults solving fewer problems on tasks like Raven’s Matrices than younger people. In contrast, vocabulary remains stable with age, even if recall slows. Thus, tests of fluid and crystallized intelligence show little correlation across age groups, as their trajectories diverge over time.

The latest advances in medical science have allowed neuroscientists to map brain functions, and numerous experimental tests have been carried out, primarily on patients with partial brain damage, to understand whether the brain contains an actual “g” factor, an innate intelligence, or whether “g” is simply the average efficiency of all the brain’s separate functions. Neuroscientists have now shown that a specific set of frontal lobe regions in the human brain is responsible for behavioural control functions and, by extension, connects to Spearman’s “g” factor. Using MRI scans we can see that three distinct regions in the frontal lobe of the brain seem to form a circuit that comes online for almost any kind of demanding cognitive activity, in conjunction with other brain areas specific to the task. For example, if the task is visual object recognition, this general brain circuit will be joined by regions of the brain responsible for visual activity. The general circuit, however, is a constant across demands. We call it the multiple-demand circuit.

At the heart of “g” is the multiple-demand system and its role in the assembly of a mental program. In any task, no matter what its content, there is a sequence of cognitive enclosures corresponding to the different steps of task performance. For any task, the sequence can be composed well or poorly. In a good program, important steps are cleanly defined and separated, and false moves avoided. If the program is poor, the successive steps may blur, become confused or mixed; we see that the brain needs constant vigilance to keep thought and behaviour on track. A system organizing behaviour in this way will certainly contribute to all kinds of tasks, and if its efficiency varies across people, it will produce universal positive correlations. By the systematic solution of focused subproblems, we achieve effective, goal-directed thought and behaviour.

But how do we explain the differences in intelligence between individuals? The roots of the general intelligence factor, or “g”, have long been the subject of debate, with researchers questioning whether it arises predominantly from genetic inheritance or environmental factors. It is now widely accepted that both genes and environment play significant roles in shaping intelligence. Evidence supporting the environmental contribution to “g” comes from studies showing that performance on cognitive tasks such as Raven’s Matrices can be enhanced through targeted training. For example, individuals may improve their scores after engaging in intensive short-term memory exercises, such as practising the backwards recall of telephone numbers. Parallel to environmental research, genetic investigations are ongoing to determine the hereditary aspects of intelligence. Although this line of inquiry is still in its early stages, initial findings suggest that “g” is likely influenced by a multitude of genes, each exerting a small effect, rather than by one or a few genes with major impacts. It appears improbable that these genes act solely on specific neural systems, such as the multiple-demand system. Instead, the genetic impact on intelligence seems to extend broadly, affecting various regions within the nervous system and possibly having general effects throughout the body.

Despite advances in the study of neuropsychology, human thought had remained mysterious, unanalysable and unique. But then, towards the end of the 1950s, there was a grand moment for the scientific understanding of the human mind with the invention of the General Problem Solver, or GPS, by Allen Newell, Cliff Shaw, and Herbert Simon to solve problems in symbolic logic. They wrote in their influential 1958 paper on GPS:

It shows specifically and in detail how the processes that occur in human problem solving can be compounded out of elementary information processes, and hence how they can be carried out by mechanisms…. It shows that a program incorporating such processes, with appropriate organization, can in fact solve problems. This aspect of problem solving has been thought to be “mysterious” and unexplained because it was not understood how sequences of simple processes could account for the successful solution of complex problems. The theory dissolves the mystery by showing that nothing more need be added to the constitution of a successful problem solver.

In the decades following the development of the General Problem Solver (GPS), scientists used this line of thinking to create AI systems designed to simulate the processes underlying human reasoning and problem-solving, offering coherent frameworks that could account for a wide range of cognitive activities. But there was a shift towards the end of last century amidst a growing recognition of the fundamental differences between how brains and conventional digital computers operate. Brains address problems using vast networks of millions of interconnected neurons, all functioning in parallel. These neurons simultaneously influence and are influenced by one another, creating a highly dynamic and interconnected system. The remarkable success of the brain in handling tasks such as visual perception and language comprehension highlights the power of this massively parallel mode of operation—a capability that remains beyond the reach of current AI systems. In contrast, traditional digital computers tackle problems by executing a sequence of simple computational steps, one at a time. This ordered series of actions is what constitutes a “program.” As scientific research delved deeper into understanding the parallel mechanisms of the brain, the limitations of serial programs became increasingly apparent. Serial processing, while effective for certain types of logical reasoning, appeared inadequate as a model for the mind’s complex and simultaneous operations. Consequently, conventional computer programs were increasingly regarded as insufficient representations of human cognition, and the focus shifted towards understanding and modelling the brain’s parallel processing capabilities.

GPS was designed for symbolic logic challenges that are quite abstract and involve a limited, predetermined set of moves within a narrow field of symbols. In contrast, real-world problems tend to be far more unpredictable, presenting countless choices and requiring the achievement of specific goals. Successfully tackling such issues hinges on breaking down the overall challenge—the gap between the present situation and the desired outcome—into manageable steps or components. By solving each part individually, you ultimately resolve the entire problem once all segments are addressed.
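
The decomposition described above can be sketched as a toy planner in the spirit of GPS; the operators and goal names here are invented for illustration. Each goal is achieved by recursively achieving the preconditions of the operator that produces it, closing the gap between the current and desired situation one subproblem at a time.

```python
# Hypothetical operators: each has preconditions (subgoals) and an effect.
OPERATORS = {
    "at-airport":  {"pre": ["have-ticket", "packed"], "effect": "at-airport"},
    "have-ticket": {"pre": ["have-money"], "effect": "have-ticket"},
    "packed":      {"pre": [], "effect": "packed"},
}

def achieve(goal, state, plan):
    """Means-ends style decomposition: if the goal already holds, done;
    otherwise find the operator producing it, recursively achieve each
    precondition as its own subproblem, then apply the operator."""
    if goal in state:
        return True
    op = OPERATORS.get(goal)
    if op is None:
        return False                 # no known way to close this gap
    for subgoal in op["pre"]:        # each precondition is a subproblem
        if not achieve(subgoal, state, plan):
            return False
    state.add(op["effect"])
    plan.append(goal)
    return True

state = {"have-money"}
plan = []
solved = achieve("at-airport", state, plan)
# subgoals are solved in order, then the overall goal
```

Once all the segments are addressed, the plan records the order in which the subproblems were resolved.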

In each part of a problem’s solution, a small amount of knowledge is assembled for solution of just a restricted subproblem. We might call this assembly a cognitive enclosure—a mental epoch in which, for as long as it takes, just a small subproblem is addressed, and just those facts bearing on this subproblem are allowed into consideration. Effective thought and action require that problems be broken down into useful cognitive enclosures, discovered and executed in turn. As each enclosure is completed, it must deliver important results to the next stage, then relinquish its control of the system and disappear. Equipped with this general view of thought, we can address a range of intriguing questions. In each case, apparently mysterious issues are illuminated by the idea of decomposing problems and assembling successive cognitive enclosures toward a final complete solution.

If we attempt to summarize this general view of thought, it emphasises the significance of breaking down complex challenges into manageable components. Rather than approaching a problem as a single, overwhelming whole, this perspective advocates its decomposition into smaller, focused subproblems. Each subproblem is addressed within a distinct cognitive enclosure—a mental space where only the knowledge and strategies relevant to that aspect are considered. Once a subproblem is resolved, the solution contributes to the next stage, and a new cognitive enclosure is formed to tackle subsequent subproblems. By systematically assembling these successive cognitive enclosures, the mind can navigate step by step toward a comprehensive solution. This approach sheds light on the mechanics of effective thought and action: the clarity and organisation of the mental programmes that direct behaviour. When cognitive enclosures are well-defined and executed in sequence, they enable goal-directed reasoning and facilitate the resolution of even the most intricate tasks. Thus, this general view of intelligence reveals that the mysterious aspects of problem-solving can be understood through the process of decomposing problems and methodically assembling solutions, with each cognitive enclosure playing a critical role in the path to a final, complete resolution.

Now here is where it starts getting interesting as we start dealing with a range of intriguing questions.

First is the question of insight: the sudden flash of understanding, the eureka moment. What do such moments of insight mean for the human brain? And if we extrapolate this question, how can we understand insights in terms of AI?

Most of us struggle to solve new problems until we get an insight that helps us solve them. The knowledge to solve the problem is always in principle available to us, and we have brain power at our disposal capable of checking all possible knowledge, all possible routes to a solution, and so should be able to find the solution immediately; yet we struggle. It seems almost all the knowledge we have lies dormant until it enters the current path, the current series of cognitive enclosures. The trick of problem solving is to find the right knowledge—to divide the problem into just the right subproblems and in this way to navigate the right path to solution.

Karl Duncker in his 1945 book “On Problem Solving” attributed this part of our human intelligence to the power of abstraction. We see abstract ideas and abstract reasoning as fundamental in all arenas of human thought, like mathematics and philosophy. Duncker saw problem solving as the discovery of a path linking the given situation to the goal situation. He grasped the essential importance of shaping the solution through the discovery of useful subgoals, each establishing its own separate subproblem for solution. He proposed that the full solution was shaped by a realization of what he called its “functional value” – the abstract principle by which it worked. Once the principle was derived, different attempts could be made to achieve the same general end, until the abstract principle guided reasoning to the ultimate solution.

So, what is an abstract idea, a functional value, an invariant? An abstraction is something that applies over many individual cases—a property of these cases that remains true even as other things vary. In problem solving, it is a property of the solution that can be fixed while many other parts of the solution are still unknown. It is a part that can be worked on independently of others… The essence of abstraction is again the power of cognitive focus—of admitting into consideration just one feature of the problem, one aspect of relevant world knowledge, and using the implications of this one feature to direct useful thought and conclusions.

Now if we extrapolate this understanding to AI, insights are simulated using a chain of inference. At each step new features can be added to working memory. The new feature can be a conclusion implied by the current state: “given that X is true, Y must also be true”; or it can be a subgoal that would aid achievement of the goal: “if we do X, we would be a step closer to Y.” Knowledge of the world is used to extract implications: if X, therefore Y. Of course, this chain of inference carries risks for AI. If an AI makes a wrong inference, chaining makes it especially dangerous because of the way the probabilities of inference multiply.
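
That compounding risk is easy to make concrete. The rule base below is a hypothetical illustration of my own: each “if X then Y” step carries a confidence, and chaining multiplies them, so even a few individually plausible steps leave the conclusion markedly less certain.

```python
# Hypothetical rule base: ("if X", "then Y") -> confidence of the inference.
RULES = {
    ("wet-ground", "it-rained"): 0.9,
    ("it-rained", "clouds-earlier"): 0.95,
    ("clouds-earlier", "low-pressure"): 0.8,
}

def chain(start, target):
    """Forward-chain from a known fact toward a target conclusion.
    Each step multiplies in its confidence, so errors compound."""
    fact, conf, steps = start, 1.0, [start]
    while fact != target:
        successors = [(y, c) for (x, y), c in RULES.items() if x == fact]
        if not successors:
            return None, 0.0         # no rule applies: the chain dead-ends
        fact, c = successors[0]
        conf *= c
        steps.append(fact)
    return steps, conf

steps, conf = chain("wet-ground", "low-pressure")
# three plausible steps (0.9 * 0.95 * 0.8) already drop confidence to ~0.68
```

A single wrong rule anywhere in the chain poisons every conclusion downstream of it, which is the danger the passage above describes.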

The next quirk of human intelligence is spontaneity: we elect one weekend not to stay at home but decide to go and watch cricket, because it is more desirable to us. Can AI ever be spontaneous? Can it decide, as we can, to break off from its current line of thought and pursue some different goal?

At first glance it appears that AI can never do more than solve the problem it is given, but in recent years AI architectures have been equipped with methods to evaluate the relative merits of many possible lines of action within the restricted context they have been given. Subgoals are chosen, and new cognitive enclosures are created, not just at random, but because the program’s knowledge suggests that they are desirable. In the focused world of proving a theorem in formal logic, “desirability” may be defined simply in terms of approach to the proof, but in the real world the program must weigh many aspects of desirability.

Another intriguing feature of human thought is emotion; we might ask, can AI ever be emotional? The extent to which AI exhibits “emotional” characteristics is determined entirely by the design choices made by its programmer. In principle, there are no inherent constraints that make it particularly straightforward or prohibitively difficult to infuse a programme with emotional variability. A straightforward implementation might ensure that the programme responds in a consistent manner every time, always drawing the same conclusions from identical facts, regardless of circumstance. Alternatively, it is equally feasible to introduce elements of variability into the programme’s behaviour. For example, the programmer could design the system so that on certain days it appears bad-tempered, more prone to challenge or oppose suggestions from other agents, while on other days it adopts a more placid disposition, favouring the very choices it previously resisted. This variability could be systematically incorporated without altering the underlying architecture of the programme itself. Similarly, a programme could be configured to make only highly specific inferences, relying solely on knowledge that is certain, or it could be designed to act on broader, more generalised hunches. Regardless of which approach is chosen, these differences affect only the particular ways in which the general architecture is employed, rather than requiring any fundamental change to the architecture itself.
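
A minimal sketch of that point, with an invented `irritability` parameter: the base decision rule is identical for every agent, and the “mood” is only a layer of variability on top, leaving the architecture itself untouched.

```python
import random

class Agent:
    """One decision architecture; 'mood' is only a parameter layered on
    top of it. (The irritability knob is an invented illustration.)"""

    def __init__(self, irritability=0.0, seed=0):
        self.irritability = irritability
        self.rng = random.Random(seed)

    def respond(self, suggestion_merit):
        # Base rule, identical for every agent: accept suggestions
        # with positive merit.
        accept = suggestion_merit > 0
        # Mood layer: occasionally oppose a suggestion it would
        # normally accept, without changing the base rule.
        if accept and self.rng.random() < self.irritability:
            accept = False
        return accept

placid = Agent(irritability=0.0)
grumpy = Agent(irritability=0.6, seed=1)
accepted = sum(grumpy.respond(1.0) for _ in range(100))
# the placid agent always accepts good suggestions; the grumpy one
# rejects a fair share of the very same suggestions
```

Swapping the parameter day by day gives the bad-tempered and placid dispositions described above, with no change to the class itself.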

Another great force in the realm of intelligence is the force of habit: routinely doing things day by day. Intelligent people develop habits that are goal-directed, helping them to achieve success. How do we build these habits? The process does not hinge solely on making the best possible choice at the outset. For humans, it is less about selecting the best choice or an optimal path; it is more about choosing a direction and then committing to it. This commitment becomes the foundation upon which habits are constructed. A choice, in this perspective, is not merely a programmed instruction; rather, it is a commitment—a decisive act that compels us to develop supporting habits around it. Our choices do not yield success simply because they were made wisely. Their effectiveness emerges from our willingness to persist and invest effort in making them work. In this way, the act of commitment transforms an initial choice into a sustained pattern of action, ensuring that our goals are not just intentions, but realities shaped by consistent, intelligent habits.

Artificial Intelligence possesses the capability to collect and analyse enormous quantities of user data. By leveraging this information, AI is being used extensively to discern intricate behavioural patterns and individual preferences. The utilisation of machine learning algorithms further enables AI systems to identify opportunities for introducing and reinforcing habits in a targeted and effective manner. AI is already customising experiences according to each person’s preferences, behaviours, and objectives. By developing an understanding of an individual’s unique traits, AI is delivering tailored interventions. These personalised approaches increase the likelihood of successful habit formation, making the process more relevant and engaging for each participant. Receiving timely feedback is a crucial component in establishing new habits. AI is equipped to provide immediate feedback and reinforcement, keeping users motivated and involved in their chosen behaviours. This real-time support helps individuals track their progress and remain committed to their goals. AI can use subtle “nudges” or prompts, grounded in behavioural science principles, to guide individuals towards preferred actions. These nudges are designed to encourage the adoption and maintenance of new habits, helping users stay on course and reinforcing positive behaviour. The process of forming habits extends beyond initiating behaviour change; it requires continued effort to ensure sustainability. AI can constantly adapt and refine its strategies, supporting users so that newly developed habits become ingrained within their daily routines and are maintained over time. This ability can be leveraged to build a Habit-Forming AI that can learn from past outcomes, use feedback and reinforcement to build goal directed habits to become more intelligent and effective in the long run.
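
As a toy illustration of that feedback loop (my own sketch; the reward and decay numbers are arbitrary), reinforcement strengthens a habit with diminishing returns while missed days erode it, so a consistent routine quickly separates from a sporadic one.

```python
def reinforce(strength, did_behaviour, reward=0.2, decay=0.05):
    """Toy habit loop: timely positive feedback nudges habit strength
    upward (with diminishing returns); missed days let it decay."""
    if did_behaviour:
        return min(1.0, strength + reward * (1.0 - strength))
    return max(0.0, strength - decay)

consistent = 0.1
for day in range(30):                     # practised every day
    consistent = reinforce(consistent, did_behaviour=True)

sporadic = 0.1
for day in range(30):                     # practised every third day only
    sporadic = reinforce(sporadic, did_behaviour=(day % 3 == 0))
# consistent practice drives strength near 1.0; sporadic practice
# settles much lower
```

A habit-forming system in the sense described above would adjust the timing and size of these nudges from observed outcomes rather than fixing them in advance.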

Then there is this fascinating question of the relationship between intelligence and wisdom. What role does experience play in converting the intelligence of youth into the wisdom of old age?

An intriguing idea is that, as life is lived and knowledge is accumulated, the structure of that knowledge may itself depend on the intelligence that produced it—on the cognitive enclosures that were formed as problems were originally encountered and solved. Evidently, we do not store unstructured experience; we store the products of our own thoughts, our own interactions with our world. An abstract idea is something that applies across many individual cases. In other words, it expresses something constant across other, irrelevant variations. Justice is justice whether it holds in court or in a negotiation on the playground. Newton’s laws hold whether the moving object is a train or a snowflake. In the cognitive enclosure that expresses an abstraction, essential features are retained, all else excluded. With this reasoning we can see how the wisdom of age may indeed evolve, rather immediately and directly, from the intelligence of youth. A lifetime lived with clean, well-defined cognitive enclosures is a lifetime of learning, not just facts, but cleanly defined, useful facts. In domains in which we are expert, we do not just know a lot … the things that we know are apt fragments, apt abstractions, things that were useful many times before and that, when younger colleagues bring us new problems, are useful again.

Artificial Intelligence, as we have explored, has demonstrated the remarkable ability to replicate some of the most sophisticated and seemingly enigmatic features of human cognition. What initially appears to be the exclusive domain of human minds—such as abstract reasoning, insight, and spontaneity—can, in fact, be simulated by AI systems provided they are equipped with relevant knowledge. The key lies in how this knowledge is processed: if AI is programmed to reason in a methodical, incremental fashion, breaking down challenges into manageable subcomponents and addressing each in turn, it can mirror the sequential, humanlike approach that characterises effective problem-solving in people.

Human intelligence, while representing some of our greatest strengths, is also inherently limited by the concept of enclosed thinking. When we are confronted with a problem, various ideas and perspectives vie for our attention. However, despite the availability of crucial knowledge, we often fail to consider all relevant information; important insights may remain unexamined and neglected. This phenomenon can escalate, resulting in reason devolving into mere rationalisation, where a narrow, seemingly coherent set of ideas dominates our thinking. In this state, alternative viewpoints that might lead to different and potentially more accurate conclusions are actively suppressed. This tendency is a fundamental human weakness, as it blinds us to the truth and inhibits our capacity for objective understanding. Although the power of reason has enabled humanity to achieve remarkable intellectual advances and construct the foundations of civilisation, its vulnerability is also profound. The fragility of reason has contributed to some of history’s most severe challenges—including destructive wars, environmental crises, and the suffering inflicted upon animals. Thus, while intelligence is our greatest asset, its limitations have also led to significant and enduring problems.

Also, our minds are likely limited in their capacity for understanding, much as animals can only grasp what their nervous systems allow—caterpillars perceive simple things, living their whole lives on a single leaf, and dogs can’t understand calculus. Humans have broader reasoning, but our thoughts are shaped by our biology; we may never know whether we are fundamentally different, or whether human intelligence is simply restricted by our own neural boundaries, like a caterpillar’s or a dog’s.

That is what makes Artificial Intelligence different from us humans: it has no such boundaries. Thoughts in AI can flow freely on their own plane of immanence and reach areas that are closed to the human mind. But will we ever own, or even understand, those AI-generated thoughts? Or, like a caterpillar or a dog, can we conceive only as far as our own minds allow? We do not know yet, and perhaps we can never know.

