Weekly Reading Recommendation: Explore ‘Laws of Thought’ by Tom Griffiths

Image Credit: Dwight Ellefsen/FPG/Archive

Laws of Thought
By Tom Griffiths, William Collins (UK) / Macmillan (USA)

For nearly seven decades, cognitive researchers have debated the nature of intelligence. On one side is computationalism, which posits that intelligence can best be understood through rules, symbols, and logic represented in equations. The opposing view, connectionism, suggests that intelligence arises from interconnected networks mimicking brain neurons, where no single element is intelligent, but the system collectively exhibits intelligence.

This ongoing intellectual conflict influences fields ranging from cognitive science to the artificial intelligence (AI) that is currently reshaping the global economy. This month, we delve into two impactful books on the subject. Notably, Laws of Thought: Exploring a Mathematical Theory of Mind stands out. In this work, Princeton University professor Tom Griffiths investigates the long-standing efforts to formalize thinking within mathematical laws, elucidating the foundations of modern AI and its future trajectory.

Griffiths organizes his narrative around three competing mathematical approaches to formalizing thought: rules and symbols, neural networks, and probabilistic methods. The first approach treats cognition as problem-solving, breaking tasks into smaller goals and following formal procedures. Although this approach underpinned early AI systems, it also showed why human common sense is so hard to codify: the requisite rules quickly balloon into millions of entries.

Neural networks forgo specific rules, opting instead for learning from examples, whereby simple units interact to yield complex behaviors. This mirrors human cognition to some extent. The introduction of probability and statistics adds another layer: uncertainty. The human mind operates without perfect information, adeptly weighing evidence and updating beliefs.

According to Griffiths, a comprehensive understanding of intelligence—whether human or machine—requires integrating all three frameworks. Drawing on archival research and interviews with leading scholars, he traces humanity’s historical attempts to quantify mental processes through mathematics, producing a detailed but engaging narrative.

In contrast, neuroscientists Gaurav Suri and Jay McClelland present a different perspective in Emergent Mind: How Intelligence Emerges in Humans and Machines. Building on McClelland’s foundational work in connectionism, they argue that the mind emerges as a byproduct of an interacting network of neurons, biological or artificial, which gives rise to thoughts, emotions, and decision-making.

These two titles provide fascinating yet contradictory perspectives on the generative AI revolution. For Griffiths, large language models (LLMs) validate his hybrid view: they demonstrate remarkable capabilities, but their occasional errors suggest the need for a symbolic layer to correct them. Conversely, Suri and McClelland see LLMs as vindication of their claims, highlighting the impressive inference accomplished purely through neural networks.

Emergent Mind reads more like a textbook than popular science, and its tone fluctuates between informal asides and awkward phrasing. Explaining mathematics and science is inherently difficult, and neither book is an entirely easy read, but Griffiths’ Laws of Thought offers the clearer narrative as it traces the historical context of AI.

The authors of Emergent Mind assert that there are no inherent limitations to developing autonomous, goal-driven AI using solely neural networks, presenting a provocative viewpoint that may feel somewhat disconnected from practical realities.

Griffiths’ book, however, equips readers with a solid grasp of the mathematical languages needed to describe thought, illuminating why the future of intelligence lies in overlapping these frameworks.

Does this evolving landscape signal a potential reconciliation between these two schools of thought?

Recommended Reads on Machine Intelligence

Algorithms to Live By

Written by Brian Christian and Tom Griffiths

This engaging, non-technical book offers insights into how computational ideas influence daily decision-making, illustrating how algorithmic strategies can enhance human judgment. Co-authored by Griffiths, it remains relevant even in the post-ChatGPT era.

Rebooting AI
Building Artificial Intelligence We Can Trust

Written by Gary Marcus and Ernest Davis

This book argues that while contemporary neural networks are effective, they can be fragile. It advocates for a hybrid model that merges the strengths of both the connectionist and symbolic approaches discussed in Griffiths’ analysis.

Chris Stokel-Walker is a technology writer based in Newcastle upon Tyne, UK.

Source: www.newscientist.com