Reframing the AI Debate with Principled Values
As the AI arms race dominates headlines, we find ourselves at a pivotal crossroads where the echoes of the past resonate with profound relevance.
In the midst of World War II’s turmoil, C.S. Lewis’s words in “Learning in Wartime” ring true once more: “Good philosophy must exist, if for no other reason, because bad philosophy needs to be answered.” Today, as we turn to modern-day technocrats as our substitute philosophers, we must scrutinize whether their rhetoric qualifies as good philosophy or merely “a great cataract of nonsense that pours from the press and the microphone.”
Upon closer examination, the current rhetoric is not merely bad philosophy; it is an ineffective and damaging force. Our societal aspirations cannot be realized through a myopic pursuit that puts big data first, values last, and the profits of a technical elite above both. We need a more thoughtful and principled approach that prioritizes ethical representation, human well-being, and demonstrated value over blind technological advancement.
Meaningful AI, and the path to AGI, must represent not just data but information, knowledge, and values through objective orientation and neuro-symbolic reasoning; AI must think about the world, not just catalog it.
The error of big data
Data is not information, and information is not a measure of volume. No amount of data can lead to the creation of information without organization and the extraction of context and significance. No amount of Language Model training can lead to Artificial General Intelligence. The error of big data lies in the misconception that the sheer volume of data equates to valuable information. In its raw form, data is merely a collection of points without inherent meaning or context.
Conversely, information is the result of organizing and interpreting data to reveal relationships and insights. It is the process of extracting context and significance from the raw data, transforming it into a form that can inform decision-making and drive understanding. No matter how vast the quantity of data is, it remains inert and useless without applying ontologies and analytical frameworks to derive meaning from it. Simply amassing more and more data does not lead to greater understanding or knowledge.
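To make the distinction concrete, here is a minimal sketch in Python; the readings, the tiny ontology, and the threshold are invented for illustration, not drawn from any particular system:

```python
# Minimal sketch: raw data vs. information derived through a small "ontology".
# The readings, labels, and normal range below are hypothetical illustrations.

raw_data = [71.2, 103.8, 69.9, 142.5]  # bare numbers: no meaning, no context

# A tiny ontology: what the numbers measure, their units, and what counts as notable.
ontology = {
    "measure": "resting heart rate",
    "unit": "beats per minute",
    "normal_range": (60, 100),
}

def to_information(values, ontology):
    """Attach context and significance to raw data points."""
    low, high = ontology["normal_range"]
    info = []
    for value in values:
        status = "within normal range" if low <= value <= high else "outside normal range"
        info.append(f"{value} {ontology['unit']} ({ontology['measure']}): {status}")
    return info

for line in to_information(raw_data, ontology):
    print(line)

# The same four numbers now support a decision; the volume of data never changed,
# only the organization and interpretation applied to it.
```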
Similarly, pursuing Artificial General Intelligence (AGI) by training Language Models on massive datasets is a flawed approach. While Language Models can excel at specific tasks and demonstrate impressive predictive capabilities within their narrow domains, they lack the fundamental ability to truly comprehend and reason about the world in a general, human-like manner, which is why they struggle with sparse datasets.
AGI requires more than just ingesting vast amounts of data; it necessitates the development of systems that can autonomously acquire knowledge, reason abstractly, and adapt to novel situations in a way that transcends the limitations of their training data. Language Models, as powerful as they may be, are ultimately constrained by the boundaries of their training and lack the capacity for true general intelligence.
The path to AGI lies not in accumulating ever-larger datasets but in pursuing architectures and algorithms that can emulate the cognitive processes of the human mind, including the ability to learn, reason, and generalize in a flexible and context-aware manner.
Ignorance abounds
Information is not knowledge, and knowledge cannot exist without intention. Merely cataloging and organizing data into information cannot lead to knowledge; knowledge requires the intentional use of information to achieve an objective. Thus, an objective orientation of AI is required to achieve transformative benefits. Information is the raw material, the building blocks that can be assembled into knowledge, but it is not knowledge itself. Knowledge results from intentionally applying information to achieve a specific objective or goal: the purposeful synthesis and use of information to create understanding, insight, and action toward that objective.
While valuable, cataloging and organizing data into information is insufficient for the attainment of true knowledge. Knowledge requires the active engagement of intention and the deliberate application of information to solve problems, make decisions, or further one’s understanding of a particular domain or subject matter. Knowledge is inherently tied to the pursuit of specific goals.
In the realm of artificial intelligence (AI), pursuing knowledge and realizing transformative benefits necessitates incorporating objective-orientation into the design and development of AI systems. AI systems that are imbued with clear objectives and the ability to intentionally apply information to achieve those objectives are more likely to generate knowledge and deliver tangible value.
Objective-oriented AI systems can leverage information purposefully, drawing insights and making decisions that align with their intended goals. This intentionality enables AI to transcend the mere processing of data and information, elevating it to the realm of knowledge generation and practical application. Ultimately, AI’s true power lies not in its ability to amass and organize vast amounts of data but in its capacity to intentionally apply that information to achieve specific objectives, thereby generating knowledge and driving transformative outcomes across various domains.
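As a hedged illustration of that objective orientation (the routes, relevance scores, and objectives below are hypothetical, not any vendor's design), the same pool of information yields different knowledge depending on the objective it is applied to:

```python
# Sketch: the same information produces different knowledge under different objectives.
# Facts, relevance scores, and objectives are hypothetical illustrations.

information = [
    {"fact": "Route A averages 35 minutes",
     "relevance": {"save_time": 0.9, "save_fuel": 0.3}},
    {"fact": "Route B averages 50 minutes but is 20% shorter",
     "relevance": {"save_time": 0.2, "save_fuel": 0.8}},
    {"fact": "Route A has frequent stop-and-go traffic",
     "relevance": {"save_time": 0.4, "save_fuel": 0.7}},
]

def apply_information(objective, information):
    """Select and rank facts by how well they serve the stated objective."""
    ranked = sorted(information, key=lambda item: item["relevance"][objective], reverse=True)
    return [item["fact"] for item in ranked if item["relevance"][objective] > 0.5]

# The same facts, applied with different intentions, support different decisions.
print("Objective: save time ->", apply_information("save_time", information))
print("Objective: save fuel ->", apply_information("save_fuel", information))
```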
The “Clever Devil”
Knowledge is not intelligence. Intelligence is the ability to apply knowledge and cognitive reasoning based on a set of values and principles. As we progress in leveraging new technologies to capture data, structure information, and extract intentional knowledge, we create what Lewis refers to as a "Clever Devil".
In his 1943 work "The Abolition of Man," he attempts to bridge the gap between the ancient philosophers and his own present, making the case that "[Training] without values, as useful as it is, seems rather to make man a more clever devil". What if we apply that philosophy to AI training? Does training a model with no value system or set of principles make it a "Clever Devil"? I should say so. In fact, every foundational model tested, including ChatGPT 4.0, Gemini 1.5, Claude 3 Opus, and Mixtral, attributes that quote to C.S. Lewis; except he never said it, he never wrote it.
It has simply been misattributed to him and posted so many times online that all the models believe it to be true and assert it with confidence. Fundamentally, current AI has no inherent capacity for developing its own coherent value system or ethical framework from first principles. These models derive their "values" from whatever they are exposed to, which is necessarily limited.
So in essence, while AI training data provides a factual basis, it does not comprehensively encode the deeper human values, cultural contexts, and ethical reasoning that should ultimately guide an advanced AI system's behavior. Endowing AI with those capabilities in a robust and transparent way remains an open challenge. The important point: data alone is insufficient for representing human values and moral frameworks. As AI becomes more capable, we will need processes that can systematically capture, encode, and reinforce those abstract ethical concepts.
But what if we taught an AI how to think critically? What if we taught it how to think like a human? This is called neuro-symbolic cognitive reasoning (NSCR): we provide instructions for how to reason about sparse data, information, and knowledge the way a human does. This creates transparency, explainability, and repeatability for both everyday and transformative applications.
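To give a feel for what that can look like, consider this deliberately simplified sketch; the scorer, rules, and scenario are hypothetical stand-ins rather than an actual NSCR implementation:

```python
# Illustrative neuro-symbolic sketch: a stand-in "neural" scorer proposes,
# explicit symbolic rules decide, and the trace explains why.
# All rules, scores, and the scenario are hypothetical.

def neural_scorer(claim):
    """Stand-in for a learned model: returns a confidence that a claim is true."""
    lookup = {"quote attributed to C.S. Lewis": 0.92}  # confident, but unverified
    return lookup.get(claim, 0.5)

SYMBOLIC_RULES = [
    # (condition, conclusion) pairs written as human-readable checks.
    ("no primary source located", "withhold assertion regardless of model confidence"),
    ("primary source located", "assert with citation"),
]

def reason(claim, evidence):
    trace = [f"neural confidence for '{claim}': {neural_scorer(claim):.2f}"]
    for condition, conclusion in SYMBOLIC_RULES:
        if condition in evidence:
            trace.append(f"rule fired: {condition} -> {conclusion}")
            return conclusion, trace
    return "no rule applied", trace

decision, trace = reason("quote attributed to C.S. Lewis", ["no primary source located"])
print(decision)
for step in trace:
    print("  ", step)

# The system declines to assert the misattributed quote, and the trace shows
# exactly which explicit rule overrode the confident statistical score.
```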
NSCR AIs are here today, but they are not LLMs. What if your AI thought about the world like you do? Do you want it to apply your value system? Your cultural perspective? Your thought process? Or must we all homogenize our intelligence, views, and values in order to adopt platforms that represent those of the technical elite?
Building a better AI future
As we grapple with the profound implications of artificial intelligence, it becomes evident that the path forward requires more than just amassing vast quantities of data. While big data has undoubtedly fueled remarkable advancements, we must recognize its inherent limitations and strive for a more holistic approach that harmonizes technological progress with human values and ethical considerations.
The pursuit of artificial general intelligence (AGI) cannot be reduced to a mere exercise in data accumulation and language model training. True intelligence transcends the boundaries of narrow domains and requires the development of systems that can autonomously acquire knowledge, reason abstractly, and adapt to novel situations in a manner akin to the human mind. This necessitates a shift towards architectures and algorithms that emulate the cognitive processes underpinning human intelligence, including the ability to learn, reason, and generalize in a flexible and context-aware manner.
Moreover, as we advance towards increasingly capable AI systems, we must confront the fundamental challenge of imbuing them with the abstract ethical concepts, cultural contexts, and moral frameworks that should ultimately guide their behavior. Data alone is insufficient for representing the depth and nuance of human values; we must develop systematic processes to capture, encode, and reinforce these essential ethical principles within our artificial intelligences.
The solution may lie in the pursuit of neuro-symbolic cognitive reasoning (NSCR), an approach that teaches AI systems to think critically and reason like humans. By providing instructions on how to process sparse data, information, and knowledge through the lens of human cognition, we can foster transparency, explainability, and repeatability – qualities that are essential for the responsible development and deployment of AI in transformative applications.
Ultimately, the true power of AI lies not in its ability to catalog and process vast amounts of data but in its capacity to intentionally apply the information and knowledge it holds to achieve specific objectives while adhering to a coherent value system and ethical framework. As we navigate this pivotal juncture, we must remain vigilant in our efforts to align technological advancement with the principles that define our humanity, ensuring that our artificial intelligences serve as beacons of progress rather than mere "clever devils."