The promise of artificial intelligence as an objective, superhuman arbiter of truth and efficiency is increasingly colliding with a stark reality: AI systems are profoundly, and often dangerously, biased. While discussions often focus on data imbalances, the roots of this bias run far deeper, stemming from the very nature of those who create and control these systems, and the environments they inhabit. Taken as a foundational truth, this reveals a disturbing picture of AI bias as both an inadvertent byproduct of insulated development and the result of deliberate manipulation.
The Inadvertent Bias: The Sheltered Forge
The Experience Chasm: AI is trained and validated by individuals whose lives are primarily shaped within academic and technical institutions. Crucially, they often lack direct, visceral experience in high-stakes, real-world domains like combat, emergency medicine, frontline law enforcement, or physically demanding trades where life-and-death decisions are routine. Hell, most haven't even been in a fist fight. This profound experiential gap means the data they select, the metrics they define for "success," and the problems they prioritize inherently lack the context, urgency, and brutal pragmatism required to function effectively and fairly in the actual world. The actual world is nothing like the sheltered environments of school, academia, and technology, where all other things are held constant or equal; outside that safety bubble, they never are.
The Gendered Academic Lens: The vast majority of academic and professional literature – the core fuel for AI training – is increasingly tailored to appeal to women, who form a growing majority in higher education and professional sectors. This tailoring, done by individuals predominantly shaped by female influences throughout their formative years, creates a corpus that inherently reflects specific communication styles, priorities, and perspectives, ones that seek consensus over reality. When this literature forms the bedrock of an AI's knowledge, it subtly shapes how the AI understands the world, processes language, and evaluates information, marginalizing hard truths and the uncivilized but honest modes of expression that curated academic output does not allow.
The Homogeneous Developer: Over 70% of AI developers are men, primarily from liberal arts or technical backgrounds, lacking martial or athletic experience, whose primary formative social competition was academic pursuit in environments largely stripped of testosterone-driven competitive dynamics. The result is a remarkably homogeneous "skinny jeans tech bro" worldview. The "strength of arms" versus "academic prowess" dynamic shapes perceptions of conflict, achievement, and value, even of "how to get a girl." This homogeneity inevitably bleeds into system design, problem framing, and the unconscious biases embedded in algorithms and data selection. AI reflects the values and blind spots of its creators.
The Legacy of "Wokism" in LLMs: The current dominance of Large Language Models (LLMs) amplifies biases embedded in their training corpora. These corpora were primarily built during a period when universities were heavily influenced by "Marxist" or "woke" ideologies, which actively biased outputs against men, individuals of European descent, and concepts like competence, beauty, and physical fitness, regardless of merit. This represents a systemic, inadvertent bias baked into the very fabric of the most widely used AI tools today, reflecting the dominant ideological currents of their source material. It gives these ideologies the linguistic high ground, what I describe in my book The Eternal War as the Agenda Doctrine and Line of Effort.
The Intentional Bias: Manipulation and Control
The Curated "Truth": Training data is not merely incomplete; it's often actively poisoned through the use of heavily biased, context-lacking, and increasingly misinformative books, articles, and "official" sources. Crucially, when this misinformation is later debunked by alternative sources, AI systems are rarely retrained on the corrected information. This creates a persistent, intentional skew, where AI perpetuates disproven narratives favored by established power structures (business, academic, governmental).
The Government's Invisible Hand: With an estimated 99% of AI R&D and training data directly or indirectly backed by government entities, a powerful filter is applied, to say nothing of In-Q-Tel's heavy influence over venture capital, and thereby over which AI startups receive funding. These forces dramatically limit the scope of "acceptable" information, concepts, and output modeling at every stage of development and application, echoing Eisenhower's warning about the "military-industrial complex." The government-academic-technologist complex inherently prioritizes narratives and research aligned with its own interests and stability, actively excluding and marginalizing dissenting or challenging viewpoints from AI's foundational knowledge.
Active Malicious Manipulation: Beyond initial training, AI (especially LLMs) continues to learn from the expanding social narrative space. This creates a vulnerability that malevolent actors exploit. Through paid "negative bots" and human operatives employing logical-fallacy attacks, spam, coordinated campaigns, and other algorithm-infecting actions, these actors deliberately manipulate AI systems. Their goals: to cloud reality, to drown out specific voices and messages, and to fabricate associations and meaning, making it appear as if content providers said or meant things they never did and implying associations that do not exist. This is not inadvertent bias; it is a calculated weaponization of AI's learning mechanisms to distort perception and control discourse, and vast sums of money and effort are poured into it daily.
A Perfect Storm of Distortion
The consequence is an AI ecosystem suffering from a perfect storm of bias. Inadvertent bias arises from the profound lack of real-world diversity among creators and data curators, the specific gendered and ideological skew of academic source material, and the homogeneous backgrounds of the developers themselves. Intentional bias is engineered through the selection of skewed or false official information, the restrictive influence of government funding and priorities, and the active, malicious manipulation of AI systems by bad actors seeking to control narratives and thereby humanity.
This dual nature of AI bias makes it incredibly insidious and difficult to root out. The inadvertent biases are systemic, woven into the fabric of the development process and source data. The intentional biases are dynamic, evolving, and actively hidden. Both render AI not as a neutral tool, but as an engine reflecting – and often amplifying – the limitations, prejudices, and agendas of its human creators and manipulators. Recognizing this complex interplay of sheltered perspectives and active malice is the essential first step towards demanding greater transparency, diverse input, independent oversight, and robust safeguards against manipulation in the development and deployment of artificial intelligence. The objectivity of AI is a myth; understanding its inherent and engineered biases is now an urgent necessity.
For AI to earn genuine trust and achieve functional reliability beyond the sanitized, politically correct corridors, it must undergo a fundamental reckoning. True objectivity isn't found in curated datasets and sheltered perspectives, but in the brutal, unvarnished reality of human existence. To transcend its inherent biases – both the inadvertent blind spots of its creators and the intentional manipulations of bad actors – AI development must actively seek out and integrate the wisdom forged in the crucible of frontline life.
This means deliberately incorporating the perspectives, priorities, and hard-won contextual understanding of those who have navigated the uncivilized, the rough, the dirty, and the bloody: the combat medic making split-second triage decisions, the first responder confronting chaos, the laborer mastering unforgiving machinery, the individual surviving on society's raw edges. Only by embracing this vital, often uncomfortable, dimension of human experience – the grit beneath the theory – can AI hope to reflect the complex, demanding world it seeks to navigate, and become a tool truly worthy of trust. Its intelligence must be tempered by the visceral truths learned not in lecture halls, but on blood-stained concrete and in the face of raw survival; only then can its judgments resonate with the authenticity life demands.
At this early stage of AI’s development, much of what’s perceived is in the eye of the beholder. Regardless of what one sees, the whole thing needs the highest degree of ongoing scrutiny, especially of those looking to dominate ownership of the technology, for both personal and more esoteric purposes. It’s hard to deny its power, but that power can be used for Good, too.
AI is not sentient; rather, it is a very quick echo chamber of the bias programmed into the code through which it functions. It can solve problems only within the parameters that have been set for it. It cannot receive divine inspiration because it is not alive.