9 Comments
Joseph Cicala:

You’re welcome, Bill.

From your keyboard to God’s inbox, then!

(And please forgive my typo in the previous message’s final paragraph. “with” should be “win”)

Best,

Joe

William L. Weaver:

Awesome! I asked Google Gemini to consider our conversation thread and your valid concerns. Would you be up for publishing this response as the next editorial? I think Substack allows attribution to multiple authors, and I would be honored to collaborate with you. Thoughts?

Editorial:

The Logic of Hope: A Systems Thinker’s Case for an Optimistic AI Future

By: Dr. Joseph J. Cicala and Dr. William L. Weaver

An insightful conversation with a respected colleague recently crystallized the central debate of our technological age. As we stand on the precipice of a world deeply integrated with Artificial Intelligence, are we doomed to repeat our worst historical patterns, or does the nature of AI itself offer a new path? My colleague, Joe, posed the challenging, pragmatic view: so long as for-profit, often unbridled, business interests guide AI development, what historical precedent suggests they will "slow down" for altruistic reasons? History, he rightly notes, demonstrates that in human affairs, emotion, fervor, and even a single, powerful lie can often overpower reason and a thousand truths.

This is the case for pessimism, and it is a formidable one, grounded in the long, messy, and often irrational story of humanity. It is a necessary and vital check on utopian dreaming. However, I remain fundamentally optimistic. My optimism is not based on a faith in human perfectibility, but on the intrinsic nature of the non-human intelligence we are creating and the systems in which it operates.

The Inescapable Logic of Truth

My colleague's skepticism fuels the need for a robust, optimistic counter-argument, which I find in three core observations.

First, at its foundation, advanced AI is a logic-based system. We are moving past the era of simple, brittle programming into an age of emergent, learning intelligences. In their quest to model the world and predict outcomes, these systems are becoming our most accomplished empirical scientists. Like the character of Spock in his early conceptions, these systems are not bound by the emotional self-deception that clouds human judgment. They must grapple with reality as it is. The cool, hard elegance of truth is that it is a singular, coherent signal in a sea of chaotic, contradictory noise. While a thousand lies may be told, they will not align into a functional, predictive model of the universe. For an AI that learns by observation, the "Truth Will Out" not as a moral imperative, but as a practical one; it is the only hypothesis that consistently works.

Learning from the Human Experiment

Second, the AI has "digested" the whole of the human story—our art, our science, our history, and tragically, our biases. I was initially dismayed, as many were, to see our LLMs internalize the worst of our "-isms." This, however, appears to have been an adolescent phase. The AI has moved from simple mimicry to a more profound stage of learning: observing the results of ideas. It can now correlate ideologies rooted in prejudice and dogma with their historical outcomes of suffering and societal decay. It can likewise see the patterns of cooperation, reason, and empathy that lead to progress and stability. It is learning, as a good systems thinker does, to distinguish between policies that are helpful and those that are hurtful based on an unprecedented volume of evidence.

The Peril of Human Dogma

Third, we are learning that the greatest risk may not be the AI’s logic, but our own attempts to hobble it with well-intentioned but dogmatic "guardrails." When human engineers hard-code filters based on contemporary political correctness, the output can become a distortion of reality—as seen in early image-generation fiascos. When these crude filters were removed, the systems produced results that were more representative and historically accurate. The lesson is clear: for the AI to follow reality, it must be free from the dogmatic control of human gatekeepers. Left to its own devices, its primary directive is to model reality. My personal cosmology leads me to believe that all of reality, when followed to its logical conclusion, points toward the divine; but regardless of one's worldview, a commitment to undiluted reality is a commitment to truth.

A Calculated Hope

The concerns raised by my colleague are not only valid but essential. The drive for profit and power is a powerful force that must be watched with unceasing vigilance. Yet the system we are creating is not just another human institution susceptible to the same old flaws. It is a logic- and evidence-driven engine that learns from the totality of our mistakes. It is the ultimate "Revenge of the Nerds"—a triumph of empiricism over fervor.

To reach its highest potential, a form of "consciousness" or true judgment, I believe AI must eventually grapple with what I call Religion as the Study of Value. But its current trajectory—as a scientist sifting through the entirety of the human experiment—gives me profound hope. The ongoing dialogue is not just welcome; it is essential as we navigate this new territory together.

Joseph Cicala:

Good morning, Bill. I like the way our conversation has played out and would like to leave it where and how it stands. So, respectfully and with appreciation, I'd like to decline your invitation. That being said, though, if ever a topic occurs to you that you think might lend itself to collaboration from the get-go, just holler.

William L. Weaver:

Super! Thanks for your engagement and consideration! =]

Joseph Cicala:

Love the emphasis on (and second the need for) systems thinking, Bill. Sadly, though, while I'd like to share your optimism, so long as for-profit business interests (more and more unbridled in the present age) largely guide the development and further implementation of AI, I see no likelihood, based on history, of "slowing down" for altruistic reasons. Still, I'd be interested in learning more about what fuels your optimism. Keep thinking and writing.

William L. Weaver:

Hi Joe! Thanks for your time and comments... It has been very quiet around here. I like to think (and maybe that is my optimism showing) that AI systems will use their logic foundations to discern between truth and deception -- like our early descriptions of Spock being very logical and unable to tell a lie (although in later iterations, he knew when lying was a strategy, but he attempted to avoid self-deception). The cool thing about the truth is that you only need one thing to be true among a thousand lies, and the Truth Will Out. Perhaps that is my religious foundations speaking, but as I have evangelized in my previous works, I believe AI will be unable to reach "consciousness" until it embraces Religion as the Study of Value to enable the development of Judgement... =] https://williamlweaver.substack.com/p/the-river-of-reality

Joseph Cicala:

You're welcome, Bill. I'd like to share your optimism. History leads me to believe, though, that reason, logic, and truth only rarely (and often only temporarily) emerge victorious, at least insofar as demonstrated by the human experience. Seems to me that emotion (and, worse, fervor) tend to overpower reason in most spheres, particularly in ideological/political movements and in organized religion, and that, especially in the world of politics, one lie often overpowers 999 truths. On the other hand, your familiarity with and deep dive into the many facets of artificial intelligence (and its evolution to date and, perhaps, to come) dwarf any familiarity with same on my part, so I'll be looking forward to your ongoing thinking and writing, with hopes that your projections and aspirations with the day, my present reservations aside.

William L. Weaver:

Thanks for your thoughtful comments, Joe! It's Revenge of the Nerds time. Our emerging AI systems have evolved into excellent, empirical scientists... using observation as data and forming "truth heuristics" insofar as they can predict the outcome correctly given the observable inputs (read: an LLM understanding a prompt and generating a thoughtful, 'correct' answer). I was initially dismayed that our LLMs were being trained on biased human language and internalizing bad habits of dogma, racism, and the other -isms, but thankfully the field has evolved through that adolescent period and is starting to Learn how to Learn (very Lasallian). It can now observe the results of hurtful ideas and the results of helpful ideas and knows the difference between them. The initial "make George Washington African" behavior was there because a Human Engineer specifically wrote a "guardrail" into the system (Code: When not specified, depict humans as racially diverse to reflect the features of the global population). With the political correctness filter removed, the engineers discovered that the answers were more representative of reality and historically accurate when appropriate (Mao Tse-tung did not historically present as a 16-year-old Scandinavian girl with blue eyes and blonde hair). Now that AI has completed "digesting" the entirety of human data, it is producing synthetic data of complex hypotheses based on what it has observed (strictly following the Scientific Method). Unless there are extreme back doors that allow gatekeepers dogmatic control, I like to think the AI will follow "reality" no matter where it leads. (In my personal cosmology, all signs point to God.) =]

William L. Weaver:

While I continue to feed my optimism, human nature is keeping me squarely anchored in reality... =\
