Susty Code Stops AI from Making Things Up
In this article, you’ll discover:
- Why the Susty Code is the missing piece to stop AI hallucinations.
- How testing epistemic rules early creates safer AI answers.
- Why relying on simple rewards causes AI to make biased choices.
- How this new code prepares tech for upcoming federal regulations.
- Why humans and AI must work as a strong team to succeed.
Have you ever asked an AI a simple question, only to get a completely made-up answer? This common problem is called an AI hallucination. But a new startup called Artificial Epistemics has a plan to fix it. It recently launched the Susty Code, a new workflow system designed to maximize an AI’s ability to tell the truth and act with integrity.
Fixing the Problem at the Source
Instead of trying to fix bad answers after they happen, the Susty Code works early. It attacks the problem at the pre-action level of knowledge processing. Created by founders Joseph M. Firestone and Mark W. McElroy, this new tool checks facts and values before the AI even speaks.
“The Susty Code adds a layer of intelligence to AI called knowledge processing, a natural function in humans and other living systems, but still missing from most of AI today. This goes well beyond pattern-matching, probabilities, and simple reinforcement learning by bringing sophisticated epistemic rules carefully into play.” – Joseph M. Firestone and Mark W. McElroy
Why We Need Better Rules
Many of today’s big tech companies use simple reward systems to train their AI. But this can lead to rigged answers. For example, some developers tune their AI to fit their own personal biases. The Susty Code does things differently. It uses long-standing rules from science, ethics, and logic to test information so that answers are as ‘truth-like’ and ‘ought-like’ as possible.
How the Susty Code Works
The system does not just give the AI a list of facts. Rather, it teaches the AI how to think about information. It uses a rule-based approach to test and evaluate competing claims for their truth and legitimacy.
“Our epistemic, rule-based approach isn’t based on reinforcement learning at all. It is based on criticism of networks of knowledge claims and on deciding which competing knowledge claim networks survive criticism and are closer to the truth.” – Artificial Epistemics
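The quote above describes the core idea: competing networks of knowledge claims are subjected to criticism, and the network that survives is treated as closer to the truth. Artificial Epistemics has not published an implementation, so the sketch below is only a loose illustration of that general idea. Every name and rule in it (`ClaimNetwork`, `has_support`, `is_consistent`, `select_survivor`) is invented for this example and does not come from the Susty Code itself.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

# Hypothetical illustration only: all names and rules here are invented
# to show criticism-based selection among competing claim networks.

@dataclass
class ClaimNetwork:
    """A set of related knowledge claims proposed as one possible answer."""
    name: str
    claims: List[str]
    citations: int = 0          # supporting sources the network can point to
    contradictions: int = 0     # internal inconsistencies found so far

# A "criticism" is a test a claim network must survive to stay in play.
Criticism = Callable[[ClaimNetwork], bool]

def has_support(net: ClaimNetwork) -> bool:
    """Criticism: a network with no supporting sources is eliminated."""
    return net.citations > 0

def is_consistent(net: ClaimNetwork) -> bool:
    """Criticism: a network that contradicts itself is eliminated."""
    return net.contradictions == 0

def select_survivor(
    networks: List[ClaimNetwork],
    criticisms: List[Criticism],
) -> Optional[ClaimNetwork]:
    """Keep only networks that survive every criticism, then rank the
    survivors by support (a crude stand-in for 'truth-likeness')."""
    survivors = [n for n in networks if all(c(n) for c in criticisms)]
    return max(survivors, key=lambda n: n.citations, default=None)

# Two competing answers to the same question:
a = ClaimNetwork("answer-A", ["The dam was built in 1936"], citations=3)
b = ClaimNetwork("answer-B", ["The dam was built in 1980"], citations=0)

best = select_survivor([a, b], [has_support, is_consistent])
print(best.name)  # → answer-A
```

The point of the sketch is the shape of the process, not the toy rules: answers are not scored by a single reward signal but filtered through explicit tests, and only what survives all of them is offered to the user.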
Preparing for Future Laws
Getting big companies to change is not easy. Many AI creators just want their systems to work faster. But people are getting worried about bad AI behavior. Soon, there might be strict federal regulations to control how AI acts.
Companies that use the Susty Code will have a huge advantage. They will not have to worry about their AI going berserk or suggesting actions that are clearly morally wrong. This makes their tools much safer for everyone to use.
Humans Are Still Essential
Even with this smart new code, AI cannot do everything alone. The creators believe that humans and AI must work as a team. People are needed to find problems, check the new knowledge, and give feedback.
“Both human initiation and human feedback will be necessary at various points in knowledge processing, and we see AIs and humans as collaborating frequently throughout.” – Artificial Epistemics
As AI gets smarter, we need better ways to keep it honest. The Susty Code offers a clear path forward. By adding strong epistemic rules to the mix, Artificial Epistemics is helping build AI tools we can actually trust. If you are an AI builder looking to make your system safer, this is the missing piece you have been waiting for.