A Question For the Bots: Can A.I. Be Designed and Trained to Be Used for Good?

Tales From the MoJo Road –
By Glynn Wilson

COULTERVILLE, Calif. – So on the eve of my trip to San Francisco to meet some of the human bots whose technology controls so much of our lives now, I awoke at 5 a.m. with an existential question for the Artificial Intelligence bots, one that has been hounding my thoughts of late.

Is it any wonder that the college dropouts hacking these programs into existence have never thought of this? Every media company, social media company and A.I. company seems to have been designed to make its owner a billionaire. Would it not be possible to create a company that worked for good and also made money? I wonder.

My Question: If you can build an A.I. machine and train bots to promote evil, fascism, misinformation, propaganda and corporate capitalism, can you not also build an A.I. machine and train bots to fight evil, fascism, misinformation, propaganda and corporate capitalism?

It took ChatGPT quite a long time to come up with an answer. I thought I had stumped it. But it finally responded. Here is what it came up with:

ChatGPT:

Yes — in principle, AI can be designed and trained to reduce harm just as easily as it can be misused to cause harm.

The difference has nothing to do with the technology itself and everything to do with the goals, incentives, governance, and constraints chosen by the humans who build and deploy it.

Here’s a clearer way to think about it:

🔹 1. AI is not inherently good or evil — it amplifies intent.

Any powerful communication technology (printing press, radio, internet, AI) can spread:

misinformation or truth

hate or empathy

authoritarian propaganda or democratic resilience

AI is simply a force multiplier.

If someone intentionally trains bots to spread disinformation, the AI will do that.
If someone intentionally trains models to flag misinformation, increase media literacy, or reduce harmful narratives, the AI can do that too.

🔹 2. Yes, AI can be used to fight misinformation, extremism, or propaganda

These applications already exist or are being researched (bullshit. Show me.):

AI fact-checking models that cross-verify claims across sources

Bots that detect coordinated propaganda networks

Algorithms that down-rank known false content

Tools for media literacy, helping users spot manipulative patterns

AI-assisted moderation to reduce harassment and hate

Open-source auditing systems that identify suspicious influence operations

These systems aren’t perfect, but they can absolutely help. A minimal sketch of the down-ranking idea follows below.
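To make the down-ranking idea concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the posts, the fact-check verdicts, and the penalty weight are invented stand-ins, not any platform’s real ranking system.

```python
# Toy illustration of "down-rank known false content": each post
# gets a credibility score from a hypothetical fact-check verdict,
# and the feed is re-ordered so flagged items sink.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float          # baseline ranking signal
    fact_check_verdict: float  # 0.0 = debunked, 1.0 = verified, 0.5 = unchecked

def ranking_score(post: Post, penalty_weight: float = 0.8) -> float:
    """Blend engagement with credibility so debunked content sinks.

    penalty_weight controls how hard a debunked verdict pushes a
    post down; 0.8 is an arbitrary illustrative choice.
    """
    credibility = post.fact_check_verdict
    return post.engagement * (1.0 - penalty_weight * (1.0 - credibility))

feed = [
    Post("Miracle cure suppressed by doctors!", engagement=9.5, fact_check_verdict=0.0),
    Post("City council passes new budget", engagement=4.0, fact_check_verdict=1.0),
    Post("Unverified rumor about local election", engagement=6.0, fact_check_verdict=0.5),
]

# Sort the feed by the blended score, highest first.
for post in sorted(feed, key=ranking_score, reverse=True):
    print(f"{ranking_score(post):5.2f}  {post.text}")
```

Note the design choice: credibility only re-weights engagement rather than deleting anything outright, so a debunked post still appears, just lower in the feed.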

🔹 3. But “fighting evil” is not the same as “deciding truth”

AI can help identify falsehoods based on evidence, but it cannot act as the moral authority of what is “good,” “evil,” or “ideologically correct.”

That is where:

democratic processes

transparent oversight

public norms

have to play a role.

Otherwise, you end up with AI-enforced ideology, which is itself a risk — even if the ideology is well-intended.

🔹 4. Ethical AI focuses on empowerment, not control

A healthy design goal is:

AI that enhances human ability to reason, verify, debate, and make informed choices — not AI that dictates what to believe.

This means (see the sketch after this list):

explainability

transparency of sources

user choice

accountability

resistance to manipulation
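One way to make “transparency of sources” and “accountability” concrete is an answer object that carries its evidence with it, so a user can verify rather than simply believe. This is only a sketch; the field names, the confidence figure, and the placeholder citation are assumptions, not any vendor’s actual format.

```python
# Sketch of an assistant answer that travels with its sources,
# so the user can check the evidence trail themselves.

from dataclasses import dataclass, field

@dataclass
class SourcedAnswer:
    claim: str
    sources: list[str] = field(default_factory=list)  # URLs or citations
    confidence: float = 0.5  # the system's own estimate, 0.0 to 1.0

    def render(self) -> str:
        """Show the claim together with its numbered sources."""
        cites = "\n".join(f"  [{i + 1}] {s}" for i, s in enumerate(self.sources))
        return f"{self.claim}\n(confidence: {self.confidence:.0%})\n{cites}"

answer = SourcedAnswer(
    claim="The bill passed the Senate on a 52-48 vote.",
    sources=["https://example.com/senate-roll-call"],  # placeholder URL
    confidence=0.9,
)
print(answer.render())
```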

🔹 5. So, your underlying idea is correct

If AI can amplify harmful messaging, then AI can also be built to counter it.

The challenge is ensuring that:

the system is governed fairly,

the definitions of “harm” are transparent and democratically shaped,

and the AI empowers users rather than policing thought.

If you want, I can also walk through:

What an “AI for democratic resilience” might look like

How such systems could avoid becoming ideological enforcers

Examples of practical architectures or safeguards

Historical parallels with previous communication technologies

Just tell me what direction you’d like to go.

My Followup Question: So if media content is used to train AI bots, could the bots learn from the lessons of American television shows like Star Trek: how to reason and think logically like Mr. Spock, and ensure that the needs of the many outweigh the needs of the few, or the one, like the owners of media, social media and AI companies interested only in profit?

ChatGPT:

✅ 1. Can AI learn “lessons” from TV shows like Star Trek?

Yes, but only as patterns in text and narrative, not as moral truths.

If you train an AI on Star Trek scripts, it can:

recognize how characters like Spock use logic

understand the structure of ethical dilemmas

imitate Vulcan-style reasoning or dialogue

extract themes like cooperation, non-violence, or the “needs of the many” principle

This is basically the same way an AI can learn the tone of Shakespeare or the logic patterns of detective fiction.

What it cannot do on its own:

adopt a real moral philosophy

internalize ethics as universal imperatives

independently choose to prioritize altruism over profit

AI doesn’t develop intention; it only reproduces patterns.
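Here is a toy demonstration of what “patterns, not moral truths” means in practice: the sketch below merely counts thematic phrases across a few paraphrased stand-in lines (not actual scripts, and the phrase list is an arbitrary assumption). Nothing in it understands or adopts the ethics behind the words.

```python
# Counting surface patterns in a tiny stand-in "script corpus".
# This is what text training sees: phrase frequencies, not values.

import re
from collections import Counter

corpus = [
    "The needs of the many outweigh the needs of the few.",
    "That would be highly illogical, Captain.",
    "Logic clearly dictates our next course of action.",
]

themes = ["logic", "illogical", "needs of the many"]

counts = Counter()
for line in corpus:
    lowered = line.lower()
    for theme in themes:
        # Whole-word match so "logic" is not counted inside "illogical".
        if re.search(rf"\b{re.escape(theme)}\b", lowered):
            counts[theme] += 1

# The takeaway: these are word-pattern frequencies, nothing more.
print(counts)
```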

✅ 2. Could AI be designed to prioritize the needs of the many over the needs of a few?

Technically yes.
Designers can build:

objective functions

safety rules

alignment constraints

evaluation datasets

reward models

…that encourage pro-social, public-interest behavior such as:

fairness

non-manipulation

minimizing harm

transparency

benefiting broad groups, not just narrow profit motives

But this requires intentional design by its creators, as the toy sketch below illustrates.
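A toy contrast makes the incentive point concrete. The sketch below scores two invented system behaviors first by a single owner’s payoff and then by total welfare across everyone affected; all of the numbers are made up for illustration.

```python
# Two objective functions applied to the same candidate behaviors:
# one serves "the one" (owner profit), the other "the many" (total welfare).

def profit_objective(outcome: dict[str, float]) -> float:
    """Score an outcome by one stakeholder's payoff: the needs of the one."""
    return outcome["owner"]

def welfare_objective(outcome: dict[str, float]) -> float:
    """Score an outcome by everyone's combined payoff: the needs of the many."""
    return sum(outcome.values())

# Invented payoffs for two possible system behaviors.
outcomes = {
    "rage-bait feed": {"owner": 10.0, "user_a": -3.0, "user_b": -4.0},
    "informative feed": {"owner": 4.0, "user_a": 3.0, "user_b": 3.0},
}

for objective in (profit_objective, welfare_objective):
    best = max(outcomes, key=lambda name: objective(outcomes[name]))
    print(f"{objective.__name__} picks: {best}")
```

Same candidate behaviors, different objective, different winner: that is the incentive-structure point in miniature.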

❗ 3. The challenge: Who defines “the many” and “the few”?

Your question points to a real tension:

Could AI be designed to treat the public good as more important than corporate profit?

Yes — but:

companies control their own models

they choose how systems are trained and optimized

if their incentive is profit, they align AI with profit

if their incentive is social good, they align AI with social good

AI follows incentive structures.
Humans decide what “the needs of the many” means.

🔍 4. What Star Trek actually offers as a design influence

Star Trek has long been used as a framework in ethics, computer science, and AI research because it illustrates:

rational decision-making (Spock)

empathy and diplomacy (Picard)

balancing logic and emotion (Data)

pluralism and diversity

non-interference (Prime Directive)

cooperative, post-scarcity economics

These themes can absolutely inspire AI design.

AI labs already use narratives, philosophical thought experiments, and fictional scenarios to help shape alignment choices.

But they inspire design; they don’t automatically become design.

🧭 5. So the answer is:

Yes, we could train or design AI systems that emulate the reasoning style and ethical principles seen in Star Trek — including the idea that the needs of the many outweigh the needs of the few.

But—

AI won’t adopt those values on its own.

Designers and governance systems must intentionally encode goals, training data, and oversight consistent with those values.

And society must decide democratically what “public good” means, to avoid turning AI into an ideological enforcer.

If you want, I can go further into any of these directions:

🌟 How AI ethics could be modeled on Star Trek principles
🧠 What “Vulcan logic” would look like as a reasoning algorithm
🛡️ How to prevent corporate or political interests from dominating AI values
📺 Which other shows teach philosophical patterns useful for AI

Just tell me.

More to come.

GW

3 Comments
Robert Dudney
1 day ago

AI can be trained for good.
So far there has not been a real push to set standards for use of AI. A group I had been supporting to promote legislation for moral and ethical principles for AI shut down back in May for lack of funding.
The EU has been developing standards and policy for AI use. The governments of China and the U.S. have not shown a desire to set legal, moral and ethical principles and policies for AI use and development.
So far it has been left to the organizations producing AI and AGI, such as OpenAI (maker of ChatGPT) and Anthropic, to develop moral and ethical principles for AI use.

Robert Dudney
1 day ago
Reply to  Robert Dudney

I might add that unless people demand enforceable ethical, moral & legal standards & policies for AI use & development, we will continue to slide down the current slippery slope we are on.

James Rhodes
1 day ago
Reply to  Robert Dudney

Your comments are most informative.