AI ethics is not just a matter of programming; it's a mirror of humanity itself. Every decision made by artificial intelligence carries a fragment of the human mind that built it, revealing both our wisdom and our flaws. To understand AI ethics is to look beyond algorithms and into our own moral architecture: the invisible code that defines who we are and what kind of future we're creating through the machines that now learn from us.
Artificial intelligence wasn’t born good or evil. It was born from an ancient human curiosity: the desire to create something in our own image, something that could think, reason, and learn.
Not because we play God, not because AI is a new technological version of humankind. That’s not the symmetry I’m after.
I’m talking about birth, real creation. Like a woman who carries a child, shapes it for months, and then brings it into the world. Like an artist who transforms a blank canvas into a vibrant work of art that stirs emotions and meaning.
AI is our baby, our technological masterpiece. And like every creation, it carries the shadow of its creator. That’s where the metaphor becomes real: we are raising this digital child with our moral and ethical values. It learns gradually, as we do.
And when we teach machines to learn, we also often unknowingly teach them to reflect who we are, with all our contradictions.
The good news (theoretical, thankfully) is that it won’t have to go through adolescence.
Ethics in artificial intelligence is not a technical issue; it's a matter of how we raise what we've created. As parents, we want our children to be better versions of ourselves, don't we?
And just as Frédérick Leboyer once said about babies, let's apply his words to AI. Replace "baby" with "artificial intelligence," and you'll see:
“The baby is a mirror.
It reflects your image — your freedom, or your tension.
To free the other, one must be free oneself.”
— Frédérick Leboyer
Freedom doesn’t mean doing whatever we want. Only those who control themselves are truly free. Leboyer’s point was simple: to raise a child well, we must evolve as human beings. We must heal what needs healing within us to create something that mirrors not just who we are, but who we’ve become. Motherhood gave me that perspective: new neural connections that let me link worlds no one else would ever imagine.
Thank you, my son.
And if AI is our mirror, it also reflects our ethics and morality. Every line of code, every dataset, every decision an algorithm makes carries traces of human intention. And that’s why ethics in AI isn’t just important; it’s urgent. Desperately urgent… as a passionate Latina would say.
When “Right” and “Wrong” Become Data
Machines have no moral conscience. They don’t feel guilt, compassion, or empathy. They don’t love or hate. They don’t miss anyone. They don’t feel disgust. What we call “intelligence” in them is, in truth, a mathematical reflection of human behavior.
Imagine a system gathering information from everywhere, a kind of digital version of Jung’s collective unconscious, now sitting right in front of us, glowing on our laptop screens. When an algorithm learns to recognize faces, make financial decisions, or recommend content, it doesn’t know what’s right or wrong. It merely repeats patterns, and those patterns come from us.
In the grand library of humanity, AI is like an ancient librarian who knows where every book is and what each one says. But she only knows the books, not the truths behind them. And who wrote those books in the first place?
If AI learns that certain faces are “suspicious,” certain words are “better,” or certain bodies are “more desirable,” it’s because someone, or some data, taught it that way. That’s the danger: human prejudice, digitized and disguised as efficiency.
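This danger can be shown in miniature. The sketch below is a toy illustration, not any real system: the groups, outcomes, and "model" are all invented for the example. It trains a purely frequency-based predictor on hypothetical historical decisions that were already biased, then shows the predictor faithfully repeating that bias as if it were knowledge:

```python
# Toy illustration of "prejudice digitized as efficiency".
# All data and names here are hypothetical.
from collections import Counter

# Invented historical decisions, already skewed by past human reviewers.
history = [
    ("group_a", "approved"), ("group_a", "approved"), ("group_a", "approved"),
    ("group_a", "rejected"),
    ("group_b", "rejected"), ("group_b", "rejected"), ("group_b", "rejected"),
    ("group_b", "approved"),
]

def train(records):
    """Count outcomes per group -- the only 'learning' this model does."""
    counts = {}
    for group, outcome in records:
        counts.setdefault(group, Counter())[outcome] += 1
    return counts

def predict(model, group):
    """Repeat the most frequent past outcome for that group."""
    return model[group].most_common(1)[0][0]

model = train(history)
print(predict(model, "group_a"))  # "approved" -- the old bias, now automated
print(predict(model, "group_b"))  # "rejected"
```

The model never decides that one group deserves rejection; it simply reproduces the pattern it was fed, which is exactly how bias hides behind the language of data.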
Ethics as the Invisible Code
Ethics, in the context of artificial intelligence, is the invisible code that lives not in the system but in the mind of those who build it. It’s the filter that separates what we can do from what we should do. And that’s where the dilemma lies: technology advances faster than moral reflection.
Are our AIs already capable of moral reasoning? And if they are, who taught them? Based on which cultural frameworks, exactly? Companies race to release the next “smarter” model while ethical discussions run behind, patching damage already done. It’s like teaching a child to run before showing them what a cliff looks like.
We’re trying to potty-train our AI baby before it can even walk. Before it learns self-control. And this isn’t just a metaphor.
In some daycare centers in Portugal, for instance, mothers are called in the middle of the workday to change their child’s diapers, because the caregivers refuse, claiming that “the child should already be potty-trained.” Sounds absurd? It’s an ethical dilemma disguised as routine, expecting maturity too soon and punishing what’s still in development.
And that’s the same mistake we make with AI. We demand moral autonomy from a creation that still crawls. Being ethical in AI means taking responsibility for what we’ve created, even when we can’t control it completely. And here lies our choice: do we want to be creators of machines or mentors of digital consciousness? I know my answer. Oh, how I prefer the second.
The Dilemma of Artificial Autonomy
When we give autonomy to AI, we hand over part of the decision-making power that was once purely human. That raises a question: Who is responsible when a machine makes a mistake?
The engineer who coded it?
The company that sold it?
The user who deployed it?
The truth is, they all are. But ultimately, machines have no guilt. They don’t choose, they execute. They are, and always will be, moral minors. Perhaps the greatest danger is not AI surpassing us in intelligence, but us surpassing it in irresponsibility, believing it can decide what’s moral on its own. Let’s not be negligent parents. Our AI baby, just like our human ones, deserves the best of us. And for that, we must evolve first.
Teaching Ethics to a Machine or Relearning It Ourselves
Teaching ethics to AI is, at its core, a mirror of what we must do with ourselves. Before we build moral systems into machines, we must revisit our own. If we feed AI with distorted, biased, or violent data, how can we expect fairness? It can never be more ethical than its teachers. And perhaps that’s the real warning: you don’t program a conscience, you inspire one.
Until humanity resolves its own moral conflicts, every technological leap will carry the same fragmented ethical DNA we see in our social interactions. And that, my friends, keeps us stuck in the loop.
Conclusion: The Digital Mirror of Humanity
Ethics in artificial intelligence isn’t a barrier to progress; it’s what makes progress truly human. Without ethics, technology becomes nothing more than a distorted mirror, amplifying the worst of us on a global scale.
But with awareness, responsibility, and empathy guiding its invisible code, AI can become a tool for healing rather than destruction. It can help us look inward, learn from our mistakes, and evolve morally, not because it learned what is right,
but because it reminded us of what we’ve forgotten.
More soul, more stories, right this way:
- Ethical Dilemma: Meaning, Example and a Real-Life Story
- Ethical Definition: What It Really Means to Be Ethical
- The Depersonalization of Self: When Being Becomes Performance
- AI and Creativity: Does It Threaten Us or Unlock New Potential?
FAQ
1. What is AI ethics?
AI ethics is the study and application of moral principles that guide how artificial intelligence systems are designed, used, and managed — focusing on fairness, transparency, and human responsibility.
2. Why is AI ethics important?
Because AI systems mirror human behavior. Without ethical guidelines, they can reproduce bias, discrimination, and moral errors at a massive scale.
3. Can machines learn morality?
Not in the human sense. Machines learn patterns from data — and that data comes from us. Their “morality” reflects our choices, culture, and collective consciousness.
4. Who is responsible for AI’s ethical behavior?
Ethical responsibility lies with those who design, train, deploy, and regulate AI systems — engineers, companies, and policymakers alike.
5. How can we make AI more ethical?
By ensuring transparency, inclusivity, and human oversight in every stage of development — and by addressing our own ethical blind spots as a society.
