
The Ethics AI Thinks It Has And What It’s Still Missing

  • Writer: Rabia Malik
  • Jul 21
  • 3 min read

Why Designers Must Lead the Moral Blueprint of Technology



The Simulation of Safety

AI is learning how to sound ethical. But is it learning what ethics actually means?

Tools like ChatGPT, Claude, Gemini, and other generative AI systems are showing up in how we write, work, build, and communicate. They're helpful, efficient, even comforting at times.

But let’s be honest: these systems aren’t making moral decisions. They’re following patterns, optimizing outputs, avoiding risk. They simulate care, but they don’t understand it.

Most AI doesn’t have ethics. It has instructions. And that’s not the same thing.

What AI Actually Knows About Ethics

AI systems today are trained to:

  • Avoid dangerous content (hate, violence, abuse)

  • Respond safely based on human feedback

  • Refuse risky topics using hardcoded rules

These guardrails matter. But they are not moral reasoning. They don’t reflect; they react.
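To see how shallow such guardrails can be, consider a deliberately toy sketch of a hardcoded refusal rule. Everything here (the blocklist, the function name, the responses) is invented for illustration and does not represent any real system:

```python
# A deliberately simplified sketch of a hardcoded "safety" rule.
# The blocklist and function name are hypothetical illustrations.

BLOCKED_TOPICS = {"violence", "self-harm", "abuse"}

def guardrail_check(prompt: str) -> str:
    """React to surface patterns; no reflection, no weighing of values."""
    lowered = prompt.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            # Refusal by rule, not by reasoning: the keyword alone decides.
            return "I can't help with that."
    return "Proceeding with a response..."

# The rule reacts to keywords even when context changes everything:
print(guardrail_check("How do I report abuse I witnessed?"))  # refused anyway
print(guardrail_check("Write a bedtime story"))               # allowed
```

The first prompt comes from someone trying to help, yet the rule refuses it: the system avoided a keyword, not a harm. That is compliance, not ethics.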

AI can’t weigh conflicting values. It doesn’t pause to consider: “Should I say anything at all?”

Ethics is not the same as compliance. Compliance avoids harm. Ethics considers impact.

The Design Gap

This is the part that gets missed.

Design is where ethics actually shows up. Not in policy. Not in code. In the moment a user interacts with a system.

Design is the interface where values become real. It's the space between intent and consequence.

When someone uses your tool, the question isn’t “Is this allowed?” It’s: “Does this feel right?” “Does this system recognize me?” “Will I be protected or made invisible?”

These aren’t backend questions. They are design questions. And they are ethical questions.

My Lived Lens

I’ve spent over 15 years designing across fintech, femtech, healthtech, education, sports tech, gaming, and emerging tech.

I’ve built tools for postpartum parents, cancer survivors, kids learning about puberty, and professionals navigating high-stakes systems.

And every time, I’ve seen one truth repeat:

Ethics doesn’t live in a checkbox. It lives in how a product makes someone feel.

That feeling is our responsibility as designers. And it’s what AI still doesn’t understand.

What Ethical Design Actually Looks Like

This work is real. It’s happening now. It looks like:

  • Designing how an AI bot responds to someone experiencing pregnancy loss: not with canned language, but with clarity, empathy, and support options.

  • Ensuring facial recognition systems don’t disproportionately flag Black or brown users as threats due to biased training data and flawed assumptions.

  • Building in moments of restraint, where an agent asks, “Do you want help right now?” instead of assuming intervention is always the right move.
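That last pattern, restraint, can be sketched as a simple consent gate in an agent's response logic. This is a hypothetical sketch of the design pattern, not any product's implementation; the names, the upstream distress signal, and the response strings are all assumptions made for illustration:

```python
# A minimal sketch of "restraint by design": ask before intervening.
# All names are hypothetical; this shows a pattern, not a product.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentTurn:
    user_message: str
    distress_detected: bool  # assumed to come from an upstream signal

def respond(turn: AgentTurn, user_consented_to_help: Optional[bool]) -> str:
    if turn.distress_detected and user_consented_to_help is None:
        # Pause instead of assuming intervention is the right move.
        return "Do you want help right now, or would you rather just talk?"
    if turn.distress_detected and user_consented_to_help:
        return "Here are some support options..."
    return "I'm listening."

turn = AgentTurn("I've been struggling lately.", distress_detected=True)
print(respond(turn, user_consented_to_help=None))   # asks first
print(respond(turn, user_consented_to_help=True))   # offers support only after consent
```

The point of the gate is the `None` branch: the system treats "I don't know what this person wants yet" as a reason to ask, not a reason to act.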

The goal isn’t to make AI nicer. The goal is to make it more human-aware.

What I'm Building

That’s why I created Inov8er: to help shape ethical frameworks for AI tools that support, not harm.

Through Inov8er, I work with:

  • Startups building early-stage AI products

  • Researchers creating agent-based tools

  • Enterprise teams scaling products that impact real lives

I help teams ask better questions, design better rules, and build trust into their systems from the start.

Ethics is not a roadblock. It's a roadmap.

Let’s Build Better

This is just the beginning for me, and maybe for you too.

So let me ask:

What would make an AI system feel ethical to you? Not sound good. Not act polite. But truly feel right?

If you’re building, thinking, or questioning, I want to talk. If you want to write better rules, I want to help.

Let’s make AI better.


Rabia Malik
AI + Design Ethics Strategist | UX/UI | MIT Innovative Presenter | Patent Owner | Public Speaker
Founder, Inov8er
📩 rabia@inov8er.com


