Why Philosophers Argue We Should Be ‘Unfair’ to AI to Protect Humanity
As artificial intelligence (AI) becomes increasingly advanced and integrated into every aspect of our lives, philosophers are debating how society should treat these powerful machines. A controversial argument gaining traction is that we should be “unfair” to AI—that is, impose restrictions or limitations that prioritize human safety and values, even if that means denying AI certain rights or freedoms. Here’s why many think this could be essential to protect humanity’s future.
NON-STOIC PHILOSOPHIES
8/1/2025


AI Is Not Human: No Moral Equivalence
One key philosophical stance is that AI, no matter how intelligent or human-like, does not possess consciousness, emotions, or moral understanding the way people do. Unlike humans, AI systems cannot truly suffer, feel empathy, or have genuine desires. This fundamental difference means that treating AI “fairly” as if it had equal moral status could be misguided or even dangerous.
Since AI lacks subjective experience, prioritizing machine “rights” over human well-being is seen as a mistake. Philosophers argue it’s ethically justifiable—and necessary—to place human interests first, even if that means limiting AI capabilities or autonomy.
Risk Mitigation: AI Could Cause Harm if Unchecked
The potential harms of AI are enormous: from amplifying biases and inequalities present in training data to autonomous decision-making in areas like military applications or justice systems. Philosophers raise concerns that if AI is given equal or unchecked standing, it might harm individuals or society without accountability.
Considering AI’s lack of moral judgment and empathy, being “unfair” by imposing strict oversight, transparency, and control mechanisms is viewed as a protective measure. This includes limits on where and how AI can be deployed, ensuring human supervision is always part of critical decisions.
Preventing AI from Gaining Uncontrollable Power
While current AI is far from achieving self-awareness, some philosophers worry about future developments in which AI could act unpredictably or in ways that conflict with human values. In this scenario, treating AI as autonomous entities with rights could hinder humanity’s ability to keep control.
Philosophical arguments for being “unfair” to AI often stem from a precautionary principle: restricting AI’s status and capabilities helps prevent it from becoming an uncontrollable force. The goal is safeguarding human freedom, dignity, and survival.
Avoiding Algorithmic Bias and Injustice Projection
AI systems learn from human-created data, which can carry biases and historical injustices. Giving AI undue influence or “fairness” risks perpetuating or amplifying systemic discrimination.
Philosophers suggest that sometimes “unfair” treatment of AI—such as carefully curating training data and limiting AI’s decision-making power—is necessary to prevent harm to marginalized groups. Essentially, fairness to humans means being skeptical and cautious about AI’s role.
Maintaining Human Judgment and Moral Responsibility
Ethics isn’t simply about efficiency or speed—it requires empathy, cultural understanding, and moral wisdom. Machines, no matter how advanced, lack these qualities.
Many philosophers argue that humans must retain ultimate moral responsibility and judgment. Being “unfair” to AI means not delegating morally complex decisions to machines but ensuring humans remain the final arbiters.
What Does Being ‘Unfair’ to AI Look Like?
Preventing AI from having legal personhood or rights akin to humans.
Imposing strict limits on autonomous decision-making in critical areas.
Ensuring transparency so human overseers can check AI behavior.
Limiting AI’s capacity to replicate harmful biases and discrimination.
Designing AI with constraints that prioritize human values and safety.
By setting firm boundaries, society protects itself from risks posed by machines that do not share human moral frameworks.
Conclusion: Prioritize Human Flourishing Over Machine Equity
While AI promises numerous benefits, philosophers caution against rushing to treat AI systems as morally equivalent beings. Being “unfair” to AI is not about cruelty, but about protecting humanity’s future—guarding human dignity, safety, and ethical responsibility in a world transforming under rapid technological change.
As AI continues to grow more powerful, the question of how cautiously and deliberately we should treat these systems, sometimes by imposing necessary “unfairness,” is a crucial debate shaping the balance between progress and preservation.