
Opinion · February 26, 2026

AI and Society: The Political and Ethical Issues That Divide

From controversial donations to algorithmic biases, analysis of tensions between the AI industry and citizen concerns.

Introduction

A viral post circulates: a user announces canceling their ChatGPT subscription after discovering a $25 million donation from an OpenAI executive to a political PAC. Beyond this particular case, this reaction illustrates a growing tension: AI is no longer just a technological issue, it's becoming a major political and social concern.

The Inevitable Politicization of AI

Every transformative technology eventually becomes political. AI is no exception.

Platform Power

Companies developing the most powerful AI models exercise considerable influence. They decide what AI can and cannot say, what images it can generate, what information it prioritizes. These decisions, presented as technical, are fundamentally political.

Links to Power

AI company executives move in circles of power, fund campaigns, and take part in regulatory discussions. This proximity raises legitimate questions about the independence of these technologies from political interests.

The Geopolitics of AI

The race for AI is also a race between nations. The United States, China, and Europe are developing distinct strategies. Chip sanctions, model export restrictions, and divergent regulations create a fragmented landscape where technology becomes an instrument of power.

Biases: Reflection of Our Societies

AI models absorb and amplify biases present in their training data.

Representation Bias

Who is visible in search results, in generated images, in the examples a model offers? Analyses show systematic overrepresentation of certain groups and the erasure of others.

Treatment Bias

Beyond representation, models may treat requests differently depending on perceived context. Studies have documented differences in response quality based on names, languages, or implied origins.

The Difficulty of Correction

Correcting these biases is complex. Surface-level interventions can create other problems, while deep corrections require considerable resources. And who decides what an "unbiased" result is? That question is itself political.

The Question of Responsibility

When AI causes harm, who is responsible?

Dilution of Responsibility

Between model developers, data providers, integrators, and end users, the chain of responsibility is unclear. This complexity often benefits those who could be held accountable.

Legal Precedents

Courts are beginning to rule on cases involving AI: defamation by a chatbot, discrimination by a recruitment algorithm, AI-assisted medical errors. Each decision creates precedents that will shape the future legal framework.

The Call for Regulation

Faced with these uncertainties, voices are calling for stricter regulation. The European AI Act represents an ambitious attempt to frame high-risk uses. Other jurisdictions are watching and drawing inspiration from this approach.

Consumer Choice

Faced with these issues, what can an individual user do?

Voting with the Wallet

Canceling a subscription, choosing an alternative provider, favoring open-source solutions: these individual choices, in aggregate, send signals to companies. But their real impact remains limited given market concentration.

Demanding Transparency

Users can demand more transparency about AI company practices: training data composition, moderation policies, and financial ties. This collective pressure can change practices.

Limits of Individual Action

However, systemic problems are not solved by aggregating individual choices. Collective regulation remains necessary to establish fair ground rules.

Emerging Alternatives

In the face of dominant players, alternatives are emerging.

Open Source AI

Models like Mistral and LLaMA, along with Hugging Face community projects, offer alternatives to proprietary solutions. More transparent and more customizable, they give users greater control.

Data Cooperatives

Initiatives are exploring alternative data governance models, where contributors have a say in how their data is used.

Local AI

Running models on your own hardware strengthens privacy and independence from cloud providers. Advances in optimization, notably quantization, make this option increasingly viable.
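To give a rough sense of why optimization matters here: a model's weight memory scales with its parameter count times the bits stored per weight, so quantizing from 16-bit to 4-bit weights cuts the footprint by roughly 4x. The sketch below is back-of-the-envelope arithmetic with illustrative numbers (a hypothetical 7-billion-parameter model), ignoring activation memory and other runtime overhead:

```python
def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate memory needed to hold model weights, in gigabytes.

    Illustrative arithmetic only: ignores activation memory, KV cache,
    and runtime overhead, and does not describe any specific model.
    """
    bytes_total = n_params * bits_per_weight / 8  # bits -> bytes
    return bytes_total / 1e9                      # bytes -> GB

# A hypothetical 7-billion-parameter model:
fp16 = weight_memory_gb(7e9, 16)  # 16-bit weights -> 14.0 GB
q4 = weight_memory_gb(7e9, 4)     # 4-bit quantized -> 3.5 GB
print(f"16-bit: {fp16:.1f} GB, 4-bit: {q4:.1f} GB")
```

That difference is what moves such a model from data-center territory into the memory range of a consumer GPU or laptop.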

The Necessary Democratic Debate

These questions cannot remain solely in the hands of technologists and entrepreneurs.

Citizen Expertise

Citizen participation initiatives on AI issues are emerging: citizens' assemblies, public consultations, collective deliberations. These formats make it possible to integrate diverse perspectives into decisions.

Public Education

Understanding the basics of how AI works, its capabilities and limitations, is becoming a civic skill. Without this understanding, public debate risks being dominated by experts and special interests.

The Role of Media

Quality technology journalism, capable of investigating industry practices and explaining issues to the general public, is essential for informed debate.

Toward More Democratic AI?

The future of AI is not written. It depends on the collective choices we make.

Possible Scenarios

Between extreme concentration and radical democratization, between strict regulation and laissez-faire, many futures are possible. Decisions made in the coming years will be decisive.

Principles to Defend

Certain principles deserve to be defended: transparency of systems, accountability of actors, fair sharing of benefits, protection against harm. These principles can guide technical and political choices.

Collective Action

Significant changes come through collective action: citizen mobilization, pressure on elected officials, support for alternatives, participation in debates. AI is too important to be left to technologists alone.

Conclusion

The anecdote of canceling a ChatGPT subscription for political reasons may seem trivial. Yet it reveals a growing awareness: AI is not neutral, and our technological choices are also political choices.

This politicization is healthy. It means society is taking hold of a subject that directly concerns it. Current tensions, if managed democratically, can lead to fairer, more transparent, more responsible AI.

The challenge is to keep this debate open and informed, to resist the temptation of technocracy as much as technophobic rejection. Between blind enthusiasm and irrational fear, there is a path: that of critical and constructive engagement.

AI will be what we collectively make of it. It's up to us to choose.

AI · ethics · politics · society · bias · regulation · OpenAI · responsibility
