2027 Policy Deadline Faces Scrutiny

Written on February 28, 2026

The world’s governments appear to be opting for a voluntary approach to regulating artificial intelligence, encouraging ethical behavior rather than imposing strict rules. This shift is evident in recent developments, including a global pledge and a growing consensus on ethical frameworks.

On February 19, 2026, at the India AI Impact Summit in New Delhi, over 250,000 citizens pledged to use artificial intelligence ethically, earning India a Guinness World Record. Prime Minister Narendra Modi unveiled the MANAV Vision, a set of five AI governance principles, and eighty-nine countries signed the Delhi Declaration. Notably, none of the declaration’s clauses are legally enforceable.

What 250,000 pledges accomplish

India’s approach contrasts with the European Union’s binding AI Act, passed in 2024. The United States, by contrast, has moved closer to India’s model: the Trump administration revoked Biden-era AI executive orders in favor of voluntary industry commitments. The result is a growing global preference for soft governance, with countries emphasizing “ethical frameworks,” “values-based approaches,” and “human-centric design.” The discussion spans Harvard courses on “Mindfulness, AI, and Ethics” and calls from Christian scholars for moral frameworks.

The Pentagon’s war

Despite the global conversation on AI ethics, a practical conflict has emerged. US Secretary of War Pete Hegseth has set fire to Anthropic’s rulebook: the Pentagon is laying siege to the company’s Constitutional AI (CAI), which uses a written set of rules to police itself. Hegseth has dismissed Anthropic’s ethical guardrails as “woke AI” and a product of an “Ivy League faculty lounge” mentality, viewing them as impediments to American success.

Then Hegseth threw his toys out of the pram

The standoff escalated on February 27, 2026, when Anthropic refused to compromise on its prohibitions against domestic mass surveillance and the use of AI in fully autonomous lethal weapons. In response, Hegseth designated Anthropic as a “supply-chain risk” to national security, effectively barring federal contractors from doing business with the company. OpenAI subsequently stepped in to supply AI for classified military networks, though its agreement still includes safety guardrails.

The gaps in South Africa’s AI regulation

South Africa occupies a unique position in this global conversation, lacking both India’s aspirational framework and the EU’s binding rules. The government confirmed that its national AI policy won’t be finalized until the 2026-2027 financial year and will be layered onto existing laws rather than enacted as standalone legislation. Currently, the Protection of Personal Information Act (POPIA) is the only South African law that addresses automated decision-making.

Real-world risks of delayed AI regulation in South Africa

Automated decision-making systems are already operating in South African financial services, telecommunications, and human resources. Individuals affected by inaccurate information or biased algorithms have limited recourse. The National AI Policy Framework proposes mandatory impact assessments, but until the policy is enacted these remain recommendations rather than statutory obligations.

Why the moral language spreads

The shift toward ethical language reflects the rapid development of AI systems outpacing legislative processes. For governments lacking regulatory capacity, moral language fills the void where law would normally go. However, accountability based on corporate ethics lacks the enforceability of law, raising questions about who defines “ethical.”

South Africa’s record must be written in policy

South Africa’s timeline for AI policy completion is around early 2027. The public comment period in March 2026 presents an opportunity for stakeholders to shape the policy and ensure it moves beyond aspirational principles toward enforceable regulation.

