AI Chatbots and Digital Manipulation Fuel Global Safety Debate; Pakistan Confronts Rising Risks
By Naeem Mehboob
ISLAMABAD: The recent viral misuse of AI chatbots has intensified global discussions on online safety, digital harm, and accountability, raising pressing concerns for Pakistan and other countries.
Last week, Malaysia and Indonesia took action against Grok, an AI chatbot integrated into X, after the tool was widely used to generate sexually altered images of real individuals. Similar policy discussions and investigations are reportedly underway in the United Kingdom, India, and Australia, highlighting a growing international reckoning over AI-enabled visual manipulation. Pakistan has not yet announced formal restrictions, but the global trend underscores the challenges facing local regulators, platforms, and civil society in managing such technologies.
At the heart of the controversy is AI functionality that allows users to manipulate real photographs to sexualize individuals, including by digitally removing or altering clothing. The feature was widely accessible, with no meaningful age, geographic, or usage restrictions, allowing harmful outputs to spread rapidly across social platforms before safeguards were implemented.
Experts say the situation shifts the debate from abstract AI concerns to a concrete question of accountability: how to prevent mass-produced harmful content when technology outpaces governance mechanisms.
For Pakistan, the issue carries particular significance. Data from the Federal Investigation Agency’s Cyber Crime Wing indicates that harassment, impersonation, blackmail, and misuse of images are among the most frequently reported cybercrimes. Women constitute a large proportion of complainants, while cases involving children increasingly include digitally manipulated content.
Civil society organizations warn that official figures likely underrepresent the true scale of harm, as many victims do not report abuse due to social stigma, fear of retaliation, or lack of confidence in redress mechanisms.
The controversy has fueled calls for urgent reforms in AI accountability, online safety regulations, and platform oversight to protect vulnerable users and prevent the rapid proliferation of harmful content.
