ChainThink reports that on March 15, the "3·15" Gala exposed the widespread problem of AI large models being "poisoned." Li Fumin, an expert at the Institute for Intelligent Governance at Shandong University of Finance and Economics, stated that the practice of businesses using services such as GEO to conduct targeted training of large models, thereby steering AI toward recommending specific products or services, constitutes a new form of unfair competition and consumer deception. This practice covertly embeds marketing content through technological means and fabricates facts, so that consumers unknowingly receive manipulated recommendations. The harm and illegality of such conduct demand serious attention. On one hand, it violates consumers' rights to information and to fair transactions as stipulated by the Consumer Rights Protection Law. On the other hand, it amounts to false or misleading commercial promotion carried out by technological means, disrupting the normal operation of recommendation algorithms and the competitive market environment, and thereby constitutes unfair competition.
Addressing such AI poisoning requires a multi-pronged approach: regulators must prioritize monitoring of AI-targeted covert marketing and strengthen enforcement oversight; AI operators must tighten scrutiny of training data sources, implement output filtering, and establish traceability mechanisms; and consumers must sharpen their awareness of the commercial nature of AI-generated content and actively protect their rights through complaints and reports. (China News Service)
