LIN,SHU-LI / Assistant Professor, Ming Chuan University
September 15, 2025, 3:35 PM
Charlie Kirk, a key ally of President Trump, was murdered while giving a speech at a university, a killing that shocked the public and drew worldwide attention. The FBI immediately issued a wanted notice and released surveillance footage of the suspect. Because the footage showed only a blurry side profile, enthusiastic netizens, eager to help apprehend the killer, used AI to generate a more complete frontal image of him. Although the suspect was ultimately caught after his father reported the matter through a church pastor, how to regulate the creation and publication of AI-generated images of specific individuals has become an unavoidable new issue.
The mainstream regulatory frameworks for AI are those of the United States, the European Union, and the United Kingdom. All three bring AI systems and AI models within their scope and regulate AI use through a risk-based approach, although the scope and classification of the risks vary among them.
The United States is primarily concerned with the serious risks that "dual-use foundation models" pose to national security, economic security, and public health. Its definition lists three national-security risk scenarios, such as a model substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use nuclear, biological, or chemical agents.
The recent UK-led "AI Safety Summit" closely resembles the US administrative approach. It differentiates AI systems by their versatility and potential for harm, focusing regulatory attention on frontier AI, a category covering highly capable general-purpose AI as well as narrow AI with dangerous capabilities. After the summit, the UK announced the "Bletchley Declaration," emphasizing that the risks posed by AI are inherently international and require international cooperation to address. The declaration's signatories, roughly 30 parties in all, include the UK, the US, France, Germany, Italy, Japan, the EU, India, and mainland China, spanning members of the G7 and G20. This shows that AI regulation has gradually become a key focus of international cooperation in economic, trade, and technology policy.
As AI synthesis technology becomes increasingly prevalent, the primary risk in circulating the AI-synthesized photo of the shooter alongside the FBI's wanted notice is that the imagery could mislead the public. More broadly, the case highlights the need for AI-specific legal frameworks: future rules on synthetic images, videos, and articles will need to impose compliance standards and labeling requirements. Furthermore, the potential misuse of highly realistic AI synthesis to create sexual imagery of real, identifiable individuals poses a particularly serious challenge. It would be advisable to expedite dedicated legislation, drawing on the practices and trends of other countries; otherwise, once problems occur, they will become intractable and difficult-to-manage regulatory loopholes.