Navigating the AI legal landscape



A silhouetted figure stands in front of projected computer code. Generative artificial intelligence is built on similar software and is increasingly a priority for lawmakers. Governments around the world have been intensely debating how to oversee its development.
“Wall Code”/Nat W./IndustryToday/CC BY-SA 2.0


California lawmakers have made significant strides in the past year to regulate generative artificial intelligence, setting a high bar for global efforts to govern the rapidly advancing technology.

In late 2025, Governor Gavin Newsom signed the Frontier Artificial Intelligence Transparency Act (SB 53), a pioneering state law that requires AI companies to disclose their safety protocols and risk mitigations. The law also established a system for users to report safety concerns.

This is just one of a long list of AI-targeted laws passed in California, including requirements that popular AI systems provide tools to help users detect and identify AI-generated content.

Jaydee Sun, a computer science teacher at Carlmont High School, said this is an incomplete but important step in the process of integrating AI into daily life.

“It’s difficult because lawmakers have biases like everyone else. Sometimes things are done for profit rather than improvement,” Sun said.

Public opinion has generally held that California’s laws take sufficient account of the fact that Silicon Valley is home to many of the technology’s leading developers, but some have also raised concerns that additional laws could limit those companies’ competitiveness.

“I think it’s worth having laws and policies that prevent people from using generative AI to harm others,” said Carlmont sophomore Melinda Nelson.

On the global stage, California is a leading advocate of regulation, but governments around the world are making their own efforts.

In South Korea, for example, lawmakers enacted an AI Basic Law that went into effect in January 2026, making the country one of the first to introduce a comprehensive legal framework for AI. The law’s core requirements are that humans supervise the use of AI in areas such as healthcare, transportation, and finance, and that AI-generated content be labeled.

Unlike the detailed, sector-specific laws found in California, South Korea is developing a more unified legal framework. Supporting legislation is expected to reinforce the overall direction set by the government.

Chenxi Lin, a Carlmont senior, offered a different view on placing strict restrictions on AI companies.

“It is not realistic to regulate the use of generative AI. Regulation should instead be something placed on organizations and platforms. The development of generative AI, though, may require some regulation,” Lin said.

California’s recent legislation reflects that distinction, focusing on overseeing advanced AI companies rather than policing how consumers use the tools.

Lawmakers in Indonesia are taking a different approach to the misuse of AI.

In January 2026, Indonesian authorities temporarily blocked access to Grok, xAI’s chatbot, after it was used to create sexually explicit images in ways that circumvented national laws against obscene content.

This struggle reflects the balancing act that governments around the world face: how to protect privacy and safety without hindering innovation, and how to ensure accountability for a technology that makes it easy to generate realistic content with little oversight.

Lin pointed out that even though the technology can be exploited, it remains a useful tool.

“It was extremely helpful for proofreading, giving writing feedback, and generally serving as a beta reader,” Lin said.

Everyday applications like this help explain why regulation is focused on overseeing AI development rather than trying to restrict individual users.

Lawmakers must keep this in mind as they continue to grapple with rapidly evolving technologies that are becoming part of the lives of millions of users.


