- By Siona McCallum & Jennifer Clarke
- BBC News
Artificial intelligence (AI) technology is developing rapidly and is transforming many aspects of modern life.
But some experts fear it could be used for malicious purposes and threaten jobs.
What is AI and how does it work?
AI allows computers to behave and respond as if they were humans.
Computers can be fed vast amounts of information and trained to identify patterns in it, in order to make predictions, solve problems and even learn from their own mistakes.
AI relies not only on data, but also on algorithms, a list of rules that must be followed in the correct order to complete a task.
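The idea of an algorithm as an ordered list of rules can be sketched in a few lines of code. This is purely a toy illustration: the function name and rules below are invented for this example, and real AI systems learn their rules from data rather than having them written out by hand.

```python
# Toy illustration only: an "algorithm" as an ordered list of rules.
# Real AI systems learn rules like these from data instead of being
# given them by a programmer.

def looks_like_spam(message: str) -> bool:
    rules = [
        lambda m: "free money" in m.lower(),  # suspicious phrase
        lambda m: m.count("!") > 3,           # excessive punctuation
    ]
    # Apply each rule in order; any match flags the message.
    return any(rule(message) for rule in rules)

print(looks_like_spam("FREE MONEY! Click now"))  # True
print(looks_like_spam("See you at lunch"))       # False
```

A machine-learning system would instead infer which features of a message matter by studying many examples, rather than relying on a fixed, hand-written checklist like this one.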
The technology is behind the voice-controlled virtual assistants Siri and Alexa. It lets Spotify, YouTube and BBC iPlayer suggest what you might want to play next, and helps Facebook and Twitter decide which social media posts to show you.
What are ChatGPT and Snapchat’s My AI?
Two powerful AI-driven applications, or apps, that have received a lot of attention in recent months are ChatGPT and Snapchat’s My AI.
These are examples of so-called “generative” AI.
This uses patterns and structures identified in vast amounts of source data to generate new, original, human-like content.
Both work as chatbots: computer programs that “converse” with human users through text.
The apps can answer questions, tell stories and write computer code.
However, both programs sometimes give users inaccurate answers, and can reproduce biases contained in their source material, including sexism and racism.
Why do critics fear AI could be dangerous?
Experts warn that the rapid growth of AI could be dangerous, as the rules governing how it is used are currently few and far between. Some argue that AI research should be stopped.
AI could be used to generate misinformation that could destabilize society, they argue. In the worst-case scenario, they say, machines could become too intelligent and take over, leading to the demise of humanity.
EU competition chief Margrethe Vestager says ‘guardrails’ are needed to counter the biggest risks posed by AI
She was particularly concerned about the role AI could play in making decisions that affect people’s lives, such as loan applications, adding that the possibility of AI being used to influence elections was “definitely risky”.
What are the current rules regarding AI?
Governments around the world are struggling with how to regulate AI.
MEPs in the European Parliament just voted in favor of the EU’s proposed artificial intelligence law, which would put in place a strict legal framework on AI that companies would have to follow.
The law, which is due to come into force in 2025, categorizes AI applications into levels of risk to consumers, with AI-enabled video games and spam filters in the least risky category.
High-risk systems, such as those used to assess credit scores or determine access to housing, will face the most stringent controls.
These rules do not apply in the UK, where the government announced its vision for the future of AI in March.
But Vestager says AI regulation needs to be a “global issue” and wants to build consensus among “like-minded” countries.
U.S. lawmakers have also expressed concerns about whether existing voluntary provisions are working.
Meanwhile, China plans to force companies to notify users whenever AI algorithms are used.
Which jobs are at risk because of AI?
AI has the potential to revolutionize the world of work, but it raises the question of which roles it will replace.
At the same time, however, analysts have also identified significant potential benefits for many sectors, with some predicting that AI could lead to a 7% increase in global GDP.
