Put simply, Google’s AI-powered chatbot, Bard, is not available to internet users in the EU or Canada.
CEO Sundar Pichai announced at this year’s Google I/O conference that Bard has expanded its reach to 180 countries and now supports three languages: English, Korean, and Japanese. However, the list of countries and regions where the software can be used does not include the European Union or Canada.
Chatbots like Bard are new, and their impact is still being studied as governments around the world step up efforts to regulate this technology. Authorities in Italy, Spain, France, Germany, and Canada have launched investigations into ChatGPT, citing data privacy concerns.
Experts are concerned the tools could regurgitate personal information leaked onto the internet, and it’s not clear how the data they process is used and stored. Such chatbots may in fact be violating the EU’s GDPR.
Italy has since lifted its ban, but the EU’s upcoming AI Act suggests that companies developing chatbots like Bard will have to disclose when output is AI-generated, will be barred from using illegal or copyrighted content, and may face regulations mandating filters to prevent such material from being generated.
Google, meanwhile, says it is working on widening access. “We plan to gradually expand to more countries and regions in a way that is consistent with local regulations and our AI principles,” the Chocolate Factory confirmed.
Deepfakes on the Dark Web
Demand is growing among criminals for realistic AI deepfake videos, used to steal money through cryptocurrency scams or to defeat online biometric verification tools.
Kaspersky, a cybersecurity firm, trawled darknet forums for posts seeking developers who generate deepfakes mimicking celebrities, politicians, and others, and found the work being offered for a fee. Its analysts found that criminals typically commission fake content to promote cryptocurrency scams, to break into people’s online accounts, or, of course, for pornographic purposes.
“Our research shows that there is substantial demand for deepfakes, far outstripping supply,” the firm said. “And this is very alarming because, as we all know, demand creates supply. So in the near future, we expect a significant increase in incidents involving high-quality, real-world deepfakes.”
However, the security vendor also said it was optimistic that tools to detect if a video is genuine would eventually become widely available to combat fraud, identity theft and disinformation.
“The most obvious but depressing piece of advice is simply ‘never trust your eyes or your ears again’. But there is hope: the same artificial intelligence techniques that help create deepfakes can also distinguish real videos, photos, and audio from fakes, and such tools are slowly appearing on the market. Let’s hope that in the near future the press, messaging apps, and perhaps browsers will carry such technology as well.”
Claude’s context window expands
Anthropic’s large language model, Claude, can now handle up to 100,000 tokens, meaning users can submit hundreds of pages of documents for analysis at once.
A model’s context window is the amount of input, measured in tokens (chunks of characters such as words or word fragments), that it can take in when producing its output. A larger context window lets a model handle bigger volumes of text and perform more complex tasks, such as searching and summarizing long documents.
“The average person can read 100,000 tokens of text in around five hours, and then they might need considerably more time to digest, remember, and analyze that information,” the startup explained. “Claude can now do this in less than a minute.”
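That five-hour figure can be sanity-checked with rough arithmetic. The ratios below are assumptions, not from Anthropic: roughly 0.75 English words per token (the exact value depends on the tokenizer) and a reading speed of about 250 words per minute.

```python
# Back-of-the-envelope check on the "five hours to read 100,000 tokens" claim.
# Assumed ratios (not from the article): ~0.75 words per token, ~250 wpm reading speed.
TOKENS = 100_000
WORDS_PER_TOKEN = 0.75
WORDS_PER_MINUTE = 250

words = TOKENS * WORDS_PER_TOKEN      # ~75,000 words
minutes = words / WORDS_PER_MINUTE    # ~300 minutes
hours = minutes / 60                  # ~5 hours

print(f"{words:,.0f} words is roughly {hours:.1f} hours of reading")
```

Under those assumptions, 100,000 tokens works out to about 75,000 words, or five hours of continuous reading, in line with Anthropic’s estimate.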
“For example, we loaded the entire text of The Great Gatsby into Claude-Instant (72K tokens) and modified one line to say Mr. Carraway was ‘a software engineer that works on machine learning tooling at Anthropic.’ When we asked the model to spot what was different, it responded with the correct answer within 22 seconds.”
A larger context window enables new applications, such as summarizing financial reports, research papers, and court filings. Users of the company’s API can pull relevant information from long documents without having to read through pages of text first, and developers can quiz Claude about specific parts of technical documentation for a deeper understanding.
Bots listen to AI-generated songs, but Spotify doesn’t like it
Tens of thousands of AI-generated tracks uploaded to Spotify have been removed as the company grapples with bot accounts that artificially inflate listener numbers and raise copyright issues.
Major record label Universal Music warned Spotify of suspicious activity on tracks from Boomy, a company that makes tools for creating AI-generated music. Listener counts on Boomy’s tens of thousands of songs appeared to have been miraculously inflated over a short period by online bots.
An investigation revealed that Spotify had removed seven percent of the songs uploaded by Boomy, according to the Financial Times. “Artificial streaming is a long-standing industry-wide problem that Spotify is working to eradicate across its services,” the company said. AI-generated music is controversial, especially when it clearly copies the style of human artists.
Universal Music asked streaming services to remove a fake rap song that cloned the voices of Drake and The Weeknd, claiming it violated copyright law. Spotify, SoundCloud, and YouTube have reportedly taken the track down, though copies can still be found online.
“The recent explosive development in generative AI will, if left unchecked, flood platforms with unwanted content and raise issues with respect to existing copyright law,” Universal Music CEO Lucian Grainge told investors. ®
