Image credit: Apple
Keeping up with a rapidly changing industry like AI is no easy task. So until AI can do it for you, here’s a quick rundown of last week’s coverage of the world of machine learning and notable research and experiments we didn’t cover individually.
It could be said that Apple very visibly and purposefully threw its hat into the hyper-competitive AI race last week. That's not to say the company hasn't previously committed to investing in and prioritizing AI. But at its WWDC event, Apple made it abundantly clear that AI will be embedded in many of its upcoming hardware and software features.
For example, iOS 17, coming later this year, can use computer vision to suggest recipes for similar dishes based on photos on your iPhone. AI also powers Journal, a new interactive diary with personalized suggestions based on your activity in other apps.
iOS 17 also features an upgraded AutoCorrect that leverages AI models to better predict the words and phrases you might use next. Over time, it becomes personalized, learning the words and phrases you use most often (swear words included).
AI is also at the core of Apple's Vision Pro augmented reality headset, particularly FaceTime on Vision Pro. Using machine learning, Vision Pro creates a virtual avatar of the wearer, interpolating a full range of facial contortions, from skin tension to muscle movement.
Image credit: Apple
It may not be generative AI, arguably the hottest subcategory of AI today. But Apple's intent, it seems to me, was to stage a comeback of sorts: to show it shouldn't be underestimated after years of floundering machine learning projects, from the underwhelming Siri to the self-driving car stuck in production hell.
Projecting strength is more than a marketing strategy. Apple's historical underperformance in AI has caused a real brain drain: The Information reports that talented machine learning scientists, including a team that worked on the kind of technology underlying OpenAI's ChatGPT, have left Apple for greener pastures.
Showing that it's serious about AI by actually shipping products with AI felt like a necessary move, and indeed some of Apple's competitors have recently failed to clear that bar (looking at you, Meta). By all appearances, Apple made genuine strides last week, even if it wasn't particularly loud about them.
Other notable AI headlines from the past few days include:
- Meta makes a music generator: Not to be outdone by Google, Meta has released its own AI-powered music generator and, unlike Google, open sourced it. Called MusicGen, Meta's tool can turn a text description into about 12 seconds of audio (see the sketch after this list).
- Regulators examine AI safety: Following the U.K. government's announcement last week that it will host a "global" AI safety summit this fall, OpenAI, Google DeepMind and Anthropic have committed to providing "early or priority access" to their AI models to support research into evaluation and safety.
- AI meets the cloud: Salesforce is launching a new suite of products aimed at strengthening its position in the highly competitive AI space. Called AI Cloud, the suite includes tools designed to deliver “enterprise-ready” AI and is Salesforce’s latest cross-cutting effort to enhance its product portfolio with AI capabilities.
- Testing text-to-video AI: TechCrunch got hands-on with Gen-2, Runway's AI that generates short video clips from text. The verdict? The technology has a long way to go before it comes close to producing cinema-quality footage.
- More money for enterprise AI: In a sign that there's plenty of cash to go around for generative AI startups, Cohere, which is building an AI model ecosystem for the enterprise, announced last week that it has raised $270 million as part of a Series C round.
- No GPT-5: OpenAI still isn't training GPT-5, CEO Sam Altman said at a recent Economic Times conference, months after the startup pledged not to work on a successor to GPT-4 "for some time," following concerns from many industry executives and academics about the rapid progress of its large language models.
- An AI writing assistant for WordPress: Automattic, the company behind WordPress.com and a major contributor to the open source WordPress project, launched an AI assistant for the popular content management system last Tuesday.
- Instagram gets a chatbot: Instagram may be working on an AI chatbot, according to images leaked by app researcher Alessandro Paluzzi. The leaks, which reflect in-progress app development that may or may not ship, suggest these AI agents could answer questions and give advice.
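Since Meta open sourced MusicGen through its audiocraft library, below is a minimal sketch of what generating a clip with it can look like in practice. The checkpoint name, prompt and output handling are illustrative choices, not something we tested for this roundup.

```python
# Minimal sketch of text-to-music generation with Meta's open source
# MusicGen via the audiocraft library (pip install audiocraft).
# The checkpoint, prompt and duration here are illustrative choices.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=12)  # roughly the 12-second clips noted above

# One prompt in, one waveform out; pass several prompts to batch-generate.
wav = model.generate(["lo-fi beat with warm piano chords and soft drums"])

for i, clip in enumerate(wav):
    # Writes musicgen_clip_0.wav at the model's native sample rate.
    audio_write(f"musicgen_clip_{i}", clip.cpu(), model.sample_rate, strategy="loudness")
```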
Other machine learning
If you're wondering how AI will affect science and research over the next few years, a team across six national laboratories wrote a report on exactly that, based on a workshop held last year. One might be tempted to say that, because it draws on last year's trends rather than this year's, during which things have moved so quickly, the report is already outdated. But while ChatGPT has made big waves in tech and consumer awareness, it isn't particularly relevant to serious research. The larger-scale trends are, and they're moving at a different pace. The 200-page report is by no means light reading, but each section is helpfully divided into digestible pieces.
Elsewhere in the national lab ecosystem, Los Alamos researchers are eager to advance the field of memristors, which combine data storage and processing, much like our own neurons do. It's a fundamentally different approach to computation that hasn't yet paid off outside the lab, but this new work at least looks like it moves the ball forward.
AI's capabilities in language analysis are on display in this report on police interactions with the people they pull over. Natural language processing was used as one of several factors to identify linguistic patterns that predict escalation of stops, particularly with Black men. Human and machine learning methods reinforce each other. (Read the paper here.)
Image credit: Cyril Verdon / Renault Defrancesco BUREAU 141 / EPFL
DeepBreath is a model trained on breathing recordings taken from patients in Switzerland and Brazil that its EPFL creators claim can help identify respiratory conditions early. The plan is to put it in a device called the Pneumoscope, under spin-out company Onescope. We'll probably follow up with them to learn how the company is faring.
Another AI-powered health advance comes out of Purdue University, where researchers have developed software that approximates hyperspectral imagery with smartphone cameras, successfully tracking blood hemoglobin and other metrics. It's an interesting technique: using the phone's super-slow-motion mode, the software collects a great deal of information about every pixel in the image, giving a model enough data to estimate from. It could be a great way to get this kind of health information without special hardware.
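Purdue's code isn't something we've seen published, so what follows is only a rough, hypothetical sketch of the general idea: a super-slow-motion clip turns a patch of skin into a dense per-pixel time series, and a learned regressor maps those features to a hemoglobin estimate. The ridge model and synthetic data below are placeholders, not the researchers' method.

```python
# Hypothetical sketch only: high-frame-rate video gives each pixel a dense
# time series, which a trained model can map to a hemoglobin estimate.
# This is NOT Purdue's method; the features, regressor and data are stand-ins.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in "slow-mo" clip: 960 frames of a 32x32 RGB skin patch.
frames = rng.random((960, 32, 32, 3))

# Average the region of interest per frame, then flatten the three
# per-channel time series into one feature vector.
roi_series = frames.mean(axis=(1, 2))     # shape (960, 3)
features = roi_series.T.reshape(1, -1)    # shape (1, 2880)

# Placeholder training set standing in for labeled patient recordings.
X_train = rng.random((50, features.shape[1]))
y_train = rng.uniform(10, 16, size=50)    # hemoglobin, g/dL

model = Ridge(alpha=1.0).fit(X_train, y_train)
print("Estimated hemoglobin (g/dL):", model.predict(features)[0])
```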
Image credit: Massachusetts Institute of Technology
While I still can't quite picture autopilots performing evasive maneuvers, MIT is nudging the technology in that direction with research that helps AI avoid obstacles while maintaining a desirable flight path. Older algorithms could propose drastic changes of direction to avoid a crash, but doing so while staying stable and not pulping anything inside is harder. The team managed to get a simulated jet to autonomously pull off Top Gun-like maneuvers without losing stability. It's harder than it sounds.
We wrap up this week with something interesting from Disney Research, which always seems to find work that applies to both filmmaking and theme park operations. At CVPR it showed off a powerful and versatile "facial landmark detection network" that can track facial movements continuously using more arbitrary reference points. Motion capture already works without the tiny capture dots, but this should make it even higher quality, and more dignified for the actors.
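Disney's network isn't public, but to give a feel for what markerless landmark tracking looks like in code, here's a sketch using the off-the-shelf MediaPipe Face Mesh model as a stand-in for it; the input filename is hypothetical.

```python
# Stand-in example, not Disney's network: continuous, markerless facial
# landmark tracking with the off-the-shelf MediaPipe Face Mesh model.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(
    static_image_mode=False,  # treat input as a video stream
    refine_landmarks=True,    # finer detail around the eyes and lips
)

cap = cv2.VideoCapture("performance_take.mp4")  # hypothetical input clip
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        # Each landmark is a normalized (x, y, z) point on the face surface.
        landmarks = results.multi_face_landmarks[0].landmark
        print(f"tracked {len(landmarks)} landmarks in this frame")

cap.release()
face_mesh.close()
```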