According to the company’s recent MLPerf results, SiMa.ai beats market leader NVIDIA’s Jetson in power-constrained edge applications.
There’s big AI like ChatGPT, and there’s useful AI embedded at the edge. In these environments, available power may drop below 20 watts. Before we dive into SiMa.ai’s AI, which we covered last fall, let’s clarify what we mean by the term “embedded,” because there is a lot of confusion about who is truly competing with whom.
Embedded edge AI applications
AI is being used more and more in embedded edge applications. This refers to deploying computing resources and machine learning algorithms to devices and systems that operate in the field rather than centralized data centers. Examples of applications using AI at the embedded edge include:
- Self-driving cars: AI is being used to enhance the perception, decision-making, and control systems of self-driving cars and trucks. These systems rely on sensor data from cameras, lidar, and radar, and use machine learning algorithms to detect and classify objects in real time, predict their behavior, and determine how the vehicle should steer.
- Factory automation: AI is being used to optimize and automate manufacturing processes such as quality control, defect detection, and predictive maintenance. These applications rely on machine learning algorithms to analyze data from sensors and other sources to detect anomalies, patterns, and trends that help improve efficiency and reduce downtime.
- Smart home and smart buildings: AI is being used to power smart home and building systems such as HVAC, lighting, and security. These systems use machine learning algorithms to analyze data from sensors and other sources to optimize energy use, detect anomalies and security breaches, and provide personalized user experiences.
- Healthcare: AI is being used in medical devices and wearables such as glucose monitors, ECGs, and smart prostheses. These devices use machine learning algorithms to analyze data from sensors and other sources to monitor health, detect anomalies, and provide personalized treatment and feedback.
- Robotics: AI is being used to power robots and drones in applications such as search and rescue, precision agriculture, and warehouse automation. These systems use machine learning algorithms to analyze data from sensors and other sources, detect and classify objects, navigate complex environments, and perform complex tasks.
Overall, the use of AI at the embedded edge is growing rapidly. This is because devices and systems are becoming more connected and intelligent, and the demand for real-time processing, low-latency communication, and efficient energy use continues to grow.
Compete with SiMa.ai
When the SiMa team is called up for a deal, the competition is usually the widely respected NVIDIA Orin family. Similar to SiMa.ai’s MLSoC chip, Orin is an edge solution that can run virtually any AI model from 20 to 275 TOPS, but its versatility requires more power. The power consumption of the Orin family ranges from 5 to 60 watts, and the chips include Arm CPU cores and, in the case of the Orin NX and Orin AGX, Ampere GPUs, plus NVIDIA Deep Learning Accelerators (DLA) and vision accelerators.
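As a rough sense of what those ranges imply, a back-of-the-envelope TOPS-per-watt calculation can be sketched from the figures cited above. Note these are spec-sheet envelope numbers, not measured benchmark results, and real efficiency depends heavily on the model and workload:

```python
# Rough TOPS-per-watt arithmetic using the TOPS and power ranges
# cited in the article. These are envelope figures, not measured
# MLPerf results; actual efficiency depends on the workload.

orin_configs = {
    "Orin (low end)":  {"tops": 20,  "watts": 5},
    "Orin (high end)": {"tops": 275, "watts": 60},
}

for name, cfg in orin_configs.items():
    eff = cfg["tops"] / cfg["watts"]   # theoretical TOPS per watt
    print(f"{name}: {eff:.2f} TOPS/W")
```

This works out to roughly 4.0 TOPS/W at the low end and about 4.6 TOPS/W at the high end, which is why raw TOPS alone says little about fit for a power-constrained socket.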
The SiMa.ai MLSoC includes an Arm A65 CPU, a Synopsys computer vision processor, video decoder and encoder, 4 MB of on-chip memory, security, connectivity, and a network-on-chip. It is also significantly more power-efficient than Orin based on MLPerf and other benchmarks.
Therefore, in some applications, customers may choose Orin for greater flexibility, but at the cost of higher capital expense and power consumption. Others find SiMa.ai’s MLSoC parts preferable, with better performance per watt. The embedded edge marketplace has many nooks and crannies.
MLPerf
The press asked me to explain why SiMa can claim superior efficiency while the Qualcomm Cloud AI100 is more efficient according to the MLPerf benchmark. The answer is that Qualcomm is targeting the edge cloud, a completely separate segment that normally requires a server. The two companies aren’t really competing, despite the popular term “edge.” Gopal Hegde, Vice President of Products at SiMa.ai, said:
“To reach the low power levels demanded by customers, edge chips must be designed from scratch. Scaling down data-center AI chips just won’t cut it for most applications.” So SiMa.ai built its MLSoC as an embedded platform. In the MLPerf 3.0 round, the company beat NVIDIA Orin’s power efficiency (images/sec/watt) for image classification by 47%.
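The MLPerf efficiency metric behind that 47% figure is straightforward: throughput divided by power. The sketch below shows how such a comparison is computed; the throughput and power numbers are hypothetical placeholders chosen only to illustrate the arithmetic, not actual MLPerf 3.0 submission data:

```python
# Sketch of the MLPerf power-efficiency metric (images/sec/watt).
# The throughput and power figures below are hypothetical, chosen
# only to demonstrate how a percentage advantage is derived.

def efficiency(images_per_sec: float, watts: float) -> float:
    """Images processed per second, per watt consumed."""
    return images_per_sec / watts

chip_a = efficiency(images_per_sec=1000.0, watts=10.0)  # 100 img/s/W
chip_b = efficiency(images_per_sec=1360.0, watts=20.0)  #  68 img/s/W

# Chip B has higher raw throughput, but chip A wins on efficiency.
advantage = (chip_a - chip_b) / chip_b * 100
print(f"Efficiency advantage: {advantage:.0f}%")  # prints ~47%
```

The point the example makes is that a chip with lower absolute throughput can still win decisively once power is in the denominator, which is exactly the framing used for power-constrained edge sockets.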
Conclusion
The “edge” market is very diverse. Each application requires different models, different accuracies, and different performance, latency, and power budgets. This market is growing rapidly, and many, if not all, ships are lifted by the tide. SiMa.ai’s MLSoC implementation should fit the needs of many customers well, but the company is just getting started.
Follow me on Twitter or LinkedIn. Check out my website.