Komal Jasani, a QA engineer at Meta, pioneered the use of machine learning in VR testing, leading to faster bug detection, reduced costs, and an improved user experience.
As their popularity grows, VR platforms are finding applications in training, social events, games, and more. Building these platforms involves not only modeling accurate graphics and smooth animations but also optimizing them well, keeping them free of crashes, and delivering the same experience to users every time. This field of work is called quality assurance.
VR testing has traditionally been manual and very time-consuming: engineers and testers step through their VR applications in search of bugs and flaws. This approach cannot keep pace with the growing complexity of platforms and applications. Recent machine learning techniques are taking up the task, speeding things up and discovering more problems before end users ever perceive them.
Komal Jasani is a highly creative tester at Meta who pioneered the use of machine learning in VR testing. “With a background in QA engineering and AI solutions, we have built an understanding of how machine learning can be used to improve platform stability and user experience,” she said. These initiatives, along with cross-team projects, transformed the manual QA process and had a significant impact on the company. She championed an approach in which ML models learn from user session data to identify patterns that could indicate problems, increasing the speed of bug detection by 40%. These changes led to fewer delays and faster releases. They also saved money: the new implementation cut QA costs by about 25% through reduced manual testing. Overall, the move to automated testing is estimated to save around $3.5 million in less than three years.
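The article does not describe how these models work internally. As a rough illustration of the pattern-detection idea, the sketch below flags user sessions whose telemetry deviates sharply from the norm; the metric name (`frame_ms`), the session data, and the z-score threshold are all invented for illustration, not Meta's actual approach.

```python
# Hypothetical sketch: flag anomalous VR sessions from telemetry.
# Session records and the "frame_ms" metric are invented for illustration.
from statistics import mean, stdev

def flag_anomalies(sessions, key, threshold=1.5):
    """Return ids of sessions whose metric deviates > threshold std devs."""
    values = [s[key] for s in sessions]
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [s["id"] for s in sessions if abs(s[key] - mu) / sigma > threshold]

sessions = [
    {"id": 1, "frame_ms": 11.1},
    {"id": 2, "frame_ms": 11.3},
    {"id": 3, "frame_ms": 10.9},
    {"id": 4, "frame_ms": 11.0},
    {"id": 5, "frame_ms": 48.7},  # badly degraded session
]
print(flag_anomalies(sessions, "frame_ms"))  # [5]
```

In practice such a detector would run over many metrics per session and feed flagged sessions to triage, but the statistical core stays the same: learn what "normal" looks like, then surface the outliers.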
Another major initiative, she said, was an AI-driven QA effort that embedded predictive models into the QA process. A predictive model can catch a problem before it actually occurs, maximizing test coverage. Test run time dropped from approximately two days to 18 hours, while the defect detection rate increased by more than 60%.
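One common way predictive models speed up test runs is by ranking test targets by defect risk so the riskiest code is tested first. The sketch below shows that prioritization idea with a toy weighted score; the feature names, weights, and module data are assumptions for illustration, not the actual model described in the article.

```python
# Hypothetical sketch: prioritize test targets by predicted defect risk.
# Features (recent churn, past defect rate) and weights are illustrative.
def defect_risk(module, w_churn=0.6, w_history=0.4):
    """Weighted risk score from recent code churn and defect history."""
    return w_churn * module["churn"] + w_history * module["past_defects"]

modules = [
    {"name": "rendering",  "churn": 0.9, "past_defects": 0.7},
    {"name": "networking", "churn": 0.2, "past_defects": 0.1},
    {"name": "audio",      "churn": 0.4, "past_defects": 0.8},
]
# Test the riskiest modules first, cutting time-to-first-defect.
ranked = sorted(modules, key=defect_risk, reverse=True)
print([m["name"] for m in ranked])  # ['rendering', 'audio', 'networking']
```

A production system would learn such weights from historical defect data rather than hard-coding them, but the payoff is the same: defects surface earlier in the run, which is how overall test time can shrink while detection improves.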
Problems emerged at several points along the way. Integrating AI into existing systems was a major challenge, as many QA tools were never designed for machine learning. “The team addressed the issue of data shortages in training models by developing synthetic data generation techniques,” she said. The team also built tools to ensure that tests run reliably across devices, and solved the latency problem of real-time testing with edge processing. The result was a far more reliable and enjoyable user experience, and the platform's improved stability and performance boosted customer satisfaction by 15%. Rather than waiting for problems to manifest, the QA team began discovering them early and predicting them before they appeared.
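The article does not specify which synthetic data techniques the team used. A minimal sketch of one simple approach is shown below: augmenting a scarce set of real telemetry records by perturbing them with small random noise. The field names, noise model, and ±5% jitter are assumptions for illustration only.

```python
# Hypothetical sketch of synthetic data generation for scarce training data:
# jitter real samples with small random noise to enlarge the training set.
# Field names and the noise model are assumptions, not Meta's technique.
import random

def synthesize(samples, n, jitter=0.05, seed=0):
    """Create n synthetic records by jittering randomly chosen real ones."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        base = rng.choice(samples)
        out.append({k: v * (1 + rng.uniform(-jitter, jitter))
                    for k, v in base.items()})
    return out

real = [{"frame_ms": 11.1, "dropped": 3.0}, {"frame_ms": 14.2, "dropped": 7.0}]
synthetic = synthesize(real, n=100)
print(len(synthetic))  # 100 records, each within ±5% of a real sample
```

Simple jittering preserves the rough distribution of the real data; more sophisticated pipelines use simulators or generative models, but the goal is the same: enough labeled examples to train a detector when real failure data is rare.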
In addition to her technical contributions, she shares her research findings through publications such as “The role of machine learning in enhancing quality assurance in VR platforms” and “Automated defect detection in virtual environments using machine learning.”
Looking ahead, industry experts predict that VR testing capabilities will improve alongside VR technology itself. AI and ML will be prioritized here, particularly for foreseeing and preventing problems. Tools are also being developed that evaluate user comfort, ultimately making the VR experience more accessible and enjoyable for everyone.
The key point is that ML allows QA teams to work smarter. It eliminates the distractions that waste time and resources, and it ensures users get the experience they expect in an environment where user trust and performance are paramount.
