How AI is transforming the surgical robotics space: key players, applications, innovations and insights into IP management 

Applications of AI


AI has evolved to influence many areas, from art and essay writing to the screening of applications. However, it is in the surgical suite that AI stands to have its greatest tangible impact on our lives. Its unrivalled ability to analyse data, recognise trends and support key decisions makes it a crucial part of performing a successful procedure. When paired with surgical robotics, procedures become safer, faster and easier to perform. Companies wanting to innovate in the complex surgical space must understand the current impact of AI in the industry and the wider IP landscape.

Surgical robotics is on the threshold of widespread commercialisation with more than 2,000 companies jockeying for a competitive edge. While the acceleration of innovation may seem recent, this evolution has quietly taken place over 25 years (see Figure 1). It reflects advances by many companies creating key components of the underlying technology, such as:

  • patents filed by IBM in 1999 and Intuitive in 2005;
  • Mako’s focus on machine learning for surgical guidance in 2003;
  • Samsung’s 2023 patent on machine-learning motion production; and
  • recent filings by Vicarious Surgical in 2021 and Metamorphosis in 2025.

Patents provide unique insight into future technical trends and allow for a better understanding of the evolution of this complex space. Critical questions they can help answer include:

  • How is the surgical suite evolving?
  • What role will AI play in improving patient access and surgical outcomes?
  • Can AI enable new entrants to leapfrog earlier players with large portfolios?

Figure 1. Timeline for introducing AI into key surgical procedures

Table 1. Key applications of AI in robotic surgery

  • Image processing – the robot interprets surgical images/video and displays the results. AI image processing allows for rapid and accurate analysis of image data that the surgeon can use to perform procedures. Notable assignees include Theator, Verily Life Sciences and Auris Health.
  • Presurgical planning – the robot uses patient data to create or adjust a surgical plan. Data from previous procedures can be used to reduce errors and optimise patient outcomes. Notable assignees include Medtronic, Smith & Nephew and Carlsmed.
  • Guidance – the robot can give the surgeon advice or recommendations. AI can detect deviations or recognise tissue faster than the surgeon. Notable assignees include Johnson & Johnson, Medtronic and Zimmer.
  • State control and awareness – the robot understands the status of itself and the procedure. The user can be alerted when the robot is in an error state. Notable assignees include Verb Surgical, Medtronic and Verily Life Sciences.
  • Motion sequencing – the robot can translate instructions from the master input or adjust the toolpath. AI can reduce distortion caused by surgeon tremors or interference errors. Notable assignees include Johnson & Johnson, Medtronic and Koninklijke Philips.
  • Autonomous surgery – the robot can determine and execute surgical steps. Autonomous surgical robots powered by AI can perform surgery with more precision and fewer errors than a human surgeon. Notable assignees include IX Innovation, Smith & Nephew and Sony.

Image processing

The foundational application of AI in robotic surgery is image processing. To perform procedures accurately and achieve positive outcomes, the robot must be able to understand and interpret its environment. When the robot adds context and presents information in a digestible form, the surgeon can perform a robotic procedure more effectively.

Image processing has advanced over time in ways that give surgeons ever more knowledge about the patient and the surgical site. Early image-processing tools were used to identify and differentiate between types of tissue, which could inform a surgeon where to make an incision or implant a device. More recent developments – illustrated by US20240398479A1 from IX Innovation and WO2021159048A1 from Vicarious Surgical – use AI to generate depth maps, enabling accurate 3D representations of the surgical site to be rendered within virtual-reality glasses. Figure 2 highlights the evolution of AI-enabled image processing, which has played an integral role in the development of many of the following innovations.

Figure 2. Evolution of AI image processing
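The filings above do not disclose implementation details, but the general idea of turning an AI-generated depth map into a 3D scene is standard computer vision. The sketch below (all values synthetic; a real system would obtain the depth map from a learned estimator rather than a constant array) back-projects a depth map into a point cloud using a pinhole camera model:

```python
import numpy as np

def depth_map_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in metres) into a 3D point cloud
    using a pinhole camera model with focal lengths fx/fy and
    principal point (cx, cy)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Toy example: a flat surface 0.1m in front of the endoscope camera
depth = np.full((4, 4), 0.1)
cloud = depth_map_to_point_cloud(depth, fx=500, fy=500, cx=2, cy=2)
print(cloud.shape)  # (16, 3)
```

A point cloud like this is what a renderer would then display inside virtual-reality glasses.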

Presurgical planning

AI can be used to create a surgical plan. Large datasets – combining preoperative data with that of past procedures – can be used to generate an optimised surgical plan tailored to the patient’s specific condition and anatomy. Globus Medical claims such a device in its application titled “Machine learning system for navigated spinal surgeries”. Its system trains a machine-learning model on prior patients’ spinal structures and implant designs, generating a surgical plan by comparing this data to the current patient’s preoperative data. This level of personalisation is difficult to obtain without AI data analysis and has a significant impact on patient outcomes.
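The application does not spell out its model, but the principle of matching a new patient to prior cases can be illustrated with a minimal nearest-neighbour sketch (anatomical features, measurements and implant sizes below are all invented for illustration):

```python
import numpy as np

# Hypothetical prior cases: [vertebra_width_mm, disc_height_mm] -> implant size
prior_anatomy = np.array([[38.0, 9.0], [42.0, 11.0], [35.0, 8.0]])
prior_implant = ["M", "L", "S"]

def suggest_implant(patient_features):
    """Recommend the implant used for the most anatomically
    similar prior patient (1-nearest-neighbour lookup)."""
    dists = np.linalg.norm(prior_anatomy - patient_features, axis=1)
    return prior_implant[int(np.argmin(dists))]

print(suggest_implant(np.array([41.0, 10.5])))  # L
```

A production system would learn from thousands of cases and predict placement as well as size, but the core idea – compare the current patient’s preoperative data against prior outcomes – is the same.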

Guidance

AI’s ability to interpret large datasets is also leveraged intraoperatively, in the form of guidance to the surgeon. As with presurgical planning, a combination of prior-procedure and patient data can help surgeons complete a procedure. Intuitive Surgical illustrates this capability with its patent titled “Artificial intelligence guidance system for robotic surgery” (US12016644B2), a machine-learning system trained to detect the tissue type within an intraoperative image and associate a particular tool with it. The system can then advise the surgeon to use that tool and where to place it.
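The classifier-to-recommendation step can be sketched simply: take the tissue classifier’s output probabilities and look up an associated tool. The tissue classes and tool names below are hypothetical, standing in for whatever a trained model and the patented association table actually use:

```python
# Hypothetical mapping from a tissue classifier's output to a tool suggestion
TOOL_FOR_TISSUE = {
    "vessel": "vessel sealer",
    "connective": "monopolar scissors",
    "tumour_margin": "ultrasonic dissector",
}

def recommend_tool(class_probs):
    """Pick the most probable tissue class from the (stand-in)
    classifier output and look up the associated tool."""
    tissue = max(class_probs, key=class_probs.get)
    return tissue, TOOL_FOR_TISSUE[tissue]

print(recommend_tool({"vessel": 0.7, "connective": 0.2, "tumour_margin": 0.1}))
# ('vessel', 'vessel sealer')
```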

State control and awareness

A key step in building an AI-driven autonomous surgical robot is giving it a level of awareness of both itself and the state of the procedure – something that major players such as Medtronic and Philips are using AI to achieve.

A recent application filed by Medtronic utilises a machine-learning model to determine if an instrument is properly positioned; if not, it alerts the surgeon (US20230016754A1). Philips claims a neural network trained to determine the user’s intent and enable a robotic-arm control mode based on such intent and procedural status in its application WO2021250141A1. These innovations allow faster reaction time to error and more efficient collaboration between surgeons and robots.
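The alerting logic in such systems reduces to thresholding a model’s output. Here is a toy sketch in which a logistic function of positional deviation stands in for Medtronic’s trained model (the deviation-to-probability mapping and threshold are invented for illustration):

```python
import math

def misplacement_probability(deviation_mm):
    """Stand-in for a trained model: a logistic function of the
    instrument's positional deviation in millimetres."""
    return 1.0 / (1.0 + math.exp(-(deviation_mm - 2.0)))

def monitor(deviation_mm, alert_threshold=0.8):
    """Alert the surgeon when the model judges the instrument
    likely to be misplaced."""
    p = misplacement_probability(deviation_mm)
    return "ALERT: check instrument position" if p >= alert_threshold else "OK"

print(monitor(0.5))  # OK
print(monitor(5.0))  # ALERT: check instrument position
```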

Motion sequencing

AI can improve the way surgical robots move. Globus Medical employs a neural network that is trained to recognise signal interference from positional sensors and correct the robot arm’s positioning. 

Samsung has innovated in this space in a different way. In US9687301B2, it claims a machine-learning method that takes in motion data from a master manipulator and translates it into tool movements, with adjustments to fit the current task. 

These technologies enable cleaner and more accurate motion, reducing unnecessary harm to patients or damage to the device itself. 
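To make the master-to-tool translation concrete, the toy sketch below scales down the surgeon’s hand motion and damps high-frequency tremor with a moving average. This is a generic illustration of motion scaling and smoothing, not the specific learned methods claimed in the patents above:

```python
def translate_motion(master_deltas, scale=0.2, window=3):
    """Scale master-manipulator motion down for fine work and
    smooth high-frequency tremor with a moving average."""
    smoothed = []
    for i in range(len(master_deltas)):
        lo = max(0, i - window + 1)
        avg = sum(master_deltas[lo:i + 1]) / (i + 1 - lo)
        smoothed.append(avg * scale)
    return smoothed

# A tremor spike at index 2 is damped in the tool's motion
print(translate_motion([1.0, 1.0, 4.0, 1.0, 1.0]))
```

The spike of 4.0 at the master never reaches the tool at full amplitude, which is exactly the property that protects tissue and hardware.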

Autonomous surgery

The degree of autonomy that a surgical robot commands during a procedure has advanced: what began as minor navigation adjustments and error corrections has become a robot that can autonomously perform sequential surgical steps. The previous innovations were stepping stones on the path to autonomous surgical robots. Empowered by AI, surgical robots can complete procedures in less time than surgeons and avoid human shortfalls, such as exhaustion and lack of dexterity.

Auris Health, a Johnson & Johnson subsidiary, claims one such system, which uses machine learning to identify the phase of a procedure. The algorithm draws on several inputs (eg, surgical video and sensor data) to determine the phase; if that phase is automated, the system triggers the predetermined process associated with it. 
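The phase-to-action dispatch can be sketched as a lookup: once a classifier (not shown) names the current phase, the system either fires the associated automated action or leaves control with the surgeon. Phase and action names below are hypothetical:

```python
# Hypothetical automated actions keyed by detected procedure phase
AUTOMATED_PHASES = {
    "clip_applied": "cut_between_clips",
    "retraction_done": "advance_camera",
}

def on_phase_detected(phase, execute):
    """If the detected phase has an automated follow-on action,
    trigger it; otherwise leave control with the surgeon."""
    action = AUTOMATED_PHASES.get(phase)
    if action is not None:
        execute(action)
        return action
    return None

log = []
print(on_phase_detected("clip_applied", log.append))  # cut_between_clips
print(on_phase_detected("dissection", log.append))    # None
```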

IX Innovation is another company making strides in AI-enabled autonomous surgery. In its application US20230181255A1, it claims a machine-learning model that uses intraoperative data to compute a confidence score reflecting whether the robot can correctly perform the next step of a procedure. Depending on the score, the robot can prompt the surgeon for input before carrying out the step. This back and forth between robot and surgeon balances the robot’s ability to reduce human error against the risk of letting it act in an unfamiliar situation.
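Stripped of the model itself, the claimed behaviour is a confidence gate. A minimal sketch (threshold value invented; the application does not publish one):

```python
def next_step_decision(confidence, auto_threshold=0.95):
    """Gate autonomous execution on the model's confidence score:
    act autonomously only when confidence is high, otherwise hand
    the decision back to the surgeon."""
    if confidence >= auto_threshold:
        return "execute autonomously"
    return "prompt surgeon for confirmation"

print(next_step_decision(0.98))  # execute autonomously
print(next_step_decision(0.70))  # prompt surgeon for confirmation
```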

These robots are only partially autonomous – a surgeon must still be attentive and involved, carrying out necessary steps and making other corrections when needed. For fully autonomous surgery to become a reality, AI will need to become more powerful such that it can adapt to any situation in the surgical suite. 

In its recent filing US20250049522A1, Metamorphosis begins to address the possibility of creating what it calls “truly autonomous robotic surgery” (see Figure 3). It claims a control loop that is continuously performing surgical steps and passing intraoperative data through a neural network to determine the next position and orientation of the robotic tool. 

Figure 3. Evolution of autonomous surgery
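The control loop Metamorphosis describes – perceive, decide, act, repeat – can be sketched abstractly. Everything here is a stand-in: the policy callable represents the neural network, and the toy run substitutes integers for intraoperative images:

```python
def autonomy_loop(get_image, policy, move_tool, max_steps=100):
    """Closed control loop: pass each intraoperative image through a
    policy (stand-in for the neural network) that returns the tool's
    next pose, or None when the procedure is complete."""
    for _ in range(max_steps):
        pose = policy(get_image())
        if pose is None:
            return "procedure complete"
        move_tool(pose)
    return "step limit reached"

# Toy run: the "policy" finishes after three steps
frames = iter(range(10))
moves = []
result = autonomy_loop(
    get_image=lambda: next(frames),
    policy=lambda img: None if img >= 3 else (img, 0.0, 0.0),
    move_tool=moves.append,
)
print(result, len(moves))  # procedure complete 3
```

The `max_steps` guard stands in for the safety interlocks any real system would need around such a loop.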

Company portfolios

Recent IP filing activity for some of the leading players in this space is displayed below. While several companies touch on most, if not all, of the applications discussed above, many place heavier emphasis on one or two.

Intuitive Surgical’s early and extensive portfolio has long served as a roadblock for other companies looking to innovate in the space. Applying AI to surgical robotics tools and procedures has allowed other companies to compete with – or circumvent – Intuitive’s portfolio. As indicated in Figure 4, almost half of Intuitive’s AI portfolio covers image processing, while it has little presence in state control, motion sequencing and autonomous surgery. This has allowed other companies, including Johnson & Johnson, IX Innovation and Smith & Nephew, to introduce new technologies in these areas more easily. IX Innovation is a relatively late entrant, with its earliest AI filings originating in 2021. Despite this, its novel technology, mostly related to autonomous surgery, has earned it at least 50 granted AI-related patents covering advanced surgical robotics technology.

Figure 4. Key players – Intuitive Surgical

Smith & Nephew’s portfolio is primarily focused on orthopaedic surgery. When placing an orthopaedic implant, choosing the correct implant and positioning it precisely are vital to ensuring the best possible patient outcome. It is therefore no surprise that Smith & Nephew has directed much of its AI development towards procedure planning. Over the last five years, it has filed numerous patent applications claiming machine-learning models that optimise both implant selection and placement. For example, its patent application titled “Methods for improved surgical planning using machine learning and devices thereof” optimises a predictor equation to determine the best size, position and orientation of an implant (US20220125515A1). After the procedure, the machine-learning model is fed data on the procedure’s success so that it can continue to improve. 

Figure 5. Key players – Smith & Nephew

Johnson & Johnson also uses machine-learning models to improve robotic surgery procedures. In contrast to Smith & Nephew’s focus on presurgical planning, it claims machine-learning models that improve a procedure intraoperatively. In one of several patents titled “Method of hub communication with surgical instrument systems”, it claims a clip-applier device that gathers data as it completes crimping strokes to adjust and improve stroke quality throughout the procedure (US11564756B2). This allows the clip applier to constantly adapt to its environment. The ability to fine-tune the behaviour of a robotic medical device with AI allows Johnson & Johnson to continue to grow as a major player in surgical robotics. 
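This stroke-by-stroke adaptation is a form of online feedback control. The toy sketch below (class name, gain and quality scale all hypothetical, not taken from the patent) nudges the commanded force after each stroke based on the measured quality of the previous one:

```python
class AdaptiveCrimper:
    """Toy online adaptation: after each crimping stroke, nudge the
    commanded force towards a target stroke-quality score."""
    def __init__(self, force=10.0, gain=0.5, target_quality=1.0):
        self.force, self.gain, self.target = force, gain, target_quality

    def record_stroke(self, measured_quality):
        # Increase force when quality falls short of the target, and vice versa
        self.force += self.gain * (self.target - measured_quality)
        return self.force

crimper = AdaptiveCrimper()
for quality in [0.6, 0.8, 0.95]:
    crimper.record_stroke(quality)
print(round(crimper.force, 2))  # force has crept upward from 10.0
```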

Figure 6. Key players – Johnson & Johnson

Figure 7. Key players – Medtronic

Key takeaways

The proliferation of AI in the surgical suite provides benefits to all parties involved, from patients to doctors to healthcare insurers. Faster, more precise surgeries mean better outcomes for patients, less need for follow-up care or revision surgeries and higher throughput. Additionally, quality care becomes more universally available. A skilled surgeon in a city, partnered with AI-driven tools, can virtually lead a procedure taking place in a more remote location. There are plenty of opportunities to innovate in this space as AI creates new winners and losers. Understanding these technological trends and their impact through an IP lens provides a disciplined process to identify, create and capture value from the wave of innovation in this complex space.


This is an Insight article, written by a selected partner as part of IAM’s co-published content.


