Faster than a scalpel-wielding hand, able to snake to hard-to-reach surgical sites in a single bound—future surgeons will be super surgeons, all thanks to robotics.
Alistair Fleming, VP Medical at Sagentia
In many industries, the advance of robotics has created worries about robots supplanting humans. But in the world of surgery, the next generation of robotics is set to do the opposite—to supercharge the surgeon and put him in control as never before.
First generation systems
Intuitive Surgical’s da Vinci system defined the first generation of general surgical robotics. It promised a revolution in surgery and is today used for hundreds of thousands of procedures annually. The da Vinci System “is powered by robotic technology that allows the surgeon’s hand movements to be translated into smaller, precise movements of tiny instruments inside the patient’s body,” according to the company. The surgeon is provided with a high-definition, 3D window on the operative world through a laparoscope also operated by one of the robot’s arms. Characteristics of first-generation robotic surgery systems include their large size and physical dominance of the operating room, the placement of the surgeon at a console outside the sterile field, and feedback limited mostly to visual cues on-screen.

The invisible man
The effect of this “fly-by-wire” surgery has in part been to abstract the surgeon from his traditional role at the heart of the operating room. Once in the middle of the team, he is now pushed to the margins of the room, almost invisible at his console, controlling the operation remotely (indeed the system was originally designed with a completely physically remote battlefield use in mind). New breeds of robotic surgical systems aim to change this, and they will have fundamentally different characteristics from the first generation.
The invisible robot
Whereas systems such as da Vinci have visual-only feedback, the latest systems are being designed for haptic feedback, synthesizing the sense of touch. Ideally, the surgeon should feel that he or she is operating the surgical instruments at the end of the robotic arm directly; it is important to restore the sensory nature of open surgery to the surgeon. Haptics is a key area of development in surgical robotics: It aims to make the robot invisible to the surgeon and to help the surgeon react directly to what he or she feels.
Restoring sensory feedback
In this context, haptic feedback covers two main types of sensation: force and vibration. Imagine that the end-effector is a pair of scissors which the surgeon is controlling remotely. When you cut with a pair of scissors, you can instinctively tell whether you are cutting through paper or cardboard. You can tell this by the resistance of the material you are cutting through and the resulting reaction force exerted through the instrument; this is force feedback. Haptic solutions are arriving on the market which can directly translate the effector forces to the hand. These new haptic features can help to control the forces applied to delicate structures while laying sutures or resecting friable tissues.
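The force-feedback channel described above can be sketched in a few lines: a raw force reading taken near the end-effector is scaled down for delicate work and clamped before it drives the hand controller. All names, scale factors, and limits below are illustrative assumptions, not parameters of any real system.

```python
def haptic_force(sensor_newtons: float,
                 scale: float = 0.5,
                 max_output_newtons: float = 4.0) -> float:
    """Map a measured tool-tip force to a command for the hand controller.

    scale < 1 attenuates forces so fine suturing feels gentle; the clamp
    acts as a safety ceiling for the haptic actuator.
    """
    commanded = sensor_newtons * scale
    return max(-max_output_newtons, min(max_output_newtons, commanded))


def smooth(prev: float, new: float, alpha: float = 0.2) -> float:
    """Exponential moving average to keep sensor noise out of the
    surgeon's hand; alpha trades responsiveness against smoothness."""
    return (1 - alpha) * prev + alpha * new
```

In a real system this loop would run at kilohertz rates, but the structure—attenuate, filter, clamp—is the essence of translating effector forces to the hand.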
In the same example, imagine the difference between cutting through a sheet of paper or plastic. Again, you can tell without looking which is which: Paper is fibrous and almost gritty compared to plastic, which is smooth and silky. There are subtle differences in vibration feedback as you cut. Picking up such differences requires an extra level of sensitivity, and technologies such as acoustic pickups or local piezoelectric sensors can capture the analog sensation of surface textures and material properties. These types of technologies may well be developed further to help provide this kind of feedback.
In surgery, processing such force and vibration subtleties in combination can help distinguish structures from each other, such as arteries from veins or a cancerous growth from healthy tissue. It is further imaginable that robotic systems could assist in this differentiation if they can be taught what to look for during procedures.
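The paper-versus-plastic intuition above can be made concrete with a crude classifier: fibrous material produces larger sample-to-sample jumps in a vibration trace than a smooth one, so the high-frequency energy of the trace separates the two. The signals, threshold, and labels here are invented for demonstration only.

```python
import math

def high_freq_energy(samples):
    """Crude high-pass measure: mean squared difference of successive
    samples. Gritty, fibrous cutting produces larger jumps than smooth."""
    diffs = [b - a for a, b in zip(samples, samples[1:])]
    return sum(d * d for d in diffs) / len(diffs)

def classify_texture(samples, threshold=0.5):
    """Label a vibration trace by its high-frequency content."""
    return "fibrous" if high_freq_energy(samples) > threshold else "smooth"

# Synthetic traces: a smooth low-frequency signal, and the same signal
# with alternating jitter superimposed to mimic a gritty texture.
smooth_trace = [math.sin(0.1 * i) for i in range(200)]
gritty_trace = [s + (0.8 if i % 2 else -0.8)
                for i, s in enumerate(smooth_trace)]
```

A deployed system would use proper spectral analysis and learned models rather than a single threshold, but the principle—discriminating tissue by vibration signature—is the same.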
Device designers, however, should not underestimate the technical challenges of achieving haptic feedback. Providing haptic feedback to the surgeon ideally requires sensors right at the tip of the end-effectors. The data harvested at the tip then needs to be communicated back to the surgeon. At present, first-generation surgical robots can only capture this data at a distance from the tip of the device, losing fidelity through the cables and pulleys that connect it to the drive motors.
On the other hand, a tip-mounted sensor has to be suited to the surgical environment: it either has to be disposable and cheap (while also robust and safe), or it has to withstand repeated sterilization.
The super surgeon: enhancing reality
Haptic feedback is about restoring the lost sensation of touch to the surgeon, but what about giving him or her entirely new powers? Enhanced visualization is a key area of development; it means allowing surgeons to see better or more than they can with the naked eye. The da Vinci provides some enhanced visualization with its Firefly feature. Augmented visualization developments could make the image on the surgeon’s screen look more akin to textbook illustrations. They may use techniques such as fluorescent or hyperspectral imaging to help the surgeon distinguish between different structures in his visual field. For example, this might mean highlighting blood vessels in one color, a ureter in another, nerves in a third, etc. One of Sagentia’s clients, Lightpoint, has launched an intra-operative molecular imaging system called LightPath to assist surgeons in identifying cancerous tissue. The system detects Cerenkov luminescence, a faint light produced by PET imaging agents widely used in cancer diagnosis. The LightPath system visually highlights the presence of the cancer cells, allowing the surgeon to be more certain that he has removed all cancerous matter while avoiding the unnecessary removal of healthy tissue.

There is also a wealth of pre-surgery data such as MRI, X-rays or CT scans which could be beneficial to a surgeon if merged into a surgical system’s live view. Presently, there are examples of this in neurosurgery, where structures don’t move much, but for soft tissue surgery, the surgeon is looking between screens; bringing this data together into a single interface to guide the surgeon would be a powerful tool. To accommodate the dynamic morphological changes in tissue shape and relative position, fiducial markers and intensive image processing may be necessary.
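The overlay idea behind such augmented visualization can be sketched as a per-pixel blend: wherever a detected signal (fluorescence, for instance) exceeds a threshold, the camera frame is tinted toward a highlight color. This is a toy illustration of the general technique, not the method of any named product; frames are nested lists of (r, g, b) tuples, and the threshold, color, and opacity are arbitrary choices.

```python
HIGHLIGHT = (0, 255, 0)  # tint flagged tissue green in this sketch

def overlay(frame, signal, threshold=0.5, opacity=0.6):
    """Return a new frame where pixels whose per-pixel signal exceeds
    `threshold` are alpha-blended toward HIGHLIGHT; others pass through."""
    out = []
    for row_px, row_sig in zip(frame, signal):
        out_row = []
        for (r, g, b), s in zip(row_px, row_sig):
            if s > threshold:
                r = round((1 - opacity) * r + opacity * HIGHLIGHT[0])
                g = round((1 - opacity) * g + opacity * HIGHLIGHT[1])
                b = round((1 - opacity) * b + opacity * HIGHLIGHT[2])
            out_row.append((r, g, b))
        out.append(out_row)
    return out
```

Merging pre-surgery scan data works on the same principle, with the added difficulty that the scan must first be registered to the live view—the role of the fiducial markers and image processing mentioned above.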
Small world—physical changes
First generation systems have been big pieces of equipment that have dominated operating rooms. There is a desire to reduce the size of these systems, making them less intrusive and more adaptable. How system designers can achieve this is an open question. In practical terms, the clinical need drives architecture. Clinically derived specifications for the range of motion and strength of the instruments define their scale, the size of drive motors and consequent specs for mounting structures. However, the overall scale can also be affected by where arms are mounted and motors are located. The newer da Vinci systems occupy far less space than their predecessors, and across the industry this trend will continue, providing surgeons with more ready access to their patients and enabling a wider range of surgical procedures to benefit from robotics. In parallel, the wider industrial robotics arena is transitioning from brutal, unyielding systems in cages to “softer,” “self-aware” systems that are safe to work around and can even interact with humans. These developments are also vital for future generations of surgical systems, with clinicians recapturing a much more hands-on presence in the OR, in touch with their patients again.
Equally, entirely novel architectures that move away from conventional multi-DoF (degree of freedom) robotic arms have the potential to disrupt this model. The trend for minimally invasive surgery has for some years pointed toward fewer (or no) incision sites, but it is difficult to get there with conventional instruments. Still, the ability to enter the body through a single entry site and then use the robot to snake along to the target of the surgery is proving an enticing objective for one cadre of newer system developers. Sagentia’s client Medrobotics won Best in Show at the 2016 Medical Design Excellence Awards for their Flex Robotic System. This system reflects the trend for more flexible systems. The product represents the first of its kind as a flexible robot for advanced surgical procedures, enabling surgeons to navigate around or through tortuous anatomical structures. The Flex delivers high-definition visualization along with two-handed surgery to distant anatomies.

Distant horizons
Is there an appetite for an autonomous robot that takes the place of the surgeon completely? We can see the potential for robots to undertake some discrete tasks autonomously, and there are already examples of pre-planned execution (primarily in the orthopedics world). However, we could be a long way from a future where robots are even technically capable of reactive control; and that’s before considering the ethical and regulatory challenges this raises. That said, robots are good at doing defined, specific, repetitive tasks well, and a lot of surgery falls into this category. There has already been a recent pre-clinical example, at Children’s National Health System in Washington, D.C., of a robot performing suturing in an animal operation. But there is certainly no regulatory enthusiasm for robots to take on more than very controlled tasks. We believe we are many years away from anything like this.
Restoring surgery to its roots
The word “surgeon” came into the English language after the Norman Conquest and derives from the Greek “kheirourgia,” from “kheirourgos,” “working or done by hand.” Robotics often seems the opposite of this, but new robotic surgery developments have actually returned to the old sentiment. Surgeons are returning to center stage and regaining some of the manual feel of open surgery within a minimally invasive surgical environment. Even better, the surgeon can increasingly see the unseeable and integrate that into one view with pre-surgery scan data. The trend for minimally invasive surgery continues—and innovators look for ever more subtle ways of accessing anatomy through small openings and snaking their way through to the location for surgery. The next generation of robots will enable all of this to happen. They will be smaller, more mobile and more flexible—and they will be more collaborative, even part of the team.
Alistair Fleming is vice president of Sagentia’s medical business. He has more than 20 years’ experience in innovation and technology development in the medical device industry. His project and program management experience includes the development of first-to-market electrosurgical instruments, capital equipment and robotics for minimally invasive surgery, as well as in vitro diagnostics systems for both central lab and point of care.