The First Law of Robotics states that “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Applying this law with the surgeon as the seat of the robot’s autonomous will accords with the ethical principle of nonmaleficence, which requires that physicians’ actions not intentionally harm patients. Yet advances in automation and AI could take surgeons further out of the control loop.

One of the strongest reasons the FDA panel (June 16, 1999) gave for approving da Vinci was its future potential. That potential made the panel feel morally obliged ‘to go for it’, and we agree. But greater attention must be paid to the ethical issues if the field is not to be held back.

Three criteria are currently proposed in most ethics committees for making “surgical innovation acceptable”: (i) sufficient laboratory experience before conducting innovative procedures, (ii) sufficient intellectual and technical expertise available in the institution, and (iii) good “institutional stability” based on its resources, support systems, and staff. It is ethically important that institutions buying into robot surgery meet these three criteria: by ensuring sufficient training for surgeons, by having the right level of in-house technical expertise, and by maintaining well-trained and knowledgeable support staff with an understanding of the robot.

So, what does an ethical framework for robot surgery look like? Within the EU, most ethics committees work from the following four main ethical challenges:

  • Compromised informed consent: patient autonomy could be put at risk by difficulties in achieving valid informed consent for innovative procedures. Barriers to valid consent include: (i) patients being unaware that they are receiving a new technique, (ii) evidence of the risks and benefits of innovative surgery not being available, and (iii) a general tendency to equate newness with increased benefit. This tendency may be exacerbated by trust in the authority of a surgeon.
  • Conflicts of interest: a surgeon may be influenced unconsciously by the career benefits and elevated social status that follow from being a surgical innovator, or may prefer a device over other options for no well-grounded reason. Surgeons could be biased towards a technique in which they have invested training time, and they could be influenced by ‘brand loyalty’. The institution may also have invested in an innovative procedure, and its wish to be associated with the most recent techniques can likewise give rise to conflicts of interest.
  • Harms to patients: surgical innovation can potentially cause increased mortality and morbidity compared to standard techniques. Surgery is not benign, and there are risks from infection, anaesthesia, and longer hospital stays. But there are also possible financial and psychological harms, as well as loss of trust in the medical profession.
  • Unfair allocation of healthcare resources: fair distribution requires limiting expenditure to interventions with proven safety and efficacy, rather than wasting money on ineffective and/or dangerous procedures. Surgical innovations that are more expensive than existing treatments can divert resources from cheaper, and possibly safer and more effective, options. Yet a common optimism bias among surgeons and institutions creates a tendency to overestimate the positive effects of the new.

It is important for surgical robotics to meet these four ethical challenges if it is to move forward and realise its innovative potential. We have found possible ethical stumbling blocks on all four. In what follows, we concretise the challenges by examining specific cases within surgical robotics.