Stereo Theatre

Stereo Theatre was an i3d Robotics project, funded by Innovate UK and delivered in collaboration with the AMRC Design & Prototyping Group (DPG), to develop a proof-of-concept system capable of producing a high-resolution, real-time 3D map of a patient in an operating theatre environment. The project was intended to enhance the digital operating theatre platform developed by the AMRC DPG.

At the outset, the AMRC’s demonstrator could provide a virtual representation (digital twin) of the real-world theatre and monitor movements with COTS sensors and smart tools, but it could not produce real-time 3D models of patients.

I3DR’s remit was to develop a standard Unity interface for the 3D visualisation of patients. The AMRC DPG would then integrate this interface into its digital operating theatre digital twin demonstrator, producing a TRL 5/6 system capable of generating high-resolution, real-time 3D models of a patient in an operating theatre environment.

Using technologies developed for the Mars rovers, the Stereo Theatre demonstrator combines a virtual reality digital twin, projection mapping and smart tools, enabling the positions of objects and clinicians to be accurately tracked in the theatre space, with relevant information displayed digitally on screens, projections, and augmented reality (AR) devices.

Key challenges:

  • The scanning device needed to be light enough to mount on a ceiling pendant or a person’s arm
  • Space above a patient in an operating theatre is at a premium, so the device needed to be compact
  • The existing design had no IP rating and had gaps in the shell that would allow ingress of liquids and gases. It could therefore neither be used in a sterile environment nor be made sterile itself.
  • Network streaming – two methods were trialled for transferring the point cloud data between applications (StereoToolkit → Unity). A named pipe, intended as a generic transfer mechanism, did function over the network, but with unacceptable throughput. Shared memory, a direct memory share between StereoToolkit and Unity, gave excellent performance, but was incompatible with remote network viewing (see the sketch after this list).
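
The shared-memory route can be illustrated with a minimal sketch. Python and NumPy stand in here for the native StereoToolkit → Unity bridge, and names such as POINT_COUNT and "stereo_cloud" are illustrative, not taken from the project:

```python
# Minimal sketch of the shared-memory transfer pattern described above.
# Python/NumPy stand in for the native StereoToolkit -> Unity bridge;
# names such as POINT_COUNT and "stereo_cloud" are illustrative.
import numpy as np
from multiprocessing import shared_memory

POINT_COUNT = 100_000  # hypothetical cloud size; one (x, y, z) per point

def write_frame(points: np.ndarray, name: str = "stereo_cloud"):
    """Producer: copy one point-cloud frame into a named shared block."""
    shm = shared_memory.SharedMemory(create=True, size=points.nbytes, name=name)
    view = np.ndarray(points.shape, dtype=points.dtype, buffer=shm.buf)
    view[:] = points               # direct memory copy, no serialisation
    return shm                     # keep the handle so the block stays alive

def read_frame(name: str = "stereo_cloud", count: int = POINT_COUNT):
    """Consumer: attach to the block by name and read without re-streaming."""
    shm = shared_memory.SharedMemory(name=name)
    view = np.ndarray((count, 3), dtype=np.float32, buffer=shm.buf)
    return view.copy(), shm        # copy out, then the caller can close()

if __name__ == "__main__":
    cloud = np.random.rand(POINT_COUNT, 3).astype(np.float32)
    writer = write_frame(cloud)
    received, reader = read_frame()
    assert np.array_equal(cloud, received)
    reader.close()
    writer.close()
    writer.unlink()
```

Because the named block lives in one machine’s RAM, a remote viewer on another host cannot attach to it, which is exactly the limitation that ruled this method out for network viewing.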

Possible solutions:

  • Stream the camera output over the network and perform 3D reconstruction on an edge device, eliminating synchronisation issues
  • Apply a high-speed lossless compression algorithm to the video feed to reduce network bandwidth requirements (see the first sketch after this list)
  • Optionally, downsample the video feed resolution at source to reduce bandwidth requirements further
  • Perform 3D reconstruction on the remote network device where the 3D model is to be consumed

Alternatively:

  • Perform stereo matching local to the cameras
  • Compress and stream the generated depth map
    • The depth map occupies a smaller range of ‘pixels’ than the two RGB streams combined
    • The depth stream is decompressed and used as needed on the edge
    • The difficulty lay in either splitting a high-bit-depth image into an RGBA container for common image compression algorithms, or performing a more bespoke compression on the depth stream (see the second sketch after this list)
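
To put the bandwidth problem in context: two uncompressed 1080p RGB streams at 30 fps amount to roughly 2 × 1920 × 1080 × 3 B × 30 ≈ 373 MB/s (about 3 Gbit/s), well beyond gigabit Ethernet. The sketch below shows the downsample-then-compress idea; zlib stands in for whichever high-speed lossless codec (LZ4, for example) a production system would choose, and the frame sizes are illustrative:

```python
# Sketch of the downsample-then-compress idea. zlib stands in for a
# high-speed lossless codec such as LZ4; frame sizes are illustrative.
import zlib
import numpy as np

def downsample(frame: np.ndarray, factor: int = 2) -> np.ndarray:
    """Drop pixels by striding; halving each axis quarters the data rate."""
    return frame[::factor, ::factor]

def compress_frame(frame: np.ndarray) -> bytes:
    """Losslessly compress a raw frame buffer before it hits the network."""
    return zlib.compress(frame.tobytes(), level=1)  # level 1 = fastest

if __name__ == "__main__":
    # Smooth, repetitive test pattern (real camera frames compress less well).
    row = (np.arange(1920) % 256).astype(np.uint8)
    frame = np.tile(row, (1080, 1))     # single-channel 1080p test frame
    small = downsample(frame)           # 540 x 960, one quarter of the data
    payload = compress_frame(small)
    print(f"{small.nbytes} B -> {len(payload)} B "
          f"({small.nbytes / len(payload):.1f}:1 lossless)")
```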
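
The byte-splitting approach in the last bullet can also be sketched: a 16-bit depth map is separated into high and low byte planes so that byte-oriented image codecs can carry it. This is an illustrative round trip only; the downstream codec stage must still be lossless, since lossy compression would corrupt the high byte and produce large depth errors:

```python
# Sketch of packing a 16-bit depth map into two 8-bit byte planes so it
# can ride through standard byte-oriented image codecs. The split/merge
# itself is bit-exact; the codec applied afterwards must be lossless.
import numpy as np

def pack_depth(depth16: np.ndarray) -> np.ndarray:
    """Split a uint16 depth map into (H, W, 2) uint8: high byte, low byte."""
    high = (depth16 >> 8).astype(np.uint8)
    low = (depth16 & 0xFF).astype(np.uint8)
    return np.stack([high, low], axis=-1)

def unpack_depth(packed: np.ndarray) -> np.ndarray:
    """Recombine the two byte planes into the original uint16 depth map."""
    return (packed[..., 0].astype(np.uint16) << 8) | packed[..., 1]

if __name__ == "__main__":
    depth = np.random.randint(0, 2**16, (480, 640), dtype=np.uint16)
    assert np.array_equal(depth, unpack_depth(pack_depth(depth)))  # bit-exact
```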

The current solution was a ‘brute force’ method: the entire point cloud was updated and streamed for every frame. This delivered the best possible update rate for propagating changes, but was very inefficient for scenes or areas where nothing was happening. It also produced a subtle jitter, or random surface movement, on each update, as the stereo matching gave slightly different results from frame to frame.

An advanced filtering solution to this issue was temporal smoothing: by averaging measurements over a number of frames, it should be possible to reduce this moving-surface effect. Careful weighting was required so as not to damp genuine physical movements.
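
A minimal sketch of that smoothing, assuming per-pixel depth maps arrive as NumPy arrays: an exponential moving average damps the frame-to-frame jitter, while a simple motion gate (both parameter values are illustrative choices, not project figures) passes large changes through unaveraged so real movement is preserved:

```python
# Sketch of temporal smoothing over per-pixel depth values. alpha and
# motion_thresh embody the weighting trade-off described above; both
# values are illustrative.
import numpy as np

class TemporalSmoother:
    def __init__(self, alpha: float = 0.3, motion_thresh: float = 50.0):
        self.alpha = alpha               # weight given to the newest frame
        self.motion_thresh = motion_thresh
        self.state = None                # running average of depth values

    def update(self, depth: np.ndarray) -> np.ndarray:
        new = depth.astype(np.float32)
        if self.state is None:
            self.state = new
            return self.state
        blended = self.alpha * new + (1.0 - self.alpha) * self.state
        # Pass large changes through unaveraged so genuine physical motion
        # is not damped; only small frame-to-frame jitter gets smoothed.
        moved = np.abs(new - self.state) > self.motion_thresh
        self.state = np.where(moved, new, blended)
        return self.state
```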

Exception filtering – rather than sending every point on every update, analysis could be performed on the 2D images to detect changes, with updated point cloud information sent only for the changed regions. This would greatly reduce bandwidth requirements in scenarios where the majority of the field of view is static.
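
A sketch of that change detection, assuming greyscale NumPy frames: consecutive images are diffed and divided into tiles, and only tiles whose mean change exceeds a threshold (block size and threshold are illustrative choices) are flagged for point-cloud regeneration:

```python
# Sketch of exception filtering: diff consecutive 2D frames, divide the
# image into tiles, and flag only changed tiles for point-cloud updates.
# Block size and threshold are illustrative choices.
import numpy as np

def changed_blocks(prev: np.ndarray, curr: np.ndarray,
                   block: int = 32, thresh: float = 8.0):
    """Return (row, col) origins of block x block tiles whose mean absolute
    difference exceeds thresh; only these tiles need fresh point data."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    h, w = diff.shape[:2]
    tiles = []
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            if diff[r:r + block, c:c + block].mean() > thresh:
                tiles.append((r, c))
    return tiles
```

In a largely static theatre scene most tiles fall below the threshold, so each update shrinks to the few regions around the patient and the clinicians’ hands.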

The proof-of-concept demonstration showed that the hardware and software are viable tools for use in the operating theatre.

Further development would bring the system much closer to a commercially viable product. Once it reaches that stage, it would be extremely beneficial to create another demonstration piece, this time with a specific use case and medical expert involvement. Orthopaedic surgery is an ideal candidate for remote 3D observation: it involves extensive physical manipulation and intervention, and poses complex 3D positioning challenges that would benefit from observation and assistance.

In conclusion:

The Stereo Theatre project addresses the problem that senior consultants can advise on only a limited number of patients due to time and geographical constraints. This leads to wide variations in hospital performance, patient outcomes, survival statistics and patient satisfaction.

The primary target for this technology is the operating theatre, where Industry 4.0 technology will enable senior consultants to take part in operations conducted by junior consultants without having to be in the same physical location. This means more patients being observed by experienced surgeons, and junior surgeons gaining advice from senior colleagues in real scenarios. Stereo Theatre also offers game-changing advances in overcoming the physical constraints on the teaching of medical students.