Project Description

Stereo Theatre was an Innovate UK-funded project in which i3Dr collaborated with the AMRC Design & Prototyping Group (DPG) to develop a proof-of-concept system capable of producing a high-resolution, real-time 3D map of a patient in an operating theatre environment. The project aimed to enhance the digital operating theatre platform developed by the AMRC DPG.

At the outset, the AMRC’s demonstrator could provide a virtual representation (digital twin) of the real-world theatre and monitor movements with commercial off-the-shelf (COTS) sensors and smart tools. However, it could not produce real-time 3D models of patients.

i3Dr’s remit was to develop a standard Unity interface for the 3D visualisation of patients. The AMRC DPG would then integrate this into its digital operating theatre digital-twin demonstrator platform, developing a TRL 5/6 system capable of producing high-resolution, real-time 3D models of a patient in an operating theatre environment.

Using technologies developed for the Mars Rovers, the Stereo Theatre demonstrator combines a virtual reality digital twin, projection mapping and smart tools that enable the positions of objects and clinicians to be accurately tracked in the theatre space, with relevant information displayed digitally using screens, projections, and augmented reality (AR) devices.

Key challenges:

  • The scanning device needed to be light enough to mount on a ceiling pendant or a person’s arm
  • Space above a patient in an operating theatre is at a premium, so the device needed to be compact
  • The existing design had no IP rating and had gaps in the shell that would allow the ingress of liquids and gases. It could therefore not be used in a sterile environment, nor be sterilised itself.
  • Network streaming – a generic method was needed for transferring the point cloud data between applications: StereoToolkit → Unity. Two approaches were evaluated.

Named Pipe – this did function over the network, but with unacceptable throughput levels.

Shared Memory – direct memory shared between StereoToolkit and Unity. Performance was excellent, but this method was incompatible with remote network viewing.
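
To illustrate the shared-memory mechanism, here is a minimal Python sketch using the standard-library multiprocessing.shared_memory module. It is purely illustrative (the real system shared memory between StereoToolkit and Unity, not Python processes), and the block name and point-cloud shape are assumptions.

    # Illustrative sketch of zero-copy point cloud sharing via shared memory.
    # Block name and cloud shape are assumptions, not project details.
    import numpy as np
    from multiprocessing import shared_memory

    SHAPE = (640 * 480, 3)      # assumed cloud: one (x, y, z) point per pixel
    NAME = "stereo_theatre_pc"  # hypothetical shared-memory block name

    # Producer: create the block and write one frame of points into it.
    shm = shared_memory.SharedMemory(name=NAME, create=True,
                                     size=int(np.prod(SHAPE)) * 4)
    cloud = np.ndarray(SHAPE, dtype=np.float32, buffer=shm.buf)
    cloud[:] = np.random.rand(*SHAPE).astype(np.float32)  # stand-in frame data

    # Consumer: attach to the same block by name and read with no copying.
    shm_view = shared_memory.SharedMemory(name=NAME)
    view = np.ndarray(SHAPE, dtype=np.float32, buffer=shm_view.buf)
    print(view[:3])  # first three points

    # Cleanup: release the NumPy views before closing; the producer unlinks.
    del cloud, view
    shm_view.close()
    shm.close()
    shm.unlink()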

Possible solutions:

  • Stream the camera output over the network and perform the 3D reconstruction on an edge device, eliminating synchronisation issues
  • Use a high-speed lossless compression algorithm on the video feed to reduce network bandwidth requirements
  • Optionally, downsample the video feed resolution at source to reduce bandwidth further
  • The 3D reconstruction is performed on the remote network device where the 3D model is to be consumed
  • Alternatively, the stereo matching is performed local to the cameras
  • The generated depth map is then compressed and streamed; the depth map should contain a smaller range of ‘pixels’ than the two RGB streams combined
  • The depth stream is decompressed and used as needed on the edge device
    The difficulty lay in either splitting a high-bit-depth image into an RGBA container for common image compression algorithms, or performing a more bespoke compression on the depth stream (a sketch of the RGBA approach follows this list)
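
As an illustration of the RGBA-container idea, the following minimal Python sketch packs a 16-bit depth map into two 8-bit channels of an RGBA image so a common lossless codec can carry it. The channel layout and the use of PNG via OpenCV are illustrative assumptions, not the project’s actual format.

    # Sketch: pack a 16-bit depth map into two channels of an RGBA image so a
    # common lossless codec (PNG here) can carry it. The channel layout and
    # codec are assumptions for illustration.
    import numpy as np
    import cv2

    depth = np.random.randint(0, 2**16, (480, 640), dtype=np.uint16)  # stand-in frame

    # Split each 16-bit depth value into a high byte and a low byte.
    rgba = np.zeros((*depth.shape, 4), dtype=np.uint8)
    rgba[..., 0] = (depth >> 8).astype(np.uint8)    # high byte
    rgba[..., 1] = (depth & 0xFF).astype(np.uint8)  # low byte
    rgba[..., 3] = 255                              # opaque alpha

    # Lossless compression is essential; a lossy codec would corrupt depth values.
    ok, packed = cv2.imencode(".png", rgba)

    # Receiver: decode and reassemble the original 16-bit depth map.
    decoded = cv2.imdecode(packed, cv2.IMREAD_UNCHANGED)
    restored = (decoded[..., 0].astype(np.uint16) << 8) | decoded[..., 1]
    assert np.array_equal(restored, depth)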

The current solution was a ‘brute force’ method that updated the entire point cloud and stream for every frame. This delivered the best possible update rate for propagating new changes, but was very inefficient for scenes or areas where nothing was happening. Additionally, there was a subtle jitter, or random surface movement, in each update, as the stereo matching produced slightly different results in each frame.

One solution to this jitter was Temporal Smoothing – by averaging measurements over a number of frames, it should be possible to reduce this moving-surface effect. Careful weighting was required so as not to dampen genuine physical movements.
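
As a minimal sketch of this temporal smoothing, the Python below applies an exponential moving average to successive depth frames, raising the blend weight where a large frame-to-frame change suggests genuine motion. The alpha values and motion threshold are illustrative assumptions.

    # Sketch of temporal smoothing: exponential moving average over depth
    # frames, backing off where changes look like real movement. The alpha
    # values and motion threshold are illustrative assumptions.
    import numpy as np

    ALPHA_STATIC = 0.1       # strong smoothing for jitter-sized changes
    ALPHA_MOVING = 0.9       # follow the new measurement when motion is seen
    MOTION_THRESHOLD = 0.02  # metres; assumed jitter/motion boundary

    def smooth(prev_avg, new_frame):
        """Blend the running average with the new depth frame, per pixel."""
        change = np.abs(new_frame - prev_avg)
        alpha = np.where(change > MOTION_THRESHOLD, ALPHA_MOVING, ALPHA_STATIC)
        return (1.0 - alpha) * prev_avg + alpha * new_frame

    # Usage: keep one running average, updated once per incoming frame.
    avg = np.zeros((480, 640), dtype=np.float32)
    for _ in range(5):
        frame = avg + np.random.normal(0, 0.005, avg.shape).astype(np.float32)
        avg = smooth(avg, frame)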

Exception Filtering – rather than sending every point in every update, analysis could be performed on the 2D images to detect changes, and updated point cloud information could then be sent only for the changed regions. This would greatly reduce the bandwidth requirements for scenarios where most of the field of view is static.
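
One way such exception filtering might work is sketched below: successive 2D frames are differenced, thresholded, and reduced to bounding boxes of changed regions, so point cloud updates need only be generated and sent for those boxes. The threshold, minimum area and use of OpenCV contours are assumptions.

    # Sketch of exception filtering: difference successive greyscale frames
    # and report bounding boxes of changed regions, so only the point cloud
    # inside those boxes is re-sent. Threshold and minimum area are assumed.
    import numpy as np
    import cv2

    DIFF_THRESHOLD = 15  # assumed intensity change that counts as "changed"
    MIN_AREA = 50        # ignore tiny regions that are likely noise

    def changed_regions(prev, curr):
        """Return (x, y, w, h) boxes of regions that changed between frames."""
        diff = cv2.absdiff(prev, curr)
        _, mask = cv2.threshold(diff, DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours
                if cv2.contourArea(c) >= MIN_AREA]

    # Usage: simulate movement in one region of an otherwise static scene.
    prev = np.zeros((480, 640), dtype=np.uint8)
    curr = prev.copy()
    curr[100:150, 200:260] = 200
    print(changed_regions(prev, curr))  # -> [(200, 100, 60, 50)]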

The proof-of-concept demonstration showed the hardware and software are viable tools for use in the operating theatre.

Further development would bring the system much closer to a commercially viable product. Once the product reaches this stage, it would be extremely beneficial to create another demonstration piece – this time with a specific use case and medical expert involvement. Orthopaedic surgery is an ideal candidate for remote 3D observation: it involves a great deal of physical manipulation and intervention, and presents complex 3D positioning challenges that benefit from observation and assistance.

In conclusion:

The Stereo Theatre project addresses the problem that senior consultants can only advise on a limited number of patients due to time and geographical constraints. This contributes to wide variations in hospital performance, patient outcomes, survival statistics and patient satisfaction.

The primary target case for this technology is in operating theatres, where the use of Industry 4.0 technology will enable senior consultants to be part of operations conducted by junior consultants without having to be in the same physical location. This will mean more patients are observed by experienced surgeons, and junior surgeons gain advice from senior surgeons in real scenarios. Stereo Theatre also offers game-changing advances in overcoming the physical constraints of teaching medical students.