(EN) Man-Machine Interaction Models and Immersive Collaboration

Medical information about patients in the form of multimedia electronic health records (MEHRs) is increasingly becoming a collection of multimodal, multidimensional and multisensory data representations, in which plain 2D radiological data is observed and immersively analysed together with dynamic 3D reconstructions, motion images and auditory information. This demands new ways of interacting with and experiencing data, so that the physician has comfortable and flexible access to all of these data in full resolution at the same time and in the same place. Both local analysis of multimedia patient data and remote collaboration over shared, synchronously experienced patient information have to be offered to the physician through new forms of virtual environments.
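
To make this concrete, a minimal sketch in Python is given below; the class and field names are hypothetical and do not follow any particular MEHR standard, but they show how 2D images, 3D reconstructions, motion images and audio could be kept together in one patient record:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MediaItem:
    """One modality-specific entry of a multimedia EHR (hypothetical structure)."""
    modality: str          # e.g. "CT", "XR", "US-video", "auscultation-audio"
    dimensionality: str    # "2D", "3D", "2D+t", "audio"
    uri: str               # where the full-resolution data is stored
    description: str = ""

@dataclass
class MultimediaEHR:
    """A patient's record as a collection of multimodal media items."""
    patient_id: str
    items: List[MediaItem] = field(default_factory=list)

    def by_modality(self, modality: str) -> List[MediaItem]:
        """Select all items of a given modality for display or analysis."""
        return [item for item in self.items if item.modality == modality]

# Example: a record mixing 2D radiology, a 3D volume and auditory information.
record = MultimediaEHR(
    patient_id="P-0001",
    items=[
        MediaItem("CT", "3D", "pacs://studies/123/ct", "Thorax CT volume"),
        MediaItem("XR", "2D", "pacs://studies/124/xr", "Chest X-ray"),
        MediaItem("auscultation-audio", "audio", "media://p0001/heart.wav"),
    ],
)
print([item.description for item in record.by_modality("CT")])
```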

A virtual environment exists when computer-generated sensory information, such as sounds and images, gives an individual the sensation of being in an environment different from the one he is really in. With sufficiently advanced technology, an individual would be unable to distinguish the virtual environment from his actual surroundings.

This characteristic of virtual environments is called immersion. The degree of immersion needed depends on the application the environment was built for: some cases require only image generation, while others require that the environment can be manipulated by the individuals. Immersion also depends on equipment that directly stimulates the individual’s sensory organs and captures his movements, such as head-mounted displays (which generate images directly in front of the individual’s eyes) and data gloves (gloves that report every hand movement of the individual to the computer).
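
As an illustration of how such devices feed an immersive application, the sketch below shows a simplified render loop; the HeadMountedDisplay and DataGlove classes are hypothetical stand-ins for real device drivers, not an actual API:

```python
import time

class HeadMountedDisplay:
    """Hypothetical HMD interface: exposes head pose and shows one frame per eye."""
    def head_pose(self):
        # In a real system this would come from the HMD's tracking hardware.
        return {"position": (0.0, 1.7, 0.0), "orientation": (0.0, 0.0, 0.0, 1.0)}

    def present(self, left_image, right_image):
        pass  # Send one image per eye to the display.

class DataGlove:
    """Hypothetical data glove interface: reports hand position and finger flexion."""
    def read(self):
        return {"hand_position": (0.1, 1.2, -0.3), "finger_flexion": [0.0] * 5}

def render_scene(scene, head_pose, eye):
    """Placeholder renderer: would draw the scene from the given eye's viewpoint."""
    return f"frame for {eye} eye at {head_pose['position']}"

def immersive_loop(scene, hmd, glove, seconds=1.0):
    """Core idea: every frame, read the sensors, update the scene, redraw both eyes."""
    end = time.time() + seconds
    while time.time() < end:
        pose = hmd.head_pose()          # imagery follows the user's head
        hand = glove.read()             # hand movements manipulate the scene
        scene["grabbed"] = hand["finger_flexion"][0] > 0.5
        hmd.present(render_scene(scene, pose, "left"),
                    render_scene(scene, pose, "right"))
        time.sleep(0.016)               # roughly 60 frames per second

immersive_loop({"grabbed": False}, HeadMountedDisplay(), DataGlove())
```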

Through the use of virtual environments with a level of immersion that allows the manipulation of objects, immersive and realistic surgery planning becomes possible. Surgery planning within a virtual environment means carrying out a surgery that would supposedly be applied to a given patient while manipulating only computer-generated geometric models, before the same procedure is carried out in a real operating room.

The geometric models are generated through the analysis of a given patient’s medical image data, which results in personalized models for each patient and his pathologies. With the high-fidelity data produced by today’s image acquisition equipment, such as computed tomography and magnetic resonance imaging, highly realistic and patient-adapted geometric models can be built.
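
As one possible way to build such models (a sketch only, assuming the open-source VTK library, a directory of DICOM CT slices at an example path, and an arbitrary bone-like isovalue), an isosurface can be extracted from the volume with the marching cubes algorithm and simplified for interactive use:

```python
import vtk

# Read a CT volume from a directory of DICOM slices (path is an example).
reader = vtk.vtkDICOMImageReader()
reader.SetDirectoryName("/data/patient_0001/ct")
reader.Update()

# Extract an isosurface; the isovalue (roughly bone density in
# Hounsfield units) is an assumption and would be tuned per study.
surface = vtk.vtkMarchingCubes()
surface.SetInputConnection(reader.GetOutputPort())
surface.SetValue(0, 300)

# Reduce the triangle count so the model can be manipulated interactively.
decimate = vtk.vtkDecimatePro()
decimate.SetInputConnection(surface.GetOutputPort())
decimate.SetTargetReduction(0.5)

# Smooth the mesh to remove staircase artifacts from the voxel grid.
smooth = vtk.vtkSmoothPolyDataFilter()
smooth.SetInputConnection(decimate.GetOutputPort())
smooth.SetNumberOfIterations(30)

# Write the patient-specific geometric model for use in the virtual environment.
writer = vtk.vtkSTLWriter()
writer.SetInputConnection(smooth.GetOutputPort())
writer.SetFileName("patient_0001_bone.stl")
writer.Write()
```

The resulting mesh is the kind of patient-adapted model the physician would load and manipulate inside the virtual environment.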

In practical terms, virtual surgery planning provides the following capabilities:

  • To carry out surgical procedures on realistic, patient-adapted geometric models in an immersive way, allowing the physician to accurately evaluate the surgical techniques to be used.
  • To give demonstrations of surgical procedures to audiences at local and remote sites.
  • To perform cooperative surgery simulations with physicians at local and remote sites, where they share experiences in a practical way (a minimal synchronization sketch follows this list).
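
The cooperative simulation in the last item implies that manipulations performed at one site must be reflected at all the others. The sketch below only illustrates that idea as a plain JSON-over-TCP event relay; the message fields and port number are assumptions, not a description of any particular system:

```python
import json
import socket
import threading

PORT = 9500  # arbitrary example port
clients = []

def broadcast(event: dict, sender: socket.socket) -> None:
    """Forward a manipulation event from one participant to all the others."""
    message = (json.dumps(event) + "\n").encode()
    for client in clients:
        if client is not sender:
            client.sendall(message)

def handle_client(conn: socket.socket) -> None:
    clients.append(conn)
    buffer = b""
    try:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            buffer += data
            while b"\n" in buffer:
                line, buffer = buffer.split(b"\n", 1)
                # e.g. {"object": "scalpel", "position": [0.1, 0.2, 0.3]}
                broadcast(json.loads(line), conn)
    finally:
        clients.remove(conn)
        conn.close()

def run_server() -> None:
    """Accept local and remote participants and relay their manipulation events."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", PORT))
    server.listen()
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    run_server()
```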

About Alexandre Savaris

He holds a bachelor's degree in Computer Science from the Universidade do Oeste de Santa Catarina - UNOESC (2000), a specialization in Computer Networks and Distributed Systems from the Centro Universitário Diocesano do Sudoeste do Paraná - UNICS (2004) and a master's degree in Computer Science from the Universidade Federal de Santa Catarina - UFSC (2010). He is currently a PhD student in Informatics at the Universidade Federal do Paraná - UFPR, a researcher at the Instituto Nacional de Ciência e Tecnologia para Convergência Digital (INCoD), DBA/project manager of the Rede Catarinense de Telemedicina (RCTM) and a professor in the undergraduate Information Systems program at Faculdades Barddal. He has experience in the field of Computer Science, with an emphasis on Computational Intelligence, working mainly in the following areas: Human-Computer Interaction (HCI), databases and software development in the Java language.