Joint workshop of the Institute of Computer Graphics and Vision and the Institute of Computer Graphics and Knowledge Visualization
When and where
- Date: Oct 10, 2006
- Time: 13:00-17:00
- Location: ICG Seminar room, Inffeldgasse 16, 2nd floor
13:00 Slot 1 (CGV - Fellner: Strategic Issues of Graphics@Graz)
13:30 Slot 2 (ICG - Schmalstieg: Managing Complex AR Models)
14:00 Slot 3 (CGV - Dold: Motion Compensation in CT Scans of the Human Head)
14:30 Slot 4 (ICG - Mendez: AR Visualization of Subsurface Infrastructure)
15:00 Coffee break
15:30 Slot 5 (CGV - Havemann et al: Semantic 3D and Cultural Heritage)
16:00 Slot 6 (ICG - Grabner: Cartilage Geometry Measurement and Visualization)
16:30 Slot 7 (CGV - Lancelle et al: VR, gestures, and Global Illumination)
Slot 1 (CGV - Fellner: Strategic Issues of Graphics@Graz)
TU Graz is currently establishing itself as a premier centre for computer graphics and vision research and teaching in (and beyond) Austria. This talk will present the major research challenges as well as the promising funding opportunities that have recently opened up. Furthermore, the talk will sketch the exciting new perspectives for collaboration between Graphics@Graz and other major European competence centres for computer graphics research.
Slot 2 (ICG - Schmalstieg: Managing Complex AR Models)
Mobile Augmented Reality requires geo-referenced data to present world-registered overlays to the user. In order to cover a wide area and all the artefacts and activities therein, a database containing all this information must be created, stored, maintained, delivered and finally used by the client application. We present a data model and a family of techniques to address these needs, handling complex and sometimes large AR-specific datasets. Our solutions are based on XML and leverage recent trends in online information systems.
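As a minimal illustration of what an XML-based, geo-referenced data model might look like, the sketch below parses a tiny hypothetical scene description. The element and attribute names (`poi`, `lat`, `lon`, `label`) are illustrative assumptions, not the actual ICG schema.

```python
# Hypothetical geo-referenced XML fragment; schema names are invented
# for illustration and do not reflect the actual data model.
import xml.etree.ElementTree as ET

doc = """
<scene>
  <poi id="valve-17" lat="47.0581" lon="15.4596">
    <label>Gas valve</label>
  </poi>
</scene>
"""

def load_pois(xml_text):
    """Extract (id, latitude, longitude, label) tuples from the scene."""
    root = ET.fromstring(xml_text)
    return [(p.get("id"), float(p.get("lat")), float(p.get("lon")),
             p.findtext("label"))
            for p in root.iter("poi")]

pois = load_pois(doc)   # → [("valve-17", 47.0581, 15.4596, "Gas valve")]
```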
Slot 3 (CGV - Dold: Motion Compensation in CT Scans of the Human Head)
The conventional method of eliminating motion artifacts is to use the tomograph itself to detect and compensate for head motion. This prolongs the examination and disrupts the steady state of the magnetic excitation. The new method presented in this talk instead uses cameras to detect head motion, so that the gradient of the magnetic resonance can be re-targeted on the fly. Both a volume-to-volume and a slice-to-slice correction have been developed and tested on different recorded sequences. The talk will outline the steps still to be taken before the new method, developed in a PhD thesis, can hopefully be applied routinely in a clinical environment.
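The core of such a camera-based correction can be pictured as composing rigid transforms: the measured head motion is inverted and applied as a correction, so that correction and motion cancel. This is only a sketch under that assumption, not the thesis implementation.

```python
# Hypothetical sketch: a camera measures rigid head motion as a 4x4
# homogeneous transform; applying its inverse cancels the motion.
import math

def rot_z(angle, tx=0.0, ty=0.0, tz=0.0):
    """Rigid transform: rotation about z plus translation."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0, tx], [s, c, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def invert_rigid(T):
    """Inverse of [R|t] is [R^T | -R^T t]."""
    Rt = [[T[j][i] for j in range(3)] for i in range(3)]
    t = [T[i][3] for i in range(3)]
    ti = [-(Rt[i][0]*t[0] + Rt[i][1]*t[1] + Rt[i][2]*t[2]) for i in range(3)]
    return [Rt[0] + [ti[0]], Rt[1] + [ti[1]], Rt[2] + [ti[2]], [0, 0, 0, 1]]

motion = rot_z(math.radians(3.0), tx=1.5)   # head moved during the scan
correction = invert_rigid(motion)           # transform to re-target gradients
check = matmul(correction, motion)          # ≈ identity: motion cancelled
```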
Slot 4 (ICG - Mendez: AR Visualization of Subsurface Infrastructure)
At our institute, a research project called VIDENTE is being carried out for the visualisation of GIS data. The objective of VIDENTE is to create a prototype of a mobile Augmented Reality (AR) solution for utility companies. This system will provide real-time visualisation of up-to-date data on subsurface utility networks for maintenance purposes. AR will enable outdoor users on location to see information that they would otherwise have to retrieve from hardcopy or non-registered on-screen visualisations. The prototype mobile system will visualise 3D and semantic information derived from data stored in an underlying Geographic Information System (GIS). This talk will focus on the technical aspects of the VIDENTE project, spanning visualisation techniques, data representation and transport, and sensor fusion.
Slot 5 (CGV - Havemann et al: Semantic 3D and Cultural Heritage)
Computer graphics and vision is currently becoming an integral part of information technology. The more graphics is used, the more important it is that 3D is coupled with semantics: A scanned 3D model of a building should behave like a building, at least it should be possible to infer more or less automatically that it is a building. One day, it shall be possible to edit such a model using the appropriate DOFs of a building. The prerequisite to closing the semantic gap between a 3D model and its meaning is a high-level, "generative" shape representation. Massive amounts of documentary 3D data are generated in the field of Cultural Heritage. These data have to be operated on, i.e., repaired, integrated, segmented and archived, while still retaining the connection to the shape semantics (e.g., part of the capital of a column of a particular Greek temple). We report on our recent efforts to solve this problem by integrating 3D with XML technology.
Slot 6 (ICG - Grabner: Cartilage Geometry Measurement and Visualization)
We present a system to measure geometric properties (e.g., volume, surface area, and contact area) of the cartilage layer in human ankle joints. The cartilage and subchondral bone surfaces are sampled with a stereophotogrammetric device. A configurable pipeline of processing steps is applied to the sampled surface data to compute the desired quantities. The meshes are cleaned (i.e., disconnected parts removed, holes filled, and noise reduced) and aligned to each other such that they represent the cartilage layer as closely as possible. A hierarchical stitching approach creates a single closed surface from two separate (opposing) meshes, which is used for volume calculations. The method has been evaluated with data from 10 lower leg specimens. While our approach confirms the results of previous contact area studies for low curvature regions of the inspected cartilage and bone surfaces, it is superior to existing (mostly mechanical) methods for highly curved regions.
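Once stitching has produced a single closed, outward-oriented triangle mesh, its volume can be obtained with the divergence theorem, summing signed tetrahedra against the origin. This is a minimal sketch of that standard step, not the authors' pipeline code.

```python
# Volume of a closed, outward-oriented triangle mesh via the
# divergence theorem: V = (1/6) * sum over triangles (a,b,c) of a.(b x c)
def mesh_volume(vertices, triangles):
    def cross(u, v):
        return (u[1]*v[2] - u[2]*v[1],
                u[2]*v[0] - u[0]*v[2],
                u[0]*v[1] - u[1]*v[0])
    def dot(u, v):
        return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]
    return sum(dot(vertices[i], cross(vertices[j], vertices[k]))
               for i, j, k in triangles) / 6.0

# sanity check on a unit cube (12 outward-oriented triangles)
cube_verts = [(0,0,0), (1,0,0), (1,1,0), (0,1,0),
              (0,0,1), (1,0,1), (1,1,1), (0,1,1)]
cube_tris = [(0,2,1), (0,3,2), (4,5,6), (4,6,7),
             (0,1,5), (0,5,4), (2,3,7), (2,7,6),
             (0,4,7), (0,7,3), (1,2,6), (1,6,5)]
cube_volume = mesh_volume(cube_verts, cube_tris)   # → 1.0
```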
Slot 7 (CGV - Lancelle et al: VR, gestures, and Global Illumination)
Immersive visualisation (VR) is not used as much as it should be, considering its huge potential for industrial applications. We report on the issues we have solved, the remaining issues we have identified, and our plans to tackle them. For instance, it will be necessary to find new, robust gesture-based methods for non-trivial man-machine communication tasks. Another topic covered by this talk is our work on (near) photo-realistic rendering, both offline with the "modular rendering toolkit" MRT and online, based on efficient GPU-based methods for real-time global illumination.