TUHH Open Research

Browsing by Institute "Bildverarbeitungssysteme E-2 (H)"

Now showing 1 - 5 of 5
  • Dense 3D Reconstruction from Monocular Mini-Laparoscopic Sequences (Project, without files)
    3D reconstruction with a single moving camera is possible only up to a global scale factor. With a stereo camera rig whose fields of view overlap, any object inside the overlap can be reconstructed including its scale, and therefore the whole scene can be reconstructed at global scale. There are, however, situations where the camera setup is fixed without overlap, or where the number of cameras is limited and as much of the field of view as possible should be used. In such situations, well-known stereo algorithms cannot be applied, but it is still possible to reconstruct the scene including global scale. In this project, such algorithms are developed, extended and evaluated. (The scale ambiguity is illustrated in the first sketch after this list.)
    Funder: Technische Universität Hamburg
    Start Date: 2015-01-01
    End Date: 2018-12-31
    Principal Investigator: Grigat, Rolf-Rainer
    Institute: Bildverarbeitungssysteme E-2 (H)
  • Edge-Based Surface Reconstruction and Object Detection (Project, without files)
    Camera-based surface reconstruction is a major problem in computer vision. Existing approaches struggle with scenes that feature only sparse texture or none at all. For untextured scenes, edges can still provide enough information for a complete 3D reconstruction or for object detection. The aim of this project is to develop algorithms that cope well with untextured as well as textured environments. A key concept is the use of frame-spanning correspondences between 2D and 3D edges obtained via Simultaneous Localization And Mapping (a minimal correspondence step is sketched after this list). This enables a hybrid 2D/3D image analysis that strives to imitate natural image and scene understanding.
    Funder: Technische Universität Hamburg
    Start Date: 2014-02-01
    End Date: 2022-01-31
    Principal Investigator: Grigat, Rolf-Rainer
    Institute: Bildverarbeitungssysteme E-2 (H)
  • Learning an invariant distance metric (Project, without files)
    The need to measure the distance or similarity between data points is omnipresent in machine learning, pattern recognition and data mining, but finding a good metric for a particular problem is in general challenging. This has led to the emergence of metric learning, which aims to automatically learn a distance function tuned to a specific task. For many tasks and data types there are natural transformations to which the classification result should be invariant or insensitive. This demand and its implications are essential in many machine learning applications; insensitivity to image transformations was initially achieved by using invariant feature vectors. The aim of this project is to learn a metric that is invariant to transformations that might be applied to the data, such as horizontal and vertical translation, global scale, rotation, line thickness, shear, and illumination changes. As a first idea, we take advantage of the projection metric on Grassmann manifolds (a small numerical illustration of this metric follows after this list).
    Funder: Sonstige Vereine/Verbände
    Start Date: 2017-02-01
    End Date: 2018-12-31
    Principal Investigators: Grigat, Rolf-Rainer; Goudarzi, Zahra
    Institute: Bildverarbeitungssysteme E-2 (H)
  • Real-time data processing for serial crystallography experiments (Project, without files)
    In many scientific and industrial applications, the structure of molecules is of high interest; in pharmacy, for example, one is interested in the structure of viruses or amino acids. Microscopes using visible light cannot resolve such structures, since the resolution is limited by the wavelength of the light. The solution is to use light sources with higher energy, such as X-ray sources. We use X-ray diffraction to extract information about molecules from artificially grown microcrystals. The X-ray sources are typically X-ray Free Electron Lasers (XFELs), which can deliver a very high radiation dose, far in excess of what a crystal could normally tolerate, within a time span on the order of femtoseconds. The crystal diffracts the X-rays before it is destroyed, overcoming the effects of radiation damage [1]. Because beam time is scarce, optimizing the data collection process is crucial for obtaining good results, so real-time analysis and monitoring of the collected data is of great interest. We develop algorithms and tools that meet the real-time constraints of current X-ray sources while matching or improving on the results of the conventional methods of choice (a minimal peak-finding sketch follows after this list).
    Funder: Deutsches Elektronen-Synchrotron DESY
    Start Date: 2016-04-01
    End Date: 2019-12-31
    Principal Investigator: Grigat, Rolf-Rainer
    Institute: Bildverarbeitungssysteme E-2 (H)
  • Sign Language and Mouth Gesture Recognition (Project, without files)
    In sign languages there are two different kinds of mouth patterns. Mouthings are derived from a spoken language, and their detection is therefore very similar to lip reading. Mouth gestures, on the other hand, have formed within the sign languages themselves and bear no relation to spoken languages [1]. In this project we aim at the automatic detection and classification of mouth gestures in German Sign Language, both to facilitate research on the usage of mouth gestures and to advance the open problem of reliable automatic sign language recognition. To this end, we are developing a deep neural network that can be trained to learn the spatiotemporal features of mouth gestures (a minimal sketch of such a network follows after this list).
    [1] Keller, J. (2001). Multimodal Representation and the Linguistic Status of Mouthings in German Sign Language (DGS). In: The Hands are the Head of the Mouth: The Mouth as Articulator in Sign Languages (pp. 191-230). Hamburg: Signum Verlag.
    Funder: Universität Hamburg
    Start Date: 2016-10-01
    End Date: 2020-12-31
    Principal Investigators: Grigat, Rolf-Rainer; Brumm, Maren
    Institute: Bildverarbeitungssysteme E-2 (H)
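
The sketches below are illustrative code written for this page, not code taken from the projects above; all names, parameters and data in them are made up.

For the monocular reconstruction project, the scale ambiguity can be demonstrated in a few lines of NumPy: scaling the 3D points and the camera translation by the same factor leaves every image projection unchanged, so a single moving camera alone cannot recover the global scale. A minimal sketch with a toy pinhole camera:

    import numpy as np

    rng = np.random.default_rng(0)

    K = np.array([[800.0, 0.0, 320.0],       # toy pinhole intrinsics
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    R = np.eye(3)                             # camera rotation
    t = np.array([0.1, 0.0, 0.0])             # camera translation (baseline)
    X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(10, 3))   # random 3D points

    def project(K, R, t, X):
        """Project Nx3 points into the image (perspective division)."""
        x = (K @ (R @ X.T + t[:, None])).T
        return x[:, :2] / x[:, 2:3]

    s = 2.5                                   # arbitrary global scale factor
    orig = project(K, R, t, X)
    scaled = project(K, R, s * t, s * X)      # scale scene and baseline together
    print(np.allclose(orig, scaled))          # True: the projections are identical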
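
For the edge-based reconstruction project, one building block suggested by the description is relating 3D edges to 2D edges in the current frame once the camera pose from SLAM is known. The sketch below uses hypothetical data and a plain nearest-edge-pixel test rather than the project's actual matching: it projects 3D edge points with a known pose and keeps those that land near a detected 2D edge pixel.

    import numpy as np

    def project(K, R, t, X):
        """Pinhole projection of Nx3 points to Nx2 pixel coordinates."""
        x = (K @ (R @ X.T + t[:, None])).T
        return x[:, :2] / x[:, 2:3]

    # Hypothetical SLAM output: intrinsics and pose of the current frame.
    K = np.array([[500.0, 0.0, 160.0],
                  [0.0, 500.0, 120.0],
                  [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.array([0.0, 0.0, 0.0])

    # Hypothetical 3D edge points (e.g., triangulated from earlier frames).
    X_edges = np.array([[0.0, 0.0, 2.0],
                        [0.1, 0.0, 2.0],
                        [0.2, 0.0, 2.0]])

    # Hypothetical 2D edge map of the current frame (True = edge pixel).
    edge_map = np.zeros((240, 320), dtype=bool)
    edge_map[120, :] = True                   # a horizontal image edge

    def match_edges(X_edges, edge_map, max_dist=2.0):
        """Keep 3D edge points whose projection lies near a 2D edge pixel."""
        uv = project(K, R, t, X_edges)
        vs, us = np.nonzero(edge_map)         # coordinates of all edge pixels
        edge_px = np.stack([us, vs], axis=1).astype(float)
        matches = []
        for i, p in enumerate(uv):
            if np.linalg.norm(edge_px - p, axis=1).min() <= max_dist:
                matches.append(i)
        return matches

    print(match_edges(X_edges, edge_map))     # indices of 3D points with 2D support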
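
For the metric-learning project, the projection metric between two k-dimensional subspaces with orthonormal bases U_A and U_B can be written as d(A, B) = ||U_A U_A^T - U_B U_B^T||_F / sqrt(2), which equals the square root of the sum of squared sines of the principal angles. A small NumPy illustration of the formula with random data:

    import numpy as np

    def orthonormal_basis(A):
        """Orthonormal basis of the column span of A (thin QR)."""
        Q, _ = np.linalg.qr(A)
        return Q

    def projection_metric(A, B):
        """Projection metric between span(A) and span(B):
        d = ||Ua Ua^T - Ub Ub^T||_F / sqrt(2)."""
        Ua, Ub = orthonormal_basis(A), orthonormal_basis(B)
        Pa, Pb = Ua @ Ua.T, Ub @ Ub.T         # orthogonal projectors
        return np.linalg.norm(Pa - Pb, "fro") / np.sqrt(2)

    rng = np.random.default_rng(1)
    A = rng.standard_normal((10, 3))          # two random 3-dim subspaces of R^10
    B = rng.standard_normal((10, 3))

    print(projection_metric(A, B))            # lies in [0, sqrt(3)]
    print(projection_metric(A, A @ rng.standard_normal((3, 3))))  # ~0: same span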
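
For the serial-crystallography project, a typical real-time step in such pipelines is finding Bragg-peak candidates in each detector frame. The sketch below uses a plain local-maximum test with SciPy on a synthetic frame; it only illustrates the idea and is not the project's algorithm or its parameters.

    import numpy as np
    from scipy.ndimage import maximum_filter

    def find_peaks(frame, snr=6.0, window=5):
        """Return (row, col) of local maxima that stand out from the background."""
        background = np.median(frame)
        noise = np.std(frame)
        local_max = frame == maximum_filter(frame, size=window)
        strong = frame > background + snr * noise
        return np.argwhere(local_max & strong)

    # Synthetic detector frame: Poisson background plus a few bright "peaks".
    rng = np.random.default_rng(2)
    frame = rng.poisson(5.0, size=(256, 256)).astype(float)
    for r, c in [(40, 50), (120, 200), (230, 30)]:
        frame[r, c] += 500.0                  # hypothetical Bragg peaks

    print(find_peaks(frame))                  # coordinates of the injected peaks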
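
For the sign-language project, one common way to learn spatiotemporal features from short video clips is a 3D convolutional network. The PyTorch sketch below is a hypothetical minimal architecture (clip size, channel counts and the number of gesture classes are invented), not the network developed in the project.

    import torch
    import torch.nn as nn

    class MouthGestureNet(nn.Module):
        """Tiny 3D CNN; input is a clip of shape (batch, 1, frames, height, width)."""
        def __init__(self, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1),   # spatiotemporal conv
                nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),                      # global pooling
            )
            self.classifier = nn.Linear(32, num_classes)

        def forward(self, clip):
            x = self.features(clip).flatten(1)
            return self.classifier(x)

    # Two hypothetical grayscale clips: 16 frames of 64x64 mouth crops each.
    clip = torch.randn(2, 1, 16, 64, 64)
    model = MouthGestureNet(num_classes=10)
    print(model(clip).shape)                  # torch.Size([2, 10])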