TUHH Open Research
CASAM: Collaborative human-machine annotation of multimedia

Publication Type
Journal Article
Date Issued
2013-03-09
Language
English
Author(s)
Hendley, Robert J.  
Beale, Russell  
Bowers, Chris P.  
Georgousopoulos, Christos  
Vassiliou, Charalampos  
Petridis, Sergios  
Möller, Ralf  
Karstens, Eric  
Spiliotopoulos, Dimitris  
Institute
Softwaresysteme E-16  
TORE-URI
http://hdl.handle.net/11420/9927
Journal
Multimedia tools and applications  
Volume
70
Issue
2
Start Page
1277
End Page
1308
Citation
Multimedia Tools and Applications 70 (2): 1277-1308 (2014)
Publisher DOI
10.1007/s11042-012-1255-1
Scopus ID
2-s2.0-84901988355
Publisher
Springer Science + Business Media B.V.
Abstract
The CASAM multimedia annotation system implements a model of cooperative annotation between a human annotator and automated components. The aim is that they work asynchronously but together. The system focuses upon the areas where automated recognition and reasoning are most effective and the user is able to work in the areas where their unique skills are required. The system's reasoning is influenced by the annotations provided by the user and, similarly, the user can see the system's work and modify and, implicitly, direct it. The CASAM system interacts with the user by providing a window onto the current state of annotation, and by generating requests for information which are important for the final annotation or to constrain its reasoning. The user can modify the annotation, respond to requests and also add their own annotations. The objective is that the human annotator's time is used more effectively and that the result is an annotation that is both of higher quality and produced more quickly. This can be especially important in circumstances where the annotator has a very restricted amount of time in which to annotate the document. In this paper we describe our prototype system. We expand upon the techniques used for automatically analysing the multimedia document, for reasoning over the annotations generated and for the generation of an effective interaction with the end-user. We also present the results of evaluations undertaken with media professionals in order to validate the approach and gain feedback to drive further research. © 2013 The Authors.
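The abstract describes the cooperative loop only at a high level: automated components propose annotations and raise requests for information, while the human annotator confirms, corrects and extends them, and this feedback in turn steers the system's reasoning. The Python sketch below is a hypothetical illustration of such a loop under those assumptions; none of the names (AnnotationSession, machine_annotate, InformationRequest, the 0.6 confidence threshold) are taken from CASAM itself, and the sketch is not the system's actual API or reasoning component.

# Hypothetical sketch of a cooperative human-machine annotation loop.
# All class and method names are invented for illustration; CASAM's real
# components (media analysis, ontology-based reasoning, UI) are far richer.
from dataclasses import dataclass
from queue import Queue

@dataclass
class Annotation:
    segment: str            # e.g. a shot, time interval or document region
    label: str              # concept asserted for that segment
    source: str             # "machine" or "human"
    confidence: float = 1.0

@dataclass
class InformationRequest:
    question: str           # what the system asks the user to confirm or supply
    segment: str

class AnnotationSession:
    """Shared annotation state: automated components and the user both read
    and write it, and the system queues requests for the user's attention."""

    def __init__(self):
        self.annotations: list[Annotation] = []
        self.requests: Queue = Queue()

    # --- automated side ---------------------------------------------------
    def machine_annotate(self, segment: str, label: str, confidence: float):
        self.annotations.append(Annotation(segment, label, "machine", confidence))
        # Low-confidence hypotheses become requests instead of silent assertions
        # (the 0.6 threshold is an arbitrary illustrative choice).
        if confidence < 0.6:
            self.requests.put(InformationRequest(
                f"Is '{label}' correct for {segment}?", segment))

    # --- human side ---------------------------------------------------------
    def user_annotate(self, segment: str, label: str):
        self.annotations.append(Annotation(segment, label, "human"))
        # Human input implicitly directs further reasoning, here crudely
        # modelled by down-weighting conflicting machine hypotheses.
        for a in self.annotations:
            if a.source == "machine" and a.segment == segment and a.label != label:
                a.confidence *= 0.5

    def answer_request(self, request: InformationRequest, accepted: bool):
        for a in self.annotations:
            if a.source == "machine" and a.segment == request.segment:
                a.confidence = 1.0 if accepted else 0.0

# Example run: the machine proposes an annotation, the user confirms it and
# adds a human annotation of their own for the same segment.
session = AnnotationSession()
session.machine_annotate("shot 00:01:10-00:01:25", "interview", 0.55)
req = session.requests.get()
session.answer_request(req, accepted=True)
session.user_annotate("shot 00:01:10-00:01:25", "politician speaking")

In this toy version the two sides share one in-process object; in an asynchronous setting, as the abstract describes, the annotation state and request queue would instead be mediated by a service so that analysis, reasoning and user interaction can proceed concurrently.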
Subjects
Annotation
Artificial Intelligence
Collaborative
Human
Ontology
Synergistic
Video
DDC Class
004: Computer Science