TUHH Open Research
Latency-aware placement of stream processing operators

Publication Type
Conference Paper
Date Issued
2024
Language
English
Author(s)
Ecker, Raphael
Karagiannis, Vasileios
Sober, Michael Peter (Data Engineering E-19)
Ebrahimi, Elmira (Data Engineering E-19)
Schulte, Stefan (Data Engineering E-19)
TORE-URI
https://hdl.handle.net/11420/47681
First published in
Lecture Notes in Computer Science
Number in series
14351
Start Page
30
End Page
41
Citation
29th International Conference on Parallel and Distributed Computing, Euro-Par 2023
Contribution to Conference
29th International Conference on Parallel and Distributed Computing, Euro-Par 2023  
Publisher DOI
10.1007/978-3-031-50684-0_3
Scopus ID
2-s2.0-85192238529
Publisher
Springer
ISBN
978-3-031-50683-3
Abstract
The rise of the Internet of Things and fog computing has substantially increased the number of interconnected devices at the edge of the network. As a result, a large number of computations is now performed in the fog, generating vast amounts of data. To process this data in near real time, stream processing is typically employed due to its efficiency in handling continuous streams of information in a scalable manner. However, most stream processing approaches do not consider the underlying network devices as candidate resources for processing data. Moreover, many existing works do not take into account the network latency incurred by performing computations on multiple devices in a distributed way. Consequently, existing stream processing approaches may not fully exploit the available fog computing resources. To avoid this, we formulate an optimization problem for utilizing the existing fog resources, and we design heuristics for solving this problem efficiently. Furthermore, we integrate our heuristics into Apache Storm and perform experiments that show latency-related benefits compared to alternatives.
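
To illustrate the general idea of latency-aware operator placement described in the abstract, the following is a minimal, hypothetical sketch of a greedy heuristic that assigns the operators of a linear stream processing topology to fog nodes while minimizing the network latency added along the path. The node names, latencies, capacities, and the heuristic itself are invented for demonstration; they are not taken from the paper and do not use Apache Storm's scheduler API.

"""
Illustrative sketch only: a generic greedy latency-aware operator placement
heuristic over a small, invented fog topology. Not the formulation or
heuristic from the paper.
"""

# Hypothetical fog/cloud nodes with remaining CPU capacity (arbitrary units).
node_capacity = {"edge-1": 2, "edge-2": 2, "fog-1": 4, "cloud": 8}

# Hypothetical pairwise network latency in milliseconds (symmetric).
latency = {
    ("edge-1", "edge-2"): 5, ("edge-1", "fog-1"): 10, ("edge-1", "cloud"): 60,
    ("edge-2", "fog-1"): 10, ("edge-2", "cloud"): 60, ("fog-1", "cloud"): 50,
}

def link_latency(a: str, b: str) -> float:
    """Latency between two nodes; zero when operators are co-located."""
    if a == b:
        return 0.0
    return latency.get((a, b), latency.get((b, a), float("inf")))

# A linear topology: source -> op1 -> op2 -> sink, each with a CPU demand.
# The source is pinned to the edge node where the data originates.
operators = [
    ("source", 1, "edge-1"),
    ("op1", 1, None),
    ("op2", 2, None),
    ("sink", 1, None),
]

def greedy_placement(operators, capacity):
    """Place operators one by one, choosing the feasible node that adds the
    least network latency relative to the previously placed operator."""
    remaining = dict(capacity)
    placement = {}
    prev_node = None
    for name, demand, pinned in operators:
        candidates = [pinned] if pinned else [
            n for n, cap in remaining.items() if cap >= demand
        ]
        best = min(
            candidates,
            key=lambda n: link_latency(prev_node, n) if prev_node else 0.0,
        )
        placement[name] = best
        remaining[best] -= demand
        prev_node = best
    return placement

if __name__ == "__main__":
    plan = greedy_placement(operators, node_capacity)
    total = sum(
        link_latency(plan[a[0]], plan[b[0]])
        for a, b in zip(operators, operators[1:])
    )
    print(plan, "end-to-end path latency:", total, "ms")

In this toy setup the heuristic keeps operators co-located on the source's edge node until its capacity runs out and only then spills to neighboring nodes, yielding a 15 ms path latency; the paper instead formulates the placement as an optimization problem and evaluates its heuristics inside Apache Storm.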
Subjects
Apache Storm
Edge Computing
Fog Computing
Internet of Things
Stream Processing
DDC Class
005: Computer Programming, Programs, Data and Security