Title: Latency-aware placement of stream processing operators
Authors: Ecker, Raphael; Karagiannis, Vasileios; Sober, Michael Peter; Ebrahimi, Elmira; Schulte, Stefan
Published in: Euro-Par 2023: Parallel Processing Workshops (Euro-Par 2023), Lecture Notes in Computer Science, vol. 14351, Springer, Cham, 2024, pp. 30-41
ISBN: 978-3-031-50683-3
DOI: 10.1007/978-3-031-50684-0_3
Handle: https://hdl.handle.net/11420/47681
Document type: Conference Paper
Language: English
Date issued: 2024 (record available 2024-05-31)
Keywords: Apache Storm; Edge Computing; Fog Computing; Internet of Things; Stream Processing
DDC class: 005: Computer Programming, Programs, Data and Security

Abstract: The rise of the Internet of Things and fog computing has substantially increased the number of interconnected devices at the edge of the network. As a result, a large number of computations are now performed in the fog, generating vast amounts of data. To process these data in near real time, stream processing is typically employed, since it handles continuous streams of information in a scalable manner. However, most stream processing approaches do not consider the underlying network devices as candidate resources for processing data. Moreover, many existing works do not take into account the network latency incurred by performing computations on multiple devices in a distributed way. Consequently, existing stream processing approaches may not fully exploit the available fog computing resources. To address this, we formulate an optimization problem for utilizing the existing fog resources, and we design heuristics for solving this problem efficiently. Furthermore, we integrate our heuristics into Apache Storm, and we perform experiments that show latency-related benefits compared to alternatives.
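Note: The abstract describes heuristics for latency-aware operator placement but does not spell them out. As a rough illustration only, the following minimal Java sketch shows one plausible greedy heuristic of this general kind: operators are visited in topological order, and each is assigned to the fog node that adds the least network latency to its already-placed upstream operators, subject to a per-node capacity limit. The latency matrix, the slot-based capacity model, and all names (GreedyLatencyAwarePlacement, place) are illustrative assumptions, not the paper's actual formulation.

import java.util.*;

/**
 * Illustrative greedy heuristic for latency-aware operator placement.
 * A sketch under assumed inputs; it does not reproduce the paper's heuristics.
 */
public class GreedyLatencyAwarePlacement {

    /**
     * @param latency  latency[i][j] = network latency (ms) between nodes i and j
     * @param capacity capacity[i]   = number of operator slots on node i
     * @param upstream upstream.get(op) = indices of op's upstream operators,
     *                 with operators listed in topological order
     * @return placement[op] = index of the node hosting operator op
     */
    static int[] place(double[][] latency, int[] capacity, List<int[]> upstream) {
        int numNodes = capacity.length;
        int[] remaining = capacity.clone();
        int[] placement = new int[upstream.size()];

        for (int op = 0; op < upstream.size(); op++) {
            int bestNode = -1;
            double bestCost = Double.POSITIVE_INFINITY;
            for (int node = 0; node < numNodes; node++) {
                if (remaining[node] == 0) continue; // node is full
                // Cost = total latency to the nodes hosting upstream operators,
                // which are already placed thanks to the topological order.
                double cost = 0.0;
                for (int up : upstream.get(op)) {
                    cost += latency[placement[up]][node];
                }
                if (cost < bestCost) {
                    bestCost = cost;
                    bestNode = node;
                }
            }
            if (bestNode < 0) throw new IllegalStateException("no capacity left");
            placement[op] = bestNode;
            remaining[bestNode]--;
        }
        return placement;
    }

    public static void main(String[] args) {
        // Toy example: 3 fog nodes and a source -> filter -> sink pipeline.
        double[][] latency = {
            {0, 5, 20},
            {5, 0, 15},
            {20, 15, 0}
        };
        int[] capacity = {1, 2, 2};
        List<int[]> upstream = List.of(
            new int[]{},   // operator 0: source, no upstream
            new int[]{0},  // operator 1: consumes from the source
            new int[]{1}   // operator 2: consumes from operator 1
        );
        // Prints [0, 1, 1]: the filter and sink co-locate near the source.
        System.out.println(Arrays.toString(place(latency, capacity, upstream)));
    }
}

In a real deployment, such a heuristic would not run standalone; the paper integrates its heuristics into Apache Storm, presumably via Storm's pluggable scheduler mechanism, though the record above gives no integration details.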