Browsing by Institute "Algorithmen und Komplexität E-11"
Now showing 1 - 8 of 8
Project without files
A Fixed-Parameter Approach towards Combinatorial Optimization
Algorithms for optimization problems on graphs are among the most investigated topics in theoretical and practical algorithm design, due to their generality in modelling vast arrays of applications and the rich set of algorithmic tools that has been developed for them over the past 50-60 years. About 30 years ago, a new approach was put forward to break out of the classical P vs. NP dichotomy for optimization problems and to propose a fine-grained hierarchy of problem complexities through the new theory of Parameterized Complexity. However, almost all of these algorithmic tools have been designed for graph problems or graph-related problems, which are highly combinatorial in nature. In practical applications, however, it is far more common that the problems to be solved come with numerical input data. Such problems are difficult or impossible to address with the current tools of parameterized complexity. In this project we will therefore develop new tools for Parameterized Complexity that can deal with numeric data.
Acronym: FIPACO
Start Date: 2019-01-01
End Date: 2021-12-31
Institute:
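To make the notion of a fixed-parameter algorithm concrete, the following Python sketch shows the textbook bounded search tree for Vertex Cover parameterized by the solution size k: the exponential part of the running time (at most 2^k branches) depends only on the parameter, not on the size of the graph. This is a generic illustration of the paradigm, not an algorithm from the FIPACO project itself.

```python
# Illustrative sketch only: classic FPT branching for Vertex Cover.
# Running time is roughly O(2^k * (n + m)) -- exponential only in k.

def vertex_cover_fpt(edges, k):
    """Return a vertex cover of size <= k if one exists, else None."""
    uncovered = next(iter(edges), None)   # pick any edge not yet covered
    if uncovered is None:
        return set()                      # all edges covered
    if k == 0:
        return None                       # budget exhausted, edges remain
    u, v = uncovered
    # Branch: any cover of this edge must contain u or v.
    for chosen in (u, v):
        rest = {e for e in edges if chosen not in e}
        sub = vertex_cover_fpt(rest, k - 1)
        if sub is not None:
            return sub | {chosen}
    return None

if __name__ == "__main__":
    g = {(1, 2), (1, 3), (2, 4), (3, 4)}
    print(vertex_cover_fpt(g, 2))   # e.g. {1, 4} covers every edge
```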
Project without files
Resilient Broadcasting via Independent Spanning Trees
A classic theoretical measure for the resilience of a communication network is its edge-connectivity. For some network problems, however, it is not known whether this measure is suitable at all. In resilient broadcasting, for example, one prescribed vertex r communicates with every other vertex through a set of spanning trees such that, for every vertex v, the paths from r to v in these spanning trees are edge-disjoint (such trees are called independent). It is unknown, however, how exactly the maximum number of independent spanning trees in a network relates to its edge-connectivity, and nothing more is known for the analogous question regarding vertex failures or for the complexity of finding such independent spanning trees.
In fact, the long-standing Edge-Independent Spanning Tree Conjecture in graph theory states that every k-edge-connected network contains k independent spanning trees. A proof of this conjecture would not only characterize the networks in which resilient broadcasting is possible, but arguably also deliver the structural insights necessary for the compact design of such networks and for efficient routing schemes within them. Although there has been recent structural and algorithmic progress on this conjecture for small k (in which case the conjecture is true), a generalization to higher k has not been achieved so far.
This project aims at attacking the Edge-Independent Spanning Tree Conjecture for higher k, using the recently proposed structures. We will use graph-theoretic methods to search for the right generalization to higher k, as well as computer-assisted algorithms to support this search.
Acronym: Broadcasting Spannbäume
Start Date: 2020-10-01
End Date: 2022-09-30
Principal Investigator:
Institute:
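For concreteness, the following Python sketch checks the definition used above: rooted spanning trees (given as parent maps) are edge-independent if, for every vertex v, their r-v paths share no edge. The example graph and trees are illustrative and not taken from the project.

```python
# Illustrative sketch only: verify edge-independence of rooted spanning trees.

def path_edges(parent, r, v):
    """Edges on the unique path from v up to the root r in a tree
    given as a parent map {child: parent}."""
    edges = set()
    while v != r:
        p = parent[v]
        edges.add(frozenset((v, p)))   # undirected edge
        v = p
    return edges

def edge_independent(parents, r, vertices):
    """Check pairwise edge-independence of the given rooted spanning trees."""
    for v in vertices:
        paths = [path_edges(p, r, v) for p in parents]
        for i in range(len(paths)):
            for j in range(i + 1, len(paths)):
                if paths[i] & paths[j]:   # shared edge -> not independent
                    return False
    return True

if __name__ == "__main__":
    # Two spanning trees of the 4-cycle 1-2-3-4-1, rooted at r = 1.
    t1 = {2: 1, 3: 2, 4: 3}   # path 1-2-3-4
    t2 = {4: 1, 3: 4, 2: 3}   # path 1-4-3-2
    print(edge_independent([t1, t2], 1, [2, 3, 4]))   # True
```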
Project without files
Data-driven decentralized energy trading
Start Date: 2021-09-01
End Date: 2025-08-31
Principal Investigator:
Institute:
Project without files
Ganzheitliche Fluglandeplanung mit reduzierter Lärm- und Schadstoffemission (holistic flight landing planning with reduced noise and pollutant emissions)
Acronym: Flugplan
Funder:
Start Date: 2022-03-01
End Date: 2023-08-31
Principal Investigator:
Institute:
Project without files
Kernelization for Big Data
The main research goal of this project is the quest for a rigorous mathematical theory of input-output efficient preprocessing. This new theory will provide the computational tools to design powerful algorithms that preprocess very large instances of hard problems and efficiently compress them to smaller instances of guaranteed size. Our motivation is that current preprocessing routines with a compression guarantee (kernelizations) cannot handle very large instances that do not fit into main memory. The theory also seeks to rigorously explain the practical successes of preprocessing very large instances with algorithms that lack a compression guarantee (heuristics), and will lead to a concept of computational intractability that explains the limitations of heuristics. The project aims to design preprocessing algorithms that harness the full capabilities of modern processor technology and the memory hierarchies of computing hardware in science and industry, in order to efficiently compress big data sets. With new multivariate computational models that exploit instance structure and hardware structure at the same time, we will deepen the understanding of the mathematical origins of compressibility and build more powerful algorithms for preprocessing massive data sets.
Start Date: 2019-09-01
End Date: 2021-01-31
Principal Investigator:
Institute:
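As a toy illustration of what a kernelization is (not one of the project's algorithms), the following Python sketch implements the classic Buss kernel for Vertex Cover: vertices of degree larger than the remaining budget k are forced into the cover, and a reduced yes-instance can have at most k^2 edges, so the output size is bounded by a function of the parameter alone.

```python
# Illustrative sketch only: Buss kernelization for Vertex Cover.
from collections import defaultdict

def buss_kernel(edges, k):
    """Return (reduced_edges, remaining_budget), or None if provably a no-instance."""
    edges = {frozenset(e) for e in edges}
    while True:
        degree = defaultdict(int)
        for e in edges:
            for v in e:
                degree[v] += 1
        high = [v for v, d in degree.items() if d > k]
        if not high:
            break
        # Rule 1: a vertex of degree > k must be in every cover of size <= k.
        forced = high[0]
        edges = {e for e in edges if forced not in e}
        k -= 1
        if k < 0:
            return None
    # Rule 2: with maximum degree <= k, a yes-instance has at most k^2 edges.
    if len(edges) > k * k:
        return None
    return edges, k

if __name__ == "__main__":
    star = {(0, i) for i in range(1, 10)}    # star with 9 leaves
    print(buss_kernel(star, 1))              # center forced in -> (set(), 0)
```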
Project without files
Multivariate Algorithms for High Multiplicity Scheduling
Scheduling and planning problems are among the fundamental questions in algorithm design. Many of these problems are highly unlikely to admit algorithms that are guaranteed to deliver an optimal solution in polynomial time. Therefore, hundreds of approximation algorithms have been developed for such problems over the past decades.
In this project we pursue an alternative approach for scheduling problems with high multiplicity, in which a large number of jobs must be planned but the jobs fall into only a few categories. Such problems arise, for instance, in sequencing landing aircraft, whose safety separation distances depend mainly on which of a few aircraft types the respective planes belong to. Our goal is the development of fixed-parameter algorithms, which deliver optimal solutions in time that depends polynomially on the input size and superpolynomially only on the small number of categories. This way, we generalize polynomial-time algorithms for special cases of these problems with only constantly many job categories to more realistic models, and simultaneously improve the run times of fixed-parameter algorithms that so far require an explicit encoding of every single job.
Start Date: 2019-08-01
End Date: 2025-09-30
Principal Investigator:
Institute:
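To illustrate the high-multiplicity encoding (a toy sketch, not one of the project's fixed-parameter algorithms), the following Python example represents the input as one count per aircraft type rather than as an explicit list of jobs, and runs a simple dynamic program over the vector of remaining counts to sequence landings with minimum total separation time. The separation matrix and counts are made-up values, and the DP itself is only pseudo-polynomial in the multiplicities.

```python
# Illustrative sketch only: high-multiplicity input and a DP over count vectors.
from functools import lru_cache

def min_total_separation(multiplicity, sep):
    """multiplicity: tuple of aircraft counts per type;
    sep[a][b]: separation needed when type b lands right after type a."""
    @lru_cache(maxsize=None)
    def best(remaining, last):
        if sum(remaining) == 0:
            return 0
        cost = float("inf")
        for t, cnt in enumerate(remaining):
            if cnt == 0:
                continue
            nxt = list(remaining)
            nxt[t] -= 1
            step = 0 if last is None else sep[last][t]
            cost = min(cost, step + best(tuple(nxt), t))
        return cost
    return best(multiplicity, None)

if __name__ == "__main__":
    # 3 heavy and 2 light aircraft; a light landing right after a heavy one
    # needs a long separation because of wake turbulence.
    sep = [[1, 4],   # after heavy: heavy 1, light 4
           [1, 1]]   # after light: heavy 1, light 1
    print(min_total_separation((3, 2), sep))   # 4: land both light aircraft first
```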
Project without files
Probleme der Strukturellen und Chromatischen Graphentheorie (problems in structural and chromatic graph theory)
Start Date: 2019-01-01
End Date: 2020-12-31
Principal Investigator:
Institute:
Project without files
Simultaneous approximation of multi-criteria optimization problems
In this project we develop general-purpose algorithmic methods for finding optimal and near-optimal solutions to complex multi-criteria optimization problems arising in social choice scenarios. Our algorithms will be fully multivariate, to harness the structure inherent in data sets from diverse domains, and will come with provable guarantees on their run time and on the quality of the produced solutions. With these properties, they improve upon the vast body of ad-hoc implementations currently available, which are often data-specific or lack robust guarantees on run time and solution quality.
Acronym: SAMOP
Start Date: 2021-01-01
End Date: 2023-12-31
Principal Investigator:
Institute:
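As a small illustration of the basic object in multi-criteria optimization (a generic sketch, not the SAMOP methods), the following Python function extracts the Pareto set of a collection of bi-criteria solutions, i.e. the solutions not dominated by any other solution in both criteria.

```python
# Illustrative sketch only: Pareto filtering for two minimization criteria.

def pareto_set(solutions):
    """solutions: iterable of (cost1, cost2) pairs; returns the non-dominated ones."""
    sols = sorted(set(solutions))          # sort by cost1, then cost2
    front, best_c2 = [], float("inf")
    for c1, c2 in sols:
        if c2 < best_c2:                   # not dominated by any earlier solution
            front.append((c1, c2))
            best_c2 = c2
    return front

if __name__ == "__main__":
    candidates = [(3, 7), (4, 6), (5, 6), (2, 9), (6, 1)]
    print(pareto_set(candidates))   # [(2, 9), (3, 7), (4, 6), (6, 1)]
```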