Title: Convolutional transformer network for paranasal anomaly classification in the maxillary sinus
Authors: Bhattacharya, Debayan; Behrendt, Finn; Maack, Lennart; Becker, Benjamin Tobias; Beyersdorff, Dirk; Petersen, Elina Larissa; Petersen, Marvin; Cheng, Bastian; Eggert, Dennis; Betz, Christian Stephan; Hoffmann, Anna Sophie; Schlaefer, Alexander
Date issued: 2024-02
Date available: 2024-05-13
Conference: SPIE Medical Imaging 2024
Published in: Progress in Biomedical Optics and Imaging - Proceedings of SPIE (2024)
Publisher: SPIE
Type: Conference Paper
Language: English
ISBN: 9781510671584
ISSN: 1605-7422
DOI: 10.1117/12.3005515
Handle: https://hdl.handle.net/11420/47484
Keywords: anomaly classification; CNN; Hybrid Network; maxillary sinus; Paranasal anomaly; Transformer; MLE@TUHH
DDC classification: Computer Science, Information and General Works::004: Computer Sciences; Technology::610: Medicine, Health; Technology::620: Engineering

Abstract: Large-scale population studies have examined the detection of sinus opacities in cranial MRIs. Deep learning methods, specifically 3D convolutional neural networks (CNNs), have been used to classify these anomalies. However, CNNs have limitations in capturing long-range dependencies across low- and high-level features, potentially reducing performance. To address this, we propose an end-to-end pipeline using a novel deep learning network called ConTra-Net. ConTra-Net combines the strengths of CNNs and the self-attention mechanism of transformers to classify paranasal anomalies in the maxillary sinuses. Our approach outperforms 3D CNNs and a 3D Vision Transformer (ViT), with relative improvements in F1 score of 11.68% and 53.5%, respectively. Our pipeline with ConTra-Net could serve as an alternative to reduce misdiagnosis rates in classifying paranasal anomalies.
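
Note: The abstract describes a hybrid design in which a 3D CNN extracts local features and transformer self-attention models long-range dependencies. The sketch below illustrates that general pattern only; it is not the authors' ConTra-Net, and all layer counts, channel widths, pooling choices, and class names are illustrative assumptions (the paper itself, DOI 10.1117/12.3005515, defines the actual architecture).

```python
# Hypothetical sketch of a hybrid 3D CNN + transformer classifier for volumetric
# MRI patches. NOT the authors' ConTra-Net; all hyperparameters are assumptions.
import torch
import torch.nn as nn

class HybridCNNTransformer3D(nn.Module):
    def __init__(self, num_classes=2, embed_dim=128, num_heads=4, depth=2):
        super().__init__()
        # 3D CNN stem: captures local texture and structure in the sinus region.
        self.stem = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm3d(32), nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            nn.Conv3d(64, embed_dim, kernel_size=3, stride=2, padding=1),
        )
        # Transformer encoder: self-attention over the spatial positions of the
        # CNN feature map, treated as a token sequence, to model long-range
        # dependencies. (A positional encoding would normally be added here;
        # it is omitted in this sketch for brevity.)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads,
            dim_feedforward=4 * embed_dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.cls_head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):                          # x: (B, 1, D, H, W)
        feats = self.stem(x)                       # (B, C, D', H', W')
        tokens = feats.flatten(2).transpose(1, 2)  # (B, N, C), N = D'*H'*W'
        tokens = self.encoder(tokens)              # attention across all positions
        pooled = tokens.mean(dim=1)                # global average over tokens
        return self.cls_head(pooled)               # class logits

if __name__ == "__main__":
    model = HybridCNNTransformer3D(num_classes=2)
    dummy = torch.randn(2, 1, 64, 64, 64)          # two single-channel MRI patches
    print(model(dummy).shape)                      # torch.Size([2, 2])
```

The design choice illustrated here is the one the abstract motivates: convolutions provide strong local inductive bias while attention connects distant regions of the feature map, which a pure 3D CNN or a pure 3D ViT handles less well in this setting according to the reported F1 improvements.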