Langer, Carlotta; Georgie, Yasmin Kim; Porohovoj, Ilja; Hafner, Verena Vanessa; Ay, Nihat

2025-11-13; 2025-09-16

IEEE International Conference on Development and Learning, ICDL 2025

https://hdl.handle.net/11420/58690

Abstract: Human perception is inherently multimodal. We integrate, for instance, visual, proprioceptive and tactile information into one experience. Similarly, multimodal learning is important for building robotic systems that aim to interact robustly with the real world. One model that has been proposed for multimodal integration is the multimodal variational autoencoder. A variational autoencoder (VAE) consists of two networks, an encoder that maps the data to a stochastic latent space and a decoder that reconstructs the data from an element of this latent space. The multimodal VAE integrates inputs from different modalities at two points in time in the latent space and can thereby be used as a controller for a robotic agent. Here we use this architecture and introduce information-theoretic measures in order to analyze how important the integration of the different modalities is for the reconstruction of the input data. The VAE is trained via the evidence lower bound, which can be written as a sum of two terms, namely the reconstruction and the latent loss. The impact of the latent loss can be weighted via an additional variable, which has been introduced to combat the problem of posterior collapse. Another approach to improving a VAE is to use an aggregated posterior in the latent loss. Here we train networks with three different weighting schedules and one network with an aggregated posterior and analyze them with respect to their capabilities for multimodal integration.

Language: en

Subject: Technology::600: Technology

Title: Analyzing Multimodal Integration in the Variational Autoencoder from an Information-Theoretic Perspective

Type: Conference Paper

DOI: 10.1109/icdl63968.2025.11204413
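
As a rough sketch of the training objective described in the abstract: the evidence lower bound decomposes into a reconstruction term and a latent (KL) term, and the additional weighting variable mentioned above scales the latent loss. The symbol β used here is an assumption for illustration; the paper may denote the weight differently.

\mathcal{L}_{\beta}(x) \;=\; \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]}_{\text{negative reconstruction loss}} \;-\; \beta \, \underbrace{D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)}_{\text{latent loss}}

Setting β = 1 recovers the standard ELBO; annealing or scheduling β over training corresponds to the weighting schedules compared in the paper.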