TUHH Open Research
ThinkingViT: Matryoshka thinking vision transformer for elastic inference

Citation Link: https://doi.org/10.15480/882.16505
Publication Type
Preprint
Date Issued
2025-07-17
Language
English
Author(s)
Hojjat, Ali
Networked Cyber-Physical Systems E-17
Haberer, Janek
Pirk, Sören
Landsiedel, Olaf
Networked Cyber-Physical Systems E-17
TORE-DOI
10.15480/882.16505
TORE-URI
https://hdl.handle.net/11420/60966
Citation
arXiv: 2507.10800 (2025)
Publisher DOI
10.48550/arXiv.2507.10800
ArXiv ID
2507.10800
Abstract
Vision Transformers (ViTs) deliver state-of-the-art performance, yet their fixed computational budget prevents scalable deployment across heterogeneous hardware. Recent Matryoshka-style Transformer architectures mitigate this by embedding nested subnetworks within a single model to enable scalable inference. However, these models allocate the same amount of compute to all inputs, regardless of their complexity, which leads to inefficiencies. To address this, we introduce ThinkingViT, a nested ViT architecture that employs progressive thinking stages to dynamically adjust inference computation based on input difficulty. ThinkingViT first activates a small subset of the most important attention heads to produce an initial prediction. If the prediction confidence exceeds a predefined threshold, inference terminates early. Otherwise, within the same backbone, it activates a larger subset of attention heads and performs a new forward pass. This process repeats until the model reaches the predefined confidence level or exhausts its maximum capacity. To boost the performance of subsequent rounds, we introduce a Token Recycling approach that fuses the input embeddings with the embeddings from the previous stage. Experiments show that ThinkingViT surpasses nested baselines by up to 2.0 percentage points (p.p.) in accuracy at the same throughput and by up to 2.9 p.p. at equal GMACs on ImageNet-1K. The backbone-preserving design of ThinkingViT allows it to serve as a plug-in upgrade for ViTs in downstream tasks such as semantic segmentation, and it transfers effectively to other architectures such as Swin. The source code is available at https://github.com/ds-kiel/ThinkingViT.
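To make the early-exit control flow described in the abstract concrete, below is a minimal sketch of progressive inference with confidence-based termination and Token Recycling. This is an illustrative reconstruction from the abstract, not the authors' released code (see the GitHub link above for that); the names SubnetViT, thinking_inference, head_fractions, and tau are hypothetical, and the nested backbone is mocked by a toy module so the example stays runnable.

# Sketch of ThinkingViT-style progressive inference with early exit.
# Hypothetical names; the real nested backbone is mocked for runnability.
import torch
import torch.nn as nn

class SubnetViT(nn.Module):
    """Stand-in for a nested ViT backbone that can run with a subset of
    attention heads. Mocked with a toy embedding and classifier so the
    control flow below is executable as-is."""
    def __init__(self, embed_dim=192, num_classes=1000):
        super().__init__()
        self.embed = nn.Linear(3 * 16 * 16, embed_dim)  # toy patch embedding
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, patches, frac, recycled=None):
        x = self.embed(patches)
        if recycled is not None:
            # Token Recycling (as described in the abstract): fuse the input
            # embeddings with the embeddings from the previous stage.
            x = x + recycled
        # In the real model, `frac` would select the top fraction of the most
        # important attention heads; the mock only scales activations by it.
        feats = torch.tanh(x) * frac
        return self.head(feats.mean(dim=1)), feats

@torch.no_grad()
def thinking_inference(model, patches, head_fractions=(0.25, 0.5, 1.0), tau=0.9):
    """Run progressively larger subnetworks until the softmax confidence
    exceeds the threshold `tau` or the largest subnetwork is reached."""
    recycled = None
    for frac in head_fractions:
        logits, recycled = model(patches, frac, recycled)
        conf, pred = logits.softmax(dim=-1).max(dim=-1)
        if conf.item() >= tau:  # early exit: prediction is already confident
            break
    return pred, conf

if __name__ == "__main__":
    model = SubnetViT().eval()
    patches = torch.randn(1, 196, 3 * 16 * 16)  # 14x14 patches, 224x224 image
    pred, conf = thinking_inference(model, patches)
    print(f"predicted class {pred.item()} with confidence {conf.item():.2f}")

Note the design choice the abstract implies: every stage reuses the same backbone with a larger subset of attention heads, so only one set of weights is stored while compute scales with input difficulty.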
Subjects
cs.CV
DDC Class
006: Special computer methods
004: Computer Sciences
License
https://creativecommons.org/licenses/by/4.0/
Publication version
publishedVersion
Name
2025_Hojjat_ThinkingViT_MatryoshkaThinkingVisionTransformerForElasticInference.pdf
Size
1.64 MB
Format
Adobe PDF