ZENODO · Article · 2025 · License: CC BY · Data source: Datacite
Lightweight Vision Transformer Framework for Real-Time Human–Object Interaction Recognition

Authors: Michael Turner¹, Olivia Reed², Ethan Walker³

Abstract

Human–Object Interaction (HOI) recognition is a fundamental task in intelligent computing systems, enabling machines to understand how humans engage with surrounding objects in real-time environments. Traditional deep learning approaches for HOI rely heavily on convolutional architectures, which often struggle with long-range dependencies and are computationally expensive for edge deployment. This paper proposes a Lightweight Vision Transformer Framework (LVTF) designed specifically for efficient and accurate real-time HOI recognition. The framework employs a patch-based visual encoder combined with optimized multi-head attention mechanisms to capture global contextual relationships between humans and objects. A lightweight decoder further refines these representations to generate interaction labels with minimal latency. Experimental evaluations conducted on benchmark HOI datasets demonstrate that the LVTF achieves competitive accuracy while reducing computational complexity by nearly 40% compared to conventional transformer and CNN-based models. The reduced model footprint and low inference delay make the proposed approach highly suitable for real-time intelligent applications, including smart surveillance, assistive robotics, and human–computer interaction systems.
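The abstract describes a pipeline of a patch-based visual encoder, optimized multi-head attention, and a lightweight decoder that emits interaction labels. The following PyTorch sketch is a rough illustration of how such a pipeline could be wired together; the module layout, embedding width, encoder depth, and the number of interaction classes are assumptions made for illustration and are not taken from the paper.

import torch
import torch.nn as nn

class LightweightViTHOI(nn.Module):
    """Illustrative lightweight ViT-style model for HOI recognition (not the authors' code)."""

    def __init__(self, image_size=224, patch_size=16, embed_dim=192,
                 depth=4, num_heads=3, num_interactions=117):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Patch-based visual encoder: non-overlapping patches projected to embed_dim.
        self.patch_embed = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
        # Shallow, narrow transformer encoder: multi-head self-attention captures
        # global context between human and object regions at modest cost.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, dim_feedforward=embed_dim * 2,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        # Lightweight decoder head: pooled tokens mapped to interaction logits.
        self.head = nn.Sequential(
            nn.LayerNorm(embed_dim),
            nn.Linear(embed_dim, num_interactions))

    def forward(self, images):
        x = self.patch_embed(images)          # (B, D, H/ps, W/ps)
        x = x.flatten(2).transpose(1, 2)      # (B, N, D) patch tokens
        x = x + self.pos_embed
        x = self.encoder(x)                   # global attention over all patches
        x = x.mean(dim=1)                     # average-pool token representations
        return self.head(x)                   # interaction logits

if __name__ == "__main__":
    model = LightweightViTHOI()
    logits = model(torch.randn(2, 3, 224, 224))
    print(logits.shape)                       # torch.Size([2, 117])

In this sketch, the small embedding width, few attention heads, and shallow encoder stand in for the "lightweight" design choices the abstract mentions; a real implementation would tune these against the reported accuracy/complexity trade-off.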

Keywords

Vision transformer, human–object interaction, real-time recognition, lightweight architecture, attention mechanism, intelligent systems.
