
Video summarization is an essential component of multimedia processing and computer vision, designed to condense extensive video content while maintaining its key contextual elements. The rapid expansion of video data across domains such as security surveillance, online media, and digital archiving necessitates the development of robust and efficient summarization techniques. This paper introduces LTC-SUM, a novel client-driven framework that leverages 2D Convolutional Neural Networks (CNNs) to generate adaptive, lightweight video summaries. Unlike traditional methods that rely on computationally intensive processes or simplistic heuristic techniques, LTC-SUM efficiently extracts significant frames while minimizing redundant information. The system is optimized for real-time applications and ensures low computational overhead. Extensive evaluations demonstrate that LTC-SUM surpasses conventional techniques in performance and accuracy, effectively balancing efficiency and contextual retention. Moreover, the framework's adaptability extends its applicability to various fields, including surveillance monitoring, educational content processing, and automated media analysis.
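To make the frame-selection idea concrete, the sketch below illustrates one way a lightweight 2D CNN could score individual frames on the client and keep only the highest-scoring ones as the summary. This is a minimal illustration under stated assumptions, not the LTC-SUM implementation: the model `FrameScorer`, the input resolution, and the top-k selection rule are hypothetical choices introduced here for clarity.

```python
# Minimal sketch of client-side keyframe selection with a small 2D CNN.
# Assumptions (not from the paper): FrameScorer architecture, 128x128 input
# frames, and a fixed top-k selection ratio are illustrative only.
import torch
import torch.nn as nn


class FrameScorer(nn.Module):
    """Assigns an importance score to each frame with a tiny 2D CNN."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # global pooling -> (N, 32, 1, 1)
        )
        self.head = nn.Linear(32, 1)        # one importance score per frame

    def forward(self, frames):              # frames: (N, 3, H, W)
        feats = self.features(frames).flatten(1)
        return self.head(feats).squeeze(-1)  # (N,) importance scores


def summarize(frames, ratio=0.1):
    """Keep the top `ratio` fraction of frames, in temporal order (illustrative)."""
    scorer = FrameScorer().eval()
    with torch.no_grad():
        scores = scorer(frames)
    k = max(1, int(ratio * frames.shape[0]))
    keep = torch.topk(scores, k).indices.sort().values  # restore temporal order
    return frames[keep], keep


if __name__ == "__main__":
    # Stand-in for decoded, downsampled video frames (e.g., 64 frames at 128x128).
    video = torch.rand(64, 3, 128, 128)
    summary, idx = summarize(video, ratio=0.1)
    print("selected frame indices:", idx.tolist())
```

In a client-driven setting of this kind, the scoring network would run on downsampled or thumbnail frames so that selection stays cheap on end-user devices, with only the chosen segments fetched at full resolution.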
