Accurate and Robust Video Saliency Detection via Self-Paced Diffusion


To estimate video saliency over the short term, traditional video saliency detection algorithms usually follow a common bottom-up pipeline. As a result, when the collected low-level cues are repeatedly ill-detected, such approaches cannot prevent a persistent accumulation of errors. Moreover, video frames that lie far from the current frame on the time axis may still help with saliency estimation in the current frame.

We therefore propose to solve this problem with a newly designed key frame strategy (KFS), whose core rationale is to reveal valuable long-term information using both the spatial-temporal coherence of the salient foregrounds and the objectness prior (i.e., how likely an object proposal is to contain an object of any class). This long-term information then guides our subsequent "self-paced" saliency diffusion, in which each key frame determines its own diffusion range and strength in order to correct the video frames that were previously ill-detected.
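As a rough illustration of the key frame idea, the sketch below scores each frame by combining an objectness cue with a spatial-temporal coherence cue and keeps, per short-term batch, only a frame that clearly stands out. This is a minimal assumption-laden sketch, not the paper's actual algorithm: the per-frame `objectness` and `coherence` scores, the batch size, and the threshold `tau` are all hypothetical placeholders.

```python
import numpy as np

def select_key_frames(objectness, coherence, batch_size=10, tau=1.2):
    """Pick at most one candidate key frame per short-term batch.

    objectness : (T,) per-frame objectness scores (hypothetical input)
    coherence  : (T,) per-frame spatial-temporal coherence scores
    A frame becomes a key frame only if its combined score exceeds
    `tau` times the mean score of its batch, i.e. it is clearly
    more trustworthy than its short-term neighbors.
    """
    scores = objectness * coherence          # combine the two cues
    key_frames = []
    for start in range(0, len(scores), batch_size):
        batch = scores[start:start + batch_size]
        best = int(np.argmax(batch))
        if batch[best] > tau * batch.mean():
            key_frames.append(start + best)
    return key_frames
```

Frames in batches without a clear winner simply contribute no key frame, so unreliable stretches of the video do not inject long-term guidance.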

At the algorithmic level, we first partition a video sequence into short-term frame batches and obtain object proposals frame by frame. We then use a pre-trained deep saliency model to extract high-dimensional features that describe the spatial contrast of each object proposal. Because the contrast is computed over several neighboring video frames (i.e., in a non-local way), it is largely insensitive to appearance fluctuation, so object proposals with high-quality low-level saliency estimates usually exhibit strong temporal similarity.
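The non-local contrast idea can be sketched as follows, under a strong simplification: one deep feature vector per frame (rather than per proposal), with contrast measured against the mean feature of a small temporal window instead of a single frame. The feature source and window size are assumptions for illustration only.

```python
import numpy as np

def nonlocal_contrast(feats, window=2):
    """Non-local spatial contrast over a short-term batch.

    feats : (T, D) one deep feature vector per frame (a simplification;
            the paper operates on many object proposals per frame).
    Each frame's contrast is measured against the mean feature of its
    `window` neighboring frames rather than a single frame, which makes
    the score less sensitive to per-frame appearance fluctuation.
    """
    T = len(feats)
    contrast = np.empty(T)
    for t in range(T):
        lo, hi = max(0, t - window), min(T, t + window + 1)
        neigh = np.vstack([feats[lo:t], feats[t + 1:hi]])  # exclude t
        contrast[t] = np.linalg.norm(feats[t] - neigh.mean(axis=0))
    return contrast
```

A frame whose feature departs from its temporal neighborhood receives high contrast, which is the behavior the batch-wise computation relies on.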

The long-term consistency of the salient foregrounds (e.g., appearance models and movement patterns) can then be revealed directly by similarity analysis. We further improve detection accuracy via saliency diffusion guided by this self-paced long-term information. We conducted thorough experiments comparing our method against 16 state-of-the-art methods on the four largest publicly available benchmarks, and the results demonstrate its superiority in terms of both accuracy and robustness.
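The "self-paced" aspect can be sketched as each key frame spreading its saliency outward until frame-to-frame similarity drops below a threshold, so its diffusion range and strength adapt to the video content. The similarity callable `sim`, the blending rule, and the `min_sim` cutoff are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def self_paced_diffusion(saliency, key_frames, sim, min_sim=0.5):
    """Diffuse saliency outward from each key frame at its own pace.

    saliency   : (T, N) per-frame saliency maps (flattened; illustrative)
    key_frames : indices of trusted frames
    sim        : callable sim(a, b) -> similarity in [0, 1] between
                 frames a and b (an assumed helper, e.g. feature cosine)
    Each key frame keeps spreading to the next frame in both directions
    until similarity drops below `min_sim`; the diffusion strength is
    the similarity itself, so nearby look-alike frames are corrected
    most strongly.
    """
    out = saliency.copy()
    T = len(saliency)
    for k in key_frames:
        for step in (1, -1):               # forward, then backward
            t = k + step
            while 0 <= t < T:
                s = sim(k, t)
                if s < min_sim:
                    break                  # this key frame's range ends
                out[t] = (1 - s) * out[t] + s * saliency[k]
                t += step
    return out
```

Because each key frame stops at its own similarity boundary, a single unreliable stretch of frames cannot drag down the whole sequence, which matches the error-suppression motivation above.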
