PROJECT TITLE:
Saliency Prediction on Stereoscopic Videos - 2014
We describe a new 3D saliency prediction model that accounts for diverse low-level luminance, chrominance, motion, and depth attributes of 3D videos, as well as high-level classifications of scenes by type. The model also accounts for perceptual factors, such as the nonuniform resolution of the human eye, the stereoscopic limits imposed by Panum's fusional area, and the predicted degree of (dis)comfort felt when viewing the 3D video. The high-level analysis classifies each 3D video scene by type with regard to the estimated camera motion and the motions of objects in the video. Decisions regarding the relative saliency of objects or regions are supported by data obtained through a series of eye-tracking experiments. The algorithm developed from the model elements operates by finding and segmenting salient 3D space-time regions in a video, then calculating the saliency strength of each segment using measured attributes of motion, disparity, texture, and the predicted degree of visual discomfort experienced. The saliency energy of both segmented objects and frames is weighted using models of human foveation and Panum's fusional area, yielding a single predictor of 3D saliency.
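The pipeline above can be sketched in a few lines of code. This is only an illustrative approximation under stated assumptions: the function names, the linear attribute weighting, the Gaussian foveation falloff, and the soft disparity penalty are all hypothetical choices made here for clarity, not the paper's actual formulation.

```python
import numpy as np

def foveation_weight(x, y, fix_x, fix_y, sigma=64.0):
    """Gaussian falloff from the fixation point, modeling the eye's
    nonuniform resolution (hypothetical parameterization)."""
    d2 = (x - fix_x) ** 2 + (y - fix_y) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fusion_weight(disparity, fusional_limit=1.0):
    """Down-weight disparities that fall outside Panum's fusional area
    (illustrative soft exponential penalty, not the paper's model)."""
    return np.exp(-np.abs(disparity) / fusional_limit)

def region_saliency(motion, disparity, texture, discomfort,
                    weights=(0.4, 0.3, 0.2, 0.1)):
    """Combine per-segment attribute strengths into one saliency score.
    The linear weighting and these weight values are assumptions;
    predicted discomfort reduces the score."""
    wm, wd, wt, wc = weights
    return wm * motion + wd * disparity + wt * texture - wc * discomfort

# Example: score one segmented region, then scale its saliency energy
# by the perceptual weights, as the model's final stage does.
raw = region_saliency(motion=0.8, disparity=0.5, texture=0.4, discomfort=0.2)
energy = raw * foveation_weight(120, 80, 100, 100) * fusion_weight(0.5)
```

A fuller implementation would compute these attributes per space-time segment from the video and disparity maps; the point here is only the structure: attribute fusion first, then foveation and fusional-area weighting.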