PROJECT TITLE:
A Framework for Robust Online Video Contrast Enhancement Using Modularity Optimization
We address the problem of video contrast enhancement. Existing techniques either do not exploit temporal information at all or do not exploit it properly, leading to temporal inconsistency that causes undesirable flash and flickering artifacts. Our technique analyzes a video stream and clusters frames that are similar to each other. The method does not require omniscient knowledge of the whole video sequence; it is an online process with a fixed delay. A sliding-window mechanism detects shot boundaries on the fly, and a graph-based technique called modularity optimization automatically clusters video frames without a priori knowledge about the clusters. For each cluster, we extract key frames using eigen-analysis, estimate enhancement parameters only for the key frames, and then use these parameters to enhance all frames belonging to that cluster, making our method robust. We evaluate the clustering technique on video sequences from the TRECVid 2001 dataset and compare it with existing methods. We show a reduction of flash artifacts in enhanced videos, demonstrate a statistically significant improvement in perceived video quality through experiments with human observers, and show an application of our clustering method to robust video segmentation.
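To make the clustering step concrete, the following is a minimal sketch of modularity-based grouping of video frames. It is not the authors' implementation: the histogram-intersection similarity measure, the 0.6 edge threshold, and the use of NetworkX's greedy modularity routine are all illustrative assumptions. Frames become nodes in a similarity graph, and modularity optimization partitions the graph without being told the number of clusters in advance.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities


def cluster_frames(histograms, threshold=0.6):
    """Group frames by modularity optimization over a similarity graph.

    Nodes are frame indices; an edge links two frames whose normalized
    histogram-intersection similarity exceeds `threshold` (an assumed
    similarity measure, not necessarily the one used in the paper).
    """
    n = len(histograms)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            # Histogram intersection of normalized histograms lies in [0, 1].
            sim = float(np.minimum(histograms[i], histograms[j]).sum())
            if sim > threshold:
                g.add_edge(i, j, weight=sim)
    # Greedy modularity maximization; the number of communities is
    # discovered automatically, with no a priori cluster count.
    return [sorted(c) for c in greedy_modularity_communities(g, weight="weight")]


# Toy example: two synthetic "shots" with distinct intensity profiles.
rng = np.random.default_rng(0)
dark = [np.histogram(rng.normal(60, 10, 1000), bins=16, range=(0, 255))[0] / 1000
        for _ in range(5)]
bright = [np.histogram(rng.normal(200, 10, 1000), bins=16, range=(0, 255))[0] / 1000
          for _ in range(5)]
clusters = cluster_frames(dark + bright)
```

On this toy input, the dark frames (indices 0 to 4) and the bright frames (indices 5 to 9) form two separate communities, since cross-shot similarity is near zero. In the full method, one key frame per cluster would then be selected and its enhancement parameters reused for the whole cluster.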