PROJECT TITLE:
MoVieUp: Automatic Mobile Video Mashup
ABSTRACT:
With the proliferation of mobile devices, people are taking videos of the same events anytime and anywhere. Even though these crowdsourced videos are uploaded to the cloud and shared, the viewing experience is very limited because of monotonous viewing, visual redundancy, and poor audio-video quality. In this paper, we present a fully automatic mobile video mashup system that works in the cloud to combine recordings captured by multiple devices from different view angles and at different time slots into a single yet enriched and professional-looking video-audio stream. We summarize a set of computational filming principles for multicamera settings from a formal focus study. Based on these principles, given a set of recordings of the same event, our system is able to synchronize these recordings with audio fingerprints, assess audio and video quality, detect video cut points, and generate video and audio mashups. The audio mashup is the maximization of audio quality under the less-switching principle, while the video mashup is formalized as maximizing video quality and content diversity, constrained by the summarized filming principles. Our system differs from existing work in this field in three ways: 1) it is fully automatic; 2) it incorporates a set of computational domain-specific filming principles summarized from a formal focus study; and 3) in addition to video, we also consider the audio mashup, a key factor in user experience (UX) that is typically overlooked in existing research. Evaluations show that our system achieves performance superior to that of state-of-the-art video mashup techniques, therefore providing a better UX.
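To make the synchronization step concrete, below is a minimal Python sketch of aligning two recordings of the same event by cross-correlating coarse per-frame spectral features. This is an illustration only, not the paper's actual fingerprinting algorithm; the function names and parameters (spectral_fingerprint, estimate_offset, frame_size, hop) are hypothetical stand-ins.

```python
# Hypothetical sketch: estimate the time offset between two recordings of
# the same event by cross-correlating coarse spectral "fingerprints".
# The paper's real audio-fingerprint method is not reproduced here.
import numpy as np

def spectral_fingerprint(signal, frame_size=2048, hop=1024):
    """Reduce a mono waveform to one coarse log-energy feature per frame."""
    frames = np.lib.stride_tricks.sliding_window_view(signal, frame_size)[::hop]
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    return np.log1p(spectra.sum(axis=1))

def estimate_offset(sig_a, sig_b, sample_rate, hop=1024):
    """Estimate, in seconds, how far into sig_a the start of sig_b falls."""
    fa = spectral_fingerprint(sig_a, hop=hop)
    fb = spectral_fingerprint(sig_b, hop=hop)
    # Normalize so the correlation peak reflects shape, not loudness.
    fa = (fa - fa.mean()) / (fa.std() + 1e-9)
    fb = (fb - fb.mean()) / (fb.std() + 1e-9)
    corr = np.correlate(fa, fb, mode="full")
    lag_frames = np.argmax(corr) - (len(fb) - 1)
    return lag_frames * hop / sample_rate

if __name__ == "__main__":
    # Synthetic demo: recording B starts 1.5 s later than recording A.
    rate = 16000
    event = np.random.default_rng(0).standard_normal(rate * 10)
    rec_a = event
    rec_b = event[int(1.5 * rate):]
    print(f"estimated offset: {estimate_offset(rec_a, rec_b, rate):.2f} s")
```

Once pairwise offsets are known, all recordings can be placed on a common timeline, after which quality assessment, cut-point detection, and the constrained mashup optimization described above operate on the aligned streams.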