The MOtion-tuned Video Integrity Evaluation (MOVIE) index is a model and set of algorithms for predicting the perceived quality of digital television and cinematic pictures, as well as other kinds of digital images and videos. It was developed by Kalpana Seshadrinathan and Alan Bovik in the Laboratory for Image and Video Engineering (LIVE) at The University of Texas at Austin, and was described in print in the 2010 technical paper "Motion Tuned Spatio-Temporal Quality Assessment of Natural Videos". The original MOVIE paper was accorded an IEEE Signal Processing Society Best Journal Paper Award in 2013.

MOVIE is a neuroscience-based model for predicting the perceptual quality of a (possibly compressed or otherwise distorted) motion picture or video against a pristine reference video; because it requires the reference, the MOVIE index is a full-reference metric. The model is quite different from many other models in that it uses neuroscience-based models of how the human brain processes visual signals at various stages along the visual pathway, including the lateral geniculate nucleus, the primary visual cortex, and the motion-sensitive extrastriate cortex visual area MT.

MOVIE operates by processing spatial and temporal motion picture information in an approximately separable manner. Spatial MOVIE predicts the spatial (frame) quality of a video by calculating a space-time frequency decomposition of both the reference and the test (distorted) videos using a Gabor filter bank.
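To make the filter-bank idea concrete, here is a minimal sketch of how one might compare Gabor-filter responses of a reference frame and a distorted frame and map the response error to a quality score. This is an illustrative toy, not the actual MOVIE algorithm: the kernel parameters, the number of orientations, and the `spatial_quality` scoring function are all hypothetical choices made for this example.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    # 2-D Gabor filter: a Gaussian envelope modulating a cosine carrier
    # oriented at angle `theta` (parameters here are illustrative).
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * xr / wavelength)

def filter_bank_responses(frame, kernels):
    # Convolve the frame with every kernel via the FFT
    # (circular boundary handling, same output size as the frame).
    F = np.fft.fft2(frame)
    return [np.real(np.fft.ifft2(F * np.fft.fft2(k, s=frame.shape)))
            for k in kernels]

def spatial_quality(ref_frame, dist_frame, n_orientations=4):
    # Hypothetical frame-level score: mean squared difference between the
    # two sets of filter responses, mapped to (0, 1]; identical frames
    # yield exactly 1.0, and larger response errors push the score lower.
    thetas = np.linspace(0.0, np.pi, n_orientations, endpoint=False)
    kernels = [gabor_kernel(15, 8.0, t, 3.0) for t in thetas]
    ref_resp = filter_bank_responses(ref_frame, kernels)
    dist_resp = filter_bank_responses(dist_frame, kernels)
    err = np.mean([(a - b) ** 2 for a, b in zip(ref_resp, dist_resp)])
    return 1.0 / (1.0 + err)
```

With identical inputs the score is 1.0, and adding noise to the test frame lowers it, which captures the basic full-reference comparison even though the real MOVIE index uses a much richer spatio-temporal (3-D) Gabor decomposition and motion tuning.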