Abstract
We propose multi-modal story-oriented video summarization (MMSS), a domain-independent, graph-based framework that, unlike previous works relying on fine-tuned, domain-specific heuristics, requires no such tuning. MMSS uncovers correlations among information from different modalities, yielding meaningful story-oriented news video summaries. MMSS can also be applied to video retrieval, matching the performance of the best traditional retrieval techniques (OKAPI and LSI) without fine-tuned heuristics such as tf-idf.