Technology evolves at an amazingly rapid pace. Device capabilities and capacities are increasing while costs are going down, leading to a great increase in the amount of multimedia content available worldwide. Video is an interesting yet complex multimedia component, and in order to access it quickly and efficiently, advanced video search engines have to be developed.
In this thesis, we focus on expanding the capabilities of the VERGE video search engine by designing and developing a new module based on online content classification. Its main advantage over the modules usually found in similar video search engines is that it uses supervised machine learning methods with an automatically created dataset and exploits existing web search engines. The training set is gathered 'on the fly', which means that there is no limitation on the search query keywords. Visual features are extracted from the images of both the training and the testing set and are used as the input to a supervised machine learning classifier. The classifier's purpose is to separate relevant from irrelevant videos and return the best matches to the user. In order to evaluate the performance of the online content classification module, experiments for the retrieval of videos based on various search queries were conducted. For each search query, several combinations of training set options and visual descriptors were used. The results are rather promising and show that the online content classification system could become a useful addition to any multimedia search engine.
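The overall pipeline can be illustrated with a minimal sketch. The thesis does not prescribe this exact implementation: the web image search function, the random-image source for negatives, the colour-histogram descriptor, and the linear SVM classifier below are all placeholder assumptions standing in for whichever services, visual features, and classifier the module actually uses.

```python
# Minimal sketch of the online content classification pipeline, under the
# assumptions stated above. fetch_web_images() and fetch_random_images() are
# hypothetical callables supplied by the caller.

import numpy as np
from sklearn.svm import LinearSVC


def colour_histogram(image, bins=8):
    """Concatenated per-channel histogram as a crude visual descriptor."""
    hist = [np.histogram(image[..., c], bins=bins, range=(0, 255))[0]
            for c in range(3)]
    return np.concatenate(hist).astype(float) / image.size


def build_training_set(query, fetch_web_images, fetch_random_images, n=100):
    """Gather a training set 'on the fly': positives come from a web image
    search for the query, negatives from generic unrelated images."""
    pos = [colour_histogram(img) for img in fetch_web_images(query, n)]
    neg = [colour_histogram(img) for img in fetch_random_images(n)]
    X = np.vstack(pos + neg)
    y = np.array([1] * len(pos) + [0] * len(neg))
    return X, y


def rank_videos(query, keyframes, fetch_web_images, fetch_random_images, top_k=20):
    """Train a classifier on the automatically gathered set and rank video
    keyframes (image, video_id pairs) by their decision score."""
    X, y = build_training_set(query, fetch_web_images, fetch_random_images)
    clf = LinearSVC().fit(X, y)
    feats = np.vstack([colour_histogram(frame) for frame, _ in keyframes])
    scores = clf.decision_function(feats)
    order = np.argsort(scores)[::-1][:top_k]
    return [(keyframes[i][1], scores[i]) for i in order]
```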