ETD Collection

Permanent URI for this collection: https://wiredspace.wits.ac.za/handle/10539/104


Please note: Digitised content is made available at the best possible quality, taking into consideration file size and the condition of the original item. These constraints may sometimes affect the quality of the final published item. For queries regarding the content of the ETD collection, please contact the IR specialists by email or by telephone on 011 717 4652 / 1954.

Follow the link below for important information about Electronic Theses and Dissertations (ETDs):

Library Guide about ETD


Search Results

Now showing 1 - 1 of 1
  • SummaryNet: two-stream convolutional networks for automatic video summarisation
    (2020) Jappie, Ziyad
    Video summarisation is the task of automatically summarising a video sequence by extracting its "important" parts so as to give an overview of what has occurred. Solving this problem has applications in many fields, such as entertainment, sports, and e-learning. Video summarisation is inherently difficult because of its subjectivity: there is no single correct answer. This makes tangible performance particularly hard to define and measure, in addition to the other difficulties associated with general video processing. We present a novel two-stream network framework for automatic video summarisation, which we call SummaryNet. SummaryNet employs a deep two-stream network to model pertinent spatio-temporal features by leveraging both RGB and optical flow information. We use the Two-Stream Inflated 3D ConvNet (I3D) to extract high-level, semantic feature representations as inputs to our SummaryNet model. Experimental results on common benchmark datasets show that the proposed method achieves results comparable to or better than state-of-the-art video summarisation methods.
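    For illustration only, the sketch below shows one way pre-extracted two-stream I3D features (an RGB stream and an optical-flow stream) might be fused and scored for per-segment importance in PyTorch. The layer sizes, module names, and scoring head are assumptions made for the example and do not reflect the actual SummaryNet implementation described in the thesis.

    ```python
    # Hypothetical sketch: fusing pre-extracted two-stream I3D features and
    # scoring each video segment's importance. All dimensions and names are
    # illustrative assumptions, not the thesis's actual architecture.
    import torch
    import torch.nn as nn


    class TwoStreamSummaryHead(nn.Module):
        """Fuses RGB and optical-flow I3D features and scores each segment."""

        def __init__(self, feat_dim: int = 1024, hidden_dim: int = 256):
            super().__init__()
            # Separate projections for each stream, then a shared scoring MLP.
            self.rgb_proj = nn.Linear(feat_dim, hidden_dim)
            self.flow_proj = nn.Linear(feat_dim, hidden_dim)
            self.scorer = nn.Sequential(
                nn.ReLU(),
                nn.Linear(2 * hidden_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, 1),
                nn.Sigmoid(),  # importance score in [0, 1] per segment
            )

        def forward(self, rgb_feats: torch.Tensor, flow_feats: torch.Tensor) -> torch.Tensor:
            # rgb_feats, flow_feats: (num_segments, feat_dim) pre-extracted I3D features
            fused = torch.cat([self.rgb_proj(rgb_feats), self.flow_proj(flow_feats)], dim=-1)
            return self.scorer(fused).squeeze(-1)  # (num_segments,) importance scores


    if __name__ == "__main__":
        model = TwoStreamSummaryHead()
        rgb = torch.randn(30, 1024)   # dummy RGB I3D features for 30 segments
        flow = torch.randn(30, 1024)  # matching dummy optical-flow I3D features
        scores = model(rgb, flow)
        print(scores.shape)  # torch.Size([30])
    ```

    Segments with the highest predicted scores would then be selected (subject to a length budget) to form the summary; the actual selection and training procedure used in the thesis is not shown here.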