

CALL FOR PARTICIPATION in the
2013 TREC VIDEO RETRIEVAL EVALUATION (TRECVID 2013)

February 2013 - November 2013

Conducted by the National Institute of Standards and Technology (NIST)
with additional funding from other US government agencies

I n t r o d u c t i o n:

The TREC Video Retrieval Evaluation series (trecvid.nist.gov) promotes
progress in content-based analysis of and retrieval from digital video
via open, metrics-based evaluation. TRECVID is a laboratory-style
evaluation that attempts to model real world situations or significant
component tasks involved in such situations.


D a t a:

In TRECVID 2013 NIST will use at least the following data sets:


      * IACC.2.A, IACC.2.B, and IACC.2.C

      Three datasets - totaling approximately 7300 Internet Archive
      videos (144GB, 600 hours) with Creative Commons licenses in
      MPEG-4/H.264 format, with durations ranging from 10 seconds to 6.4
      minutes and a mean duration of almost 5 minutes. Most videos will
      have some metadata provided by the donor, e.g., title, keywords,
      and description.

      * IACC.1.A, IACC.1.B, and IACC.1.C

      Three datasets - totaling approximately 8000 Internet Archive videos
      (160GB, 600 hours) with Creative Commons licenses in MPEG-4/H.264
      format with durations between 10 seconds and 3.5 minutes. Most
      videos will have some metadata provided by the donor, e.g., title,
      keywords, and description.

      * IACC.1.tv10.training

      Approximately 3200 Internet Archive videos (50GB, 200 hours)
      with Creative Commons licenses in MPEG-4/H.264 with durations
      between 3.6 and 4.1 minutes. Most videos will have some metadata
      provided by the donor, e.g., title, keywords, and description.

      * BBC EastEnders

      Approximately 244 video files (totaling about 300 GB, 464 hours) with
      associated metadata, each containing a week's worth of BBC EastEnders
      programs in MPEG-4/H.264 format.

      * Gatwick and i-LIDS MCT airport surveillance video

      The data consist of about 150 hours of airport surveillance
      video (courtesy of the UK Home Office). The
      Linguistic Data Consortium has provided event annotations for
      the entire corpus. The corpus was divided into development and
      evaluation subsets. Annotations for 2008 development and test
      sets are available.

      * HAVIC

      HAVIC is a large collection of Internet multimedia constructed
      by the Linguistic Data Consortium and NIST. Participants will
      receive training corpora, event training resources, and two
      development test collections. Participants will also receive the
      ~4,000 hour MED Progress evaluation collection, which was used
      for MED '12 and will continue to be used through MED '15 as the
      test collection.


T a s k s:

In TRECVID 2013 NIST will evaluate systems on the following tasks
using the [data] indicated:


    * SIN: Semantic indexing [IACC.2]

      Automatic assignment of semantic tags to video segments can be
      fundamental technology for filtering, categorization, browsing,
      search, and other video exploitation. Technical issues to be
      addressed include the methods needed (and possible) as collection
      size and diversity increase, as the number of features increases,
      and as features are related by an ontology. The "concept pair"
      version of the task will be continued along with the "no
      annotation" condition. A new spatiotemporal localization
      subtask will be added, as will support for measuring system
      progress over 3 years.

    * SED: Surveillance video event detection (interactive) [i-LIDS]

      Detecting human behaviors efficiently in vast amounts of
      surveillance video, both retrospectively and in real time, is
      fundamental technology for a variety of higher-level
      applications of critical importance to public safety and
      security. In 2013, participants in this task will examine the
      performance of interactive surveillance video search for a known
      set of events.

    * INS: Instance search (interactive, automatic) [BBC EastEnders video]

      An important need in many situations involving video collections
      (archive video search/reuse, personal video organization/search,
      surveillance, law enforcement, protection of brand/logo use) is
      to find more video segments of a certain specific person,
      object, or place, given a visual example. In 2013 this will
      still be a pilot task.  We will be using a new collection of
      BBC video from their long-running EastEnders program.

    * MED: Multimedia event detection [HAVIC]
      
      Participants are tasked with building an automated system that
      can determine whether an event is present anywhere in a video
      clip using the content of the video clip only. The system gets
      as inputs a set of "search" videos and an "event kit" (text and
      video describing the event). The system computes an "event
      score" (that gives the strength of evidence of the event) and an
      optional "recounting" of the event in the each search video in
      the input set.   The 2013 evaluation will consist of 20
      "pre-specified" and 20 new "ad-hoc" event kits containing 100,
      10 or 0 example event videos. Participants may choose to either
      run their system on a set of 98,000 search videos or a subset
      containing approximately 32,000 video clips.


    * MER: Multimedia event recounting [HAVIC]

      Once a MED system locates a specific event in a video clip, a
      user may wish to analyze the evidence indicating that the event
      was present.  Given an event kit, and a video clip that the MED
      system has determined to contain the event, the MED system can
      then produce a recounting that summarizes the key evidence of
      the event in a form that is semantically meaningful to a
      human. The MER evaluation will assess those recountings. For
      this 2013 evaluation, the recounting will be text-only, in an
      XML format, though it will also contain required spatiotemporal
      pointers (in a text format) into the clip, to enable a user to
      locate the pieces of evidence being recounted. MER
      participants are those participants in the MED evaluation whose
      systems also produce a recounting for each clip that their MED
      system declares as containing an instance of the event of
      interest. (An illustrative example of such a recounting appears
      after this task list.)
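
To make the MED input/output setup concrete, here is a minimal,
purely illustrative Python sketch. It is NOT the official evaluation
interface; the function names, file names, and the simple
nearest-example similarity used for scoring are all invented for
illustration.

      # Illustrative only -- not the official MED interface or metric.
      import numpy as np

      def extract_features(clip_path):
          # Placeholder for real audio-visual feature extraction;
          # returns a deterministic pseudo-random vector so the
          # sketch is runnable.
          rng = np.random.default_rng(abs(hash(clip_path)) % (2**32))
          return rng.random(128)

      def event_scores(event_kit_examples, search_clips):
          # Score each search clip by its maximum cosine similarity
          # to any positive example video in the event kit.
          kit = np.stack([extract_features(p) for p in event_kit_examples])
          kit /= np.linalg.norm(kit, axis=1, keepdims=True)
          scores = {}
          for clip in search_clips:
              f = extract_features(clip)
              f /= np.linalg.norm(f)
              scores[clip] = float((kit @ f).max())
          return scores

      print(event_scores(["kit_001.mp4", "kit_002.mp4"], ["search_001.mp4"]))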
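
Similarly, the following purely hypothetical Python snippet shows one
way a text-only XML recounting with a spatiotemporal pointer might be
assembled. The element and attribute names are invented here; the
actual MER XML schema will be specified in the guidelines.

      # Illustrative only: element/attribute names are hypothetical.
      import xml.etree.ElementTree as ET

      recounting = ET.Element("recounting", clipId="HVC000001", eventId="E000")
      obs = ET.SubElement(recounting, "observation",
                          startTimeSec="12.4", endTimeSec="15.0",
                          boundingBox="40,60,200,180")
      obs.text = "A person performs the key action that defines the event."
      print(ET.tostring(recounting, encoding="unicode"))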

In addition to the data, TRECVID will provide uniform scoring
procedures, and a forum for organizations interested in comparing
their approaches and results.

Participants will be encouraged to share resources and intermediate
system outputs to lower entry barriers and enable analysis of various
components' contributions and interactions.


***************************************************
* You are invited to participate in TRECVID 2013 *
***************************************************

The evaluation is defined by the Guidelines. A draft version is
available: www-nlpir.nist.gov/projects/tv2013/tv2013.html and
details will be worked out starting in February based in part on input
from the participants.

You should read the guidelines carefully before applying to
participate.


P l e a s e   n o t e:
 
1) Dissemination of TRECVID work and results other than in the
(publicly available) conference proceedings is welcomed, but the
conditions of participation specifically preclude any advertising 
claims based on TRECVID results.

2) All system results submitted to NIST are published in the
Proceedings and on the public portions of the TRECVID web site archive.

3) The workshop is open only to participating groups that submit
results for at least one task and to selected government personnel
from sponsoring agencies and data donors.

4) Each participating group is required to submit before the November
workshop a notebook paper describing their experiments and results.
This is true even for groups who may not be able to attend the
workshop.

5) It is the responsibility of each team contact to make sure that
information distributed via the call for participation and the
tv13.list@nist.gov email list is disseminated to all team members
with a need to know. This includes information about deadlines and
restrictions on use of data.

6) By applying to participate you indicate your acceptance of the
above conditions and obligations.


T e n t a t i v e   s c h e d u l e

There is a tentative schedule for the tasks included in the Guidelines
webpage.


W o r k s h o p   f o r m a t

The 2 1/2 day workshop itself, November 20-22 (Wed-Fri) at NIST in
Gaithersburg, Maryland, near Washington, DC, will be used as a forum
both for presentation of results (including failure analyses and
system comparisons), and for more lengthy system presentations
describing retrieval techniques used, experiments run using the data,
and other issues of interest to researchers in information
retrieval. As there is a limited amount of time for these
presentations, the evaluation coordinators and NIST will determine
which groups are asked to speak and which groups will present in a
poster session. Groups that are interested in having a speaking slot
during the workshop will be asked to submit a short abstract before
the workshop describing the experiments they performed. Speakers will
be selected based on these abstracts.


H o w   t o   r e s p o n d   t o   t h i s   c a l l

Organizations wishing to participate in TRECVID 2013 must respond
to this call for participation by submitting an on-line application by
21 February.  Only ONE APPLICATION PER TEAM please, regardless of how
many organizations the team comprises.

*PLEASE* only apply if you are able and fully intend to complete the
work for at least one task. Taking the data but not submitting any
runs threatens the continued operation of the workshop and the
availability of data for the entire community.

Here is the application URL: 

ir.nist.gov/tv-submit.open/application.html 


Once you have applied, you'll be given the active participant's userid
and password, be subscribed to the tv13.list email discussion list,
and can participate in finalizing the guidelines as well as sign up to
get the data, which is controlled by separate passwords. 


T R E C V I D   2 0 1 3   e m a i l   d i s c u s s i o n   l i s t

The tv13.list email discussion list (tv13.list@nist.gov) will serve as
the main forum for discussion and for disseminating information about
TRECVID 2013.  It is each participant's responsibility to monitor the
tv13.list postings.  It accepts postings only from the email addresses
used to subscribe to it.  At the bottom of the guidelines there is a
link to an archive of past postings available using the active
participant's userid/password.


Q u e s t i o n s

Any administrative questions about conference participation,
application format/content, subscriptions to the tv13.list,
etc. should be sent to Lori.Buckland at nist.gov

If you would like to contribute to TRECVID with respect to the need(s)
listed below or in some other way, please contact Paul Over (info at
bottom of page) at your earliest convenience:

- agree to host IACC.2 test video data for download by other
  participants on a fast, password-protected site. (Asian and European
  sites especially needed)

Best regards,

Paul Over
Alan Smeaton
Wessel Kraaij
Georges Quenot
Jon Fiscus
Greg Sanders


Last updated: Thursday, 31-Jan-2013 09:21:15 EST
Date created: Wednesday, 25-Jan-12
For further information contact