Cathy Marshall of Microsoft is chairing the session called “Users and Interaction Track: Interacting with Media.”

  • *Addressing the Challenge of Visual Information Access from Digital Image and Video Archives
    Michael G. Christel, Ronald M. Conescu
    Mike Christel presenting. Concept-based vs. content-based retrieval of video. Concept-based uses manually written text descriptions of what is in the video: very labor intensive, and the annotation is incomplete, inconsistent, and shaped by each annotator’s own experience and knowledge. Words have a tough time describing pictures. Content-based retrieval is done by computers: good at color, texture, shapes, and motion (or the lack thereof), but people don’t know how to ask content-based systems for what they want. A semantic problem; even forming a query is difficult. Can content-based analysis get to concept descriptions, i.e., know whether the image shows buildings, cars, etc.? (A sketch of a typical content-based match follows these notes.)
    TRECVID is the TREC video search evaluation: a NIST-sponsored corpus of news videos, moving to international news video next year. Given a multimedia need and a topic, can a user find the document (a.k.a. the video)?
    CMU created an interface of storyboards of keyframes, with three kinds of search: “best sets,” like best roads or best people, built imperfectly by machine; color match, e.g., everything with a yellow background; and text query. 24 novices in the study, each working independently for 15 minutes on each of 4 topics, full system vs. video only.
    Precision was very close between the runs of the two systems. The full system generally, but not always, wins. When does video only win? On specific topics rather than generic ones: “a man speaking” vs. “Bill Clinton,” say. On generic topics the full system always beats video only, and people liked the full system better. So the full system (which includes closed captions, etc.) wins overall, the same for experts as for novices. People do better than automatic runs (run without interactive feedback). (A sketch of the precision measure appears after the questions below.)
    Experts used “best” search much more than novices. Novices, even with video only, used much more text in their searches than experts. Why do novices use text even when there is hardly any text there?
    See the paper for more.

    For novices, text-annotated search is much better than video only. The interfaces encourage text-only searching.
    http://www.infomedia.cs.cmu.edu
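
    To make “good at color, texture, shapes” concrete, here is a minimal sketch of one common content-based technique, color-histogram matching. This is a generic illustration, not Informedia’s actual feature pipeline; the function names and the random stand-in frames are invented.

    ```python
    import numpy as np

    def color_histogram(frame: np.ndarray, bins: int = 8) -> np.ndarray:
        """Normalized 3D color histogram of an H x W x 3 uint8 frame."""
        hist, _ = np.histogramdd(
            frame.reshape(-1, 3),
            bins=(bins, bins, bins),
            range=((0, 256), (0, 256), (0, 256)),
        )
        return hist / hist.sum()

    def histogram_intersection(h1: np.ndarray, h2: np.ndarray) -> float:
        """Similarity in [0, 1]; 1.0 means identical color distributions."""
        return float(np.minimum(h1, h2).sum())

    # A query keyframe and a candidate shot from the archive (random stand-ins).
    rng = np.random.default_rng(0)
    query = rng.integers(0, 256, size=(240, 320, 3), dtype=np.uint8)
    candidate = rng.integers(0, 256, size=(240, 320, 3), dtype=np.uint8)
    print(histogram_intersection(color_histogram(query), color_histogram(candidate)))
    ```

    Note that the user of such a system has to supply an example image or raw feature values rather than words, which is exactly the semantic gap described above.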

    Questions:
    Cathy asked about improvements: how do we get there and be as good as Google Image Search? Better detectors (like the road detector) will bring more precision, even though recall will still suck.
    What about color and texture searches, i.e., “find more like this”? They worked, sort of.
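
    And a minimal sketch of the precision measure behind the full-system vs. video-only comparison, assuming TRECVID-style per-topic relevance judgments; the shot IDs and judgments are invented for illustration, not data from the paper.

    ```python
    def precision_at_k(ranked_shots: list[str], relevant: set[str], k: int) -> float:
        """Fraction of the top-k retrieved shots that are judged relevant."""
        return sum(shot in relevant for shot in ranked_shots[:k]) / k

    # Hypothetical results for one topic under the two study conditions.
    relevant = {"shot_03", "shot_07", "shot_11", "shot_19"}
    full_system = ["shot_03", "shot_11", "shot_42", "shot_07", "shot_55"]
    video_only = ["shot_42", "shot_03", "shot_55", "shot_11", "shot_90"]

    for name, run in [("full system", full_system), ("video only", video_only)]:
        print(f"{name}: P@5 = {precision_at_k(run, relevant, 5):.2f}")
    ```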

  • *Assessing Tools for Use with Webcasts
    Elaine Toms, Christine Dufour, Jonathan Lewis, Ron Baecker
    Christine Dufour speaking. Reusing webcasts is the topic. abstract here. How do people use the tools for webcasts? The ePresence webcasting system was used, with 5 tools: video window, slide window, timeline, search button, and table of contents. Users were given three tasks; n=16 students; 56% had never used ePresence. Not sure what to take from this one. A useful critique of how folks use ePresence and of ePresence Interactive Media, I guess. Can we extend from this? And to what? Looks like text is the big winner again here.
    Questions:
    What is the user motivation? So far assigned tasks, not true user needs.
    Rate of tool use: how much time is spent with each, and does that matter after all?
    Cathy asks how the table of contents was created: by PowerPoint or by human annotation? Inconsistent. Since PowerPoint was the most useful (since it had the most text, I bet), how would people do if just given the PowerPoint? (NB: Cathy’s company owns PowerPoint.)

  • Exploring User Perceptions of Digital Image Similarity
    Unmil P. Karadkar, Richard Furuta, Jin-Cheon Na
    Unmil speaking, from Texas A&M. MIDAS: Multi-device integrated dynamic a? systems. abstract here. MIDAS user studies here.