
    Program Chair Meeting Amsterdam 2010

    Demos

 

On Thursday the 24th of June, the University of Amsterdam will show several demos during the workshop breaks at the following times and locations:

Time: 10.30 - 11.00 hrs (workshop break)    Location: F 0.09 & F 0.13
Time: 14.15 - 14.45 hrs (workshop break)    Location: F 0.09 & F 0.13

The location map can be found here.

 


 

MediaMill (by Koen van de Sande)

In this technical demonstration, we showcase the MediaMill system, a search engine that facilitates access to video archives at a semantic level. The core of the system is a large lexicon of automatically detected semantic concepts. Based on this lexicon, we demonstrate how users can obtain highly relevant retrieval results using query-by-concept. In addition, we show how the lexicon of concepts can be exploited for novel applications using advanced semantic visualizations. Several aspects of the MediaMill system were evaluated as part of our TRECVID 2009 efforts.
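As a rough illustration of the query-by-concept idea (a toy sketch, not the MediaMill implementation; all shot names and scores below are made up), a query concept simply ranks shots by the confidence of the corresponding detector from the lexicon:

    # Minimal illustration of query-by-concept ranking (not the MediaMill code).
    # Assumes each video shot has precomputed detector confidences for a concept lexicon.

    from typing import Dict, List, Tuple

    # Hypothetical detector output: shot id -> {concept: confidence in [0, 1]}
    shot_scores: Dict[str, Dict[str, float]] = {
        "shot_001": {"singer": 0.91, "guitar": 0.20, "crowd": 0.75},
        "shot_002": {"singer": 0.10, "guitar": 0.85, "crowd": 0.40},
        "shot_003": {"singer": 0.55, "guitar": 0.60, "crowd": 0.95},
    }

    def query_by_concept(query: List[str], scores: Dict[str, Dict[str, float]],
                         top_k: int = 10) -> List[Tuple[str, float]]:
        """Rank shots by the average confidence of the queried concepts."""
        ranked = []
        for shot, concepts in scores.items():
            combined = sum(concepts.get(c, 0.0) for c in query) / len(query)
            ranked.append((shot, combined))
        ranked.sort(key=lambda pair: pair[1], reverse=True)
        return ranked[:top_k]

    print(query_by_concept(["singer", "crowd"], shot_scores))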

 

Crowd Sourcing Concert Video Retrieval (Cees Snoek)

In this technical demonstration, we showcase a video search engine that facilitates semantic access to archival rock-'n'-roll concert video. The key novelty is the crowd-sourcing mechanism, which relies on online users to improve, extend, and share automatically detected results in video fragments using an advanced timeline-based video player. The user feedback serves as valuable input to further improve automated video retrieval results, such as automatically detected singers, guitar players, and drummers.
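A toy sketch of how crowd feedback could be folded back into the automatic detections (illustrative only; the fragment structure and the weighting scheme are assumptions, not the system's actual mechanism):

    # Illustrative sketch: user votes on timeline fragments are blended with
    # the automatically detected concept scores.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Fragment:
        start: float                  # seconds into the concert video
        end: float
        concept: str                  # e.g. "guitar player"
        auto_score: float             # detector confidence in [0, 1]
        votes: List[int] = field(default_factory=list)  # +1 confirm, -1 reject

        def fused_score(self, weight: float = 0.3) -> float:
            """Blend the detector score with the average crowd vote."""
            if not self.votes:
                return self.auto_score
            crowd = (sum(self.votes) / len(self.votes) + 1) / 2  # map [-1,1] -> [0,1]
            return (1 - weight) * self.auto_score + weight * crowd

    frag = Fragment(start=12.0, end=18.5, concept="drummer", auto_score=0.62)
    frag.votes.extend([1, 1, -1])
    print(round(frag.fused_score(), 3))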

MOCATOUR (Frank Nack), only during the workshop break of 14.15 - 14.45 hrs!

The central topic of the MOCATOUR (Mobile Cultural Access for Tourists) project is to establish computational methods that provide tourists with contextualized and experience-based access to information while they freely explore a city. In the presentation we describe the work on experience representation through virtual graffiti, the related story engine for associative storytelling, and the use of Flickr and Twitter data for mobile location experiences.

 

 

Depressive Robot Dog "Marvin" Recognizes Objects (Jan-Mark Geusebroek)

Our robot dog is inspired by the robot "Marvin" in the legendary novel "The Hitchhiker's Guide to the Galaxy" by Douglas Adams. We connect a robot dog (Sony AIBO) through our software to computer systems all over the world, enabling the dog to perform computations on these systems in parallel. The AIBO can thus use the overwhelming compute power, a computing cluster the size of a small planet, available at its nose-tip. To demonstrate that this indeed works, we distribute a simple object recognition task over these computer systems to determine which of its favorite toys is in front of it.
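The distribute-and-combine pattern behind the demo can be sketched as follows (a stand-in using local worker processes instead of remote computer systems; the features and toy models are made up):

    # Sketch of the distribute-and-combine pattern: each worker scores one
    # candidate toy model against the observed image features, and the best
    # match wins. Local processes stand in for the remote compute systems.

    from multiprocessing import Pool

    # Hypothetical feature vectors; in the real demo these would come from the
    # robot's camera and stored object models.
    observed = [0.2, 0.8, 0.5, 0.1]
    toy_models = {
        "pink ball":  [0.1, 0.7, 0.6, 0.2],
        "bone":       [0.9, 0.1, 0.3, 0.8],
        "teddy bear": [0.3, 0.8, 0.4, 0.1],
    }

    def score(item):
        """Return (toy name, similarity) using a simple dot product."""
        name, model = item
        return name, sum(a * b for a, b in zip(observed, model))

    if __name__ == "__main__":
        with Pool(processes=3) as pool:          # one worker per candidate toy
            results = pool.map(score, toy_models.items())
        best = max(results, key=lambda r: r[1])
        print("Most likely toy in front of the dog:", best[0])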

MediaTable System (Ork de Rooij)

Many multimedia collections have only basic metadata, such as date and file size, and remain largely unannotated, which makes browsing them cumbersome. To augment existing metadata, we employ automatic content analysis techniques that yield metadata in the form of high-level content-based descriptors. This provides users with a good starting point, but the accuracy is insufficient to automate collection categorization. A human in the loop is essential to validate and organize the results of automated techniques. To that end, we present MediaTable, which helps users efficiently categorize an unknown collection of images or videos. A tabular interface provides an overview of multimedia items and associated metadata, and a bucket list allows users to quickly categorize materials. We adapt familiar interface techniques for sorting, filtering, selection, and visualization for use in this new setting. We evaluated MediaTable with both non-expert and expert users. Results indicate that MediaTable yields an efficient categorization process and provides valuable insight into the collection.
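A toy sketch of the underlying interaction model, a sortable table of items with concept scores plus user-defined buckets (illustrative names and values, not MediaTable's actual data model):

    # Toy sketch of the core interaction: a sortable table of items with
    # automatically derived concept scores, plus user-defined buckets.

    from typing import Dict, List

    items: List[Dict] = [
        {"id": "img_01", "date": "2009-05-01", "size_kb": 240, "beach": 0.91, "indoor": 0.05},
        {"id": "img_02", "date": "2009-05-03", "size_kb": 180, "beach": 0.12, "indoor": 0.88},
        {"id": "img_03", "date": "2009-05-07", "size_kb": 320, "beach": 0.67, "indoor": 0.20},
    ]
    buckets: Dict[str, List[str]] = {"holiday": [], "office": []}

    def sort_by(column: str) -> List[Dict]:
        """Sort the table on one metadata or concept column, highest first."""
        return sorted(items, key=lambda row: row.get(column, 0), reverse=True)

    def assign(item_id: str, bucket: str) -> None:
        """Drop an item into a bucket, the manual categorization step."""
        buckets[bucket].append(item_id)

    # Sort on the 'beach' detector, then put the top hit in the holiday bucket.
    top = sort_by("beach")[0]
    assign(top["id"], "holiday")
    print(buckets)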

 

EEG-based Semantic Image Tagging (Sennay Ghebreab & Jeroen Kools)

Semantic tagging of images has a broad spectrum of applications of both scientific and commercial interest. Experience has shown that semantic tagging through interaction between human taggers in a game setting is very effective. So far, implementations of such interactions have depended on explicit manual input by the users. We are developing a novel tagging approach that not only uses signals from mouse and keyboard, but also utilizes brain activity data in the form of electroencephalography (EEG) recordings. In this demonstration, we record EEG while humans passively watch natural images and tag the observed images as "animal" or "non-animal" based on the recorded EEG data and the visual features underlying the observed images. Currently, this tagging approach is being cast into a game. If it can be determined whether two or more humans are looking at the same image, or at images from the same category, solely on the basis of evoked brain potentials and visual features, that would establish an effective and efficient approach to collecting semantic information.
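A minimal sketch of the tagging step, here with a nearest-centroid classifier on placeholder EEG feature vectors (the real demo combines recorded EEG with visual image features; all data below is synthetic):

    # Illustrative nearest-centroid classifier on synthetic EEG feature vectors.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical training data: averaged evoked-potential features per trial.
    animal_trials = rng.normal(loc=0.5, scale=1.0, size=(40, 16))
    nonanimal_trials = rng.normal(loc=-0.5, scale=1.0, size=(40, 16))

    centroids = {
        "animal": animal_trials.mean(axis=0),
        "non-animal": nonanimal_trials.mean(axis=0),
    }

    def tag_from_eeg(trial: np.ndarray) -> str:
        """Tag the viewed image by the closest class centroid."""
        return min(centroids, key=lambda label: np.linalg.norm(trial - centroids[label]))

    new_trial = rng.normal(loc=0.5, scale=1.0, size=16)   # unseen trial
    print(tag_from_eeg(new_trial))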

Intelligent Mobile Agents & A Distributed Environmental Monitoring System (Andi Winterboer)

The DIADEM project deals with human interaction with distributed intelligent networks through mobile phones. We specifically focus on a distributed gas-detection network and human users living in a heavily populated and industrialized area that requires environmental monitoring to quickly detect potentially hazardous situations. If a potential hazard is detected or reported, the system calls upon human observation in and around the affected area to gather more information. For this purpose, participating users are requested by a mobile agent to self-report their observations, which are then communicated to the central system. If necessary, the system provides location-based warnings and safety instructions. In this demo, we present a mobile application that requests information from users about their olfactory perceptions to inform the monitoring system. The application deploys different dialogue strategies (e.g., offering smell associations, requesting pleasantness information), choosing the question that is most likely to be informative for the detection system as determined by previously collected user feedback.
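A toy sketch of the strategy selection step (illustrative only; the questions, answer distributions, and the entropy criterion are assumptions, not the DIADEM implementation):

    # Sketch of question selection: ask the question whose predicted answers
    # best separate the current odor hypotheses, measured by answer entropy.

    import math
    from typing import Dict

    def entropy(dist: Dict[str, float]) -> float:
        """Shannon entropy of a discrete answer distribution."""
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    # Predicted answer distributions per candidate question, estimated from
    # previously collected user feedback (hypothetical numbers).
    questions = {
        "Which smell association fits best?": {"gas": 0.4, "sulphur": 0.35, "none": 0.25},
        "How unpleasant is the smell (1-5)?":  {"low": 0.8, "high": 0.2},
    }

    best = max(questions, key=lambda q: entropy(questions[q]))
    print("Next question to send to the mobile agent:", best)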

EventMedia: Event-based Annotation and Exploration of Media (Andre Fialho)

In this project we aim to develop an event-based approach to explore, annotate, and share user-generated media. The goal is to provide a web-based environment that allows users to experience and discover meaningful, surprising, or entertaining connections between events. We use a knowledge base of events from different event directories linked to the LOD cloud, in conjunction with an event ontology. Furthermore, events are enriched with user-generated media and social network metadata to explore relevance in the context of user-identified tasks. The research is user-driven and investigates user interfaces that visualize the links between users, multimedia content, and events.
Monocular Face Analysis, Multiple Videos (Roberto Valenti)

In this demo, I will show technologies developed by the University of Amsterdam for face analysis, such as accurate eye location, head pose estimation, gaze estimation, and facial expression recognition.

Tag Relevance Learning for Social Image Retrieval (Xirong Li)

Since social tagging is known to be subjective and inaccurate, how to interpret the relevance of a user-contributed tag with respect to the visual content of an image is a key problem in social image retrieval. In this technical demonstration, we showcase a social image retrieval system in which the problem of subjective social tagging is tackled by a neighbor-voting-based tag relevance learning approach.
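The neighbor-voting idea can be sketched as follows: a tag receives a vote from each visual neighbor labeled with it, corrected for how often the tag occurs in the collection at large (a minimal illustration with made-up tags, not the demo's actual code):

    # Minimal sketch of neighbor-voting tag relevance: votes from an image's
    # visual neighbors, minus the votes expected by chance from the tag prior,
    # so globally popular tags are not favoured. Data below is illustrative.

    from typing import List, Set

    def tag_relevance(tag: str, neighbor_tags: List[Set[str]],
                      collection_tags: List[Set[str]]) -> float:
        """Votes from the k visual neighbors minus the expected votes by chance."""
        k = len(neighbor_tags)
        votes = sum(1 for tags in neighbor_tags if tag in tags)
        prior = sum(1 for tags in collection_tags if tag in tags) / len(collection_tags)
        return votes - k * prior

    # Tags of the query image's 5 nearest visual neighbors (hypothetical).
    neighbors = [{"beach", "sunset"}, {"beach"}, {"beach", "dog"}, {"city"}, {"beach"}]
    # Tags of the whole collection, used to estimate tag priors (hypothetical).
    collection = [{"beach"}, {"city"}, {"dog"}, {"beach", "sunset"}, {"party"},
                  {"city", "night"}, {"beach"}, {"portrait"}]

    print(tag_relevance("beach", neighbors, collection))   # high: beach is relevant
    print(tag_relevance("city", neighbors, collection))    # low: city is unlikely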