MEET HANS

Our latest demo showcases HanS’ capabilities. The HanS system combines off-the-shelf ROS modules for navigation and localisation with novel, customised components for visual sensemaking and commonsense reasoning. HanS was trained to recognise 60 object classes commonly found at the Knowledge Media Institute from only a few training examples. Deep Learning methods are enhanced by symbolic reasoning modules that consider the typical sizes, spatial relations and common-sense facts characterising the recognised objects. HanS is also aware of the Health and Safety rules enforced in the environment, so that it can assess the risk of the observed situations.
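As a rough illustration of how the symbolic side can complement the neural detector, the sketch below re-ranks a detector's candidate labels using typical object sizes. The class names, size ranges and scoring function are illustrative assumptions, not HanS' actual knowledge base or code.

```python
# Illustrative sketch: down-weight a detector's label when the observed
# object size conflicts with the typical size stored in a small knowledge
# base. Classes, size ranges and thresholds are hypothetical.

TYPICAL_SIZE_CM = {              # typical longest-side length per class
    "fire extinguisher": (50, 70),
    "portable heater": (30, 60),
    "coffee mug": (8, 15),
}

def size_plausibility(label, observed_cm, tolerance=0.5):
    """Return a score in [0, 1]: 1 inside the typical range, decaying
    the further the observed size falls outside it."""
    lo, hi = TYPICAL_SIZE_CM[label]
    if lo <= observed_cm <= hi:
        return 1.0
    gap = (lo - observed_cm) if observed_cm < lo else (observed_cm - hi)
    return max(0.0, 1.0 - gap / (tolerance * (hi - lo)))

def rerank(predictions, observed_cm):
    """Combine detector confidence with size plausibility and re-rank."""
    scored = [(conf * size_plausibility(label, observed_cm), label)
              for label, conf in predictions]
    return sorted(scored, reverse=True)

# A 45 cm object is unlikely to be a coffee mug, even if the detector
# was fairly confident about that label.
print(rerank([("coffee mug", 0.6), ("portable heater", 0.5)], observed_cm=45))
```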


VISUAL INTELLIGENCE for SERVICE ROBOTS

Video presentation at KR 2020 - the 17th International Conference on Principles of Knowledge Representation and Reasoning, where we presented our paper "Towards a Framework for Visual Intelligence in Service Robotics: Epistemic Requirements and Gap Analysis".

Read Paper

POSTER MANIFESTO

This poster outlines our main research goals, aimed at building "Visually Intelligent Agents".
You can click on the thumbnail to take a look at the full-size poster.
See also our latest paper, which has been accepted for presentation at KR 2020, for further reference.

Read Paper

DATA HUB INTEGRATION

In this demo, HanS' location and object predictions are sent on-the-fly to the MK Data Hub, the large-scale Smart City infrastructure developed at KMi.
Both the robot's and the object locations are visualized on a real-time map of the environment.
The bounding boxes used to segment the objects from the background were produced through YOLO. Objects were then classified through the top-performing pipeline introduced here.
Credits to Jason Carvalho for developing the Web interface used for this visualization.
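For readers curious about the data flow, here is a minimal sketch of how detections could be streamed to a data hub over HTTP. The endpoint URL, payload schema and authentication are placeholders; they do not document the MK Data Hub's actual API.

```python
# Illustrative sketch of streaming detections to a Smart City data hub
# over HTTP. Endpoint, schema and API key are hypothetical.
import json
import time
import urllib.request

HUB_ENDPOINT = "https://example-datahub.org/api/observations"  # placeholder

def publish_observation(robot_pose, detections, api_key="YOUR_KEY"):
    """Send the robot's pose and its current object detections as JSON."""
    payload = {
        "timestamp": time.time(),
        "robot": {"x": robot_pose[0], "y": robot_pose[1]},
        "objects": [
            {"label": label, "confidence": conf, "x": x, "y": y}
            for label, conf, x, y in detections
        ],
    }
    request = urllib.request.Request(
        HUB_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(request) as response:
        return response.status

# Example call with one detection (a fire extinguisher near the robot):
# publish_observation(robot_pose=(3.2, 1.7),
#                     detections=[("fire extinguisher", 0.91, 3.5, 2.0)])
```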

GITHUB REPO

All the code we implement is publicly available and maintained on GitHub.

Specifically, here you can find our few-shot object recognition module, which is trained to learn the similarity between a handful of images collected from the target environment and reference images drawn from ShapeNet and Google.
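As a rough sketch of the underlying idea, the snippet below classifies a query crop by comparing its embedding against a few reference embeddings per class and picking the most similar class. The ResNet-18 backbone and cosine-similarity scoring are assumptions chosen for illustration; the repository's actual model may differ.

```python
# Illustrative similarity-based (few-shot) classification: embed a query
# crop and compare it to a few support crops per class.
import torch
import torch.nn.functional as F
import torchvision.models as models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # use pooled features as the embedding
backbone.eval()

@torch.no_grad()
def embed(image_batch):
    """image_batch: (N, 3, 224, 224) tensor of preprocessed crops."""
    return F.normalize(backbone(image_batch), dim=1)

@torch.no_grad()
def classify(query, references):
    """references: dict mapping class name -> (K, 3, 224, 224) support crops.
    Returns the class whose support images are most similar on average."""
    q = embed(query.unsqueeze(0))                         # (1, D)
    scores = {
        label: (embed(support) @ q.T).mean().item()       # mean cosine similarity
        for label, support in references.items()
    }
    return max(scores, key=scores.get)
```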

At test time, the weakest prediction in each image frame is further validated against a combination of ConceptNet, WordNet and Visual Genome, which can suggest an alternative classification whenever a better one is available.
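The following sketch illustrates this kind of validation step: the least confident prediction in a frame is swapped for an alternative label when that alternative fits the surrounding objects better. The co-occurrence table is a hand-written stand-in for statistics that could be mined from ConceptNet, WordNet or Visual Genome, not our actual knowledge-base query.

```python
# Illustrative validation of the weakest prediction in a frame using
# hypothetical "seen near" co-occurrence scores in [0, 1].
CO_OCCURRENCE = {
    ("keyboard", "monitor"): 0.9,
    ("mug", "monitor"): 0.6,
    ("fire extinguisher", "monitor"): 0.1,
}

def plausibility(label, context_labels):
    """Average co-occurrence of `label` with the other objects in the frame."""
    scores = [CO_OCCURRENCE.get((label, c), CO_OCCURRENCE.get((c, label), 0.0))
              for c in context_labels]
    return sum(scores) / len(scores) if scores else 0.0

def validate_weakest(frame_predictions, candidates, margin=0.2):
    """frame_predictions: list of (label, confidence) for the current frame.
    candidates: alternative labels proposed by the recognition module.
    Replaces the weakest prediction if a candidate fits the context better."""
    weakest_idx = min(range(len(frame_predictions)),
                      key=lambda i: frame_predictions[i][1])
    weakest_label = frame_predictions[weakest_idx][0]
    context = [label for i, (label, _) in enumerate(frame_predictions)
               if i != weakest_idx]
    best = max(candidates, key=lambda c: plausibility(c, context))
    if plausibility(best, context) > plausibility(weakest_label, context) + margin:
        return best
    return weakest_label

# A low-confidence "fire extinguisher" seen next to a monitor and keyboard
# is more plausibly a mug.
print(validate_weakest([("monitor", 0.95), ("keyboard", 0.9),
                        ("fire extinguisher", 0.3)],
                       candidates=["mug", "fire extinguisher"]))
```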

MEET HANS

An early demo of HanS, our robot, who is aware of our Health and Safety guidelines and is able to navigate the lab, checking that these are enforced. In this early demo, HanS goes around KMi spotting portable heaters, identified through ARTags.
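A minimal ROS node along these lines is sketched below: it listens for AR tag detections (here via the ar_track_alvar topic and message type) and logs a warning when a tag mapped to a portable heater is seen. The tag IDs, node name and tag-to-object mapping are hypothetical, not the configuration used in the demo.

```python
#!/usr/bin/env python
# Illustrative ROS node: warn when an AR tag attached to a portable heater
# is detected. Topic and message type follow the ar_track_alvar package;
# the tag IDs below are made up for this example.
import rospy
from ar_track_alvar_msgs.msg import AlvarMarkers

HEATER_TAG_IDS = {3, 7}      # example IDs taped onto the heaters

def on_markers(msg):
    for marker in msg.markers:
        if marker.id in HEATER_TAG_IDS:
            pos = marker.pose.pose.position
            rospy.logwarn("Portable heater (tag %d) spotted at x=%.2f y=%.2f",
                          marker.id, pos.x, pos.y)

def main():
    rospy.init_node("heater_spotter")
    rospy.Subscriber("ar_pose_marker", AlvarMarkers, on_markers)
    rospy.spin()

if __name__ == "__main__":
    main()
```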

Read Paper