This poster outlines our main research goals, aimed at building "Visually Intelligent Agents".
You can click on the thumbnail to take a look at the full-size poster.
For further details, see also our latest paper, which has been accepted for presentation at KR 2020.

Read Paper


In this demo, HanS' location and object predictions are sent on the fly to the MK Data Hub, the large-scale Smart City infrastructure developed at KMi.
Both the robot's and the object locations are visualized on a real-time map of the environment.
The bounding boxes used to segment the objects from the background were produced through YOLO. Objects were classified through the top-performing pipeline introduced here.
Credits to Jason Carvalho for developing the Web interface used for this visualization.
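To give a flavour of this setup, here is a minimal sketch of the kind of timestamped record a robot could stream to a data hub for map visualization. The endpoint URL, field names, and values are all illustrative assumptions, not the actual MK Data Hub schema:

```python
import json
import time

# Placeholder endpoint: the real MK Data Hub API is not shown here.
HUB_ENDPOINT = "https://example-datahub/api/observations"

def build_observation(robot_pose, detections):
    """Bundle the robot's pose and its object predictions into one
    timestamped JSON record, ready to be POSTed to the hub."""
    return json.dumps({
        "timestamp": time.time(),
        "robot": {"x": robot_pose[0], "y": robot_pose[1], "theta": robot_pose[2]},
        "objects": [
            {"label": label, "confidence": conf, "x": x, "y": y}
            for (label, conf, x, y) in detections
        ],
    })

payload = build_observation(
    robot_pose=(3.2, 1.5, 0.0),
    detections=[("fire_extinguisher", 0.91, 3.9, 1.1)],
)
record = json.loads(payload)
```

Each record carries both the robot's pose and the predicted object positions, which is what allows both to be plotted on the same real-time map.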


All the code we implement is publicly available and maintained on Github.

Specifically, here you can find our few-shot object recognition module, trained to learn the similarity between a handful of images collected from the target environment and reference images drawn from ShapeNet and Google.
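At its core, this kind of few-shot recognition reduces to matching an embedded query image against a small bank of reference embeddings. The sketch below illustrates the matching step only, with a toy nearest-neighbour search over cosine similarity; the embeddings and labels are made-up stand-ins for what the trained module would produce:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(query_emb, reference_bank):
    """Return the label whose reference embedding is most similar
    to the query embedding, together with the similarity score."""
    best_label, best_score = None, -1.0
    for label, embeddings in reference_bank.items():
        for emb in embeddings:
            score = cosine_similarity(query_emb, emb)
            if score > best_score:
                best_label, best_score = label, score
    return best_label, best_score

# Toy reference bank: one hand-picked embedding per class.
bank = {
    "mug": [np.array([1.0, 0.1, 0.0])],
    "extinguisher": [np.array([0.0, 1.0, 0.2])],
}
label, score = classify(np.array([0.9, 0.2, 0.0]), bank)
```

In the real module the embedding function itself is learned from the handful of environment images, so that similarity in embedding space tracks object identity rather than raw pixel appearance.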

At test time, the weakest prediction in each image frame is further validated against a combination of ConceptNet, WordNet and Visual Genome, which suggests an alternative classification whenever a better one is available.


An early demo of HanS, our robot, which is aware of our health and safety guidelines and is able to navigate the lab, checking that they are enforced. In this demo, HanS goes around KMi spotting portable heaters identified through ARTags.

Read Paper