Lead Institution: Loughborough University
Industry Partners: Rolls-Royce, Quality Furniture Company, ABB, the Manufacturing Technology Centre
Project Team: Dr Niels Lohse (PI), Dr Peter Kinnell, Dr Andrea Soltoggio and Dr Ellie-Mae Hubbard (all Loughborough)
Project Duration: 18 months (01 April 2017 – 30 September 2018)
Close human-robot collaborative working will be essential to improving the competitiveness of high-wage manufacturing economies by increasing productivity without losing agility. Despite significant advances in robotics and autonomous systems, one of the most critical barriers to the successful introduction of these technologies in manufacturing is the lack of robust, real-time, high-fidelity awareness of the work space and all its actors. For collaborative and autonomous human-robot systems to become safe, the whole work space needs to be digitised in high detail. However, there is currently no integrated approach to support the complete digitisation of manufacturing work spaces. Advanced models for data fusion in industrial robots are limited to path planning and human safety in work cells in which operators do not collaborate with the robots. Data models that can efficiently manage the large datasets generated by a network of high-resolution cameras are therefore needed to enable a real-world human-robot collaborative manufacturing cell. These will provide the precursor for observation-driven deep learning approaches that do not rely on overly idealistic CAD models and prior knowledge.
Marker-based tracking is an option for lab environments but does not translate well to industrial environments, where it is unrealistic to attach markers to all the relevant components, such as machines and people. It is also not robust when unknown objects are introduced into the scene. This feasibility study will address this by investigating whether a network of standard 2D smart cameras, combined with a purely data-driven deep learning approach, can be used to recognise, localise and track multiple objects and people within a workspace, with robust and accurate real-time performance that rivals marker-based tracking systems (such as Vicon). This approach aims to exploit the ability of deep learning to effectively reduce the dimensionality of data and to reliably identify objects and people despite a high degree of variation in the input. If proven feasible, this will establish the foundation for a significant step-change in industrial workspace digitisation and 3D perception, an essential prerequisite for ubiquitous safety systems that do not impede the way people work.
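At its simplest, the tracking step described above can be sketched as a greedy nearest-neighbour association of per-frame detections (for example, centroids produced by a deep object detector) with persistent object identities. The sketch below is illustrative only, assuming 2D image-plane centroids and a fixed distance threshold; the function name and parameters are not part of the project.

```python
import math

def track_centroids(tracks, detections, max_dist=50.0):
    """Greedily assign new detections to existing tracks by distance.
    tracks: {track_id: (x, y)}; detections: list of (x, y) centroids.
    Returns the updated {track_id: (x, y)}; unmatched detections
    start new tracks with fresh ids."""
    updated = {}
    unmatched = list(detections)
    # Match each existing track to its nearest unclaimed detection.
    for tid, (tx, ty) in tracks.items():
        if not unmatched:
            break
        best = min(unmatched, key=lambda d: math.hypot(d[0] - tx, d[1] - ty))
        if math.hypot(best[0] - tx, best[1] - ty) <= max_dist:
            updated[tid] = best
            unmatched.remove(best)
    # Remaining detections become new tracks.
    next_id = max(tracks, default=-1) + 1
    for d in unmatched:
        updated[next_id] = d
        next_id += 1
    return updated

# Frame 1: two objects detected; frame 2: both move slightly, one new object.
t = track_centroids({}, [(10, 10), (100, 100)])
t = track_centroids(t, [(12, 11), (101, 99), (300, 300)])
```

A production system would replace the greedy matching with a globally optimal assignment (e.g. the Hungarian algorithm), add motion prediction, and fuse detections from multiple calibrated cameras into 3D, but the identity-association logic remains the same.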