
ULTRA ELECTRONICS LIMITED (UEL)
Country: United Kingdom
7 Projects, page 1 of 2
  • Funder: UK Research and Innovation; Project Code: EP/H011625/1
    Funder Contribution: 95,017 GBP

    Micro-Doppler is a perturbation on the echo returned from a target caused by the movement of its component parts, such as the wheels of a vehicle or the swinging arms and legs of a person. A great deal of information can therefore potentially be gained by analysing micro-Doppler returns from a target illuminated by radio-frequency (e.g. radar) or acoustic radiation. This study aims to investigate the processing techniques that may be applied to acoustic micro-Doppler signature (uDS) data. Specifically, methods to extract, classify and track the uDS of individual targets from background clutter and non-target backscatter signals will be developed. UCL has carried out extensive work in recent years on uDS-based target recognition using radar data, resulting in new algorithms and techniques for identifying and classifying targets. This work has concentrated in particular on identifying personnel and vehicle targets against returns from the background environment, and has been carried out in close collaboration with Thales Aerospace using field data obtained by both Thales and UCL with personnel-detecting radar. Much of this work could potentially be mapped onto the acoustic region, and this proposal presents a study to examine how the knowledge gained using radar data can be applied in the very different frequency ranges and propagation conditions of the acoustic regime. An acoustic camera will be used to record audio and video data from a scene. Signal characterisation will then be performed using theoretical models and techniques developed with radar data in the previous work. Micro-Doppler classification techniques will be adapted to the acoustic regime, alongside new methods suited to the potentially longer acquisition times at acoustic frequencies. Tracking algorithms will then be applied to the target returns, and methods to automate the entire detection and tracking process will be examined. The end result of the work should be a system that can detect, classify and track a range of targets, based on their acoustic uDS returns, in a range of different environments.

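The abstract above describes extracting a micro-Doppler signature by analysing the echo from a target with moving parts. Below is a minimal Python sketch of the idea: a simulated acoustic return from a walker, whose swinging limb superimposes a sinusoidal Doppler modulation on the bulk shift, analysed with a short-time Fourier transform. Every parameter (sample rate, illumination frequency, limb speeds, swing rate) is an illustrative assumption, not a value from the project.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 16000.0           # sample rate, Hz (assumed)
t = np.arange(0.0, 2.0, 1.0 / fs)
f0 = 4000.0            # acoustic "illumination" frequency, Hz (assumed)
c = 343.0              # speed of sound, m/s
v_body = 1.2           # bulk walking speed, m/s (assumed)
v_limb = 3.0           # peak limb speed about the body, m/s (assumed)
f_swing = 2.0          # limb swing rate, Hz (assumed)

# Instantaneous Doppler-shifted frequency: a constant bulk shift from the
# torso plus a sinusoidal micro-Doppler term from the swinging limb.
f_inst = f0 * (1.0 + (v_body + v_limb * np.cos(2.0 * np.pi * f_swing * t)) / c)
phase = 2.0 * np.pi * np.cumsum(f_inst) / fs   # phase = integral of frequency
echo = np.cos(phase) + 0.1 * np.random.randn(t.size)  # plus background noise

# Short-time Fourier analysis: the uDS appears as a sinusoidal ridge
# oscillating about the bulk Doppler line in the spectrogram.
freqs, times, Sxx = spectrogram(echo, fs=fs, nperseg=1024, noverlap=896)
ridge = freqs[np.argmax(Sxx, axis=0)]          # crude peak-frequency tracker
print(f"Doppler excursion of ridge: {ridge.max() - ridge.min():.1f} Hz")
```

The swing rate and excursion of that ridge are the kind of features a classifier could use to separate personnel from, say, wheeled vehicles.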
  • Funder: UK Research and Innovation; Project Code: EP/R006768/1
    Funder Contribution: 5,112,620 GBP

    The aim of this proposal is to create a robustly validated virtual prediction tool called a "digital twin". This is urgently needed to overcome limitations in current industrial practice, which increasingly relies on large computer-based models to make critical design and operational decisions for systems such as wind farms, nuclear power stations and aircraft. The digital twin is much more than a numerical model: it is a "virtualised" proxy of the physical system, built from a fusion of data with models of differing fidelity using novel techniques in uncertainty analysis, model reduction and experimental validation. In this project, we will deliver the transformative new science required to generate digital twin technology for key sectors of UK industry, specifically power generation, automotive and aerospace. The results will empower industry to create digital twins as predictive tools for real-world problems that (i) radically improve design methodology, leading to significant cost savings, and (ii) transform uncertainty management of key industrial assets, enabling a step-change reduction in the associated operation and management costs. Ultimately, we envisage that the scientific advances proposed here will revolutionise the engineering design-to-decommission cycle for a wide range of engineering applications of value to the UK.

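The abstract above centres on fusing data with models of differing fidelity while tracking uncertainty. As one hedged illustration of that fusion idea, and not the project's actual formulation, the sketch below runs a scalar Kalman filter: an assumed low-fidelity degradation model predicts a component's state, noisy sensor data corrects it, and the filter maintains its own variance. The decay factor, noise levels and "stiffness" state are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

a = 0.999            # assumed degradation model: stiffness decays slowly
q = 1e-4             # process-noise variance, standing in for model-form error
r = 0.05             # sensor-noise variance (assumed)

x_true = 1.0
x_est, p_est = 1.0, 0.1   # twin's state estimate and its variance

for step in range(200):
    x_true = a * x_true + rng.normal(0, np.sqrt(q))   # physical asset evolves
    z = x_true + rng.normal(0, np.sqrt(r))            # noisy measurement

    # Predict with the low-fidelity model, inflating uncertainty ...
    x_pred, p_pred = a * x_est, a * a * p_est + q
    # ... then correct with data; the Kalman gain weights model vs. sensor.
    k = p_pred / (p_pred + r)
    x_est = x_pred + k * (z - x_pred)
    p_est = (1 - k) * p_pred

print(f"twin estimate {x_est:.3f} +/- {np.sqrt(p_est):.3f} (truth {x_true:.3f})")
```

A real digital twin would replace the scalar model with reduced-order physics and the scalar gain with a calibrated uncertainty-quantification scheme, but the predict-then-correct loop is the same shape.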
  • Funder: European Commission; Project Code: 215167
  • Funder: UK Research and Innovation; Project Code: EP/E028594/1
    Funder Contribution: 623,617 GBP

    There are now large networks of CCTV cameras collecting colossal amounts of video data. Many deploy not only fixed but also mobile cameras on wireless connections, with an increasing number being either PTZ-controllable or embedded smart cameras. A multi-camera system has the potential to gain better viewpoints, resulting in both improved imaging quality and more relevant details being captured. However, more is not necessarily better: such a system can also cause information overflow and confusion if data content is not analysed in real time to make the correct camera selection and capture decisions. Moreover, current PTZ cameras are mostly controlled manually by operators using ad hoc criteria. There is an urgent need for automated systems that monitor the behaviour of people cooperatively across a distributed network of cameras and make on-the-fly decisions for more effective content selection in data capture. To date, there is no system capable of performing such tasks, and fundamental problems need to be tackled. This project will develop novel techniques for video-based people tagging (consistent labelling) and behaviour monitoring across a distributed network of CCTV cameras to enhance global situational awareness over a wide area. More specifically, we will focus on developing three critical underpinning capabilities:

    (a) To develop a model for robust detection and tagging of people over wide areas spanning different physical sites captured by a distributed network of cameras, e.g. monitoring the activities of a person travelling through a city or cities.

    (b) To develop a model for enhancing global situational awareness by correlating behaviours across a network of cameras at different physical sites, and for real-time detection of abnormal behaviour in public spaces across camera views. The model must be able to cope with changes in visual context and in the definition of abnormality; e.g. what counts as abnormal needs to be modelled by time of day, location and scene context.

    (c) To develop a model for automatic selection and control of Pan-Tilt-Zoom (PTZ) and embedded smart cameras (including wireless ones) in a surveillance network to 'zoom into' people based on behaviour analysis under a global situational awareness model, thereby actively sampling higher-quality visual evidence on the fly in a global context. For example, when a car enters a restricted zone having also been spotted stopping unusually elsewhere, the optimally situated PTZ or embedded smart camera is activated to perform adaptive image content selection and capture higher-resolution imagery of, e.g., the driver's face.

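Item (c) of the abstract above concerns automatically tasking the best-placed PTZ camera once a behaviour model flags a target. The sketch below shows one simple, hypothetical way to do the selection step: score each camera by availability and range, then task the highest scorer. The Camera fields, the view_score weighting and the scenario are illustrative assumptions, not the project's models.

```python
from dataclasses import dataclass
import math

@dataclass
class Camera:
    name: str
    x: float
    y: float
    max_range: float   # metres beyond which imagery is too coarse (assumed)
    busy: bool         # already tasked with another target

def view_score(cam: Camera, tx: float, ty: float) -> float:
    """Higher is better: prefer idle cameras with the target well in range."""
    if cam.busy:
        return -math.inf
    d = math.hypot(tx - cam.x, ty - cam.y)
    if d > cam.max_range:
        return -math.inf
    return 1.0 - d / cam.max_range      # closer targets allow sharper zoom

cameras = [
    Camera("gate",     0.0,  0.0,  60.0, busy=False),
    Camera("carpark", 50.0, 10.0,  80.0, busy=True),
    Camera("roof",    30.0, 40.0, 120.0, busy=False),
]

# A behaviour model has flagged a vehicle stopping unusually at (35, 20):
tx, ty = 35.0, 20.0
best = max(cameras, key=lambda c: view_score(c, tx, ty))
print(f"task {best.name} to zoom into target at ({tx}, {ty})")
```

In the project's setting the score would come from the global situational awareness model rather than bare distance, but the select-and-task loop has this structure.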
  • Funder: European Commission; Project Code: 232190
