Sensing and Imaging for Intelligent Transport
Intelligent transport systems are a key element within future smart cities.
Intelligent transport systems combine sensing and imaging with advanced information and communication technology to improve the safety, efficiency, and security of transport infrastructure, whilst reducing its impact on the environment. Real-time information from a network of sensors allows more effective use and operation of public transport, logistics management, road and rail operations, and the management of major transport hubs.
At QuantIC we are developing cameras that can track objects around corners and 3D imaging technology capable of providing images through hard-to-see environments such as heavy rain, snow or fog. These technologies can contribute to advanced traffic management systems, which integrate traffic data, cameras and speed sensors to improve traffic flow and incident detection.
We are also developing cameras and sensors targeted at increasing the safety of our transport infrastructure. Intelligent sensing and imaging systems can deliver improved scanning of goods in logistics operations, intelligent crowd monitoring and management at transport hubs, remote temperature screening of passengers, and stand-off identification of dangerous substances, as well as monitoring of emission levels at junctions and in cities.
QuantIC technology for intelligent transport systems
REAL TIME 3D IMAGING
Most conventional cameras only see in 2D. By timing how long it takes a pulse of light to bounce back from an object, we can produce a 3D image in real time. This makes the approach well suited to depth imaging through complex and challenging conditions such as fog, smoke, turbid water, or camouflage by foliage.
Building upon the work in QuantIC phase one, we are using pulsed illumination and SPAD sensor arrays to record the range of each image pixel (>10,000 pixels) simultaneously. Results have shown centimetre depth resolution at ranges of tens or even hundreds of metres, at frame rates up to 30Hz. The sensor data is denoised using sophisticated data processing to give 3D representations of the surrounding environment.
This technology will be able to provide visibility in inclement weather and help with situational awareness for driver-assisted vehicles.
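The time-of-flight principle described above can be sketched numerically. The following is a minimal illustration, not QuantIC's actual processing pipeline: the 3x3 patch of first-photon arrival times is invented for the example.

```python
# Minimal sketch of pulsed time-of-flight depth imaging (synthetic arrival
# times, not real SPAD data): each pixel records the round-trip time of a
# light pulse, which converts directly to range.

C = 299_792_458.0  # speed of light, m/s

def depth_m(round_trip_s):
    """One-way distance from a round-trip photon travel time."""
    return C * round_trip_s / 2.0

# Hypothetical 3x3 patch of first-photon arrival times (seconds):
arrival = [
    [66.7e-9, 66.8e-9, 67.0e-9],
    [66.7e-9, 70.0e-9, 70.1e-9],
    [66.8e-9, 70.0e-9, 70.2e-9],
]

depth_map = [[round(depth_m(t), 2) for t in row] for row in arrival]
for row in depth_map:
    print(row)  # nearer surface at roughly 10 m, farther surface at roughly 10.5 m
```

Real systems must first extract a clean return time per pixel from noisy photon histograms, which is where the denoising mentioned above comes in.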
Imaging through clear and clean water can be quite straightforward. However, once the water becomes turbid, the high degree of back-scattered light saturates the detector and the image can no longer be recognised. One approach to overcoming this backscatter is to use short-pulse illumination and time-resolved single-photon detection so that the returned photons can be seen above the back-scattered background level.
We are using both SPAD detector arrays and single-detector, raster-scanned systems to suppress backscatter from the recorded images. In both cases the timing resolution corresponds to a few millimetres in depth, and by rejecting photons that arrive outside the expected return window we have demonstrated imaging through over 9 scattering lengths (measured one way) at ranges of several metres.
Such systems have significant application within sub-sea operations, for example in offshore installation inspection and in terrain mapping for navigation. We are presently building a demonstration unit for trials in real-world conditions.
NON-LINE OF SIGHT IMAGING
Traditional imaging relies upon a direct line of sight between object and camera. However, point a laser at the ground in front of you and light scatters in all directions: some of it travels around corners, where it may strike an out-of-view object.
QuantIC is using both SPAD detector arrays and single-detector systems to detect the single photons of this scattered light. Each scattering point creates a circle of light, much like the ripple created by dropping a stone in a pond. By observing these ripples, we can locate the object and work out both its size and speed even though it lies outside the line of sight. Our results so far show that we can detect objects 2-3m around a corner.
In the transport industry, detecting objects, vehicles, or pedestrians hidden from view could help with collision avoidance and situational awareness.
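The "ripple" geometry described above can be illustrated with a toy example. This is a hypothetical two-point geometry, not QuantIC's reconstruction code: the photon return time constrains the hidden object to a circle around each laser spot, and two such circles intersect at the object's position.

```python
# Toy sketch of non-line-of-sight localisation. Laser and camera are assumed
# co-located, and each illuminated floor spot acts as a virtual source and
# detector; timing the scattered return gives the spot-to-object distance.

import math

C = 299_792_458.0  # speed of light, m/s

def return_time(spot, obj):
    """Round trip: spot -> hidden object -> spot."""
    return 2 * math.dist(spot, obj) / C

hidden = (1.5, 2.0)               # hidden object position, metres (unknown in practice)
spots = [(0.0, 0.0), (1.0, 0.0)]  # two laser spots on the visible floor

# Radii (spot-to-object distances) recovered from photon timing:
r = [C * return_time(s, hidden) / 2 for s in spots]

# Intersect the two circles analytically (both spots lie on y = 0 here):
x = (r[0]**2 - r[1]**2 + spots[1][0]**2) / (2 * spots[1][0])
y = math.sqrt(r[0]**2 - x**2)     # take the solution on the hidden side
print(round(x, 3), round(y, 3))   # recovers the hidden position (1.5, 2.0)
```

With single-photon timing jitter and diffuse scattering, real reconstructions fit many such constraints at once rather than intersecting two ideal circles.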
QUANTUM ILLUMINATION MICROSCOPE/IMAGER
In conventional imaging, the sensor signal comprises the real image plus sensor noise and any background light, whether inadvertently or deliberately introduced. This background light and sensor noise can degrade or obscure the real image.
As a new demonstrator in QuantIC phase two, we are replacing the classical illumination, where photons arrive one by one, with quantum illumination, where the photons arrive two by two. These photon pairs act as probe and reference, where every probe photon in the image is authenticated using its correlated (entangled) reference. Results so far have shown that even when the background light has the same wavelength, polarisation, and statistics as the pair source, it can be ≈99% eliminated from the image by using the photon pair information to distinguish real photon events from the noise.
This type of imaging may have applications in ultra-low light conditions, or when the outputs from different imaging systems need to be distinguished or encoded so that they can work in the same transport space. The concept involved is similar to that proposed for quantum radar.
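The pair-based authentication step can be sketched as coincidence filtering on timestamps. The event counts, observation time, and coincidence window below are invented for illustration; real pair sources and detectors involve losses and accidental coincidences that this toy ignores.

```python
# Sketch of coincidence-based background rejection: a probe photon is kept
# only if a reference photon arrives within a narrow time window, so
# uncorrelated background photons (which have no reference partner) are
# discarded almost entirely.

import bisect
import random

random.seed(1)
T = 1.0          # observation time, s (illustrative)
window = 1e-6    # coincidence window, s (illustrative)

# Entangled pairs: probe and reference photons share the same timestamp here.
pair_times = sorted(random.uniform(0, T) for _ in range(500))
reference = list(pair_times)

# Background photons hit the probe arm with no partner in the reference arm.
background = [random.uniform(0, T) for _ in range(500)]
probe_all = sorted(pair_times + background)

def has_partner(t, ref_sorted, w):
    """True if any reference timestamp lies within +/-w of t."""
    i = bisect.bisect_left(ref_sorted, t - w)
    return i < len(ref_sorted) and ref_sorted[i] <= t + w

accepted = [t for t in probe_all if has_partner(t, reference, window)]
print(len(probe_all), len(accepted))  # nearly all background events are rejected
```

Narrowing the window suppresses more background at the cost of losing genuine pairs when detector jitter exceeds the window.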
Lidar and radar use the time of flight of light or radio-wave pulses respectively to measure the distance to the backscattering object. However, in many cases the regular nature of these pulses means that such systems can be confused either by accident or by deliberate intervention.
At QuantIC we are using entangled photons to create two identical, yet within themselves random, light beams. One of these beams illuminates the target and the other acts as a unique and unpredictable reference against which to validate any return signal. Such systems eliminate range ambiguity and cannot be spoofed by someone seeking to mislead the system. Our latest work relies on randomness in both time and wavelength, meaning that the illumination beam can be weaker than the background light, making the approach covert.
Such systems could be relevant to transport applications, where many different LIDAR systems on a busy road would still be distinguishable from each other.
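The validation principle can be illustrated with a classical stand-in: correlate the return against a stored random pattern. The binary pattern, its length, and the delay below are illustrative assumptions and ignore the quantum aspects (entanglement, wavelength randomness) described above.

```python
# Sketch of validating a ranging return against a random reference: because
# the transmitted pattern is unpredictable, only a genuine echo correlates
# strongly with the stored reference at the true delay, which removes range
# ambiguity and resists spoofing with regular pulse trains.

import random

random.seed(2)
N = 2000
reference = [random.choice((0, 1)) for _ in range(N)]  # random on/off pattern

true_delay = 137  # delay in samples, set by the target range (illustrative)
echo = [0] * true_delay + reference[:N - true_delay]   # genuine delayed return

def correlate(signal, ref, delay):
    """Overlap between the signal shifted by `delay` and the reference."""
    return sum(s * r for s, r in zip(signal[delay:], ref))

peak = correlate(echo, reference, true_delay)       # aligned: large overlap
off = correlate(echo, reference, true_delay + 50)   # misaligned: chance level
print(peak, off)
```

A spoofer who cannot predict the random pattern can only produce chance-level correlation, which is how the system distinguishes its own return from interference.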
CMOS SPAD ARRAYS
Traditional cameras include a sensor chip comprising an array of pixels, each of which measures the number of photons that strike the sensor at that particular position. A SPAD instead measures not the number of photons but the time at which the first photon arrives. If the object is illuminated with a pulse of light, then the arrival time of the first photon corresponds to the distance to the scene at that pixel, giving a 3D image.
Building upon the work in QuantIC phase one, our contributions to the field include increasing the detection efficiency from <10% to ≈60%, improving the timing resolution to ≈30ps (a depth resolution of ≈5mm), and increasing the number of pixels in the image to 256x256.
In achieving these specifications we are opening up new applications in automotive LIDAR, providing low-cost 3D imaging for driver-assisted and autonomous vehicles.
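The quoted timing and depth figures are linked by a simple relation: a timing resolution Δt maps to a depth resolution Δd = c·Δt/2, because the light travels the range twice. The following quick check confirms the numbers above are mutually consistent.

```python
# Timing resolution -> depth resolution for round-trip (time-of-flight) light.

C = 299_792_458.0  # speed of light, m/s

def depth_resolution_m(dt_s: float) -> float:
    return C * dt_s / 2.0

# 30 ps of timing resolution corresponds to roughly 4.5 mm of depth,
# consistent with the ~5 mm figure quoted for the SPAD array.
print(depth_resolution_m(30e-12) * 1000)
```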
GERMANIUM ON SILICON SPADS
The wavelength range over which most optical detectors work depends upon the material from which they are made. Silicon is an ideal material for making detectors that are low noise, sensitive and fast. However, the band gap of the silicon material means that these detectors are only sensitive to wavelengths below ≈1100nm.
We are trying to extend the wavelength range of these detectors by incorporating, on top of the silicon, a germanium layer which has a smaller band gap and hence can absorb photons of longer wavelength. In results so far we have shown single-photon detection at 1300nm and even 1550nm, albeit in cooled configurations.
These detectors have been incorporated into a LIDAR system for 3D imaging at these longer wavelengths. In the future other material layers may extend the performance out to 6μm, allowing thermal imaging with a silicon-based detector.
QuantIC has demonstrated world-leading high-efficiency Ge on Si SPADs and applied them to LIDAR systems at wavelengths of higher atmospheric transmission, enabling improved imaging through fog, rain, snow, smoke and dust.
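The wavelength limits above follow directly from the band gaps: a photon can only be absorbed if its energy h·c/λ exceeds the gap, giving a cutoff wavelength λ_max = h·c/E_gap. The band-gap values below are standard room-temperature figures, used here only to check the quoted limits.

```python
# Back-of-envelope check of detector cutoff wavelengths from the band gap.

H = 6.62607015e-34    # Planck constant, J*s
C = 299_792_458.0     # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def cutoff_nm(band_gap_ev: float) -> float:
    """Longest wavelength a material of the given band gap can absorb."""
    return H * C / (band_gap_ev * EV) * 1e9

print(round(cutoff_nm(1.12)))  # silicon (~1.12 eV): about 1100 nm, as quoted
print(round(cutoff_nm(0.66)))  # germanium (~0.66 eV): well beyond 1300-1550 nm
```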
CASE STUDY: ARALIA SYSTEMS LTD
Video surveillance is becoming an indispensable tool to ensure both personal and public safety. Common applications include monitoring of critical infrastructure, highways, financial institutions, airports and public transport as well as private property.
QuantIC’s researchers have worked with security company Aralia Systems Ltd to investigate the feasibility of employing an LED visible light system for covert automated video surveillance.
Recent advances in visible light sources and sensors present the potential for extremely small modules for cheap and covert capture of many parallel video images within a single system. The company has developed a covert infra-red imaging system based around a photometric stereo concept. Photometric stereo imaging allows reconstruction of the topology of the scene and greatly improves the automated image analysis task. The use of visible LED sources will offer significant cost benefits to the system, increase covertness and provide the opportunity for further system functionality, including LiFi communications and position sensing. A prototype system is being built and evaluated using high-speed LED light sources developed under the QuantIC programme. The project is aimed at defining future design requirements at both a source and system level.
Aralia Systems Ltd, a UK SME, has been providing a unique set of intelligent surveillance products, such as full-scene Video Content Analysis, since 1997 to a wide customer base (airports, rail transit systems, city councils, retail outlets, and oil and gas companies), with its core business focused on security and surveillance in the UK and USA markets.