As human beings lucky enough to be mobile, we probably take for granted the speed and precision of our eyes and ears, the two senses that help us recognize, understand, and act upon where we currently are in the world and where we need to go.
In all situations, whether mundane or critically urgent, your eyes and ears gather the information that gets sent to the brain for processing. Of course, it’s the brain that “drives” the human body, sending the signals that fire the required muscles that get us or keep us moving.
So it shouldn’t be a surprise that autonomous vehicles, designed and built by humans, are expected to work in a similar fashion. In the case of driverless cars, the function of the “eyes and ears” will be handled by sensors. The four main sensors being used in autonomous vehicles are GNSS receivers, cameras, RADAR, and LiDAR, each of which possesses its own unique attributes, upsides, and shortcomings that affect the performance of driverless cars differently.
GNSS (Global Navigation Satellite System) is the term used worldwide for satellite navigation systems that provide autonomous geo-spatial positioning with global coverage. It refers to a constellation of satellites providing signals from space that transmit positioning and timing data to GNSS receivers. The receivers then use this data to determine location.
The advantage of having access to multiple satellite constellations is accuracy, redundancy, and availability at all times. Though satellite systems rarely fail, if one does, GNSS receivers can pick up signals from other systems. Access to multiple constellations is also a benefit when the line of sight to some satellites is obstructed.
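The core idea, determining a location from satellite timing data, can be sketched with a toy 2D trilateration: each range (signal travel time times the speed of light) constrains the receiver to a circle around a satellite, and intersecting the circles yields the position. The satellite coordinates and ranges below are hypothetical; a real receiver solves for latitude, longitude, altitude, and its own clock bias using at least four satellites.

```python
import math

def trilaterate_2d(sats, ranges):
    """Solve for (x, y) from three satellite positions and measured ranges.

    2D toy model: subtracting the first circle equation from the other two
    turns the quadratic system into a 2x2 linear one.
    """
    (x1, y1), (x2, y2), (x3, y3) = sats
    r1, r2, r3 = ranges
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # assumes satellites are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Hypothetical satellite positions and ranges (range = travel time * c).
sats = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [5.0, math.sqrt(65.0), math.sqrt(45.0)]
print(trilaterate_2d(sats, ranges))  # the one point consistent with all ranges
```

With these values the solver recovers the point (3, 4), which lies at exactly those three distances from the three satellites.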
Cameras were one of the first sensor types used in driverless vehicles. One of their biggest upsides is the optical aspect, which enables an autonomous vehicle to literally visualize its surroundings. Cameras are very efficient at classification and texture interpretation, are widely available, and are more affordable than RADAR or LiDAR. However, the imagery they produce makes processing a computationally intense and algorithmically complex task. On the other hand, cameras capture color, making them better for interpreting surrounding scenery.
The latest high-definition cameras capture millions of pixels per frame (some shooting 30 to 60 frames per second) and rely on powerful processors to develop intricate imaging. Consequently, the cost of that processing power can be astronomical.
RADAR is an abbreviation for radio detection and ranging. Computationally, RADAR is lighter than a camera; it uses radio waves to determine the distances of objects, the exact speeds they’re going, and even the angles they’re facing. Unlike cameras, RADAR has no data-heavy video feed to process, so it needs less processing power to handle its data output than LiDAR or cameras do.
Another upside of RADAR is its ability to use reflections to see behind obstacles; because radio waves reflect, it doesn’t need a direct line of sight. Although RADAR is more efficient than cameras and LiDAR in select situations (like bad weather), this sensing method has less angular accuracy and generates less data than LiDAR.
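The two core RADAR measurements above, distance and relative speed, reduce to two short formulas: range from the pulse’s round-trip time, and radial speed from the Doppler shift of the returned wave. The numeric inputs below are hypothetical; 77 GHz is simply a common automotive RADAR band.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def radar_range(round_trip_s):
    """Target distance from pulse round-trip time: the wave travels out and back."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

def radar_radial_speed(doppler_shift_hz, carrier_hz):
    """Relative radial speed from the Doppler shift of the reflected wave."""
    return doppler_shift_hz * SPEED_OF_LIGHT / (2.0 * carrier_hz)

# Hypothetical returns from a 77 GHz automotive RADAR.
print(radar_range(1e-6))                 # a 1 microsecond echo -> ~150 m away
print(radar_radial_speed(5133.0, 77e9))  # ~10 m/s closing speed
```

Note how little arithmetic each return requires compared with running vision algorithms over a video frame, which is the sense in which RADAR is “computationally lighter.”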
LiDAR is an abbreviation for light detection and ranging. It is the most technologically diverse of these three ranging sensors (and the costliest for OEMs to add to car designs). LiDAR uses pulsed laser light to scan more than 100 meters in all directions, measuring the distance between the vehicle and surrounding objects to create a fast, precise 3D map of the vehicle’s surroundings.
This map can be used to make informed decisions about reacting to different circumstances, and the information can be processed almost instantly. LiDAR is the most efficient and accurate ranging method of its kind, far faster than human perception. Its biggest drawback is cost; however, manufacturers are promising more consumer-friendly unit pricing, which should see LiDAR incorporated into a broader range of both semi- and fully autonomous cars.
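The 3D map is built one laser return at a time: each pulse’s time of flight gives a distance, and the beam’s known pointing angles place that distance as a point in space. The sketch below uses hypothetical values for one return; a real unit repeats this for up to millions of returns per second to produce the point cloud.

```python
import math

def lidar_point(round_trip_s, azimuth_deg, elevation_deg):
    """Convert one LiDAR return into a 3D point relative to the sensor.

    Distance comes from the pulse's time of flight (out and back);
    the beam's azimuth/elevation angles place the point in space.
    """
    c = 299_792_458.0  # speed of light, m/s
    dist = c * round_trip_s / 2.0
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = dist * math.cos(el) * math.cos(az)
    y = dist * math.cos(el) * math.sin(az)
    z = dist * math.sin(el)
    return (x, y, z)

# Hypothetical return: ~667 ns round trip, beam 45 degrees to the left, level.
print(lidar_point(6.671e-7, 45.0, 0.0))  # a point roughly 100 m away
```

Aggregating such points across a full sweep of azimuth and elevation angles is what yields the fast, precise 3D map described above.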
As the autonomous car industry comes to market, the major sensor manufacturers are attracting as much attention as the vehicle manufacturers. There is serious monetary investment going towards both, for research and development and for testing. For automakers and OEMs, the notion of autonomous vehicles doesn’t work unless the “eyes and ears” of the operation, the sensors, are actually super-human. The industry will keep advancing as long as the sensors keep working together to keep the vehicles and their passengers as safe and secure as “humanly” possible.
Unfortunately, the one thing all of these sensors have in common is vulnerability to cyber attacks. GNSS signals can be jammed, preventing a vehicle from knowing and reporting where it is, or “spoofed,” where attackers simulate a legitimate satellite signal, hijacking the vehicle through its navigation technology, causing an accident, or potentially redirecting the vehicle to another destination. Cameras can also be “blinded” by high-brightness (and relatively cheap) infrared LEDs or lasers.
Both RADAR and LiDAR sensors can be “confused” by replicated and retransmitted signals (“ghost signals”), which they cannot distinguish from legitimate returns and therefore process as real obstacles.
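One illustrative line of defense against such ghost returns is a physical plausibility check: a real obstacle cannot change its measured range faster than a believable relative speed allows, while a replayed signal often produces an impossible jump. The function and threshold below are a hypothetical heuristic sketch, not a production anti-spoofing technique.

```python
def plausible(prev_range_m, new_range_m, dt_s, max_rel_speed_mps=70.0):
    """Flag range jumps that exceed a plausible relative speed.

    Illustrative heuristic only: compares consecutive range readings
    against the maximum distance a real target could close or open
    in dt_s seconds at max_rel_speed_mps.
    """
    return abs(new_range_m - prev_range_m) <= max_rel_speed_mps * dt_s

# Hypothetical readings 50 ms apart:
print(plausible(50.0, 49.3, 0.05))  # True: ~14 m/s closing, physically plausible
print(plausible(50.0, 20.0, 0.05))  # False: a 600 m/s jump, likely a ghost return
```

Real systems layer many such cross-checks, including fusing RADAR, LiDAR, and camera data so one spoofed sensor can be outvoted by the others.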
Of course, other challenges to sensor security will no doubt arise in the future. The market will need to continue to keep a primary focus on security solutions which will enable autonomous vehicles to operate smoothly, without malicious or accidental interference to the “suite” of sensors being utilized on the vehicles.