By getting to know you and your tendencies, your self-driving car will know not only where to fuel up but also where to stop to get your coffee along the way.
Whether it’s to find a destination or swerve around an unexpected obstacle, self-driving cars must constantly pull data from the outside world and feed it to advanced neural network algorithms to derive meaning in real time.
The car needs to be in constant communication with location-tracking satellites, for instance, and be able to send and receive messages from other vehicles on the road.
Still, as amazing as those outward abilities are, what few people realize is that these vehicles can gather just as much data from inside the car as from outside of it.
What makes a car autonomous?
A fully autonomous car is one that drives us rather than vice versa.
Vehicle technology must progress through five levels of driver assistance before we can fully sit back and let the car do it all (see the code sketch after this list):
- Level 1: An advanced driver assistance system (ADAS) helps us with braking, accelerating, or steering, one task at a time.
- Level 2: Braking, accelerating, and steering can be controlled autonomously at the same time, though we still must monitor conditions and perform remaining tasks.
- Level 3: The automated driving system (ADS) can perform all driving tasks under certain conditions. We must be ready to take back control when the ADS requests it, and we must handle all driving when those conditions aren’t met.
- Level 4: The ADS performs driving tasks and monitors the environment in certain conditions without us needing to pay attention.
- Level 5: The ADS fully drives the vehicle in all conditions without anyone having to pay attention or be involved in driving.
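For the technically curious, here’s a minimal sketch of these levels as a Python enum. The class, names, and helper function are our own shorthand for illustration, not part of any official standard or API:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Driver-assistance levels, paraphrased from the list above."""
    DRIVER_ASSISTANCE = 1       # ADAS helps with one task at a time
    PARTIAL_AUTOMATION = 2      # simultaneous control, driver still monitors
    CONDITIONAL_AUTOMATION = 3  # ADS drives in some conditions, driver on standby
    HIGH_AUTOMATION = 4         # ADS drives and monitors in certain conditions
    FULL_AUTOMATION = 5         # ADS drives everywhere, no human needed

def driver_must_pay_attention(level: AutomationLevel) -> bool:
    # Through Level 3, a human must monitor or stay ready to take over.
    return level <= AutomationLevel.CONDITIONAL_AUTOMATION

print(driver_must_pay_attention(AutomationLevel.HIGH_AUTOMATION))  # -> False
```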
We’re not going to get to Level 5 overnight. It’s a gradual shift that starts with basic safety features, partly because quick safety wins are the top priority, and partly because we need to learn to trust automated technology in our cars before we can hand over the wheel entirely.
For example, many of today’s new cars feature lane-assist technology that helps drivers avoid drifting into adjacent lanes or making unsafe lane changes. These technologies use a combination of hardware (sensors, cameras, and radar) and software to help the vehicle identify risks and warn us in time to act to avoid a crash.
Automated safety features began rolling out in 2016 and are expected to continue arriving through 2025, including:
- Adaptive cruise control
- Lane-keep assist
- Self-parking
- Traffic-jam assist
The NHTSA anticipates that fully autonomous safety features like highway autopilot will be introduced by 2025.
Where do autonomous cars get their data?
The main source of data for autonomous cars is a sophisticated system of onboard sensors and network connections that pull information from outside the car.
The car builds and maintains a picture of its surroundings based on sensors and cameras placed in and around it. The sensors monitor the car’s position relative to other vehicles and pedestrians, while the cameras detect traffic lights and road signs.
Autonomous cars also make use of light detection and ranging (LIDAR). These sensors bounce pulses of light off the car’s surroundings to measure distances, account for weather conditions, detect curbs and road shoulders, and identify lane markings.
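The distance math behind each lidar pulse is simple time-of-flight arithmetic. Here’s a back-of-the-envelope sketch in Python; the pulse timing is an illustrative value, not data from any real sensor:

```python
# Time-of-flight distance estimate for a single lidar pulse.
SPEED_OF_LIGHT = 299_792_458  # meters per second

def lidar_distance(round_trip_seconds: float) -> float:
    """Distance to target: the light travels out and back, so halve the trip."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse that returns after ~200 nanoseconds hit something ~30 meters away.
print(f"{lidar_distance(200e-9):.1f} m")  # -> 30.0 m
```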
Driving capabilities have come a long way, but there’s still a ways to go before these cars are on the road full-time. The key challenge, and one of the reasons advancements have stalled, remains gathering the data to train the vehicle to navigate the road with the highest degree of reliability.
The best way to train an autonomous car is to show it as much driving data as possible to teach it proper driving behavior. It’s like a driver’s education class that would take you through every possible scenario behind the wheel.
Data collection is time consuming and expensive, and certain situations are much rarer than others.
For example, how often have you come across a mattress in the middle of a highway? Not often, but it’s possible, and the vehicle must be trained to react correctly before it can fully reach Level 5.
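One common way to handle that rarity during training (a hypothetical sketch, not a description of any particular vendor’s pipeline) is to oversample rare scenarios so the model still sees them in every batch:

```python
import random

# Hypothetical scenario log: common events vastly outnumber rare ones.
scenarios = (
    [{"label": "normal_traffic"}] * 9_990
    + [{"label": "mattress_on_highway"}] * 10
)

# Weight rare classes heavily so each training batch still includes them.
weights = [1 if s["label"] == "normal_traffic" else 500 for s in scenarios]

batch = random.choices(scenarios, weights=weights, k=32)
print(sum(s["label"] == "mattress_on_highway" for s in batch),
      "rare samples in this batch")
```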
Beyond learning from the road, there’s plenty of data inside the car to be mined too.
Autonomous Cars Respond to Voice Commands
In the future, speech recognition will allow you to tell a vehicle where you want to go, and then sit back to nap, read, or scroll through your phone while your car gets you from point A to point B.
This means owners of autonomous cars could soon be able to say “take me home” to their dashboard, and have it not only identify the desired destination, but also drive there without further question.
More advanced versions could even hear implied commands, such as the indirect charge to turn around within the exclamation, “I forgot my wallet!”
As it stands, in-car speech recognition is used predominantly for voice assistants to change a song, make a call, send a text, or pull up a destination on the GPS.
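As a toy illustration of what happens after the speech is transcribed, a system might match the recognized text against known commands. The phrases and intent names below are invented for the example:

```python
# Toy command-to-intent matching on already-transcribed speech.
INTENTS = {
    "take me home": "NAVIGATE_HOME",
    "call mom": "PLACE_CALL",
    "play my playlist": "PLAY_MUSIC",
}

def resolve_intent(transcript: str) -> str:
    text = transcript.lower().strip()
    for phrase, intent in INTENTS.items():
        if phrase in text:
            return intent
    return "UNKNOWN"  # real systems fall back to a clarifying question

print(resolve_intent("Hey car, take me home"))  # -> NAVIGATE_HOME
```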
But the future is full of possibilities, as per Automotive World:
“Voice assistants are projected to be embedded in nearly 90% of new vehicles sold globally by 2028. Amazon, Google, Nuance, IBM and other vendors are all pushing hard to become the default assistant. A key factor to success is likely to be how well the in-vehicle systems integrate into other ecosystems like smartphones and home automation.”
Building this speech recognition technology requires hundreds of hours of speech recordings in various languages, demographics, and locations around the world.
For example, we provide voice data collection and speech data transcription to Nuance to help them build the next generation of in-car speech recognition.
To make sure they cover all possible use cases, we need to collect recordings of all possible commands, languages, dialects, and accents that would be used to communicate inside the vehicle.
The added wrinkle to in-car speech recognition is the ambient traffic noise. The car’s speech recognition system must learn to ignore the sounds of honking cars, police sirens, and other traffic noise.
To account for these noises, we traveled to many countries to collect speech data in the cabin of a car, in the actual environment in which the speech technology would be used.
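Teams also commonly augment clean recordings with recorded noise in software. Here’s a hedged NumPy sketch that mixes speech with traffic noise at a chosen signal-to-noise ratio; the random arrays stand in for real recordings:

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale the noise so the mixture has the requested signal-to-noise ratio."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # Noise power needed for the target SNR (in decibels).
    target_noise_power = speech_power / (10 ** (snr_db / 10))
    scaled_noise = noise * np.sqrt(target_noise_power / noise_power)
    return speech + scaled_noise

# Placeholder signals standing in for real cabin recordings.
rng = np.random.default_rng(0)
speech = rng.standard_normal(16_000)  # 1 second of audio at 16 kHz
noise = rng.standard_normal(16_000)
noisy_training_sample = mix_at_snr(speech, noise, snr_db=10)
```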
It’s a lot of effort!
Related reading: To learn more about speech recognition, you can check out our previous articles on how speech recognition technology works or the past, present and future of speech recognition software.
Autonomous Cars Will Be Attentive to Actions Too
Passengers in autonomous cars communicate through more than their intentional voice commands.
Involuntary communication may have just as much to teach a car about what a passenger wants.
There are extreme examples, like if you lose consciousness and need the car to decide on its own to head to the hospital. Or sufficiently aware software could detect inebriation in a passenger and require self-driving to remain in effect, preventing any driving under the influence of intoxicating substances, even those that can’t be detected on a breathalyzer.
The utility of an attentive autonomous car goes far beyond safety, though.
A car might be able to notice a passenger’s preference for a slightly slower, smoother ride to a destination, or it could see from their constant glancing at the time that they would prefer a slightly faster, more aggressive path through traffic.
Lip reading may also become a way of collecting data, both from inside and outside the vehicle.
Picture, for example, a car approaching a detour due to construction or an accident. A traffic controller is waving you by and saying “proceed with caution,” but the car can’t hear them. An outward-facing camera would need to read their lips and construe the meaning, especially if you’re not paying attention to the road because the car is driving itself.
Or say you’re riding with someone who’s sleeping, and you don’t want to wake them with a voice command. Instead, you could look into a camera with facial recognition abilities and mouth a request to stop or to turn the volume down.
An explosion in the uses of facial recognition technology has sparked a real need for comprehensive image and video datasets, and that brings us back to the vast amount of data collection required to fulfill the potential of autonomous cars and other AI products.
This would all be especially valuable for those without the ability to speak, an important consideration in the development of AI.
Autonomous Cars Will Learn from Your Tendencies
Most of the communication between human and vehicle flows from the human to the vehicle — but now and again, the opposite is necessary as well.
Vehicles will share pertinent information with passengers without requiring a response. For example, the car might announce a short delay ahead because a train is crossing.
The goal won’t simply be to keep you informed, though.
At other times, the car could think more actively about other human needs. For example, if it knows you’re on the way to a grocery store, it could suggest heading to a closer one just a block from the vehicle’s current location.
“Autonomous” cars will also sometimes need to ask their passengers for direction in the face of apples-to-oranges decision points. Faced with a traffic backup due to a fallen tree, should the car go around for a quicker route home, or wait in line for lower gas consumption?
By asking these kinds of questions a few times, vehicles could build up a behavioral profile for their owners and make decisions more independently in the future.
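What might such a behavioral profile look like under the hood? Here’s an entirely hypothetical sketch; the structure, questions, and field names are invented for illustration:

```python
from collections import Counter

# Entirely hypothetical: tally how the owner answers trade-off questions.
profile = Counter()

def record_choice(question: str, answer: str) -> None:
    profile[(question, answer)] += 1

def preferred_answer(question: str) -> str | None:
    """Return the owner's most frequent answer to a given question, if any."""
    answers = {a: n for (q, a), n in profile.items() if q == question}
    return max(answers, key=answers.get) if answers else None

record_choice("detour_vs_wait", "detour")
record_choice("detour_vs_wait", "detour")
record_choice("detour_vs_wait", "wait")
print(preferred_answer("detour_vs_wait"))  # -> detour
```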
That sort of proactive data gathering could be just as important to shaping a car’s behavior as any volume of driving data from the outside world.
Turn a Car Ride into a Short Vacation
By both listening to a passenger and watching their behavior, cars should also be able to greatly improve the experience of moving through traffic by tailoring the ride to each rider.
Not every passenger will want this sort of approach, of course, but those that do could have everything from the level of outside noise cancellation to the tint of the windows to the angle of the seat-back tailored to their apparent level of stress.
A person headed home with nothing else on the daily docket might receive a suggestion to stop for a treat on the way home, for instance.
All these ideas require advances in the data-gathering hardware and data-sifting software that allow real understanding of a human occupant, but there does seem to be ample time for that development to take place.
That’s because none of the most ambitious applications for autonomy can be considered until self-driving cars can drive everywhere, without any human intervention whatsoever.
That all-important ability is still a fair distance out, meaning that for the next several years the most creative minds in tech will be laser-focused on teaching your car how to learn from you.
Need data for your autonomous car technology?
Summa Linguae Technologies provides custom speech data collection and video & image data collection services to train your self-driving car AI or in-car speech recognition technology.
Contact us today to book a consultation.
Free Data Collection Resources
Looking for additional resources to assist with collecting autonomous car data? Check out these helpful downloads:
- Eye Gaze Sample Set (Download) – Get a sample of high-quality eye gaze data.
- Road, Car, and People Dataset (Download) – Training a system that requires road image data? Download our sample dataset.