September 8, 2016

In the Eye of the Beholder: A Biometric Approach to Automatic Luminance Control
By Paul Weindorf, Display Systems Technical Fellow, Visteon


Every driver these days depends on an array of displays on the instrument panel, in the center stack, in the mirror and even through head-up displays (HUDs). Since drivers rely on these displays for critical data, it’s vital that this information be clearly visible, whatever the light levels inside and outside the vehicle may be. Variations in lighting – direct sunlight, passing or constant shadows – can sometimes make reading the displays challenging, however.

In some instances, the sun may be shining directly on the display, resulting in reflections that overwhelm the displayed information. In other cases, the driver may be looking out the front windshield into very bright sunlight and then attempt to glance at the instrument cluster without enough time for his or her eyes to adjust to the interior ambient light, again making the display information temporarily hard to see.

Automakers are becoming aware that constantly running displays at their highest luminance levels accelerates image burn-in on modern OLED screens and makes cooling the screens more difficult. Cranking up the brightness also draws a lot of power, impacting battery life, especially in electric vehicles.

Display developers are testing silicon light sensors – one pointing forward, out the windshield, and others mounted at the corners of each display to detect the ambient light falling on the screen. These detectors automatically adjust the luminance of the displays, making them brighter or dimmer as lighting conditions require, which extends the life of OLED screens and keeps them cooler.
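
As a rough illustration of how such a sensor-based controller might map light readings to a brightness setting, consider the sketch below. The sensor blend, the illuminance range and the logarithmic curve are illustrative assumptions, not Visteon's production algorithm.

```python
import math

def target_luminance(forward_lux, corner_lux_readings, min_nits=2.0, max_nits=1000.0):
    """Map ambient light readings to a display luminance target in nits."""
    # Treat the brightest corner sensor as the light falling on the display,
    # and blend in the forward sensor so the screen also tracks the scene
    # the driver's eyes are adapted to.
    ambient_lux = 0.5 * max(corner_lux_readings) + 0.5 * forward_lux

    # Perceived brightness is roughly logarithmic, so interpolate on a log
    # scale between a dark cabin (~1 lux) and direct sunlight (~100,000 lux).
    frac = math.log10(max(ambient_lux, 1.0)) / 5.0
    frac = min(max(frac, 0.0), 1.0)
    return min_nits + frac * (max_nits - min_nits)

# Example: shaded display corners, bright road ahead
print(round(target_luminance(20000, [150, 90, 300, 120])))  # roughly 800 nits
```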


Visteon’s dual OLED display features auto luminance – which adjusts display brightness depending on surrounding conditions


More recently, however, Visteon has proposed a different, more accurate method of automatic luminance control: measuring the constantly changing diameter of the driver’s pupils to determine the appropriate brightness levels of displays. This method, called a total biometric automatic luminance control system, replaces silicon sensors with an infrared eye-gaze camera that precisely determines pupil diameter.

When the driver is looking outside on a sunny day, his or her pupils will contract; when looking at the cockpit instruments, the pupils grow larger. Using the science of “pupillometry” – first applied in the fields of psychology and medicine – the camera detects where the driver is looking and determines the brightness outside and inside the vehicle. The display system automatically adjusts its luminance based directly on the driver’s eye response to light, rather than on input from sensors.
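
The mapping at the heart of this approach – pupil diameter in, adapting luminance out – can be sketched as a simple interpolation. The diameter/luminance pairs below are assumed, illustrative values, not a calibrated pupillometry model.

```python
import bisect
import math

# (pupil diameter in mm, approximate adapting luminance in cd/m^2)
PUPIL_TO_LUMINANCE = [
    (2.0, 10000.0),   # bright sunlight
    (3.0, 1000.0),
    (4.0, 100.0),
    (5.0, 10.0),
    (6.5, 1.0),
    (7.5, 0.1),       # near darkness
]

def adapting_luminance(pupil_diameter_mm):
    """Estimate the luminance the eye is adapted to from pupil diameter."""
    diameters = [d for d, _ in PUPIL_TO_LUMINANCE]
    if pupil_diameter_mm <= diameters[0]:
        return PUPIL_TO_LUMINANCE[0][1]
    if pupil_diameter_mm >= diameters[-1]:
        return PUPIL_TO_LUMINANCE[-1][1]
    i = bisect.bisect_left(diameters, pupil_diameter_mm)
    (d0, lum0), (d1, lum1) = PUPIL_TO_LUMINANCE[i - 1], PUPIL_TO_LUMINANCE[i]
    # Interpolate in log-luminance, since the pupil responds roughly
    # logarithmically to light level.
    t = (pupil_diameter_mm - d0) / (d1 - d0)
    return 10 ** (math.log10(lum0) + t * (math.log10(lum1) - math.log10(lum0)))

print(round(adapting_luminance(3.5)))   # small pupil: driver viewing a bright scene
```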

At this year’s national Society for Information Display (SID) conference, our Visteon team discussed how pupillometry can be used to determine the luminance value when the driver is looking outside. At the SID Detroit chapter conference later this month, I will propose using pupil-diameter measurements to determine the reflected luminance from the front of the display. The latter is the harder problem: when the driver glances from the road to the instrument cluster, the eye adapts to the dimmer light exponentially over a period of roughly 10 seconds, so an algorithm is needed to predict what the final luminance value should be once the eyes have completed their adjustment.
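
To give a feel for how such a prediction could work, here is a minimal sketch that assumes a first-order exponential adaptation model with an assumed time constant; the actual algorithm and constants are the subject of the paper and are not reproduced here. The predicted diameter could then be fed into a pupillometry mapping like the one sketched above.

```python
import math

def predict_final_diameter(d_start, d_now, t_now, tau=2.5):
    """Extrapolate the steady-state pupil diameter in mm.

    d_start : diameter when the glance at the display began
    d_now   : diameter measured t_now seconds later
    tau     : assumed adaptation time constant in seconds
    """
    decay = math.exp(-t_now / tau)
    return (d_now - d_start * decay) / (1.0 - decay)

# Pupil was 2.5 mm on the sunlit road and 3.4 mm one second after the
# glance; predict where it will settle so the display can be set now
# rather than 10 seconds from now.
print(round(predict_final_diameter(2.5, 3.4, 1.0), 2))   # about 5.2 mm
```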

The primary advantage of the biometric system over silicon detectors is accuracy. Silicon sensors sit at the corners of the display and, depending on lighting and shadow conditions, may not sense the true reflected luminance; the biometric approach measures what the human eye is actually seeing off the front of the display. Likewise, when gauging what the driver is looking at outside, forward-facing silicon sensors cover only a fixed field of view, while the driver may be looking to the left or right. The eye-gaze camera uses the glint on the eyes from its infrared emitter to determine where the eyes are pointed and adjusts the display luminance for a scene that may be brighter or dimmer than the straight-ahead view.
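
A sketch of how the gaze estimate might be used downstream follows; the zone boundaries, zone names and fallback behavior are illustrative assumptions rather than the actual implementation.

```python
def gaze_zone(yaw_deg, pitch_deg):
    """Classify a gaze direction (0, 0 = straight down the road)."""
    if pitch_deg < -15:
        return "cluster" if -20 < yaw_deg < 20 else "center_stack"
    if yaw_deg < -25:
        return "road_left"
    if yaw_deg > 25:
        return "road_right"
    return "road_center"

def relevant_luminance(yaw_deg, pitch_deg, pupil_estimate, scene_estimates):
    """Pick the luminance estimate that should drive the display setting."""
    zone = gaze_zone(yaw_deg, pitch_deg)
    if zone in ("cluster", "center_stack"):
        # Looking at a display: the pupil-based estimate reflects the light
        # coming off that display, including reflections.
        return pupil_estimate
    # Looking outside: use the estimate for the direction actually viewed,
    # which may be brighter or dimmer than the straight-ahead scene.
    return scene_estimates.get(zone, pupil_estimate)

print(relevant_luminance(-40, 0, 120.0,
                         {"road_center": 5000.0, "road_left": 9000.0}))
```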

Another advantage of biometrics is that it allows designers to remove sensors from the display and avoid the need for a forward-looking sensor, providing a sleeker and more pleasing appearance. Furthermore, eye-gaze cameras are now being placed in cars for other purposes, such as to detect drowsiness, and the same camera can also drive automatic luminance control, at no additional cost. An eye-gaze camera can be used to adjust the luminance of projected HUD displays automatically, as well.

The Visteon team’s next step is to build a physical model of a biometric luminance control system based on these principles and technologies. Ultimately, such technologies will allow displays to adjust to the absolute light levels in and around the vehicle, as well as to the driver’s perception of those levels throughout the journey. This concept promises another eye-popping advancement from Visteon for tomorrow’s cars and trucks.


Paul Weindorf is a display technical fellow for Visteon with more than 35 years of experience in the electronics industry. He currently supports display system activities for production, development and advanced projects. His interest lies in the display visibility arena and he participated in the SAE J1757 committee. Weindorf graduated from the University of Washington with a bachelor’s degree in electrical engineering.

August 26, 2016

Heads up! Here come the HUDs
By James Farell, Director of Mechanical, Display and Optical Engineering, Visteon



Today’s travelers can feel more like pilots than drivers when they sit behind the wheel. They find themselves in a cockpit with an array of digital gauges and navigation systems, and increasingly they are enjoying the benefits of a head-up display, or HUD.

A HUD consists of a picture generation unit, a series of mirrors, and either a transparent combiner screen or the windshield itself to project information directly in front of the operator’s eyes. The first HUDs evolved from World War II-era reflector sights in fighter aircraft, and the technology made its way to automobiles in the 1988 Oldsmobile Cutlass Supreme.

Today’s HUD displays information above the dashboard, such as speed, turn indicators, navigation data and the current radio station. It allows drivers to keep their eyes on the road instead of constantly shifting focus down to the instrument panel. HUDs project only the most important information that the driver needs at the time, thereby avoiding unnecessary distractions.

Early HUDs employed a monochrome vacuum fluorescent display that was not customizable. Today’s more advanced HUDs often use TFT (thin-film transistor) LCD (liquid crystal display) screens, like those found in some smartphones and flat-screen TVs, with an LED (light emitting diode) backlight to generate a very bright image.

HUD systems fall into two main classes: combiner and windshield. A combiner HUD uses a separate screen to reflect the image to the driver, while a windshield HUD reflects the image off the windshield itself. In both categories, a virtual image appears beyond the surface of the reflector, helping the eyes maintain focus on both the data and the roadway.
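
The floating effect follows from basic mirror optics. Below is a simplified, single-mirror sketch; real HUDs use folded, often aspheric optics, and the focal length and object distance here are purely illustrative numbers, not the specifications of any particular HUD.

```python
def virtual_image_distance(focal_length_m, object_distance_m):
    """Mirror equation 1/f = 1/d_o + 1/d_i, solved for the image distance."""
    return 1.0 / (1.0 / focal_length_m - 1.0 / object_distance_m)

# With the picture generation unit placed inside the focal length of a
# concave mirror, the image distance comes out negative: a magnified
# virtual image that appears to float well beyond the mirror surface.
d_o = 0.35                                   # PGU image 35 cm from the mirror
d_i = virtual_image_distance(0.40, d_o)      # assumed 40 cm focal length
print(f"virtual image about {abs(d_i):.1f} m beyond the mirror")   # ~2.8 m
print(f"magnified about {abs(d_i) / d_o:.1f}x")                    # ~8x
```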

Head-up displays can be tailored for all markets, reflecting the variety and advancements that have been made with this technology.
  • The entry-level HUD, designed for emerging markets, uses a passive TFT LCD or vacuum fluorescent system and a combiner with extremely high-quality optics, but with a relatively narrow field of view. This HUD often uses a manually tilted screen rather than the automatic or motor-driven covers available in higher-level HUDs.
  • The next step up is the low-end HUD, which is considerably brighter and offers a 4.5-by-1.5-degree field of view. With an active-matrix TFT LCD screen for sharper colors, a wider field of view and faster response, it employs simplified kinematics with a combiner that rotates down to lie flat when not in use.
  • The mid-level HUD, for the midrange automotive sector, also has a 4.5-by-1.5-degree field of view but a more complex combiner that retracts completely and is covered by a flap, for a more seamless appearance. It is about 70 percent brighter than the low-end HUD.
  • The high-end HUD is even brighter, with a larger TFT screen that offers a very wide 6-by-2.5-degree field of view. Its complex kinematics system incorporates a two-piece flap for efficient packaging, and the combiner screen can both shift and rotate.
  • The windshield HUD system uses no separate combiner, instead projecting data as virtual images that appear in front of the windshield. Its optics are more complex and its cost is higher than that of the other systems. While the same combiner HUD can be designed into different positions and locations in different types of vehicles, a windshield HUD must be designed for a specific windshield and is not as adaptable. (The five tiers are summarized as a simple data structure after this list.)
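
For readers who prefer the comparison at a glance, the same five tiers can be captured as a small data structure. The field names are illustrative, and the values are only those quoted above (field of view in degrees, brightness relative to the low-end unit); entries not stated in the text are left as None.

```python
HUD_TIERS = {
    "entry":      {"imager": "passive TFT LCD or VFD", "combiner": "manual tilt",
                   "fov_deg": None},                     # "relatively narrow"
    "low_end":    {"imager": "active-matrix TFT LCD",  "combiner": "rotates down flat",
                   "fov_deg": (4.5, 1.5)},
    "mid_level":  {"imager": None,                     "combiner": "retracts under a flap",
                   "fov_deg": (4.5, 1.5), "brightness_vs_low_end": 1.7},
    "high_end":   {"imager": "larger TFT LCD",         "combiner": "shifts and rotates",
                   "fov_deg": (6.0, 2.5)},
    "windshield": {"imager": None,                     "combiner": "windshield itself",
                   "fov_deg": None},                     # tied to one windshield design
}
```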





Drivers in Asia and Europe, and to a lesser degree in North America, have shown great interest in HUD systems. Sales are growing 30-40 percent annually, and their attraction is expected to increase now that as many as five types of HUDs are available for various levels of vehicles.

The next generation of HUD will offer an augmented reality system with a very wide field of view and an image that can seem to project right onto the roadway. Its information can overlay what the driver sees in the real world – a pedestrian ready to cross the street, a stop sign or an exit ramp, for instance. Augmented reality HUDs are expected to begin appearing in 2021 vehicles.

When autonomous driving becomes the norm, occupants may be using HUD systems during automated driving periods for videoconferences, rather than phone calls. HUD technology has virtually no limits on what can be displayed. The task of the auto industry is to ensure that HUDs continue to add to safety by reducing driver distractions while also helping prepare for the day when eyes-off-the-road driving will transition to eyes-on-the-screen activities.


Jim Farell leads Visteon’s technology development for all display products, including head-up displays, center information displays, and optics for displays and instrument clusters. During his 24 years at Visteon and Ford, Jim has led teams delivering a diverse portfolio of electronics products including Visteon’s first commercial infotainment platform and first V2X platform. He has a bachelor’s degree from the GMI Engineering and Management Institute, and a master’s in electrical engineering from Stanford University.

August 5, 2016



Self-Driving Cars – How Far From Reality?

By Anshul Saxena, software expert

In the last couple of years, we have witnessed a phenomenal change in the automotive sector in terms of electronics, the use of machine learning algorithms and the integration of more sensors. Moore’s Law* may no longer hold for the number of transistors on a chip, but it can still be applied to the growing number of sensors in an automobile.

Among vehicle features, the biggest beneficiaries of these advances are advanced driver assistance systems (ADAS). With radar, lidar, infrared, ultrasonic, camera and other sensors, ADAS development has reached a stage that could allow the deployment of a “completely independent” self-driving vehicle in the near future. In fact, vehicles with self-driving features like self-acceleration, self-braking and self-steering are already on the road.

A simple Google search of “Autonomous Cars” would convince most people that self-driving cars are just around the corner. Yet, even with the many leaps taken by the industry to adopt the latest technologies, we are still lacking the necessary infrastructure. Self-driving cars require extensive ground support in the form of vehicle-to-infrastructure (V2I), vehicle-to-vehicle (V2V) and vehicle-to-everything (V2X) communication. Some of the immediate requirements for self-driving cars to become a reality are discussed below.

Every country, many states and sometimes even individual municipalities have their own road signs. A self-driving car needs to be capable of identifying, reading, decoding and processing each of those signs. This problem can be approached in two ways: either industry and governments develop “coherent road signs,” or a central database of all possible road signs is created and stored in the vehicle electronics so the car can react appropriately to specific signs.
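
As a sketch of what the database option might look like in code, consider the following; the keys, fields and entries are illustrative assumptions, not a proposed standard.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SignRule:
    description: str
    action: str                    # semantic action handed to the planner
    value: Optional[float] = None  # e.g. a speed limit in km/h

# Keyed by (country, region, sign identifier reported by the camera system);
# "*" marks a country-wide entry.
SIGN_CATALOGUE = {
    ("DE", "*", "speed_limit_60"): SignRule("Speed limit 60", "set_speed_limit", 60.0),
    ("US", "CA", "stop"):          SignRule("Stop", "stop_and_yield"),
}

def resolve_sign(country, region, sign_id):
    """Look up a detected sign, falling back to a country-wide entry."""
    return (SIGN_CATALOGUE.get((country, region, sign_id))
            or SIGN_CATALOGUE.get((country, "*", sign_id)))

print(resolve_sign("DE", "BY", "speed_limit_60"))   # falls back to the DE-wide entry
```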

One can argue that the car’s navigation system should provide the information about road signs, but that technology is limited by its periodic update cycle and requires a constant connection to the Internet or GPS. It would also restrict a self-driving car to properly mapped geographical locations and, in the true sense, would not constitute a self-driving capability.

Another immediate requirement for a self-driving car is the capability to communicate with other vehicles on the road – but not through existing telecommunication technologies like 4G, Wi-Fi or satellite. Depending on these technologies would limit self-driving cars to areas where that infrastructure is available, while also demanding a very high quality of service to guarantee real-time communication.

Relying on existing telecommunications technology would also subject cars to network congestion and make them more susceptible to cyber-attacks, which could compromise the safety of passengers in a self-driving vehicle. V2V communication needs to take place directly, without any dependence on external telecommunication infrastructure. The best approach is to develop a real-time negotiating network protocol for communication between cars, with an additional dedicated layer of security.
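
To make the idea of a dedicated security layer concrete, here is a toy sketch of an authenticated message exchanged directly between vehicles. The field names, the shared-key HMAC scheme and the hard-coded key are illustrative assumptions; a real deployment would use standardized message sets and proper credential management.

```python
import hashlib
import hmac
import json
import time

SHARED_KEY = b"demo-key-not-for-production"   # placeholder for real credentials

def make_v2v_message(vehicle_id, lat, lon, speed_mps, heading_deg, intent):
    payload = {
        "id": vehicle_id, "t": time.time(),
        "lat": lat, "lon": lon,
        "speed": speed_mps, "heading": heading_deg,
        "intent": intent,                      # e.g. "hard_brake", "merge_left"
    }
    body = json.dumps(payload, sort_keys=True).encode()
    mac = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "mac": mac}    # broadcast directly to nearby cars

def verify_v2v_message(msg):
    body = json.dumps(msg["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["mac"])

msg = make_v2v_message("veh-42", 49.01, 8.40, 13.9, 270.0, "hard_brake")
print(verify_v2v_message(msg))   # True unless the payload was tampered with
```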

Lastly, and perhaps most importantly, a self-driving car needs to behave like a human driver. This means it not only requires cameras to analyze its surroundings, but also a sophisticated array of microphones to listen to those surroundings and to user commands. The vehicle should be able to process this audible information and integrate it into its existing information processing architecture to make intelligent, independent decisions. Like humans, a self-driving car needs to have “eyes and ears.”
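
A sketch of what that audio-visual integration might look like at the decision level follows; the event labels, confidence thresholds and decision rules are illustrative assumptions.

```python
def decide(visual_detections, audio_events, threshold=0.6):
    """visual_detections / audio_events: lists of (label, confidence) pairs."""
    sees = {label for label, conf in visual_detections if conf >= threshold}
    hears = {label for label, conf in audio_events if conf >= threshold}

    # An approaching siren should trigger a response even before the
    # emergency vehicle is visible to any camera.
    if "siren" in hears or "emergency_vehicle" in sees:
        return "pull_over_and_yield"
    if "voice_command_stop" in hears:
        return "controlled_stop"
    if "pedestrian" in sees and "horn" in hears:
        return "slow_down"
    return "continue"

print(decide([("pedestrian", 0.3)], [("siren", 0.9)]))   # -> pull_over_and_yield
```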

Semi-autonomous cars are already a reality, and by 2020 they will be very visible on roads. Fully self-driving cars, however, may need more time to reach public highways. The industry needs to develop more sophisticated sensors, machine learning algorithms to process the data from those sensors, and innovative approaches to sensor fusion. If these needs are addressed, we may see the first true self-driving prototypes by 2025.

* Moore’s Law is a computing term which originated in 1965. Gordon Moore, co-founder of Intel, stated that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented. Moore predicted that this trend would continue for the foreseeable future. Although the pace has slowed in subsequent years, most experts, including Moore himself, expect Moore's Law to hold for at least another two decades. Source: www.mooreslaw.org



As a software expert, Anshul is involved in the development of SmartCore™ and is currently working on audio features and signal processing modules. He is also focused on self-driving car technologies and the effect of the Internet of Things on automobiles. Anshul is based in Karlsruhe, Germany.