August 26, 2016

Heads up! Here come the HUDs
By James Farell, Director of Mechanical, Display and Optical Engineering,
Visteon



Today’s travelers can feel more like pilots than drivers when they sit behind the wheel. They find themselves in a cockpit with an array of digital gauges and navigation systems, and increasingly they are enjoying the benefits of a head-up display, or HUD.

A HUD consists of a picture generation unit, a series of mirrors, and either a transparent combiner screen or the windshield itself to project information directly in front of the operator’s eyes. The first HUDs evolved from World War II-era reflector sights in fighter aircraft, and the technology made its way to automobiles with the 1988 Oldsmobile Cutlass Supreme.

Today’s HUD displays information above the dashboard, such as speed, turn indicators, navigation data and the current radio station, letting drivers keep their eyes on the road rather than constantly shifting focus down to the instrument panel. HUDs project only the most important information the driver needs at the time, avoiding unnecessary distraction.

Early HUDs employed a monochrome vacuum fluorescent display that was not customizable. Today’s more advanced HUDs often use TFT (thin-film transistor) LCD (liquid crystal display) screens, like those found in some smartphones and flat-screen TVs, with an LED (light emitting diode) backlight to generate a very bright image.

HUD systems fall into two main classes: combiner and windshield. A combiner HUD reflects the image off a dedicated screen, while a windshield HUD reflects it off the windshield itself. In both cases, the driver sees a virtual image that appears beyond the surface of the reflector, helping the eyes stay focused on both the data and the roadway.
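A rough way to see why the image appears beyond the reflector is to apply the thin-mirror equation to a curved combiner. The sketch below is a minimal illustration with assumed numbers (focal length, source distance), not the specification of any production unit.

    # Minimal optics sketch: a concave combiner forms a virtual image on its far
    # side when the source image sits inside the focal length.
    # Mirror equation: 1/f = 1/d_o + 1/d_i (all distances in meters)

    f = 0.40    # assumed effective focal length of the combiner optics
    d_o = 0.25  # assumed distance from the picture generation unit to the combiner

    d_i = 1.0 / (1.0 / f - 1.0 / d_o)  # solve for the image distance
    magnification = -d_i / d_o

    # A negative d_i means a virtual image located beyond the reflector surface,
    # farther from the driver's eyes than the combiner itself, and magnified.
    print(f"image distance: {d_i:.2f} m (negative = virtual)")
    print(f"magnification:  {magnification:.2f}x")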

Head-up displays can be tailored to every market segment, reflecting how varied and advanced the technology has become.
  • The entry-level HUD, designed for emerging markets, uses a passive TFT LCD or vacuum fluorescent system and a combiner with extremely high-quality optics, but with a relatively narrow field of view. This HUD often uses a mechanical, manual tilting screen, rather than the automatic or motor-driven covers available in higher-level HUDs.
  • The next step up is the low-end HUD, which is considerably brighter and offers a 4.5-by-1.5-degree field of view (a short calculation following this list shows roughly what such angles mean in apparent image size). With an active-matrix TFT LCD for sharper colors, a wider field of view and faster response, it employs simplified kinematics, with a combiner that rotates down to lie flat when not in use.
  • The mid-level HUD, for the midrange automotive sector, also has a 4.5-by-1.5-degree field of view but adds a more complex combiner that retracts completely beneath a covering flap for a more seamless appearance. It is about 70 percent brighter than the low-end HUD.
  • The high-end HUD is brighter still, with a larger TFT screen that offers a very wide 6-by-2.5-degree field of view. Its complex kinematics incorporate a two-piece flap for efficient packaging, and the combiner screen can both shift and rotate.
  • The windshield HUD uses no separate combiner; it reflects its image off the windshield itself, so the virtual image appears out in front of the vehicle. Its optics are more complex and its cost is higher than the other systems. While a combiner HUD can be designed into different positions and locations in different types of vehicles, a windshield HUD must be designed for a specific windshield and is not as adaptable.
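To put the field-of-view figures above in perspective, angular field of view can be converted into apparent image size once a virtual image distance is chosen. The sketch below is illustrative only; the 2.2-meter virtual image distance is an assumed, typical value rather than a figure from any of the systems described.

    import math

    def apparent_size(fov_h_deg, fov_v_deg, virtual_image_distance_m):
        """Convert an angular field of view into apparent image width and height
        (in meters) at the virtual image distance, using a flat-image approximation."""
        w = 2 * virtual_image_distance_m * math.tan(math.radians(fov_h_deg) / 2)
        h = 2 * virtual_image_distance_m * math.tan(math.radians(fov_v_deg) / 2)
        return w, h

    # Assumed virtual image distance of about 2.2 m ahead of the driver's eyes
    for label, (fov_h, fov_v) in [("4.5-by-1.5-degree HUD", (4.5, 1.5)),
                                  ("6-by-2.5-degree HUD", (6.0, 2.5))]:
        w, h = apparent_size(fov_h, fov_v, 2.2)
        print(f"{label}: roughly {w*100:.0f} x {h*100:.0f} cm apparent image")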





Drivers in Asia and Europe, and to a lesser degree in North America, have shown great interest in HUD systems. Sales are growing 30 to 40 percent annually, and demand is expected to rise now that as many as five types of HUDs are available across vehicle segments.

The next generation of HUD will offer an augmented reality system with a very wide field of view and an image that can seem to project right onto the roadway. Its information can overlay what the driver sees in the real world: a pedestrian ready to cross the street, a stop sign or an exit ramp, for instance. Augmented reality HUDs are expected to begin appearing in 2021 vehicles.
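The basic geometry behind such an overlay can be sketched with a simple angular projection: a point of interest ahead of the vehicle is mapped to the angles at which the HUD must draw its marker so the graphic lines up with the real object. The function and numbers below are illustrative assumptions, not part of any production augmented-reality pipeline.

    import math

    def overlay_angles(lateral_m, height_m, distance_m):
        """Map a point ahead of the car (lateral offset, height relative to the
        driver's eyes, forward distance) to the horizontal and vertical angles,
        in degrees, at which a HUD marker should be drawn."""
        azimuth = math.degrees(math.atan2(lateral_m, distance_m))   # left/right
        elevation = math.degrees(math.atan2(height_m, distance_m))  # up/down
        return azimuth, elevation

    # A pedestrian 1.5 m to the right of the lane center, 25 m ahead:
    az, el = overlay_angles(1.5, 0.0, 25.0)

    # Draw the marker only if it falls inside the assumed augmented-reality
    # field of view of +/- 5 degrees horizontally and +/- 2 degrees vertically.
    inside_fov = abs(az) <= 5.0 and abs(el) <= 2.0
    print(f"azimuth {az:.1f} deg, elevation {el:.1f} deg, inside FOV: {inside_fov}")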

When autonomous driving becomes the norm, occupants may use HUD systems during automated driving periods for videoconferences rather than phone calls. There are virtually no limits on what HUD technology can display. The auto industry’s task is to ensure that HUDs continue to improve safety by reducing driver distraction, while also preparing for the day when eyes-off-the-road driving gives way to eyes-on-the-screen activities.


Jim Farell leads Visteon’s technology development for all display products, including head-up displays, center information displays, and optics for displays and instrument clusters. During his 24 years at Visteon and Ford, Jim has led teams delivering a diverse portfolio of electronics products including Visteon’s first commercial infotainment platform and first V2X platform. He has a bachelor’s degree from the GMI Engineering and Management Institute, and a master’s in electrical engineering from Stanford University.

August 5, 2016



Self-Driving Cars – How Far From Reality?

By Anshul Saxena, software expert

In the last couple of years, we have witnessed a phenomenal change in the automotive sector in terms of electronics, the use of machine learning algorithms and the integration of more sensors. Moore’s Law* may no longer hold for the number of transistors on a chip, but it can still be applied to the growing number of sensors in an automobile.

In terms of vehicle features, the biggest beneficiary of these advances is advanced driver assistance systems (ADAS). With radar, lidar, infrared, ultrasonic, camera and other sensors, ADAS development has reached a stage that could allow the deployment of a “completely independent” self-driving vehicle in the near future. In fact, vehicles with self-driving features such as self-acceleration, self-braking and self-steering are already on the road.

A simple Google search for “Autonomous Cars” would convince most people that self-driving cars are just around the corner. Yet even with the many leaps the industry has taken to adopt the latest technologies, the supporting infrastructure is still lacking. Self-driving cars require extensive ground support in the form of vehicle-to-infrastructure (V2I), vehicle-to-vehicle (V2V) and vehicle-to-everything (V2X) communication. Some of the immediate requirements for self-driving cars to become a reality are discussed below.

Every country, many states, and sometimes even individual municipalities have their own road signs. A self-driving car needs to be capable of identifying, reading, decoding and processing each of those signs. The problem can be approached in two ways: either industry and governments develop coherent, standardized road signs, or a central database of all possible road signs is created and stored in the vehicle’s electronics so the car can react appropriately to each sign it encounters.
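One minimal way to picture the database approach is a lookup table keyed by jurisdiction and sign code, consulted whenever the perception system reports a detected sign. The structure, codes and actions below are purely illustrative assumptions.

    # Illustrative sketch of an on-board road sign database (assumed structure).
    # Keys are (jurisdiction, sign_code); values describe how the vehicle should react.
    SIGN_DATABASE = {
        ("DE", "274-60"): {"meaning": "speed limit 60 km/h", "action": "cap_speed", "value": 60},
        ("US-MI", "R1-1"): {"meaning": "stop", "action": "full_stop", "value": None},
    }

    def react_to_sign(jurisdiction, sign_code):
        """Look up a detected sign; fall back to alerting the driver if it is unknown."""
        entry = SIGN_DATABASE.get((jurisdiction, sign_code))
        if entry is None:
            return {"meaning": "unknown sign", "action": "alert_driver", "value": None}
        return entry

    print(react_to_sign("DE", "274-60"))  # known sign: cap speed at 60 km/h
    print(react_to_sign("FR", "B14"))     # unknown sign: hand control back to the driver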

One could argue that the car’s navigation system should provide road sign information, but that approach depends on periodic map updates and a constant connection to the Internet or GPS. It would limit a self-driving car to properly mapped geographical areas and, in the true sense, would not constitute a self-driving feature.

Another immediate requirement for a self-driving car is the capability to communicate with other vehicles on the road, but not through existing telecommunication technologies such as 4G, Wi-Fi or satellite. Depending on these technologies would confine self-driving cars to areas where that infrastructure is available, while also demanding a very high quality of service to guarantee real-time communication.

Relying on existing telecommunications technology would also subject cars to network congestion and make them more susceptible to cyber-attacks, compromising the safety of passengers in a self-driving vehicle. V2V communication needs to take place directly, without any dependence on external telecommunication infrastructure. The best possible approach is to develop a real-time negotiating network protocol for communication between cars, with an additional dedicated layer of security.
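As a highly simplified sketch of what a directly exchanged, authenticated V2V message might look like, the example below signs a small state-and-intent broadcast with an HMAC. The field names and the shared-key signing are illustrative assumptions, not a description of any standardized protocol; real deployments use certificate-based signing.

    import hashlib
    import hmac
    import json
    import time

    SHARED_KEY = b"demo-key-not-for-production"  # placeholder; real systems use certificates

    def make_v2v_message(vehicle_id, lat, lon, speed_mps, intent):
        """Build a small, signed state/intent broadcast for nearby vehicles."""
        payload = {
            "vehicle_id": vehicle_id,
            "timestamp": time.time(),
            "position": [lat, lon],
            "speed_mps": speed_mps,
            "intent": intent,  # e.g. "merge_left", "hard_brake"
        }
        body = json.dumps(payload, sort_keys=True).encode()
        signature = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
        return {"body": payload, "signature": signature}

    def verify_v2v_message(message):
        """Accept only messages whose signature matches (the dedicated security layer)."""
        body = json.dumps(message["body"], sort_keys=True).encode()
        expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, message["signature"])

    msg = make_v2v_message("car-42", 49.0069, 8.4037, 13.9, "hard_brake")
    print(verify_v2v_message(msg))  # True for an untampered message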

Lastly, and perhaps most importantly, self-driving cars need to behave like a human driver. This means the car not only requires cameras to analyze its surroundings, but also a sophisticated array of microphones to listen to those surroundings and to user commands. The vehicle should be able to process this audible information and integrate it into its existing information processing architecture to make intelligent, independent decisions. Like a human, a self-driving car needs to have “eyes and ears.”
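As a toy illustration of folding audible information into the decision layer, the sketch below treats a classified audio event as one more input alongside camera detections. The event labels and the upstream classifiers they imply are assumptions made for the example.

    def decide(camera_objects, audio_events):
        """Combine visual detections with classified sounds into one maneuver decision.
        Both inputs are assumed to come from upstream perception modules."""
        if "siren_approaching" in audio_events:
            return "pull_over"  # an emergency vehicle is often heard before it is seen
        if "pedestrian_in_path" in camera_objects:
            return "brake"
        if "horn_nearby" in audio_events:
            return "increase_following_distance"
        return "continue"

    print(decide(camera_objects=["vehicle_ahead"], audio_events=["siren_approaching"]))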

Semi-autonomous cars are already a reality, and by 2020 they will be a common sight on the road. Fully self-driving cars, however, may need more time to reach public highways. The industry needs to develop more sophisticated sensors, machine learning algorithms to process the data from those sensors, and innovative sensor fusion. If these needs are addressed, we may see the first prototype of a self-driving car in the true sense by 2025.
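Sensor fusion can be pictured, at its simplest, as weighting each sensor’s estimate by how much it is trusted. The variance-weighted average below is a textbook building block with made-up numbers, not a description of any production fusion stack.

    def fuse_estimates(estimates):
        """Variance-weighted fusion of independent estimates of the same quantity.
        `estimates` is a list of (value, variance) pairs, one per sensor."""
        weights = [1.0 / variance for _, variance in estimates]
        fused = sum(w * value for (value, _), w in zip(estimates, weights)) / sum(weights)
        fused_variance = 1.0 / sum(weights)
        return fused, fused_variance

    # Distance to the car ahead, in meters: radar is precise, the camera less so.
    radar_estimate = (24.8, 0.04)   # assumed value and variance
    camera_estimate = (26.1, 0.90)  # assumed value and variance
    print(fuse_estimates([radar_estimate, camera_estimate]))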

* Moore’s Law is a computing term which originated in 1965. Gordon Moore, co-founder of Intel, stated that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented. Moore predicted that this trend would continue for the foreseeable future. Although the pace has slowed in subsequent years, most experts, including Moore himself, expect Moore's Law to hold for at least another two decades. Source: www.mooreslaw.org



As a software expert, Anshul is involved in the development of SmartCore™ and is currently working on the development of audio features and signal processing modules. He is also focused on self-driving car technologies and the effect of the Internet of Things on automobiles. Anshul is based in Karlsruhe, Germany.