December 19, 2016

“Real 3-D” Instrument Cluster Wins Consumer Applause in Screen Tests
By Judy Blessing, Market & Trends Research, Visteon


How many times have you taken your seat in a movie theater, ready to watch a long-anticipated 3-D feature, only to discover that the so-called 3-D images don’t leap out to grab you? It turns out that not all 3-D is the same, and automotive engineers working with consumers on 3-D instrument clusters are finding that the same principle applies. Consumers increasingly prefer realistic 3-D effects, not fake-looking 3-D images.

Those preferences registered clearly in a recent advanced product research clinic piloted by Visteon in Germany, which comprised two separate studies. In the first, consumers compared different fully reconfigurable clusters, with and without 3-D effects, to reveal their preferences between 2-D and 3-D displays. In the second, Visteon researchers compared different types of 3-D technologies to collect feedback. In all instances, very similar graphics were displayed on the screens, adapted to take advantage of the capabilities of the various technologies.

Participants in the second study compared three advanced 3-D instrument clusters, viewed from the same distance and angle, in random order. The first demonstration offered high-performance 3-D graphics rendered on a 12.3-inch display with a resolution of 2880 x 1080 pixels. While this resolution exceeds the state of the art, the content is displayed on just a single layer.


The image above is one of the three advanced 3-D instrument clusters tested in the clinic.


The second technology shown was a multilayer cluster, which uses two 12.3-inch displays stacked one behind the other with a distance of 8 mm between them. However, each layer only delivers 1440 x 540 pixels, and content on the back layer can appear slightly blurry because of very thin wires in the front, transparent layer. Yet this second approach enables drivers to visualize depth where appropriate.


The image above represents the multilayer display principle (based on Witehira 2005)


The third technology examined was a Visteon second-generation multilayer cluster, named Prism. This system consists of two reconfigurable displays, one vertical and one horizontal, separated by a flat semi-transparent mirror. The mirror reflects the image of a horizontal TFT (thin-film transistor) display, creating a virtual image that overlays the vertical TFT without creating a blur. This arrangement allows design flexibility of the virtual image so it can appear in the same plane as the vertical TFT, behind it or in front of it.


The image above represents the package overview of the Prism concept


Clinic Results
The second-generation multilayer Prism cluster was found to combine the best of both worlds: “real 3-D” effects with clear graphics and quality.

  • Of the clinic participants, 100 percent gave Prism a good or very good rating on quality, and 93 percent agreed or strongly agreed that the second-generation multilayer instrument cluster has a high premium feel.
  • Nearly nine of 10 respondents felt it to be innovative or very innovative, with the original multilayer cluster close behind.
  • Reliability (being easy to read) was highest for the 3-D rendered instrument cluster, which was rated reliable or very reliable by 75 percent of the participants.
The 3-D effects were rated higher on the instrument clusters with two layers, each of which showed “real depth,” bringing the most important information to the front. The first instrument cluster was considered to have no real depth and was seen as “fake” 3-D, although its high resolution was appreciated.

In the end, participants fell into two groups. One preferred unobtrusive, high-resolution rendering and remained a bit reluctant to embrace 3-D fully. The other saw the advantage of multilayer instrument clusters in bringing information from the back to the front to raise awareness. Both executions are considered good ways to represent 3-D, and the drawbacks of the initial multilayer clusters have largely been overcome with the second-generation technology, which combines high resolution and no blurriness with the preferred 3-D effects.

For additional details on the clinic and the results, see Visteon’s white paper, “Consumer insights on innovative 3-D visualization technologies.”

Judy Blessing brings 18 years of research experience to her manager position in market and trends research. Her in-depth knowledge of research methodologies allows her to apply the proper testing and analysis to showcase Visteon’s automotive expertise to external customers and industry affiliates. She holds a German University Diploma degree in marketing/market research from the Fachhochschule Pforzheim, Germany.


December 5, 2016


Touchless vehicle apps know what you want, when you want it
By Sivakumar Yeddanapudi

Today’s cars and trucks are smart, but they’re not smartphones. On our phones, we can choose from millions of apps just by touching an icon. If we tried to do everything we want to do in our cars through apps, we’d be frustrated, because we can’t safely select them while driving. Automakers have been limited to the native applications built into cars – like Bluetooth and USB ports – leaving drivers to rely on a passenger with a phone to find the nearest gas station or restaurant if the vehicle didn’t have built-in navigation.

A new developer-friendly application platform from Visteon – called Phoenix – solves this problem and propels smart vehicle infotainment systems to the head of the class. This web-based infotainment platform “stitches” together apps native to the car with apps from third parties. Application integration is performed via recipes that enable the appropriate apps at the ideal time contextually – without the driver needing to touch anything.

Case in point: as a driver enters the vehicle, a customized startup feature automatically surfaces content from three different apps – an audible message lists the day’s meetings, the weather forecast and traffic conditions for the anticipated route.

The driver does not need to use a phone, speak voice commands or input commands to a touch screen; all required information is automatically displayed on one screen.

Similarly, today two separate apps and two steps are required to open a garage door remotely and to show the vehicle’s position in relation to the garage entrance via GPS. Phoenix stitches these apps together so that the garage door automatically opens as the vehicle approaches.
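
The details of Phoenix recipes have not been published, so here is a hypothetical, minimal Python sketch of the concept: a contextual trigger bound to actions on several apps, evaluated on each context update. Phoenix itself exposes HTML5 and JavaScript APIs; every name below is invented for illustration.

```python
# Hypothetical sketch of a contextual "recipe" that stitches apps together.
# Not the Phoenix API: Phoenix apps are written in HTML5/JavaScript, and all
# names here are invented to illustrate the trigger-plus-actions idea.

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Recipe:
    """Binds a context trigger to actions on one or more apps."""
    name: str
    trigger: Callable[[dict], bool]               # evaluates vehicle context
    actions: list = field(default_factory=list)   # app callbacks to run

    def evaluate(self, context: dict) -> None:
        if self.trigger(context):
            for action in self.actions:
                action(context)


# Example: open the garage door as the vehicle nears home.
def near_home(ctx: dict) -> bool:
    return ctx.get("distance_to_home_m", 1e9) < 50


garage_recipe = Recipe(
    name="auto-garage",
    trigger=near_home,
    actions=[lambda ctx: print("garage_app: opening door"),
             lambda ctx: print("map_app: showing approach view")],
)

# The platform would call this on every context update (GPS, calendar, etc.).
garage_recipe.evaluate({"distance_to_home_m": 35})
```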



Phoenix is easy to work with because it complies with open standards from organizations such as the W3C and GENIVI, and it is designed with app developers in mind. The platform lets developers build applications using HTML5 along with rich JavaScript-based application programming interfaces (APIs), eliminating the need to rewrite applications when porting them to other infotainment systems.

Furthermore, Visteon offers a software development kit (SDK) with libraries of code, documentation and a simulator. The Phoenix SDK makes development easier than conventional, often disjointed methods, which require custom software or hardware, lack third-party tools, and thus increase cost and time. With Phoenix, the developer creates and tests the app with the SDK and simulator; the app is then validated by the automaker or Visteon and published to an app store. Phoenix is the first platform for vehicle apps to incorporate HTML5 and an SDK.

The Phoenix platform also advances the capability to update in-vehicle apps over the air, whether at a dealership lot or in the driveways of individual owners. For the first time, automakers can securely update just one portion of an app, using Visteon’s proprietary block-and-file technology, rather than needing to upgrade the entire system.
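
Visteon’s block-and-file technology itself is proprietary, but the general idea of a partial update – transmit only the blocks of an image that changed – can be sketched generically. The block size and hashing below are illustrative choices, not details of the actual system.

```python
# Generic sketch of block-level differencing for partial updates.
# Block size and hash are illustrative; this is not Visteon's implementation.

import hashlib

BLOCK_SIZE = 4096  # bytes per block (illustrative choice)


def block_hashes(image: bytes) -> list:
    """Hash each fixed-size block of the image."""
    return [hashlib.sha256(image[i:i + BLOCK_SIZE]).digest()
            for i in range(0, len(image), BLOCK_SIZE)]


def delta(old: bytes, new: bytes) -> dict:
    """Return only the blocks of `new` that differ from `old`."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return {i: new[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]
            for i in range(len(new_h))
            if i >= len(old_h) or old_h[i] != new_h[i]}


old_image = b"A" * 8192 + b"B" * 4096
new_image = b"A" * 8192 + b"C" * 4096   # only the last block changed
patch = delta(old_image, new_image)
print(f"{len(patch)} of {len(block_hashes(new_image))} blocks to transmit")
```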

By 2020, when vehicle-to-vehicle (V2V) communication is expected to be more common, vehicles will be able to display infotainment on screens from 12 to 17 inches in size, compared with today’s 7- to 8-inch screens. Phoenix will enable developers to create content optimized for these larger screens, making them more useful for drivers and improving the driving experience.

Sivakumar Yeddanapudi is a platform leader managing the Phoenix program. He also develops infotainment platforms that incorporate the latest technologies, such as an advanced HTML5 HMI framework, web browser integration, cybersecurity, over-the-air reflash and vision processing for cockpit electronics.

He has more than 15 years of automotive experience, having served as a software developer and as a technical professional for audio and infotainment software, and now works as a platform leader at Visteon’s headquarters in the U.S.


November 3, 2016


Machine Learning Algorithms in Autonomous Cars

Machine learning algorithms are now used extensively to find solutions to different challenges ranging from financial market predictions to self-driving cars. With the integration of sensor data processing in a centralized electronic control unit (ECU) in a car, it is imperative to increase the use of machine learning to perform new tasks. Potential applications include driving scenario classification or driver condition evaluation via data fusion from different internal and external sensors – such as cameras, radars, lidar or the Internet of Things.

Anshul Saxena, software expert at Visteon's technical center in Karlsruhe, Germany, provides a technical review of the use of machine learning algorithms in autonomous cars, and investigates the reusability of an algorithm for multiple features.

The applications running a car’s infotainment system can receive information from sensor data fusion systems and have, for example, the ability to direct the vehicle to a hospital if they sense that something is wrong with the driver. A machine learning-based application can also incorporate driver gesture and speech recognition, and language translation. Machine learning algorithms can be broadly classified as supervised or unsupervised; the difference between the two is how they learn.

Supervised algorithms learn using a training dataset, and keep learning until they reach the desired level of confidence (minimization of probability error). They can be sub-classified into classification, regression, and dimension reduction or anomaly detection.

Unsupervised algorithms try to make sense of the available data without labels. An algorithm develops relationships within the available data set to identify patterns, or divides the data set into subgroups based on the level of similarity between them. Unsupervised algorithms can be largely sub-classified into clustering and association rule learning.
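
A minimal sketch of the distinction, using scikit-learn on synthetic data (not automotive code): the supervised model learns from labeled examples, while the clustering step receives no labels at all.

```python
# Supervised vs. unsupervised learning in miniature, on synthetic 2-D data.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # labels exist: supervised setting

clf = LogisticRegression().fit(X, y)      # learns from labeled examples
print("supervised accuracy:", clf.score(X, y))

km = KMeans(n_clusters=2, n_init=10).fit(X)   # no labels: finds structure
print("unsupervised cluster sizes:", np.bincount(km.labels_))
```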

There is now another set of machine learning algorithms, called reinforcement learning algorithms, which fall somewhere between supervised and unsupervised learning. In supervised learning, there is a target label for each training example; in unsupervised learning, there are no labels at all; reinforcement learning has sparse, time-delayed labels – the future rewards.


Based only on those rewards, the agent has to learn to behave in its environment. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand their merits and limitations. Reinforcement learning is of great interest because of the large number of practical applications it can potentially address, ranging from problems in artificial intelligence to operations research and control engineering – all relevant to developing a self-driving car. It can be further classified into direct and indirect learning.
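
To make “sparse and time-delayed labels” concrete, here is a minimal tabular Q-learning sketch on a toy five-state corridor. The reward appears only at the goal state, and the update rule propagates it backward through earlier states; this is a textbook illustration, not an automotive implementation.

```python
# Tabular Q-learning: the reward arrives only at the final state, so the
# agent must propagate that delayed signal back through earlier actions.

import numpy as np

n_states, n_actions = 5, 2     # corridor of 5 states; actions: 0=left, 1=right
alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

for _ in range(2000):
    s = 0
    while s != n_states - 1:                      # episode ends at the goal
        if rng.random() < epsilon:                # explore
            a = int(rng.integers(n_actions))
        else:                                     # exploit (random tie-break)
            a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0   # sparse, delayed reward
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(np.round(Q, 2))   # values grow along the rewarded rightward path
```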

One of the main tasks of any machine learning algorithm in the self-driving car is continuous rendering of the surrounding environment and prediction of possible changes to those surroundings. These tasks are mainly divided into four sub-tasks:
  • Object detection
  • Object identification or recognition
  • Object classification
  • Object localization and prediction of movement

Machine learning algorithms can be loosely divided into four categories: regression algorithms, pattern recognition, clustering algorithms and decision matrix algorithms. One category of machine learning algorithm can be used to execute two or more different sub-tasks. For example, regression algorithms can be used for object detection as well as for object localization or prediction of movement.


Regression Algorithms
This type of algorithm is good at predicting events. Regression analysis estimates the relationship between two or more variables and compares the effects of variables measured on different scales. It is mostly driven by three metrics, namely:
  • The number of independent variables
  • The type of dependent variables
  • The shape of the regression line.

In ADAS, images (from radar or cameras) play a very important role in localization and actuation, while the biggest challenge for any algorithm is to develop an image-based model for prediction and feature selection.

Regression algorithms leverage the repeatability of the environment to create a statistical model of the relation between an image and the position of a given object in that image. The statistical model can be learned offline and provides fast online detection by allowing image sampling. Furthermore, it can be extended to other objects without requiring extensive human modeling. As the output of the online stage, the algorithm returns an object position and a confidence in the presence of the object.

These algorithms can also be used for long-term learning and short-term prediction. The types of regression algorithms that can be used for self-driving cars include Bayesian regression, neural network regression and decision forest regression, among others.
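
A hedged sketch of that offline/online split, with a decision forest standing in for the statistical model. The 16-dimensional features and the linear ground truth are synthetic stand-ins for real camera or radar descriptors.

```python
# Offline: learn a statistical mapping from image features to object position.
# Online: predict quickly from the features of a new frame. Synthetic data.

import numpy as np
from sklearn.ensemble import RandomForestRegressor   # a "decision forest"

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 16))                # e.g. pooled image stats
position = features[:, :2] @ np.array([[3.0, 0.5],   # ground-truth (x, y)
                                       [0.2, 2.0]]) \
           + rng.normal(scale=0.1, size=(500, 2))

model = RandomForestRegressor(n_estimators=50).fit(features, position)  # offline

new_frame = rng.normal(size=(1, 16))                 # online stage: one image
print("predicted (x, y):", model.predict(new_frame)[0])
```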

Pattern Recognition Algorithms (Classification)
In ADAS, the images obtained through sensors contain all types of environmental data, so filtering is required to recognize instances of an object category by ruling out the irrelevant data points. Pattern recognition algorithms are good at ruling out these unusual data points. Recognizing patterns in a data set is an important step before classifying objects, and these types of algorithms can also be described as data reduction algorithms.

These algorithms help in reducing the data set by detecting object edges and fitting line segments (polylines) and circular arcs to the edges. Line segments are aligned to edges up to a corner, then a new line segment is started. Circular arcs are fit to sequences of line segments that approximate an arc. The image features (line segments and circular arcs) are combined in various ways to form the features that are used for recognizing an object.
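
The edge-then-segments reduction can be sketched with scikit-image; circular-arc fitting is omitted for brevity, and the input is a synthetic image standing in for a real sensor frame.

```python
# Data reduction on an image: detect edges, then fit line segments to them.

import numpy as np
from skimage.feature import canny                       # edge detection
from skimage.transform import probabilistic_hough_line  # line-segment fitting

img = np.zeros((100, 100))
img[20, 10:90] = 1.0        # a horizontal bar, as a stand-in for a lane edge
img[20:80, 60] = 1.0        # and a vertical one

edges = canny(img, sigma=1.0)
segments = probabilistic_hough_line(edges, threshold=5,
                                    line_length=20, line_gap=3)
print(f"{edges.sum()} edge pixels reduced to {len(segments)} line segments")
```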

Support vector machines (SVM) with histograms of oriented gradients (HOG) and principal component analysis (PCA) are the most common recognition algorithms used in ADAS. The Bayes decision rule and K-nearest neighbors (KNN) are also used.
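
The pipeline those acronyms describe can be wired together in a few lines of scikit-image and scikit-learn. The sketch below trains on random images purely to show how the pieces fit; a real system would use labeled crops of pedestrians, vehicles and signs.

```python
# HOG features -> PCA reduction -> SVM classifier, on placeholder images.

import numpy as np
from skimage.feature import hog          # pip install scikit-image
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
images = rng.random((60, 32, 32))        # stand-ins for cropped detections
labels = rng.integers(0, 2, size=60)     # e.g. pedestrian vs. not

feats = np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2)) for im in images])

feats_reduced = PCA(n_components=20).fit_transform(feats)
clf = SVC(kernel="rbf").fit(feats_reduced, labels)
print("training accuracy:", clf.score(feats_reduced, labels))
```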

Clustering
Sometimes the images obtained by the system are not clear, and it is difficult to detect and locate objects. It is also possible that the classification algorithms miss an object and fail to classify and report it to the system. The reason could be low-resolution images, very few data points or discontinuous data. This type of algorithm is good at discovering structure from data points. Like regression, clustering describes both a class of problem and a class of methods. Clustering methods are typically organized by modeling approach, such as centroid-based and hierarchical. All methods are concerned with using the inherent structures in the data to best organize it into groups of maximum commonality. The most commonly used algorithms are K-means and multi-class neural networks.
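
As a small illustration, K-means can group a handful of noisy, sparse detection points – the situation described above – into object candidates. The two synthetic point clouds stand in for, say, poorly resolved radar returns.

```python
# Clustering sparse detection points into object groups with K-means.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two loose groups of returns, as if from two poorly resolved objects.
points = np.vstack([rng.normal([2.0, 2.0], 0.3, size=(20, 2)),
                    rng.normal([6.0, 5.0], 0.3, size=(15, 2))])

km = KMeans(n_clusters=2, n_init=10).fit(points)
print("cluster centers:\n", np.round(km.cluster_centers_, 2))
```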

Decision Matrix Algorithms
This type of algorithm is good at systematically identifying, analyzing and rating the performance of relationships between sets of values and information. These algorithms are mainly used for decision making: whether a car needs to turn left or to brake depends on the level of confidence the algorithms have in the classification, recognition and prediction of the next movement of objects. Decision matrix algorithms are models composed of multiple decision models, independently trained, whose predictions are combined in some way to make the overall prediction while reducing the possibility of errors in decision making. The most commonly used algorithms are gradient boosting machines (GBM) and AdaBoost.
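
Both named methods are standard ensemble learners. The sketch below trains each on synthetic “fused confidence scores” for a brake/don’t-brake decision, to show the many-models-combined idea; it is illustrative only.

```python
# Ensembles of independently trained decision models, combined by boosting.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))                             # fused confidences
y = (X[:, 0] - X[:, 3] + 0.3 * X[:, 5] > 0).astype(int)   # brake / don't brake
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (AdaBoostClassifier(), GradientBoostingClassifier()):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "accuracy:", round(model.score(X_te, y_te), 3))
```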

As a software expert, Anshul is involved in the development of SmartCore™ and autonomous driving domain controller platforms. He is focused on self-driving car technologies and the effect of the Internet of Things on the auto industry. Anshul is based in Karlsruhe, Germany. 

September 8, 2016

In the Eye of the Beholder: A Biometric Approach to Automatic Luminance Control
By Paul Weindorf, Display Systems Technical Fellow, Visteon


Every driver these days depends on an array of displays on the instrument panel, in the center stack, in the mirror and even through head-up displays (HUDs). Since drivers rely on these displays for critical data, it’s vital that this information be clearly visible, whatever the light levels inside and outside the vehicle may be. Variances in light factors such as sunlight and passing or constant shadows, however, sometimes can make the task of reading the displays challenging.

In some instances, the sun may be shining directly on the display, resulting in reflections that overwhelm the displayed information. In other cases, the driver may be looking out the front windshield into very bright sunlight and then attempt to glance at the instrument cluster without enough time for his or her eyes to adjust to the interior ambient light, again producing a temporary problem seeing the display information.

Automakers are becoming aware that constantly running displays at their highest luminance levels accelerates image burn-in on modern OLED screens and makes cooling the screens more difficult. Cranking up the brightness also draws a lot of power, impacting battery life, especially in electric vehicles.

Display developers are testing silicon light sensors, with one pointing forward out the windshield and others mounted at the corners of each display to detect the ambient light falling on the screen. These detectors automatically adjust the luminance of the displays, making them brighter or dimmer as lighting conditions require, extending the life of OLED screens and keeping them cooler.


Visteon’s dual OLED display features auto luminance, which adjusts display brightness depending on surrounding conditions


More recently, however, Visteon has proposed a different, more accurate method of automatic luminance control: measuring the constantly changing diameter of the driver’s pupils to determine the appropriate brightness levels of displays. This method, called a total biometric automatic luminance control system, replaces silicon sensors with an infrared eye-gaze camera that precisely determines pupil diameter.

When the driver is looking outside on a sunny day, his or her pupils will contract; when looking at the cockpit instruments, the pupils grow larger. Using the science of “pupillometry” – first applied in the fields of psychology and medicine – the camera detects where the driver is looking and determines the brightness outside and inside the vehicle. The display system automatically adjusts its luminance based directly on the driver’s eye response to light, rather than on input from sensors.

At this year’s national Society for Information Display (SID) conference, our Visteon team discussed how pupillometry can be used to determine luminance value when the driver is looking outside. At the SID Detroit chapter conference later this month, I will propose using pupil-diameter measurements to determine the reflected luminance from the front of the display. The latter is a more difficult issue because, when the driver glances from the road to the instrument cluster, the eye adapts to the dimmer light in an exponential fashion over a 10-second period, requiring an algorithm to determine what the final luminance value should be after the eyes have completed their adjustment.
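
Visteon has not published the algorithm, but one plausible shape for it (an assumption, not the company’s implementation) is to fit an exponential adaptation curve to the first seconds of pupil-diameter samples and extrapolate the steady-state value, rather than waiting the full 10 seconds:

```python
# Fit an exponential adaptation curve to early pupil-diameter samples and
# extrapolate the steady-state value. A sketch under assumed dynamics, not
# Visteon's published algorithm.

import numpy as np
from scipy.optimize import curve_fit


def adaptation(t, d_final, d_start, tau):
    return d_final + (d_start - d_final) * np.exp(-t / tau)


# Simulated pupil diameters (mm) over the first 2 s of a glance at the cluster.
t = np.linspace(0, 2, 30)
true_curve = adaptation(t, 5.5, 3.0, 3.0)       # pupils dilate in dimmer light
samples = true_curve + np.random.default_rng(0).normal(scale=0.05, size=t.size)

(d_final, d_start, tau), _ = curve_fit(adaptation, t, samples, p0=(5.0, 3.0, 2.0))
print(f"predicted steady-state pupil diameter: {d_final:.2f} mm")
```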

The primary value of potentially using this biometric system instead of silicon detectors is its straightforward accuracy. When silicon sensors are employed, they are positioned at the corners of the display; depending on the lighting and shadow conditions, they may not correctly sense the true reflected luminance. The biometric approach measures what the human eye actually sees off the front of the display. When examining what the driver is gazing at outside, silicon sensors look forward within a particular field of view, but the driver may be looking toward the left or right. The biometric eye-gaze camera uses the glint on the eyes from the infrared emitter to determine which direction the eyes are looking and to adjust the display luminance based on what may be a greater or lesser intensity than the straight-ahead field of view.

Another advantage of biometrics is that it allows designers to remove sensors from the display and avoid the need for a forward-looking sensor, providing a sleeker and more pleasing appearance. Furthermore, eye-gaze cameras are now being placed in cars for other purposes, such as to detect drowsiness, and the same camera can also drive automatic luminance control, at no additional cost. An eye-gaze camera can be used to adjust the luminance of projected HUD displays automatically, as well.

The Visteon team’s next step is to build a physical model of a biometric luminance control system based on these principles and technologies. Ultimately, such technologies will allow displays to adjust to the absolute light levels in and around the vehicle, as well as to the driver’s perception of those levels throughout the journey. This concept promises another eye-popping advancement from Visteon for tomorrow’s cars and trucks.


Paul Weindorf is a display technical fellow for Visteon with more than 35 years of experience in the electronics industry. He currently supports display system activities for production, development and advanced projects. His interest lies in the display visibility arena and he participated in the SAE J1757 committee. Weindorf graduated from the University of Washington with a bachelor’s degree in electrical engineering.

August 26, 2016

Heads up! Here come the HUDs
By James Farell, Director of Mechanical, Display and Optical Engineering, Visteon



Today’s travelers can feel more like pilots than drivers when they sit behind the wheel. They find themselves in a cockpit with an array of digital gauges and navigation systems, and increasingly they are enjoying the benefits of a head-up display, or HUD.

A HUD consists of a picture generation unit, a series of mirrors, and either a transparent combiner screen or the windshield itself to project information directly in front of the operator’s eyes. The first HUDs evolved from World War II-era reflector sights in fighter aircraft and the technology made its way to automobiles in the 1988 Oldsmobile Cutlass Supreme.

Today’s HUD displays information above the dashboard, such as speed, turn indicators, navigation data and the current radio station. It allows drivers to keep their eyes on the road without having to constantly shift their focus between the road and the instrument panel. HUDs project only the most important information that the driver needs at the time, thereby avoiding unnecessary distractions.

Early HUDs employed a monochrome vacuum fluorescent display that was not customizable. Today’s more advanced HUDs often use TFT (thin-film transistor) LCD (liquid crystal display) screens, like those found in some smartphones and flat-screen TVs, with an LED (light emitting diode) backlight to generate a very bright image.

HUD systems fall into two main classes: combiner and windshield. A combiner HUD uses a separate screen to reflect an image to the driver, while a windshield HUD reflects images off the windshield itself. In both categories, a virtual image appears beyond the surface of the reflector, helping the eyes maintain focus on both the data and the roadway.

Head-up displays can be tailored for all markets, reflecting the variety and advancements that have been made with this technology.
  • The entry-level HUD, designed for emerging markets, uses a passive TFT LCD or vacuum fluorescent system and a combiner with extremely high-quality optics, but with a relatively narrow field of view. This HUD often uses a mechanical, manual tilting screen, rather than the automatic or motor-driven covers available in higher-level HUDs.
  • The next step up is the low-end HUD, which is considerably brighter and offers a 4.5 x 1.5 degree field of view. With an active-matrix TFT LCD screen for sharper colors, a wider field of view and faster response, it employs simplified kinematics with a combiner that rotates down to lie flat when not in use.
  • The mid-level HUD, for the midrange automotive sector, also has a 4.5 x 1.5 degree field of view but a more complex combiner that completely retracts, with a flap that covers it for a more seamless appearance. It is about 70 percent brighter than the low-end HUD.
  • The high-end HUD is even brighter, with a larger TFT screen that offers a very wide 6 x 2.5 degree field of view. Its complex kinematics system incorporates a two-piece flap for efficient packaging, and the combiner screen can both shift and rotate.
  • The windshield HUD uses no separate combiner, instead projecting virtual images via the windshield itself. Its optics are more complex and its cost higher than the other systems. While the same combiner HUD can be designed into different positions and locations in different types of vehicles, a windshield HUD must be designed for a specific windshield and is not as adaptable.
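
The field-of-view figures above translate directly into apparent image size. A quick back-of-the-envelope calculation, assuming a typical 2.5-meter virtual image distance (an assumed value, not a Visteon specification):

```python
# Virtual image size from field of view and virtual image distance.

import math

def virtual_image_size(fov_h_deg, fov_v_deg, distance_m):
    """Width and height subtended by the given field of view at a distance."""
    width = 2 * distance_m * math.tan(math.radians(fov_h_deg / 2))
    height = 2 * distance_m * math.tan(math.radians(fov_v_deg / 2))
    return width, height

for name, fov in [("low/mid-level", (4.5, 1.5)), ("high-end", (6.0, 2.5))]:
    w, h = virtual_image_size(*fov, 2.5)
    print(f"{name}: {w * 100:.0f} x {h * 100:.0f} cm at 2.5 m")
```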





Drivers in Asia and Europe, and to a lesser degree in North America, have shown great interest in HUD systems. Sales are growing 30-40 percent annually, and their attraction is expected to increase now that as many as five types of HUDs are available for various levels of vehicles.

The next generation of HUD will offer an augmented reality system with a very wide field of view and an image that can seem to project right onto the roadway. Its information can overlay what the driver sees in the real world – a pedestrian ready to cross the street, a stop sign or an exit ramp, for instance. Augmented reality HUDs are expected to begin appearing in 2021 vehicles.

When autonomous driving becomes the norm, occupants may be using HUD systems during automated driving periods for videoconferences, rather than phone calls. HUD technology has virtually no limits on what can be displayed. The task of the auto industry is to ensure that HUDs continue to add to safety by reducing driver distractions while also helping prepare for the day when eyes-off-the-road driving will transition to eyes-on-the-screen activities.


Jim Farell leads Visteon’s technology development for all display products, including head-up displays, center information displays, and optics for displays and instrument clusters. During his 24 years at Visteon and Ford, Jim has led teams delivering a diverse portfolio of electronics products including Visteon’s first commercial infotainment platform and first V2X platform. He has a bachelor’s degree from the GMI Engineering and Management Institute, and a master’s in electrical engineering from Stanford University.

August 5, 2016



Self-Driving Cars – How Far From Reality?

By Anshul Saxena, software expert

In the last couple of years, we have witnessed a phenomenal change within the automotive sector in terms of electronics, use of machine learning algorithms and the integration of more sensors. Moore’s Law* may no longer be applicable in terms of the increase in the number of transistors in a chip, but it can still be applied for the growing number of sensors in an automobile.

In terms of features in vehicles, the biggest beneficiary of these adaptations is advanced driver assistance systems (ADAS). With radar, lidar, infrared, ultrasonic, camera and other sensors, the development in ADAS has reached a stage that could allow for the deployment of a “completely independent” self-driving vehicle in the near future. In fact, vehicles with self-driving features like self-acceleration, self-braking and self-steering are already on the road.

A simple Google search of “Autonomous Cars” would convince most people that self-driving cars are just around the corner. Yet, even with the many leaps taken by the industry to adapt to the latest technologies, we are still lacking in terms of infrastructure. Self-driving cars require extensive ground support in terms of vehicle-to-infrastructure (V2I), vehicle-to-vehicle (V2V) and vehicle-to-surroundings (V2X). Some of the immediate requirements for self-driving cars to become a reality are discussed below.

Every country, many states, and sometimes even municipalities have special road signs. A self-driving car needs to be capable of identifying, reading, decoding and processing each of those road signs. This problem can be approached in two ways – either the industry and governments need to develop "coherent road signs," or a central database of all possible road signs must be created and stored in the vehicle electronics so the car can react appropriately to specific signs.

One can argue that a car navigation system should provide the information regarding road signs, but this technology is limited by its periodic updates and also requires a constant connection to the Internet or GPS. This would restrict a self-driving car to properly mapped geographical locations and, in the true sense, would not constitute a self-driving feature.

Another immediate requirement for a self-driving car is the capability to communicate with other vehicles on the road, but not through existing telecommunication technologies like 4G, Wi-Fi or satellite. Dependence on these technologies would limit self-driving cars to areas where that infrastructure is available, while also requiring a very high quality of service to guarantee real-time communication.

The use of existing telecommunications technology in cars would subject them to network congestion and make them susceptible to cyber-attacks, compromising the safety of passengers in a self-driving vehicle. V2V communication needs to take place directly and without any dependence on external telecommunication infrastructure. The best possible case is to develop a real-time negotiating network protocol for the communication between cars, with an additional dedicated layer of security.
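
What might such a message look like? The sketch below is a deliberately simplified stand-in: real V2V stacks use certificate-based signatures rather than a shared key, and every name here is hypothetical. It illustrates only the principle of a directly exchanged, authenticated position broadcast.

```python
# Toy authenticated V2V broadcast. Real systems use per-vehicle certificates
# and standardized message sets; the shared-key HMAC here is a stand-in.

import hashlib
import hmac
import json
import time

SHARED_KEY = b"demo-only-key"   # placeholder; not how real V2V keys work


def make_message(vehicle_id, lat, lon, speed):
    body = json.dumps({"id": vehicle_id, "lat": lat, "lon": lon,
                       "speed": speed, "ts": time.time()}).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return json.dumps({"body": body.decode(), "tag": tag}).encode()


def verify(raw):
    """Return the message body if its authentication tag checks out."""
    msg = json.loads(raw)
    expected = hmac.new(SHARED_KEY, msg["body"].encode(),
                        hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, msg["tag"]):
        return json.loads(msg["body"])
    return None


wire = make_message("car-42", 49.0069, 8.4037, 13.9)
print(verify(wire))   # authenticated position/speed broadcast
```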

Lastly, and perhaps most importantly, self-driving cars need to behave like a human driver. This means the cars not only require a camera to analyze the surroundings, but also need a sophisticated array of microphones to listen to the surroundings or user commands. The vehicle should be able to process this audible information and integrate it within its existing information processing architecture to make intelligent and independent decisions. Like humans, a self-driving car needs to have “eyes and ears.”

Semi-autonomous cars are already a reality, and by the year 2020 they will be very visible on roads. However, self-driving cars may need more time to come to fruition on public highways. The industry needs to develop more sophisticated sensors, machine learning algorithms to process the data from these sensors, and innovative sensor fusion. If these needs are addressed, we may see the first prototype of a self-driving car in the true sense by 2025.

* Moore’s Law is a computing term which originated in 1965. Gordon Moore, co-founder of Intel, stated that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented. Moore predicted that this trend would continue for the foreseeable future. Although the pace has slowed in subsequent years, most experts, including Moore himself, expect Moore's Law to hold for at least another two decades. Source: www.mooreslaw.org



As a software expert, Anshul is involved in the development of SmartCore™ (shown above) and is currently working on the development of audio features and signal processing modules. He is also focused on self-driving car technologies and the effect of the Internet of Things on automobiles. Anshul is based in Karlsruhe, Germany.

July 14, 2016

Changing Your View of What’s Cool in Displays

By Doug Pfau, Visteon Displays Technical Sales Manager


Where would you look for the coolest innovations in display screens? Your laptop or fitness bracelet? Probably not. Your new smartphone? Maybe. What about your car’s instrument panel? You may be surprised to find that the latest screens on your dashboard are among the most exciting and versatile displays anywhere.

Advancements in OLED (organic light-emitting diode) technology, high-resolution imaging and clever peripherals that integrate filters, touch controls and haptic feedback are revolutionizing in-vehicle information display screens, not only for drivers but for passengers up front and in the back seat as well.

Perhaps the cleverest of these developments is a dual view display—which ultimately can become a triple view display. It allows the driver, the front seat passenger and a passenger in the center of the second row to each see a different image or video, all on the same screen. This multi-view display appears on a screen with a horizontal resolution of 2,880 pixels, which is more than twice that of HD. Dual view provides a horizontal resolution of 1,440 pixels for each viewer, or 960 pixels for each of three viewers. With a high-tech louver system in front of the display, smart electronics send every second pixel (or every third pixel for triple view) to the driver and passenger. The driver may see only a navigation screen while the front passenger watches a movie and the rear passenger (for example, in a taxi) sees a video advertisement. Images and/or video from multiple sources are fed into the electronics, which unweave and reweave the images to display them via the appropriate set of pixels.
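
Conceptually, the “unweave and reweave” step is column interleaving. A minimal numpy sketch of the dual-view case (illustrative only; the production system does this in the display electronics behind the louver):

```python
# Interleave two source images column by column: behind the louver, the
# driver sees the even pixel columns and the passenger the odd ones.

import numpy as np

nav = np.full((1080, 1440, 3), [0, 128, 0], dtype=np.uint8)    # driver: nav
movie = np.full((1080, 1440, 3), [0, 0, 128], dtype=np.uint8)  # passenger

panel = np.empty((1080, 2880, 3), dtype=np.uint8)
panel[:, 0::2] = nav     # every second column goes to the driver's view
panel[:, 1::2] = movie   # the interleaved columns go to the passenger

print(panel.shape)       # (1080, 2880, 3): full panel, 1440 columns per view
```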

Another bright idea is the dual OLED display, which uses two 7-inch OLED screens – a main display and another that slides out when needed. One screen could be dedicated to the navigation system while the other shows Apple CarPlay controls. An important benefit of having two screens is the ability to display controls that otherwise would be buried in menus. With dual display, these virtual buttons are easy to find and reach.

A further cutting-edge technology is the curved lens. A conventional thin-film transistor (TFT) display – the type commonly used in flat-screen TVs, computers and mobile phones – is optically bonded to a curved lens, creating a combination that pops the image to the surface of the lens. It’s coupled with a fast-response touch capability for optimal user satisfaction.

Still one more illustration is the multilayer display cluster concept that replaces conventional dashboard gauges with 3-D virtual images. Two transparent TFT displays placed about 10 millimeters apart, along with polarizers and advanced graphics, cause the 3-D gauges to look real from any angle.

Visteon's information displays incorporate the consumer appeal of sleek design, craftsmanship and touch capability to deliver high-performing displays for the demanding automotive environment. This video features advanced concepts for a dual-view display, curved lens, dual OLED and multi-layer display.


All of these technologies benefit from automatic luminance control, a system that combines input from an exterior light sensor pointing out the front windshield with two sensors on the display that read interior light levels, automatically adjusting the display’s intensity so it is easily visible.

In addition to transforming instrument-panel styling, these bright, smart displays can make driving safer. A dim display, or one with low resolution, requires drivers to spend more time focusing their eyes on the screen instead of on the road. Dual displays show all controls, rather than burying them in menus, providing drivers with needed information faster, and allowing them to return their attention to the road sooner.

Many of these innovations are currently available. Dual view display, for example, can be seen on the Land Rover Evoque. The once formidable cost of OLED displays is declining as production ramps up for OLED smartphone screens. Similarly, high-resolution TFT screens are becoming much more affordable, with the average screen size expected to grow from 7 inches today to 9 inches within two years. Some displays are even being produced in a free-form configuration that can be integrated into center consoles and other units.

So next time you’re looking for what’s hot and what’s cool, turn to your vehicle’s electronics — it’s the future on wheels.

Doug Pfau is a technical sales manager at Visteon with 26 years of experience. He is responsible for developing customer-specific technology demonstration properties and quoting new designs. Doug leads teams in the area of displays, remote inputs devices and audio head units. During his automotive career, he has worked in product engineering, mechanical CAD, project management, advanced development and technical sales at Visteon and Ford Motor Company.

June 8, 2016


Visteon Discovers Untapped Energy Source in STEM Students
By Thomas Nagi, Product Development Manager

As Visteon has sought to encourage college students to major in science, technology, engineering and math—or STEM—something unexpected has happened. We’ve encountered a refreshing energy, dedication and professionalism among high school students in STEM programs, an eagerness that is redefining traditional business mentoring plans. It’s a discovery that has motivated Visteon to help nurture these brilliant students in new ways over their entire higher-education experience.

We first recognized the real value of STEM mentoring in 2014 while conducting a tour for high schoolers from the Plymouth-Canton Community Schools Educational Park in Michigan. The tour gave us a chance to talk with juniors and seniors from the three high schools in the district. Listening to them discuss their mentoring programs with other businesses, it became clear that mentoring often consisted only of shadowing company employees. One student told me he sat next to an engineer and watched him draw a schematic.

We felt we could and should do much more for these students – a concern that evolved into the Visteon STEM Mentorship Program that we launched with STEM program students from Plymouth-Canton. Employee mentors from Visteon met twice a week with high school juniors who came to our labs and offices on their own time to interact directly with their mentors. Mentors also became involved with a number of seniors, supporting them as they designed and implemented an original vehicle technology solution for their STEM senior project.

Our plan was to help the juniors learn about everything we do as a global supplier of vehicle cockpit electronics – as well as how we do it and the products we make. We exposed them to the whole technology ecosystem, including sales and marketing. They gained professional interviewing experience and participated in market studies. They joined in product demonstrations and real vehicle testing. We even brought them together with engineers from Visteon facilities in Mexico, France and India to learn more about the challenges, opportunities and importance of globalization.

Working closely with these students, we found that many lacked practical career guidance and were unsure of the direction they should pursue in college. Several felt that if they made a decision on a university and major as high school juniors, they would be compelled to stay with those choices. Our mentors, however, acquainted them with engineers performing job functions far different from what their degrees would suggest.

To our pleasant surprise, we saw phenomenal growth in these students. Their involvement and dedication were astonishing, and we witnessed an unbelievable maturing in the ways they changed intellectually and socially, how they interacted with other professionals, and how they carried themselves. Talking with Visteon engineers around the world was a real eye-opener for them. They had never been exposed to a global perspective on their career plans.

The STEM students we mentored gained a lot, and so did Visteon. The mentors were amazed at the enthusiasm and abilities of the students. We all became galvanized by working with them – their energy and curiosity were contagious.

The first year of the program was so successful that Visteon continued it for 2015-2016. We received three times as many applicants as in our first year. After interviewing each of them, we were amazed that every single applicant was a “wow” candidate, so we expanded the program to accept them all. We assigned mentors to the group, and provided dedicated mentors to support this year’s senior teams on their class projects. Students have interests in the biomedical, chemical and aerospace fields, as well as software, electrical and mechanical engineering.

As important as the Visteon STEM Mentorship Program has been to students, it’s been even more valuable for Visteon. Senior students who participated last year are in universities now and still in contact with us, asking for job and college recommendations, seeking internships and building contacts for the future. We’re cultivating some of the best and brightest STEM students and we’re making efforts to recruit several seniors (now college freshmen) as Visteon interns. Our intent is to work with them while they attend college, offering us six years of experience with them.

The program also helps our community by keeping these young and talented engineers in the Detroit area. We’re educating them to realize that the industry and area offer a lot of opportunities. Businesses across the auto industry should be adopting this hands-on, career-focused type of mentorship program and encouraging more students to get involved in STEM.

As we’ve discovered, encouraging young people to pursue careers in STEM areas can elevate a company’s recruiting efforts, its reputation and ultimately its innovation.

At the end of the school year, STEM students apply what they’ve learned and present their final innovation project to their mentors and company leaders.

STEM students address the issue of drowsy driving with Project Z – which uses a variety of techniques to alert drivers.

STEM students present Project Rudolph – a system that reads outgoing SMS signals from a smartphone and initiates vehicle hazard warnings for other drivers.

Students apply STEM principles to create Project DAVE (Driver Enhancement Vehicle Awareness) to tackle the issue of drowsy driving.


Tom Nagi has been with Visteon for more than 15 years and currently is a product development manager in systems engineering and software validation. Prior to this role, he held positions in product and platform development at Visteon, and in engineering management with other automotive companies. Tom received a bachelor’s degree in electrical engineering from the University of Michigan-Dearborn.



June 6, 2016


China’s Contrasts Drive an Intriguing Beijing Motor Show
By Upton Bowden, Advanced Technology Planning Manager

China is a land of great contrasts. It’s the world’s largest agricultural producer but also the world’s largest manufacturing economy. Its population demonstrates remarkable prosperity, alongside those who are struggling economically. A similar range of contrasts became apparent at the recent Beijing Motor Show, a showcase for Chinese domestic manufacturers and global auto companies alike.

China has a heritage of innovation extending back thousands of years, but today its auto industry is not yet as advanced in technology as OEMs in North America or Europe. At its highest level, automotive driving technology in China is still a few years behind the Western world, yet traffic and infrastructure complexity rival global mega cities.

The Beijing show featured many types of screens for instrument clusters, center-stack controls and head-up displays (HUDs). Generally, however, these were flat rectangular screens with 2-D displays, far removed from the curved, lens-adorned, ultra-high-resolution 3-D displays of the high-end vehicles exhibited by U.S. and European automakers. For the Chinese OEMs, lenses, styling and graphics are much less important than large screen sizes. Additionally, low cost is a primary driver for China’s OEMs and consumers.

At the show, many of the vehicle displays were not powered, so the styling aspects of infotainment functions could not be appreciated. Visteon’s own exhibit, which featured displays with exceptional resolution and graphics, was powered and drew considerable attention from visitors. Customers showed particular interest in our combiner HUDs, a lower-cost solution that displays information on an acrylic mechanized combiner lens and avoids the complex optics required for projecting images beyond the windshield.

Chinese manufacturers were curious about when vehicle-to-everything (V2X) communications technology would be on the road. China has massive traffic and congestion and is trying to figure out how to make its infrastructure more efficient; technology like V2X will help. As it has demonstrated with other automotive technology, China prefers to follow standards developed in other regions before launching programs to address connected cars.

Visitors to the show saw both high-end and entry-level vehicles from global manufacturers, designed to appeal to China’s contrasting mass marketplace and wealthy luxury buyers who often ride in chauffeured vehicles. The latter segment buys lots of large and luxury vehicles. Often there are customizations for the driver and for the luxury passenger in the rear seat.

One unique segment at the show, stemming from this ultra-high-end market, was the prolific display of personal, military defense-grade vehicles available commercially. An armored truck with bulletproof glass and re-inflating tires was typical of this category.

Another unique segment was the ultra-conversion van – the homey antithesis of the threatening military-style monsters. The vans, sold as mini-RVs or massive limos, were equipped with big cushy couches, TV sets and wet bars, and are being built by a number of domestic and global suppliers.

The China automotive market for hybrid electric and conventional cars is still growing by double digits as the number of joint ventures between domestic and global OEMs swells. It presents an extremely complex picture, one ultimately focused on both low cost and interest in technology. The Chinese market is so large, however, that it offers room for everyone, with a consumer base that fills the gamut of offerings from domestic, North American and European automakers.


Upton Bowden is an advanced technology planning manager at Visteon with 25 years of experience. At Visteon, Upton is responsible for identifying innovative concepts and developing compelling automotive applications. Upton leads consumer research clinics to evaluate advanced concepts and study user acceptance. During his automotive career, he has worked in manufacturing, product design, program management, marketing and technical sales at Visteon and Ford Motor Company.

May 2, 2016


10 Ways Young Engineers Can Help the Auto Industry
By Husein Dakroub
Lead Engineer – Infotainment & Connectivity

I was destined to be an engineer even before I knew what one was. Growing up in Dearborn, Michigan, I loved to help people and redesign things like remote-control race cars, robots and computers. My family instilled in me a strong sense of both my American identity and my Lebanese heritage, and I viewed engineering as one of the few effective ways I could positively impact my community and nations around the world.

When I was ready to take my innate engineering skills into the workplace, I gained insight by word-of-mouth from local engineers and friends. I interviewed at a number of automotive companies that could offer a strong mentorship program so that I could quickly learn the technical and manufacturing aspects of products.

I’m 25 now, with a master’s in computer engineering from the University of Michigan-Dearborn and a rewarding job with Visteon. I’ve seen firsthand that opportunities for young engineers are expanding with the emergence of electric vehicles, connected and autonomous cars and advanced driver assistance systems.

At Visteon, I’ve been fortunate to be involved in developing a number of electronics products, like the company’s first production consolidated-infotainment solution and LTE/VoLTE-enabled telematics solutions. The impact that I and other young engineers can have on the automotive industry, however, extends beyond any individual accomplishments. We can be an important force in helping automotive companies achieve their goals of bringing advanced and secure consumer technologies into vehicles.


Visteon's OpenAir and SmartCore products are among the technologies that Husein has helped create.





If you’re a young engineer, you can take steps to accelerate your career while leaving your mark on the auto industry. These actions can help you progress and succeed:


  1. Connect with people in your industry to learn and understand your role in the company and how you can influence the larger industry.
  2. Embrace challenges by taking on additional responsibilities, relocating internationally and working with off-shore teams.
  3. Bridge the gap among generations in the auto industry by applying your understanding of what millennials and the post-millennial Founder Generation want and need in their vehicles.
  4. Learn from your elders and appreciate the wisdom that has come with their experience.
  5. Interact with various cultures whenever you have the opportunity. I’ve worked with colleagues from Canada, China, Germany, India, Japan and other nations, learning and making connections with people outside my own culture.
  6. Always question what you’re working on. Put yourself in the position of the consumer in determining how you really want a feature within the automobile to work.
  7. Fill gaps by trying to understand where weaknesses appear in a project or the company, and do what you can to close those gaps.
  8. Bring solutions, rather than criticism. Often, young engineers are more tech-savvy than professionals from earlier generations, so we inherently understand the value of consumer electronics in every aspect of our lives, including our vehicles, and can apply this to the workplace.
  9. Continue to be a risk-taker – an essential trait for driving innovation and growth.
  10. Above all, try new things and remain humble in accepting failures. Bring this attitude to your career and explore the breadth of opportunities that the auto industry offers, especially here in the beautiful and affordable settings of the Great Lakes. It will be a rewarding endeavor.

Husein Dakroub has worked at Visteon since 2012 and currently is a technical lead engineer. He has been involved in the design and development of automotive infotainment and telematics systems, architecting next-generation platforms, and delivering production systems for the connected vehicle. Husein has published three papers through the Society of Automotive Engineers and has two patent-pending applications. He received a B.S.E. in electrical engineering and an M.S.E. in computer engineering from the University of Michigan-Dearborn.