December 19, 2016

“Real 3-D” Instrument Cluster Wins Consumer Applause in Screen Tests
By Judy Blessing, Market & Trends Research, Visteon


How many times have you taken your seat in a movie theater, ready to watch a long-anticipated 3-D feature, only to discover that the so-called 3-D images don’t leap out to grab you? It turns out that all 3-D is not the same, and automotive engineers working with consumers on 3-D instrument clusters are finding that the same principle applies. Consumers increasingly prefer realistic 3-D effects, not fake-looking 3-D images.

Those preferences registered clearly in a recent advanced product research clinic in Germany piloted by Visteon. Two separate studies were conducted. In the first, consumers compared different fully reconfigurable clusters, with and without 3-D effects, to learn more about their preferences using 2-D and 3-D displays. In the second study, Visteon researchers compared different types of 3-D technologies to collect feedback. In all instances, very similar graphics were displayed on the screens, but they were adapted to take advantage of the capabilities of the various technologies.

Participants in the second study compared three advanced 3-D instrument clusters, viewed from the same distance and angle, in random order. The first demonstration offered high-performance 3-D graphics rendered on a 12.3-inch display with a resolution of 2880 x 1080 pixels. While this resolution exceeds the current state of the art, the content is displayed on just a single layer.


The image above is one of the three advanced 3-D instrument clusters tested in the clinic.


The second technology shown was a multilayer cluster, which uses two 12.3-inch displays stacked one behind the other with a gap of 8 mm between them. However, each layer delivers only 1440 x 540 pixels, and content on the back layer can appear slightly blurry because of the very thin wires in the transparent front layer. This second approach nevertheless enables drivers to perceive depth where appropriate.


The image above represents the multilayer display principle (based on Witehira 2005)


The third technology examined was a Visteon second-generation multilayer cluster, named Prism. This system consists of two reconfigurable displays, one vertical and one horizontal, separated by a flat semi-transparent mirror. The mirror reflects the image of a horizontal TFT (thin-film transistor) display, creating a virtual image that overlays the vertical TFT without creating a blur. This arrangement allows design flexibility of the virtual image so it can appear in the same plane as the vertical TFT, behind it or in front of it.


The image above represents the package overview of the Prism concept
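The depth placement follows from simple plane-mirror optics. As an illustrative note (our sketch, not part of Visteon’s description), a flat mirror forms a virtual image as far behind its surface as the object sits in front of it:

```latex
% Plane-mirror imaging: virtual image distance equals object distance.
d_i = d_o
% With d_h the horizontal TFT's distance to the mirror and d_v the
% vertical TFT's distance behind it, the layers' perceived offset is
\Delta z = d_h - d_v
% \Delta z < 0: virtual image in front of the vertical TFT;
% \Delta z = 0: same plane;  \Delta z > 0: behind it.
```

Moving the horizontal display relative to the mirror therefore shifts where the virtual layer appears.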


Clinic Results
The second-generation multilayer Prism cluster was found to combine the best of both worlds: “real 3-D” effects with clear graphics and quality.

  • Of the clinic participants, 100 percent gave Prism a good or very good rating on quality, and 93 percent agreed or strongly agreed that the second-generation multilayer instrument cluster has a high premium feel.
  • Nearly nine of 10 respondents rated it innovative or very innovative, with the original multilayer cluster close behind.
  • Reliability (ease of reading) was highest for the 3-D rendered instrument cluster, which 75 percent of participants rated reliable or very reliable.

The 3-D effects were rated higher on the two-layer instrument clusters, each of which showed “real depth” by bringing the most important information to the front. The first instrument cluster was considered to have no real depth and was seen as “fake” 3-D, though its high resolution was appreciated.

In the end, participants fell into two groups. One preferred the unobtrusive, high-resolution rendering and remained a bit reluctant to embrace 3-D fully. The other saw the advantage of multilayer instrument clusters in bringing information from the back to the front to raise awareness. Both executions are considered good ways to represent 3-D, and the drawbacks of the initial multilayer clusters have largely been overcome by the second-generation technology, which combines high resolution and freedom from blur with the preferred 3-D effects.

For additional details on the clinic and the results, see Visteon’s white paper, “Consumer insights on innovative 3-D visualization technologies.”

Judy Blessing brings 18 years of research experience to her manager position in market and trends research. Her in-depth knowledge of all research methodologies allows her to apply the proper testing and analysis to showcase Visteon’s automotive intellect to external customers and industry affiliates. She holds a German University Diploma degree in marketing/market research from the Fachhochschule Pforzheim, Germany.


December 5, 2016


Touchless vehicle apps know what you want, when you want it
By Sivakumar Yeddanapudi

Today’s cars and trucks are smart, but they’re not smartphones. Potentially, we can choose from millions of apps on our phones, just by touching an icon. If we tried to do all the things we want to do in our cars using apps, we’d be frustrated, because we can’t safely select them while driving. Automakers have been limited to the native applications and connections built into cars – like Bluetooth and USB ports – relying on a passenger to use a phone to find the nearest gas station or restaurant if the vehicle didn’t have built-in navigation.

A new developer-friendly application platform from Visteon – called Phoenix – solves this problem and propels smart vehicle infotainment systems to the head of the class. This web-based infotainment platform “stitches” together apps native to the car with apps from third parties. Application integration is performed via recipes that enable the appropriate apps at the ideal time contextually – without the driver needing to touch anything.

Case in point: As a driver enters the vehicle, a customized startup feature automatically presents available content from three different apps: an audible message lists the day’s meetings, the weather forecast and traffic conditions for the anticipated route.

The driver does not need to use a phone, speak voice commands or input commands to a touch screen; all required information is automatically displayed on one screen.

Similarly, today two separate apps and two steps are required to open a garage door remotely and to show the vehicle’s position in relation to the garage entrance via GPS. Phoenix stitches these apps together so that the garage door automatically opens as the vehicle approaches.
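Visteon has not published the Phoenix recipe interfaces here, so purely as an illustration of the stitching idea, a context-triggered recipe might look like the following Python sketch; every name and threshold in it is hypothetical:

```python
import math

# Hypothetical sketch of a context-triggered "recipe" stitching a GPS
# position feed to a garage-door app. Not the actual Phoenix API.

GARAGE_LAT, GARAGE_LON = 42.3121, -83.2207   # example home coordinates
TRIGGER_RADIUS_M = 50.0                      # open the door within 50 m

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance via an equirectangular projection."""
    dx = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    dy = math.radians(lat2 - lat1)
    return 6371000.0 * math.hypot(dx, dy)

def on_position_update(lat, lon, garage_door):
    """Recipe body: runs on each GPS update, with no driver input needed."""
    if distance_m(lat, lon, GARAGE_LAT, GARAGE_LON) <= TRIGGER_RADIUS_M:
        garage_door.open()                   # hypothetical app action

class FakeGarageDoor:
    def open(self):
        print("garage door opening")

# Simulated position updates as the vehicle approaches home
door = FakeGarageDoor()
for lat, lon in [(42.3150, -83.2210), (42.3124, -83.2208)]:
    on_position_update(lat, lon, door)
```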



Phoenix is easy to use since it complies with open standards from bodies such as W3C and GENIVI and is designed with app developers in mind. The platform lets developers build applications using HTML5 along with rich JavaScript-based application programming interfaces (APIs), eliminating the need to rewrite applications when porting them to other infotainment systems.

Furthermore, Visteon offers a software development kit (SDK) with libraries of code, documents and a simulator. The Phoenix SDK makes development easier than conventional, often disjointed methods that require custom software or hardware and lack third-party tools – increasing cost and time. With Phoenix, the developer creates and tests the app with the SDK and simulator; the app is then validated by the automaker or Visteon and published to an app store. Phoenix is the first platform for vehicle apps to incorporate HTML5 and an SDK.

The Phoenix platform also advances the capability to update in-vehicle apps over the air, whether at a dealership lot or in the driveways of individual owners. For the first time, automakers can securely update just one portion of an app, using Visteon’s proprietary block-and-file technology, rather than needing to upgrade the entire system.

By 2020, when vehicle-to-vehicle (V2V) communication will be more common, vehicles will have the capability to display infotainment on screens from 12 to 17 inches in size, compared with today’s 7- to 8-inch screens. Phoenix will enable developers to create content that optimizes these larger screens, making them more useful for drivers and improving the driving experience.

Sivakumar Yeddanapudi is a platform leader managing the Phoenix program. He also develops infotainment platforms that incorporate the latest technologies such as an advanced HTML5 HMI framework, web browser, cybersecurity, over-the-air reflash and vision processing for cockpit electronics.

He has more than 15 years of automotive experience and served as software developer, technical professional for audio and infotainment software, and now as platform leader, located at Visteon’s headquarters in the U.S.


November 3, 2016


Machine Learning Algorithms in Autonomous Cars

Machine learning algorithms are now used extensively to find solutions to different challenges ranging from financial market predictions to self-driving cars. With the integration of sensor data processing in a centralized electronic control unit (ECU) in a car, it is imperative to increase the use of machine learning to perform new tasks. Potential applications include driving scenario classification or driver condition evaluation via data fusion from different internal and external sensors – such as cameras, radars, lidar or the Internet of Things.

Anshul Saxena, software expert at Visteon's technical center in Karlsruhe, Germany, provides a technical review of the use of machine learning algorithms in autonomous cars, and investigates the reusability of an algorithm for multiple features.

The applications running a car’s infotainment system can receive information from sensor data fusion systems and have, for example, the ability to direct the vehicle to a hospital if they sense that something is wrong with the driver. A machine learning-based application of this kind can also incorporate gesture recognition, speech recognition and language translation. Machine learning algorithms can be broadly classified as supervised or unsupervised; the difference between the two is how they learn.

Supervised algorithms learn using a training dataset, and keep learning until they reach the desired level of confidence (minimization of probability error). They can be sub-classified into classification, regression, and dimension reduction or anomaly detection algorithms.

Unsupervised algorithms try to make sense of the available data: an algorithm develops relationships within the data set to identify patterns, or divides the data set into subgroups based on their level of similarity. Unsupervised algorithms can be largely sub-classified into clustering and association rule learning.

There is now another set of machine learning algorithms, called reinforcement algorithms, which fall somewhere between supervised and unsupervised learning. In supervised learning, there is a target label for each training example; in unsupervised learning, there are no labels at all; and reinforcement learning has sparse and time-delayed labels – the future rewards.


Based only on those rewards, the agent has to learn to behave in its environment. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand their merits and limitations. Reinforcement learning is of great interest because of the large number of practical applications it can potentially address, ranging from problems in artificial intelligence to operations research and control engineering – all relevant to developing a self-driving car. It can be further classified into direct learning and indirect learning.
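To make the reward-driven learning loop concrete, here is a minimal tabular Q-learning sketch on a toy five-state corridor (our illustrative example, not code from the article): the only reward arrives at the goal, so the value of earlier actions must be learned from that delayed signal.

```python
import random

# Minimal tabular Q-learning on a five-state corridor: the agent starts at
# state 0 and receives its only reward (+1) on reaching state 4.
N_STATES, ACTIONS = 5, (-1, +1)          # actions: step left or right
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(300):
    s = 0
    for _ in range(100):                 # cap episode length
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0  # sparse, delayed
        # Q-learning update: bootstrap on the best action from the next state
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next
        if s == N_STATES - 1:
            break

# The learned greedy policy should point right (+1) in every non-goal state.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```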

One of the main tasks of any machine learning algorithm in the self-driving car is continuous rendering of the surrounding environment and the prediction of possible changes to those surroundings. These tasks are mainly divided into four sub-tasks:
  • Object detection
  • Object identification or recognition
  • Object classification
  • Object localization and prediction of movement

Machine learning algorithms can be loosely divided into four categories: regression algorithms, pattern recognition, cluster algorithms and decision matrix algorithms. One category of machine learning algorithms can be used to execute two or more different sub-tasks. For example, regression algorithms can be used for object detection as well as for object localization or prediction of movement.


Regression Algorithms
This type of algorithm is good at predicting events. Regression analysis estimates the relationship between two or more variables and compares the effects of variables measured on different scales. It is mostly driven by three metrics:
  • The number of independent variables
  • The type of dependent variables
  • The shape of the regression line

In ADAS, images (radar or camera) play a very important role in localization and actuation, while the biggest challenge for any algorithm is to develop an image-based model for prediction and feature selection.

Regression algorithms leverage the repeatability of the environment to create a statistical model of the relation between an image and the position of a given object in that image. The statistical model can be learned offline and provides fast online detection by allowing image sampling. Furthermore, it can be extended to other objects without requiring extensive human modeling. As the output of the online stage, the algorithm returns an object position and a confidence score for the presence of the object.

These algorithms can also be used for long learning, short prediction: the model is learned offline over a long period and then delivers fast online predictions. The types of regression algorithms that can be used for self-driving cars include Bayesian regression, neural network regression and decision forest regression, among others.
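As an illustrative sketch of the offline-learning, online-prediction split (the synthetic data and feature names here are our own, not a production ADAS pipeline), a decision forest regressor can learn a mapping from simple image features to an object’s horizontal position:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Illustrative sketch: learn a mapping from crude image features to an
# object's horizontal position, with synthetic data in place of real frames.
rng = np.random.default_rng(0)
true_x = rng.uniform(0, 640, size=500)            # object centre, pixels
features = np.column_stack([
    true_x + rng.normal(0, 5, 500),               # noisy edge centroid
    rng.uniform(0, 1, 500),                       # uninformative distractor
])

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(features[:400], true_x[:400])           # "long" offline learning

pred = model.predict(features[400:])              # fast online prediction
print("mean abs error (px):", np.abs(pred - true_x[400:]).mean())
```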

Pattern Recognition Algorithms (Classification)
In ADAS, the images obtained through sensors contain all types of environmental data; filtering of the images is required to recognize instances of an object category by ruling out the irrelevant data points. Pattern recognition algorithms are good at ruling out these irrelevant points. Recognition of patterns in a data set is an important step before classifying objects, so these types of algorithms can also be described as data reduction algorithms.

These algorithms help in reducing the data set by detecting object edges and fitting line segments (polylines) and circular arcs to the edges. Line segments are aligned to edges up to a corner, then a new line segment is started. Circular arcs are fit to sequences of line segments that approximate an arc. The image features (line segments and circular arcs) are combined in various ways to form the features that are used for recognizing an object.
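A minimal sketch of this data-reduction step, using scikit-image’s Canny edge detector and probabilistic Hough transform on a synthetic image (arc fitting omitted for brevity):

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import probabilistic_hough_line

# Sketch of the data-reduction step: detect edges, then fit line segments
# to them, leaving far fewer primitives than raw pixels.
img = np.zeros((100, 100), dtype=float)
img[40:, :] = 1.0                      # synthetic step edge across the image

edges = canny(img, sigma=1.0)          # boolean edge map
segments = probabilistic_hough_line(edges, threshold=5, line_length=30)
print("fitted segments:", segments[:3])
```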

Support vector machines (SVM) with histograms of oriented gradients (HOG) and principal component analysis (PCA) are the most common recognition algorithms used in ADAS. The Bayes decision rule and k-nearest neighbor (KNN) are also used.
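As a runnable sketch of the HOG-plus-SVM combination (scikit-learn’s small digits dataset stands in here for real road imagery):

```python
import numpy as np
from skimage.feature import hog
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Minimal HOG + linear SVM sketch; the 8x8 digits dataset stands in for
# camera crops of road objects.
digits = load_digits()
feats = np.array([
    hog(img, pixels_per_cell=(4, 4), cells_per_block=(1, 1))
    for img in digits.images
])
X_tr, X_te, y_tr, y_te = train_test_split(
    feats, digits.target, random_state=0)

clf = LinearSVC(dual=False).fit(X_tr, y_tr)   # train on HOG descriptors
print("test accuracy:", clf.score(X_te, y_te))
```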

Clustering
Sometimes the images obtained by the system are not clear, and it is difficult to detect and locate objects. It is also possible that the classification algorithms miss an object and fail to classify and report it to the system, perhaps because of low-resolution images, very few data points or discontinuous data. This type of algorithm is good at discovering structure from data points. Like regression, it describes a class of problem and a class of methods. Clustering methods are typically organized by modeling approach, such as centroid-based and hierarchical. All methods are concerned with using the inherent structures in the data to best organize it into groups of maximum commonality. The most commonly used algorithms are K-means and multi-class neural networks.
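For example (a toy sketch, not an ADAS pipeline), K-means can group sparse 2-D detection points into object candidates even when per-point classification fails:

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy sketch: group sparse 2-D detection points (e.g., radar returns)
# into object candidates when per-point classification is unreliable.
rng = np.random.default_rng(1)
points = np.vstack([
    rng.normal(loc=(10, 2), scale=0.5, size=(20, 2)),   # object A returns
    rng.normal(loc=(25, -3), scale=0.5, size=(20, 2)),  # object B returns
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print("cluster centres:\n", km.cluster_centers_)
```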

Decision Matrix Algorithms
This type of algorithm is good at systematically identifying, analyzing and rating the performance of relationships between sets of values and information. These algorithms are mainly used for decision making: whether a car needs to take a left turn or needs to brake depends on the level of confidence the algorithms have in their classification, recognition and prediction of the next movement of objects. Decision matrix algorithms are models composed of multiple decision models that are independently trained and whose predictions are combined in some way to make the overall prediction, reducing the possibility of errors in decision making. The most commonly used algorithms are gradient boosting machines (GBM) and AdaBoost.
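A minimal sketch of such an ensemble on synthetic data: AdaBoost trains many weak decision stumps and combines their votes into the overall prediction.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Minimal sketch: AdaBoost combines many weak learners (decision stumps by
# default) into one ensemble whose combined vote is the overall prediction.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ens = AdaBoostClassifier(n_estimators=100).fit(X_tr, y_tr)
print("test accuracy:", ens.score(X_te, y_te))
```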

As a software expert, Anshul is involved in the development of SmartCore™ and autonomous driving domain controller platforms. He is focused on self-driving car technologies and the effect of the Internet of Things on the auto industry. Anshul is based in Karlsruhe, Germany.