March 24, 2017

Artificial Intelligence Emerges from Data Rooms to Help Drive Autonomous Cars


By Vijay Nadkarni

A century ago, many large businesses ran their operations with rooms full of skilled clerks rapidly entering figures into comptometers, a type of mechanical calculator considered very efficient for its time. After many decades, the comptometer proved too limiting for a rapidly advancing marketplace and was replaced by teams of data entry clerks feeding powerful mainframes. Technology continued to accelerate to the point where the calculating and data entry power of an entire corps of workers could be handled by a single laptop. Today, thanks to innovative programmers and billions of lines of code, pocket-sized communications devices have become brilliant machines running millions of apps.

The challenge of autonomous vehicles, however, will require yet another level of technology. Conventional programming and computational approaches to problem-solving will be far outpaced by the speed and complexity that automated driving demands.

The sensor technologies currently getting the lion’s share of attention – high-speed cameras, LIDAR and ultrasonic sensors – cannot, through conventional programming alone, cover every potential driving scenario while staying up-to-the-minute with traffic conditions, weather, construction zones and other driving issues. There is an approach, however, that will allow cars and trucks to learn and respond quickly and accurately to their constantly changing surroundings: artificial intelligence (AI).

Artificial intelligence allows the vehicle to analyze in real time the massive amounts of data – gigabytes per second – streaming from its cameras, LIDAR and other sensors, so that it can avoid objects and plan its path. Applying AI in an optimal manner involves using neural networks for object classification and reinforcement learning for path planning.
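
To make that division of labor concrete, here is a minimal sketch of such a perception-and-planning loop. Every type and function name below is a hypothetical illustration for this article, not Visteon’s actual software:

    // Conceptual sketch of a perception-and-planning loop.
    // All names here are hypothetical illustrations.
    interface SensorFrame {
      cameraImage: Uint8Array;    // raw pixels from a high-speed camera
      lidarPoints: Float32Array;  // 3-D point cloud from LIDAR
    }

    interface DetectedObject {
      label: string;               // e.g., "pedestrian", "vehicle"
      confidence: number;          // classifier score in [0, 1]
      position: [number, number];  // meters, relative to the car
    }

    // A trained neural network classifies the objects in each frame.
    declare function classifyObjects(frame: SensorFrame): DetectedObject[];

    // A planner - for example, one trained with reinforcement learning -
    // chooses steering and speed commands given the detected objects.
    declare function planPath(objects: DetectedObject[]): { steer: number; speed: number };

    // Low-level vehicle controls (hypothetical).
    declare function applyControls(cmd: { steer: number; speed: number }): void;

    function onSensorFrame(frame: SensorFrame): void {
      const objects = classifyObjects(frame);  // perception: what is around us?
      const command = planPath(objects);       // planning: where do we go next?
      applyControls(command);
    }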

Consumers already are bringing AI into their vehicles via their smartphones. Voice-based search engines and in-car navigation depend on a level of AI from off-board servers, and more infotainment systems are integrating connected features from outside servers that use AI in the background.

To manage an autonomous vehicle, engineers will need to move AI from its typical home – rooms full of servers reached over the Internet – into a self-contained system in the vehicle that, for the most part, does not depend on outside data connections. They also will have to solve the challenge of AI’s huge demand for computing power, which generates heat that must be dissipated and contributes to higher fuel consumption.

Another question that needs to be addressed is which type of microprocessor will prove most efficient. Should it be a central processing unit (CPU), a graphics processing unit (GPU) or an application-specific integrated circuit (ASIC)? Each has benefits and drawbacks in terms of power, performance and cost. The issue is complicated by the need to choose between a centralized architecture, in which all information is received and processed by a single powerful processor, and a decentralized system with several smaller processors. Both have value, depending on how the vehicle manufacturer wishes to establish the architecture.

To meet these challenges, Visteon is developing a scalable autonomous driving solution applying AI – specifically neural networks and machine learning. This approach can support either centralized or decentralized processing and can greatly improve the accuracy of detecting and classifying objects in a vehicle’s path. It holds much promise for moving autonomous driving from a few real-world examples among a roomful of innovators to an everyday reality on our roads and highways.

Vijay Nadkarni is the global head of artificial intelligence and deep learning technology for Visteon's product lines, including autonomous driving/ADAS and infotainment. He is based in Santa Clara, California, where he has management oversight of Visteon's Silicon Valley technology center. Vijay is a hands-on technology veteran whose current focus is machine learning, cloud computing and mobile apps. Prior to joining Visteon, he founded Chalkzen, which developed a novel cloud platform for vehicular safety.


December 19, 2016

“Real 3-D” Instrument Cluster Wins Consumer Applause in Screen Tests
By Judy Blessing, Market & Trends Research, Visteon


How many times have you taken your seat in a movie theater, ready to watch a long-anticipated 3-D feature, only to discover that the so-called 3-D images don’t leap out to grab you? It turns out that not all 3-D is the same, and automotive engineers working with consumers on 3-D instrument clusters are finding that the same principle applies. Consumers increasingly prefer realistic 3-D effects, not fake-looking 3-D images.

Those preferences registered clearly in a recent advanced product research clinic that Visteon piloted in Germany. Two separate studies were conducted. In the first, consumers compared different fully reconfigurable clusters, with and without 3-D effects, to learn more about their preferences for 2-D versus 3-D displays. In the second study, Visteon researchers compared different types of 3-D technologies to collect feedback. In all instances, very similar graphics were displayed on the screens, adapted to take advantage of each technology’s capabilities.

Participants in the second study compared three advanced 3-D instrument clusters, viewed from the same distance and angle, in random order. The first demonstration offered high-performance 3-D graphics rendered on a 12.3-inch display with a resolution of 2880 x 1080 pixels. That resolution exceeds the current state of the art, but the content is displayed on just a single layer.


The image above is one of the three advanced 3-D instrument clusters tested in the clinic.


The second technology shown was a multilayer cluster, which stacks two 12.3-inch displays one behind the other with a gap of 8 mm between them. However, each layer delivers only 1440 x 540 pixels, and content on the back layer can appear slightly blurry because of the very thin wires in the transparent front layer. This second approach nonetheless enables drivers to perceive depth where appropriate.


The image above represents the multilayer display principle (based on Witehira 2005)


The third technology examined was Visteon’s second-generation multilayer cluster, named Prism. This system consists of two reconfigurable displays, one vertical and one horizontal, separated by a flat semi-transparent mirror. The mirror reflects the image of the horizontal TFT (thin-film transistor) display, creating a virtual image that overlays the vertical TFT without any blur. This arrangement gives designers flexibility over the placement of the virtual image, which can appear in the same plane as the vertical TFT, behind it or in front of it, as the geometry note below explains.


The image above represents the package overview of the Prism concept
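
The design flexibility described above follows from basic plane-mirror geometry – a general optics principle, not a Visteon-specific specification: the virtual image forms as far behind the mirror as the source sits in front of it.

    % Plane-mirror imaging: the virtual image of the horizontal display
    % forms as far behind the mirror as that display sits in front of it.
    d_{\mathrm{image}} = d_{\mathrm{source}}

Adjusting the spacing between the horizontal display and the mirror therefore places the virtual image in front of, on or behind the plane of the vertical TFT.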


Clinic Results
The second-generation multilayer Prism cluster was found to combine the best of both worlds: “real 3-D” effects with clear graphics and quality.

  • Of the clinic participants, 100 percent gave Prism a good or very good rating on quality, and 93 percent agreed or strongly agreed that the second-generation multilayer instrument cluster has a high premium feel.
  • Nearly nine of 10 respondents rated it innovative or very innovative, with the original multilayer cluster close behind.
  • Legibility (being easy to read) was highest for the 3-D rendered instrument cluster, which was rated legible or very legible by 75 percent of the participants.
The 3-D effects were rated higher on the instrument clusters with two layers, each of which showed “real depth” by bringing the most important information to the front. The first instrument cluster was considered to have no real depth and was seen as “fake” 3-D, though its high resolution was appreciated.

In the end, participants fell into two groups. One preferred unobtrusive, high-resolution rendering and remained a bit reluctant to embrace 3-D fully. The other group saw the advantages of multilayer instrument clusters, which raise awareness by bringing information from the back layer to the front. Both executions are considered good ways to represent 3-D, and the drawbacks of the initial multilayer clusters have largely been overcome by the second-generation technology, which combines high resolution and blur-free graphics with the preferred 3-D effects.

For additional details on the clinic and the results, see Visteon’s white paper, “Consumer insights on innovative 3-D visualization technologies.”

Judy Blessing brings 18 years of research experience to her manager position in market and trends research. Her in-depth knowledge of all research methodologies allows her to apply the proper testing and analysis to showcase Visteon's automotive expertise to external customers and industry affiliates. She holds a German university diploma in marketing/market research from the Fachhochschule Pforzheim, Germany.


December 5, 2016


Touchless Vehicle Apps Know What You Want, When You Want It
By Sivakumar Yeddanapudi

Today’s cars and trucks are smart, but they’re not smartphones. On our phones, we can potentially choose from millions of apps just by touching an icon. If we tried to do all the things we want to do in our cars using apps, we’d be frustrated, because we can’t safely select them while driving. Automakers have been limited to the native applications and interfaces built into cars – like Bluetooth and USB ports – relying on a passenger to use a phone to find the nearest gas station or restaurant if the vehicle didn’t have built-in navigation.

A new developer-friendly application platform from Visteon – called Phoenix – solves this problem and propels smart vehicle infotainment systems to the head of the class. This web-based infotainment platform “stitches” together apps native to the car with apps from third parties. Application integration is performed via recipes that contextually enable the appropriate apps at the ideal time – without the driver needing to touch anything.
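
To illustrate the idea, a recipe might pair a contextual trigger with the apps it stitches together. The format below is a hypothetical sketch, not Phoenix’s actual schema:

    // Hypothetical sketch of a recipe; not Phoenix's actual schema.
    // A recipe pairs a contextual trigger with the apps to activate.
    const morningBriefing = {
      trigger: "driver-enters-vehicle",          // contextual event
      apps: ["calendar", "weather", "traffic"],  // apps to stitch together
      output: "spoken-summary"                   // read the results aloud
    };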

Case in point: As a driver enters the vehicle, a customized startup feature automatically presents available content from three different apps – an audible message lists the day’s meetings, the weather forecast and traffic conditions for the anticipated route.

The driver does not need to use a phone, speak voice commands or input commands to a touch screen; all required information is automatically displayed on one screen.

Similarly, today two separate apps and two steps are required to open a garage door remotely and to show the vehicle’s position in relation to the garage entrance via GPS. Phoenix stitches these apps together so that the garage door automatically opens as the vehicle approaches.
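
A sketch of how that stitching might look behind the scenes appears below; every API and coordinate here is invented for the example, since Phoenix’s internal interfaces are not public:

    // Hypothetical illustration of stitching a GPS app and a
    // garage-door app; all APIs and values are invented examples.
    const HOME = { lat: 42.33, lon: -83.05 };  // example garage location

    declare const garageDoor: { open(): void };

    function distanceMeters(a: { lat: number; lon: number },
                            b: { lat: number; lon: number }): number {
      // Rough equirectangular approximation, adequate at short range.
      const dLat = (a.lat - b.lat) * 111320;
      const dLon = (a.lon - b.lon) * 111320 * Math.cos(b.lat * Math.PI / 180);
      return Math.hypot(dLat, dLon);
    }

    function onPositionUpdate(pos: { lat: number; lon: number }): void {
      if (distanceMeters(pos, HOME) < 50) {
        garageDoor.open();  // opens automatically as the car approaches
      }
    }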



Phoenix is easy to use because it complies with open standards from organizations such as W3C and GENIVI, and it is designed with app developers in mind. The platform lets developers build applications using HTML5 along with rich JavaScript-based application programming interfaces (APIs), which eliminates the need to rewrite applications when porting to other infotainment systems.
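
In practice, such an app can be an ordinary HTML5 page whose logic calls JavaScript APIs for vehicle data. The interface below is a hypothetical stand-in, since Phoenix’s actual APIs are not published here:

    // Hypothetical sketch: the "vehicle" API is invented to illustrate
    // the HTML5/JavaScript model, not Phoenix's real interface.
    declare const vehicle: {
      speed: { subscribe(callback: (kmh: number) => void): void };
    };

    // Update an element in the app's HTML5 user interface whenever
    // a new speed reading arrives.
    vehicle.speed.subscribe((kmh) => {
      const readout = document.getElementById("speed-readout");
      if (readout) readout.textContent = `${Math.round(kmh)} km/h`;
    });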

Furthermore, Visteon offers a software development kit (SDK) with libraries of code, documents and a simulator. The Phoenix SDK makes development easier than conventional, often disjointed methods, which require custom software or hardware, lack third-party tools and thus increase cost and time. With Phoenix, the developer creates and tests an app with the SDK and simulator; the app is then validated by the automaker or Visteon and published to an app store. Phoenix is the first platform for vehicle apps to incorporate HTML5 and an SDK.

The Phoenix platform also advances the capability to update in-vehicle apps over the air, whether on a dealership lot or in an owner’s driveway. For the first time, automakers can securely update just one portion of an app, using Visteon’s proprietary block-and-file technology, rather than needing to upgrade the entire system.
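
The block-and-file technology itself is proprietary, but the general idea behind partial updates can be sketched generically: compare the old and new app images block by block and transfer only the blocks that changed. The code below illustrates that generic approach, not Visteon’s implementation:

    // Generic sketch of a block-level delta: transfer only changed
    // blocks. Illustrates the general idea, not Visteon's own method.
    function changedBlocks(oldImage: Uint8Array, newImage: Uint8Array,
                           blockSize = 4096): number[] {
      const changed: number[] = [];
      const blocks = Math.ceil(newImage.length / blockSize);
      for (let i = 0; i < blocks; i++) {
        const a = oldImage.subarray(i * blockSize, (i + 1) * blockSize);
        const b = newImage.subarray(i * blockSize, (i + 1) * blockSize);
        // Any byte difference marks the whole block for transfer.
        if (a.length !== b.length || a.some((v, j) => v !== b[j])) {
          changed.push(i);
        }
      }
      return changed;
    }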

By 2020, when vehicle-to-vehicle (V2V) communication will be more common, vehicles will have the capability to display infotainment on screens from 12 to 17 inches, compared with today’s 7- to 8-inch screens. Phoenix will enable developers to create content optimized for these larger screens, making them more useful for drivers and improving the driving experience.

Sivakumar Yeddanapudi is a platform leader managing the Phoenix program. He also develops infotainment platforms that incorporate the latest technologies, such as an advanced HTML5 HMI framework, web browser, cybersecurity, over-the-air reflash and vision processing for cockpit electronics.

He has more than 15 years of automotive experience and has served as a software developer, a technical professional for audio and infotainment software, and now a platform leader, based at Visteon’s headquarters in the U.S.