The Connected Car, Big Data and the Automotive Industry’s Future

With the growing Internet of Things (IoT), the vast majority of our devices will be connected to the internet.

When people think of IoT, watches, phones and other small devices often spring to mind. But increasingly, manufacturers are dreaming big. Large “gadgets” like our automobiles are being connected to the web and updated with new technology. Automobile manufacturers are now embedding WiFi into their vehicles, offering Apple CarPlay integration, GPS navigation, email and much more. In fact, market research estimates that by 2020, connected car services will account for nearly $40 billion in annual revenue.

The Technology of the Connected Car

Cars already contain advanced technology, with hundreds of sensors and numerous onboard computers and processors. With non-connected cars, however, most of the information is generated or stored locally. For example, you can download GPS-based maps but they might not be up to date. Or you can download MP3s, but you won’t always have access to the latest and greatest hits.

With connected cars, everything will be up to date. Of course, the biggest benefits of the internet aren’t entertainment based. With connected cars, automobile manufacturers will one day remotely update software systems, and monitor engine performance and powertrain performance.

Big Data Use Cases for the Connected Car

Did you forget to change your oil? Is an oxygen sensor malfunctioning? Your automobile manufacturer will be able to tell you immediately and remotely. Need to respond to an important email, check your bank balance or pay some bills? With just a few more technological advances, you’ll be able to do so from the comfort of your own car.

As data is being gathered, companies will discover even more about consumer behaviour. For example, are there links between the type of music people listen to, and which drive-through restaurants, gas stations or other places they prefer to visit? This could have a huge impact when companies decide which radio stations to spend their advertising budgets on. By gathering big data through connected cars, it may be possible to uncover a wide range of correlations.
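To make this concrete, a toy analysis along these lines might look like the following Python sketch. The trip records, genres and venue types are entirely invented for illustration; a real pipeline would run over aggregated, anonymized data at much larger scale.

```python
from collections import Counter

# Hypothetical trip records: (music genre playing, venue type visited).
# Invented data; real analysis would use aggregated connected-car telemetry.
trips = [
    ("country", "drive-through"), ("country", "drive-through"),
    ("classical", "gas-station"), ("country", "drive-through"),
    ("classical", "coffee-shop"), ("pop", "drive-through"),
    ("classical", "gas-station"), ("pop", "coffee-shop"),
]

def venue_preferences(trips):
    """For each genre, return the most frequently visited venue type."""
    by_genre = {}
    for genre, venue in trips:
        by_genre.setdefault(genre, Counter())[venue] += 1
    return {genre: counts.most_common(1)[0][0] for genre, counts in by_genre.items()}

prefs = venue_preferences(trips)
print(prefs["country"])  # drive-through
```

Even this trivial co-occurrence count hints at the advertising question in the text: if country listeners favour drive-throughs, that correlation informs where to spend radio advertising budgets.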

Either way, connected cars promise big innovation. The connected car industry is projected to enjoy huge growth in the years ahead. In 2014, McKinsey estimated the global market for connectivity components and services was valued at about $38 billion. By 2020, McKinsey estimates the industry will grow to $215 billion.

Big Data and the Connected Car Fleet

For businesses, one of the biggest benefits of connected cars will be fleet management. Many companies have to manage hundreds or even thousands of company automobiles. By utilizing connected cars and big data, companies will maintain better control and oversight of their fleets. For example, companies could use an array of sensors in their connected cars to analyze aggregated data. Are certain drivers speeding or using improper braking techniques? Are drivers taking routes that are under construction or choked with traffic while faster routes are available? By monitoring sensors and analyzing big data, companies will discover this information and more.
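As a hedged illustration of the kind of rule-based screening a fleet operator might start with, the sketch below flags speeding and harsh-braking events in telemetry records. The field names and thresholds are assumptions, not any real fleet API.

```python
# Illustrative thresholds; a real system would use road-specific limits.
SPEED_LIMIT_KMH = 100
HARSH_BRAKE_MS2 = -6.0  # decelerations stronger than this count as "harsh"

telemetry = [
    {"driver": "A", "speed_kmh": 92,  "accel_ms2": -1.2},
    {"driver": "A", "speed_kmh": 118, "accel_ms2": -0.5},
    {"driver": "B", "speed_kmh": 97,  "accel_ms2": -7.4},
    {"driver": "B", "speed_kmh": 88,  "accel_ms2": -2.0},
]

def flag_events(records):
    """Return (driver, event-type) pairs for risky behaviour."""
    flags = []
    for r in records:
        if r["speed_kmh"] > SPEED_LIMIT_KMH:
            flags.append((r["driver"], "speeding"))
        if r["accel_ms2"] < HARSH_BRAKE_MS2:
            flags.append((r["driver"], "harsh-braking"))
    return flags

print(flag_events(telemetry))  # [('A', 'speeding'), ('B', 'harsh-braking')]
```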

Furthermore, in the future companies will easily monitor cars across systems to keep them performing at their peak. Companies will also be able to ensure their cars aren’t being misused or abused by drivers, which will lower upkeep costs over time.

Using Big Data to Understand Roads and Infrastructure

The massively interconnected roadways in the United States and other countries generate vast amounts of data. Construction, accidents, stoplights — the amount of data from roadways is immense. As more cars become connected, the amount of gathered data will only increase. Using the vast amount of data, city planners and city engineers will better plan out roadways and traffic flows. Navigation systems will also become more accurate in uncovering the fastest routes. Even more importantly, early warning systems can be designed to warn people of hazard spots in the road, such as sharp turns, construction or hidden driveways. This data, in turn, will help civil engineers design more efficient and safer roadways.

How Big Data Analytics Can Lead to Custom Insurance

In the future, insurance companies can use connected cars to monitor driver performance and safety. This will certainly raise ethical and privacy issues and some drivers might not like the idea of “big brother” insurance companies monitoring them through their sensors. Still, safe drivers who consistently obey the rules of the road, say by following posted speed limits, could be offered a discount if they prove their safe driving habits with data from connected cars.

After accidents and other incidents occur, insurance companies may be able to use data from connected cars to figure out what happened. This could reduce false claims and help both insurance companies and legal authorities figure out who is truly at fault. With all of this data, insurance companies may eventually be able to provide individualized driving insurance. As a result, this could increase incentives for drivers to drive safely. In turn, more drivers practising safe habits could actually make roadways safer for everyone.

The Future of Connected Cars Is Arriving Soon — And Big Data Will Get Us There

We’ve discussed quite a number of hypotheticals in this article. However, connected cars are already a reality. The automotive industry, including manufacturers such as GM, Chrysler and Audi, already offers wifi integration. Meanwhile, Ford’s SYNC technology allows drivers to use their smartphones to turn cars into wifi hubs. Other companies, such as Mobley and Audiovox, also offer gadgets that can turn cars into wifi hotspots.

Furthermore, as adoption rates for connected cars increase, more gadgets, wifi services, apps and other innovations will be created. As a result, even more data will be generated and new opportunities will emerge. The possibilities out there are endless and people are imaginative. In 10 years, who knows what our cars will look like and what they might do?

– by Datameer

How big data is transforming the automotive industry

From self-driving cars to vehicles connected to the Internet of Things: Big data is transforming the way we drive forever.

The rapidly expanding Internet of Things (IoT) is seeing more and more devices connected to the internet. Traditionally, these have been biometric wearables, home appliances and audio-visual equipment. Automobile manufacturers, however, are making a play to corner this market for their own ends.

Embedding Wi-Fi in automobiles opens an entirely new avenue of pursuit, one in which vehicles communicate directly with the internet for GPS navigation, email and music streaming, for example.

By 2020, connected car market reports estimate that connected car services will account for approximately $40 billion annually. These services include infotainment, navigation, fleet management, remote diagnostics, automatic collision notification, enhanced safety, usage-based insurance, traffic management and, lastly, autonomous driving. The root of these applications is big data: as increasing amounts of data are collected from remote sensors, this information is being interpreted and leveraged to transform the automotive industry into one of automation and self-sufficiency.

Big data and the connected car

It isn’t actually a big stretch to incorporate big data into the automotive industry, as most modern cars already contain advanced technology with numerous sensors, onboard computing tools and processors. The difference is that most of this information is generated and stored locally. With connected cars, the connection to the internet will ensure all applications and information are up to date and shared to the correct platforms.

The end game is likely that automobile manufacturers will be able to update software remotely, allowing them to monitor and respond to engine performance. For example, if the vehicle is due for an oil change or running low on radiator fluid, the manufacturer will be able to inform the driver remotely. Personal errands will also get easier as you will be able to respond to emails, perform internet banking and pay bills on the way home from work.
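A minimal sketch of such a remote-diagnostics check might look like this. The sensor fields and service limits are invented for illustration; a manufacturer back end would evaluate far more signals.

```python
# Hypothetical vehicle state, as reported remotely to the manufacturer.
def maintenance_alerts(vehicle_state):
    """Return a list of human-readable alerts based on simple rules."""
    alerts = []
    if vehicle_state["km_since_oil_change"] > 10_000:  # assumed service interval
        alerts.append("oil change due")
    if vehicle_state["coolant_level_pct"] < 20:        # assumed low-fluid limit
        alerts.append("coolant low")
    return alerts

state = {"km_since_oil_change": 11_450, "coolant_level_pct": 64}
print(maintenance_alerts(state))  # ['oil change due']
```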

As big data is gathered from the multitude of sensors, inferences can be drawn regarding consumer behaviour, for instance, establishing whether there is a link between the music people listen to and the drive-through restaurants they frequent. These kinds of connections can impact advertising resource allocations and budgets, and thus the information gathered from connected cars is invaluable commercially.

With regard to fleet management, big data and connected cars will enable companies to manage vast numbers of vehicles through the analysis of aggregated data. Sensors will inform management of speed, braking techniques and route selection, and thus they can make informed decisions to relay to the drivers. Better yet, with the application of smart sensor algorithms, the car itself will be able to suggest appropriate responses to the actions measured.

Vehicle maintenance will become more preventative than reactive, as monitoring across all systems will elucidate problems before they result in a breakdown. All of this conspires to keep vehicles in peak performance shape, increase efficiency and lower costs.

A considerable amount of information arises from the interconnected motorways in the United States, and in other countries. By using this data, especially the information regarding construction, accidents and intersections, the connected car can navigate more effectively and engineers can design road flows according to real traffic patterns. The result is more efficient and safer roads.

Insurance companies are ready to pounce on this surfeit of big data. By using the information gleaned from smart sensors, the industry can benefit from compiling custom insurance plans and monitoring driver behaviour, performance and safety. Such schemes are already in place in some instances, with insurance companies offering discounts hinging on driving performance. Piecing together the events of an accident is more accurate and less subjective than testimony when accomplished through big data reconstruction. This will hopefully make customers drive more cautiously and ultimately make the roadways a safer place.
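One simple way such a usage-based scheme could map driving data to premiums is sketched below. The scoring formula, thresholds and discount levels are purely illustrative assumptions, not any insurer's actual model.

```python
def premium_multiplier(speeding_events, harsh_brakes, km_driven):
    """Scale a base premium by the incident rate per 1,000 km driven.

    Invented bands: under 1 incident/1,000 km earns a discount,
    over 5 incurs a surcharge; everything else is unchanged.
    """
    rate = (speeding_events + harsh_brakes) / max(km_driven / 1000, 1)
    if rate < 1:
        return 0.85   # safe-driver discount
    if rate < 5:
        return 1.0    # neutral
    return 1.25       # risk surcharge

# A driver with 2 speeding and 1 harsh-braking event over 12,000 km:
print(premium_multiplier(2, 1, 12_000))  # 0.85
```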

Big data and autonomous driving

There is undeniable potential for autonomous driving to keep our roads safer, as 90 percent of the death toll on our roads is due to human error. For these vehicles to become a reality, they need data. Big data, in fact. The vehicles are furnished with sensors measuring everything from position, speed, direction and braking to traffic signals, pedestrian proximity and hazards. Using this data, the vehicle is able to make decisions and carry out appropriate responses free of human error.

The groundwork has already been laid for the autonomous vehicle, with collision warnings and camera-controlled reversing applications, as well as steering governance, braking assistance and speed control, already available in some higher-end models.

By leveraging this real-time information, the way people drive is transforming. Beyond controlling the act of driving itself, early warnings concerning imminent mechanical issues can prevent failures and anticipate maintenance, saving both time and money. Additionally, cars will be automatically synced to environmental conditions and changing surroundings.

As more autonomous vehicles enter the scene, big data will only get bigger and consequently, the potential for autonomous technology will rise, resulting in a vastly more data-centric automotive industry.

This article is published as part of the IDG Contributor Network. Gary Eastwood has over 20 years’ experience as a science and technology journalist, editor and copywriter, writing on subjects such as mobile & UC, smart cities, ICT, the cloud, IoT, clean technology, nanotechnology, robotics & AI and science & innovation for a range of publications. Outside his life as a technology writer and analyst, Gary is an avid landscape photographer who has authored two photography books and ghost-written two others.

Rethinking car software and electronics architecture

As the car continues its transition from a hardware-driven machine to a software-driven electronics device, the auto industry’s competitive rules are being rewritten. The engine was the technology and engineering core of the 20th-century automobile. Today, software, large computing power, and advanced sensors increasingly step into that role; they enable most modern innovations, from efficiency to connectivity to autonomous driving to electrification and new mobility solutions.

However, as the importance of electronics and software has grown, so has complexity. Take the exploding number of software lines of code (SLOC) contained in modern cars as an example. In 2010, some vehicles had about ten million SLOC; by 2016, this expanded by a factor of 15, to roughly 150 million lines. Snowballing complexity is causing significant software-related quality issues, as evidenced by millions of recent vehicle recalls.

With cars positioned to offer increasing levels of autonomy, automotive players see the quality and security of vehicle software and electronics as key requirements to guarantee safety. And this is requiring the industry to rethink today’s approaches to vehicle software and electrical and electronic architecture.

Addressing an urgent industry concern

As the automotive industry is transitioning from hardware- to software-defined vehicles, the average software and electronics content per vehicle is rapidly increasing. Software represents 10 percent of overall vehicle content today for a D-segment, or large, car (approximately $1,220), and the average share of software is expected to grow at a compound annual rate of 11 percent, to reach 30 percent of overall vehicle content (around $5,200) in 2030. Not surprisingly, players across the digital automotive value chain are attempting to capitalize on innovations enabled through software and electronics (Exhibit 1). Software companies and other digital-technology players are leaving their current tier-two and tier-three positions to engage automakers as tier-one suppliers. They’re expanding their participation in the automotive technology “stack” by moving beyond features and apps into operating systems. At the same time, traditional tier-one electronic system players are boldly entering the tech giants’ original feature-and-app turf, and premium automakers are moving into areas further down the stack such as operating systems, hardware abstractions, and signal processing in order to protect the essence of their technical distinction and differentiation.

Software enables critical automotive innovations.

One consequence of these strategic moves is that the vehicle architecture will become a service-oriented architecture (SOA) based on generalized computing platforms. Developers will add new connectivity solutions, applications, artificial-intelligence elements, advanced analytics, and operating systems. The differentiation will not be in the traditional vehicle hardware anymore but in the user-interface and experience elements powered by software and advanced electronics.

Tomorrow’s cars will shift to a platform of new brand differentiators (Exhibit 2). These will likely include infotainment innovations, autonomous-driving capabilities, and intelligent safety features based on “fail-operational” behaviours (for example, a system capable of completing its key function even if part of it fails). The software will move further down the digital stack to integrate with hardware in the form of smart sensors. Stacks will become horizontally integrated and gain new layers that transition the architecture into an SOA.

Architecture will become service oriented, with new factors for differentiation.

Ultimately, the new software and electronic architecture will result from several game-changing trends that drive complexity and interdependencies. For example, new smart sensors and applications will create a “data explosion” in the vehicle that players need to handle by processing and analyzing the data efficiently if they hope to remain competitive. A modularized SOA and over-the-air (OTA) updates will become key requirements to maintain complex software in fleets and enable new function-on-demand business models. Infotainment, and, to a lesser degree, advanced driver-assistance systems (ADAS), will increasingly become “appified” as more third-party app developers provide vehicle content. Digital-security requirements will shift the focus from a pure access-control strategy to an integrated security concept designed to anticipate, avoid, detect, and defend against cyberattacks. The advent of highly automated driving (HAD) capabilities will require functionality convergence, superior computing power, and a high degree of integration.

Exploring ten hypotheses on future electrical or electronic architecture

The path forward for both the technology and the business model is far from fixed. But based on our extensive research and insights from experts, we developed ten hypotheses regarding tomorrow’s automotive electrical or electronic architecture and its implications for the industry.

There will be an increasing consolidation of electronic control units (ECUs)

Instead of a multitude of specific ECUs for specific functionalities (the current “add a feature, add a box” model), the industry will move to a consolidated vehicle ECU architecture.

In the first step, most functionality will be centred on consolidated domain controllers for the main vehicle domains that will partially replace functionality currently running in distributed ECUs. These developments are already underway and will hit the market in two to three years’ time. This consolidation is especially likely for stacks related to ADAS and HAD functionality, while more basic vehicle functions might keep a higher degree of decentralization.

In the evolution toward autonomous driving, virtualization of software functionality and abstraction from hardware will become even more imperative. This new approach could materialize in several forms. One scenario is a consolidation of hardware into stacks serving different requirements on latency and reliability, such as a high-performance stack supporting HAD and ADAS functionality and a separate, time-driven, low-latency stack for basic safety features. In another scenario, the ECU is replaced with one redundant “supercomputer,” while in a third, the control-unit concept is abandoned altogether in favour of a smart-node computing network.

The change is driven primarily by three factors: costs, new market entrants, and demand through HAD. Decreasing costs, both for the development of features as well as the required computing hardware, including communication hardware, will accelerate the consolidation. So too will new market entrants into automotive that will likely disrupt the industry through a software-oriented approach to vehicle architecture. Increasing demand for HAD features and redundancy will also require a higher degree of consolidation of ECUs.

Several premium automakers and their suppliers are already active in ECU consolidation, making early moves to upgrade their electronic architecture, although no clear industry archetype has emerged at this point.

The industry will limit the number of stacks used with specific hardware

Accompanying the consolidation will be a normalization around a limited number of stacks, enabling the separation of vehicle functions from ECU hardware through increased virtualization. Hardware and embedded firmware (including the operating system) will depend on key vehicle functional requirements instead of being allocated to part of a vehicle functional domain. To allow for separation and a service-oriented architecture, the following four stacks could become the basis for upcoming generations of cars in five to ten years:

  • Time-driven stack. In this domain, the controller is directly connected to a sensor or actuator while the systems have to support hard real-time requirements and low latency times; resource scheduling is time-based. This stack includes systems that reach the highest Automotive Safety Integrity Level classes, such as the classical Automotive Open System Architecture (AUTOSAR) domain.
  • Event- and time-driven stack. This hybrid stack combines high-performance safety applications, for example, by supporting ADAS and HAD capability. Applications and peripherals are separated by the operating system, while applications are scheduled on a time base. Inside an application, scheduling of resources can be based on time or priority. The operating environment ensures that safety-critical applications run on isolated containers with clear separation from other applications within the car. A current example is adaptive AUTOSAR.
  • Event-driven stack. This stack centres on the infotainment system, which is not safety critical. The applications are clearly separated from the peripherals, and resources are scheduled using best-effort or event-based scheduling. The stack contains visible and highly used functions that allow the user to interact with the vehicle, such as Android, Automotive Grade Linux, GENIVI, and QNX.
  • Cloud-based (off-board) stack. The final stack covers and coordinates access to car data and functions from outside the car. The stack is responsible for communication, as well as safety and security checks of applications (authentication), and it establishes a defined car interface, including remote diagnostics.
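The four-way split above can be caricatured as routing each vehicle function by its safety and latency requirements. In the sketch below, the function names, latency figures and thresholds are illustrative assumptions; real stack allocation would follow safety-integrity classification and detailed timing analysis.

```python
def assign_stack(fn):
    """Route a function to one of the four stacks described in the text.

    The 10 ms hard real-time cutoff is an invented threshold.
    """
    if fn["off_board"]:
        return "cloud-based"
    if fn["safety_critical"] and fn["max_latency_ms"] <= 10:
        return "time-driven"
    if fn["safety_critical"]:
        return "event-and-time-driven"
    return "event-driven"

functions = [
    {"name": "brake-control", "safety_critical": True,  "max_latency_ms": 1,    "off_board": False},
    {"name": "lane-keeping",  "safety_critical": True,  "max_latency_ms": 50,   "off_board": False},
    {"name": "media-player",  "safety_critical": False, "max_latency_ms": 200,  "off_board": False},
    {"name": "remote-diag",   "safety_critical": False, "max_latency_ms": 1000, "off_board": True},
]

print({f["name"]: assign_stack(f) for f in functions})
```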

Automotive suppliers and technology players have already begun to specialize in some of these stacks. Notable examples are in infotainment (the event-driven stack), where companies are developing communications capabilities such as 3-D and augmented navigation. A second example is artificial intelligence and sensing for high-performance applications, where suppliers are joining with key automakers to develop computing platforms.

In the time-driven domain, AUTOSAR and JASPAR are supporting the standardization of these stacks.

An expanded middleware layer will abstract applications from hardware

As vehicles continue to evolve into mobile computing platforms, middleware will make it possible to reconfigure cars and enable the installation and upgrade of their software. Unlike today, where middleware within each ECU facilitates communication across units, in the next vehicle generation it will link the domain controller to access functions. Operating on top of ECU hardware in the car, the middleware layer will enable abstraction and virtualization, an SOA, and distributed computing.
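A toy publish/subscribe layer illustrates the decoupling role such middleware plays: producers and consumers of vehicle signals interact only through named topics, never directly with each other's hardware. This is a minimal sketch, not a model of any actual automotive middleware.

```python
class Middleware:
    """Minimal topic-based message bus abstracting producers from consumers."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver to every subscriber; unknown topics are silently dropped.
        for cb in self._subscribers.get(topic, []):
            cb(message)

# A display "service" consumes speed readings without knowing their source.
bus = Middleware()
received = []
bus.subscribe("vehicle/speed", received.append)
bus.publish("vehicle/speed", 87)
print(received)  # [87]
```

The design point mirrors the text: because applications address topics rather than ECUs, the sensor or controller behind a topic can be swapped, virtualized or consolidated without touching the subscribers.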

Evidence already suggests automotive players are moving toward more flexible architectures, including an overarching middleware. AUTOSAR’s adaptive platform, for example, is a dynamic system that includes middleware, support for a complex operating system, and state-of-the-art multicore microprocessors. However, current developments appear restricted to a single ECU.

In the medium term, the number of onboard sensors will spike significantly

In the next two to three vehicle generations, automakers will install sensors with similar functionalities to ensure that sufficient safety-related redundancies exist (Exhibit 3). In the long term, however, the automotive industry will develop specific sensor solutions to reduce the number of sensors used and their costs. We believe that a combined solution of radar and camera might be dominant for the next five to eight years. As autonomous-driving capabilities continue to rise, the introduction of lidars will be necessary to ensure redundancy for both object analysis and localization. Configurations for SAE International L4 (high automation) autonomous driving, for example, will likely initially require four to five lidar sensors, including rear-mounted ones for city operation and near-360-degree visibility.

Sensor fusion will provide redundancy for autonomous functions.

In the long term, we see different possible scenarios concerning the number of sensors in vehicles: further increase, stable numbers, or decrease. Which scenario will come to pass depends on regulation, the technical maturity of solutions, and the ability to use multiple sensors for different use cases. Regulatory requirements might, for example, enforce closer driver monitoring, resulting in an increase of sensors inside the vehicle. It can be expected that more consumer-electronics sensors will be used in the automotive interior. Motion sensors and health monitoring of measures such as heart rate and drowsiness, as well as face recognition and iris tracking, are just a few of the potential use cases. However, as an increase or even a stable number of sensors would require a higher bill of materials, not only in the sensors themselves but also in the vehicle network, the incentive to reduce the number of sensors is high. With the arrival of highly automated or fully automated vehicles, future advanced algorithms and machine learning can enhance sensor performance and reliability. Combined with more powerful and capable sensor technologies, a decrease of redundant sensors can be expected. Sensors used today might become obsolete as their functions are overtaken by more capable sensors (for instance, a camera- or lidar-based parking assistant could replace ultrasound sensors).
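The redundancy argument can be illustrated with a standard inverse-variance fusion of two overlapping distance estimates, say from radar and camera. The numbers are made up; production sensor fusion uses far richer models such as Kalman filters.

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted average of two redundant estimates."""
    w_a = 1 / var_a
    w_b = 1 / var_b
    return (w_a * est_a + w_b * est_b) / (w_a + w_b)

# Radar reports 42.0 m with low variance; camera reports 45.0 m with
# higher variance. The fused estimate leans toward the more certain sensor.
fused = fuse(42.0, 0.5, 45.0, 2.0)
print(round(fused, 2))  # 42.6
```

Redundancy here buys more than accuracy: if either sensor fails or is degraded (fog for the camera, interference for the radar), the other still bounds the estimate, which is the availability argument the text makes for keeping overlapping sensors.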

Sensors will become more intelligent

System architectures will require intelligent and integrated sensors to manage the massive amounts of data needed for highly automated driving. While high-level functions such as sensor fusion and 3-D positioning will run on centralized computing platforms, preprocessing, filtering, and fast reaction cycles will most likely reside at the edge or be done directly in the sensor. One estimate puts the amount of data an autonomous car will generate every hour at four terabytes. Consequently, intelligence will move from ECUs into sensors to conduct basic preprocessing requiring low latency and low computing performance, especially when weighing the costs of data processing in the sensors against the costs of high-volume data transmission in the vehicle. Redundancy for driving decisions in HAD will nevertheless require a convergence toward centralized computing, likely based on preprocessed data. Intelligent sensors will supervise their own functionality, while redundancy of sensors will increase the reliability, availability, and hence safety of the sensor network. To ensure correct sensor operation in all conditions, a new class of sensor-cleaning applications—such as deicing capabilities and those for dust or mud removal—will be required.
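A sketch of what basic preprocessing in the sensor could mean in practice: smooth the raw readings and forward a value only when it changes significantly, reducing the volume sent over the vehicle network. Window size and threshold are illustrative assumptions.

```python
def preprocess(samples, window=3, threshold=0.5):
    """Moving-average smoothing plus change-based reporting.

    Only averages that differ from the last transmitted value by at
    least `threshold` are forwarded, cutting network traffic.
    """
    sent = []
    last_sent = None
    for i in range(len(samples) - window + 1):
        avg = sum(samples[i:i + window]) / window
        if last_sent is None or abs(avg - last_sent) >= threshold:
            sent.append(round(avg, 2))
            last_sent = avg
    return sent

# Seven raw readings collapse to four transmitted values.
raw = [10.0, 10.1, 9.9, 10.0, 12.0, 12.1, 12.0]
print(preprocess(raw))
```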

Full power and data-network redundancy will be necessary

Safety-critical and other key applications that require high reliability will utilize fully redundant circuits for everything that is vital to safe manoeuvring, such as data transmission and power supply. The introduction of electric-vehicle technologies, central computers, and power-hungry distributed computing networks will require new redundant power-management networks. Fail-operational systems to support steer-by-wire and other HAD functions will require redundant system designs, a significant architectural improvement on today’s fail-safe monitoring implementations.

The ‘automotive Ethernet’ will rise and become the backbone of the car

Today’s vehicle networks are insufficient for the requirements of future vehicles. Increased data rates and redundancy requirements for HAD, safety and security in connected environments, and the need for interindustry standardized protocols will most likely result in the emergence of the automotive Ethernet as a key enabler, especially for the redundant central data bus. Ethernet solutions will be required to ensure reliable interdomain communication and satisfy real-time requirements by adding Ethernet extensions like audio-video bridging (AVB) and time-sensitive networks (TSN). Industry players and the OPEN Alliance support the adoption of Ethernet technology, and many automakers have already made this leap.

Traditional networks such as local interconnect networks (LIN) and controller area networks (CAN) will continue to be used in the vehicle, but only for closed lower-level networks, for instance, in the sensor and actuator area. Technologies such as FlexRay and MOST are likely to be replaced by automotive Ethernet and its extensions, AVB and TSN.

Going forward, we expect the automotive industry to also embrace future Ethernet technologies such as high-delay bandwidth products (HDBP) and 10-Gigabit technologies.
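A back-of-envelope calculation shows why multi-gigabit links are on the table. Taking the often-cited estimate of roughly four terabytes of generated data per hour (treated here as decimal terabytes, and assuming, unrealistically, that all of it crossed a single backbone link):

```python
# ~4 TB of data generated per hour (often-cited estimate for an
# autonomous car). Sustained bandwidth if all of it used one link:
terabytes_per_hour = 4
bits_per_hour = terabytes_per_hour * 1e12 * 8   # decimal TB -> bits
gbit_per_s = bits_per_hour / 3600 / 1e9

print(round(gbit_per_s, 1))  # 8.9
```

In practice most raw sensor data is preprocessed and discarded at the edge, so the backbone carries far less; but even a modest fraction of roughly 8.9 Gbit/s explains why 100 Mbit/s automotive Ethernet is insufficient as a central bus and why 10-Gigabit technologies are being discussed.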

OEMs will always tightly control data connectivity for functional safety and HAD but will open interfaces for third parties to access data

Central connectivity gateways transmitting and receiving safety-critical data will connect directly and exclusively to an OEM back end, which will be made available to third parties for data access only where regulation obliges it. In infotainment, however, driven by the “appification” of the vehicle, emerging open interfaces will allow content and app providers to deploy content, while OEMs will keep the respective standards as tight as possible.

Today’s onboard diagnostics port will be replaced with connected telematic solutions. Physical maintenance access to the vehicle network will not be required anymore but can go through the OEMs’ back ends. OEMs will provide data ports in their vehicle back end for specific use cases such as lost-vehicle tracking or individualized insurance. Aftermarket devices, however, will have less and less access to vehicle internal data networks.

Large fleet operators will play a stronger role in the user experience and will create value for end customers, for example, by offering different vehicles for different purposes under one subscription (such as weekend or daily commute). This will require them to utilize the different OEMs’ backends and start consolidating data across their fleets. Larger databases will then allow fleet operators to monetize consolidated data and analytics not available on the OEM level.

Cars will use the cloud to combine onboard information with offboard data

Nonsensitive data (that is, data that are not personal or safety-related) will increasingly be processed in the cloud to derive additional insights, though availability to players beyond OEMs will depend on future regulation and negotiations. As the volumes of data grow, data analytics will become critically important for processing the information and turning it into actionable insights. The effectiveness of using data in such a way to enable autonomous driving and other digital innovations will depend on data sharing among multiple players. It’s still unclear how this will be done and by whom, but major traditional suppliers and technology players are already building integrated automotive platforms capable of handling this new plethora of data.

Cars will feature updateable components that communicate bidirectionally

Onboard test systems will allow cars to check function and integration updates automatically, thus enabling life-cycle management and the enhancement or unlocking of aftersales features. All ECUs will send and receive data to and from sensors and actuators, retrieving data sets to support innovative use cases such as route calculation based on vehicle parameters.

OTA update capabilities are a prerequisite for HAD; they also will enable new features, ensure cybersecurity, and enable automakers to deploy features and software quicker. In fact, it’s the OTA update capability that is the driver behind many of the significant changes in vehicle architecture described previously. In addition, this capability also requires an end-to-end security solution across all layers of the stack outside the vehicle to the ECUs in the vehicle. This security solution remains to be designed, and it will be interesting to see how and by whom this will be done.
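One small piece of such an end-to-end security solution is verifying update integrity before installation. The sketch below uses a symmetric HMAC purely for illustration; real OTA systems rely on asymmetric signatures, certificate chains and secure boot, and the key shown here is obviously not how keys would be handled.

```python
import hashlib
import hmac

# Illustrative shared key; production systems use asymmetric cryptography
# with keys provisioned in hardware security modules.
SHARED_KEY = b"demo-key-not-for-production"

def sign(image: bytes) -> str:
    """Compute an HMAC-SHA256 tag over an update image."""
    return hmac.new(SHARED_KEY, image, hashlib.sha256).hexdigest()

def verify(image: bytes, signature: str) -> bool:
    """Constant-time check that the received image matches its tag."""
    return hmac.compare_digest(sign(image), signature)

update = b"\x00\x01firmware-v2.4"
sig = sign(update)
print(verify(update, sig))         # True: image accepted
print(verify(update + b"x", sig))  # False: tampered image rejected
```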

To achieve smartphone-like upgradability, the industry needs to overcome restrictive dealer contracts, regulatory requirements, and security and privacy concerns. Here too, a variety of automotive players have announced plans to deploy over-the-air update offerings for their vehicles.

Assessing the future implications of vehicle software and electronic architecture

While the trends affecting the automotive industry today are generating major hardware-related uncertainties, the future looks no less disruptive for software and electronic architecture. Many strategic moves are possible: automakers could create industry consortia to standardize vehicle architecture, digital giants could introduce onboard cloud platforms, mobility players could produce their own vehicles or develop open-source vehicle stacks and software functions, and automakers could introduce increasingly sophisticated connected and autonomous cars.

The transition from hardware-centric products to a software-oriented, service-driven world is especially challenging for traditional automotive companies. Yet, given the described trends and changes, there is no choice for anyone in the industry but to prepare. We see several major strategic pushes:

  • Decouple vehicle and vehicle-functions development cycles. OEMs and tier-one suppliers need to identify how to develop, offer, and deploy features largely apart from vehicle-development cycles, both from a technical and organizational perspective. Given current vehicle-development cycles, companies need to find a way to manage innovations in software. Further, they should think about options to create retrofitting and upgrade solutions (for example, computing units) for existing fleets.
  • Define the target value-add for software and electronics development. OEMs must identify the differentiating features for which they are able to establish control points. In addition, it is crucial to clearly define the target value-add for their own software and electronics development and to identify areas that are becoming a commodity or topics that can only be delivered with a supplier or partner.
  • Attach a clear price tag to software. Separating software from hardware requires OEMs to rethink their internal processes and mechanisms for buying software independently. In addition to the traditional setup, it is also important to analyze how an agile approach to software development can be anchored in procurement processes. Here suppliers (tier one, tier two, and tier three) also play a crucial role as they need to attach a clear business value to their software and system offerings to enable them to capture a larger revenue share.
  • Design a specific organizational setup around new electronics architecture (including related backends). Next to changing internal processes in order to deliver and sell advanced electronics and software, automotive players—both OEMs and suppliers—should also consider a different organizational setup for vehicle-related electronics topics. Mainly, the new “layered” architecture asks for potentially breaking up the current “vertical” setup and introducing new “horizontal” organizational units. Further, they need to ramp up dedicated capabilities and skills for their own software and electronics development teams.
  • Design a business model around automotive features as a product (especially for automotive suppliers). To remain competitive and capture a fair share of value in the field of automotive electronics, it is crucial to analyze which features add real value to the future architecture and therefore can be monetized. Subsequently, players need to derive new business models for the sale of software and electronics systems, be it as a product, a service, or something completely new.

As the new era of automotive software and electronics begins, it’s drastically changing a wide variety of prior industry certainties about business models, customer needs, and the nature of competition. We are optimistic about the revenue and profit pools that will be created. But to benefit from the shifts, all players in the industry need to rethink and carefully position (or reposition) their value propositions in the new environment.

– Ondrej Burkacky, Johannes Deichmann, Georg Doll, and Christian Knochenhauer. This article was developed in collaboration with the Global Semiconductor Alliance.

Telemetry: A Racing-Car Perspective

Telemetry relies on a host of electronic devices, including the ECU (Engine Control Unit, or as some people call it, Electronic Control Unit), that transmit specific data, such as measurements, to a remote site: in F1's case, to the pit wall and pit garage. The system electronically records engine performance, suspension status, gearbox data, fuel status, all temperature readings including tyre temperatures, g-forces and the driver's actuation of controls. The data is then used as the foundation for determining car setup and diagnosing problems.

Use of telemetry started in the late 1980s, when teams received data only in bursts as the car passed close to the pits. Technology moved on to continuous high-rate data in the early 1990s, but on tracks like Monza, Spa or Monaco, where cars pass through trees or between buildings, there were sections of the track where teams lost real-time coverage. For a period, teams simply couldn't see anything during those gaps. Into the 2000s, teams fixed that limitation by retransmitting the missed data as soon as the car got back into coverage; by the time the cars went past the garage, all the data for that lap had been seen. In 2002, two-way telemetry was permitted, so engineers could change settings on the cars from the pits. This is no longer allowed, but much was learned. Nowadays teams use multiple antennae around the circuit. McLaren Electronic Systems, the supplier of the standard F1 Electronic Control Unit, places antennae that are available for all the teams to use. As the cars go around the track and move out of sight of one antenna, they come into sight of the next one and use it to send data across. Managing the handover between antennae is exactly how a mobile phone network works. What that means for F1 is that on any circuit, including the difficult ones, you get almost 100% time coverage along with the high bandwidth the teams demand.

A large part of the time spent working with telemetry data goes into the differential, the most tunable part of the car. The differential, which allows the two rear wheels to rotate at different speeds, can be adjusted for corner entry, mid-corner and corner exit. It plays a big role in cornering stability and, tuned well, can contribute a lot to the lap time.

So, how does telemetry work? As we said before, under FIA rules it is not possible to send electronic information to the cars, so this is a one-way system that sends data from the cars to the pits. Engineers can then analyze the data in real time to see if something is wrong, or tell the driver how he can improve his driving or the setup of the car. Many teams also send the data to headquarters, where a whole team is dedicated to analyzing it in real time.

Each car has from 150 to 300 sensors. The number isn't exact because teams add and remove sensors from track to track. Between the practice sessions and the official race they may also remove sensors they found they won't need for that particular circuit, saving some weight.

Data is sent from the car to the pits over 1,000 to 2,000 telemetry channels, transmitted wirelessly on the 1.5 GHz band (or another frequency allowed by the local authorities). These channels are encrypted, of course. The typical delay between the data being collected and its reception at the pits is about 2 ms. For each race, the amount of collected data is in the range of 1.5 billion samples. Since teams collect a similar amount for each practice day, the total over a weekend is around 5 billion samples. During a 90-minute session, a team will collect between 5 and 6 gigabytes of raw compressed data from a single car.
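The quoted figures can be sanity-checked with some quick arithmetic. The per-channel sample rate below is an inferred average, not a published number, and the two-hour race length is an assumption.

```python
# Back-of-the-envelope check of the quoted telemetry figures. The sample and
# data-volume numbers come from the text; race length and channel count are
# assumptions (roughly two hours, mid-point of the 1,000-2,000 channel range).
samples_per_race = 1.5e9
race_seconds = 2 * 3600
channels = 1500

samples_per_second = samples_per_race / race_seconds
rate_per_channel = samples_per_second / channels
print(f"~{samples_per_second:,.0f} samples/s overall")
print(f"~{rate_per_channel:,.0f} samples/s per channel on average")

# A 90-minute session producing ~5.5 GB of compressed data implies roughly:
session_bytes = 5.5e9
session_seconds = 90 * 60
print(f"~{session_bytes / session_seconds / 1e6:.1f} MB/s sustained")
```

The implied average of a hundred-odd samples per second per channel is plausible: slow channels (temperatures) log at a few hertz while fast ones (suspension, pressures) log at hundreds or thousands.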

Hamilton Spa 2012 telemetry sheet
The telemetry sheet Lewis Hamilton curiously tweeted on the Sunday morning before the 2012 Belgian GP at Spa not only contained traces of the two drivers' laps superimposed, showing where Hamilton was losing time to Button, but also information about the car's settings, including sensitive data such as its ride height. It shows him losing 0.5s in both high-speed sectors 1 and 2, but what it does not show is that the point of running a higher-downforce wing is that you make up a second in sector 2, so the lap times end up more or less the same. "Jenson has the new rear wing on, I have the old. We voted to change, didn't work out. I lose 0.4 tenths (of a second) just on the straight" was the body of the tweet. Button got it right; Hamilton did not. The image was deleted not long after, with McLaren team principal Martin Whitmarsh confirming that they had asked him to take it down because it contained confidential data: "He made an error of judgement and we asked him to take that one down, and he did." Asked whether any action would be taken against Hamilton, Whitmarsh said: "No. But it would be interesting to see how other team principals would deal with it." As for how at least one rival team boss would react, Red Bull's Christian Horner said he would deem it "a breach of confidentiality." He added: "I haven't seen the tweet in detail. But from what I understand it was car data, and if it was car data then I'm sure every engineer in the pit lane is having a very close look at it."

Since the data is compressed, the raw sample counts above don't translate directly into megabytes or gigabytes; the actual transfer rate used by the telemetry system is smaller. Each car is independent, so since each team has two cars, the amount of collected data is actually twice as high. The transmitter is placed in the sidepod, with a cable running to an antenna on the nose of the car. Each car also has an onboard storage system that buffers the most recent data, so if a transmission fails, the car keeps retrying until it is completed. Teams won't disclose whether they use a hard disk drive or flash memory for this, but my guess is that these days they all use flash memory. So no data is lost when the car enters the Monaco tunnel, for example: as soon as communication is lost, the car keeps collecting data and storing it in onboard memory, and as soon as it exits the tunnel or any other blind spot, everything collected during that period is sent at once to the pits.
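The store-and-forward behaviour described above can be sketched in a few lines. The class and method names are hypothetical; real onboard loggers also handle partial acknowledgements, prioritization and compression, which this sketch omits.

```python
from collections import deque

class TelemetryBuffer:
    """Illustrative store-and-forward buffer: samples are kept onboard until
    the pit-wall link can take them, so nothing is lost in a tunnel."""

    def __init__(self):
        self._pending = deque()

    def record(self, sample):
        """Log a sample; it stays buffered until successfully transmitted."""
        self._pending.append(sample)

    def flush(self, link_up: bool):
        """Try to transmit everything; return the samples actually sent."""
        if not link_up:
            return []            # out of coverage: keep buffering
        sent = list(self._pending)
        self._pending.clear()    # acknowledged: safe to drop the onboard copy
        return sent

buf = TelemetryBuffer()
buf.record({"lap": 12, "speed_kph": 287})
assert buf.flush(link_up=False) == []      # in the tunnel: nothing sent
buf.record({"lap": 12, "speed_kph": 291})
assert len(buf.flush(link_up=True)) == 2   # back in coverage: backlog sent at once
```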
The data is then decoded and converted into a signal that can be understood by a PC. It goes through a data server system called ATLAS (Advanced Telemetry Linked Acquisition System, developed by McLaren Electronic Systems – MES) which displays the telemetry channels for the engineers. This is the suite which displays all the wavy lines on the screen.

Telemetry system
ATLAS has become the standard data-acquisition package in the F1 paddock due to the use of an FIA-spec MES engine control unit on all cars. The entire data-acquisition package consists of onboard data-logging electronics and a transmitter radio, sending data via radio frequency to telemetry receivers in the garages. The receivers decode the data and operate as central servers, distributing it over a local Ethernet-based network. Any appropriately configured PC running the ATLAS software can simply connect to the network and receive data from the telemetry receiver server. The simple Ethernet architecture of the distribution network also makes it easy to send the live telemetry back to engineers and strategists at the factory. Data is referred to in two forms: "Telemetry" is live data, and "Historic" is logged data or backfilled telemetry. The hardware and infrastructure of the system are beyond the scope of this discussion but are fundamental to understanding how an engineer receives the data and with what tools he or she interacts with it.

In summary: a lot of computers with several LCD displays plotting charts and showing data, and lots of engineers analyzing it. If you pay close attention to Ferrari's F1 cars, you will notice an AMD logo on the tail. To most people this simply means AMD is paying to run an ad on the Ferrari car, but that isn't the whole story. AMD also provides the technology infrastructure for the car's telemetry system, which collects data in real time and sends it to the Ferrari team during races, so they can check in real time whether something is going wrong and also instruct the driver on corrections he should make to his driving to achieve higher performance. The collected data is also kept for post-race analysis.

Given the congested radio environment of a Formula 1 race weekend and test sessions, the only solution was to develop a custom radio system. Systems such as GSM, DECT and Bluetooth were never designed to support the data rates required or to operate in this radio environment. The starting point in the design of a custom communications system is to address the first key question: what are the requirements of the system?
A wide variety of parameters must be considered, including the huge data rates, available frequencies, acceptable latency, quality of service, countries of operation, hardware size, cost, power consumption and more. Radio frequency spectrum is a scarce resource managed by international and national regulation, so the selection of a suitable frequency band is a complex issue. Regulations typically impose limits on maximum output power, acceptable modulation schemes, installation locations and the applications served, and they vary from country to country, although the process within the EU is now quite well harmonized. All data is sent encrypted to prevent leakage to other teams.

Telemetry reading

Telemetry reading for Silverstone

In this telemetry printout, the wavy lines represent (from top to bottom) a map of the Silverstone circuit, the gears used, revs, and g-force as small dots. We can loosely compare the ATLAS system to Microsoft Excel in terms of its working surfaces. In Excel, most people are familiar with the spreadsheet, referred to as a "workbook"; within that workbook are multiple "worksheets" containing any number of user-created charts. The working surfaces of ATLAS are organized similarly: an ATLAS "workbook" contains multiple "pages" arranged in a tabbed, Excel-like graphical user interface, and each page contains user-created "displays" on which to analyze data. The printed sample in the picture below shows the data of two drivers "overlaid" and printed from a single ATLAS "workbook", in the same manner that an individual chart can be printed from Excel. In this way a driver can compare each lap, learn and improve his driving style, or compare his laps to those of his teammate.
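The essence of such an overlay comparison is computing the time delta between two drivers at matching points around the lap. The sketch below assumes cumulative lap time sampled on a fixed distance grid; the distance points and times are made-up numbers, not real lap data.

```python
# Illustrative lap comparison in the spirit of an ATLAS overlay: given each
# driver's cumulative time at fixed distance points, compute the time delta.
# All figures below are invented for illustration.
distance_m = [0, 500, 1000, 1500, 2000]
driver_a_s = [0.0, 9.8, 19.1, 30.4, 41.0]   # hypothetical cumulative times
driver_b_s = [0.0, 9.9, 19.6, 30.5, 40.7]

# Positive delta: driver B is behind at that point; negative: B is ahead.
delta = [b - a for a, b in zip(driver_a_s, driver_b_s)]
for d, dt in zip(distance_m, delta):
    print(f"{d:5d} m: driver B {'+' if dt >= 0 else ''}{dt:.2f} s vs driver A")
```

A trace like this immediately shows where one driver loses time mid-lap but recovers it by the finish, exactly the pattern discussed in the Spa telemetry-sheet story above.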

Telemetry printout for Monaco
This particular type of display is referred to as a "waveform." A waveform display presents data with time or distance as the domain of the plot. Each car's data is identified by colour: here, blue traces from one car are compared with red traces from another. Each named parameter represents the calibrated output of a unique onboard sensor; a parameter may also be a "function parameter", a mathematical output derived from one or more sensor outputs. A track map of Monaco is located in the lower right corner of the display, with corners identified in green and straights in yellow. The ATLAS software automatically generates the map from logged lateral-acceleration and track-distance data; corners are determined by comparing lateral acceleration against thresholds.
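The corner-detection step can be sketched as simple thresholding on lateral acceleration. The 1.5 g threshold and the sample values below are illustrative assumptions, not MES's actual parameters.

```python
# Sketch of classifying corners vs. straights from logged lateral acceleration,
# in the spirit of the ATLAS track-map generation described above.
G = 9.81                 # m/s^2 per g
CORNER_THRESHOLD_G = 1.5  # assumed threshold, illustrative only

def classify(lateral_accel_ms2):
    """Label each logged sample 'corner' or 'straight'."""
    return ["corner" if abs(a) / G >= CORNER_THRESHOLD_G else "straight"
            for a in lateral_accel_ms2]

samples = [0.5, 2.0, 25.0, 38.0, 30.0, 3.0, 1.0]   # m/s^2, made-up data
print(classify(samples))
```

A real implementation would also smooth the signal and merge adjacent corner samples into corner segments, but the thresholding idea is the core of it.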

One of the best-known suppliers of telemetry equipment is Plextek, currently a supplier for Sauber, Williams, Red Bull, STR and Ferrari. The company was approached in 1998 by Pi Group, at the time a sponsor and supplier of electronic equipment for the Williams and Jaguar F1 teams, to develop a new telemetry system for Formula One motor racing.

First tests of the new Plextek system were undertaken at the Silverstone, Hockenheim, Nurburgring and Barcelona circuits to build models of a number of different radio environments. From the measured data, the proposed system design was developed and tested to produce coverage estimates showing the likely performance of the system; this approach allows an early check on whether the initial objectives are likely to be achieved before the final design of the equipment. The telemetry system developed by Plextek and Pi Group raced into first place at the San Marino Grand Prix at Imola on Easter Sunday 2001, when the Williams-BMW team notched up the first victory of their two-year partnership. In the gap between the 2001 and 2002 seasons, Pi came back to Plextek for a software upgrade program. The improvements added a fully acknowledged handshake protocol, and the new software also provided a data downlink channel to the car, which was illegal under the old 2001 FIA rules but has been allowed since 2002. The new Plextek software allowed teams to receive error-free transfers of data from the cars and to reliably send command information back to tune performance during the race. The upgraded telemetry system was installed on the cars of four Formula One teams, including Williams-BMW, Jaguar and Arrows.

How Ferrari’s F1 technology works

So there we were, inside the Ferrari Garage at Greater NOIDA’s Buddh International Circuit, just 24 hours before the first practice session of the first ever Indian Grand Prix.
We saw the Ferrari engineers assemble cars for both Fernando Alonso and Felipe Massa. We discovered in detail the workings of the pit lane garage and how the team of engineers piece together a complex chain of events which result in the most exhaustive bit of teamwork in sports.

In the Ferrari Garage, we ran into Scuderia Ferrari’s chief techie, Andrea Beneventi. His official designation is Head of Electronics for Track and Test and Head of Support for Electronic Applications, and he gave us a deep insight into the technological aspects of Scuderia Ferrari’s F1 enterprise. Take a moment to read the technical jargon – behind it lies a fascinating account of the incredible systems that go into putting a car on the track.


Some key elements he touched upon are as follows:

1. Computers and Servers: It's easy to assume that F1 cars are all about horsepower, but given the sophistication of these vehicles there is massive synergy between software and hardware, both in the Ferrari garage at the track and at Ferrari HQ in Maranello. The Ferrari team uses a legion of Acer laptops, servers and rack-based machines, most of which run quad-core processors with 8 GB of RAM and specially tailored software for data collection, telemetry and the works. The machines the team travels around the world with are generally portable, and alongside these workstations the team also employs a myriad of displays, again provided by Acer. On the whole, Ferrari employs some serious horsepower not only for its engines but also for its computing needs. Pretty impressive, right? We have more.

2. Communication is key: This is a well-known fact, but the level the Ferrari team takes it to is almost unheard of. Real-time telemetry is shared between the car, the Ferrari garage in the pit lane and the factory in Maranello. How does this happen? Via Controller Area Network (CAN) lines and Ethernet lines. The CAN lines are connected to RF arrays, which are in turn connected to a series of antennae placed throughout the circuit, enabling wireless real-time telemetry to the garage. From here, the engineers can change the programme parameters of the car. For the link between the trackside server and Maranello, Ferrari uses a Multiprotocol Label Switching (MPLS) network, which enables very high data transfer speeds of up to 6 megabits per second. In fact, the people in Maranello even have access to the communication between the drivers and the pit-lane engineers.
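To make the CAN side concrete, here is a hedged sketch of decoding a raw frame payload into engineering values. The frame layout (big-endian: 16-bit RPM, 8-bit gear, 16-bit speed in 0.1 km/h units) is invented for illustration; in practice, signal layouts come from the team's own signal database, not from any public standard.

```python
import struct

# Hypothetical CAN payload layout, for illustration only:
#   bytes 0-1: engine RPM (unsigned 16-bit, big-endian)
#   byte  2:   gear (unsigned 8-bit)
#   bytes 3-4: speed in 0.1 km/h units (unsigned 16-bit, big-endian)
def decode_frame(data: bytes) -> dict:
    rpm, gear, speed_raw = struct.unpack(">HBH", data[:5])
    return {"rpm": rpm, "gear": gear, "speed_kph": speed_raw / 10.0}

frame = struct.pack(">HBH", 11800, 6, 2874)   # 11800 rpm, 6th gear, 287.4 km/h
print(decode_frame(frame))
```

Packing values into a few bytes with fixed scaling factors is exactly why CAN suits bandwidth-constrained vehicle networks: each frame carries at most eight data bytes.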

By the way, CAN is an automotive protocol which is even used in our regular vehicles. Beneventi says that the robust nature of the protocol makes it ideal even for Formula 1 cars.
3. Data Analysis and Telemetry: Communication and computing power exists primarily to facilitate Data Analysis, Simulations, and real-time telemetry. According to Beneventi, the team collects about 2 to 3 GB of data for one car over the course of a race weekend, which is stored in the main server in the pit garage.

This also automatically synchronizes with Ferrari's mainframe back in Maranello over the MPLS network. It is on the basis of this data and real-time telemetry that the team plans its race strategy and further wind-tunnel development of the car.

4. Research and Development Coupled with Innovation: Since 2008, the FIA has imposed stringent Electronic Control Unit rules, under which all performance-related electronics remain standard across constructors and only car-management ECUs can be custom built. As a result, richer constructors like Ferrari can no longer buy performance advantages by letting car-development budgets skyrocket, though the rules also leave constructors minimal wiggle room to innovate on F1 technologies. According to Beneventi, this is not an entirely bad thing: while it does slash budgets, it also forces the team to innovate in different areas. For instance, an F1 car is fitted with an Annotation Meter, a sensor that helps the engineers monitor the positioning of various components inside the car.


Under the new rules, Ferrari builds its F1 cars on roughly a quarter of the budget it used to, which led the team to develop a brand-new technology from scratch for the Annotation Meter, as the previous one would not fit the budget. Despite the budgetary restrictions, Beneventi says the newer-generation Ferrari F1 cars carry far more sophisticated gadgetry than their predecessors, which were developed on higher budgets.

Some Key Technical Terms:

MPLS Network: Multiprotocol Label Switching (MPLS) is a mechanism in high-performance telecommunications networks that directs data from one network node to the next based on short path labels rather than long network addresses, avoiding complex lookups in a routing table.

Controller Area Network: Controller-area network (CAN or CAN-bus) is a vehicle bus standard designed to allow microcontrollers and devices to communicate with each other within a vehicle without a host computer.

Wind Tunnel: A wind tunnel is a research tool used in aerodynamic research to study the effects of air moving past solid objects. Formula 1 constructors use it for the development of aerodynamic efficiency of their vehicles. It is a key process in the development of the car and goes on throughout the year.

Electronic Control Unit: In automotive electronics, the electronic control unit (ECU) is a generic term for any embedded system that controls one or more of the electrical systems or subsystems in a motor car.