5 Ways Big Data Is Changing the Auto Industry

Big data was an interesting concept a decade ago, and now it’s a ubiquitous feature of modern businesses. Data is fundamentally valuable; depending on what you gather and how you use it, data can give you better business insights, help you change direction, and guide you in learning how and why your business works the way it does. When that data is collected on a massive scale, its benefits grow even further.

Each industry is capitalizing on the spoils of big data a little bit differently, and those new abilities, ideas, and processes are reshaping the industries in new and exciting ways. The automotive industry is a perfect example; from concept to ongoing customer service, big data is fundamentally transforming the auto industry.

The Auto Industry

The auto industry is bigger than you might realize. There are big-name auto manufacturers, who design and assemble vehicles for the masses, but you also need to consider the wide network of suppliers they rely on to create and ship the individual parts those vehicles require. There are also distributors responsible for transporting and selling those vehicles, and don't forget functions like safety and customer service. The auto industry is far-reaching, and it uses big data at almost every level.

Big Changes

Big data is improving the auto industry across multiple dimensions:

  1. Value analysis. First, big data is helping companies understand the real value of their cars. This is useful when designing new vehicles, but even more useful when valuing old ones. Valuation services like those provided by Kelley Blue Book are more precise and more efficient than ever before, and vehicle recyclers like the Clunker Junker can offer vehicle owners a more accurate sum for their old junkers.
  2. Supply chain management. One of big data’s most important applications is dissecting the value and flow of specific processes across multiple organizations; in the auto industry, this analysis is applied to supply chain management. Companies need to know what parts they’re getting from where, how much they cost, how efficiently they’re being provided, and how those actions affect the profitability of the company overall. Complex data processing allows insight into these dimensions for the first time, and companies are optimizing their strategies accordingly.
  3. Cost reduction. Big data in the auto industry is driving overall costs down. Big data analysis lets companies understand when one material is substantially better than another and helps them discover procedural changes that can improve efficiency or maximize productivity. Ultimately, that means companies can put vehicles together far less expensively, and consumers see the benefits: they end up paying less for vehicles, while manufacturers still maximize their profits.
  4. Safety improvement. Companies are also using big data to delve deeper into analyzing vehicle safety. After collecting millions of data points from both test crashes and simulated scenarios, companies are able to make hundreds of additional improvements to their vehicles to increase their capacity to survive immediate events and long-term wear and tear. This, again, is advantageous to both companies and consumers; consumers get to enjoy a safer vehicle, and companies have happier customers and lower insurance costs. It’s gradually making our roads safer as well.
  5. Consumer understanding. Finally, automakers are using big data to better understand what their customers want and need. This allows them to design more attractive, more practical vehicles for the masses (which gives consumers more of what they’re looking for and increases sales for the manufacturer). It also gives automakers key insights that they can then use to create more specific advertising and marketing campaigns, saving money by increasing efficiency and still maximizing exposure for their most important brands.

If you own a car or plan on purchasing one in the near future, big data is already benefitting you. Thanks to big data and predictive analytics, our vehicles will grow increasingly inexpensive, safe, and tailored to our individual needs. Complete those customer surveys if you get the chance, and keep contributing to the vast wealth of data that these companies need to keep improving.

– Larry is an independent business consultant specializing in tech, social media trends, business, and entrepreneurship

 

Telemetry – A Racing Car Perspective

In an F1 car, a host of electronic devices, including the ECU (Engine Control Unit, sometimes called the Electronic Control Unit), transmits data such as measurements to a remote site – in F1's case, the pit wall and pit garage. The system electronically records engine performance, suspension status, gearbox data, fuel level, all temperature readings including tyre temperatures, g-forces, and the driver's actuation of the controls. The data then serves as the foundation for determining car setup and diagnosing problems.

Use of telemetry started in the late 1980s, when teams sent data only in bursts as the car passed close to the pits. Technology moved on to continuous high-rate data in the early 1990s, but on tracks like Monza, Spa or Monaco, where cars pass through trees or between buildings, there would be sections of the track where teams lost real-time coverage. For a period, teams simply couldn't see anything during those gaps. Into the 2000s, teams fixed that limitation by retransmitting any data that had not been received as soon as the car got back into an area of coverage; by the time the cars went past the garage, all the data for that lap had been seen. In 2002 two-way telemetry was permitted, so engineers could change settings on the cars from the pits. This is no longer allowed, but much was learned. Nowadays multiple antennae are used around the circuit. McLaren Electronic Systems, the supplier of the F1 Electronic Control Unit, places antennae that are available for all the teams to use. As the cars go around the track and move out of sight of one antenna, they come into sight of the next one and use that to send the data across. Managing the transition between antennae in this way is how a mobile phone network works. What that means for F1 is that on any circuit, including the difficult ones, you get almost 100 per cent time coverage along with the high bandwidth that the teams demand.
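
The antenna handover works along the same lines as a phone handset choosing a cell tower. As a rough illustration of that idea (not the actual MES implementation; the names and the 3 dB hysteresis value below are made up), a receiver-selection rule might look like this:

```python
# Hypothetical sketch of handover between trackside antennae.
# Names and the hysteresis value are illustrative, not the real MES logic.
def pick_antenna(readings, current, hysteresis_db=3.0):
    """readings: {antenna_name: rssi_dbm}. Switch only when another antenna
    is clearly stronger, to avoid flip-flopping near the crossover point."""
    best = max(readings, key=readings.get)
    if current not in readings or readings[best] > readings[current] + hysteresis_db:
        return best
    return current

# The car drives away from the turn-1 antenna and towards the pit straight.
current = "turn-1"
current = pick_antenna({"turn-1": -95.0, "pit-straight": -70.0}, current)
print(current)  # -> pit-straight
```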

When working with the telemetry data, a large part of the time is spent on the differential, the most tunable part of the car. The differential, which allows the two rear wheels to rotate at different speeds, can be adjusted for corner entry, mid-corner and corner exit. It plays a big role in cornering stability and, tuned well, can contribute a lot to lap time.

So, how does telemetry work? As mentioned before, under FIA rules it is not possible to send electronic information to the cars, so this is a one-way system that sends data from the cars to the pits. The engineers can then analyze the data in real time and see if something is wrong, or tell the driver how he can improve his driving or the setup of the car. Many teams also send the data back to headquarters, where a whole group of people is dedicated to analyzing it in real time.

Each car has from 150 to 300 sensors. The number isn't exact because sensors are added and removed from track to track. Between practice sessions and the race itself, teams can also remove sensors they have found they won't need for that particular track, saving some weight.

Data is sent from the car to the pit garage over 1,000 to 2,000 telemetry channels, transmitted wirelessly in the 1.5 GHz band or on whatever frequency the local authorities allow. These channels are encrypted, of course. The typical delay between data being collected on the car and being received in the garage is about 2 ms. For each race, the amount of data collected is in the range of 1.5 billion samples; since a similar amount is collected on each practice day, the total for a race weekend is in the range of 5 billion samples. During a 90-minute session, a team will collect between 5 and 6 gigabytes of compressed raw data from a single car.
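
As a back-of-the-envelope check, those figures hang together; assuming a mid-range 250 sensors, each sampled 1,000 times per second over a 90-minute session:

```python
# Rough sanity check of the figures quoted above (round numbers assumed).
sensors = 250            # "150 to 300 sensors" -> take a mid value
rate_hz = 1_000          # each sensor read up to 1,000 times per second
session_s = 90 * 60      # a 90-minute session

samples = sensors * rate_hz * session_s
print(f"samples per session: {samples:,}")                   # ~1.35 billion

session_bytes = 5.5e9    # "between 5 and 6 gigabytes" of compressed data
print(f"bytes per sample:   {session_bytes / samples:.1f}")  # ~4 bytes per sample
```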

Hamilton Spa 2012 telemetry sheet
The telemetry sheet Lewis Hamilton curiously tweeted on the Sunday morning before the 2012 Belgian GP at Spa not only contained traces of the two drivers' laps superimposed, showing where Hamilton was losing time to Button, but also information about the car's setup, including sensitive data such as ride height. It shows him losing 0.5s in both of the high-speed sectors 1 and 2; what it does not show is that the point of running a higher-downforce wing is that you make up around a second in sector 2, so the lap times end up more or less the same. "Jenson has the new rear wing on, I have the old. We voted to change, didn't work out. I lose 0.4 tenths (of a second) just on the straight" – was the body of the tweet. Button got it right; Hamilton did not. The image was deleted not long after, with McLaren team principal Martin Whitmarsh confirming that the team had asked him to take it down because it contained confidential data. "He made an error of judgement and we asked him to take that one down, and he did." Asked whether any action would be taken against Hamilton, Whitmarsh said: "No. But it would be interesting to see how other team principals would deal with it." As for how at least one team boss would react, Red Bull's Christian Horner said he would deem it "a breach of confidentiality". He added: "I haven't seen the tweet in detail. But from what I understand it was car data, and if it was car data then I'm sure every engineer in the pit lane is having a very close look at it."

Since the data is compressed, the sample counts above don't translate directly into megabytes or gigabytes over the air; the actual transfer rate used by the telemetry system is lower. Each car is independent, and since each team runs two cars, the amount of data collected is actually twice as high. The transmitter is placed in the sidepod, with a cable running to an antenna on the nose of the car. Each car also has an onboard storage system that buffers the most recent data, so if the transmission fails, the car keeps retrying until it is completed. Teams won't disclose whether they use a hard disk drive or flash memory for this, but my guess is that these days they all use flash memory. So no data is lost when the car enters the Monaco tunnel, for example: as soon as communication is lost, the car keeps collecting data and storing it in its onboard memory, and as soon as it exits the tunnel or any other blind spot, all the data collected during that period is sent at once to the garage.
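
A minimal sketch of that buffer-and-backfill behaviour (purely illustrative; the real onboard logger is far more sophisticated) could look like this:

```python
from collections import deque

class TelemetryLogger:
    """Toy model: log every sample onboard, stream it live while the radio
    link is up, and flush the backlog as soon as coverage returns."""
    def __init__(self):
        self.backlog = deque()

    def record(self, sample, link_up, send):
        self.backlog.append(sample)
        if link_up:
            while self.backlog:          # live data plus any buffered samples
                send(self.backlog.popleft())

received = []
logger = TelemetryLogger()
logger.record({"t": 0, "speed": 280}, link_up=True, send=received.append)
logger.record({"t": 1, "speed": 120}, link_up=False, send=received.append)  # in the tunnel
logger.record({"t": 2, "speed": 140}, link_up=False, send=received.append)
logger.record({"t": 3, "speed": 260}, link_up=True, send=received.append)   # back in coverage
print([s["t"] for s in received])  # -> [0, 1, 2, 3]: nothing lost
```
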
The data is then decoded and converted into a signal that can be understood by a PC. It goes through a data server system called ATLAS (Advanced Telemetry Linked Acquisition System, developed by McLaren Electronic Systems – MES) which displays the telemetry channels for the engineers. This is the suite which displays all the wavy lines on the screen.

Telemetry system
ATLAS has become the standard data acquisition package in the F1 paddock due to the use of an FIA-spec MES engine control unit on all cars. The entire data acquisition package consists of onboard data-logging electronics and a transmitter radio, sending data via radio frequency to telemetry receivers in the garages. The receivers decode the data and act as central servers, distributing the decoded data over a local ethernet-based network. Any appropriately configured PC running ATLAS software can simply connect to the network and receive data from the telemetry receiver server. The simple ethernet architecture of the data distribution network also makes it easy to send the live telemetry back to the factory for engineers and strategists. Data is referred to in two forms: "telemetry" is live data, and "historic" is logged data or backfilled telemetry. The hardware and infrastructure of the system are beyond the scope of this discussion, but they are fundamental to understanding how an engineer receives the data and with what tools he or she interacts with it.
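
To make the distribution idea concrete, here is a generic sketch of publishing decoded channel values onto a garage LAN over UDP. This is not the ATLAS/MES protocol – just an illustration of the "decode once, serve to any client on the network" architecture described above, with an invented channel name and payload format:

```python
# Generic illustration only - NOT the ATLAS/MES wire format.
import json
import socket

def publish(sample, port=5005):
    """Broadcast one decoded telemetry sample to any listener on the local network."""
    payload = json.dumps(sample).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, ("255.255.255.255", port))

# "live" distinguishes real-time telemetry from backfilled/historic data.
publish({"lap": 23, "channel": "vCar", "value": 287.4, "live": True})
```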

In summary: a lot of computers with several LCD displays plotting charts and showing data, and lots of engineers analyzing that data. If you pay close attention to Ferrari's F1 cars, you will notice an AMD logo on the tail. To most people this simply means that AMD is paying to run an ad on the Ferrari car, but that isn't the whole story. AMD also provides the technology infrastructure for the car's telemetry system, which collects data in real time and sends it to the Ferrari team during races, so they can check in real time if something is going wrong and also instruct the driver on corrections he should make to his driving in order to achieve higher performance. The collected data is also kept for post-race analysis.

Given the data rates Formula 1 requires and the radio environment at race weekends and test sessions, the only solution was to develop a custom radio system. Systems such as GSM, DECT and Bluetooth were never designed to support the data rates required or to operate in this radio environment. The starting point in the design of a custom communications system is to address the first key question: what are the requirements of the system?
A wide variety of parameters must be considered, including the huge data rates, available frequencies, acceptable latency, quality of service, countries of operation, hardware size, cost, power consumption and more. Radio frequency spectrum is a scarce resource and is managed by international and national regulation, so the selection of a suitable frequency band is a complex issue. Regulations typically impose limits on maximum output power, acceptable modulation schemes, installation locations and the applications served, and they vary from country to country, although the process within the EU is now quite well harmonized. All data is sent encrypted to prevent leakage to other teams.

Telemetry reading

Telemetry reading for Silverstone

In this telemetry printout, the traces represent (from top to bottom) a map of the Silverstone circuit, the gears used, revs, and g-force shown as small dots. We can loosely compare the ATLAS system to Microsoft Excel in terms of its working surfaces. In Excel, most people are familiar with the spreadsheet, referred to as a "workbook"; within that workbook are multiple "worksheets" containing any number of user-created charts and information. The working surfaces of ATLAS are organized similarly: an ATLAS "workbook" contains multiple "pages" arranged in a tabbed interface much like Excel's, and each page contains user-created "displays" on which to analyze data. The printed sample in the picture below shows data from two drivers "overlaid" and printed from a single ATLAS "workbook", in the same manner that an individual chart can be printed from Excel. In this way, a driver can compare his laps to learn and improve his driving style, or compare his lap with that of his teammate.
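
In the same spirit as that overlaid printout, here is a small matplotlib sketch that plots two drivers' speed traces against lap distance on one set of axes. The data is synthetic and the channel values are invented; it only illustrates the overlay idea, not ATLAS itself:

```python
# Synthetic overlay of two drivers' laps - illustrative only.
import numpy as np
import matplotlib.pyplot as plt

distance = np.linspace(0, 5800, 2000)             # metres around a ~5.8 km lap
driver_a = 200 + 80 * np.sin(distance / 400)       # made-up speed traces (km/h)
driver_b = 198 + 82 * np.sin(distance / 400 + 0.05)

plt.plot(distance, driver_a, color="red", label="Driver A")
plt.plot(distance, driver_b, color="blue", label="Driver B")
plt.xlabel("Lap distance (m)")
plt.ylabel("Speed (km/h)")
plt.legend()
plt.title("Two laps overlaid, one trace per driver")
plt.show()
```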

Telemetry printout for Monaco
This particular type of display is referred to as a "waveform". A waveform display presents data relative to time or distance as the domain of the plot. Each car's data is identified by colour; here, blue data traces from one car are compared to red data traces from another car. Each individually named parameter represents the calibrated output of a unique onboard sensor. Additionally, a parameter may be a "function parameter", a mathematical output computed from one or more sensor outputs. A track map of Monaco is located in the lower right corner of the display, with corners identified in green and straights in yellow. The ATLAS software automatically generates the map from logged lateral acceleration and track distance data; the green corners are determined by comparing lateral acceleration against thresholds.
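
A toy version of that corner/straight classification (synthetic data and an arbitrary 1.5 g threshold; the real map generation in ATLAS is more involved) might look like this:

```python
# Mark a point as "corner" when |lateral g| exceeds a threshold - illustrative only.
import numpy as np

track_distance = np.linspace(0, 3337, 1000)          # ~3.3 km lap, in metres
lateral_g = 3.5 * np.sin(track_distance / 120) ** 3  # made-up lateral acceleration trace

CORNER_THRESHOLD_G = 1.5
is_corner = np.abs(lateral_g) > CORNER_THRESHOLD_G   # would be drawn green
is_straight = ~is_corner                             # would be drawn yellow

print(f"corner fraction of lap: {is_corner.mean():.0%}")
```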

One of the best-known suppliers of telemetry equipment is Plextek, currently a supplier to Sauber, Williams, Red Bull, STR and Ferrari. The company was approached in 1998 by Pi Group, at the time a sponsor and supplier of electronic equipment to the Williams and Jaguar F1 teams, to develop a new telemetry system for Formula One motor racing.

First tests of the new Plextek system were undertaken at the Silverstone, Hockenheim, Nurburgring and Barcelona circuits to allow models of a number of different environments to be produced. From the measured data, the proposed system design was developed and tested to produce coverage estimates showing the likely performance of the system. This approach allows an early check on whether the initial objectives of the system are likely to be achieved, prior to the final design of the equipment. The Formula One telemetry system developed by Plextek and Pi Group raced into first place in the San Marino Grand Prix at Imola on Easter Sunday 2001, when the Williams-BMW team notched up the first victory of their two-year partnership. In the gap between the 2001 and 2002 seasons, Pi came back to Plextek for a software upgrade program. These improvements added a fully acknowledged handshake protocol. The new software also provided a data downlink channel to the car, which was illegal under the old 2001 FIA rules but has been allowed since 2002. The new Plextek software allowed teams to receive error-free transfers of data from the cars and to reliably send command information to the cars to tune performance during the race. The upgraded telemetry system was installed on the cars of four Formula One teams, including Williams-BMW, Jaguar and Arrows.

In-Vehicle Networking Solutions

Overview

Vehicles today use more and more electronics to cope with the diversifying requirements of drivers and passengers and to address concerns about the environment and fuel consumption. Multiple electronic control units (ECUs) are connected by multiple in-vehicle LANs that differ in transmission speed and communication protocol according to the features and characteristics required by each application, exchanging information and coordinating control so that more value-added functions can be implemented. Renesas has comprehensive know-how and a successful track record in all applications, from control systems to information systems, and meets the needs of its customers with a wide-ranging product lineup.

System Block Diagram

In-Vehicle Networking

CAN (Controller Area Network)

The CAN protocol is the current de facto standard for vehicle LANs. It is used for the backbone network as well as for powertrain, chassis, and body systems. Renesas CAN MCUs offer dedicated CAN functions in a variety of packages, with low power consumption, high-temperature operation and excellent EMI/EMS performance. The wide lineup suits diverse user systems.
 Collision Detection System
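
As a concrete taste of what "talking CAN" looks like in software, the sketch below uses the open-source python-can library's virtual bus to pass one frame between two pretend ECUs. The arbitration ID and payload are made up, and real hardware would use an interface such as socketcan instead of "virtual":

```python
# Minimal CAN frame exchange using python-can's virtual bus (pip install python-can).
import can

sender = can.Bus(interface="virtual", channel="demo")    # two nodes standing in
receiver = can.Bus(interface="virtual", channel="demo")  # for two ECUs

# Hypothetical body-system frame: ID and data bytes are illustrative.
msg = can.Message(arbitration_id=0x1A0,
                  data=[0x11, 0x22, 0x33, 0x44],
                  is_extended_id=False)
sender.send(msg)

frame = receiver.recv(timeout=1.0)
print(frame)    # prints the frame's ID, DLC and data bytes

sender.shutdown()
receiver.shutdown()
```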

Ethernet

Ethernet is prominent as a diagnostic protocol for the engine, chassis, and body electronic control units connected to the vehicle network.

Ethernet Diagnostics for Ethernet Electronic Connection Control Unit

FlexRay

FlexRay is a high-speed communication protocol that provides a high degree of flexibility and reliability. It is the basis for active technology development in Japan and worldwide, and its many applications include next-generation X-by-Wire systems and backbone systems. Renesas offers V850 and SH-4A microcontrollers supporting FlexRay and dedicated motor driver ICs, power MOSFETs, and so on. We are also investigating the development of a bus driver IC for FlexRay communication.
Brake-by-Wire System

LIN (Local Interconnect Network)

LIN is a vehicle LAN protocol that uses a single master to achieve a superior cost-performance ratio. It is used for switch inputs, sensor inputs and actuator control. Renesas offers LIN MCUs optimized for diverse body control applications, with a variety of packages, low power consumption, high-temperature operation and excellent EMI/EMS performance.
 Multifunction Keyless System

Other Protocols:

Media Oriented Systems Transport (MOST)

MOST is a bus standard for vehicle multimedia networks designed to enable transfer of high-quality audio, video, and data. Its specifications are established by the MOST Cooperation, to which the major carmakers and manufacturers of automotive electrical system components belong. MOST allows easy interconnection of vehicle multimedia components.

Ethernet AVB (Audio Video Bridging)

Ethernet AVB is a real-time Ethernet standard for audio and video. The IEEE has finalized the AVB1.0 standard, and the AVnu Alliance is in the process of formulating conformance specifications. AVB1.0 is expected to be adopted in the vehicle multimedia and camera fields, and vehicle control requirements should be satisfied by AVB2.0.

Ethernet TSN (Time-Sensitive Networking)

Ethernet TSN is an extension of the Ethernet AVB standards currently used in professional audio equipment and in-vehicle networks. It is standardized by the TSN task group of IEEE 802.1. Ethernet TSN covers several features, including time synchronization, traffic scheduling, frame preemption and ingress policing, and is a high-speed network technology capable of supporting autonomous driving applications. Building on its expertise in in-vehicle communication technology, Renesas was the first in the industry to demonstrate standards compliance for frame preemption. Renesas is also eyeing TSN as a possible replacement for existing Ethernet in automotive control systems and intends to promote the TSN standard in automotive networks, contributing to the realization of safer and more capable autonomous-driving vehicles.

Sky’s the limit for F1 and the cloud

The complexity and volume of data generated by sophisticated racing cars means cloud computing could soon be in pole position for Formula 1, writes Caroline Reid

When millions of dollars are spent to gain a mere tenth of a second advantage, it’s little surprise that Formula 1 teams are looking to high-tech solutions, such as cloud computing, for the future direction of the sport.

Competing in F1 is a costly business. The leading teams spend more than $400 million each to propel two cars around a track for a few hours 19 times a year. Every team must design and build its own chassis and, with only 2.5 seconds a lap separating the champions from the losers, getting the technological advantage is crucial.

The most visible components may be the sponsor-covered chassis and wheels, but it’s what the eye can’t see that makes the cars so costly. Incorporating onboard computing power in an F1 car presents its own challenges and increases costs.

To make sure the bodywork is as slender and aerodynamic as possible, all the wiring, electronics and cooling systems must be packed in a tight space around the engine – more difficult than it sounds when there’s 1.25km of wiring and up to 150 onboard sensors to be installed.

Each sensor gives readings up to 1,000 times per second and data is sent wirelessly from the car to the pits. This gives around 1.5 billion samples of data from each race and these are monitored in the garage while the car is on track, then analysed afterwards by supercomputers back at the team’s factory. Leading teams take around 20 engineers to races just to work on telemetry read-outs, with a further 30 back at base working simultaneously. In this environment, quick transfer of data is crucial.

This is the reason why cloud computing is starting to play a major part in the world of F1, long before the racing car even gets on the track. Red Bull Racing has won both the drivers’ and constructors’ world championships for the past four years, and cloud computing is playing an ever-increasing role in the team’s quest for victory.

Its head of technical partnerships Alan Peasland explains: “At Infiniti Red Bull Racing we have a private on-premise cloud that we use for a variety of simulation and computing tasks. In the design and development of the car, we use our high-performance computer (HPC) to run computational fluid dynamics (CFD) simulations and finite element analysis in order to support the core design activities.”

Cloud computing is starting to play a major part in the world of F1, long before the racing car even gets on the track

This affects areas from the evaluation of aerodynamic performance to the refinement of the mechanical properties of a design, such as its strength and fatigue life. Most of the computing power running within Red Bull Racing’s HPC is consumed in processing the hundreds of simulations performed by CFD in a typical week. Running parallel to this, the HPC also analyses the data produced as the team tests scale models in its wind tunnel.

To accomplish this it has the support of some of the world’s leading tech companies. Suppliers include IBM Platform Computing, Ansys, iLight, AT&T and Siemens PLM who, according to Mr Peasland, “all contribute to the overall solution that takes us from initial concept design, through simulation and analysis, and into manufacture”.

All this takes place long before a car turns a wheel on a track sometimes halfway around the world from Red Bull Racing’s Milton Keynes base. Calculations done in the cloud are key to making sure everything runs smoothly.

“Performance on track will be influenced not only by the new components we send to each race that help to tailor the car for the specific circuit, but also how quickly the car can be optimised during the race weekend,” says Mr Peasland. Information travels in the other direction too. “Data captured on-car during practice sessions will be transferred back to the factory, by virtue of our AT&T Global EVPN Network, where it will then be analysed by our team of experts.”

Perhaps surprisingly, at the moment cloud computing is little used for processing data during the race, and Red Bull Racing and the other teams instead transport heavy servers to each race. “We have our own software-defined on-premise cloud,” he says. “The main reason for doing so is due to the sensitive nature of the data being processed and stored, and also the speed of access to this data.

“Formula 1 is a high-paced, time-restricted environment in all areas of the business, so being able to have real-time access to large volumes of data is crucial in order to perform complex simulations during race weekends that can ultimately deliver increased performance on the track.”

However, Mr Peasland believes cloud technologies are set to become more important in the near future. “As cloud technology advances and with the introduction of hybrid clouds that can support our peaks in demand, it’s highly likely that this will be an area of development for the team,” he says.

Bill Peters, chief information officer of Caterham F1, says his team is considering migrating IT to the cloud. “We’re starting to look at potentially having our supercomputer capabilities as a service that we buy, as opposed to something we have in-house. Similarly, if we could have reliable enough communications to trackside, there’s no reason why you couldn’t host all your trackside systems in the cloud as well, so you wouldn’t need to carry the whole IT circus from track to track,” he says.

It would also help to cut costs and, in a sport where many smaller teams struggle to keep up with the larger outfits’ accelerating budgets, this could be a driving force behind its proliferation. Mr Peasland agrees that “cloud computing, in the right environment and used in the correct way, will most definitely be able to offer cost-savings.” And that isn’t the only way it will change the sport.

He says: “As cloud technology and services mature, it will not only be areas such as CFD and simulation that will benefit but all other business systems, including telephony and communications, design and development. And it’s our innovation partners, such as IBM Platform Computing and AT&T, who will work with us to move us forward in this area.”

A brief history of computing in Formula 1

Modern Formula 1 teams use thousands of cutting-edge computers to measure, control, analyse and simulate every aspect of a Grand Prix car. McLaren software engineer Chris Alexander investigates the history of computer tech in the sport.

From onboard specialist electronics to countless virtual servers in data-centres around the world, computers are pervasive in every aspect of Formula 1 engineering. But how did the technology get to where it is today? Much like the nature of the sport itself, and, indeed, the mobile or desktop machine upon which you’re reading this very article, the journey of computers in F1 is a story of both power and speed.

 

As you can well imagine, the early years of Formula 1 weren’t heavily influenced by computer technology. In fact, when the world championship started in 1950, the first programmable computer had been invented just the year before. The Electronic Delay Storage Automatic Calculator (EDSAC) machine, as it was called, was built at the University of Cambridge and programmed with five-hole punch tape. Due to the primitive nature of the technology, it took up the same amount of floor space as two McLaren MP4-31s, and it took many hours just to input a simple program!

 

Formula 1 cars remained completely mechanical devices through the 1960s, designed on traditional drawing boards by multi-skilled engineers armed with a retractable pencil and a fancy set of French curves.

When Bruce McLaren and Denny Hulme raced McLaren’s earliest grand prix cars in the late 1960s, the driver was the key instrument in analysing and understanding car performance. A simple mistake by a driver, or a failure to understand what the car was ‘saying’ to him, could easily result in a retirement.

For instance, in the 1967 Monaco Grand Prix, Bruce pitted with a misfire that he mistakenly thought had been caused by low fuel; his error was pointed out to him in the pits by Jack Brabham, and he returned to the fray, eventually finishing fourth.

In modern F1 cars, thousands of data parameters are measured every second, so engineers at the track and back at base can analyse issues with the car without it even needing to come into the pits.

 

It wasn’t until the 1970s when advances in electronic components and microprocessors allowed for the introduction of what we’d recognise today as a microcomputer. It was in 1975 when McLaren first deployed telemetry – collecting data about the car – and it wasn’t in F1, it was on the company’s IndyCar effort, capturing 14 different pieces of information about the car that could be downloaded back at the garage.

To put that in perspective, that’s about the same number of different pieces of information a modern smartphone can capture about its environment.

 

As with the home computer boom, on-car electronic technology began to significantly improve in the 1980s. As both electronic and analogue systems became lighter, smaller and more powerful – key aspects of any piece of equipment added to an F1 car – teams and especially engine manufacturers began to run more complex systems onboard.

The first electronics were used for performing management tasks in addition to telemetry to improve car reliability and performance. These management systems were precursors of what you would find in your modern road car, helping to improve the efficiency and reliability of the engine while performing diagnostics and tracking journeys.

In F1, the very first of these electronic systems were onboard only, lacking the ability to transmit back to the pits. Instead, technicians would download the data from the onboard memory when the car was back in the garage. Initially, storage was limited to just one lap’s worth of data, so the driver would be given a signal on the pit-board to turn on the telemetry for a particular lap, and the data would then be taken off the car when it returned to the garage. Tall, rack-mounted computers increasingly began to occupy garage space, alongside the more conventional mechanics’ tools.

These faltering steps marked the beginnings of the data age in Formula 1.

The early 1980s also saw the introduction of electronic engine management systems. When McLaren introduced the TAG Turbo engine for the MP4/1E in 1983, it came with an advanced Bosch system that brought together the control of fuel injection and ignition within the same unit. This allowed the electronics to control power, drivability, and fuel efficiency to a much greater degree than had previously been possible.

Fuel usage was an important problem to solve at the time. In 1985, cars were limited to 220 litres of fuel without refueling; in 1986, it was tightened further to 195 litres, meaning that accurate and efficient monitoring of the fuel became increasingly important.

McLaren’s 1985 MP4/2B was the team’s first to feature an electronic readout in the cockpit detailing the fuel level remaining. Using this technology, Alain Prost crossed the line first in that year’s San Marino Grand Prix after Ayrton Senna’s Lotus and Stefan Johansson’s Ferrari both ran out of fuel ahead of him (Prost was later disqualified when his car was found to be underweight).

The system was still unreliable, however; Prost famously threw caution to the wind to win his second world title in Adelaide in 1986, despite his fuel readout warning him that he was severely in the red. Luckily for the Frenchman, it was wrong!

As with all things in F1, speed was of the essence, and waiting to download the data from the car took too long before it could be made useful. By the second half of the ’80s, the first streams of data were becoming available in the garage before the car had made it back to the pitlane.

This was ‘burst’ telemetry – in which the car would use radio signals to broadcast key pieces of data back to the garage as it went past the pits on each lap – resulting in a ‘burst’ of data for that lap. This small sample was then available to engineers several minutes before the car came back to the pits and the fuller picture of recorded data could be examined.

 

Despite the advances made during the sport’s first 40 years, it was the 1990s that finally saw an explosion of computing capability – on the car itself, but also throughout the whole team.

By 1993, the sport had really exploited computer technology to run ‘active-ride’ cars. It was an era that exploited electronic control systems even more than today’s cars: active-ride suspension kept the cars stable; power-steering assisted the driver; power-braking increased traction into corners, and traction control facilitated the smoothest possible exit.

Much more data needed to be collected from the car, and analysed at a much higher frequency than ever before. That job was given to a series of ever more powerful and speedier machines. As technology on the cars increased, the tech used to download and stream data back in the garage became increasingly advanced. In turn, the machinery back at the factory became bigger and faster.

So while the ruling authorities began to rein in the use of assistive technology on the cars themselves, the sport ramped up its use of computers in every other area. It marked the beginning of the overhaul of the sport.

 

Nowadays, F1 relies on the internet to stream everything – from telemetry to television – around the world, connected at 10x the speed of regular home broadband.

In the 1960s, when electronic systems were just starting to be used in Formula 1, the internet had not even been invented; it was 1969 before ARPANET, the first large-scale network, connected together four machines at universities in America. This network was so slow by today’s standards that it would have taken more than five hours to send a three-minute music track from one machine to another.

In 2016, you could transfer the same amount of data from trackside at the Australian Grand Prix to the McLaren Technology Campus in just hundredths of a second!

With constraints on the number of personnel allowed at events, and the amount of equipment that can realistically be freighted around the world, there are now teams of engineers back at every team’s base who have the same capability to access data as their trackside colleagues.

Real-time telemetry data is streamed across the world when the car is on the circuit, and engineers can collaborate on analysis, and share simulation data within seconds between the factory and the track. Speed in this process is essential to make sure as much useful development data is produced during the short but intense practice sessions held on grand prix weekends.

The computers used in a modern F1 team are without compromise: top-of-the-range.

Ultra-portable laptops provide on-the-road engineers with access to the data, simulation and analysis tools they need to optimise the car’s performance at the next event. High-performance workstations give groups at base the ability to quickly run complex simulations on data from a variety of sources. Specialist software such as SAP HANA allows engineers to search thousands of laps of data for exactly the piece of information they need to help them with performance in a race weekend.

And specially designed hardware clusters – groups of tens to hundreds of computers working together on complex mathematics – power computational fluid dynamics (CFD) systems, used for improving aerodynamic performance, as well as running the simulator that lets the driver develop the car when not on track.

 

In addition to these physical computers, all teams leverage cloud-computing: unlike traditional machines, cloud computers are fully virtual, running in massive data-centres around the world and accessed over the internet.

When an engineering team needs a complex problem solved or an enormous volume of data analysed, the cloud can provide thousands of these virtual machines on-demand, dedicated to the task at hand. This technology offers an unprecedented speed and throughput capability with an amount of power which could not feasibly be matched by computers on-site at the factory or track. Additionally, special links can be established over the internet allowing faster transfer of data between the team and the cloud servers, as well as providing first-class security for the sensitive data.

To harness the power of computing now available to them, F1 engineers use sophisticated and custom-made software that you wouldn’t find on your home or office computer.

At McLaren, we’ve developed our own highly sophisticated data analysis and simulation platform, which enables every single engineer within the team to leverage our historic and real-time systems data. This platform unifies access to a wide variety of data, from the cars on track at grands prix, to laps driven by our test drivers in the simulators; from aerodynamic data generated by the windtunnel to specialised test equipment for specific car components, such as the clutch or brakes.

Because all of this data can be accessed in the same way, new exploratory and analysis tools can be quickly and easily developed for emerging needs, as and when they occur in the fast-paced environment of Formula 1. This data platform also provides a solid foundation for numerous specialised, high-performance data applications for specific engineering disciplines. Virtually every engineering group within the team – from suspension and brakes to chassis and race engineering – has its own dedicated suite of software tools to analyse the data that matters most to it.

The use of computers in Formula 1 has changed the face of the sport and contributed immeasurably to the engineering process of building fast cars. It continues to allow teams to push the boundaries of simulation, development and analysis technology, and go racing with well set-up and optimised cars at the beginning of race weekends.

As Formula 1 evolves, its use of computers and software continues at the rapid pace required to support the ever-changing design and engineering challenges.