Chip Hall of Fame: Intel 4004 Microprocessor

The first CPU-on-a-chip was a shoestring crash project

Intel 4004

The Intel 4004 was the world’s first microprocessor—a complete general-purpose CPU on a single chip. Released in March 1971, and using cutting-edge silicon-gate technology, the 4004 marked the beginning of Intel’s rise to global dominance in the processor industry. So you might imagine that the full resources of Intel—still a fledgling company at the time—were devoted to this groundbreaking project. But in fact, the 4004 was an understaffed side project, a crash job that nearly crashed, one simply intended to drum up some cash while Intel developed its real product line, memory chips.

As described by Ken Shirriff in a July 2016 feature for IEEE Spectrum, the increasing transistor count and complexity of integrated circuits in the 1960s meant that by 1970, multiple organizations were hot on the trail of the microprocessor. Some of these, like Texas Instruments, had far more resources than Intel. So why did Intel, founded just a few years earlier, in 1968, cross the finish line first? It was largely thanks to four engineers, one of whom didn’t even work for the company. (For a lengthy version of this story from the engineers themselves, you can read their oral history panel, as captured by the Computer History Museum.)

The first of the four engineers is Masatoshi Shima, who worked for the Japanese office calculator company Busicom, which wanted to create a new computerized calculator. In April 1969, Busicom and Intel signed a provisional agreement for Intel to develop a custom set of chips for the calculator. Consequently, in June 1969 Shima and others traveled to Intel to discuss the plans in more detail. Shima proposed an eight-chip system: three chips to interface with peripherals such as the keyboard and printer, one chip to store data, one chip to store program code, and two chips that together would make up the CPU.

img

Masatoshi Shima at the Computer History Museum’s 2009 Fellows Award event, and the Busicom calculator that was the target application for the world’s first microprocessor.

Ted Hoff is the second engineer in our tale; he was the head of the Intel applications department that was negotiating with Busicom. Hoff was worried that Intel would struggle to produce so many chips, especially because the system would require many pins per chip to interconnect, which would push the limits of the ceramic packaging technology Intel was using. He proposed halving the chip count: one 256-byte program memory chip, dubbed the 4001; one 40-byte data memory chip, the 4002; a peripheral interface chip, the 4003; and one CPU chip, the 4004. The whole system—called the MCS-4—would be 4-bit, significantly reducing the number of pins needed to interconnect the chips. Hoff brought in engineer No. 3, Intel’s Stanley Mazor. Together Hoff and Mazor put together a set of specs for each chip and a proposed production schedule.

At a follow-up meeting in October of 1969, Intel made its counterproposal. Busicom was interested and Shima returned to Japan to prototype software for the new calculator to make sure the MCS-4 architecture would support Busicom’s needs. An agreement was made in February 1970, with Busicom planning its calculator rollout on the basis of Hoff’s and Mazor’s schedule. It was decided that Shima would come back to California to check on progress in April 1970. The chips were to be put into production on a staggered schedule from July to October 1970, starting with the 4001 and ending with the 4004.

However, unbeknownst to Shima and Busicom, the 4004 project had ground to a halt inside Intel in early 1970. The problem was that Hoff and Mazor were not chip designers—those people who can take specifications and create detailed logic-gate diagrams. Those diagrams, in turn, are used to work out exactly how and where transistors and other components are to be patterned on the physical chip.

In fact, there was no one at Intel who could take on the job, as the company was then focused on developing memory chips. Finally, Intel made one of the great hires of all time and introduced the fourth critical person in this story: Federico Faggin, a young engineer uniquely suited to the job. At the start of his career, Faggin had designed and built a computer from scratch for Olivetti, in Italy. Then in the late 1960s, he had joined Fairchild Semiconductor, in Silicon Valley, where he made key contributions to the advanced metal-oxide-semiconductor (MOS) technology that Intel’s chips relied on. Faggin wanted to work in a more entrepreneurial environment than Fairchild, and so he accepted an offer from Intel in April 1970.

On Faggin’s first day on the job, Mazor briefed him on the Busicom project. As Faggin wrote in his personal account of the 4004’s development for the Winter 2009 issue of IEEE Solid-State Circuits Magazine, when he saw the schedule: “My jaw dropped: I had less than six months to design four chips, one of which, the CPU, was at the boundary of what was possible.”

img

From left, Federico Faggin, Ted Hoff, and Stanley Mazor holding Intel 4004 processors at the National Inventors Hall of Fame in 1996

The original schedules were based on estimates suitable for designing memory chips—which use many repeating elements—rather than processor chips, which use complex and varied logic circuits. In addition, Faggin had no support staff and none of the tools and infrastructure that other companies had to help create and test digital logic designs.

A few days after Faggin’s start, Shima landed in the States for his progress check. Mazor and Faggin went to pick him up from the airport and bring him back to Intel. Shima was expecting to see a logic-level plan for the chips that he could check against the agreed-upon specifications. “Shima was furious when he found out that no work had been done in the five months and he became very angry at me…. It took almost one week for Shima to calm down,” wrote Faggin.

Faggin worked out a new schedule, and it was agreed that while Intel set about hiring more people for the project, Shima would stay for six months to help with the design. Faggin himself dived into 70- to 80-hour workweeks.

Faggin worked through the chips in order of complexity: the 4001 ROM, followed by the 4003 interface chip, then the 4002 RAM, followed finally by the 4004 CPU. Shima checked the logic of the chips and provided feedback on how they would fit into Busicom’s larger calculator design. At the end of 1970, the chip design was complete. Faggin added a personal flourish to the CPU’s layout: He placed his initials along the edge of the processor, a microscopic “F.F.” etched into every 4004 made. Busicom finally had a complete working set of MCS-4 chips in March 1971.

img

Federico Faggin and an enlarged picture of the Intel 4004 die. The 4004 had 2,300 transistors.

As Busicom had commissioned the chipset, it had exclusive rights to the design, preventing Intel from selling the 4004 to anyone else. But after some prompting from Hoff and others about the processor’s potential, Intel offered to give Busicom a break on the cost of the chips if Intel could sell the 4000 family for noncalculator applications. Busicom agreed, and Intel began advertising the 4004 in November 1971: “Announcing a new era of integrated electronics,” blared the ad copy—a rare case of absolute truth in advertising.

The Technological Future of Surgery

The future of surgery promises an amazing cooperation between humans and technology, one that could raise the precision and efficiency of surgery to levels we have never seen before.

Will we have small, Matrix-like surgical robots? Will they move organs in and out of patients’ bodies?

The scene is not impossible. We have certainly come a long way from ancient Egypt, where doctors performed invasive surgeries as far back as 3,500 years ago. Only two years ago, NASA teamed up with the American medical company Virtual Incision to develop a robot that can be placed inside a patient’s body and then controlled remotely by a surgeon.

That’s why I strongly believe surgeons have to reconsider their stance towards technology and the future of their profession.

Virtual Incision - Robot - Future of Surgery

Surgeons have to rethink their profession

Surgeons are at the top of the medical food chain. At least that’s the impression the general audience gets from popular medical drama series and their own experiences. No surprise there. Surgeons bear huge responsibilities: with a single incision on a patient’s body they can cause irreparable damage or work medical miracles. No wonder that, with the rise of digital technologies, operating rooms and surgeons are inundated with new devices aimed at making the smallest cuts possible.

We need to engage with these new surgical technologies in order to make everyone understand that they extend the capabilities of surgeons rather than replace them.

Surgeons also tend to distance themselves from patients; the human touch is not necessarily the quintessence of their work. However, as technological solutions find their way into their practice and take over some of their repetitive tasks, I would advise them to rethink their stance. Treating patients with empathy before and after surgery would ensure their services remain irreplaceable in the age of robotics and artificial intelligence.

As a first step, though, the surgical community has to familiarize itself with the current state of the technology affecting the OR and their job. I talked about these future technologies with Dr. Rafael Grossmann, a Venezuelan surgeon who was part of the team performing the first live operation using medical VR and who was also the first doctor ever to use Google Glass live in surgery.

Future of Surgery

So I have collected the technologies that will have a huge impact on the future of surgery.

1) Virtual reality

For the first time in the history of medicine, in April 2016 cancer surgeon Shafi Ahmed performed an operation using a virtual reality camera at the Royal London Hospital. It was a mind-blowingly huge step for surgery. Everyone could participate in the operation in real time through the Medical Realities website and the VR in OR app. Whether a promising medical student in Cape Town, an interested journalist in Seattle, or a worried relative, anyone could follow, through two 360-degree cameras, as the surgeon removed cancerous tissue from the patient’s bowel.

This opens new horizons for medical education as well as for the training of surgeons. VR could elevate the teaching and learning experience in medicine to a whole new level. Today, only a few students can peek over the shoulder of the surgeon during an operation. This way, it is challenging to learn the tricks of the trade. By using VR, surgeons can stream operations globally and allow medical students to actually be there in the OR using their VR goggles. The team of The Body VR is creating educational VR content as well as simulations aiding the process of traditional medical education for radiologists, surgeons, and physicians. I believe there will be more initiatives like that very soon!

2) Augmented reality

As there is a lot of confusion around VR and AR, let me make it clear: AR differs from VR in two very important ways. Users of AR do not lose touch with reality, and AR puts information into their field of view as quickly as possible. With these distinctive features, AR has huge potential for helping surgeons become more efficient. Whether they are conducting a minimally invasive procedure or locating a tumor in the liver, AR healthcare apps can help save lives and treat patients seamlessly.

As might be expected, the AR market is buzzing, and more and more players are emerging in the field. The promising start-up Atheer develops an Android-compatible wearable and the complementary AiR cloud-based application to boost productivity, collaboration, and output. Medsights Tech has developed software to test the feasibility of using augmented reality to create accurate three-dimensional reconstructions of tumors; the complex image-reconstruction technology essentially gives surgeons X-ray views—without any radiation exposure, in real time. EchoPixel’s True 3D medical visualization system allows doctors to interact with patient-specific organs and tissue in an open 3D space, enabling them to immediately identify, evaluate, and dissect clinically significant structures.

Google Glass - Future of Surgery

Grossmann also told me that HoloAnatomy, which uses the HoloLens to display anatomical models built from real data, is a wonderful and rather intuitive use of AR, with obvious advantages over traditional methods.

3) Surgical robotics

Surgical robots are the prodigies of surgery. According to market analysis, the industry is about to boom. By 2020, surgical robotics sales are expected to almost double to $6.4 billion.

The most commonly known surgical robot is the da Vinci Surgical System, and believe it or not, it was introduced 15 years ago! It features a magnified 3D high-definition vision system and tiny wristed instruments that bend and rotate far more than the human hand can. With the da Vinci Surgical System, surgeons operate through just a few small incisions. The surgeon is 100% in control of the robotic system at all times, and he or she is able to carry out more precise operations than previously thought possible.

Recently, Google announced that it has started working with the pharma giant Johnson & Johnson to create a new surgical robot system. I’m excited to see the outcome of the cooperation soon. They are not the only competitors, though. With their AXSIS robot, Cambridge Consultants aim to overcome the limitations of the da Vinci, such as its large size and its inability to work with highly detailed and fragile tissues. Their robot instead relies on flexible components and tiny, worm-like arms. The developers believe it could later be used in ophthalmology, e.g. in cataract surgery.

Da-Vinci-Surgical-Robot - Future of Surgery

4) Minimally Invasive Surgery

Throughout the history of surgery, the ultimate goal of medical professionals has been to peer into the workings of the human body and to improve it with the smallest possible incisions and excisions. By the end of the 19th century, after Edison produced his lightbulb, a Glasgow physician built a tiny bulb into a tube to be able to look around inside the body.

But it wasn’t until the second half of the 20th century that fiber-optic threads brought brighter light into the caverns of the body. And later, tiny computer-chip cameras started sending images back out. At last, doctors could not only clearly see inside a person’s body without making a long incision, but could also use tiny tools to perform surgery inside. One of the techniques revolutionizing surgery was the introduction of laparoscopes.

The medical device start-up Levita aims to refine such procedures with its Magnetic Surgical System, an innovative platform that uses magnetic retraction to grasp and retract the gallbladder during laparoscopic surgery.

The FlexDex company introduced a new control mechanism for minimally invasive tools. It transmits movement from the wrist of the surgeon to the joint of the instrument entirely mechanically and it costs significantly less than surgical robots.

5) 3D Printing and simulations in pre-operative planning and education

Complicated and risky surgeries lasting many hours require a lot of careful planning. Existing technologies such as 3D printing and various simulation techniques help greatly in reforming medical practice and learning methods, as well as in modelling and successfully planning complex surgical procedures.

In March 2016 in China, a team of experienced doctors decided to build a full-sized model of the heart of a small baby born with a heart defect. Their aim was to pre-plan an extremely complicated surgery on the tiny heart. This was the first time someone used this method in China. The team of medical professionals successfully completed the surgery. The little boy survived with little to no lasting ill-effects.

In December 2016, doctors in the United Arab Emirates used 3D printing technology for the first time to help safely remove a cancerous tumour from a 42-year-old woman’s kidney. With the help of the personalized, 3D-printed aid, the team was able to plan the operation carefully and to shorten the procedure by an entire hour!

The technology has also started to gain a foothold in medical education. To provide surgeons and students with an alternative to a living human being to work on, a pair of physicians at the University of Rochester Medical Center (URMC) have developed a way to use 3D printing to create artificial organs that look, feel, and even bleed like the real thing. Truly amazing!

To widen the range of methods available for effectively learning the tricks of the trade, Touch Surgery developed a simulation system: an app for practicing procedures ranging from heart surgery to carpal tunnel operations.

6) Live diagnostics

The intelligent surgical knife (iKnife) was developed by Zoltan Takats of Imperial College London. It works by using an old technology in which an electrical current heats tissue to make incisions with minimal blood loss. With the iKnife, a mass spectrometer analyzes the vaporized smoke to detect the chemicals in the biological sample. This means it can identify whether the tissue is malignant in real time.

The technology is especially useful in detecting cancer in its early stages and thus shifting cancer treatment towards prevention.

Surgical iKnife - Future of Surgery

7) Artificial Intelligence will team up with surgical robotics

Catherine Mohr, vice president of strategy at Intuitive Surgical and an expert in the field of surgical robotics, believes surgery will be taken to the next level by the combination of surgical robotics and artificial intelligence. She is thrilled to see IBM Watson, Google DeepMind’s AlphaGo, and machine-learning algorithms play a role in surgical procedures. She envisions a tight partnership between humans and machines, with one making up for the weaknesses of the other.

In my view, AI systems such as the deep-learning platform Enlitic will soon be able to diagnose diseases and abnormalities, and will also give surgeons guidance on their sometimes extremely difficult surgical decisions.

Artificial Intelligence in Surgery - Future of Surgery

I agree with Dr. Mohr insofar as I truly believe the future of surgery, just like the future of medicine, means close cooperation between humans and medical technology. I also cannot stress enough that robots and other products of rapid technological development will not replace humans. The two will complement each other’s work in a way more successful than we have ever seen or dreamed of before. But only if we learn how.

Reference: http://medicalfuturist.com

Why humans will always be smarter than artificial intelligence- Phil Wainewright

Not for the first time in its history, artificial intelligence is rising on a tide of hype. Improvements to the technology have produced some apparently impressive advances in fields such as voice and image recognition, predictive pattern analysis and autonomous robotics. The problem is that people are extrapolating many unrealistic expectations from these initial successes, without recognizing the many constraints surrounding their achievements.

Toy robot in front of blurred keyboard, code © Patrick Daxenbichler - Fotolia.com

Machine intelligence is still pretty dumb, most of the time. It’s far too early for the human race to throw in the towel.

People are misled by artificial intelligence because of a phenomenon known as the Eliza effect, named after a 1966 computer program that responded to people’s typed statements in the manner of a psychotherapist. The computer was executing some very simple programming logic. But the people interacting with it ascribed emotional intelligence and empathy to its replies.
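To see how little machinery that takes, here is a minimal, hypothetical sketch in Python of the kind of keyword-and-template matching ELIZA relied on. It is not the original 1966 code; the rules and phrasings are invented for illustration.

```python
# A toy ELIZA-style responder (illustrative sketch, not the original 1966 program):
# match a keyword pattern, then reflect the user's own words back inside a canned
# template. There is no understanding anywhere in this loop, only pattern matching.
import re

RULES = [
    (r"\bI feel (.*)", "Why do you feel {0}?"),
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
]
DEFAULT_REPLY = "Please go on."

def reply(statement: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, statement, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return DEFAULT_REPLY

if __name__ == "__main__":
    print(reply("I feel nobody listens to me"))
    # -> "Why do you feel nobody listens to me?"
    print(reply("I am anxious about my exams"))
    # -> "How long have you been anxious about my exams?"
```

A handful of regular expressions is enough to produce replies that feel attentive, which is exactly the gap between what the program does and what people read into it.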

The same phenomenon happens today in our reactions to the apparent successes of machine learning and AI. We overestimate its achievements and underestimate our own performance because we rarely stop to think how much we already know. All of the context we bring to interpreting any situation is something we take for granted.

Machines are good at pattern matching

Computers are much better than us at only one thing — at matching known patterns. They can only match the patterns they have learned, and they have limited capacity to learn more than just a few patterns. Humans are optimized for learning unlimited patterns and then selecting the patterns we need to apply to deal with whatever situation we find ourselves in. This is a skill that’s been honed by millions of years of evolution.

This is why Buzzfeed writer Katie Notopoulos was able to crack Facebook’s new algorithm and wind up her friends the other week. She successfully matched a pattern in a way that Facebook’s algorithm couldn’t fathom — as she explains, the Facebook machine doesn’t really know what it’s doing, the best it can do is to just try and match patterns that it’s been told look like friendship:

This algorithm doesn’t understand friendship. It can fake it, but when we see Valentine’s Day posts on Instagram four days later, or when the machines mistake a tornado of angry comments for ‘engagement’, it’s a reminder that the machines still don’t really get the basics of humanity.

This echoes Douglas Hofstadter’s far more erudite takedown of AI for The Atlantic last month, The Shallowness of Google Translate. If you understand both French and English, then just savour for a moment this put-down of Google’s translation skills:

Clearly Google Translate didn’t catch my meaning; it merely came out with a heap of bull. ‘Il sortait simplement avec un tas de taureau.’ ‘He just went out with a pile of bulls.’ ‘Il vient de sortir avec un tas de taureaux.’ Please pardon my French — or rather, Google Translate’s pseudo-French.

A takedown of Google Translate

Hofstadter is generous enough to acknowledge Google’s achievement in building an engine capable of converting text between any of around 100 languages by coining the term ‘bai-lingual’ — “‘bai’ being Mandarin for 100” — yet thoroughly demolishes its claim to be performing anything truly intelligent:

The bailingual engine isn’t reading anything — not in the normal human sense of the verb ‘to read’. It’s processing text. The symbols it’s processing are disconnected from experiences in the world. It has no memories on which to draw, no imagery, no understanding, no meaning residing behind the words it so rapidly flings around.

A friend asked me whether Google Translate’s level of skill isn’t merely a function of the program’s database. He figured that if you multiplied the database by a factor of, say, a million or a billion, eventually it would be able to translate anything thrown at it, and essentially perfectly. I don’t think so. Having ever more ‘big data’ won’t bring you any closer to understanding, since understanding involves having ideas, and lack of ideas is the root of all the problems for machine translation today.

Enterprises are constantly encountering the limitations of that lack of ideas in their quest to apply machine learning and artificial intelligence to today’s business problems. Last year I listened to a presentation at the Twilio Signal conference in London by Sahil Dua, a back-end developer at Booking.com. He spoke about the work the travel reservation site has been doing with machine learning to autonomously tag images.

Of course, we all know that the likes of Google, Amazon and Microsoft Azure already offer generic image tagging services. But the problem Booking.com encountered was that those services don’t tag images in a way that’s useful in the Booking.com context. They may identify attributes such as ‘ocean’, ‘nature’ or ‘apartment’, but Booking.com needs to know whether there’s a sea view, whether there’s a balcony with a seating area, whether there’s a bed in the room and what size it is, and so on. Dua and his colleagues have had to train the machines to work with a more detailed set of tags that matches their specific context.
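For a concrete sense of what “training the machines to work with a more detailed set of tags” can look like, here is a hedged sketch of a multi-label image tagger with domain-specific labels. The tag names, backbone choice, and training loop are assumptions made for illustration; this is not Booking.com’s actual pipeline.

```python
# Sketch: adapt a generic vision backbone to emit domain-specific, multi-label
# tags (e.g. "sea_view", "balcony") instead of generic labels like "ocean" or
# "apartment". Hypothetical tag set and data; not any company's real system.
import torch
import torch.nn as nn
from torchvision import models

TAGS = ["sea_view", "balcony", "seating_area", "double_bed", "single_bed"]  # assumed tags

def build_tagger(num_tags: int = len(TAGS)) -> nn.Module:
    backbone = models.resnet18()                                 # generic image backbone
    backbone.fc = nn.Linear(backbone.fc.in_features, num_tags)   # one score per tag
    return backbone

def train_step(model, images, labels, optimizer):
    # labels: float tensor of shape (batch, num_tags), 1.0 where a tag applies.
    criterion = nn.BCEWithLogitsLoss()                           # independent yes/no per tag
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = build_tagger()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    images = torch.randn(4, 3, 224, 224)                         # stand-in for a photo batch
    labels = torch.randint(0, 2, (4, len(TAGS))).float()
    print("loss:", train_step(model, images, labels, optimizer))
```

The key difference from an off-the-shelf tagging service is simply that the output layer and the labelled examples encode the site’s own vocabulary, so the model is judged on the questions that matter in that context.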

Why humans will always be smarter than AI

This concept of context is central to Hofstadter’s lifetime of work to figure out AI. In a seminal 1995 essay, he examines an earlier treatise on pattern recognition by the Russian researcher Mikhail Bongard and comes to the conclusion that perception goes beyond simply matching known patterns:

… in strong contrast to the usual tacit assumption that the quintessence of visual perception is the activity of dividing a complex scene into its separate constituent objects followed by the activity of attaching standard labels to the now-separated objects (ie, the identification of the component objects as members of various pre-established categories, such as ‘car’, ‘dog’, ‘house’, ‘hammer’, ‘airplane’, etc)

… perception is far more than the recognition of members of already-established categories — it involves the spontaneous manufacture of new categories at arbitrary levels of abstraction.

For Booking.com, those new categories could be defined in advance, but a more general-purpose AI would have to be capable of defining its own categories. That’s a goal Hofstadter has spent six decades working towards, and he is still not even close.

In her BuzzFeed article, Katie Notopoulos goes on to explain that this is not the first time that Facebook’s recalibration of the algorithms driving its newsfeed has resulted in anomalous behaviour. Today, it’s commenting on posts that leads to content being over-promoted. Back in the summer of 2016, it was people posting simple text posts. What’s interesting is that the solution was not a new tweak to the algorithm. It was Facebook users who adjusted—people learned to post text posts, and that made them less rare.

And that’s always going to be the case. People will always be faster to adjust than computers, because that’s what humans are optimized to do. Maybe sometime many years in the future, computers will catch up with humanity’s ability to define new categories — but in the meantime, humans will have learned how to harness computing to augment their own native capabilities. That’s why we will always stay smarter than AI.

Artificial Intelligence and Human Intelligence: The Essential Codependency – Falon Fatemi

Imagine the following scenario. It’s Monday morning. You wake up to an alarm. It’s been set automatically and synchronized to your work schedule for the day. As you partake in your hygiene regimen, your key nutritional KPIs (such as hydration, body mass, and haemoglobin levels) are calibrated for you. You strap yourself into your car but there’s no need to steer. It “knows” where you need to go. And, on account of the technology embedded in the roads, real-time travel information, and the like, the roads are clear of congestion. Everything is connected and is programmed to operate at peak efficiency. Upon arriving at work, all the mundane prospecting and outreach activities have been performed for you. You are poised to focus on decision making and relationship building and can spend more time on your sales and marketing strategy.

The hurdle of turning the above scenario into a reality seemed insurmountable a handful of years ago. But, thanks to the emergence of artificial intelligence, it’s an imminent reality. While we tend to associate AI with fictional movies—with Star Wars, 2001: A Space Odyssey, Minority Report, and the like—the concepts and gadgets they introduced are quickly becoming the ‘new normal’. We’ve already started to leverage AI to automate mundane tasks, including home delivery (e.g., Instacart), navigation (e.g., Google Maps), transportation (e.g., Uber), digital music selection (e.g., Spotify), and more.

Regardless of application, the real power of AI lies in its “contextual awareness,” namely its ability to sense and respond to current context. The potential of AI is especially exciting in the context of sales and marketing. AI is already helping sales and marketers automate mundane and tedious tasks and streamline day-to-day activities. AI technologies, such as Node, are so advanced that they can pinpoint the most lucrative entry points into potential customers and can even recommend conversation openers. The sky is the limit in terms of the breadth of questions that it can answer. How do I get access to the right buyers at target companies? What is my ideal customer persona? How do I optimize my sales and marketing team performance? Leveraging AI, machine learning, and natural language processing, Node empowers sales and marketers by allowing them to personalize marketing strategies to suit the tastes of individual clients.

Yet despite the enormous promise of AI, the reality is such that sales and marketing interactions are deeply personal. While many believe that AI has the potential to render human intelligence obsolete, this is far from the truth. Quite the opposite, AI advancements will only underscore the importance of human intelligence. By liberating sales and marketing teams from tedious work (such as manual CRM entry or scouring social media sites to find introductions into accounts), AI allows them to focus on what really matters – developing strong relations with customers. Applying AI to sales and marketing processes fosters more human interaction, not less.

It’s inevitable that the winners of tomorrow will need to use AI technology to improve sales and marketing efforts. But, at the end of the day, they will still need to know how to relate to and interact with other human beings. The most effective teams will learn how to leverage the best of both AI and human intelligence to build a revenue strategy that will improve client engagement, increase conversion, and drive positive ROI.

In an attempt to marry the powers of AI and human intelligence, Node has partnered with Chris Voss, formerly a lead hostage negotiator for the FBI and the author of Never Split the Difference. Voss’ vast experience in negotiations with the world’s toughest criminals has equipped him with a deep understanding of the most effective negotiation tactics. The potential of bringing this know-how into the business arena is enormous. Based on Voss’ experience, teams who have combined AI with human-based emotional intelligence techniques are able to quadruple their sales rates. In 2018, Node, in collaboration with Voss, is launching an event series that will teach participants how to unite AI with Voss’ human intelligence-driven tactics to close sales deals. Sessions will be dedicated to advanced negotiating techniques, scenario planning, and more.

Advancements in AI technologies and capabilities will inevitably impact the ways in which sales and marketing teams operate. Businesses will need to adapt and embrace a radically new approach to sales and marketing. It will not be business as usual. Developing a well-crafted strategy that marries AI and human intelligence will be critical. Just as in the case of hostage negotiations, doing so can mean the difference between survival and demise.

– Falon Fatemi is CEO and founder of Node, a first-of-its-kind AI platform that transforms how businesses analyze relationships between entities on the web to uncover new opportunities. She worked at Google for six years, where she started at age 19 as the company’s youngest employee. She has spent over a decade focused on go-to-market strategy, global expansion, and building strategic partnerships at Google, YouTube, and in the startup world.

Why Your GPS Receiver Isn’t Bigger Than a Breadbox

Bradford W. Parkinson shepherded the first GPS constellation to launch and pushed for civilian access

By Tekla S. Perry

Photo of Parkinson

Date of birth: 16 February 1935

Birthplace: Madison, Wis.

Education: B.S. in general engineering, U.S. Naval Academy, 1957; M.A. in aeronautics and astronautics, Massachusetts Institute of Technology, 1961; Ph.D. in aeronautics and astronautics, Stanford University, 1966

Current position: Professor emeritus, recalled to active duty, Stanford University

First jobs: Supermarket carryout and stock boy, general labourer in construction, newspaper delivery

First tech job: Surveyor for construction projects

Most surprising job assignment: As an instructor at the Air Force Academy, flying 26 air combat missions to troubleshoot the electronics on the AC-130 gunship

Patents: Seven

Most recent books read: The Winter Fortress: The Epic Mission to Sabotage Hitler’s Atomic Bomb, by Neal Bascomb

Favourite book: Tortilla Flat, by John Steinbeck

Favourite music: Classical, particularly Sergei Rachmaninoff, Edvard Grieg, and Ludwig van Beethoven

Favourite food: Spaghetti with meatballs

Favourite restaurant: Café Roma, San Luis Obispo, Calif.

How his spouse would describe him: Intense

After hours/leisure activity: These days, hiking, snowshoeing; in the past, sailing, skiing

Car: BMW Z4 and a GMC truck

Organizational Memberships: IEEE, Institute of Navigation, Royal Institute of Navigation, SME, American Institute of Aeronautics and Astronautics

IEEE Medal of Honor: “For fundamental contributions to and leadership in developing the design and driving the early applications of the Global Positioning System”

Other major awards: ASME Medal, Charles Stark Draper Prize for Engineering, Marconi Prize, Royal Institute of Navigation’s Gold Medal, Institute of Navigation’s Thurlow Award

As I drive through the vineyard-covered hills of San Luis Obispo, Calif., the tiny Global Positioning System receiver in my phone works with Google Maps to alert me to upcoming turns. The app reassures me that I’ll arrive at my destination on time, in spite of a short delay for construction. How different this trip would have been in the pre-GPS era, when the obscured road sign at one intersection would likely have sent me off track. I have a weak sense of direction, and getting lost—or worrying about getting lost—was a stressful part of my life for a long time.

This GPS-guided journey is taking me to Bradford W. Parkinson, the person who made GPS technology—a tool we now take for granted—come together. Parkinson is being awarded the 2018 IEEE Medal of Honor for leading the development of GPS and pushing its early applications. “Just don’t call me the inventor of GPS,” he says moments after we meet. “I was a chief advocate, the chief architect, and a developer, but I was not the inventor.” How about “leader”? “Even that’s overblown. I surrounded myself with guys who would not fail.” Brad Parkinson may be modest about his contributions, but it’s hard to dispute that he was the person who turned a pie-in-the-sky vision of navigating by satellite into a reality.

Parkinson’s preparation for his GPS role began early, with a passion for maps. The walls of Parkinson’s boyhood bedroom were covered with large maps of northern Minnesota’s Boundary Waters—lakes and streams he loved to explore by canoe. “It was easy to get lost,” he recalls. “You had to keep your wits about you.” Then there was his summer job in 1957, just after graduation from the U.S. Naval Academy, as a surveyor on a large construction project.

His graduate education put down another stepping-stone. Sent to MIT in 1960 by the U.S. Air Force, which he’d joined rather than the Navy, he took several courses in inertial navigation. Charles Stark Draper was teaching them, and so it was an irresistible opportunity. That coursework led to a three-year post as chief analyst for inertial navigation systems testing at Holloman Air Force Base. In 1964, he headed off to Stanford for PhD studies. His thesis advisor was Benjamin Lange. “Ben wanted to put a free-rotor gyroscope in orbit to test the general theory of relativity,” Parkinson says.

Parkinson invented a sensor that could tell the position of the rotor relative to the desired axis. Using an algorithm he called hemispheric torquing, he could then apply a magnetic field to adjust the rotor’s position, sending it spinning along the desired axis without changing its overall position in space. Parkinson’s technology is still in use today in some highly accurate inertial navigation systems.

As Parkinson’s knowledge of navigation and space systems grew, the seeds that became GPS were being planted by others. In 1960, the U.S. Navy began testing its Transit program, a satellite-based method of updating the inertial guidance systems used by submarines. Transit’s system worked with as few as four satellites (though the constellation typically included more) in low polar orbits. Along with a network of ground stations, the satellites allowed slow-moving vessels to determine their longitude and latitude a few times a day with an accuracy of about 100 meters.

Ivan Getting, president of the Aerospace Corp., didn’t think that was good enough. In 1962, he started campaigning for a three-dimensional satellite-positioning system that would be more accurate and always available. Getting told me some years ago that he promoted this vision to the presidential science advisor, the heads of the armed forces, and anyone else he thought could have influence, trying “to get the damn thing funded.” Getting’s evangelism led to an Air Force–sponsored study of space-based navigation. The final report, by James Woodford and Hiroshi Nakamura, was published in 1966, although it remained classified until 1979. It laid out 12 main techniques, including the one that became GPS.

In 1972, Parkinson’s path and satellite navigation’s evolution collided. Parkinson had spent the previous year studying at the Naval War College, in Newport, R.I., and sailing whenever he could. Up next would likely be an assignment in the Pentagon. Then he got a call from a colonel who was part of a group known as the Air Force’s inertial guidance mafia. “This wasn’t a black-hat organization,” Parkinson explains, “just people who had gone through the MIT inertial guidance course who looked out for each other.” The colonel recommended that if Parkinson wanted to build systems rather than just analyze them, he should join the Advanced Ballistic Reentry System (ABRES) program, in Los Angeles.

Parkinson, just promoted to colonel himself, took the advice and moved to Los Angeles. He’d been at ABRES a little over three months, working on advanced nose cones and other missile technology, when he was identified as a perfect fit to take over a satellite navigation program called 621B. Parkinson had the right qualifications, but he didn’t want the job. “The consensus was that the program was going nowhere, that it was absolutely dead,” he says.

But it was a three-star general making the offer, Parkinson recalls, and you don’t say no to a general—usually. Parkinson said he’d take the job if he could be named program manager. Anything less, he said, would have been a downward move and wouldn’t have allowed him to control the program—“and the program was in deep trouble.” The general refused. “So I said, ‘Then I don’t volunteer,’” Parkinson recalls. “He wasn’t used to brand-new colonels saying ‘No’ to him.” Parkinson walked out of the office—but he had barely gotten through the door, he says, when the general called him back. Parkinson got his title and took over 621B in mid-1973.

The 621B program aimed to create a satellite-based navigation system that would work almost anywhere in the world. The team had already developed much of the plan and wanted to demonstrate it using four satellites—not an inexpensive proposition. Parkinson began by going through every piece of the proposal with his engineers.

“We were a little worried when he first came on,” recalls Walter Melton, an engineer assigned to the project from the Aerospace Corp. “We heard that he was from the inertial mafia.” The engineers were concerned that Parkinson would be biased against satellite navigation, which was considered a competitor to inertial navigation. “But after the first several weeks it became clear he understood and was a supporter.”

In August 1973, Parkinson presented the proposal to the Defense Systems Acquisitions Review Council at the Pentagon. “I told all these generals and senior civilians sitting around a table what I was trying to do, and then they took a vote,” he recalls. The vote was “No.” Malcolm Currie, then undersecretary of defense research and engineering, chaired that meeting. At the time, Currie was spending a lot of time near his home in the Los Angeles area, preparing to move his family to Washington, D.C. During one of Currie’s Los Angeles trips, Parkinson gave him a one-on-one tutorial on satellite navigation that took up most of an afternoon.

Parkinson now thinks that afternoon was the reason satellite navigation didn’t die after the “No” vote. Indeed, it made an ally of Currie, who quickly reminded Parkinson that the concept presented was merely one he had inherited, not developed himself. Parkinson reported in an oral history that Currie told him, “Listen, you did a very, very nice job, but you and I know that this is not truly a joint program…. Go back, reconstitute it as a joint program, and bring it to me as quickly as you possibly can, and I am very, very certain that we are going to approve it.”

Parkinson and his engineers worked over Labor Day weekend to develop a new architecture for their satellite-navigation system. They met at the Pentagon rather than in L.A., he says, “because too many people associated with the program were entrenched in old ideas.” Gathering in offices that were vacant because of the holiday, Parkinson says, “We hammered out what we wanted to do, and we summarized it in seven pages.”

Parkinson recalls that the “Lonely Halls” meeting, as it came to be known, led to several key changes: The system’s code-division-multiple-access (CDMA) radio signal was modified to include a civilian signal as well as the protected military signal; the orbits of the satellites were adjusted to reduce the number of satellites needed at the optimal altitude, considering the range of available launch vehicles; and the design embraced orbiting atomic clocks, which would free ground-based receivers from the need to keep precise time.

Parkinson says this third change was the most risky—atomic clocks that could handle space radiation did not yet exist. But he knew that Roger Easton at the Naval Research Laboratory was developing a space-qualified atomic clock as part of the Navy’s Timation satellite navigation program—and he bet that some version of that clock would be available for the demonstration satellites.

This decision turned out to be critical for the cheap, small GPS receivers that consumers use today. If instead we all had to carry around superaccurate clocks, the receivers would be vastly more expensive and as large as a stack of dictionaries, Parkinson says. They also would require periodic synchronization to maintain accuracy. They certainly wouldn’t have turned into a tiny package of electronics costing a fraction of a dollar, tucked inside every cellphone.
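The reason a receiver can get by with a cheap clock is the standard pseudorange formulation (a textbook simplification, not a line from Parkinson): with at least four satellites in view, the receiver’s clock error becomes just another unknown to solve for.

$$
\rho_i \;=\; \sqrt{(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2} \;+\; c\,b, \qquad i = 1, \dots, 4,
$$

where $(x_i, y_i, z_i)$ is the broadcast position of satellite $i$ (timed by its onboard atomic clock), $\rho_i$ is the measured pseudorange, $c$ is the speed of light, and $b$ is the receiver’s clock bias. Four measurements give four equations in the four unknowns $(x, y, z, b)$, so the receiver solves for its own clock error instead of having to keep precise time.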

And Parkinson badly wanted consumers to use the new system. The mission of the project, in his view, was always twofold—extraordinary accuracy and affordability. He even hung a wooden plank above the entrance to the project’s offices in Los Angeles to reinforce the message: “The mission of this office is to drop five bombs in the same hole and to build a cheap set that navigates—and don’t you forget it.”

Parkinson spent the months after the Lonely Halls meeting selling the proposal to Pentagon staff and decision-makers. He flew to Washington as often as twice a week, holding some 60 meetings in two months (he still has a list).

He parried every doubt: Yes, the signal would be powerful enough to be detected in the surrounding noise. Yes, the system’s 10-meter accuracy was achievable. Yes, US $180 million would cover the constellation of four satellites and related ground equipment. (The final price was about $250 million, but that included two added satellites—not a horrible overrun, Parkinson says.)

“He was quite the salesman,” says Melton, his colleague in the 621B program.

The Defense Council approved the proposal in December 1973. Parkinson led the program for three and a half years, until the first GPS satellite was up in space and initial tests verified that the system worked as designed.

Easton’s atomic clock, it turns out, was not ready for that initial launch, but Parkinson had engineers at Rockwell International also working on a space-worthy atomic clock, which was ready. Parkinson still gives Easton credit.

“Easton convinced me that we could do it—and that made a heck of a difference,” he says.

The full 24-satellite system became operational in 1995. The Russian GLONASS system, a similar project begun during the Soviet era, was also completed in 1995. Both the European Union’s Galileo system and China’s BeiDou system are expected to be completed in 2020.

Parkinson retired in 1978 from the Air Force, but he didn’t leave GPS behind. After several positions in industry (including vice president of Rockwell International’s Space Systems Group and vice president at Intermetrics), he returned to Stanford in 1984, this time as a professor of aeronautics and astronautics. He immediately rejoined the orbiting gyroscope project, now called Gravity Probe B, as program manager and a coprincipal investigator; it successfully launched in 2004.

He also led a research group aimed at developing civilian applications for GPS technology. That work led to a robotic sailboat, the first GPS-guided landing of a commercial aircraft, and a system of ground stations that would improve the accuracy of GPS positioning by monitoring and correcting the satellite data. The last project evolved into the Federal Aviation Administration’s Wide Area Augmentation System, which uses data from ground stations to improve GPS’s accuracy by correcting errors in the signal caused, for example, by orbital drift and delays introduced by the atmosphere.

Parkinson’s group also developed the application he is most passionate about today: automated tractors. He had kept automated tractors in his sights for some time; he listed the application as part of GPS’s future in talks he gave as early as 1978.

It wasn’t until the 1990s, though, that he got his chance to push the technology along. At Stanford, he met with a representative from John Deere who was building ties between the company and universities. Parkinson demonstrated a GPS-guided self-driving golf cart.

“Think tractor,” he told the visitor.

The Deere representative was skeptical that farmers would buy such a system, Parkinson recalls, but the company was eager to partner with Stanford and agreed to fund a development project.

“They sent us about $900,000 and two huge tractors,” Parkinson says. A team of students spent several years developing the technology, first demonstrating a fully functional system in 1997.

Watching the rise of GPS-guided precision farming since then has been gratifying, he adds. Parkinson’s home overlooks a farm, and he often walks in the fields, sometimes spotting, to his delight, a GPS-guided tractor at work. “The tractors pay for themselves in savings in fertilizer and in time.”

“I think he likes the agricultural application because it brings home that GPS is for everyone,” says Penina Axelrad, a former Ph.D. student of Parkinson’s who is now a professor at the University of Colorado. “Now, of course, GPS is in everyone’s smartphones, but that was an early application that everyone could value.”

Parkinson is now mostly retired, though he still has research projects running at Stanford.

He remains one of GPS’s biggest fans—he has more than a dozen devices in his house and car that use GPS, including a watch he wears most of his waking hours.

“He just gets so excited when he sees cool things enabled by GPS,” says Axelrad.

He is also quick to protect GPS when he feels that it’s threatened. Right now, he sees a big threat coming from Ligado Networks, which aims to create a broadband wireless network using the 1,525- to 1,559-megahertz frequency band. This band is adjacent to the frequencies used by GPS, which are between 1,164 and 1,587 MHz, nestled among other bands essentially reserved for satellite communications. Ligado’s band is reserved for Earth-to-satellite communications with some limited use of cell towers to help users connect to the network. Back in 2011, however, the U.S. Federal Communications Commission considered giving Ligado’s predecessor, LightSquared, a conditional waiver to use the frequency band for unlimited ground-based communication. The GPS industry protested, showing data on interference, and in 2012 the waiver was pulled. But Ligado recently came back with a proposal for a lower-power system that it says won’t interfere with GPS.

That proposal is still before the FCC. But testing by the Department of Transportation shows extensive interference, Parkinson says, particularly for the most accurate devices. He’s been working on an editorial to alert the public to the danger of the proposal. Ligado aims to change the designation “of a quiet signal from space to powerful ground transmitters,” he writes. “They would apparently use this to compete with the existing broadband companies. This country already has at least four broadband providers but has only one GPS.”

Says Parkinson, “We endanger it at our peril.”

This article appears in the May 2018 print issue as “GPS’s Navigator in Chief.”