WRIGHT-PATTERSON AIR FORCE BASE, Ohio -- It could be argued that the one persistent challenge faced by the Air Force over its 70-year history is how best to integrate Airmen with cutting-edge technology.
Most pressing, from the earliest days of aviation, was the need to protect the human body from the potentially deadly forces generated by advances in aircraft speed, maneuverability and altitude capabilities.
Even in the pre-Air Force days leading up to World War II, aircraft were reaching altitudes that necessitated onboard oxygen systems to keep pilots and crews conscious and alive during missions. This was closely followed by the development of aircraft with pressurized fuselages, such as the B-29, which allowed crews to fly high-altitude missions without oxygen masks and the cumbersome heated flight suits that protected them from subzero temperatures.
The advent of the jet age brought ever-increasing altitudes and gravitational forces -- known as G-forces -- to bear on the pilot, necessitating the development of G-suits, which keep blood from pooling away from the pilot's brain and so minimize blackouts, and ejection seats, which allow pilots to safely escape aircraft operating at high speed and altitude.
The testing of these technologies quickly became the public face of the Air Force's human performance research and human factors engineering.
Baby Boomers routinely saw newsreel films and magazine photos of researchers testing ways to protect pilots from the effects of high G-forces and altitudes with rocket sleds, centrifuges and atmospheric chambers. They even saw the balloons of Project Excelsior, in which an Airman, then-Capt. Joseph Kittinger, protected by a pressure suit, made a free-fall jump from 19 miles above the Earth's surface.
It was physiological research necessary to keep advancing the Air Force's capabilities in the air, and later, in space. But it also made for good theater for the public.
Concurrent Study
However, from the very beginnings of the Air Force, there has been a concurrent, less theatrical study of another interface between humans and their machines, one just as groundbreaking: that between the machine and the human brain. It is research that is pivoting from an emphasis on optimizing tools for use by Airmen to creating technologies that will work with Airmen as partners.
Cognitive research by the Air Force began with an issue created by the enormous U.S. production output during World War II: lack of uniformity between aircraft cockpits and displays.
"There wasn't such a thing as a standard cockpit configuration, and aviators were confusing things like landing gear and putting flaps down," said Morley Stone, the chief technology officer for the Air Force Research Laboratory at Wright-Patterson Air Force Base in Dayton, Ohio. "Of course, that was leading to a variety of mishaps. … Really, that gave birth to the whole field of human factors engineering."
Air Force Lt. Col. Paul Fitts led the research team at Wright-Patterson that developed a consistent method for laying out aircraft cockpits and instruments, allowing a pilot to quickly and efficiently comprehend the current state of the aircraft. The team also developed methods to manipulate controls more reliably, no matter the airframe.
"That key research that occurred here at Wright-Patterson, as well as elsewhere, enabled the standardization of the key instrumentation needed to fly an aircraft," said Mark Draper, a principal engineering research psychologist with the 711th Human Performance Wing at the AFRL. "It's called a T-scan pattern. Pilots quickly learned the T-scan to rapidly ascertain if their aircraft is doing they want it to do. That became the standard for decades."
However, as new weaponry, onboard radars, sensors, communications and command and control technologies were added to airframes, pilots and crews quickly became overwhelmed by too much information for the human brain to process efficiently, a condition that pilots call a "helmet fire."
"A key milestone, which was really significant, was the introduction of the glass cockpit," said Draper. "Over several decades of just adding more controls and hardware instruments here and there, the real estate became really limited.
"If we were able to put in computer monitors, if you will, into the cockpit, we would be enabling the reusing of that real estate," he continued. "We could tailor the information towards a particular mission or phase of flight. The controls and the displays could be changed. That opened up a wealth of opportunity to not only provide more capability to the pilot, but also to enable the introduction of graphics into cockpits to make the information more easily understood and utilized."
Efficient Workflow
These concepts, advanced by human factors engineering at AFRL, have led to further research making the workflow of Airmen in many career fields more efficient, and have even crossed over into the civilian sector.
According to Stone, this type of research led to everything from the development of the computer mouse, optimizing how a person inputs information into a computer, to eye-tracking studies analyzing how Airmen best recognize and use intelligence, surveillance and reconnaissance information displayed on a monitor, to wearable devices that can measure a human's current physical state: heart rate, blood pressure and respiration.
Yet for all of these advances in streamlining interfaces and presenting data in more digestible packets on ergonomic displays, the limits of human cognition still present a ubiquitous obstacle for the future Air Force to efficiently integrate man and machine.
Stone and Draper said they believe one way to scale this obstacle is to enable Airmen to share some of their workload with a partner -- a silicon-based partner. Draper and his team at the Human Autonomy Lab at AFRL focus on how to better interconnect human intelligence with machine intelligence as we move into the future.
"Seventy years into the future, we'll still be limited by the fact that we have a very limited short-term memory, we get bored easily, we're not known to just sit there and stare at one place for a long period of time. That's why our eyes move a lot," said Stone. "We're looking at a whole variety of tools, not just wearable sensors, but other types of non-invasive standoff sensors that look at things like heart rate and respiration and other physical cues, … and trying to get that information out in such a way that you can make it readable to that future synthetic teammate."
Synthetic Partners
These sensors, coupled with ever-increasing computing capabilities, could lead to Airmen of 2087 routinely conducting missions with a synthetic partner that will not only shoulder some of the workload, but also constantly monitor the carbon-based Airman's physical, mental and emotional state before recommending mission options.
"Computational power is getting ever more powerful. Also, computational power is becoming more miniaturized, so you can start putting it more places," said Draper. "At the same time, you're increasing the reasoning capabilities of the machines to collect domain knowledge, assess the conditions and create courses of action.
"We have sensors becoming very miniaturized and able to sense the human physiology without even being attached to the human," he added. "In a vision of the future, artificial intelligence can serve to continually monitor the human while the human is engaged in various tasks, and then dynamically adapt the interaction with the machinery, the interaction with the environment, and the offloading of tasks -- all with the express purpose of better team performance."
According to Draper, one of the Air Force's first forays into the realm of operational autonomous computing was the introduction of flight management systems into cockpits during the 1980s.
"Up until then, you had preplanning and the pilots did all the navigation with a navigator," he said. "Then they introduced a flight management system, which would automatically generate routes [and] give you the waypoints all the way from point A to point B. However, the initial design of these systems was less than great, and we ran into lots of problems, lots of mishaps. This inspired research in order to better design how humans interact with automation which is critical, especially when we start talking about increasingly intelligent systems that are going to be introduced to future military systems."
These initial steps began a slow gradation from applying autonomous systems as advisors, to allowing them to shoulder some mission requirements, to a possible future in which they handle some tasks on their own.
"The Air Force in its history has focused very strongly on the cockpit and crew stations for aircraft. However, where we're going is expanding well beyond the cockpit," said Draper.
"The autonomous capability that we currently have is fairly nascent," he said. "Current algorithms are limited, certainly imperfect. We want to design to remedy that … intelligent assistants that sit on your shoulder that sift through data that look for correlations and relationships and present those in an easily digestible way to our airmen to consider. … We want to reduce the overall workload associated with the airmen, but the airmen still retain key decision making authority."
Trust Enables Symbiosis
The key ingredient in a symbiosis between carbon-based and silicon-based Airmen is the development of trust.
Consider the amount of trust you have that your consumer-grade GPS or cellular navigation system will correctly plot the best route to your destination and give you timely cues to execute that route. This is the bridge that must be designed and optimized between Airmen and their synthetic counterparts.
"As autonomy becomes more trusted, as it becomes more capable, then the Airmen can start offloading more decision-making capability on the autonomy, and autonomy can exercise increasingly important levels of decision making," said Draper. "That's a migration you slowly incorporate as you unleash autonomy, as its capability dictates, and then you reel it back in when you need to, when your trust in it drops and you know that you need to become more engaged, you tighten the leash. The airman and machine will share decision making, and at times one or the other takes the lead depending on the particular context."
Draper said this trust will be achieved through a paradigm designed with a series of checks and balances, in which Airmen can override an autonomous decision and artificial intelligence can sense an Airman's fatigue, stress or miscalculation and suggest an alternative course of action.
"Humans make errors too, right? We all know this," said Draper. "We should have an almost equivalent artificial intelligence looking at overall system performance, telling the airman, 'Hey, human! What you're doing here potentially can really disrupt some complex things. Do you really want to do that?'"
Draper said he believes autonomous systems will never be given the keys to the kingdom and turned loose to execute missions completely on their own, without human management and authorization. There will always be an Airman in the loop, working with technology to do the right thing. The nature and level of Airman engagement will change with new technology, but the critical role of the Airman -- as supervisor, teammate and overseer -- will persist.
"Imagine a perfect assistant with you while you work on a car," Draper said. "You're struggling and you're switching between many different tasks. All the while, you have this intelligent assistant that is constantly supporting you; reaching and moving tools out of your way and bringing in new tools when you need it, or showing you pictures and giving you computer readouts of the engine at exactly the right time. That sort of symbiotic, tight-synced relationship between humans and autonomy is what I envision 70 years from now -- true teammates."
A wireless transmitter attached to a base-layer shirt with an embedded sensor tracks the exercise performance of 1st Lt. Wes Baker at the 711th Human Performance Wing, Air Force Research Laboratory, at Wright-Patterson Air Force Base in Dayton, Ohio. Air Force photo by J.M. Eddins Jr.