
Human Factors Engineering for the Auto Industry

Paul Green
05/05/2014

Paul Green is a Research Professor and Leader of the Driver Interface Group at the University of Michigan Transportation Research Institute.

"A problem with research on cognitive load is that nobody has really defined what it is. We operationalize it by saying it is how well you do a given task."

Could you give us a little bit of background about yourself and how you ended up where you are today?

Paul Green: Decades ago, I was an undergraduate student at Drexel University in Philadelphia studying mechanical engineering and got very interested in human factors. As a co-op student at the Philadelphia Naval Shipyard, I found that failure to design equipment for use by sailors had more impact on effective use of the equipment than traditional mechanical engineering considerations. As a senior, I participated in a multi-university competition to design a car that maximized fuel economy. I did the ‘HMI.’ An obvious place to go to graduate school to study human factors, then and now, was the University of Michigan. I started doing automotive work and never really looked back. There was a research institute here that I had very indirect connections with as a student (now the University of Michigan Transportation Research Institute, UMTRI), and it was a perfect fit with what I wanted to do after receiving my PhD. Shortly after joining what is now UMTRI, I started teaching classes.

I understand that the University of Michigan has a famous Human Factors Engineering short course?

This is a point of great pride for me. The Human Factors Engineering Short Course is now in its 55th year and there is nothing else like it in the world. It is two weeks long. The people who attend are a cross section of the profession. We get about 50 people each year. They are working for aircraft manufacturers, designing power plants, working in the automotive industry, and so forth. It really is a broad cross section of people doing human-factors work who need to know more about the topic.

Some of the attendees have had formal education in human factors, almost always graduate degrees, while others have had no formal education in this topic but, for one reason or another, are doing human factors work and really need a solid grounding. The instructors include me and about a dozen other people. Some instructors are from Michigan, some from other universities and companies. These people are really top notch. Typically, three or four of the instructors have been past presidents of the Human Factors and Ergonomics Society. We are really talking about people who are the experts in the field, who literally "wrote the book," and they are extraordinarily good speakers. They have published extensively and are really very good at what they do.

Another characteristic of this course, which we have really pushed hard over the last few years, is to try to give people hands-on experience in doing things. For example, there are now a number of software applications in our profession that are important. One is CogTool, for figuring out task times. A lot of the military folks are interested in simulations, so we teach the class how to use IMPRINT. The difference between this course and the average course on human factors is that you actually spend a few hours learning how to use these tools, not just looking at PowerPoint slides.

Networking, exchanging ideas inside and outside of class, is an important course element. People eat breakfast, lunch and dinner together and they are staying at the same hotels. They connect. They find colleagues with expertise they need. Google gives you the 10,000 publications that are relevant to a problem. An expert can tell you which three references will be most useful, including those that are not published.

It’s often helpful to bring a group of people together with different experiences around the same topic.

We get the same cover story but with different angles to it. I can remember one year we had somebody from the Naval Underwater Centre in Pensacola, Florida, who was designing SCUBA and underwater gear. Just by chance we happened to have in the same room somebody from a mine safety equipment company in Pittsburgh who was designing respiratory gear for miners, and we also had somebody in the room from a hospital equipment firm who was designing respiratory gear. They are really very different applications, but they are all facing the same kind of issues. Without a class like ours, these three people would never have met. They are not competitors, so the opportunities for them to collaborate are quite good.

Getting into safe design, I remember pretty vividly last year the AAA study that was reported on in a lot of mass media regarding cognitive load. It was met with a fair bit of criticism. How do you assess those findings and what was your take on the study?

A problem with research on cognitive load is that nobody has really defined what it is. We operationalize it by saying it is how well you do a given task. If you were to build a model of human information processing, you see that there are three key elements to cognitive operations: There is a cognitive processor that is processing information on multiple dimensions; you have short-term memory and long-term memory.

When you say something has cognitive effects, what is it affecting? Is it affecting this processor? Is it interfering with memory retrieval and if so, which type of memory? Is it interfering with processing only certain types of information? The point is that it’s possible to operationalize cognitive load in several ways, but which definition is being used?

The problem of undefined or ill-defined terms is not unique to cognitive load. I am writing a recommended practice that defines driving performance measures (SAE J2944), which we might want to cover in detail later, and writing it has really made me acutely aware of this weakness in our field.

I assume that this opens up the field to criticism?

It does. We have a lot of practical problems that we have to solve. It is apparent that many of the systems that we worry about are going to distract drivers and lead to crashes. But we need to do a better job of defining the things that we measure so that when we present the findings of studies, others can rely on them.

So making a generalization about hands-free design and hands-free devices is not possible then?

It is very difficult. There is an SAE information report (J2972) in progress that works toward defining hands-free. The point is that there is no one way to define the term. As an example, hands-free could literally mean ‘there is no visual-manual interaction.’ You just talk. Then there is another version where you have to press a push-to-talk button to engage the system. But you could also say that there are two steps that you have to go through. They are all hands-free, but they are different variants of the term. So rather than saying that this is hands-free and having one definition when it is disputable what is hands-free, what we need are several definitions. Then, a study or a specification would cite hands-free definition A, or hands-free definition B.

Would a guideline favor or choose one definition over another as more applicable to the auto industry?

You could say hands-free means no hands. Period. End of story. But then you could say that it is not practically going to be implemented that way. There will be a push-to-talk button. Well, that makes sense as a definition for hands-free too. So rather than trying to fight about which definition is the definition, we are much more likely to make progress by saying there are several viable ways to define it. Identify those ways, so we know which option or variation people use, and move on.

Here is an example:

As I mentioned earlier, I am working on an SAE recommended practice, J2944 - Operational Definitions of Driving Performance Measures and Statistics. This practice originated roughly seven years ago in an SAE meeting when we were talking about the problem of undefined measures and statistics. At that meeting was a graduate student from another university who was looking for a master’s thesis. Even though I am in Michigan, I became his advisor. He examined several hundred articles, of which over a hundred dealt specifically with driving performance. His task was to see what names are used for driving performance measures and statistics in the literature (if they are defined at all).
The classic example I use is: "What is a lane departure?" You will see terms like ‘lane departure’, ‘lane exceedence’, ‘lane bust’, ‘over the lane line’. I think there were over 13 different names. And no name was used more than about 25% of the time or so.

And the definitions could range from crossing over into another lane to just hitting the edge of your lane?

It’s even worse than that. They were only defined 10-15% of the time. We began to ask, "What is a lane departure?" and we actually came up with 11 different definitions. For example, if the focus is on a front tire, a departure could be that tire touching the inside of the lane marking, touching the outside (4 to 6 inches away), or being completely outside the line (which can add another 7 inches). Furthermore, for tractor-trailers on curves, there are cases where the cab is in the lane, but the trailer is not, so we need to have three cases for the rear tires. The definitions individually made sense, and importantly, the range of differences was large enough to have practical consequences.
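To make the point concrete, here is a minimal sketch of how those front-tire variants could be operationalized. This is purely illustrative: the thresholds, function name, and measurement convention are assumptions paraphrasing the numbers mentioned above, not the official J2944 wording.

```python
# Illustrative only: these definitions and numbers paraphrase the
# interview's examples and are NOT the official SAE J2944 text.

MARKING_WIDTH_IN = 5.0   # lane marking width (the interview cites 4 to 6 inches)
TIRE_WIDTH_IN = 7.0      # tire width (the interview cites about 7 inches)

def lane_departure(tire_outer_edge_in: float) -> dict:
    """Classify a lane departure under three front-tire definitions.

    tire_outer_edge_in: lateral position of the tire's outer edge,
    measured from the inside edge of the lane marking
    (positive = past the inside edge, toward leaving the lane).
    """
    return {
        # Definition A: tire touches the inside of the lane marking
        "touch_inside": tire_outer_edge_in > 0.0,
        # Definition B: tire touches the outside of the marking
        "touch_outside": tire_outer_edge_in > MARKING_WIDTH_IN,
        # Definition C: tire is completely outside the marking
        "fully_outside": tire_outer_edge_in > MARKING_WIDTH_IN + TIRE_WIDTH_IN,
    }

# The same vehicle position counts as a departure under one
# definition but not another:
print(lane_departure(3.0))   # flags A only
print(lane_departure(14.0))  # flags A, B, and C
```

The same tire position at 3 inches is a departure under definition A but not B or C, which is exactly why a study must say which definition it used before its departure counts can be compared with anyone else's.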

There are now a number of conferences that are very interested in this recommended practice and they are going to require compliance for papers to be accepted. If people don’t use these definitions or provide something of equivalent specificity, their papers will be rejected. This emphasizes how important it is that engineering procedures and the research to develop them be repeatable.

How do you think a regulation like this will affect brand DNA? Is it going to make it more difficult?

No, I don’t think it is going to affect brand DNA at all. This practice does not say where you can put something or what it should look like. It just says that if you are going to measure the usability of an HMI using lane departures, gap, or time to line crossing, those performance measures could be determined in several ways. Pick the definition you prefer for each one. However, we do provide guidance as to which definition is recommended in some situations. It is not going to tell manufacturers that they can’t build their interface a certain way. It is just going to say that if you are going to do a study, tell us what you did. And here are the ways that you might do that.

I don’t think it is going to have any implications in terms of restricting or confining what people do as we were inclusive in listing how measures and statistics could be defined. If anything, it will reduce restrictions because we have identified many ways of defining terms that experts would not ordinarily identify. It is going to make the work that people do to evaluate driver interfaces much better, because everyone will understand what was done and most importantly, we’ll finally be able to compare studies with each other. This is fundamental to the science that underlies our engineering work.

To return to the issue that underlies your question: How do we deal with brand DNA variety versus the need for consistency across brands? If each vehicle, by expressing their brand DNA, makes a unique interface, then when a driver moves from vehicle to vehicle he/she may not know what to do. We have had that struggle with automotive interfaces for years. One brand’s DNA dictates that we put the windshield wiper control on a stalk on the left side and it rotates while another has a windshield wiper switch on the right side and it moves up and down. The fundamental issue is what level of consistency is necessary for safe and easy operation of a vehicle by ordinary drivers that allows for the manufacturer’s expression of interface design, which is part of brand DNA.

This issue applies not only to the traditional controls and displays that have been there, but also, as we go to more complicated interfaces, the same issue holds. How does a manufacturer make their interface unique and interesting? On the other hand, if it is truly unique, will the driver be able to get in and know what to do? It is not at all clear that that will be the case.

When do we have to say that everybody has to do this the same way because the driver is going to be disadvantaged if they can’t figure it out? And when can a manufacturer be uniquely expressive?

The approach that NHTSA has taken, and the approach I would recommend for this problem, is to specify driver performance requirements, such as the maximum time allowed for a task. There are many instances, however, where the desired outcome can only be achieved by a design specification: put the device here and it should work like this.

Thank you for your insights.
