
Against all levels of abstraction: Software Timing Verification

Automotive IQ
02/19/2015

In 1998 Guillem Bernat came from Mallorca, Spain to the University of York for a six-month post-doc position. Months became years and, by co-founding Rapita Systems, he ended up staying in York for 16 years. Originally a spin-out from his research at the University of York, the company was created to commercialise new execution time analysis technology, with new methods for determining worst-case execution time based on measurements and static analysis. Rapita Systems quickly evolved into providing a whole spectrum of tools for all targets and verification activities. Today, Dr. Bernat keeps close ties to the automotive industry because, while the electronics in cars have been growing almost exponentially, the cost of verification is growing even faster.

Dr. Bernat, how does Rapita Systems approach software timing in automotive?

We observe the execution of the software on the physical target, focusing on the full software stack of a typical vehicle's architecture. We want to understand the vulnerabilities of the ECU, so we analyse what is happening inside the particular ECU, as that's where the most challenging problem is. If you can't get that right, then any higher-level model you build is unreliable, because you're basing your analysis on numbers and estimates that are not accurate.

Is it about understanding the scheduling of ECUs?

It's about how runnables execute, and the interactions that result from having a real-time operating system. We have a tool called RapiTime that does detailed analysis inside the schedulers and runnables, all the way down to each C function, even every basic block. This relies on the ability to get data out of the system. The trade-off is that on CPUs with limited resources for extracting data, we need to run analyses at a lower resolution. With detailed tracing capabilities, for example via hardware tracing interfaces or direct hardware probes, you can perform much, much more refined analyses.
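To make the measurement idea concrete, here is a minimal, hypothetical sketch of on-target instrumentation; it does not reflect RapiTime's actual API, and read_cycle_counter() stands in for a target-specific register read. Each instrumentation point writes an identifier and a cycle-counter timestamp into a RAM trace buffer, which is drained later over a debug or tracing interface.

    #include <stdint.h>

    /* Hypothetical instrumentation sketch (not RapiTime's actual API): each
     * instrumentation point records an identifier and a cycle-counter timestamp
     * into a RAM trace buffer, which is drained offline for timing analysis. */

    #define TRACE_CAPACITY 1024u

    typedef struct {
        uint16_t ipoint_id;   /* which function or basic block was reached */
        uint32_t timestamp;   /* free-running cycle counter value */
    } trace_entry_t;

    static trace_entry_t trace_buffer[TRACE_CAPACITY];
    static volatile uint32_t trace_index;

    extern uint32_t read_cycle_counter(void);   /* platform-specific, assumed */

    static inline void rt_ipoint(uint16_t ipoint_id)
    {
        uint32_t i = trace_index;
        if (i < TRACE_CAPACITY) {
            trace_buffer[i].ipoint_id = ipoint_id;
            trace_buffer[i].timestamp = read_cycle_counter();
            trace_index = i + 1u;
        }
    }

    /* A build step would insert calls such as rt_ipoint(42); at function
     * entry, exit, or basic-block boundaries of the code under analysis. */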

For today's engineers it is crucial to verify the timing performance of their critical real-time embedded software systems in order to understand how well those systems are performing. Why is it becoming increasingly difficult to get software timing correct?

The hardware is becoming more and more complex because end users expect a level of functionality that was not imaginable even a couple of years ago. The unique selling feature of cars now is having more gadgets that implement really complex computer-controlled features. That means more software, more software means more CPU power, and more CPU power means significantly more complex processors, which are harder to make reliable.

Ten years ago, the typical CPUs in a car were ones where every instruction took one cycle, so every time you ran a piece of code it would take exactly the same amount of time. Today, you have processors with advanced hardware features like caches and a lot of complex interactions. Now we're moving to the next generation, which is multi-core, and this makes understanding the timing behaviour even harder. So basically, you have much more complex software, driven by requirements, which requires more powerful CPUs. These are, by design, much more complex and behave quite differently.

So how do you manage such complexity?

There is a dimension I'd like to call the levels of abstraction: a typical development process has teams, let's say control engineers, who are responsible for designing the control algorithms, while the way in which those algorithms are converted into code and put onto the hardware is the responsibility of somebody else. This reflects the shift from small teams, in which engineers had a full understanding of the whole system, to a development model with large teams of people, in which nobody really understands every single detail of the system. That mismatch leads to particular challenges, because there is no clear, optimal decision about software or hardware architecture when nobody has the capacity to grasp the full complexity of the whole system. The system is simply too big to fit inside a single person's brain.

So, we at Rapita Systems see ourselves as providing a solution to these problems. Our tools provide the evidence that application developers need. They help them to understand how execution works across the control algorithms, the operating system, the AUTOSAR stack and indeed the whole system.

Where do you see the importance of upcoming multi-core processors? Would that make your lives much easier in terms of not having to worry about precise timing anymore?

I think it is a big problem that will soon arise. I've seen the same argument made in different places, where people say, "Multi-core provides such an increase in performance that we won't need to worry about timing anymore."

Except that when you compare the increase in the theoretical performance of multi-cores against the increase in requested functionality, and therefore in lines of code, they don't even grow at the same rate; multi-core processors do not provide enough CPU power to satisfy the extra functionality required. So suddenly you move to a more complex system, a more complex hardware architecture, with more complex software, and this makes the complexity of the whole system grow exponentially. I do not think that multi-core is the solution. It is just the next step in a natural evolution where you need to provide more computational power, but it makes the software timing much more challenging.


Would you say that multi-core systems don't pay off in some aspects of timing analysis?

We've done lots of research over time, and the consistent outcome of these projects is that multi-core processors behave well in the average case, but in safety-critical systems, where you have to guarantee the worst case, there are quite a few scenarios in which a system running on a multi-core is actually slower than it would be on a single core. These are the rare cases that need to be reviewed, understood and avoided in the design phase. Otherwise you have the situation where a piece of software in a car is tested, but when it is actually deployed on the road, where the space of possible behaviours is much bigger, there can be interactions that were never tested beforehand because it was impossible to cover all cases. Then watchdog timers reset the systems and report faults, with the added risk of a costly recall.

Multi-core is a necessity; manufacturers are pushing for multi-core, so it needs to be embraced, but it also introduces tremendous challenges that need to be understood. That's where we see a lot of potential in our tool set, because we provide the mechanism for understanding the actual timing behaviour of software running on multi-cores. We sometimes also see what the interference from competing for shared resources is.
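As a hedged illustration of the kind of evidence involved (the function names below are assumptions, not part of any real toolset): record the observed maximum execution time of a runnable, once with the second core idle and once with a memory-intensive co-runner on the other core, and compare the two high-water marks to get a first indication of interference on shared resources.

    #include <stdint.h>

    /* Illustrative sketch only: keep a high-water mark of a runnable's
     * execution time. Comparing the value obtained with the other core idle
     * against the value obtained with a memory-intensive co-runner on that
     * core exposes interference on shared buses, caches and memory.
     * read_cycle_counter() and step_runnable() are assumed, target-specific. */

    extern uint32_t read_cycle_counter(void);
    extern void step_runnable(void);

    static uint32_t observed_max_cycles;

    void measure_runnable_once(void)
    {
        uint32_t start = read_cycle_counter();
        step_runnable();
        uint32_t elapsed = read_cycle_counter() - start;

        if (elapsed > observed_max_cycles) {
            observed_max_cycles = elapsed;   /* new worst case observed so far */
        }
    }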

If we look again at the broader picture of the software timing, what kind of trade-offs are there between the effort and precision required?

There is an argument that we cannot make perfect systems, or that it is too expensive to make the perfect system. Regardless of the cost, it is essential to have at least enough precision and effort to satisfy the minimum requirements of the standards that are in place. In such complex systems this can be prohibitive. You therefore have to combine alternative means of providing evidence about timing, derived from observation, testing or analysis. Different levels of precision, when put together, provide the additional confidence that the system behaves as expected. True automation is one of the ways in which you can achieve the same level of precision at a fraction of the effort.

We have customers, for example, who, after a long deployment process, report that they achieved the same level of precision as on previous systems but at 10% of the effort. This is because a process previously done manually, requiring months of effort, can now be done in minutes by automation. You can therefore afford to repeat the timing analysis process continuously, so that any indication of a timing issue is identified quickly and can be addressed as soon as possible. The later these issues are identified, the more costly they are to fix, because you have to go back to the software architecture and requirements and repeat the process all over again, with additional risk, cost and production delays, not to mention a delayed release date for the final version.
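A minimal sketch of what such an automated, repeatable check might look like (the names and the idea of per-runnable cycle budgets are purely illustrative): each runnable's observed worst case is compared against its timing budget on every build, so a regression is flagged immediately rather than months later.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative sketch of an automated timing regression check: compare
     * the observed worst case of each runnable against its budget and report
     * any violation. Names and numbers are hypothetical. */

    typedef struct {
        const char *name;
        uint32_t    budget_cycles;        /* allowed worst case */
        uint32_t    observed_max_cycles;  /* from the latest measurement run */
    } timing_check_t;

    int run_timing_checks(const timing_check_t *checks, int count)
    {
        int failures = 0;
        for (int i = 0; i < count; i++) {
            if (checks[i].observed_max_cycles > checks[i].budget_cycles) {
                printf("TIMING FAIL: %s observed %u > budget %u cycles\n",
                       checks[i].name,
                       (unsigned)checks[i].observed_max_cycles,
                       (unsigned)checks[i].budget_cycles);
                failures++;
            }
        }
        return failures;   /* non-zero fails the automated test run */
    }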

I understand that you are constantly engaged with collaborative research projects. What does the future hold for timing analysis tools?

The next generation of timing analysis will be probabilistic timing analysis. The idea behind it is that end systems are becoming so complex that, by adding transparent ways of randomising the timing behaviour, you can describe that behaviour statistically. It will be possible to provide, with a high degree of confidence, upper bounds that the execution times will never exceed. This becomes even more important and relevant for complex multi-core processors, because they really are quite far from predictable or deterministic behaviour. I should stress that this is not something for the tools of today; this is more the 10-year plan.
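As a deliberately simplified illustration of the idea (probabilistic worst-case execution time methods fit extreme-value distributions to the tail of the measurements; the empirical quantile below is only a stand-in): collect many execution-time samples from the randomised system and report a value that was exceeded in only a small fraction of them.

    #include <stdint.h>
    #include <stdlib.h>

    /* Simplified stand-in for probabilistic timing analysis: from many
     * measured execution times of a randomised system, report the value not
     * exceeded in a fraction `quantile` (e.g. 0.9999) of the samples. Real
     * methods fit extreme-value distributions to extrapolate beyond the
     * observed data. */

    static int cmp_u32(const void *a, const void *b)
    {
        uint32_t x = *(const uint32_t *)a;
        uint32_t y = *(const uint32_t *)b;
        return (x > y) - (x < y);
    }

    uint32_t empirical_timing_bound(uint32_t *samples, size_t count, double quantile)
    {
        if (count == 0) {
            return 0;   /* no samples, no bound */
        }
        qsort(samples, count, sizeof samples[0], cmp_u32);
        size_t index = (size_t)(quantile * (double)(count - 1));
        return samples[index];
    }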

You’re running a workshop at the end of March. What can attendees expect and how will they benefit from it?

We see this type of event as educational and entertaining. It is a way to understand the issues, and the main aim is to work through experiences across the whole range of timing-related issues. People who attend will go home and say, "Oh, I need to check whether we have that problem." It is a way of generating awareness that timing is an important and complicated issue, and that knowledge exists about what the issues are and how they can be resolved.

The second benefit is a more general understanding of how timing fits into the overall ISO 26262 process, and therefore how it needs to be done in a way that addresses the specific objectives of the standard.

The third benefit is that we present tools to support the process of providing evidence to certification authorities, so that people don't have to reinvent the wheel for every project or every company. Once it is understood that an objective can be satisfied by a tool-supported process, adopting that process becomes a simple tick in the box: "This is how we're going to satisfy that objective."

Dr. Bernat, thank you very much.

Find out more about the workshop on 24 March 2015 and register for free today!

Preliminary Workshop Agenda from 09:00 to 16:00:

  • Introduction
  • Specification. Establish timing objectives. AUTOSAR Timing Extensions. ISO 26262 Requirements. Exercise.
  • Design. Model-based design. Multicore. AUTOSAR Architectural Design.
  • Implementation. Data collection techniques.
  • Unit Test. Timing verification and optimization. Exercise.
  • Integration Test. Debugging timing issues. Exercise.
  • System Test. Preparing test results.
  • Summary
