The use of formal methods in life-critical systems has a long history in Australia, dating back to the late 1980s. One strand of that work was conducted at Telectronics, and this page reviews the kernel used by the first software-controlled implantable defibrillator.
The entire system, and the kernel in particular, used formal methods with a moderate degree of success. This page is intended as a short overview, presented from the hardware up.
The platform
The system runs on a customised 65C02 (the 65C124), developed by Bill Mensch at Western Design Center and clocked at 128 kHz. It interfaces via a custom bus with two chips which implemented the pacemaker functionality. Code was written in C using a locally developed C compiler based on the Amsterdam Compiler Kit. The runtime used:
- 8-bit ints and 16-bit longs.
- a stack frame in page 0 (the first 256 bytes) with the X or Y register as the frame pointer. Total stack size was limited to 192 bytes.
- two pretend registers (rL and rH) in page 0, which completed the program's state.
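The narrow type widths are easy to trip over. Here is a minimal sketch of the consequence, using modern `stdint.h` types as stand-ins for the compiler's 8-bit int and 16-bit long (the `ti_int`/`ti_long` aliases and `sum_bytes` are my illustration, not the original runtime):

```c
#include <stdint.h>

/* Stand-ins for the compiler's types: int was 8 bits, long 16 bits.
 * These aliases are illustrative only. */
typedef int8_t  ti_int;
typedef int16_t ti_long;

/* A total over more than 127 items would overflow an 8-bit int,
 * so any such accumulator has to be a 16-bit long. */
static ti_long sum_bytes(const uint8_t *buf, ti_long n) {
    ti_long total = 0;
    for (ti_long k = 0; k < n; k++)
        total = (ti_long)(total + buf[k]);
    return total;
}
```

Loop counters and totals that casually fit in an `int` on larger machines had to be declared `long` here, which mattered for both correctness and the zero-page stack budget.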
The compiler was extensively tested: all of the EM intermediate-code instructions were exhaustively exercised, including checks for unintended side effects, by Karen Mcpherson.
The requirements
The system requirements included:
- Correctness and safety.
- Performance, since battery life was critical.
- Deterministic real-time performance - in particular the support chips had strict timing limits.
- Stack usage must be calculable.
- A debugger process able to log and check data without interfering with the main tasks.
Of these, the timing was a particular challenge: my memory is that around 1-2 ms was available to complete a bus transaction, which is around 60 instructions at 128 kHz. This was analysed by hand and shown to be met.
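As a back-of-envelope check on that figure (my arithmetic, not the original analysis): at 128 kHz a clock cycle is about 7.8 µs, and assuming an average of 2-4 cycles per 65C02 instruction, 60 instructions land squarely in the 1-2 ms window:

```c
/* Rough budget check: at 128 kHz, how many microseconds do
 * n instructions take at a given average cycles-per-instruction?
 * The 2-4 cycle average is an assumption for illustration. */
static long bus_transaction_us(long n_instructions, long cycles_per_insn) {
    const long clock_hz = 128000;
    long cycles = n_instructions * cycles_per_insn;
    return (cycles * 1000000L) / clock_hz;   /* microseconds */
}
```

At 3 cycles per instruction the 60-instruction transaction comes to roughly 1.4 ms, consistent with the remembered budget.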
The kernel interface
The kernel was specified in Z in around 4 pages and provided:
- A static set of processes each with a priority level defined at system build time.
- Priority levels run from -1 to 8, with -1 being the idle loop (or debug driver) and 8 being the single level of hardware interrupts. Priorities 0 to 7 were implemented in the scheduler, with 7 the highest. The final system used perhaps 4 levels.
- The system would always run the highest priority process.
- But a process could raise its priority level and later restore it. Lowering priority below its original level was never allowed.
- A critical region is simply: raise the priority level, do the work, restore it.
- A process is just a function that must return before lower-priority processes can run.
void spl(int level)  // set priority level
void mpr(int pid)    // make process ready
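With only `spl` available, a critical region is a save/raise/restore pattern. The sketch below mocks the scheduler with a single level variable so the pattern is visible; `current_level`, `shared_counter`, and `increment_shared` are my stand-ins, not kernel code:

```c
/* Mock of the kernel's priority mechanism: spl() sets the current
 * software priority level. The real kernel scheduled processes off
 * this level; here it is just a variable so the pattern compiles. */
static int current_level = 2;   /* pretend base priority of this process */
static int shared_counter = 0;  /* data shared with higher-priority code */

static void spl(int level) { current_level = level; }

static void increment_shared(void) {
    int old = current_level;
    spl(7);                 /* raise to the top scheduler priority */
    shared_counter++;       /* the critical work */
    spl(old);               /* restore - never dropping below base */
}
```

Because a process can only ever raise its level and restore it, the saved `old` value is always the process's own (or higher) priority, so the restore can never violate the no-lowering rule.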
Communication is via:
- Shared store with critical-region protection, though this is not used at the application level.
- Message passing by mailbox with overflow detection (single buffer)
- Pipes containing a sequence of a fixed number of messages. These are typically used to send information to long-running processes such as a logging daemon.
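A single-buffer mailbox with overflow detection can be sketched as a one-slot store plus a full flag, where a send into a full slot reports overflow rather than blocking. The names and return codes here are my invention, not the original interface:

```c
#include <stdbool.h>

/* One-slot mailbox: holds at most one message; a second send before
 * the receiver empties it is detected as overflow. */
typedef struct {
    int  msg;
    bool full;
} mailbox;

/* Returns 0 on success, -1 on overflow (mailbox already full). */
static int mbox_send(mailbox *m, int msg) {
    if (m->full) return -1;
    m->msg  = msg;
    m->full = true;
    return 0;
}

/* Returns 0 and stores the message, or -1 if the mailbox is empty. */
static int mbox_recv(mailbox *m, int *out) {
    if (!m->full) return -1;
    *out = m->msg;
    m->full = false;
    return 0;
}
```

Detecting the overflow, rather than silently dropping or overwriting, fits the safety requirement: a lost message is an error the system can observe and report.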
Program Proofs
The kernel was proven by hand from the Z specification, and the following obvious timing constraints were confirmed:
- Determine, by code inspection, the maximum time D that interrupts (priority 8) are disabled.
- For each priority level 0..7 determine the maximum run time.
Then calculate the worst-case time, assuming every higher-priority process starts at the same time and, most importantly:
- The arrival rates are such that *all* higher-priority processes complete before any more arrive.
This was plausible since the device duty cycle was very low; without arrival-rate information this problem quickly becomes intractable.
Still, it was done for this project and then verified experimentally.