Author Archive

Haskell and Hardware for the Holidays

December 18, 2010

Looking to make a statement this holiday season?  You could try to win the office “ugly holiday sweater” contest.  Or, you could play “Jingle Bells” on your Arduino microcontroller, using Haskell.  This post is about the latter.

We’re going to write this small program using the Copilot embedded domain-specific language (on Hackage and the source on Github).  Copilot is a stream language that allows you to generate embedded C code from programs written essentially as Haskell lists (using Atom as a backend for the C code generation).  This post is about how to use Copilot/Haskell (v. 1.0) to make your embedded C programming easier and more likely to be correct.  Here’s what we’re going to do—please don’t look too closely at my soldering, and turn the volume up, since a piezo speaker isn’t loud:

(For the impatient, the Haskell file is here, and the generated .c and .h files are here and here, respectively.)

We’re going to essentially recreate this C/Wiring program, plus flash some LEDs, but hopefully in an easier, safer way.  We need to manage three tasks:

  1. Determine the note and number of beats to play.
  2. Play the piezo speaker.
  3. Flash the LEDs.

We’ll start by defining which pins control what function:

-- pin numbers
speaker, red, green :: Spec Int32
speaker = 13
red     = 12
green   = 11

The type Spec Int32 takes an Int32 and lifts it into a Copilot expression.  So the definitions above denote constant streams; e.g., speaker is the infinite stream 13, 13, 13, ….

We’ll call the program cycleSong. The type of a Copilot program is Streams, which is a collection of Spec a’s, and it resides within the Writer monad. First, we’ll declare some variables.

cycleSong :: Streams
cycleSong = do
  -- Copilot vars
  let idx       = varI32 "idx"
      notes     = varI32 "notes"
      duration  = varI32 "duration"
      odd       = varI32 "odd"
      even      = varI32 "even"
      playNote  = varB   "playNote"
  -- external vars
      note = extArrI32 "notes" idx
      beat = extArrI32 "beats" idx

There are two classes of variables: Copilot variables that will refer to streams (infinite lists), and external variables, which can refer to data from C (including the return values of functions, global variables, and arrays). The constructors are mnemonics for the type of the variables; for example, varI32 is a variable that will refer to a stream of Int32s. Similarly, extArrI32 is an external variable referring to a C array of Int32s (i.e., int32_t). Notice the idx argument—it is the stream of values from which the index into the array is drawn (constants can also be used for indexes).

Now for the actual program:

 idx      .= [0] ++ (idx + 1) `mod` (fromIntegral $ length notesLst)
 notes    .= note
 duration .= beat * 300
 odd      .= mux (idx `mod` 2 == 1) green red
 even     .= mux (idx `mod` 2 == 1) red green
 playNote .= true
 -- triggers
 trigger playNote "digitalWrite" (odd <> true)
 trigger playNote "digitalWrite" (even <> false)
 trigger playNote "playtone" (speaker <> notes <> duration)

And that’s basically it.  There are two parts to the program: the definition of Copilot streams, which manage data-flow and control, and triggers, which call an external C function when some property is true.  Copilot streams look pretty much like Haskell lists, except that functions are automatically lifted to the stream level for convenience.  Thus, instead of writing,

 x = [0] ++ map (+1) x

in Copilot, you simply write

 x .= [0] ++ x + 1

Similarly for constants, so the Copilot stream definition

playNote .= true

lifts the constant true to an infinite stream of true values. The function mux is if then else (mux refers to a 2-to-1 multiplexer). So the stream odd takes the value of green when idx is odd, and red otherwise, where green and red refer to the pins controlling the respective LEDs.

Just to round out the description of the other defined streams, idx is the index into the C arrays containing the notes and beats, respectively—that’s why we perform modular arithmetic. The stream duration tells us how long to hold a note; 300 is a magic “tempo” constant.

Now for the triggers. Each of our triggers “fires” whenever the stream playNote is true; in our case, because it is a constant stream of trues, this happens on each iteration. So on each iteration, the C functions digitalWrite and playtone are called with the respective function arguments (‘<>‘ separates arguments). The function digitalWrite is part of the Wiring language, which is basically C with some standard libraries. We’ll write playtone ourselves in a second.

The C Code

We need a little C code now.  We could write this directly, but we’ll just do this in Haskell, since there’s so little we need to write—the Copilot program handles most of the work.  But a caveat: it’s a little ugly, since we’re just constructing Haskell strings. Here are a few functions (included with Copilot) to make this easier, and here are some more. (If someone properly writes a package to write ad-hoc C code from Haskell, please leave a comment!)

First, we need more magic constants to give the frequency associated with notes (a space is a rest).

freq :: Char -> Int32
freq note  =
  case note of
    'c' -> 1915
    'd' -> 1700
    'e' -> 1519
         ...

and here are the notes of the song and the beats per note:

jingleBellsNotes = "eeeeeeegcdefffffeeeddedgeeeeeeegcdefffffeeggfdc"
jingleBellsBeats =
  [ 1,1,2  , 1,1,2, 1,1,1,1, 4
  , 1,1,1,1, 1,1,2, 1,1,1,1, 2,2
  , 1,1,2  , 1,1,2, 1,1,1,1, 4
  , 1,1,1,1, 1,1,2, 1,1,1,1, 4
  ]
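
The Copilot spec and the interpreter below refer to these as the Haskell lists notesLst and beatsLst. Their definitions aren’t shown in the post, but presumably they’re derived from the song definition above, something like this sketch:

-- A sketch (these definitions aren't shown in the original post):
notesLst, beatsLst :: [Int32]
notesLst = map freq jingleBellsNotes  -- map each character to its frequency
beatsLst = jingleBellsBeats           -- the beats are already numbers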

The other main piece of C code we need to write is the function playtone. The piezo speaker is controlled by pulse width modulation, basically meaning we’ll turn it on and off really fast to simulate an analogue signal. Here is its definition (using a little helper Haskell function to construct C functions):

    [ function "void" "playtone" ["int32_t speaker", "int32_t tone", "int32_t duration"] P.++ "{"
    , "#ifdef CBMC"
    , "  for (int32_t i = 0; i < 1; i ++) {"
    , "#else"
    , "  for (int32_t i = 0; i < duration * 1000; i += tone * 2) {"
    , "#endif"
    , "    digitalWrite(speaker, HIGH);"
    , "    delayMicroseconds(tone);"
    , "    digitalWrite(speaker, LOW);"
    , "    delayMicroseconds(tone);"
    , "  }"
    , "}"
    ]

HIGH, LOW, digitalWrite, and delayMicroseconds are all part of the Wiring standard library.  That ifdef is for verification purposes, which we’ll describe in just a bit.

Besides a little more cruft, that’s it!

Test, Build, Verify

“Jersey Shore” may have introduced you to the concept of gym, tan, laundry, but here we’ll stick to test, build, verify.  That is, first we’ll test our program using the Copilot interpreter, then we’ll build it, then we’ll prove the memory safety of the generated C program.

  • Interpret. We define a function that calls the Copilot interpreter:
    interpreter =
      interpret cycleSong 20
        $ setE (emptySM {i32Map = fromList [ ("notes", notesLst)
                                           , ("beats", beatsLst)]})
        baseOpts

    This calls the Copilot interpreter, saying to unroll cycleSong 20 times. Because the Copilot program samples some external C values, we need to provide that data to the interpreter. Fortunately, we have those arrays already defined as Haskell lists. Executing this, we get the following:

    period: 0   duration: 300   even: 11   idx: 0   notes: 1519   odd: 12   playNote: 1
    period: 1   duration: 300   even: 12   idx: 1   notes: 1519   odd: 11   playNote: 1
    period: 2   duration: 600   even: 11   idx: 2   notes: 1519   odd: 12   playNote: 1
    period: 3   duration: 300   even: 12   idx: 3   notes: 1519   odd: 11   playNote: 1
    period: 4   duration: 300   even: 11   idx: 4   notes: 1519   odd: 12   playNote: 1
    period: 5   duration: 600   even: 12   idx: 5   notes: 1519   odd: 11   playNote: 1
    period: 6   duration: 300   even: 11   idx: 6   notes: 1519   odd: 12   playNote: 1
                                                   . . .

    Good, it looks right: every note so far is an ‘e’ (1519), matching the start of jingleBellsNotes, and the durations follow the beats (e.g., the third beat is 2, giving a duration of 2 * 300 = 600). (period isn’t a Copilot variable but just keeps track of what round we’re on.)

  • Build. To build, we generate the C code from the Copilot program, then we’ll use a cross-compiler targeting the ATmega328. The easiest way (I’ve found) is via Homin Lee’s Arscons. Arscons is based on Scons, a Python-based build system. Arscons makes three assumptions: (1) the program is written as a Wiring program (the main difference being that there’s a loop() function instead of a main() function), (2) the extension of the Wiring program is .pde, and (3) the directory containing XXX.pde is named XXX. For us, all that really means is that we have to change the extension of the generated program from .c to .pde. So we define
    main :: IO ()
    main = do
      compile cycleSong name
        $ setPP cCode  -- C code for above/below the Copilot program
        $ setV Verbose -- Verbose compilation
        baseOpts
      copyFile (name P.++ ".c") (name P.++ ".pde") -- SConstruct expects .pde

    and then execute

    > runhaskell CopilotSong.hs

    to do this.

    To build the executable, we issue

    > scons

    then

    > scons upload

    when we’re ready to flash the microcontroller.

  • Verify. Is the generated C program memory safe?  Wait… What do I mean by ‘memory safe’?  I’ll consider the program to be memory safe if the following hold:
    • No arithmetic underflows or overflows.
    • No floating-point not-a-numbers (NaNs).
    • No division by zero.
    • No array bounds underflows or overflows.
    • No null pointer dereferences.

    Of course this only approximates memory safety, but it’s a pretty good start. If the compiler is built correctly, we should be pretty close to a memory-safe program. But we want to check the compiler, even though Haskell’s type system gives us some evidence of guarantees already. Furthermore, the compiler knows nothing about arbitrary C functions, and it doesn’t know how large external C arrays are.

    We can prove that the program is memory safe. We call out to CBMC, a C model-checker developed primarily by Daniel Kröning. This is whole-program analysis, so we have to provide the location of the libraries. We define

    verifying :: IO ()
    verifying =
      verify (name P.++ ".c") (length notesLst * 4 + 3)
        (     "-DCBMC -I/Applications/Arduino.app/Contents/Resources/Java/hardware/arduino/cores/arduino/ "
         P.++ "-I/Applications/Arduino.app/Contents/Resources/Java/hardware/tools/avr/avr-4/include "
         P.++ "--function cbmc")

    which calls cbmc on our generated C program. Let me briefly explain the arguments. First we give the name of the C program.

    Then we say how many times to unroll the loops. This requires a little thinking. We want to unroll the loops enough times to potentially get into a state where we might have an out-of-bounds array access (recall that the Copilot stream idx generates indexes into the arrays). The length of the C arrays is given by length notesLst. When compiling the Copilot program (calling the module’s main function), a periodic schedule is generated for the program. From the schedule, we can see that idx is updated every fourth pass through the loop. So we unroll enough loop passes for the counter to have the opportunity to walk off the end of the array, plus a few extra passes for setup; hence the bound length notesLst * 4 + 3 (the arrays hold 47 notes, giving 191 unrollings). This is a minimum bound; you could of course over-approximate and unroll the loop, say, 1000 times.

    Regarding loop unrolling, remember that #ifdef from the definition of playtone()? We include that to reduce the difficulty of loop unrolling. playtone() gets called on every fourth pass through the main loop, and unrolling both loops is just too much for symbolic model-checking (at least on my laptop). So we give ourselves an informal argument that the loop in playtone() won’t introduce any memory safety violations, and the #ifdef gives us one iteration through the loop if we’re verifying the system. A lot of times with embedded code, this is a non-issue, since loops can just be completely unrolled.

    The -D flag defines a preprocessor macro, and the -I defines an include path. Finally, the --function flag gives the entry point into the program. Because we generated a Wiring program, which generates a while(1) loop for us through macro magic, we have to create an explicit loop ourselves for verification purposes.

    If you’re interested in seeing what things look like when they fail, change the idx stream to be

      idx .= [0] ++ (idx + 1)
    

    and cbmc will complain

    Violated property:
  file CopilotSong.c line 180 function __r11
      array `beats' upper bound
      (long int)__1 < 47
    
    VERIFICATION FAILED
    

So that’s it. Happy holidays!

Backseat Driving: Copilot Updates

November 20, 2010

A lot has been going on since the announcement of Copilot, a Haskell DSL for generating hard real-time C monitors. We’ve presented Copilot a few times, including at Runtime Verification 2010, at a Galois Technical Seminar (video of the talk is here), and at a recent NASA Technical Interchange.

Copilot has had five releases since we originally open-sourced the project.  Recent work has focused on making the language more straightforward and improving Copilot libraries.  But first, let me remind you how to use Copilot: you can compile specs to hard real-time C code, you can interpret them, you can model-check them, and you can generate specs to test the compiler and interpreter—you can see a bit about usage here.

For example, here’s a Copilot specification that generates the Fibonacci sequence (over Word64s) and tests for even numbers:

fib :: Streams
fib = do
  let f = varW64 "f"
  let t = varB "t"
  f .= [0,1] ++ f + (drop 1 f)
  t .= even f
  where even :: Spec Word64 -> Spec Bool
        even w' = w' `mod` 2 == 0

Notice that lists look almost exactly like Haskell lists.
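
And as with any Copilot spec, we can interpret it or compile it; for example, something like the entry points shown in the September post below:

> interpret fib 100 baseOpts
> compile fib "fib" baseOpts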

What about something a little more complicated?  Consider the property:

If the temperature rises more than 2.3 degrees within 2 seconds, then the engine has been shut off.

We might use a Copilot specification like the following to express it, assuming that temp and shutoff are C variables being sampled at phases 1 and 2 respectively, and the period of execution is 1 second:

engine :: Streams
engine = do
  -- external vars
  let temp     = extF "temp" 1
  let shutoff  = extB "shutoff" 2
  -- Copilot vars
  let temps    = varF "temps"
  let overTemp = varB "overTemp"
  let trigger  = varB "trigger"
  -- Copilot specification
  temps    .= [0, 0, 0] ++ temp
  overTemp .= drop 2 temps > 2.3 + temps
  trigger  .= overTemp ==> shutoff
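
To see why this captures the property, it may help to read the streams as plain Haskell lists. Here’s a sketch with made-up temperature samples (my own illustration; at a 1-second period, “2 seconds” means “2 samples”):

temp :: [Double]
temp = [0.0, 0.1, 2.5, 2.6]   -- hypothetical samples of the external C variable

temps :: [Double]
temps = [0, 0, 0] ++ temp     -- 0, 0, 0, 0.0, 0.1, 2.5, 2.6

overTemp :: [Bool]
overTemp = zipWith (>) (drop 2 temps) (map (+ 2.3) temps)
-- [False, False, False, True, True]: the rise from 0.0 to 2.5
-- within two samples exceeds 2.3 degrees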

Here’s something that I think shows why you want to write your DSLs in Haskell: Haskell gives you a macro language for your DSL… for free.  For example, consider the following (more complicated) property:

“If the engine temperature exceeds 250 degrees, then the engine is shut off within one second, and in the 0.1 second following the shutoff, the cooler is engaged and remains engaged.”

We can more succinctly specify this property using past-time linear temporal logic (ptLTL).  There’s a Copilot library for writing those kinds of specs, which can be interspersed with normal Copilot streams—the ptLTL specs are the ones built with the ptltl combinator below.  Again, assume a period of execution of 1 second:

engine :: Streams
engine = do
  -- external vars
  let engineTemp = extW8 "engineTemp" 1
  let engineOff  = extB "engineOff" 1
  let coolerOn   = extB "coolerOn" 1
  -- Copilot vars
  let cnt        = varW8 "cnt"
  let temp       = varB "temp"
  let cooler     = varB "cooler"
  let off        = varB "off"
  let monitor    = varB "monitor"
  -- Copilot specification
  temp    `ptltl` (alwaysBeen (engineTemp > 250))
  cnt     .=      [0] ++ mux (temp && cnt < 10) (cnt + 1) cnt
  off     .=      cnt >= 10 ==> engineOff
  cooler  `ptltl` (coolerOn `since` engineOff)
  monitor .=      off && cooler
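
If it helps, the ptLTL operators also have a simple reading over Haskell lists. Here’s a rough sketch of their stream semantics (my own definitions, for intuition only; not Copilot’s implementation):

-- alwaysBeen p: true now iff p has held at every step so far.
alwaysBeen :: [Bool] -> [Bool]
alwaysBeen = scanl1 (&&)

-- q `since` p: true now iff p held at some past step and q has held ever since.
since :: [Bool] -> [Bool] -> [Bool]
since qs ps = drop 1 (scanl step False (zip qs ps))
  where step acc (q, p) = p || (q && acc)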

Today, I finished updating another feature of Copilot: the ability to send stream values over ports to other components in a distributed system. We had an implementation of this, but it was a bit hacky. Hopefully, it’s a bit less hacky now. For example, consider the following specification:

distrib :: Streams
distrib = do
  -- Copilot vars
  let a = varW8 "a"
  let b = varB "b"
  -- Copilot spec
  a .= [0,1] ++ a + 1
  b .= mod a 2 == 0 
  -- send commands
  send "portA" (port 2) a 1
  send "portB" (port 1) b 2

The last two lines are the send commands.  For example, the first command says, “call the C function portA(str, num), where argument str is the value of stream a and num is port number 2.” The port number says who to send it to.

These are just a few of the recent updates. We’re still working on Copilot, so let me know if you have questions or comments.

Interested? Get Copilot on Hackage or GitHub.

Copilot: a DSL for Monitoring Embedded Systems

September 25, 2010

In case you missed all the excitement on the Galois blog, what follows is a re-post.

Introducing Copilot

Can you write a list in Haskell? Then you can write embedded C code using Copilot. Here’s a Copilot program that computes the Fibonacci sequence (over Word64s) and tests for even numbers:


fib :: Streams
fib = do
  "fib" .= [0,1] ++ var "fib" + (drop 1 $ varW64 "fib")
  "t" .= even (var "fib")
    where even :: Spec Word64 -> Spec Bool
          even w = w `mod` const 2 == const 0

Copilot contains an interpreter, a compiler, and uses a model-checker to check the correctness of your program. The compiler generates constant time and constant space C code via Tom Hawkins’s Atom language (thanks Tom!). Copilot is specifically developed to write embedded software monitors for more complex embedded systems, but it can be used to develop a variety of functional-style embedded code.

Executing

> compile fib "fib" baseOpts

generates fib.c and fib.h (with a main() for simulation—other options change that). We can then run

> interpret fib 100 baseOpts

to check that the Copilot program does what we expect. Finally, if we have CBMC installed, we can run

> verify "fib.c"

to prove a bunch of memory safety properties of the generated program.

Galois has open-sourced Copilot (BSD3 license). More information is available on the Copilot homepage. Of course, it’s available from Hackage, too.

Flight of the Navigator

[Photos: Aberdeen Farms entrance; view of the James River; pitot tube on the test aircraft; our testbed stack of 4 STM32 microcontrollers (ARM Cortex M3s), an SD card for logging data, an air pressure sensor, and a voltage regulator; Sebastian installing the stack.]

Copilot took its maiden flight in August 2010 in Smithfield, Virginia. NASA rents a private airfield for test flights like this, but you have to get past the intimidating sign posted upon entering the airfield. However, once you arrive, there’s a beautiful view of the James River.

We were flying on a RC aircraft that NASA Langley uses to conduct a variety of Integrated Vehicle Health Management (IVHM) experiments. (It coincidentally had Galois colors!) Our experiments for Copilot were to determine its effectiveness at detecting faults in embedded guidance, navigation, and control software. The test-bed we flew was a partially fault-tolerant pitot tube (air pressure) sensor. Our pitot tube sat at the edge of the wing. Pitot tubes are used on commercial aircraft and they’re a big deal: a number of aircraft accidents and mishaps have been due, in part, to pitot tube failures.

Our experiment consisted of a beautiful hardware stack, crafted by Sebastian Niller of the Technische Universität Ilmenau. Sebastian also led the programming for the stack. The stack consisted of four STM32 ARM Cortex M3 microprocessors. In addition, there was an SD card for writing flight data, and power management. The stack just fit into the hull of the aircraft. Sebastian installed our stack in front of another stack used by NASA on the same flights.

The microprocessors were arranged to provide Byzantine fault-tolerance on the sensor values. One microprocessor acted as the general, receiving inputs from the pitot tube and distributing those values to the other microprocessors. The other microprocessors would exchange their values and perform a fault-tolerant vote on them. Granted, the fault-tolerance was for demonstration purposes only: all the microprocessors ran off the same clock, and the sensor wasn’t replicated (we’re currently working on a fully fault-tolerant system). During the flight tests, we injected (in software) faults by having intermittently incorrect sensor values distributed to various nodes.

The pitot sensor system (including the fault-tolerance code) is a hard real-time system, meaning events have to happen at predefined deadlines. We wrote it in a combination of Tom Hawkins’s Atom, a Haskell DSL that generates C, and C directly.

Integrated with the pitot sensor system are Copilot-generated monitors. The monitors detected

  • unexpected sensor values (e.g., the delta change is too extreme),
  • the correctness of the voting algorithm (we used Boyer-Moore majority voting, which returns the majority only if one exists; our monitor checked whether a majority indeed exists; a sketch of the algorithm follows this list), and
  • whether the majority votes agreed.
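
Here’s a rough Haskell sketch of Boyer-Moore majority voting (my own rendering; the flight code was Atom/C). The one-pass scan yields a candidate, and a separate count confirms whether a majority actually exists, which is exactly the property the monitor checked:

majority :: Eq a => [a] -> Maybe a
majority [] = Nothing
majority xs
  | length (filter (== cand) xs) * 2 > length xs = Just cand
  | otherwise                                    = Nothing
  where
    -- One pass: matching values bump a counter, mismatches drain it,
    -- and a zero counter installs a new candidate.
    cand = fst (foldl step (head xs, 0 :: Int) xs)
    step (c, k) x
      | k == 0    = (x, 1)
      | x == c    = (c, k + 1)
      | otherwise = (c, k - 1)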

The monitors integrated with the sensor system without disrupting its real-time behavior.

We gathered data on six flights. In between flights, we’d get the data from the SD card.

We took some time to pose with the aircraft. The Copilot team from left to right is Alwyn Goodloe, National Institute of Aerospace; Lee Pike, Galois, Inc.; Robin Morisset, École Normale Supérieure; and Sebastian Niller, Technische Universität Ilmenau. Robin and Sebastian are Visiting Scholars at the NIA for the project. Thanks for all the hard work!

There were a bunch of folks involved in the flight test that day, and we got a group photo with everyone. We are very thankful that the researchers at NASA were gracious enough to give us their time and resources to fly our experiments. Thank you!

Finally, here are two short videos. The first is of our aircraft’s takeoff during one of the flights. Interestingly, it has an electric engine to reduce the engine vibration’s effects on experiments.

http://player.vimeo.com/video/15198286

The second is of AirStar, which we weren’t involved in, but that also flew the same day. AirStar is a scaled-down jet (yes, jet) aircraft that was really loud and really fast. I’m posting its takeoff, since it’s just so cool. That thing was a rocket!

http://player.vimeo.com/video/15204969

More Details

Copilot and the flight test is part of a NASA-sponsored project (NASA press-release) led by Lee Pike at Galois. It’s a 3 year project, and we’re currently in the second year.

Even More Details

Besides the language and flight test, we’ve written a few papers:

  • Lee Pike, Alwyn Goodloe, Robin Morisset, and Sebastian Niller. Copilot: A Hard Real-Time Runtime Monitor. To appear in the proceedings of the 1st Intl. Conference on Runtime Verification (RV’2010), 2010. Springer.

This paper describes the Copilot language.

Byzantine faults are fascinating. Here’s a 2-page paper that shows one reason why.

At the beginning of our work, we tried to survey prior results in the field and discuss the constraints of the problem. This report is a bit lengthy (almost 50 pages), but it’s a gentle introduction to our problem space.

Yes, QuickCheck can be used to test low-level protocols.

A short paper motivating the need for runtime monitoring of critical embedded systems.

You’re Still Interested?

We’re always looking for collaborators, users, and we may need 1-2 visiting scholars interested in embedded systems & Haskell next summer. If any of these interest you, drop Lee Pike a note (hint: if you read any of the papers or download Copilot, you can find my email).

Shocking Tell-All Interview on Software Assurance

August 29, 2010

I was recently interviewed by Flight International magazine, one of the oldest aviation news magazines.  Their reporter, Stephen Trimble, was writing on the Air Force’s Chief Scientist’s recent report stating that new software verification and validation techniques are desperately needed.

Here’s an online copy of the article.

Copilot: A Hard Real-Time Runtime Monitor

August 22, 2010

I’m the principal investigator on a NASA-sponsored research project investigating new approaches for monitoring the correctness of safety-critical guidance, navigation, and control software at run-time.  We just got a paper accepted at the Runtime Verification Conference on some of our recent work developing a language for writing monitors.  The language, Copilot, is a domain-specific language (DSL) embedded in Haskell that uses the powerful Atom DSL as a back-end.  Perhaps the best tag-line for Copilot is, “Know how to write Haskell lists?  Good; then you’re ready to write embedded software.”

Stay tuned for a software release and updates on a flight-test of our software on a NASA test UAV…  In the meantime, check out the paper!

Twinkle Twinkle Little Haskell

May 31, 2010

Update Sept 28, 2010: the Makefile mentioned below worked fine, except for something having to do with timing.  I was too lazy to track the problem down, but fortunately, I found an scons script (using the scons build system) that I modified to run on Mac OSX, and it works perfectly.  The original script is here—thanks Homin Lee!  The post has been modified appropriately.

Update Oct 1, 2010: Homin Lee has updated the script to work on Mac OSX, so you can just grab the original script now.


It’s been almost a year(!) since John Van Enk showed us how to twinkle (blink) an LED on his Arduino microcontroller using Atom/Haskell.  Since that time, Atom (a Haskell embedded domain-specific language for generating constant time/space C programs) has undergone multiple revisions, and the standard Arduino tool-chain has been updated, so I thought it’d be worthwhile to “re-solve” the problem with something more streamlined that should work today for all your Haskell -> Arduino programming needs.  With the changes to Atom, we can blink a LED with just a couple lines of core logic (as you’d expect given the simplicity of the problem).

For this post, I’m using

If you’ve played with the Arduino, you’ve noticed how nice the integrated IDE/tool-chain is.  Ok, the editor leaves everything to be desired, but otherwise, things just work.  The language is basically C with a few macros and Atmel AVR-specific libraries (the family to which Arduino hardware belongs).

However, if you venture off the beaten path at all—say, trying to compile your own C program outside the IDE—things get messy quickly.  Fortunately, with the scons script, things are a piece of cake.

What we’ll do is write a Haskell program AtomLED.hs and use that to generate AtomLED.c.  From that, the scons script will take care of the rest.

The Core Logic

Here’s the core logic we use for blinking the LED from Atom:

ph :: Int
ph = 40000 -- Sufficiently large number of ticks (the Duemilanove is 16MHz)

blink :: Atom ()
blink = do
  on <- bool "on" True -- Declare a Boolean variable on, initialized to True.

  -- At period ph and phase 0, do ...
  period ph $ phase 0 $ atom "blinkOn" $ do
    call "avr_blink"        -- Call a locally-defined C function, blink().
    on <== not_ (value on)  -- Flip the Boolean.

  period ph $ phase (quot ph 8) $ atom "blinkOff" $ do
    call "avr_blink"
    on <== not_ (value on)

And that’s it!  The blink function has two rules, “blinkOn” and “blinkOff”.  Both rules execute every 40,000 ticks.  (A “tick” in our case is just a local variable that’s incremented, but it could be run off the hardware clock.  Nevertheless, we still know we’re getting nearly constant-time due to the code Atom generates.)  The first rule starts at tick 0, and executes at ticks 40,000, 80,000, etc., while the second starts at tick 40,000/8 = 5000 and executes at ticks 5000, 45,000, 85,000, etc.  In each rule, after calling the avr_blink() C function (which we’ll define below), it modulates a Boolean upon which avr_blink() depends. Thus, the LED is on 1/8 of the time and off 7/8 of the time. (If we wanted the LED to be on the same amount of time as it is off, we could have done the whole thing with one rule.)

The Details

Really the only other thing we need to do is add a bit of C code around the core logic.  Here’s the listing for the C code stuck at the beginning, written as strings:

[ (varInit Int16 "ledPin" "13") -- We're using pin 13 on the Arduino.
, "void avr_blink(void);"
]

and here’s some code for afterward:

[
"void setup() {"
, " // initialize the digital pin as an output:"
, " pinMode(ledPin, OUTPUT);"
, "}"
, ""
, "// set the LED on or off"
, "void avr_blink() { digitalWrite(ledPin, state.AtomLED.on); }"
, ""
, "void loop() {"
, " " ++ atomName ++ "();"
, "}"
]

The IDE tool-chain expects there to be a setup() and loop() function defined, and it then pretty-prints a main() function from which both are called. The code never returns from loop().

To blink the LED, we call digitalWrite() from avr_blink(). digitalWrite() is provided by the Arduino language.  (In John’s post, he manipulated the port registers directly, which is faster and doesn’t rely on the Arduino libraries, but it’s also lower-level and less portable between Atmel controllers.)  Atom-defined variables are stored in a struct, so state.AtomLED.on references the Atom Boolean variable defined earlier.

Make it Work!

Now just drop the scons script into the directory (the directory must have the same name as the Haskell file, dropping the extension), and run

> runhaskell AtomLED.hs
> scons
> scons upload

And your Haskell should be twinkling your LED. runhaskell AtomLED.hs invokes the Atom compile function to generate a C file and headers; scons invokes the build script to build an ELF image to upload, and scons upload again invokes the compiler to upload to your board.

This should work for any Atom-generated program you want to run on your Arduino (modulo deviations from the configuration I mentioned initially). Also, note the conventions and parameters to set in the scons script.

Post if you have any problems, and I might be able to help. Also, I’d love to package the boilerplate up into a “backend” for Atom, but if you have time, please beat me to it.  Thanks.

Code:

New Group: Functional Programming for Embedded Systems

May 30, 2010

Are you interested in how functional programming can be leveraged to make embedded-systems programming easier and more reliable?  You are not alone.  For example, check out what’s been happening in just the past couple of years.

Now Tom Hawkins (designer of Atom) has started a Google group, fp-embedded, to discuss these issues.  Please join and post your projects & questions!

An Apologia for Formal Methods

March 14, 2010

In the January 2010 copy of IEEE Computer, David Parnas published an article, “Really Rethinking ‘Formal Methods’” (sorry, you’ll need an IEEE subscription or purchase the article to access it), with the following abstract:

We must question the assumptions underlying the well-known current formal software development methods to see why they have not been widely adopted and what should be changed.

I found some of the opinions therein to be antiquated, so I wrote a letter to the editor (free content!), which appears in the March 2010 edition.  IEEE also published a response from David Parnas, which you can also access at the letter link above.

I’ll refrain from visiting this debate here, but please have a look at the letters, enjoy the controversy, and do not hesitate to leave a comment!

10 to the -9

January 24, 2010

10^{-9}, or one-in-a-billion, is the famed number given for the maximum probability of a catastrophic failure, per hour of operation, in life-critical systems like commercial aircraft.  The number is part of the folklore of the safety-critical systems literature; where does it come from?

First, it’s worth noting just how small that number is.  As pointed out by Driscoll et al. in the paper, Byzantine Fault Tolerance, from Theory to Reality, the probability of winning the U.K. lottery is 1 in 10s of millions, and the probability of being struck by lightning (in the U.S.) is 1.6 \times 10^{-6}, more than 1,000 times more likely than 10^{-9}.

So where did 10^{-9} come from?  A nice explanation comes from a recent paper by John Rushby:

If we consider the example of an airplane type with 100 members, each flying 3000 hours per year over an operational life of 33 years, then we have a total exposure of about 10^{7} flight hours. If hazard analysis reveals ten potentially catastrophic failures in each of ten subsystems, then the “budget” for each, if none are expected to occur in the life of the fleet, is a failure probability of about 10^{-9} per hour [1, page 37]. This serves to explain the well-known 10^{-9} requirement, which is stated as follows: “when using quantitative analyses. . . numerical probabilities. . . on the order of 10^{-9} per flight-hour. . . based on a flight of mean duration for the airplane type may be used. . . as aids to engineering judgment. . . to. . . help determine compliance” (with the requirement for extremely improbable failure conditions) [2, paragraph 10.b].

[1] E. Lloyd and W. Tye, Systematic Safety: Safety Assessment of Aircraft Systems. London, England: Civil Aviation Authority, 1982, reprinted 1992.

[2] System Design and Analysis, Federal Aviation Administration, Jun. 21, 1988, advisory Circular 25.1309-1A.

(By the way, it’s worth reading the rest of the paper—it’s the first attempt I know of to formally connect the notions of (software) formal verification and reliability.)

So there’s a probabilistic argument being made, but let’s spell it out in a little more detail.  If there are 10 potential failures in 10 subsystems, then there are 10 \times 10 = 100 potential failures.  Thus, there are 2^{100} possible configurations of failure/non-failure in the subsystems.  Only one of these configurations is acceptable—the one in which there are no faults.

If the probability of failure is x, then the probability of non-failure is 1 - x.  So if the probability of failure for each subsystem is 10^{-9}, then the probability of being in the one non-failure configuration is

\displaystyle(1 - 10^{-9})^{100}

We want that probability of non-failure to be greater than the required probability of non-failure, given the total number of flight hours.  Thus,

\displaystyle (1 - 10^{-9})^{100} > 1 - 10^{-7}

which indeed holds:

\displaystyle (1 - 10^{-9})^{100} - (1 - 10^{-7})

is around 4.95 \times 10^{-15}.

Can we generalize the inequality?  The hint for how to do so is that the number of subsystems (100) is no more than the overall failure rate divided by the subsystem rate:

\displaystyle \frac{10^{-7}}{10^{-9}}

This suggests the general form is something like


Subsystem reliability inequality: \displaystyle (1 - C^{-n})^{C^{n-m}} \geq 1 - C^{-m}


where C, n, and m are real numbers, C \geq 1, n \geq 0, and n \geq m.
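
As a sanity check, instantiating C = 10, n = 9, and m = 7 recovers the original example:

\displaystyle (1 - 10^{-9})^{10^{9-7}} = (1 - 10^{-9})^{100} \geq 1 - 10^{-7}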

Let’s prove the inequality holds.  Joe Hurd figured out the proof, sketched below (but I take responsibility for any mistakes in its presentation).  For convenience, we’ll prove the inequality holds specifically when C = e, but the proof can be generalized.

First, if n = 0, the inequality holds immediately. Next, we’ll show that

\displaystyle (1 - e^{-n})^{e^{n-m}}

is monotonically non-decreasing with respect to n by showing that the derivative of its logarithm is greater than or equal to zero for all n > 0.  So the derivative of its logarithm is

\displaystyle \frac{d}{dn} \; e^{n-m}\ln(1-e^{-n}) = e^{n-m}\ln(1-e^{-n})+\frac{e^{-m}}{1-e^{-n}}

We show

\displaystyle e^{n-m}\ln(1-e^{-n})+\frac{e^{-m}}{1-e^{-n}} \geq 0

iff

\displaystyle e^{-m}\left(e^{n}\ln(1-e^{-n}) + \frac{1}{1-e^{-n}}\right) \geq 0

and since e^{-m} \geq 0,

\displaystyle e^{n}\ln(1-e^{-n}) + \frac{1}{1-e^{-n}} \geq 0

iff

\displaystyle e^{n}\ln(1-e^{-n}) \geq - \frac{1}{1-e^{-n}}

Let x = e^{-n}, so the range of x is 0 < x < 1.
\displaystyle\ln(1-x) \geq - \frac{x}{1-x}

Now we show that in the range of x, the left-hand side is bounded below by the right-hand side of the inequality.
\displaystyle \lim_{x \to 0} \; \ln(1-x) = 0

and
\displaystyle \lim_{x \to 0} \; - \frac{x}{1-x} = 0

Now taking their derivatives
\displaystyle \frac{d}{dx} \; \ln(1-x) = \frac{1}{x-1}

and
\displaystyle \frac{d}{dx} \; - \frac{x}{1-x} = - \frac{1}{(x-1)^2}

Because \displaystyle \frac{1}{x-1} \geq - \frac{1}{(x-1)^2} in the range of x (multiply both sides by (x-1)^2 > 0 to get x - 1 \geq -1), the left-hand side never falls below the right-hand side, and our proof holds.

The purpose of this post was to clarify the folklore of ultra-reliable systems.  The subsystem reliability inequality presented allows for easy generalization to other reliable systems.

Thanks again for the help, Joe!

Writer’s Unblock

September 30, 2009

I’ve recently got a few technical papers out the door involving Haskell, physical-layer protocols, SMT, security modeling, and run-time verification of embedded systems (phew!).  One of the benefits of industrial research is getting your hands involved in a lot of different research projects.

  • This paper is about using Haskell to model physical-layer protocols and using QuickCheck to test them.  Physical-layer protocols are used to transmit bits from one clock-domain to another and are used in ethernet, credit card swipers, CD players, and so on.  The gist of the paper is that even though Haskell is pure & lazy, it works great for modeling and testing real-time protocols and even for computing reliability statistics.  I presented it at the Haskell Symposium in September ’09, which was associated with ICFP.  (The talk video is online!)  The paper is a short experience report—indeed, it is the only experience report that was accepted at the symposium.  The Haskell Symposium was an entertaining and friendly environment for presenting.
  • This paper actually precedes the Haskell paper, but it extends the results by describing how to formally verify physical-layer protocols using SMT solvers and k-induction (we use SRI’s SAL tool in this work).  The paper is a journal article accepted at Formal Aspects of Computing.  You’ll find at least two things interesting about this article: (1) For all the excitement about SMT, there don’t seem to be a lot of great examples demonstrating its efficacy—the problems described in this paper were (laboriously!) verified using theorem-provers by others previously, and our approach using SMT is much more automated.  (2) We provide a nice general model of cross clock-domain circuits and particularly metastability.

    So if you can verify physical-layer protocols, why model them in Haskell and QuickCheck them (as we did above)?  There are at least two reasons.  First, if you’re using SMT, then your timing constraints need to be linear inequalities to be decidable.  For systems with nonlinear constraints, QuickCheck might be your only recourse.  Second, QuickCheck gives you concrete counterexamples and test-cases that you can use to test implementations (SMT solvers often return symbolic counterexamples).

  • This paper describes a simple model for analyzing information flow in a system (where a “system” could be a program, a network, an OS, etc.).  The main portion of the paper describes heuristics based on graph algorithms for deciding what sort of information flow policies you might want to enforce in your system.  In general, there’s been a lot of work on analyzing access control policies but not so much work in figuring out what kind of policy you should have in the first place (if you know of such work, please tell me!).  The paper isn’t deep, and it’s also preliminary insofar as I don’t describe building a complex system using the techniques.  Still, there’s a small (Haskell) script available that implements the algorithms described; I’d love to see these analyses find their way into a tool to help system designers build secure systems.
  • Finally, this report describes the field of run-time monitoring (or run-time verification) as it applies to safety-critical real-time embedded software.  Run-time monitoring complements formal verification: when a system is too complicated to verify a priori, it can be monitored at run-time to ensure it conforms to its specification.  Not a lot of work has been done on monitoring software that’s hard real-time, distributed, or fault-tolerant—which ironically could benefit the most from run-time monitoring.  The report should serve as a nice, gentle introduction.  The report should be published soon as a NASA Contractor Report—the work was done under a NASA-sponsored project for which I’m the PI.

Don’t hesitate to give me feedback on any of these papers.  Ok, time to fill up the queue again…

Finding Boole

August 10, 2009

Here’s a simple challenge-problem for model-checking Boolean functions: Suppose you want to compute some Boolean function spec :: B^k \rightarrow B, where B^k represents 0 or more Boolean arguments.

Let f_0, f_1, \ldots, f_j range over 2-ary Boolean functions (of type (Bool, Bool) \rightarrow Bool), and suppose that f is a fixed composition of f_0, f_1, \ldots, f_j. (By the way, I’m going to talk about functions, but you can think of these as combinatorial circuits, if you prefer.)

Our question is, “Do there exist instantiations of f_0, f_1, \ldots, f_j such that for all inputs, f = spec?”

What is interesting to me is that our question is quantified and of the form, “exists a forall b …”, and it is “higher-order” insofar as we want to find whether there exist satisfying functions. That said, the property is easy to encode as a model-checking problem. Here, I’ll encode it into SRI’s Symbolic Analysis Laboratory (SAL) using its BDD engine.  (The SAL file in its entirety is here.)
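
As an aside, the search space here is small: 16^5 candidate instantiations, each checked on 2^6 = 64 inputs, so the question can also be brute-forced. Here’s a quick Haskell sketch of the same problem (the truth-table encoding is my own, but app, f, and spec mirror the SAL definitions below):

import Data.Bits (testBit)

-- Apply the 2-ary Boolean function whose 4-bit truth table is n (0..15).
app :: Int -> Bool -> Bool -> Bool
app n a b = testBit n (2 * fromEnum a + fromEnum b)

-- The fixed composition of f0..f4 used below.
f :: (Int, Int, Int, Int, Int)
  -> (Bool, Bool, Bool, Bool, Bool, Bool) -> Bool
f (f0, f1, f2, f3, f4) (b0, b1, b2, b3, b4, b5) =
  app f0 (app f1 (app f2 b0 (app f3 (app f4 b1 b2) b3)) b4) b5

spec :: (Bool, Bool, Bool, Bool, Bool, Bool) -> Bool
spec (b0, b1, b2, b3, b4, b5) = (b0 && b1) || (b2 && b3) || (b4 && b5)

-- Every instantiation of f0..f4 that agrees with spec on all 64 inputs.
solutions :: [(Int, Int, Int, Int, Int)]
solutions =
  [ fs | f0 <- fns, f1 <- fns, f2 <- fns, f3 <- fns, f4 <- fns
       , let fs = (f0, f1, f2, f3, f4)
       , all (\bs -> f fs bs == spec bs) inputs ]
  where
    fns    = [0 .. 15]
    bools  = [False, True]
    inputs = [ (b0, b1, b2, b3, b4, b5)
             | b0 <- bools, b1 <- bools, b2 <- bools
             , b3 <- bools, b4 <- bools, b5 <- bools ]

If SAL’s proof below is right, solutions should come back empty for this spec.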

To code the problem in SAL, we’ll first define for convenience a shorthand for the built-in type, BOOLEAN:

B: TYPE = BOOLEAN;

And we’ll define an enumerated data type representing the 16 possible 2-ary Boolean functions:

B2ARY: TYPE = { False, Nor, NorNot, NotA, AndNot, NotB, Xor, Nand
              , And, Eqv, B, NandNot, A, OrNot, Or, True};

Now we need an application function that takes an element f from B2ARY and two Boolean arguments, and depending on f, applies the appropriate 2-ary Boolean function:

app(f: B2ARY, a: B, b: B): B =
  IF    f = False   THEN FALSE
  ELSIF f = Nor     THEN NOT (a OR b)
  ELSIF f = NorNot  THEN NOT a AND b
  ELSIF f = NotA    THEN NOT a
  ELSIF f = AndNot  THEN a AND NOT b
  ELSIF f = NotB    THEN NOT b
  ELSIF f = Xor     THEN a XOR b
  ELSIF f = Nand    THEN NOT (a AND b)
  ELSIF f = And     THEN a AND b
  ELSIF f = Eqv     THEN NOT (a XOR b)
  ELSIF f = B       THEN b
  ELSIF f = NandNot THEN NOT a OR b
  ELSIF f = A       THEN a
  ELSIF f = OrNot   THEN a OR NOT b
  ELSIF f = Or      THEN a OR b
  ELSE                   TRUE
  ENDIF;

Let’s give a concrete definition to f and say that it is the composition of five 2-ary Boolean functions, f0 through f4. In the language of SAL:

f(b0: B, b1: B, b2: B, b3: B, b4: B, b5: B):
  [[B2ARY, B2ARY, B2ARY, B2ARY, B2ARY] -> B] =
    LAMBDA (f0: B2ARY, f1: B2ARY, f2: B2ARY, f3: B2ARY, f4: B2ARY):
      app(f0, app(f1, app(f2, b0,
                              app(f3, app(f4, b1, b2),
                                      b3)),
                      b4),
              b5);

Now let’s define the spec function that f should implement:

spec(b0: B, b1: B, b2: B, b3: B, b4: B, b5: B): B =
  (b0 AND b1) OR (b2 AND b3) OR (b4 AND b5);

Now, we’ll define a module m; modules are SAL’s building blocks for defining state machines. However, in our case, we won’t define an actual state machine since we’re only modeling function composition (or combinatorial circuits). The module has variables corresponding to the function inputs, the function identifiers, and a Boolean stating whether f is equivalent to its specification (we’ll label the classes of variables INPUT, LOCAL, and OUTPUT, to distinguish them, but for our purposes, the distinction doesn’t matter).

m: MODULE =
BEGIN
  INPUT b0, b1, b2, b3, b4, b5 : B
  LOCAL f0, f1, f2, f3, f4 : B2ARY
  OUTPUT equiv : B

  DEFINITION
    equiv = FORALL (b0: B, b1: B, b2: B, b3: B, b4: B, b5: B):
              spec(b0, b1, b2, b3, b4, b5)
            = f(b0, b1, b2, b3, b4, b5)(f0, f1, f2, f3, f4);
END;

Notice we’ve universally quantified the free variables in spec and the definition of f.

Finally, all we have to do is state the following theorem:

instance : THEOREM m |- NOT equiv;

This asks whether equiv is false in module m. Issuing

$ sal-smc FindingBoole.sal instance

asks SAL’s BDD-based model-checker to solve theorem instance. In a couple of seconds, SAL says the theorem is proved. So spec can’t be implemented by f, for any instantiation of f0 through f4! OK, what about

spec(b0: B, b1: B, b2: B, b3: B, b4: B, b5: B): B =
  TRUE;

Issuing

$ sal-smc FindingBoole.sal instance

we get a counterexample this time:

f0 = True
f1 = NandNot
f2 = NorNot
f3 = And
f4 = Xor

which is an assignment to the function symbols. Obviously, to compute the constant TRUE, only the outermost function, f0, matters, and as we see, it is defined to be True.

By the way, the purpose of defining the enumerated type B2ARY should be clear now—if we hadn’t, SAL would just return a mess in which the value of each function f0 through f4 is enumerated:

f0(false, false) = true
f0(true, false) = true
f0(false, true) = true
f0(true, true) = true
f1(false, false) = true
f1(true, false) = true
f1(false, true) = false
f1(true, true) = true
f2(false, false) = false
f2(true, false) = true
f2(false, true) = false
f2(true, true) = false
f3(false, false) = false
f3(true, false) = false
f3(false, true) = false
f3(true, true) = true
f4(false, false) = false
f4(true, false) = true
f4(false, true) = true
f4(true, true) = false

OK, let’s conclude with one more spec:

spec(b0: B, b1: B, b2: B, b3: B, b4: B, b5: B): B =
  (NOT (b0 AND ((b1 OR b2) XOR b3)) AND b4) XOR b5;

This is implementable by f, and SAL returns

f0 = Eqv
f1 = OrNot
f2 = And
f3 = Eqv
f4 = Nor

Although these assignments compute the same function, they differ from those in our specification. Just to double-check, we can ask SAL if they’re equivalent:

spec1(b0: B, b1: B, b2: B, b3: B, b4: B, b5: B): B =
  ((b0 AND ((NOT (b1 OR b2)) <=> b3)) OR NOT b4) <=> b5;

specifies the assignments returned, and

eq: THEOREM m |- spec(b0, b1, b2, b3, b4, b5) = spec1(b0, b1, b2, b3, b4, b5);

asks if the two specifications are equivalent. They are.

“Schrodinger’s Probability” for Error-Checking Codes

May 15, 2009

In a previous post, I discussed the notion of Schrödinger CRCs, first described by Kevin Driscoll et al. in their paper Byzantine Fault Tolerance, from Theory to Reality. The basic idea is that error-detecting codes do not necessarily prevent two receivers from obtaining messages that are semantically different (i.e., different data) but syntactically valid (i.e., the CRC matches the respective data words received). The upshot is that even with CRCs, you can suffer Byzantine faults, with some probability.

… So what is the probability of a Schrödinger’s CRC? That’s the topic of this post—which cleans up a few of the ideas I presented earlier. I published a short paper on the topic, which I presented at Dependable Systems and Networks, 2010, while Kevin Driscoll was in the audience!  If you’d prefer to read the PDF or get the slides, they’re here.  The simulation code (Haskell) is here.

An Atomic Fibonacci Server: Exploring the Atom (Haskell) DSL

May 5, 2009

This post is consistent with Atom 0.0.1 and not the latest version, Atom 0.0.5 (the author went off and implemented changes I and others suggested :)).  I’ll update the post… soon.

Tom Hawkins has open-sourced Atom, a domain-specific language (DSL) for writing embedded real-time software. Atom is actually an “embedded DSL” (I prefer the term “lightweight DSL”) in the functional language Haskell. It’s a lightweight DSL (LwDSL) because you write legal Haskell and let the Haskell compiler do all the heavy lifting. The DSL is a set of special functions and data types and a “compile function” that generates embedded (i.e., no dynamic memory) C code.  You don’t have to write your own compiler from scratch.

John Van Enk has already posted a couple of blog entries on using Atom; first on adding slightly to the LwDSL (one major advantage of a LwDSL is that it’s easy to extend the language—you don’t have to re-engineer a standalone compiler) and then on using Atom to blink some LEDs on the Arduino.  Keep checking his blog for more updates!

Here, I write a little device and driver program in Atom: the driver sends an index i, and the device returns the ith Fibonacci number. The little bit of challenge in doing this is that the device and driver may run at different rates, so their communication is asynchronous.  How does this work in a language like Atom?

Writing in the Atom DSL

Let’s think about the Fibonacci device (we’ll call it fibDev) first.  The device fibDev will do three things:

  1. Wait for a new index i from the driver.
  2. Produce a result, fib(i).
  3. Give the result to the driver.

Let’s think about step (2) first.  Think for a second how we’d write this (efficiently) in Haskell:

fib :: Int -> Int
fib n = fst $ fibHlp n
    where fibHlp n =
              case n of
                0 -> (1, 1)
                _ -> let (a,b) = fibHlp (n-1)
                     in (b,a+b)

The Atom implementation will use the same algorithm, but it’ll look different.  Atom is a synchronous language, so you specify rules that fire on clock ticks.  Here’s what the core of the algorithm looks like in Atom (I haven’t shown the variable declarations, but you can look at the full source):

atom "computeFib" $ do
  cond $ value runFib
  cond $ value i >. 0
  decr i
  snd <== (value fst) + (value snd)
  fst <== value snd

Atom is written in a monadic style.  Here, we have two conditions, both of which must be true for the rule to “fire”.  The first condition is that runFib is true (telling the device it’s in its computation step), and the second condition is that the index is greater than 0 (we stop computing at zero).  If the conditions are true, then the value of i is decremented, and we update the values of the fst and snd variables, corresponding to the first and second elements, respectively, of the pair in the Haskell specification. Again, this is legal Haskell; the Atom library defines the special operators (e.g., >.).  One great thing about writing embedded code in Atom is that variable updates are synchronous.  For example, in the code above, fst is updated to the previous value of snd. That’s the core of the Fibonacci device.

The rest of the architecture handles the message passing (in the C code we’ll generate, messages are passed via global variables) and synchronization between the driver and device, as summarized below:

System Architecture

We do not assume that fibDvr and fibDev execute at the same rate, so we handle message passing with a series of handshakes.  First, fibDvr sends a new value x and notifies fibDev that the value is ready (by issuing newInd).  fibDev acknowledges that x has been received with valRcvd.  At this point, fibDvr knows to wait for fibDev to compute fib(x).  Once it receives the notice ansReady, it reads off the answer, ans.

All we have to do now is implement the handshakes.  For example, let’s look at step (3) of the device, sending the final answer to the driver.  Its behavior should be clear from the architectural description.

atom "sendVal" $ do
  cond $ value i ==. 0
  cond $ value runFib
  runFib   <== false
  ans      <== value fst
  ansReady <== true
  valRcvd  <== false

And here’s step (1) for fibDev, waiting for a new index from the driver:

atom "getIndex" $ do
  cond $ not_ (value runFib)
  cond $ value newInd
  i        <== value x
  runFib   <== true
  fst      <== 1
  snd      <== 1
  ansReady <== false
  valRcvd  <== true

These three rules for fibDev define the body of fibDev‘s “do” block.

fibDev :: Atom ()
fibDev = period 3 $ do ...

We tell Atom that the period is 3, meaning each of our three rules executes once every three clock ticks (based on the underlying clock).

Now that we’re comfortable with the language, let’s look at the entire definition of fibDvr in one go. Recall the job of fibDvr is to send a value then wait for an answer.  Our driver will increment values by 5, starting at 0.  It’ll stop sending new values if the index is bigger than 50.

fibDvr :: Atom ()
fibDvr = period 20 $ do
  x        <- word64 "x" 0 -- new index to send
  oldInd   <- word64 "oldInd" 0 -- previous index sent
  -- external signals --
  valRcvd  <- bool' "valRcvd" -- has the device received the new index?
  ans      <- word64' "ans" -- the newly-computed fib(x)
  ansReady <- bool' "ansReady" -- is an answer waiting?
  ----------------------
  valD     <- word64 "valD" 1 -- local copy of fib(x)
  newInd   <- bool "newInd" True -- a new index is ready
  waiting  <- bool "waiting" True -- waiting for a new computation

  atom "wait" $ do
    cond $ value valRcvd
    cond $ not_ $ value waiting
    newInd  <== false
    waiting <== true

  atom "getAns" $ do
    cond $ value ansReady
    cond $ value waiting
    cond $ value x <. 50
    valD    <== value ans
    x       <== value x + 5
    waiting <== false
    newInd  <== true
    oldInd  <== value x

Note that we’ve specified the period of the driver to be 20, meaning that its two rules get executed every 20 ticks.  So the driver is much slower than the device, but if our handshakes are correct, the two devices communicate correctly for any rates of execution of the two components.  (Proving it for all-time is a classic model checking problem.)

Compiling to C

We include a little Haskell function that we can call to “compile” fibDev and fibDvr into embedded C files.  (The compile function is part of Atom, and it takes a name for the generated C file and Atom specifications to compile.)

compileFib :: IO ()
compileFib = do
  compile "fibDev" $ fibDev
  compile "fibDvr" $ fibDvr

We can call this from an interpreter for Haskell; it takes about a second to compile. Doing so almost produces the source files fibDvr.c and fibDev.c. We do a few things manually:

  • Write two header files, fibDvr.h and fibDev.h, and include them. These let the two compilation units talk to each other through global variables.  We’ll also include stdio.h so we can printf our results.
  • Because Atom automatically (atomatically? :)) generates variable and function names in the generated code, we declare some of the identifiers in fibDev.c to be static so they aren’t globally visible.
  • We #define the variable names from the Atom-generated identifiers back to the expected identifiers for the variables that are shared.
  • And we add a little main function to execute the code: let’s execute the driver and device for 500 clock ticks:
    int main() {
       while(__clock < 500) {
          fibDvr();
          fibDev();
       }
       return 0;
    }

Of course, Atom could be extended to handle these things itself—John Van Enk has already started doing some of it.  In all, our 80-some lines of Atom compile to over 200 lines of embedded C.  So let’s test it!

> gcc -Wall -o fibDvr fibDev.c fibDvr.c
> ./fibDvr

generates the following output:

i: 0, fib(i): 1
i: 0, fib(i): 1
i: 0, fib(i): 1
i: 5, fib(i): 8
i: 5, fib(i): 8
i: 10, fib(i): 89
i: 10, fib(i): 89
i: 10, fib(i): 89
i: 15, fib(i): 987
i: 15, fib(i): 987
i: 15, fib(i): 987
i: 15, fib(i): 987
i: 20, fib(i): 10946
i: 20, fib(i): 10946
i: 20, fib(i): 10946
i: 20, fib(i): 10946
i: 25, fib(i): 121393
i: 25, fib(i): 121393
i: 25, fib(i): 121393
i: 25, fib(i): 121393
i: 25, fib(i): 121393
i: 30, fib(i): 1346269
i: 30, fib(i): 1346269
i: 30, fib(i): 1346269
i: 30, fib(i): 1346269

Wait, why are we getting the same answers multiple times? Recall that Atom is a synchronous language, so functions are executed based on time (measured in underlying clock ticks), not events. But most times, the guards don’t hold, so state isn’t updated. That’s what we see here.

Oh, we should check our specification. We can do that using our original Haskell specification:

> map fib [0,5..30]
[1,8,89,987,10946,121393,1346269]

Looks good!

Let me know if this helps you understand Atom, or if you have thoughts on how Atom compares to other languages.

Finally, here are the sources:

N-Version Programming… For the nth Time

April 27, 2009

Software contains faults.  The question is how to cost-effectively reduce the number of faults.  One approach that gained traction and then fell out of favor was N-version programming.  The basic idea is simple: have developer teams implement a specification independently of one another.  Then we can execute the programs concurrently and compare their results.  If we have, say, three separate programs, we vote their results, and if one result disagrees with the others, we presume that program contained a software bug.

N-version programming rests on the assumption that software bugs in independently-implemented programs are random, statistically-uncorrelated events.  If the different versions are likely to suffer the same errors, then running multiple versions does little to help detect them.

John Knight and Nancy Leveson famously debunked this assumption in the “Knight-Leveson experiment,” published in 1986.  In 1990, Knight and Leveson published a brief summary of the original experiment, as well as responses to subsequent criticisms of it, in their paper, A Reply to the Criticisms of the Knight & Leveson Experiment.

The problem with N-version programming is subtle: it’s not that it provides zero improvement in reliability, but that it provides significantly less improvement than is needed to make it cost-effective compared to other kinds of fault-tolerance (like architecture-level fault-tolerance).  Even small probabilities of correlated faults lead to significant reductions in the potential reliability improvement.

Lui Sha has a more recent (2001) IEEE Software article discussing N-version programming, taking into account that the software development cycle is finite: is it better to spend all your time and money on one reliable implementation or on three implementations that’ll be voted at runtime?  His answer is almost always the former (even if we assume uncorrelated faults!).

But rather than N versions of the same program, what about different programs compared at runtime?  That’s the basic idea of runtime monitoring.  In runtime monitoring, one program is the implementation and another is the specification; the implementation is checked against the specification at runtime.  This is easier than checking before runtime (in which case you’d have to mathematically prove that every possible execution satisfies the specification).  As Sha points out in his article, the specification can be slow and simple.  He gives the example of using the very simple Bubblesort as the runtime specification of the more complex Quicksort: if Quicksort does its job correctly (in O(n log n), assuming good pivots), then checking its output (i.e., a hopefully properly sorted list) with Bubblesort takes only linear time (despite Bubblesort taking O(n²) in general).

The simple idea of simple monitors fascinates me.  Bubblesort is not a full specification, though.  Although Sha doesn’t suggest it, we’d probably like our monitor to also compare the lengths of the input and output lists, to ensure that the Quicksort implementation didn’t drop elements.  And there’s still the possibility that the Quicksort implementation modifies elements, which a Bubblesort monitor also leaves unchecked.

But instead of just checking the output, we could sort the same input with both Quicksort and Bubblesort and compare the results.  This is a “stronger” check insofar as the two sorts would have to share exactly the same faults (e.g., not sorting, removing elements, changing elements) for an error to go uncaught.  The principal drawback is the latency of the slower Bubblesort as compared to Quicksort.  But sometimes it may be OK to signal an error (shortly) after a result is provided.
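
Here’s a small Haskell sketch of both kinds of check; quicksort and bubblesort below are ordinary textbook definitions standing in for the monitored implementation and the simple specification (the names and helper functions are mine, not Sha’s):

-- The (fast) implementation under observation.
quicksort :: Ord a => [a] -> [a]
quicksort []     = []
quicksort (p:xs) =
  quicksort [x | x <- xs, x < p] ++ [p] ++ quicksort [x | x <- xs, x >= p]

-- The slow, simple reference: adjacent swaps, repeated to a fixpoint.
bubblesort :: Ord a => [a] -> [a]
bubblesort xs = let ys = pass xs in if ys == xs then xs else bubblesort ys
  where
    pass (a:b:rest)
      | a > b     = b : pass (a : rest)
      | otherwise = a : pass (b : rest)
    pass zs       = zs

-- Cheap output check: ordered (linear time), and no elements dropped.
checkOutput :: Ord a => [a] -> [a] -> Bool
checkOutput input output =
  and (zipWith (<=) output (drop 1 output))
    && length output == length input

-- The "stronger" cross-check: both sorts must agree exactly.
crossCheck :: Ord a => [a] -> Bool
crossCheck xs = quicksort xs == bubblesort xs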

Just like for N-version programming, we would like the faults in our monitor to be statistically uncorrelated with those in the monitored software.  I am left wondering about the following questions:

  • Is there research comparing the kinds of programming errors made in radically different paradigms, such as Haskell and C?  Are there any faults we can claim are statistically uncorrelated?
  • Runtime monitoring itself is predicated on the belief that different programs will fail in statistically independent ways, just as N-version programming is.  The assumption is more plausible here, but does it hold?

Programming Languages for Unpiloted Air Vehicles

April 20, 2009

I recently presented a position paper at a workshop addressing software challenges for unpiloted air vehicles (UAVs).  The paper is coauthored with Don Stewart and John Van Enk.  From a software perspective, UAVs (and aircraft, in general) are fascinating.  Modern aircraft are essentially computers that fly, with a pilot for backup.  UAVs are computers that fly, without the backup.

Some day in the not-so-distant future, we may have UPS and FedEx cargo planes that are completely autonomous (it’ll be a while before people are comfortable with a pilot-less airplane).  These planes will fly in commercial airspace and land at commercial airports.  Ultimately, UAVs must be transparent: an observer should not be able to tell from its behavior whether an airplane is controlled by a human or by a computer.

You won’t be surprised to know a lot of software is required to make this happen.  To put things in perspective, Boeing’s 777 is said to contain over 2 million lines of newly-developed code; the Joint Strike Fighter aircraft is said to have over 5 million lines.  Next-generation UAVs, with pilot AI, UAV-to-UAV and UAV-to-ground communications, and arbitrary other functionality, will have millions more lines of code. And the software must be correct.

In our paper, we argue that the only way to get a handle on the complexity of UAV software is through the use of domain-specific languages (DSLs).  A good DSL relieves the application programmer of boilerplate activities and provides her with tools specific to her domain.  We go on to advocate the need for lightweight DSLs (LwDSLs), also known as embedded DSLs.  A LwDSL is one that is hosted in a mature, high-level language (like Haskell); it can be thought of as domain-specific libraries plus domain-specific syntax.  The big benefit of a LwDSL is that a new compiler and toolchain don’t need to be reinvented.  Indeed, as we report in the paper, companies realizing the power of LwDSLs are quietly gaining a competitive advantage.

Safety-critical systems, like those on UAVs, combine multiple software subsystems.  If each subsystem is programmed in its own LwDSL hosted in the same language, then it is easy to compose testing and validation across subsystem boundaries. (Only testing within each design-domain just won’t fly, pun intended.)

The “LwDSL approach” won’t magically make the problems of verifying life-critical software disappear, but “raising the abstraction level” must be our mantra moving forward.

Byzantine Cyclic Redundancy Checks (CRC)

April 18, 2009

I’m working on a NASA-sponsored project to monitor safety-critical embedded systems at runtime, and that’s started me thinking about cyclic redundancy checks (CRCs) again.  Error-checking codes are fundamental in fault-tolerant systems.  The basic idea is simple: a transmitter wants to send a data word, so it computes a CRC over the word.  It sends both the data word and the CRC to the receiver, which computes the same CRC over the received word.  If its computed CRC and the received one differ, then there was a transmission error.  (I’m simplifying a bit, but that’s the basic idea.)

CRCs have been around since the 1960s, and despite the simple mathematical theory on which they’re built (polynomial division in the Galois field GF(2), which contains just the two elements 0 and 1), I was surprised to see that even today, their fault-tolerance properties are in some cases unknown or misunderstood.  Phil Koopman at CMU has written a few nice papers over the past couple of years explaining some common misconceptions and analyzing commonly-used CRCs.
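
As a refresher on the underlying math, here’s a minimal Haskell sketch of a CRC computed as a shift register doing polynomial division over GF(2).  The USB-5 generator is x^5 + x^2 + 1.  This is the textbook view only; it deliberately ignores conventions such as USB’s nonzero initial register value and bit ordering, so don’t expect it to reproduce wire-level USB checksums:

-- Generator coefficients below the leading term, most significant
-- first: x^5 + x^2 + 1 has coefficients of x^4..x^0 equal to 0 0 1 0 1.
usb5 :: [Bool]
usb5 = [False, False, True, False, True]

-- One shift-register step: feed in a single message bit.
step :: [Bool] -> [Bool] -> Bool -> [Bool]
step poly reg b =
  let feedback = head reg /= b                 -- (/=) is XOR on Bool
      shifted  = tail reg ++ [False]
  in if feedback then zipWith (/=) shifted poly else shifted

-- The CRC of a message is the register state after feeding every bit.
crc :: [Bool] -> [Bool] -> [Bool]
crc poly = foldl (step poly) (replicate (length poly) False)

-- Receiver-side check: recompute the CRC and compare.
checkWord :: [Bool] -> [Bool] -> [Bool] -> Bool
checkWord poly msg receivedCrc = crc poly msg == receivedCrc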

Particularly, there seems to be an over-confidence in their ability to detect errors.  One fascinating result is the so-called “Schrödinger’s CRC,” so dubbed in a paper entitled Byzantine Fault Tolerance, from Theory to Reality, by Kevin Driscoll et al.  A Schrödinger’s CRC occurs when a transmitter broadcasts a data word and its associated CRC to two receivers, the data word is corrupted in transit for at least one receiver, and the corresponding CRC is corrupted too, so that each faulty word and faulty CRC still match!  How does this happen?  Let’s look at a concrete example:

             11-Bit Message           USB-5
Receiver A   1 0 1 1 0 1 1 0 0 1 1    0 1 0 0 1
Transmitter  1 ½ 1 1 0 1 1 ½ 0 1 1    ½ 1 0 0 1
Receiver B   1 1 1 1 0 1 1 1 0 1 1    1 1 0 0 1

We illustrate a transmitter broadcasting an 11-bit message to two receivers, A and B.  We use the USB-5 CRC, generally used to check USB token packets (by the way, for 11-bit messages, USB-5 has a Hamming distance of three, meaning the CRC will catch any corruption of fewer than three bits in the combined 11-bit message and CRC).  Now, suppose the transmitter has suffered a fault such as a “stuck-at-1/2” fault, so that periodically the transmitter fails to drive the signal on the bus sufficiently high or low.  A receiver may interpret such an intermediate signal as either a 0 or a 1.  In the figure, we show the transmitter sending three stuck-at-1/2 signals: two in the 11-bit message and one in the CRC.  The upshot is an example in which a CRC does not prevent a Byzantine fault: the two receivers obtain different messages, each of which passes its CRC check.

One question is how likely this scenario is.  Paulitsch et al. write that “The probability of a Schrödinger’s CRC is hard to evaluate.  A worst-case estimate of its occurrence due to a single device is the device failure rate.”  It’d be interesting to know if there’s any data on this probability.

