Wednesday, December 19, 2012
Here is the thing about building a robot (from scratch): your robot is unique. It is a physical entity, distinct from everything else. Even without the uniqueness (maybe you can make clones), it is still a "thing". If it performs slowly or has defects, that just makes it somewhat quaint.
I'm sitting here looking at my son's old Robosapien from 2004. That is ancient technology. But if you put in some batteries and power it up, it still turns heads and generates smiles. This is not nostalgia at work. When flawlessly articulated robots exist pervasively in homes, it will be nostalgic. Right now, however, it is still a minor marvel.
Internet/PC/tablet/phone software is notoriously non-future-proof. This is why writing software is mostly an endeavor that you must enjoy in the "here and now". Once you are done, it is already on its way to obsolescence.
Embedded software is not as bad, provided your device is not cutting edge (e.g. smartphone peripherals are short lived; smoke alarms, washer/dryers and heart monitors are long lived).
I suspect that it is the physical nature of the robot that makes it future-proof. This may be why biomorphic robots are more interesting to me than the ones with big brains.
There is also the appealing (to me) image of the lone hacker working in a dark, shadowy workshop focusing on getting the actuators to turn... just right...
Saturday, December 01, 2012
Lagging on Posts
Not much copious free time these days. A lot of my electronics/MCU work has been shelved (temporarily) and I am doing more PC-oriented things at work that take up much of my time.
I am hoping to (re)start blogging, and there will be future posts regarding:
- A class on "hacking" (in the old school sense) I am teaching to homeschooled kids starting in January
- My adventures using Haskell and OpenCV as a means to take a much higher level approach to the home monitoring stuff I've been blogging about here
More to come...
/todd
Wednesday, September 26, 2012
Cheap little linux things
Forget the Raspberry Pi. This is the cheapest "complete" Linux-able appliance I've seen so far: http://www.amazon.com/MK802-Android-Google-Player-Allwinner/dp/B008F5NSLU/ref=wl_it_dp_o_pdT1_S_nC?ie=UTF8&colid=1JI25VHV3LM3P&coliid=I22TB06XNRJJBC
(The Pi doesn't come with Wi-fi, is less powerful and is somewhat larger. The Pi is more hackable, but if you are looking for more of a turn-key system, these Android based gadgets look interesting.)
Monday, August 20, 2012
Webcam based Motion Detection: Unix Style!
I've been playing around with the idea of using cheap USB webcams as motion sensors. My PIR solution would detune in the presence of heat (a kitchen stove!).
I picked up a cheap Logitech webcam (C200 for $15 at Micro Center) and started writing some C code... but hey, wait -- that isn't the Unix way.
I downloaded a camera capture tool (fswebcam) and GraphicsMagick (a cleaner ImageMagick) and threw together this little script:
#!/bin/sh
THRESH=10    # Motion threshold (against the "Total" metric from gm compare)
SCALE=64x48  # Downscale captures to keep comparisons cheap

# Set up a small ramdisk so repeated captures never touch flash/disk.
# (mkdir -p so a leftover /tmp/rd from a previous run doesn't abort the chain.)
sudo umount /tmp/rd 2>/dev/null
sudo mkdir -p /tmp/rd && sudo chmod 777 /tmp/rd && \
    sudo mount -t tmpfs -o size=100K tmpfs /tmp/rd/
trap "sudo umount /tmp/rd" EXIT

# Grab a baseline frame, then compare each new frame against the last one.
fswebcam -q --greyscale --no-banner --scale $SCALE - >/tmp/rd/before.dat
while true; do
    sleep 0.5
    fswebcam -q --greyscale --no-banner --scale $SCALE - >/tmp/rd/after.dat
    gm compare -metric RMSE /tmp/rd/before.dat /tmp/rd/after.dat null: \
        | awk -v TH=$THRESH '/Total/{ if ($3 > TH) printf("Motion!(%f)\n",$3) }'
    mv /tmp/rd/after.dat /tmp/rd/before.dat
done
Note: I am using a ramdisk to keep things quick (and embedded friendly -- no wear on flash storage). The image is scaled down to keep comparisons cheap and fast.
I now need to see if I can port this to my Beaglebone!
Saturday, June 16, 2012
Strategies for Erlang/OTP on a small embedded (turnkey) platform (i.e. Beaglebone)
As I have mentioned before, I am pretty much using Erlang on the Beaglebone as if it was running on a small (very constrained) server. I'm not using it for bit banging or peripheral interfacing. I've got very low power (and handy) 8-bit micros to deal with that. Any peripheral I connect to the Beaglebone will be bridged through a UART.
Where Erlang/OTP will work for me on the Beaglebone is to effectively do what it would on a large server: I want the Beaglebone to be the brains behind my home sensor network.
Why not just use a PC? Well, this is effectively (once configured) a turnkey system. Turn it on, stick it onto your network (or, if there's no 24/7 internet connection, connect it to a cellular modem) and it does its job. It does its job with very little power consumption, a small footprint and minimal "boot" time. It is just another appliance. (Note: I am doing development and testing on a PC, only every once in a while making sure that the code runs okay on the Beaglebone. But the final target is a Beaglebone.)
However, with this smallness comes some constraints. The first is the most glaring one: there is no hard disk. The Beaglebone has a microSD, but that is the boot medium. On a failsafe system we do not boot from the same place we actively write data to. So, I am going to have to make sure that Erlang logs nothing to the microSD. (Yes, I know that the microSD is partitioned, but it *is* just one medium. It is one point of failure, and I fear the mysterious vendor-dependent machinations of wear leveling.)
What? No logs? Well, think about it. Who is going to see those logs? Well, you could upload them to a server on the internet as part of a bug report, right? Really? What use will a bug report be when Grandma's basement is flooding and the monitoring system isn't doing its job?
Okay, this thing is going to have to be reliable. Think "automobile sensor system" reliable. Think about what happens when your automobile's sensor system fails: This is a very big deal. Home monitoring doesn't sound as important as your automobile's sensor system, but when it fails the results can lead to similar problems. This is one of the reasons I chose Erlang/OTP. I want some support (and some nudging) in creating a very reliable home monitoring system.
But, I digress. What are these strategies I must consider when doing Erlang/OTP on such a small platform?
- No logging to disk. Nope, the microSD is just for booting and maintaining a list of registered sensor nodes with encryption key, signature and assignments. At some point I am considering adding additional flash storage (NAND serial flash memory?) just for this registration. The microSD needs to be kept as clean as possible -- hey, I need to check to see what Linux is touching there too (see next item). In general: Assume you will run 24/7 without total system failures and keep everything in RAM (Erlang ets?). If there is a major glitch requiring a reboot, do it fast and start fresh.
- Trim Linux. Honestly, it is pretty cool to be able to ssh into my router or set-top box. But the final version of this home sensor base station should not need a bunch of Linux services running. Trimming services should even improve boot time. I don't want anything running that isn't under *my* control. I'm a bit of a control freak. Heh, Erlang (beam) could even be process pid 1, as far as I am concerned ;-) Well, I may still want to keep an ssh daemon running to debug it... but it won't be of much use when the box sits at Grandma's house.
- You don't have a ton of RAM. Run light. If you are an internet connected device (and my home monitor certainly is), then get the information off of the device (and into the cloud) as soon as possible.
- One home monitor base station = one single point of failure. Consider a couple of stations (two Beaglebones?) working together: redundancy. When both are alive, they agree on which one gets to handle "control" (i.e. sending the data to a cloud server, turning lights on/off, etc.). They both receive, track and analyze all sensor broadcasts, but only one does anything with that data. The two servers can be joined by co-heartbeats. Maybe one has a cellular modem in case the Internet connection goes bye-bye. This may sound like overkill, but it dramatically increases the reliability of the system. How to effectively manage this (particularly after a "failover") is tricky. Assuming that the failed server recovers (reboots or is "plugged back into the outlet"), the two need to be brought back into sync (somewhat).
The redundancy item is the most interesting problem right now. I am hoping that OTP will help me build the solution. I don't need to completely solve it right now, but it should be considered when building my "first, one and only" home monitor base station. A cluster of Beaglebones anyone?
Wednesday, June 13, 2012
My Home Monitoring Project: Goals, where/why Forth and Erlang?
A few people have asked, so here is the quick lowdown:
I am building a home sensor network for my house, as well as for a target group of "independent elderly" who have grown kids that want to keep track of them (did you leave the stove on after going to bed? etc.).
I would like the system to support lots of distributed sensors, so they need to be inexpensive ($20 target BOM). Right now, your home sensor choices are X10, Z-Wave, etc. They are either too power dependent, too limited or too expensive. You should be able to put a sensor in every room of your house (and maybe some in your garden, garage or yard).
The sensors should be very, very low power and ideally run off of CR2032 batteries or 2 AAAs. My sensor MCU is a Silabs C8051F912 (8K Flash; 768 bytes of RAM and insanely low power). I am programming it with Charley Shattuck's MyForth (sort of a macro-assembler that feels like Forth). I am using an RFM12B 433MHz transceiver with my own protocol (soon to be published). The protocol is encrypted using RC4 (with a 3 byte counter -- 16 million unique key sequences before rolling over is secure enough given a rate of 1 sensor message per minute; I am mainly using it for authentication to prevent spoofing -- the data itself isn't that "secret").
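For the curious, the guts of RC4 are tiny, which is part of why it fits an 8-bit MCU so well. Here is a C sketch of the idea -- the 8 byte key size and the key-plus-counter mixing shown here are illustrative assumptions, not the actual (still unpublished) protocol:

#include <stdint.h>
#include <string.h>

static uint8_t S[256];   /* RC4 state */

/* Key scheduling: permute S using the key bytes. */
static void rc4_init(const uint8_t *key, size_t keylen)
{
    int i, j;
    uint8_t t;
    for (i = 0; i < 256; i++) S[i] = (uint8_t)i;
    for (i = 0, j = 0; i < 256; i++) {
        j = (j + S[i] + key[i % keylen]) & 0xFF;
        t = S[i]; S[i] = S[j]; S[j] = t;
    }
}

/* Keystream generation: XOR the payload in place. */
static void rc4_crypt(uint8_t *buf, size_t len)
{
    int i = 0, j = 0;
    uint8_t t;
    while (len--) {
        i = (i + 1) & 0xFF;
        j = (j + S[i]) & 0xFF;
        t = S[i]; S[i] = S[j]; S[j] = t;
        *buf++ ^= S[(S[i] + S[j]) & 0xFF];
    }
}

/* Per message: mix the 3 byte counter into the key so every message
   gets a fresh keystream (hypothetical arrangement). */
void encrypt_message(const uint8_t secret[8], uint32_t ctr,
                     uint8_t *payload, size_t len)
{
    uint8_t key[11];
    memcpy(key, secret, 8);
    key[8]  = (uint8_t)(ctr >> 16);
    key[9]  = (uint8_t)(ctr >> 8);
    key[10] = (uint8_t)ctr;
    rc4_init(key, sizeof key);
    rc4_crypt(payload, len);
}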
I am not that interested in home "control" yet (turning on lights, etc -- anything that involves AC power), but the types of things I want to sense are:
- Basement flooding (Water activated Switch).
- Motion (entry/exit of house and rooms).
- Temperature (each room and outdoors).
- Stove (Motion + temperature + time-of-day): Is the stove on for a long time? Is anyone in the kitchen? Is it an odd time for the stove to be on?
- Soil moisture (garden).
- Doorbell (button press to ring + log it -- was someone at the door earlier today?)
- Vibration/Motion (Was someone on the back deck?)
- Tap detect (did someone knock on the front door?)
- Open Window/Door
It would be nice to combine as many sensor capabilities as possible into one device (perhaps vibration, motion, temperature & switch) and then analyze the data based on what the device is supposed to monitor.
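To make that concrete, a combined node's report could be one fixed little record, with the base station interpreting the fields according to the node's assigned role. A hypothetical layout (not my actual wire format):

#include <stdint.h>

/* One wire format for every node; the base station decides what the
   fields mean based on the node's registered assignment (stove monitor,
   door sensor, garden probe, ...). Field choices are illustrative only. */
struct sensor_report {
    uint8_t node_id;    /* which sensor node sent this */
    uint8_t flags;      /* bit 0: motion, bit 1: switch closed,
                           bit 2: vibration over threshold */
    int8_t  temp_c;     /* temperature in degrees C */
    uint8_t vib_peak;   /* peak accelerometer/piezo reading */
    uint8_t ctr[3];     /* 24-bit message counter (for the RC4 scheme) */
};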
Some of my current sensor elements include:
- Piezo vibration sensors
- Passive IR sensors
- MEMs accelerometers (for knocks, etc)
- Magnetic reed switches (for detecting open windows and doors)
The base station is currently utilizing a Beaglebone running Erlang (in truth I am doing development on a laptop running Erlang, but my test target is the Beaglebone). I am also looking at using RabbitMQ for reliable delivery to the "Cloud" (for further processing/notification). A RabbitMQ queue will run on the Beaglebone so that sensor data is queued locally if an Internet connection is not available. A local server (also on the Beaglebone) will feed off of the queues to do things like control lights, bells, pumps, etc. There will be a shovel between the local RabbitMQ and the Cloud based RabbitMQ.
I chose Erlang because it has a nice message protocol parsing capability and OTP is focused on 24/7 availability. I have been familiar with Erlang for over 10 years, but this is the first year I've actually had an opportunity to dive in deeply. Plus I am doing some Erlang at work so there is some mental synergy.
I chose MyForth because it is nice and forces me to really think about small system development.
I chose the Silabs 8051 because it is a very, very nice 8-bit family with a ton of peripheral support -- it is also cheap, ubiquitous and low power. I've used it (and MyForth) on a couple of job related tasks and am very happy with the match.
Where am I right now?
- One prototype PIR + temperature sensor is complete
- One base station radio transceiver -> USB/UART prototype completed.
- Almost done with Erlang base station message processing module (decryption tonight!)
I have a long road ahead, but I am enjoying tackling things at a lower level (I just finished implementing RC4 in MyForth!).
Wednesday, June 06, 2012
Erlang on Beaglebone: Don't sweat the small stuff
My interest regarding Erlang on the Beaglebone (or any other small low power ARM system) is less about the peripheral device capability and more about running "big system" stuff on small platforms.
I've thought about hooking SPI (or I2C) peripherals to the Beaglebone, but it just seems too complicated to be worth the effort. Apparently, I will need to patch the Linux kernel and then convince Erlang to play with it by writing a Port driver. It's really at the Linux level where things start to get complex:
Every peripheral uses SPI in its own manner. SPI is synchronous: you can't just ask a SPI device to "get 1024 bytes" and passively consume the data. As the master, you need to send a byte for every byte you want to receive (and understand that with 4-wire SPI you receive a byte simultaneously while sending one).
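To make the "byte out for every byte in" point concrete, here is what a bit-banged SPI master transfer boils down to (mode 0; the GPIO helper names are hypothetical):

#include <stdint.h>

/* Hypothetical GPIO helpers for the four wires. */
extern void set_mosi(int bit);
extern int  read_miso(void);
extern void clk_high(void);
extern void clk_low(void);

/* Full-duplex transfer: the master always shifts a byte out in order
   to shift a byte in -- there is no "just receive" in SPI. */
uint8_t spi_transfer(uint8_t out)
{
    uint8_t in = 0;
    int bit;
    for (bit = 7; bit >= 0; bit--) {
        set_mosi((out >> bit) & 1);       /* present the next output bit */
        clk_high();                       /* slave samples MOSI, drives MISO */
        in = (uint8_t)((in << 1) | read_miso());
        clk_low();
    }
    return in;
}

/* So "get 1024 bytes" really means "send 1024 dummy bytes": */
void spi_read(uint8_t *buf, int n)
{
    int i;
    for (i = 0; i < n; i++)
        buf[i] = spi_transfer(0xFF);      /* 0xFF is a common dummy value */
}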
You can implement a peripheral's SPI protocol in kernel or user space under Linux.
However, the notion of I/O buffering and time-sliced multitasking is somewhat counter to what SPI is about. (Of course you can do SPI in a multitasking environment, but unlike asynchronous UART buffering, if you write a user space SPI driver you can't expect the kernel to do more than one byte's worth of SPI work while you wait for your next slice.)
A kernel based SPI protocol is more efficient, but it means that you are mucking about with kernel development for *every* peripheral you want to support. I don't want to write kernel drivers for every SPI device.
Alas, handling all of this is trivial with a simple microcontroller.
I believe that dealing with low level synchronous protocols is not a good fit for Erlang. I would rather have Erlang modelling my application domain than worrying about bit banging.
My current choice for a home sensor transceiver is the RFM12B. It talks SPI. It has a 2 byte internal buffer for bytes it receives over-the-air. You need a very responsive SPI driver or you will lose data.
I will use a $3 microcontroller as a bridge between the RFM12B and the Beaglebone UART. The microcontroller will handle the SPI and stream it to a simple asynchronous serial protocol for Linux to receive and buffer.
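The bridge firmware amounts to a tight loop like this sketch (the rfm12_* and uart_* names are placeholders for real drivers, not actual library calls):

#include <stdint.h>

/* Placeholder driver hooks. */
extern int     rfm12_byte_ready(void);  /* transceiver raised its IRQ? */
extern uint8_t rfm12_read_fifo(void);   /* clock the byte out over SPI */
extern void    uart_putc(uint8_t c);    /* buffered, interrupt-driven TX */

void bridge_loop(void)
{
    for (;;) {
        /* Service the radio first and always: its 2 byte FIFO overruns
           quickly, so the SPI read must never wait on anything else. */
        while (rfm12_byte_ready())
            uart_putc(rfm12_read_fifo());
        /* Once the bytes hit the UART, Linux's tty layer buffers them
           and Erlang can parse frames at its leisure. A real bridge
           would add framing/escaping so message boundaries can be
           found in the byte stream. */
    }
}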
The Erlang on the Beaglebone will handle the sensor protocol parsing via a standard Linux UART (/dev/XXX). Because Erlang is adept at parsing binary, I'll keep the UART protocol binary.
I won't use Linux/Erlang to sweat the small stuff -- microcontrollers make excellent bridges!
Friday, May 25, 2012
A smattering of MyForth: CRC16 checksums
I talk a lot about using MyForth in this blog. MyForth was written by Charley Shattuck as a minimalist 8-bit Forth for the 8051. As an 8-bit Forth it doesn't have a lot of math capabilities (how much math can you really do in just 8 bits?). However, I've added a few primitives to help manipulate 16 bit values (represented as two 8 bit numbers on the stack, with the MSB topmost).
So, I needed to compute a 16 bit checksum. The Silabs C8051F930 has a built in CRC engine, but I wasn't using that particular MCU. However, I wanted to maintain some compatibility across Silabs chips, so I looked at the C8051F930 datasheet and found a C implementation of the built in CRC checksum.
Here is the basic C function (abridged):
#define POLY 0x1021

unsigned short crc (unsigned short CRC_acc, unsigned char CRC_input)
{
    unsigned char i;
    CRC_acc = CRC_acc ^ (CRC_input << 8);
    for (i = 0; i < 8; i++) {
        if ((CRC_acc & 0x8000) == 0x8000) {
            CRC_acc = (CRC_acc << 1) ^ POLY;
        } else {
            // if not, just shift the CRC value
            CRC_acc = CRC_acc << 1;
        }
    }
    return CRC_acc;
}

And here is my MyForth translation:

$1021 constant POLY

: crc-xor-poly ( accum16 -- accum16 )
   -if d2* POLY ## dxor ; then d2* ;

: >crc ( accum16 c -- accum16 )
   0 # swap dxor
   8 # 2 #for crc-xor-poly 2 #next ;

I added a couple of helper functions to MyForth to deal with the 16 bit numbers:

: dxor rot xor push xor pop ;
: d2* swap 2* push 2*' pop swap ;

I don't expect you to understand the code (and I will not attempt a detailed explanation here), but I thought it would be interesting to show what minimalist Forth code can look like.
It may be worth pointing out a few things:
- The stack is 8 bits wide, and 16 bit numbers are pushed as two 8 bit values (MSB topmost).
- The "-if" is a quick way to check if the 7th bit is set on the MSB (hence negative).
- "If" condition values are not consumed off of the stack, which allows me to test and use the top value without a "dup".
- The weird looking "for" loop ("2 #for") indicates that I am using Register 2 to hold the loop value. Yes, I have to do bookkeeping on registers (remember the days of old?), but it isn't as bad as it seems. Deeply nested loops are frowned upon in Forth.
There is certainly a brevity and density to the MyForth source. ;-)
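As a sanity check on the C version above: with a zero initial accumulator this is the common CRC-16/XMODEM arrangement (poly 0x1021, MSB first, no reflection), so feeding it the ASCII string "123456789" should -- if I have it right -- produce 0x31C3:

#include <stdio.h>

unsigned short crc(unsigned short CRC_acc, unsigned char CRC_input); /* above */

int main(void)
{
    unsigned short acc = 0;
    const char *s = "123456789";
    while (*s)
        acc = crc(acc, (unsigned char)*s++);
    printf("%04X\n", acc);   /* expect 31C3 */
    return 0;
}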
Wednesday, May 23, 2012
Beaglebone and RabbitMQ and Erlang/OTP...really?
So, what good is a 24/7 home sensor base station if it loses messages?
I originally planned to run the Beaglebone based station as an AMQP client talking to a cloud server where data would be correlated and presented to users. Unfortunately, from a design perspective, this doesn't address temporary "internet outages". What does the station do with a bunch of events when it can't reach the cloud server?
Well, how about running RabbitMQ on a Beaglebone? This local instance of RabbitMQ would act as a cache and shovel events/messages to the cloud server RabbitMQ when it is available. By using a local "work" queue, I can utilize a different delivery path in case the internet connection has gone south (power outage?) for a pre-defined duration. This backup path could be a cellular modem.
I am using the Nerves Erlang distribution, and after building a bunch of supplemental Erlang packages, I was able to get the latest RabbitMQ (2.8.2) running on the Beaglebone. I haven't put it through its paces, but it does seem to run the management plug-in and I can navigate it with a web browser.
Resource-wise, the "freshly" launched RAM footprint is around 30MB. Performance may be an issue, but it looks like I have some room for at least a few sensor event messages. And I don't need spectacular performance; I need reliability.
I will put it through some stress tests, but I like the overall architectural approach: Erlang/OTP and RabbitMQ in the home base station.
Thursday, May 10, 2012
Bluegiga BLE112 + Beaglebone + Erlang = ?
So, I got a Beaglebone yesterday. I am considering it as the potential base host for my home monitoring system. After verifying that it would boot, I downloaded a Buildroot based Erlang image from http://nerves-project.org/. So far, so good.
I plugged in the Bluegiga BLE112 bluetooth USB dongle and it was correctly recognized as a serial port (/dev/ttyACM0). So far, so good.
So, would it take my initial base software (just a BLE112 dongle test)? I am using Feuerlabs' serial port library (as recommended to me by Ulf Wiger in a comment to my previous BLE112 blog entry).
So, I downloaded and built the Buildroot environment for Nerves (mainly to get the ARM cross compiler installed with the uclibc library). So far, so good.
The serial port library compiled without a hitch (after setting TARGET_SYS to my ARM compiler suite). But, it couldn't be that simple, could it?
I copied my sources (including the compiled serial library) to the microSD, booted the Beaglebone and gave it a try. It worked.
Today was a good day.
Now, here is the bigger task at hand: Determine if a 256MB RAM ARM is sufficient to do some serious Erlang work (maybe with some OTP too?). I've seen the neat tricks (running Erlang on Raspberry Pi), but how about something more than blinking lights?
Stay tuned.
Saturday, May 05, 2012
Ignoring the wheel: Taking non-mainstream approaches to embedded design
As I look at the documentation page for the BLE112 (Bluegiga's Bluetooth LE module), I am reminded of how complex computing has become. There are videos, spec sheets and software guides (well over a dozen documents, not including slick sheets and qualification documents). All of this for a small embedded device that uses an 8051. This is all "high level" documentation. At the end of all this you can develop your BLE112 comms using an API or their own scripting language.
I understand the complexities involved and why there is so much documentation. But that isn't all: Add to that all of the Erlang OTP stuff I am planning on doing on the server side. There is lots of documentation; lots of "other people's stuff" I need to master.
Sometimes I "ignore the wheel". This is like "re-inventing the wheel" but tends to avoid the actual construction of a wheel itself. "Ignoring the wheel" is about coming up with your own means of transportation.
Rather than use a mainstream approach, I roll my own solutions. This usually means ignoring an already written body of software (libraries), but when I start from scratch, I intimately understand everything I am working with. I *have* to intimately understand everything I am working with.
Now, one trade-off that must be made for "ignoring the wheel" is that you have to get creative with your resources.
Let me propose an example: Building a (pro quality) Data Logger the size of your thumb. Now, this data logger must read some arbitrary sensor every 10 seconds and log it to persistent storage until it is pulled off of the device and analyzed later on a PC. It must be able to capture tens-of-thousands of short (10 bytes?) log events. The whole thing (case and battery included) should be about the size of your thumb.
How would you approach designing it? (Remember, this should be a "pro quality" logger, not a toy -- it must work flawlessly.)
Linux on an ARM? Small, but probably not small enough and I doubt you could run it off a small (coin cell size) battery -- power consumption is an important factor here.
Okay, maybe something Arduino-like. Or maybe a PIC, or even a Silabs 8051. You want something tiny that uses very little power. The tiny MCUs tend to have less RAM and Flash program space, but this is just a simple data logger, right?
Now, what about the log storage medium? A microSD card is small. Plus, you can take it out and plug it into your computer to dump the data. This sounds great.
So, you go with a microSD. Now, of course you'll have to format it as FAT16 or FAT32 to make it readable by the PC (besides there are plenty of FAT libs for MCUs out there, right?).
Now you have a problem: FAT is simple, but still requires choosing a library and getting it to compile. Plus, you'll need (at least) 512 bytes of RAM to hold sectors/buffers. You did pick an MCU with more than 512 bytes of RAM, right?
Is the FAT implementation reliable? Is it rock solid? Can you trust your important logs to it?
Now, how do you arrange the logging? Will you exceed the maximum FAT file size? How do you name the file?
Remember the original goal: You are building a data logger, not a database.
Okay, you get the picture. For hobbyist needs, a simple logger (like OpenLog) works in a pinch. But things start to get complicated when you consider reliability and longevity.
How can we simplify this design?
First, do you really need FAT? Can you develop a custom "log system" that writes to the "raw" microSD and develop a reader on the PC side? Do you really need the flexibility of a "file system"? Consider this: Figure out the max size of a log entry (typically a logger works with structured sensor data: GPS, environmentals, etc). On a 2GB microSD you can fit around 100 million 20 byte logs. Is that adequate?
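A raw "log system" can be almost embarrassingly small. Something along these lines, where sd_write_block() stands in for whatever low level block driver you have (all names and the 0xFF fill convention are assumptions):

#include <stdint.h>
#include <string.h>

#define BLOCK   512                 /* SD cards are addressed in 512 byte blocks */
#define RECLEN  20                  /* fixed size log record */
#define PER_BLK (BLOCK / RECLEN)    /* 25 records per block */

extern void sd_write_block(uint32_t lba, const uint8_t *buf); /* assumed driver */

static uint8_t  blk[BLOCK];
static uint32_t next_lba = 1;       /* block 0 reserved for a header */
static int      nrec;

/* Append one fixed size record; flush a block when it fills. */
void log_append(const uint8_t rec[RECLEN])
{
    memcpy(blk + nrec * RECLEN, rec, RECLEN);
    if (++nrec == PER_BLK) {
        sd_write_block(next_lba++, blk);
        memset(blk, 0xFF, BLOCK);   /* 0xFF marks unused slots */
        nrec = 0;
    }
}

The PC-side reader just scans blocks until it hits 0xFF-filled slots, and after a power cycle the logger can search for the first unused block to recover its write cursor. No FAT, no filenames, no file size limits.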
Second, do you really need a microSD? What if you used a serial Flash storage chip? Maybe one that doesn't require 512 byte RAM buffers for sector writes. Atmel makes a family of serial flash chips that have "on board" pages that you can randomly access before committing to sectors.
Maybe you can pull the data off serially with a cable, or maybe via wireless?
Here is an interesting observation I've made about "ignoring the wheel" in my own designs: It reinforces the XP tenet: "do the simplest thing that could possibly work". Because I deal a lot with anemic microcontrollers (8051) and minimalist languages (Forth), the ordeal of supporting FAT on a microSD makes me question whether it is really needed just to log a bunch of data quickly and reliably.
If the customer says I need to use a microSD, okay. But what exactly are they expecting for the FAT support? Could something like this (http://elm-chan.org/fsw/ff/00index_p.html) work? It only supports 1 fixed size file at a time, but it is very simple. Essentially, all I would need for a data logger is "write" capability to the filesystem. Why have a full FAT implementation on a tiny (write only) logger?
Can I ignore the wheel and just do something simple?
Sunday, April 22, 2012
Why Forth still matters in this ARM/Linux/Android world
My sensors (for my home monitoring system) need to run off of coin cell batteries (the sensors need to be tiny). They also need to run for at least 1 year (under normal circumstances).
The sensor transceivers consume around 15mA when transmitting. This is a lot for a coin cell battery. The general sensor development strategy is to use communication sparingly: If you don't need to broadcast data, don't.
With a battery that holds only around 200mAh of charge, you really need to start thinking about low power design. An ARM/Cortex capable of running Linux consumes a lot of current. You are going to use a low power 8 or 16 bit processor (e.g. Silabs 8051, MSP430, etc).
Now add C and some libraries to the mix. The more generic/portable stuff you do, the longer the processor is going to stay awake. While it is awake, it is consuming power. The longer it "sleeps", the longer your battery will last.
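Some back-of-envelope arithmetic shows why (every figure here is an assumption for illustration, not a measurement):

/* Average the awake and sleep currents over one reporting period,
   then divide the battery capacity by the result. */
double battery_hours(double capacity_mAh,
                     double awake_mA, double awake_s,
                     double sleep_mA, double period_s)
{
    double avg_mA = (awake_mA * awake_s +
                     sleep_mA * (period_s - awake_s)) / period_s;
    return capacity_mAh / avg_mA;
}

/* 15mA awake for a full second out of every 60s, 0.001mA asleep, and a
   200mAh cell: ~0.25mA average, so only ~800 hours -- about a month.
   Cut the awake time to 25ms and the same cell lasts roughly 3 years. */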
So, I am using a very low level Forth (MyForth) on a Silabs 8051 low power processor. MyForth is essentially a high level macro assembler -- there is very little overhead to support Forth.
There are no libraries that come with MyForth, so I had to roll my own code. This keeps me true to the Forth philosophy (or at least Chuck Moore's philosophy) of only doing what you need to do to meet the task at hand. I am not writing "generic" libraries. I have to fully understand the devices I am interfacing with. I doubt that I could write tighter/faster code for my transceiver -- I am engaging it at a primitive level. No abstractions but the ones that MyForth provides at a (mostly) macro level. Forth, like Tcl, is more idiom reuse oriented -- you can do a lot with just a little code.
If I need to work harder to make sure that my sensor works 24/7 for at least a year (or two!) off of a coin cell battery, then I will.
My stove monitor (temperature + motion sensor) runs on a Silabs C8051F912. That MCU has 768 bytes of RAM and 16KB of Flash. I am talking to an RFM12B transceiver over a SPI interface and controlling a low power IR motion sensor module. Currently, I am broadcasting motion and temperature over the air using less than 128 bytes of RAM (including the Forth stack) and less than 4KB of Flash. I spend only a few seconds awake, and most of the time asleep, consuming less than 0.001mA of battery current.
You can do this in C (of course!), but I don't think you could reach the code density (especially as you start using third party libs). I can also hand tune my code directly in assembler where needed.
Yep, Forth still matters to me.
Does the Roku need Erlang/OTP?
My Roku streaming player locks up every once in a while (every couple of months). It seems to do this in two different ways: sometimes it seems to get confused about the Wi-Fi connection, and sometimes it just freezes in the middle of a movie. We are heavy Roku users in my household, and it works flawlessly most of the time, so it is hard to get a good idea why this is happening (there is no consistent scenario).
So, here is an embedded device that is supposed to work 24/7, and my only recourse when it locks up is to unplug the power supply and plug it back in. There are no buttons on the unit itself to "reboot" it.
This kind of device is ripe for the "Erlang" approach. Now, of course, if you've been doing embedded firmware for any decent length of time, you'd realize that you don't "need" Erlang to fix this. Watchdog timers, process monitors, a soft-reboot button, etc are the first things that come to mind.
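The embedded reflex can be sketched in a few lines: a tiny Unix process monitor that restarts its charge whenever it dies. This is just an illustration of the watchdog/monitor idea (not how Roku works, and no substitute for a hardware watchdog behind it):

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <program> [args...]\n", argv[0]);
        return 1;
    }
    for (;;) {
        pid_t pid = fork();
        if (pid == 0) {
            execvp(argv[1], &argv[1]);  /* child: become the monitored app */
            _exit(127);                 /* exec failed */
        }
        int status;
        waitpid(pid, &status, 0);       /* parent: block until it dies */
        sleep(1);                       /* brief backoff, then restart */
    }
}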
But a lot of the software that runs on these set-top boxes seems to come from the "other side" (non-embedded developers). I have no proof of this, but the lock ups smell of that kind of development mentality. When I used to build highly available internet server systems, I always made sure that I had terminal access so that I could log in and kill/restart stuff to fix problems. I knew that my stuff needed to run standalone, but I also knew that I could log into a running system, look around at what the problem was, restart stuff and take the fix back to development for the next release.
You don't get any of that with "appliance" devices. So, the more I did embedded work, the more I developed a "no login; no logs" mindset. Stuff needs to run, damn the logs.
This year I had to do some Cloud apps. I used Erlang/OTP. I thought I had caught all of the failure conditions, but some third party code would fail every once in a while. The system would run for a week or two, but then mysteriously crash. Thank goodness for the logs. I logged in and reviewed about 1MB of logging to find the problem. I fixed it, uploaded new code and restarted the servers.
This doesn't work for a Roku. Once it ships, there is no developer login. The device must never lock up. The user must never lose control. Even if a full reset is needed, the user should be able to do it from the couch. All processes (and devices) must be monitored.
My home monitoring system base station is currently using Erlang/OTP -- not because Erlang solves these problems, but because Erlang/OTP was designed to solve these problems.
Saturday, April 21, 2012
Where is my jetpack?
I'm getting old, and I am getting impatient with technology. Things are getting smaller, sleeker and sexier -- but they aren't getting any smarter.
Where are the big ideas? Why are we still writing web application frameworks? Why are we porting Linux to anything that has a microcontroller and going "look! Isn't that cool? It's running on my watch."
Where is my jetpack?
Or, more relevant to computer technology:
Where is my Dynabook? (No, an iPad isn't a Dynabook, and the XO is too cumbersome.)
Where is my Hitchhiker's Guide to the Galaxy? (The WikiReader is close, but it needs graphics and a voice reader.)
Where is my "House of the Future"? (It costs way too damn much and has no coherent operating "vision".)
Yes, I know, I am bitching and moaning. Why don't I do something about it?
Well, that is why I am working on the "House of the Future" (my home monitoring project).
I'm thinking of forking this blog to document my progress.
Saturday, April 14, 2012
Counter Point (Bluegiga BLE112: A Game Changer?)
After writing the last blog entry, a thought kept recurring: "Mission Accomplished." You may have immediately caught on to the reference: President Bush's 2003 Mission Accomplished speech.
I certainly don't want to trivialize the aftermath of that speech (are we really done yet?), but it does remind me of the faith we (the software community) place in abstractions and generalizations.
A scripting language onboard the BLE112, so it will only take a few lines of code to do my sensors? Sounds great. But there are no reports from the ground yet. Is the scripting language stable? Are there bugs hidden in the implementation? Are there subtle things that it does automatically that will bite me in the ass?
The same questions can be asked about Embedded Erlang. It sounds fantastic, and of course the first major Erlang deployment was indeed embedded (the AXD301 switch). But a lot has been added to Erlang and OTP since then. Is all of it good? Is all of it stable and proven?
This is not a criticism of BLE112 scripting or Erlang, but a reminder of why I went "low level" 6 years ago in the first place: The abstractions and libraries were killing me.
A thought: My Roku locks up every couple of months. I have to hard reboot it. I am guessing that it isn't a hardware problem or a problem with the OS (Linux?). Somewhere an app is failing. My home sensor system can't do this. It must work 24/7 for as long as it is powered.
While Erlang has a great approach to fault tolerance, it is not the complete answer. You must intelligently use its recovery features and you must avoid software faults to begin with.
"Not invented here" is preached against, but if you can't trust the provided tools, what do you do?
I'll have to see if Bluegiga's scripting is solid. But what do I do if the sensors start silently failing weeks (or months) after deployment? How do I debug it?
In order to get my Erlang base station up and running I will need my Erlang code to interface with a serial port. There is no support in Erlang to do this. I was going to write a bridge (or use "netcat") to expose the serial port over TCP and let Erlang talk to that. Will this eventually introduce subtle problems? How do I manage the bridge? The Erlang approach says to expect that the bridge will fail and simply restart it (and resync the rest of the processes with it). This is a smart approach, but with the "bridge" I have left the Erlang eco-system.
All of these issues nag me, and they should. On the one hand, I think that the embedded world (mostly hardware types writing assembly, C and maybe C++) is way behind the "high level programming" world in providing flexibility. But with that flexibility, software folks have brought with them all of the little devils that come with high level programming.
Food for thought.
Bluegiga BLE112: A Game Changer?
Bluegiga has introduced a Bluetooth LE (Low Energy) module: the BLE112.
Their approach: Add battery, SPI|UART|analog based sensor, write a script and you've got a running sensor node.
This is great. It could save me a lot of work: Most of the sensor-side work is done for me. They have introduced a scripting language that runs on the Bluetooth module (a TI CC2540 -- 8051 core). It is a very low power design with great idle/sleep consumption savings.
Bluetooth LE wants to supplant ANT and Zigbee. It wants to do what ARM (Cortex) has done to the portables market. This Bluegiga module is the most impressive implementation I've seen so far (on paper).
If this module does what I want for my home sensor network, then I can focus on the base station. This is where things get interesting anyway.
I'm thinking more about using Erlang there. I think the time has come to introduce robust server technology to the embedded world. This presentation agrees. Exciting times ahead :-)
Thursday, March 22, 2012
Wither Erlang? Wither GA144? Wither Home Sensor project?
I've mentioned Erlang on this blog before. My last reference was regarding GreenArray's GA144 + arrayForth as sort of a FPGA level Erlang.
Recently, I've found myself in the midst of using Erlang at work. It has been a number of years since I've played with Erlang. I first learned the language back in 2002. I knew that there had been a spike of interest starting around 2007/2008 (perhaps due to RabbitMQ and CouchDB?), but that seems to have withered away.
The Internet's interest in shiny new things is swift and brief. Erlang had its Internet moment and now we are left with aging (often incomplete) projects and libraries. Oh, the Erlang community still seems vibrant (and releases of Erlang/OTP are flowing), but the Internet hive-mind has moved on (Clojure?).
As I stretch my Erlang muscles, I am considering giving it a go for the server node of my home sensor/monitoring project. I've been mulling over using the GA144 here, but for practical purposes a Linux based solution may make more sense. While this will slow down my GA144 experimentation (and postings), I don't intend to "abandon" the chip. I am just finding that my home project is at risk of withering unless I get something working. Maybe an Atom based SBC running Linux and Erlang would be a good place to start.
More later....
Friday, February 17, 2012
arrayForth notes #6 - Improved PCF2123 SPI Code
There were a lot of errors in the note #4 listing. I made the post mainly because I was excited to see commands fly between the PCF2123 and GA144 as viewed by a logic analyzer (even though spir8 didn't really work correctly).
So, here is where I clean up the code, refactor it a bit and offer something a little more functional.
I could go on for pages about the rationale behind why this code lives in Nodes 7 and 8 and how I hooked up the chip. But that is for another time. I hate having broken code posted to my blog, so this post is mainly a means to offer something that works. BTW, this runs on the eval board target chip.
It still has some inefficiencies (compared to the GreenArray supplied SPI code), but by doing this myself (essentially a "clean room" implementation), I got to learn a lot more about how to code the GA144.
this is a brutally simple interface for a spi pin 1 - chip select cr pin 3 - clock cr pin 5 - mosi cr pin 17 - miso cr /spi call first. main loop .
ckwait pauses for an effective rate of 2mhz cr -ckwait asserts the clock line low for 2mhz cr cs asserts the chip select line high cr -cs asserts the chip select line low cr |
850 list
todd's simple spi code cr 8 node 0 org cr /spi @ push ex . /spi ;
02 4A for unext ; approx. 2mhz cr -ckwait 05 2B !b ckwait ;
07 io b! -ckwait ;
0A 29 !b ;
0C if drop 10 then 2F or !b ckwait -ckwa
13 b- 80 7 for over over and spiw1 2/ ne
1A -b @b . -if drop - 2* - ; then drop 2
1E -b 0 7 for 0 spiw1 spir1 next 2/ ; cr 26 |
|
nxp pcf2123 calendar clock module.
/pcf2123 initializes the pcf2123 clock cr rdt reads date and time
sdt sets date and time cr |
852 list
7 node 0 org cr /cs 00 left a! @p .. cs ! ;
04 @p .. -cs ! ;
07 d- @p .. @p .. ! ! ;
0A -d @p .. !p .. ! @ ;
0D put @p .. spiw8 ! ;
11 @p .. spir8 ! get ;
14 @p .. /spi ! /cs 10 spiw 58 spiw c
1C -smhwdmy /cs 92 spiw spir spir spir spi
27 ymdwhms- /cs 12 spiw spiw spiw spiw spi
32 12 11 10 9 8 7 6 sdt ; cr 3C |
Wednesday, February 15, 2012
arrayForth notes #5 - Observations and hints
I've since improved the code in the previous entry. I've slimmed it down to 59 words (out of a max of 64). I am not sure I can get it small enough to include the wiring, but I'll look further at exemplar SPI code in the baseline for some more slimming tricks.
One thing I am noticing is that literals (numbers) are very expensive. Each literal takes a whole word (18 bit cell) of RAM. This is the reason for such non-intuitive tricks as "dup or" instead of coding the literal "0": on the F18A, "or" is actually an exclusive-or, so "dup or" always yields zero. "Dup or" doesn't take up a whole RAM cell, so it can be packed with other instructions.
Calls take up quite a bit of space too. If your word is shorter than a single 18 bit cell, you will do better just coding it inline rather than do a word call.
Programming the GA144 in arrayForth means that you must become an expert in how words are compiled. You can escape this by using polyForth or eForth, but you lose the benefit of understanding how the GA144 actually works.
I am still trying to get my arms around wiring, but I remain convinced that the true strength of the GA144 is basically as an FPGA killer for simple concurrent processes.
Whereas the strength of a Silabs 8051 or ARM Cortex M is in the richness of peripherals, the GA144 doesn't benefit much from peripherals. It is an FPGA level Erlang. And, like Erlang, it has specific strengths.
Its biggest strength is power-efficient massive concurrency. I would like to see more I/O, but that only lulls me into the traditional concurrency perspective. I need to stop thinking about having dozens of sensor outputs tied to dozens of GA144 inputs. That isn't its strength. Rather, my ability to model dozens of sensors concurrently without dealing with interrupts or global state machines -- that is its biggest strength.
Tuesday, February 14, 2012
arrayForth notes #4 - PCF2123 SPI Code
This is brutally tight code. It is not the most efficient code, but it does fit in 1 node (unfortunately, there is no room for "plumbing" -- so it will most likely need to be refactored into 2 nodes).
The code can be exercised (by the IDE), by using "call" to invoke /pcf2123 for initialization, sdt to set the date and rdt to read the date. Values for "sdt" can be pushed onto the node's stack by using "lit". I used a logic analyzer to look at the results.
There was a lot of setup to get this going, and I am not going to cover that right now.
I need to get the plumbing (wiring) right. Getting 1 node to talk to another is not as intuitive as I hoped. I also fear that the IDE critically gets in the way (hook et al. can wipe out your node's RAM when creating paths). This means the IDE will be less useful for stuff that is very node-position dependent (e.g. GPIO nodes).
I don't fully grok the inter-node comms yet. In particular, I am not sure how to "push" numbers from one node to another. The IDE does this fine with "lit", but if my node isn't wired by the IDE all I have is "warm/await". The apparent exemplar for passing values between nodes is to explicitly have the target node "fetch" values from the port. Unfortunately, as you can see from the code below, I am out of room (0x40 max words per node and I am at 0x3d). I could shrink the code a bit more, but...
this is a brutally simple interface for the cr nxp pcf2123 calendar clock module. cr pin 1 - chip select cr pin 3 - clock cr pin 5 - mosi cr pin 17 - miso cr ckwait pauses for an effective rate of 2mhz cr -ckwait asserts the clock line low for 2mhz cr cs asserts the chip select line high cr -cs asserts the chip select line low cr /pcf2123 initializes the pcf2123 clock cr rdt reads date and time
sets data and time |
850 list
todd's simple pcf2123 clock code cr 8 node 0 org cr ckwait 00 4A for unext ; approx. 2mhz cr -ckwait 03 2B !b ckwait ;
05 io b! -ckwait ;
07 29 !b ;
09 if drop 10 then 2F . + !b ckwait -ckw
10 b- 80 7 for over over and spiw1 2/ ne
17 -b 0 7 for 2* dup or spiw1 @b . -if d 1 or dup then drop next 2/ ;
21 cs 10 spiw8 58 spiw8 -cs ;
27 -smhwdmy cs 92 spiw8 spir8 spir8 spir8
32 ymdwhms- cs 12 spiw8 spiw8 spiw8 spiw8 cr 3D |
Wednesday, February 08, 2012
arrayForth notes #3 - SPI
I've finally begun to talk to the PCF2123 from the G144A12 eval board. The SPI code is not space optimized, so just the basics are taking up almost a full node. (More on that later).
So far, I've got "reset" and a register read working (return data not validated). I am using Node 8 and it's 4 GPIO lines (for MOSI, MISO, CLOCK and ENABLE/CS). The PCF2123 is odd in that CS active is high, not low. I've got a tight for unext loop pulse the CLOCK line at around 2Mhz:
In the above Saleae Logic screenshot, I am tracing two CS sessions: First a "reset" (0x10 and 0x58), and then a request to read register 4. I am not sure the data returned is correct (yet), but the fact that I am getting something must mean that the device is happy with my query. Unfortunately, my register read isn't returning the value 0x15 yet, but at least I know that my GPIO pin writes are working.
As I said above, just basic support of SPI is taking up precious space (currently the SPI routines take 36 words of RAM!). I am planning on doing some optimizing, but I think that the actual PCF2123 functionality will need to live in a separate node.
I have a business trip planned for the next couple of days, so if I don't get the SPI "read" working correctly tonight it will have to wait until the weekend. However, the plane ride and hotel stay will afford me some time to look into space optimization of the code and perhaps I will finally tackle the simulator.
And, yes! Code will be posted... once the damn thing works.
So far, I've got "reset" and a register read working (return data not validated). I am using Node 8 and it's 4 GPIO lines (for MOSI, MISO, CLOCK and ENABLE/CS). The PCF2123 is odd in that CS active is high, not low. I've got a tight for unext loop pulse the CLOCK line at around 2Mhz:
In the above Saleae Logic screenshot, I am tracing two CS sessions: First a "reset" (0x10 and 0x58), and then a request to read register 4. I am not sure the data returned is correct (yet), but the fact that I am getting something must mean that the device is happy with my query. Unfortunately, my register read isn't returning the value 0x15 yet, but at least I know that my GPIO pin writes are working.
As I said above, just basic support of SPI is taking up precious space (currently the SPI routines take 36 words of RAM!). I am planning on doing some optimizing, but I think that the actual PCF2123 functionality will need to live in a separate node.
I have a business trip planned for the next couple of days, so if I don't get the SPI "read" working correctly tonight it will have to wait until the weekend. However, the plane ride and hotel stay will afford me some time to look into space optimization of the code and perhaps I will finally tackle the simulator.
And, yes! Code will be posted... once the damn thing works.
Tuesday, February 07, 2012
arrayForth notes Part 2
I'm trying to get my G144A12 Eval board to talk SPI to a Calendar chip (NXP PCF2123). I've managed to get a 2MHz(ish) clock pulse running from Node 8 on the Target chip. (I've picked the Target chip because I am overwhelmed with all of the stuff the Host chip is connected to -- I'm better at learning stuff from the ground up).
Unfortunately, I've been trying to get a test harness up and running in a different Node (9) and have been crashing the system every few minutes with my clumsy attempts at wiring and routing. Documentation regarding wiring Nodes is sorely lacking.
I'm obviously very confused about node direction. I've been referring to "right" as "left". Looking down upon the Node map, I've been wondering why I couldn't get Node 9 to talk to Node 8. Apparently, Node 8 is to the "right" of Node 9. So, I suppose I should imagine "lying down on my back" on a Node.
I'd like to get something going between the Calendar and the Eval board this week, but I am flying out of town for a couple of days for a business meeting.
I wonder how airport security would react to a bare eval board stuffed into my back pack? (and to say nothing of the reaction if I were to try and use it in flight ;-)
Friday, February 03, 2012
My smallest SMD solder job yet...
Components are getting too small. For your consideration, a really neat sounding temperature sensor from TI: the TMP104. Its features look nice:
- Accuracy: ±0.5°C Typ (–10°C to +100°C)
- 3 μA Active IQ at 0.25 Hz
- 1 μA Shutdown
- SMAART Wire Interface
- Temperature range of –40°C to +125°C
- Package: 0.8-mm (±5%) × 1-mm (±5%) 4-Ball WCSP (BSBGA)
Okay, so how small could that really be? I ordered a couple and I got this:
Yeah, that's it next to a grain of rice. A long grain of rice.
So, I figured I have to give it a shot. I need a decent temperature sensor for my project, so I whipped out the fine tip soldering iron and the thinnest strands of wire I could snip...
and this is the result. Yeah, it's that little guy up top above the massive MCU.
Once again, for a little perspective:
A quick view under a 10x microscope and it looks solid (if not pretty). I put a couple of drops of Krazy glue to hold the wires down to keep it safe.
Unfortunately, it will take a while to make sure it works... I have to figure out this SMAART wire protocol thingy.
Monday, January 30, 2012
Apples and Oranges (GA144 and ARM Cortex A8)
In my previous posts I mentioned how multiple-CPU processors such as the GA144 are different from multi-core CPUs. I also talked about how having multiple independent processes may work better for some problem spaces. In broad terms, the GA144 can be viewed as a very low level Erlang for modeling lightweight (low memory, low computational complexity) problems. Viewing it this way, it really isn't competing (for my purposes -- a sensor base station) with most (bare) MCUs.
(Additional similarity: Forth, like Erlang, frowns upon having lots of variables, so data is carried in function parameters (Erlang) or on the stack (Forth).)
Now, if you throw an ARM + Linux + Erlang (http://www.erlang-embedded.com/) at my sensor base station, what do you get? (If Erlang doesn't really work well on the ARM, replace it with your favorite language plus lots of processes/threads. Also, keep in mind that my sensor base station needs to run for days on battery backup.)
Now, let's pick an ARM/Linux system for comparison: how about the BeagleBone?
This $89 beauty looks really appealing. I could see using it as my base station. It is based on a new Cortex A8 and is feature rich for its size.
I can't compare (yet) how well it would do against a GA144. The GA144 certainly looks anemic compared to it (from just the Cortex A8 perspective).
However, I can take a quick look at power:
- The BeagleBone consumes 170mA@5VDC with the Linux kernel idling and 250mA@5VDC peak during kernel boot (from pages 28-29 of http://beagleboard.org/static/BONESRM_latest.pdf ).
- The GA144 consumes 7uA@1.8VDC (typical) with all nodes idling and 540mA@1.8VDC (typical) with all nodes running. (from the G144A12 Chip Reference).
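Running the numbers from those two bullets: idling, the Bone draws 170mA x 5V = 850mW, while the GA144 idles at 7uA x 1.8V = ~12.6uW -- nearly five orders of magnitude less. Flat out, the GA144 draws 540mA x 1.8V = ~972mW, which is roughly the Bone's idle consumption.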
Of course, you can't directly compare the two, but consider this interesting tidbit: The power performance of the GA144 is directly related to how many nodes you run.
I haven't looked at any performance numbers between the two, but I'll wager that the Cortex A8 ultimately outperforms the GA144. But, my sensor base station is neither CPU bound (no complex calculations) nor RAM bound (important data points are persisted in flash storage and fetched as needed).
The real question is: How much useful work can I get done in 1 node?
Saturday, January 28, 2012
GA144 as a low level, low energy Erlang
I've been reading some of the newsgroup threads regarding the GA144, and most of the critiques come from a low level perspective (GA144 vs FPGAs vs ARMs, etc). One can argue that the processor can't compete when you consider the anemic amount of memory each node has and the limited peripheral support. But let us put aside that argument for a moment (we'll get back to it later).
Here I primarily want to discuss the GA144 from a software (problem solving) architecture perspective. This is where my primary interests reside. I'd like the GA144 to have formal and flexible SPI support. I'd like to see more peripheral capability built in. I'd like to see 3.3-5VDC support on I/O pins so level shifter chips aren't needed. But I think the potential strong point of the GA144 is what you can do with the cores from a software (problem solving) architectural design perspective.
Notice I keep mentioning software and problem solving together? I want to make clear that I am not talking about software architecture in terms of library or framework building. I'm talking about the architecture of the solution space. I'm talking about the software model of the solution space.
Let's look at an analogy.
If I were to build a large telecommunication switch (handling thousands of simultaneous calls) and I implemented the software in Erlang or C++ (and assuming that they both would allow me to reach spec -- maybe 99.9999% uptime, no packet loss, etc.) at the end of the day you wouldn't be able to tell the system performance apart.
However, one of the (advertised) benefits of Erlang is that it allows you to do massive concurrency. This is not a performance improvement, but (perhaps) a closer model to how you want to implement a telco switch. Lots of calls are happening at the same time. This makes the software easier to reason about and (arguably) safer -- your implementation stays closer to the solution space model.
Do you see what I am getting at?
Now, let's look at the problem I've been talking about here on the blog (previously described in vague terms): I want to integrate dozens of wireless sensors with a sensor base station. The base station can run off of outlet power but must be able to run off a battery for days (in case of a power outage). It is designed to be small and discreet (no big panel mounted to the wall with UPS backup). It needs to run 24/7 and be very fault tolerant.
The sensors are small, battery efficient and relatively "dumb". Each samples data until it reaches a prescribed threshold (perhaps performing light hysteresis/averaging before wasting power to send data) and it is up to the sensor base station to keep track of what is going on. The base station collects data, analyzes it, tracks it and may send alerts via SMS or perhaps just report it in a daily SMS summary.
Let's consider one typical sensor in my system: A wireless stove range monitor. This sensor, perhaps mounted to a range hood, would monitor the heat coming from the burners. This sensor will be used to (ultimately) let a remote individual know (via SMS) that a stove burner has been left on an unusually long time. Perhaps grandma was cooking and forgot to turn the burner off.
This stove range sensor probably shouldn't be too smart. Or, in other words, it is not up to it to determine if the stove has been on "too long". It reports a temperature once it recognizes an elevated reading over a 5 minute interval (grandma is cooking). It then continues to report the temperature every 10 minutes until it drops below a prescribed threshold. This is not a "smart" sensor. But it is not too dumb (it only uses RF transmit power when it has sufficiently determined significant temperature events -- rather than just broadcasting arbitrary temperature samples all day long).
The sensor base station will need a software model that takes this data, tracks it and determines whether there is an issue. Just because the stove range is on for a few hours may not mean there is a problem. A slow elevated temperature rise followed by stasis may suggest that a pot is just simmering. However, if the stove is exhibiting this elevated temperature past 11pm -- certainly grandma isn't stewing a chicken at this time of night! You don't want to get too fancy, but there can be lots of data points to consider when using this stove range monitor.
Here is my solution model (greatly simplified) for this sensor monitor:
- Receive a temperature sample
- Is it at stasis? If so, keep track of how long
- Is it still rising? Compare it with "fire" levels -- there may be no pot on the burner or it is scorching
- Is the temperature still rising? Fast? Send an SMS alert
- Is it on late in the evening? Send an SMS alert
- Keep a running summary (timestamped) of what is going on. Log it.
- Every night at 11pm, generate a summary of when the range was used and for how long. Send the summary via SMS
Imagine this as a long running process. It is constantly running, considering elapsed time and calendar time in its calculations.
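As a sketch of what that might look like in Erlang (module name, thresholds and timeouts are all my own inventions, not a finished design), the whole list above maps onto one long-running process, with elapsed time falling out of receive timeouts instead of an explicit state machine:

%% One process per stove sensor. Samples arrive as messages; silence
%% for QUIET_MS is itself information (the burner went cold or off).
-module(stove_monitor).
-export([start/0]).

-define(HOT, 60).                        % deg C: "grandma is cooking"
-define(QUIET_MS, 15 * 60 * 1000).       % 15 min without a sample

start() ->
    spawn(fun idle/0).

idle() ->
    receive
        {sample, T} when T >= ?HOT ->
            cooking(T, erlang:monotonic_time(second));
        {sample, _} ->
            idle()
    end.

cooking(Prev, Since) ->
    receive
        {sample, T} when T > Prev + 20 ->        % rising fast: fire?
            alert("temperature rising fast"),
            cooking(T, Since);
        {sample, T} when T >= ?HOT ->            % still hot: check the hour
            late_night_check(),
            cooking(T, Since);
        {sample, _} ->                           % dropped below threshold
            log_session(Since),
            idle()
    after ?QUIET_MS ->                           % sensor went quiet
        log_session(Since),
        idle()
    end.

late_night_check() ->
    {_, {H, _, _}} = calendar:local_time(),
    if H >= 23 -> alert("stove still on late at night");
       true    -> ok
    end.

alert(Msg) -> io:format("ALERT: ~s~n", [Msg]).

log_session(Since) ->
    Mins = (erlang:monotonic_time(second) - Since) div 60,
    io:format("stove session: ~p minutes~n", [Mins]).

Driving it is just Pid = stove_monitor:start(), Pid ! {sample, 75}. No interrupts, no global state machine -- exactly the property I want to keep, whether the process runs on a Linux box or (conceptually) in a GA144 node.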
Now, this is just one of many types of sensor that the base station must deal with. Each will have its own behavior (algorithm).
I can certainly handle a bunch of sensors with a fast processor (ARM?). But my software model is different for each sensor. Wouldn't it be nice to have each sensor model be independent? I could do this with Linux and multiple processes. But, really, the above model isn't that sophisticated. It could (perhaps) easily fit in a couple of GA144 nodes (the sensor handler, logger, calendar and SMS notifier would exist elsewhere). And it would be nice to code this model as described (without considering state machines or context switches, etc).
So, back to the argument at the top of this post... I don't care if the GA144 isn't a competitive "MCU". My software models are simple but concurrent. My design could easily use other MCUs to handle the SMS sending or RF radio receives. What is important is the software model. The less I have to break that up into state machines or deal with an OS, the better.
This is my interest in the GA144: A low level, low energy means of keeping my concurrent software model intact. I don't need the GA144 to be an SMS modem handler. I don't need it to perform the duties of a calendar. I need it to help me implement my software model as designed.
Monday, January 23, 2012
True Modular computing
I want to tie together my past 3 posts.
In Multi-computers vs multitasking vs interrupts I stated a problem (concurrent processing within the embedded realm) and teased you with a solution (GreenArray's GA144).
In Costs of a multiple mcu system I backed a bit away from the GA144 and wondered if a bunch of small, efficient and cheap MCUs could solve the problem.
In Building devices instead of platforms I offered a rationale to my pondered approaches.
So, here I sit composing an interrupt handler for an 8051 to service a GSM modem's UART (essentially to buffer all the incoming data). And... everything has just gotten complicated. I've been here before. I am no stranger to such code, and the approach I am taking is textbook. This is how code starts to get hairy and unpredictable.
But, really now... maybe I *should* consider breaking my tasks down into hardware modules (with each module consisting of dedicated software). If I dedicated an 8051 (tight loop, no interrupts) to just talking to the modem, collecting responses and sending just the relevant information to another 8051 (perhaps through I2C or SPI), then I build that module once, debug it once and be done with it.
This is modular design, isn't it?
So (during design) every time I find a need for an interrupt, I just fork another processor?
(This would work with a single GreenArray's GA144 as well as $3 Silabs 8051 MCUs)
Building devices instead of platforms
The title of this posting is a concept that flies in the face of conventional wisdom. Everything seems to be a platform these days. When you buy a gadget or appliance you are not buying a device, you are investing in a platform. Refrigerators are appearing with Android embedded. We are looking at a future of doing "software updates" to our appliances!
Of course, there is big talk about the "Internet of Things", and that could be grand (my washing machine could one day query my fridge about the tomato sauce stain it encountered on my shirt and then place an order on Amazon for the correct stain treatment product).
But consider the "elephant in the room": these devices will suffer from the most dreaded of all software plagues: "ship now; fix later in an update".
This doesn't tend to happen when you talk about your appliances and devices containing "firmware" (in the original sense). My washing machine has embedded microprocessors but there is no evident way to upgrade the firmware. Hence, the engineers have got to get it right before it ships. The software is apparently sophisticated. Of course, this is not always the case, but the "get it right" mindset is there. You don't want to tell people they have to call a service technician and "upgrade" their washing machine when it locks up.
For all the smarts my washing machine has, it is still a "device" (not a platform).
This rant bleeds into the common "old fogy" rant that a cell phone should be (just) a phone. I don't think we need to start limiting the potential of our devices, but there are some that should just "work".
We are losing that when we start designing a device and our first step is to choose an OS.
Friday, January 20, 2012
Costs of a multiple mcu system (back of envelope)
If physical size isn't an issue (you don't need the tiniest of footprints) and unit cost isn't measured in cents, have you considered a multi-MCU system? If your system is highly interrupt driven (receiving lots of I/O from more than one place), then you'll need either an OS or at least some solid state-machine-based design to handle the concurrency.
If I can get 3 Silabs 8051 variants for between $12 and $15, I would need just around 6 caps and resistors to get them up and running. So, total MCU cost would be $15 max. These old 8-bit workhorses are self contained systems. They rarely need an external crystal, often come with built in regulators, and are meant to have a low passive component count. You can just "drop" them in with just a few millimeters of board space.
What does this get me? Potentially less complexity in the software. Consider assigning your MCUs thus: each I/O subsystem gets its own MCU for processing. You can design, code and test each subsystem separately. With a means of message passing (SPI, shared memory/flash, etc.) you now have an integrated system.
This is hardware based functionality factoring. The industry already does this. If you have bought a GPS module, a cellular modem or even a (micro)SD card, you have bought an MCU+X (where X is the functionality). Peripherals often come with their own embedded MCUs (with custom firmware). However, we expect those peripherals to be (relatively) flawless.
Software, on the other hand, is under constant revision, bug fixes and improvements.
Consider this: If you buy a memory hardware module that has built in support for FAT filesystems (maybe accessed via a UART or SPI), then you expect that hardware module to work perfectly. It is a $5 piece of hardware that should work out of the box. There is no notion of "field" upgrades.
However, if you have bought (or downloaded) a FAT filesystem as a software library, you'll see a few revisions/improvements with a release cycle. It doesn't have to work perfectly. You are expected to upgrade occasionally.
Hardware is getting cheap enough that (for small unit counts) we should seriously consider multiple MCU systems.
Curiously, GreenArrays has sidestepped this issue and simply incorporated a bunch of small MCUs into one package.
Multi-computers (GA144) vs multitasking (ARM) vs interrupts (8051/MSP430)
You've got a system to design. It is a multi-sensor network that ties to a small, efficient battery run base station. The sensor nodes are straightforward. You start thinking about the base station:
I've got a barrage of data coming at me over the air at 433MHz. I have a simple 2 byte buffer on the SPI-connected transceiver, so I don't have a lot of time to waste on the MCU. I must be there to receive the bytes as they arrive.
Once I've received the data, it must be parsed into a message, validated, logged (to persistent storage) and perhaps correlated with other data to determine if an action must be taken. An action can also be timer based (scheduled). Oh, and I must eventually acknowledge the message receipt or else the sender will keep sending the same data.
Additionally (and unfortunately), the action I need to perform may involve dialing a GSM modem and sending a text message. This can take some time. Meanwhile, data is flowing in. What if a scheduled event must take place while I'm in the middle of processing a new message?
Now, this is the sort of thing that you would throw a nice hefty ARM at with a decent OS (maybe linux) to do multitasking and work queue management. But, let's think a second... Most of what I've just described works out to a nice simple flow diagram: Receive data -> parse data into message -> reject duplicate messages -> log message -> correlate message with previous "events" -> determine if we need to send a text message -> send text messages at specific times -> start over.
Each task is pretty straightforward. The work is not CPU bound. You really don't need a beefy ARM to do each task. What we want the ARM to do is to coordinate a bunch of concurrent tasks. Well, that will require a preemptive OS. And then we start down that road... time to boot, linking in a bunch of generic libraries, thinking about using something a little more high level than C, etc. We now have a fairly complex system.
And, oh... did I mention that this must all run nicely on a rechargeable battery for weeks? And, yank the battery at any time -- the system must recover and pick up where it left off. So, just having a bunch of preemptive tasks communicating via OS queues isn't quite enough. We will probably need to persist all queued communication. But I am getting distracted here. The big system eats a little bit too much power...
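(As an aside, persisting the queue needn't be exotic. A sketch of the idea -- module and file names are mine -- using dets, Erlang's disk-backed term storage:)

%% Messages are written to disk before processing, so a battery pull
%% loses nothing; drain/1 replays whatever survived the reboot.
-module(pqueue).
-export([open/0, push/1, drain/1]).

open() ->
    {ok, _} = dets:open_file(?MODULE, [{file, "msgq.dets"}]).

push(Msg) ->
    Key = erlang:unique_integer([monotonic, positive]),
    ok = dets:insert(?MODULE, {Key, Msg}).

drain(Fun) ->
    case dets:first(?MODULE) of
        '$end_of_table' ->
            ok;
        Key ->
            [{Key, Msg}] = dets:lookup(?MODULE, Key),
            Fun(Msg),                       % process, then forget
            ok = dets:delete(?MODULE, Key),
            drain(Fun)
    end.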
Okay, so we get a bit smaller. Maybe a nice ARM Cortex-M3 or M0. Okay, the memory size is reduced and it's a bit slower than a classic ARM. A preemptive OS starts to seem a bit weighty.
So, how about a nice MSP430 (a big fat one with lots of RAM and program space)? Now start to think about how to do all of that without a preemptive OS (yes, I know you can run a preemptive time-sliced OS on the MSP430, but that is a learning curve, and besides, you are constraining your space even further). Do you go with a cooperative OS? Well, now you have to start thinking about states... how do you partition the tasks into explicit steps? At this point you start thinking about rolling your own event loop.
So, then there are the interrupts:
Oh, I forgot about that. The SPI between the MSP430 and transceiver. The UART to the GSM modem. And the flash. Don't forget the persistent storage. And check pointing, the system needs to start where we left off if the battery runs out.
Okay, you are getting desperate. You start thinking:
What if I had one MCU (MSP430, C8051, whatever) dedicated to the transceiver and one dedicated to the GSM modem. The code starts to get simpler. They'll communicate via the persistent storage.... But, who handles the persistent storage? Can the transceiver MCU handle that too?
This is where thoughts of multi-computers come in. Not multi-cores (we are not CPU bound!), but multi-computers. What if I had enough computers that I could dedicate each fully to a task? What if I didn't have to deal with interrupts? What would these computers do?
- Computer A handles the transceiver
- Computer B handles logging (persistent storage interface, indexing, etc)
- Computer C handles the GSM modem (AT commands, etc)
- Computer D handles parsing and validating the messages
- Computer E handles "scheduling" (time based) events
- Computer F handles check pointing the system (for system reboot recovery)
etc etc etc
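In Erlang terms (used here only as a stand-in for "one computer per task"), the decomposition is almost embarrassingly small. A toy sketch, with invented message formats:

%% Three of the "computers" above as independent processes. Each owns
%% its one job; the only coupling is the messages passed along.
-module(pipeline).
-export([start/0]).

start() ->
    Logger   = spawn(fun logger/0),
    Parser   = spawn(fun() -> parser(Logger) end),
    spawn(fun() -> receiver(Parser) end).

receiver(Next) ->                  % "Computer A": the transceiver
    receive
        {rf, Bytes} -> Next ! {raw, Bytes}, receiver(Next)
    end.

parser(Next) ->                    % "Computer D": parse and validate
    receive
        {raw, Bytes} ->
            case parse(Bytes) of
                {ok, Msg} -> Next ! {msg, Msg};
                error     -> ok                 % drop malformed frames
            end,
            parser(Next)
    end.

logger() ->                        % "Computer B": the (stub) log
    receive
        {msg, Msg} -> io:format("log: ~p~n", [Msg]), logger()
    end.

parse(<<Id, Temp>>) -> {ok, {Id, Temp}};        % toy 2-byte frame
parse(_)            -> error.

Feeding it: Rx = pipeline:start(), Rx ! {rf, <<7, 42>>}. Whether the "processes" are Erlang processes or GA144 nodes, the shape of the solution stays the same.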
This is where I start really, really thinking that I've found a use for my GA144 board: Performing and coordinating lots of really simple tasks.
It becomes GA144 vs ARM/Cortex + Linux.
Now if I can only figure out how to do SPI in arrayForth...