As I look at the documentation page for the BLE112 (Bluegiga's Bluetooth LE module), I am reminded of how complex computing has become. There are videos, spec sheets and software guides (well over a dozen documents, not including slick sheets and qualification documents). All of this for a small embedded device that uses an 8051. This is all "high level" documentation. At the end of all this you can develop your BLE112 comms using an API or their own scripting language.
I understand the complexities involved and why there is so much documentation. But that isn't all: add to that all of the Erlang/OTP stuff I am planning on doing on the server side. There is lots of documentation; lots of "other people's stuff" I need to master.
Sometimes I "ignore the wheel". This is like "re-inventing the wheel" but tends to avoid the actual construction of a wheel itself. "Ignoring the wheel" is about coming up with your own means of transportation.
Rather than use a mainstream approach, I roll my own solutions. This usually means ignoring an already written body of software (libraries), but when I start from scratch, I intimately understand everything I am working with. I *have* to intimately understand everything I am working with.
Now, one trade-off that must be made for "ignoring the wheel" is that you have to get creative with your resources.
Let me propose an example: building a (pro quality) data logger the size of your thumb. Now, this data logger must read some arbitrary sensor every 10 seconds and log it to persistent storage until the data is later pulled off the device and analyzed on a PC. It must be able to capture tens-of-thousands of short (10 bytes?) log events. The whole thing (case and battery included) should be about the size of your thumb.
How would you approach designing it? (Remember, this should be a "pro quality" logger, not a toy -- it must work flawlessly.)
Linux on an ARM? Small, but probably not small enough and I doubt you could run it off a small (coin cell size) battery -- power consumption is an important factor here.
Okay, maybe something Arduino-like. Or maybe a PIC, or even a Silabs 8051. You want something tiny that uses very little power. The tiny MCUs tend to have less RAM and Flash program space, but this is just a simple data logger, right?
Now, what about the log storage medium? A microSD card is small. Plus, you can take it out and plug it into your computer to dump the data. This sounds great.
So, you go with a microSD. Now, of course, you'll have to format it as FAT16 or FAT32 to make it readable by the PC (besides, there are plenty of FAT libs for MCUs out there, right?).
Now you have a problem: FAT is simple, but it still requires choosing a library and getting it to compile. Plus, you'll need (at least) 512 bytes of RAM to hold sector buffers. You did pick an MCU with more than 512 bytes of RAM, right?
Is the FAT implementation reliable? Is it rock solid? Can you trust your important logs to it?
Now, how do you arrange the logging? Will you exceed the maximum FAT file size? How do you name the file?
Remember the original goal: You are building a data logger, not a database.
Okay, you get the picture. For hobbyist needs, a simple logger (like OpenLog) will do in a pinch. But things start to get complicated when you consider reliability and longevity.
How can we simplify this design?
First, do you really need FAT? Can you develop a custom "log system" that writes to the "raw" microSD and develop a reader on the PC side? Do you really need the flexibility of a "file system"? Consider this: figure out the max size of a log entry (typically a logger works with structured sensor data: GPS, environmentals, etc). On a 2GB microSD you can fit around 100 million 20 byte logs. Is that adequate?
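To make that concrete, here is a minimal sketch in C of the kind of raw record format I mean: fixed-size records packed into sectors, no filesystem at all. The names are invented, and sd_write_sector() stands in for whatever low-level block driver you already have.

#include <stdint.h>
#include <string.h>

/* Hypothetical fixed-size record: 20 bytes, written back to back onto
   the raw card. The magic byte and checksum let the PC-side reader
   pick out valid records and find the end of the log. */
typedef struct {
    uint32_t seconds;     /* time since power-up          */
    uint8_t  magic;       /* 0xA5 marks a valid record    */
    uint8_t  sensor_id;
    uint8_t  payload[13];
    uint8_t  checksum;    /* sum of the preceding bytes   */
} log_rec_t;              /* 20 bytes, no padding         */

/* Assumed low-level block driver -- whatever your SD code provides. */
extern int sd_write_sector(uint32_t lba, const uint8_t buf[512]);

static uint8_t  sector[512];
static uint16_t fill;      /* bytes used in the current sector */
static uint32_t next_lba;  /* next raw sector to write         */

int log_append(const log_rec_t *rec)
{
    memcpy(&sector[fill], rec, sizeof *rec);
    fill += sizeof *rec;
    if (fill + sizeof *rec > sizeof sector) {      /* sector is full */
        if (sd_write_sector(next_lba++, sector))
            return -1;
        memset(sector, 0xFF, sizeof sector);       /* 0xFF = no record */
        fill = 0;
    }
    return 0;
}

The PC-side reader just scans sectors for the magic byte and a valid checksum -- no directory, no file names, no maximum file size.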
Second, do you really need a microSD? What if you used a serial Flash storage chip? Maybe one that doesn't require 512 byte RAM buffers for sector writes. Atmel makes a family of serial flash chips that have "on board" pages that you can randomly access before committing to sectors.
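For example, Atmel's AT45DB "DataFlash" family gives you small on-chip SRAM buffers that you fill one byte at a time over SPI and then commit to a flash page with a single command -- so the MCU never needs a page-sized buffer of its own. A rough sketch follows; the opcodes are from the AT45DB family as I recall them (verify against your exact part), and the SPI primitives are assumed to come from your own code.

#include <stdint.h>

/* Assumed SPI/GPIO primitives from your own code. */
extern void    cs_low(void), cs_high(void);
extern uint8_t spi_xfer(uint8_t b);

#define OP_BUF1_WRITE   0x84   /* write a byte into on-chip buffer 1  */
#define OP_BUF1_PROGRAM 0x83   /* buffer 1 -> main memory, with erase */

/* Stream one log byte into the chip's internal SRAM buffer. */
void flash_buf_write(uint16_t offset, uint8_t byte)
{
    cs_low();
    spi_xfer(OP_BUF1_WRITE);
    spi_xfer(0);                       /* don't-care address byte */
    spi_xfer((uint8_t)(offset >> 8));
    spi_xfer((uint8_t)(offset & 0xFF));
    spi_xfer(byte);
    cs_high();
}

/* Commit the buffer to a main-memory page, then poll busy status. */
void flash_buf_commit(uint16_t page)
{
    cs_low();
    spi_xfer(OP_BUF1_PROGRAM);
    /* Address packing varies with device density -- check the
       datasheet. For a 264-byte-page part the page number sits
       9 bits up in the 24-bit address: */
    spi_xfer((uint8_t)(page >> 7));
    spi_xfer((uint8_t)(page << 1));
    spi_xfer(0);
    cs_high();
    /* ...then read the status register until the ready bit is set. */
}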
Maybe you can pull the data off serially with a cable, or maybe via wireless?
Here is an interesting observation I've made about "ignoring the wheel" in my own designs: it reinforces the XP tenet: "do the simplest thing that could possibly work". Because I deal a lot with anemic microcontrollers (8051) and minimalist languages (Forth), the ordeal of supporting FAT on a microSD makes me question whether it is really "needed" when the goal is just to log a bunch of data as quickly and reliably as possible.
If the customer says I need to use a microSD, okay. But what exactly are they expecting for the FAT support? Could something like this (http://elm-chan.org/fsw/ff/00index_p.html) work? It only supports 1 fixed size file at a time, but it is very simple. Essentially, all I would need for a data logger is "write" capability to the filesystem. Why have a full FAT implementation on a tiny (write only) logger?
Can I ignore the wheel and simply do something simple?
Saturday, May 05, 2012
Sunday, April 22, 2012
Why Forth still matters in this ARM/Linux/Android world
My sensors (for my home monitoring system) need to run off of coin cell batteries (the sensors need to be tiny). They also need to run for at least 1 year (under normal circumstances).
The sensor transceivers consume around 15mA when transmitting. This is a lot for a coin cell battery. The general sensor development strategy is to use communication sparingly: If you don't need to broadcast data, don't.
With a battery that can only source 200mA for just one hour, you really need to start thinking about low power design. An ARM/Cortex capable of running Linux consumes a lot of current. You are going to want a low power 8 or 16 bit processor (e.g. Silabs 8051, MSP430, etc).
Now add C and some libraries to the mix. The more generic/portable stuff you do, the longer the processor is going to stay awake. While it is awake, it is consuming power. The longer it "sleeps", the longer your battery will last.
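The arithmetic that drives this is simple enough to show. A sketch, where the wake time and awake current are assumptions (measure your own) and the 200mAh figure is the coin cell capacity mentioned above:

#include <stdio.h>

int main(void)
{
    /* Assumed duty cycle -- plug in your own measurements. */
    double awake_s  = 2.0,   awake_ma = 4.0;    /* MCU + radio active */
    double period_s = 600.0, sleep_ma = 0.001;  /* wake every 10 min  */
    double cell_mah = 200.0;                    /* coin cell capacity */

    double avg_ma = (awake_s * awake_ma +
                    (period_s - awake_s) * sleep_ma) / period_s;

    printf("average current: %.4f mA\n", avg_ma);   /* ~0.014 mA */
    printf("battery life:    %.0f days\n",
           cell_mah / avg_ma / 24.0);               /* ~580 days */
    return 0;
}

Every extra second spent awake (parsing, copying buffers, running generic library code) drags that average current up and the battery life down.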
So, I am using a very low level Forth (MyForth) on a Silabs 8051 low power processor. MyForth is essentially a high level macro assembler -- there is very little overhead to support Forth.
There are no libraries that come with MyForth, so I had to roll my own code. This keeps me true to the Forth philosophy (or at least Chuck Moore's philosophy) of only doing what you need to do to meet the task at hand. I am not writing "generic" libraries. I have to fully understand the devices I am interfacing with. I doubt that I could write tighter/faster code for my transceiver -- I am engaging it at a primitive level. No abstractions but the ones that MyForth provides at a (mostly) macro level. Forth, like Tcl, is more idiom reuse oriented -- you can do a lot with just a little code.
If I need to work harder to make sure that my sensor works 24/7 for at least a year (or two!) off of a coin cell battery, then I will.
My stove monitor (temperature + motion sensor) runs on a Silabs C8051F912. That MCU has 768 bytes of RAM and 16KB of Flash. I am talking to an RFM12B transceiver over a SPI interface and controlling a low power IR motion sensor module. Currently, I am broadcasting motion and temperature over the air using less than 128 bytes of RAM (including the Forth stack) and less than 4KB of Flash. I spend only a few seconds awake, and most of the time asleep, consuming less than 0.001mA of battery current.
You can do this in C (of course!), but I don't think you could reach the code density (especially as you start using third party libs). I can also hand tune my code directly in assembler where needed.
Yep, Forth still matters to me.
Does the Roku need Erlang/OTP?
My Roku streaming player locks up every once in a while (every couple of months). It seems to do this in two different ways: sometimes it gets confused about the wi-fi connection, and sometimes it just freezes in the middle of a movie. We are heavy Roku users in my household, and it works flawlessly most of the time, so it is hard to get a good idea of why this is happening (there is no consistent scenario).
So, here is an embedded device that is supposed to work 24/7, and my only recourse when it locks up is to unplug the power supply and plug it back in. There are no buttons on the unit itself to "reboot" it.
This kind of device is ripe for the "Erlang" approach. Now, of course, if you've been doing embedded firmware for any decent length of time, you'd realize that you don't "need" Erlang to fix this. Watchdog timers, process monitors, a soft-reboot button, etc. are the first things that come to mind.
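For what it's worth, the non-Erlang version of that idea is not much code. A minimal sketch of a "process monitor" on a Linux-class box (the binary path is hypothetical; a real supervisor would also rate-limit restarts and service a hardware watchdog):

#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Minimal supervisor: respawn the application forever. */
int main(void)
{
    for (;;) {
        pid_t pid = fork();
        if (pid == 0) {
            /* hypothetical application path */
            execl("/opt/app/player", "player", (char *)0);
            _exit(127);               /* exec failed */
        }
        if (pid > 0) {
            int status;
            waitpid(pid, &status, 0); /* block until the app dies */
        }
        sleep(1);                     /* don't spin if it dies instantly */
    }
}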
But a lot of the software that runs on these set-top boxes seems to come from the "other side" (non-embedded developers). I have no proof of this, but the lock-ups smell of that kind of development mentality. When I used to build highly available internet server systems, I always made sure that I had terminal access so that I could log in and kill/restart stuff to fix problems. I knew that my stuff needed to run standalone, but I also knew that I could log into a running system, look around at what the problem was, restart stuff and take the fix back to development for the next release.
You don't get any of that with "appliance" devices. So, the more I did embedded work, the more I developed a "no login; no logs" mindset. Stuff needs to run, damn the logs.
This year I had to do some Cloud apps. I used Erlang/OTP. I thought I had caught all of the failure conditions, but some third party code would fail every once in a while. The system would run for a week or two, but then mysteriously crash. Thank goodness for the logs. I logged in and reviewed about 1MB of logging to find the problem. I fixed it, uploaded new code and restarted the servers.
This doesn't work for a Roku. Once it ships, there is no developer login. The device must never lock up. The user must never lose control. Even in the event of a full reset, the user should be able to do it from the couch. All processes (and devices) must be monitored.
My home monitoring system base station is currently using Erlang/OTP -- not because Erlang solves these problems, but because Erlang/OTP was designed to solve these problems.
Saturday, April 21, 2012
Where is my jetpack?
I'm getting old, and I am getting impatient with technology. Things are getting smaller, sleeker and sexier -- but they aren't getting any smarter.
Where are the big ideas? Why are we still writing web application frameworks? Why are we porting Linux to anything that has a microcontroller and going "look! Isn't that cool? It's running on my watch."
Where is my jetpack?
Or, more relevant to computer technology:
Where is my Dynabook? (No, an iPad isn't a Dynabook, and the XO is too cumbersome.)
Where is my Hitchhiker's Guide to the Galaxy? (The WikiReader is close, but it needs graphics and a voice reader.)
Where is my "House of the Future"? (It costs way damn too much and has no coherent operating "vision")
Yes, I know, I am bitching and moaning. Why don't I do something about it?
Well, that is why I am working on the "House of the Future" (my home monitoring project).
I'm thinking of forking this blog to document my progress.
Saturday, April 14, 2012
Counter Point (Bluegiga BLE112: A Game Changer?)
After writing the last blog entry, a thought kept recurring: "Mission Accomplished." You may have immediately caught on to the reference: President Bush's 2003 Mission Accomplished speech.
I certainly don't want to trivialize the aftermath of that speech (are we really done yet?), but it does remind me of the faith we (the software community) place in abstractions and generalizations.
A scripting language onboard the BLE112, so it will only take a few lines of code to do my sensors? Sounds great. But there are no reports from the ground yet. Is the scripting language stable? Are there bugs hidden in the implementation? Are there subtle things that it does automatically that will bite me in the ass?
The same questions can be asked about Embedded Erlang. It sounds fantastic, and of course the first major Erlang deployment was indeed embedded (the AXD301 switch). But a lot has been added to Erlang and OTP since then. Is all of it good? Is all of it stable and proven?
This is not a criticism of BLE112 scripting or Erlang, but a reminder of why I went "low level" 6 years ago in the first place: The abstractions and libraries were killing me.
A thought: my Roku locks up every couple of months. I have to hard reboot it. I am guessing that it isn't a hardware problem, nor a problem with the OS (Linux?). Somewhere an app is failing. My home sensor system can't do this. It must work 24/7 for as long as it is powered.
While Erlang has a great approach to fault tolerance, it is not the complete answer. You must intelligently use its recovery features, and you must avoid software faults to begin with.
"Not invented here" is preached against, but if you can't trust the provided tools, what do you do?
I'll have to see if Bluegiga's scripting is solid. But what do I do if the sensors start silently failing weeks (or months) after deployment? How do I debug them?
In order to get my Erlang base station up and running, I will need my Erlang code to interface with a serial port. There is no built-in support in Erlang to do this. I was going to write a small program (or use "netcat") to create a TCP bridge for the serial port and let Erlang talk to that. Will this eventually introduce subtle problems? How do I manage the bridge? The Erlang approach says to expect that the bridge will fail and simply restart it (and resync the rest of the processes with it). This is a smart approach, but with the "bridge" I have left the Erlang eco-system.
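For the record, the bridge I have in mind is nothing exotic -- something in the spirit of this C sketch (device path, baud rate and port number are placeholders; error handling and resync are elided), which just shovels bytes between a tty and a TCP socket for the Erlang side to consume:

#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    /* Serial side: raw 8N1 at 9600 (device and speed are placeholders). */
    int ser = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
    struct termios t;
    tcgetattr(ser, &t);
    cfmakeraw(&t);
    cfsetispeed(&t, B9600);
    cfsetospeed(&t, B9600);
    tcsetattr(ser, TCSANOW, &t);

    /* TCP side: listen on localhost for the Erlang gen_tcp client. */
    int ls = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a = {0};
    a.sin_family      = AF_INET;
    a.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    a.sin_port        = htons(7777);
    bind(ls, (struct sockaddr *)&a, sizeof a);
    listen(ls, 1);
    int cs = accept(ls, 0, 0);

    /* Shovel bytes both ways until either side drops. */
    for (;;) {
        fd_set r;
        FD_ZERO(&r);
        FD_SET(ser, &r);
        FD_SET(cs, &r);
        if (select((ser > cs ? ser : cs) + 1, &r, 0, 0, 0) < 0)
            break;
        char buf[256];
        ssize_t n;
        if (FD_ISSET(ser, &r)) {
            if ((n = read(ser, buf, sizeof buf)) <= 0) break;
            write(cs, buf, n);
        }
        if (FD_ISSET(cs, &r)) {
            if ((n = read(cs, buf, sizeof buf)) <= 0) break;
            write(ser, buf, n);
        }
    }
    return 0;
}

An Erlang supervisor can restart this bridge when the socket drops, but the bridge process itself lives outside OTP's view -- which is exactly the worry.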
All of these issues nag me, and they should. On the one hand, I think that the embedded world (mostly hardware types writing assembly, C and maybe C++) is way behind the "high level programming" world in providing flexibility. On the other hand, with that flexibility, software folks have brought along all of the little devils that come with high level programming.
Food for thought.
Bluegiga BLE112: A Game Changer?
Bluegiga has introduced a Bluetooth LE (Low Energy) module: BLE112
Their approach: Add battery, SPI|UART|analog based sensor, write a script and you've got a running sensor node.
This is great. It could save me a lot of work: most of the sensor-side work is done for me. They have introduced a scripting language that runs on the Bluetooth module (a TI CC2540 -- 8051 core). It is a very low power design with great idle/sleep consumption savings.
Bluetooth LE wants to supplant ANT and Zigbee. It wants to do what ARM (Cortex) has done to the portables market. This Bluegiga module is the most impressive implementation I've seen so far (on paper).
If this module does what I want for my home sensor network, then I can focus on the base station. This is where things get interesting anyway.
I'm thinking more about using Erlang there. I think the time has come to introduce robust server technology to the embedded world. This presentation agrees. Exciting times ahead :-)
Thursday, March 22, 2012
Wither Erlang? Wither GA144? Wither Home Sensor project?
I've mentioned Erlang on this blog before. My last reference was regarding GreenArrays' GA144 + arrayForth as sort of an FPGA level Erlang.
Recently, I've found myself in the midst of using Erlang at work. It has been a number of years since I've played with Erlang. I first learned the language back in 2002. I knew that there had been a spike of interest starting around 2007/2008 (perhaps due to RabbitMQ and CouchDB?), but that seems to have withered away.
The Internet's interest in shiny new things is swift and brief. Erlang had its Internet moment, and now we are left with aging (often incomplete) projects and libraries. Oh, the Erlang community still seems vibrant (and releases of Erlang/OTP are flowing), but the Internet hive-mind has moved on (Clojure?).
As I stretch my Erlang muscles, I am considering giving it a go for the server node of my home sensor/monitoring project. I've been mulling over using the GA144 here, but for practical purposes a Linux based solution may make more sense. While this will slow down my GA144 experimentation (and postings), I don't intend to "abandon" the chip. I am just finding that my home project is at risk of withering unless I get something working. Maybe an Atom based SBC running Linux and Erlang would be a good place to start.
More later....
Friday, February 17, 2012
arrayForth notes #6 - Improved PCF2123 SPI Code
There were a lot of errors in the note #4 listing. I made the post mainly because I was excited to see commands fly between the PCF2123 and GA144 as viewed by a logic analyzer (even though spir8 didn't really work correctly).
So, here is where I clean up the code, refactor it a bit and offer something a little more functional.
I could go on for pages about the rationale behind why this code lives in Nodes 7 and 8 and how I hooked up the chip. But that is for another time. I hate having broken code posted to my blog, so this post is mainly a means to offer something that works. BTW, this runs on the eval board target chip.
It still has some inefficiencies (compared to the GreenArray supplied SPI code), but by doing this myself (essentially a "clean room" implementation), I got to learn a lot more about how to code the GA144.
this is a brutally simple interface for a spi
pin 1 - chip select
pin 3 - clock
pin 5 - mosi
pin 17 - miso
/spi call first. main loop
ckwait pauses for an effective rate of 2mhz
-ckwait asserts the clock line low for 2mhz
cs asserts the chip select line high
-cs asserts the chip select line low
850 list
todd's simple spi code
8 node 0 org
/spi @ push ex . /spi ;
02 4A for unext ; approx. 2mhz
-ckwait 05 2B !b ckwait ;
07 io b! -ckwait ;
0A 29 !b ;
0C if drop 10 then 2F or !b ckwait -ckwa
13 b- 80 7 for over over and spiw1 2/ ne
1A -b @b . -if drop - 2* - ; then drop 2
1E -b 0 7 for 0 spiw1 spir1 next 2/ ;
26
nxp pcf2123 calendar clock module.
/pcf2123 initializes the pcf2123 clock
rdt reads date and time
sdt sets date and time
852 list
7 node 0 org
/cs 00 left a! @p .. cs ! ;
04 @p .. -cs ! ;
07 d- @p .. @p .. ! ! ;
0A -d @p .. !p .. ! @ ;
0D put @p .. spiw8 ! ;
11 @p .. spir8 ! get ;
14 @p .. /spi ! /cs 10 spiw 58 spiw c
1C -smhwdmy /cs 92 spiw spir spir spir spi
27 ymdwhms- /cs 12 spiw spiw spiw spiw spi
32 12 11 10 9 8 7 6 sdt ;
3C
Wednesday, February 15, 2012
arrayForth notes #5 - Observations and hints
I've since improved the code in the previous entry. I've slimmed it down to 59 words (out of a max of 64). I am not sure I can get it small enough to include the wiring, but I'll look further at exemplar SPI code in the baseline for some more slimming tricks.
One thing I am noticing is that literals (numbers) are very expensive. Each literal takes a whole word (18 bit cell) of RAM. This is the reason for such non-intuitive tricks as "dup or" instead of coding the literal "0". "Dup or" doesn't take up a whole RAM cell, so it can be packed with other instructions.
Calls take up quite a bit of space too. If your word is shorter than a single 18 bit cell, you will do better just coding it inline rather than do a word call.
Programming the GA144 in arrayForth means that you must become an expert in how words are compiled. You can escape this by using polyForth or eForth, but you lose the benefit of understanding how the GA144 actually works.
I am still trying to get my arms around wiring, but I remain convinced that the true strength of the GA144 is basically as an FPGA killer for simple concurrent processes.
Whereas the strength of a Silabs 8051 or ARM Cortex M is in the richness of peripherals, the GA144 doesn't benefit much from peripherals. It is an FPGA level Erlang. And, like Erlang, it has specific strengths.
Its biggest strength is power efficient massive concurrency. I would like to see more I/O, but that only lulls me into the traditional concurrency perspective. I need to stop thinking about having dozens of sensor outputs tied to dozens of GA144 inputs. That isn't the strength. Most assuredly, it is the ability to model dozens of sensors concurrently, without dealing with interrupts or global state machines -- that is its biggest strength.
Tuesday, February 14, 2012
arrayForth notes #4 - PCF2123 SPI Code
This is brutally tight code. It is not the most efficient code, but it does fit in 1 node (unfortunately, there is no room for "plumbing" -- so it will most likely need to be refactored into 2 nodes).
The code can be exercised (by the IDE), by using "call" to invoke /pcf2123 for initialization, sdt to set the date and rdt to read the date. Values for "sdt" can be pushed onto the node's stack by using "lit". I used a logic analyzer to look at the results.
There was a lot of setup to get this going, and I am not going to cover that right now.
I need to get the plumbing (wiring) right. Getting 1 node to talk to another is not as intuitive as I hoped. I also fear that the IDE gets critically in the way (hook et al can wipe out your node's RAM when creating paths). This will mean that the IDE will be less useful for stuff that is very node position dependent (i.e. GPIO nodes).
I don't fully grok the inter-node comms yet. In particular, I am not sure how to "push" numbers from one node to another. The IDE does this fine with "lit", but if my node isn't wired by the IDE all I have is "warm/await". The apparent exemplar for passing values between nodes is to explicitly have the target node "fetch" values from the port. Unfortunately, as you can see from the code below, I am out of room (0x40 max words per node and I am at 0x3d). I could shrink the code a bit more, but...
this is a brutally simple interface for the
nxp pcf2123 calendar clock module.
pin 1 - chip select
pin 3 - clock
pin 5 - mosi
pin 17 - miso
ckwait pauses for an effective rate of 2mhz
-ckwait asserts the clock line low for 2mhz
cs asserts the chip select line high
-cs asserts the chip select line low
/pcf2123 initializes the pcf2123 clock
rdt reads date and time
sdt sets date and time
850 list
todd's simple pcf2123 clock code
8 node 0 org
ckwait 00 4A for unext ; approx. 2mhz
-ckwait 03 2B !b ckwait ;
05 io b! -ckwait ;
07 29 !b ;
09 if drop 10 then 2F . + !b ckwait -ckw
10 b- 80 7 for over over and spiw1 2/ ne
17 -b 0 7 for 2* dup or spiw1 @b . -if d
1 or dup then drop next 2/ ;
21 cs 10 spiw8 58 spiw8 -cs ;
27 -smhwdmy cs 92 spiw8 spir8 spir8 spir8
32 ymdwhms- cs 12 spiw8 spiw8 spiw8 spiw8
3D
Wednesday, February 08, 2012
arrayForth notes #3 - SPI
I've finally begun to talk to the PCF2123 from the G144A12 eval board. The SPI code is not space optimized, so just the basics are taking up almost a full node. (More on that later).
So far, I've got "reset" and a register read working (return data not validated). I am using Node 8 and its 4 GPIO lines (for MOSI, MISO, CLOCK and ENABLE/CS). The PCF2123 is odd in that CS active is high, not low. I've got a tight for unext loop pulsing the CLOCK line at around 2MHz:
In the above Saleae Logic screenshot, I am tracing two CS sessions: First a "reset" (0x10 and 0x58), and then a request to read register 4. I am not sure the data returned is correct (yet), but the fact that I am getting something must mean that the device is happy with my query. Unfortunately, my register read isn't returning the value 0x15 yet, but at least I know that my GPIO pin writes are working.
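(For comparison, here is what the same idea looks like bit-banged in C on any MCU with a few spare GPIOs. The pin and delay helpers are placeholders for whatever your port provides, and clock polarity/phase should be checked against the PCF2123 datasheet -- this is a generic MSB-first transfer, not my arrayForth code transliterated.)

#include <stdint.h>

/* Placeholder pin helpers -- map to your MCU's GPIO registers. */
extern void CLK(int level), MOSI(int level);
extern int  MISO(void);
extern void half_bit_delay(void);   /* tune for the clock rate you want */

/* Shift one byte out on MOSI while sampling MISO, MSB first. */
uint8_t spi_xfer(uint8_t out)
{
    uint8_t in = 0;
    int i;
    for (i = 0; i < 8; i++) {
        MOSI(out & 0x80);
        out <<= 1;
        CLK(1);
        half_bit_delay();
        in = (uint8_t)((in << 1) | (MISO() & 1));
        CLK(0);
        half_bit_delay();
    }
    return in;
}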
As I said above, just basic support of SPI is taking up precious space (currently the SPI routines take 36 words of RAM!). I am planning on doing some optimizing, but I think that the actual PCF2123 functionality will need to live in a separate node.
I have a business trip planned for the next couple of days, so if I don't get the SPI "read" working correctly tonight it will have to wait until the weekend. However, the plane ride and hotel stay will afford me some time to look into space optimization of the code and perhaps I will finally tackle the simulator.
And, yes! Code will be posted... once the damn thing works.
Tuesday, February 07, 2012
arrayForth notes Part 2
I'm trying to get my G144A12 Eval board to talk SPI to a calendar chip (NXP PCF2123). I've managed to get a 2MHz(ish) clock pulse running from Node 8 on the Target chip. (I've picked the Target chip because I am overwhelmed by all of the stuff the Host chip is connected to -- I'm better at learning stuff from the ground up.)
Unfortunately, I've been trying to get a test harness up and running in a different Node (9) and have been crashing the system every few minutes with my clumsy attempts at wiring and routing. Documentation regarding wiring Nodes is sorely lacking.
I'm obviously very confused about node direction. I've been referring to "right" as "left". Looking down upon the Node map, I've been wondering why I couldn't get Node 9 to talk to Node 8. Apparently, Node 8 is to the "right" of Node 9. So, I suppose I should imagine "lying down on my back" on a Node.
I'd like to get something going between the Calendar and the Eval board this week, but I am flying out of town for a couple of days for a business meeting.
I wonder how airport security would react to a bare eval board stuffed into my back pack? (and to say nothing of the reaction if I were to try and use it in flight ;-)
Friday, February 03, 2012
My smallest SMD solder job yet...
Components are getting too small. For your consideration, a really neat sounding temperature sensor from TI: the TMP104. Its features look nice:
- Accuracy: ±0.5°C Typ (–10°C to +100°C)
- 3 μA Active IQ at 0.25 Hz
- 1 μA Shutdown
- SMAART Wire Interface
- Temperature range of –40°C to +125°C
- Package: 0.8-mm (±5%) × 1-mm (±5%) 4-Ball WCSP (BSBGA)
Okay, so how small could that really be? I ordered a couple and I got this:
Yeah, that's it next to a grain of rice. A long grain of rice.
So, I figured I have to give it a shot. I need a decent temperature sensor for my project, so I whipped out the fine tip soldering iron and the thinnest strands of wire I could snip...
and this is the result. Yeah, it's that little guy up top above the massive MCU.
Once again, for a little perspective:
A quick view under a 10x microscope and it looks solid (if not pretty). I put a couple of drops of Krazy glue to hold the wires down to keep it safe.
Unfortunately, it will take a while to make sure it works... I have to figure out this SMAART wire protocol thingy.
Monday, January 30, 2012
Apples and Oranges (GA144 and ARM Cortex A8)
In my previous posts I mentioned how multiple-CPU processors such as the GA144 are different from multi-core CPUs. I also talked about how having multiple independent processes may work better for some problem spaces. In broad terms, the GA144 can be viewed as a very low level Erlang for modeling lightweight (low memory, low computational-complexity) problems. Viewing it this way, it really isn't competing (for my purposes -- a sensor base station) with most (bare) MCUs.
(Additional similarity: Forth, like Erlang, frowns upon having lots of variables, so data is carried as function parameters (Erlang) or on the stack via words (Forth).)
Now, if you throw an ARM + Linux + Erlang (http://www.erlang-embedded.com/) at my sensor base station, what do you get? (If Erlang doesn't really work well on the ARM, replace it with your favorite language plus lots of processes/threads. Also, keep in mind that my sensor base station needs to run for days on battery backup.)
Now, let's pick an ARM/Linux system for comparison: how about the BeagleBone?
This $89 beauty looks really appealing. I could see using it as my base station. It is based on a new Cortex A8 and is feature rich for its size.
I can't compare (yet) how well it would do against a GA144. The GA144 certainly looks anemic compared to it (from just the Cortex A8 perspective).
However, I can take a quick look at power:
- The BeagleBone consumes 170mA@5VDC with the Linux kernel idling and 250mA@5VDC peak during kernel boot (from pages 28-29 of http://beagleboard.org/static/BONESRM_latest.pdf).
- The GA144 consumes 7uA@1.8VDC (typical) with all nodes idling and 540mA@1.8VDC (typical) with all nodes running. (from the G144A12 Chip Reference).
Of course, you can't directly compare the two, but consider this interesting tidbit: The power performance of the GA144 is directly related to how many nodes you run.
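Just comparing the idle figures quoted above is instructive. A trivial calculation, using only the numbers from the two documents cited:

#include <stdio.h>

int main(void)
{
    double bone_idle_w  = 0.170 * 5.0;  /* 170mA @ 5VDC  = 850 mW  */
    double ga144_idle_w = 7e-6 * 1.8;   /* 7uA @ 1.8VDC  = 12.6 uW */

    /* roughly 67,000x difference at idle */
    printf("idle power ratio: %.0fx\n", bone_idle_w / ga144_idle_w);
    return 0;
}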
I haven't looked at any performance numbers between the two, but I'll wager that the Cortex A8 ultimately outperforms the GA144. But, my sensor base station is neither CPU bound (no complex calculations) nor RAM bound (important data points are persisted in flash storage and fetched as needed).
The real question is: How much useful work can I get done in 1 node?
Saturday, January 28, 2012
GA144 as a low level, low energy Erlang
I've been reading some of the newsgroup threads regarding the GA144, and most of the critiques come from a low level perspective (GA144 vs FPGAs vs ARMs, etc). One can argue that the processor can't compete when considering the anemic amount of memory each node has and the limited peripheral support. But let us put aside that argument for a moment (we'll get back to it later).
Here I primarily want to discuss the GA144 from a software (problem solving) architecture perspective. This is where my primary interests reside. I'd like the GA144 to have formal and flexible SPI support. I'd like to see more peripheral capability built in. I'd like to see 3.3-5VDC support on I/O pins so level shifter chips aren't needed. But I think the potential strong point of the GA144 is in what you can do with the cores from a software (problem solving) architectural design perspective.
Notice I keep mentioning software and problem solving together? I want to make clear that I am not talking about software architecture in terms of library or framework building. I'm talking about the architecture of the solution space. I'm talking about the software model of the solution space.
Let's look at an analogy.
If I were to build a large telecommunication switch (handling thousands of simultaneous calls) and I implemented the software in Erlang or C++ (and assuming that they both would allow me to reach spec -- maybe 99.9999% uptime, no packet loss, etc.) at the end of the day you wouldn't be able to tell the system performance apart.
However, one of the (advertised) benefits of Erlang is that it allows you to do massive concurrency. This is not a performance improvement, but (perhaps) a closer model to how you want to implement a telco switch. Lots of calls are happening at the same time. This makes the software easier to reason about and (arguably) safer -- your implementation stays closer to the solution space model.
Do you see what I am getting at?
Now, let's look at the problem I've been talking about here on the blog (previously described in vague terms): I want to integrate dozens of wireless sensors with a sensor base station. The base station can run off of outlet power but must be able to run off a battery for days (in case of a power outage). It is designed to be small and discreet (no big panel mounted to the wall with UPS backup). It needs to run 24/7 and be very fault tolerant.
The sensors are small, battery efficient and relatively "dumb". Each samples data until it reaches a prescribed threshold (perhaps performing light hysteresis/averaging before wasting power to send data) and it is up to the sensor base station to keep track of what is going on. The base station collects data, analyzes it, tracks it and may send alerts via SMS or perhaps just report it in a daily SMS summary.
Let's consider one typical sensor in my system: A wireless stove range monitor. This sensor, perhaps mounted to a range hood, would monitor the heat coming from the burners. This sensor will be used to (ultimately) let a remote individual know (via SMS) that a stove burner has been left on an unusually long time. Perhaps grandma was cooking and forgot to turn the burner off.
This stove range sensor probably shouldn't be too smart. Or, in other words, it is not up to it to determine if the stove has been on "too long". It reports a temperature reading once it recognizes an elevated temperature reading over a 5 minute interval (grandma is cooking). It then continues to report the temperature every 10 minutes until it drops below a prescribed threshold. This is not a "smart" sensor. But it is not too dumb (it only uses RF transmit power when it has sufficiently determined significant temperature events -- rather than just broadcasting arbitrary temperature samples all day long).
The sensor base station will need a software model that takes this data, tracks it and makes a determination that there is an issue. Just because the stove range is on for a few hours does not necessarily mean there is a problem. A slow elevated temperature rise followed by stasis may suggest that a pot is just simmering. However, if the stove is exhibiting this elevated temperature past 11pm -- certainly grandma isn't stewing a chicken at this time of night! You don't want to get too fancy, but there can be lots of data points to consider when using this stove range monitor.
Here is my solution model (greatly simplified) for this sensor monitor:
- Receive a temperature sample
- Is it at stasis? If so, keep track of how long
- Is it still rising? Compare it with "fire" levels -- there may be no pot on the burner or it is scorching
- Is the temperature still rising? Fast? Send an SMS alert
- Is it on late in the evening? Send an SMS alert
- Keep a running summary (timestamped) of what is going on. Log it.
- Every night at 11pm, generate a summary of when the range was used and for how long. Send the summary via SMS
Imagine this as a long running process. It is constantly running, considering elapsed time and calendar time in its calculations.
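As a sketch of what that long running process could look like (the thresholds and helper functions here are invented for illustration -- this is the shape of the model, not a finished implementation):

#define STASIS_BAND_C 2    /* +/- band treated as "no change"       */
#define FIRE_C        300  /* implausibly hot: no pot, or scorching */
#define LATE_HOUR     23   /* 11pm                                  */

extern int  next_sample_c(void);         /* blocks until the sensor reports */
extern int  hour_now(void);
extern void sms_alert(const char *msg);
extern void log_event(const char *tag, int temp_c);

void stove_monitor(void)
{
    int last = 0, stasis_min = 0;
    for (;;) {
        int t = next_sample_c();         /* sensor reports every 10 minutes */
        log_event("range", t);

        if (t > last + STASIS_BAND_C) {  /* still rising */
            stasis_min = 0;
            if (t > FIRE_C)
                sms_alert("range temperature unusually high -- check burner");
        } else {
            stasis_min += 10;            /* at stasis: track how long */
        }
        if (t > last && hour_now() >= LATE_HOUR)
            sms_alert("range still warming late at night");

        last = t;
        /* a separate daily task would roll the log into an 11pm SMS summary */
    }
}

Written as one endless loop, the model reads exactly like the description above -- no state machine table, no interrupt bookkeeping.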
Now, this is just one of many types of sensor that the base station must deal with. Each will have its own behavior (algorithm).
I can certainly handle a bunch of sensors with a fast processor (ARM?). But my software model is different for each sensor. Wouldn't it be nice for each sensor model to be independent? I could do this with Linux and multiple processes. But, really, the above model isn't all that sophisticated. It could (perhaps) easily fit in a couple of GA144 nodes (the sensor handler, logger, calendar and SMS notifier would exist elsewhere). And it would be nice to code this model as described (without considering state machines or context switches, etc).
So, back to the argument at the top of this post... I don't care if the GA144 isn't a competitive "MCU". My software models are simple but concurrent. My design could easily use other MCUs to handle the SMS sending or RF radio receives. What is important is the software model. The less I have to break that up into state machines or deal with an OS, the better.
This is my interest in the GA144: A low level, low energy means of keeping my concurrent software model intact. I don't need the GA144 to be an SMS modem handler. I don't need it to perform the duties of a calendar. I need it to help me implement my software model as designed.
Monday, January 23, 2012
True Modular computing
I want to tie together my past 3 posts.
In Multi-computers vs multitasking vs interrupts I stated a problem (concurrent processing within the embedded realm) and teased you with a solution (GreenArrays' GA144).
In Costs of a multiple mcu system I backed a bit away from the GA144 and wondered if a bunch of small, efficient and cheap MCUs could solve the problem.
In Building devices instead of platforms I offered a rationale to my pondered approaches.
So, here I sit composing an interrupt handler for an 8051 to service a GSM modem's UART (essentially to buffer all the incoming data). And... everything has just gotten complicated. I've been here before. I am no stranger to such code, and the approach I am taking is textbook. This is how code starts to get hairy and unpredictable.
But, really now... maybe I *should* consider breaking my tasks down into hardware modules (with each module consisting of dedicated software). If I dedicated an 8051 (tight loop, no interrupts) to just talking to the modem, collecting responses and sending just the relevant information to another 8051 (perhaps through I2C or SPI), then I could build that module once, debug it once and be done with it.
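And that dedicated modem-facing 8051 really could be almost this dumb. A sketch (using the standard Keil-style 8051 SFR names from reg51.h; UART setup and the link_send() handoff over SPI or I2C are left abstract):

#include <reg51.h>

/* Tight loop, no interrupts: poll the UART, buffer every byte,
   and hand complete responses to the inter-processor link. */
static unsigned char buf[64];
static unsigned char len;

extern void link_send(unsigned char *p, unsigned char n); /* SPI/I2C handoff */

void main(void)
{
    /* UART init (mode 1, timer 1 baud rate) omitted for brevity. */
    len = 0;
    for (;;) {
        while (!RI)            /* spin until a byte arrives */
            ;
        buf[len] = SBUF;
        RI = 0;
        len++;
        if (buf[len - 1] == '\n' || len == sizeof buf) {
            link_send(buf, len);   /* forward only complete lines */
            len = 0;
        }
    }
}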
This is modular design, isn't it?
So (during design) every time I find a need for an interrupt, I just fork another processor?
(This would work with a single GreenArrays GA144 as well as with $3 Silabs 8051 MCUs.)
Building devices instead of platforms
The title of this posting is a concept that flies in opposition to conventional wisdom. Everything seems to be a platform these days. When you buy a gadget or appliance you are not buying a device, you are investing in a platform. Refrigerators are appearing with Android embedded. We are looking at a future of doing "software updates" to our appliances!
Of course, there is big talk about the "Internet of Things", and that could be grand (my washing machine could one day query my fridge about the tomato sauce stain it encountered on my shirt and then place an order on Amazon for the correct stain treatment product).
But consider the "elephant in the room": these devices will suffer from that most dreaded of software plagues: "ship now; fix later in an update".
This doesn't tend to happen when your appliances and devices contain "firmware" (in the original sense). My washing machine has embedded microprocessors, but there is no evident way to upgrade the firmware. Hence, the engineers have to get it right before it ships. The software is apparently sophisticated enough to ship finished. Of course, this is not always the case, but the "get it right" mindset is there. You don't want to tell people they have to call a service technician to "upgrade" their washing machine when it locks up.
For all the smarts my washing machine has, it is still a "device" (not a platform).
This rant bleeds into the common "old fogy" rant that a cell phone should be (just) a phone. I don't think we need to start limiting the potential of our devices, but there are some that should just "work".
We are losing that when we start designing a device and our first step is to choose an OS.
Friday, January 20, 2012
Costs of a multiple mcu system (back of envelope)
If physical size isn't an issue (you don't need the tiniest of footprints) and unit cost isn't measured in cents, have you considered a multi-MCU system? If your system is highly interrupt driven (receiving lots of I/O from more than one place), then you'll need either an OS or at least a solid state-machine based design to handle the concurrency.
If I can get 3 Silabs 8051 variants for between $12-$15, I would need only around 6 caps and resistors to get them up and running. So, total MCU cost would be $15 max. These old 8-bit workhorses are self-contained systems. They rarely need an external crystal, often come with built-in regulators, and are meant to have a low passive component count. You can just "drop" them in with a few millimeters of board space.
What does this get me? Potentially less software complexity. Consider assigning your MCUs like this: each I/O subsystem gets its own MCU for processing. You can design, code and test each subsystem separately. With a means of message passing (SPI, shared memory/flash, etc.) you now have an integrated system.
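The message passing doesn't need to be clever, either. A fixed, checksummed frame is easy for an 8-bitter to emit and validate; something like this (the field choices are mine, purely illustrative):

/* A possible inter-MCU frame for SPI/I2C/UART transport. */
struct frame {
    unsigned char soh;       /* 0x01: start of frame */
    unsigned char src;       /* sending MCU id */
    unsigned char kind;      /* message type */
    unsigned char len;       /* payload bytes used (<= 16) */
    unsigned char data[16];  /* payload */
    unsigned char sum;       /* 8-bit additive checksum of all the above */
};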
Dedicating an MCU per subsystem is hardware-based functionality factoring. The industry already does this: if you have bought a GPS module, a cellular modem or even a (micro)SD card, you have bought an MCU+X (where X is the functionality). Peripherals often come with their own embedded MCUs (with custom firmware). However, we expect those peripherals to be (relatively) flawless.
Software, on the other hand, is under constant revision, bug fixes and improvements.
Consider this: If you buy a memory hardware module that has built in support for FAT filesystems (maybe accessed via a UART or SPI), then you expect that hardware module to work perfectly. It is a $5 piece of hardware that should work out of the box. There is no notion of "field" upgrades.
However, if you have bought (or downloaded) a FAT filesystem as a software library, you'll see a few revisions/improvements with a release cycle. It doesn't have to work perfectly. You are expected to upgrade occasionally.
Hardware is getting cheap enough that (for small unit counts) we should seriously consider multiple MCU systems.
Curiously, GreenArrays has sidestepped this issue and simply incorporated a bunch of small MCUs into one package.
Multi-computers (GA144) vs multitasking (ARM) vs interrupts (8051/MSP430)
You've got a system to design. It is a multi-sensor network that ties to a small, efficient battery run base station. The sensor nodes are straightforward. You start thinking about the base station:
I've got a barrage of data coming at me over the air at 433MHz. I have a simple 2-byte buffer on the SPI-connected transceiver, so I don't have a lot of time to waste on the MCU. I must be there to receive the bytes as they arrive.
Once I've received the data, it must be parsed into a message, validated, logged (to persistent storage) and perhaps correlated with other data to determine if an action must be taken. An action can also be timer based (scheduled). Oh, and I must eventually acknowledge the message receipt or else the sender will keep sending the same data.
Additionally (and unfortunately), the action I need to perform may involve dialing a GSM modem and sending a text message. This can take some time. Meanwhile, data is flowing in. What if a scheduled event must take place while I'm in the middle of processing a new message?
Now, this is the sort of thing that you would throw a nice hefty ARM at with a decent OS (maybe linux) to do multitasking and work queue management. But, let's think a second... Most of what I've just described works out to a nice simple flow diagram: Receive data -> parse data into message -> reject duplicate messages -> log message -> correlate message with previous "events" -> determine if we need to send a text message -> send text messages at specific times -> start over.
Each task is pretty straight forward. The work is not CPU bound. You really don't need a beefy ARM to do each task. What we want the ARM to do is to coordinate a bunch of concurrent tasks. Well that will require a preemptive OS. And then we start down that road... time to boot, link in a bunch of generic libraries, think about using something a little more high level than C, etc. We now have a fairly complex system.
And, oh... did I mention that this must all run nicely on a rechargeable battery for weeks? And, yank the battery at any time -- the system must recover and pick up where it left off. So, just having a bunch of preemptive tasks communicating via OS queues isn't quite enough. We will probably need to persist all queued communication. But I am getting distracted here. The big system eats a little bit too much power...
Okay, so we go a bit smaller. Maybe a nice ARM Cortex-M3 or M0. Okay, the memory size is reduced and it's a bit slower than a classic ARM. A preemptive OS starts to seem a bit weighty.
So, how about a nice MSP430 (a big fat one with lots of RAM and program space)? Now start to think about how to do all of that without a preemptive OS (yes, I know you can run a preemptive time-sliced OS on the MSP430, but that is a learning curve, and besides, you are constraining your space even further). Do you go with a cooperative OS? Well, now you have to start thinking about states... how do you partition the tasks into explicit steps? At this point you start thinking about rolling your own event loop.
So, then there are the interrupts:
Oh, I forgot about that. The SPI between the MSP430 and the transceiver. The UART to the GSM modem. And the flash. Don't forget the persistent storage. And checkpointing: the system needs to start where it left off if the battery runs out.
Okay, you are getting desperate. You start thinking:
What if I had one MCU (MSP430, C8051, whatever) dedicated to the transceiver and one dedicated to the GSM modem. The code starts to get simpler. They'll communicate via the persistent storage.... But, who handles the persistent storage? Can the transceiver MCU handle that too?
This is where thoughts of multi-computers come in. Not multi-cores (we are not CPU bound!), but multi-computers. What if I had enough computers that I could dedicate each fully to a task? What if I didn't have to deal with interrupts? What would these computers do?
- Computer A handles the transceiver
- Computer B handles logging (persistent storage interface, indexing, etc)
- Computer C handles the GSM modem (AT commands, etc)
- Computer D handles parsing and validating the messages
- Computer E handles "scheduling" (time based) events
- Computer F handles checkpointing the system (for system reboot recovery)
etc etc etc
This is where I start really, really thinking that I've found a use for my GA144 board: Performing and coordinating lots of really simple tasks.
It becomes GA144 vs ARM/Cortex + Linux.
Now if I can only figure out how to do SPI in arrayForth...
Sunday, November 20, 2011
arrayForth notes Part 1
I haven't had a lot of time to work on my EVB001 eval board. The time between sessions can go weeks and I tend to forget a lot of stuff. These are notes to myself... sort of dead simple exercises to use as restart points. So...
Here is an example of just attaching to a node and getting it to do some computation. What you type into arrayForth is displayed in Courier font.
First, you need to make sure a-com is set to the attached COM port.
a-com (this should display the com port number)
If the com port is incorrect, you must change it. Do this:
def a-com (enter into the editor)
Navigate to the value, make the change, type save and exit/re-enter arrayForth.
(Hint: Press ';' to position the cursor just after the number; press 'n' to delete it; use 'u' to insert the new number; press ESC to leave that mode; press SPACE to exit the editor and then type save)
Check the value again (a-com).
Now, let's go ahead and hook into node 600:
host load panel (load host code and display the panel)
talk 0 600 hook upd (talk to the chip, wire up to node 600 and update the stack view)
You should now see a bunch of numbers on the stack, starting with line 3.
Now, let's throw a couple of numbers onto the node's stack:
7 lit 2 lit (lit pushes the numbers off of the x86 arrayForth stack and onto the node's stack)
You should now see 7 and 2 as the last two values on the 4th line.
Remember, the stack display is in hex and the values you are entering are in decimal.
(If you wish to enter values in Hex mode, press F1 and precede hex numbers with a zero (0).)
Now, let's add the two numbers:
r+ (the "r" distinguishes the F18 addition word from the x86 "+" word)
You should now see 9 on top of the stack.
Simple.
Now, let's try one more contrived exercise. Let's load some data into the node's RAM:
55 0 r! (take 2 values off of the x86 stack: 55 goes into location 0)
You won't see the changed memory until you type:
?ram
So, at last, we have something working "hands on". The example in the arrayForth user guide is great, but sometimes a good start is just being able to interactively talk to the chip.
Another GreenArrays blog
I've stumbled upon: http://greenarrays.blogspot.com
Oh, and also, look to the right of this entry (under Links) for the permanent home of my arrayForth cheat sheet (just revised today).
Saturday, October 01, 2011
GreenArrays arrayForth Keyboard cheat sheet
I'm having trouble wrapping my head around the arrayForth editor keyboard layout diagram in section 3.1 of the arrayForth user guide, so I am trying to put together a slightly modified version that more tightly associates the keyboard keys (and position) to function. I am also dropping the grouping-via-color since I don't have a color printer at hand. Here is a link to the PDF.
Tuesday, September 27, 2011
My GreenArrays EVB001 Eval Board Adventure
Okay, so I broke down and purchased an eval board last week. I got my shipment notice last Friday (which included a nice personal note from Greg Bailey mentioning that he saw my last blog post -- thanks Greg) and the board arrived Monday.
Now, to answer my own question (from that post): What to do with 144 cores? I guess I'm going to have to figure that one out...
I've got a big learning curve ahead of me, and although I'm not the type to post daily updates on "learning experiences", I'll probably post now and then how it is going. If I get an overwhelming burst of energy, then I may even fork my EVB001 adventures to a new blog dedicated to just that.
Anyway, what are my current plans?
- Learn enough arrayForth (ColorForth) to be dangerous.
- Work my way around the board (nodes and I/O).
- Begin world dominating project.
Regarding #1, I have followed ColorForth for years, but I never really used it. That being said, I am using Charley Shattuck's MyForth on my day job (shhh.. don't tell them) and that is different enough from ANS Forths that the arrayForth "culture-shock" is low.
Working around the board (#2) is critical as I have to figure out what my peripheral hook up options are. I figure that I would try and get the board talking to an accelerometer (or other sensor). This would be a good goal.
Now, world domination (#3) is a bit vague.
Now, here is what I am thinking.... My usual approach of building tiny/simple things that can be replicated (low volume production runs) won't work here. I simply can't afford to dedicate a $450 eval board to a single task. Then again, I hate the idea of just using it as a "prototyping" board for various ideas. I need a more singular goal.
So, I am viewing the eval board as a "platform". But, a platform for what?
When someone (for passion) designs and builds their own car, plane or boat, they are creating something unique. They are not making something with the end goal of mass production. They are building a "system" that satisfies their own needs. Now, if that "system" later results in replication due to demand, then that is great. But, it is all about building something unique -- something unlike the other guy's car, plane or boat.
You may see where I am going with this... the usual place: Robotics.
But, here I use the word "Robot" in loose terms. I am thinking about building a platform to support the integration of sensors and actuators. I want to load up the EVB001 with as many sensors as possible and have it collect, correlate and react through the manipulation of actuators. However, I want to do this within the tightest time constraint possible: I want a tight coupling between sensors and actuators. I want a feedback mechanism. I want... my flocking Goslings (or at least one of them at this point).
Integrating lots of sensors with a single fast running ARM is certainly possible. But this would be interrupt hell (or polling hell or linux process/thread management hell). This is why I (and other sensible people) incorporate tiny 8051s, AVRs and MSP430s into dumb sensors -- to make them independently smarter. Unfortunately, when you have a bunch of microcontroller enhanced sensors (and actuators) you have a communication nightmare. And you need a separate master CPU to integrate all of the "smart" data and manipulate the actuators.
None of this is new. None of this is rocket science. However, the robot I design would be my bot. It would be unique.
More deep thoughts later... For now, I just need to figure out how to talk to my new toy ;-)
Monday, September 19, 2011
GreenArrays G144 - What to do with 144 cores?
I've been following the GreenArrays G144 since its inception. Now a kit is available... programmable in colorforth (and eforth). Forth chips aren't new to me. I remember devouring the Novix NC4000 back in the mid-80s (I couldn't afford one...).
So, the kit costs $450. I don't really have that kind of money to drop on a dev kit, but... if I did manage to scrape up the cash, what would I do with 144 computing cores?
Seriously, that is a good question for deep thinking. From an embedded computing perspective, what could one do with 144 computing cores?
If I can come up with some good ideas, this kit may be on my birthday wishlist.... ;-)
Friday, September 09, 2011
Smart Things
I want to design Smart Things.
Smart Things are small devices that do specific things to augment our own intelligence and abilities.
A Smart Thing communicates with the outside world (and other external Smart Things) via protocols like NFC, Bluetooth or TCP/IP.
Co-located Smart Things may use SPI, I2C, UART or even bit banging GPIO.
A Smart Thing is never too smart, although it should require very little human intervention. It should be just smart enough to justify its own existence (as a gadget or parasitic module).
A Smart Thing is usually energy efficient. It should run off of a battery or a host's power.
Some examples of Smart Things:
- A UV monitor that samples sunlight, calculates UV Index and can be queried by an NFC Reader (like the Google Nexus S phone).
- A motion detector that sends alerts through Bluetooth or Wi-Fi.
- A light detector that keeps a log of when a room light has been turned on and for how long. It too could be queried through NFC.
- Sensor modules for gardens. The Smart Things could measure soil moisture and temperature. They could pass the information on through ANT or ZigBee to another Smart Thing that collects and correlates the data (for collection via smart phone).
A Smart Thing should not be too expensive: The more, the merrier.
A Smart Thing should last for at least 10 years (with at most 1 battery change per year) -- it should be embedded and "forgotten".
A really small Smart Thing could be built around a low power 8051 running MyForth or maybe an MSP430 running uForth/fil. The point here being: You don't care what the platform is. It just needs to work... and be smart.
Thursday, September 08, 2011
Tethered Forths
In my last post I talked about writing my new language fil in uForth via 3-stage metacompilation.
I want to note here that there is an alternate approach I am considering: Tethering.
I don't need to metacompile fil in order to get it running on a small MCU. If I retain the C code that does interpreting/compiling (bootstrap.exe) and write a new stripped down VM that doesn't understand interpreting/compiling (fil.exe), I only have to port the stripped down VM to the MCU. I would do all of my development/compiling on the PC (with bootstrap.exe -- renamed pc-fil.exe). The resulting byte code image (from compilation) would be moved to the MCU and run by the ported fil.exe.
This approach is a subset of what uForth already does. However, uForth also allows for the C based interpreter/compiler to run on the MCU. This is a bit weighty, but essentially gives me the ability to interactively develop directly on the MCU (interacting through a serial interface).
A better approach is tethering, and fil will prefer this approach even if I do re-implement the interpreter/compiler in uForth. In tethering, you do your development on the PC (using fil.exe/bootstrap.exe) and craft an MCU VM that listens on a serial port for special "commands". These commands can be as simple as "store byte-code" and "execute byte-code". You maintain a mirror of the dictionary between the PC and the MCU. All the MCU VM needs to do (from an interaction perspective) is write PC-compiled byte-code to its dictionary and be able to execute it. No interpreter is needed.
This is the brilliant method used by various Forths, including MyForth and Riscy Pygness.
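To show how small the MCU side of a tether can be, here is the general shape (command bytes and helper names are invented; this is not the actual MyForth or Riscy Pygness protocol):

/* Tethered target: obey "store byte-code" and "execute byte-code".
   serial_read_byte(), serial_read_u16() and vm_execute() are
   hypothetical stand-ins. */
static unsigned char dict[4096];        /* mirror of the PC's dictionary */

void tether(void)
{
    unsigned addr, n;
    for (;;) {
        switch (serial_read_byte()) {
        case 'S':                       /* store byte-code */
            addr = serial_read_u16();
            n = serial_read_u16();
            while (n--)
                dict[addr++] = serial_read_byte();
            break;
        case 'X':                       /* execute byte-code */
            vm_execute(serial_read_u16());
            break;
        }
    }
}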
If I run into too many walls doing my 3 stages, I may take a break and just go tethered. I do have MCU CFT projects I want to get done!
My new language: fil (Forth Inspired Language)
My last Forth was uForth. I wrote it to run on PCs (Linux, Cygwin, etc) and MCUs (TI MSP430 and any other Harvard architecture). The implementation was a subset of ANS Forth, and most of the Forth words were coded in Forth. The interpreter, compiler and VM were coded in C.
Since then, I've become (re)fascinated by more minimalistic Forths like ColorForth and MyForth. uForth isn't a good playground for minimalistic experimentation, so I am writing a new Forth inspired language to be called fil.
Like uForth, fil will work on small MCUs as well as big PCs. It will work on Harvard memory architectures (separate flash/ROM and RAM address spaces) as well as the more familiar linear address space. It will have a 16 bit instruction space (limited currently to a 64KB dictionary -- quite large in Forth terms) and a 32 bit stack/variable cell size. Using a 32 bit instruction space would force a trade-off of code bloat (double the size of code) or speed/complexity (right now I use a switch based code interpreter that assumes each token is 16 bits). In the future I may silently upgrade to a 32 bit dictionary. This shouldn't require a rewrite ;-)
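By "switch based code interpreter" I mean an inner loop of this general shape (a generic sketch, not fil's actual source; the opcodes are invented):

/* Switch-threaded VM: 16 bit tokens, 32 bit data cells. */
typedef unsigned short token_t;            /* one instruction */
typedef long cell_t;                       /* one stack cell */
enum { OP_LIT, OP_PLUS, OP_EXIT /* ... */ };

void run(const token_t *ip, cell_t *sp)
{
    for (;;) {
        switch (*ip++) {
        case OP_LIT:  *++sp = (cell_t)*ip++; break;  /* push inline literal */
        case OP_PLUS: sp[-1] += sp[0]; sp--; break;  /* Forth's + */
        case OP_EXIT: return;
        /* ... the rest of the word set ... */
        }
    }
}

Widening token_t to 32 bits doubles the code size; keeping it at 16 bits is exactly what caps the dictionary at 64KB.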
But, where do I start? Well, uForth is a good place. I figured I would bootstrap fil off of uForth. In an ideal world (and ideal implementation of Forth), I would metacompile fil straight from uForth. Unfortunately, there are some limitations/assumptions in the uForth C-based core. So, instead, I am taking a hybrid approach of modifying the core (C) code and the uForth (Forth) code. In essence, I am rewriting uForth to support metacompiling my new language (fil).
Metacompiling is not new. It is a time-honored Forth technique of building Forth from Forth. However, while traditional metacompilers target machine code, I am targeting a stripped-down version of uForth's VM (bytecode interpreter).
My approach has three stages:
1. I implement as much of uForth as possible in Forth, so that I can remove any underlying "C" assumptions and basically simplify the VM. What I'll have left is a uForth/fil with the interpreter/compiler/VM written in C. Let's call that C based executable "bootstrap.exe".
2. I rewrite the interpreter/compiler in uForth/fil.
3. I submit the uForth/fil (Forth) source code to itself (the new interpreter/compiler) and produce a new byte code image. I can then strip the interpreter/compiler out of the C code and produce a simple C VM that doesn't know squat about interpreting or compiling. This new VM executable (fil.exe) and byte code image will be fil. I no longer use "bootstrap.exe".
After this, I can port the new VM to various MCUs.
I have already finished Stage 1, but I reserve the option to spiral back in order to remove further C assumptions that prevent progress on Stage 2. I am also not being very careful to retain full uForth backward compatibility. At the end of Stage 1 I already have a "hybrid" fil/uForth language.
Once fil is complete, I will probably revisit the 16 bit dictionary and consider extending it to 32 bit. If I do this, I don't want to break the idea of fil running on small (8/16 bit) MCUs efficiently. I may consider a bank switched approach instead (multiple 16 bit dictionaries). Don't forget: You can pack a lot of code into a 16 bit Forth!
Monday, August 29, 2011
Software defined... Radio, GPS, ... etc?
Most comms modules (e.g. Bluetooth, GPS, etc), memory peripherals (e.g. SD cards, USB sticks, etc) and other sophisticated "chips" have embedded processor cores. These cores may be based on stock 8051 or specialized ARM designs. They are smart devices that save system designers a lot of integration time by being "drop ins" (i.e. you talk with them via simple protocols over UART, I2C or SPI) and they do all of the hard work.
Recently, reading about Software-defined Radios (replacing hardware-based tuning/filtering with software) and this article (dumber GPS modules where satellite correlation/fusion is done by back-end computers) has made me wonder if the future will present dumber peripherals in trade for more processing on our main CPUs.
How many processor cores are there in an average smart phone? You've got the primary CPU running the OS, but have you considered what is powering your Bluetooth, Wi-fi, GPS, cellular modem, display and touch interface? Having sophisticated software in these peripheral chips certainly aids time to market (less programming for the integrator). But, I rely on the craftiness of the chip designer to meet my needs.
Sure, it's all software (even when on individual hardware modules), but rarely are these things upgradeable. They have a limited product life (even if the analog part is still relevant). This is good for hardware companies, but not good for us (the end user/consumer).
I remember playing with early MEMS accelerometer chips. They usually just output a voltage per axis. There were no "interrupts" or SPI or I2C protocols. They were analog devices. It was up to me to figure out what they were spitting out and deal with it appropriately.
Now I use smart digital accelerometers that notify me when an event (e.g. tilt, acceleration exceeding a threshold, free fall, tap, etc) occurs. Sometimes they can be frustrating if they don't quite provide what I need -- lots of register based tuning usually takes care of this, but still...
Imagine a smart phone where all of the processing was done on the main CPU. Sure, that would bog it down significantly, but imagine a much faster (and power efficient) main CPU (maybe even with 4 or 5 cores). Now, your GPS/Bluetooth/cellular-modem are just analog transceivers that stream bits or analog signals. Your smartphone would just have a bunch of antennas, transceivers and sensor hardware. All of the software resides somewhere on the main CPU. Imagine having access to that software.
Wednesday, August 03, 2011
UV Index monitor prototype #1
I don't know if I mentioned it here before, but I've been working on a personal UV (Index) monitor.
Folks who have skin cancer (or those at high risk) need to make sure they limit their sun exposure.
The general approach is to just lather up with sunscreen every time you leave the house, but this is impractical (plus you have to re-apply every couple of hours). This becomes more of an annoyance when you consider spending hours riding in a car: Are the windows UV protected? How well? Do you have to lather up every time you drive?
You can get UV index forecasts on your smartphone, but these are just forecasts (for your area and for the whole day). When you are out in the sun, you'll need to know how much UV intensity is hitting you "right now".
Another solution is to carry a UV monitor.
The only ones I've seen on the market are overkill (too large and complex) or vague (how does this work and is it reliable -- where is the sensor?).
I am aiming at something so small that you'll always carry it with you, but also clear and as accurate as possible. My target form factor is a key fob.
My target UI is based on colored LEDs. There are official colors for the UV index scale and I have an LED for each level. I would like to have (at most) 2 buttons -- one for "instant read" (point at the sun and an LED will light up for 2 seconds indicating UV index level) and one for setting a countdown timer (for sunscreen re-application).
My current prototype has 1 button, 5 high-intensity LEDs (green, yellow, orange, red and blue/violet) and is a little bulkier than a key fob. Amazingly, the LEDs are quite readable in bright sunlight! If you are colorblind you can always read index based on which LED lights up (right?). The current layout ramps "upwards" depending on UV intensity.
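The index-to-LED mapping itself is just a threshold table, since the UV index exposure categories have fixed boundaries (0-2 low/green, 3-5 moderate/yellow, 6-7 high/orange, 8-10 very high/red, 11+ extreme/violet). A sketch (the LED names and led_on() are invented):

/* Map a computed UV index to the matching LED. */
enum { LED_GREEN, LED_YELLOW, LED_ORANGE, LED_RED, LED_VIOLET };

static const struct {
    unsigned char max_index;
    unsigned char led;
} bands[] = {
    {   2, LED_GREEN  },   /* low */
    {   5, LED_YELLOW },   /* moderate */
    {   7, LED_ORANGE },   /* high */
    {  10, LED_RED    },   /* very high */
    { 255, LED_VIOLET },   /* extreme */
};

void show_uv(unsigned char index)
{
    unsigned char i = 0;
    while (index > bands[i].max_index)
        i++;
    led_on(bands[i].led);  /* a timer elsewhere turns it off after ~2s */
}

The hard part isn't this; it is computing a trustworthy index from one cheap sensor in the first place.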
It takes a single coin cell battery and is based on a very low power 8051 from SiLabs. It should get 3-5 years off the battery with casual usage.
I need to do lots of tuning/calibration and I know it won't be "demo worthy" for the rest of this summer, but I am making progress. Apparently, the calculations done for UV Index forecasting aren't very practical for small single-sensor UV monitors. Somehow, the personal UV monitors make do, though. I think I'll use one of the better ones to aid in my calibration.
Maybe I'll have case design and a formal board spin ready for next summer?
Friday, July 15, 2011
Tiny computers that fit on your fingernail...
Here is a thought:
Pick up a microSD card. Place it on a fingernail. Look at how small it is. How much does it hold? 1GB? 2GB? 8GB? More? Amazing. That is a lot of storage. These things are examples of how storage keeps shrinking while maintaining incredible capacity. You could fit a whole library on a microSD, right?
But consider this: inside every microSD card lies an MCU core. (It may be an 8051. The 8051 MCU is still a popular flash memory controller that you'll find in a majority of your USB thumb drives, SD cards and even (as a naked die) microSD cards.) Each MCU contains some small amount of RAM too.
So, on your fingernail you have an 8 or 16 bit computer (typically running > 50MHz) with high speed I/O, RAM, gigabytes of persistent storage, and firmware that was probably written in C.
Mind blown.
Thursday, June 09, 2011
Android: Bluetooth Low Energy vs USB
With all the hype about adding devices/peripherals to Android via USB, I desperately want a low energy wireless means of adding devices. A number of my (yet-to-be-started) CFT projects involve collecting sensor data for correlation/display on smart phones. ANT has always looked appealing, but with next to nothing in way of smartphone support, the new Bluetooth 4.0 BLE support looks like it may capture the market.
This year promises new Android devices with BLE. On the peripheral/sensor front, we seem to have 2 major vendor choices: Nordic and TI.
I'm not ready to drop money on a kit just yet, but my "body worn" sensor projects may get a kickstart knowing that a suitable means of data display is coming soon.
Sunday, May 01, 2011
Ultrasonic goslings: Sensors and software
I'm starting to get back into low level embedded systems. I'm back to see what 8-bits can do in a 64-bit world.
Part of this reboot is to cast a fresh eye towards some of the sensor enhanced systems I've been mulling around for the past couple of years.
In particular, I am re-investigating some ultrasonic tracking stuff. In a nutshell, I want to build a flock of robots (does 3 constitute a flock?) that will follow me around. Think: Mother Goose and goslings.
Imagine that you have an ultrasonic transmitter, attached to your belt, that transmits a short "beep" every second. If your robots have 3 ultrasonic sensors each, then they can use hyperbolic positioning (Multilateration) to figure out where you are. (The time difference between the 3 received beeps gives you direction; the receive time between each transmitted beep gives you distance).
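Even before full multilateration, a two-receiver far-field approximation shows the flavor of the math: with receivers a baseline d apart, the arrival-time difference dt gives the bearing via sin(theta) = c*dt/d (c ~ 343 m/s for sound in air). A sketch in C:

/* Far-field bearing from one time-difference-of-arrival. */
#include <math.h>

double bearing_rad(double dt_s, double base_m)
{
    double s = (343.0 * dt_s) / base_m;   /* sin(theta) = c*dt/d */
    if (s > 1.0)  s = 1.0;                /* clamp measurement noise */
    if (s < -1.0) s = -1.0;
    return asin(s);                       /* 0 = broadside, +/-pi/2 = endfire */
}

The real three-sensor version intersects hyperbolas instead, but the inputs are the same: time differences.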
Now, every decent circuit I've seen for ultrasonic transducers tends to be fairly complex to build (mostly for clean amplification and rectification of the received signal). Just throwing a transducer onto a (relatively) clean MCU with a sensitive ADC won't cut it. Or can it?
We tend to want to put the cleanest, most linear signal into the ADC, but nature doesn't work that way. Nature uses a ton of error correction (software). Even without perfectly working ears or eyes, the brain adapts to form a "picture".
Given a noisy, weak, poorly rectified signal from an ultrasonic receiver, can software make sense of it?
Monday, April 11, 2011
Forth for ARM Cortex M3...
This news makes me happy :-)
I should break out my old STM eval boards and give it a try.
The last Forth (ignoring mine) that I've used was Charlie Shattuck's MyForth. Well, it looks like he has created a new MyForth for the Arduino crowd. Slides here and sources here.
The nice thing about minimalism within the microcontroller world is that your end result is a "device". You don't have a lot of extra stuff (software standards, etc) to deal with... so as long as your device interfaces correctly with the outside world, the question is: Does it do something useful/interesting? Not: Did you use CouchDB, MongoDB or SQL?
Ah, the simple life.
Also, a shout out to GreenArrays for releasing initial measurements in their G144A12 spec sheet.
Ugh. I really need to find the time (and money) to play with the dev kit.
/todd
Sunday, April 10, 2011
File under "Elegant": Factorial in Plan 9 rc (under Linux)
I've posted before that I find Plan 9's rc shell elegant. I've been using a "slightly" modified (I've made read and echo builtins) version for a few months now and have been doing extensive scripting. I hope never to go back to bash.
Here is a small script to compute factorials. Since "bc" deals with arbitrary precision, we can go much higher than 32 or 64 bits would allow.
Chew on this:
#!/usr/local/plan9/bin/rc
# fac n: compute n! by feeding one multiplication per line to a
# long-running bc co-process; $2/$3 are the read/write ends of the pipe.
fn fac {
	num=0 factorial=1 frombc=$2 tobc=$3 {
		for (num in `{seq $1}) {
			echo $factorial '*' $num >$tobc
			factorial=`{read <$frombc}
		}
		echo $factorial
	}
}
# bc splits long numbers across lines with trailing backslashes;
# rejoin them so read gets one complete number per line.
fn fixlinebreaks {
	awk -F '\\' '{printf("%s",$1)}
	$0 !~ /\\$/ {printf("\n"); fflush("");}'
}
fac $1 <>{bc | fixlinebreaks}
There are several interesting things here:
- Concurrent processing (co-processes actually).
- Messaging through unix pipes.
- Lazy computation (generator).
This factorial algorithm is iterative rather than recursive, but rather than using an incrementing counter loop, we generate all numbers using the 'seq' program and loop through that lazily generated list!
How slow do you think this script will run? Well on my Toshiba Portege r705 notebook with a Core i3, factorial of 1024 takes 2.4 seconds. Is that slow?
Earlier I said that I had enhanced rc with "echo" and "read" as builtins (normally they are external). Using the non-builtin "echo" and "read" increases the run time to 5.1 seconds.
Of course this isn't production code, but here is the take-away: "bc" gives you a bignum calculator for free. Use it.
Monday, March 14, 2011
Tackling the Simple Problems: The domain of the minimalist
The hard problems are more interesting by nature and the world is full of hard problems. This blog post isn't about them. Instead, I want to talk about simple problems.
Simple problems are still problems; they just don't have world-shaking impact (or so you would think).
To be honest: most simple problems are only simple on the surface. Underneath, complexity is always lurking.
Take, for instance, my desire to (re)build a very simple blogging system (for my own personal use). Blog software isn't all that hard to build. If you don't care about performance and scalability, then it is pretty straightforward. That is, until you get down to building one. As soon as you start thinking about security, feeds, multimedia, etc. you start to expose the underlying complexity of "working" software.
Now, as I said earlier, this is still something of a simple problem. Developing blogger software isn't rocket science. But, in some ways, that makes it harder.
When something is so simple (conceptually), it can be quite difficult to "get it right". Getting it right is about hitting that sweet spot. Blogging software needs to do its simple job correctly and intuitively. If it is hard to install, or has "hard to grok" idiosyncrasies, then it doesn't solve the "simple problem" of blogging.
Consider another "simple problem". I have around 40GB of music (mostly in MP3 format) that I want to play on my living room stereo (away from a computer). There are solutions I can buy, but none quite fit. I don't need streaming (although I would like to listen to online radio sometimes) and I don't need a "total entertainment solution". I tend to listen to whole albums, not mixes or "randomized" selections based on genre.
All I need is a single MP3 storage device, the ability to add/delete queued albums from any of my household PCs (web browser NOT a hard requirement), and a simple "remote" (pause, play, next song, previous song). What I want is a music "server" and it only has to serve one sound system. (Wi-fi streaming of music is broken in my house -- too much sporadic interference).
There are server-based (free!) software solutions out there, but they usually solve (only) 90% of my "simple problem". They then throw UPnP, webservers, GUIs and all sorts of networking into the mix. This is more than I want (after all, I am a minimalist).
Note: Before computers, my problem was solved 100% by a CD player w/ 200+ CDs and before that it was solved by vinyl LPs. Now I have a bunch of MP3s and less capability to enjoy music than when I had CDs.
Simple problems are harder than you think.
uForth Dump...and run
uForth was mentioned here several times last year. It was my attempt at a very, very portable Forth (no dynamic memory allocation, ANSI C, a bytecode generator for portable images, etc.). It has run successfully on MSP430s as well as Windows/Linux. No MSP430 code here, unfortunately: I did most of the MSP430 code as part of my day job in 2010, so it isn't mine to give away.
However, you can get a dump of the generic ANSI code here. I haven't touched it in months and it needs documentation (and some general lovin'). Unfortunately, I don't have access to MSP430s anymore and so that is left as an exercise for the reader :-(
Monday, March 07, 2011
Notes on Mail header (and MIME) parsers...
I'm trying to resurrect my old gawk-based blogging system BLOGnBOX. It (ab)uses gawk to do everything from POP3 mail retrieval (you email your blog entry...) to FTP-based posting of the blog (it is a static HTML blog).
I intend to clean it up by doing away with the gawk abuses. I am either going to make it (Plan 9) rc-based (with Plan 9 awk and some C for the networking) or perhaps Haskell. That is quite a choice, eh?
I've done a bit of Haskell over the past few months and feel confident enough to do the next generation BLOGnBOX, but the main problem is actually getting the thing going. (This is a nighttime CFT and, well, I have to get into a Haskell frame of thinking.)
The first task up is a parser for MIME-encoded email. I plan on using regular expressions (yes, I know -- use Parsec or something more Haskell-ish). Awk is somewhat of a natural for this, but Gawk has a little more "oomph". I can visualize how I would do it in Awk, but the Haskell is not coming naturally.
Well, it isn't all that difficult to get started in Haskell:
module MailParser where

import Text.Regex
import qualified Data.Map as Map

-- Map a header field name (e.g. "From") to its captured value(s).
type Header = Map.Map String [String]

header_regex = mkRegex "^(From|To|Subject)[ ]*:[ ]*(.+)"

-- Fold one line of the message into the header map; non-matching lines pass through.
-- Usage sketch: foldr parseHeader Map.empty (lines msg)
parseHeader :: String -> Header -> Header
parseHeader s h = case matchRegex header_regex s of
                    Nothing    -> h
                    Just (k:v) -> Map.insert k v h
                    Just []    -> h  -- can't happen: the regex always yields two groups
Well, that is a beginning. Of course, I should be using ByteStrings for efficiency... and, yes... I know... I know... I should be using Parsec
/todd