Wednesday, October 01, 2014

Forth and the Minimalist

Not all Forth programmers are minimalists, but if you use arrayForth, colorForth or something inspired by them (like MyForth), chances are you are one.

Being a minimalist, you seek the simplest, most concise use of resources.  You tend to avoid rambling code and the idea of calling a (3rd party) library function makes you uncomfortable.

One of the reasons I like using MyForth (and the 8051) is that it forces you to think about how to simplify the problem you are trying to solve.  This is a good exercise, but it also offers real advantages when you are working on low power (or very tiny) embedded systems.  No matter how beefy MCUs get, there is always a need for something "smaller" and lower power (e.g. a tiny, low transistor count 8-bit MCU has a better chance of running off of "air" than a fast, feature-rich 32-bit MCU).

The 8051 has rather poor math capabilities. Everything is geared toward 8 bits. If you use a C compiler, this is hidden from you.  The compiler will generate a ton of code to make sure that your 16 or 32 bit math works. This causes code bloat and will slow you down -- thereby causing more power consumption.  Programming in a minimalist Forth makes you think about whether or not you actually need the math.  Is there a cheat?  Look at old school methods and you may find one. I grew up on the 6502 (Commodore VIC20/C64, Atari, Apple, etc.).  You did all you could to avoid doing "real" math (especially if it broke the 8 bit barrier).  You had limited resources and you made the most of what you had.
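To give a flavor of the kind of "cheat" I mean, here is a small illustrative sketch in C (not production code, just an example): smoothing a noisy 8-bit sensor reading with a running average where the divide is a shift, so nothing wider than a byte is ever really needed.

#include <stdint.h>

static uint8_t avg;                 /* running smoothed value */

/* avg += (sample - avg)/8, with the divide done as a shift;
   the result always fits back into 8 bits */
void smooth(uint8_t sample)
{
    avg = (uint8_t)(avg - (avg >> 3) + (sample >> 3));
}

Hand-coded on a 6502 or 8051, that is just a couple of shifts, a subtract and an add -- no 16-bit math library required.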

But, is this just an "exercise"?  I don't think so. There are practical benefits that go beyond just old school cleverness. You (can) have more compact code that performs better. The less code you produce, the fewer chances for bugs. The less code you produce, the more reliable your product.

Gone are the days (for most of us) of penny counting the costs of components. I'd rather have a bunch of simple components (e.g. logic gates, simple MCU, peripheral processors etc) that do work for me rather than a big processor with a complex library.  Chip components tend to be "coded" at a higher level of quality assurance than pure libraries.  I trust a USB->serial chip more than some USB->serial library for my MCU. If the library fails, they say "update". If a chip fails... they risk going out of business -- who trusts production runs to faulty chips?

In the end, the minimalist is fighting the status quo.  It is a futile fight, but we can't seem to give it up. It is in our nature.

Wednesday, July 30, 2014

AFT - an elegant weapon for a more civilized age...

This is a sort of nostalgic post and, in some sense, a "toot your own horn" one as well.  I am writing this mainly for myself, to remind myself what I've liked most about programming.

Years ago, actually almost 2 decades ago -- around 1996 -- I wrote a text markup system called AFT.  AFT stood for Almost Free Text. It was inspired by Ward Cunningham's original wiki markup but went further.

I had a problem. I didn't like using WYSIWYG word processors and the world was moving towards HTML.  I liked Ward's markup. He was using it on this new "Wiki Wiki" thing. I answered an invite sent to the Patterns List and became one of the first wiki users in 1995.  (But that is a different story for a different time.)

AFT was my attempt at a writing system to produce publishable (web and print) documentation.  Since then, it has developed a (now waning) user base.  You can see how many web pages/documents use it without changing the default "watermark" with this query.

As of Ubuntu 14.04, you can get AFT by issuing an "apt-get install aft" against the standard repository.
I think it is still part of FreeBSD "world".  I believe it still runs under Windows too.

Various "modern" mark up languages (written in "modern" programming languages) have since surpassed AFT in adoption, but for me, it still is a more elegant and pleasurable experience.

Over the years (although not very recently), I've updated, fixed and generally maintained the code.  There are no known crashes (it literally takes whatever you throw at it and tries to produce good looking output -- although that may fail) and it doesn't require me to look at the HTML (or PDF) manual (written in AFT!) unless I want to do something complex.

AFT is implemented in Perl. Originally it was written in awk, but I taught myself Perl so as to re-implement it in the late 1990s.

It is, for me, interesting Perl code.  I have modernized it over the years, but it still doesn't depend on CPAN (a good thing if you just want to have non-programmers "download it" and run without dependencies -- yes I know there are packaging solutions to that problem today...).

AFT has "back end" support for HTML, LaTeX, lout and some rudimentary RTF.  These days I think mostly HTML and LaTeX is used.

You can customize the HTML or LaTeX to support different styles by modifying or creating a configuration file.  This configuration file is "compiled" into a Perl module and becomes part of the run time script.

AFT has been a pleasure to hack on now and then. It still runs flawlessly on new Perl releases and has proven not too fragile to add experimental features to. I've accepted some small code fixes and fragments over the years, but generally it is mostly my code.

As I wrote (and rewrote) AFT, I thought frequently of Don Knuth's coding approach (as excellently documented in my all time favorite book on programming: Literate Programming).  I certainly can't match the master, but the slow, thoughtful development he enthuses about was inspiring.

Over the years I've gotten a few "thank you" notes for AFT (but nothing in the past few years) and that makes it my (to date) proudest contribution to Free Software.

Maybe I'll dust off the code and introduce some more experimental features...



Sunday, July 27, 2014

Concurrency and multi-core MCUs (GA144) in my house monitor

My house monitoring system monitors lots of sensors. This suggests a multi-core approach, doesn't it?

The problem with (the current concept of) multi-cores is that they are typically ruled by a monolithic operating system. Despite what goes on in each core, there is one single point of failure: the operating system. Plus, without core affinity, our code may be moved around.  On an 8 core Intel processor, you are NOT guaranteed to be running a task per core (likely, for execution efficiency, your task is load balanced among the cores).  Each core is beefy too. Dedicating a whole core to a single sensor sounds very wasteful.

This, I believe, is flawed thinking in our current concurrency model (at least as far as embedded systems go).

I want multiple "nodes" for computation. I want each node to be  isolated and self reliant.  (I'm talking from an embedded perspective here -- I understand the impracticality of doing this on general purpose computers).

If I have a dozen sensors, I want to connect them directly to a dozen nodes that independently manage them.  This isn't just about data collection. The nodes should be able to perform some high level functions.  I essentially want one monitoring app per node.

For example: I should be able to instruct a PIR motion-sensor node to watch for a particular motion pattern before it notifies another node to disperse an alert. There may be some averaging or more sophisticated logic to detect the interesting pattern.

Normally, you would have a bunch of physically separate sensor nodes (MCU + RF),  but RF is not very reliable. Plus, to change the behavior of the sensor nodes you would have to collect and program each MCU.

So, consider for this "use case" that the sensors are either wired or that the sensors are RF modules with very little intelligence built in (i.e. you never touch the RF sensor's firmware): RF is just a "wire".  Now we can focus on the nodes.

The GreenArrays GA144 and Parallax Propeller are the first widely-available MCUs (that I know of) to encourage this "one app per node" approach.  But the Propeller doesn't have enough cores (8) and the GA144 (with 144 cores) doesn't have enough I/O (for the sake of this discussion, since the GA144 has so many cores, I am willing to consider a node to be a "group of cores").

Now, let's consider a concession...
With the GA144, I could fall back to the RF approach.  I can emulate more I/O by feeding the nodes from edge nodes that actually collect the data (via RF).  I can support dozens of sensors that way.

But, what does that buy me over a beefy single core Cortex-M processing dozens of sensors?

With the Cortex-M, I am going to have to deal with interrupts and either state machines or coroutines (polling could replace the interrupts, but the need for a state machine or coroutines remains the same).  This is essentially "tasking".
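To make that concrete, here is a rough sketch in C (read_sensor() and notify() are made-up placeholders, and the debounce logic is only illustrative) of what per-sensor "tasking" looks like when every sensor becomes a little state machine that the main loop must keep stepping:

#include <stdint.h>

#define NUM_SENSORS    12          /* example: a dozen sensors */
#define DEBOUNCE_TICKS 5           /* illustrative debounce period */

extern uint8_t read_sensor(int i);     /* placeholder: raw sensor input */
extern void    notify(int i);          /* placeholder: raise an alert */

enum state { IDLE, DEBOUNCE, ALERTING };

static struct { enum state st; unsigned timer; } task[NUM_SENSORS];

void poll_all(void)                    /* stepped forever from main() or a tick */
{
    for (int i = 0; i < NUM_SENSORS; i++) {
        switch (task[i].st) {
        case IDLE:
            if (read_sensor(i)) { task[i].st = DEBOUNCE; task[i].timer = DEBOUNCE_TICKS; }
            break;
        case DEBOUNCE:
            if (--task[i].timer == 0)
                task[i].st = read_sensor(i) ? ALERTING : IDLE;
            break;
        case ALERTING:
            notify(i);
            task[i].st = IDLE;
            break;
        }
    }
}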

This can become heinous. So,  I start to think about using an OS (for task management).  Now I've introduced more software (and more problems).  But can I run dozens of "threads" on the Cortex-M? What's my context switching overhead?  Do I have a programming language that lets me do green threads?  (Do I use an RTOS instead?)

All of this begins to smell of  anti-concurrency (or at least one step back from our march towards seamless concurrency oriented programming).

So, let's say I go back to the GA144. The sensor monitoring tasks are pretty lightweight and independent. When I code them I don't need to think about interrupts or state machines. Each monitor sits in a loop, waiting for sensor input and  a "request status" message from any other node.
In C pseudo-code:

while (1) { 
  switch (wait_for_msg()) {            /* block until something arrives */
    case SENSOR:                       /* new reading from the attached sensor */
       if (compute_status(get_sensor_data()) == ALERT_NOW)
          send_status(alert_monitor);  /* push an alert to the alerting node */
       break;
    case REQUEST:                      /* another node asked for our status */
       send_status(requester);
       break;
  }
}

This loop is all there is.  The "compute_status" may talk to other sensor nodes or do averaging, etc.
What about timer events? What if the sensor needs a concept of time or time intervals?  That can be done outside of the node by having a periodic REQUEST trigger.
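In the same pseudo-code spirit, a dedicated "clock" node could provide that trigger (send_msg() and sleep_ms() are made-up names here, not real GA144 calls):

#define NUM_SENSOR_NODES 12    /* example count */
#define REQUEST          1     /* same message tag as in the loop above */

extern void sleep_ms(unsigned ms);           /* hypothetical delay */
extern void send_msg(int node, int msg);     /* hypothetical node-to-node message */

void clock_node(void)
{
    while (1) {
        sleep_ms(1000);                      /* one tick per second */
        for (int n = 0; n < NUM_SENSOR_NODES; n++)
            send_msg(n, REQUEST);            /* each monitor answers via send_status() */
    }
}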

(This, by the way, is very similar to what an Erlang app would strive for; see my previous post, GA144 as a low level, low energy Erlang.)

Now, the above code would need to be in Forth to work on the GA144 (ideally arrayForth or PolyForth), but you get the idea (hopefully ;-)


Tuesday, July 22, 2014

A Reboot (of sorts): The IoT has got me down. I think we've lost the plot.

The IoT (Internet of Things) has got me down.  I think we've lost the plot.

In most science fiction I've read (and seen), technology is ubiquitous and blends into the background.  The author of a science fiction book may go into excruciating detail explaining the technology, but that is par for the course.

In science fiction films the technology tends to be taken for granted.  Outside of plot devices, all the cool stuff is "just a part of life".

Re-watch Blade Runner, Minority Report, etc. Do the characters obsess (via smartphone or other personal device) over the temperature of their home while they are away?  Do they gleefully purchase Internet connected cameras and watch what their pets are up to?

It is 2014 and we buy IoT gadgets that demand our attention and time.  Nest and Dropcam: I am looking at you.

Beyond "Where is my Jet Pack?", I want "Set and Forget" technology.  The old antiquated "Security Monitoring" services (e.g. ADT) got it partially right. You lived with it. You didn't focus on it and you weren't visiting web pages to obsess over your house's security state.  But that model is dying (or should be). It is expensive, proprietary and requires a human in the loop ($$$).

What do we replace it with?

I think that the "Internet" in the IoT is secondary.  First, I want a NoT (Network of things) that is focused on making my house sensors work together.  Sure, if I have a flood, fire or a break in, I want to be notified wherever I am at (both in the house and out).  When I am away from my home  is where the Internet part of IoT comes into play.

My current Panoptes prototype (based on X10) monitors my house for motion and door events. My wife or I review events (via our smartphone) in the morning when we wake up. It gives me valuable information, such as "when did my teenage son get to bed?" and "was mother-in-law having a sleepless night?" and "is mother-in-law up right now?".  Reviewing this info doesn't require the Internet but does require a local network connection.

I also register for "door events" before I go to bed. This immediately alerts me (via smartphone) if  "mother-in-law is confused and has wandered outside".

When I leave the house, I can monitor (via XMPP using my smartphone) activity in the house. When I know everyone is out, I can register (also via XMPP)  for door/motion events. I can tell if someone is entering my house (our neighborhood has had a recent break in).

This is an important Internet aspect of Panoptes.  I rarely use it though.  My main use of Panoptes turns out to be when I am at home.

So, I want IoT stuff, but I want it to be "Set and Forget".  This is the primary focus in my series of Monitoring projects.

Monday, June 23, 2014

Design by Teardown: What you will find inside of my Panoptes home monitor basestation

First... It is about time I named this monitoring system.  I'm code naming it "Panoptes".

I'm struggling a bit with the power consumption on my wireless sensors (previously mentioned here).

I've chosen a C8051F912 as the MCU (the extra 128 bytes are needed for my OTA encryption scheme), but I can't seem to get the sleep mode current below 20uA. (That doesn't sound like a lot of power consumption, but it adds up when considering that I want the batteries to last years.)

So, I am taking a break from low power design to focus a bit on my base station. (For those coming into this blog entry cold, I am designing an Internet-ready home monitoring system with a focus on keeping track of independent elderly people, specifically those who are candidates for nursing homes but aren't quite ready for that transition yet.)

I've decided to approach the base station design from a post-implementation perspective: What would someone find if they did a teardown on my device?

Why come from this perspective?  I would hope that a savvy engineer would find the implementation sound and even respectable. So, why not base my design decisions on that point of view?

Now, I am not just talking about a hardware teardown, but a software one too. But, I won't get too wrapped up in how my code looks or how it is structured.  I am more interested in interoperability: how the software interfaces with the outside world -- in particular, the end user and the Internet.

Let me preface this with one of my primary design goals: Set and Forget. 

This is not a system to be played with or to constantly probe from a web browser.  The typical customer is a caretaker or adult child of an elderly person.  This is about applying modern IoT technology to an old problem.  But, this is not a Nest. This is about the kind of IoT (Internet of Things) that operates discreetly in the background of your life -- you just want to know when interesting things happen, otherwise it isn't on your daily radar.

I have said before that even the base station can host sensors, so for this particular teardown, we will look at a single use: Someone buys the basestation plus a water flood sensor to monitor their laundry room.  This example isn't solely "elderly" oriented but does represent the case where someone would want a true "set and forget" sensor.  (I won't cover "wireless sensor nodes" here, since while necessary, they are bound to a lot more hassle that I'll address later -- things like RF interference/jamming, etc.)

I am trying to bridge the world of industrial strength monitoring with the IoT.  I expect the sensors to be "integrated" with the house. You will want to install them properly (mount them) and permanently. These are devices that should last for years.  The mantra is that they "must work".

The water flood sensor is a good example of a "must work" sensor.

So, this is a long one. Feel free to jump ship here, otherwise, grab a cup of coffee, sit back and ... here we go:

Contents

The water flood sensor is a pair of gold plated probes on a 2x4" plastic plate.  The plate can either rest on the floor, be glued or be attached to a baseboard with screws. Two thin wires connect it to the sensor node (in this case the base station).  The base station can be mounted on the wall. It is about the size of a deck of cards. On the side are 6 screw terminals (for 4 sensors plus +DC and ground).  The water flood sensor attaches to one of the sensor terminals and ground.  The user is expected to use the full length of wire or to trim and strip it to a desired length.  You can connect up to 4 water flood sensors if you want to place them strategically in a room (e.g. under the sink/tub, next to the water heater, etc.).

(First critical question: Why screw terminals instead of modular connectors?  Answer: This allows the user flexibility in where they mount the base station. It can be several feet away from the sensor. A modular jack would fix the length of the wire.  I am assuming either a professional installer or someone comfortable enough to run wires.)

The base station hosts 2 AA batteries for power failure backup (which should run a couple of weeks before needing to be replaced).  Lithium or alkaline are recommended for maximum shelf life.

The base station is normally plugged in to an AC outlet (via a standard USB 5VDC power supply). Since the station uses Wi-Fi, it wouldn't run very long on batteries.

Configuration

The USB port is also used for "configuring" the base station. Once plugged in, it shows up as a disk drive.

Then you go to the product website and enter data into a form.
You can associate each screw terminal with a type of sensor (in this case a water flood sensor). You must also enter the SSID and password for your wi-fi router.  Additionally, for notification, you must provide an email address.  None of this data is retained by the website and it is all done over https.

Once entered, this data is downloaded as a file. You must save the file (or drag it) to the attached base station.  The LED will blink rapidly and if all goes well it will remain lit.  A slow blink indicates an error.
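The downloaded file itself could be as simple as a handful of key=value lines (this layout is purely illustrative -- I haven't settled on an actual format):

# illustrative only: field names and format are not final
ssid=MyHomeNetwork
passphrase=correct-horse-battery-staple
notify=caretaker@example.com
terminal1=flood
terminal2=none
terminal3=none
terminal4=none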

Once installed and turned on, the base station contacts the wi-fi router and you are sent a "startup successful" email.

Operation

The base station will send you a once-per-week "heartbeat" email to indicate that all is well. If you want to check "on demand", you can send it an email and it will respond with status.

If water is detected, you are sent email.
That's it. Set and forget.

Hardware Teardown

There are 4 Phillips head screws holding the unit together. The case is UL94-5VA flame rated.  Two flanged holes support mounting the enclosure to the wall.  When mounted, the battery compartment sits flush against the wall. This is a light form of security to prevent someone from taking the batteries out.
The screw terminals are side mounted.  There is a small recessed reset button on the bottom of the enclosure.

Inside there is a small circuit board hosting the three main components: a TI CC3000 Wi-Fi module, a Nordic nRF24L01P low power RF transceiver (for wireless sensor nodes) and a C8051F381 USB MCU. The Wi-Fi module is tethered to an antenna that traverses the inside edge of the enclosure.  The screw terminals are connected via ESD protection diodes to the MCU.
(But, why an 8-bit MCU?  Why not an ARM Cortex? The C8051F381 is essentially an SoC. There are very few outside components needed. Panoptes uses the internal precision oscillator, so there isn't even an external crystal.  There is a built in 5VDC-in regulator and USB support. And, for what the system does, an 8-bit is adequate. Plus, the fewer the parts, the simpler the design.)

There is a small piezo buzzer mounted over a small hole piercing the front of the enclosure. A small red LED next to it pulses every few seconds. This is to indicate that the unit is on and connected. If it cannot connect to the wi-fi router or cannot reach the Internet, the LED blinks rapidly.

Measuring power consumption of the unit shows that it consumes around 105mA when idle (not sending a notification) and peaks briefly at about 250mA when sending a notification. Most of this current is due to the Wi-Fi module.  The 105mA suggests that the base station maintains a connection to the Internet at all times.

Pouring water on the floor (thereby triggering the sensor) causes the unit to beep loudly and send a notification email.  After 10 minutes the beeping stops and the unit waits to be reset. It blinks red rapidly during this time.  You can silence the alarm by pressing (and holding for 3 seconds) the reset button on the bottom of the enclosure.

If the AC power is pulled from the base station (e.g. a power outage), the unit falls back to battery power, sends an alert email, powers down wi-fi and beeps for 5 seconds.  The base station is still fully functional, but is expected to only last a few days without AC power.
The current measures steady at around 500uA at this point.  Any water sensing event will cause both the beeping alarm and an attempt to send an email notice (in case the wi-fi router itself is battery backed).  Every 2 minutes the station beeps to remind anyone nearby that the unit is battery powered.
Pressing and holding reset at this point will cease the beeping, but the alert capability remains.

Internet Connectivity

The base station is connected 24x7 to a server running in the "cloud". This connection is via TLS/SSL and it is the cloud host that sends notification emails.  Why not send email directly? The cloud server ensures mail delivery (caching and doing multiple delivery attempts as needed). Plus, for sensors that need correlation outside of simple alerts, the cloud server does all of the logic and interfacing. 

Email is used as the primary notification (and status query) mechanism due to its ubiquity. Email is everywhere and doesn't require any special software to be loaded on your PC or smartphone.

No software updates are pushed to the device. Nor can the device be remotely controlled. It is a monitoring sensor. This IoT base station is one way.

In conclusion

Panoptes is designed to be a part of your house. It isn't sexy, but it is indeed a player in the IoT. Outside of 802.11b/g and TLS/SSL, it is bound to no particular Internet standard that may go away in the near future.  You can use it with low power RF based sensors or simply standalone with up to 4 wired sensors.

Despite the low BOM, Panoptes is a high quality product designed to last.  At $100 per base station, $10 - $20 per wireless sensor,  and $2 per month cloud based subscription, it is a worthy investment considering the repair costs of house flooding.

The only thing missing seems to be Zigbee support. But, until low cost wireless sensors are offered in the Zigbee space, the nRF24L01P is adequate.

Thanks for reading!

EDIT: Looking seriously into the Kinetis K20 again as the base station MCU. I could use a little extra help with the Internet protocol side of things and the 8-bitter suffers there.

EDIT2: The TI CC3000 Wi-Fi module has an OTA configuration scheme called SmartLink. This rids me of the need for USB support as I can  configure the AP and password over the air.  I still need to figure out how to send email address and other config stuff, but I should be able to do that over the air too.


Sunday, June 22, 2014

IoT: Real "servers" (PCs) are in your future (as base stations)

While Nest and others are using embedded ARMs as base stations for your home "Internet of Things" (IoT), I see a real server in the future. There is only so much you can do with these embedded (usually ARM based) servers when you don't have a disk or memory management.  In particular, with the greater demand for these base stations to talk "native" Internet/Cloud (e.g. heavier protocols like AMQP, XMPP, etc), it starts to tax an unadorned ARM SoC.

While a "PC" sounds like overkill, I am expecting to see more and more Intel Atom and ARM based, fully solid state, base stations with all the usual bells and whistles we are used to getting with a PC.
What bells and whistles?  Memory protection/management, robust storage, system busses, rich peripheral support, etc.

Let's call them SBCs (Single Board Computers), which is what they really are.  Until now, SBCs were firmly in the domain of the industrial embedded market.  You don't mess around with unreliable consumer tech like SD cards and low end Chinese market chips (e.g. Allwinner, etc) when you are building a security base station for an office building or other 24x7 "install and forget" monitor and control systems.

I've played with the wonderful Olimex ARM boards (like the OLinuXino LIME), but they are "new". There are hardware glitches, limited driver support (I can't just buy a wi-fi  board and expect it to work) and I don't feel that the Linux distribution is fully baked yet. Plus, I have to cross compile (from my Intel based laptop) and I run into all the "this isn't ported yet" problems that come with cross compilation.

With the coming of the Minnow Board MAX, Intel based SBCs are getting cheap enough (and low power enough -- No fan!) to become serious alternatives to the crop of low end ARMs.

What is wrong with the current crop of Cortex A based embedded systems?  The biggest problem is reliability (or at least the perception of it) and OS support.  Sure, there are Linux based distributions, but are they as reliable and mature as their Intel based cousins?  I'm talking about real embedded distributions. I don't need or want X windows with a bunch of media apps.  But, are Intel SBC based Linux distributions any better?  Maybe. But that isn't what I am recommending.

Ubuntu/Debian/Fedora/etc server editions are (perhaps) ideal here.  They, for the most part, are already rock solid (when you have thousands of servers running 24x7 in a data center, you might as well say the OS is "embedded" grade since you can't practically login and deal with OS "issues").

I can see running Ubuntu 14.04 server (stripped down a bit) on a Minnow Board.

Now, the target market for the Minnow Board is for those who want to play with SPI, GPIO, I2C, etc -- they make a point of saying it is an "open hardware embedded platform" and not a PC. But, it seems to have specs to the contrary:  64 bit Atom, 1GB RAM, USB,  SATA2 support, ethernet, etc.

That sounds like a PC to me.  And, if I can run Ubuntu (or Debian) Server on it, it fits my IoT base station needs.   These days, most peripherals I interface to (including my own homebrew ones) can be accessed via UART (via a USB adapter) or native USB.  Do I really need to put my Bluetooth or GPS receiver on SPI these days?  (IMHO, Linux is pretty clumsy when accessing bit banged devices that don't already have kernel support.)

And, at $100, it certainly competes with the current crop of ARM boards.
Then again, if you can accept a Fan in your base station, it is hard to beat a repurposed ASUS Chromebox ($149) which comes with 2GB RAM and a 16GB SSD.


Saturday, June 07, 2014

Building the first (of many) wireless sensor prototype...

I've ordered a bunch of parts, so now I am committed to start building prototypes...

I've been doing X10 (RF sensors) and Linux on an SFF/SBC Intel-based computer (base station) as the prototype for my elderly monitoring system.  Stuff has been running for almost a year now but I am not satisfied with two aspects of this system:


  1. X10. Ugh. Ultimately a dead end.
  2. Intel-based computer.  Too big, too much. Overkill.
So, once again I am looking into a completely home brew solution.

First up: Wireless sensors.

I am throwing together prototypes centered around the ridiculously cheap NRF24L01+ (go ahead, google it and look at the ebay bulk prices -- they are between $1-2 each in lots of 10).  I am pairing these with the ridiculously low power ($1 per unit) C8051F986 (Silabs 8051 w/ 4K flash & 512 bytes RAM).  All these sensor nodes have to do is read some switches (e.g. motion sensors, doors, etc) and transmit a byte or two to the base station. I am coding it using MyForth (which is still my favorite Forth variant).
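In rough C terms (the real code is MyForth, and read_switches(), radio_send() and sleep_until_change() are just placeholder names for the port read, the nRF24L01+ transmit and the low-power wait), a node's entire job amounts to something like this:

#include <stdint.h>

#define NODE_ID 0x01                         /* example node address */

extern uint8_t read_switches(void);          /* placeholder: door/motion contacts on port pins */
extern void    radio_send(const uint8_t *buf, uint8_t len);  /* placeholder: nRF24L01+ transmit */
extern void    sleep_until_change(void);     /* placeholder: low-power wait for a pin change */

void sensor_node(void)
{
    uint8_t last = read_switches();
    for (;;) {
        sleep_until_change();                /* MCU sleeps between events */
        uint8_t now = read_switches();
        if (now != last) {
            uint8_t msg[2] = { NODE_ID, now };   /* a byte or two, as promised */
            radio_send(msg, sizeof msg);
            last = now;
        }
    }
}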

The BOM for a single wireless sensor node (sans sensors) is about $8 (including generic enclosure).  Add a PIR motion sensor for $17 (low current is expensive!), a magnetic door switch/sensor ($5) and maybe a water level detector (oh, and temperature comes for free with the C8051F986!) and you've got a wireless multi-sensor node for $30.  That's a bargain. I am currently using (screw) terminal blocks so you can hook up short runs of sensors (e.g. monitor the front door AND front hallway from one sensor node).

Next up: Base station

The base station will come in 3 variations:
  1. Wi-Fi
  2. Ethernet
  3. GSM/SMS
I am tackling the Wi-Fi variant first.  I am using a TI CC3000 eval board ($35 from Digikey).

The NRF24L01+ boards in my possession use a trace antenna, so I am not sure if I'll get the range I need.  For the base station, I ordered a slightly costlier variant that supports an SMA connector.

I am still waffling on the brains for the base station.  A Cortex M4 sounds like a no-brainer. In particular, I am fond of (and familiar with) the Kinetis K20 series (via the $20 Freedom board).

But, I am NOT happy with the M4 development eco-system.  You either drop a lot of cash (>$1000) or use not-quite-baked free tools.  Yes, GCC has wonderful support for the Cortex processors, but getting down to the vendor specifics requires a lot of work (unless you opt for the IDEs, which not only do all the work for you but manage to "hide" most of the hardware from you... I don't want this).

Kinetis has a free GCC/Eclipse based IDE.  It takes up 1GB on disk, runs slow and isn't fully cooked (it is beta until later this summer). 

And, oh, don't get me started on the debuggers (e.g. OpenSDA, EzPort, OpenBDM, etc). Wow. The chips are amazingly cheap, but the support around the chip is going to cost you (if you don't want to be spoon fed:  mbed, Kinetis IDE, etc -- I am looking at you).

I've been using MPE Forth at my day job when I do Cortex M4 work.  It has worked nicely with the K20 Freedom board.  But, I can't afford MPE Forth right now for my CFT projects.

So, waffle, waffle, waffle.  Last night I threw together a quick base station prototype board (because I need *something* to test the sensor's NRF24L01+ against).  The brains for the prototype is a C8051F930 I had in my junk box.  It has 64KB flash and 4KB RAM. This is quite beefy for an 8051. It also has ridiculously low power needs.

Honestly, it has all the horse power and space I need to do the base station task. Plus I get code sharing with the sensor nodes.  But, an 8051 as the brains?  Shouldn't I go with something more capable?

Well, here is an interesting observation:  My prototypes are already rich with 8051s.  The NRF24L01+ has an 8051 as its core. The TI CC3000 (Wi-Fi module) does too.  Do I need more horse power than a modern 8051 (the Silabs 8051-based CIP-51 core executes 70% of instructions in 1 or 2 clock cycles) just to control these two modules and do a little bit of logic?

Friday, May 02, 2014

Industrial Product: Forth + Bare Metal + Cortex M vs C++ Linux + Cortex A

File this in the category of "spending way too much time thinking rather than doing"...

This is a long post about building an "Industrial Strength" product.

I sit poised between rebooting my "Home Alone" elderly monitor using a micro-controller or using a microcomputer based solution.

It isn't a very sophisticated setup: a few PIR motion sensors, a water detector, a magnetic switch for door open/close detection and a means of notifying me when something interesting happens (e.g. mother-in-law wanders out of the house when we are not at home, is up in the middle of the night, left the water running, etc).  The notification method is still "up in the air": Do I use a GSM modem to send SMS to my smartphone, or do I maintain an Internet connection and send it email?

I have previously implemented a prototype in LuaJIT and ran it on a small form factor PC (using off the shelf X10 RF sensors).  It sent the data to a cloud server and notified me of events via email. You can read about it in my other blog: http://eldermonitor.blogspot.com/.  This prototype is too heavy (small form factor PC + cloud services).

So, now, for the "make it lighter" reboot, I've been looking at the very interesting A10-OLinuXino-LIME (something of a more industrial quality Raspberry-Pi).  Industrial is the enticing bit. I want this thing to work. I don't want to design my own board (yet). I would like the prototype to work and work "for a long time".  This isn't to say that the Pi wouldn't, but I've had very good experiences with Olimex boards when doing embedded stuff at my day job.

But, here is the thing: Software.

Debian is stable (and used quite a bit in the embedded and server based arena), but is this LinuXino Debian build solid?  I don't know.  I do know that this stuff is still mainly "enthusiast" supported.  For this project, I am not an "enthusiast". It must work.

(An aside: If it "must work", how can I rely on X10 RF stuff?  I've run the sensors for over a year now in my house and they are still going strong.  I haven't had to change batteries either. They aren't sophisticated, but they seem to work for the long haul -- at least for now.)

So, here I am, writing modern C++11 code on my 64 bit i5 dual core laptop and planning to recompile (port?) it to the 32-bit ARM (Cortex A8) ... and thinking... will this thing work reliably?

With the C++ I am thinking about abstractions and algorithms.  Am I making something inherently simple more complex?

Do I really want a full blown Linux here? Will it run for a year without fail (or crash)?

So, I sit here at my workbench and I am comparing the A10-OLinuXino-LIME board (argh, what a horrible name) and the Freescale FRDM-K20D50M (Cortex M4) board and wonder if I am not going light enough.  Getting the USB based X10 CM19a receiver to work on the Cortex M4 is not trivial.  (I may punt and go for hardwired sensors for the time being). And, C++ on the Cortex M4 means either fighting g++  (ugh, the linker config) or paying >$1000 for a serious compiler.

I've got an old (still functional) MPE Forth Stamp Compiler working with the FRDM board.  It isn't free but it is solid.  Solid is what matters here.

I have visions of a simple device that once configured (and installed) hums along doing its job for months (...years!) without concern for whether it gets stuck in a reboot (e.g. Linux runs out of space due to a logging issue, SD card corruption, etc) or whether my C++ has some subtle memory issue (e.g. modern C++11 looks down upon "new" and "delete" but can still run out of space by auto allocating objects on the stack).

Forth is, well, Forth. On bare metal, I can completely *grok* my development environment.  Porting MPE Forth to the FRDM board was a pain, but now I *understand* the FRDM board.

What am I trading here?  A modern C++/Linux design vs something that I know will work (and how it works).

I'm an old Unix hand (been doing it since the mid-1980s), but I don't know if I am comfortable with a home monitor running a community supported port of Debian. Too many unknowns?


Sunday, April 20, 2014

Personal UV Sensor reboot

Back in 2011, one of my CFT projects was to develop a personal UV sensor for those who are at high risk for skin cancer.  Due to the limited availability of tuned UV sensors (i.e. a reliable source for UV Index rating values), I had to abandon the project. I produced one prototype, as outlined here: http://toddbot.blogspot.com/2011/08/uv-index-monitor-prototype-1.html but had to put further prototypes on hold due to sensor procurement issues.

Well, apparently there is a rumor that the forthcoming Apple iWatch will include a UV sensor ( http://www.macrumors.com/2014/04/08/iwatch-uv-light-exposure-sensor/).  This is great if you have an iPhone, lots of money and want a new watch, but this isn't my target.

But the new UV chip they are using fits my budget: http://www.silabs.com/Support%20Documents/TechnicalDocs/Si1132.pdf.

I want something small (and cheap) enough that you could clip it to a hat, a UV windbreaker, shirt or blouse. Oh, and it should be water resistant (wear it on the beach or by the side of the pool) or even water proof (go swimming with it on).  It should also allow you to set a timer (in hour increments) to remind you (via beeping) to apply more suntan lotion.

The only UI would be a capacitive touch sensor. Press to see (or hear through beeps) the current UV index. Press to set timer.  LEDs (matching UV Index official colors) and/or small buzzer would be the feedback mechanism.  It should cost under $20 and the battery should last a few years (at least 5) under moderate UI usage.
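To show how little firmware the UI really needs, here is a sketch in C (uv_index(), led_show() and beep() are placeholder names; the thresholds follow the standard UV Index color bands):

#include <stdint.h>

enum band { GREEN, YELLOW, ORANGE, RED, VIOLET };   /* official UV Index colors */

extern uint8_t uv_index(void);           /* placeholder: reading derived from the Si1132 */
extern void    led_show(enum band b);    /* placeholder: light the matching LED */
extern void    beep(uint8_t n);          /* placeholder: n short beeps */

static enum band classify(uint8_t uv)
{
    if (uv <= 2)  return GREEN;          /* low */
    if (uv <= 5)  return YELLOW;         /* moderate */
    if (uv <= 7)  return ORANGE;         /* high */
    if (uv <= 10) return RED;            /* very high */
    return VIOLET;                       /* extreme */
}

void on_touch(void)                      /* single press of the capacitive sensor */
{
    uint8_t uv = uv_index();
    led_show(classify(uv));
    beep(uv);                            /* audible fallback: one beep per index point */
}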

Why do this?  Well, why is every (new to market) useful sensor device required to work with RF and/or interface with your phone?  Why can't tech just "be there" when you need it, rather than be "gadgets" that work with other "gadgets"?

Heck, give me a 10 year battery life and I'd say you just sew the thing into clothing.

Okay, I've said way too much.  Let's just say that I am working on it.... stay tuned.


Tuesday, April 01, 2014

Teaching Forth Programming to kids... is really dangerous...

Teaching Forth Programming to kids is irresponsible and may hinder their progression into the professional programming industry.

So, let's do it.
One rather curious thing I've noticed about aesthetic satisfaction is that our pleasure is significantly enhanced when we accomplish something with limited tools.   - Donald Knuth, Computer Programming as an Art 

Forth doesn't have a lot of modern facilities, but that forces you to figure them out yourself.  Better yet, Forth encourages that you solve the problem at hand (rather than build elaborate frameworks).

I've been called out, in this blog somewhere, for promoting archaic principles that don't apply to modern development.  But, I don't actually want to force people to learn this stuff. Find your tool and if it helps you do amazing things, stick with it.

I've been programming in Forth since 1984.  I've learned a dozen languages since.  Forth is still the one that focuses my attention on problem solving better than any other.

Why am I writing this now?  I'm a bit late, but I just discovered this: Gforth for Google Chrome.

It's a toy right now (apparently no persistent file I/O), but I want it to be real.  I want to fire this up in a classroom full of kids and get them hacking.  I want them to build their own abstractions. I want them to see what middleware really is (a bunch of layered restrictions with the goal of making things more structured and easy, while in reality making you conform to things they want to keep hidden -- okay a rant for another day).

Sure, underneath Gforth for Google Chrome there are layers upon layers.  But there is a lesson there too: it's all about simulation.  It's Turing machines all the way down.





Tuesday, March 18, 2014

Statically Typed Languages like C++ and Haskell are good for lazy programmers (like me)

I've seen my errors in production code. I'm doing a forensic analysis of some "embedded" software I shipped to a customer.  By embedded I mean there is no UI and it is supposed to run unsupervised 24x7.

Server (Cloud) software often meets these criteria, but sometimes we cheat and log in (to see how things are going) or restart the OS when things aren't quite right.

But, that's not quite embedded. By embedded I mean "It has to run without me or the user intervening." and "There is no such thing as rebooting to fix a problem."

Now, I love developing code in Lua, so this isn't a Lua criticism, but when I review my daemon logs and see that the process died because I was trying to add a number to a "nil" value, I cringe.

On this same system I have a Haskell process also running.  The only time I saw it die was when I didn't handle being fed bad data from a corrupt database.  Not exactly the same class of bug...

The Lua code problem could have been caught with unit tests or code review (maybe), but I am sort of a lazy programmer.  I can fall back on lazy habits such as "this is so obvious it can't be wrong".

Haskell doesn't let me do this.  It won't let me go on tossing untyped variables here and there. It won't let me assume that I never pass a string as a number (or vice versa).  It forces me to be precise.

Compiling modern C++ (C++11) with warnings on (in clang++ or g++) is similar.

I hate to say this, but after continually abandoning C++ (in cycles: 1992, 1997, 2002, etc), I am back again and I like what I see in C++11.

I'm writing deliverable code at work using C++11 and I just tried my hand at using it, *instead* of a scripting language, to do some configuration file handling.  C++11 for scripting?  Yes. And it seems to work.  Not a pointer in sight. (Actually that would be the only sane way to do this in C++: rely on RAII.)

All that being said, and especially to any potential employer who may be reading this: I am not really a lazy programmer.





Monday, January 20, 2014

The IDE called Forth, or... Forth, I wish I knew how to quit you.

I've been playing with OpenFirmware. Yep, booting it off of a thumb drive (via Grub2). It takes less than a second to boot on a "fast" laptop up to the Forth "ok" prompt.  At that point, I can start playing around with low level hardware.  Open Firmware went so far as to provide a "GNU readline" like capability where I can use Emacs-like command line editing and completion of words.

But, wait, there is no command manual!  What does this word do?  Enter "see" and the word you are interested in and view disassembled source code.

People (still) underestimate the innovations built into a "classic" Forth.   Gforth still has some classic capability built in.

Here is what I had with Forth during the 1980s (on Commodore 64 and then Atari ST):
  1. Full screen block editor.
  2. "See" or equivalent for looking at source.
  3. The ability to ask the block editor to locate the "real" source and let me edit it (i.e. Tags).
  4. The ability to play around with graphics and other hardware facilities.
  5. Very fast start up (or restart for when I crash the machine).
This was the basic Forth IDE.

I'm a long time Emacs user (25+ years) and I am still not at the same IDE productivity level I was at with Forth back in the day.

Smalltalk (in particular Squeak) has been the only other thing that has come as close (for me).

I know that machines are bigger and software is a lot more complex these days, but why can't I have that same feeling of "being one" with the machine?

Here is what I want:  I want to load up an interesting library (e.g. libpcap, OpenCV, etc) into a Forth (e.g. Gforth) and explore.  I want to play.  Lua, Perl and Python can get me half way there (i.e. bindings), but I still have to grapple with a REPL that doesn't seamlessly integrate with the editor.  

Yes, yes I know you can bend Vim or Emacs to do these things, but the resulting IDE isn't as natural (IMHO). You are still piecing together a generic editor, a command line (linux shell) and a programming language. (For example, how do you list a directory in the chosen language? Oh, you use the shell or editor? Are the results first class elements for the language?). 

Some would say that Visual Studio is a great example of a seamless IDE, but it is still working with a language that isn't naturally "interactive".

This is why I still hang onto Forth. It isn't about Forth building better apps. I don't care what an app is built in.  It isn't about the destination, it's all about the journey.





Thursday, January 02, 2014

Todd's 1 question test for all new "advanced" dynamic scripting/programming languages.

Whenever a new language comes out I have a simple test to see if it is worth looking at.  I came up with this test back in the 1990s after being frustrated by the state of "advanced programming languages".

In the 1980s, as a lowly FORTH programmer twiddling 8-bit bytes, I was blown away by my first experience with Lisp. It was on a DEC2060 (running TOPS-20) and was called "Standard Lisp".

As a student, back in 1984, I coded up a small lisp function to compute the factorial of 120.

The terminal presented me with:

6689502913449127057588118054090372586752746333138029810295671352301633557244962989366874165271984981308157637893214090552534408589408121859898481114389650005964960521256960000000000000000000000000000

My mind was sufficiently blown.

Of course, the largest factorial computed by a programming language where the number/integer type is matched to the machine word size is much smaller.  I had already programmed in Pascal, FORTH, a bit of C and BASIC.  But, this was the first language implementation I had seen that wasn't bound by that machine word limitation.

(I quickly followed that exercise by computing the factorial of ever-increasing numbers until, by the time I was in the hundreds of thousands, the terminal responded with a message saying that it was taking too many resources and that the process was being "spooled" -- whatever that meant.  I went home and the next morning I was greeted with an email from the sysadmin requesting that I come get a "print job" from the ops center.  I rang the ops center door buzzer, the admin came to the door and asked what I wanted. I told him who I was and he then frowned, told me to wait and closed the door. A few minutes later he showed up with a hand truck loaded with a big box of green bar printer paper.  Apparently, "spooled" meant the result was being submitted as a print job.)

A couple of years later and I discovered that Smalltalk too had this "big num" (or arbitrary precision) feature.  Why wouldn't every language have that? (Yeah, I know... performance...but, still...)

Now, when I am presented with a programming language that is supposed to be the "next step", I look to see if it supports bignums.  And when I say support, I don't mean surrounding the number with quotes and submitting it to a bignum library. That's cheating. I want to say something like:

X=6689502913449127057588118054090372586752746333138029810295671352301633557244962989366874165271984981308157637893214090552534408589408121859898481114389650005964960521256960000000000000000000000000000  / 19;

I don't want to type it as a string. That's saying that there is something "special" or "hard" about large numbers.  Why, in 2014, should I be concerned about whether a number fits into 32 or 64 bits (or, in the case of Lua and Javascript, a 52 bit mantissa)?  I want native/natural support.

So, what other programming languages pass this bignum test?

  • Erlang does. 
  • Haskell does... sort of... got to choose the right type.
  • Perl does (and has for a while... just type "use bignum;" and all following numbers are not bound by machine word size).

Now, don't get me going about native/natural support for rational types.

/todd