Saturday, February 28, 2015

Virtualization: Your PC is a Universe

PCs (and, honestly, I am really talking about laptops and the newer PC-replacement tablets) are so powerful that they no longer have to be thought of as singular "client" resources.  That is, with sufficient memory (let's start at 8GB RAM) and with enough SSD-speed storage (>128GB), folks like me typically run many virtual computers inside our computers.

If I need to run Windows, I just fire up VirtualBox. If I need to do server development, I can pick something like Vagrant or Docker, or go directly to LXC.  I can do Android development. I can do Windows development. I can try out Haiku or some new BSD.  I can do all of this without changing the underlying OS.  The underlying OS, in fact, is starting to become irrelevant.  Give me a Windows box and I can do full Linux stuff on it without replacing the OS: just start up a Linux VM.

The thing is, at any given moment, my laptop is a Universe of virtual computers. I can network these computers together; I can simulate resources; I can test them, probe them and manipulate them.

This is new. Yes, yes -- the tech is pretty old (e.g. virtual machines), but the realization of this tech on a portable computer is new.

If you want to see where we may be heading, check out something like Rump kernels or OSv. We are starting to leave the OS behind and look at computing in terms of "microservices" -- collaborating virtual computers that solve a particular problem.

With the resources we now have on hand, why are we talking about systemd and Dbus and other single computer entities?

The next time you approach a design, try thinking about how your laptop can be *everything*. And then let that influence your design.

I will be Cyborg.

I haven't had a lot of time to post to this blog and I am wondering if this is the end of the line for it.
Well, we will see.  But for now...

I am approaching 50 (in 1.5 years) and my eyes are shot (I'm very nearsighted).  The screen is blurry (I have transition bifocals, so my "clear" view is pretty marginal) and it isn't going to get any better.

So, if my eyesight starts to quickly wane (my eye doctor isn't really concerned... yet), what do I do?
While I can use magnifying glasses for my circuit work (which is starting to become a thing of the past for me anyway), what about my programming and computer science stuff (i.e. my screen work)?

I'm a programmer and technologist.  I can hack something together to supplement my poor vision.  Even if I were to go blind (that isn't currently in the cards, but who knows), there are ways to continue to do "Computer Science".
There is technology already out there, and I can always invent what I need to aid me if my eyesight worsens.

Sometimes I forget that, with software and some gadgetry, we invent whatever we need. We are indeed sorcerers and alchemists :)

Wednesday, October 01, 2014

Forth and the Minimalist

Not all Forth programmers are minimalists, but chances are, if you use arrayForth, colorForth or something inspired by it (like MyForth), then you may be a minimalist.

Being a minimalist, you seek the simplest, most concise use of resources.  You tend to avoid rambling code and the idea of calling a (3rd party) library function makes you uncomfortable.

One of the reasons I like using MyForth (and the 8051) is that it forces you to think about how to simplify the problem you are trying to solve.  This is a good exercise, but it also offers some advantages when you are working on low power (or very tiny) embedded systems.  No matter how beefy MCUs get, there is always a need for something "smaller" and lower power (e.g. a tiny, low transistor count 8-bit MCU has a better chance of running off of "air" than a fast, feature rich 32-bit MCU).

The 8051 has rather poor math capabilities. Everything is geared toward 8 bits. If you use a C compiler, this is hidden from you.  The compiler will generate a ton of code to make sure that your 16 or 32 bit math works. This causes code bloat and will slow you down -- thereby causing more power consumption.  Programming in a minimalist Forth makes you think about whether or not you actually need the math.  Is there a cheat?  Look at old school methods and you may find one. I grew up on the 6502 (Commodore VIC-20/C64, Atari, Apple, etc.).  You did all you could to avoid doing "real" math (especially if it broke the 8-bit barrier).  You had limited resources and you made the most of what you had.
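One classic cheat: smooth sensor readings with an exponential running average whose divisor is a power of two, so the "divide" is just a shift -- cheap even on an 8051, with no 16-bit math library pulled in. A minimal sketch in C (the function name and the window of 8 are my own illustration, not MyForth code):

```c
#include <stdint.h>

uint8_t avg;  /* running average state, stays entirely in 8 bits */

/* Fold a new sample into the average: avg += (sample - avg) / 8,
   split into two branches so we never need signed or 16-bit math. */
uint8_t smooth(uint8_t sample)
{
    if (sample >= avg)
        avg += (uint8_t)((sample - avg) >> 3);
    else
        avg -= (uint8_t)((avg - sample) >> 3);
    return avg;
}
```

The shift-by-3 means the average converges to within the shift granularity of the input, which is usually fine for noisy analog sensors.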

But, is this just an "exercise"?  I don't think so. There are practical benefits that go beyond just old school cleverness. You (can) have more compact code that performs better. The less code you produce, the fewer chances for bugs. The less code you produce, the more reliable your product.

Gone are the days (for most of us) of penny counting component costs. I'd rather have a bunch of simple components (e.g. logic gates, simple MCUs, peripheral processors, etc.) that do work for me than a big processor with a complex library.  Chip components tend to be "coded" to a higher level of quality assurance than pure libraries.  I trust a USB->serial chip more than some USB->serial library for my MCU. If the library fails, they say "update". If a chip fails... they risk going out of business -- who trusts production runs to faulty chips?

In the end, the minimalist is fighting the status quo.  It is a futile fight, but we can't seem to give it up. It is in our nature.

Wednesday, July 30, 2014

AFT - an elegant weapon for a more civilized age..

This is a sort of nostalgic post and, in some sense, it is also a "toot your own horn" one as well.  I am writing this mainly for myself.  I am trying to remind myself what I've liked most about programming.

Years ago -- almost 2 decades ago now, around 1996 -- I wrote a text markup system called AFT. AFT stood for Almost Free Text. It was inspired by Ward Cunningham's original wiki markup but went further.

I had a problem. I didn't like using WYSIWYG word processors and the world was moving towards HTML.  I liked Ward's markup. He was using it on this new "Wiki Wiki" thing. I answered an invite sent to the Patterns List and became one of the first wiki users in 1995.  (But that is a different story for a different time.)

AFT was my attempt at a writing system to produce publishable (web and print) documentation.  Since then, it has developed a (now waning) user base.  You can see how many web pages/documents use it without changing the default "watermark" with this query.

As of Ubuntu 14.04, you can get AFT by issuing an "apt-get install aft" against the standard repository.
I think it is still part of FreeBSD "world".  I believe it still runs under Windows too.

Various "modern" mark up languages (written in "modern" programming languages) have since surpassed AFT in adoption, but for me, it still is a more elegant and pleasurable experience.

Over the years (although not very recently), I've updated, fixed and generally maintained the code.  There are no known crashes (it literally takes whatever you throw at it and tries to produce good looking output -- although that may fail) and it doesn't require me to look at the HTML (or PDF) manual (written in AFT!) unless I want to do something complex.

AFT is implemented in Perl. Originally it was written in awk, but I taught myself Perl so as to re-implement it in the late 1990s.

It is, for me, interesting Perl code.  I have modernized it over the years, but it still doesn't depend on CPAN (a good thing if you just want to have non-programmers "download it" and run without dependencies -- yes I know there are packaging solutions to that problem today...).

AFT has "back end" support for HTML, LaTeX, lout and some rudimentary RTF.  These days I think mostly HTML and LaTeX are used.

You can customize the HTML or LaTeX to support different styles by modifying or creating a configuration file.  This configuration file is "compiled" into a Perl module and becomes part of the run time script.

AFT has been a pleasure to hack on now and then. It still runs flawlessly on new Perl releases and has proven not too fragile to add experimental features to. I've accepted some small code fixes and fragments over the years, but generally it is mostly my code.

As I wrote (and rewrote) AFT, I thought frequently of Don Knuth's coding approach (as excellently documented in my all time favorite book on programming: Literate Programming).  I certainly can't match the master, but the slow, thoughtful development he espouses was inspiring.

Over the years I've gotten a few "thank you" notes for AFT (but nothing in the past few years) and that makes it my (to date) proudest contribution to Free Software.

Maybe I'll dust off the code and introduce some more experimental features...

Sunday, July 27, 2014

Concurrency and multi-core MCUs (GA144) in my house monitor

My house monitoring system monitors lots of sensors. This suggests a multi-core approach, doesn't it?

The problem with (the current concept of) multi-cores is that they are typically ruled by a monolithic operating system. Despite what goes on in each core, there is one single point of failure: the operating system. Plus, without core affinity, our code may be moved around.  On an 8-core Intel processor, you are NOT guaranteed to be running a task per core (likely, for execution efficiency, your task is load balanced among the cores).  Each core is beefy, too. Dedicating a whole core to a single sensor sounds very wasteful.

This, I believe, is flawed thinking in our current concurrency model (at least as far as embedded systems go).

I want multiple "nodes" for computation. I want each node to be  isolated and self reliant.  (I'm talking from an embedded perspective here -- I understand the impracticality of doing this on general purpose computers).

If I have a dozen sensors, I want to connect them directly to a dozen nodes that independently manage them.  This isn't just about data collection. The nodes should be able to perform some high level functions.  I essentially want one monitoring app per node.

For example: I should be able to instruct a PIR motion-sensor node to watch for a particular motion pattern before it notifies another node to disperse an alert. There may be some averaging or more sophisticated logic to detect the interesting pattern.
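As a rough sketch, the pattern logic might be as simple as requiring several motion events close together before alerting -- a one-off blip isn't interesting. The thresholds and names below are invented for illustration, not actual Panoptes code:

```c
#include <stdint.h>

#define WINDOW_TICKS  60   /* e.g. one tick per second: a 60 s window */
#define EVENTS_NEEDED 3    /* motion events needed inside the window  */

static uint8_t events;     /* motion events seen in the current window */
static uint8_t ticks;      /* age of the current window, in ticks      */

/* Called once per tick; motion != 0 when the PIR fired this tick.
   Returns 1 when the "interesting" pattern has been seen. */
int pir_tick(uint8_t motion)
{
    if (motion && events == 0)
        ticks = 0;                        /* first event opens the window */
    if (motion)
        events++;
    if (events && ++ticks > WINDOW_TICKS)
        events = 0;                       /* window expired: start over */
    if (events >= EVENTS_NEEDED) {
        events = 0;
        return 1;                         /* notify the alert node here */
    }
    return 0;
}
```

The point is how little state each node needs when it only has one job.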

Normally, you would have a bunch of physically separate sensor nodes (MCU + RF),  but RF is not very reliable. Plus, to change the behavior of the sensor nodes you would have to collect and program each MCU.

So, consider for this "use case" that the sensors are either wired or that the sensors are RF modules with very little intelligence built in (i.e. you never touch the RF sensor's firmware): RF is just a "wire".  Now we can focus on the nodes.

The GreenArrays GA144 and Parallax Propeller are the first widely available MCUs (that I know of) to encourage this "one app per node" approach.  But the Propeller doesn't have enough cores (8) and the GA144 (with 144 cores) doesn't have enough I/O (for the sake of this discussion, since the GA144 has so many cores, I am willing to consider a node to be a "group of cores").

Now, let's consider a concession...
With the GA144, I could fall back to the RF approach.  I can emulate more I/O by feeding the nodes from edge nodes that actually collect the data (via RF).  I can support dozens of sensors that way.

But, what does that buy me over a beefy single core Cortex-M processing dozens of sensors?

With the Cortex-M, I am going to have to deal with interrupts and either state machines or coroutines. (Although polling could replace the interrupts, the need for state machines or coroutines remains the same.)  This is essentially "tasking".

This can become heinous. So,  I start to think about using an OS (for task management).  Now I've introduced more software (and more problems).  But can I run dozens of "threads" on the Cortex-M? What's my context switching overhead?  Do I have a programming language that lets me do green threads?  (Do I use an RTOS instead?)
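To make the pain concrete, here is a minimal sketch of one such hand-rolled state machine -- one per sensor, all pumped from a single superloop. Everything here (the states, the debounce count of 3) is illustrative, not real firmware:

```c
#include <stdint.h>

enum state { IDLE, DEBOUNCE, ALERT };

struct task {
    enum state st;
    uint8_t count;   /* consecutive active reads while debouncing */
};

/* One step of one sensor's state machine; returns 1 on alert.
   The superloop must call this for every sensor, every pass. */
int step(struct task *t, uint8_t raw)
{
    switch (t->st) {
    case IDLE:
        if (raw) { t->st = DEBOUNCE; t->count = 0; }
        break;
    case DEBOUNCE:
        if (!raw) { t->st = IDLE; break; }
        if (++t->count >= 3) t->st = ALERT;   /* 3 consecutive reads */
        break;
    case ALERT:
        t->st = IDLE;
        return 1;
    }
    return 0;
}
```

Multiply this by a dozen sensors, add interrupt-driven input and shared state, and you can see why an OS starts to look tempting.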

All of this begins to smell of  anti-concurrency (or at least one step back from our march towards seamless concurrency oriented programming).

So, let's say I go back to the GA144. The sensor monitoring tasks are pretty lightweight and independent. When I code them I don't need to think about interrupts or state machines. Each monitor sits in a loop, waiting for sensor input and  a "request status" message from any other node.
In C pseudo-code:

while (1) {
  switch (wait_for_msg()) {
    case SENSOR:
      if (compute_status(get_sensor_data()) == ALERT_NOW)
        send_alert();            /* notify interested nodes */
      break;
    case REQUEST:
      send_status();             /* reply with this node's status */
      break;
  }
}

This loop is all there is.  The "compute_status" may talk to other sensor nodes or do averaging, etc.
What about timer events? What if the sensor needs a concept of time or time intervals?  That can be done outside of the node by having a periodic REQUEST trigger.

(This, by the way, is very similar to what an Erlang app would strive for; see my previous post, GA144 as a low level, low energy Erlang.)

Now, the above code would need to be in Forth to work on the GA144 (ideally arrayForth or PolyForth), but you get the idea (hopefully ;-)

Tuesday, July 22, 2014

A Reboot (of sorts): The IoT has got me down. I think we've lost the plot.

The IoT (Internet of Things) has got me down.  I think we've lost the plot.

In most science fiction I've read (and seen), technology is ubiquitous and blends into the background.  The author of a science fiction book may go into excruciating detail explaining the technology, but that is par for the course.

In science fiction films the technology tends to be taken for granted.  Outside of plot devices, all the cool stuff is "just a part of life".

Re-watch Blade Runner, Minority Report, etc. Do the characters obsess (via smartphone or other personal device) over the temperature of their home while they are away?  Do they gleefully purchase Internet connected cameras and watch what their pets are up to?

It is 2014 and we buy IoT gadgets that demand our attention and time.  Nest and Dropcam: I am looking at you.

Beyond "Where is my Jet Pack?", I want "Set and Forget" technology.  The old antiquated "Security Monitoring" services (e.g. ADT) got it partially right. You lived with it. You didn't focus on it and you weren't visiting web pages to obsess over your house's security state.  But that model is dying (or should be). It is expensive, proprietary and requires a human in the loop ($$$).

What do we replace it with?

I think that the "Internet" in the IoT is secondary.  First, I want a NoT (Network of Things) that is focused on making my house sensors work together.  Sure, if I have a flood, fire or a break-in, I want to be notified wherever I am (both in the house and out).  When I am away from home is where the Internet part of IoT comes into play.

My current Panoptes prototype (based on X10) monitors my house for motion and door events. My wife or I review events (via our smartphone) in the morning when we wake up. It gives me valuable information, such as "when did my teenage son get to bed?" and "was mother-in-law having a sleepless night?" and "is mother-in-law up right now?".  Reviewing this info doesn't require the Internet but does require a local network connection.

I also register for "door events" before I go to bed. This immediately alerts me (via smartphone) if  "mother-in-law is confused and has wandered outside".

When I leave the house, I can monitor (via XMPP using my smartphone) activity in the house. When I know everyone is out, I can register (also via XMPP)  for door/motion events. I can tell if someone is entering my house (our neighborhood has had a recent break in).

This is an important Internet aspect of Panoptes.  I rarely use it though.  My main use of Panoptes turns out to be when I am at home.

So, I want IoT stuff, but I want it to be "Set and Forget".  This is the primary focus in my series of Monitoring projects.

Monday, June 23, 2014

Design by Teardown: What you will find inside of my Panoptes home monitor basestation

First... It is about time I named this monitoring system.  I'm code naming it "Panoptes".

I'm struggling a bit with the power consumption on my wireless sensors (previously mentioned here).

I've chosen a C8051F912 as the MCU (the extra 128 bytes are needed for my OTA encryption scheme), but I can't seem to get the sleep mode down below 20uA. (That doesn't sound like a lot of power consumption, but it adds up when you consider that I want the batteries to last years.)

So, I am taking a break from low power design to focus a bit on my base station. (For those coming into this blog entry cold, I am designing an Internet-ready home monitoring system with a focus on keeping track of independent elderly people, specifically those who are candidates for nursing homes but aren't quite ready for that transition yet.)

I've decided to approach the base station design from a post-implementation perspective: What would someone find if they did a teardown on my device?

Why come from this perspective?  I would hope that a savvy engineer doing the teardown would find the implementation sound and even respectable. So, why not base my design decisions on that point of view?

Now, I am not just talking about a hardware teardown, but a software one too. But I won't get too wrapped up in how my code looks or how it is structured.  I am more interested in interoperability: how does the software interface with the outside world -- in particular, the end user and the Internet.

Let me preface this with one of my primary design goals: Set and Forget. 

This is not a system to be played with or to constantly probe from a web browser.  The typical customer is a caretaker or adult child of an elderly person.  This is about applying modern IoT technology to an old problem.  But, this is not a Nest. This is about the kind of IoT (Internet of Things) that operates discreetly in the background of your life -- you just want to know when interesting things happen, otherwise it isn't on your daily radar.

I have said before that even the base station can host sensors, so for this particular teardown, we will look at a single use case: someone buys the base station plus a water flood sensor to monitor their laundry room.  This example isn't solely "elderly" oriented, but it does represent the case where someone would want a true "set and forget" sensor.  (I won't cover "wireless sensor nodes" here; while necessary, they are bound to a lot more hassle that I'll address later -- things like RF interference/jamming, etc.)

I am trying to bridge the world of industrial strength monitoring with the IoT.  I expect the sensors to be "integrated" with the house. You will want to install them properly (mount them) and permanently. These are devices that should last for years.  The mantra is that they "must work".

The water flood sensor is a good example of a "must work" sensor.

So, this is a long one. Feel free to jump ship here, otherwise, grab a cup of coffee, sit back and ... here we go:


The water flood sensor is a pair of gold plated probes on a 2x4" plastic plate.  The plate can either rest on the floor, be glued down or be attached to a baseboard with screws.  Two thin wires connect it to the sensor node (in this case the base station).  The base station can be mounted on the wall. It is about the size of a deck of cards. On the side are 6 screw terminals (for 4 sensors plus +DC and ground).  The water flood sensor attaches to one of the sensor terminals and ground.  The user is expected to use the full length of wire or to trim and strip it to a desired length.  You can connect up to 4 water flood sensors if you want to place them strategically in a room (e.g. under the sink/tub, next to the water heater, etc.).

(First critical question: Why screw terminals instead of modular connectors?  Answer: This allows the user flexibility in where they mount the base station. It can be several feet away from the sensor. A modular jack would fix the length of the wire.  I am assuming either a professional installer or someone comfortable enough to run wires.)

The base station hosts 2 AA batteries for power failure backup (which should run a couple of weeks before needing to be replaced).  Lithium or alkaline are recommended for maximum shelf life.

The base station is normally plugged in to an AC outlet (via a standard USB 5VDC power supply). Since the station uses Wi-Fi, it wouldn't run very long on batteries.


The USB port is also used for configuring the base station. Once plugged in, it shows up as a disk drive.

Then you go to the product website and enter data into a form.
You can associate a screw terminal with a type of sensor (in this case a water flood sensor). You must also enter the SSID and password for your Wi-Fi router.  Additionally, for notification, you must provide an email address.  None of this data is retained by the website, and it is all done over HTTPS.

Once entered, this data is downloaded as a file. You must save the file (or drag it) to the attached base station.  The LED will blink rapidly and if all goes well it will remain lit.  A slow blink indicates an error.

Once installed and turned on, the base station contacts the wi-fi router and you are sent a "startup successful" email.


The base station will send you a "heartbeat" email once per week to indicate that all is well. If you want to check "on demand", you can send it an email and it will respond with its status.

If water is detected, you are sent email.
That's it. Set and forget.

Hardware Teardown

There are 4 Phillips head screws holding the unit together. The case is UL94 5VA flame rated.  Two flanged holes support mounting the enclosure to the wall.  When mounted, the battery compartment sits flush against the wall. This is a light form of security to prevent someone from taking the batteries out.
The screw terminals are side mounted.  There is a small recessed reset button on the bottom of the enclosure.

Inside there is a small circuit board hosting the three main components: a TI CC3000 Wi-Fi module, a Nordic nRF24L01P low power RF transceiver (for wireless sensor nodes) and a C8051F381 USB MCU. The Wi-Fi module is tethered to an antenna that traverses the inside edge of the enclosure.  The screw terminals are connected via ESD protection diodes to the MCU.
(But, why an 8-bit MCU?  Why not an ARM Cortex? The C8051F381 is essentially an SoC. There are very few outside components needed. Panoptes uses the internal precision oscillator, so there isn't even an external crystal.  There is a built in 5VDC-in regulator and USB support. And, for what the system does, an 8-bit is adequate. Plus, the fewer the parts, the simpler the design.)

There is a small piezo buzzer mounted over a small hole piercing the front of the enclosure. A small red LED next to it pulses every few seconds. This is to indicate that the unit is on and connected. If it cannot connect to the wi-fi router or cannot reach the Internet, the LED blinks rapidly.

Measuring the power consumption of the unit shows that it consumes around 105mA when idle (not sending a notification) and peaks briefly at about 250mA when sending a notification. Most of this current is due to the Wi-Fi module.  The 105mA suggests that the base station maintains a connection to the Internet at all times.

Pouring water on the floor (thereby triggering the sensor) causes the unit to beep loudly and send a notification email.  After 10 minutes the beeping stops and the unit awaits a reset, blinking the red LED rapidly the whole time.  You can silence the alarm by pressing (and holding for 3 seconds) the reset button on the bottom of the enclosure.
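Sketched as firmware, that alarm sequencing might look something like the following (one call per second; the function names and where the email hook sits are my guesses, not the actual Panoptes code):

```c
#include <stdint.h>

#define BEEP_SECONDS (10u * 60u)  /* beep for 10 minutes, then await reset */

static uint16_t alarm_secs;       /* 0 = no active alarm */

/* Called once per second; returns 1 while the buzzer should sound
   (and the LED should blink rapidly). */
int alarm_tick(int water_detected, int reset_held_3s)
{
    if (reset_held_3s) {
        alarm_secs = 0;               /* user silenced the alarm */
        return 0;
    }
    if (water_detected && alarm_secs == 0)
        alarm_secs = BEEP_SECONDS;    /* new alarm: email goes out here too */
    if (alarm_secs) {
        alarm_secs--;
        return 1;
    }
    return 0;
}
```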

If AC power is pulled from the base station (e.g. a power outage), the unit falls back to the batteries, sends an alert email, powers down Wi-Fi and beeps for 5 seconds.  The base station is still fully functional, but it is expected to only last a few days without AC power.
The current measures steady at around 500uA at this point.  Any water sensing event will cause both the beeping alarm and an attempt to send an email notice (in case the Wi-Fi router itself is battery backed).  Every 2 minutes the station beeps to remind anyone nearby that the unit is battery powered.
Pressing and holding reset at this point will cease the beeping but the alert capability remains.

Internet Connectivity

The base station is connected 24x7 to a server running in the "cloud". This connection is via TLS/SSL and it is the cloud host that sends notification emails.  Why not send email directly? The cloud server ensures mail delivery (caching and doing multiple delivery attempts as needed). Plus, for sensors that need correlation outside of simple alerts, the cloud server does all of the logic and interfacing. 

Email is used as the primary notification (and status query) mechanism due to its ubiquity. Email is everywhere and doesn't require any special software to be loaded on your PC or smartphone.

No software updates are pushed to the device. Nor can the device be remotely controlled. It is a monitoring sensor. This IoT base station is one way.

In conclusion

Panoptes is designed to be a part of your house. It isn't sexy, but it is indeed a player in the IoT. Outside of 802.11b/g and TLS/SSL, it is bound to no particular Internet standard that may go away in the near future.  You can use it with low power RF based sensors or simply standalone with up to 4 wired sensors.

Despite the low BOM, Panoptes is a high quality product designed to last.  At $100 per base station, $10 - $20 per wireless sensor,  and $2 per month cloud based subscription, it is a worthy investment considering the repair costs of house flooding.

The only thing missing seems to be Zigbee support. But, until low cost wireless sensors are offered in the Zigbee space, the nRF24L01P is adequate.

Thanks for reading!

EDIT: Looking seriously into the Kinetis K20 again as the base station MCU. I could use a little extra help with the Internet protocol side of things and the 8-bitter suffers there.

EDIT2: The TI CC3000 Wi-Fi module has an OTA configuration scheme called SmartConfig. This rids me of the need for USB support, as I can configure the AP and password over the air.  I still need to figure out how to send the email address and other config stuff, but I should be able to do that over the air too.

Sunday, June 22, 2014

IoT: Real "servers" (PCs) are in your future (as base stations)

While Nest and others are using embedded ARMs as base stations for your home "Internet of Things" (IoT), I see a real server in your future. There is only so much you can do with these embedded (usually ARM based) servers when you don't have a disk or memory management.  In particular, with the greater demand for these base stations to talk "native" Internet/cloud protocols (e.g. heavier ones like AMQP, XMPP, etc.), it starts to tax an unadorned ARM SoC.

While a "PC" sounds like overkill, I am expecting to see more and more Intel Atom and ARM based, fully solid state, base stations with all the usual bells and whistles we are used to getting with a PC.
What bells and whistles?  Memory protection/management, robust storage, system busses, rich peripheral support, etc.

Let's call them SBCs (Single Board Computers), which is what they really are.  Until now, SBCs were firmly in the domain of the industrial embedded market.  You don't mess around with unreliable consumer tech like SD cards and low end Chinese market chips (e.g. Allwinner, etc.) when you are building a security base station for an office building or other 24x7 "install and forget" monitoring and control systems.

I've played with the wonderful Olimex ARM boards (like the OLinuXino LIME), but they are "new". There are hardware glitches, limited driver support (I can't just buy a wi-fi  board and expect it to work) and I don't feel that the Linux distribution is fully baked yet. Plus, I have to cross compile (from my Intel based laptop) and I run into all the "this isn't ported yet" problems that come with cross compilation.

With the coming of the Minnow Board MAX, Intel based SBCs are getting cheap enough (and low power enough -- No fan!) to become serious alternatives to the crop of low end ARMs.

What is wrong with the current crop of Cortex A based embedded systems?  The biggest problem is reliability (or at least the perception of) and OS support.  Sure there are Linux based distributions but are they as reliable and mature as their Intel based cousins?  I'm talking about real embedded distributions. I don't need or want X windows with a bunch of media apps.  But, are Intel SBC based Linux distributions any better?  Maybe. But that isn't what I am recommending.

Ubuntu/Debian/Fedora/etc server editions are (perhaps) ideal here.  They, for the most part, are already rock solid (when you have thousands of servers running 24x7 in a data center, you might as well say the OS is "embedded" grade since you can't practically login and deal with OS "issues").

I can see running Ubuntu 14.04 server (stripped down a bit) on a Minnow Board.

Now, the target market for the Minnow Board is for those who want to play with SPI, GPIO, I2C, etc -- they make a point of saying it is an "open hardware embedded platform" and not a PC. But, it seems to have specs to the contrary:  64 bit Atom, 1GB RAM, USB,  SATA2 support, ethernet, etc.

That sounds like a PC to me.  And, if I can run Ubuntu (or Debian) Server on it, it fits my IoT base station needs.   These days, most peripherals I interface to (including my own homebrew ones) can be accessed via UART (via a USB adapter) or native USB.  Do I really need to put my Bluetooth or GPS receiver on SPI these days?  (IMHO, Linux is pretty clumsy when accessing bit banged devices that don't already have kernel support.)

And, at $100, it certainly competes with the current crop of ARM boards.
Then again, if you can accept a Fan in your base station, it is hard to beat a repurposed ASUS Chromebox ($149) which comes with 2GB RAM and a 16GB SSD.