Thursday, December 19, 2013

Perl is 26 years old?

I just read on Hacker News that Perl turns 26 years old today.  My oldest piece of open source software, AFT, is written in Perl.  AFT was coded in Perl 5 back in 1996 (which makes it over 17 years old).

I originally wrote AFT in awk (sometime in the early/mid 1990s). I ported it to Perl using the awk-to-Perl translator (a2p) as a starting point.

Every once in a while I revisit AFT to see if there are any new tricks to add, or just to clean up the code and make it a bit more Modern Perl.  AFT is still in the Ubuntu software repository, so you can install it under Ubuntu by simply doing sudo apt-get install aft.

I'm still using Perl. I am doing some funky stuff with the AWS cloud (EC2 instances) and find Lincoln D. Stein's VM::EC2 package fits my needs.  Perl is still great for replacing nasty bash/sh scripts (although some would say I am replacing the nastiness of bash with the nastiness of Perl).

I'm not much of a CPAN user, but those who know me understand that I am very selective about libraries and frameworks. I prefer to avoid layering and loading lots of dependencies. What I find attractive about Perl is how much the basic distribution can accomplish (e.g. I don't have to resort to libraries as quickly as I do when programming in Lua).

I'm particularly interested in investigating Mojolicious for use in my elderly monitoring project.  I use LuaJIT primarily on the base station (with a bit of Erlang).  The cloud side is currently a fair amount of Perl.  But aren't there better languages to do cloud/web development?  Isn't Perl old hat?

For those folk who think that Perl is old hat and doesn't have a place in the modern Web, do you use DuckDuckGo?  It was developed primarily in Perl....

Tuesday, October 01, 2013

Power Considerations for an HVAC Thermostat (10 years off a couple of AA batteries?)

Hear a tick coming from your thermostat every time it turns on the heat or AC? Well, assuming you have an electronic/digital thermostat, you are hearing a latching relay.

Latching relays are wonderfully power-miserly switches. Unlike a solid state relay, which requires continuous control current (typically anywhere between 0.25mA and 20mA depending on what you choose), a latching (mechanical) relay will latch (hence its name) when pulsed.  A short burst of 100mA or so and you are done.  Even if your thermostat is switching the HVAC once an hour (which is pretty excessive), you are still using a lot less power than with a solid state relay.  Another potential problem with solid state relays is heat. They generate heat. They can overheat, and if you don't choose and place your components wisely, they can skew the temperature sensor.
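To put rough numbers on that comparison, here is a back-of-envelope sketch. All figures are assumptions for illustration (pulse width, switching frequency, SSR hold current, HVAC run time), not from any datasheet:

```python
# Latching relay: assume a 100 mA pulse for ~10 ms, two pulses (on + off)
# per switching event, 24 switching events a day.
pulse_a, pulse_s, switches_per_day = 0.100, 0.010, 24
latch_avg_a = pulse_a * pulse_s * 2 * switches_per_day / 86400

# Solid state relay: assume a modest 1 mA control current, held for the
# entire time the HVAC runs (say 6 hours a day).
ssr_hold_a, on_hours = 0.001, 6
ssr_avg_a = ssr_hold_a * on_hours * 3600 / 86400

print(f"latching relay average: {latch_avg_a * 1e6:.2f} uA")
print(f"solid state relay average: {ssr_avg_a * 1e6:.2f} uA")
```

Under these assumptions the SSR's average draw is hundreds of times the latching relay's, which is the whole argument in one ratio.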

Thanks to latching relays, your modern thermostat is not consuming that much energy.

Interestingly, I believe that the Nest uses solid state relays.

A quick google shows that some Nest customers are having battery issues. Here is an interesting one:

This is the kind of issue I want to avoid.  The two household gadgets that I don't want to think about (battery) power problems are my smoke detectors and my thermostat. I'll check once a year, but beyond that...

Here is a target: Ignore UI and wireless for a moment. Once a temperature schedule has been set, can you design a thermostat that will run for 10 years off of a couple of Lithium AA batteries?  That is my starting point.
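A quick sanity check on that target, assuming a typical lithium AA cell of roughly 3000mAh (two in series share the same capacity at twice the voltage); the real budget would also have to account for self-discharge:

```python
capacity_mah = 3000            # assumed lithium AA capacity; two in series
years = 10
hours = years * 365 * 24       # 87,600 hours of runtime

# Average current the whole thermostat may draw to hit the 10 year target.
budget_ua = capacity_mah / hours * 1000
print(f"average current budget: {budget_ua:.1f} uA")
```

Roughly 34 microamps average, for everything: sensing, deciding, and pulsing the relay. That is why the latching relay and aggressive sleep modes matter.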

So, as boring as "Thermostat design" sounds, it poses an interesting power problem.  Add sophisticated 'internet-of-things' functionality and now you have two problems.

Saturday, September 28, 2013

An Alternative Take on Thermostats

For most thermostats (including the Nest), there seems to be a power/location problem.   At best you have 24VAC control lines you can parasitically suck power from, or (worst case) you have some AA batteries that you hope will last a couple of years.  With batteries you start to seriously limit what your thermostat can do (a reminder that your thermostat is NOT a general purpose computer).  With parasitic sucking off the control lines you start playing games to ensure that you always have enough power (OMG, the HVAC is off for maintenance or a power outage -- what should the thermostat do?).

So okay, we've got a power management problem.

A question:

Why is your thermostat's primary User Interface (UI) mounted to a wall (and often in a place where you are not comfortably located)?

The location of your thermostat probably has a lot to do with the "best" place to measure room temperature (out of the sun) and where you can run HVAC wires.  There are other factors, but it doesn't matter: I have to get up and go there to read or adjust the temperature. (Of course, with the Nest and other networked thermostats you can change the temperature from almost anywhere.)

A proposal:

Don't put the UI into the thermostat.

What?  But where do we put it?  Surely we don't rely solely on the smartphone, right?  Right. But consider this:

The thermostat will now consist of a small, discreet box (hide it behind a picture frame or paint it to blend in) with a temperature sensor (maybe), the required solid state relays to control the HVAC, a couple of AA (or AAA) batteries, and a small RF transceiver (433MHz if you desire).

Pick a convenient location (or two) and install the UI there (and a temperature sensor too). It can be on the wall, on the kitchen counter, maybe a shelf, etc.  The only requirement is that you have AC power (see where I am headed?).

Now, let's rename this UI a "control center". This control center (still very small) can not only host the UI but also support Bluetooth and/or Wi-Fi, so you can use your phone to control the HVAC from your bed or couch.

If we do the RF correctly, it can be safe (validated/encrypted) and reliable.  Because the control center is plugged into AC, we don't have power management issues there. The thermostat itself is running off of batteries or even parasitic power, so we still have some power management issues, but we can duty cycle the RF to reduce the overall power consumption. (A thermostat's control loop is *never* under immediate control of your thermostat's UI -- you "advise" the control loop what to do, so a couple of seconds of lag is okay.)
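As a sketch of what duty cycling the RF buys you, here are some illustrative numbers (the receive current, sleep current, and timings are assumptions, not measurements of any particular transceiver):

```python
# Radio that wakes every 5 seconds to listen for 5 ms, sleeps otherwise.
rx_a, sleep_a = 0.015, 1e-6          # 15 mA receiving, 1 uA sleeping (assumed)
period_s, rx_window_s = 5.0, 0.005

avg_a = (rx_a * rx_window_s + sleep_a * (period_s - rx_window_s)) / period_s
print(f"average radio current: {avg_a * 1e6:.1f} uA")
```

A worst-case command latency of a few seconds, which the "advise the control loop" model tolerates easily, drops the radio from a continuous 15mA to an average in the tens of microamps.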

Now, with this setup we can put a lot of sophistication into the control center. It can be a (shudder) computer and suddenly our thermostat can become something much more interesting...

Thoughts of a Haskell (or Erlang or Lua) powered thermostat start to become more of a possibility.


Friday, September 27, 2013

A Better Thermostat (or is the Nest the best we can do)?

I've been reading about the Nest and I am convincing myself that it is too gadget-y.
I'm thinking about my household and the usual elder market that I am currently designing stuff for.

Do I really need my thermostat to be Internet enabled? Do I really need to become a honeypot for Black Hats?  Do I really want my thermostat to have software "updates"?  Also, it is yet another device to be managed whenever I swap out routers or change wi-fi passwords.

However, I do think that we can do a lot better than the HVAC industry's notion of what a digital thermostat is.   The Honeywell Prestige 2.0 and the Ecobee Smart are the Nest's current competitors. But they seem to think that we all want bling in our thermostat interfaces: lots of color, lots of funky icons, everything just a slide of a finger away.  Here is where I think the Nest got something right. Grandma can just turn a dial to get the desired temperature. No capacitive touch screen (what about those with crippling arthritis?) and no busy trompe l'oeil effects for those with bad eyes.

The Nest, apparently, runs Linux on a Cortex A8. That is a power hungry beast. No wonder it has a rechargeable Li-Po battery inside, which it charges from the 24VAC. (If you don't have a Common wire, and I don't, it does a clever but dangerous trick of pulsing the fan/AC/heat trigger line to complete the circuit and trickle charge the battery -- apparently that causes the start/stop ping of death on HVAC units that don't debounce away the false triggers.)

Then there is the problem of what happens if you use the interface too much, run buggy software updates, or spend too much time on Wi-Fi. You'll drain the battery, right?  (The trickle charge takes time and you are always running off the Li-Po.)

I don't think Linux is ready yet for 24x7 systems that run off of battery.  (A tickless kernel is a must for battery longevity, and that is just a start). But I could be wrong.

There are advantages to running an OS. It opens up development options: I'd love to try and get some Haskell code into an "embedded device".  Without thinking about embedded development issues, you can focus on the algorithms.

What about the power-miserly Cortex M4s?  They are starting to come with a lot of RAM and massive flash spaces.  I've even seen Haskell (and Lua) running on STM32F4 Discovery boards. But we are still only talking about kilobytes of RAM, and until I see real applications run, just blinking LEDs and toggling a few GPIO pins doesn't convince me that you can write serious-sized applications for them (where memory allocation can kill you).

For a smart thermostat, it is all about the algorithms. There are, at most, 3-5 interfaces to the HVAC systems. These are basically switches (relays or solid state relays).  It is all about when to turn these switches.
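The classic version of "when to turn these switches" is bang-bang control with hysteresis. Here is a minimal sketch (the function name and the dead band width are my own choices, not anyone's actual firmware):

```python
def heat_relay_command(temp_c, setpoint_c, heating_on, hysteresis_c=0.5):
    """Bang-bang control with hysteresis: the core switching decision."""
    if temp_c < setpoint_c - hysteresis_c:
        return True             # too cold: latch the heat relay on
    if temp_c > setpoint_c + hysteresis_c:
        return False            # warm enough: latch it off
    return heating_on           # inside the dead band: leave the relay alone

# The dead band keeps the relay from chattering around the setpoint.
state = False
for temp in (19.0, 19.6, 20.4, 20.6):
    state = heat_relay_command(temp, 20.0, state)
```

Everything smarter (scheduling, learning, anticipating recovery time) is just a more sophisticated way of computing the setpoint and deciding when to flip these few switches.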

I've been toying with the idea of building upon Forth to do smart thermostat stuff.  If I stick to an ANS-flavored Forth, I can implement it on bare metal and eventually port it up to an OS when OSes become appropriate for battery-based systems.

What about Wi-Fi and stuff?  I have more to say about that later (e.g. I'd be happy to just use my smart phone in local proximity with Bluetooth to control and data-mine the thermostat).

Sunday, July 28, 2013

Haskell and the Embedded Programmer

Can you really use Haskell for Embedded Programming? Well, if you define your embedded platform as (at least) an Intel Atom based computer with gobs of memory... then probably. (Yes, I know about GHC ARM, but has anyone deployed real world apps with it?)

Ignoring the really cutting edge (academic) features of the language, Haskell seems ideal for the embedded developer. What? Wait. Hear me out.

First, let's define the class of embedded I am talking about.  How about my home monitoring application? The base station is headless and should run 24x7, so I classify it as "embedded".

It currently runs mostly Lua (it used to run Erlang) on Linux, but it could have been Java on Android for that matter. If you accept the notion that an app dependent on garbage collection can run on an embedded platform, then you've made the leap (for good or bad) to modern embedded development.

Unfortunately, all of these modern embedded systems don't seem to run all that well.  My Android phone pauses every once in a while (garbage collecting I suspect) and my Roku crashes after a few weeks of up time.  I am starting to worry about the stability of my home monitoring system (although it has run for a couple of months before I intentionally reboot it for upgrades, etc).   Not just due to garbage collection, mind you.

Now, Haskell also does garbage collection, but the compiler is backed with a lot of type checking and other supports for determinism. The idea is that if you code "safely" (say it with me: catch all of those potential exceptions -- they will bite you later if you ignore them), you have a better chance of avoiding run time errors.  Avoiding run time errors is very important for systems that run 24x7 with no head (display).

Erlang's "expect to crash" approach is a good meta-level approach to embedded design, but if you don't have careful design and code, recovery from random crashes is not a good thing. Your embedded code needs to be correct. Every crash situation is bad. It is worse if the problem is due to bad code logic (and not a perfect storm of resource conflicts).

Haskell is no panacea, but I've done 3 projects in Haskell thus far (one embedded in a commercial system) and I haven't been bitten by any type-based run time problems.

So, Haskell (or a sane subset of) appeals to me for embedded development.  But when is it overkill? I haven't done a lot of sophisticated stuff with it (yet), so almost everything I've done could have just as easily been done with Erlang or Lua.

Haskell was a bear on some of these projects, but I'm finding the results encouraging.  I feel better about the resulting systems.  Maybe it is time to revisit using it for my Home Monitoring system.

Friday, May 10, 2013

The 8051 won't die... will it?

The 8051 8-bit MCU architecture was introduced by Intel in 1980. It is still in wide production by at least a half dozen vendors (including my fave -- Silabs) at prices as low as half a buck.  My current favorite is the C8051F988 (currently used as the brains behind my sensor nodes). This MCU costs about $1.60 (for low volume purchases) and is essentially an 8-bit SoC.

The C8051F988 comes in a couple of packages, one of which is hand solderable. It requires no additional passive components (an internal oscillator can run at 25MHz), but I usually throw a capacitor on the power supply pin and a 4K resistor to pull up the RST line.  It has a mere 512 bytes of RAM and 4K of flash (a beefier version can be had for around a dollar more). I program it in Charley Shattuck's MyForth, so the memory doesn't feel so constrained.

The C8051F988 has an ADC, internal temperature sensor, a UART, SPI, I2C, etc.  It executes 70% of its instructions in 1 or 2 clock cycles.

Here are the power specs (from the datasheet):

Ultra Low Power Consumption
- 150 µA/MHz in active mode (24.5 MHz clock)
- 2 µs wakeup time
- 10 nA sleep mode with memory retention
- 50 nA sleep mode with brownout detector
- 300 nA sleep mode with LFO
- 600 nA sleep mode with external crystal
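To see what those figures buy you, here is a rough sketch of the average current for a node that wakes briefly once a minute and sleeps with the LFO running the rest of the time. The 5ms wake window is my assumption; the currents come from the list above:

```python
active_a = 150e-6 * 24.5       # 150 uA/MHz at the 24.5 MHz active clock
sleep_a = 300e-9               # sleep mode with LFO (for the wake timer)
wake_s, period_s = 0.005, 60.0 # assumed: wake for 5 ms once per minute

avg_a = (active_a * wake_s + sleep_a * (period_s - wake_s)) / period_s
print(f"average MCU current: {avg_a * 1e6:.2f} uA")
```

Well under a microamp on average, which is what makes multi-year battery life plausible for a once-a-minute sensor node.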
I'm impressed. But, of course, why bother with this when an ARM Cortex blows it away? (Ultra-low-power 32-bitters are starting to arrive.)

A couple of things keep me interested in this chip (and other 8051 variants), and that gets us to what this blog post is about.

First, look at the part count: for under $2 (hobbyist pricing) you get the 8051 chip, an (eBay provided) SOP board, a resistor and a capacitor.  I have the old Silabs serial EC2 programmer (for initial flashing of a Forth bootloader) and use the free Silabs programmer under WINE on Linux.  No ARM I know of has this simplicity.

Second, and most interestingly, the 8051 architecture doesn't seem to want to die.  It's a relatively simple chip (low gate count) and can be easily implemented as an FPGA core.  When you want to do simple things, it is very handy. It is built into *everything*: keyboards, microwaves, the new Bluetooth LE chips (like the popular TI CC254x).

Start taking stuff apart and you are likely to run into an 8051.

They don't want to die. Somewhere there are engineers who sniff at the new 32-bit Cortex M series (meant to replace those old 8-bit 8051/AVR/etc processors) and then get their job done with assembly or C code that has run solid for the past 30+ years.

Monday, May 06, 2013

Scale if you must, but don't forget reliability

My home sensor base station is using STOMP to transfer sensor node messages into an AWS cloud instance (running RabbitMQ).  I am using STOMP because I can't find an AMQP binding that supports SSL (outside of Java and Erlang).  Or, more to the point: the sensor base station runs Lua, and my AMQP binding for Lua is built on amqplib (which doesn't support SSL), while my pure Lua STOMP binding runs over SSL. But I digress...

The STOMP client runs 24x7, pumping messages into the cloud hosted RabbitMQ at a rate of 2-3 messages per minute.  This is not a lot of traffic, but I expect it to scale.

In order to develop (and test) the "server/logic" side of the system, I am running the consumer of messages on my laptop (it connects to the AWS RabbitMQ as a consumer).  The consumer is also talking STOMP.

However, when my laptop lid is closed, all consumption stops. So, for example, overnight I can accumulate a few thousand sensor messages.  Firing up the consumer in the morning should just suck all those messages down and pick up where it left off. Unfortunately, there is a glitch (in RabbitMQ?) where after a few hundred messages (all ACK based -- fault tolerance, baby!) it stops receiving new messages and the remaining messages are marked by RabbitMQ as "unacked".  Restarting the consumer happily consumes a few hundred more before the same glitch recurs.

This is a serious problem and I need to figure out if RabbitMQ is the culprit.  Interestingly, I found that by slowing down my consumption rate (read/respond every 100ms) the problem goes away. Argh.  Well, that won't scale, now will it?

So, until I figure out what the real problem is, I'll keep consuming as fast as I can.  But what about the unacked messages? Well, this is where "reliability" comes in.  By default, all of my consumers do timed reads: if they don't receive a message in 60 seconds, they terminate. (I don't use RabbitMQ heartbeats. I have an "application" level heartbeat that makes sure the whole system flow is working from base station to cloud; it fires every 30 seconds.)   The consumers are written in Lua, but I did learn one thing from Erlang: failure is okay, just plan your recovery.
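The timed-read-then-terminate pattern looks roughly like this (a Python sketch; `read_message` is a hypothetical client call standing in for my Lua STOMP binding):

```python
import sys
import time

def run_consumer(read_message, handle, timeout_s=60):
    """Timed-read loop: if nothing arrives within timeout_s, exit and let
    the process supervisor (Upstart, systemd, ...) restart us.  On restart
    the broker requeues any unacked messages, so nothing is lost."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        msg = read_message(timeout_s=1)   # hypothetical short timed read
        if msg is not None:
            handle(msg)                   # process, then ACK inside handle
            deadline = time.monotonic() + timeout_s  # activity resets the clock
    sys.exit(1)   # nonzero status: the supervisor restarts the consumer
```

The point of the design is that the recovery path (requeue and restart) is the same whether the silence comes from a real outage or from the mystery glitch.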

So, once the process terminates, Ubuntu Upstart restarts it.  The result: The system recovers on its own from this bug.  The system continues to run.  The unacked messages are requeued and delivered.

Scaling is great, but don't forget reliability!

Wednesday, May 01, 2013

Using Forth to question the Status Quo

Whenever I face a tough problem I try to break it down to the essentials.

Often using traditional methods only leads you to traditional solutions.  Sometimes traditional solutions are just fine, but what if you need something special?

This isn't about programming in Forth. It's about taking the Forth Approach. This means looking at the problem in a different way.  You can use your own favorite language, but consider the path taken (time and time again) by Chuck Moore (Forth's inventor). His chip designs are novel. His approach to problem solving is stunningly minimalist.  Go ahead and google him, I'll wait...

Sometimes it is worth asking: "What is the simplest possible way to do this?"
In my mind, simple doesn't (necessarily) mean easy.

Consider designing a wireless temperature sensor.  You want to mount one in every room in your house. Now, before you whip out your Raspberry Pi, consider three factors: cost, power and size.  This sensor should cost under $20 in parts, be discreet enough to attach to a wall, and should run for at least 1 year reporting temperature once per minute.

Now, thoughts of a Wi-Fi enabled Raspberry Pi (too power hungry and expensive) wane. How about something with Bluetooth LE? ($15 at best; no money left for the board, battery and other parts, plus you have to deal with a fairly complex "standard".)

I discussed my approach a year ago here (Why Forth still matters in this ARM/Linux...). It basically describes a $7 RF transceiver and a $3 MCU (that requires only a couple of resistors and caps to function).  The MCU has a built-in temperature sensor.   I hand-wrote an implementation of RC4 for a modicum of security. I've coded this all in Forth, but I could have used C. The implementation language doesn't matter -- the approach is inherently Forth-ish.
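For reference, RC4 itself is tiny, which is why it is attractive on a constrained MCU. Here is the textbook algorithm as a Python sketch (mine was in Forth); note that RC4 is cryptographically broken these days, so treat this as obfuscation, not real security:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4: key-scheduling (KSA) then keystream (PRGA) XORed
    with the data.  Encryption and decryption are the same operation."""
    # Key-scheduling algorithm: permute S under the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm: one keystream byte per data byte.
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```

The state is a 256-byte table plus two indices, which fits comfortably even in a 512-byte-RAM part like the C8051F988.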

A year later and I still can't find anything cheaper.  The status quo says I should go to Bluetooth LE (or at least Zigbee). But, at what benefit?

Okay, this is an even cheaper RF transceiver (just $2.99, but in the crowded 2.4GHz band):

Complexity and the Future: Revisiting technological foundations of Smart Phones

Every once in a while it is good to sit back, take a deep breath and look at the state of things.

As technology hurtles forward, we accept building complexity upon complexity.

The Internet is very complex. Fine. I would call it "deeply" complex. In that manner, it is similar to an organism. However, it is based upon a very simple infrastructure called IP (Internet Protocol). Upon that we have a host of other protocols with the most pervasive being TCP.   From there, the complexity escalates rapidly.  But that's okay.  Underneath is TCP/IP and I can always grab hold of it -- it is the earth, solid and firm beneath my feet.

Now, my phone is very complex. A Smart Phone is made up of a bunch of software. Let's consider Android. Underneath is a Linux kernel (but you can't normally touch that -- imagine floating just a few inches above the earth but never touching ground). On top of that there are processes, a VM, and a bunch of apps.  Oh, and off to the side is the actual phone stuff (sacred and untouchable).

Every once in a while the phone gets slow or needs a reset.  It is indeed a complex beast.

I'm okay with the Internet being unfathomable by a single mind, but my Smart Phone is headed firmly in that direction. So many things working together, so complex. But, unlike the Internet, my phone's software is not self healing. There is no notion of routing around bad processes or chips. When bad things happen, the phone is nearly useless.

What if I want to just make a phone call? Or perhaps send a text message.  Maybe all I want is to message someone over the internet or read my email?

What if we stripped a smart phone of everything but the communication essentials?

Nokia is trying to (re)find a niche. They are offering a new phone for the third world: it will sell for about $20 in the European market, and the standby time is 35 days.

If there were a bunch of these phones deployed with free text messaging, what could you do with it?
What if there were a slightly better alternative to SMS (larger payload)? Or perhaps a multi-segment protocol on top of SMS that the phone could parse?
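A multi-segment scheme could be as simple as spending a few characters of each SMS on a header so the phone can reassemble the pieces. A hypothetical sketch (the `id:seq/total:` header format is entirely made up for illustration):

```python
def segment(message, msg_id, limit=160):
    """Split a long message into SMS-sized parts, each prefixed with a
    hypothetical 'id:seq/total:' header for reassembly."""
    def header(seq, total):
        return f"{msg_id}:{seq}/{total}:"
    room = limit - len(header(99, 99))   # reserve worst-case header space
    chunks = [message[i:i + room] for i in range(0, len(message), room)] or [""]
    return [header(n + 1, len(chunks)) + c for n, c in enumerate(chunks)]

def reassemble(parts):
    """Order parts by their sequence number and strip the headers."""
    ordered = sorted(parts, key=lambda p: int(p.split(":")[1].split("/")[0]))
    return "".join(p.split(":", 2)[2] for p in ordered)
```

Since SMS delivery order isn't guaranteed, the sequence numbers do the real work; the id field would let a phone interleave several long messages at once.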

Why does the entry level into the 21st century Internet require a Smart Phone ($$$)?  Why can't I have a slightly dumber phone that plays well on the Internet?

Friday, April 19, 2013

Home Alone Elderly Monitoring System now has its own Blog

Starting today, I'm posting all of my notes on my Elderly Monitoring system here:

All of the old home automation  material will remain here, but look for new stuff on the new blog.

Thursday, April 04, 2013

A Pattern Language for Haskell System Programming?

Continuing on my previous blog entry (Haskell for the Working Programmer), it occurs to me that there needs to be a set of idioms to define a Haskell for System Programming.  Or, perhaps more strongly: There needs to be a Pattern Language for developing System Programs using Haskell.

When I say System Programs, I am stretching the term a bit to mean an application that touches the real world a lot. That is, it isn't so much focused on doing a ton of data manipulation or business logic, but spends most of its time interacting with the system (hardware, OS, or network) at hand.  This is the land of embedded development, network tools and server management.  Haskell can play here, but I don't see a lot of literature about how to do this.

Haskell is a rich ecosystem for implementing cutting edge ideas. This is its academic heritage and it keeps the language healthy and forward looking. But for us working programmers, we can't always be interrupted from our tasks to explore new mind blowing techniques (e.g. "Okay, I grok Monads, so let's get some work done... whoa, wait... Arrows? I need to start using Arrows?").

So, you want to build a Home Monitoring base station (something to collect sensor data, analyze it, and perform some action or notification via the Internet).  Oh, and it needs a built-in web server for configuration.

How do you do this using Haskell?

That isn't quite a fair question. A Haskell newbie would not be advised to adopt such a big project until they've mastered the language a bit. So, let's ask again, but a little more focused:

I'm an Erlang/Clojure/ML/Scheme programmer (so I understand FP); I've played around with Haskell and now I want to implement my Home Monitoring base system. Where do I start?

That's better.

So, what is needed here is a way to guide a developer towards their goal with just a subset of Haskell. (I've recently written over 3000 lines of Haskell for a commercial system. It doesn't sound like a lot, but since it was mostly system programming stuff, I leveraged lots of Hackage libraries and FFIs.  In retrospect, I would say that I used a fairly small subset of advanced Haskell features to get the job done.)

An aside: I mentioned the phrase Pattern Language as opposed to Design Patterns.  I don't want to bring up an old war in the Pattern Community (is there an active Pattern Community these days?), but I am talking about a group of related Patterns (or Idioms) that you can select from to help you build a specific thing. Patterns in a Pattern Language are not ad hoc: each Pattern leads to one or more further Patterns you may choose to help you build the thing. Check out the original source for further understanding and information. (I'm too lazy to find a great example of a software Pattern Language for system programming, but you can look at my own 1997 contribution to the community, A Pattern Language for User Interface Design, for the general idea of what I am talking about.)

Now, where were we? Subset of Haskell...

This subset of Haskell wouldn't be restrictive. By all means, if an advanced Haskell technique helps get the job done or makes the program more manageable, then use it.  But Haskell has an ocean of ideas, and new users are apt to drown in it instead of getting their project completed.

You don't need to master all of Haskell, just master what you need.  Just by using Haskell (in particular the type system), you are already on your way to constructing correct system programs.  (But, please remember to handle exceptions -- the monadic wrapped real world is full of unexpected behavior).

So, what next?  I don't know. I am wondering if something like The School of Haskell would be a good place to start building such a pattern language (where examples could be not only listed but tried out).

Wednesday, April 03, 2013

Haskell for the Working Programmer

There is a need for a book.  I would call it "Haskell for the Working Programmer".
The audience would be, you guessed it, working programmers.  Maybe even working functional programmers.   I already grok functional programming; I have been doing it for over a decade now. It is Haskell I'm trying to grok.

Sure, there is Real World Haskell, and it is great. But, it is also getting dated and perhaps the pacing is a bit too iterative for my tastes. That is, they start off with sub-optimal (naive) solutions to problems and then (in later chapters) proceed to make them better and better.  Too much reading!

As a working programmer, I tend to learn better by studying and stealing exemplars. 

Maybe I want a cookbook.  Or, perhaps, what I am really looking for is a book that recognizes that my first goal is to compose a working program.  Of course, I want to understand why something works, but maybe, just maybe, I want to see something (anything) work first!

Should I feel ashamed that I still haven't mastered Monads?  (Sure, I've used them, debugged them and have even constructed a few, but I can't say yet that I have mastered them -- at least not the theory.) Can I call myself a Haskell programmer?

Read this piece by Paul Callaghan (you can skip down to the Monads section).  It rings true.

In this way, I feel that Haskell is a bit like Perl.  You can write lots of good, usable Perl without diving into the deep dark arts of Perl objects (bless them).  And you don't have to write a CPAN module to be a productive Perl programmer. The Perl community has accepted this (or at least seems to).  Do a google for Haskell and you'll find lots of computer science-y (aka research) blogs and perhaps a dozen explanations of why Monads are easy and how everyone uses them -- and then you feel dumb when you can't figure them out.

Haskell needs more Working Programmers. Erlang seems to have snagged them all and it was the Monad that scared them away.

If you live in the DC area and want to meet with (at least one) Working Programmer, consider joining the DC Area Haskell Users Group and help make it a reality.

Thursday, March 28, 2013

Intel Atom (Haskell) vs ARM (Lua) for Home Monitoring Station

ARMs are cheap and plentiful, but computationally limited.  I'm looking to do some image processing (motion detection is done by cheap webcams rather than PIR sensors) using OpenCV. This should work on an ARM, but how slow will it go?

Haskell doesn't have much of an ARM presence and I can't rely on bindings to port well.  If I use Haskell (and I have already prototyped some motion detection using Haskell and OpenCV), then I'll need a beefy server. Why?  Imagine 5 or 6 motion detectors working in parallel to form a comprehensive picture of movement in a household.  Plus, some of the visual detection is fairly fine-grained (detecting steam, smoke and burner flame from a stove -- more on that later!).

I feel like I am selling my ideas short by going with ARM (at this point). I'd toss the Haskell code and use Lua (in which I've also done some OpenCV prototyping).

What is the state of small (set-top) Atom PCs?
Well, this one looks promising ($244). The price is a bit high (compared to the Android/ARM stuff out there), but it could be a good system to target.


Tuesday, March 26, 2013

Home Monitoring / Elderly Care project reboot

This blog has been an on-and-off forum for my Home Monitoring project for elderly people (a non-intrusive, internet-aware means of keeping track of Grandma/Grandpa).

For the past year I've been exploring different technologies, from X10 to Z-Wave to Cameras to homebrew sensors.

Within just a year, prices have dropped dramatically on embedded (ARM) computers, Android tablets, and USB/Wi-Fi cameras.  I'm not so much dedicated to cheap hardware, but I am open to any options given the steep prices for Z-Wave and other proprietary tech.

More info to come.  This is going to be interesting and... different.

For background, check out:

Stay tuned.

Friday, March 15, 2013

Thinking Big and Home Monitoring?

Sometimes I think too small.  I come up with a small idea and pour a lot of time into it,  cutting and polishing diamond from stone.  That is fine. That is where I spend a lot of time. But that is not me.

I like Big, bold ideas.

Now is a good time to be looking for Big Ideas.  The internet is huge.  It is the largest thing that humankind has devised. But it is not a single thing; it is more than a networked collection of servers and clients. It is the essence of virtual. We run our ideas in virtual machines.  We use virtual memory.  In our distributed, concurrent web of things we don't ask how big, but how many?

What if my home monitoring project isn't really about a bunch of internet enabled sensors? What if the sensors are just players in a virtual network that represents home?  How do I make sense of all of the collected data?  How do I maintain privacy?  How do I make all of this ubiquitous?

The Nest smart thermostat is a diamond.  It has one task and it does it (apparently) beautifully.
But I am interested in the bigger picture.  I'm interested in a more radical idea:

What if you could monitor your home by modelling your home (sensors) as intelligent agents in the cloud?  What if this model ran on a virtual network dedicated exclusively to you?  What if your virtual network of sensors could tell a story (a story about the well-being of your home)?

What I don't want:  An alarm sent to my phone notifying me that Grandma's stove has been left on for over an hour without any sort of movement in the kitchen.

What I do want: An alarm sent to my phone because it is 11pm, the stove is on, no one has moved around in the kitchen for 20 minutes,  the TV is on in the bedroom and Grandma has never used the stove that late (her profile has shown that she has never used the stove after 9pm).
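That kind of alert is really just a conjunction of weak signals. A minimal sketch in Python (the function, field names, and thresholds here are invented for illustration, not from any real system):

```python
from datetime import datetime, time

def stove_alert(now, stove_on, minutes_since_kitchen_motion,
                latest_stove_use_in_profile):
    """Fire an alert only when several weak signals agree:
    the stove is on, the kitchen has been empty for a while,
    and the hour is outside the resident's historical pattern."""
    return (stove_on
            and minutes_since_kitchen_motion >= 20
            and now.time() > latest_stove_use_in_profile)

# 11pm, stove on, no kitchen motion for 25 minutes, and the
# profile says the stove has never been used after 9pm -> alert
print(stove_alert(datetime(2013, 3, 15, 23, 0), True, 25, time(21, 0)))
# -> True
```

The interesting part is the profile threshold: it is learned per resident, so the same sensor readings mean different things in different homes.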

I dunno. There is a realistic, big idea somewhere in there. I just need to tease it out.

Next week I am heading off to the San Francisco Erlang 2013 conference.   Hopefully it will remind me that I am not a diamond cutter.

Monday, March 11, 2013

Another Weekend hack: BSON in Lua

I don't use MongoDB, but I'm doing some BSON (instead of JSON), so I whipped up a BSON implementation in pure Lua (because no one else seems to have done this).
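For the curious, the BSON wire format itself is pleasantly simple. A toy encoder for documents of string fields (sketched here in Python rather than Lua, and handling only the string element type) shows the shape of it:

```python
import struct

def bson_encode(doc):
    """Encode a dict of string keys/values as a BSON document.
    Layout: int32 total length, elements, trailing 0x00.
    Each string element: 0x02, cstring key, int32 length, value, 0x00."""
    body = b""
    for key, val in doc.items():
        v = val.encode("utf-8") + b"\x00"
        body += b"\x02" + key.encode("utf-8") + b"\x00"
        body += struct.pack("<i", len(v)) + v
    # The total length counts the length field itself and the final null.
    return struct.pack("<i", len(body) + 5) + body + b"\x00"

# The canonical example from the BSON spec:
print(bson_encode({"hello": "world"}))
# -> b'\x16\x00\x00\x00\x02hello\x00\x06\x00\x00\x00world\x00\x00'
```

All integers are little-endian, which is why the `<i` format is used throughout.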

The files are here:


Sunday, March 03, 2013

My Weekend Lua Hack: A LuaJIT/RabbitMQ binding

I spent Saturday afternoon getting the LuaJIT ffi to work with RabbitMQ. It was surprisingly easy to get a simple publish/consume script up and running.

The files are here:

Wednesday, February 27, 2013

Lua, LuaJIT and interesting things

It's been (at least) 6 years since I've last written any Lua code.
I bumped into this: which looked really interesting.

Can Lua do this?
What is Lua up to?

Well, there is LuaJIT, and it is (apparently) very fast:

Hmm.. Where else can I find Lua?
In Erlang?

Okay. Time to get the new Lua book and get myself back up to speed.

Tuesday, February 26, 2013

Experimental Erlang mumble server

I've been avoiding github for a long time now. I've got a bunch of hacks I've accumulated over the years and now figure that github is as good a place as any to put them.

So, my first push is an experimental Mumble (VoIP) server I wrote a while back in Erlang. It compiles and runs a basic VoIP chat server but is probably not up to date with the Mumble spec. Plus, there is some half-baked voice mail stuff included that doesn't work yet. It's called Maunder.

I don't expect anyone to fork it (or contribute), but it is better for it to live there than to exist solely on my hard drive.

You'll need a mumble client to talk to it.

Wednesday, February 06, 2013

Haskell vs Erlang

I've been doing copious amounts of Haskell and Erlang at work for the past few months. I can't say that I'm an expert at either (in particular, Haskell continues to fascinate and frustrate me), but I have solid software that is about ready. (Laptop monitoring software that must work silently and safely 24x7.)

Most of my prototype was done in Haskell and I rewrote some of the software in Erlang.  There were pluses and minuses for both languages.

Take, for example, some applications I had to write to interface with D-Bus (pretty much the standard for Linux process-to-process communication these days). I couldn't find a D-Bus interface in Erlang, but the Haskell one was pretty comprehensive. It was a bit of a struggle, but the payoff was that when I finally got the code to compile, it pretty much ran flawlessly. Haskell is pretty strict about data types, and D-Bus is all about moving data structures around. There was no "ball of mud" structure that could trip me up later; I had to describe the data structures explicitly and in full. Once the D-Bus code compiled, it worked.
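The point about explicit structure translates even to dynamically typed languages, minus the compile-time guarantee. A rough Python analogy (the class and field names are invented for illustration, not from any D-Bus library):

```python
from dataclasses import dataclass, fields

@dataclass(frozen=True)
class DBusSignal:
    """Every field is declared up front -- no anonymous
    'ball of mud' dict that can silently drift over time."""
    interface: str
    member: str
    serial: int

    def __post_init__(self):
        # Enforce at construction time what Haskell enforces at compile time.
        for f in fields(self):
            if not isinstance(getattr(self, f.name), f.type):
                raise TypeError(f"{f.name} must be {f.type.__name__}")

sig = DBusSignal("org.freedesktop.UPower", "DeviceAdded", 42)
```

The check fires when the object is built, not three months later when some consumer finally touches the malformed field.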

Three months later and that Haskell D-Bus code just purrs along.

But, then there was BSON.  I had to produce and consume some structured binary data. I considered using Google's Protocol Buffers, but that was too rigid. I was still working out fields that would comprise the data and didn't want remote stuff to just stop working because I tacked on an extra field.
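The loose contract I wanted is roughly this: read the fields you know, ignore the ones you don't, so a field tacked on at one end can't break the other. A minimal Python sketch (the field names are hypothetical):

```python
def decode_reading(msg):
    """Pull out only the fields this version understands.
    Unknown keys (added by newer senders) are silently ignored,
    and missing optional keys fall back to defaults."""
    return {
        "sensor":  msg["sensor"],            # required
        "temp_c":  msg.get("temp_c"),        # optional
        "battery": msg.get("battery", 100),  # optional, with default
    }

# A newer sender tacked on "humidity"; this older decoder doesn't care.
print(decode_reading({"sensor": "stove", "temp_c": 180.5, "humidity": 40}))
# -> {'sensor': 'stove', 'temp_c': 180.5, 'battery': 100}
```

Protocol Buffers can be made to behave this way too, but with a schema-less format like BSON the tolerance comes for free.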

Here, Haskell was a struggle. You just don't toss around arbitrary data. I was getting runtime failures whenever an errant structure element appeared. Yes, I know, I should be handling the exceptions, but this was proof of principle code and I had yet to harden it. Bad me.

Erlang, meanwhile, shined where it usually does: when I wanted failure recovery strategies and tons of crash diagnostics, it produced them. Plus, Erlang OTP is a lot like Unix: it is a complete runtime environment (log rolling, process management, etc.).

Both languages were a pleasure to use. Haskell was a pleasure to compose apps with and Erlang was a pleasure to orchestrate a system of communicating processes.  They both have their places.  I can't see replacing any of the D-Bus interfacing apps (currently in Haskell) with Erlang, yet Erlang  OTP certainly rocks with its ability to stay up and running.

Sunday, January 13, 2013

Kitchen monitoring with a Robot?

In my Kitchen monitor adventures (a long going project to keep watch on an elderly person's stove activities), I keep running into privacy issues.

If you remember, my project has evolved to using a wireless camera (and image recognition) to keep an eye on kitchen occupancy and stove usage.  The idea is to notify someone if the stove has been left unattended (no one in the kitchen) for some critical amount of time.  I need a combination of temperature and motion sensing.  The camera would keep track of motion (a little more sophisticated than a simple yes/no motion detector) and a temperature probe would keep track of stove usage.

One privacy concern is actually security: the system would consist of wireless nodes communicating with a base station. There are many ways to jam signals (both on purpose and as happenstance), and if I don't authenticate the data it could be spoofed (pranksters).

The other privacy concern is more personal. I installed a wireless camera in my kitchen and it started to creep people out. Is that thing on?  Are you taking my picture?  Is video being beamed over the internet?

There is indeed something creepy about being monitored by a camera. Even if I offered promises that the images were "just" for computer analysis and that no human would see them (...starting to sound like the TSA here...), family members weren't convinced. Mount the camera on a rotating turret and suddenly it becomes downright ominous.

So, I started to think more about personal privacy. I had no intent of sending "snapshots" to anyone. The camera was just to be used as a more sophisticated sensor. What if I mounted those sensors in a robot? That is, what if the housing was more robot-like than camera-like?

This isn't about hiding the camera. There is something subtly more comforting about being watched by a self-contained "thing" (be it a cat, dog or maybe even -- robot).

This wouldn't be a mobile robot. Why does every robot have to move around? What if this one was the size of a toaster and could be placed on the table or maybe a counter top with a view of most of the kitchen? Yes, it would have to rotate its "head" to see everything (versus a camera mounted in a corner near the ceiling). What if I put the whole computer into the robot so it becomes completely self-contained? No radio between it and the sensors. Maybe wi-fi "just" for communicating serious events (e.g. the kitchen is on fire).

Are we ready for kitchen robots?

Oh, and I am starting to play with an MLX90614 infrared thermometer to see if I can do the stove monitoring from a distance (from the robot itself).

Stay tuned.

Saturday, January 05, 2013


Every year or so I brush off my dusty Perl interpreter and mess a bit with AFT. I am not sure how relevant the system is these days. I still use it once in a while (I still resist using word processors).

I've been thinking about writing an AFT config/rule file to generate EPUB or MOBI formats for e-readers.  I don't have a tablet or e-reader myself (yet -- ever?), but this looks like a useful exercise.  I've also been thinking about standing up an online version of AFT where you can submit your files (or type them in) and generate HTML, PDF or EPUB/MOBI.

So much copious free time...yeah, right.