Monday, February 22, 2016

Premature Optimization during Design

As I design my embedded software, I am always looking for the most efficient tools and designs. We have limited resources and must therefore constrain our designs. Or do we?

I remember struggling to get Donald Knuth's TeX typesetting system to compile and run on the big DEC2060 timesharing system back in 1984. It was a beast of an application and not written to run on anemic platforms. It was Knuth's idea to solve the typesetting problem, not write an application that would run on limited hardware.

Now, TeX (same sources pretty much) can run on your Android phone.

Back in 1986 I was trying to get Richard Stallman's Emacs to compile and run under Unix. It was a big, bloated and slow beast (but worth it for all the power it gave me -- I had already been an Emacs addict for a couple of years).

Now, I install it on every Linux/BSD laptop I use and fire it up as needed.

These systems (and others) were not designed to work on minimal hardware, but over the years hardware caught up with them.

I am not advocating that IoT devices use big bloated tools, but as far as "basestations" go... why are we constraining ourselves to RasPis and Beaglebones?


Tuesday, February 16, 2016

Mutter... Adventures in VOIP/messaging systems

Over the past couple of years I've been playing around with a "toy" Mumble server I developed.
Mumble, if you don't know, is a popular gamer VOIP and messaging system.  It is open source and has clients running on Windows, Linux, iOS (iPhone) and Android (I prefer Plumble).  It has a published spec for communication, so it is relatively easy to build a minimal server.  I've built one in the past in Erlang and have recently started one in Lua(JIT).
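For the curious: the control channel is TLS over TCP and, as I read the published spec, every message is framed with a 2-byte type and a 4-byte length (both big-endian) followed by a protobuf payload. A minimal sketch of the header parse in C (the Lua version looks much the same):

#include <stdint.h>

/* Mumble control frame: 2-byte type + 4-byte length, big-endian,
   followed by `length` bytes of protobuf payload (per the published
   spec; e.g. Version=0, UDPTunnel=1 for voice carried over TCP). */
typedef struct {
    uint16_t type;
    uint32_t length;
} mumble_header;

/* Parse the 6-byte header read off the TLS socket. */
static mumble_header parse_header(const uint8_t buf[6])
{
    mumble_header h;
    h.type   = (uint16_t)((buf[0] << 8) | buf[1]);
    h.length = ((uint32_t)buf[2] << 24) | ((uint32_t)buf[3] << 16) |
               ((uint32_t)buf[4] << 8)  |  (uint32_t)buf[5];
    return h;
}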

Why would I want to implement my own Mumble server (I'm calling it Mutter) when a perfectly good one exists as part of the Mumble project?  Well, I am curious how many interesting things I can do with a compliant server without touching the client software.

Some of my experiments involve creating additional levels of authentication (e.g. a query response from a server bot, additional detection of client OS/hardware stuff, etc) as well as the potential to bridge to other VOIP or messaging systems.

Another thing I am curious about playing with is "ad hoc" conference calls that could spawn quickly and privately in the cloud.

Right now it is mostly for fun. I've got basic messaging and TCP voice channels working. I am not interested in building a full blown Mumble server (that already exists!) but curious as to what can be done minimally....

S.A.F.E: An IoT compatible Manifesto

My home monitoring projects/products follow a manifesto I call SAFE.  SAFE is an acronym for Set And Forget Engineered.  It follows the basic tenet that home monitoring systems should be reliable and not require lots of care and attention.  You set it and then forget it.

This manifesto doesn't exclude IoT (Internet of Things) devices, but it has some rules. Let's consider the class of devices to include: Flood monitors, Stove usage monitors, Motion detectors and Entry/Exit monitors.


  1. If you don't run off of AC, your nominal battery life should be 5 years.  Assume 2 years of worst case (power consumption-wise) performance.  (There's a quick current-budget sketch after this list.)  Do you check/change your smoke alarm batteries religiously every year?  Maybe not.  If you can't guarantee 2 years of performance (and you are a critical monitor) then you should run off of house current (AC).
  2. If you need to run when power is lost, then you should have backup batteries that last at least a couple of days. This is particularly important for Flood monitors, etc.
  3. If you can't automatically recover from a power failure, use backup batteries to keep the system running or use persistent memory to snapshot states.
  4. Your device should have some "local" alert capability and not rely 100% on the Internet for notification.  If I am in the house, there should be an audible alarm and not reliance on my smart phone being notified via the Internet.
  5. If Internet notification is critical, don't trust Wi-Fi.  Let's use an analogy: your car's critical systems (engine, steering, braking, locks, etc) should, by design, run on a separate network from your entertainment system (radio, etc).  Your IoT device probably should follow that same rule. Wi-Fi can get congested, it can have password changes, and it is a common target for attack.  But what can you use instead of Wi-Fi? Consider ZigBee, XBee or another more robust protocol (no, not Bluetooth!) as the delivery transport to the home router. All home routers still feature Ethernet ports, so your transport receiver can be plugged in there. You still rely on the home router, but you are not affected by all the issues with Wi-Fi.  Now, of course, you should consider encryption and authentication too when using a non-Wi-Fi protocol...
  6. Don't design for over the air software/firmware updates. This is a HUGE security hole and although you may have thoroughly thought it through -- you haven't.  Get your software as  correct as possible and consider doing updates through a computer or smartphone "directly" and "physically".  Things that can be controlled through the Internet will be a nice fat target for people who want to control your stuff through the Internet.  Don't advertise your house as hackable!
  7. No SD cards. Nope. SD cards are not designed for reliability or longevity. Use persistent memory that has at least a 10 year retention.
  8. No rechargeable batteries.  How long do you really get on a Li-ion/poly?  Two years? Five years?
  9. Avoid LCD/button interfaces as much as possible. What is this, the 1990s?  If you need a way to silence an alarm or (temporarily) disable a sensor use touch or tap and a simple indicator. 
  10. No disabling or critical manipulation through the Internet.  Sorry, see #6.
  11. Know thy hardware. Don't just choose a Raspberry Pi or Arduino unless you know exactly how each critical component is rated (e.g. environmentals, write endurance, etc).
  12. Know thy software. Don't just load up a Linux and go. Are there processes running that you don't understand? Software updating itself, maybe?
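To put a number on rule #1, here is the back-of-the-envelope current budget. The battery choice and capacity are assumptions for illustration, not a spec:

/* Rough current budget for rule #1, assuming (hypothetically) a pair
   of AA lithium cells at around 3000 mAh total. */
#include <stdio.h>

int main(void)
{
    const double capacity_mah = 3000.0;       /* assumed battery capacity */
    const double hours_2yr = 2.0 * 365 * 24;  /* worst-case window */
    const double hours_5yr = 5.0 * 365 * 24;  /* nominal window */

    /* Average draw the whole device must stay under, sleep included. */
    printf("2 yr worst case: %.0f uA average\n", capacity_mah / hours_2yr * 1000.0);
    printf("5 yr nominal:    %.0f uA average\n", capacity_mah / hours_5yr * 1000.0);
    return 0;
}

That works out to roughly 171 uA and 68 uA respectively -- tight numbers once a radio is involved.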
I try to design to these tenets. I am surprised how many commercial IoT devices seem to ignore them.



Wednesday, November 25, 2015

Elderly Monitoring: Revisiting with Brutal Simplicity

Short backstory:

I've been running over a year now with the current Elderly Monitor system in my house. Mother-in-law has dementia and the current system lets us know her general movements throughout her living space (e.g. how long has she been up this morning? How many trips to the bathroom?)  and whether or not she has opened the front door (e.g. is she going for a walk? Has she made her escape?).
The current system consists of X10 wireless monitors for the door (open/close) and living spaces (motion detection).  This is fed to a small Linux computer I've coded with tracking logic, the ability to speak "Front Door is Open", and the ability to communicate (event message and status query) using XMPP to a cloud server (Digital Ocean) and to our smart phones (running Xabber XMPP clients).
It has been a success, but with the lessons learned I've found that the most critical aspect of this setup is the ability to simply detect that the front door has opened and then notify us via a speaker in our bedroom.  All of this flows from X10 to Linux computer to soundcard (the Cloud is not involved here).  Still, this seems overly complex. Can it be simplified yet still be expanded to deal with "enhancements" in the future?

Let's review some of the shortcomings of the current solution:

  1. X10 Wireless - This has been "mostly" reliable and inexpensive. Still, I do have an RF noisy house and what if the neighbor starts using X10 RF?  (Not likely, but for a rock solid solution, this is a weakness).  Also X10 pretty much means that I have to have a full blown PC (with X10 CM19a transceiver) unless I decide to seriously hack the protocol and build my own RF receiver.
  2. The PC - Why do I need a full blown PC just to do the basic "Door is Open"?  
  3. The Cloud - Sure, I've got it, but if I want to distribute the "Door is Open" beyond the bedroom speakers, I have to connect via the Cloud (currently) and subscribe to XMPP messages (essentially what I do with the smartphone).  I need to make some of this stuff local.
  4. Batteries. Batteries. Batteries - Damn. Did the X10 sensor batteries die? When did I change them last? Ugh.
  5. There is no indication of whether or not she has just opened the door or left the house. I can code this logic, but since I want to address the above short comings first, this will have to wait.
What is the simplest thing I can possibly do?  Especially if I want to add logic like #5?
I am revisiting this problem and addressing it with brutal simplicity. 

Two things are going to get ripped out:
  1. The PC. No more computer. A microcontroller should be able to do this.
  2. No more X10. I'm going "wired". No more batteries either. I want to "set and forget" this thing. I'll deal with a little wiring. All of the current sensors are less than 20 feet apart: Door, hallway, bedroom, bathroom.
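As a taste of how simple the replacement can be, here is a sketch (generic C with hypothetical pin and HAL names -- not the final design) of the sort of main loop I have in mind: poll four wired contacts, debounce, and sound the local alert.

/* Minimal wired-sensor loop -- a sketch, not a final design.
   read_pin(), speak_alert() and delay_ms() are hypothetical HAL calls. */
enum { DOOR, HALLWAY, BEDROOM, BATHROOM, NUM_SENSORS };

extern unsigned char read_pin(int sensor);  /* 1 = contact/motion */
extern void speak_alert(int sensor);        /* e.g. "Front Door is Open" */
extern void delay_ms(int ms);

static unsigned char last[NUM_SENSORS];

void monitor_loop(void)
{
    for (;;) {
        for (int i = 0; i < NUM_SENSORS; i++) {
            unsigned char now = read_pin(i);
            if (now && !last[i]) {          /* rising edge only */
                delay_ms(30);               /* crude debounce */
                if (read_pin(i))
                    speak_alert(i);
            }
            last[i] = now;
        }
        delay_ms(10);
    }
}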
Stay tuned.  More details to follow as I hash out my brutal "simplest thing that could possibly work" design.


Monday, October 19, 2015

Elderly Monitoring: Telling a Story with minimal sensors

How many sensors do you need to tell a story?

I have a motion sensor in my mother-in-law's room, her bathroom (down the hall from her bedroom)  and an open/close sensor on the house front door (which is next to her room).   With just these three (cheap) X10-RF sensors I can tell a lot about the nightly activity of my dementia-suffering guest.

If you  haven't been following this blog's "elder care" stories: My mother-in-law has dementia so she lives with me and my family. She is apt to get confused and wander. Her room (the only available extra room in the house) is unfortunately next to the front door.  The rest of the bedrooms are one floor up. Mother-in-law needs constant monitoring. She has "escaped" our house several times (at night and at dawn -- when we are still asleep) with the idea that she is going to walk to "her house".  She also complains of insomnia and chronic pain.  Is she sleeping at night? Is she up wandering the house?

So, I've designed a cheap  sensor-based monitoring system. I explained that tech setup elsewhere.  Here I want to posit the question: What kind of "story" can you tell with a couple of sensors?

With just bedroom and bathroom motion sensors and a sensor to alert us when she opens the front door, I can talk about the following:

  • Did she leave the house or is she just "checking the weather" (door opens but is followed by movement in her bedroom)?
  • Is she restless at night (motion in bedroom)?
  • How many times did she visit the bathroom?
  • Is she in the bathroom for an unusually long time? (Bathrooms are where a high incidence of heart attacks tends to occur)
  • When did she get up in the morning? (Motion in bedroom, then bathroom, then bedroom again)
Now, my current software doesn't tell a complete story (yet), but with the reports/alerts it generates, my wife and I can determine any of the above scenarios with a quick view of the data on our smartphones.
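As a sketch of the kind of inference involved (the names and the timing window are placeholders of mine, not the real code): classify a door-open event by whether bedroom motion follows within a few minutes.

#include <time.h>

/* Did she leave, or just check the weather?  A door-open event
   followed by bedroom motion within WINDOW seconds reads as "checked
   the weather"; silence afterwards reads as "left the house".
   The 180-second window is a guess, not a measured value. */
#define WINDOW 180

typedef enum { CHECKED_WEATHER, LEFT_HOUSE } door_story;

door_story classify(time_t door_opened, time_t next_bedroom_motion)
{
    if (next_bedroom_motion > door_opened &&
        next_bedroom_motion - door_opened <= WINDOW)
        return CHECKED_WEATHER;
    return LEFT_HOUSE;
}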

I'd like to add a couple more sensors, maybe a light sensor and temperature monitor for the bedroom to help flesh these stories out.

The moral (of this post) is: you can tell a lot with just a few sensors and a lot of common sense.  It isn't about the "hardware tech". It is, ultimately, about making sense of data.  I want to get my software to the point where it "tells the story" rather than just providing data for my wife and me to review.  Here is my ideal morning report (sent to my phone instead of the raw data events):

Betty slept between 9:30pm and 6:15am, awaking at 11:15pm and 2:30am to go to the bathroom.  At 6:30am she opened the front door, closed it and went back into her room. Her room light has been on since 6:45am and there is currently no movement in her room.
I don't need this report in verbose English (like the above), but I should be able to quickly derive the above story from summarized data points.  All of this can be surmised from the current three sensors.

Sunday, October 11, 2015

Hacking Inside Out vs Outside In: Lua vs Clojure

I've got a couple of CFT (Copious Free Time) projects going on at the same time:


So far I have 2 semi-working (new) implementations, one for each of these projects.  I started developing both in a high-level programming language (Clojure) and ended up running into a few walls that caused me to look at alternative implementation languages (in both cases Lua(JIT)).

With Clojure I got to swing around futures, core.async and rich data structure manipulation, but I hit implementation walls involving libraries (mostly Java) that don't quite do what I want. Soon I found myself installing broken, old or incompatible packages.

Abandoning Clojure, I headed back to LuaJIT. Here I had much more control over my environment, but greatly missed builtin things like futures,  core.async and rich data structure manipulation.

Clojure and LuaJIT represent opposite ends of the spectrum, but they do have the ability to overlap (I can drop down to Java/JNI in Clojure and I can evolve Lisp-like richness out of Lua).

It's bottom up vs top down, or inside out vs outside in.

I need to bite the bullet, pick a direction and stick to it.

Thursday, October 01, 2015

Confab - Adhoc private VOIP/chat for conferencing

In my copious free time I am working on a new system I am (tentatively) calling Confab.
Confab is an ad hoc (on demand) VOIP conference call system utilizing the popular gamer VOIP/chat system Mumble.

Confab will use any Mumble client (iOS, Android, Windows, Linux, etc) but will only implement enough of a subset of a Mumble server to allow for quick conference calls. (Mumble certs won't be used for authentication, so you won't have to install certs on your Mumble client.)

The idea is that there is no conference call service running until you need one. And, once you are done, it goes away.

But, why not use free stuff like Skype or Google Hangouts?

  1. Skype and Google Hangouts require registered accounts (with personal info about you)
  2. Skype and Google Hangouts persist your previous chats (which can be annoying if you never want to talk to these people again)
  3. Your account is "permanent". Your connections, your password, etc. Always there waiting to be cracked or exploited.


With Confab, you point your browser to the Confab website, enter a conference start time and you are provided with a server name, a port number and a small one-time password (e.g. a23gHYz). The Confab Mumble server (tied to the designated port) doesn't accept connections until the startup time.
Because each session is tied to a unique port number, there is more security than can be offered by a single server with "channels" or "rooms".

You give your participants the server name, the port number and the password so they can join.  Once people join you can chat (text) or talk (VOIP).  The Confab session terminates after 10 minutes of idleness (no one is talking or chatting).  You can also configure an absolute call duration (e.g. 60, 90 minutes, etc).  Each Confab session should support a couple of dozen participants.
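That idle-termination rule is about the only interesting per-session state the server keeps; a sketch of how I picture it (hypothetical names):

#include <time.h>

/* Tear down a Confab session after 10 idle minutes (no voice or text).
   last_activity is refreshed on every voice packet or chat message. */
#define IDLE_LIMIT (10 * 60)            /* seconds */

struct confab_session {
    time_t last_activity;
    time_t absolute_end;                /* 0 = no hard duration limit */
};

int session_expired(const struct confab_session *s, time_t now)
{
    if (now - s->last_activity >= IDLE_LIMIT)
        return 1;                       /* idle timeout */
    if (s->absolute_end && now >= s->absolute_end)
        return 1;                       /* configured call duration */
    return 0;
}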

Why not just use a normal Mumble server?  I want to drop the gaming oriented features, but I plan to add unique server-side features such as:
  • Federated servers - connect multiple servers to allow inter-conference calls.
  • Support for bridging to other "open" chat/voip servers.
  • Support "audio casts" (recording and simultaneous broadcasting of audio via one user's phone/computer) to dozens of participants
  • Moderated conferences (e.g. question and answer sessions, etc) via helper bots.
  • Voicemail (and text message) capabilities (call in and leave a message for others)
  • Possible support of POTS (plain old telephone service) bridges

I'm finishing up the basic Mumble-compatible server right now (not yet supporting the above features).  It is designed to be lightweight and fairly scalable. I have no intention of providing or modifying existing client side software.

My server software will be released as open source.  I am planning on setting up a small test server on Amazon AWS or Digital Ocean.  I'll let you know (here) when it is stood up.  If this works out, maybe I can get some donations (Amazon, PayPal, etc) to offset the costs...

Sunday, July 19, 2015

Elder Home Care in an RF noisy house

The BT tags I mentioned in my previous post are acting erratically.  During certain times the tracker tokens lose contact with the server (for minutes) even if just a couple of feet away.  BT LE is supposed to be broadcasting on channels not used by IEEE 802.11 Wi-Fi, so I am not sure what is drowning out the broadcast. I don't have a 2.4GHz wireless (house) phone, so that isn't the culprit.

I don't have a spectrum analyzer, so I am limited in my investigative resources...

I'd hate to have to drop down to 433MHz sensors.

The good news is that this can possibly be solved in software.  The problem is the "false positives".  Since the monitor notifies me upon the sensor going out of range, when these RF anomalies occur I am falsely alerted.  One approach is to have a "control" tag permanently installed in the room with the detector. If both the control and tracking tag go "out of range" then it must be an RF anomaly and I shouldn't be notified.
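A minimal sketch of that control-tag rule (the helper names are placeholders):

/* Use a stationary "control" tag to veto false positives: if both the
   control tag and the tracked tag drop out together, blame RF, not
   the wanderer.  in_range() is a hypothetical helper over the BT scan. */
enum tag { CONTROL_TAG, TRACKED_TAG };

extern int in_range(enum tag t);   /* 1 if heard in the last scan window */
extern void notify(const char *msg);

void check_tags(void)
{
    if (!in_range(TRACKED_TAG)) {
        if (in_range(CONTROL_TAG))
            notify("tag out of range");  /* real event: she may have left */
        /* else: control tag is gone too -- RF anomaly, stay quiet */
    }
}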

Friday, July 03, 2015

Phase II of Elder Home Care (formerly Elder Home Alone) Monitoring System

It's been a while since I've posted about my home monitoring system.

Short recap:

A couple of years ago, my Mother-in-law lived alone in a condo and was prone to leaving her stove on accidentally, among other forgetful things. I started working on a home care monitor for the "Independent Elderly".  It would include basic occupancy trackers, water overflow detectors and stove/kitchen monitoring (to make sure the stove isn't left unattended and to monitor her eating habits).

Well, fast forward: my dementia-diagnosed Mother-in-law moved in just over a year ago.  So, the problems are a bit different.  She wanders. She gets up in the middle of the night to go to the bathroom and can't find her way back to her bedroom. She may go upstairs in the dark and stumble, or venture outside.  Sometimes, during the day, she may decide to walk home... to her childhood home, several states away.  She is old, but fast.

The current system uses "cheap" X10 RF motion detectors and door monitors. I can review past activities (e.g. when did she get up this morning? Did she frequent the bathroom last night?) or I can be alerted to the house door being opened (Is she just checking the weather? Is she going to sit on the porch? Is she going to make a run for it?).
The alerting system consists of some software I wrote (runs under Linux on a small Intel NUC PC) and it, currently, sends XMPP (Jabber/Chat) messages to a cloud server (on Digital Ocean) which runs Prosody XMPP server. My wife and I are connected to this server using XMPP chat software (Xabber) on our Android phones.  We can query the monitoring system from our phone or be chimed when the door opens.

It has run well for a year now. :)

But, now that my Mother-in-law is prone to taking long unannounced walks, this system is not enough.
The phone chimes when the door is opened. Is it one of the kids? Is it her checking the weather? Is the door already opened from a previous check?  Is she *really* still sitting on the porch 10 minutes from now?

So, after an early phone call one morning, from the Police (she managed to get several blocks from the house before sunrise), we decided we needed to invest in a tracking solution.

Most tracking solutions involve GPS (battery drain, and overkill -- if we know that she has left, we can pretty much find her in a matter of minutes... if we know she has left).

Not a lot of solutions out there.  Found one on an Alzheimer's website. It *only* requires recharging every 48 hours. Ugh.  What do we do while it charges? Do we need to buy two?

So, I decided to look into BT LE (Bluetooth LE). I had built several BT LE tags years ago and was interested to see what the state of the art was today.  Apparently, Fitbit uses BT LE beaconing. That is, every second or so it broadcasts its address so your phone can handily connect to it on demand.
BT LE has a very limited range, but that's okay.
Also, Apple has been pushing "iBeacon" for their own (non-elderly) tracking purposes. They have a spec and a number of hardware vendors. I found this tag on Amazon for $14. Although meant to be used with Apple devices, it does a simple BT LE beacon/broadcast that I can readily track.  It is the perfect size to be "hidden" in her purse (in a small crevice/pocket) and the battery should last 6 months to 1 year (I'll assume 3 months and schedule an early battery replacement).

Armed with the BT 4.0 PCI card in my NUC, I attacked this challenge a week ago. Now I have a rudimentary system that will let me know when my Mother-in-law has ventured beyond the front porch. My android phone (running Xabber) is notified whenever the tag goes out of range.
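The detection itself is just beacon silence with a timeout; roughly like this (hypothetical names, and the timeout is a guess chosen to ride out a few missed advertisements):

#include <time.h>

/* Out-of-range by beacon silence: the tag advertises every second or
   so, so "not heard for TIMEOUT seconds" means out of range. */
#define TIMEOUT 15

static time_t last_seen;            /* updated by the BT scan callback */

void on_advertisement(time_t now)   /* called when the tag's address is heard */
{
    last_seen = now;
}

int tag_out_of_range(time_t now)
{
    return (now - last_seen) > TIMEOUT;
}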

There is a lot of work to perfect this, but I am happy with the preliminary results. I will be moving the notifier beyond the phone (maybe home media -- DLNA/TV/etc, or just speakers on the NUC) and making it work locally in case we lose Internet connectivity.

I will be releasing the software into open source within the next few weeks.

Saturday, February 28, 2015

Virtualization: Your PC is a Universe

PCs (and, honestly I am really talking about Laptops and the newer PC replacement tablets) are so powerful that they no longer have to be thought of as singular "client" resources.  That is, with sufficient memory (let's start at 8GB RAM)  and with enough SSD speed storage (>128GB),  folk like myself typically run many virtual computers inside our computers.

If I need to run Windows, I just fire up Virtualbox. If I need to do server development, I can pick stuff like Vagrant, Docker or go directly to LXC.  I can do Android development. I can do Windows development. I can try out Haiku or some new BSD.  I can do all of this without changing the underlying OS.  The underlying OS, in fact, is starting to become irrelevant.  Give me a Windows box and I can do full Linux stuff on it without replacing the OS: Just start up a Linux VM.

The thing is, at any given moment, my laptop is a Universe of virtual computers. I can network these computers together; I can simulate resources; I can test them, probe them and manipulate them.

This is new. Yes, yes -- the tech is pretty old (e.g. virtual machines), but the realization of this tech on a portable computer is new.

If you want to see where we may be heading, check out something like Rump kernels or OSv. We are starting to leave the OS behind and look at computing in terms of "microservices" -- collaborating virtual computers that solve a particular problem.

With the resources we now have on hand, why are we talking about systemd and D-Bus and other single-computer entities?

The next time you approach a design, try thinking about how your laptop can be *everything*. And then let that influence your design.


I will be Cyborg.

I haven't had a lot of time to post to this blog and I am wondering if this is the end of the line for it.
Well, we will see.  But for now...

I am approaching 50 (in 1.5 years) and my eyes are shot (I'm very nearsighted).  The screen is blurry (I have transitional bifocals, so my "clear" view is pretty marginal) and it isn't going to get any better.

So, if my eyesight starts to wane quickly (my eye doctor isn't really concerned... yet), what do I do?
While I can use magnifying glasses for my circuit work (which is starting to become a thing of the past for me anyway), what about my programming and computer science stuff (i.e. my screen work)?

Duh.
I'm a programmer and technologist.  I can hack something together to supplement my poor vision.  Even if I were to go blind (that isn't currently in the cards, but who knows), there are ways to continue to do "Computer Science".
There is technology already out there, and I can always invent what I need to aid me if my eyesight worsens.

Sometimes I forget that, with software and some gadgetry, we invent whatever we need. We are indeed sorcerers and alchemists :)

Wednesday, October 01, 2014

Forth and the Minimalist

Not all Forth programmers are minimalists, but chances are, if you use arrayForth, colorForth or something inspired by it (like MyForth), then you may be a minimalist.

Being a minimalist, you seek the simplest, most concise use of resources.  You tend to avoid rambling code and the idea of calling a (3rd party) library function makes you uncomfortable.

One of the reasons I like using MyForth (and the 8051) is that it forces you to think about how to simplify the problem you are trying to solve.  This is a good exercise, but it also offers some advantages when you are working on low power (or very tiny) embedded systems.  No matter how beefy MCUs get, there is always a need for something "smaller" and lower power (e.g. a tiny, low transistor count 8-bit MCU has more chance of running off of "air" than a fast, feature-rich 32-bit MCU).

The 8051 has rather poor math capabilities. Everything is geared toward 8 bits. If you use a C compiler, this is hidden from you.  The compiler will generate a ton of code to make sure that your 16 or 32 bit math works. This causes code bloat and will slow you down -- thereby causing more power consumption.  Programming in a minimalist Forth makes you think about whether or not you actually need the math.  Is  there a cheat?  You look at old school methods and you may find them. I grew up on the 6502 (Commodore VIC20/C64, Atari, Apple, etc).  You did all you could to avoid doing "real" math (especially if it broke the 8 bit barrier).  You had limited resources and you made the most of what you had.
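A tiny example of the kind of cheat I mean (the calibration number here is made up for illustration): rather than converting a raw ADC reading into real units with 16-bit math, precompute the raw count at the alarm point and compare bytes.

/* The "cheat": no unit conversion at runtime.  Work out offline which
   raw 8-bit ADC count corresponds to the alarm threshold and compare.
   178 is a made-up calibration value. */
#define ALARM_RAW 178u

unsigned char too_hot(unsigned char adc_raw)
{
    return adc_raw >= ALARM_RAW;   /* one 8-bit compare, zero math */
}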

But, is this just an "exercise"?  I don't think so. There are practical benefits that go beyond just old school cleverness. You (can) have more compact code that performs better. The less code you produce, the fewer chances for bugs. The less code you produce, the more reliable your product.

Gone are the days (for most of us) of penny counting the costs of components. I'd rather have a bunch of simple components (e.g. logic gates, simple MCU, peripheral processors etc) that do work for me rather than a big processor with a complex library.  Chip components tend to be "coded" at a higher level of quality assurance than pure libraries.  I trust a USB->serial chip more than some USB->serial library for my MCU. If the library fails, they say "update". If a chip fails... they risk going out of business -- who trusts production runs to faulty chips?

In the end, the minimalist is fighting the status quo.  It is a futile fight, but we can't seem to give it up. It is in our nature.

Wednesday, July 30, 2014

AFT - an elegant weapon for a more civilized age..

This is a sort of nostalgic post and, in some sense, it is also a "toot your own horn" one as well.  I am writing this mainly for myself.  I am trying to remind myself what I've liked most about programming.

Years ago, actually almost 2 decades ago -- around 1996,  I wrote a text markup system called AFT.  AFT stood for Almost Free Text. It was inspired by Ward Cunningham's original Wiki markup but went further.

I had a problem. I didn't like using WYSIWYG word processors and the world was moving towards HTML.  I liked Ward's markup. He was using it on this new "Wiki Wiki" thing. I answered an invite sent to the Patterns List and became one of the first of the wiki users in 1995.  (But that is a different story for a different time.)

AFT was my attempt at a writing system to produce publishable (web and print) documentation.  Since then, it has developed a (now waning) user base.  You can see how many web pages/documents use it without changing the default "watermark" with this query.

As of Ubuntu 14.04, you can get AFT by issuing an "apt-get install aft" against the standard repository.
I think it is still part of FreeBSD "world".  I believe it still runs under Windows too.

Various "modern" mark up languages (written in "modern" programming languages) have since surpassed AFT in adoption, but for me, it still is a more elegant and pleasurable experience.

Over the years (although not very recently), I've updated, fixed and generally maintained the code.  There are no known crashes (it literally takes whatever you throw at it and tries to produce good looking output -- although that may fail) and it doesn't require me to look at the HTML (or PDF) manual (written in AFT!) unless I want to do something complex.

AFT is implemented in Perl. Originally it was written in awk, but I taught myself Perl so as to re-implement it in the late 1990s.

It is, for me, interesting Perl code.  I have modernized it over the years, but it still doesn't depend on CPAN (a good thing if you just want to have non-programmers "download it" and run without dependencies -- yes I know there are packaging solutions to that problem today...).

AFT has "back end" support for HTML, LaTeX, lout and some rudimentary RTF.  These days I think mostly HTML and LaTeX is used.

You can customize the HTML or LaTeX to support different styles by modifying or creating a configuration file.  This configuration file is "compiled" into a Perl module and becomes part of the run time script.

AFT has been a pleasure to hack on now and then. It still runs flawlessly on new Perl releases and has proven not too fragile to add experimental features to. I've accepted some small code fixes and fragments over the years, but generally it is mostly my code.

As I wrote (and rewrote) AFT, I thought frequently of Don Knuth's coding approach (as excellently documented in my all-time favorite book on programming: Literate Programming).  I certainly can't match the master, but the slow, thoughtful development he enthuses about was inspiring.

Over the years I've gotten a few "thank you" notes for AFT (but nothing in the past few years) and that makes it my (to date) proudest contribution to Free Software.

Maybe I'll dust off the code and introduce some more experimental features...



Sunday, July 27, 2014

Concurrency and multi-core MCUs (GA144) in my house monitor

My house monitoring system monitors lots of sensors. This suggests a multi-core approach, doesn't it?

The problem with (the current concept of) multi-cores is that they are typically ruled by a monolithic operating system. Despite what goes on in each core, there is one single point of failure: the operating system. Plus, without core affinity, our code may be moved around.  In an 8-core Intel processor, you are NOT guaranteed to be running a task per core (likely, for execution efficiency, your task is load balanced among the cores).  Each core is beefy too. Dedicating a whole core to a single sensor sounds very wasteful.

This, I believe, is flawed thinking in our current concurrency model (at least as far as embedded systems go).

I want multiple "nodes" for computation. I want each node to be  isolated and self reliant.  (I'm talking from an embedded perspective here -- I understand the impracticality of doing this on general purpose computers).

If I have a dozen sensors, I want to connect them directly to a dozen nodes that independently manage them.  This isn't just about data collection. The nodes should be able to perform some high level functions.  I essentially want one monitoring app per node.

For example: I should be able to instruct a PIR motion-sensor node to watch for a particular motion pattern before it notifies another node to disperse an alert. There may be some averaging or more sophisticated logic to detect the interesting pattern.

Normally, you would have a bunch of physically separate sensor nodes (MCU + RF),  but RF is not very reliable. Plus, to change the behavior of the sensor nodes you would have to collect and program each MCU.

So, consider for this "use case" that the sensors are either wired or that the sensors are RF modules with very little intelligence built in (i.e. you never touch the RF sensor's firmware): RF is just a "wire".  Now we can focus on the nodes.

The Green Arrays GA144 and Parallax Propeller are the first widely available MCUs (that I know of) to encourage this "one app per node" approach.  But the Propeller doesn't have enough cores (8), and the GA144 (with 144 cores) doesn't have enough I/O (for the sake of this discussion, since the GA144 has so many cores, I am willing to consider a node to be a "group of cores").

Now, let's consider a concession...
With the GA144, I could fall back to the RF approach.  I can emulate more I/O by feeding the nodes from edge nodes that actually collect the data (via RF).  I can support dozens of sensors that way.

But, what does that buy me over a beefy single core Cortex-M processing dozens of sensors?

With the Cortex-M, I am going to have to deal with interrupts and either state machines or coroutines. (Although polling can replace the interrupts, the need for a state machine or coroutines remains the same.)  This is essentially "tasking".

This can become heinous. So,  I start to think about using an OS (for task management).  Now I've introduced more software (and more problems).  But can I run dozens of "threads" on the Cortex-M? What's my context switching overhead?  Do I have a programming language that lets me do green threads?  (Do I use an RTOS instead?)

All of this begins to smell of  anti-concurrency (or at least one step back from our march towards seamless concurrency oriented programming).

So, let's say I go back to the GA144. The sensor monitoring tasks are pretty lightweight and independent. When I code them I don't need to think about interrupts or state machines. Each monitor sits in a loop, waiting for sensor input and  a "request status" message from any other node.
In C pseudo-code:

/* Per-node monitor loop: block on a message, act, repeat.
   wait_for_msg(), get_sensor_data(), compute_status() and
   send_status() stand in for the node's I/O words. */
while (1) {
    switch (wait_for_msg()) {      /* blocks: no interrupts, no tasking */
    case SENSOR:                   /* new reading from our own sensor */
        if (compute_status(get_sensor_data()) == ALERT_NOW)
            send_status(alert_monitor);   /* push alert to the alert node */
        break;
    case REQUEST:                  /* another node asked for our status */
        send_status(requester);
        break;
    }
}

This loop is all there is.  The "compute_status" may talk to other sensor nodes or do averaging, etc.
What about timer events? What if the sensor needs a concept of time or time intervals?  That can be done outside of the node by having a periodic REQUEST trigger.

(This, by the way, is very similar to what an Erlang app would strive for; see my previous post GA144 as a low level, low energy Erlang.)

Now, the above code would need to be in Forth to work on the GA144 (ideally arrayForth or PolyForth), but you get the idea (hopefully ;-)


Tuesday, July 22, 2014

A Reboot (of sorts): The IoT has got me down. I think we've lost the plot.

The IoT (Internet of Things) has got me down.  I think we've lost the plot.

In most science fiction I've read (and seen), technology is ubiquitous and blends into the background.  The author of a science fiction book may go into excruciating detail explaining the technology, but that is par for the course.

In science fiction films the technology tends to be taken for granted.  Outside of plot devices, all the cool stuff is "just a part of life".

Re-watch Blade Runner, Minority Report, etc. Do the characters obsess (via smartphone or other personal device) over the temperature of their home while they are away?  Do they gleefully purchase Internet connected cameras and watch what their pets are up to?

It is 2014 and we buy IoT gadgets that demand our attention and time.  Nest and Dropcam: I am looking at you.

Beyond "Where is my Jet Pack?", I want "Set and Forget" technology.  The old antiquated "Security Monitoring" services (e.g. ADT) got it partially right. You lived with it. You didn't focus on it and you weren't visiting web pages to obsess over your house's security state.  But that model is dying (or should be). It is expensive, proprietary and requires a human in the loop ($$$).

What do we replace it with?

I think that the "Internet" in the IoT is secondary.  First, I want a NoT (Network of things) that is focused on making my house sensors work together.  Sure, if I have a flood, fire or a break in, I want to be notified wherever I am at (both in the house and out).  When I am away from my home  is where the Internet part of IoT comes into play.

My current Panoptes prototype (based on X10) monitors my house for motion and door events. My wife or I review events (via our smartphone) in the morning when we wake up. It gives me valuable information, such as "when did my teenage son get to bed?" and "was mother-in-law having a sleepless night?" and "is mother-in-law up right now?".  Reviewing this info doesn't require the Internet but does require a local network connection.

I also register for "door events" before I go to bed. This immediately alerts me (via smartphone) if  "mother-in-law is confused and has wandered outside".

When I leave the house, I can monitor (via XMPP using my smartphone) activity in the house. When I know everyone is out, I can register (also via XMPP)  for door/motion events. I can tell if someone is entering my house (our neighborhood has had a recent break in).

This is an important Internet aspect of Panoptes.  I rarely use it though.  My main use of Panoptes turns out to be when I am at home.

So, I want IoT stuff, but I want it to be "Set and Forget".  This is the primary focus in my series of Monitoring projects.

Monday, June 23, 2014

Design by Teardown: What you will find inside of my Panoptes home monitor basestation

First... It is about time I named this monitoring system.  I'm code naming it "Panoptes".

I'm struggling a bit with the power consumption on my wireless sensors (previously mentioned here).

I've chosen a C8051F912 as the MCU (the extra 128 bytes are needed for my OTA encryption scheme), but I can't seem to get the sleep mode down below 20uA. (That doesn't sound like a lot of power consumption, but it adds up when considering that I want the batteries to last years.)

So, I am taking a break from low power design to focus a bit on my base station. (For those coming into this blog entry cold, I am designing an Internet-ready home monitoring system with a focus on keeping track of independent elderly people, specifically those who are candidates for nursing homes but aren't quite ready for that transition yet.)

I've decided to approach the base station design from a post-implementation perspective: What would someone find if they did a teardown on my device?

Why come from this perspective?  I would hope that a savvy engineer would find the implementation sound and even respectable. So, why not base my design decisions on this point of view?

Now, I am not just talking about a hardware teardown, but a software one too. But, I won't get too wrapped up on how my code looks or how it is structured.  I am more interested in interoperability: How does the software interface with the outside world -- in particular, the end user and the Internet.

Let me preface this with one of my primary design goals: Set and Forget. 

This is not a system to be played with or to constantly probe from a web browser.  The typical customer is a caretaker or adult child of an elderly person.  This is about applying modern IoT technology to an old problem.  But, this is not a Nest. This is about the kind of IoT (Internet of Things) that operates discreetly in the background of your life -- you just want to know when interesting things happen, otherwise it isn't on your daily radar.

I have said before that even the base station can host sensors, so for this particular teardown, we will look at a single use: Someone buys the basestation plus a water flood sensor to monitor their laundry room.  This example isn't solely "elderly" oriented but does represent the case where someone would want a true "set and forget" sensor.  (I won't cover "wireless sensor nodes" here, since while necessary, they are bound to a lot more hassle that I'll address later -- things like RF interference/jamming, etc.)

I am trying to bridge the world of industrial strength monitoring with the IoT.  I expect the sensors to be "integrated" with the house. You will want to install them properly (mount them) and permanently. These are devices that should last for years.  The mantra is that they "must work".

The water flood sensor is a good example of a "must work" sensor.

So, this is a long one. Feel free to jump ship here, otherwise, grab a cup of coffee, sit back and ... here we go:

Contents

The water flood sensor is a pair of gold plated probes on a 2x4" plastic plate.  The plate can either rest on the floor, be glued, or be attached to a baseboard with screws. Two thin wires connect it to the sensor node (in this case the base station).  The base station can be mounted on the wall. It is about the size of a deck of cards. On the side are 6 screw terminals (for 4 sensors plus +DC and ground).  The water flood sensor attaches to one of the sensor screw terminals and to ground.  The user is expected to use the full length of the wires or to trim and strip them to a desired length.  You can connect up to 4 water flood sensors if you want to place them strategically in a room (e.g. under the sink/tub, next to the water heater, etc).

(First critical question: Why screw terminals instead of modular connectors?  Answer: This allows the user flexibility in where they mount the base station. It can be several feet away from the sensor. A modular jack would fix the length of the wire.  I am assuming either a professional installer or someone comfortable enough to run wires.)

The base station hosts 2 AA batteries for power failure backup (which should run a couple of weeks before needing to be replaced).  Lithium or alkaline are recommended for maximum shelf life.

The base station is normally plugged in to an AC outlet (via a standard USB 5VDC power supply). Since the station uses Wi-Fi, it wouldn't run very long on batteries.

Configuration

The USB port is also used for "configuring" the base station. Once plugged in, it shows up as a disk drive.

Then you go to the product website and enter data into a form.
You can associate a screw terminal with a type of sensor (in this case a water flood sensor). You must also enter the SSID and password for your wi-fi router.  Additionally, for notification, you must provide an email address.  None of this data is retained by the website, and it is all sent over HTTPS.

Once entered, this data is downloaded as a file. You must save the file (or drag it) to the attached base station.  The LED will blink rapidly and if all goes well it will remain lit.  A slow blink indicates an error.

Once installed and turned on, the base station contacts the wi-fi router and you are sent a "startup successful" email.
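For a sense of scale: the entire downloaded configuration needn't be more than a handful of fields. Something like this (an entirely hypothetical layout) is what the firmware would parse:

/* Hypothetical layout of the config file the website generates.
   Fixed-size fields keep the parser on the MCU trivial. */
struct panoptes_config {
    char          ssid[32];         /* Wi-Fi network name */
    char          passphrase[64];   /* Wi-Fi password */
    char          notify_email[64]; /* where alerts and heartbeats go */
    unsigned char sensor_type[4];   /* one code per screw terminal,
                                       e.g. 0 = unused, 1 = water flood */
};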

Operation

The base station will send you a once-per-week "heartbeat" email to indicate that all is well. If you want to check "on demand", you can send it an email and it will respond with status.

If water is detected, you are sent email.
That's it. Set and forget.

Hardware Teardown

There are 4 Phillips-head screws holding the unit together. The case is UL94-5VA flame rated.  Two flanged holes support mounting the enclosure to the wall.  When mounted, the battery compartment sits flush against the wall. This is a light form of security to prevent someone from taking the batteries out.
The screw terminals are side mounted.  There is a small recessed reset button on the bottom of the enclosure.

Inside there is a small circuit board hosting the three main components: a TI CC3000 Wi-Fi module, a Nordic nRF24L01P low power RF transceiver (for wireless sensor nodes) and a C8051F381 USB MCU. The Wi-Fi module is tethered to an antenna that traverses the inside edge of the enclosure.  The screw terminals are connected via ESD protection diodes to the MCU.
(But, why an 8-bit MCU?  Why not an ARM Cortex? The C8051F381 is essentially an SoC. There are very few outside components needed. Panoptes uses the internal precision oscillator, so there isn't even an external crystal.  There is a built in 5VDC-in regulator and USB support. And, for what the system does, an 8-bit is adequate. Plus, the fewer the parts, the simpler the design.)

There is a small piezo buzzer mounted over a small hole piercing the front of the enclosure. A small red LED next to it pulses every few seconds. This is to indicate that the unit is on and connected. If it cannot connect to the wi-fi router or cannot reach the Internet, the LED blinks rapidly.

Measuring the power consumption of the unit shows that it consumes around 105mA when idle (not sending a notification) and peaks briefly at about 250mA when sending a notification. Most of this current is due to the Wi-Fi module.  The 105mA suggests that the base station maintains a connection to the Internet at all times.

Pouring water upon the floor (thereby triggering the sensor) causes the unit to beep loudly and send a notification email.  After 10 minutes the beeping stops and the unit waits to be reset. It blinks red rapidly during this time.  You can silence the alarm by pressing (and holding for 3 seconds) the reset button on the bottom of the enclosure.

If the AC power is pulled from the base station (e.g. a power outage), the unit falls back to the battery, sends an alert email, powers down wi-fi and beeps for 5 seconds.  The base station is still fully functional, but is expected to only last a few days without AC power.
The current measures steady at around 500uA at this point.  Any water sensing event will cause both the beeping alarm and an attempt to send an email notice (in case the wi-fi router itself is battery backed).  Every 2 minutes the station beeps to remind anyone nearby that the unit is battery powered.
Pressing and holding reset at this point will cease the beeping but the alert capability remains.

Internet Connectivity

The base station is connected 24x7 to a server running in the "cloud". This connection is via TLS/SSL and it is the cloud host that sends notification emails.  Why not send email directly? The cloud server ensures mail delivery (caching and doing multiple delivery attempts as needed). Plus, for sensors that need correlation outside of simple alerts, the cloud server does all of the logic and interfacing. 

Email is used as the primary notification (and status query) mechanism due to its ubiquity. Email is everywhere and doesn't require any special software to be loaded on your PC or smartphone.

No software updates are pushed to the device. Nor can the device be remotely controlled. It is a monitoring sensor. This IoT base station is one way.

In conclusion

Panoptes is designed to be a part of your house. It isn't sexy, but it is indeed a player in the IoT. Outside of 802.11b/g and TLS/SSL, it is bound to no particular Internet standard that may go away in the near future.  You can use it with low power RF based sensors or simply standalone with up to 4 wired sensors.

Despite the low BOM, Panoptes is a high quality product designed to last.  At $100 per base station, $10 - $20 per wireless sensor,  and $2 per month cloud based subscription, it is a worthy investment considering the repair costs of house flooding.

The only thing missing seems to be Zigbee support. But, until low cost wireless sensors are offered in the Zigbee space, the nRF24L01P is adequate.

Thanks for reading!

EDIT: Looking seriously into the Kinetis K20 again as the base station MCU. I could use a little extra help with the Internet protocol side of things and the 8-bitter suffers there.

EDIT2: The TI CC3000 Wi-Fi module has an OTA configuration scheme called SmartConfig. This rids me of the need for USB support, as I can configure the AP and password over the air.  I still need to figure out how to send the email address and other config stuff, but I should be able to do that over the air too.


Sunday, June 22, 2014

IoT: Real "servers" (PCs) are in your future (as base stations)

While Nest and others are using embedded ARMs as base stations for your home "Internet of Things" (IoT), I see a real server in the future. There is only so much you can do with these embedded (usually ARM based) servers when you don't have a disk or memory management.  In particular, with the greater demand for these base stations to talk "native" Internet/Cloud (e.g. heavier protocols like AMQP, XMPP, etc), it starts to tax an unadorned ARM SoC.

While a "PC" sounds like overkill, I am expecting to see more and more Intel Atom and ARM based, fully solid state, base stations with all the usual bells and whistles we are used to getting with a PC.
What bells and whistles?  Memory protection/management, robust storage, system busses, rich peripheral support, etc.

Let's call them SBCs (Single Board Computers), which is what they really are.  Until now, SBCs were firmly in the domain of the industrial embedded market.  You don't mess around with unreliable consumer tech like SD cards and low end Chinese market chips (e.g. Allwinner, etc) when you are building a security base station for an office building or other 24x7 "install and forget" monitor and control systems.

I've played with the wonderful Olimex ARM boards (like the OLinuXino LIME), but they are "new". There are hardware glitches, limited driver support (I can't just buy a wi-fi  board and expect it to work) and I don't feel that the Linux distribution is fully baked yet. Plus, I have to cross compile (from my Intel based laptop) and I run into all the "this isn't ported yet" problems that come with cross compilation.

With the coming of the Minnow Board MAX, Intel based SBCs are getting cheap enough (and low power enough -- No fan!) to become serious alternatives to the crop of low end ARMs.

What is wrong with the current crop of Cortex A based embedded systems?  The biggest problem is reliability (or at least the perception of) and OS support.  Sure there are Linux based distributions but are they as reliable and mature as their Intel based cousins?  I'm talking about real embedded distributions. I don't need or want X windows with a bunch of media apps.  But, are Intel SBC based Linux distributions any better?  Maybe. But that isn't what I am recommending.

Ubuntu/Debian/Fedora/etc server editions are (perhaps) ideal here.  They, for the most part, are already rock solid (when you have thousands of servers running 24x7 in a data center, you might as well say the OS is "embedded" grade since you can't practically login and deal with OS "issues").

I can see running Ubuntu 14.04 server (stripped down a bit) on a Minnow Board.

Now, the target market for the Minnow Board is for those who want to play with SPI, GPIO, I2C, etc -- they make a point of saying it is an "open hardware embedded platform" and not a PC. But, it seems to have specs to the contrary:  64 bit Atom, 1GB RAM, USB,  SATA2 support, ethernet, etc.

That sounds like a PC to me.  And, if I can run Ubuntu (or Debian) Server on it, it fits my IoT base station needs.   These days, most peripherals I interface to (including my own homebrew ones) can be accessed via UART (via a USB adapter) or native USB.  Do I really need to put my Bluetooth or GPS receiver on SPI these days?  (IMHO, Linux is pretty clumsy when accessing bit banged devices that don't already have kernel support.)

And, at $100, it certainly competes with the current crop of ARM boards.
Then again, if you can accept a Fan in your base station, it is hard to beat a repurposed ASUS Chromebox ($149) which comes with 2GB RAM and a 16GB SSD.


Saturday, June 07, 2014

Building the first (of many) wireless sensor prototype...

I've ordered a bunch of parts, so now I am committed to start building prototypes...

I've been doing X10 (RF sensors) and Linux on an SFF/SBC Intel-based computer (base station) as the prototype for my elderly monitoring system.  Stuff has been running for almost a year now but I am not satisfied with two aspects of this system:


  1. X10. Ugh. Ultimately a dead end.
  2. Intel-based computer.  Too big, too much. Overkill.
So, once again I am looking into a completely home brew solution.

First up: Wireless sensors.

I am throwing together prototypes centered around the ridiculously cheap NRF24L01+ (go ahead, google it and look at the ebay bulk prices -- they are between $1-2 each in lots of 10).  I am pairing these with the ridiculously low power ($1 per unit) C8051F986 (Silabs 8051 w/ 4K flash & 512 bytes RAM).  All these sensor nodes have to do is read some switches (e.g. motion sensors, doors, etc) and transmit a byte or two to the base station. I am coding it using MyForth (which is still my favorite Forth variant).

The BOM for a single wireless sensor node (sans sensors) is about $8 (including generic enclosure).  Add a PIR motion sensor for $17 (low current is expensive!), a magnetic door switch/sensor ($5) and maybe a water level detector (oh, and temperature comes for free with the C8051F986!) and you've got a wireless multi-sensor node for $30.  That's a bargain. I am currently using (screw) terminal blocks so you can hook up short runs of sensors (e.g. monitor the front door AND front hallway from one sensor node).
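The node firmware is MyForth, but the whole job is small enough to sketch in C (the radio and HAL calls below are hypothetical stand-ins, not a real driver API):

/* All a sensor node does: wake on a pin change, send its id plus pin
   states, go back to sleep.  The helpers are hypothetical HAL calls. */
#define NODE_ID 0x01

extern unsigned char read_sensor_pins(void);  /* one bit per terminal */
extern void nrf24_send(const unsigned char *p, unsigned char len);
extern void sleep_until_pin_change(void);     /* uA-level sleep */

void node_main(void)
{
    unsigned char msg[2];
    for (;;) {
        sleep_until_pin_change();
        msg[0] = NODE_ID;
        msg[1] = read_sensor_pins();   /* door/motion/water bits */
        nrf24_send(msg, 2);            /* "a byte or two" to the base */
    }
}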

Next up: Base station

The base station will come in 3 variations:
  1. Wi-Fi
  2. Ethernet
  3. GSM/SMS
I am tackling the Wi-Fi variant first.  I am using a TI CC3000 eval board ($35 from Digikey).

The NRF24L01+ boards in my possession use a trace antenna, so I am not sure if I'll get the range I need.  For the base station, I ordered a slightly costlier variant that supports an SMA connector.

I am still waffling on the brains for the base station.  Cortex M4 sounds like a no-brainer. In particular, I am fond (and familiar) with the Kinetis K20 series (via the $20 Freedom board).

But, I am NOT happy with the M4 development eco-system.  You either drop a lot of cash (>$1000) or use not-quite-baked free tools.  Yes, GCC has wonderful support for the Cortex processors, but getting down to the vendor specifics requires a lot of work (unless you opt for the IDEs, which not only do all the work for you but manage to "hide" most of the hardware from you... I don't want this).

Kinetis has a free GCC/Eclipse based IDE.  It takes up 1GB on disk, runs slow and isn't fully cooked (it is beta until later this summer). 

And, oh, don't get me started on the debuggers (e.g. OpenSDA, EzPort, OpenBDM, etc). Wow. The chips are amazingly cheap, but the support around the chip is going to cost you (if you don't want to be spoon fed:  mbed, Kinetis IDE, etc -- I am looking at you).

I've been using MPE Forth at my day job when I do Cortex M4 work.  It has worked nicely with the K20 Freedom board.  But, I can't afford MPE Forth right now for my CFT projects.

So, waffle, waffle, waffle.  Last night I threw together a quick base station prototype board (because I need *something* to test the sensor's NRF24L01+ against).  The brains for the prototype is a C8051F930 I had in my junk box.  It has 64KB flash and 4KB RAM. This is quite beefy for an 8051. It also has ridiculously low power needs.

Honestly, it has all the horsepower and space I need to do the base station task. Plus, I get code sharing with the sensor nodes.  But, an 8051 as the brains?  Shouldn't I go with something more capable?

Well, here is an interesting observation: my prototypes are already rich with 8051s.  The NRF24L01+ has an 8051 as its core. The TI CC3000 (Wi-Fi module) does too.  Do I need more horsepower than a modern 8051 (the Silabs 8051-based CIP-51 core executes 70% of instructions in 1 or 2 clock cycles) to just control these two modules and do a little bit of logic?

Friday, May 02, 2014

Industrial Product: Forth + Bare Metal + Cortex M vs C++ Linux + Cortex A

File this in the category of "spending way too much time thinking rather than doing"...

This is a long post about building an "Industrial Strength" product.

I sit poised between rebooting my "Home Alone" Elderly monitor using a microcontroller or a microcomputer based solution.

It isn't a very sophisticated setup: a few PIR motion sensors, a water detector, a magnetic switch for door open/close detection and a means of notifying me when something interesting happens (e.g. mother-in-law wanders out of the house when we are not at home, is up in the middle of the night, left the water running, etc).  The notification method is still "up in the air": do I use a GSM modem to send SMS to my smartphone, or do I maintain an Internet connection and send email?

I have previously implemented a prototype in LuaJIT and ran it on a small form factor PC (using off the shelf X10 RF sensors).  It sent the data to a cloud server and notified me of events via email. You can read about it in my other blog: http://eldermonitor.blogspot.com/.  This prototype is too heavy (small form factor PC + cloud services).

So, now, for the "make it lighter" reboot, I've been looking at the very interesting A10-OLinuXino-LIME (something of a more industrial quality Raspberry-Pi).  Industrial is the enticing bit. I want this thing to work. I don't want to design my own board (yet). I would like the prototype to work and work "for a long time".  This isn't to say that the Pi wouldn't, but I've had very good experiences with Olimex boards when doing embedded stuff at my day job.

But, here is the thing: Software.

Debian is stable (and used quite a bit in the embedded and server based arena), but is this LinuXino Debian build solid?  I don't know.  I do know that this stuff is still mainly "enthusiast" supported.  For this project, I am not an "enthusiast". It must work.

(An aside: If it "must work", how can I rely on X10 RF stuff?  I've run the sensors for over a year now in my house and they are still going strong.  I haven't had to change batteries either. They aren't sophisticated, but they seem to work for the long haul -- at least for now.)

So, here I am, writing modern C++11 code on my 64 bit i5 dual core laptop and planning to recompile (port?) it to the 32-bit ARM (Cortex A8) ... and thinking... will this thing work reliably?

With the C++ I am thinking about abstractions and algorithms.  Am I making something inherently simple more complex?

Do I really want a full blown Linux here? Will it run for a year without fail (or crash)?

So, I sit here at my workbench and I am comparing the A10-OLinuXino-LIME board (argh, what a horrible name) and the Freescale FRDM-K20D50M (Cortex M4) board and wonder if I am not going light enough.  Getting the USB based X10 CM19a receiver to work on the Cortex M4 is not trivial.  (I may punt and go for hardwired sensors for the time being). And, C++ on the Cortex M4 means either fighting g++  (ugh, the linker config) or paying >$1000 for a serious compiler.

I've got an old (still functional) MPE Forth Stamp Compiler working with the FRDM board.  It isn't free, but it is solid.  Solid is what matters here.

I have visions of a simple device that, once configured (and installed), hums along doing its job for months (...years!) without concern for whether it gets stuck in a reboot (e.g. Linux runs out of space due to a logging issue, SD card corruption, etc) or whether my C++ has some subtle memory issue (e.g. modern C++11 looks down upon "new" and "delete" but can still exhaust memory through objects and containers that allocate behind the scenes).

Forth is, well, Forth. On bare metal, I can completely *grok* my development environment.  Porting MPE Forth to the FRDM board was a pain, but now I *understand* the FRDM board.

What am I trading here?  A modern C++/Linux design vs something that I know will work (and how it works).

I'm an old Unix hand (been doing it since the mid-1980s), but I don't know if I am comfortable with a home monitor running a community supported port of Debian. Too many unknowns?


Sunday, April 20, 2014

Personal UV Sensor reboot

Back in 2011 one of my CFT projects was to develop a personal UV sensor for those who are at high risk for skin cancer.  Due to the limited availability of tuned UV sensors (i.e. a reliable source for UV Index rating values), I had to abandon the project. I produced one prototype, as outlined here: http://toddbot.blogspot.com/2011/08/uv-index-monitor-prototype-1.html but had to put further prototypes on hold due to sensor procurement issues.

Well, apparently there is a rumor that the forthcoming Apple iWatch will include a UV sensor ( http://www.macrumors.com/2014/04/08/iwatch-uv-light-exposure-sensor/).  This is great if you have an iPhone, lots of money and want a new watch, but this isn't my target.

But the new UV chip they are using fits my budget: http://www.silabs.com/Support%20Documents/TechnicalDocs/Si1132.pdf.

I want something small (and cheap) enough that you could clip it to a hat, a UV windbreaker, a shirt or a blouse. Oh, and it should be water resistant (wear it on the beach or by the side of the pool) or even waterproof (go swimming with it on).  It should also allow you to set a timer (in hour increments) to remind you (via beeping) to apply more suntan lotion.

The only UI would be a capacitive touch sensor. Press to see (or hear through beeps) the current UV index. Press to set timer.  LEDs (matching UV Index official colors) and/or small buzzer would be the feedback mechanism.  It should cost under $20 and the battery should last a few years (at least 5) under moderate UI usage.
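The LED feedback maps directly onto the official UV Index color bands; a sketch (the enum names are mine):

/* Map a UV Index reading to the official color bands
   (green/yellow/orange/red/violet, per the standard UV Index scale). */
typedef enum { GREEN, YELLOW, ORANGE, RED, VIOLET } led_color;

led_color uv_color(unsigned char uv_index)
{
    if (uv_index <= 2)  return GREEN;   /* Low */
    if (uv_index <= 5)  return YELLOW;  /* Moderate */
    if (uv_index <= 7)  return ORANGE;  /* High */
    if (uv_index <= 10) return RED;     /* Very High */
    return VIOLET;                      /* Extreme (11+) */
}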

Why do this?  Well, why is every (new to market) useful sensor device required to work with RF and/or interface with your phone?  Why can't tech just "be there" when you need it, rather than being a "gadget" that works with other "gadgets"?

Heck, give me a 10 year battery life and I'd say you just sew the thing into clothing.

Okay, I've said way too much.  Let's just say that I am working on it.... stay tuned.