I don't know if I mentioned it here before, but I've been working on a personal UV (Index) monitor.
Folks who have skin cancer (or those at high risk) need to make sure they limit their sun exposure.
The general approach is to just lather up with sunscreen every time you leave the house, but this is impractical (plus you have to re-apply every couple of hours). This becomes more of an annoyance when you consider spending hours riding in a car: Are the windows UV protected? How well? Do you have to lather up every time you drive?
You can get UV index forecasts on your smartphone, but these are just forecasts (for your area and for the whole day). When you are out in the sun, you'll need to know how much UV intensity is hitting you "right now".
Another solution is to carry a UV monitor.
The only ones I've seen on the market are either overkill (too large and complex) or vague (how does it work, is it reliable, and where is the sensor?).
I am aiming at something so small that you'll always carry it with you, but also clear and as accurate as possible. My target form factor is a key fob:
My target UI is based on colored LEDs. There are official colors for the UV index scale and I have an LED for each level. I would like to have (at most) 2 buttons -- one for "instant read" (point at the sun and an LED will light up for 2 seconds indicating UV index level) and one for setting a countdown timer (for sunscreen re-application).
My current prototype has 1 button, 5 high-intensity LEDs (green, yellow, orange, red and blue/violet) and is a little bulkier than a key fob. Amazingly, the LEDs are quite readable in bright sunlight! If you are colorblind, you can always read the index based on which LED lights up (right?). The current layout ramps "upwards" with UV intensity.
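For reference, those five LEDs line up with the official WHO UV index color bands (green/yellow/orange/red/violet). Here is a minimal sketch of the selection logic in shell form -- read_uv_index is a hypothetical stand-in for the actual sensor read, not part of the prototype's firmware:

#!/bin/sh
# read_uv_index: hypothetical helper that prints the integer UV index
uvi=$(read_uv_index)
case $uvi in
    0|1|2)  led=green  ;;   # Low (0-2)
    3|4|5)  led=yellow ;;   # Moderate (3-5)
    6|7)    led=orange ;;   # High (6-7)
    8|9|10) led=red    ;;   # Very High (8-10)
    *)      led=violet ;;   # Extreme (11+)
esac
echo "flash the $led LED for 2 seconds"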
It takes a single coin cell battery and is based on a very low power 8051 from SiLabs. It should get 3-5 years off the battery with casual usage.
I need to do lots of tuning/calibration and I know it won't be "demo worthy" for the rest of this summer, but I am making progress. Apparently, the calculations done for UV Index forecasting aren't very practical for small single UV sensors. Somehow, the personal UV monitors make do though. I think I'll use one of the better ones to aid in my calibration.
Maybe I'll have case design and a formal board spin ready for next summer?
Wednesday, August 03, 2011
Friday, July 15, 2011
Tiny computers that fit on your fingernail...
Here is a thought:
Pick up a microSD card. Place it on a fingernail. Look at how small it is. How much does it hold? 1GB? 2GB? 8GB? More? Amazing. That is a lot of storage. These things are examples of how storage keeps shrinking while maintaining incredible capacity. You could fit a whole library on a microSD, right?
But consider this: inside every microSD card lies an MCU core. (It may be an 8051; the 8051 is still a popular flash memory controller that you'll find in the majority of USB thumb drives, SD cards and even, as a naked die, microSD cards.) Each MCU contains a small amount of RAM too.
So, on your fingernail you have an 8- or 16-bit computer (typically running at over 50 MHz) with high speed I/O, RAM, gigabytes of persistent storage, and firmware that was probably written in C.
Mind blown.
Thursday, June 09, 2011
Android: Bluetooth Low Energy vs USB
With all the hype about adding devices/peripherals to Android via USB, I desperately want a low energy wireless means of adding devices. A number of my (yet-to-be-started) CFT projects involve collecting sensor data for correlation/display on smartphones. ANT has always looked appealing, but with next to nothing in the way of smartphone support, the new Bluetooth 4.0 BLE support looks like it may capture the market.
This year promises new Android devices with BLE. On the peripheral/sensor front, we seem to have 2 major vendor choices: Nordic and TI.
I'm not ready to drop money on a kit just yet, but my "body worn" sensor projects may get a kickstart knowing that a suitable means of data display is coming soon.
Sunday, May 01, 2011
Ultrasonic goslings: Sensors and software
I'm starting to get back into low-level embedded systems. I'm back to see what 8 bits can do in a 64-bit world.
Part of this reboot is to cast a fresh eye towards some of the sensor enhanced systems I've been mulling around for the past couple of years.
In particular, I am re-investigating some ultrasonic tracking stuff. In a nutshell, I want to build a flock of robots (does 3 constitute a flock?) that will follow me around. Think: Mother Goose and goslings.
Imagine that you have an ultrasonic transmitter, attached to your belt, that transmits a short "beep" every second. If your robots have 3 ultrasonic sensors each, then they can use hyperbolic positioning (Multilateration) to figure out where you are. (The time difference between the 3 received beeps gives you direction; the receive time between each transmitted beep gives you distance).
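A sketch of the geometry (my notation, not a worked design): with the three receiver positions r1, r2, r3 known in the robot's frame and v the speed of sound, each pairwise difference in arrival times constrains the transmitter position x to one branch of a hyperbola:

\[ \lVert x - r_i \rVert - \lVert x - r_j \rVert = v\,(t_i - t_j), \qquad (i,j) \in \{(1,2),(1,3),(2,3)\} \]

Intersecting two of those branches gives the bearing to the beacon; absolute range needs a time reference on top of that, e.g. the known one-second beep period.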
Now, every decent circuit I've seen for ultrasonic transducers tends to be fairly complex to build (mostly for clean amplification and rectification of the received signal). Just throwing a transducer onto a (relatively) clean MCU with a sensitive ADC won't cut it. Or will it?
We tend to want to put the cleanest, most linear signal into the ADC, but nature doesn't work that way. Nature uses a ton of error correction (software). Even without perfectly working ears or eyes, the brain adapts to form a "picture".
Given a noisy, weak, poorly rectified signal from an ultrasonic receiver, can software make sense of it?
Monday, April 11, 2011
Forth for ARM Cortex M3...
This news makes me happy :-)
I should break out my old STM eval boards and give it a try.
The last Forth (ignoring mine) that I used was Charlie Shattuck's MyForth. Well, it looks like he has created a new MyForth for the Arduino crowd. Slides here and sources here.
The nice thing about minimalism within the microcontroller world is that your end result is a "device". You don't have a lot of extra stuff (software standards, etc) to deal with... so as long as your device interfaces correctly with the outside world, the question is: Does it do something useful/interesting? Not: Did you use CouchDB, MongoDB or SQL?
Ah, the simple life.
Also, a shout out to GreenArrays for releasing initial measurements in their G144A12 spec sheet.
Ugh. I really need to find the time (and money) to play with the dev kit.
/todd
Sunday, April 10, 2011
File under "Elegant": Factorial in Plan 9 rc (under Linux)
I've posted before that I find Plan 9's rc shell elegant. I've been using a "slightly" modified (I've made read and echo builtins) version for a few months now and have been doing extensive scripting. I hope never to go back to bash.
Here is a small script to compute factorials. Since "bc" deals with arbitrary precision, we can go much higher than 32 or 64 bits would allow.
Chew on this:
#!/usr/local/plan9/bin/rc
# fac: multiply 1..$1, feeding each expression to a bc co-process
fn fac {
    num=0 factorial=1 frombc=$2 tobc=$3 {
        for (num in `{seq $1}) {
            echo $factorial '*' $num >$tobc
            factorial=`{read <$frombc}
        }
        echo $factorial
    }
}
# bc splits long numbers across lines with trailing backslashes; rejoin them
fn fixlinebreaks {
    awk -F '\\' '{printf("%s",$1)}
    $0 !~ /\\$/ {printf("\n"); fflush("");}'
}
fac $1 <>{bc | fixlinebreaks}
There are several interesting things here:
- Concurrent processing (co-processes actually).
- Messaging through unix pipes.
- Lazy computation (generator).
This factorial algorithm is iterative rather than recursive, but rather than using an incrementing counter loop, we generate all numbers using the 'seq' program and loop through that lazily generated list!
How slow do you think this script will run? Well, on my Toshiba Portege r705 notebook with a Core i3, factorial of 1024 takes 2.4 seconds. Is that slow?
Earlier I said that I had enhanced rc with "echo" and "read" as builtins (normally they are external). Using the non-builtin "echo" and "read" increases the run time to 5.1 seconds.
Of course this isn't production code, but here is the take-away: "bc" gives you a bignum calculator for free. Use it.
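For instance, straight from any shell prompt:

$ echo '2^512' | bc                  # exact 155-digit result
$ echo 'scale=50; 4*a(1)' | bc -l    # pi to 50 decimal places (a() is arctan)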
Monday, March 14, 2011
Tackling the Simple Problems: The domain of the minimalist
The hard problems are more interesting by nature and the world is full of hard problems. This blog post isn't about them. Instead, I want to talk about simple problems.
Simple problems are still problems, they just don't have world shaking impact (or so you would think).
To be honest: most simple problems are only simple on the surface. Underneath, complexity is always lurking.
Take, for instance, my desire to (re)build a very simple blogging system (for my own personal use). Blog software isn't all that hard to build. If you don't care about performance and scalability, then it is pretty straightforward. That is, until you get down to building one. As soon as you start thinking about security, feeds, multimedia, etc. you start to expose the underlying complexity of "working" software.
Now, as I said earlier, this is still something of a simple problem. Developing blogger software isn't rocket science. But, in some ways, that makes it harder.
When something is so simple (conceptually), it can be quite difficult to "get it right". Getting it right is about hitting that sweet spot. Blogging software needs to do its simple job correctly and intuitively. If it is hard to install, or has "hard to grok" idiosyncrasies, then it doesn't solve the "simple problem" of blogging.
Consider another "simple problem". I have around 40GB of music (mostly in MP3 format) that I want to play on my living room stereo (away from a computer). There are solutions I can buy, but none quite fit. I don't need streaming (although I would like to listen to online radio sometimes) and I don't need a "total entertainment solution". I tend to listen to whole albums, not mixes or "randomized" selections based on genre.
All I need is a single MP3 storage device, the ability to add/delete queued albums from any of my household PCs (web browser NOT a hard requirement), and a simple "remote" (pause, play, next song, previous song). What I want is a music "server" and it only has to serve one sound system. (Wi-fi streaming of music is broken in my house -- too much sporadic interference).
There are server based (free!) software solutions out there, but they usually solve (only) 90% of my "simple problem". They then throw UPnP, webservers, GUIs and all sorts of networking into the mix. This is more than I want (after all I am a minimalist).
Note: Before computers, my problem was solved 100% by a CD player w/ 200+ CDs and before that it was solved by vinyl LPs. Now I have a bunch of MP3s and less capability to enjoy music than when I had CDs.
Simple problems are harder than you think.
uForth Dump...and run
uForth was mentioned here several times last year. It was my attempt at a very, very portable Forth (no dynamic memory allocation, ANSI C, a bytecode generator for portable images, etc). It has run successfully on MSP430s as well as Windows/Linux. No MSP430 code here, unfortunately: I did most of the MSP430 code as part of my day job in 2010, so it isn't mine to give away.
However, you can get a dump of the generic ANSI code here. I haven't touched it in months and it needs documentation (and some general lovin'). Unfortunately, I don't have access to MSP430s anymore and so that is left as an exercise for the reader :-(
Monday, March 07, 2011
Notes on Mail header (and MIME) parsers...
I'm trying to resurrect my old gawk-based blogging system BLOGnBOX. It (ab)uses gawk to do everything from POP3 mail retrieval (you email your blog entry...) to FTP-based posting of the blog (it is a static HTML blog).
I intend on cleaning it up by doing away with the gawk abuses. I am either going to make it (Plan 9) rc based (with Plan 9 awk and some C for the networking) or perhaps Haskell. That is quite a choice, eh?
I've done a bit of Haskell over the past few months and feel strong enough to do the next generation BLOGnBOX, but the main problem is actually getting the thing going. (This is a nighttime CFT and, well, I have to get into a Haskell frame of thinking).
The first task up is a parser for MIME-encoded email. I plan on using regular expressions (yes, I know -- use Parsec or something more Haskell-ish). Awk is somewhat of a natural for this, but Gawk has a little more "oomph". I can visualize how I would do it in Awk, but the Haskell is not coming naturally.
Well, it isn't all that difficult to get started in Haskell:
module MailParser where

import Text.Regex
import qualified Data.Map as Map

type Header = Map.Map String [String]

header_regex = mkRegex "^(From|To|Subject)[ ]*:[ ]*(.+)"

parseHeader :: String -> Header -> Header
parseHeader s h = case matchRegex header_regex s of
    Nothing    -> h
    Just (k:v) -> Map.insert k v h
Well, that is a beginning. Of course, I should be using ByteStrings for efficiency... and, yes... I know... I know... I should be using Parsec
/todd
Thursday, February 24, 2011
Rc - Making shell scripting suck less
There are some tremendous ideas behind the ubiquitous Unix shell (um, that would be Bourne, bash, (d)ash or maybe ksh?). The problem is that a lot of these ideas are very, very dated. Bash is probably the best example of how to keep a Bourne dialect alive. Ksh was beastly (tons of features), but I think bash has finally passed it. But is this a good thing?
As I start writing more complex scripts I begin to feel the age of Bourne. I have been using (d)ash (it is the Busybox shell and much smaller than bash -- GNU seems set on adding the kitchen sink to every tool). You can pretty much do general purpose scripting with Bash, but still with the legacy syntax of Bourne. You might as well go with Perl or Python (and their associated huge installation footprints).
Then there is rc (the Plan 9 shell). It starts with Bourne and "fixes" things rather than tacking on stuff around the edges. It is very minimalistic and has a certain elegance I haven't seen since Awk. Plan 9's toolbox minimalism was an attempt to get back to the origins of Unix (lots of small, single-purpose tools). The famous anti-example of this is probably GNU ls. Look at the options, the many, many options.
Rc isn't actively supported much (Plan 9 has since faded -- if it ever shone brightly to begin with), but it has the feel of something well thought out.
You'll hear more from me about that in upcoming posts.
Time to shut up and code.
Monday, February 21, 2011
Exceptions and Errors in embedded systems
These past few posts have been ramblings to myself at the cusp of starting a new CFT (Copious Free Time) project. I am weighing an "elegant" path (Haskell) vs a "Old Unix hacker" path (Shell scripts).
While the Haskell approach is alluring, there is a lot of learning to do there and I am an "Old Unix hacker". I am very familiar with the benefits of functional programming and have found the past 3 months doing Haskell (some on my day job) a lot of fun.
But, I know I can get more accomplished sooner if I take a "Unix hacker" approach.
Now, for the meat of this post (and an oft-argued point against using shell scripts in critical environments): safety.
Or, more specifically, what about all of the points of unchecked failure in a shell script?
Doesn't this betray the notion of an embedded system?
Well, there is the dangerous situation of uncaught typos, but let's say we are really careful. How do we handle problems like:
1. A process in the pipeline dies unexpectedly.
2. The filesystem becomes 100% full.
Interestingly, while something like "dd if=$1 | transform | gzip >$2" looks like it can be full of the above problems, I could argue that you have this problem using any programming language/approach.
However, because it is so difficult to catch "exceptional" errors in the shell, it starts to make me wonder how I would handle this in a language that supports "exceptions".
This is where things start to unravel (for me). What do you do in that exception? How do you recover?
Let's look at some approaches:
1. Unix approach: Wrap the "dd" line in a script and have a monitor start it, capture and log stderr and restart it if necessary (but not too aggressively -- maybe at some point give up and shutdown the system).
2. Erlang approach: Interestingly similar to above.
3. Language w/ exceptions: Catch the error, close the files and.... um, restart?
In the Unix approach, the cleanup is mostly done for you. Good fault tolerance practice (as suggested by Erlang) is pretty much handled by variants of init (I believe daemontools' supervise has been doing this well for years).
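To make that concrete, here is a minimal supervisor sketch in plain sh -- ./pipeline.sh and the give-up threshold are placeholders of mine, not part of any real system:

#!/bin/sh
# Naive monitor: restart ./pipeline.sh on failure, log stderr,
# and give up (shut down) after 5 failures.
fails=0
while [ $fails -lt 5 ]; do
    if ./pipeline.sh 2>>pipeline.err; then
        exit 0                       # clean exit: the work finished
    fi
    fails=$((fails + 1))
    echo "pipeline died ($fails), restarting" >>pipeline.err
    sleep 1                          # don't restart too aggressively
done
echo "giving up after $fails failures" >>pipeline.err
exit 1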
I am sure there are holes in my argument, but for my CFT, I am persisting all important data on disk (an event queue is central to my system). Every change (addition, execution, removal) of an event is an atomic disk transaction. If any process dies, it can be relaunched and pick up where it left off.
For fault tolerant (embedded) systems I am not sure what I would do in an "exception" handler... outside of clean up and die.
/todd
Haskell scripting for robust embedded systems...
A convincing (unrelated) counter view to my prior posts here: Practical Haskell: scripting with types.
Sunday, February 20, 2011
Unix Shell scripting for robust embedded systems
The summary/ramification of my previous post:
Shell scripting (in this case Busybox) is a viable approach to developing robust, long running embedded systems.
If you can afford to run a (multi-tasking, memory managed) Linux kernel in your embedded system and Busybox is there, then the shell (ash in this case) becomes a potential basis for a system architecture.
Of course, this is not breaking news. But I think it gets lost when we start taking a "single programming language" view of system development (as advocated by almost every modern programming language). If you are trying hard to figure out how to get your favorite programming language to do what the shell does, then maybe it isn't the right tool for the job.
Sure, the "shell" isn't elegant and is full of pitfalls and gotchas when you use it beyond a couple of lines, but when your shell script starts to grow, you too should consider looking elsewhere for help (i.e. commands beyond what is built into the shell).
An example: Don't get caught up in gawk/bash's ability to read from TCP sockets, leverage netcat (nc).
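For example (example.com standing in for a real host), the same one-shot HTTP request both ways:

# bash-only: TCP via the /dev/tcp pseudo-device
exec 3<>/dev/tcp/example.com/80
printf 'HEAD / HTTP/1.0\r\n\r\n' >&3
cat <&3

# portable: any POSIX shell, with nc doing the socket work
printf 'HEAD / HTTP/1.0\r\n\r\n' | nc example.com 80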
/todd
Haskell vs Busybox (for an embedded soft-realtime control system)
I'm building an embedded soft-real-time control system. It will handle sensor events and provide feedback to the user using voice synthesis.
I really want to use Haskell for this CFT project, but I can get something running so much quicker by shell scripting. There won't be a lot of sophisticated algorithms and I don't see scalability as a concern.
When it comes down to it, I find it harder and harder to do system programming in a "programming language" vs something in a shell (with support from awk and friends). It doesn't matter if it is C or Haskell, it starts to feel like (once again) re-inventing a wheel.
As an example (and it has nothing to do with this current CFT project), consider this problem: I want to transform 1024 byte chunks of a file and write the results as a compressed file. The transformation doesn't matter, but let's say the transformation is written in C (or Haskell for that matter) and takes 50-100 ms per 1024 byte chunk.
I want to do this task as fast as possible. I have (at least) 2 CPU cores to work with. Let's look at two approaches:
Approach A: Write a Haskell/C program to read 1024 bytes at a time, perform the translation, then the compression and write the 1024 bytes to an output file.
Okay, so I need to link in a decent gzip compression library and I use an appropriate "opt" parser to grab the input and output file. Done.
Approach B: dd if=$1 bs=1024 | translator | gzip > $2
This assumes that I write the same core "translator" code as above, so we can ignore that and focus on reading, compression and writing.
You can guess which will take less time to implement, but which is more efficient?
Well, my wild guess would be Approach B. Why? I already have a couple of things going for me. One is automatic concurrency! While "dd" is just sitting there reading the disk, translator is running and gzip is doing its thing. If I have 3 cores, then there is a good chance each process can run in parallel (at least for a little while before they block). There is some cost in the piping, but that is something Linux/Unix is optimized to perform. Beyond that, "dd" has a good chance of achieving more efficient file input buffering than my single-threaded app in Approach A: the dd process has disk buffering plus pipe buffering working for it, so it may fetch (and dispatch) several 1024-byte chunks before it blocks on a full pipe. A similar (but reversed) caching happens with gzip too.
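If you would rather measure than guess, time(1) settles it. (approach_a is a hypothetical name for the single-program version from Approach A; translator is the same stand-in filter as above.)

$ time ./approach_a input.bin output.gz
$ time sh -c 'dd if=input.bin bs=1024 | translator | gzip > output.gz'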
So, you then consider rewriting Approach A but using a concurrency module/library. Ugh. Let's not go there.
So, if I take a scripting approach, my "controlling" part of the system can be written using the Shell and I can optimize to Haskell (or C) as needed.
The Wisdom of Unix shell scripting for systems
Prototype with Perl, Ruby, Python, Tcl, etc and to optimize you have to dive into using an FFI (foreign function interface).
Prototype with a Unix/Linux shell (bash, ash, ksh, etc) and to optimize you rewrite proc/commands in your favorite (compiled?) language and use stdin/stdout as the interface.
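A toy illustration of that swap (the names are made up): the prototype filter is a shell function, and the optimized drop-in is any compiled program honoring the same stdin/stdout contract.

# prototype: pure shell filter
rot13() { tr 'A-Za-z' 'N-ZA-Mn-za-m'; }
grep secret log.txt | rot13 | sort

# later: same pipeline, with a compiled replacement dropped in
grep secret log.txt | ./rot13-fast | sort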
When piping makes sense (or concurrent processes with lightweight I/O requirements), 30+ years of shell wisdom is your friend.
If shell scripts can be trusted to boot your Unix/Linux distribution, can the shell be trusted as the controller/glue for your application?
/todd
Monday, February 07, 2011
Haskell Revelation -- Optimizing via refactoring
I have some code that lazily transforms a fairly large list of data (from an IO source) that must be searched sequentially. Since it is lazy, the list isn't fully transformed until the first search. Since, I presume, a rather large stack of thunks is constructed instead, this first search takes a really, really long time. (It would be faster, I surmised, to strictly transform the list as it was being built, rather than lazily upon the first search.)
I started playing with `seq` but couldn't quite get the strictness right -- the code represented some of my first attempts at Haskell. So, I decided to refactor the code (replace naive tail recursion with maps, filters, folds, etc). I figured at this point I would be able to see more clearly how to avoid the lazy list.
Surprisingly, this refactoring was enough for the compiler to "do the right thing" and sped my application up significantly. What was the compiler doing here? Did it remove the laziness? Or did it just optimize the hell out of what I thought was a lazy vs strict problem?
/todd
Monday, January 24, 2011
How low can Haskell go? (Or what should I do with Haskell?)
Is Haskell a suitable language for programming robots? Would a robot with a strictly functional brain be safer? I have run a fairly hairy 2K line Haskell app on a target as small as an Atom based netbook. It ran reasonably.
I suppose, for a linux target, that a Haskell executable is pretty much like any other executable.
What is the benefit to taking a purely functional approach to robotics?
For some pioneers (think Greenblatt, Stallman, etc) Lisp was a "systems programming" language. Is Haskell a reasonable successor?
These are questions I am pondering. I am doing some Haskell (here and there -- prototyping stuff at work, etc), but I keep wondering what the killer application would be *for me*.
Sunday, December 05, 2010
Important techniques for (future) multi-core embedded development
Tuesday, November 23, 2010
The Lonely Programmer (Hacker)
I've been noticing books on the market (and some blogs too) that speak of our newly emerging "Maker" culture -- a culture where people fed up with intangible abstract work (do you sit in a cubicle pushing numbers?) are turning to the gratification of hands-on creation. They say that working on physical things can give you a deep sense of accomplishment that nurtures our primordial tool-building minds.
I put forth that some programmers can get this feeling from the intangible and abstract. I grew up working with my hands (art, electronics, and just generally building stuff). Programming became an extension of that. In my mind, my code held the same sense of accomplishment and gratification as building something with my hands. I became enamored with virtual worlds!
Nowadays, however, I find that a lot of programmers spend significant time worrying about languages, syntax, test coverage and code re-use. These are topics of varying importance, but they are just about honing your skills. At some point you have to produce something. Hopefully it is beautiful (not just on the outside but inside too). How you managed to create it (the language, test approach, etc) is secondary to the thing itself. And, oh, if it is malleable and can be adapted to do new exciting things, is that proof enough that you used good coding techniques?
We spend so much time talking about the tools, we forget that it is the result that matters. We forget about the joy and awe of working code. We instead tend to form language advocacy groups, cult-like methodologies and obsess over software licensing.
Really, if someone were to create a fully aware artificial being capable of not only passing the Turing test but able to engage us in deep conversation, would we (the programmers) nitpick over how poorly the code is structured, the lack of test coverage and how terrible it is that it was implemented in a crappy programing language?
Thursday, November 11, 2010
MP3 ID3v1 tag reading in Perl and in Haskell
I am building an MP3 jukebox for my home...
I know that I am supposed to use ID3v2, but my MP3 collection (CD ripped, Amazon and Emusic) still sports ID3v1 tags, so I thought it would be a safe bet to just parse it.
I quickly wrote an ID3v1 tag parser in Perl (yes, I know CPAN has several solutions for this but I wanted to write my own just for the fun). Here is what it looks like:
use strict;
use warnings;
use Fcntl qw(:seek);

my @genre = (
    'Blues','Classic Rock','Country','Dance',
    'Disco','Funk','Grunge','Hip-Hop',
    'Jazz','Metal','New Age','Oldies',
    'Other','Pop','R&B','Rap',
    'Reggae','Rock','Techno','Industrial',
    'Alternative','Ska','Death Metal','Pranks',
    'Soundtrack','Euro-Techno','Ambient','Trip-Hop',
    'Vocal','Jazz+Funk','Fusion','Trance',
    'Classical','Instrumental','Acid','House',
    'Game','Sound Clip','Gospel','Noise',
    'AlternRock','Bass','Soul','Punk',
    'Space','Meditative','Instrumental Pop','Instrumental Rock',
    'Ethnic','Gothic','Darkwave','Techno-Industrial',
    'Electronic','Pop-Folk','Eurodance','Dream',
    'Southern Rock','Cult','Gangsta','Top 40',
    'Christian Rap','Pop/Funk','Jungle','Native American',
    'Cabaret','New Wave','Psychadelic','Rave',
    'Showtunes','Trailer','Lo-Fi','Tribal',
    'Acid Punk','Acid Jazz','Polka','Retro',
    'Musical','Rock &','Hard Rock','Folk',
    'Folk-Rock','National Folk','Swing','Fast Fusion',
    'Bebob','Latin','Revival','Celtic',
    'Bluegrass','Avantgarde','Gothic Rock','Progressive Rock',
    'Psychedelic Rock','Symphonic Rock','Slow Rock','Big Band',
    'Chorus','Easy Listening','Acoustic','Humour',
    'Speech','Chanson','Opera','Chamber Music',
    'Symphony','Booty Brass','Primus','Porn Groove',
    'Satire','Slow Jam','Club','Tango',
    'Samba','Folklore','Ballad','Power Ballad',
    'Rhytmic Soul','Freestyle','Duet','Punk Rock',
    'Drum Solo','A Capela','Euro-House','Dance Hall' );

my $id3v1;
# ID3v1.1 layout: "TAG", title(30), artist(30), album(30), year(4),
# comment(28), zero byte, track(1), genre(1) -- 128 bytes total
my $id3v1_tmpl = "A3 A30 A30 A30 A4 A28 C C C";

while (my $filename = <STDIN>) {    # filenames arrive one per line on stdin
    chomp $filename;
    open my $fh, '<', $filename or next;
    binmode $fh;
    seek $fh, -128, SEEK_END and read $fh, $id3v1, 128;   # tag is the last 128 bytes
    close $fh;
    my (undef,$title,$artist,$album,$year,$comment,undef,$trk,$genr) =
        unpack($id3v1_tmpl,$id3v1);
    print "$filename|$title|$artist|$album|$year|$trk|".$genre[$genr]."\n";
}
Basically it takes a stream of MP3 filenames over stdin, opens them and dumps out a pipe delimited summary of what it found. Here is how it is run:
$ find /home/todd/music -name "*.mp3" | perl mp3info.pl >mp3_data.txt
Here is a line from the output (mp3_data.txt):
/home/todd/music/Charles Mingus/Ah Um/Charles Mingus_10_Pedal Point Blues.mp3|Pedal Point Blues|Charles Mingus|Ah Um|1959|10|Jazz
I am considering using Haskell for my jukebox, so I was curious what this would look like in Haskell. Here is my newbie Haskell implementation:
import Text.Printf
import Data.Array
import Char
import System.Environment
import System.IO

-- Create an array of genres
genres = listArray (0, l-1) genres_l
  where
    genres_l = [
        "Blues","Classic Rock","Country","Dance",
        "Disco","Funk","Grunge","Hip-Hop",
        "Jazz","Metal","New Age","Oldies",
        "Other","Pop","R&B","Rap",
        "Reggae","Rock","Techno","Industrial",
        "Alternative","Ska","Death Metal","Pranks",
        "Soundtrack","Euro-Techno","Ambient","Trip-Hop",
        "Vocal","Jazz+Funk","Fusion","Trance",
        "Classical","Instrumental","Acid","House",
        "Game","Sound Clip","Gospel","Noise",
        "AlternRock","Bass","Soul","Punk",
        "Space","Meditative","Instrumental Pop","Instrumental Rock",
        "Ethnic","Gothic","Darkwave","Techno-Industrial",
        "Electronic","Pop-Folk","Eurodance","Dream",
        "Southern Rock","Cult","Gangsta","Top 40",
        "Christian Rap","Pop/Funk","Jungle","Native American",
        "Cabaret","New Wave","Psychadelic","Rave",
        "Showtunes","Trailer","Lo-Fi","Tribal",
        "Acid Punk","Acid Jazz","Polka","Retro",
        "Musical","Rock &","Hard Rock","Folk",
        "Folk-Rock","National Folk","Swing","Fast Fusion",
        "Bebob","Latin","Revival","Celtic",
        "Bluegrass","Avantgarde","Gothic Rock","Progressive Rock",
        "Psychedelic Rock","Symphonic Rock","Slow Rock","Big Band",
        "Chorus","Easy Listening","Acoustic","Humour",
        "Speech","Chanson","Opera","Chamber Music",
        "Symphony","Booty Brass","Primus","Porn Groove",
        "Satire","Slow Jam","Club","Tango",
        "Samba","Folklore","Ballad","Power Ballad",
        "Rhytmic Soul","Freestyle","Duet","Punk Rock",
        "Drum Solo","A Capela","Euro-House","Dance Hall" ]
    l = length genres_l

main = do
    hSetEncoding stdin latin1
    hSetEncoding stdout latin1
    fname <- getContents          -- lazily read list of files from stdin
    mapM print_id3v1 (lines fname)

print_id3v1 fname = do
    print fname
    inh <- openBinaryFile fname ReadMode
    hSeek inh SeekFromEnd (-128)  -- the ID3v1 tag is the last 128 bytes
    dat <- hGetContents inh
    printf "%s|%s|%s|%s|%s|%d|%s\n"
        fname
        (extract 3 30 dat)                               -- Title
        (extract 33 30 dat)                              -- Artist
        (extract 63 30 dat)                              -- Album
        (extract 93 4 dat)                               -- Year
        (Char.ord (head (extract 126 1 dat)))            -- Track
        (genres ! (Char.ord (head (extract 127 1 dat)))) -- Genre
    hClose inh

-- extract and trim a range of elements from a list
extract idx ln s = trim0 (take ln (drop idx s))

-- Trim nulls from a list
trim0 s = filter (/= '\0') s
You run it similarly. Frustratingly, it isn't very happy with filenames with non-ASCII characters :-(
$ find /home/todd/music/ -name "*.mp3" -print | ./mp3info >mp3_files.txt
mp3info: /home/todd/Music/music/Chöying Drolma & Steve Tibbetts/Selwa/Chöying Drolma & Steve Tibbetts_05_Gayatri.mp3: openBinaryFile: does not exist (No such file or directory)
I am not a Haskell expert, but I didn't expect it to choke there...
EDIT: Fixed the filename problem by adding:
hSetEncoding stdin latin1
hSetEncoding stdout latin1
Tuesday, October 19, 2010
Future runtime environment for Robots
Embedded systems are getting smaller and more sophisticated. While there are currently bare ARMs and 8/16-bit micros controlling our mainstream robots, this won't do for the more sophisticated (domestic) robots of the near future.
When we consider sophisticated robots (whether ruled by a subsumption architecture or other invariably concurrent system), we won't think of Python, Java, Ruby or Perl. We will be thinking of something more Erlang-like or perhaps Unix-y.
Can you imagine jacking into your robot's debug port, typing "ps" and seeing a collection of inter-communicating processes running on a Linux kernel? Or will it be something like Erlang/OTP? (Erlang is the dark horse here... Unix may never go away as a platform.)
So, how about it? Erlang on a gumstix?
Friday, September 10, 2010
New AFT release!
AFT development has been dormant for about a year, so I decided to reboot it with a minor (experimental) release (5.098) that supports Indexing. This is mostly useful for LaTeX output. I'm not sure how useful the HTML output will be.
I've also started to clean up the Perl source for AFT. I am currently reading the new edition of Effective Perl Programming and hope to apply more idiomatic Modern Perl conventions to the 14+ year old AFT sources.
As always, AFT is available through "sudo apt-get install aft" under Ubuntu (and Debian?). But to get this new release, go to the AFT website. (For a quick look at Indexing in action, look at the aft reference manual and scroll to the last page).
I need to start archiving my art...
Wednesday, September 08, 2010
I miss Perl...
I've been going through my old books and came upon a stash of Perl tomes. I had a love/hate relationship with Perl. I wrote AFT in Perl, but have not used it as my language of choice in (almost) a decade.
Stumbling across these books got me wondering: Where have all the Perl hackers gone?
Well, I can't say where they have all gone (Ruby? Python? Retirement?), but I miss the funky vibe that Perl had in its day. Perl was messy, but the language attracted the most interesting people. I've always seen it as the language of poets (as opposed to engineers). Perl people loved wordplay. Perl people weren't just interested in producing interesting apps, but also in making the app source code look interesting.
Well, Perl isn't gone. Apparently, strides have been made in Perl 6 and Rakudo is available for playing around with a lot of Perl 6 features.
But, will the hackers come?
I am reminded of a quote from Ratatouille (Colette introducing the kitchen staff to Linguine):
"... So you see: We are Artists. Pirates. More than cooks are we."
Sunday, August 29, 2010
Scratch, Squeak and so Forth: Robot programming environments
A thought experiment...
If I was going to teach kids how to control a Roomba by way of programming, where would I start?
Outside of Squeak and Scratch, I think the most common answer involves also teaching them how to use a source code editor.
This is unacceptable.
Squeak (as derived from Smalltalk-80) and Scratch (as derived from various Logos) integrate the programming environment with the language. This isn't about just being an "IDE".
Squeak lets you hack in a workspace without concern for files. It is persistent and natural. You write a little, see how it works out and continue. At some point you may save your workspace, but you aren't really thinking about files at this point.
Scratch has a similar (although not as radical) approach. My youngest kids (age 7 & 7) would script for a few hours and then pick some random series of characters as a "name" to save their project into (for later recovery). Eventually, they started using more descriptive words for naming their projects. But at no time did they think about files and file systems.
I think files and editors are distractions (and not necessary ones -- unless you want to get a mainstream programming job).
Forth had this idea decades ago (and so did Smalltalk). Forth had blocks (1K blocks to be specific) that held text and data. The early bare-metal Forths used blocks as the single abstraction over persistent storage devices (disks). Eventually, folks "enhanced" line-oriented block editing with a visual screen (with single-character cursor movement!).
Forthers in the early 1980s (myself included) were happy.
Now, in 2010, I face the prospect of teaching my oldest kid (age 12 -- expert Scratcher, beginning Python programmer) how to control a Roomba via programming that requires the selection and mastery of files (and file systems) -- or at least some kind of file-based IDE.
Of the languages I have been looking at, Forth, Python and Lua are the front runners.
I think Forth is a more natural fit. I have just downloaded VIBE and I am being transported back 30 years. I used to write (and extend) Forth editors like this. I am remembering how natural it felt to edit in blocks. (ColorForth continues this tradition, so Chuck Moore arguably never found much of an improvement in using file-oriented environments ;-)
Forth has always made me feel "closer to the machine". Plus, with a built-in editor, I am no longer doing the context switch between an external editor and Forth. VIBE (or any other Forth screen editor) keeps me in Forth.
I will be playing with VIBE under gforth (and perhaps extending it). I am curious to see if this feeling is simply nostalgia or if my 23+ years of Emacs (w/ language editing modes) has been a bad move.
Tuesday, August 24, 2010
Greenspun's Tenth Rule adapted to Unix
Greenspun's Tenth Rule: Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
can be adapted to Unix:
Any sufficiently complicated Perl, Python, Ruby, Lua, etc script contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Unix.
And I don't mean all of the "system" calls. I mean: concurrency, fault tolerance, data persistence, configuration and scalability.
It may be ugly, but combine ksh93/bash, awk, bc, etc. (whatever you find on a standard Unix/Linux distro) and you'll find an analog to the features offered by the above-mentioned languages. This does not include "abstractions" such as fancy data structures and other syntactic sugar. And, of course, fork/exec isn't going to beat a function call.
However (and this will be the subject of the next post), Unix under the control of an advanced shell (such as ksh93 or bash) can have the following capabilities (at least!) -- see the small sketch after the list:
- Coroutines (Co-processes in ksh93 or recent Bash)
- Communicating Sequential Processes (CSP) via named pipes and co-processes
- Dataflow processing (pipes)
- Arbitrary precision math (bc or other calculator)
- Reuse (command line apps)
- File (database) support (ls, awk, find, grep, sqlite command line, etc)
- List processing (command line args + ksh93/bash)
- Functions/apps as first class objects
And more...
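To make a couple of these concrete, here is a minimal sketch (assuming bash 4+ for coproc and a stock bc; the "ask" helper is mine, for illustration): bc runs as a co-process -- a coroutine, in effect -- and supplies the arbitrary precision math:

coproc BC { bc -l; }            # bc becomes a coroutine with its own fds

ask() {
    echo "$1" >&"${BC[1]}"      # write an expression to bc's stdin
    read -r answer <&"${BC[0]}" # read bc's reply from its stdout
    echo "$answer"
}

ask "scale=10; 4*a(1)"   # pi to 10 places -- arbitrary precision
ask "2^128"              # big integer math, no word-size overflow

The same shape works in ksh93 with |& plus print -p and read -p.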
Saturday, August 21, 2010
iRobot Create and Unix Programming
In my previous post I talked about using Unix as an embedded programming "language" (as opposed to a hosting environment for embedded apps).
I started to think about where to begin. Well, let's consider the iRobot Create platform (Roomba too!).
Between your controller and the iRobot is a serial port and a binary command protocol.
If I wrote a small C app to talk over the serial port and convert to/from binary and text, I could utilize the Unix pipeline for some downstream app that reads the robot's binary response as newline delimited text data.
For example:
echo "get sensors" | irobot-oi /dev/ttyS0 | some-later-app
The "irobot-oi" open "/dev/ttyS0", takes text commands as input ("get sensors"), converts it to an iRobot OI binary request, sends it, reads the binary response and converts it to text for "some-later-app". This app could be written in C, awk, perl, etc.
This approach is so obvious that I would be shocked to learn that no one has tried it. I know there are libraries available for the iRobot Open Interface (OI) in Python and other languages, but is there a Unix command line tool available?
From a traditionalist Unix position, "irobot-oi" is actually doing too much. You'd have to encode mappings between each binary command/response byte/bit and text. A more minimal approach would be to write an even simpler binary/text converter that simply understands the protocol encapsulation and accepts/emits a comma delimited representation of the binary data. (Since the OI response protocol is directly dependent on the request -- there are varying length responses with no terminator -- we have to build some smarts into the converter.)
So, instead we would have something like this:
echo "142,2,9,13" | irobot-oi-cvt /dev/ttyS0 | some-later-app
The above command requests the state of the left cliff sensor (9) and the virtual wall detector (13) and sends the result (in comma delimited text) to "some-later-app" (which may do the actual English text mapping or simply react to the numbers).
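Until an irobot-oi-cvt exists, the flavor of it can be faked with stock tools. A sketch only (my assumptions: a Linux stty, the port at 57600 baud, and that packets 9 and 13 each answer with a single byte, so exactly two bytes come back):

stty -F /dev/ttyS0 57600 raw -echo clocal

# text request -> binary bytes out the port
echo "142,2,9,13" | tr ',' '\n' |
    while read -r b; do printf "\\$(printf '%03o' "$b")"; done > /dev/ttyS0

# binary response -> comma delimited text
head -c 2 /dev/ttyS0 | od -An -v -tu1 | tr -s ' ' ',' | sed 's/^,//'

A real converter would know the response length for each request instead of the hard coded "head -c 2".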
Unix as a Programming Language - Rethink Embedded Scripting (and CSP)
As I look at eLua (on an mbed for one of my robot projects), my mind wanders... have I stumbled into another monolithic system approach?
eLua embedded systems are Lua and (maybe) some C. I like Lua, but am I just trading one language (Forth, C, etc.) for another?
This line of thinking keeps bringing me back to Unix. Under (traditional) Unix, I have a multi-process environment connected by pipes. I choose the appropriate language (or existing app) for the task at hand and glue it together with a shell.
Now, this would (most likely) be overkill for a small embedded device, especially in regards to power consumption. But, assume for the moment, that I had all of the Unix resources at hand and didn't concern myself with power consumption. What would my robot controller design look like?
I design and build GPS trackers during the day. I use C and (sometimes) Forth. I've thought about eLua, but I'd still have to write an NMEA sentence parser and fencing algorithm in a language not necessarily perfect for parsing. How would I go about doing this if I had awk at my disposal? Or grep/sed? Or... the whole Unix environment tied together with pipes under a shell. Would something like BusyBox be a good foundation?
What is the smallest MCU (or embedded SBC) that I could run something like uClinux and BusyBox on?
Could I do development under a fairly modest Linux laptop and port down to uClinux and BusyBox?
Right now I am looking (again) at CSP (perhaps built upon Lua co-routines) as a mechanism for architecting embedded systems that integrate multiple (sensor) inputs. You can also build CSP upon Unix processes (and pipes). This is the way I've done it for 25+ years. Do I really need to cast my architecture into another monolith?
This is just me bringing up Unix as a Programming Language again (but this time under the constraints of embedded computing).
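For example, a toy CSP-style rendezvous needs nothing more than named pipes (a sketch; the fifo names and the fake sensor are stand-ins):

mkfifo /tmp/req /tmp/rsp

# "sensor" process: block until asked, then reply with a reading
while read -r _ < /tmp/req; do echo "$RANDOM" > /tmp/rsp; done &
sensor=$!

# "controller" process: synchronous request/response -- each open of
# a fifo blocks until the other side shows up, which is the rendezvous
for i in 1 2 3; do
    echo read > /tmp/req
    read -r value < /tmp/rsp
    echo "sample $i: $value"
done

kill "$sensor"; rm -f /tmp/req /tmp/rsp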
Wednesday, August 11, 2010
Resurrecting BLOGnBOX
I've been thinking about resurrecting my gawk based blogging system BLOGnBOX. I am getting tired of Blogger.
While BLOGnBOX doesn't have as many features, it lets me focus on what I want from a blog. Although this has changed over time, I think what I currently want is:
- A way to publish my thoughts and ideas.
- An archive of my thoughts and ideas.
- A primary means of writing.
I am less concerned with the web aspects of a blogging system. I'd like to have a clean, simple (yet elegantly presented) blog that I can tinker with as needed.
What led me down this path of thought is the desire to extract blog entries (or the whole blog) as a PDF document. By PDF, I really mean LaTeX or TeX quality (not just a dump). AFT does this for me, but it might be overkill for a blogging system. Perhaps the next iteration of BLOGnBOX should leverage LaTeX directly?
And, of course, the next iteration should be composed as a Literate Program! (I'm leaning towards Noweb.)
Wednesday, August 04, 2010
Humans were not meant to...
Humans were not meant to sit in cubicles.
Humans were not meant to spend 8 hours a day working on reports and clicking aimlessly on the web to relieve the tedium.
Humans were not meant to be imprisoned by a workforce that wants you to do repetitive tasks mindlessly.
Humans are meant to be adventurous and creative, with bursts of brilliance (whether through exuberant play or hard knuckled persistence).
Children understand this. As adults we unlearn this. We are taught to work hard, conform for the good of society, buy a house, put away money into a 401K and make a stable environment for raising our kids. We are taught to be "wage slaves".
Sadly, most humans will never break free. But that makes it even more important that you become that rare exception (at least for a while). You have a duty to be brilliant, exuberant and adventurous. Laugh hard, fight injustice (aka stupidity), be creative and kind.
Be passionate. Don't be afraid to piss off people. Sometimes people need to be shaken up.
Draw, paint, dance, program -- set an example for your kids. Let them know that it isn't their duty to conform, accept the status quo or "work for the weekend". The best thing you can do for your kids is to show them what it means to be free -- to be truly human. Brilliant, volatile and utterly unique. Take chances. (You don't have to quit your job to do this.) Just do something to express your joy, your uniqueness, your value as a human.
F*ck your boss. No, not that person you report to at work. Your real boss: the mental shackles that keep you "in line". Break those shackles.
Not everyone can do this. But it is your duty to try.
Sometimes, when your boss isn't looking... let go.
Myforth, 8051 and minimalism
Programming in Myforth has given me a greater appreciation for minimalism. (I suppose programming in ColorForth would be even better, but I don't have access to the proper hardware...)
Using minimalistic (simple) tools forces you to approach a problem differently. I felt some of this back in the 80's when all I had was a little Commodore 64 and some big ideas, but coming back to minimalism from two decades of big iron is refreshing.
I now look at a problem and think "what is the simplest way to solve this?". This takes on a deeper meaning when you consider that the tools force you to find simpler ways. When all you have is 768 bytes of RAM and 8KB of flash, you have to think in simple terms.
Take, for instance, the problem of geofencing. I do GPS trackers (on 16-bit micros) for my day job. I have an MSP430, 16KB of RAM and 256KB of program space (flash). The trackers I build have a concept called geofencing. Here you define polygons or circles that affect how the GPS points are handled. Sometimes you want to log at a different rate based on the fence you are in; sometimes you want to "beacon" (transmit) at a different rate.
Programming a fence algorithm can be tricky (and computationally intensive). In addition, GPS coordinates are typically represented as (at least) scaled 32 bit values (w/ 5 decimal digits of precision).
Thirty-two bit math is taxing for an MCU that only supports 8-bit math. What is a lowly 8-bit to do?
Well, if your "problem domain" only needs to deal with fences defined around just a few miles, you can truncate a lot of the math to 16 (or fewer) bits while retaining precision. Once you know that you are "in the ball park", you only need to look at the lower bits. You don't need to consider the whole 32 bits.
This is a good example of how refining the problem domain helps generate a simpler (smaller) solution. Even if I used 16 bit or 32 bit MCUs, this can save me some processing time. (My field is "low power consumption" solutions, so saving processing time equals saving power).
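A back-of-envelope sketch (awk, with made-up coordinates; real code would also scale dx by cos(latitude)): with 1e5 scaling, one unit of latitude is about 1.11 m, so a signed 16 bit offset covers roughly +/-36 km -- far more than a fence a few miles across needs. Subtract a nearby reference point and the fence test itself never touches the high bits:

awk 'BEGIN {
    # fence center and test point, degrees scaled by 1e5 (32 bit values)
    clat = 3885000; clon = -7703000
    plat = 3885830; plon = -7702410
    # subtract the center: these offsets now fit easily in 16 bits
    dy = plat - clat; dx = plon - clon    # 830 and 590 units
    r  = 1449                             # ~1 mile, in latitude units
    printf "dy=%d dx=%d inside=%d\n", dy, dx, (dx*dx + dy*dy <= r*r)
}'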
Tuesday, July 20, 2010
myforth, 8051 and robotics
Just an update. I am quite taken by Charley Shattuck's myforth (click link and scroll to the bottom of the page). Porting it to an obscure 8051 part (for a 24 bit sigma-delta ADC) greatly increased my knowledge of 8051 assembly. Myforth is more of a Forth-based macro-assembler than a Forth dialect. It is both minimalistic and rich.
I got great pleasure out of implementing a moving average over 24 bit ADC samples using just 8-bit operators for the math. It reminds me that I take a lot for granted when I toss about 32 bit integers using C on an MCU.
(I've previously used SwiftX Forth for 8051 programming, but the part in question had only 768 bytes of RAM -- not quite enough.)
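To give the flavor of byte-at-a-time arithmetic (a toy shell sketch, not the myforth code): adding two 24 bit values using nothing wider than a byte and an explicit carry, the way an 8051 does it:

add24() {  # args: a2 a1 a0 b2 b1 b0 (big endian bytes, 0..255)
    local s0 s1 s2 c
    s0=$(( ($3 + $6) & 255 )); c=$(( ($3 + $6) >> 8 ))
    s1=$(( ($2 + $5 + c) & 255 )); c=$(( ($2 + $5 + c) >> 8 ))
    s2=$(( ($1 + $4 + c) & 255 ))
    echo "$s2 $s1 $s0"
}

add24 0 255 255  0 0 1   # 65535 + 1 -> "1 0 0"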
Sunday, June 06, 2010
Ideas are dangerous things...
A movement starts with an idea. A revolution starts with an idea. An invention starts with an idea.
Ideas are volatile things.
My thoughts were in embedded systems for the past 4 years. Four years ago I left behind large systems development. I went minimal -- 8-bits in fact. Improbable as it seems, I managed to get back into Forth development. I've spent the last 8 months writing Forth code on a 16-bit MCU (MSP430). The project is ending and the product is about to be delivered.
My thoughts now turn to vacation. But, once on vacation, my thoughts will wander back to computers. So, what's next?
I'd like to finish my embedded system CFT projects, but of course, my mind wanders.
My mind is wandering here and here.
A while back, I read (and mentioned) Reaching Escape Velocity (and Gonzo Engineering). Steven Roberts suggests that you find your all consuming project and pursue it.
The problem is, I don't have such a project in mind.
Maybe my true passion isn't making things.
I love ideas. I think I am a programmer (rather than inventor) because I often don't need to build something. Sometimes it is just enough to think it (and most importantly share it). Ideas are like.
So, I am not sure what I am going to do next.
So whatever your hands find to do
You must do with all your heart
There are thoughts enough
To blow men’s minds and tear great worlds apart
There’s a healing touch to find you
On that broad highway somewhere
To lift you high
As music flying
Through the angel’s hair.
Don’t ask what you are not doing
Because your voice cannot command
In time we will move mountains
And it will come through your hands -- John Hiatt, "Through Your Hands"
Friday, June 04, 2010
Posterous... BLOGnBOX
Wednesday, June 02, 2010
WikiReader Data Logger
A quick hack. Here is a data logger I wrote for the WikiReader that I use at work. It uses my previously discussed WikiReader Serial port hack:
160 constant max-line-chars
variable byte
create line-buf max-line-chars chars allot
variable lbcnt     \ chars buffered so far
variable fd        \ log file id
variable linecnt

\ Base logfile
\
: logfile ( -- caddr u) s" log.000" ;

\ Create a new logfile with new 3 digit numeric extension
\ (e.g. log.001)
\
: gen-logfile ( nnn -- caddr u)
    0 <# # # # #>              \ render nnn as 3 ascii digits
    logfile + 3 - swap drop    \ address of the "000" suffix
    3 cmove logfile ;          \ patch it in place, return the name

\ Does this log file exist or is it available (free)?
\
: logfile-free? ( caddr u -- f)
    r/o open-file if drop 1 else close-file drop 0 then ;

\ Iterate through to a free logfile.
\ We support up to 1000 log files before we fail.
\ (Need a failure handler!)
\
: next-logfile ( -- )
    999 0 do i gen-logfile logfile-free? if unloop exit then loop ;

: wiki-logger ( -- )
    0 lbcnt !
    lcd-cls
    next-logfile
    logfile w/o create-file ?dup if
        cr lcd-." Can't create log: " lcd-. drop exit
    then
    fd !
    lcd-cr
    logfile lcd-type lcd-cr
    lcd-." Ready. Press any button to exit." lcd-cr lcd-cr
    begin
        \ sleep only between lines
        key? 0= lbcnt @ 0= and if wait-for-event then
        key? if
            key dup line-buf lbcnt @ + c! 1 lbcnt +! ( -- c)
            10 = if
                \ log and display the line, minus the trailing LF
                \ (write-line supplies the line terminator)
                line-buf lbcnt @ 1- 2dup fd @ write-line drop
                lcd-type lcd-cr
                0 lbcnt !
            then
        then
        button? if
            \ A button was pressed so clean up
            \ and exit.
            \
            button-flush
            fd @ close-file drop
            exit
        then
        ctp-flush \ not interested
    again ;

wiki-logger