Sunday, June 28, 2015

YouTube Playlists, Current Image, Zoomed Views

Weekly videos are now being posted to this playlist on YouTube.

Talking with George Albercook and Greg Austic got me wondering (in very rough terms) how the natively captured scanned images compare with a camera.

Scan DPI: 600
Physical Area: ~11.7x8.5"

Equivalent to a 35 megapixel camera. Pew pew megapixels! Unfortunately, the videos being posted are processed down to ~1920x1080.
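That equivalence is just the scan's pixel dimensions multiplied out. A quick sanity check:

```python
# Sanity check: pixel dimensions of a 600 DPI scan of a ~11.7x8.5" area.
dpi = 600
width_px = round(dpi * 11.7)   # 7020
height_px = round(dpi * 8.5)   # 5100
megapixels = width_px * height_px / 1_000_000
print(width_px, height_px, round(megapixels, 1))  # 7020 5100 35.8
```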

It would be interesting to see the timelapse at 4K on an appropriate monitor, but it's more interesting to start looking closer. Since we're capturing the images at a higher resolution than is being displayed, we can crop out a 1920x1080 section of the native-res image and turn that into a video. Here's what that looks like over the course of a week:

The video above is of a ~3.2" x 1.8" splice of the earth. One week = 604800 seconds. This plays back in ~66 seconds. Life at ~9163x!
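The speedup factor is just wall-clock time over playback time:

```python
# One week of real time compressed into ~66 seconds of video.
week_seconds = 7 * 24 * 60 * 60       # 604800
playback_seconds = 66
print(week_seconds // playback_seconds)  # 9163 -> life at ~9163x
```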

The scanner being used can capture images at 2400 DPI, and scanners that can capture at 9600 DPI aren't terribly expensive (though they do get bulkier). Capturing a full image at that resolution is probably too much for the Raspberry Pi to handle, and the storage space required would be excruciating.

But capturing even a square inch of space at that DPI would be fascinating, and I think doable : ) 

For now, I wonder what square inch would be most interesting to capture at 2400DPI?

Also, the latest images are being posted under the "Latest Image" tab above, with a ~10 minute delay.

Tuesday, June 16, 2015

Rain - Underground

Rain in Ann Arbor!

Ann Arbor saw between 1 and 3 inches of rain on Sunday, June 14th. It's interesting to see what the rain does to freshly turned (~two weeks ago) soil.

Playing around with ImageMagick a bit more. I'm splicing out a 100x100px section and getting the 'average brightness' of it. Scans are taken every 5 minutes, but to keep things simple I'm only sampling every third image, or roughly every 15 minutes.

Then I repeat that process 100px down, continuing until we get to the bottom of the image. ImageMagick spits out a pile of numbers under headers like so:
Reading Date | Rainfall (in.) | 0-100 | 100-200 | 200-300 | 300-400 | 400-500 | 500-600 | 600-700 | 700-800 | 800-900
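The band-sampling loop described above might be sketched like this. The image name and height here are assumptions (5100px would be an 8.5" scan at 600 DPI); each generated `convert ... info:` command prints the mean for one 100x100 crop.

```python
# Sketch: build one ImageMagick command per 100x100 px band down the image.
# Image name and height are assumptions, not the actual script.
def band_commands(image, height, band=100):
    cmds = []
    for y in range(0, height, band):
        cmds.append(f"convert {image} -crop {band}x{band}+0+{y} +repage "
                    f"-format '%[mean]' info:")
    return cmds

print(band_commands("scan.jpg", 5100)[0])
```

Each command would then be run (e.g. with subprocess) and its output logged next to the reading date.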

Reading Date and Rainfall (in.) are coming from elsewhere. The first 100px is mostly black; this part of the scanner is above ground, and since it's night there is little for the scanner's light to reflect off. A graph of the 400-500px region from 00:00 to 4:00:00 looks like:

The numbers above unfortunately represent the day prior to the rainfall. Whoops. In the morning I'll have numbers for the proper day and be able to compare them with data from the City of Ann Arbor's Rain Gauges. Some time later this week I'll post the results.

I'm wondering if I can show, with some degree of reliability, how far and how quickly the rain is penetrating into the soil with this setup. I guess I should build/buy some soil sensors at this point to compare : )

Also, I'm currently using ImageMagick's identify -format '%[mean]' command to infer "image brightness", but I'm really not sure what the command is doing / how it comes up with the numbers it spits out...
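For what it's worth, `%[mean]` reports the average pixel value across all channels, scaled to ImageMagick's quantum range (0-65535 on the common Q16 builds; 0-255 on Q8), so dividing by that range gives a 0-1 brightness. `%[fx:mean]` reports the normalized value directly. A quick sketch:

```python
# Convert an ImageMagick '%[mean]' value into a normalized 0-1 brightness.
# Assumes a Q16 build; '%[fx:mean]' would report this directly.
QUANTUM_MAX = 65535  # Q16 builds; Q8 builds use 255

def normalized_brightness(mean_value):
    return mean_value / QUANTUM_MAX

print(normalized_brightness(32767.5))  # mid-gray -> 0.5
```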

So much to learn!

Comparison of rainfall vs. image brightness for June 14th. This is a 100px snapshot ~4 inches below the surface. Both the rainfall and image brightness values were remapped to a 0-100 range.

I'm doing a couple things here that I'm pretty sure are bad ideas.
1. I'm analyzing the jpeg, not the original tiff file.
2. I'm analyzing a section of a copy of the original jpeg, more loss : )
3. I have no real clue what I'm doing with the math. I think I used the same method I used to remap and constrain light sensor values on an Arduino project from years ago (return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;)...
4. Wheeeee!
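That formula is the standard Arduino map() linear remap; in Python it looks like:

```python
# Linear remap, same formula as Arduino's map(), used here to scale both
# rainfall and image brightness onto a common 0-100 range.
def remap(x, in_min, in_max, out_min, out_max):
    return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min

# e.g. scaling a Q16 ImageMagick mean (0-65535) onto 0-100:
print(remap(16383.75, 0, 65535, 0, 100))  # 25.0
```

Unlike Arduino's integer map(), this returns floats, which is probably what you want for brightness values anyway.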

Thursday, June 11, 2015


ImageMagick is a wonderful suite of tools that lets people do lots of interesting things with images. Combining it with very minimal scripting knowledge lets one easily automate the processing of large batches of images.

I used ImageMagick's compare tool to highlight in red the difference between 286 sequential images captured over 24 hours. I then used IM's convert tool to remove all other colors, create a transparent background, and finally stack each image onto the next.

Avconv was then used to turn these into a short 10 second video.
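As a rough sketch of that per-pair pipeline (file names are hypothetical; `compare` paints changed pixels red by default, and the `convert` fuzz/transparent flags approximate "remove all other colors"):

```python
# Rough sketch of the pipeline above: for each consecutive pair of scans,
# `compare` highlights differences in red, then `convert` drops the
# non-red pixels to transparency. File names here are hypothetical.
def pipeline_commands(frames):
    cmds = []
    for i, (a, b) in enumerate(zip(frames, frames[1:])):
        diff = f"diff_{i:03d}.png"
        mask = f"mask_{i:03d}.png"
        cmds.append(f"compare {a} {b} -highlight-color red {diff}")
        # make near-white and near-black regions transparent, keeping red
        cmds.append(f"convert {diff} -fuzz 20% -transparent white "
                    f"-fuzz 20% -transparent black {mask}")
    return cmds

print(pipeline_commands(["f0.jpg", "f1.jpg"])[0])
```

The resulting masks would then be stacked with another convert pass and handed to avconv for the video.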

It's not terribly clear, but it does highlight the path(s) worms are taking. The large blob of red that shows up at the top is sunlight penetrating the first inch or two of topsoil/debris.

Very little attention to any sort of detail has been paid here. This was a terribly fun distraction from documenting the actual setup - which I should really finish up in the next week or two : ) Regardless, I am constantly amazed at the amount of blood, sweat, and tears that are freely available in the software world.