iBeacon Enabled Beer Fridge

This project began with our team’s vision of a beer fridge that would only unlock for employees who were up to date on their timesheets. And if we ignore all the people who simply reached down and unplugged the Raspberry Pi, all the times the Wi-Fi adapter stopped responding, and how the constant brownouts kept corrupting the server files on the SD card, then this is exactly what we built.

3D printed case with electromagnet lock installed on the fridge.

Despite our many struggles, building this fridge was a project our team took on with great enthusiasm, as it involved our two great passions – technology and alcohol. I will briefly go over the basic setup, but my intention for this blog post is to focus mainly on the iBeacon integration, which, in my opinion, represents the most useful lesson to come out of this project.

Read more iBeacon Enabled Beer Fridge

Markerless AR with a Project Tango

I have recently been experimenting with a Project Tango device, primarily for its potential as a technology for Augmented Reality and interior wayfinding – a topic that routinely comes up with some of our retail clients. In past AR projects I have used marker images, which naturally contain patterns that can be recognized and interpreted by a computer vision algorithm to determine the orientation of a device in space relative to the target image. For the most part this is quite effective, but there are limitations. Aside from the need for a printed image, target occlusion, tracking speed, and lighting conditions are all potential pitfalls. The Tango, an experimental Android device released by Google, has depth-sensing cameras that help determine device orientation and create a dimensional data map of the physical environment. The device can then orient itself in part by matching the current feed from the depth cameras against this map. This gives us essentially markerless AR, with a much-improved ability to track objects to physical geometry and scale.

Read more Markerless AR with a Project Tango

Redesigning a Porch

During the housing bust, my wife and I took a big risk and bought a dilapidated old Victorian house in Quincy. It was really more house than we knew what to do with, but under all that peeling lead paint, rotting porches, and water-stained wallpaper, we both saw a beautiful old place well worth our efforts.

Front porch prior to renovations

With the project nearing completion, we were starting to think about putting the house on the market, but we knew we still had to deal with its front face. A previous owner had chopped off an old wraparound porch and turned it into a three-season room with no regard for the architecture of the building. Upon tearing into the structure, I quickly confirmed my suspicion that the 135-year-old porch was too rotten to save anything but the roofline. So that was my goal – to design a new porch that would preserve the original roof and still be true to the original character of the home.

Read more Redesigning a Porch

SWPA Infographic

The natural gas industry has been trying to expand its pipelines to increase the supply of natural gas to New England and to offshore LNG shipping terminals in Canada. There are many known and suspected health effects from fracked gas infrastructure, and compressor stations, such as the one proposed for the South Shore, are among the worst emitters. Unfortunately, the groups involved in this research aren’t always great at presenting their data in a form people can easily consume. I tried to take all of the complicated tables and distill them into an infographic that was easy to understand and visually appealing.


Pitching Water

About a month ago I was involved in a pitch for a water filter company. As part of this project I read a lot about water quality and its health effects. We are lucky that the Boston area has some of the safest drinking water, but there are large portions of this country where that is not the case. Each year, every water supplier is required to post a CCR (Consumer Confidence Report) with the results of tests measuring chemicals found in the water supply.

Parsing this data from a consumer group database that aggregates those CCR reports, I designed and built a demo web app using Backbone.js that lets users look up the quality of their local water supply.
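The app itself used Backbone models and views, but the core of the lookup is just filtering parsed CCR rows by ZIP code. Here is a minimal sketch in plain JavaScript; the field names and sample values are hypothetical, not the actual database schema:

```javascript
// Hypothetical CCR records after parsing, one row per contaminant test.
const ccrRecords = [
  { zip: "02108", supplier: "MWRA", contaminant: "Lead", ppb: 1.2 },
  { zip: "02108", supplier: "MWRA", contaminant: "Chlorine", ppb: 300 },
  { zip: "19104", supplier: "PWD", contaminant: "Lead", ppb: 4.7 },
];

// Return all test results for a given ZIP code as a contaminant -> level map.
function lookupWaterQuality(records, zip) {
  return records
    .filter((r) => r.zip === zip)
    .reduce((report, r) => {
      report[r.contaminant] = r.ppb;
      return report;
    }, {});
}

console.log(lookupWaterQuality(ccrRecords, "02108"));
// → { Lead: 1.2, Chlorine: 300 }
```

In the real app this result would feed a Backbone collection backing the results view.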

Read more Pitching Water

Pebble Run


I’ve been working with my co-worker Mike Walton to build out a proof-of-concept Pebble watch app for one of our fast food clients. Our goal is to let users check out at any location using only a Pebble watch. A companion iOS app allows users to enter gift card codes and then sync those to the Pebble watch app in the form of a small QR code image compatible with our client’s existing POS system. Most other Pebble watch apps that use QR codes communicate back to the iOS app to retrieve the image, but inconsistent pairing and the delay of loading files over Bluetooth often lead the user to reach for their wallet instead.
We’ve been experimenting with storing the QR code locally on the device itself, but the problem we ran into right away is that Pebble apps run under very limited resources.
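Local storage is still plausible because a QR code is just a grid of black and white modules: packed one bit per module, even a full code costs only tens of bytes. A rough sketch of that packing, in JavaScript for illustration (the actual watch app would do this in C against the Pebble SDK):

```javascript
// Sketch: pack a QR module matrix (booleans, true = dark module)
// one bit per module, so it fits comfortably in the few kilobytes
// of RAM a Pebble app has to work with.
function packMatrix(matrix) {
  const size = matrix.length;
  const bytes = new Uint8Array(Math.ceil((size * size) / 8));
  matrix.flat().forEach((dark, i) => {
    if (dark) bytes[i >> 3] |= 0x80 >> (i & 7);
  });
  return bytes;
}

// A 25x25 (Version 2) QR code packs into just 79 bytes:
const demo = Array.from({ length: 25 }, () => Array(25).fill(false));
console.log(packMatrix(demo).length); // → 79
```

Unpacking on the watch is the reverse: read each bit and draw a filled rectangle per dark module.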
Read more Pebble Run

Recording reality with 360 Stereographic VR Video


I first started shooting 360 still photography using an SLR film camera with a special pano head, stitching still-frame QTVR (QuickTime VR) movies from scanned photographs. A single QTVR pano was a significant undertaking, and most hardware at the time was incapable of even full-screen playback. I shot my first 360 video for a pitch demo in 2013:

The fact that this was even possible, never mind that it was playing on a gyroscopically controlled mobile device, was unimaginable to me only a few years prior. Now, 12 months later, I’ve been experimenting with shooting 360 video in 3D for virtual reality headsets. The concept has not changed; it just now involves shooting 400 panos per second across 14 cameras.

I shoot monoscopic 360 video using a six-camera GoPro rig, but this only goes so far in VR. It’s possible to feed a single 360 video duplicated for both the left and right eye, but the effect is a very flat experience. Aside from the head tracking, it is little different from waving around an iPad. To shoot 360 in stereo, I needed a way to shoot two 360 videos simultaneously, 60mm apart (the average IPD).

One can’t simply place two of these six-camera heads side by side, since each would occlude the other. The solution so far has been to double the number of cameras, with each pair offset for the left and right eye. This works, but it increases the diameter of the rig, which exacerbates parallax – one of the biggest issues when shooting and stitching 360 video. Parallax was enough of an issue that when shooting 3D 360 video I found it very difficult to get a clean stitch within about 8–10 feet of the camera. Unfortunately, this is the range in which 3D seems to have the most impact, but careful orientation of the cameras to minimize motion across seams does help significantly.
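A rough back-of-the-envelope shows why close subjects break the stitch: two adjacent lenses a `baseline` apart see a point at `distance` shifted by roughly baseline/distance radians relative to each other, and that angular disagreement turns into pixels of misalignment at the seam. The numbers below are illustrative, not measurements from my rig:

```javascript
// Small-angle parallax estimate between two adjacent cameras on a rig.
// baselineM: separation of the lenses' entrance pupils in meters
// distanceM: distance to the subject in meters
// fovDeg, imageWidthPx: per-camera horizontal field of view and frame width
function parallaxPixels(baselineM, distanceM, fovDeg, imageWidthPx) {
  const angleRad = baselineM / distanceM;              // small-angle approximation
  const pxPerRad = imageWidthPx / (fovDeg * Math.PI / 180);
  return angleRad * pxPerRad;
}

// ~6 cm between adjacent lenses, 120° FOV, 1920 px wide frames:
console.log(parallaxPixels(0.06, 3.0, 120, 1920).toFixed(1)); // subject ~10 ft away → "18.3"
console.log(parallaxPixels(0.06, 0.6, 120, 1920).toFixed(1)); // subject ~2 ft away → "91.7"
```

A seam error that grows from ~18 to ~90 pixels as the subject approaches is exactly why the 8–10 foot range is so hard to stitch cleanly.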

Read more Recording reality with 360 Stereographic VR Video

DIY Mocap

For a recent pitch I noticed some colleagues modeling a 3D character and then painstakingly hand-animating keyframes. It’s a long process, and in my experience it is very difficult to get natural-feeling motion that way. I set out to see if there was a better approach. For small projects, hiring a professional motion capture studio with actors covered in spandex and ping pong balls is not usually in the budget. But using a pair of Kinects and readily available software, I found it was actually quite feasible and relatively pain-free to set up a DIY motion capture system.

Character rigged with HumanIK

The process began by importing the model into Maya and rigging it with the HumanIK system, which automated much of the rigging work.

Separately, I set up and configured a pair of Kinects in a conference room and, using iPiSoft, recorded 30 seconds of myself ‘acting’. Processing took quite a while, but I was using a fairly low-powered PC laptop. The end result was a motion capture file that I could then target to my character in MotionBuilder or even in Unity.

Read more DIY Mocap

Image Recognition inventory app

I have been investigating a couple of image recognition services that return a description from an uploaded image. About a month ago I put one of these services, IQ Engines’ Smart Camera API, to use when building a demo app as part of a pitch for an office supplies company. The app allowed the user to quickly snap photos of objects, and it would automatically create an inventory of supplies organized by category. The app took a photo, resampled it, and uploaded it to the web service, which would, in a short period of time, return a description of the photo.

It became clear to me in testing this app that the image recognition would attempt to dynamically categorize the image. If a photo contained a product logo or other distinguishing mark, a result was returned almost instantly. Most of the time, though, images were not recognized instantly and were presumably put into a pool of images that appeared to be tagged by humans (I noticed many spelling mistakes and inconsistent results for the same images). Regardless of how it worked, the response time was usually a few minutes and the tags were good. Employing a simple but effective system to best match descriptions to a set of predefined categories based on scoring matches in an array (the same technique used by the heroic Subservient Chicken), we were able to make an app that worked pretty well at categorizing photos of office-related items. Here is a video of the app in action.
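The scoring idea is simple enough to sketch: count how many of a category’s keywords appear in the returned description and pick the highest-scoring category. The categories and keywords below are made up for illustration, not the ones from the pitch:

```javascript
// Hypothetical keyword lists for a few office-supply categories.
const categories = {
  Writing: ["pen", "pencil", "marker", "highlighter"],
  Paper: ["notebook", "notepad", "paper", "envelope"],
  Desk: ["stapler", "tape", "scissors", "organizer"],
};

// Score each category by keyword hits in the description; best score wins.
function categorize(description, cats) {
  const words = description.toLowerCase().split(/\W+/);
  let best = { name: "Uncategorized", score: 0 };
  for (const [name, keywords] of Object.entries(cats)) {
    const score = words.filter((w) => keywords.includes(w)).length;
    if (score > best.score) best = { name, score };
  }
  return best.name;
}

console.log(categorize("a red ballpoint pen next to a highlighter", categories));
// → "Writing"
```

Crude as it is, this kind of bag-of-words scoring is forgiving of the misspellings and inconsistencies in human-tagged descriptions.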

Color Sampling App

Recently I was tasked with building a quick-and-dirty pitch demo app for an automotive company that lets users find color inspiration by sampling real-world objects. The app I created was built with Unity3D; it isolated the dominant color regions from an image and applied them to the body material of a 3D vehicle model.
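One common way to isolate a dominant color is to quantize every pixel into a coarse RGB bin and take the most populated bin. The demo did its sampling inside Unity, so this is just a language-agnostic sketch of the idea, with made-up pixel data:

```javascript
// Find a dominant color by histogramming pixels into coarse RGB bins.
// pixels: array of [r, g, b] triples (0-255); binSize controls coarseness.
function dominantColor(pixels, binSize = 32) {
  const bins = new Map();
  for (const [r, g, b] of pixels) {
    const key = [r, g, b].map((c) => Math.floor(c / binSize) * binSize).join(",");
    bins.set(key, (bins.get(key) || 0) + 1);
  }
  const [best] = [...bins.entries()].sort((a, b) => b[1] - a[1]);
  return best[0].split(",").map(Number); // the winning bin's origin as an RGB triple
}

// Three reddish pixels and one blue one:
const px = [[250, 10, 10], [240, 20, 5], [230, 0, 15], [10, 10, 250]];
console.log(dominantColor(px)); // → [224, 0, 0]
```

The coarse binning is what makes slightly different reds count as the same color region instead of splitting the vote.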

Point C Game

In my free time, I’ve been experimenting with rigging and animating characters in Unity and had built a small demo character controller, which I recently showed to a few of our creative directors. Somehow that demo ended up inspiring this really quick-turnaround, game-like experience presented with the ‘Point C’ campaign we pitched to Capella University. In this ‘game’ you control an IT character who runs around a maze of servers in search of Point C while trying to avoid annoying little managers who attempt to block your path. For the AI I used Rival{Theory}’s RAIN system and set up a very simple goal-based behavior tree. I’m eager to dig further into this and other AI systems. We did win the business, but I am not sure how much that had to do with this:
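For readers unfamiliar with behavior trees, the structure is easy to sketch: a selector tries children in priority order until one succeeds, and a sequence requires all of its children to succeed. This is a generic illustration, not RAIN’s actual API, and the behaviors are hypothetical stand-ins for the game’s:

```javascript
// Minimal behavior-tree combinators over a shared state object.
// A node is a function (state) => boolean (true = success).
const selector = (...children) => (state) => children.some((c) => c(state));
const sequence = (...children) => (state) => children.every((c) => c(state));

// Hypothetical leaves for the IT character:
const managerNearby = (s) => s.managerDistance < 2;
const dodge = (s) => { s.action = "dodge"; return true; };
const seekPointC = (s) => { s.action = "seek"; return true; };

// Priority: dodge a nearby manager, otherwise keep seeking Point C.
const brain = selector(sequence(managerNearby, dodge), seekPointC);

const state = { managerDistance: 5 };
brain(state);
console.log(state.action); // → "seek"
```

Real systems like RAIN add running states, decorators, and sensors, but the priority-ordered selector over goals is the same core idea.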

Converting old Flash animations to HTML

I just wrapped up rebuilding a ‘Fighting Hunger Quiz’ for Bank of America. Originally built in Flash by another vendor, it was rebuilt from the ground up using native web technologies. The original Flash animations were preserved by reworking the Flash files and exporting JavaScript objects with CreateJS to be displayed in an HTML canvas. It took a good deal of manual manipulation to get the animations to export, as many blending modes, color transformations, nesting arrangements, and path tweens were not supported, but it was a pleasure preserving something that had been so well done. Hopefully I did not butcher the original animations too badly. It sure beat rebuilding everything from scratch, and what better tool for this type of job than Flash? The exported JavaScript objects were lean and easily extended with methods for setting animation states so that they could be integrated into our new quiz. And of course it now runs on all devices. The Flash/CreateJS pipeline certainly has its hiccups, but so far I haven’t found anything that works better.
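Those state-setting extensions amounted to a thin wrapper: exported CreateJS MovieClips expose `gotoAndPlay(label)`, so mapping quiz states to frame labels is all it takes. The clip below is a stub standing in for a real exported MovieClip, and the state names and labels are hypothetical:

```javascript
// Attach a setState(name) method that jumps the clip to a labeled frame.
// states: map of state name -> frame label in the exported timeline.
function makeStateful(clip, states) {
  clip.setState = function (name) {
    const label = states[name];
    if (label === undefined) throw new Error("unknown state: " + name);
    this.gotoAndPlay(label); // real createjs.MovieClip API
    this.currentState = name;
  };
  return clip;
}

// Stub in place of an exported MovieClip, so the sketch runs anywhere:
const clip = { gotoAndPlay(label) { this.lastLabel = label; } };
makeStateful(clip, { idle: "idle_loop", correct: "answer_right", wrong: "answer_wrong" });

clip.setState("correct");
console.log(clip.lastLabel); // → "answer_right"
```

Keeping the quiz logic talking in state names rather than frame labels meant the animation timelines could be re-exported without touching the application code.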