Custom Covid-19 Mask from a face scan.

There is a critical shortage of N95 masks, not just for ICU staff but for others working in essential services such as emergency healthcare, food production, city transportation, and nursing homes, to name but a few. Unlike most cotton masks, these masks are also designed to protect the wearer. I looked into putting my 3D printer to work building protective masks for essential workers, but the main problem with rigid 3D printed masks is that they do not fit tightly enough to the face.

Normally, a flexible filter material is molded on a special machine, and it is this flexibility that allows the mask, when worn tightly, to seal to the face. Regardless of the filter material used, a mask is only as good as its seal: otherwise the wearer will be inhaling and exhaling through leaks around the edges of the mask and not just through the filter. This is a particular problem with generic rigid 3D printed masks.

I came up with a solution to this problem using a face scan from a selfie video and began a personal project creating these masks for essential workers, friends, and family members. The video above was my first pitch; I have since improved the design. I abandoned the box filter in favor of a larger and more reliable pressure-fit screw cap, and I increased the size by about 50% to provide more surface area after switching to an improved filter material cut from nanofiber. This new material, branded as Filti, has been lab-tested for use as a face mask material and carries a MERV 16 rating with 95% filtration efficiency at 0.3 microns.

Read more Custom Covid-19 Mask from a face scan.

Cloning an irreplaceable antique

3D printed clone next to the original glass shade.

There is an antique lamp hanging in my kitchen that I managed to bump into several times when moving into my house. It is a very rare Art Deco slip-shade lamp, much beloved by my wife, so I removed each of its five glass shades and carefully set them aside to prevent damage. Regrettably, in the chaos of our move, the box they were placed in went out with the trash. After many unsuccessful attempts to find replacements, the naked lamp continued to hang above our kitchen table as an ever-present reminder of my terrible blunder.

My workplace had recently purchased a large-volume MakerBot Z18 3D printer. I am always on the lookout for new things to print, so I decided to print a new set of shades for the lamp as a Christmas present for my wife. Since I had matching shades on other fixtures, I figured I could capture one with photogrammetry and then print an exact replica using a special filament.

Read more Cloning an irreplaceable antique

Alexa Costume

Alexa is a constant presence in my home. She is the one piece of technology that the kids control with impunity. They abuse her with a never-ending barrage of demands to tell jokes and to play their favorite songs again and again. My son’s first words weren’t ‘Mama’ or ‘Dada’; rather, they were shouts commanding Alexa, just like his older sister does. And so it made perfect sense when my five-year-old daughter, whispering into my ear so as not to be overheard, informed me that she wanted to dress up as Alexa for Halloween this year.

I was on board with her idea from the start. We began with a trip to Home Depot and found a 10” diameter Sonotube form for pouring concrete footings. With a little spray paint, some mesh fabric, and a stencil we had our basic costume. But no Alexa costume could truly be complete without LED lights animating to attention when the “Alexa” wake word was spoken.

Read more Alexa Costume

Positioning and occluding objects in AR with photogrammetry

I’ve been working on a demo project that involves putting a pair of sneakers on a statue in a mobile AR app. This presented some significant challenges typical of this type of project: registering AR tracking to the physical space, precisely fitting the shoes around the feet, and ensuring that they were accurately occluded by the legs.

First, here is a quick demo video. Apologies for the choppy frame rate. The screen recording process really hurts performance.

For this demo, I did not want to rely on a printed marker. Since the recent beta of ARKit 2 supports 3D trackable objects and a serializable world map, my initial hope was that I would be able to use the natural features in the environment for localization. But after a few tests, I could tell that this was not going to work well because the statue, cast in bronze, has minimal contrasting surface detail and a fairly reflective surface. Further, relocalizing to a saved point cloud of the surrounding environment was slow, did not work reliably in a dynamic outdoor environment, and lacked the precision required for this effect.

Read more Positioning and occluding objects in AR with photogrammetry

PBR sneakers in Substance Painter

Substance Painter is one of my favorite tools. If you aren’t familiar with it, it’s best described as Photoshop for 3D: it lets you procedurally and manually paint the surface maps that define qualities such as color, roughness, or how metallic an object is. It’s a tool I only use periodically and I’m far from a pro, but I was able to take this 3D model of a sneaker built by one of my colleagues and create a set of PBR textures to give it detail and make it look more photorealistic.

Overall I am happy with how this came out. It was a difficult model to UV unwrap, which in my experience has been the most difficult aspect of texturing. As a result, there are some noticeable projection errors along the upper seam of the sole and the canvas edges.

 

AR try-on demo for jewelry

Here’s a Unity demo I created for a recent pitch demonstrating how we could use AR to try on jewelry. I used Vuforia cylinder targets and a simple script to sample and average skin tone in an attempt to cover up the paper tracker.

Along with what I believe is a bug in how Unity handles macro focus mode (you’ll notice autofocus wreaking havoc), I’m not entirely satisfied with how I am matching the skin tone. I’m sampling and averaging skin values along the edges of the marker, and using the surface normal at each sample position to include only pixels that are visible to the camera and that fall within a certain angle of view. It works OK, but with the variation in skin tone and how it changes with light angle, there are color shifts that are far from convincing. I think the better solution is to employ a fragment shader that displaces the live camera pixels, pinching them inward to obscure the tracker.
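To make that concrete, here is a rough sketch of the sampling logic. This is not the actual Unity script from the demo, just a simplified NumPy version where the sample positions, surface normals, and view directions are assumed to have already been computed from the tracked marker:

```python
import numpy as np

def average_skin_tone(image, sample_px, normals, view_dirs, max_angle_deg=60.0):
    """Average the color of edge samples that actually face the camera.

    image         : HxWx3 RGB frame from the device camera
    sample_px     : Nx2 integer pixel coordinates along the marker edge
    normals       : Nx3 unit surface normals at those sample positions
    view_dirs     : Nx3 unit vectors from each sample point toward the camera
    max_angle_deg : reject samples angled away from the camera more than this
    """
    # Cosine of the angle between each surface normal and its view direction.
    facing = np.einsum("ij,ij->i", normals, view_dirs)

    # Keep only samples visible to the camera and within the allowed angle.
    keep = facing > np.cos(np.radians(max_angle_deg))
    if not np.any(keep):
        return None

    rows = sample_px[keep, 1]
    cols = sample_px[keep, 0]
    return image[rows, cols].mean(axis=0)
```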

Hopefully, I’ll be able to revisit this soon so I can figure out how to get that working.

3D scanning objects for AR

On behalf of a retail client, I’ve been exploring techniques for 3D scanning real-world objects for the purpose of displaying them in AR. In this blog post I am going to go over the full asset creation process from start to finish, outlining how to take an existing physical object and turn it into a high-quality and performant 3D asset. While the output is easily adaptable to any platform, I’m going to focus on preparing a model specifically for mobile AR by creating a .usdz file compatible with Apple’s recently announced AR Quick Look feature released with iOS 12.

If viewed on iOS 12, the following image should show an AR icon.

To demo this process I captured a very sacred object: ‘Edie’, a ceramic statue of a pig that once belonged to my late grandmother. The main technique I used to capture this object is called photogrammetry. By taking many overlapping pictures of the same object from varying angles, and knowing the characteristics of the camera lens, software can solve how those photos fit together by detecting the same details across multiple images. From there it can recreate the geometry and texture of the captured object in 3D.
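As a tiny illustration of that first step, detecting the same details across overlapping photos, here is what basic feature matching looks like with OpenCV. The photogrammetry packages do far more than this (camera calibration, bundle adjustment, dense reconstruction), and the file names below are just placeholders:

```python
import cv2

# Two overlapping photos of the same object (placeholder file names).
img1 = cv2.imread("pig_view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("pig_view_02.jpg", cv2.IMREAD_GRAYSCALE)

# Detect distinctive features in each photo and describe them.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors between the two views; correspondences like these are
# what a photogrammetry solver triangulates into 3D positions.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} candidate correspondences between the two photos")
```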
Read more 3D scanning objects for AR

Exporting animated characters from Maya to Facebook’s AR Studio

In the process of planning for an upcoming project, I’ve encountered two rather significant limitations with Facebook AR Studio: its restriction on the number of bones that a mesh can be bound to, and the fact that joint animations are not imported with the 3D asset. I’d like to share my solutions to both of these issues.

Exporting joint animations:

UPDATE 5/18/2018: AR Studio now natively supports exported animations as of release v37.

TL;DR: I wrote a Maya joint animation export script along with a corresponding tutorial video.

I will cover what is likely the most common issue first: exporting joint animations with a rigged character. As of v32, AR Studio does not import joint animations on rigged meshes. This is a pretty big limitation: if you build, rig, and animate your character in a 3D authoring tool (in my case Maya), you will not be able to bring those animations into AR Studio along with your model. It’s just not going to work.
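For reference, the general idea behind an export script like this is simply to bake each joint’s rotation at every frame and write it out in a format that a playback script inside AR Studio can read. The sketch below is not my actual script (that is linked above), just a minimal illustration using Maya’s Python API, with the output path and JSON layout as placeholder assumptions:

```python
# Runs inside Maya's Python interpreter (maya.cmds is only available there).
import json
import maya.cmds as cmds

def export_joint_rotations(path):
    """Bake per-frame joint rotations to a JSON file.

    Simplified sketch: a real export also needs translations, the joint
    hierarchy, and whatever layout your AR Studio playback script expects.
    """
    start = int(cmds.playbackOptions(query=True, minTime=True))
    end = int(cmds.playbackOptions(query=True, maxTime=True))

    data = {"startFrame": start, "endFrame": end, "joints": {}}
    for joint in cmds.ls(type="joint"):
        frames = []
        for frame in range(start, end + 1):
            # Sample the joint's rotation channels at this frame.
            rx = cmds.getAttr(joint + ".rotateX", time=frame)
            ry = cmds.getAttr(joint + ".rotateY", time=frame)
            rz = cmds.getAttr(joint + ".rotateZ", time=frame)
            frames.append([rx, ry, rz])
        data["joints"][joint] = frames

    with open(path, "w") as f:
        json.dump(data, f)

export_joint_rotations("/tmp/joint_animation.json")
```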

Read more Exporting animated characters from Maya to Facebook’s AR Studio

ARKit Object Persistence under the hood

Update: Since I wrote this, extended tracking was added in Vuforia 7.2 and ARWorldMap was introduced in ARKit 2.0. While there may still be some viable use cases for registering AR with OpenCV markers, those newer features are likely better solutions for localized or persistent AR. I have also posted an example of the same technique in 100% native iOS code using ArUco.

With Apple’s iOS 11 release of ARKit and Google’s preview release of ARCore, it’s become much easier for developers to ship AR apps that don’t require printed markers. This has made free-movement, world-scale AR possible on many current mobile devices and has led to a rise in impressive and useful new AR apps in the App Store. When building my own AR apps, there is one issue I kept coming up against: not only did I want to anchor an object to the real world, I wanted to register it to that space in such a way that I could persist object positioning on another device or at a later point in time. If I wanted to build a real-time multiplayer AR game that shares the same space with another device, or give my users the ability to load a previous furniture arrangement in a room-designer app, I was stuck, because ARKit’s SLAM tracking is not registered to real-world space.

Read more ARKit Object Persistence under the hood

ARKit object persistence demos

I’ve been working on a system to maintain object positioning between ARKit sessions. That means a user can place AR objects around a room and then return at a later time or on another device and see the objects anchored to the same locations. My system relies on printed markers that are processed with OpenCV to quickly register AR tracking to the initial position of a device in real-world space. Once that initial position is determined objects can be saved or loaded from a database and accurately positioned around the room. I think techniques such as this can go a long way to making AR more useful. I’m working on a much longer post on the actual implementation of this, but for now, I wanted to share two quick demo videos.
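The demos themselves are built in Unity, but the core registration step is easy to show on its own. Here is a simplified Python sketch using the classic cv2.aruco API from opencv-contrib: detect a printed ArUco marker and recover its pose relative to the camera. The camera intrinsics and marker size below are placeholder values, not the ones used in the apps:

```python
import cv2
import numpy as np

# Placeholder camera intrinsics and marker size; real values come from
# device calibration and the printed marker's physical dimensions.
camera_matrix = np.array([[1000.0, 0.0, 640.0],
                          [0.0, 1000.0, 360.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)
marker_length_m = 0.10  # 10 cm printed marker

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
frame = cv2.imread("camera_frame.jpg")

corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
if ids is not None:
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_length_m, camera_matrix, dist_coeffs)
    # rvecs/tvecs give the marker pose in camera space; inverting that
    # transform registers the AR session to the marker's real-world
    # location, so anchors saved relative to the marker can be reloaded.
    print("marker ids:", ids.ravel(), "translation:", tvecs[0].ravel())
```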

The first demo involves interior wayfinding. Ever been to a large home center store and just wanted to find that one item quickly? Well, here is a solution that allows the shopper to simply scan an aisle marker and follow a path to the product of their choice.

The second demo involves annotating a space in AR. I think there are a lot of uses for this, but for now, consider the house guest staying at an Airbnb. Finding things around an unfamiliar kitchen can be a real pain, and here is a way AR can make this a lot easier.

Both of these are bare-bones demos but are still designed to be fully standalone apps. Users are able to create and print markers as well as generate their own paths and annotations and save those to the cloud all from within a single mobile app. We’ve tested similar versions of this app at work to help people find conference rooms, help party planners bang out their shopping list, and even teach owners about their new car.

Built with Unity using ARKit, OpenCV, and Firebase.

Building a Snapchat Lens with Lens Studio

Just a few days ago Snapchat released Lens Studio, giving developers and 3D artists the ability to create and deploy their own AR lenses. Today I sat down, had a crack at it, and built my first Snapchat lens: a very simple space scene inspired by perspective street art.

Lens Studio is a bare-bones IDE with a lightweight JavaScript-based scripting language and a familiar component-based GUI. All code is bound to scene objects through a small set of events, and the developer gets access to touch interactions, camera position, animation controls, and reasonable API access to the necessary components.
Read more Building a Snapchat Lens with Lens Studio

WebAR

There is a small feature included with iOS 11 that may prove to be as important for AR as the more widely promoted release of ARKit. Mobile Safari now supports WebRTC, making it the last major mobile browser to fully adopt the spec. WebRTC is an open project that offers a standardized API to give browsers real-time communication capabilities. This is significant for AR because it means we finally have real-time access to the camera in all mobile browsers. One of the challenges with AR has always been the need to deliver the experience wrapped up in a native app. Now, with Web AR, it’s as simple as sharing a URL. In fact, it’s even easier than that: another feature added to iOS 11 was native camera support for QR codes. So here is a quick demo. Using almost any mobile phone, aim it at this marker and voila: AR on the web.

Of course, for all of my initial enthusiasm surrounding Web AR, it’s still somewhat of a disappointment when compared against native capabilities from libraries such as Vuforia.

Read more WebAR

Experimenting with AR at world scale

Since the release of the iOS 11 beta, most of my free dev time has been spent exploring ARKit. I recently took some assets from an earlier VR project for Dunkin and optimized them for mobile AR. I started out displaying the asset at dollhouse size. This was more of an experiment to see where I might start to hit performance limitations, and I thought it might be a great way of displaying some of the heat map data from our VR gaze-tracking tests.

Then I got thinking, why not see what this looks like at world scale? So I re-exported at 1:1 and dropped a full-size Dunkin in my back yard!

The video here looks a little choppy. On the device, a 6s Plus, the frame rate was actually decent, but the addition of screen recording really hurt performance. There are also some interesting observations about AR at world scale. I was pretty careful when recording this to avoid issues that would lead to poor tracking, but it’s definitely a consideration when dealing with larger-scale AR objects. Often the camera is pointed up toward the sky, and without any nearby objects to create parallax, ARKit appears to have more difficulty tracking. In this situation, I believe it is relying solely on inertial data, which can lead to some slippage in the anchored object. This is compounded by the distances involved at world scale: a small error in tracking ends up as a larger error in positioning the further away an object is placed.
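To put a rough number on that last point, the apparent drift of an anchored object grows linearly with its distance for a given amount of angular tracking error. The one-degree figure below is purely illustrative, not a measured value:

```python
import math

# Illustrative only: fixed angular error, apparent drift scales with distance.
angular_error_deg = 1.0
for distance_m in (2, 10, 30):
    shift_m = distance_m * math.tan(math.radians(angular_error_deg))
    print(f"{distance_m:>3} m away -> ~{shift_m * 100:.0f} cm of apparent drift")
```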

Still, this is pretty cool, and it turns out to be a useful way to explore a larger virtual environment without the need for strapping on a clunky VR headset.

 

Paper App

Recently our team has been working with one of our pharmaceutical clients to design a system to help their patients, mainly the elderly, manage complex dosage schedules for some of their medications. Adherence had been a big issue for them, and this particular regimen involved administering three different types of medicine multiple times a day, each on a varying schedule that tapered off at a different rate over the course of four weeks.

Initially, we were tinkering with ideas for apps or cheap disposable electronic devices, but due to a number of factors around cost and regulatory issues, we soon realized we’d need to come up with a purely analog approach. In the end, it turned out to be a really fun challenge.

Read more Paper App

360 Running Man

I shot this 360 video in the office yesterday morning. I was not previously familiar with the meme, but for a quick impromptu shoot I am happy with how it came out. I haven’t shot any 360 video since experimenting with 360 stereoscopic video a while back, and I had forgotten how much easier it is to get a clean stitch with the smaller six-camera mono rig. There was a lot of action in tight quarters, so the trick was to align the seams between the cameras with areas that are not too close to the viewer and to limit action on or across those seams. If you look very closely you can see Ramon walk across one of those stitch lines as he is delivering the letter to Karen, but it was far enough away that it is barely noticeable. I did the shot in two takes, one with and one without the dancers. I was very careful not to move the rig at all between takes, so the reveal was simply a matter of fading in a masked layer.