History of Lumo Play, Part 2

Introduction

Chris-Avatar

(Click here for part 1)

Hello there! I'm Chris Iverach-Brereton, Lumo Interactive's CTO. What you're about to read is the second part of a retrospective tech-blog about the development of the newest version of our Lumo Play interactive projection software, which launched its latest update this week. To celebrate this achievement, I'll be posting the entire story over a series of blogs in the upcoming weeks.


From Robot Control Systems to Computer Vision

Do you remember what you were doing on February 17, 2015?

I do. That was my first day working at Lumo Interactive. I was recruited while I was finishing up my master's degree in computer science; one of Dr Baltes' former students was an employee of Lumo at the time and needed someone who could assume the role of computer vision developer on a new project.

For my degree, I worked with humanoid robots, so computer vision played pretty heavily in what I did on a daily basis. My main focus was actually on control systems and robot balancing, but over five years of competing in the FIRA HuroCup and RoboCup competitions, I'd had to do a lot of vision programming, too.

The goal of the new project was to build a motion-tracking system using an IR camera for the Lumo Play projector toy. The minimum viable product at the time was a clone of the vision system used in Po-Motion v1.x. To call this system basic would be an understatement. It worked by simply taking the current frame from the camera, subtracting the previous frame, thresholding the result, and finding bounding boxes around whatever was left.
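That pipeline is simple enough to sketch in a few lines. Here's an illustrative toy version in Python (not the production code — the real system worked on live camera images, and grouped the changed pixels into separate blobs rather than one box):

```python
# Sketch of the original frame-differencing approach: subtract the
# previous frame from the current one, threshold the difference, and
# box whatever changed. Frames here are plain lists of grayscale rows.

def diff_bounding_box(prev, curr, thresh=30):
    """Return (x_min, y_min, x_max, y_max) of changed pixels, or None."""
    xs, ys = [], []
    for y, (row_p, row_c) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(row_p, row_c)):
            if abs(c - p) > thresh:   # thresholded frame difference
                xs.append(x)
                ys.append(y)
    if not xs:
        return None                   # nothing moved
    return (min(xs), min(ys), max(xs), max(ys))
```

For brevity this collapses all motion into a single bounding box; the actual system produced one box per region of motion.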

Confused? That's okay. Here's what the original vision system looked like:

po-motion UI 2011


Obviously this system wasn't good enough. Sure, it told you "something moved here," but it couldn't tell you how fast, nor in what direction. I made it my first order of business to come up with a better algorithm that could do more.

Vision Server Improvements

My first few attempts were reasonably successful. Using OpenCV, I was able to mock up some tech demos that used Hough line detection to detect the leading and trailing edges of objects as they moved. By matching lines across consecutive frames, I was able to roughly calculate speed and direction of travel. A pretty good start after two weeks on the job!

week2_finger_tracking.png
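To give a rough idea of the matching step, here's a hypothetical sketch: each detected line segment is reduced to its midpoint, each midpoint is matched to the nearest midpoint from the previous frame, and the displacement between the two gives speed and direction. (The real prototype used OpenCV's Hough line detector; this greedy nearest-neighbour matching is my simplification.)

```python
import math

# Simplified sketch of matching line segments across consecutive
# frames to estimate each object's speed and direction of travel.

def midpoint(seg):
    (x1, y1), (x2, y2) = seg
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def estimate_motion(prev_segs, curr_segs, dt=1.0):
    """Return a list of (speed, heading_radians) for each current segment."""
    motions = []
    prev_mids = [midpoint(s) for s in prev_segs]
    for seg in curr_segs:
        cx, cy = midpoint(seg)
        # greedy nearest-neighbour match against the previous frame
        px, py = min(prev_mids, key=lambda m: math.hypot(cx - m[0], cy - m[1]))
        dx, dy = cx - px, cy - py
        motions.append((math.hypot(dx, dy) / dt, math.atan2(dy, dx)))
    return motions
```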

As an added bonus, the new system could also work for detecting stationary objects. By comparing the current frame to a pre-captured reference frame, we could easily detect changes. By using the endpoints of the line segments generated by the Hough transformation, I could build a convex hull to describe a polygon wherever there was a stationary object.
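Building a polygon from a pile of segment endpoints is a classic convex-hull problem. Here's a sketch using Andrew's monotone-chain algorithm (the specific hull algorithm is my assumption; the original only computed a convex hull of the endpoints by some means):

```python
# Build a convex polygon around a stationary object from the endpoints
# of its Hough line segments, using Andrew's monotone-chain algorithm.

def cross(o, a, b):
    # z-component of the cross product (O->A) x (O->B)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull left-to-right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right-to-left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # drop duplicated endpoints
```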

Those of you with some experience programming may now be asking yourself "why are you using a Hough transformation to produce convex hulls?" and "why are you being inconsistent, calculating the edges for moving objects, but polygons for the stationary ones?"  These are indeed excellent questions, observant reader. The answer to both is "because this was an evolving prototype."

With a little extra work I was able to clean up the algorithms and make them more efficient, creating polygons around the moving and the stationary objects, while still preserving the speed and direction information for the moving objects.

So to summarize, here's a table of the improvements I was able to make to the vision system:

| Feature | Supported on original software? | Supported on new software? |
| --- | --- | --- |
| Position of moving objects | Yes (bounding box only) | Yes (outline of object) |
| Direction of travel | No | Yes |
| Speed of travel | No | Yes |
| Stationary object detection | No | Yes |

Object Detection

week4_object_tracking.png

A week later, I had a new prototype ready to show to the team. Instead of a whole bunch of line edges, I had polygons that moved and changed shape. The algorithm seemed good enough that I could now port it from the C++/Linux prototype I'd been working on into native Java code to run on Android.

Developer Life Advice: never be afraid to build prototypes in the language you know best, even if it won't be the production language. At the time, I'd been developing with C++ on Linux full-time for five years while doing my master's, so I could bang out C++ code using OpenCV super fast. But I was a complete Android noob. So I made the prototypes on Linux, where I could debug the algorithms without worrying about other quirks the system might have. This wound up saving me a lot of time in the end. (And the Linux prototypes make a reappearance later on in this story anyway.)

Choosing an OS - First Try! (But, not really.)

A frustratingly common part of startup life, especially in software, is having to adjust your roadmap really quickly in response to hardware advancements. Sometime around Q4 2015, Gigabyte released its affordable BRIX Projector, a NUC-style mini PC with a built-in projector.

Brix.jpeg

This device was literally just one sensor away from being the entire Lumo Play toy. And since it used an Intel-family processor, we could run Windows on it.

At the time, our hardware engineer was struggling to write Android drivers for the IR-capable camera modules we'd sourced from China (which wasn't what he was trained to do). I had successfully written my vision algorithms for Android, but was struggling to expose them as a service available to our game APKs. In short, we were all hopeful about the opportunity to move away from Android.

Unfortunately, Windows licenses are expensive. Prohibitively so for a project of our scale and target audience (namely families and schools who wanted to hang a Lumo Play projector on the wall so their kids could play with it).

Linux costs less (it's free!) and runs just fine on these devices. And because it's open source, we could customize the operating system as much as we might need to. So, as promised, the previously mentioned Linux prototype vision system code appears again, in the first of two encores.

With a little polish I was able to convert my vision prototype into a functional vision system for Lumo Play. It ran as a headless process, broadcasting the vision data over a local UDP socket to the games. The games themselves could be written in Unity, and built to target 64-bit Linux. (The distro of choice, for those who are curious, was the latest Debian Stable at the time.)
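The transport itself is simple: fire-and-forget datagrams on the loopback interface. Here's a minimal sketch of that pattern in Python (the port number and JSON message format are made up for illustration; the actual protocol and serialization were Lumo Play's own):

```python
import json
import socket

# Sketch of the vision-server transport: a headless vision process
# pushes tracking data over a local UDP socket, and each game listens
# on a known port.

VISION_PORT = 9999  # hypothetical port

def send_vision_frame(objects, port=VISION_PORT):
    """Fire-and-forget one frame of tracking data as JSON over UDP."""
    msg = json.dumps({"objects": objects}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg, ("127.0.0.1", port))

def receive_vision_frame(port=VISION_PORT):
    """Block until one frame of tracking data arrives (game side)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("127.0.0.1", port))
        data, _addr = sock.recvfrom(65535)
    return json.loads(data.decode("utf-8"))
```

UDP is a good fit here: the data is small, per-frame, and perishable, so a dropped packet just means the game uses the next frame's data instead.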

Coming from an academic background working in a lab with robots, I had very little GUI design experience.  When you're building tools to achieve highly-technical goals, intended to be used by tech-savvy users, you can cut a lot of corners when it comes to aesthetics.  We're okay opening a text file and changing some numbers.  Or opening up a command prompt and typing commands.  As long as your program works, it doesn't have to look pretty.

This is not the case with consumer software.  Especially not for software aimed at Lumo's non-technical target audience.  So I had to ditch my configuration files and build a functional GUI.  My first GUI was built using Qt with a free set of icons I'd used years earlier on a web application I worked on shortly after finishing my undergraduate degree.

To say the GUI looked a little dated would be an understatement. It screams early-2000s with those colourful icons.  But it was functional and did the job. Not bad for a first draft, but clearly something that needed more work.

prism_qt_gui_v1.png

Jocelyne Le Leannec was in the process of updating our brand guidelines, and pointed me toward the more modern icon set we were using.  Giving my first-draft GUI a facelift helped, but it was clear to both of us that there was still a long way to go.

prism_qt_gui_v2.png

At this point, we had the core functionality of the software locked down pretty well, and we could see from these early drafts what controls the user would need to have access to in order to calibrate the system.  We just needed to sit down and hammer out a good design that was easy to use (well, as easy as possible anyway), and that looked like it belonged in our company's family of software.

Developer Life Advice: I know it's hard to hear from someone that your software is ugly, or that it's confusing, or that it's hard to use.  But swallow your pride and listen to the designers -- those people with no programming experience, but who are nonetheless qualified to tell you what your program needs to look like and how users will interact with it.  You have skills they don't have, and they have skills you don't have.  Between the two of you, you'll be able to make awesome things.  Yes, there will be the odd heated moment, and it really sucks when the designer tells you to nudge that button over another 2 pixels when you've still got giant gaping holes in your program's core functionality. But prioritize, communicate, and learn to accept the designer's criticism.

Learning to work with a design lead was a steep learning curve for me.  And, to be honest, it's still something I struggle with occasionally.  But it's totally worth doing if you want non-developers to enjoy using your software.

Not All Prototypes Make It To Market

Around this time, we realized that the cost of hardware for an all-in-one unit, even using a free OS and a low-cost IR camera, was not something we could finance with our crowdfunding campaign. We offered refunds, slowly distributed hand-built solutions, and gave free software to our extremely patient backers. I personally built the case for one of our promised developer kits in my dad's workshop. It was shipped to a client in Malaysia, who installed it in a science center. This is probably the first Lumo Play projector prototype ever shipped.

the_dev_kit.jpg

After struggling to develop a BOM for a consumer market, and realizing we were just a few years too early, our team made a decision to focus instead on our largest user demographic: Windows desktop users. We already had a working product in the market, but that piece of software (Po-Motion v2.x) was starting to show its age, and the original 'freemium' business model wasn't generating enough revenue to be sustainable.

We decided to take what we'd learned building Po-motion, and develop a completely new, much better platform from the ground up.

Next Post - Back to Windows

In my next post, I'll cover what I learned during this process, and describe our foray into the world of smart cameras.


Lumo Play 4 is the only app market in the world for motion, gesture, and touchscreen games and effects. Visit our website to learn more.