[RFC] Reinventing Project Link with a SmartPhone

So we want to build products. Fair enough. Let’s see how we can most efficiently build a first product that could serve as a base for going forward.

The more we advance with thought experiments, the more I have the feeling that the features of the User Agent we are discussing are exactly the sum of:

  1. What we’re doing at the moment with Taxonomy, Adapters, etc.
  2. What is already possible with a SmartPhone.

If I am right (I may not be), all our current brainstorming on the User Agent is bound to lead us to reinvent the SmartPhone with a different form factor and a few additional pieces of software and connectivity – and with much less time for R&D.

By this reasoning, I see essentially two ways forward:

  1. We go ahead and progressively reinvent the SmartPhone with a different form factor. We might need to, or be able to, reuse some bits and pieces of B2G OS along the way.
  2. We prototype a hardware device + app that connects to an existing Android SmartPhone and turns it into an IoT powerhouse.

If our main criteria are to be startup-like and fast to product, option 2 seems by far the best way forward. I can already think of useful applications that we should be able to prototype pretty quickly by going in this direction.

Any thoughts on this, team?

Cc-ing @psanketh, @jmccracken, @mcav, @fabrice, @julienw.

I think the platform choice (Raspberry Pi vs Android) is not so relevant (the user wouldn’t even notice if the Box ran Android inside), but the question is rather between:

  1. an always-on Box inside your house without its own user interface (you talk to it either via our hosted web app + discovery services or via a built-in always-on microphone)
  2. “let’s just build a smartphone app”

I think the reasons why we did not (so far) go for the (more obvious) option 2 are:

  • it can’t run timer- or event-triggered Thinkerbell scripts if your smartphone is outside the house
  • we want to use Rust, which IIUC is harder to cross-compile to Android (NDK?) & to iOS than to Raspbian
  • Android/iOS might not allow us the same low-level radio access that Raspbian does (see the sketch after this list).
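
For the radio point, here is a tiny hypothetical illustration of what that gap looks like in Rust – the device path, the function, and the error handling are invented for the example, not actual Link code:

```rust
// Hypothetical illustration of the radio-access gap. On Raspbian a
// Z-Wave/ZigBee USB dongle typically appears as a serial device we
// can open directly; on Android/iOS, equivalent access would have to
// be plumbed through platform APIs (e.g. JNI via the NDK).

use std::fs::File;
use std::io;

#[cfg(target_os = "linux")]
fn open_radio() -> io::Result<File> {
    File::open("/dev/ttyACM0") // illustrative device path on the Pi
}

#[cfg(not(target_os = "linux"))]
fn open_radio() -> io::Result<File> {
    Err(io::Error::new(
        io::ErrorKind::PermissionDenied,
        "raw device access needs platform-specific plumbing here",
    ))
}

fn main() {
    match open_radio() {
        Ok(_) => println!("radio device opened"),
        Err(e) => println!("no direct radio access: {}", e),
    }
}
```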

Let me clarify what I had in mind. I was suggesting we combine a physical SmartPhone (the one the user already has) and an always-on Box (which could be made to look like a SmartPhone dock).

Let me detail this.

We need a phone

For one thing, the ideas we have been bouncing back and forth recently about making the Foxbox your user agent pretty much require the same level of knowledge of your life as your SmartPhone has (e.g. your alarms, your contact list, etc.). How can we do that better than by using that very same SmartPhone?

Moreover, connecting with the SmartPhone gives us a path towards connecting existing app ecosystems (e.g. Uber, Facebook, etc.) without having to reinvent the wheel. I suspect that a number of existing SmartPhone apps (not all) could work out of the box in a Foxbox world, which would make the Foxbox immediately useful. This includes telephony applications. Just having the ability to verbally ask the SmartPhone “who’s calling?” – without having to pick up the phone – or to tell it “I’ll pick up the call in the living room” and have it route the call to a dedicated device might be considered a killer app by some users.

Finally, we pretty much need a SmartPhone in the first place to provide remote control of your house while you’re away.

We need a box

On the other hand, we don’t want users to have to fetch their phone every time they wish to turn off a light. A box, which is always on and part of your furniture, will do the trick much better, regardless of whether it is connected to a SmartPhone.

As you point out, a SmartPhone also won’t support Thinkerbell-style always-on monitors. For this, we need a box.
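
To make that concrete, here is a minimal sketch of a timer-triggered monitor loop – all names are hypothetical, not the actual Thinkerbell API – that has to keep running around the clock, which only an always-on box can guarantee:

```rust
// Minimal sketch of an always-on monitor loop (hypothetical names,
// not the real Thinkerbell API). The point: this loop must keep
// running even when the user's phone has left the house.

use std::thread;
use std::time::Duration;

struct Rule {
    name: &'static str,
    condition: fn() -> bool, // e.g. a timer or sensor check
    action: fn(),            // e.g. send a command to a device
}

// Stubs standing in for real sensor/actuator integrations.
fn is_past_sunset() -> bool { false }
fn turn_on_lights() {}

fn main() {
    let rules = [Rule {
        name: "evening lights",
        condition: is_past_sunset,
        action: turn_on_lights,
    }];

    loop {
        for rule in &rules {
            if (rule.condition)() {
                println!("firing rule: {}", rule.name);
                (rule.action)();
            }
        }
        thread::sleep(Duration::from_secs(60)); // re-check every minute
    }
}
```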

You’re absolutely right that we should be careful not to reinvent the smartphone from scratch. And now that voice control has been named the number one priority for Project Link, I think we should admit that if the goal is voice-activated services, neither the Android world nor the iOS world probably needs us: people already use their smartphone for that. The smartphone OS would be the one that allows the Uber app to be voice-controlled, and on smartphones there’s probably no room for a “browser” in between.

But as you already say,

Exactly! That may be a reason for people to put an always-on microphone in their living room, so: new market. :slight_smile: Unlike smartphones (with their duopoly market), this new device can run Mozilla software and forward your voice commands to the relevant voice API on your behalf (switched by the wake word). Unless a wake word is detected, no sound from your living room is sent to any cloud service (so, privacy by default). Only when you say, e.g., “Yahoo, what’s the weather like” will it send the sound to (in that case) Yahoo for processing (just like Firefox submits your search queries to your default search engine). It can do this for 4 or 5 different hosted speech APIs.
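
As a rough sketch of that dispatch logic (the endpoints are invented, and a real implementation would do acoustic wake-word spotting rather than string matching on a transcript):

```rust
// Sketch of the "voice browser" idea: nothing leaves the box unless
// a known wake word selects a backend. All URLs are made up.

use std::collections::HashMap;

fn main() {
    let mut backends: HashMap<&str, &str> = HashMap::new();
    backends.insert("yahoo", "https://speech.yahoo.example/query");
    backends.insert("cortana", "https://speech.microsoft.example/query");

    // Stand-in for a locally transcribed utterance.
    let utterance = "yahoo what's the weather like";

    // Forward only if the first word is a registered wake word;
    // otherwise the audio never leaves the living room.
    match utterance.split_whitespace().next().and_then(|w| backends.get(w)) {
        Some(url) => println!("forwarding utterance to {}", url),
        None => println!("no wake word detected; audio stays local"),
    }
}
```

The privacy-by-default property falls out of the structure: audio is only ever transmitted on the `Some` branch.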

Unlike the smartphone market, the market for such “voice browser devices” is still open. So, even more than the “IoT hub” product that we built in phase 1 as an abstraction layer over home automation protocols, I could see the “living room microphone” becoming the defining component of the Link box.