MWC 2018: BMW Autonomous Car and Customer Experience

BMW Personal CoPilot

I had read somewhere about Elon Musk’s digital transformation ideas around passenger transport and the Tesla car. I recall him saying that once the electric car and level-5 autonomous driving are achieved, the concept of owning a car, what you do with it, the whole use of that resource called your car, totally changes. And BMW are helping me understand that. This week, during the Mobile World Congress 2018, BMW have showcased the Customer Journey for their brave new autonomous vehicle. Video below (short intro in Spanish, English from then on).

Main takeaways (apart from seeing the steering wheel eerily turning itself):

  • There is a companion app.
  • Through the app you rendezvous with the car, including walking directions so you can find it.
  • You set the route and get in the car (in no particular order).
  • Sit back and relax!
  • There is an infotainment system; you can use it, or the app, during your trip.
  • You can change your destination halfway there. Obviously, the car won’t stop in its tracks, as seen in the video; it will just smoothly recalculate the route.
  • The car will drop you off at your destination, and it will go park itself.
  • You can lend your car to friends. They’ll need the app, and you as the owner will manage what you lend them the car for from your app.

I feel so lucky to live in these times – I now know that when I become too old to drive (not that I drive much these days anyway!), that won’t hinder my mobility one iota.

It’s also easy to imagine a mashup of what I have just explained and what Uber / Lyft currently do. You can think of your car as what you use for your own transportation when you need it, and the rest of the time, instead of sitting idle inside a garage, the car can be out and about earning you some bucks.

That’s what Musk was talking about in that interview that stuck in my mind. And it’s great to know that there’ll be plenty of makes and models to choose from. Kudos to BMW for this!

 

MWC 2018: Google Assistant Demo

Google Assistant Pod

Alexa has a competitor: Google Assistant and the plethora of home pods that free it from the boundaries of the phone’s not-so-good ambient mics.

I was fortunate to attend a demo this morning at the Mobile World Congress 2018. You can watch the videos below.

My takeaways:

  • Amazon has a one-year head start over its competitors, but not more.
  • The two assistants offer practically the same functionality, but Alexa has been around for longer and has more skills (apps).
  • The only area where Google Assistant has an advantage over the Amazon Echo family and Alexa is that it can switch languages on the fly, and French is already implemented.

It’s time to develop skills in such a way that they are easily portable to other platforms.
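If I had to sketch what that portability could look like, it would be something along these lines: keep the business logic free of any platform-specific types and wrap it in thin adapters per assistant. This is only a hypothetical illustration (the function names and the “Name” slot / “name” parameter are invented), based on the Alexa custom skill and Dialogflow v2 webhook request/response shapes:

```python
# Hypothetical sketch: platform-neutral business logic plus thin adapters.
# Nothing here is an official API; names are made up for illustration.

def get_greeting(name: str) -> str:
    """Business logic: no Alexa or Google types in here."""
    return f"Hello {name}, nice to talk to you!"

def alexa_handler(event, context):
    """Adapter for an Alexa custom skill running on AWS Lambda."""
    intent = event["request"]["intent"]
    name = intent["slots"]["Name"]["value"]  # assumes a 'Name' slot in the interaction model
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": get_greeting(name)},
            "shouldEndSession": True,
        },
    }

def dialogflow_handler(request_json):
    """Adapter for a Dialogflow (Google Assistant) v2 webhook."""
    name = request_json["queryResult"]["parameters"]["name"]  # assumes a 'name' parameter
    return {"fulfillmentText": get_greeting(name)}
```

The point is simply that only the two adapters know about each platform’s JSON; the greeting logic could be reused anywhere.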

Enjoy the videos!

Sumerian Editor: tough cookie

Sumerian editor with cues

Following an age-old tradition, I’ve decided not to read the documentation and to dive straight into Sumerian instead. So here’s the first impediment I’ve encountered: it does not work in all browsers!

So far I’ve tried it in an old Firefox (i.e. not Quantum) running on my Linux laptop, and it did not work (but it warned me that it works better in newer versions).

On my Windows machine I struggle with Sumerian on Vivaldi. The scene kept loading (with the round thingy going round and round) for about 10 minutes – nothing. Until I wanted to take a screenshot to document this post: just by activating the Snipping Tool (the one that comes with Windows), it loaded up! Intriguing, to say the least.

The first experience is good. You have dialogs explaining all the elements in the editor, and tips for beginners so that you can do stuff easily and don’t get frustrated on your first go.

Next I’ll try with IE11 and Firefox Quantum and share the results. Edge and Chrome are banned software on my work laptop. Go figure!

 

Sumerian: building blocks

VR personae

So, it looks like Sumerian is something like this:

  1. A web-based editor to create and edit 3D scenes (via JavaScript, or via drag and drop thanks to a state machine that you can interact with graphically).
    1. First crazy idea: a VR application that replicates the functionality of this web-based editor, so you can go all Tom Cruise in Minority Report while working with Sumerian?!?
  2. A large repository of 3D objects / scenes that you can use with the above, or inherit from and enhance to fit your needs. If you have your own, you can also upload them (OBJ and FBX file formats).
  3. The result is stored on Amazon CloudFront (a CDN) and can be accessed via a WebVR-enabled browser and headset.
  4. It seems that you can also create AR applications, i.e. leverage your phone’s camera. In particular, ARKit applications for iOS phones are a possibility from the outset.

Now that I have a basic but solid mental model of what Sumerian is, I get to play with the editor. I’ll tell you all about it once I figure it out!

 

I’ve been invited to the Amazon Sumerian Preview

Gilgamesh, king of Uruk. Credit: DEA / Getty Images

My inbox had good news this morning. Turns out I’ve been invited to the Amazon Sumerian Preview. AWSome 😉 But what is Sumerian? In Amazon’s words:

“Amazon Sumerian is a set of tools for creating high-quality virtual reality (VR) experiences on the web. With Sumerian, you can construct an interactive 3D scene without any programming experience, test it in the browser, and publish it as a website that is immediately available to users.”

Not really voice, eh! But I believe this is the complement to Voice Services. Touch screens will be a thing of the past. Voice will be the natural user interface of choice for commands and small snippets of information, VR or AR will deliver more complex information back, and the good old Kindle reader will remain for the noble art of reading novels!

Here’s a good place to start learning Sumerian. And the whole documentation is here. Bear in mind that this documentation is live and it’s bound to change significantly.

Time to play!

Alexa is coming home for Christmas: available worldwide (>80 countries)

Berlin Hauptbahnhof

Good news! After the initial US release, followed by Germany, the UK and India, the family of Amazon Echo products can now be purchased in over 80 countries. The languages supported are German and English, the latter with three different locales (US, Britain and India). Here’s the official news.

For Europe, the German Amazon store has a bargain Echo Dot 2nd generation for just 35 euros.

Time for me to review my skills and make sure multilanguage support is well implemented, ready for when I can add my own language. Surely it’s just around the corner 😉

Who am I? Alexa introduces Voice Profiles

hand in hand

Privacy within the privacy of your home is a concern for users of the Amazon Echo and of any other voice assistant, especially since skills that sync it with your personal accounts were made available. I am okay with my spouse checking my calendar, but I would not be so happy to mistake her appointments for mine! Voice assistants also bring out an age-old problem that us techies detect very well, but others not so much: that of cardinality. An example: when you have one Echo (or multiple, linked Echos acting as one, if your home is bigger than mine!) but you don’t live alone, it’s quite likely that more than one human will speak to Alexa. Why does this one-to-many relationship between humans and machines represent a cardinality problem?

Let’s continue with the example. At home, my spouse and I use our Amazon Echo. We’re both non-native English speakers with distinct accents in English (we learnt the language on different continents). Our Echo sometimes goes crazy understanding one or the other. The machine learning element of Alexa must be very confused about supposedly the same human saying the same thing in such different ways at random moments in time! I bet Alexa would be happier if we could let her know that we’re two humans, if we could teach her to tell us apart, and then teach her to understand us better one by one.

If, on top of having different voices and different accents, you wish to use individual services’ information (personal calendars, mail accounts…), then you need to be able to somehow link those individual services with your Echo devices – again, a cardinality problem. Which one will Alexa use? Mine or my spouse’s? Why does it have to be only one? Can’t it be both?

Luckily, Amazon has just launched Voice Profiles to achieve this. You configure your Echo devices to pair with as many humans as needed, through the Alexa app on your smartphone. Here’s how:

  • The person whose Amazon account is linked with the Echo device must launch the Alexa app on their smartphone, go to Settings -> Accounts -> Voice, and follow the instructions.
  • The second adult in the household must do the following:
    1. When both of you are at home, launch the Alexa app on the primary user’s smartphone.
    2. Go to Settings -> Accounts -> Household profile and follow the instructions to set up the new user.
    3. On either of your smartphones, log in to the Alexa app with the credentials of the second adult in the household.
    4. Follow the instructions below.
  • Any humans other than the primary account holder must do the following:
    1. Install the Alexa app on your smartphone if you haven’t done so.
    2. Log in with your Amazon account (or create one if you’re not the second adult in the household).
    3. Provide the info that’s required to pair up with the Echo device.
    4. (You can skip Alexa calling and messaging if you don’t want to use that with your Echo.)
    5. Go to Settings -> Accounts -> Voice and follow the instructions.

Here are the full instructions.

New generation of Alexa-enabled devices is here!

New Alexa-enabled devices

Last week, Amazon announced the next generation of voice-enabled devices (and tools for devs!). Here’s what we could learn from the official announcement and subsequent media coverage.

Echo Plus: Same form factor as the original Echo device, but enhanced in many ways. It will act as the control center for the home. It can manage over 100 IoT home devices “out of the box” and without the Bluetooth fuss. A simple “Alexa, find my devices” will get them all hooked up. The big question is, when will we start to hear about cheeky neighbours going all Poltergeist on your living room lights, or worse?

Echo new generation: Same functionality as the original Echo device, but smaller, and covered in cloth (different colors). It will sell for $99, according to The Verge.

Echo Spot: Finally! Some years ago I fell in love with a device/idea called Chumby. It was some sort of potato-shaped, Internet-enabled alarm clock. Sadly (or not!) I never got one. The Echo Spot will fill that gap in my life: a device slightly bigger than a baseball, with a nice screen, that you can talk to and that can wake you up.

I foresee the Echo Spot being the bestseller of the three. So for us devs, this means we must enhance our Skills with visual functionality (a.k.a. cards).
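As a reminder of how little it takes, here is a hedged sketch (in Python, with made-up title and text) of a custom skill response that returns both speech and a Simple card, so devices with a screen like the Echo Spot have something to display:

```python
# Minimal sketch: an Alexa custom skill response carrying speech plus a Simple card.
def build_response_with_card(speech_text: str, card_title: str, card_text: str) -> dict:
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "card": {
                "type": "Simple",      # a "Standard" card also allows an image
                "title": card_title,
                "content": card_text,
            },
            "shouldEndSession": True,
        },
    }

# Example usage with invented content:
# build_response_with_card("Good morning!", "Daily Briefing", "Good morning!")
```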

Willkommen! Amazon Echo and Alexa now speak “the Queen’s English”… and German!

German and British flags

In September 2016 (time flies…), Amazon announced that the Amazon Echo and therefore Amazon Alexa would be made available in the UK and in Germany.

One would think that this would affect two geographic areas and only one language, but nothing could be further from the truth. Trying to make Alexa understand Geordie or Scouse makes the German language seem crystal clear by comparison.

So, from now on, there are three languages that you should consider when you define your skill: English (US), English (GB) and German.

It’s very important that you realise that geography and language are different things, and you have to make decisions in both areas. For example, you can publish a Skill in Germany in English (US) and German, or you can decide that your Skill won’t apply to expats and publish it for Germany in German only. When you define the Interaction Model, you define as many models as there are languages you wish to implement. When you provide publishing information, you decide on the geography.
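To make the language side concrete, here is a minimal sketch (Python, with made-up strings) of a single Lambda handler that serves all locales by reading the locale field that arrives with every Alexa request, rather than deploying one function per language:

```python
# Hedged sketch: one handler for all locales, keyed on the request's locale field.
MESSAGES = {
    "en-US": "Hello! Welcome to my skill.",
    "en-GB": "Hello! Welcome to my skill.",
    "de-DE": "Hallo! Willkommen bei meinem Skill.",
}

def lambda_handler(event, context):
    locale = event["request"].get("locale", "en-US")  # e.g. "de-DE"
    text = MESSAGES.get(locale, MESSAGES["en-US"])    # fall back to US English
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```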

 

In our next post we will solve the following riddles: what happens to my “functionality”? Do I need to create one version per language (hint: don’t do it!)? What are the implications of limiting my Skill to a certain geography? Then we will write a bit about predefined slot types and their multilanguage implications.

Amazon Lex, the beating heart of aLEXa, opened for conversational bot creation

Amazon Lex Logo

This morning I finally got my invitation to the beta/preview program of Amazon Lex, the heart of Alexa’s voice recognition system.

I am just browsing through the documentation, so bear with me, but it looks very exciting. There are lots of concepts that will be familiar to any Alexa skills developer, especially around the interaction definition area. Some others are brand new.
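I haven’t written any code yet, but from what I’ve skimmed so far, the fulfillment hook looks a lot like an Alexa skill handler. Here’s a purely hypothetical sketch (Python, with invented intent and slot names) of what a Lex fulfillment Lambda might look like, to be validated once I actually build a bot:

```python
# Hedged sketch of a Lex fulfillment Lambda, based on my first read of the docs.
# Intent and slot names are made up for illustration.
def lambda_handler(event, context):
    intent_name = event["currentIntent"]["name"]
    slots = event["currentIntent"]["slots"]  # e.g. {"FlowerType": "roses"}

    # Business logic would go here; for now just echo what we understood.
    reply = f"OK, I will handle {intent_name} with {slots}."

    return {
        "sessionAttributes": event.get("sessionAttributes") or {},
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": reply},
        },
    }
```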

Hope to have a bot up and running in the upcoming weeks. I’ll keep you posted!