I had read somewhere about Elon Musk’s ideas on the digital transformation of passenger transport and the Tesla car. I recall him saying that once the electric car and level-5 autonomous driving are achieved, the concept of owning a car, what you do with it, the use of that resource called your car, changes completely. And BMW are helping me understand that. This week, during the Mobile World Congress 2018, BMW showcased the customer journey for their brave new autonomous vehicle. Video below (short intro in Spanish, English from then on).
Main takeaways (apart from seeing the steering wheel eerily turning itself):
There is a companion app.
Through the app you rendezvous with the car, including walking directions to get you there.
You set the route and get in the car (in no particular order).
Sit back and relax!
There is an infotainment system; you can use it, or the app, during your trip.
You can change your destination halfway there. Obviously the car won’t stop in its tracks as seen in the video; it will just smoothly recalculate the route.
The car will drop you off at your destination, and it will go park itself.
You can lend your car to friends. They’ll need the app, and you as the owner will manage what you lend them the car for from your app.
I feel so lucky to live in these times – I now know that when I become too old to drive (not that I drive much these days anyway!), that won’t hinder my mobility one iota.
It’s also easy to imagine a mashup of what I have just explained and what Uber / Lyft currently do. You can think of your car as what you use for your own transportation when you are using it, and the rest of the time, instead of sitting idle inside a garage, the car can be out and about earning you some bucks.
That’s what Musk was talking about in that interview that stuck in my mind. And it’s great to know that there’ll be plenty of makes and models to choose from. Kudos to BMW for this!
Following an age-old tradition, I’ve decided not to read the documentation and to dive straight into Sumerian instead. So here’s the first impediment I’ve encountered: it does not work in all browsers!
So far I’ve tried it in an old Firefox (i.e. not Quantum) running on my Linux laptop, and it did not work (though it did warn me that it works better in newer versions).
On my Windows machine I struggle with Sumerian on Vivaldi. The scene kept loading forever (with the round thingy going round and round); after about 10 minutes, still nothing. Until I wanted to take a screenshot to document this post – and just by activating the Snipping Tool (the one that comes with Windows), it loaded up! Intriguing, to say the least.
The first experience is good. You have dialogs explaining all the elements in the editor, and tips for beginners so that you can do stuff easily and don’t get frustrated on your first go.
Next I’ll try with IE11 and Firefox Quantum and share the results. Edge and Chrome are banned software in my work laptop. Go figure!
My inbox had good news this morning. Turns out I’ve been invited to the Amazon Sumerian Preview. AWSome 😉 But what is Sumerian? In Amazon’s words:
“Amazon Sumerian is a set of tools for creating high-quality virtual reality (VR) experiences on the web. With Sumerian, you can construct an interactive 3D scene without any programming experience, test it in the browser, and publish it as a website that is immediately available to users.”
Not really voice, eh! But I believe this is the complement to voice services. Touch screens will be a thing of the past. Voice will be the natural user interface of choice for commands and small snippets of information, VR or AR for getting more complex information back, and the good old Kindle reader for the noble art of reading novels!
Good news! After the initial US release, followed by Germany, the UK and India, the family of Amazon Echo products can be purchased in over 80 countries. The languages supported are German and English, the latter with three different locales (US, Britain and India). Here’s the official news.
Privacy within the privacy of your home is a concern for users of the Amazon Echo and of any other voice assistant, especially since skills that sync it with your personal accounts became available. I am okay with my spouse checking my calendar, but I would not be so happy to mistake her appointments for mine! Voice assistants also bring out an age-old problem that us techies detect very well, but others not so much: that of cardinality. An example: when you have one Echo (or multiple linked Echos acting as one, if your home is bigger than mine!) but you don’t live alone, then it’s quite likely that more than one human will speak to Alexa. Why does this one-to-many relationship between humans and machines represent a cardinality problem?
Let’s continue with the example. At home, my spouse and I use our Amazon Echo. We’re both non-English speakers and have distinct accents in English (we learnt the language on different continents). Our Echo sometimes goes crazy trying to understand one or the other. The machine learning element of Alexa must be very confused about supposedly the same human saying the same thing in such different ways at random moments in time! I bet Alexa would be happier if we could let her know that we’re two humans, if we could teach her to tell us apart, and then teach her to understand us better one by one.
If on top of having different voices and different accents, you wish to use individual services information (personal calendars, mail accounts…) then you need to be able to somehow link those individual services with your Echo devices – again, cardinality problem. Which one will Alexa use? Mine or my spouse’s? Why does it have to be only one? Can’t it be both?
Luckily, Amazon has just launched Voice Profiles to achieve this. You configure your Echo devices to pair with as many humans as needed, through the Alexa app on your Smartphone. Here’s how:
The person whose Amazon account is linked with the Echo device must launch the Alexa app on their Smartphone, visit Settings -> Accounts -> Voice, and follow the instructions.
The second adult in the household must do the following:
When both of you are at home, launch the Alexa app on the primary user’s Smartphone.
Settings -> Accounts -> Household profile, and follow the instructions to set up this new user.
With any of your Smartphones, log on to the Alexa app with the credential of the second adult in the household.
Follow the instructions below.
Any humans other than the primary account holder must do the following:
Install the Alexa app on your Smartphone if you haven’t done so.
Log in with your Amazon account (or create one if you’re not the second adult in the household).
Provide the info that’s required to pair up with the Echo device.
(You can skip Alexa calling and messaging if you don’t want to use that with your Echo.)
Settings -> Accounts -> Voice, and follow the instructions.
Echo Plus: Same form factor as the original Echo device, but enhanced in many ways. It will act as the control center for the home. It can manage over 100 IoT home devices “out of the box” and without the Bluetooth fuss. A simple “Alexa, find my devices” will get them all hooked up. The big question is, when will we start to hear about cheeky neighbours going all Poltergeist on your living room lights, or worse?
Echo new generation: Same functionality as the original Echo device, but smaller and covered in cloth (in different colors). It will sell for $99, according to The Verge.
Echo Spot: Finally! Some years ago I fell in love with a device/idea called Chumby. It was some sort of potato shaped, Internet-enabled alarm clock. Sadly (or not!) I never got one. Echo Spot will fill that gap in my life. A device slightly bigger than a baseball with a nice screen that you can talk to, that can wake you up.
I foresee the Echo Spot being the bestseller of the 3. So for us devs, this means we must enhance our Skills with visual functionality (a.k.a. cards).
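Since a card is just part of the JSON your skill returns, adding one is straightforward. Below is a minimal sketch, built as a plain Python dict rather than with any SDK; the speech text and card copy are made-up examples, and the "Simple" card type carries title and content only (a "Standard" card would add images for screen devices like the Spot).

```python
# Minimal sketch of an Alexa skill response that carries both speech
# (for voice-only Echos) and a Simple card (for the app and screen devices).
# Speech text and card copy below are illustrative placeholders.
def build_response(speech_text, card_title, card_content):
    """Return an Alexa-style JSON response with speech output and a card."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "card": {
                "type": "Simple",       # text-only card; "Standard" supports images
                "title": card_title,
                "content": card_content,
            },
            "shouldEndSession": True,
        },
    }

resp = build_response("Good morning! It is sunny today.",
                      "Daily Briefing",
                      "Sunny, 21 degrees.")
print(resp["response"]["card"]["title"])
```

The nice part is that voice-only devices simply ignore the card, so one response body serves the whole Echo family.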
One would think that this would affect two geographic areas and only one language, but nothing could be further from the truth. Trying to make Alexa understand Geordie or Scouse makes German sound crystal clear.
So, from now on, there are three languages that you should consider when you define your skill: English (US), English (GB) and German.
It’s very important to realise that geography and language are different things, and you have to make decisions in both areas. For example, you can publish a Skill in Germany in English (US) and German, or you can decide that your Skill won’t apply to expats and publish it for Germany in German only. When you define the interaction model, you define as many models as languages you wish to implement. When you provide publishing information, you decide on the geography.
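One practical consequence: every Alexa request tells your backend which locale it came from, so a single piece of code can serve all the interaction models. Here’s a sketch of that idea, assuming a simple per-locale string table; the locale codes are the real Alexa ones, but the greeting strings (and the `get_message` helper) are made up for illustration.

```python
# Sketch: one skill backend serving several locales. Rather than one code
# version per language, keep a single handler plus a per-locale string table.
# Locale codes match Alexa's (en-US, en-GB, en-IN, de-DE); the strings are
# placeholder examples.
MESSAGES = {
    "en-US": {"welcome": "Hi! Ask me for today's fact."},
    "en-GB": {"welcome": "Hello! Ask me for today's fact."},
    "en-IN": {"welcome": "Hello! Ask me for today's fact."},
    "de-DE": {"welcome": "Hallo! Frag mich nach dem Fakt des Tages."},
}

def get_message(locale, key):
    """Look up a string for the request's locale, falling back to en-US."""
    table = MESSAGES.get(locale, MESSAGES["en-US"])
    return table[key]

# In a real handler, the locale would come from the incoming request payload.
print(get_message("de-DE", "welcome"))
```

The interaction models differ per language, but the business logic stays in one place and only the strings vary.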
In our next post we will solve the following riddles: What happens to my “functionality” – do I need to create one version per language? (Hint: don’t do it!) What are the implications of limiting my Skill to a certain geography? Then we will write a bit about predefined slot types and multi-language implications.
This morning I finally got my invitation for the beta/preview program of Amazon LEX, the heart of Alexa’s voice recognition system.
I am just browsing through the documentation, so bear with me, but it looks very exciting. There are lots of concepts that will be familiar to any Alexa skills developer, especially around the interaction definition area. Some others are brand new.
Hope to have a bot up and running in the upcoming weeks. I’ll keep you posted!