IoTCon in Munich: Impressions from Day 1

Munich is always worth a trip, and it is easy to reach from Zurich by train or coach. This time my employer Netcetera let me visit the Internet of Things conference (IoTCon) to scout what is going on in the IoT field in Germany (and elsewhere). Here are some impressions I collected during the first day of the conference. To read about the second day, go here.

S&S Media, the German publisher behind the conference, is well-known for professionally organized events with a focus on German-speaking visitors. In my experience only a minority of the talks are held in English, which is a pity since the talks are often of high quality and would deserve a more international audience. Another specialty of this conference worth mentioning: you actually get to visit two conferences with one ticket. The Mobile Tech Conference happens at the same time and in the same location as the IoTCon, and your ticket lets you join those talks as well. OK, I admit you could argue that this is a marketing trick, and it probably is. At other conferences you would just call this another track ;-).

 

The Marshmallow Challenge

Sometimes you go to a talk and you like the primary (intended) message of the presentation. Sometimes you also take home a concept that enriches your vocabulary, either for future presentations or just for your next discussion about a specific topic. The talk by Martin Peters and Denis Nobel from the German IoT company com2m about rapid prototyping was interesting and entertaining, but my primary take-home message was the Marshmallow Challenge.

The Marshmallow Challenge is actually pretty simple: all you need is 20 sticks of (dry) spaghetti, some tape, some string, and a marshmallow. Have teams of four build the tallest structure possible with these ingredients, with the marshmallow on top. After 18 minutes, stop the challenge and see how the teams did compared to each other.

Tom Wujec ran this experiment with many different teams and compared their performance based on their roles and education. The interesting point is that kindergarten students did a lot better on average than business students. Why is that? Watch his six-minute TED talk to get the full picture.

In short: the kindergarten students just try, whereas the business students are trained to find the single best plan to accomplish the goal. So they spend a lot of the 18 minutes talking, maybe struggling for dominance within the team, and when time is nearly up they start building, only to realize that many of their hidden assumptions were wrong. The moral of the story is that we should do a lot more prototyping instead of searching for the single best plan. Or in other words: build MVPs!

 

Talk to me, Alexa!

Until recently I found it really weird to talk to a machine, but this talk by Dean Bryen (@deanbryen) from the Amazon Alexa/Echo team kind of convinced me that voice control might actually have a bright future, and that it is already useful and usable today. If you don't know Amazon Alexa and the Echo devices, here is a quick introduction: Alexa is a voice-controlled assistant which lets you control the lights at home, play music, and of course order stuff on Amazon with just a few spoken words. Alexa is available as a cloud service to which you send voice commands as a (compressed) audio stream, and it is also integrated into a couple of products by Amazon as well as by third-party manufacturers. Amazon's own Alexa devices are called the Echo and the Echo Dot; they have an array of microphones to capture your voice commands at the best possible quality. They also have built-in loudspeakers to play Alexa's response or your favorite song. The older and more expensive Echo comes with a big, room-filling loudspeaker, whereas the cheaper and smaller Echo Dot sounds more like a tin can.

 

Dean Bryen from the Amazon Alexa team

Dean works as an evangelist (or in his own words: coolest job in the universe!) for the Amazon Alexa team. The goal of his talk was to show how easy it is to extend the existing Alexa framework with something they call Skills. Skills are extension points, a kind of mini app that adds more functionality to what you can do with Alexa and the Echo devices. You can teach Alexa to read a recipe to you or to start the dishwasher, if your hardware has the necessary interfaces. As a developer you use the Alexa Skills Kit to add this functionality. There are different types of Skills APIs: the Smart Home Skill API is focused on turning lamps and other devices in your home on and off and dimming them. If commands like “Alexa, turn on XXX” are not sufficient for your use case, you can use another, less strict API that lets you define your own commands like “Alexa, put all lights to comfy mode”.
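To give a feel for how small a custom Skill backend can be, here is a minimal sketch of one written as a plain AWS Lambda handler, without the Alexa Skills Kit SDK. The intent name `ComfyModeIntent` and the spoken texts are made up for this example; the JSON envelope follows the custom-skill request/response format.

```python
# Minimal Alexa custom-skill backend, sketched as a plain AWS Lambda
# handler (no ASK SDK). "ComfyModeIntent" is a hypothetical intent name
# you would define in the skill's interaction model.

def build_response(text, end_session=True):
    """Wrap plain text in the JSON envelope Alexa expects back."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "LaunchRequest":
        # User opened the skill without a specific command.
        return build_response("Welcome. Try: put all lights to comfy mode.",
                              end_session=False)
    if request["type"] == "IntentRequest":
        if request["intent"]["name"] == "ComfyModeIntent":
            # Here you would call your home-automation API to dim the lights.
            return build_response("All lights are now in comfy mode.")
    return build_response("Sorry, I did not understand that.")
```

Alexa does the hard part (wake word, speech recognition, intent matching) in the cloud; your code only ever sees a structured JSON request.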

The Skills Kit has a nice developer web console which lets you test audio commands. There you can also test the effect of the so-called Speech Synthesis Markup Language (SSML). SSML lets you manipulate how Alexa says your text. For instance, if you want Alexa to spell hello as H-E-L-L-O, simply put this markup around the word: &lt;say-as interpret-as="spell-out"&gt;hello&lt;/say-as&gt;

So all in all it is very impressive what you can do with Alexa. I just wish it were officially available in Switzerland. According to some forum posts it should work, but you'll have to buy it through a costly third-party importer. And there are also some privacy concerns: Alexa listens to what you say all the time and sends (part of?) it to the Amazon cloud to improve the voice recognition service. It's like voluntarily installing a microphone and sending everything to the cloud. Big Brother is listening to you. But hey, in a time where microwave ovens can be turned into cameras, we can stop worrying about microphones in the first place. (Attention: that was sarcastic. To my knowledge it is NOT possible to turn microwave ovens into cameras!)

 

(Where is) Ali G in da house?

Indoor localization and navigation is a hot topic and interesting for many applications. Systems like GPS and GLONASS work fine outside, but they require line-of-sight to at least four satellites to acquire a precise 3D fix of your location. This means that satellite-based localization doesn't work well indoors, unless you live in a glass house. Several technologies are in the race to become the de facto standard for indoor localization; the most popular today are probably WiFi and Bluetooth Low Energy. Both use the signal strength from either access points (WiFi) or beacons (BLE) to trilaterate your position. The problem is that the measured signal strength can be influenced by many factors, most notably by the fleshy water bags called humans. So positioning based on signal strength is shaky at best, and it also requires a pre-installed infrastructure of access points or beacons.
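To make the trilateration idea concrete, here is a small sketch: signal strength is first converted to a distance estimate with the log-distance path-loss model, and three distance circles are then intersected. The reference values (-59 dBm at 1 m, path-loss exponent n = 2.0) are illustrative assumptions; in practice both vary with hardware and environment, which is exactly why this approach is shaky.

```python
# Sketch of BLE-beacon positioning from signal strength.
# The RSSI reference (-59 dBm at 1 m) and path-loss exponent n = 2.0
# are illustrative assumptions, not universal constants.

def rssi_to_distance(rssi, rssi_at_1m=-59.0, n=2.0):
    """Log-distance path-loss model: estimate distance in metres."""
    return 10 ** ((rssi_at_1m - rssi) / (10 * n))

def trilaterate(beacons, distances):
    """Locate (x, y) from three beacons (x_i, y_i) and distances d_i.

    Subtracting the first circle equation from the other two yields a
    2x2 linear system, solved here with Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = distances
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

With perfect distances the math recovers the position exactly; with real RSSI readings (and a few humans walking by), the estimated distances wobble by metres, and so does the fix.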

What if we could get positioning completely without having to install infrastructure first? And what if all that was possible with the hardware built into a smartphone? Till Kempel from Cologne Intelligence built a prototype based on Google Tango to do exactly that, and added indoor navigation on top of it. Google Tango is a library which uses a structured-light depth sensor (similar to what the Kinect has) and the common built-in inertial measurement unit (accelerometer, magnetometer, and gyroscope) to get a very exact fix of your smartphone's location and orientation in 3D space. The results are very impressive and have a lot of potential for interesting applications. But this only works if your phone has the structured-light sensor, and currently there are only three or four Android-based models available. In my opinion another downside of this approach to indoor positioning and navigation is that you have to walk around with your smartphone in front of you, with the camera pointing where you want to go. Users might be shy to walk around like this in a train station or at a conference to find their way, but hey, maybe we'll get used to it.
Unlike Alexa, whose processing happens in the cloud, Google Tango can do all the processing of image and IMU data on the smartphone and doesn't require any connectivity. Till's application needs a bit of training in the room so that the navigation learns where the obstacles are, but then it works completely without pre-installed infrastructure. It is currently a prototype, but they are looking for interested partners to further develop the technology. Have a look at their demo video. It is not fake; I have seen it working in real time!


 

Summary Day 1

These were my highlights of day 1 (of 2) at iotcon.de 2017 in Munich. I hope you enjoyed the inspiration. If so, please share this post or leave some feedback!

Curious what I took home from day 2? Continue here…

Posted by Daniel Eichhorn

Daniel Eichhorn is a software engineer and an enthusiastic maker. He loves working on projects related to the Internet of Things, electronics, and embedded software. He owns two 3D printers: a Creality Ender 3 V2 and an Elegoo Mars 3. In 2018, he co-founded ThingPulse along with Marcel Stör. Together, they develop IoT hardware and distribute it to various locations around the world.

One comment

  1. I’d like to see someone create an Alexa Skill to grab Plane Spotter data so I could ask “Alexa what’s flying over my house right now?” Anybody up to the challenge? (I live right under the airport approach path!)
