I was also inspired by a university experiment which you may have heard about: students at Georgia Tech were told they could message an offsite tutor to help them. The tutor was really great, so great that they nominated her for a teaching award. The M. Night Shyamalan twist: the tutor was a bot all along.
In our industry we try to fuse the knowledge and personality of real-life educators with the efficiency and accessibility of online learning. A chatbot was the perfect opportunity to combine these elements.
My challenge was clear: create an e-learning chatbot with enough personality and intelligence to rival a human trainer. You’ve got to aim high!
For me, tone and style are all about giving a bot personality without it being cringey or patronising. When considering this, I always think about the bit in Terminator 2 when John Connor is trying to teach Arnie how to speak in slang.
I didn’t want Otto to sound like a mum attempting to correctly use the word ‘lit’ in a sentence. I had to work out what his vocabulary and tone were prior to writing any of his dialogue – just like creating any fictional character. I decided to make Otto sound like a friendly and knowledgeable colleague, with an understanding of the organisation’s jargon; informal without being unprofessional, and speaking in a way that’s accessible and pleasant to people of all ages.
Here’s an example of how we design effective learning: jargon can be difficult for new starts in any organisation. So, what are the possible solutions? They could have a Word document glossary. It sounds like a simple solution to a simple problem, but we all know it’s going to be hidden away in some labyrinthine Learning Management System, SharePoint, or Google Drive, and your new start is never going to find it. They could just use Google; that would be quicker, right? Yes, it would, but the same terms are used differently in different industries and different companies. They might come away from Google thinking that everyone keeps going on about LMSs because they’re paid-up members of the London Mathematics Society. The solution? Otto very quickly supplies this bespoke snippet of information in an easy-to-understand format – an example of good learning design.
The final element is structure – writing for chatbots is entirely different to standard e-learning because the content is presented in a non-linear fashion. Internally, we referred to Otto’s knowledge base as ‘FAQs’, but I don’t think that’s accurate. They’re more like FGAs – Frequently Given Answers – the possible questions are infinite, but Otto’s answers are finite. To make sure that the answers make sense in many different contexts, your writing has to be linguistically precise and self-contained. In the above picture, you can see the definition of LMS. The way it’s written means it will make sense whether a user has asked ‘what’s an LMS’, ‘Tell me about Learning Management Systems’, ‘What’s an LMS?’, or any other variation.
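The FGA idea can be sketched in a few lines of code. This is purely illustrative and not Otto’s actual implementation – the intent names, keywords, and answer text are all made up – but it shows the shape of the thing: many phrasings funnel into one carefully written, self-contained answer.

```python
import re

# Illustrative "Frequently Given Answers" store: the possible questions
# are infinite, the answers are finite. All data here is made up.
FGAS = {
    "lms": {
        "keywords": {"lms", "learning management system",
                     "learning management systems"},
        "answer": ("An LMS (Learning Management System) is the platform an "
                   "organisation uses to deliver and track online training."),
    },
}

def normalise(text):
    """Lowercase and strip punctuation so "What's an LMS?" can match 'lms'."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower())

def answer(question):
    """Return the one self-contained answer for any matching phrasing."""
    q = normalise(question)
    tokens = q.split()
    for intent in FGAS.values():
        for kw in intent["keywords"]:
            # whole-word match for single keywords, substring for phrases
            if (kw in tokens) if " " not in kw else (kw in q):
                return intent["answer"]
    return None
```

The point is the writing discipline this structure enforces: because the same answer serves every phrasing, it has to make sense with no surrounding context.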
The next step was to launch an internal version of Otto. Some people asked it maths problems, some wanted to know this week’s lottery numbers, and an awful lot of people thought that if they typed “is this the real life, is this just fantasy?” Otto would do its best Freddie Mercury impression. This kind of expectation has been set by Siri and Alexa, which are the bots that people are most familiar with.
Here’s how Otto answered some of these off-topic questions:
How could I possibly solve this? Could I give Otto an opinion on every single sports team, politician, and TV show ever created? No, that’s not feasible. Could I write catch-all responses to deal with vast swathes of these off-topic questions? Maybe, but that’s still a lot of different things to write about, and if Otto was taught the name of every popular TV show, you greatly increase the risk that it will mistake a genuine query for the title of the latest Netflix boxset.
Which takes me back to this quote:
I’m not sure I agree with it anymore.
And that university experiment I mentioned doesn’t actually apply here at all. The students spoke to the bot as if it was a human because they were told it was one, which dramatically narrows down the number of conversational routes they would take. You’d never ask a human teacher to sing along to Bohemian Rhapsody with you, and you wouldn’t ask them what the lottery numbers are going to be; you generally stay on-topic, making it infinitely easier to write and design for.
For me, the answer to this problem didn’t come from learning theory, blokes on YouTube, or university experiments. It came from Grand Theft Auto. GTA is the game where you can do anything; it’s called a sandbox game for that reason. You can drive your car around the city in a legal and respectful fashion, or you can do… other things.
To learn how to control everything in GTA, you’re forced to take a tutorial level at the start. You experience the features of the game within a controlled environment which prompts you to try different commands and learn how the game responds. Now we’re doing the same with Otto, only with fewer guns.
From the start, Otto says that it isn’t Siri or Alexa, it has a different job, so it isn’t failing if it can’t answer your daft question about the weather. It then puts you in a controlled version of its knowledge base so that you get familiar with it before you interact with the real deal. If you do something unexpected, instead of giving that frustrating ‘I didn’t understand’ response, it tells you what you did wrong and how to solve it.
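As a sketch of that principle – again illustrative, not Otto’s real code – the fallback names the bot’s actual job and signposts what it does know, rather than returning a bare ‘I didn’t understand’ dead end.

```python
# Hypothetical fallback handler: on a failed match, set expectations
# and point the user at the topics the bot can actually help with.
def fallback(question, known_topics):
    """Explain the bot's scope and suggest a next step instead of a dead end."""
    topics = ", ".join(sorted(known_topics))
    return (
        "I couldn't match that to anything I know about. I'm a training "
        "assistant rather than a general one like Siri or Alexa, so I can't "
        f"help with that, but try asking me about: {topics}. "
        "Short, specific questions work best."
    )
```

So a request like `fallback("sing Bohemian Rhapsody with me", {"LMS", "onboarding", "compliance"})` produces a reply that declines, restates the bot’s job, and lists somewhere useful to go next.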
GTA has a main storyline to follow and, if you want to, you can also go completely off-piste and do your own thing. But you always know where you should be going and what you could be doing. You’re prompted with notifications, and there are literally big, luminous green arrows which direct you to the storyline’s objectives.
In Otto, there’s the core content and there’s a whole world of conversation that users can and will explore. Once users are released from the tutorial and enter the conversational sandbox of Otto proper, they aren’t left to blindly roam the digital wilderness. Otto signposts every feature, option, and topic area so that users know exactly where to go and what to expect, but can still get a tenuous e-learning pun if they ask it to tell them a joke.
This expectation-setting has made Otto much easier to use, and a lot more satisfying for users.
This is what I learned in my first six months of writing for chatbots: before you even think about creating content for them, you need to use your writing abilities to create the correct expectations and environment for accessible, bitesize, in-the-workflow learning. In light of that, I’ve rewritten the quote:
It’s a bit of a mouthful, but a lot more accurate!
Head of Content Lindsey Coode and I hosted a free seminar on this very topic at the Learning Technologies Summer Forum last month; check it out below.
Since this article was written, we’ve been hard at work developing a new AI Learning Experience Platform – StreamLXP – learn more about this exciting development here.
Matt started his L&D career as a learning designer. Since then he’s been involved in a variety of projects that combine an interest in tinkering with new technology with learning design and writing skills. Most notably, he provided the personality and linguistic logic behind our chatbot, Flo.
He now manages the Learning Pool Academy, creating resources about our products, replying to comments, creating new courses, and looking after our internal training.