Improve your practice.

Enhance your soft skills with a range of award-winning courses.

Podcast: VirtualSpeech and ChatGPT3 (VR in Education)

January 20, 2023 - Sophie Thompson

In today’s episode, we are looking at how AI might be used in virtual worlds for education. We have heard a lot over the last few weeks about how intuitive and responsive ChatGPT-3 has been.

So, we have invited Sophie Thompson, CEO and founder of VirtualSpeech, to talk to us. VirtualSpeech is a VR soft skills training platform that allows users to hone several 21st century skills related to communication. They recently announced they have added ChatGPT-3 to some of their course offerings.

https://cfrehlich.podbean.com/e/episode-91-virtual-speech-and-chatgpt3/

Transcript

Craig Frehlich (00:24):

Hello everyone. Welcome to another exciting episode of VR and Education. In today’s episode, we’re looking at how AI might be used in virtual worlds for education. We have heard a lot over the last few weeks about how responsive and intuitive ChatGPT-3 has been. So we’ve invited Sophie Thompson, CEO and founder of VirtualSpeech, to talk to us. VirtualSpeech is a VR soft skills training platform that allows users to hone what we might call 21st century skills, mostly around communication. And more excitingly, they recently announced that they have added ChatGPT-3 to some of their course offerings, so it’s gonna be a great show. Welcome, Sophie.

Sophie Thompson (01:18):

Hi, Craig. Thank you very much for inviting me.

Craig Frehlich (01:21):

I always start with an origin story related to what got you interested in VR in the first place.

Sophie Thompson (01:28):

It’s quite an interesting question really, because for us it was entirely by accident, to be honest with you. My background isn’t technical, but I have always had a massive fear of public speaking. For me it was even more than that; it was more like social anxiety, and pretty severe social anxiety. So, for example, I used to not be able to even order my own food in a restaurant, because I was nervous of even that one-to-one interaction. And I had an assessed presentation coming up at university. It was three months away and I was already super nervous about it, waking up dreading it. My business partner, Dom, was working in the virtual reality department at Jaguar Land Rover, the car company. And really it was his idea that we could create a realistic virtual environment where it was psychologically safe to practice, where I was free to make mistakes and to learn from those mistakes whenever I wanted to, so I could build up my confidence in VR before translating that into the real world. So yeah, that’s how we got started. We were the first VR app for overcoming the fear of public speaking. And then, yeah, six years on, that accident has really snowballed.

Craig Frehlich (02:57):

Wow. Six years. That’s amazing. You know, as you described your story, I couldn’t help but think about myself, and then even my children, who are now grown up; they used to practice in the mirror. So gone are the days where we’re just trying to speak into this mundane mirror, right?

Sophie Thompson (03:15):

Yeah, it’s come a long way since then. I wish that when I was at school and much younger there had been tools like VR and AI that could really help you out, because actually I think people miss out on so many opportunities because they don’t have confidence in their communication skills.

Craig Frehlich (03:33):

Yeah. Let’s talk about that. So obviously I suspect one of the targeted 21st century skills that your app first dove into was public speaking. So how does VirtualSpeech help support people who want to improve their public speaking skills? Tell us how it works.

Sophie Thompson (03:54):

The way that VirtualSpeech works is that we blend traditional online courses and e-learning with practice exercises online and in virtual reality. The premise behind that is that everyone knows practice makes perfect, and VR allows people to have that opportunity to practice on demand. So you don’t have to wait six months or a year to get that opportunity once or twice a year to practice something like public speaking. You can put on your headset, you can upload your own slides and notes into the app, even custom questions and keywords, and have the most realistic practice possible. And I think one of the key things about VR is the emotional connection with the content, because you are learning through experience; it can evoke a similar emotional response to if you were carrying out that action in real life. So for example, we have some customers who, when they are nervous, a stutter or a stammer might come out, but if they were just talking in normal conversation, that wouldn’t happen. I was talking to them just as you and I are talking now, and then they put on the headset, and when they were speaking in front of this audience in virtual reality, their stammer came out. And that is a really good example of that emotional connection and evoking that emotional response, which is critical for real behavior change, I think.

Craig Frehlich (05:23):

I’ve been lucky enough to try your app a few times, and I also promote it to the English department at several schools. And there are different environments. One, you can plop yourself into a 360 degree prerecorded environment, and then you also have environments that are more, you know, digital twins. Is one more popular than the other, the 360 view versus the digital twin view, in regards to what people see in the VR room?

Sophie Thompson (05:59):

That’s a really good question, and something that we have been asking people over the years as well. I guess it really depends on why people are using it. If they’re using it for realism, generally speaking we find that people prefer the 360 prerecorded people, because, well, they’re real people, so they’re more lifelike than the avatars. But where the avatars are excellent is the ability to program them, the ability for reaction and interactivity to an extent. Which is why we’ve kept both: only last month we updated the app so that if you pick a meeting room, you can choose whether it’s the 360 degree audience or the avatar, digital twin audience, so that people can get the most out of learning, whichever their preference is.

Craig Frehlich (06:55):

That was a great segue to one of the reasons why I wanted you on the show. We think about public speaking, and you have a plethora of other skills that people can practice and hone in these virtual environments, like negotiating skills or networking skills. However, if you think about the efficacy of trying to practice these skills compared to public speaking, it’s a lot harder, especially asynchronously. We usually need another person or a partner to try and have this two-way conversation. But now, with advances in AI in the form of ChatGPT-3, you guys have found a way to add that to the VR experience, and I’m super curious how that’s been working and how you guys did this.

Sophie Thompson (07:53):

I would just caveat that by saying it is very new. I mean, ChatGPT-3 was only released, I think, the first week of December, and we’ve been working hard at testing it since then to come up with this early proof of concept. As VirtualSpeech stands at the moment, pre-ChatGPT, we already used AI feedback on the delivery of what somebody was saying. So for example, you could get feedback on your pace, volume, tone, how many filler words you’ve used, your listenability and so on. But that was all focused on the delivery, which definitely does have value. By adding ChatGPT into VirtualSpeech, that enables us to provide feedback on the actual content of what someone has said. The benefit of that is primarily that you can more realistically have a two-way natural conversation.

(08:53)

Before this point, we could pre-program the avatars with generic questions, for example, or in our job interview VR scenarios they have preloaded questions, but they couldn’t respond directly to what you had said or bring up anything that you’d said. With ChatGPT, it can pick up on something you said two answers ago and work that into its next question. So for something like job interviews, that’s a no-brainer to add in; it just makes job interview practice so much more effective, and you don’t need to have somebody else sitting in the room with you, because the AI can replicate that. And also in difficult conversations. So if you are having to, I dunno, practice giving somebody negative feedback, it’s more reactive than, say, a branching scenario that we have pre-programmed, which is still effective, but this is a more natural way of doing it.
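The delivery metrics Sophie describes (pace, filler words and so on) can be pictured with a few lines of transcript analysis. The sketch below is purely illustrative; the function name and filler-word list are invented, and this is not VirtualSpeech's actual implementation:

```python
import re

# Assumed, illustrative set of filler words; not VirtualSpeech's real list.
FILLER_WORDS = {"um", "uh", "erm", "like"}

def delivery_metrics(transcript: str, duration_seconds: float) -> dict:
    """Return simple delivery metrics for a spoken transcript."""
    # Tokenize into lowercase words (keeping apostrophes, e.g. "don't").
    words = re.findall(r"[a-z']+", transcript.lower())
    fillers = sum(1 for w in words if w in FILLER_WORDS)
    minutes = duration_seconds / 60
    return {
        "word_count": len(words),
        "filler_count": fillers,
        # Speaking pace in words per minute.
        "pace_wpm": round(len(words) / minutes, 1) if minutes else 0.0,
    }
```

For example, `delivery_metrics("Um today I will uh talk about VR", 10)` counts 8 words, 2 fillers, and a pace of 48 words per minute.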

Craig Frehlich (09:54):

Again, I have played with ChatGPT-3 and learned from others on LinkedIn how to make it more effective. And one of the things I learned when using it, just with text, is to give it a persona right away. So you type into ChatGPT-3, “I want you to pretend you are…”. Do you pre-program that? Let’s say I’m speaking to an audience, and in the audience is an avatar powered by ChatGPT-3. Does that avatar have a specific persona that you give it, like a disgruntled audience member or a positive audience member? Or do you just allow it to be a very generic ChatGPT-3?

Sophie Thompson (10:47):

At the moment it’s more generic. So in our proof of concept it’s more generic, but the idea is that in the short term we will have different levels, so you can pick how you want your audience to react. If you are using VirtualSpeech to increase your confidence in any of these skills, you may want to start off with the friendly, amicable audience. But if you are more seasoned and you want to test yourself more, you can go with the more proficient or expert one. And that’s where we will have programmed it so that, for example, if it’s expert, it’ll say “you are a difficult customer who thinks X, Y, and Z,” and then let it go from there.
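The difficulty levels Sophie describes map naturally onto system prompts for a chat model. The sketch below is a hypothetical illustration of that idea; the persona wording and function names are invented, no API call is made, and this is not VirtualSpeech's actual prompt text:

```python
# Hypothetical persona prompts for different audience difficulty levels.
PERSONAS = {
    "friendly": (
        "You are a warm, supportive audience member. Ask easy, "
        "encouraging follow-up questions."
    ),
    "expert": (
        "You are a difficult customer who thinks the product is "
        "overpriced. Push back firmly on weak answers."
    ),
}

def build_messages(level: str, learner_utterance: str) -> list:
    """Assemble a chat-style message list: persona first, learner's speech second."""
    if level not in PERSONAS:
        raise ValueError(f"unknown difficulty level: {level}")
    return [
        {"role": "system", "content": PERSONAS[level]},
        {"role": "user", "content": learner_utterance},
    ]
```

The message list could then be sent to a chat model, with the system prompt steering how the virtual audience member reacts to whatever the learner says.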

Craig Frehlich (11:30):

Oh, that’s amazing. I love the differentiation there and the scaffolding. So rich for educators as well. The other thing that I’ve noticed about ChatGPT-3 is that it’s not perfect. So for example, I’ve asked it a question and surprisingly it responded relatively inaccurately. So because it’s not perfect, what do you perceive as possible limitations as you guys are using it for your company?

Sophie Thompson (11:59):

Oh, that’s a really good question. I think what I would say is that we are on ChatGPT-3 at the moment, and ChatGPT-4 is coming out in the next few months, which will fix many of the issues that people may experience now with version three. For us, part of it, to be honest, is not running before we can walk. It is obviously very new technology, so we’re not using it where it doesn’t clearly provide a benefit, and we’re making sure we have strict parameters on it so that it doesn’t go off somewhere it shouldn’t. I think what makes it easier in terms of accuracy with what we are doing is that it’s not like you are asking it a historical fact, for example; it’ll be trained with certain learning models and then give you feedback based on that. So the input into it doesn’t change that much; it’s just directing what it learns from us onto you, basically. For example, we can program it to provide feedback for job interviews based on the learner’s use of the STAR technique, and we will program that into ChatGPT so that it doesn’t just pluck things from learning styles we don’t agree with, for example.

Craig Frehlich (13:24):

You mentioned some of the amazing metrics that you collected prior to having ChatGPT-3. You talked about how many ums and ahs, which is a great way to provide feedback to a speaker. You also talked about how long someone pauses. Those metrics are great formative feedback loops for the speaker. Now that you have ChatGPT-3 in some of your modules or programs, are there new metrics that you guys have figured out that you can collect? Or is it pretty much stay the course in regards to what kinds of feedback and metrics you give the speaker?

Sophie Thompson (14:07):

First of all, I would say that we actually collect less personal data than people may think. I know that when it comes to technology, people are very sensitive about that, and people might assume that we have this wealth of information about them on the backend, but we actually don’t, and a lot of our data can be anonymized as well. So we just have performance data rather than personal data: how quickly someone spoke, how many ums and ahs, their eye contact. But once we have fully implemented ChatGPT, we’ll be able to provide more contextualized feedback and more valuable insight into how the learner can improve, so the feedback will be more personalized to them. In terms of metrics for us, that’s not actually very different to how it is now. I’d say the main one, which is for the learner’s benefit, is being able to see progress more easily. Before, people would tend to measure progress based off, say, their eye contact score the first time they used the VR app compared to the fifth time. Whereas now, because of the more personalized feedback, we’ll be able to see if ChatGPT accelerates that learning progress or if it’s just a nice-to-have.

Craig Frehlich (15:24):

Mm-hmm. Good answer. You know, it’s a scary future, and you see it on social media, people talking about what’s next. It’s almost like January 2023 has really hit this exponential growth in terms of AI. So I’m just thinking about the future. Right now, AI has shown us how evolved it can be in regards to language processing, but what about getting avatars to start to behave, further down the road, where an avatar makes decisions on its own movement and behavior? What are the implications? Is it a good thing if pretty soon these avatars not only can start to think for themselves in regards to conversation, but also in what they do behavior-wise?

Sophie Thompson (16:16):

I think the benefit of AI evolving in virtual worlds is that there will be increased realism and immersion, and when it comes to learning and education, that will make the experience better in terms of engagement, effectiveness and so on. I mentioned how conversations will feel more natural, and you mentioned how avatars will behave in a more realistic and believable way, because at the moment, to be honest, it’s not really something I majorly notice, but I imagine in the future I’ll look back and think it was really obvious that avatars’ movements weren’t quite natural, the way they pick things up and so on. So the good thing about the evolution of AI is the realism, and the decision making they’ll be able to do when it comes to learning communication skills and how they respond.

(17:12)

I think there are definitely ethical concerns, and the biggest one for me is around privacy. As AI becomes more capable of making decisions in virtual worlds, it becomes increasingly important that we set those parameters so that ultimately it’s still humans that are in control of the AI; even if the AI can become autonomous, you put that control on it. Something that always concerns me when talking about emerging tech in general is the delay in regulation when it comes to that tech. So when you are choosing a partner for any of these projects, really look into what data is being collected and why that data is being collected.

Craig Frehlich (18:02):

Yeah, when I speak to educators I always talk about something I call the cell phone moment. The cell phone moment means that in most schools, educators and education in general, I think, waited too long to start using the cell phone in the classroom with students, because now, if you try to use the cell phone as a tool for learning, it’s a bit of a fight, cuz many students, especially teenagers and anyone in their twenties, see that device as their domain for things like social media. And so it’s this delicate dance between how quickly we should be putting emerging tech into the minds and hands of students versus weighing the pros and cons and ethical implications before using it. So, well said.

Sophie Thompson (18:53):

I think it’s important to meet students where they are. Students in this day and age are likely to have headsets at home with gaming, and if we’re using teaching methods that we’ve always used, they might not find that as relatable. So I think that’s an important thing to note as well. But as much as we want to be careful around privacy and regulation and so on, there is research that we can do ourselves to lower that risk and take that leap of faith, before it becomes almost old fashioned to then try and do VR in, say, 10 years’ time.

Craig Frehlich (19:32):

Yes. Well said. What are some of your company goals, either short or long term, for 2023?

Sophie Thompson (19:40):

Short term is definitely integrating ChatGPT where relevant for us, and ChatGPT-4 as well, because, I mean, if the rumors are to be believed, we think that GPT-3 has blown our minds, but apparently GPT-4 is next level, so that is very interesting. Then in the more immediate term, next month we are releasing our DE&I course, which follows a day in the life of Anuka, a South Asian woman in the workplace. We’re really excited about releasing that. Long term, really, it’s just to build out our catalog of courses. We have about 30 courses at the moment that all have these online exercises and VR exercises, and we’re really building that out and going back and improving the existing courses that we have as well, to make sure that they’re really up to date with the latest thinking and lines of learning.

Craig Frehlich (20:43):

Hmm. I love the topic of diversity, equity, and inclusion. Many schools are trying to tackle that, and having a VR module or app just spices it up, if I can use a layman’s term, because, again, especially when something comes into education from a curriculum perspective, often vendors don’t think of the tech version. They always default to some sort of textbook or video version. So it’s great to see companies like yours saying that this would be way better done, and much more immersive for people, if they do it via VR. So thank you for that. Anything… go ahead.

Sophie Thompson (21:31):

Sorry, I was just gonna say, it brings out that emotional connection that I mentioned earlier as well. I mean, there are lots of studies about the ineffectiveness of D&I training, but there aren’t many, if any actually that I’m aware of, around VR D&I training, because that has a different level of impact. So yeah, we’re excited to see the results from that.

Craig Frehlich (21:55):

Anything left unsaid that you think the audience might want to know or hear about what you guys are doing?

Sophie Thompson (22:01):

I would say that, as much as I’m saying we combine e-learning and VR, we do also offer the VR standalone. I think the difference, perhaps, between us and some other people in the space is that we tend to have more structure outside of VR and use VR as a supplementary learning tool for a topic, rather than kind of replacing it. Not that people do this, but just as an example: the D&I course, say it’s 20 minutes long; that doesn’t negate someone’s lived experience. A 20 minute course isn’t going to give you a huge insight into somebody’s lived experience, but it’s a starting point for reflection and conversation and so on. And really, that’s the beauty of VR: because of that experience that people have, that active learning, that is what makes all the difference. But ultimately, whenever we talk about tech, and obviously this conversation has been tech heavy, as is always the case with learning tech, the learning should come first and the tech should enhance or enable learning outcomes rather than distract from them. So I’m a big advocate of using VR where it’s relevant, as opposed to because it’s the latest technology that could be used.

Craig Frehlich (23:30):

Yeah. And there’s tons of research, which you guys are probably aware of, that unpacking before and then after VR experiences is essential to make meaning out of, like you said, the active learning that went on in the VR. So kudos to you guys for having the forethought to manage that for the learner, to say: before you go in, and after you’ve experienced it, there are still ruminations and things that the learner should think about for a full understanding. So well done.

Sophie Thompson (24:06):

Thank you.

Craig Frehlich (24:08):

How can people get ahold of you? Maybe they’re interested in your app or just some of the other things that you guys are doing?

Sophie Thompson (24:15):

Yes, you can learn more about VirtualSpeech and see some demos at virtualspeech.com. You can connect with me on LinkedIn, or you can actually just directly drop me an email if you’d like to; I’m at sophie@virtualspeech.com.

Craig Frehlich (24:30):

Amazing. Thanks so much for paving the way. Like I said, I was just elated to see on LinkedIn that you guys were experimenting with this new AI bot. Exciting times ahead. Thanks for making education better through VR.

Sophie Thompson (24:47):

Oh, wow. Thank you very much, Craig. Thank you for having me on your podcast.