The Do’s and Don’ts of AI Knowledge Assistants in the Workplace with Stephen Harley
AI-based virtual assistants have become a natural part of people’s homes. Now that they’ve made their way into the workplace, let’s take a closer look at them. Whether it’s helping your team make better decisions or scheduling a meeting, AI-based virtual assistants are here to stay. In this episode, our Product Manager, Stephen Harley, shares what’s possible, along with the Do’s and Don’ts of AI knowledge assistants for the workplace.
Transcript
Pete Wright:
Hello everybody and welcome to Connected Knowledge from Upland Software on TruStory FM. I’m Pete Wright.
Got AI on the brain? You’d be forgiven if that answer is yes. Peek around just about any corner of the organization and you’ll likely find a team exploring how AI can help in some fashion or another. But where do you draw the line, especially in your contact center? Our own Stephen Harley is here to help us define what that line might look like. And with over 20 years’ experience in both engineering and sales, he’s perfect for the job.
Stephen, welcome to Connected Knowledge.
Stephen Harley:
Hi Pete. Thank you for having me.
Pete Wright:
People are using AI knowledge assistants for everything from grocery lists to building music playlists to D&D and gaming and so much more. Do you think we’re at the point where we’ve reached mainstream for AI assistants?
Stephen Harley:
Yeah, I think we have turned that corner now. If you think about it, they’ve come a long way: from traditional IVRs, to voice recognition driving more accuracy in the IVR, through to the text-based assistants we saw 20 years ago.
Pete Wright:
Sure.
Stephen Harley:
Siri launched what seems like such a long time ago, but it’s only about 13 years old, and it already looks dated in its capabilities when you compare it to the generative AI and the understanding behind it that’s driving the new, modern virtual assistants. I think they really have hit mainstream in capabilities now.
Pete Wright:
So what are the impacts that you’re seeing, particularly as we talk about AI assistants in the work that you do?
Stephen Harley:
So I think acceptance is probably one of the big areas. Everyone accepted them into their homes. It’s become more familiar to be talking to a virtual assistant than it used to be; it seemed a little bit alien to people at first. So not only has the technology evolved, but our mindset towards these assistants has evolved as well. And through that acceptance, we’re likely to utilize them more, and through that we find more use cases for them. If you think of the home ones, as you mentioned: grocery shopping, asking for music to be played, et cetera. But then people quickly wanted to connect them to their homes, to control their homes, and we’re looking for yet more use cases to get more out of the technology. So in the enterprise, I think it’s about being able to organize meetings or, post-call, to summarize information and allow it to be disseminated to others more easily.
Pete Wright:
I’m always curious, when we’re talking about AI, especially with somebody who’s worked on all sides of the house, including in call centers: how do you use AI assistants personally? What’s in your bag of tricks?
Stephen Harley:
We’ve got Alexas at home, so it goes all the way through to my young children using them and asking questions. If they don’t know something, they don’t always have to ask a parent; they can just reach out. If they’re stuck on some maths, or how many days till Christmas, they can ask. So from a personal point of view, you see it being adopted very quickly there. But from a more work-related usage, certainly getting the assistant to help in making decisions. As a product manager, you can actually reverse the virtual assistant and have it ask you the questions, getting it to challenge what you’re doing as you feed inputs into it, which helps with brainstorming and so on.
Pete Wright:
This is the thing that I think is really subversive about, as you say, the last 13 years or however long ago the first Alexa dropped. We brought these things into our houses and they started answering questions for us and playing music. And now I’ve got shortcuts all over my house so I yell at my house to turn the lights on and off. And it’s kind of adorable. And now LLMs have hit. Like you say, the technology in the cans around my house feels outdated. I’m ready for the next thing.
This parallels, to me, a little bit the Google problem that we had in enterprise search where people got so accustomed at home to what they were capable of finding on the internet, that when they got to work and were trying to search their intranets, they were flabbergasted at how slow and terrible it was. And so we saw innovation there. As we look at that parallel from AI assistants at home and what is already a subversive investment that we’re putting into these things in our personal lives, I’m curious your thoughts on the parallel Google problem that employees are going to work. Are they discovering that they want to use this technology at work in a way to support the work they do day-to-day? Or are we not quite there yet? Or are the tools not quite there yet? What are your thoughts?
Stephen Harley:
I think in some ways the tools are partially there. However, there has also been a reluctance to allow those tools to be used in the enterprise. There’s a lot of risk around data security, and about whether that information is going to be used in training the next large language models. So for many enterprises, things like ChatGPT have been blocked, whilst at the same time there are probably people within those organizations working closely with those large language models to understand them better, and also to train them to have a greater understanding of the organization. So for the simple tasks we’ve identified, they’re already there. But for the enterprise tasks of understanding the unique elements and the intellectual property of an organization, and being able to actually access that data, I think that’s really the next phase of the evolution: more connectors into the ERP systems, the CRM systems, and knowledge in general across the organization, allowing them to be more capable in delivering what you need rather than just helping you in a more generalist form.
Pete Wright:
That gets back to what we said about AI feeling mainstream. These assistants are mainstream in awareness at work, but not yet necessarily in practical application, which is a joyous time. We still get to be surprised at work. That feels, dare I say, a little bit fun.
Stephen Harley:
Yeah. I mean each time you work out a different thing that you can utilize it for and save yourself time, it feels like a great win as you’re running through those. Yeah, definitely.
Pete Wright:
Okay, so let’s get down to brass tacks. Given what we know now about AI-powered knowledge assistants in the workplace, and let’s say in the call center, what are you seeing as some of the do’s and don’ts in implementing these tools, or maybe just in investing in researching how these tools can work? How do you guide companies, organizations, and technical leaders in thinking about where those lines are?
Stephen Harley:
Yeah, so I think the first thing is having clear objectives for what you actually want the assistant to do. You need to really identify the tasks you want it to do, the functions you want it to perform. And then make sure that, as you’re designing the virtual assistant, you’ve got the correct guardrails in place so that it doesn’t try to go off-topic.
We’ve seen many examples of things not to do, and many have hit the mainstream media. In one case, a virtual assistant placed on a website started being offensive and rude to customers. In another, a car manufacturer’s assistant was easily manipulated into offering to sell a car for $1. So there are many examples where trying to rush things out to market quickly is probably the biggest don’t. You need to make sure it’s well controlled. And I think the biggest do is test, test, test: when you think you’ve actually got a successful implementation, it needs to be tested in every sense of what it’s there to achieve and what people may try to do with it.
Pete Wright:
So, I’ve got a question on that point. Famously, it’s difficult to know exactly what has gone into these models. They’re known for their emergent behavior, we’ll say generously, or hallucinations, whatever you want to call it. When we say test, test, test, I feel like that’s one of those lines where we want to be very sure we understand what success looks like, in a space that is so gray that we don’t actually know what pass/fail could be from one iteration of the test to the next. How do you know when something is ready?
Stephen Harley:
That is the difficult thing. And I think that comes back to one of the first points: don’t rush in trying to get it out to market, because that testing is difficult. We don’t know what’s inside these models. They’ve taken in billions of words of text from all over the internet, as well as books and many other sources, so it’s impossible to really know what’s inside them, and the larger they get, even more so. Ourselves, we use a technique known as retrieval augmented generation, which is one of the best ways to reduce the chances of hallucinations and to ensure it only gives responses based on information you’ve already fed into it yourself.
And so retrieval augmented generation is where you use a retrieval engine, the search engine, to search across your knowledge, and you use that to augment and feed into the generative text that it’s responding with. Even then, you have to be very precise with your prompt engineering to specify, as best as possible, that it should only respond based on the information you’ve given it and not from anything within its training dataset. So you’re using the large language model for its linguistic understanding and its linguistic generation capabilities, but you are giving it the actual information and the answers in real time. And that’s really one of the safest ways of maintaining the integrity of the responses it’s giving.
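To make the retrieval augmented generation flow Stephen describes more concrete, here is a minimal sketch in Python. The search_knowledge and generate functions are hypothetical placeholders, not any specific product’s API; the point is the shape of the flow: retrieve from an approved knowledge base, constrain the prompt to that retrieved context, then generate.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Article:
    title: str
    body: str
    url: str

def search_knowledge(query: str, top_k: int = 3) -> list[Article]:
    """Hypothetical retrieval step: search the approved knowledge base
    (keyword, vector, or hybrid) and return the top_k best-matching articles."""
    raise NotImplementedError("wire this up to your own search engine")

def answer_with_rag(question: str, generate: Callable[[str], str]) -> str:
    """Retrieve knowledge, then have the model answer only from it."""
    articles = search_knowledge(question)

    # Build a context block from the retrieved articles, keeping URLs so the
    # answer can cite where each piece of information came from.
    context = "\n\n".join(f"{a.title} ({a.url})\n{a.body}" for a in articles)

    # Prompt engineering: instruct the model to answer ONLY from the supplied
    # context, not from its training data, and to admit when it can't.
    prompt = (
        "Answer the question using ONLY the knowledge articles below. "
        "If the answer is not in them, say you don't know. "
        "Cite the titles of the articles you used.\n\n"
        f"Knowledge articles:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)  # any LLM completion function
```

In this shape, the model supplies the linguistic understanding and phrasing, while the answers themselves come from the retrieved, approved articles, which is also what makes per-answer citations possible.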
Pete Wright:
Well, that gets to our next question, my favorite question, which is all about how we make the world better for call centers and our agents. How are you seeing these AI knowledge assistants already making a difference for our agents?
Stephen Harley:
There are many different use cases, and the key thing is identifying the simple ones, the use cases that are so obvious that adoption will be immediate, within the realms of knowledge management. Some of the tasks when creating knowledge are creating things like titles or summaries or keywords, and you can use the assistants to automate simple tasks like those, as shown in the sketch below. You can also use it to help you evaluate the knowledge being created: you can ask it to provide feedback on that knowledge, and it will point out where you may have gaps in the information that’s been provided, or where some sections are not easily understood. So you’ve created a virtual assistant for assisting with knowledge creation. All of that happens before the approval cycle of the knowledge, so you’re maintaining what we call the human-in-the-loop aspect.
There’s less risk involved because, for anything it comes back with, the author is still choosing whether to use that information or not. But then, as you track that through to usage, by feeding your knowledge into the generative answers as I mentioned, one of the things you can achieve is that before an agent even starts to handle the call, you can already be providing potential information to them, preemptive assistance, as they actually start the call with the customer. You can then continue to present information to them as the call unfolds. And with other AI technologies, speech analytics and so on, this is becoming easier and is also something that’s now mainstream in real time.
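As a rough illustration of the simple authoring tasks described above, the sketch below asks a model to propose a title, summary, and keywords for a draft article. The function name, prompt wording, and JSON contract are assumptions for illustration, not a product API; the output is only a suggestion that the human author accepts, edits, or rejects before the approval cycle.

```python
import json
from typing import Callable

def suggest_metadata(article_body: str, generate: Callable[[str], str]) -> dict:
    """Ask a model to propose a title, summary, and keywords for a draft
    knowledge article. Hypothetical workflow sketch; prompt and JSON shape
    are illustrative assumptions, not a specific product's API."""
    prompt = (
        "You are helping an author prepare a knowledge article.\n"
        "Return JSON with the keys 'title', 'summary', and 'keywords' "
        "(a list of five terms) describing the article below.\n\n"
        f"Article:\n{article_body}"
    )
    return json.loads(generate(prompt))

# Human in the loop: the author reviews the suggestion before anything
# enters the approval workflow.
# suggestion = suggest_metadata(draft_text, generate=my_llm_call)
# print(suggestion["title"], suggestion["keywords"])
```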
Pete Wright:
I think this is a great point, because here we are talking about how we’re making the world better for our agents. We’re making their jobs easier. We’re allowing them to have a more comprehensive, competent call and hopefully help more people. And there is also this other reality: that potentially these AI-powered knowledge assistants get so good that we increase our users’ ability to self-serve to the point where maybe we don’t need as many call center agents. Is AI coming to take their jobs? The flip side of that question, I’ll say from my perspective the more opportunistic or optimistic side, is: what will we be able to do with the people we have in order to augment and build up and bolster the call center operation and their support of the institution?
Stephen Harley:
I think there are two very distinct trains of thought on that. Technology has always changed jobs, whether it’s the tractor on the farm or any other technological evolution. In terms of call centers, shift left has been an objective and a metric that has been tracked for quite some time now: from the first delivery of self-service with typical FAQs on customer websites, to making them more intelligent with improved search so more knowledge could be provided, and then automation, which for internal contact centers would be things like password resets. And you now also see in the customer service space that there are many interactions, just web-based interactions, that trigger automation to solve issues without the customer even having to deal with an agent.
Self-service, and that shift left, has always been the aim. And I think the key thing for the agents is that it actually reduces the mundane tasks, it reduces the repetitive questions they receive. Those are often the tasks that lead to agent burnout, when the job is just not interesting. So in some ways you could say it actually improves the job, by providing assistance to the agent when they are handling a call and taking away the mundane tasks that are not interesting to handle. And really, jobs have always evolved, and I think that’s what we’re going to see: yes, there will be a certain element of reduction in those first-line agent requirements, but things are always getting more complicated, and really there’ll be a shift in the responsibilities and capabilities of those agents.
Pete Wright:
Yeah. I’m ever the technical optimist. I feel like these are skills that are going to help all of us figure out how to help one another. Very, very powerful tools. Tell us a little bit about what you are working on right now.
Stephen Harley:
So with our products, we took an early lead last year in developing a number of key use cases around generative AI and other large language model usage. Within our product, we improved our search capabilities and our search intelligence using neural networks, to provide a greater understanding of the user’s intent and more accurate delivery of search results in general. That was also a precursor to being able to deliver retrieval augmented generation, or generative answers, which is a fantastic feature: rather than just providing search results, it will actually provide a summary of the information you’ve asked for. If you ask it a question, it will give you the answer immediately. With ours, it also provides citations, so it leads you off to where it gained that information. As I mentioned earlier, although hallucinations should be minimized with retrieval augmented generation, they’re still not entirely removed.
And so the ability to then go directly to that knowledge article is really powerful. On the knowledge creation and enrichment side, we’ve provided assistance with the creation of knowledge, to help the user create accurate articles and improve the knowledge they’re creating. Some of the benefits we’ve heard from that: quite often your subject matter experts are not your authors, and your subject matter experts are quite often reluctant to try to capture the information they know well. By providing them assistance in doing that, you’re more likely to get that information into the knowledge article. And even for people where English isn’t their first language, it really does give them the confidence to actually write the knowledge articles. We’re now looking at the next phase of how we can ingest unstructured content, content that we can take from elsewhere, whether it’s the live transcription or post-call transcription of the agent’s and customer’s interactions, that we can use to create the next knowledge article, or to identify an existing knowledge article and enhance it further with the additional information that’s been identified.
So, from talking to our customers, we still believe that knowledge management is going to become a more core element of generative AI strategies, because that is content that’s been through workflows and approval, versus other content in the enterprise that is more unstructured or more freeform: things that you’ll find in SharePoint or the document management systems. That’s not always approved information; it’s someone’s personal files, or somewhere someone is building up information. So if you can take some of that and put it through automation into knowledge articles to create approved content, then you’ve got a solid grounding for your generative AI strategy to start delivering the information to other users and potentially to your customers.
Pete Wright:
It doesn’t take very long, hearing you describe that loop, to start wondering: is there any concern that we’re training our generative AI models on content that was created by our generative AI models?
Stephen Harley:
That will potentially be the case. I know certainly within internet forums they talk about empty forums where it’s just-
Pete Wright:
The content apocalypse.
Stephen Harley:
Yeah, bots just responding to bots, et cetera. And I think, again, some of the things that we’re looking at building or working on are simple things. If you’re generating content, check to see if it already exists first. You don’t want to be creating duplicate content; that actually dilutes the quality of your knowledge base. And if you do have something that’s similar, then, as I just mentioned, you can improve that content instead. So it’s all about maintaining the quality of that content, and that content is always evolving; it should always be under review and improvement. So if we can do some of that review and improvement using the AI and automate those tasks, then you’re maintaining that high level of quality in centralized content and preventing, as you just mentioned, the AI just regenerating based on its own previous information.
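One simple way to implement the "check whether it already exists" step Stephen describes is to compare a candidate article against existing ones before creating it, for example with embedding similarity. The sketch below is an illustrative assumption: the embed function and the 0.85 threshold are placeholders, not product specifics.

```python
from typing import Callable, Sequence

def cosine(a: Sequence[float], b: Sequence[float]) -> float:
    """Plain cosine similarity, kept dependency-free for illustration."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def find_near_duplicates(candidate: str,
                         existing: dict[str, str],
                         embed: Callable[[str], list[float]],
                         threshold: float = 0.85) -> list[str]:
    """Return titles of existing articles similar enough to the candidate that
    it should enrich them rather than be created as new, duplicate content.
    'embed' is any text-embedding function; 0.85 is an illustrative cut-off."""
    cand_vec = embed(candidate)
    return [title for title, body in existing.items()
            if cosine(cand_vec, embed(body)) >= threshold]

# If this returns matches, route the new material into an update of those
# articles instead of publishing a near-duplicate.
```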
Pete Wright:
Right, right. Well, I think it’s just a fascinating, fascinating journey that we’re on here. Every single time we talk about AI and look at the opportunity that we have to maintain, as you say, quality and currency for our agents, I think that is a step well taken. Stephen, thank you so much for hanging out with me today and talking a little bit about your work. And I realize we keep saying products, but this is the RightAnswers product that you’re working on.
Stephen Harley:
It is, yes. RightAnswers’ knowledge platform. Yes.
Pete Wright:
Look, I have put a number of links in the show notes for people to check out, read more if you’re interested. Check it out. Just scroll down in your notes and you will see a couple of links for learning more about just the very things that we’re talking about today. Thank you so much for joining us. We appreciate you downloading and listening to this show. We appreciate your time and your attention. We’d love to hear what you think. Just swipe up in your show notes and look for that feedback link to send questions to us or any of our past guests. We’ll do our best to get those answered. On behalf of Stephen Harley, I’m Pete Wright and we’ll see you right back here next time on Connected Knowledge.