Podcast

The Knowledge Imperative: How AI is Reshaping the Way We Work with Stephen Harley and Adam Obrentz

Drowning in Data, Starved for Answers? Discover how leading experts Stephen Harley and Adam Obrentz are tackling the critical challenge of making organizational knowledge truly work. In this insightful episode, learn about the evolution of knowledge management in the age of omnichannel support and the revolutionary impact of AI. Uncover strategies to move beyond fragmented information and unlock a competitive advantage by delivering the right knowledge, exactly when and where it's needed. Plus, get a glimpse into the exciting future of the RightAnswers platform.

Transcript: 

Pete Wright:

Every organization relies on knowledge, but how often does that knowledge actually work, delivered seamlessly exactly when and where it’s needed? Too often our knowledge is trapped in systems that don’t talk to each other, buried in outdated processes, yet when managed well, knowledge isn’t just information, it’s a competitive advantage. Today we’re joined by two experts who have spent their careers helping to address this challenge. Stephen Harley is a product manager, and Adam Obrentz is a product knowledge consultant here at Upland. They’re here with me today to explore the evolution of knowledge management, the shift to omnichannel support, the role of AI and why companies are finally treating knowledge as a business critical function. We’ll also look at the future of RightAnswers, the platform at the heart of this conversation. Most importantly, we’ll ask how can organizations ensure that knowledge isn’t just stored but actually used? I’m Pete Wright. Welcome to Connected Knowledge. All right, Stephen and Adam, welcome to the show. I’m so glad to have you both here.

Stephen Harley:

Great to be here.

Adam Obrentz:

Thanks for the invite.

Pete Wright:

You have spent your careers working in and immersed in the world of knowledge management. Do you remember when you realized that knowledge wasn’t just a tool, but a sort of pillar of how organizations operate? I imagine you both running around as kids with your day runners and your file of faxes, just really organizing information zealously. Is that you as kids?

Adam Obrentz:

Quite the opposite for me. I grew up kind of all over the board, a jack of all trades, just really interested in so many different things and so many different topics. I think that’s really where my knowledge journey started. As early as I can remember, I was learning to tinker with things and take things apart, and if I took something apart and couldn’t put it back together, that’s really where my journey started, right? How do I learn to do that? Originally it was books, and then the web was invented and it was the web.

Then it was college, and beyond college into my first job as a technical writer. That’s really where I started to see what was behind knowledge management. It wasn’t just the consumption of knowledge, it was the curation of knowledge. So my journey started as a little tyke, taking things apart and not knowing how to put them back together, and then it evolved into teaching other people how to manage that process.

Pete Wright:

Stephen, what’s your story? Do you have a breakthrough at the end of this accumulation of tiny insights over the course of your life that led to knowledge management?

Stephen Harley:

Yeah, actually, I have to say it’s kind of similar to Adam, in that it wasn’t a direct pull toward knowledge. It was the same thing of finding the internet, with things like Wikipedia, and being able to find information more powerfully, but not always finding accurate information. Over time, especially in the last 15 years of working with RightAnswers, I’ve found how much more efficient you can help people be by having the right information available to them at the right time. That really empowers them, and all of us, to be better at anything we try to do, whether it’s fixing something at home or improving and fixing things within our workday. Knowledge is there. We’re classed as knowledge workers no matter what line of the business we’re in, and it’s really key to be able to get to that knowledge effectively.

Pete Wright:

Let’s dig in a little bit. As we set the stage for why knowledge management matters, we know that organizations struggle not just with capturing knowledge, but with making it actionable across a wide swath of experiences and requirements. So how would you define the knowledge management landscape today, and in particular, what misconceptions should we clear up as we go into this conversation?

Stephen Harley:

I think one of the key things is that we’re in an age of information overload, and so being able to get to the right information, and actually filter out all of the other information that we don’t necessarily need, is very important. And I think the need for clear and concise knowledge is more pressing now than it used to be. We don’t have the time to be reading huge volumes of information. We need to get to the key points we need to know as quickly as possible. I think that’s the need we’re finding now for knowledge management across our customer base.

The last two years have been phenomenal in the resurgence of the importance of knowledge management, and I think it’s quite clear that our customers all believe that structured knowledge, with human approval and quality information, is the bedrock of any of the new generative AI projects that they’re working on. In the enterprise, we have access to many, many platforms and a lot of information, but it’s not always well-structured. And so we’re finding that RightAnswers is being used as the core of any projects they’re working on, whether it’s with us or independent, to drive their next initiatives and gain that extra edge, the competitive edge.

Adam Obrentz:

Yeah, and I’ll piggyback off of that as well. Especially in the last two years, as Stephen mentioned, I’ve seen a shift from strict knowledge consumption, where I’m just serving knowledge to my end audience, to a more holistic approach, where we’re asking people to engage with knowledge, contribute to knowledge, modify existing knowledge, and then, as Stephen mentioned, fit in the AI component. How does that all fit into this new landscape, where I might deliver knowledge in a standalone knowledge base, or in an omnichannel approach where everything I need is all in one tool and the knowledge is just one component? Things are evolving constantly, but it’s much different today than it was even two years ago, and five years ago was completely different. It’s really that shift from a content repository to a true knowledge base and everything that goes behind that. It’s not just the curation of knowledge, but how do we do that through KCS, or through AI, and how do those components fit into my journey? So things are shifting.

Pete Wright:

I want to piggyback on that, Adam. As a technical communicator, I’m looking at you for, I guess, solace. With the advent of all of these tools that allow us to consume so much more, so much more quickly, and, to Stephen’s point, to get to the information we need right away, are you seeing increased effectiveness and efficiency with that information at this point, or are we just following the headlines and saying employees are more overwhelmed than ever?

Adam Obrentz:

That’s a great question. There are actually two things I picked out of that, and I’ll take them one at a time. The first is, are we able to find things faster, more effectively, and more efficiently with the advent of these new tools that we have at our disposal, like AI and AI-generated content? Yes. This year specifically, we’re seeing the fruits of our labor in moving to AI. We’ve had many customers do case studies with their initial rollouts of AI. Is it helping them solve problems faster? Is it helping them close tickets faster? The answer is yes. It is very much an art and not a science, in that there is some give and take in how much AI versus how much human element is needed.

But yeah, we’re absolutely seeing benefits from all of these new features that have rolled out in the last two years. And that brings me to my second point. My background is in instructional design and knowledge management as well, and there’s great fear in our industry amongst those workers: am I going to be obsolete in three years? I don’t see that. I see very much the need for the human in the loop. Though AI is helping to augment our knowledge creation, we still need people to look at the knowledge that’s being created and to organize and sort that knowledge. A traditional knowledge writer or technical writer yesterday might have worked eight hours a day creating copy, where now they might spend three hours a day creating copy through AI, and the other five hours a day curating the knowledge base they have in existence. And that’s a big shift, right? We’ve all had that feeling: “I don’t have time to clean up my knowledge base, I’m just going to move forward.”

And now we’re able to spend time on things that might’ve gotten shelved 2, 3, 4, 5 years ago and really never addressed. And so we’re seeing not only some help in creation of knowledge, but also the time that we need to make sure that our knowledge is presented accurately, it’s up-to-date, and is really effective.

Stephen Harley:

I think on the agent side, generated answers are really just a new avenue to ensure accuracy in the responses that agents are giving. There are a lot of companies that worry about brand and about the accuracy of supporting their customers. In the past, you would just have search results, and agents had to find the answer within the suggested results. Now they’re being given a summary of what the AI believes the answer is. In highly regulated industries that we work with, they still have to go to that knowledge article and double-check it, but it speeds up the process: here’s the answer the AI believes is correct, and that allows them to go straight to that answer in the knowledge and use it to respond to the customer. So it really is making them more efficient, and also more accurate, in the responses and the support they’re giving to their customers.

Pete Wright:

Let’s flip the conversation and look at it from the customer’s perspective. You already dropped the word omnichannel. Customers are in a world of immediacy, right? They expect instant answers to their most pressing questions, whether they’re on the website, chatting with a bot, or calling a support center. How is this shift toward omnichannel changing the way that organizations think about their knowledge management?

Stephen Harley:

Many of our customers are doing full knowledge audits and reviews. They’re actually rewriting their content to work more effectively with AI, which is a little ironic in itself. However, many of them have also realized that they’re really improving their knowledge according to best practices: clearer, more cleanly written, easier-to-read, shorter knowledge articles that benefit human and AI alike. So that’s something we’ve definitely seen, to ensure that they are delivering the correct information to end users. The trend we’re seeing through omnichannel is that people expect to get information quicker, faster, and more succinctly. You don’t want to just be pointed at a large knowledge article to go and find your own answer. You want the answer given to you, so you can move on quicker. That’s definitely something we’re seeing.

Adam Obrentz:

Yeah, and Stephen’s exactly right. My focus with our customers is primarily on the implementation of new instances of RightAnswers. However, I spend the other half of my day working with existing customers that are looking to improve the experience, and that’s what we’re seeing. We’re seeing customers move away from a standalone knowledge management platform, or even an integration where RightAnswers is just a component of another tool, to an all-in-one single place, a single source of truth, where our customers are able to order new equipment, search the knowledge base, and possibly respond to a ticket all in one screen. And the challenge there becomes: how much money do I have to spend on omnichannel development? What’s the most important starting point? Am I coming in through a ticketing system or a custom interface, and am I getting a return on investment for the development time it’s taking me to stand up an omnichannel experience?

What’s really great about RightAnswers is that it does all of those things. We can meet our customers wherever they are: whether they’re developing a custom interface that’s completely web app-based and API-based, where RightAnswers is just a small component of the experience, or RightAnswers can become the hub, the omnichannel platform, where people log into RightAnswers first and then jump to their various tools and platforms from there. So there’s really a mix in that evolution, but we are absolutely heading toward that inevitability where everybody wants to go to one place and stop that swivel chair, as they say in our industry, of having to turn from one screen to the next, and really just serve our customers where they want to be served.

Pete Wright:

Does that fly in the face of this drive toward personalization in knowledge management, right? If everybody wants it just the way they want it, can they have it just the way they want it if they all have to stop the swivel and turn to the same place? How does that shake out?

Adam Obrentz:

That’s a great question. From my perspective, and Stephen might have a different perspective from the development side, what I’m seeing is that customers aren’t so focused on the all-in-one platform as they are on being able to serve knowledge where they are, in other platforms. A good example of that would be our RightAnswers browser extension, where users can have access to the knowledge base while they’re browsing the web. They might be responsible for responding to customer feedback and threads on their customer forums, or they might be researching something in a ticketing tool, looking through a ticket, and want to explore whether there’s knowledge connected to it.

It’s not necessarily all in one place, although the idea when we say omnichannel is that I’ve got one screen, one single pane of glass, and everything lives within that single pane of glass. That’s ideal in a utopian world where we have infinite resources to develop that single pane of glass. But a lot of our customers don’t have those infinite resources, and so we’re able to meet them kind of halfway, right? We’re still able to serve knowledge, still able to connect them to the resources they need, without being truly omnichannel, if you will, without truly being in that single pane of glass.

Pete Wright:

Fair response. I appreciate that. I want to turn specifically to AI. We’ve talked about AI on the show, and you’ve been talking about AI so far. Let’s take the idea that AI is here, right? It’s table stakes. We’re all adapting. What is it that you are asking of customers in terms of building their proficiency for working with AI? And I’m thinking about changes in specific roles. Take the content manager, the copywriter, the technical writer: now that they’re spending only three hours writing copy, what are the new skills, the sort of cultural adaptations, that individual needs to develop? Are we teaching prompt engineering? What are the kinds of things you expect people to learn to use the tool more effectively?

Stephen Harley:

I think there’s definitely a new role in terms of working with the platform from a more administrative point of view. We do give access to the prompts, and so there are capabilities for enhancing the out-of-the-box prompts that we deliver to do things like personalized style guides or style-writing information, language selection, et cetera. So there’s a lot you can do and achieve with the prompts themselves. And I think that’s a skill many people have learned over the last couple of years: the more direct and the more informative you are within the prompt about what you would like to achieve, the more you can get out of it. We’ve seen some really exciting developments with our customers doing that. That’s not necessarily part of day-to-day business as usual for the users, but making those prompt changes may affect how they’re working.

But I think it’s more about bringing them back from just being knowledge writers to being subject matter experts: being able to understand the products, platforms, and processes of whatever industry they’re in, and being greater experts at their own business, to ensure that the next piece of knowledge created is more enriched with the information that’s expected. So it’s about moving them from just writing knowledge articles to really empowering them to dive deeper into the accuracy of the knowledge itself. I think that’s where we’re seeing the change. It’s also unlocking the potential of many other users, for whom maybe English isn’t their first language. Quite often subject matter experts are not linguistic experts, and we’re giving them a voice to capture the information they know and write it out with confidence, because they can use these tools to build out that knowledge without worrying about the quality of the writing itself. So it’s unlocking a new set of users to be knowledge creators as well.
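As an editorial aside: a customized rewrite prompt of the kind Stephen describes, extended with a house style guide and a target language, might look something like the sketch below. The template, function name, and style-guide text are purely illustrative assumptions, not the platform’s actual out-of-the-box prompts.

```python
# Illustrative only: a rewrite prompt extended with a house style guide and
# a target language, in the spirit of the prompt customization described above.

STYLE_GUIDE = (
    "Write in short sentences. Use active voice. "
    "Address the reader as 'you'. Avoid jargon."
)

def build_rewrite_prompt(draft_text, language="English", style_guide=STYLE_GUIDE):
    """Assemble a prompt asking the model to polish a subject-matter expert's
    draft without changing its technical content."""
    return (
        f"Rewrite the following draft as a knowledge article in {language}.\n"
        f"Follow this style guide strictly:\n{style_guide}\n"
        "Do not add or remove any technical steps.\n\n"
        f"Draft:\n{draft_text}"
    )
```

The point of keeping the style guide and language as parameters is exactly what Stephen notes: administrators can tune the prompt once, and every author, including non-native English writers, inherits the house style for free.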

Pete Wright:

It’s very powerful. I make all of my emails sound like John Oliver on Last Week Tonight. I’m very dynamic and subtly British. Adam, what are you going to say?

Adam Obrentz:

I was going to say, that’s hilarious. And to piggyback off of Stephen, it’s also about the experience. We’re seeing a shift from having our heads down. I was a technical writer for many years, head down, delivering deadlines for the number of articles or how much copy I had to make on a monthly basis. There’s a shift away from that, because a lot of the heavy lift of knowledge creation is now being assisted with AI. And prompt engineering, of course, is paramount for us: learning how to engineer those prompts, to tweak them on the fly and make things operate smoother or deliver exactly what we’re looking for. But beyond that, in the spare time we’ve gained, it’s about the user experience. And here’s the big shift.

It’s shifting from “produce as much copy as you can, edit as much copy as you can, review, review, review” to taking a step back: how is the information I’m developing landing with my end user? Having insight into: is the copy I spend three hours a day on being used? How is it being used? How do I modify that? How do I use my prompts to help modify that? That’s something we’re developing, and I know Stephen’s keen to talk about what’s on the horizon for RightAnswers: how do we take an existing solution that might have zero views, or that can’t be found, and use AI to enhance it or identify where it can be enhanced? That’s really where I’m seeing a shift from a consulting perspective. When I meet with our customers, they have a lot more time in their day to really pay attention to how what they’re making is resonating with the people consuming it.

Pete Wright:

I want to get to that point specifically, and I know Stephen will hold me to that, but I have a question before we get there, which is, again, about the sort of new skills we’re asking of our customers and our adaptations around trust. On the new skills, first and foremost: I know it was out there on the horizon, this idea that wouldn’t it be great if every employee were using AI to document their experiences and put those into our knowledge management system, in full sentences and in complete, clear steps, even though many of these people are not trained technical writers. Are you seeing that shift happen? Is that a reality we can embrace, or do we still have the experts using the AI, and they still have to do a lot, a lot, a lot of cleanup?

Adam Obrentz:

We drink our own Kool-Aid here at RightAnswers, so I can answer from the customer side. Actually, from that perspective: I manage a technical writer who manages our knowledge base content. She is based in India and is not a subject matter expert in many of the things I ask her to write about, nor is she an English expert, and so I am absolutely seeing a shift toward that augmented assistance. Two years ago, if I was looking for somebody to fill that role, I would have been a lot more stringent in my search. I would have been a lot narrower in the types of skills I was looking for.

And now I’m a lot broader, because I have faith that our products can give that extra boost, and I can focus on other things with that asset we have. So yeah, absolutely, what we’re asking people to be experts in is changing. We still need those experts, but to your point, we need them to learn prompt engineering. We need them to learn maybe some basic CSS modification, which really doesn’t have anything to do with AI. But again, back to that: I’ve got more time in my day, so how do I customize the experience for my end user? Those types of skills, those soft skills, are really helpful.

Pete Wright:

So let’s take part two. Stephen, I’ll turn to you: the advancements in search using AI. We know that traditional search functions are frustrating, and have been, and that users have had to be much more precise in what they’re looking for. We also know that AI can change that dynamic pretty dramatically. But what comes with that, I think, if you look at the public sphere, is a question of trust. You’re searching for things, and if the answer can’t be found, do I have to wonder whether you’re just making it up, AI? So I wonder how you address that question from a customer perspective.

Stephen Harley:

Yeah, so first of all, on the search side: since we implemented neural-based searching in our hybrid search engine, which means we’re actually combining the traditional natural language understanding we’ve had for many years with semantic-based searching, users are more likely to find what they’re looking for. We’ve analyzed the search metrics before and after and seen a dramatic improvement in our search relevancy ratings and so on. If you know exactly what to search to match the words that are in the knowledge, that’s where traditional searching was very efficient. But if you didn’t know the exact terminology to use, that’s where the neural part of the hybrid search comes in and helps you find those answers.
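As an editorial aside: the hybrid approach Stephen describes, blending traditional keyword matching with semantic similarity, can be sketched roughly as follows. The scoring functions, the pluggable embedding, and the blending weight here are illustrative assumptions, not the actual RightAnswers search engine.

```python
import math

def lexical_score(query: str, doc: str) -> float:
    """Traditional keyword matching: fraction of query terms found in the doc."""
    q_terms, d_terms = set(query.lower().split()), set(doc.lower().split())
    return len(q_terms & d_terms) / len(q_terms) if q_terms else 0.0

def semantic_score(query: str, doc: str, embed) -> float:
    """Semantic matching: cosine similarity between embedding vectors.
    `embed` is any callable mapping text to a fixed-length numeric vector."""
    q, d = embed(query), embed(doc)
    dot = sum(a * b for a, b in zip(q, d))
    norm = math.sqrt(sum(a * a for a in q)) * math.sqrt(sum(b * b for b in d))
    return dot / norm if norm else 0.0

def hybrid_rank(query: str, docs: list, embed, alpha: float = 0.5) -> list:
    """Blend both signals; alpha sets the lexical/semantic balance."""
    scored = [(alpha * lexical_score(query, d)
               + (1 - alpha) * semantic_score(query, d, embed), d)
              for d in docs]
    return [d for _, d in sorted(scored, key=lambda s: s[0], reverse=True)]
```

A query phrased in the knowledge base’s own terminology scores well on the lexical side; a query in unfamiliar wording can still surface the right article through the semantic side, which is the gap Stephen says the neural part closes.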

On top of that, we put the generated answer summary, as I mentioned earlier. It’s very similar to the snippet of text you see below search results. It’s that teaser that says, “Is this what you were looking for? Have I found the right information?” Sometimes it gives you the exact answer. When we first developed it, one of the first questions I asked was, “Does VPN mean virtual private network?” And it came back with an answer, albeit a little lengthy, that said yes to confirm it was true. Rather than you having to read an article that explains what virtual private networks are and at some point uses VPN as an acronym, it just gave the answer: yes. So that’s where it can give you very specific answers. And we’ve seen some of our customers use this to ask it to deliver information out of tables within a knowledge article, where it actually understands the construction of the table well enough to know whether it needs the value from a one-year contract or a three-year contract to give its answer.

One thing we also provide is citations: which knowledge article was used to answer the question. And this is where it gets even more interesting. Since we developed that, we’re now getting some amazing metrics that we’ve never had before, which we can actually use to form a feedback loop. In the past, we would look for user journeys. We would look for a user who does a search, views knowledge, and then clicks “Yes, that helped.” We could take that as a really good positive that we provided what they wanted. But then you’ve got all the estimates: if they do a search and view knowledge with no other feedback, then hopefully they found what they were looking for; or if there’s a search and then another search immediately, the first search wasn’t good enough. You have to make all these assumptions.

Now we can actually go back into the user logs and see that a question was asked, that 10 knowledge articles were used to potentially answer the question, and that two of them were actually provided within the citations. So we know that those two actually do answer the question; the language model has told us which ones it specifically used. That’s more positive than the user journeys we’ve analyzed in the past. We’ve also had times where it didn’t provide an answer. So we’ve gone back in and manually passed those knowledge articles through a language model, asked it the question again, and it said it couldn’t answer it. Then we’ve asked the model to explain why not, because we believed the answer was actually there in one of those articles.

And it’s gone on to say that whilst the answer may be there, the steps were not clear enough for it to be used as the answer. It wasn’t confident the answer was clear enough to use. So that’s immediate feedback to the knowledge team: if the article is improved, it would actually answer the question next time. Like I say, because we’ve got the AI actively giving us this extra feedback, we can provide a better loop back to the knowledge managers than we’ve ever had before, which is exciting as well.
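As an editorial aside: the citation-driven feedback loop Stephen walks through could be sketched like this. The `ask_llm` callable and the response shape are hypothetical stand-ins; the actual RightAnswers pipeline is not described in that detail here.

```python
def citation_feedback_loop(question, candidate_articles, ask_llm, report):
    """Sketch of the loop described above: cited articles are logged as proven
    answers; unanswerable questions trigger a "why not?" follow-up that is
    routed to the knowledge team as an improvement suggestion.

    ask_llm(question=..., articles=...) is a hypothetical stand-in returning a
    dict like {"answer": str, "cited_ids": [article ids]}.
    """
    response = ask_llm(question=question, articles=candidate_articles)
    cited = response.get("cited_ids", [])
    if cited:
        # Positive signal: the model says these articles answered the question,
        # which is stronger evidence than inferred user journeys.
        report("answered", question, cited)
    else:
        # No answer: ask the model to explain why (e.g. "the steps were not
        # clear enough") and feed that back to the knowledge managers.
        followup = ask_llm(
            question=f"Explain why the articles could not answer: {question}",
            articles=candidate_articles,
        )
        report("gap", question, followup.get("answer", ""))
```

The design point is the one Stephen makes: the citation list turns a guess ("they searched again, so the first result was probably bad") into a direct signal about which articles work and which need rewriting.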

Pete Wright:

Awesome, exciting, terrifying, illuminating. These are crazy times, you all, crazy times. Let’s transition a little more deeply into how RightAnswers specifically is evolving to meet these knowledge management needs of the future. Maybe let’s start with this, for those who are coming to this learning about knowledge management and maybe don’t have experience yet with RightAnswers: what’s the core mission of RightAnswers? What’s it all about?

Stephen Harley:

Our core mission is best described by our name. It’s about providing the user the right answers at the right time, as quickly as possible. That’s been our main mission for the 15 years I’ve been working within the company. Now, doing that means utilizing the best technologies to best serve knowledge out to users, but that’s always been the mantra. And it comes from two sides that I think we’ve already mentioned. It’s about having accurate knowledge, and the key focus there is how you curate that knowledge: making sure it’s easy to read, making sure it has good visuals with images and so on. But there’s also the human approval, the approval workflows, to ensure that before information is out in front of the main user base, that information is accurate.

That’s really the core of knowledge management itself. Knowledge delivery really is just how you get that out to your users, through traditional search that’s now enhanced with more powerful search and generated answers, et cetera. But where we’re seeing things change is the need for various connectors to other platforms, connectors to synchronize our content into third-party systems. One thing generative AI has brought is the need for data to be local. In the past, large organizations have always been hungry to collect as much data as they can, just so they can use it in some form in the future. But with generative AI, there’s a clear need for us to be pushing data into, say, Salesforce and ServiceNow and so on, because otherwise their generative AI capabilities can’t utilize it in the downstream flow.

And so building connectors is one part of where our roadmap has dramatically changed in the last few years, as well as supporting our connected knowledge experience of bringing RightAnswers into those platforms through the various integrations we’ve had. We’re also pushing knowledge out into other places. But that has also brought us back to focusing on the core of knowledge management: the user creating the knowledge and having tools to help them write new knowledge and analyze the knowledge. We have analysis of tone and readability, which we’ve had for a while, and we’ve since brought in things like semantic error checking. We’ve got knowledge gap analysis on the knowledge article itself: are the steps clear in the article you’ve written? Or topic analysis that highlights that the article covers more than one topic, so you may consider breaking it into multiple articles to make it easier to read. Through to author and approver review of what the differences are between this article and another, so to speak.

So it’s all about speeding up and making efficient that whole knowledge lifecycle, from creation through curation all the way through. That’s really where things have changed dramatically: bringing the focus onto, I keep repeating it, the core of knowledge management. Otherwise, you could just put generative AI in front of other data repositories, SharePoint, for example. But people have tried that and already failed. If you haven’t got quality knowledge and control of that knowledge, then all of those generative AI initiatives fail where they’re trying to do that. They need that structured knowledge.

Pete Wright:

Adam’s head is in the background just screaming, “Preach, preach on, preach.”

Adam Obrentz:

I’m on board with that. Yeah, preach for sure. And I’ll add to that. Stephen is on the product side, and I’m on the consulting and deployment side of our product. Stephen’s mind is on this highway of, “I started here and this is where we’re going, going, going,” and I’m very much along for the ride, if you will. So, to answer your question, I’ve seen the evolution and where we’re going. I’ve seen Gen 1 of an AI release being very much like those early humanoid robots that didn’t really look like humans, that were kind of awkward and real slow; then Version 2, which was a little bit better; then Version 3, which really starts to look like a human; and so on and so forth. To bring that full circle with RightAnswers, to Stephen’s point, we started out with AI early. When we embedded AI, we were first in the market to deploy AI in our space, and it was very much a manual process.

If you wanted the AI assistant to suggest a knowledge creation, you had to tell it what you wanted to do: push a button, open the menu, and then pass it through AI. Fast-forward to Version 2, where we’d automated some of those processes, but it was still a manual process in many places. If I want AI embedded, if I want AI to generate my title and my keywords and my summary, I have to manually click. And now Version 3, where we’ve gone beyond that automation. Now we’re automating the creation full stop, but we’re also creating a secondary AI environment that evaluates the knowledge.

It’s not just the creation of it in a certain way, but the evaluation of it. To Stephen’s point, semantic evaluation, going as far as to review for duplicate articles and then giving the author a synopsis of why it’s a duplicate: specifically, how much of the article is a match, where are the similarities, and where are the differences inside of that solution. From an industry perspective, as a technical writer, as somebody with that background, that’s like having an extra team behind me reviewing my work, a team built into the tool. Again, not removing the human element, just augmenting it. And that evolution is exponential. Stephen, how long ago did we start with the advent of our AI assistant V1?
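The duplicate check Adam describes, scoring how much of a new article matches existing ones and reporting where they overlap, can be sketched with a simple similarity measure. This is a hypothetical illustration, not the RightAnswers implementation: a production system would use semantic embeddings so paraphrases also match, but a bag-of-words cosine score shows the shape of the scoring step.

```python
from collections import Counter
import math


def cosine_similarity(text_a: str, text_b: str) -> float:
    """Score two articles on a 0-1 scale using bag-of-words cosine similarity.

    A real pipeline would compare semantic embeddings so reworded duplicates
    still match; this word-count version only illustrates the scoring idea.
    """
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def duplicate_report(new_article: str, existing: dict, threshold: float = 0.8) -> list:
    """Return (title, score) pairs for existing articles above the duplicate threshold.

    The synopsis shown to the author would be built from these scores, e.g.
    "92% match with 'Reset password'". Threshold is an illustrative choice.
    """
    scores = ((title, cosine_similarity(new_article, body)) for title, body in existing.items())
    return sorted(((t, s) for t, s in scores if s >= threshold), key=lambda x: -x[1])
```

A new draft scoring 0.9 against an approved article would be flagged for the author with its closest matches, rather than silently entering the knowledge base.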

Stephen Harley:

Three years now. Yep.

Adam Obrentz:

Yeah, so it wasn’t that long ago that we didn’t have it at all. And with every release, it’s getting a little bit better, a little bit stronger, a little bit faster. So that evolution is constantly growing, and as an organization we’re paying attention to what our customers are doing with it and trying to evolve, not just based on what they say they want. Sometimes it is, but it’s also based on where we, as knowledge experts and experts in knowledge management and in building a product that can serve it, see the best way to do that. Sometimes our customers are kind of blind. They just want, like Stephen said, “Let’s put AI in front of SharePoint.” Well, that failed because there wasn’t great knowledge behind it. So yeah, there’s definitely some great evolution on the horizon, and I’m seeing that with our product.

Stephen Harley:

I think there’s another thing we’ve not really talked about too much: trust. There’s a lot of worry about the capabilities, about hallucinations and failures, and obviously there’s a lot of high-profile information you can find about where AI has failed miserably.

Pete Wright:

Put glue on pizza. Put glue on pizza please.

Stephen Harley:

Yeah, there are many great examples. So one of the things we designed is a little methodology for how to push out adoption. In phase one, a number of the features, as Adam was referring to, are informative: we might be analyzing the information and presenting recommendations for improvement to the user. But with each of those features, as we go through different versions, we’re looking at how to make them more actionable. If there’s a semantic error in the knowledge article, can we actually have a button you can click that will just rewrite and fix that semantic error, rather than just telling you about it so you have to go and do it yourself?

And one of the other things we’re doing is capturing the usage data: when are we providing that feedback, and is the user actually updating the information? Then we can go back to that data a release later, analyze it, and ask, “How close was the AI recommendation to the user’s behavior after receiving it?” That tells us whether we can get into a more automated situation, where we can take some of those tasks away, or dramatically reduce the time they take, by just trusting the AI. But that trust is something that has to be built up, both when we’re developing it and when we’re deploying it to our customers and they’re starting to use it.

They don’t just want something to take over immediately. They want to see that it’s actually working, providing value, and meeting the expectations of what it’s supposed to be doing or achieving before they would ever trust it on the automated side. And I know that many organizations are just jumping straight in: throw the AI in there, it’ll work, trust me. We’re seeing many of those projects failing by the roadside. We’re taking a cautious approach, but still driving at a reasonable pace.
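The feedback loop Stephen outlines, comparing the AI’s recommendation with what the user actually published after receiving it, can be approximated with a plain text-similarity check. A minimal sketch using Python’s standard library; the metric, threshold, and sample count are illustrative assumptions, not how the product measures trust.

```python
from difflib import SequenceMatcher


def recommendation_acceptance(ai_suggestion: str, final_text: str) -> float:
    """Return a 0-1 score for how closely the user's final text matches the AI's suggestion.

    1.0 means the user kept the suggestion verbatim; lower scores mean they
    rewrote it. SequenceMatcher.ratio() is a simple stand-in for whatever
    similarity measure a real analytics pipeline would use.
    """
    return SequenceMatcher(None, ai_suggestion, final_text).ratio()


def ready_to_automate(scores: list, threshold: float = 0.9, min_samples: int = 50) -> bool:
    """Flag a task for automation only after enough samples show users
    consistently keeping the AI's edits. Both cutoffs are hypothetical."""
    return len(scores) >= min_samples and sum(scores) / len(scores) >= threshold
```

The point of the sketch is the gating: automation is proposed only once accumulated evidence shows users rarely override the AI, which mirrors the earn-trust-first approach described above.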

Pete Wright:

Trust is earned, not just offered. Trust is earned, not offered. Two years ago, Keith Berg, our own Keith Berg, was on this show. We were talking about the future of AI and expectations, and Keith said, “It’s not going to be too long before AI is in everything, just like…” And he asked me, “Do you remember the first spell check that you had?” It was a third-party application I had to install, I don’t know on what, Windows 3.1. I had to install it because spell check didn’t come as a gift in everything on the system. Eventually it did, and that took years. But here we are two years later, and I think Keith’s prediction has come closer in two years than spell check certainly ever did. Now AI is in seemingly everything. So I offer this to you: your two-year predictions. What are you most excited about that you expect to see two years from now that we do not have yet?

Adam Obrentz:

Oh, wishlist time. Where I would love to see the development and integration of AI is from an administrator’s perspective. One thing that people aren’t thinking about, and something that I think about, is not only the end-user experience, that’s great, but I also live in the world of backend support, right? The people that manage the knowledge base, the people that manage the knowledge itself. Having AI driving insights into statistics, pulling relevant statistics to the top and presenting them to me, showing me areas for improvement. Evaluating how long it takes me to create a new group and then suggesting, “Hey, there’s a shortcut here.” Or having that AI watch and learn from my tasks and then propose improvements that can help streamline my day-to-day operations.

So if I had to ask, you know, selfishly: AI baked into more of the admin side of the product. I feel like AI is taking off exponentially for the end user and the omnichannel experience, it’s delivering generative answers, and that’s going to continue to improve year over year. It would be great to throw us lonely guys at the bottom a bone here and there to help streamline our day-to-day as well.

Pete Wright:

I love that answer. Stephen, what do you think? Your two-year prognostication?

Stephen Harley:

I think it’s going to be a mixture of the analysis. One thing we’re coming back to, full circle, is really looking at the data that we capture. Are we capturing enough data? How can we use it better to provide the users with insights? We want to move away from charts and tables and line graphs, pie charts, et cetera, that you still have to look at and decide, “So what is this telling me?” We now have the capability to send data to language models and actually ask them to review it and provide us insights into what that data actually means. That’s something we’re already looking at building into the product, and putting at the fingertips of the people in the process when they’re doing the work.

That is something I’m very excited about delivering. I think the other part is really making the capture of information easier. We’re working on various ways to use call transcriptions. That’s something that’s really evolved in the last couple of years: call centers are not just recording conversations, they’re doing full transcriptions. With live transcriptions, we can tap in and provide knowledge answers at the point the agent is working; with post-call transcriptions of closing calls, we can extract information from them. So it’s not just the transcription capabilities, it’s the fact that we can then pass the whole transcription through AI to extract the information and remove the noise of the conversation, the pleasantries of “How can I help you? What’s your problem?” and so on. That really allows us to feed into the top of the funnel of knowledge creation.

But that means… that’s why one of the things we’ve been working on more recently is duplication detection. If we can rapidly throw a lot more information into the machine, then duplication is the last thing you would ever want. Going from tens of thousands of knowledge articles to hundreds of thousands wouldn’t help anyone. You’d make the problem worse. So alongside that, you need good duplication detection. We’re also looking at ways to check the quality of the information. Does the new knowledge conflict with the approved knowledge that you already have? It’s effectively sniffing out fake news, that type of thing.

Pete Wright:

Sure.

Stephen Harley:

There are a lot of different techniques that can be used to improve each step of the way. And I think that’s really how I see it evolving: it’s not one magic thing that solves everything, it’s incremental improvements from the use of good technology throughout the whole process. That, I believe, is where we’re heading, along with the use of agentic workflows, where some of that may actually be multiple processes tied together. We send it to an AI manager that distributes it out to various other AI agents to do their tasks, brings it all back together, and checks whether it meets the tasks they were given before it’s returned.

You might not always see the full thing and the full process. It might all just be going around those agentic workflows. But I think that’s where I see the next two years: tightening up the use of these technologies. Every organization has built AI into their platform somewhere. I think we may be hitting that saturation point of the magic wand in every application.

Pete Wright:

Sure.

Stephen Harley:

We see AI everywhere. The marketing is full of it, whether it’s from your washing machine [inaudible 00:48:23]

Pete Wright:

So many sparkle emojis, so many.

Stephen Harley:

Yeah, so I think the next two years are really about the maturity of its use, as we all find more things it can do and improve on those first iterations. I think that’s what the next two years are: the excitement has slowed down, everyone can see the value it can bring, but now it’s about fully realizing that value and getting the return on investment as well.

Pete Wright:

You can see all of the fruits of those labors coming to fruition right in RightAnswers, which is a fantastically cool solution. It’s just cool. That’s the bottom line. It does so many things and it’s an enabling technology, but it’s also very cool. Please check it out. Links in the show notes. Adam, Stephen, do you guys do demos when people call to schedule a demo? Is there a chance that somebody listening to the show could get a demo with you?

Adam Obrentz:

Absolutely. Yeah. Absolutely.

Pete Wright:

That’s right.

Adam Obrentz:

We offer that. I do them daily, Pete. I’m constantly in the mix with prospects, showing them what could be possible with our product and how we can serve their needs, just an evaluation, if you will. And it’s not just RightAnswers. We offer services beyond RightAnswers as well. If you’re interested in doing health check assessments or knowledge assessments for your existing platform, you can tap into our expertise, my decade-plus of experience as a technical writer, administrator, knowledge base administrator, implementation specialist, senior technology specialist, and so on. I can help you in many different ways on your knowledge journey, not just with RightAnswers, but of course with RightAnswers, right? We want you to use our platform and experience how cool it is. But yeah, we offer that. You can reach out to our sales team or customer success team to get demos with me.

Stephen Harley:

[inaudible 00:50:40]

Pete Wright:

We’ll put a link to that in the show notes. And now you can say you could get a demo with podcast-famous Adam Obrentz. A new thing to add to the litany of your expertise: podcast famous.

Stephen Harley:

Podcast famous.

Adam Obrentz:

I love it. Do I get a badge with that?

Pete Wright:

There is a badge. I’m going to send it. I’m going to work on that. Thank you both so much for this tour. It’s been a fantastic conversation. There is so much here, I think we could go for three hours, but I appreciate having you here. I hope you’ll come back and give us a part two.

Stephen Harley:

Part two of this.

Pete Wright:

This was really fun and illuminating and thank you everybody for downloading and listening to the show. We appreciate your time and your attention. Don’t forget, you can send us a question. We’ve got a link in the show notes to the questions forum. People have sent questions and we’re getting very close to a listener questions episode where we’re going to have some folks on to answer some of the questions that you have sent. So links in the show notes. On behalf of Adam Obrentz and Stephen Harley and the fantastically cool RightAnswers, I’m Pete Wright and we’ll see you next time right here on Connected Knowledge.
