Podcast

The Blueprint for an Effective AI Implementation

AI has the potential to revolutionize contact centers, but successful implementation requires careful planning and execution. Join us as Simon Kriss, Australia’s leading expert on AI in customer experience, discusses common pitfalls of AI implementation and shares practical advice for successfully leveraging AI in your contact center.

Transcript:

Pete Wright:
Hello everybody, and welcome to Connected Knowledge from Upland Software on True Story FM. I’m Pete Wright.
What makes for an effective AI implementation? We’ve all heard of the potential, the productivity and knowledge gains in the call center, for example, but what takes an AI project and makes it more than just a ChatGPT wrapper? And what allows your people to flourish? Simon Kriss is Asia Pacific’s leading voice on the adoption and application of AI in customer experience, and he joins me this week with a blueprint for making your AI implementation sing.

Simon Kriss, welcome to Connected Knowledge. So, so, so grateful that you have graced us with your time today.

Simon Kriss:
Yeah. Hello, Pete. It’s great to have a chance to chat with you.

Pete Wright:
I’m watching the Olympics, still going on as we record this, and my goodness, the companies want me to be paying attention to their AI pretty hardcore right now. I’m curious what your sense is on the state of the union for AI. Before we dig in to the contact center in particular, how do you think the cultural implementation is going right now?

Simon Kriss:
Yeah, what a great question. I mean, AI is one of those few technologies where the technology is growing faster than the human use cases for it. We have OpenAI talking about model number five, model number six, and we’re not even using model number four properly. So it’s just outstripping us, and unfortunately that sometimes leads to a little bit of FOMO, with people just implementing it without really understanding the societal impacts. I worry. I know that the French police and military are using a lot of facial recognition software during the Olympics. How accurate is that? Are we picking on certain people over others? Many of us will have seen the wonderful Netflix show Coded Bias, so we know that there’s bias in these models, and the societal impacts are interesting.
But that’s just as interesting for businesses as well, because we see two extremes in business. We see those businesses that are suffering from FOMO and just jumping in and launching these incredible applications, but we just as much see companies that are suffering from what I call FOMF, which is fear of moving forward. They’re just crippled with anxiety about the size and the dimension and the scale of AI, to the point where they don’t do anything. It’s a really interesting time, where I think there’s still a lot of marketing hype over true actual deployments, particularly in generative AI.

Pete Wright:
It’s such an interesting perspective and I think it comes at just the right time because when we’re talking about culture, we’re talking about individuals, we’re talking about human creatures, and organizations are made up of complex machines of human creatures. If we’re looking at creating a substantive deployment of AI that really makes a difference, somehow we have to get the human creatures on board. We have to get the human creatures to trust and we have to get the human creatures to buy in. That’s the thing that I’m interested in your take on. How do you offer big wins to get people to really side with the potential that we have been promised?

Simon Kriss:
Yeah. I think it really starts with just general education about what AI is and what AI isn’t. A lot of people see these large language models, the ChatGPTs and Claudes and Perplexities, and they look at them and think, “Wow, this thing is so smart.” Actually, when you look at how much information is loaded into a large language model, it’s less than a 4-year-old has, because we have this apperceptive mass: we take in information in a number of ways, not just text. So people simply don’t understand it and are a little bit afraid of it. I think one of the biggest parts of a win, before you deploy any of this type of tech in an organization, is to go and get the people on board, and that starts right at the top. If the CEO, COO, and CFO don’t understand this technology, it’s not going to filter its way down through the organization.
Now, that being said, there’s a bit of a groundswell that we are seeing from the front line coming up, and this was highlighted in a recent report by LinkedIn and Microsoft: a lot of people are using generative AI, but they’re using it in the shadows. So they’ll use it but not tell their boss that they’re using it, or they’ll BYO their AI. They’ll bring their phone into work, ask ChatGPT on the phone, and then type that in. Things like that are happening. But the mere fact that they’re not disclosing it means that people are confused about what they should do and what they shouldn’t do.

Pete Wright:
It’s one of those things from, I think, a line in a great movie, The American President, right? In the absence of leadership stepping up to the mic, they’ll listen to whoever has the loudest voice. And that leads to that sense of confusion, as an institution, if you don’t have a sense of AI purpose.

Simon Kriss:
That’s correct and fear only exists in a vacuum. If your people are scared, “Is AI going to take my job? Are we going to downsize? Is this going to happen? Is that going to happen?” That’s all on the CEO because clearly that CEO is not openly communicating to the organization, “This is what we’re planning to do with AI.” Even if that is just a holding message of, “We don’t know what we’re going to do with it, but we’re starting to explore it to see what it can do for our business,” and then later come back and say, “Okay, this is what we’re doing with it.” In the absence of any of that, fear exists.

Pete Wright:
I mentioned the inevitable ChatGPT wrapper that becomes the injectable into so many application front ends. What is it about strong AI implementations that makes them stand above what we’re seeing in terms of the rush to market apps?

Simon Kriss:
I mean, the biggest, most obvious thing that everybody says they know about, but they don’t really, is the underlying data that the AI is using in order to do its job. Now, whether that’s big decisioning engines, whether that’s an AI chatbot, whether that’s AI-empowered knowledge systems, any of those things always rely on the data that sits underneath. People expect AI to do what a human does, which is think illogically, understand this, understand that, and until AI has been taught to do that, it won’t do it. A great practical example is if you have a knowledge management system. Whether that’s as simple as a spreadsheet or as sophisticated as a Panviva system, if you’re going to put AI in front of that, you need to go through that data, because there’ll be stuff in there that says, “Say this to the customer,” and something else that says, “Do not say this to the customer,” but AI will struggle to know the difference, and so it will just serve up whatever.
So if that data hasn’t been tagged or categorized, the AI is not going to do its job properly, and everybody kind of looks at the output and goes, “Ah, see, I told you AI would get it wrong.” AI didn’t get it wrong; you just didn’t tell it the right… If you never told your child not to go near the fireplace, they’re going to go and stick their hand in the fireplace, so it’s that type of stuff. The second one is just a tolerance for failure. Most corporations and most governments don’t tolerate failure well. We’re taught not to. As executives, we’re taught failure is bad, failure has to be… When you’re dealing with a transformational technology, you need to have some degree of tolerance for failure.
Now, often that means that if you’re just starting out with AI, you set up a little cell of four or five people that report directly to the CEO, they’re outside of the boundaries of the rest of the company and let them play, let them innovate, let them get it wrong. When they’re finished, when they get it right, then deploy it across the organization. But you either have to separate it out or create a real tolerance for failure in the organization, and they’re two of the big ones.
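To make the tagging point above concrete, here is a minimal sketch of the kind of categorization Simon describes. It assumes a knowledge base held as a simple list of records; the field names and labels are hypothetical, not any particular product’s schema.

```python
# Hypothetical sketch: each knowledge entry carries an audience label so an
# AI layer can tell agent-only guidance apart from customer-safe content.
knowledge_base = [
    {"text": "Offer the retention discount if the customer asks to cancel.",
     "audience": "agent_only"},
    {"text": "Refunds are processed within five business days.",
     "audience": "customer_safe"},
]

def customer_safe_entries(entries):
    """Return only the entries an AI assistant may surface to a customer."""
    return [e for e in entries if e["audience"] == "customer_safe"]

# Without tags like these, the AI has no way to make the distinction and,
# as Simon puts it, will just serve up whatever.
for entry in customer_safe_entries(knowledge_base):
    print(entry["text"])
```

The point is less the code than the data work it implies: every entry has to be reviewed and labeled before an AI front end can be trusted with it.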

Pete Wright:
The implication of what you’re saying sounds like, in an organization, the one who should own AI is the CEO.

Simon Kriss:
Yes.

Pete Wright:
I’m wondering how often in your experience you see that happen?

Simon Kriss:
Rarely. Usually it either kicks off with the CIO owning it because, hey, it’s technology.

Pete Wright:
Mm-hmm.

Simon Kriss:
Which isn’t correct either, because in the new world of AI, the IT team are not going out the back and spinning up five new servers and building databases. This is all in the cloud, via APIs. So tech actually has very little to do; this is a business-side problem. But where it normally winds up is wherever the first change agent happens, which is most often in the CX space, which is one of the reasons I focus on the CX space: CX, and particularly the contact center, has one of the biggest, most reachable opportunities. So they jump right in, and AI winds up living in the CX space first, and then later it’s, we’ll create a head of transformation and we’ll put it there. Rather than the CEO saying, “Okay, I want a head of AI, and I want them to lead this particular transformation, and I want them to build my ethical and responsible framework, and I want them to do this, and I want them to do that.”
So we’re not seeing that level of adulthood in AI. We’re still seeing teenagers rather than adults, and that plays out a couple of ways. One of my favorite things to share is that when it comes to AI, particularly generative AI, it is a lot like teenage romance: everybody’s talking about it, everybody thinks everybody else is doing it, the reality is hardly anyone’s doing it, and those that are doing it are probably doing it wrong. So it’s still very early days. We’re still learning to adopt, and organizations are still learning to adapt.

Pete Wright:
I’m just standing outside in my trench coat holding a boombox over my head.

Simon Kriss:
There you go.

Pete Wright:
Please, please listen to me, AI. I mean, I asked you the question of who should own it, and your answer was “rarely.” Again, the implication of that response is that someone’s doing it right. Do you have an example of an organization or institution that you feel is nailing it right now?

Simon Kriss:
Oddly enough, some of the large US government departments.

Pete Wright:
You don’t have to joke. You don’t have to joke with me right now.

Simon Kriss:
No. The IRS, people like that, are appointing definitive heads of AI. Of course NASA, some of those larger organizations. We’re also seeing some of the large banks here in Australia, and in the US and the UK, doing that. That’s because they’ve had a history with AI for a little bit longer than the rest of us. They’ve been using AI to model who should get a loan, and all of the systems that track whether or not a credit card transaction looks weird. So they’ve been using traditional AI for a while, and they were a little further down the path of their maturity.

Pete Wright:
Well, hey, great to hear. You learn something new every day, I’ll tell you.

Simon Kriss:
Yeah.

Pete Wright:
I’m glad to hear that’s right. Back when it was called ML, right?

Simon Kriss:
Yeah. Well-

Pete Wright:
We’ve-

Simon Kriss:
And we rebranded. Just because AI has come along doesn’t mean that all the other technologies are now defunct.

Pete Wright:
Sure.

Simon Kriss:
I had somebody recently who said to me, “Oh, we’ve made all this investment in conversational AI, do I have to throw all that out now because generative AI has come along?” And it’s like, “No. You just sweat that asset and continue to use it.” So there’s still a place for RPA, there’s still a place for machine learning, absolutely.

Pete Wright:
Okay. All right. So you have a 12-step process that we want to talk about, and I leave it to you to decide how much detail you want to get into on your 12 steps, but I know we want to walk through them and talk about the important areas for AI implementation.

Simon Kriss:
Yeah. It’s unfortunate that it came out as 12 steps, because it sounds like a 12-step program. Hi, I’m Simon, and I’m in love with AI.

Pete Wright:
Hi Simon.

Simon Kriss:
Thanks for sharing. So it starts off right at the very top, if you imagine a funnel, with general AI awareness and things like identifying and prioritizing your use cases. What most people unfortunately do is drop down to step 10 of the 12-step process, and step 10 is “investigate your how,” or in other words, go out and talk to your vendors, go through your RFP process, select a product. What I see a lot of organizations doing is jumping straight to, what’s the product? Now, let’s shoehorn in a use case. And that’s partially… Some of this is on the vendors, because you go to a conference and almost every stall at the conference now is some AI vendor, and they’re all saying, “Oh, come with me. We’ll give you a 90-day free trial and we’ll do this and we’ll do this.” And so companies are stepping into that without even knowing whether that’s the right use case, let alone the right tech partner.
And so I always encourage companies to go back and start looking at these use cases and figuring out a way to prioritize them. Whether that’s just stack ranking them against business benefit, whether that’s trading off business benefit against data complexity, whatever that is. Risk has to be in that calculation somewhere, but prioritize those use cases.
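As one concrete way to picture that kind of stack ranking, the sketch below scores each candidate use case on benefit, data complexity, and risk. The 1-to-10 scales, the weights, and the example use cases are all hypothetical, not a formula Simon prescribes.

```python
# Hypothetical stack-ranking sketch: trade business benefit off against
# data complexity and risk. Scales (1-10) and weights are illustrative.
use_cases = [
    {"name": "Internal HR policy bot",    "benefit": 5, "data_complexity": 2, "risk": 2},
    {"name": "Agent knowledge assist",    "benefit": 8, "data_complexity": 6, "risk": 4},
    {"name": "Customer-facing voice bot", "benefit": 9, "data_complexity": 8, "risk": 9},
]

def priority(uc, w_complexity=0.5, w_risk=0.7):
    """Higher is better: benefit minus weighted complexity and risk."""
    return uc["benefit"] - w_complexity * uc["data_complexity"] - w_risk * uc["risk"]

# Rank the portfolio, best first.
for uc in sorted(use_cases, key=priority, reverse=True):
    print(f"{uc['name']}: {priority(uc):.1f}")
```

Notice that with any meaningful risk weighting, the internally facing use case tends to float to the top, which lines up with the advice later in the conversation to start internal before going external.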

Pete Wright:
Do you have a discernment process around figuring out what those use cases are? Because I feel like, where we are on the adoption curve, or the development curve, of AI, leaders may be in a position where they’re asked to define use cases before they know the scope of potential, because the technology is very new.

Simon Kriss:
So back in the time of Socrates, there was, reportedly, a guy called Meno. And Meno was talking to Socrates and saying, “I’m sure I can learn everything I need to know about the world if I just ask you the right question. The problem is I don’t know enough to know what that right question is.” And that’s exactly where we are with AI today. If I went into an organization, particularly to the executive leaders, and said to them, “What do you want to do with AI?” their answer is invariably, “I don’t know. What can AI do?”

Pete Wright:
There you go.

Simon Kriss:
And so you get this chicken-and-egg problem. That’s why I was talking about always starting off with general AI awareness across leadership. Get the leadership together for two hours, four hours, eight hours if you can, and have somebody who knows about this come in and just talk about what AI is, what it isn’t, and what some of the use cases are that we’re already seeing as a high priority. And then get together your most innovative people to look at every area of the business and pull up use cases. That could be as complicated as putting cameras on the front of every city bus so they can start to detect potholes in the road, or as simple as, we need to put a bot in front of our 27 HR policy documents so that new staff who are onboarding can find their information quickly and easily. But you do have to go through and find these.
What I tend to find happens is that you will probably identify your first 15 or 20 use cases, and then once you start down the path, suddenly another 10 emerge, and five over here, and this area of the business comes up with five or 10, and suddenly you’ve got 40, 50, 60 use cases. But it takes the… Use begets use, would be my advice.

Pete Wright:
Okay. So turning our attention back to the 12 steps, I think I hijacked you literally after step one. So let’s keep walking through the narrative.

Simon Kriss:
Right. So as I was saying, you prioritize the use cases. The very next thing you do, after you’ve said, well, this is the use case we’re going to go after first, is to start scrutinizing the data that would be used to power it. Because as I said, it might seem great to put AI in front of your knowledge, but if your knowledge isn’t well-structured, it’s going to take you three months to go and tag all that data, so go and pick another use case. Keep working on that stuff in the background, but go and pick another use case. So you’re going to get a little bit of circular movement between steps two and three. Once you’re there, though, and you know what your use case is, and for any organization starting out, please select one use case and make it internally facing. Don’t go external first.
Once you identify what that use case is, start to articulate and socialize why we are doing this use case. So remove that fear: go out and talk to the organization about, this is where we’re starting the journey, and this is why. Figure out who’s going to do this, identify what a win looks like, then really start to deeply analyze the process that you’re trying to augment, because invariably we are looking for augmentation before automation. Document the what and the why, then start exploring any downside risk or ethical concerns or anything like that. And then go and buy the tech that you’re going to use and move into your proof of concept. So there’s a whole pile of work that needs to go on before you select your technology.

Pete Wright:
It seems like the vast majority of the intellectual work happens before you’ve selected the technology.

Simon Kriss:
Absolutely. Yeah. The tech should just be the enabling piece, not the driver.

Pete Wright:
That said.

Simon Kriss:
Uh-huh.

Pete Wright:
Moving from choosing your technology to making it a part of your infrastructure, and training and adapting the organization to it, has to come with its own set of… Tell me there are steps, sub-steps?

Simon Kriss:
Yeah, so I usually tell them to get through the proof of concept first. I mean, keep communicating through the proof of concept. Let the organization know what’s going on: we tried version one and we found some bugs, so we’re going to version two and version three. Just let them know what’s going on. And then, when you’re ready to operationalize it, traditional change management kicks in. Stakeholder engagement, lots of communication to the organization, all of the training sessions about how to use it and why we would use it, removing all the scaremongering that’s going to happen, the “Oh, this tool’s going to take my job.” All of that stuff. Start to work through, if this tool is going to free up some time, what are we going to do with that time? And what was the original CEO’s message?
Was the original CEO’s message something along the lines of, “We’re going to do AI, but we are not doing it just to downsize. That’s not our key driver. We’re going to look to redeploy people. Yes, will some people’s jobs change or go? That’s highly likely, but that’s not our driver.” Or was there no real message, so what it looks like is that we’re adopting AI just to get rid of people? So once again, it comes back to really strong communication.

Pete Wright:
The HR training and development people listening to the show are breathing a massive sigh of relief, because change management principles, we know. Change management principles, we can wrap our heads around, and that should be a relief. We know how to make change in an organization. We know what to do.

Simon Kriss:
Everything old is new again.

Pete Wright:
Everything old is new again. Just because we’re using the word AI does not mean you don’t know how to do the job.

Simon Kriss:
Absolutely correct.

Pete Wright:
How are we doing at Upland?

Simon Kriss:
Yeah, look, really good. Upland were one of the earliest movers among what I would call the traditional knowledge management tools to adopt AI, to get it on board, and to start to use it. It’s a highly contested space right now. You’ve got the traditional stalwarts that really know their stuff, like Upland, coming up against startups that are kicking off and saying, “Oh yeah, look, we’ve got generative AI, we can do this.” But are they really doing it? Are they just putting a wrapper, as you said, around some old group of PDFs, or are they actually categorizing and breaking up the data? Do they have the governance models in place to make sure that data’s governed safely and properly, and all of those types of things? Upland is right there. The biggest problem that I can see coming for Upland, and it’s the same problem that Microsoft, Salesforce, and everybody else faces, is that there’s technology as designed, but that’s different to technology as used or as implemented.
And so, if people are implementing this stuff the right way, that’s going to be great. Where they don’t implement it well, how is that going to reflect on Upland? Because there are primarily three types of AI, if you want to think about it this way. One is product-embedded AI, which is what Upland have, in spades. They’ve taken AI, they’ve embedded it in the product, it’s there and it’s complete, and the way in which they’ve done it complies with GDPR and HIPAA and all that stuff. The second one is what we call domain-specific AI, and that’s where you are going out to buy an AI that does a very specific job. As an example, that might be an AI that understands in depth the legislation around building houses in the great State of Montana or something.
And so you want something that really just understands that, and a shout-out to all the Montana listeners. And then the third type is your general-purpose AI. So this is your Microsoft Copilot, your ChatGPTs, things like that. The interesting thing that’s going to happen later is we’re going to see an intersection of the general-purpose and the product-embedded. So if somebody’s using the Upland software and the CIO decides to roll out Microsoft Copilot across the organization, what happens where they intersect? Is there a master-slave relationship? Do agents have two different copilot partners that they’re working with? None of the organizations that I’ve seen have hit that wall of needing to figure it out yet, but we can see it coming. I mean, everybody thinks that the general-purpose AI is going to be a panacea for everything, and it’s not. It’s going to be…
Think about it a little bit like seeing your general practitioner. Great if you’ve got cuts and bruises and coughs and colds, but if your three-year-old boy has shoved a crayon right up his nose, you’re going to have to take him to a specialist, and it’s going to be the same way with AI. So applications like Upland’s, having that level of AI in them, are always going to be needed, because they’re doing a specialist job that a generalized tool won’t do anywhere near as well.

Pete Wright:
This, I mean, what are we, two years on since ChatGPT?

Simon Kriss:
Yep.

Pete Wright:
Released? What do the next two years look like? You and I sit down 24 months from now; what’s our conversation going to include?

Simon Kriss:
Yeah, it’s going to be very different. It probably won’t be you and I, it’ll probably be our AI avatars sitting down.

Pete Wright:
Yeah, our avatars are going to talk to each other, that’ll be great.

Simon Kriss:
Sure. Look, I think the next big move, because we’re going to continue to develop this stuff out, is going to be even greater visual AI than what we’re doing today. So more and more things are going to ask for a camera feed to be able to look around the room, to be able to see… In other words, it’s going to start to mimic human behavior a little more. So it’s already listening. We know that Siri listens to us, we know Alexa listens to us, we know these things are listening. Even the OpenAI app itself can do that. Applications like Upland’s can listen to a call in real time as it’s happening to surface the right information. So AI’s already got ears. The next thing AI is going to want is eyes, because that builds that apperceptive mass for it. So if it has eyes and ears, it can do a lot more in the world, and then ultimately it’ll move to robotic fingers and that sort of stuff.
So I think we’ll see a couple of things. One, as I said, I think you’ll see more visual feeds into the AI. Secondly, I think it will start to become more personal for people. A lot of this tech started out as big applications, and it was only when it became personal that people really started to accept it. So it was only when Alexa came into the house that… Initially, there was that, “Oh, I’m not having one of those in my house that’s going to listen to me all the time.” Well, we got past that pretty quickly, and suddenly we discovered that we could say, “Hey, Alexa, add this to my shopping list, and do this for me, and turn down the music.” And that was fantastic. So the same thing is going to start to happen with AI.
In fact, I’m a little drawn back to the movie Her with Joaquin Phoenix from about 10 years ago, and whoever hasn’t watched that recently needs to go and re-watch it; it’ll scare you how close we are. It’s where we each have a personalized AI, and I think this will happen also in the workplace. So your Microsoft Copilot, for example, might be a mid-thirties male with a Colombian accent, and mine is a mid-twenties person with a Chinese accent, and somebody else will want something else, and it’ll start to get really personalized for people. That’s my prediction for two, two and a half years, if it takes that long.

Pete Wright:
When you talk about the next two, two and a half years, one of the things that we see, and I don’t think we can have a healthy conversation about AI without bringing up the ethical considerations and the legislation around it, is that we’re going to have to solve, at some level, the data acquisition and copyright issues. Do you feel our systems are in a good enough place to be able to address these and allow us to move forward? If not, what has to change?

Simon Kriss:
Personally, I’m a believer that the horse has bolted on this one. The genie is way out of the bottle, because we already have these models and they’ve already been trained on this data. It’s probably a legacy from when we invented the internet that it was deemed to be the public domain: anything that you put on the internet and published, unless you explicitly said that it was confidential and not to be used for any purpose, was deemed to be public, like it was in the newspaper. So I think they’re going to struggle with these concepts of copyright and with proving that this output over here directly relates to that input over there. Because otherwise you could say every output of every AI relates back to that one little article I wrote three years ago where I used the word ZA, but no one has a copyright on that. So I think that’s going to struggle.
I think legislation is going to struggle. I’m one of the people who’s a bit vocal about the legislation that’s going around now, the EU’s AI Act, President Biden’s Executive Order, and the work that Canada, the UK, Singapore, Australia, and Japan are doing. They’re trying to legislate around the people who are building this technology, but mostly just around the people who are building high-risk stuff. We all know, though, that people can use low-risk products for nefarious purposes. We know the internet is being used for nefarious purposes every day, and AI is going to be the same. If a human somewhere really wants to be evil, they’re going to be evil. So I look at the legislation at the moment and it’s like they’re legislating one leg of a three-legged chair, and I just worry how effective and efficient it’s going to be.
Ethical concerns are huge, and I think every organization will need to set its own ethical standards. That is the job of the governance body, the company board or the government councillors or whoever that is, not of the executives, because it’s an extension of the risk appetite. If you think about it this way: the ethical standards of a company that provides services to disabled people and disabled workers are going to have to be of a higher level than a paint manufacturer’s, just because the ability to do harm is so much greater in one organization than in another. So what you find ethical and what I find ethical is going to play out into what this company finds ethical and what that company finds ethical.

Pete Wright:
I suppose you and I will have to table the results of that conversation for maybe 10 years down the road?

Simon Kriss:
Yep.

Pete Wright:
Probably going to take us some time to get there but for today, Simon, thank you so much. This has been a fantastic conversation for me. I’m honored to have you here.

Simon Kriss:
No, this has been great, Pete. I’m happy to chat with you, and happy to chat with you again. This has been a lot of fun.

Pete Wright:
Where would you like me to send people to learn more about you and the work that you’re doing?

Simon Kriss:
It’s really simple. They can find me on LinkedIn. I think there’s only one Simon Kriss and if it says anything about AI, that’s probably me. The other option is they can go to my website, which is just simonkriss.ai.

Pete Wright:
Well branded sir, well branded. We will put all the links in the show notes. Thank you everybody for downloading and listening to this show. Thank you for your time and your attention. We’d love to hear what you think, just swipe up in those show notes. Look for that feedback link to send questions to us or any of our past guests and I will do my best to get them answered for you. On behalf of Simon Kriss, I’m Pete Wright and we’ll see you right back here next time on Connected Knowledge.
