Talent Talks with: L&D Thought Leader Donald Taylor | TalentLMS

Duration: 40 minutes

Season 3

Episode 8

AI in L&D: beyond the buzzwords and gimmicks

In this episode, we talk to Donald H. Taylor, one of the most influential voices in global L&D and a long-time authority at the intersection of learning and technology. From the dangers of the “AI efficiency trap” to why L&D teams need to move beyond being a content house, we explore where AI genuinely adds value, where it creates risk, and how learning leaders can stay relevant as expectations shift. A grounded, honest conversation about using AI in L&D with purpose, not hype.

Key takeaways:

Start with business problems, not AI tools. Strategic AI use begins by solving real challenges, not adopting software. Focus your pilots where they can make a tangible difference.


Use AI both tactically and strategically. Quick wins like translation are valuable, but true transformation comes from tying AI to long-term goals and rethinking how L&D delivers value.


Speak the language of the business to win budget. Don’t pitch AI in L&D terms. Frame investment as reducing churn, boosting productivity, or speeding up onboarding—outcomes your CFO cares about.


Pilot with the right partners—and the right scope. Involve cross-functional teams in testing. Their buy-in improves feedback, speeds learning, and spreads ownership beyond L&D.


Apply a “buy, build, or wait” mindset. Not all AI solutions are ready for prime time. Sometimes the smartest move is to pause and revisit once the tech matures.


Prioritize usefulness over polish. Perfect content is less valuable than fast, helpful answers. Meet learners where they are—and give them what they need, fast.


Make critical thinking part of your learning culture. AI tools can sound confident while being completely wrong. Help learners and leaders learn to question and validate what AI produces.


Keep the human layer. AI can support—but not replace—the empathy, coaching, and accountability that drive learning outcomes. Don’t lose the personal touch.


About our guest:

Donald is the founder and lead researcher of the L&D Survey Series. His annual L&D Global Sentiment Survey, started in 2014, explores L&D trends from over 100 countries. With Eglė Vinauskaitė, he has published four reports on AI in L&D since 2023. He has chaired the Learning Technologies Conference in London since 2000 and writes and speaks worldwide about L&D and learning technologies.

He works with London-based VC firm Emerge Venture Partners, and advises several EdTech start-ups. The author of Learning Technologies in the Workplace, Donald is a graduate of Oxford University and the recipient of an honorary doctorate from London’s Middlesex University.


Want more resources on this topic?

TalentLMS blog post: Top 6 AI coaching platforms for corporate training in 2025

The TalentLMS 2026 annual L&D benchmark report

TalentLMS blog post: 13 AI skills to equip your workforce for an AI-driven future

More episodes we think you’ll love

Learning & Development
January 14, 2026
L&D in 2026: Learning debt, AI, and transformation

As work continues to transform, L&D in 2026 is being forced to rethink how it supports people on the job. In this episode, we sit down with David Kelly – longtime industry analyst and influential L&D thinker – to dig into our Annual TalentLMS Benchmark Report.

We unpack why learning debt is rising, why legacy L&D models are breaking under modern work demands, how AI should raise the ceiling of human work, and why skills matter more than job titles.

Go to episode
Learning & Development, People Management
July 17, 2024
Generative AI in L&D and the evolution of eLearning

What does the surge of generative AI mean for the future of L&D? And how will the roles of L&D pros be reimagined in response? Season two of our podcast kicks off with Director of Paradox Learning, AI strategist, and eLearning expert, Stella Lee. Together with Lee, we’ll unpack why picking the right AI tools is like shopping for a car and reveal the dos and don’ts of upskilling an AI-literate workforce.

Go to episode
Learning & Development
September 3, 2025
Lean L&D: Maximum impact, minimum resources

When resources are tight but the demand for learning is high, strategy matters more than spend. In this episode, we’re joined by Michelle Parry-Slater, author of The Learning and Development Handbook and Director at Kairos modern learning.

We explore how L&D teams can build learning programs that actually drive business results. And Michelle shares how to create real impact even when budgets are lean.

Go to episode


Full Episode Transcript

Host: [00:00:00] Welcome to Talent Talks, the L&D podcast about the future of work and the talent driving it forward. The world of work is changing fast, from AI reshaping strategies to new definitions of success, and the push for people-first mindsets. Learning leaders are being asked to do more, do it better, and do it faster than ever before.

Together, let’s learn, relearn, and sometimes even unlearn what L&D can look like in today’s world. I’m your host, Gina Lionatos, and this is Talent Talks.

Talent Talks is brought to you by TalentLMS, the easy-to-use training platform that delivers real business results from day one. Learn more at talentlms.com. [00:01:00]

On today’s episode.

Donald Taylor: They’re going to use it anyway. It doesn’t matter what HQ says. If somebody thinks they can do the job faster, of course they’re gonna turn on that little friend in the corner that’s gonna help them do the job more quickly. I know you don’t want people using it. I’ve got news for you. They are using it. Let us help ’em use it right.

Host: How can we make the most of AI when it comes to the way we organize, innovate, and solve problems in L&D? To help us see past the hype and provide some well-needed clarity in this supercharged space is Donald Taylor, an authority in the EdTech industry with decades of experience at the intersection of technology and learning.

Together, we’ll break down the opportunities and challenges that lie ahead for L&D professionals today. Stay with us. [00:02:00]

Donald, welcome to Talent Talks. I’m absolutely thrilled to have you with us today.

Donald Taylor: It’s a delight to be here. Thank you so much for having me.

Host: Donald, we’re going to talk all things AI and L&D, and I wanted to start with more of a big picture view. So, AI tools are giving us an opportunity to really reimagine how we go about learning and development.

But of course, with that promise of innovation comes a lot of noise and a lot of risk as well. In the past, you’ve highlighted that companies are often stuck between anticipation and delivery. With that in mind, from your perspective and what you’ve seen, what are the biggest mistakes that companies tend to make when it comes to delivering and effectively using AI in their learning workflows?

Donald Taylor: The biggest mistake I think people make is imagining that it’s enough to adopt some tools. So I’ve been doing this [00:03:00] research with my co-writer Eglė Vinauskaitė for the past two years, looking at what happens when people use AI. And there’s been this sort of idea that there are maturity models.

You start using a tool. You go on, therefore, to understand AI and to use it more widely in the organization for learning and development. What we found instead is that people are using AI in a sophisticated way as a result of other things that they do. And as a result of that, they take a sophisticated approach to using AI.

And there’s a risk, to come back to your question, that if you just look at using AI tools and imagine that’s the beginning of an escalator to take you up to better use of AI, you just get trapped with using a handful of tools for efficiency measures rather than really exploiting everything you can do.

Host: So efficiency, often being a little bit of a, a [00:04:00] comfort blanket actually, or, or a trap.

Donald Taylor: Well, that’s a good way of putting it. That’s a good way of putting it. Yeah, I, I think, I think comfort blanket’s a good way of putting it. This idea that, well, I need to do something with AI. I know, I will make my email more efficient. Whereas the comfort of doing the old thing in a slightly better way doesn’t lead you on to being able to do new things in a new way.

And that’s really where the value of AI is.

Host: Understood. From the case studies you’ve seen, you’ve been in the game a long time, and I know you’ve really kind of really been getting across the implications of AI for L&D for quite a number of years now. In the research that you’ve done, have you come across any case studies, perhaps over the last couple of years, or examples, good or bad, that kind of stand out as great learnings or warning signs?

Donald Taylor: There are lots of good learnings, and in the most recent report, which went out right at the end of September 2025, [00:05:00] we found that there were some super case studies of people really doing things in a new way. And there are two ways you can do this. You can do new things tactically, and you can do new things strategically.

We can probably talk about the strategic big picture stuff in a minute, but on the tactical side, there are lots of things that people can do to use AI in a way that will produce business value pretty quickly, not necessarily majorly, but pretty quickly. And I think of one, which was actually from the previous report: HSBC Bank in the UK, where we found they were using AI to train their customer support agents on how to deal with customers by getting tens of thousands of hours of transcripts of conversations with customers, putting them through an AI tool, identifying what good and bad looked like, and then going on to use that to train people before they spoke to customers.

The result of this, and I’m boiling this down, but the result of it was a 10% [00:06:00] increase in customer satisfaction scores. Whether the customer relations person was based in the UK or offshore, there was a 10% uptick, which is a huge jump in customer sat figures. 

And of course, it also meant you could train a lot more people faster. So the process was more people faster. The result was happier customers. But another result, which is interesting, is that the customer liaison people themselves felt more confident in what they were doing.

They felt it was like going on an air flight simulator before you actually flew a plane, rather than being entrusted with the controls without having any training.

Host: I mean, let’s be honest, a lot of L&D managers are overwhelmed in general. They already feel under-resourced and under pressure, and they’re now being told to do AI, so to speak.

Where can they realistically start? What [00:07:00] might be the first three steps or the first few kind of small pilots they can potentially run to start testing and learning in an efficient, but also a meaningful way?

Donald Taylor: Okay, so before you start doing any pilots, I think the first thing is to get yourself familiar with what AI means as a whole and what generative AI, which is usually what we’re talking about here, means for L&D.

And I would say, before you jump into piloting anything, I would absolutely be following some key people on the internet: Dr. Philippa Hardman, Ross Stevenson, and Josh Cavalier, and probably Trish Uhl as well. These are three, four people who are worth following. I think while using a tool for efficiency doesn’t automatically lead to using it for improved L&D, having some sense of what you’re doing with it is vital.

So I would absolutely use some basic tools to begin with just to [00:08:00] handle your email, to produce content in a very simple way, but without telling yourself that you’re doing AI. And actually in the very first report we brought out one year after the launch of ChatGPT, we had a whole load of stuff in the back of that to help people get started.

Okay. So once you start doing that, then what should you do with piloting? I don’t know, because that’s the wrong place to start. The place to start is: what’s the business problem that needs fixing, and who’s the person you’re dealing with internally? Because some people will be more amenable, some people less amenable to a) working with L&D and b) working with AI.

So you’ve got to find the right person, find the right person, deal with them completely honestly, and then go to town. Now, I suggest that probably you are looking at things which are easy wins. That would include things like translation and localization, which everybody that we’re talking [00:09:00] to is finding an easy, quick win.

Right? And I think the production and maintenance of existing courses is probably the best place to start. Put it this way: imagine that you had a thousand eager, smart, but slightly socially awkward interns working for you that need quite a lot of control. What would you get ’em to do to solve a business problem?

Host: Absolutely love that. And just on your call-out around translation slash localization, I know from a TalentLMS perspective, that was, for us, one of the first key AI-enabled features we made a point of putting into the course creation process, because there is such a need for the localization piece. But it’s something that, if it were to be done by people, is very resource-heavy and often gets deprioritized within businesses.

So there are great examples of how we can utilize gen AI to help us, yeah, have those quick wins, but also very strategically [00:10:00] important wins for businesses that just haven’t had the manpower to bring them to reality. To the point around efficiency, perhaps it’s not the thing that’s going to get us the long-term strategic successes in our L&D programs, but features like that are allowing teams to do more with less. And we know L&D pros are usually a team of one, max a team of two or three if they’re really lucky. These kinds of quick-win features are getting a huge response because they’re answering a real need, and that’s the beauty of it.

Donald Taylor: Can I just pick up on one point about where you start. Now, I appreciate you are absolutely right when you say most L&D departments are a department of one, two, or three.

Ericsson is a very different business. It’s more than a hundred thousand people. It’s an engineering company. They’re all very smart, but they have a very interesting approach to looking at how they’re going to use AI in L&D. And I spoke to Pauline Rebourgeon there, who’s a French woman living in Dublin, [00:11:00] who runs something called Learning Next within the organization, which is a combination of the L&D team and a bunch of people who are just interested in what you can do with tools.

And they have a scope of work: we’re interested in using AI to help us in these four areas. And then they go out, and they talk to the people in this network, which is about 400 people. A small number, fewer than 10%, are L&D; the rest are just engineers and marketing people who are interested.

And they say, look, we’d like you to go out and just test this thing. We think we could use AI to help us with X, say translation. Can you go away, look at it, come back in a couple of weeks, tell us what you think? What happens then is that, firstly, the testing is done across a wider group of people, and it’s far faster than anything that L&D itself could handle. Secondly, you get people from outside the L&D department being involved in the process, so they’re enthusiastic about it, and they feel a sense of ownership. And things happen pretty fast.

And so, of that group of people, let’s [00:12:00] say maybe five or six, maybe 10, are involved in the translation project. They go away, they talk to each other for two weeks, maybe three of them are L&D, and they come back with a report. The report then says, right, here’s what we found. And then the central L&D department says, there are a few things we can do here.

We can decide to go ahead and buy something to do this. We can go ahead and try to build it ourselves. That’s quite usual: buy or build. But the third option they have, and I think this is quite an important one to think about, is perhaps we can just wait. Because things are moving so fast, it looks like something’s on the verge of being useful.

But if we jump too soon, it’s gonna be a pain. Let’s just wait and try it again in three months’ time. And I think that’s the idea: rather than just saying to everybody, try stuff out, you limit yourself to, we wanna achieve these four things. You have a group of people that help you, they come together, and you can experiment quite quickly.

I do believe that small to medium-sized businesses could shrink that down, with an L&D team of one and a few people in the business who are interested in finding [00:13:00] out more, and spread the load a bit. I think that would be something that’s quite doable.

Host: Yeah, I love that. Okay. Buy, build, or wait. 

Let’s change track a little bit and talk a little bit about the learner experience. Uh, Donald, so you yourself, I believe, have said before that AI will dramatically affect the way that we share information. We’ve all seen the rise of Google and social media and the way that very quickly conditioned people to expect fast feedback, instant answers.

So what do you think AI is going to do in this space? How do you see technology changing what the next generation of employees will expect from a learning experience?

Donald Taylor: Gina, you say we’ve all seen the rise of Google. I can’t believe that you are old enough to have seen the rise of Google and to have been around before it.

I remember that. I remember the world before the worldwide web. And moving from a world where things were all on paper [00:14:00] to where you’d access it on a machine, to where you could ask, you could have the sum of human knowledge almost in your hand, was an extraordinary movement. Now, the point is not just that we can access it, but also that we can ask questions about it and query it.

I think that there is a generational divide here. I’m not a big one for generational divides. I think it’s very dangerous to fall into the trap. But younger people, my son’s 27, my daughter’s 23, seem to be very astute at understanding when something’s believable, when it’s not believable. I think older people seem to fall a bit into the trap of saying, oh, this is magic and of course it’s right. 

Because probably we’re brought up used to the idea of authority being vested in something that you see and read. I think people who’ve been brought up in the Google slash TikTok world are more likely to be suitably wary. So I think that there is something to be done, first off, in just helping everybody [00:15:00] understand that just because it’s written down, just ’cause it’s a picture, doesn’t mean you can believe it.

And my own example of this was, I’m very interested in the idea of utopias and visions of the future. And when I was in Antwerp earlier this year, I went to see a plaque commemorating Thomas More, who wrote ‘Utopia’, which was in the ground outside the cathedral. And I took a photograph of the plaque, and I said to, I think it was Perplexity or ChatGPT, look, here’s a photograph, it’s in several languages, of a plaque in the ground in Europe. Could you just translate it for me? And it came back saying, yep, I can identify that: it’s a plaque that was put in the ground to commemorate the burning of books in Bebelplatz in Berlin in 1936. Completely and utterly wrong, completely wrong. And I said to him, hang on a second, how confident are you about that answer? And he said, oh, I’m 99% confident.

Now, the problem is that, of course, all it’s doing is reflecting the input that [00:16:00] it’s got from the internet. Almost certainly what’s happened is the thing which has been photographed most as a plaque in the ground has been that particular striking and important plaque in the ground in Berlin.

Almost nobody’s taken that other photograph that I took. And so it said, a plaque in the ground, it must be that. And then it presents its answer with tremendous authority and self-assurance. And I think, coming back to your question finally, Gina, I think the issue is we need to be giving people something they probably haven’t learned at school, which is the willingness to question what they read.

And it’s so much more comforting when you don’t have to do that. But I think we have to be drilling that into everybody. And I could almost see people being given, a bit like you do phishing courses, right? Here’s an email. Would you click that? No, you wouldn’t. Here are two [00:17:00] versions of something. Which do you think is more believable?

I think it’s almost got to that level where we need to be giving people that as a fundamental way of dealing with information.

Host: So many examples coming to mind. Including a bunch of videos shared by my mother. Clearly AI-generated videos that, uh,…

Donald Taylor: Oh my goodness, 

Host: …that to your point about the older generation being a little less wary, I guess, let’s say, of things that may have traditionally looked real and authoritative and not knowing how to read between the lines. But that’s a whole other, that’s a whole other topic.

What do you think? I mean, from a learner engagement, but also a learner experience perspective for L&D programs, it’s very clear that AI-driven tools are impacting things like the expectations around instant access to relevant learning content, or being able to, you know, ask a question and get an answer immediately.

What are your thoughts there? How important is it that our L&D programs start to really [00:18:00] make sure they’re fine-tuned to those learner expectations? And how might we do that? 

Donald Taylor: I think there’s the question of the expectation: people are, because of their commercial experience, much more likely to want things and expect things immediately.

But I think also it goes beyond that. So I’ll come back to your question “How can we do it?” in a second. I think it goes beyond it because people now don’t just go to, uh, a generative AI tool and take an answer, they have a conversation with it. And that’s a very different way of interacting with technology from what we’ve had in the past, so let’s just park that for a minute and come back to it in a second.

How can we make sure that people are getting something that is close to their commercial experience? I think the L&D departments that I’ve been familiar with have often had a very high standard of content production, and I think we have to throw that out a bit and not let the perfect be the enemy of the good.

Good enough now is infinitely better than perfect tomorrow. So let’s get stuff out that [00:19:00] works for people. It has to solve a business need, and we know that when people come online to learn something, they just want to get that, and they don’t wanna have to jump through hoops, and they don’t really care if it’s perfect.

So that’s the first thing, I think: focus on utility rather than on form. Then how do you make sure it’s engaging? Where possible, go to where the people are. If you’re already using an LMS and people are using the LMS, don’t come in with a separate new tool and ask people to use that as well.

Host: Now, you’ve long advocated for the human role in coaching and training and learning and development, which we absolutely align with.

You’ve also spoken about context setting, sense making. We’re already seeing companies like Walmart and MasterCard replacing traditional training with AI chatbots. You know, you previously mentioned the example with HSBC as well. Let’s kind of look at it a bit, perhaps more broadly or ethically speaking.[00:20:00] 

Do you think we need to draw a distinct line between what’s better with AI, and what still needs that human touch? I think we all know that we do need to draw the line. I think what can be a little stifling is knowing how to draw that line and where to draw that line. Do you have any advice on how to do this?

Donald Taylor: When I was a kid, I took two years off after leaving school, and I worked for one year as a computer programmer, and I learned Arabic, and the second year I went traveling. I learned Arabic one hour a week with a woman who lived in my local town who was really good. 

Now, I could quite easily have done that with cassettes and a book in those days. These days I could do it with an app. Right. Why do you get a person involved in doing that? You get a person involved because they provide something that a machine can’t, which is accountability and support. And the [00:21:00] accountability bit for learning is really important for most people. If they’re learning something, and it’s not a quick fix, not a performance support thing, but, say, building something up over time, like a language, you need accountability.

You need to know I’m going back in next week, and I’d better bloody well have done my homework. And it worked. By the way, I can’t remember any of the Arabic now, because that was 40-plus years ago, but it really did work.

What do human beings do? They provide that human touch of accountability and support. Now, to an extent, we can replicate that in coaching tools. But ultimately, the best way to do it is with a person. And I can absolutely see, in the future, people using a combination of human beings and coaching tools to achieve their goals.

Because your coaching tool can absolutely be there 24/7. The human being, the person you’re gonna get back to next week and say, I’m really sorry I didn’t do it this week, and you better have a good excuse for [00:22:00] it, is the person that’s gonna drive you to spend that extra 15, 20 minutes doing this thing. So I think, I think that’s one thing to hold onto is what do human beings do really well, and they have this empathy, or we have empathy with other humans, that enables us to have that combination of motivation and, uh, support and accountability that makes a good learning experience. That’s what the best teachers do.

Host: Absolutely. Donald, last year I caught Josh Bersin at the Learning Technologies Conference, uh, in London, which was wonderful. Um, his stance was that all companies will, or at least should, start allowing for an AI investment budget. Bersin was saying that the L&D professionals who secure AI budgets are the ones who will be paving the way for what comes next in L&D, which makes perfect sense.

And I’ve also heard you say, Donald, that AI is not just a faster content maker, but a catalyst [00:23:00] for really rethinking what L&D should be used for in a modern organization. I’d love to hear a bit more around that. From your perspective, where do you think L&D’s role in an organization should ideally be heading?

Donald Taylor: It’s a very big question, Gina. Can we do a separate podcast?

Host: You’re not wrong. You’re not wrong about that.

Donald Taylor: I can’t cover it all in five minutes. I’ll do my best. Josh Bersin’s doing some great stuff in this field. He’s really come up with some good thinking about it. Absolutely worth following. As I said earlier, I think there are two ways we can approach this.

I think you can do it tactically, you can do it strategically. Working tactically may be right for getting immediate gains and showing you can do stuff. You have to, I think, be working towards a strategic goal. And I think if you don’t, you run the risk of being left behind. 

There is a, there is one exception here actually, which is if you’ve got a company which is madly keen to be doing the latest thing. Then you can absolutely go out and say, hey, we’re gonna [00:23:40] use AI. We need some money for it. But if you do that, you better be sure you deliver. I don’t think you can get a budget by saying to your organization, with some exceptions, we wanna go out and become a skills-driven organization, give us some money for it.

I think you have to say, we can help reduce the amount of time it takes to fill projects with the right people if you give us the money. Same end, expressed differently. And you’re expressing it in terms of a business goal. We have case studies where organizations have done exactly this to identify the skills people have and get them placed fast on [00:24:40] projects, saving themselves seven-figure sums by doing so.

So, I’m gonna talk now about where we could be going, but I wouldn’t use this language talking to the business. One area you can go to is becoming what Eglė and I call a skills authority. L&D becomes the part of the organization that really understands skills, treats skills as a critical business asset, and develops them in line with business needs.

And that’s suitable, again, to come back to your word, this is all about context, for organizations like a global consultancy. You’ve got people spread around; you need to have everybody knowing the right stuff and being able to deliver against that knowledge. And so, a skills-based organization with L&D as a skills authority makes perfect sense.

If you’ve got a very distributed organization with people doing stuff in different areas, you may say what we need to be is an enablement partner. That’s L&D language for: we’re gonna find out where good stuff happens in the organization, [00:25:40] surface it, and spread it. And you could explain that to your organization as just being, we are gonna be the best we can be; we’re gonna share the best knowledge and tips about productivity across the organization. An enablement partner means, say, you’re a hotel chain. You’ve got somebody in Sydney. They discover a better check-in technique. You find a way to surface that and share it across the organization. Suddenly everybody from Buenos Aires to Boston is using the same technique. Very different from a skills authority. It’s much more about sharing good knowledge that’s already in the business.

And the third potential strategic destination for L&D that we’ve discovered, and we have discovered it, this is what people are actually doing, we’re not making this up, is what we call the adaptation engine, where L&D folds into a part of the organization which is then concerned solely with improving processes and improving efficiency across the organization.

The classic case study we [00:26:40] have for this: in one of our case studies there was high turnover in one part of the organization. L&D would normally go in and say, we need to give the managers a training course to help them understand their people better. In fact, the department which L&D was part of went in to look at the high turnover and discovered that people didn’t feel they were getting what they’d been promised as employees, and so they were leaving.

And so what started as a potential training issue became actually a matter of changing the employee value proposition, changing how people were sold the idea of working in the business. That involved a lot of different parts, one of which was training, but other bits as well: personnel, talent, a little bit of marketing. And that then solved the problem, reduced the churn of people in that area, and as a result saved the business money.

So, three different areas. But again, if you’re going for the budget, you don’t express it in [00:27:40] terms of a skills authority, enablement partner, or adaptation engine. You’re talking about solving business problems. Always. That’s where you start. Long answer, Gina, but look, shorter than doing an entire separate podcast.

Host: It’s such a big topic. I mean, there’s so much to uncover there, and it’s such an ongoing conversation as well, because we know this is an ever-evolving space.

An uncomfortable question, maybe: if AI is letting us design course content faster, for example, and at scale, we’re able to translate, we’re able to do so much more, do you see that perhaps accelerating a shrinking of L&D budgets rather than expanding our influence?

Donald Taylor: Yes and no. 

Yes, if L&D is seen as being the content house. Why on earth would you spend more money on it now, when you can do it cheaper? And that’s why L&D has to shift away from being focused on content. [00:28:40] And by the way, it’s not just L&D creating content now. Anybody in the organization is creating it for all sorts of things. If they think they can get a bit of content that will help people in their team do something faster, they’ll create it themselves rather than going through the whole process of going to the L&D department.

If all L&D is doing is focusing on content, yes, the budgets will shrink. And guess what? We’re already seeing that. It’s partly down to economic circumstances, but partly also this feeling that, you know what, that video that used to cost $20,000, we can do it for a hundred dollars now, so why spend more?

So the answer is, yeah, you can get more money, not less money, in L&D. You can be more strategic, and some L&D departments are doing that, if you shift your focus away from content.

Host: Yeah, absolutely. Now, we recently rolled out company-wide AI competency training. Our R&D and product teams have been using AI for quite a while to enhance the [00:29:40] TalentLMS platform and the user experience.

But the entire team has now been better equipped to use AI to work smarter and make the most of our time. For me, it was such an eye-opening experience, just that pure demystification and learning: these are the 1, 2, 3, 4, 5 steps you need to take to set up your GPT to work with you and for you, and not in a way that’s actually going to create more overhead for you down the track.

So not just understanding the tools that are available, but actually then figuring out, to your point as well, which ones make the most sense for each role or each deliverable or each priority. So I think that demystifying AI, as much as one humanly can, is a really important first step to driving adoption across a business. And this often starts at the C level, at the high exec level of a business, too, right?

Donald Taylor: Yep. I would add something else there: I think there is a [00:30:40] need to stick to the purpose on something like that. Because when you take people away from fee-generating activity to spend time on learning, there is always the risk that, oh, you’ve got a tough month, a tough quarter, we’re gonna take people off it. Everyone’s gotta go all hands to the pump and get something done.

There is a risk that maybe you do that once, but then it becomes the pattern of behavior, and the learning you’ve set your heart on falls by the wayside. That can’t happen. The best organizations I’ve seen have been dedicated, top-down, to making this growth of AI literacy happen, regardless of the fact that it does have a short-term impact on productivity.

Because long-term, as you’ve said, it’s worth it.

Host: Absolutely. And I think one of the challenges, and one of the main criticisms of AI that’s stuck around for quite a while now, is that while it may certainly increase efficiencies, it [00:31:40] obviously has the potential to cause a slump in critical thinking and, potentially, creativity.

A recent MIT study revealed that the use of gen AI may stunt our brain function, and I know that in the past you’ve warned, quite rightly, that L&D can’t outsource judgment. Surely there is quite a responsibility there, both on companies and on the professionals and employees within businesses that are integrating AI.

Do you think there’s a way we can take the potential negative side effects of the tech into account? You’ve already pointed to a few strategies, so maybe this is where we can answer it a bit more holistically. How do you think we can tread that line on just how much we’re offloading to AI, to avoid ending up with, you know, there’s now a term for it, AI workslop?

Donald Taylor: Yeah. Back at the beginning of 2025, I coined the term “beige wave” to try and sum up this idea of the huge amount of really mediocre stuff we [00:32:40] could expect to see in 2025. And yeah, it did happen. How do we overcome it?

I think, just as you have evangelists for technology in organizations, it’s important to have someone, not a skeptic, but, if you like, the AI conscience or the critical-thinking conscience of the company, whose job it is to make sure that we are using the tools well and that we are not making our people suffer from brain rot.

Now, as I say, I’ve seen people use AI, generative AI, extremely effectively, and it has enhanced their thinking. So I don’t think it’s an automatic result of using it that you’re going to just get lazy. But I think people need to learn the ways of working with AI that can be more effective and build your brain, and that needs to be spread across the organization. And this AI conscience could [00:33:40] be the person who checks across the organization, who a whistleblower can go to and say, hey, look, I discovered something that’s nonsense, and it’s taken seriously.

Host: Interestingly, a recent survey found that most executives echo this idea and this concern that gen AI will inhibit learning rather than enhance it.

It was around 58% who thought that gen AI will inhibit learning. Is this a trend you’re seeing, this general reluctance from the C-suite? You’ve actually touched on some quite useful pointers earlier in our conversation that speak to this. But how might you counsel an L&D pro who’s looking to reframe the conversation when the C-suite is skeptical about AI’s impact on learning quality?

Donald Taylor: I think it comes back to: let’s find a business case. So maybe there is a concern about people using AI generally. Right. What we say to the executive is, look, [00:34:40] maybe using AI generally is a dangerous path to go down, and we should be constraining its use. That’s fine. Let L&D use it in areas where we can be very specific and targeted with it. In the rest of the organization, you can restrict its use. Don’t allow people generally to use ChatGPT for projects.

And then I would find, and this is always the way to go, I would find one part of the organization where there’s a problem and there’s a manager you can work with, and you can help that manager solve a particular business problem using AI, and perhaps their team can use AI in a way that enables them to be more effective.

And then you can go back, and it’s a proof point, ’cause words are never enough. Go back with a proof point and say, hey, look, I know you think it gives you brain rot, but guess what? This team is using ChatGPT to produce their marketing content 10 times faster. That saves us this amount of money. You won’t persuade everybody, you’ll persuade some people, but you always have to have a case in hand. And that always starts with somebody in the [00:35:40] organization who gets it, who thinks you can solve a problem for them. And so these things always come back to: what is your network of relationships like in the business?

It sounds like a really weird way of approaching AI, but with any technological deployment in an organization, success will depend on your network and your relationships in the business.

Host: This is another great example of humans and technology working together. There are just some parts of the job that AI will never be able to do, and the kinds of conversations you’ve just mentioned are one of those examples. I also think businesses need to understand the real risk of not educating their people on how to use generative AI appropriately and accurately, in order to avoid those situations where some people are going to use it anyway, and if they’re not using it correctly, the fallout can be huge.

Donald Taylor: Gina, that’s the point. They’re going to use it anyway. It doesn’t matter what HQ says. If somebody thinks they can do the job [00:36:40] faster, of course they’re gonna turn on that little friend in the corner that’s gonna help them do the job more quickly. Help ’em use it. And I think, actually, Gina, you’ve answered your own question.

That’s a really good response to come back to your HQ, your board with, and say, I know you don’t want people using it, but I’ve got news for you: they are using it. Let us help ’em use it right.

Host: Absolutely. We are close to wrapping up our conversation here, Donald, sadly. We’ve covered a lot of big ideas, a lot of things to consider.

I have personally found it very valuable. I wanted to bring it back into focus now and ask a simple, maybe personal, question from your perspective. What do you think is the most surprising thing that AI has taught you about how we learn, or about the way we could be helping others learn through our training programs?

Donald Taylor: That’s a really good question. I think the most surprising thing I’ve taken out of this experience of the last three years [00:37:40] has been this: what I have assumed, and I think many people have assumed, to be the voice of authority, written, in my case, in English, but it could be any language, well-written English that forms and states an opinion and backs it up with proof points and argument, may not be a mark of intelligence at all, but simply a regurgitation of what other people have written.

And it may actually be that when we are writing stuff, that’s all we’re ever doing most of the time, just regurgitating. But it’s also made me think how precious it is, then, to be able to learn and to think and to write well in a way that is not generated by AI, but is generated by humanity.

And for me, there are so many ways in which you can see when something’s authentic and when it’s [00:38:40] not. And the result of this, ’cause it’s not just me thinking this, lots of people are thinking this, is what I believe is called an analog renaissance. Just as my kids now love playing vinyl rather than downloading stuff, there’s this sense that the things that are real and produced by human beings now have far more value.

And that goes from a handwritten note right the way through to the face-to-face conversation. But it all comes down to being able to show that you are producing something yourself, in a human way, which comes back to, I think, one of the themes of our conversation: AI is great, but ultimately we’re all humans.

Host: Absolutely. Donald, I can’t thank you enough for being with us today. I have thoroughly enjoyed this conversation. Thank you for being part of Talent Talks.

Donald Taylor: It has been great to chat, Gina. Looking forward to meeting when you’re next at Learning Technologies in London.

Host: Thanks for [00:39:40] tuning in. You can find Talent Talks on all podcast platforms. Subscribe now so you don’t miss an episode.

Talent Talks is brought to you by TalentLMS, the easy-to-use training platform that delivers real business results from day one. Learn more at talentlms.com.
