Episode 6: Preparing for the AI-powered workplace

Duration: 40min

About our guest:

Ronald Ashri

A technologist, entrepreneur and author, Ronald holds a PhD in Artificial Intelligence from Southampton University. He is currently the co-founder and CTPO at OpenDialog AI, leading the design and engineering teams creating a radically different conversational AI platform for regulated industries. 

Ronald also co-founded a conversational AI consultancy that was acquired in 2019. He is the author of “The AI-Powered Workplace: How Artificial Intelligence, Data, and Messaging Platforms Are Defining the Future of Work”.


What’s on the horizon beyond ChatGPT? How can you prepare for AI-led digital transformation? We sit down with Ronald Ashri to discuss everything from employee privacy, to rethinking how teams are trained, to the new AI-related roles that are emerging (prompt engineers?!).

Ronald’s the man behind OpenDialog AI and the prophetic 2019 book ‘The AI-Powered Workplace’, published just before the explosive evolution we’ve seen in artificial intelligence. Join us as we skip the speculation and get stuck into the concrete considerations organizations should take into account when adopting new AI tools.

Key takeaways:

  • There’s a need for basic training on AI for everyone. It should focus on demystifying AI and eliminating anthropomorphization. It’s crucial to understand AI as a sophisticated machine devoid of consciousness. This will help foster a realistic perspective and a necessary level of distrust for effective and vigilant use.

  • AI can help in automating and boosting engagement during training. Specifically, it can help generate training material from existing content, and even build virtual assistants to instruct learners and newcomers. But it’s crucial to manage, track, and improve training courses over time.

  • AI taking over aspects of our jobs is inevitable. But it’s essential to accept and adapt to this changing landscape. To succeed in this, it’s crucial to focus on developing skills that AI cannot easily replace. For example, soft skills remain valuable in the evolving job market.

  • There’s optimism about the positive impact of technological advancement. Such developments can lead to a reduction in work hours, allowing more focus on human interactions. Plus, they bring various new job opportunities, like machine learning engineers and conversation designers. However, it’s necessary to set realistic expectations due to AI’s dynamic nature, which requires continuous monitoring and improvement. 

Want more resources on this topic?

Is AI taking over jobs? It’s time to build more meaningful careers


Preparing for the future: How to evaluate and train employees on using AI tools


TalentLMS Research: Skills for success in the AI-driven future


More episodes we think you’ll love


February 28, 2024 • 25 min.

People Management | Learning & Development

Unlocking Gen Z’s full potential in the workplace

What are the biggest truths and misconceptions about Gen Z in the workforce? How can managers adapt their leadership style to play to the strengths and weaknesses of the next generation? We talk to Forbes contributor and generational expert, Mark Perna, to examine how businesses can make the most out of a multi-generational workforce, why Gen Z is the ‘benchmark’ generation, and how to attract and retain the talent of tomorrow.


February 14, 2024 • 34 min.

People Management | Learning & Development

Power Skills to Future-proof Your Teams

What are the 3 power skills every professional needs for the future? How do the human skills employees build in their personal lives benefit the workplace? Educational expert and author Dr. Michelle Weise explains how rapid technological innovation is demanding a change in the way we approach learning and work.


November 22, 2023 • 34 min.

People Management | Learning & Development

Navigating non-linear career paths

What can a jack of many trades bring to the table? Join us as we explore why ‘squiggly line’ careers are great news for both employees and employers. Alongside Rusty Rueff, who boasts experience under Obama, at PepsiCo, and as a Glassdoor board director, we’re explaining what companies stand to gain from letting employees grow in a non-linear direction, instead of simply waving goodbye. Plus: Why matching skills to roles, not past job titles, is your best hiring strategy.



Full Episode Transcript

Host: We’re putting the wild speculations aside and going beyond the buzz surrounding artificial intelligence to see how workplaces can ensure they’re not left behind. Joining us is the co-founder and chief technology officer of OpenDialog AI, Ronald Ashri. He’s been in the industry for almost 20 years now, and in 2019 authored The AI-Powered Workplace, a handbook providing clear steps for business leaders looking to be at the forefront of this digital transformation. [00:01:02]

We’ll be discussing how artificial intelligence is already streamlining processes across a number of industries and how businesses can evaluate what tools are right for them. And, of course, much, much more. Stay with us. [00:01:34] 

Ronald, thank you so much for making the time to be with us today. It’s so great to have you. How are you doing? [00:01:58]

Ronald Ashri: Thank you for having me. I’m doing great. [00:02:05] 

Host: I’d love to kick us off with one of the things that you’ve done. You wrote The AI-Powered Workplace, and you did that in 2019, and since then, we’ve seen major changes in generative AI. Briefly, could you tell us what has happened in the modern workplace in this short amount of time? And in what kind of ways have we seen AI develop and continue to be integrated into the workplace? [00:02:07]

Ronald Ashri: And you’re asking for this briefly. So, I think, if I reflect back to 2019, it was already obvious that AI was going to have this huge impact, and there were all these possibilities out there. It took a bit more work to actually realize them, and a bit more expertise was required. The big change happened probably in the summer of 2022, and then with the release of ChatGPT in November 2022. That was a complete step change in terms of what you could do and how easily you could do it. So, to go back to your question, from 2019 to 2022, businesses were seeing that, yes, this is important. [00:02:33]

There’s a bunch of things that we need to do. Let’s get started. Everyone was with the mindset of “this is happening, we should get onto it at some point.” Right? And then fast forward to the end of 2022, and now it’s, “Oh, it’s happening, and we’re not even involved,” right? Because our people are going directly to ChatGPT and having conversations and solving their problems. And it’s almost, how do we gain control of this again and do it in a more structured manner? It’s a huge change. [00:03:27]

Host: Is it something that you sort of saw coming in that amount of time from 2019 to 2023? [00:04:04] 

Ronald Ashri: I would love to say that, yes, absolutely, I saw it coming, but I didn’t. I was not expecting that pace of improvement from the generative AI technologies. In the summer of 2022, we were doing a series of experiments within my own company around how we could use generative AI within the context of conversational assistants and so on. And our conclusions were: there’s something we can do here, but there are significant issues, so we’re going to have to tread carefully. GPT-3.5 as a large language model (these are the engines, if you will, that power a lot of these things) was a significant change to that.

It’s like, okay, not all of them, but a large part of the issues we were concerned with, in terms of accuracy, in terms of its ability to synthesize data or analyze data and so on, have gone away. And then GPT-4 was even better than that. And I’ve talked to quite a few people in the industry. It has taken a lot of people by surprise. [00:05:02]

In the book, the last chapter is a little exercise of looking into the future, right? It’s a day in the life in 2035, if I’m not mistaken, and it describes a bunch of things. I’m talking about a four-day workweek where you start a bit later and you finish a bit earlier. And when you go into work, you get a system that gives you a summary of what has happened and automatically connects you to a bunch of things. And it felt that I was pushing the limits to say this will happen in the 2030s at some point. Now, I think if it doesn’t happen by then, I will be very surprised. [00:05:24]

Host: And I mean, we’re already talking about a lot of these issues. So who knows, four-day weeks, hopefully, will be closer than we think. You also talked about your company. So, for those listeners of ours who might not know, you are the co-founder of OpenDialog AI. Could you give us a brief summary of what the company is, and just give us an idea or flavor of the different industries it works across? [00:06:08]

Ronald Ashri: Yeah, of course. OpenDialog AI helps enterprises deploy conversational applications, as I call them. These are virtual assistants, chatbots, digital assistants, where the way that you interact with them is conversational. That is the primary, the first way through which you interact with them. And we provide a platform to allow you to build these applications, configure them, maintain them, improve them, and so on. [00:06:37]

Because of the nature of the platform and the approach that we take in terms of developing and designing these conversational applications, we’re particularly well suited for regulated industries that care deeply about safety and reliability. There’s this sweet spot where you are both allowing a user to have an open-ended conversation, but you’re also controlling the process and ensuring that you are completing a process, right? You are, you know, making an insurance claim, or you’re booking an appointment with your doctor, and you’re providing the right information and getting the right answers, and so on. So we focus very much on those sorts of problems. [00:07:07]

As I already hinted, we work within the insurance industry and the healthcare industry. In the UK we work with the NHS, and we work with the World Health Organization. Some really, really interesting problems to go solve out there. [00:07:50]

Host: That’s amazing. Are there any industries that you would like ideally to be able to support and explore in the future? [00:08:06] 

Ronald Ashri: Education is the one that is missing from our list there. Healthcare is the most exciting. There are so many problems to solve, and healthcare professionals have so much difficulty scaling what they can do. OpenDialog is focused on removing the administrative weight and allowing the healthcare professionals to do what they need to do. [00:08:14]

So, diverging a bit, but one data point that I find really interesting is, at least in the UK, about a third of booked operations do not take place because the patients are not ready. They haven’t followed the instructions upfront. So, you know, forget the waiting list; these are things that are set, the date is set, everyone is there, ready to go, and a third of them don’t happen because of that lack of upfront communication and reminders: make sure to do X, Y, and Z, make sure you don’t eat, make sure you show up, all those things. [00:08:39]

Host: That’s a good one. [00:09:20] 

Ronald Ashri: Yeah, it’s kind of a useful one, right? So there’s so much to go fix. Before we get into the really, I don’t know, robots doing the actual operations or whatever you would think, there’s just so much upfront, easy stuff to pick up. Now, having said all that, education is the other one that I would really like to be involved with. [00:09:21]

Host: I also want to get a better sense of how different jobs and roles within these industries that you’re already supporting are adapting to collaborate with these AI tools. [00:09:43] 

Ronald Ashri: The specific characteristic, I guess, of generative AI in particular is that it’s a very general-purpose technology. So whereas before you could look at some AI technologies and say, okay, so I think, you know, we would have this natural language understanding algorithm that would do something around customer support, or this other thing would do something around recruitment and so on. [00:09:56] 

Now, there isn’t a thing that you can look at and not relate back to how generative AI would impact how you do that work. And the question becomes: you have this general-purpose technology that is also changing and evolving at great speed. Yeah, it pretty much touches everything, and it’s either directed by the organization top down, to say, here are some new tools to do things, or it is people literally going out and, you know, thinking, oh, here’s a document someone sent me, I don’t have time to read it, can you provide a synthesis of it for me? So, it deeply changes how people do work. I don’t think we’re going to find out what the real impact of that is for some time. [00:10:28]

Host: So, to an extent, would you say that it’s going to affect the structure of different companies? And if so, how? Would it require top-level executives to sort of rethink how they build their workforce? [00:11:20]

Ronald Ashri: I think it requires… so, it will impact the structure, and it does require thinking. One of the large consultancies did a study together with Stanford University. They essentially looked at how generative AI can help professionals, knowledge workers, do their jobs faster, and across the board there was an improvement. They could complete their tasks faster, but more junior people got more of an improvement. I think the more senior you are, the more knowledge and experience you have, the more you cast doubts on things or think you can do certain things on your own, while if you’re more junior, you’re going for it. [00:11:38]

And yes, please help me. Let’s just get through this. So that’s interesting and exciting. But at the same time, the error rate coming out of that work from the more junior workforce went up as well. And that, again, is partly because those generative AI tools are not necessarily correct all the time, although they will always very confidently tell you, here’s what you should do. So there’s a really interesting dynamic there to explore. [00:12:25]

Host: I do want us to talk a little bit about AI and training. So, training within organizations: how can you see AI affecting a company’s learning and development? For example, creating training material faster, making training content more engaging, or personalizing training. [00:12:55]

Ronald Ashri: So I think there are two aspects. For sure, you can start thinking of automating some aspects of training or making it more engaging, and I guess it’s both. So it’s not too far-fetched to say that if we have a set of material, we can pass it through a process that generates the training material for that data set. [00:13:16]

So say we have a handbook that describes how workers should deal with data, and we take it through a process to generate, at the end of that process, a virtual assistant whose task is to train a newcomer to the business on those rules. There’s something really interesting in thinking about that, and about how you generate training courses, manage them, and track their improvement over time. [00:13:42]

And the nice thing about a virtual assistant is you can have those conversations as much as you want. The other way around is also interesting, though, because I think that bit of training people to use AI and really understand what’s happening is very important, as we are exposing more and more tools that provide some sort of automation and that invite you to just type your prompt and see what happens, right? There are ways to get better results, and right now people are just exploring on their own. [00:14:12]
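To make the handbook-to-assistant idea Ronald describes a little more concrete, here is a minimal sketch of the generation step, assuming the OpenAI Python client; the file name, model choice, and prompts are illustrative assumptions, not OpenDialog’s actual pipeline:

```python
# Minimal sketch: turn an internal handbook into draft training material.
# Assumes the OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY in the
# environment. File name, model, and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

with open("data_handling_handbook.md", encoding="utf-8") as f:
    handbook = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model could be used
    messages=[
        {
            "role": "system",
            "content": (
                "You create onboarding training material. Use ONLY the handbook "
                "provided; if something is not covered, say so."
            ),
        },
        {
            "role": "user",
            "content": (
                "Handbook:\n" + handbook + "\n\n"
                "Write five quiz questions, with answers, that check whether a "
                "newcomer has understood our data-handling rules."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

The same pattern extends to an interactive assistant by keeping a running message history, and, as Ronald stresses, the generated courses still need to be managed, tracked, and improved over time rather than produced once and forgotten.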

Host: For managers who want to train their teams on how to use AI tools: as you just mentioned, most people sort of take it on their own and try to explore it. What would you say are some key areas that companies really should focus on? [00:14:50]

Ronald Ashri: I think there should be a basic course for everyone on what AI actually is, demystifying a lot of these things. I think that the first step towards effectively using AI is to take away a lot of the anthropomorphization of AI, and I think it starts with the way we talk about it. It is a technology. [00:15:06]

It doesn’t have feelings. You know, people talk about prompts by describing them as how you would talk to a person… and there is some value in that; we understand that as humans. But then we also get caught up in it and forget that we’re dealing with a system that has a certain way of working, and we should deal with it appropriately instead of thinking we’re talking to something that has sentiments. And I’m definitely not in the camp that these things have consciousness or anything like that. It is a machine. It’s a very sophisticated machine. It’s amazing, fantastic that we have them, but it is a machine. There’s a way that it works. It’s important that we understand that so we don’t think it’s magic. And that makes us more alert to where it goes wrong, because there has to be a certain level of distrust to start with. [00:15:34]

So, within my organization, we also have our developers use AI to automate some of the code writing, and kind of a recurring theme is: it’s amazing, it’s fantastic, until something goes wrong. And then they’ll waste a bunch of time trying to understand why a piece of code does not work, because the AI inserted some little thing in there that, you know, is quite hard to go and trace back to find what the issue was. [00:16:36]

So starting with a certain level of distrust is really important. And once you’ve demystified the technology, then it’s about… it’s a tool. Like any tool, there is a good way to use it and a bad way to use it. And then there is a nascent industry of people that are learning, and then teaching others, how to use these tools appropriately, how to write more useful prompts, how to manage prompts. I’m sure you’ve heard this job title of prompt engineer, right? And initially, I was a bit… distrustful of this idea of a prompt engineer. But the more you use large language models and generative AI in general, the more you realize: oh, yes, I don’t know if it’s quite engineering, but there’s definitely a thought process, and good practices and bad practices, in terms of how you shape the instructions you give to the machine. So learning more about that is very useful. [00:17:08]

Host: For those in our audience who might not know what a prompt engineer is, can you just briefly explain? [00:18:12]

Ronald Ashri: Very good question, because that’s a very new job title. So if we take it all the way back, the way that you interact with generative AI, or something like ChatGPT, is by writing a prompt: you’re giving it some instructions, you’re prompting it. And then it will continue, right? And it is called a prompt because what these tools do is complete whatever you’ve given them. Now, a prompt engineer is a person that has a tool bag of techniques for how to write these prompts, so as to elicit the right sort of response from the system based on what they’re trying to achieve. [00:18:19]

And there’s a whole library of patterns. Some of the simpler ones are, you know, asking the model to explain its thinking, or asking the model to take it step by step, and that elicits different results from the model, which is both very interesting and also slightly concerning if you’re approaching it from a scientific perspective, because I’m not sure we completely understand what is happening behind the scenes. [00:19:01]
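As a small illustration of the kind of pattern Ronald mentions, the sketch below sends the same question twice, once plainly and once with a step-by-step instruction, so the two answers can be compared; it assumes the OpenAI Python client, and the model name and example question are hypothetical:

```python
# Minimal sketch of a simple prompting pattern: the same question asked plainly
# and with a "step by step" instruction often elicits different responses.
# Assumes the OpenAI Python client (openai>=1.0); model and question are
# illustrative assumptions.
from openai import OpenAI

client = OpenAI()

QUESTION = (
    "A policy renews in 45 days and requires notice 30 days before renewal. "
    "How many days does the customer have left to give notice?"
)

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print("Plain prompt:\n", ask(QUESTION))
print("\nStep-by-step prompt:\n",
      ask(QUESTION + "\n\nWork through it step by step before giving the final answer."))
```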

Host: I definitely want us to go back to that a little later, but I don’t want to miss something that you mentioned before. You talked about how most people deal with AI as though it’s a human being, that it has feelings, and a common argument is also that AI can take our jobs. [00:19:30] 

But what a lot of people are saying right now is that AI won’t actually take our jobs. It’ll help us focus on more creative aspects of our jobs. But as people used to focus on honing their hard skills and getting certificates, et cetera, they might have neglected the soft skills that AI will never really be able to have. Or at least not in the foreseeable future. Does this mean that companies should shift their training priorities to those skills now? [00:19:48] 

Ronald Ashri: Absolutely. So, let me take it in order. Will AI take our jobs? Yes. I think we need to accept that fact, and I feel any other way of putting it, you know, “we will be doing different things,” is sugarcoating what’s actually going to happen. There are certain things that people do right now that AI will be doing, and there are a number of benefits that come from that. There are a number of disadvantages, too, and it’s up to us as a society to deal with them appropriately. Now, what I would suggest people do, as individuals and organizations, is absolutely what you said: focus on those other skills. [00:20:17]

The things AI is not going to take any time soon. And yes, it’s all those softer skills, which are actually quite hard; they are the harder things to do, right? It’s interesting that we call them soft skills, but it’s the difficult stuff of working with people and collaborating and forming a team and agreeing on a common goal and figuring out who is best at what and together building something and going out there and advocating for it and so on. All those really difficult things that most people, if you ask them what is hard about their job, they’ll say, “Oh, it’s the people.” [00:21:08]

Host: But again, I’m trying to look at the bright side, because people get very defeated by the rise of AI in a way. So just like the prompt engineer is something that came about, I’m sure that as we go along, and as other roles, whether that’s at universities or at corporations, fall behind, other needs are going to arise that we may not have even imagined. And speaking of the prompt engineer again, I don’t want to forget this question: can you also share any more examples of jobs and roles that are likely to emerge or evolve as a result of AI’s integration into the workplace? [00:21:52]

Ronald Ashri: Look, I’m actually an optimist. I think all of these things are good things. We simply need to be realistic about what they’re going to impact so that we can deal with it. But it’s good, because we should be working less. We should be spending more time really focusing on what makes us human, and that is the interaction between us and so on. In terms of jobs, there’s actually a whole host of jobs. Just thinking of my company, you can start from the more complex things, like machine learning engineers and data scientists and so on. [00:22:35]

But then you go into things like prompt engineers, and you have conversation designers, right? So these are people that think about “How do I design a positive experience for a user that is interacting with a conversational machine?” And there is a lot of thought and work that goes into that. There are AI trainers, right? So we have these AIs, but they’re not static things. They need to be monitored and improved upon, and so on. There’s a whole host of new jobs, and I’m sure there are going to be things that are not even on our horizon right now, that we haven’t thought of, that are going to pop up. There is no reason this cannot be a really positive thing for us. [00:23:11]

Host: I think it’s also just the fear of the unknown; it’s always scary. And so, of course, our first reaction is a little pessimistic, but I agree with what you’re saying, that it’s definitely going to help us in more ways than we can imagine. We talked about how things have changed very rapidly over the past few years, and, in the same way, more and more AI tools are emerging. How can businesses evaluate what kind of tools actually suit them, their business, and their organization as a whole? [00:24:00]

Ronald Ashri: This is a conversation I have almost daily, because I guess we, as a company, are one of those tools being evaluated. So I’d say two things. I appreciate, and I completely empathize with, businesses that will tell us, “Here’s another AI vendor; you’re talking about automation, you’re using all these big words, and you sound just like every other AI vendor, and we have no real way to evaluate it,” because, on the other side, there is a lack of expertise. [00:24:34]

One of the challenges with this technology is that it has developed so fast that it’s really hard for businesses to catch up. Not everyone, obviously, is a Google or a Facebook or a Microsoft that has these teams internally, or even just a large bank or something like that, which enables them to go and put a team in place and properly research this. Most organizations are coming at it from “we have no internal skills, and we’re trying to figure this out,” so I think, almost as a matter of agency, it is worth investing in upskilling their internal people in the things we’ve talked about before. What is AI? What are the different aspects? A lot of the conversations we have start there. We, as OpenDialog, offer a specific thing in the very vast space of AI. But the conversation we have is, okay, what is AI? How do I think of automation? And then it comes down to conversational automation and so on. [00:25:10]

Host: That’s great advice, because by upskilling internally you’re putting it within the core of how you operate, how your people think, how they view automation and AI in general. And maybe that can help them catch up to the evolution of AI. [00:26:21]

Ronald Ashri: Absolutely. I think it’s necessary. The one message I try to get across to everyone is: this is a general-purpose technology, it will influence everything. I know technologists, and especially software engineers, are to blame, because we’ve always been saying, you know, software will eat the world, technology will eat the world, every company is a software company, and so on and so forth. But this time, for real, AI is going to influence everything. So it’s important to build your confidence in interacting with it. [00:26:41]

Host: Definitely. We briefly touched on the reliability of AI. So, how important would you say reliable data is when it comes to starting to use AI? And what are some steps companies can take to ensure that the data that is going to be used to teach AI is actually reliable? [00:27:26]

Ronald Ashri: It is central to it. There is no way to build an AI system that does something useful without the right data going in. And that’s not just some sort of abstract thing that happens in the lab that, you know, most companies are not going to interact with. [00:27:46]

So, to give you a small practical example: it is quite easy today to go to a service and say, here’s the URL of my website; go crawl it, and then generate a virtual assistant that can answer questions about my business. But do you remember every single thing you’ve said on your website, every single page that we’ve constructed in WordPress over the years? [00:28:06]

So then people talk to this virtual assistant, and it starts saying things that are not wrong, exactly. In this case, you know, the AI did its job. But, oh, the content that we gave it; this is not actually how we want to present ourselves to the world right now. So even at that small scale, before we get into the much larger issues around bias and training models that make very important decisions about our lives, data is always important. How you deal with it is: you need to put the time in and the effort in, and here we’re back at reskilling and different types of jobs. You need to curate your data. You need to think carefully about what you’re providing, because you’re no longer controlling it the same way you did before. You don’t know where it’s going to pop up. [00:28:33]
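Here is a minimal sketch of the “crawl my site and answer questions” idea Ronald describes, with the curation step he emphasizes made explicit; it assumes the requests, beautifulsoup4, and OpenAI Python packages, and the page list, model, and prompts are placeholders:

```python
# Minimal sketch: answer questions only from an explicitly curated list of
# pages, instead of blindly crawling years of old content. Assumes requests,
# beautifulsoup4, and the OpenAI Python client; URLs, model, and prompts are
# illustrative assumptions.
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

# Curation: an allow-list of pages we still stand behind.
APPROVED_PAGES = [
    "https://example.com/about",
    "https://example.com/services",
]

def page_text(url: str) -> str:
    html = requests.get(url, timeout=10).text
    return BeautifulSoup(html, "html.parser").get_text(" ", strip=True)

context = "\n\n".join(page_text(url) for url in APPROVED_PAGES)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "Answer only from the provided pages. If the answer is "
                       "not there, say you don't know.",
        },
        {
            "role": "user",
            "content": f"Pages:\n{context}\n\nQuestion: What services do you offer?",
        },
    ],
)
print(response.choices[0].message.content)
```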

Host: And because you also mentioned that AI is going to affect everyone: what do you think AI’s uses are in HR and hiring? And what are the risks that companies must look out for when it comes to handing over decisions like hiring talent and managing people? It seems as though these kinds of roles are more suited to a human touch, and to people with those soft skills that I will now officially start calling the tough skills. [00:29:30]

Ronald Ashri: So, HR is a really interesting place, because that is where you directly see the interface between automation and its impact on people’s lives.

So when you have HR software that is recording the person as they’re answering a series of questions, and it’s looking for a set of reactions and emotions on the face and so on, and all of that is starting to form a profile of that person that then is going to feed into a decision-making process, or you’re scanning a CV for keywords, and so on, all of that is really impactful. And I see both sides of it, because it’s easy to say, as a person on the street, “Oh, they shouldn’t do that. That is really bad.” It’s easy to say that. But then you look at companies and say, we have to sort through thousands, tens of thousands, hundreds of thousands of potential candidates, right? And we need to find the best one. I think we need to keep challenging ourselves to find the right technology to help us with this. [00:30:03]

But the challenge has to be complete, right? So we have to say, yes, technology, let’s see how we can best use it. Let’s be realists and pragmatists about where it’s good and where it’s bad. I think, from a broader societal perspective, and I apologize if I’m maybe going slightly outside of that… [00:31:17]

Host: No, no, please go ahead. [00:31:40] 

Ronald Ashri: I think things that impact people require regulation. I think medicine is a really good space for us to look at. We know that when we do a clinical study, some people are going to have side effects. And, you know, when you take a medicine, it has a very long list of side effects, and you might be in that 2 or 3% of the population that actually has a really bad reaction to this thing. But as a society, we’ve accepted that the overall benefit to society of having medicine, of having vaccines, and so on, is worth the risks. And what we try to do is manage the risk and minimize it and monitor it. And sometimes it works, and sometimes it doesn’t, but we try, right? We try really hard, and we’ve set up bodies and we’ve set up regulations and processes to manage it. [00:31:43]

And I think that, from a societal perspective, is something we need to give ourselves a pat on the back for. When it comes to AI, we do have to start seeing it in the same way and saying, yes, this is a big, important technology that can help solve problems. We need it. We cannot scale as humanity without it, whether it’s climate, or it’s education, as we said at the start, or it’s health; we need these technologies. But we need to put in place regulations. The typical thing for a startup founder is to say, no, we don’t want regulations, you need to let us do our thing. And I get that as well, but we do need regulations in this case. And so, bringing it back to HR, you can see how it impacts every step of the process, from finding people, interviewing them, helping them onboard into the business, and, you know, all that training that needs to happen, and so on. You can see how you can monitor that. One of the reasons I wrote The AI-Powered Workplace is that you could see that, you know, our business uses Slack, and most businesses, I think, are split between Slack, Teams, or something like Facebook Workplace and so on. [00:32:35]

And in there you have all the interactions of your organization, or let’s say 70% of the interactions of your organization, and there’s so much data, so much you can do with it. How is everyone feeling? You know, what’s the direction that we’re going in? How clear is everyone on our goals? You can start asking these big questions and getting results, using AI tools to summarize a lot of that. So all of that really helps HR to be much more effective. [00:33:55]

Host: Perfect. I’m really glad you also talked about regulations, because it’s a big debate going on today with anything regarding AI. And I completely agree that whenever anything has to do with the impact on humans and people and our lives, we have to regulate it to an extent. How might managers start putting guidelines in place to account for these ethical considerations? [00:34:30]

Ronald Ashri: I don’t necessarily have a great answer, but where I would start from is: you need to, first of all, make clear that you have guidelines in terms of technology and its impact on people. Is there something within the organization that talks about that, about how, when you bring in different types of tools, they impact people, right? [00:34:57]

I always find it useful to start by leaving the AI side of it outside, right? So it doesn’t confuse us. So let’s talk about, you know, you have a new chat system. Do you ask people to put it on their phones? If they get a notification on their phone, are they supposed to react to that? Do you have guidelines? Have you talked about it? So you can start thinking about it like that. And then you say, okay, so if HR introduces this new bot that goes around and asks people, “How do you feel today?”, have you explained to people what happens with that? Where does the data go? Who sees it? There’s a very interesting series of studies now that say, actually, people share much more with an automated service. [00:35:23]

There’s a specific study around dating profiles, saying, okay, if you feel that someone is asking you questions to help you build your dating profile and that person is a human, you are not as open. You kind of put forward your best self, right? Because you’re talking to a person; we’re human. But if it’s a bot, and it’s very clearly a bot, almost with language that is a bit more mechanical and a bit drier, we just share everything. It’s like, oh yeah, it’s this machine, who cares? I just tell it my innermost secrets and, you know, my likes and dislikes, and just go all in. So there’s something really useful about that in terms of releasing people from the pressure of, “Oh my God, I’m constantly being judged. There’s this human on the other side that is judging me.” So there, again, great positives. But we have to be clear with people about what we are doing with their data, what decisions are automated, and which ones are not. It’s often not clear. [00:36:12]

Host: Thank you so much for that. And as we’re getting towards the end of our time together, I really want to ask you, because we’ve discussed artificial intelligence and how it’ll help simplify plenty of complex processes in the workplace: in one sentence, what piece of advice would you give companies to help keep it simple when adopting AI-powered solutions? [00:37:22]

Ronald Ashri: I would have to think about that. Keep it simple. Perhaps I’ll go back to what I said before. It’s just a technology. Start there. [00:37:49] 

Host: Keep it simple. It’s just a technology. Amazing. Ronald, thank you so much for taking the time to be with us. [00:38:06] 

Ronald Ashri: Thank you. I enjoyed it. [00:38:14] 

Host: Thanks for tuning in. In the next episode, we’re looking at why being intentional is key to a healthy relationship with our tech, both in our work lives and at home. You can find Keep It Simple on all podcast platforms. [00:38:30] 

Craft and deliver engaging training without creating it all by yourself. Let TalentCraft do the heavy lifting for you. Available on TalentLMS, this AI-powered content creator can build courses with just one prompt. Wave goodbye to tedious content development and say hello to a delightful AI-powered learning experience for you and your teams.

You can find Keep It Simple on all podcast platforms. This episode of Keep It Simple was brought to you by TalentLMS. The training platform built for success and designed with simplicity in mind. For further resources on today’s topic, visit talentlms.com/podcast. [00:39:01] 
