BUSINESS

How ‘AI playfulness’ teams can help workers stop fearing AI and embrace change


Artificial intelligence—and generative AI, in particular—can be powerful instruments to boost office productivity, so long as people actually use them.

The problem is there’s been so much fearmongering that workers see the use of tools like ChatGPT to automate more and more of their jobs as a danger visited upon them, rather than a change they want to participate in.

That’s why Suzy Levy, managing director of human resources consultancy The Red Plate, believes the secret is figuring out how to engage workforces in a “structured and playful way.” That way, employees don’t see AI as a tool that offers them no personal benefit, or even as something they ought to outright fear.

“Every single function in an organization needs to have an AI playfulness team,” she told participants at Fortune’s Brainstorm AI conference in London this week. 

Levy recalled an instance where an acquaintance had written no fewer than 17 performance reviews with the help of ChatGPT after being encouraged to experiment with it. 

What her subordinates needed to work on came from her own prompts; the rest came from AI. This division of labor allowed her to devote her attention to tasks more valuable than crafting the requisite but time-consuming text around them.

“There is an entire engagement journey to get people to start interacting with these tools and thinking about how they can make their job better,” she said, “because humans are experts at knowing where their jobs are terrible.”

Ranil Boteju, chief data and analytics officer at Lloyds Banking Group, was confident teams would soon realize the benefits of AI once they became familiar with its abilities.

Unlike, for example, distributed ledgers like the blockchain (technologies in search of a compelling application), AI was proving popular among his team members. They would come to him with requests to use AI rather than the other way around.

In particular, it was proving enormously helpful with very low-risk tasks that are highly repetitive and highly manual, of which there are plenty in banking, according to Boteju. Even when validation and quality assurance are factored in, the productivity gains are still enough to make the effort worthwhile.

Don’t forget the criminal element

One example he cited was the recurring job of overhauling legacy IT systems, which often fell to even the most experienced software developers and data engineers on his team.

“They would have to look at the old code and rewrite it from scratch, and that would just take forever,” Boteju said. Once they developed an AI tool to help, the team experienced a 35% to 40% efficiency improvement in their code rewriting. “I’m quite confident for the next few years there are enough very low-risk opportunities [in banking],” he added.

There are pitfalls, though: AI can facilitate theft, whether of money sitting in an account or even of a person’s voice, cloned to circumvent authentication safeguards.

Bad actors will certainly embrace Levy’s concept of “AI playfulness” to test out just how far they can use the technology to accomplish their goals. 

A former head of leadership, diversity and employee engagement at Accenture, Levy argued every organization will therefore need to factor this into its planning and preparation.

“AI makes doing dodgy things easier,” she said. “We have to insert an element of criminality in our thinking.” 

Accenture sponsored the panel ‘Human by Design: Making New Technology Work with Us, Not Just for Us’ at Fortune Brainstorm AI London.
