
Q&A: NY Life exec says AI will reboot hiring, training, change management


In 2015, New York Life Insurance Co. began building up a data science team to investigate the use of predictive models to improve efficiency and increase productivity.

There were quite a few deployments of predictive models across the company, with a modest amount of artificial intelligence used to aid automation. Most of the projects were centered not on machine learning and AI but on traditional data science. Models were generally used to support actuarial assumptions, aid agent recruiting, and enhance the purchase experience (e.g., bypassing the need for blood tests in underwriting).

AI was also used in creating marketing campaigns, determining the most appropriate audiences to target.

In November 2022, everything changed. San Francisco startup OpenAI launched ChatGPT, a chatbot based on a large language model (LLM) that enterprises could use to automate all kinds of tasks and scour internal documents for valuable content. It could summarize email and text threads and online meetings, and perform advanced data analytics.

Agents and service representatives could use the chatbot technology to obtain detailed answers for clients in a fraction of the time it would normally take. New employees no longer needed months to be brought up to speed; instead, they could be trained to use generative AI to find the information they needed to do their jobs.

Last month, New York Life announced the hiring of Don Vu, a veteran data and analytics executive, to lead a newly formed artificial intelligence (AI) and data team responsible for AI, data, and insights capabilities, and for aligning data architecture with business architecture in support of the company’s business strategy and objectives.

His work will be “essential” to the 178-year-old company’s future strategies around AI and its desire to create industry-leading experiences for customers, agents, advisors, and employees.

Alex Cook, senior vice president and head of Strategic Capabilities at New York Life, created the company’s data science team eight years ago. Cook sits on the company’s executive management committee, and his responsibilities include enterprise-wide technology, data, and AI, as well as strategy and development; he also oversees New York Life’s corporate venture capital group.


New York Life’s Alex Cook

Cook spoke to Computerworld about the company’s AI investments and how genAI is changing its approach to internal skills needs and hiring. The following are excerpts from that interview.

What are some of the biggest AI challenges at New York Life? “One of the challenges is just ensuring, as we’re building out some of these capabilities, that we’re doing it in places where it really makes a difference. It’s easy to see a shiny object and for people to get excited about it. And there are so many potential applications. We need to stay focused on those things that really move the needle for the company and for our clients and agents, and make the client-agent experience better. That’s a really critical focus for us right now. And we need to ensure that it’s a function that’s really feasible, as opposed to something that, once you dig into it, you don’t have the right conditions for success.

“There’s another workstream I’m very focused on: talent and change management. It’s really critical that we have the right people for that. Don [Vu] is a good example of getting the right people on board who understand how to do this well. Managing that change needs to be a critical focus for any enterprise, because it has so much impact in so many areas. It’s not just the new skills and capabilities people will have to take on; it’s managing the change when some people will be augmented, and some will be displaced, by these tools. How do you manage the change effectively? That’s been a critical focus for us.

“Governance is another one. [There’s] a whole ethical AI focus that continues with generative AI. How do we build these things and build them well? How do we do the right kind of testing for unintended bias? It’s critical we do the right testing for accuracy and make sure that’s well understood and governed.

“And we’ve really thought a lot about different scenarios and the planning for … where things could go from here and trying to make sure the company is prepared….”

How has New York Life been using genAI? “We have [AI] models we use to aid in making decisions when hiring agents and advisors: which of them will be more likely to succeed in their careers?

“There’s quite a lot of AI use in the marketing space. That’s where there’s a bit more use of AI, as opposed to more statistical models. In general at New York Life, I’d say the focus has been: wherever we’re applying data science or AI, if it’s in the realm of a decision that could impact a client, we’re very careful to make sure those aren’t black-box models. We want to make sure they’re explainable. But it’s particularly important when you’re talking about underwriting and determining somebody’s risk class. You’ve got to make sure the data is relevant and that the decision is based on factors you understand.

“That’s an important baseline. As a mutual life insurance company, we really do have tight alignment of interests, particularly with our core policyholders, who are ultimately the recipients of the dividends we deliver.

“In that context, we’ve had a lot of focus on ethical AI and ensuring we’re appropriately reviewing the data that’s used to develop and run models. Because we’re in a heavily regulated industry, we pay a lot of attention to the patchwork of regulation at the federal and state level. So, we need to make sure we’re on top of any regulation coming in from the states around the use of AI and data. And then we have our own standards that ensure we’re not just in compliance with specific regulators, but with our own standard of practice.”

President Biden recently announced an executive order restricting how AI could be used. Especially in financial services, do these rules advance anything, or were existing regulations already enough to deal with AI’s issues? “I think existing rules are very much a patchwork. If you look at the areas regulators have been focused on, like underwriting, different states have different levels of understanding. I do think it’s important that regulators come up the curve with AI, and generative AI in particular, …just in terms of understanding how these technologies work and how to govern them well. So, I do think it’s a good thing that regulators are starting to dig in and educate themselves on what these tools can do. I don’t know that we’ll need a ton of incremental regulation above and beyond what we have today, but there are cases when it’s important to understand the underlying context.

“For example, [take] some things we do, particularly in the insurance domain. Underwriting by its very nature is a discriminatory practice, meaning you’re trying to understand differences in health when, for example, you’re attempting to issue a life insurance policy. This is not a mandated product; it’s completely voluntary.

“So, it’s important we’re able to retain the use of some information in making that determination. And some regulators are confused about elements of that. For example, in some earlier discussions with regulators, [they said], ‘Gee, if someone’s disabled, you can’t use that to discriminate against them.’ Well, if you’re issuing disability insurance, you do have to take that into consideration or we won’t be in business long.

“I do think it’s important regulators understand as they step into regulating some of these new technologies that they don’t take inadvertent steps or misunderstand what these models can do.”

What changed for New York Life last November when ChatGPT was launched? “I think the biggest thing was recognizing the potential for these new approaches to really enhance things we’d been dabbling in, where clearly the quality wasn’t sufficiently high. Chatbots are a great example. Up until that point in time, chatbots were very limited in what they could do and often were more frustrating for clients than helpful.

“I think that’s very different now. I think the capabilities of chatbots have taken a step-function forward and they’ll continue to improve over ensuing months and years at a very fast pace. So, for me, it was a wake-up call.

“It’s a bit like the analogy of the frog in boiling water. If you put the frog in and slowly turn up the heat, it doesn’t realize how much is happening. That’s a bit of what’s happening in the AI context. It had been slowly advancing for a long time, but then it took a big step forward, and with that there was a recognition that a moment had arrived that was worthy of reassessing the scope of what was feasible.”

How are you preparing your employees and getting staffed with AI skills? “Both external hiring and internal training are critical. We have a lot of focus on training opportunities for different types of individuals to learn about this technology and have a role in its development.

“Typically, we have a lot of subject matter expertise that needs to be employed when you are developing these models. It’s not enough to say, here’s a treasure trove of thousands of documents, and point the AI at them and have it summarize them with great accuracy. That’s not the way it happens. You have to have people go through those documents to understand what’s in them. Have they been appropriately tagged with metadata that will give the models some direction on what to source in response to a question?

“There’s a need for people to really be a part of that development process; that will be true for quite some time. Then you’ve got the whole dynamic of prompt engineering and how to get smarter about how to ask these models and do so in a more iterative way.

“As we engage in AI development, we’re also evaluating what competencies we need in our existing staff. What opportunities will there be in how we change the nature of their roles and support them in that effort? There’s a lot of focus on that in our HR department.

“At the same time, we understand that we do need to bring in external talent to help ensure we’re moving quickly in this space, because it will be developing fast. As we look at machine learning operations, and even LLM ops, and at really understanding the tech stack, there’s a real need for people who have proficiency in those areas, and we’re making sure we bring that talent into the company.”
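Cook’s point about metadata tagging gestures at a retrieval pattern common in enterprise generative-AI projects: documents are labeled up front so a model is only pointed at relevant material when answering a question. The sketch below is a minimal, purely illustrative example of that idea, not a description of New York Life’s system; the field names such as line_of_business and doc_type are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class Document:
    """A document plus the metadata tags that guide retrieval."""
    doc_id: str
    text: str
    # Hypothetical tags, e.g. {"line_of_business": "life", "doc_type": "policy_faq"}
    tags: dict = field(default_factory=dict)


def retrieve(documents: list[Document], required_tags: dict, query_terms: set[str]) -> list[Document]:
    """Filter on metadata first, then do a crude keyword match on the text.

    The metadata filter is the part Cook alludes to: it narrows which documents
    a model is even allowed to source from. A real system would then rank the
    survivors (for example with embeddings) before handing them to an LLM.
    """
    candidates = [
        doc for doc in documents
        if all(doc.tags.get(key) == value for key, value in required_tags.items())
    ]
    return [
        doc for doc in candidates
        if any(term.lower() in doc.text.lower() for term in query_terms)
    ]


if __name__ == "__main__":
    corpus = [
        Document("faq-101", "How dividends are calculated for whole life policies.",
                 {"line_of_business": "life", "doc_type": "policy_faq"}),
        Document("memo-007", "Internal marketing plan for next quarter.",
                 {"line_of_business": "life", "doc_type": "marketing_memo"}),
    ]
    hits = retrieve(corpus, {"doc_type": "policy_faq"}, {"dividends"})
    print([doc.doc_id for doc in hits])  # ['faq-101']
```

In practice, a filter like this would sit in front of an embedding-based ranker, with the surviving passages passed to the model as context, which is why the quality of the human-applied tags matters so much.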

How do you address angst around AI taking people’s jobs? Do you see it eliminating more jobs than it creates? “I think it’s mostly the latter. Meaning, I see a lot of these AI technologies primarily being [augmentative] for people, allowing them to focus their skills and efforts on more complex things.

“We really do need human engagement and empathy. I think that’s something we definitely see with our agents and advisors. Their role may change to become much more relationship-driven and perhaps a little less technical, as that will be covered more by AI. It will be similar with our service reps; their job becomes much more focused on ensuring the client or agent experience is holistic, and on being more forward-thinking about how they’re delivering the right experience; and again, some of the most technical aspects of their tasks will be covered more by AI assistants.”

Will AI displace workers? “There will be some displacement, especially given the historical practice of bringing new people in at an entry level, where they learn the ropes on the simpler work and then expand their product knowledge over time. I think that route is going to get closed off a bit, and we’ve already encountered some of that over the last few years.

“You really need to upgrade your recruiting and training, because the nature of the role on day one is different than it used to be years ago. It’s less about coming up to speed quickly on detailed, more technical knowledge, and more about learning the right management and relationship skills, and the skills around how you best avail yourself of the technology that will enable you in your job.

“So, it does put a different emphasis on that training. I think there’s going to be a need for a lot of people to help develop AI, and I think there’s a lot of excitement around our existing folks helping to develop this next set of capabilities. For the most part, I think a lot of them will be thankful that they don’t have to engage in a lot of the more mundane tasks they used to do.

“There are implications for our rate of hiring in some of these kinds of roles. I think every company will be facing that dilemma to some degree, and it will have an impact on the job market. For the most part, as they have in the past, people will find new ways to use the tools being created.

“New technology on the margin may have some degree of displacement, but very often it augments and then people find better things to do. I think we’re going to see that here as well; it’s just the pace of change may be a bit faster compared with other technologies of the past.”



This story originally appeared on Computerworld
