
Generative AI is about to destroy your company. Will you stop it?


As the debate rages about how much IT admins and CISOs should use generative AI — especially for coding — SailPoint CISO Rex Booth sees a wide range of obstacles standing between enterprises and any benefits, especially given the industry’s less-than-stellar history of making the right security decisions.

Google has already decided to publicly leverage generative AI in its searches, a move that is freaking out a wide range of AI specialists, including a senior manager of AI at Google itself.

Although some have made the case that the extreme efficiencies generative AI promises could fund additional security (and functionality checks on the backend), Booth says industry history suggests otherwise.

“To propose that we can depend on all companies to use the savings to go back and fix the flaws on the back-end is insane,” Booth said in an interview. “The market hasn’t provided any incentive for that to happen in decades — why should we think the industry will suddenly start favoring quality over profit? The entire cyber industry exists because we’ve done a really bad job of building in security. We’re finally making traction with the developer community to consider security as a core functional component. We can’t let the allure of efficiency distract us from improving the foundation of the ecosystem.

“Sure, use AI, but don’t abdicate responsibility for the quality of every single line of code you commit,” he said. “The proposition of, ‘Hey, the output may be flawed, but you’re getting it at a bargain price’ is ludicrous. We don’t need a higher volume of crappy, insecure software. We need higher quality software.

“If the developer community is going to use AI as an efficiency, good for them. I sure would have when I was writing code. But it needs to be done smartly.”

One option that’s been bandied about would see junior programmers, who can be more efficiently replaced by AI than experienced coders, retrained as cybersecurity specialists who could not only fix AI-generated coding problems but also handle other security tasks. In theory, that might help address the shortage of cybersecurity talent.

But Booth sees generative AI having the opposite impact. He worries that, “AI could actually lead to a boom in security hiring to clean up the backend, further exacerbating the labor shortages we already have.”

Oh, generative AI, whether your name is ChatGPT, Bing Chat, Google Bard or something else, is there no end to the ways your use can make IT nightmares worse?

Booth’s argument about the cybersecurity talent shortage makes sense. There is, more or less, a finite number of trained cybersecurity people available for hire. If enterprises try to combat that shortage by paying them more money — an unlikely but possible scenario — it will improve the security situation at one company at the expense of another. “We are constantly just trading people back and forth,” Booth said.

The most likely short-term result of the growing use of large language models is that it will affect coders far more than security people. “I am sure that ChatGPT will lead to a sharp decrease in the number of entry-level developer positions,” Booth said. “It will instead enable a broader spectrum of people to get into the development process.”

This is a reference to the potential for line-of-business (LOB) executives and managers to use generative AI to code directly, eliminating the need for a coder to act as an intermediary. The key question: Is that a good thing or a bad one?

The “good thing” argument is that it will save companies money and allow LOBs to get apps coded more quickly. That’s certainly true. The “bad thing” argument is that not only do LOB people know less about security than even the most junior programmer, but their main concern is speed. Will those LOB people even bother to do security checks and repairs? (We all know the answer to that question, but I’m obligated to ask.) 
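To make that concern concrete, here is a minimal, hypothetical sketch (the table, function names, and query are invented for illustration) of the kind of flaw a speed-first, unreviewed snippet often ships with, next to the version a security check would insist on:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # The fast version: builds SQL by string concatenation, so input
    # like "x' OR '1'='1" turns the WHERE clause into a tautology and
    # dumps every row. Classic SQL injection.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The reviewed version: a parameterized query, where the driver
    # handles escaping, so hostile input stays plain data.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```

A trained developer spots the first pattern on sight; someone whose main concern is shipping by Friday generally does not.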

Booth’s view: if C-suite execs permit development via generative AI without limitations, problems that go well beyond cybersecurity will boil over.

LOBs will “find themselves empowered through the wonders of AI to completely circumvent the normal development process,” he said. “Corporate policy should not permit that. Developers are trained in the domain. They know the right way to do things in the development process. They know proper deployment including integration with the rest of the enterprise. This goes way beyond, ‘Hey, I can slap some code together.’ Just because we can do it faster, that doesn’t mean that all bets are off and it’s suddenly the wild west.”

Actually, for many enterprise CISOs and business managers, that is exactly what it means. 

This forces us back to the sensitive issue of generative AI going out of its way to lie, which is the worst form of AI hallucination. Some have said this is nothing new and that human coders have been making mistakes like this for generations. I strongly disagree.

We’re not talking about mistakes here and there, or the AI system not knowing a fact. Consider what coders do. Yes, even the best coders make mistakes from time to time, and others are sloppy and make a lot more errors. But a typical human error is entering 10,000 when the number was supposed to be 100,000, or failing to close an instruction. These are bad things, but there’s no evil intent. It’s just a mistake.

To make those mishaps equivalent to what generative AI is doing today, a coder would have to completely invent new instructions and change existing instructions into something ridiculous. That’s not error or carelessness; that’s intentional lying. Even worse, it’s for no discernible reason other than to lie. That would absolutely be a firing offense unless the coder had an amazingly good explanation.
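The difference is easy to show in code. In this minimal sketch (the constant, the invented call, and the numbers are mine, not taken from any real incident), the first slip is a recognizable human typo; the second is an instruction that simply does not exist:

```python
import json

# A typical human slip: a plausible value that is off by a digit.
MAX_RETRIES = 10000  # the spec said 100,000; wrong, but recognizably a typo

# A hallucination is different in kind. Generated code has been seen to
# call functions that were never real, along the lines of:
#
#     config = json.load_and_validate(raw, schema="strict")
#
# There is no json.load_and_validate in Python's standard library; the
# invented call is kept in a comment so this sketch still runs. A human
# who wrote that line could not plausibly call it a typo.
config = json.loads('{"retries": 100000}')  # the real, unglamorous API
```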

What if the coder’s boss acknowledged the lying and said, “Yep, the coder clearly lied. I have no idea why they did it, and they admit their error, but they won’t say that they won’t do it again. Indeed, my assessment is that they will absolutely do it repeatedly. And until we can figure out why they are doing it, we can’t stop them. And, again, we have no clue why they are doing it and no reason to believe we’ll figure it out anytime soon.”

Is there any doubt you would fire that coder (and maybe the manager, too)? And yet, that is precisely what generative AI is doing. Stunningly, top enterprise executives seem to be okay with that, as long as AI tools continue to code quickly and efficiently. 

It is not simply a matter of trusting your code, but of trusting your coder. What if I were to tell you that one of the quotes in this column is something I completely made up? (None were, but follow along with me.) Could you tell which quote isn’t real? Spot-checking wouldn’t help; the first 10 quotes might be perfect, but the next one might not be.
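A rough calculation shows why. Assuming, purely for illustration, 1,000 generated lines with five fabricated ones and a reviewer who samples 50 lines at random, the odds of the review missing every fabrication are high:

```python
from math import comb

def miss_probability(total: int, flawed: int, checked: int) -> float:
    """Chance a random spot-check of `checked` items sees none of the
    `flawed` ones among `total` items (hypergeometric, zero successes)."""
    return comb(total - flawed, checked) / comb(total, checked)

# Illustrative numbers only: 1,000 lines, 5 fabrications, 50 sampled.
print(round(miss_probability(1000, 5, 50), 2))  # ~0.77: usually missed
```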

Think about that a moment, then tell me how much you can really trust code generated by ChatGPT. 

The only way to know that the quotes in this post are legitimate is to trust the quoter, the columnist — me. If you can’t, how can you trust the words? Generative AI has repeatedly shown that it will fabricate things for no reason. Consider that when you are making your strategic decisions.

Copyright © 2023 IDG Communications, Inc.



This story originally appeared on Computerworld
