Less than a year after OpenAI’s ChatGPT was released to the public, Cisco Systems is already well into the process of embedding generative artificial intelligence (genAI) into its entire product portfolio and internal backend systems.
The plan is to use it in virtually every corner of the business, from automating network functions and monitoring security to creating new software products.
But Cisco’s CIO, Fletcher Previn, is also dealing with a scarcity of IT talent to create and tweak large language model (LLM) platforms for domain-specific AI applications. As a result, IT workers are learning as they go, while discovering new places and ways the ever-evolving technology can create value.
Previn took over as CIO at Cisco in April 2022. Prior to that, he worked at IBM for 15 years — the last four as its CIO. So, Previn is familiar with the competitive landscape, and he’s aware that every genAI model his company creates is low-hanging fruit for industrial espionage. At the same time, he’s concerned about securing proprietary AI technology that costs millions of dollars to create, and he understands that genAI can sometimes take on a mind of its own, which is why keeping a human in the loop is always important.
Previn spoke to Computerworld about Cisco’s internal AI efforts. The following are excerpts from that interview.
How is Cisco using generative AI and what are your challenges with it? “It’s an exciting time. It’s an especially interesting time to be in IT, where now, 10 or 11 months after ChatGPT entered the scene, it continues to amaze and terrify in some cases.
“We think of it in…three categories of how we’re going to bring AI to bear: for ourselves, for our products, and for our customers.
“In terms of how we’re using it for ourselves, there’s a lot in that. I’m about one year into the job now and spend a lot of time thinking about IT as a culture change and how we bring technology as a force multiplier to our workforce; AI helps in that way.
“If you think about networking — the core business of Cisco — you have this firehose of data and information and it’s the ability to identify things in a timely fashion, make sense of it, and take action based on it where AI excels.
“So, if you think about network monitoring, …you can use AI algorithms to analyze huge amounts of data in real time to detect anomalies, detect performance issues, or predict problems. The whole idea of predictive maintenance and using AI to detect when you’re going to have a network failure or performance problem and then take preventative maintenance to prevent it is huge; then the ability to automate routine network management tasks like configuration management, device provisioning, policy enforcement, reducing manual things in general….”
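As a rough illustration of the anomaly-detection idea Previn describes, the sketch below flags network latency samples that drift far from a rolling baseline. It is a minimal example, not Cisco’s implementation; the metric, window size, and threshold are arbitrary assumptions.

```python
# Minimal sketch of streaming anomaly detection on network telemetry.
# The latency metric, window size, and threshold are illustrative assumptions.
from collections import deque
import math

WINDOW = 60        # samples kept for the rolling baseline
THRESHOLD = 3.0    # flag points more than 3 standard deviations from the mean

class LatencyAnomalyDetector:
    def __init__(self, window=WINDOW, threshold=THRESHOLD):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, latency_ms):
        """Return True if this sample looks anomalous against the rolling baseline."""
        anomalous = False
        if len(self.samples) >= 10:  # wait for a minimal baseline before alerting
            mean = sum(self.samples) / len(self.samples)
            var = sum((x - mean) ** 2 for x in self.samples) / len(self.samples)
            std = math.sqrt(var) or 1e-9
            anomalous = abs(latency_ms - mean) / std > self.threshold
        self.samples.append(latency_ms)
        return anomalous

# Example: feed per-interface latency samples and alert on outliers.
detector = LatencyAnomalyDetector()
for sample in [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 12.1, 11.7, 12.3, 12.0, 95.0]:
    if detector.observe(sample):
        # A real system would open a ticket or trigger automated remediation here.
        print(f"anomaly: {sample} ms")
```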
What keeps you up at night in terms of AI? “I think we want to make sure we have a human in the loop at all times. …I think part of the reason machine learning was slow and difficult is because you had to take a set of data, curate it, and then train the machine learning against that data — and the answer to the question you were asking had to exist in that data. That’s not the case if you can reason over data. Then you can start to approach human reading and writing comprehension to answer questions to which there was no previous answer.
“But we need human beings involved to ensure those answers are correct and compatible with our values and business models; hence, the need for our responsible and ethical business policies.”
How many products have you created so far that have genAI embedded in them? “It’s ironic. I’m currently putting together a paper for Cisco’s board and it’s already 10 pages long. The answer is we will embed AI into the entire portfolio of Cisco products and are already well under way in that.
“It’s across the entire portfolio. It would be a shorter answer to ask which products are not using AI. It’s in every product and very quickly people are running towards what AI can bring to bear, whether in the collaboration space, in the security space, in the networking and routing and switching space. You can intuitively see how this is helpful for the security portfolio — simplifying things, increasing speed, automating tasks, understanding what’s happening across complex digital estates in real time.
“Those are challenging tasks and AI is a perfect solution to bring to bear on things like ThousandEyes and our Umbrella and Duo SASE [secure access service edge] SD-WAN. How’s the traffic moving? What decisions are being made in how that traffic is getting routed? What anomalies are popping up? Where are problems coming up? Where does something malicious appear to be? If I make changes here, will it propagate to all other places so I don’t need to log into a bunch of other tools to have that outcome I want? That’s the effort currently underway.
“It’s very quickly working into every part of the Cisco portfolio.”
What about security? Have you found AI useful for securing networks? “Security is a huge opportunity for us to leverage AI in an interesting way through threat detection and analyzing network patterns, identifying and highlighting abnormal behavior and detecting security threats in real time.
“Using AI to optimize the flow of traffic and dynamically adjust traffic paths is of high value for an IT organization like mine — reducing latency and improving performance. AI-driven network management can integrate these IT systems and network orchestration systems to create these seamless, unified experiences that are augmented with intelligence from AI.
“Then bringing some of our products to bear on that as well, whether it’s the ThousandEyes platform, Cisco Secure Network Analytics, Catalyst Smart Center Alerts, Nexus Dashboards. You bring all this telemetry together, analyze it, make sense of it, take action on it in real time and automate some of those actions going forward. That’s the Holy Grail of AI-driven network management.”
How has AI produced efficiencies for employees? “…If you think about how much inefficiency is in any large organization — just with the things we need to do to perform our jobs — what we can bring to bear there [is] automating certain tasks, quickly surfacing high-value information, summarizing things.
“There’s a lot of AI working its way into the Webex platform. As an example, we’re using AI to summarize a meeting, highlight important moments in a meeting, and analyze body language — and not just words and written language. This happened in the meeting; this person got up and walked away; if you missed a meeting you can have AI send you a short summary of what happened in the meeting and what decisions were made.
“You already have LLMs. Now, you have this idea of LMMs [large multimodal models], and Cisco is going in the direction of being able to understand body language and non-verbal cues and summarize it and make sense of it.
“Being able to have a video meeting in a hybrid world is necessary, but not sufficient. It’s still in some ways less than the in-person experience. We’re right on the precipice of all this exciting innovation that’s going to solve this in a more meaningful way, where people aren’t disadvantaged by not being in the office together. …That’s what we’re starting to see now with advances in AI.
“There are the obvious things with noise cancellation and virtual backgrounds, but now you’re getting into summarizing meetings: what are the decisions and action items, what is the non-verbal body language?”
How are you using AI for software development? “Then for software development, earlier on there was a feeling the first use cases of AI would be the more menial tasks. But it turns out one of the first broad use cases of it is software development, which is really interesting, because the conventional wisdom was always that you cannot shorten the time it takes to develop software; there’s no compression algorithm for software development.
“That’s why there was so much focus in the past on testing and release automation. Now, it turns out you can use things like GitHub Copilot and have AI sit on your shoulder and help you write code more efficiently. That’s really interesting in the software development space.
“I think by the end of this year, something like 40% to 60% of all code being checked into GitHub will be augmented in some way by AI. And what impact does that have on your software development pipelines, and how do you properly, responsibly, and ethically document where AI has assisted you in the building of things?”
A concern has been that if you’re producing code via AI, certain errors, biases, or even malware can be introduced. Do you see a danger with so much of future code development being augmented with AI? “I don’t know that those two things are true. You can have a lot of code that’s being checked into GitHub that’s augmented by AI without it being uncontrolled, runaway optimization. So, things like having two human beings review code before it gets published [or] having a requirement to comment and tag any code generated by AI — there are things you can do to be responsible with AI-generated code and software development and those are the things we’re doing.”
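To make the controls Previn mentions concrete, the sketch below shows one hypothetical way a merge gate could enforce them: a commit trailer marking AI-assisted changes and a minimum number of human approvals. The two-reviewer rule and the idea of tagging AI-generated code come from his answer, but the trailer name and the tooling itself are illustrative assumptions, not Cisco’s actual process.

```python
# Sketch of a pre-merge policy check for AI-assisted code.
# The "AI-Assisted" commit trailer and two-approval rule are hypothetical
# conventions for illustration, not Cisco's actual tooling.

def check_merge_policy(commit_message, human_approvals):
    """Return a list of policy violations for a proposed merge."""
    violations = []
    ai_assisted = any(
        line.strip().lower().startswith("ai-assisted:")
        for line in commit_message.splitlines()
    )
    if ai_assisted and human_approvals < 2:
        violations.append("AI-assisted change needs at least two human reviewers")
    if not ai_assisted and "copilot" in commit_message.lower():
        violations.append("mentions an AI assistant but lacks an AI-Assisted trailer")
    return violations

# Example usage in a CI gate: block the merge if any rule is violated.
message = "Fix route flap dampening\n\nAI-Assisted: yes (completion suggestions)"
problems = check_merge_policy(message, human_approvals=1)
if problems:
    raise SystemExit("blocked: " + "; ".join(problems))
```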
What about the fact that generative AI has been caught stealing intellectual property for training large language models? One of the edicts of President Biden’s executive order calls for a system for watermarking AI-created content. Have you run into this? How do you deal with it? “That’s probably more of an immediate issue for these large language models that are indexing the entire internet. It creates a lot of interesting questions around which artist gets compensated for the use of their intellectual property? The original source? The person who used AI to create something new from the original source? I don’t think the answers to those questions are clear yet, so in some ways it’s uncharted territory. At what point does something become an original creation, and at what point is it reuse of someone else’s art?
“I think it’s something we’re going to have to work through as a society. When people take a mashup of a song, even that’s not always clear in the courts. If you sample something from someone else’s song and make a new song out of that, how much of that new song needs to be the theme for it to be considered theft versus something net new? It’ll probably end up being a similar situation here with AI.
“The large language models are very good at mastery of language, summarizing things, and writing things well; it’s a form of AI that can look at the words you’ve written and predict what the next word will be. That’s less useful when you want to have industry or company knowledge brought to bear on something.
“For example, if I want to have an AI-assisted network engineer that knows everything about Cisco’s products, including the helpdesk articles and technical support documents, product schematics, and internal things like that, then you’re going to want to create your own proprietary, smaller model for answering those niche, point-solution questions — which is why a lot of people are going to want to build their own AI clusters for the purpose of creating those models.
“That’s something we’re doing here in the IT department, building our own GPU-AI cluster using Cisco’s Ethernet fabric to connect it all, which in our view is the way to go. To build an AI cluster, you need two things: you need GPUs and you need low-latency, high-speed connectivity between those things. Our Ethernet project uses those two things.”
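One common way to get the kind of domain-specific answers Previn describes, without training a model from scratch, is to retrieve the most relevant internal documents at question time and hand them to a general-purpose model. The sketch below illustrates that retrieval pattern in Python; the OpenAI client, the model names, and the placeholder documents are illustrative assumptions, not a description of Cisco’s actual stack.

```python
# Sketch of retrieval-based answering over internal documents.
# Model names and placeholder documents are assumptions for illustration only.
import math
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder internal documents; in practice these would be helpdesk
# articles, technical support documents, product schematics, and so on.
documents = [
    "Placeholder helpdesk article text.",
    "Placeholder product documentation text.",
]

def embed(texts):
    """Embed a batch of texts with a hosted embedding model (model name is an assumption)."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in resp.data]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def answer(question):
    """Retrieve the most relevant document and ask the model to answer from it."""
    doc_vectors = embed(documents)
    q_vector = embed([question])[0]
    best_doc, _ = max(zip(documents, doc_vectors), key=lambda pair: cosine(q_vector, pair[1]))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice, not necessarily what Cisco uses
        messages=[
            {"role": "system", "content": "Answer using only the provided document."},
            {"role": "user", "content": f"Document:\n{best_doc}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("What is the PoE budget of this switch?"))
```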
How far along are you with your domain-specific LLMs? “You can’t train an LLM, but you can sort of have it interrogate pools of data and summarize it. So you can use it for things like … optimization in your intranet or helpdesk articles, where you can have an OpenAI model interrogate the articles you’ve written and come up with the likely questions a person would ask it, for which this would be the best article, and then optimize your search based on those results.
“So, for example, you have it look at a helpdesk article and say, ‘These are the questions that I think this article likely answers,’ and then tweak your search engine to say if someone asks these questions, this is the answer you should likely show. That’s a sort of easy, initial use case.
“There are also very technical things like what is the Power over Ethernet budget of this Cisco Catalyst switch versus this other one, and connecting that to our own internal product, helpdesk, and customer service data to be able to help interrogate it in natural language. Those things are already well under way, as is the building of our own cluster. We’ll eventually make several clusters using our own Ethernet fabric, but using different GPUs and different server blueprints so we can put them out as reference blueprints so others can do the same.”
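The search-optimization idea Previn outlines earlier in this answer, having a model read a helpdesk article and propose the questions it best answers, can be sketched in a few lines. The model name, prompt wording, and index structure below are assumptions for illustration; the interview only says Cisco uses an OpenAI model to interrogate its helpdesk articles and derive likely questions.

```python
# Sketch of generating "likely questions" for a helpdesk article and attaching
# them to the article in a search index. Model and prompt are assumptions.
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def likely_questions(article_text, n=5):
    """Ask a model which questions this article best answers."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                f"List {n} questions, one per line, that a user might ask "
                f"for which this helpdesk article would be the best answer:\n\n{article_text}"
            ),
        }],
    )
    lines = resp.choices[0].message.content.splitlines()
    # Strip list markers like "1." or "-" and drop empty lines.
    return [line.strip(" -0123456789.\t") for line in lines if line.strip()][:n]

# Attach the generated questions to the article in the search index so that
# when someone asks one of them, this article is boosted in the results.
article = "Placeholder helpdesk article about resetting a VPN client."
index_entry = {"article": article, "boost_queries": likely_questions(article)}
print(index_entry)
```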
This story originally appeared on Computerworld