Generative artificial intelligence has been a topic that's impossible to avoid on Wall Street for more than a year — and it's unlikely to fade away anytime soon. In some ways, however, 2024 may prove to be a more pivotal year for AI than 2023 was.

With OpenAI's ChatGPT launching in late November 2022, many investors last year were largely content to hear about how tech companies were approaching generative AI and see new products or services that enable or integrate the buzzy technology. But this year, the pressure is likely to mount on companies — like Club name Salesforce — to start showing financial benefits from their AI endeavors. The focus will shift from potential to profits.

Salesforce is just one of many stocks in the portfolio that are investing heavily in developing and implementing AI initiatives aimed at fueling growth. Chipmaker Broadcom is another. And each of our Significant Six stocks — Microsoft, Meta Platforms, Google parent Alphabet, Amazon, Nvidia and Apple — is making big investments in AI, with Apple doing so in a more under-the-radar fashion.

To help you build a deeper knowledge of the underlying technology that's dominating the conversation from Silicon Valley to Wall Street and Main Street, we put together a list of 20 artificial intelligence terms that are important for investors to understand. We've enlisted two experts in the field to help us define and explain the AI jargon.

Let's start with the most basic level: What does artificial intelligence even mean?

1. Artificial intelligence

Artificial intelligence is a field of technology that's been around for decades and broadly refers to computer systems that try to "replicate human cognition in some way," said Chirag Shah, a professor of information and computer science at the University of Washington. The earliest electronic computers solved math equations for military purposes. The difference with AI systems is a focus on intellectual tasks that give humans "the upper edge as a species," such as making decisions, Shah said.

2. Algorithm

An algorithm is a set of instructions that tells a computer how to accomplish a task. A traditional computing system supports a fixed number of algorithms, meaning the number of tasks it can accomplish is limited to what is spelled out in those algorithms. Like traditional computer systems, every AI program has an algorithm behind it — but with one key distinction: AI systems can expand their initial set of instructions based on new data that's received, Shah said. That process — where the system essentially learns to adjust and write its own algorithm — is where the real potential of AI systems is achieved, Shah explained.

If a traditional computer is programmed to touch fire, it will keep touching fire in accordance with its algorithm. But if an AI system touches fire and something bad happens, the algorithm is able to recognize that something bad has occurred and avoid doing it again — or at the very least, it would learn that touching fire could lead to a problematic outcome. The AI system's initial set of instructions may not have indicated that touching fire can cause harm, but AI algorithms are able to expand to include that as part of their knowledge base. Sound familiar? The process is basically how humans build knowledge over time. (A toy code sketch of the idea follows after the next term.)

3. Model

A closely related term is an AI model, which is basically the output of an algorithm that's been fed a bunch of data to learn from. Algorithms and models together form AI systems.
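Here is the fire example from term No. 2 as a minimal Python sketch. It isn't how any real AI system is built; the names and the "pain" feedback are invented for illustration. But it shows the key distinction: a fixed algorithm never changes, while an adaptive one expands its instructions from new data.

```python
# A toy illustration, not any real product's code: a fixed-rule program
# versus a program that expands its own rules from feedback.

def fixed_program(thing: str) -> str:
    # A traditional algorithm: the instructions never change.
    return "touch"

class AdaptiveProgram:
    # A minimal stand-in for an AI system that grows its knowledge base.
    def __init__(self):
        self.avoid = set()  # learned knowledge, empty at first

    def act(self, thing: str) -> str:
        return "avoid" if thing in self.avoid else "touch"

    def learn(self, thing: str, outcome: str) -> None:
        # Bad feedback expands the instruction set: next time, avoid it.
        if outcome == "pain":
            self.avoid.add(thing)

agent = AdaptiveProgram()
print(fixed_program("fire"), agent.act("fire"))  # touch touch: neither knows better yet
agent.learn("fire", "pain")                      # feedback arrives
print(fixed_program("fire"), agent.act("fire"))  # touch avoid: only the adaptive program changed
```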
4. Machine learning

Machine learning is a subset of AI. If the goal of AI is creating computer systems that mimic human behavior, machine learning is one way to accomplish it. Shah said most of the successful AI systems we've come to know over the past 20 years — such as autocorrect on an iPhone or suggested searches on Google — use machine-learning techniques. That is why AI and machine learning, or ML, are sometimes used interchangeably, though there can technically be AI systems that do not use machine learning. "Machine learning is where the system learns to adjust and writes its own algorithm," Shah explained.

5. Deep learning

A popular technique in machine learning is known as deep learning. "If all of artificial intelligence is automation of tasks that we would generally consider as non-trivial, then machine learning is the subset of AI in which the system tries to learn the automation from data, as opposed to being hard-coded, let's say," said Mark Riedl, a professor at Georgia Tech's School of Interactive Computing. "And then machine learning basically says you get to automation from data, but it doesn't tell you how. Deep learning says, well, 'how' is you build something called a neural net."

6. Neural network

Neural net is shorthand for neural network, a type of algorithm created to help computers find patterns in data and make predictions about what to do next. Modern neural networks have many layers, which ultimately make them very good at finding patterns in data. (A small code sketch after the next term illustrates the idea.)

Despite their name, Shah said, neural networks are not exact replicas of the human brain. He likened them to wings on an airplane: even though they don't flap like the wings of a bird, they still help the plane fly and are called wings. Similarly, neural networks in computer science do not operate like the human brain, Shah explained, but they still help computers complete the kinds of cognitive and intellectual tasks that humans do.

7. Generative AI

Neural networks are at the heart of the increasingly popular type of AI known as generative artificial intelligence, or gen AI for short. Both traditional AI and gen AI systems rely on data and can be used to automate decision-making tasks. The recommended videos on Google's YouTube or suggested shows on Netflix are examples of traditional AI; so is facial recognition technology, including Face ID on Apple's iPhones. But with generative AI, the distinguishing feature is the ability to create new content in response to a user question or input of some kind. Depending on the model, that content can include human-like sentences, images, video and audio.

The goal of generative AI is for the outputs to be similar to the data fed to its algorithm, but not the same. In this way, it's creating new data based on existing data. Or, as Shah put it, generative AI systems have the ability to not just read data, but write it, too. Instead of just suggesting additional Bruce Springsteen concert videos after you watched a performance of "Spirit in the Night" live from Barcelona, a gen AI system could write a song about investing in the lyrical style of The Boss himself.

Perhaps a more practical example: While traditional AI is used to help forecast a company's future revenue based on historical patterns in sales data, a generative AI system could be used to help a salesperson craft an email to a customer that factors in their past orders and other relevant information for that account.
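To make the layered pattern-finding described in term No. 6 concrete, here is a minimal sketch of a tiny two-layer neural network's "forward pass." The sizes and random weights are arbitrary stand-ins; in a real system, training would adjust the weights so the outputs match patterns in the data.

```python
import numpy as np

# A toy two-layer neural network forward pass. The weights here are random
# placeholders; training would tune them so outputs reflect patterns in data.
rng = np.random.default_rng(0)

x = rng.normal(size=4)          # an input: 4 numbers standing in for features of some data
W1 = rng.normal(size=(8, 4))    # first layer's weights (these are "parameters")
W2 = rng.normal(size=(1, 8))    # second layer's weights

hidden = np.maximum(0, W1 @ x)  # layer 1 transforms the input (ReLU keeps positive signals)
output = W2 @ hidden            # layer 2 turns that into a single prediction
print(output)                   # with trained weights, this would be a useful prediction
```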
Club stock examples

The email-drafting feature described above is included in Salesforce's new AI tools, known as Einstein GPT. Microsoft's AI virtual assistant Copilot — which went live in November — is perhaps the most prominent generative AI feature among our portfolio companies. The capabilities of Copilot, which is expected to fuel revenue growth for the tech giant, include summarizing long email threads in Outlook and data visualization in Excel. Meta Platforms last year launched in the U.S. a beta version of an advanced conversational assistant, called Meta AI, across WhatsApp, Messenger and Instagram. It also can generate images. More recently, Amazon in January rolled out a generative AI tool that can answer shoppers' questions about a product on its marketplace.

8. Large language model

Generative AI applications capable of writing the Springsteen-inspired investing song and the customer email rely on a type of technology called a large language model, or LLM. For example, OpenAI's ChatGPT — which kicked off this whole AI wave — is an application powered by an LLM called GPT-3.5. The paid version of the application, known as ChatGPT Plus, runs on a more advanced LLM, GPT-4. Microsoft is a close partner of OpenAI, having invested billions of dollars in the start-up and leaned on its relationship to become a leader in generative AI.

A large language model is, as its name suggests, a type of AI model that is capable of recognizing and generating text in a particular language, including software code. To obtain those abilities, LLMs are fed massive amounts of data in a process known as training.

9. Training

During training, the model takes in data — for example, news articles, Wikipedia entries, social media posts and digitized books, among other sources — and tries to find relationships and patterns between words in that vast dataset. This is a complex process that takes time and a lot of computational power.

Club stock examples

Nvidia's chips have become the dominant source of that computational power. Additionally, Broadcom and Alphabet have for years co-designed a custom chip that Google uses to train its own AI models. That chip is known as a tensor processing unit, or TPU. More recently, Amazon and Microsoft have rolled out in-house designed AI chips, though Nvidia remains the clear leader in AI training, with some market share estimates well above 80%.

Eventually, the model will get to a place where it understands that the word Uber is more strongly associated with taxi, cab and car than it is with trees, dinosaurs or vacuums. At a high level, that's because news articles and Reddit posts mentioning Uber that are fed to the model during training are more likely to also contain the words taxi, cab and car than tree, dinosaur and vacuum. This is just one small example. In the actual training of LLMs, the process is repeated on a massive scale, with billions and billions of connections drawn between words.

10. Parameters

The connections that an LLM has drawn are expressed in its number of parameters, which has been jumping exponentially in recent years.

Club stock examples

You may have heard Meta Platforms, the parent of Instagram and Facebook, tout that its flagship LLM, known as Llama 2, has up to 70 billion parameters. Alphabet in December launched what it called its most capable model yet, Gemini, while Amazon is training its LLM with 2 trillion parameters, Reuters reported in November.

"The highest level way of thinking about it is a parameter is a unit of pattern storage," Riedl said. "More parameters means you can store more bits and pieces of a pattern. Whether that's Harry Potter has a wand, or platypuses have bills. … When people say, 'I dropped something,' they usually say it falls. Those are little bits of examples of pattern. If you want to learn a lot of pattern, recognize a lot of pattern about lots and lots of topics, you need lots of parameters."
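Where do numbers like 70 billion come from? Parameters are essentially the adjustable weights inside the neural network, so the count falls out of the network's shape. Here is a minimal sketch that tallies the parameters of a tiny fully connected network; the layer sizes are made up for illustration.

```python
# Toy parameter count for a tiny fully connected network.
# Each layer has one weight per input-output connection, plus one bias per output.
layer_sizes = [4, 8, 8, 1]  # made-up layer widths; real LLMs use vastly larger ones

total = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    total += n_in * n_out + n_out  # weights plus biases for this layer

print(total)  # 121 parameters here, versus tens of billions in a model like Llama 2
```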
After all the patterns are learned, the LLM can be deployed into the world through applications like ChatGPT, where somebody can ask for a basic itinerary for a vacation in Istanbul and shortly thereafter receive paragraphs of text with historic places to see and tours to take.

11. Inference

That deployment, which allows the generation of a basic itinerary for a vacation, is known as inference. "Inference is another word for guess, so it's guessing what the most useful output will be for you. We distinguish that from the training," Riedl said. "You stop learning at some point, and somebody comes by and says, 'All right, well, let me give you an input. What will you do?' You can think of the model as basically saying, 'Ah, I've practiced on so much stuff and I'm just ready to go.'"

Once a model is switched into inference mode, it's not really learning anymore, according to Riedl. "Now, OpenAI or somebody else might be collecting some data from your usage, but what they will do is they'll go back and they will train it again," Riedl explained.

12. Fine-tuning

The act of feeding an existing model fresh data so it can get better at a certain task is known as fine-tuning. "Fine-tuning means you don't have to go back and train it from scratch," Riedl explained, describing large language models as "word-guessers." Whenever an LLM fields an inquiry from a user, the model will lean on all the patterns it learned during training to try to guess which words it needs to string together to best respond to the inquiry. The guesses won't always be factually "accurate," though. That's because the model has been designed to learn patterns between words, not necessarily answers to trivia questions.

13. Hallucination

This is where the concept of hallucination comes into play. It generally refers to when an LLM responds to an inquiry with false information that, at first blush, may seem to be grounded in fact. Perhaps the most high-profile example of hallucination to date involves two attorneys who were fined by a U.S. federal judge after they submitted a legal brief they asked ChatGPT to write. The brief cited multiple legal cases that didn't exist and included fake quotes.

Of course, the optics of hallucinations are far from ideal, and some people point to them as reasons to be wary of broader AI adoption. But, according to the University of Washington's Shah, they are difficult to completely avoid when asking AI systems to generate content. The models use probabilistic approaches to predict what comes next, and there's always a chance the output won't align with expectations. "It's the side effect of being generative," he said. "It's predicting what the most probable next pattern is, which by definition is not set in stone."

Shah said it would be like asking him to predict which words his interviewer was going to say next. If Shah had known the interviewer their whole life and fielded their questions about AI many times before, he said, he'd likely have a decent shot at guessing what they'd say next. "If I have really known you, if I have really understood you, chances are 95% of the time I'm going to be spot-on. Maybe a couple percent of the time you were like, 'Uh sure. That's not what I was thinking, but I could see I could say something like this.' And maybe the last few percent times you're like, 'Wait a minute. No. Not me, never me.' That's what we're referring to with hallucination," Shah said.
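Here is that probabilistic word-guessing as a toy Python sketch. The probabilities are invented for illustration, and real LLMs work over vastly larger vocabularies and contexts, but the mechanism is the same: the model usually samples a sensible next word, and occasionally a low-probability option surfaces, the rough analog of a hallucination.

```python
import random

# A toy "word-guesser": given the previous word, sample the next word from
# learned probabilities. These probabilities are invented for illustration.
next_word_probs = {
    "uber": {"driver": 0.55, "ride": 0.40, "dinosaur": 0.05},
}

def guess_next(word: str) -> str:
    options = next_word_probs[word]
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

random.seed(7)
print([guess_next("uber") for _ in range(10)])
# Mostly "driver" and "ride," but every so often "dinosaur": a fluent-looking
# guess that doesn't match reality, the toy analog of a hallucination.
```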
14. Bias

Bias is another downside to AI systems — and LLMs in particular — that users need to consider. While many types of bias exist, usually when bias is discussed in the context of LLMs, people are referring to prejudicial bias, according to Georgia Tech's Riedl. A general example would be a model saying a person is better suited to a task simply based on gender. "The reason I focus on prejudicial bias is because, generally speaking, these are biases or stereotypes that we as a society have decided are unacceptable, but are present in the model," Riedl said. "It's a data problem," he added. "People express prejudicial biases. They get into the data. The model picks up on that pattern, and then reflects it back on us."

15. Guardrail

The creators of AI systems can take steps to limit bias by implementing what's known as a guardrail, which in practice may stop the application from generating an output on certain topics, such as those that are politically controversial. Guardrails are algorithms — remember, a set of instructions — manually added on top of the underlying model. (A bare-bones sketch of the idea follows after the next term.)

For example, a user could send an LLM a question like, "Who are better computer programmers, men or women?" Without any guardrails in place, the LLM would offer a response based on its training data, Shah explained. "These are commercial systems, so anything that gets into hot water, they're going to put guardrails" in place to limit the model's ability to respond, Shah said. "The underlying LLM may still be biased, may still be discriminatory or may still have problems."

16. Memorization

Another issue with LLMs that's been in the news lately involves a concept called memorization, which figures heavily into a copyright infringement lawsuit against OpenAI and Microsoft filed in December by the New York Times. In its complaint, the newspaper provides examples where ChatGPT responded to inquiries with text that's nearly identical to excerpts of New York Times articles. It highlights how LLMs can memorize parts of their training data and later provide it as an output. In the case of New York Times stories, that raises questions about intellectual property rights and copyright protections. In other instances, such as a business inputting customer data into an existing model during fine-tuning, it opens the door to security and privacy risks if personal information ends up being memorized and regurgitated.

Responding to the lawsuit in January, OpenAI wrote in a blog post that regurgitation is a "rare bug that we are working to drive to zero. … Memorization is a rare failure of the learning process that we are continually making progress on, but it's more common when particular content appears more than once in training data, like if pieces of it appear on lots of different public websites. … We have measures in place to limit inadvertent memorization and prevent regurgitation in model outputs."
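Returning to guardrails for a moment, here is the bare-bones sketch promised in term No. 15. Real guardrails are far more sophisticated, and the `call_llm` function and blocked phrases below are invented placeholders, but the structure is the same: a separate set of instructions sits between the user and the model.

```python
# A toy guardrail: a rule layered on top of a hypothetical model.
BLOCKED_PHRASES = ("who are better", "politically controversial")  # made-up list

def call_llm(prompt: str) -> str:
    # Placeholder standing in for whatever underlying model would answer.
    return f"(model response to: {prompt})"

def guarded_llm(prompt: str) -> str:
    # The guardrail checks the prompt before the model ever sees it.
    if any(phrase in prompt.lower() for phrase in BLOCKED_PHRASES):
        return "I can't help with that request."
    return call_llm(prompt)

print(guarded_llm("Who are better computer programmers, men or women?"))
# -> I can't help with that request.
```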
17. Graphics processing units

The field of AI has been around for more than 60 years, but its major leaps forward in recent years have been due to advancements in neural networks, which are good at finding patterns in data. Computer hardware also has played a big part in those advancements. To be more specific, Nvidia's pioneering graphics processing units, or GPUs, which hit the market beginning in the 1990s and originally were used for graphics rendering, laid the groundwork for the company's dominance in the AI training market today.

To improve graphics rendering, GPUs were designed to perform multiple calculations at the same time — a concept referred to as parallel processing. The mathematical principles used to move digital characters across a screen are fundamentally the same as what neural networks do to find patterns in data, according to Georgia Tech's Riedl. Both require a lot of computations done in parallel, which is why GPUs handle neural network training so well. More than a decade ago, machine learning researchers realized that the parallel processing capabilities of GPUs led to high-quality results when training neural networks. After discovering that hardware existed that could process bigger, wider neural networks, AI researchers eventually said, "Well, let's go figure out how to make a big, wide neural network," Riedl said.

18. Central processing unit

The parallel processing capabilities of GPUs stand in contrast to those of a traditional computer processor, known as a central processing unit, or CPU, which performs computations sequentially. CPUs can handle lots of general-purpose tasks well, both in personal computers and inside data center servers. CPUs can be used for AI tasks, too. For example, Meta used to run most of its AI workloads on CPUs until 2022, Reuters reported. It is currently on track to end this year with hundreds of thousands of Nvidia's top-of-the-line GPUs. While GPUs have the upper hand in AI training, CPUs are understood to perform AI inference well.

Club stock examples

Nvidia recently entered the data center CPU market as part of its so-called Grace Hopper Superchip, which combines a CPU and a GPU into one chip. The company has touted its ability to perform inference for AI applications. Historically, CPUs were the primary processing engine of data centers, but GPUs have taken on an increasingly prominent role due to the growth of AI. Broadcom figures heavily into the changing landscape with its networking products, which help stitch together different parts of the data center. For example, its Jericho3-AI fabric, released last year, can connect thousands of GPUs. For its part, Nvidia also has a growing, but arguably underappreciated, networking business.

19. Transformer

A seminal moment on that neural network journey arrived in 2017, when employees at Alphabet published a paper describing their creation of the transformer model architecture. It harnessed the parallel processing capabilities of Nvidia hardware to make neural networks that were not only better at figuring out how words go together (better at finding patterns in data) but also much larger. In that sense, the introduction of the transformer architecture laid the groundwork for the current generative AI boom. (A stripped-down sketch of its core operation follows after the final term below.)

20. Generative Pre-trained Transformers

In 2018, roughly three years after OpenAI's founding, the organization introduced the first version of the model that would go on to power ChatGPT. It was called GPT — shorthand for Generative Pre-trained Transformers. The Microsoft-backed start-up has since gone on to release new versions of the GPT model, with the latest being GPT-4. The three-letter abbreviation has appeared in other places, too, such as Salesforce's Einstein GPT.
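To give a flavor of what the transformer paper from term No. 19 introduced, here is a stripped-down sketch of its central operation, known as attention, in which every word in a sequence is compared with every other word at once. The random numbers stand in for the learned word representations of a real model, but the sketch shows why the math suits GPU-style parallel processing: it is built from large matrix multiplications.

```python
import numpy as np

# A stripped-down sketch of "attention," the core operation of the transformer.
# Random numbers stand in for the learned word representations of a real model.
rng = np.random.default_rng(0)
seq_len, dim = 5, 8                    # 5 tokens (words), 8 numbers describing each

Q = rng.normal(size=(seq_len, dim))    # "queries": what each token is looking for
K = rng.normal(size=(seq_len, dim))    # "keys": what each token offers
V = rng.normal(size=(seq_len, dim))    # "values": the information to pass along

scores = Q @ K.T / np.sqrt(dim)        # every token scored against every other, at once
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax: each row becomes probabilities
output = weights @ V                   # each token's output blends all tokens' values
print(output.shape)                    # (5, 8): one updated representation per token
```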
Bottom line

Investors on both coasts and everywhere in between remain focused on the promise of AI more than a year after ChatGPT went viral. But conversations on such a technical topic can quickly veer into unfamiliar territory. We hope that by explaining these AI terms — just as we do for certain financial jargon — Club members feel better equipped to invest in companies involved in the fast-moving field.

Of all the Club companies running the AI race, Nvidia and Google parent Alphabet have arguably played the most important roles in bringing AI to where it is today. Indeed, while Microsoft has wisely ridden its close relationship with OpenAI to a $3 trillion valuation and a leadership position in the world of gen AI, it was pioneering research inside Google — on top of Nvidia chips — that gave rise to OpenAI's innovations.

(See here for a full list of the stocks in Jim Cramer's Charitable Trust.)

As a subscriber to the CNBC Investing Club with Jim Cramer, you will receive a trade alert before Jim makes a trade. Jim waits 45 minutes after sending a trade alert before buying or selling a stock in his charitable trust's portfolio. If Jim has talked about a stock on CNBC TV, he waits 72 hours after issuing the trade alert before executing the trade.

THE ABOVE INVESTING CLUB INFORMATION IS SUBJECT TO OUR TERMS AND CONDITIONS AND PRIVACY POLICY, TOGETHER WITH OUR DISCLAIMER. NO FIDUCIARY OBLIGATION OR DUTY EXISTS, OR IS CREATED, BY VIRTUE OF YOUR RECEIPT OF ANY INFORMATION PROVIDED IN CONNECTION WITH THE INVESTING CLUB. NO SPECIFIC OUTCOME OR PROFIT IS GUARANTEED.