TECHNOLOGY Q&A
Defining commonly used AI terms
By Wesley Hartman
May 1, 2026
Q. I am hearing so many AI-related terms. Can you give me a primer on AI terms and some of the context?
A. Artificial intelligence (AI) terminology has moved beyond computer science and into the everyday lexicon. These are terms that were once confined to my college computer-engineering coursework but are now spoken daily. Below, I define these terms and give accounting examples where I can.
- Artificial intelligence (AI): The term was coined in 1955 and introduced more widely in 1956, though the underlying concept has been around throughout history. Merriam-Webster defines it as “the capability of computer systems or algorithms to imitate intelligent human behavior.” What counts as convincing imitation has shifted as technology has advanced. Using car production lines as an example, modern factories have levels of automation that would have been considered advanced AI at an earlier point in history.
- Generative AI: Generative AI is a mixture of technologies built since 1956. In 2017, researchers at Google released a paper titled “Attention Is All You Need.” It introduced a method, called attention, for computers to review all the words in a passage of text and assign each a different level of importance based on the surrounding words. It can be thought of as very complex statistics and probability. Those importance levels create the context in which the word “bank” has different meanings depending on whether “river” or “loan” appeared earlier in the sentence. Accountants use this sort of context every day, though with numbers and transactions. For example, when an auditor reviews transactions in a chart of accounts for fraud, the context is the business, such as a home improvement store versus a religious institution.
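The attention computation at the heart of that paper can be sketched in a few lines. This is a toy illustration with made-up word vectors, not a real model; the variable names follow the paper's notation (Q, K, V for queries, keys, and values):

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention from "Attention Is All You Need":
    # softmax(Q K^T / sqrt(d)) V
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)    # how strongly each word relates to each other word
    scores -= scores.max(axis=-1, keepdims=True)    # subtract max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V, weights      # weighted blend of the value vectors

# Three made-up 4-number vectors standing in for three words in a sentence
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
output, weights = attention(Q, K, V)
```

Each row of `weights` shows how much importance one word assigns to every word in the passage, which is the "levels of importance" idea described above.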
- Large language model (LLM): This is the backbone of generative AI. An LLM is trained on enormous amounts of text and data, much of it scraped from the internet. Generative AI applies the statistical patterns learned from that data to respond to prompts. This is analogous to an accountant taking data from the chart of accounts and creating graphs and charts to identify trends and outliers.
- Prompt: This is the text entered into a generative AI tool. Before generative AI can create a response, it needs the prompt so it can analyze and determine the context and weight of each word. Then it can start generating a response using the same context-and-weight methods. To choose each subsequent word of its response, it uses the context and weight of the text it has already generated. Liken this to a tax client asking questions about their tax return: The accountant weighs the client’s words and brings in context from the tax return to respond.
- Tokens: Tokens are the units of text an AI model actually processes. Usually, one token is one word or one piece of a longer word. The tokens, along with their order, are what the model matches against the patterns in its training data to create responses.
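As a toy illustration of how longer words split into sub-word tokens (the tiny vocabulary below is invented; real tokenizers learn theirs from data and use more sophisticated matching):

```python
# Invented vocabulary of sub-word pieces for illustration only
VOCAB = {"account", "ing", "deprec", "iation", "tax", "the"}

def tokenize(text):
    # Greedy longest-match: repeatedly take the longest vocabulary entry
    # that the remaining text starts with; fall back to a single character.
    tokens = []
    while text:
        match = max((v for v in VOCAB if text.startswith(v)),
                    key=len, default=text[0])
        tokens.append(match)
        text = text[len(match):]
    return tokens

print(tokenize("accounting"))    # ['account', 'ing']
print(tokenize("depreciation"))  # ['deprec', 'iation']
```

Notice that "accounting" is two tokens but "tax" is one, which is why token counts and word counts differ.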
- Temperature: This is how strictly the AI’s probability model selects the next token. It is typically a value between 0 and 1, often adjusted in 0.1 increments. A lower temperature is more rigid and factual, while a higher temperature, such as 0.9, explores probability outliers, which can be useful for creative tasks but raises the risk of mistakes. Foundation model providers set a default temperature within the model, and depending on how the model is accessed, users may or may not be able to adjust it. For accounting work, a lower temperature is safer when working with factual data.
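A rough sketch of what temperature does to the model's next-token probabilities. The candidate words and their probabilities here are invented for illustration:

```python
import math

def apply_temperature(token_probs, temperature):
    # Rescale a next-token probability distribution by temperature.
    # Near 0: the most likely token dominates. Higher: the distribution
    # flattens, so unlikely tokens get a real chance of being picked.
    t = max(temperature, 1e-6)  # avoid dividing by zero
    scaled = {tok: math.log(p) / t for tok, p in token_probs.items()}
    m = max(scaled.values())
    exp = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(exp.values())
    return {tok: v / total for tok, v in exp.items()}

# Made-up probabilities for the word after "interest":
next_token = {"rate": 0.7, "expense": 0.2, "banana": 0.1}
low = apply_temperature(next_token, 0.1)   # "rate" becomes near-certain
high = apply_temperature(next_token, 0.9)  # outliers stay in play
```

At temperature 0.1 the model almost always picks "rate"; at 0.9 a word like "banana" keeps a small but real probability, which is exactly the creative-versus-factual trade-off described above.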
- Foundation model: These are the LLMs (as well as other non-language-based models) many vendors use to add AI capabilities. These are familiar names like OpenAI’s GPT models (used by ChatGPT), Google’s Gemini models, Anthropic’s Claude models, and others. Instead of building their own LLM, which is expensive, vendors contract with or license a product from a foundation model provider.
- Alignment: AI is designed to help you. Alignment is AI’s ability to stay on task toward a provided goal. If a prompt starts with “I want to manage my inbox but never delete anything” — the goal — then the AI’s alignment is to help but never delete. Further information within the prompt would specify how emails are managed, but the alignment remains the same.
- Fine-tuning: This is a process a vendor uses to narrow a foundation model’s scope and shape how it responds. An example would be a tax research vendor: One part of its product is fine-tuning the foundation models to respond in a consistent structure. That structure can then be transformed to match user preferences, such as memo styles for the research. Stated differently, fine-tuning determines the answer to a question, but the answer can then be presented in different formats.
- Retrieval-augmented generation (RAG): RAG is a method for giving AI specific data to focus on when answering questions. For example, we have our standard operating procedures stored as text and connected to one of the foundation models, so my team can ask questions about our own processes. This is a very simple example of RAG. More complex RAG systems draw on much larger datasets, such as using a chart of accounts to make suggestions for categorizing transactions.
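The retrieval step can be sketched like this. Real RAG systems match on vector embeddings rather than simple word overlap, and the policy text below is made up, but the shape of the process is the same: find the most relevant document, then paste it into the prompt:

```python
# Made-up standard operating procedure snippets
SOP_DOCS = [
    "Expense reports must be submitted within 30 days with receipts attached.",
    "New vendors require a W-9 on file before the first payment is issued.",
    "Month-end close begins on the first business day of the following month.",
]

def retrieve(question, docs):
    # Pick the document sharing the most words with the question
    # (a stand-in for the embedding similarity search real systems use)
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question):
    context = retrieve(question, SOP_DOCS)
    return f"Answer using only this policy:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("When are expense reports due?")
```

The model then answers from the retrieved policy text rather than from its general training data, which is what keeps responses grounded in the firm's own procedures.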
- Context window: AI can keep only a limited amount of text in memory. This is the context: all the text of all the prompts and responses over the course of a chat with an AI tool. Several of Anthropic’s models have a context window of 200,000 tokens. An average novel is around 85,000 words, so that context window is roughly the size of two novels. The tokens in the context window are used for the context and weights of the next response. Over time, the foundation models will compact and summarize prior prompts and responses to allow continued prompting. One word of caution: Alignment can be lost if compacting the context window does not preserve the original goal in the summary. In a long research session on a topic, concepts discussed earlier might be lost.
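The back-of-the-envelope arithmetic behind that comparison, assuming the common rule of thumb that one English word is about 1.3 tokens (so each token is roughly three-quarters of a word):

```python
context_tokens = 200_000   # context window of several Anthropic models
words_per_token = 0.75     # rough rule of thumb; varies by text and tokenizer
novel_words = 85_000       # average novel length cited above

approx_words = context_tokens * words_per_token  # about 150,000 words
novels = approx_words / novel_words              # about 1.8 novels
```

The exact ratio depends on the tokenizer and the text, but "roughly two novels" is the right order of magnitude.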
- Model Context Protocol (MCP): MCP is a method for allowing AI to connect to existing tools and systems, and some vendors have started to publish their own MCP servers. There are different analogies, but mine is using an app on my phone to connect to my general ledger: The phone maker and the app developer agree to use the common language of the internet so the two can communicate. In the same way, the foundation model (the phone) and the MCP server (the app) agree to use the common language of MCP to communicate.
- Agentic AI: This is AI that takes actions. Sending emails and entering data are some basic actions agentic AI might perform. An AI agent determines how to accomplish the goal I give it using some of the previously mentioned technology. An example is entering data into general ledger software: I provide the AI agent with PDF receipts and prompt it to enter the vendors and transactions into Xero. The AI agent will connect to the Xero MCP server, read the PDFs, check whether each vendor exists, and enter the receipts.
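That workflow can be sketched as a sequence of tool calls the agent decides to make. Every function name below (`read_pdf`, the `xero_*` calls, and the data they return) is a hypothetical stand-in for illustration, not Xero's real API:

```python
LEDGER = {"vendors": set(), "transactions": []}  # stand-in for the general ledger

def read_pdf(path):
    # Pretend extraction; a real agent would parse the PDF's contents
    return {"vendor": "Acme Hardware", "amount": 142.50}

def xero_vendor_exists(name):          # hypothetical MCP tool
    return name in LEDGER["vendors"]

def xero_create_vendor(name):          # hypothetical MCP tool
    LEDGER["vendors"].add(name)

def xero_enter_transaction(vendor, amount):  # hypothetical MCP tool
    LEDGER["transactions"].append((vendor, amount))

def agent_process_receipt(path):
    receipt = read_pdf(path)                       # step 1: read the PDF
    if not xero_vendor_exists(receipt["vendor"]):  # step 2: check for the vendor
        xero_create_vendor(receipt["vendor"])      # step 3: create it if missing
    xero_enter_transaction(receipt["vendor"], receipt["amount"])  # step 4: enter it

agent_process_receipt("receipt_001.pdf")
```

The defining feature of agentic AI is that the model, not the user, decides which of these tool calls to make and in what order.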
About the author
Wesley Hartman is the founder of Automata Practice Development.
Submit a question
Do you have technology questions for this column? Or, after reading an answer, do you have a better solution? Send them to jofatech@aicpa.org.
