Test your GenAI IQ

As generative AI (GenAI) tools like Microsoft Copilot become more integrated into daily workflows, so do the myths and misunderstandings about what they can — and can’t — do. Let’s break down 10 common misconceptions about GenAI and Copilot, pairing each myth with a true or false question and a short explanation to help you use these tools more effectively.

1. True or False: The acronym “AI” always refers to Generative AI (GenAI).

Answer

False

Generative AI is just one type of AI. “AI” is an umbrella term for artificial intelligence that can refer to many different technologies, from simple automation to advanced language models. Its meaning often depends on the context; there’s no single definition that fits all uses.

Deeper Dive

This Google Cloud video explains AI, machine learning and generative AI. Below is a simplified explanation of each.

Most current AI technologies rely on “machine learning” algorithms that can represent complex patterns in large data sets. What many people mean when they use the word “AI” today is one specific type, “generative AI” (models like ChatGPT, Gemini and Claude). Generative AI refers to AI models that can generate new content or data that wasn’t in their original data set.

AI models can be created using all kinds of different data sets, and they can be used for more than just creating new content — many models are instead “predictive.” Predictive AI uses past data to predict or categorize something, often to optimize some future state of affairs. For example, Netflix uses AI models that learn patterns in a dataset of your and others’ viewing behavior to predict what you will want to watch next. A laundry machine might be advertised as “AI” because it uses sensors to determine the most efficient settings for your wash, based on patterns in a dataset of washing machines and fabric types.

In Generative AI models, analogous datasets consist of massive amounts of text, images, audio or video, typically scraped from the internet. Instead of predicting something, these models generate new content based on the patterns in these datasets.
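To make the distinction concrete, below is a deliberately tiny sketch in Python. It is an illustration only (the data and functions are toy examples; real models use neural networks trained on vastly more data), but it shows the difference in kind: the predictive function ranks a likely next choice from past behavior, while the generative function produces new text one word at a time, based on which words tend to follow which.

    import random
    from collections import Counter, defaultdict

    # Predictive AI in miniature: use past viewing pairs (watched A, then B)
    # to predict what a viewer will want next.
    viewing_pairs = [("drama", "thriller"), ("drama", "thriller"), ("drama", "comedy")]
    pair_counts = Counter(viewing_pairs)

    def predict_next(genre):
        # Return the genre most often watched after `genre` in the history.
        candidates = {b: n for (a, b), n in pair_counts.items() if a == genre}
        return max(candidates, key=candidates.get)

    # Generative AI in miniature: learn which word tends to follow which,
    # then generate brand-new text one word at a time.
    words = "the model learns patterns and the model generates new text".split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

    def generate(word, length=6):
        output = [word]
        for _ in range(length):
            word = random.choice(follows[word]) if word in follows else random.choice(words)
            output.append(word)
        return " ".join(output)

    print(predict_next("drama"))  # predictive: ranks "thriller" from past data
    print(generate("the"))        # generative: new text not in the training sentence

Large language models do the same kind of next-word prediction, just with billions of learned parameters instead of a lookup table, which is also why their fluent output can still be wrong (see question 3).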

2. True or False: GenAI understands everything like a human.

Answer

False

People often assume GenAI systems understand language the same way humans do — grasping meaning, context and intention.

GenAI tools process patterns in data — they don’t “understand” in a human sense.

Deeper Dive

AI models like Copilot generate responses based on statistical patterns in language, not comprehension or consciousness. They can mimic understanding but lack awareness, emotions or intent. They predict what comes next in a sequence of words, drawing on their training rather than genuine comprehension.

To better understand this nuance, read Why it’s a mistake to ask chatbots about their mistakes from Ars Technica. Understanding why large language models cannot apply introspection will help you understand how they work in general.

3. True or False: AI can make mistakes and hallucinate facts.

Answer

True

GenAI tools may produce plausible-sounding but incorrect or fabricated information. Human oversight is essential.

Deeper Dive

An AI hallucination happens when a generative AI tool like Copilot or ChatGPT makes something up that sounds believable — but isn’t true.

It’s like when someone confidently gives you wrong directions. The AI isn’t trying to lie — it just generates a response based on patterns in its training data, and sometimes those patterns lead to inaccurate or fictional information.

For example, Copilot was asked to summarize responses from a survey. It produced a list that included two bullets that could not be found in any of the original survey responses. They sounded logical and may well have been common responses in other organizations’ surveys, but in this case they were an “AI hallucination.”

Even telling Copilot to refer only to the data you provide does not always prevent a hallucination, because its responses still draw on its past training.

How to avoid being misled:

  • Always double-check important facts.
  • Use Copilot for drafting, brainstorming and summarizing, not for final decisions.
  • If something sounds off, ask for sources or verify with trusted information.
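One practical technique is to write prompts that force the tool to ground its answer in your material. For example (a suggested pattern, not an official CU template):

“Using only the attached survey responses, list the top five themes. Quote at least one response verbatim for each theme. If a theme is not supported by a quote, reply ‘not found’ rather than guessing.”

Requiring verbatim quotes makes it easier to spot a bullet that, like the hallucinated survey items above, has no source in your data.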

4. True or False: CU employees should not use ChatGPT or any GenAI tool other than Copilot.

Answer

False

You can use other GenAI tools, such as ChatGPT, Claude or Gemini, for general tasks like brainstorming, as long as you don’t enter anything involving student records, internal documents or confidential university information, commonly referred to as CU data.

When working with CU data, use approved tools like Microsoft Copilot. Depending on the data classification of the information, you may need the paid enterprise license version, while lower-risk data might be suitable for the free web version of Copilot, as long as you are signed into your CU administrative account.

Deeper Dive

Review CU System’s AI Resources page for more information on data classification and vetted tools. University policies regarding AI may vary by campus, so always refer to your campus AI guidelines.

5. True or False: You can use Microsoft Copilot without a paid license.

Answer

True

There are two versions of Copilot: the free web-based version, known as Copilot Chat, and the paid enterprise version, known as Copilot for Microsoft 365 (Copilot M365).

Any CU employee can access the free web-based Copilot version at copilot.microsoft.com. It offers basic assistance and allows up to five uploads a day. You need to sign in to your CU administrative account for security purposes when using the free web version.

If you want Copilot integrated into Microsoft 365 apps (Word, Excel, Outlook, etc.) for deeper functionality like document summarization, email drafting and data analysis, or to use Copilot with confidential CU data, you need the paid license version. A Copilot M365 license can be requested through the UIS Service Desk with your department’s budget approval.

Deeper Dive

How the two versions compare:

Integration
  • Copilot M365 (paid license): Built into Word, Excel, Outlook, Teams and SharePoint.
  • Copilot Chat (free web version): Web-based only.

Data Access
  • Copilot M365: Uses your work emails, Teams meeting notes, files and calendar.
  • Copilot Chat: No access to M365 data, but you can upload up to five files per day.

Security
  • Copilot M365: Enterprise-level agreement. Can be compliant with HIPAA and FERPA; approved for use with specific CU data classifications (public and confidential, NOT highly confidential, at CU System Administration at this time).
  • Copilot Chat: Public web access. Staff should sign in to their Microsoft account so any information added during the session is protected and then purged when the session ends. UIS recommends using the Microsoft Edge browser with Copilot.

Use Cases
  • Copilot M365: Summarize documents, draft emails, analyze data.
  • Copilot Chat: General Q&A, brainstorming.

Access
  • Copilot M365: Available for license purchase by contacting the UIS Service Desk, with budget approval from your department. After you have a license, you can access Copilot via any M365 app, at https://office.com/chat, or by adding the Copilot app to your device’s taskbar.
  • Copilot Chat: Go to https://copilot.microsoft.com, or look for the Copilot icon at the top of your Edge browser.

6. True or False: Copilot is secure by default.

Answer

False

Security depends on how Copilot is configured and governed.

Deeper Dive

Only Microsoft Copilot is approved for use with CU data — that is, university-owned information such as documents, emails, spreadsheets and other content created or handled as part of your university role. This includes public and confidential data classifications if you have a Copilot M365 paid license, or just public data if using the free version, Copilot Chat.

Important: This does not include highly confidential data, such as Social Security numbers, health records or financial account details. Neither version of Copilot is approved for that level of data sensitivity for CU System Administration at this time.

The UIS Service Desk provides several tips for using the free web version of Copilot securely.

Ensuring that FERPA-protected data is only used in vetted and approved tools is crucial for maintaining student privacy and institutional compliance. Using third-party tools that haven’t been properly vetted risks exposing sensitive university information to unauthorized access and potential data breaches.

Finally, be mindful that AI-generated meeting transcripts and notes may be subject to Colorado Open Records Act (CORA) requests, requiring careful handling and storage to maintain transparency and compliance.

7. True or False: GenAI tools consume more energy than a basic web search.

Answer

True

GenAI tools do consume significantly more energy, both in the large upfront cost of training and in the ongoing cost of inference as the tools are used.

Deeper Dive

While training large GenAI models (like GPT-5 or image generators) requires significant computational resources and energy, inference — the process of running the model to generate responses — also consumes a substantial, ongoing amount of energy.

Inference happens every time someone uses the model — generating text, images, code, etc. As usage scales to millions or billions of queries daily, the cumulative energy cost of inference will exceed that of training.
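For a purely illustrative back-of-the-envelope comparison (these figures are assumptions chosen to make the arithmetic simple, not measurements): if training a large model is a one-time cost of roughly 1,000 megawatt-hours, and each query consumes roughly 1 watt-hour, then 1 billion queries per day works out to 1,000 megawatt-hours per day. At that scale, inference repeats the energy cost of the entire training run every single day.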

The University of Colorado is committed to sustainability. We encourage employees to use Copilot and leverage the value it brings, while also making careful decisions about when it truly adds value and avoiding overuse.

Copilot relies on Microsoft’s Azure cloud, where models are trained and optimized for efficiency. Microsoft has announced a desire to be carbon negative by 2030, meaning it aims to remove more carbon from the atmosphere than it emits. Companies are investing in systems that treat and reuse wastewater for cooling, significantly reducing the demand for fresh water. Microsoft has discussed plans to build such systems, but these changes have yet to be implemented.

CU System Administration is focused on using GenAI to boost efficiency and reduce repetitive tasks, not using AI for the sake of novelty.

Sources: “A Computer Scientist Breaks Down Generative AI’s Hefty Carbon Footprint,” Saenko et al., Scientific American, May 25, 2023; “Energy Considerations of Large Language Model Inference and Efficiency Optimizations,” Fernandez et al., ACL 2025; “OpenAI’s Data Center Ambitions Collide with Reality,” Fried, Axios, July 23, 2025.

8. True or False: AI will replace human creativity and jobs.

Answer

False

There’s a common fear or belief that GenAI will completely replace artists, writers, programmers and many other creative or knowledge-based professions.

GenAI is a tool, not a total replacement. It excels at augmenting human creativity, speeding up workflows and generating drafts or ideas — but it still relies on human oversight, judgment and originality. Most effective applications involve collaboration between humans and GenAI, rather than substitution.

9. True or False: GenAI results can be biased.

Answer

True

GenAI can reflect the biases in its training data.

GenAI models can unintentionally perpetuate stereotypes or unfair assumptions. Responsible use includes reviewing outputs for bias and ensuring inclusive language and representation.

Keep a human-in-the-loop approach, always integrating human oversight of AI-generated results to ensure accuracy and avoid bias.

Deeper Dive

Watch the LinkedIn Learning video on Toxicity and bias in GenAI, from the course: LLM Foundations: Building Effective Applications for Enterprises.

10. True or False: Everyone knows how to use Copilot.

Answer

False

Effective use of Copilot requires training and experimentation. After learning how to use it with one application, like Word, you may need additional training to use it in a different application, such as Excel.

To get the most out of Copilot, users need to learn prompt techniques, understand its limitations and explore its capabilities. Adoption improves with hands-on experience and shared best practices.
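One widely used prompt pattern is to spell out a role, the context, the task and the desired output format. For example (an illustrative prompt, not an official CU template):

“You are an internal communications assistant. Using the attached policy document, draft a 150-word announcement email for staff in a friendly but professional tone, ending with a bulleted list of three key dates.”

Small changes to the role, tone or format line can change the output dramatically, which is why hands-on experimentation matters.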

Deeper Dive

Learn more by visiting the CU System SharePoint Copilot page and joining the November 19 Collab Café on Copilot prompt techniques!

 
