Question 1/10
Which AI limitation is an example of a response that is factually incorrect, despite sounding confident?
Drift refers to changes in model behavior over time, not confident factual errors.
Hallucination occurs when AI confidently states false information as if it were true — factually incorrect but convincingly presented.
Model architecture refers to the structural design of the AI system, not a type of incorrect response.
Data bias causes skewed or unfair outputs, but is distinct from confidently stating false facts.
Question 2/10
Which of the following functions best describes how AI models use their training data to create new content?
AI does not vote or aggregate consensus — it generates output based on learned patterns.
AI models are not simply executing pre-written code for each output — they generate responses dynamically.
AI does not access a live cloud library during generation — it relies on patterns learned during training.
AI models generate new content by applying statistical patterns learned from their training data.
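To make the idea of "statistical patterns learned from training data" concrete, here is a deliberately tiny sketch — a bigram word model, not Gemini's actual architecture. The corpus, function names, and sampling scheme are all illustrative assumptions; real models learn vastly richer patterns, but the principle is the same: count regularities during training, then sample from them to produce new text.

```python
import random
from collections import defaultdict

# Toy illustration (NOT how Gemini works internally): "training" counts
# which word tends to follow which, and "generation" samples new text
# from those learned statistics.
def train(corpus):
    follows = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            follows[a].append(b)
    return follows

def generate(follows, start, length=5, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

corpus = ["the cat sat on the mat", "the dog sat on the rug"]
model = train(corpus)
# The output can be a sentence that never appeared verbatim in the
# corpus, stitched together from learned word-to-word patterns.
print(generate(model, "the"))
```

Every adjacent word pair in the generated sentence was seen during "training", yet the whole sentence may be new — which is exactly the distinction the correct answer draws: generation from learned patterns, not lookup or voting.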
Question 3/10
What is the primary role of an AI model, like Gemini?
Visual interface rendering is handled by the app layer, not the AI model itself.
Database management and chat history storage are infrastructure concerns, not the model's primary role.
The AI model's core role is to process inputs and generate reasoned, contextually appropriate outputs.
API orchestration is typically handled by agents or middleware, not the base model itself.
Question 4/10
Beyond solving one-off tasks, what is a core benefit of using AI as a collaborator?
Removing human oversight is not a recommended benefit — human-in-the-loop remains essential.
AI can assist but should not be solely relied upon for specialized high-stakes professional advice.
AI supports creativity but should not replace the human as the primary creative decision-maker.
AI's greatest collaborative value is as an iterative thinking partner for complex, multi-step challenges.
Question 5/10
Which action is an example of checking AI outputs for objectivity?
Cross-referencing with a trusted source is the standard method for checking factual objectivity of AI outputs.
The quantity of sources cited doesn't guarantee objectivity — quality and cross-verification matter more.
Word count has no bearing on the objectivity or accuracy of the content.
Reviewing for sensitive language checks for bias in tone, not factual objectivity specifically.
Question 6/10
Why is the human-in-the-loop principle essential when collaborating with AI?
Human-in-the-loop keeps human expertise and critical judgment at the center of AI-assisted decisions.
Human-in-the-loop is about oversight and judgment, not data access control.
Logging is a technical feature — human-in-the-loop is about decision oversight, not automatic tracking.
The opposite is true — human-in-the-loop prevents AI from operating with unchecked independent judgment.
Question 7/10
What step in responsible AI use requires transparency about the use of an AI tool?
'Ask' is the first step — deciding whether the task is appropriate for AI. Transparency comes at a different step.
'Tell' is the step that requires you to be transparent and disclose when AI has been used to produce content.
Iteration is about refining prompts and outputs, not about disclosure or transparency.
'Check' involves verifying the output quality, not disclosing AI use to stakeholders.
Question 8/10
A marketer is creating a prompt to brainstorm catchy new product names for a line of running shoes. They want AI to take on a specific persona. Which details about that persona should they include in the prompt?
A cost-benefit analysis is a different task entirely — it doesn't define a persona for creative brainstorming.
A legal persona is mismatched for creative product naming in fitness marketing.
A relevant persona — creative branding expert in fitness — directly aligns with the task of naming running shoe products effectively.
An unrelated image provides no useful persona or context for the naming task.
Question 9/10
A designer is creating a prompt to generate a new ad campaign image. To ensure the image matches the color scheme the client wants, the designer attaches a reference image in the prompt. Which prompting technique is this?
Attaching an image alongside a text prompt is multimodal prompting — using more than one type of input (text + image).
Temperature is a model parameter controlling randomness, not a prompting technique involving images.
Meta-prompting involves asking the AI what information it needs — it doesn't involve attaching reference images.
Prompt chaining links sequential prompts together — it's not about attaching visual references.
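Since temperature is named above as a model parameter rather than a prompting technique, a small sketch of what it does may help. This is a generic illustration with made-up token scores, not any vendor's implementation: temperature rescales scores before sampling, so low values sharpen the distribution (more deterministic) and high values flatten it (more varied).

```python
import math

# Toy sketch of the "temperature" parameter: divide raw token scores
# by the temperature, then apply softmax to get sampling probabilities.
def softmax_with_temperature(scores, temperature):
    scaled = [s / temperature for s in scores]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens
cold = softmax_with_temperature(scores, 0.2)  # top token dominates
hot = softmax_with_temperature(scores, 2.0)   # probabilities even out
print(round(max(cold), 3), round(max(hot), 3))
```

At low temperature the most likely token is chosen almost every time; at high temperature alternatives get real probability mass — which is why temperature controls randomness but has nothing to do with attaching images.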
Question 10/10
You have been using a single Gemini chat to plan a product launch. Now you want to use Gemini to summarize a downloaded report about a different project. What is the most effective way to start this new task?
Uploading into the existing chat mixes contexts — the product launch history could skew the summary.
Starting a fresh chat clears the context window, ensuring the new summary is not influenced by prior unrelated conversation.
Switching models within the same chat doesn't resolve the context contamination from previous messages.
Instructing AI to ignore prior context is unreliable — the context window still influences responses.
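The context-window point above can be sketched in a few lines. This is a simplified assumption about how chat apps work — the function and message strings are hypothetical — but it captures the mechanism: on every turn, the prior conversation is sent along with the new request, so a fresh chat (empty history) is the only reliable way to keep unrelated context out.

```python
# Toy illustration of the context window: a chat app typically sends
# the whole conversation history plus the new message to the model.
def build_model_input(history, new_message):
    """Concatenate prior turns with the new request, as a chat app might."""
    return "\n".join(history + [new_message])

launch_chat = ["Plan the product launch timeline", "Draft launch emails"]

# Reusing the old chat: the launch history rides along with the new task.
mixed = build_model_input(launch_chat, "Summarize the attached report")

# Fresh chat: empty history, so only the summary request reaches the model.
clean = build_model_input([], "Summarize the attached report")

print("launch" in mixed, "launch" in clean)  # → True False
```

The unrelated launch material is physically present in the reused chat's input and absent from the fresh chat's input — which is why starting a new chat beats switching models or asking the AI to ignore prior context.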