Question 1/10
You are a customer support specialist using a public AI tool to help draft a response to a complex customer problem. The customer's original email contains their full name, home address, and a detailed description of their service issue.
Based on the ACT framework, which action best demonstrates responsible AI use when handling the data?
Telling AI 'do not share' does not prevent personal data from entering the public tool's context — the data is still exposed once pasted.
Using generic placeholders protects the customer's personal data while still giving AI enough context to draft a helpful response — this is responsible AI use (a minimal redaction sketch follows these options).
Deleting chat history after the fact does not undo the data exposure that occurred when the personal information was submitted to a public tool.
Inputting the full email with personal details into a public AI tool exposes sensitive customer data, which violates responsible data handling principles.
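The placeholder approach from the correct option can be applied systematically before any text reaches a public tool. Below is a minimal Python sketch of that idea; the `redact` helper, the field values, and the placeholder names are illustrative assumptions, not part of the ACT framework or any particular tool.

```python
# Minimal sketch: swap known customer PII for generic placeholders
# before pasting an email into a public AI tool. The helper and the
# placeholder names here are illustrative, not a standard API.

def redact(text: str, pii: dict[str, str]) -> str:
    """Replace each known PII value with a generic placeholder."""
    for value, placeholder in pii.items():
        text = text.replace(value, placeholder)
    return text

email = (
    "Hi, my name is Jane Doe and I live at 123 Maple St. "
    "My weekly delivery has arrived damaged twice this month."
)

safe_email = redact(email, {
    "Jane Doe": "[CUSTOMER_NAME]",
    "123 Maple St": "[CUSTOMER_ADDRESS]",
})

print(safe_email)
# Hi, my name is [CUSTOMER_NAME] and I live at [CUSTOMER_ADDRESS]. ...
```

The AI still gets the service-issue details it needs to draft a useful reply, while the name and address never leave your side of the tool.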
Question 2/10
When you prompt AI to generate a response, which description best explains the core action the AI model is performing?
AI models do not access a real-time database — they rely on patterns learned during training, not live external lookups.
AI does not synthesize consensus from human experts — it generates responses based on statistical patterns in training data.
AI models do not execute fixed pre-written patterns — they generate responses dynamically based on learned statistical relationships.
AI models generate new content by identifying statistical patterns from training data and applying those patterns to new inputs.
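The idea of "identifying statistical patterns from training data and applying them to new inputs" can be made concrete with a toy example. The bigram model below is a deliberately tiny stand-in: real models learn vastly richer patterns, but the principle is the same — count patterns in training text, then use those counts to pick likely continuations.

```python
# Toy illustration of pattern-based generation: a bigram model that
# "trains" by counting word pairs, then generates by following the
# most frequent continuation. A deliberately tiny stand-in for what
# large models do with far richer statistics.
from collections import Counter, defaultdict

training_text = "the cake is ready the cake is fresh the order is ready"

# "Training": count which word tends to follow each word.
patterns = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    patterns[current][nxt] += 1

# "Inference": apply the learned counts to a new input.
def continue_phrase(start: str, length: int = 4) -> list[str]:
    out = [start]
    for _ in range(length):
        followers = patterns.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return out

print(" ".join(continue_phrase("the")))  # "the cake is ready the"
```

Note there is no live database lookup, no panel of experts, and no pre-written response: the output is generated fresh from learned statistics, which is exactly what the correct option describes.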
Question 3/10
You are a data analyst for a subscription food service, using a new AI-powered dashboard to identify trends in customer behavior. You notice the dashboard accurately categorizes customers who are likely to unsubscribe, even though you never programmed specific rules for those categories.
Which statement best describes how the AI model and the application interface worked together to achieve this?
The roles are reversed — the interface handles display, not learning; the AI model performs the pattern recognition and inference.
The application interface does not perform reasoning — that is the AI model's function. The interface is responsible for presenting results to the user.
The AI model and the interface are distinct components. Machine learning does not require manually programming every scenario — that is the opposite of how it works.
This correctly describes the division of roles: the application interface presents results visually, while the underlying AI model uses machine learning to find patterns and generate inferences.
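That division of roles can be pictured as two layers: a model layer that produces inferences, and an interface layer that only formats them. The sketch below uses a trivial hand-written scoring rule as a stand-in for a trained churn model — the separation of concerns is the point, not the model itself.

```python
# Sketch of the two-layer split: the "model" infers, the "interface"
# only displays. The scoring rule is a trivial stand-in for a real
# trained churn model, whose weights would be learned from data.

def churn_model(customer: dict) -> float:
    """Model layer: turns customer features into an inference."""
    score = 0.0
    if customer["weeks_since_last_order"] > 3:
        score += 0.5
    if customer["support_tickets"] > 2:
        score += 0.3
    return min(score, 1.0)

def dashboard_row(customer: dict) -> str:
    """Interface layer: no reasoning, just presentation."""
    risk = churn_model(customer)
    label = "LIKELY TO UNSUBSCRIBE" if risk >= 0.5 else "stable"
    return f"{customer['name']:<12} risk={risk:.0%}  {label}"

customers = [
    {"name": "A. Rivera", "weeks_since_last_order": 5, "support_tickets": 3},
    {"name": "B. Chen", "weeks_since_last_order": 1, "support_tickets": 0},
]
for c in customers:
    print(dashboard_row(c))
```

Swapping the two layers — as the incorrect options do — would mean asking the display code to learn patterns, which is not what an interface is for.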
Question 4/10
You are a learning and development coordinator. You use Gemini to help you outline a presentation for your department's leadership team. Gemini generates a comprehensive and factually correct slide deck outline, but the tone is highly technical and generic.
Based on the "AI as a collaborator" mindset, what is the most effective next step?
Asking AI to simply use a 'more professional tone' is vague and does not leverage your unique knowledge of the specific audience.
This is the collaborator mindset in action — you bring your unique contextual knowledge of the audience to guide Gemini in producing a tailored, audience-appropriate revision.
Using the deck as-is without revision misses the opportunity to apply your expertise and refine the output for the specific audience.
Asking for design recommendations shifts focus away from the core issue — the tone and audience fit — which requires your human judgment, not design advice.
Question 5/10
You are a bakery manager. You are using Gemini to help you draft an email response to a customer who received the wrong cake. Gemini generates the following output: "We apologize for the mistake. Our team will have a new cake ready for you by the end of the day." You know that the bakers won't actually have a new cake ready until tomorrow morning.
Based on the ACT framework, what is the most responsible next step?
Sending inaccurate information — even if AI generated it — misleads the customer and damages trust. Gemini does not have real-time knowledge of your bakery's schedule.
Adding a disclaimer does not fix the inaccurate promise — it still sets the wrong expectation with the customer.
Simply regenerating without providing corrected context will likely produce the same inaccurate timeline again.
The ACT framework requires you to review and verify AI outputs before acting. Here, you apply your real-world knowledge to correct the inaccuracy before sending — this is responsible AI use.
Question 6/10
You are a small business owner running a landscaping company. You use Gemini to help you draft an email pitch to a potential commercial client. You notice that the output includes a persuasive pitch but completely misses the client's specific request to guarantee weekend-only maintenance to avoid disrupting their retail customers, which you know is crucial for winning this bid.
From the options below, what is the most effective next step to apply your critical thinking and refine this output?
Generating multiple versions is inefficient and does not guarantee the key detail will appear. The better approach is to apply your knowledge and fix it directly.
This is the correct use of critical thinking — you recognize the gap, apply your expertise about the client's needs, and manually insert the crucial business-specific detail.
Changing the persona to 'expert salesperson' does not address the specific missing detail about weekend-only maintenance.
Gemini does not have real-time knowledge of this client's current requirements. Assuming the constraint is no longer needed without verification risks losing the bid.
Question 7/10
You are a market researcher. You are using Gemini to help you summarize customer feedback about a newly released product. You notice that the output focuses only on the positive reviews and completely ignores the critical feedback.
From the options below, what is the most responsible next step to increase the reliability of this output?
Submitting a one-sided summary as a final report ignores critical feedback and produces an unreliable and misleading analysis.
Converting a biased summary into a visual slide deck does not fix the underlying objectivity problem — it just makes a flawed output look more polished.
Evaluating the output for objectivity and iterating on your prompt is the responsible approach — it addresses the bias directly and produces a more reliable, balanced summary.
Asking Gemini to generate hypothetical negative reviews introduces fabricated data into a research report, which is misleading and undermines research integrity.
Question 8/10
You are a fundraiser for a non-profit organization. You use Gemini to help you draft an outreach email asking previous donors to contribute to the annual campaign. You enter the prompt: "You are an expert fundraiser. Write an email asking past donors to give to our annual campaign. Format the response as a short email." You notice that the generated email sounds too aggressive and does not mention the specific impact of past donations.
Based on the 4-step prompting framework, what is the most effective next step to improve this output?
Sharing personal donor data like names, ages, and addresses in an AI tool raises serious privacy concerns and is not a responsible approach.
Refining the prompt with specific tone guidance and contextual details about past donation impact directly addresses both issues identified in the output — this is effective use of the prompting framework (an example revision follows these options).
Uploading a photograph does not address the tone or missing content issues. It is an ineffective use of multimodal features for a writing task.
Changing the persona to 'expert writer' does not address the specific issues of tone and missing impact context — those require explicit instructions in the prompt.
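One way to see what "refining the prompt" looks like in practice is to lay the four parts out explicitly. The wording below is an illustrative revision, not canonical framework text: the persona and format are kept from the original prompt, while the task and context are expanded to fix the tone and supply the missing impact details.

```python
# Illustrative revision of the fundraising prompt, with the four parts
# (persona, task, context, format) laid out explicitly. The wording is
# an example only, not canonical framework text.
persona = "You are an expert fundraiser."
task = "Write an email asking past donors to give to our annual campaign."
context = (
    "Use a warm, grateful tone rather than a pushy one. Mention that "
    "last year's donations funded 1,200 meals for local families."
    # The impact figure is a placeholder; substitute your real numbers.
)
format_spec = "Format the response as a short email."

prompt = " ".join([persona, task, context, format_spec])
print(prompt)
```

Note that no personal donor data is needed for this refinement — the added context is about tone and aggregate impact, not individuals.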
Question 9/10
You are a restaurant manager using Gemini to help you plan for a busy holiday weekend. You notice that asking for a revised staff schedule, a list of daily specials using overstocked ingredients, and a draft social media post all in a single prompt yields a generic and disorganized response.
Based on prompt chaining strategies, what is the most effective next step to manage this complex task?
Using separate chats breaks the context connecting the tasks. Prompt chaining works best within the same chat, where each output can build on the previous one.
Adding 'think step-by-step' to a single overly complex prompt does not substitute for properly chaining tasks sequentially. The problem is task complexity, not reasoning depth.
Using generic templates and doing everything manually defeats the purpose of using AI as a collaborative tool for this task.
This is prompt chaining — breaking a complex task into sequential, connected prompts within the same chat so each output informs the next, resulting in a more focused and useful response.
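A concrete way to picture the chain: each prompt consumes the previous output, all inside one conversation. In the sketch below, `send` is a hypothetical stand-in for whatever chat interface you use — the chained structure is the point, not the function itself.

```python
# Sketch of prompt chaining: three sequential prompts in ONE chat,
# each building on the previous output. `send` is a hypothetical
# stand-in for your chat tool; it just appends to the shared history
# and returns a placeholder reply.

def send(chat_history: list[str], prompt: str) -> str:
    chat_history.append(prompt)
    reply = f"<response to: {prompt[:40]}...>"  # placeholder output
    chat_history.append(reply)
    return reply

chat: list[str] = []  # one shared context window

schedule = send(chat, "Draft a revised staff schedule for the holiday weekend.")
specials = send(chat, f"Using that schedule ({schedule}), list daily specials "
                      "built around our overstocked ingredients.")
post = send(chat, f"Now write a social media post promoting those specials: {specials}")
```

Because all three prompts share one history, the specials can reference the schedule and the post can reference the specials — exactly what a single overloaded prompt, or three separate chats, cannot do.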
Question 10/10
You are a golf course events manager. You used a single Gemini chat to help you plan a summer golf tournament, including brainstorming event names and creating a promotional plan. Now, you want to use Gemini to help you summarize a downloaded report about course maintenance.
From the options below, what is the most effective way to start this new task?
Instructing AI to ignore prior context is unreliable — the tournament context in the same chat still influences responses.
Starting a fresh chat clears the context window entirely, ensuring the maintenance report summary is not skewed by the unrelated golf tournament planning conversation (see the context-window sketch after these options).
Switching models within the same chat thread does not reset the conversation history — the prior context remains and can still influence the output.
Uploading the report into the existing chat mixes unrelated contexts — the tournament planning history could skew the maintenance summary.
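All four options above turn on the same mechanics: a chat's context window is, in effect, its accumulated message history, and everything in it can influence the next response. A minimal sketch, assuming a simple message-list representation of that history:

```python
# Minimal sketch of why a fresh chat helps: a chat's context is its
# accumulated message history, and everything in it can shape the next
# response. The list representation is an assumption for illustration.

tournament_chat = [
    "Brainstorm names for our summer golf tournament.",
    "Draft a promotional plan for the tournament.",
]  # this history rides along with any new prompt sent in this chat

# The effective option: start a new chat -> an empty context window.
maintenance_chat: list[str] = []
maintenance_chat.append("Summarize the attached course-maintenance report.")

# Switching models or writing "ignore the above" would still operate
# on tournament_chat's full history, which is why those options fail.
print(len(tournament_chat), "prior messages vs.", len(maintenance_chat) - 1)
```

The new chat carries zero prior messages, so nothing from the tournament planning can leak into the maintenance summary.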