Course Content
Investigate the potential of AI in your practice
In this lesson, you’ll discover many exciting ways that educators are tapping into AI tools to advance their teaching practice. Three key benefits will be discussed: time-savings, differentiation, and lesson enhancement. Fictional educator scenarios are used to provide helpful context as you prepare for upcoming activities in this course. You’ll also learn some helpful tips for completing the activities. In addition to the activities hosted directly on the Teacher Center site, this course will ask you to perform tasks in the AI tool of your choice, such as Gemini or ChatGPT. The instructions for these activities will be written for Gemini, which is freely available, but select whichever tool you like. Keep in mind that different tools may produce different results — getting an output that doesn’t match the activity is OK, as long as you review it to make sure it’s accurate and useful.
Generative AI for Educators

This lesson underscores the importance of evaluating AI outputs for accuracy and ensuring ethical use of AI. You will learn how to review outputs for factual content and how to identify and avoid bias. In addition, the importance of considering privacy and security implications will be highlighted. By the end of this lesson, you’ll know how to apply your own good judgment to ensure your use of AI is appropriate and beneficial.

Address AI limitations

In this lesson, you’ll explore strategies to help you address potential drawbacks of using AI. Consider the following limitations and read about strategies to address them.

Unfair bias:

AI tools are trained on information produced by humans, making them prone to any human biases that exist within the original data. For instance, suppose you ask an AI tool to generate an image of a scientist for a class slideshow. If the images mostly depict white men, you might unintentionally communicate that all scientists are white men. Human review and intervention, such as prompting for more diverse representation, helps create more inclusive results.

 
Hallucinations and unreliable information:

AI sometimes produces a hallucination, which is any inaccurate or misleading output. Here’s an example: Perhaps you tell an AI tool that your sister’s name is Robin. There is also a species of songbird called a robin. The tool might hallucinate and respond that your sister is a songbird.

Here are some additional examples of conversational AI hallucinations:

  • An AI tool summarizing a news article might invent quotes or events that don’t appear in the original article.

  • When asked a complex question that requires real-world understanding, the AI tool might produce a response that sounds plausible but is ultimately meaningless.

These hallucinations are why it’s important to be aware of this limitation and to double-check information from conversational AI tools against trusted sources. To use AI tools responsibly, fact-checking is essential. One strategy is to ask the tool to provide the sources of its information, or a bibliography, so you can investigate their reliability.

 
Academic integrity and cheating:

Many educators worry about irresponsible use of AI, as it can lead to plagiarism and hinder the learning process. When calculators became more prevalent in schools, there were similar concerns about the potential for students to use them as shortcuts, rather than truly understanding the material. This caused a shift in how math was taught, putting more emphasis on showing work. By understanding what AI tools can and can’t do, educators will be prepared to use AI to support their practice.

Privacy and security:

AI tools collect and analyze large amounts of data, so it’s important to consider privacy and security. Privacy is a user’s right to control their personal information. Security involves safeguarding that information to prevent unauthorized access. Review the relevant AI tool’s privacy policy if you have any questions. It’s your responsibility to disclose your use of AI, double-check outputs, and understand your organization’s AI guidelines, especially before inputting any private student information.

Consider responsible use

The scenarios portrayed in this project are fictitious. They are intended for pedagogical purposes only.

I’m a fifth grade science teacher. I have been using an AI tool to help me do a lot of the planning work for my class’s lab activities. For example, it helps me create lab report worksheets for them to fill out and also check that the reading level is right for my students. It’s been a really useful tool for me — it saves me a lot of time planning so I can focus more on setting up successful experiments. It also means my students get the clearest version of my instructions so they feel empowered to go hands-on in class.  

 

While AI can offer many benefits, it’s important to also consider its limitations. 

Understanding the limitations of AI can also help the fifth grade science teacher use it appropriately. Some challenges may include:

  • Overreliance: AI tools can provide useful scaffolding, but they shouldn’t be treated as a source of truth. Human evaluation of AI results is required.

  • Unreliable information: Only humans have the critical thinking skills needed to judge whether information is accurate. To use AI tools responsibly, always fact-check the output.

  • Unfair bias: Because AI tools are trained on information produced by humans, they are prone to any human biases that exist within the data. Human review is needed to catch and correct biased outputs.


Further, appropriate and responsible use of AI depends on the specific application and the user’s ability to verify its outputs. Using incomplete, inaccurate, or biased training datasets can lead to discriminatory or unreliable outputs. In addition, algorithmic bias can occur if an AI system, even unintentionally, favors or disadvantages certain groups based on the data it’s trained on or the way it’s programmed. This can produce unfair outcomes, such as biased hiring decisions or inaccurate facial recognition software. And while AI can predict trends based on past data, accurately predicting future events in complex and dynamic environments remains a challenge due to unforeseen circumstances, unpredictable human behavior, and limitations in understanding the full scope of influential factors.

With these challenges in mind, the following are some suitable situations for generative AI, along with issues to be aware of:

Creative content generation

AI can create unique paintings, melodies, and other art forms. While originality and beauty are subjective, human experts can assess the quality of the output. In addition, AI can generate storylines, scripts, poems, and other creative texts. Here, factual accuracy and alignment with the intended style are crucial metrics for evaluation.

Data augmentation and simulation

AI can simulate complex systems, such as economic models or weather patterns. For example, a social studies lesson might explore the potential outcomes of making a different historical decision or an environmental science class activity could investigate the simulated impact of different levels of greenhouse gas emissions on weather patterns. In all cases, it’s important to compare the simulation’s behavior to historical data or scientific principles.

The bottom line is that human judgment is crucial in all domains. AI has limited understanding of real-world context and can generate outputs that are factually inaccurate or that miss subtle nuances of meaning. These are the hallucinations described earlier in this lesson. Responsible use of AI requires understanding its limitations and carefully evaluating outputs for accuracy, potential bias, and subjective aspects.

Review this course’s responsibility checklist

The following responsibility checklist was created specifically for this course. Read through the list, and consider these ways to practice using AI responsibly.

 

Review AI outputs:

  • Verify the accuracy of any content you’re planning to share. Fact-check outputs using reputable sources, such as academic articles on Google Scholar.
  • Read and edit the outputs to personalize the content you create.

 

Disclose your use of AI:

  • Know and follow your organization’s policy or guidelines related to the use of AI.
  • Tell your audience and anyone it might affect that you’ve used or are using AI.

  • APA, Grammarly, and MLA offer helpful advice for how to cite the use of AI in your work.

Consider the privacy and security implications:

  • Again, know and follow your organization’s policy or guidelines related to privacy and security concerns.
  • Only input essential information. Don’t provide any information that’s unnecessary, confidential, or private, as doing so could compromise the security of an individual or your organization.

  • Read the supporting documents associated with the tools you’re using, including the terms and conditions and any resources that describe how the model was trained and what privacy safeguards are in place.

Use AI thoughtfully:

Always use your judgment to ensure you’re using AI for good. Consider the following:

  • Do you have the knowledge necessary to confirm whether the output is correct?

  • If you use AI for a particular task, will it negatively affect anyone around you? Does it reinforce or uphold biases that may cause damage to any groups of people? 

Feel free to download this checklist as a PDF to use whenever you need it! 

Just click the link: Responsibility checklist PDF

Also consider the strategies you generated previously in this lesson to determine whether the course checklist can be improved for your practice. The more you learn about ethical use of AI, the better prepared you will be to use it!

 