Guiding Principles for Responsible Use of Artificial Intelligence (AI)
This document provides an initial set of foundational principles and guidelines for the responsible use of AI tools in East Irondequoit CSD. These principles and guidelines will be reviewed and updated as needed so they remain aligned with changes in technology, laws, and our understanding of AI’s impact on education. They are meant to promote responsible, purpose-driven use of AI that supports district goals, and they apply to all forms of AI, including Generative AI.
Definition of Generative AI: Generative Artificial Intelligence (AI) is a technology that can create content, including text, images, audio, or video, when prompted by a user. Generative AI systems learn patterns and relationships from massive amounts of data, which enables them to generate new content that may be similar, but not identical, to the underlying training data. The systems generally require a user to submit prompts that guide the generation of new content. (Adapted slightly from U.S. Government Accountability Office Science and Tech Spotlight: Generative AI)
Definition of Personally Identifiable Information (PII): Personally Identifiable Information is any combination of two or more identifying elements that, taken together, can identify a specific person.
Guiding Principles: The district supports these guiding principles as the foundation for developing guidance for the responsible use of AI. These guiding principles are based on the CoSN K12 Generative AI checklist, the Teach AI in Education Toolkit and the NIST AI Risk Management Framework. They articulate the following characteristics of trustworthy AI: it should be safe and secure; accountable and transparent; valid and reliable; privacy-enhanced; explainable and interpretable; and equitable and accessible. In addition, AI use should be purpose driven and human-centered, and AI should never be used with the intent to harm, harass, or intimidate others.
Purpose Driven:
AI should be used to enhance educational outcomes, not replace human instruction.
AI use should support the curriculum and help students achieve their educational goals.
AI use should enhance operational and administrative work, not replace it.
AI use should support district goals to deliver equitable educational experiences to students.
Privacy and Data Protection:
AI should be used only in a manner that respects user privacy.
Personally identifiable, sensitive, or confidential information should never be shared with AI systems.
AI tools must comply with all relevant data protection and privacy laws.
AI tools must have appropriate data security protections.
Transparency and Accountability:
The use of AI should be transparent. Students, parents, and staff should be informed about the AI tools being used, their purpose, and how they work.
Maintain human agency and judgment. There should be clear accountability for decisions made by AI: if an AI system makes a decision, a human should be able to explain the decision-making process.
The district shall use Generative AI responsibly and be held accountable for the performance, impact and consequences of its use in district work.
Vendors partnering with schools to provide AI tools should be accountable for the reliability of their products and comply with relevant data protection and privacy laws.
Accurate and Reliable:
AI outputs should be reviewed for accuracy to verify that they are valid and that the system continues to perform reliably.
AI outputs should be treated as a first draft and reviewed for bias and inaccuracies.
Equitable and Accessible:
All students should have access to AI tools that are inclusively designed with every learner in mind.
Fairness in AI includes concerns for equality and equity, such as addressing harmful bias and discrimination. AI may produce biased output; therefore, all AI output should be checked for accuracy, and AI systems should be tested and monitored.
Safety and Well-being:
AI should be used in a way that ensures the physical, emotional, and psychological safety of students.
AI should not be used to harm, harass, or intimidate others.
AI should be used with safety and security in mind, minimizing potential harm and ensuring that systems are reliable, resilient, and controllable by humans.
Knowledge - Promote AI Literacy: The district shall provide AI Literacy training to staff and students to support responsible use of AI. AI Literacy training should include topics such as:
Understanding how to use AI, how it works, and why it produces the results it does.
Strategies for integrating AI concepts into core academic classes.
Benefits of AI.
Risks of AI:
Bias and fairness in the outputs of AI tools.
Overreliance on AI.
Cultural competence.
Accessibility and equity.
Data privacy.
Hallucinations and deepfakes.
Ongoing Monitoring and Evaluation:
The use of AI should be regularly reviewed to assess its impact on students and the learning environment, using metrics to measure success.
The use of AI should be regularly reviewed to ensure that equity of access, data privacy, and safe and ethical usage are maintained.
The district shall adapt its use of AI tools as new information and models become available.