AI Nutrition Facts: Making Informed Decisions About Your AI Tools

zachp
Instructure

It seems like everyone’s buzzing about the capabilities of artificial intelligence (AI), and Instructure is no exception. We are committed to improving teaching and learning, and we believe AI technology, applied in a thoughtful, strategic, and ethical way, can increase educator efficiency, improve learning outcomes, and deepen student engagement and empowerment. With these goals in mind, we are integrating AI technology into our own products and platforms and leveraging our vast partner ecosystem of AI-enabled tools to deliver the best teaching and learning experiences.

Decisions about where and how to integrate AI into education are not something we take lightly, and you shouldn’t either. That’s why we’ve embraced the software industry trend of providing AI Nutrition Facts on first- and third-party AI features. These Nutrition Facts help you understand exactly what you’re getting with each feature and how your data is being used, so you can make informed decisions about which AI features and tools to adopt. Read on to learn how Instructure’s AI Nutrition Facts will help you build the right AI-enabled platform for your institution.

What are AI Nutrition Facts?

Simply put, AI Nutrition Facts mirror the nutrition facts you’d find on packaged food: they give customers full transparency about what’s inside. Careful vetting of all edtech tools is crucial for building a healthy organizational ecosystem, so use the AI Nutrition Facts to confirm that a given tool matches your organization’s AI vetting criteria and use case. You can find our first-party Nutrition Facts in our AI in Education community group, and third-party Nutrition Facts on the Emerging AI Marketplace.
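For teams that track their vetting criteria programmatically, the label’s fields lend themselves to simple automated checks. The sketch below is purely illustrative; the field and criteria names are invented for this example and are not an Instructure API or official schema:

```python
# Hypothetical sketch: checking an AI Nutrition Facts label against an
# institution's vetting criteria. All field names are illustrative,
# not an official Instructure schema.

def vet_tool(label: dict, criteria: dict) -> list:
    """Return the names of label fields that fail the institution's criteria."""
    failures = []
    # Institution forbids tools whose base model was trained on user data.
    if criteria.get("forbid_training_on_user_data") and label.get("trained_with_user_data"):
        failures.append("trained_with_user_data")
    # Institution requires a human review step for AI-driven decisions.
    if criteria.get("require_human_in_the_loop") and not label.get("humans_in_the_loop"):
        failures.append("humans_in_the_loop")
    # Institution requires support for its own region.
    required_region = criteria.get("required_region")
    if required_region and required_region not in label.get("regions_supported", []):
        failures.append("regions_supported")
    return failures

example_label = {
    "trained_with_user_data": False,
    "humans_in_the_loop": True,
    "regions_supported": ["US", "EU"],
}
institution_criteria = {
    "forbid_training_on_user_data": True,
    "require_human_in_the_loop": True,
    "required_region": "EU",
}
print(vet_tool(example_label, institution_criteria))  # → [] (tool passes)
```

A tool that logged user training data or lacked a human-review control would come back with a non-empty failure list, flagging it for closer review.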

 

[Example AI Nutrition Facts labels: NF1.png, NF2.png, NF3.png]

Using the AI Nutrition Facts Label

The first section, Model & Data, provides information to help you understand where the AI output comes from.

[Image: ModelData.png]

  • Base Model – Indicates which version/model the functionality is built on and describes the tool’s capabilities, performance, and general applicability.
    • Guiding Questions: Did you build this LLM or are you using a model from a vendor? What are the benefits and use cases of this model? Does this bring up any equity issues?
  • Trained with User Data – Indicates if customer data was used to train the base model.
    • Guiding Questions: Is my students’ data being used to train this model for other institutions? Does this bring up any equity issues? Does this imply a need for stringent data governance policies?
  • Data Shared with Model – Indicates what customer data is sent to the model during training or processing.
    • Guiding Questions: What data does the model use to generate a useful, relevant output? Does this raise any concerns about data privacy and compliance?

The next section, Privacy & Compliance, provides information to help you determine whether student and institution data is kept safe.

[Image: PrivacyCompliance.png]

  • Data Retention – Indicates how long data related to the feature will be retained.
    • Guiding Questions: How long does this AI feature store my institution’s data?
  • Data Logging – Indicates if the feature provides logging tools that help the provider or user understand what output the AI has produced and how it was produced.
    • Guiding Questions: What records are you keeping of queries and their outputs? What are the risks if data is logged unintentionally?
  • Regions Supported – Indicates the geographical locations where the AI tool is permitted to operate. 
    • Guiding Questions: Is the tool compliant with local regulations and accessible to all intended users?
  • PII (Personally Identifiable Information) – Indicates if PII is exposed and where.
    • Guiding Questions: What PII, if any, is exposed, and in what environment does that occur?

The final section addresses how you should use the tool’s output.

[Image: Outputs.png]

  • AI Settings Control – Indicates whether there are settings to control the availability and use of the AI functionality.
    • Guiding Questions: Can I control whether the AI feature is turned on or off? Who controls the functionality, and for which users?
  • Humans in the Loop – Indicates whether AI-driven decisions can be modified, verified, and corrected by a human.
    • Guiding Questions: Are there tools to review/change/block the output of AI? What are the potential impacts if a mistake is made?
  • Guardrails – Indicates any safety mechanisms put in place to prevent undesirable outcomes, biases, or harmful human actions.
    • Guiding Questions: Who is my audience? What guardrails are in place to ensure appropriate outputs? 
  • Expected Risks – Indicates any expected risks associated with use of the tool.
    • Guiding Questions: What are some known shortcomings of this feature? What are some potential security risks?
  • Intended Outcomes – Provides the intended outcome(s) of the tool, such as improving learning efficiency, providing personalized feedback, or streamlining administrative tasks.
    • Guiding Questions: What are the benefits of using this feature? What outcomes are expected, and how will you measure them?
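Taken together, the fields from all three sections of the label can be summarized as one structured record. The Python dataclass below is a hypothetical sketch with invented field names and sample values, not an official Instructure schema:

```python
from dataclasses import dataclass

# Hypothetical sketch of an AI Nutrition Facts label as a structured record.
# Field names mirror the sections discussed above but are illustrative only.

@dataclass
class AINutritionFacts:
    # Model & Data
    base_model: str
    trained_with_user_data: bool
    data_shared_with_model: list
    # Privacy & Compliance
    data_retention: str
    data_logging: bool
    regions_supported: list
    pii_exposed: str
    # Outputs
    ai_settings_control: bool
    humans_in_the_loop: bool
    guardrails: str
    expected_risks: str
    intended_outcomes: str

# Sample record with invented values for a fictional feature.
example = AINutritionFacts(
    base_model="Vendor-hosted LLM (illustrative)",
    trained_with_user_data=False,
    data_shared_with_model=["course content"],
    data_retention="30 days",
    data_logging=True,
    regions_supported=["US", "EU"],
    pii_exposed="None",
    ai_settings_control=True,
    humans_in_the_loop=True,
    guardrails="Output content filtering",
    expected_risks="May produce inaccurate output",
    intended_outcomes="Streamline administrative tasks",
)
```

An institution could fill in one such record per tool during vetting, then compare the records side by side when deciding which features to enable.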


More Questions? For more information, visit our Instructure Community to find AI resource documents, recent AI product updates, and other AI blog posts. You can also stop by our AI Resource Hub for more insights and check out our InstructureCon session recording on the innovative and intentional use of AI.
