Guidance for using AI tools

Artificial Intelligence tools are evolving daily. The University of Colorado wants to take a conscientious approach that prioritizes keeping CU data safe. When using any third-party AI tool, CU System Administration staff are responsible for understanding how to use the tool effectively, safely and in compliance with existing CU policies and laws.

The Office of Information Security has created a guidance document to educate users of AI tools about key considerations in effective and safe usage.

Guidance for AI Tool Use


In addition to outlining steps to take before using AI, the guide links to specific policies that apply to a wide range of technologies, including AI tools. It also outlines the most important considerations:

Be aware of existing policies and processes

  1. Follow all procurement processes for an IT purchase, including a security review.
  2. Understand the data classification of the information being loaded into the tool.
  3. Understand any privacy implications for users of the tool.

Be careful about the information used by the tool

  1. Understand how the third-party developer of the AI tool handles CU data and what rights they claim.
  2. Work with security teams to ensure proper controls are in place for the tool.

Be transparent

  1. Perform extensive testing to ensure the tool provides accurate output.
  2. Review the output to ensure it meets CU expectations for being thoughtful, supportive and inclusive.
  3. Include an attribution for summaries created by an AI tool.
  4. Build in periodic reviews to maintain high standards.

There is much more to consider when using any third-party technology. The guidance document provides examples of AI tool use. We encourage you to review the document with your team and discuss the pros and cons before adopting AI tools. For guidance, contact UIS by emailing help@cu.edu and requesting AI solution design assistance.
