

Memorandum

TO: University Community


FROM: Stephan A. Byam, Vice President of Information Technology


DATE: July 29, 2025


SUBJECT: Free Artificial Intelligence (AI) Usage at UDC

The Office of Information Technology (OIT) has observed a growing use of free AI tools across campus. While these tools are often cost-effective and easily accessible, they may pose significant risks to UDC’s data and information systems if not properly managed. Many free AI platforms lack strong security and may not meet UDC’s cybersecurity standards, putting institutional data at risk.


Institutional data falls into two categories:


  • Level 0 Data: public information such as course catalogs, event schedules and general campus details.
  • Data Above Level 0: internal communications, student records and other confidential or personally identifiable information, which require stronger protection to prevent data breaches and privacy violations.


Additionally, these tools can be vulnerable to threats such as model poisoning, a type of attack in which the AI is intentionally manipulated to produce harmful or misleading outputs. The UDC community is encouraged to be mindful of these risks and to exercise caution when using AI tools for academic, administrative or other purposes.


Below are a few key points to keep in mind when considering the use of free AI tools.

 

Data Privacy Considerations with Free AI Tools


  • Data Collection Practices: Free AI tools often prioritize performance and output quality over data privacy. Compared to paid, enterprise-grade tools, they typically have less stringent data protection policies.
  • Lack of Transparency: Many free tools do not clearly explain how they collect, store or manage user data, and they may not fully comply with key privacy regulations such as the General Data Protection Regulation (GDPR) and the Family Educational Rights and Privacy Act (FERPA).
  • Broad Usage Rights: Service agreements for free AI tools may grant providers broad rights to use, store or even share user data with third parties, significantly reducing users' control over their own information.


Data Security Risks with Free AI Tools


  • Limited Security Measures: Free AI tools often lack the robust security protocols found in paid, enterprise-grade solutions. This increases the risk of data breaches, meaning any content you input or process may be exposed to unauthorized access.
  • Model Vulnerability: Some free tools may contain hidden security flaws that malicious actors can exploit, including through model poisoning.
  • Legal and Compliance Risks: Free AI tools that create content may generate images, audio or text that infringe on existing copyrights. Using them in a regulated higher education environment can also lead to unintentional violations of data protection laws such as FERPA and GDPR.
  • Reliability: Free AI tools are often trained on publicly available data that may not undergo thorough testing or validation, so they can produce unreliable or inaccurate results. They may also be trained on biased data, amplifying those biases and potentially leading to discriminatory outcomes.


How the UDC Community Can Minimize Risk


OIT recommends that you sign in with your UDC account and use Microsoft Copilot, which offers stronger privacy and data protection.


If you choose to use free AI tools:


  • Proactively review the terms of service and privacy policies of any free AI tools before using them.
  • Do not share UDC confidential information, personally identifiable information or intellectual property.
  • Carefully assess AI-generated outputs for accuracy and potential bias before using or presenting them.


If you have any questions or concerns, please submit a consultation request with the Office of Information Technology’s Cybersecurity Analyst.

UDC | Turning Potential into Power