At Queen's University, we promote the responsible and ethical use of artificial intelligence to support academic and research excellence. To ensure the safe and appropriate use of generative AI software, the university has conducted a series of security and privacy assessments through the Security Assessment Process (SAP). These evaluations help identify potential risks, protect user privacy and institutional data, and inform appropriate use guidelines.
The generative AI applications listed on this page have been assessed by the Information Security Office. This page provides an overview of AI tools that have been vetted for use, as well as those that are strongly discouraged due to security or ethical concerns. Before using any AI system or application not listed here, please ensure that an SAP has been completed. Explore the assessment summaries below to learn more about AI applications and their compliance with university policies and best practices.
Generative AI Approved for Use
The following applications have been reviewed, vetted, and approved for use at Queen's University. Please pay attention to the Acceptable Data Classification Levels, as not all tools can be used for all purposes.
Chatbots
- LibreChat Recommended -
Purpose: LibreChat is our newly released generative AI interface developed right here at Queen's University. It is powered by Azure OpenAI models and can safely be used for administrative tasks, tutoring, and research.
Security Awareness: You can safely use Queen's University data within this product, given the enterprise data protection standards in place.
Acceptable Data Classification Levels: General, Internal, Confidential
- Microsoft 365 Copilot Recommended -
Purpose: Conversational AI, tutoring, research, AI-powered assistance within various Microsoft applications.
Security Awareness: You can safely use Queen's University data within this product, given the enterprise data protection standards and business agreements in place.
Acceptable Data Classification Levels: General, Internal, Confidential
- OpenAI ChatGPT -
Purpose: Conversational AI, tutoring, research.
Security Awareness: Use of confidential university data within this product is discouraged at this time.
Acceptable Data Classification Levels: General, Internal
- Google Gemini -
Purpose: Multimodal LLM integrated with Google products and services.
Security Awareness: Use of confidential university data within this product is discouraged at this time.
Acceptable Data Classification Levels: General, Internal
- Anthropic Claude -
Purpose: Advanced AI language model focused on safe and reliable conversational AI.
Security Awareness: Use of confidential university data within this product is discouraged at this time.
Acceptable Data Classification Levels: General, Internal
Common AI-Powered Apps
- Otter.ai -
Purpose: AI-powered transcription
Security Awareness: Be mindful of recording consent laws.
Acceptable Data Classification Levels: General, Internal
- Genio (formerly Glean) for Education AI -
Purpose: AI to transcribe audio recordings and generate outlines.
Security Awareness: Be mindful of ethical AI usage.
Acceptable Data Classification Levels: General, Internal
- Auris AI -
Purpose: AI-generated transcripts and subtitles.
Security Awareness: Do not share personal information.
Acceptable Data Classification Levels: General
- Captions AI -
Purpose: Generate and edit talking videos with AI.
Security Awareness: Do not share personal information.
Acceptable Data Classification Levels: General
- VoiceGain AI -
Purpose: AI for speech-to-text transcription and voice recognition.
Security Awareness: Do not share personal information.
Acceptable Data Classification Levels: General
- Wordly AI -
Purpose: Real-time translation and captioning services for meetings.
Security Awareness: Be cautious with confidential conversations.
Acceptable Data Classification Levels: General
Generative AI Applications - Not Recommended
The use of the following applications is discouraged at this time as a precautionary measure to protect the Queen's University community, data, and systems.
- DeepSeek Not Recommended
DeepSeek has raised significant privacy and security concerns, along with demonstrated issues related to content and accuracy bias. Additionally, several security vulnerabilities have been discovered that are substantial enough to warrant avoiding its use within Queen's digital environment.
- xAI Grok Not Recommended
Avoid using xAI Grok for confidential university discussions, as user data may be used for model training. Additionally, Grok has shown vulnerability to data exfiltration attacks, in which sensitive information can be inadvertently leaked through AI interactions. Given Grok's "anti-woke" stance and the lack of sufficient guardrails, there is also a risk of it generating biased or incorrect information.
- 01.AI Yi Not Recommended
When using 01.AI's Yi, it is crucial to implement robust data privacy and security measures to safeguard user information and ensure compliance with relevant regulations. Additionally, compatibility issues with the current Yi 34B infrastructure have been identified, and fine-tuning the model effectively requires specialized expertise.