To help researchers use AI effectively and minimise risks, staff from across the university have developed a comprehensive set of AI Guidelines.
These guidelines are aligned with existing UTS policies and government legislation, ensuring ethical and responsible use of Generative AI (GenAI) tools in research.
Why following the guidelines matters
Our goal is to help researchers avoid unintentional breaches of data security or research integrity and remain compliant with all applicable regulations. The guidelines reinforce key principles outlined in the Australian Code for the Responsible Conduct of Research, promoting transparency, accountability, and trust.
Practical guidance at every stage of research
The guidelines offer clear advice for responsible AI use, including:
- Initial planning: Data security, confidentiality, cybersecurity, and intellectual property considerations.
- Tool selection: Managing potential biases and privacy concerns.
- Project design: Ethical use, particularly in sensitive research contexts.
- Funding and literature reviews: Compliance with relevant policies and licensing conditions.
- Data handling: Ensuring confidentiality, ethical management, and documentation.
- Publishing and peer review: Verification of accuracy, originality, and adherence to ethical standards.
Supporting your responsible AI use
We welcome your feedback to help us develop training resources and further support all UTS staff using AI. Planned support includes:
- Recommendations for AI tools endorsed by UTS academics.
- Training resources on responsible AI use in research.
- Exploration of potential AI enterprise tools for university-wide adoption.
Get involved
If you have feedback or knowledge of existing AI working groups, training, or resources at UTS, please help us coordinate a collective effort. Contact Serena Ekman from the ODVCR team at serena.ekman@uts.edu.au.
Together, let's advance research responsibly and effectively with AI.
Read the Use of AI in Research Guidelines.