Department of Computer Science and Technology

Our aim is for students to achieve the learning objectives of our courses. In line with this aim, we wish to highlight the dangers to learning of using AI tools without critical thinking, but we do not wish to ban tool use that is not detrimental to learning.

See the Department AI Policy.

When completing work, students should ask themselves whether they are achieving the course learning objectives (which can be found on the course syllabus pages). For instance, using AI to correct spelling may not interfere with understanding the course content, but using AI to generate all the code examples without critically appraising the output will.

For essay-based work, AI might be used to play devil’s advocate while refining an argument (since this requires critical engagement with the output), but students should be wary of using AI to define topic content (i.e., “what should I mention in an essay on x?”), because the output can reflect generic (and perhaps incorrect) ideas rather than a student’s own, probably very good, unique opinions on the matter.

Note that supervisors want students to learn rather than simply produce work. When a student is up against a deadline, supervisors would rather receive no work than work generated by AI: marking verbatim AI-generated output is a waste of everyone’s time.

Students should expect supervisors to discuss coding examples in supervisions to check understanding. Supervisors are less interested in whether the code works than in how and why it works. Students should be able to critique their own code.

If a supervisor suspects that the use of AI tools is disadvantaging a student’s learning, they may contact the student’s Director of Studies.

Students at postgraduate level should always consider whether their use of AI is interfering with their research objectives (supervisors can advise when in doubt).