Generative AI (GenAI) is a type of artificial intelligence that uses advanced algorithms and large language models to create new content—such as text, images, or audio—based on patterns learned from vast datasets.
In the context of legal research, GenAI tools can help streamline the analysis of case law, draft documents, and summarize complex information, offering significant efficiency gains. However, these tools can also produce errors or “hallucinations,” may lack jurisdiction-specific knowledge, and raise concerns about data privacy and ethical use, so all outputs require careful review and validation before use in practice.
Note that unauthorised use of generative AI tools is considered cheating, a violation of the SMU Code of Academic Integrity, and will be dealt with accordingly.
With the exception of SciSpace and Scite, we do not offer institutional access to these AI tools. However, you can make use of the free versions or consider getting your own subscription.
While helpful, GenAI carries real risks. It can perpetuate biases present in its training data and can fabricate plausible-sounding but false information (the "hallucinations" noted above). It may also reproduce existing work too closely, raising copyright and plagiarism concerns, and can inadvertently expose private or sensitive information. Because of these risks, always verify AI outputs and use them with care.
Misuse of GenAI has landed people in serious trouble; see the cautionary tales below!