While AI can afford us many new opportunities, the technology also has many potential drawbacks that we, as conscientious users, need to be aware of.
Reliability: LLMs such as ChatGPT can produce unreliable responses. Because there is little transparency about the data AI models are trained on, users cannot assess whether generated information comes from a reputable source. Additionally, LLMs have been known to “hallucinate” information, such as citing a book that does not exist. When using these tools, you should always verify any facts, laws, or sources referenced in a response.
Pattern Recognition vs. Truth: AI tools primarily excel at pattern recognition rather than discerning absolute truths. While they can identify trends and associations within the data, they may not always discern the validity or reliability of individual studies or claims. This limitation underscores the importance of human judgment and critical appraisal in the review process to ensure the accuracy and credibility of the findings.
Transparency: AI can reflect the biases of the data it is trained on. This problem is exacerbated by the lack of transparency around the content used to train models. Content generated by AI and presented as purely factual can contain political or cultural biases. It is always best practice to review LLM responses critically and consider whether bias may have been introduced.
Bias: AI tools have the capacity to exacerbate existing issues of bias and discrimination, especially when these tools are used in decision-making. Notable examples of algorithmic bias have emerged in healthcare, hiring, and policing.
Incomplete Coverage: AI tools may not have access to all relevant databases or sources of information, and many organizations do not disclose what information a tool has been trained on, leading to incomplete coverage of the literature.
Privacy Concerns: Many AI tools are created by for-profit organizations that have a vested interest in the data you feed into them. For this reason, there are privacy concerns about the information you share with generative AI tools.
Copyright and Intellectual Property: At the moment, AI-generated content is not covered under copyright law; however, this is a developing landscape, and the line between AI-generated content and content made by people with help from generative AI tools is blurry. Creators across a wide range of fields are concerned that AI-generated content will be used to avoid paying artists. Additionally, there are concerns about copyrighted material being used to train AI.
Environmental Impact: Training AI models is associated with massive energy expenditure as well as water consumption.
Ethical Considerations: With any new technology comes a risk of abuse. AI tools can be used to spread disinformation, surveil vulnerable communities, and exacerbate existing structural inequalities.