Dr Celia McCrea: AI's Impact on Body Image and Academia
Dr Celia McCrea is former Director of Postgraduate Psychology at Leicester University, with a research specialism in eating disorders and body image.
In this interview, she shares valuable insights into the potential effects of AI-powered image editing tools on young people, including distorted body image and mental health issues.
We also explore how AI is impacting academia and ways institutions can adapt.
You can read the key takeaways from our interview at the end of the article.
Thank you for joining SAIFCA today, Dr McCrea!
As a former University Course Director of Postgraduate Psychology and with a research specialisation in eating disorders and body image, how do you see AI influencing these areas?
AI’s influence on body image and eating disorders is a topic of growing concern. AI-powered image editing tools and filters have become incredibly sophisticated and accessible, allowing users to alter their appearance dramatically with just a few clicks.
While these tools can be creative and fun, they can also contribute to setting unrealistic beauty standards. Young people, especially girls and young women, are often bombarded with these idealised images on social media platforms. This constant exposure can lead to unhealthy comparisons and a distorted sense of self-worth, potentially contributing to body dissatisfaction, low self-esteem, and even triggering mental health issues such as anxiety and eating disorders.
Could you elaborate on how these AI-generated images and filters can specifically impact the mental health of young people?
The impact can be profound and varied. First, there’s the issue of unattainable beauty standards. When AI alters images to ‘perfection’, it creates a benchmark that’s impossible to achieve in real life. Young people can internalise these standards, leading to chronic dissatisfaction with their own appearance. This can erode self-esteem and contribute to negative body image.
Additionally, the use of filters can become a crutch. Some individuals might feel unable to post a photo without altering it, reinforcing the idea that their natural appearance isn’t good enough. Over time, this can lead to increased social anxiety and even withdrawal from real-life interactions.
There’s also the addictive nature of social media algorithms, which can trap users in a cycle of seeking validation through likes and comments on their edited images. All of these factors can put a strain on mental health, potentially leading to depression, anxiety disorders, and eating disorders.
Given your background in academia, how do you see AI impacting universities, particularly in terms of academic integrity?
The advent of AI language models has indeed introduced new challenges in higher education, including in fields like psychology where critical thinking and original analysis are crucial. While these tools are valuable for research and learning, they can also make it easier for students to produce essays and assignments with minimal original input.
This raises serious concerns about plagiarism and the authenticity of academic work, especially in disciplines that rely heavily on essay-based assessments, as psychology often does. Traditional plagiarism detection software isn’t always equipped to identify AI-generated content, and some detection tools can even produce false positives, unfairly accusing students of misconduct.
This not only undermines the trust between educators and students but also complicates the assessment process. Furthermore, reliance on AI-generated content can impede the development of critical thinking and writing skills that are essential for academic and professional success in psychology and related fields.
What steps do you suggest universities take to evolve within this context, especially in psychology departments?
I believe a multifaceted strategy is necessary. Education is the first step, for both students and faculty. Universities should provide guidance on the ethical use of AI, highlighting both its potential and its pitfalls. Workshops or modules on digital literacy could be integrated into the psychology curriculum to ensure that everyone is informed about these tools and their implications.
In terms of assessment, psychology educators might also consider diversifying their methods. Incorporating more in-person evaluations, such as presentations, oral exams, or practical projects, can reduce reliance on written assignments, which may be more susceptible to AI interference. For instance, case study analyses or research proposal presentations could be effective alternatives.
Students also need clearer guidelines, which means updating academic policies to address AI explicitly. In psychology, we might emphasise the importance of original thought and of ethical considerations when using AI tools for research or assignments.
Investing in more advanced detection tools may also be important at some stage, but we must acknowledge their limitations and potential for inaccuracy, and use them as part of a broader strategy rather than a sole solution.
Ultimately, fostering an academic culture that values integrity and original thought is key, especially in a field such as psychology where understanding human behaviour and cognition is paramount.
Finally, given your expertise in body image and eating disorders, what measures do you believe are crucial for society to ensure AI develops safely and beneficially in this context?
Ensuring the safe and beneficial development of AI, particularly in relation to body image and mental health, requires several complementary approaches. First, we need robust regulatory frameworks that address the use of AI in image manipulation, especially when it comes to advertising and social media content aimed at young people. These regulations should consider the potential psychological impact of AI-altered images.
Secondly, those at the forefront of AI development should, as psychologists have traditionally done, give meaningful consideration to the ethical aspects of the systems they build. Input from the field of psychology, and from mental health professionals more broadly, can help in this regard.
Public engagement and education are also crucial. By raising awareness about the reality of AI-altered images and their potential impact on mental health, we can empower individuals, especially young people, to develop a more critical eye and a healthier relationship with social media.
Interdisciplinary collaboration is also key. Bringing together experts from psychology, computer science, ethics, and policy can help anticipate unintended consequences and develop more holistic solutions. For instance, collaborations could lead to the development of AI tools that promote positive body image rather than unrealistic ideals.
Lastly, investing in research on the psychological impacts of AI-generated content is vital. This can provide valuable insights to guide policy and practice, helping us understand how to mitigate negative effects and potentially use AI as a tool for promoting mental health and positive body image.
By taking these steps, we can work towards harnessing the benefits of AI while minimising its risks, particularly in sensitive areas like body image and mental health.
Thank you, Dr McCrea, for sharing your valuable insights on these critical issues of AI, psychology, and academia.
It’s been my pleasure. These are complex issues that require ongoing dialogue and collaboration across disciplines. I hope our discussion contributes towards a greater understanding of the challenges and opportunities we face as AI continues to evolve and impact our society.
KEY TAKEAWAYS FROM DR MCCREA’S INTERVIEW
Key takeaways from the interview with Dr McCrea, relevant to the risks from AI to children and education:
- AI’s role in setting unrealistic beauty standards and its impact on mental health: AI-powered image editing tools and filters, widely used on social media, can contribute to setting unattainable beauty standards. These idealised images, particularly those aimed at young people, together with the addictive nature of social media, can lead to unhealthy comparisons, body dissatisfaction, and mental health issues such as anxiety and eating disorders.
- Academic integrity in education: The rise of large language models (LLMs) presents challenges in education, especially regarding plagiarism and the authenticity of student work. AI-generated content can undermine critical thinking, originality, and academic integrity, which are crucial in fields like psychology. Various measures can help combat this, such as incorporating oral exams and practical projects and adding AI education to the curriculum. At the same time, we must look to the benefits AI can bring for learning and education.
- Need for regulation and ethical AI development: There is an urgent need for regulatory frameworks to govern the use of AI in image manipulation, particularly in content targeting young people. Collaboration between psychologists, computer scientists, and policymakers can help ensure AI tools are developed responsibly, with consideration for their impact on mental health and body image.
These points highlight the importance of addressing AI’s influence on mental health and education, especially for younger audiences.
Addressing these challenges now can help protect the mental health of young people and preserve the integrity of education in an AI-driven world.