Let’s Prepare Today to Meet the Coming Challenges of Artificial Intelligence
  • Ethics Institute Leaders

Recent studies cite the use of social media as a leading cause of depression and anxiety in young women. 

How did this happen? As recently as 20 years ago, social psychologists believed that social media would help young people establish meaningful social connections with their peers. The young student sitting in the back of the classroom, socially isolated from her classmates, could use Facebook and other social media platforms as a "life preserver" to build ties to others in her peer group. But today, it is becoming clear that overuse of Facebook, Instagram, TikTok, and other social media platforms only deepens feelings of inferiority to "influencers" and other online authorities. And ultimately, young women who spend a lot of time on social media are more likely to become anxious and depressed.

What went wrong? I believe the problem is that social media can never take the place of the genuine interpersonal relationships young people build with their peers and friends.

People expected social benefits from social media. Today, they seem to expect similar benefits from artificial intelligence. Instead of finding friendship with living, breathing people, they will come to rely on computers that merely "seem" to be human. And I foresee a time, maybe 20 years from now, when people will be saying, "AI seemed to promise so many benefits . . . but now we see that it is only leading to isolation and alienation."

People who come to rely too heavily on AI-enhanced technology, like those who have come to spend too much time on social media, will only burrow down into artificial worlds inhabited by artificial people. 

And the use of AI is proliferating quickly. As I write this letter, thinkers from many disciplines have turned intense scrutiny to the increasing use of artificial intelligence in research. Every day, news stories report that students are using AI-enhanced tools like ChatGPT to automatically write essays and research papers on a variety of topics. A student who needs to write an original essay about the role that athletics plays in gaining college admission (to name just one possible topic) can turn that assignment over to an AI-enabled chatbot and immediately receive a credible essay on the topic. Did that student formulate opinions by speaking with coaches, other students, or athletes? No, there was no human interaction.

That student just typed a prompt like "what role does athletics play in college admissions today," hit "enter," and got an instant essay.

And so we see that AI-enhanced computer tools, like social media, could lead to greater isolation if we do not find the right ways to use them.

What are educators to do? One step is to provide context about what AI is, and what it can and cannot do. It can help with research but ultimately cannot take the place of original, interpersonal inquiry. Another step is to create ethical guidelines for how students can use AI-enhanced research in their work. Teachers can, for example, let students use AI tools like ChatGPT in much the same way they use Google now, as research tools, provided they indicate they have done so.

Forbidding the use of AI is futile. The cat is out of the bag and cannot be lured back in again. But if we think about AI today and plan reasonable ways to use it, it could become a useful tool that does not contribute to loneliness or depression in young people.

I would welcome your thoughts on this topic!
