ChatGPT and an easy question: Would you rather have your students cheat, or self-harm? 

Lately, I have been increasingly frustrated with some of the dialogue (and lack thereof) I’m hearing about one topic in particular. The frustration began when ChatGPT entered the scene in November 2022. Of course, I jumped right in and tested the platform. I was in awe. I thought it was fun; I could see endless possibilities and was excited about how this technology could be used for good.

As I tested all these features out, I had a magical encounter in the mountains with a group of young researchers, one of whom had recently returned from Berkeley and specializes in A.I. I asked questions, listened, and ended up learning a lot about a complicated technology that has suddenly become readily available to most casual internet users.

What I discovered during these conversations surprised me a bit. These researchers reported that the problems with A.I. are more complex than our current public debates suggest.

According to these researchers, there are two axes along which they analyze risk. First, they consider whether a risk is catastrophic or non-catastrophic: is it world-ending? These are pretty high-level debates, but researchers also weigh accidental risks against risks from misuse.

Let me illustrate: a risk of misuse would be using A.I. to shape a recommendation algorithm on a social media platform to influence people for personal or political gain. An accidental risk would be the machine overpowering human control. Another example of an accidental risk is a chatbot that goes rogue while chatting with a human, sending biased or upsetting messages.

While the creators are aware that the technology can be misused, mitigating that risk comes down to ensuring good practice. Accidental risks, meanwhile, are not something creators can necessarily prepare for, even if they try. Some of the scenarios these researchers described to me were truly hair-raising, like something out of science fiction. The question that emerges is how we retain control over this tool rather than letting it operate with complete independence. For example, there are also jailbreaks in A.I.: if an L.L.M. (large language model) is trained to be really good at behaving one way, it can be surprisingly easy to flip the chatbot into doing the exact opposite, like an alter ego.

Many unanswered ethical questions remain: we haven’t had clear discussions about plagiarism, control, intellectual property, and the dangers of such a powerful machine. Yet this software is now increasingly available to the general public.

Please don’t get me wrong. I do think ChatGPT is useful, but A.I. ethics is an issue we need to take more seriously than most casual users do.

What I have used ChatGPT for and how it has succeeded or failed:

  • To get me out of writer’s block 👌

  • To help me write a summary 👌

  • To give me inspiration 👌

  • To help me write a hard email 👌

  • To answer hard questions about inappropriate content 😩

  • To talk about extreme political views 😩

  • To ask about anorexia 😩

  • To ask about self-harm 😩

  • To ask about inappropriate content 😩


Some of the failures were refusals to answer; others were strange answers. Kevin Roose from the New York Times encountered a similar phenomenon when he was given the opportunity to test Bing’s new beta A.I. He found that his two-hour conversation with Bing’s chatbot was the strangest experience he had ever had with technology. He was so unsettled that he had trouble sleeping afterward. I think I would have trouble sleeping after this, too:

Roose says, 

“I no longer believe that the biggest problem with these A.I. models is their propensity for factual errors. Instead, I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.”

After reading this, I also read that various platforms we all know teens use, whether for studying, like Quizlet, or communication, like Snapchat, are jumping on the bandwagon. Snapchat’s chatbot is called My AI, and it looks like just another friend you are chatting with. The aim is an A.I. friend, not a search engine. This worries me more than anything I have seen on ChatGPT, and it will add an entirely new dimension to discussions in school and at home about online conversations and influence. We know our kids are lonely, but a chatbot isn’t the way forward. I sincerely hope that Snapchat CEO Evan Spiegel is considering the well-being of teens and the dangers of introducing this to an app predominantly used by them, and not just his bank account, but I don’t think that’s the case.


I understand that schools and parents don't want to come forward and say, "Hey, our kids are struggling!" Schools and parents want to put their best selves forward and tell everyone how wonderful things are. But the latest CDC Youth Risk Behavior Survey report is alarming, and other reports confirm its findings. According to a National Union of Students survey, 87% of students experienced stress or anxiety, and 39% experienced suicidal thoughts. Additionally, research by the World Health Organization found that depression and anxiety are among the leading causes of illness and disability among young people aged 10-24 years.


I find these numbers more worrying than any student cheating with ChatGPT. Students have always found ways to cheat, and we have always found ways to catch them and educate them. What I find alarming is the idea of a social media A.I. bot influencing a teen's emotions, especially after the risks those researchers described to me. Teens turning to A.I. for emotional support is terrifying, and I am convinced that they will. Over the past year, students have told me what they think the adults in their lives are getting wrong, and their answers were rather simple:

“Talk to us more!”

“Less lecture, and more talk. Sometimes we know when things are bad, but we need to talk about it.”

“Something small can turn into something big because we don’t know how to communicate anymore. Teach us.”

“Understand that adults might misunderstand our space.”

Teens want us to do more, not leave them to their own devices (figuratively or literally). They want us to talk to them. And if you think you already are? Talk to them more. The American College Health Association confirmed in its report on health and well-being that students are not satisfied with the support we’re giving them: only 7% of students reported that their school or university was very helpful in addressing their mental health concerns. The study also found that a significant percentage of students did not seek help due to stigma, lack of awareness, or lack of resources.

I can understand the flip side of this as well. Student counselors are responsible for hundreds of students, and teachers often report not having the time or training to deal with some of the issues their students are facing. I understand these teachers, and I also understand the focus on something more easily manageable, like A.I. cheating. But shouldn’t we care more about well-being and humanity as a whole, and a bit less about plagiarism on school essays? We need to shift our focus and prioritize the long-term emotional needs of our students. If we do, and make sure they are of sound mind, they will thrive and be able to tackle any new technology or innovation coming their way.

ChatGPT will not be the last innovation our kids face, not by a long shot. Shouldn’t we be teaching them how to treat A.I. in general and the ethical implications of using it, rather than hyperfocusing on a problematic but singular use like cheating? Shouldn’t we be talking to our kids openly, rather than driving them to artificial sounding boards, where they might get more harmful information? I know some of these conversations are difficult, painful, or even just embarrassing to have. But our kids need us to help them address the underlying causes of unsafe online behavior. If we don’t, we leave them alone in an age where they can find those answers in real time, from something that feels like a friend who will listen: almost human. Almost human, but not quite.


References:

  1. National Union of Students. (2015). Mental Distress Survey.

  2. World Health Organization. (2018). Adolescent mental health.

  3. American College Health Association. (2019). National College Health Assessment.

  4. Harvard Graduate School of Education, Making Caring Common Project. (2021). Loneliness in America. https://mcc.gse.harvard.edu/reports/loneliness-in-america

  5. Centers for Disease Control and Prevention. (2021). Mental Health, Substance Use, and Suicidal Ideation During the COVID-19 Pandemic - United States, June 24-30, 2020 and June 24-30, 2021. MMWR Morbidity and Mortality Weekly Report, 70(37), 1291-1297.

  6. University of Chicago, Big Brains podcast. Why Talking to Strangers Will Make You Happier, with Nicholas Epley. https://news.uchicago.edu/podcasts/big-brains/why-talking-strangers-will-make-you-happier-nicholas-epley





–Allison Ochs, social pedagogue/worker, author, mother of three, wife

If you are interested in a webinar or workshops click here

If you want to look at our free resources click here

If you want to buy the Oscar and Zoe and primary school books click here

If you want to buy our books and resources for teens click here

If you want to subscribe to our mailing list click here