Mental health experts warn that ChatGPT addiction is rising among shore staff and say it is only a matter of time before we see this onboard.
Mental Health Support Solutions (MHSS) says the lack of regulations when it comes to AI is worrying and has raised fears over the content users may be exposed to when utilising AI chat tools. On August 25, the parents of 16-year-old Adam Raine sued OpenAI, alleging that ChatGPT reinforced their son’s suicidal thoughts, ultimately contributing to his death.
Since the incident, mental health experts have been raising alarms about the overuse of Artificial Intelligence (AI), which is spreading worldwide at an exponential pace.
While mainstream AI platforms can suggest support, lend an ear, and avoid explicit material, lesser-known ones lack such safeguards. Certain AI apps allow users to engage in largely unrestricted NSFW interactions.
AI's inability to end conversations and refer users to human professionals when necessary, combined with its convincing simulation of emotion, can create a dangerous illusion of companionship, says MHSS.
“As maritime health professionals with years of experience, we understand our escalation protocols, who to contact right away and on which vessels, and the bot has absolutely no capability of doing that. We should try to be more careful and considerate, assessing the risks just as much as the benefits”, commented Charles Watkins, Founder and Director of Clinical Operations at MHSS.
Such warnings underline the growing concern over how society and seafarers alike interact with AI in their daily lives.
“We need to educate people on how to use AI safely, and to know when to seek human support”, added Güven Kale, Clinical Psychologist and Emotionally Focused Therapist at MHSS.
As AI advances within the maritime industry, so does the need for enhanced regulation and education, warns MHSS.
Source: safety4sea.com
