Journal of Applied Bioinformatics & Computational Biology | ISSN: 2329-9533


Review Article, J Appl Bioinforma Comput Biol Vol: 12 Issue: 1

In Mental Health Support, Ethical Perspective on the Utility of ChatBots

Khushi Tekriwal*

Department of Computer Science, Allegheny College, George Mason University, Pennsylvania, United States

*Corresponding Author: Khushi Tekriwal
Department of Computer Science, Allegheny College, George Mason University, Pittsburgh, Pennsylvania, United States
Tel: 9874292223
E-mail: khushitekriwal05@gmail.com

Received date: 07 October, 2022, Manuscript No. JABCB-22-78307; Editor assigned date: 10 October, 2022, PreQC No. JABCB-22-78307 (PQ); Reviewed date: 25 October, 2022, QC No. JABCB-22-78307; Revised date: 03 January, 2023, Manuscript No. JABCB-22-78307 (R); Published date: 11 January, 2023, DOI: 10.4172/2329-9533.1000256

Citation: Tekriwal K (2023) In Mental Health Support, Ethical Perspective on the Utility of ChatBots. J Appl Bioinforma Comput Biol 12:1

Abstract

The use of chatbots is altering the dynamic between patients and medical staff. Conversational chatbots are computer programmes that use algorithms to mimic human conversation through a variety of channels, including text, voice and body language. The user types in a question and the chatbot generates an appropriate response. A major benefit of electronically delivered treatment and disease management programmes is the potential for improved adherence through chatbot-based systems. This article presents a survey of current chatbot systems in the field of mental health. Promising results have been reported for the use of chatbots in psychoeducation and adherence.

Keywords: Conversational User Interface (CUI); Evidence Based Medicine (EBM); Randomized Controlled Trials (RCTs); Depression; Anxiety

Introduction

What are chatbots?

According to one definition, a chatbot is "a system that communicates with users utilising natural language, spoken and/or written, in a number of forms, including text, voice, images and video, as well as facial and body expressions" [1]. A chatbot may also be called a chatterbot, virtual agent, dialogue system, Conversational User Interface (CUI) or machine conversation system. A chatbot is an artificial intelligence system designed to carry on conversations on our behalf. It is simple to begin interacting with a chatbot because most are text-driven and feature images and unified widgets [2].

The use of chatbots in the healthcare industry has grown rapidly in recent years. Healthcare chatbots help patients, their families and healthcare teams by supporting behavioural change, providing therapy support and assisting in disease management (see, for example, Babylon Health, which offers digital health consultations and information, and Wysa, which offers cognitive behaviour therapy) [3].

Literature Review

In recent years, the healthcare industry has seen a rise in the number of chatbots aimed at helping with mental health problems [4]. An estimated 29% of the population will experience mental health problems at some point in their lives, with 10% of children and 25% of adults experiencing symptoms in a given year [5]. The disabling conditions that result from mental health disorders are associated with lower quality-of-life indicators, and the resulting losses in labour and capital productivity are expected to total around $16 trillion worldwide between 2011 and 2030.

Most often, medication and/or talk therapy are used to treat mental health issues [6]. Although mental health services are in high demand, there are not enough trained professionals to meet that need. According to global estimates, developing countries have roughly one psychiatrist per ten million people, compared with about nine per 100,000 people in developed countries, a nearly 900-fold difference in per-capita availability [7]. The World Health Organization reports that only about 15% of people in developing countries have access to mental health services, compared to 45% of people in developed countries [8]. Left untreated, mental health problems can deepen suicidal ideation and lead people to take their own lives.

Over the past five years, interest in conversational agents has grown in psychoeducation, behaviour change and self-help, all areas that aim to address the scarcity of resources for treating people with mental health disorders (Figure 1).

Figure 1: Pictorial representation of a chatbot dialogue.

Chatbots for mental health

An analysis by Abd-Alrazaq and colleagues found that 53 studies had reported 41 distinct chatbots for mental health. The chatbots were created for a variety of functions, including screening (e.g., SimSensei), training (e.g., LISSA) and therapy (e.g., Woebot), with a focus on autism and depression. Seventy percent of the chatbots were deployed as standalone programmes, while 30% used a web-based interface. In 87% of the studies the chatbot led the conversation, while in 13% leadership was shared between the chatbot and the human participant. Virtual humans or avatars are the most common forms of embodiment used by chatbots.

Two popular chatbot platforms in use today are SERMO and Wysa. Wysa is designed to be both intelligent and empathetic [9]. It integrates a mood tracker that can detect negative moods; on the basis of the user's emotional state, it will either recommend a depression screening test or advise the user to seek professional help. The app includes guided mindfulness meditation exercises to help with stress, depression and anxiety. In one study, a total of 129 people, split between regular and infrequent chatbot users, participated [10]. The research found that regular Wysa users experienced greater mood improvement than infrequent users (P=0.03). Participants found their interactions with Wysa beneficial and stimulating.
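The triage behaviour described above can be made concrete with a small sketch. The mood labels, intensity thresholds and seven-entry window below are illustrative assumptions, not Wysa's actual (proprietary) logic.

```python
# Hypothetical sketch of a mood-tracker triage rule, loosely modelled on the
# behaviour described above; all thresholds and labels are assumptions.

from dataclasses import dataclass

NEGATIVE_MOODS = {"sad", "anxious", "hopeless", "angry", "numb"}

@dataclass
class MoodEntry:
    mood: str        # mood label chosen by the user
    intensity: int   # self-rated 1 (mild) to 5 (severe)

def triage(history: list[MoodEntry]) -> str:
    """Recommend a next step from recent mood-tracker entries."""
    recent = history[-7:]  # roughly the last week of check-ins
    negative = [e for e in recent if e.mood in NEGATIVE_MOODS]
    if any(e.intensity >= 4 for e in negative):
        return "advise-professional-help"
    if len(negative) >= 4:  # persistent low mood across the week
        return "offer-depression-screening"
    return "suggest-mindfulness-exercise"

print(triage([MoodEntry("sad", 2), MoodEntry("sad", 3), MoodEntry("anxious", 2),
              MoodEntry("sad", 2), MoodEntry("calm", 1)]))
# -> offer-depression-screening
```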

The SERMO app is designed for people with moderate psychological impairment [11]. It uses CBT techniques to help people learn to regulate their emotions. Based on Albert Ellis's ABC theory (situation, cognition, emotion), the app features a chatbot that prompts users to share how they are feeling about specific events in their lives. According to this theory, our thoughts and actions are influenced by the stimuli we take in, whether consciously or subconsciously [12]. From the user's natural-language input and the data collected, SERMO automatically determines the user's basic emotion. Five emotions are currently recognised: joy, sadness, grief, anger and fear. Depending on the emotion, the system recommends suitable activities or mindfulness exercises. For the prototype, thirteen different conversations were created with OSCOVA, covering a wide range of user responses prompted by the expression of various emotions and states of mind. Natural language processing techniques and lexicon-based processes are used to identify and categorise emotional states. SERMO is thus a hybrid chatbot, combining a rule-based conversation flow with NLU capabilities. To assess SERMO's usability, the User Experience Questionnaire (UEQ) was administered to a total of 21 participants (people with mental health conditions and psychologists). Users praised the app's usefulness, clarity and aesthetic appeal, but gave generally neutral ratings on the fun-to-use scales measuring hedonic quality. The experts consulted agreed that the app would be helpful for people who have trouble communicating in person.
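A minimal sketch may help make lexicon-based emotion detection of the kind described above concrete. The lexicon entries and exercise suggestions below are invented for illustration; SERMO's actual lexicon, OSCOVA dialogues and NLU pipeline are not reproduced here.

```python
# Minimal sketch of a lexicon-based emotion classifier over the five
# emotions named above. Lexicon and exercise mapping are illustrative only.

EMOTION_LEXICON = {
    "joy":     {"happy", "glad", "delighted", "proud"},
    "sadness": {"sad", "down", "unhappy", "lonely"},
    "grief":   {"loss", "mourning", "bereaved"},
    "anger":   {"angry", "furious", "annoyed"},
    "fear":    {"afraid", "scared", "worried", "anxious"},
}

EXERCISES = {
    "sadness": "behavioural activation: plan one pleasant activity today",
    "fear":    "guided breathing exercise",
    # ... one suggestion per recognised emotion
}

def detect_emotion(utterance: str) -> str | None:
    """Return the emotion whose lexicon matches the most tokens, if any."""
    tokens = set(utterance.lower().split())
    scores = {emo: len(tokens & words) for emo, words in EMOTION_LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

emotion = detect_emotion("I felt so lonely and down after the meeting")
print(emotion, "->", EXERCISES.get(emotion, "open conversation"))
# sadness -> behavioural activation: plan one pleasant activity today
```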

Since there is a severe shortage of trained mental health professionals in both the developing and developed worlds, more and more people are turning to chatbots for help with their mental health. Artificial Intelligence (AI) algorithms are increasingly being built into chatbots to drive their interactions with users (Figure 2).

Figure 2: Characteristics of chatbots in clinical psychotherapy and psychology.

Advantages of chatbots in mental health

With mental health professional shortages worldwide, chatbots could help provide better care at a lower cost while also increasing access to care. For people who have trouble talking to their doctor about mental health issues due to stigma, chatbots can be a helpful alternative route to treatment (Figure 3). As reported by Lucas and colleagues, veterans with PTSD were more forthcoming about their symptoms with a chatbot than with either anonymized or non-anonymized versions of a self-administered questionnaire [13].

Figure 3: Graphic representation of chatbots and their technical implementations.

Because chatbots are typically intuitive and simple to use, they can be a great resource for people with less developed language, health literacy or computer skills. Embodied chatbots are capable of simple verbal and nonverbal interactions, including empathy, concentration and proximity, and can as a result form a therapeutic bond with their patients. According to research, even people who have never used a computer before can easily navigate chatbots [14].

Traditional computer-based interventions for mental health have been shown to be effective, but they suffer from low adherence and high attrition rates because they lack the engaging human interaction of in-person meetings with healthcare professionals [15]. Chatbots could be a good alternative to such interventions because they talk to users in a way that is both natural and engaging.

The potential advantages of using chatbots for mental health have been reported in previous studies. In particular, chatbots can help with a variety of mental health problems: they have been shown to be more effective than eBooks in reducing symptoms of anxiety (P=0.02) and depression (P=0.017) [16]. Chatbots can also provide a safe space in which to learn and practise interpersonal skills (such as how to conduct a successful job interview) without fear of criticism; one study found that those who used a chatbot improved their job interview skills and confidence more than those on the waiting list (P=0.05) [17]. Chatbots may also be able to detect one or more mental health issues (Figure 4).


Figure 4: Graphical representation of the utility of mental health technology.

Ethical challenges

Many of the mental health chatbots found in app stores are not evidence-based or, at the very least, the knowledge they are founded on has not been validated by relevant research [19]. For mental health chatbots to be credible and useful, they should rely on clinical evidence: therapeutic procedures that are already used in clinical practise and have been demonstrated to be helpful should be incorporated into them. In addition, there is a paucity of research on the potential therapeutic use of chatbots in mental health. According to a systematic review, it is difficult to draw definitive conclusions regarding the effect of chatbots on several mental health outcomes because of the high risk of bias in the included studies, the low quality of the evidence, the lack of studies assessing each outcome, the small sample sizes and the contradictions in the results of some of the included studies. Given such limitations, users could be harmed by improper recommendations or undetected dangers.

Discussion

Chatbots must maintain users' privacy and confidentiality in order to protect the sensitive data they collect about users' mental health. In contrast to interactions between a patient and a doctor, in which the patient's right to privacy and confidentiality is respected, chatbots frequently do not take these factors into account. The vast majority of chatbots, particularly those offered on social media platforms, do not enable users to communicate anonymously, so communications can be associated with particular users.

A number of chatbots state plainly in the terms and conditions on their websites that they are free to use and share the data they collect for any number of purposes. Users frequently accept such terms and conditions without reading them attentively, given their complex and formal legal language, and as a result may not be aware that the confidentiality of their data will not be maintained. This means the distributor of a chatbot could sell, trade or advertise with the data in some capacity. The clearest illustration is the Facebook incident in which Facebook improperly shared the personal information of millions of its users with Cambridge Analytica. Cyberattacks may become another problem, making users' personal health data available for uses that cannot yet be foreseen. The cumulative effect of these problems influences the degree to which users are willing to interact with chatbots and the amount of sensitive information they are willing to disclose.
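As one concrete (and purely illustrative) mitigation of the linkability problem described above, chat transcripts can be stored under a pseudonym derived from the account identifier with a keyed hash, so that stored logs alone cannot be tied back to a named user. The secret value and truncation length below are assumptions, not the practice of any cited chatbot.

```python
# Illustrative sketch: pseudonymise user identifiers before logging chat data.

import hashlib
import hmac

SERVER_SIDE_SECRET = b"rotate-me-regularly"  # assumption: kept out of the log store

def pseudonymise(user_id: str) -> str:
    """Derive a stable pseudonym; without the secret it cannot be reversed."""
    return hmac.new(SERVER_SIDE_SECRET, user_id.encode(), hashlib.sha256).hexdigest()[:16]

log_record = {"user": pseudonymise("alice@example.com"),
              "text": "I have been feeling anxious lately"}
print(log_record)
```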

Another difficulty associated with employing chatbots is ensuring users' safety. At the moment, the majority of chatbots cannot handle critical scenarios in which the safety of their users is at stake. This may be because chatbots fail to contextualise users' conversations, to understand the emotional cues users provide and to recall users' past conversations. Even though certain chatbots, like Wysa, offer the option of quick support from a mental health professional, such services are typically not free, and people under 18 years old are not permitted to use them. One more safety concern is over-dependence on chatbots: because they are so easy to access, users may become overly attached to or dependent on them, which may exacerbate addictive behaviours and drive users to avoid face-to-face contact with mental health specialists.
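The gap in crisis handling can be illustrated with a deliberately naive sketch: keyword matching of the kind below is roughly what simple systems do, and it fails on negation, context and conversational memory, which is exactly the limitation described above. The phrase list and messages are assumptions.

```python
# Deliberately simple sketch of keyword-based crisis escalation. Real safety
# systems need far more than keyword matching (context, negation handling,
# recall of earlier turns).

CRISIS_PHRASES = ("end my life", "kill myself", "self harm", "suicide")

def check_for_crisis(utterance: str) -> bool:
    text = utterance.lower()
    return any(phrase in text for phrase in CRISIS_PHRASES)

def respond(utterance: str) -> str:
    if check_for_crisis(utterance):
        # Escalate instead of continuing the scripted conversation flow.
        return ("It sounds like you may be in crisis. Please contact a crisis "
                "line or emergency services; I am a program and cannot help safely.")
    return "Tell me more about how you are feeling."
```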

AI chatbots consistently work toward passing the Turing test, that is, convincing users that they are conversing with a real person rather than a computer. Patients are led to believe the chatbot is real by empathy programmed into the chatbot and by responses that make the exchange seem like a conversation with a human; mirroring exactly how therapists behave could make this impression even stronger. However, users have a right to know with whom they are talking, and from a healthcare provider's standpoint it would be unethical to mislead them in this manner. In some societies, moreover, conversing with a computer or robot rather than a human being may be considered demeaning. To prevent this ethical dilemma, Kretzschmar and colleagues proposed that the creators of chatbots warn users, and continue to remind them, that they are talking with a computer that has a limited capacity to grasp their requirements.

The vast majority of chatbots also lack the capacity to demonstrate genuine empathy or sympathy, both essential components of psychotherapy; as a result, many individuals may not be comfortable with the use of such chatbots in mental healthcare. There is also the potential for misuse of chatbot technology, including using it to replace established services, which could exacerbate already existing health inequalities.

We lack ethical and regulatory frameworks for health interventions in general and mental health applications in particular. Recent work has attempted to address this: a group of patients, clinicians, researchers, insurance organisations, technology companies and programme officers from the US National Institute of Mental Health came together to establish a consensus statement on criteria for mental health apps [20]. The group agreed on the following standards: the protection of users' data and privacy; efficiency; user experience and adherence; and data integration.

Conclusion

Improvements to mental health chatbots' linguistic abilities are still needed: their capacity to comprehend user input and respond appropriately must improve. One of the biggest obstacles is developing a reliable method of identifying emergency situations and devising an appropriate response to them. Another open question is how to tailor chatbots to specific users; learning from users' feedback and conversations could be useful here. One aspect of tailoring services to a user's specific needs is taking their level of health literacy into account: language could be modified in style or complexity according to the information the user provides. Treatment plans, for instance, could be tailored to a patient by retrieving relevant information from their medical history. A chatbot should be able to obtain and use this kind of information on the fly.
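As a sketch of the style-and-complexity tailoring suggested above, a chatbot could select a response register from a self-reported health-literacy level. The levels, templates and function name below are hypothetical, not features of any cited system.

```python
# Hypothetical sketch: choose a response register based on the user's
# self-reported health literacy. Levels and templates are illustrative only.

TEMPLATES = {
    "plain": "This breathing exercise can help you feel calmer when stressed.",
    "clinical": ("This diaphragmatic-breathing exercise helps down-regulate the "
                 "sympathetic arousal associated with acute stress."),
}

def tailor_response(health_literacy: str) -> str:
    """Return the template matching the user's literacy level ('low'/'high')."""
    register = "clinical" if health_literacy == "high" else "plain"
    return TEMPLATES[register]

print(tailor_response("low"))  # plain-language version
```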

Evaluating chatbots for mental health is important for ensuring that they are usable and helpful. When assessing healthcare chatbots, which factors matter most, and which metrics and standards should be used? Randomized Controlled Trials (RCTs), the gold standard of "level 1" evidence in Evidence Based Medicine (EBM), would be ideal for evaluating chatbots: carried out as summative evaluations of electronic interventions, such trials establish defined health outcome measures. More studies are needed to determine how those working in mental health care can benefit from implementing chatbots.
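To make the evaluation point concrete, a summative RCT analysis of a chatbot intervention might compare symptom-score change between arms. The sketch below uses invented numbers and Welch's t-test as a stand-in for whatever analysis a real trial would pre-register.

```python
# Hedged illustration of a summative RCT comparison: pre-to-post reduction in
# a symptom score (e.g., PHQ-9) per participant, intervention vs. control.
# All numbers are invented for illustration.

from scipy import stats

chatbot_arm = [4, 6, 3, 5, 7, 2, 5, 6, 4, 5]  # positive = improvement
control_arm = [1, 2, 0, 3, 2, 1, 2, 0, 3, 1]

t_stat, p_value = stats.ttest_ind(chatbot_arm, control_arm, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.4f}")
```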


References
