The Development and Use of Chatbots in Public Health: Scoping Review
The evidence cited in most of the included studies either measured the effect of the intervention or captured surface-level, self-reported user satisfaction. There was little qualitative experimental evidence, such as participant observations or in-depth interviews, that would offer a more substantive understanding of human-chatbot interactions. As an interdisciplinary subject of study for both HCI and public health research, studies must meet the standards of both fields, which are at times contradictory [52]. Methods developed for the evaluation of pharmacological interventions, such as RCTs, were designed to assess the effectiveness of an intervention and are known in HCI and related fields [53] to be limited in the insights they provide toward better design. In the healthcare field, in addition to the above-mentioned Woebot, there are numerous chatbots, such as Your.MD, HealthTap, Cancer Chatbot, VitaminBot, Babylon Health, Safedrugbot, and Ada Health (Palanica et al. 2019).
They are not companions of the user; rather, they retrieve information and pass it on to the user. They can have a personality, can be friendly, and will probably remember information about the user, but they are not obliged or expected to do so. Intrapersonal chatbots exist within the personal domain of the user, in chat apps such as Messenger, Slack, and WhatsApp.
Facilitate post-discharge and rehabilitation care
Considering their capabilities and limitations, it is worth examining which tasks are simple and which are complicated for artificial intelligence chatbots in the healthcare industry. Companies are actively developing clinical chatbots, and language models are being constantly refined. As the technology improves, conversational agents will be able to engage in more meaningful and deep conversations with us. In the case of Tessa, a wellness chatbot gave harmful recommendations because of errors made during development and poor training data.
In addition, especially in health care, these systems have been based on theoretical and practical models and methods developed in the field. For example, in the field of psychology, so-called ‘script theory’ provided a formal framework for knowledge (Fischer and Lam 2016). Because it was a formal model already in use, it was relatively easy to translate into algorithmic form. These expert systems were part of the automated decision-making (ADM) process, that is, a process completely devoid of human involvement, which makes final decisions on the basis of the data it receives (European Commission 2018, p. 20). Conversely, health consultation chatbots are partially automated, proactive decision-making agents that guide the actions of healthcare personnel. Chatbots have the potential to address many of the current concerns regarding cancer care mentioned above.
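To make the idea of turning script-like knowledge into algorithmic form concrete, the following is a minimal sketch of a rule-based triage step in Python. The symptom fields, thresholds, and recommendations are hypothetical illustrations for exposition only; they are not clinical guidance and are not drawn from any of the cited systems.

```python
# A minimal rule-based triage step: formalized knowledge expressed as
# ordered if-then rules. All fields, thresholds, and recommendations
# below are hypothetical illustrations, not clinical guidance.
from dataclasses import dataclass

@dataclass
class Symptoms:
    fever_c: float            # body temperature in degrees Celsius
    cough_days: int           # duration of cough in days
    shortness_of_breath: bool

def triage(s: Symptoms) -> str:
    """Apply ordered if-then rules and return a recommendation."""
    if s.shortness_of_breath:
        return "urgent: contact emergency services"
    if s.fever_c >= 39.0 and s.cough_days >= 3:
        return "consult a physician within 24 hours"
    if s.fever_c >= 38.0:
        return "rest, hydrate, and monitor symptoms"
    return "self-care; no immediate action indicated"

print(triage(Symptoms(fever_c=39.2, cough_days=4, shortness_of_breath=False)))
```

In a system of this kind every decision path is explicit and auditable, which is what distinguishes classic expert systems from the statistical language models discussed below.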
Evaluation of Chatbot Design
ChatGPT and similar large language models represent the next big step for incorporating artificial intelligence into the healthcare industry. With hundreds of millions of users, these models let people quickly find out how to treat their symptoms, how to contact a physician, and so on. Chatbots can handle a large volume of patient inquiries, reducing the workload of healthcare professionals and allowing them to focus on more complex tasks.
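As a sketch of how such a model could be wired into a patient-inquiry workflow, the following assumes the OpenAI Python SDK; the model name, system prompt, and safety wording are illustrative choices, not a recommended clinical configuration.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A safety-oriented system prompt; the wording here is purely illustrative.
SYSTEM_PROMPT = (
    "You are a health-information assistant. Provide general information only, "
    "encourage the user to consult a clinician, and never offer a diagnosis."
)

def answer_inquiry(question: str) -> str:
    """Send a patient question to the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer_inquiry("How can I ease a mild sore throat at home?"))
```

Routing routine questions through a layer like this is what allows the high inquiry volumes mentioned above to be absorbed without adding clinician workload, provided the prompt and output filtering are carefully designed.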
To develop social bots, designers leverage the abundance of human–human social media conversations, which NLP modules use to model, analyse, and generate utterances, as sketched below. However, the use of therapy chatbots among vulnerable patients with mental health problems brings many sensitive ethical issues to the fore. In the last decade, medical ethicists have attempted to outline principles and frameworks for the ethical deployment of emerging technologies, especially AI, in health care (Beil et al. 2019; Mittelstadt 2019; Rigby 2019).
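As an illustration of the generation step mentioned above, this minimal sketch uses the Hugging Face transformers library with DialoGPT, a publicly available model trained on human–human Reddit exchanges; the model choice, prompt, and decoding parameters are illustrative assumptions.

```python
# pip install transformers torch
from transformers import pipeline

# DialoGPT is a causal language model trained on large volumes of
# human-human Reddit conversations, a common example of the data-driven
# response generators described above.
generator = pipeline("text-generation", model="microsoft/DialoGPT-small")

prompt = "I have been feeling anxious lately."
# pad_token_id=50256 (GPT-2's end-of-sequence id) silences a padding warning.
result = generator(prompt, max_new_tokens=40, pad_token_id=50256)
print(result[0]["generated_text"])
```

The ethical concerns raised above follow directly from this design: the model's replies are statistical continuations of its training conversations, with no built-in clinical safeguards.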