Beyond Boundaries: The Promise Of Conversational AI In Healthcare
A key element of expertise, and of its recognition, is that patients and others can trust the opinions and decisions the expert or professional offers. In the case of chatbots, the relationship runs both ways: ‘the most important factor for explaining trust’ (Nordheim et al. 2019, p. 24) seems to be expertise. People can trust chatbots if they are seen as ‘experts’ (or as possessing expertise of some kind), while recognition of that expertise in turn depends on maintaining trust or trustworthiness. Chatbot users (patients) need to see and experience the bots as ‘providing answers reflecting knowledge, competence, and experience’ (p. 24), all of which are important to trust.
Two broad types of AI applications can be distinguished in healthcare. First, there are those that use ML ‘to derive new knowledge from large datasets, such as improving diagnostic accuracy from scans and other images’. Second, ‘there are user-facing applications […] which interact with people in real-time’, providing advice and ‘instructions based on probabilities which the tool can derive and improve over time’ (p. 55). The latter, that is, systems such as chatbots, seem to complement and sometimes even substitute for HCP-patient consultations (p. 55). The systematic literature review and chatbot database search have a few limitations. Both were conducted by a single reviewer, which could have introduced bias and limited the findings.
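To make the second category concrete, here is a minimal sketch of how a user-facing tool might ‘derive and improve probabilities over time’: a Beta-Bernoulli estimate updated each time a user reports whether a piece of advice helped. The `AdviceEstimate` class and its feedback loop are illustrative assumptions, not a description of any system covered by the review.

```python
from dataclasses import dataclass

@dataclass
class AdviceEstimate:
    """Beta-Bernoulli estimate of how often a given piece of advice helps."""
    successes: int = 1  # prior pseudo-counts: an uninformative Beta(1, 1)
    failures: int = 1

    @property
    def probability(self) -> float:
        # Posterior mean of the Beta distribution.
        return self.successes / (self.successes + self.failures)

    def update(self, resolved: bool) -> None:
        # Each real-time interaction feeds back into the estimate,
        # so the tool's probabilities improve over time.
        if resolved:
            self.successes += 1
        else:
            self.failures += 1

estimate = AdviceEstimate()
for outcome in [True, True, False, True]:  # simulated follow-up reports
    estimate.update(outcome)
print(f"P(advice helps) = {estimate.probability:.2f}")
```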
Chatbots in Healthcare: Improving Patient Engagement and Experience
Eligible apps were those that were health-related, had an embedded text-based conversational agent, were available in English, and could be downloaded free of charge from the Google Play or Apple iOS store. Apps were assessed using an evaluation framework addressing chatbot characteristics and natural language processing features. Most healthbots are patient-facing, available on a mobile interface, and provide a range of functions, including health education and counselling support, assessment of symptoms, and assistance with tasks such as scheduling. Most of the 78 apps reviewed focused on primary care and mental health; only 6 (7.7%) had a theoretical underpinning, and 10 (12.8%) complied with health information privacy regulations. Our assessment indicated that only a few apps use machine learning and natural language processing approaches, despite marketing claims to the contrary. Most apps allowed only finite-state input, where the dialogue is led by the system and follows a predetermined algorithm.
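The finite-state pattern the review describes is easy to picture in code: the system leads, and user input only selects the next node in a predetermined graph. Below is a minimal sketch; the states, prompts, and advice strings are invented for illustration, not taken from any reviewed app.

```python
# Finite-state, system-led dialogue: the bot drives the conversation along a
# fixed graph, and the user's answer merely picks the next state.
DIALOGUE = {
    "start": {"prompt": "Do you have a fever? (yes/no)",
              "yes": "cough", "no": "advice_rest"},
    "cough": {"prompt": "Do you also have a cough? (yes/no)",
              "yes": "advice_clinic", "no": "advice_monitor"},
    "advice_rest":    {"prompt": "No red flags reported. Rest and hydrate."},
    "advice_monitor": {"prompt": "Monitor your temperature for 24 hours."},
    "advice_clinic":  {"prompt": "Please contact your clinic for an appointment."},
}

def run_dialogue() -> None:
    state = "start"
    while True:
        node = DIALOGUE[state]
        print(node["prompt"])
        if "yes" not in node:  # terminal state: advice given, conversation ends
            break
        answer = input("> ").strip().lower()
        if answer in ("yes", "no"):
            state = node[answer]
        # any other input repeats the same prompt

if __name__ == "__main__":
    run_dialogue()
```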
This article contributes to the discussion on the ethical challenges posed by chatbots from the perspective of healthcare professional ethics. Health-focused apps with chatbots (“healthbots”) have a critical role in addressing gaps in quality healthcare, yet there is limited evidence on how such healthbots are developed and applied in practice. Our review aims to classify the types of healthbots available, their contexts of use, and their natural language processing capabilities.
Organizations Using Healthcare Chatbots
Healthcare chatbots could also raise ethical issues, ranging from the social implications of the chatbot’s design to the types of responses the chatbot can give. The vast amounts of data generated in healthcare are a goldmine for improving patient outcomes and operational efficiency. Jelvix’s healthcare software development services are at the forefront of turning this data into actionable insights, driving the evolution of data-driven healthcare solutions. The Jelvix team has built mobile and web applications for remote patient monitoring.
While healthbots have a potential role in the future of healthcare, our understanding of how they should be developed for different settings and applied in practice is limited. There has been one systematic review of commercially available apps; it focused on the features and content of healthbots that supported dementia patients and their caregivers [34]. To our knowledge, no published review has examined the landscape of commercially available, consumer-facing healthbots across all health domains or characterized the NLP system design of such apps. This review aims to classify the types of healthbots available on the Apple iOS and Google Play app stores, their contexts of use, and their NLP capabilities. By combining chatbots with telemedicine, healthcare providers can offer patients a more personalized and convenient healthcare experience.
Review Limitations
As Nordheim et al. have pointed out, ‘the answers not only have to be correct, but they also need to adequately fulfil the users’ needs and expectations for a good answer’ (p. 25). Importantly, in addition to human-like answers, the perceived human-likeness of chatbots in general can be considered ‘as a likely predictor of users’ trust in chatbots’ (p. 25). Healthcare data are highly sensitive because of the risk of stigmatization and discrimination if the information is wrongfully disclosed. The ability of chatbots to ensure privacy is especially important, as vast amounts of personal and medical information are often collected without users being aware, including through voice recognition and geographical tracking. The public’s lack of confidence is not surprising, given the increased frequency and magnitude of high-profile security breaches and inappropriate uses of data [95]. Unlike financial data, which become obsolete once stolen, medical data are particularly valuable because they do not perish.
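Given how much sensitive information chatbots collect, one basic safeguard is scrubbing obvious identifiers from transcripts before they are stored or logged. The sketch below is a deliberately simplified illustration with hypothetical regex patterns; real de-identification needs far broader coverage (names, addresses, device identifiers) plus encryption at rest and access controls.

```python
import re

# Hypothetical patterns for illustration only; production de-identification
# requires much more than three regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(message: str) -> str:
    """Replace obvious identifiers with placeholders before storing a transcript."""
    for label, pattern in PII_PATTERNS.items():
        message = pattern.sub(f"[{label}]", message)
    return message

print(redact("Call me at 555-867-5309 or mail jane.doe@example.com"))
# -> "Call me at [PHONE] or mail [EMAIL]"
```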
- In addition to encouraging more high-level studies (i.e., RCTs), authors need to be more consistent in reporting trial outcomes.
- Chatbots have been implemented in remote patient monitoring for postoperative care and follow-ups (see the scheduling sketch after this list).
- Thus, one should be cautious when providing and marketing applications such as chatbots to patients.
- By facilitating preliminary conversations about embarrassing and stigmatized symptoms, medical chatbots can play a pivotal role in influencing whether or not someone seeks medical guidance.
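As noted in the list above, chatbots are used for postoperative follow-ups. One plausible shape for that feature is a fixed check-in schedule generated from the surgery date; the intervals and questions below are assumptions for illustration, not a clinical protocol.

```python
from datetime import date, timedelta

# Illustrative follow-up schedule for a post-operative monitoring bot.
CHECK_IN_OFFSETS_DAYS = [1, 3, 7, 14, 30]
QUESTIONS = [
    "On a 0-10 scale, how would you rate your pain today?",
    "Any redness, swelling, or discharge at the incision site? (yes/no)",
]

def build_schedule(surgery_date: date) -> list[tuple[date, list[str]]]:
    """Return (check-in date, questions) pairs the bot should send."""
    return [(surgery_date + timedelta(days=d), QUESTIONS)
            for d in CHECK_IN_OFFSETS_DAYS]

for when, questions in build_schedule(date(2024, 3, 1)):
    print(when.isoformat(), "->", questions[0])
```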
One such tool has been reported to be 95% accurate in differentiating between normal and cancerous images. A study of three mobile app-based chatbot symptom checkers, Babylon (Babylon Health, Inc), Your.md (Healthily, Inc), and Ada (Ada, Inc), found that sensitivity remained low, at 33%, for the detection of head and neck cancer [28]. The number of studies assessing the development, implementation, and effectiveness of chatbots is still relatively limited compared with the diversity of chatbots currently available. Further studies are required to establish their efficacy across various conditions and populations.
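For context, the 33% sensitivity figure follows directly from the standard confusion-matrix definition: the share of actual cases the symptom checker flags. The counts below are illustrative only, chosen to reproduce the reported rate rather than taken from the study.

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Sensitivity (recall): fraction of actual cases the checker flags."""
    return true_positives / (true_positives + false_negatives)

# Illustrative counts: flagging 33 of 100 actual head-and-neck cancer cases
# reproduces the 33% sensitivity reported in [28].
print(f"{sensitivity(true_positives=33, false_negatives=67):.0%}")  # -> 33%
```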
While they can bridge accessibility gaps in care and offer initial guidance, professional therapy or counseling remains essential for in-depth support. A well-rounded mental health strategy combines the immediacy and accessibility of chatbots with the depth of professional care. AI-driven chatbots are becoming an increasingly popular component of employee benefits packages, aiming to fill a critical gap in mental health support. About a third of US employers currently provide ‘digital therapeutics’ (DTx), and an additional 15% are considering adding such a solution in 2024 or 2025. It is also not realistic to expect every patient to embrace digital-care solutions beyond their current use in this pandemic. Having multiple points of entry for care (chatbots, telehealth visits, in-person consultations) gives patients a valuable choice in how they receive it, ultimately boosting their confidence in, and loyalty to, their care provider.

The cognitive behavioral therapy-based chatbot SMAG, which supported users over the Facebook social network, resulted in a 10% higher cessation rate compared with control groups [50]. Motivational interview-based chatbots have also shown promising results: a significant number of patients reported increased confidence and readiness to quit smoking after one week [92]. No studies have assessed the effectiveness of chatbots for smoking cessation across ethnic, racial, geographic, or socioeconomic groups. Creating chatbots with prespecified answers is simple; the problem becomes far harder when answers are open-ended. Bella, one of the most advanced text-based chatbots on the market, advertised as a coach for adults, gets stuck when user responses fall outside its prompts [51]. Despite these uncertainties, chatbots hold potential for people looking to quit smoking, as users find them more acceptable than general practitioners when dealing with stigmatized health issues [7].
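When a trial reports a cessation-rate difference such as the 10% figure above, the comparison is typically a two-proportion test. The sketch below uses invented arm sizes and quit counts purely to show the calculation; these are not the trial’s data.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(success_a: int, n_a: int,
                     success_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test for a difference in cessation rates between arms."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a - p_b, p_value

# Illustrative arm sizes and quit counts (not the trial's actual data):
diff, p = two_proportion_z(success_a=60, n_a=200, success_b=40, n_b=200)
print(f"difference = {diff:.0%}, p = {p:.3f}")  # -> difference = 10%, p = 0.021
```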