Exploring Public Trust in AI Healthcare Systems: A Questionnaire-based Study on Bias, Reliability, and Privacy Concerns
DOI: https://doi.org/10.61424/jcsit.v2i2.469
Keywords: Artificial Intelligence; Healthcare Systems; Public Trust; Reliability; Bias; Privacy; Questionnaire Survey
Abstract
The healthcare industry is increasingly adopting artificial intelligence (AI) to support diagnosis, manage virtual consultations, and perform predictive analytics. Despite the growing use of these systems, their value depends on public trust. This study examines public perceptions of AI in healthcare, focusing on reliability, bias, and privacy concerns. A questionnaire survey collected 1,019 responses from members of the public across age groups, genders, educational levels, and professions. Results indicate that 88.6% of respondents had used healthcare services with AI embedded in them, such as chatbots, mobile applications, or internet-aided diagnostic tools. Opinions on the reliability of AI systems varied: 39.9% considered them as reliable as human professionals, 48.3% less reliable, and 11.8% more reliable. Skepticism about fairness was high: while 46.5% of respondents believed that AI treats all people equally fairly, 68% assumed that the available data are incomplete or biased in favor of Western populations. Data protection and privacy were among the major concerns, with 68.4% worried that their health-related information would not be adequately safeguarded. Transparency was also regarded as essential, with 89% of respondents indicating that it is critical for AI systems to clearly communicate their limitations. Overall, the results show that consumers are cautiously but steadily accepting AI in the healthcare industry. Although users value the convenience, they lack trust in the accuracy, fairness, and protection of their confidential information. The study suggests that building public trust requires inclusive datasets representing heterogeneous groups, effective privacy controls, and transparency. To gain more refined insight into whether trust in AI health systems increases or decreases over time, future studies should examine cross-cultural perceptions and conduct longitudinal analyses.
License
Copyright (c) 2025 Emran Hossain, Naeem Aziz, Jakaria Alam, Mysha Islam Kakon, Fazlul Haque

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.