Exploring Public Trust in AI Healthcare Systems: A Questionnaire-based Study on Bias, Reliability, and Privacy Concerns

Authors

  • Emran Hossain Chief Instructor & Department Head, Department of Computer Science & Technology, Confidence Polytechnic Institute, Bangladesh
  • Naeem Aziz Research Scholar, Department of Computer Science & Engineering, Daffodil International University, Bangladesh https://orcid.org/0009-0003-5779-9597
  • Jakaria Alam MSc, Department of Data Science, La Trobe University, Australia
  • Mysha Islam Kakon BSc, Department of Computer Science & Engineering, Daffodil International University, Bangladesh
  • Fazlul Haque BSc, Department of Computer Science & Engineering, Daffodil International University, Bangladesh

DOI:

https://doi.org/10.61424/jcsit.v2i2.469

Keywords:

Artificial Intelligence; Healthcare Systems; Public Trust; Reliability; Bias; Privacy; Questionnaire Survey

Abstract

The healthcare industry is increasingly adopting artificial intelligence (AI) to support diagnosis, manage virtual consultations, and perform predictive analytics. Despite the growing use of these systems, their value depends on public trust. This study examines public perceptions of AI in healthcare, focusing on the dimensions of reliability, bias, and privacy. A questionnaire survey collected 1,019 responses from members of the public spanning all age groups, genders, educational levels, and professions. Results indicate that 88.6% of respondents had used healthcare services with embedded AI, such as chatbots, mobile applications, or internet-aided diagnostic tools. Opinions on the reliability of AI systems varied: 39.9% considered them as reliable as human professionals, 48.3% less reliable, and 11.8% more reliable. Skepticism about fairness was high: although 46.5% of respondents believed that AI treats all people equally fairly, 68% assumed that the available data are incomplete or biased toward Western populations. Data protection and privacy were major concerns, with 68.4% worried that their health-related information would not be adequately safeguarded. Transparency was also deemed critical: 89% of respondents indicated that AI systems should clearly disclose their limitations. Overall, the results show that consumers are cautiously but steadily accepting AI in healthcare: although users value its convenience, they lack trust in its accuracy, fairness, and the protection of their confidential information.
The study suggests that building public trust requires inclusive datasets representing heterogeneous groups, effective privacy controls, and transparency. To gain deeper insight into whether trust in AI health systems increases or decreases over time, future studies should examine cross-cultural perceptions and conduct longitudinal analyses.

Published

2025-11-01

How to Cite

Hossain, E., Aziz, N., Alam, J., Kakon, M. I., & Haque, F. (2025). Exploring Public Trust in AI Healthcare Systems: A Questionnaire-based Study on Bias, Reliability, and Privacy Concerns. Journal of Computer Science and Information Technology, 2(2), 12–22. https://doi.org/10.61424/jcsit.v2i2.469