IMPLEMENTING CHATBOTS FOR CONSUMER COMPLAINT RESPONSE


Babaev Djahongir Ismailbekovich

Abstract

Chatbots offer companies potential efficiency gains in consumer complaint handling through real-time automated intake and responses. Without adequate human oversight and governance, however, conversational AI also risks frustrating consumers, introducing bias, and breaching regulatory requirements. This legal analysis examines emerging use cases for complaint chatbots, reviewing capabilities such as 24/7 availability, process standardization, and multilingual support. Because chatbots struggle to interpret nuance and rarely provide satisfactory resolution on their own, a hybrid approach in which human agents manage complex disputes likely balances the benefits and risks most responsibly. Further empirical research on consumer perceptions and responsible design principles can help guide the ethical integration of complaint chatbots.

Article Details

How to Cite
Babaev Djahongir Ismailbekovich. (2024). IMPLEMENTING CHATBOTS FOR CONSUMER COMPLAINT RESPONSE. Proceedings of International Conference on Modern Science and Scientific Studies, 3(2), 440–445. Retrieved from https://econferenceseries.com/index.php/icmsss/article/view/3992
