
Telecommunications System & Management

ISSN: 2167-0919

Open Access

Commentary - (2023) Volume 12, Issue 3

The Impact of AI Focus on Individuals: User Feedback and Developer Priorities

William J. Bingley*
*Correspondence: William J. Bingley, Department of Psychology, Bisha University, Al Nakhil, Bisha 67714, Saudi Arabia

Received: 01-May-2023, Manuscript No. jtsm-23-104606; Editor assigned: 03-May-2023, Pre QC No. P-104606; Reviewed: 15-May-2023, QC No. Q-104606; Revised: 22-May-2023, Manuscript No. R-104606; Published: 29-May-2023, DOI: 10.37421/2167-0919.2023.12.382
Citation: Bingley, William J. “The Impact of AI Focus on Individuals: User Feedback and Developer Priorities.” Telecommun Syst Manage 12 (2023): 382
Copyright: © 2023 Bingley WJ. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Abstract

The goal of human-centered artificial intelligence (HCAI) is to shift the focus of AI development away from technology and toward people. However, it is unclear whether the principles and practices of HCAI currently in use are sufficient to achieve this objective. To find out, we conducted a qualitative survey of AI developers and users and performed a thematic content analysis of their responses to learn more about their distinct priorities and experiences. This allowed us to compare user experiences with HCAI in principle and in practice. We found that positive user experiences were characterized by the social impact of AI, but that this was less of a priority for developers. Our findings also indicated that making AI more human-centered requires enhancing its user-centered functionality: users were more concerned about understanding AI than about being understood by it. Developers showed an "avoidance of harm" perspective, attending to ethical, privacy, and security concerns in accordance with HCAI guidelines. Overall, our findings suggest that for HCAI to be truly human-centered, it must place a greater emphasis on the needs of individuals.

Keywords

Artificial intelligence • Human-centered AI • User experience • Developers • Human needs

Introduction

Systems that incorporate artificial intelligence (AI) have become an integral part of many people's lives, in ways that have both positive and negative effects on users, other people, and society. AI systems are used in a wide range of applications, from connecting people and providing entertainment to assisting in the creation and distribution of vaccines. However, incorporating AI into systems can also cause significant harm, as demonstrated by cases in which individuals were wrongfully denied unemployment benefits based on facial recognition software, fired by a performance algorithm due to circumstances beyond their control, or had their personal information leaked by a chatbot. The trend toward human-centered AI (HCAI), which aims to put the person rather than the technology at the center of AI development, reflects the growing significance and integration of AI in people's lives. However, HCAI guidelines typically concentrate on broad concepts such as human values, ethics, and privacy, which can be too abstract to be practical (Shneiderman). In addition, it is unclear to what extent these guidelines, and the procedures followed when attempting to implement them, are truly "human-centered" in the sense of focusing on the ways in which AI affects people. In this paper, we compare HCAI in principle and practice with the experiences of AI users in order to determine whether HCAI is human-centered [1].

Description

As described by Shneiderman, the drive of HCAI to put humans at the heart of AI represents a ‘second Copernican revolution’. What this means specifically is a matter of some debate. For Shneiderman, HCAI focuses on human experiences, satisfaction, and needs, with the aim of “amplifying, augmenting, and enhancing human performance in ways that make systems reliable, safe, and trustworthy” so as to “support human self-efficacy, encourage creativity, clarify responsibility, and facilitate social participation”. However, other HCAI researchers construe the meaning of ‘human-centered’ in different ways. For example, Gillies and colleagues focus on the human work that goes into algorithm training and development, while Yang and colleagues emphasize the societal impact of AI. Moreover, for many researchers a key feature of HCAI is that AI should be transparent and explainable. We propose that all of these conceptualizations of HCAI follow fundamentally from treating the human as the main focus of AI development, and that they differ largely as a function of the context in which this approach is applied. Seeking to formalize these developments, a number of guidelines have been proposed by governments, organizations, and researchers to translate the ideals of HCAI into practice. Many governments have proposed formal HCAI guidelines. For example, the European Union lists seven key requirements that AI systems should meet in order to be trustworthy, including being transparent, having accountability, and promoting societal and environmental wellbeing [2].

Similarly, Australia has an AI ethics framework that includes principles such as fairness, human-centered values, and accountability, while China has released the ‘Beijing AI Principles’, which include doing good, being responsible, and being inclusive. Among private companies, Microsoft has led the way in developing guidelines for ethical AI: its principles emphasize fairness, inclusiveness, reliability, safety, transparency, privacy, security, and accountability. Finally, various researchers and research teams have proposed guidelines for AI. For example, Floridi and colleagues provide an ethical framework called AI4People that incorporates principles such as beneficence, justice, and explicability [3].

Unfortunately, despite this proliferation of guidelines, the ideals of HCAI have proven difficult to put into practice. Speaking to this point, Shneiderman has argued that while ethical guidelines are a step in the right direction, they are often too vague to be helpful for software engineers. Similarly, Mittelstadt has criticized AI ethics for consisting of vague principles and lofty value statements that lack the detail and precision needed to formulate specific recommendations for improving practice. Accordingly, it is not clear that HCAI in principle is reflected in the practices of AI developers [4].

To determine whether HCAI is truly human-centered in theory and practice, we examined developer priorities and user experiences; what follows is a summary of our most significant findings and their implications for HCAI. HCAI guidelines generally matched the priorities of developers. For instance, developers regarded ethics, privacy, security, and understandability as significant aspects to consider when developing AI, all of which feature prominently in the guidelines currently in place. Given that many participants in the developer sample were university students, it is encouraging that emerging developers appear to have internalized these values [5].
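As a purely illustrative aside, the comparison described above can be thought of as contrasting how often coded themes occur in each group's responses. The short Python sketch below shows one minimal way such a comparison might be computed once responses have been hand-coded; the theme labels, example data, and function names are hypothetical illustrations, not the authors' data or code.

    # Hypothetical sketch: comparing theme frequencies between developer and
    # user responses after a thematic content analysis. All labels and counts
    # below are illustrative, not data from this study.
    from collections import Counter

    # Each response has already been hand-coded with one or more theme labels.
    developer_codes = [["ethics", "privacy"], ["security"], ["understandability", "ethics"]]
    user_codes = [["social impact"], ["understanding AI", "social impact"], ["functionality"]]

    def theme_frequencies(coded_responses):
        # Share of responses in which each theme appears at least once.
        counts = Counter(theme for codes in coded_responses for theme in set(codes))
        return {theme: n / len(coded_responses) for theme, n in counts.items()}

    dev_freq = theme_frequencies(developer_codes)
    user_freq = theme_frequencies(user_codes)

    # Themes prominent for one group but rare for the other point to the kind
    # of priority gap discussed in this paper.
    for theme in sorted(set(dev_freq) | set(user_freq)):
        print(f"{theme:20s} developers={dev_freq.get(theme, 0):.2f} users={user_freq.get(theme, 0):.2f}")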

Conclusion

In this paper we reviewed HCAI theory and guidelines and compared them with the results of a survey assessing developer priorities and user experiences. Developer priorities were aligned to some extent with current guidelines, although our results suggest that AI, both in principle and in practice, falls short of the objectives set out by HCAI theory. Promisingly, developers were aware of the risks of AI systems and, beyond this, had an intuitive understanding that improving functionality from the user's perspective is an important goal for HCAI. However, our results suggest that HCAI stands to benefit from a better understanding of the social impacts of AI, as these are particularly important for positive user experiences and may ultimately shape the AI ethics landscape. Furthermore, to bridge the gap between the goals of HCAI and current practice, we propose that researchers and guidelines should focus more on what people need in their lives and how AI helps or hinders the satisfaction of those needs. Ultimately, we suggest that to be truly ‘human-centered’, HCAI needs to focus much more on humans than it currently does.

Acknowledgement

None.

Conflict of Interest

None.

References

  1. Naser, M. Z. "Mechanistically informed machine learning and artificial intelligence in fire engineering and sciences." Fire Technol 57 (2021): 2741-2784.
  2. Naser, M. Z., Chris Lautenberger and Erica Kuligowski. "Special Issue on 'Smart Systems in Fire Engineering'." Fire Technol (2021): 1-4.
  3. Klein, R. A. "SFPE handbook of fire protection engineering (1995)." Fire Saf J 1 (1997): 61-63.
  4. Oliveira, Rafael G., João Paulo C. Rodrigues, João Miguel Pereira and Paulo B. Lourenço, et al. "Experimental and numerical analysis on the structural fire behaviour of three-cell hollowed concrete masonry walls." Eng Struct 228 (2021): 111439.
  5. Friaa, Houda, Myriam Laroussi Hellara, Ioannis Stefanou and Karam Sab, et al. "Artificial neural networks prediction of in-plane and out-of-plane homogenized coefficients of hollow blocks masonry wall." Meccanica 55 (2020): 525-545.
