Factors affecting the usage of ChatGPT: Advancing an information technology acceptance framework
About This Presentation
Few studies have explored the use of artificial intelligence-enabled (AI-enabled) large language models (LLMs). This research addresses this knowledge gap. It investigates perceptions and intentional behaviors to utilize AI dialogue systems like Chat Generative Pre-Trained Transformer (ChatGPT). A survey questionnaire comprising measures from key information technology adoption models was used to capture quantitative data from a sample of 654 respondents. A partial least squares (PLS) approach assesses the constructs' reliabilities and validities. It also identifies the relative strength and significance of the causal paths in the proposed research model. The findings from SmartPLS4 report highly significant effects in this empirical investigation, particularly between source trustworthiness and performance expectancy from AI chatbots, as well as between perceived interactivity and intentions to use this algorithm, among others. In conclusion, this contribution puts forward a robust information technology acceptance framework that clearly evidences the factors that entice online users to habitually engage with text-generating AI chatbot technologies. It implies that, although they may be considered useful interactive systems for content creators, there is scope to continue improving the quality of their responses (in terms of their accuracy and timeliness) to reduce misinformation, social biases, hallucinations and adversarial prompts.
Size: 1.19 MB
Language: en
Added: May 17, 2024
Slides: 24 pages
Slide Content
This presentation is drawn from: Camilleri, M.A. (2024). Factors affecting performance expectancy and intentions to use ChatGPT: Using SmartPLS to advance an information technology acceptance framework, Technological Forecasting and Social Change, https://doi.org/10.1016/j.techfore.2024.123247 By M.A. CAMILLERI, Ph.D. (Edinburgh), Department of Corporate Communication, Faculty of Media and Knowledge Sciences, University of Malta, MALTA. Factors affecting the usage of ChatGPT: Advancing an information technology acceptance framework
Contents Introduction Theoretical underpinnings Methodology Data analysis Conclusions
Introduction Academic colleagues and practitioners are increasingly raising awareness of different uses of generative artificial intelligence (GenAI) dialogue systems like service chatbots and/or virtual assistants (Baabdullah et al., 2022; Balakrishnan et al., 2022; Brachten et al., 2021; Hari et al., 2022; Li et al., 2021; Lou et al., 2022; Malodia et al., 2021; Sharma et al., 2022). Some of them are evaluating their strengths and weaknesses, including those of OpenAI's Chat Generative Pre-Trained Transformer (ChatGPT) (Farrokhnia et al., 2023; Gill et al., 2024; Kasneci et al., 2023).
What is ChatGPT? GPT-3.5 is a free-to-use natural language processing chatbot driven by GenAI. It was optimized for dialogue by using Reinforcement Learning from Human Feedback (RLHF). Its models are trained on vast amounts of data, including human-created conversations sourced from the Internet.
Research questions This study's focused research questions are: RQ1: How and to what extent are information quality and source trustworthiness influencing the online users' performance expectancy from ChatGPT? RQ2: How and to what extent are their perceptions about ChatGPT's interactivity, performance expectancy, effort expectancy, as well as their social influences, affecting their intentions to continue using it? RQ3: How and to what degree is the performance expectancy construct mediating the effort expectancy–intentions link for this GenAI technology?
Theoretical underpinnings This research builds on various conceptual frameworks: Intentions to use (information) technology – Theory of Reasoned Action (TRA), Theory of Planned Behavior (TPB), Technology Acceptance Model (TAM/TAM2/TAM3), Unified Theory of Acceptance and Use of Technology (UTAUT/UTAUT2); Performance expectancy – UTAUT/UTAUT2; Effort expectancy – UTAUT/UTAUT2; Social influences – UTAUT/UTAUT2; Information quality – Elaboration Likelihood Model (ELM), Information Adoption Model (IAM); Source trustworthiness (i.e. a peripheral cue) – ELM, IAM; Perceived interactivity – Synchronous Technology Adoption Model (STAM), Interactive Technology Adoption Model (ITAM). Figure 1 provides a graphical illustration of the proposed research model, entitled: The Information Technology Acceptance Framework.
Methodology Primary data were collected through an online survey questionnaire disseminated via email among members of staff and students who were enrolled in full-time and part-time courses at a Southern European university, during the second semester of 2022–2023. More than 13,200 research participants were targeted. This empirical study complied with the research ethics policies of the higher educational institution as well as with the EU's (2016) General Data Protection Regulation (GDPR).
The survey administration The research participants were expected to clearly indicate the extent of their agreement with the survey's measuring items (statements) on a five-point Likert scale, where 1 represented "strongly disagree" and 5 referred to "strongly agree". The survey was pilot-tested among a small group of academic colleagues. The list of measures (and their sources), their corresponding items, as well as a definition of each construct, are featured in Table 1. The research participants disclosed their demographic information, including their gender as well as their age (by choosing one of five age groups), in the last part of the survey. They also indicated the highest qualification that they had attained at the time of this study.
The profile of the respondents After a few weeks, there were six hundred and fifty-four (n = 654) respondents who confirmed (through a filter question) that they had used ChatGPT. The frequency table reported 292 males, 338 females and 20 participants who opted not to indicate their gender. The research participants were categorized into 5 age groups (18–28; 29–39; 40–50; 51–61; over 62). The majority of them were between 18 and 28 years (n = 318). The second largest group involved middle-aged individuals who were between 40 and 50 years (n = 128). Most of the respondents (n = 640) reported that they had completed an undergraduate level of education.
Data analysis The descriptive statistics The findings reported that, in the main, the research participants agreed with the statements that were presented to them in the survey questionnaire. The mean (M) values were mostly above 3 (out of 5). Whilst EE1 (M = 4.239) and EE2 (M = 4.18) had the highest mean scores, PI2 (M = 2.908) and IQ2 (M = 2.911) reported the lowest means. The standard deviation (SD) values were relatively low, as the highest figure was 1.216 (for SI1).
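The item-level means and standard deviations reported above can be reproduced with a few lines of code. This is an illustrative sketch only: the item names (EE1, EE2, PI2, IQ2) follow the survey's labeling, but the response values below are made-up Likert scores, not the study's data.

```python
# Hypothetical sketch: descriptive statistics for five-point Likert items.
# The responses are invented for demonstration; only the item names
# (EE1, EE2, PI2, IQ2) come from the study's survey.
import pandas as pd

responses = pd.DataFrame({
    "EE1": [5, 4, 4, 5, 3],
    "EE2": [4, 4, 5, 4, 4],
    "PI2": [3, 2, 3, 3, 4],
    "IQ2": [3, 3, 2, 3, 3],
})

# One row per item, with its mean (M) and standard deviation (SD)
summary = responses.agg(["mean", "std"]).T
print(summary)
```

In the study itself these figures would come from the 654 survey responses rather than a toy table.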
Results from the SmartPLS algorithm (i.e., a composite-based partial least squares approach) A collinearity assessment revealed that there was no evidence of common method bias in this study, as the variance inflation factors (VIFs) were lower than (<) 3.3. The outer loadings ranged between 0.653 and 0.941. The findings confirmed that the reliability values were higher than (>) 0.7. The average variance extracted (AVE) figures were > 0.6.
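The reliability checks named on this slide follow standard formulas that can be sketched directly from a construct's standardized outer loadings. This is not SmartPLS's code; the loadings below are hypothetical values chosen to fall within the 0.653–0.941 range reported above.

```python
# Hypothetical sketch of the slide's reliability checks:
#   AVE = mean of squared loadings (should exceed 0.6 here)
#   CR  = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
#         (should exceed 0.7 here)
def ave(loadings):
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    s = sum(loadings) ** 2
    e = sum(1 - l ** 2 for l in loadings)  # error variance per indicator
    return s / (s + e)

# Example loadings within the range reported on this slide (0.653-0.941)
loadings = [0.81, 0.86, 0.79]
print(round(ave(loadings), 3))                    # exceeds the 0.6 threshold
print(round(composite_reliability(loadings), 3))  # exceeds the 0.7 threshold
```

With these example loadings, AVE works out to about 0.673 and composite reliability to about 0.861, both clearing the thresholds the slide cites.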
Results from the SmartPLS algorithm (2) The constructs' discriminant validities were tested through (i) Fornell and Larcker's (1981) criterion as well as via (ii) the heterotrait-monotrait ratio (HTMT) procedure (Henseler et al., 2015). The former reported that the square roots of AVE (in bold) were higher than the other correlation values (within the same columns). In addition, the latter HTMT values were lower than 0.9. The PLS algorithm clearly indicated the factors' predictive power (R²) and shed light on the values of f². Source trustworthiness had the highest effect on performance expectancy, where f² = 0.3. Other noteworthy effects were reported between perceived interactivity and intentions to use ChatGPT (f² = 0.245), and between effort expectancy and performance expectancy (f² = 0.145). Figure 2 depicts the path coefficients and the R² results of this empirical investigation.
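The two discriminant-validity checks named above can be illustrated with a small sketch. All numbers here are invented for demonstration; only the decision rules (square root of AVE vs. inter-construct correlations, and HTMT below 0.9) come from the slide.

```python
# Hypothetical sketch of the two discriminant-validity tests.
import numpy as np

# Fornell-Larcker: each construct's sqrt(AVE) must exceed its
# correlations with every other construct.
def fornell_larcker_ok(ave_values, construct_corr):
    sqrt_ave = np.sqrt(ave_values)
    k = len(ave_values)
    return all(sqrt_ave[i] > abs(construct_corr[i, j])
               for i in range(k) for j in range(k) if i != j)

# HTMT (Henseler et al., 2015): mean heterotrait item correlation
# divided by the geometric mean of the two monotrait means.
def htmt(item_corr, idx_a, idx_b):
    hetero = item_corr[np.ix_(idx_a, idx_b)].mean()
    mono_a = item_corr[np.ix_(idx_a, idx_a)][np.triu_indices(len(idx_a), 1)].mean()
    mono_b = item_corr[np.ix_(idx_b, idx_b)][np.triu_indices(len(idx_b), 1)].mean()
    return hetero / np.sqrt(mono_a * mono_b)

# Invented figures for two constructs (A: items 0-1, B: items 2-3)
ave_values = np.array([0.65, 0.70])
construct_corr = np.array([[1.0, 0.5],
                           [0.5, 1.0]])
item_corr = np.array([[1.00, 0.80, 0.40, 0.45],
                      [0.80, 1.00, 0.42, 0.50],
                      [0.40, 0.42, 1.00, 0.75],
                      [0.45, 0.50, 0.75, 1.00]])

print(fornell_larcker_ok(ave_values, construct_corr))   # criterion satisfied
print(round(htmt(item_corr, [0, 1], [2, 3]), 3))        # below the 0.9 threshold
```

In the study, SmartPLS performs these checks across all seven constructs at once; the sketch shows the logic for a single pair.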
Results from the SmartPLS bootstrapping procedure The bootstrapping procedure was utilized to examine the hypotheses (H1–H7) of this study. The findings confirmed the robustness of the proposed structural model. In sum, there were highly significant effects between the exogenous and endogenous constructs, as indicated in Table 2. Table 3 sheds light on the mediated effects (of performance expectancy) in the proposed research model.
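The bootstrapping idea behind these significance tests can be sketched in a few lines. This is a minimal illustration on simulated data with a simple OLS path, not SmartPLS's full PLS-SEM estimation: resample the cases with replacement, re-estimate the path coefficient each time, and derive a t-value from the bootstrap standard error.

```python
# Minimal bootstrap illustration (simulated data, simple regression path;
# not the study's actual estimation). The sample size n = 654 matches the
# study; the true path of 0.45 is an assumed value for the simulation.
import numpy as np

rng = np.random.default_rng(42)
n = 654
x = rng.normal(size=n)             # stand-in exogenous construct scores
y = 0.45 * x + rng.normal(size=n)  # stand-in endogenous construct scores

def path_coefficient(x, y):
    return np.polyfit(x, y, 1)[0]  # slope of a simple regression

# Resample with replacement and re-estimate the path each time
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(path_coefficient(x[idx], y[idx]))
boot = np.array(boot)

# t-value = original estimate / bootstrap standard error;
# |t| > 1.96 implies p < 0.05 (two-tailed)
t_value = path_coefficient(x, y) / boot.std(ddof=1)
print(round(t_value, 2))
```

SmartPLS applies the same resampling logic to the whole structural model at once (typically with 5,000 or more subsamples), which is how the T statistics in Table 2 are obtained.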
Table 3. The mediated effect of performance expectancy on [effort expectancy–intentions] (C) M.A. Camilleri
Conclusions Theoretical implications The results from this study report that source trustworthiness–performance expectancy (with β = 0.450, T = 8.477, p < 0.001) was the most significant path in this study. Similar effects were also evidenced in previous IAM theoretical frameworks (Kang and Namkung, 2019; Onofrei et al., 2022), as well as in a number of studies related to TAM (Assaker, 2020; Chen and Aklikokou, 2020; Shahzad et al., 2018) and/or to UTAUT/UTAUT2 (Lallmahomed et al., 2017). In addition, this research also reports that information quality (also drawn from IAM, like source trustworthiness) significantly affects the respondents' performance expectancy from ChatGPT (where β = 0.158, T = 2.966, p = 0.003). Yet, in this case, this link was weaker than the former. The findings suggest that the individuals' perceptions about the interactivity of ChatGPT affected their intentions to use it (as β = 0.355, T = 8.255, p < 0.001). Perceived interactivity–intentions was the strongest direct antecedent that predicted behavioral intentions.
Managerial implications OpenAI admits that ChatGPT's GPT-3.5 outputs may be inaccurate, untruthful and misleading at times. This issue was reflected in the results. The company clarifies that its algorithm is not connected to the internet, and that it can occasionally produce incorrect answers (OpenAI, 2023a). It posits that GPT-3.5 has limited knowledge of the world and events after its cut-off date (January 2022) and may also occasionally produce harmful instructions or biased content. In addition, OpenAI (2023b) indicated that its GPT-4 still had many known limitations that the company was working to address, such as "social biases and adversarial prompts", at the time of writing/revising the accepted article (i.e. December 2023). Evidently, works are still in progress at OpenAI. OpenAI recommends checking whether its chatbot's responses are accurate, and letting the company know when it answers incorrectly, by using the "Thumbs Down" button.
Limitations and future research Unlike longitudinal studies, such a research instrument provides a snapshot of the research participants' perceptions at a specific point in time. As a result, this quantitative methodology may lend itself to possible limitations. Some colleagues argue that cross-sectional surveys are prone to common method variance (CMV) (see Podsakoff et al., 2023). In this case, the findings confirmed that the variance inflation factors were lower than 3.3, as per the recommended threshold (Hwang et al., 2023). Moreover, the results reported appropriate reliability, as well as convergent and discriminant validity values. This research confirms the robustness of the proposed theoretical framework, as all hypotheses were supported. Hence, future researchers are invited to replicate this study in different settings. In the future, other scholars could rely on the measures that were used in this study. There is scope for researchers to continue investigating the conversational (verbal) capabilities as well as the anthropomorphic (visual and vocal) features of chatbots. Besides, they are also urged to explore the governments' regulatory and quasi-regulatory interventions (to shed light on their principles, soft and hard laws) in this regard.
The full reference list is available here: Suggested citation: Camilleri, M.A. (2024). Factors affecting performance expectancy and intentions to use ChatGPT: Using SmartPLS to advance an information technology acceptance framework, Technological Forecasting and Social Change, https://doi.org/10.1016/j.techfore.2024.123247 [THIS IS AN OPEN-ACCESS ARTICLE].
Thank you for your attention. Please feel free to ask any questions.