
Trust in the chatbot: a semi-human relationship

Abstract

Today, the use of chatbots for different functions across industries has become an attractive business opportunity for companies. Chatbots are a promising type of interface, so it is necessary to understand how customers interact with retailers' interfaces in order to provide them with a better experience. In this study, we mobilise two theories, the Stimulus-Organism-Response theory and Social Presence Theory, to formulate our research hypotheses. We contribute to the interactive marketing and artificial intelligence literature by focusing on an emerging interactive technology: text chatbots. Our aim is to test the hedonic determinants of consumer trust in text chatbots by integrating the social and emotional aspects of this interaction. We also examine the moderating effects of text chatbot disclosure and task complexity. Data were collected through a questionnaire survey of 353 randomly selected participants and analysed using the structural equation modelling technique. First, the findings reveal that empathy and friendliness are major hedonic predictors of consumers' trust in text chatbots. Second, the results demonstrate that the chatbot's task complexity and disclosure partially moderate the empathy-trust relationship and the usability-trust relationship. We make significant contributions to interactive marketing research and artificial intelligence by focusing on new interactive technologies such as text-based chatbots. Our study is one of the first to examine the hedonic determinants of consumer trust in text-based chatbots (1); previous research has concentrated on the utilitarian use of chatbots for digital customer service. Our study also investigates the moderating effects of human-chatbot interaction (2). These two contributions make our research original. The findings give additional information that e-service providers and chatbot developers may use to improve their services, understand their effects on user experience, and guide strategy development and relationship building.

Introduction

Today, many applications of artificial intelligence are still at the conceptual stage and have not yet generated much added value. Despite this limited practical applicability, a number of projects have been launched and adopted by companies such as Adecco in various sectors. Intelligent customer relationship management is one of the main tasks of digital marketing, which must take advantage of chatbot technology [11].

For starters, digital communication has expanded significantly on instant-messaging apps such as Facebook Messenger, WhatsApp, Snapchat, and Skype, which represents business potential. Companies should take this opportunity to use chatbots to improve their existing services. Chatbot use has increased in recent years, proving its value in customer-business relationships [11].

Chatbot technology is now one of the most common conversational agents [2]. In some sectors, communication in the form of chatbots may replicate human interactions via text conversations or voice instructions, bringing unparalleled economic possibilities [11, 52].

As a result, the use of chatbot technology is growing exponentially. This is justified by the ease and accessibility of building a chatbot, the substantial developments in artificial intelligence, and the increased use of email applications [20].

A chatbot is characterised as follows: "a machine dialogue system that converses with human users in conversational language" (Shawar and Atwell 2005, p. 489). By 2024, the chatbot market will be worth over $9 billion [65]. Chatbots are among the most promising automated means of engagement in the retail industry. A chatbot is artificial intelligence-powered software that simulates an interactive human conversation by using pre-programmed responses to user text and audio or visual inputs [20].

To put it simply, chatbots are programmes that mimic human discussions, allowing users to connect with digital gadgets as if they were speaking with an actual individual [20, 62].

Indeed, many companies and brands, such as eBay, Facebook, Amazon, and Apple, have implemented this technology. These companies have used chatbots to take orders, recommend products, or provide other services to customers [34]. A large proportion of customer services are now fully supported by artificial intelligence, including chatbots [11, 57].

Similarly, in the retail sector, consumer engagement is important to promote interactivity and value co-creation and collaboration. This type of engagement is part of the overall customer experience. It results in cognitive, emotional, and behavioural responses [20, 51].

As a result, digital marketing has benefited from several commercial advantages provided by artificial intelligence, notably text chatbots. Unlike human beings, chatbots operate without negative emotions and consistently interact with consumers in a friendly manner [11, 52]. Chatbots mechanise customer service and respond effectively to consumers, understanding their requests [18]. They can handle a large number of consumer conversations simultaneously, which increases customer service efficiency, and they can interact with consumers in a friendlier way than traditional employee-based customer service [32]. The psychological state of human employees fluctuates, whereas a chatbot's remains consistently positive. Text chatbot adoption can therefore provide various commercial benefits to businesses. However, some consumers are uncomfortable speaking to a computer about their personal wants or purchase decisions [11].

Similarly, consumers may perceive text chatbots as less trustworthy than human beings and may feel that a conversation with a machine lacks empathy and personal feeling [11]. This negative perception can damage the brand image of the company in question. To overcome this problem, companies are invited to apply text chatbots judiciously to offer better customer service. They are asked to facilitate complex tasks and increase the efficiency of chatbots so that they can respond easily to customer requests.

In short, studies of artificial intelligence in the commercial sector are recent and rare. Examples include studies on robots [21], on machines [69], and on chatbots [11, 52].

The findings of these studies suggest that computer programme qualities such as empathy and usability [21] have an important effect on consumers' perceptions and choices [23], intention to buy [56], and intention to use [21]. Other qualities were overlooked, such as chatbot resistance, chatbot dependency, chatbot transparency, and task complexity. These studies also neglected the emotional and social components of the client-supplier relationship.

A customer-supplier emotional connection arises when, as a consequence of a human-computer interaction, the consumer experiences social pleasure (feelings of being heard, understood, and taken into consideration) [20]. This sensation is produced when the retailer offers social support through a high-quality electronic service that aligns with the motivations of its customers. After an emotionally resonant encounter that conveys a sense of belonging, the customer moves on to the engagement stage. This collaboration results in positive engagement, value creation, effective merchant performance, and eventually customer intentions and behaviour [20].

The concept's fast evolution justifies its importance to academics and practitioners. The way consumers interact with companies has been entirely redesigned, and businesses today have a plethora of alternatives for interacting with their customers using chatbots.

Despite earlier research's positive views on chatbot use in digital marketing, there is still a scarcity of empirical evidence on consumer behaviour. Several aspects were not taken into account in earlier research: some trust measures for chatbots were left out, and moderating effects were not tested. We address these methodological and practical gaps in our work.

Our research adds significantly to the body of literature on digital marketing and artificial intelligence by focusing on text chatbots, a new interactive technology. Our study (1) is one of the first empirical investigations of the elements that determine consumer trust in text chatbots; previous research has only looked at the utilitarian use of chatbots for online customer service. Our study also (2) examines the moderating effects of human-chatbot interaction. These two contributions make our research unique.

The purpose of studying this semi-human encounter is to evaluate the hedonic determinants of customer trust in text chatbots and to examine how task complexity and text chatbot disclosure affect that trust. In general, we pose the following research questions:

  1. What effect does empathy have on consumer trust in chatbots?

  2. What effect does usability have on consumer trust in chatbots?

  3. What are the moderating effects of chatbot disclosure and task complexity?

The remainder of this article is organised as follows. First, a review of earlier studies on the main concepts is provided. The theoretical underpinnings, the rationale for the research hypotheses, the proposed method, and the results are then presented. The discussion concludes with the implications.

Literature review, theoretical framework, and hypothesis development

Theoretical foundation

We used two theoretical frameworks to address our research problem and clarify our conceptual model: the Stimulus-Organism-Response (SOR) theory and the theory of social presence.

The first is the Stimulus-Organism-Response (S.O.R.) theory, proposed by Mehrabian and Russell [55] and revised by Jacoby [43]. Individuals respond to environmental cues by engaging in specific behaviours [46]. People's responses and behaviours can be separated into two categories: approach behaviours (such as choosing, exploring, and acting) and avoidance behaviours (such as withdrawing, resisting, and reacting negatively) [50].

The Stimulus-Organism-Response (S.O.R.) framework has been applied in several contexts, such as internet use [58], online shopping [25], retail [68], the tourism sector [50], and e-commerce [20]. The S.O.R. notion is widely supported in the latter, where it has gained popularity as a psychological framework for studying consumer behaviour [9, 20, 29, 79].

Similarly, Eroglu et al.'s [25] research shows that the online environment and atmospheric cues of a digital platform (S, stimuli) may influence customers' cognition or emotions (O, organism), which in turn may have an impact on their behavioural outcomes (R, response). Our study follows this approach: the S.O.R. theory is used to describe the customer behaviour and mental processes associated with the use of text chatbots in an e-commerce environment.

The S.O.R. framework has three parts: stimulus, organism, and response [20, 46]. These three components can be expressed equivalently as "Input-Response-Output" (stimulus = input, organism = response, response = output). Stimuli, as described by Jacoby [43], are environmental elements that drive individuals to pay careful attention and become aroused [24]. In our study, the "stimulus" is defined as the text chatbot's distinctive traits that capture customers' attention, such as empathy and friendliness.

The "organism" component can be characterised as the individual's emotional and cognitive states, together with the processes that intervene between stimuli and responses [20, 43]. The organism captures both the cognitive and affective components of the customer's engagement with text chatbots. In our study, the consumer's trust in text chatbots is the organism through which these cognitive processes and affective evaluations operate.

The final component of the S.O.R. framework is the "response." Donovan and Rossiter [22] define it as the effects of an individual's behaviour on their environment. These responses and actions may have positive or negative consequences. People frequently respond positively to their surroundings, as seen in their proclivity to remain, investigate, and participate [4, 20]; negative responses are reflected in a tendency to avoid or withdraw. In our study, we examine customers' favourable and unfavourable responses to text chatbots by taking into consideration task complexity, chatbot reliance, chatbot resistance, and chatbot disclosure.

The second theoretical framework is social presence theory. We use this social theory to investigate the lack of human warmth in online interactions, because social presence influences how people behave [11, 36]. Despite technological and artificial intelligence capabilities, human warmth may still be important [37]. There is currently limited knowledge of how social presence influences communication between people and chatbots [2, 11]. Our research fills this gap.

According to social presence theory, consumers look for the social aspect of their technological interactions, especially when they have a problem, and seek interaction with a human being to find a solution. The theory predicts that social presence shapes interactions between humans and technology. This effect is significant because it assists and encourages customers to use chatbot technology while benefiting from the social component of interacting with a real person.

Employing social presence theory, we use the closeness and promptness of the chatbot's answers to measure the consumer's impression of the text chatbot as a genuine person. We try to understand how the proximity and immediacy of the conversational design influence consumer trust.

In short, the Stimulus-Organism-Response (S.O.R.) theory holds that individuals react to environmental factors and act in specific ways. This theory has been confirmed in several fields and is adopted here as a psychological lens for studying consumer behaviour.

Because social presence affects the behaviour of individuals, we use this social theory to explore the lack of human warmth in online transactions. Regardless of the potential benefits associated with technology and artificial intelligence, human warmth may still be important. Research into the effect of social presence on human-chatbot interaction is still scarce, and our study fills this research gap.

Having set out the basis of our research, we now move on to present the concepts and the relationships between them.

Chatbot fundamentals

Let's start by defining chatbots. The definition of a chatbot given by Hatwar et al. [38] is "software agents that replicate an entity, typically a human counterpart, that is vague or explicit, with which the user can communicate in a discussion (written, spoken, or mixed)".

Artificial intelligence-powered chatbot programmes incorporate one or more human languages into their design [49]. The result is a human-chatbot interaction (Dandison et al. [74]). Interactions and chats are viewed as user input that must be handled in accordance with particular corporate standards and processes. As a result, "bots" are user-focused applications of artificial intelligence. The international market for automation, robots, and computers is developing rapidly [74]. Cost-cutting is a top aim for all organisations, and corporations can build a bot in roughly half the time it takes to build a normal mobile application. Because strong platforms do not require expensive servers, bots can be created and maintained for around half the cost of mobile apps [74, 78].

Our findings complement previous research on commercial chatbots [2, 12, 74]. We draw on past studies of user involvement, perception, and behaviour with text chatbots [13]. According to Brandtzaeg and Følstad [7] and Dandison et al. [74], text chatbots may be viewed as a more personal source of interaction that contains and transmits social value. Our research relies on this social connection paradigm. In this study, we examine hedonic determinants of trust in text chatbots and the related moderating effects.

As mentioned previously, the Stimulus-Organism-Response framework serves as the foundation of our conceptual research design. The distinctive features of a text-based chatbot are considered stimuli. To draw in customers, an AI-driven chatbot may be designed to appear realistic and human [33]. In turn, realistic and human-like chatbots strongly influence how useful people perceive them to be. Businesses have applied artificial intelligence in e-commerce and digital advertising in the form of text chatbots that can respond to client requests autonomously [47]. This form of chatbot is becoming more popular in online buying. Some individuals may even find a text chatbot's customer service friendlier than that of a traditional human customer service worker.

Based on the social presence assumption, consumers will be more satisfied when dealing with human-like machines [6]. However, some users believe chatbots are too impersonal and typically provide rigid or standard replies. This study's major focus is on whether empathy and friendliness, two text chatbot traits, increase consumer trust in chatbots.

Chatbot empathy

Empathy is described initially as "the ability to comprehend, recognise, and react to the thoughts, feelings, behaviours, and observations of others" [60, p. 1366].

Empathy is a multidimensional notion that involves both cognitive and emotional responses [11, 21, 66]. In information systems, empathy is defined as "the qualities required for effective relationships among consumers and robots" [3]. Czaplewski et al. [16] consider empathy to be one of the dimensions of service quality supplied by an online business.

Professional staff members are traditionally trained to be consistently empathetic and to offer solutions to customers' problems. The viewpoint of the customer must constantly be considered. Similarly, some text chatbots for e-services have been developed to address customer demands automatically. When a chatbot displays a high degree of empathy, it can effectively recognise client wants and give useful solutions to their difficulties. Consumers are then likely to form a good perception of the text chatbot's services [11].

Chatbot friendliness

Employees are typically trained to interact with customers in a courteous manner and project positive emotions [73]. Employees who are friendly and helpful to consumers improve their perception of service quality and make customers happy [10, 11].

According to Pugh [67], clients will be more happy with their experience if staff members express more positive attitudes. This will encourage consumers to return and generate positive word-of-mouth.

According to prior studies of consumer acceptance of robots, users gave high evaluations of the effects of usability and trust [31]. When a customer uses a text chatbot to communicate with a business, it improves their mood and their perception of the business and the brand in question [11, 73].

Trust in the chatbot

We now turn to the organism, the second element of the S.O.R. framework. When it comes to placing confidence in the text chatbot, individual cognitive abilities are reflected in the organism's cognitive and emotional states. We trust others because of their emotions and sentiments; according to Hoff and Bashir [39], our perception of others is the foundation for developing trust. Before deciding whether or not to put their confidence in the text chatbot, the customer, the "organism" in this research approach, goes through a process of building trust based on their feelings.

Trust has also been defined as the willingness of a person or organisation to be vulnerable to the actions of a third party [11, 40, 53]. Typically, trust research has concentrated on interpersonal relationships. Trust has been studied in a variety of disciplines, spanning economics, sociology, psychology, and computer systems [8, 11, 26].

Because of advancements in human-robot interaction, e-trust is now critical for characterising how people engage with technology [11, 39]. Accordingly, our study focuses primarily on consumers' trust in the text chatbot during the e-purchasing process.

Existing research has used ability, integrity, and benevolence as predictors of electronic trust [44]. How humans and autonomous machines interact influences users' feelings of trust [82]. According to social presence theory, the bulk of human-computer interactions are social [1, 11, 61].

Consumers frequently regard these computers as social agents, despite the fact that the machines have no human sentiments or experiences, and customers respond to them much as they respond to people [1, 42]. Users' perceptions of robots, according to Blut et al. [6], boost enjoyment and enhance human-robot interaction. Finally, Cheng et al. [11] revealed that consumers' feedback on chatbot services explains their confidence in the chatbot.

As already mentioned, usability and empathy are the two main factors that influence consumer trust in text chatbots. According to the Stimulus-Organism-Response theory and social presence theory, consumers will interact with text chatbots more favourably if the chatbots are friendly and empathetic. As a result, we formulate the following hypotheses:

H1: Empathy positively influences consumers' trust in the text chatbot.

H2: Friendliness positively influences consumers' trust towards the text chatbot.

Moderating impacts of chatbot disclosure and task complexity

Task complexity

The primary goal of an e-commerce platform's customer care is to help customers find solutions to their problems during the purchasing process. This service is designed to satisfy clients' needs during the pre-purchase, purchase, and post-purchase phases [11, 51, 80].

Consumers may need to identify their needs during the pre-purchase phase, gather more information, and decide whether the product will satisfy their needs. At this point, the text chatbot's goal is to inform customers about specific products and aid in their decision-making. This is accomplished by providing answers to queries regarding the specifics, capabilities, or tactics of the product itself.

The second stage, known as the purchase stage, is when the consumer chooses, orders, and pays. The duration of the customer-chatbot engagement in this stage may be shorter than in the other two stages.

During the third stage, the post-purchase stage, the consumer's attention is mostly focused on engagement or after-sales service requests. In this stage, the consumer-point-of-sale interaction includes decisions about repeat purchases, service requests, returned goods, and/or various types of participation [77]. The text chatbot plays a crucial role in this step, which can be trickier than the previous two stages.

According to Blut et al. [6], the technology must satisfy the needs or requirements of the consumer for particular tasks. When the technology can accommodate the customer's particular wants, the encounter is more enjoyable. The qualities of the tasks that chatbots handle should therefore be a main focus. In accordance with task-technology fit theory, task attributes can moderate the link between text chatbot qualities and customer intent. We therefore consider how task complexity affects consumer behaviour.

The consequences of task complexity have been examined in earlier research and take many forms, for example task and team performance (Bjørn and Ngwenyama) [5, 19]. Complex tasks are involved in each stage of the consumer experience, and task complexity is used in our research to better assess consumer trust in chatbots. The customer requires assistance when the task is complicated. As a result, we state our third hypothesis as follows:

H3a: Empathy positively influences consumers' trust in the text chatbot in a weaker way when the task is complex.

H3b: Usability positively influences consumers' trust in the text chatbot in a weaker way when the task is complex.

Chatbot disclosure

According to Davenport (2019), increasing system disclosure or adoption as much as possible is a way to increase consumer trust. Being informed about the identity of the chatbot is the user's right [52]. Text chatbots should logically identify themselves as machines before interacting with and conversing with customers [11, 52].

The text chatbot introduces itself to the customer at the beginning of the encounter, using introductory language for the product, service, or brand. Customers who know they are talking with a text chatbot rather than a live person should particularly benefit from its empathetic, pleasant, and convivial service. As a result, we state our fourth hypothesis as follows:

H4a: Empathy positively influences consumer trust in a stronger way once the identity of the text chatbot is disclosed.

H4b: Friendliness positively influences consumer trust in a stronger way once the identity of the text chatbot is disclosed.

Hypotheses and supporting references

H1: Empathy positively influences consumers' trust in the text chatbot (Złotowski et al. [82]; Adam et al. [1])

H2: Friendliness positively influences consumers' trust towards the text chatbot (Złotowski et al. [82]; Cheng et al. [11])

H3a: Empathy positively influences consumers' trust in the text chatbot in a weaker way when the task is complex (Lemon and Verhoef [51]; Cheng et al. [11])

H3b: Usability positively influences consumers' trust in the text chatbot in a weaker way when the task is complex (Van Doorn et al. [77]; Blut et al. [6])

H4a: Empathy positively influences consumer trust in a stronger way when the identity of the text chatbot is disclosed (Davenport 2019; Luo et al. [52])

H4b: Friendliness positively influences consumer trust in a stronger way when the identity of the text chatbot is disclosed (Davenport 2019; Cheng et al. [11])

Figure 1 summarises our conceptual model, which consists of numerous hypotheses.

Fig. 1 The conceptual foundation for the study

Design methodology

We conducted a validation test of our theoretical model. Online airlines are likely users of text chatbots because of the repetitive tasks involved. We used an experimental technique to understand how the text chatbot's usability and empathy increase customer trust in the chatbot, and to investigate how chatbot disclosure and task complexity moderate these relationships.

Following Zikmund [81], we employed random sampling to recruit participants. Our sample consisted of passengers with experience of international airports. Participation required a WhatsApp account showing prior communication with an airline's text chatbot.

Participants were asked about their impressions of their conversations with the text chatbot. The survey contained questions about using the text chatbot, demographic questions, and finally questions about the research constructs. In all, 353 people took part in this study, and 328 of them returned valid surveys; the remaining 25 surveys could not be used because of missing data. Before the main experiment, a pre-test was performed to ensure that the manipulation functioned as expected. For ethical reasons, the data collected for our study are confidential and anonymous. The demographic information for the participants is presented in Table 1.

Table 1 Demographic characteristics of the participants

Instruments for measurement

To assess the variables in our model and ensure the validity and reliability of the measures, we used 5-point Likert-type multi-item scales (1 = strongly disagree, 5 = strongly agree) [63]. The use of measures that have been validated in a range of settings and languages ensures the robustness of the scales employed in this investigation. We adapted a few of the scale items to the theme of our research.

The empathy variable was measured with four items based on [11, 12, 16, 21]. Usability was evaluated using the Tsai and Huang [73] scale (4 items). Morgeson and Humphrey's [59] measures were used to develop the task complexity scale (4 items). The disclosure measure (2 items) was drawn from [45, 52, 70]. Trust was assessed using the scale from [15, 27].

Results

Construct validity and reliability

Principal component analysis (PCA) enabled us to uncover the structure of the correlations between the survey items. The SmartPLS 3.0 software was used. Based on the communalities and the factor loading of each item, we elected to keep all of the items from the measurement scales, since all of the communalities exceed 0.5 [28].

The reliability of our measures was further confirmed by Cronbach's alpha values greater than the prescribed value of 0.7 (Table 2). The average variance extracted (AVE) values are also above the recommended value of 0.5, ensuring the convergent validity of the model [35]. Furthermore, the AVE values in Table 3 are greater than the squared inter-construct correlations, demonstrating the discriminant validity of the constructs [30].

Table 2 Reliability and convergent validity of measures
Table 3 The constructs' discriminant validity
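For readers who wish to reproduce the reliability and convergent-validity checks reported above outside SmartPLS, the sketch below shows how Cronbach's alpha and AVE are conventionally computed from raw item responses and standardised loadings. The item data, column names, and loading values are hypothetical and serve only to illustrate the calculation; they are not the study's data.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a block of Likert items measuring one construct."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of squared standardised loadings; should exceed 0.5."""
    return float(np.mean(np.square(loadings)))

# Hypothetical responses to four 5-point Likert items measuring chatbot empathy
empathy_items = pd.DataFrame({
    "emp1": [4, 5, 3, 4, 5, 4],
    "emp2": [4, 4, 3, 5, 5, 4],
    "emp3": [5, 4, 2, 4, 5, 3],
    "emp4": [4, 5, 3, 4, 4, 4],
})
print(f"alpha = {cronbach_alpha(empathy_items):.2f}")   # construct retained if alpha > 0.7
print(f"AVE   = {average_variance_extracted(np.array([0.81, 0.78, 0.74, 0.80])):.2f}")
```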

Because the variance inflation factor (VIF) values are all below the suggested threshold of 10, the analysis found no difficulties and multicollinearity was ruled out [41]. The VIF test results show that multicollinearity is not a major issue in the present research [72] (Table 4).
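As an illustration of this collinearity check, a VIF can be obtained for each predictor by regressing it on the remaining predictors; the minimal sketch below uses statsmodels, with construct scores and column names that are hypothetical stand-ins for the study's variables.

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

# Hypothetical construct scores for the model's predictors
scores = pd.DataFrame({
    "empathy":         [3.8, 4.2, 2.9, 4.5, 3.6, 4.0, 3.2, 4.4],
    "friendliness":    [4.0, 4.1, 3.1, 4.6, 3.4, 4.2, 3.0, 4.5],
    "task_complexity": [2.5, 3.0, 4.1, 2.2, 3.8, 2.9, 4.0, 2.4],
})

X = add_constant(scores)  # VIF is computed against a model that includes an intercept
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=scores.columns,
)
print(vif)  # values below the threshold of 10 suggest no serious multicollinearity
```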

Table 4 Results of research hypothesis testing

Discussion

To test our hypotheses, we find that empathy accounts for 61% of the variance in a customer's trust in a text chatbot (R² = 0.61), demonstrating its strong explanatory power. Empathy has a notably positive influence on customer confidence in the chatbot (β = 0.560, t = 24.41, p < 0.005). These data support hypothesis H1 by demonstrating that empathy is a necessary prerequisite for client confidence in the text chatbot.

Similarly, usability explains 59% of the variance in customer trust in the chatbot (R² = 0.59), exhibiting significant explanatory power. User confidence in the chatbot is positively influenced by usability (β = 0.500, t = 23.47, p < 0.005). These data support hypothesis H2 by demonstrating that usability is an important determinant of customer confidence in text chatbots.

Moderating impacts of task complexity and chatbot disclosure

In order to test the stability of this model, we employed the moderating-effect function in SmartPLS 3.0 to further investigate the impact of the moderators on the relationships between empathy/usability and consumer trust in the chatbot. We treated task complexity and the disclosure of the text chatbot as potential moderators. As the data demonstrate, these paths are only partially validated. As a result, task complexity and chatbot disclosure both partially moderate the association between empathy and trust as well as the relationship between usability and trust. Hence, we can affirm again that the suggested model is stable and reliable. Hypotheses H3(a, b) and H4(a, b) are therefore partially accepted. Tables 5 and 6 provide the complete results.

Table 5 Outcomes of the task complexity's moderating effects
Table 6 Outcomes of the chatbot disclosure’s moderating effects
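The moderation tests above were run with SmartPLS; as a rough illustration of the underlying logic, the sketch below estimates a comparable product-term (interaction) model with ordinary least squares on mean-centred construct scores. The data are simulated and the variable names hypothetical; a negative interaction coefficient corresponds to the weakening effect posited in H3a.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 353  # same order of magnitude as the study's sample

# Simulated construct scores (illustration only, not the study's data)
df = pd.DataFrame({
    "empathy": rng.uniform(1, 5, n),
    "task_complexity": rng.uniform(1, 5, n),
})
df["trust"] = (0.6 * df["empathy"]
               - 0.1 * df["empathy"] * df["task_complexity"]
               + rng.normal(0, 0.5, n))

# Mean-centre the predictors before forming the product term
df["emp_c"] = df["empathy"] - df["empathy"].mean()
df["tc_c"] = df["task_complexity"] - df["task_complexity"].mean()

model = smf.ols("trust ~ emp_c * tc_c", data=df).fit()
print(model.summary().tables[1])  # emp_c:tc_c is the moderation (interaction) term
```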

Robustness test

Further investigation is required to ensure the trustworthiness of the key conclusions. We therefore carried out an additional analysis to examine and corroborate the results obtained, motivated primarily by the complexity of the linkages connecting the variables in our research model. To validate the results derived from our model, we applied the "blindfolding" test to assess the model's predictive relevance. This technique evaluates the predictive relevance of each construct by analysing the cross-validated criterion estimates (Q²) (Hair et al. 2017). Based on the Stone-Geisser blindfolding test, customer trust in the chatbot has a Q² of 0.425, an index substantially larger than zero, indicating fair predictive relevance [71].
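For reference, the Stone-Geisser statistic used above is conventionally computed from the blindfolding procedure's prediction errors. The article does not spell out the formula, so the following is the standard formulation (e.g., Hair et al. 2017), where SSE_D is the sum of squared prediction errors and SSO_D the sum of squared observations for omission distance D; a value above zero indicates predictive relevance.

```latex
Q^{2} = 1 - \frac{\sum_{D} \mathrm{SSE}_{D}}{\sum_{D} \mathrm{SSO}_{D}}
```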

Conclusion

The current study extends previous research by investigating the role of text chatbots in understanding customer behaviour in an e-commerce context. The moderating effects of task complexity and chatbot disclosure, as well as the impacts of empathy and usability on customer trust in the text chatbot, were emphasised. We used two theories to construct the hypotheses and relationships between the variables in our research model: the Stimulus-Organism-Response theory and social presence theory.

The goal of this research is to test the elements that impact customer confidence in text chatbots and to assess the moderating effects of task complexity and text chatbot disclosure. To answer our research questions, we present the important findings of our study and compare them with other findings in the literature. Our analysis yielded the following conclusions:

First, empathetic and friendly text chatbots have a favourable impact on their users' trust. This result specifically backs up those of Cheng et al. [11] and demonstrates how significant these two effects are. Customers' judgements of text chatbot friendliness and empathy are thus both positively connected with their degree of confidence in the chatbot. Even though the effect of usability is large, empathy has a stronger positive impact on trust than usability. Customers favour text chatbots over other forms of technology because they can better grasp their needs and points of view. As a result, for successful and long-lasting contacts, the e-customer service provider's empathy is crucial.

The moderating effects of task complexity and chatbot disclosure on the connection between empathy/usability and consumer trust in the chatbot were also explored. The results show that task complexity has a negative moderating effect on the relationship between usability and customer confidence in the chatbot: usability's favourable influence on consumer trust is smaller when the consumer's task is complex. However, task complexity has little effect on the link between empathy and client trust in the chatbot. This finding might be explained by the fact that, if the task at hand is complex, clients may be more interested in the chatbot's professionalism and problem-solving abilities than in its attitudes or customer service approach. According to the findings, chatbot disclosure positively moderates the association between usability and customer confidence in text chatbots; however, it negatively moderates the relationship between empathy and trust. This finding backs up Cheng et al.'s [11] results and emphasises the significance of these two effects.

Societal implications

Corporate social responsibility is an important issue these days. Corporate social responsibility has several dimensions: environmental, social, ethical, and economic considerations. Corporate social responsibility is an organisation's deliberate support for its community. In our study, social responsibility has a technological dimension that serves society quickly and at all times.

Theoretical implications

Our research adds several new theoretical contributions to this field. To begin, although the Stimulus-Organism-Response theory and the social presence theory are normally employed in conventional commercial scenarios, our study blended these two theories and incorporated a text chatbot in an electronic commerce setting.

Second, our study validates the Stimulus-Organism-Response theory in the context of electronic commerce and contributes to research on the influence of text chatbot qualities on the development of consumer trust.

Third, the conceptual framework of this research adds to the body of knowledge in the fields of information systems, electronic commerce, and digital marketing.

Fourth, our research goes further by investigating the impact of e-service features, task complexity, and text chatbot disclosure as variables that moderate the formation of customer trust.

Finally, artificial intelligence-based chatbots are a brand-new, developing technology that offers several benefits to both organisations and customers. As a consequence, this study contributes to our knowledge of the constraints and factors impacting observed consumer behavioural responses.

Managerial implications

Our research has several managerial implications for businesses that offer electronic customer service as well as for companies that create business technologies. Businesses in the internet commerce industry employ artificial intelligence and automated robots on a regular basis. To enhance the consumer and/or brand experience, all online businesses, including shops, virtual outlets, and online service providers, must engage with their consumers automatically. Our research revealed the benefits of text chatbots as an automated form of communication between clients and businesses.

Our research helps in understanding how people perceive the text chatbot in a business setting, their motivations for resisting it, and their behavioural responses. Using the text chatbot in a well-designed and thoughtful manner improves the company's competitive position relative to its rivals. When dealing with a person and a chatbot, it is critical to keep the human factor in mind, since it influences customers' satisfaction, intentions, and future behaviour.

The findings of our study lead us to recommend that e-service providers and technology creators streamline and customise text chatbot practices. In particular, consumers' perceptions of the text chatbot are strongly affected by task complexity. Companies should establish good contact between the chatbot-using business and its clients. This interaction must give an accurate and quick answer to the customer's demands (information requests, orders, availability, payments, and so on) while also providing a friendly, welcoming experience for the user. The use of text chatbot technology allows the user firm to respond to consumer queries in a helpful manner, hence improving the customer and/or brand experience.

The business that employs text chatbot technology is required to disclose the machine identity of the chatbot. Consumers' views of this disclosure have a direct impact on how confident they are in the text chatbot and in the firm in question. Consumer trust should receive special consideration because it is a key factor mediating the association between consumers' perceptions and their attitudes, intentions, and future behaviours.

Limitations and potential research directions

Despite these advances, this study contains limitations that will need to be addressed in future studies on the issue. The first constraint relates to the survey data and sample size employed. The sample for this study is entirely composed of travellers, and its size is modest, though sufficient for structural equation analysis. The study's setting may limit the generalizability of the conclusions to other industries or circumstances. It would be worthwhile to replicate the study with a larger sample size and a different data-gathering approach to confirm the results and provide further insights.

The second constraint concerns the survey setting. Only a fictional activity and a mock purchase were required of the participants. To ensure that the findings are as valid as possible, it would be worthwhile to repeat this study using information from encounters with a variety of real firms.

The third limitation of our research is the use of only two chatbot technological qualities, namely usability and empathy. It would be intriguing to test other features, such as the chatbot's intelligence.

The fourth constraint is that we were able to validate our research model only within the air transport and e-commerce sectors. More research in other domains, including economics, health, agri-food, and so on, is suggested in order to extend the findings.

In this study, we investigated the moderating effects of task complexity and text chatbot disclosure. To study whether customer sentiment towards the chatbot changes, for example, according to different degrees of participation, we recommend introducing additional moderating factors such as product type and level of involvement.

Furthermore, the suggested theoretical framework makes no claim to be exhaustive and may be enhanced upstream and downstream by other factors, such as the chatbot's perceived trustworthiness and the co-creation of additional benefits for the company through the use of chatbot technology.

In short, this study aims to advance previous research by investigating the role of text chatbots in explaining consumer behaviour in the e-commerce sector. Our study is one of the first empirical investigations into the hedonic determinants of consumer trust in text chatbots (1); previous studies have investigated the utilitarian use of chatbots solely for online customer service. Our study also tests the moderating effects of human-chatbot interaction (2). These two contributions make our research original.

The results showed that empathy and friendliness are essential hedonic antecedents of consumer trust in the text chatbot. The results also showed that the empathy-trust relationship and the usability-trust relationship are partially moderated by the chatbot's task complexity and disclosure. The findings give important additional information for e-service providers and chatbot developers to enhance their quality, understand their effects on user experience, and serve as a reference for designing strategies and establishing long-term connections.

Availability of data and materials

Not applicable.

Abbreviations

SOR:

Stimulus-organism-response

VIF:

Variance inflation factor

AVE:

Average variance extracted

PLS:

Partial least squares

References

  • Adam M, Wessel M, Benlian A (2020) AI-based chatbots in customer service and their effects on user compliance. Electron Mark 31:427–445


  • Araujo T (2018) Living up to the chatbot hype: the influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Comput Hum Behav 85(2):183–189


  • Birnbaum GE, Mizrahi M, Hoffman G, Reis HT, Finkel EJ, Sass O (2016) What robots can teach us about intimacy: the reassuring effects of robot responsiveness to human disclosure. Comput Hum Behav 63:416–423


  • Bitner MJ (1992) Servicescapes: the impact of physical surroundings on customers and employees. J Mark 56(2):57–71


  • Bjørn P, Ngwenyama O (2009) Virtual team collaboration: building shared meaning, resolving breakdowns and creating translucence. Inf Syst J 19(3):227–253


  • Blut M, Wang C, Wunderlich NV, Brock C (2021) Understanding anthropomorphism in service provision: a meta-analysis of physical robots, chatbots, and other AI. J Acad Mark Sci 49:632–658


  • Brandtzaeg PB, Følstad A (2017) Why people use chatbots. Int Conf Internet Sci 10673:377–392


  • Breuer C, Huffmeier J, Hertel G (2016) Does trust matter more in virtual teams? A metaanalysis of trust and team effectiveness considering virtuality and documentation as moderators. J Appl Psychol 101(8):1151–1177


  • Chang HJ, Eckman M, Yan RN (2011) Application of the stimulus-organism-response model to the retail environment: the role of hedonic motivation in impulse buying behavior. Int Rev Retail Distrib Consum Res 21(3):233–249


  • Chen CC, Lin SY, Cheng CH, Tsai CC (2012) Service quality and corporate social responsibility, influence on post-purchase intentions of sheltered employment institutions. Res Dev Disabil 33(6):1832–1840


  • Cheng X, Bao Y, Zarifis A, Gong W, Mou J (2021) Exploring consumers’ response to text-based chatbots in e-commerce: the moderating role of task complexity and chatbot disclosure. Internet Res 32(2):496–517


  • Chung M, Ko E, Joung H, Kim SJ (2018) Chatbot e-service and customer satisfaction regarding luxury brands. J Bus Res 117:587–595


  • Ciechanowski L, Przegalinska A, Magnuski M, Gloor P (2019) In the shades of the uncanny valley: an experimental study of human–chatbot interaction. Futur Gener Comput Syst 92:539–548


  • Cyr D, Hassanein K, Head M, Ivanov A (2007) The role of social presence in establishing loyalty in e-service environments. Interact Comput 19(1):43–56


  • Cyr D, Head M, Larios H, Pan B (2009) Exploring human images in website design: a multimethod approach. MIS Q 33(3):539–566


  • Czaplewski AJ, Olson EM, Slater SF (2002) Applying the RATER model for service success. Mark Manag 11(1):14–17


  • Das G (2016) Antecedents and consequences of trust: an e-tail branding perspective. Int J Retail Distrib Manag 44(7):713–730


  • Daugherty PR, James Wilson H, Michelman P (2019) Revisiting the jobs artificial intelligence will create. MIT Sloan Manag Rev 60(4):1–8


  • Davenport TH (2019) Can we solve AI’s ‘trust problem’? MIT Sloan Manage Rev 60(2):18–19


  • Dayan M, Di Benedetto CA (2010) The impact of structural and contextual factors on trust formation in product development teams. Ind Mark Manag 39(4):691–703


  • De Cicco R, Silva SC, Alparone FR (2020) Millennials’ attitude toward chatbots: an experimental study in a social relationship perspective. Int J Retail Distrib Manag 48(11):1213–1233


  • De Kervenoael R, Hasan R, Schwob A, Goh E (2020) Leveraging human-robot interaction in hospitality services: incorporating the role of perceived value, empathy, and information sharing into visitors’ intentions to use social robots. Tour Manag 78:104042


  • Donovan R, Rossiter J (1982) Store atmosphere: an environmental psychology approach. J Retail 58(1):34–57


  • Duan Y, Edwards JS, Dwivedi YK (2019) Artificial intelligence for decision making in the era of big data—evolution, challenges and research agenda. Int J Inf Manag 48:63–71


  • Eroglu SA, Machleit KA, Davis LM (2001) Atmospheric qualities of online retailing: a conceptual model and implications. J Bus Res 54(2):177–184


  • Eroglu SA, Machleit KA, Davis LM (2003) Empirical testing of a model of online store atmospherics and shopper responses. Psychol Mark 20(2):139–150


  • Ert E, Fleischer A, Magen N (2016) Trust and reputation in the sharing economy: the role of personal photos in Airbnb. Tour Manag 55:62–73


  • Everard A, Galletta DF (2005) How presentation flaws affect perceived site quality, trust, and intention to purchase from an online store. J Manag Inf Syst 22(3):56–95


  • Évrard Y, Pras B, Roux E (2003) Market: Etudes et Recherches en Marketing. Dunod. B, Paris


  • Fiore AM, Kim J (2007) An integrative framework capturing experiential and utilitarian shopping experience. Int J Retail Distrib Manag 35(6):421–442


  • Fornell C, Larcker D (1981) Evaluating structural equation models with unobservable variables and measurement error. J Mark Res 18(1):39–50


  • Fridin M, Belokopytov M (2014) Acceptance of socially assistive humanoid robot by preschool and elementary school teachers. Comput Hum Behav 33:23–31


  • Froehlich A (2018) Pros and cons of chatbots in the IT helpdesk. available at: https://www.informationweek.com/strategic-cio/it-strategy/pros-and-cons-of-chatbots-in-the-it-helpdesk/a/did/1332942 (accessed 18 December 2021)

  • Gilbert RL, Forney A (2015) Can avatars pass the Turing test? Intelligent agent perception in a 3D virtual environment. Int J Hum Comput Stud 73:30–36


  • Go E, Sundar SS (2019) Humanizing Chatbots: the effects of visual, identity and conversational cues on humanness perceptions. Comput Hum Behav 97:304–316


  • Hair JF, Black WC, Babin BJ, Anderson RE, Tatham RL, (2010) Multivariate data analysis. Upper Saddle River, Pearson, New Jersey

  • Hair JF Jr, Sarstedt M, Ringle CM, Gudergan SP (2017) Advanced issues in partial least squares structural equation modeling. Sage Publications, Thousand Oaks, CA

  • Hassanein K, Head M (2005) The impact of infusing social presence in the web interface: an investigation across product types. Int J Electron Commer 10(2):31–55


  • Hassanein K, Head M (2007) Manipulating perceived social presence through the web interface and its impact on attitude towards online shopping. Int J Hum Comput Stud 65(8):689–708


  • Hatwar PN, Patil A, Gondane D (2016) AI based chatbot. Int J Emerg Trends Eng Basic Sci 6:85–87


  • Hoff KA, Bashir M (2015) Trust in automation: integrating empirical evidence on factors that influence trust. Hum Factors 57(3):407–434


  • Hoy WK, Tschannen-Moran M (1999) Five faces of trust: an empirical confirmation in urban elementary schools. J School Leadersh 9(3):184–208


  • Hua Y, Cheng X, Hou T, Luo R (2020) Monetary rewards, intrinsic motivators, and work engagement in the IT-enabled sharing economy: a mixed-methods investigation of internet taxi drivers. Decis Sci 51(3):755–785


  • Ischen C, Araujo T, Voorveld H, van Noort G, Smit E (2019) Privacy concerns in chatbot interactions”. In: Følstad A, Araujo T, Papadopoulos S, Law EL-C, Granmo O-C, Luger E, Brandtzaeg PB (eds) Conversations 2019: chatbot research and design. Springer, Cham, pp 34–48


  • Jacoby J (2002) Stimulus-organism-response reconsidered: an evolutionary step in modeling (consumer) behavior. J Consum Psychol 12(1):51–57


  • Jarvenpaa SL, Knoll K, Leidner DE (1997) Is anybody out there? Antecedents of trust in global virtual teams. J Manag Inf Syst 14(4):29–64


  • Jensen ML, Yetgin E (2017) Prominence and interpretation of online conflict of interest disclosures. MIS Q 41(2):629–643


  • Kamboj S, Sarmah B, Gupta S, Dwivedi Y (2018) Examining branding co-creation in brand communities on social media: applying the paradigm of Stimulus-Organism-Response. Int J Inf Manag 39:169–185


  • Kaplan A, Haenlein M (2019) Siri, Siri, in my hand: who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Bus Horiz 62(1):15–25


  • Kelley SW, Davis MA (1994) Antecedents to customer expectations for service recovery. J Acad Mark Sci 22(1):52–61


  • Khanna A, Pandey B, Vashishta K, Kalia K, Pradeepkumar B, Das T (2015) A study of today’s AI through chatbots and rediscovery of machine intelligence. Int J u-and e- Serv Sci Technol 8(7):277–284


  • Kim MJ, Lee CK, Jung T (2020) Exploring consumer behavior in virtual reality tourism using an extended stimulus-organism-response model. J Travel Res 59(1):69–89


  • Lemon KN, Verhoef PC (2016) Understanding customer experience throughout the customer journey. J Mark 80(6):69–96


  • Luo X, Tong S, Fang Z, Qu Z (2019) Frontiers: machines vs humans: the impact of artificial intelligence chatbot disclosure on customer purchases. Mark Sci 38(6):937–947


  • Mayer RC, Davis JH, Schoorman FD (1995) An integrative model of organizational trust. Acad Manag Rev 20(3):709–734


  • McKenna KYA, Green AS, Gleason MEJ (2002) Relationship formation on the internet: What’s the big attraction? J Soc Issues 58(1):9–31


  • Mehrabian A, Russell JA (1974) An approach to environmental psychology. The MIT Press, Cambridge


  • Mende M, Scott ML, van Doorn J, Grewal D, Shanks I (2019) Service robots rising: how humanoid robots influence service experiences and elicit compensatory consumer responses. J Mark Res 56(4):535–556


  • Mitchell V (2018) Gartner: why humans will still be at the core of great CX. available at: https://www.cmo.com.au/article/642649/gartner-why-humans-will-still-core-great-cx/ (accessed 20 December 2021)

  • Mollen A, Wilson H (2010) Engagement, telepresence and interactivity in online consumer experience: reconciling scholastic and managerial perspectives. J Bus Res 63(9–10):919–925


  • Morgeson FP, Humphrey SE (2006) The Work Design Questionnaire (WDQ): developing and validating a comprehensive measure for assessing job design and the nature of work. J Appl Psychol 91(6):1321–1339


  • Murray J, Elms J, Curran M (2019) Examining empathy and responsiveness in a high-service context. Int J Retail Distrib Manag 47(12):1364–1378


  • Nass C, Steuer J, Tauber ER (1994) Computers are social actors. In: Proceedings of the SIGCHI, conference on human factors in computing systems: celebrating interdependence, ACM, Boston, MA, pp 72–78

  • Oracle (2019) What is a chatbot? available at https://www.oracle.com/solutions/chatbots/what-is-achatbot/ (accessed 8 December 2021)

  • Peter JP (1979) Reliability: a review of psychometric basics and recent marketing practices. J Mark Res 16(1):6–17


  • Piçarra N, Giger JC (2018) Predicting intention to work with social robots at anticipation stage: assessing the role of behavioral desire and anticipated emotions. Comput Hum Behav 86:129–146


  • Pisa R (2018) Chatbot market size is set to exceed USD 134 billion by 2024—ClickZ. available at: https://www.clickz.com/chatbot-market-size-is-set-to-exceed-usd-1-34-billion-by-2024/215518/ (accessed 6 December 2021)

  • Powell PA, Roberts J (2017) Situational determinants of cognitive, affective, and compassionate empathy in naturalistic digital interactions. Comput Hum Behav 68:137–148


  • Pugh SD (2001) Service with a smile: emotional contagion in the service encounter. Acad Manag J 44(5):1018–1027


  • Rose S, Clark M, Samouel P, Hair N (2012) Online customer experience in e-retailing: an empirical model of antecedents and outcomes. J Retail 88(2):308–322


  • Seeber I, Bittner E, Briggs RO, de Vreede T, de Vreede GJ, Elkins A, Maier R, Merz AB, Oeste-Reiß S, Randrup N, Schwabe G, Sollner M (2020) Machines as teammates: a research agenda on AI in team collaboration. Inf Manag 57(2):103–174


  • Shawar BA, Atwell ES (2005) Using corpora in machine-learning chatbot systems. Int J Corp Linguist 10(4):489–516


  • Starzyk KB, Holden RR, Fabrigar LR, MacDonald TK (2006) The personal acquaintance measure: a tool for appraising one’s acquaintance with any person. J Pers Soc Psychol 90(5):833–847


  • Thakur R (2018) The role of self-efficacy and customer satisfaction in driving loyalty to the mobile shopping application. Int J Retail Distrib Manag 46(3):283–303


  • Thatcher JB, Wright RT, Sun H, Zagenczyk TJ, Klein R (2018) Mindfulness in information technology use: definitions, distinctions, and a new measure. MIS Q 42(3):831–847


  • Tsai WC, Huang YM (2002) Mechanisms linking employee affective delivery and customer behavioral intentions. J Appl Psychol 87(5):1001–1008


  • Ukpabi DC, Aslam B, Karjaluoto H (2019) Chatbot adoption in tourism services: a conceptual exploration. In: Robots, artificial intelligence, and service automation in travel, tourism and hospitality 2019, pp 105–121. Emerald Publishing Limited

  • Van der Heijden H (2003) Factors influencing the usage of websites: the case of a generic portal in The Netherlands. Inf Manag 40(6):541–549


  • Van der Heijden H (2004) User acceptance of hedonic information systems. MIS Q 28(4):695–704


  • Van Doorn J, Lemon KN, Mittal V, Nass S, Pick D, Pirner P, Verhoef PC (2010) Customer engagement behavior: theoretical foundations and research directions. J Serv Res 13(3):253–266


  • Waxer C (2016) Get ready for the bot revolution. Computer World. Retrieved from https://www.computerworld.com/article/3126438/emerging-technology/article.html. Accessed on December 9, 2021

  • Wu YL, Li EY (2018) Marketing mix, customer value, and customer loyalty in social commerce: a stimulus-organism-response perspective. Internet Res 28(1):74–104


  • Zhang M, Jin B, Wang GA, Goh TN, He Z (2016) A study of key success factors of service enterprises in China. J Bus Ethics 134(1):1–14


  • Zikmund WG (2000) Business research methods, 6th edn. Harcourt College Publisher, Orlando


  • Złotowski J, Yogeeswaran K, Bartneck C (2017) Can we control it? Autonomous robots threaten human identity, uniqueness, safety, and resources. Int J Hum Comput Stud 100:48–54



Acknowledgements

I thank the editor-in-chief and his staff for evaluating our article proposal.

Funding

There was no financial support for this work that could have influenced its outcome.

Author information

Authors and Affiliations

Authors

Contributions

Not applicable (only one author); the author read and approved the final manuscript.

Corresponding author

Correspondence to Moez Ltifi.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Ltifi, M. Trust in the chatbot: a semi-human relationship. Futur Bus J 9, 109 (2023). https://doi.org/10.1186/s43093-023-00288-z

