Turn Threats into “Treatments”? Social Bots’ Possible Applications in Health Communication during Pandemics

The COVID-19 pandemic is a global crisis, and social media platforms have become a significant site for obtaining information and sparking discussion. At the same time, social bots have risen on online social networks. Social bots are applications that exist only in cyberspace and mimic human users, interacting with people according to their own logic; their defining features are that they are intangible, personate, and automatic. Evidence suggests that social bots harmed health communication during the COVID-19 pandemic: researchers found that they helped diffuse political issues, stir negative emotions, and spread rumors. Social bots therefore often carry negative associations, but many bots perform benign tasks. This study first analyzes why bots performed badly during the COVID-19 pandemic, then discusses how to turn these "threats" into "treatments," arguing that social bots can play a positive role in different periods of health emergencies.


INTRODUCTION
The novel coronavirus has been raging for more than a year. This health disaster has brought enormous challenges to global politics and the economy, threatening the lives of millions of people. We are now in the post-epidemic era, but the experience and lessons the epidemic brought cannot be ignored. How to disseminate health information more effectively and guide people to strengthen their own protection has become a focus for sociologists and communication scholars.
Nowadays, social platforms are a repository of information, especially during pandemics, when cyberspace is flooded with ideas and opinions because people are limited by social distancing. Yet among online users there are many other actors, "social bots," that affect the ecology of cyberspace. According to research by Salge et al., in 2017 there were 23 million social bots on Twitter, 27 million on Facebook, and as many as 140 million on Instagram.
A social bot is a kind of automated account that uses artificial intelligence to steer discussions and promote specific ideas or products on social media platforms such as Twitter and Facebook. Because social bots can mimic human behavior, they are hard for users to recognize, and they often pass as friends within users' interpersonal communication networks.
According to the "spiral of silence" theory, people are influenced by their surroundings when expressing opinions; a large number of social bots can therefore create a climate of opinion that shapes users' perceptions. COVID-19 is a global crisis, and with people pushed out of physical spaces by containment measures, online conversation on social media has become one of the primary ways to track public discussion. One survey found that suspected social bots contributed as much as 9.27% of COVID-19 discussions on Twitter. Consequently, if social bots spread health-related information, they have the potential to play a positive role in every period of a pandemic.

LITERATURE REVIEW: SOCIAL BOTS AND ONLINE HEALTH COMMUNICATION
There is already a wealth of studies examining social media dynamics in the context of COVID-19; however, only a handful pay attention to social bots, especially in relation to health communication. In health communication, social bots may be used to manipulate public attitudes, sentiments, and behaviors by engaging in online health-related discussions. On the one hand, through effective and friendly interactions with human users, social bots can help people achieve their physical and mental health goals. For instance, researchers designed a Twitter bot named Notobot that leverages machine learning to identify users posting pro-tobacco tweets and to select individualized interventions to curb their tobacco use. Experimental evaluation suggested that the system would perform well if deployed.
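The Notobot-style pipeline described here can be sketched in a few lines: flag a tweet as pro-tobacco, then choose a tailored intervention reply. The keyword lists, function names, and intervention messages below are illustrative assumptions, not the authors' actual model, which used trained machine-learning classifiers rather than keyword matching:

```python
# Hypothetical sketch of a Notobot-style intervention pipeline.
# Keyword sets stand in for the real system's trained classifiers.
PRO_TOBACCO_TERMS = {"smoking", "cigarette", "vape", "nicotine"}
PRO_SENTIMENT_TERMS = {"love", "enjoy", "relaxing", "best"}

def is_pro_tobacco(tweet):
    """Flag a tweet as pro-tobacco if it mentions tobacco AND praises it."""
    words = set(tweet.lower().split())
    return bool(words & PRO_TOBACCO_TERMS) and bool(words & PRO_SENTIMENT_TERMS)

def pick_intervention(tweet):
    """Return an individualized anti-smoking reply, or None if not needed."""
    if not is_pro_tobacco(tweet):
        return None
    if "vape" in tweet.lower():
        return "E-cigarettes still deliver addictive nicotine - see quit resources."
    return "Thinking of quitting? Free support is available via a quitline."

print(pick_intervention("I love my morning cigarette"))
```

In the real system, the classification step would be a supervised model trained on labeled tweets, and the intervention would be selected per user; the control flow, however, follows this detect-then-respond shape.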
On the other hand, some malicious bots pose multifaceted threats to public health on online social networks. For example, in vaccine controversies, evidence suggests that people turn to the Internet for information about the potential side effects and consequences of vaccines, indicating that social media can affect parents' decisions about vaccinating their children. The Defense Advanced Research Projects Agency recently conceptualized how anti-vaccination activists could magnify their message's reach by employing social bots to inundate Twitter with tailored narratives intended to drive health-related decision-making. In addition, by suggesting medical uses for cannabis, social bots can be employed to skew the public's attitudes towards the efficacy of cannabis in treating mental and physical problems. Some recent works have also investigated social bots' engagement in online conversations pertaining to COVID-19: researchers from Carnegie Mellon University (CMU) found that many social bots spread and amplified false medical advice and conspiracy theories and, in particular, promoted "reopening America" messages.
One study provided evidence that high-bot-score accounts are used to promote political conspiracies and divisive hashtags alongside COVID-19 content. In the fight against COVID-19, several prevalent misinformation narratives were spread by social bots, including the beliefs that COVID-19 is a hoax created by the Democratic Party and that COVID-19 will disappear by itself. In conclusion, existing studies focus almost exclusively on the negative impact of social bots on health communication; how to use them properly is an aspect that needs more study.

SOCIAL BOTS' APPLICATIONS IN EMERGENCIES
There are few papers in this field, especially with a view to COVID-19. Fewer studies focus on the applications of social bots than on their threats, but several results are summarized below.
In light of Miner's study, during the COVID-19 pandemic chatbots could be used to spread up-to-date health information, ease people's anxiety about seeking medical care, and alleviate psychological pain. In addition, Hofeditz et al. published a paper documenting a first step in the area of benign social bot use for emergency management: emergency organizations would welcome further bot applications such as message translation bots, preparedness content chatbots, and bots as guides for their apps. However, that work chose Australia as its empirical investigation site and emphasized social bots' applications in natural disasters. In short, some scholars have shown uses of social bots in health communication, but what is still lacking is a systematic account of how social bots can benefit health emergencies such as pandemics, together with a model of how to turn their disadvantages into advantages.

SOCIAL BOTS' NEGATIVE EFFECTS DURING THE COVID-19 PANDEMIC
We perceive social bots less readily because, existing only as virtual "digital identities," they have no physical image; in fact, however, they are already widespread in social interactions with human beings. Social bots were created to benefit humankind, yet they also have serious negative effects. As stated in the literature review, research on social bots' influence on health communication has focused mainly on the negative aspects. Building on previous research, the main points are as follows.

SPREAD FALSE HEALTH CLAIMS
In 2017, Shao and colleagues at Indiana University analyzed 14 million tweets spreading 400,000 articles during the 2016 US presidential election, noting that automated social bot accounts were extremely active in the early stages of misinformation dissemination, and that human users then forwarded the false news the bots had disseminated, which ultimately led to the proliferation of fake news.
Another example is a 2017 study that compiled a corpus of 2.2 million Twitter posts and characterized how social bots promote electronic cigarettes. The researchers found that, compared with human users, social bots were twice as likely to suggest that electronic cigarettes could aid smoking cessation. Research has likewise shown that during the COVID-19 epidemic social bots became an important medium for spreading unverified rumors, seriously disrupting social order.

DISTURB SENTIMENT EXPRESSIONS
Social bots are not as crude as we might assume. Surveys show that moderate-to-complex bad bots now account for almost three-quarters of bad-bot traffic, and that advanced persistent bots (APBs) continue to plague websites, often avoiding detection through random IP addresses, anonymous proxies, identity changes, and imitation of human behavior. In 2019, 73.7% of bad-bot traffic came from APBs, and geographically 45.9% of malicious bot attacks originated in the United States.
It has been found that social bots intentionally engaged in manipulating public opinion and sentiment during the COVID-19 crisis and that they managed to facilitate the contagion of negative emotions on certain topics. Social bots and humans shared similar trends in sentiment polarity for almost all topics. Within overall negative topics, social bots were even more negative than humans when addressing COVID-19 in the US, COVID-19 in China, and economic impacts. Social bots' sentiment expressions were weaker than humans' in most topics, except for COVID-19 in the US and the healthcare system. Social bots were more likely to actively amplify humans' emotions than to trigger humans' amplification.
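The bot-versus-human comparison described here can be reproduced in miniature: group posts by topic and account type, then compare mean sentiment polarity. The records below are invented toy data and the topic labels are placeholders, not the study's actual dataset:

```python
# Toy illustration of per-topic sentiment comparison between bot and
# human accounts. Polarity is assumed to lie in [-1, 1].
from collections import defaultdict
from statistics import mean

# (topic, account_type, polarity) -- invented example records
records = [
    ("economic_impacts", "bot", -0.8), ("economic_impacts", "bot", -0.6),
    ("economic_impacts", "human", -0.4), ("economic_impacts", "human", -0.3),
    ("healthcare_system", "bot", 0.1), ("healthcare_system", "human", 0.3),
]

by_group = defaultdict(list)
for topic, kind, score in records:
    by_group[(topic, kind)].append(score)

for (topic, kind), scores in sorted(by_group.items()):
    print(f"{topic:18s} {kind:5s} mean polarity = {mean(scores):+.2f}")
```

On real data, each record's polarity would come from a sentiment classifier and the account type from a bot detector; the aggregation step stays the same.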
Social platforms have become an important place for people to express and communicate their emotions. Social bots act like human beings on these platforms; by transmitting negative or false emotions at certain times, they can even induce social incidents and become a destabilizing factor in cyberspace.

MISLEAD PUBLIC PERCEPTION
Social bots are not a high-threshold technology, and organizations that produce and sell them for profit can misuse them, passing them off as human beings on social platforms for false marketing and even to facilitate terrorist propaganda and recruitment. Many terrorists use social bots to propagate extremist ideas, which can lead citizens towards crimes and offences. More than 20 experts have warned of possible abuses, saying authoritarian rulers, criminals, and terrorists could use artificial intelligence to rig elections or use drones to carry out terrorist attacks. In a 100-page analysis, the experts outlined the rapid growth of cybercrime and the use of "bots" to disrupt news gathering and infiltrate social media over the next five to ten years. There is no doubt that misleading content, such as extremist and cultish ideas, can also be carried by social bots, easily leading people astray. Much of the public's misconception about the epidemic stems from the personal influence of social bots, especially on political issues.

NECESSARY REGULATIONS TO GOVERN SOCIAL BOTS
Social bots were initially invented to serve humans. Automatic responders to inquiries are increasingly adopted by brands and companies for customer care, and Quakebot is a software application that reports the latest earthquakes as fast as possible. Although these types of bots are designed to provide a useful service, they can sometimes be harmful. As shown above, social bots have so far played a largely negative role in the COVID-19 pandemic, but they are essentially an artificial intelligence technology, a double-edged sword that can truly be harnessed as a tool serving humans if we take steps to avoid its disadvantages. The following measures can be taken.

INSTITUTIONAL ASPECTS
As social bots' threats to society became evident, more and more institutional measures were taken. In 2018, California State Senator Robert Hertzberg sponsored SB 1001, which outlaws operating social media "bots" to influence political voting or commercial transactions unless it is explicitly disclosed that they are bots. On the platform side, Facebook blocked 1.5 billion bogus accounts between April and September 2019, including social bot accounts that churn out spam.
Moreover, in April 2019 the EU issued two important documents: the Ethics Guidelines for Trustworthy AI and a governance framework for algorithmic accountability and transparency. According to the Ethics Guidelines for Trustworthy AI, trustworthy AI must possess, but is not limited to, three characteristics: (1) lawfulness, i.e., the AI shall respect the fundamental rights of persons in accordance with existing laws; (2) ethicality, i.e., the AI should ensure adherence to ethical principles and values and meet "ethical purposes"; and (3) robustness, i.e., AI systems should be robust from both a technical and a social perspective, because even systems serving ethical purposes can unintentionally cause harm to humans without the support of reliable technology.
The Guidelines also provide a checklist for assessing trustworthy AI based on seven key requirements. The checklist, which applies primarily to AI systems that interact with humans, aims to guide the implementation of those requirements and to help different levels of a company or organization, such as management, legal services, research and development, quality control, HR, procurement, and day-to-day operations, collectively ensure trustworthy AI. The Guidelines state that the assessment list is not always exhaustive, that building trustworthy AI requires continuous refinement of requirements and the pursuit of new solutions, and that stakeholders should be actively engaged to ensure that AI systems operate safely, securely, legally, and ethically throughout their life cycle, ultimately for the benefit of humanity. As the problem of social bots grows more serious, it is necessary to promulgate dedicated legal regulation to restrict and regulate the behavior of social bots on social platforms. If social bots spread rumors and misinformation about epidemics at will, laws should be in place to punish them and the people behind them.

TECHNICAL ASPECTS
The bot-detection platform Bot or Not, developed by Clayton A. Davis of Indiana University and colleagues, analyzes user accounts' metadata, interaction patterns, and published content to generate more than 1,000 feature values sorted into six categories: network, user profile, friend, temporal, content, and sentiment features. Facebook said it had removed 1.2 billion fake accounts in the last three months of 2018 and 2.19 billion during the first quarter of 2019. Most fake social media accounts are "bots" created by automated programs that publish certain types of information, violating Facebook's terms of service and manipulating part of the social conversation; sophisticated actors can create millions of accounts using the same programs.
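The metadata-based scoring that Bot or Not performs can be sketched in miniature: extract a few account features, then combine them into a bot-likelihood score. The feature names, thresholds, and weights below are illustrative assumptions only; the real system extracts 1,000+ features across the six classes above and feeds them to a trained classifier rather than a hand-weighted rule:

```python
# Toy sketch of metadata-based bot scoring in the spirit of Bot or Not.
# Thresholds and weights are invented for illustration.
def extract_features(account):
    """Derive simple metadata features from an account record (a dict)."""
    followers = account.get("followers", 0)
    following = account.get("following", 0)
    return {
        "follower_ratio": followers / max(following, 1),
        "tweets_per_day": account.get("tweets", 0) / max(account.get("age_days", 1), 1),
        "default_profile": 1.0 if account.get("default_profile") else 0.0,
    }

def bot_score(features):
    """Toy linear score in [0, 1]; higher means more bot-like."""
    score = 0.0
    score += 0.4 if features["tweets_per_day"] > 50 else 0.0   # hyperactive posting
    score += 0.3 if features["follower_ratio"] < 0.1 else 0.0  # follows far more than followed
    score += 0.3 * features["default_profile"]                 # never customized the profile
    return score

suspect = {"followers": 3, "following": 900, "tweets": 40000,
           "age_days": 100, "default_profile": True}
print(f"{bot_score(extract_features(suspect)):.2f}")  # high score: all three signals fire
```

A production detector would replace `bot_score` with a model trained on labeled bot and human accounts, but the extract-then-score structure is the same.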
At present, detection technology for identifying whether a user on an Internet platform is a bot is still advancing. Beyond identifying social bots, it is necessary to recognize their content, distinguish malicious from benign social bots, and regulate the malicious bots that harm the spread of health information, so as to reduce their influence.

GLOBAL COOPERATION
The research and application of artificial intelligence is transnational and involves an international division of labor, so international collaboration and coordination in ethics and governance must be strengthened. The EU focuses on international communication on social bot governance issues: for example, on 22 May 2019 OECD member states adopted the OECD Principles on Artificial Intelligence, centered on the responsible stewardship of trustworthy AI and comprising five principles, namely inclusive growth, sustainable development and well-being; human-centred values and fairness; transparency and explainability; robustness, security and safety; and accountability. The EU is also continuously improving its governance programmes while participating in the policy formulation of the global governance framework. The adoption of these principles is an early practice of governing social bots through international effort.
All in all, taking mixed measures and strengthening cooperation across different areas is the effective way to regulate the behavior of social bots and, as a result, to apply them in emergency management during pandemics.

POSSIBLE APPLICATIONS OF SOCIAL BOTS
Social bots are an emerging field, with many applications envisioned, and scientific and technological advancement constantly affects humans at the individual and societal levels.
At different stages of an outbreak's emergence and development, well-regulated social bots can help prevent and control the epidemic, with different emphases depending on the period.

EARLY OUTBREAK
The role of social bots in the early stage of a public health event is mainly to transmit timely information about the outbreak's development and to propagate correct concepts of self-protection. In addition, and often overlooked, social bots can act as assistants to small groups: they can automatically represent organizations that want to recruit volunteers, recording and screening applicants, and can serve as conduits for information among health workers, delivering positive and useful information to individuals in the organization, increasing their productivity, and acting as the "glue" that holds individuals together.

STAGE OF SUSTAINED DEVELOPMENT
As the social environment becomes more complex in the developing and rising phases of an epidemic, societies become prone to rumors, conflicts, and attacks between parties and social groups. Experience with the COVID-19 epidemic suggests that social bots could play a positive role as "agents," delivering positive messages on social platforms and helping to repair social rifts.
Social bots are not only cold messengers but also emotional carriers, owing to their "human" nature. In the gloomy times of an epidemic, the positive signals they send could become a way to ease social and even international conflicts.

AFTERMATH CONTROL PHASE
As the epidemic is alleviated and society gradually regains orderly governance, social bots can perform functions in economic recovery: facilitating the flow of information according to different social needs, helping unemployed people find work, forecasting the recovery cycle of economic actors through strong computational power, monitoring possible secondary disasters, and helping society slowly return to normal functioning. Based on previous research, social bots have at least these potential roles in pandemic management and health communication. From the roles summarized above, we can predict that if social bots are well governed, they can be invaluable in helping people solve many problems during pandemics.

CONCLUSION
The COVID-19 epidemic swept the world with unprecedented momentum. In the 21st century, we have entered the "risk society" alongside economic and technological development, so people must seek solutions to social risks. As a new invention, the social bot has two sides: it can benefit human beings, but it also brings threats. This paper attempts to link the threats and benefits of social bots with active governance from the perspective of the epidemic, exploring how they could benefit humankind in the management of public health events.
This study focuses on applications in public health events, but the results can be extended to other areas such as natural disaster assistance. With the improvement of the legal system, it is foreseeable that future scholars will pay more attention to how to use social bots effectively in various kinds of social governance. In essence, this study offers a reasonable prediction based on previous research and constructs a feasible path for transforming threats into applications. However, due to the limits of its research methods, problems remain in proving the effectiveness of this transformation and the feasibility of actively applying social bots in practice.