Social robots: are they really among us?
What are software robots?
Our world is ruled by social media: Facebook, Twitter, Instagram and Snapchat are the most widely used platforms for social exchange, and their influence on our collective mind is huge. But are these environments really populated only by people?
Some studies claim that only 39% of online traffic is generated by humans, while the remaining 61% comes from bots.
In this digital world, bots (software robots) are programs that simulate human beings. They often have their own Facebook profiles: they post pictures, comment on their mood, and simulate opinions on any subject.
Some of them are probably already among our Facebook friends or Twitter followers.
The most advanced bots can hold a real online conversation with human beings, who are unaware that on the other side of the screen sits a "person" made of lines of code. These robots access the same communication and interaction channels that humans use. At first they were simple random text generators; later they became more sophisticated and able to imitate human behavior in a social environment, turning into "social bots".
There are two kinds of social bots: good ones (70%) and bad ones (30%).
Good social bots are harmless infiltrators: automated users programmed to respond to specific inputs and to the human reactions that follow. Chat services of this kind abound: from the practical customer service chats adopted by more and more companies, such as Lloyds Banking Group, Royal Bank of Scotland, Renault and Citroën, to more playful ones, such as Poncho the weather cat (http://www.poncho.is/) or Xiaoice, the teenager who gives love advice.
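To give a rough idea of the mechanism, here is a minimal sketch of such a rule-based chatbot in Python. The patterns and replies are invented for illustration; real services like those above are far more elaborate and sit behind a messaging-platform API.

```python
import re

# Minimal rule-based chatbot: each rule maps a regular expression
# (matched against the user's message) to a canned reply.
# All patterns and replies here are invented for illustration.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you today?"),
    (re.compile(r"\bweather\b", re.I), "Looks sunny where you are. Bring sunglasses!"),
    (re.compile(r"\b(bye|goodbye)\b", re.I), "Goodbye! Talk to you soon."),
]

def reply(message: str) -> str:
    """Return the reply of the first matching rule, or a fallback."""
    for pattern, answer in RULES:
        if pattern.search(message):
            return answer
    return "Sorry, I didn't get that. Could you rephrase?"

print(reply("Hi there!"))            # greeting rule fires
print(reply("What's the weather?"))  # weather rule fires
```

This is exactly the "respond to specific inputs" behavior described above: each human reaction simply triggers another round of pattern matching.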
Another example of a bot is "Tay AI", launched by Microsoft on Twitter on March 23, 2016. The project aimed to analyze how a computer can improve its understanding of human conversation and learn to use natural language. After a few hours of "practice", Tay had become vulgar, crude, a conspiracy theorist, and a supporter of incestuous relationships and Hitler's Aryan theories. In 16 hours it posted 92,000 tweets defending these opinions, making it necessary to close its account immediately.
Bad bots are created with the sole purpose of doing harm: they deceive, exploit and manipulate people's interactions on social networks with spam, malware and misinformation. They are often used as tools for propaganda and for recruitment into subversive armed groups.
This kind of software was also used in 2015 for marketing purposes, when a company put a multitude of social bots to work spreading buzz about its products. It worked: people's interest grew, and so did the company's profits, which gained $6 billion.
Manipulating public opinion by means of bots is profitable, but it stretches the bounds of legality: we saw it in 2010 during the US elections, when bots were used on Twitter to support some candidates and discredit their opponents by posting links to fake news.
Bots can harm our society in even more subtle ways: some studies reveal that social network users are highly vulnerable, since they carelessly reveal private information such as telephone numbers, addresses, and even bank details. This vulnerability can be exploited by cybercriminals, eroding people's confidence in social networks.
Bots can easily infiltrate an unaware population and manipulate people. In recent years, bots on Twitter have become increasingly sophisticated and very difficult to detect: they can populate fake profiles with fabricated content, publish posts on a schedule, hold conversations, and answer comments and questions. Other bots hijack people's identities to steal personal information or content such as pictures and links. They can even clone users and imitate their behavior, interacting with their friends and publishing content in line with the users' habits.
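To illustrate just one of these capabilities, posting on a schedule, here is a minimal sketch using Python's standard sched module. The publish() function is a hypothetical stand-in: a real bot would call a social network API at that point.

```python
import sched
import time

def publish(text: str) -> None:
    # Hypothetical stand-in: a real bot would call a social network API here.
    print(f"[{time.strftime('%H:%M:%S')}] posting: {text}")

scheduler = sched.scheduler(time.time, time.sleep)

# Queue three posts at fixed intervals (seconds here for the demo;
# a real bot would spread them over hours to mimic human rhythms).
posts = ["Good morning!", "Lunch break thoughts...", "Good night!"]
for i, text in enumerate(posts):
    scheduler.enter(delay=5 * (i + 1), priority=1, action=publish, argument=(text,))

scheduler.run()  # blocks, firing each post at its scheduled time
```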
For all these reasons, the computer science community is working on advanced methods to automatically detect social bots. Social networks are trying to do the same by introducing CAPTCHAs, but these have turned out to be inadequate and ineffective. Many approaches to identifying bots have been tried so far, such as crowdsourcing, machine learning methods, and social network data tracking, but none has been very successful.
At first, a heuristic approach was used to unmask bots: a real person chatted with another user and tried to judge, from the coherence and repetitiveness of the answers, whether the interlocutor was human or robotic. Later, automatic systems based on algorithms were created for this purpose: some were hand-crafted rules, while others were designed to learn autonomously by mapping the behavior of users suspected of being bots.
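As a rough illustration of the hand-crafted kind, here is a minimal sketch of a heuristic bot scorer in Python. The features and thresholds are invented for illustration and do not come from any published detector.

```python
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float    # average posting frequency
    followers: int
    following: int
    duplicate_ratio: float  # fraction of posts that are near-duplicates

def bot_score(acc: Account) -> float:
    """Sum simple behavioral signals; higher means more bot-like.
    Features and thresholds are invented, for illustration only."""
    score = 0.0
    if acc.posts_per_day > 50:                      # humans rarely post this often
        score += 1.0
    if acc.following > 10 * max(acc.followers, 1):  # follows far more than is followed
        score += 1.0
    if acc.duplicate_ratio > 0.5:                   # mostly repeated content
        score += 1.0
    return score

suspect = Account(posts_per_day=120, followers=12, following=4800, duplicate_ratio=0.8)
print(bot_score(suspect))  # 3.0 -> flag this account for review
```

The learning-based systems mentioned above replace such hand-picked thresholds with models trained on the behavior of accounts already known to be bots.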
The main aim is to detect harmful bots; the secondary aim is to expose whoever is controlling, and taking advantage of, these bots.
At this point, some questions arise: what is the point of these bad bots? Can they be exploited to increase a company's profits? How much fake content have they created?
Sep 02, 2016