"We were amazed by the quality of APIs they provide you. They provide you with libraries for any possible language, like C++, C#, .Net, Java, whatever," Muslukhov says. "You just import their library and you call the function with an image inside, and they return you within five seconds a string with the CAPTCHA." Accuracy is claimed to be 87 percent, but the researchers chose to solve the CAPTCHAs manually in their testing to ensure better results.
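The vendors' actual libraries aren't documented in the article, but the workflow Muslukhov describes -- import a library, pass in an image, get the CAPTCHA text back within seconds -- can be sketched roughly as a submit-and-poll loop. Every name below (`solve_captcha`, `submit`, `poll`) is hypothetical, not a real vendor API:

```python
import base64
import time

def solve_captcha(image_bytes, submit, poll, timeout=5.0, interval=0.5):
    """Illustrative sketch: send a CAPTCHA image to a solving service
    and wait for the answer string. `submit` and `poll` stand in for
    whatever calls a vendor's library actually exposes."""
    # Services typically accept the image as base64-encoded text.
    job_id = submit(base64.b64encode(image_bytes).decode("ascii"))
    deadline = time.time() + timeout
    while time.time() < deadline:
        answer = poll(job_id)          # None until a human/OCR backend answers
        if answer is not None:
            return answer
        time.sleep(interval)
    raise TimeoutError("no CAPTCHA solution within %.1f seconds" % timeout)
```

The point of the pattern is how little the bot operator has to do: one call out, one string back, which is what makes these services easy to bolt onto a botnet.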
The basic infrastructure costs around $30 per bot. Ready-made networks with tens of thousands of connections can provide an instant "army of bots," as Muslukhov puts it. "We chatted with one of the guys online. He responded to us with some features -- they had this already made."
It's not easy to stop social bots
The complexity of social botnets makes it difficult to craft an effective security policy against them, the UBC researchers say. Open access to online services -- including features that allow crawling of social networks and make participation easy -- creates a tension between security and usability.
Security online relies on several assumptions. One key assumption is that fake accounts have a hard time making friends -- in other words, that you can easily tell a real account from a fake one by looking at its circle of friends. The UBC experiment shows social bots can be human enough to trump this assumption.
When the fakes ingrain themselves so well in the network that they are indistinguishable from authentic accounts, you face a more fundamental concern: How can you rely on the data in your social network? After all, many technological, economic, social, and political activities depend on that information.
For example, Facebook lets users interact automatically with the site so that outside service providers can integrate their offerings. This makes it as easy for social bots to use Facebook as it is for people. Facebook also lets users browse through extensive data sets to make the site more convenient and useful. Social bots can take advantage of this laxity to harvest massive amounts of private data.
The UBC researchers divide the available defensive strategies into prevention and limitation. Prevention means changing the prospects facing a potential social botnet operator -- in other words, putting up more barriers to automated access, because such automation favors computer-driven invaders. That, of course, risks turning away human users who don't want to jump through those hoops either.
Limitation accepts that infiltrations will occur and focuses on capping the damage. Today, social networks rely on limitation to respond to adversaries: They look for differences in structure and behavior between social botnets and human networks, then use that detection to close down artificial accounts. But as social botnets gradually extend their tentacles into human networks, acquiring a similar social structure in the process, this defense becomes less effective.