Social media has gone from a novel way for businesses to connect directly with customers to an often sizable drain on marketing and customer service resources. The sheer volume of social conversations can be mind-numbing… Reading, let alone responding to, that many messages isn’t always practical. So to keep up with the burgeoning workload, a growing number of companies are taking a step that, to some in the field, amounts to purest blasphemy: They’re taking the social out of social media.
Story length: 1,190 words; Access the full story here.
Brenda Berkelaar (The University of Texas at Austin) suggests some key “Takeaways” on the story:
Despite their critics and their origins in interpersonal communication scholarship, Petronio’s (2002) Communication Privacy Management (CPM) Theory and Expectancy Violation Theory (Burgoon, 1978) provide insight into concerns about robots that aren’t immediately recognized as robots. Broadly speaking, CPM argues for the importance of people having control over when, how, and to whom (or to what) they disclose information.1 If she expanded expectancy violation theory into digital space, Burgoon might argue (as the author of the tweeting-robots article did) that if you connect with “someone” and it turns out that “someone” is a computer script, you are likely to evaluate it negatively.2 If your audience expects information quickly and doesn’t need interaction, bring out the ‘bots, so long as they are actually efficient for the end user.
As both these theories suggest, people have rules for where ‘bots belong, and they don’t belong where we expect people. We see this in recent news stories on Mitt Romney: Allegations emerged that his campaign bought robots to support social media goals and to increase his number of followers. Of note, it is equally likely that the ‘bots were activated by opponents, since the idea of robots in place of humans taints perceptions of the accused’s authenticity and transparency, which for many people prompts a quick jump to questioning ethics (Oliver, 2004). Pulling back the curtain to reveal robots where humans are expected tends to make the leader of a “robot army” look bad (see Huff, 2012). Both CPM and expectancy violation theories would support the recommendation in the article that organizations flag interactions as being of robotic or human origin. This would help set expectations and offer people insight into how to get around the robot if they needed or wanted human interaction.
This is not to say robots shouldn’t be used in social media. Computers excel at automation, so long as their operating instructions are appropriately designed. Advances in affective computing and linguistic analysis can provide insight into the information onslaught facing organizations every day. Social media analysis systems can operate as an early warning system for different types of social media challenges. They might also help identify strategic organizational opportunities. Furthermore, automated scheduling can help human communicators manage when and where their tweets, posts, and feeds are delivered. Unfortunately, automation can still produce surprises, since computers don’t always recognize when context changes. Which leads to my corollary takeaway: Organizations generally don’t like surprises either. Although it seems that the National Rifle Association (NRA) tweeter was unaware of the recent Colorado shooting, many people initially speculated that the tweet “Good morning, shooters. Happy Friday! Weekend plans?” resulted from the NRA scheduling future tweets using Hootsuite™, a social media app. The situation highlighted potential problems with using technology to accomplish communication goals (and human failings as well).
So after all this, the big takeaway from this article is to avoid surprises for individuals and organizations. How do we do this? Some fundamental communication skills may apply:
- Know your audience. This includes knowing their expectations. If they want information quickly and don’t need interaction, a digital agent (with the appropriate identification) might be just the thing, so long as there is an option to access a human being if necessary.
- Frame expectations by letting your users, customers, or clients know if and when they are communicating with a digital agent. And if you use technology…
- Use technology strategically. Consider the costs of likely, probable, and possible scenarios, and decide whether the efficiency (if your audiences even want it) is worth it. Second-order effects can create quite a mess. So…
- Be prepared for surprises and adapt. As with any communication scenario, some surprises are inevitable; knowing how to adapt quickly is crucial. Also, when you mess up (and the technology messing up is your organization messing up), a good apology helps as well.
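The first two takeaways (identify the digital agent, and offer a route to a human) can be sketched in a few lines. This is an illustrative example only, with hypothetical names and no real platform API: every automated reply carries a visible label and a human escalation path, so the robotic origin is never a surprise.

```python
# Hypothetical label; the article recommends flagging robotic vs. human origin.
AUTOMATED_TAG = "[automated reply]"

def build_reply(body, human_handle="@SupportTeam"):
    """Label a bot-generated reply and point users to a human contact."""
    return f"{AUTOMATED_TAG} {body} Need a person? Reply to {human_handle}."

def is_automated(message):
    """Let downstream tools (and users) detect bot-origin messages."""
    return message.startswith(AUTOMATED_TAG)
```

In expectancy-violation terms, the label does the framing work up front: the audience knows it is talking to a script before it invests in the exchange, and the handle tells it how to get around the robot.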
1And, yes, presumably the technologically literate recognize that online interactions differ because of invisible audiences, context collapse, and the broadcast ability of Internet tools (boyd, 2006). However, I suspect that, for most people, immediate physical reality makes for digitally localized conversations, so that they fail to truly appreciate the de-contextualized and re-(re-?)contextualized aspects of online communication. That is, online communication is understood and framed by the author in light of the intended audience, without consideration of invisible or robotic audiences, until they are surprised.
2That said, I want to leave open the possibility that a person could prefer robots over people and thereby be pleasantly surprised by the robotic revelation, as expectancy violation theory also predicts.