You won’t be able to use an anonymous bot to tweet, boost Twitter profiles, or post items on Facebook in California, beginning next year. Nor use a bot that pretends to be a person to try to sell something – including a candidate for office – on high-traffic websites.
California governor Jerry Brown signed senate bill 1001 into law. Authored by senator Bob Hertzberg, it’s particularly intended to stop automated social media posts that inject comments – fake or otherwise – into political debates.
It only applies to websites that attract at least 10 million unique U.S. visitors per month. There are a couple of hundred websites that meet that qualification, according to Quantcast.com. The list includes big, California-based platforms, like Google, YouTube, Facebook, Apple and Netflix. But as written, it also applies to out-of-state giants, like Amazon and the New York Times.
SB 1001 makes it…
Unlawful for any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election.
Using a bot would still be legal, so long as it’s identified as such and the disclosure is “clear, conspicuous, and reasonably designed to inform persons with whom the bot communicates or interacts that it is a bot”.
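As an illustration only – the bill itself contains no code or technical spec – here is a minimal sketch of how a site might build that kind of disclosure into an automated chat agent. The `ChatBot` class, the disclosure wording, and the canned reply are all hypothetical assumptions, not language from SB 1001.

```python
# Hypothetical sketch: prepending a clear, conspicuous disclosure to every
# automated reply, in the spirit of SB 1001's disclosure requirement.
# The class, wording, and reply logic are illustrative, not from the bill.

DISCLOSURE = "[Automated message: you are chatting with a bot, not a human.]"

class ChatBot:
    def __init__(self, name):
        self.name = name

    def reply(self, user_message):
        # A real bot would generate an answer here; this one is canned.
        answer = f"{self.name}: thanks for your message!"
        # Every outgoing message carries the disclosure, so the person
        # interacting with the bot is informed that it is a bot.
        return f"{DISCLOSURE} {answer}"

bot = ChatBot("Eliza")
print(bot.reply("Is this product in stock?"))
```

Whether a label like this would count as “clear, conspicuous, and reasonably designed” is exactly the kind of question the courts, not programmers, would end up settling.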
Online platforms, web hosts and Internet service providers won’t have to do any policing. SB 1001 specifically doesn’t apply to them.
A “bot” is defined as “an automated online account where all or substantially all of the actions or posts of that account are not the result of a person”. It doesn’t appear to apply to customer service bots that websites use to communicate with visitors, but there’s enough wiggle room that the courts will have to decide where to draw the line. That’ll be after the law survives the inevitable challenges on First Amendment and federal preemption grounds.
Anytime age-restricted products – for example, weapons, alcohol, tobacco and porn – are offered for sale online in California, merchants would have to “take reasonable steps to verify the age of the purchaser”, if a recently amended bill makes it into law.
Assembly bill 2511, authored by Ed Chau (D – Monterey Park), started out as a privacy bill aimed at preventing commercial use of social media postings by minors, and would have generally reinforced bans on selling certain things to them. Arguably, selling guns, porn or booze to minors or mining their social media for commercial purposes without their parents’ permission is already illegal. But there were arguably loopholes, and the original idea was to close them.
Since layering new language on top of existing bans can create confusion and new loopholes, Chau told the assembly privacy and consumer protection committee that he wanted “to narrow the bill to reflect his underlying intent, which is to intensify the age verification efforts online sellers must undertake with respect to specified goods”.
Opponents – including Internet industry groups – objected, saying age verification requirements would be burdensome, and would require websites to collect even more personal data about customers, creating even more privacy problems.
Chau’s response, according to the committee’s analysis, was to toss the problem back to them…
Innovation and technology have created new types of marketplaces which have greatly increased convenience for Californians. However, those new technologies also came with a host of problems. It is time to apply that same innovative spirit the tech sector embodies to the solving of undesirable consequences. I am confident that as this bill moves through the process, we can come up with a solution that will balance all of the competing interests.
In other words, Chau’s bill is still a work in progress, and if it moves forward more amendments are likely. AB 2511 is queued up for a vote by the full California assembly, with a deadline coming on Friday – if it isn’t approved by then, it’ll be a dead issue.
Some bills that would regulate websites, social media and other consumer-facing Internet services are moving ahead in the California legislature. But not all of them.
Assembly bill 3169, carried by James Gallagher (R – Chico), is dead. It would have required “social media Internet web sites” and search engines to be politically neutral. It would have failed any First Amendment test. The assembly privacy and consumer protection committee scrapped it by ignoring it – when the vote was taken, only two members, both Democrats, said aye and the rest remained silent.
One of those voting aye, assemblyman Ed Chau (D – Monterey Park), has a couple of privacy bills pending. AB 2511 would put restrictions on web sites and applications that use information about minors; AB 2935 would do the same for information collected by health monitoring devices. Both made it out of committee and are queued up for a floor vote in the assembly.
As originally written, senate bill 1424 by Richard Pan (D – Sacramento) was as dumb an idea as Gallagher’s proposal. It would have required social media platforms to place a warning on news stories containing false information. Besides being completely unworkable, it too would have collapsed at the first mention of the First Amendment. Pan rewrote it to require sites to disclose fact checking and other editorial policies. Even as amended, SB 1424 still looks like an overreach, though. The senate judiciary committee is scheduled to take a look at it today.
Two bills took aim at bots – automated processes that can mimic people, and collect and post information on websites and social media. AB 1950 by Marc Levine (D – San Rafael) would have required sites that use bots to disclose the practice. It was also killed by the assembly privacy and consumer protection committee (although death is never final in the California legislature – anything can happen so long as it’s in session).
SB 1001 by Bob Hertzberg (D – Van Nuys) specifically targets bots that act like people – chatbots, as they’re sometimes called. If a company invites you to chat online but doesn’t tell you that, say, Eliza is just a computer program designed to make you think they care, then it could face consumer fraud charges. It was approved by both the senate judiciary and business, professions and economic development committees, and is awaiting a final blessing from legislative leaders on the appropriations committee before heading to the senate floor.
The etiquette of things.
“Good practice, when it comes to handling data, is not something new, it’s something we’ve already done well”, said Marc Rogers, an Internet security researcher. “We have to be careful we don’t get paralysed by worrying about exotic threats”. He was speaking on a panel this morning at CES that looked at the need, or not, for regulating the so-called Internet of things (IoT). When a device in a home, a thermostat for example, automatically sends information to a private company – an electric utility, say – it might not be done with the same degree of privacy and consent that’s involved when a person manually enters data on a website.
Regulations are coming. Federal trade commissioner Maureen Ohlhausen made it very clear that the FTC has consumer privacy and data security at the top of its agenda, although she tried to reassure everyone that it’ll come with a dose of “regulatory humility”.
“We should adopt a regulatory regime that allows innovation, even disruptive innovation, to thrive”, she said, pointing out that federal regulators already have a well-packed tool box, but they’re not quite sure what to do with it. “How do we ensure that consumers get the benefits and minimise the risk, without an undue burden on business?”
“You can have a very serious negative impact if you regulate prematurely”, countered Robert Pepper, VP for global technology policy at Cisco. It’s better to wait for people to arrive at their own rules and expectations by social consensus if possible. “Etiquettes are not developed through regulation, they’re developed bottom up by users”.