
Should firms be accountable for misuse of their technology? Silicon Valley entrepreneur's words of warning

Imagine discovering your network is being used by Russian hackers to influence elections. This is what venture capital investor, serial entrepreneur and leading voice on tech and privacy issues Craig Vachon faced. Here he discusses artificial intelligence, bias, HR and accountability with Siân Harrington, editorial director of The People Space

Tech and AI accountability

Craig Vachon is on a mission to build awareness of how even tech-for-good can be used to chilling effect if it falls into the wrong hands. He should know: an anti-censorship tool his company developed to help web users in countries such as Saudi Arabia and China thwart state censorship saw a massive spike in traffic from Russia during the US election and Brexit referendum – prompting multiple US government agencies to descend on his office.

“The intent of the tool was to allow people to have more opportunities to learn, to become assimilated in a world of like-minded humans. And suddenly the tool gets misused and GCHQ and the three letter acronyms in the United States show up at your office and say ‘Hey, do you realise your network is being used to allow for anonymous Russian hackers to influence elections?’

“These are the types of challenges that are mind boggling to most technologists who set out to build something that is benign and, in most instances, fairly good. So, the question really comes down to at what point do you ascribe responsibility?”

With businesses increasingly adopting new technologies like artificial intelligence (AI) that gather huge amounts of personal data, this is a vital – if somewhat uncomfortable – question. And with much of this data now coming from the HR and people space, in the form of employee data, candidate data, social media data, sensors in offices and so on, it is one no leader can afford to ignore.

“I have the hardest time with this question because, at my core, I want to be responsible and I think that most technologists in Silicon Valley and around the globe should be responsible for the products they create. But at what point do humans override this? After all, in 99.99% of cases we use kitchen knives solely for dinner. But there is a tiny, tiny percentage of people who use them to harm themselves or someone else. And yet we don't necessarily think the kitchen knife manufacturer is responsible for that.”

He adds: “A good example is the Conservative party [in the UK] changing the name of its Twitter handle and setting up a fake ‘fact-checking’ URL. To me that is entirely 100% about humans being morally repugnant, that is not about technology. It’s about moral questions that humans have to answer. So, we've got to hold people accountable as opposed to pretending that it's technology's fault.”

AI's infallibility promise

As I see it, one of the big issues here is the promise of today’s technology. We have created a viewpoint that AI is somehow going to be infallible – that in people’s minds it will be 100% perfect. Take autonomous cars. Our natural reaction to hearing a self-driving car has killed someone is to ban them altogether. We want them to be 100% safe. But when humans get into cars they can become killing machines and, in the long run, are far more likely to cause death.

According to the World Health Organization, there were 1.35 million road traffic deaths in 2018 and road traffic injuries are now the leading killer of people aged 5-29 years old. But there is something about not being in control of an autonomous vehicle that makes us, as humans, uncomfortable. The question we should be asking about autonomous vehicles is: what is an acceptable number of deaths or injuries? What does Vachon think?

“We're fallible as humans. When they get it right, autonomous cars could be much safer for us. And yet we almost expect them to be 100%, and we cannot be 100% on anything to do with technology,” he agrees. “The humans who designed that technology are imperfect and hence the technology itself is imperfect. One of the most interesting things we've learned about AI is that AI is fallible too. In every instance humans are the instructors; you need human intervention to teach the AI. So natural human biases – good, bad, indifferent – are there.”

Allied to this is the fact that regulation has not kept up with technological development. Europe’s General Data Protection Regulation (GDPR) is oft-quoted as leading the way here in terms of protecting consumers and employees. But is it actually doing this effectively? Not according to Vachon.

“I'm a privacy geek, or anorak, if you will. And interestingly, most of the GDPR take-down requests in the EU today are people who have criminal records who want to expunge those records from the worldwide web. Again, it’s being misused from its original intent. So, when you ask should we look to regulatory responses, I see them as a big heavy hammer, they're not a scalpel. GDPR is a really brilliant piece of regulatory work except for when it's misused.”

Building in bias

Yet more regulation is looking increasingly likely. For, despite AI’s promise of repeatable, data-driven decision-making unimpeded by prejudice, there are clear signs that bias is being built into algorithms, often unconsciously.

“One of the biggest concerns I have with AI is that it's currently being taught by young, typically white people who live in surreal worlds like Silicon Valley,” says Vachon. “So how representative is this to the rest of the world? It's a really dangerous thing when we have facial recognition through AI, and it works remarkably well if you're an old, ugly, fat guy like me who just happens to be white. It works perfectly. But if you happen to have darker skin, it doesn't work remarkably well. In fact, it's scary.”

Which brings us to HR. For one way the average person is coming into contact with AI, probably without knowing it, is through talent departments using forms of AI to pick out the right words in resumes, and through video interviewing, sentiment analysis and facial recognition technology. Does Vachon think this is an acceptable use of new technology?

“From an HR perspective, I find it to be intellectually lazy,” he immediately says, “because diversity and other people's diverse thoughts are necessary for success. If I am looking at a small company to make an investment and all the people are identical – they all have big picture vision and they haven't recruited someone who has detail orientation, who has an expertise in execution – I walk away. And I'm not sure that we're doing ourselves as corporate citizens any benefit by not looking at recruits or potential recruits as humans, and instead relying on technology to try to sort this because it makes it faster and more efficient.”

But many organisations are reporting increasing diversity by using new technologies, I counter. For, as we have just discussed, we're inherently biased as humans.

“That may be the case,” Vachon says. “But I get worried. I was looking at an article recently where they were using a technique to judge a person's demeanour and ethical base by their interpersonal movements. And I found that to be abysmal as, on a personal level, I can't stand still. If you had a video of me you'd be shocked at how much I move and how hard I find it to sit still. So I would worry that this misrepresents me as a candidate because I don't fit into a norm.”

But from the perspective of HR and talent departments, these technologies are getting people in front of them whom they wouldn't normally see because they are programming diversity into their AI. It's a difficult balancing act.

“And that's what makes this discussion so interesting,” says Vachon. “I have an investment in an AI company that does natural language generation. And while I love this little company – because we can put 120 to 150 pages of company reports into our machine and 90 seconds later there’s a four-paragraph report that goes onto Yahoo Finance and you think, wow, that's brilliant – until you talk to a friend like Siân, who is a journalist, who says: hold on, that may be taking my job.

“And in the next breath, I look at that efficiency and I go, okay, but what are we missing? The computer should be pointing out three potential areas where there are anomalies and the human can follow up. And that’s where some of these tools need to be considered, instead of as a panacea, as human augmentation.”

Is it HR Big Brother?

So what should businesses and HR directors think about when they are looking at getting data that on the one hand can be very beneficial to the business and to employees, but on the other hand can feel a bit Big Brother?

“Start, like everything else in life, with what you're trying to achieve. And then reverse engineer a solution with a great deal of forethought to how it impacts humans. I definitely think that we get a little intellectually lazy with some of these tools and don't necessarily think about all of the impacts. I do think there are responsibilities that need to be taken into account by businesses.

“Perhaps there are technology companies that get a little intellectually lazy about all of the potential uses of these products. But today’s technology tools are enormously difficult to control and enormously powerful. Weapons of mass persuasion can be created pretty readily. I am fully aware of the counterargument that if you've got nothing to hide then why are you concerned? Most of us have nothing to hide, but we do have things that we want to protect.

“We need to, as humans, hold people accountable when they undertake this sort of morally, ethically challenged approach because technology can be misused, whether it's a kitchen knife or a social media platform.”

As co-founder and managing partner of Silicon Valley seed investment firm Chowdahead Growth Fund, Craig Vachon (pictured) has raised $1.46 billion in new investment for tech start-ups-for-good and has just published his first fiction book, The Knucklehead of Silicon Valley. His 2012 TEDx Talk ‘Does privacy matter?’ exposed how the big names in tech make users into products and was the most popular TEDx the year it came out…until Google took it down

Craig Vachon

Published 11 December 2019

