News

Date: 17 Jun 2020 [London, United Kingdom]

As AI technology moves beyond basic consumer needs, what is the true cost of AI in surveillance – and is it already out of control? We asked the experts.

Artificial intelligence is woven into airports, entertainment venues, stadiums, hotels, casinos, shopping centres and, in particular, police forces. In 2019, at least 75 countries were using AI-enabled surveillance. Irish start-up Liopa is trialling a phone application that can interpret phrases mouthed by people, and Amsterdam-based VisionLabs claims it can tell whether someone is showing anger, disgust, fear, happiness, surprise or sadness, with the aim of tracking productivity and even making hiring decisions.

Concerns Around Facial Recognition

The use of AI, and industry acceptance of it, have been growing steadily internationally, particularly in Southeast Asia. The region has been identified as one of the target markets for a US$800 million investment fund, a significant portion of which is to be made available to grow the region's AI ecosystem.

Nigel Jones, Co-Founder of the Privacy Compliance Hub, a former Google executive and head of its legal team for Europe, said: “While technology can undoubtedly be a force for good, and can and does track and find criminals, it can also go too far and become a threat to people going about their everyday lawful business. Today, from unlocking your iPhone to employers tracking productivity and police forces surveilling protests, facial recognition technology is becoming more and more embedded in our everyday lives.

“But there are several reasons for concern when it comes to facial recognition software and privacy. These tools work a lot better in the lab than they do in the real world. Small factors such as light, shade and how the image is captured can affect the result,” said Jones.

Jones says that many critics argue users have not consented to being scanned, and that going out in public should not be taken as consent to being tracked. Even knocking on a friend's door, if it is fitted with a Ring doorbell, could see you added to a police database of images.

Without legislation to govern it, facial recognition used for surveillance could be dangerous for those caught on camera, according to Shawn Tan, CEO of Skymind, the world's first dedicated AI ecosystem builder, which enables companies and organisations to develop their own AI applications.

“For far too long, there have been too many cases where biased datasets have been used by companies to develop AI that discriminates against the society it is meant to serve. It also leads to backlash for businesses and shuts out too many people from accessing goods and services. Companies must seriously consider algorithm interpretability and testing. New, robust legislation puts a renewed sense of responsibility on those developing and implementing AI to be transparent and call out biases in datasets. Without legislation, companies have no incentive to put in the extra resources required to overcome such biases,” said Tan.

Sea Change for Big Tech

The scale of mass surveillance doesn't get any greater than Google's use of cookies, yet Google, along with others, has announced that it will phase out third-party cookies. IBM, Microsoft and Amazon have all either pulled out of the facial recognition software market altogether or are limiting their work with police forces in the US.

But companies selling facial recognition software have compiled huge databases to power their algorithms and search against – in Clearview AI's case, some 3 billion images scraped from Google, Facebook, YouTube, LinkedIn and Venmo.

According to Jones, these systems are a real security risk: “Hackers have broken into databases containing facial scans used by banks, police departments and defence firms in the past. Criminals can use this information to commit identity fraud, harass or stalk victims. Biometric data is not something you’d want to fall into the wrong hands,” he added.

Peter Galdies, Senior Consultant at DQM GRC, a specialist data protection and privacy consultancy, added: “From a privacy perspective, it is worth noting Meta's recent decision to shut down its facial recognition system. Meta acknowledged the situation as a complex social issue, and the rebranded ‘Meta’ seems to be shifting its approach to how this biometric data is stored and processed. Part of its reasoning could be that the company realised the nature and type of consent gained may not have met the existing, or future, standards that regulators expect for sensitive biometric data.

“It can therefore be concluded that commercial users of such data would be somewhat sceptical of forming a business relationship with Meta on these matters, considering the previous litigation Facebook has been involved in with privacy regulators.”

Leading biometric authentication firm ID R&D takes consent very seriously, and transparency is the solution, according to its President, Alexey Khitrov:

“Meta's change of identity came alongside the announcement that it would end the use of its facial recognition software, but many believe facial recognition is highly likely to be used in future metaverses. Users need to be able to choose whether to opt in or out – and Meta acknowledges this. Nobody should be subjected to face recognition without consent or prior knowledge.”

“Transparency must give control of biometric data back to its rightful owners – the people themselves. An AI Bill of Rights would allow people to know exactly what information is being used and how,” said Khitrov.

Is Facial Recognition Technology Vulnerable to Corruption?

Facial recognition mechanisms are an obvious boon for law enforcement, according to Andy Patel, a researcher with F-Secure's Artificial Intelligence Centre of Excellence. Surely people would be fine with such mechanisms replacing the manual examination of CCTV footage, generating real-time alerts when suspects or known perpetrators pass a camera?

Patel insists that most people haven't considered the downsides – such as what happens if the data is stolen and used for malicious purposes, or how such a mass surveillance system might be abused if their country slid into a corrupt regime.

“At present, no system is impervious to security breaches, so it should be assumed that facial recognition data will eventually be stolen. With this in mind, law enforcement agencies should use such technologies selectively and via appropriate legal processes. They should also ensure that any data associated with specific individuals is stored securely,” said Patel.

Callsign is pioneering digital trust through proprietary technology that mimics the way humans identify each other in the real world.

Amir Nooriala, Callsign's Chief Commercial Officer, said: “A layered approach to stopping cybersecurity breaches offers an effective solution without invading individual user privacy. Mass surveillance cannot be the solution to protecting people and personal data when there are better methods on offer. Customers must believe their digital identities and personal information are protected; otherwise, these newly minted digital services will never be a source of sustainable growth.”

Who is Regulating AI?

Nigel Jones believes that the use of facial recognition software has real implications for fundamental human rights, including the right to protest and the right to a private life. In China, gaming giant Tencent is using facial recognition to stop children from playing games between 10pm and 8am: between those hours, players must pass a facial scan to prove they are adults.

“It's particularly insidious as we might start to see it as normal. It's a slippery slope. One of the risks of children being monitored at a young age is that they come to accept facial recognition technology as they grow into adults,” said Jones.

The use of facial recognition tools is already governed by the GDPR in the EU and the UK, but even as technology companies themselves call for stronger regulation, there isn't enough transparency about what happens to stored images, how they're processed, and for how long, according to Jones.

One of the challenges regulators face is that the technology is moving incredibly fast. Trust is another: 29% of individuals surveyed by the Ada Lovelace Institute back in 2019 said they did not trust closed-box systems used by police forces, citing a lack of faith that the police would use the data ethically.

With 25 years' experience in data regulation and practice, Alan Calder, CEO of GRC International Group, a global provider of IT governance, risk management and compliance solutions, believes that centralised data gathering is a critical issue: “The more data that is centrally gathered, the more likely it is that there will be abuses and breaches. Cybercriminals will obviously attack large stores of valuable personal data, all of which can be sold or reused to craft other attacks, and most governments have a tendency toward repression and enforcement, for which access to large quantities of personal data is essential. Limiting data collection by central authorities is a good way of limiting government interference.”

Skymind's Shawn Tan added: “Like many other firms in the AI sector, we are in favour of the EU rules introduced in May, which will help to determine how companies use artificial intelligence. The rules are the first of their kind to regulate the sector, and the EU's approach is unique in the world. They can also address the ethical issues around AI.”