While Google has pledged to continue with its human verification process, it has announced an opt-in program, as well as tighter restrictions on how long it holds users’ data and what it can do with it. Here’s what’s new.
Why Did Google Stop Using Human Reviewers?
Google’s decision to stop using staff to listen in on audio recordings came off the back of a somewhat turbulent period earlier in the year, when a flood of stories appeared in the press stating that Google, along with many other tech companies, was actively listening in on voice assistant commands. The reason all the accused organizations gave was the same: staff were verifying the queries being asked by users and using this data to hone and train the system. While this might not have been a secret, it’s hardly something the companies involved were shouting from the rooftops about either, and the public reception was somewhat disastrous. Hence Google’s decision at the time, along with Apple and others, to stop relying on human verification.

Google had always been clear that audio captured by Assistant was being recorded, and also stated that it masked the data so that anyone listening to the recordings would be unable to pinpoint the user it had come from. However, some investigations, including one carried out by Belgian organisation VRT, found that people were easily identifiable if they mentioned private details such as their home addresses or names. The investigation also discovered that Google Assistant could be activated even when the ‘Hey Google’ activation phrase wasn’t used: in some cases, a noise that sounded close enough to the phrase would start the system recording, meaning that private conversations, intimate activities, and even violence were all unwittingly captured.
What Has Changed?
In a post on the official Google blog, the company seeks to assure its users that it has listened to the complaints and made some changes. Yes, it has started using human reviewers again, but, it states, only with a number of caveats:

Audio won’t be stored by default: Google has promised that it will not keep any audio recordings without permission.

Users will be asked to opt in: Google Assistant users, both new and existing, will be asked whether they wish to opt into Voice and Audio Activity (VAA), which is used to train better voice recognition. This will store your data, and it may be listened to by human reviewers.

Interactions with Assistant: Users can view any interactions they have had with Google Assistant, and delete them. This isn’t actually a new feature, but it seems Google wants to remind its users that it’s there.

Extra privacy features: Google has stated that recordings are anonymous, and always have been, but that it will add more privacy filters. No word yet on what these will actually look like.

Adjust sensitivity: Google plans to tackle the accidental activation of Assistant with user-adjustable sensitivity settings. These should give users more control over those times when the ‘Hey Google’ phrase is not used but the assistant springs to life anyway, reducing the chance of this happening.

New data storage policy: Google will implement a new policy under which it promises to delete some data associated with accounts once it is more than ‘a few months old’.
How Are Other Companies Earning Back Trust?
Training and developing voice recognition AI has always leaned somewhat on a human element – it’s near impossible to expect these systems to improve when left to their own devices. However, with many users feeling hoodwinked into believing that their commands were private, and public trust in voice assistants at an all-time low, Google and its fellow companies are certainly taking the right steps by promising more transparency in the way they handle audio recordings.