Creepy Google AI targeted father, falsely accused him of child sex crimes following private conversation between him and doctor about son’s groin problems
by: Ethan Huff for Natural News
A Silicon Valley software engineer faced a government probe and had his Google email accounts shut down and deleted after the tech giant’s artificial intelligence (AI) apparatus falsely flagged him for child sexual abuse material (CSAM).
The unnamed father, who chose to remain anonymous for his and his family’s protection, says things went south back in February 2021 when he sent photos of his son’s medical condition to the family doctor.
The doctor had requested that the boy’s father upload pictures of his son’s genital swelling as part of a virtual emergency consultation, which caught the attention of Google’s AI robots.
Since just about everything was closed in the Bay Area due to Wuhan coronavirus (Covid-19) restrictions, the father had no choice but to upload the photos to help his son. It was then that his life took a major turn for the worse.
Two days after the photos were uploaded, the father discovered that he had been locked out of his Google account due to “harmful content” that was deemed a “severe violation of Google’s policies and might be illegal.”
Google then reported the father to the National Center for Missing and Exploited Children, which resulted in the San Francisco Police Department opening a case against him.
Don’t use anything from Google
The father attempted to reason with the alleged humans who work at Google, but to no effect. He was cut off not only from his Google email but also from his mobile service through Google Fi, and he lost all of his emails, contacts, photos and even his phone number.
Nearly a year later, the father received an envelope from police with documents explaining that he had been investigated for crimes he did not commit. Also included were copies of the search warrants served to Google and to the man’s internet service provider (ISP).
Fortunately for the father, his case was ultimately dismissed after it was determined that he did nothing wrong – it was Google’s AI robots that did something wrong.
The father then asked Google to reinstate all of his accounts, only to have the tech giant refuse. Even though the case was closed and the father was deemed innocent, Google decided that all of his accounts would remain permanently deleted.
“This case highlights the complications that arise when using AI technology to identify abusive digital material,” writes Amber Crawford for 100percentfedup.com.
“Google’s AI was trained to recognize ‘hashes,’ or unique fingerprints, of CSAM. The flagged content is then passed on to human moderators who determine the proper channels to report the potentially harmful material to.”
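To make the hash-matching idea in that passage a little more concrete, here is a minimal Python sketch of how such a pipeline could work. This is not Google’s actual implementation: every name in it (KNOWN_CSAM_HASHES, fingerprint, scan_upload, handle_upload) is hypothetical, and real systems rely on proprietary perceptual hashes and machine-learning classifiers rather than the plain SHA-256 digest shown here.

```python
import hashlib

# Hypothetical set of fingerprints ("hashes") of known abusive images.
# In a real system this would be a large, curated database, and the
# entries would be perceptual hashes rather than exact digests.
KNOWN_CSAM_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder
}

def fingerprint(image_bytes: bytes) -> str:
    """Reduce an image to a fixed-length fingerprint (SHA-256 in this sketch)."""
    return hashlib.sha256(image_bytes).hexdigest()

def scan_upload(image_bytes: bytes) -> bool:
    """Return True if the upload matches a known fingerprint."""
    return fingerprint(image_bytes) in KNOWN_CSAM_HASHES

def handle_upload(image_bytes: bytes, review_queue: list) -> None:
    # Flagged content is not judged by the machine alone; it is queued
    # for human moderators, who decide where to report it.
    if scan_upload(image_bytes):
        review_queue.append(image_bytes)
```

An exact-digest check like this can only recognize files that are already in the database. Brand-new photos, like the ones the father took for his doctor, would instead be caught by the additional machine-learning classification Google reportedly applies to previously unseen images, which is the step where a false positive of this kind can arise.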
A spokesperson for Google told the media that it remains committed to detecting and addressing CSAM, even if that means probing people’s private emails and content using AI robots.
“This is precisely the nightmare that we are all concerned about,” says Jon Callas, a director of technology projects at the Electronic Frontier Foundation (EFF), a non-profit digital rights group, adding that Google’s AI robots are “intrusive.”
“They’re going to scan my family album, and then I’m going to get into trouble.”
Apple, as you may recall, attempted to launch a similar AI detection system for its iCloud product, only to receive major backlash concerning the privacy implications. Apple has now shelved that plan, though it did implement another one that allows parents to detect and censor nudity in their children’s Messages app.