The consequences of recent technological advances are a great deal more frightening than GDPR

I am sure I am not alone in being besieged by emails and social media messages about GDPR. Panic has set into the nation. The sheer volume of unsolicited and pressuring communications offering GDPR services seems more than slightly ironic.

Like everyone, I am fed up with the hype, the veiled threats, and the additional pressure on all of us mugs who are running businesses or are self-employed. Luke Johnson recently referred to running a business as a vale of tears in his Times column, citing bad debts, cyber security issues, client complaints, industrial accidents and so forth. GDPR is just another test of our resilience.

What strikes me as ironic is that these restrictions on how we operate, imposed under the guise of protecting personal data, come at a time when technology companies and governments alike are stepping up their collection of data on our companies and on us as individuals, over and above the Cambridge Analytica/Facebook scandal. It seems both the most outrageous hypocrisy and a clear signal that we have indeed lost the battle to Big Brother.

While social media sites encourage authenticity on the one hand, on the other they are using technology to identify our natural patterns in order to discourage our use of auto-posting. This gives a frightening picture of how well they know us. Robin Cassells, of Orwell Computers, describes it as being “beyond 1984, with our fingerprinting, audio listening, face scanning, continuous video surveillance and every word being listened to”. Newspeak, he believes, is indeed here.

Our reliance on AI is a path we are already a long way down, abandoning our powers of reasoning in favour of the judgements of machines. One in ten people now say they would trust the results of an app in an area that used to be most personal: our choice of partner. We only have to ask Alexa, Apple’s HomePod or Google Assistant and they will happily tell us what to do and what to think.

Then there are the plans of Bryan Johnson and his company Kernel, which wants to treat brain disorders and mental health issues by creating a brain interface, or, in other words, putting chips in our brains. The unstoppable Elon Musk is also working in this field with his company, Neuralink. Musk is, at least, voluble on the dangers of AI.

Johnson describes our brain as the most consequential variable in the world and our biggest blind spot, saying that the root of all problems is there, so we must change our minds, literally. The danger lies in our becoming unable to control how our minds are changed. On the surface, anything that aids the treatment of mental illness seems like a good idea, but when I hear Johnson talk of increasing our rates of learning and imagination, or altering our ability to love, it takes me into a sinister and frightening world.

So while we are battling with red tape over what password system we can use to protect the email addresses of the customers we have dared to collect, our governments and the technology companies are gathering more data on us than we know ourselves.

Google’s Sergey Brin was one voice among many recently warning that “we are on a path that we must tread with deep responsibility, care, and humility.” He talks of the threat to employment, which we are all aware of, and fears that technology will be used to manipulate people with AI-generated fake news. We know that researchers in both China and the US have developed the ability to send hidden instructions to Siri or Alexa, or to hide commands for your smartphone within the white noise of YouTube videos.

AI machines that can use their own brains without human influence are not quite there yet, but with sinister laughter echoing from our Alexas, their autonomy doesn’t feel very far away.
