A week ago, AI expert Geoffrey Hinton announced he was leaving Google. Following his departure, the scientist issued a warning about AI and its capacity to learn. The Music Void ponders the decision.
It’s been ten years since British-Canadian cognitive psychologist and computer scientist Geoffrey Hinton began working for Google Brain, a division within Google AI researching artificial intelligence. In 2018, he won the ACM Turing Award, computer science’s equivalent of the Nobel Prize, for his work on deep learning. A great-great-grandson of mathematicians George Boole and Mary Everest Boole, Hinton has been fascinated by AI all his life. Last week, however, he announced that he is stepping down from Google.
In a recent episode of The Guardian’s podcast, journalist Alex Hern, who interviewed Hinton immediately after the announcement, commented on the decision: “He quit Google last week because he felt he needed the freedom to speak openly about his recent realisation that AI risks being the destruction of everything”.
In his fiery remarks, Hinton reaches for striking phrases such as “the end of civilization”. Talking to The Guardian, he said: “I firmly believe that either we are going to survive this or we [are] not, we might [minimise the risk] a little bit by thinking very hard before it’s too late but it might well be that it’s inevitable”.
While generative AI tools such as ChatGPT already threaten the careers of skilled workers, more advanced AI points to an even more dominant role for machines. Elaborating on his decision to step down from Google, Hinton told Computer World: “I used to think that the computer models we were developing weren’t as good as the brain. […] Over the last few months, I’ve changed my mind completely, and I think probably the computer models are working in a completely different way than the brain”.
The capacity for faster learning, combined with the machines’ ability to communicate with one another, means they effectively form a single entity. “They can look at different data, but the models are exactly the same. What that means is, they can be looking at 10,000 sub-copies of data and whenever one of them learns something, all the others know it. One of them figures out how to change the weights so it can deal with this data, and so they all communicate with each other and they all agree to change the weights by the average of what all of them want. Now the 10,000 things are communicating very effectively with each other, so that they can see 10,000 times as much data as one agent could. And people can’t do that.”
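The mechanism Hinton describes can be illustrated with a minimal sketch. This is not Google’s or Hinton’s code; the toy “training step” below is a hypothetical stand-in for real gradient computation. The point it shows is the averaging: identical model copies each compute an update on their own slice of data, then all copies apply the average of everyone’s updates, so whatever one copy learns reaches all of them.

```python
def train_step(weights, shard):
    # Toy "gradient": nudge each weight toward the shard's mean value.
    # A hypothetical stand-in for backpropagation on that shard's data.
    target = sum(shard) / len(shard)
    return [0.1 * (target - w) for w in weights]

def shared_learning(weights, shards):
    # Each identical copy computes its own update on its own data...
    updates = [train_step(weights, shard) for shard in shards]
    # ...then every copy applies the AVERAGE of all updates, so
    # knowledge gained on any one shard propagates to all copies.
    n = len(updates)
    avg = [sum(u[i] for u in updates) / n for i in range(len(weights))]
    return [w + d for w, d in zip(weights, avg)]

weights = [0.0, 0.0]
shards = [[1.0, 1.0], [3.0, 3.0]]   # two copies, looking at different data
weights = shared_learning(weights, shards)
```

Because the copies share one set of weights, the combined system in effect sees as much data as all the copies put together, which is the scaling advantage Hinton contrasts with human learners.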
Although it is valuable that someone like Geoffrey Hinton has decided to share his concerns, the scenario of AI domination still strikes many as the plot of a sci-fi story. While tech companies continue to push development further, Internet users simply follow the beaten track. The words “dystopian” and “apocalypse” have been used often enough to describe recent events such as the pandemic, the war in Ukraine, and the cost of living crisis. Do people have the energy left to confront the potentially threatening trajectory of AI progress? Handily, for those who would rather not, there are plenty of distractions. Even putting one’s personal involvement with advanced technology on hold would be a small contribution. Sometimes it’s good to sacrifice tech-savviness for the sake of a better world.