Minnesota-Duluth professor says we may live in a neural network

“I did not even have time to think about what could be philosophical implications of the results,” – Vitaly Vanchurin, University of Minnesota Duluth

Buckle up.

Vitaly Vanchurin, a physics professor at the University of Minnesota Duluth, uploaded a paper to arXiv over the summer that argues that our known world – the entire universe – could actually be, at its “most fundamental level,” a gigantic neural network.

For the uninitiated, a neural network is a web of interconnected neurons – in our brains, it’s what lets us think, process new information, and learn from experience. Researchers in artificial intelligence and machine learning have been creating their own artificial neural networks inspired by the real thing.

It’s the stuff that’s allowing us to develop autonomous machines that can learn new stuff, much the way we do. The universe, Vanchurin hypothesizes, may actually be one of these systems writ very, very large.

Like, turtles-all-the-way-down large.
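
For a sense of what one of these systems actually does, here’s a minimal sketch – ours, not anything from Vanchurin’s paper – of an artificial neural network learning a toy problem (XOR) by repeatedly nudging its connection weights to make its guesses less wrong:

```python
import numpy as np

# A tiny artificial neural network learning XOR by trial and error.
# Illustrative toy only; nothing here comes from Vanchurin's paper.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets

W1 = rng.normal(size=(2, 4))  # input -> hidden connection weights
W2 = rng.normal(size=(4, 1))  # hidden -> output connection weights

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(5000):
    h = sigmoid(X @ W1)    # hidden-layer activations
    out = sigmoid(h @ W2)  # the network's current guesses
    err = out - y          # how wrong those guesses are
    # Nudge every weight downhill on the error (gradient descent).
    delta2 = err * out * (1 - out)
    delta1 = (delta2 @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ delta2)
    W1 -= 0.5 * (X.T @ delta1)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

That gradual making and unmaking of connections is the trial and error the hypothesis scales up to, well, everything.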

“This is a very bold claim,” the paper says. In fact, it “could be considered as a proposal for the theory of everything, and as such it should be easy to prove wrong.”

Everything including, say, natural selection – the process by which creatures better suited to their environments tend to survive and pass their traits along. Maybe it’s all part of the trial and error of some massive neural net, making and paring connections, trying to wind its way to a solution.

“Indeed, if the entire universe is a neural network, then something like natural selection might be happening on all scales from cosmological and biological all the way to subatomic scales,” the paper elaborates. “The main idea is that some local structures (or architectures) of neural networks are more stable against external perturbations… than other local structures.

“As a result, the more stable structures are more likely to survive and the less stable structures are more likely to be exterminated… As the learning progresses these chains can chop off loops, form junctions and according to natural selection the more stable structures would survive.”
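
To get a feel for what that kind of selection looks like, here’s a toy caricature in Python – ours, with made-up “stability” numbers and a made-up replication rule, nothing from the paper’s actual formalism – in which a population of structures is repeatedly perturbed and only the stable ones stick around:

```python
import random

random.seed(1)

# Toy caricature of selection among structures: each structure has a
# "stability" – its chance of surviving a random perturbation.
# All numbers and rules here are invented for illustration.
population = [{"id": i, "stability": random.random()} for i in range(1000)]

for generation in range(20):
    # External perturbations "exterminate" the less stable structures.
    population = [s for s in population if random.random() < s["stability"]]
    # Survivors replicate to keep the population around 1,000.
    population += [dict(s) for s in random.choices(population, k=1000 - len(population))]

mean = sum(s["stability"] for s in population) / len(population)
print(f"mean stability after selection: {mean:.2f}")  # creeps toward 1.0
```

Run it and the average stability of what survives climbs generation after generation – which is the quoted passage’s point, writ cosmically.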

It’s gonna take people way smarter than us to parse all the way through the argument and its implications (although you can give it a shot by checking out the preprint, arXiv:2008.01540, yourself).

Smarter, even, than Vanchurin, by his own admission to Futurism. In his initial email to the publication, he reportedly said he might not understand it all himself – and there he was only referring to the complexity of neural networks, not the wildass implications of what it all means.

“I did not even have time to think about what could be philosophical implications of the results,” he said.

The interviewer then asked – because they “needed to” – whether this meant we were all living in a simulation after all, like (and we paraphrase) some Matrix-ass computer-world shit.

“No, we live in a neural network,” Vanchurin answered. “But we might never know the difference.”

Cool. Cool cool cool. We’re not going to stay up at night thinking about that at all.

Experts in both physics and machine learning expressed skepticism about the idea to the Daily Mail – but declined to comment on the record. (The paper has not yet been peer-reviewed.) And Vanchurin readily admits the idea is “crazy.”

But it is certainly something to wonder about.