How seriously should governments take the threat of existential risk from AI, given the lack of consensus among researchers? On the one hand, existential risks (x-risks) are necessarily somewhat speculative: by the time there is concrete evidence, it may be too late. On the other hand, governments must prioritize — after all, they don’t worry too much about x-risk from alien invasions.

Remember the internet, blockchain, etc.? Well, AI will go the same way. It will be an ‘existential threat to society’ right up until a few relevant players figure out how to OWN the whole thing: the subsidies, the tax exemptions, the technology, the regulation, the market.

Then (and not a day before) it will become an officially sanctioned part of the ‘rules-based’, free-market illusion.

And if it takes 10 years for that to happen, we will get 10 years of the Terminator threatening to come for us all, and all sorts of other sci-fi absurdities, in the mainstream media.

That’s it: I’ve just summarised millions of AI-related news stories for you and saved you thousands of hours wondering what’s really going on.