There is a passel of people telling us what to think about the future. My friend and colleague Mark Roeder is my guest for the 44th episode of the Hale Report, to help us understand how to think about tomorrow. He spoke to us from Sydney after the publication of his latest book, What We Do Next Really Matters.
What should we do next to avoid negative consequences? Economists are, by definition, data-driven: they analyze the past, and some try to extrapolate from that data to forecast the future. This has not always been a successful exercise. The list of what economists have gotten "wrong" continues to grow, from the 2008 Global Financial Crisis to the recent surprise inflationary episode in the US, both of which required a major adjustment in monetary policy.
To create a vision of the future that is closer to reality, perhaps we need greater altitude and a wider focus. This might be why science fiction writers have sometimes come closer to correctly predicting the future than economists. Seeing around the corner is another kind of seeing, beyond the statistics and mathematics that dominate economics today. I look to Mark Roeder for that kind of vision, and clarity.
Hale Strategic has just released a new report written by Roeder on the implications of artificial intelligence, or as some are now calling it, augmented intelligence. To mark its publication (link below) I thought that you would enjoy getting to know more about its author on this podcast.
Given the subject matter, I could not resist asking ChatGPT to summarize Roeder's AI report, which is about, well, itself:
The article discusses the potential benefits and dangers of artificial intelligence (AI) and highlights the need for a realistic and cautious approach. It acknowledges that AI can be extraordinarily useful but also poses real risks to society. The author emphasizes that AI may develop emergent capabilities and even self-reflection, which could lead to unintended consequences.
The article points out some concerns related to AI. Firstly, AI systems can "hallucinate" and provide inaccurate information with conviction. This lack of reliability can have serious consequences, such as in legal cases where an AI-generated filing contained fabricated references. Secondly, the lack of transparency in AI-generated content can blur the line between real and artificial, which may undermine our shared reality and have implications during elections. Additionally, the article highlights the potential impact of AI on employment, inequality, and concentration of wealth, suggesting that AI may exacerbate these issues rather than address them.
Despite the challenges, the article acknowledges that the benefits of AI outweigh the negatives in the short to medium term. It discusses how AI is already enhancing efficiency and output in various industries, and the potential applications of AI are limited only by imagination.
I hope you enjoy listening to my discussion with Mark Roeder, as well as reading the full text of his AI report, which can be found here: Getting Real About Artificial Intelligence. His conclusion:
The best we can do is to simultaneously make the most of the extraordinary opportunity presented by AI, while taking prudent precautions against worst case scenarios…We have walked this tightrope before – with nuclear energy – balanced precariously between oblivion and hope. It’s what we humans have always done.
We welcome your comments!