OpenAI’s GPT-4 Technical Report is one of the most exciting documents I have ever read. However, the media is largely missing the story it should be covering.
Instead, outlets keep focusing on the same stuff: the $10 billion Microsoft investment, how GPT-4 can write poems, and whether or not the demo contained a mistake. So here are nine insights from the report that will affect us all in the coming months and years.
9 Insights from OpenAI’s GPT-4 Technical Report
- The research center that tested GPT-4’s abilities, the Alignment Research Center, did not have access to the final version of the model that OpenAI deployed. The final version includes capability improvements relevant to some of the factors that limited the earlier model’s power-seeking abilities, such as longer context length. In other words, that headline-grabbing experiment was not run on GPT-4’s final form.
- They were testing whether GPT-4 would try to avoid being shut down in the wild. Many people have criticized this test, while others have praised it as necessary. But what would have happened if the model had failed that test, and what happens if a future model does manage to avoid being shut down in the wild?
- GPT-4 proved ineffective at replicating itself and avoiding shutdown, but the researchers must have considered it at least possible; otherwise, they would not have bothered running the test, which is concerning in itself. OpenAI will soon publish additional thoughts on the social and economic implications, including the need for effective regulation.
- Sam Altman, OpenAI’s CEO, said that the industry needs more regulation on AI. It is rare for a company to ask to be regulated.
- One concern of particular importance to OpenAI is the risk of racing dynamics leading to declining safety standards, the diffusion of harmful norms, and accelerated AI timelines. However, this seems at least mildly at odds with the signals coming from Microsoft’s leadership in a leaked conversation.
- According to that leak, pressure from CTO Kevin Scott and CEO Satya Nadella is very high to take the most recent OpenAI models, and those that come after them, and move them into customers’ hands quickly. Some will love this news and others will be concerned by it, but either way it contradicts the stated desire to avoid AI accelerationism.
- OpenAI made a bold pledge: if another project came close to reaching AGI before them, OpenAI would commit to stop competing with it and start assisting it instead. The trigger for this clause is when another project has a better-than-even chance of success within the next two years.
- OpenAI employed superforecasters to help predict what would happen when GPT-4 was deployed. These are people with a proven track record of forecasting the future well, reportedly around 30 percent better than intelligence analysts. OpenAI wanted to know what they expected deployment to look like and to hear their recommendations for avoiding risks.
- Interestingly, the forecasters identified several measures that would reduce acceleration, including delaying the deployment of GPT-4 by a further six months. That would have taken us almost to autumn.
Conclusion
The GPT-4 Technical Report is a fascinating read that offers real insight into where AI development is heading. Beyond the headline capabilities, it highlights the need for effective regulation and the risk that racing dynamics will erode safety standards. We can learn a lot from these insights and use them to shape the future of AI development.