OpenAI’s Leadership Crisis: A Catalyst for a Smarter AI Strategy
So… what happened this weekend?
On Friday, OpenAI announced that Sam Altman, the darling of Silicon Valley AI, had been unceremoniously fired as CEO. All of this happened with no warning to senior leadership, investors (including 49% owner Microsoft), or customers. Immediately after the firing, customers reported receiving emails informing them that their payment terms for using OpenAI would change. At the time of writing (Sunday morning), it’s not clear what happened or whether he will be asked back.
This isn’t an essay about what happened this weekend (I personally don’t really care); rather, it’s an opportunity to think about what we can learn from this mess.
Why does what happened this weekend matter for enterprises?
This weekend was an abrupt wake-up call for everyone building with AI. It was a reminder of two things:
- The field is still incredibly early
- Current leaders in these companies don’t necessarily have aligned goals with businesses building with AI
Most of the AI foundation model companies we know (e.g., OpenAI, Anthropic, Cohere, Stability) were, until a year ago (if they even existed), nothing more than well-funded research labs. This is not meant to diminish the phenomenal work they have done in that time (and it really is phenomenal), but rather to give context to the maturity of the field. These groups were not created to help enterprises build applications with AI; most of them have the stated aim of creating AGI (Artificial General Intelligence), or of saving humanity from that very same AGI.
In the past year, many AI organisations, including OpenAI, have shifted their focus towards serving enterprises, a venture that is very different from their original aim of creating or safeguarding against AGI. This pivot requires a substantial internal cultural shift, one that Sam Altman was spearheading at OpenAI. However, it's important to recognize that the primary motivation of the leadership of many of these organisations is still deeply rooted in AGI concerns, not in building fantastic products for businesses.
These are not the same people that sold you cloud or databases
The companies selling you AI are NOT the same people who sold you database services or cloud. OpenAI’s affiliation with Microsoft did not make it behave like a grown-up this weekend. These AI companies are largely led by academics and “philosophers”, not by seasoned business people, and they act like it. These companies will ‘grow up’ - they have to.
But with growing up come growing pains. OpenAI has already shown multiple instances of these growing pains, including stealth changes to model quality, a ChatGPT data breach, and numerous copyright disputes. The growing pains are nowhere close to being over - Altman’s ousting is the biggest one yet.
So what does that mean for how I should build my AI strategy?
In the meantime, no business should position itself as potential collateral damage in these growing pains - AI is generating too much business value for that. So what does that mean for how businesses should build out their AI strategy while ensuring appropriate best-practice guard-rails?
Diversify your model sources
The AI game is too early and too important to wed ourselves to a single vendor. As we saw this weekend (and with previous OpenAI outages), these companies are young and get things wrong. Just as we would never build traditional software with a single point of failure (especially one we don’t control), we shouldn’t do so with our vital AI systems. AutogenAI, for instance, built their product this way: when OpenAI suffered a 90-minute outage, they were able to switch seamlessly to another model API provider.
The AIOps/MLOps/LLMOps leaders in this space have already built their AI platforms with portability and interoperability in mind, so changing models is as simple as changing the API calls. There are plenty of examples of this being done well: AWS with Bedrock, IBM with watsonx, Dataiku with the LLM Mesh, and we at TitanML with the Takeoff Inference Server.
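To make the idea concrete, here is a minimal sketch of what provider failover can look like behind a common interface. The provider classes below are hypothetical stand-ins (no real SDKs or endpoints), assuming each vendor is wrapped to expose the same `complete(prompt)` call:

```python
from dataclasses import dataclass


class ProviderError(Exception):
    """Raised when a model provider is unavailable."""


@dataclass
class Provider:
    """Hypothetical stand-in for a wrapped vendor API client."""
    name: str
    healthy: bool = True

    def complete(self, prompt: str) -> str:
        if not self.healthy:
            raise ProviderError(f"{self.name} is down")
        return f"[{self.name}] response to: {prompt}"


class FailoverClient:
    """Tries each provider in order until one succeeds."""

    def __init__(self, providers):
        self.providers = providers

    def complete(self, prompt: str) -> str:
        last_err = None
        for provider in self.providers:
            try:
                return provider.complete(prompt)
            except ProviderError as err:
                last_err = err  # this provider failed; try the next one
        raise last_err


# Simulate the primary vendor having an outage: the call transparently
# falls through to the backup.
primary = Provider("primary-api", healthy=False)
backup = Provider("backup-api")
client = FailoverClient([primary, backup])
print(client.complete("Summarise this contract."))
# → [backup-api] response to: Summarise this contract.
```

The important design point is that application code only ever talks to `FailoverClient`, so swapping or reordering vendors is a configuration change, not a rewrite.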
Build with trusted partners while avoiding vendor lock-in
If you are going to build with API-based models instead of open-source / self-hosted models, then access them through a trusted third party rather than calling the provider’s API directly. When OpenAI went down, the equivalent Azure OpenAI models stayed up because Azure owned that infrastructure. Additionally, when it comes to privacy and data security, using these models through third parties provides guarantees that aren’t given (or trusted) when dealing with the AI creator itself.
Own your applications
AI is too important to the future of our businesses for us to hand control over to third parties. Until recently, self-hosting LLMs was too operationally expensive and difficult to be done reasonably at scale. This has changed. Open-source models are better than ever (and improving rapidly), making it possible to build high-quality applications on them. The infrastructure for hosting these models in a VPC or on-prem is now mature, so deploying them to production at scale is easier than ever. It used to be that self-hosting took months per project and required incredibly powerful GPUs - that is no longer the case with inference infrastructure solutions like TitanML’s Takeoff Inference Server.
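One reason self-hosting pairs well with portability is that many inference servers expose an OpenAI-style chat endpoint, so the same application code can target a hosted provider or an on-prem deployment by swapping the base URL. The sketch below assumes such an endpoint; the URL, model name, and path are illustrative, not a real deployment:

```python
import json
import urllib.request


def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Builds an OpenAI-style chat-completion request against any base URL.

    The request is only constructed here, not sent - sending it would
    require a live server at base_url.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# The same helper targets a hypothetical self-hosted server in your VPC...
req = build_chat_request("http://localhost:8000", "llama-2-7b-chat", "Hello")
# ...or a hosted provider - only the URL and model name change, so the
# application layer stays identical across deployment choices.
```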
Open-source and self-hosted models are a good horse to bet on, especially when built in a way that is interoperable with the latest model advancements. Self-hosting is improving rapidly, and it is the best way to insulate yourself from the madness going on in Silicon Valley.
This weekend's events are a stark reminder of the fragility and unpredictability inherent in the rapidly evolving AI industry. Enterprises looking to build a robust AI strategy must be aware of this and build accordingly. If enterprises want to insulate themselves from the industry’s growing pains while reaping the benefits of AI, they must prioritise portability and interoperability, and build up the capability to fully own their AI applications.
Written by: Meryem Arik
Please reach out to us at TitanML if you would like to discuss your AI infrastructure stack and strategy.