AI Governance and a Deep Look into the OpenAI Circus

OpenAI’s recent corporate upheaval has sparked an important public conversation on AI governance and marked a pivotal point in AI’s evolution.

What happens when you add the following ingredients together: diverging visions, effective altruism, artificial intelligence, and Microsoft? I unpack what happened at OpenAI and isolate important lessons for organisations and leaders. The OpenAI and Sam Altman story could be a butterfly effect we look back on in years to come.

Background to the Feuding at OpenAI

OpenAI started out as a research-based not-for-profit, the brainchild of Sam Altman, Ilya Sutskever, Elon Musk, and others. The aim of the organisation was to compete with commercial artificial intelligence (AI) projects by creating AI products that were safe and beneficial to humanity.

They feared that AI programs created through rapid corporate commercialisation would focus solely on economic benefits: fast-paced development, without early regard for ethical controls and safeguards, would advance AI as quickly as possible with little thought for the harm it could cause.

Trouble began when Elon Musk, a significant and influential investor, left OpenAI, reducing the cashflow available for research activities. A for-profit subsidiary was created to raise the large amount of funding required to advance the science and the projects under development, such as ChatGPT. This enabled large corporations like Microsoft to partner with OpenAI, beginning with an initial US$1 billion investment¹. However, the not-for-profit board of directors retained governance and control over this subsidiary, with a directive to work for the benefit of humanity rather than its investors. The shift in fundraising started the rift between the board and OpenAI’s key founder and CEO, Sam Altman.

The race to Artificial General Intelligence

Sam Altman’s goal is for OpenAI to be the first to develop Artificial General Intelligence (AGI)²: an autonomous intelligent agent that could theoretically learn to complete any intellectual task a human being could perform. The race to AGI has often been called the new arms race because of the impact such a technology would have on humanity as we know it. OpenAI’s mission is to develop AGI precisely because it saw the technology as an existential risk and believed there was a need to ensure ‘alignment’.

‘Aligned AGI’ is the concept of developing AI in alignment with human values and intent, making it safe for human use and able to solve many of the human and societal issues we face. This made OpenAI extremely devoted to its mission, with members seeing Sam Altman as a ‘technology messiah’ and drawing comparisons to Steve Jobs for his influence on advancing technology.

However, the reality of AGI development requires significant monetary investment and governance: maintaining a company of 700+ employees, funding the computing resources needed to enable advances, and ensuring development stays focused on socio-ecological challenges, among much else. This burden grew as expectations escalated following the public release of OpenAI’s highly popular ChatGPT generative AI service, which has dominated conversations around AI.

Growing tension at the top of OpenAI

The board that governed OpenAI was highly passionate about its mission and had close affiliations with the effective altruism community, absorbing its philosophy of ensuring technological advances effectively ‘do good’, with a focus on bettering humanity and ensuring new technologies do not pose an existential risk. The community’s impact on the AI field is well known, and its philosophy is considered the opposite of ‘effective accelerationism’.

Tensions rose between the board and Altman over the pace of fundraising and the growing influence of investors, as OpenAI’s tools and research were adopted by a growing number of partner corporations. Altman himself was annoyed that a paper written by a board member and ethics researcher, Helen Toner, was critical of OpenAI while praising a rival AI company, Anthropic, itself an offshoot founded by disgruntled former OpenAI employees³.

The board was also concerned that the rapid advancement of OpenAI’s research posed an existential risk, with the organisation close to completing its development of Q*, a model reported to have capabilities in mathematical planning and reasoning. This development, a significant step towards AGI, may have been one of the reasons for the board’s intervention. The growing divide between the board and CEO led to the events of mid-November, when Altman was fired by the board under the governance structure OpenAI was founded on, creating a media frenzy.


A case study on strategic governance of technology

This was an interesting case study in governance and strategy failing to integrate and align. Leading up to Altman’s dismissal, departed board members were not replaced, leaving decisions to a majority of four non-executive members, two of whom were active in the effective altruism community.

A breakdown in “communication channels between the board and Sam Altman” was the reason given in the board’s public statement for his dismissal. It indicates a breakdown in relationships and a lack of common vision within the leadership team, especially as OpenAI grew and accelerated its progress to the tune of its investors.

While the board had the power under OpenAI’s governance structure to dismiss Altman, it did so with no clear mandate, infuriating OpenAI’s 700+ employees. It also unsettled investors, and Microsoft swooped in to offer Altman a position running a new advanced AI program. The board appointed and replaced multiple CEOs within a week, indicative of the disorder it had created.

While the board felt it was meant to govern the company in the ‘interest of humanity’, it failed to realise that the company itself had changed. OpenAI was headed in a far more commercial direction than the board understood.


AI governance and the future of technology

Ultimately, the board found itself on the losing side: almost 95% of OpenAI employees were ready to leave and join Altman at Microsoft. Altman was reinstated as CEO, creating strong alignment between him and Microsoft. Interestingly, this also signified a strategic win for Microsoft, which looks to dominate the technology and AI landscape in the future.

What became a very public journey is a lesson in the importance of governance. OpenAI is now further from the values it was built on, and the board’s intervention was clearly too late. Even with strong governance powers, the board did not use them effectively, leading to confusion and chaos that Microsoft and the reinstated CEO ultimately used to create a new order. The new board that replaced the old will likely work in harmony with Altman and Microsoft to achieve AGI. This is unlikely to satisfy OpenAI’s original intent and its mission of ‘aligned AGI’, as the company now looks to deepen its corporate ties and focus on economic growth to accelerate its projects.

It is important that organisations learn from the circus that surrounded OpenAI. Governance needs to adapt to changing visions, principles, objectives, and stakeholders. It enables more stability and control over the use of technology, better ensuring its safety and that it meets organisational goals and delivers the desired benefits. As organisations change over time, governance as a practice must do the same.

Data Agility is a leader in data and technology governance and can advise organisations exploring the use of AI and the data that enables it. Through our enterprise information management framework, Data Agility has helped ensure organisations’ strategic aims are appropriately governed and managed, enabling them to make better use of their data and technology as more organisations look to implement AI. Feel free to contact us to speak to one of our governance specialists about your organisation’s use of data, AI, and technology.
