The study of AI (artificial intelligence) and its development is as complex as it is promising.
That is precisely why those skilled in this field enjoy what many consider a progressive career: a job that grows in difficulty and responsibility.
And there are many such roles today.
Computer science and information technology employment was projected to grow 11% from 2019 to 2029, adding about 531,200 new jobs with higher-than-average salaries, according to the US Bureau of Labor Statistics.
The World Economic Forum ranked “AI and Machine Learning Specialist” #2 on its list of Top 20 job roles in increasing and decreasing demand across industries.
But last weekend, that did not seem to be the case for Sam Altman, CEO of OpenAI.
OpenAI is the company that “kicked off an AI arms race” when its AI chatbot, ChatGPT, debuted last November and was dubbed “the best artificial intelligence chatbot ever released to the general public.”
Altman quickly became the face of GenAI. A few months later, Microsoft invested US$1 billion in OpenAI to build “artificial general intelligence” — i.e. a machine that could do anything the human brain could do.
Altman was compared to Bill Gates, the co-founder of software giant Microsoft.
Then, last weekend, a stunning fall from grace.
On Nov. 17, 2023, Altman was dismissed abruptly following what OpenAI said was a “deliberative review process by the board, which concluded that he was not consistently candid in his communications with them, hindering its ability to exercise its responsibilities.”
At the time, OpenAI’s board was composed of six members — three co-founders and three non-staff members:
- Sam Altman: CEO and co-founder of OpenAI
- Greg Brockman: President and co-founder of OpenAI
- Ilya Sutskever: Chief Scientist and co-founder of OpenAI
- Adam D’Angelo: CEO of Quora
- Tasha McCauley: Technology entrepreneur
- Helen Toner: Director at Georgetown Center for Security and Emerging Technology
Other sources such as AFP reported that the turmoil exposed the differences between Altman — who has become the face of generative AI’s rapid commercialisation since ChatGPT’s arrival a year ago — and OpenAI’s board members, who expressed deep reservations about the safety risks posed by increasingly advanced AI.
These are signs of cracks within Silicon Valley.
More importantly, it raises the question: why is there such drama surrounding the study of AI and its development?
The drama behind the study of AI and its development
While generative AI has disrupted many lives and industries across the globe, some world leaders have grown alarmed by its seemingly limitless power.
Even before ChatGPT, the US government had warned of the danger of AI wiping out jobs.
“The issue is not that automation will render the vast majority of the population unemployable,” said Jason Furman, Obama’s chief economist and chairman of the US Council of Economic Advisers.
“Instead, jobs created by AI could come too slowly, pay too little, and exclude the least skilled who need them most. Workers who lack the skills or opportunity to quickly find new, decent jobs enabled by automation could find themselves effectively excluded from the job market. That leaves us with the worry that the only reason we will still have our jobs is because we are willing to do them for lower wages.”
The warnings continued in the years that followed.
In May this year, scientists and tech industry leaders, including high-level executives at Microsoft and Google, issued a new warning about the perils that AI poses to humankind.
Worries about AI systems outsmarting humans have intensified with the rise of a new generation of capable AI chatbots like Bard, ChatGPT, AppleGPT, and many others.
This prompted countries across the globe to regulate these developing technologies, with the European Union blazing the trail with its proposed AI Act.
Higher education institutions have responded in their own ways, too.
The Institute for Ethics in AI at the University of Oxford brings together world-leading philosophers and other experts in the humanities with technical developers and users of AI in academia, business and government.
Researchers here focus on investigating the ethical impacts from all perspectives, covering six themes: AI and Democracy, AI and Governance, AI and Human Rights, AI and Human Well-Being, AI and the Environment, and AI and Society.
The University of Melbourne offers a micro-certificate, “Introduction to the Ethics of Artificial Intelligence.”
Informed by leading research from the Centre for Artificial Intelligence and Digital Ethics (CAIDE), this certificate, among many others, explores how to apply ethical frameworks and theories to AI in your workplace.
But where does Silicon Valley stand in the study of AI and its progress?
OpenAI’s firing of Sam Altman
The five days of chaos surrounding Altman’s position at OpenAI exposed the controversies within Silicon Valley over the study of AI and its development.
Here’s what went down:
- Altman was fired on Nov. 17, 2023. Four board members, including OpenAI’s chief scientist, voted against him, leading to his dismissal. Mira Murati was appointed interim CEO.
- Greg Brockman, the chairman of the board and another co-founder, resigned in response to Altman’s termination.
- Microsoft, one of the main investors in OpenAI, pressured the board of directors to rehire Altman as OpenAI CEO.
- On Monday, Nov. 20, 2023, Microsoft CEO Satya Nadella announced they hired Altman to lead a new artificial intelligence department alongside Brockman and several other recently departed OpenAI employees.
- The same day, OpenAI hired former Twitch CEO Emmett Shear as its interim CEO, replacing Murati just two days after appointing her to the role. Shear confirmed the move in a post on X early the following morning.
- On Tuesday, OpenAI announced in a post on X that Altman had reached an agreement in principle (a stepping stone to a contract) to return as CEO of OpenAI, on the condition that OpenAI reconfigure its board of directors.
While OpenAI has been tight-lipped about the reason for Altman’s departure, one report suggests that Sutskever — pivotal in developing OpenAI’s ChatGPT and keen for highly advanced systems to behave within defined limits — initiated the recent coup.
This raises another question: how much does the education of these influential figures affect their views on the study of AI and its development?
The study of AI: Silicon Valley’s five most iconic figures
Sam Altman, a tech visionary and entrepreneur, is a name synonymous with innovation.
Altman dropped out of Stanford in 2005 to create Loopt, a location-sharing app, eventually selling it for US$43.4 million to Green Dot in 2012.
In 2011, he joined the influential startup incubator Y Combinator before heading to OpenAI in 2019.
As the CEO of OpenAI, Altman catapulted ChatGPT to global fame and has become Silicon Valley’s sought-after voice on the promise and potential dangers of AI.
“I can’t imagine that this would have happened to me,” Altman told Intelligencer about his new role as leader of the AI movement.
Altman believes AI technology will reshape society as we know it. While he thinks it comes with real dangers, it can also be “the greatest technology humanity has yet developed” to enhance our lives significantly.
Greg Brockman is the President and co-founder of OpenAI.
Greg attended Harvard University and Massachusetts Institute of Technology (MIT), dropping out of both.
At Harvard, he collaborated with the Harvard Computer Society to administer and build computer systems. At MIT, he worked on projects like XVM and Linerva.
He later left to help found Stripe, an Irish-American multinational financial services and software-as-a-service (SaaS) company dual-headquartered in South San Francisco, California, US, and Dublin, Ireland.
In May 2015, Brockman left Stripe to co-found OpenAI with Altman. With a genuine belief in AI’s potential for positive impact, Brockman advocates for ethical and responsible development.
“We must ensure AI benefits all of humanity,” Brockman asserts, underscoring OpenAI’s commitment to advancing the field while prioritising safety and inclusivity.
His loyalty to Altman runs deep: hours after the board pushed out Altman, Brockman announced he was departing as president. In a post on the social media site X, he wrote: “Based on today’s news, I quit.”
Helen Toner, a board member and director of strategy at Georgetown’s Center for Security and Emerging Technology (CSET), holds an MA in Security Studies from Georgetown, a BSc in Chemical Engineering, and a Diploma in Languages from the University of Melbourne.
Before joining CSET, Toner lived in Beijing, studying the Chinese AI ecosystem as a research affiliate of Oxford University’s Centre for the Governance of AI.
When it comes to AI, she is clear-eyed about the risks of generative AI.
Toner has cautioned against excessive reliance on AI chatbots and advocated for US government action to balance innovation with protecting citizens from AI risks.
This stance led her to clash with Altman over an academic paper she co-authored comparing the safety approaches of OpenAI and Anthropic.
Satya Nadella, the CEO of Microsoft, has a degree in electrical engineering from the Manipal Institute of Technology, an MS in computer science from the University of Wisconsin–Milwaukee and an MBA from the University of Chicago.
In an interview, Nadella shared his perspective on AI, saying, “Technology will provide more and more ways to bring people together.”
He believes in AI’s potential to empower people and transform industries. “I see these technologies acting as a co-pilot, helping people do more with less,” he stated passionately.
Microsoft is OpenAI’s largest investor, with a stake of over US$10 billion.
The Microsoft CEO reached out to Altman following the firing to offer him support in his next steps.
Ilya Sutskever is OpenAI’s chief scientist and co-founder, and one of the board members with whom Altman clashed over several issues, including the pace of developing generative AI.
He graduated from the University of Toronto with a bachelor’s degree in Mathematics in 2005, a Master of Science in Computer Science in 2007, and a Doctor of Philosophy in 2013.
In 2015, after a short stint at Google, Sutskever co-founded OpenAI and eventually became its chief scientist; so critical was he to the company’s success that Elon Musk has taken credit for recruiting him.
In an interview with MIT Technology Review, Sutskever expressed his focus on preventing artificial superintelligence from going rogue.
Artificial superintelligence refers to a hypothetical level of AI that surpasses human intelligence in virtually all aspects.
In fact, the OpenAI leadership shakeup centred on AI safety, with Sutskever disagreeing with Altman on the pace of commercialising generative AI and on measures to reduce public harm.
“It’s obviously important that any superintelligence anyone builds does not go rogue,” Sutskever says.
However, despite the fiasco, Sutskever has since publicly apologised on X.
He expressed regret for his decisive vote against Altman and indicated his renewed support for Altman.