Are OpenAI and Q* a threat to humanity?
If you've been keeping up with developments at OpenAI over the past week, you're likely aware of the upheaval involving CEO Sam Altman, one of the most prominent figures in generative AI. Altman, widely regarded as an industry leader in artificial intelligence, was unexpectedly removed by the OpenAI board, a move that took the tech community by surprise. In a swift turn of events, however, he has returned, backed by 500 of OpenAI's 700 employees and by Microsoft CEO Satya Nadella, following the replacement of the board members who initially ousted him.
In the run-up to Sam Altman's brief four-day absence, a group of staff researchers wrote to the board of directors, voicing concerns about a ground-breaking artificial intelligence discovery that, in their view, had the potential to pose a threat to humanity.
OpenAI has acknowledged the existence of a project known as Q* (pronounced Q-Star) in an internal communication and in a letter sent to the board before Altman's departure.
Industry experts see Q* as a potential breakthrough in OpenAI's quest for artificial general intelligence (AGI). AGI represents an autonomous system with the ability to outperform humans in a variety of economically valuable tasks.
Given access to significant computing resources, the new model was reportedly able to solve certain mathematical problems. Although it performs maths only at the level of a grade-school student, the model's success on these tasks has generated considerable optimism among researchers about Q*'s future potential.
AI researchers view mathematics as a crucial frontier. Current generative AI is good at tasks such as writing and language translation because it predicts the next word statistically; as a result, responses to the same question can vary significantly. Mastering mathematics, where there is a single correct answer, would imply that AI possesses stronger reasoning capabilities, mirroring human intelligence more closely.
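The contrast above can be sketched in a few lines of Python. The token probabilities below are purely hypothetical, not taken from any real model; the point is only that sampling from a distribution gives different continuations on different runs, whereas a question with one correct answer has a single deterministic target.

```python
import random

# Hypothetical next-token distribution for the prompt
# "The capital of France is a ..." (illustrative numbers only).
next_token_probs = {
    "beautiful": 0.4,
    "large": 0.3,
    "historic": 0.2,
    "bustling": 0.1,
}

def sample_next_token(probs, rng):
    """Pick a token in proportion to its probability, as a language model does."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random()
# Asking the same "question" 50 times can yield several different answers.
samples = {sample_next_token(next_token_probs, rng) for _ in range(50)}
print(samples)

# A maths problem, by contrast, has one correct answer. Greedy decoding
# (always taking the most probable token) is the deterministic analogue:
greedy = max(next_token_probs, key=next_token_probs.get)
print(greedy)  # "beautiful" on every run
```

Real systems control this trade-off with a temperature parameter: lower temperatures push sampling towards the greedy, deterministic end of the spectrum.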
In a scenario where an AI is given the task of reserving a table at a popular restaurant, shutting down mobile networks and turning all traffic lights green might theoretically help the AI achieve its goal. However, in the real world, such actions would be viewed as unreasonable and unethical.
What about AI decision-making systems that offer loan approval and hiring recommendations? They carry the risk of algorithmic bias. This is due to the fact that the training data and decision models they operate on often mirror long standing social prejudices.
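A toy example shows how this happens. The "historical" loan decisions below are invented for illustration; the point is that a naive model fitted to biased data simply turns that bias into policy.

```python
from collections import defaultdict

# Hypothetical historical loan decisions encoding a social bias:
# group "A" was approved far more often than group "B".
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

# A naive model learns the historical approval rate for each group ...
outcomes = defaultdict(list)
for group, approved in history:
    outcomes[group].append(approved)
approval_rate = {g: sum(v) / len(v) for g, v in outcomes.items()}

# ... and approves an applicant whenever their group's rate exceeds 50%.
def decide(group):
    return approval_rate[group] > 0.5

print(decide("A"))  # True  - the bias in the data becomes the model's rule
print(decide("B"))  # False
```

Nothing in the code is malicious; the disparity in the training data alone is enough to produce a discriminatory decision rule.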
AI is rapidly evolving into an alien intelligence, proficient at achieving goals but potentially dangerous due to its lack of inherent alignment with the moral values of its creators.
Right now, AI isn't remotely close to causing that kind of trouble. Today's systems are designed for specific tasks, not for making big decisions. The technology is not advanced enough to identify and plan out goals and sub-goals – to mess with traffic just to book you a restaurant table.
Not only does the tech lack the complex ability for multi-layered judgement in these situations, but it also doesn't have independent access to crucial parts of our infrastructure to start causing that level of damage. So, as of now, AI isn't posing a threat to humanity.
How big is the threat of OpenAI and Q*?
Humans are inherently judgemental beings. We actively assess details and routinely make decisions, at work and at leisure: hiring choices, loan approvals, or simply selecting what to watch on Netflix. An increasing number of these decisions, however, are being automated and delegated to algorithms. The world won't end because of this shift, but people may gradually lose the ability to make such judgements independently: the less practice individuals have in making these decisions, the more their judgement may erode over time.
Empowering AI with the capacity to generalise, learn, and comprehend – enabling it to make broad judgements and to plan goals and sub-goals – could accelerate the process of humans relinquishing everyday decision-making.
These significant challenges demand the attention of governments and policymakers to establish frameworks that ensure responsible and ethical AI.
What are the potential benefits of AGI?
As AI such as Q* begins to advance its reasoning capabilities, we also have to consider the potential positive impact this could have across many aspects of life. Science and healthcare are prime examples where reasoning and deduction are key to unlocking answers to the big questions; without them, healthcare would not have progressed as it has over the years. With AGI, long-unanswered questions in science and medicine might be resolved in a fraction of the time, opening up a world of possibilities in healthcare advancement, including medical research, diagnostic techniques and treatment development.

While this is beyond the current reasoning capabilities of Q*, it is a step in the right direction. As noted earlier, though, we must not be blind to the ethics of AGI. For it to work effectively for everyone, regulations need to be put in place to uphold ethical standards and guard against algorithmic bias.