Skip to content

Meta AI and the Push for Artificial General Intelligence

There’s no doubt we’re moving towards an AI-assisted world, but different AI developers have varying ideas of what they want that world to look like. Some of the biggest names in AI like OpenAI CEO and co-founder Sam Altman and Meta CEO Mark Zuckerberg are setting their sights on building artificial general intelligence (AGI) and making it widely available, though their visions of exactly how AGI should be used in our daily lives are still unfolding.

AI that is as intelligent and capable as any human, if not more so, could have widespread transformative effects on every facet of our society, from the way we work to the products we consume to the way we spend our leisure time. But the future of AGI is still very uncertain.

Both Altman and Zuckerberg emphasize the safe and ethical use of AI, yet Altman has simultaneously portended AI-driven doom. His company’s feverish uphill journey towards the development of more intelligent AI seems to clash with his apocalyptic warnings and urgent calls for AI regulation.

Despite the clear benefits of AI for many businesses and the convenience of generative AI tools, in a recent Forbes survey looking to determine consumer sentiment around AI, the majority of respondents said they were concerned about misinformation and job loss from AI technology.

While many are fearful of an uncertain future, others are looking forward to a society powered by AI. Zuckerberg in particular has recently announced his ambitious plans to spearhead the development of AGI, and has restructured his company and allocated millions of dollars towards his venture.

What exactly is AGI, how is Meta working to develop it, and what implications could it have for our current work structure? What does safe AGI usage actually look like?

Jump to:
What is AGI?
Meta’s plans for AI
What does safe AI usage look like?
What are the risks of AGI?
Should businesses be worried about AI privacy?

What is artificial general intelligence (AGI)?

AGI, as it’s generally defined, is a type of artificial intelligence with cognitive abilities equal to or greater than that of humans. In essence, it’s human intelligence represented in software form. While current AI software is normally created to be utilized for specific tasks when prompted by a human, AGI would be able to perform any cognitive task a human could do, and could do it autonomously.

AGI can perform as well or better than humans on a wide range of tasks, has the ability to independently build on its own knowledge, and has contextual logic and reasoning skills. Or rather, it would if it existed. To this day, AGI is purely theoretical - researchers have not been able to get AI technologies to this advanced level. 

Is AGI achievable?

In the scientific community, the jury is out as to whether AGI could exist, and how soon this technology could be developed if it is possible. Even the definition of AGI varies depending on who you ask. But some researchers are claiming we’re well on our way with GPT-4, ChatGPT’s newest large language model (LLM).

Tech mogul Elon Musk has recently filed a lawsuit against OpenAI, claiming the startup has diverted from its original mission to develop AI for the benefit of humanity, instead adopting a for-profit structure. A core tenet of the lawsuit is the claim that GPT-4 can be considered AGI technology. 

In 2019, Microsoft entered an agreement with OpenAI stating that OpenAI’s LLMs would be exclusively licensed to the tech giant, but only until the LLMs reached a level of sophistication considered fitting of the definition of AGI, at which point the technology would be made available to other developers for the benefit of humanity. Musk claims that since GPT-4 is a version of AGI, it should no longer exclusively belong to OpenAI and Microsoft. The exclusivity of GPT-4 means, according to Musk, that OpenAI is breaking their agreement and betraying the vision the startup was founded on.

Musk isn’t the only one that believes GPT-4 is pushing boundaries. In April 2023, researchers at Microsoft released a paper claiming that GPT-4 is approaching human-level intelligence with its reasoning skills. They were impressed and baffled by its seemingly deep understanding of the world, and humanlike ability to reason. 

Bold claims that AGI, or at least some version of it, is already here, don’t represent the general consensus among AI researchers. In fact, some researchers go so far as to dismiss Microsoft’s lofty assertions about GPT-4 as nothing more than a PR pitch.

Though ChatGPT has some contextual reasoning skills, it doesn’t seem to display an actual understanding of the content it’s trained to produce. Rather, it runs on a predictive engine; when asked a question, it references its massive stores of knowledge and strings together words based on the usual cadence and grammatical structure of human speech and writing.

Some AI scientists think AGI may arrive as soon as 2027; others say it’ll take a century or longer. There is also a minority who doubt AGI is possible at all.

Meta's plans for AI

After ChatGPT’s near-overnight breakout success, Meta has worked to integrate AI into nearly every aspect of the business. In June of 2022, the company announced they were reorganizing so that their AI teams had a larger influence across multiple departments. One of these developments was moving the Fundamental AI Research team (FAIR) to the same part of the company as Reality Labs, the business unit of Meta working to build generative AI into Meta’s apps. 

Following Meta’s 2023 “year of efficiency”, a corporate restructuring plan which resulted in over 10,000 roles being eliminated to favor a flatter management hierarchy, Zuckerberg has allocated company resources to focus heavily on AI development.

Expanding infrastructure for AI development

In early January 2024, Zuckerberg announced in an Instagram Reels post that by the end of 2024, Meta will have acquired nearly 600,000 high-powered computer graphics cards - 350,000 of Nvidia’s coveted H100 graphics cards, and the rest in “H100 equivalents”. This major purchase is, in Zuckerberg’s words, being conducted with the goal of building the “massive compute infrastructure” required to pursue the company’s AI development goals. 

Each one of the Nvidia cards can cost $25,000-$30,000, so the cost Meta may be paying for those cards alone could amount to nearly $9 billion. And that’s not counting the 250,000 “H100 equivalents” the company is springing for.

Ultimately, according to his Reels post, Zuckerberg’s goals are to “build general intelligence, open source it responsibly, and make it available and useful to everyone”. He wants Meta to provide the next generation of AI services to its millions of users, and to do that would require significant advancements to AI technology.

Working towards AGI

Last year, Meta released Llama 2, an LLM with capabilities similar to ChatGPT; it can generate code, create content, analyze large datasets, and more. The LLM is free for commercial or research use, which aligns with Zuckerberg’s goal of “democratizing” AI. Zuckerberg told The Verge that his company’s broader focus on AI was influenced by the release of Llama 2, and a more advanced Llama 3 is now in development.

Though AGI appears to be the end goal for Meta, Zuckerberg doesn’t have a clear definition for what AGI means. As he put it in his interview with The Verge, “I don’t have a one-sentence, pithy definition,” but that the most important part of AGI is its ability to “reason and have intuition”. He also says that the arrival of AGI won’t be contained to one pivotal moment, but rather a gradual release over time.

In the meantime, Meta’s Ray-Ban smart glasses are gaining traction. Dubbed “the next generation of smart glasses”, they have an embedded camera, allowing users to livestream directly to Facebook and Instagram and share pictures with friends and family instantly using voice commands. They’re also integrated with Meta AI, a conversational assistant that can help you get information on-the-go. 

All this begs the question: Are we looking towards a future in which daily life is constantly monitored and aided by virtual assistants? Is it a good idea to make everyday life augmented reality? How can we ensure that a super-intelligent AI assistant can’t be used to spread misinformation? And if everyone has a camera on their face constantly ready to record, how can we prevent privacy issues?

How are AI developers ensuring their products are being used ethically?

While the biggest names in AI are repeatedly stating how highly they prioritize safe and responsible development, there is no universally recognized standard for how AI should be used. So what are safe AI practices? Since AI tech is so new, we’re still coming to understand how it can be used, let alone how it can be misused.

Companies like OpenAI and Meta have released best practice and safe usage guidelines, which specify how developers using their LLMs for commercial or research purposes should work to implement AI ethically and mitigate risks to themselves and their users. Some of the guidelines put forth by either or both company include:

  • Identify vulnerabilities in the model by using prompts that may elicit undesirable or inappropriate responses.
  • In a controlled environment, subject the model to adversarial attacks that bad actors may use to breach the system. Conduct tests regularly throughout training and beyond.
  • Be transparent with end users about the potential risks or limitations of your software.
  • Allow end users to control how their data is used, and give them control over the AI outputs.
  • Comply with any applicable regulations if you're collecting or processing sensitive data.

While these policies provide a baseline for safe usage, it’s hard to ensure that these guidelines will actually be followed by everyone using advanced LLMs, and that anyone in violation will be stopped. And as AI continues to advance, the risks have the potential to grow.

There’s no single consensus on how AI should be regulated by national or international law, but some international organizations have worked to come up with a general set of principles for AI regulation. The Organization for Economic Cooperation and Development (OECD) is an intergovernmental forum where 38 member countries with democratic governments collaborate to develop policy standards. The OECD has developed a set of principles and recommendations to guide how policymakers should approach AI regulation.

Some of these recommendations include:

  • Governmental investment in AI research and development
  • Fostering accessible AI ecosystems with digital infrastructure
  • Promoting a policy environment that supports a smooth transition from the development stage to real-world applications
  • Work with AI stakeholders to prepare workers for how AI will transform society, and offer support including career training programs and a social safety net to help those whose jobs were displaced

While the recommendations seem sound, the legal system has yet to catch up.

What are the potential risks of AI?

The OECD mentions that unchecked AI development will result, and in some ways already has resulted, in the proliferation of worldwide anxieties: “bias and discrimination, the polarization of opinions at scale, the automation of highly skilled jobs, and the concentration of power in the hands of a few.”

Given that we’ve never seen a technology that could match or outpace human performance in a general sense, the broad implications are hard to predict, but the following are what researchers and economists are most concerned about at this point.

Upending the job market

AI in its current state has already resulted in significant job losses, either from roles being replaced by AI or due to shifting priorities within companies. The tech job layoffs in early 2024 were due in part to companies’ strategic reallocation of resources in favor of AI technologies. And numerous business leaders are touting the benefits of AI, and looking to use it to save on labor costs. 

In a recent survey of 750 business leaders, 37% of respondents said AI had replaced workers in 2023. AI-influenced role eliminations can sometimes come in significant numbers. Here’s just one real-world example: Since it adopted AI technology to assist with customer service interactions, fintech company Klarna has reduced its headcount by 25%, and has stated that the AI assistant is now doing the work of 700 staff members. A spokesperson for the company has stated that AI is not directly responsible for the layoffs, but that the technology is a significant consideration in the company’s hiring strategy.

Though the majority of companies aren’t using AI to replace roles, the new technology is clearly affecting the job market - and some roles are more vulnerable to being eliminated than others. And it’s still the early days of generative AI. If a technology that could outperform humans were created, the job losses would have the potential to be far-reaching, since the cost savings for companies would be incredible.

Security and privacy issues

If we imagine a world where AGI is used in nearly every facet of your life, the amount of data it would need to collect in order to process your surroundings would be astronomical. We’re already in an era where consumer privacy concerns about the data collected by smartphones and social media platforms are ever-increasing. AGI technology, when used in a way that companies like Meta are going for, would take things a step further.

With AGI-powered assistants running in the background of our lives, seeing what we’re seeing and hearing what we’re hearing, the technology would know everything about us. The increased capacity for surveillance of individuals’ public and private lives with use of AGI would need to be tightly regulated. And if a software has access to a massive amount of personal data, any data breach by bad actors would be catastrophic.

Mass disinformation

We’ve already seen misinformation being spread on a wide scale with smartphones and social media. Technology that is able to process data faster than humans can, and, based on this data, create written or spoken content that closely plays into common fears or mimics the manner of speech of a single person or group of people, could create very powerful disinformation campaigns.

OpenAI released a 2023 study in collaboration with Georgetown University’s Center for Security and Emerging Technology and the Stanford Internet Observatory, exploring how AI could be used for disinformation purposes. The biggest risk they identified was spreading propaganda via “online influence operations”. The production of personalized and influential content meant to affect the opinion of a target group of people would be made far easier with advanced generative tools. 

One way to mitigate the proliferation of these campaigns, according to OpenAI, is to restrict access to the most advanced LLMs, as well as media literacy campaigns and the adoption of “digital provenance standards”. 

In contrast to OpenAI’s stance of more restrictive access, the most important aspect of AI safety to Zuckerberg is equal access to AI tech across the board, at least when it comes to commercial and institutional use. He told The Verge, “If you make [AI tech] more open, then that addresses a large class of issues that might come about from unequal access to opportunity and value.”

Zuckerberg recognizes the contrast between Meta’s open-source strategy with other leaders in AI development, and implies that the privatization of AI technology is more of a business play than anything related to safety. “I’m sure some of them are legitimately concerned about safety, but it’s a hell of a thing how much it lines up with the strategy,” he told The Verge.

Individuals and organizations need to pay close attention to the sources of the content they consume, as fraudulent or manipulative content only becomes more prolific and convincing.

Should you be worried about privacy issues if your business uses AI?

If you are currently using, or want to start using, AI tools for your business, it’s vital to understand any potential ethical or legal issues that may arise, and prepare your strategy to prevent or mitigate them. 

Whether you’re using AI to process and analyze data, automate customer service interactions, or create content, this could mean an external entity has access to some of your and/or your customers’ data. Do you have procedures in place to determine how you use AI services, and checks and balances to ensure safe and legal usage? 

Take our free AI ethics assessment to gauge your AI preparedness.