Sam Altman’s Gamble: Will OpenAI’s Aspirations Match the Industry’s Growing Expenses?

It’s a staggering $1.4 trillion (£1.1 trillion) dilemma. How can a startup like OpenAI, which is currently operating at a loss, afford such enormous expenses?

A convincing answer to this question could go a long way toward easing investor worries that the burgeoning artificial intelligence sector, with its lofty tech-company valuations and an estimated $3 trillion of global spending on data centers, is a bubble about to burst.

AI firms such as OpenAI, the company behind ChatGPT, require extensive computing resources (or “compute”) to train their models, generate responses, and develop even more advanced systems. OpenAI’s computing commitments (AI infrastructure such as the chips and servers supporting its renowned chatbot) are projected to reach $1.4 trillion over the next eight years, dwarfing its annual revenue of about $13 billion.


Recently, this disparity has become a significant concern, feeding market unease about AI expenditure, and remarks from OpenAI executives have done little to clarify matters.

OpenAI CEO Sam Altman first attempted to address the question during a somewhat awkward exchange with Brad Gerstner of Altimeter Capital, one of the company’s leading investors, a discussion that ended with Altman telling him, in effect, that enough was enough.

On his podcast, Gerstner articulated that the company’s capacity to cover more than $1 trillion in computing expenses while yielding only $13 billion in annual revenue is an issue “plaguing the market.”

Altman countered by stating: “First of all, we’re generating more revenue than that. Secondly, if you want to sell your stock, I can find you a buyer. Enough.”

Last week, OpenAI’s Chief Financial Officer Sarah Friar suggested that some of the company’s chip spending could be backstopped by the U.S. government.

“We’re exploring avenues where banks, private equity, and even governmental systems can help finance this,” she mentioned to the Wall Street Journal, noting that such assurances could significantly lower financing costs.

Was OpenAI, which recently restructured itself as a full-fledged for-profit entity valued at $500 billion, implying that AI companies should be treated the way banks were in the late 2000s: too big to fail? The remarks prompted a swift clarification from Friar, who said on LinkedIn that OpenAI was not seeking a federal backstop, while Altman set out his own position at length on X.

“We neither have nor want government guarantees for OpenAI data centers,” Altman wrote in a lengthy post, adding that taxpayers shouldn’t be on the hook for rescuing companies that make “poor business choices.” Perhaps, he suggested, the government could build its own AI infrastructure and offer loan guarantees to bolster chip manufacturing in the U.S.

Tech analyst Benedict Evans noted that OpenAI is trying to compete with major AI contenders that are supported by large, highly profitable existing businesses: Meta, Google, and Microsoft, the last of which is itself a significant backer of OpenAI.

“OpenAI aims to match or surpass the infrastructure of dominant platform companies that can fund tens of billions to hundreds of billions of dollars of computing out of the cash flow from their existing operations, something OpenAI lacks, and it is trying to bootstrap its way into that exclusive circle,” he noted.

Altman is confident that the projected $1.4 trillion can be offset by future demand for OpenAI products and ever-evolving models. Photo: Stephen Brashear/AP

There are also concerns about the circular nature of some of OpenAI’s computing agreements. For instance, Oracle is set to invest $300 billion in developing new data centers for OpenAI across Texas, New Mexico, Michigan, and Wisconsin, with OpenAI expected to pay almost the same amount back in fees for using those centers. Under its agreement with Nvidia, a primary supplier of AI chips, OpenAI will purchase chips for cash, while Nvidia will invest in OpenAI as a non-controlling shareholder.

Altman has also provided updates on revenue, stating that OpenAI anticipates exceeding $20 billion in annual revenue by the year’s end and reaching “hundreds of billions of dollars” by 2030.

He remarked: “Based on the trends we’re observing in AI utilization and the increasing demand for it, we believe that the risk of OpenAI lacking sufficient computing power is currently more pressing than the risk of having excess capacity.”


In essence, OpenAI is confident that it can recover its $1.4 trillion investment through anticipated demand for its products and continually enhancing models.

The company boasts 800 million weekly users and 1 million business customers. Consumer ChatGPT subscriptions account for about 75% of its revenue; the rest comes from selling enterprises a dedicated version of ChatGPT and letting them build its AI models into their own products.

A Silicon Valley investor, who has no financial ties to OpenAI, emphasizes that while the company has the potential for growth, its success hinges on various factors like model improvements, reducing operational costs, and minimizing the expenses of the chips powering these systems.

“We believe OpenAI can capitalize on its strong branding and ChatGPT’s popularity among consumers and businesses to create a suite of high-value, high-margin products. The crucial question is: how extensively can these products and revenue models scale, and how capable will the models ultimately prove to be?”

However, OpenAI currently operates in the red. The company has described reported loss figures, such as an $8 billion loss in the first half of the year and about $12 billion in the third quarter, as incomplete, but it has neither disputed them nor provided alternative numbers.

Altman is optimistic that revenue may stem from multiple sources, including heightened interest in paid ChatGPT versions, other organizations utilizing their data centers, and users purchasing the hardware device being crafted in collaboration with iPhone designer Sir Jony Ive. He also asserts that “substantial value” will emerge from scientific advancements in AI.

Ultimately, OpenAI is committing to $1.4 trillion of computing resources, a figure far beyond its current income, because it is convinced that demand for an ever-improving product lineup will deliver the returns to pay for it.

Carl Benedikt Frey, author of “How Progress Ends” and an associate professor of AI and work at the University of Oxford, casts doubt on OpenAI’s aspirations, citing evidence of a slowdown in AI adoption in the U.S. economy. Recently, the U.S. Census Bureau reported that AI adoption has declined among companies with 250 or more employees.

“Multiple indicators reveal that AI adoption has been decreasing in the U.S. since summer. While the underlying reasons remain unclear, this trend implies a shift where some users and businesses feel they aren’t receiving the anticipated value from AI thus far,” Frey stated, adding that achieving $100 billion in revenue by 2027 (as suggested by Altman) would be impossible without groundbreaking innovations from the company.

OpenAI says that its enterprise version of ChatGPT has grown ninefold year-over-year, a sign of accelerating business adoption, with clients spanning sectors including banking, life sciences, and manufacturing.

Yet, Altman acknowledges that this venture might not be a guaranteed success.

“However, we could certainly be mistaken, and if that’s the case, the market will self-regulate, not the government.”

Source: www.theguardian.com

Bryan Cranston Appreciates OpenAI’s Efforts to Combat Sora 2 Deepfakes

Bryan Cranston expressed his “gratitude” to OpenAI for addressing deepfakes of him on its generative AI video platform Sora 2, after users generated videos using his voice and likeness without his permission.

The Breaking Bad actor voiced concerns to the actors’ union Sag-Aftra after Sora 2 users generated his likeness following the platform’s recent launch. On October 11, the LA Times reported that in one instance, “a synthetic Michael Jackson takes a selfie video using an image of Breaking Bad star Bryan Cranston.”


To appear in Sora 2, living individuals must provide explicit consent by opting in. In statements following the release, OpenAI confirmed it had implemented “measures to block depictions of public figures” and established “guardrails to ensure audio and visual likenesses are used with consent.”

However, upon Sora 2’s launch, several reports emerged, including from the Wall Street Journal, the Hollywood Reporter, and the LA Times, saying that OpenAI had told several talent agencies that if they didn’t want their clients’ likenesses or copyrighted material to appear in Sora 2, they would need to opt out rather than opt in, causing an uproar in Hollywood.

OpenAI contests these claims and told the LA Times its goal has always been to allow public figures to control how their likenesses are utilized.

On Monday, Cranston released a statement via Sag-Aftra thanking OpenAI for “enhancing guardrails” to prevent users from generating unauthorized depictions of him.

“I was very concerned, not only for myself but for all performers whose work and identities could be misappropriated,” Cranston commented. “We are grateful for OpenAI’s enhanced policies and guardrails and hope that OpenAI and all companies involved in this endeavor will respect our personal and professional rights to control the reproduction of our voices and likenesses.”

Hollywood’s top two agencies, Creative Artists Agency (CAA) and United Talent Agency (UTA), which represents Cranston, have repeatedly highlighted the potential dangers Sora 2 and similar generative AI platforms pose to clients and their careers.

Nevertheless, on Monday, UTA and CAA released a joint statement alongside OpenAI, Sag-Aftra, and the Association of Talent Agents, declaring that what happened to Cranston was inappropriate and that they would collaborate to protect the actor’s “right to determine how and whether he can be simulated.”


“While OpenAI has maintained from the start that consent is required for the use of voice and likeness, the company has expressed regret over these unintended generations. OpenAI has reinforced its guardrails concerning the replication of voice and likeness without opt-in,” according to the statement.

Actor Sean Astin, the new president of Sag-Aftra, cautioned that Cranston is “one of many performers whose voices and likenesses are at risk of mass appropriation through reproduction technology.”

“Bryan did the right thing by contacting his union and professional representatives to address this issue. We now have a favorable outcome in this case. We are pleased that OpenAI is committed to implementing an opt-in protocol, which enables all artists to decide whether they wish to participate in the exploitation of their voice and likeness using AI,” Astin remarked.

“To put it simply, opt-in protocols are the only ethical approach, and the NO FAKES Act enhances our safety,” he continued. The NO FAKES Act, under consideration in Congress, aims to prohibit the production and distribution of AI-generated replicas of any individual without their consent.

OpenAI has openly supported the NO FAKES Act, with CEO Sam Altman stating the company is “firmly dedicated to shielding performers from the misuse of their voices and likenesses.”

Sora 2 permits users to generate “historical figures,” loosely defined as well-known individuals who are deceased. However, OpenAI has recently acknowledged that representatives of “recently deceased” celebrities can request that their likenesses be blocked from Sora 2.

Earlier in the month, OpenAI announced it was working with the estate of Martin Luther King Jr to block depictions of King in Sora 2 at the estate’s request, saying it had “strengthened guardrails around historical figures.”

Recently, Zelda Williams, the daughter of the late actor Robin Williams, pleaded with people to “stop” sending her AI videos of her father, while Kelly Carlin, the daughter of the late comedian George Carlin, characterized her father’s AI videos as “overwhelming and depressing.”

Legal experts have speculated that generative AI platforms may be using depictions of deceased historical figures to test the limits of what is legally permissible.

Source: www.theguardian.com

TechScape: Is OpenAI’s $5 billion chatbot investment worth it? It depends on how you use it

What if you build it and no one comes?


It’s fair to say the luster of the AI boom is fading. Skyrocketing valuations are starting to look shaky compared with the massive spending required to keep them going. Over the weekend, tech site The Information reported that OpenAI is expected to spend an astonishing $5 billion more than it earns this year alone:

If our predictions are correct, OpenAI, most recently valued at $80bn, will need to raise more capital over the next 12 months or so. Our analysis is based on informed estimates of what OpenAI spends to operate the ChatGPT chatbot and to train future large language models, as well as a “guesstimate” of how much OpenAI spends on staffing, based on its previous projections and our knowledge of the industry. Our conclusion shows exactly why so many investors are concerned about the profit prospects of conversational artificial intelligence.

The most pessimistic view is that AI — and especially chatbots, an expensive and competitive sector of an industry that has captured the public’s imagination — isn’t as good as we’ve been told.

This argument suggests that as adoption widens and iteration slows, most people have now had a chance to use cutting-edge AI properly and are beginning to realize that, while impressive, it is not as useful as promised. The first time you use ChatGPT, it’s a miracle; by the 100th time, the flaws are obvious and the magic fades into the background. You decide ChatGPT is bullshit.

In this paper, we argue against the view that when ChatGPT and the like make false claims they are lying or even hallucinating, and in favour of the position that the activity they are engaged in is bullshitting. … Since these programs themselves cannot care about truth, and are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their output bullshit.

Get them trained




It is estimated that only a handful of jobs will be completely eliminated by AI. Photo: Bim/Getty Images/iStockphoto

I don’t think it’s that bad. But that’s not because the systems are flawless. Rather, I think AI’s stumbling block comes much earlier: you have to actually try a chatbot in a meaningful way before you can even conclude it’s bullshit and give up. And judging by the tech industry’s response, that first hurdle is becoming the bigger concern. Last Thursday, I reported on how Google is partnering with a network of small businesses and several academy trusts to bring AI into the workplace to enhance, rather than replace, worker capabilities. Debbie Weinstein, managing director of Google UK and Ireland, said:

It’s hard for us to talk about this right now because we don’t know exactly what’s going to happen. What we do know is that the first step is to sit down and talk [with the partners], and then really understand the use case. If you have school administrators and students in the classroom, what are the specific tasks these people actually want to carry out?

For teachers, this could be a quick email with ideas on how to use Gemini in their lesson plans, formal classroom training, or one-on-one coaching. Various pilot programs will be run with 1,200 participants, with each group having around 100 participants.

One way of looking at this is that it’s just another feel-good investment in the upskilling schemes of big companies. Google in particular has been helping to upskill Brits for years with its digital training scheme, formerly branded as the company’s “Digital Garage”. To put it more cynically, teaching people how to use new technology by teaching them how to use your own tools is good business. Brits of a certain age will vividly remember “IT” or “ICT” classes as thinly veiled instructions on how to use Microsoft Office. People older and younger than me learned some basic computer programming. I learned how to use Microsoft Access.

In this case, it’s something deeper: Google needs to go beyond simply teaching people how to use AI and also run experiments to figure out what exactly to teach them. “This isn’t about a fundamental rethinking of how we understand technology, it’s about the little everyday things that make work a little more productive and a little more enjoyable,” Weinstein says. “Today, we have tools that make work a little easier. Those three minutes you save every time you write an email.”

“Our goal is to make sure that everyone can benefit from technology, whether it’s Google technology or other companies’ technology. And I think the general idea of working together with tools that help make your life more efficient is something that everyone can benefit from.”

Ever since ChatGPT came out, the underlying assumption has been that the technology speaks for itself, and the fact that it literally does helps. But chat interfaces can be confusing. Even when you’re dealing with a real human being, it’s a skill to get the best out of them when you need help, and even more of a skill when the only way to communicate is through text chat.

AI chatbots are not people. They are so unlike humans that it’s all the more difficult to even think about how they might fit into common work patterns. The pessimistic view of this technology isn’t that there’s nothing there – there is, despite all the hallucinations and nonsense. Rather, it’s a much simpler worry: what if most people never bother to learn how to use these tools?


Maths bot gold




Google DeepMind has trained its new AI system to solve problems from the International Mathematical Olympiad. Photo: Pittinan Piyavatin/Alamy

Meanwhile, elsewhere at Google:

Computers have long been able to perform calculations faster than humans, yet the highest levels of formal mathematics have remained the sole domain of humans. But a breakthrough by researchers at Google DeepMind has brought AI systems closer than ever to matching the best human mathematicians in the field.

Two new systems, called AlphaProof and AlphaGeometry 2, worked together to tackle problems from the International Mathematical Olympiad, a worldwide math competition for high school students held every year since 1959. Each year, the Olympiad consists of six incredibly difficult problems covering subjects such as algebra, geometry and number theory, and winning a gold medal marks you as one of the best young mathematicians in the world.

A word of warning: the Google DeepMind system solved “only” four of the six problems, and one of them was solved using a “neurosymbolic” system, which is less AI-like than you might expect. All the problems were manually translated into a programming language called Lean, which lets the system read each one as a formal description of the problem without having to parse human-readable text first. (Google DeepMind also tried using an LLM to do this part, but it didn’t work very well.)
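To make “a formal description of the problem” concrete, here is a minimal sketch of a Lean statement and proof, assuming the Mathlib library; it is a toy claim rather than an Olympiad problem, but it shows the machine-checkable form that systems like AlphaProof consume and must complete.

```lean
import Mathlib

-- A toy formalization (not an IMO problem): the informal claim
-- "the sum of two even numbers is even", written so that Lean can
-- verify the proof mechanically instead of parsing natural language.
theorem even_add_even (m n : ℕ) (hm : Even m) (hn : Even n) :
    Even (m + n) := by
  obtain ⟨a, ha⟩ := hm    -- m = a + a
  obtain ⟨b, hb⟩ := hn    -- n = b + b
  exact ⟨a + b, by omega⟩ -- m + n = (a + b) + (a + b)
```

The point of the formal form is that there is nothing to interpret: a proof either checks or it doesn’t, which is what lets a search system grind away at candidate proofs without human judgment in the loop.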

But this is still a pretty big step. The International Mathematical Olympiad is difficult, and an AI has won a medal. What happens when one wins the gold medal? Is there a big difference between solving problems that only the best high school mathematicians can tackle and solving problems that only the best undergraduates, graduate students, and PhDs can? What changes when a branch of science is automated?

If you’d like to read the full newsletter, sign up to receive TechScape in your inbox every Tuesday.

Source: www.theguardian.com

Understanding Sora AI: A Comprehensive Guide to OpenAI’s Text-to-Video Tools

Sora is groundbreaking artificial intelligence software that empowers users to produce remarkably lifelike videos from simple written instructions.

OpenAI, the mastermind behind Dall-E and ChatGPT, is pushing boundaries with the soon-to-be-released service.

This innovation seemingly emerged out of nowhere. Previous attempts at AI-generated videos were less than impressive, to put it lightly. But with Sora, things are changing.

How did OpenAI achieve this feat? Can you use these tools today? And what does this mean for the future of video and content creation? Let’s dive deep into the modern tools and their implications.

What is Sora?

Sora is an AI tool capable of generating full videos up to a minute long. For instance, simply entering a prompt like “a group of cats worshipping a giant dog” can yield a video matching that description.

Unless you follow the buzz on social media and in specialist computing communities, Sora’s arrival may have passed you by. There was no grand announcement or extensive advertising campaign; it simply appeared.

OpenAI has showcased various sample videos where Sora impressively produces lifelike visuals. These videos feature mirror reflections, intricate liquid movements, and falling snow particles.

How does Sora work?

Sora operates similarly to previous AI image generators but with added complexity. It uses diffusion modelling: each frame starts out as random, static-like noise, which the model gradually refines into a coherent image, while keeping the frames consistent with one another so they form a cohesive video.

To train Sora, example videos and corresponding textual descriptions are provided to help the model understand the relationship between images and actions depicted in the videos.

This process challenges the model to learn how intricate details such as 3D geometry, motion, reflections, and shadows behave, so that it can replicate them accurately.
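As a rough illustration of the idea, here is a minimal, schematic Python sketch of the denoising loop at the core of diffusion-based generation. The `denoiser` function is a placeholder for Sora’s trained network, whose architecture and sampling procedure are not public, and the update rule is deliberately simplified.

```python
import torch

def sample_video(denoiser, text_embedding, steps=50,
                 frames=16, channels=3, height=64, width=64):
    """Schematic reverse diffusion: start from pure noise and
    iteratively refine it toward a clip matching the prompt.
    `denoiser` stands in for a (non-public) trained network."""
    # Every frame begins as random, static-like noise.
    video = torch.randn(frames, channels, height, width)
    for step in reversed(range(1, steps + 1)):
        t = torch.tensor([step / steps])  # current noise level
        # The network predicts the noise present in the clip,
        # conditioned on the text prompt and the noise level.
        predicted_noise = denoiser(video, t, text_embedding)
        # Remove a fraction of it (real samplers use carefully
        # derived coefficients rather than this naive update).
        video = video - predicted_noise / steps
    return video
```

With any stand-in callable for `denoiser` the loop runs end to end; in a real system that callable is a network trained, on video-and-caption pairs, to predict the noise that was added.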

For transparency, OpenAI offers a detailed explanation of how the model functions on its website, although the sources of the training videos remain undisclosed.

How to use Sora AI

Currently, Sora is not available to the general public. OpenAI exercises caution in releasing such powerful tools, starting with a small “red team” of individuals who assess potential risks and harms of the technology.

Following this, a select group of visual artists, designers, and filmmakers will gain insight into how the tool functions for creative endeavors. Eventually, Sora may become accessible to the public, likely following OpenAI’s pay-as-you-go model.

Is Sora the best AI video generator?

Based on the videos unveiled so far, Sora appears to be a significant leap beyond previous attempts at AI video generation. Early endeavors, like the infamous clip of Will Smith eating spaghetti or the “Pepperoni Hug Spot” commercial, pale in comparison.

Comparing those early attempts with Sora’s output makes the difference stark. Sora’s videos boast accurate lighting, reflections, and human-like figures, even tackling complex scenarios like people entering and exiting the frame.

Despite its impressive capabilities, Sora is not without flaws. Glitches like disappearing body parts, sudden appearances, and floating feet are observable in its videos. As the public gains access, more videos will expose both strengths and weaknesses of the model.


Source: www.sciencefocus.com

Security Concerns Raised by the Realism of OpenAI’s Sora Video Generator

AI program Sora generated this video featuring an android based on text prompts

Sora/OpenAI

OpenAI has announced a program called Sora, a state-of-the-art artificial intelligence system that can turn text descriptions into photorealistic videos. The video generation model has added to the excitement over advances in AI technology, along with growing concerns about how synthetic deepfake videos could exacerbate misinformation and disinformation during a critical election year around the world.

The Sora AI model can currently create videos up to 60 seconds long using text instructions alone or a combination of text and images. One demonstration video begins with a text prompt describing a “stylish woman walking down a Tokyo street filled with warmly glowing neon lights and animated city signs.” Other examples include more fantastical scenarios, such as dogs frolicking in the snow, vehicles driving along roads, and sharks swimming through the air between city skyscrapers.

“Like other technologies in generative AI, there is no reason to believe that text-to-video conversion will not continue to advance rapidly. We are increasingly approaching a time when it will be difficult to tell the fake from the real,” says Hany Farid at the University of California, Berkeley. “Combining this technology with AI-powered voice cloning could open up entirely new ground in terms of creating deepfakes of things people say and do that they have never actually done.”

Sora is based on some of OpenAI's existing technologies, including the image generator DALL-E and the GPT large language models. Text-to-video AI models have lagged somewhat behind other technologies in terms of realism and accessibility, but Sora’s demonstrations are “an order of magnitude more believable and less cartoonish” than their predecessors, says Rachel Tobac, co-founder of SocialProof Security, a white-hat hacking organization focused on social engineering.

To achieve this higher level of realism, Sora combines two different AI approaches. The first is a diffusion model, similar to those used in AI image generators such as DALL-E; these models learn to gradually transform randomized image pixels into a coherent image. The second is a transformer architecture, used to contextualize and stitch together sequential data. Large language models, for example, use transformer architectures to assemble words into comprehensible sentences. In this case, OpenAI split video clips into visual “spacetime patches” that Sora’s transformer architecture can process.
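As a rough sketch of what “spacetime patches” might look like in code, here is a minimal Python illustration of carving a video tensor into small blocks that each span a few frames and a small spatial window; the actual patch sizes, latent encoding, and embedding steps in Sora are not public, so every number below is an assumption.

```python
import torch

def spacetime_patches(video, pt=2, ph=16, pw=16):
    """Split a video tensor (frames, channels, height, width) into
    'spacetime patches': blocks spanning `pt` frames and a `ph` x `pw`
    spatial window, each flattened into one token for a transformer.
    A simplified illustration of the idea OpenAI describes."""
    f, c, h, w = video.shape
    assert f % pt == 0 and h % ph == 0 and w % pw == 0
    patches = (video
               .reshape(f // pt, pt, c, h // ph, ph, w // pw, pw)
               .permute(0, 3, 5, 2, 1, 4, 6)   # block indices first
               .reshape(-1, c * pt * ph * pw)) # one flat token per block
    return patches  # shape: (num_patches, patch_dim)

# Example: a 16-frame, 3-channel, 64x64 clip becomes 128 tokens.
tokens = spacetime_patches(torch.randn(16, 3, 64, 64))
print(tokens.shape)  # torch.Size([128, 1536])
```

Treating a clip as a bag of such tokens is what lets the same transformer machinery that orders words in a sentence keep a video coherent across both space and time.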

Sora's video still contains many mistakes, such as a walking person's left and right feet swapping positions, a chair floating randomly in the air, and a chewed cookie magically leaving no bite marks. contained. still, jim fanThe senior research scientist at NVIDIA praised Sora on social media platform X as a “data-driven physics engine” that can simulate the world.

The fact that Sola's video still exhibits some strange glitches when depicting complex scenes with lots of movement suggests that such deepfake videos are still detectable for now. There is, he says. Arvind Narayanan at Princeton University. But he also warned that in the long term, “we need to find other ways to adapt as a society.”

OpenAI has been holding off on making Sora publicly available while it conducts “red team” exercises in which experts attempt to break safeguards in AI models to assess Sora's potential for abuse. An OpenAI spokesperson said the select group currently testing Sora are “experts in areas such as misinformation, hateful content, and bias.”

Such testing is important because synthetic videos could allow malicious actors to generate fake footage to, for instance, harass someone or sway a political election. Misinformation and disinformation fuelled by AI-generated deepfakes ranks as a major concern for leaders in academia, business, government, and other fields, as well as for AI experts.

“Sora is fully capable of creating videos that have the potential to deceive the public,” Tobac said. “Videos don’t have to be perfect to be believable, as many people still don’t realise that video can be manipulated as easily as photos.”

Tobac said AI companies will need to work with social media networks and governments to combat the scale of misinformation and disinformation that could arise once Sora is released to the public. Defenses could include implementing unique identifiers, or “watermarks,” for AI-generated content.

When asked whether OpenAI plans to make Sora more widely available in 2024, an OpenAI spokesperson said the company is “taking several important safety steps” ahead of making Sora available in OpenAI’s products. For example, the company already uses automated processes aimed at preventing its commercial AI models from generating extreme violence, sexual content, hateful imagery, and depictions of real politicians and celebrities. With more people than ever before participating in elections this year, these safety measures are extremely important.


Source: www.newscientist.com