What if you build it and no one comes?
It’s fair to say the luster of the AI boom is fading. Skyrocketing valuations are starting to look shaky next to the massive spending required to sustain them. Over the weekend, tech site The Information reported that OpenAI is on track to spend an astonishing $5bn more than it earns this year alone:
If our predictions are correct, OpenAI, most recently valued at $80bn, will need to raise more capital over the next 12 months or so. Our analysis is based on informed estimates of what OpenAI spends to operate the ChatGPT chatbot and to train future large language models, as well as a “guesstimate” of how much OpenAI will spend on staffing, based on OpenAI’s previous projections and our knowledge of its adoption. Our conclusion shows exactly why so many investors are worried about the profit prospects of conversational artificial intelligence.
The most pessimistic view is that AI — and especially chatbots, an expensive and competitive sector of an industry that has captured the public’s imagination — isn’t as good as we’ve been told.
This argument suggests that, as adoption grows and iteration slows, most people have now had a chance to use cutting-edge AI properly and are beginning to realize that it’s impressive but not much practical use. The first time you use ChatGPT it feels like a miracle, but by the hundredth time the flaws are obvious and the magic has faded into the background. You decide ChatGPT is bullshit.
In this paper, we argue against the view that ChatGPT and its peers are lying or hallucinating when they make false claims, and in favor of the position that what they are doing is bullshitting. … Since these programs themselves cannot care about the truth, and are designed to generate text that looks true without any actual concern for the truth, it seems appropriate to call their output bullshit.
Get them trained
I don’t think it’s that bad. But that’s not because I think the systems are perfect. Rather, I think the stumbling block comes much earlier: you have to try a chatbot in some meaningful way before you can even realize it’s bullshit and give up. And judging by the tech industry’s response, that hurdle is starting to loom larger. Last Thursday, I reported on how Google is partnering with a network of small businesses and several academy trusts to bring AI into the workplace to enhance, rather than replace, worker capabilities. Debbie Weinstein, managing director of Google UK and Ireland, said:
It’s hard for us to talk about this right now because we don’t know exactly what’s going to happen. What we do know is that the first step is to sit down and talk [with the partners], and then really understand the use case. If you have school administrators and students in the classroom, what are the specific tasks you actually want to help these people perform?
For teachers, that could mean a quick email with ideas on how to use Gemini in their lesson planning, formal classroom training, or one-on-one coaching. Various pilot programs will be run with 1,200 participants in total, with each group numbering around 100.
One way of looking at this is as just another feel-good upskilling scheme from a big company. Google in particular has been helping to upskill Brits for years with its digital training scheme, formerly branded as the company’s “Digital Garage”. To put it more cynically, teaching people how to use new technology by teaching them how to use your own tools is good business. Brits of a certain age will vividly remember “IT” or “ICT” classes as thinly veiled instruction in how to use Microsoft Office. People older and younger than me learned some basic computer programming. I learned how to use Microsoft Access.
In this case, though, something deeper is going on: Google needs to go beyond simply teaching people how to use AI and also run experiments to figure out exactly what to teach them. “This isn’t about a fundamental rethinking of how we understand technology; it’s about the little everyday things that make work a little more productive and a little more enjoyable,” Weinstein says. “Today, we have tools that make work a little easier. Those three minutes you save every time you write an email.
“Our goal is to make sure that everyone can benefit from technology, whether it’s Google technology or other companies’ technology. And I think the general idea of working together with tools that help make your life more efficient is something that everyone can benefit from.”
Ever since ChatGPT came out, the underlying assumption has been that the technology speaks for itself, and the fact that it literally speaks helps. But chat interfaces are trickier than they look. Even when you’re dealing with a real human being, getting the best out of them when you need help is a skill, and even more so when the only way to communicate with them is through text chat.
AI chatbots are not people. They are so unlike humans that it’s all the more difficult even to think about how they might fit into common work patterns. The pessimistic case against this technology isn’t “what if there’s no there there”: there is, hallucinations and nonsense aside. Rather, it’s a much simpler one: what if most people never bother to learn how to use them?
Mathsbot gold
Meanwhile, from another part of Google:
Computers were built to perform calculations faster than humans, but the highest levels of formal mathematics remain the sole domain of humans. Now a groundbreaking result from researchers at Google DeepMind has brought AI systems closer than ever to beating the best human mathematicians in their own field.
Two new systems, called AlphaProof and AlphaGeometry 2, worked together to tackle problems from the International Mathematical Olympiad, a worldwide math competition for high school students that has been held every year since 1959. Each Olympiad consists of six incredibly difficult problems covering subjects such as algebra, geometry and number theory, and winning a gold medal makes you one of the best young mathematicians in the world.
A word of warning: the Google DeepMind system solved “only” four of the six problems, and one of those was solved by a “neurosymbolic” system, which is less like the AI you might expect. And every problem was manually translated into a programming language called Lean, which lets the system read a formal description of the problem rather than having to parse human-readable text first. (Google DeepMind also tried using an LLM to do this part, but it didn’t work very well.)
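To make that translation step concrete, here is a minimal sketch of what a formal Lean statement looks like. This is a toy theorem of my own for illustration, not one of the Olympiad problems: the point is that both the claim and its proof are machine-checkable objects, so a system like AlphaProof can search for and verify proofs directly instead of interpreting natural language.

  -- A toy Lean 4 theorem (illustrative only, not from the DeepMind work):
  -- commutativity of addition on the natural numbers, proved by appealing
  -- to the core library lemma Nat.add_comm.
  theorem add_comm_example (a b : Nat) : a + b = b + a := by
    exact Nat.add_comm a b

Once a problem is stated in this form, “solving” it means producing a proof that Lean’s kernel accepts, which is what makes automated search feasible.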
But this is still a pretty big step. The International Mathematical Olympiad is difficult, and an AI has medalled at it. What happens when one wins the gold medal? Is there a big difference between being able to solve problems that only the best high school mathematicians can tackle and being able to solve problems that only the best undergraduates, graduate students and PhDs can solve? What changes when a branch of science is automated?
If you’d like to read the full newsletter, sign up to receive TechScape in your inbox every Tuesday.