When a physicist needs a burner phone, you know America is changing – John Naughton

The International Science Council has observed some intriguing trends recently. Certain American participants are opting to travel with a "burner" phone or a minimalist laptop running only a browser, much as security-conscious travellers to China did 15 years ago.

These scholars are keeping a close eye on the American political climate, particularly concerned about potential repercussions on their return. They have been reading Robert Reich's Substack, which highlights instances of scientists being barred from entering the US because of political opinions expressed in private messages.

Cases like Dr Rasha Alawieh's deportation despite a valid visa and a court order, and the attempt to deport the Columbia University graduate Mahmoud Khalil following a pro-Palestinian demonstration, are causing alarm in the academic community.

The Trump administration’s crackdown on pro-Gaza demonstrations and demands for the return of research funds from universities suggest a troubling trend of targeting academic institutions. This hostility towards universities, particularly elite ones, stems from a disdain for their wealth and independence.

The growing fear among US researchers of crackdowns on certain fields of research, driven by political ideology, recalls dark periods of history. Europe's response – offering refuge to American researchers at universities such as Aix-Marseille in France and VUB in Belgium – presents a glimmer of hope amid the uncertainty.

As the academic landscape faces shifting political tides, the question arises: what proactive measures are UK institutions taking to navigate these challenges? The future remains uncertain as academia grapples with evolving geopolitical dynamics.

What I’ve read

How Trump's Yemen texts were sent by mistake
Jeffrey Goldberg's amazing story in the Atlantic about the White House security leak.

Philosophy and paternity
Strange trends among western philosophers, explored in an interesting post by Doug Muir.

AI learned to reason…
…or did it? A nice explainer by Melanie Mitchell.

  • Do you have any opinions on the issues raised in this article? If you would like to submit a response of up to 300 words by email, to be considered for publication in our letters section, please click here.

Source: www.theguardian.com

Was Apple's rush to join the AI craze a misstep for Siri? | John Naughton

After ChatGPT broke cover in late 2022 and the tech industry embarked on its modern re-enactment of tulip mania, people began to wonder why Apple, the biggest tech giant of all, was keeping its distance from the insanity. Eventually, the tech commentariat decided that there were only two possible interpretations of the company's standoffishness: either it had been caught napping, or it had a cunning plan to unleash technology that would take the world by storm.

Finally, at its annual Worldwide Developers Conference (WWDC) held on 10 June last year, Apple came clean. Or appeared to. For Apple, "AI" does not mean what the vulgar hordes at OpenAI, Google, Microsoft or Meta rave about, but something altogether more refined, called "Apple Intelligence". It wasn't, as the veteran Apple-watcher John Gruber put it, a single thing or product, but "marketing terminology for a collection of features, apps and services". Putting everything under one memorable label made it easier for users to grasp that Apple was launching something genuinely novel. And, of course, it also made it easy for Apple to say that users who wanted all these flashy features should buy an iPhone 15 Pro, since older devices were not up to the task.

Needless to say, this columnist fell for it and upgraded. (Ah well, there's a sucker born every minute.) As kit, the new phone was impressive: a powerful new processor chip, a neural engine and more. And the camera turned out to be surprisingly good. The Apple Intelligence features enabled by the upgrade, however, seemed trivial and sometimes irritating. They started messing with my photo collection, for example, imposing unwanted categories on images in ways that simply got in the way. And then there was a new pre-installed app called Image Playground, which promised to "make communication and self-expression even more fun". That may be true for a four-year-old with a short attention span, but otherwise it was a turkey from central casting and should have been strangled at birth.

However, there was one feature that looked interesting and useful: a serious enhancement of Siri, Apple's attempt at a virtual personal assistant. From now on, the company announced, Siri would be able to deliver intelligence tailored to users and their on-device information. A user could say, for example, "Play that podcast Jamie recommended", or ask when a relative's flight is landing, and Siri would find the flight details and cross-reference them with real-time flight tracking to give an arrival time.

On closer examination, however, Siri proved unable to do any of these useful things, even running on my expensive new phone. In fact, it seemed mostly as mediocre as ever. And then, on 7 March, came an announcement from Apple: "We've also been working on a more personalized Siri, giving it more awareness of your personal context, as well as the ability to take action for you within and across your apps. It's going to take us longer than we thought to deliver on these features and we anticipate rolling them out in the coming year."

For Gruber, who knows more about Apple than anyone I know, this was a red rag to a bull. The announcement meant, he wrote, that what Apple had shown of the upcoming "personalized Siri" at WWDC was not a demo but a concept video – and "concept videos are bullshit, and a sign of a company in disarray, if not crisis". And Gruber has a long memory, so he reminded his readers that the last time Apple screened a concept video – the famous Knowledge Navigator video – the company was heading for bankruptcy. After Steve Jobs returned and turned it into the most profitable company in history, it never did anything like that again.

Until – says Gruber – now.

Is he overreacting? Answer: yes. Apple is not in any danger, but this mini-fiasco over Siri and Apple Intelligence does look like the first serious misstep of Tim Cook's management of the company. If there was one thing Jobs's Apple was famous for, it was never announcing a product before it was ready to ship. The company had clearly and seriously underestimated the amount of work needed to deliver what it promised for Siri last June. If it had stuck to the Jobs playbook, the time to announce the enhancements would have been June 2025 at the earliest. The company evidently forgot Hofstadter's law: it always takes longer than you expect, even when you take into account Hofstadter's law.

What I’ve read

A million monkeys…
ChatGPT can't kill anything worth saving: an amazing essay by John Warner on AI and writing.

A blessed machine?
AI: a means to an end, or a means to our end? Stephen Fry's first lecture for the Digital Futures Institute at King's College London, on our obsession du jour.

It's written on the cards
Jillian Hess's account of Carl Linnaeus's groundbreaking note-taking practice is illuminating.


To build Britain into a leading AI power, we must stand up to the tech giants | John Naughton

Sir Keir Starmer doesn't do vision. But last Monday he broke the habit of a lifetime with a speech at University College London. It was about AI, which he sees as "the defining opportunity of our generation". Britain, he declared, was "the nation of Babbage, Lovelace and Turing" and, of course, "the country that birthed the modern computer and the World Wide Web. Mark my words: Britain will be one of the great AI superpowers."

Stirring stuff, eh? Within days of taking office, the prime minister had asked Matt Clifford, a smart tech investor from central casting, to think about "how to seize the opportunities of AI". Clifford came up with the 50 recommendations of the AI Opportunities Action Plan, which Starmer has accepted in full, promising to put the weight of the British state behind it. He has also named Clifford his AI opportunities adviser, supervising the implementation of the plan and reporting directly to him. It's only a matter of time before the Sun is calling him "Britain's AI tsar".

Clifford's appointment is both predictable and puzzling. Predictable, because he has been hanging around government for a while: Rishi Sunak, for example, brought him in to help organise the AI Safety Summit and to set up the UK's AI safety unit. Puzzling, because he has already made so much money in tech that his register of external interests would make for a fairly long scroll. Several media and technology executives told the Financial Times they were concerned that Clifford, who founded a successful investment firm with offices around the world, was being given too much influence over AI policy.

Damian Collins, a former Conservative technology minister, said Clifford was "clearly a very capable person" but that he was concerned about "the balance of interests represented" and how that would be managed. If Starmer really believes that AI is a game-changing technology, it is strange that his chief adviser on it should have so much skin in the game.

Collins was referring to a particularly hot topic: the routine infringement of copyright by tech companies that train their AI models on the creative work of others without permission, acknowledgment or payment. The latest revelations about the practice come from newly unredacted documents in a US lawsuit, which show that the training dataset for Meta's Llama AI included a huge database of pirated books scraped from the internet.

Recommendation 24 of the plan calls for reform of the UK's text- and data-mining regime, and its claim that "the current uncertainty around intellectual property (IP) is hindering innovation and undermining our broader ambitions for AI and the creative industries" has made many people in those industries furious. "There is no 'uncertainty' in the UK's text and data mining regime," said the Creative Rights in AI Coalition. "UK copyright law does not allow text and data mining for commercial purposes without a licence. The only uncertainty is about who has been using Britain's creative crown jewels as training material without permission, and how they got hold of it."

Much of Clifford's plan seems sensible (albeit expensive): building a national computing infrastructure for AI, for example; improving universities' research capability; training tens of thousands of new AI professionals; promoting public-private partnerships to maximise the UK's stake in "frontier" AI; and ensuring robust technical and ethical standards to oversee the development and deployment of the technology.

All of this is a refreshing change from the empty bombast about "Global Britain" of the Johnson-Sunak-Truss era. The plan's stated ambition to position the UK as "an AI maker rather than an AI taker" suggests a candid recognition that Britain has real potential in this area but lacks the resources to realise that potential unaided. But making it happen means facing two uncomfortable truths.

The first is that this powerful technology is controlled by a small number of giant companies, none of which are based in the UK. Their power lies not only in their capital and human resources, but also in the vast physical infrastructure of data centers they own and manage. This means that any nation wishing to operate in this field must get along with them.


The UK government has much to learn in this regard. Its current attitude to these businesses is the sycophancy exhibited by the technology secretary, Peter Kyle, who has said that the government needs to show "a sense of humility" and deploy "statecraft" when dealing with the tech giants, rather than using the threat of new legislation to influence developments in areas such as frontier AI. In other words, the UK should treat these corporations as if they were nation states. Evidently Kyle doesn't realise that appeasement is the art of being nice to a crocodile in the hope that it will eat you last.

The other uncomfortable truth is that, powerful as AI may be, economists such as the Nobel laureate Daron Acemoglu believe its overall economic impact, at least in the short term, will be significantly smaller than the tech evangelists claim. Worse still, as the economist Robert Gordon once pointed out, general-purpose technologies take a long time to have a significant impact. The message for the prime minister is clear: becoming an "AI superpower" may take at least several electoral cycles.


#10 Reminder: online safety is not one-size-fits-all – John Naughton

London Fixed Gear and Single Speed (LFGSS) is a great online community of fixed-gear and single-speed cyclists in and around London. Sadly, this columnist is not eligible for membership: he doesn't live in (or near) the metropolis, and he needs a lot of gears to climb even the gentlest slopes. Which is why he admires those hardier cyclists who disdain the assistance of Sturmey-Archer or Campagnolo hardware.

But bad news is on the horizon: as of Sunday 16 March, LFGSS will be no more. Dee Kitchen, the gifted software wizard (and cyclist) who is the core developer of Microcosm – a platform for hosting non-commercial, privacy-friendly, accessible online forums such as LFGSS – has announced that on that day he will "delete the virtual servers hosting LFGSS and other communities, effectively immediately terminating the approximately 300 small communities that I run, as well as a few larger communities such as LFGSS".

Why is Kitchen doing this? Answer: he read the statement issued on 16 December by Ofcom, the regulator appointed by the government to enforce the provisions of the Online Safety Act (OSA): "Providers now have a duty to assess the risk of illegal harms on their services, with a deadline of 16 March 2025. Subject to the codes completing the parliamentary process, from 17 March 2025, providers will need to take the safety measures set out in the codes or use other effective measures to protect users from illegal content and activity. We are ready to take enforcement action if providers do not act promptly to address the risks on their services."

Hang on a moment, though. Isn't the OSA just about protecting children and adults from harmful content, bullying, pornography and the like? Surely it doesn't apply to discussions of fixed-gear bikes, cancer support, dog walking or rebuilding valve amplifiers? Strange as it may seem, the answer is no. The act requires services that handle user-generated content to have baseline content-moderation practices, to use those practices to remove reported content that breaks UK law, and to prevent children from seeing pornography. And it applies to every service that handles user-generated content and has "links to the UK".

Kitchen believes the online forums he hosts fall within the scope of the act and that, since he is based in the UK, there is "no way around it". "I can't afford to spend what is likely tens of thousands to go through the legal and technical hoops here over an extended period of time… the site itself barely gets a few hundred in donations each month, and it costs a little more than that to run… this is not a venture that can afford compliance costs… and what would remain is a disproportionately high personal liability for me, which could easily be weaponised by disgruntled people banned for egregious behaviour." Which is why he believes he has no choice but to shut the platform down.

Some may think he is overreacting, that common sense will prevail and that legal precedent will eventually emerge. But the OSA is a new piece of legislation – the meandering offspring of the 2019 online harms white paper and a chaotic passage through parliament at a time when the Conservative party was busy mismanaging the country. (One grizzled political insider described it to me as a "dog's breakfast".) In those circumstances, the cost of being an early test case would give anyone pause. I've been a blogger for decades, and from the beginning I decided not to allow comments on my blog, partly because I didn't want the burden of moderation, but also because I worried about the legal ramifications of what people might post. So in Kitchen's place, I would do exactly what he has decided to do.

Many years ago, at a Royal Society conference, I had an exchange with Tim Berners-Lee, the inventor of the World Wide Web. A conversation with a new Labour minister had just brought me up short: this guy thinks the web is the internet! And I told Tim so. "It's much worse than that," he replied. "Millions of people around the world think Facebook is the internet."

The root of the problem with the OSA is that it was framed and enacted by legislators who believe the "internet" consists only of the platforms run by a few big tech companies. So they passed a law supposedly to deal with these corporate thugs without imagining its unintended consequences for the actual internet – for people using the technology for purely social purposes. And in doing so, they ended up doing what Alexander Pope famously asked about in his epistle to Dr Arbuthnot of 1735: breaking a butterfly upon a wheel.

What I've read

Students at risk
Nathan Heller's long, thoughtful New Yorker essay on the plight of the humanities in American universities.


Don't entrust your life
A really subtle essay by LM Sacasas on what we can learn from the 20th-century cultural critic Lewis Mumford in the age of AI.

Musk meets Ross Perot
An incisive piece by John Ganz on two engineers who thought they understood politics.

  • Do you have an opinion on the issues raised in this article? If you would like to submit a letter of up to 250 words for consideration for publication, please email observer.letters@observer.co.uk.


The impact of AI in 2025: could it be the next-generation spreadsheet?

2024 was the year of large language models (LLMs); 2025 looks set to be the year of AI "agents" – semi-intelligent systems that use LLMs to go beyond the usual trick of generating plausible text in response to prompts. The idea is that you give an agent a high-level (or even vague) goal, and it breaks that goal down into a sequence of actionable steps. Once it has "understood" the goal, it can make a plan to achieve it, much as a human would.
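The goal-decomposition loop described above can be sketched in a few lines. This is a toy illustration only, not any vendor's actual agent framework: the `plan` and `act` functions are hypothetical stand-ins for what would, in a real agent, be calls to an LLM or to external tools.

```python
# Toy sketch of the agent pattern: a goal is decomposed into steps,
# each step is executed, and the results accumulate as context.
# plan() and act() are stubs standing in for LLM/tool calls.

def plan(goal: str) -> list[str]:
    # Stand-in for an LLM call that breaks a goal into steps.
    return [f"step {i} towards: {goal}" for i in range(1, 4)]

def act(step: str, context: list[str]) -> str:
    # Stand-in for an LLM/tool call that carries out one step,
    # with the results so far available as context.
    return f"done ({step})"

def run_agent(goal: str) -> list[str]:
    context: list[str] = []
    for step in plan(goal):
        context.append(act(step, context))
    return context

results = run_agent("book a flight")
print(len(results))
```

The point of the structure is that the loop, not the human, decides what happens next: each completed step feeds back into the context used for the following one.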

OpenAI's chief financial officer, Sarah Friar, recently explained it to the Financial Times: "It could be a researcher, or a helpful assistant for everyday people – a working mom like me. In 2025 we will see the first highly successful agents deployed that help people in their day-to-day lives. It's like having a digital assistant.

"It doesn't just react to your instructions; it can learn, adapt and, perhaps most importantly, take meaningful actions to solve problems on your behalf."

In other words, Miss Moneypenny on steroids.

So why are these automated Moneypennies suddenly being hailed as the next big thing? Might it have something to do with the fact that, even though the tech industry has spent trillions of dollars building huge LLMs, a reasonable return on that investment is still nowhere in sight? This is not to say that LLMs are useless. They are extremely useful for people whose work involves language, and very useful indeed for computer programmers. But for many industries, at the moment, they still look like a solution in search of a problem.

With the advent of AI agents, that could change. LLMs could prove attractive as building blocks for virtual agents that can efficiently perform many of the complex task sequences that constitute the "work" of any organisation. Or so the tech industry thinks. And so, of course, does McKinsey, the consulting giant that supplies the hymn sheet from which every CEO subconsciously sings. With agentic AI, McKinsey burbles, "we are moving from thinking to doing", as "AI-enabled 'agents' that use foundation models to execute complex multi-step workflows across the digital world" are adopted.

If that really happens, we may need to rethink our assumptions about how AI will change the world. At the moment, we are mostly preoccupied with what the technology can do for individuals or for humanity (or both). But if McKinsey's claims are correct, the deeper long-term effects could come through the way AI agents transform companies – which are, after all, machines for managing complexity and turning information into decisions.

The political scientist Henry Farrell, a shrewd observer of these things, suggests why. LLMs, he argues, are engines "for summarising and making useful vast amounts of information". Since information is the lifeblood of their operations, large organisations will adopt any technology that offers a more intelligent, contextual way of processing information – as opposed to the mere data they currently process. As a result, Farrell says, companies will adopt LLMs in ways that seem boring and technical except to those immediately involved, but that are actually important, for better or worse. Big organisations shape our lives! As they change, our lives will change too, in a myriad of unexciting-seeming but significant ways.

At one point in his essay, Farrell likens this "boring and technical" transformative impact of LLMs to the way the humble spreadsheet reshaped large organisations. Here he draws on a classy explanation by the economist and former stock analyst Dan Davies, whose book The Unaccountability Machine was one of the nicest surprises of the year. Davies points out that spreadsheets enabled entirely new ways of working in the financial industry, in two ways. First, they allowed the construction of bigger and more detailed financial models, enabling new approaches to budgeting, business planning, the evaluation of investment options and so on. Second, the technology made iteration easy: "Instead of thinking about what assumptions made the most business sense and then sitting down to project them, Excel [Microsoft's spreadsheet product] made it easy simply to put the projections down and tweak the assumptions up or down until you got an answer you were happy with. And, more to the point, an answer your boss would be happy with."
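The practice Davies describes – nudging an assumption until the projection hits the desired number – is what spreadsheet users call goal-seeking. A minimal sketch, with entirely invented figures, makes the mechanic plain:

```python
# Toy "goal seek": nudge a growth assumption up until the projected
# year-5 revenue reaches a target, much as Davies describes analysts
# doing in Excel. All figures here are invented for illustration.

def year5_revenue(base: float, growth: float) -> float:
    # Compound the base revenue over five years at the assumed rate.
    return base * (1 + growth) ** 5

def tweak_until_happy(base: float, target: float) -> float:
    growth = 0.0
    while year5_revenue(base, growth) < target:
        growth += 0.001  # nudge the assumption up a notch
    return round(growth, 3)

g = tweak_until_happy(base=100.0, target=200.0)
print(g)
```

Note that the logic runs backwards from the answer to the assumption, which is exactly Davies's point about what the tool encourages.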

The moral of the story is clear. Spreadsheets were as revolutionary a technology when they first appeared in the late 1970s as ChatGPT was in 2022, yet they are now a routine and integral part of organisational life. The emergence of AI "agents" built from models like GPT looks set to follow a similar pattern. And as organisations absorb them, they too will evolve. In time, the world may rediscover the famous dictum of Marshall McLuhan's colleague John Culkin: "We shape our tools, and thereafter our tools shape us."

What I've read

The economics story
The transcript of a fascinating interview with the renowned economist Ha-Joon Chang on economics, pluralism and democracy.

Sceptical about AI?
"The phony comforts of AI skepticism": an energetic essay by Casey Newton on the two "camps" in the AI debate.

Trump's next move
"I have a cunning plan…": Charlie Stross's blog post sketching a truly dystopian scenario for the aftermath of Trump's inauguration.


Elon Musk may not be America's new king, but he could be its new Thomas Cromwell.

Picture the scene at Mar-a-Lago on election night, as it became evident that Trump had won. It was chaotic. Trump is an expansive man, and he was surrounded by members of his formidable clan – plus one other individual. In his victory speech, the president-elect praised his campaign team, his running mate and his family, each receiving a brief tribute.

But "AN Other" was allocated a full four minutes. He was Elon Musk, the wealthiest person in history, whom Trump hailed as a "super genius", "a unique individual" and "a star". Musk had jetted straight in from Texas on his Gulfstream to bask in the adulation of his new master. He had, after all, invested hundreds of millions of dollars, and a month of his time, to be there. Now his moment had arrived.

Let that sink in. More on this later.

Now consider what Musk's counterparts in Silicon Valley were thinking as they sat composing sly congratulatory notes to Donald. And trust me, their chagrin was palpable. The mood in the Valley was one of frustration: the tech titans had spent months working out how to curry favour with Trump in case he won. Then along came Musk, who sidestepped them all and walked straight into the inner circle of the new administration. It must have been exasperating.

Since then, things have only got worse for them. Trump has appointed Musk and the aspiring mogul Vivek Ramaswamy to lead his "department of government efficiency" (dubbed "Doge" after Musk's favourite cryptocurrency, dogecoin). The pair are tasked with a concerted effort to slash regulations, bureaucracy and spending across the federal government. "Together, these two wonderful Americans will pave the way for my administration to dismantle government bureaucracy, slash excess regulations, cut wasteful expenditures, and restructure federal agencies," proclaimed their new chief.

Perhaps he was impressed by Musk's claim that he could cut at least $2tn from the $6.8tn federal budget, and by Ramaswamy's pledge, made during his unsuccessful bid for the Republican nomination, to abolish the FBI, the Department of Education and the Nuclear Regulatory Commission.

Although the new entity is labelled a "department", it is not a government agency. If it were, Musk would face a slew of conflicts of interest, and potential legal challenges, were he to start curtailing the regulators with whom he currently clashes – among them the Federal Aviation Administration, the National Labor Relations Board, the Securities and Exchange Commission, the Federal Communications Commission and the Federal Trade Commission. Moreover, his various companies secured $3bn in government contracts from 17 federal agencies last year. By operating "outside" the system, however, he will have far more leeway to make cuts as he sees fit.


In 2018, the author Michael Lewis published The Fifth Risk, a remarkable book about the consequences of Trump's political appointments during his first term, focusing on three government departments: Energy, Agriculture and Commerce. Lewis said the book grew out of his curiosity about the lesser-known branches of government, whose real work, he discovered, mostly revolved around protecting people and keeping society safe.

If Musk's track record is any guide, such safety concerns will not detain him. After a Delaware court ordered him to go through with his acquisition of Twitter, he promptly laid off about 6,500 employees – around 80% of the workforce, by his own estimate. Among those fired were the people charged with moderating content on the platform to maintain some semblance of "safety". Their departure left an open platform that attracted anti-wokeists, white supremacists, misogynists, conspiracy theorists and other denizens of an alternative reality. He also tweaked the platform's algorithms to prioritise his own posts to its 200 million users, effectively turning it into a broadcasting service for his political views and preferences.

In backing Trump, Musk went all in, much as he did years ago when, facing production problems with the Tesla Model 3, he claimed to have slept at the factory for weeks. He relocated to Pennsylvania for the final month of the campaign, actively engaging with supporters and raising the campaign's visibility, particularly in rural areas.

In essence, he has made himself indispensable to Trump – which may be a problem for him down the line. Narcissists hate being indebted to anyone, no matter how great the assistance. As Thomas Cromwell's central role in Henry VIII's court in the 1530s – depicted in Wolf Hall: The Mirror and the Light – reminds us, proximity to power does not always end well. History may not repeat itself but, as the saying attributed to Mark Twain has it, this time it might rhyme.

What I've read

The narrow path from despair
Diane Coyle's beautifully concise review of Sam Freedman's book Failed State: Why Nothing Works and How We Fix It.

Congratulations, boss
The Verge's compilation of all the nauseatingly sycophantic messages the tech giants sent to the incoming president.

Reasons to keep going
An insightful discussion on 404 Media of why its mission remains crucial under a Trump administration – and why honest journalism is needed now more than ever.


Renowned AI pioneer Geoffrey Hinton honoured as "godfather of AI" – an offer too good to refuse

Back in 2011, Marc Andreessen, a venture capitalist with dreams of becoming a public intellectual, published an essay titled "Why Software Is Eating the World", in which he predicted that computer code would take over large swaths of the economy. Now, 13 years on, software seems to be eating academia too. At any rate, that is one conclusion to be drawn from the fact that the computer scientist Geoffrey Hinton shares the 2024 Nobel prize in physics with John Hopfield, while the computer scientist Demis Hassabis shares half of the Nobel prize in chemistry with one of his DeepMind colleagues, John Jumper.

In some ways, the awards to Hassabis and Jumper were predictable, because they built the machine – AlphaFold 2 – that enabled researchers to crack one of the hardest problems in biochemistry: predicting the structure of proteins, the building blocks of biological life. Their machine has predicted the structure of virtually all of the 200m proteins that researchers have identified. So it was a big deal for chemistry.

But Hinton is not a physicist. Indeed, he was once introduced at an academic conference as someone who "failed at physics, dropped out of psychology, and then joined a field with absolutely no standards: artificial intelligence". (After graduating, he worked as a carpenter for a year.) But he is the man who found a way ("backpropagation") of training neural networks – one of the two keys that opened the door to machine learning and sparked the current AI frenzy. (The other was the transformer model, published by Google researchers in 2017.)
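For readers wondering what "backpropagation" actually does, here is a deliberately tiny sketch of the textbook idea – not Hinton's own formulation, just the core mechanic: an output error is propagated backwards through the chain rule to a weight, which is then nudged downhill.

```python
# Minimal illustration of the idea behind backpropagation: train a
# single weight w so that w * x approximates a target, by following
# the gradient of the squared error backwards to the weight.

def train(x: float, target: float, steps: int = 200, lr: float = 0.1) -> float:
    w = 0.0
    for _ in range(steps):
        y = w * x                 # forward pass
        error = y - target        # how wrong the output is
        grad_w = 2 * error * x    # chain rule: d(error**2)/dw
        w -= lr * grad_w          # nudge the weight downhill
    return w

w = train(x=2.0, target=6.0)
print(round(w, 3))  # w should approach 3.0, since 3.0 * 2.0 == 6.0
```

A real network repeats exactly this move across millions of weights and many layers, with the chain rule carrying the error signal backwards through each one.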

So where's the physics in all this? That comes from Hopfield, who shares the award with Hinton. "Hopfield nets and their further development, called Boltzmann machines, were based on physics," Hinton explained to the New York Times. "Hopfield nets used an energy function, and the Boltzmann machine used ideas from statistical physics. So that stage in the development of neural networks did rely heavily on ideas from physics."

Fair enough. But the media invariably describe Hinton as the "godfather of AI", a tag with vaguely sinister overtones. In reality, he is the exact opposite: tall, affable, courteous, intelligent and blessed with a dry, sometimes acerbic wit. Asked by Cade Metz how he reacted on hearing news of the award, he said he was "shocked, surprised and appalled" – which, I suspect, is what most people would say. But in 2018 he shared the Turing award, computer science's equivalent of the Nobel, with Yoshua Bengio and Yann LeCun for their work on deep learning. So he was always in the top league; it's just that there is no Nobel prize for computer science. Given the way software is eating the world, perhaps that should change.

There's an old joke that the secret of becoming a Nobel laureate is to outlive your rivals. Hinton, now 77, has clearly taken note. But in fact what is most admirable about him is his persistence in believing in the potential of neural networks as the key to machine intelligence, long after the idea had been written off by the profession. Given the way academia works, that required an extraordinary amount of determination and self-belief, especially in a fast-moving field like computer science. Perhaps what sustained him through the dark times was the thought that his great-great-grandfather was George Boole, the 19th-century mathematician who invented the logic that underpins all of this digital stuff.

We should also think about the impact such awards have on their recipients. When news of Hinton's prize broke, I thought of Seamus Heaney, who won the literature prize in 1995 and described the experience as like being hit by “an almost benign avalanche”. Note the “almost”. One consequence of a Nobel is that the recipient instantly becomes public property, and everyone wants a piece of them. “All I'm doing these days is ‘going to work’,” Heaney wrote resignedly to a friend in June 1996, “and this will continue for weeks and months yet… Whatever the final outcome of the Stockholm effect, its immediate result is a desire to give it all up and start again with a different persona (within myself).”

So… note to Geoff: congratulations. And manage your calendar.

What I've been reading

Talk like this
Is chatting with a bot a conversation? A wonderful New Yorker essay by the historian Jill Lepore on interacting with GPT-4o's Advanced Voice Mode.

Interesting times…
The October 2, 2024 edition of Heather Cox Richardson's essential Substack blog is a gem.

A real page-turner
The elite college students who can't read books: an interesting report in the Atlantic by Rose Horowitch.

Source: www.theguardian.com

AI cheating is a growing problem in education, but teachers shouldn’t lose hope | Opinion piece by John Naughton

The start of term is fast approaching. Parents are starting to worry about packed lunches, uniforms, and textbooks. School-leavers heading to university are wondering what welcome week will be like for new students. And some professors, especially in the humanities, are anxiously wondering how to handle students who are already more adept with large language models (LLMs) than they are.

They have good reason to worry. As Ian Bogost, a professor of film and media studies and of computer science at Washington University in St Louis, puts it: “If the first year of AI college ended in a feeling of dismay, the situation has now devolved into absurdity. Teachers struggle to continue teaching while wondering whether they are grading students or computers. Meanwhile, the arms race between AI cheating and AI-cheating detection continues unabated.”

As expected, that arms race is already intensifying. The Wall Street Journal recently reported that “OpenAI has a method to reliably detect when someone uses ChatGPT to write an essay or research paper, but the company has not released it despite widespread concerns that students are using artificial intelligence to cheat.” The refusal has infuriated a sector of academia that fondly imagines there must be a technological fix for this “cheating” problem. Apparently they have not read the Association for Computing Machinery's statement of principles on developing generative AI content-detection systems, which states that “reliably detecting the output of a generative AI system without an embedded watermark is beyond the current state of the art and is unlikely to change within any foreseeable timeframe”. Digital watermarks can help, but they bring problems of their own.
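The ACM's distinction – detection is tractable with an embedded watermark, hopeless without one – is easy to illustrate with a deliberately simplified “green list” scheme, loosely inspired by published watermarking proposals. Everything here (the hash-based vocabulary split, the threshold idea) is illustrative, not any vendor's actual method.

```python
import hashlib

def is_green(token: str) -> bool:
    # Deterministically split the vocabulary in half with a hash.
    # A watermarking generator would bias its sampling towards "green" tokens.
    return hashlib.sha256(token.lower().encode()).digest()[0] % 2 == 0

def green_fraction(text: str) -> float:
    tokens = text.split()
    return sum(is_green(t) for t in tokens) / max(len(tokens), 1)

# A detector flags text whose green fraction sits improbably far above 0.5.
# Ordinary human text hovers near 0.5; without the embedded bias there is
# no such statistical handle to grab -- which is exactly the ACM's point.
sample = "the quick brown fox jumps over the lazy dog again and again"
print(0.0 <= green_fraction(sample) <= 1.0)  # → True
```

The catch hinted at in the column: paraphrasing or translating watermarked text scrambles the token statistics, so even this handle is fragile.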

LLMs pose a particularly pressing problem for the humanities, because the essay is a critical pedagogical tool for teaching students how to research, think, and write. Perhaps more to the point, the essay also plays a central role in grading. Unfortunately, the LLM threatens to make this venerable pedagogy unviable, and there is no technological fix in sight.

The good news is that the problem is not insurmountable if educators in these fields are willing to rethink and adapt their teaching methods to fit new realities. Alternative pedagogies are available. But it will require two changes of thinking, if not a change of heart.

First, LLMs are, as the eminent Berkeley psychologist Alison Gopnik says, “cultural technologies”, just like writing, printing, libraries, and internet search. In other words, they are tools that humans use to augment themselves, not replacements for humans.

Second, and perhaps more important, the significance of writing as a process needs to be reinstated in students' minds. It was EM Forster, I think, who observed that there are two kinds of writers: those who know what they think and write it, and those who find out what they think by trying to write. Most of humanity belongs to the second camp, which is why the process of writing is so good for the intellect. It teaches you how to construct a coherent line of argument, select relevant evidence, find useful sources and inspiration, and, most importantly, express yourself in clear, readable prose. For many people that doesn't come easily or naturally – which is why students turn to ChatGPT even when they are only asked to write 500 words introducing themselves to their classmates.

Josh Brake, an American academic who writes intelligently about our relationship with AI, believes that rather than trying to “integrate” AI into the classroom, it is worth making the value of writing as an intellectual activity fully clear to students. Think about it: if writing is presented as mere output, it is only natural that students will want to outsource the labour to an LLM. And if writing (or any other task) really were just about the deliverable, why not? If the means to an end don't matter, outsourcing is rational.

Ultimately, the problems that LLMs pose for academia are solvable, but solving them will require fresh thinking and different approaches to teaching and learning in some fields. The bigger problem is the glacial pace at which universities move. I know this from experience. In October 1995, the American scholar Eli Noam published a remarkably prescient article – “Electronics and the Dim Future of the University” – in Science. Between 1998 and 2001, I asked every vice-chancellor and senior university leader I met in the UK what they made of it. Not one of them had read it.

Still, things have improved since then: at least now everyone knows about ChatGPT.

What I'm Reading

Online crime
Ed West has an interesting blog post about the man found guilty over online posts made during the unrest that followed the Southport stabbings; it highlights contradictions in the British judicial system.

Bannon's “dharma”
Here is an interesting Boston Review interview with the documentary film-maker Errol Morris, discussing Steve Bannon's dangerous “dharma”: his conviction that he is part of the inevitable unfolding of history.

Online forgetting
A sobering article by Niall Firth in MIT Technology Review on efforts to preserve digital history for future generations amid an ever-growing universe of data.

Source: www.theguardian.com

Tech Giants’ Disregard for Democracy Seen in Resistance to Delivery Drones | by John Naughton

Scratch a digital capitalist and you'll find a technological determinist: someone who believes that technology drives history. These individuals see themselves as agents of what Joseph Schumpeter famously called “creative destruction”. They take pleasure in “moving fast and breaking things”, as Facebook founder Mark Zuckerberg once put it – at least until their advisers convince them that this is not a good look, not least because it means taxpayers are left to bear the consequences.

Technological determinism is, in fact, an ideology that shapes your thinking even when you're not consciously aware of it. It thrives on a narrative of technological inevitability: whether we like it or not, the new thing is coming, and there is nothing we can do about it. As LM Sacasas explains, “Every claim of inevitability serves a purpose, and narratives of technological inevitability serve as a convenient shield for tech companies to achieve their desired outcomes, minimize opposition, and persuade consumers that they are embracing a future that may not be desirable but is deemed necessary.”

For this narrative of inevitability to take hold with the general public and result in widespread adoption of the technology, however, politicians must eventually endorse it too. That is exactly what is happening now with AI, though the long-term implications remain unclear. Some of the signs are troubling – think of Rishi Sunak's cringeworthy on-stage fawning over the world's richest man, Elon Musk, or Tony Blair's recent televised heart-to-heart with Demis Hassabis, the co-founder of Google DeepMind.

It's refreshing, then, to encounter an article that explores the clash between deterministic myths and democratic realities: “Resisting Technological Inevitability: Google Wing Delivery Drones and the Battle for Our Skies”, a noteworthy academic paper soon to be published in the venerable journal Philosophical Transactions of the Royal Society A. Written by Anna Zenz of the University of Western Australia's School of Law and Julia Powles of its Technology & Policy Lab, the paper recounts how a major tech firm attempted to dominate a new market with a promising technology – delivery drones – without regard for the societal repercussions, and how a proactive, resourceful, and determined public thwarted the corporate agenda.

The company in question is Wing, a subsidiary of Google's parent Alphabet. Its mission is to develop delivery drones that can transport all manner of goods, including emergency medical supplies, creating a new commercial industry premised on broad access to the skies. Nowhere is this more evident than in Australia, home to Google's largest drone operation in terms of deliveries and customers reached – an operation endorsed by both state and federal governments, with the federal government leading the way.

Zenz and Powles argue that by persuading Australian politicians to allow the testing of an Aerial Deliveroo-like service (under the guise of an “experimental” initiative), Google heavily relied on the myth of inevitability. Officials who already believed in the inevitability of delivery drones saw the potential benefits of embracing this trend and offered their support, either passively or actively. The company then leveraged the perception of inevitability to obtain “community acceptance,” manipulating the public into silence or passive tolerance by claiming that delivery drones were an inevitable progression.

One of the test sites for this project was Bonython, a Canberra suburb where the trial commenced in July 2018. However, the project faced immediate challenges. Numerous residents were perturbed and bewildered by the sudden appearance of drones in their neighborhood. They expressed outrage over the drones’ impact on their community, local wildlife, and the environment, citing issues like unplanned landings, dropped cargo, drones flying near traffic, and birds attacking and disrupting the drones.

While many communities might have simply grumbled and overlooked these issues, Bonython took a different approach. A group of proactive residents, including a retired aviation law expert, established a dedicated online presence, distributed newsletters, conducted door-to-door outreach, engaged with politicians, contacted media outlets, and submitted information requests to local authorities.

Their efforts eventually paid off. In August 2023, Wing quietly announced the end of its operations in the Canberra region. The decision not only terminated the project but also triggered a parliamentary inquiry into drone delivery systems, scrutinising pilot training, economic implications, regulatory oversight, and the environmental impacts of drone delivery. The inquiry exposed how uncritically public officials had accepted the myth of inevitability, and prompted the kinds of questions that regulators and governments should always ask when tech companies champion “innovation” and “progress”.

To echo Marshall McLuhan in a different context: “There is absolutely no inevitability as long as there is a willingness to contemplate what is happening.” Public resistance to the myth of inevitability should always be encouraged.


What I’m Reading

The Thinker’s Work
John Gray's fascinating New Statesman essay on Friedrich Hayek, one of the 20th century's most enigmatic thinkers.

Turn the page
Feeling pessimistic? Check out what Henry Oliver has to say in this insightful essay.

A whole new world
Science fiction writer Karl Schroeder shares some provocative blog posts contemplating the future.

Source: www.theguardian.com

Silicon Valley Trump supporters rally behind the decline of democracy | John Naughton

In How Democracy Ends, his elegant book published after Trump's 2016 election, David Runciman made a startling point: the liberal democracies we take for granted will not last for ever, but neither will they fail in the ways we have seen in the past – through revolution, military coup, or a breakdown of social order. Instead, they will fail forwards, in unexpected ways. The implication was that people who compare what is happening now with 1930s Germany are mistaken.

Until a few weeks ago, that seemed like wise counsel. But then something changed: key sectors of Silicon Valley, a Democratic stronghold for decades, began to back Trump. In 2016, the contrarian billionaire and PayPal co-founder Peter Thiel was the only prominent Silicon Valley figure to endorse Trump, which merely confirmed his status as the valley's licensed outlier. But in recent weeks many of Silicon Valley's bigwigs – Elon Musk, Marc Andreessen, and David Sacks, to name just three – have revealed themselves as Trump supporters and donors. Musk has set up a pro-Trump political action committee (super PAC) and is funding it. And on June 6, the venture capitalist Sacks hosted a $300,000-a-head fundraising dinner for Trump at his San Francisco mansion.

Why the sudden interest in politics? It’s probably a combination of several factors. First, Biden’s billionaire tax plan (and his administration’s antitrust litigation enthusiasm). Second, Trump’s newfound enthusiasm for cryptocurrency. Third, Biden has raised far more money for his campaign. And finally, and most importantly, Trump’s momentum was beginning to look unstoppable even before Biden dropped out.

The last two factors are reminiscent of the 1930s. In 1932, the Nazi party was in serious financial trouble, and when Hitler became chancellor the following year he personally appealed to business leaders for help. Funds were raised from 17 different business groups, with the largest donations coming from IG Farben and Deutsche Bank. At the time, those donations must have seemed a shrewd bet to the businessmen who made them. But as the historian Adam Tooze wrote in his landmark book on the period, it also meant that German businessmen “were willing to cooperate in the destruction of German political pluralism”. In return, according to Tooze, German business owners and managers were given unprecedented powers over their employees: collective bargaining was abolished and wages were frozen at relatively low levels, while corporate profits and business investment grew rapidly. Fascism was good for business – while it lasted.

I wonder whether any of these thoughts crossed the minds of the tech titans enjoying their $300,000 dinner in San Francisco that June evening. My guess is that they didn't. Silicon Valley folk don't do history: they are in the business of creating the future, so the past has nothing to teach them.

That's a pity, because history has some lessons for them. The German businessmen who decided to back Hitler in 1933 may not have known exactly what he had in store for Germany, and probably knew nothing of the plans for the “final solution”. But David Sacks's dinner guests have no such excuse: Project 2025, the plan for Trump's second term, is available online as a 900-page document.

It makes interesting reading. It has four declared objectives: protecting children and families, dismantling the administrative state, defending borders, and restoring “God-given” individual liberties. But essentially it amounts to a huge expansion of presidential power, with many hair-raising proposals: putting the Department of Justice under presidential control, replacing nonpartisan civil servants with loyalists, rolling back environmental laws, mass deportations, and deleting the terms “sexual orientation and gender identity, diversity, equity and inclusion, gender, gender equality, gender equity, gender sensitivity, abortion, reproductive health and reproductive rights” from every federal rule, agency regulation, contract, grant, and piece of legislation.

The rationale for Project 2025 was a concern that Trump arrived in power in 2016 with no idea how to use presidential power, and a determination that the same will not be true next time. As public alarm about the document has grown, Trump has tried to distance himself from it. That may be because he reckons he won't need a plan if he is elected. Speaking recently at a Christian convention in Florida, he said: “Get out and vote, just this time. You won't have to do it any more. Four more years, you know what? It'll be fixed, it'll be fine. You won't have to vote any more, my beautiful Christians.”

The lesson? Be careful what you wish for. Take note, Silicon Valley.


What I’m Reading


Where to start?
Tim Harford's “How do we fix Britain? Here's how” in the Financial Times.

False balance
A thoughtful Substack post by the historian Timothy Snyder on “two-sidedness”, the harmful delusion of mainstream media.

In the Ether
A sceptical post in Molly White's newsletter, Citation Needed, on what happens when cryptocurrency policy becomes an election issue.

Source: www.theguardian.com

My latest iPhone symbolizes stagnation, not progress. Artificial intelligence faces a similar future | John Naughton

Recently I bought an iPhone 15 to replace my five-year-old iPhone 11. The new phone has the A17 Pro chip and a terabyte of storage, and is accordingly eye-wateringly expensive. Naturally, I had carefully rehearsed my reasons for splashing out on such a scale. I have always had a policy of writing only about devices I have bought with my own money (no freebies from tech companies). The fancy A17 processor will be needed to run the “AI” features that Apple has promised to launch soon. The phone also has a significantly better camera than my old one, which matters (to me) because my Substack blog comes out three times a week and each edition carries new photographs. And finally, a friend whose ancient iPhone is nearing the end of its life might be glad of an iPhone 11 in good condition.

But these are rationalisations rather than reasons. In truth, my old iPhone was perfectly fine for what I used it for. Sure, it would eventually have needed a new battery, but otherwise it had years of life left in it. And if you look objectively at the evolution of the iPhone line, it has been a steady series of incremental improvements since the iPhone 4 in 2010. What was so special about that model? Mainly its front-facing camera, which opened up the world of selfies, video chat, social media, and all the other accoutrements of a networked existence. What followed was incremental change and rising prices.

This doesn’t just apply to the iPhone, but to smartphones in general; manufacturers like Samsung, Huawei, and Google have all followed the same path. The advent of smartphones, which began with the release of the first iPhone in 2007, marked a major break in the evolution of mobile phone technology (just ask Nokia or BlackBerry if you doubt that). A decade of significant growth followed, but the technology (and market) matured and incremental changes became the norm.

Mathematicians have a name for this process: they call it a sigmoid function, and they depict it as an S-shaped curve. If you apply this to consumer electronics, the curve looks like a slightly flattened “S,” with slow progress on the bottom, then a steep upward curve, and finally a flat line on the top. And smartphones are on that part of the curve right now.
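The shape is easy to see numerically. A minimal illustration (the sample points are arbitrary): step along the logistic sigmoid and watch how much each step gains – tiny at the bottom, large in the middle, tiny again on the plateau.

```python
import math

def sigmoid(x: float) -> float:
    # Logistic function: flat at the bottom, steep in the middle, flat on top.
    return 1 / (1 + math.exp(-x))

# How much does the curve rise over each successive interval of width 2?
gains = [round(sigmoid(x + 2) - sigmoid(x), 3) for x in (-6, -2, 2, 6)]
print(gains)  # the early and late gains are tiny; the middle gains are big
```

On this analogy, an industry near the top of the curve spends more and more effort for smaller and smaller gains – which is the column's point about smartphones.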

If we look at the history of the technology industry over the past 50 years or so, we see a pattern: first there’s a technological breakthrough: silicon chips, the Internet, the Web, mobile phones, cloud computing, smartphones. Each breakthrough is followed by a period of intense development (often accompanied by an investment bubble) that pushes the technology towards the middle of the “S”. Then, eventually, things settle down as the market becomes saturated and it becomes increasingly difficult to fundamentally improve the technology.

You can probably see where this is going. So-called “AI” has already had its breakthroughs: first the emergence of the “big data” generated by the web, social media, and surveillance capitalism; then the rediscovery of a powerful algorithmic idea (neural networks); then, in 2017, the invention of the “transformer” deep-learning architecture; and finally the large language models (LLMs) and other generative AI of which ChatGPT is the prime example.

We have now passed through the period of frenzied development and massive corporate investment (with unclear returns) that pushed the technology up into the middle of the sigmoid curve. Which raises an interesting question: how far up the curve has the AI industry climbed, and when will it reach the plateau where smartphone technology currently languishes?

In recent weeks, we have started to see signs that this moment is approaching. The technology is becoming commoditised: AI companies are releasing smaller and (allegedly) cheaper LLMs, partly – though they won't admit it – because the energy costs of the technology are ballooning. Economists are beginning to murmur about the industry's swelling, irrational hype. Millions of people have tried ChatGPT and its ilk, but few have shown any lasting interest. Nearly every large company on the planet has run an AI “pilot” project or two, but very few have made real deployments. And today's sensation is starting to look, well, a bit boring – rather like the latest shiny smartphone.

Source: www.theguardian.com

America needs to prioritize AI development like the Manhattan Project – John Naughton

Ten years ago, the Oxford philosopher Nick Bostrom published Superintelligence, a book exploring how superintelligent machines might be built and what the implications of such technology might be – one being that such a machine, if ever built, would be difficult to control and might even take over the world in order to achieve its goals (which, in Bostrom's famous thought experiment, was to make paperclips).

The book was a huge hit, generated lively debate, but also attracted a fair amount of opposition. Critics complained that it was based on an overly simplistic view of “intelligence,” that it overestimated the likelihood of the imminent emergence of superintelligent machines, and that it offered no credible solutions to the problems it raised. But the book had the great merit of forcing people to think about possibilities that had previously been confined to academia or the fringes of science fiction.

Ten years on, another thinker has taken aim at the same target. This time, though, it's not a book but a massive essay – “Situational Awareness: The Decade Ahead” – by Leopold Aschenbrenner, a young man of German origin who now lives in San Francisco and hangs out with Silicon Valley's intellectual elite. On paper, he sounds like a Sam Bankman-Fried-type whiz-kid: a maths prodigy who graduated from a prestigious US university as a teenager, spent time at Oxford's Future of Humanity Institute, and worked on OpenAI's “superalignment” team (now disbanded). He has since founded an investment firm focused on artificial general intelligence (AGI), with funding from Stripe founders Patrick and John Collison – two smart guys who don't back losers.

So Aschenbrenner is smart, but he also has skin in the game. That second point may be relevant, because the gist of his lengthy essay is essentially that superintelligence is coming (with AGI as a stepping stone), and the world isn't ready for it.

The essay comes in five sections. The first charts the path from GPT-4 (where we are now) to AGI (which Aschenbrenner thinks could arrive as early as 2027). The second follows the hypothetical path from AGI to genuine superintelligence. The third describes four “challenges” that superintelligent machines would pose to the world. The fourth outlines what he calls “the project” needed to manage a world with (or dominated by) superintelligent machines. The fifth is his message to humanity, in the form of three “tenets” of “AGI realism”.

In his view of how AI will progress in the near future, Aschenbrenner is essentially an optimistic determinist: he extrapolates the recent past on the assumption that current trends will continue. Where he sees an upward curve, he extends it. He grades LLMs by their capabilities: GPT-2 was at “preschooler” level, GPT-3 at “elementary schooler” level, and GPT-4 at “smart high schooler” level; and, given the massive ongoing increase in computing power, by 2028 we should expect “models as smart as PhDs or experts that can work beside us as co-workers”. (Why, incidentally, do AI boosters always regard PhDs as the epitome of human perfection?)

After 2028 comes the big leap from AGI to superintelligence. In Aschenbrenner's world, AI won't stop at human-level capabilities. “Hundreds of millions of AGIs will automate AI research, compressing a decade's worth of algorithmic progress into a year. We will rapidly evolve from human-level to superhuman AI systems. The powers and dangers of superintelligence will be dramatic.”


The third section of the essay explores what such a world might be like, focusing on four aspects of it: the unimaginable (and environmentally catastrophic) computational requirements needed to run it, the difficulty of maintaining the security of an AI lab in such a world, the problem of aligning machines with human purposes (which Aschenbrenner believes is difficult but not impossible), and the military implications of a world of superintelligent machines.

It is with the fourth of these that Aschenbrenner's analysis really starts to come apart at the seams. Like the lettering in a stick of Blackpool rock, the nuclear-weapons analogy runs all the way through his thinking. He sees the US as being at the stage with AI that it occupied just after Robert Oppenheimer's original Trinity test in New Mexico: ahead of the USSR, but not for long. And China, of course, plays the role of the Soviet empire in this analogy.

Suddenly, superintelligence has morphed from a problem for humanity into a US national-security imperative. “The US has a lead,” he writes. “We must maintain that lead. And right now, we're screwing it up. Above all, we must lock down the AI labs, quickly and thoroughly, before key AGI breakthroughs leak out in the next 12 to 24 months… Compute clusters must be built in the US, not in the dictatorships offering to fund them. And US AI labs have an obligation to work with the intelligence community and the military. America's lead in AGI won't secure peace and freedom by just building the best AI girlfriend apps. It's ugly, but we must build AI for American defence.”

All we need is a new Manhattan Project and the AGI Industrial Complex.

What I'm Reading

The dictator is shot
“Former Eastern Bloc Countries Fear Trump”: an interesting New Republic piece about people who know a thing or two about life under oppression.

Normandy revisited
The historian Adam Tooze looks back at wartime anniversaries in “80 Years Since D-Day: World War II and the ‘Great Acceleration’”.

Lawful interference
“The Harvey Weinstein of Antitrust”: a post from Matt Stoller's Monopoly Round-Up about Joshua Wright, the lawyer who for many years had a devastating impact on US antitrust enforcement.

Source: www.theguardian.com

The AI Bubble is Headed Towards Bust, Not Boom | John Naughton


“Are we really in an AI bubble?” asked a reader of last month's column about the seemingly unstoppable rise of Nvidia. “And how would we know?” It was a good question, so I consulted Investopedia, which is written by people who know about this stuff. It taught me that bubbles go through five stages, rather as Elisabeth Kübler-Ross said people do with grief. For investment bubbles, the five stages are displacement, boom, euphoria, profit-taking, and panic. So let's see how this maps onto our experience with AI so far.

First, displacement. That's easy: ChatGPT wotdunnit. When it appeared on November 30, 2022, the world went mad. Suddenly everyone realised that this was what all the fuss about AI had been about, and people were entranced to discover that they could converse with a machine – and that it would talk (well, write) back to them in coherent sentences. It was like the moment in the spring of 1993 when people first saw Mosaic, the first proper web browser, and the penny suddenly dropped: so this was what this “internet” thing was for. Netscape duly held its initial public offering in August 1995, the stock price went through the roof, and the first internet bubble began to inflate.

Second stage: boom. The launch of ChatGPT made clear that the big tech companies had in fact been playing with this technology for years, but had been too nervous to release it to the world because of its inherent unreliability. Once OpenAI let the cat out of the bag, though, fomo (fear of missing out) took over. The others then discovered that Microsoft had stolen a march on them by quietly investing in OpenAI, thereby gaining privileged access to its powerful GPT-4 large multimodal model. That realisation set alarm bells ringing. Microsoft's chief executive, Satya Nadella, unguardedly revealed that his intention was to make Google “dance”. If that was indeed the plan, it worked: Google, which had regarded itself as the leader in machine learning, released its Bard chatbot before it was ready, and then retreated amid hoots of derision.

The excitement also stirred the lower reaches of the tech world. Suddenly there was a surge of startups, founded by entrepreneurs who saw the tech companies' big “foundation” models as a platform on which new things could be built, much as an earlier generation had seen the web as such a foundation. These seedlings were funded in the old-fashioned way by venture capitalists, but some also received significant investment from the tech companies themselves and from Nvidia, maker of the hardware on which the AI future would allegedly be built.

The third stage of the cycle, euphoria, is the one we are in now. Caution has been thrown to the winds, and ostensibly rational companies are betting colossal sums on AI. OpenAI boss Sam Altman has floated the idea of raising $7tn from Middle Eastern oil states for the great push to create AGI (artificial general intelligence), and is partnering with Microsoft on the “Stargate” supercomputer. All of this rests on an article of faith: that to create a superintelligent machine, all you need is (a) vastly more data and (b) vastly more computing power. And the strange thing is that, for the moment, the world seems to be taking these fantasies at face value.

This ushers in the fourth stage of the cycle: profit-taking, the point at which astute operators notice that the process is becoming unstable and get out before the bubble bursts. Nobody is actually making money from AI yet except the companies that build the hardware, so perhaps only those who own stock in Nvidia, Apple, Amazon, Meta, Microsoft and Alphabet (née Google) have much to show for it. Beyond that, the gains are meagre: generative AI, it turns out, is great at spending money but not at generating returns on investment.

Stage five – panic – awaits. At some point the bubble gets punctured and a rapid downward spiral begins as people scramble to get out while they can. In the case of AI, it is unclear what might trigger the process. Governments may eventually tire of out-of-control giant corporations burning through investors' money. Shareholders may come to the same conclusion. Or the world may finally realise that the technology is an environmental disaster in the making: the planet cannot be carpeted with data centres.

But burst it will, someday. Nothing grows exponentially for ever. So, back to the original question: are we in an AI bubble? Is the pope a Catholic?

Source: www.theguardian.com

Ireland embraces tech giants while neglecting public services

In 1956, TK "Ken" Whitaker, an Irish civil servant trained as an economist, was appointed secretary of the Department of Finance in Dublin at the relatively young age of 39. From his vantage point at the head of the national finances, the outlook was bleak. The Republic of Ireland was in deep economic and social crisis. It had no natural resources and little industry, and was mired in depression. Inflation and unemployment were high. Ireland's main export was its young people, who fled in their thousands each year in search of work and a better life. The proud dream of Irish independence had produced an impoverished, priest-dominated state on the verge of collapse.

Whitaker quickly assembled a team of young officials to analyse the country's economic failures and devise a series of policies to remedy them. The result was a report, the First Programme for Economic Expansion, published in November 1958, which was subsequently adopted by Seán Lemass, who became taoiseach (prime minister) in 1959, and which became Ireland's survival strategy.

At its heart were several key proposals. Ireland would have to embrace free trade, which meant boosting competition and ending the protectionism that had characterised Irish economic policy under Lemass's predecessor, Éamon de Valera. Most important of all, the strategy required that Ireland welcome foreign capital, which essentially meant being nice to multinational companies: giving them generous tax breaks, helping them find land to build on, and generally being responsive to their needs.

Whitaker's strategy was bold, but it worked. (Joining the European Economic Community in 1973 didn't hurt, either.) The republic was transformed from a socio-economic basket case into an apparent exemplar of neoliberal prosperity. Foreign companies, mainly American, flooded in. The German crane manufacturer Liebherr was an early arrival. Apple followed in 1980, and then came the pharmaceutical companies. (Viagra, as it happens, is manufactured in Ireland, once the holy land of Catholicism.) Then came the big technology companies, many of which now have their European headquarters in Dublin.

If any of these behemoths had doubts about coming to the Emerald Isle, two things would have reassured them. The first was Brexit: these companies needed to be inside the EU. The second was the way the republic's government rushed to the rescue of Apple. When the European Commission concluded in 2016 that the company had been unfairly granted €13bn in tax breaks by the Irish authorities, Apple appealed, and in 2020 the EU's general court annulled the commission's decision. The Irish government supported Apple's appeal. Think about that for a moment: a small country fighting to refuse a €13bn payment. (The commission has appealed against the ruling, and it appears Apple may still have to pay, plus some €1.2bn in interest; the money is currently held in an escrow fund by the Irish government.)

But the subliminal message to corporate bosses was clear: "If you run into trouble with the EU, we will back you." That message may have reached Beijing as well. At any rate, it is interesting to learn that, just as the US and the EU are contemplating crackdowns on TikTok (whose European headquarters, coincidentally, are in Dublin), the Irish government says it welcomes Chinese-owned companies such as the popular e-commerce apps Temu and Shein, and the tech company Huawei.

I may live to regret saying this, but isn't all that to the good? Only up to a point. On the one hand, the influx of foreign capital has been transformative: tax revenue from resident tech companies is, on paper, making the country rich, and the government has money coming out of its ears, with a surplus projected to reach €65.2bn by 2027.

On the other hand, Ireland faces some intractable problems. Corporate wealth has done to Dublin what Silicon Valley did to San Francisco, turning a once liveable city into a wildly unaffordable metropolis. There is a huge shortage of affordable housing and a related homelessness crisis: around 12,000 people are in emergency accommodation, and the average monthly rent stands at €1,468. Add to that a creaking public health service (alongside lavish and expensive private healthcare).

And it is the only country in Europe with a population boom under way. Current demographic trends suggest that the republic's population, 4.7 million in 2016 and now somewhere around 5.5 million, will reach 6.7 million by 2051; by the end of the century there could be 10 million people living on the island of Ireland as a whole.

Therein lies the paradox. Whitaker's strategy brought in tax revenues beyond his wildest dreams: enough to build all the affordable housing the country needs, to fund a world-class public health system, to give the nation's capital a proper mass-transit system that would relieve its traffic congestion, to electrify everything, and so on. Yet the country is run by a coalition government that seems unable to look beyond the next election. Perhaps it is true that we get the governments we deserve.

What I've been reading

A game with a frontier

A great essay by Bruce Schneier on how "frontier" became the slogan for uncontrolled AI.

Talking points

Salvo No 5, featuring the transcript of a fascinating New Statesman interview by Gavin Jacobson with the celebrated French economist Thomas Piketty.

Into the clouds

The incredible ecological impact of computing and the cloud: the anthropologist Steven Gonzalez Monserrate details what he learned while working in a giant data centre.

Source: www.theguardian.com

The future of communication: what changes with Britain's new snooper's charter law | John Naughton

Back in 2000, the Blair government introduced the Regulation of Investigatory Powers bill, which enshrined formidable surveillance powers in law. Long before Edward Snowden revealed his secrets, it was clear to anyone paying attention that the British deep state was gearing up for the digital age. The powers implicit in the bill were so sweeping that one might have expected it to have a stormy passage through the Commons.

Most MPs, however, seemed uninterested in the bill. Only a handful of the 659 elected members appeared at all concerned about what was being proposed. Most of the work of improving the bill as it passed through parliament was done by a small number of members of the House of Lords, some of them hereditary peers, rather than by elected politicians. Eventually it was amended and became law (nicknamed Ripa) in July 2000.

In 2014, the government commissioned David Anderson QC (now KC) to review how the act was working, and he recommended that new legislation be enacted to clarify the questions Ripa raised. The then home secretary, Theresa May, introduced a new investigatory powers bill in the Commons in 2015, which was scrutinised by a joint committee of the Lords and the Commons. The bill became the Investigatory Powers Act (nicknamed the "snooper's charter") in November 2016. The following month, the European court of justice ruled that the general retention of data that the act legalised was unlawful.

In 2022, the Home Office reviewed how the act was working. It concluded that the law had "largely achieved its objectives" but that further significant reform was needed "to take into account advances in technology and the evolving demands of protecting national security and tackling serious crime". In other words: the spies needed legislative cover, and more formally sanctioned wiggle room.

Hence the Investigatory Powers (Amendment) bill currently before parliament at Westminster. "The world has changed," the blurb says. "Technology is advancing rapidly and the types of threats the UK faces continue to evolve." The bill aims to enable the security and intelligence agencies to respond to a range of evolving threats. And of course, this being global Britain, the "world-leading safeguards within the IPA will be maintained and strengthened".

On closer inspection, the bill would give the security services more latitude to build and exploit so-called "bulk personal datasets", and to collect and use CCTV footage and facial images. It would also allow "internet connection records" to be collected and processed for generalised mass surveillance.

The bill would also force technology companies, including those based overseas, to inform the UK government of any plans to improve security or privacy measures on their platforms before such changes take effect. Apple, for one, views this as "an unprecedented overreach by the government" that could see the UK "covertly veto new user protections globally and prevent us from delivering them to our customers".

A hat-trick, at least for global Britain.

What I'm reading

Gut feeling
Cory Doctorow's Marshall McLuhan lecture on enshittification, or the way digital platforms tend to decay. A transcript of an unmissable event.

X factor
A great blog post by Charles Arthur, former technology editor of the Guardian. Summary: think before you tweet. Or better still, just quit.

Apocalypse again
A sombre Politico column by Jack Shafer on the recent wave of layoffs in American news organisations.

Source: www.theguardian.com