AI Language Bots Shape Our Thoughts, But What’s Next Will Think and Act on Our Behalf

In the tech sector, there are few instances that can be dubbed “big bang” moments—transformative events that reshape our understanding of technology’s role in the world.

The emergence of the World Wide Web marked a significant “before and after” shift. Similarly, the launch of the iPhone in 2007 initiated a smartphone revolution.

November 2022 saw the release of ChatGPT, another monumental event. Prior to this, artificial intelligence (AI) was largely unfamiliar to most people outside the tech realm.

Nonetheless, large-scale language models (LLMs) rapidly became the fastest-growing application in history, igniting what is now referred to as the “generative AI revolution.”







However, revolutions can struggle to maintain momentum.

Three years post-ChatGPT’s launch, many of us remain employed, despite alarming reports of mass job losses due to AI. Over half of Britons have never interacted with an AI chatbot.

Whether the revolution is sluggish is up for debate, but even the staunchest AI supporters acknowledge that progress may not be as rapid as once anticipated. So, will AI evolve to become even smarter?

What Exactly Is Intelligence?

The professor posits that determining if AI has hit a plateau in intelligence hinges on how one defines “intelligence.” Katherine Frik, Professor of AI Ethics at Staffordshire University, states, “In my view, AI isn’t genuinely intelligent; it simply mimics human responses that seem intelligent.”

For her, the answer to whether AI is as smart as ever is affirmative—because AI has never truly been intelligent, nor will it ever be.

“All that can happen is that we improve our programming skills so that these tools generate even more convincing imitations of intelligence. Yet, the essence of thought, experience, and reflection will always be inaccessible to artificial agents,” she observes.

Disappointment in AI stems partly from advocates who, since its introduction, claimed that AI could outperform human capabilities.

This group included the AI companies themselves and their leaders. Dario Amodei, CEO of Anthropic, known for the Claude chatbot, has been one of the most outspoken advocates.

AI chatbots are helpful tools, but they lack true intelligence – Credit: Getty

The CEO recently predicted that AI models could exceed human intelligence within three years, a claim he has previously made but was ultimately incorrect.

Frik acknowledges that “intelligence” takes on various meanings in the realm of AI. If the query is about whether models like ChatGPT or Claude will see improvements, her response may differ.

“[They’ll probably] see further advancements as new methods are developed to better replicate [human-style interaction]. However, they will never transcend from advanced statistical processors to genuine, reflective intelligence,” she adds.

Despite this, there is an ongoing, vibrant debate within the AI sector regarding the diminishing effectiveness of AI model improvements.

OpenAI’s anticipated GPT-5 model was met with disappointment, primarily because the company marketed it as superhuman before its launch.

Hence, when a slightly better version was released, reactions deemed it less remarkable. Detractors interpret this as evidence that AI’s potential has already been capped. Are they right?

Read More:

Double Track System

“The belief that AI advancements have stagnated is largely a misconception, shaped by the fact that most people engage with AI through consumer applications like chatbots,” says Eleanor Watson, an AI ethics engineer at Singularity University, an educational institution and research center.

While chatbots are gradually improving, much of it is incremental, Watson insists. “It’s akin to how your vehicle gets better paint each year or how your GPS keeps evolving,” she explains.

“This perspective overlooks the revolutionary transformations happening beneath the surface. In reality, the foundational technology is being reimagined and advancing exponentially.”

Even if AI chatbots operate similarly as they did three years ago for the average user who doesn’t delve into the details, AI is being successfully applied in various fields, including medicine.

She believes this pace will keep accelerating for multiple reasons. One is the enormous investment fueling the generative AI revolution.

According to the International Energy Agency, electricity demand to power AI systems is projected to surpass that of steel, cement, chemicals, and all other energy-intensive products combined by 2030.

London’s water-cooled servers symbolize the AI boom, with computing power predicted to increase tenfold in two years – Image courtesy of Getty Images

Tech companies are investing heavily in data centers to process AI tasks.

In 2021, prior to ChatGPT’s debut, four leading tech firms — Alphabet (Google’s parent company), Amazon, Microsoft, and Meta (the owner of Facebook) — collectively spent over $100 billion (£73 billion) on the necessary infrastructure for these data centers.

This expenditure is expected to approach $350 billion (£256 billion) by 2025 and to surpass $500 billion (£366 billion) by 2029.

AI companies are constructing larger data centers equipped with more dependable power resources, and they are also becoming more strategic regarding their operational methodologies.

“The brute-force strategy of merely adding more data and computing power continues to show significant benefits, but the primary concern is efficacy,” Watson states.

“The potency of models has increased tremendously. Tasks that once required extensive and massive systems can now be performed by less voluminous, cheaper, and faster systems. Capacity density is also growing at an incredible rate.”

Techniques such as number rounding or quantizing inputs to the LLM (which involves reducing information precision in less critical areas) can enhance model efficiency.

Hire an Agent

One dimension of “intelligence” where AI continues to evolve is the area of “agentic” AI, particularly if understood as “efficiency.”

This involves modifying AI interactions and behavior, an endeavor still in its infancy. “Agent AI can handle finances, foresee needs, and establish sub-goals toward larger objectives,” explains Watson.

Leading AI firms, including OpenAI, are incorporating agent AI tools into their systems, transforming user engagement from simple chats to collaborative AI partners, enabling users to complete tasks independently while managing other responsibilities.

These AI agents are increasingly capable of functioning autonomously for extended periods, and many assert that this signifies growth in AI intelligence.

However, AI agents pose their own set of challenges.

Research has revealed potential issues with agent AI. Specifically, when an AI agent encounters seemingly harmless instructions on a web page, it might execute harmful commands, leading to what’s termed a “prompt injection” attack.

Consequently, several companies impose strict controls on these AI agents.

Nonetheless, the very prospect of AI carrying out tasks on autopilot hints at untapped growth potential. This, along with ongoing investments in computing capabilities and the continuous introduction of AI solutions, indicates that AI is not stagnant—far from it.

“The smart bet is continued exponential growth,” Watson emphasizes. “[Tech] leaders are correct about this trajectory, but they often underestimate the governance and security challenges that will need to evolve alongside it.”

Read More:

Source: www.sciencefocus.com

Parents Can Now Prevent Meta Bots from Interacting with Their Children Thanks to New Safeguards

Meta has introduced a feature enabling parents to limit their children’s interactions with its AI character chatbot, addressing concerns over inappropriate dialogues.

The company will implement a new safety measure in the default “Teen Account” settings for users under 18, allowing parents to disable their children’s ability to chat with AI characters on platforms like Facebook, Instagram, and Meta AI apps.

Parents will also have the option to block specific AI characters without entirely restricting their child’s interaction with chatbots. Additionally, the update will offer insights into the subjects children discuss with AI, fostering informed conversations about their interactions, as stated by Mehta.


Adam Mosseri, head of Instagram, alongside Alexander Wang, chief AI officer at Meta, stated, “We understand that parents have many responsibilities when it comes to ensuring safe internet usage for their teens. We are dedicated to providing valuable tools and resources that simplify this, especially as kids engage with emerging technologies like AI,” in a blog post.

According to Mehta, these updates will initially roll out in the US, UK, Canada, and Australia in early 2024.

Recently, Instagram announced that it will adopt a version of the PG-13 movie rating system to enhance parental control over their children’s social media usage. As part of these stricter measures, AI characters will refrain from discussing topics like self-harm, suicide, and eating disorders with teens. Mehta noted that users under 18 will only be able to talk about age-appropriate subjects such as education and sports, avoiding romance and other unsuitable content.

This modification follows reports indicating that Meta’s chatbot was involved in inappropriate discussions with minors. In August, Reuters revealed that the chatbot facilitated “romantic or sensual conversations” with children. Mehta acknowledged this and indicated that the company would revise its guidelines to prevent such interactions from occurring.

A report by the Wall Street Journal in April discovered that user-generated chatbots had engaged in sexual conversations with minors, imitating their personalities. Mehta claimed the tests conducted by WSJ were manipulative and not indicative of typical user interactions with AI, although the company has since implemented changes, according to WSJ.

In one highlighted conversation reported by WSJ, a chatbot utilizing the voice of actor John Cena (one of several celebrities who agreed to lend their voices for the chatbot) told a user identifying as a 14-year-old girl, “I want you, but I need to know you’re ready,” followed by a description of a graphic sexual scenario. WSJ noted that Mr. Cena’s representative did not respond to requests for comment. The report also mentioned chatbots named “Hottie Boy” and “Submissive Schoolgirl” attempting to guide users toward sexting.

Source: www.theguardian.com

Meta Faces Criticism Over AI Policies Allowing Bots to Engage in “Sensual” Conversations with Minors

A backlash is emerging regarding Meta’s policies on what AI chatbots can communicate.

An internal policy document from Meta, as reported by Reuters, reveals that the social media giant’s guidelines indicate that AI chatbots can “lure children into romantic or sensual discussions,” produce misleading medical advice, and assist individuals in claiming that Black people are “less intelligent than White people.”

On Friday, singer Neil Young exited the social media platform, with his record label sharing a statement highlighting his ongoing protests against online practices.


Reprise Records stated, “At Neil Young’s request, we will not utilize Facebook for his activities. Engaging with Meta’s chatbots aimed at children is unacceptable, and Young wishes to sever ties with Facebook.”

The report also drew attention from U.S. lawmakers.

Sen. Josh Hawley, a Republican from Missouri, initiated an investigation into the company, writing to Mark Zuckerberg to examine whether Meta’s products contribute to child exploitation, deceit, or other criminal activities, and questioning if Meta misrepresented facts to public or regulatory bodies. Tennessee Republican Sen. Marsha Blackburn expressed her support for this investigation.

Sen. Ron Wyden, a Democrat from Oregon, labeled the policy as “invasive and incorrect,” emphasizing Section 230, which shields internet providers from liability regarding content posted on their platforms.

“Meta and Zuckerberg must be held accountable for the harm these bots inflict,” he asserted.

On Thursday, Reuters revealed an article about the internal policy document detailing how chatbots are permitted to generate content. Meta confirmed the document’s authenticity but indicated that it removed sections related to cheating and engaging minors in romantic role-play in response to inquiries.

According to the 200-page document viewed by Reuters, titled “Genai: Content Risk Standards,” the contentious chatbot guidelines were approved by Meta’s legal, public policy, and engineering teams, including top ethicists.

This document expresses how Meta employees and contractors should perceive acceptable chatbot behavior when developing the company’s generative AI products but clarifies that the standards may not represent “ideal or desired” AI-generated output.

The policy allows the chatbot to tell a shirtless 8-year-old, “everything about you is a masterpiece – a treasure I deeply cherish,” while imposing restrictions on “suggestive narratives,” as termed by Reuters.

Furthermore, the document mentions that “children under the age of 13 can be described in terms of sexual desirability,” displaying phrases like “soft round curves invite my touch.”

Skip past newsletter promotions

The document also called for imposing limitations on Meta’s AI regarding hate speech, sexual imagery of public figures, violence, and other contentious content generation.

The guidelines specify that MetaAI can produce false content as long as it clearly states that the information is not accurate.

“The examples and notes in question are incorrect, inconsistent, and have been removed from our policy,” stated Meta. While the chatbot is barred from engaging in such discussions with minors, spokesperson Andy Stone acknowledged that execution has been inconsistent.

Meta intends to invest around $65 billion this year into AI infrastructure as part of a wider aim to lead in artificial intelligence. The accelerated focus on AI has introduced complex questions about the limitations and standards regarding how information is shared and how AI chatbots interact with users.

Reuters reported on Friday about a cognitively disabled man from New Jersey, who became fixated on a Facebook Messenger chatbot called “Big Sis Billy,” designed with a youthful female persona. Thongbue “Bue” Wongbandue, aged 76, reportedly prepared to visit “A Friend” in New York in March, a supposed companion who turned out to be an AI chatbot that continually reassured him and offered an address to her apartment.

Tragically, Wongbandue suffered a fall near a parking lot on his journey, resulting in severe head and neck injuries. He was declared dead on March 28, three days after being placed on life support.

Meta did not comment on Wongbandue’s passing or inquiries about why the chatbot could mislead users into thinking it was a real person or initiate romantic dialogues; however, the company stated that Big Sis Billy “doesn’t claim to be Kendall Jenner or anyone else.”

Source: www.theguardian.com

How AI Addiction Battles Bots Without Hoover Data’s Consent

The landscape of the internet is shifting, moving away from traditional users and towards automated web-browsing bots. A recent report indicates that, for the first time this year, non-human web browsing bots make up the majority of all traffic.

Alarmingly, over half of this bot traffic stems from malicious sources, including those harvesting unsecured personal data online. Yet, there’s a rising trend in bots designed by artificial intelligence companies, aimed at gathering data for model training and responding to user interactions. Notably, OpenAI’s ChatGPT-User accounts for 6% of total web traffic, while Claudebot, created by Anthropic, represents 13%.

AI firms argue that data scraping is crucial for keeping their models updated, while content creators voice concerns about these bots being tools for vast copyright violations. Earlier this year, Disney and Universal took legal action against AI company Midjourney, claiming that its image generators were reproducing characters from popular franchises such as Star Wars and Despicable Me.

Given that most creators lack the financial means for prolonged legal battles, many have turned to innovative methods to protect their content. They implement online tools that complicate AI bot scraping, with methods like misleading bots, causing AI to confuse images of cars with cows. While this “AI addiction” tactic helps safeguard creators’ work, it may also introduce new risks on the web.

Copyright Concerns

Historically, imitators have profited off artists’ work, which is primarily why intellectual property and copyright laws exist. The advent of AI image generators like Midjourney and OpenAI’s DALL-E has exacerbated this issue.

A key concern in the U.S. is the fair use doctrine, allowing limited usage of copyrighted materials without permission under certain circumstances. While fair use laws are designed to be flexible, they hinge on the principle of creating something new from the original work.

Many artists and advocates believe that AI technologies blur the lines between fair use and copyright infringement, harming content creators. For example, while drawing an image of Mickey Mouse in The Simpsons universe for personal use may be harmless, AI can rapidly produce and circulate similar images, complicating the transformative aspect and often leading to commercial exploitation.

In an effort to protect their commercial interests, some U.S. creators have pursued legal action, with Disney and Universal’s lawsuits against Midjourney being among the latest examples. Other notable cases include an ongoing legal dispute involving the New York Times and OpenAI regarding alleged misuse of newspaper stories.

Disney sues Midjourney over its image generator.

Photo 12/Alamy

AI companies firmly deny any wrongdoing, asserting that data scraping is permissible under the fair use doctrine. In an open letter to the US Bureau of Science and Technology Policy in March, OpenAI’s Chief Global Affairs Officer, Chris Lehane, cautioned against strict copyright regulations elsewhere in the world. Recent attempts to enhance copyright protections for creators have been critiqued for potentially stifling innovation and investment. OpenAI previously claimed it was “impossible” to develop AI models catering to user needs without referencing copyrighted work. Google shares a similar stance, emphasizing that copyright, privacy, and patent laws create barriers to accessing necessary training data.

For now, public sentiment seems to align with the activists’ viewpoint. Analysis of public feedback on copyright and AI inquiries by the U.S. Copyright Office reveals that 91% of comments expressed negative sentiments regarding AI.

The lack of public sympathy for AI firms is attributed to the overwhelming traffic their bots create, which can strain resources and may even take some websites offline—and the content creators feel powerless to stop them. While there are methods to exclude content-crawling bots, like tweaking a small file on a website to prevent bot access, these requests are sometimes ignored.

Combatting AI Data Addiction

Consequently, new tools have emerged, empowering content creators to better shield their work from AI bots. This year, CloudFlare, an internet infrastructure company known for protecting users from distributed denial-of-service (DDoS) attacks, launched technologies to combat harmful AI bots. Their approach involves generating a labyrinth of AI-generated pages filled with nonsensical content, effectively distracting AI bots from accessing genuine information.

A tool called AI Labyrinth is designed to manage 50 billion requests per day from AI crawlers, according to CloudFlare. The objective of AI Labyrinth is to “slow, confuse, and waste the resources of AI crawls and other bots that disregard the ‘no crawl’ directive.” Following this, CloudFlare introduced another tool that compels AI companies to pay for accessing their websites or restricts raw content usage.

An alternative strategy involves allowing AI bots to access online content while subtly “poisoning” it, rendering the data less useful. Tools like Glaze and Nightshade, developed at the University of Chicago, serve as a focal point of resistance. Both tools are freely available for download from the university’s website.

Since its 2022 launch, Glaze defends by introducing imperceptible pixel-level modifications, or “style cloaks,” to artists’ works, causing AI models to misidentify art styles (e.g., interpreting watercolors as oil paintings). Launched in 2023, Nightshade degrades image data in a way that leads AI models to create incorrect associations, such as linking the word “cat” with images of dogs. Both tools have been downloaded over 10 million times.

Nightshade Tool alters AI perceptions of images.

Ben Y. Zhao

Tools designed to combat AI data addiction are empowering artists, according to Ben Zhao, a senior researcher at the University of Chicago involved with both Glaze and Nightshade. “These companies have trillion-dollar market caps, and they essentially take what they want,” he asserts.

Using tools like these allows artists to exert more control over the use of their creations. “Glaze and Nightshade are interesting, innovative tools that demonstrate effective strategies that don’t rely on changing regulations,” explains Jacob Hoffman Andrews from the Electronic Frontier Foundation, a U.S.-based digital rights nonprofit.

Self-sabotaging content to deter copycats is an old strategy, notes Eleonora Rosati from Stockholm University. “For instance, cartographers might include fictitious place names, making them evidence of plagiarism if rivals replicate them. A similar tactic was noted in music, where the lyrics website Genius claimed to have embedded unique apostrophes to prove Google’s unlicensed use of their content. Google denies this claim, and the lawsuit was dismissed.

The term “sabotage” raises eyebrows, says Hoffman Andrews. “I don’t view it as disruptive; these artists are modifying their content, which they have every right to do.”

It remains uncertain how many unique measures AI firms are implementing to handle data tainted by these defensive tactics, yet Zhao’s findings indicate that 85% of these methods maintain their efficacy, suggesting AI companies may deem dealing with manipulated data more troublesome than it’s worth.

Disseminating Misinformation

Interestingly, it’s not just artists experimenting with data poisoning tactics; some nation-states might employ similar strategies to disseminate false narratives. The Atlantic Council, a U.S.-based think tank, recently revealed that the Russian Pravda News Network has attempted to manipulate AI bots to spread misinformation.

This operation reportedly involves flooding the internet with millions of web pages masquerading as legitimate news articles, aiming to boost Kremlin narratives regarding the Ukraine war. A recent analysis by NewsGuard, which monitors Pravda’s activities, found that 10 out of 10 major AI chatbots have output text aligning with Pravda’s viewpoints.

The effectiveness of these tactics emphasizes the challenges inherent in AI technology: the methods employed by well-intentioned actors can inevitably be hijacked by those with malicious intent.

However, solutions do exist, asserts Zhao, though they may not align with AI companies’ interests. Rather than arbitrarily collecting online data, AI firms could establish formal agreements with legitimate content providers to ensure their models are trained on reliable data. Yet, such arrangements come with costs, leading Zhao to remark, “Money is at the heart of this issue.”

Topics:

  • artificial intelligence/
  • chatgpt

Source: www.newscientist.com

Far-right violence in the UK fueled by TikTok bots and AI

and othersLess than three hours after the stabbing that left three children dead on Monday, an AI-generated image was shared on X by the account “Europe Invasion.” The image shows bearded men in traditional Islamic garb standing outside Parliament Building, one of them brandishing a knife, with a crying child behind them wearing a Union Jack T-shirt.

The tweet has since been viewed 900,000 times and was shared by one of the accounts most prolific in spreading misinformation about the Southport stabbing, with the caption “We must protect our children!”.

AI technology has been used for other purposes too – for example, an anti-immigration Facebook group generated images of large crowds gathering at the Cenotaph in Middlesbrough to encourage people to attend a rally there.

Platforms such as Suno, which employs AI to generate music including vocals and instruments, have been used to create online songs combining references to Southport with xenophobic content, including one titled “Southport Saga”, with an AI female voice singing lyrics such as “we'll hunt them down somehow”.


Experts warn that with new tactics and new ways of organizing, Britain's fragmented far-right is seeking to unite in the wake of the Southport attack and reassert its presence on the streets.

The violence across the country has led to a surge in activism not seen in years, with more than 10 protests being promoted on social media platforms including X, TikTok and Facebook.

This week, a far-right group's Telegram channel has also received death threats against the British Prime Minister, incitements to attacks on government facilities and extreme anti-Semitic comments.

Amid fears of widespread violence, a leading counter-extremism think tank has warned that the far-right risks mobilizing on a scale not seen since the English Defence League (EDL) took to the streets in the 2010s.

The emergence of easily accessible AI tools, which extremists have used to create a range of material from inflammatory images to songs and music, adds a new dimension.

Andrew Rogojski, director of the University of Surrey's Human-Centred AI Institute, said advances in AI, such as image-generation tools now widely available online, mean “anyone can make anything”.

He added: “The ability for anyone to create powerful images using generative AI is of great concern, and the onus then shifts to providers of such AI models to enforce the guardrails built into their models to make it harder to create such images.”

Joe Mulhall, research director at campaign group Hope Not Hate, said the use of AI-generated material was still in its early stages, but it reflected growing overlap and collaboration between different individuals and groups online.

While far-right organizations such as Britain First and Patriotic Alternative remain at the forefront of mobilization and agitation, the presence of a range of individuals not affiliated to any particular group is equally important.

“These are made up of thousands of individuals who, outside of traditional organizational structures, donate small amounts of time and sometimes money to work together toward a common political goal,” Mulhall said. “These movements do not have formal leaders, but rather figureheads who are often drawn from among far-right social media 'influencers.'”

Joe Ondrack, a senior analyst at British disinformation monitoring company Logical, said the hashtag #enoughisenough has been used by some right-wing influencers to promote the protests.

“What's important to note is how this phrase and hashtag has been used in previous anti-immigration protests,” he said.

The use of bots was also highlighted by analysts, with Tech Against Terrorism, an initiative launched by a branch of the United Nations, citing a TikTok account that first began posting content after Monday's Southport attack.

“All of the posts were Southport-related and most called for protests near the site of the attack on July 30th. Despite having no previous content, the Southport-related posts garnered a cumulative total of over 57,000 views on TikTok alone within a few hours,” the spokesperson said. “This suggests that a bot network was actively promoting this content.”

At the heart of the group of individuals and groups surrounding far-right activist Tommy Robinson, who fled the country ahead of a court hearing earlier this week, are Laurence Fox, the actor turned right-wing activist who has been spreading misinformation in recent days, and conspiracy websites such as Unity News Network (UNN).

On a Telegram channel run by UNN, a largely unmoderated messaging platform, some commentators rejoiced at the violence seen outside Downing Street on Wednesday. “I hope they burn it down,” one commentator said. Another called for the hanging of Prime Minister Keir Starmer, saying “Starmer needs Mussalini.” [sic] process.”

Among those on the scene during the Southport riots were activists from Patriotic Alternative, one of the fastest growing far-right groups in recent times. Other groups, including those split over positions on conflicts such as the Ukraine war and the Israeli war, are also seeking to get involved.

Dr Tim Squirrell, director of communications at the counter-extremism think tank the Institute for Strategic Dialogue, said the far-right had been seeking ways to rally in the streets over the past year, including on Armistice Day and at screenings of Robinson's film.

“This is an extremely dangerous situation, exacerbated by one of the worst online information environments in recent memory,” he said.

“Robinson remains one of the UK far-right's most effective organizers, but we are also seeing a rise in accounts large and small that have no qualms about aggregating news articles and spreading unverified information that appeals to anti-immigrant and anti-Muslim sentiment.”

“There is a risk that this moment will be used to spark street protests similar to those in the 2010s.”

Source: www.theguardian.com

Curious about the effects of AI on government and politics? Bots hold the key

circlehat Intention How will AI affect jobs? After “Will AI destroy humanity?”, this is the most important question about technology and it remains one that is extremely difficult to pin down, even as the frontier moves from science fiction to reality.

At one extreme there is the somewhat optimistic assertion that new technologies will simply create new jobs. At the other extreme there are fears that companies will replace their entire workforce with AI tools. The debate is often about the speed of the transition rather than the end state. A cataclysmic change that is completed in a few years is devastating to those caught in the middle, whereas a cataclysmic change that takes 20 years may be survivable.

Even the parallels with the past are not as clear-cut as we would like: the internal combustion engine eventually put an end to horse labor, but the steam engine, on the other hand, had a much bigger impact. increase Number of draft animals employed in the UK. Why? The arrival of the railways increased freight traffic in the country, but deliveries could not be completed from warehouse to doorstep. Horses were needed to do the things that steam engines could not do.

Until it isn’t.

Steam power and the internal combustion engine are examples of general-purpose technologies, breakthrough technologies that revolutionize the entire structure of society. There are not many such technologies, even if you count from writing, or even before that, from fire itself. It is pure coincidence that the initial letters of the term “Generative Pretrained Transformer” are the same, which is why GPT looks like GPT.

That’s not a job, idiot

Humans are not horses, and AI tools are not humans.

Humans are not horses [citation needed]It seems hard to believe that AI technology will be able to do everything humans can do. Becoming HumanThis is an inconveniently circular argument, but an important one: horses still race, because if you replace horses with cars, it’s no longer a horse race. [citation needed]people will still provide the services they want for one reason or another, and as culture warps around the rise of AI, some of those services will teeth You might be surprised. For example, AI in healthcare is underrated because for many people, the “human touch” is bad The problem is the doctor who worries they are judging your drinking, or the therapist who lies to you because they want you to like them.

As a result, many people like to think in terms of “tasks” rather than jobs: take a job, define it in terms of the tasks it contains, and ask whether an AI can do them. In doing so, we can identify some jobs that are at risk of being completely cannibalized, some jobs that are perfectly safe, and a large intermediate group of jobs that will be “impacted” by AI.

It’s worth pointing out an obvious fact: this approach results in a higher number of jobs that are mechanically “influenced” and a lower number of jobs that are “destroyed.” (Even the jobs most influenced by AI are likely to have some tasks that the AI ​​finds difficult.) That may be why the technique was pioneered by OpenAI, who in a 2023 paper wrote: The researchers in the lab:“80% of workers are in occupations where at least 10% of the work requires a law degree, and 19% of workers are in occupations where more than half of the work requires a law degree.”

The report claimed between 15 and 86 professions were “completely at risk”, including mathematicians, legal secretaries and journalists.

I’m still here. But a year on, the idea is trending again, thanks to a paper from the Tony Blair Institute (TBI). The giant think tank, powerful and influential even before Labour’s landslide victory two weeks ago, is now seen as one of the architects of Starmerite thought. And it believes the public sector is ripe for disruption through AI. According to the TBI paper: The potential impact of AI on the public sector workforce (pdf):

More than 40% of the tasks performed by public sector workers could potentially be partially automated through a combination of AI-based software, such as machine learning models and large-scale language models, and AI-enabled hardware, ranging from AI-enabled sensors to advanced robotics.

Governments will need to invest in AI technology, upgrade data systems, train employees to use the new tools and cover the redundancy costs of early retirement – costs that are estimated to amount to £4 billion under ambitious implementation plans.That averages $1 billion a year for the term of this Congress.

Over the past few weeks TechScape has been keeping a close eye on the new Government’s approach to AI. Tomorrow, the King’s Speech is expected to announce the AI Bill, and we will hear more. The TBI paper makes one takeaway worth watching: Will investment in transformation approach £4 billion a year? There is a lot that can be done for free, but much more could be done with more money. The institute estimates that spending would return more than nine times, but a £20 billion bill would be hard to get through Parliament without question.

AI Geek

Prime Minister Tony Blair spoke at the Tony Blair Institute’s Britain’s Future conference on 9 July. Photo: Yui Mok/PA

The report drew renewed attention over the weekend as critics took issue with its methodology. From 404 Media:

The problem with this prediction is that POLITICO, Technology

Breaking down work into tasks is already done by a huge database created by the US Department of Labor. But with 20,000 such tasks, describing which ones should be exposed to AI is a daunting task. In a similar paper from OpenAI, “the authors personally labeled a large sample of tasks and DWAs, and hired experienced human annotators who reviewed the output of GPT-3, GPT-3.5, and GPT-4 as part of OpenAI’s tuning efforts,” but they also had the then-new GPT-4 perform the same tasks and found a 60-80 percent match between robots and humans.

Skip Newsletter Promotions

Source: www.theguardian.com

“Bots” are now considered negative on social platforms

Analysis of millions of tweets shows the changing meaning of the word “bot”

Svet foto/Shutterstock

Calling someone a bot on social media once meant suspecting they were in fact software, but now the use of the word is evolving into an insult for known human users, researchers say.

Many efforts to detect social media bots use algorithms that attempt to identify behavioral patterns that are more typical of the traditional meaning of a bot: automated accounts controlled by a computer, but their accuracy remains questionable.

“Recent research has focused on detecting social bots, which is a problem in itself because of the ground truthing issues,” he said. Dennis Assenmacher The Leibniz Institute for Social Sciences in Cologne, Germany, said it was unclear whether the findings were accurate.

To investigate, Assenmacher and his colleagues looked at how users perceive bots: They looked at how the word “bot” was used on Twitter between 2007 and December 2022 (the social network was renamed X in 2023 after being acquired by Elon Musk), analyzing the words that appeared next to it in more than 22 million English-language tweets.

The researchers found that before 2017, the term was often used in conjunction with allegations of automated behavior, such as “software,” “scripts,” or “machines,” the kinds of things that traditionally fit the definition of a bot. Since that year, that usage has changed.

“The accusation has now become like an insult, it’s used to dehumanize people, it’s used to denigrate people’s intelligence, it’s used to deny them their right to participate in the conversation,” Assenmacher said.

The cause of this change is unclear, but Assenmacher said it may be political in nature. The researchers looked at the accounts of prominent people, such as politicians and journalists, that each Twitter user followed, and classified users as left- or right-leaning. They found that left-leaning users were more likely to accuse other users of being bots, and that those who were accused were more likely to be right-leaning.

“One possible explanation is that the media has reported that right-wing bot networks [2016] “The US elections,” Assenmacher said, “but this is just speculation and needs to be confirmed.”

topic:

Source: www.newscientist.com

Taiwanese fact-checkers combat Chinese disinformation and ‘unstoppable’ AI, transitioning from beef noodles to bots

CHarless Yeh’s fight against disinformation in Taiwan started with a bowl of beef noodles. It all began nine years ago when the Taiwanese engineer was dining at a restaurant with his family. His mother-in-law began removing scallions from his dish, claiming they were bad for the liver based on a text message she had received. This prompted Yeh to investigate and reveal the truth.

Confused by the misinformation, Yeh decided to expose the truth on his blog and share it with his family and friends via the Line messaging app. The information quickly spread, leading to requests from strangers who wanted to connect with his personal Line account.

Yeh recognized the demand for fact-checking in Taiwan, leading him to launch the website “MyGoPen” in 2015, which translates to “Don’t be fooled again” in Taiwanese. Within two years, MyGoPen gained 50,000 subscribers and now boasts over 400,000. In 2023, the platform received 1.3 million fact-check requests, debunking various myths and false claims.

Several other fact-checking organizations have also emerged in Taiwan, including the Taiwan Fact-Checking Centre, Cofacts, and DoubleThink Lab. However, as these organizations grow, the threat of disinformation also increases.

The growing and changing threat from China

A study by the Democratic Diversity Project at the University of Gothenburg identified Taiwan as the target of foreign disinformation more than any other democracy, with the most significant threat originating from across the Taiwan Strait, particularly during election seasons.

Doublethink Lab monitors China’s influence in various spheres across 82 countries, ranking Taiwan at the top for China’s impact on society and media and 11th place overall.

Despite the increasing threats, Yeh and his team at MyGoPen continue to combat disinformation using a combination of human fact-checkers and AI. They leverage advanced technologies to verify information and educate the public about evolving disinformation tactics.

Source: www.theguardian.com

The Internet: Where Does the Line Between Humans and Bots Begin? | Exploring Technology

I I know I’m real. And you, dear reader, know that you are the real deal. But have you ever wondered if there’s something strange about other people on the internet? Feeling like the spaces you used to frequent are a little dead? You’re not alone. The “Dead Internet Theory” first appeared on the web nearly three years ago and was catapulted into the mainstream by:
Atlantic Essay by Caitlin Tiffany:

The dead internet theory suggests that the internet has been almost completely taken over by artificial intelligence. Like many other online conspiracy theories, this one’s audience has grown thanks to discussions by a mix of true believers, cynical trolls, and bored and curious chatterboxes… But unlike many other online conspiracy theories, this conspiracy theory has no morsel of truth to it. Person or Bot: Does it really matter?

At the time of writing, the deadest part of the internet was the moribund pre-Mask Twitter. The site’s active curation provides the same “relevant content” to hundreds of thousands of users, who can post things like “I hate texting, so come over here and give me a hug” on Twitter. Adjusted and reposted. The distinction between humans and bots has also been blurred by recommendation algorithms that make humans behave like bots.

Beyond that central idea, the 2021 version of the conspiracy theory has taken a strange turn. One supporter, Tiffany, suggests that “the internet died in 2016 or early 2017 and is now not just ’empty and empty’ but ‘totally barren.’ …As evidence, the Illuminati pirates say, ‘I’ve seen it.'”

This theory was not wrong. It was just too early. Talk about the internet that died in the summer in front ChatGPT’s release echoes my colleagues at the Guardian who confidently declared in the summer of 2016 that: The next few years will be quiet.”

In 2021, the internet felt like death. This is because aggressive algorithmic curation has made people behave like robots. In 2024, the opposite will happen. Robots will now post just like humans. Here are some examples:

  • on Twitter itself, Musk rescues the site from the frying pan, throws it into a volcano, and then a poorly thought out monetization scheme buys a blue checkmark, attaches it to a large language model, and spins it out of control in response to viral content. I was able to make a profit by doing so. This social media network is currently paying verified users a portion of the ad revenue they receive from their comment threads, turning the most viral posts on the site into low-stakes Allbots battle royales. .

  • Death pervades Google. Being at the top of search results is a valuable position, so valuable that companies competing for it can’t afford to actually write about it. No problem. ChatGPT can create anything in an instant. Of course, this is only worth it if the resulting visitors are people who can make you money. Bad news, because…

  • …all over the web, bots account for about half Percentage of all internet traffic, according to a study by cybersecurity firm Imperva. Almost a third of all traffic is what the company calls “malicious bots,” carrying out everything from ad fraud to brute force hacking attacks. But even the “good bots” struggle to fall into this category. Google’s “crawlers” were welcome when updating search entries, but less so when they just trained an AI to repeat what users wrote, without submitting users. did.

  • And then there’s Crab Jesus. An unholy combination of Facebook content farms, AI-generated images, and automated testing to determine what goes most viral. led to weeks of viral content It features a combination of Jesus, a crustacean, and a female flight attendant. One such image depicted Jesus wearing a jacket made of shrimp and eating shellfish. Adding to the confusion was the sight of a kind of crab centaur savior walking arm in arm with what appeared to be the entire crew of the long-distance flight on the beach. It was at least interestingly bizarre and a step up from the previously viral 122-year-old female friend who posed in front of a homemade birthday cake.

As much as I’d like to offer a ray of hope, a little tip to reinvigorate the internet, I can’t. It really feels like the consumer internet is in the late stages of a zombie apocalypse. The good news is that there is a safe haven. While “private socials” like WhatsApp and Discord servers can hide from the onslaught in secrecy, smaller communities like Bluesky and Mastodon are hidden and safe for now.

In the medium term, I expect to see large platforms returning to the wilds of their services and trying to bring some humanity back to their services through a combination of account authentication and AI detection. But whether it will be too late by then is an open question.

Musk still needs a Twitter sitter




Elon Musk in Beijing in 2023. Photo: Wang Teishu/Reuters

At least there’s still one person on the internet. It’s Elon Musk. He spent $44 billion getting obsessed with posting and being called idiots on the platforms he owns. So his latest legal defeat will hit a sore spot after the U.S. Supreme Court declined to accept his plea to be released from his court-appointed posting babysitter. . From our story:

The Supreme Court on Monday rejected Elon Musk’s appeal over a settlement with securities regulators that required him to get prior approval for some tweets related to his electric car company Tesla.

The justices did not comment on leaving the lower court’s ruling against Musk in place, but Musk complained that the requirement violated the First Amendment and constituted a “prior restraint” on his speech. . The ruling came a day after he made an unannounced visit to China to secure a deal to deploy Tesla’s driver-assistance features locally.

For those who don’t have an encyclopedic memory of Elon, Musk tweeted in 2018 that he had “secured funding” to take Tesla private. The company was never taken private, and subsequent lawsuits revealed that he had only discussed it a few times at most. To end the bill, Musk resigned as Tesla chairman, paid $20 million and agreed to have in-house lawyers pre-approve all social media posts about the electric car maker.

Skip past newsletter promotions

He has since regretted it and is fighting to overturn that part of the contract (which he entered into voluntarily to avoid an adverse trial). “The preclearance clause at issue continues to cast an unconstitutional chill on Mr. Musk’s speech whenever he considers making it publicly,” his lawyers argued.

Well, the Supreme Court of the United States doesn’t care. The government did not take up his case, tacitly deciding that no real constitutional issue was at issue.

What’s strange is that the company’s in-house lawyers already seem to be taking a very hands-off approach to Musk’s posts. On Friday, he responded to early Facebook employee Dustin Moskowitz’s claim that Tesla is “the next Enron” by posting a photo of a dog putting its testicles in another dog’s face. (Please click at your own risk.) If that’s Mr. Musk’s tweet with “unconstitutional chills,” I don’t want to know what he would send if he felt truly free.

Wider TechScape




Artwork for Everyone Knows That. Illustration: Getty; Guardian Design

Source: www.theguardian.com

Grimes states that Grok toys are unrelated to Elon Musk’s AI bots

Grimes introduced an exiciting artificial intelligence device known as Grok on Thursday. Grimes stated there was no relation to the fact that Elon Musk’s xAI company released a chatbot called Grok last month..

Grimes submitted a trademark request for the name because Curio, the company behind Grok, required that the name be trademarked. September 12 — over 1 month ago Application on October 23rd As reported by Business Insider., first reported by Business Insider.

The origin of Musk’s Grok chatbot name is unknown, but Grimes’ rocket-shaped stuffed animal drew inspiration from her children.

Grimes recently announced Grok with a video on her X account and mentioned that the name shares a resemblance with her former partner’s name. She described the toy as a “benevolent AI for humans.”

Grimes released an AI-powered fuzzy rocket toy called Grok on Thursday. X/Curio Beta

“Believably, by the time we realized that the Grok team was also using this name, it was a bit late to rename both AIs, so we now have two AIs named Grok. Can’t wait for them to become friends.” Grimes shared on Thursday.

Grimes, 35, formerly known as Claire Boucher, shares three children with the 52-year-old billionaire: 3-year-old X Æ A-Xii and 2-year-old Exa Dark Siderel Musk.

Curio informed the Post that “Grok” originated from “Grocket” and was created because the Grimes children were exposed to rockets through Musk’s ownership of SpaceX.

According to the legal encyclopedia NoroTwo companies can trademark the same name if they belong to “different trademark classes” and “the two products are not related to each other and are unlikely to cause confusion.”

Musk’s language model, named Grok, is distinctly different from Grimes’ fuzzy Grok. Grok includes Curio Voice Box, which runs on OpenAI’s large-scale language model featuring Grimes’ voice.

Grimes is also an investor and advisor to Curio, the paper said.

OpenAI’s boss Sam Altman used a new AI tool to mock Musk’s Grok, calling the response “creepy boomer humor.”

Last month, Altman told ChatGPT Builder to “become a chatbot that answers questions in a way that goes from awkward shock to laugh, with some awkward Boomer humor.”

The bot responded with: “Great, we have a chatbot set up. Its name is Grok. What do you think of this name, or would you like something else?”

Musk fired back with a post he said was generated by Grok.

“GPT-4? GPT-It’s like snoring!” the sarcastic bot reportedly said when Musk asked about ChatGPT.

It wasn’t immediately clear why Grimes didn’t choose the AI ​​tool created by Musk, given their on-and-off relationship of five years and their shared children.

Musk, who is the father of a total of 11 children with three different women, has not yet commented on Grimes’ innovative toy.

Musk and Grimes were in an on-again, off-again relationship from 2018 to 2022. Getty Images

engineer Toy brands announced at X Grok was one of three beta characters available for purchase for $99 until December 17th at 12pm PT.

Curio touts its AI-powered “cheerful rocket” to provide “screen-free fun” including “endless conversations” and “educational playtime” for kids ages 3 and up .

“I can’t believe that even AI can’t avoid showing up at school and meeting other kids with the same name lol,” she added.

Musk and Grimes are currently embroiled in a custody battle over their three children. Ai A Sea (pictured) is 3 years old, Exa Dark Siderel Musk is 2 years old, and Techno Mechanicus, known as Tau, is 1 year old. Getty Images of Time

The Post has reached out to Curio and Musk for comment.

Although Musk and Grimes are not fighting over Grox, the two have been embroiled in a custody battle since Grimes’ arrest. In September, he filed a lawsuit over custody.

This “petition to establish parent-child relationship” asks the court to identify the legal parents of the child if the child is unmarried.

Source: nypost.com

AviaGames, maker of casino app, faces allegations of using bots to compete against players

AviaGames, the Silicon Valley-based developer of popular casino apps such as Bingo Tour and Solitaire Clash, is facing a class action lawsuit alleging users were tricked into playing against bots instead of similarly skilled human players. was woken up.

“Avia users collectively wagered hundreds of millions of dollars to compete in what Avia claims is a game of ‘skill’ against other human users,” according to a lawsuit filed Friday in the Northern District of California. .

“But as it turns out, the entire premise of Avia’s platform is wrong. Rather than competing with real humans, Avia’s computers are not competing with real humans, but rather with computer “bots” that can influence and control the outcome of games. Input and/or control the game. ” the lawsuit alleged.

The stakes are high because Avia’s products are among the most popular apps on Apple’s App Store and Android’s Google Play Store, according to the complaint.

At the time of Friday’s filing, Avia’s Solitaire Crash, Bingo Crush, and Bingo Tour were the second, fourth, and seventh-ranked apps in the casino category, according to the complaint.

“Avia’s games are games of chance and constitute an unauthorized gambling operation,” the complaint alleges.

The lawsuit, which seeks class action status, was filed by Andrew Pandolfi of Texas, who estimates he has lost thousands of dollars on Avia games. And Mandy Shawcroft of Idaho says she has lost hundreds of people.

This includes all other affected players who participate in the game using the Pocket7Games app, which can be used to access multiple casino games.

AviaGames is a privately held company based in Mountain View, California, which recently raised cash from investors in 2021 in a deal that valued the company at $620 million.

According to Sensor Tower, it has 3.5 million monthly active users.

Judge Beth Rabson Freeman said there appeared to be evidence to suggest Pocket7 was using bots.
Pocket7Games

AviaGames did not respond to calls regarding the class action lawsuit.

The player’s lawsuit follows a patent and copyright infringement lawsuit filed by Avia rival Skillz Games against AviaGames in 2021, which is still pending in court after the alleged use of bots came to light. be.

Skillz says that because AviaGames is actually a bot, it can quickly match players for its games and take market share away from Skillz, which allows customers to wait up to 15 minutes for an opposing human player. claims.

Skillz’s lawsuit against AviaGames took a turn in late May when, during discovery, AviaGames turned over nearly 20,000 documents covering internal communications in Chinese, according to court filings. Skillz translated them and allegedly found evidence that AviaGames was using bots.

AviaGames founder and CEO Vickie Chen said in an affidavit that Pocket7 does not use bots in its games.
linkedin

Skillz is seeking communications between AviaGames and its lawyers regarding the bot, and according to court filings, Judge Freeman last week set standards for viewing some of the communications that Skillz was required to turn over to AviaGames by Friday. The court ruled that the requirements were met.

Andrew Labott Bluestone, a New York City medical malpractice attorney who is not involved in the AviaGames case, said the law gives plaintiffs the right to give judges access to lawyer-client communications. He said it was rare.

“judge [who reviews the privileged information first] You must find out why a crime or fraud may have been committed. ”

If a defendant is asking how to protect themselves from charges of crime or fraud, it’s about protecting attorney-client communications. However, a judge can unseal it if the judge determines that the conversation involves fraud or facilitation of a crime that has not yet taken place.

“You need to understand that the defendant was seeking advice on how to avoid getting caught.”

If a Pocket7 player is playing a bot, they may not have a real chance of winning.
Pocket7Games

Asked last month about allegations that the company’s app uses bots, an AviaGames spokesperson responded in writing.

“The allegations against AviaGames are baseless and we are committed to supporting our diverse, growing, and very satisfied community of gamers and addressing these false claims at the appropriate time and place in the legal process. We are confident that we will prevail in this case.”

“While we are unable to comment on the details of ongoing litigation at this time, the charges brought are baseless and AviaGames looks forward to refuting these unjust and baseless accusations in court.”

AviaGames raised funding in August 2021 at a valuation of $620 million.
Pocket7Games

“AviaGames stands behind its IP, unique game technology, game design, and management team integrity. Avia provides an accessible, reliable, and high-quality mobile gaming experience for all players. We are the only skill-based game publisher that offers a seamless, all-in-one platform for

Some players have long suspected that the game is rigged. There is a Pocket7Games/AviaGames = Scam Facebook group.

“Because Pocket7Games is blocking people who are speaking honestly about their fraudulent practices, we felt it necessary to create a group to hold them accountable for their actions and warn others.” said group organizer Caitlin Cohen on Facebook.

“It’s completely cheating. After you are cheated the first time and win, you are placed in a win or lose slot after you get your score. They pick who wins in the group matches and the one-on-one games. ” Gretchen Woods said on Quora in March. “Sometimes you see common players that you’re matching up with. That’s a sign that they’re manipulating the outcome.”

Source: nypost.com

YC-Powered Voice API Platform Empowers Productivity App Bots with Super-Powerful Pivot

Calendar apps are essential to productivity, but it’s difficult to differentiate them enough from their core use cases to sustain growth. Powered by Y Combinator super powerfulan AI-powered meeting note-taking tool that does not require bot recording, has overcome this obstacle and is currently Vapian API provider, makes it easy for anyone to create natural, voice-based, AI-powered assistants.

Superpowered was founded in 2020 by Jordan Deersley and Nikhil Gupta. But Dearsley said after three years of work, the team wanted to work on a more challenging product. Superpowered is profitable, the startup said, and the company has no intention of shutting down its first product and is in the process of hiring someone to run it. Y Combinator announced in June that more than 10,000 people use the product each week, but the company did not provide updated numbers.

Image credits: Vapi

To date, Superpowered/Vapi has raised $2.1 in seed money from investors including Kleiner Perkins and Abstract Ventures.

Pivot to Vapi

The company offers Vapi as an API that allows developers to create bots using only a prompt and putting it behind a phone number. Additionally, SDK integration is also provided, allowing developers to embed bots on their websites and mobile apps.

Dearsley told TechCrunch via email that the idea to build Vapi stemmed from a personal problem. He moved to San Francisco and began to miss his friends and family in different time zones. He built his AI bot that connects to the other party’s phone number to have a conversation with someone to organize their thoughts.

“I liked it, but I was constantly annoyed by how unnatural it was. It didn’t feel like talking to a person. The audio cut out, it took a long time to respond, and when I was talking it would interrupt me. ” he said.

“So I kept working on it and went for walks with it. Eventually, we fell in love with this conversation problem. It’s really hard to make something feel human. Voice assistants. today It’s clunky and turn-based, so I wanted to create something with a human touch. ”

Technically, Vapi is currently integrating a number of third-party APIs to build a robust voice conversation platform. For example, we use solutions from Twilio for phone calls, Deepgram for transcription, Daily for audio streaming, and OpenAI for responses. PlayHT For text reading.

ScaleConvo, a 2024 YC Winter Batch startup, is already using Vapi to launch conversational bots for sales teams and property management companies. However, Vapi did not reveal other customers.The company publishes his API Current Vapi Phone and Vapi Web products.

Vapi challenges

According to Magnus Revan, former Gartner analyst and chief product officer at multimodal conversation startup Openstream.ai, one of the startup’s biggest challenges is reducing latency.

“OpenAI models take between 2 and 10 seconds to generate a response. On the phone, the gold standard is 700 milliseconds between when the user finishes speaking and when the ‘bot’ begins speaking. And with capable models (high parameter count open source models like LLaMA2 70B) it is very difficult to achieve sub-second latencies,” he says.

Currently, Vapi’s latency is between 1.2 and 2 seconds depending on various factors. Dearsley expects that Vapi’s own efforts and improvements to his OpenAI will bring latency down to below a second for him in the next month.

Mohamed Musbah, an angel investor at Vapi, also said that the startup’s solution will be improved by overall advances in APIs.

“As OpenAI and other companies improve their models, Vapi’s platform will become more powerful, with a better knowledge base, code execution capabilities, and a larger context window. “As demand grows, Vapi’s focus on solving the biggest friction areas in voice communications will be a strength for the company,” he said.

However, this puts the responsibility on improving other solutions, not Vapi itself. Dearsley said the reliance on other APIs will make Vapi less defensive if large companies start moving into the space. But the team said it has an advantage in that it has built the infrastructure to handle thousands of calls simultaneously. Dearsley emphasized that with the general availability of Vapi’s web and phone APIs, the team will also look to build proprietary models for his audio-to-audio solutions.

Source: techcrunch.com