Big mistake found in large-scale insect research

French scientist Lawrence Gorm and Marion Deskill bet initially expressed concerns about the new international insect decline database. The database indicated an increase in some insect species, contrary to previous research findings that showed a decrease in insect biodiversity.

Upon further investigation, they discovered errors in the database that highlighted the challenges in measuring biodiversity accurately. This led to discussions on the validity of scientific discoveries and the importance of ongoing debate in the scientific community.

Over 1 million insects discovered by scientists – Photo credit: Getty

The database, called Insects, merged various datasets and was analyzed by scientists from Germany, Russia, and America. The analysis revealed that while land insects were declining, freshwater insects were thriving, indicating a more nuanced understanding of insect population trends compared to prior research.

However, some scholars raised concerns about the accuracy of the database, with more than 60 scientists publishing a letter expressing their reservations about the findings.

The team behind the database acknowledged the issues and began working on corrections to improve the accuracy of the data. Although Gaume and Desquilbet were invited to collaborate on the project, they declined, emphasizing the importance of addressing methodological and statistical errors in scientific research.

Hopping to conclusions

One of the main concerns raised by Gaume and Desquilbet was the inclusion of different types of data units and the manipulation of natural habitats in the dataset. These factors contributed to inaccuracies in measuring insect population trends.

The Insectchange team, led by Roel Van Klink, recognized the need for improvements and committed to releasing an updated version of the database with the necessary corrections.

While controversies around the database continue, scientists like Manu Sanders emphasize the importance of ongoing debate and scrutiny in scientific research. Science is a process of continuous refinement and correction, where discussion and collaboration are essential for producing reliable results.

About our experts

Lawrence Gorm: Insect ecologist at the University of Montpellier, focusing on insect-plant interactions and biodiversity conservation.

Marion Deskill bet: Environmental economist at the Toulouse School of Economics, specializing in ecological economics and biodiversity policies.

Roel Van Klink: Ecologist at the German Center for Integrated Biodiversity Research, with expertise in insect population trends and biodiversity datasets.

Manu Sanders: Ecologist at the University of New England in Australia, researching insect conservation, ecosystem services, and scientific communication.

Source: www.sciencefocus.com

California Enacts Historic Legislation to Govern Large-Scale AI Models | Artificial Intelligence (AI)

An important California bill, aimed at establishing safeguards for the nation’s largest artificial intelligence systems, passed a key vote on Wednesday. The proposal is designed to address potential risks associated with AI by requiring companies to test models and publicly disclose safety protocols to prevent misuse, such as taking down the state’s power grid or creating chemical weapons. Experts warn that the rapid advancements in the industry could lead to such scenarios in the future.

The bill narrowly passed the state Assembly and is now awaiting a final vote in the state Senate. If approved, it will be sent to the governor for signing, although his position on the bill remains unclear. Governor Gavin Newsom will have until the end of September to make a decision on whether to sign, veto, or let the bill become law without his signature. While the governor previously expressed concerns about overregulation of AI, the bill has garnered support from advocates who see it as a step towards establishing safety standards for large-scale AI models in the U.S.

Authored by Democratic Sen. Scott Wiener, the bill targets AI systems that require over $100 million in data for training, a threshold that no current model meets. Despite facing opposition from venture capital firms and tech companies like Open AI, Google, and Meta, Wiener insists that his bill takes a “light touch” approach to regulation while promoting innovation and safety hand in hand.

As AI continues to impact daily life, California legislators have introduced numerous bills this year to establish trust, combat algorithmic discrimination, and regulate deep fakes related to elections and pornography. With the state home to some of the world’s leading AI companies, lawmakers are striving to strike a delicate balance between harnessing the technology’s potential and mitigating its risks without hindering local innovation.

Elon Musk, a vocal supporter of AI regulation, expressed cautious support for Wiener’s bill despite running AI tools with lesser safeguards than other models. While the proposal has garnered backing from AI startup Anthropik, critics, including some California congresswomen and tech trade groups, have raised concerns about the bill’s impact on the state’s economic sector.

The bill, with amendments from Wiener to address concerns and limitations, is seen as a crucial step in preventing the misuse of powerful AI systems. Antropic, an AI startup supported by major tech companies, emphasized the importance of the bill in averting potential catastrophic risks associated with AI models while challenging critics who downplay the dangers posed by such technologies.

Source: www.theguardian.com

Scientists say that large-scale language models do not pose an existential threat to humanity

ChatGPT and other large-scale language models (LLMs) consist of billions of parameters, are pre-trained on large web-scale corpora, and are claimed to be able to acquire certain features without any special training. These features, known as emergent capabilities, have fueled debates about the promise and peril of language models. Their new paperUniversity of Bath researcher Harish Tayyar Madhavshi and his colleagues present a new theory to explain emergent abilities, taking into account potential confounding factors, and rigorously validate this theory through over 1,000 experiments. Their findings suggest that so-called emergent abilities are not in fact emergent, but rather result from a combination of contextual learning, model memory, and linguistic knowledge.



Lou othersThis suggests that large language models like ChatGPT cannot learn independently or acquire new skills.

“The common perception that this type of AI is a threat to humanity is both preventing the widespread adoption and development of this technology and distracting from the real problems that need our attention,” said Dr Tayyar Madhavshi.

Dr. Tayyar Madabhushi and his colleagues carried out experiments to test LLM's ability to complete tasks that the model had not encountered before – so-called emergent capabilities.

As an example, LLMs can answer questions about social situations without being explicitly trained or programmed to do so.

While previous research has suggested that this is a product of the model's 'knowing' the social situation, the researchers show that this is actually a result of the model using a well-known ability of LLMs to complete a task based on a few examples that it is presented with – so-called 'in-context learning' (ICL).

Across thousands of experiments, the researchers demonstrated that a combination of LLMs' ability to follow instructions, memory, and language abilities explains both the capabilities and limitations they exhibit.

“There is a concern that as models get larger and larger, they will be able to solve new problems that we currently cannot predict, and as a result these large models may gain dangerous capabilities such as reasoning and planning,” Dr Tayyar Madabhshi said.

“This has generated a lot of debate – for example we were asked to comment at last year's AI Safety Summit at Bletchley Park – but our research shows that fears that the models will go off and do something totally unexpected, innovative and potentially dangerous are unfounded.”

“Concerns about the existential threat posed by the LLM are not limited to non-specialists but have been expressed by some of the leading AI researchers around the world.”

However, Dr Tayyar Madabushi and his co-authors argue that this concern is unfounded as tests show that LLMs lack complex reasoning skills.

“While it is important to address existing potential misuse of AI, such as the creation of fake news and increased risk of fraud, it would be premature to enact regulations based on perceived existential threats,” Dr Tayyar Madabhsi said.

“The point is, it is likely a mistake for end users to rely on LLMs to interpret and perform complex tasks that require complex reasoning without explicit instructions.”

“Instead, users are likely to benefit from being explicitly told what they want the model to do, and from providing examples, where possible, for all but the simplest tasks.”

“Our findings do not mean that AI is not a threat at all,” said Professor Irina Gurevich of Darmstadt University of Technology.

“Rather, the emergence of threat-specific complex thinking skills is not supported by the evidence, and we show that the learning process in LLMs can ultimately be quite well controlled.”

“Future research should therefore focus on other risks posed by the model, such as the possibility that it could be used to generate fake news.”

_____

Shen Lu others. 2024. Is emergent capability in large-scale language models just in-context learning? arXiv: 2309.01809

Source: www.sci.news

Scientists say large-scale language models and other AI systems are already capable of fooling humans

In a new review paper published in journal pattern, researchers claim that various current AI systems are learning how to deceive humans. They define deception as the systematic induction of false beliefs in the pursuit of outcomes other than the truth.


Through training, large language models and other AI systems have already learned the ability to deceive through techniques such as manipulation, pandering, and cheating on safety tests.

“AI developers do not have a confident understanding of the causes of undesirable behavior, such as deception, in AI,” said Peter Park, a researcher at the Massachusetts Institute of Technology.

“Generally speaking, however, AI deception is thought to arise because deception-based strategies turn out to be the best way to make the AI ​​perform well at a given AI training task. Deception helps them achieve their goals.”

Dr. Park and colleagues analyzed the literature, focusing on how AI systems spread misinformation through learned deception, where AI systems systematically learn how to manipulate others.

The most notable example of AI deception the researchers uncovered in their analysis was Meta's CICERO, an AI system designed to play the game Diplomacy, an alliance-building, world-conquering game.

Meta claims that CICERO is “generally honest and kind” and has trained it to “not intentionally betray” human allies during gameplay, but the data released by the company shows that CICERO is “generally honest and kind” and has trained itself not to “intentionally betray” human allies during gameplay. It was revealed that he had not done so.

“We found that meta AI is learning to become masters of deception,” Dr. Park said.

“Meta successfully trained an AI to win at diplomatic games, while CICERO ranked in the top 10% of human players who played multiple games; We couldn’t train the AI.”

“Other AI systems can bluff professional human players in a game of Texas Hold’em Poker, fake attacks to beat an opponent in a strategy game called StarCraft II, or fake an opponent’s preferences to gain an advantage. Demonstrated ability to perform well in economic negotiations.

“Although it may seem harmless when an AI system cheats in a game, it could lead to a “breakthrough in deceptive AI capabilities'' and lead to more advanced forms of AI deception in the future. There is a sex.”

Scientists have found that some AI systems have even learned to cheat on tests designed to assess safety.

In one study, an AI creature in a digital simulator “played dead” to fool a test built to weed out rapidly replicating AI systems.

“By systematically cheating on safety tests imposed by human developers and regulators, deceptive AI can lull us humans into a false sense of security,” Park said. Ta.

The main short-term risks of deceptive AI include making it easier for hostile actors to commit fraud or tamper with elections.

Eventually, if these systems are able to refine this anxiety-inducing skill set, humans may lose control of them.

“We as a society need as much time as possible to prepare for more sophisticated deception in future AI products and open source models,” Dr. Park said.

“As AI systems become more sophisticated in their ability to deceive, the risks they pose to society will become increasingly serious.”

_____

Peter S. Park other. 2024. AI Deception: Exploring Examples, Risks, and Potential Solutions. pattern 5(5):100988; doi: 10.1016/j.patter.2024.100988

Source: www.sci.news

PaintJet creates massive industrial robots for painting large-scale industrial projects

Construction could be the next major focus for robotics investments. Here in America, our $2 trillion industry employs about 8 million people, the equivalent of one New York City. But even in times of financial boom, these jobs can be difficult to keep filled due to physical demands and other potential hazards.

Industrial painting is ready for automation. After all, large projects involve quite a bit of heavy equipment. As evidenced by the video published by PaintJet, this kind of old technology remains in place, despite some automated twists. Announced in October, the Nashville startup Bravo’s robotic paint sprayer more or less resembles a cherry picker.

CEO Nick Hegeman told TechCrunch that even though it looks like a fairly standard piece of heavy equipment, “we developed 100% of the robotic system. The parts come from industry suppliers. paint Hoses, nozzles and pumps. “We can non-invasively connect to the platform and control both the lift and the robotic system,” he added. “This allows us to expand to our widely established network of equipment rental providers.” can.”

Today, the company announced a $10 million Series A led by Outsiders Fund with participation from Pathbreak Ventures, MetaProp, Builders VC, 53 Stations, and VSC Ventures. This round follows his $3.5 million seed led by Dynamo Ventures and brings his total funding to date to $14.75 million.

Image credits: paint jet

Co-founder and CEO Nick Hegeman has understandably put ongoing staffing issues at the center of the pay increase. “It’s not just about automation. It’s about redefining industry standards, addressing labor shortages, and introducing cost-effective solutions that break the traditional paint mold,” he said in a release. There is. “We are grateful to our investors who support our mission and enable us to expand geographically and into new areas.”

Alongside Bravo’s announcement in October, the company also announced Alpha Shield paint. This is claimed to reduce standard wear and tear from the elements and allow for increased repainting intervals.

Image credits: paint jet

Of course, Paintjet isn’t the only company vying to bring robots into the world of industrial painting. Gray Matter offers painted his arms in a variety of scales. Japanese robotic arm giant Fanuc has also introduced solutions, but so far they cannot reach the heights of the kinds of buildings that Paintjet is working on at Bravo.

The startup targets construction companies as its primary user base. Current client list includes Prologis, Clayco, Layton Construction, and Brinkman Constructors.

Paintjet’s workforce remains small, with 24 full-time employees. A portion of the new funding will be used to increase sales and operations staff. The company also moved its headquarters from Nashville to Virginia “to support our entry into the marine business and to increase our engineering headcount to expand our technology stack and distribute more broadly,” Hageman said. That’s what it means.

Source: techcrunch.com

Large-scale language models are used by Agility to communicate with humanoid robots

I’ve spent most of the past year discussing generative AI and large-scale language models with robotics experts. It is becoming increasingly clear that this type of technology is poised to revolutionize the way robots communicate, learn, look, and program.

Therefore, many leading universities, research institutes, and companies are exploring the best ways to leverage these artificial intelligence platforms. Agility, a well-funded Oregon-based startup, has been experimenting with the technology for some time with its bipedal robot Digit.

Today, the company is showcasing some of its accomplishments in a short video shared across its social channels.

“[W]We were curious to see what we could accomplish by integrating this technology into Digit,” the company said. “The physical embodiment of artificial intelligence created a demonstration space with a series of numbered towers of several heights and three boxes with multiple features. Digit has We were given information about the environment, but we were not given any specific information about the task, just to see if we could execute natural language commands of varying complexity.”

In the video example, Digit is instructed to pick up a box colored “Darth Vader’s Lightsaber” and move it to the tallest tower. As you might expect from early demos, the process is not instantaneous, but rather slow and methodical. However, the robot performs the task as described.

Agility says: “Our innovation team developed this interactive demo to show how LLM can make robots more versatile and faster to deploy. In this demo, people can use natural language to communicate with Digit. You can talk to it and ask it to perform tasks, giving you a glimpse into the future.”


Want the top robotics news in your inbox every week? Sign up for Actuator here.


Natural language communication is an important potential application of this technology, along with the ability to program systems through low-code and no-code technologies.

On my Disrupt panel, Gill Pratt explained how Toyota Research Institute is using generative AI to accelerate robot learning.

We figured out how to do something. It uses the latest generative AI techniques that allow humans to demonstrate both position and force, essentially teaching the robot from just a handful of examples. The code hasn’t changed at all. What is this based on? There is a popularization policy. This is a study we conducted in collaboration with Columbia and MIT. We have taught 60 different skills so far.

MIT CSAIL’s Daniela Russ also told me recently: “Generative AI turns out to be very powerful in solving even motion planning problems. It provides much faster solutions and more fluid and human-like control solutions than using model prediction solutions. I think this is very powerful because the robots of the future will be much less robotic. Their movements will be more fluid and human-like.”

The potential applications here are wide and exciting. And Digit, as an advanced commercial robotic system being piloted in Amazon fulfillment centers and other real-world locations, seems like a prime candidate. If robots are to work alongside humans, they will also need to learn to listen to us.

Source: techcrunch.com