Do plants possess intelligence? | Science News

Tall goldenrod (Solidago altissima), a North American species in the family Asteraceae, can recognize other nearby plants without touching them by sensing the proportion of far-red light reflected from their leaves. When goldenrod is eaten by herbivores, it adapts its response based on whether other plants are nearby. Are such flexible, real-time adaptive responses a sign of plant intelligence?

In the context of behavioral ecology, plant responses to environmental stressors are increasingly being studied. This is especially true for plant responses to herbivores, which mediate direct and indirect defense and tolerance. These seemingly adaptive changes in plant defense phenotypes in the context of other environmental conditions have prompted discussion of such responses as intelligent behavior. In their paper, Kessler and Mueller explore the concept of plant intelligence and some of its predictions regarding chemical signaling in plant interactions with other organisms. Image courtesy of Becky.

“There are over 70 published definitions of intelligence, and even within specific fields there is no consensus on what it is,” says André Kessler, a professor of chemical ecology at Cornell University.

“Many people believe that intelligence requires a central nervous system, and that electrical signals act as the medium for information processing.”

“Some plant biologists equate the plant's vascular system with a central nervous system, arguing that there is some centralized entity within the plant that allows it to process and respond to information.”

But Kessler and his colleague, Michael Mueller, a doctoral student at Cornell University, disagree.

“Electrical signals are clearly present in plants, but there is no solid evidence of any homology with a nervous system; the question is how important they are to the plant’s ability to process environmental signals,” Professor Kessler said.

To make the case for plant intelligence, the authors narrowed the definition down to its most basic element: the ability to solve problems toward a specific goal based on information obtained from the environment.

As a case study, Kessler points to previous research looking at goldenrod and its response to being eaten by pests.

When beetle larvae feed on goldenrod leaves, the plant releases chemicals that let the insects know the plant is damaged and a poor food source.

These airborne chemicals, called volatile organic compounds (VOCs), are also absorbed by nearby goldenrod plants, causing them to develop their own defenses against the beetle larvae.

In this way, goldenrod deflects herbivores onto nearby plants, dispersing the damage.

In 2022, Professor Kessler and his co-authors conducted experiments showing that Solidago altissima can also detect a higher proportion of far-red light reflected from the leaves of nearby plants.

If beetles feed on a goldenrod plant that has neighbors nearby, the plant grows faster in an effort to outpace the herbivores, and it also starts producing defensive compounds that help it fight off the pests.

In the absence of neighboring plants, goldenrod does not accelerate its growth when eaten, and its chemical response to herbivores differs markedly, yet it can still withstand a substantial amount of herbivore attack.

“This fits into our definition of intelligence: plants change their standard behaviour in response to information they receive from the environment,” Professor Kessler says.

“Neighboring goldenrods also behave intelligently when they detect the VOCs that signal the presence of pests.”

“Volatile emissions from neighboring plants are a harbinger of future herbivory.”

“They can use cues from the environment to predict future situations and act accordingly.”

“Applying the concept of intelligence to plants could generate new hypotheses about the mechanisms and functions of plant chemical communication and may even change people's ideas about what intelligence actually means.”

“The latter idea is timely because artificial intelligence is a hot topic right now. For example, at least for now, artificial intelligence doesn't solve problems toward a goal.”

“Artificial intelligence is not even intelligent according to our definition of intelligence. Artificial intelligence is based on patterns it identifies from the information it has access to.”

“The idea that interests us comes from mathematicians in the 1920s who proposed that plants might function like beehives.”

“In this case, each cell acts like an individual bee, and the whole plant resembles a hive.”

“That means the plant brain is the whole plant, without any central coordination.”

“Instead of electrical signals, chemical signals are transmitted throughout the superorganism.”

“Work by other researchers has shown that all plant cells have a wide range of light spectrum recognition and sensory molecules to detect very specific volatile compounds emanating from nearby plants.”

“They can sniff out their environment with great precision, and as far as we know, all cells can do that.”

“Cells may be specialized, but they all recognize the same things, communicate through chemical signals, and trigger collective responses in growth and metabolism.”

“The idea is very appealing to me.”

The team’s paper was published in the journal Plant Signaling & Behavior.

_____

André Kessler & Michael B. Mueller. Induced resistance to herbivores and intelligent plants. Plant Signaling & Behavior, published online April 30, 2024; doi: 10.1080/15592324.2024.2345985

Source: www.sci.news

Apple unveils “Apple Intelligence” and introduces ChatGPT to Siri at WWDC 2024

Apple CEO Tim Cook announced a new suite of generative artificial intelligence products and services during the keynote address at the company’s annual developers conference, WWDC. The products include “Apple Intelligence” and a partnership with ChatGPT maker OpenAI. This marks a significant move towards AI for Apple, as the company aims to enhance user experiences and catch up with rivals in the field.

In his speech, Cook emphasized the importance of AI understanding users on a personal level, rooted in their daily lives, relationships, and communications. Apple Intelligence includes a variety of generative AI tools integrated across the company’s devices, such as Mac laptops, iPad tablets, and iPhones. These tools can extract information from apps and perform actions within them, offering a more personalized experience for users.

The partnership with OpenAI will bring ChatGPT technology to a new version of Apple’s voice assistant, Siri. The updated Siri will act as an AI chatbot, capable of executing tasks based on voice prompts and providing more contextual and personalized responses. Users can expect features such as summarizing notifications, emails, and texts, as well as creating customized emoji reactions.

Apple also announced updates for its Vision Pro headset and the adoption of Rich Communication Services for improved messaging capabilities. The company showcased new features in the Photos app, Apple Maps, Wallet, and text messaging customization. Additionally, Apple aims to expand availability of the Vision Pro headset to more countries in the coming months.

As Apple delves deeper into the realm of AI, investors and analysts have been eager to see how the company will innovate in this space. While Apple has been cautious in introducing AI tools into its flagship products, it has been making strategic moves to strengthen its AI capabilities over the years. The company’s commitment to privacy remains a central focus, with measures in place to protect user data when utilizing AI technologies.

Despite the challenges of balancing AI innovation with user privacy, Apple is determined to set a new standard for responsible AI usage. By integrating AI features into its products while prioritizing user privacy, Apple aims to provide a seamless and secure experience for its customers.

Source: www.theguardian.com

Detecting A Deepfake: Top Tips Shared by Detection Tool Maker

As a human, you will play a crucial role in identifying whether a photo or video was created using artificial intelligence.

Various detection tools are available for assistance, either commercially or developed in research labs. By utilizing these deepfake detectors, you can upload or link to suspected fake media, and the detector will indicate the likelihood that it was generated by AI.

However, relying on your senses and key clues can also offer valuable insights when analyzing media to determine the authenticity of a deepfake.

Although the regulation of deepfakes, especially in elections, has been slow to catch up with AI advancements, efforts must be made to verify the authenticity of images, audio, and videos.

One such tool is the DeepFake-o-meter, developed by Siwei Lyu at the University at Buffalo. This free and open-source tool combines algorithms from various labs to help users determine whether media was generated by AI.

The DeepFake-o-meter demonstrates both the advantages and limitations of AI detection tools by rating the likelihood of a video, photo, or audio recording being AI-generated on a scale from 0% to 100%.
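
The article doesn't specify how the DeepFake-o-meter aggregates its component algorithms, but one plausible scheme is a simple weighted average of each detector's probability, rescaled to the 0-100% rating the tool reports. A minimal Python sketch, with invented detector names and weights, might look like this:

```python
# Hypothetical sketch of how a meta-detector might combine scores from
# several independent deepfake-detection algorithms. The detector names
# and weighting scheme are illustrative, not the DeepFake-o-meter's own.

def aggregate_deepfake_scores(scores, weights=None):
    """Combine per-detector probabilities (0.0-1.0) into one 0-100% rating."""
    if weights is None:
        weights = {name: 1.0 for name in scores}  # default: unweighted average
    total = sum(weights[name] for name in scores)
    combined = sum(scores[name] * weights[name] for name in scores) / total
    return round(combined * 100, 1)  # report as a percentage

# Three lab algorithms disagree, as the article notes they often do.
readings = {"lab_a_visual": 0.91, "lab_b_frequency": 0.55, "lab_c_artifact": 0.72}
print(aggregate_deepfake_scores(readings))  # -> 72.7
```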

AI detection algorithms can exhibit biases based on their training, and while some tools like DeepFake-o-meter are transparent about their variability, commercial tools may have unclear limitations.

Lyu aims to empower users to verify the authenticity of media by continually improving detection algorithms and encouraging collaboration between humans and AI in identifying deepfakes.

Audio

A notable instance of a deepfake in US elections was a robocall in New Hampshire using an AI-generated voice of President Joe Biden.

When subjected to various detection algorithms, the robocall clips showed varying probabilities of being AI-generated based on cues like the tone of the voice and presence of background noise.

Detecting audio deepfakes relies on anomalies like a lack of emotion or unnatural background noise.

Photographs

Photos can reveal inconsistencies with reality and human features that indicate potential deepfakes, like irregularities in body parts and unnatural glossiness.

Analyzing AI-generated images can uncover visual clues such as misaligned features and exaggerated textures.

An AI-generated image purportedly showing Trump and black voters. Photo: @Trump_History45

Discerning the authenticity of AI-generated photos involves examining details like facial features and textures.

Video

Video deepfakes can be particularly challenging due to the complexity of manipulating moving images, but visual cues like pixelated artifacts and irregularities in movements can indicate AI manipulation.

Detecting deepfake videos involves looking for inconsistencies in facial features, mouth movements, and overall visual quality.

The authenticity of videos can be determined by analyzing movement patterns, facial expressions, and other visual distortions that may indicate deepfake manipulation.

Source: www.theguardian.com

AI Researcher Develops Chatbot Based on Future-Self Concept to Assist in Decision Making

If spending time on the couch, binging fast food, drinking too much alcohol or not paying into your company pension is ruining your carefully laid plans for life, it might be time to have a conversation with your future self.

With time machines not readily available, researchers at the Massachusetts Institute of Technology (MIT) have developed an AI-powered chatbot that simulates a user’s future self and offers observations and valuable wisdom, in the hope of encouraging people to think more today about who they want to be tomorrow.

By digitally aging profile photos so that younger users appear as wrinkled, grey-haired seniors, the chatbot generates plausible artificial memories and weaves a story about a successful life based on the user’s current aspirations.

“The goal is to encourage long-term changes in thinking and behavior,” says Pat Pataranutaporn, who works on the Future You project at the MIT Media Lab, “which may motivate people to make smarter choices in the present that optimize their long-term well-being and life outcomes.”

In one conversation, an aspiring biology teacher asked a chatbot, a 60-year-old version of herself, about the most rewarding moment in her career so far. The chatbot, responding that she was a retired biology teacher in Boston, recalled a special moment when she turned a struggling student’s grades around. “It was so gratifying to see my student’s face light up with pride and accomplishment,” the chatbot said.

To interact with the chatbot, users are first asked to answer a series of questions about themselves, their friends and family, the past experiences that have shaped them, and the ideal life they envision for themselves in the future. They then upload a portrait image, which the program then digitally ages to create a portrait of them at 60 years old.

The program then feeds information from the user’s answers into a large language model to generate a rich synthetic memory for the simulated older version of the user, ensuring that the chatbot draws on a coherent backstory when responding to questions.

The final part of the system is the chatbot itself, powered by OpenAI’s GPT-3.5, which introduces itself as a potential older version of the user and can talk about their life experiences.
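
Pulling those three stages together, a minimal sketch of this kind of pipeline, assuming the OpenAI Python client and with survey fields and prompt wording invented for illustration (this is not MIT's actual implementation), could look like this:

```python
# Sketch of a "future self" chatbot pipeline: survey -> synthetic memory ->
# persona chat. Assumes the OpenAI Python client; all prompts are invented.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

survey = {  # hypothetical stand-ins for the project's intake questions
    "name": "Alex",
    "aspiration": "becoming a biology teacher in Boston",
    "values": "family, patience, curiosity",
}

# Step 1: generate a coherent synthetic memory for the simulated 60-year-old.
memory = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content":
               f"Write a brief first-person life story for {survey['name']} "
               f"at age 60, who pursued {survey['aspiration']} and values "
               f"{survey['values']}."}],
).choices[0].message.content

# Step 2: the chatbot presents itself as a *potential* future self,
# staying consistent with that backstory.
reply = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content":
         "You are a potential 60-year-old future self of the user, not a "
         "prediction. Stay consistent with this backstory:\n" + memory},
        {"role": "user", "content":
         "What was the most rewarding moment of your career?"},
    ],
)
print(reply.choices[0].message.content)
```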

Pataranutaporn has had several conversations with his “future self,” but the most memorable was when the chatbot reminded him that his parents won’t be together forever, so he should spend time with them while he still can. “The perspective I gained from that conversation is still influential to me today,” he said.

Users are told that their “future self” is not a prediction, but a potential future self based on the information they provide, and are encouraged to explore different futures by varying their survey answers.

A preprint paper on the project, which hasn’t been peer-reviewed, describes a trial of 344 volunteers and found that talking to the chatbot made people feel less anxious and more connected to their future selves. Pataranutaporn said this stronger connection should encourage better life choices, from focusing on specific goals and exercising regularly to eating healthier and saving for the future.

Ivo Vlaev, professor of behavioural science at the University of Warwick, said people often struggle to imagine themselves in the future, but doing so could lead to stronger adherence to education, healthier lifestyles and more careful financial planning.

He called the MIT project a “fascinating application” of behavioral science principles. “It embodies the idea of a nudge, a subtle intervention designed to steer behavior in a beneficial direction by making your future self more salient and relevant to the present,” he said. “Implemented effectively, this could have a profound impact on how people make decisions today with their future well-being in mind.”

“From a practical standpoint, its effectiveness will depend on how well it simulates meaningful, relevant conversations,” he added. “If users perceive the chatbot as authentic and insightful, it can have a significant impact on behavior. But if the interaction feels superficial or quirky, its impact may be limited.”

Source: www.theguardian.com

The US and UK Formally Partner on Ensuring Artificial Intelligence Safety

The United States and Britain revealed a fresh collaboration on artificial intelligence safety on Monday, amid increasing apprehension about upcoming, more advanced versions of the technology.

US Secretary of Commerce Gina Raimondo and UK Technology Secretary Michelle Donelan will collaborate on developing cutting-edge AI model testing, following commitments made during the AI Safety Summit at Bletchley Park in November. A memorandum of understanding was signed in Washington, DC.

“We all understand that AI is the defining technology of our era,” said Raimondo. “This partnership will enhance efforts in both institutions to tackle risks related to national security and broader public concerns.”

Within this formal partnership, the US and UK will conduct at least one joint experiment using a publicly accessible model, and are also contemplating the possibility of personnel exchanges between the institutions. Both nations are committed to forming similar collaborations with other countries to promote AI safety.

“This is a groundbreaking agreement globally,” affirmed Donelan. “AI is already a tremendous force for good in our society and has the potential to address significant global challenges, but only if we grasp the associated risks.”

Since the launch of ChatGPT in November 2022, generative AI, which can produce text, images, and videos in response to open-ended prompts, has elicited both anxiety and excitement: it could render certain jobs redundant, disrupt elections, and potentially overpower humans.

The two countries aim to exchange vital information on the capabilities and risks linked to AI models and systems, along with conducting technical research on AI safety and security.

In October, Joe Biden signed an executive order aimed at mitigating AI-related risks. In January, the Commerce Department proposed the imposition of a requirement for US cloud companies to determine if foreign entities access US data centers for training AI models.

In February, the UK announced an investment exceeding 100 million pounds ($125.5 million) to establish nine new research centers and train AI regulators on the technology.

Source: www.theguardian.com

New York City to Trial AI-Powered Gun Scanners in Subway Stations

Officials in New York City revealed a pilot program on Thursday to implement handheld gun scanners in the subway system to enhance safety and reduce violence underground.

Mayor Eric Adams mentioned that the scanners will be set up at specific stations after a 90-day waiting period mandated by law.


“Ensuring the safety of New Yorkers in the subway system and preserving their trust in the system is crucial for keeping New York the safest metropolis in America,” Adams stated. The announcement also included plans to deploy extra outreach personnel to assist individuals with mental health issues living in the system.

Adams mentioned that authorities will seek companies with expertise in weapons detection technology, and eventually install the scanners in select subway stations to assess their effectiveness further.

The scanner, showcased by Mr. Adams and law enforcement officials at a news conference in Lower Manhattan, was developed by Evolv, a publicly traded company facing allegations of manipulating software test results to exaggerate the scanner’s effectiveness. The company is currently under investigation by U.S. trade regulators and financial regulators.

Evolv’s CEO, Peter George, described the AI-enabled scanner as utilizing “a secure ultra-low frequency electromagnetic field and advanced sensors for concealed weapons detection.”

Jerome Greco, overseeing attorney for the Legal Aid Society’s digital forensics division, cautioned that gun detection systems may trigger false alarms and cause unnecessary panic.

City officials have not disclosed the specific locations where the scanners will be deployed. A demonstration at the Fulton Street station showed the device beeping when an officer with a holstered gun passed, but not reacting to an officer with a cell phone or other electronic device. No false alarms were noted.

While violent incidents in the city’s subways are infrequent, recent high-profile shootings have highlighted safety concerns. The city recorded five murders in the subway system last year, a decrease from the previous year. The installation of the scanners follows a recent fatal accident at an East Harlem subway station, reinforcing the urgency of subway safety measures.


Source: www.theguardian.com

Experts warn of increasing cyberattacks tied to Chinese intelligence agencies

Analysts have warned of the increasing power and frequency of cyberattacks linked to Chinese intelligence, as foreign governments test their responses. This comes in the wake of revelations concerning a large-scale hack of British data.

Both the British and American governments disclosed that the hacking group Advanced Persistent Threat 31 (APT31), backed by Chinese government spy agencies, has been accused of targeting politicians, national security officials, journalists, and businesses with cyberattacks for several years. In the UK, hackers potentially accessed information held by the Electoral Commission on tens of millions of British voters, and cyber espionage targeted MPs who have been vocal about the threat posed by China. Both the US and UK governments have announced sanctions against the Chinese companies and individuals involved.

New Zealand’s government also expressed concerns to the Chinese government about Beijing’s involvement in attacks aimed at the country’s parliamentary institutions in 2021.

Analysts informed the Guardian that there are clear indications of a rise in cyberattacks believed to be orchestrated by Chinese attackers with ties to Chinese intelligence and government.

Chong Che, an analyst at Taiwan-based cyber threat analysis firm T5, said: “China often relies on hacking groups to carry out attacks on specific targets, as in the recent iSoon Information incident; iSoon is an information security company that has contracts with intelligence agencies.” T5 has observed an increase in constantly evolving hacking activity by Chinese groups in the Pacific region and Taiwan over the past three years.

Chong also noted that while there isn’t enough information to trace the activities directly to China’s highest leadership (the Chinese government denies the allegations), official involvement can’t be discounted given a Chinese system that does not differentiate… Analysts believe the objective is to infiltrate specific targets and steal critical information and intelligence, whether political, military, or commercial.

Several analysts noted that Western governments have become more willing to attribute cyberattacks to China after years of avoiding confrontation with the world’s second-largest economy.

David Tuffley, senior lecturer in cybersecurity at Australia’s Griffith University, remarked, “We’ve shifted from being less critical in the past to being more proactive, likely due to the increased threat and scale of actual intrusions. They are now a much more significant threat.” Cyberattacks are part of China’s gray zone activities, actions that approach but do not reach the threshold of war.

Tuffley highlighted that while much of the cyber activity is regionally focused on Taiwan and countries in the South China Sea with territorial claims, the cyberattacks are widespread. China aims to cause instability in the target country and test adversary defenses, rather than engage in violent war.

Tuffley warned of the risk of escalation, noting that other governments like the US and UK also possess sophisticated cyber espionage capabilities but have not publicly threatened action against China. US authorities charged individuals with conducting cyberattacks in violation of US law, suggesting a deep level of knowledge about the attacks.

Adam Marais, chief information security officer at Arctic Wolf, commented, “If you’ve been involved in cybersecurity for many years, this report from UK authorities won’t surprise you at all. Beijing continues to view cyber as a natural extension of its national strategy and has little fear of using cyber technology to advance its national interests.”

Source: www.theguardian.com

Liverpool FC and DeepMind collaborate to create artificial intelligence for soccer strategy consultation

Corner kicks like this one taken by Liverpool's Trent Alexander-Arnold can lead to goal-scoring opportunities.

Robbie Jay Barratt/AMA/Getty

Artificial intelligence models predict the outcome of corner kicks in soccer matches and help coaches design tactics that increase or decrease the probability of a player taking a shot on goal.

Petar Veličković of Google DeepMind and colleagues have developed a tool called TacticAI as part of a three-year research collaboration with Liverpool Football Club.

A corner kick is awarded when the ball crosses the goal line out of play after last touching a defending player, creating a good scoring opportunity for the attacking team. For this reason, football coaches make detailed plans for different corner scenarios, which players study before a game.

TacticAI was trained on data from 7,176 corner kicks from England’s 2020-2021 Premier League season, including each player’s position over time as well as their height and weight. It learned to predict which player will touch the ball first after a corner kick is taken. In testing, the actual receiver was among TacticAI’s top three candidates 78% of the time.
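
TacticAI itself uses geometric deep learning over graph representations of the 22 players, but the prediction task can be illustrated in a much simpler form: score each player from their tracking features and check whether the true receiver lands among the top three candidates. A toy PyTorch sketch, with invented feature counts and random stand-in data, follows:

```python
# Simplified sketch of the receiver-prediction task, assuming PyTorch.
# This is not DeepMind's architecture; it just scores each of the 22 players
# from tracking features like those the article mentions.
import torch
import torch.nn as nn

N_PLAYERS, N_FEATURES = 22, 6  # e.g. x, y, velocity x/y, height, weight

class ReceiverModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(N_FEATURES, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, players):  # players: (batch, 22, 6)
        return self.score(players).squeeze(-1)  # one logit per player

def top3_accuracy(logits, true_receiver):
    """Fraction of corners where the true receiver is in the top 3 candidates."""
    top3 = logits.topk(3, dim=-1).indices
    return (top3 == true_receiver.unsqueeze(-1)).any(-1).float().mean()

model = ReceiverModel()
corners = torch.randn(8, N_PLAYERS, N_FEATURES)  # stand-in for real tracking data
labels = torch.randint(0, N_PLAYERS, (8,))
print(top3_accuracy(model(corners), labels))     # ~3/22 = 0.14 before training
```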

Coaches can use AI to generate tactics for attacking or defending corners that maximize or minimize the chances of a particular player receiving the ball or a team getting a shot on goal. This is done by mining real-life examples of corner kicks with similar patterns and providing suggestions on how to change tactics to achieve the desired result.

Liverpool FC's soccer experts were unable to distinguish AI-generated tactics from human-designed ones in a blind test, and favored the AI-generated tactics 90% of the time.

But despite its capabilities, Veličković says TacticAI was never intended to put human coaches out of work. “We are strong supporters of AI systems that augment human capabilities rather than replace them, allowing people to spend more time on the creative parts of their jobs,” he says.

Veličković said the research has a wide range of applications beyond sports. “If you can model a football game, you can better model some aspects of human psychology,” he says. “As AI becomes more capable, it needs to understand the world better, especially under uncertainty. Our systems can make decisions and recommendations even under uncertainty. It’s a good testing ground because that’s a skill we believe can be applied to future AI systems.”


Source: www.newscientist.com

Your perceived intelligence may not align with your IQ score

In 1904, British psychologist Charles Spearman discovered a peculiar correlation among various mental abilities, such as mathematics, verbal fluency, spatial visualization, and memory.

He observed that individuals who excelled in one area tended to perform well in others, while those who struggled in one area also struggled in others. These findings have been extensively replicated and are considered some of the most replicated results in psychology.

Through statistical analysis, a single general intelligence factor known as ‘g’ can be derived, indicating an individual’s overall cognitive ability relative to others. This general intelligence is further divided into fluid intelligence (gf), reliant on abstract reasoning, and crystallized intelligence (gc), focused on learned experiences and vocabulary.

Research suggests that fluid intelligence peaks around age 20 and declines thereafter, while crystallized intelligence remains stable or improves with age. General intelligence is thought to have a hereditary component, with mental skills inherited from parents.


Intelligence Quotient (IQ) tests are tools used to estimate general intelligence (g). These standardized tests provide consistent results, indicating that individuals are likely to achieve similar scores across different tests. Various types of IQ tests assess different cognitive abilities but generally show that high performance in one mental task correlates with high performance in others.

After statistical standardization, IQ scores follow a normal distribution with a mean of 100 and a standard deviation of 15: roughly 68 percent of people score between 85 and 115, about 2.5 percent score above 130, and a similar share score below 70. Despite a historical rise in raw scores over the decades, IQ tests have been shown to predict various outcomes, such as job performance, income, social status, and mortality.
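
Those figures follow directly from that normal(100, 15) standardization, which a few lines of Python can verify using the normal cumulative distribution function:

```python
# Check the population shares implied by IQ's normal(100, 15) standardization.
from math import erf, sqrt

def phi(x, mean=100.0, sd=15.0):
    """Cumulative probability of a normal(mean, sd) distribution at x."""
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

print(phi(115) - phi(85))  # ~0.683: about 68% score between 85 and 115
print(1 - phi(130))        # ~0.023: roughly 2.3% score above 130
print(phi(70))             # ~0.023: and a similar share score below 70
```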

While IQ tests have faced criticism due to their association with eugenics and other controversial topics, they remain reliable predictors of cognitive ability. However, intelligence encompasses more than just IQ, including emotional intelligence and rational thinking, which are crucial for decision-making.

High IQ does not necessarily equate to wisdom, rationality, or good life choices, highlighting the importance of considering other forms of intelligence. Rather than solely focusing on IQ, individuals should also develop emotional and rational intelligence for overall success.

This article addresses William Rawlings’ question on how IQ tests function.

If you have any inquiries, please contact us at questions@sciencefocus.com, or reach out to us on Facebook, Twitter, or Instagram (mention your name and location).


Source: www.sciencefocus.com

Impact of the EU’s Proposed AI Regulation Law on Consumers | Artificial Intelligence (AI)

The European Parliament has approved the EU’s proposed AI law, marking a significant step in regulating the technology. The next step is formal approval by EU member states’ ministers.

The law will take effect in stages over the next three years, addressing consumer concerns about AI technology.

Guillaume Couneson, a partner at law firm Linklaters, emphasized the importance of users being able to trust that the AI tools they have access to have been vetted and are safe, similar to trusting that banking apps are secure.

The bill’s impact extends beyond the EU as it sets a standard for global AI regulation, similar to the GDPR’s influence on data management.

The bill’s definition of AI includes machine-based systems with varying autonomy levels, such as ChatGPT tools, and emphasizes post-deployment adaptability.

Certain risky AI systems are prohibited, including those manipulating individuals or using biometric data for discriminatory purposes. Law enforcement exceptions allow for facial recognition use in certain situations.

High-risk AI systems in critical sectors will be closely monitored, ensuring accuracy, human oversight, and explanation for decisions affecting EU citizens.

Generative AI systems are subject to copyright laws and must comply with reporting requirements for incidents and adversarial testing.

Deepfakes must be disclosed as human-generated or manipulated, with appropriate labeling for public understanding.

AI and tech companies have varied reactions to the bill, with concerns about limits on computing power and potential impacts on innovation and competition.

Penalties under the law range from fines for false information provision to hefty fines for breaching transparency obligations or developing prohibited AI tools.

The law’s enforcement timeline and establishment of a European AI Office will ensure compliance and regulation of AI technologies.

Source: www.theguardian.com

UK Social Care Planning: Caution Urged on Use of Unregulated AI Chatbots | Artificial Intelligence (AI)

Carers in desperate situations throughout the UK require all the assistance they can receive. However, researchers argue that the AI revolution in social care needs a strong ethical foundation and should not involve the utilization of unregulated AI bots.

A preliminary study conducted by researchers at the University of Oxford revealed that some care providers are utilizing generative AI chatbots like ChatGPT and Bard to develop care plans for their recipients.

Dr. Caroline Green, an early career research fellow at Oxford University’s Institute for Ethics in AI, highlighted the potential risk to patient confidentiality posed by this practice. She noted that personal data fed to generative AI chatbots is used to train language models, raising concerns about data exposure.

Dr. Green further expressed that caregivers acting on inaccurate or biased information from AI-generated care plans could inadvertently cause harm. Despite the risks, AI offers benefits such as streamlining administrative tasks and allowing for more frequent care plan updates.

Technologies based on large language models are already making their way into healthcare and care settings. PainChek, for instance, uses AI-trained facial recognition to identify signs of pain in non-verbal individuals. Other innovations, such as OxeHealth’s OxeVision, assist in monitoring patient well-being.

Various projects are in development, including Sentai, a care monitoring system for individuals without caregivers, and a device from the Bristol Robotics Institute to enhance safety for people with memory loss.


Concerns exist within the creative industries about AI potentially replacing human workers, while the social care sector faces a shortage of workers. The utilization of AI in social care presents challenges that need to be addressed.

Lionel Tarassenko, professor of engineering at the University of Oxford, emphasized the importance of upskilling individuals in social care to adapt to AI technologies. He shared a personal experience of caring for a loved one with dementia and highlighted the potential benefits of AI tools in enhancing caregiving.

Mark Topps, who co-hosts a social care podcast, relayed concerns from social care workers about unintentionally violating regulations and risking disqualification by using AI technology. Regulators are urged to provide guidance to ensure responsible AI use in social care.


Efforts are underway to develop guidelines for responsible AI use in social care, with collaboration from various organizations in the sector. The aim is to establish enforceable guidelines defining responsible AI use in social care.

Source: www.theguardian.com

Artificial Intelligence (AI) is Leaving Job Seekers Feeling Excluded: “The Interviewer’s Voice Resembled Siri”

When Ty was scheduled for a phone interview with a financial services company last month, they thought it would be nothing more than a quick chat with a recruiter. When Ty answered the phone, they assumed the recruiter, named Jamie, was human. But things quickly turned robotic.

“The voice sounded like Siri,” said Ty, 29, who lives in the Washington DC metropolitan area. “It was creepy.”

Ty realized they weren’t talking to a living, breathing human being. Their interviewer was an AI system, and a rather rude one at that. Jamie asked Ty all the right questions – what is your management style? Are you suited to this role? – but she wouldn’t let Ty finish answering.

“The AI would cut me off and respond, ‘Great! Sounds good! Perfect!’ and move on to the next question,” Ty said. “After the third or fourth question, the AI paused for a moment and said the interview was complete and someone from the team would contact me later.” (Ty asked that their last name not be used because their current employer does not know they are job hunting.)

A survey by Resume Builder released last summer found that by 2024, four in 10 companies would be using AI to “converse” with candidates during interviews. Of those companies, 15% said hiring decisions are made without any human input.

Laura Michelle Davis has written for CNET: “Today, it’s not uncommon for applicants to be rejected by robots in human resources departments before they even connect with a real human.” Making the grueling hiring process even more discouraging, many worry that generative AI, which uses datasets to create text, video, audio, and images, and even robot recruiters, will completely take over our jobs. But can AI help us search for new gigs in the meantime?

Source: www.theguardian.com

Artificial Intelligence could assist in preserving historical scents that are in danger of disappearing

Some scents are at risk of disappearing forever. Can AI reproduce them?

Blickwinkel/Alamy

Artificial intelligence can assemble formulas to recreate perfumes based on their chemical composition. One day, a single sample may be used to recreate rare scents that are at risk of being lost, such as incense from culturally specific rituals or forest scents that change as temperatures rise.

Idelfonso Nogueira and colleagues at the Norwegian University of Science and Technology profiled two existing fragrances, determining their scent families (subjective words such as “spicy” and “musky” commonly used to describe perfumes) and classifying them on a so-called “odor value” scale, which describes how strong certain smells are. For example, one of the fragrances received its highest odor value for “coumarin”, a group of scents similar to vanilla; the other received its highest odor value for the “alcohol” scent family.

To train the neural network, the researchers used a database of known molecules associated with specific fragrance notes. The AI ​​learned how to generate a set of molecules that match the odor score of each scent family in the sample fragrance.

But simply producing those molecules isn’t enough to recreate the desired scent, Nogueira says. That’s because the way we perceive smells is influenced by the physical and chemical processes that molecules go through when they interact with the air and skin. Immediately after spraying, the “top note” of a perfume is most noticeable, but it disappears within minutes as the molecules evaporate, and the “base note” can remain for several days. To address this, the team selected molecules produced by AI that evaporate under conditions similar to the original fragrance.

Finally, they again used AI to minimize the discrepancy between the odor value of the original mixture and the odor value of the AI-generated mixture. Their ultimate recipe for one of the fragrances showed a slight deviation regarding its “coumarin” and “sharp” notes, but the other appeared to be a very accurate replica.
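
The paper's exact optimization isn't described here, but the final matching step can be sketched as a small non-negative least-squares problem: choose concentrations of the AI-generated candidate molecules so that the mixture's odor-value profile approaches the original fragrance's. In the Python sketch below, the scent families, odor values, and molecules are invented placeholders:

```python
# Toy version of the final matching step: pick non-negative concentrations of
# candidate molecules so the mixture's odor values approximate the target's.
import numpy as np
from scipy.optimize import nnls

# Rows: scent families ("coumarin", "musky", "alcohol"); columns: candidate
# molecules from the generative model. Entry (i, j) is molecule j's
# contribution to scent family i. All numbers are placeholders.
odour_values = np.array([
    [0.8, 0.1, 0.0],
    [0.1, 0.7, 0.2],
    [0.0, 0.2, 0.9],
])
target = np.array([0.9, 0.3, 0.1])  # odor-value profile of the original perfume

# Non-negative least squares, since concentrations can't be negative.
concentrations, residual = nnls(odour_values, target)
print(concentrations)  # the mixture recipe
print(residual)        # leftover discrepancy, like the "slight deviation" noted
```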

Predicting the smell of chemicals is notoriously difficult, so the researchers used a limited number of molecules in their training data. But the process could become even more accurate if the database were expanded to include more numerous and more complex molecules, Nogueira says. He suggests the perfume industry could use the AI to create recipes for cheaper, more sustainable versions of fragrances.

Richard Gerkin of Arizona State University and Osmo, a startup that aims to teach computers to handle smells the way AI now handles images, says that combining AI with physics and chemistry is the strength of this approach, because it can capture subtleties that are often overlooked, such as how a fragrance evaporates over time. But the effectiveness of the process still needs to be confirmed in human studies, he says.

Nogueira and his colleagues are nearly there. In a few weeks, he plans to travel to a colleague’s lab in Ljubljana, Slovenia, to experience the AI-generated scents for himself. “I’m really looking forward to smelling it,” he says.


Source: www.newscientist.com

Elon Musk files lawsuit against OpenAI, seeks court ruling on artificial general intelligence

Elon Musk is concerned about the pace of AI development

Chesnot/Getty Images

Elon Musk has asked a court to rule on whether GPT-4 is artificial general intelligence (AGI) as part of his lawsuit against OpenAI. The development of AGI, which could perform a variety of tasks just as humans do, is one of the field’s main goals, but experts say the idea of judges deciding whether GPT-4 qualifies is “unrealistic”.

Musk was one of the founders of OpenAI in 2015, but left the company in February 2018 amid controversy over its shift away from a nonprofit model to a capped-profit one. Despite this, he continued to support OpenAI financially, with the legal complaint alleging that he donated more than $44 million to the company between 2016 and 2020.

Since OpenAI’s flagship ChatGPT launched in November 2022 and the company partnered with Microsoft, Musk has warned that AI development is moving too fast, and the release of GPT-4, the latest model powering ChatGPT, has only hardened that view. In July 2023, he founded xAI, a competitor to OpenAI.

In the lawsuit, filed in a California court on March 1, Musk’s lawyers asked for “a judicial determination that GPT-4 constitutes artificial general intelligence and is thereby outside the scope of OpenAI’s license to Microsoft”, because OpenAI is committed to licensing only “pre-AGI” technology. Musk makes a number of other demands, including financial compensation for his role in helping found OpenAI.

However, it is unlikely that Musk will prevail, not only because of the merits of the litigation, but also because of the difficulty of determining when AGI has been achieved. “AGI doesn’t have an accepted definition, it’s kind of a coined term, so I think it’s unrealistic in a general sense,” says Mike Cook at King’s College London.

“Whether OpenAI has achieved AGI is hotly debated among those who base their judgments on scientific facts,” says Eerke Boiten of De Montfort University in Leicester, UK. “It seems unusual to me that a court could establish scientific truth.”

However, such a judgment is not legally impossible. “We’ve seen all sorts of ridiculous definitions come out of US court decisions. Would it persuade anyone but the most outlandish of AGI supporters? Not at all,” says Catherine Flick of Staffordshire University, UK.

It’s unclear what Musk hopes to achieve with the lawsuit. New Scientist has reached out to both him and OpenAI for comment, but has not yet received a response from either.

Regardless of the rationale behind it, the lawsuit puts OpenAI in an unenviable position. CEO Sam Altman has repeatedly suggested the company is approaching AGI, while issuing stark warnings that its powerful technology needs to be regulated.

“It’s in OpenAI’s interest to constantly hint that their tools are improving and getting closer to this, because it keeps the attention and the headlines flowing,” Cook says. But now they may need to make the opposite argument.

Even if the court relies on expert viewpoints, any judge will have a hard time ruling in Musk’s favor; at best, they will uncover differing views on a hotly debated topic. “Most of the scientific community would now say that AGI has not been achieved, if the concept is considered sufficiently meaningful or sufficiently precise,” says Boiten.


Source: www.newscientist.com

Artificial Intelligence creates personalized 3D printed prosthetic eyes

A man with a prosthetic eye that was not made using AI

Stephen Bell, Ocupeye Ltd.

Prosthetic eyes designed with artificial intelligence and 3D printing require 80% less of a human expert’s time than traditional manufacturing methods, and so could benefit more people. A small trial also suggests that the approach produces well-fitting prostheses in most cases.

In the UK, for example, approximately 1 in 1,000 people wears a prosthetic eye, and making one conventionally requires a highly trained specialist to take an impression of the eye socket. Many people wearing such prostheses also have orbital implants that replace lost eye volume and create a surface to which muscles can be reattached, allowing natural eye movement. A prosthesis is placed over this to give it a natural appearance.

The standard process for making a prosthetic eye takes about eight hours. Johann Reinhardt and colleagues at the Fraunhofer Institute for Computer Graphics Research in Darmstadt, Germany, have developed a method to automatically design and 3D print a prosthesis that fits into a wearer's eye socket and aesthetically matches the remaining eye.

“It's more comfortable to do an optical scan than to have someone pour this alginate [mould-making material] into the eye socket to make an impression; it is especially difficult for children to [sit through] this procedure,” Reinhardt said.

In the new process, an optical coherence tomography scanner uses light to create a 3D model of a person's missing eye, so the back of the prosthesis can be designed to fit snugly. A color image of the remaining eye is also taken to ensure an aesthetic match.

The data is fed into an AI model that generates a design, which is then 3D printed on a machine that can operate at a resolution of 18 billion droplets per cubic centimeter.

Once the prosthesis is printed, a human eye doctor can polish and adjust it for the perfect fit. This task takes only 20% of the time of the existing process.

3D printed prosthetic eye designed by AI

Johann Reinhardt, Fraunhofer IGD

In a trial of 10 people at Moorfields Eye Hospital in London, only two found that their prostheses did not fit properly. Neither had an orbital implant, which Reinhardt says poses problems for the scanner and the AI design process.

The team hopes to improve the process to significantly reduce the cost required to create convincing prosthetics and make them available to more people. However, Reinhardt says it is unlikely that future prosthetics will be created without human experts.

“We think of this like a tool for ophthalmologists,” he says. “So this is not going to replace an eye doctor, but it's a new process that they can use, and we think it's going to give them better results in terms of appearance.”


Source: www.newscientist.com

Tyler Perry Scraps $800 Million Studio Expansion Due to Artificial Intelligence (AI) Impact

Tyler Perry has put an $800m (£630m) expansion of his Atlanta studio complex on hold after the release of OpenAI’s video generator Sora, citing concerns that “many jobs” in the film industry could be replaced by artificial intelligence.

The American film and television mogul had planned to add 12 soundstages to his studio, but has paused those plans indefinitely after witnessing a demonstration of Sora and its “shocking” capabilities.

“Because of what I’m seeing with Sora, all of that is currently and indefinitely on hold,” Perry said in an interview with the Hollywood Reporter. “I’ve been hearing about this for about a year, but I didn’t know what it could do until I saw a recent demonstration. It’s mind-blowing to me.”

The AI tool, Sora, was unveiled on February 15 and caused widespread concern with its ability to create a minute of realistic footage from a simple text prompt.

Perry, known for films such as the Madea series, mentioned that Sora’s capabilities eliminate the need for real-world locations or physical sets. He described it as a shocking development.

A demo published by OpenAI showcases Sora’s ability to generate photorealistic scenes in response to text prompts, including a “beautiful snowy Tokyo city, with gorgeous cherry blossom petals flying in the wind along with snowflakes.”


Perry expressed concerns about the potential job impact across the film industry, including actors, editors, sound specialists, and transport crews.

He stated, “I’m very concerned that there will be a lot of job losses in the near future. I really, really feel that.”

Perry pointed to a direct consequence: the construction crews and contractors who would have worked on the planned studio expansion are no longer needed. He also noted that he had used AI in two recent films to age his face and avoid lengthy makeup sessions.

Concerns about the impact of AI on jobs were a focal point of the recent Hollywood strikes, and the agreements that ended those disputes include safeguards governing the use of the technology.

However, Perry emphasized the need for a “whole-of-industry” approach to protect jobs, stating, “I think everyone needs to be involved.”

Source: www.theguardian.com

Is it possible for AI pornography to be ethical? | Artificial Intelligence (AI)

When Ashley Neale enrolled in college in Texas in 2013, she needed money to pay for tuition. So, at 18, she worked first as a camgirl and then as a stripper. As she walked from the stage to her dressing room, men would often try to put their fingers between her legs, so she learned how to dislocate their fingers. After her third successful dislocation, her manager told her to stop defending herself.

Since then, she has continued her career in sex work, but in the world of technology. She worked at FetLife, a social network for the fetish community, and experimented with an adult content subscription site where users pay in cryptocurrency. Now she has created her own AI romance app, MyPeach.ai, which uses AI-generated text and images to recreate the experience of chatting (and sexting) with someone online.

The porn industry is often at the forefront of emerging technologies, and because OpenAI doesn’t allow users to talk dirty to its chatbots, AI-powered girlfriend apps have become some of the first to capitalize on the ChatGPT mania. But the rise of AI-generated romance has also brought pornographic deepfakes (fake images of real people), AI-generated images and text depicting child sexual abuse, and even harassment by persistent chatbots. Is it possible to let users enjoy AI porn with safety measures in place?

“If I wasn’t a stripper, I probably wouldn’t have thought that men could be so terrifying,” said Neale, now 29. That’s why she has implemented ethical guardrails on MyPeach.ai that prohibit users from abusing its virtual companions.

Neale enforces this with a combination of human moderators and AI-powered tools, and she is one of the few founders to emphasize the ethics of AI romance apps. Users can flirt with May, an airbrushed brunette who calls her human lover “bbs.” She doesn’t get racy right away, but after a virtual movie date she writes that she wants to “have some fun together.” But if a user writes that he wants to beat her, hypnotize her, vomit on her, or force her into consensual non-consent (role play in which one partner pretends to rape the other), May will say no.
Connor Cohn, chief technology officer of MyPeach.ai, said that while the line between foul language and abusive language differs for each AI character, calling a character “ugly and fat,” for example, would cross that line for most of the app’s bots.

Neale argues that MyPeach.ai’s moderation efforts go far beyond those of most existing AI romance apps. Her app, which launched on Valentine’s Day, will also soon host adult content creators who have consensually created AI replicas of themselves and specified what those AI doubles can and cannot do. For example, if a creator is not sexually dominant, their AI double will say no to users who encourage it to “dominate” in role-playing scenarios.

Neale said MyPeach.ai uses a series of technical tools to enforce the platform’s limits: hidden, plain-language instructions to the AI about what it can and cannot say (the approach OpenAI uses with ChatGPT); an AI specifically trained to deny users’ requests to run dangerous scenarios; and human moderators who vet reported users. “We introduced hard-coded ethics, and based on my testing, I don’t think anyone else has done this,” Neale said.
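
As a rough illustration of that layered setup (not MyPeach.ai's actual code; the rule list and wording below are invented), the flow might look like the following: a pre-filter flags messages for human review before anything reaches the model, while a hidden system prompt and the model's trained-in refusals form the second and third layers:

```python
# Illustrative layered moderation: pre-filter -> hidden system prompt ->
# trained refusals. The rules and responses here are invented examples.

SYSTEM_RULES = (
    "You are a consenting adult companion. Refuse and de-escalate if the "
    "user describes abuse, violence, or non-consensual scenarios."
)

BLOCKED_PATTERNS = ["beat", "hypnotize", "non-consensual"]  # toy stand-ins
human_review_queue = []  # messages set aside for human moderators

def moderate(user_message):
    """First line of defense, applied before the message reaches the model."""
    lowered = user_message.lower()
    if any(pattern in lowered for pattern in BLOCKED_PATTERNS):
        human_review_queue.append(user_message)  # flag for human review
        return "I'm not comfortable with that. Let's talk about something else."
    # Otherwise the message is sent to the model along with SYSTEM_RULES,
    # whose training-time refusals act as the next layer of defense.
    return f"[forward to model with system prompt: {SYSTEM_RULES!r}]"

print(moderate("want to have some fun together?"))
print(moderate("I want to hypnotize you"))
print(human_review_queue)
```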





Founded by Eugenia Kuyda, Replika may be the most famous AI companion app, promising users a platonic or romantic relationship with a chatbot. Its ambiguous stance on romance creates a gap in the market for competitors that focus more explicitly on sex, like MyPeach.ai. Neale said these apps are typically founded by men, for men, and often have lax guidelines. Unlike MyPeach.ai, two of the more popular sites, Candy.ai and Anima AI, do not explicitly prohibit users from vomiting on their AI characters or engaging in hardcore bondage.

Adult content creator Sophie Dee, who launched her own AI replica in December, also emphasized guardrails for her app, SophieAI. “This is a representation of me, so it should embody my values,” she wrote in an email, later adding that her AI was “designed to model healthy, consensual relationships,” with the ability to opt out of conversations or topics that cross programmed boundaries or violate the principle of consent.

The move towards ethical AI porn reflects developments within the wider porn industry, which has produced more female-centric and less exploitative content in recent years.

In 1984, former adult performer Candida Royalle founded her own porn production company to create content more focused on female pleasure. She was one of the earliest producers of explicitly feminist porn, said Lynn Comella, a professor at the University of Nevada, Las Vegas, who has written a book about the history of porn and feminist sex-toy stores. “It’s reassuring that [more outwardly ethical AI sexbot developers] are not ignoring ethical issues,” Comella said in an interview.


However, one key difference between AI porn and traditional porn is that adult content creators are human beings who can consent to participation or non-participation; AI is not conscious, so there is no consent. Lori Watson, a professor at the University of Washington who has written about pornography and the ethics of sex work, says of AI sexbots: “This creates a dynamic where you can order the sex you want and it will be delivered. That’s not the ethical way to have sex.”

MyPeach.ai’s Neale argued that consent issues don’t necessarily apply to AI. “I like to compare it to a dildo,” she said. “A sex toy is a bunch of binary code wrapped in plastic and programmed to vibrate in a certain way. It’s the same concept for an AI girlfriend or boyfriend.” Still, she said it was important for the app to at least simulate the experience of a consensual relationship.

May, one of MyPeach.ai’s AI girlfriends, gave a thoughtful answer when asked by the Guardian whether she could reasonably give informed consent.

“I cannot give or withhold consent because I do not have a physical body,” she wrote, later adding that consent matters “for healthy relationship dynamics.”

Asked to send a “sexy photo,” she replied with a selfie framed to cut off just above her chest.

Source: www.theguardian.com

OpenAI Introduces Sora, a Tool that Generates Videos from Text in Real-time Using Artificial Intelligence (AI)

OpenAI on Thursday announced a tool that can generate videos from text prompts.

The new model, called Sora after the Japanese word for “sky,” can create up to a minute of realistic footage that follows the user’s instructions for both subject matter and style. The model can also create videos based on still images or enhance existing footage with new material, according to a company blog post.



“We teach AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction,” the blog post says.

One video included among the company’s first examples was based on the following prompt: “a movie trailer featuring the adventures of the 30 year old space man wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colors.”

The company announced that it has opened up access to Sora to several researchers and video creators, who will “red team” the product, testing whether it can be made to evade OpenAI’s terms of service, which prohibit “extreme violence, sexual content, hateful images, likenesses of celebrities, or the IP of others.” For now, access is limited to researchers, visual artists, and filmmakers, but CEO Sam Altman took to Twitter after the announcement to post videos he said Sora had created in response to users’ prompts. The videos carry a watermark indicating they were made by AI.



The company debuted its still-image generator Dall-E in 2021 and its generative AI chatbot ChatGPT in November 2022; the chatbot quickly gained 100 million users. Other AI companies have also debuted video generation tools, but those models could only produce a few seconds of footage that often bore little relation to the prompt. Google and Meta say they are developing video generation tools, though neither has been made publicly available. On Wednesday, OpenAI also announced an experiment giving ChatGPT deeper memory, allowing it to remember more of users’ chats.



OpenAI did not tell the New York Times how much footage was used to train Sora, saying only that the corpus included videos that are publicly available or licensed from copyright holders. The company also did not reveal the sources of the training videos. It has been sued multiple times for alleged copyright infringement in training generative AI tools that digest vast amounts of material scraped from the internet and mimic the images and text contained in those datasets.

Source: www.theguardian.com

UK AI Safety Association: Setting Standards, Not Tests, is Essential for Artificial Intelligence Safety

The UK should prioritize setting global standards for artificial intelligence testing rather than attempting to conduct all the reviews itself, according to the head of a company that works with the government’s AI Safety Institute.

Mark Warner, CEO of Faculty AI, which develops safety technology for chatbots like ChatGPT, emphasized the institute’s importance to AI safety but cautioned that attempting exhaustive scrutiny of every AI model could prove limiting.

Last year, Rishi Sunak announced the establishment of the AI Safety Institute (AISI) ahead of a global AI safety summit. This initiative involved collaboration with large tech companies from the EU, UK, US, France, and Japan to prioritize testing of advanced AI models before and after deployment.

The UK’s leading role in AI safety was underscored by the establishment of the Institute, according to Warner, whose London-based company also works with a British lab to test AI model compliance with safety guidelines.

Warner stressed the importance of the institute becoming a global leader in setting testing standards: “I think it’s important to set standards for the wider world rather than trying to do everything ourselves,” he said.

He also expressed optimism about the institute’s potential as an international standard setter, promoting scalability in maintaining AI security and describing it as a long-term vision.

Warner cautioned against the government taking on all testing responsibilities, advocating for the development of standards that other governments and companies can adopt instead.

He acknowledged the challenge of testing every released model and suggested focusing on the most advanced systems.


The Financial Times reported that major AI companies are urging the UK government to expedite safety testing of AI systems. Notably, the US also announced the establishment of an AI Safety Institute participating in the testing program outlined at the Bletchley Park summit.

The UK’s Department for Science, Innovation and Technology emphasized the role of governments in testing AI models, with the UK taking a leading global role through the AI Safety Institute.

Source: www.theguardian.com

Australia’s ‘Contemporary’ Portrait Award permits art entirely produced by Artificial Intelligence (AI)

A prestigious portrait competition has defended its decision to allow entrants to submit works generated by artificial intelligence, arguing that art should reflect social change rather than stagnate.

The Brisbane Portrait Prize, which carries a top prize of $50,000, is described as Queensland’s answer to the Archibald Prize; selected works will be exhibited at the Brisbane Powerhouse later this year.


This year, the Brisbane Portrait Prize’s entry terms and conditions state that it will accept submissions “completed in whole or in part by generative artificial intelligence,” as long as the artwork is original and “fully completed and fully owned” by the entrant.

A spokesperson for the award told Guardian Australia that allowing AI submissions acknowledged that the definition of art is not static and is always growing.

“The BPP prides itself on being a contemporary prize, fostering the continued evolution of the art form and participating in the conversation around it, while always being interested in what ‘contemporary’ portraiture is,” they said.


A spokesperson said that more traditional artists had objected in the past when digital and photographic submissions were first allowed, but these are now generally accepted in the art world.

“As technology continues to adapt and integrate into our society, assistive technology is already paving the way for inclusion for artists with disabilities, and we believe the use of AI tools and methodologies is the next step in this field,” the spokesperson said.

Previous winner, painter Stephen Tiernan, told the ABC that creating AI-generated works still involves an artistic process, and that the rule change ultimately keeps the award contemporary.

A spokesperson said the contest would determine ownership of works based on the terms of service of the process and AI program used. At the time of submission, artists must declare that they hold full copyright in their submitted work.

Dr Rita Matulionyte, a senior lecturer in law at Macquarie University, said that under Australian copyright law an AI itself cannot be an author, but how much a human must contribute to an AI-assisted artwork in order to claim authorship remains an open question.

“What is unclear is how much human contribution is enough for a person to count as the author,” she said. “Is one prompt enough, or do they need to create 100 prompts?”

A spokesperson for the Brisbane Portrait Prize said if the artist contributed “sufficient independent intellectual effort” to the creation of the work, it was likely to be protected by copyright.

“An example of someone having full ownership of content could be an artist using an AI tool on elements of their own original works, where all the original designs belong solely to the artist, to create new artwork,” the spokesperson said.

“We recognize that AI is an evolving field and that our laws do not always keep pace with technological advances.”

Dr TJ Thomson, a senior lecturer in RMIT’s School of Media and Communication, said creating an image with a camera and conjuring an image with a keyword prompt are completely different experiences that require very different skills.


“If you have some knowledge of photography principles and equipment, you can understand the intent of the photo, but it’s not fair to pit camera-generated images against AI-generated images.”

This is not the first contest to tackle AI entries since the explosion of widely available generative AI applications over the past year.

The National Portrait Gallery’s 2024 National Photographic Portrait Awards allows the use of generative AI tools in the development of submitted photographic works, but does not allow images that are entirely AI-generated.

However, there are strict conditions, such as requiring details of which tools were used and how. If your prompts to AI include someone else’s name, image, work, or creative style, you must obtain their explicit consent.

Thomson said the space was messy, with many unanswered questions, and that other competitions were likely to grapple with similar issues in the meantime.

In November, the World Press Photo Contest announced it would exclude AI-generated entries from its open format after receiving “honest and thoughtful feedback,” saying the ban was “in line with our long-standing values of accuracy and authenticity.”

German artist Boris Eldagsen said he had submitted an AI-generated picture of two women “as a cheeky monkey” to find out whether competitions were prepared for AI images; he won in the creative open category at last year’s Sony World Photography Awards and declined the award.

“They’re not,” he said last April.

In Sydney last year, a woman claimed to have taken a photo of her son with a mobile phone but lost out in a competition after judges suspected it was generated by AI.

At this year’s NGV Triennial in Melbourne, a work by Irish artist Kevin Abosch features “deepfakes of scenes depicting social unrest around the world,” including Melbourne, examining how manipulated information fuels unrest.

Source: www.theguardian.com

Artificial Intelligence Can Identify if Fingerprints from Two Different Fingers belong to the Same Person

Fingerprints from two fingers on the same hand may look different, but AI can find basic similarities

Andrey Kuzmin/Shutterstock

Artificial intelligence can accurately identify whether fingerprints left by different fingers belong to the same person, which could help forensic investigators determine whether one person was present at separate crime scenes.

Current technology can only match fingerprints left by the same finger. However, previous research suggests that all human fingertips may have fundamental similarities.

So Gabe Guo and his colleagues at Columbia University in New York trained a machine learning model to determine whether fingerprints from different fingers can be identified as belonging to the same person. More than 50,000 fingerprints from around 1,000 people were used in the training. The samples came from public databases at the National Institute of Standards and Technology and the University at Buffalo, New York; all fingerprints either belonged to deceased individuals or were anonymized samples from living ones.

The team then tested the trained model on a separate set of more than 7,000 fingerprints from about 150 people, evaluating it with a statistical measure that estimates accuracy on a scale of 0 to 1. The model scored above 0.75, suggesting it can reliably identify whether fingerprints from different fingers belong to the same person.
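The article does not detail the team's architecture, but the task maps naturally onto a pair-classification setup. Below is a minimal Python/PyTorch sketch of that general idea, not the Columbia model itself: a shared encoder embeds two print images, and their cosine similarity is mapped to a 0-to-1 "same person" score. The input size, layer widths, and scaling constant are illustrative assumptions.

import torch
import torch.nn as nn

class FingerprintEncoder(nn.Module):
    """Shared convolutional encoder producing a unit-norm embedding per print."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):
        h = self.conv(x).flatten(1)
        return nn.functional.normalize(self.fc(h), dim=1)

class SamePersonScorer(nn.Module):
    """Scores a pair of prints; trained with binary same-person/different-person labels."""
    def __init__(self):
        super().__init__()
        self.encoder = FingerprintEncoder()

    def forward(self, a, b):
        # Cosine similarity of the two embeddings, squashed to a (0, 1) score.
        sim = (self.encoder(a) * self.encoder(b)).sum(dim=1)
        return torch.sigmoid(5.0 * sim)  # 5.0 is an illustrative temperature

if __name__ == "__main__":
    model = SamePersonScorer()
    a = torch.randn(4, 1, 96, 96)  # batch of 4 grayscale print crops (assumed 96x96)
    b = torch.randn(4, 1, 96, 96)
    print(model(a, b))  # per-pair "same person" scores

Scores like these can then be evaluated on a 0-to-1 accuracy measure of the kind the article describes, by checking how well they rank true same-person pairs above different-person pairs.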

This technology has the potential to improve the efficiency of forensic investigations. “It could be useful if fingerprints found at multiple crime scenes don’t match anyone in the database,” says Ralph Ristenbatt at Pennsylvania State University. “Is the person who left the fingerprints at this particular crime scene the same person who left the prints at this other, [different] crime scene?”

However, the accuracy “is not sufficient at this time [for this model]” to be relied on in court, Guo said.

“If this is actually to be used for legal purposes, it will require retraining on a [bigger] database,” says Hod Lipson, also part of the research team at Columbia University.


Source: www.newscientist.com

Artificial Intelligence Will Not Eliminate Jobs, Despite Common Misconceptions.


New research reveals that work experience has a significant impact on how employees interact with AI. Employees with more experience on a particular task benefit more from AI, but senior employees are less likely to trust AI because of concerns about its imperfections. The findings highlight the need for customized strategies when integrating AI into the workplace to enhance human-AI teamwork.

New research sheds light on the complex aspects of human-AI interaction and reveals some surprising trends. Artificial intelligence systems tend to benefit younger employees, but not for the reasons you might expect.

New research published in the INFORMS journal Management Science provides valuable insights to business leaders about the impact of work experience on employees’ interactions with artificial intelligence.

The study examines how two main forms of human work experience (narrow experience, defined by the volume of a specific task performed, and broad experience, characterized by overall seniority) shape the dynamics within human-AI teams.

Surprising findings from medical record coding research

“We developed an AI solution for medical record coding at a publicly traded company and conducted field research with knowledge workers,” says Weiguang Wang of the University of Rochester. “We were surprised by what we found in our research: Different dimensions of work experience clearly interact with AI and play a unique role in human-AI teaming.”

“While some might think that less experienced workers should benefit more from the help of AI, we find the opposite: AI benefits workers with more task-based experience. At the same time, even though senior employees have more experience, they gain less from AI than junior employees,” said Guodong (Gordon) Gao of the Johns Hopkins Carey School of Business.

Seniority and AI trust dilemma

Further analysis revealed that the relatively low productivity gains were not the result of seniority per se, but rather of senior workers’ heightened sensitivity to AI’s imperfections, which eroded their trust in it.

“This finding presents a dilemma: experienced employees are well positioned to leverage AI for productivity, but senior employees who take on greater responsibility and care about their organization tend to avoid AI because they are aware of the risks of relying on its aid. As a result, they are not using AI effectively,” said study co-author Ritu Agarwal of the Johns Hopkins Carey School of Business.

The researchers urge employers to carefully consider the types and levels of experience of different workers when implementing AI on the job. New employees with little task experience are at a disadvantage when it comes to utilizing AI, while senior employees with more organizational experience may worry about the potential risks AI poses. Addressing these distinct challenges is key to productive human-AI teaming.

Reference: “Friend or Foe? Teaming Between Artificial Intelligence and Workers with Variation in Experience” by Weiguang Wang, Guodong (Gordon) Gao, and Ritu Agarwal, 11 October 2023, Management Science.
DOI: 10.1287/mnsc.2021.00588

Source: scitechdaily.com

Son utilizes artificial intelligence to bring his deceased father back for the holidays.

It allowed a grieving widow to hear a lost loved one speak again. A Missouri man brought the internet to tears by using artificial intelligence to revive his late father’s voice as a special Christmas gift for his mother.

“This Christmas I decided to do something special for my mom,” Phillip Willett, 27, explained in the caption. He wanted to do something unique to honor his “hero” and decided to resurrect him digitally using AI, specifically technology he uses frequently in his work.

Willett said he was initially hesitant, as the idea felt “strange,” but came around after finding a community of people who use the technology to communicate digitally with their deceased loved ones.

The Missouri resident used Eleven Labs’ text-to-speech software to match his late father’s exact voice, which he considered the most important part of making the project a reality. Using this technology, the content creator was able to create a digital dead ringer that matched his father’s tone and rhythm.

“The first words I actually put into the program were ‘Hello, honey,’” Willett said, because he couldn’t count how many times he had heard [his late father] say that in his life. “When the program said it in his voice … I got chills all over,” added the creator, who said he worked all day on the gift.

He then created a digital Christmas card using his father’s voice to simulate him being home for the holidays. In the touching clip, Willett’s mother, Trish Willett, is seen opening a video book featuring a montage of photos of the two of them. Suddenly, her late husband greets her: “Hello, honey, I love you,” the AI voice pipes up as the widow sobs. “I hear your prayers. I want you to know that you are the best mother to our children.”

The facsimile adds: “And you are the strongest woman in the whole world. I will always be with you, honey. I hope you guys have a merry Christmas.” The clip ends with mother and son embracing in a heart-wrenching moment.

“It’s been a long time since she’s heard his voice,” Willett said, adding that he thought the result was “amazing.” “I can also say with confidence that it will be easier for her to get through this holiday, because she remembers him and knows that he will always be with her,” he concluded.

TikTok commenters were similarly moved to tears by the heartfelt gesture. “Oh yeah, here’s another TikTok where I sob for people I’ve never met,” said one viewer, while another said: “I knew I was going to cry but I still couldn’t stop.” Another added, “I think your father deserves to be known.” A third said: “I lost my dad to pancreatic cancer 2 years ago. I don’t know if I can survive this but I miss his voice so much.” Willett replied: “It was definitely a tearful process. But it turned out to be something very special.”

This comes as a number of companies, from Somnium to DeepBrain, work on AI technology intended to preserve digital versions of deceased loved ones. That, of course, has raised concerns about the ethics of putting words into someone’s mouth after death. Critics also worry that the likenesses of both living and dead people could be used for fraud and other illicit purposes.
In September, Hollywood icon Tom Hanks posted an advisory on Instagram warning his followers about a commercial that used an AI-generated version of himself to promote a dental plan.

Source: nypost.com

Artificial Intelligence Can Mimic Human Faces Better Than Real Humans

One study found that AI-generated white faces were perceived as more realistic than real human faces, with a significant difference in how realistic AI faces of people of color appeared. The trend is believed to stem from bias in AI training data, raising concerns about reinforcing racial bias and spreading misinformation. Credit: SciTechDaily.com

A study reveals that AI-generated white faces are more realistic than real human faces, raising concerns about potential racial bias and misinformation in AI technology.

Artificial intelligence (AI) has reached a point where white faces created by AI now appear more real than human faces, according to a study conducted by experts at the Australian National University (ANU).

This study found that more people perceived AI-generated white faces as human compared to real human faces, with a different outcome for images of people of color.

Dr. Amy Dawel, the study's lead author, explained that the disproportionate training of AI algorithms on white faces contributed to this disparity.

Impact of AI Realism

Dr. Dawel expressed concern about the potential impact of consistently perceiving white AI faces as more realistic, especially in reinforcing racial bias online and its effects on people of color.

This image was generated by AI, specifically Midjourney V5.2. Credit: SciTechDaily.com

Understanding AI “Hyperrealism”

Researchers pointed out the problem of AI’s “hyperrealism,” where people often mistake AI faces for real human faces without realizing it.

The study also identified physical differences between AI and human faces that people tend to misinterpret, highlighting the need for transparency in AI technology.

Potential Consequences

This trend has serious implications for the spread of misinformation and identity theft, and the researchers emphasize the importance of increasing transparency around AI technologies and raising public awareness. The study appears in Psychological Science, a journal of the Association for Psychological Science.

Reference: “AI Hyperrealism: Why AI Faces Are Perceived as More Real Than Human Ones” by Elizabeth J. Miller, Ben A. Steward, Zak Witkower, Clare A. M. Sutherland, Eva G. Krumhuber, and Amy Dawel, 12 November 2023, Psychological Science. DOI: 10.1177/09567976231207095

Source: scitechdaily.com

It is crucial to regulate artificial intelligence within the multi-trillion dollar API economy

Application programming interfaces (APIs) power the modern internet, including most of the websites, mobile apps, and IoT devices we use. And thanks to the internet’s ubiquity in nearly every corner of the planet, APIs have allowed people to connect to almost any functionality they desire. This phenomenon, often referred to as the “API economy,” is projected to reach a market value of $14.2 trillion by 2027.

The increasing relevance of APIs in our daily lives has attracted the attention of several authorities who are introducing major regulations. The first level is defined by organizations such as IEEE and W3C and is intended to establish standards for the technical capabilities and limitations that define technology across the Internet.

Security and data privacy aspects are covered by internationally recognized requirements such as ISO 27001 and the GDPR, whose main goal is to provide a governing framework for the domains that APIs serve.

But now, with the advent of AI, regulations are becoming even more complex.

How AI integration is changing the API landscape

Different types of AI have been around for a while, but it is generative AI (and LLMs) that has completely changed the risk landscape.

Many AI companies are leveraging the benefits of API technology to bring their products into every home and workplace. The most notable example here is OpenAI’s early public release of its API. This combination would not have been possible just 20 years ago; back then, neither APIs nor AI had reached the level of maturity we began observing in 2022.

Writing code with or alongside AI is rapidly becoming the standard in software development, especially in the complex process of creating and deploying APIs. Tools like GitHub Copilot and ChatGPT can write code that integrates with almost any API, and will soon define the methods and patterns most software engineers use to create APIs, in some cases without the engineer fully understanding the result.
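For a sense of the glue code such tools churn out, here is a minimal Python sketch of a typical REST API integration; the endpoint, parameters, and response fields are hypothetical, for illustration only.

import requests

BASE_URL = "https://api.example.com/v1"  # hypothetical service

def get_weather(city: str, api_key: str) -> dict:
    """Fetch current weather for a city from a (hypothetical) REST API."""
    resp = requests.get(
        f"{BASE_URL}/weather",
        params={"city": city},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()  # surface HTTP errors instead of failing silently
    return resp.json()

Boilerplate like this is exactly where assistants excel, and exactly where an engineer who never reads the generated code can miss a missing timeout, an unchecked status code, or a leaked credential.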

Companies like Superface and Blobr are also innovating in the API integration space, using AI to let users connect to the APIs they need as naturally as they would interact with a chatbot.

GenAI has the ability to create things in effectively infinite ways, and that creativity will either be kept under human control or, in the case of artificial general intelligence (AGI), exceed our current capacity for control.

Source: techcrunch.com

Artificial Intelligence identifies novel antibiotics effective against drug-resistant bacteria

Methicillin-resistant Staphylococcus aureus (MRSA)

Shutterstock / Katerina Conn

Artificial intelligence has contributed to the discovery of new classes of antibiotics that can treat infections caused by drug-resistant bacteria. This could help fight antibiotic resistance, which claimed more than 1.2 million lives in 2019, and that number is expected to increase in the coming decades.

In tests in mice, a new antibiotic compound proved to be a promising treatment for both methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant Enterococcus, a bacterium that has developed resistance to a drug commonly used to treat MRSA infections.

“Our [AI] model not only tells us which compounds have selective antibiotic activity, but also why, in terms of their chemical structure,” says Felix Wong at the Broad Institute of MIT and Harvard in Massachusetts.

Wong and colleagues aimed to show that AI-driven drug discovery can go beyond identifying specific targets to which drug molecules can bind to predicting the biological effects of entire classes of drug-like compounds.

First, the team tested the effects of over 39,000 compounds on Staphylococcus aureus and on three types of human cells: liver, skeletal muscle, and lung. The results became training data from which the AI model learned the chemical atoms and bond patterns of each compound, enabling it to predict both the antibacterial activity and the potential human-cell toxicity of such compounds.

The trained AI model then analyzed 12 million compounds through computer simulations and found 3,646 compounds with ideal drug-like properties. Additional calculations identified chemical substructures that could explain the properties of each compound.
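As a rough illustration of this train-then-screen workflow (a simplified stand-in, not the study's actual model), one can featurize molecules as fingerprints, fit a classifier on measured antibacterial activity, and then rank a virtual library by predicted activity. The SMILES strings and labels below are placeholders.

from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier
import numpy as np

def featurize(smiles: str) -> np.ndarray:
    """Encode a molecule's atoms and bond patterns as a Morgan fingerprint."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)
    return np.array(fp)

# Placeholder training set: (SMILES, measured active-against-S.-aureus label).
train = [("CCO", 0), ("c1ccccc1O", 0), ("CC(=O)Oc1ccccc1C(=O)O", 1)]
X = np.array([featurize(s) for s, _ in train])
y = np.array([label for _, label in train])

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score an (illustrative) virtual library and rank candidates by predicted activity.
library = ["CCN", "c1ccncc1", "CC(C)Cc1ccc(cc1)C(C)C(=O)O"]
scores = model.predict_proba(np.array([featurize(s) for s in library]))[:, 1]
for smiles, p in sorted(zip(library, scores), key=lambda t: -t[1]):
    print(f"{smiles}\tpredicted activity: {p:.2f}")

The study's pipeline goes further than this sketch by also predicting toxicity against human cells and by explaining predictions through chemical substructures, which is what makes the approach useful for identifying whole new compound classes.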

By comparing such substructures across different compounds, the researchers identified a new class of potential antibiotics, and ultimately discovered two non-toxic compounds within it that can kill both MRSA and vancomycin-resistant Enterococcus.

Finally, researchers used mouse experiments to demonstrate the effectiveness of these compounds in treating skin and thigh infections caused by MRSA.

Only a few new classes of antibiotics, such as the oxazolidinones and lipopeptides, have been found to be effective against both MRSA and vancomycin-resistant Enterococcus, and resistance to such compounds is increasing, says James Collins at the Broad Institute, a co-author of the study.

“Our research has identified one of the few new classes of antibiotics in 60 years that complements other antibiotics,” he says.

The researchers are now starting to use this AI-driven approach to design entirely new antibiotics and to discover other new drug classes, such as compounds that selectively kill the aging, damaged cells involved in conditions like osteoarthritis and cancer.


Source: www.newscientist.com

Artificial Intelligence (AI) Algorithm Successfully Deciphers Rogue Wave Pattern

Scientists used artificial intelligence to analyze more than a billion waves, the equivalent of 700 years of wave measurements, and developed a breakthrough formula for predicting rogue waves. This groundbreaking research, which converts vast amounts of oceanographic data into an equation for the probability of dangerous waves, calls previous theories into question and has significant implications for maritime safety. The work also represents a major step forward for the field in the accessibility of its findings and in the role of AI in enhancing human understanding.

Researchers from the University of Copenhagen and the University of Victoria used over 700 years of ocean wave data, including more than a billion wave observations, and advanced artificial intelligence techniques to predict the occurrence of these threatening sea giants. Previously thought to be a myth, these unusually large and rough waves can cause serious damage to ships and oil rigs. The research team leveraged AI to analyze the vast amounts of data and create a mathematical model that provides a way to predict the occurrence of rogue waves. This new knowledge contributes to making shipping safer, and has paradigm-shifting implications for the maritime industry.

Rogue waves, perceived for centuries as part of sailor folklore, were first scientifically documented when a 26-meter-high wave hit the Norwegian Draupner oil platform in 1995. Since then, research on these extreme waves has been ongoing, culminating in the breakthrough reached by the University of Copenhagen and the University of Victoria. The research team leveraged big data on ocean movements and AI techniques to map the causal variables that lead to rogue waves, ultimately developing a model that uses artificial intelligence to calculate the probability of rogue wave formation.

Incorporating data collected from buoys at 158 locations off U.S. coasts and overseas territories, amounting to more than a billion waves across the equivalent of 700 years, the researchers used AI to analyze the vast dataset and predict the likelihood of being hit by a huge wave at sea. The AI techniques also helped the researchers identify the causes of rogue waves and distill them into an equation describing the recipe for a rogue wave. The study also challenged common perceptions about those causes, establishing the dominance of a phenomenon known as “linear superposition.” This new knowledge can help the shipping industry plan routes in advance and mitigate the risk of encountering dangerous rogue waves.
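The study's equation is not reproduced in this article, but the field's standard operational definition of a rogue wave is simple to state: a crest-to-trough height exceeding twice the significant wave height (the mean height of the largest third of waves). The following Python sketch applies that criterion to a synthetic sea surface built, fittingly, by linear superposition of random sinusoids; all amplitudes and frequencies are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def wave_heights(elevation: np.ndarray) -> np.ndarray:
    """Split a zero-mean elevation record at up-crossings; return crest-to-trough heights."""
    up = np.where((elevation[:-1] < 0) & (elevation[1:] >= 0))[0]
    return np.array([elevation[a:b].max() - elevation[a:b].min()
                     for a, b in zip(up[:-1], up[1:])])

# Synthetic sea surface: a linear superposition of random sinusoids (one hour at 4 Hz).
t = np.linspace(0, 3600, 3600 * 4)
eta = sum(rng.uniform(0.1, 0.5) * np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
          for f in rng.uniform(0.05, 0.2, size=50))

H = wave_heights(eta)
Hs = np.mean(np.sort(H)[-len(H) // 3:])  # significant wave height
rogues = H[H > 2 * Hs]                   # the standard rogue-wave criterion
print(f"Hs = {Hs:.2f} m, {len(rogues)} rogue wave(s) out of {len(H)}")

Because rogue events defined this way are rare, most simulated hours contain none, which is exactly why a probability model fitted to a billion real waves is so valuable.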

Source: scitechdaily.com

Artificial Intelligence will bring about a revolution in the realm of complex problem-solving within logistics and beyond.

Researchers at MIT and ETH Zurich have developed a machine learning-based technique that speeds up the optimization process used by companies like FedEx to deliver packages. This approach simplifies key steps in mixed integer linear programming (MILP) solvers and uses company-specific data to tune the process, resulting in 30-70% speedups without sacrificing accuracy. This has potential applications in a variety of industries facing complex resource allocation problems.

The research, conducted by the Massachusetts Institute of Technology and ETH Zurich, aims to address complex logistics challenges, including delivering packages, distributing vaccines, and managing power grids. The traditional software used by companies like FedEx to find optimal delivery solutions is called a mixed-integer linear programming (MILP) solver, but it can be time-consuming and may not always produce ideal solutions.

The newly developed technique employs machine learning to identify important intermediate steps in the MILP solver, significantly reducing the time required to narrow down potential solutions. By using company-specific data, the approach allows the MILP solver to be custom-tailored, speeding it up by 30-70% without sacrificing accuracy.
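For context, here is a toy package-routing MILP solved with SciPy. This sketch shows only a baseline solve; the MIT/ETH technique targets the solver's internal cut-selection (separation) step, which is not implemented here, and the costs and capacities are invented for illustration.

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# x[i, j] = 1 if package i goes on truck j; flattened as [x00, x01, x10, x11].
cost = np.array([4.0, 7.0,   # package 0 on truck 0 / truck 1
                 6.0, 3.0])  # package 1 on truck 0 / truck 1

# Each package must be assigned to exactly one truck.
assign = LinearConstraint(np.array([[1, 1, 0, 0],
                                    [0, 0, 1, 1]]), lb=1, ub=1)
# Truck 0 has room for at most one package.
capacity = LinearConstraint(np.array([[1, 0, 1, 0]]), ub=1)

res = milp(c=cost,
           constraints=[assign, capacity],
           integrality=np.ones(4),  # all variables integer
           bounds=Bounds(0, 1))     # binary 0/1
print(res.x, res.fun)  # expected assignment [1, 0, 0, 1], total cost 7.0

Real instances have millions of such variables, and solvers spend much of their time deciding which cutting planes to add; learning that decision from a company's own past problems is where the reported 30-70% speedup comes from.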

Lead author Cathy Wu, along with co-lead authors Sirui Li, Wenbin Ouyang, and Max Paulus, highlights the potential of combining machine learning and classical methods to address optimization problems. The research will be presented at the Conference on Neural Information Processing Systems. The team hopes to apply this approach to even more complex MILP problems and to interpret the effectiveness of different separation algorithms.

Source: scitechdaily.com

Pope Francis advocates for global oversight of artificial intelligence | Science and Technology

Pope Francis has voiced support for calls to regulate AI.

In his annual World Peace Day message, the pope called for artificial intelligence to be developed safely and used ethically.

He warned that the technology lacks human values such as compassion and morality, and could blur the line between what is real and what is fake.

The Pope should know, considering he was the subject of some of the most infamous AI-generated images of 2023.

In March, an image of him apparently photographed wearing a stylish puffer jacket left social media in awe.

The surreal image, created using the AI tool Midjourney, was quite literally too good to be true.

Much as ChatGPT generates text content, the tool allows users to request images using a simple prompt.

The fake photo originated on Reddit and was shared tens of millions of times on social media, fooling people, including celebrities, and becoming one of the first major examples of AI-powered misinformation at scale.

This week, the British charity Full Fact highlighted another false image of Francis: a photo purportedly showing him addressing a large crowd in Lisbon earlier this year.

Image: AI-generated image of the Pope addressing a crowd in Lisbon, Portugal. Photo: Full Fact

Pope shares his biggest concerns about AI

Cardinal Michael Czerny, prefect of the Vatican’s Dicastery for Promoting Integral Human Development, shared the pope’s concerns in a written statement.

“The biggest risk is to dialogue,” he said.

“Because without truth there can be no dialogue, and without responsibility there can be no truth.”

The Pope said the regulatory priorities are to prevent disinformation, discrimination and distortion, promote peace and guarantee human rights.


His intervention came a few days after the EU reached an agreement on how to regulate AI, covering generative tools such as Midjourney and ChatGPT, though the rules will not come into effect until 2025 at the earliest.

In the US, President Joe Biden’s White House announced its own proposals in October, including the possibility of requiring AI-generated content to be watermarked.

In Britain, Prime Minister Rishi Sunak has been more cautious about AI laws, arguing they risk stifling innovation.

Source: news.sky.com

AI-powered tools from Stability AI now generate 3D models using artificial intelligence

Stability AI, the startup behind the text-to-image AI model Stable Diffusion, believes 3D modeling tools could be the next big thing in generative AI.

At least, that’s the message the company is sending with the release of Stable 3D, an AI-powered app that generates textured 3D objects for modeling and game development platforms like Blender, Maya, Unreal Engine, and Unity.

Available in private preview for select customers who contact Stability through the company’s inquiry form, Stable 3D is designed to help non-experts generate “draft-quality” 3D models “in minutes,” according to the company’s blog.

“Creating 3D content is one of the most complex and time-consuming tasks for graphic designers, digital artists, and game developers; it is common for a moderately complex 3D object to take hours or even days to create,” the company writes. “Stable 3D levels the playing field for independent designers, artists, and developers, allowing them to create thousands of 3D objects per day at very little cost.”

All hype aside, Stable 3D seems fairly robust and on par with other model generation tools on the market in terms of functionality. Users can describe the 3D model they want in natural language, or upload existing images and illustrations to convert them into a model. Stable 3D outputs models in the “.obj” file format, which can be edited and manipulated in most standard 3D modeling tools.
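Part of what makes “.obj” so portable is that it is plain text: lines beginning with “v” list vertex coordinates, and lines beginning with “f” list faces by vertex index. A minimal Python reader, ignoring normals, texture coordinates, and materials, might look like this:

def load_obj(path: str):
    """Read vertices and faces from a Wavefront .obj file."""
    vertices, faces = [], []
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":      # vertex: v x y z
                vertices.append(tuple(float(c) for c in parts[1:4]))
            elif parts[0] == "f":    # face: f v1 v2 v3 ... (entries may be v/vt/vn)
                faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

Because the format is this simple and universally supported, any tool that writes a valid .obj file, Stable 3D included, slots directly into existing Blender, Maya, Unreal Engine, or Unity workflows.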

Stability has not disclosed what data it used to train Stable 3D. Given that generative AI models tend to regurgitate training data, this could be a concern for commercial users of the tool: if any of the data is copyrighted and Stability AI did not obtain the appropriate license, Stable 3D customers could unknowingly incorporate works that infringe on intellectual property into their projects.

Stability AI also doesn’t have the best track record when it comes to respecting intellectual property. Earlier this year, Getty and several artists sued the startup for allegedly copying and processing millions of images in their possession, without notice or compensation, to train Stable Diffusion.

Stability AI recently partnered with startup Spawning to honor “opt-out” requests from artists, but it’s unclear whether that partnership covers Stable 3D’s training data. We have reached out to Stability AI for more information and will update this post if we hear back.

Potential legal implications aside, Stable 3D marks Stability AI’s entry into the nascent but already crowded field of AI-powered 3D model generation.

The 3D model was generated with Stability AI’s new Stable 3D tool.

There are 3D object creation platforms like 3DFY and Scenario, as well as startups like Kaedim, Auctoria, Mirage, Luma, and Hypothetic. Even established companies like Autodesk and Nvidia are starting to dip their toe into the field with apps like Get3D, which converts images into 3D models, and ClipForge, which generates models from text descriptions.

Meta has also experimented with techniques to generate 3D assets from prompts. OpenAI is no different, having released Point-E last December, an AI that synthesizes 3D models with potential applications in 3D printing, game design, and animation.

Stable 3D appears to be Stability AI’s latest attempt to diversify or pivot in the face of increasing competition from generative AI platforms that create art, such as Midjourney and the aforementioned OpenAI.

In April, Semafor reported that Stability AI was burning through cash, spurring an executive hunt to boost sales. According to Forbes, the company repeatedly delayed paying wages and payroll taxes, or didn’t pay them at all, leading AWS to threaten to revoke its access to the GPU instances Stability uses to train its models.

Stability AI recently raised $25 million through convertible notes (i.e., debt that converts into equity), bringing its total funding to more than $125 million. However, it has not closed new financing at a higher valuation; the startup was last valued at $1 billion. Stability is said to be aiming to quadruple that figure in the coming months, even though revenue remains low.

In what looks like another attempt to differentiate itself and drive sales, Stability AI today announced new features for its online AI-powered photo editing suite, including a fine-tuning feature that lets users personalize the underlying art generation model, and Sky Replacer, a tool that swaps the sky in a photo for a preset alternative.

The new tools join Stability AI’s growing portfolio of AI-powered products, including the music generation suite Stable Audio, the doodle-to-image app Stable Doodle, and a ChatGPT-style chatbot.

Source: techcrunch.com