Cameras Mimicking Human Vision Could Enhance Astronomical Discoveries

Sirius Binary Star System Captured with a Neuromorphic Camera

Satyapreet Singh, Chetan Singh Thakur, Nirupam Roy, Indian Institute of Science, India

Neuromorphic cameras, designed to emulate human vision, offer significant benefits for astronomers by enabling the capture of both bright and dim celestial objects in a single frame. This also allows fast-moving objects to be tracked without motion blur.

Conventional digital cameras sample a grid of pixels many times per second, recording data for every pixel each time. Neuromorphic cameras, also known as event cameras, work differently: each pixel registers data only when the brightness at that location changes. If the brightness stays constant, nothing new is recorded, much like the way the human eye processes visual information.

This approach has several benefits. Because only changing pixels are recorded, far less data is generated while a much higher frame rate is maintained. These cameras also measure light on a logarithmic scale, enabling the detection of faint objects next to bright ones that would saturate a conventional camera's image.
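
A minimal Python sketch can make this event-driven, logarithmic readout concrete. It is a toy model rather than the camera used in the study; the contrast threshold and the function name below are assumptions made purely for illustration.

```python
import numpy as np

# Toy model of an event (neuromorphic) camera: a pixel emits an event only
# when its log-intensity changes by more than a threshold since the last
# event it produced. Unchanged pixels generate no data at all.

THRESHOLD = 0.2  # assumed contrast threshold (log-intensity units)


def generate_events(ref_log_frame, new_frame, timestamp):
    """Return a list of (y, x, polarity, t) events and the updated reference."""
    log_frame = np.log(new_frame.astype(np.float64) + 1e-6)   # logarithmic response
    diff = log_frame - ref_log_frame
    ys, xs = np.where(np.abs(diff) >= THRESHOLD)              # only changed pixels fire
    events = [(int(y), int(x), int(np.sign(diff[y, x])), timestamp)
              for y, x in zip(ys, xs)]
    ref_log_frame[ys, xs] = log_frame[ys, xs]                 # reset only the pixels that fired
    return events, ref_log_frame


# A static scene produces no events; one pixel doubling in brightness produces one.
frame0 = np.full((4, 4), 100.0)
ref = np.log(frame0 + 1e-6)
frame1 = frame0.copy()
frame1[2, 3] = 200.0
events, ref = generate_events(ref, frame1, timestamp=0.001)
print(events)   # [(2, 3, 1, 0.001)] -- a single positive-polarity event
```

Because the pixel responds to changes in log intensity, a doubling of brightness produces the same-sized signal whether the pixel is looking at a faint star or a bright one, which is what lets very different brightnesses share one frame.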

To investigate the potential of this technology for astronomical applications, Chetan Singh Thakur and his team at the Indian Institute of Science in Bengaluru mounted a neuromorphic camera on a 1.3-meter telescope at the Aryabhatta observatory in Uttarakhand, India.

They successfully captured meteoroids traveling between the Earth and the Moon and also obtained images of the Sirius binary system, which includes Sirius A, the brightest star in the night sky, and Sirius B.

Sirius A is approximately 10,000 times brighter than Sirius B, making it challenging to capture both in a single image using traditional sensors, as noted by Mark Norris from the University of Central Lancashire, UK, who was not part of the study.

According to Singh Thakur, neuromorphic cameras excel at tracking fast-moving objects because of their high frame rates. “For high-speed objects, you can capture their movement without blur, unlike conventional cameras,” he explains.

Telescopes typically use multiple sensors that can be swapped as needed. Norris points out that a neuromorphic camera could serve as an additional tool for situations where very bright and very faint objects need to be observed at the same time, or for fast-moving targets such as the recently identified interstellar object 3I/ATLAS.

Traditionally, astronomers have had to pan the telescope to follow fast-moving objects. Neuromorphic cameras, however, can track the movement of these objects precisely while preserving background details and resolving their locations.

“Do you want to know the brightness of an object or its location? In quantum mechanics, you can’t ascertain both at the same instant,” Norris states. “This technology offers a potential method to achieve both simultaneously.”

While neuromorphic cameras provide unique advantages, they are unlikely to replace other sensors entirely. Their resolution is typically lower than that of the charge-coupled devices (CCDs) commonly used in digital cameras, and they detect light with an efficiency of about 78%, compared with roughly 95% for CCDs. This disparity makes traditional sensors more effective at capturing dim objects near their detection limits.


Source: www.newscientist.com

Light Has Been Turned into a Liquid That Mimics Spacetime

Researchers can construct structures in the lab that mimic black holes

Science Photo Library/Alamy

By shaping light into a liquid-like form, researchers aim to deepen their grasp of black holes and other intriguing phenomena.

Directly observing extreme objects such as black holes is challenging, so scientists have used quantum effects to create useful analogs of them in laboratory settings. For instance, they have previously simulated space-time with ultra-cold atoms to mimic the behavior of a black hole.

Now, Kevin Falk from the Kastler-Brossel Laboratory (LKB) in Paris and his team have utilized light to generate highly controlled analogs of space-time.

They achieved this by trapping light within small cavities made of reflective semiconductor materials, allowing it to bounce between layers and interact with charges in the material. This quantum interaction ultimately turns the light into a liquid-like state.

Researchers can manipulate this fluid with lasers to replicate the geometry of space-time, creating structures that correspond to a black hole's event horizon: the boundary that objects can fall through but never return across.
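
Broadly speaking, such analogue horizons arise where the fluid flows faster than waves within it can travel, so nothing propagating in the fluid can make it back upstream. The snippet below is only a toy illustration of that condition, with made-up flow and wave-speed profiles rather than the experiment's actual parameters.

```python
import numpy as np

# Toy illustration of the analogue-horizon idea (not the Paris experiment's
# parameters): a horizon appears wherever the flow moves faster than waves in
# the fluid can travel, so disturbances beyond that point cannot escape upstream.

x = np.linspace(0.0, 10.0, 1001)        # position along the flow (arbitrary units)
flow_speed = 0.3 * x                    # assumed flow profile that speeds up downstream
wave_speed = np.full_like(x, 1.5)       # assumed constant wave ("sound") speed in the fluid

is_supersonic = flow_speed > wave_speed
horizon_index = int(np.argmax(is_supersonic))   # first point where the flow outruns the waves
print(f"analogue horizon near x = {x[horizon_index]:.2f}")   # ~5.0 in this toy profile
```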

Their light-based “space” is precisely controlled, enabling Falk and his colleagues to visualize events within it and to change its space-time structure abruptly.

They plan to use this unique simulation to investigate how Hawking radiation emitted from black holes varies with abrupt changes in the event horizon. However, achieving this requires cooling and isolating the experiment to amplify the quantum effects involved.

“This work represents a remarkable experimental achievement,” says Juan Ramon Muñoz de Nova from Complutense University in Madrid, who contributed to early measurements of Hawking radiation in black hole analogs made with ultracold atoms. He believes the new experiments offer opportunities to observe other phenomena, such as how black holes vibrate and “ring”.

Friedrich Koenig from the University of St Andrews in the UK notes that this research offers “the most valuable platform” for exploring new ideas about gravity and the complex interplay between gravity and quantum effects.

One striking implication of this study is that objects observed as black holes might not be what they seem. Maxime Jacquet from LKB explains that although the first image of a black hole, captured by the Event Horizon Telescope, looks like a black hole, appearing like one does not guarantee that it really is one.

There could be massive objects that bend light just as black holes do, creating the same appearance without actually having an event horizon. Theoretical work suggests this is possible, and light-based experiments may allow the idea to be explored further, according to Jacquet.

“We must exercise caution. While we have these analogs, fluids and black holes are fundamentally different entities,” warns Falk. “However, this experiment does let us test theories related to black holes.”


Source: www.newscientist.com

Washed-up clothing mimicking seaweed makes for stunning cyanotypes

Mandy Barker’s “Delesseria Tunica Ictus”

Mandy Barker

Almost two centuries ago, an influential book by botanist and pioneering photographer Anna Atkins, Photographs of British Algae: Cyanotype Impressions, surprised readers with its scientific rigor and artistry. In it, Atkins presented images of seaweed collected from the British coastline using the cyanotype method, a printing process in which objects are laid on chemically coated paper and exposed to ultraviolet light, leaving their silhouettes against a cyan-blue background.

https://www.youtube.com/watch?v=a_7exgdvv8e

In her new book, Photographs of British Algae: Cyanotype Imperfections, artist Mandy Barker uses the same technique to draw attention to the ongoing pollution crisis facing our oceans. Like Atkins, Barker scoured the British coastline. But rather than finding the beauty of nature, she found abandoned clothes washed up on beaches. Her first discovery, she says, looked like seaweed: “It was kind of an attractive and beautiful fabric.” However, as jackets, dresses, shoes, underwear and school uniforms continued to appear, the scale of the problem quickly became clear to her.

Inspired by the “seaweed-like shape” of the clothing, Barker decided to create new cyanotype prints from the fabric she found (one is pictured above), replicating Atkins's work with small but significant changes. She draws attention to the climate costs of fast fashion by inventing Latin names for the garments: Delesseria Tunica Ictus, for example, nods both to the tunic she found and to the red algae that Atkins had photographed.


Source: www.newscientist.com

Humanoid robot masters the waltz by mimicking human movements

Humanoid robot waltzes with the help of AI trained on human motion capture recordings

Xuxin Cheng and Mazeyu Ji

AI that helps humanoid robots mirror human movements could allow robots to walk, dance, and fight in more human-like ways.

The most agile and fluid robot movements, such as Boston Dynamics' impressive demonstrations of robotic acrobatics, are typically narrow, pre-programmed sequences. Teaching robots a wide repertoire of convincing human movements remains difficult.

To overcome this hurdle, Xuanbin Peng at the University of California, San Diego, and his colleagues developed an artificial intelligence system called ExBody2, which allows a robot to imitate a variety of human movements more realistically and execute them smoothly.

Peng and his team began by building a database of movements that a humanoid robot could plausibly perform, from simple actions such as standing and walking to more complex ones such as tricky dance moves. The database contained motion-capture recordings of hundreds of human volunteers collected in previous research projects.

“Humanoid robots share a similar physical structure with us, so it makes sense to leverage the vast amount of human movement data that is already available,” Peng says. “By learning to imitate this kind of behavior, robots can quickly learn a variety of human-like behaviors. It means that anything humans can do, robots have the potential to learn.”

To teach a simulated humanoid robot how to move, Peng and his team used reinforcement learning, in which the AI is given examples of what a successful movement looks like and then has to work out how to perform it through trial and error. They started by training ExBody2 with full access to all the data on the virtual robot, including the coordinates of each joint, so it could mimic human movements as closely as possible. A second stage then learned from these movements using only data that is accessible in the real world, such as inertia and velocity measurements from sensors on the actual robot's body.
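
That two-stage setup, a teacher policy trained with privileged simulator data followed by a student limited to signals a real robot can sense, can be sketched roughly as follows. This is not the team's ExBody2 code: the dimensions, the linear policies and the least-squares fit standing in for the reinforcement-learning and imitation steps are all assumptions made purely for illustration.

```python
import numpy as np

# Hedged sketch of a teacher-student training split, NOT the authors' ExBody2
# code: a "teacher" policy sees privileged simulator state (e.g. exact joint
# coordinates), then a "student" policy restricted to real-world sensor signals
# (e.g. inertia and velocity readings) is trained to imitate the teacher.

rng = np.random.default_rng(0)
N_PRIV, N_SENS, N_ACT = 12, 6, 4          # assumed state, sensor and action sizes

# Stage 1 stand-in: a privileged teacher mapping full state -> joint targets.
W_teacher = rng.normal(size=(N_ACT, N_PRIV))

def teacher_policy(priv_state):
    return W_teacher @ priv_state

# Collect rollouts: full privileged states plus the slice a real robot can sense.
priv_states = rng.normal(size=(5000, N_PRIV))
sensor_obs = priv_states[:, :N_SENS]      # sensors observe only part of the state
teacher_actions = priv_states @ W_teacher.T

# Stage 2: distill the teacher into a student that uses only sensor observations,
# here with a least-squares fit standing in for the RL/imitation update.
W_student, *_ = np.linalg.lstsq(sensor_obs, teacher_actions, rcond=None)

def student_policy(obs):
    return obs @ W_student

# The student can now be run from sensor data alone, as on the physical robot.
err = np.mean((student_policy(sensor_obs) - teacher_actions) ** 2)
print(f"mean imitation error on the collected rollouts: {err:.3f}")
```

The design point is that the second policy never needs information the physical robot cannot measure, which is what lets behavior learned in simulation transfer to real hardware.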

Once trained on the database, ExBody2 was able to control two different commercially available humanoid robots. It could smoothly combine simple movements such as walking in a straight line and crouching, as well as perform trickier feats such as following a 40-second dance routine, throwing punches, and waltzing with humans.

“Humanoid robots work best when all limbs and joints work together,” Peng says. “Many tasks and movements require coordination between the arms, legs, and torso, and whole-body coordination greatly increases the range of a robot’s capabilities.”


Source: www.newscientist.com

OpenAI bans bot impersonating US presidential candidate Dean Phillips from its platform

OpenAI has taken down the account of the developer of an AI-powered bot that pretended to be US presidential candidate Dean Phillips, citing a violation of company policies.

Phillips, who is challenging Joe Biden for the Democratic nomination, was impersonated by a ChatGPT-powered bot hosted on the dean.bot website.

The bot was backed by Silicon Valley entrepreneurs Matt Krysilov and Jed Summers, who are supporting Phillips through a super PAC called We Deserve Better, an organization set up to fund and support political candidates.

San Francisco-based OpenAI announced it has removed developer accounts that violated its policies against political campaigning and impersonation.

“We recently terminated developer accounts that knowingly violated our API Usage Policy, which prohibits political campaigning, or that impersonated individuals without their consent,” the company said.

The Phillips bot, created by AI company Delphi, is currently disabled. Delphi has been contacted for comment.

OpenAI's usage policy says that developers who use the company's technology to build their own applications must not engage in “political campaigning or lobbying.” It also prohibits “impersonating another person or entity without their consent or legal right to do so,” although it is unclear whether Minnesota congressman Phillips gave his consent to the bot.

A pop-up notification on the dean.bot website describes the “AI voice bot” as “a fun educational tool, but not perfect.” It adds: “Although the voice bot is programmed to sound like him and echo his ideas, it may say things that are wrong, incorrect, or shouldn’t be said.”

The Washington Post, which first reported the ban, said that Krysilov had asked Delphi to remove ChatGPT from the bot and instead rely on freely available open-source technology. Krysilov, a former OpenAI employee, has been contacted for comment.

We Deserve Better received $1 million in funding from billionaire hedge fund manager Bill Ackman, who described it in a post as “the biggest investment I’ve ever made.”

Phillips, 55, announced his candidacy for president in October, citing Biden’s age and saying he should step aside and mentor younger generations. Phillips, who was campaigning in New Hampshire on Saturday, described Biden as “unelectable and weak.”

There are concerns that deepfakes and AI-generated disinformation could disrupt elections around the world this year, with the US, EU, UK and India all planning to vote. On Sunday, the Observer reported that 70% of British MPs are concerned that AI will increase the spread of misinformation and disinformation.

Source: www.theguardian.com