Study: Brain Signals in the Visual Area Can Indicate the Colors Observers Are Viewing

Do different observers experience similar neural activity in response to the same color? Does color produce distinct response patterns in specific brain areas? To explore these questions, researchers at the University of Tübingen used existing knowledge of color responses from other observers’ brains to predict the colors an individual was perceiving from their brain activity. By estimating what different brains have in common in their responses to achromatic, spatial stimuli, the authors aligned disparate brain responses within a common reference frame anchored to the retina. In this frame, derived independently of any specific color responses, the perceived color can be decoded across individuals, revealing distinct spatial color biases between visual regions.

Using a sample of male and female volunteers, Michael M. Bannert & Andreas Bartels examined whether spatial color biases are shared among human observers and whether these biases differ among various regions. Image credit: Vat Loai.

Employing functional MRI, researchers Michael Bannert and Andreas Bartels from the University of Tübingen scanned subjects’ brains while they viewed visual stimuli, identifying distinct signals associated with the colors red, green, and yellow.

Remarkably, the patterns of brain activity were so similar across subjects that the colors viewed by participants who had not taken part before could be accurately predicted by comparing their scans with the brain images of other participants.

The representation of color in the brain proves to be much more consistent than previously believed.

While it was already feasible to identify the colors an individual was viewing using functional magnetic resonance imaging (fMRI), this was previously only possible within the same brain.

“We aimed to investigate whether similar colors are encoded across different brains,” Dr. Bannert stated.

“In other words, if we only have neuronal color signals from another person’s brain, can we predict the colors they’re perceiving?”

“It’s well established that different brains exhibit roughly similar functional structures.”

“For instance, specific areas are more active when viewing faces, bodies, or simply colors.”

During the color experiment, the researchers used classification algorithms on the fMRI data, training them to systematically distinguish, by color, the signals recorded from the brains of one group of individuals.

Data from new subjects were then used to determine, from their neuronal signals alone, which colors they were perceiving.
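To make the decoding step concrete, here is a minimal leave-one-subject-out sketch in Python. It is an illustration only, not the authors’ pipeline: it assumes the color-evoked voxel patterns have already been brought into a shared space (the alignment step described next), and the helper name decode_across_subjects and the generic scikit-learn classifier are assumptions for this example.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def decode_across_subjects(patterns, labels):
    """Leave-one-subject-out color decoding (illustrative sketch).
    patterns: list of (n_trials, n_features) arrays, one per subject, already aligned.
    labels:   list of (n_trials,) arrays with color labels, e.g. 'red', 'green', 'yellow'.
    Returns the mean accuracy when each subject is decoded from all the others."""
    accuracies = []
    for test_idx in range(len(patterns)):
        # Train on every subject except the held-out one.
        train_X = np.vstack([p for i, p in enumerate(patterns) if i != test_idx])
        train_y = np.concatenate([l for i, l in enumerate(labels) if i != test_idx])
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        clf.fit(train_X, train_y)
        # Test on the subject the classifier has never seen.
        accuracies.append(clf.score(patterns[test_idx], labels[test_idx]))
    return float(np.mean(accuracies))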

To put each brain into a common frame of reference, the scientists used fMRI to map how it responded to stimuli presented at different locations in the visual field.

“At this stage, we did not incorporate colors to avoid any bias in our results—only black and white patterns,” Professor Bartels explained.

“By simply merging this mapping data with color information from another person’s brain, we could correctly read out from the ‘new’ brain’s activity which color that person was viewing at that moment.”
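As a rough illustration of the alignment idea, the sketch below pools each voxel’s color responses into a common grid of visual-field bins, using the voxel’s preferred position estimated from the achromatic mapping scans. The bin grid and the helper name to_common_retinotopic_space are assumptions for this example, not the procedure reported in the paper.

import numpy as np

def to_common_retinotopic_space(voxel_responses, polar_angle, eccentricity,
                                n_angle_bins=16, n_ecc_bins=6, max_ecc=10.0):
    """Project one subject's voxel responses into a shared visual-field grid (sketch).
    voxel_responses: (n_trials, n_voxels) color-evoked responses.
    polar_angle:     (n_voxels,) preferred polar angle in radians, from the mapping scan.
    eccentricity:    (n_voxels,) preferred eccentricity in degrees of visual angle.
    Returns (n_trials, n_angle_bins * n_ecc_bins) responses averaged within each bin."""
    angle_bin = np.floor((polar_angle % (2 * np.pi)) / (2 * np.pi) * n_angle_bins)
    angle_bin = np.clip(angle_bin, 0, n_angle_bins - 1).astype(int)
    ecc_bin = np.clip(np.floor(eccentricity / max_ecc * n_ecc_bins), 0, n_ecc_bins - 1).astype(int)
    bin_id = angle_bin * n_ecc_bins + ecc_bin
    n_bins = n_angle_bins * n_ecc_bins
    pooled = np.zeros((voxel_responses.shape[0], n_bins))
    for b in range(n_bins):
        mask = bin_id == b
        if mask.any():
            # Average all voxels whose receptive fields fall in this visual-field bin.
            pooled[:, b] = voxel_responses[:, mask].mean(axis=1)
    return pooled

Because the bins are defined by visual-field position rather than anatomy, the pooled feature vectors from different subjects line up and could be fed to a cross-subject classifier like the one sketched above.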

“I was surprised to discover that even subtle variations in individual colors show remarkable similarity across brain activity patterns in specific visual processing regions, something previously unknown.”

Spatial color coding in the brain is domain-specific and organized consistently among individuals.

“There must be functional or evolutionary factors contributing to this uniform development, but further clarification is needed,” the authors noted.

The study was published this week in the Journal of Neuroscience.

____

Michael M. Bannert & Andreas Bartels. Large-scale color biases in the functional architecture of the retina are domain-specific and shared throughout the human brain. Journal of Neuroscience, published online September 8, 2025; doi: 10.1523/JNEUROSCI.2717-20.2025

Source: www.sci.news

Revolutionary App Guides Cricket Fans with Visual Impairments Around Lord’s

“It’s 19 feet ahead,” announced the robotic voice from an iPhone held by Moshfik Ahmed, as he navigated Lord’s Cricket Ground in London in search of a seat.

“Up the stairs,” the voice directed Ahmed, a visually impaired English cricketer, as he tapped a white cane on his way to the Edrich Stand without any external assistance. “There’s one landing. We’re positioned at 9 o’clock at the base of the stairs. We’ve reached the fifth row.”

Ahmed was among the first to test the newly installed Wayfinding technology at Lord’s, designed for blind and partially sighted individuals, enabling disabled fans to enjoy live sports.

Waymap, the company behind the app-based navigation tool, says the 31,000-seat cricket ground is the first sports venue in the world to offer its personal GPS-style guidance, tailored specifically for navigating stadiums, shopping centers, and transportation systems.

Using a £50,000 camera, Waymap meticulously mapped stairs, corridors, inclines, entrances, and concourses to build a digital twin of the historic cricket ground, allowing the app to guide users with meter-level precision.

This technology was implemented ahead of next month’s Test match between England and India. The Marylebone Cricket Club, which manages the venue, believes it can assist other cricket enthusiasts in discovering the most accessible routes throughout the premises.

“The concept is fantastic for the visually impaired,” said Ahmed, who tried the app upon the Guardian’s invitation after participating in a showcase match on Wednesday. “If it functions flawlessly, I can navigate to the station independently, cross the street by myself, arrive at the stadium, and find my way using the app. I know many sports enthusiasts who are visually impaired. This will make it completely accessible for them.”

Moshfik Ahmed at Lord’s Cricket Ground. Photo: Sean Smith/Guardian

It was Ahmed’s first experience with the app, which had some initial hiccups. At times it mistakenly suggested he head in the wrong direction, pointing him to temporarily closed stairs, and even guided him to row 20 of the Edrich Stand instead of row 5.

However, both the app and the user were still in an adjustment phase. The app, for instance, is meant to be calibrated to the individual user’s walking pattern, which could explain the misdirection he experienced.

“It must be precise and dependable,” stated Ahmed, who lost most of his vision in 2017.

“We’re dedicated to delivering an exceptional experience,” said Celso Zuccollo, CEO of WayMap. “WayMap represents a novel navigation approach. It usually requires multiple visits to fully grasp how to use the app effectively.”

“The objective is likely to extend this technology to venues like Wembley, various football stadiums, and we are in discussions with horse racing tracks,” he added.

The app is already available to users of the Washington, DC public transport system, but it does not alert users to the movement of people around them, something Ahmed noted can pose a significant challenge to getting around safely and comfortably.

Source: www.theguardian.com

Train your brain to see through visual illusions

Do you see the orange circle on the left as smaller than the orange circle on the right?

Radoslaw Wincza et al. (2025)

Optical illusions may fool you, but you may be able to train your brain to resist them.

“People in the general population can very likely be trained to see through illusions and gain the ability to perceive the world more objectively,” says Radoslaw Wincza at Lancaster University, UK.

Wincza and his colleagues recruited 44 radiologists with an average age of 36, who had spent over a decade spotting small details such as fractures in medical scans. They also recruited 107 university students, with an average age of 23, who were studying medicine or psychology.

Each participant was shown four illusions, one at a time, on a screen. For each illusion, they had to look at pairs of shapes or lines and choose the one that appeared larger or longer.

In three of the illusions, surrounding objects made the larger shapes or longer lines appear smaller or shorter. The team found that radiologists were less susceptible to these illusions than the students.

“Radiologists have this ability to really focus on the key elements of a visual scene, ignoring unrelated context, almost a kind of tunnel vision,” says Wincza. “By zeroing in on the target, they don’t experience the illusion as strongly.”

In the fourth illusion, one of the shapes was vertical and its pair was horizontal, which made one look larger than the other even though it was not. Both groups were equally susceptible to this illusion, probably because overcoming it does not involve filtering out background distraction: it contains no surrounding objects.

“It suggests that anyone could train themselves to become less susceptible to illusions,” says Carla Evans at the University of York, UK. Focusing on certain aspects of a photograph, for example, could improve this ability, but she says more work is needed to see how quickly the skill develops. “It could take weeks or it could take years.”


Source: www.newscientist.com

Mind-reading AI technology can accurately recreate what people are seeing

Top row: original images. Second row: images reconstructed by the AI from macaque brain recordings. Bottom row: images reconstructed by the AI system without the attention mechanism.

Thirza Dado et al.

Artificial intelligence systems can now create highly accurate reconstructions of what a person sees based on recordings of brain activity, and these reconstructions improve significantly as the AI learns which parts of the brain to pay attention to.

“As far as I know, these are the most accurate and closest reconstructions,” says Umut Güçlü at Radboud University in the Netherlands.

Güçlü’s team is one of several around the world using AI systems to work out what animals and humans are seeing from brain recordings and scans. In a previous study, his team used a functional MRI (fMRI) scanner to record the brain activity of three people while they were shown a series of pictures.

In a separate study, the team used an implanted electrode array to directly record the brain activity of a single macaque monkey as it viewed AI-generated images. The implant was placed by a different team for a different purpose, says Güçlü’s colleague Thirza Dado. “We didn’t put implants in macaques to reconstruct their perception,” she says. “That wouldn’t be a good argument for doing surgery on animals.”

The research team has now reanalyzed the data from these earlier studies using an improved AI system that can learn which parts of the brain to pay most attention to.

“Essentially, the AI is learning where to pay attention when interpreting brain signals,” Güçlü says, “which of course in some way reflects what the brain signals pick up on in the environment.”
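As a rough sketch of what “learning where to pay attention” can mean in practice, the snippet below takes one possible reading: a decoder learns one attention weight per recording channel and maps the weighted brain signal onto the latent features of an image generator. The architecture, names, and sizes are illustrative assumptions, not the system the Radboud team actually used.

import torch
import torch.nn as nn

class AttentionGatedDecoder(nn.Module):
    """Illustrative sketch: attention over brain channels feeding a linear read-out."""
    def __init__(self, n_channels: int, latent_dim: int):
        super().__init__()
        # One learnable attention score per recording channel or voxel.
        self.attn_logits = nn.Parameter(torch.zeros(n_channels))
        # Linear read-out from the attention-weighted signal to the latent
        # features of an image generator (e.g. a GAN latent vector).
        self.readout = nn.Linear(n_channels, latent_dim)

    def forward(self, brain_activity: torch.Tensor) -> torch.Tensor:
        # brain_activity: (batch, n_channels)
        weights = torch.softmax(self.attn_logits, dim=0)  # where to "pay attention"
        return self.readout(brain_activity * weights)

# Toy training step with stand-in data (all sizes are arbitrary assumptions).
model = AttentionGatedDecoder(n_channels=1024, latent_dim=512)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

brain = torch.randn(8, 1024)           # dummy recordings
target_latents = torch.randn(8, 512)   # latents of the images that were shown
optimizer.zero_grad()
loss = loss_fn(model(brain), target_latents)
loss.backward()
optimizer.step()

At inference time, the predicted latents would be passed to the image generator to produce the reconstructed picture; the learned weights indicate which channels the decoder relies on most.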

Some of the images reconstructed from the directly recorded brain activity were very close to the images the macaque had seen, which were generated by the image-generating AI StyleGAN-XL. But accurately reconstructing AI-generated images is easier than reconstructing real images, because aspects of the process used to generate the images can be incorporated into the training of the AI that reconstructs them, Dado said.

The fMRI scans also showed a noticeable improvement when using the attention guidance system, but the reconstructed images were less accurate than those for the macaques. This is partly because real photographs were used, but Dado also says that it is much harder to reconstruct images from fMRI scans. “It's non-invasive, but it's very noisy.”

The team's ultimate goal is to develop better brain implants to restore vision by stimulating the higher-level parts of the visual system that represent objects, rather than simply presenting patterns of light.

“For example, we can directly stimulate the part of the brain that represents a dog,” Güçlü says, “and in that way create a richer visual experience that is closer to that of a sighted person.”


Source: www.newscientist.com

New research uncovers the ‘visual masking’ phenomenon in animal behavior

A strange phenomenon called visual masking can reveal the time scale of perception, but its underlying mechanisms are not well understood.

Colored plots show neural activity recorded in mouse primary visual cortex (V1). Each row of tick marks represents the spikes of a different neuron. Although researchers can predict the target side from this neural activity with near-perfect accuracy, the animals often get masked trials wrong because of how brain regions downstream of V1 process the information. Image credit: Gale et al.
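The decoding claim in the caption can be illustrated with a minimal sketch: count each neuron’s spikes in a window after the target and train a linear classifier to predict which side the target appeared on. The data below are random stand-ins, so accuracy will sit near chance; with real recordings, the same kind of pipeline is what supports the near-perfect decoding described above.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 60
# Stand-in spike counts per neuron in a post-target window (replace with real data).
spike_counts = rng.poisson(5.0, size=(n_trials, n_neurons)).astype(float)
target_side = rng.integers(0, 2, size=n_trials)  # 0 = left, 1 = right

decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, spike_counts, target_side, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")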

Have you ever wanted to make something invisible? It turns out your brain can do it.

Unfortunately, this is a limited superpower. In visual masking, we fail to consciously perceive one image when another image follows it in rapid succession.

But the timing of those images is important. For masking to work, the first image must flash very quickly, and the second image must follow rapidly (on the order of 50 milliseconds).

Don’t get me wrong: the first image doesn’t stay in view very long, but it is definitely on screen long enough to be recognizable when the second image, the mask, doesn’t follow.

Scientists discovered this phenomenon in the 19th century, but why and how the human brain does this remains a mystery.

“This is an interesting observation, that your perception doesn't accurately reflect what exists in the world,” said Dr. Sean Olsen, a researcher at the Allen Institute.

“Like other optical illusions, we think this tells us something about how the visual system works and, ultimately, the neural circuits underlying visual perception.”

In a new study, Dr. Olsen and colleagues take a closer look at the science behind this bizarre illusion and show for the first time that it also occurs in mice.

When the mice were trained to report what they saw, they were also able to pinpoint the specific areas of the brain needed for the visual masking illusion to work.

Dr. Christoph Koch, also from the Allen Institute, said, “Our research has narrowed down the region of the brain responsible for perceiving the world around us.”

“What are the steps from the time the photons rain down on your retina to when you actually become consciously aware of what you’re seeing?”

When a rain of photons hits our retina, the information follows a predetermined path from the eyeball through several different areas of the brain, ending in the higher processing areas of the cortex, the wrinkled outermost shell of the brain.

Previous research on visual masking has shown that neurons in the early parts of the visual system, in the retina and the pathways it feeds, are activated even when a person is unaware that they are looking at an image. In other words, your brain sees things without your knowledge.

To explore where unconscious sensations turn into conscious perceptions and actions, the scientists first trained 16 mice to spin a small Lego wheel in the direction of a rapidly flashed target image, in exchange for a reward if they chose the correct direction.

They then added different masking images on either side of the screen, immediately after the target image.

Adding a mask prevented the animals from performing the task correctly, meaning they could no longer recognize the original target image.

Because visual masking had never been tested in mice before, the authors had to create a task specifically for mice, in which the images and the way they were presented differed from those used in previous human studies.

To confirm that the optical illusion they showed to rodents was also relevant to us, they tested it on 16 people.

It turns out that human perception (or lack thereof) and mouse perception of this particular visual masking illusion are very similar.

The researchers then used a special technique known as optogenetics, which allowed them to quickly suppress activity in cells or areas throughout the brain with flashes of light.

They targeted this inhibition at the mouse’s primary visual cortex, the first part of the cortex that visual information from the eyes enters before passing to higher cortical areas of the brain.

By turning off the primary visual cortex at the moment the masking image appeared, they were able to completely block visual masking of the target image: even though the masking image was presented, the mice went back to accurately locating the first image.

This result implies that conscious perception is occurring in the visual cortex or in higher regions of the cortex downstream.

“This is consistent with the general idea in the field that the cortex is the seat of conscious cognition in mammals, including ourselves,” Dr. Koch said.

Although this study narrowed down the region responsible for conscious perception to the cortex, there are still many regions of the cortex that may be involved.

Further studies will need to silence these other areas to test their effects on visual masking tasks.

“We're starting to put some limits on where masking is occurring,” Dr. Olsen said.

“We think this is a good paradigm for tracking down the other areas that are listening to the primary visual cortex and essentially fusing the flow of target and mask information in the brain.”

The findings were published in the journal Nature Neuroscience.

_____

S.D. Gale et al. Visual cortex is required for backward masking in mice. Nature Neuroscience, published online November 13, 2023; doi: 10.1038/s41593-023-01488-0

Source: www.sci.news

New study sheds light on the visual masking phenomenon, unraveling the mystery of “invisibility”

A new study has revealed how visual masking, a phenomenon in which rapid succession of images leads to unconscious image processing, occurs in both humans and mice. This study highlights the role of the cortex in conscious perception and provides important insights into the brain’s visual processing mechanisms.

Delve into the mysterious optical illusions and science of visual masking.

Recent research published in Nature Neuroscience examines visual masking, a phenomenon that plays an important role in how we perceive things, or rather how we don’t “see” them. The study not only revealed aspects of conscious perception in the brain, but also demonstrated that this phenomenon occurs in both humans and mice.

Visual masking occurs when a person does not consciously recognize an image because another image is displayed in rapid succession. For effective masking, the first image must appear and disappear quickly, followed by the second image within about 50 milliseconds.

Groundbreaking research in visual perception

Allen Institute researcher Dr. Sean Olsen and his colleagues have delved into the science behind this optical illusion and shown for the first time that it also occurs in mice. After training the mice to report what they saw, the researchers were also able to pinpoint the specific areas of the brain needed for the visual masking illusion to work.

“This is an interesting observation, that what exists in the world is not accurately reflected in your perception,” Olsen said. “Like other optical illusions, we think this tells us something about how the visual system works and, ultimately, the neural circuits underlying visual perception.”

Exploring the brain’s role in visual recognition

Scientists discovered this strange phenomenon in the 19th century, but why and how the human brain does this remains a mystery.

The study narrows down the parts of the brain involved in perceiving the world around us, said Dr. Christoph Koch, a Distinguished Fellow at the Allen Institute, who led the study with Dr. Olsen and Dr. Sam Gale, a scientist at the Allen Institute.

When a rain of photons hits our retina, the information follows a predetermined path from the eyeball through several different areas of the brain, ending in the higher processing areas of the cortex, the wrinkled outermost shell of the brain. Previous research on visual masking has shown that neurons in early parts of the visual system, in the retina and the pathways it feeds, are activated even when a person is unaware that they are looking at an image. In other words, your brain sees things without your knowledge.

From mouse to human: parallel recognition

To explore where unconscious sensations turn into conscious perceptions and actions, the scientists first trained 16 mice to spin a small Lego wheel in the direction of a rapidly flashed target image, in exchange for a reward if they chose the correct direction. The scientists then added different masking images on either side of the screen immediately after the target image. Adding a mask prevented the animals from performing the task correctly, meaning they could no longer recognize the original target image.

Visual masking had never been tested in mice before, so the research team had to create a task specifically for mice, in which the images and the way they were displayed differed from those used in previous human studies. To confirm that the optical illusion shown to the rodents was also relevant to us, the team tested it on 16 people (using keystrokes instead of a wheel). It turned out that human perception (or lack thereof) and mouse perception of this particular visual masking illusion are very similar.

This result implies that conscious perception is occurring in the visual cortex or in higher regions of the cortex downstream. This is consistent with the general sentiment in the field that the cortex is the seat of conscious cognition in mammals, including us, Koch said.

Reference: “Visual cortex is required for backward masking in mice” by Samuel D. Gale, Chelsea Stroder, Corbett Bennett, Stefan Mihalas, Christoph Koch, and Sean R. Olsen, 13 November 2023, Nature Neuroscience.
DOI: 10.1038/s41593-023-01488-0

Source: scitechdaily.com

Kosmik: A Visual Canvas Equipped with PDF Reader and Web Browser Functionality

In recent years, tools like Figma, TLDraw, Apple’s Freeform, and the Easel feature in the Arc browser have tried to sell the idea of using an “infinite canvas” to capture and share ideas. French startup Kosmik builds on that general concept with a knowledge-gathering tool that doesn’t require users to switch between different windows or apps to retrieve information.

Kosmik was founded in 2018 by Paul Rony and Christophe Van Deputte. Before that, Rony worked as a junior director at a video production company and found he needed a single whiteboard-like canvas on which he could place videos, PDFs, websites, notes, and drawings, instead of files and folders. That’s when he started building Kosmik, Rony told TechCrunch, drawing on his background in the history and philosophy of computing.

“It took us almost three years to create a working product that included baseline features like data encryption, an offline-first mode, and a spatial canvas-based UI,” Rony explained. “We built all of this on top of IPFS, so when the two of us collaborate, everything is peer-to-peer instead of relying on a server-based architecture.”

Image credits: Kosmik

Kosmik offers an infinite canvas interface where you can insert text, images, videos, PDFs, and links, which can be opened and previewed in a side panel. It also has a built-in browser, so users no longer have to switch between windows to find relevant links on a website. Additionally, the platform features a PDF reader that lets users extract elements such as images and text.

The tool helps designers, architects, consultants, and students build information boards for various projects, without requiring them to open numerous Chrome tabs and paste details into documents, which are a less visual medium for many different types of media. Some retail investors use the app to monitor stock prices, and consultants use it for project boards.

Image credits: Kosmik

Rony emphasized that bringing these different tools together in one place is a core selling point for Kosmik.

“I think it all revolves around the idea that we don’t have the best web browser or the best text editor or the best PDF reader,” Rony said. “But being able to have them exist together in the same place, and being able to drag and drop items between them, makes this tool very powerful.”

Available via the web, Mac, and Windows, Kosmik comes with a basic free tier, which is limited to 50 MB file uploads and 5 GB of storage with 500 canvas “elements.” For more storage and unlimited elements, the company offers a $5.99 monthly subscription, and it eventually plans to offer a one-time purchase option for those who only want to use the software on one device.

Doubling down

Kosmik also announced today that it has raised $3.7 million in a seed round of funding led by Creandum, with participation from Alven, Kima Ventures, Betaworks, and the founders of Replit and Quizlet.

Hanel Baveja, a principal at Creandum, told TechCrunch that the firm decided to invest in Kosmik because, a bit like Notion or Miro, it has the potential to build something that completely changes an organization’s workflow. But Baveja said that, like any consumer tool in this space, the startup needs to create immediate value for users.

“The time to value for any product must be immediate. Especially if it aims to become a commodity, you only have one chance to attract users,” Baveja said. “Finding a balance between a rich feature set and ease of deployment is certainly one of the challenges, and it is an area where the Kosmik team continues to strive.”

This cash injection is also timely given the product iterations in the pipeline. Kosmik is consolidating its codebases, and Kosmik 2.0 will bring feature parity across platforms: the new app will be web-based, and the desktop client will essentially be a wrapper app.

Additionally, the new version includes features such as multiplayer collaboration and AI-powered automatic tagging of items in images.

Rony said that in multiplayer mode, you can collaborate with someone on just a portion of the canvas using “cards,” which are like folders with objects dropped into them, rather than sharing the entire board.

Kosmik opened to users in March and currently claims around 8,000 daily users, though the company said that because the product can work completely offline, it is difficult to determine exactly how many people are actively using it.

It’s worth noting that Kosmik isn’t the only startup active in the personal whiteboard space: Berlin-based Deta is building a new cloud OS around this problem, and Sanity is building a social knowledge-sharing platform. These companies must compete in some way to capture users’ attention and persuade them to try new paradigms for acquiring knowledge.

Source: techcrunch.com