Elon Musk’s Bold Vision for the Future: Will His Big Bets Pay Off?

Billionaire Elon Musk at the World Economic Forum

Krisztian Bocsi/Bloomberg via Getty Images

Elon Musk, known for his leadership in several multibillion-dollar companies, continues to capture headlines. While his polarizing views draw attention, his flagship companies—Tesla and SpaceX—are undeniably pioneering advancements in electric vehicles and space exploration. Recent corporate maneuvers indicate that Musk may have an ambitious plan to integrate these ventures.

In a strategic development, Tesla has announced plans to halt production of its Model S and Model X. This shift does not signify an end to vehicle manufacturing; rather, the production facilities are to be reconfigured to advance Tesla’s humanoid robot, Optimus. Concurrently, Tesla is set to invest $2 billion into xAI, another of Musk’s enterprises, which oversees the social media platform X and its controversial chatbot, Grok.

This collective shift suggests Tesla is prioritizing AI-driven initiatives. Bloomberg and Reuters have both reported that Musk intends to merge SpaceX with Tesla or xAI—or potentially both—ahead of his plans to take SpaceX public this year.

What is Musk aiming to achieve with this consolidation? “By integrating xAI and SpaceX, he may be seeking to enhance resource efficiency across data, energy, and computing,” explains Merve Hickok from the University of Michigan. “He also suggested a merger with Tesla to leverage their technologies for distributed computing.”

Musk’s plans for humanoid robots are ambitious: he has stated a goal of manufacturing 1 million third-generation Optimus robots annually, which would require substantial computing resources for AI. Robots that interact with humans and their surroundings need sophisticated AI systems capable of managing extensive data.

Nevertheless, the rise of generative AI is already straining energy resources. Musk’s xAI recently faced scrutiny at the Colossus Data Center in Memphis, which came under fire from the U.S. Environmental Protection Agency for exceeding legal power generation limits. Musk has previously advocated for establishing data centers in space, positing that a rollout could occur within two to three years. However, many experts caution that various technical challenges—including cooling and radiation protection—must be resolved first.

Despite these challenges, launching a data center into orbit presents an opportunity, and SpaceX stands as a leading provider of reliable launches for both private and public sectors. Its extensive experience, particularly with its Starlink satellite internet division, supports this ambition.

“SpaceX is actively deploying a satellite grid in orbit—currently over 9,000 satellites—focused on internet distribution,” states Robert Scoble, a technology analyst at Unaligned. “While xAI works on internet distribution and news, its primary focus is developing innovative AI models that empower our vehicles, humanoid robots, and daily lives,” he says, “the convergence of these endeavors makes strategic sense.”

Ultimately, Musk envisions that the collaboration of SpaceX, Tesla, and xAI could position them at the forefront of the AI landscape, competing against major players like OpenAI, Google, and Microsoft. However, none of the three companies has publicly commented on these developments, and Musk himself remains silent.

However, some experts challenge Musk’s strategic direction. “Currently, only Tesla has real financial firepower, and its trajectory for funding future growth is concerning,” asserts Edward Niedermeyer, author of Ludicrous: The Unvarnished Story of Tesla Motors. He suggests these moves are “defensive,” aimed at shoring up the companies for future prospects and attracting broader retail investor interest.

Niedermayer emphasizes the necessity of public investment due to mounting operational costs: “Running out of cash is a significant concern,” he notes. “The expenses associated with training and operating AI models are considerable.” His belief is that by consolidating resources, Musk aims to present an attractive investment opportunity. However, if his vision doesn’t materialize, it could result in significant repercussions.

Source: www.newscientist.com

Achieving the 1.5°C Climate Goal: The Century’s Best Vision for a Sustainable Future

During the first decade of the 21st century, scientists and policymakers emphasized a 2°C cap as the highest “safe” limit for global warming above pre-industrial levels. Recent research suggests that this threshold might still be too high. Rising sea levels pose a significant risk to low-lying islands, prompting scientists to explore the advantages of capping temperature rise at approximately 1.5°C for safeguarding vulnerable regions.

In light of this evidence, the United Nations negotiating bloc, the Alliance of Small Island States (AOSIS), advocated for a global commitment to restrict warming to 1.5°C, emphasizing that allowing a 2°C increase would have devastating effects on many small island developing nations.

James Fletcher, the former UN negotiator for the AOSIS bloc at the 2015 UN COP climate change summit in Paris, remarked on the challenges faced in convincing other nations to adopt this stricter global objective. At one summit, he recounted a low-income country’s representative confronting him, expressing their vehement opposition to the idea of even a 1.5°C increase.

After intense discussions, bolstered by support from the European Union and the tacit backing of the United States, as well as intervention from Pope Francis, the 1.5°C target was included in the landmark 2015 Paris Agreement. However, the target was adopted before climate scientists had formally evaluated the implications of this level of warming.

In 2018, the Intergovernmental Panel on Climate Change report confirmed that limiting warming to 1.5°C would provide substantial benefits. The report also advocated for achieving net-zero emissions by 2050 along a 1.5°C pathway.

These dual objectives quickly became rallying points for nations and businesses worldwide, persuading countries like the UK to strengthen their national climate commitments to meet the stricter goals.

Researchers including Piers Forster at the University of Leeds credit the 1.5°C target with catalysing significantly tougher national climate goals than had previously been envisioned. “It fostered a sense of urgency,” he remarks.

Despite this momentum, global temperatures continue to rise, and current efforts to curb emissions are insufficient to fulfill the 1.5°C commitment. Scientific assessments predict the world may exceed this warming threshold within a mere few years.

Nevertheless, 1.5°C remains a crucial benchmark for tracking progress in global emissions reductions. Public and policymakers are more alert than ever to the implications of rising temperatures. An overshoot beyond 1.5°C is widely regarded as a perilous scenario, rendering the prior notion of 2°C as a “safe” threshold increasingly outdated.

Source: www.newscientist.com

High-Tech Glasses and Eye Implants Revive Vision Affected by Aging

Study participant measuring reading capacity post-retinal implant

Moorfields Eye Hospital

Individuals experiencing significant vision impairment can regain the ability to read, thanks to a compact wireless chip implanted in one eye along with advanced glasses.

Age-related macular degeneration (AMD) is a prevalent condition that impacts central vision and tends to progress over time. While the precise cause remains unknown, this condition arises from damage to the light-sensitive photoreceptor cells and neurons located in the central retina, leading to difficulties in facial recognition and reading. Available treatments are primarily designed to slow down the progression.

An advanced form of AMD referred to as geographic atrophy typically allows individuals to retain some photoreceptor cells that facilitate peripheral vision, along with sufficient retinal neurons to relay visual information to the brain.

Leveraging this capability, Daniel Palanker and his team at Stanford University in California created the PRIMA device. This system includes a small camera mounted on the glasses, which captures images and projects them through infrared light onto a 2-by-2-millimeter solar-powered wireless chip implanted at the rear of the eye.

The chip then transforms the image data into electrical signals, which the retinal neurons transmit to the brain. Infrared light is employed for this process as it is invisible to the human eye, thereby ensuring it does not interfere with any remaining vision. “This allows patients to utilize both the prosthesis and their peripheral vision simultaneously,” explains Palanker.

To evaluate its efficacy, researchers enlisted 32 participants aged 60 and above, all with geographic atrophy. Their visual acuity in at least one eye was 20/320 or worse, meaning that at 20 feet (6 meters) they could make out only what a person with 20/20 vision can see from 320 feet (97.5 meters).

The team implanted a chip in one eye of each participant. After a waiting period of four to five weeks, the volunteers began using the glasses in their everyday activities. The glasses enabled them to magnify their view up to 12 times and adjust brightness and contrast as needed.

After a year of using the device, 27 of the participants managed to read again and recognize shapes and patterns. They also noted an average improvement of five lines on a standard eye chart compared to their initial findings. Some participants were able to achieve 20/42 vision.
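As a back-of-the-envelope illustration of what a five-line gain means, the sketch below uses the standard convention that each line on an ETDRS-style eye chart corresponds to 0.1 logMAR units. This is not the trial’s actual methodology, just a way to make the numbers concrete.

```python
import math

# Eye-chart arithmetic (assumed convention: one chart line = 0.1 logMAR,
# as on ETDRS-style charts).

def snellen_to_logmar(denominator, numerator=20):
    """Convert a Snellen fraction such as 20/320 to logMAR."""
    return math.log10(denominator / numerator)

def logmar_to_snellen(logmar, numerator=20):
    """Convert logMAR back to an approximate Snellen denominator."""
    return numerator * 10 ** logmar

baseline = snellen_to_logmar(320)  # 20/320, the trial's entry threshold
improved = baseline - 5 * 0.1      # a five-line gain = 0.5 logMAR better
print(round(logmar_to_snellen(improved)))  # → 101, i.e. roughly 20/100
```

On this reckoning, a participant who started at exactly 20/320 would end up near 20/100 — still impaired, but a substantial improvement, and consistent with some participants doing better still at 20/42.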

“Witnessing them progress from reading letters to full words brought immense joy to everyone involved. One patient expressed, ‘I believed my eyes were irreparably damaged, but now they’re revitalizing,’” shares José-Alain Sahel from the University of Pittsburgh School of Medicine.

While stem cell therapy and gene therapy may potentially restore vision lost due to AMD, these approaches are still in early experimental trials. PRIMA stands out as the first artificial eye designed to restore functional vision in individuals with the condition, allowing them to perceive shapes and patterns.

Approximately two-thirds of the volunteers experienced temporary side effects, such as increased intraocular pressure, as a result of the implants; however, this did not hinder their vision improvement.

Comparison of a trial participant’s eye (left) and eye with retinal implant (right)

Science Corporation

“This research is both exciting and significant,” remarks Francesca Cordeiro from Imperial College London. “It provides hope for delivering vision improvements that have previously seemed more like science fiction.”

The improved visibility experienced by participants is limited to black and white. “Our next objective is to develop software to provide grayscale resolution and enhance facial recognition,” states Palanker. Nevertheless, researchers do not anticipate achieving color vision in the near future.

Palanker also aims to increase PRIMA’s resolution, which is currently constrained by pixel size and the total count that can be included on a chip. Testing a more advanced version in rats is underway. “This current version equates to human vision of 20/80, but electronic zoom can enable vision as sharp as 20/20,” he explains.

Source: www.newscientist.com

Our Old Visions of the Future Have Run Their Course and Need Revamping

The 20th century was a vibrant era for future visions, yet the 21st century has not sparked the same enthusiasm. Sci-fi author William Gibson, known for his groundbreaking cyberpunk work Neuromancer, refers to this phenomenon as “Future fatigue”, suggesting we seldom mention the 22nd century.

This stagnation is partly due to the evolution of many iconic future concepts from the 20th century. For instance, plastic was once hailed as the material of the future. Although it has proven to be durable, versatile, and plentiful, its properties now pose significant environmental and health concerns.

Today’s predominant future imagery carries a legacy of historical influence. Themes such as space colonization, dystopian AI, and a yearning for an imaginary past persist, often shaped by the climate anxiety many people experience. The future begins to feel like a closed book rather than an open road.

Jean-Louis Missika, former deputy mayor of Paris, articulated it well in his writing: “When the future is bleak, people idealize past golden ages. Nostalgia becomes a refuge amid danger and a cocoon for anticipated decline.”

Another factor contributing to this stuck imagery is social media, which exposes users to a vast array of different time periods at once, fostering nostalgia and a continuous remixing of existing ideas.

However, new visions of the future have emerged this century. For example, the climate aspiration movement gained traction on Tumblr and blogs in the 2000s. Yet, as smartphones became our primary mode of communication, the collective imagination surrounding our vision of the future waned.

I reflect on the future of living, drawing from my experience that a cohesive vision can motivate individuals to drive change. Such visions serve as engines of inspiration and imagination. They enable us to envision the society we aspire to create and commit to working towards that future. Movements like Civil Rights have long recognized this. A unified future vision also manifests effectively in architecture, advertising, and television, with Star Trek inspiring engineers for decades.

As we transition from fossil fuels to renewable energy, we find ourselves in a transformative era. This period is daunting yet invigorating. Numerous hotspots of innovation are emerging, such as rooftop solar energy in Pakistan, where households and small businesses actively adopt renewable energy solutions, or the global initiatives like Transition Town, rethinking local economies and cultures.

Nevertheless, we lack a unified vision that integrates these innovations, embedding them within a social context and building pathways from the present to the future.

In my new book, I explore four visions for the future currently taking shape: DeGrowth, which reevaluates our economic roles; SolarPunk, which revitalizes cultural innovation; the Metaverse, which immerses us in a vibrant digital universe; and movements that encourage us to rethink our relationship with nature.

Yet, the future won’t stop evolving. We must cultivate and nurture more emerging visions, allowing them to take shape as we redefine our narrative of what the future could be.

Sarah Hughesley is the author of Designing Hope: A Vision Shaping Our Future

Source: www.newscientist.com

Cameras Mimicking Human Vision Could Enhance Astronomical Discoveries

Sirius Binary Star System Captured with a Neuromorphic Camera

Satyapreet Singh, Chetan Singh Thakur, Nirupam Roy, Indian Institute of Science, India

Neuromorphic cameras, designed to emulate human vision, offer significant benefits for astronomers, enabling the capture of both bright and dim celestial objects in a single frame and the tracking of swift-moving objects without motion blur.

Unlike conventional digital cameras, which sample a grid of pixels multiple times per second and record data for every pixel each time, neuromorphic cameras, or event cameras, function quite differently. Each pixel is activated only if there’s a change in brightness at that specific location. If the brightness remains constant, no new data is saved, resembling how the human eye processes visual information.

This innovative approach presents various benefits. By recording only changing pixels, less data is generated while maintaining a much higher frame rate. Furthermore, these cameras measure light on a logarithmic scale, enabling the detection of fainter objects next to brighter ones that may saturate conventional camera images.
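The per-pixel behaviour described above can be illustrated with a rough Python sketch. This is a toy model for intuition only, not the firmware of any real event camera, and the threshold value is an arbitrary assumption.

```python
import math

# Toy model of an event (neuromorphic) camera: a pixel emits an "event"
# only when the log-intensity at its location changes by more than a
# threshold. Unchanged pixels produce no data at all.
THRESHOLD = 0.2  # assumed log-intensity change needed to trigger an event

def events_between(prev_frame, next_frame):
    """Compare two intensity frames and return (x, y, polarity) events.

    Working in log space mimics the cameras' logarithmic response, which
    lets a star 10,000 times brighter than its neighbour sit in the same
    scene without swamping the comparison.
    """
    events = []
    for y, (prev_row, next_row) in enumerate(zip(prev_frame, next_frame)):
        for x, (p, n) in enumerate(zip(prev_row, next_row)):
            delta = math.log(n) - math.log(p)
            if abs(delta) >= THRESHOLD:
                # polarity: +1 for brightening, -1 for dimming
                events.append((x, y, 1 if delta > 0 else -1))
    return events

# A static bright star generates no events; only the pixel whose
# brightness changed appears in the output.
frame_a = [[10000.0, 1.0, 1.0]]
frame_b = [[10000.0, 1.0, 5.0]]
print(events_between(frame_a, frame_b))  # → [(2, 0, 1)]
```

Note how the very bright but unchanging pixel produces nothing at all, which is exactly why these sensors generate so little data and cope with extreme dynamic range.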

To investigate the potential of this technology for astronomical applications, Chetan Singh Thakur and his team at the Indian Institute of Science in Bengaluru mounted a neuromorphic camera on a 1.3-meter telescope at the Aliyabatta Observatory in Uttarakhand, India.

They successfully captured meteoroids traveling between the Earth and the Moon and also obtained images of the Sirius binary system, which includes Sirius A, the brightest star in the night sky, and Sirius B.

Sirius A is approximately 10,000 times brighter than Sirius B, making it challenging to capture both in a single image using traditional sensors, as noted by Mark Norris from the University of Central Lancashire, UK, who was not part of the study.

According to Singh Thakur, neurotype cameras excel at tracking fast-moving objects due to their high frame rates. “For high-speed objects, you can capture their movement without blur, unlike conventional cameras,” he explains.

Telescopes typically utilize multiple sensors that can be swapped as needed. Norris points out that a neuromorphic camera could serve as an additional tool for scenarios where both very bright and very faint objects need to be observed concurrently, or for fast-moving targets like the recently identified interstellar object 3I/ATLAS.

Traditionally, to follow fast-moving objects, astronomers would need to pan the telescope. Neuromorphic cameras, however, can track the movement of such objects precisely while preserving background details and resolving their locations.

“Do you want to know the brightness of an object or its location? In quantum mechanics, you can’t ascertain both at the same instant,” Norris states. “This technology offers a potential method to achieve both simultaneously.”

While neuromorphic cameras provide unique advantages, they are unlikely to replace conventional sensors in every application. Their resolution is typically lower than that of the charge-coupled devices (CCDs) commonly used in digital cameras, and their quantum efficiency of about 78% falls short of the roughly 95% achieved by CCDs. This disparity makes traditional sensors more effective at capturing dim objects near their detection limits.

Source: www.newscientist.com

Why Elon Musk’s Vision for Self-Driving Tesla Taxis Misses the Mark: A Critique of Lidar

After years of promising investors that millions of Tesla Robotaxis would soon flood the streets, Elon Musk launched a limited driverless car service in Austin, Texas. The rollout faced significant challenges from the start.

The June 22 debut was met with a barrage of videos from pro-Tesla influencers celebrating the service and showcasing their rides. Musk heralded it as a milestone, and Tesla’s stock shot up nearly 10% the next day.

However, it soon became evident that some of the influencer footage painted a troubling picture of an autonomous vehicle that either broke traffic laws or struggled with basic functions. By Tuesday, the National Highway Traffic Safety Administration (NHTSA) had launched an investigation into these incidents and sought Tesla’s input.

If, as Musk boasted on X, this limited deployment is the result of over a decade of work, it symbolizes the complex technical choices and fixations embraced by the world’s richest person in pursuit of fully autonomous vehicles.

Musk has framed driverless cars as integral to Tesla’s future. The company’s sales have declined sharply this year, but he has vowed to rapidly expand the Robotaxi service. Nonetheless, this week’s rocky launch suggests Tesla is still grappling with technical hurdles that have drawn scrutiny from regulators.

The Robotaxi pilot involved around 10 cars navigating a confined area in Austin, with safety drivers present in the front seats. Additional limitations included restrictions during adverse weather and at nighttime. Influencer rides were priced at $4.20 each, mirroring Musk’s penchant for cannabis-related memes.

“Tesla’s autonomous driving can be deployed in approved locations. There’s no need for extensive mapping or specialized equipment,” the official Tesla account tweeted on launch day. “It just works.”

However, footage from at least 11 rides indicated that the trial did not unfold as flawlessly as Tesla’s promotional materials suggested. In one instance, the Robotaxi failed to make a left turn, veering into oncoming traffic instead, and resolved the issue by driving along a double yellow line. Other clips showed the vehicle allegedly exceeding speed limits.

This footage caught the NHTSA’s attention, with the agency stating they were aware of the incidents and had reached out to Tesla for more details.

Meanwhile, Musk retweeted a pro-Tesla influencer praising the service amidst technical failures and ongoing regulatory inquiries. One tweet shared by Musk featured a video showing a Robotaxi halting for a peacock crossing the road, while another urged followers to “ignore the media.”

“Lidar is lame.”

Musk has long maintained that relying solely on cameras is the key to true self-driving capability. Tesla’s consumer models feature what the company calls “Autopilot” and “Full Self-Driving” capabilities, enabling hands-free driving on highways. These systems rely on numerous external cameras for navigation, maneuvering, and stopping. The Robotaxis use similar software while depending entirely on cameras.

This camera-centric approach starkly contrasts with other self-driving tech firms like Waymo and Zoox, which utilize a combination of cameras and sensors, including radar and lidar. For instance, Waymo’s latest driverless vehicles are equipped with about 40 cameras and sensors, while Tesla’s most advanced autonomous-driving model employs around 8 cameras, according to a Bloomberg analysis. Lidar and radar are particularly useful for detecting obstacles in poor weather and lighting conditions.

Despite lidar’s advantages, Musk insists Tesla can do without it. “Lidar is lame,” he declared during Tesla’s Autonomy Day in 2019. “Using it in a car is foolish. It’s costly and unnecessary.”

According to Bloomberg, lidar systems can cost around $12,000 each, whereas cameras are typically far cheaper. Musk contends that camera-only technology mirrors how humans navigate using their vision.

Tesla Faces Lawsuits and Investigations Over Full Self-Driving Mode

Musk’s bet on camera-only technology has placed Tesla under scrutiny, particularly following fatal accidents involving drivers using its driver-assistance features. The company is currently embroiled in various government investigations and civil lawsuits asserting that Full Self-Driving struggles in conditions such as sun glare, fog, dust, and darkness. At least 736 crashes and 17 fatalities have been linked to the technology, according to an analysis by the Washington Post.

“Tesla maintains an almost obsessive view of running the system solely on cameras, despite the consensus among experts in the field,” commented Brett Schreiber, a lawyer representing several victims of Tesla’s autopilot failures.

“Anyone following collision avoidance technology since the ’90s understands that radar, lidar, and cameras are the optimal trio.”

Schreiber expressed little surprise at Tesla’s Robotaxi’s shaky development in Austin.

“The real tragedy here is that people continue to be harmed and killed due to this technology,” he said. “And it gets reduced to things like, ‘Look how cute it is that a car can’t even make a left turn.’”

Tesla did not respond to inquiries regarding the ongoing lawsuits, investigations, and crashes related to its Full Self-Driving technology.

Tesla’s Tactics vs. Waymo’s Approach

The contrast between Waymo’s method of launching commercial autonomous driving services in densely populated cities and Tesla’s approach extends beyond lidar versus cameras. Waymo is often seen as the frontrunner in a U.S. autonomous vehicle landscape whose ranks of competitors have thinned sharply.

There are numerous reasons Waymo has outlasted many of its rivals. Historically, the Google subsidiary dedicated extensive time to mapping urban areas and rigorously testing vehicles prior to launch. For example, in San Francisco, where Waymo first implemented a completely autonomous commercial service, the company had begun mapping and testing as early as 2021.

Initiated as part of Google’s X Research Lab in 2009, Waymo also encountered challenges with self-driving cars despite its cautious, step-by-step city-by-city rollout. Earlier this year, Waymo was compelled to recall over 1,200 vehicles due to software problems causing collisions with roadside objects, gates, and other barriers. Additionally, the NHTSA launched an investigation last year after receiving 22 reports of Waymo vehicles demonstrating erratic behaviors or violating traffic laws.

In contrast, Tesla is still in the trial phase with its service, and the Robotaxi launch in Austin marks the first time the automaker has deployed its autonomous driving technology in real-world conditions. Tesla has disclosed nothing about how long or how extensively it mapped and tested the technology in Austin.

This launch evokes memories of Uber’s initial attempt at self-driving ride-hailing in 2016, which was conducted without approval from California regulators. On the very first day of its pilot project in San Francisco, an Uber vehicle reportedly ran a red light. The company was forced to suspend the service just a week later after the DMV revoked the vehicles’ registrations. At the time, an autonomous driving executive at Uber had urged engineers to expedite the launch.

Faced with a lawsuit from Waymo regarding its self-driving operations and struggling to stay competitive, Uber sold its autonomous driving division in 2020.

Like Uber, Tesla did not seek permission to operate its Robotaxi service in Austin, though in this case Texas has no existing permit process; one is not expected to take effect until September.

It remains unclear how much behind-the-scenes testing Tesla carried out ahead of the rollout, but it is clear the automaker is under pressure to meet deadlines set by Musk.

Musk has been claiming since at least 2016 that Teslas would achieve full autonomy, and with the introduction of Robotaxis he may finally be approaching deadlines he has postponed several times over the last decade.

Source: www.theguardian.com

Retinal Implants Restore Vision in Blind Mice

Retinal damage can result in blindness

BSIP SA/Alamy

Retinal implants have shown potential in restoring vision in blind mice, indicating that they may eventually help those with conditions like age-related macular degeneration, where photoreceptor cells in the retina deteriorate over time.

Shuiyuan Wang from Fudan University in China and his team developed a retinal prosthesis composed of metal nanoparticles that replicate the function of lost retinal cells, converting light into electrical signals to be sent to the brain.

In their experiments, the researchers administered nanoparticles into the retinas of mice that had been genetically modified to be nearly completely blind.

They restricted water access for three days for both the modified blind mice and mice with normal vision. They then trained all of the mice to press a 6-centimeter-wide button on a screen to receive water.

Following training, each mouse underwent 40 testing rounds. Fully sighted mice pressed the button successfully 78% of the time. Mice with implants achieved a 68% success rate, while untreated blind mice managed only 27%. “That is a very noticeable effect,” says Patrick Degenaar at Newcastle University in the UK, who wasn’t involved in the research.

After 60 days, researchers observed minimal signs of toxicity from the implants in the mice. However, Degenaar emphasized the need for long-term safety data, stating, “For clinical application, extensive animal testing lasting approximately five years will be necessary.”

“Patients with age-related macular degeneration and retinitis pigmentosa could benefit from this prosthetic,” noted Leslie Askew from the University of Surrey, UK, who was not part of the study.

Degenaar also remarked that justifying this solution for age-related macular degeneration patients is complex, as they possess a degree of vision that may not warrant the risks associated with implanting prosthetics.

Furthermore, he noted that mice generally have inferior vision compared to humans, raising uncertainty about how beneficial the findings will be for people until comprehensive clinical trials are conducted.

Source: www.newscientist.com

Enhanced Contact Lenses Enable Vision in Infrared Spectrum, Even in Darkness

New contact lenses can provide infrared vision

Olga Yasternska/Alamy

Contact lenses enable users to perceive beyond the visible light spectrum, detecting infrared flickers even in darkness or with closed eyes.

The lenses incorporate engineered nanoparticles that absorb and convert infrared radiation, particularly within the near-infrared range of 800-1600 nanometers. This technology functions similarly to night vision equipment, allowing visibility in low-light conditions, but the contact lenses are significantly lighter and do not need any external power source.

“Contact lenses grant military personnel a modest, hands-free nighttime capability, overcoming the challenges posed by cumbersome night vision [goggles or scopes],” stated Peter Rentzepis from Texas A&M University, who is involved in related research employing the same nanoparticles (sodium fluoride, ytterbium, erbium) for eyeglass lenses.

The innovative wearables developed by Yuqian Ma of the University of Science and Technology of China and his team cannot yet deliver detailed night vision: as Rentzepis notes, they can only detect “high-intensity narrowband LED” light sources, not ambient infrared light.

“While it’s an ambitious study, contact lenses alone cannot be employed for reading in infrared or navigating dark paths,” explained Mikhail Kats, who is not associated with the research, at the University of Wisconsin-Madison.

In tests with humans and mice, the contact lenses transformed an otherwise invisible flash of infrared light into what Kats describes as “a significant, colorful chunk of visible light.” Such flashes can carry information; for instance, Ma and his team encoded and transmitted alphabetic characters by altering the frequency, number, and color of the light flashes.

This research builds upon previous studies where scientists directly injected nanoparticles into the eyes of mice to facilitate infrared vision. Wearable contact lenses present a “safer and more practical approach to human applications,” observes Rentzepis. However, he cautions that they still pose potential health and safety concerns, such as risks of thermal exposure from the photoconversion process or the leakage of nanoparticles into ocular tissues.

Source: www.newscientist.com

Infrared Contact Lenses Enable Night Vision, Even with Eyes Closed

Researchers have created prototype infrared contact lenses that enable users to see in the dark or even with their eyes closed.

The innovative prototype, developed at the University of Science and Technology of China, incorporates nanoparticles that transform infrared light into visible light.

Contact lenses infused with nanoparticles were provided to volunteers as part of the study recently published in the journal Cell. Participants successfully detected a flashing signal from infrared rays, which are normally invisible to the naked eye.

The transparent lenses permitted participants to perceive both visible and infrared light simultaneously.

“We discovered that when subjects close their eyes, near-infrared light penetrates the eyelids more efficiently than visible light, allowing us to capture this flickering information more effectively,” stated Tian Xue, the lead researcher from the University of Science and Technology in China.

These nanoparticles absorb near-infrared (NIR) light with wavelengths ranging from 800 to 1600 nanometers, which is beyond human visual perception. They then re-emit this light within the visible range of 400 to 700 nanometers.
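As a toy illustration of the ranges quoted above (my own sketch, not code from the study), the lenses' job amounts to mapping light from the near-infrared band down into the visible band:

```python
# Illustrative only: classify wavelengths against the ranges quoted in the
# article -- the nanoparticles absorb NIR at 800-1600 nm and re-emit in the
# visible range at 400-700 nm.
def band(wavelength_nm: float) -> str:
    """Return which of the article's bands a wavelength falls in."""
    if 400 <= wavelength_nm <= 700:
        return "visible"
    if 800 <= wavelength_nm <= 1600:
        return "near-infrared"
    return "outside both ranges"

print(band(980))   # a typical infrared-LED wavelength: absorbed by the particles
print(band(550))   # green light, within the re-emission range
```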

Currently, near-infrared light is utilized in active night vision goggles, which illuminate the environment with infrared rays and convert that light into a visible format for users.

Active Night Vision Goggles illuminate the landscape with infrared rays and convert this into visible wavelengths – Credit: Getty Images/StockByte

However, if you're hoping to see the world as portrayed in Predator, you may be disappointed: longer wavelengths are required for that effect.

At present, the contact lenses are sensitive enough to detect light emitted from infrared LEDs.

While the lenses initially struggled to capture fine details, the research team was able to enhance this capability by using an additional set of glasses.

Nanoparticles can be modified to emit light in various colors, improving the clarity and interpretation of infrared images. There may even be potential to alter visible light performance.

"By converting red visible light to green visible light, this technology could make such light visible to people who are colorblind," explains Xue.

Source: www.sciencefocus.com

Apple Introduces Enhanced Accessibility Features for Individuals with Vision and Hearing Impairments

Apple has unveiled an extensive array of iOS accessibility features aimed at supporting individuals with visual and auditory impairments, challenging the perception that Apple’s hardware pricing makes accessibility costly.

Ahead of Global Accessibility Awareness Day on Thursday, May 15th, Apple revealed its upcoming accessibility features, which will debut later this year. These include live captions, personal voice replication, tools for reading enhancement, upgraded Braille readers, and accessibility "nutrition labels".

The nutrition labels require developers to outline the accessibility features available within their apps, such as voiceover, voice control, or large text options.


Sarah Herrlinger, senior director of Apple’s Global Accessibility Policy and Initiative, expressed to Guardian Australia her hope that the nutrition label will empower developers to create more accessibility options in the future.

“[It] gives them a real opportunity to understand what it means to be accessible and why they should pursue it and expand upon it,” she remarked.

“By doing this, we’re giving them the chance to evolve. There might be aspects they are already excelling in.”

The company has also enhanced its Magnifier app, bringing it to Mac, enabling users to utilize their camera or connected iPhone to zoom in on screens or whiteboards during lectures to read presentations.

The updated Braille functionalities allow for note-taking with Braille screen input or compatible Braille devices, along with calculations using Nemeth Braille, a standardized Braille code used in mathematics and science.




Apple’s new live listening accessibility features enable your iPhone or iPad to function as a microphone and transmit sounds to your hearing device. Photo: Apple

The enhanced personal voice feature allows users to replicate their voice using just 10 phrases, improving on the previous version, which required 150 phrases and an overnight wait while the model was processed. Apple says the voice replica remains on the device unless the user backs it up to iCloud, where it is encrypted, minimizing the risk of unauthorized use.

Herrlinger noted that as advancements in artificial intelligence have emerged at Apple, the accessibility team has actively sought ways to incorporate these innovations into their initiatives.

“We have been collaborating closely with the AI team over the years, ensuring we leverage the latest advancements as new opportunities arise,” she stated.


Google’s Android operating system offers several comparable accessibility features, such as live captions, Braille readers, and magnifying tools. New AI-supported features were announced this week.

Apple's Live Listen feature allows users to use AirPods to amplify audio in settings like lecture halls. In addition to live captions, Apple has recently introduced functionality that enables individuals with hearing loss to use AirPods as hearing aids.

While Apple’s hardware is typically viewed as high-end in the smartphone market, Herrlinger disputes the notion that the company’s accessibility options come at a premium, emphasizing that these features are built into the operating system at no additional cost.

“It’s available out of the box without extra charges,” she asserted.

“Our aim is to develop various accessibility features because we understand that each individual’s experience in the world is unique. Different people utilize various accessibility tools to aid them, whether it’s a single challenge or multiple.”

Herrlinger said that building multiple accessibility features into core devices makes them more cost-effective for customers.

“Now, they’re all integrated into a single device that has the same price for everyone,” she remarked. “Thus, in our view, it’s about making accessibility more democratic within the operating system.”

Chris Edwards, Head of Corporate Affairs at Vision Australia, commended the company for embedding accessibility features into their products and operating systems, highlighting his own experience as a blind individual with a Seeing Eye Dog.

“I believe that interpreting images through the new features enhances accessibility for all. The ability to interpret images in real-time is a significant step towards improving lives,” he stated.

“The new accessibility features seem particularly beneficial for students in educational settings, reinforcing that Braille remains a crucial mode of communication.”

Source: www.theguardian.com

Bill Gates Shares Vision for Shutting Down the Gates Foundation by 2045

Donald Trump's administration is at the forefront of these funding cuts, but its harsh realities are only part of the narrative. Following a surge in the 2000s, contributions to global health stagnated throughout the 2010s. The landscape of charitable giving has also shifted notably in the era of pledges, with the wealthiest individuals globally committing to donate over half of their fortunes to various causes. After the Gates' divorce in 2021, Melinda eventually departed the foundation to pursue her own philanthropic endeavors. Recently, long-time ally Warren Buffett announced his plan to channel most of his remaining wealth into a charitable trust managed by his children, with no additional funds going to the Gates Foundation after his death. And after a slowdown following the Covid years, foreign aid declined this year; Mark Suzman, CEO of the Gates Foundation, recently described the falling aid levels in The Economist as "falling off a cliff."

On the ground, progress has been uneven, particularly in the aftermath of the pandemic emergency, which led to the suspension of many routine vaccination programs and left the world's poorest nations in severe debt distress. While the proportion of the global population living in extreme poverty fell by nearly three-quarters from 1990 to 2014, that progress has stalled since then.

This is the crucial moment to examine the narrative Gates and his team are telling: despite post-pandemic setbacks and the challenges posed by Trump's policies, the Gates Foundation will emphasize once more the potential of biomedical tools and life-saving innovations in the current development landscape, including advancements in AI. They envision a future where the Gates Foundation is no longer needed. This vision is undeniably attractive. But with the challenges ahead, can it truly be realized?

During two days in late April, I engaged in discussions with Gates about the current state and legacy of his philanthropy, reviewing both accomplishments and setbacks thus far, as well as the challenges yet to come. Below is a revised, condensed account of those conversations, capturing his optimistic, detailed, confident, and at times bold perspective as he describes the coming decades as an “era of miracles,” representing even more fundamental advancements than he has previously cited.

Let’s discuss the current tensions surrounding the Trump administration. It appears that the administration is poised to abandon foreign aid entirely, leaving millions of people and many global institutions in jeopardy. How dire is this situation?

Source: www.nytimes.com

How one artist’s vision of Mario Jump made him a key figure in Nintendo’s story | Games

In 1889, craftsman Fusajiro Yamauchi founded a hanafuda playing-card company in Kyoto, naming it "Nintendo." Although the exact meaning has been lost over time, historians believe it translates to "leave it to luck." Nintendo successfully transitioned from paper games to electronic games in the 1970s, establishing itself as a household name worldwide.

Working at Nintendo was a dream come true for Takaya Imamura, an art school student enamored with games like Metroid and Super Mario Bros. 3 in the 1980s. Despite initial misconceptions about the industry, Imamura discovered the creative opportunities at Nintendo and joined the team in 1989. Over the years, he contributed to iconic projects and characters, solidifying his place in gaming history.

Imamura’s journey at Nintendo was marked by memorable collaborations with Shigeru Miyamoto, leading to the creation of beloved games and characters. From F-Zero to Zelda and Star Fox, Imamura’s artistic vision helped shape Nintendo’s unique design philosophy. His work reflected a blend of traditional techniques with innovative storytelling, resonating with audiences worldwide.

As Nintendo evolved under new leadership, Imamura witnessed the company’s strategic shifts and successful product launches. Reflecting on his time at Nintendo, Imamura embraces the transformative era of gaming and technological advancements. His departure from Nintendo in 2021 marked a new chapter in his career as an indie developer, with a passion project inspired by his earliest days in the industry.

Embracing the spirit of chance and creativity, Imamura’s journey comes full circle with his indie game, Omega Six. Honoring Nintendo’s legacy of dedication and innovation, Imamura continues to explore new frontiers in game development, guided by his enduring vision and passion for storytelling.

Source: www.theguardian.com

MrBeast’s infamous game show is a grim dystopian vision, tailor-made for America in 2025

YouTube sensation Jimmy Donaldson, better known as MrBeast, set out to create the biggest reality contest show ever made. And by most accounts, he achieved his goal.

Beast Games, halfway through its run, has dominated Amazon's charts in over 80 countries, now holding the top spot among streaming platforms. Hailed as the number one unscripted program in history, the show attracted over 50 million viewers in just 25 days.

Inspired by Netflix's Korean hit Squid Game, Beast Games mirrors that show's premise, color scheme, sweatsuits, and cash motivation, but with a louder, more American take.

With a budget exceeding $100 million, Beast Games stands as the most expensive competitive show to date. Funding mostly came from Donaldson’s own pocket to cover prizes, accommodation, staff, and elaborate filming locations.

The result is a spectacle, but not the inspiring one MrBeast envisioned. It reflects America's current state, like a luxury liner sinking beneath the waves in slow motion: a grim reminder for future generations of society's greed and self-destruction.

Part of Beast Games’ allure is its unscripted format, offering a raw portrayal of real contestants vying for a chance at generational wealth. However, the show’s depiction of capitalism and exploitation raises concerns.

Beast Games takes cues from the Netflix hit Squid Game. Photo: No Joo-han/Netflix

Beast Games blurs the line between entertainment and exploitation, with contestants subjected to degrading challenges for a shot at wealth. MrBeast’s role in the show’s narrative raises questions about ethics and responsibility.

Beast Games embodies the dark side of American society, offering a stark commentary on wealth, influence, and morality. The show’s portrayal of competition and exploitation highlights deeper societal issues and challenges.

Source: www.theguardian.com

Do Elon Musk and Reform Britain Share a Political Vision?

The recent gathering between Elon Musk, Nigel Farage, and Reform UK treasurer Nick Candy was not just a meeting of Donald Trump supporters but a meeting of minds.

Their political agenda, developed under President Trump’s MAGA Vision, focuses on immigration, culture wars, and public sector cuts.

Farage emphasized the importance of saving the West, stating, “We only have one chance. Together, we can achieve great things.”

Speculation arose about Musk potentially donating up to $100m to Reform, despite potential objections from voters.

A ban on wealthy foreigners donating to British political parties received 55% support, with 66% saying Musk should not have any influence on British politics.

Although they share ideological similarities, the public opinion on Musk’s influence remains divided.

Immigration

Musk’s stance on U.S. immigration aligns with the reformers’ goals, emphasizing the need for secure borders and boosting legal immigration to meet labor demands in the tech industry.

Farage and Reform prioritize freezing “non-essential” immigration and deporting illegal immigrants, echoing Musk’s concerns.

Shrinking Government

Musk’s anti-government sentiments stem from regulatory challenges in his industries and support from Trump to slash the U.S. federal budget.

Farage endorses Musk’s efforts in reducing public sector size, aligning with Reform’s vision for the UK.

Political science professor Tim Bale highlights Musk’s appeal to disruptors like Reform, citing their shared values in shaking up the establishment.

Rights and “Woke War”

Musk’s criticisms of woke culture and diversity regulations resonate with Reform’s agenda to combat “transgender ideology” and abolish equality provisions.

Support for Musk’s anti-woke stance aligns with Reform’s cultural war priorities.

Net Zero

Musk’s environmental credentials contrast with Reform’s rollback of eco-friendly policies, advocating for revoking the UK’s net zero target and boosting oil and gas licenses.

While Musk prioritizes environmental concerns, Reform focuses on economic implications of green policies.

Russia

Musk’s shifting views on Ukraine, from supporting to more ambiguous stances, reflect his complex relations with geopolitical issues.

Farage's past remarks on Russia's invasion of Ukraine and criticisms of NATO contrast with Musk's involvement in aiding Ukraine through Starlink.

Both Musk and Farage’s views on Russia highlight their divergent paths in addressing international conflicts.

Source: www.theguardian.com

Eating pistachios every day could help safeguard your vision

Dietary treatment with pistachios, a bioavailable source of the xanthophyll lutein, is effective in increasing macular pigment optical density (MPOD) in healthy adults, according to a new study from Tufts University and Tufts Medical Center.

Pistachios are the only nut that provides a measurable source of lutein, a powerful antioxidant that helps protect the eyes. Image credit: Erika Varga.

Lutein and zeaxanthin are dietary xanthophylls, a type of carotenoid most commonly found in vegetables and fruits, with green and yellow vegetables being particularly rich sources.

These compounds cross the blood-brain barrier and accumulate exclusively in the macular region of the human retina, where they are called macular pigments.

Pistachios are the only nut that contains large amounts of these xanthophylls, although unlike eggs, they provide only lutein.

However, like eggs, pistachios provide a source of fat, primarily as monounsaturated and polyunsaturated fats, and therefore may be a highly bioavailable source of lutein.

"Our research shows that pistachios are not only a nutritious snack, but may also have significant eye health benefits," said Dr. Tammy Scott, a research and clinical neuropsychologist at Tufts University.

“This is especially important as people age and the risk of visual impairment increases.”

In a randomized controlled trial, eating 2 ounces (57 grams) of pistachios per day as part of a regular diet for 12 weeks significantly increased MPOD in otherwise healthy middle-aged and older adults, compared with eating the regular diet alone.

They also found that pistachio consumption almost doubled the participants' daily lutein intake and significantly increased plasma levels.

“Incorporating a handful of pistachios into your diet can improve your intake of lutein, which is important for eye protection,” says Dr. Scott.

“Pistachios provide a source of healthy fat, and lutein from pistachios may be more readily absorbed into the body.”

“Pistachios provided approximately 1.6 mg of lutein, which is enough to double the average daily intake of U.S. adults for lutein, a type of plant pigment known as xanthophylls.”

Lutein, found in pistachios, not only supports eye health but may also benefit brain function.

“Because lutein crosses the blood-brain barrier, it may help reduce oxidative stress and inflammation,” said Tufts University researcher Elizabeth Johnson, Ph.D.

“Similar to the eyes, lutein selectively accumulates in the brain and may play a role in attenuating cognitive decline.”

“Research suggests that higher levels of lutein improve cognitive abilities such as memory and processing speed, making pistachios an invaluable addition to diets aimed at supporting healthy aging overall. It is a great addition.”

The study was published on October 17, 2024, in The Journal of Nutrition.

_____

Tammy M. Scott et al. Pistachio consumption increases macular pigment optical density in healthy adults: a randomized controlled trial. The Journal of Nutrition, published online October 17, 2024; doi: 10.1016/j.tjnut.2024.10.022

Source: www.sci.news

Review of Vision Pro: Apple’s cutting-edge headset exceeds expectations

On a sweltering summer day in London, I found myself working in the middle of snow-covered Yosemite National Park, surrounded by floating apps and browser windows. Later, I'd reminisce about holidays from years ago, staring out at windswept Oregon beaches, sitting in a speeder on Tatooine watching Rogue One in 3D, and spending the night with a guided meditation.


These are the sort of immersive experiences Apple’s latest, and most expensive, gadget offers, blending the real and virtual worlds, all controlled by your eyes and hands. The Vision Pro may resemble virtual reality headsets like Meta’s Quest series, but it aims to be something much more.

But with a prohibitive price tag of £3,500 (€3,999 / $3,499 / AU$5,999) that most buyers won’t even consider, this cutting-edge tech marvel is best thought of as a glimpse into the near future of computing.

You can use your Mac’s screen, keyboard, and trackpad streamed to a simulated 4K display, alongside other windows and apps. Photo: Martin Godwin/The Guardian

Put on the headset and you’re transported to a photorealistic exotic location, or use the Digital Crown to increase or decrease immersion, seamlessly blending reality and the virtual world. The real world is sent through the camera to a crystal-clear display and displayed as pass-through video — far better than the competition, and so clear that you can read on your phone without taking off the headset.

Your content appears in a floating window fixed in 3D space, as big or small as you like. Even if you walk away from the window, your content stays where you left it and is instantly visible when you return. Just look at the content you want and select it with a pinch of your fingers. Type directly by "touching" the floating keyboard, or scroll through sites as if on a giant virtual iPad.

Step into an immersive experience and go one step further by walking with dinosaurs, exploring the solar system or flying along neon-lit highways in rhythm games.

Third-party apps offer a variety of mixed reality and immersive experiences: an astronomy app (top left), Jetpack Joyride 2 (top right), Disney+ (bottom left) and Luna (bottom right). Photo: Samuel Gibbs/The Guardian

Heavyweight Technology

Vision Pro is the pinnacle of headset tech: The same M2 chip found in the 2022 MacBook Air runs apps, and its R1 chip processes input from the headset’s 12 cameras, five sensors, and six microphones. Combined, this delivers a smooth experience in both the real and virtual worlds displayed on two high-resolution Micro-OLED displays in front of your eyes.

The exterior cameras and sensors create a map of the real world, including objects like furniture and walls, and track the user’s position and hand movements. The interior camera monitors eye movements to interact with buttons and objects, making sure what you’re looking at is clear. The headset also features “Optic ID,” an alternative to Face ID, to seamlessly unlock and authenticate payments by scanning your iris.

The experience is exponentially better than anything that came before, and at times, it’s magical.

The various parts of the Vision Pro attach to each other via magnets and quick-release clips, including two types of straps. Photo: Martin Godwin/The Guardian

All the technology packed into the Vision Pro creates one major problem: weight. At up to 650 grams, it’s heavier than Apple’s largest iPad Pro and competing headsets like the Quest 3, which weighs 515 grams. And that doesn’t include the 353 gram battery, which connects to the headset with a cord so you can put it in your pocket or keep it on the desk in front of you.

During the ordering process, your face is scanned with an iPhone and a custom fit is created from nearly 200 combinations of strap sizes, “light seals” and cushioning, making it more comfortable than any other headset and leaving no goggle marks on your face.

What remains is the strain on my neck. After wearing the headset daily for a month, I can now manage sessions up to about two hours long. However, I still feel like I’ve given my neck a workout, and wearing it for long periods without taking proper breaks causes the same neck, shoulder and back pain I get when I’m hunched over a laptop all day.

The battery lasts about two to three hours, which is plenty if you charge it while sitting at your desk or on the couch. But the headset isn't designed to be easily shared: even if you manage to get a good fit on a guest's face, you'll need to redo the five-minute eye-tracking setup to get it working, even temporarily.

We’re only scratching the surface in productivity improvements

I placed my Mac display in the center with various windows around it, and some behind and above it, and the screenshots in the headset don’t do justice to how it will look in person. Photo: Samuel Gibbs/The Guardian

The Vision Pro is different from other headsets in that it’s fully integrated into the Apple ecosystem — more like a Mac than an iPhone — allowing you to create an entire app and productivity environment anywhere, without the need for multiple monitors.

It comes with many familiar apps, including Apple’s Mail, Messages, Notes, Keynote, Freeform, and Photos, and many others are available as “compatible” apps, including Microsoft’s Word and Excel, but it doesn’t include Google apps like Gmail or Drive, and only some of them work properly as web apps in Safari.

The headset tracks your hand movements relative to virtual objects, and here we see the common two-handed pinch-to-zoom gesture to increase the size of a photo. Photo: Martin Godwin/The Guardian

Using the Vision Pro as part of a productivity setup is great, but that’s only scratching the surface of what the headset can do.

Apps for Vision Pro are varied: some simply drag 2D experiences into the 3D space of the headset, like games played on a TV screen placed within the environment, while others are fully immersive environments you can walk around in.

The Apple TV app lets you enter a virtual cinema and choose your row and seat, while Disney+ lets you sit on a couch in Avengers Tower or the aforementioned speeder on Tatooine. On both services, the 3D movies look especially good.

But where Vision Pro really shines is when you combine real and virtual worlds, such as playing on a virtual chessboard placed on a table in front of you. Apple’s Encounter Dinosaur demo experience shows what’s possible by creating a portal to a prehistoric land that’s fixed to the actual wall of a room. A butterfly emerges from the portal and lands on your outstretched finger. The dinosaur then comes into view and locks eyes with you. The dinosaur’s head and eyes follow you as you move around the room, before scaring off rival dinosaurs and roaring at you.

There are only a handful of truly great experiences available on the App Store right now, but most of the best ones are controlled directly by your hands and body. Other headsets can do similar things, but none do it as easily, accurately, or with the same high fidelity as Vision Pro.

Relive your memories like never before

When you view your holiday panoramas with Vision Pro, the photos expand all around you, filling your field of vision completely for an immersive experience. Photo: Apple

One of the most unexpected and wonderful things about Vision Pro is its ability to relive past moments through photos, videos, and panoramas.

Loading a panorama photo made me feel like I was standing in Death Valley again, enjoying the dramatic colors of a sunset over the vast desert. Or sitting in a packed Capital One Arena watching the Washington Capitals play ice hockey. And a photo I took from the top of Seattle’s Space Needle gave me the same feeling of height dread I had when I took it seven years ago.

Vision Pro can also display spatial and 3D videos shot with the headset or an iPhone 15 Pro. These look like the little holograms you often see in sci-fi, giving you a real sense of depth and the feeling of being back in the moment, but it takes practice to get it right.

Source: www.theguardian.com

Dementia risk factors identified: Poor vision and high cholesterol

Vision loss linked to dementia

Drazen Žigic/Getty Images

A large-scale study has identified poor eyesight and high cholesterol as two new risk factors for dementia. The study claims that eliminating these factors, along with 12 other previously recognized factors, could prevent almost half of all dementia cases worldwide. However, some of these factors are difficult to eliminate, and genetics and advanced age remain the biggest risk factors for developing dementia.

"Dementia may be one of the most significant health threats facing the nation," said Gill Livingston at University College London. "The possibility of changing this and significantly reducing the number of people suffering from [this] disease is crucial."

A 2020 study identified 12 potentially modifiable risk factors for dementia, including hearing loss, depression, smoking, high blood pressure, heavy alcohol consumption, obesity, air pollution, traumatic brain injury, diabetes, social isolation, physical inactivity and lack of education.

Livingston and 26 other dementia experts from around the world updated the list based on the latest evidence, retaining the 12 risk factors but adding two new ones: high levels of low-density lipoprotein (LDL) "bad" cholesterol before age 65 and untreated vision loss in later life.

The researchers included high LDL cholesterol based on several new findings, including an analysis of 17 studies that followed around 1.2 million British participants under the age of 65 for over a year.

The results showed that for every 1 millimole per liter (mmol/L) increase in LDL cholesterol, the incidence of dementia increased by 8 percent. In another study of similar size, high LDL cholesterol (above 3 mmol/L) was linked to a 33 percent increased risk of dementia on average, and this risk was most pronounced in people who had high LDL cholesterol in midlife. "So it really does matter how long you have it," Livingston says.
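As a back-of-the-envelope illustration of the 8 percent figure (assuming the effect compounds multiplicatively per mmol/L, which is the usual reading of such incidence figures but is not stated in the article):

```python
# Hypothetical sketch: if dementia incidence rises ~8% per 1 mmol/L of LDL
# cholesterol, and the effect compounds multiplicatively (an assumption for
# illustration, not the study's stated model), the relative risk at a given
# LDL level versus a reference level is:
def relative_risk(ldl_mmol_l: float, reference_mmol_l: float = 3.0,
                  increase_per_unit: float = 0.08) -> float:
    """Relative dementia risk at ldl_mmol_l versus the reference level."""
    return (1.0 + increase_per_unit) ** (ldl_mmol_l - reference_mmol_l)

# e.g. 5 mmol/L versus a 3 mmol/L reference: 1.08**2, about a 17% increase
print(f"{relative_risk(5.0):.2f}")
```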

The researchers suggest that this association may mean that excess cholesterol in the brain increases the risk of stroke and contributes to dementia. Cholesterol has also been linked to the buildup of beta-amyloid protein plaques in the brain, which is linked to Alzheimer's disease.

For untreated vision loss, an analysis of 14 studies involving more than 6.2 million older adults who were initially cognitively healthy showed a 47 percent increased risk of developing dementia over 14.5 years. In another analysis, the increased risk was mainly tied to vision decline caused by cataracts and complications from diabetes. "Vision [loss] is a risk because you're reducing cognitive stimulation," Livingston said, as some research suggests that such stimulation may make the brain more resilient to dementia.

The researchers then used their model to estimate what percentage of dementia cases worldwide could be prevented if each of 14 modifiable risk factors were eliminated. They found that hearing loss and high cholesterol had the greatest impact, each contributing about 7 percent of dementia cases, while obesity and excessive alcohol consumption had the least impact, each contributing 1 percent. If all factors were eliminated, the team estimated that about 45 percent of dementia cases could be prevented.
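Note that the 45 percent figure for eliminating all 14 factors is not a simple sum of the individual contributions, since overlapping risk factors are typically combined multiplicatively rather than added. A toy sketch of that arithmetic (the values and combination rule here are illustrative, not the study's actual model):

```python
# Hypothetical sketch: combining population attributable fractions (PAFs)
# multiplicatively, one common approach when risk factors overlap.
# The PAF values below are illustrative, not taken from the study.
def combined_paf(pafs):
    """Combine individual PAFs as 1 - product of (1 - PAF_i)."""
    remaining = 1.0
    for p in pafs:
        remaining *= (1.0 - p)
    return 1.0 - remaining

pafs = [0.07, 0.07, 0.05, 0.03, 0.01, 0.01]  # illustrative values
print(f"combined: {combined_paf(pafs):.1%}")  # less than the 24% simple sum
```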

But just because these factors are associated with dementia doesn't mean they cause it, says Dylan Williams at University College London, who was not involved in the report. "So even if we target interventions at them, they may not prevent as much disease as we would hope."

These estimates are only population averages and don't capture individual-level risk, Williams says. So removing all factors from your life wouldn't necessarily halve your risk of dementia, which is heavily influenced by genetics and age. Eliminating many of these risk factors, like air pollution or lack of education, would also require public health interventions rather than individual changes, Williams says.

Source: www.newscientist.com

New Glasses Coated with Lithium Enhance Vision in Low Light

A device that can convert infrared light into visible light

Laura Valencia Molina et al. 2024

Glasses coated with lithium compounds may one day help us see clearly in the dark.

For more than a decade, researchers have been searching for the best lightweight materials that can convert infrared light, invisible to the human eye, into visible light in order to provide an alternative to night-vision goggles, which are often heavy and cumbersome.

Until recently, the leading candidate was gallium arsenide. But Laura Valencia Molina at the Australian National University in Canberra and her colleagues have found that a film of lithium niobate coated with a lattice of silicon dioxide performs better.

"Through improved design and material properties, we have achieved a tenfold increase in the conversion rate from infrared to visible light compared to gallium arsenide films," says team member Maria del Rocio Camacho Morales at the Australian National University.

Through a series of experiments, the team demonstrated that the lithium niobate film could convert high-resolution images from infrared light with a wavelength of 1,550 nanometers to visible light with a wavelength of 550 nanometers, exceeding the capabilities of gallium arsenide.

Night-vision goggles require infrared photons to pass through a lens and be converted into electrons in a device called a photocathode. These electrons then strike a phosphor screen, where they are converted back into visible-light photons. The entire process requires cryogenic cooling to prevent the image from being distorted.

Valencia Molina says the lithium niobate film is simultaneously hit by infrared light emitted by an object and illuminated with a laser. The film combines the infrared light with the laser light, up-converting the infrared into visible light.
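The wavelengths quoted above constrain which laser the scheme needs. As a rough sketch, assuming the "combining" step is sum-frequency generation (a standard nonlinear process in lithium niobate, though the article does not name it), photon energies add, so inverse wavelengths add, and the implied pump wavelength follows directly:

```python
# Hedged sketch: assuming the up-conversion is sum-frequency generation,
# energy conservation gives 1/lambda_visible = 1/lambda_ir + 1/lambda_pump.

def pump_wavelength_nm(lambda_ir_nm: float, lambda_vis_nm: float) -> float:
    """Pump laser wavelength needed to up-convert lambda_ir to lambda_vis."""
    return 1.0 / (1.0 / lambda_vis_nm - 1.0 / lambda_ir_nm)

# Wavelengths quoted in the article: 1,550 nm infrared in, 550 nm visible out.
pump = pump_wavelength_nm(1550.0, 550.0)
print(f"Implied pump wavelength: {pump:.1f} nm")  # ~852.5 nm, near-infrared
```

Under this assumption the pump laser would itself sit in the near-infrared, which is consistent with the article's point that the laser must be shone onto the film alongside the incoming infrared light.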

Camacho Morales says that one day, lattices of lithium niobate and silicon dioxide could be made into a film thinner than plastic wrap that could be coated over regular glasses to improve night vision.

The work is still at the research stage: in the experiments, the laser had to be carefully positioned so it shone onto the film together with the infrared light emitted by the object. The team is now experimenting with an array of nanolasers that could sit directly on top of the lithium niobate film.

The research is an important step toward lightweight night-vision devices, and perhaps a film that can be attached to ordinary glasses, Camacho Morales says. It could also help drones navigate in the dark, she says, because current night-vision devices are too heavy for some vehicles to carry.


Source: www.newscientist.com

I experimented with Apple Vision Pro and it gave me a fright – Arwa Mahdawi

If you’re worried that technology is getting a little too intelligent and robots are on the verge of taking over the world, there’s a simple way to ease your fears: call a company and ask some simple questions. You’ll be put through an automated voice system and spend the next 10 minutes yelling: “No, I didn’t say that! What do you mean, you didn’t understand? I don’t want that option! Get me a human, damn it!”

That was certainly my experience when I called Apple to try to rebook a Vision Pro demo that had been abruptly canceled because of snow. But if my phone experience felt dated, the Apple Vision Pro headset itself felt like an astonishing glimpse into the future. Mind you, at $3,499, it should.

My expectations, I should say, were pretty low. For the last decade or so, we’ve been told that virtual reality and augmented reality are just around the corner, but they have consistently failed to break into the mainstream. The headsets were clunky and impractical, the prices prohibitive, and the experience itself, while impressive, wasn’t exactly awe-inspiring. The metaverse, essentially a rebrand of virtual reality, was similarly disappointing.

But the Vision Pro really impressed me; I kept saying “wow” throughout the demo. Apple brands the Vision Pro as a “spatial computing” device rather than an entertainment device, intended for everything from answering emails to browsing the internet. You navigate with your eyes and scroll by pinching your fingers or moving your hand, as if conducting an invisible orchestra.

Despite all the touted use cases, its most impressive feature is immersive video; everything else feels like a bit of a gimmick. Do I want to see computer apps floating in front of my eyes? Not really! But when you watch a movie, you feel drawn into the content. If money were no object, I would have bought a headset right away just because watching movies is so much fun.

And that’s basically the scope of the market for the Vision Pro at this point: people for whom money is no object. The headset is impressive, but it’s still not very comfortable (even drinking coffee while wearing it is a feat) and it’s not worth the price. The technology is still in its infancy and will take some time to spread into the broader culture.

But while it’s hard to say when spatial computing will become as ubiquitous as smartphones are today, the question seems to be when it will be widely adopted, not if. There is no denying that we are moving toward a world where “real life” and digital technology seamlessly merge. The internet is moving from our screens to the world around us. That raises serious questions about how we perceive the world and think about reality. Big tech companies are rushing to get the technology out there, but it’s unclear how worried they are about the consequences.

Some of these outcomes are easy to predict. In a few weeks’ time, you’ll almost certainly hear about a car accident caused by someone using a headset while driving. There are already plenty of videos of people using the Vision Pro while out and about, including in cars. (Incidentally, while Apple advises people not to use the headset while driving, it has no guardrails to prevent people behind the wheel from using the technology.)

And without some radical intervention, it seems depressingly inevitable that these headsets will soon take online harassment to a whole other level. Over the years, there have been multiple reports of people being harassed and even “raped” within the Metaverse. The highly immersive nature of virtual reality makes the experience feel frighteningly real. With the lines between real life and the digital world blurring to the point of being almost indistinguishable, is there a meaningful difference between attacks online and attacks in real life?

Even scarier, and more broadly, is the question of how spatial computing will change what we think of as reality. Researchers at Stanford University and the University of Michigan recently studied the Vision Pro and other “pass-through” headsets (“pass-through” is the technical term for the feature that brings VR content into a real-world environment, letting you see what’s around you while using the device). They emerged with some stark warnings about how this technology could rewire our brains and interfere with social connections.

These headsets essentially give each of us our own private world, rewriting the concept of shared reality. Because you see the world through cameras, your environment can be edited: wear a headset to the store, and homeless people might vanish from your view while the sky looks a little brighter.

“What we’re going to experience is that when you use these headsets in public, you lose that common ground,” Jeremy Bailenson, director of Stanford University’s Virtual Human Interaction Lab and lead researcher on the study, recently told Business Insider. “People will be physically in the same place and visually experience different versions of the world at the same time. We’re going to lose what we have in common.”

What’s scary isn’t just that our perception of reality might change; it’s that a small number of companies will have enormous control over how we see the world. Consider how much influence big tech companies already have over the content we watch, then multiply that many times over. Do you think deepfakes are scary? Wait until they look even more realistic.

We are seeing a global rise in authoritarianism. If we’re not careful, this kind of technology will significantly accelerate it. The ability to draw people into another world, numb them with entertainment, and shape how they see reality? That is an authoritarian’s dream. We are entering an era in which people can be coaxed and manipulated like never before. Forget bread and circuses; up-and-coming fascists now have doughnuts and Vision Pros.


Source: www.theguardian.com

Years of Study and a Grand Vision to Merge Computers and Brains

Elon Musk’s announcement on Monday caught the attention of a small community of scientists who work with the body’s nervous system to treat disorders and conditions.

Robert Gaunt, an associate professor at the University of Pittsburgh’s School of Physical Medicine and Rehabilitation, said, “Inserting a device into a human body is not an easy task. But without neuroscience research and decades of demonstrated capabilities, I don’t think even Elon Musk would have taken on a project like this.”

Musk tweeted: “The first human received an implant from @Neuralink yesterday and is recovering well. Initial results show promising neuron spike detection.” However, many scientists are cautious about the company’s clinical trials and note that little information has been made public.

Neuralink won FDA approval to conduct its first human clinical study last year, and the company is developing brain implants that allow people, including severely paralyzed patients, to control computers with their thoughts.

Although it’s too early to know if Neuralink’s implants will work in humans, Gaunt said the company’s announcement is an “exciting development.” His own research focuses on restoring motor control and function using brain-computer interfaces.

In 2004, a small device known as the Utah array was implanted in a human for the first time, allowing a paralyzed man to control a computer cursor with neural signals, according to a report from the University of Utah. Since then, scientists have demonstrated how brain-computer interfaces can help people control robotic devices, stimulate muscles, and decode handwriting and speech.

Musk said the clinical trials will aim to treat people with paralysis, including paraplegia. However, many scientists believe enhancing human performance through brain-controlled devices is far in the future and, for now, not very realistic.

Still, Neuralink’s clinical trials represent a major advance for the fields of neuroscience and bioengineering. Funding basic science research is key to private companies advancing commercially viable products, says Gaunt.

Source: www.nbcnews.com

The Limitations of Apple’s Vision Pro Headset: Absence of Netflix, Spotify, and YouTube Integration

It’s important to have friends who come to your birthday parties, offer support during tough times, and allocate resources to develop apps for emerging virtual reality platforms despite limited direct benefits. It may be tempting to believe that a $30 billion cash reserve and a product line generating over $200 billion annually are sufficient. However, Apple is finding that money cannot buy everything.

Pre-orders for Apple’s Vision Pro headset, a $3,500 “spatial computing” platform and CEO Tim Cook’s vision of Apple’s future, opened last week. Despite Apple’s enthusiasm, quiet resistance from major app developers has overshadowed the launch.

According to a report from Bloomberg (£), Netflix has opted not to design a Vision Pro app or support existing iPad apps on the platform, instead instructing users to access their content through a web browser.

This decision is notable given the competition between Netflix and Apple in the streaming market.

Although the initial weekend release of Vision Pro saw an estimated 160,000-180,000 units sold, this pales in comparison to Netflix’s 250 million paying subscribers. Therefore, Netflix’s reluctance to invest resources in an app for the Vision Pro is understandable, as app development is only worthwhile if it can attract new customers or retain existing ones.

Despite Apple’s promotion of the Vision Pro as the most immersive way to watch TV, Netflix has similarly abandoned its app for MetaQuest, demonstrating a pattern of resistance to immersive platforms.

Due to these decisions, Vision Pro users will be limited to watching Netflix through the web, losing the ability to access offline viewing, a key selling point of the headset.

Furthermore, YouTube and Spotify have also opted not to release new apps for the Vision Pro, indicating a lack of enthusiasm from major content providers for the platform.

In a related story, Apple has recently allowed developers to bypass its payment system, providing them with an alternative to the high fees associated with in-app purchases. This shift may reflect a broader resistance among developers to Apple’s monopoly over economic activity in their app ecosystem.

The reluctance of major content providers to invest in apps for the Vision Pro may indicate a broader skepticism among developers about the benefits of supporting Apple’s latest venture. This trend may signal a greater movement within the developer community to challenge Apple’s control over app development and monetization.

Source: www.theguardian.com

Apple Vision Pro Expected to Launch between Late January and Early February

We’ve known about the Vision Pro for more than six months now (not to mention it’s been rumored for years), but Apple’s first “spatial computing” device remains one of the biggest question marks in consumer electronics heading into the new year. The $3,499 headset was given an “early 2024” release date when it was announced at WWDC in June, but the company hasn’t provided further specifics since then.

Apple oracle Ming-Chi Kuo provided an early holiday gift, narrowing the system’s release date down to “late January to early February.” According to the analyst, the first Vision Pro units will be shipped to Apple within about a month, bringing total shipments this year to around 500,000 units.

The company’s exact target for this year remains an open question. About a month after the device was announced, it was reported that Apple had cut its forecast from around 1 million units to “fewer than 400,000 units.”

Even the latest figure of 500,000 is small for a company of Apple’s enormous size and influence. Keep in mind that the company should ship more than 200 million iPhones this calendar year.

But Vision Pro is widely considered to be Tim Cook’s biggest challenge in his 12 years as CEO. Not only is this an entirely new category and form factor for the company, but it’s also an exorbitant price point, even for customers accustomed to paying extra for Apple products. Add to that the fact that VR has not lived up to expectations for decades, and we have a big uphill battle ahead.

Kuo calls Vision Pro “Apple’s most important product in 2024.” That’s a tough statement to argue with, given years of speculation and all the time and money the company has undoubtedly poured into the headset.

Source: techcrunch.com