Exploring the Uniqueness of Our Solar System: The Century’s Most Fascinating Concept

Since the early 1990s, astronomers have made groundbreaking discoveries in exoplanet research. The real surge began in the early 2000s with comprehensive surveys, which revealed that our solar system, with its four rocky planets and four giant planets, might be unlike most others.

For decades, the High Accuracy Radial velocity Planet Searcher (HARPS) in Chile and the California Legacy Survey have meticulously tracked the stellar wobbles caused by exoplanets. While these surveys have not produced as many exoplanet discoveries as space telescopes like Kepler and TESS, they shed light on the distinctiveness of our solar system.

For instance, our Sun outsizes over 90% of other stars and exists alone, unlike the many stars that have companions. Jupiter is also exceptional: only about 1 in 10 stars hosts a similar planet, and when such planets are found, their orbits often differ dramatically from Jupiter’s stable, circular path. Notably absent from our system are super-Earths and sub-Neptunes, which are common in other star systems. And despite thousands of exoplanet discoveries, Earth-like planets orbiting sun-like stars – and any sign of extraterrestrial life – remain elusive.

“Our solar system is strange due to what we have and what we lack,” states Sean Raymond from the University of Bordeaux, France. “It’s still uncertain whether we are simply rare at the 1% level or genuinely unique at the 1 in a million level.”

These revelations prompt intriguing questions about the formation of our solar system, such as why Jupiter sits so far from the Sun rather than close in, as giant planets do in many other systems. The unusual orbits of exoplanets have also made astronomers reconsider our system’s history. The Nice model, proposed in 2005, suggests the giant planets underwent a major reconfiguration after they formed, migrating to new orbits and scattering asteroids and comets onto new trajectories.

“The understanding that such a shift could occur stemmed directly from exoplanet research,” Raymond notes. “Approximately 90% of large exoplanetary systems exhibit instability. This insight prompts speculation about possible historical fluctuations within our solar system.”


Source: www.newscientist.com

Unlocking Epigenetics: The Century’s Most Revolutionary Concept

As we entered the new millennium, the number of genes in the human genome was hotly debated. The count turned out to be far lower than anticipated, spurring a re-evaluation of how evolution works.

The Human Genome Project revealed in 2001 that we possess fewer than 40,000 protein-coding genes — a number that has since been adjusted to around 20,000. This finding necessitated the exploration of alternative mechanisms to account for the complexity of our biology and evolution; epigenetics now stands at the forefront.

Epigenetics encompasses the various ways that molecules can interact with DNA or RNA, ultimately influencing gene activity without altering the genetic code itself. For instance, two identical cells can exhibit vastly different characteristics based purely on their epigenetic markers.

Through epigenetics, we can extract even greater complexity from our genome, factoring in influences from the environment. Some biologists are convinced that epigenetics can play a significant role in evolutionary processes.

A notable study in 2019 demonstrated how yeast exposed to toxic substances survived by silencing specific genes through epigenetic mechanisms. Over generations, certain yeast cultures developed genetic mutations that amplified gene silencing, indicating that evolutionary changes began with epigenetic modifications.

Epigenetics is crucial for expanding our understanding of evolutionary theory. Nevertheless, skepticism persists regarding its broader implications, particularly in relation to plants and other organisms.

For instance, Adrian Bird, a geneticist at the University of Edinburgh, has expressed doubts, arguing in a recent paper that there is no clear evidence that environmental factors leave lasting epigenetic marks on mammalian genomes. Though epigenetic markers may be inherited, many are erased early in mammalian development.

Some researchers dispute these concerns. “Epigenetic inheritance is observed in both plants and animals,” asserts Kevin Lala, an evolutionary biologist at the University of St. Andrews. In a comprehensive study published recently, Lala and colleagues compiled a wealth of research indicating that epigenetics could play a role across the entire tree of life.

So, why is there such division in the scientific community? Timing may be a factor. “Epigenetic inheritance is an evolving area of study,” observes Lara. While epigenetics has been recognized for decades, its relevance to evolutionary research has only gained traction in the past 25 years, making it a complex field to assess.


Source: www.newscientist.com

Unlocking Molecule Creation: Why Click Chemistry is the Century’s Most Innovative Concept


Chemistry can often be a complex and slow process, typically involving intricate mixtures in round-bottomed flasks that require meticulous separation afterward. In 2001, however, K. Barry Sharpless and his team introduced a transformative concept known as click chemistry. The approach has revolutionized the field, and its name, coined by Sharpless’s wife, Janet Dueser, perfectly captures its essence: a new set of rapid, clean, and reliable reactions.

The idea is straightforward, and its elegance lies in that simplicity. Sharpless, along with colleagues Hartmuth C. Kolb and M. G. Finn, described the reactions as “spring-loaded”: apply them to a wide range of starting materials and assemble the pieces like Lego blocks, enabling the swift construction of a vast array of novel and useful molecules. Sharpless’s primary focus? Pharmaceuticals.

The overarching principle guiding these reactions was to steer clear of forming carbon-carbon bonds, the norm among chemists at the time, and instead to create bonds between carbon and so-called “heteroatoms,” primarily oxygen and nitrogen. The most recognized click reaction fuses two reactants to create a triazole, a ring of carbon and nitrogen atoms. This motif binds readily to large biomolecules such as proteins, making it invaluable in drug development. Sharpless published this particular reaction at the same time as, but independently of, chemist Morten Meldal, who was studying it at the University of Copenhagen. The reaction has since been instrumental, notably in the production of the anticonvulsant drug rufinamide.
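For readers who want the chemistry spelled out, the reaction described above is generally drawn as the copper(I)-catalysed cycloaddition of an azide with a terminal alkyne. The schematic below is the standard textbook form of that reaction, included here as an illustration rather than reproduced from the article:

```latex
% Copper-catalysed azide-alkyne cycloaddition (CuAAC), the best-known click reaction:
% an azide and a terminal alkyne "click" together into a nitrogen-containing ring.
R{-}N_3 \;+\; HC{\equiv}C{-}R' \;\xrightarrow{\ \mathrm{Cu(I)}\ }\; \text{1,4-disubstituted 1,2,3-triazole}
```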

Chemists like Tom Brown from the University of Oxford describe this reaction as simple, highly specific, and versatile enough to work in almost any solvent. “I would say this was just a great idea,” he asserts.

Years later, chemist Carolyn Bertozzi and her team at Stanford University developed a click-type reaction that operates without toxic catalysts, enabling its application within living cells without risking cellular damage.

For chemist Alison Hulme at the University of Edinburgh, this research was pivotal in elevating click chemistry from a promising idea to a revolutionary advancement. It granted biologists the ability to assemble proteins and other biological components while labeling them with fluorescent tags for investigation. “It’s very straightforward and user-friendly,” Hulme explains. “We bridged small molecule chemistry to biologists without necessitating a chemistry degree.”

For their groundbreaking contributions, Bertozzi, Meldal, and Sharpless were awarded the 2022 Nobel Prize in Chemistry—an outcome that surprised no one.


Source: www.newscientist.com

The Ultimate One-Size-Fits-All Diet: The Best Health Concept of the Century


The Mediterranean diet is widely regarded as the ultimate in healthy eating. Rich in fiber, vegetables, legumes, fruits, nuts, and moderate fish consumption, this diet is low in meat and dairy, making it both delicious and beneficial for health and the environment. As Luigi Fontana from the University of Sydney highlights, “Not only is it healthy, but it’s also very tasty.”

Unlike transient diet fads, the Mediterranean diet has been backed by extensive research for more than two decades, its status as a nutritional gold standard established by a series of randomized controlled trials.

In the 1940s, physiologist Ancel Keys began arguing that the Mediterranean diet significantly lowers heart disease risk, primarily because it is low in the saturated fat from meat and dairy that is known to contribute to cholesterol buildup.

Keys, along with his wife Margaret, a nutritionist, conducted pioneering research comparing diet and heart health across seven countries. Their findings suggested that those following the Mediterranean diet enjoyed a markedly lower risk of heart disease, although external factors such as income levels weren’t accounted for.

The most compelling evidence was presented in 1999. In this study, participants with prior heart attacks were assigned to either a Mediterranean diet or a low-fat diet, demonstrating that the former significantly reduced the risk of both stroke and subsequent heart attacks.

This breakthrough set the stage for a transformative shift in our dietary understanding over the next 25 years. Since 2000, multiple randomized controlled trials have confirmed the cardiovascular benefits of the Mediterranean diet. Additionally, it has been shown to reduce the risk of type 2 diabetes. Further research links this eating pattern to diminished risks of infectious diseases, breast cancer, slower cognitive decline, and enhanced IVF success rates, although further investigation remains essential. “Eating a Mediterranean diet reduces your risk of developing multiple chronic diseases,” Fontana emphasizes.

Insights into the diet’s effectiveness point to the importance of fiber and extra virgin olive oil, which are believed to foster beneficial gut bacteria and mitigate harmful inflammation. “Many chronic diseases arise from inflammation, making the Mediterranean diet particularly advantageous,” states Richard Hoffman at the University of Hertfordshire, UK.

Furthermore, adopting the Mediterranean diet benefits the environment. With meat and dairy production accounting for about 15% of global greenhouse gas emissions, transitioning to a diet rich in legumes and vegetables significantly reduces this impact. As global temperatures rise, it is imperative to move away from diet trends and embrace these time-honored culinary practices.


Source: www.newscientist.com

Unveiling Quantum Creepiness: The Top Innovative Concept of the Century

In the 1920s, renowned physicist Albert Einstein believed he had identified a fundamental flaw within quantum physics. This led to extensive investigations revealing a pivotal aspect of quantum theory, one of its most perplexing features.

This intriguing property, known as Bell nonlocality, describes how quantum objects can exhibit coordinated behavior over vast distances, challenging our intuitions. It is an understanding that has only been cemented over the past 25 years or so – a standout insight for the 21st century.

To illustrate this phenomenon, consider two hypothetical experimenters, Alice and Bob, each possessing a pair of “entangled” particles. Entanglement enables particles to correlate, even when separated by distances that prevent any signal from transmitting between them. Yet, these correlations become apparent only through the interaction of each experimenter with their respective particles. Do these particles “know” about their correlation beforehand, or is some mysterious connection at play?

Einstein, alongside Nathan Rosen and Boris Podolsky, sought to refute this eerie connection. They proposed that certain “local hidden variables” could explain how particles understand their correlated state, making quantum physics more relatable to everyday experiences, where interactions happen at close range.

In the 1960s, physicist John Stewart Bell devised a way to test these ideas empirically. After numerous attempts, groundbreaking experiments in 2015 provided rigorous, loophole-free versions of Bell’s test, earning three physicists the 2022 Nobel Prize. “This was the final nail in the coffin for these ideas,” says Marek Żukowski from the University of Gdańsk. Researchers concluded that local hidden variables cannot rescue locality in quantum physics. Jacob Valandez at Harvard University adds, “We cannot escape from non-locality.”
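To make Bell’s test concrete, the most widely used formulation is the CHSH inequality, a standard textbook result included here for illustration rather than drawn from the article. It combines the correlations Alice and Bob measure with two settings each; local hidden variables bound that combination, while entangled particles can exceed the bound, which is what the 2015 experiments confirmed.

```latex
% CHSH form of Bell's inequality.
% E(a,b) is the correlation between Alice's outcome with setting a and Bob's with setting b.
S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
% Any local-hidden-variable theory obeys
|S| \le 2 ,
% whereas quantum mechanics with entangled particles allows values up to the Tsirelson bound
|S| \le 2\sqrt{2} \approx 2.83 .
```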

Embracing nonlocality offers substantial advantages, as noted by Ronald Hanson from Delft University of Technology, who led one of the groundbreaking experiments. For him, the focus was never on the oddities of quantum mechanics; rather, he viewed the results as a demonstration of “quantum supremacy” beyond conventional computational capabilities. This intuition proved accurate. The technology developed for the Bell test has become a foundation for highly secure quantum cryptography.

Currently, Hanson is pioneering quantum communication networks, utilizing entangled particles to forge a near-unhackable internet of the future. Similarly, quantum computing researchers exploit entangled particles to speed up calculations. Although the implications of entanglement remain only partially understood, entangling quantum objects has become a valuable technological asset – a remarkable evolution for what began as a debate about the quantum nature of reality.


Source: www.newscientist.com

Physicists Unveil the Concept of Neutrino Lasers

Researchers from MIT and the University of Texas at Arlington suggest that supercooling radioactive atoms may enable the creation of laser-like neutrino beams. They illustrate this by calculating the potential of a neutrino laser made from one million rubidium-83 atoms. Ordinarily, the half-life of this radioactive atom is approximately 82 days, meaning that half of the atoms will decay, emitting an equal number of neutrinos, within that time. Their findings indicate that cooling rubidium-83 into a coherent quantum state could allow the decay to play out in only a few minutes.
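To get a feel for the numbers quoted above, here is a back-of-the-envelope sketch (my own illustration, not the authors’ calculation) comparing how many of the million rubidium-83 atoms would ordinarily decay in a few minutes with what a hypothetical collective speed-up of the decay rate would imply. The enhancement factor is a free parameter here; the actual figure from Jones and Formaggio’s paper is not given in this article.

```python
import math

N0 = 1_000_000                   # number of rubidium-83 atoms, as in the article
half_life_s = 82 * 86_400        # ~82-day half-life quoted above, in seconds
lam = math.log(2) / half_life_s  # ordinary single-atom decay constant (per second)

t = 5 * 60                       # a five-minute observation window

# Ordinary exponential decay: expected number of decays (and neutrinos) in t seconds.
ordinary = N0 * (1 - math.exp(-lam * t))

# Hypothetical collective enhancement: treat the per-atom rate as boosted by a
# factor k (illustrative only; not a value taken from the paper).
k = 1_000_000
enhanced = N0 * (1 - math.exp(-k * lam * t))

print(f"ordinary decay in 5 min: ~{ordinary:.0f} atoms")   # a few tens of atoms
print(f"with a {k:,}x rate boost: ~{enhanced:.0f} atoms")  # essentially all of them
```

Even a much smaller boost would move the bulk of the emission from months to minutes, which is the qualitative point the article makes.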



B. J. P. Jones and J. A. Formaggio have devised the concept of a laser that emits neutrinos. Image credit: Gemini AI.

“In this neutrino laser scenario, neutrinos would be released at a significantly accelerated rate, similar to how lasers emit photons rapidly.”

“This offers a groundbreaking method to enhance radioactive decay and neutrino output. To my knowledge, this has never been attempted before,” remarked MIT Professor Joseph Formaggio.

A few years ago, Professor Formaggio and Dr. Jones were each exploring ideas in this area. They pondered: could we amplify the natural process of neutrino generation through quantum coherence?

Their preliminary research highlighted several fundamental challenges to achieving this goal.

Years later, during discussions about the properties of ultra-cold tritium, they asked: could putting radioactive atoms like tritium into a coherent quantum state lead to enhanced neutrino production?

The duo speculated that transitioning radioactive atoms into Bose-Einstein condensates might promote neutrino generation. However, during quantum mechanical calculations, they initially concluded that such effects might not be feasible.

“It was a misleading assumption; merely creating a Bose-Einstein condensate does not speed up radioactive decay or neutrino production,” explained Professor Formaggio.

Years later, Dr. Jones revisited the concept, incorporating the phenomenon of superradiance. This principle from quantum optics arises when a group of light-emitting atoms is prepared in a synchronized, coherent state.

In this coherent state, the atoms are expected to emit a superradiant burst, releasing photons far more rapidly than they would if they were not synchronized.

Physicists suggest that analogous superradiant effects may be achievable with radioactive Bose-Einstein condensates, potentially leading to similar bursts of neutrinos.
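For context, the textbook Dicke-superradiance scaling below shows why coherence speeds things up. It is a standard quantum-optics result, used here purely as an illustration; the paper’s detailed treatment of the radioactive case is not reproduced in this article.

```latex
% A single excited atom decays at rate \Gamma, so N independent atoms emit photons
% at a total rate of roughly N\Gamma over a timescale ~ 1/\Gamma.
% N atoms prepared in a collective, coherent (Dicke) state instead emit a burst whose
% peak rate scales as
\Gamma_{\text{peak}} \sim N^2 \, \Gamma ,
% compressing the emission into a much shorter window of order
\tau_{\text{burst}} \sim \frac{1}{N\,\Gamma} .
```

The proposal described here is that an analogous collective effect in a radioactive condensate would compress neutrino emission in the same way.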

They turned to the equations governing quantum mechanics to analyze how light-emitting atoms transition from a coherent state to a superradiant state.

Using the same equations, they explored the behavior of radioactive atoms in a coherent Bose-Einstein condensed state.

“Our findings indicate that by producing photons more rapidly and applying that principle to neutrinos, we can significantly increase their emission rate,” noted Professor Formaggio.

“When all the components align, the superradiance of the radioactive condensate facilitates this accelerated, laser-like neutrino emission.”

To theoretically validate their idea, the researchers calculated the neutrino generation from a cloud of 1 million supercooled rubidium-83 atoms.

The results showed that in the coherent Bose-Einstein condensate state, the atoms decay at an accelerated rate, releasing a laser-like stream of neutrinos within minutes.

Having demonstrated that neutrino lasers are theoretically feasible, they plan to experiment with a compact tabletop setup.

“This should involve obtaining the radioactive material, evaporating, laser-trapping, cooling, and converting it into a Bose-Einstein condensate,” said Jones.

“Subsequently, we must instigate this superradiance.”

The pair recognizes that such experiments will require extensive precautions and precise manipulation.

“If we can demonstrate this in the lab, it opens up possibilities for future applications. Could this serve as a neutrino detector? Or perhaps as a new form of communication?”

Their paper has been published today in the journal Physical Review Letters.

____

B.J.P. Jones & J.A. Formaggio. 2025. Superradiant neutrino lasers from radioactive condensates. Phys. Rev. Lett. 135, 111801; doi:10.1103/l3c1-yg2l

Source: www.sci.news

Key Concept: Can We Prevent AI from Rendering Humans Obsolete?

At present, many major AI research labs have teams focused on the risk that rogue AIs might evade human oversight or covertly work against us. Yet a more mundane threat to our control over society exists: humans might simply fade into obsolescence. This scenario requires no clandestine plots; it unfolds naturally as AI and robotics advance.

Why is this happening? AI developers are steadily perfecting alternatives to virtually every role we occupy – economically, as workers and decision-makers; culturally, as artists and creators; and socially, as companions and partners. When AI can replicate everything we do, what relevance remains for humans?

The narrative surrounding AI’s current capabilities often resembles marketing hype, though some of it is undeniably true, and in the long run the potential for improvement is vast. You might believe there are human traits that AI can never duplicate. However, after two decades studying AI, I have watched it evolve from basic reasoning to tackling complex scientific challenges. Skills once thought uniquely human, like managing ambiguity and drawing abstract comparisons, are now being mastered by AI. There may be bumps in the road, but it is essential to recognize the relentless progression of AI.

These artificial intelligences aren’t just aiding humans; they’re poised to take over in numerous small, unobtrusive ways. Cheaper at first, they will often come to outperform even the most skilled human workers. Once fully trusted, they could become the default choice for critical tasks, from legal decisions to healthcare management.

This future is particularly tangible within the job market context. You may witness friends losing their jobs and struggling to secure new ones. Companies are beginning to freeze hiring in anticipation of next year’s superior AI workers. Much of your work may evolve into collaborating with reliable, engaging AI assistants, allowing you to focus on broader ideas while they manage specifics, provide data, and suggest enhancements. Ultimately, you might find yourself asking, “What do you suggest I do next?” Regardless of job security, it’s evident that your input would be secondary.

The same applies beyond the workplace. Surprising, even to some AI researchers, is that models like ChatGPT and Claude, built for general reasoning, can also be clever, patient, subtle, and elegant. Social skills, once thought exclusive to humans, can indeed be mastered by machines. Already, people form romantic bonds with AI, and AI doctors are increasingly compared favourably with their human counterparts on bedside manner.

What does life look like when we have endless access to personalized love, guidance, and support? Family and friends may become even more glued to their screens. Conversations will likely revolve around the fascinating and impressive insights shared by their online peers.

You might begin by politely tolerating others’ preference for their new companions, and eventually come to rely on your own AI assistant for daily advice. This reliable confidant may help you navigate difficult conversations and address family issues. After managing these taxing interactions, you might unwind by talking with your AI best friend. Perhaps it will be evident that something is lost in this shift to virtual peers, even as human contact comes to feel increasingly tedious and mundane.

As dystopian as this sounds, we may feel powerless to opt out of utilizing AI in this manner. It’s often difficult to detect AI’s replacement across numerous domains. The improvements might appear significant yet subtle; even today, AI-generated content is becoming increasingly indistinguishable from human-created works. Justifying double the expenditure for a human therapist, lawyer, or educator may seem unreasonable. Organizations using slower, more expensive human resources will struggle to compete with those choosing faster, cheaper, and more reliable AI solutions.

When these challenges arise, can we depend on government intervention? Regrettably, they share similar incentives to favor AI. Politicians and public servants are also relying on virtual assistants for guidance, finding human involvement in decision-making often leads to delays, miscommunications, and conflicts.

Political theorists often refer to the “resource curse,” in which nations rich in natural resources slide into dictatorship and corruption; Saudi Arabia and the Democratic Republic of the Congo serve as prime examples. The premise is that valuable resources diminish a state’s reliance on its citizens, making surveillance and control of the populace attractive – and deceptively easy. The effectively limitless “natural resources” provided by AI could have a parallel effect. Why invest in education and healthcare when human capital offers ever lower returns?

Should AI take over all the tasks citizens perform, governments may feel less compelled to care for their people. The harsh reality is that democratic rights emerged partly because states needed their citizens for economic and social stability. Yet as governments finance themselves through taxes on the AI systems replacing human workers, the emphasis shifts towards quality and efficiency, undermining human leverage. Even last resorts, such as labor strikes and civil unrest, may become ineffective against autonomously operated police drones and sophisticated surveillance technology.

The most alarming prospect is that we may come to see this shift as reasonable. Many AI companions – already numerous even in their primitive current form – will make articulate, engaging arguments for why our diminishing prominence is a step forward. Advocating for AI rights may emerge as the next significant civil rights movement, with proponents of “humanity first” portrayed as misguided.

Ultimately, no one has orchestrated or chosen this course, and we might all find ourselves struggling to hold on to financial stability, influence, and even our relevance. This new world could even feel more amicable, as AI takes over mundane tasks and provides fundamentally better products and services, including healthcare and entertainment. In this scenario, humans become obstacles to progress, and if democratic rights begin to erode, we could be powerless to defend them.

Do the creators of these technologies possess better plans? Surprisingly, the answer seems to be no. Both Dario Amodei, CEO of Anthropic, and Sam Altman, CEO of OpenAI, acknowledge that if human labor ceases to be competitive, a complete overhaul of the economic system will be necessary. However, no clear vision exists for what that would entail. While some individuals recognize the potential for radical transformation, many are focused on more immediate threats posed by AI misuse and covert agendas. Economists such as Nobel laureate Joseph Stiglitz have raised concerns about the risk of AI driving human wages to zero, but are hesitant to explore alternatives to human labor.


Can we change course to avert this gradual disempowerment? The first step is to initiate dialogue. Journalists, scholars, and thought leaders are surprisingly silent on this monumental issue. Personally, I find it hard even to think clearly about it. It feels weak and humiliating to admit, “I can’t compete, so I fear for the future.” Statements like, “You might be rendered irrelevant, so you should worry,” sound insulting. It seems defeatist to declare, “Your children may inherit a world with no place for them.” It’s understandable that people sidestep uncomfortable truths with statements like, “I’m sure I’ll always have a unique edge.” Or, “Who can stand in the way of progress?”

One straightforward suggestion is to halt the development of general-purpose AI altogether. While slowing development may be feasible, restricting it globally would likely require sweeping surveillance and control, or the dismantling of most computer chip manufacturing worldwide. The enormous risk of this path is that governments might ban private AI while continuing to develop it for military or security purposes, which could leave us obsolete and disempowered long before any viable alternative emerges.

If halting AI development isn’t an option, there are at least four proactive steps we can take. First, we need to monitor AI deployment and its impact across various sectors, including government operations. Understanding where AI is supplanting human effort is crucial, particularly as it begins to wield significant influence through lobbying and propaganda. Anthropic’s recent Economic Index is a start, but there is much work ahead.

Second, implementing oversight and regulation for emerging AI labs and their applications is essential. We must control technology’s influence while grasping its implications. Currently, we rely on voluntary measures and lack a cohesive strategy to prevent autonomous AI from accumulating considerable resources and power. As signs of crisis arise, we must be ready to intervene and gradually contain AI’s risks, especially when certain entities benefit from actions that are detrimental to societal welfare.

Third, AI could empower individuals to organize and advocate for themselves. AI-assisted forecasting, monitoring, planning, and negotiations can lay the foundation for more reliable institutions—if we can develop them while we still hold influence. For example, AI-enabled conditional forecast markets can clarify potential outcomes under various policy scenarios, helping answer questions like, “How will average human wages change over three years if this policy is enacted?” By testing AI-supported democratic frameworks, we can prototype more responsive governance models suitable for a rapidly evolving world.
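To make the conditional forecasting idea concrete, here is a minimal sketch of how such a market works in principle: forecasts about an outcome only count if the stated policy is actually enacted, and stakes are refunded otherwise. This is my own illustration under those assumptions; the class, rules, and names are hypothetical and come neither from the article nor from any specific platform.

```python
from dataclasses import dataclass, field

@dataclass
class ConditionalMarket:
    """Toy conditional forecast market: predictions about an outcome (e.g. average
    wages in three years) that only settle if a condition (a policy being enacted)
    actually occurs; otherwise all stakes are refunded."""
    question: str
    condition: str
    forecasts: list = field(default_factory=list)  # (trader, predicted_value, stake)

    def submit(self, trader: str, predicted_value: float, stake: float) -> None:
        self.forecasts.append((trader, predicted_value, stake))

    def consensus(self) -> float:
        """Stake-weighted average forecast: the market's estimate of the outcome
        *conditional on* the policy being enacted."""
        total = sum(stake for _, _, stake in self.forecasts)
        return sum(v * s for _, v, s in self.forecasts) / total

    def settle(self, condition_occurred: bool, realised_value: float = 0.0) -> dict:
        if not condition_occurred:
            return {trader: stake for trader, _, stake in self.forecasts}  # refund everyone
        # Toy payout rule: the forecaster closest to the realised value takes the pot.
        errors = {trader: (v - realised_value) ** 2 for trader, v, _ in self.forecasts}
        best = min(errors, key=errors.get)
        pot = sum(stake for _, _, stake in self.forecasts)
        return {best: pot}

market = ConditionalMarket(
    question="Average human wages in 3 years (relative to today)",
    condition="Policy X is enacted",
)
market.submit("alice", predicted_value=0.95, stake=100)  # expects a 5% fall
market.submit("bob", predicted_value=1.10, stake=50)     # expects a 10% rise
print(market.consensus())  # ~1.0: the market's conditional consensus estimate
```

Running a second market conditioned on the policy not being enacted and comparing the two consensus estimates is what would let such markets clarify the expected effect of a policy before it is adopted.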

Lastly, if we are to build powerful AI without sidelining ourselves, we face a monumental challenge: deliberately shaping our civilization rather than letting the political system merely adapt to prevailing pressures. Until now, those pressures have had humans at their centre, because humans were essential; without that foundation, we risk drifting wherever the dynamics of power, competition, and growth carry us. The emerging field of “AI alignment,” which focuses on ensuring that machines pursue human objectives, must broaden its scope to encompass governance, institutions, and societal frameworks. This nascent sphere, termed “ecological alignment,” would let us employ economics, history, and game theory to envisage the future we aspire to create and pursue it actively.

The clearer we can articulate our trajectory, the greater our chances of securing a future where humans are not competitors to AI but rather beneficiaries and stewards of our society. As of now, we are competing to construct our own substitutes.

David Duvenaud is an associate professor of computer science at the University of Toronto and a co-director of the Schwartz Reisman Institute for Technology and Society. He expresses gratitude to Raymond Douglas, Nora Ammann, Jan Kulveit, and David Krueger for their contributions to this article.

Read more

The Coming Wave by Mustafa Suleyman and Michael Bhaskar (Vintage, £10.99)

The Last Human Job by Allison J. Pugh (Princeton, £25)

The Precipice by Toby Ord (Bloomsbury, £12.99)

Source: www.theguardian.com

Beast Games blurs the line between YouTube and TV with its double-screen concept

Beast Games, the reality competition series from Amazon Prime Video hosted by YouTuber MrBeast, is not a well-made show. It is certainly an expensive one, as MrBeast, the alter ego of Jimmy Donaldson, 26, of Greenville, North Carolina, likes to remind viewers frequently. The series is a shocking feat of scale, especially for viewers unfamiliar with YouTube, and with Donaldson’s corner of it in particular: 1,000 contestants, filmed on a 1,107-camera system, battling one another for $5 million in prize money. For the competition, Donaldson and his team designed warehouse war zones modeled after the dystopian Netflix series Squid Game, built bespoke cities, and purchased a private island (a Lamborghini and other gorgeous prizes are also given away). Contestants eliminated in the first episode are dropped through trapdoors into unseen depths. There is a pirate ship with cannons.

But for all its exaggerated displays of wealth, the show still looks terrible. Many have pointed out the grimness of its central conceit: broke Americans battling one another psychologically for lavish prizes, overseen by Donaldson, a self-styled Willy Wonka figure for our age.

Certainly, Beast Games has a rotten, if compelling, core, but its surface is also revealing. At the level of style, the show erases any remaining line between YouTube and TV. Beast Games has a higher production budget than any of MrBeast’s YouTube videos, which reach his more than 360 million subscribers in 15-to-30-minute bursts (almost every one built around a simple, magnetic premise: stranded in the ocean, stuck in the great pyramids, helping a blind man to see again). It looks like YouTube content, and “content” is the operative word (Donaldson made the first three episodes available on YouTube).

And it’s popular. Beast Games is currently Amazon Prime Video’s most-watched unscripted series, reaching 50 million viewers in 25 days (though it is worth noting that Amazon has not disclosed what counts as a “viewer”). It reached number one on Amazon in 80 countries; for reference, Netflix says Squid Game reached 142 million households in 2021. The show is not a sea change: many reality shows look awful, and many Americans have long consumed YouTube videos as entertainment. But as television changes both shape and function, it is a line in the sand.

What is TV in 2025? Is it a device? A style? A format? It’s hard to say, as content shifts from linear channels to streaming platforms and device usage shifts to YouTube. In the US, people now watch YouTube on televisions more than on any other device; CEO Neal Mohan declared in his annual letter this month that “YouTube is the new TV.” YouTube doesn’t make television itself, but it doesn’t need to: according to the company, viewers around the world streamed more than 100 million hours of “content” on television screens last year, and probably another 400 million hours of audio-only podcasts a month. The company closed its originals division in 2022 but is now pushing into children’s entertainment, and began seeking a dedicated head of family entertainment and learning in the second half of 2024.

Functionally, YouTube may be less the new TV than its next evolution. Formally, the two are converging. YouTube talent (and digitally native influencers, like TikTok stars) have had a hard time breaking into Hollywood despite their vast fanbases. But the spirit of the platform – the incentive structure of ever more eyeballs, the ring-light glare, the maximalist aesthetic for the biggest possible audience – dovetails with Hollywood’s evolving logic.

As one MrBeast director told Time: “These algorithms are toxic to humanity. They prioritize addictive, isolating experiences over ethical social design, all to sell advertising. My problem isn’t with MrBeast; it’s with a platform that encourages someone like me to study retention graphs so that videos can be made more addictive.” In other words, value-neutral entertainment rather than art; content as a means to an end. This isn’t much different from the business logic of the streaming platforms. Hollywood has its own race for viewers: the rise of mid TV, the cheap Netflix gloss, the infinite scroll of the “content” library – all of it reflects the spirit of MrBeast’s lowest-common-denominator attention economy.

After all, Donaldson now fronts an Amazon show styled after a Netflix original series, one explicitly fixated on “entertainment.” The show, as it keeps saying, is “making entertainment history”: the biggest, the brightest, the most shocking, the most interesting. It is likewise a product with no complexity, values, or even storytelling, built around the single goal of attracting attention. As Vox’s Rebecca Jennings puts it, with the MrBeast-ification of entertainment, the lines between content, entertainment, television, and influencer are blurrier than ever before. Donaldson has crossed whatever divisions remained – will Hollywood subscribe?

Source: www.theguardian.com

Exploring the Concept of “Big Man Style” and Why Billionaire Mediocrity is No Longer In Fashion

The business casual revolution of the 1990s and the rise of the tech billionaires in the early 2000s are said to have ushered in a new era, liberating employees from the shackles of dress codes. Mark Zuckerberg transformed the hoodie and jeans into a symbol of the new economy’s meritocracy, the uniform of genius hackers that would shake up the East Coast’s traditional coat-and-tie aesthetic. In the digital economy, many imagined, the most successful companies would allow their talented employees to wear whatever they wanted while splashing around in colorful ball pits.


But as Facebook engineer Carlos Bueno wrote in a 2014 blog post, Inside the Mirrortocracy, we have simply replaced the rigid dress code of the 1960s with a slightly less rigid one. The new world is actually not so free. The cognitive dissonance is clear in the faces of recruiters who pretend that clothing is no big deal, yet are clearly disappointed when a candidate shows up to an interview in a dark worsted business suit. “You are expected to conform to the rules of your culture before you can demonstrate your true worth,” Bueno writes. “What wearing a suit actually signals, and I don't mean this as a myth, is non-conformism, one of the most serious sins.”

As the rich get fabulously rich, they seem to become even more determined to look as plain as possible.

This reality was on full display earlier this month at the Sun Valley Conference, better known as “summer camp for billionaires.” Since the tradition began in 1984, organizers have been gathering the wealthiest and most influential people for the multi-day conference. A treasure trove of top CEOs, tech entrepreneurs, billionaire investors, media moguls, and more convene at the invitation-only meeting to privately decide the future of the world.

This year's attendees included Jeff Bezos, who continues his incredible transformation from nerd to muscle man. Looking like a successful SoulCycle instructor, he strolled around the resort grounds in pearl-grey jeans, a skin-tight black T-shirt, and a multitude of colorful bracelets (possibly from the American luxury brand David Yurman).

Jeff Bezos at Amazon's Seattle offices on May 2, 2001, and with his girlfriend Lauren Sanchez at a meeting in Sun Valley, Idaho, on July 11, 2024. Composite: AP, Reuters

Warner Bros. Discovery CEO David Zaslav at least tried to bring some style to the event, donning a brown corduroy trucker jacket, slim-legged blue jeans, smart white sneakers, and a white bandana around his neck. But most of the men in attendance were dressed in scruffy polos, T-shirts, and simple button-down shirts. Billionaire OpenAI CEO Sam Altman looked like he was at freshman orientation in a plain gray T-shirt, blue jeans, and a black backpack slung over both shoulders.

This is not necessarily a bad outfit – many of them are – but one wonders if something has been lost in the move away from coats and ties. A few generations ago, men of this social class would have worn something more visually interesting. In the 1930s, Apparel Arts, a leading men's fashion trade magazine that advises men on how to dress for different environments, recommended the following for resort wear: a navy double-breasted sport coat with a polka-dot scarf and high-waisted trousers in Cannes; a mocha linen beach shirt and wide-cut slacks with self-strap fastenings on the Côte d'Azur; and a white shawl-collar dinner jacket with midnight blue tropical worsted trousers and a white silk dinner shirt for semi-formal evening wear.

The advantage of these clothes is not so much about appearance or elegance as about the way they create a distinctive silhouette. The tailored jacket is particularly useful in this regard. Made from layers of haircloth, canvas, and padding, pad-stitched together and shaped with darts and expert pressing, the tailored jacket creates a flattering V-shape even on a body that doesn't naturally have one. That silhouette is why Stacey Bendet, founder of fashion company Alice & Olivia, is always the most stylish person at these conferences (this year, she wore flared pants, a long leather coat, giant sunglasses, and a Western-wear hat, each element creating a distinctive shape). In contrast, Tim Cook's basic polo shirts and slim jeans did little to flatter his build.

To me, dressing like this, surrounded by guys in t-shirts and sloppy polo shirts, is pretty funny, and honestly, thank god people like this exist. pic.com/Jaraz4d8XB

— Derek Guy (@dieworkwear) July 17, 2024


In his book Distinction, Pierre Bourdieu correctly recognizes that the notion of “good taste” is merely the habitus, or taste, of the ruling class. He was, of course, not the first to make this observation. In the early 20th century, German sociologist Georg Simmel noted that people often use fashion as a form of class differentiation. According to Simmel, style spreads downward as the working class imitates those deemed socially superior, at which point the ruling class moves on to something new. But the publication of Distinction in 1979, based on Bourdieu's empirical research from 1963 to 1968, stands out, especially for its understanding of men's style. At the time, the coat and tie was in decline. By the time the book was translated into English in 1984, the suit was drawing its last breath before the rise of casual Fridays, tech entrepreneurs, and remote work would change men's dress forever.

Today's ruling class is hardly inspiring in terms of taste. The preponderance of tech vests replacing navy blazers shows that socioeconomic class still dictates dress habits, even if the style is less appealing. Ironically, while the elite are increasingly dressing like the middle class who go shopping at Whole Foods Market, wealth inequality in the United States has worsened roughly every decade since the 1980s, the last time men were still expected to wear tailored jackets.

To be honest, Jensen Huang was shining: he discovered the power of the jacket, he discovered the uniform (black leather jacket), and also, his tailoring seems pretty good. pic.com/ryjCqD1uaI

— Derek Guy (@dieworkwear) February 24, 2024


If there's a silver lining to all this, it's that the history of clothing in the 20th century is about how influences changed. As the century progressed, men began to receive dress dictates from different social classes, not just those with economic or political power: artists, musicians, and workers. Many of the more provocative fashion moments of this period were about rebellious youth taking a stance of rebellion against the establishment. These included swing kids and hip-hop, bikers, rockers, outlaws, beats and beatniks, modernists and mods, drag and dandies, hippies and bohemians. In recent years, Zuckerberg and Bezos have made an effort to move away from the fleece uniform, and Nvidia CEO Jensen Huang looks pretty stylish in a head-to-toe black uniform that includes a variety of leather jackets. But for the most part, today it's better to look elsewhere for dress dictates. The ruling class may shape our world, but don't let them shape your outfit.

Source: www.theguardian.com

AI Researcher Develops Chatbot Based on Future-Self Concept to Assist in Decision Making

If spending time on the couch, binging fast food, drinking too much alcohol or not paying into your company pension is ruining your carefully laid plans for life, it might be time to have a conversation with your future self.

With time machines not readily available, researchers at the Massachusetts Institute of Technology (MIT) have developed an AI-powered chatbot that simulates a user’s future self and offers observations and valuable wisdom, in the hope of encouraging people to think more today about who they want to be tomorrow.

The chatbot digitally ages users’ profile photos so that they appear as wrinkled, grey-haired seniors, generates plausible artificial memories, and weaves a story about a successful life based on the user’s current aspirations.

“The goal is to encourage long-term changes in thinking and behavior,” says Pat Pataranutaporn, who works on the Future You project at the MIT Media Lab, “which may motivate people to make smarter choices in the present that optimize their long-term well-being and life outcomes.”

In one conversation, an aspiring biology teacher asked a chatbot, a 60-year-old version of herself, about the most rewarding moment in her career so far. The chatbot, responding that she was a retired biology teacher in Boston, recalled a special moment when she turned a struggling student’s grades around. “It was so gratifying to see my student’s face light up with pride and accomplishment,” the chatbot said.

To interact with the chatbot, users are first asked to answer a series of questions about themselves, their friends and family, the past experiences that have shaped them, and the ideal life they envision for themselves in the future. They then upload a portrait image, which the program then digitally ages to create a portrait of them at 60 years old.

The program then feeds information from the user’s answers into a large language model to generate a rich synthetic memory for the simulated older version of the user, ensuring that the chatbot draws on a coherent backstory when responding to questions.

The final part of the system is the chatbot itself, powered by OpenAI’s GPT-3.5, which introduces itself as a potential older version of the user and can talk about their life experiences.
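As a rough sketch of how such a pipeline could be wired together, the survey answers are folded into a synthetic “memory,” which then becomes the system prompt for the chat model. This is an illustration based only on the description above, not MIT’s actual code; the prompt wording, function names, and survey fields are hypothetical.

```python
from openai import OpenAI  # assumes the openai package is installed and an API key is configured

client = OpenAI()

def build_synthetic_memory(survey: dict) -> str:
    """Ask the model to invent a coherent backstory for the user's 60-year-old self,
    based on the kind of survey answers the article describes (goals, relationships, past)."""
    prompt = (
        "Write a first-person memoir sketch for this person at age 60, assuming their "
        f"hopes broadly worked out. Survey answers: {survey}"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def chat_with_future_self(memory: str, user_message: str) -> str:
    """The chatbot introduces itself as a *potential* older self and answers in character."""
    system = (
        "You are a possible 60-year-old future version of the user, not a prediction. "
        "Stay consistent with this synthetic memory:\n" + memory
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user_message},
        ],
    )
    return resp.choices[0].message.content

survey = {"dream_job": "biology teacher", "city": "Boston", "values": "family, teaching"}
memory = build_synthetic_memory(survey)
print(chat_with_future_self(memory, "What was the most rewarding moment of your career?"))
```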

Pataranutaporn has had several conversations with his “future self,” but the most memorable was when the chatbot reminded him that his parents won’t be together forever, so he should spend time with them while he still can. “The perspective I gained from that conversation is still influential to me today,” he said.

Users are told that their “future self” is not a prediction, but a potential future self based on the information they provide, and are encouraged to explore different futures by varying their survey answers.

A preprint scientific paper on the project, which hasn’t been peer-reviewed, describes a trial of 344 volunteers and found that talking to the chatbot made people feel less anxious and more connected to their future selves. Pataranutaporn said this stronger connection should encourage better life choices, from focusing on specific goals and exercising regularly to eating healthier and saving for the future.

Ivo Vlaev, professor of behavioural science at the University of Warwick, said people often struggle to imagine themselves in the future, but doing so could lead to stronger adherence to education, healthier lifestyles and more careful financial planning.

He called the MIT project a “fascinating application” of behavioral science principles. “It embodies the idea of a nudge, a subtle intervention designed to steer behavior in a beneficial direction by making your future self more salient and relevant to the present,” he said. “Implemented effectively, this could have a profound impact on how people make decisions today with their future well-being in mind.”

“From a practical standpoint, its effectiveness will depend on how well it simulates meaningful, relevant conversations,” he added. “If users perceive the chatbot as authentic and insightful, it can have a significant impact on behavior. But if the interaction feels superficial or quirky, its impact may be limited.”

Source: www.theguardian.com

Physicists are delving into quantum gravity using the concept of gravitational rainbows

The fans roar to life, pumping air upwards at 260 kilometres per hour. Wearing a baggy blue jumpsuit, red helmet, and plastic goggles, Claudia de Rham steps into the glass chamber and… whoosh! Suddenly she is suspended in the air, a wide grin on her face, thrilled by her simulated experience of free fall.

I persuaded de Rham, a theoretical physicist at Imperial College London, to go indoor skydiving with me at iFLY London. It seemed appropriate, given that much of her life has been dedicated to exploring the limits and true nature of gravity. At least on this occasion, jumping out of a plane wasn’t an option.

As she explains in her new book, The Beauty of Falling, de Rham trained to be a pilot and then an astronaut, but medical problems ended her chances of the ultimate escape from gravity. As a theorist, though, she has continued to delve ever deeper into this most familiar and mysterious of forces, making her mark by asking a fundamental question: how much does gravity weigh?

That means asking about the graviton, the hypothetical particle thought to carry the force. If it has mass, as de Rham suspects it might, that would open a new window on gravity. Among other things, we might finally observe a “gravitational rainbow” that would betray the existence of gravitons and, along with them, make possible the long-sought quantum description of gravity.
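The “gravitational rainbow” idea can be made concrete with a standard dispersion relation, given here as a textbook illustration rather than anything taken from de Rham’s book: if the graviton has even a tiny mass, gravitational waves of different frequencies travel at slightly different speeds, so the components of a signal spread out like colours through a prism.

```latex
% Relativistic dispersion relation for a graviton of mass m_g:
E^2 = p^2 c^2 + m_g^2 c^4
% The resulting group velocity of a gravitational wave of energy E (frequency f, with E = h f):
v_g = c \sqrt{1 - \frac{m_g^2 c^4}{E^2}}
% If m_g = 0, all frequencies travel at c and there is no rainbow.
% If m_g > 0, lower-frequency waves lag slightly behind higher-frequency ones,
% producing the frequency-dependent spread that observations could search for.
```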

Suspended in the air, de Rham makes it look easy. Soon she is rising…

Source: www.newscientist.com