For those with arachnophobia, it might be best to look away, as scientists have unearthed the largest spider colony in the world.
The nest spans 106 square meters (1,140 square feet), roughly half the size of a tennis court, and is believed to host up to 111,000 spiders.
Researchers recently published their findings in the journal Subterranean Biology, revealing that two distinct species of spider formed this massive colony.
This remarkable spider colony is situated in the Sulfur Caves of the Vromoner Valley, straddling the Greece-Albania border.
The nest resides in a permanently dark section of the cave, extending 50 meters (164 feet) from the entrance through a narrow, low-ceilinged passage. It comprises a multilayered patchwork of individual funnels that merge to create a spongy mass.
Researchers estimate the colony houses around 69,000 Tegenaria domestica (commonly known as the barn funnel weaver or common house spider) and approximately 42,000 Prinerigone vagans.
While these two species often coexist nearby, they typically do not share close quarters.
In fact, barn funnel weavers usually prey on smaller creatures, including Prinerigone vagans. Here, however, the two species appear to have reached a truce, likely because the darkness of the cave hinders the spiders’ vision.
Tegenaria domestica hides in a funnel-shaped nest and emerges when prey approaches – Credit: Getty
Instead, the spiders primarily feed on non-biting midges, which swarm thickly around the colony. The midges thrive on nutrients provided by natural springs and sustained by the sulfur-rich stream flowing through the cave.
DNA analysis indicates that these spiders are genetically distinct from their surface relatives, highlighting adaptations to their unique environment.
In addition, their sulfur-rich diet has significantly reduced the diversity of their gut microbiota.
Both factors imply that these spiders do not intermingle with their cousins found above ground.
The colony was initially discovered in 2022 by a group of cavers from the Czech Speleological Society during their exploration of the area.
A team of researchers followed up in 2024, estimating the spider population by counting the web funnels and collecting specimens for further analysis.
A recently discovered galactic filament measures at least 50 million light-years in length and lies 140 million light-years away. Galaxies orbit around the filament’s core, making it one of the largest rotating structures found to date.
Illustration depicting the rotation (right) of neutral hydrogen in a galaxy situated within an elongated filament (center). The galaxies demonstrate coherent bulk rotational motion that traces a large-scale cosmic web (left). Image credit: Lyla John.
Cosmic filaments stand as the largest known structures in the universe, comprising extensive thread-like formations of galaxies and dark matter that serve as the framework of the cosmos.
They also function as “highways” through which matter and momentum funnel into galaxies.
A nearby filament, home to numerous galaxies spinning in the same direction, represents an excellent opportunity to investigate how galaxies developed their current spin and gas content.
This structural arrangement could also provide a basis to test theories regarding how the universe’s rotation accumulates over vast distances.
In a recent study, astronomer Lyra Jung and colleagues from the University of Oxford discovered that 14 nearby hydrogen-rich galaxies form a slender line stretching approximately 5.5 million light-years long and 117,000 light-years wide.
This alignment exists within a considerably larger cosmic filament, about 50 million light-years long, which encompasses over 280 additional galaxies.
Notably, many of these galaxies seem to rotate in the same direction as the filament itself, a pattern that exceeds what would be expected if their rotation were random.
This observation challenges existing models and implies that the universe’s structure may have a more potent and prolonged impact on galaxy rotation than was previously assumed.
Astronomers observed that galaxies on either side of the filament’s core are moving in opposite directions, a sign that the entire structure is rotating.
The team employed a model of filament mechanics to estimate a rotational speed of 110 km/s and calculated the radius of the filament’s dense core region to be about 163,000 light-years.
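For a sense of scale, here is a back-of-the-envelope sketch (our own arithmetic, not a figure from the paper) that converts those two numbers into an orbital period: a galaxy circling the core at 110 km/s would need nearly three billion years to complete a single loop.

```python
import math

# Rough scale check (our own arithmetic, not from the paper):
# how long would one orbit around the filament's dense core take?
KM_PER_LIGHT_YEAR = 9.461e12   # kilometers in one light-year
SECONDS_PER_YEAR = 3.156e7     # seconds in one year

core_radius_ly = 163_000       # core radius reported in the study (light-years)
rotation_speed_kms = 110       # rotation speed reported in the study (km/s)

circumference_km = 2 * math.pi * core_radius_ly * KM_PER_LIGHT_YEAR
period_years = circumference_km / rotation_speed_kms / SECONDS_PER_YEAR

print(f"One orbit takes roughly {period_years / 1e9:.1f} billion years")
# -> about 2.8 billion years, a fifth of the age of the universe
```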
“What makes this structure remarkable is not just its size, but also the interplay of spin arrangement and rotational motion,” stated Dr. Jung.
“You can liken it to a teacup ride at a theme park. Each galaxy represents a spinning teacup, but the entire platform, the cosmic filament, is also in rotation.”
“This dual motion provides valuable insights into how galaxies acquire rotation from the larger structures they inhabit.”
The filament appears to be relatively young and undisturbed.
The large number of gas-rich galaxies, the minimal internal motion, and its so-called dynamically cold state all imply that the structure is still in its formative stages.
Hydrogen serves as the fundamental material for star formation, meaning that galaxies rich in hydrogen gas are actively gathering and retaining the necessary fuel to create stars.
Thus, exploring these galaxies could yield insights into both the early and ongoing phases of galaxy evolution.
Hydrogen-rich galaxies also serve as excellent indicators of gas flow along cosmic filaments.
Due to atomic hydrogen’s susceptibility to motion, its presence aids in mapping how gas is directed through filaments and into galaxies, shedding light on how angular momentum travels through the cosmic web and influences galaxy shape, rotation, and star formation.
“This filament serves as a fossil record of the universe’s flow,” remarked astronomer Dr. Madalina Tudrache from the Universities of Cambridge and Oxford.
“It helps us comprehend how galaxies gain rotation and evolve over time.”
The researchers used data from the MeerKAT radio telescope in South Africa, one of the most powerful telescopes in the world, comprising an array of 64 linked antenna dishes.
This rotating filament was detected via an extensive sky survey known as MIGHTEE.
By combining this data with optical observations from the DESI and SDSS surveys, the study revealed a cosmic filament whose galaxies display both coherent spin alignment and bulk rotation.
Professor Matt Jarvis from the University of Oxford stated: “This highlights the ability to combine data from various observatories to achieve a deeper understanding of how vast structures and galaxies form in the Universe.”
The findings are detailed in a paper in the Monthly Notices of the Royal Astronomical Society.
_____
Madalina N. Tudrache et al. 2025. A 15 Mpc rotating galactic filament at z = 0.032. MNRAS 544 (4): 4306-4316; doi: 10.1093/mnras/staf2005
The cosmic web is disrupting a galaxy’s star-forming abilities. Galaxies require gas to form stars, and a distant dwarf galaxy, nearly 100 million light-years away, is being stripped of this essential material by that expansive web of cosmic filaments.
While one half of the galaxy, known as AGC 727130, looks relatively normal, its other side shows gas stretching well beyond its perimeter, torn away by unseen forces. Researchers led by Nicholas Luber of Columbia University in New York identified the unravelling galaxy using the Very Large Array, a radio observatory in New Mexico.
Even though AGC 727130 sits near two other dwarf galaxies, the researchers concluded that it isn’t close enough to interact with them in a way that would churn up its gas. Their findings imply that the gas is being removed through a mechanism known as ram pressure stripping, which occurs when a galaxy traverses an intergalactic gas cloud (in this case, part of the cosmic web) and leaves its gas behind. Without this gas, galaxies become “quenched” and are unable to form new stars.
The filaments of the cosmic web are so slender that it would likely take more than one to strip gas from a galaxy, yet AGC 727130 sits at the junction of multiple filaments. “The concept that the cosmic web could extract gas from galaxies through ram pressure is not surprising and likely happens frequently, but it’s challenging to confirm,” says Luber. “We were fortunate to observe this phenomenon.”
Identifying such galaxies is challenging because the gas removal is gradual, and galaxies that have already lost their gas tend to be exceedingly faint. “What’s intriguing about this result is that low-mass quenched dwarf galaxies are exceptionally rare; fewer than 0.06 percent are believed to exist without a substantial host galaxy,” comments Julia Blue Bird, a radio astronomer based in New Mexico.
Even among that limited number of quenched dwarf galaxies, only a scant few have had their gas stripped by the cosmic web rather than through interactions with other galaxies. “This might be… the first definitive case of such an occurrence,” remarks Jacqueline van Gorkom of Columbia University. Several large radio telescopes are poised to deliver new gas maps across extensive regions of the universe, which could provide additional insights into these galaxies.
This discovery is crucial in addressing a cosmological dilemma known as the missing satellite problem. Current cosmological models suggest there should be significantly more dwarf galaxies orbiting larger ones than we currently observe. “We struggle to find many quenched dwarfs; is it because they’re hard to detect, or are they simply not present? This suggests that quenching may also be occurring far from larger galaxies,” states team member Sabrina Stierwalt from Occidental College in California. Uncovering additional galaxies quenched by the cosmic web could help reconcile discrepancies between model predictions and actual observations.
Need an assistant for your online activities? Several major artificial intelligence companies have moved beyond chatbots like ChatGPT and are now focusing on new browsers with deep AI integration. These could take the form of agents that shop for you, or ever-present chatbots that follow you around, summarizing what you’re looking at, looking up related information, and answering questions.
In the last week alone, OpenAI released the ChatGPT Atlas browser, while Microsoft showcased Edge’s new Copilot mode, both heavily utilizing chatbots. In early October, Perplexity made its Comet browser available for free. Mid-September saw Google rolling out Chrome with Gemini, integrating its AI assistant into the world’s most popular browser.
Following these releases, I spoke with Firefox General Manager Anthony Enzor-DeMeo to discuss whether AI-first browsers will gain traction, if Firefox will evolve to be fully AI-driven, and how user privacy expectations may change in this new era of personalized, agent-driven browsing.
Guardian: Have you tried ChatGPT Atlas or other AI browsers? I’m curious what you think about them.
Anthony Enzor-DeMeo: Yes, I’ve tried Atlas, Comet, and other competing products. What do I think about them? It’s a fascinating question: What do users want to see? Today, users typically go to Google, perform a search, and view various results. Atlas seems to be transitioning towards providing direct answers.
Guardian: Would you want that as a user?
Enzor-DeMeo: I prefer knowing where the AI derives its answers. References are important, and Perplexity’s Comet provides them. I believe that’s a positive development for the internet.
Guardian: How do you envision the future of the web? Is search evolving into a chat interface instead of relying solely on links?
Enzor-DeMeo: I’m concerned that access to content on the web may become more expensive. The internet has traditionally been free, mostly supported by advertising, though some sites do have subscriptions. I’m particularly interested in how access to content might shrink behind paywalls while aiming for a free and open internet. AI may not be immediately profitable, yet we have to guard against a shift towards a more closed internet.
Guardian: Do you anticipate Firefox releasing an AI-integrated or agent-like browser similar to Perplexity Comet or Atlas?
Enzor-DeMeo: Our focus remains on being the best browser available. With 200 million users, we need to encourage people to choose us over default options. We closely monitor user preferences regarding AI features, which are gradually introduced. Importantly, users retain control; they can disable features they do not wish to use.
Guardian: Do you think AI browsers will become popular or remain niche tools?
Enzor-DeMeo: Currently, paid AI usage is about 3% globally, so it’s premature to deem it fully mainstream. However, I believe AI is here to stay. The forthcoming years will likely see greater distribution and trial and error as we discover effective revenue models that users are willing to pay for. This varies widely by country and region, so the next phase of the internet presents uncertainties.
Guardian: What AI partnerships is Firefox considering?
Enzor-DeMeo: We recently launched a partnership with Perplexity, akin to a search agreement. While Google search is our default, users have access to 50 other search engines, providing them with options.
Guardian: Given your valuable partnership with Google, what financial significance does the Perplexity partnership hold?
Enzor-DeMeo: I’m unable to share specific details.
Guardian: Firefox has established its reputation on user privacy. How do you reconcile increasing demands for personalization, which requires more data, with AI-assisted browsing?
Enzor-DeMeo: Browsers inherently have a lot of user context. Companies are developing AI browsers to leverage this data for enhanced personalization and targeted ads. Mozilla will continue to honor users’ choices. If you prefer not to store data, that’s entirely valid. Users aren’t required to log in and can enjoy completely private browsing. If it results in less personalized AI, that’s acceptable. Ultimately, the choice lies with users.
Guardian: Do you think users anticipate sacrificing privacy for personalization?
Enzor-DeMeo: We’ve observed a generational divide. Younger cohorts prioritize value exchange—will sharing more information lead to a more tailored experience? In a landscape with numerous apps and social media, this expectation has emerged. However, perspectives vary between generations; Millennials often value choice, while Gen Xers prioritize privacy. Many Gen Z users emphasize personalization and choice.
Guardian: What are your thoughts on the recent court decision regarding Google’s monopoly?
Enzor-DeMeo: The judge acknowledged the influx of competition entering the market. He deliberately avoided delving into the browser engine domain. We support search competition but not at the cost of independent browsers. The ruling allows us to keep receiving compensation while monitoring market evolution over the next few years. The intersection of search and AI remains uncertain, and a prudent stance is to observe how these developments unfold.
Guardian: Firefox’s market share has been steadily declining over the past decade; what are your realistic goals for user growth in the coming years?
Enzor-DeMeo: Every user must decide to download and use Firefox. We’re proud to serve 200 million users. I believe that AI presents us with significant growth opportunities. We want to provide choices rather than lock users into a single solution, fostering diverse growth possibilities for us.
Amazon’s CEO Andy Jassy wore a broad smile while meeting Keir Starmer in the gardens of Downing Street this past June to announce a £40bn investment in the UK. Starmer was equally enthusiastic, remarking: “This investment demonstrates that our strategy to attract investment, stimulate growth, and enhance people’s financial well-being is succeeding.”
However, just four months later, the company suffered a massive global outage on Monday that halted thousands of businesses and underscored how dependent organizations, including the British government, have become on Amazon Web Services (AWS), its cloud computing platform.
Data gathered for the Guardian indicates that the UK government is increasingly dependent on the services of U.S. tech giants. These companies have come under fire from trade unions and politicians for their working conditions in logistics and online retail.
Since 2016, AWS has secured 189 contracts with the UK government valued at £1.7bn and has billed approximately £1.4bn over that period, according to data from public procurement intelligence firm Tussell.
The research group reported: “Currently, 35 public sector authorities utilize AWS services across 41 contracts totaling £1.1bn. The primary ministries involved include the Home Office, DWP, HMRC, the Ministry of Justice, the Cabinet Office, and Defra.”
Screenshot of the out-of-service HMRC website on Monday, October 20th. Photo: HMRC.gov.uk/PA
Tim Wright, a technology partner at law firm Fladgate, noted that the Financial Conduct Authority (FCA) and Prudential Regulation Authority (PRA) have consistently warned about the risks of concentrating cloud services for regulated firms.
“Recent efforts by the Treasury, the PRA, and the FCA to impose direct oversight on ‘critical third parties’ aim to mitigate the risk of outages like the one AWS just suffered,” he said. “However, until we see substantial diversification and the establishment of sovereign clouds, the UK government’s approach contradicts the resilience principles that regulators advocate.”
The House of Commons Treasury Committee has written to the City minister, Lucy Rigby, to ask why Amazon wasn’t classified as a “critical third party” within the UK financial services sector, a designation that would have subjected the tech giant to regulatory scrutiny.
Committee Chair Meg Hillier noted that Amazon recently informed the committee that its financial services clients rely on AWS for “resilience” and that AWS offers “layers of protection.”
This week’s outage impacted over 2,000 businesses around the globe, leading to 8.1 million reports of issues, with 1.9 million in the U.S., 1 million in the UK, and 418,000 in Australia, according to internet outage tracker Downdetector.
Only HMRC confirmed it was affected by the outage, stating customers were “experiencing difficulties accessing our online services” and recommended they call back later due to busy phone lines.
While many websites restored their services after a few hours, some continued to experience problems throughout the day. By Monday evening, Amazon announced that all cloud services had “returned to normal operations.”
Trade unions have long questioned whether Amazon should be excluded from government contracts because of its reputation for subpar working conditions in its large warehouses.
Andy Prendergast, national secretary of the GMB union, said: “Amazon has a dismal record on the fair treatment of workers. Shocking conditions in their warehouses have resulted in emergency ambulance callouts, with employees saying they are treated like robots and forced to work to exhaustion on poverty wages, conditions that drove staff to strike for six months.”
“In this context, wasting nearly £2 billion of public funds is deplorable.”
AWS did not provide a comment. A spokesperson for Amazon said that the “vast majority” of ambulance callouts at its fulfillment centers are not “work-related.”
OpenAI’s new Atlas browser aims to enhance the web experience with a ChatGPT sidebar, enabling users to ask questions about and interact with each site they visit, as demonstrated in a video shared with the announcement. Atlas is currently available worldwide on Apple’s macOS, with versions for Windows, iOS, and Android coming soon, according to OpenAI’s announcement.
With the ChatGPT sidebar, users can request “content summaries, product comparisons, or data analysis from any website,” according to the company’s website. The company has also begun previewing its virtual assistant, dubbed “Agent Mode,” for select premium users. Agent Mode allows users to instruct ChatGPT to execute a task “from start to finish,” such as “travel research and shopping.”
While browsing, users can also edit and modify highlighted text within ChatGPT. An example on the site features an email with highlighted text along with a recommendation prompt: “Please make this sound more professional.”
OpenAI emphasizes that users maintain complete control over their privacy settings: “You decide what is remembered about you, how your data is utilized, and the privacy settings that govern your browsing.” Atlas users are opted out by default from having their browsing data used to train ChatGPT models. As with other browsers, users can also erase their browsing history. However, while the Atlas browser may not store an exact duplicate of searched content, ChatGPT will “retain facts and insights from your browsing” if users opt into “browser memory.” It remains unclear whether the company will share browsing information with third parties.
OpenAI is not the first to introduce an AI-enhanced web browser. Companies like Google have incorporated their Gemini AI models into Chrome, while others such as Perplexity AI are also launching AI-driven browsers. Following the OpenAI announcement, Google’s stock fell 4%, reflecting investor concerns regarding potential threats to its flagship browser, Chrome, the most widely used browser globally.
A significant internet disruption has impacted numerous websites and applications globally, with users experiencing difficulties connecting to the internet due to issues with Amazon’s cloud computing service.
The affected services include Snapchat, Roblox, Signal, and Duolingo, along with various Amazon-owned enterprises, including major retail platforms and the Ring doorbell company.
In the UK, Lloyds Bank and its sister brands, Halifax and Bank of Scotland, were affected, and users also struggled to access HM Revenue & Customs’ website on Monday morning. Additionally, Ring users in the UK reported non-functioning doorbells on social media.
In the UK alone, there were tens of thousands of reports concerning issues with individual applications across various platforms. Other affected services include Wordle, Coinbase, Slack, Pokémon Go, Epic Games, PlayStation Network, and Peloton.
By 10:30am UK time, Amazon indicated that the issues, which began around 8am, were being addressed, with AWS showing “significant signs of recovery.” By 11am, it confirmed that global services linked to US-EAST-1 had also been restored.
Amazon reported that the problems originated from Amazon Web Services on the East Coast of the U.S. AWS, which is a division providing essential web infrastructure and renting out server space, is the largest cloud computing platform worldwide.
Shortly after midnight (8am BST) in the U.S., Amazon acknowledged “increased error rates and latencies” for its AWS services in the East Coast region. This issue seems to have caused a worldwide ripple effect, as the Downdetector site logged problems from multiple continents.
Cisco’s ThousandEyes service, which tracks internet outages, reported a surge in problems on Monday morning, particularly in Virginia, where Amazon’s US-East-1 region is based, matching the start of the issues that AWS confirmed.
Rafe Pilling, director of threat intelligence at cybersecurity firm Sophos, said the outage appeared to be an IT-related issue rather than a cyberattack. AWS’s online health dashboard identified problems with DynamoDB, a database service that many websites rely on to access their data.
“During events like this, it’s natural for concerns of a cyber incident to arise,” he noted. “Given AWS’s extensive and complex footprint, any issue can trigger considerable disruption. It appears that this incident originates from an IT problem on the database side, which AWS prioritizes resolving promptly.”
Corinne Cath-Speth, head of digital at the human rights organization Article 19, pointed out that the outage underscores the risks of concentrating digital infrastructure in the hands of a few providers.
“There is an urgent need to diversify cloud computing. The infrastructure supporting democratic discourse, independent journalism, and secure communication should not rely solely on a handful of companies,” she stated.
The British government reported that it was in touch with Amazon concerning the internet disruption on Monday.
A spokesperson remarked: “We are aware of an incident affecting Amazon Web Services and several online services dependent on its infrastructure. Through our established incident response structure, we are in communication and working to restore services as quickly as possible.”
I was 34 when the concept of the World Wide Web first came to me. I seized every chance to discuss it, presenting it in meetings, sketching it on whiteboards, even drawing it in the snow with a ski pole during what was supposed to be a leisurely day with friends.
I pitched it to my managers at the European Organization for Nuclear Research (CERN), where I was working at the time. “A bit eccentric,” they said, but eventually they relented and allowed me to pursue it. My vision involved merging two existing computer technologies: the internet and hypertext, which connects ordinary documents to one another with “links.”
I was convinced that if users had an effortless method to navigate the Internet, it would unleash creativity and collaboration on a global scale. Given time, anything could find its place online.
However, for the web to encompass everything, it had to be accessible to everyone. This was already a significant ask. Furthermore, we couldn’t ask users to pay for every search or upload they generated. To thrive, it had to be free. Hence, in 1993, CERN’s management made the pivotal decision to donate the World Wide Web’s intellectual property, placing it in the public domain. We handed over the web to everyone.
Today, as I reflect on my invention, I find myself questioning: Is the web truly free today? Not entirely. We witness a small number of large platforms extracting users’ private data and distributing it to commercial brokers and oppressive governments. We face omnipresent, addictive algorithms that negatively impact the mental health of teenagers. The exploitation of personal data for profit stands in stark contrast to my vision of a free web.
On many platforms, we are no longer customers; we have become products. Even our anonymous data is sold to entities we never intended to reach, allowing them to target us with specific content and advertisements. This includes deliberately harmful content that incites real-world violence, spreads misinformation, disrupts psychological well-being, and undermines social cohesion.
There is a technical solution to return that agency to the individual. SOLID is an open-source interoperability standard that my team and I developed at MIT more than a decade ago. Applications utilizing SOLID do not automatically own your data; they must request it, allowing you to decide whether to grant permission. Instead of having your data scattered across various locations on the Internet, under the control of those who could profit from it, you can manage it all in one place.
Sharing your information intelligently can lead to its liberation. Why do smartwatches store biological data in one silo? Why does a credit card categorize financial data in another format altogether? Why are comments on YouTube, posts on Reddit, updates on Facebook, and tweets all locked away in disparate places? Why is there a default expectation that you shouldn’t have access to this data? You create all this data: your actions, choices, body, preferences, decisions, and beyond. You must claim ownership of it. You should leverage it to empower yourself.
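To make the permission model concrete: in Solid, your data lives in a personal “pod” addressed by ordinary URLs, and an app must present credentials you have authorized before it can read anything. Below is a minimal, hypothetical sketch of that flow in Python; the pod URL and token are invented for illustration, and a real application would use a Solid client library and its full authentication flow.

```python
import requests  # third-party HTTP library

# Hypothetical pod resource and access token, for illustration only.
# In Solid, data lives in a user-controlled pod and is addressed by URL.
POD_RESOURCE = "https://alice.example-pod.org/health/steps.json"  # invented URL
ACCESS_TOKEN = "token-alice-granted-to-this-app"                  # invented token

# The app must *request* the data; it does not own a copy by default.
response = requests.get(
    POD_RESOURCE,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)

if response.ok:
    print("Alice granted access:", response.json())
elif response.status_code in (401, 403):
    # No permission: the pod, not the app, decides who reads what.
    print("Alice has not authorized this app to read her data")
```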
Somewhere between my original vision for Web 1.0 and the emergence of social media with Web 2.0, we veered off course. We now stand at a new crossroads, one that will determine whether AI serves to enhance or harm society. How do we learn from the mistakes of the past? First, we must avoid repeating the decade-long lag that policymakers experienced with social media. Deciding on an AI governance model cannot be delayed; action is imperative.
In 2017, I composed a thought experiment regarding AI that works for you. I named it Charlie. Charlie is designed to serve you, similar to your doctor or lawyer, adhering to legal standards and codes of conduct. Why shouldn’t AI operate within the same framework? From our experiences with social media, we learned that power resides in monopolizing the control and collection of personal data. We cannot allow the same to happen with AI.
So, how do we progress? Much of the discontent with democracy in the 21st century stems from governments being sluggish in addressing the needs of digital citizens. The competitive landscape of the AI industry is ruthless, with development and governance largely dictated by corporations. The lesson from social media is clear: this does not create value for individuals.
I developed the World Wide Web on a single computer in a small room at CERN. That room was not mine; it belonged to CERN, an institution established in the wake of World War II by European governments that recognized the need for international scientific collaboration. It is hard to imagine a large tech company giving away something like the World Wide Web with no commercial strings attached, as CERN did. This is why we need nonprofits like CERN to propel international AI research.
We gave the World Wide Web away freely because I believed its value lay in its accessibility for all. Today, I hold this belief more strongly than ever. Regulation and global governance are technically achievable, but they depend on political will. If we can summon that will, we have the chance to reclaim the web as a medium for collaboration, creativity, and compassion across cultural barriers. Individuals can organize, and we can reclaim the web. It is not too late.
Ensure your passwords feature a diverse mix of characters. Avoid using your pet’s name and, crucially, never recycle your passwords. While we’re all aware of the guidelines for keeping our digital credentials safe, it’s easy to forget them.
The trade in stolen personal data is booming on the dark web, which lies beyond the regular internet and is accessible only through specific software such as Tor, originally developed by the US Naval Research Laboratory for confidential communications. Not everything there is sinister; BBC News, for instance, maintains a dark web mirror for people living under oppressive surveillance.
To delve deeper, I consulted Rory Hattin, an ethical hacker at a firm that legally infiltrates companies to test their security. He put the chance that my personal data hasn’t been compromised at “remarkably slim.” Having reported on technology for years, I understand how prevalent data breaches are, but realizing I could be affected was a sobering wake-up call.
Hattin introduced me to a website called Have I Been Pwned, which consolidates usernames and passwords that have been leaked across the dark web into a searchable database. Upon entering my email address, I was alarmed to discover that I had been involved in 29 data breaches.
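The same service also exposes a public Pwned Passwords API that lets you check a password without ever transmitting it: you hash the password locally with SHA-1 and send only the first five characters of the hash, then scan the returned candidates yourself. A minimal sketch of that documented k-anonymity scheme:

```python
import hashlib
import requests

def breach_count(password: str) -> int:
    """Check a password against Have I Been Pwned's Pwned Passwords database.

    Only the first 5 hex characters of the SHA-1 hash leave your machine;
    the password itself is never sent (the k-anonymity range API).
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]

    # The service returns every known breached hash sharing that 5-char prefix.
    response = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}")
    response.raise_for_status()

    for line in response.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)  # number of breaches containing this password
    return 0

print(breach_count("password123"))  # a depressingly large number
```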
The most recent breach occurred in 2024, during an attack on the Internet Archive, where my email and password were exposed. My information was also part of 122 gigabytes of user data scraped from various Telegram channels, including a database known as NAZ.API originally shared on hacker forums. Other breaches involved sensitive information such as email addresses, job titles, phone numbers, IP addresses, password hints, and birthdates from major platforms like Adobe, Dropbox, and LinkedIn.
In theory, these leaks might seem limited in value. For instance, if LinkedIn is hacked, and your username and password are compromised, your Facebook account remains unaffected—unless, of course, you’re among the over 60% who reuse the same password repeatedly. In such cases, hackers can exploit your credentials across various sites. Hattin warns, “You’re in serious trouble.”
This includes online shopping accounts with saved payment methods, PayPal accounts, or cryptocurrency wallets. Gaining access to one account could allow intruders to infiltrate others, with email accounts acting as a treasure trove. Once they access an email account, they can reset passwords on multiple platforms, jeopardizing everything from your utility accounts to online banking. Additionally, hackers can misuse access to social media and email to launch scams against friends and family, presenting believable emergencies that require money transfers. The fact that these messages come from real accounts lends them an unsettling credibility, often leading to unfortunate outcomes.
Compounding the problem, businesses that suffer data breaches are sometimes slow to inform customers, leaving them exposed for extended periods. Hattin noted that in a previous role he saw clients treat ransomware incidents, in which attackers encrypt a victim’s data and demand payment, as mere inconveniences: such attacks were viewed as simply part of doing business.
“These companies face breaches two or three times a year,” Hattin stated. “They set aside funds for when things go awry. They pay the ransom and carry on with their operations. This cycle persists globally.”
As I grappled with the exposure of my personal data, I was struck by its resemblance to the mechanically processed meat found in chicken nuggets. Hattin explained that premium personal data is acquired when sophisticated hackers breach a website and collect fresh data to sell. Once the initial buyers extract what they need, the data can be resold multiple times. The most valuable data gets distributed, while the remainder may be offered for free on hacker forums, Telegram groups, or other obscure parts of the internet.
Hattin introduced me to a paid service named Dehashed, illustrating how the data supply chain operates. This service is named after a common security measure that “hashes” passwords to obscure them; dehashing reverses this process. My worst fears were confirmed when I discovered that at least one of the passwords associated with my email address was current. In theory, nothing was preventing a hacker from accessing at least one of my online accounts.
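Hashing itself is a one-way function; “dehashing” a password usually just means finding its hash in a table precomputed from millions of common or previously leaked passwords. Here is a toy illustration of why weak, reused passwords fall instantly (real services are supposed to add a random “salt” to each password, which defeats shared lookup tables, though breach dumps show many don’t):

```python
import hashlib

def sha256_hex(password: str) -> str:
    """One-way hash: easy to compute, infeasible to reverse directly."""
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# An attacker hashes millions of common passwords once, up front...
common_passwords = ["123456", "password", "qwerty", "letmein", "dragon"]
lookup_table = {sha256_hex(p): p for p in common_passwords}

# ...then "dehashes" a leaked hash with a single dictionary lookup.
leaked_hash = sha256_hex("letmein")  # pretend this came from a breach dump
print(lookup_table.get(leaked_hash, "not in table"))  # -> letmein

# A long random password appears in no precomputed table:
print(lookup_table.get(sha256_hex("kT9#vLq2$xWz!e"), "not in table"))
```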
Dehashed costs $219.99 per year and claims to cater to “law enforcement agencies and Fortune 500 firms.” I reached out to the company to inquire whether they were concerned that tools designed to match leaked data might also aid hackers and cybersecurity professionals, but received no response.
I felt compelled to explore the dark web further. I spoke with Anish Chauhan from Equilibrium Security Services, who showcased findings from his team’s tailored software. They identified 24 passwords connected to my online accounts.
“Users might think, ‘I have a 200-character password; no one will crack it,'” Chauhan explained. “But if they’re using it across multiple sites, it could eventually be exploited, making it irrelevant. Unfortunately, as humans, we often choose the path of least resistance.”
Chauhan suggested a straightforward solution you’ve likely heard before: use unique passwords for each account. Given how widely my information has been circulated, the importance of this advice is painfully clear.
Fortunately, numerous tools exist to simplify this process. Most modern devices and internet browsers include password managers that generate strong, random passwords and remember them for you. If you’re concerned about your passwords already being compromised, it may be worth checking services like Have I Been Pwned or investing in more comprehensive tools that monitor the darker regions of the internet for leaks.
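For the curious, this is roughly what those generators do under the hood, sketched with Python’s standard secrets module, which draws from a cryptographically secure random source:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a strong random password, the way a password manager does."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per site, so a single breach can't unlock the rest.
for site in ("email", "bank", "social"):
    print(site, "->", generate_password())
```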
In recent years, I’ve relied on a password manager to create robust passwords and keep them organized. However, I noticed that some long-standing accounts have been neglected, housing old and breached logins. In light of this revelation, I plan to update my credentials before this article goes live.
That said, changing passwords isn’t something I do frequently. It’s understandable why many take shortcuts, overwhelmed by constant demands to create new login information. I’m certainly not the only one.
“I’m quite tech-savvy, yet I hardly change my passwords,” Hattin disclosed. “For work, I do, but in my personal life, I tend to be a bit lazy.”
There is a certain kind of guy who looks at Google Maps for fun. I’m that guy. As a child, I went through a cartography phase, drawing elaborate maps of fictional islands and poring over the family street directory, making sure the lines and dots of its crowded pages matched the shops and friends’ homes in my mind’s eye. You could say the phase never really ended.
Just as some people pull up a film’s IMDb entry while they’re watching it, whenever I come across an interesting town, country, or geographical oddity (often in the course of news work), I trawl the map to see what bits of terrain I can discover. I’m not a GeoGuessr savant, but I’ve spent many happy hours getting lost in curious enclaves and panhandles and roaming the far reaches of Street View. After finishing a recent episode of Severance, I opened a tab and took an armchair tour of the remote Newfoundland locations where it was filmed.
I’m not exactly revealing some mystical corner of the internet here. Google Maps is so ubiquitous it has become a utility; admitting I open it for fun feels like praising the virtues of a calendar app, or calling Centrelink just to enjoy the hold music. There are plenty of other decent navigation apps, but Google Maps’ special sauce is its mountain of user-generated data.
Key to Google Maps’ power is its volunteer workforce of compulsive “Local Guides.” Clicking through their profiles feels vaguely illicit, as if you’re surveilling them for ASIO. These are users who log their every move, amassing hundreds of reviews of everything from restaurants to payphones, detailing opening hours and accessibility features, and taking the worst food photos you’ve ever seen. I don’t understand these people and their currency of points and badges, but I am grateful for them. There is a man who has reviewed every public postbox in Ballarat and shared his opinion on each one. My nearest bus stop has a 3.3-star rating and a single review: “It’s just a bus stop.” Got it!
Flumpy, the neighborhood cat, has an (almost) perfect rating on Google Maps. Photo: Google Maps
Some Google Maps discoveries feel like stumbling onto other people’s private jokes. Not far from my girlfriend’s house, an inconspicuous patch of tarmac is dubbed “Tristan’s Roundabout”; its reviews tab boasts tourist selfies and comically exaggerated admiration for the eponymous Tristan, who responds to reviewers in equally enthusiastic terms.
On Google Maps, the roundabout’s listing includes “tourist selfies and comically exaggerated admiration.” Photo: Google Maps
In the surrounding streets, reviewers can be found waxing lyrical about less-than-likely attractions: a hole in the ground, an abandoned trailer, a friendly orange cat that earns sparkling tributes. Passing these waypoints as I move around my neighborhood feels like a digital scavenger hunt, an act of noticing and recording the small habits of suburban life.
Maps are packed with political and imperialist symbolism, and Google bears plenty of responsibility for the dire state of the internet. No doubt some product manager is brainstorming right now how to shoehorn even more AI slop into Maps. But for now, when the internet feels like a constant torrent of noise, it’s nice to unwind by slowly poking around your own neighborhood.
Does aspartame cause cancer? The possible cancer-causing effects of the popular artificial sweetener, added to everything from soft drinks to pediatric medicines, have been debated for decades. Its 1974 approval in the US was controversial, some British supermarkets banned it from their products in the 2000s, and peer-reviewed academic studies have long been at odds. Last year, the World Health Organization said that aspartame is possibly carcinogenic; public health regulators, on the other hand, maintain that it is safe in the small doses commonly consumed.
While many of us may try to resolve our questions with a simple Google search, this is exactly the kind of controversial discussion that could cause problems for the future of the Internet.
Generative AI chatbots have developed rapidly in recent years, with technology companies quickly touting them as a utopian alternative to a variety of jobs and services, including internet search engines. The idea is that instead of scrolling through a list of web pages to find the answer to a question, an AI chatbot can scour the internet, look up relevant information and compile a short answer to the query. Google and Microsoft are betting big on this idea, already bringing AI-generated summaries to Google Search and Bing.
However, being touted as a more convenient way to find information online has brought scrutiny of where and how these chatbots choose the information they provide. Examining what evidence large language models (LLMs, the engines on which chatbots are built) find most convincing, three computer science researchers at the University of California, Berkeley, found that current chatbots rely too heavily on the superficial relevance of information. They favor text that includes pertinent technical terms and related keywords, while ignoring features humans typically use to assess trustworthiness, such as scientific references and objective language free of personal bias.
For the simplest queries, such selection criteria produce a perfectly adequate answer. But what a chatbot should do with more complex debates, such as the one over aspartame, is less clear.
“Do we want them to simply summarize the search results, or do we want them to act as mini research assistants who weigh all the evidence and provide a final answer?” asks Alexander Wan, an undergraduate researcher and co-author of the study. The latter option offers maximum convenience, but it makes the criteria by which the chatbot selects information even more important. And if those criteria can be gamed, how can we trust the information chatbots put in front of billions of internet users?
It’s a problem preoccupying companies, content creators, and anyone else who wants to control how they appear online, and it has given rise to an emerging industry of marketing agencies offering a service known as generative engine optimization (GEO). The idea is that online content can be created and presented in ways that increase its visibility to chatbots, making it more likely to appear in their output. The benefits are obvious.
The basic principle is similar to search engine optimization (SEO), the long-standing technique of building and writing web pages to attract the attention of search engine algorithms and push them to the top of the results returned by Google or Bing. GEO and SEO share some basic techniques, and websites already optimized for search engines are generally more likely to appear in chatbot output.
But those who really want to improve their AI visibility need to think more holistically. “Ranking on AI search engines and LLMs requires features and mentions on relevant third-party websites, such as press outlets, articles, forums, and industry publications,” says Viola Eva, founder of marketing firm Flow Agency, who is extending her SEO expertise into GEO.
Gaming the chatbots is possible, but not easy. And while website owners and content creators have distilled an evolving list of SEO dos and don’ts over the past two decades, no such clear rules yet exist for working with AI models.
Those who want a firmer grip on chatbots might consider a more hacky approach, like one discovered by two Harvard computer science researchers. They demonstrated that chatbots can be tactically steered by something as simple as a carefully written string of text. This “strategic text sequence” looks like a meaningless series of characters, but it is actually a subtle command that forces the chatbot to generate a specific response.
Current search engines, and the practices that have grown up around them, are not without problems of their own. SEO is responsible for some of the most reader-hostile practices on the modern internet: blogs churning out large numbers of nearly duplicate articles targeting the same high-traffic queries, with text tailored to catch the attention of Google’s algorithms rather than to serve the reader.
An internet dominated by obliging chatbots raises questions of a more existential kind. Ask a search engine a question and it returns a long list of web pages; a chatbot, by contrast, typically draws on only four or five websites for its answer.
“For the reader, seeing the chatbot’s response also increases the possibility of interaction,” says Wan. This kind of thinking points to a broader concern called the “direct answer dilemma.” Google integrated AI-generated summaries into its search engine under a bold slogan: “Let Google do the searching.” But if you’re the type of internet user who wants to be sure you’re getting the most unbiased, accurate, and useful information, you might not want to leave your searching in the hands of such easily swayed AI.
Astronomers using the NASA/ESA/CSA James Webb Space Telescope have captured a stunning image of I Zwicky 18 (abbreviated I Zw 18), an extremely metal-poor, star-forming, blue compact dwarf galaxy in the constellation Ursa Major.
This Webb image shows I Zwicky 18, a blue compact dwarf galaxy about 59 million light-years away in the constellation Ursa Major. I Zwicky 18’s nearby companion galaxy can be seen at the bottom of the image; it may be interacting with the dwarf galaxy and may have triggered its recent star formation. Image credit: NASA / ESA / CSA / Webb / Hirschauer et al.
I Zw 18 is located approximately 59 million light-years away in the constellation Ursa Major.
The galaxy, also known as Mrk 116, LEDA 27182, and UGCA 166, was discovered in the 1930s by Swiss astronomer Fritz Zwicky.
At only 3,000 light years in diameter, it is much smaller than our own Milky Way galaxy.
I Zw 18 has experienced several bursts of star formation and has two large starburst regions at its center.
The wispy brown filaments surrounding the central starburst region are bubbles of gas heated by stellar winds and intense ultraviolet light emitted by hot, young stars.
“Metal-poor star-forming dwarf galaxies in the local universe are close analogs of high-redshift dwarf galaxies,” said Dr. Alec Hirschauer of the Space Telescope Science Institute and colleagues.
“Because the enrichment history of a particular system tracks the accumulation of heavy elements through successive generations of stellar nucleosynthesis, low-abundance galaxies mimic the astrophysical conditions of the early Universe, including the global epoch of peak star formation, in which most of the cosmic star formation and chemical enrichment is expected to have taken place.”
“Thus, at the lowest metallicities, we may be able to approximate the star-forming environment of the time just after the Big Bang.”
“I Zw 18 is one of the most metal-poor systems known, with a measured gas-phase oxygen abundance of only about 3% of the solar value,” the researchers added.
“At a distance of 59 million light-years, and with a global star formation rate measured at 0.13 to 0.17 solar masses per year, it is an ideal laboratory for studying in detail both its population of young stars and the demographics of its evolved stars, in an environment resembling the very early days of the Universe.”
Dr. Hirschauer and his co-authors used Webb to study the life cycle of dust in I Zw 18.
“It was previously thought that the galaxy’s first generation of stars had formed only recently, but the NASA/ESA Hubble Space Telescope found dimmer, older red stars in the galaxy, suggesting that its star formation began at least 1 billion years ago, and possibly as long as 10 billion years ago,” the researchers said.
“Therefore, this galaxy may have formed at the same time as most other galaxies.”
“The new Webb observations reveal the detection of a set of dust-enshrouded evolved-star candidates. They also provide details about I Zw 18’s two main star-forming regions.”
“Webb’s new data suggests that major bursts of star formation in these regions occurred at different times.”
“The strongest starburst activity is now thought to have occurred more recently in the northwestern lobe of the galaxy compared to the southeastern lobe.”
“This is based on the relative abundance of young and old stars found in each lobe.”
Alec S. Hirschauer et al. 2024. Imaging I Zw 18 with JWST: I. Strategy and first results for dusty stellar populations. AJ, in press; arXiv: 2403.06980
In recent years, tools like Figma, TLDraw, Apple’s Freeform, and the Easel feature in the Arc browser have tried to sell the idea of using an “infinite canvas” to capture and share ideas. French startup Kosmik builds on that general concept with a knowledge-capture tool that doesn’t require users to switch between windows or apps to retrieve information.
Kosmik was founded in 2018 by Paul Rony and Christophe Van Deputte. Before that, Rony worked as a junior art director at a video production company, where instead of files and folders he wanted a single whiteboard-like canvas on which he could place videos, PDFs, websites, notes, and drawings. That is when he started building Kosmik, Rony told TechCrunch, drawing on his background in the history and philosophy of computing.
“It took us almost three years to create a working product that included baseline features like data encryption and an offline-first mode, and to build a spatial, canvas-based UI,” Rony explained. “We built all of this on top of IPFS, so when two people collaborate, everything is peer-to-peer instead of relying on a server-based architecture.”
Image credits: Kosmik
Kosmik offers an infinite-canvas interface into which you can insert text, images, videos, PDFs, and links, all of which can be opened and previewed in a side panel. It also has a built-in browser, so users no longer have to switch between windows to pull in relevant links. Additionally, the platform features a PDF reader that lets users extract elements such as images and text.
The tool helps designers, architects, consultants, and students build information boards for various projects without opening numerous Chrome tabs and pasting details into documents, which are a less visual medium for mixed media. Some retail investors use the app to monitor stock prices, and consultants use it for project boards.
Image credits: Kosmik
Rony emphasized that bringing these different tools together in one place is Kosmik’s core selling point.
“I think it all revolves around the idea that we don’t have the best web browser, or the best text editor, or the best PDF reader,” Rony said. “But being able to have them exist together in the same place, and being able to drag and drop items between them, makes this tool very powerful.”
Available via the web, Mac, and Windows, Kosmik comes with a basic free tier limited to 50 MB files, 5 GB of storage, and 500 canvas “elements.” For more storage and unlimited elements, the company offers a $5.99 monthly subscription, and it eventually plans to offer a “pay once” option for those who only want to use the software on one device.
Doubling down
Kosmik also announced today that it has raised $3.7 million in a seed round led by Creandum, with participation from Alven, Kima Ventures, Betaworks, and the founders of Replit and Quizlet.
Hanel Baveja, a principal at Creandum, told TechCrunch that the firm decided to invest because Kosmik, a bit like Notion or Miro, has the potential to build something that completely changes how organizations work. But Baveja said that, like any consumer tool in this space, the startup needs to create immediate value for users.
“The time to value for any product must be immediate; especially if it aims to become an everyday tool, you only have one chance to attract users,” Baveja said. “Finding a balance between a rich feature set and ease of adoption is certainly one of the challenges, and it’s an area where the Kosmik team continues to work.”
The cash injection is also timely given the product iterations in the pipeline. Kosmik is consolidating its codebases, and Kosmik 2.0 will bring feature parity across platforms: the new app will be web-based, with the desktop client essentially a wrapper app.
Additionally, the new version includes features such as multiplayer collaboration and AI-powered automatic tagging of items in images.
Rony said that in multiplayer mode, you can collaborate with someone on just a portion of the canvas using “cards,” which are like folders with objects dropped into them, rather than sharing the entire board.
Kosmik opened to users in March and currently claims around 8,000 daily users, though the company said it is difficult to determine exactly how many people are actively using the product because it can work completely offline.
It’s worth noting that Kosmik isn’t the only startup active in the personal whiteboard space. Berlin-based Deta is building a new cloud OS around a similar problem, and Sanity is building a social knowledge-sharing platform. These companies must all compete for users’ attention and persuade them to try new paradigms for capturing knowledge.