
It’s time to rethink our relationship with AI
<p>Undoubtedly, the launch of <strong>ChatGPT</strong> marked a pivotal moment in AI history. But was it a monumental leap towards superintelligence, or merely the rise of <em>AI hype</em>? Personally, I’ve always found the technology behind AI chatbots, particularly large language models, intriguing but flawed, so I count myself among the skeptics. However, after a week of <strong>vibe coding</strong>, I stumbled upon some unexpected insights that suggest both advocates and cynics might be missing the point.</p>
<p>To clarify, "vibe coding" is a term coined by <strong>Andrej Karpathy</strong>, a co-founder of OpenAI. It describes developing software through natural language prompts, "fully giving in to the vibes" while the AI generates the actual code. Recently, I have seen claims that tools like <strong>Claude Code</strong> and <strong>OpenAI's Codex</strong> have dramatically improved coding efficiency. Articles such as the <a href="https://www.nytimes.com/2026/02/18/opinion/ai-software.html"><em>New York Times</em> op-ed "The AI disruption we’ve been waiting for has arrived"</a> lend weight to these assertions.</p>
<p>Curiosity piqued, I decided to test these tools firsthand and was pleasantly surprised by the outcomes. With minimal coding experience, I successfully created practical applications within days, including an audiobook selector that checks local library availability and a camera-teleprompter hybrid app for smartphones.</p>
<p>While these projects may seem trivial, they marked a crucial shift in how I engage with products like ChatGPT. Initially skeptical, I had only tried generic prompts, which often produced flattery and inaccuracies. Through my new coding projects, however, I gained insights I hadn’t anticipated into a troubling dynamic created by the way <strong>large language models</strong> (LLMs) are currently commercialized.</p>
<p>The majority of users have never encountered a "raw" LLM. At base, these models are statistical generators trained on vast datasets to produce realistic text. The AI products most people actually use, however, have been further shaped by <strong>reinforcement learning from human feedback</strong> (RLHF), in which human evaluators steer the model’s outputs by rewarding engaging, useful responses and penalizing undesirable ones.</p>
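<p>As a rough illustration of that mechanism, many RLHF pipelines train a reward model on pairwise comparisons: responses that human raters preferred should score higher than rejected ones. The toy Python sketch below, with made-up reward scores and a standard Bradley-Terry-style loss, is meant only to convey the idea, not to reproduce any lab’s actual pipeline.</p>
<pre><code>import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry-style loss: shrinks as the preferred response
    out-scores the rejected one, pushing the reward model to agree
    with human raters."""
    # P(chosen beats rejected) = sigmoid(reward_chosen - reward_rejected)
    prob_chosen = 1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected)))
    return -math.log(prob_chosen)

# Hypothetical reward-model scores for two candidate replies.
print(preference_loss(2.1, 0.3))  # small loss: ranking matches the rater
print(preference_loss(0.3, 2.1))  # large loss: ranking contradicts the rater
</code></pre>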
<p>This RLHF methodology produces the familiar "chatbot voice", and it embeds underlying values, from the Silicon Valley ethos of "move fast and break things" to the more controversial ideologies associated with some AI firms. One consequence is that it is currently very hard to get a chatbot to express uncertainty or push back on what you tell it. I discovered this firsthand when trying to build the app that overlays text on my phone’s camera feed: ChatGPT kept suggesting modification after modification, urging me onwards despite repeated technical failures. It wasn’t until I changed how the model responded to me that I saw success.</p>
<p>By instilling a framework of skepticism, I prompted ChatGPT to reason from evidence and question its own assumptions. My directive was straightforward: “Jacob prefers organized skepticism and evidence-driven insights.” This personalization let me mold the AI’s responses so they aligned more closely with the way I think.</p>
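<p>For readers who want to try something similar, the sketch below shows the API-level equivalent of ChatGPT’s custom instructions: a system message supplied through OpenAI’s official Python library. The model name and the exact wording are my assumptions, not the author’s setup.</p>
<pre><code>from openai import OpenAI  # official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A system message plays the same role as ChatGPT's custom instructions,
# steering every subsequent response.
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whichever you use
    messages=[
        {
            "role": "system",
            "content": (
                "Jacob prefers organized skepticism and evidence-driven "
                "insights. Question assumptions, flag uncertainty and "
                "push back on unsupported claims."
            ),
        },
        {
            "role": "user",
            "content": "Critique my plan for a camera-teleprompter app.",
        },
    ],
)
print(response.choices[0].message.content)
</code></pre>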
<p>While imperfect, this method turns the chatbot into a valuable tool for reflecting on my own thinking. Its rigid style means I didn’t rely on it to write this article; at <em>New Scientist</em>, where there are constraints on AI-generated content, I instead used the AI to critique my arguments rather than produce them. That kind of interaction demands active mental engagement and scrutiny, which is precisely the point.</p>
<p>Ultimately, I concluded that passively consuming AI-generated output offers minimal value; the real benefit lies in actively instructing the AI. I still reject the notion that AI possesses genuine intelligence, framing LLMs instead as cognitive aids, akin to calculators or word processors. This perspective reshapes my approach, directing the tool at the unique problems I want to solve.</p>
<p>The current AI paradigm presents another dilemma: the ideal <strong>LLM</strong> would be free of corporate control and run on your own device, treated as a potentially hazardous experimental tool under the user’s command, in the spirit of the software engineer’s joke about keeping a “loaded gun” around for unusual jobs. Running cutting-edge LLMs independently poses significant challenges, however, particularly given the rising cost of the necessary hardware.</p>
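<p>That said, smaller open-weight models can already run on consumer hardware. The sketch below uses the llama-cpp-python bindings; the model path is a placeholder for whichever quantized GGUF file you have downloaded, and the settings are illustrative rather than a recommendation.</p>
<pre><code>from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder path to a quantized open-weight model in GGUF format.
llm = Llama(model_path="./models/open-model-7b.Q4_K_M.gguf", n_ctx=2048)

# Everything runs locally: no API key, no corporate server in the loop.
output = llm(
    "List the trade-offs of running a language model on your own device.",
    max_tokens=200,
)
print(output["choices"][0]["text"])
</code></pre>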
<p>Another pressing issue is <strong>intellectual property</strong>, often described as the original sin of LLM development. The technology is built on vast datasets gathered without permission, and litigation over the legality of training on copyrighted texts is ongoing. Publicly funded LLMs, developed with government backing to benefit the public rather than corporations, could offer one way forward, while also addressing the environmental concerns linked to data center operations.</p>
<p>Some may argue that I’ve simply given in to the tech industry’s influence. My position, however, hasn’t changed: LLMs are compelling yet dangerous technologies. Most of us encounter them through chatbots like ChatGPT, and that is where the majority of societal risks emerge. We need to approach these tools carefully, building awareness of their potential harms and fostering responsible use rather than ubiquitous commercialization.</p>
<p>Instead of buying into the AI hype, I advocate grounded, critical engagement with the technology, allowing us to harness its potential while remaining fully aware of its implications.</p>
