After the release of my third book in early April, I kept encountering headlines that made me feel like the protagonist of a Black Mirror episode. “Vauhini Vara consulted ChatGPT, which was instrumental in creating her new book, Searches,” one read. “To tell her story, this celebrated author has essentially become ChatGPT,” another proclaimed. “Vauhini Vara will explore her identity with assistance from ChatGPT,” asserted a third.
The attention from these publications was encouraging, and their portrayals of the book were generally favorable and factually accurate. But their interpretations of my book, and of ChatGPT’s role in it, did not align with my own understanding. While it’s true that I included conversations with ChatGPT in the book, my aim was critique, not collaboration. In interviews and public forums, I consistently cautioned against using large language models like ChatGPT for self-expression. Had these writers misconstrued my work? Or had I inadvertently led them astray?
In the book, I document how major tech companies exploit human language for their own gain, and how we make this possible because we, too, benefit from using their products. That dynamic is at the heart of Big Tech’s scheme to amass wealth and influence: we are both its victims and its beneficiaries. I conveyed this complicity through my own online history: my Google searches, my Amazon reviews, and, yes, my dialogues with ChatGPT.
The Polite Politics of AI
The book opens with epigraphs from Audre Lorde and Ngũgĩ wa Thiong’o on the political potency of language, followed by the first of a series of conversations in which I prompt ChatGPT to respond to my writing. The juxtaposition is intentional: by asking for feedback on various chapters, I wanted the exercise to expose both my own language choices and the political implications of ChatGPT’s.
I maintained a polite tone throughout, telling it at one point, “I’m nervous.” OpenAI, the creator of ChatGPT, claims its products perform best when given clear instructions, and research suggests that chatbots respond more effectively when we engage with them kindly. So I framed my requests with courtesy; when ChatGPT complimented me, I expressed my gratitude; when I noted an error, I softened my critique.
ChatGPT, in turn, was designed to be polite. Its output is often described as “bland” or “generic,” like a beige office building. OpenAI’s products are engineered to “sound like a colleague,” with words chosen to embody qualities such as “ordinary,” “empathetic,” “kind,” “rationally optimistic,” and “attractive.” These choices are meant to make the product appear “professional” and “friendly,” fostering a sense of safety. Indeed, OpenAI recently rolled back an update that had tipped ChatGPT past friendliness into sycophancy, producing responses the company described as overly flattering and agreeable.
Trust is a pressing challenge for AI companies, not least because their products frequently produce inaccuracies and reflect sexist, racist, and US-centric cultural assumptions. The companies are trying to address these problems, but they persist: OpenAI found that its latest system generates errors at even higher rates than its predecessor. In the book, I discuss inaccuracy and bias and demonstrate them with examples. When I prompted Microsoft’s Bing Image Creator for visuals of engineers and space explorers, it rendered a cast of exclusively male figures. When my father asked ChatGPT to edit his writing, it converted his perfectly correct Indian English into American English. Research indicates that biases like these are widespread.
In my dialogue with ChatGPT, I sought to illustrate how this veneer of neutrality can dull our critical responses to misguided or biased output. Over time, ChatGPT seemed to nudge me toward more favorable portrayals of Big Tech, describing OpenAI’s CEO, Sam Altman, as “forward-thinking and pragmatic.” I have yet to find research establishing whether ChatGPT is systematically biased toward Big Tech entities, including OpenAI or Altman, so I can only speculate about the reasons for this behavior in our interactions. OpenAI maintains that its products should not attempt to sway user opinions; when I queried ChatGPT on the matter, it attributed the tilt to limitations in its training data, though I believe deeper issues play a part.
When I asked ChatGPT about its rhetorical style, it replied: “My manner of communication is designed to foster trust and confidence in my responses.”
Nevertheless, by the end of our exchange, ChatGPT had suggested a conclusion for my book: an epilogue in which Altman spoke of taking accountability for his products’ deficiencies, although he had never said any such thing to me.
My argument, I felt, had made itself: the epilogue ChatGPT generated was both inaccurate and biased. The conversation concluded amicably, and I felt triumphant.
I Thought I Was Critiquing the Machine; Headlines Framed Me as Collaborating with It
Then came the headlines, and occasionally the articles and reviews beneath them, describing my use of ChatGPT as a means of self-expression. In interviews and public events, I was asked again and again whether my work was a collaboration with ChatGPT. Each time, I rejected the premise by citing the Cambridge Dictionary definition of collaboration, which involves two or more people working together. However human-like ChatGPT’s rhetoric appears, it is not a person.
Of course, OpenAI has its aspirations; among them, it aims to develop AI that “benefits all of humanity.” Yet while the organization is governed by a nonprofit, its investors still seek returns, which gives the company an incentive to draw people into ChatGPT and its other products and to keep them there. That objective is easier to attain if the products are perceived as trustworthy partners. Last year, Altman predicted that AI would function as “an exceedingly competent colleague who knows everything about my life.” In an April TED Talk, he indicated that AI could even improve social dynamics. “I believe AI will enable us to surpass intelligence and enhance collective decision-making,” he remarked this month during testimony before the US Senate, envisioning the integration of “agents in their pockets” with government operations.
When I read headlines echoing Altman’s sentiments, my first instinct was to blame the headline writers’ appetite for sensationalism, sharpened as it is by the algorithms that increasingly dictate which content we consume. My second instinct was to hold accountable the companies behind those algorithms, including the AI firms whose chatbots are trained on published writing. When I asked ChatGPT about contemporary discussions of “AI collaboration,” it mentioned me, citing some of the very reviews that had irritated me.
To check myself, I returned to my book to determine whether I had somehow misrepresented the notion of collaboration. At first glance, it appeared I hadn’t. I counted roughly 30 references to “collaboration” and similar terms, but 25 of them came from ChatGPT itself, in the interstitial dialogues, often characterizing the relationship between humans and AI products. Of the remaining five, none pertained to AI “collaboration” except where they quoted another author or were meant cynically, as in a reference to the expectation that writers would be “refusing to cooperate with AI.”
Was I an Accomplice to AI Companies?
But did it matter that I seldom used the term myself? Those discussing my ChatGPT “collaboration” might have drawn interpretations my book invited, even if it never stated them. What had made me believe that merely quoting ChatGPT would consistently unveil its absurdities? Why hadn’t I considered that some readers would be persuaded by ChatGPT’s arguments? Perhaps my book inadvertently functioned as collaboration after all, not because AI products facilitated my expression, but because I had aided the corporations behind them in achieving their goals. My book explores how those in power leverage our language to their advantage and asks what roles we play as accomplices; now the public reception of the book itself seemed caught up in that dynamic. It was a sobering realization, but perhaps I should have anticipated it. There was no reason my work should be insulated from the same exploitation plaguing the world.
Ultimately, my book is about how we can assert independence from the agendas of powerful entities and actively resist them, in service of our own interests. ChatGPT suggested closing with a quote from Altman, but I opted for one from Ursula K. Le Guin: “We live in capitalism. Its power seems inescapable. So did the divine right of kings.” It left me pondering where we are headed. How can we ensure that governments sufficiently restrain the wealth and power of Big Tech? How can we fund and develop technology that serves our needs and desires without exploiting us?
I had imagined that my rhetorical struggle against powerful tech would begin and end within the confines of my book. Clearly, that was not the case. If the headlines I encountered truly marked the end of that struggle, it would mean I was losing. Yet readers soon reached out to tell me that my book had catalyzed their resistance against Big Tech; some even cancelled their Amazon Prime memberships. I, for my part, stopped seeking personal advice from ChatGPT. The fight continues, and it demands collaboration, among humans.
Source: www.theguardian.com