
TikTok Spied on Journalists and ChatGPT is Challenging Google

This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email [email protected] with any questions.

Happy new year. It’s so good to be back in the studio. How are you?

I am great and I am extremely excited to be back making “Hard Fork.”

It was a lot of fun. One of my favorite things was getting a DM from a high school student who said that, after listening to us, he had used ChatGPT to finish his assignments before finals.

Changing lives, one episode at a time. How was your holiday?

It was good. You know, I caught up on my classical Latin studies. I —

I watched some of the Criterion film collection. I just did a lot of very productive — no, I watched movies and ate cookies and fixed my mom’s router, as a good tech columnist son should do.

Yeah, I did. It was like — it was — she’s got nominally high-speed internet, but it’s like it’s basically dial-up in there. And I was like, what’s going on? After, like, years of suffering through my mom’s bad Wi-Fi, I was finally like, you know, I’m just going to do this. Like, a good son should fix his mom’s Wi-Fi router. So I did.

I mean, routers to me are a sort of final frontier in terms of difficulty. Whenever you have to log into the router, you’re sort of googling, trying to figure out what’s the right address to go to, what is the sort of default system password. But you — you actually made it into the router.

I did, I did. I hacked the mainframe and she’s getting decent download and upload speeds now. So now we can — now we can video chat from my mom’s house.

Can I tell you how I solved the problem of always being asked to fix things at my parents’ house over the holidays?

I’m Kevin Roose. I’m a tech columnist at “The New York Times.”

This week, we talk to “Forbes” reporter Emily Baker-White about why TikTok spied on her and what that means for the app’s future. Also, ChatGPT has Google on high alert and my phone now lives in a box.

Casey, in our last episode before the holiday break we were doing predictions. One of my predictions was that, in 2023, the US government would finally ban TikTok.

Which, at the time, I thought was a very melodramatic prediction.

Yeah, you kind of looked askance at me, like, who is this guy and what is he smoking? But two things happened over the break that made me even more confident that my prediction is going to come true. The first is that the federal government, as part of its year-end spending bill, passed a provision that bans TikTok from government-owned devices.

The second thing, which I think for people who favor a ban on TikTok is kind of a smoking gun, is that the company was found to have surveilled at least three American journalists. One of them is Emily Baker-White from “Forbes,” who’s here with us today. Emily, welcome to “Hard Fork.”

So tell us about this scoop. What made TikTok and ByteDance spy on you?

So I started reporting on TikTok and ByteDance last year, when I was at BuzzFeed News. I reported on something called Project Texas, which is basically the company’s efforts to try to rewire its data systems to narrow the number of people abroad that can access sensitive US user data. And after I started reporting on this, a source leaked me audio from over 80 internal meetings at TikTok and ByteDance about Project Texas, and I published a story about it. And after that, I think TikTok and ByteDance kind of freaked out a little bit and they started a leak investigation called Project Raven.

The goal of the investigation was to find out who was sourcing me. And part of the investigation entailed a ByteDance team — this is a team that doesn’t work for TikTok; it works for TikTok’s parent company, ByteDance — using the TikTok app to track my location via my IP address to try to see if I was meeting with any TikTok or ByteDance employees. Which means they were also tracking their employees’ IP address-based location as well.

I mean, as a reporter, this is one of the scariest things that you can find out, right? All of us, in the course of our work, are talking to folks who are not authorized to speak with us. We take every step we can to protect them. But we also sort of do it assuming that the company is not going to look too hard, in most cases, for who we’re talking to, right? It’s usually not going to be worth it to them. So what can you tell us about how you found out that this was happening? And how did you react?

Yeah, so I found out about Project Raven through sources inside the company. I’m not going to tell you much more than that. But eventually, I was able to review internal materials that let me know about this project. And we first reported on this in October, but we reported on it in a much more general way than we ultimately did in December, in order to protect sources, right? So I wasn’t going to say exactly what I knew, because doing so could have let the company know who was talking to me.

And so that report that we put out in October used a higher level of generality. We said that there was a plan to track the locations of individual US citizens. I knew at the time that I was among those people. But we didn’t say for what purpose and we didn’t say who the people were.

And when we published that story in October, TikTok and ByteDance didn’t actually deny it, but they went on Twitter and they said my reporting lacked integrity. They said that it wasn’t possible that the app could have been used to track location because it doesn’t track GPS-level location. Which I assume is true, but we know that there are other ways to get location, and IP address is one of those ways. And they said that TikTok had never been used to target journalists. And they later admitted, three days before Christmas, that it had.

So there’s a lot there I want to drill down on. But I sort of want to ask, like, when you learn about the ominously-named Project Raven and you learn that you’ve been targeted and the company has your IP address and is using that to try to figure out who your sources are, how did you feel?

Unimpressed. Sort of disappointed, honestly. Like, first of all, I’m a millennial. I don’t like when my phone talks at me without my permission. I just, like — I don’t like autoplay videos in any context. And so, like, I’ve never gotten very into TikTok. I, like, used it a little bit, a couple of times. But like, I just deleted it off my phone and was like, really, guys? Who decided that this was a good cost/benefit decision?

I mean, it’s definitely the case that if you had been conducting all of your communications with your sources over TikTok DM, like, that probably would have been a bad move on your part.

I mean, it is not totally unprecedented for a tech company to do this, right? I mean, a couple of years ago, Uber was found to be looking up journalists’ ride histories. I think Microsoft got in trouble because it looked at a journalist’s Hotmail account. So this kind of thing happens, but it’s rare because it’s so risky.

But also — and just to get into the why this really matters, one of the reasons that TikTok is in such trouble right now, one of the reasons that we’re seeing bans across states and the federal government is so many lawmakers and regulators are concerned that TikTok data could be used against Americans, in part, to surveil them. And until recently, TikTok could dismiss that as a conspiracy theory. And yet, now we know that it was actually true. That the company did, at least in this case, surveil Americans.

So Emily, I’m curious — like, what’s been the fallout at ByteDance and TikTok from your reporting on this Project Raven surveillance operation?

I think a lot of people inside the company are horrified that this happened. Like, they say they have closed this particular data opening for people in China to access US user data. But I think it just shows that a couple of people whose access to user data maybe isn’t unreasonable on its face can pull some data that they’re not really supposed to pull. And at a company of this size, given the number of people who need access to user data for their job, data access controls across geographic boundaries are really hard and really tricky. And I think —

But my understanding is that this wasn’t just some low-level employee in the IT Department sort of getting curious about who you were talking to. Like, this had buy-in from some of the highest levels of ByteDance, right?

Yeah, ByteDance and TikTok have certainly tried to frame this as a few bad actors. If it was a few bad actors, it was a few bad actors receiving orders from their bad actor bosses. So it definitely wasn’t — it wasn’t rogue people. But it shows how easy it would be for rogue people to do this.

I just think it’s a terrible unforced error. Why would you think this is a good idea? And I say that for all of these companies that have done this. Like, what are you guys thinking?

Right. And I think the theme that’s been running through a lot of your coverage, at least as I read it, is that ByteDance and TikTok, they’re in kind of this fight for their life in the United States. And part of how they’re trying to survive this regulatory scrutiny is by claiming and demonstrating that even though ByteDance owns TikTok, they operate at arm’s length. There aren’t people sort of in Beijing who are, like, pulling the strings on TikTok in the US.

And what your reporting has shown is, I think, that those ties are much closer than ByteDance and TikTok have been letting on. That there are lots of Chinese nationals making decisions about US TikTok. That the systems these two companies use are, in fact, a lot of the same tools. That it’s sort of a big mess: how would you even untangle those?

So I think that, to me, felt like the insight that you’ve been reporting toward: that this company is not being totally forthright about the level of involvement that its parent company, ByteDance, has with its US operations. Is that a fair reading?

I think definitely, yes. And one of the things that makes this tricky is, like, there’s nothing nefarious about Chinese nationals making decisions about how to run a tech company, right? Like, that’s a super normal thing.

But I think where the company gets in trouble is its representations about who is in charge, who is making what decisions, and how entangled TikTok and ByteDance really are. Like, I have one source who had a contract with TikTok but got a W2 from ByteDance. I have another source who got some of their checks for TikTok work from ByteDance. Every @TikTok email alias is actually an @ByteDance email alias. Like, the systems are linked. Everybody at TikTok uses ByteDance’s VPN, called Seal. Everybody at TikTok uses ByteDance’s internal, like, Slack equivalent, which is called Lark.

And so, again, there’s sort of nothing nefarious about this. But ByteDance has built dozens of apps. It’s known as an app factory. And at first, TikTok was just another one of those. And so all of the dozens and dozens of internal tools that make the app run — and every major tech company has such tools — were built by engineers in China, and largely have been managed by engineers in China.

And so I think TikTok and ByteDance are really trying to separate all of those systems, but it’s really hard to separate them. And it’s really hard to know if you’ve got them all, just knowing how big the backend of a company of this size is.

So there’s sort of like two buckets of concern in Washington about TikTok, right? There’s the sort of data misuse, surveillance spying concern that TikTok could essentially be used by the Chinese government to spy on the location or the data of American citizens. There’s also kind of this influence bucket, that China could use TikTok and the Chinese government could use TikTok to sort of insert propaganda or pro-Chinese sentiment into its apps so that American users of these apps would get a sort of distorted picture of life in China. Or just that TikTok would be used, in some way, to undermine US interests.

Of those two buckets, which do you think people should be more concerned about?

I love this question. Honestly, I have spent more time reporting on the bucket that — I think Ezra Klein called it the data espionage bucket. And I think that one gets talked about more. But the second one is way, I think, under-discussed, because recommendation algorithms are complex. We know this. But we also know that they can influence public opinion in all kinds of ways.

And I feel like we’ve talked a lot about this when it comes to Facebook, when it comes to YouTube. And somehow, we haven’t talked about it with TikTok as much. And I think we should, because very subtle changes to a recommendation algorithm that is powering many millions of phones can change how people think about an election or a pandemic, right? And we know this from the other platforms. It’s just as true when it comes to TikTok, but now we have this sort of additional layer of geopolitics.

Right. Like, you think about what the Russians had to do to interfere in the 2016 election. And it was like, well, they had to buy a bunch of Facebook ads. They had to create Facebook pages and Facebook groups and a bunch of fake people. Then they had to trick Americans into going to rallies, right, just to sort of sow dissent.

But like, if you’re the Chinese government and you can just make a handful of phone calls to ByteDance — I’m not saying this has happened, but you could just sort of imagine it happening. And they’re like, you know, why don’t we just amplify a lot of content of Americans screaming at each other, and a lot of content about the dysfunctional American politics? And then maybe amplify some content about how beautiful China is and what a great vacation destination it is, and how much people love their leaders there. And then maybe you throw in a viral American flag burning challenge at some point, right?

You could do all of this. And to your point, Emily, there are no fingerprints on the algorithm, so nobody knows where it’s coming from.

Yeah. I think that’s right. And I think it’s worth noting that people have told me that ByteDance has done this before. And we know that the Chinese government has done this before. So on that first part, I reported a story over the summer about an app called TopBuzz, which was a news aggregator app run by ByteDance. It no longer exists. But former employees of ByteDance told me that TopBuzz had essentially, like, stickied or pinned to the top of its feed what they referred to as pro-China messages. I don’t think this was like President Xi standing on a tank type of thing. I think it was, like, cute panda videos. And I don’t know where exactly the orders for this came from, but people told me that it had happened. ByteDance denied that, by the way, when I reported on it.

So there’s that out there. And then we know, from what the Chinese government has done on other social platforms, that they do run influence campaigns. I reported a story with a colleague of mine, Iain Martin, about how the Chinese government had, through an entity called MediaLinks, run a bunch of TikToks about American politics, about specific politicians, before the ‘22 midterms. And TikTok is still the only major social platform out there today that doesn’t label state media accounts.

And they say they’ve been working on this for a long time. They rushed out labels on Russian state media accounts after Russia invaded Ukraine. But the rest of their state media policy has been in limbo. And man, if I worked there, I would be making some noise about getting that out faster. Because if, like all other platforms, they just labeled Chinese state media as Chinese state media, they would be able to say, yeah, governments run these accounts on our platform. They run them on all platforms.

Right. I mean, I think inside TikTok, from the folks that I’ve talked to there, there’s a feeling that this kind of theory that they’re a puppet of the Chinese government is like a xenophobic sort of “red panic” sentiment.

And you know, I thought that for a while. And then I’m like, well, but there is all this documented history of TikTok being used, and other ByteDance products being used in these exact ways. Like, it feels like a conspiracy that we’ve learned, in part through your reporting, is actually true.

Yeah, I was in the same boat as Kevin, also having those conversations with the TikTok employees. And then last summer, I did read your story about TopBuzz. And I thought, wow, this thing that for so long they have sort of dismissed and waved away and said we would never do, you suddenly had this reporting and said, no, it actually did happen. And so, now time and time again, TikTok has told us one thing and then something else has turned out to be true. Which I think has led us to this place now where governors and lawmakers have said enough is enough. We just don’t trust this company.

Right, right. I want to talk about what’s happening in Washington now. Over the past couple of months, this effort to ban TikTok, or otherwise separate the US part of TikTok from ByteDance, really seems to be gaining steam. So can you just catch us up on what sort of pressure is TikTok facing at the moment? Who is taking a look at them? Who is making decisions about their future in the US?

For the past several years, TikTok has been negotiating a national security agreement with CFIUS, the Committee on Foreign Investment in the United States. And CFIUS is a panel of a bunch of government agencies that work together to try to assess the risk caused by foreign ownership of companies and figure out what to do about it. And TikTok and CFIUS thought they were going to be done negotiating this contract a long time ago, several times now. They’ve had a lot of deadlines that they’ve missed, and that’s probably not terribly uncommon.

But I do think as CFIUS has sort of learned more about TikTok, various different fears have arisen. And I think, just sort of going back to Project Texas, this is TikTok’s big effort to ameliorate national security concerns and to rebuild TikTok to sort of sever most of its ties to the ByteDance backend that created it. They’ve spent a huge amount of money on this. They’ve spent a huge amount of time on it. They’re really trying to make it work.

And I think the problem is that I was talking to someone at one point who said it’s like trying to untangle a pile of spaghetti. And I think that’s really true. Just the way that these things are built, data flows all kinds of directions all the time to power, I don’t know, a sales dashboard here or a data pipeline over here that’s helping this monetization tool, right?

Again, this isn’t nefarious data flowage. It’s just data flowage. That’s just how these things are built. And even cataloging and understanding all of the tools, and then all of the ways that data flow inside the tools, is so hard — especially given that a lot of people who built the tools aren’t there anymore — that I think a lot of people are just really worried about being able to prove that they’ve gotten them all.

Mm. Mhm. I want to play you a clip from the C.E.O. of TikTok, Shou Chew, who was interviewed by my colleague, Andrew Ross Sorkin, at our DealBook summit. And in this clip, Shou Chew talked about some of these data access issues. And I just want to play that clip for you and get your reaction to it.

So let me ask you this. So this is FBI Director Chris Wray to the House Homeland Security Committee. He said he’s, quote, “extremely concerned that under Chinese law, Chinese companies are required to do whatever the Chinese government wants them to do in terms of sharing information or serving as a tool of the Chinese government.” What happens in the reality, behind the scenes, in terms of how this actually works or happens?

So the FBI director, through his team, through CFIUS, has access to the discussions and the plans that we are building to solve and address this problem. And they’re very confident that this will address the concerns that he has raised. Now for us, you know, no foreign government has asked us for US user data before. Really, they haven’t. And if they did, we would say no.

We have a transparency report that we publish every quarter, like most other consumer internet companies. And in that report, we’ll spell out the details of the various data requests that various governments around the world have asked of the citizens of their country. And we want to make it very transparent so you can keep your eye on the report to see the kind of requests that we are disclosing.

So one thing is that TikTok has repeatedly said they’ve never been asked for data. This gets back to the sort of porous relationship between TikTok and its parent company. I would like to hear ByteDance answer that question, and I’m not sure they ever squarely have.

And I can’t help but think of a great piece of reporting “The New York Times” did a while back about the limits of Shou Chew’s power, right, and the fact that some decision making about TikTok has happened among ByteDance leadership instead of TikTok leadership. And I think if ByteDance were answering all of those questions, I think we would get a broader picture. Because as we know, at least as of when I was being surveilled, ByteDance employees who never report into the TikTok leadership chain had access to my IP address.

Right. And so — so in other words, if ByteDance had to answer the same question under oath, they would probably have to say, well, yes, in fact, we have been asked for user data by some governments around the world. Also, all of us have reported on platforms like Google and Meta and Twitter, and all of those platforms get requests for user data all the time. And they publish reports where they’re getting tens of thousands of requests. So the idea that TikTok, which is one of the largest platforms in the world, has never gotten a request for user data from a government, it just strains credulity.

Yeah, we do know that that happens all the time, and that these platforms do turn over stuff all the time. I don’t know that ByteDance would answer the question differently. I just want to point out that they haven’t answered it. I’d like to hear their answer.

Right. So right now, TikTok is furiously trying to pull off this Project Texas thing that would involve migrating all of its US operations to servers controlled by Oracle and sort of making the division between TikTok and ByteDance more clear and hardening it. But it seems like all of this is going to end up being too little too late for regulators in Washington.

Politico reported just last month that the Department of Justice and the Treasury Department were in sort of the final stages of a deal with TikTok, and that there was some disagreement about whether TikTok would be forced to sell its US operations to an American company, or whether it would just face some kind of lighter punishment, like having to do this Project Texas thing. Maybe set up, like, a security council made up of US citizens. So what do you think the resolution here is likely to be? I know you’re not a fortune teller, but do you have a sense of where this is going to end up? Are we going to see TikTok sold to a US company this year?

Sorry for that lame answer, but — I think it is more likely that CFIUS will mandate a sale to a US company than it previously was. I’ll say that. I think there is definitely momentum in that direction. There are also lawmakers of both parties, people in both houses of our legislature, who are saying too little too late. Who are saying, like, hey, CFIUS, y’all better do something or else we’re going to. And I think that gets to one sort of overarching point here, which is that, right now, there’s no national security agreement.

And so for the people who are worried that TikTok is a national security risk and needs a solution, right now there is no solution in place, whatever people want that solution to be. So I understand why people are getting impatient.

You also reported — one of your stories last year featured some TikTok and ByteDance employees kind of talking about Project Texas, and basically implying, like, it’s kind of all for show. Like, yes, the data will be on Oracle servers, but really, it’s going to be us running our own virtual machines. And so Oracle won’t really be able to control what we do. So do you get the sense that this is just kind of a project for PR purposes? Or will this actually take away ByteDance’s ability to control TikTok in any meaningful way?

I don’t think we know that yet. I would say it seems a little cheap to say it’s just for show. They have stood up departments. They have paid contractors gazillions of dollars to help them catalog all their internal tools. If it’s for show, it’s really expensive for show.

But I think people are still worried that it won’t be a complete fix. And I think that is maybe the fairer way to say it: it might not be good enough. I think saying it’s just for show probably minimizes a lot of work that a lot of people have done who are really trying to fix a problem. But I understand why skeptics say it might not be good enough.

Also, it’s like, at every company, there are people acting in good faith to do the right thing and there are people trying to undermine them. And the question is, who has the most power in this situation, right? I think there have been a lot of good faith efforts within TikTok, based on my own conversations with some of their executives, in which they’re trying to do everything that Emily just pointed out. And then, thanks to Emily’s reporting, we also know that there are people spying on them.

I will say, just as an aside, it’s very funny to me that the question of whether TikTok is above board is now being left to Larry Ellison’s Oracle Corporation, which I would say is historically one of the most ruthless and conniving companies of all time. Larry Ellison is also famous for cheating to win the America’s Cup yacht race. So if I were personally choosing who I wanted to audit TikTok’s books, it would not be the Oracle Corporation.

If you don’t know about Larry Ellison cheating to win a yacht race, ask ChatGPT to tell you about that.

It’s a legendary story in the history of American technology executives. So Emily, after all the reporting you’ve done on TikTok and ByteDance, do you think TikTok is a national security risk to the United States? Should it be banned?

Those are two different questions. I don’t necessarily think a TikTok ban is the best answer because I think this won’t be the only time this ever happens. We live in a global interconnected technological world. And whether it’s an app that has ties to the Iranian government or the Saudi government or whatever, there will be apps that have ties to governments with which the US has less than cozy relationships.

I think it is reasonable to worry about how governments, including our own, want to use the massive amount of information available through these tech giants that we use and rely on all the time. I think just saying TikTok is a national security risk, we need to ban it — first of all, it doesn’t create useful precedent for the next time this happens. And ideally, we will have a system where someone figures out what the principles are, figures out what the risks are, and figures out what we think we need to do to mitigate those risks. I also think a lot of the risks posed by TikTok are posed by a bunch of domestic companies too.

Right. This is sort of the “well, Facebook collects data on us all the time too” argument.

Totally. Collects data and also influences what information we consume. And so there are people who talk about how we should have a national data privacy law, and that might be a good thing. There are people who talk about how we should mandate more transparency about how companies influence what content we consume, and that seems like a good thing. And there may be another sort of geopolitical international relations type of thing that you would want to layer on top of things for companies that have some relationship with foreign governments too. But I guess just saying, “Let’s ban TikTok, it’s scary and bad,” doesn’t seem like the most sophisticated solution to the problem.

So I asked TikTok to comment on the revelations that its employees had been spying on Emily and others, and this is what a spokeswoman told me over email. Quote, “The misconduct of certain individuals who are no longer employed at ByteDance was an egregious misuse of their authority to obtain access to user data. This misbehavior is unacceptable and not in line with our efforts across TikTok to earn the trust of our users. We take data security incredibly seriously and we will continue to enhance our access protocols, which have already been significantly improved and hardened since this incident took place.”

Two final questions. One, is there anything you’d like to say to the ByteDance employee who’s monitoring this podcast recording?

And last question, if TikTok is banned in the United States, in part because of your reporting, are you ready to face the wrath of hundreds of millions of teenagers? No, millions of teenagers in the US.

Eh. I think the kids are smarter than we give them credit for. And I think if they’re angry at regulators for doing a thing, they won’t just be angry because their memes are gone. They’ll be angry because they think the solution wasn’t the best solution. I don’t know. I have more faith in teenagers than other people, I guess.

Emily Baker-White, you’re a legend. Thank you for coming.

So news broke this week that Microsoft and OpenAI are working together on a ChatGPT-powered version of Bing. This was a story in “The Information” which was sourced to two people with direct knowledge of the plans. It said basically Microsoft is going to launch a version of Bing that uses the AI behind ChatGPT to answer some search queries, and that Microsoft hopes that this new feature, which it wants to launch before the end of March, is going to help it compete with Google.

Microsoft is doing it. BingGPT is going to be a reality.

Yeah, so great report in “The Information.” I learned so much about Bing. And one of the things that I learned was that apparently when Microsoft signed the deal with OpenAI to work together in 2019, this was essentially always the plan. And in fact, they’ve been using these earlier versions of GPT in various aspects of Bing. The reason we didn’t know this until this week is because no one has used Bing since 2019.

But if you were to go there, you would apparently see like rudiments of GPT already in the search engine. But of course, what is much more interesting is the idea that we had sort of talked about, which is, man, what if there is a search engine that delivers results that look much more like what we’ve all been enjoying on ChatGPT, but in something like Google or Bing?

Right. And you had this great riff about how you had started using ChatGPT for different kinds of searches, like what kinds of shoes should a man have in his wardrobe, and how the results you were getting were a lot better than results you were getting from Google. And apparently you weren’t the only one who felt that way, because we also learned that inside Google, ChatGPT is causing a lot of chaos.

So my colleagues at the “Times,” Nico Grant and Cade Metz, talked to a bunch of people inside Google and reported in a story on December 21 that Google’s management had declared a code red, which was essentially the corporate equivalent of pulling the fire alarm over ChatGPT. There were worries among very high-ranking Google executives that ChatGPT could, in fact, take a bite out of Google’s extremely profitable, extremely well-entrenched search business, and the company had basically redirected a ton of engineers and product people to trying to implement some kind of competitor to ChatGPT as quickly as possible.

Which I just thought, what an amazing thing. I mean, Google, one of the most stable internet businesses of all time, one of the most profitable. And Google Search is just this kind of unassailable monopoly of just cash printing, and has been for so long. And here comes OpenAI and this ChatGPT thing, and a couple of weeks after this thing goes public, Google is pulling the fire alarm.

Yeah. I mean, so you know, I was talking to Google people and they tend to talk about ChatGPT sort of dismissively, right? Like, they like to call it a demo, you know? And yet here, along came this demo, and next thing you know, they’ve got a code red. Which, by the way, I’ve tried to look into what it means to declare a code red inside a tech company. My familiarity with code red is mostly from the classic film “A Few Good Men.”

The two code — within it, there are two code reds. But it does seem like the company is trying to figure this out. And of course, I can’t wait to see what they’re going to do. Look, we know that they have a large language model of their own. We know they’re very impressed by it. They have talked about it nonstop for years. But they have a lot of really tricky issues to work out with how to release it to the public.

And so this is a really interesting one for me, because it’s a rare case where the issue is much less about the technology, I think, because I think both sides sort of have a good and working thing. And it’s much more about what kind of products do you build? What are the user interfaces? What are the safety measures that you put into place? And what’s the business model? And those are questions that Google has not had to ask itself in a serious way, in an existential way, for a really long time.

Right, it’s just taken for granted that there’s this search machine that is just going to keep printing cash, quarter after quarter after quarter, billions of dollars a quarter that it just doesn’t really have to work for all that hard, because it’s Google. Because when you want to learn something, you go to Google, because Google is the verb that has taken root in our culture as, like, the thing that means “to search for something.” It is so much more entrenched than, like, TikTok or Facebook or even any of these other apps that just seem kind of untouchable. Google is kind of bedrock in the tech world. And so it’s really interesting to see how quickly they have gone from kind of dismissing this thing to now panicking over it.

Yeah. Well, and look, I’m somebody who has been really critical of Google Search in recent years because I feel like they have come to take their users’ presence for granted, right? We’ve talked about how often when you search for something on Google, you don’t really get back an answer. You get a research project, right?

You have to search on Google, say “Reddit” and then whatever you want to find, if you want to find an actual answer that’s written by a human. And it’s been that way for years. It’s totally maddening.

There’s this whole industry of search engine optimizers whose job it is essentially to ruin Google search results and just fill them up with sponsored crap, right? And so what the OpenAI folks have shown with ChatGPT is that a different way is possible. And that even though the search results that you’re getting from ChatGPT are riddled with errors, they can provide some offensive stuff, and they do rely on just sort of, like, mulching a lot of labor from writers and journalists into this undifferentiated, like, pablum of sentences, it’s still very useful and it can be very entertaining.

And so I think this is a case where the genie is out of the bottle a little bit, and now we have to see who’s going to capitalize.

Totally. And I have a couple of things to say about this. One is what an amazing strategic move by Microsoft. I mean, years ago, to make this investment. I think their billion-dollar investment in OpenAI is going to go down as one of the greatest technology investments of all time.

I think it goes Google buying YouTube, Facebook buying Instagram, OpenAI investment by Microsoft. Like, I just think if this makes Bing a legitimate competitor to Google, it will be such a huge deal to Microsoft’s future that it’s going to make their initial investment look so smart.

And let me just put a couple of numbers on that from the “Information” story. So Bing, which we all sort of laugh about as a search engine, despite being a sort of also-ran in the search market, made $11 billion last year. Or rather, we should say, Microsoft’s ad business, for which Bing accounts for the majority, made $11 billion last year. That’s not bad.

Twitter’s revenue in 2021, about $5 billion, OK? Google makes 10 times that.

I almost fell off my chair when I read this. But yes, that’s correct.

So then you have Google, which is making about 10 times what Bing is making in search results, right? So if Satya Nadella, the C.E.O. of Microsoft, can take Bing from $11 billion to $50 billion over the next few years, you’re absolutely right. I think we will already look back at him as, like, a generationally-talented C.E.O. But this — man, this could really cement his legacy.

Totally. My other thing that I’m wondering about is cost here. Because something that I heard from folks who understand search and large language models is that ChatGPT is not free to run, right? Every query that you run, it’s been estimated, costs, like, a cent or two in just computing power. You have to pay for basically this huge machine running on these giant supercomputers.

And so people who are close to Google were sort of pooh-poohing ChatGPT and saying, like, we could never do something like that, because I think Google processes something like 8 and 1/2 billion queries a day. And so if you had to ask a large language model like a ChatGPT every time you ran a query on Google, like, that would add up very quickly. You’d be spending millions of dollars a day just for the processing power to run the large language model in the background.
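That back-of-the-envelope math can be made concrete. Here is a minimal sketch using the ballpark figures mentioned above, roughly 8.5 billion daily queries and about a cent of compute per query, which are the estimates discussed in the conversation, not confirmed numbers:

```python
# Rough estimate of what running an LLM behind every Google search could cost.
# Both inputs are the ballpark figures discussed above, not confirmed numbers.
queries_per_day = 8_500_000_000      # ~8.5 billion searches per day
cost_cents_per_query = 1             # ~1 cent of compute per LLM-backed query

# Work in integer cents to avoid floating-point rounding.
daily_cost_dollars = queries_per_day * cost_cents_per_query // 100
annual_cost_dollars = daily_cost_dollars * 365

print(f"Daily:  ${daily_cost_dollars:,}")    # Daily:  $85,000,000
print(f"Annual: ${annual_cost_dollars:,}")   # Annual: $31,025,000,000
```

At those rates, “millions of dollars a day” is, if anything, an understatement: a cent per query at Google’s volume works out to tens of millions of dollars a day, which is why the per-query inference cost is such a central question.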

So I’m very curious how Microsoft is going to scale this, whether it really is going to cost them a cent or two every time someone searches on Bing. Because that could get expensive really quickly.

Yeah. And so the question becomes, well, how do you make money from a search engine like this? Search is a really good business because, so often, people are essentially telling you what they want to buy, right? You’re googling hotels and flights and cars and clothes, and there are all sorts of ways that you can just sort of stick an ad in there and then you wind up making a ton of money when that person converts into a sale.

ChatGPT, I think, does offer a lot of similar promise. When I say, hey, what kind of shoes should I have in my wardrobe, instead of just saying, well, here are the 10 kinds of shoes, there’s no reason ChatGPT couldn’t say, and here’s where to buy them, and here’s some images of those shoes, right? And here’s the ones that are on sale right now.

So I think the question is, will ChatGPT-like search results be so much better that they convert better, and the resulting sales essentially offset the increased computing costs, at least in the short term? And then over the long term, does Moore’s Law kick in, and the stuff just basically becomes as cheap as Google Search over the next 5 to 10 years?

Right. I mean, that’s the bet that I would be making if I were Google, is like, yes, it costs a cent or two to do a query on ChatGPT now, but that’s not going to stay that way forever. I do think there’s an interesting sort of conundrum in here. There was someone quoted in the “Times” story who I thought made a really interesting point. It was a former Google employee who now runs a startup. And they said that plugging something like ChatGPT into a search engine like Google could actually hurt the ads business. They said, quote, if Google gives you the perfect answer to each query, you won’t click on any ads.

So basically saying, like, if your shoe query does come back with the perfect set of shoes, why would you need to click on ads? Because you’ll have the answer right there and you can go look for it yourself. Like, is there something that that person is missing about the relationship between the quality of the search result and how likely you are to click on an ad?

Well, a huge part of Google’s ad business is putting their ads on websites, right? So Google is sort of synonymous with the web because it funds so much of the web. I think the real issue here is that if ChatGPT spares you from having to browse the web, then yes, that is a huge impact on their business, right? And I think it’s also going to just, frankly, trigger regulatory concerns. So this is probably a good time to talk about the fact that, over the years, Google has routinely gotten in trouble for doing stuff that I just don’t think is that big a deal. For example, they will show you a snippet of a news story in Google News along with a headline, and they had to stop doing this in Spain. Google News pulled out of Spain for years because it was illegal to show tiny snippets of text from these websites without paying a licensing fee, right?

Imagine what regulators are going to say around the world when Google says, we took the entire web, we put it into a blender, and we trained a large language model on it. And we will now serve you the entire knowledge of humanity with no links back to the source material. Google will be, like, under a raid, right? Like, they will try to shut the company down if it does that. And so I think it’s in a really precarious position.

Now, right now, I think regulators, for the most part, have not heard of ChatGPT, so this probably isn’t on their radar as much.

Except the ones who listen to “Hard Fork.” Shout out.

And by the way, you’re the best regulators in the entire world.

Let us know if you want to come on the show. But this really is going to be, I think, one of the next turns of the screw, is when people are saying, hey, wait a minute. What is the actual source material for all this stuff? Why aren’t you crediting any of the sources? Why aren’t any of the people who contributed their labor to this being compensated for it, right? And you can just sort of imagine a sort of big regulatory crackdown coming that tries to stop this stuff from taking the next steps.

Yeah, I think that is possible. I think it’s going to take at least a decade. We’re still catching up to the last big wave of technology. But I do think this is a really interesting time at Google, which is sort of navigating all of this, plus these larger sort of macro trends we’ve been talking about, the interest rates and cost cutting and the advertising slowdown.

So I think this is, in general — like, one of the things that I’m going to be looking at this year as one of the emerging rivalries in tech is between Google and Microsoft, who have competed in various ways over the years, but never when it comes to core search. Bing has never, I would guess, kept many Google executives up at night worried about the future of their own search product, and maybe now they will.

I think this is absolutely going to be one of the most delicious stories of the year. And I think you might see Google Search change more in the next two years than it has in the past ten. And frankly, I think that’ll be good for us.

Oh, my god, I can’t believe — like, it needs a refresh. And if this is the kick in the pants that gives them the desire and the resources to make Google Search better somehow, like, I am all for it.

But I do think — one more thing I’ll say on this is I do understand the position that Google — some folks inside Google have taken, which is we can’t deploy this safely, so we’re not going to do it yet. We’re going to keep it internal. We’re going to keep testing it. I do think that OpenAI, as a startup, has more leeway to put something out there that isn’t perfect and can be misused because they’re not under regulatory scrutiny by every agency in the world. They’re not a trillion dollar internet company. They have an investment from one in Microsoft, but they’re not under the same level of scrutiny as Google.

And so I understand and appreciate the position that Google is in. It built this stuff. It came up with the technology that powers large language models.

Yeah, they invented the transformer. And yet, they have been like lapped by these smaller outfits like OpenAI who just come in, take this code, make their own versions of it, and then release it to the public and don’t have to spend a lot of time sort of dealing with regulators and politicians yet, although that could happen. So I understand why they’re moving slowly, but I think this is a moment where they either need to start making moves to improve Google Search or risk getting left in the dust.

I think Google has benefited enormously over the past half a decade or so from being a generally cautious company. But I do think it’s moving into a mode where it’s probably going to have to take on some more risk.

One other thing that’s sort of incredible to me about this story is that ChatGPT, this thing that is now reportedly being built into Bing, still has no idea whether the stuff it’s saying is true or not. Like, that feels like a pretty big liability for me, if I’m Microsoft and I’m looking at this. Like, you can get good search results from ChatGPT, but you can also get horrible search results from ChatGPT. So how do you think it’s going to navigate that? Is it going to put a caveat on all the search results saying this may or may not be a hallucination of our AI language model? Like, how do they deal with the fact that these products still sometimes spew out stuff that is just totally wrong?

It’s the right question. And this is one where I think that we’re all just going to have to undergo a societal evolution where we understand that we’re using these tools that might not always be right. You know, I remember growing up and being told by professors not to rely on Wikipedia because it couldn’t always be trusted. And what did we do? Did we stop using Wikipedia? No, we used it even more. We just sort of understood that we were probably going to need a second and third source before we put anything into a paper that we were writing.

I think you’re going to see something happen similar with GPT, right, where it is a starting point for a lot of people. It’s like me with our now overused example about buying shoes. It’s like, I might look up a second source, but it gave me a very good start and now I can take that and take the next steps much more easily. And so I think if we see these things more as a first step rather than a final step, we’ll probably be able to make great use of them.

So Kevin, you mentioned something to me the other day that I need to hear everything about. You have begun putting your phone into some sort of box.

Yeah, so this is part of my New Year’s resolution, 2023. I am going into therapy with my phone. We are going to couples counseling, me and my phone.

No. No, no, no. I have a long — we have a long and tortured relationship, me and this phone. So a couple of years ago, I did a phone detox, like a 30-day program that was meant to separate me from my phone, which worked pretty well. Then COVID happened. Then I had a baby. Then I had all this time where I was just sitting around, like, feeding bottles and stuff. And so my phone use started to creep back up, and I’m not very happy about that. So —

Enter the box. I don’t want to say too much about this particular box because I just started using it. And honestly, I’m not sure if I like it or not. But it is a box. It’s called Aro. It has some chargers in there, like little slots. You can put four phones in there. So I put my phone in there and it has a sensor on it. So when I put my phone in, it starts a timer. And it’s like basically in phone jail. And then when you take it out, it sort of congratulates you and says, hey, you got an hour of your life back or something like that. So it basically —

Wait, when you say it congratulates you, like, is there like a voice that —

But you can’t read it because your phone is in the box.

Well, you take your phone out and it says you successfully put down your phone for three hours, you miserable person. I guess that’s a victory. Feel good about that if you want.

So one thing that really hurt my phone use over the pandemic was that I started sleeping with my phone in my bedroom again.

And every expert in this field will tell you, like, that is the single worst thing you can do, because you reach for your phone, you get the light in your eyes, it’s over. You’re not going back to sleep.

So that definitely happened to me. So part of this is the box, it’s in my kitchen. It’s not in my bedroom. So I put my phone to bed, put my AirPods to bed, and I go to bed.

Right. OK, backing up then, so one time of the day that you use the box is at night. Are there other times during the day when the phone goes in the box?

Yes. Like, during dinner with the family, I’ll put my phone in the box. I will say, you can still hear it buzzing from inside the box. So there’s a little bit of, like, I need to figure out how to deal with that. But it has been good so far. I’m not — I’m horrible. Like, I’m not a good user of this box yet, but I’m getting there.

So I’ll show you this. It’s cool. It’s like a little app and — this is not sponsored content. In fact, I bought a lifetime subscription yesterday. So if I try to go to, say, Instagram on my phone, I get a little thing.

Wow. It just — like a sort of a takeover of your screen just happened and this app just said, “It’s time to take a breath.”

So it basically puts, like, a 5 or 10 second pause in between when you open an app. And now you can see it says — it says four attempts to open Instagram within the last 24 hours. I did just install this last night. And then you can choose, like, I don’t want to open Instagram or continue to Instagram. So it sort of like just puts a little speed bump in between you and your problem apps.

Right. It’s like you have to ask like the principal if it’s OK to leave class early. And the principal says, well, this is the fourth escape attempt you’ve made, but we’ll allow it this time.

It’ll still let you. It’s not a — the thing I like about it is it’s not like a hard and — I’ve used these blockers before where it’s like do not let me open Twitter for the next 12 hours.

Which is kind of a brute force method. What I like about this is you can still get there. It’s just kind of annoying and it takes a little while, and it kind of makes you only do it when you really want to use the app, rather than just sort of habit.

It forces you to be more intentional about the way that you use these things.

OK, so now that your phone is in a box, and when it’s out of the box, it’s telling you not to use all the apps on your phone, like, what has been the effect of this on your life so far?

It’s just complicated it a lot. Like, it just means that I spend a lot of time pretending to do deep breathing waiting for my thing to go away.

“Is it phone time yet, box? Oh, box, when may I open you?”

I do feel a little bit like I’m the hamster in the cage just flicking at the lever to get another treat. Like, it makes the addiction dynamics so much clearer.

I mean, this is such an interesting story to me because I know there are so many people out there who have such a tortured relationship with their phone, with the apps on their phone. They know that they need it in their life, but they don’t like the way they’re using it. They wish they used it less. They wish they had deleted some of the apps on it. And yet, it has never once occurred to me to take any of the steps that you’re taking.

You’re not like a — you’re not a screen time worrier?

You know, I think I was before the pandemic. And then I think the pandemic happened and I thought, the debate is over and screen time has won. Like, the screen is the only thing that is tethering me to human connection right now, and so I’m just going to lean into the moment that I’m in. And it is like a virtual metaverse moment, baby.

I’m like ready to go full-on bionic. I want just, like, a ticker of Mastodon posts going through my mind at all times.

Wow. So I do think that something happened during COVID where it was like — it was basically futile and useless to try to lower our screen time, because what else were we going to do? But I do think now that life has stabilized a little bit, I think actually what tipped me over toward being worried again about myself was having a kid, honestly, and watching the way — my kid’s too young to use a phone, but someday, he will. And already, he’s obsessed with it. Like, he will crawl across the room to get to the phone because he sees me staring at it all the time.

So that’s where I’m like I don’t actually like how much I’m using my phone. I’m trying to be much more intentional about it. I’m not going cold turkey. I’m not doing anything — I’m not getting one of those, like, dumb phones that they sell now for hardcore people. But I am trying to be more intentional, putting in these little speed bumps. And this box, I don’t know if it’s going to stick, but I’m trying it.

There is something really almost chilling about looking into the eyes of a baby that is looking at a screen, you know? Where they go from infinitely distractible and sometimes irritable, or sometimes really engaged with you, and then they just see the flicker of a screen and they become hypnotized. And it doesn’t even matter what’s on the screen. There’s something about the glow of the pixels and they’re just completely transfixed. So I can understand having a kid and thinking, I want to change my relationship with this thing.

I think a question that I have is, like, is the phone really the problem? Like, are you solving at the level of the problem here? Or is the real issue like maybe something a little bit higher level about, well, why are you opening Instagram 30 times a day?

I’m trying to figure out a way to have this conversation without making you do therapy live on the podcast. But like — I mean, look, I have been sort of gently laughing at this whole situation the whole time. But look, the truth is, I have also had to come up with my own more intentional relationship to certain of the apps that I use at various points in my life. You know, the gay hookup apps like the Grindr. Man, did I have to work on my relationship with that. And there were times when I would finally delete that thing from my phone and it was like I had finally shut Pandora’s box, you know?

So I very much sympathize with this idea that sometimes, depending on the app and depending on your personality, you just will not be able to pull yourself away from it. At the same time, again, I wonder — it’s like, well, is the app really the issue here? Is there maybe just something else going on with you? And the reason I think that’s important is because it goes to the question of, is this box actually going to work for you or not, in the long run?

Totally. And I think I like that there are technologists out there who are trying to use technology to solve a technological problem, right? That we have these limiting apps, these speed bump apps, these boxes that have Bluetooth and send you push alerts and basically gamify the process of getting off your phone. I appreciate that, and I think that’s a worthy space for innovation. I also just — I think you’re probably right that there are some problems that technology can’t solve, and this may be one of them. So I’m going to look deep within myself and I’m going to continue to try to use this box until it tells me I’m doing a good job, because I am a hamster in a cage.

Well, if nothing else, putting your phone in a box does seem like a good way to keep TikTok from spying on you.

But you know, I think — because this is a technology podcast and we do need to keep up on the trends, my commitment to you and the listeners is that I’m actually going to use my phone more this year. That’s my big resolution, is I am going to — here’s the thing. The more time I spend on the internet, the better that my life gets. Like, I wish I could tell you that spending all day reading Mastodon posts has left me an invalid, but no. It’s like, things are going great.

Wow. You are just a cybernetic citizen of the future. No, I want to be clear that I am not anti-phone. One of the things that my phone detox coach helped me with a few years ago, this woman named Catherine Price, who’s amazing. Shout out, Catherine.

A real thing, by the way. You did have a coach. Like, this was a — there was a process you went through. Yeah.

Oh, yeah. Total process. She has a 30-day program. She put me through it and it was life changing. It genuinely, like, made all my relationships better. Like, really gave me time back. I was much more productive. So I really think there’s like a benefit in not getting off your phone, but in resetting your relationship with it. I think that’s just a healthy thing to do. I think, for me, I have to make sure that I am still in the driver’s seat of that relationship, right? That my phone is working for me and not the other way around. Because it is so easy to just let the phone become the boss.

Yup. And you know what? In 2023, we want you to be your own damn boss. You tell that — you put that phone in the box and you say, I got this, phone.

Yeah. So my phone is right now not in a box, but it’s going to go back in a box as soon as I get home. And if you’re trying to reach me —

That’s it for this week. “Hard Fork” is produced by Davis Land. We’re edited by Paula Szuchman. This episode was fact checked by Caitlin Love. Today’s show was engineered by Alyssa Moxley. Original music by Dan Powell, Elisheba Ittoop, Marion Lozano, and Rowan Niemesto. Special thanks to Hanna Ingber, Nell Gallogly, Kate LoPresti, and Jeffrey Miranda. As always, you can email us at [email protected] That’s all for this week. See you next time.


This week, we talk to “Forbes” reporter Emily Baker-White about why TikTok spied on her and what that means for the app’s future. Also, ChatGPT has Google on high alert and my phone now lives in a box.

Casey, in our last episode before the holiday break we were doing predictions. One of my predictions was that, in 2023, the US government would finally ban TikTok.

Which, at the time, I thought was a very melodramatic prediction.

Yeah, you kind of looked askance at me, like, who is this guy and what is he smoking? But two things happened over the break that made me even more confident that my prediction is going to come true. The first is that the Biden administration, as part of its spending bill, passed a provision that would ban TikTok from government-owned devices.

The second thing, which I think for people who favor a ban on TikTok is kind of a smoking gun, is that the company was found to have surveilled at least three American journalists. One of them is Emily Baker-White from “Forbes,” who’s here with us today. Emily, welcome to “Hard Fork.”

So tell us about this scoop. What made TikTok and ByteDance spy on you?

So I started reporting on TikTok and ByteDance last year, when I was at BuzzFeed News. I reported on something called Project Texas, which is basically the company’s efforts to try to rewire its data systems to narrow the number of people abroad that can access sensitive US user data. And after I started reporting on this, a source leaked me audio from over 80 internal meetings at TikTok and ByteDance about Project Texas, and I published a story about it. And after that, I think TikTok and ByteDance kind of freaked out a little bit and they started a leak investigation called Project Raven.

The goal of the investigation was to find out who was sourcing me. And part of the investigation entailed a ByteDance team — this is a team that doesn’t work for TikTok; it works for TikTok’s parent company, ByteDance — using the TikTok app to track my location via my IP address to try to see if I was meeting with any TikTok or ByteDance employees. Which means they were also tracking their employees’ IP address-based location as well.

I mean, as a reporter, this is one of the scariest things that you can find out, right? All of us, in the course of our work, are talking to folks who are not authorized to speak with us. We take every step we can to protect them. But we also sort of do it assuming that the company is not going to look too hard, in most cases, for who we’re talking to, right? It’s usually not going to be worth it to them. So what can you tell us about how you found out that this was happening? And how did you react?

Yeah, so I found out about Project Raven through sources inside the company. I’m not going to tell you much more than that. But eventually, I was able to review internal materials that let me know about this project. And we first reported on this in October, but we reported on it in a much more general way than we ultimately did in December, in order to protect sources, right? So I wasn’t going to say exactly what I knew, because doing so could have let the company know who was talking to me.

And so that report that we put out in October used a higher level of generality. We said that there was a plan to track the locations of individual US citizens. I knew at the time that I was among those people. But we didn’t say for what purpose and we didn’t say who the people were.

And when we published that story in October, TikTok and ByteDance didn’t actually deny it, but they went on Twitter and they said my reporting lacked integrity. They said that it wasn’t possible that the app could have been used to track location because it doesn’t track GPS-level location. Which I assume is true, but we know that there are other ways to get location, and IP address is one of those ways. And they said that TikTok had never been used to target journalists. And they later admitted, three days before Christmas, that it had.

So there’s a lot there I want to drill down on. But I sort of want to ask, like, when you learn about the ominously-named Project Raven and you learn that you’ve been targeted and the company has your IP address and is using that to try to figure out who your sources are, how did you feel?

Unimpressed. Sort of disappointed, honestly. Like, first of all, I’m a millennial. I don’t like when my phone talks at me without my permission. I just, like — I don’t like autoplay videos in any context. And so, like, I’ve never gotten very into TikTok. I, like, used it a little bit, a couple of times. But like, I just deleted it off my phone and was like, really, guys? Who decided that this was a good cost/benefit decision?

I mean, it’s definitely the case that if you had been conducting all of your communications with your sources over TikTok DM, like, that probably would have been a bad move on your part.

I mean, it is not totally unprecedented for a tech company to do this, right? I mean, a couple of years ago, Uber was found to be looking up journalists’ ride histories. I think Microsoft got in trouble because it looked at a journalist’s Hotmail account. So this kind of thing happens, but it’s rare because it’s so risky.

But also — and just to get into why this really matters, one of the reasons that TikTok is in such trouble right now, one of the reasons that we’re seeing bans across states and the federal government, is that so many lawmakers and regulators are concerned that TikTok data could be used against Americans, in part, to surveil them. And until recently, TikTok could dismiss that as a conspiracy theory. And yet, now we know that it was actually true. That the company did, at least in this case, surveil Americans.

So Emily, I’m curious — like, what’s been the fallout at ByteDance and TikTok from your reporting on this Project Raven surveillance operation?

I think a lot of people inside the company are horrified that this happened. Like, they say they have closed this particular data opening for people in China to access US user data. But I think it just shows that a couple of people who have access to user data, in a way that maybe isn’t unreasonable on its face, can pull some data that they’re not really supposed to pull. And at a company of this size, given the number of people who need access to user data for their job, data access controls across geographic boundaries are really hard and really tricky. And I think —

But my understanding is that this wasn’t just some low-level employee in the IT Department sort of getting curious about who you were talking to. Like, this had buy-in from some of the highest levels of ByteDance, right?

Yeah, ByteDance and TikTok have certainly tried to frame this as a few bad actors. If it was a few bad actors, it was a few bad actors receiving orders from their bad actor bosses. So it definitely wasn’t — it wasn’t rogue people. But it shows how easy it would be for rogue people to do this.

I just think it’s a terrible unforced error. Why would you think this is a good idea? And I say that for all of these companies that have done this. Like, what are you guys thinking?

Right. And I think the theme that’s been running through a lot of your coverage, at least as I read it, is that ByteDance and TikTok, they’re in kind of this fight for their life in the United States. And part of how they’re trying to survive this regulatory scrutiny is by claiming and demonstrating that even though ByteDance owns TikTok, they operate at arm’s length. There aren’t people sort of in Beijing who are, like, pulling the strings on TikTok in the US.

And what your reporting has shown is, I think, that those ties are much closer than ByteDance and TikTok have been letting on. That there are lots of Chinese nationals making decisions about US TikTok. The systems that these two companies use are, in fact — use a lot of the same tools. That it’s sort of a big sort of mess of how would you even untangle those?

So I think that, to me, felt like the sort of insight that you’ve been reporting toward is that this company is not being totally forthright about the level of involvement that its parent company, ByteDance, has with its US operations. Is that a fair reading?

I think definitely, yes. And one of the things that makes this tricky is, like, there’s nothing nefarious about Chinese nationals making decisions about how to run a tech company, right? Like, that’s a super normal thing.

But I think where the company gets in trouble is its representations about who is in charge, who is making what decisions, and how entangled TikTok and ByteDance really are. Like, I have one source who had a contract with TikTok but got a W2 from ByteDance. I have another source who got some of their checks from TikTok from ByteDance. Every @TikTok email alias is actually an @ByteDance email alias. Like, the systems are linked. Everybody at TikTok uses ByteDance’s VPN, called Seal. Everybody at TikTok uses ByteDance’s internal, like, Slack equivalent, which is called Lark.

And so, again, there’s sort of nothing nefarious about this. But ByteDance has built dozens of apps. It’s known as an app factory. And at first, TikTok was just another one of those. And so all of the dozens and dozens of internal tools that make the app run — and every major tech company has such tools — were built by engineers in China, and largely have been managed by engineers in China.

And so I think TikTok and ByteDance are really trying to separate all of those systems, but it’s really hard to separate them. And it’s really hard to know if you’ve got them all, just knowing how big the backend of a company of this size is.

So there’s sort of like two buckets of concern in Washington about TikTok, right? There’s the sort of data misuse, surveillance spying concern that TikTok could essentially be used by the Chinese government to spy on the location or the data of American citizens. There’s also kind of this influence bucket, that China could use TikTok and the Chinese government could use TikTok to sort of insert propaganda or pro-Chinese sentiment into its apps so that American users of these apps would get a sort of distorted picture of life in China. Or just that TikTok would be used, in some way, to undermine US interests.

Of those two buckets, which do you think people should be more concerned about?

I love this question. Honestly, I have spent more time reporting on the bucket that — I think Ezra Klein called it the data espionage bucket. And I think that one gets talked about more. But the second one is way, I think, under-discussed, because recommendation algorithms are complex. We know this. But we also know that they can influence public opinion in all kinds of ways.

And I feel like we’ve talked a lot about this when it comes to Facebook, when it comes to YouTube. And somehow, we haven’t talked about it with TikTok as much. And I think we should, because very subtle changes to a recommendation algorithm that is powering many millions of phones can change how people think about an election or a pandemic, right? And we know this from the other platforms. It’s just as true when it comes to TikTok, but now we have this sort of additional layer of geopolitics.

Right. Like, you think about what the Russians had to do to interfere in the 2016 election. And it was like, well, they had to buy a bunch of Facebook ads. They had to create Facebook pages and Facebook groups and a bunch of fake people. Then they had to trick Americans into going to rallies, right, just to sort of sow dissent.

But like, if you’re the Chinese government and you can just make a handful of phone calls to ByteDance — I’m not saying this has happened, but you could just sort of imagine it happening. And they’re like, you know, why don’t we just amplify a lot of content of Americans screaming at each other, and a lot of content about the dysfunctional American politics? And then maybe amplify some content about how beautiful China is and what a great vacation destination it is, and how much people love their leaders there. And then maybe you throw in a viral American flag burning challenge at some point, right?

You could do all of this. And to your point, Emily, there are no fingerprints on the algorithm, so nobody knows where it’s coming from.

Yeah. I think that’s right. And I think it’s worth noting that people have told me that ByteDance has done this before. And we know that the Chinese government has done this before. So on that first part, I reported a story over the summer about an app called TopBuzz, which was a news aggregator app run by ByteDance. It no longer exists. But former employees of ByteDance told me that TopBuzz had essentially, like, stickied or pinned to the top of its feed what they referred to as pro-China messages. I don’t think this was like President Xi standing on a tank type of thing. I think it was, like, cute panda videos. And I don’t know where exactly the orders for this came from, but people told me that it had happened. ByteDance denied that, by the way, when I reported on it.

So there’s that out there. And then we know, from what the Chinese government has done on other social platforms, that they do run influence campaigns. I reported a story with a colleague of mine, Iain Martin, about how the Chinese government had, through an entity called MediaLinks, run a bunch of TikToks about American politics, about specific politicians, before the ‘22 midterms. And TikTok is still the only major social platform out there today that doesn’t label state media accounts.

And they say they’ve been working on this for a long time. They rushed out labels on Russian state media accounts after Russia invaded Ukraine. But the rest of their state media policy has been in limbo. And man, if I worked there, I would be making some noise about getting that out faster. Because if, like all other platforms, they just labeled Chinese state media as Chinese state media, they would be able to say, yeah, governments run these accounts on our platform. They run them on all platforms.

Right. I mean, I think inside TikTok, from the folks that I’ve talked to there, there’s a feeling that this kind of theory that they’re a puppet of the Chinese government is like a xenophobic sort of “red panic” sentiment.

And you know, I thought that for a while. And then I’m like, well, but there is all this documented history of TikTok being used, and other ByteDance products being used in these exact ways. Like, it feels like a conspiracy that we’ve learned, in part through your reporting, is actually true.

Yeah, I was in the same boat as Kevin, also having those conversations with the TikTok employees. And then last summer, I did read your story about TopBuzz. And I thought, wow, this thing that for so long they have sort of dismissed and waved away and said we would never do, you suddenly had this reporting and said, no, it actually did happen. And so, now time and time again, TikTok has told us one thing and then something else has turned out to be true. Which I think has led us to this place now where governors and lawmakers have said enough is enough. We just don’t trust this company.

Right, right. I want to talk about what’s happening in Washington now. And over the past couple of months, this effort to ban TikTok, or otherwise kind of separate the US part of TikTok from ByteDance, really seems to be gaining steam. So can you just catch us up: what sort of pressure is TikTok facing at the moment? Who is taking a look at them? Who is making decisions about their future in the US?

For the past several years, TikTok has been negotiating a national security agreement with CFIUS, the Committee on Foreign Investment in the United States. And CFIUS is a panel of a bunch of government agencies that work together to try to assess the risk caused by foreign ownership of companies and figure out what to do about it. And TikTok and CFIUS thought they were going to be done negotiating this contract a long time ago, several times now. They’ve had a lot of deadlines that they’ve missed, and that’s probably not terribly uncommon.

But I do think as CFIUS has sort of learned more about TikTok, various different fears have arisen. And I think, just sort of going back to Project Texas, this is TikTok’s big effort to ameliorate national security concerns and to rebuild TikTok to sort of sever most of its ties to the ByteDance backend that created it. They’ve spent a huge amount of money on this. They’ve spent a huge amount of time on it. They’re really trying to make it work.

And I think the problem is that I was talking to someone at one point who said it’s like trying to untangle a pile of spaghetti. And I think that’s really true. Just the way that these things are built, data flows all kinds of directions all the time to power, I don’t know, a sales dashboard here or a data pipeline over here that’s helping this monetization tool, right?

Again, this isn’t nefarious data flowage. It’s just data flowage. That’s just how these things are built. And even cataloging and understanding all of the tools, and then all of the ways that data flow inside the tools, is so hard — especially given that a lot of people who built the tools aren’t there anymore — that I think a lot of people are just really worried about being able to prove that they’ve gotten them all.

Mm. Mhm. I want to play you a clip from this C.E.O. of TikTok, Shou Chew, who got interviewed by my colleague, Andrew Ross Sorkin, at our DealBook summit. And in this clip, Shou Chew talked about some of these data access issues. And I just want to play that clip for you and get your reaction to it.

So let me ask you this. So this is FBI leader Chris Wray to the House Homeland Security committee. He said he’s, quote, “extremely concerned that under Chinese law, Chinese companies are required to do whatever the Chinese government wants them to do in terms of sharing information or serving as a tool of the Chinese government.” What happens in the reality, behind the scenes, in terms of how this actually works or happens?

So the FBI director, through his team, through CFIUS, has access to the discussions and the plans that we are building to solve and address this problem. And they’re very confident that this will address the concerns that he has raised. Now for us, you know, no foreign government has asked us for US user data before. Really, they haven’t. And if they did, we would say no.

We have a transparency report that we publish every quarter, like most other consumer internet companies. And in that report, we’ll spell out the details of the various data requests that various governments around the world have asked of the citizens of their country. And we want to make it very transparent so you can keep your eye on the report to see the kind of requests that we are disclosing.

So one thing is that TikTok has repeatedly said they’ve never been asked for data. This gets back to the sort of porous relationship between TikTok and its parent company. I would like to hear ByteDance answer that question, and I’m not sure they ever squarely have.

And I can’t help but think of a great piece of reporting “The New York Times” did a while back about the limits of Shou Chew’s power, right, and the fact that some decision making about TikTok has happened among ByteDance leadership instead of TikTok leadership. And I think if ByteDance were answering all of those questions, I think we would get a broader picture. Because as we know, at least as of when I was being surveilled, ByteDance employees who never report into the TikTok leadership chain had access to my IP address.

Right. And so — so in other words, if ByteDance had to answer the same question under oath, they would probably have to say, well, yes, in fact, we have been asked for user data by some governments around the world. Also, all of us have reported on platforms like Google and Meta and Twitter, and all of those platforms get requests for user data all the time. And they publish reports where they’re getting tens of thousands of requests. So the idea that TikTok, which is one of the largest platforms in the world, has never gotten a request for user data from a government, it just strains credulity.

Yeah, we do know that that happens all the time, and that these platforms do turn over stuff all the time. I don’t know that ByteDance would answer the question differently. I just want to point out that they haven’t answered it. I’d like to hear their answer.

Right. So right now, TikTok is furiously trying to pull off this Project Texas thing that would involve migrating all of its US operations to servers controlled by Oracle and sort of making the division between TikTok and ByteDance more clear and hardening it. But it seems like all of this is going to end up being too little too late for regulators in Washington.

Politico reported just last month that the Department of Justice and the Treasury Department were in sort of the final stages of a deal with TikTok, and that there was some disagreement about whether TikTok would be forced to sell its US operations to an American company, or whether it would just face some kind of lighter punishment, like having to do this Project Texas thing. Maybe set up, like, a security council made up of US citizens. So what do you think the resolution here is likely to be? I know you’re not a fortune teller, but do you have a sense of where this is going to end up? Are we going to see TikTok sold to a US company this year?

Sorry for that lame answer, but — I think it is more likely that CFIUS will mandate a sale to a US company than it previously was. I’ll say that. I think there is definitely momentum in that direction. There are also lawmakers of both parties, people in both houses of our legislature, who are saying too little too late. Who are saying, like, hey, CFIUS, y’all better do something or else we’re going to. And I think that gets to one sort of overarching point here, which is that, right now, there’s no national security agreement.

And so the people who are worried that TikTok is a national security risk and needs a solution, right now, they’re just operating without one, whatever people want that solution to be. So I understand why people are getting impatient.

You also reported — one of your stories last year featured some TikTok and ByteDance employees kind of talking about Project Texas, and basically implying, like, it’s kind of all for show. Like, yes, the data will be on Oracle servers, but really, it’s going to be us running our own virtual machines. And so Oracle won’t really be able to control what we do. So do you get the sense that this is just kind of a project for PR purposes? Or will this actually take away ByteDance’s ability to control TikTok in any meaningful way?

I don’t think we know that yet. I would say it seems a little cheap to say it’s just for show. They have stood up departments. They have paid contractors gazillions of dollars to help them catalog all their internal tools. If it’s for show, it’s really expensive for show.

But I think people are still worried that it won’t be a complete fix. And I think that is maybe the fairer way to say it: it might not be good enough. I think saying it’s just for show probably minimizes a lot of work that a lot of people have done who are really trying to fix a problem. But I understand why skeptics say it might not be good enough.

Also, it’s like, at every company, there are people acting in good faith to do the right thing and there are people trying to undermine them. And the question is, who has the most power in this situation, right? I think there have been a lot of good faith efforts within TikTok, based on my own conversations with some of their executives, in which they’re trying to do everything that Emily just pointed out. And then, thanks to Emily’s reporting, we also know that there are people spying on them.

I will say, just as an aside, it’s very funny to me that the question of, is TikTok sort of above board, is now being left to Larry Ellison’s Oracle corporation, which I would say is historically one of the most ruthless and conniving companies of all time. Larry Ellison is also famous for cheating to win the America’s Cup yacht race. So if I were personally choosing who I wanted to audit TikTok’s books, it would not be the Oracle corporation.

If you don’t know about Larry Ellison cheating to win a yacht race, ask ChatGPT to tell you about that.

It’s a legendary story in the history of American technology executives. So Emily, after all the reporting you’ve done on TikTok and ByteDance, do you think TikTok is a national security risk to the United States? Should it be banned?

Those are two different questions. I don’t necessarily think a TikTok ban is the best answer because I think this won’t be the only time this ever happens. We live in a global interconnected technological world. And whether it’s an app that has ties to the Iranian government or the Saudi government or whatever, there will be apps that have ties to governments with which the US has less-than-cozy relationships.

I think it is reasonable to worry about how governments, including our own, want to use the massive amount of information available through these tech giants that we use and rely on all the time. I think just saying TikTok is a national security risk, we need to ban it — first of all, it doesn’t create useful precedent for the next time this happens. And ideally, we will have a system where someone figures out what the principles are, figures out what the risks are, and figures out what we think we need to do to mitigate those risks. I also think a lot of the risks posed by TikTok are also posed by a bunch of domestic companies too.

Right. This is sort of the “well, Facebook collects data on us all the time too” argument.

Totally. Collects data and also influences what information we consume. And so there are people who talk about how we should have a national data privacy law, and that might be a good thing. There are people who talk about how we should mandate more transparency about how companies influence what content we consume, and that seems like a good thing. And there may be another sort of geopolitical international relations type of thing that you would want to layer on top of things for companies that have some relationship with foreign governments too. But I guess just saying, “Let’s ban TikTok, it’s scary and bad,” doesn’t seem like the most sophisticated solution to the problem.

So I asked TikTok to comment on the revelations that its employees had been spying on Emily and others, and this is what a spokeswoman told me over email. Quote, “The misconduct of certain individuals who are no longer employed at ByteDance was an egregious misuse of their authority to obtain access to user data. This misbehavior is unacceptable and not in line with our efforts across TikTok to earn the trust of our users. We take data security incredibly seriously and we will continue to enhance our access protocols, which have already been significantly improved and hardened since this incident took place.”

Two final questions. One, is there anything you’d like to say to the ByteDance employee who’s monitoring this podcast recording?

And last question, if TikTok is banned in the United States, in part because of your reporting, are you ready to face the wrath of hundreds of millions of teenagers? No, millions of teenagers in the US.

Eh. I think the kids are smarter than we give them credit for. And I think if they’re angry at regulators for doing a thing, they won’t just be angry because their memes are gone. They’ll be angry because they think the solution wasn’t the best solution. I don’t know. I have more faith in teenagers than other people, I guess.

Emily Baker-White, you’re a legend. Thank you for coming.

So news broke this week that Microsoft and OpenAI are working together on a ChatGPT-powered version of Bing. This was a story in “The Information” which was sourced to two people with direct knowledge of the plans. It said basically Microsoft is going to launch a version of Bing that uses the AI behind ChatGPT to answer some search queries, and that Microsoft hopes that this new feature, which it wants to launch before the end of March, is going to help it compete with Google.

Microsoft is doing it. BingGPT is going to be a reality.

Yeah, so great report in “The Information.” I learned so much about Bing. And one of the things that I learned was that apparently when Microsoft signed the deal with OpenAI to work together in 2019, this was essentially always the plan. And in fact, they’ve been using these earlier versions of GPT in various aspects of Bing. The reason we didn’t know this until this week is because no one has used Bing since 2019.

But if you were to go there, you would apparently see like rudiments of GPT already in the search engine. But of course, what is much more interesting is the idea that we had sort of talked about, which is, man, what if there is a search engine that delivers results that look much more like what we’ve all been enjoying on ChatGPT, but in something like Google or Bing?

Right. And you had this great riff about how you had started using ChatGPT for different kinds of searches, like what kinds of shoes should a man have in his wardrobe, and how the results you were getting were a lot better than results you were getting from Google. And apparently you weren’t the only one who felt that way, because we also learned that inside Google, ChatGPT is causing a lot of chaos.

So my colleagues at the “Times,” Nico Grant and Cade Metz, talked to a bunch of people inside Google and reported in a story on December 21 that Google’s management had declared a code red that was essentially the corporate equivalent of like pulling the fire alarm over ChatGPT, that there were worries among very high-ranking Google executives that ChatGPT could, in fact, take a bite out of Google’s extremely profitable, extremely well-entrenched search business, and had basically redirected a ton of engineers and product people to trying to implement some kind of competitor to ChatGPT as quickly as possible.

Which I just thought, what an amazing thing. I mean, Google, one of the most stable internet businesses of all time, one of the most profitable. And Google Search is just this kind of unassailable monopoly of just cash printing, and has been for so long. And here comes OpenAI and this ChatGPT thing, and a couple of weeks after this thing goes public, Google is pulling the fire alarm.

Yeah. I mean, so you know, I was talking to Google people and they tend to talk about ChatGPT sort of dismissively, right? Like, they like to call it a demo, you know? And yet here, along came this demo, and next thing you know, they’ve got a code red. Which, by the way, I’ve tried to look into what it means to declare a code red inside a tech company. My familiarity with code red is mostly from the classic film “A Few Good Men.”

The two code — within you, there two code reds. But it does seem like the company is trying to figure this out. And of course, I can’t wait to see what they’re going to do. Look, we know that they have a large language model of their own. We know they’re very impressed by it. They have talked about it nonstop for years. But they have a lot of really tricky issues to work out with how to release it to the public.

And so this is a really interesting one for me, because it’s a rare case where the issue is much less about the technology, I think, because I think both sides sort of have a good and working thing. And it’s much more about what kind of products do you build? What are the user interfaces? What are the safety measures that you put into place? And what’s the business model? And those are questions that Google has not had to ask itself in a serious way, in an existential way, for a really long time.

Right, it’s just taken for granted that there’s this search machine that is just going to keep printing cash, quarter after quarter after quarter, billions of dollars a quarter that it just doesn’t really have to work for all that hard, because it’s Google. Because when you want to learn something, you go to Google, because Google is the verb that has taken root in our culture as, like, the thing that means “to search for something.” It is so much more entrenched than, like, TikTok or Facebook or even any of these other apps that just seem kind of untouchable. Google is kind of bedrock in the tech world. And so it’s really interesting to see how quickly they have gone from kind of dismissing this thing to now panicking over it.

Yeah. Well, and look, I’m somebody who has been really critical of Google Search in recent years because I feel like they have come to take their users’ presence for granted, right? We’ve talked about how often when you search for something on Google, you don’t really get back an answer. You get a research project, right?

You have to search on Google, say “Reddit” and then whatever you want to find, if you want to find an actual answer that’s written by a human. And it’s been that way for years. It’s totally maddening.

There’s this whole industry of search engine optimizers whose job it is essentially to ruin Google search results and just fill them up with sponsored crap, right? And so what the OpenAI folks have shown with ChatGPT is that a different way is possible. And that even though the search results that you’re getting from ChatGPT are riddled with errors, they can provide some offensive stuff, and they do result in just sort of, like, mulching a lot of labor from writers and journalists into this undifferentiated, like, pablum of sentences, it’s still very useful and it can be very entertaining.

And so I think this is a case where the genie is out of the bottle a little bit, and now we have to see who’s going to capitalize.

Totally. And I have a couple of things to say about this. One is what an amazing strategic move by Microsoft. I mean, years ago, to make this investment. I think their billion-dollar investment in OpenAI is going to go down as one of the greatest technology investments of all time.

I think it goes Google buying YouTube, Facebook buying Instagram, Microsoft investing in OpenAI. Like, I just think if this makes Bing a legitimate competitor to Google, it will be such a huge deal to Microsoft’s future that it’s going to make their initial investment look so smart.

And let me just put a couple of numbers on that from the “Information” story. So Bing, which we all sort of laugh about as a search engine, despite being a sort of also-ran in the search market, made $11 billion last year. Or rather, we should say, Microsoft’s ad business, the majority of which comes from Bing, made $11 billion last year. That’s not bad.

Twitter’s revenue in 2021, about $5 billion, OK? Google makes 10 times that.

I almost fell off my chair when I read this. But yes, that’s correct.

So then you have Google, which is making about 10 times what Bing is making in search results, right? So if Satya Nadella, the C.E.O. of Microsoft, can take Bing from $11 billion to $50 billion over the next few years, you’re absolutely right. I think we will already look back at him as, like, a generationally-talented C.E.O. But this — man, this could really cement his legacy.

Totally. My other thing that I’m wondering about is cost here. Because something that I heard from folks who understand search and large language models and ChatGPT: ChatGPT is not free to run, right? Every query that you run, it’s been estimated, costs, like, a cent or two in just computing power. You have to pay for basically this huge machine running on these giant supercomputers.

And so people who are close to Google were sort of pooh-poohing ChatGPT and saying, like, we could never do something like that, because I think Google processes something like 8 and 1/2 billion queries a day. And so if you had to ask a large language model like a ChatGPT every time you ran a query on Google, like, that would add up very quickly. You’d be spending millions of dollars a day just for the processing power to run the large language model in the background.

So I’m very curious how Microsoft is going to scale this, whether it really is going to cost them a cent or two every time someone searches on Bing. Because that could get expensive really quickly.
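For a rough sense of scale, the arithmetic behind that worry fits in a few lines. Both inputs are just the loose figures from the conversation — roughly 8.5 billion Google queries a day and an estimated one to two cents of compute per query — not measured costs:

```python
# Back-of-envelope estimate of daily LLM serving cost at Google scale.
# The inputs are the rough figures mentioned in the conversation,
# not measured numbers.

QUERIES_PER_DAY = 8_500_000_000
COST_PER_QUERY_LOW = 0.01   # dollars per query, low estimate
COST_PER_QUERY_HIGH = 0.02  # dollars per query, high estimate

def daily_cost(queries: int, cost_per_query: float) -> float:
    """Total compute cost in dollars for one day of queries."""
    return queries * cost_per_query

low = daily_cost(QUERIES_PER_DAY, COST_PER_QUERY_LOW)
high = daily_cost(QUERIES_PER_DAY, COST_PER_QUERY_HIGH)
print(f"${low / 1e6:.0f}M to ${high / 1e6:.0f}M per day")
print(f"${low * 365 / 1e9:.0f}B to ${high * 365 / 1e9:.0f}B per year")
# prints "$85M to $170M per day" and "$31B to $62B per year"
```

At those assumed per-query costs, running a large language model behind every search would dwarf what any search engine could absorb today, which is why the question of whether the cost falls over time matters so much.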

Yeah. And so the question becomes, well, how do you make money from a search engine like this? Search is a really good business because, so often, people are essentially telling you what they want to buy, right? You’re googling hotels and flights and cars and clothes, and there are all sorts of ways that you can just sort of stick an ad in there and then you wind up making a ton of money when that person converts into a sale.

ChatGPT, I think, does offer a lot of similar promise. When I say, hey, what kind of shoes should I have in my wardrobe, instead of just saying, well, here are the 10 kinds of shoes, there’s no reason ChatGPT couldn’t say, and here’s where to buy them, and here’s some images of those shoes, right? And here’s the ones that are on sale right now.

So I think the question is, will ChatGPT-like search results be so much better that they convert better, and the resulting sales essentially offset the increased computing costs, at least in the short term? And then over the long term, does Moore’s Law kick in, and does this stuff just basically become as cheap as Google Search over the next 5 to 10 years?

Right. I mean, that’s the bet that I would be making if I were Google, is like, yes, it costs a cent or two to do a query on ChatGPT now, but that’s not going to stay that way forever. I do think there’s an interesting sort of conundrum in here. There was someone quoted in the “Times” story who made an interesting point. It was a former Google employee who now runs a startup. And they said that plugging something like ChatGPT into a search engine like Google could actually hurt the ads business. They said, quote, if Google gives you the perfect answer to each query, you won’t click on any ads.

So basically saying, like, if your shoe query does come back with the perfect set of shoes, why would you need to click on ads? Because you’ll have the answer right there and you can go look for it yourself. Like, is there something that that person is missing about the relationship between the quality of the search result and how likely you are to click on an ad?

Well, a huge part of Google’s ad business is putting their ads on websites, right? So Google is sort of synonymous with the web because it funds so much of the web. I think the real issue here is that if ChatGPT spares you from having to browse the web, then yes, that is a huge impact on their business, right? And I think it’s also going to just, frankly, trigger regulatory concerns. So this is probably a good time to talk about the fact that, over the years, Google has routinely gotten in trouble for doing stuff that I just don’t think is that big a deal. For example, they will show you a snippet of a news story in Google News along with a headline, and they had to stop doing this in Spain. Google News pulled out of Spain for years because it was illegal to show tiny snippets of text from these websites without paying a licensing fee, right?

Imagine what regulators are going to say around the world when Google says, we took the entire web, we put it into a blender, and we trained a large language model on it. And we will now serve you the entire knowledge of humanity with no links back to the source material. Google will be, like, under a raid, right? Like, they will try to shut the company down if it does that. And so I think it’s in a really precarious position.

Now, right now, I think regulators, for the most part, have not heard of ChatGPT, so this probably isn’t on their radar as much.

Except the ones who listen to “Hard Fork.” Shout out.

And by the way, you’re the best regulators in the entire world.

Let us know if you want to come on the show. But this really is going to be, I think, one of the next turns of the screw, is when people are saying, hey, wait a minute. What is the actual source material for all this stuff? Why aren’t you crediting any of the sources? Why aren’t any of the people who contributed their labor to this being compensated for it, right? And you can just sort of imagine a sort of big regulatory crackdown coming that tries to stop this stuff from taking the next steps.

Yeah, I think that is possible. I think it’s going to take at least a decade. We’re still catching up to the last big wave of technology. But I do think this is a really interesting time at Google, which is sort of navigating all of this, plus these larger sort of macro trends we’ve been talking about, the interest rates and cost cutting and the advertising slowdown.

So I think this is, in general — like, one of the things that I’m going to be looking at this year as one of the emerging rivalries in tech is between Google and Microsoft, who have competed in various ways over the years, but never when it comes to core search. Bing has never, I would guess, kept many Google executives up at night worried about the future of their own search product, and maybe now they will.

I think this is absolutely going to be one of the most delicious stories of the year. And I think you might see Google Search change more in the next two years than it has in the past ten. And frankly, I think that’ll be good for us.

Oh, my god, I can’t believe — like, it needs a refresh. And if this is the kick in the pants that gives them the desire and the resources to make Google Search better somehow, like, I am all for it.

But I do think — one more thing I’ll say on this is I do understand the position that Google — some folks inside Google have taken, which is we can’t deploy this safely, so we’re not going to do it yet. We’re going to keep it internal. We’re going to keep testing it. I do think that OpenAI, as a startup, has more leeway to put something out there that isn’t perfect and can be misused, because they’re not under regulatory scrutiny by every agency in the world. They’re not a trillion-dollar internet company. They have an investment from one, Microsoft, but they’re not under the same level of scrutiny as Google.

And so I understand and appreciate the position that Google is in. It built this stuff. It came up with the technology that powers large language models.

Yeah, they invented the transformer. And yet, they have been like lapped by these smaller outfits like OpenAI who just come in, take this code, make their own versions of it, and then release it to the public and don’t have to spend a lot of time sort of dealing with regulators and politicians yet, although that could happen. So I understand why they’re moving slowly, but I think this is a moment where they either need to start making moves to improve Google Search or risk getting left in the dust.

I think Google has benefited enormously over the past half a decade or so from being a generally cautious company. But I do think it’s moving into a mode where it’s probably going to have to take on some more risk.

One other thing that’s sort of incredible to me about this story is that ChatGPT, this thing that is now reportedly being built into Bing, still has no idea whether the stuff it’s saying is true or not. Like, that feels like a pretty big liability for me, if I’m Microsoft and I’m looking at this. Like, you can get good search results from ChatGPT, but you can also get horrible search results from ChatGPT. So how do you think it’s going to navigate that? Is it going to put a caveat on all the search results saying this may or may not be a hallucination of our AI language model? Like, how do they deal with the fact that these products still sometimes spew out stuff that is just totally wrong?

It’s the right question. And this is one where I think that we’re all just going to have to undergo a societal evolution where we understand that we’re using these tools that might not always be right. You know, I remember growing up and being told by professors not to rely on Wikipedia because it couldn’t always be trusted. And what did we do? Did we stop using Wikipedia? No, we used it even more. We just sort of understood that we were probably going to need a second and third source before we put anything into a paper that we were writing.

I think you’re going to see something similar happen with GPT, right, where it is a starting point for a lot of people. It’s like me with our now overused example about buying shoes. It’s like, I might look up a second source, but it gave me a very good start and now I can take that and take the next steps much more easily. And so I think if we see these things more as a first step rather than a final step, we’ll probably be able to make great use of them.

So Kevin, you mentioned something to me the other day that I need to hear everything about. You have begun putting your phone into some sort of box.

Yeah, so this is part of my New Year’s resolution, 2023. I am going into therapy with my phone. We are going to couples counseling, me and my phone.

No. No, no, no. I have a long — we have a long and tortured relationship, me and this phone. So a couple of years ago, I did a phone detox, like a 30-day program that was meant to separate me from my phone, which worked pretty well. Then COVID happened. Then I had a baby. Then I had all this time where I was just sitting around, like, feeding bottles and stuff. And so my phone use started to creep back up, and I’m not very happy about that. So —

Enter the box. I don’t want to say too much about this particular box because I just started using it. And honestly, I’m not sure if I like it or not. But it is a box. It’s called Aro. It has some chargers in there, like little slots. You can put four phones in there. So I put my phone in there, and it has a sensor on it. So when I put my phone in, it starts a timer. And it’s basically in phone jail. And then when you take it out, it sort of congratulates you and says, hey, you got an hour of your life back, or something like that. So it basically —

Wait, when you say it congratulates you, like, is there like a voice that —

But you can’t read it because your phone is in the box.

Well, you take your phone out and it says you successfully put down your phone for three hours, you miserable person. I guess that’s a victory. Feel good about that if you want.

So one thing that really hurt my phone use over the pandemic was that I started sleeping with my phone in my bedroom again.

And every expert in this field will tell you, like, that is the single worst thing you can do, because you reach for your phone, you get the light in your eyes, it’s over. You’re not going back to sleep.

So that definitely happened to me. So part of this is the box, it’s in my kitchen. It’s not in my bedroom. So I put my phone to bed, put my AirPods to bed, and I go to bed.

Right. OK, backing up then, so one time of day that you use the box is at night. Are there other times during the day when the phone goes in the box?

Yes. Like, during dinner with the family, I’ll put my phone in the box. I will say, you can still hear it buzzing from inside the box. So there’s a little bit of, like, I need to figure out how to deal with that. But it has been good so far. I’m not — I’m horrible. Like, I’m not a good user of this box yet, but I’m getting there.

So I’ll show you this. It’s cool. It’s like a little app and — this is not sponsored content. In fact, I bought a lifetime subscription yesterday. So if I try to go to, say, Instagram on my phone, I get a little thing.

Wow. It just — like a sort of a takeover of your screen just happened and this app just said, “It’s time to take a breath.”

So it basically puts, like, a 5 or 10 second pause in between when you open an app. And now you can see it says — it says four attempts to open Instagram within the last 24 hours. I did just install this last night. And then you can choose, like, I don’t want to open Instagram or continue to Instagram. So it sort of like just puts a little speed bump in between you and your problem apps.

Right. It’s like you have to ask like the principal if it’s OK to leave class early. And the principal says, well, this is the fourth escape attempt you’ve made, but we’ll allow it this time.

It’ll still let you. It’s not a — the thing I like about it is it’s not like a hard and — I’ve used these blockers before where it’s like do not let me open Twitter for the next 12 hours.

Which is kind of a brute force method. What I like about this is you can still get there. It’s just kind of annoying and it takes a little while, and it kind of makes you only do it when you really want to use the app, rather than just sort of habit.

It forces you to be more intentional about the way that you use these things.
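The mechanism being described — a forced pause before a flagged app opens, plus a rolling count of recent open attempts — can be sketched in a few lines. This is purely illustrative; the class and parameter names below are made up, not taken from Aro or any real screen-time app:

```python
# A minimal sketch of a "speed bump" for problem apps: pause before
# opening, and track how many open attempts happened in the last 24h.
# Illustrative only; not how any real screen-time app is implemented.

import time
from collections import deque

PAUSE_SECONDS = 5            # the 5-10 second "take a breath" delay
WINDOW_SECONDS = 24 * 3600   # attempts are counted over a rolling 24h

class SpeedBump:
    def __init__(self) -> None:
        self.attempts: deque[float] = deque()  # timestamps of attempts

    def record_attempt(self, now: float) -> int:
        """Log an open attempt; return the count within the last 24h."""
        self.attempts.append(now)
        # Drop attempts that have aged out of the rolling window.
        while self.attempts and now - self.attempts[0] > WINDOW_SECONDS:
            self.attempts.popleft()
        return len(self.attempts)

    def open_app(self, name: str, proceed: bool) -> str:
        count = self.record_attempt(time.time())
        print(f"{count} attempts to open {name} in the last 24 hours.")
        time.sleep(PAUSE_SECONDS)  # the speed bump itself
        return f"opening {name}" if proceed else "closed"
```

The point of the design is exactly what’s said above: the app stays reachable, but every open costs a few seconds and a reminder of how often you’ve been reaching for it.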

OK, so now that your phone is in a box, and when it’s out of the box, it’s telling you not to use all the apps on your phone, like, what has been the effect of this on your life so far?

It’s just complicated it a lot. Like, it just means that I spend a lot of time pretending to do deep breathing waiting for my thing to go away.

“Is it phone time yet, box? Oh, box, when may I open you?”

I do feel a little bit like I’m the hamster in the cage just flicking at the lever to get another treat. Like, it makes the addiction dynamics so much clearer.

I mean, this is such an interesting story to me because I know there are so many people out there who have such a tortured relationship with their phone, with the apps on their phone. They know that they need it in their life, but they don’t like the way they’re using it. They wish they used it less. They wish they had deleted some of the apps on it. And yet, it has never once occurred to me to take any of the steps that you’re taking.

You’re not like a — you’re not a screen time worrier?

You know, I think I was before the pandemic. And then I think the pandemic happened and I thought, the debate is over and screen time has won. Like, the screen is the only thing that is tethering me to human connection right now, and so I’m just going to lean into the moment that I’m in. And it is like a virtual metaverse moment, baby.

I’m like ready to go full-on bionic. I want just, like, a ticker of Mastodon posts going through my mind at all times.

Wow. So I do think that something happened during COVID where it was like — it was basically futile and useless to try to lower our screen time, because what else were we going to do? But I do think now that life has stabilized a little bit, I think actually what tipped me over toward being worried again about myself was having a kid, honestly, and watching the way — my kid’s too young to use a phone, but someday, he will. And already, he’s obsessed with it. Like, he will crawl across the room to get to the phone because he sees me staring at it all the time.

So that’s where I’m like I don’t actually like how much I’m using my phone. I’m trying to be much more intentional about it. I’m not going cold turkey. I’m not doing anything — I’m not getting one of those, like, dumb phones that they sell now for hardcore people. But I am trying to be more intentional, putting in these little speed bumps. And this box, I don’t know if it’s going to stick, but I’m trying it.

There is something really almost chilling about looking into the eyes of a baby that is looking at a screen, you know? Where they go from infinitely distractible and sometimes irritable, or sometimes really engaged with you, and then they just see the flicker of a screen and they become hypnotized. And it doesn’t even matter what’s on the screen. There’s something about the glow of the pixels and they’re just completely transfixed. So I can understand having a kid and thinking, I want to change my relationship with this thing.

I think a question that I have is, like, is the phone really the problem? Like, are you solving at the level of the problem here? Or is the real issue like maybe something a little bit higher level about, well, why are you opening Instagram 30 times a day?

I’m trying to figure out a way to have this conversation without making you do therapy live on the podcast. But like — I mean, look, I have been sort of gently laughing at this whole situation the whole time. But look, the truth is, I have also had to come up with my own more intentional relationship to certain of the apps that I use at various points in my life. You know, the gay hookup apps like the Grindr. Man, did I have to work on my relationship with that. And there were times when I would finally delete that thing from my phone and it was like I had finally shut Pandora’s box, you know?

So I very much sympathize with this idea that sometimes, depending on the app and depending on your personality, you just will not be able to pull yourself away from it. At the same time, again, I wonder — it’s like, well, is the app really the issue here? Is there maybe just something else going on with you? And the reason I think that’s important is because it goes to the question of, is this box actually going to work for you or not, in the long run?

Totally. And I like that there are technologists out there who are trying to use technology to solve a technological problem, right? That we have these limiting apps, these speed bump apps, these boxes that have Bluetooth and send you push alerts and basically gamify the process of getting off your phone. I appreciate that, and I think it’s a worthy space for innovation. I also think you’re probably right that there are some problems that technology can’t solve, and this may be one of them. So I’m going to look deep within myself and I’m going to continue to try to use this box until it tells me I’m doing a good job, because I am a hamster in a cage.

Well, if nothing else, putting your phone in a box does seem like a good way to keep TikTok from spying on you.

But you know, I think — because this is a technology podcast and we do need to keep up on the trends, my commitment to you and the listeners is that I’m actually going to use my phone more this year. That’s my big resolution, is I am going to — here’s the thing. The more time I spend on the internet, the better that my life gets. Like, I wish I could tell you that spending all day reading Mastodon posts has left me an invalid, but no. It’s like, things are going great.

Wow. You are just a cybernetic citizen of the future. No, I want to be clear that I am not anti-phone. One of the things that my phone detox coach helped me with a few years ago, this woman named Catherine Price, who’s amazing. Shout out, Catherine.

A real thing, by the way. You did have a coach. Like, this was a — there was a process you went through. Yeah.

Oh, yeah. Total process. She has a 30-day program. She put me through it and it was life changing. It genuinely, like, made all my relationships better. Like, really gave me time back. I was much more productive. So I really think there’s like a benefit in not getting off your phone, but in resetting your relationship with it. I think that’s just a healthy thing to do. I think, for me, I have to make sure that I am still in the driver’s seat of that relationship, right? That my phone is working for me and not the other way around. Because it is so easy to just let the phone become the boss.

Yup. And you know what? In 2023, we want you to be your own damn boss. You tell that — you put that phone in the box and you say, I got this, phone.

Yeah. So my phone is right now not in a box, but it’s going to go back in a box as soon as I get home. And if you’re trying to reach me —

That’s it for this week. “Hard Fork” is produced by Davis Land. We’re edited by Paula Szuchman. This episode was fact checked by Caitlin Love. Today’s show was engineered by Alyssa Moxley. Original music by Dan Powell, Elisheba Ittoop, Marion Lozano, and Rowan Niemesto. Special thanks to Hanna Ingber, Nell Gallogly, Kate LoPresti, and Jeffrey Miranda. As always, you can email us at [email protected] That’s all for this week. See you next time.

Calls to ban TikTok or force its sell-off from its parent company ByteDance are gaining momentum, especially after reports of ByteDance’s surveillance of several U.S. journalists. And could Microsoft’s investment in OpenAI mark the end of Google’s search monopoly?

Also, New Year’s resolutions, including locking up your phone.

Emily Baker-White, a technology reporter at Forbes who reports on TikTok.

Category: Technology

Source: NYTimes Technology
