
A conversation with Nita Farahany and Nick Thompson

The pair sat down with The Spectrum before their presentation: “AI and the Future of Everything”


Nita Farahany, one of UB's two distinguished speakers on Nov. 16, spoke with The Spectrum prior to her presentation.

Nick Thompson’s job has changed a lot in 2023. 

Thompson, the CEO of The Atlantic, is preparing for new AI-powered competitors and the end of search engines, which drive most of the magazine’s online traffic. His team is building AI tools to search The Atlantic’s decades-old archive, analyze marketing campaigns and synthesize customer comments. He’s even talking with lawyers about whether The Atlantic is owed compensation for the use of its articles to train AI models.

Thompson and Nita Farahany, an AI scholar and author of “The Battle for Your Brain,” sat down with The Spectrum before their Distinguished Speaker Series talk on Thursday to discuss how people in power are preparing for the AI boom and whether the world is ready for it.  

This interview has been edited for length and clarity.

The Spectrum: To start off, could you each tell us what excites you the most about AI, and what about it keeps you up at night?

Nick Thompson: I consider AI to be a bicycle for the mind. I think it will make us all more intelligent, more creative and allow us to do many more things. And what keeps me up at night is that it’s going too quickly for society to adapt its social norms to all the change it’s bringing.

Nita Farahany: What I’m most excited about is the coevolution of AI with humanity. It’s changing everything. It’s changing how we think, it’s changing how we operate, it’s changing our understanding of ourselves and our understanding of each other. The thing that keeps me up at night is that we don’t have a great understanding of our own minds, and the melding of our minds with AI is deeply problematic in many ways. But, like Nick, I worry that norms aren’t evolving quickly enough. The norms that already exist are deeply problematic in terms of how they have hacked, addicted and shaped our minds. And we’re going down that same path now with tools that are far more powerful and far more likely to leave us in a place that erodes what it means to be human.

TS: How can lawmakers, policymakers and industry leaders like yourselves educate people and allow people to protect themselves from the path that technology is headed down?

NF: I’m going to challenge the question a little bit, because I think one of the difficulties is putting the burden on consumers and individuals, which isn’t to say that people aren’t disempowered. I think there are things we can do to empower people, but too often, the way we’ve dealt with the risks of emerging technologies is to expect people to protect themselves. I think we have to fundamentally change the terms of service and the way the technology is designed so that it favors and fosters the cognitive liberty of individuals. If you’re interacting with AI, if you’re interacting with a chatbot, or if it’s an AI-generated image, there should be labels. And there are moves toward doing this: design floors and minimums that favor and protect individuals, safeguard against manipulation by deepfakes, shallow fakes and doctored images, or at least ensure you know whether who you are interacting with is AI and not a human. That alone changes your understanding and your ability to actually safeguard yourself. So part of the education is helping people learn when they are or aren’t interacting with something that is AI-generated. And that requires that we put into place norms, laws and burdens on companies that regulate how they behave and how they interact with humans.

TS: What are lawmakers and industry leaders getting right in response to AI? What are they getting wrong?

NT: Well, they’re paying a lot of attention to it. They are talking about it. I consider that getting something right. I think that the general framework laid out in Biden’s executive order — where he explained that you need to be extremely concerned about bias, about authenticity and about the effect on the workplace — was all good. What I worry they’re getting wrong is that they’re going to be convinced by the large tech companies to pass burdensome regulations in the name of AI safety, and what those burdensome regulations will really do is lock in the market and make it impossible for small, open competitors to compete with a large monopolist. That will just lock in the power of a small number of large tech companies. And I think part of the reason the big AI companies are out there talking about regulation is that if you can get the government to pass a law that requires every company to have 100 lawyers to comply with it, no startup will ever be able to compete with you. And that was the other part of Biden’s executive order. So they’re getting that a little bit wrong in my mind.

NF: I agree with all of that. I think it’s unfortunate that it’s an executive order rather than congressional action. What it signals to me is that there’s a fear that congressional action won’t actually follow, and executive orders can easily be undone. It puts too much into too many different regulatory agencies’ hands to implement and enforce, in ways that create potentially overlapping, deeply conflicting or inadequate frameworks for the actual problems. And I think the big tech companies have done a really good job of framing the issues around long-term existential risks, rather than the short-term issues that really have to be addressed. In that way, the Biden executive order was good in that it respected and listened to a number of the voices that were talking about the near-term risks: bias, issues with respect to transparency and accountability, the need for broader data privacy legislation. But I look at what’s in the executive order as almost a hodgepodge, rather than a comprehensive understanding of how to govern AI in a way that would both foster and enable innovation, which we really want, and safeguard against the most significant misuses and risks to society. And there’s almost nothing in there that addresses these problems of mental manipulation of individuals, which create a lot of the short-term as well as the long-term risks of AI. If people can’t develop some sort of safeguards around how they’re thinking and how they’re evolving with AI, then all we’re doing is taking all the problems of social media and amplifying them: mental health problems, addiction and manipulation. All of the problems that we’re seeing with social media are on steroids with generative AI, and there’s virtually nothing currently being proposed that addresses that.


Thompson, left, is the CEO of The Atlantic. Farahany, middle, is a Duke University professor and author of "The Battle for Your Brain." 

TS: We’re having such trouble now, at the governmental level, with social media. It’s actively a problem. How can we expect to now, like you said, put it on steroids with AI when we’re still grappling with the current problems? 

NF: I don’t think we’ve had the right framework, which is the biggest problem when it comes to social media. Part of it is just the power of the tech titans. The way I think about it is with energy companies: We had a major problem, growing climate and emissions problems, but also a national security problem, which is reliance on foreign oil. And so a broad set of both liability and incentives was put in place to try to shift toward alternative energy investments. Legacy energy companies never would have invested in alternative energy otherwise; their business model was based on oil, fossil fuels and traditional energy technologies. With a massive set of incentives and liability regimes, it shifted over time. Twenty-three countries around the world are now at a tipping point with electric vehicles. That’s exciting. If you think about legacy tech companies, their business model is extractive and exploitative, right? It’s all based on ad revenue. Ad revenue requires your attention and your addiction to technology, which is what leads to a lot of mental health problems. It’s designed to poke you and keep you on task, but it’s also designed to undermine your thinking, right? It doesn’t want you to be critically thinking. It’s not like The Atlantic, where you spend time critically thinking about articles.

NT: What about The Atlantic social media feed? We’re trying to manipulate you into being critical thinkers.

NF: There’s a difference between content that is designed for critical thinking and content that is designed to push you into automatic thinking, right? So how do you take legacy tech companies, whose business models are based on extraction and exploitation, and move them to a totally different business model? It’s not going to happen just with laws, right? We’re going to need a significant set of incentives that enable different business models to emerge, ones that aren’t based just on advertising revenue. And we can liken that to energy, but we have to grapple with it with a bigger framework, one that says this is a collective set of problems, and the way you address those problems isn’t just with big sticks. It’s sticks, it’s incentives, it’s research to find different business models that can flourish and enable us to shift to a different kind of approach. With energy, we had a national security crisis, where people recognized there’s both a climate problem and the risk of dependence on foreign oil. We need something like that to make governments feel the same way about tech companies, to feel like their interests are aligned with changing tech companies’ business models to enable flourishing.

NT: One other thing about social media is that by the time the government was trying to regulate it, it was already mixed up in politics. Democrats worried that certain regulations would help Republicans; Republicans worried certain regulations would help Democrats. It became impossible to do anything. Fortunately, with AI, we’re a little bit before it totally screws up politics. And so maybe we can regulate it before anybody has any idea whether regulation is gonna help the other guy out.

TS: How long do you think we have until it gets mixed up in politics?

NT: About five minutes. 

NF: I’d say it’s already mixed up in politics, right?

NT: It is mixed up, it’s just not as bad as social media. You don’t have Democrats saying Facebook got Trump elected. I think the role of AI in the 2024 elections will be an interesting test. You’re already starting to see it in the British elections. You’re seeing it in a story today in The New York Times about the Argentinian elections. You have not yet seen elections fully manipulated and changed by AI in a public way. There are AI datasets and AI tools that are used by every campaign. But it’ll be a very interesting test if AI plays a huge role in the ’24 election, and whether that affects the ability of the government to regulate it. Because if it is perceived that the tools created by AI companies helped Joe Biden or Donald Trump or Nikki Haley or whoever is running, then it will totally change the conversation about regulation.

TS: We’ve seen generative AI affect newsrooms all around the world. You know, Gannett tried to use it to write a couple of high school sports articles…

NT: That was great. You guys do it? It’s a really smart idea.

TS: How is AI affecting operations at The Atlantic

NT: So, huge question. We spend a ton of time modeling how it will change our future business models. The biggest change is that a large percentage of the people who find The Atlantic come through search. Search is going to be turned upside down as it moves to a chatbot, as opposed to 10 blue links. Huge difference. There will be all kinds of other companies that use AI to summarize, copy, replicate, try to be like The Atlantic. So we will have many more competitors in many different ways. We are modeling that out and trying to figure out how we can best survive it. There are existential questions that we think through. Then there are the questions of how you use AI on the business side. So we are building AI tools, for example, to analyze our advertising campaigns to make sure they’re compliant with FTC regulations. We’re using AI tools to take every little bit of customer feedback we’ve ever gotten and give ideas to product managers. We are building tools to identify stories that are in the news, pieces that are in The Atlantic archive, and then headlines that you could tweet out from those archive stories. You might have noticed on my Twitter feed the other day that I tweeted out a review of “Moby Dick” on the anniversary of the day “Moby Dick” was published. Did I know that because I’m a Herman Melville scholar? No, I knew that because I was testing out an AI tool that told me this was the day “Moby Dick” had been published, and we had a review essay that people really liked. So we’re using it in that kind of way. Then there’s the really hard question, which is how do the journalists use it? I don’t oversee the journalists; I’m just the businessman. But my view is that AI can be an incredible tool. It’s a great tool for copy editing, a great tool for fact checking, a great tool for finding story ideas. It’d be a great tool for analyzing datasets. But you should never on any day of the week — not Monday, Tuesday, Wednesday, Thursday, Friday, Saturday or Sunday — take a sentence or a word from an AI chatbot and put it in your story. You should never do that. You can use it to inspire you. You can ask the AI to read your story and say, ‘How do I make this story better?’ But you should never take one word from AI and put it in your writing.

NF: And why is that? 

NT: Well, the simplest answer would be legal complexities. One: you want to retain the copyright, and maybe the AI company has the copyright. Two: you want to make sure there’s a purity in the content you create, so you can say, ‘This was created by humans; it’s been created by humans for the entirety of our history.’ And three: every time a journalistic organization has done it, as far as we know, they have shot themselves in the foot and created errors. You can’t trust this stuff. You can’t believe this stuff.

NF: When you say you should never take a sentence, right? If it’s copy editing, it’s changing your sentences. 

NT: But I’m not saying, ‘Change the sentence.’ I’m saying, ‘Offer suggestions.’ 

TS: Would the publication have to denote, like Nita suggested, if they used AI-generated content?

NF: It’s very unclear. Right now, the only legal ruling we have says that if it’s entirely AI-generated, like if a chatbot generated it, then a human can’t claim copyright on it. But what about when it’s co-created? What about when you go in and say, ‘Write me a paragraph that argues the following,’ and you give it all of the substantive content you want, but it actually generates the words and the sentences, and then you edit the words and sentences? How do we think about that? We have no idea. And a lot of people are doing that, a lot of students are doing that, right?

NT: Not these students. 

TS: Oh, of course not. 

NF: I’m less against the possibility of doing it. From a legal complexity perspective, absolutely. From a factual perspective, absolutely. But from a drafting perspective, for first drafts, I’m less troubled by the idea of my students, for example, coming up with a very good concept and telling a chatbot, ‘I want to argue the following; write a paragraph that summarizes it.’ Then the student plays with it until they feel like it says what they want it to say. Now, who owns the copyright? Does OpenAI have a right to the copyright claim? I think the answer is going to be no, because there was more substantive human input, because it’s human thinking. But there’s drafting and co-drafting and coevolution and co-editing happening with AI. Will it result in better writing? Probably not.

NT: Probably not. And there may be a day when my hard-edge stance changes, but for now, I’m holding to it.

NF: And how are you policing that?

NT: We had one journalist who violated the rule, and we executed them.

NF: I say this because it comes up so much in the university setting. As a professor, it’s coming up every day in every class, and students feel like they’re cheating if they use tools that are going to fundamentally transform their lives and be part of the rest of their lives. And I think we have to get a lot more nuanced in how we think about it, given that people are going to be using AI for this kind of co-piloting and coevolution in how they write.

Grant Ashley is the editor in chief and can be reached at grant.ashley@ubspectrum.com  

Ryan Tantalo is the managing editor and can be reached at ryan.tantalo@ubspectrum.com 


RYAN TANTALO

Ryan Tantalo is the managing editor of The Spectrum. He previously served as senior sports editor. Outside of the newsroom, Ryan spends his time announcing college hockey games, golfing, skiing and reading.


GRANT ASHLEY

Grant Ashley is the editor in chief of The Spectrum. He's also reported for NPR, WBFO, WIVB and The Buffalo News. He enjoys taking long bike rides, baking with his parents’ ingredients and recreating Bob Ross paintings in crayon. He can be found on the platform formerly known as Twitter at @Grantrashley. 
