Duck Tales: Why DuckDuckGo is giving users a choice about if and how they use AI (Ep.23)
18 March 2026


In this episode, Gabriel (Founder) and Zac (SVP, Insights) discuss AI adoption, common user concerns, and why we’re building AI that users can control and customize.

Disclaimers: (1) The audio, video (above), and transcript (below) are unedited and may contain minor inaccuracies or transcription errors. (2) This website is operated by Substack, and Substack's privacy policy applies.

Gabriel: Hello, welcome back to Duck Tales. I’m Gabriel, founder of DuckDuckGo. I have with me today Zac Pappis.

Zac: Yep, I’m Zac. I’ve been here for about 14 years now, so we’ve had the pleasure of working together for a very long time. I’m on our insights team, where we do a lot of market research and user research, generally trying to get better insight into what our customers and future customers want.

Gabriel: Cool, happy to have you. And we’ve been friends for 15 years. We’re going to talk about AI today. You’ve done a lot of research on AI over the last year, and we recently put out this Yes AI / No AI public poll. We’ve talked about AI a few times on the show, in terms of product but also more generally. Our approach to AI has been a bit different from other tech companies’. Obviously we’re asking people if they want AI; no one else seems to be doing that. But more generally, our approach is to make AI features that are private, of course, because we’re a privacy company; useful, though customers have different opinions about which AI features are useful and which aren’t across all sorts of products; and, most importantly for this discussion, optional. We’re making all of our AI features optional: you can turn them off, or actually tune them, and we’ll get into that too. So that’s been our approach: private, useful, optional. We thought we’d talk with you today, since you’ve been doing a lot of research with consumers and looking at other research, about how those three things thread through the research. Obviously that’s a big topic, so maybe we can start at the highest level and work our way down. What does the landscape of AI look like right now? What are you seeing in the research trends you’re following?

Zac: Yeah. Generally, as consumers and people downstream of a lot of technology, we feel it’s been pushed on us, and on the general market, without anyone really asking if we wanted it. The market moves so fast that the consumer has been left behind; no one ever got an informed choice or gave consent to how prevalent AI has become in every product. And we think that’s a mistake, for users and for the companies building it. The right choice, as you laid it out, is to make these things truly useful, optional, and of course private. But that’s not what we’ve seen, and I don’t think that’s how most people have felt about the way companies have rolled these products out to their broader consumer base. Lots of tech companies have put AI overviews into products or turned AI on by default without really asking for consumer consent. It reminds me a lot of what happened through the 2010s with the cookie era and social logins: a lot of cases where technology just sort of appeared for people without much demand for it or much cost to use it, and without much thought given to how exactly their data was going to be used or how it would impact the usefulness of the product they were using.

Gabriel: Yeah, that makes sense. I mean, giving them the benefit of the doubt on intentions, even though we just criticized the approach, a lot of people’s assumption seems to have been that this was super useful and everyone was going to love it. And to some degree it’s turned out not to be that useful for lots of people in different scenarios. And some people just don’t want it shoved down their throat; they would like the choice. They might use it anyway. So with that in mind, that’s the general consumer sentiment, but in terms of actual adoption, what data points are you seeing right now?

Zac: Yeah, so we see that this is everywhere, and we also see that people are using it a lot. That comes from studies we’ve run ourselves as well as ones we’ve evaluated from external companies and polling services. Pew, for instance, ran a study in June of last year showing a lot of variability in who is using AI and who has a strong preference for it. A good example: 29% of parents use AI daily, versus 15% of non-parents, almost twice the rate. That’s the type of disparity we see across different consumer types. Older or younger, more educated or less educated, there is a real preference and usage gap between these groups. And that doesn’t mean this product, or a specific implementation of it, is going to be right for everybody. So that level of specificity in exactly how the product appears to people is really important to us. Despite the fact that around 30% of parents are using AI daily, an even larger share, 50% of Americans, are more concerned than excited about AI, and that’s gone up over the last few years. So you have these two things happening: a lot more exposure to and usage of AI, more familiarity with it, but as people become more familiar with it, their concern goes up. To us, that’s evidence that people are living with a choice they didn’t really get to make.

Gabriel: Yeah. Well, one thing you said in there ties to some other data points I’ve seen very recently. Correct me if I’m wrong, but there was another Pew study that came out recently on teenagers, and I think about half of teenagers were regular daily users. And there was another one on ChatGPT usage, where maybe 40% of desktop users were weekly users. What I take from that is that this is obviously very significant adoption in three or four years. However, that still means a majority of people, or close to it, are not daily users. So the one headline I take from what you said and what I’ve seen is that yes, there is a lot of adoption, but this idea that everyone’s using it all the time, that narrative just seems not true. And it relates to your concern points: the people who are using it are concerned, and the people who aren’t using it are also very concerned, so there’s a generally building concern. There are obviously lots of different issues people have with AI. So how do you think about that concern? If you try to piece it apart, where and how are people expressing that concern, and how is it related to their...

Zac: Yeah. I think it’s something we know pretty well, because we’ve seen a similar type of concern with tech overreach before. Some stats I have on hand, because I did look into this a little before today: there was a study by Resilience [CHECK] in December 2025, so pretty recent, showing that 54% of the US adults they surveyed have avoided AI-powered features. And in our own polling of US adults, we’ve seen something closer to 13% of people actively disabling AI search or browser features in their browser to protect themselves. When we look more deeply at why people are taking these extra steps, either to tune or to completely avoid the way AI has been integrated into their products, we see a couple of familiar concerns. There’s a concern that companies are rolling these new technologies into their products so quickly that they come with new privacy trade-offs and data security risks. We’ve seen that in the past with Cambridge Analytica and other cases, where collecting a lot of data just increases the risk surface area of holding that data. When asked why they were avoiding AI or turning these features off, I think 51% said they had reduced data sharing because of AI, meaning they were proactively sharing less as a result of AI being in the product. And when asked what they wanted, the majority of answers were opt-out rights, data traceability, and disclosure. None of those things are “no AI.” They’re more consistent with a theme of control. People just want to know when AI is being used and how it’s being used, and to have some input and flexibility into where it’s applied in the product.

Gabriel: Yeah, those seem like totally legitimate concerns to us. More broadly, what I’m hearing is that privacy is one of the main concerns people have, which is obviously why we’re building private AI and giving people that control. The second is that people want options; to your point, it’s not just yes or no. Yes, some people, though a smaller percentage as you pointed out, want to get rid of all AI entirely, maybe because they have deeper objections for various reasons. But it seems like the majority of people actually just want it to be useful and private. What’s useful to one person may not be useful to another. So if there are 10 AI features, maybe they want to engage with six out of 10, and maybe they want to turn the dials on them a little differently. It’s this broader customization of AI to make it useful that we’re trying to do with our search features, and I don’t think other companies have approached it in the same way. They’re just kind of all on, all the time, you know?

Zac: Yeah, exactly. Funny enough, there was an Ernst & Young study pretty recently, maybe January or February of this year. It was a poll of 500 or so US business leaders, people in executive or director positions at large companies, I think all with over 40,000 employees or something similarly large. From that survey, 78% of the leaders polled said that their adoption is outpacing their ability to do good risk management. And 45% of the same people said they had a confirmed or suspected data leak via unauthorized AI tools. So they’re prioritizing speed over the exhaustive vetting they would need to ensure they’re actually producing something that’s safe, from both a privacy and a security standpoint, and also useful. They’re getting it into a product without really understanding user needs or how the product is being adopted by those who are core to their business.

Gabriel: Yeah, that’s actually a really good point. We expect these numbers we’re citing to change over time, right? Concern could go down if you address people’s needs for transparency and control, and it could also go down as people discover actually useful features and ways of using them. Part of the issue is that there hasn’t been that control and transparency. And part of the issue, to that last point, is that things are moving too fast for people, and change takes time: time to understand these technologies, to build good risk management, to put good processes in place that don’t exploit your data or create other security and privacy risks. So I imagine that longer term some of this will settle down, but if I were to summarize: it just feels like it’s moving a bit too fast for people, and everything needs to slow down a little.

Zac: Yeah, I’m just kind of spitballing here, but it seems almost like a double-edged sword. You have people in charge of making these product decisions rushing them out the door without fully understanding them, and then as consumers experiencing the downstream effects of those product changes, we feel it. So you’re getting this double whammy. If you work in almost any industry right now, it’s probably impacted by AI, and it’s likely your organization is dealing with these same challenges: not having the right oversight or internal expertise to understand the risks, and a lot of pressure from the market and competitors moving so quickly that you feel like you’re going to get left behind. It’s understandable why some people might feel that the industry, and the changes, are moving faster than they can really make sense of them. I think that’s what we feel, both as people somewhat responsible for creating consumer technology and as consumers ourselves. We see it in the products we use every day from Apple, Microsoft, et cetera.

Gabriel: Yeah. Well, let’s take a few minutes to talk about the Yes AI / No AI campaign we did. We put out what was essentially a public poll, not restricted to our users (I’ll get to that in a second), asking: are you Yes AI or No AI? We understand, as we just talked about, that it’s all nuanced and it’s about control, so it’s a bit binary to ask people to choose a side. But it was kind of a finger-in-the-air, just-say-which-side-you’re-on question. Part of it was that we think the people on the concerned side just haven’t had much of a voice; they haven’t really been listened to. So this was a bit of an attempt to show that, and an opportunity for a tech company to listen and see what’s out there. The poll ended up very skewed. Interestingly, the overall numbers were something like 85-15, but when you polled our actual users on a platform, it was more like 50-50. What happened was the poll went viral in the No AI community. My theory is that not a lot of people are speaking to this community; we did, so it went viral there, whereas everyone is speaking to the Yes AI community, and they don’t really have a reason to go vote Yes AI, but people really do want to express their No AI vote. So I thought it was interesting. We didn’t really know what to expect; that was the hypothesis, and that’s really what happened. My read on it, and tell me what you think, is that some people out there, maybe listening to this, are thinking: I still don’t believe there’s No AI sentiment here. I think we’re here to say: yes, there is. We have so many data points showing that a large percentage of people are concerned about different aspects of AI.

Zac: Yeah, exactly. The campaign was awesome, and I know it’s not something we typically do, so it was great to see the response to it. But it was really meant, correct me if I’m wrong, to point out the gap: that no one really gave consumers a choice. In this gap between yes and no, most people aren’t in an absolutist camp. Even if you have some anti-AI sentiments, more than likely, given something like a bell curve, you fall somewhere in the middle. It can be useful for some things, in certain conditions, or if it’s opt-in or otherwise explicit for the user; in other cases, maybe not. The experience this campaign really drew on was what we talked about earlier: a flood of AI without a lot of ramp-up, with consumers not really getting a chance to speak out about it. As you pointed out, the people who were really pro-AI were getting the world they wanted; it was bending in their direction. But the No AI crowd, and people on the other parts of that bell curve, didn’t really see anything coming for them. It probably felt, and still seemingly feels, like an AI-powered world we’re heading into, and without understanding what that is, it’s certainly scary. And alongside the other concerns we’ve seen in the data we’ve shared today, there are privacy concerns, and there are systemic concerns about how this will impact the rest of the products they use. If it’s something getting built into Amazon, how does it impact my Echo device? How does this data migrate from one process or one product to another? All of that comes in tenfold with AI, because it’s such a sensitive topic for people. The type of content you engage with in AI is uniquely different from, say, something you would type into your browser: it’s a lot more personal, it’s a lot deeper, and the history that builds up from it can feel too personal, bringing forward a lot of the concerns people had in the past with cookies and general corporate tracking. One thing I think ours does really well, and that we’ve seen a lot of positive response to, is that you can use it with no account. You can just go to duck.ai and start using it. No accounts, it’s not training on your data, you can stop using it when you want, and you can turn it off if you’d like. All of that optionality and that level of control is just not something I think we’ve seen in any other product.

Gabriel: Yeah, agreed. And on the back of that campaign, people have asked specifically what we’re doing. It’s important to say, first of all, that all that optionality was built in before we ran the campaign. You could already turn Duck AI off completely from search, turn Search Assist off within our search results, and on Duck AI itself choose which model provider you want, with no account needed, like you said. But additionally, we also created the domains noai.duckduckgo.com and yesai.duckduckgo.com, which have all AI off and all AI on built in, respectively. If you really want to be at either end of the extreme and you don’t want to tune anything, we took the time to make those two bespoke experiences. And we have seen a decent uptake on the No AI side. Thinking about closing out a little: is there anything on the product side, or going forward from a research perspective, that you want to talk about?

Zac: Yeah, I could talk for hours; I don’t know if the podcast can sustain that. But something you just said was interesting, because I ran into my neighbor yesterday. She’s vaguely aware that I work at DuckDuckGo and was asking me about AI in general, not ours, just what’s going on with AI. She was really delighted to hear about noai.duckduckgo.com, and I directed her to our search overviews and let her know how you can tune the frequency with which those appear in search. She was really taken aback by that. That’s the kind of experience we’re trying to create more of, where the need clearly matches the product we’re providing, and that connection happens almost instantly, where people recognize that this is what they’ve been looking for. For us, that’s a lot of trust and control, turned into what you’d call a delightful UX, to get back to the theme of the podcast. So not AI that confuses you, but AI that’s there when it’s truly helpful. How and where that gets embedded into the product is a lot of what our research is focused on going forward. You’re going to see a lot of work on how we integrate AI and make it easy to get to, easy to get out of, and easy to switch from when you’re navigating between, say, traditional search, browsing, and AI. Your phone is in your pocket, your laptop is in your bag, and in all of those contexts a technology can be helpful, but you may not know exactly which one will be. So we want it to be present when it’s helpful, there when you’d like to invoke it, but otherwise tucked away and out of the way.

Gabriel: Cool, that’s a good place to end. Well, thank you, Zac, for coming on.

Zac: Thanks for having me. This was great. Thank you. Bye.

Gabriel: Cool, thanks everybody for listening. See you next time. Bye.


