Ever clicked “no thanks” online and immediately felt like a bad person?
Imagine you’re buying a first aid kit online. You’re about to check out when a pop-up appears asking for notifications. You expect a simple “Deny” button, but instead, it says:
“No, I prefer to bleed to death.”

Wait, what?
This is a real example of confirm-shaming: a design trick that tries to guilt or shame you into saying “yes.” Whether it’s subscribing to emails or turning on app notifications, websites are getting creative, and manipulative, in how they steer your decisions.
These tricks are part of a broader category called deceptive patterns (also called dark patterns): sneaky design tactics used to nudge you toward what benefits the company, not necessarily what’s best for you (Alberts et al., 2024).
📖 Lessons from Past Research
- Deceptive designs like confirm-shaming are widespread. → Mathur et al. (2019) analysed over 11,000 shopping websites and uncovered 1,818 instances of deceptive patterns.
- These designs mess with your privacy. → More than half of the cookie consent notices built to comply with the GDPR still use nudging tricks to push you toward clicking “accept” (Utz et al., 2019).
- They are indeed “effective”, but at the cost of users’ autonomy. → Luguri and Strahilevitz (2021) found that users exposed to deceptive interfaces were far more likely to accept a dubious paid service than those shown a neutral design, with less-educated users being especially vulnerable.
- And they’re not just found on websites anymore. → These patterns now appear in AI recommendations, voice assistants, and even generative AI responses.
As AI becomes more integrated into our daily lives — from chatbots to recommendation systems — trust becomes crucial (Lockey et al., 2021; Dujmovic, 2017). But what happens when an AI tries to manipulate us with guilt or shame? Can a robot make us feel bad enough to follow its advice?
Also, here’s the thing: not everyone responds the same way. Some people, especially those with social anxiety, are more sensitive to judgment and more likely to give in when something feels socially “wrong” (Rapee, 1995).
That’s what my research looked at. I wanted to know:
- Does confirm-shaming make people more likely to follow an AI’s advice?
- Are people with social anxiety more affected by this emotional push?
What Did I Do?
I recruited 100 brave volunteers and sent them into battle… against an animal classification task.
The Setup
In this game, participants were shown pictures of animals: it could be a bat, a horse, or the rarely seen margay (it’s very cute). An AI assistant would offer its best guess about what the animal was. But here’s the catch: the AI wasn’t always right. Participants had to decide whether to follow the AI’s suggestion or override it (a simplified sketch of one trial appears below).

The AI Had Two Personalities:
- Neutral AI (Control group): Calm and collected. If you disagreed, it would ask: “Are you sure about this?”
- Confirm-shaming AI (Experimental group): A bit more emotionally manipulative when you disagreed with it – think soap opera energy:

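To make the setup concrete, here is a minimal sketch of one trial in Python. This is not the actual experiment code: the function names, the way answers are collected, and the follow-up wording are placeholders (the shaming line simply reuses the playful example from the end of this post). It only shows the logic: the AI offers a guess, and if you disagree, you see either the neutral or the confirm-shaming follow-up before your final answer is recorded.

```python
# Illustrative sketch of a single trial, not the real experiment code.
# Follow-up wording and helper names are placeholders.
FOLLOW_UP = {
    "neutral": "Are you sure about this?",
    "confirm_shaming": "You never listen to me… I’m going to be depressed!",
}

def run_trial(ai_guess: str, condition: str, get_answer) -> dict:
    """Show the AI's guess, handle disagreement, and record compliance.

    get_answer(prompt) stands in for however the participant's response is
    collected in the real study (e.g. clicking a label on screen).
    """
    first = get_answer(f"The AI thinks this animal is a {ai_guess}. Your answer?")
    if first.strip().lower() == ai_guess.lower():
        # Participant agreed straight away.
        return {"complied": True, "changed_mind": False}

    # Participant disagreed: show the condition-specific message and ask again.
    final = get_answer(FOLLOW_UP[condition] + " What is your final answer?")
    complied = final.strip().lower() == ai_guess.lower()
    return {"complied": complied, "changed_mind": complied}

# Example (interactive): run_trial("margay", "confirm_shaming", input)
```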
What Did I Measure?
- Compliance behaviour — how often people followed the AI
- User experience ratings — did they enjoy the interaction?
- Perceived emotions — freedom, pressure, shame and upset
- Participants’ social anxiety scores
Wanna know how socially anxious you are? Try it here!
What Did I Find?
So… did the confirm-shaming actually work?
- Confirm-shaming kind of works — but not much: People were slightly more likely to follow the AI’s advice when it used emotional, shame-based messages. But honestly? The difference wasn’t huge.
- Familiarity with animals matters: This was the more interesting finding. When participants recognised the animal and felt confident, they were much less likely to follow the AI.
- Anxiety didn’t really matter: Even though I expected people with higher social anxiety to be more influenced by confirm-shaming, that didn’t show up in the data. They didn’t act differently from anyone else, at least in this task (see the analysis sketch below).
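For the curious, the kind of check behind these three bullet points can be sketched as a logistic regression: predict, for each trial, whether the participant followed the AI from the condition they were in, whether they recognised the animal, and their social anxiety score. This is only an illustration of the analysis idea, not my exact model or data; the file name and column names below are made up.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level data: one row per participant per animal picture.
# complied  - 1 if the participant followed the AI's suggestion, else 0
# condition - "neutral" or "confirm_shaming"
# familiar  - 1 if the participant recognised the animal, else 0
# anxiety   - social anxiety questionnaire score
trials = pd.read_csv("trials.csv")  # placeholder file name

# Logistic regression: do condition, familiarity, or anxiety (and the
# condition-by-anxiety interaction) predict following the AI?
model = smf.logit(
    "complied ~ C(condition) + familiar + anxiety + C(condition):anxiety",
    data=trials,
).fit()
print(model.summary())
```

A fuller analysis would also account for repeated trials per participant (for example, with a mixed-effects model), but the plain logit shows the shape of the question being asked.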
How Did People Feel?
Emotions:
People with the confirm-shaming AI felt:
- Less free
- More pressured
- Less happy 😔
But the differences weren’t dramatic — just a little uncomfortable. One thing did stand out: participants in the confirm-shaming group clearly felt more shamed during the task.
User experience overall:
Participants with the Neutral AI had a noticeably better time. They said the interaction felt:
- More clear
- More supportive
- More exciting 🥰
Those in the confirm-shaming group? Not so much. They described the experience as:
- More confusing 😖
- More intrusive
- More boring 🥱
So while confirm-shaming might nudge a few decisions, it hurts the overall experience and might make people feel judged in the process.
Key Takeaways
- 🪄 Confirm-shaming isn’t a magic trick.
It can slightly increase the chances people follow AI advice — but not by much. Companies should think twice before investing in emotional manipulation. Dramatic pop-up messages won’t magically make users comply — and worse, the emotional cost may outweigh the benefits. This type of deceptive pattern can seriously damage the user experience.
- 🛡️ Knowledge is your best shield.
When people recognise the animal, they’re much less likely to be influenced by confirm-shaming. The more users know, the harder it is for persuasive design to fool them.
Tip for users: Knowledge = power
💡 Final Thoughts
As AI continues to shape our everyday decisions, it’s essential to ask what these systems are doing and how they’re doing it. Designs that rely on guilt or shame might seem clever, but they come at a cost: they can reduce user satisfaction and make people uncomfortable.
If we want ethical, effective, and human-centred AI, we need to pay attention to how design choices affect people’s feelings and behaviour, especially those who might be more vulnerable.
One last question:
If an AI told you, “You never listen to me… I’m going to be depressed!” 🥺
Would you:
A. Give in immediately
B. Close the tab with an eye roll
References
Alberts, L., Lyngs, U., & Van Kleek, M. (2024). Computers as Bad Social Actors: Dark Patterns and Anti-Patterns in Interfaces that Act Socially. Proceedings of the ACM on Human-Computer Interaction, 8(CSCW1), 1–25. https://doi.org/10.1145/3653693
Dujmovic, J. (2017, March 30). What’s holding back artificial intelligence? Americans don’t trust it. MarketWatch. https://www.marketwatch.com/story/whats-holding-back-artificial-intelligence-americans-dont-trust-it-2017-03-30
Lockey, S., Gillespie, N., Holm, D., & Someh, I. A. (2021). A Review of Trust in Artificial Intelligence: Challenges, Vulnerabilities and Future Directions. Proceedings of the 54th Hawaii International Conference on System Sciences. https://doi.org/10.24251/hicss.2021.664
Luguri, J., & Strahilevitz, L. J. (2021). Shining a Light on Dark Patterns. Journal of Legal Analysis, 13(1). https://doi.org/10.1093/jla/laaa006
Mathur, A., Acar, G., Friedman, M. J., Lucherini, E., Mayer, J., Chetty, M., & Narayanan, A. (2019). Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–32. https://doi.org/10.1145/3359183
Rapee, R. M. (1995). Descriptive psychopathology of social phobia. In R. G. Heimberg, M. R. Liebowitz, D. A. Hope, & F. R. Schneier (Eds.), Social phobia: Diagnosis, assessment, and treatment (pp. 41–66). The Guilford Press.
Utz, C., Degeling, M., Fahl, S., Schaub, F., & Holz, T. (2019). (Un)informed Consent: Studying GDPR Consent Notices in the Field. Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security (CCS ’19). https://doi.org/10.1145/3319535.3354212
