Chatbots designed to manipulate emotions 😥


Weekly Newsletter

Practical AI Strategies

The GenAI companies designing chatbots to mess with your emotions

Hi everyone,

I've spent the last couple of weeks visiting schools as we kick off Term 1 here in Australia, and it's safe to say we are now past the point of "what is AI and how does it work?" and well into "what's coming next?" A lot of my conversations were around the more technical aspects of GenAI, including "agentic" browsers, Model Context Protocol (MCP), and semi-autonomous swarms of AI flinging themselves all over the web...

But on the whole, most people simply don't have time to keep up with the frantic pace of change. Last week, I asked this mailing list and my online course community what would make it easier - some form of community? A monthly PD? The answer was, almost unanimously, a "library of resources" that educators can dip into whenever they have a question and get trusted, curated advice.

So that's what I'm going to start working on next.

If you want to give feedback, make suggestions, or just tell me what you really want to know about AI, the survey is still open (link at the end of this email).

Teaching AI Ethics: Emotions and Social Chatbots

Social chatbots are one of the biggest potential harms in the AI industry for young people. If you're not familiar with them, platforms like Character.AI, Replika, and Chai are deliberately designed to evoke emotions in their users, encouraging them to form - often unhealthy - relationships.

In the 2026 update to my Teaching AI Ethics series, I've switched my focus from AI that tries to read emotion (affect recognition) to AI that tries to provoke feelings.

If you're an educator working with young people, marginalised or at-risk youth, or if you have children yourself, you need to know what's going on with social chatbots.

OpenAI starts advertising to free users

The moment we've all been waiting for has finally arrived: ChatGPT will start rolling out ads to users very soon. Nobody I have spoken with is at all surprised by this: it's basically the trajectory of every digital platform for the past 20 years.

But with GenAI, it's an even bigger problem. These things know a lot about their users, from medical conditions to employment status to their deepest, darkest thoughts and feelings. They don't need to rely on algorithms which infer data about users: users tell them everything!

So I think we should care that OpenAI is starting to run ads, especially in education.

GenAI is a microwave

If you've been here for a while, you'll know I love metaphors for AI. I genuinely think they're a useful way to conceptualise tough concepts. So yesterday, when the COO at Peninsula Grammar said, "AI is a microwave," my ears pricked up.

The gist of it was this: AI is fine for making something fast and convenient. It can produce something 'nutritious', but it might be a bit bland. People like convenient things, but sometimes it's better to do it the old-fashioned way.

I ran with the idea and extended the metaphor with another similarity to AI: how many of the functions on your microwave do you actually use? If you're like me, you use the defrost button, and maybe the 'quick 30 second' button. Now, my microwave came with about half a dozen inserts. I think I could cook a rotisserie chicken in there if I wanted to. But I have no idea where the instructions are.

GenAI has a similar problem: it has no instructions, and most people only use a couple of buttons, like "generate text" and "summarise this". So it never occurs to them to cook the rotisserie chicken - to use GenAI to clean the data in a 1,000-line spreadsheet, say, and then turn that data into a visual dashboard.
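For the curious, the "rotisserie chicken" version is less exotic than it sounds. Here's a minimal sketch, in Python with pandas, of the kind of cleaning-and-summarising script a chatbot can write for you on request. The spreadsheet, column names, and values are all made up for illustration:

```python
import pandas as pd

# Hypothetical messy spreadsheet: stray whitespace, inconsistent casing,
# a missing value, and numbers stored as text with thousands separators.
raw = pd.DataFrame({
    "School": [" Grassdale HS", "grassdale hs", "Peninsula GS", None],
    "Students": ["420", "415", "1,200", "300"],
})

# Clean: drop rows with no school name, normalise whitespace and casing,
# and convert "1,200"-style strings into real integers.
clean = (
    raw.dropna(subset=["School"])
       .assign(
           School=lambda d: d["School"].str.strip().str.title(),
           Students=lambda d: d["Students"].str.replace(",", "").astype(int),
       )
)

# Aggregate into the sort of summary table that sits behind a dashboard:
# total students per school, largest first.
summary = clean.groupby("School")["Students"].sum().sort_values(ascending=False)
print(summary)
```

None of this requires you to know pandas yourself - the point of the metaphor is that the extra "buttons" exist, and GenAI can read the manual for you.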

You can expect to read more about this metaphor on the blog once the idea is fully cooked...

Cheers,

Leon

PS: I'm still gathering intel on the best ways to deliver GenAI professional learning when it seems no one has time (or energy) to cram yet another thing into the calendar.

The "library" idea seems to be really popular, but I'm leaving the survey open for another week to catch as many thoughts as possible.


Stay informed.
The AI Reads page curates fresh, practical articles on AI and education, updated every week and free to browse.
Check it out → https://leonfurze.com/ai-reads/


211 Tahara Grassdale Road, Grassdale, VIC 3302
Unsubscribe · Preferences

Leon Furze

I'm an educator, writer, and podcaster who loves to talk about artificial intelligence, education, and writing & storytelling. Subscribe and join over 9,000 educators every week!
