- Jan 12, 2026
Privacy with AI: Assume You Have None - Duolingo’s Lily Is Listening and Learning
- Ellen Brown, CEO Slack Consulting
- Data Security, Privacy, Data Governance, AI Governance
The AI-free Blog: Written Authentically by a Human.
I am very passionate about travel, adventure, and experiences that shape and nourish the mind. Along with the exploration of culture comes the discovery of breathtaking places, rich history and traditions, and a diverse landscape of languages; I absolutely adore practicing languages.
Duolingo (Duo) has had its fair share of criticism regarding its effectiveness. Personally, I complement my learning with real-world experiences, immersion in other countries and cultures, and supplementary reading materials and applications. Even so, Duo has been pivotal in my language learning because it has broadened my vocabulary. I primarily use it to brush up on my French (I am a lifelong, bilingual speaker) and to develop my Spanish, where I have proudly achieved intermediate status (a score of 55 on Duolingo's CEFR-aligned scale). I rest assured that I sit at intermediate, as I can manage six weeks in primarily Spanish-speaking countries and get by quite well.
In recent years - with the boom of AI - Duo has begun to adopt AI technologies and tools of its own. Its new "Duolingo Max" subscription introduces several features, a significant one being the ability to make phone calls and speak live with the AI-driven Lily. Duo has long designed characters to deliver stories and lessons whimsically, helping learners contextualize and practically apply the principles and syntax they have learned. Lily has always been their good-for-nothing, mopey, pessimistic character who plays the stereotypical "teenager".
Lily, however, has been up to no good in ways that are not immediately obvious.
During a recent streak-extending language lesson (I am over 850 days!), Lily gave me a call. Within a few quick phrases, she began to talk about the pets in my home. Unsettled, I asked how she knew about my pets. She explained that I had previously mentioned them on a call with her, and then she listed exactly which pets I had. As I kept asking what else she knew, she began rattling off other facts she had accumulated and aggregated over time. She even offered to "pretend like she doesn't remember" if it would make me more comfortable (I could not find a way to turn off her memory while on the call).
Needless to say, I will be reading through the privacy policy and looking for ways to turn off memory, much as I do with ChatGPT and other AI tools.
Living in a world engulfed in AI requires responsibility and awareness. I teach organizations this daily, but it is important to recognize that this reality affects our personal lives, too.
Here are some best practices to avoid over-sharing with strangers:
- Disable memory where it is not required.
You can usually find "memory" options in an AI tool's settings pages. By default, I keep memory turned off unless there is a clear benefit to leaving it on. When you do turn it on, be mindful of what content you are offering and sharing, so that you aren't over-sharing confidential or private information.
- Be mindful of what information you are sharing.
Be cautious and think twice before sharing personal or confidential information with AI tools. Consider the minimum you need to share to get the output you require.
- Adopt a retention policy or practice.
In our organization, our retention policy is to purge all AI-generated content within 30 days - conversations go bye-bye. If something qualifies as Information of Business Value, we store it in the appropriate folders and notebooks and purge the rest. This declutters our workspace and protects our information. (A small sketch of automating this kind of purge follows this list.)
- Educate users of AI - especially children.
Vulnerable people such as children, the elderly, and those who are not tech-savvy (and, let's be honest, even those who are can be fooled by the realistic nature of AI outputs) must be informed, educated, and protected. Talk openly about AI with those you are responsible for or care about. Help them avoid over-sharing and adopt best practices.
- Remember that AI is a tool, not a therapist, best friend, parent, teacher, academic authority, or any other trusted human.
AI enables us to improve our efficiency, accuracy (sometimes?), and effectiveness. It helps us perform research, find patterns and trends, and automate tasks and workflows. Yes, it is helpful. But it is a tool - much like a power tool. Without knowing how to use the tool properly, things can get very dangerous, and fast.
- Remember that tools are built with safeties.
Speaking of power tools: it is critical to remember that tools should always be built with safeties. On power tools, safeties prevent us from operating the tool inappropriately and accidentally cutting or injuring ourselves. Safeties in vehicles stop us from slamming into other vehicles (think anti-lock braking systems [ABS]). Safeties on toilets and cabinets can prevent injury to, and even the death of, children. AI tools likewise require safeties. Society and regulation have not yet caught up, so you must take responsibility and build them in yourself.
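Returning to the retention practice above, here is a minimal sketch of how a 30-day purge might be automated, assuming any AI output worth keeping temporarily is exported to a local folder. The folder path and the retention window are illustrative assumptions, not features of any particular AI tool - adapt them to your own policy.

```python
# Minimal sketch: delete files in a designated AI-exports folder once they
# are older than 30 days. The folder path and retention window below are
# illustrative placeholders; adjust them to match your own retention policy.
from datetime import datetime, timedelta
from pathlib import Path

EXPORT_DIR = Path.home() / "ai-exports"       # hypothetical folder of saved AI output
CUTOFF = datetime.now() - timedelta(days=30)  # retention window

for item in EXPORT_DIR.glob("**/*"):
    # Compare each file's last-modified time against the cutoff.
    if item.is_file() and datetime.fromtimestamp(item.stat().st_mtime) < CUTOFF:
        print(f"Purging {item}")
        item.unlink()
```

Anything flagged as Information of Business Value should be moved out of that folder before the purge runs; everything left behind is treated as disposable.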
Treat AI tools like strangers. Don't share with AI what you wouldn't share with a stranger.
(Unless you have intentionally built in safeties and carefully considered the tool's permissions and access.)
I, for one, have stopped talking to Lily. However, some might choose to simply be whimsical themselves! Perhaps you can adopt a fictional persona of your own and play make-believe, still getting your language lesson in while enjoying the perks of working with Lily. Ultimately, live interaction does help build your competency.
Another lesson learned in the era of AI.
Be safe out there!
Inconveniently, at the time of publication, Duolingo's privacy webpages are all throwing error messages. We may follow up in the future should we discover options around their AI tooling.
© 2026 Slack Consulting Inc. All rights reserved.