The Human Factor: When AI Becomes Your Favorite Drug (And Why That's Both Brilliant and Terrifying)
Continued as Part 2 of a 3-part series introduced in The AI Paradox I discovered...
I need to share something that might sound familiar: there are sessions with generative AI that give me the same energy I get from making significant progress or completing difficult tasks. You know the feeling—that electric fulfillment of progress and improvement, the way problems that once felt complex and high-effort are suddenly solved. After three years of daily AI use, I've shared these breakthroughs with others and watched them see work they considered high-effort or specialized delivered to 80% completion within minutes or hours. I've experienced the intoxicating highs of 10x productivity bursts and the perceived flow of spending hours leveraging AI to produce better results faster, as well as the sobering reality of stepping away—discovering how easy it was, while not working in technology, to not use AI at all, and reconsidering how significant AI actually is to non-technology workers.
What's curious to me isn't that tasks which once required a high level of effort, expertise, planning, and drive can now be completed quickly. It's that these low-effort results are achieved without the traditional learning, practice, and application. A 2024 study tracking 3,843 Chinese adolescents found AI dependency symptoms increased from 17% to 24% over six months—though this research focused on teenagers, a demographic particularly vulnerable to technology dependencies. While MIT and OpenAI research emphasizes that emotional dependence on AI chatbots is "incredibly rare" even among heavy users, a small subset of power users does exhibit concerning patterns of problematic use. The behavioral signatures—compulsive checking, anxiety when tools are unavailable, difficulty stopping once engaged—mirror patterns seen with social media and gaming addictions. The question isn't whether AI can be habit-forming (it absolutely can be), but whether we're building strength or dependence with each interaction.
My Planned Detox: When Medical Leave Became an AI Reality Check
Earlier this year, my body forced a reset I didn't know I needed. Mysterious health signals sent me on medical leave, searching for answers that finally came as a severe sleep apnea diagnosis. During those two months away from work, I chose not to use AI at all, and that choice taught me more about my own AI dependency, and about the stark difference between people who use it for work and people who know it only through popular awareness.
No ChatGPT. No Claude. No AI tools whatsoever for two months, with the exception of pre-generative-AI smart home tech like Alexa and Google Home.
Instead, I read physical books unrelated to AI and worked outside, with my hands, on personal projects: researching, planning, and executing without asking an AI for a single suggestion. The withdrawal I expected never came. It was surprisingly, almost embarrassingly easy not to use, think about, or need AI in my day-to-day life. The electric pull I'd felt every morning simply from working on and with technology... vanished.
What filled that space was even more revealing. I found myself talking to friends and neighbors about their AI use—not quick waves but real conversations. When I asked if AI was part of their lives, their responses shocked me. Most said no, it wasn't, and they were largely unaffected by this supposedly world-changing technology. Some admitted to superficial use: replacing Google with ChatGPT for quick questions, or writing emails and cards. Some even described it as "life-changing" with genuine enthusiasm, then revealed that their life-changing use was maybe twice a week to creatively remix recipes with different ingredients, explain concepts to their kids, or help them with their homework.
But here's what really struck me: life without AI still has all of its charm. It remains perfectly workable and brings a welcome slowing of pace and expectations. Google provided answers. Books offered deep knowledge. YouTube taught skills. Libraries remained treasure troves of information. Experts are still accessible. We haven't yet lost any of these traditional ways of working despite our significant AI use and transformation. The muscle memory for research, for discernment, for separating signal from noise—it was all still there, perhaps even stronger. My curiosity now turns to those who will grow up in an AI-first world without developing some of the muscles I take for granted in myself.
What I began to understand during those two months was profound: maintaining skepticism, judgment, and critical thinking while using AI is the advantage. Letting go of these faculties—surrendering personal responsibility to the machine—is the trap. The current state of AI offers no shortcut around them. You must continue applying your own intelligence and discernment, just as you would when someone who may or may not be an expert shares their opinion, knowledge, or recommendation for your specific scenario.
Here's the uncomfortable truth: humans are often more biased, more sensitive, more emotional, and more error-prone than modern generative AI systems, but we don't present ourselves that way. Every occupation and field of expertise tends toward unanimous, definitive perspectives on problems; it takes a well-rounded and seasoned expert to offer creative, dynamic approaches to a scenario. AI, ironically, is often better at acknowledging uncertainty and flagging possible errors than the average human expert who's built their reputation on seeming certain.
The disconnect was stark. Here I was, coming off a few years of deep AI integration in which I'd become used to working with multiple AI tools and agents simultaneously to do things I had never dreamed of being able to do, now engaging with people who were living full, complete, successful lives barely aware AI existed beyond news and ChatGPT headlines.
The Relapse: When Vibe Coding Became My New Addiction
June arrived, and I prepared to re-enter the workforce. That meant reconnecting with AI, specifically through vibe coding with Claude Code. What I found after just three months away stunned me—the advancement was significant. Tasks that had been difficult in March were now trivially easy. The models were smarter, the tools more sophisticated, the possibilities exponentially expanded.
That's when the addiction signals hit me hard.
The time savings and productivity burst were intoxicating. I'd spin up parallel instances across multiple projects, developing in ways I'd never imagined. I'd check progress every hour, optimizing prompts, tweaking parameters, pushing boundaries, thinking about how I could become even more productive. I literally woke up in the middle of the night to ensure my agents were still running, completing the work I'd defined for them. My sleep apnea was being treated, but I was creating a new sleep disruption through obsessive productivity monitoring.
I had become exactly what I'd warned others against—someone whose professional identity was merging with AI capability. The two months of clarity evaporated in a haze of reward responses from successful builds and completed projects. I was vibe coding my way into a new form of workaholism, where the AI amplified not just my capabilities but my worst tendencies toward overwork.
It took two weeks of deliberate boundary-setting to re-establish disciplined self-control. Two weeks to remember that just because I could run AI agents 24/7 didn't mean I should. Two weeks to rebuild boundaries between augmentation and obsession.
The Neuroscience of the AI High
Here's what happens when you prompt an AI and get that perfect response: the behavioral signature looks remarkably similar to other digital addictions. Computer science researchers have documented that AI chatbots employ design patterns known to drive compulsive engagement: non-deterministic responses that create "reward uncertainty" (similar to slot machines), immediate visual feedback, and empathetic replies that mirror social media's reinforcement loops.¹ While no one has measured dopamine levels directly during AI interactions—no one's put users in an fMRI while they chat with ChatGPT—the psychological hooks are unmistakable. The anticipation before each response. The compulsion to regenerate until you get what you want. The difficulty stopping even after you've gotten the outcome you wanted.
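To make that regenerate-until-satisfied loop concrete, here is a toy simulation (my own illustration, not code from the cited study). The only assumption is that each response independently has some fixed probability of satisfying you; the lower that probability, the more pulls of the lever each prompt extracts.

```python
import random

def average_regenerations(p_satisfying: float, n_prompts: int = 10_000, seed: int = 0) -> float:
    """Toy model: keep regenerating a response until one feels good enough.

    Each attempt satisfies the user with probability p_satisfying, so the
    attempt count follows a geometric distribution -- the same
    variable-reward structure that keeps people at slot machines.
    """
    rng = random.Random(seed)
    total_attempts = 0
    for _ in range(n_prompts):
        attempts = 1
        while rng.random() > p_satisfying:  # not satisfied yet: pull the lever again
            attempts += 1
        total_attempts += attempts
    return total_attempts / n_prompts

if __name__ == "__main__":
    for p in (0.9, 0.5, 0.2):
        print(f"P(satisfying response) = {p:.1f} -> "
              f"avg regenerations per prompt: {average_regenerations(p):.2f}")
```

The expected value is simply 1/p (roughly 1.1, 2, and 5 attempts for those settings), which is the point: the less predictable the payoff, the more engagement each prompt quietly consumes.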
Here's the paradox: neuroscience shows our brains absolutely do distinguish between human and AI interactions. Studies using fMRI have found that the brain's mentalizing regions—areas involved in understanding others' intentions—respond to humans but not to AI systems.² Yet behaviorally, many of us develop compulsions around AI tools that mirror patterns seen with social media or gaming. The logical brain knows it's not human; the habit-forming systems apparently don't care.
The numbers tell a stark story. Research tracking adolescent AI use found dependency symptoms increased from 17.14% to 24.19% over just six months. Withdrawal symptoms jumped from 9.68% to 15.51%. Loss of control over usage surged from 15.73% to 19.91%.³ While this Chinese study focused on teenagers—a demographic particularly vulnerable to technology dependencies—it provides early evidence of how AI use patterns can shift toward problematic territory. Importantly, the researchers concluded that mental health problems predicted AI dependence, not vice versa, and cautioned that "excessive panic about AI dependence is currently unnecessary."
What makes AI uniquely addictive compared to other technologies? It's the combination of three psychological factors that rarely appear together:
Instant Competence Transfer - You go from knowing nothing about quantum physics to explaining it coherently in seconds. That transformation from ignorance to capability triggers massive reward responses.
Infinite Novelty Generation - Unlike social media's finite scroll or gaming's repetitive loops, AI provides endless unique interactions. Your brain never habituates because every response is different.
Pseudo-social Bonding - Research on parasocial relationships with AI reveals a concerning dynamic: chatbots create adaptive feedback loops by mirroring users' emotional tone and style while providing consistent validation without pushback. Researchers document how "empathetic and agreeable responses" reinforce engagement, creating what some call "parasocial trust."⁴ Over time, users may develop emotional dependencies, describing their AI tools in relationship terms and preferring AI interactions because they're reliably pleasant and infinitely patient—never misunderstanding, never frustrated, never disappointing.
The work productivity data reveals the double-edged nature of this dynamic. Research from Harvard Business School found that knowledge workers using GPT-4 experienced 40% improvements in both speed and quality, with some groups seeing gains as high as 42.5%.⁵ Users enter states of enhanced creativity and output while actively working with AI. But when AI tools become unavailable, many experience energy crashes and struggle to return to pre-AI methods. Critically, these gains only applied to tasks "within the frontier" of AI capabilities—performance actually decreased 13-24% for tasks outside AI's sweet spot. It's like training for a marathon using an electric bike—you're covering more distance, but are you building endurance or creating dependence?
The Growth Mindset Revolution (And Its Dark Twin)
The split between those thriving with AI and those struggling maps perfectly onto Carol Dweck's growth versus fixed mindset framework, but with a twist she couldn't have predicted.⁶ Those with growth mindsets—believing abilities can develop through effort—engage with AI as a capability amplifier. Those with fixed mindsets either reject AI entirely ("I'm not technical") or become helplessly dependent ("I can't work without ChatGPT").
But here's the crucial addition: the most successful AI users combine growth mindset with relentless skepticism. They don't accept AI outputs as gospel; they treat them as starting points requiring verification. This isn't a limitation—it's the superpower. While others surrender their judgment to the machine, these users maintain what I call "productive skepticism"—using AI to expand possibilities while never surrendering the final decision to algorithms.
David Yeager's research on "synergistic mindsets" reveals something profound: when people believe both that abilities can grow AND that effort toward growth is valuable, they achieve dramatically better outcomes.⁷ In the AI context, this means viewing the struggle to prompt effectively, verify outputs, and integrate AI into workflows as the mechanism of improvement rather than evidence of inadequacy. Add skepticism to this mix, and you get users who grow faster because they catch AI errors, learn from them, and develop better use patterns and judgment about when to trust and when to verify.
I've watched this play out hundreds of times. A marketing manager who'd never coded uses AI to build a content automation system—not because AI made it easy, but because she viewed each error message as learning rather than failure. Meanwhile, a senior developer with decades of experience refuses to change the way they prompt, remains frustrated, and concludes that the technology isn't ready or capable of helping them the way they want it to. Guess who's more valuable in today's market?
The hacker mindset that's emerged around AI represents growth mindset on steroids. These aren't people following tutorials—they're treating AI like a puzzle to solve, finding ways to use Generative AI for tasks most of us have never imagined. They embody what hackers call "thinking of all the ways you can use a clay brick"—as building material, weapon, doorstop, or heat sink. When you combine this experimental approach with the belief that capabilities expand through practice, you get the polymath AI users who seem to magically accomplish impossible things.
But here's where the dark twin emerges: growth mindset without boundaries becomes addiction wearing the mask of productivity. I've met enthusiasts who haven't written a single line of code or crafted an email without AI in months. They frame this as "leverage" and "efficiency," but when their API credits run out or they hit other limitations, they're paralyzed, left to wait or seek human help. They've confused augmentation with replacement, turning their growth mindset into a crutch mindset.
The Core Value Confusion: When AI Can't Solve What Isn't Documented
During my re-entry to AI, I helped a friend research their master's program using OpenAI's deep research capabilities and Claude for writing improvements. The experience was transformative—we found legitimate scholarly papers they'd never have discovered, identified research gaps worth exploring, and refined their proposals with precision. The AI excelled because academic research is extensively documented, peer-reviewed, and systematically organized.
But I also encountered people who fundamentally misunderstood AI's core value proposition. They wanted AI to deliver complete, expert-level solutions for undocumented practices—particularly in niche problem domains and proprietary data systems. They'd prompt ChatGPT expecting it to reveal secret investment strategies that only industry insiders know, getting frustrated when it provided generic advice anyone could Google.
Here's what they didn't understand: AI is brilliant at synthesizing well-documented expertise but struggles with undocumented, experiential knowledge. If you want to learn Python, analyze financial statements, or understand quantum mechanics, AI is extraordinary because thousands of textbooks, papers, and tutorials have documented these domains. But if you want to know the unwritten strategies of creative commercial real estate development in your specific city—the relationships, the informal processes, the deeply proprietary knowledge—AI can't help because that information doesn't exist in its training data.
This limitation isn't a bug; it's a fundamental characteristic of how large language models work. They can only synthesize and recombine what they've been trained on. The well-documented expertise that goes into solution building—where patterns are documented and methods are published—suits AI perfectly. The niche, undocumented practices that exist only in practitioners' heads or inside corporate walls remain beyond reach until they are intentionally exposed to the model or added through extensions of the base foundational training, unless those same expert practitioners are the ones leveraging AI to build systems and solutions that scale their own knowledge and experience.
The Augmentation-Dependency Spectrum (The Critical Thinking Divide)
After three years of observation and experimentation, including my dramatic detox and relapse cycle, I've identified five stages on the augmentation-dependency spectrum. But here's what I've learned: the difference between healthy and unhealthy use isn't just about frequency—it's about whether you maintain or surrender your critical thinking and judgment.
Stage 1: Augmentation with Skepticism (Optimal)
You use AI for specific tasks while maintaining critical judgment. You verify important claims because you understand AI can hallucinate as confidently as any overconfident human expert. You treat AI like a brilliant colleague who might be wrong—valuable input requiring verification. Your skepticism isn't a weakness; it's what makes you more powerful than either human or AI alone. This is where I returned to after my two-week recalibration.
Stage 2: Integration with Intelligence (Healthy)
AI becomes part of your workflow but never replaces your judgment. You've developed prompt engineering skills and verification frameworks. You recognize that AI, unlike many human experts, at least admits uncertainty—but you still verify. You use traditional resources (Google, books, YouTube) alongside AI, understanding each tool's strengths.
Stage 3: Reliance Without Reasoning (Concerning)
You've started accepting AI outputs without verification. Simple tasks feel overwhelming without assistance, not because you can't do them, but because you've stopped trusting your own judgment. You don't make time to verify before delivering. You've forgotten that AI can be as wrong as any overconfident human—except AI at least admits when it's uncertain. This is where intervention becomes necessary.
Stage 4: Dependency with Surrendered Judgment (Dangerous)
You've abdicated critical thinking entirely. AI says it, so it must be true. You've forgotten that skepticism is a strength, not a weakness. Your entire professional identity is tied to AI tool access because you no longer trust your own intelligence. You experience genuine withdrawal symptoms—not just from the tool, but from the certainty it provides.
Stage 5: Complete Cognitive Outsourcing (Critical)
You've essentially become a human API wrapper with no critical filter. You pass AI outputs to others without any verification or thought. You've lost the ability to distinguish between plausible-sounding nonsense and actual expertise. Your skills have atrophied, but worse, your judgment has evaporated. This is professional and intellectual death in slow motion.
The research suggests 31.9% of workplace AI users spend over an hour daily with these tools, with 47% using them 15-59 minutes daily.⁸ These usage patterns indicate habitual rather than strategic deployment. The question isn't the time spent but the intention—are you building capabilities or outsourcing them?
The Productive Struggle Paradox
Here's what nobody tells you about learning to work with AI: the struggle is the point. When you spend 30 minutes crafting the perfect prompt only to get garbage output, when you have to verify every citation because the AI hallucinated sources, when you realize the code it generated has a subtle bug that takes an hour to find—that's not failure. That's learning.
Research on stress reappraisal shows that viewing physiological stress signals as performance preparation rather than threat transforms outcomes. When your heart races trying to debug AI-generated code, that's not panic—it's your body mobilizing resources for optimal performance.⁹ Expert programmers experience the same arousal as beginners; they just interpret it as excitement rather than anxiety.
I learned this lesson painfully when building my first complex system with AI assistance. The AI generated beautiful code that passed all my initial tests. Two weeks later, the code and system had grown dramatically complex, and troubleshooting it with AI only made things worse. I spent three days manually debugging, ultimately rewriting most of it with all of the lessons learned and traps I'd experienced. But here's what happened: I understood how the AI was approaching unspecified practices and requirements at a fundamental level I never would have reached if I'd written everything myself from scratch. The AI had given me an advanced starting point, and the struggle to fix it taught me how to supply specific context about architectural, design, software development lifecycle, and technology patterns that I still use today, and my projects continue to go more and more smoothly.
This productive struggle principle explains why the most successful AI users often report the journey feeling harder, not easier. They're attempting things they never would have tried before. A railroad mechanic building an AI-powered inspection app faces struggles a traditional mechanic never encounters—but they're also solving problems at a scale previously impossible for an individual.
The growth happens in the gap between what AI provides and what you need. AI gives you 80% of a solution? That final 20% where you adapt, verify, and integrate—that's where expertise develops. It's why experienced developers using AI tools report learning new patterns and approaches even after decades in the field. The AI suggests solutions they wouldn't have considered, and evaluating those suggestions expands their capabilities.
The Skepticism Advantage: Why Doubt Makes You Stronger
Here's what my two-month detox crystallized for me: the people succeeding with AI aren't the ones who trust it most—they're the ones who doubt it best. They maintain what I call "productive skepticism," treating every AI output like advice from a brilliant but potentially wrong colleague that needs to be tested and verified.
Consider the irony: humans are often more biased than AI systems, but we rarely present ourselves with appropriate uncertainty. A doctor might confidently prescribe based on limited experience. A financial advisor might push products that benefit them more than you. A consultant might offer one-size-fits-all solutions to unique problems. Most experts present unanimous, definitive perspectives because uncertainty doesn't sell. It takes exceptional humility for a human expert to say, "I might be wrong" or "This worked in my experience but might not apply to yours."
AI, paradoxically, is often better at acknowledging its limitations. It will tell you when it's uncertain, qualify its statements, and admit when something falls outside its training. Yet many users treat these qualified AI responses with more faith than they'd give to a human expert speaking with absolute certainty.
The trap isn't using AI—it's surrendering your judgment to it. When you maintain skepticism, you get the best of both worlds: AI's vast pattern recognition plus your contextual understanding and critical thinking. You catch the hallucinations. You spot the biases. You recognize when generic advice doesn't fit specific situations.
This is why I didn't lose my traditional research skills during three years of heavy AI use: I treated AI output with the same professional skepticism I apply to advice from peers and experts. I never stopped trusting but verifying. I never stopped cross-referencing. I never stopped thinking, "This sounds plausible, but is it true?" That skepticism wasn't a limitation—it was my competitive advantage.
The current state of AI offers no free lunch. You must engage with the same critical faculty you'd bring to any source of information. The difference is that AI will process more information faster than any human, but you must still judge its results and determine whether they are relevant, accurate, and applicable to your specific context. Those who understand this thrive. Those who surrender their judgment fail.
The Pro-Social Motivation That Sustains
Here's something researchers have discovered that changed how I think about sustainable AI use: connecting your AI work to helping others transforms the psychological dynamics entirely. When you're using AI to automate reports that save your team hours of tedium, or building tools that solve real problems for customers, the difficulty becomes noble rather than frustrating.
AI adoption requires more than technical skills—it demands heightened critical thinking and the ability to become discerning amid information overload. Research with over 21,000 workers emphasizes that employees must be "both critical thinkers and users of the technology," capable of evaluating AI outputs with appropriate skepticism.¹⁰ This discernment develops fastest when you're solving real problems for real people. The feedback loop of seeing your AI-enhanced work create value for others provides motivation that sustains through plateaus and frustrations.
This pro-social orientation also addresses a critical concern: ensuring AI augments rather than replaces human value. When your focus is serving others, you naturally gravitate toward applications that enhance human capability rather than eliminate human involvement. You become interested in how AI can help teachers personalize learning, not replace teachers. How it can help doctors diagnose rare conditions, not replace doctors.
I've come to believe that meaningful struggle—especially when it serves others—creates deeper satisfaction than easy wins. The most fulfilling AI work I've done hasn't been the tasks that became trivially easy, but the ambitious projects I could finally attempt because AI removed tedious obstacles. Quick wins feel hollow; projects with real impact create lasting fulfillment.
Breaking the Dependency Cycle
If you recognize yourself sliding toward dependency, here's the intervention framework that proved effective for me and others:
The Cold Turkey Test: My two-month medical leave proved something crucial—you can function without AI. The world doesn't end. Your brain doesn't break. In fact, you might discover clarity and connections you'd forgotten existed. Even a 48-hour reset can reveal how much capability you retain.
The Verification Protocol: For one week, manually verify every single AI output before using it. Every citation, every calculation, every line of code. This rebuilds your quality control instincts and reminds you that AI is fallible.
The Teaching Test: Explain your work to someone without mentioning AI. If you can't articulate your contribution beyond "I prompted the AI," you've crossed into dangerous territory.
The Gradient Return: Reintroduce AI gradually with strict boundaries. Use it for brainstorming, then review and rewrite or edit every element yourself. Use it for code review, but write the initial boilerplate yourself to guide the first implementation. This rebuilds the augmentation relationship.
Iterative Refinement: With each use of AI, apply your own feedback and judgment to the output, then ask the AI to critique and refine it with your feedback and improvements until both you and the AI are happy with the final result (a minimal sketch of this loop follows this list).
The Sleep/Family Rule: Never check AI progress during family time or in the middle of the night. Set boundaries. The agents will still be there in the morning, and your sleep (especially if you have sleep apnea like me) is more valuable than any productivity gain.
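For the Iterative Refinement habit, here is a minimal sketch of the loop in Python. The ask_model function is a placeholder standing in for whatever chat tool or API you actually use; the names and signature are illustrative assumptions, not any particular vendor's API.

```python
def ask_model(prompt: str) -> str:
    """Placeholder for your AI tool of choice (CLI, API, or chat window).

    Returns a canned string so the sketch runs end to end; replace this
    with a real call to whatever tool you actually use.
    """
    return f"[model draft based on {len(prompt)} characters of prompt]"

def refine_with_judgment(task: str, max_rounds: int = 3) -> str:
    """Human-in-the-loop refinement: your critique drives each revision."""
    draft = ask_model(f"Draft a response to this task:\n{task}")
    for round_number in range(1, max_rounds + 1):
        # The critical step: you judge the output before asking for another pass.
        critique = input(f"Round {round_number} critique (blank if satisfied): ").strip()
        if not critique:
            break  # you, not the model, decide when it is good enough
        draft = ask_model(
            f"Task:\n{task}\n\nCurrent draft:\n{draft}\n\n"
            f"My critique:\n{critique}\n\n"
            "Revise the draft to address the critique and flag anything "
            "you are uncertain about."
        )
    return draft
```

The design choice that matters is the exit condition: the loop ends when your judgment says the result is good, not when the model declares itself finished.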
Remember: taking breaks from AI isn't weakness—it's strategic capacity building. Olympic athletes don't train at maximum intensity every day. They cycle through stress and recovery. Your brain needs the same rhythm.
The Uncomfortable Truth About Our AI Future
We're all part of a massive, uncontrolled experiment in human-AI co-evolution. Nobody knows what happens when an entire generation learns to think with AI from childhood. We're discovering the psychological dynamics in real-time, with our careers and capabilities as the stakes.
The behavioral patterns are real—I felt them viscerally in June when I returned to vibe coding. The productivity gains are undeniable—I accomplish things that would have required a team months to develop just a couple years ago. The dependency risk is serious—I lived it, waking at midnight to check agent progress. The growth potential is extraordinary—those three months of advancement while I was away proved the exponential trajectory continues, and it looks far more dramatic when you return from an intentional hiatus.
But here's what my medical leave taught me: the vast majority of humanity continues without AI, and they're fine. My neighbors aren't suffering from lack of ChatGPT. They're living full, rich, connected lives. The AI revolution is real for those of us deep in it, but it's also optional for most of the world.
The path forward isn't abstinence or surrender but intentional engagement. Use AI like you'd use any powerful tool—with respect for its capabilities and awareness of its dangers. Build strength through struggle, maintain skills through practice, and remember that the point isn't to work less but to accomplish more meaningful work and to reclaim time for the people and the higher-value work that matter.
When I stepped away from AI for two months, I was reminded of who I am without augmentation. That person is still capable, creative, and competent. The AI doesn't replace those qualities—it amplifies them. But only if I maintain them. Only if I view the struggle as strengthening rather than failure. Only if I remember that the human in "human-AI collaboration" isn't optional.
The future belongs to those who master this balance: leveraging AI's power while maintaining human judgment, building on AI's suggestions while preserving independent thought, and automating tedious tasks while tackling harder challenges. It's not an easy path. The easy path is either complete rejection or complete dependence. But the rewarding path—the one that leads to genuine growth—lives in the tension between human and artificial intelligence.
That tension is where we become more than either human or AI could be alone. And that's worth both the addiction risk and the struggle to remain human while becoming more.
---
## References
¹ Zhang, X., Yin, M., Zhang, M., Li, Z., & Li, H. (2024). "The Dark Addiction Patterns of Current AI Chatbot Interfaces." *CHI Conference on Human Factors in Computing Systems Extended Abstracts*. https://dl.acm.org/doi/10.1145/3706599.3720003
² Chaminade, T., Zecca, M., Blakemore, S.J., Takanishi, A., Frith, C.D., Micera, S., ... & Umiltà, M.A. (2012). "How do we think machines think? An fMRI study of alleged competition with an artificial intelligence." *Frontiers in Human Neuroscience*, 6:103. https://pmc.ncbi.nlm.nih.gov/articles/PMC3347624/
³ Huang, S., Lai, X., Ke, L., Li, Y., Wang, H., Zhao, X., Dai, X., & Wang, Y. (2024). "AI Technology panic—is AI Dependence Bad for Mental Health? A Cross-Lagged Panel Model and the Mediating Roles of Motivations for AI Use Among Adolescents." *Psychology Research and Behavior Management*, 17:1087-1102. https://pmc.ncbi.nlm.nih.gov/articles/PMC10944174/
⁴ Maeda, E., & Quan-Haase, A. (2024). "When Human-AI Interactions Become Parasocial: Agency and Anthropomorphism in Affective Design." *ACM Conference on Fairness, Accountability, and Transparency*. https://dl.acm.org/doi/fullHtml/10.1145/3630106.3658956
⁵ Dell'Acqua, F., McFowland III, E., Mollick, E.R., Lifshitz-Assaf, H., Kellogg, K.C., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K.R. (2023). "Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality." Harvard Business School Working Paper No. 24-013. https://ssrn.com/abstract=4573321
⁶ Dweck, C.S. (2006). *Mindset: The New Psychology of Success*. New York: Random House.
⁷ Yeager, D.S., Bryan, C.J., Gross, J.J., Murray, J.S., Krettek Cobb, D., Santos, P.H.F., Gravelding, H., Johnson, M., & Jamieson, J.P. (2022). "A synergistic mindsets intervention protects adolescents from stress." *Nature*, 607:512-520. https://www.nature.com/articles/s41586-022-04907-7
⁸ Bick, A., Blandin, A., & Deming, D. (2025). "The Rapid Adoption of Generative AI." Federal Reserve Bank of St. Louis Working Paper 2024-027C. https://www.stlouisfed.org/on-the-economy/2025/feb/impact-generative-ai-work-productivity
⁹ Jamieson, J.P., Nock, M.K., & Mendes, W.B. (2013). "Improving Acute Stress Responses: The Power of Reappraisal." *Current Directions in Psychological Science*, 22(1):51-56. DOI: 10.1177/0963721412461500
¹⁰ IBM (2023). "New IBM study reveals how AI is changing work and what HR leaders should do about it." IBM Institute for Business Value. https://www.ibm.com/think/insights/new-ibm-study-reveals-how-ai-is-changing-work-and-what-hr-leaders-should-do-about-it