The Innovation Acceleration Framework: What Three Years of Parallel Transformations Taught Me
Part 1 of a 3-part series, continuing from the introduction in “The AI Paradox I Discovered...”
The Double Life That Revealed the Pattern
Here’s what might sound contradictory: I was passionately pursuing my own AI transformation in stolen hours at night and weekends, before helping a massive enterprise transform their AI awareness, skillsets, and adoption. I was primed for this moment when circumstances called me to action at UKG. Prior to this global AI Revolution at work and in our industry, I was working on large-scale enterprise agile transformation. My audience was largely kicking and screaming, and I was the boots on the ground charging the hills, blocking and tackling, winning the hearts and minds of the unwilling—or agreeing to disagree and moving forward with the help of network influence, personal and seasoned work relationships, and servant-leadership to stand on.
But that was only my day job.
My double life at night and on weekends was learning and using primitive AI transformer technology: manually debugging vector databases and training data, learning low-code platforms like Bubble, fast prototyping with Firebase on Google Cloud Platform, and developing reusable components for myself. RAG was difficult but possible. Grounding was mysterious and inconsistent before platform/provider controls were in place. Defining and deploying agents with tools and selective data use felt like black magic requiring constant experimentation.
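To make “difficult but possible” concrete, here is a minimal sketch of the kind of hand-rolled RAG loop I mean: embed documents, rank by cosine similarity, and stuff the top hits into the prompt. The model names and helper code are illustrative assumptions, not my exact early stack.

```python
# A minimal hand-rolled RAG loop: embed, rank by cosine similarity, answer.
# Model names and structure are illustrative, not my exact early stack.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

docs = [
    "Grounding keeps model answers tied to retrieved source text.",
    "Vector databases index embeddings for fast similarity search.",
    "Agents call tools based on model-selected actions.",
]
doc_vecs = embed(docs)

def answer(question: str, top_k: int = 2) -> str:
    q_vec = embed([question])[0]
    # Cosine-similarity ranking: the part I once debugged by hand.
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    context = "\n".join(docs[i] for i in np.argsort(sims)[::-1][:top_k])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer only from this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("What does a vector database do?"))
```

What made this era frustrating wasn't the retrieval math above; it was everything around it: chunking, data quality, and grounding behavior that shifted with every provider update.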
Yet something remarkable was happening in those late-night sessions: things that were frustratingly difficult but possible in month one became merely challenging by month three, then almost routine by month six. Not because I was getting smarter, but because the technology itself was accelerating—and I was learning to accelerate with it. Though I should be clear: I’m still learning. What feels routine today will likely seem primitive six months from now; these shifts can happen month over month, or much faster, with each new release and capability.
I stayed open to multiple platforms, AI providers, and technology choices. When something better, faster, and easier emerged, I abandoned what I’d been using without sentimentality. My patterns evolved: from painstakingly developing reusable components to developing more flexible context, guidelines, and standards that are now baked into either the context-building portion or the solutions themselves. Each of my personal projects developed a progressively maturing look, feel, and coding rigor—not because I planned it that way, but because I learned to let LLMs help me improve my own context iteratively.
I keep a range of personal projects running at varying levels of maturity. The newest projects leveraged the latest learnings, standards, contexts, tech stacks, platforms, and guidelines. Older projects got upgraded gradually as I learned how to manage in-place improvements. New experiments and research started when I hit a specific problem, frustration, or friction point that needed solving. This cycle allowed me to be practical and produce actual work while learning, improving existing workflows, and building capability.
I’m nowhere near optimized—I’m not even sure what optimized looks like in a field changing this rapidly. But I’m capable of doing so much more now than three years ago, and more importantly, I’m better at learning what I don’t yet know. The humility to recognize gaps accelerates growth faster than confidence in what you’ve already mastered.
Meanwhile, at work, things started accelerating for me outside the enterprise before they hit internally—or were met with skepticism among leaders not yet familiar with the brewing power and breakthroughs incoming. I was using AI to learn and to write my own agile transformation articles based on my experience. My leaders were impressed with the articles, but telling them I was using AI to help me wasn’t moving hearts and minds. None of the content I was creating was intellectual property, so it raised no alarms—though it wasn’t the most exciting work either.
I asked for allowances to help small businesses with their AI needs outside of our domain. I asked to be included in opportunities where I could support innovation through AI. It was at a hackathon in March 2023 that my AI influence and capabilities first went on stage at UKG, and that notoriety hasn’t yet run its course. There have been a few inspirational injections over these past 2.5 years when the spotlight was on, and the inspiration continues to flow. See my LinkedIn post.
Here’s what stunned me: this AI transformation felt easy relative to the kicking and screaming I was used to when it came to transformations. The difference? Everyone wanted it. I’d been living the acceleration in my personal work. I’d experienced firsthand how the right balance of planning and iteration drives progress—not pure chaos, not perfect planning, but structured experimentation. I’d learned how constraint drives focus when it’s the right constraint, how practical iteration beats pure planning when feedback loops are tight, and how problems pull innovation faster than technology pushes it.
But I should be honest: I’m still figuring this out. Every organization is different. Every innovator needs different support. The patterns I’ve identified work, but they’re not formulas to be applied rigidly—they’re principles to be adapted based on continuous listening and observation.
Many of the things that worked well for this AI transformation were founded and forged with my recent embrace of scaled agile transformation principles, mindsets, and behaviors. But the acceleration patterns—those came from watching my own capability compound through deliberate practice with progressively better tools, always maintaining enough structure to build on previous learnings while staying flexible enough to abandon what doesn’t work. What I learned is that the companies winning with AI aren’t the ones with the best models or biggest budgets. They’re the ones who’ve mastered something counterintuitive: using constraint to surface innovators, removing blockers for those who persist, and letting business problems drive the technology rather than the reverse.
The same forces that transformed my personal capability from struggling with basic RAG implementations to shipping sophisticated multi-agent systems work just as well at organizational scale. The parallel journeys—one personal, one corporate—revealed identical patterns.
The Five Forces of Sustainable AI Transformation
Through hundreds of experiments across both tracks, I’ve identified five forces that separate transformative AI adoption from incremental tinkering:
Force 1: Strategic Constraint Creates Innovation Pressure
Personal Discovery: In my early projects, I gave myself every possible option—multiple frameworks, unlimited API calls, access to every model, every platform on the table. Progress was slow. Then I imposed a constraint: build a working prototype and deploy it in 48 hours using only free tiers so that others could interact with it. Suddenly, decisions became clear, focus intensified, and I delivered more that weekend, and got more feedback, than in the previous weeks of unlimited optionality and hypothetical feedback.
The counterintuitive truth: we threw technical platforms meant for developers at everyone in our company regardless of role and aptitude. This wasn’t elegant. It wasn’t comfortable. But it revealed something invaluable—our most resilient and persistent innovators across the organization, mirroring how my personal time constraints surfaced which approaches actually worked versus which merely seemed promising.
We made AI capabilities visible and accessible while maintaining the pressure and difficulties brought on by trust, complex platforms, and a rapidly changing technology space. The constraint wasn’t artificial scarcity—it was real business urgency combined with new, unfamiliar tools that made the impossible suddenly possible. I listened, observed, and focused on solving demonstrable problems within the constraints.
What happened? Innovation exploded. When you can’t solve a problem the old way in the time available, you stop asking “should we try AI?” and start asking “how do we make this work?” Just as my personal projects forced me to stop debating architectures and start shipping solutions, organizational constraints surfaced people who acted rather than analyzed and waited.
The Psychology of Productive Pressure:
This aligns perfectly with stress research. When people believe their resources match the challenge (even if barely), stress transforms from debilitating threat into enhancing challenge response. My job wasn’t to remove the pressure—it was to ensure they felt capable of meeting it with the new tools available. I knew this worked because I’d lived it—those 48-hour personal hackathons where the clock forced clarity.
This pattern appears across industries. BBVA deliberately provided just 3,000 ChatGPT Enterprise licenses for their 125,000+ employees—a mere 2.4% coverage. This strategic constraint wasn’t about budget; it was about surfacing committed innovators. Within five months, those constrained users achieved 83% weekly active usage and created 2,900 custom GPTs, with 700 shared organization-wide. The bank identified these “AI Wizards” and scaled to 11,000 licenses based on demonstrated innovation rather than organizational hierarchy—the exact pattern I’d discovered managing my personal project portfolio, where constraints revealed which projects deserved continued investment.
Klarna’s CEO Sebastian Siemiatkowski took a similar approach, providing API access to 2,500 employees (50% of workforce) but publicly acknowledging that “only 50 percent of our employees use it daily.” Rather than mandate usage, this honest constraint revealed natural adopters who became champions. The company grew from 50% to approximately 90% daily adoption of generative AI tools, with AI now handling work equivalent to 700 full-time agents (primarily outsourced customer service contractors) and driving an estimated $40 million profit increase.
Force 2: Strategic Abundance Removes Innovation Blockers
Personal Discovery: I spent three frustrating weeks building my own blogging platform through vibe coding before discovering that the Ghost blogging platform had solved the exact problem. I spent days wrestling with multiple agent platforms before finding an open-source solution that implemented a multi-agent research flow. I reverse-engineered it to build my own solutions: abstracting the complexity, moving away from third-party platforms while still leveraging third-party libraries. Each time I removed a blocker by finding a better tool, my project velocity increased dramatically.
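As a rough illustration of that pattern (thin orchestration of my own, with third-party libraries only for the model calls), here is a hedged sketch of a planner/researcher/synthesizer flow. The roles, prompts, and model name are hypothetical, not the open-source project’s actual design.

```python
# Sketch of a reverse-engineered multi-agent research flow: my own thin
# orchestration, a third-party library only for the model calls.
# Roles, prompts, and model name are hypothetical illustrations.
from openai import OpenAI

client = OpenAI()

def llm(system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

def research(topic: str) -> str:
    # Planner agent: break the topic into focused sub-questions.
    plan = llm("You are a research planner. List 3 sub-questions, one per line.", topic)
    # Researcher agent: answer each sub-question independently.
    findings = [llm("You are a researcher. Answer concisely.", q)
                for q in plan.splitlines() if q.strip()]
    # Synthesizer agent: merge findings into one grounded summary.
    return llm("Synthesize these findings into a short report.", "\n\n".join(findings))

print(research("How do retrieval and grounding reduce LLM hallucinations?"))
```

The value of owning this layer was portability: swapping the `llm` helper to a different provider changed one function, not the whole flow.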
But here’s what I learned about constraints: some were intentional (choosing a 48-hour deadline, constraining to specific tools and technology), but many were accidental—organizational friction, budget limits, platform limitations, organizational change. Both types yielded valuable innovation. The accidental constraints exposed something powerful: persistence, stubbornness, and knowing it can be easier will eat strategy alive. Constraints will be broken through the gravity of demand, awareness, and ingenuity. The lesson wasn’t to impose constraints blindly—it was to listen, observe, and persist toward removing the RIGHT constraints while maintaining the productive ones.
While constraint drives focus, abundance unlocks potential. But abundance isn’t about giving everyone everything—it’s about identifying the precise blockers hindering your top innovators and obliterating them.
The Abundance Diagnostic:
I started focusing on a simple question: “What would have made this 10x easier for more people than just me and my team?” The same question I’d been asking myself in my personal projects: “What’s slowing me down that doesn’t need to?”
The answers I heard from various people revealed systematic blockers:
- “I wasted three days getting granular GCP API access approved, one service at a time” (Like I wasted time troubleshooting issues in new and unfamiliar platforms)
- “I couldn’t test properly without production data access” (Just like I previously struggled without realistic test data)
- “I needed design resources but couldn’t justify a ticket” (Just like I needed UI components without UI team members)
- “I spent more time explaining the business case than building” (Just like I spent time fighting improper fit tooling)
Every systematic blocker had an AI opportunity to rethink, reframe, and re-imagine a new way through—the same realization that led me to abandon platforms that created friction and embrace those that removed it.
Immediate Access Grants:
- Pre-approved API access to major AI platforms through trusted cloud infrastructure
- Dedicated cost-per-use budget (for everyone, no manual approval needed)
- Direct access to de-identified example data, plus AI-generated training data for testing and AI use
- Leveraging AI to generate scenario-based data and tests, exactly how I solved my test data problem (see the sketch after this list)
- Using different AI providers to evaluate output and recommend improvements at each stage
- Innovation weeks and quarterly hackathons, protected from meeting and work cadences
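For the AI-generated test data grants above, here is a minimal sketch of the approach: ask a model for structured, fictional records and validate their shape before tests rely on them. The schema and field names are invented for illustration.

```python
# Minimal sketch: generate de-identified, scenario-based test data with an LLM.
# The schema and field names are invented for illustration.
import json
from openai import OpenAI

client = OpenAI()

def synthetic_records(scenario: str, n: int = 5) -> list[dict]:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": (
                f"Generate {n} fictional support tickets for this scenario: {scenario}. "
                'Return JSON: {"tickets": [{"id": int, "summary": str, "severity": str}]}'
            ),
        }],
    )
    records = json.loads(resp.choices[0].message.content)["tickets"]
    # Validate shape before handing records to tests; never assume model output.
    assert all({"id", "summary", "severity"} <= r.keys() for r in records)
    return records

print(synthetic_records("payroll sync failures after a timezone change"))
```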
BBVA exemplifies this pattern perfectly. When the bank’s legal team built their “BBVA Retail Banking Legal Assistant GPT” using their own documentation and precedents, they reduced response times to under 24 hours. This mirrors my personal evolution: my first document processing pipeline took weeks to build; my latest one took hours because I’d learned which patterns worked and which tools eliminated friction. BBVA’s success spread organically—inspiring marketing, HR, and operations teams to build their own tools. The 900+ strategically interesting use cases came not from top-down planning but from removing blockers and watching innovators create.
Strategic abundance isn’t democratic—it’s meritocratic. Give disproportionate resources to those already demonstrating initiative, then let their results create demand for broader adoption. Just as I invested more time in personal projects that showed traction and abandoned those that didn’t, organizations should double down on innovators who ship and deliver value.
Force 3: Business Problems as Innovation Drivers
Personal Discovery: I spent a month building an “easy, general-purpose” AI assistant before realizing I could leverage an open-source solution that enabled both simple direct access to trusted LLMs and RAG capabilities, deployed on trusted enterprise cloud infrastructure. Then, in less than three days, I built, deployed, and demoed a use case relevant to everyone at my company: solve one specific problem first, then expand and abstract. The frustrations forced the innovation and focused my research. That specific version became my most-used internal tool because it solved real frustrations. Every successful personal project since has started with a specific pain point, not a general capability.
Rote trainings and displays of shiny new technology without practical business problems left audiences excited but disconnected—unable to apply anything in their own space. Transforming our workshops to be audience-specific, using their business problems to demonstrate AI capabilities, mirrored how my personal projects only gained traction when solving actual problems.
The breakthrough for my own focus came when I flipped the model entirely: Start with my most painful business problems, then use them as forcing functions for my research into learning and growing AI capability development. This was echoed in our quarterly hackathons.
The Problem-First Approach:
Every quarter, we themed our innovation challenges around real business problems and access to exciting technologies and partners. We emphasized team diversity and cross-functional collaboration—engineers working with domain experts who understood the actual pain points.
- Clear Success Metrics: “Reduce proposal time to under 4 hours” not “explore AI for proposals”
- Access and Budget: Pre-approved tools, time, and resources
- Freedom to Fail Fast: “Show me something in two weeks—working prototype or evidence this won’t work”
The Learning Velocity:
When learning is driven by solving a real problem you care about, feedback loops tighten dramatically. You’re not memorizing prompt patterns—you’re discovering what works through necessity. The friction and frustration become your teachers.
In my personal projects, I learned that feedback, evaluation, and improvement should be part of every project, every cycle of practical experience. But I also learned the critical importance of upfront work: setting up context, planning, and standardized intentions and guidelines as part of each AI build. The clearer these context-building and planning cycles, the smoother everything that follows. It’s not planning versus iteration—it’s planning AS PART OF iteration, where each cycle informs better planning for the next.
I started using LLMs to evaluate and improve my own prompts and context. This meta-loop—using AI to get better at AI—accelerated my capability faster than any course could. The same pattern works organizationally: innovators solving real problems with structured experimentation learn faster than anyone taking training.
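Here is a hedged sketch of that meta-loop: one call critiques the current prompt, the next rewrites it, repeated for a few cycles. The rubric and cycle count are illustrative choices, not a fixed recipe.

```python
# Sketch of the meta-loop: use an LLM to critique and rewrite its own prompt.
# The rubric and cycle count are illustrative choices, not a fixed recipe.
from openai import OpenAI

client = OpenAI()

def chat(user: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

prompt = "Summarize this support ticket."
for cycle in range(3):
    # Critique pass: find ambiguity, missing context, and format gaps.
    critique = chat(
        f"Critique this prompt for ambiguity, missing context, and format gaps:\n{prompt}"
    )
    # Rewrite pass: fold the critique back into an improved prompt.
    prompt = chat(
        f"Rewrite the prompt to address this critique. Return only the new prompt.\n"
        f"Prompt:\n{prompt}\n\nCritique:\n{critique}"
    )
    print(f"Cycle {cycle + 1}:\n{prompt}\n")
```

In practice I would pin the best prompt from each cycle against a small evaluation set rather than trusting the rewrite blindly; the loop accelerates iteration, it doesn’t replace judgment.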
Transforming How We Work, Not Just What We Build:
One of the most powerful discoveries was using innovation tournaments to practice these iterative cycles in compressed timeframes. Our quarterly hackathons didn’t just produce solutions—they required teams to practice iterative cycle optimization in shorter bursts. Plan-build-test-learn-improve cycles that normally took months were compressed into days or weeks.
This wasn’t just about what teams built. It transformed HOW they worked. Teams learned to set clear context and intentions upfront, execute with flexibility, gather feedback rapidly, and incorporate learnings into next iterations. These compressed cycles became the pattern for standard work—continuous innovation not only in what we were building but in how we were working.
I’m still learning the right balance here. Some teams need more structure, others more freedom. The lesson isn’t that I’ve figured out the perfect formula—it’s that the formula itself needs to evolve based on feedback from each cycle.
TE Connectivity’s AI Cup demonstrates this perfectly. Rather than train 300 students on AI theory, they presented real factory challenges. Winners developed systems with 98.8% and 99.2% accuracy at 100x speed—solutions being deployed across production lines with plans for global expansion to 50-100+ manufacturing sites. The problems drove the learning, exactly as my frustration with slow research note categorization drove me to learn vector embeddings and semantic search.
Force 4: Empowered Innovators, Not Managed Experiments
Personal Discovery: My most successful projects happened when I gave myself permission to experiment, fail, and focus my research on my pain points freely—but with structure. I learned that one of the most valuable practices is setting up context, planning, and standardized intentions and guidelines as part of each AI build upfront. The clearer the context building and planning cycles, the smoother the implementation and execution goes.
But here’s the nuance I discovered: it’s not “iteration beats planning.” It’s about just enough planning to remain flexible, incorporating learnings from each cycle to inform the next. Planning becomes part of the iterative cycle—each phase has the right level of planning mixed with openness to learning, iterating, and improving. Plan and execute iteratively, delivering value through iterations where feedback influences what comes next. My biggest failures came when I either over-planned too rigidly (decisions I now live with and gradually transition away from) or under-planned into chaos. Success lived in the middle: structured experimentation.
The pattern: trust the innovator (even when that innovator is yourself), provide problems and resources with clear context and intentions, then get out of the way while maintaining feedback loops.
This is where I see most organizations fail spectacularly. They identify opportunities, assemble teams, create governance processes, and then wonder why nothing happens. It’s the organizational equivalent of planning a personal project so thoroughly that you never actually build it, and get stuck talking about it for months without action.
The Anti-Pattern Not All Companies Have Escaped:
Boston Consulting Group research shows 74% of companies struggle to achieve and scale AI value despite significant investments. The pattern is consistent: detailed planning, technical architecture reviews, phased rollout plans, weekly status meetings—and every project taking 6-8x longer while delivering half the value.
The Breakthrough Model:
We learned to identify and celebrate top innovators and give them something powerful: Problems, not prescriptions.
Instead of: “Build a chatbot for customer support using Gemini with these specific prompt templates and this approval workflow…”
Shift instead to a business-problem focus: “Reduce our average customer support resolution time by 40%. Here’s access to our support data, here’s unlimited monitored access to our trusted AI platforms, here’s protection from other meetings. Show me results or learnings in three weeks.”
This mirrored how I managed my personal project portfolio: keep a range of projects at varying maturity levels running. Newest projects leverage latest learnings. Older projects get upgraded gradually. Each experiment teaches something that improves the next one.
Shopify exemplifies this empowerment. CEO Tobi Lütke’s March 2025 memo stated that teams must demonstrate why they cannot get what they want done using AI before asking for resources—enabling headcount to stay flat at 8,100 while Q4 2024 revenue grew 31% year-over-year. Three innovators might tackle problems three different ways, all working, with standardization emerging naturally. Just as I learned which patterns worked across my projects and codified them as flexible guidelines (not rigid rules), Shopify let patterns emerge from practice.
What This Requires From Leadership:
- Tolerating Different Approaches: Three innovators, three solutions, all working—standardize later
- Accepting Visible Failures: Celebrate fast learning, not perfect execution
- Resisting Control Impulses: Ask “what did you learn?” not “what did you accomplish?”
The innovators who thrived believed capabilities grow through effort. When obstacles appeared, they saw opportunities to get smarter, not evidence of inadequacy. This growth mindset separated those who persisted from those who retreated—the same mindset that let me abandon platforms without ego when something better emerged.
Force 5: Value Now, Leverage Elegance When Available, and Iterate to Long-Term Autonomy and Economies Later
Personal Discovery: This principle saved me countless hours. Early on, I spent weeks struggling with GCP’s internal Identity and Access Management, which wasn’t purpose-built for wide internal use, re-building a custom authentication system so every employee could get frictionless access, before discovering that Microsoft Azure natively supported Microsoft Authentication and Ping SSO integration. I spent days troubleshooting my own in-place vector database implementation before switching to a now-antiquated hosted vector DB solution. I learned: ship working solutions fast using whatever works, then optimize later based on actual usage. My pattern shifted from “build everything perfectly” to “prove value quickly, iterate based on reality.”
My development approach evolved dramatically: I moved from painstakingly creating reusable components to developing flexible context, guidelines, and standards baked into my solutions. Each new project uses progressively better patterns—not because I planned it perfectly, but because I learned what worked through iteration. This cycle allowed me to produce practical work while learning, improving workflows, and building capability. I’m nowhere near optimized, but I’m capable of so much more now than three years ago.
The Traditional Trap:
Organizations plan for full engineering scale from day one without testing in real-world scenarios:
- “Let’s design the perfect architecture”
- “Let’s build this properly with our own infrastructure”
- “We should train our own model so we’re not dependent on vendors”
- “Let’s insulate ourselves from this third party framework that handles every edge case and rapidly pushes feature enhancements”
Every one of these projects was still “in progress” when faster-moving competitors shipped. It’s the organizational equivalent of my two-week authentication system that never got used because I discovered native support without effort.
The Value-First Approach:
Now I explicitly prioritize expensive, lower-effort, imperfect value delivery over elegant, future-proofed systems:
Phase 1: Prove Value (0-3 months)
- Use the most expensive, easiest solution (usually API calls to major providers)
- Accept higher per-unit costs for lower implementation complexity (see the back-of-the-envelope sketch after Phase 3)
- Ship working solutions in weeks, not quarters
- Goal: Demonstrate ROI and build organizational belief
Phase 2: Iterate Based on Usage (3-6 months)
- Collect data on actual usage patterns (often very different from planned)
- Identify the 20% of use cases driving 80% of value
- Let real friction guide optimization priorities
Phase 3: Optimize Economics (6-12 months)
- Only now consider custom solutions based on actual usage data
- Justify investments with proven ROI, not theoretical efficiency
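To make Phase 1’s expensive-but-easy trade-off concrete, here is a back-of-the-envelope comparison; every number below is an assumption for illustration, not data from any project above.

```python
# Back-of-the-envelope Phase 1 economics. All numbers are illustrative assumptions.
calls_per_month = 10_000
cost_per_call = 0.02          # assumed per-call API cost in USD
api_monthly = calls_per_month * cost_per_call          # $200/month

build_hours = 320             # assumed two engineers for one month
loaded_rate = 120             # assumed fully loaded hourly rate in USD
custom_build = build_hours * loaded_rate               # $38,400 up front

months_to_break_even = custom_build / api_monthly      # ~192 months
print(f"API: ${api_monthly:,.0f}/mo; custom build: ${custom_build:,.0f}; "
      f"break-even: {months_to_break_even:.0f} months (before maintenance)")
```

Under these assumed numbers, the “expensive” API route doesn’t break even against a custom build for years, which is exactly why proving value first and optimizing economics later tends to win.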
This mirrors my personal project evolution: the latest projects use the newest tech and patterns; older projects get upgraded gradually as I learn to manage in-place improvements. Iterated practical experience guides the best designs toward ground truth.
North Atlantic Industries, a 250-employee defense/aerospace manufacturer, started with expensive Azure OpenAI API calls for code documentation. After demonstrating 60-70% efficiency gains and millions in savings, they secured buy-in for company-wide expansion. The quick, expensive proof enabled the broader investment—just as my quick Azure and open-source implementations proved the value of rapid prototyping over perfect planning.
The Iteration Engine: Building Organizational Learning
Personal Discovery: My biggest breakthrough came from using LLMs to improve my own context iteratively. I’d run a project, analyze what worked, then ask Claude to help me refine my guidelines. Each cycle improved the next project. I learned to make feedback, evaluation, and improvement part of every cycle. This meta-loop—AI helping me get better at AI—accelerated my learning exponentially.
The companies that sustain AI advantage build systems that learn from every experiment:
The Learning Architecture I Built:
1. Friction as Signal
- Monthly “frustration forums” where anyone vents about broken processes
- Every complaint tagged: Could AI help? Has anyone tried? What blocked progress?
- Top frustrations become next quarter’s innovation challenges
Just as my personal projects started when I hit a specific problem, frustration, or friction point, organizations should use friction as the signal for where to innovate next.
2. Experimentation Rituals
- Bi-weekly “Show and Tell” sessions (30 minutes, mandatory attendance)
- Innovators present works-in-progress, not just successes
- Focus on “what I learned” not “what I built”
- Failed experiments get equal airtime
This mirrors my personal project portfolio approach: maintain projects at varying maturity levels, share learnings across them, and let failures teach as much as successes.
3. The Sunk Cost Detector
- Quarterly review of all ongoing initiatives:
- Is this solving the original problem?
- Are we continuing because of past investment or future potential?
- Explicit permission to kill projects, even expensive ones
- “Pivot awards” for teams who recognize failure signals early
I learned to abandon platforms without sentimentality when something better emerged. Organizations need the same capability.
4. Success Pattern Codification
- After every successful deployment, document the pattern:
- What business problem did this solve?
- What approach worked and why?
- What tools and techniques were essential?
My development patterns evolved from rigid reusable components to flexible guidelines baked into solutions. Document patterns, but keep them flexible enough to evolve.
5. Cross-Functional Innovation Teams
- Identify top innovators across different functions
- Create temporary “tiger teams” mixing diverse skills
- Small (3-5 people), nimble, time-boxed innovation
The Parallel Transformation: What Personal Acceleration Reveals About Organizational Change
Here’s what three years of parallel journeys taught me: The same forces that accelerated my personal capability from struggling with basic RAG to shipping sophisticated multi-agent systems work similarly at organizational scale.
The Pattern Recognition:
In my personal work, I experienced:
- The right constraints drove focus: 48-hour hackathons with clear context produced more than unlimited time without structure
- Removing the right blockers accelerated velocity: Abandoning platforms that created friction, but maintaining productive constraints
- Problems pulled innovation: Specific frustrations led to useful tools; general goals led nowhere
- Structured experimentation enabled breakthroughs: Best projects had clear upfront context and planning, then freedom to iterate
- Planning within iteration compounded learning: Each cycle informed better planning for the next, not iteration versus planning but iteration including planning
At organizational scale, these exact forces produced:
- Strategic constraints surfaced innovators: Limited licenses revealed who would persist (though some constraints were accidental organizational friction that we learned to listen to and address)
- Strategic abundance unlocked potential: Removing blockers for top performers created proof points (while listening for which constraints to maintain)
- Business problems drove adoption: Real pain points motivated learning faster than training
- Empowerment with structure enabled breakthroughs: Problems-not-prescriptions, but with clear context and intentions upfront
- Value-first with planning beat perfection: Quick proofs with enough structure to build on, then iterate based on learnings
I’m still refining this balance. What I thought was the right approach six months ago often looks incomplete now. The key isn’t having figured it out—it’s staying open to feedback and continuing to adapt, learn, and improve from previous cycles.
The Meta-Lesson: Capability compounds through cycles of practical experience guided by structured experimentation—not through perfect planning alone, nor through pure chaos. Whether building personal capacity or organizational capability, the acceleration comes from:
- Embracing the right constraints while listening for which ones to remove
- Ruthlessly eliminating non-essential friction while maintaining productive structure
- Starting with real problems and clear context, not general capabilities
- Trusting innovators with problems, resources, and enough planning to build on learnings
- Shipping solutions with structure enough to iterate from, learning what works through practice
The balance point between structure and flexibility shifts constantly. What worked last quarter might be too rigid or too loose this quarter. The key capability isn’t having the perfect system—it’s developing the sensitivity to know when to add structure and when to remove it, when constraints help and when they hinder. I’m still learning this discernment.
The Transformation Journey: In three years, I went from manually debugging my own vector databases to building sophisticated multi-agent systems that iteratively improve their own context. Not because I got dramatically smarter, but because I learned to balance structure with flexibility—setting clear context and planning upfront, then iterating based on feedback. My newest projects leverage patterns discovered in older ones. My oldest projects get upgraded with lessons from recent experiments.
But here’s the humbling truth: I’m nowhere near done learning. Every breakthrough reveals new gaps. Every solved problem exposes new challenges. The field itself evolves faster than any individual can master it. What makes someone effective isn’t having all the answers—it’s developing the muscle to learn, adapt, and recognize when previous approaches no longer serve.
Organizations can compress this learning curve, but they can’t skip it. The path forward: identify people already living it (like I was during my “double life”), listen to where they’re hitting friction, remove their blockers while maintaining productive constraints, give them problems worth solving with clear context and enough structure to build on learnings, and let their results teach others. The constraint-driven approach surfaces these people naturally—if we’re humble enough to listen and observe rather than prescribe.
The Mindset Shift That Makes Everything Possible
Here’s where I was most primed: helping people move past mindset blocks and self-limiting beliefs, into lean-forward action and progress over perfection and waiting. Graceful, empathetic, but not to the point of ruinous empathy and growth-stifling comfort.
The innovators who thrived—both in my personal projects and organizationally—shared one belief: struggle is the mechanism of growth, not evidence of inadequacy. When an AI experiment failed, they interpreted it as “I learned what doesn’t work, now I’m smarter,” not “I’m not technical enough” or “AI doesn’t work.”
This growth mindset separates AI adopters who persist through the chaotic middle from those who retreat to comfortable territory. When you believe capabilities expand through effort, every frustration becomes interesting rather than defeating.
I learned this managing my own progression: things impossibly difficult in month one became routine by month six. Not through natural talent, but through persistent iteration. Learning how LLMs behaved against my context and iteratively improving with their help. Making feedback and evaluation part of every cycle. Keeping multiple projects running at different maturity levels so newest work benefits from latest learnings.
The organizations that will win the AI revolution aren’t those with biggest budgets or best technical talent. They’re the ones building cultures where:
- The right constraints drive creative urgency while leaders listen for which constraints to remove
- Top innovators get disproportionate resources and freedom, with clear context and enough structure to compound learnings
- Business problems pull AI adoption rather than technology pushing it
- Fast, structured value delivery proves concepts, then iterates based on feedback
- Planning and iteration work together—each cycle informing better planning for the next
- Struggle is reframed from threat to growth opportunity
- Leaders stay humble enough to keep learning, listening, and adapting
This last point matters more than I initially recognized. The moment you think you’ve figured out the formula is the moment you stop listening to signals that your approach needs to evolve. The best innovators I’ve worked with—and the most successful organizations—share a quality of perpetual openness: confident enough to act, humble enough to adapt.
Why Your Response to This Moment Matters
The transformation isn’t happening to us—it’s being created by persistent innovators who see obstacles as interesting problems rather than evidence of impossibility. BBVA expanded from 2.4% coverage to 11,000 licenses by tracking who created value. Klarna grew from 50% to approximately 90% daily AI adoption by identifying and supporting natural adopters. Shopify achieved 31% Q4 revenue growth with flat headcount through empowered adoption. The companies that recognize this, surface these individuals through strategic constraint and careful listening, remove their blockers while maintaining productive structure, and give them real problems to solve with clear context and enough planning to compound learnings are building sustainable competitive advantage.
What stunned me most: this organizational transformation felt easy relative to the agile transformations I’d led before. Why? Because I’d been living the acceleration in my personal work. I’d experienced how planning within iteration compounds learning, how the right constraints force focus while the wrong ones hinder progress, how real problems pull innovation faster than general capabilities push it.
But “easy” is relative. There were still failures, frustrations, and wrong turns. I’m still making mistakes, still learning which constraints help and which hurt, still refining the balance between structure and flexibility. The difference is I’ve learned to treat those mistakes as data points rather than judgments, to stay curious about what’s not working rather than defensive about what I thought would work.
The parallel journeys—one personal, one corporate—revealed the same truth: capability compounds through cycles of practical experience informed by rapid feedback and just enough planning to build on previous learnings. Whether you’re building your own AI capacity or transforming an organization, the same five forces accelerate the journey. The question isn’t whether AI will transform your organization. The question is whether you’ll surface and empower the people already living the transformation in their off-hours, their side projects, their late-night experiments—and whether you’ll stay humble and observant enough to learn from what they discover.
Those people are your future. Find them, listen to where they’re stuck, remove their blockers while maintaining productive constraints, give them problems worth solving with clear context to build from, and watch what compounds. Then stay curious about what’s working and what’s not, because the approach that works today might need refinement tomorrow.
In Part 2 of this series, we’ll explore why some people naturally thrive in this environment while others struggle, and how to develop the psychological resilience that lets you treat AI’s chaos as a workout for your brain rather than a test of your worth.
Source Verification for This Article
After conducting extensive research across company press releases, official corporate communications, SEC filings, CEO statements, and consulting firm reports, I’ve verified the sourcing for each claim in this article. Here’s what the investigation uncovered about the accuracy and documentation of these widely-circulated statistics. Let’s make fact-checking ourselves the norm with AI assistance: #transparent #ai-assist-ftw
Company Performance Metrics Fully Documented
BBVA’s Strategic Constraint Implementation is thoroughly verified through multiple authoritative sources spanning May 2024 to May 2025. The initial deployment of 3,000 ChatGPT Enterprise licenses for a workforce of 125,000+ employees (2.4% coverage) is confirmed in BBVA’s May 22, 2024 press release. The 83% weekly active usage within five months is documented in BBVA’s November 20, 2024 article “BBVA sparks a wave of innovation among its employees with the deployment of ChatGPT Enterprise.” The creation of 2,900 custom GPTs with 700 shared organization-wide is verified by both OpenAI’s official case study and BBVA’s corporate communications. The expansion to 11,000 licenses is confirmed in BBVA’s May 12, 2025 press release. The legal team’s “BBVA Retail Banking Legal Assistant GPT” reducing response times to under 24 hours is documented in multiple BBVA articles, with the January 7, 2025 piece noting the team of nine attorneys now handles 40,000+ annual queries more efficiently. The 900+ strategically interesting use cases is confirmed in the same January 2025 article.
Sources: BBVA Corporate Communications (May 22, 2024; November 20, 2024; January 7, 2025; May 12, 2025); OpenAI Case Study (November 2024) URLs: https://www.bbva.com/en/innovation/ [multiple dated articles]; https://openai.com/index/bbva/
Klarna’s AI Adoption Journey is comprehensively documented through CEO Sebastian Siemiatkowski’s direct statements and official company communications. The provision of API access to 2,500 employees (50% of workforce) is confirmed in Computer Weekly’s August 28, 2023 article featuring Siemiatkowski’s quote: “still only 50% of our employees use it daily.” The growth to approximately 90% daily adoption is verified in Klarna’s May 14, 2024 press release and CNBC coverage, though this figure includes broader generative AI tools including Klarna’s internal assistant “Kiki,” not just OpenAI. The claim about AI handling work equivalent to 700 full-time agents is confirmed in Siemiatkowski’s Sequoia Capital podcast interview and OpenAI’s case study, with important clarification that these were primarily outsourced customer service contractors. The $40 million profit increase is documented in the same Sequoia podcast with Siemiatkowski stating it’s “estimated to drive a $40 million USD in profit improvement to Klarna in 2024”—a projection rather than realized profit.
Sources: Computer Weekly (August 28, 2023); CNBC (May 14, 2024); Sequoia Capital Podcast “Training Data” (2024); OpenAI Case Study; Klarna Press Release (May 14, 2024) Active URLs: https://www.sequoiacap.com/podcast/training-data-sebastian-siemiatkowski/
TE Connectivity’s AI Cup statistics are verified from the official TE Connectivity corporate website. The competition involving nearly 300 university students from 24 universities worldwide is documented on their AI Cup story page. The winning systems achieving 98.8% and 99.2% accuracy at 100x speed improvements are confirmed with specific technical details: the 98.8% system increases annotation efficiency by 100 times compared to manual methods, while the 99.2% system is 100 times faster than manual inspection. Important clarification on deployment: the 98.8% system has been deployed on one production line with plans for rollout across 50+ global sites, while the 99.2% system is integrated into three production lines with 100+ sites under consideration for future deployment.
Source: TE Connectivity Corporate Website (2024) URL: https://www.te.com/en/about-te/stories/ai-cup.html
Shopify’s AI-First Approach is documented through CEO Tobi Lütke’s direct communications and official SEC filings. The March 2025 memo stating that teams must demonstrate why they cannot get what they want done using AI before asking for resources is verified through Lütke’s April 7, 2025 post on X (formerly Twitter) where he shared the internal memo publicly. The exact quote states: “Teams must demonstrate why they cannot get what they want done using AI” before requesting headcount. The flat headcount of 8,100 employees is confirmed in Shopify’s Form 10-K filed February 11, 2025, stating “As of December 31, 2024, Shopify had approximately 8,100 employees worldwide.” The 31% year-over-year revenue growth specifically refers to Q4 2024 performance, as documented in Shopify’s February 11, 2025 earnings release titled “Shopify Merchant Success Powers Q4 Outperformance.”
Sources: Tobi Lütke X Post (April 7, 2025); Shopify Form 10-K (February 11, 2025); Shopify Press Release (February 11, 2025); MacroTrends; StockAnalysis Active URLs: https://www.shopify.com/news/
Business Implementation Cases Documented
North Atlantic Industries’ Azure OpenAI Implementation is documented in a Microsoft customer case study. As a 250-employee defense/aerospace manufacturer, NAI’s journey from initial Azure OpenAI API calls for code documentation to company-wide expansion is detailed in Microsoft’s May 22, 2024 Americas Partner Blog post. The 60-70% efficiency gains and millions in savings are documented with specific use cases: automated code commenting for 100,000+ lines of C# code, elimination of outsourced software testing (saving thousands of hours and millions in costs), and sales proposal automation saving 16 hours per proposal. The company-wide expansion across engineering, sales, service, and manufacturing is confirmed through quotes from NAI President William Forman, Director of Workplace Technology Tim Campbell, and Software Engineer Lacey Stein. Important note: This documentation comes from a single source (Microsoft case study) without independent third-party verification.
Source: Microsoft Americas Partner Blog (May 22, 2024) URL: https://www.microsoft.com/en-us/americas-partner-blog/2024/05/22/
Research and Industry Analysis Verified
Boston Consulting Group’s October 2024 Research provides authoritative documentation for organizational AI challenges. The report “Where’s the Value in AI?” surveyed 1,000+ CxOs and senior executives across 59 countries and over 20 sectors, assessing AI maturity across 30 key enterprise capabilities. The finding that 74% of companies struggle to achieve and scale AI value is stated precisely as: “Seventy-four percent of companies have yet to show tangible value from their use of AI.” The report identifies only 4% of companies as having cutting-edge AI capabilities that consistently generate significant value, with an additional 22% beginning to realize gains, combining to form the 26% designated as “AI leaders.”
Source: Boston Consulting Group - “Where’s the Value in AI?” by Nicolas de Bellefonds et al. (October 24, 2024) URL: https://www.bcg.com/press/24october2024-ai-adoption-in-2024-74-of-companies-struggle-to-achieve-and-scale-value PDF: https://media-publications.bcg.com/BCG-Wheres-the-Value-in-AI.pdf
Verification Summary
All company-specific statistics (BBVA, Klarna, TE Connectivity, Shopify, North Atlantic Industries) have verifiable original sources with exact figures and proper citations. Key clarifications include: Klarna’s 700 FTE figure refers primarily to outsourced contractors; Klarna’s $40M is an estimated projection; TE Connectivity’s global deployment is planned rather than fully complete; and North Atlantic Industries claims are based on a single Microsoft source. The BCG research statistic (74% of companies struggling with AI value) is fully verified with detailed methodology. This investigation demonstrates both the availability of authoritative sources for major AI adoption claims and the critical importance of distinguishing between realized results, projections, current deployments, and planned rollouts.