
Strap In, or Blast Off Blind: The Jetpack Problem of AI Without Strategy

  • May 23
  • 5 min read

 

Silicon Valley has always been good at metaphors. “Move fast and break things.” “Software is eating the world.” And now, “AI is a jetpack.” But as with most metaphors turned mantras, what begins as an illustration becomes a delusion when not properly contextualized. Yes, AI is a jetpack: a device of enormous propulsive power, a tool that can lift operations, productivity, and creativity to once-unimaginable heights. But as with any high-speed machine, the danger lies not in the speed, but in the direction.

 

Today, we find ourselves in the era of anxious acceleration. In boardrooms across the globe, doors are flung open and executives rush breathlessly toward artificial intelligence, propelled less by understanding than by the fear of being left behind. This is the most dangerous moment for any technology: not when it’s obscure, but when it’s ubiquitous and poorly understood. As history reminds us, the Industrial Revolution produced as many organizational failures as it did titans, and the latter were distinguished not by their access to steam power, but by the vision to deploy it with purpose.

 

The Fallacy of First-Mover Advantage

It’s worth reminding ourselves that technological adoption has never been a guarantee of success. Blockbuster experimented with digital delivery before Netflix; Kodak held patents on digital photography before Instagram. What doomed them wasn’t ignorance, but incoherence—a failure to align innovation with organizational strategy, customer value, and long-term competitive positioning.

 

AI is no different. A 2023 McKinsey report noted that while 55% of companies had adopted AI in at least one business unit, only 11% reported significant financial benefits from the adoption. The gap, as the report concludes, is not in access to the tools but in the presence—or absence—of a coherent AI strategy.

 

So what does that strategy look like?

 

1. Strategic Assessment: Know Thyself, Before You Know Your Algorithm

Before a single model is trained or a chatbot deployed, a company must begin with an assessment. Not of the technology, but of itself. What are the organization’s current pain points? Which processes are data-rich but insight-poor? Where are the decision bottlenecks that slow execution or erode value?

 

This is where many AI initiatives go astray. In the rush to deploy generative models or predictive analytics, companies often bypass this diagnostic phase, failing to match AI’s capabilities with real strategic needs. As Michael Porter has long argued, the essence of strategy is choosing what not to do. In the context of AI, that means choosing which problems are worth automating, and which are not. A Harvard Business Review article (“Building the AI-Powered Organization,” 2019) found that firms that began their AI journey with a deliberate diagnostic phase outperformed their competitors in ROI by 30% over three years.

 

Assessment must be both internal and external. Internally, what is the state of data infrastructure? Are data governance policies in place? Is the workforce ready—not just in terms of technical fluency, but in mindset—for collaboration with intelligent systems? Externally, how are competitors deploying AI? What are the regulatory implications of new deployments in your sector?

 

2. Purposeful Use-Case Selection: Do Fewer Things, Better

Once assessment is complete, organizations must resist the urge to plaster AI over every available surface. Not every customer query needs a chatbot, nor every KPI a machine-learning model. A sound AI strategy prioritizes high-value, high-feasibility use cases that align with core business goals: automating fraud detection in finance, optimizing route logistics in supply chains, or personalizing user experiences in retail.
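
To make that prioritization concrete, one lightweight screen is to score each candidate use case on value and feasibility and rank by the product. The Python sketch below is purely illustrative: the candidate list and the 1-to-5 scores are hypothetical, not drawn from any cited study.

```python
# A minimal value-vs-feasibility screen for candidate AI use cases.
# Candidates and 1-5 scores are hypothetical, for illustration only.
candidates = {
    "fraud detection (finance)":      {"value": 5, "feasibility": 4},
    "route optimization (logistics)": {"value": 4, "feasibility": 4},
    "chatbot for every query":        {"value": 2, "feasibility": 3},
}

# Rank by the product of the two scores; a real screen would also weigh
# risk, data availability, and regulatory exposure.
ranked = sorted(candidates.items(),
                key=lambda item: item[1]["value"] * item[1]["feasibility"],
                reverse=True)

for name, score in ranked:
    print(f"{name}: priority {score['value'] * score['feasibility']}")
```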

 

Importantly, the selection of these use cases must involve cross-functional teams. The days of siloed IT departments making strategic tech decisions are over. A study published in MIT Sloan Management Review (2021) found that cross-functional governance increased project success rates by 48%. When marketing, operations, compliance, and IT share ownership of AI implementation, outcomes are more aligned with actual business needs.

 

3. Data Readiness: The Hidden Cost of Intelligence

AI is only as good as the data it is trained on. And most organizations are not ready.

 

According to a 2023 Gartner report, 80% of AI failures result from poor data quality, not flawed algorithms. Data readiness means more than having large datasets; it means clean, labeled, unbiased data that is accessible and secure. It also means having a data governance framework that clearly defines ownership, compliance obligations (such as GDPR or HIPAA), and ethical boundaries.
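
What might a basic readiness check look like in practice? The sketch below, in Python with pandas, is a minimal example assuming a simple tabular dataset with a label column; a real audit would extend to lineage, access controls, and bias testing.

```python
import pandas as pd

def readiness_report(df: pd.DataFrame, label_col: str) -> dict:
    """Pre-training data audit: surfaces missingness, duplication, and
    label imbalance, three common sources of poor data quality."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_share_by_column": df.isna().mean().round(3).to_dict(),
        "label_distribution": df[label_col].value_counts(normalize=True).round(3).to_dict(),
    }

# Hypothetical transactions table, for illustration only
df = pd.DataFrame({
    "amount":   [120.0, 65.5, None, 120.0],
    "country":  ["US", "DE", "US", "US"],
    "is_fraud": [0, 0, 1, 0],
})
print(readiness_report(df, label_col="is_fraud"))
```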

 

This is where strategy must intersect with ethics. If a retailer uses AI to personalize recommendations, what data are they collecting from customers? Is it opt-in? Transparent? What biases are embedded in historical purchasing patterns, and how might these perpetuate inequality?

 

A jetpack, after all, doesn’t discriminate in direction. It will carry you forward—into markets or into scandal.

 

4. Capability Building: Human Intelligence Still Matters

One of the ironies of AI is that the more powerful it grows, the more important human judgment becomes. Organizations cannot outsource thinking to algorithms. Instead, they must build what researchers call “AI fluency”: a blend of technical understanding, critical thinking, and ethical reasoning. This doesn’t mean retraining every employee as a data scientist, but it does require widespread literacy about what AI can and cannot do.

 

The World Economic Forum estimates that 44% of current workforce skills will be disrupted within five years, largely due to AI. Yet most corporate training programs lag behind. A 2024 PwC report found that only 28% of firms had formal reskilling initiatives in place for AI integration.

The companies that win won’t be those that hire the most PhDs, but those that democratize AI knowledge across the organization. The future isn’t about AI replacing humans, but augmenting them. And that only works if the humans are ready to engage.

 

5. Governance and Continuous Evaluation: The Compass on the Jetpack

Perhaps the most neglected part of AI strategy is governance—not just at the point of implementation, but as an ongoing function. AI models drift. Data shifts. Regulatory environments evolve. Without a mechanism for continuous oversight and adaptation, what began as a smart initiative can veer off course.

 

A strong governance framework includes technical monitoring (are models performing as expected?), ethical review (are unintended harms arising?), and strategic alignment (does this initiative still serve our goals?). Think of it as a compass strapped to the jetpack: a continuous recalibration to ensure the trajectory is right, not just the speed.
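
On the technical-monitoring leg of that framework, one widely used drift score is the Population Stability Index (PSI), which compares a feature’s training-time distribution with its live distribution. The NumPy sketch below is a minimal illustration, assuming a continuous feature; production monitoring would run such checks across many features and model outputs on a schedule, and this particular implementation is an assumption, not taken from any cited framework.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Drift score comparing a feature's training-time distribution
    ('expected') with its live distribution ('actual'). A common rule
    of thumb: PSI above ~0.2 is drift worth investigating."""
    expected = np.asarray(expected, dtype=float)
    actual = np.asarray(actual, dtype=float)
    # Bin edges from the training-time distribution (assumes a roughly
    # continuous feature; categorical features need separate handling)
    edges = np.quantile(expected, np.linspace(0.0, 1.0, bins + 1))
    # Clip live values into the training range so every value lands in a bin
    actual = np.clip(actual, edges[0], edges[-1])
    exp_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    act_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the bin fractions to avoid division by zero and log(0)
    exp_frac = np.clip(exp_frac, 1e-6, None)
    act_frac = np.clip(act_frac, 1e-6, None)
    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))

# Stable distribution vs. a shifted live distribution
rng = np.random.default_rng(0)
train = rng.normal(0, 1, 10_000)
print(population_stability_index(train, rng.normal(0, 1, 5_000)))    # near 0
print(population_stability_index(train, rng.normal(0.8, 1, 5_000)))  # well above 0.2
```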

 

Recent moves by the EU to legislate AI practices (the AI Act) signal that regulation is not theoretical. It’s here. Organizations that bake compliance and transparency into their AI strategy from day one will fare far better than those that retrofit guardrails after scandal strikes.

 

When More Speed Means More Mistakes

It is tempting to mistake urgency for clarity. We are living in a moment where AI tools are not just available—they’re irresistible. The doors, to borrow a phrase, are frequently flung open by executives eager to run in entirely the wrong direction.

 

But this is not a race where speed alone determines winners. It’s a course where direction, judgment, and discipline matter more. AI is indeed a jetpack. But unless you strap it on with strategic intent, you’re more likely to crash through the ceiling than rise above it.

 

 
 
 
