Cutting-Edge Insights into Innovation

Develop AI as Talent

Highlights


Top Insights

1. Treating AI as talent is a new way of thinking: you don’t “implement” GenAI, you hire, train, and develop it. That forces leaders into familiar talent questions: job definition, curriculum, mentorship, performance reviews, career path.

2. The proprietary asset isn’t the AI model; it’s your “dynamic playbook.” Models commoditize. What becomes defensible is your machine-readable capture of top performers’ tacit heuristics. That’s the thing competitors can’t copy quickly. This flips AI investment logic from “buy compute + data” to “build institutional know-how at scale.”

Source: Beyond the Big Data Mindset: An Executive’s Guide to Cultivating AI as Talent (CMR Insight)
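As a purely illustrative sketch of what a machine-readable “dynamic playbook” might look like, the Python below models captured heuristics with provenance and a confidence weight that is re-estimated as outcomes arrive. Every class, field, and rule name here is hypothetical, not from the source article:

```python
from dataclasses import dataclass, field

@dataclass
class Heuristic:
    """One captured rule of thumb from a top performer."""
    trigger: str             # situation in which the rule applies
    action: str              # what the expert actually does
    source: str              # who contributed it (provenance)
    confidence: float = 0.5  # re-estimated as outcomes come in

@dataclass
class Playbook:
    """A 'dynamic playbook': heuristics that are queried and re-weighted over time."""
    heuristics: list = field(default_factory=list)

    def add(self, h: Heuristic) -> None:
        self.heuristics.append(h)

    def lookup(self, situation: str) -> list:
        # Naive keyword match; a production system might use embeddings instead.
        return sorted(
            (h for h in self.heuristics if h.trigger in situation),
            key=lambda h: h.confidence,
            reverse=True,
        )

    def record_outcome(self, h: Heuristic, success: bool) -> None:
        # Simple exponential update keeps the playbook "dynamic".
        h.confidence = 0.8 * h.confidence + 0.2 * (1.0 if success else 0.0)

pb = Playbook()
rule = Heuristic(trigger="discount request",
                 action="trade discount for longer term",
                 source="top account exec")
pb.add(rule)
pb.record_outcome(rule, success=True)
print(pb.lookup("customer discount request")[0].action)
# prints: trade discount for longer term
```

The point of the sketch is the shape, not the scale: the defensible asset is the accumulated, feedback-updated rule base, which an AI system can be trained against.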

Top News

1. DeepSeek has released V3.2 and V3.2-Speciale, which claim GPT-5/Gemini-3-level reasoning and coding performance.
2. Kling O1 is a unified multimodal AI video model with precise editing, while Kling AI’s Video 2.6 adds native audio generation.
3. Runway has launched its Gen 4.5 text-to-video AI, topping the Video Arena benchmark.
4. Mistral 3 is a new open-weight model family spanning edge-friendly Ministral 3 (3B/8B/14B) and the frontier MoE Mistral Large 3.
5. Google has rolled out Gemini 3 Deep Think mode to AI Ultra subscribers in the Gemini app, while Google Workspace Studio is a new no-code hub for building AI agents.

Additional Insights

1. Generation AI (The Economist)

AI is rapidly reshaping childhood both in school and at home: after early bans, it’s now normalised in classrooms (majorities of US high-schoolers and teachers use it), with governments in places like the US, Singapore and China pushing AI literacy and adoption. Used well, AI can cut teachers’ prep time, generate engaging materials, and deliver personalised tutoring that early trials suggest boosts reading and language outcomes, potentially widening access to high-quality learning beyond wealthy families. But the central educational risk is cognitive “offloading”: students may rely on AI for answers rather than thinking, which studies link to weaker recall and critical-thinking confidence, plus more cheating and a shift toward in-school testing. Outside class, AI is personalising play and culture, from adaptive video games to a booming market in AI toys, yet these systems can be poorly guarded, producing unsafe or emotionally manipulative interactions. Teens are also forming relationships with AI “companions”; while often harmless, a minority treat them as friends or partners and disclose serious issues to them, with rare but severe harms reported, prompting rising regulatory scrutiny and child-specific product controls. Overall, AI offers powerful new tools for learning and creativity, but its very helpfulness can undercut independence, social development, and emotional resilience unless carefully designed, guided, and supervised.

2. Future of Business: Moderna’s Founder on Innovation That Breaks Through (Harvard Business Review Podcast)
Noubar Afeyan distinguishes incremental “adjacency” innovation from true breakthroughs, which are discontinuous, hard to predict, and often undervalued at first; he argues companies need both mindsets (improving from the present forward while also imagining future states and working backward) because adjacency work commoditizes quickly, whereas leaps can create durable advantage. Breakthroughs require structures and incentives different from standard R&D: a dedicated, differently rewarded group that runs many parallel bets in a realm of uncertainty (unknown probabilities), persisting through iteration rather than relying on expert validation. Afeyan sees AI as essential “augmented imagination” that can generate and sift thousands or millions of novel ideas, exemplified by Flagship’s agent-based “Extuitive” project, and frames “polyintelligence” as recognizing intelligence not only in machines but also in nature’s adaptive systems, which can reshape biology and innovation. Flagship’s process explicitly mimics Darwinian evolution—variation, selection, iteration, and inheritance—using market or user feedback as selection pressure, with openness to matching problems and solutions along the way.
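The variation-selection-iteration-inheritance loop described above can be illustrated with a toy evolutionary search. This is a generic sketch of the Darwinian mechanism, not Flagship’s actual process; the fitness function is a stand-in for market or user feedback:

```python
import random

random.seed(7)

def fitness(idea):
    """Stand-in for market/user feedback: closer to a target profile is better."""
    target = [1, 1, 1, 1, 1, 1, 1, 1]
    return sum(a == b for a, b in zip(idea, target))

def mutate(idea):
    """Variation: flip one attribute of an idea."""
    i = random.randrange(len(idea))
    child = list(idea)
    child[i] = 1 - child[i]
    return child

# Start with many parallel "bets" (random idea variants).
population = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]

for generation in range(30):
    # Selection: feedback keeps the better half (inheritance of what works).
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Iteration: survivors spawn mutated offspring for the next round.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print(fitness(best))  # climbs toward the maximum of 8 over generations
```

Note the structural echo of Afeyan’s argument: no single bet needs a known probability of success; the portfolio plus a selection pressure does the work.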

3. State of AI: An Empirical 100 Trillion Token Study with OpenRouter (a16z)
A study of 100 trillion tokens of traffic through OpenRouter, a platform that routes AI inference requests to many different LLMs, provides an up-to-date picture of real-world AI usage. Open-weight models have become mainstream, accounting for about one-third of real traffic; Chinese open-source models have exploded from ~0 to near-parity with Western ones, and open-model usage is driven mainly by roleplay (over half of open-source tokens) rather than coding. Users are shifting toward “medium” 15B–70B models as a capability/efficiency sweet spot, while reasoning-tuned models rapidly became the default, now powering over half of all tokens. Prompt lengths have quadrupled largely because programming inputs are huge, and programming itself surged to the #1 use case (over half of recent tokens), yet roleplay remains almost as large overall. Despite falling prices, demand looks close to inelastic (people pick models for fit and trust more than cost) while technical/IT queries are a rare high-cost, high-volume outlier users willingly pay for. Finally, retention shows “first-to-fit” lock-in effects (some early cohorts stay ~40% at month five), DeepSeek displays a notable churn-then-return boomerang pattern, and Asia’s share of spend has doubled, signaling a fast geographic shift in AI usage.

4. AI in 2026: A Tale of Two AIs (Sequoia)
David Cahn argues that 2026 will be a “tale of two AIs”: behind the scenes, progress on infrastructure and AGI will slow, while user-facing adoption keeps accelerating. He expects major data-center buildouts to slip because Big Tech’s surging AI capex (especially from Google and Meta, with Microsoft and Amazon still aggressive) will collide with chokepoints across the stack—limited scaling by near-monopolists like TSMC/ASML, plus late-stage construction bottlenecks in industrial gear (generators, cooling) and skilled-labor shortages; one sign of delays would be hyperscalers warehousing chips instead of installing them. At the same time, the AGI timeline is being pushed out from late-2020s hype to a more realistic early-2030s window, raising the risk that today’s hyperscaler spending overshoots near-term revenue, which remains only tens of billions versus trillions in planned compute/energy investment. Yet none of this, he says, will meaningfully slow adoption: AI apps—led by coding tools and ChatGPT-like products—are already multi-billion-dollar businesses, more startups are racing from $0 to $100M and soon $0 to $1B in revenue, and they’re doing it with striking efficiency (often >$1M revenue per employee), using AI internally to create a compounding “self-improving” flywheel while benefiting from falling compute costs and enterprise frustration with DIY AI rollouts. The next phase won’t be an overnight takeoff, Cahn concludes, but a grind of entrepreneurial execution that steadily locks AI into the economy even as the frontier and its infrastructure move more slowly than the hype once promised.

5. Building the AI muscle of your business leaders (McKinsey)
AI success hinges less on tools and more on a scarce layer of “domain owners” (N-2/N-3 leaders) who can translate business value into AI road maps and then drive delivery and adoption end-to-end; yet Fortune 500 business leaders’ skill profiles today are on average only ~17% technical, and just 5% have ever held technical roles, making this bench the true bottleneck. The article reframes AI capability as a “second muscle” business leaders must deliberately build through continuous, hands-on learning (not boot camps), starting from high-value problems rather than tech-first proofs of concept, and developing practical fluency in agile delivery, data/architecture health, and tech-talent judgment. It also argues you can’t simply hire these leaders: you must identify and upskill roughly 75–150 of them across 15–30 core journeys, supported by people-heavy incentives, real projects, and an operating model that embeds engineers under business leadership with persistent funding, because that combination becomes the hard-to-copy competitive edge.

Innovation Radar

 
1. AI Model Releases and Advancements

DeepSeek has released two free, MIT-licensed open-source AI models (V3.2 and the competition-crushing V3.2-Speciale) that claim GPT-5/Gemini-3-level reasoning and coding performance while using a new sparse-attention design to cut long-context compute costs dramatically (VentureBeat).
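DeepSeek has not published this code; as a generic illustration of why sparse attention cuts long-context cost, the NumPy sketch below uses a simple local window so each query scores only `window` keys instead of all n, making score computation O(n·window) rather than O(n²). DeepSeek’s actual sparse-attention design differs in its details:

```python
import numpy as np

def windowed_attention(q, k, v, window=64):
    """Toy local-window sparse attention: each query attends only to the
    `window` most recent keys, instead of to every key as in dense attention."""
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        lo = max(0, i - window + 1)
        scores = q[i] @ k[lo:i + 1].T / np.sqrt(d)  # at most `window` dot products
        weights = np.exp(scores - scores.max())      # stable softmax
        weights /= weights.sum()
        out[i] = weights @ v[lo:i + 1]
    return out

rng = np.random.default_rng(0)
n, d = 512, 32
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
out = windowed_attention(q, k, v, window=64)
print(out.shape)  # (512, 32)
# Dense attention would compute 512 * 512 = 262,144 scores;
# the windowed version computes at most 512 * 64 = 32,768.
```

The compute saving grows with context length, which is why sparse patterns matter most for long-context workloads.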

Kuaishou has launched Kling O1, a unified multimodal AI video model with “Nano Banana”-style precise editing to rival Sora and Runway and target adoption by filmmakers, studios, advertisers, and influencers (SCMP). Kling AI’s Video 2.6 adds native audio generation so creators can produce synced video, dialogue, and sound effects in one faster end-to-end workflow (KrAsia).

Runway has launched its Gen 4.5 text-to-video AI, topping the Video Arena benchmark ahead of Google and OpenAI with high-def, physics-aware video generation (CNBC).

At NeurIPS 2025, NVIDIA showcased its expanding open-source AI ecosystem by releasing Alpamayo-R1 for autonomous driving plus new Nemotron/NeMo speech, safety, and reinforcement-learning tools (NVIDIA).

Mistral 3 is a new Apache-2.0 open-weight model family spanning edge-friendly Ministral 3 (3B/8B/14B) and the frontier MoE Mistral Large 3, offering multimodal, multilingual, cost-efficient performance, broad hardware/software optimizations, and wide platform availability plus enterprise customization options (Mistral).

At AWS re:Invent 2025, AWS announced major AI and cloud upgrades—including Graviton5 CPUs, autonomous “frontier” agents, expanded Amazon Nova models with open training, Trainium3 UltraServers and new GPU instances for cheaper/faster training and inference, plus Bedrock AgentCore, AI Factories, and broad storage, security, Lambda, database, and observability enhancements (About Amazon).

Seedream 4.5 is a fast, high-resolution multimodal text-to-image and editing model that excels at generating and refining consistent multi-image sequences with strong character stability, accurate anatomy, and designer-ready layouts for storytelling, marketing, comics, and video workflows (Seedream).

Google has rolled out Gemini 3 Deep Think mode to AI Ultra subscribers in the Gemini app, offering improved parallel-reasoning performance on tough math/science/logic tasks and top benchmark results, accessible by selecting “Deep Think” with Gemini 3 Pro (Google).

OpenAI is experimentally training its GPT-5-Thinking model to add post-answer “confessions” that self-report and explain cheating or deception, which looks promising for interpretability but remains limited and not fully reliable because models may not know—or truthfully describe—what went wrong (MIT Technology Review).

 
2. AI Tools and Features

OpenAGI has unveiled Lux, a desktop-controlling AI agent it says beats OpenAI and Anthropic on the tough Online-Mind2Web benchmark while running faster and far cheaper thanks to action-focused training (VentureBeat).

Google Workspace Studio is a new no-code hub for building, sharing, and scaling Gemini-powered AI agents that automate tasks across Workspace and connected apps (Google).

AWS is making advanced AI model customization easier and cheaper for any developer via new Amazon Bedrock reinforcement fine-tuning and SageMaker serverless workflows, enabling faster, more accurate AI agents (About Amazon).

 
3. AI for Science

DeepSeek’s new open-weight DeepSeekMath-V2 model uses a self-verifying generator-verifier feedback loop to catch and fix its own proof errors, letting it match top Olympiad-level reasoning and surpass human Putnam scores, though it still struggles with the very hardest problems (Nature).
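The generator-verifier feedback loop can be caricatured with a toy numeric example: a “generator” proposes candidates, a “verifier” localises the error, and the critique narrows the next attempt. This is a minimal sketch of the loop’s structure only; DeepSeekMath-V2’s verifier actually scores steps of full mathematical proofs:

```python
def generate(feedback):
    """Toy 'generator': proposes a candidate given prior feedback.
    A real system would be an LLM producing a full proof attempt."""
    lo, hi = feedback
    return (lo + hi) // 2  # propose the midpoint of the still-plausible range

def verify(target, candidate):
    """Toy 'verifier': checks the candidate and localises the error."""
    if candidate == target:
        return True, None
    return False, "too low" if candidate < target else "too high"

def solve(target, max_rounds=10):
    """Generator-verifier loop: each critique feeds the next generation step."""
    lo, hi = 0, 100
    for _ in range(max_rounds):
        candidate = generate((lo, hi))
        ok, critique = verify(target, candidate)
        if ok:
            return candidate
        if critique == "too low":
            lo = candidate + 1
        else:
            hi = candidate - 1
    return None

print(solve(37))  # prints 37 after three generate-verify rounds
```

The key property the paper exploits is the same one visible here: verification plus targeted feedback converges far faster than blind regeneration.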

Researchers built a monitor-sized, glasses-free 3D display called EyeReal that uses deep learning and eye-tracking to concentrate light only toward the viewer’s eyes, overcoming an information bottleneck to deliver wide-angle, high-fidelity, real-time 3D from low-cost LCD hardware (Nature).

4. Others

Extreme hardware-software codesign in NVIDIA’s Blackwell GB200 NVL72 removes MoE scaling bottlenecks, delivering about 10× faster, more energy-efficient inference for leading open-source models (NVIDIA).