Highlights
Top Insights
1. Most companies still automate “inside” existing workflows. The winners rethink the workflow itself and assign AI agents distinct roles across the entire journey.
- Example: In quote-to-order, agents don’t just help sales reps; they intake requests, classify complexity, process orders, and manage customer status end-to-end. Result: 30–40% labor cost reduction and tens of millions in new revenue from faster turnaround and higher conversion.
2. Vendor platforms accelerate deployment but can cost up to 3× more annually at scale (licenses + services can reach ~$1.5M per use case). In-house platforms cost less over time but require stronger engineering and governance capability.
3. Waiting for perfect enterprise data slows value creation unnecessarily. Modern agents using RAG and tools can operate on messy, decentralized data and drive data improvement over time.
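The retrieval pattern behind that insight can be sketched in a few lines. This is an illustrative toy, not anything from the BCG article: token-overlap scoring stands in for a real embedding-based retriever, and the snippet names and formats are invented to show that inconsistent, messy records can still be searched and assembled into usable context for an agent.

```python
# Minimal RAG sketch: rank messy, inconsistently formatted snippets by
# token overlap with a query, then assemble them into an LLM prompt.
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase word counts; tolerant of punctuation and odd formatting."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the top-k snippets by shared-token count with the query."""
    q = tokenize(query)
    return sorted(docs, key=lambda d: sum((tokenize(d) & q).values()),
                  reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble retrieved context the way an agent would pass it to a model."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Deliberately messy, decentralized records (hypothetical examples).
snippets = [
    "ORDER #A1192 status: shipped 2024-03-02, carrier UPS",
    "quote Q-77: pending approval, complexity=high",
    "FAQ: returns accepted within 30 days",
]
print(build_prompt("what is the status of order A1192?", snippets))
```

A production agent would swap the overlap scorer for vector search and add tool calls, but the loop is the same: retrieve what exists, ground the answer in it, and let usage reveal which data to clean next.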
Source: Scaling AI Requires New Processes, Not Just New Tools (BCG)
Top News
1. OpenAI plans to begin testing clearly labeled ads in ChatGPT while emphasizing privacy, answer independence, and user control.
2. Replit launched a new feature that lets users create, publish, and monetize mobile apps for Apple devices.
3. Anthropic has begun beta testing a new Claude AI integration with Apple Health.
4. Amazon One Medical launched a HIPAA-compliant, agentic health AI assistant.
Additional Insights
1. It’s time for agentic video editing (a16z)
While video creation exploded in 2025, editing remains the main bottleneck, and 2026 will mark a shift toward AI agents taking on much of that work. Video editing is time-intensive, requires significant taste and skill, and creates a high barrier to entry despite growing demand for video content. Recent advances in multimodal vision models, tool-using AI agents, and improved image and video generation have made it feasible for agents to understand, plan, and execute complex editing tasks. These capabilities will dramatically increase both the supply and quality of video content, shifting human creators toward higher-level creative direction while agents execute the labor-intensive editing work.
2. Human capabilities are at the heart of high-performing teams (Deloitte)
The article argues that while structure, skills, and technology matter, sustained high performance in teams increasingly depends on enduring human capabilities that technology cannot replace. Based on a large survey and executive interviews, it highlights six core capabilities—curiosity, emotional and social intelligence, divergent thinking, informed agility, resilience, and connected teaming—as distinguishing factors of high-performing teams. Such teams demonstrate greater trust, inclusion, adaptability, learning from failure, and openness to diverse perspectives, which together enable better decision-making and stronger responses to change. The research also suggests that these capabilities improve how teams integrate and benefit from AI, leading to higher-quality collaboration with both technology and colleagues. Despite this, organizations tend to underinvest in human skills relative to technology, indicating an opportunity to rebalance training, encourage experimentation, and clarify how individual roles connect to broader strategy to build more resilient, future-ready teams.
3. How our AI innovation engines actually work (Board of Innovation)
The piece explains how AI-native innovation engines function as continuous, learning systems that integrate data, sensing, insight generation, concept creation, simulation, and portfolio management into a single feedback-driven loop. It argues that innovation improves when intelligence becomes the organizing principle, allowing teams to connect signals across markets, consumers, and culture and make evidence-backed decisions at scale. A core insight is the importance of a connected intelligence layer that unifies internal and external data within an organization’s own secure environment so knowledge compounds over time. The article highlights always-on sensing and AI-driven interpretation as ways to surface early signals and transform them into reusable, company-owned insights and opportunity spaces. It emphasizes simulation and validation through digital twins and virtual experiments to reduce risk and prioritize investments earlier. Throughout, it stresses that humans remain central, guiding strategy, ethics, and judgment while working alongside systems designed to learn continuously with them.
4. Beyond feasibility filters: How expertise heterogeneity enables innovation recognition (Strategic Management Journal)
The article presents research-based insights into how different configurations of evaluator expertise shape the recognition of innovation, arguing that innovation assessment is not a single judgment but a staged cognitive process. It shows that evaluators tend to apply an initial feasibility filter to screen out ideas that fail minimum technical standards before engaging in a more integrative assessment of how novelty can enhance performance. Domain-specific experts act as strict technical gatekeepers, ensuring rigor but often overlooking cross-domain improvement opportunities. Domain-adjacent experts are more open to novel ideas and rate both novelty and feasibility higher, though sometimes with less stringent standards. Most importantly, domain-spanning experts combine rigorous feasibility screening with a superior ability to recognize how novel elements across domains can improve system performance, suggesting that organizations should strategically combine and sequence different types of expertise rather than rely on a single “ideal” evaluator.
5. AI Boosts Research Careers but Flattens Scientific Discovery (IEEE Spectrum)
The analysis argues that AI tools significantly boost individual scientific careers by increasing publication volume, citations, and leadership advancement, but simultaneously narrow the collective scope of scientific discovery. By examining over 40 million papers, the study finds that AI-augmented research clusters around popular, data-rich, and tractable problems, reducing topical diversity and weakening follow-on engagement between studies. This creates a tension where personal incentives reward speed, scale, and productivity, while the broader scientific enterprise risks conformity and declining originality. The pattern appears consistent across multiple decades and waves of AI development, suggesting the effect is structural rather than temporary. The authors and commentators emphasize that the core issue lies less in AI’s technical design and more in academic reward systems that steer researchers toward safe, easily automated questions. Ultimately, the piece frames AI as a powerful but double-edged tool whose impact on science will depend on whether incentives are reshaped to encourage exploration beyond well-trodden intellectual ground.