Highlights
Top Insights
1. Asking “What does your birthday in 2040 feel like?” leads to more innovative futures than asking “What does 2040 look like?” Most future-thinking prompts are visual, tech-centric, and abstract. But shifting to an emotional, human-centered prompt grounds the future in personal meaning, opening up more authentic and diverse possibilities.
2. It is important to question the framing of problems to escape outdated assumptions. Many projects aiming to be “innovative” are just applying fresh paint to old models. The framing itself (e.g., “how do we improve X?”) often smuggles in old assumptions, preventing truly new thinking.
Source: The “used future” trap (IDEO)
Top News
1. Tencent has open-sourced its Hunyuan3D World Model 1.0.
2. Runway announced Aleph, an AI video model that enables users to edit and generate video content through simple prompts.
3. Z.ai introduced GLM-4.5 and GLM-4.5-Air, advanced LLMs designed for reasoning, coding, and agentic tasks.
4. AlphaEarth Foundations is an AI model that integrates multimodal Earth observation data.
5. Microsoft has introduced Copilot Mode in Edge, an AI-powered browsing experience.
Additional Insights
1. The Bitter Lesson versus The Garbage Can (One Useful Thing)
The article explores the tension between structured process mapping in organizations and the unpredictable, messy reality captured by the “Garbage Can Model” of decision-making. It contrasts this with Richard Sutton’s “Bitter Lesson” from AI research, which shows that brute-force learning from outcomes often outperforms systems built on human expertise and careful design. As applied to AI in the workplace, this suggests that instead of trying to map and automate every chaotic process, companies might be better off defining desired outputs and letting AI agents learn how to produce them, potentially bypassing institutional complexity altogether. The piece ultimately poses a crucial question: are organizations problems that can be solved like chess—with enough data and computation—or are they irreducibly messy systems that require human understanding to navigate? The answer may reshape how companies compete in the age of AI.
2. A Guide to Building Change Resilience in the Age of AI (HBR)
Shopify and Moderna achieved major AI-driven performance gains not by upgrading technology alone, but by radically rethinking their organizational models. Shopify, for example, spun off its entire logistics arm—something it had spent years building—to refocus on product innovation and launch AI-native tools like Sidekick, an assistant for entrepreneurs. Similarly, Moderna merged its technology and HR departments to create entirely new workflows and roles, resulting in over 3,000 customized AI agents that now support everything from clinical trials to internal operations. These bold moves contrast with the common corporate instinct to layer AI on top of existing systems. The most effective organizations aren’t just adding AI—they’re redesigning themselves around it.
3. How Four Companies Capitalize on AI to Deliver Cost Transformations (BCG)
While over 90% of executives recognize AI’s crucial role in cost reduction, many companies fail to translate AI-driven productivity gains into lasting financial impact because they merely automate existing processes rather than redesigning them. Leading firms achieve transformative results—like billions in savings—by using AI not just for efficiency, but to fundamentally reshape workflows, rigorously track outcomes, and integrate AI into broader cost programs. For instance, a German energy provider built a GenAI tool in just ten weeks to catch overpayments, while a biopharma company ran side-by-side “mirror” process experiments to reinvent marketing, R&D, and manufacturing, saving up to $170 million. IBM, by applying all three success drivers holistically—including a major focus on people—unlocked $3.5 billion in savings and doubled enterprise productivity.
Innovation Radar
1. AI Model Releases and Advancements
Tencent has open-sourced its Hunyuan3D World Model 1.0, an AI tool that generates high-quality, interactive 3D worlds from text or images and integrates with graphics pipelines for editing and simulation (Tech In Asia).
Runway Aleph is an advanced AI video model that enables users to edit, transform, and generate video content through simple prompts, including tasks like altering lighting, camera angles, styles, objects, characters, and environments—all without complex tools or manual effort (Runway).
Alibaba has released Wan 2.2, a new suite of open-source MoE-based video generation models that produce high-quality cinematic videos from text or images with enhanced motion control, visual detail, and efficiency (KrAsia).
Z.ai introduced GLM-4.5 and GLM-4.5-Air, advanced large language models designed for reasoning, coding, and agentic tasks, achieving top-tier benchmark performance with hybrid reasoning modes, efficient MoE architecture, and reinforcement learning-based post-training for enhanced tool use, full-stack development, and complex autonomous capabilities (Z.AI).
AlphaEarth Foundations is a powerful AI model that integrates massive, multimodal Earth observation data into compact, consistent global embeddings, enabling unprecedentedly detailed, accurate, and efficient planetary mapping and monitoring (Google).
Cohere has launched Command A Vision, a state-of-the-art multimodal AI model designed for enterprises, excelling in complex visual and text tasks—such as document processing, scene understanding, and data extraction—while offering secure, efficient deployment with minimal hardware requirements (Cohere).
Krea and Black Forest Labs have released the open weights for FLUX.1 Krea, a highly aesthetic-focused image generation model designed to eliminate the “AI look” through a unique post-training pipeline combining curated supervision and human preference optimization (Krea).
2. AI Tools and Features
Google has launched Opal, an experimental tool in public beta that lets users easily create, edit, and share AI-powered mini apps using natural language and visual workflows—no coding required (Google).
NotebookLM now features Video Overviews—AI-generated, narrated visual summaries—and a redesigned Studio panel that lets users create and manage multiple customized outputs like Audio Overviews, Mind Maps, and Reports within a single notebook (Google).
Microsoft has introduced Copilot Mode in Edge, a new experimental AI-powered browsing experience that enhances productivity by understanding context across tabs, enabling natural voice commands, simplifying tasks, and helping users stay focused—all while prioritizing privacy, security, and user control (Microsoft).
Ideogram Character lets users generate endless high-fidelity variations of a character from a single photo, combining features like Magic Fill, Describe, and Remix to place characters in diverse scenes and styles (Ideogram).
ChatGPT’s new study mode offers interactive, step-by-step learning tailored to students’ skill levels, aiming to deepen understanding through guided prompts, scaffolded responses, and knowledge checks instead of simply providing answers (OpenAI).
Manus has launched Wide Research, a powerful new feature that enables users to perform large-scale, parallelized research tasks through collaborative AI agents running on dedicated cloud infrastructure (Manus).
3. AI for Science and Medicine
Stanford researchers have developed a virtual lab powered by AI agents, including an AI principal investigator, that collaborate autonomously to accelerate scientific discovery—already demonstrating their potential by designing a promising new nanobody-based COVID-19 vaccine candidate in just days (Stanford).
4. Other
Chinese startup Unitree Robotics has launched the R1, a multimodal AI-powered humanoid robot priced under $6,000, marking a major step toward affordable, versatile robotics (Bloomberg).