Cutting-Edge Insights into Innovation

AI Affects Learning at Work

Highlights


Top Insights

1. AI is disrupting how we learn at work. People learn through continuous effort, with practice, struggle, and frustration along the way. If AI removes much of that effort, it may undermine learning.

2. As AI generates more content, simulates empathy, and automates decisions, it could produce more noise (rather than deep insight) and outsource our effort, thereby reducing learning.

Source: AI Is Changing How We Learn at Work (Harvard Business Review Digital Article)

Top News

1. ChatGPT is adding formatting blocks with a mini editor toolbar that turns chat responses into document-style drafts.
2. NotebookLM has a new Data Tables feature that synthesizes source information into structured tables. A new “Lecture” format for Audio Overviews can generate roughly 30-minute lectures.
3. Manus introduced Connectors for Projects, allowing users to integrate Google Drive and custom APIs.
4. Zhipu AI introduced GLM-4.7, an open-source model focused on reasoning, coding and multimodal work.

Additional Insights

1. Driving the connected mobility shift: Verizon’s view on V2X (McKinsey)
V2X (vehicle-to-everything) is finally escaping “pilot purgatory” not because cars got vastly smarter, but because networks did: 5G plus edge computing can push “street-level intelligence” close enough to meet latency physics, offload heavy perception and inferencing from vehicles and roadside gear, and even program the network itself (via slicing) to prioritize things like first responders. What stands out is how the near-term value is framed less as futuristic autonomy and more as unglamorous, ROI-able wins (emergency-vehicle signal preemption, vulnerable-road-user protection “around corners,” and smoother tolling), while the hardest blocker isn’t technology but commercial alignment and accountability across cities, OEMs, and insurers. The other sharp insight: cybersecurity can’t be bolted on later, because once traffic systems become orchestrated they effectively become critical infrastructure, and a single high-profile failure could set consumer trust (and adoption) back disproportionately.

2. Framing the Human–AI Fit: Matching Collaboration Strategies to Hybrid Problems (California Management Review Insights)
Leaders get the most from AI not by “deploying a model,” but by framing the problem first: diagnosing how familiar, complex, and value-laden (wicked) it is, then choosing the right human–AI “dance.” The piece proposes a Hybrid Diagnostic Cube to classify hybrid problems and match them to four collaboration modes—automated execution, machine-augmented decision making, human-in-the-loop oversight, or expert human judgment—arguing that most failures come from picking the wrong mode (over-automating judgment or under-using computation). Because problems “move” over time (novel becomes routine, etc.), leaders must repeatedly reassess fit and manage the shift as a social change process—surfacing resistance, translating across expert groups, and framing AI as learning/extension rather than replacement—so AI becomes a true cognitive partner instead of a misapplied tool.

3. Job apocalypse? Humbug! (The Economist)
As AI agents spread, they’re creating new jobs centered on human judgment and organizational fit, not just coding. Training modern models increasingly relies on higher-paid domain experts (e.g., in finance, law, and medicine) rather than rote taggers; deploying agents inside companies fuels demand for “forward-deployed engineers” who blend developer, consultant, and salesperson and understand messy human contexts (like why customers insist on a person). Meanwhile, more work is emerging around human-in-the-loop oversight (remote troubleshooters handling edge cases and stressed users), formal AI risk and governance functions to prevent leaks and operational failures, and senior “chief AI officers” to coordinate many models and vendors—suggesting the premium is shifting from pure technical ability toward communication, accountability, and real-world judgment.

4. Science in 2026 (Nature)
2026 is set to be a pivotal year for science, driven by rapid advances in artificial intelligence, medicine, space exploration and geopolitics. AI is moving from a support tool to an active research partner, with smaller, more efficient models and even AI agents delivering early scientific breakthroughs—though not without risks. Major medical developments include large-scale cancer blood tests, streamlined clinical trial rules and momentum in personalized gene-editing therapies. Space missions to the Moon, Mars and the Sun, deep-ocean drilling into Earth’s mantle, and big upgrades to particle-physics labs signal bold exploration ahead, while shifting global power—especially China’s dominance in many key technologies—and US policy changes add political and funding uncertainty to the research landscape.

Innovation Radar

 
1. AI Model Releases and Advancements

MiniMax unveiled its enhanced M2.1 AI model, focused on real-world tasks and offering stronger multi-language programming and mobile-development support (MSN).

Zhipu AI has launched GLM-4.7, a new state-of-the-art open-source language model focused on advanced reasoning, coding, and multimodal capabilities, available globally via Z.ai’s APIs (Testing Catalog).

2. AI Tools and Features

Manus has launched Connectors for Projects, letting users worldwide integrate apps such as Gmail, Google Drive, Notion, GitHub, Google Calendar, and custom APIs to automate context-aware workflows securely within personal or team workspaces (Testing Catalog). Manus Design View is a new integrated feature of the Manus agent that lets users generate, precisely edit, and refine visual designs, including images and text, in a single seamless, mobile-friendly AI workflow (Manus).

NotebookLM’s new Data Tables feature automatically synthesizes scattered information from your sources into clean, exportable tables—useful for everything from meeting action items and competitor analysis to study prep, research synthesis, and travel planning (Google). Gemma Scope 2 is a large open-source suite of interpretability tools for all Gemma 3 models that lets researchers examine and debug complex internal language-model behaviors to advance AI safety and reliability (Google). Google is testing a new NotebookLM “Lecture” audio overview that can generate roughly 30-minute, lecture-style explanations in multiple languages, with more voice options coming in 2026 (Testing Catalog).

Bloom is an open-source framework that automatically generates scenario-based evaluation suites for a researcher-specified, misalignment-relevant behavior. It runs rollouts and judges them to quantify the behavior’s frequency and severity across models, and it has been validated to correlate well with human judgments while distinguishing baseline models from intentionally misaligned ones (Anthropic).

OpenAI has added new granular personalization controls that let users fine-tune ChatGPT’s personality, tone, formatting style, and emoji use to better match their preferences (Lifehacker). ChatGPT is adding formatting blocks with a mini editor toolbar that turns chat responses into document-style drafts, bringing office-suite-style rich text editing and task-specific layouts (Dataconomy).

3. Others

U.S. regulators approved a daily pill version of Novo Nordisk’s Wegovy (oral semaglutide) for obesity, giving the company an early lead over Eli Lilly’s still-pending rival and potentially expanding access through a more convenient, possibly cheaper alternative to injections (NPR).