Highlights
Top Insights
1. AI is disrupting how we learn at work. People learn through continuous effort, with practice, struggle, and frustration along the way. If AI removes much of that friction, it may undermine learning.
2. AI generating more content, simulating empathy, and automating decisions could lead to more noise (instead of deep insight) and to outsourcing our effort, thereby reducing learning.
Source: AI Is Changing How We Learn at Work (Harvard Business Review Digital Article)
Top News
1. ChatGPT is adding formatting blocks with a mini editor toolbar that turns chat responses into document-style drafts.
2. NotebookLM added a Data Tables feature that allows table creation. A new “Lecture” format for Audio Overviews can generate 30-minute lectures.
3. Manus introduced Connectors for Projects, allowing users to integrate Google Drive and custom APIs.
4. Zhipu AI introduced GLM-4.7, an open-source model focused on reasoning, coding and multimodal work.
Additional Insights
1. Driving the connected mobility shift: Verizon’s view on V2X (McKinsey)
V2X (vehicle-to-everything) is finally escaping “pilot purgatory” not because cars got vastly smarter, but because networks did: 5G plus edge computing can push “street-level intelligence” close enough to meet hard latency constraints, offload heavy perception and inferencing from vehicles and roadside gear, and even program the network itself (via slicing) to prioritize things like first responders. What stands out is how the near-term value is framed less as futuristic autonomy and more as unsexy, ROI-able wins—emergency-vehicle signal preemption, vulnerable road-user protection “around corners,” and smoother tolling—while the hardest blocker isn’t technology but commercial alignment and accountability across cities, OEMs, and insurers. The other sharp insight: cybersecurity can’t be bolted on later, because once traffic systems become orchestrated they effectively become critical infrastructure, and a single high-profile failure could set consumer trust (and adoption) back disproportionately.
2. Framing the Human–AI Fit: Matching Collaboration Strategies to Hybrid Problems (California Management Review Insights)
Leaders get the most from AI not by “deploying a model,” but by framing the problem first: diagnosing how familiar, complex, and value-laden (wicked) it is, then choosing the right human–AI “dance.” The piece proposes a Hybrid Diagnostic Cube to classify hybrid problems and match them to four collaboration modes—automated execution, machine-augmented decision making, human-in-the-loop oversight, or expert human judgment—arguing that most failures come from picking the wrong mode (over-automating judgment or under-using computation). Because problems “move” over time (novel becomes routine, etc.), leaders must repeatedly reassess fit and manage the shift as a social change process—surfacing resistance, translating across expert groups, and framing AI as learning/extension rather than replacement—so AI becomes a true cognitive partner instead of a misapplied tool.
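The cube-to-mode matching above can be pictured as a simple lookup: score a problem along three axes, then choose a collaboration mode. The sketch below is purely illustrative; the axis thresholds and the corner-to-mode assignments are assumptions for the sake of the example, not the article's actual framework.

```python
# Illustrative sketch of the Hybrid Diagnostic Cube idea: classify a problem
# along three axes (familiarity, complexity, value-ladenness) and map it to
# one of four human-AI collaboration modes. Thresholds and assignments here
# are hypothetical, not taken from the California Management Review piece.
from dataclasses import dataclass


@dataclass
class Problem:
    familiarity: float   # 0 = novel, 1 = routine
    complexity: float    # 0 = simple, 1 = highly complex
    value_laden: float   # 0 = purely technical, 1 = wicked / value-laden


def collaboration_mode(p: Problem) -> str:
    """Map a problem's position in the cube to a collaboration mode."""
    if p.value_laden > 0.5:
        # Wicked, value-laden problems call for human judgment; AI stays
        # in a supporting role at most.
        if p.familiarity < 0.5:
            return "expert human judgment"
        return "human-in-the-loop oversight"
    if p.familiarity > 0.5 and p.complexity < 0.5:
        # Routine, simple, low-stakes work is a candidate for automation.
        return "automated execution"
    return "machine-augmented decision making"


# Problems "move" over time: as a novel task becomes routine, the same
# diagnosis can recommend shifting from oversight toward automation.
print(collaboration_mode(Problem(familiarity=0.9, complexity=0.2, value_laden=0.1)))
```

The point of the lookup shape is the article's core claim: the mode is a function of the diagnosed problem, so re-running the diagnosis as a problem evolves naturally surfaces when the collaboration strategy should change.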
3. Job apocalypse? Humbug! (The Economist)
As AI agents spread, they’re creating new jobs centered on human judgment and organizational fit, not just coding. Training modern models increasingly relies on higher-paid domain experts (e.g., in finance, law, and medicine) rather than rote taggers; deploying agents inside companies fuels demand for “forward-deployed engineers” who blend developer, consultant, and salesperson and understand messy human contexts (like why customers insist on a person). Meanwhile, more work is emerging around human-in-the-loop oversight (remote troubleshooters handling edge cases and stressed users), formal AI risk and governance to prevent leaks and operational failures, and senior “chief AI officers” to coordinate many models and vendors—suggesting the premium is shifting from pure technical ability toward communication, accountability, and real-world judgment.
4. Science in 2026 (Nature)
2026 is set to be a pivotal year for science, driven by rapid advances in artificial intelligence, medicine, space exploration and geopolitics. AI is moving from a support tool to an active research partner, with smaller, more efficient models and even AI agents delivering early scientific breakthroughs—though not without risks. Major medical developments include large-scale cancer blood tests, streamlined clinical trial rules and momentum in personalized gene-editing therapies. Space missions to the Moon, Mars and the Sun, deep-ocean drilling into Earth’s mantle, and big upgrades to particle-physics labs signal bold exploration ahead, while shifting global power—especially China’s dominance in many key technologies—and US policy changes add political and funding uncertainty to the research landscape.