Announcing upcoming book

Connecting the Dots… Reflections on TPS

Our upcoming book about Lean and TPS is almost here!

I’m excited to share that I’ve been working on a groundbreaking book in collaboration with Harish Jose, a fellow systems thinker, mentor, and highly experienced Lean thinker.

Front cover design of the book (draft version)

Together, we’ve crafted what we believe is the most comprehensive exploration of the Toyota Production System (TPS) ever published. This isn’t just a collection of tools or surface-level concepts. We’ve gone deep into the philosophical roots, technical intricacies, and ethical foundations of TPS, uncovering layers that most Lean literature barely touches. Expect chapters on Genchi Genbutsu, Jidoka, Respect for Humanity, and even Taiichi Ohno’s adaptive genius, all woven into a narrative that connects history, practice, and leadership.

If you’ve ever felt that Lean was oversimplified or misrepresented, this book will challenge and expand your understanding.

The book will initially be available to the Cyb3rsyn community.

AI Section

Keep Your Edge: Why DHH Says AI Can’t Replace Real Programmers

I recently came across this wonderful interview with DHH about AI, and I highly recommend watching the YouTube video.
If you don’t know DHH, here is a brief intro.

David Heinemeier Hansson (DHH) is the creator of Ruby on Rails and CTO of 37signals, known for shaping modern web development. Beyond code, he’s a bestselling author and Le Mans-winning race car driver who blends sharp insight with unapologetic candor.

David Heinemeier Hansson (DHH) approaches artificial intelligence with both optimism and clear-eyed skepticism. In his discussion with Lex Fridman, DHH emphasizes that while he uses AI tools like ChatGPT and Claude daily for coding assistance and research, he cautions against the hype that AI will imminently replace programmers or build production-grade systems without human oversight.

He believes AI currently functions best as a collaborative tool, helping with code review and idea generation, but not as a standalone creator of complex or reliable software.

DHH is wary of allowing AI to “autocomplete all your code,” warning that this leads to a gradual loss of personal competence—a phenomenon he likens to the atrophy that happens when managers stop hands-on work. He’s enthusiastic about the pace of AI development and its future potential but encourages a balanced view: celebrate progress, but recognize that genuinely agentic, autonomous software engineering remains a goal for tomorrow, not today.

Ultimately, DHH’s view is that AI will keep raising the level of abstraction in programming, rewarding those who understand both the details and the big picture, and cautions his audience not to trade away hands-on skills too quickly in pursuit of convenience.

LeSS Section

Book Review: Managing the Unexpected

I am reading the wonderful book “Managing the Unexpected”. The authors, Karl E. Weick and Kathleen M. Sutcliffe, explore how organizations in high-risk fields—like aviation, nuclear power, and emergency medicine—consistently perform reliably despite constant surprises.
Rather than trying to eliminate uncertainty, these “high-reliability organizations” embrace it through what the authors call mindful organizing. At its core are five interrelated practices that any team can adopt to turn small glitches into early warnings and build resilience into daily work. Here are the practices with some practical ideas to implement on the ground.

Preoccupation with Failure

Teams treat every small glitch as a fire-drill that reveals hidden cracks in the system. By surfacing “near-misses” immediately, squads turn minor annoyances into early warning beacons rather than sweeping them under the rug.

  • Create a “glitch board” (physical or digital) where anyone can log a near-miss within minutes of noticing it.

  • Dedicate the first five minutes of each Daily Stand-up to discuss one near-miss and assign a lightweight follow-up action.

Reluctance to Simplify Interpretations

Rather than accepting the first plausible explanation, teams dig deeper to understand conflicting data and edge cases. This cultural muscle prevents surface-level fixes that may introduce bigger problems tomorrow.

  • Launch quick “data sense-making sessions” when metrics look off—invite both devs and QA to challenge assumptions.

  • Use a lightweight 5-Whys card at the point of anomaly and keep it visible until root causes are addressed.

Sensitivity to Operations

Real-time flow trumps weekly reports. Teams continuously monitor the actual day-to-day workstream, creating feedback loops that catch blockers before they snowball.

  • Assign a rotating “flow guardian” whose sole job is to spot and highlight bottlenecks on the Kanban board.

  • Maintain a simple, up-to-date operational dashboard (even a whiteboard sketch) in the team room for everyone to see.

Commitment to Resilience

When things go wrong, teams bounce back faster because they’ve rehearsed failure and written blameless playbooks. They view breakdowns as learning labs, not blame games.

  • Run monthly “game days” simulating incidents (server crash, data loss) and time your recovery.

  • After each real or simulated failure, conduct a 15-minute blameless post-mortem and update a shared runbook.

Deference to Expertise

In a crisis or pivot, decisions flow to those with the best local knowledge, regardless of title. This preserves speed and accuracy when it matters most.

  • Empower any team member to pause work and call a “quick huddle” with the person who knows the most about the current problem.

  • Keep an evolving “expertise map” so you can rapidly identify who to loop in when unexpected issues arise.

Systems Thinking Section

Most failures are due to bad mental models

In his provocative book “The Democratic Corporation”, Ackoff argues that many of society’s chronic failures, in education, healthcare, governance, and business, stem not from bad intentions but from bad mental models.

Most systems today are still being managed using outdated deterministic or mechanistic models, treating people like parts in a machine. But modern social systems are fundamentally purposeful and dynamic, filled with individuals and groups making choices.

Simple metaphor: a deterministic system is like a watch; every cog behaves as expected based on physical laws.

A social system is like a team: its performance depends not only on individual skills, but on collaboration, motivation, communication, and choice.

Why this matters: Treating a social system (like a company or community) as if it were deterministic leads to mismanagement, poor morale, and failure to adapt. Social systems need models that honor human purpose, autonomy, and complexity.

Social systems require social-systemic thinking models that embrace complexity, interdependence, choice, and feedback. When we mistakenly apply simplistic models to complex, choice-driven systems, we may optimize parts while damaging the whole.

The author calls for organizations that are democratic, adaptive, decentralized, and capable of dissolving conflict while increasing choice. In an era of turbulence and interconnection, success depends on matching our models to the true nature of the systems we seek to change.

Upcoming Course Schedule

If you are keen to join any of these courses, please reach out to me directly.

We are authorised training providers for the following courses:

  • ICAgile Systemic Coaching

  • Certified LeSS Practitioner

  • Certified LeSS for Executives

  • Systems Thinking for Leaders

Copyright (C) 2025 The Empirical Coach Pty Ltd. All rights reserved.
You are receiving this email because you opted in during one of the LeSS events, or requested it in relation to one of the trainings conducted by The Empirical Coach Pty Ltd.

Our mailing address is:
The Empirical Coach Pty Ltd
High Street Road
Glen Waverley, VIC 3150
Australia