<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://andrewsavikas.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://andrewsavikas.com/" rel="alternate" type="text/html" /><updated>2026-03-04T20:06:44+00:00</updated><id>https://andrewsavikas.com/feed.xml</id><title type="html">andrewsavikas.com</title><subtitle>Personal website for Andrew Savikas</subtitle><author><name>Andrew Savikas</name></author><entry><title type="html">AI Doesn’t Kill Skills — It Moves Them Up a Layer</title><link href="https://andrewsavikas.com/ai-doesnt-kill-skills-it-moves-them-up-a-layer/" rel="alternate" type="text/html" title="AI Doesn’t Kill Skills — It Moves Them Up a Layer" /><published>2026-03-04T00:00:00+00:00</published><updated>2026-03-04T00:00:00+00:00</updated><id>https://andrewsavikas.com/ai-doesnt-kill-skills-it-moves-them-up-a-layer</id><content type="html" xml:base="https://andrewsavikas.com/ai-doesnt-kill-skills-it-moves-them-up-a-layer/"><![CDATA[<p>A <a href="https://arxiv.org/abs/2601.20245">new academic paper</a> shows that AI users scored 17% <em>lower</em> on a quiz designed to measure how much they’d learned while completing a task. AI had smoothed out many of the rough edges that produce real learning, like debugging code. They completed the task, but came out the other side knowing measurably less than colleagues who didn’t use AI.</p>

<p>If you manage people who use AI tools, this sounds alarming.</p>

<p>But it isn’t—or at least not for the reasons you think.</p>

<p>Here’s what the (excellent!) paper actually found. Fifty-two experienced Python developers were randomized into two groups: one used AI to help them complete a task, the other did not. The AI group hit fewer errors along the way, which also meant there was less opportunity to learn from <em>correcting</em> those errors. If you never debug your mistakes, you never really learn to debug, and historically debugging is one of the most valuable skills a developer can cultivate.</p>

<p>But there’s some real nuance in the paper beyond that headline finding.</p>

<p>Digging deeper, the authors identified six interaction patterns in how developers used AI, and the learning outcomes were starkly different.</p>

<p>Three patterns tanked learning. <em>Delegation</em>—hand the AI the whole task and paste in the result. <em>Progressive reliance</em>—start doing the work yourself, then gradually shift to letting the AI write all the code. <em>Iterative debugging</em>—run into an error, ask the AI to fix it, run into another, ask again, never understanding what went wrong.</p>

<p>Three patterns preserved it. <em>Conceptual inquiry</em>—ask the AI questions instead of asking it to do things. This group scored highest and, interestingly, was also the fastest among the high-scoring patterns. <em>Hybrid code-explanation</em>—ask for code and an explanation together, then actually read the explanation. <em>Generation-then-comprehension</em>—let the AI write the code, then manually copy it and ask follow-up questions to understand what it did.</p>

<p>So despite the scary title, the pattern isn’t ultimately “AI helps or hurts.” It’s “how you use it determines what you learn.” The paper knows this—it’s right there in the data. But the headline finding is the scary number, not the nuanced pattern.</p>

<h2 id="the-layer-the-paper-didnt-measure">The layer the paper didn’t measure</h2>

<p>Here’s what happened when I had an AI agent build a Gmail MCP integration from scratch: I learned essentially nothing about the implementation—I couldn’t tell you how the OAuth flow works, couldn’t debug the API calls. By the paper’s measure, I atrophied.</p>

<p>But I learned a <strong>lot</strong> about something else: how to scope a project so an agent can execute it. How to break work into pieces that are independently verifiable. How to write a spec tight enough that the output is useful and loose enough that I’m not just dictating code. How to evaluate results when I can’t read every line.</p>

<p>The paper measured skills at the execution layer. It didn’t measure what was forming at the layer above.</p>

<h2 id="this-has-happened-before">This has happened before!</h2>

<p><em>Every</em> wave of programming abstraction—assembly to C, C to Python, Python to frameworks—changed <strong>which</strong> skills mattered without reducing <strong>whether</strong> skills mattered. Nobody argues that Python developers are less skilled than assembly programmers. They’re skilled at different things, at a different layer.</p>

<p>The pattern: each compression step lets you say less while the system fills in more. The skills shift from “how do I implement this” to “how do I specify this” to “how do I evaluate whether this is right.” The new layer isn’t optional or lesser. It’s just what the work requires.</p>

<p>AI agents are the next compression step, and they’ve made the leap all the way to natural human language (rather than, say, compiled libraries), but the dynamic is the same.</p>

<h2 id="the-management-transition">The management transition</h2>

<p>This is the same structural shift that happens when an individual contributor is promoted to manager. You stop learning how to do the work and start learning how to direct and evaluate it. Nobody calls that “skill atrophy”—they call it a career path. (I wrote recently about how <a href="/your-agents-have-an-org-chart-problem/">agents are forcing us to rediscover organizational behavior</a>—this is the individual version of that same shift.)</p>

<p>The VP of Engineering who can’t implement OAuth anymore isn’t atrophied. They’ve shifted their primary skills to a layer where their judgment about estimates, architecture trade-offs, and team capability is what matters. They need enough understanding to smell when something’s wrong—but “enough” is not “can do it themselves.”</p>

<h2 id="what-the-paper-actually-caught">What the paper actually caught</h2>

<p>Look at those six patterns again. The three that tanked learning—delegation, progressive reliance, iterative debugging—have something in common: the developer simply offloaded work and moved on. The three that preserved learning—conceptual inquiry, hybrid explanation, generation-then-comprehension—all involve the developer doing cognitive work <em>on top of</em> the AI output, actively engaged with guiding and evaluating it.</p>

<p>The paper didn’t catch AI killing skills. It caught people building the new skills needed to work more effectively a layer <em>above</em> writing the code.</p>

<p>The developers who learned the most were the ones who stayed engaged—who asked questions, who read the explanations, who treated the AI output as something to understand rather than something to blindly accept. They were doing the work at the new layer above, even if nobody was measuring it.</p>

<p>(To me this sounds an awful lot like the way a new manager has to stop measuring their worth by individual output and start learning to actively direct and evaluate someone else’s.)</p>

<p>The skills (and the learning) just moved up a layer. They didn’t disappear.</p>]]></content><author><name>Andrew Savikas</name></author><summary type="html"><![CDATA[A new paper says AI users learn 17% less. But it’s measuring the wrong layer — skills don’t disappear when you delegate, they shift from execution to evaluation.]]></summary></entry><entry><title type="html">Your Agents Have an Org Chart Problem</title><link href="https://andrewsavikas.com/your-agents-have-an-org-chart-problem/" rel="alternate" type="text/html" title="Your Agents Have an Org Chart Problem" /><published>2026-02-23T00:00:00+00:00</published><updated>2026-02-23T00:00:00+00:00</updated><id>https://andrewsavikas.com/your-agents-have-an-org-chart-problem</id><content type="html" xml:base="https://andrewsavikas.com/your-agents-have-an-org-chart-problem/"><![CDATA[<blockquote>
  <p>How’s everyone else feeling about a future where we each manage a team of agents? :eyes: :grimacing: I’ve been surprised that this has been freaking me out as much as it is. I guess it’s the change? And when I imagined myself managing, I imagined managing people problems. I dunno what AI problems will be like.</p>
</blockquote>

<p>That’s from a <a href="https://wanderu.com">Wanderu</a> colleague’s Slack message, and it really nails something I keep seeing with people (including myself!) moving from using a single AI chat window to coordinating multiple LLM agents at once: the main problems stop being about technology and start being about <em>management</em>.</p>

<p>Anyone trying to work with more than one or two LLM agents over any period of time will quickly run headlong into issues like:</p>
<ul>
  <li><strong>Span of control</strong> — it’s nearly impossible to directly “manage” 20 (or even 10) agents, just as it’s nearly impossible for a manager to have 20 direct human reports.</li>
  <li><strong>Delegation frameworks</strong> — trust isn’t binary. Effective oversight depends on the task at hand.</li>
  <li><strong>Controls vs. policies</strong> — writing a rule doesn’t mean it will always be followed.</li>
</ul>

<p>This is all firmly in the territory of Organizational Behavior, and there are decades of practical knowledge about coordination, delegation, trust, and verification to draw from.</p>

<p>Matt Levine has <a href="https://news.bloomberglaw.com/mergers-and-acquisitions/matt-levines-money-stuff-crypto-markets-are-where-the-fun-is">a running bit in his Bloomberg newsletter</a> about how the crypto world keeps rediscovering traditional finance. Crypto dismissed clearinghouses, custody chains, and KYC as legacy overhead, then spent a decade painfully rebuilding all of it from scratch. The structures turned out to be load-bearing. The technology changed, but the underlying problems — custody, clearing, trust — didn’t.</p>

<p>I think the same thing is happening with agents and management theory. Technologists used to treating organizational behavior as bureaucratic overhead are quickly rediscovering why TPS reports exist.</p>

<p>Others have noticed this too. Jesse Vincent wrote about agent swarms <a href="https://blog.fsck.com/2026/02/03/managing-agents/">“speedrunning”</a> the lessons of Brooks’s <em>Mythical Man-Month</em>. Martin Fowler’s recent Thoughtworks retreat notes describe “organizational structures built for human-only development <a href="https://martinfowler.com/fragments/2026-02-18.html">breaking in predictable ways</a>.”</p>

<p>As one example of how this plays out, every experienced manager learns that “work harder” doesn’t produce better work. The real job is specifying success conditions: what you’re checking for, and how you’ll both know it worked. Most people learn this by failing at the vague version first.</p>

<p>The LLM agent equivalent is obvious. “Be more careful.” “Think step by step.” “Be more accurate.” These are all just variations of “work harder” or “do better next time” — they sound like instructions but they don’t tell the agent what to actually <em>do</em> differently.</p>

<p>The good news is that we already know how to handle this! Many of the same tools and techniques developed over hundreds of years for effectively organizing, coordinating, and managing the work of teams of people work quite well for organizing, coordinating, and managing the work of teams of <em>people simulations</em>.</p>

<p>I’ve said that <a href="/what-developers-do-now-with-ai/">knowledge workers can learn a lot about the future by watching what software developers are doing</a>. But in the case of successfully using LLM agents, the reverse is also true: those building and working with teams of agents have much to learn from great managers about things like clear writing that sets proper expectations about what “good” work actually is.</p>

<p>The parallel isn’t perfect. Agents don’t get demoralized or play office politics. They have no trouble repeating the same work over and over. They’re undaunted by levels of process and bureaucracy that would crush any human. But they also don’t push back when instructions are unclear — they just <a href="/check-your-work-doesnt-work-with-llms/">confidently do the wrong thing</a>.</p>

<p>This is a theme I’ll keep coming back to: Span of control, separation of duties, delegation frameworks, the distinction between policies and structural controls — each of these has an agent parallel I want to unpack in future posts.</p>

<p>But the basic claim is that when working with agents starts to feel like management, that’s not a coincidence; it’s a fundamental feature, which means that for knowledge workers who want to make the most of agentic workflows, the big opportunity isn’t about learning new technology, it’s about applying the management, communication, and judgment skills <em>you already have</em>.</p>]]></content><author><name>Andrew Savikas</name></author><summary type="html"><![CDATA[People building AI agent systems keep rediscovering management theory — the same way crypto kept rediscovering traditional finance.]]></summary></entry><entry><title type="html">‘Check Your Work’ Doesn’t Work with LLMs</title><link href="https://andrewsavikas.com/check-your-work-doesnt-work-with-llms/" rel="alternate" type="text/html" title="‘Check Your Work’ Doesn’t Work with LLMs" /><published>2026-02-17T00:00:00+00:00</published><updated>2026-02-17T00:00:00+00:00</updated><id>https://andrewsavikas.com/check-your-work-doesnt-work-with-llms</id><content type="html" xml:base="https://andrewsavikas.com/check-your-work-doesnt-work-with-llms/"><![CDATA[<p>In <a href="/what-developers-do-now-with-ai/">my last post</a>, I argued that the tools developers build for themselves keep showing up on everyone else’s desk eventually (e.g., <em>rsync</em> ➡ Dropbox), so one of the best ways to understand how knowledge work will change tomorrow is to understand what developers are doing <em>today</em>.</p>

<p>One of the reasons using LLMs (and developers are using a <em>lot</em> of LLMs right now) feels so different from most of our experience interacting with computers is their ability to process normal human language. Just say “summarize this report into a 3-paragraph email I can send to my team” and that’s exactly what you get! While you <em>could</em> get more specific (“at least 2 paragraphs, but no more than 3, explain things in plain language like you would to a 5-year-old”) usually even a vague instruction gets at least passable results.</p>

<p>A few weeks ago I gave Claude an instruction like “draft an email summarizing our request and setting a deadline of next Tuesday.”</p>

<p>Click. Whirr. And then I had a perfectly useful draft email for my purposes. I made some quick edits and off it went.</p>

<p>But then I looked closer at the (now sent) message and saw that Claude had put in the wrong date (it was the correct date for that particular Tuesday – of <em>last</em> year 🤦‍♂️).</p>

<p>Of course Claude was “apologetic” when I pointed out the error. At first I tried to be more explicit: “<em>Be very careful about dates</em>”. I even added an extra instruction to be read before every session that said to be <strong>extremely</strong> careful about dates, and to double check them.</p>

<p>And it worked.</p>

<p>Or at least I thought so.</p>

<p>Until the next time it didn’t.</p>

<p>It turns out “check your work” is <em>directionally helpful</em> but not truly <em>effective</em>.</p>

<p>What <em>does</em> work is to <strong><em>give the LLM a test it can use to falsify a claim</em></strong>.</p>

<p>In this case, the fix was that instead of just telling Claude to “be sure about the date” I added an explicit instruction to verify EVERY relative date reference by checking it against the computer’s built-in <code class="language-plaintext highlighter-rouge">date</code> command.</p>

<p>The agent didn’t get “better at remembering to check its work on dates”, it changed the way it worked because the instructions were more specific about <em>how</em> to check its work – I gave it a falsifiable success condition.</p>

<p>Given the right guidance, the agent is perfectly capable of correcting its own mistakes before you ever see them, <strong>as long as you define what you’re asking for in a way that it can verify its own work</strong>.</p>
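<p>Here’s a minimal sketch of that kind of falsifiable check (the claimed date is made up, and this assumes GNU <code class="language-plaintext highlighter-rouge">date</code> on a Linux system): instead of “be careful about dates,” the agent is pointed at something it can actually run and compare against.</p>

```shell
# Sketch of a falsifiable date check. The claimed date is hypothetical:
# imagine it's the deadline the agent wrote into a draft email.
claimed="2026-03-10"

# Ask the system clock, not the model's memory, what weekday that is
# and what year it currently is (GNU date syntax).
weekday=$(date -d "$claimed" +%A)
current_year=$(date +%Y)
claimed_year=${claimed%%-*}   # the year portion of the claimed date

echo "$claimed falls on a $weekday"
if [ "$claimed_year" != "$current_year" ]; then
  echo "WARNING: claimed year ($claimed_year) is not the current year ($current_year)"
fi
```

<p>Either the weekday and year line up with what the email says, or they don’t. That’s a claim the agent can falsify on its own, before you ever see the draft.</p>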

<p>But while we humans are quite good at knowing what “correct” means when we see it, in the context of knowledge work, most of us don’t have much experience systematizing those preferences into workflows a computer can reliably automate.</p>

<p>My friend and publishing industry veteran George Walkley <a href="https://www.georgewalkley.com/Systems-and-Meaning/">posted recently about this from the other direction</a> — publishers have the right editorial instincts but not the systems habit:</p>
<blockquote>
  <p>Editorial training optimises for polish and precision. [But t]here is also a difference between shaping sentences and thinking in systems. Developers tend to treat prompts as modular components, version-controlled assets, parts of repeatable workflows. Most publishers that I work with are not yet operating at that level of process abstraction.</p>
</blockquote>

<p>Yet I’ve seen evidence that we’ve been able to bridge this gap in the past, and it offers useful insights for working with LLMs today.</p>

<p>Back in my own days in a book publishing Production Department, the work involved <strong><em>oodles</em></strong> of exacting criteria: Make sure every figure has a caption; Chapter titles can’t be more than 80 characters long; You can’t skip from H1 to H3 without an intervening H2.</p>

<p>The criteria were clear, but still often challenging for a person to apply 100% correctly, with dozens of rules to check across hundreds of pages. Quite often the natural response was adding yet <em>another</em> review pass by <em>another person</em> applying the same criteria (“is this Proofread 1 or Proofread 2?”).</p>

<p>Computers were rarely enlisted to help, because codifying an ever-evolving list of criteria (often with different checklists for different types of books) rarely justified the cost of having a developer build a program or script to do it.</p>

<p>A technique I borrowed from <a href="https://www.linkedin.com/in/rebeccagoldthwaite/">Rebecca Goldthwaite</a>, then at Cengage, was something of a middle ground, which was to use an XML-validation tool called <em><a href="https://en.wikipedia.org/wiki/Schematron">Schematron</a></em> to tackle the problem. With Schematron, you created a list of falsifiable assertions about a piece of content, and the computer ran down the list (however long that list was) and merrily evaluated the content based on your assertions.</p>

<p>Although it didn’t require any formal programming knowledge to run those tests, it did mean learning a fairly arcane notation for translating a natural-language assertion like “every figure MUST be immediately followed by a caption” into a formal test the computer could perform.</p>
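<p>As a sketch of what that notation looked like, here’s roughly how the figure-caption rule might be expressed in Schematron (the element names here are illustrative; a real book vocabulary like DocBook defines its own):</p>

```xml
<!-- Illustrative Schematron sketch; element names are hypothetical -->
<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <pattern>
    <rule context="figure">
      <!-- The test is an XPath expression the computer evaluates;
           the message stays in plain English for the editor -->
      <assert test="following-sibling::*[1][self::caption]">
        Every figure MUST be immediately followed by a caption.
      </assert>
    </rule>
  </pattern>
</schema>
```

<p>The plain-English message and the formal <code class="language-plaintext highlighter-rouge">test</code> live side by side, which is exactly the bridge between fuzzy language and a verifiable check.</p>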

<figure class=""><img src="/img/schematron-rules.svg" alt="Three publishing rules translated from natural language into Schematron XPath assertions" /><figcaption>
      Image credit: one of my AI agents via Claude Code

    </figcaption></figure>

<p>But it was easy to swap out which checklist of assertions you used, and with a bit of practice, even an editor with no programming experience could learn to write and revise those tests, improving and adapting them over time.</p>

<p>In hindsight, this was <strong>one example of a bridge between a person using natural language to describe a problem, and expressing that problem in a consistent, verifiable way for a computer</strong>.</p>

<p>Because until very recently, to explain to a computer how to help us, we had to learn <em>how to talk in ways that the computer could understand</em>. While that was possible using a “real” computer language, it could also be done with tools like AppleScript, or even recording a “macro” for the computer to play back again, over and over.</p>

<p>Regardless of the tool, the challenge was the same: turn fuzzy natural language into unambiguous computer instructions.</p>

<p>The good news is that now LLMs <strong>can understand our fuzzier language directly</strong>, and then <em>they</em> turn that intention into practical actions you see happen on your screen.</p>

<p>But there’s a catch: because the agent <em>seems to understand</em> that fuzzy language, it’s easy to skip a critical step — ensuring clear verification criteria. Without it, the agent will go off and do the work as best it can, leaving <em>you</em> to spot incorrect dates, or note when it said one thing but actually did another.</p>

<p>The gap used to be comprehension: the computer couldn’t understand what we wanted, so we had to learn its language. Now it’s verification: the computer understands what we want but can’t confirm it delivered.</p>

<p>Humans have already developed very effective ways of coping with this problem. It’s why airline pilots who’ve flown the same type of planes for 20 years still follow pre-flight checklists. It’s the whole conceit behind <em><a href="https://bookshop.org/p/books/the-checklist-manifesto-how-to-get-things-right-atul-gawande/6daf086d5c79b06d">The Checklist Manifesto</a></em>: offload the burden of verification to externally defined criteria. “Are we ready to take off?” is a riskier question to ask than “Can you prove you completed every step in this checklist?”</p>

<p>When we wrote precise, pedantic instructions, ambiguity had nowhere to hide. Now the input is fuzzy and the output looks confident — which is exactly when wrong dates, false claims, and quiet mistakes slip through. Especially outside of coding, where there’s no test suite to catch them. The fix isn’t to hope the agent gets it right. It’s to tell the agent how to check.</p>]]></content><author><name>Andrew Savikas</name></author><summary type="html"><![CDATA[What a 20-year-old XML validation tool taught me about getting reliable results from AI]]></summary></entry><entry><title type="html">What Developers Do Now with AI, You’ll Do Next</title><link href="https://andrewsavikas.com/what-developers-do-now-with-ai/" rel="alternate" type="text/html" title="What Developers Do Now with AI, You’ll Do Next" /><published>2026-02-15T00:00:00+00:00</published><updated>2026-02-15T00:00:00+00:00</updated><id>https://andrewsavikas.com/what-developers-do-now-with-ai</id><content type="html" xml:base="https://andrewsavikas.com/what-developers-do-now-with-ai/"><![CDATA[<p>In April 2007, Drew Houston posted his <a href="https://www.ycombinator.com/apply/dropbox">Y Combinator application</a> for a new product called Dropbox. He described it as “taking the best elements of <em>subversion</em>, <em>trac</em> and <em>rsync</em> and making them ‘just work’ for the average individual or team.” He knew exactly what he was building on — <em>rsync</em>, the command-line file synchronization tool that’s been shipping with Unix systems since 1996. Houston’s pitch was that his little sister could use Dropbox to keep track of her high school term papers without burning CDs or carrying USB sticks — and definitely without <em>rsync</em>.</p>

<p>A <a href="https://news.ycombinator.com/item?id=9024">Hacker News commenter</a> offered this feedback: you could already build such a system yourself “quite trivially” by getting an FTP account, mounting it locally with <em>curlftpfs</em>, and then using <em>subversion</em> or <em>cvs</em>.</p>

<p>That wasn’t wrong! The technology existed!</p>

<p>But the gap between “exists for people who know what <em>curlftpfs</em> means” and “works for a high schooler’s term papers” turned out to be worth about $12 billion.</p>

<p>Early reactions to Slack followed the same script: “it’s just IRC,” as if decades of chat protocol history made a more accessible tool’s wider potential redundant.</p>

<p>Indeed, for more than 30 years now, one of the best ways to see the future of knowledge work has been to watch how software developers work.</p>

<p>Their job may be to write software, but they tend to <em>also</em> write <em>other software</em> that makes it more efficient to write software. And then they write more software still, to help with all the <em>other</em> work that goes into writing software, like collaborating with colleagues or managing schedules and plans.</p>

<p>And it’s those broader tools — the ones applicable to other kinds of knowledge work — that offer the best preview of the future for the rest of us. Even when the future looks, at first, like something only a developer could love.</p>

<figure class=""><img src="/img/irc-vs-slack.png" alt="The same conversation in IRC (2007) vs. Slack (2013) — same people, same skepticism, same idea." /><figcaption>
      Image credit: one of my AI “agents” via Claude Code

    </figcaption></figure>

<p>But for those willing to spend effort learning how developer tools work <em>before</em> they cross that gap to mainstream, <strong>there is competitive advantage</strong>.</p>

<p>I experienced this firsthand 20 years ago at O’Reilly Media, where I helped build an <a href="https://apeth.net/matt/iosbooktoolchain.html">industry-leading publishing toolchain</a> based on the observation that much of a developer’s workflow is around writing, editing, and collaborating on collections of complex, long-form text documents. It turns out that what works for a “codebase” works exceptionally well for a “book” too (especially if that book’s authors happen to also be software developers!).</p>

<p>That lesson has stuck with me throughout my career: pay attention to the tools that software developers are using, because with a bit of work they can often be <em>incredibly</em> powerful when applied to other kinds of knowledge work.</p>

<h2 id="same-tune-new-ai-instrument">Same tune, new (AI) instrument</h2>

<p>Over the past few months, I’ve noticed many of the smartest engineers I know all talking more and more about how transformative their work was becoming with <a href="https://www.anthropic.com/claude-code">Claude Code</a> (and similar LLM tools).</p>

<p>And I kept hearing one specific theme over and over — that <a href="https://github.com/jlevy/speculate/blob/main/about/lessons_in_spec_coding.md">the work had shifted</a> from thinking about and working <em>primarily</em> on code, to thinking about and working <em>primarily</em> on <em>other</em> kinds of text documents: specifications, prompts, test plans. “With the right structure and documentation, these things can write very useful software” has been the theme.</p>

<p>I knew I’d heard that tune before.</p>

<p>They are <em>writing</em>. They are <em>planning</em>, they are <em>specifying</em>, they are <em>reviewing plans and specs and applying their judgment to make decisions and corrections</em>. They are <em>revising their instructions and feedback to provide better feedback to their workers</em>. If that’s not a description of modern knowledge work, I don’t know what is.</p>

<p>For 30 years, the pattern has been the same — developers build it, everyone else gets it later, polished and packaged. But this time the gap might be shorter than you think, because the work developers are doing with these tools right now isn’t primarily coding. It’s <em>managing knowledge and managing teams</em>.</p>

<p>It’s <strong>your job</strong>. And you already know how to do it.</p>

<p>That’s why I’ve jumped into the deep end, trying to use these tools the way developers are using them right now, because that’s the best way to understand how <em>all</em> of us will be using them in the future.</p>

<p>I haven’t been this excited about the future of knowledge work in decades, and if all you’re seeing and hearing is anchored to the extremes (“AI is Evil!” vs. “AI is magic!”) then you’re missing the messy middle where all of the <strong><em>real</em></strong> work that affects us all is happening.</p>

<hr />

<p><strong><em>P.S.</em></strong> I’m planning to share more about what I’m learning, including what is (and what isn’t) different about these tools compared with prior ones. I’m also taking a cue from Anne-Laure Le Cunff’s excellent <a href="https://nesslabs.com/book"><em>Tiny Experiments</em></a> and writing out loud about all of it. It’s been a while since I’ve done this, but some things are worth figuring out in public. This is the first post. More to come.</p>]]></content><author><name>Andrew Savikas</name></author><summary type="html"><![CDATA[For 30 years, the pattern has been the same — developers build it, everyone else gets it later, polished and packaged. But this time the gap might be shorter than you think.]]></summary></entry><entry><title type="html">Remembering Ben Carnevale</title><link href="https://andrewsavikas.com/remembering-uncle-ben/" rel="alternate" type="text/html" title="Remembering Ben Carnevale" /><published>2025-06-15T00:00:00+00:00</published><updated>2025-06-15T00:00:00+00:00</updated><id>https://andrewsavikas.com/remembering-uncle-ben</id><content type="html" xml:base="https://andrewsavikas.com/remembering-uncle-ben/"><![CDATA[<p><em>I was honored to be asked to give the eulogy for my Uncle Ben at his funeral mass on June 14, 2025. Ben was a very successful entrepreneur, but that was the last thing he wanted people to know about him. What mattered to him was all the work he did to help those around him, and the lines out the door at his wake were a testament to how successful he was at doing just that. Below is the full text of the eulogy, with a few minor details changed for privacy. Happy Father’s Day, Uncle Ben, we already miss you dearly.</em></p>

<hr />

<p>719 Oakmont Drive in Downers Grove was an especially lucky place to grow up.</p>

<p>You see, in my case that particular house came with an entire extra family, right next door over at number 725. Three cousins were really extra siblings to play with, some cool bonus aunts and uncles came with the package too, and even extra grandparents who would come to your birthday parties, bearing delicious homemade mostaccioli and meatballs.</p>

<p>And of course there was also an extra set of parents there to offer love and support, and to rely on and learn from.</p>

<p>So even when my own family changed, and my father moved far away, I felt very lucky to still have Uncle Ben, Auntie Mar, and the rest of the Carnevales, right next door.</p>

<p>In my memories from that time is the way that when Uncle Ben spoke about almost anything he was up to, he made it all seem so … <em>interesting</em>.</p>

<p>He’d show me all the different speakers and electronics stacked neatly on the shelves in his basement workshop, enthusiastically explaining the difference between a woofer and a tweeter.</p>

<p>He’d regale us with stories of adventures in what to me at the time sounded like truly exotic places, like Ireland and Japan, where he often traveled for business.</p>

<p>I remember how cool it seemed that he’d bought his own tuxedo, because he had so many occasions to wear a tuxedo that it didn’t make sense to keep renting them.</p>

<p>Uncle Ben clearly enjoyed pretty much whatever he was doing, and for a long time I thought he was <em>so lucky</em> to have found fulfilling work and a range of skills and interests that made him so happy.</p>

<p>But I’ve recently realized something about Uncle Ben.</p>

<p>I see now that his “luck” was in fact a choice.</p>

<p>A choice to live a life full of joy, and love, and optimism, and most of all service. A life that brightened and improved the lives of all of those around him.</p>

<p>But don’t take my word for it. If you’re like me, you’ve spent a lot of time over the past few days looking at pictures of Ben. And I hope you look at even more, because when you do you see that he’s not, as so many of us do, just smiling for the camera. He was <em>already</em> smiling, because he’s joyful, and happy, and content with where he is and who he’s with – someone just happened to catch it on film. You can see it over and over – it’s right there in that unmistakable twinkle in his eye that could light up whatever room he happened to step into.</p>

<p>There’s something else you can see in those pictures, besides the joy. And that’s pride. And I don’t mean pride in any boastful sort of way. Quite the opposite. It’s how proud he was of his family, and as his family expanded with the weddings and then the grandkids, and then the weddings <em>of</em> the grandkids, that pride and that smile and that twinkle grew bigger and brighter too.</p>

<p>Having children of my own has taught me the truth in the adage that “once a parent, always a parent”. In Uncle Ben’s case, he and Auntie Mar never stopped being that extra set of parents for me. He’d regularly send me articles and videos, and would ask me questions about my job that made it clear he’d done his homework. On the day of the Boston marathon bombing I arrived home to find an email from Uncle Ben, checking in to make sure we were all OK.</p>

<p>I’ve spent a lot of time thinking about Uncle Ben over the past few weeks, and I simply don’t remember ever feeling anything but genuine love and kindness and support and strength. (And while I don’t mean strength in the physical sense, it’s also true that I was pretty sure he’d break a finger every time he shook my hand!)</p>

<p>But what’s so remarkable about Ben is how much he worked to direct all of that love, kindness, support, strength, talent, and wisdom toward so many of those around him.</p>

<p>Over the years, and without any fanfare, Ben helped so many families who weren’t quite as lucky as mine was in that house on Oakmont Drive, and who desperately needed an Uncle Ben of their own.</p>

<p>He helped a woman who had lost her husband and, along with him, the means to support her family, spending two years mentoring her, teaching her how to budget, and helping the whole family find sustainable housing. And he kept doing that over and over as part of the Bridge Families program.</p>

<p>And like a true entrepreneur, he spotted an unmet need when he learned that none of the area food pantries were especially useful for someone who worked the day shift, so he organized a group of men from church to start an evening food pantry. And for those who couldn’t make it in, on Sundays he’d drive the food to them.</p>

<p>I joke about Uncle Ben being an “extra” dad, but the truth is that he was always just as much a role model as my own father. Watching him live a joyful family life alongside his best friend Marilyn taught me so much about how to build a marriage and a family of my own. Especially that it took work, and time, and love, and attention, but that if you did it right, you’d get back so much more than you put into it.</p>

<p>Uncle Ben was there for me my entire life, through thick and thin, always ready with a beaming smile, a warm hug, and the reassuring strength of someone who knows for certain what truly matters most in life.</p>

<p>He set an <em>incredible</em> example for what it means to be an exceptional father, husband, brother, uncle, grandfather, and all-around good man, and it’s a privilege to be standing here today to honor and remember him.</p>

<p>Everyone deserves someone like Ben in their life. Someone to love you and support you and always be in your corner, ready with a kind word or a warm embrace, or just a smile to cheer you up and remind you without saying a word that things really are going to be OK.</p>]]></content><author><name>Andrew Savikas</name></author><summary type="html"><![CDATA[I was honored to give the eulogy for my uncle, Ben Carnevale, at his funeral mass on June 14, 2025]]></summary></entry><entry><title type="html">SCARF: The Mental Model that Can Help you Communicate More Mindfully</title><link href="https://andrewsavikas.com/scarf-model-communicate-mindfully/" rel="alternate" type="text/html" title="SCARF: The Mental Model that Can Help you Communicate More Mindfully" /><published>2020-04-13T00:00:00+00:00</published><updated>2025-06-19T00:00:00+00:00</updated><id>https://andrewsavikas.com/scarf-model-communicate-mindfully</id><content type="html" xml:base="https://andrewsavikas.com/scarf-model-communicate-mindfully/"><![CDATA[<p>For nearly a decade now, I’ve found myself coming back again and again to (and regularly recommending to others) a fantastic framework I picked up from David Rock’s <em><a href="https://bookshop.org/p/books/your-brain-at-work-revised-and-updated-strategies-for-overcoming-distraction-regaining-focus-and-working-smarter-all-day-long-david-rock/19344147?aid=80144&amp;ean=9780063003156&amp;listref=my-favorite-books-about-business-finance-and-investing&amp;next=t" target="_blank">Your Brain At Work</a></em>. The book is full of powerful tools for understanding the neurological reasons we act and feel the way we do on the job, but one stands out for how it helps you better understand your own reactions to circumstances, as well as how your words and behaviors affect others. He calls it the “SCARF” model, and the acronym is short for five dimensions that strongly influence our state of mind:</p>

<ol>
  <li><strong>Status</strong>. Do we feel valued and important?</li>
  <li><strong>Certainty</strong>. How sure are we about what’s going to happen next?</li>
  <li><strong>Autonomy</strong>. How much control do we have over our circumstances?</li>
  <li><strong>Relatedness</strong>. Do we feel good about and close to the people we’re engaging with?</li>
  <li><strong>Fairness</strong>. Are we and those around us behaving and being treated fairly?</li>
</ol>

<p>Think of each as a scale from -10 to +10, with 0 being a neutral state. If something increases our feeling along one of those dimensions, the reading goes up, it feels good, and we want more of it, generating a “toward” response. On the other hand, if something reduces our feeling along one of those dimensions, we can quickly dip into negative territory, triggering an “away” response that increases anxiety—and often with it the “fight or flight” response ingrained so deeply in our lizard brain.</p>

<h2 id="the-dimensions-of-the-scarf-model-in-action">The dimensions of the SCARF model in action</h2>
<p>How does this work in practice? Here’s one example: remember when we all used to fly places all the time? You’ve likely experienced the dreaded tarmac delay. Alongside all the quotidian inconveniences of travel, why are those <strong>so</strong> uniquely infuriating? Let’s look at the experience along some of the dimensions of the SCARF model:</p>

<ul>
  <li><em>Status</em>. Even if you have the airline’s version of “status”, it’s not going to get you home any faster than anyone else who feels trapped like cattle on the plane.</li>
  <li><em>Certainty</em>. When will you take off? <em>Will</em> you take off? Will it be 10 minutes or 2 hours? What’s going on?</li>
  <li><em>Autonomy</em>. Do you have <em>any</em> control over the situation? Unless you’re a member of the crew, probably none. The seatbelt sign is on, and you’re stuck.</li>
  <li><em>Fairness</em>. What did you ever do to deserve this? Why you, why this flight, why now of all times?</li>
</ul>

<p>With those four meters all firmly in the red, all we’re left to work with is the sense of Relatedness we get from commiserating with our fellow <del>prisoners</del> passengers about the situation!</p>

<h2 id="a-better-way-by-applying-the-scarf-model">A better way by applying the SCARF model</h2>
<p>Imagine yourself again on that same flight. You’ve just pushed back from the gate, and while taxiing toward the runway the plane stops and the pilot comes on the intercom, and this time she says:</p>

<p><em>“Ladies and gentlemen, this is your captain speaking. First off, on behalf of the entire crew, I want to thank you for being our passenger tonight. Our job is getting you where you need to go safely and quickly, and we take that job very seriously…” <strong>(Status)</strong></em></p>

<p><em>“Unfortunately because of some bad weather back in Boston, we’re not going to be able to take off for a while. Right now I hope it will be a brief delay, but there’s no way to know for sure. What I</em> can <em>promise you is that I’ll come back on this intercom at least every 15 minutes to give you an update, even if that’s just to say there’s no update…” <strong>(Certainty)</strong></em></p>

<p><em>“It’s frustrating for us too when this happens — we’re just as eager to get back home to our families and friends as all of you…” <strong>(Fairness)</strong></em></p>

<p><em>“As you can see the seatbelt sign is on, so it’s important that you stay seated and buckled in while we wait. But we know you weren’t expecting this delay either, so if you really need to use the lavatory, please ring your call button and we’ll do our best to help you out. And while we can’t start our beverage service until we’re up in the air, we’d be happy to bring you a cup of water if you’re thirsty, just ring that call button.” <strong>(Autonomy)</strong></em></p>

<p>Nothing has changed about the circumstances — you’re still sitting on that tarmac indefinitely and there’s nothing really that you can personally do to change that. But you can imagine you and your fellow passengers feeling a lot less stress and anxiety this time around.</p>

<h2 id="applying-the-scarf-model-at-work">Applying the SCARF model at work</h2>
<p>The SCARF framework is useful in two directions:</p>

<ol>
  <li>Better understanding your own responses to words, actions, and circumstances</li>
  <li>Better understanding (and influencing) how others respond to <em>your</em> words and actions</li>
</ol>

<h3 id="understanding-yourself">Understanding yourself</h3>
<p>If you notice yourself feeling anxious, frustrated, angry, or scared—or just like you want to leave the room—take a deep breath and see if you can identify which of the SCARF dimensions is at play. Do you feel treated unfairly? Like things are out of your control? Disrespected? The mere act of labeling our emotions can help engage the more rational and logical parts of our brain.</p>

<p>Chances are, you’re more sensitive to some of those dimensions than others. For example, maybe you get <strong>really</strong> upset when you feel like you’re not being treated fairly. Knowing that’s how you respond can help you identify when it’s happening <em>while</em> it’s happening (and you can do something about it).</p>

<p>(Yes, I know, I know, we’re <em>all</em> pretty anxious and scared these days, and that’s a good opportunity to use the SCARF model to think through which aspect of this pandemic thing is weighing on you the most—is it the uncertainty? the lack of control? The act of observing and labeling our emotions is often enough to moderate them. BTW it’s notable how many of us are actively compensating by trying to maintain or even increase our sense of Relatedness by baking together, playing games, and extra Zoom calls with friends and family.)</p>

<h3 id="understanding-others">Understanding others</h3>
<p>The other great way to use the SCARF model is when planning an important conversation with others. Beforehand, take a piece of paper and write the letters SCARF in a column down the page. Now, for each letter, think of one or two things you could say that would increase the other person’s feelings along that dimension.</p>

<p>You don’t need to say all of them, but this way you’ll have some talking points at hand that can help raise the odds of a positive outcome. As a bonus, spending a few minutes writing positive things down about the person you’re about to talk to will put you in a very positive frame of mind about that person!</p>]]></content><author><name>Andrew Savikas</name></author><summary type="html"><![CDATA[Add David Rock's brilliant “SCARF” framework to your toolkit to help drop the stress level on both sides of your next conversation]]></summary></entry><entry><title type="html">On Building a Daily Habit of Continuous Learning</title><link href="https://andrewsavikas.com/daily-habit-continuous-learning/" rel="alternate" type="text/html" title="On Building a Daily Habit of Continuous Learning" /><published>2019-02-01T00:00:00+00:00</published><updated>2019-02-01T00:00:00+00:00</updated><id>https://andrewsavikas.com/daily-habit-continuous-learning/</id><content type="html" xml:base="https://andrewsavikas.com/daily-habit-continuous-learning/"><![CDATA[<blockquote>
  <p>Education is what people do to you and learning is what you do to yourself. — Joi Ito</p>
</blockquote>

<p>While I adore that perfectly tweetable quote from <a href="http://www.ted.com/talks/joi_ito_want_to_innovate_become_a_now_ist/transcript">Joi Ito’s TED Talk</a>, I’ve come to appreciate something he says a bit later in the same talk as an incredibly concise prescription for success in your career (and life in general):</p>

<blockquote>
  <p>[I]t’s about stopping this notion that you need to plan everything, you need to stock everything, and you need to be so prepared, and [instead] focus on being connected, <strong>always learning, fully aware, and super present</strong>. (Emphasis added)</p>
</blockquote>

<p>It’s no coincidence that that phrase is also an accurate description of how young children naturally engage with the world around them. In fact, John Seely Brown, in a talk called <a href="http://www.johnseelybrown.com/el.pdf">Cultivating the Entrepreneurial Learner in the 21st Century</a> makes the case that we finally have the tools and technology to help spread and scale the kind of play-driven education first popularized by Maria Montessori more than 75 years ago. He also provides a useful metaphor to help understand how those same technology changes are shrinking the shelf life of key professional skills:</p>

<blockquote>
  <p>We are moving away from a 20th century notion of learning as picking up a set of fixed assets to a 21st century notion of learning as constantly reinventing and augmenting your skills. In the past, your skillset was authoritative, transferred to you in delivery models — often called schooling — and had a wonderful scalable efficiency. How do we move to a model that requires participating in ever-changing flows of activities and knowledge?</p>
</blockquote>

<p>But much of the language we use to describe education and training remains firmly rooted in the asset-based approach Seely Brown describes: we talk about courses, and certificates, and degrees, all with the idea that learning is something to “complete”, with a defined endpoint. (<a href="http://infed.org/mobi/paulo-freire-dialogue-praxis-and-education/">Paulo Freire calls it the banking model of education</a>.)</p>

<p>By way of comparison, it would be absurd to think it was possible to “complete” things like: being fit, eating right, or being a good marriage partner or parent. The only real measure of success on those dimensions is a sustained commitment to constant improvement — wanting to be at least slightly better today than you were yesterday.</p>

<p>It’s time to treat learning and skill development the same way — as a <a href="http://www.fastcompany.com/3020758/leadership-now/why-deliberate-practice-is-the-only-way-to-keep-getting-better">habit of deliberate practice</a> to be cultivated and sustained for a lifetime — to keep learning in order to get a little bit better than you were yesterday.</p>

<p>Most of what’s written about deliberate practice is about things like athletic performance, musical instruments, or even computer programming: “<em>Do the thing for <a href="http://gladwell.com/outliers/the-10000-hour-rule/">around 10,000 hours</a> and you’ll be an expert</em>.” But developing a “continuous learning” mindset through a habit of daily learning isn’t about repeating a behavior to become proficient, it’s about building a positive habit that can crowd out negative ones, as well as have positive side effects on other parts of your work and life.</p>

<p>It’s just like how exercising for 30 minutes a day isn’t at all about becoming an expert on the elliptical machine — the payoff comes from all the benefits you get to enjoy during the <em>other</em> 1,410 minutes of your day. You sleep better, you feel better, you look better, and you’re just better equipped to handle the challenges life throws at you every day.</p>

<p>(And to carry the analogy further, formal courses and certifications will always have a place in learning, but are more like the occasional 5K run or half-marathon — not something you’d want to do every day, but when you do you’re going to perform much better thanks to those daily workouts.)</p>

<p>One of the very best sites on the Web is Maria Popova’s <a href="http://www.brainpickings.org/">Brain Pickings</a>. In a <a href="http://www.brainpickings.org/index.php/2012/09/25/william-james-on-habit/">fantastic post about habits</a>, she quotes William James, writing more than a century ago:</p>

<blockquote>
  <p>When we look at living creatures from an outward point of view, one of the first things that strike us is that they are bundles of habits.</p>
</blockquote>

<p>If you’re like me, a fairly common “bundle of habits” in the modern (and mobile) age is to check email, peruse Twitter, look at Facebook, scan Slack and swing by Google Analytics to see how yesterday’s traffic numbers looked. I don’t think it’s controversial to call that a recipe for mindless multitasking and <a href="http://lindastone.net/qa/continuous-partial-attention/">continuous partial attention</a>. But being tethered to your phone also means you’re only a few taps away from an incredible wealth of knowledge and training to improve yourself.</p>

<hr />

<p>As a way of “eating our own dogfood” when I was CEO at <a href="https://safaribooksonline.com">Safari Books Online</a> (now the O’Reilly learning platform), we instituted a mandatory learning and development challenge, specifically around using our mobile apps for daily learning. I’m happy to say that our results were positive and truly profound. Yet we accrued no certifications, completed zero courses, and unlocked no badges; rather we saw a consistent and widespread <a href="https://www.linkedin.com/pulse/lifelong-learning-beware-plateaus-michael-conner">shift in attitude and mindset</a> toward continuous improvement and a genuine openness to new ideas and ways of thinking. As the CEO at the time, I couldn’t have asked for a better ROI than that.</p>

<p>But don’t take my word for it. Give it a try yourself: commit to spending 10 minutes a day for the next 10 days watching or reading anything from a book or video about something you’d like to improve at in your career. If you don’t know where to start, here are 10 high-quality book summaries from my friends at one of my former employers, <a href="https://getabstract.com/">getAbstract</a>, that can be read in 15 minutes or less from your phone or laptop:</p>

<ul>
  <li><a href="https://www.getabstract.com/en/summary/your-brain-at-work/23518">Your Brain at Work</a></li>
  <li><a href="https://www.getabstract.com/en/summary/the-power-of-habit/17285">The Power of Habit</a></li>
  <li><a href="https://www.getabstract.com/en/summary/exponential-organizations/23451">Exponential Organizations</a> (one of Mark Zuckerberg’s book club picks)</li>
  <li>Nir Eyal’s <a href="https://www.getabstract.com/en/summary/hooked/22993">Hooked: How to Build Habit-Forming Products</a></li>
  <li><a href="https://www.getabstract.com/en/summary/leadership-and-the-new-science/84">Leadership and the New Science</a></li>
  <li><a href="https://www.getabstract.com/en/summary/the-checklist-manifesto/13575">The Checklist Manifesto</a></li>
  <li><a href="https://www.getabstract.com/en/summary/management/13805">Peter Drucker’s Management</a></li>
  <li><a href="https://www.getabstract.com/en/summary/the-fifth-discipline/1257">The Fifth Discipline</a></li>
  <li>David Allen’s classic <a href="https://www.getabstract.com/en/summary/getting-things-done/1576">Getting Things Done</a></li>
  <li><a href="https://www.getabstract.com/en/summary/scaling-up/24982">Scaling Up</a></li>
</ul>

<hr />

<p><em>This post was <a href="https://medium.com/@andrewsavikas/on-building-a-daily-habit-of-continuous-learning-82ef77a8aff9">originally published on Medium</a>.</em></p>]]></content><author><name>Andrew Savikas</name></author><summary type="html"><![CDATA[It’s time to treat learning and skill development the same way — as a habit of deliberate practice to be cultivated and sustained for a lifetime — to keep learning in order to get a little bit better than you were yesterday.]]></summary></entry><entry><title type="html">When Tech Goes From Disruption to Deployment</title><link href="https://andrewsavikas.com/cresting-wave-technology/" rel="alternate" type="text/html" title="When Tech Goes From Disruption to Deployment" /><published>2017-03-07T00:00:00+00:00</published><updated>2017-03-07T00:00:00+00:00</updated><id>https://andrewsavikas.com/cresting-wave-technology/</id><content type="html" xml:base="https://andrewsavikas.com/cresting-wave-technology/"><![CDATA[<blockquote>
  <p>Technology is anything that was invented after you were born, everything else is just stuff. — Alan Kay</p>
</blockquote>

<p>I’ve spent much of my career at the intersection of publishing and technology, and it took many years for me to realize that describing it that way implies that “publishing” and “technology” are two different things. That framing ignores that almost everything about “publishing” as we commonly know it today was at one time just as much a “technology” as an iPhone.</p>

<p>In his essay, <a href="http://web.stanford.edu/dept/HPS/HistoryWired/Landow/LandowTwentyMinutes.html">Twenty Minutes into the Future</a>, George P. Landow, writing around the dawn of the World Wide Web, describes the situation eloquently:</p>

<blockquote>
  <p>First, one encounters a tendency among many humanists contemplating the possibility that information technology influences culture to assume that before now, before computing, our intellectual culture existed in some pastoral nontechnological realm. Technology, in the lexicon of many humanists, generally means “only that technology of which I am frightened.” In fact, I have frequently heard humanists use the word technology to mean “some intrusive, alien force like computing,” as if pencils, papers, typewriters, and printing presses were in some way natural. Digital technology may be new, but technology, particularly information technology, has permeated all known culture since the beginnings of human history. If we hope to discern the ways in which we might move beyond the book, we must not treat all previous information technologies of language, rhetoric, writing, and printing as nontechnological.</p>
</blockquote>

<p>(My favorite examples came from researching the history of publishing in medieval Europe for a conference talk in Frankfurt, and discovering that <strong>something as fundamental as word spacing within books developed over the course of 300 years</strong>, starting in around 1100. And it took hundreds of years more for punctuation to spread — the hyphen first appeared in 11th-century Europe, and took another 200 years to reach England. The colon didn’t appear until the late 14th century!)</p>

<p>Obviously many technological innovations spread much faster now, but only because they can do so using the now-boring communications and transportation infrastructure of <em>previous</em> waves of innovation.</p>

<p>And that concept of sequential waves of technological innovation — first emerging and disrupting the status quo, and then gradually becoming the status quo — is familiar to students of <a href="https://en.wikipedia.org/wiki/Creative_destruction">Schumpeter’s “creative destruction.”</a> But while it’s useful to know that these waves happen, even more useful from the perspective of planning and investing is spotting when one may be about to crest and the next begin.</p>

<p>In her (extraordinary) book, <a href="https://ceobookshelf.co/technological-revolutions-financial-capital-review/">Technological Revolutions and Financial Capital</a>, Carlota Perez outlines a vocabulary and framework for describing and understanding successive waves of technological innovation:</p>

<blockquote>
  <p>This book holds that the sequence: technological revolution — financial bubble — collapse — golden age — political unrest, recurs about every half century and is based on causal mechanisms that are in the nature of capitalism.</p>
</blockquote>

<p>The first four of those waves began with the Industrial Revolution in 1771, followed by: the Age of Steam &amp; Railways; the Age of Electricity and Heavy Engineering; and the Age of Oil, the Automobile and Mass Production. She labels the fifth wave — well underway at the time of her writing in 2003 — the Age of Information and Telecommunications, noting its birth as the 1971 introduction of the Intel microprocessor.</p>

<p>If these waves tend to last about 50 years, then the theory would suggest it’s just about time to look for clear signs that we’re far into what Perez labels the “Deployment” stage:</p>

<blockquote>
  <p>When an innovation is within the natural trajectory of the prevailing paradigm, then everybody — from engineers through investors to consumers — understands what the product is good for and can probably suggest what to improve. Even such minor and doubtfully useful products as the electric can-opener or the electric carving knife are thought worth designing, producing, buying and using in a world that is already accustomed to dozens of electrical appliances in the kitchen. The same happens with the successive applications of the general principles of the prevailing paradigm. In the case of continuous mass production, for example, after manufacturing had fully developed all its principles and refined its organizational practices, the task of applying the model to any other activity became straightforward. Mass tourism, of the ‘assembly-line’ type, moving people from airplane to bus, from bus to hotel and from hotel to bus, was obvious to conceive, easy to put into practice and readily accepted by consumers at the time.</p>
</blockquote>

<p>Or to borrow from Kay’s phrasing, Deployment is when yesterday’s “technology” starts to become tomorrow’s “just stuff”.</p>

<p>So what’s the evidence that we’re now squarely in that “Deployment” stage, and therefore nearing the dawn of the <em>next</em> technological revolution (and also primed for some, ahem, political unrest)?</p>

<h2 id="1-technology-is-fully-diffusing-into-every-industry-and-corporate-department">1. “Technology” is fully diffusing into every industry and corporate department</h2>

<p>I’ve spent a fair bit of time lately in what’s known as the “Ed Tech” market (short for “Educational Technology”). And that’s meant keeping an eye on the competitive landscape, which at times seems to be changing every day with new products, services, and startups chasing opportunities. A useful tool for navigating those changes is one of the many landscape maps provided by bloggers, analysts, and investors, <a href="https://www.insidehighered.com/quicktakes/2017/02/24/ed-tech-landscape-2017">like this one</a>:</p>

<figure class=""><img src="/img/higher_ed_landscape.jpg" alt="market map of vendors in higher-ed landscape" /></figure>

<p>And while these kinds of maps have been around for a long time, during the past few years something has changed, and they have proliferated explosively. (There are many others even just for Ed Tech, for example <a href="http://blog.degreed.com/infographic-the-learning-content-landscape/">here’s one from the folks at Degreed</a>.) You can now find one of these “market maps” for just about any traditional corporate function. Here’s <a href="https://www.cbinsights.com/blog/sales-tech-startup-market-map/">Sales</a>:</p>

<figure class=""><img src="/img/sales_landscape.png" alt="market map of vendors in sales tech" /></figure>

<p>And <a href="http://chiefmartec.com/2016/03/marketing-technology-landscape-supergraphic-2016/">Marketing</a>:</p>

<figure class=""><img src="/img/marketing_landscape.jpg" alt="market map of vendors in marketing tech" /></figure>

<p>And <a href="http://www.capterra.com/human-resource-software/hr-landscape">HR</a>:</p>

<figure class=""><img src="/img/hr_landscape.png" alt="market map of vendors in hr tech" /></figure>

<p>And even <a href="http://www.accountexusa.com/ecosystem/">Accounting</a>:</p>

<figure class=""><img src="/img/accounting_landscape.png" alt="market map of vendors in accounting tech" /></figure>

<p>You can also see it across major segments of our economy, like <a href="http://fintechranking.com/2016/08/04/infographics-global-fintech-landscape/">Finance</a> (aka “Fintech”), <a href="https://www.cbinsights.com/blog/travel-tech-market-map/">Travel</a>, and <a href="https://www.cbinsights.com/blog/commercial-real-estate-tech-market-map-company-list/">Commercial Real Estate</a>. And to help you navigate all of these landscape maps, the folks at CB Insights have compiled <a href="https://www.cbinsights.com/blog/industry-market-map-landscape/">this helpful list</a> of 45(!) different Market Maps.</p>

<p>Technology is no longer a separate department or function but is now thoroughly permeating our entire economy, and as Perez would put it, the “new” paradigm is becoming just “common sense”.</p>

<h2 id="2-technology-is-overtaking-previous-economic-growth-engines">2. “Technology” is overtaking previous economic growth engines</h2>

<p>Another hallmark of each successive “surge” of innovation and then its diffusion into the wider economy is when the “new” economy companies begin overtaking the “old” ones as the engines of overall economic growth. Here’s a chart from Perez’s book showing how Oil and Auto firms (the “technology” companies of their day) displaced steel over a 30-year period:</p>

<figure class=""><img src="/img/growth_engines.png" alt="Figure 4-4 from Technological revolutions and financial capital showing the top 10 firms in US by asset size in 1917, 1930, and 1948, showing how the firms from the 4th wave of technological revolution -- oil and automobile -- overtook the 3rd wave steel industry as the growth engine of the US economy" /></figure>

<p>Now consider a similar look atop the S&amp;P 500 today, where the takeover by companies of the 5th surge is well underway:</p>

<figure class=""><img src="/img/s_and_p.png" alt="List of S&amp;P 500 as of January 2017 showing how tech firms like Apple and Microsoft have nearly overtaken oil and auto firms like Exxon" /></figure>

<p>VC maven Mary Meeker’s <a href="http://www.kpcb.com/internet-trends">2016 Internet Trends report</a> was her usual fascinating snapshot into what’s happening on and around the Web, and it included a useful comparison of “old” vs “new” economy companies and their valuations relative to their revenue:</p>

<figure class=""><img src="/img/meeker_old_new.png" alt="Slide from Mary Meeker showing relative market caps of old vs new media companies, like Netflix vs Viacom and Amazon vs Walmart. Viacom has higher revenue but is shrinking, whereas Netflix has lower revenue but is growing very fast" /></figure>

<p>(One obvious interpretation of the market-cap-to-revenue disparity is that investors believe the best days are ahead for Amazon and Netflix, and likely the opposite for Wal-Mart and Viacom.)</p>

<h2 id="3-todays-technology-becomes-tomorrows-utilities-and-infrastructure">3. Today’s “technology” becomes tomorrow’s utilities and infrastructure</h2>

<p>One of my favorite strategic analysis tools is Simon Wardley’s eponymous <a href="http://blog.gardeviance.org/">mapping framework</a>. A premise of the tool is that any given technology will eventually follow this path:</p>

<ol>
  <li>Genesis</li>
  <li>Custom Built</li>
  <li>Product (and Rental)</li>
  <li>Commodity</li>
  <li>Utility</li>
</ol>

<p>Not every technology proceeds through those stages at the same pace, and some seem to get stuck along the way, but overall it’s quite a useful model.</p>

<p>So it was with amusement that I read the following passage about Amazon from <a href="http://www.economist.com/news/business/21717421-three-financial-sanity-tests-whether-there-bubble-are-technology-firms-madly">an Economist column on whether tech firms are currently overvalued</a>:</p>

<blockquote>
  <p>(Amazon) is one of the most optimistically valued firms, with 92% of its current worth justified by profits after 2020. Outside investors have a lot at stake because it is huge, with a market value of $410bn. About a third of this value is justified by its profitable cloud-computing arm, AWS. But the rest of the firm, which straddles e-commerce, television and films, as well as logistics, barely makes money despite generating large sales. Nor is it growing particularly fast for its industry. <strong>To justify its valuation you need to believe that it becomes a sort of giant utility for e-commerce</strong> which by 2025 cranks out profits of around $55bn a year, or probably more than any other firm in America. (Emphasis added)</p>
</blockquote>

<p>The reason I was amused is because “giant utility” is exactly how many investors are valuing Amazon, and that’s in part because <a href="http://money.cnn.com/2017/03/02/technology/amazon-s3-outage-human-error/">it’s already behaving that way</a>:</p>

<blockquote>
  <p>According to Synergy Research Group, AWS owns 40% of the cloud services market, meaning it’s responsible for the operability of large swaths of popular websites. So if AWS goes down, it takes a huge number of businesses, apps, and publishers with it.</p>
</blockquote>

<h2 id="the-map-is-not-the-territory">The map is not the territory</h2>

<p>While Schumpeter, Perez, Wardley, and others offer incredibly useful tools for understanding the interplay between technological innovation and economic activity, ultimately they are just tools. We may be “due” for the end of one cycle and the beginning of the next, but reality is of course often far less predictable (and ultimately far more interesting!) than a model.</p>

<p>To say that we are nearing the end of one of these 50ish-year cycles is not to imply that “technology” as we commonly mean it today will disappear, any more than we have said goodbye to mass production or steel or electricity. Rather, the nearly ubiquitous infrastructure of the Internet, the World Wide Web, pervasive mobile broadband, and the internet of things — including mobile computers, phones, and sensors on nearly every person and in many buildings and vehicles — will likely be part of the core “infrastructure” for whatever revolution comes next.</p>

<p>And as the father of two children who could navigate a smartphone before they could walk, I’m exceptionally excited to see what kinds of things <em>they</em> will call “technology”.</p>]]></content><author><name>Andrew Savikas</name></author><summary type="html"><![CDATA[3 Reasons why we’re well into the final 'deployment' stage of the technological revolution begun nearly 50 years ago with the first microprocessor, and technology as we know it is about to get boring (just in time for the next big wave...)]]></summary></entry><entry><title type="html">Systems Thinking and the Evolution of Industries</title><link href="https://andrewsavikas.com/systems-thinking-evolution-industries/" rel="alternate" type="text/html" title="Systems Thinking and the Evolution of Industries" /><published>2016-10-07T00:00:00+00:00</published><updated>2016-10-07T00:00:00+00:00</updated><id>https://andrewsavikas.com/systems-thinking-evolution-industries/</id><content type="html" xml:base="https://andrewsavikas.com/systems-thinking-evolution-industries/"><![CDATA[<figure class=""><img src="/img/railroad.jpeg" alt="photo of a power line next to a railroad trailing off into the horizon" /></figure>

<p>Any time I take a long train ride it feels just a little bit like being in a time machine. Compared with the indignities of commercial air travel, the train seems to belong in a different era. Like it wouldn’t seem at all surprising if someone boarded the train who was actually a time traveler from 100 years ago.</p>

<p>Now, they might be a little confused that everyone was quietly staring at — or loudly talking at — a small glass box. But certainly they would recognize they were on a train, with seats for passengers, a dining car, and a conductor coming through to collect tickets.</p>

<p>And yet it would take little convincing to persuade our fictional time traveler that he was indeed visiting the future.</p>

<p>(As a sidenote, it is ironic that wifi is so terrible on trains. For a time, trains were the fastest and most reliable means of communication available. That’s because when trains first began connecting — and in many cases reshaping — the country in the 19th century, the word “communication” was inextricably linked with “transportation”. If you wanted to send information from one place to another, you literally needed to <em>send</em> the message. It wasn’t until the telegraph that information and transportation were finally separated.)</p>

<p>From our modern perspective it’s difficult to appreciate just how profound the separation of communication from physical transportation really was, but in “<a href="http://faculty.georgetown.edu/irvinem/theory/Carey-TechnologyandIdeology.pdf">Technology and Ideology: The Case of the Telegraph</a>”, Media Scholar James Carey offers a helpful anecdote:</p>

<blockquote>
  <p>It was of particular use on the long stretches of single-track railroad in the American West, where accidents were a serious problem. Before the use of the telegraph to control switching, the Boston and Worcester Railroad, for one example, <strong>kept horses every five miles along the line, and they raced up and down the track so that their riders could warn engineers of impending collisions</strong>. [Emphasis added]</p>
</blockquote>

<p>But perhaps what’s most interesting to me about a conversation with our time-traveling friend isn’t the idea of explaining what’s new, but the realization that <em>so much would be quite familiar</em>. For example, it wouldn’t be hard at all for him to understand that one of my monthly bills is a payment to the American Telephone and Telegraph company — more commonly known as just AT&amp;T these days — for the privilege of sending short text-based messages to family and business associates. Or even that those messages often include opaque abbreviations, though in today’s case that’s to save time, rather than money.</p>

<p>So what does that have to do with the publishing industry?</p>

<p>Well, as another sidenote, Carey mentions that Ernest Hemingway once worked as a telegraph operator, an experience that gave him a lot of practice paring down his prose, honing his signature writing style.</p>

<p>But that’s not why I bring it up.</p>

<p>This post began its life many months ago as an unfinished draft of an email reply to a thread on a publishing industry email list. I can’t recall the exact message, but the subject was something along the lines of “Digital is ‘Done’”.</p>

<p>And that’s a sentiment I’ve heard several times from many in publishing. That sure, ebooks and Kindle are here to stay, but print is unquestionably still the main event. And that for all the hype that surrounded “digital disruption” over the past decade or so, it’s time now to more or less return to normalcy.</p>

<p>There’s less talk these days about “chief digital officers”, and in fact at some firms that role has now disappeared, with the explanation that “everyone is digital” now.</p>

<p>Certainly much has changed over the past 10 years. When I was at <a href="https://oreilly.com">O’Reilly Media</a> and working to help plan the very first Tools of Change conference almost 10 years ago, there was no Kindle, no iPhone. Borders was still alive and kicking. Twitter was in its infancy and Facebook had just opened up registration to non-college students. There was no Instagram, no Snapchat, no Uber.</p>

<p>But it’s also worth noting <strong>how much looks the same</strong>. Back then there were 6 major trade publishers atop the industry. Today there are … the same major trade publishers atop the industry, though two of them merged.</p>

<p>For all that has changed — no publisher would fail to include social media outreach or SEO in their marketing plans — it’s striking just how much would be entirely recognizable as “publishing” to a time traveler from, say, 20 years ago.</p>

<p>How is it that whether it’s publishing or trains or telegrams, so much can empirically be different, while yet more still remains decidedly unchanged? And more importantly, <em>why</em>?</p>

<p>It’s worth some time thinking about a handful of the other large industries that have nominally been under threat of “disruption” over the past 10–20 years.</p>

<p>Thinking more broadly than trains, let’s look for a moment at travel. Sure the world today looks very different if you’re a travel agent, but again I think a time traveler from 20 or even 50 years ago would recognize most of it, with some minor differences.</p>

<p>It would seem unusual that most of us book our own tickets and hotels, or that we take our shoes off at the airport to be looked at naked by bureaucrats, but we fly American or United or Delta Airlines, we show up at the airport and kill time at a bookstore — or a bar — and we claim our baggage using little paper tags at the other end of our flight.</p>

<p>Or let’s look for a moment at real estate. There’s no question that <a href="https://zillow.com">technology has changed real estate</a>, and significantly altered many parts of the buying and selling process. But in most parts of the country, people still buy and sell houses largely the way they always have, including using agents, borrowing money with a mortgage from a lender, and signing a giant pile of paperwork at a lawyer’s office. And then having their deed recorded by a county official.</p>

<p>Or look at another industry that was very much intertwined with real estate during the housing bubble, and that’s the financial industry. An industry under such intense strain and scrutiny would seem especially susceptible to disruption and displacement from new entrants. And yet in nearly every case, the most common outcome for a successful “fin tech” startup is … to be <a href="https://techcrunch.com/2014/02/20/simple-acquired-for-117m-will-continue-to-operate-separately-under-its-own-brand/">bought by a bank</a> or existing <a href="http://www.thinkadvisor.com/2015/08/26/blackrock-snaps-up-futureadvisor-could-wealthfront">financial services firm</a>.</p>

<p>How is it that we see so many industries <em>both</em> irretrievably altered and yet entirely recognizable to someone visiting from before the change?</p>

<h2 id="homeostasis-and-the-lens-of-systems-thinking">Homeostasis and The Lens of Systems Thinking</h2>

<p>One of my very favorite books is “<a href="https://bookshop.org/a/80144/9781603580557">Thinking in Systems</a>” by Donella Meadows, published in 2008 by Chelsea Green. In it Meadows defines a system as follows:</p>

<blockquote>
  <p>A system is a set of things — people, cells, molecules, or whatever — interconnected in such a way that they produce their own pattern of behavior over time. The system may be buffeted, constricted, triggered, or driven by outside forces. But the system’s response to these forces is characteristic of itself, and that response is seldom simple in the real world.</p>
</blockquote>

<p>As Meadows notes, the real-world responses of systems are rarely simple, yet we tend to actively seek simplicity when explaining events around us:</p>

<ul>
  <li>Book sales are down because people don’t read!</li>
  <li>Book sales are up because of adult comic books!</li>
  <li>Book sales are down because of piracy!</li>
  <li>Book sales are up because of Facebook!</li>
  <li>Ebooks will save publishing!</li>
  <li>Ebooks are killing independent bookstores!</li>
</ul>

<p>(I offer these examples with humility, and will readily admit being guilty of offering such simplistic explanations on occasion.)</p>

<p>Yet it shouldn’t be surprising that we find linear narratives based on cause-and-effect so appealing. It’s a fundamental part of our nature.</p>

<p>This comes from a <a href="http://m.phys.org/news/2010-05-psychologists-babies-wrong-months.html">2010 study conducted at Yale</a>:</p>

<blockquote>
  <p>In one experiment babies between six and ten months old were repeatedly shown a puppet show featuring wooden shapes with eyes. A red ball attempts to climb a hill and is aided at times by a yellow triangle that helps it up the hill by getting behind it and pushing. At other times the red ball is forced back down the hill by a blue square. <strong>After watching the puppet show at least six times the babies were asked to choose a character. An overwhelming majority (over 80%) chose the helpful figure</strong>. Professor Paul Bloom said it was not a subtle statistical trend as “just about all the babies reached for the good guy.” [Emphasis added]</p>
</blockquote>

<p>Non-linear behavior, by contrast, is so deeply woven into the world around us that it’s nearly invisible, at least compared with linear behavior, which is much better suited to the logical, rational part of our brain.</p>

<p>Your car seems to respond in a linear way: if you drive twice as fast, you’ll arrive in half the time. But the traffic all around you is a system that responds in a non-linear way. As you add cars, the flow slows gently, until at some critical point — typically just after you’ve turned onto the highway — it suddenly collapses into gridlock.</p>

<p>As you add fertilizer to your tomato garden, more fertilizer means more tomatoes — up to a point. More fertilizer means more tomatoes until it means no tomatoes at all.</p>

<p>Again, this kind of behavior is all around us — water gets slowly colder until it suddenly freezes. Or slowly warms until suddenly it boils. But time and again, our explanations tend toward very linear cause and effect.</p>

<ul>
  <li>If the stock market tumbles, it’s because of that day’s big news story.</li>
  <li>If sales are up, it’s because of effective marketing.</li>
  <li>If a project is running late with a team of 2 people, we add 2 more — and the project becomes even later.</li>
</ul>

<p>That last phenomenon was described in Fred Brooks’s classic 1975 book from Addison-Wesley, “<a href="https://bookshop.org/a/80144/9780201835953" target="_blank">The Mythical Man-Month</a>”, which illustrated that adding people to a late software project doesn’t speed it up; it makes it later.</p>

<hr />

<p>Any group of people after all is a collection of living organisms, so there’s good reason to think that the same rules that apply to other complex natural systems like ecologies or weather also apply to people.</p>

<p>Dana Meadows defines three fundamental characteristics of any system.</p>

<p>First, it must have <strong>elements</strong>, components that comprise the system. These may be physical, tangible things like people or computers. Or they may be intangible things like temperature or mood.</p>

<p>Second, it must have <strong>interconnections</strong>. Ways for information and action to be transmitted among the various elements. For a company, that could be a hallway conversation, a financial report, or perhaps the tone and posture of a meeting participant.</p>

<p>Third, it has a <strong>purpose</strong>. A goal. In the case of a company or a group of people, this may or may not resemble what anyone at the company — especially the CEO — might say or think the purpose is, and is usually closely tied to what behaviors are actually rewarded. For example, I went to the <a href="https://www.wellsfargo.com/about/corporate/vision-and-values/index">Wells Fargo website</a> and took a look at their 21-page “Vision and Values” document, which reads in part:</p>

<blockquote>
  <p>The reason we wake up in the morning is to help our customers succeed financially and to satisfy their financial needs, and the result is that we make money. It’s never the other way around.</p>
</blockquote>

<p>Uh-huh.</p>

<p>We’ll look a bit deeper at each of the three parts of a system, but keep in mind that our inclination at the first sign of trouble in a system is typically to start swapping out the elements, as if they were faulty parts of an engine (Wells Fargo is quick to point out just how many people they’ve fired). But the other two — the interconnections and the purpose — are far more important in determining the health of a system.</p>

<hr />

<p>One of the most important mechanisms within a system is “homeostasis”: a system’s tendency to hold itself in the state where everything’s — for lack of a better word — normal.</p>

<p>Let’s use a very simple example of a mechanical system with a feedback loop for maintaining homeostasis.</p>

<p>Your home has a thermostat, perhaps even a fancy one that you can adjust from your smartphone. As winter sadly fast approaches, most likely you will soon turn on the heat in your home. You would probably define “normal” as having your home at somewhere around 68 degrees, though in practice anything in the range of 67–70 would be fine.</p>

<p>Your home gets cooler by losing heat to the outside. Once the temperature drops below 68, the thermostat tells the furnace to turn on, adding heat back to the system. Once the desired temperature is reached, the thermostat instructs the furnace to turn off.</p>

<p>It’s fairly sophisticated behavior using incredibly simple components. The furnace has just two states: on or off. The thermostat only knows how to track a set point: if the temperature drops below the set point, the switch turns on; if it’s above the set point, the switch turns off. As “smart” as your thermostat may be, that basic operation is unchanged. Yet your house will now adjust to nearly any weather system that comes your way. It’s worth noting that you don’t have to have any idea what the weather will be in the future to know that your home will stay comfortable.</p>
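<p>The feedback loop above can be sketched in a few lines of code. This is a minimal illustration (not from any real thermostat firmware, and the constants are made up for the example): a two-state furnace plus a set point is enough to keep the simulated room near homeostasis, with no knowledge of future weather.</p>

```python
def simulate(minutes, set_point=68.0, outside=30.0,
             loss_rate=0.002, furnace_gain=0.15):
    """Simulate room temperature in one-minute steps.

    Each step the room loses a little heat toward the outside
    temperature; the furnace is a simple on/off switch driven
    only by whether the room is below the set point.
    """
    temp, history = set_point, []
    for _ in range(minutes):
        furnace_on = temp < set_point          # the thermostat's only rule
        temp -= loss_rate * (temp - outside)   # heat leaking outside
        if furnace_on:
            temp += furnace_gain               # furnace adds heat
        history.append(temp)
    return history

temps = simulate(24 * 60)  # one simulated day
print(round(min(temps), 1), round(max(temps), 1))
```

<p>Run it and the temperature oscillates in a narrow band around the set point: simple components, sophisticated-looking stability.</p>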

<p>In other words, the elements of a system communicate through interconnections in pursuit of the goal of the system. The system is always changing, but when strained, it works very, very hard to restore itself to homeostasis. (And natural systems are usually much, much better at doing that than mechanical ones like an HVAC system.)</p>

<p>Once you start thinking about homeostasis, you’ll see it everywhere.</p>

<p>For example, consider someone happily and very recently married. Much of the courtship and early time living together is about establishing homeostasis for this new system.</p>

<p>You gradually learn the signs when your spouse is upset, what the pet peeves are, how to make each other smile, etc. Within a few months, and certainly within a few years, you and your spouse operate within a fairly consistent zone of homeostasis. When something strains that (say a new job, or a move), the system responds in one of three ways: it pushes back to homeostasis, it recalibrates to a new homeostasis, or it collapses.</p>

<p>As a happy example of recalibrating to a new homeostasis, consider that same new family after their first child.</p>

<p>That is an event that is so disruptive to the workings of the system that it must find a new homeostasis, which of course it does. Everything from sleep schedules to social routines to food preferences changes. And then you settle into a new homeostasis, which itself will change slightly over time, but last until the next kid comes along, starting the process over again.</p>

<p>And you see it in your own organizations: how many “change efforts” have you or your company tried and failed? In many cases, the biggest failure is that the forces for homeostasis are too strong. The disruption has to be big enough to shock the system into an existential threat: change or die. You also see this in examples where former enemies cooperate, or bitter rivals come together.</p>

<h2 id="homeostasis-in-action">Homeostasis in Action</h2>

<p>As a vivid example of both the stabilizing power of homeostasis and the kind of shock — sometimes quite literally — needed to disrupt it, I’ll share a story about San Francisco. (This was first told to me by <a href="https://medium.com/@jonathan">Jonathan Rosenfeld</a> and I found it so intriguing <a href="http://www.uctc.net/research/papers/162.pdf">I went searching for more details</a>.)</p>

<p>Before the 1930s there was regular ferry service throughout San Francisco Bay, including between what’s known as the East Bay and downtown San Francisco. But with the rise of the car and construction of the Bay Bridge, ferry ridership declined precipitously and soon was discontinued between the East Bay and downtown.</p>

<p>As the Bay Area population — and with it traffic and congestion — grew throughout the 20th century, there were repeated calls for reviving the ferry service between the East Bay and downtown. But like many municipal matters, it stagnated within local and regional politics for decades, a chronic victim of homeostasis.</p>

<p>Then on October 17, 1989, the Loma Prieta earthquake struck, dramatically disabling the Bay Bridge and paralyzing traffic.</p>

<p>Within 3 hours, ferry service had resumed between the East Bay and downtown San Francisco, and it continues to this day.</p>

<hr />

<h2 id="maintaining-homeostasis-and-health">Maintaining Homeostasis and Health</h2>

<p>When working to understand a system, again because of our instinct to construct narratives, we tend to put the most focus on the elements of the system — the actors in the story — and the least on the purpose of the system. Yet that purpose, the goal of the system, tends to have a much more powerful impact on the behavior of the system than either the elements or the interconnections between them.</p>

<p>The inclination to think about a system like a company mechanistically, with parts to be repaired or replaced, may well be a side effect of the way we use metaphors — storytelling again — to make sense of our own minds.</p>

<p>Robert Epstein, a psychologist, author, and magazine editor explains it eloquently in this passage from his essay, “<a href="https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer">The Empty Brain</a>”:</p>

<blockquote>
  <p>Artificial intelligence expert George Zarkadakis describes six different metaphors people have employed over the past 2,000 years to try to explain human intelligence.</p>

  <p>In the earliest one, eventually preserved in the Bible, humans were formed from clay or dirt, which an intelligent god then infused with its spirit. That spirit ‘explained’ our intelligence — grammatically, at least.</p>

  <p>The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence, the idea that the flow of different fluids in the body — the ‘humours’ — accounted for both our physical and mental functioning. The hydraulic metaphor persisted for more than 1,600 years, handicapping medical practice all the while.</p>

  <p>By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence — again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.</p>

  <p>Each metaphor reflected the most advanced thinking of the era that spawned it.</p>

  <p>Predictably, just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer, with the role of physical hardware played by the brain itself and our thoughts serving as software.</p>
</blockquote>

<p>I would argue that the metaphors Epstein listed have a parallel in how we think about and talk about organizations, and that we’re still using a lot of computer and software metaphors. Think about how many times you’ve heard talk of “upgrading” your organization, or the buzz about concepts like “Lean” or “Agile” as if they were apps to install.</p>

<p>But our teams and companies and industries are no more like machines or computers than our brains are, and when we use incomplete metaphors we limit our ability to accurately understand their behavior.</p>

<p>In her outstanding book from Berrett-Koehler, “<a href="https://bookshop.org/a/80144/9781576753446" target="_blank">Leadership and the New Science</a>”, Margaret Wheatley gets right to the core of the limitations of our current structural metaphors when we try to understand systems like a company:</p>

<blockquote>
  <p>The organization of a living system bears no resemblance to organization charts. Life uses networks; we still rely on boxes. But even as we draw our boxes, people are ignoring them and organizing as life does, through networks of relationships. To become effective at change, we must leave behind the imaginary organization we design and learn to work with the real organization, which will always be a dense network of interdependent relationships.</p>
</blockquote>

<p>(And before you object to the term “network” as yet another variation of the computer metaphor, I’ll defend Wheatley by noting that the word originated in the 16th century, and has over time been used to describe everything from thread patterns to canals.)</p>

<p>What might this mean for a team, a company, or even an entire industry?</p>

<p>Well, if we start to think about things more like an actual ecosystem — a collection of those “dense networks of interdependent relationships” — then we can think very differently about how to prepare for the future.</p>

<p>You see, when we look at the living, dynamic systems surrounding us we see remarkable longevity and resilience — despite exactly zero effort expended by those systems at predicting the future.</p>

<p>Nassim Nicholas Taleb, of “<a href="https://bookshop.org/a/80144/9780812973815" target="_blank">Black Swan</a>” fame, describes this in his latest book, “<a href="https://bookshop.org/a/80144/9780812979688" target="_blank">Antifragile</a>”, in talking about how <strong>incredibly</strong> effective our species as a whole has been at adapting to immense forces of change over time, despite being terrible at predicting the future:</p>

<blockquote>
  <p>Consider, as a thought experiment, the situation of an immortal organism, one that is built without an expiration date. To survive, it would need to be completely fit for all possible random events that can take place in the environment, all future random events. By some nasty property, a random event is, well, random. It does not advertise its arrival ahead of time, allowing the organism to prepare and make adjustments to sustain shocks. For an immortal organism, pre-adaptation for all such events would be a necessity. When a random event happens, it is already too late to react, so the organism should be prepared to withstand the shock, or say goodbye.</p>

  <p>Post-event adaptation, no matter how fast, would always be a bit late. To satisfy the conditions for such immortality, the organisms need to predict the future with perfection — near perfection is not enough. But by letting the organisms go one lifespan at a time, with modifications between successive generations, nature does not need to predict future conditions beyond the extremely vague idea of which direction things should be heading. Actually, even a vague direction is not necessary. Every random event will bring its own antidote in the form of ecological variation. It is as if nature changed itself at every step and modified its strategy every instant.</p>
</blockquote>

<p>If life uses networks of relationships, and life thrives amid uncertainty and unpredictability, what are the conditions for a healthy network? And what can we do to help ourselves and our organizations become healthier networks of interdependent relationships?</p>

<p>According to Wheatley, a healthy network needs three things: <strong>information, connections, and meaning</strong>. You’ll note that neatly echoes Dana Meadows’s definition of a system, with its elements, interconnections, and purpose.</p>

<p>And recall that our instinct is typically to focus our energy on changing the elements of the system — the people, the tools, the suppliers, the formats. We see cause and effect, and so we try command and control to modify the elements of the system. We reshuffle and reorganize the elements, connect them together on an org chart, and then often as an afterthought, cobble together some form of vision or mission statement.</p>

<p>But when we don’t pay enough attention to the system’s interconnections and purpose, the system will fight our best efforts as it seeks homeostasis.</p>

<p>So rather than <em>command and control</em>, we should instead be thinking the way that nature does, which is in terms of <em>sense and respond</em> (that distinction is at the heart of the “Holacracy” movement, though I much prefer <a href="http://www.strategy-business.com/article/00344?gko=10921">this overview via strategy+business</a> to anything “branded” Holacracy).</p>

<p>That is why even as individual companies or sectors may be fragile or endangered, industries as a whole can remain vibrant and resilient. There is no deliberate attempt to control or change the elements of the larger system. Instead, the system responds naturally by building more connections among its parts, which improves the flow of information and reinforces the system’s purpose and goals.</p>

<h2 id="but-what-about-disruptive-innovations">But What About “Disruptive Innovations?”</h2>

<p>It bears pointing out that just because an industry (as a dynamic system) is resilient and adaptable doesn’t mean any particular company or sector will last as long as the bigger system they’re a part of. Systems comprise smaller systems, and are parts of larger ones — one of the most compelling points Taleb makes in “Antifragile” is that often the relative fragility of the individual elements of a system actually makes the larger system much stronger:</p>

<blockquote>
  <p>So antifragility gets a bit more intricate — and more interesting — in the presence of layers and hierarchies. A natural organism is not a single, final unit; it is composed of subunits and itself may be the subunit of some larger collective. These subunits may be contending with each other. Take another business example. Restaurants are fragile; they compete with each other, but the collective of local restaurants is antifragile for that very reason. Had restaurants been individually robust, hence immortal, the overall business would be either stagnant or weak, and would deliver nothing better than cafeteria food — and I mean Soviet-style cafeteria food. Further, it would be marred with systemic shortages, with, once in a while, a complete crisis and government bailout. All that quality, stability, and reliability are owed to the fragility of the restaurant itself.</p>
</blockquote>

<p>The majority of the literature on “disruption” is oriented at helping individual companies compete, and at that level conditions can indeed be harsh.</p>

<p>But let’s look more closely at some of the “classic” <a href="https://en.m.wikipedia.org/wiki/Disruptive_innovation">examples of disruption</a>, whereby a new entrant begins at the low end of the market with a product that incumbents (and their best customers) find inferior, but which eventually both creates a new, larger market and displaces the incumbents:</p>

<ul>
  <li>Hydraulic excavators (replacing cable-driven ones)</li>
  <li>Steel mini-mills (replacing vertically integrated mills)</li>
  <li>The progression of floppy disk drives (from 14” to 8” to 5.25” to 3.5” to solid-state)</li>
  <li>The PC (replacing the mini-computer, which in turn replaced the mainframe)</li>
</ul>

<p>In each case, once-mighty firms were toppled from their perch atop their industries. But each respective industry (construction, manufacturing, and computing) <strong>grew larger, stronger, and healthier by the same pattern of disruption that was so damaging to so many individual firms</strong>.</p>

<p>It’s no coincidence that every industry has some form of conferences, trade shows, and standards bodies, and that often these emerge during times of strain and stress.</p>

<p>It’s also no coincidence that more often than not, regardless of the industry, you’ll hear people say that the real value of any particular conference or trade show or standards body isn’t the stated theme or purpose. What is the number one reason people attend conferences and trade shows? That’s right, “networking”. It is the system acting to improve its dense network of interdependent relationships.</p>

<p>It is literally <em>what we as human beings are built to do</em>, and by acknowledging it, and by working to create the conditions for a healthy system, a healthy network of interdependent relationships, we strengthen ourselves and the organizations we’re a part of, no matter what the future brings.</p>

<h2 id="the-new-normal-for-publishing">The “New Normal” for Publishing</h2>

<p>So I think that kind of systemic response is the main reason why industries like publishing can undergo <em>profound</em> change while also remaining quite recognizable over long periods of time. The system adapts and evolves, absorbing new information and if necessary, finding a new homeostasis, a new normal.</p>

<p>And that “new normal” for book publishing doesn’t mean that digital is “done”! On the contrary, digital books and the wider set of technologies surrounding them are now deeply woven into the fabric of what’s “normal” for the system.</p>

<p>Alan Kay famously said, “technology is anything that was invented after you were born,” and just as we don’t think anymore of things like punctuation as “technology” (the hyphen first appeared in continental Europe in the 11th century, and took <strong>200 years</strong> to reach England), anyone “born” into publishing today won’t give a second thought to ebooks or social media — they are now just part of the system’s “new normal”.</p>

<p>Digital in publishing isn’t done, it’s just getting <em>boring</em>. Which means it’s just getting started.</p>

<hr />

<p><em>This essay was adapted from a talk given at the <a href="https://firebrandtech.com/">Firebrand Technologies</a> user conference in Portsmouth, NH in September of 2016.</em></p>]]></content><author><name>Andrew Savikas</name></author><summary type="html"><![CDATA[On why digital disruption in publishing isn't 'done' (even if it seems that way)]]></summary></entry><entry><title type="html">Going Home</title><link href="https://andrewsavikas.com/going-home-why-i-left-a-great-company-to-become-a-work-from-home-dad/" rel="alternate" type="text/html" title="Going Home" /><published>2016-03-13T00:00:00+00:00</published><updated>2016-03-13T00:00:00+00:00</updated><id>https://andrewsavikas.com/going-home-why-i-left-a-great-company-to-become-a-work-from-home-dad/</id><content type="html" xml:base="https://andrewsavikas.com/going-home-why-i-left-a-great-company-to-become-a-work-from-home-dad/"><![CDATA[<figure class=""><img src="/img/son_in_sebastopol.jpg" alt="photo of my son in the yard of an AirBnB house in Sebastopol, CA" /><figcaption>
      Pictured: Not my actual home. (But my actual son.)

    </figcaption></figure>

<p>After 5 years, I’ve left my role as CEO at <a href="https://safaribooksonline.com">Safari Books Online</a>, which is now part of <a href="https://oreilly.com">O’Reilly Media</a>. As O’Reilly works to fully integrate Safari with the rest of their operations, Tim O’Reilly and I agreed this was the right time for me to step away. This post provides some more context.</p>

<hr />

<p>Anyone paying attention to technology, publishing, or education (and especially the places they intersect) knows that a lot can change in 5 years. That is true for companies like O’Reilly and Safari, and it’s also true for people.</p>

<p>My wife and I recently welcomed our second child, and events like that are opportunities for deep reflection. It’s a cliché to hear someone say they’ve left a job to “spend more time with family”, but I can say with certainty that sometimes it’s exactly true. Acquisitions mean that roles change — and in this case, as the Safari CEO role went away, the right choice was for me to go along with it.</p>

<p>For now, I’ll be working on building a life that can give me time to be more present with my kids as they move (<em><strong>so</strong></em> quickly!) through some very important early years.</p>

<p>I have no illusions that this is some unique path (indeed I’m happy to be joining what <a href="http://www.fastcompany.com/3043595/second-shift/what-high-profile-working-fathers-leaving-their-jobs-means-for-the-rest-of-us">seems to be a growing movement</a>), but perhaps someday this will become a more common option even among those without the resources of the corporate executive ranks.</p>

<hr />

<p>I began my publishing career almost 14 years ago as the Reprints Editor in O’Reilly’s production department, and was fortunate to get the opportunity to move up through a number of challenging roles, including <a href="http://shop.oreilly.com/product/9780596004934.do">writing a book</a>, architecting early versions of O’Reilly’s XML-based digital publishing toolchain, and later chairing the seminal <a href="http://www.toccon.com/toc2013">TOC Conference</a>.</p>

<p>When I took the CEO job at Safari, it was with a mandate to build an internal technology and product development capability that could evolve with the rapidly changing online learning landscape. The Safari team succeeded at doing just that, while profitably growing revenue for five years running, and the platform we built together was one of the reasons that O’Reilly chose to fully acquire Safari (they already owned 50% as part of a joint venture with Pearson).</p>

<p>O’Reilly’s strategy for integrating Safari is sound, and while I’m admittedly biased, I believe they are <em>incredibly</em> well-positioned to complete their transformation from a “publisher” to an integrated media company that’s fundamentally about helping people learn and changing the world along the way.</p>

<p>I am so grateful for my time at O’Reilly and Safari, especially all of the smart, helpful, generous and talented people I met along the way. Working at O’Reilly and Safari profoundly shaped my career, with extraordinary opportunities at every step of the way for someone who loves publishing, business, and technology.</p>]]></content><author><name>Andrew Savikas</name></author><summary type="html"><![CDATA[Why I left a great company to become a work-from-home dad]]></summary></entry></feed>