<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://andrewsavikas.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://andrewsavikas.com/" rel="alternate" type="text/html" /><updated>2026-03-15T21:06:15+00:00</updated><id>https://andrewsavikas.com/feed.xml</id><title type="html">andrewsavikas.com</title><subtitle>Personal website for Andrew Savikas</subtitle><author><name>Andrew Savikas</name></author><entry><title type="html">What My Personal AI Agent Setup Actually Looks Like</title><link href="https://andrewsavikas.com/what-my-personal-ai-agent-setup-actually-looks-like/" rel="alternate" type="text/html" title="What My Personal AI Agent Setup Actually Looks Like" /><published>2026-03-14T00:00:00+00:00</published><updated>2026-03-14T00:00:00+00:00</updated><id>https://andrewsavikas.com/what-my-personal-ai-agent-setup-actually-looks-like</id><content type="html" xml:base="https://andrewsavikas.com/what-my-personal-ai-agent-setup-actually-looks-like/"><![CDATA[<blockquote>
  <p>“In reality it is the maze which remembers, not the mouse.”</p>

  <p>— James Gleick, <em>The Information</em></p>
</blockquote>

<p>When I first started using AI, it was always about helping me produce some specific output. A research report, a meeting agenda, even a new workout plan for the gym. I learned over time that the more specific and detailed the prompt I gave, the better the output.</p>

<p>Later, when I needed another output that used a lot of the same context — perhaps a spreadsheet going to the same audience as the meeting agenda — I’d go back to the previous chat session on the topic, and either continue from there, or have the LLM generate a summary I could use to <a href="/the-amnesiacs-letter/">seed a new chat rather than rehash all of the background from scratch</a>.</p>

<p>But as I tried to scale that up and involve an LLM with more and more areas of responsibility in my work and personal life, managing all of that context quickly spiraled out of control. I’d either try to do too much within a single chat (because that chat session already “knew” a bunch of stuff relevant to a specific project or company or team) or flail in an effort to organize and re-use context and bits of prompts in various scattered files and folders — most of which quickly became stale.</p>

<p>Because for an executive or operator doing knowledge work, any specific output or artifact isn’t usually the hard part. The hard part is maintaining an ever-evolving model of a complex web of interrelated context: What matters right now? What was already decided? Why was that decided? Who is involved? What assumptions are at play? What changed since the last time I looked at this? What issues remain unresolved? Where are the edge cases and pitfalls?</p>

<p>Those mental maps are complex, and any manager or executive will tell you that the hidden costs of “reloading” are very real. Every interruption, every context switch, every multi-day pause forces you to keep reconstructing a mental house of cards that will just come crashing down again with the next shift in attention.</p>

<p>This is the pain point behind endless systems and tools, from <a href="https://gettingthingsdone.com/">Getting Things Done</a> (GTD) to <a href="https://www.buildingasecondbrain.com/">Building a Second Brain</a>: the promise that with the right system and enough discipline, you too can manage all of that context switching.</p>

<p>The first glimpse I got of a potential better way was when I tried out the Claude desktop app, which lets Claude access your local files. Now instead of pasting in or uploading notes and documents, I could just tell Claude where to find the context needed for a specific task. A huge improvement!</p>

<p>But the context <em>keeps changing</em>, which means <em>someone</em> still has to keep all of those notes and prompts and background material organized and up to date, or you end up fighting against stale context — which can be worse than no context at all.</p>

<p>So I reframed the problem away from “how can I use AI to help me produce better output” to “how can I use AI to manage the evolving context across multiple related areas of responsibility in my work and life?”</p>

<p>Because unlike a programmer’s codebase, which usually itself stores the state of the work, a knowledge worker’s “codebase” is spread across docs, spreadsheets, meeting notes, email threads, Slack threads, unstated assumptions, political landmines, and tacit (but usually undocumented) knowledge.</p>

<p>And the more I worked with AI, and specifically with structured “agents”, the more I realized that for me, <em>that</em> was the real opportunity. Using AI primarily to manage continuity and context over time, rather than primarily to produce artifacts of output.</p>

<h2 id="my-setup">My setup</h2>

<figure class=""><img src="/img/post6-header-repo.png" alt="A terminal window showing a file tree of an agent repository: a shared folder with domain context files, and individual agent folders each containing their own instructions, memories, inbox, and projects." /><figcaption>
      An anonymized version of my actual agent repo structure. Each agent has its own folder with instructions, accumulated memories, an inbox, and project workspaces. Shared context lives in a common directory.

    </figcaption></figure>

<blockquote>
  <p>“[W]hat people forget is that founders at successful companies have another reason not to have to take so many notes or use so many productivity systems: They have an entire organization that acts as an extension of their intelligence. In a sense, the organization itself is the biggest productivity hack of all — rendering cheap alternatives like note-taking systems or pomodoros obsolete.”</p>

  <p>— Dan Shipper, <a href="https://every.to/superorganizers/ai-and-the-age-of-the-individual">“AI and the Age of the Individual”</a></p>
</blockquote>

<p>LLMs are trained on copious amounts of human interaction and language, so to me it made sense to lean into that and model my workflow around a “team” metaphor. I’m sure you’ve had what I call the “Bob” experience within an organization before: every time a question comes up about a particular topic or system, everyone says “Oh, go ask Bob about that.” Maybe Bob’s the one who built the process or system, or maybe he’s just been around long enough to have seen it all, but everyone knows that when you need to know something about “that thing” — you go ask Bob.</p>

<p>So that’s what I set out to build for myself. A team of agents that would each maintain as much context as possible around a particular domain or area of responsibility. That way, when I needed a new output or had a question related to that domain, I could go to Bob (or Janet, or Patty), and ask them for help, because they’d already “know” enough background to be immediately useful.</p>

<p>But it’s one thing to collect the disparate context needed to help an agent be an effective assistant — it’s quite another to keep that context updated over time.</p>

<p>The good news is that AI agents are <em>very</em>, <em>very</em> good at helping themselves do just that.</p>

<p>I made a new folder on my computer, and in the folder made sub-folders for multiple agents. Each one has a name and a basic persona, and each one is designed to accumulate and manage knowledge and context for a particular domain. They have detailed instructions about how to keep their own “memories”, and for how to maintain and update information about their particular domain.</p>

<p>At the start of each session, the very first thing they do is load up all of that previous context so we can pick up exactly where we left off the last time. Then the very last thing they do before we wrap up is <em>update</em> all of those memories to reflect what’s new or what’s changed. My interactions with them along the way (researching, writing, analyzing, modeling) give them real-time information about how the context is evolving, and then <em>they</em> do the work of organizing and processing that information over time.</p>

<p>I’ve also added tools for the agents to easily access email, docs, Slack, and other external sources of context, making it easy for me to direct them to get up to speed quickly on specific items.</p>

<p>Because their context is continually updating, the agents have accumulated a rich understanding of what’s going on, so I don’t have to keep explaining things like projects, priorities, and people. I can just say things like, “Please refresh me on where things stand with the Acme partnership negotiation,” and “OK, which of the product descriptions still need to be updated?”</p>

<p>The main benefit is not that it helps me produce more useful outputs (though it absolutely does<sup id="fnref:1"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup>). It’s that it preserves continuity of context over time. It dramatically lowers the cost of returning to hard problems without starting over. It helps maintain the internal map (or really, <em>maps</em>) needed to be effective. And once you’ve experienced that, it’s very hard to go back. For me, the system has stopped feeling like a system or a trick or a hack or a process, and started feeling like essential infrastructure.</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:1">
      <p>One of my agents helped me quickly find several good real-life example prompts to use above; another reviewed my entire archive of thousands of <a href="https://readwise.io">Readwise</a> highlights to find relevant quotes to accompany this post. Then they managed all of the fiddly bits needed to get it published. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Andrew Savikas</name></author><summary type="html"><![CDATA[I built a team of AI agents that maintain their own context over time across projects, pauses, and ever-shifting priorities.]]></summary></entry><entry><title type="html">The Amnesiac’s Letter</title><link href="https://andrewsavikas.com/the-amnesiacs-letter/" rel="alternate" type="text/html" title="The Amnesiac’s Letter" /><published>2026-03-06T00:00:00+00:00</published><updated>2026-03-06T00:00:00+00:00</updated><id>https://andrewsavikas.com/the-amnesiacs-letter</id><content type="html" xml:base="https://andrewsavikas.com/the-amnesiacs-letter/"><![CDATA[<p>In Kurt Vonnegut’s 1959 novel, <em>The Sirens of Titan,</em> a character named Unk discovers his memory has been erased. He finds a letter from his pre-erasure self — a letter that has to teach him who he is, what has happened, and what to do next. The letter is his only link to continuity. (40 years later, Leonard Shelby had a similar experience in Christopher Nolan’s <em>Memento</em>, this time involving tattoos and Polaroids.)</p>

<p>This is not just a sci-fi premise.</p>

<p>Think about a long session with ChatGPT or Claude where things were going well. It got to know your project. It anticipated what you meant. It stopped suggesting things you’d already rejected. It felt like the AI was <em>learning</em> things about you.</p>

<p>Then you started a new conversation in a fresh chat window — and the AI was a stranger again. It made suggestions you already rejected. It didn’t know anything about you anymore. It was frustrating because it felt like starting over.</p>

<p>Because it <strong>was</strong> starting over! The LLM on the other end had none of the previous conversation’s context. No amnesiac’s letter to explain who it was (and who you were).</p>

<p>All of that context, all you’d shared in conversation and corrected along the way, just … gone.</p>

<p>The friendly chat interface hides it, but every LLM is just like Unk or Leonard: it only <em>seems</em> like there’s consistency because every time you send a follow-up message, <strong>all of the conversation history up to that point is included as well</strong>. Your preferences, your corrections, your examples, your “no, more like this” — all of it was included <em>every</em> time, informing every response. You were writing Unk’s letter, one message at a time, without realizing it.</p>
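<p>If you were to sketch that mechanism in code, it would look something like this (a toy illustration; <code class="language-plaintext highlighter-rouge">call_model</code> stands in for whichever API actually gets called):</p>

```python
# A toy sketch of why chat "remembers": the client re-sends the entire
# transcript with every turn. The model itself keeps nothing between calls.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def send(user_message, call_model):
    history.append({"role": "user", "content": user_message})
    reply = call_model(history)  # the WHOLE letter so far, every single time
    history.append({"role": "assistant", "content": reply})
    return reply
```

<p>Every turn, the letter gets a little longer — and every turn, a fresh amnesiac reads it for the very first time.</p>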

<p>During chat sessions, we experience AI agents improving how they “know us” over time, and it certainly <em>seems</em> like “learning”. But that implies a continuity that just doesn’t exist. There is no “over time” — there’s just a new amnesiac with every response, encountering your (growing) letter to them for the very first time.</p>

<p>Next time, what if you had that letter ready <em>before</em> the conversation even started?</p>

<p>Not accumulated through trial and error over dozens of messages. Written deliberately, in advance, and handed to the amnesiac the moment they wake up. Here’s who I am. Here’s what I care about. Here’s how I work. Here’s what we’ve done so far. Here’s what matters right now.</p>

<p>That’s the power of what’s known as a system prompt. It’s what a <em>CLAUDE.md</em> file is. It’s what ChatGPT’s custom instructions and a Claude Project’s instructions field are for.</p>

<p>There’s lots of great advice about writing good prompts like that. And it’s true: if you write a detailed, 5-page dossier about yourself, your project, and all of the nuances and dependencies involved, you’re going to get much better results, much faster, than if you just open up a new chat session cold.</p>

<p>But that’s also a lot of work!</p>

<p>So here’s what to do that will change the way you use LLMs: go back to one of those “good” sessions you had with Claude or ChatGPT or Gemini. Where you finally felt like it really got you and what you needed. And now <strong>ask the LLM to write the “amnesiac’s letter” for you</strong>:</p>
<blockquote>
  <p>Before we wrap up, summarize everything you’ve learned about me, my project, and my preferences in this conversation — written so that a completely new AI assistant could read it and pick up where you left off.</p>
</blockquote>

<p>Or if you want to embrace your inner Leonard Shelby:</p>
<blockquote>
  <p>Imagine you’re about to lose your memory of this entire conversation. Write a letter to your future self — someone with no memory of me at all — that would let them help me as effectively as you can right now.</p>
</blockquote>

<p>Now you have a document you can paste in at the start of your next session, or add to whatever “custom instructions” feature your preferred LLM tool offers.</p>

<p>That single letter is a solid start, but of course there’s much more to constructing useful context: what to emphasize, what to leave out, how to build in safeguards that don’t depend on the reader at all. I’ll get into all of that later. But start here: the next time you have a “good” session, don’t just close the tab. Ask the amnesiac to write their letter first.</p>

<p>(Oh and that letter you end up with can also be a surprisingly revealing document about <em>you</em> — what you <em>actually</em> care about, how you <em>actually</em> work, what you’re <em>actually</em> pedantic about. Most people have never written that down!)</p>

<p>Every session you close without capturing anything is context you might end up having to rebuild from scratch, so don’t forget to ask your amnesiac for some help first.</p>]]></content><author><name>Andrew Savikas</name></author><summary type="html"><![CDATA[The hidden document that makes AI feel like it remembers you.]]></summary></entry><entry><title type="html">AI Doesn’t Kill Skills — It Moves Them Up a Layer</title><link href="https://andrewsavikas.com/ai-doesnt-kill-skills-it-moves-them-up-a-layer/" rel="alternate" type="text/html" title="AI Doesn’t Kill Skills — It Moves Them Up a Layer" /><published>2026-03-04T00:00:00+00:00</published><updated>2026-03-04T00:00:00+00:00</updated><id>https://andrewsavikas.com/ai-doesnt-kill-skills-it-moves-them-up-a-layer</id><content type="html" xml:base="https://andrewsavikas.com/ai-doesnt-kill-skills-it-moves-them-up-a-layer/"><![CDATA[<p>A <a href="https://arxiv.org/abs/2601.20245">new academic paper</a> shows that AI users scored 17% <em>lower</em> on a quiz designed to measure how much they’d learned while completing a task. AI had smoothed out many of the rough edges that produce real learning, like debugging code. They completed the task, but came out the other side knowing measurably less than colleagues who didn’t use AI.</p>

<p>If you manage people who use AI tools, this sounds alarming.</p>

<p>But it isn’t—or at least not for the reasons you think.</p>

<p>Here’s what the (excellent!) paper actually found. The researchers randomized 52 experienced Python developers into two groups: one used AI to help complete a task, the other did not. The AI group hit fewer errors along the way, which also meant less opportunity to learn from <em>correcting</em> those errors. If you never debug your mistakes, you never really learn to debug, and debugging has historically been one of the most valuable skills a developer can cultivate.</p>

<p>But there’s some real nuance in the paper beyond that headline finding.</p>

<p>Digging deeper, the authors identified six interaction patterns in how developers used AI, and the learning outcomes were starkly different.</p>

<p>Three patterns tanked learning. <em>Delegation</em>—hand the AI the whole task and paste in the result. <em>Progressive reliance</em>—start doing the work yourself, then gradually shift to letting the AI write all the code. <em>Iterative debugging</em>—run into an error, ask the AI to fix it, run into another, ask again, never understanding what went wrong.</p>

<p>Three patterns preserved it. <em>Conceptual inquiry</em>—ask the AI questions instead of asking it to do things. This group scored highest and, interestingly, was also the fastest among the high-scoring patterns. <em>Hybrid code-explanation</em>—ask for code and an explanation together, then actually read the explanation. <em>Generation-then-comprehension</em>—let the AI write the code, then manually copy it and ask follow-up questions to understand what it did.</p>

<p>So despite the scary title, the pattern isn’t ultimately “AI helps or hurts.” It’s “how you use it determines what you learn.” The paper knows this—it’s right there in the data. But the headline finding is the scary number, not the nuanced pattern.</p>

<h2 id="the-layer-the-paper-didnt-measure">The layer the paper didn’t measure</h2>

<p>Here’s what happened when I had an AI agent build a Gmail MCP integration from scratch: I learned essentially nothing about the implementation—I couldn’t tell you how the OAuth flow works, couldn’t debug the API calls. By the paper’s measure, I atrophied.</p>

<p>But I learned a <strong>lot</strong> about something else: how to scope a project so an agent can execute it. How to break work into pieces that are independently verifiable. How to write a spec tight enough that the output is useful and loose enough that I’m not just dictating code. How to evaluate results when I can’t read every line.</p>

<p>The paper measured skills at the execution layer. It didn’t measure what was forming at the layer above.</p>

<h2 id="this-has-happened-before">This has happened before!</h2>

<p><em>Every</em> wave of programming abstraction—assembly to C, C to Python, Python to frameworks—changed <strong>which</strong> skills mattered without reducing <strong>whether</strong> skills mattered. Nobody argues that Python developers are less skilled than assembly programmers. They’re skilled at different things, at a different layer.</p>

<p>The pattern: each compression step lets you say less while the system fills in more. The skills shift from “how do I implement this” to “how do I specify this” to “how do I evaluate whether this is right.” The new layer isn’t optional or lesser. It’s just what the work requires.</p>

<p>AI agents are the next compression step, and they’ve made the leap all the way to natural human language (rather than, say, compiled libraries), but the dynamic is the same.</p>

<h2 id="the-management-transition">The management transition</h2>

<p>This is the same structural shift that happens when an individual contributor is promoted to a manager. You stop learning how to do the work and start learning how to direct and evaluate it. Nobody calls that “skill atrophy”—they call it a career path. (I wrote recently about how <a href="/your-agents-have-an-org-chart-problem/">agents are forcing us to rediscover organizational behavior</a>—this is the individual version of that same shift.)</p>

<p>The VP of Engineering who can’t implement OAuth anymore isn’t atrophied. They’ve shifted their primary skills to a layer where their judgment about estimates, architecture trade-offs, and team capability is what matters. They need enough understanding to smell when something’s wrong—but “enough” is not “can do it themselves.”</p>

<h2 id="what-the-paper-actually-caught">What the paper actually caught</h2>

<p>Look at those six patterns again. The three that tanked learning—delegation, progressive reliance, iterative debugging—have something in common: the developer simply offloaded work and moved on. The three that preserved learning—conceptual inquiry, hybrid explanation, generation-then-comprehension—all involve the developer doing cognitive work <em>on top of</em> the AI output, actively engaged with guiding and evaluating it.</p>

<p>The paper didn’t catch AI killing skills. It caught people building the new skills needed to work more effectively a layer <em>above</em> writing the code.</p>

<p>The developers who learned the most were the ones who stayed engaged—who asked questions, who read the explanations, who treated the AI output as something to understand rather than something to blindly accept. They were doing the work at the new layer above, even if nobody was measuring it.</p>

<p>(To me this sounds an awful lot like the way a new manager has to stop measuring their worth by individual output and start learning to actively direct and evaluate someone else’s.)</p>

<p>The skills (and the learning) just moved up a layer. They didn’t disappear.</p>]]></content><author><name>Andrew Savikas</name></author><summary type="html"><![CDATA[A new paper says AI users learn 17% less. But it’s measuring the wrong layer — skills don’t disappear when you delegate, they shift from execution to evaluation.]]></summary></entry><entry><title type="html">Your Agents Have an Org Chart Problem</title><link href="https://andrewsavikas.com/your-agents-have-an-org-chart-problem/" rel="alternate" type="text/html" title="Your Agents Have an Org Chart Problem" /><published>2026-02-23T00:00:00+00:00</published><updated>2026-02-23T00:00:00+00:00</updated><id>https://andrewsavikas.com/your-agents-have-an-org-chart-problem</id><content type="html" xml:base="https://andrewsavikas.com/your-agents-have-an-org-chart-problem/"><![CDATA[<blockquote>
  <p>How’s everyone else feeling about a future where we each manage a team of agents? :eyes: :grimacing: I’ve been surprised that this has been freaking me out as much as it is. I guess it’s the change? And when I imagined myself managing, I imagined managing people problems. I dunno what AI problems will be like.</p>
</blockquote>

<p>That’s from a <a href="https://wanderu.com">Wanderu</a> colleague’s Slack message, and it really nails something I keep seeing with people (including myself!) moving from using a single AI chat window to coordinating multiple LLM agents at once: the main problems stop being about technology and start being about <em>management</em>.</p>

<p>Anyone trying to work with more than one or two LLM agents over any period of time will quickly run headlong into issues like:</p>
<ul>
  <li><strong>Span of control</strong> — it’s nearly impossible to directly “manage” 20 (or even 10) agents, just as it’s nearly impossible for a manager to have 20 direct human reports.</li>
  <li><strong>Delegation frameworks</strong> — trust isn’t binary. Effective oversight depends on the task at hand.</li>
  <li><strong>Controls vs. policies</strong> — writing a rule doesn’t mean it will always be followed.</li>
</ul>

<p>This is all firmly in the territory of Organizational Behavior, and there are decades of practical knowledge about coordination, delegation, trust, and verification to draw from.</p>

<p>Matt Levine has <a href="https://news.bloomberglaw.com/mergers-and-acquisitions/matt-levines-money-stuff-crypto-markets-are-where-the-fun-is">a running bit in his Bloomberg newsletter</a> about how the crypto world keeps rediscovering traditional finance. Crypto dismissed clearinghouses, custody chains, and KYC as legacy overhead, then spent a decade painfully rebuilding all of it from scratch. The structures turned out to be load-bearing. The technology changed, but the underlying problems — custody, clearing, trust — didn’t.</p>

<p>I think the same thing is happening with agents and management theory. Technologists used to treating organizational behavior as bureaucratic overhead are quickly rediscovering why TPS reports exist.</p>

<p>Others have noticed this too. Jesse Vincent wrote about agent swarms <a href="https://blog.fsck.com/2026/02/03/managing-agents/">“speedrunning”</a> the lessons of Brooks’s <em>Mythical Man-Month</em>. Martin Fowler’s recent Thoughtworks retreat notes describe “organizational structures built for human-only development <a href="https://martinfowler.com/fragments/2026-02-18.html">breaking in predictable ways</a>.”</p>

<p>As one example of how this plays out, every experienced manager learns that “work harder” doesn’t produce better work. The real job is specifying success conditions: what you’re checking for, and how you’ll both know it worked. Most people learn this by failing at the vague version first.</p>

<p>The LLM agent equivalent is obvious. “Be more careful.” “Think step by step.” “Be more accurate.” These are all just variations of “work harder” or “do better next time” — they sound like instructions but they don’t tell the agent what to actually <em>do</em> differently.</p>

<p>The good news is that we already know how to handle this! Many of the same tools and techniques developed over hundreds of years for effectively organizing, coordinating, and managing the work of teams of people work quite well for organizing, coordinating, and managing the work of teams of <em>people simulations</em>.</p>

<p>I’ve said that <a href="/what-developers-do-now-with-ai/">knowledge workers can learn a lot about the future by watching what software developers are doing</a>. But in the case of successfully using LLM agents, the reverse is also true: those building and working with teams of agents have much to learn from great managers about things like clear writing that sets proper expectations about what “good” work actually is.</p>

<p>The parallel isn’t perfect. Agents don’t get demoralized or play office politics. They have no trouble repeating the same work over and over. They’re undaunted by levels of process and bureaucracy that would crush any human. But they also don’t push back when instructions are unclear — they just <a href="/check-your-work-doesnt-work-with-llms/">confidently do the wrong thing</a>.</p>

<p>This is a theme I’ll keep coming back to: Span of control, separation of duties, delegation frameworks, the distinction between policies and structural controls — each of these has an agent parallel I want to unpack in future posts.</p>

<p>But the basic claim is this: when working with agents starts to feel like management, that’s not a coincidence – it’s a fundamental feature. And that means that for knowledge workers who want to make the most of agentic workflows, the big opportunity isn’t learning new technology; it’s applying the management, communication, and judgment skills <em>you already have</em>.</p>

<p>One of the reasons using LLMs (and developers are using a <em>lot</em> of LLMs right now) feels so different from most of our experience interacting with computers is their ability to process normal human language. Just say “summarize this report into a 3-paragraph email I can send to my team” and that’s exactly what you get! While you <em>could</em> get more specific (“at least 2 paragraphs, but no more than 3, explain things in plain language like you would to a 5-year-old”) usually even a vague instruction gets at least passable results.</p>

<p>A few weeks ago I gave Claude an instruction like “draft an email summarizing our request and setting a deadline of next Tuesday.”</p>

<p>Click. Whirr. And then I had a perfectly useful draft email for my purposes. I made some quick edits and off it went.</p>

<p>But then I looked closer at the (now sent) message and saw Claude had put in the wrong date (it was the correct date for that particular Tuesday – but from <em>last</em> year 🤦‍♂️).</p>

<p>Of course Claude was “apologetic” when I pointed out the error. At first I tried to be more explicit: “<em>Be very careful about dates</em>”. I even added an extra instruction to be read before every session that said to be <strong>extremely</strong> careful about dates, and to double check them.</p>

<p>And it worked.</p>

<p>Or at least I thought so.</p>

<p>Until the next time it didn’t.</p>

<p>It turns out “check your work” is <em>directionally helpful</em> but not truly <em>effective</em>.</p>

<p>What <em>does</em> work is to <strong><em>give the LLM a test it can use to falsify a claim</em></strong>.</p>

<p>In this case, the fix was that instead of just telling Claude to “be sure about the date,” I added an explicit instruction to verify EVERY relative date reference by checking it against the computer’s built-in <code class="language-plaintext highlighter-rouge">date</code> command.</p>
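<p>As a sketch of what that check amounts to (the wording of my actual instruction is paraphrased, and <code class="language-plaintext highlighter-rouge">date -d</code> is the GNU coreutils form; macOS uses <code class="language-plaintext highlighter-rouge">date -v</code> instead):</p>

```shell
# The check the agent runs before committing to any relative date.
# Ground truth comes from the system clock, never from the model's memory.
today=$(date +%Y-%m-%d)
next_tue=$(date -d "next Tuesday" +%Y-%m-%d)   # GNU date; macOS: date -v+tue
echo "Today is $today; 'next Tuesday' resolves to $next_tue"
```

<p>The point isn’t the specific command; it’s that the claim “the deadline is next Tuesday” now has a test that can fail.</p>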

<p>The agent didn’t get “better at remembering to check its work on dates”, it changed the way it worked because the instructions were more specific about <em>how</em> to check its work – I gave it a falsifiable success condition.</p>

<p>Given the right guidance, the agent is perfectly capable of correcting its own mistakes before you ever see them, <strong>as long as you define what you’re asking for in a way that it can verify its own work</strong>.</p>

<p>But while we humans are quite good at knowing what “correct” means when we see it, in the context of knowledge work, most of us don’t have much experience systematizing those preferences into workflows a computer can reliably automate.</p>

<p>My friend and publishing industry veteran George Walkley <a href="https://www.georgewalkley.com/Systems-and-Meaning/">posted recently about this from the other direction</a> — publishers have the right editorial instincts but not the systems habit:</p>
<blockquote>
  <p>Editorial training optimises for polish and precision. [But t]here is also a difference between shaping sentences and thinking in systems. Developers tend to treat prompts as modular components, version-controlled assets, parts of repeatable workflows. Most publishers that I work with are not yet operating at that level of process abstraction.</p>
</blockquote>

<p>Yet I’ve seen evidence that we’ve been able to bridge this gap in the past, and it offers useful insights for working with LLMs today.</p>

<p>Back in my own days in a book publishing Production Department, the work involved <strong><em>oodles</em></strong> of exacting criteria: Make sure every figure has a caption; Chapter titles can’t be more than 80 characters long; You can’t skip from H1 to H3 without an intervening H2.</p>

<p>The criteria were clear, but still challenging for a person to apply with 100% accuracy across dozens of checks and hundreds of pages. Quite often the natural response was adding yet <em>another</em> review pass by <em>another person</em> applying the same criteria (“is this Proofread 1 or Proofread 2?”).</p>

<p>Computers were rarely looked to for help because the work to codify an ever-evolving list of criteria (often there were different checklists for different types of books too) was rarely worth the cost of having a developer build a program or script to do it.</p>

<p>A technique I borrowed from <a href="https://www.linkedin.com/in/rebeccagoldthwaite/">Rebecca Goldthwaite</a>, then at Cengage, offered something of a middle ground: an XML-validation tool called <em><a href="https://en.wikipedia.org/wiki/Schematron">Schematron</a></em>. With Schematron, you created a list of falsifiable assertions about a piece of content, and the computer ran down the list (however long it was) and merrily evaluated the content against your assertions.</p>

<p>Although it didn’t require any formal programming knowledge to run those tests, it did mean learning a fairly arcane notation for translating a natural-language assertion like “every figure MUST be immediately followed by a caption” into a formal test the computer could perform.</p>
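<p>To show the idea without the arcane notation, here is roughly what one of those assertions looks like as an executable test — sketched in Python rather than actual Schematron, with hypothetical element names:</p>

```python
import xml.etree.ElementTree as ET

def check_figure_captions(xml_text: str) -> list[str]:
    """Assertion: every <figure> must be immediately followed by a <caption>.
    Returns human-readable failures; an empty list means the content passes."""
    root = ET.fromstring(xml_text)
    failures = []
    for parent in root.iter():
        children = list(parent)
        for i, child in enumerate(children):
            if child.tag != "figure":
                continue
            nxt = children[i + 1] if i + 1 < len(children) else None
            if nxt is None or nxt.tag != "caption":
                failures.append(f"figure {child.get('id')!r} is not followed by a caption")
    return failures

# A passing document and a failing one:
check_figure_captions("<ch><figure id='f1'/><caption>Fig 1</caption></ch>")  # []
check_figure_captions("<ch><figure id='f2'/><p>no caption here</p></ch>")    # one failure
```

<p>Swapping checklists then just means swapping which list of assertions you run against the content.</p>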

<figure class=""><img src="/img/schematron-rules.svg" alt="Three publishing rules translated from natural language into Schematron XPath assertions" /><figcaption>
      Image credit: one of my AI agents via Claude Code

    </figcaption></figure>

<p>But it was easy to swap out which checklist of assertions you used, and with a bit of practice, even an editor with no programming experience could learn to write and revise those tests, improving and adapting them over time.</p>

<p>In hindsight, this was <strong>one example of a bridge between a person using natural language to describe a problem, and expressing that problem in a consistent, verifiable way for a computer</strong>.</p>

<p>Because until very recently, to explain to a computer how to help us, we had to learn <em>how to talk in ways that the computer could understand</em>. While that was possible using a “real” computer language, it could also be done with tools like AppleScript, or even recording a “macro” for the computer to play back again, over and over.</p>

<p>Regardless of the tool, the challenge was the same: turn fuzzy natural language into unambiguous computer instructions.</p>

<p>The good news is that now LLMs <strong>can understand our fuzzier language directly</strong>, and then <em>they</em> turn that intention into practical actions you see happen on your screen.</p>

<p>But there’s a catch: because the agent <em>seems to understand</em> that fuzzy language, it’s easy to skip a critical step — ensuring clear verification criteria. Without it, the agent will go off and do the work as best it can, leaving <em>you</em> to spot incorrect dates, or note when it said one thing but actually did another.</p>

<p>The gap used to be comprehension: the computer couldn’t understand what we wanted, so we had to learn its language. Now it’s verification: the computer understands what we want but can’t confirm it delivered.</p>

<p>Humans have already developed very effective ways of coping with this problem. It’s why airline pilots who’ve flown the same type of planes for 20 years still follow pre-flight checklists. It’s the whole conceit behind <em><a href="https://bookshop.org/p/books/the-checklist-manifesto-how-to-get-things-right-atul-gawande/6daf086d5c79b06d">The Checklist Manifesto</a></em>: offload the burden of verification to externally defined criteria. “Are we ready to take off?” is a riskier question to ask than “Can you prove you completed every step in this checklist?”</p>

<p>When we wrote precise, pedantic instructions, ambiguity had nowhere to hide. Now the input is fuzzy and the output looks confident — which is exactly when wrong dates, false claims, and quiet mistakes slip through, especially outside of coding, where there’s no test suite to catch them. The fix isn’t to hope the agent gets it right. It’s to tell the agent how to check.</p>]]></content><author><name>Andrew Savikas</name></author><summary type="html"><![CDATA[What a 20-year-old XML validation tool taught me about getting reliable results from AI]]></summary></entry><entry><title type="html">What Developers Do Now with AI, You’ll Do Next</title><link href="https://andrewsavikas.com/what-developers-do-now-with-ai/" rel="alternate" type="text/html" title="What Developers Do Now with AI, You’ll Do Next" /><published>2026-02-15T00:00:00+00:00</published><updated>2026-02-15T00:00:00+00:00</updated><id>https://andrewsavikas.com/what-developers-do-now-with-ai</id><content type="html" xml:base="https://andrewsavikas.com/what-developers-do-now-with-ai/"><![CDATA[<p>In April 2007, Drew Houston posted his <a href="https://www.ycombinator.com/apply/dropbox">Y Combinator application</a> for a new product called Dropbox. He described it as “taking the best elements of <em>subversion</em>, <em>trac</em> and <em>rsync</em> and making them ‘just work’ for the average individual or team.” He knew exactly what he was building on — <em>rsync</em>, the command-line file synchronization tool that’s been shipping with Unix systems since 1996. Houston’s pitch was that his little sister could use Dropbox to keep track of her high school term papers without burning CDs or carrying USB sticks — and definitely without <em>rsync</em>.</p>

<p>A <a href="https://news.ycombinator.com/item?id=9024">Hacker News commenter</a> offered this feedback: you could already build such a system yourself “quite trivially” by getting an FTP account, mounting it locally with <em>curlftpfs</em>, and then using <em>subversion</em> or <em>cvs</em>.</p>

<p>That wasn’t wrong! The technology existed!</p>

<p>But the gap between “exists for people who know what <em>curlftpfs</em> means” and “works for a high schooler’s term papers” turned out to be worth about $12 billion.</p>

<p>Early reactions to Slack followed the same script: “it’s just IRC,” as if decades of chat-protocol history meant a more accessible tool couldn’t matter to a far wider audience.</p>

<p>Indeed, for more than 30 years now, one of the best ways to see the future of knowledge work has been to watch how software developers work.</p>

<p>Their job may be to write software, but they tend to <em>also</em> write <em>other software</em> that makes it more efficient to write software. And then they write more software still, to help with all the <em>other</em> work that goes into writing software, like collaborating with colleagues or managing schedules and plans.</p>

<p>And it’s those broader tools — the ones applicable to other kinds of knowledge work — that offer the best preview of the future for the rest of us. Even when the future looks, at first, like something only a developer could love.</p>

<figure class=""><img src="/img/irc-vs-slack.png" alt="The same conversation in IRC (2007) vs. Slack (2013) — same people, same skepticism, same idea." /><figcaption>
      Image credit: one of my AI “agents” via Claude Code

    </figcaption></figure>

<p>But for those willing to spend effort learning how developer tools work <em>before</em> they cross that gap to mainstream, <strong>there is competitive advantage</strong>.</p>

<p>I experienced this firsthand 20 years ago at O’Reilly Media, where I helped build an <a href="https://apeth.net/matt/iosbooktoolchain.html">industry-leading publishing toolchain</a> based on the observation that much of a developer’s workflow is around writing, editing, and collaborating on collections of complex, long-form text documents. It turns out that what works for a “codebase” works exceptionally well for a “book” too (especially if that book’s authors happen to also be software developers!).</p>

<p>That lesson has stuck with me throughout my career: pay attention to the tools that software developers are using, because with a bit of work they can often be <em>incredibly</em> powerful when applied to other kinds of knowledge work.</p>

<h2 id="same-tune-new-ai-instrument">Same tune, new (AI) instrument</h2>

<p>Over the past few months, I’ve noticed many of the smartest engineers I know all talking more and more about how transformative their work was becoming with <a href="https://www.anthropic.com/claude-code">Claude Code</a> (and similar LLM tools).</p>

<p>And I kept hearing one specific theme over and over — that <a href="https://github.com/jlevy/speculate/blob/main/about/lessons_in_spec_coding.md">the work had shifted</a> from thinking about and working <em>primarily</em> on code, to thinking about and working <em>primarily</em> on <em>other</em> kinds of text documents: specifications, prompts, test plans. “With the right structure and documentation, these things can write very useful software” has been the theme.</p>

<p>I knew I’d heard that tune before.</p>

<p>They are <em>writing</em>. They are <em>planning</em>, they are <em>specifying</em>, they are <em>reviewing plans and specs and applying their judgment to make decisions and corrections</em>. They are <em>revising their instructions and feedback to provide better feedback to their workers</em>. If that’s not a description of modern knowledge work, I don’t know what is.</p>

<p>For 30 years, the pattern has been the same — developers build it, everyone else gets it later, polished and packaged. But this time the gap might be shorter than you think, because the work developers are doing with these tools right now isn’t primarily coding. It’s <em>managing knowledge and managing teams</em>.</p>

<p>It’s <strong>your job</strong>. And you already know how to do it.</p>

<p>That’s why I’ve jumped into the deep end, using these tools as much like developers use them right now as I can, because that’s the way to understand how <em>all</em> of us will be using these tools in the future.</p>

<p>I haven’t been this excited about the future of knowledge work in decades, and if all you’re seeing and hearing is anchored to the extremes (“AI is Evil!” vs. “AI is magic!”) then you’re missing the messy middle where all of the <strong><em>real</em></strong> work that affects us all is happening.</p>

<hr />

<p><strong><em>P.S.</em></strong> I’m planning to share more about what I’m learning, including what is (and what isn’t) different about these tools compared with prior ones. I’m also taking a cue from Anne-Laure Le Cunff’s excellent <a href="https://nesslabs.com/book"><em>Tiny Experiments</em></a> and writing out loud about all of it. It’s been a while since I’ve done this, but some things are worth figuring out in public. This is the first post. More to come.</p>]]></content><author><name>Andrew Savikas</name></author><summary type="html"><![CDATA[For 30 years, the pattern has been the same — developers build it, everyone else gets it later, polished and packaged. But this time the gap might be shorter than you think.]]></summary></entry><entry><title type="html">Remembering Ben Carnevale</title><link href="https://andrewsavikas.com/remembering-uncle-ben/" rel="alternate" type="text/html" title="Remembering Ben Carnevale" /><published>2025-06-15T00:00:00+00:00</published><updated>2025-06-15T00:00:00+00:00</updated><id>https://andrewsavikas.com/remembering-uncle-ben</id><content type="html" xml:base="https://andrewsavikas.com/remembering-uncle-ben/"><![CDATA[<p><em>I was honored to be asked to give the eulogy for my Uncle Ben at his funeral mass on June 14, 2025. Ben was a very successful entrepreneur, but that was the last thing he wanted people to know about him. What mattered to him was all the work he did to help those around him, and the lines out the door at his wake were a testament to how successful he was at doing just that. Below is the full text of the eulogy, with a few minor details changed for privacy. Happy Father’s Day, Uncle Ben, we already miss you dearly.</em></p>

<hr />

<p>719 Oakmont Drive in Downers Grove was an especially lucky place to grow up.</p>

<p>You see, in my case that particular house came with an entire extra family, right next door over at number 725. Three cousins were really extra siblings to play with, some cool bonus aunts and uncles came with the package too, and even extra grandparents who would come to your birthday parties, bearing delicious homemade mostaccioli and meatballs.</p>

<p>And of course there was also an extra set of parents there to offer love and support, and to rely on and learn from.</p>

<p>So even when my own family changed, and my father moved far away, I felt very lucky to still have Uncle Ben, Auntie Mar, and the rest of the Carnevales, right next door.</p>

<p>In my memories from that time is the way that when Uncle Ben spoke about almost anything he was up to, he made it all seem so … <em>interesting</em>.</p>

<p>He’d show me all the different speakers and electronics stacked neatly on the shelves in his basement workshop, enthusiastically explaining the difference between a woofer and a tweeter.</p>

<p>He’d regale us with stories of adventures in what to me at the time sounded like truly exotic places, like Ireland and Japan, where he often traveled for business.</p>

<p>I remember how cool it seemed that he’d bought his own tuxedo, because he had so many occasions to wear a tuxedo that it didn’t make sense to keep renting them.</p>

<p>Uncle Ben clearly enjoyed pretty much whatever he was doing, and for a long time I thought he was <em>so lucky</em> to have found fulfilling work and a range of skills and interests that made him so happy.</p>

<p>But I’ve recently realized something about Uncle Ben.</p>

<p>I see now that his “luck” was in fact a choice.</p>

<p>A choice to live a life full of joy, and love, and optimism, and most of all service. A life that brightened and improved the lives of all of those around him.</p>

<p>But don’t take my word for it. If you’re like me, you’ve spent a lot of time over the past few days looking at pictures of Ben. And I hope you look at even more, because when you do you see that he’s not, as so many of us do, just smiling for the camera. He was <em>already</em> smiling, because he’s joyful, and happy, and content with where he is and who he’s with – someone just happened to catch it on film. You can see it over and over – it’s right there in that unmistakable twinkle in his eye that could light up whatever room he happened to step into.</p>

<p>There’s something else you can see in those pictures, besides the joy. And that’s pride. And I don’t mean pride in any boastful sort of way. Quite the opposite. It’s how proud he was of his family, and as his family expanded with the weddings and then the grandkids, and then the weddings <em>of</em> the grandkids, that pride and that smile and that twinkle grew bigger and brighter too.</p>

<p>Having children of my own has taught me the truth in the adage that “once a parent, always a parent”. In Uncle Ben’s case, he and Auntie Mar never stopped being that extra set of parents for me. He’d regularly send me articles and videos, and would ask me questions about my job that made it clear he’d done his homework. On the day of the Boston marathon bombing I arrived home to find an email from Uncle Ben, checking in to make sure we were all OK.</p>

<p>I’ve spent a lot of time thinking about Uncle Ben over the past few weeks, and I simply don’t remember ever feeling anything but genuine love and kindness and support and strength. (And while I don’t mean strength in the physical sense, it’s also true that I was pretty sure he’d break a finger every time he shook my hand!)</p>

<p>But what’s so remarkable about Ben is how much he worked to direct all of that love, kindness, support, strength, talent, and wisdom toward so many of those around him.</p>

<p>Over the years, and without any fanfare, Ben helped so many families who weren’t quite as lucky as mine was in that house on Oakmont Drive, and who desperately needed an Uncle Ben of their own.</p>

<p>He helped a woman that had lost her husband and along with him the means to support her family, spending two years mentoring her, teaching her how to budget, and helping the whole family find sustainable housing. And he kept doing that over and over as part of the Bridge Families program.</p>

<p>And like a true entrepreneur, he spotted an unmet need when he learned that none of the area food pantries were especially useful for someone who worked the day shift, so he organized a group of men from church to start an evening food pantry. And for those who couldn’t make it in, on Sundays he’d drive the food to them.</p>

<p>I joke about Uncle Ben being an “extra” dad, but the truth is that he was always just as much a role model as my own father. Watching him live a joyful family life alongside his best friend Marilyn taught me so much about how to build a marriage and a family of my own. Especially that it took work, and time, and love, and attention, but that if you did it right, you’d get back so much more than you put into it.</p>

<p>Uncle Ben was there for me my entire life, through thick and thin, always ready with a beaming smile, a warm hug, and the reassuring strength of someone who knows for certain what truly matters most in life.</p>

<p>He set an <em>incredible</em> example for what it means to be an exceptional father, husband, brother, uncle, grandfather, and all-around good man, and it’s a privilege to be standing here today to honor and remember him.</p>

<p>Everyone deserves someone like Ben in their life. Someone to love you and support you and always be in your corner, ready with a kind word or a warm embrace, or just a smile to cheer you up and remind you without saying a word that things really are going to be OK.</p>]]></content><author><name>Andrew Savikas</name></author><summary type="html"><![CDATA[I was honored to give the eulogy for my uncle, Ben Carnevale, at his funeral mass on June 14, 2025]]></summary></entry><entry><title type="html">SCARF: The Mental Model that Can Help you Communicate More Mindfully</title><link href="https://andrewsavikas.com/scarf-model-communicate-mindfully/" rel="alternate" type="text/html" title="SCARF: The Mental Model that Can Help you Communicate More Mindfully" /><published>2020-04-13T00:00:00+00:00</published><updated>2025-06-19T00:00:00+00:00</updated><id>https://andrewsavikas.com/scarf-model-communicate-mindfully</id><content type="html" xml:base="https://andrewsavikas.com/scarf-model-communicate-mindfully/"><![CDATA[<p>For nearly a decade now, I’ve found myself coming back again and again (and regularly recommending to others) a fantastic framework I picked up from David Rock’s <em><a href="https://bookshop.org/p/books/your-brain-at-work-revised-and-updated-strategies-for-overcoming-distraction-regaining-focus-and-working-smarter-all-day-long-david-rock/19344147?aid=80144&amp;ean=9780063003156&amp;listref=my-favorite-books-about-business-finance-and-investing&amp;next=t" target="_blank">Your Brain At Work</a></em>. The book is full of powerful tools for understanding the neurological reasons we act and feel the way we do on the job, but one stands out for how it helps better understand your own reactions to circumstances, as well as how your words and behaviors affect others. He calls it the “SCARF” model, and the acronym is short for five dimensions that strongly influence our state of mind:</p>

<ol>
  <li><strong>Status</strong>. Do we feel valued and important?</li>
  <li><strong>Certainty</strong>. How sure are we about what’s going to happen next?</li>
  <li><strong>Autonomy</strong>. How much control do we have over our circumstances?</li>
  <li><strong>Relatedness</strong>. Do we feel good about and close to the people we’re engaging with?</li>
  <li><strong>Fairness</strong>. Are we and those around us behaving and being treated fairly?</li>
</ol>

<p>Think of each as a scale from -10 to +10, with 0 being a neutral state. If something increases our feeling along one of those dimensions, the reading goes up, it feels good, and we want more of it, generating a “toward” response. On the other hand, if something reduces our feeling along one of those dimensions, we can quickly dip into negative territory, triggering an “away” response that increases anxiety—and often with it the “fight or flight” response ingrained so deeply in our lizard brain.</p>

<h2 id="the-dimensions-of-the-scarf-model-in-action">The dimensions of the SCARF model in action</h2>
<p>How does this work in practice? Here’s one example: remember when we all used to fly places all the time? You’ve likely experienced the dreaded tarmac delay. Alongside all the quotidian inconveniences of travel, why are those <strong>so</strong> uniquely infuriating? Let’s look at the experience along some of the dimensions of the SCARF model:</p>

<ul>
  <li><em>Status</em>. Even if you have the airline’s version of “status”, it’s not going to get you home any faster than anyone else who feels trapped like cattle on the plane.</li>
  <li><em>Certainty</em>. When will you take off? <em>Will</em> you take off? Will it be 10 minutes or 2 hours? What’s going on?</li>
  <li><em>Autonomy</em>. Do you have <em>any</em> control over the situation? Unless you’re a member of the crew, probably not any. The seatbelt sign is on, and you’re stuck.</li>
  <li><em>Fairness</em>. What did you ever do to deserve this? Why you, why this flight, why now of all times?</li>
</ul>

<p>With those four meters all firmly in the red, all we’re left to work with is the sense of Relatedness we get by commiserating with our fellow <del>prisoners</del> passengers by complaining about the situation!</p>

<h2 id="a-better-way-by-applying-the-scarf-model">A better way by applying the SCARF model</h2>
<p>Imagine yourself again on that same flight. You’ve just pushed back from the gate, and while taxiing toward the runway the plane stops and the pilot comes on the intercom, and this time she says:</p>

<p><em>“Ladies and gentlemen, this is your captain speaking. First off, on behalf of the entire crew, I want to thank you for being our passenger tonight. Our job is getting you where you need to go safely and quickly, and we take that job very seriously…” <strong>(Status)</strong></em></p>

<p><em>“Unfortunately because of some bad weather back in Boston, we’re not going to be able to take off for a while. Right now I hope it will be a brief delay, but there’s no way to know for sure. What I</em> can <em>promise you is that I’ll come back on this intercom at least every 15 minutes to give you an update, even if that’s just to say there’s no update…” <strong>(Certainty)</strong></em></p>

<p><em>“It’s frustrating for us too when this happens — we’re just as eager to get back home to our families and friends as all of you…” <strong>(Fairness)</strong></em></p>

<p><em>“As you can see the seatbelt sign is on, so it’s important that you stay seated and buckled in while we wait. But we know you weren’t expecting this delay either, so if you really need to use the lavatory, please ring your call button and we’ll do our best to help you out. And while we can’t start our beverage service until we’re up in the air, we’d be happy to bring you a cup of water if you’re thirsty, just ring that call button.” <strong>(Autonomy)</strong></em></p>

<p>Nothing has changed about the circumstances — you’re still sitting on that tarmac indefinitely and there’s nothing really that you can personally do to change that. But you can imagine you and your fellow passengers feeling a lot less stress and anxiety this time around.</p>

<h2 id="applying-the-scarf-model-at-work">Applying the SCARF model at work</h2>
<p>The SCARF framework is useful in two directions:</p>

<ol>
  <li>Better understanding your own responses to words, actions, and circumstances</li>
  <li>Better understanding (and influencing) how others respond to <em>your</em> words and actions</li>
</ol>

<h3 id="understanding-yourself">Understanding yourself</h3>
<p>If you notice yourself feeling anxious, frustrated, angry, or scared—or just like you want to leave the room—take a deep breath and see if you can identify which of the SCARF dimensions is at play. Do you feel treated unfairly? Like things are out of your control? Disrespected? The mere act of labeling our emotions can help engage the more rational and logical parts of our brain.</p>

<p>Chances are, you’re more sensitive to some of those dimensions than others. For example, maybe you get <strong>really</strong> upset when you feel like you’re not being treated fairly. Knowing that’s how you respond can help you identify when it’s happening <em>while</em> it’s happening (and you can do something about it).</p>

<p>(Yes, I know, I know, we’re <em>all</em> pretty anxious and scared these days, and that’s a good opportunity to use the SCARF model to think through which aspect of this pandemic thing is weighing on you the most—is it the uncertainty? the lack of control? The act of observing and labeling our emotions is often enough to moderate them. BTW it’s notable how many of us are actively compensating by trying to maintain or even increase our sense of Relatedness by baking together, playing games, and extra Zoom calls with friends and family.)</p>

<h3 id="understanding-others">Understanding others</h3>
<p>The other great way to use the SCARF model is when planning an important conversation with others. Beforehand, take a piece of paper and write the letters SCARF in a column down the page. Now think of one or two things you could say to the person that would increase the person’s feelings along that dimension.</p>

<p>You don’t need to say all of them, but this way you’ll have some talking points at hand that can help raise the odds of a positive outcome. As a bonus, spending a few minutes writing positive things down about the person you’re about to talk to will put you in a very positive frame of mind about that person!</p>]]></content><author><name>Andrew Savikas</name></author><summary type="html"><![CDATA[Add David Rock's brilliant “SCARF” framework to your toolkit to help drop the stress level on both sides of your next conversation]]></summary></entry><entry><title type="html">On Building a Daily Habit of Continuous Learning</title><link href="https://andrewsavikas.com/daily-habit-continuous-learning/" rel="alternate" type="text/html" title="On Building a Daily Habit of Continuous Learning" /><published>2019-02-01T00:00:00+00:00</published><updated>2019-02-01T00:00:00+00:00</updated><id>https://andrewsavikas.com/daily-habit-continuous-learning/</id><content type="html" xml:base="https://andrewsavikas.com/daily-habit-continuous-learning/"><![CDATA[<blockquote>
  <p>Education is what people do to you and learning is what you do to yourself. — Joi Ito</p>
</blockquote>

<p>While I adore that perfectly tweetable quote from <a href="http://www.ted.com/talks/joi_ito_want_to_innovate_become_a_now_ist/transcript">Joi Ito’s TED Talk</a>, I’ve come to appreciate something he says a bit later in the same talk as an incredibly concise prescription for success in your career (and life in general):</p>

<blockquote>
  <p>[I]t’s about stopping this notion that you need to plan everything, you need to stock everything, and you need to be so prepared, and [instead] focus on being connected, <strong>always learning, fully aware, and super present</strong>.” (Emphasis added)</p>
</blockquote>

<p>It’s no coincidence that that phrase is also an accurate description of how young children naturally engage with the world around them. In fact, John Seely Brown, in a talk called <a href="http://www.johnseelybrown.com/el.pdf">Cultivating the Entrepreneurial Learner in the 21st Century</a> makes the case that we finally have the tools and technology to help spread and scale the kind of play-driven education first popularized by Maria Montessori more than 75 years ago. He also provides a useful metaphor to help understand how those same technology changes are shrinking the shelf life of key professional skills:</p>

<blockquote>
  <p>We are moving away from a 20th century notion of learning as picking up a set of fixed assets to a 21st century notion of learning as constantly reinventing and augmenting your skills. In the past, your skillset was authoritative, transferred to you in delivery models — often called schooling — and had a wonderful scalable efficiency. How do we move to a model that requires participating in ever-changing flows of activities and knowledge?</p>
</blockquote>

<p>But much of the language we use to describe education and training remains firmly rooted in the asset-based approach Seely Brown describes: we talk about courses, and certificates, and degrees, all with the idea that learning is something to “complete”, with a defined endpoint. (<a href="http://infed.org/mobi/paulo-freire-dialogue-praxis-and-education/">Paulo Freire calls it the banking model of education</a>.)</p>

<p>By way of comparison, it would be absurd to think it was possible to “complete” things like: being fit, eating right, or being a good marriage partner or parent. The only real measure of success on those dimensions is a sustained commitment to constant improvement — wanting to be at least slightly better today than you were yesterday.</p>

<p>It’s time to treat learning and skill development the same way — as a <a href="http://www.fastcompany.com/3020758/leadership-now/why-deliberate-practice-is-the-only-way-to-keep-getting-better">habit of deliberate practice</a> to be cultivated and sustained for a lifetime — to keep learning in order to get a little bit better than you were yesterday.</p>

<p>Most of what’s written about deliberate practice is about things like athletic performance, musical instruments, or even computer programming: “<em>Do the thing for <a href="http://gladwell.com/outliers/the-10000-hour-rule/">around 10,000 hours</a> and you’ll be an expert</em>.” But developing a “continuous learning” mindset through a habit of daily learning isn’t about repeating a behavior to become proficient, it’s about building a positive habit that can crowd out negative ones, as well as have positive side effects on other parts of your work and life.</p>

<p>It’s just like how exercising for 30 minutes a day isn’t at all about becoming an expert on the elliptical machine — the payoff comes from all the benefits you get to enjoy during the <em>other</em> 1,410 minutes of your day. You sleep better, you feel better, you look better, and you’re just better equipped to handle the challenges life throws at you every day.</p>

<p>(And to carry the analogy further, formal courses and certifications will always have a place in learning, but are more like the occasional 5K run or half-marathon — not something you’d want to do every day, but when you do you’re going to perform much better thanks to those daily workouts.)</p>

<p>One of the very best sites on the Web is Maria Popova’s <a href="http://www.brainpickings.org/">Brain Pickings</a>. In a <a href="http://www.brainpickings.org/index.php/2012/09/25/william-james-on-habit/">fantastic post about habits</a>, she quotes William James, writing more than a century ago:</p>

<blockquote>
  <p>When we look at living creatures from an outward point of view, one of the first things that strike us is that they are bundles of habits.</p>
</blockquote>

<p>If you’re like me, a fairly common “bundle of habits” in the modern (and mobile) age is to check email, peruse Twitter, look at Facebook, scan Slack and swing by Google Analytics to see how yesterday’s traffic numbers looked. I don’t think it’s controversial to call that a recipe for mindless multitasking and <a href="http://lindastone.net/qa/continuous-partial-attention/">continuous partial attention</a>. But being tethered to your phone also means you’re only a few taps away from an incredible wealth of knowledge and training to improve yourself.</p>

<hr />

<p>As a way of “eating our own dogfood” when I was CEO at <a href="https://safaribooksonline.com">Safari Books Online</a> (now the O’Reilly learning platform), we instituted a mandatory learning and development challenge, specifically around using our mobile apps for daily learning. I’m happy to say that our results were positive and truly profound. Yet we accrued no certifications, completed zero courses, and unlocked no badges; rather we saw a consistent and widespread <a href="https://www.linkedin.com/pulse/lifelong-learning-beware-plateaus-michael-conner">shift in attitude and mindset</a> toward continuous improvement and a genuine openness to new ideas and ways of thinking. As CEO, I couldn’t have asked for a better ROI than that.</p>

<p>But don’t take my word for it. Give it a try yourself: commit to spending 10 minutes a day for the next 10 days watching or reading anything from a book or video about something you’d like to improve at in your career. If you don’t know where to start, here are 10 high-quality book summaries from my friends at one of my former employers, <a href="https://getabstract.com/">getAbstract</a>, each of which can be read in 15 minutes or less from your phone or laptop:</p>

<ul>
  <li><a href="https://www.getabstract.com/en/summary/your-brain-at-work/23518">Your Brain at Work</a></li>
  <li><a href="https://www.getabstract.com/en/summary/the-power-of-habit/17285">The Power of Habit</a></li>
  <li><a href="https://www.getabstract.com/en/summary/exponential-organizations/23451">Exponential Organizations</a> (one of Mark Zuckerberg’s book club picks)</li>
  <li>Nir Eyal’s <a href="https://www.getabstract.com/en/summary/hooked/22993">Hooked: How to Build Habit-Forming Products</a></li>
  <li><a href="https://www.getabstract.com/en/summary/leadership-and-the-new-science/84">Leadership and the New Science</a></li>
  <li><a href="https://www.getabstract.com/en/summary/the-checklist-manifesto/13575">The Checklist Manifesto</a></li>
  <li><a href="https://www.getabstract.com/en/summary/management/13805">Peter Drucker’s Management</a></li>
  <li><a href="https://www.getabstract.com/en/summary/the-fifth-discipline/1257">The Fifth Discipline</a></li>
  <li>David Allen’s classic <a href="https://www.getabstract.com/en/summary/getting-things-done/1576">Getting Things Done</a></li>
  <li><a href="https://www.getabstract.com/en/summary/scaling-up/24982">Scaling Up</a></li>
</ul>

<hr />

<p><em>This post was <a href="https://medium.com/@andrewsavikas/on-building-a-daily-habit-of-continuous-learning-82ef77a8aff9">originally published on Medium</a>.</em></p>]]></content><author><name>Andrew Savikas</name></author><summary type="html"><![CDATA[It’s time to treat learning and skill development the same way — as a habit of deliberate practice to be cultivated and sustained for a lifetime — to keep learning in order to get a little bit better than you were yesterday.]]></summary></entry><entry><title type="html">When Tech Goes From Disruption to Deployment</title><link href="https://andrewsavikas.com/cresting-wave-technology/" rel="alternate" type="text/html" title="When Tech Goes From Disruption to Deployment" /><published>2017-03-07T00:00:00+00:00</published><updated>2017-03-07T00:00:00+00:00</updated><id>https://andrewsavikas.com/cresting-wave-technology/</id><content type="html" xml:base="https://andrewsavikas.com/cresting-wave-technology/"><![CDATA[<blockquote>
  <p>Technology is anything that was invented after you were born, everything else is just stuff. — Alan Kay</p>
</blockquote>

<p>I’ve spent much of my career at the intersection of publishing and technology, and it took me many years to realize that describing it that way implies “publishing” and “technology” are two different things, ignoring that almost everything about “publishing” as we commonly know it today was at one time just as much a “technology” as an iPhone.</p>

<p>In his essay, <a href="http://web.stanford.edu/dept/HPS/HistoryWired/Landow/LandowTwentyMinutes.html">Twenty Minutes into the Future</a>, George P. Landow, writing around the dawn of the World Wide Web, describes the situation eloquently:</p>

<blockquote>
  <p>First, one encounters a tendency among many humanists contemplating the possibility that information technology influences culture to assume that before now, before computing, our intellectual culture existed in some pastoral nontechnological realm. Technology, in the lexicon of many humanists, generally means “only that technology of which I am frightened.” In fact, I have frequently heard humanists use the word technology to mean “some intrusive, alien force like computing,” as if pencils, papers, typewriters, and printing presses were in some way natural. Digital technology may be new, but technology, particularly information technology, has permeated all known culture since the beginnings of human history. If we hope to discern the ways in which we might move beyond the book, we must not treat all previous information technologies of language, rhetoric, writing, and printing as nontechnological.</p>
</blockquote>

<p>(My favorite examples came from researching the history of publishing in medieval Europe for a conference talk in Frankfurt, and discovering that <strong>something as fundamental as word spacing within books developed over the course of 300 years</strong>, starting around 1100. And it took hundreds of years more for punctuation to spread — the hyphen first appeared in 11th-century Europe, and took another 200 years to reach England. The colon didn’t appear until the late 14th century!)</p>

<p>Obviously many technological innovations spread much faster now, but only because they can do so using the now-boring communications and transportation infrastructure of <em>previous</em> waves of innovation.</p>

<p>And that concept of sequential waves of technological innovation — first emerging and disrupting the status quo, and then gradually becoming the status quo — is familiar to students of <a href="https://en.wikipedia.org/wiki/Creative_destruction">Schumpeter’s “creative destruction.”</a> But while it’s useful to know that these waves happen, even more useful from the perspective of planning and investing is spotting when one may be about to crest and the next begin.</p>

<p>In her (extraordinary) book, <a href="https://ceobookshelf.co/technological-revolutions-financial-capital-review/">Technological Revolutions and Financial Capital</a>, Carlota Perez outlines a vocabulary and framework for describing and understanding successive waves of technological innovation:</p>

<blockquote>
  <p>This book holds that the sequence: technological revolution — financial bubble — collapse — golden age — political unrest, recurs about every half century and is based on causal mechanisms that are in the nature of capitalism.</p>
</blockquote>

<p>The first four of those waves began with the Industrial Revolution in 1771, followed by: the Age of Steam &amp; Railways; the Age of Electricity and Heavy Engineering; and the Age of Oil, the Automobile and Mass Production. She labels the fifth wave — well underway at the time of her writing in 2003 — the Age of Information and Telecommunications, noting its birth as the 1971 introduction of the Intel microprocessor.</p>

<p>If these waves tend to last about 50 years, then the theory would suggest it’s just about time to look for clear signs that we’re far into what Perez labels the “Deployment” stage:</p>

<blockquote>
  <p>When an innovation is within the natural trajectory of the prevailing paradigm, then everybody — from engineers through investors to consumers — understands what the product is good for and can probably suggest what to improve. Even such minor and doubtfully useful products as the electric can-opener or the electric carving knife are thought worth designing, producing, buying and using in a world that is already accustomed to dozens of electrical appliances in the kitchen. The same happens with the successive applications of the general principles of the prevailing paradigm. In the case of continuous mass production, for example, after manufacturing had fully developed all its principles and refined its organizational practices, the task of applying the model to any other activity became straightforward. Mass tourism, of the ‘assembly-line’ type, moving people from airplane to bus, from bus to hotel and from hotel to bus, was obvious to conceive, easy to put into practice and readily accepted by consumers at the time.</p>
</blockquote>

<p>Or to borrow from Kay’s phrasing, Deployment is when yesterday’s “technology” starts to become tomorrow’s “just stuff”.</p>

<p>So what’s the evidence that we’re now squarely in that “Deployment” stage, and therefore nearing the dawn of the <em>next</em> technological revolution (and also primed for some, ahem, political unrest)?</p>

<h2 id="1-technology-is-fully-diffusing-into-every-industry-and-corporate-department">1. “Technology” is fully diffusing into every industry and corporate department</h2>

<p>I’ve spent a fair bit of time lately in what’s known as the “Ed Tech” market (short for “Educational Technology”). And that’s meant keeping an eye on the competitive landscape, which at times seems to be changing every day with new products, services, and startups chasing opportunities. A useful tool for navigating those changes is one of the many landscape maps provided by bloggers, analysts, and investors, <a href="https://www.insidehighered.com/quicktakes/2017/02/24/ed-tech-landscape-2017">like this one</a>:</p>

<figure class=""><img src="/img/higher_ed_landscape.jpg" alt="market map of vendors in higher-ed landscape" /></figure>

<p>And while these kinds of maps have been around for a long time, during the past few years something has changed, and they have proliferated explosively. (There are many others even just for Ed Tech, for example <a href="http://blog.degreed.com/infographic-the-learning-content-landscape/">here’s one from the folks at Degreed</a>.) You can now find one of these “market maps” for just about any traditional corporate function. Here’s <a href="https://www.cbinsights.com/blog/sales-tech-startup-market-map/">Sales</a>:</p>

<figure class=""><img src="/img/sales_landscape.png" alt="market map of vendors in sales tech" /></figure>

<p>And <a href="http://chiefmartec.com/2016/03/marketing-technology-landscape-supergraphic-2016/">Marketing</a>:</p>

<figure class=""><img src="/img/marketing_landscape.jpg" alt="market map of vendors in marketing tech" /></figure>

<p>And <a href="http://www.capterra.com/human-resource-software/hr-landscape">HR</a>:</p>

<figure class=""><img src="/img/hr_landscape.png" alt="market map of vendors in hr tech" /></figure>

<p>And even <a href="http://www.accountexusa.com/ecosystem/">Accounting</a>:</p>

<figure class=""><img src="/img/accounting_landscape.png" alt="market map of vendors in accounting tech" /></figure>

<p>You can also see it across major segments of our economy, like <a href="http://fintechranking.com/2016/08/04/infographics-global-fintech-landscape/">Finance</a> (aka “Fintech”), <a href="https://www.cbinsights.com/blog/travel-tech-market-map/">Travel</a>, and <a href="https://www.cbinsights.com/blog/commercial-real-estate-tech-market-map-company-list/">Commercial Real Estate</a>. And to help you navigate all of these landscape maps, the folks at CB Insights have compiled <a href="https://www.cbinsights.com/blog/industry-market-map-landscape/">this helpful list</a> of 45(!) different Market Maps.</p>

<p>Technology is no longer a separate department or function but is now thoroughly permeating our entire economy, and as Perez would put it, the “new” paradigm is becoming just “common sense”.</p>

<h2 id="2-technology-is-overtaking-previous-economic-growth-engines">2. “Technology” is overtaking previous economic growth engines</h2>

<p>Another hallmark of each successive “surge” of innovation and then its diffusion into the wider economy is when the “new” economy companies begin overtaking the “old” ones as the engines of overall economic growth. Here’s a chart from Perez’s book showing how Oil and Auto firms (the “technology” companies of their day) displaced steel over a 30-year period:</p>

<figure class=""><img src="/img/growth_engines.png" alt="Figure 4-4 from Technological revolutions and financial capital showing the top 10 firms in US by asset size in 1917, 1930, and 1948, showing how the firms from the 4th wave of technological revolution -- oil and automobile -- overtook the 3rd wave steel industry as the growth engine of the US economy" /></figure>

<p>Now consider a similar look at the top of the S&amp;P 500 today, where the takeover by companies of the 5th surge is well underway:</p>

<figure class=""><img src="/img/s_and_p.png" alt="List of S&amp;P 500 as of January 2017 showing how tech firms like Apple and Microsoft have nearly overtaken oil and auto firms like Exxon" /></figure>

<p>VC maven Mary Meeker’s <a href="http://www.kpcb.com/internet-trends">2016 Internet Trends report</a> was her usual fascinating snapshot into what’s happening on and around the Web, and it included a useful comparison of “old” vs “new” economy companies and their valuations relative to their revenue:</p>

<figure class=""><img src="/img/meeker_old_new.png" alt="Slide from Mary Meeker showing relative market caps of old vs new media companies, like Netflix vs Viacom and Amazon vs Walmart. Viacom has higher revenue but is shrinking, whereas Netflix has lower revenue but is growing very fast" /></figure>

<p>(One obvious interpretation of the market-cap-to-revenue disparity is that investors believe the best days are ahead for Amazon and Netflix, and likely the opposite for Wal-Mart and Viacom.)</p>

<h2 id="3-todays-technology-becomes-tomorrows-utilities-and-infrastructure">3. Today’s “technology” becomes tomorrow’s utilities and infrastructure</h2>

<p>One of my favorite strategic analysis tools is Simon Wardley’s eponymous <a href="http://blog.gardeviance.org/">mapping framework</a>. A premise of the tool is that any given technology will eventually follow this path:</p>

<ol>
  <li>Genesis</li>
  <li>Custom Built</li>
  <li>Product (and Rental)</li>
  <li>Commodity</li>
  <li>Utility</li>
</ol>

<p>Not every technology proceeds through these stages at the same pace, and some seem to get stuck along the way, but overall it’s quite a useful model.</p>

<p>So it was with amusement that I read the following passage about Amazon from <a href="http://www.economist.com/news/business/21717421-three-financial-sanity-tests-whether-there-bubble-are-technology-firms-madly">an Economist column on whether tech firms are currently overvalued</a>:</p>

<blockquote>
  <p>(Amazon) is one of the most optimistically valued firms, with 92% of its current worth justified by profits after 2020. Outside investors have a lot at stake because it is huge, with a market value of $410bn. About a third of this value is justified by its profitable cloud-computing arm, AWS. But the rest of the firm, which straddles e-commerce, television and films, as well as logistics, barely makes money despite generating large sales. Nor is it growing particularly fast for its industry. <strong>To justify its valuation you need to believe that it becomes a sort of giant utility for e-commerce</strong> which by 2025 cranks out profits of around $55bn a year, or probably more than any other firm in America. (Emphasis added)</p>
</blockquote>

<p>The reason I was amused is that “giant utility” is exactly how many investors are valuing Amazon, and that’s in part because <a href="http://money.cnn.com/2017/03/02/technology/amazon-s3-outage-human-error/">it’s already behaving that way</a>:</p>

<blockquote>
  <p>According to Synergy Research Group, AWS owns 40% of the cloud services market, meaning it’s responsible for the operability of large swaths of popular websites. So if AWS goes down, it takes a huge number of businesses, apps, and publishers with it.</p>
</blockquote>

<h2 id="the-map-is-not-the-territory">The map is not the territory</h2>

<p>While Schumpeter, Perez, Wardley, and others offer incredibly useful tools for understanding the interplay between technological innovation and economic activity, ultimately they are just tools. We may be “due” for the end of one cycle and the beginning of the next, but reality is of course often far less predictable (and ultimately far more interesting!) than a model.</p>

<p>To say that we are nearing the end of one of these 50ish-year cycles is not to imply that “technology” as we commonly mean it today will disappear, any more than we have said goodbye to mass production or steel or electricity. Rather, the nearly ubiquitous infrastructure of the Internet, the World Wide Web, pervasive mobile broadband, and the internet of things, including mobile computers, phones, and sensors on nearly every person and in many buildings and vehicles, will likely be part of the core “infrastructure” for whatever revolution comes next.</p>

<p>And as the father of two children who could navigate a smartphone before they could walk, I’m exceptionally excited to see what kinds of things <em>they</em> will call “technology”.</p>]]></content><author><name>Andrew Savikas</name></author><summary type="html"><![CDATA[3 Reasons why we’re well into the final 'deployment' stage of the technological revolution begun nearly 50 years ago with the first microprocessor, and technology as we know it is about to get boring (just in time for the next big wave...)]]></summary></entry></feed>