The Three Tiers of AI Adoption: A Practitioner's Framework
The tool doesn't need to last forever; it just needs to last longer than the conversation.
After considerable time spent with these tools, I think it's time to talk about some of the challenges the IT community faces with AI adoption. This article represents my thoughts and opinions only.
Most AI adoption frameworks target the C-suite: consulting companies pitch decision makers, not IT, and tooling companies and AI providers sell up the org chart. I'm going to talk across the org instead, to fellow practitioners, about a framework for learning the tools while bypassing the hype from the consulting companies and the product companies alike.
Having lived in both product-centric and outcome-based ecosystems, I've found real benefit in knowing where you personally sit in the AI learning progression. A product-centric mindset has you chasing releases of your favorite ecosystem with blinders on to better options; an outcome-based mindset has you switching platforms to chase a feature. Understanding the advantages and disadvantages of each helps you navigate your own situation, and helps you judge which messages from leaders and vendors carry substance and which are mostly sales.
The three tiers to start with, at least where we are here in late March of 2026, are Conversation, Construction, and Agency. They overlap considerably, and there could be more, but I'm here to share what I have seen in the hope of making the benefits and drawbacks of these tools clear. I'll steer clear of endorsing or criticizing specific tools, but the consensus is pretty clear once you start using several of them and take off the vendor-specific glasses.
Conversation
Conversation is where many of us start: with the chat products from any number of vendors. Sometimes we download the app to give it a try; sometimes we poke and prod to see what it can do. I've heard all manner of prompts, and many heralded the rise of "prompt engineers" in the hope that this was the Next Big Thing. It turned out, as it almost always does, to be a stepping stone to the next tier.

Conversations can be as simple as a single question or as complex as a dialogue with markdown files as context. Some conversations ask the chat app to emulate a person or role, be it a boss, editor, critic, or customer, and these are extremely valuable and have lasting depth, especially once you start storing the context and conversation outcomes in permanent storage like markdown files. That becomes the gateway: it starts with copy/pasting markdown data, then moves into second-brain API integrations. The latter has you looking up API keys and doing more permanent work and data storage. This is where Conversation overlaps with Construction.
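To make the markdown-as-context idea concrete, here is a minimal sketch of assembling a chat request that carries a notes file as context. This assumes an OpenAI-style chat message format; the model name, role, and file path are placeholders for whatever vendor and second-brain setup you actually use.

```python
from pathlib import Path


def build_chat_payload(markdown_path: str, question: str,
                       role: str = "a critical editor") -> dict:
    """Assemble a chat-completion style request carrying a markdown file as context.

    The message shape follows the common OpenAI-style chat format; the model
    name and role string are placeholders, not a specific vendor's API.
    """
    context = Path(markdown_path).read_text(encoding="utf-8")
    return {
        "model": "your-model-here",  # placeholder: substitute your vendor's model
        "messages": [
            {
                "role": "system",
                "content": f"You are {role}. Use the notes below as context.\n\n{context}",
            },
            {"role": "user", "content": question},
        ],
    }
```

The same payload works whether you paste it into a chat app or POST it to an API endpoint, which is exactly the copy/paste-to-integration transition described above.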
Construction
Let's say you've had plenty of conversations with the chat app of your choice, and now you're starting conversations with markdown text and ending with saved markdown text. Generally, this is your first construction. You've relied on just the chat app for input and output, but integration with other tools via API calls now has you constructing tools and processes. Depending on how far you take this, you can keep using the chat app for reading or writing, but more likely you have ideas and have also started to vibe code. That term is often met with derision, but the idea is highly tempting and empowering, and these don't need to be enterprise apps; they can simply be tools for your own use.

The tool doesn't need to last forever; it just needs to last longer than the conversation. Much like a 3D printer can make a single part that could never justify a minimum order of 30,000 injection-molded parts, your tool can serve a specific use case that only you have. Does the tool need to last a week? A month? Maybe just a report cycle. If you're doing data processing, for example, and you know the input data well, the tool can be fairly reliable as long as you don't ask too much of it. Once you ask for more than a single use (or user), you will quickly find out why professional software design and QA are so important. But again, like the 3D printer, it exists in the space where a full factory can't reasonably exist, and there's plenty of room in that margin for value, with a skilled operator at the helm.

Once you have the logic for what your app needs to do, you're ready to explore the use cases, and depending on your IT background you can map them out and expand the durability and reliability of your tool. This is where we venture into the next tier: Agency.
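A single-use data-processing tool in this spirit might look like the sketch below: a hypothetical helper that totals an "amount" column per "category" in a report CSV whose shape you already know. The column names and file are invented for illustration; the point is that it does no validation beyond what that one input needs, because it's the 3D-printed part, not the injection mold.

```python
import csv
from collections import defaultdict


def totals_by_category(csv_path: str) -> dict[str, float]:
    """One-off report helper: sum an 'amount' column per 'category'.

    Built for one known input file, for one user, for one report cycle.
    No schema validation, no error handling beyond what that file needs.
    """
    totals: dict[str, float] = defaultdict(float)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            totals[row["category"]] += float(row["amount"])
    return dict(totals)
```

If a second user or a second data source ever shows up, that's the moment the paragraph above warns about: the tool graduates into needing real design and QA.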
Agency
Of the many facets of agency, the one I'll focus on here is the ability of an entity to effect an outcome. Simply put, we tend to think of people empowered to steer their own outcomes as having agency. What both top-down and bottom-up approaches to agency tend to miss is the expansion of a single person's ability to do more and have greater impact. Agentic AI is frequently sold as replacing employees, but the highly compelling aspect that gets glossed over is amplifying employees with context. This is part of the challenge we see today, with business value sold to leadership and features sold to everyone else. My argument is that if you've progressed through Conversation and Construction, you have already explored the nuances of your specific use case, and you have a reasonable enough understanding of your LLM's toolset to take the next step. That next step is action based on context, aimed at a specific outcome. The context can come from one or more tools: markdown notes, a stock price, the weather, traffic, or any number of API calls you can make from any number of platforms.
The example I like to use is from last year, when I had a challenging drive to make but couldn't seem to make sense of the traffic pattern. With help from a tool, I coded a Python app that ran in a Docker container. Every day it would read a config for the start and end of a time window, then use the Google Maps API to check the route between two GPS coordinates, stepping through the window every few minutes and telling me the best time to leave. I ran it for a month and noticed a couple of things: a $60 charge for all my API calls to Google, and the same general drop-off time for the drive each day. The code is here, but I'm not surprised it hasn't gotten wide acclaim; it was only written for me. This maps back to the tiers: I provided context in the chat app to form the basis of the tool (Conversation), ran some simple command-line tests (Construction), then deployed it in my environment to run and message me (Agency).
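The original app isn't reproduced here, but the stepping logic it describes can be sketched in a few lines. In this simplified version, `duration_fn` is a stand-in for the real lookup, which in my case was a Google Maps route query between two coordinates; the function just walks the configured window and keeps the departure with the shortest predicted drive.

```python
from datetime import datetime, timedelta


def best_departure(window_start: datetime, window_end: datetime,
                   step_minutes: int, duration_fn):
    """Step through a time window and return the (departure, duration) pair
    with the shortest predicted drive.

    duration_fn(dt) -> minutes is a stand-in for the real traffic lookup
    (a Google Maps route query between two GPS coordinates, in my version).
    """
    best = None
    t = window_start
    while t <= window_end:
        d = duration_fn(t)
        if best is None or d < best[1]:
            best = (t, d)
        t += timedelta(minutes=step_minutes)
    return best
```

Keeping the traffic lookup behind a plain function is also what makes the tool cheap to test from the command line (Construction) before it ever runs on a schedule and messages you (Agency).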
With the tools in place, you are able to guide your LLM through any number of logical steps or patterns for your use case. Get too ambitious and you leave gaps for exploitation or misbehavior (from both users and the LLM itself). Give the agent too little to do and you risk forgoing the chance at real impact on your life or work.
Many platforms now offer the ability to schedule tasks, which is an easy start toward something agentic. But keep in mind that, like any technology progression, what's groundbreaking today is commonplace tomorrow.
The key idea is understanding your progression - and that even tier-3 Agency work still derives its functionality from Conversation and Construction. Figure out where you are, and build from there.