Business · Agency Operations
Scoping AI Projects for Clients: The Questions That Prevent Expensive Mistakes
Most AI project failures start at the scoping stage. The client wants "AI integration." The agency quotes a price. Nobody defines what that actually means. Here's how to scope these projects properly.
Anurag Verma
7 min read
A client calls. They want AI integrated into their product. They’ve seen what the technology can do. Their competitor just launched something. The board is asking questions. Can you help?
You say yes. You quote a number. Three months later, everyone is frustrated.
This failure pattern has become common enough that we’ve given it a name internally: the “AI integration” trap. It happens when both sides agree to a project without defining what the project actually is. Standard software projects have this problem too, but AI projects have it worse — because clients and vendors both have unrealistic expectations of what’s possible, what’s reliable, and what it costs to operate.
Here’s the scoping process that catches the expensive mistakes before the contract is signed.
The First Problem: “AI” Is Not a Requirement
When a client says they want “AI integration,” that phrase could mean any of the following:
- A chatbot on their website
- An internal search tool over their documents
- Automated data extraction from incoming emails
- A product recommendation engine
- A content generation tool for their marketing team
- An API that classifies incoming support tickets
- A voice interface for their existing app
- An AI agent that takes actions in their system
Each of these is a completely different project. Each has different data requirements, different infrastructure, different reliability expectations, different maintenance costs, and different risks.
Your first job in scoping is to find out what the client actually wants to accomplish, not what technology they think they need.
Ask this early: “Walk me through a specific example of what would be different six months from now if this project is successful. What would an employee do differently? What would a customer experience differently?”
This question forces the client from technology aspirations to concrete outcomes. Concrete outcomes can be scoped. “We want AI” cannot.
Data: The Most Common Blocker
Before you quote any AI project, establish the data situation. Most AI features run on company data: documents, emails, product catalogs, customer records, support tickets. Whether that data is clean, accessible, and legally usable determines whether the project is feasible.
Ask these in every scoping call:
Where does the relevant data live? CRM? Google Drive? Email? SQL database? A mix of all four? Cross-system data is harder to use than data in one place.
How much is there? A thousand documents in one system is very different from a hundred thousand records spread across ten.
Is it clean? Unstructured text from legacy systems, inconsistent formats, and fields with missing values are all solvable — but they add scope to the project.
Who owns it? Customer data, employee data, and licensed content all have legal constraints. If the client hasn’t thought about this, raise it now. An AI feature trained on data the client doesn’t have the rights to is a liability.
Can we access it? API, database read replica, manual export, or full data migration are all different amounts of work. “We can export it to a CSV” is very different from “we have an API with real-time sync.”
Projects that hit the data wall mid-development are the most painful to restart. Surface this in week one.
Defining “Good Enough”
AI outputs are probabilistic. They’re not always right. This is different from traditional software, where a bug is a bug — either the code behaves correctly or it doesn’t.
Before writing a line of code, you need agreement on what acceptable performance looks like.
For a document Q&A feature: What fraction of questions should the system answer correctly? Is “correctly” judged by users or by you? What happens when it answers incorrectly — is there a fallback to human review?
For a classification tool: What precision and recall are required? A ticket routing system that misclassifies 5% of tickets might be fine; a medical records classifier with the same error rate is not.
For a chatbot: What topics is it allowed to answer? What should it do when a user asks something outside its scope? What content policies apply?
These aren’t implementation details. They’re acceptance criteria. A project without them cannot be delivered — because “done” is undefined.
Get these on paper and signed off before you estimate.
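Acceptance criteria like the classification thresholds above can be made executable before launch. The sketch below is illustrative only: it assumes you have a small hand-labeled sample to compare against, and the labels and thresholds are hypothetical, not a prescribed standard.

```python
# Minimal acceptance gate: compare model labels against a hand-labeled
# sample and check precision and recall against agreed thresholds.
# Labels and thresholds here are hypothetical, for illustration.

def precision_recall(predicted, actual, positive_label):
    tp = sum(1 for p, a in zip(predicted, actual) if p == a == positive_label)
    fp = sum(1 for p, a in zip(predicted, actual) if p == positive_label != a)
    fn = sum(1 for p, a in zip(predicted, actual) if a == positive_label != p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# A tiny labeled sample for a hypothetical ticket router:
predicted = ["billing", "billing", "tech", "billing", "tech"]
actual    = ["billing", "tech",    "tech", "billing", "billing"]

p, r = precision_recall(predicted, actual, "billing")
# Whether p = 0.67 and r = 0.67 passes is exactly what the
# signed-off acceptance criteria must decide in advance.
```

The point is not the code; it is that "what precision and recall are required" becomes a number both sides signed before estimation.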
Operational Costs Are Part of the Project
A common scoping mistake: treating AI integration as a one-time build with no ongoing costs.
LLM API calls cost money at runtime. The cost scales with usage. A feature that seemed cheap in development can become expensive at production traffic. Before quoting a project, answer:
- How many requests per day will this feature handle at launch? At full scale?
- Which model will it use, and at what approximate cost per request?
- Who pays the API bill — the client, or is it baked into your hosting?
- What happens if usage spikes? Is there a budget ceiling that triggers an alert?
Put rough monthly operating cost numbers in front of the client during scoping. Not precise estimates — rough ones. The goal is to ensure they understand the feature isn’t just a build cost; it’s an ongoing expense. Some clients are surprised by this. Better surprised in the proposal than on the first invoice.
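The back-of-envelope arithmetic is simple enough to show the client directly. The sketch below uses entirely made-up token counts and per-token prices; substitute the real model's published rates when you run it.

```python
# Back-of-envelope monthly cost estimate for an LLM-backed feature.
# All numbers below are illustrative assumptions, not real pricing.

def monthly_llm_cost(
    requests_per_day: float,
    input_tokens_per_request: float,
    output_tokens_per_request: float,
    input_price_per_1k: float,   # USD per 1,000 input tokens (assumed)
    output_price_per_1k: float,  # USD per 1,000 output tokens (assumed)
    days_per_month: int = 30,
) -> float:
    per_request = (
        input_tokens_per_request / 1000 * input_price_per_1k
        + output_tokens_per_request / 1000 * output_price_per_1k
    )
    return per_request * requests_per_day * days_per_month

# Example: 2,000 requests/day, ~1,500 input + 300 output tokens each,
# at hypothetical rates of $0.003 / $0.015 per 1k tokens.
estimate = monthly_llm_cost(2000, 1500, 300, 0.003, 0.015)
print(f"~${estimate:,.0f}/month")  # prints "~$540/month"
```

Running this for the launch volume and the full-scale volume gives you the two numbers the proposal needs.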
Human Review: When and For What
Many AI features need human review in the loop, especially for high-stakes outputs: automatic email responses, customer-facing answers about billing or legal matters, content that goes directly to end users without approval.
Scoping should identify:
- What outputs require human review before being acted on?
- Who reviews them, and how do they access the review queue?
- What’s the turnaround expectation (real-time response vs. review within 4 hours)?
- What does the interface for review look like? Is that interface in scope?
“Human in the loop” sounds simple but often implies building a review interface, notification system, and audit log. That scope can double a project estimate if it surfaces after the quote is accepted.
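In data terms, "human in the loop" usually means every AI output becomes a review item with a status, a reviewer, and an audit trail. A minimal sketch, with entirely illustrative field names, makes the hidden scope visible:

```python
# Sketch of what a review-queue item implies: status tracking, a
# reviewer identity, and an audit log. Field names are illustrative
# assumptions, not a prescribed schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class ReviewItem:
    output_id: str
    ai_output: str
    status: Status = Status.PENDING
    reviewer: Optional[str] = None
    audit_log: list = field(default_factory=list)

    def decide(self, reviewer: str, approve: bool) -> None:
        """Record a human decision and append it to the audit trail."""
        self.status = Status.APPROVED if approve else Status.REJECTED
        self.reviewer = reviewer
        self.audit_log.append(
            f"{datetime.now(timezone.utc).isoformat()} "
            f"{reviewer} -> {self.status.value}"
        )
```

Everything around this structure, the queue UI, the notifications, the retention policy for the audit log, is the scope that doubles estimates when it surfaces late.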
The Maintenance Reality
AI features decay over time. The world changes, user needs evolve, the underlying model is updated or deprecated, and the retrieval index needs to be refreshed when documents change.
Talk about this in scoping:
- Who adds new documents to the knowledge base when the client updates their policies?
- What’s the plan when the model provider changes pricing or deprecates the version you’re using?
- Who is responsible for monitoring quality over time? How is quality measured?
- Is there a retainer for ongoing maintenance, or is this a one-time handoff?
A chatbot built on a retrieval index over the client’s documentation will answer incorrectly as soon as the documentation goes out of date. If nobody has a plan to refresh it, the feature degrades silently. The client will notice before you do.
The Scoping Checklist
Before submitting any AI project proposal, confirm you have answers to:
- What specific outcome is the client trying to achieve (not what technology)?
- What data does the feature need, where does it live, who owns it, and is it accessible?
- What does “good enough” output quality look like, and how will it be measured?
- What is the expected request volume at launch and at scale?
- What is the estimated monthly API cost at that volume?
- What outputs require human review, and is a review interface in scope?
- What is the plan for keeping the feature current as data and models change?
- Is ongoing maintenance part of this engagement, and at what rate?
Proposals that answer all eight are rarely disputed. Proposals that skip them end up renegotiated mid-project — or worse, delivered to a client who is disappointed despite everything being technically correct.
The AI project that gets scoped properly isn’t always the biggest one. But it’s almost always the one that both sides are happy with when it ships.