What to ask before hiring an AI developer for your EdTech company
Thinqit Agency

Most EdTech founders get burned not because they chose the wrong technology, but because they asked the wrong questions. Here is how to avoid that.
The EdTech AI developer market has exploded. On any freelance platform or LinkedIn search, you will find hundreds of people who call themselves AI engineers, AI consultants, or machine learning specialists. Some of them are exceptional. Many of them have completed a few Coursera courses, wrapped a chat interface around an API call, and declared themselves AI developers.
For an EdTech founder, this is a real problem. You are not just building a product — you are building something that affects how people learn. A wrong answer from a poorly implemented AI tutor, a privacy breach involving minor learners, or an AI feature that burns your monthly budget in two weeks — any of these can set your company back by months.
The questions in this guide will not guarantee you hire the right person. But they will quickly reveal who is serious and who is not. Use them in every initial call, every technical interview, and every proposal evaluation.
BEFORE YOU START
Get clear on what you actually need AI to do
Before you evaluate any developer or consultant, you need to answer one question yourself: what specific problem are you trying to solve with AI?
Not 'we want to personalise the learning experience.' Not 'we want an AI tutor.' Something specific — 'we want to reduce dropout in week 3 of our course by identifying at-risk learners early' or 'we want to cut content production time from 3 hours per lesson to 45 minutes.'
If you cannot articulate the problem specifically, any developer you hire will define it for you — in a way that maximises their billing, not your outcomes. Get specific first.
Then start asking questions.
The founders who get the most from AI hires are the ones who arrive with a problem, not a feature request. 'We want an AI chatbot' is a feature request. 'We want to reduce support tickets by answering curriculum questions automatically' is a problem.
TECHNICAL CREDIBILITY
Questions to ask about their actual experience
These questions are designed to distinguish genuine experience from polished positioning. Ask them in this order — the answers compound on each other.
Q1 Have you built AI features specifically for education — not e-commerce, not SaaS, not generic chatbots?
EdTech AI has requirements that other industries don't. Learner data privacy (FERPA in the US, DPDP Act in India, COPPA for under-13 users). Content safety filters that catch inappropriate material before a student sees it. Adaptive sequencing logic that works even with sparse data from new learners. A developer who has only built retail recommendation engines or customer service bots will bring the wrong mental model to your product.
Q2 Can you show me a live product you've built that uses AI — with real learners using it today?
Demos are designed to impress. Production systems are designed to survive. Ask for a live URL, not a recorded walkthrough. Ask how many users it has. Ask what broke in the first month and how they fixed it. A developer who has shipped AI to real learners in production will have specific, unsexy answers to these questions. Someone who hasn't will pivot to the demo.
Q3 Are you recommending we build custom models or integrate existing APIs like OpenAI, Gemini, or Claude? Walk me through your reasoning.
This is the highest-signal question in the list. For 95% of EdTech companies — especially those under 100,000 active learners — the right answer is API integration, not custom model training. Custom models require massive labelled datasets, months of training, expensive infrastructure, and ongoing maintenance. A developer who leads with custom model training for an early-stage platform either doesn't understand the economics or is optimising for project size, not your outcomes.
Q4 What happens when the AI gives a wrong answer to a learner — how have you handled this in past products?
In EdTech, a wrong answer has consequences that a wrong product recommendation doesn't. A student who receives incorrect information about a medical concept, a legal principle, or a scientific fact may carry that misunderstanding for years. You need guardrails, confidence scoring, source citations, and clear human review pathways built in from day one. Any developer who hasn't thought about this before you asked has not shipped AI to learners.
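The guardrails described above can be as simple as a gate in front of the answer. The sketch below assumes a confidence score (e.g. from a grader model or retrieval similarity) and a citation list already exist; the threshold, field names, and review queue are illustrative assumptions, not a spec.

```python
# Sketch of a guardrail layer for AI answers shown to learners.
# Threshold, field names, and the review queue are illustrative assumptions.
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.75  # below this, the answer is never shown unreviewed

@dataclass
class AIAnswer:
    text: str
    confidence: float                             # e.g. grader-model or retrieval score
    sources: list = field(default_factory=list)   # citations shown to the learner

review_queue: list = []  # stands in for a real human-review pathway

def deliver_answer(answer: AIAnswer) -> str:
    """Gate low-confidence or uncited answers behind human review."""
    if answer.confidence < CONFIDENCE_THRESHOLD or not answer.sources:
        review_queue.append(answer)
        return "This question has been sent to a tutor for review."
    citations = "; ".join(answer.sources)
    return f"{answer.text}\n\nSources: {citations}"
```

A developer with production EdTech experience will describe something like this unprompted — and will have opinions about where the threshold came from.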
Thinqit note: When evaluating developers for EdTech clients, we run a simple test: we ask them to describe the last time their AI feature produced an unacceptable output in production and what they did about it. The quality of that answer tells us everything.
COST & OWNERSHIP
Questions most founders forget to ask until it's too late
The build cost is what gets quoted. The running cost is what kills budgets. And the ownership question is what determines whether you're building an asset or a dependency.
On running costs
Ask every developer to estimate the monthly API cost per 1,000 active learners for the feature they're proposing. Not the build cost — the ongoing cost to run it. If they cannot give you a rough number in the first conversation, they have not designed for your economic reality.
A rough benchmark: a well-designed AI Q&A feature for a course platform should cost between ₹2 and ₹8 per active learner per month in API calls, depending on usage patterns. Features that involve long-context processing or image analysis will cost significantly more. Know these numbers before you agree to build anything.
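This is back-of-envelope arithmetic any developer should be able to do in the first call. Every number in the sketch below — questions per learner, token counts, per-token prices, exchange rate — is an illustrative assumption; substitute your provider's current rates and your own usage data.

```python
# Back-of-envelope running-cost estimate for an AI Q&A feature.
# Every constant here is an illustrative assumption -- replace with your
# provider's current prices and your own measured usage.

QUESTIONS_PER_LEARNER_PER_MONTH = 30
INPUT_TOKENS_PER_QUESTION = 3_000    # prompt + retrieved curriculum context
OUTPUT_TOKENS_PER_QUESTION = 500

USD_PER_1M_INPUT_TOKENS = 0.50       # assumed rate for a mid-tier hosted model
USD_PER_1M_OUTPUT_TOKENS = 2.00
INR_PER_USD = 84                     # assumed exchange rate

def monthly_cost_per_learner_inr() -> float:
    """Monthly API spend per active learner, in rupees."""
    input_cost_usd = (QUESTIONS_PER_LEARNER_PER_MONTH * INPUT_TOKENS_PER_QUESTION
                      * USD_PER_1M_INPUT_TOKENS / 1_000_000)
    output_cost_usd = (QUESTIONS_PER_LEARNER_PER_MONTH * OUTPUT_TOKENS_PER_QUESTION
                       * USD_PER_1M_OUTPUT_TOKENS / 1_000_000)
    return (input_cost_usd + output_cost_usd) * INR_PER_USD
```

Under these assumptions the estimate comes out to roughly ₹6 per learner per month — about ₹6,300 per 1,000 active learners. Ask the developer to walk through the same arithmetic with their proposed model and usage figures.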
On code and data ownership
Ask directly: who owns the code, the prompts, the fine-tuned configurations, and the data pipelines when we are done working together?
Everything should live in your repositories, under your version control, using your API keys and your cloud accounts. The moment a developer runs AI features through their own infrastructure or keeps prompt configurations in their own systems, you have created a dependency that is very expensive to undo.
Common trap to avoid
Some developers build on their own OpenAI org account and 'share access' with you. When the relationship ends, migrating the configuration, usage history, and fine-tuned prompts is painful and sometimes impossible. Always insist on your own API accounts from day one.
DATA PRIVACY & COMPLIANCE
The questions that determine whether institutional buyers will trust you
If you sell to schools, universities, or corporate training departments in India or globally, data privacy is not a legal checkbox — it is a buying criterion. Procurement teams at large institutions will ask about this before they sign anything. Your AI developer needs to be thinking about it from the first line of code.
– Where is learner data being sent? If you use third-party AI APIs, the data policies of those providers apply to your learners. Know them.
– Are you processing data from minors? COPPA in the US (under 13), DPDP Act in India, and GDPR in Europe have specific requirements for minor data. Non-compliance is not just a legal risk — it will cost you institutional contracts.
– Do learners consent to having their behaviour data used to personalise or improve AI features? This needs to be explicit in your terms, not buried. Institutions will ask.
Any developer who raises these issues before you do is signalling real EdTech experience. Any developer who hasn't mentioned them by the end of the first call needs to be asked directly.
One last question — ask it every time
At the end of every conversation with a potential AI developer or consultant, ask this:
"If you were in my position — early-stage EdTech, limited budget, learner trust as the core product — what would you NOT build with AI right now?"
A developer who has your interests at heart will have a clear answer. They will tell you what's overhyped, what's too expensive to run at your scale, and what won't actually move learner outcomes. A developer who deflects, pivots to their portfolio, or says 'it depends on your needs' without a single concrete example is telling you something important.
The best AI work in EdTech is not the most technically impressive. It is the most precisely scoped. The founders who get real outcomes from AI are not the ones who hired the biggest teams or built the most complex systems. They are the ones who asked the right questions before they started.