McKinsey Now Tests Whether You Can Use AI in Your Job Interview. Everyone Else Will Follow.
McKinsey just changed how it hires people. In a pilot running since earlier this year, the consulting giant started requiring final-round candidates for Business Analyst positions to use its internal AI tool, Lilli, during the interview itself. Not as a hypothetical. Not as a discussion topic. As a hands-on, graded part of the process.
Candidates sit down, get a case study, and use Lilli to analyze it. The interviewers watch how they prompt the tool, how they evaluate what it gives back, and whether they can take raw AI output and turn it into something useful for a client.
This is not a coding test, and it is not about being a prompt engineering wizard. McKinsey's own language says it is looking for curiosity and judgment: the ability to take what Lilli produces and work with it, challenge it, and put it in the context of the client's specific requirements. The bar is baseline competence, not technical mastery.
And that is exactly why this matters.
Why McKinsey Matters Here
McKinsey is not a tech company. It is a management consulting firm. Its clients are Fortune 500 executives, government agencies, and institutional investors. When McKinsey makes AI fluency a formal hiring criterion, it sends a signal to every industry it touches.
The firm also announced earlier this year that it runs a workforce of 20,000 AI agents internally. Not chatbots. Agents that perform tasks across the organization. Lilli itself has been integrated into how consultants do research, build analyses, and prepare client deliverables. When they test candidates on AI, they are not testing a theoretical skill. They are testing whether you can function in the environment they already operate in.
Fortune reported that this move coincides with McKinsey shifting its recruiting focus toward liberal arts majors. That seems contradictory until you think about it for a second. If AI handles the data crunching and analysis grunt work, what you need from humans is context, judgment, and communication. Those are liberal arts skills, not STEM skills.
What This Means for Everyone Else
The full rollout is expected in spring or summer 2026. Once McKinsey normalizes this, other consulting firms will follow. Then finance. Then tech. Then everyone.
Think about what an AI interview actually tests. It tests whether you can take a powerful but imperfect tool and produce something better than what the tool gives you on its own. That is the entire value proposition of a human worker in an AI-augmented workplace. If you cannot add value on top of what the AI produces, what exactly are you being paid for?
This is a question that most hiring processes have not caught up to yet. Resumes still list Microsoft Office as a skill. Interview prep courses still focus on behavioral questions and case frameworks. Nobody is teaching candidates how to work alongside an AI tool in a high-stakes setting.
That is going to change fast.
The Uncomfortable Part
There is a real tension here that McKinsey's press releases do not address. If AI tools are good enough that every employee needs to use them, and the tools themselves are getting better every quarter, then the logical end point is fewer employees.
McKinsey is not going to say that out loud. But the math is hard to ignore. If one consultant with Lilli can do the work that used to take two consultants without it, you eventually need fewer consultants. The firm grows its output, not its headcount.
The same logic applies everywhere. Law firms. Accounting firms. Marketing agencies. Any knowledge work where AI can handle the research, analysis, and first-draft creation. The humans become editors, strategists, and client relationship managers. Those are important roles. There are just fewer of them.
What You Should Do About It
If you are currently job hunting or planning to be, start using AI tools now. Not casually. Deliberately. Pick whatever tool you can access, whether it is ChatGPT, Claude, Gemini, or something else, and use it to do actual work. Research a topic. Draft a memo. Analyze data. Build a presentation.
The skill McKinsey is testing is not knowing which AI to use. It is knowing how to evaluate what AI gives you. Can you spot when it is wrong? Can you push it to go deeper? Can you combine its output with your own knowledge to produce something a client would actually pay for?
That is the new baseline. McKinsey just made it official. The rest of the working world will catch up whether it wants to or not.