Deliver: Telling a Good Story
Learn the CARL framework and master the art of delivering compelling, signal-rich stories in behavioral interviews.
You've built your story catalog. You know what questions you might face and which stories map to which signal areas. Now you need to actually tell those stories well.
Our brains love stories. Whether their appeal flows from innate neural structure or whether stories are so ubiquitous they've reshaped that structure, the result is the same: we're hardwired to think in and remember stories. When we hear one, specialized brain regions spring into action, performing mental time travel, triggering empathy, and running simulations of events.
This biological foundation makes stories potent tools for interviews. Unlike a list of accomplishments or abstract descriptions of your skills, stories capture the interviewer's attention and pull them into your experience. Stories slip past critical defenses and activate our brain's dedicated narrative-processing machinery, making them easy to remember long after the interview ends.
This article covers how to structure your stories using a simple framework, which details to include and which to cut, how to handle complex multi-month projects without losing your audience, and how to prepare for the follow-up questions that will definitely come.
The CARL Framework
If you've read anything about behavioral interviews, you've come across the STAR method as a framework for structuring responses:
- Situation: The background and context of the story
- Task: Your specific responsibility or challenge within that situation
- Actions: The concrete steps you took to address the task
- Results: The outcome of your actions
Having a fixed structure like this helps you tell a story with a compelling arc without missing any key pieces interviewers need to evaluate you. An easy-to-remember shorthand like "STAR" makes it possible for you to structure your stories on the fly, with just a little practice in advance.
This isn't a bad method. If you have an interview tomorrow and a bunch of STAR stories prepared, use them. But this format has some drawbacks, especially for senior candidates.
STAR leaves out reflection. There's no explicit place for what you learned from the experience. For senior candidates especially, reflection is a key way interviewers assess scope: more senior people learn from their mistakes and extract wisdom they can reuse on future projects. Without Learnings, you miss an opportunity to demonstrate this.
Situation and Task often blur together. Candidates frequently waste prep time trying to tease apart the two, and in large projects, the distinction between "what was happening" and "what you needed to do" is often artificial anyway.
There is another format that solves these problems, called CARL:
- Context: The overall context of what was happening, including how you were involved. This is the combination of Situation and Task.
- Actions: The concrete steps you took to address the task, what you did, how you did it, and why you chose that approach.
- Results: The outcome of your Actions, what changed, what impact you made on the business.
- Learnings: What you learned or your reflections on the choices you made.
CARL Framework
In CARL, Actions remain the centerpiece, but Context can expand or contract to give the listener the minimum setup they need to understand the Actions and their motivations.
Learnings give the story a more natural ending. Think of Aesop's fables, where each tale ends with a moral. Learnings provide you a platform for expressing your depth of wisdom, showing that you are self-aware and seek to grow from your experiences.
A Full Story Example
Before we break down each component, here's what a complete CARL story looks like. This is a mid-level engineer answering: "Tell me about a time when you persevered through an obstacle."
Telling this Story
Last year, at [Company], I noticed our customer support team was drowning. They were handling about 200 tickets a day, and many of them were repetitive questions.
I saw an opportunity to build an AI chatbot to handle these routine questions automatically. This would require me to persevere through multiple obstacles: learning AI chatbot platforms from scratch, coordinating with teams I'd never worked with before, and integrating with our legacy ticketing system, which had minimal documentation.
I thought this could reduce our time to resolve tickets without damaging customer satisfaction scores, and after three months of effort, it did.
Notice what the candidate did here? They immediately brought the so-what to the response. They answered the question up front, telling us what they persevered through. They gave concrete metrics, like 200 tickets a day, and mentioned what matters (resolution time), so we understand the stakes.
This is a strong opening, giving the interviewer the frame they need to understand the story.
Now, nobody asked me to do this. I brought the idea to my manager because I'd been wanting to experiment with AI chatbots, and I knew this was a real pain point. My manager was supportive but said I'd need buy-in from the Support Director and I'd have to fit this around my regular sprint work.
So my first obstacle was selling the idea. The Support Director was worried we were trying to 'replace humans' and that customers would hate talking to a bot. I scheduled a meeting with her, showed her data on which questions were most repetitive, and proposed a pilot focused only on those. I emphasized that this would free up her team to handle the complex, high-value conversations. She agreed to a three-month trial.
Here the candidate is mixing in Ownership signal alongside Perseverance. They didn't wait to be told; they identified the problem and pitched the solution. They're showing non-technical actions too: selling the idea, addressing stakeholder concerns, negotiating a pilot.
The technical work had its own obstacles. First, I suggested Amazon Lex, AWS's turnkey AI bot platform, but the tech lead had long-term concerns about maintaining and expanding it, and preferred that I learn a more flexible system and go directly to Bedrock, AWS's platform for generative AI. So I spent some nights and weekends going through YouTube videos and tutorials.
From there I jumped into building, starting with a suggested-response feature to keep a human in the loop. That felt like a doable first step.
I quickly ran into challenges integrating with our legacy ticketing system, which had no API documentation. I spent a few days reading through old code and reverse-engineering the endpoints I needed. I left some quick API docs so future engineers wouldn't hit the same walls.
Now we're getting into what they persevered through. Notice they're showing end-to-end understanding: learning new technology, scoping a proof of concept, integration work. They're specific with details, like Bedrock, establishing credibility but not overwhelming the listener. They're focused on the decisions and the challenges.
After I had built a few responses, I showed them to the support team, but they weren't happy with the first pass. They gave me pointers on how the responses could be improved, and I adjusted the bot.
I kept collecting feedback and examples from the support team until the auto-suggestion feature was giving them real value.
Throughout this time, I sent check-in emails to the Support Director, along with video progress demos.
At that point, I wanted to try suggesting responses to users before they even filed tickets, so I started prompting the user with the suggested response before they finalized the ticket. I did this for just a few tickets at a time, and I manually inspected the responses after they were presented to make sure they were legitimate, since this was the first time we didn't have human review. I tracked how often users kept going with their ticket anyway, and the bot was doing great: users continued only 50% of the time, meaning half of those tickets never needed a human.
We're moving quickly through these actions to keep the interviewer engaged. But notice the candidate is dropping breadcrumbs, like progress reports and working with the customer support team. There's strong Communication signal here, but the question was about Perseverance, so these details are available for follow-up questions if the interviewer wants to dig deeper.
After a couple of weeks of this smaller-scale test, we showed the results to the Director, and she green-lit the rollout.
We ended up reducing average resolution time by 35%, from four hours down to 2.6 hours, for the types of questions the bot could handle. About 60% of common questions were now automated. And importantly, we maintained our customer satisfaction score of 4.2 out of 5.
If I were doing this again, I would've involved the support team earlier in the design phase. I brought them in after I had a working prototype, but I think if they'd been part of the initial planning, we would've avoided some of the back-and-forth on tone and messaging.
Quick results with quantified impact, and then immediate reflection showing maturity. The candidate is honest about what they would do differently.
This story covers multiple signal areas. Perseverance was primary, but we also saw Ownership, Growth, and Communication. And because this is one of their core stories, they can reuse it for other questions too.
Context
You need to orient the interviewer. Who were you working with? What was the situation? Why did it matter?
Include:
- Company context
- Team or role only if relevant to the story
- The problem or opportunity that kicked things off
- The stakes
Stakes matter. Compare these:
- Weak: "I worked on a performance project..."
- Better: "We needed to improve performance for users"
- Best: "Our checkout flow had a 40% abandonment rate, costing us an estimated $2M per quarter"
Keep it brief: 30-45 seconds. A lot of candidates spend 1-3 minutes on background before they even get to what they did. The interviewer is checking out at that point.
Context Mistakes to Avoid
The project history lesson: "Well, this system was originally built in 2018, and then in 2019 the team tried to refactor it but that got deprioritized, and then in 2020 there was a reorg..." The interviewer doesn't need the full timeline. They need to understand where you started and what you did.
Unnecessary org chart details: "I reported to Sarah, who reported to the VP of Platform, and the product manager was on a different team that reported to..." Unless the reporting structure is directly relevant to your story (like you took some action in response to it), skip it.
Explaining technology the interviewer already knows: "So, Kubernetes is a container orchestration platform that..." If you're interviewing at a tech company, assume technical literacy.
Actions
Actions are where the behavioral signal lives. This is the core of what you're being evaluated on.
Use "I" statements consistently. Not "we decided" but "I proposed and the team agreed" or "I convinced my tech lead to..." Every sentence should make your specific contribution clear. You're a team player, but teams are built of individual actors. You can acknowledge others' contributions without removing all agency on your part.
Include both technical and non-technical actions. Yes, you wrote the code. But did you also scope the project? Communicate with stakeholders? Mentor someone? Resolve a conflict? Those non-technical actions are often the difference between a Senior Engineer story and a Staff Engineer story.
Be specific. Specificity shows the interviewer exactly how you'll approach the same kind of task once you're hired.
- Not "I talked to stakeholders" but "I scheduled weekly syncs with the PM and wrote a one-pager that I shared with the director."
- Not "I debugged the issue" but "I added distributed tracing and identified that our Redis connection pool was exhausted during peak traffic."
Show repeatable behaviors. Interviewers want to know not just that you succeeded, but that you would succeed again in similar circumstances. Show the how, not just the what.
The Value of Detail in Actions
"Not enough detail." "Too much detail." These are among the most common pieces of feedback our coaches give, and the most frustrating for candidates to receive. How much detail is enough? When does helpful context become unnecessary background? When does demonstrating expertise become unrelatable to the listener?
The appropriate level of detail exists on a spectrum, and where you land depends on your role, your audience, the specific question, and where you are in the conversation.
Use detail to (1) make the story understandable and (2) establish your credibility. Your actions become comprehensible when the listener has enough context to understand them. Details also prove you actually did the work and understood it deeply. But once you've established credibility, additional details waste precious time. Choose where you want to go deep, go there, then keep the narrative moving.
Consider how this example establishes more credibility without taking too much time:
I improved our service's performance by identifying that we were making N+1 queries on our product listing page. Every product triggered a separate database call for its reviews. I refactored this to use a single JOIN query with eager loading, which reduced our p95 response time from 3.2 seconds to 400ms. Customer support tickets about slow loading dropped by 60% in the following month.
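If you want to picture the kind of fix this candidate is describing, here's a minimal sketch of an N+1 query and its eager-loading fix, written with SQLAlchemy. The models, tables, and in-memory database are invented for illustration; the point of the story is the specificity, not this particular library.

```python
# Toy illustration of an N+1 query pattern and its eager-loading fix.
# The Product/Review models and in-memory database are hypothetical.
from sqlalchemy import ForeignKey, create_engine
from sqlalchemy.orm import (DeclarativeBase, Mapped, Session,
                            joinedload, mapped_column, relationship)

class Base(DeclarativeBase):
    pass

class Product(Base):
    __tablename__ = "products"
    id: Mapped[int] = mapped_column(primary_key=True)
    reviews: Mapped[list["Review"]] = relationship(back_populates="product")

class Review(Base):
    __tablename__ = "reviews"
    id: Mapped[int] = mapped_column(primary_key=True)
    product_id: Mapped[int] = mapped_column(ForeignKey("products.id"))
    product: Mapped["Product"] = relationship(back_populates="reviews")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    # N+1: one query for the product list, then one more query per
    # product the first time its .reviews collection is accessed.
    for product in session.query(Product).all():
        _ = len(product.reviews)

    # Fix: eager-load reviews via a single JOIN -- one round trip.
    products = (
        session.query(Product)
        .options(joinedload(Product.reviews))
        .all()
    )
    for product in products:
        _ = len(product.reviews)  # no extra queries
```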
Provide details mostly in the Actions. Your Actions are where you demonstrate thinking, technical depth, leadership, and decision-making. This is where interviewers learn who you are as a professional. In your Actions, include:
- The alternatives you considered: I evaluated three approaches: microservices, a modular monolith, or refactoring our existing monolith...
- Your reasoning: I chose the modular monolith because our team of 8 couldn't support 15 microservices...
- Specific technical decisions: I implemented a hexagonal architecture pattern with clear boundaries...
- How you navigated challenges: When the VP questioned the timeline, I created a phased rollout plan...
- Collaboration and influence: I paired with our Principal Engineer to validate the approach...
Likewise, any details you provide in the Context section should be relevant. Don't provide an organizational structure description or a latency number unless your story has you communicating through that structure or making technical choices based on that latency.
If you haven't moved on to your next action after about 30 seconds of talking, you're probably providing too much detail about that single point. Feel free to note opportunities for follow-ups ("I can talk more about that if you like...") then keep moving.
What Actions Should You Include?
Here are categories of actions you can pull from:
- Designing: product/architectural decisions, alternatives considered
- Aligning: building consensus, stakeholder management, negotiation
- Communicating: documentation, presentations, difficult conversations
- Implementing: technical execution, resource allocation, risk mitigation
- Iterating: feedback loops, measurement, course corrections
- Testing & Debugging: QA processes, problem diagnosis, optimization
- Releasing: deployment strategies, monitoring, post-launch support
- Thinking & Deciding: cognitive work, analysis, strategic choices
Results
Results prove that your actions mattered.
Think about impact across multiple dimensions:
- Business impact: revenue, cost savings, efficiency
- User impact: satisfaction scores, reduced friction, new capabilities
- Team impact: velocity improvements, reduced on-call burden, knowledge transfer
Quantify when possible. "Improved performance" is vague. "Reduced p99 latency by 85%, from 800ms to 120ms" is concrete.
When You Don't Have Metrics
Not every project has measurable outcomes, especially internal tools, process improvements, early-stage work, or work at startups that don't have time to measure everything. In these cases, you can still articulate impact:
Compare before and after states: "Before this project, deploying required three people coordinating over four hours. After, any engineer could deploy in fifteen minutes."
Use qualitative feedback: "The product manager told me this was the first time she felt like engineering actually understood her priorities."
Reference time or effort: "Development with the previous API took weeks, but after the refactor we were able to build new features in days."
Describe what became possible: "After this shipped, we could test new features to the mobile app without submitting a new build, which unlocked our ability to experiment rapidly."
Metrics are best, but clearly articulated impact works too.
Learnings
Learnings demonstrate that you're someone who gets better over time. What would you do differently? What did you learn about yourself? What did this experience teach you?
Go beyond the obvious. "I learned that communication is important" is generic. "I learned that when working with a remote team, I need to over-document decisions because hallway conversations don't happen" is specific and shows real insight.
Be honest about mistakes. "I should have pushed back on the scope earlier. I knew we couldn't hit the deadline with all those features, but I didn't want to be the person saying no. I've since learned to raise timeline concerns the moment I see them."
Learnings also show interviewers that you can receive feedback. If you describe everything as perfect with no room for improvement, that's a red flag.
Adapting to the Question
While every story can't answer every question, most stories contain multiple angles worth exploring. Think about how Hollywood creates different trailers for the same movie. Imagine only the scenes of Lord of the Rings with Arwen, Aragorn, Éowyn, Sam, and Rosie: you might think it's a romantic drama. Or just the scenes with Merry, Pippin, Gimli, and Legolas: maybe it's a comedy. Each trailer pulls from the same source material but emphasizes different scenes, music, and pacing to resonate with different audiences.
Your behavioral stories work the same way. You can choose which parts of the story to foreground for which question.
That AI chatbot story from above could answer multiple questions:
| Question | What to Emphasize |
|---|---|
| "Tell me about a time you persevered" | Learning ML from scratch, reverse-engineering the legacy system, iterating through feedback |
| "Tell me about a time you demonstrated ownership" | "Nobody asked me to do this. I brought the idea to my manager. I identified the problem and pitched the solution." |
| "Tell me about your communication skills" | Meeting with the Support Director, using data to build the case, progress demos to keep stakeholders aligned |
| "Tell me about a time you influenced without authority" | Scheduling meetings with the Support Director, showing data to address concerns, negotiating the pilot |
Early in your response, emphasize the signal the question asked for. Don't just drop the other parts of the story; mention them in passing, like a footnote, in case the listener wants to hear more.
Adapting to the Audience
You need to read the room in real time to keep your interviewer engaged.
Watch for signs you're losing them:
- Eyes glazing over
- Not asking follow-up questions on technical bits
- Seeming to wait for you to finish
- Unmuting themselves on the call or opening their mouth to try to speak
- Looking like they're trying to formulate a question
- Stopping note taking
- Saying "yeah" or "hmm" frequently. This sounds like active listening in a normal conversation, but in an interview it probably signals they want to shift topics
If you see these signs, you've gone too deep. Adjust.
Include only enough detail to establish credibility. The third or fourth example of a hard bug you solved isn't doing much for you after the first two.
Think about who you're talking to. If you're talking to a backend engineer about a backend project, you can get into architectural specifics. If you're talking to a hiring manager who hasn't written code in years, focus on the decisions and trade-offs rather than implementation details.
Telling Complex Stories
Sometimes your stories are genuinely long. This happens for more senior people, who might have a 2-year-long project or who are preparing for a behavioral round that's a deep dive into a single project.
Long stories are hard to deliver and harder to listen to, so you'll need an organizational system to stay on track and hold the interviewer's attention. The Actions section is typically responsible for the length in these stories, so structure your Actions into themes and list those themes right after the Context in what we call a "Table of Contents." Your listener will know what's coming and you'll be able to signpost your way through, mentioning each theme as you come to it.
Organizing Complex Stories
When you have complex projects, give the interviewer a roadmap upfront so they don't get lost:
"This project happened in three phases: first, getting alignment on the approach; second, the technical implementation; and third, the rollout and measurement. Let me walk you through each."
Or:
"I contributed in four key ways: establishing the technical architecture, mentoring two junior engineers through the implementation, managing stakeholder expectations across three teams, and owning the post-launch metrics analysis."
This does a few things:
- Helps the interviewer track your narrative. They know where you are in the story and what's coming next. They're not wondering "is this going to connect?" or "where is this going?"
- Signals organized thinking. Senior roles require the ability to structure complex information. Demonstrating that skill in how you tell stories reinforces that you can do it in the job.
- Keeps you on track. It helps you return to the key points if you get sidetracked.
Include takeaways for your themes, not just topics. Simply listing the themes helps organize the conversation, but adding what you want the listener to conclude provides reasons to hire you. Compare something like "Technical design" versus "Working with the TL to design around complex constraints." The second one is much more informative.
Consider front-loading results in long stories. If the story is sufficiently long, you may never get to the Results before the interviewer takes you down many follow-up rabbit holes. You can state the Results and even a condensed version of the Learnings right at the top.
Prepare for Follow-Up Questions
Expect follow-up questions to your stories. Certain ones come up over and over:
"What would you do differently?" They're looking for depth of understanding and awareness of mistakes. If you've been including Learnings in your CARL framework, you're ready for this.
"What was the hardest part?" Often asked to get at technical depth. Be ready with an understanding of the challenges you overcame.
"How did you measure success?" Even if you mentioned results, be ready to talk about whether and how you established measurable goals and defined the end state.
"What happened after?" Did the project have lasting impact? Is it still running? Did it get expanded or deprecated? Were there follow-ups you were involved in?
For each of your core stories, think through answers to these questions before the interview. Here are more categories of follow-ups to prepare:
About the Challenge
- What was the hardest part?
- What constraints were you working under?
About Your Role
- What were your specific contributions?
- How much of this was your idea vs. assigned work?
- Who else was involved and what did they do?
About Decisions
- How did you make that decision?
- What alternatives did you consider?
- Who else was involved in the decision?
About Results
- How did you measure success?
- What was the long-term impact?
- What would you do differently?
About Relationships
- How did others respond to your approach?
- What feedback did you receive?
- How did this affect working relationships?
More Story Examples
Here are additional examples of how different candidates at different levels tell CARL-structured stories.
Example: Junior Engineer: Technical Challenge
Question: "Tell me about a challenging technical problem you solved."
Our API was intermittently returning 500 errors in production, happening maybe 2-3 times a day with no obvious pattern. Users were seeing random failures, and our on-call engineer had been looking at it for a week without finding the cause.
I volunteered to take a fresh look. First, I correlated the error timestamps with our deployment logs and found they often happened within an hour of deploys, but not always. I added more detailed logging around the failing endpoint and discovered the errors only happened when requests hit one specific server in our load balancer rotation.
I SSH'd into that server and found it was running an older version of our config. A deploy script bug was skipping it about 30% of the time. I fixed the deploy script to verify all servers were updated before marking a deploy complete, and added a health check that would catch version mismatches.
The 500 errors stopped completely. We went from 2-3 incidents a day to zero over the next month. I documented the debugging process for the team wiki so others could follow similar steps.
What I learned was the value of not assuming the obvious explanation. Everyone was looking at the code, but the bug was in our deployment process. I now try to widen my debugging scope earlier.
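As an aside, if you're curious what the fix in this story might look like in practice, here's a sketch of a post-deploy verification step. The /healthz/version endpoint, host names, and expected version are all assumptions for illustration, not the candidate's actual tooling.

```python
# Hypothetical post-deploy check: confirm every server reports the
# expected version before marking the deploy complete.
import sys
import urllib.request

EXPECTED_VERSION = "2024.06.1"  # assumed version identifier
HOSTS = ["app-1.internal", "app-2.internal", "app-3.internal"]

def deployed_version(host: str) -> str:
    # Assumes each server exposes its running version over HTTP.
    url = f"http://{host}/healthz/version"
    with urllib.request.urlopen(url, timeout=5) as response:
        return response.read().decode().strip()

mismatched = [h for h in HOSTS if deployed_version(h) != EXPECTED_VERSION]
if mismatched:
    print(f"Deploy incomplete; version mismatch on: {mismatched}")
    sys.exit(1)
print("All servers on the expected version; deploy verified.")
```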
Example: Senior Engineer (L5): Cross-Team Project
Question: "Tell me about a time you led a project across multiple teams."
Our mobile app's cold start time had crept up to 8 seconds over two years of feature additions. Users were complaining, and we were seeing a correlation with uninstall rates. The problem was that the startup path touched code owned by five different teams, none of whom had performance as their primary focus.
I proposed a cross-team performance initiative to my manager. I put together a one-pager showing the user impact data and a rough breakdown of where time was being spent, and got buy-in to spend one quarter leading this effort.
I started by profiling the startup path myself and creating a detailed breakdown: about 40% was our main codebase, 30% was a third-party SDK, and 30% was spread across the other teams' initialization code. I scheduled 1:1s with the tech leads from each team to share my findings and understand their constraints.
Two teams were willing to prioritize fixes. Two others said they didn't have bandwidth. For those, I offered to write the PRs myself if they'd review them, which they agreed to. The SDK was trickier; I contacted the vendor, showed them our profiling data, and convinced them to ship an optimized initialization path in their next release.
We reduced cold start time from 8 seconds to 3.2 seconds, a 60% improvement. Uninstall rates in the first session dropped by 15%. The approach became a template for how we run cross-team performance work.
My main learning was about meeting people where they are. I initially assumed everyone would just prioritize what was obviously important, but teams have real constraints. Offering to do the work myself turned potential blockers into collaborators.
Example: Staff Engineer: Technical Strategy
Question: "Describe a time you influenced technical direction across your organization."
When I joined, each of our six product teams had built their own data pipeline for analytics. Different technologies, different schemas, different reliability levels. Product managers were constantly frustrated that they couldn't get consistent metrics across products, and engineers were duplicating effort maintaining these systems.
I spent my first month mapping out what existed: interviewing each team, documenting their pipelines, and understanding why they'd made the choices they had. The reasons were usually reasonable. Each team had optimized for their immediate needs at different points in time.
I wrote an RFC proposing we consolidate onto a single streaming platform. The key insight was that we didn't need to migrate everyone at once. We could build an adapter layer that let teams continue using their existing APIs while routing data through the new system. This dramatically lowered the migration risk.
Getting buy-in required different conversations with different audiences. For engineering leadership, I focused on maintenance cost reduction and reliability improvements. For product leadership, I emphasized the cross-product analytics capabilities this would unlock. For the individual teams, I emphasized that they wouldn't have to change their code immediately. The adapter layer was backward compatible.
I led the design and implementation of the core platform and adapter layer, then ran office hours to help teams migrate. Over six months, all six teams moved to the new system.
We reduced data pipeline maintenance costs by an estimated 40% in engineering time. More importantly, we shipped our first cross-product analytics dashboard within three months of completing the migration, something that would have taken 6+ months before.
The learning for me was about the power of making change feel safe. The adapter layer was extra engineering work upfront, but it was what made the proposal politically viable. If I'd proposed a hard cutover, I would have faced much more resistance.
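For the curious, the adapter-layer idea in this story is a pattern worth picturing: old call sites keep their existing interface while data is routed to the new system underneath. A toy sketch, where every name is hypothetical:

```python
# Toy adapter layer: teams keep calling their old pipeline API while
# events flow to the new streaming platform. All names are invented.
from typing import Protocol

class EventSink(Protocol):
    def send(self, topic: str, payload: dict) -> None: ...

class NewStreamingPlatform:
    def send(self, topic: str, payload: dict) -> None:
        print(f"[new platform] {topic}: {payload}")

class LegacyPipelineAdapter:
    """Preserves a team's old log_event API, routing to the new sink."""

    def __init__(self, team: str, sink: EventSink) -> None:
        self.team = team
        self.sink = sink

    def log_event(self, name: str, **fields) -> None:
        # Old callers are unchanged; the translation happens here.
        self.sink.send(topic=f"{self.team}.{name}", payload=fields)

# A team's existing call sites keep working with zero changes:
analytics = LegacyPipelineAdapter("checkout", NewStreamingPlatform())
analytics.log_event("purchase", amount=42.0, currency="USD")
```

The extra indirection is exactly what made the migration feel safe: callers never had to change on the platform's schedule.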
Exercise: Develop Your Core Stories
Take one of your core stories from the catalog you built in the last video.
- Write out the full CARL structure.
- Time yourself telling it out loud. Target 2-4 minutes. If you're going over, you're probably including too much context or too many action details.
- Write responses to the common follow-up questions:
  - What would you do differently?
  - What was the hardest part?
  - How did you measure success?
  - What happened after?
- Repeat for your remaining core stories.
At that point you'll have a well-built story catalog ready for use in an interview.
What's Next
Some questions are so common and so important that they deserve special preparation.
"Tell me about yourself."
"Tell me about your favorite project" or "your most impactful project."
"Tell me about a conflict."
These three questions come up in almost every behavioral interview, and getting them wrong can sink you even if the rest of your interview is strong.
In the next article, we'll cover how to nail all three.