Planning Under Uncertainty
Planning under uncertainty is the ability to make progress when the future is unknowable and playbooks don't exist. It matters because most transformative work happens in ambiguous terrain where data is scarce and conviction must substitute for certainty.
The Guide
6 key steps synthesized from 44 experts.
Embrace 'wartime' humility and diagnose before acting
When facing the unknown, resist the urge to immediately implement solutions. The most valuable product leaders are those who can navigate problems without a playbook—and that starts with honest diagnosis. Prioritize understanding what's actually happening before committing resources to a fix.
Featured guest perspectives
"I think about wartime product management, right? You're coming in, and I think there was... this incredible humility that was needed to really understand and first diagnose what was actually happening on the platform."— Alex Hardimen
"Data is more like a compass than a GPS. If you look at data as a way of giving you the answer, you're always wrong. You're always wrong or you're slow. Wrong or slow or sometimes both, because mostly data doesn't give you the answer. It just tells you if what you just said is ridiculous or there's potentially something there."— Shaun Clowes
Build the ark, don't just predict the rain
Accurate forecasting of failure is worthless without a solution. Once you've identified a threat or uncertain path, shift 100% of your energy to figuring a way out. The only credit that matters is building something that works—not predicting what won't.
Featured guest perspectives
"No credit will be given for predicting rain, only credit for building an ark... You have to build the ark. It doesn't matter if you predict you're going to fail, you've still failed. It gets you nothing. So what you have to do is figure your way out of it and spend all your time on that."— Ben Horowitz
"Resourcefulness brings you further than resources... If resources are carbs, resourcefulness is like muscle. It stays with you. It makes you stronger, and it helps you have a better intuition and better performance over time."— Scott Belsky
Maximize bets and iteration velocity
Success under uncertainty is a numbers game—it's a function of how many bets you place and how quickly you iterate through them. Build a reproducible testing process, fail fast to preserve time and resources, and treat each experiment as one of many shots at bat rather than a single make-or-break moment.
Featured guest perspectives
"Develop a reproducible testing process, and that will actually influence the probability of your success more than anything. It's so unpredictable whether a consumer product idea will work."— Nikita Bier
"If you fail fast, you still have plenty of time to try another attempt and build another version of the product... the more attempts that you have, you simply increase the likelihood of being successful."— Uri Levine
"One thing is probably just really fast iteration cycles. So placing a lot of bets and then being really rigorous about just going through that cycle very soon... success goes up the more bets you make."— Nabeel S. Qureshi
Define learning phases and de-risk the biggest bets first
Be explicit about when you're in a learning phase versus an execution phase. Use unscalable prototyping to validate assumptions, and counterintuitively, prioritize discovery on your riskiest assumptions first. If you constantly put off big swings in favor of predictable work, you'll never truly innovate.
Featured guest perspectives
"How do you be clear about the phase that you're in?... you just have to be very, very clear with your team on what phase you're in. 'Hey, we're in the learning phase and we explicitly are trying to learn these things.'"— Jiaona Zhang
"If you constantly put those off in favor of the lower risk or more predictable smaller swings, how are you ever going to truly innovate and get to the next level."— Camille Hearst
"The first thing is we have to learn how to take an idea and break it into its underlying assumptions. We have to learn how to prioritize those assumptions. Then we have to learn how to run tests that are small enough that they're just testing that assumption."— Teresa Torres
Make decisions with 40-70% of the information
Waiting for certainty is a trap. Make decisions when you have enough data to be informed but before you've lost momentum. If every initiative requires a full experiment, you'll become paralyzed. Trust your intuition on obvious changes, use pre-vs-post analysis when sample sizes are too small, and reserve rigorous A/B testing for major strategic pivots.
Featured guest perspectives
"If you're making a decision with less than 30% of the available data, you're making a big mistake. If you're making a decision only after you have 70%... you have waited far too long."— Shaun Clowes
"If every single one of your initiatives that you're doing on growth is an experiment, that's a problem... It's almost like a disease, like a paralyzing disease, that slows down progress... I think that people should trust their intuition a little bit more."— Elena Verna
"At a startup, you can't do that. You just don't have those users to test with... I've had to shift my mindset from an experimental-oriented approach to making decisions to much more of a conviction-oriented approach."— Ravi Mehta
Use frameworks to manage anxiety and stay focused
Uncertainty breeds anxiety, which can be paralyzing. Combat this by externalizing your fears—write down limiting beliefs and convert them into tactical to-do items. Focus only on what you can control and stay firm on the goal while remaining flexible on the process.
Featured guest perspectives
"Anxiety equals uncertainty times powerlessness... 98% of anxiety comes from two sources. One is what you don't know, and number two is what you can't control or influence."— Chip Conley
"Write them down, understand what they are, look at them in the cold day of light on paper, and then translate them into things that are just obstacles to be overcome... It's not this nebulous, scary fear. It's just literally a to-do item."— Graham Weaver
"Stay firm on the goal, but flexible on the process."— Upasna Gautam
Common Mistakes
- Waiting for statistical significance when sample sizes will never reach it—just ship and use pre-vs-post analysis
- Treating every initiative as an experiment, which creates 'analysis paralysis' and kills momentum
- Prematurely abandoning channels or ideas before giving them an 'appropriate shot' with a well-designed trial
- Spending all your time predicting failure instead of building solutions
Signs You're Doing It Well
- You have a clear list of 'kill criteria' established before projects start
- Your team is comfortable saying 'we don't know' and moving forward anyway
- You're learning something from every experiment, including the failures
- You've shortened planning cycles during high-uncertainty periods without losing strategic coherence
All Guest Perspectives
Deep dive into what all 44 guests shared about planning under uncertainty.
Alex Hardiman
"I think about wartime product management, right? You're coming in, and I think there was... this incredible humility that was needed to really understand and first diagnose what was actually happening on the platform."
- Prioritize diagnosis over immediate action in a crisis
- Approach unknown platform issues with humility
"These are core product skills that we look for in terms of leadership and grit, and the ability to drive through really, really tough problems that there's no playbook for, nobody has ever really done before."
- Focus on grit and leadership when there is no playbook
- Develop the ability to drive through unprecedented problems
Ben Horowitz
"no credit will be given for predicting rain, only credit for building an ark... You have to build the ark. It doesn't matter if you predict you're going to fail, you've still failed. It gets you nothing. So what you have to do is figure your way out of it and spend all your time on that."
- Shift focus from 'predicting failure' to 'figuring a way out'
- Spend 100% of energy on the solution path once a threat is identified
Brian Balfour
"If you're a late-stage startup... you can afford the luxury to place multiple bets and spread your chips and wait it out a little bit to see who the winner is... The key question for startups is totally different. You don't have the luxury to spread your chips. You have to go all in. You have to choose one and go all in."
- Assess your resource constraints before deciding on a multi-platform vs. single-platform strategy.
- For early-stage startups, pick the platform with the best retention and commit fully to avoid resource dilution.
Brian Tolkin
"If you're not going to get significance, if there's no other techniques at your disposal, then sometimes you just got to trust your intuition and ship it. And if that's what you believe, then that's what you believed and you shouldn't spend time trying to get false precision."
- Use power analysis to determine if an A/B test is even viable before starting (a sample-size sketch follows this list)
- When data is lacking, increase conviction by talking to customers or using observational data (diff-in-diff)
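That viability check can be run before the test ever starts: estimate the sample size the test would need and compare it with the traffic you actually have. A minimal sketch, assuming a standard two-proportion z-test; the baseline rate, hoped-for lift, and weekly traffic below are illustrative numbers, not figures from the episode.

```python
# Minimal sketch: is an A/B test even viable for our traffic?
# Assumes a two-proportion z-test; all numbers are illustrative.
from math import sqrt, ceil
from statistics import NormalDist

def required_sample_per_arm(baseline, lift, alpha=0.05, power=0.80):
    """Users needed in EACH arm to detect an absolute `lift` over `baseline`."""
    p1, p2 = baseline, baseline + lift
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Example: 5% baseline conversion, hoping for a 1-point absolute lift,
# with 2,000 eligible users per week split across two arms.
n = required_sample_per_arm(baseline=0.05, lift=0.01)
weeks = 2 * n / 2_000
print(f"~{n:,} users per arm, ~{weeks:.0f} weeks of traffic needed")
# If that is months away, skip the test and build conviction another way:
# customer conversations, observational comparisons, or just ship it.
```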
Camille Hearst
"One of my favorite takeaways from that is that from this way of working around this dual track agile de-risking your riskiest ideas first approach is a concept of taking the things in the top, the biggest swing and actually prioritizing those first in terms of product discovery and figuring out what can you do to start de-risking because if you constantly put those off in favor of the lower risk or more predictable smaller swings, how are you ever going to truly innovate and get to the next level."
- Prioritize discovery for the riskiest assumptions first
- Give the team permission to fail during the de-risking phase
Chris Hutchins
"Andy, he always talks about slugging average, not batting average. He's like, 'I don't care if you hit the ball every time. If one in 10 times you hit a home run that's better than someone who hits it every three out of 10 times but gets out a lot.'"
- Optimize for outsized impact rather than a high success rate of minor features
- Balance iterative improvements with big, company-altering bets
Chip Conley
"Anxiety equals uncertainty times powerlessness... 98% of anxiety comes from two sources. One is what you don't know, and number two is what you can't control or influence."
- Create an 'anxiety balance sheet' with four columns: What I know, What I don't know, What I can control, What I can't control
Crystal W
"Even if you have a sample size of 30, the data you get back, generally, does not change but its precision will. So mathematically speaking, you're going to get the same level of trends, but the precision at which you understand those trends will become more deep if you have more data. But the underlying information that you're getting out of that won't be very different at larger scales."
- Don't wait for large-scale data to run experiments
- Focus on directional trends rather than high precision in early-stage testing
Donna Lichaw
"You don't leave a session with me without having tried a little experiment first. The analogy there is we would call it an in the room experiment versus then get out of the building and do an experiment... anything you think is true or you want to do, it's a hypothesis until you test it. And you go out, get data, and then you can do a bigger version."
- Run 'in the room' role-play experiments before attempting a new leadership behavior in a high-stakes environment.
- Use 'get out of the building' experiments to test career interests or leadership styles on a small scale to gather data.
Elena Verna
"If every single one of your initiatives that you're doing on growth is an experiment, that's a problem... It's almost like a disease, like a paralyzing disease, that slows down progress... I think that people should trust their intuition a little bit more."
- If a sample size for an A/B test cannot be reached within one month, skip the test and use pre-vs-post analysis (sketched after this list).
- Use 24-hour, 7-day, and 28-day readouts for non-experimental releases to monitor impact.
- Reserve formal A/B testing for high-traffic real estate or major strategic pivots.
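One way to operationalize the fallback in the first bullet above: check whether both arms could fill within a month, and if not, ship and compare 24-hour, 7-day, and 28-day windows against a pre-ship baseline. A minimal sketch; the daily metric values and helper names are fabricated for illustration and are not Verna's exact mechanics.

```python
# Minimal sketch: "test or ship-and-watch?", then pre-vs-post readouts.
# Daily metric values below are fabricated; thresholds are illustrative.
from statistics import mean

def can_reach_sample(required_per_arm, daily_eligible_users, max_days=30):
    """True if both arms can be filled within roughly a month of traffic."""
    return daily_eligible_users * max_days >= 2 * required_per_arm

def pre_vs_post(pre_daily, post_daily, window):
    """Relative change in the first `window` post-ship days vs. the pre-ship average."""
    baseline = mean(pre_daily)
    return (mean(post_daily[:window]) - baseline) / baseline

pre = [0.210, 0.205, 0.214, 0.208, 0.211, 0.207, 0.209] * 4   # 4 weeks pre-ship
post = [0.216, 0.219, 0.221, 0.218, 0.223, 0.220, 0.224] * 4  # 4 weeks post-ship

if not can_reach_sample(required_per_arm=8_000, daily_eligible_users=300):
    for days in (1, 7, 28):  # 24-hour, 7-day, and 28-day readouts
        print(f"{days:>2}-day readout: {pre_vs_post(pre, post, days):+.1%} vs. baseline")
```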
Eli Schwartz
"When you do this top-down, it's a TAM forecast essentially. When you do a top-down, you're closer to the truth. Now, you probably aren't going to get to the truth... but it will help you make a better decision than if you just guess."
- Calculate SEO upside by taking the total population and filtering by target demographic and internet purchase behavior
- Use keyword research tools for relative normalization rather than absolute traffic predictions
Ethan Smith
"Take 100 different questions, half of them I will intervene, half of them I won't... you definitely want a control group, especially in Answer Engine Optimization."
- Set up a test group of queries where you apply optimizations and a control group where you do nothing (a sketch of this split follows below).
- Track 'Voice Share' over several weeks to account for the variance in LLM responses.
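A minimal sketch of this setup: randomly split a tracked question set in half, optimize only the test half, and watch the gap in voice share week over week. The function names are made up, and `measure` is a hypothetical callable standing in for however you query the answer engine and score brand mentions.

```python
# Minimal sketch: test/control split for Answer Engine Optimization queries.
# `measure` is hypothetical -- supply your own routine that asks the answer
# engine one question and returns the share of the answer mentioning your brand.
import random
from statistics import mean

def split_queries(queries, seed=42):
    """Randomly assign half the questions to 'test' (optimize) and half to 'control' (leave alone)."""
    rng = random.Random(seed)
    shuffled = list(queries)
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

def voice_share_gap(test_group, control_group, measure):
    """Average voice share of optimized questions minus the untouched control set."""
    return mean(map(measure, test_group)) - mean(map(measure, control_group))

questions = [f"question {i}" for i in range(100)]
test_group, control_group = split_queries(questions)
# Re-run weekly and chart the gap over several weeks, since LLM answers vary per call:
# print(voice_share_gap(test_group, control_group, measure))
```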
Graham Weaver
"The first part of limiting beliefs, write them down, understand what they are, look at them in the cold day of light on paper, and then translate them into things that are just obstacles to be overcome. So, 'How would I fund this,' just becomes a plan, like, 'I need to design a plan where I'd get funding for this charity.' And then that just is a problem like any other problem. It's not this nebulous, scary fear. It's just literally a to-do item."
- Write down every limiting belief or fear associated with a new project
- Convert each fear into a specific 'to-do' item or research task
- Treat nebulous fears as standard obstacles that require a plan to overcome
Gustav Söderström
"There are going to be two types of feedback. One is you did something and it was right, but people are upset because you changed stuff. The other is you did something and it wasn't right, and people are also upset but for good reasons. And so how do you separate these two?"
- Use A/B testing to be scientific about big redesigns, but acknowledge that MVPs for new paradigms must be high-quality to avoid false negatives
- Be 'unemotional' and willing to change your mind 100% when data disproves a hypothesis
Itamar Gilad
"I created a tool called the confidence meter... It goes from very low confidence which is the blue area... all the way to high confidence which is the red area and you can see the numbers going from zero to 10. Where zero is very low confidence, we don't know basically anything we're just guessing in the dark and 10 is full confidence."
- Assign low confidence (0.01 - 0.1) to opinions, pitch decks, and thematic alignment
- Assign medium confidence to user interviews and market data
- Require high-scale experiments or A/B tests to reach high confidence (above 5.0)
- Tie the level of resource investment to the current confidence score (sketched below)
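One way the confidence meter could be encoded: score each piece of evidence on the 0-10 scale, let the strongest evidence set the current confidence, and gate how much you invest on that score. A minimal sketch; the specific scores and investment tiers are illustrative approximations, not Gilad's published calibration.

```python
# Minimal sketch of a confidence-meter lookup (0-10 scale, strongest evidence wins).
# Scores and tiers are illustrative approximations, not an official calibration.
EVIDENCE_SCORES = {
    "opinion": 0.01,
    "pitch_deck": 0.03,
    "thematic_alignment": 0.1,
    "user_interviews": 1.0,
    "market_data": 2.0,
    "painted_door_or_mvp": 3.0,
    "ab_test_at_scale": 7.0,
    "launch_data": 10.0,
}

def confidence(evidence):
    """Confidence is set by the strongest evidence gathered so far."""
    return max((EVIDENCE_SCORES[e] for e in evidence), default=0.0)

def max_investment(score):
    """Tie resource commitment to the current confidence score."""
    if score < 0.5:
        return "discussion only -- go gather evidence"
    if score < 3.0:
        return "cheap tests: interviews, mocks, painted doors"
    if score < 5.0:
        return "prototype or limited release"
    return "full build"

evidence = ["opinion", "user_interviews", "market_data"]
score = confidence(evidence)
print(f"confidence {score:.1f}/10 -> {max_investment(score)}")
```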
Janna Bastow
"You're saying that you want a quarter million dollars worth of investment, and you're going to spend it on your team who's going to run experiments... Some of these experiments are going to fail and some are going to succeed. You don't know which ones. But that's okay, you know that by the end of the quarter, enough are going to succeed that you're probably going to move the right numbers in the right direction."
- Account for the number of experiments run and the resulting metric movements rather than promising specific feature delivery dates
Jason Droege
"Survival is just part of the game, and most people just give up before they get their timing right... Survival is a precursor to that. So let's not put ourselves in position that could potentially compromise the enterprise along the way. It doesn't mean don't take risks, but think about how you calculate it."
- Make asymmetrically positive decisions where the upside far outweighs the risk
- Avoid high-risk 'all-in' bets that could compromise the entire enterprise
Jiaona Zhang
"how do to be clear about the phase that you're in? ... we are explicitly going to go learn these types of things... you just have to be very, very clear with your team on what phase you're in. 'Hey, we're in the learning phase and we explicitly are trying to learn these things' versus, 'Hey, we have this really big vision and we're just going to go at it.'"
- Define explicit learning phases for new initiatives
- Use unscalable prototyping to gather data quickly
- Set go/no-go milestones for every quarter to avoid the sunk cost fallacy
Lane Shackleton
"He just stops and he's like, "You know what? Just test the extremes. Start the experiment tomorrow. We'll figure it out." Essentially. And I think his point was like, look, we can debate this forever. So I would rather us see the upper and lower bounds of how good this could be or how bad this is going to be immediately."
- Launch experiments with polar opposite variables (e.g., a tiny button vs. a giant button) to gather directional data quickly.
- Prioritize making and testing over circular debates.
Laura Schaffer
"Roughly 80% of the times, ORs in the time are hypotheses and the things that we believe will be true... The closer you get to something that you go bear your head in the sand or go into an attic and build something for six months and ship it, the more likely it is that you are going to ship the 80% wrong stuff."
- Use 'painted doors' or mocks to validate concepts before committing to full builds
- Aim for 'embarrassing' first iterations to maximize learning speed
- Accept lower confidence intervals (e.g., 80% instead of 95%) to double or triple experiment velocity
Lauryn Isford
"Generally my advice is to experiment when you need to and to primarily see it as a risk mitigation tactic when you're making dramatic changes and to let the product development process do more work. So, spend more time with customers, be more rigorous in understanding precisely what problem you're solving, get mocks in front of people and see how they react, and hopefully have more conviction than you otherwise would when you ship something that it's okay if every customer sees it tomorrow and that the experiment doesn't actually matter as much."
- Use experiments to mitigate risk when making dramatic product changes
- Prioritize qualitative customer research and rigorous problem definition over A/B testing for every feature
- Avoid using experiments solely for the sake of metric precision if it slows down the shipping process
Luc Levesque
"Experiments are great, but they can be slow... the subtlety is that experiments are great, but they can be slow... sometimes you just need to YOLO it because it's a better product experience or you just kind of know it's going to work. And if you're YOLO-ing 40 things and three of them work and you can look at pre-post... the speed can outweigh the cost and time it takes to do experiment."
- Balance rigorous experimentation with 'YOLO' releases for obvious UX improvements
- Use pre-post analysis or holdout groups to monitor the impact of non-experimented changes
Mayur Kamat
"The moment you build experimentation, you've now made it scientific. Now, somebody comes up with an idea, say, that's a bad idea. Here, this is why it's a bad idea, because we have done this experiment six times and it has failed across this user groups at this exact level of impact created."
- Implement experimentation tools (like Statsig) to move from 'ideas' to 'data'.
- Use experiment results to provide objective 'no's' to stakeholders.
Mike Maples Jr
"Any coin that says can't lose bad on one side of it might as well say, can't win big on the other side. It's your willingness to fail that lets you have breakthrough success."
- Invest in projects with wildly asymmetric upside even if they have a high likelihood of failure
- Differentiate between 'forecasting' (extending the present) and 'backcasting' (betting on a different future)
Nabeel S. Qureshi
"One thing is probably just really fast iteration cycles. So placing a lot of bets and then being really rigorous about just going through that cycle very soon. I have this... principles, and one of the things on there is basically saying EOP successes goes up the more bets you make, and it's sort of a function of how many bets you make and the probability of success."
- Maximize the number of small experiments to increase the probability of a 'hit' (see the quick calculation below).
- Set tight deadlines for evaluating whether a bet is failing or needs more investment.
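The "function of how many bets you make" intuition is just compounding probabilities: if each bet independently has a small chance of hitting, the chance of at least one hit climbs quickly with the number of bets. A minimal sketch with illustrative per-bet odds.

```python
# Minimal sketch: probability of at least one hit as a function of bets placed.
# Assumes independent bets with the same per-bet success probability (illustrative).
def prob_at_least_one_hit(p_single, n_bets):
    return 1 - (1 - p_single) ** n_bets

for n in (1, 5, 10, 20, 40):
    print(f"{n:>2} bets at 10% each -> {prob_at_least_one_hit(0.10, n):.0%} chance of a hit")
# 1 bet: 10%, 5: 41%, 10: 65%, 20: 88%, 40: 99% -- iteration velocity buys you more bets.
```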
Nicole Forsgren
"Draw four boxes on a piece of paper... the first two to the left of them write the word words. And below them, write the word data... always start with words. You do not start with data. You always start with words. And then you'll go around to a couple of people, stakeholders, managers, others, and you'll say, 'Do you agree with this? Is this actually what we're doing?'"
- Map out your hypothesis in words (e.g., 'Customer satisfaction leads to return customers') before looking at data.
- Identify data proxies for each 'word' box to ensure you are measuring the right things.
- Use the framework to identify 'spurious correlations' where the data might show a relationship that doesn't make sense conceptually.
Nikita Bier
"Develop a reproducible testing process, and that will actually influence the probability of your success more than anything. It's so unpredictable whether a consumer product idea will work."
- Build a system for taking 'many shots at bat' to reduce the risk of unpredictability
"If this is true, then what next needs to be true for this thing to work out? And these layers of conditional statements. And the more layers you have, the higher risk your product is, so you should try to condense it to about like four things that must be true for the thing to work."
- List the 4 fundamental assumptions that must be true for the product to succeed
- Validate assumptions in sequence: core flow, peer spread, group hopping, monetization
Ramesh Johari
"Experimentation was never historically in science about winners and losers... Experimentation is always very hypothesis driven. It's about, what are you learning? And that's really an important distinction because what it means is if I go with something big, risky, and it, 'fails,' meaning that doesn't win. Nevertheless, if I was being rigorous about what hypotheses that's testing about my business, I'm potentially learning a lot."
- Define clear hypotheses for every experiment so that a 'loss' still provides actionable business intelligence.
- Avoid judging data scientists solely on the number of 'wins' per quarter to prevent risk-aversion.
"There's ways to take the past into account, to build what's called a prior belief before I run an experiment, and now take the data from the experiment, connect it with the prior, to come up with a conclusion... that falls broadly under the category of what's called Bayesian A/B testing."
- Use Bayesian A/B testing methods to reward experiments that move the 'prior belief' even if they don't reach traditional statistical significance (a minimal sketch follows).
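A minimal sketch of the Bayesian framing Johari points to, assuming Beta-Binomial conjugate priors for conversion rates; the prior counts and observed numbers are invented for illustration. The point is that the posterior moves whether or not the treatment "wins."

```python
# Minimal sketch: Bayesian A/B comparison of two conversion rates with Beta priors.
# Prior counts and observed numbers are illustrative, not from the episode.
import random

def posterior(prior_successes, prior_failures, successes, failures):
    """Beta posterior parameters after observing the experiment's data."""
    return prior_successes + successes, prior_failures + failures

def prob_b_beats_a(post_a, post_b, draws=100_000, seed=7):
    """Monte Carlo estimate of P(rate_B > rate_A) under the two posteriors."""
    rng = random.Random(seed)
    wins = sum(
        rng.betavariate(*post_b) > rng.betavariate(*post_a)
        for _ in range(draws)
    )
    return wins / draws

# Prior belief from past launches: ~5% conversion, worth about 100 users of evidence.
prior = (5, 95)
post_a = posterior(*prior, successes=48, failures=952)   # control: 1,000 users
post_b = posterior(*prior, successes=60, failures=940)   # treatment: 1,000 users

print(f"P(treatment beats control): {prob_b_beats_a(post_a, post_b):.2f}")
# Even if this probability is not decisive, the posterior itself has shifted --
# that movement in belief is the learning, independent of declaring a winner.
```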
"What I find so interesting about experiments is that when you don't know something, it seems not even a question that you would allocate some of your samples to all options... After the fact you're like, 'Treatment was better. What the heck were we thinking? Why'd we give all those samples to control?' ... you have to put yourself in the frame of reference of when you didn't have the answer. And at that moment, what you're essentially saying to yourself is that it's worth paying to learn the answer."
- Acknowledge the 'cost of learning' when deciding which experiments are worth running.
- Use holdout groups to quantify the long-term value of a team's innovations, even if it has a short-term revenue cost.
Ravi Mehta
"At a startup, you can't do that. You just don't have those users to test with. And I think a lot of startups make the mistake of trying to use an experimental approach too early... I've had to shift my mindset from an experimental-oriented approach to making decisions to much more of a conviction-oriented approach."
- Avoid paralysis by analysis; move forward once you have enough data for informed conviction.
- Stop digging for data when the sample size is too small to provide valid experimental results.
Ronny Kohavi
"You have to allocate sometimes to these high risk, high reward ideas. We're going to try something that's most likely to fail. But if it does win, it's going to be a home run. And you have to be ready to understand and agree that most will fail. ... If you go for something big, try it out, but be ready to fail 80% of the time."
- Allocate a specific percentage of resources to high-risk, high-reward ideas
- Prepare the organization for a high failure rate when attempting radical new designs
Sahil Mansuri
"So the way I think about setting up a plan when you have limited visibility and some major headwinds is setting up a really conservative plan and then having milestones, short term milestones that unlock the ability to lean into growth and spend based on hitting those targets."
- Set a conservative baseline plan for the year
- Establish short-term milestones as 'unlocks' for growth spending
- Avoid 'floundering' between extreme optimism and extreme conservatism
Shaun Clowes
"Data is more like a compass than a GPS. If you look at data as a way of giving you the answer, you're always wrong. You're always wrong or you're slow. Wrong or slow or sometimes both, because mostly data doesn't give you the answer. It just tells you if what you just said is ridiculous or there's potentially something there."
- Use data to disprove your assumptions rather than waiting for it to tell you exactly what to do.
"If you're making a decision with less than 30% of the available data, you're making a big mistake. If you're making a decision only after you have 70%... you have waited far too long."
- Aim to make decisions when you have between 40% and 70% of the information you wish you had.
Sri Batchu
"failure is not learning. So it's really important that you learn when you fail. And so we celebrate failure as long as you're learning and you can only learn if you've designed the right test and you failed conclusively"
- Throw all possible tactics and resources at a hypothesis to maximize the chance of seeing a result
- If a 'maximized' version of a feature fails, abandon the hypothesis entirely rather than re-testing minor variations
Teresa Torres
"The first thing is we have to learn how to take an idea and break it into its underlying assumptions. We have to learn how to prioritize those assumptions. Then we have to learn how to run tests that are small enough that they're just testing that assumption."
- Break solutions down into underlying assumptions
- Prioritize assumptions based on risk
- Run small, fast tests (half a dozen a week) to validate assumptions
Tom Conrad
"One really, really big important lesson that I learned at Snap is about risk taking. And when you have the financial support and the foundational relationship with your investors that Evan has, it really allowed him to take these really big swings, acquire a technology that he thought was game changing, build features speculatively."
- Leverage capital to make 'big swings' on speculative technology
- Accept that some speculative features will fail in pursuit of home runs
Tim Holley
"A/B testing... that's the highest bar. It proves with near absolute certainty that there's a causal relationship... But I think that it maybe misses the point in some changes or some areas where you are working towards a bigger net new thing or this specific change won't really be indicative of the greater whole you're building towards."
- Use A/B testing for proving causal relationships in incremental changes.
- Look at cohorts over time or pre-post analysis for changes that don't fit standard A/B testing models.
Tomer Cohen
"I carved out two million members and I said, 'Those are my members. I'm going to focus on building that mountain peak. I'm going to build for them.' Full liberty and doing whatever, it doesn't hurt numbers, giving the scale and really focus on building a great experience for them."
- Carve out a randomized cohort to act as a 'test country' for new product DNA.
- Run 'negative tests' to prove that legacy features or promotional content are actually hurting long-term engagement.
Upasna Gautam
"We always have to have the ability to A, pivot of course, but also have backup and buffers in those types of scenarios. So any time we're planning we build in buffers for all of that chaos that's happening on a daily basis."
- Build buffers into project timelines ranging from days to months depending on scope
- Assess situations objectively daily to decide whether to use buffers or move to the next phase
"Stay firm on the goal, but flexible on the process."
- Use OKRs as an anchor for the team while allowing squad-level autonomy on how to reach them
Uri Levine
"If you fail fast, you still have plenty of time to try another attempt and build another version of the product. Try another go-to-market approach. Try a different business model so you still have plenty of time to make more and more and more attempts, and the more attempts that you have, you simply increase the likelihood of being successful."
- Iterate quickly to increase the total number of attempts at scoring a 'hit.'
- Don't aim for perfection; aim for 'good enough' through rapid iteration.
Yuriy Timen
"The only thing that's worse than a channel or a tactic that you tried not working. The only thing that's worse now is when you didn't give it the appropriate shot, right? And you prematurely were erroneously concluded that it doesn't work and it's remarkable how often you find that to be the case when I talk to companies, "Oh, YouTube, we tried it. It doesn't work." I'm like, "Okay, can I see what you've tried?" And then you look at it and you're like, "Oh, this thing was not designed to even have a shot at working from the get go.""
- Audit failed experiments to see if they were designed for success before writing off a channel.
- Ensure a channel is given an 'appropriate shot' before concluding it doesn't work.
"I think with some tactics and some channels you can fairly objectively create some test guard rails where it's like, if it's YouTube, we know kind of minimum number of impressions that you got to get. Try two to three creative angles. Here's the click through rates range that you're looking for. If you get within these ranges on these KPIs, keep going. If you don't, abandon."
- Set minimum impression thresholds for top-of-funnel tests.
- Test at least 2-3 creative angles before abandoning a channel.
- Define 'keep going' vs 'abandon' KPI ranges before starting the test (sketched below).
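A minimal sketch of pre-registered guard rails in this spirit: fix the minimum impressions, creative count, and CTR bands before the test, then let the numbers make the keep/abandon call. All thresholds are illustrative, not Yuriy Timen's numbers.

```python
# Minimal sketch: pre-registered keep/abandon guard rails for a channel test.
# All thresholds are illustrative -- set yours before the test starts.
from dataclasses import dataclass

@dataclass
class GuardRails:
    min_impressions: int = 50_000     # enough volume for the result to mean anything
    min_creative_angles: int = 3      # don't write off a channel after one ad
    min_ctr: float = 0.008            # below this, abandon
    target_ctr: float = 0.015         # at or above this, keep investing

def verdict(rails: GuardRails, impressions: int, creatives_tested: int, ctr: float) -> str:
    if impressions < rails.min_impressions or creatives_tested < rails.min_creative_angles:
        return "inconclusive -- the channel has not had an appropriate shot yet"
    if ctr >= rails.target_ctr:
        return "keep going"
    if ctr < rails.min_ctr:
        return "abandon"
    return "borderline -- iterate on creative before deciding"

print(verdict(GuardRails(), impressions=62_000, creatives_tested=3, ctr=0.006))
# -> "abandon": the channel got a fair shot and still missed the pre-set floor.
```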
Naomi Gleit
"Four life lessons... three, focus on what you can control. And four, for those things never give up."
- Identify which aspects of a project are within your control and ignore those that aren't
Sanchan Saxena
"From an operating principle, we went into two week planning mode. Greg Greeley, who was the president of Airbnb used to say, 'Look, can't plan for a year, can't plan for a quarter. We're going to plan every two weeks. We're going to react to every two weeks.'"
- Shorten planning cycles to two-week increments during periods of extreme volatility
- Dissolve sub-teams to focus the entire organization on a single survival goal
- Be honest with the team about the lack of long-term clarity
"How do you build conviction in a highly noisy world? ... The thing that I would take with me everywhere is, how do you build in that noise? How do you stay focused and still build what you believe is the right thing and still let the noise happen around you?"
- Build a muscle for operating in ambiguity where data is scarce
- Focus on the 'Web 2.5' journey rather than jumping straight to idealized end-states
Scott Belsky
"Resourcefulness brings you further than resources... If resources are carbs, resourcefulness is like muscle. It stays with you. It makes you stronger, and it helps you have a better intuition and better performance over time."
- Refactor existing systems and processes to find efficiencies instead of requesting more budget
- View resource constraints as an opportunity to build the 'muscle' of resourcefulness
Varun Parmar
"What you want to do is that you want to be the first one to hit the brick wall... speed is something that you should accelerate for the organization... can you be the first one to hit the brick wall where you have the learning faster than anyone else in the market so that you can decide, 'Oh my god, the path that I was going was not the right path.'"
- Optimize for 'time to insight' rather than just 'time to ship.'
- Use rapid prototyping (like Design Sprints) to hit 'brick walls' early and pivot 10-30 degrees based on findings.
Install This Skill
Add this skill to Claude Code, Cursor, or any AI coding assistant that supports Agent Skills.
Download the skill
Download SKILL.md
Add to your project
Create a folder in your project root and add the skill file:
.claude/skills/planning-under-uncertainty/SKILL.md
Start using it
Claude will automatically detect and use the skill when relevant. You can also invoke it directly:
Help me with planning under uncertainty
Related Skills
Other Leadership skills you might find useful.
Running Decision Processes
Use a structured 'Curiosity Loop' to gather contextual advice from a curated group to fight the bias...
Having Difficult Conversations
Managers often avoid giving sensitive but critical feedback on presence and perception, which can st...
Cross-functional Collaboration
In content-heavy organizations, cross-functional teams should include subject matter experts (like e...
Managing Up
Organizational dysfunction often stems from the simple power asymmetry where subordinates feel they...