

  • Embracing the 88% Failure Rate: How Top Experimentation Programs Drive Outsized Returns

    If you run an experimentation program, get ready for a shocking statistic: 88% of A/B tests fail to achieve a statistically significant win over the original experience. That's not an off-hand estimate - it's a figure published by companies like Optimizely that are titans in the experimentation space. Other leading practitioners like Airbnb, Booking.com, and Amazon have shared similar "success" rates as low as 10-15%. For anyone new to experimentation, that 88% failure rate may seem abysmal. How can an effective program have such a high rate of fruitless tests? Shouldn't we be aiming for a much higher hit rate? As counterintuitive as it may seem, that 85-90% failure rate is actually the high-water mark that top experimentation programs strive for. A success rate much higher than that is a red flag that your program may be playing it too safe and leaving massive returns on the table.

    The Danger of Playing It Too Safe

    So what's wrong with a high "success" rate? As Keith Swiderski explains, "The number one reason why your winning test percentage may be higher [than 10-15%] is that you're playing it too safe - running a lot of safe tests that you think will win to boost up your winning percentage." It feels good to rack up wins - you make your boss happy, you hit your goals, you get that bonus. But there's a massive opportunity cost to only running low-risk tests designed for easy victories. By making only incremental, surface-level changes, you leave potentially exponential upside on the table. The tests that could truly transform your product's experience and massively increase key metrics never see the light of day. Chasing a high stated "win" rate may keep you employed, but it's not a recipe for driving breakthrough results and helping your company get ahead of the competition.
    As Keith notes, "It's like any good investment strategy...you want to mix in safe bets with some smart long shots because that's the only way you will outperform the benchmark."

    Recalibrating for Massive Wins

    So if a high stated win rate isn't the goal, what should you optimize for instead? Keith suggests metrics that directly tie back to your company's core objectives: "A winning KPI needs to ladder up to the company's business goals and you need to show that the thing you're doing is helping the company accomplish its goals. Winning test percentage may or may not do that." For most businesses, that means measuring the incremental revenue or revenue lift generated by your experimentation program. You could look at total incremental revenue from all your test wins, revenue delivered per test (like a batting average), or the overall percentage revenue lift your program drove. By focusing on financial impact rather than simplistic win rates, you realign your program to be judged on real, measurable value creation. That gives you the air cover to run bolder tests - even if only 10-15% of them "win", those few breakthrough winners could generate exponential value that dwarfs what you'd get from playing it safe.

    The Value of "Failed" Tests

    An 88% test "failure" rate doesn't mean you wasted time and money on fruitless experiments. Those "failed" tests still generated learnings about what did and didn't resonate with your customers. They showed you which bold bets missed the mark so you could course-correct. In this light, celebrating "wins" too much can become counterproductive. As Keith highlights, it can incentivize you to start gaming the system by only running safe tests you think will boost that figure. True innovation comes from taking risks - even if that means waving goodbye to an unsustainably high "success" rate.

    Following the Trailblazers' Lead

    Of course, this aggressive testing mindset is easier said than done.
    Your leadership may balk at pumping resources into a program that will "fail" over 85% of the time. They may question why you're celebrating things that technically "lost" against the original experience. This is where it's vital to understand that you're simply following in the footsteps of the world's most innovative, boundary-pushing companies. Amazon. Booking.com. Airbnb. Microsoft. Meta. Google. They've all embraced that 85-90% failed test rate as the price of admission for driving exponential growth and market dominance. As Keith states, that incredible 85%+ failure rate "goes doubly for experimentation. Experimentation in the scientific process helped create some of the most amazing products or life saving medicines that help us live our lives today. And I can assure you that those inventions didn't happen because somebody played it safe."

    The world's greatest innovations and most transformative products didn't come from making low-risk, incremental iterations. They came from bold bets, smart long shots, and persevering through countless "failures" to finally achieve greatness. For any company that wants to be a true product leader and innovator, that 88% failure rate is the expectation, not the exception. Embrace it. Optimize for it. And don't let a simplistic "wins" mentality blind you to exponential value creation opportunities. So keep swinging for the fences in your experimentation program. You'll strike out far more often than not - but when you finally crush one of those moonshots, it could catapult your product light years ahead of the competition that kept playing it safe.
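    The trade-off between win rate and value per win is easy to see with a back-of-the-envelope calculation. All win rates and dollar figures below are hypothetical, chosen only to illustrate the shape of the argument - judging a program on revenue per test rather than win rate:

```python
# Illustrative sketch (hypothetical numbers): comparing two testing
# strategies by expected incremental revenue per test, not win rate.
def revenue_per_test(win_rate, avg_lift_per_win):
    """Expected incremental revenue per test: probability of a win
    times the average revenue lift a winning test delivers."""
    return win_rate * avg_lift_per_win

# "Safe" program: 40% of tests win, but each win is worth ~$10k.
safe = revenue_per_test(0.40, 10_000)

# "Bold" program: only 12% of tests win, but each win is worth ~$150k.
bold = revenue_per_test(0.12, 150_000)

# Despite a far lower win rate, the bold program delivers more value:
# safe -> 4000.0 per test, bold -> 18000.0 per test.
```

    Under these assumed numbers, the program with the 88% "failure" rate generates more than four times the value per test - which is the batting-average framing the article recommends.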

  • The Crucial Role of User Research in Product Development

    Companies are under immense pressure to develop and launch products quickly. However, rushing to market without proper user research can lead to products that fail to meet customer needs, resulting in wasted resources and missed opportunities. This is where user experience (UX) research plays a vital role, acting as the voice of the customer and ensuring that products are designed with their needs and pain points in mind. In this video interview, Sarah Bowlin, Senior Research Consultant at Zilker Trail, sheds light on the importance of user research and the challenges companies often face when it comes to incorporating it into their product development process.

    What Does a User Researcher Do?

    As Sarah explains, her job as a user experience researcher is to be the representative of the client's target users. She collects data through various methods, including qualitative techniques like one-on-one interviews and unmoderated tests, as well as quantitative methods such as surveys. The goal is to understand the "why" behind user behavior, attitudes, and preferences, as well as to quantify and measure specific aspects of the user experience. Ultimately, a user researcher's role is to provide the internal team with feedback and insights from real people who use or could potentially use the product. This information helps guide the development of features that meet user needs and solve their pain points, ensuring a seamless and enjoyable experience.

    The Biggest Challenge: The Fire Drill Mentality

    One of the biggest challenges Sarah has encountered is what she calls the "fire drill mentality." Companies often approach user research as an afterthought, realizing too late that their product might not resonate with users or solve the intended problem. With launch dates looming, they urgently seek quick research to get the insights they need.
    However, as Sarah points out, once user feedback is obtained, there is often a subsequent process of revisiting and potentially revising the product to address identified issues. This takes time and requires coordination with product, design, and engineering teams, which can be challenging under tight deadlines. Sarah advocates for an "embedded model" of research, where user insights are integrated throughout the design and development process. This proactive approach not only helps identify potential issues early on but also ensures that research is a value-add rather than a bottleneck, ultimately leading to quicker, more cost-effective, and user-centric product development.

    The Root Cause: Layoffs and Resource Constraints

    Sarah attributes the prevalence of the fire drill mentality to the tech industry's recent wave of layoffs, which have disproportionately affected user research teams. As companies downsize, research responsibilities often fall on the shoulders of designers, product managers, and engineers, who may lack the necessary expertise or bandwidth to conduct thorough user research. Despite recognizing the importance of user insights, these companies turn to external partners like Zilker Trail to fill the research gap quickly, without fully considering the implications of dismantling their internal research teams. This reactive approach leads to rushed and potentially ineffective user research efforts, further highlighting the need for a more integrated and proactive approach.

    The Benefits of an Integrated Approach

    Zilker Trail's integrated research offering aims to address these challenges by embedding user research throughout the product development lifecycle. As Sarah explains, this approach yields numerous benefits:

    • Cost-effectiveness: By building customer insights into the product development process from the outset, companies can design and build more effectively, using fewer resources and minimizing costly redesigns or reworks.
    • Continuous learning: User needs and perceptions are not static; they evolve over time. An integrated research approach enables companies to continuously learn and adapt, ensuring their products remain relevant and user-friendly.

    • Smarter decision-making: With user data informing every step of the process, companies can make more informed decisions, validate their assumptions, and ensure their design choices align with user needs and expectations.

    • Streamlined development: By identifying potential issues early on and iterating based on user feedback, companies can streamline their development process, avoiding delays and costly mistakes.

    User research is a critical component of successful product development. By acting as the voice of the customer and providing invaluable insights, user researchers help companies create products that truly meet user needs and deliver exceptional experiences. While the challenges of resource constraints and rushed timelines persist, an integrated approach to user research offers a solution, enabling companies to develop user-centric products more efficiently and cost-effectively. As Sarah Bowlin and Zilker Trail demonstrate, investing in user research is not just a nicety; it's a strategic imperative for companies seeking to stay competitive and deliver products that resonate with their target audiences.

  • Product Development Life Cycle - Engineering

    In Phase Four of the Zilker Trail Product Development Life Cycle, the goal is to build the items designed and prototyped in the Design Phase and to perform Quality Assurance (QA) on the result. Delivering high-quality products while maintaining a rapid pace can be a delicate balancing act. However, as Brian Cahak, founder of Zilker Trail Consulting, emphasizes, the solution lies not in reinventing the wheel but in applying rigorous review processes to existing development practices.

    The Importance of Rigorous Review

    While the concepts of dependency review, code review, quality assurance (QA), and user acceptance testing (UAT) have been around for a long time, it is important to apply a structured and thorough approach to these processes. By implementing rigorous templates and playbooks, development teams can ensure that each step of the review process is executed with precision and attention to detail.

    Dependency Review: A Technical Lens

    One crucial aspect of the review process is the dependency review. This step involves taking a technical lens to assess whether the new designs and implementations have taken into account all the necessary dependencies required to bring the product to life. This proactive approach helps identify potential issues or roadblocks early on, preventing costly delays and rework down the line.

    Code Review: A Standard Development Exercise

    Code review is a well-established practice in the software development lifecycle. By subjecting code to a thorough review process, developers can catch bugs, identify potential performance issues, and ensure adherence to coding standards and best practices.

    Quality Assurance and User Acceptance Testing

    Quality Assurance (QA) and User Acceptance Testing (UAT) are critical steps in ensuring that the final product meets the defined requirements and delivers a seamless user experience.
    It is important to apply the same level of rigor to these processes as well, leveraging templates and playbooks to streamline and standardize the testing procedures.

    The Benefits of Rigorous Review Processes

    By implementing rigorous review processes throughout the development lifecycle, teams can reap numerous benefits:

    • Quality Assurance: Thorough reviews at every stage help identify and address issues early, reducing the likelihood of defects and ensuring a high-quality final product.

    • Efficiency: Well-defined templates and playbooks streamline the review processes, minimizing bottlenecks and enabling teams to move through the development lifecycle with speed and efficiency.

    • Collaboration: Review processes foster collaboration and knowledge sharing among team members, ensuring that everyone is on the same page and working towards a common goal.

    • Risk Mitigation: By identifying potential dependencies, technical roadblocks, and user experience issues early on, teams can proactively mitigate risks and prevent costly delays or rework.

    When these reviews are complete, Engineering hands off the code to the Experimentation team, which will take the lead in ensuring that there is a methodology in place to track the business value of the item being deployed. Delivering quality products at a rapid pace is essential for staying competitive. While the concepts of dependency review, code review, QA, and UAT are not new, what is new is the importance of applying rigorous processes and templates to these practices. By doing so, development teams can strike the perfect balance between quality and speed, ensuring that their products not only meet the highest standards but also reach the market in a timely and efficient manner.

    Next Phase: Product Development Life Cycle - Experimentation
    Related Post: Product Development Life Cycle - Research and Insights

  • Bridging the Alignment Gap: The Key to Successful Product Development

    Companies often face a significant challenge: achieving alignment between the various teams and stakeholders involved in the product development process. This lack of alignment can manifest in misguided strategies, wasted resources, and ultimately, products that fail to resonate with the target audience. In this video interview, we sit down with Sam Wettling, Director of Growth Strategy and Amplitude Practice Lead at Zilker Trail. Sam sheds light on the root causes of misalignment and offers practical strategies for fostering collaboration and cohesion among marketing, product, and data teams.

    The Biggest Challenge: Lack of Alignment

    According to Sam, the biggest challenge he encounters when onboarding new clients is a lack of alignment among teams. It's not a matter of a lack of effort or commitment; rather, it's a case of different departments having divergent perspectives on what constitutes success for their digital products. Marketing teams may prioritize driving interest and enthusiasm, while product teams focus on building features that meet user needs. Data teams, on the other hand, may be more concerned with collecting and analyzing comprehensive data sets, sometimes at the expense of signal-to-noise ratio. This misalignment can manifest itself in various ways, from conflicting goals and priorities to disconnected decision-making processes and even disagreements over which tools and technologies to adopt.

    Symptoms of Misalignment

    While some companies may be aware of their alignment issues, others may be oblivious to the underlying problems. Sam highlights a few telltale signs that companies should watch out for:

    • Lack of dedicated communication channels: When leaders from different teams rarely have one-on-one meetings or dedicated time to discuss their respective goals and initiatives, it's a red flag.
    • Siloed decision-making: If data teams are making decisions without considering the impact on product and marketing teams, or if product teams are launching features without input from marketing and data teams, misalignment is likely.

    • Data hoarding: When data teams prioritize collecting every possible data point out of a fear of missing something important, it can lead to a cluttered and noisy data landscape, making it challenging to extract meaningful insights.

    Bridging the Gap: Fostering Alignment

    To address these alignment challenges, Zilker Trail employs a structured approach that emphasizes open communication, goal setting, and collaborative problem-solving. Here's how we tackle misalignment:

    • Safe spaces for honest conversations: Sam and his team start by creating a safe environment where team leaders from different departments can openly and honestly discuss their challenges without fear of judgment or repercussions.

    • Goals, Problems, and Solutions (GPS) workshop: Zilker Trail facilitates a GPS workshop, where teams align on their shared goals, identify the problems preventing them from achieving those goals, and collaboratively develop solutions.

    • Prioritization and impact assessment: Once all the challenges and potential solutions are laid out, Zilker Trail helps teams prioritize their efforts based on potential impact, ensuring they tackle the most pressing issues first.

    • Celebrating early wins: By guiding teams through achievable early wins, Zilker Trail fosters a sense of shared accomplishment and builds confidence in the collaborative process, paving the way for sustained alignment and progress.

    The Importance of Human Connection

    In today's remote work environment, fostering alignment can be even more challenging as opportunities for casual conversations and in-person interactions are limited. However, Sam emphasizes that remote work should not be an excuse for misalignment.
    Simple gestures, such as scheduling regular check-ins, finding common ground through shared interests, or focusing discussions on the customer experience, can go a long way in establishing human connections and laying the foundation for productive business conversations.

    The Benefits of Alignment

    When teams are aligned, the benefits are numerous and far-reaching. Not only does it lead to more efficient and focused product development efforts, but it also fosters a culture of collaboration, continuous learning, and customer-centricity. Aligned teams are better equipped to identify and address customer pain points, make data-driven decisions, and deliver products that truly resonate with their target audience. This, in turn, can translate into increased customer satisfaction, loyalty, and ultimately, business success. Alignment among marketing, product, and data teams is not a luxury; it's a necessity. By fostering open communication, collaborative problem-solving, and a shared focus on the customer experience, companies can overcome the challenges of misalignment and unlock the full potential of their products. Bridging the alignment gap requires a strategic approach, facilitated by experts who understand the complexities of cross-functional collaboration. By embracing this approach, companies can not only develop better products but also cultivate a culture of continuous improvement and customer-centricity, positioning themselves for long-term success.

  • Product Development Life Cycle - Experimentation

    In the Zilker Trail Product Development Life Cycle, Experimentation is not just a box to be checked or a tool that is periodically used. Instead, in Phase Five, it is a critical step that determines the measurement by which we will evaluate the success of the feature. Once the work is done in the Engineering Phase, we release the feature as an experiment. This could be done either as an A/B test (with one or multiple variants) or as a Feature Flag. The goal is to launch in a way that live data can be gathered to validate the findings from the Design Phase. The Experimentation or Analytics team takes the lead in this phase, creating test plans with clear success criteria and working with the Engineering team on a plan to back out the feature if the data does not validate the original hypothesis. This work should be done with rigor but also with speed, to ensure that the process is not delayed. When this work is complete, the feature is launched, and the Analytics phase begins.

    Next Phase: Product Development Life Cycle - Analytics
    Related Post: Product Development Life Cycle - Research and Insights
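    A concrete piece of the test-planning step is deciding how long a test must run. As a hedged sketch - the baseline rate, minimum detectable lift, and thresholds below are assumptions for illustration, not part of the Zilker Trail methodology - a standard two-proportion sample-size estimate (normal approximation) looks like this:

```python
# Sketch: visitors needed per variant before an A/B test can reliably
# detect a given lift. Standard normal-approximation power calculation.
from math import ceil, sqrt
from statistics import NormalDist

def samples_per_variant(base_rate, min_lift, alpha=0.05, power=0.8):
    """Sample size per variant to detect an absolute lift of `min_lift`
    over `base_rate` at significance `alpha` (two-sided) and `power`."""
    p1, p2 = base_rate, base_rate + min_lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * pooled * (1 - pooled))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return ceil(n)

# Example assumption: detect a 1-point lift over a 5% baseline rate.
n = samples_per_variant(0.05, 0.01)  # roughly 8,000+ visitors per variant
```

    Dividing the result by expected daily traffic gives the minimum test duration, which is useful when writing success criteria into the test plan.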

  • Product Development Life Cycle - Analytics

    In the last phase of the Zilker Trail Product Development Life Cycle, Analytics is used both as a way to look back (determining the value of the feature launched) and a way to look forward (as a catalyst to raise questions that can be addressed in the Research and Insights phase). In this way, we are both learning and resetting in this phase, feeding the life cycle with more questions. This is how the life cycle drives a culture of Experimentation - delivery becomes an extension of learning. The Analytics team takes the lead in this phase, ensuring that appropriate tagging is in place to report on the key events and metrics and determine whether we are delivering on the KPIs. Templates and formalized Ways of Working are critical in this phase, as it is the connective tissue from the launch of one feature to the launch of the next. The team will begin monitoring the data immediately after launch, typically waiting to report results until statistical significance and confidence are reached. Once the data is ready to report, another important function for this team is telling a data story that can help the business understand the results. The data story is much more than charts and numbers - it has to explain how the business should "feel" about the results, in a language that they understand. Remember, we strive to answer these questions: What is happening in the existing experience or in the market? Why is it happening? To whom is it happening? What is the impact and opportunity size? The data story needs to answer these questions. Once the results are published, the information should become institutional knowledge, democratized across the organization to ensure continuous improvement.

    First Phase: Product Development Life Cycle - Research and Insights
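    The "waiting for statistical significance" step is, in its simplest form, a hypothesis test on the experiment's key metric. As a minimal sketch - the conversion counts below are invented, and real programs often use more sophisticated methods - a two-proportion z-test on control vs. variant looks like this:

```python
# Sketch of a significance check: two-sided two-proportion z-test
# comparing control and variant conversion counts (illustrative data).
from math import sqrt, erf

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Assumed example: 5.0% control vs 5.8% variant on 10k visitors each.
p = two_proportion_p_value(500, 10_000, 580, 10_000)
significant = p < 0.05  # only report out once the threshold is met
```

    The data story would then translate the p-value and observed lift into business terms rather than stopping at the statistics.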

  • Product Development Life Cycle - Prioritization

    In Phase Two of the Zilker Trail Product Development Life Cycle, the goal is to prioritize the opportunities from the Research and Insights phase. One person or team (typically a Product Manager or Strategist) is responsible in this phase, and will ultimately decide what is actually going to be built, why it is going to be built, and determine the value generated from building it. This work is done by reviewing all of the insights, problems, and hypothetical solutions and developing a catalog of everything that could be built. But how do you determine what should be built? It is important to develop a methodology that makes sense for your business. At Zilker Trail, we recommend the RICE (Reach, Impact, Confidence, Effort) framework, but we encourage clients to adapt this framework to their particular business needs. The most critical aspect of this phase is to have an agreed-upon and consistent prioritization framework.

    Next Phase: Product Development Life Cycle - Design
    Related Post: Product Development Life Cycle - Research and Insights
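    The RICE calculation itself is simple: score = (Reach × Impact × Confidence) / Effort. The backlog items and numbers below are hypothetical, included only to show how the framework ranks a catalog of candidate work:

```python
# Sketch of RICE scoring over a hypothetical backlog.
def rice_score(reach, impact, confidence, effort):
    """Reach: users affected per period; Impact: scaled estimate
    (e.g. 0.25-3); Confidence: 0-1; Effort: person-months.
    Higher scores rank higher in the prioritized catalog."""
    return (reach * impact * confidence) / effort

backlog = {
    "streamline checkout": rice_score(8_000, 2.0, 0.8, 4),    # 3200.0
    "redesign homepage":   rice_score(20_000, 1.0, 0.5, 10),  # 1000.0
}

# Highest-scoring item is built first.
top_item = max(backlog, key=backlog.get)
```

    Adapting the framework, as recommended above, usually means swapping Reach for a business-specific measure (revenue at stake, accounts affected) while keeping the same ratio structure.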

  • Product Development Life Cycle - Design

    In Phase Three of the Zilker Trail Product Development Life Cycle, the goal is to create design concepts and prototypes of the items chosen in the Prioritization Phase. One person or team (typically a Product Designer or UX Designer) takes the lead in this phase. Our process encompasses the typical design tasks, but adds two critical quality checks to ensure that the designed solution will result in a material improvement over the baseline (what is currently in place). The first check, the Concept Desirability Score, is done at the wireframe or conceptual stage and validates that the concept is actually desirable to your consumers. This score is calculated through a light usability heuristic. It is an absolute score - it is not scored against your current design. Once it is determined that the concept is ready, we move to the next quality check: the Design Quality Score. This check is a relative score against your current design in market, measuring whether the new design is materially better than what is in flight today.

    Next Phase: Product Development Life Cycle - Engineering
    Related Post: Product Development Life Cycle - Research and Insights

  • Product Development Life Cycle - Research and Insights

    In Phase One of the Zilker Trail Product Development Life Cycle, the goal is to identify the largest areas of opportunity for improvement and growth. In this phase, we strive to answer these questions: What is happening in the existing experience or in the market? Why is it happening? To whom is it happening? What is the impact and opportunity size? The UX/CX Researcher sifts through customer research, market insights, and usability insights, in a methodical and consistent way, to provide critical insights into what is really happening out in the world. The output of this phase is an "Opportunity Map", which blends all of the analysis into a bubble chart of the biggest opportunities to solve customer problems and unmet needs.

    Next Phase: Product Development Life Cycle - Prioritization
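    The Opportunity Map itself is a visualization, but the data shaping behind it can be sketched simply. The opportunity names, dollar figures, and scoring dimensions below are invented for illustration - the real inputs come from the customer, market, and usability research described above:

```python
# Hypothetical data behind an Opportunity Map bubble chart:
# x-axis = customers affected, y-axis = severity, bubble size = impact.
opportunities = [
    # (name, revenue impact in $k, customers affected, severity 1-5)
    ("confusing checkout errors", 450, 12_000, 4),
    ("slow search results",       220, 30_000, 3),
    ("unclear pricing page",      600,  8_000, 5),
]

# Rank by estimated revenue impact so the biggest bubbles surface first,
# ready to hand off to the Prioritization phase.
ranked = sorted(opportunities, key=lambda o: o[1], reverse=True)
biggest = ranked[0][0]
```

    The ranked list feeds directly into Phase Two, where a prioritization framework turns these opportunities into a build order.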

  • Introducing The Zilker Trail Digital Product Development Life Cycle

    Zilker Trail founder Brian Cahak draws inspiration from the success of Formula 1 and Toyota to introduce a comprehensive operating system for the digital product life cycle. This system helps address the four key questions executives must ask to drive sustainable digital growth:

    • Who are our best customers?
    • How and where do they struggle the most?
    • Are our new solutions better than the baseline?
    • Do we have the right speed and agility for the next decade?

    What do Formula 1 and the Toyota Production System have to do with digital growth? They serve as proof that, with the right process and some teamwork, astonishing metrics can be attained. Brian also shares insights on:

    • Identifying your most valuable customers and understanding their pain points
    • Evaluating the effectiveness of your new digital solutions compared to industry benchmarks
    • Ensuring your organization has the flexibility and responsiveness to adapt to evolving market demands

    Tune in to the first half of the video below to watch Brian discuss his inspiration for building an Operating System for the Digital Product Development Life Cycle, and why it can work for you. There are three layers to a product development life cycle that every company should adopt:

    • Ensuring the teams have the necessary skills and capabilities to execute the work effectively
    • Developing a common, cross-functional framework to orchestrate the work seamlessly across teams
    • Establishing a foundation for change management where all groups (not just some) can continuously improve, adapt, and grow together

    In the second half of the video below, Brian dives into the operating model and discusses how it can help your organization enhance cross-functional coordination, accelerate innovation, and position your digital offerings for long-term success.

  • Improving The Quality of Data Insights

    "Water, water everywhere, but not a drop to drink." Zilker Trail founder and Head of Service Delivery Jared Bauer makes his interview debut, joining Keith Swiderski to discuss the paradox of organizations having unlimited amounts of data at their fingertips while still being thirsty for insights. Organizations are collecting more and more data - incurring additional resources and costs to house and secure it - yet still lack the basic insights needed to answer vital questions. How do we solve this problem? Before you start wrangling the data, understand the goals of your customer and articulate the business questions that you are looking for the data to answer.

  • Choosing a KPI: A Strategic Approach

    Key Performance Indicators (KPIs) are vital tools for measuring progress and success in any business or project. However, selecting the right KPI is not a one-size-fits-all process; it’s a nuanced decision that requires careful consideration. Many digital and e-commerce teams use Conversion Rate as a primary KPI to measure the effectiveness of sales and marketing efforts. This may not always be the most useful metric, because:

    • It doesn’t account for the quality or value of each conversion. For example, a high conversion rate could be driven by low-value transactions that don’t significantly contribute to profitability.
    • It can overlook other important factors such as customer satisfaction, brand perception, and long-term customer value.
    • It is susceptible to fluctuations caused by external factors such as an increase in low-converting traffic, market trends, or seasonal changes.

    All of these factors may cause conversion rate to not reflect the actual performance of the business. While conversion rate can provide insights, it should be considered alongside other KPIs that offer a more comprehensive view of business health and success. Here’s a guide to help you choose a KPI that aligns with your goals and responsibilities.

    It Depends

    The most appropriate KPI for your situation depends on various factors, including your industry, the specific area of your business you wish to measure, and your strategic objectives. There’s no universal KPI suitable for all scenarios, so it’s essential to analyze your unique situation before deciding.

    Sphere of Control > Concern

    Focus on what you can control rather than what concerns you. While many factors can influence your business, effective KPIs should measure the aspects within your sphere of control. If your KPI is outside of your sphere of control, then your team is more of a "fan" of the KPI - rooting from the sidelines for it to grow - instead of a direct driver of its growth.
    Focusing on KPIs in your sphere of control ensures that you’re tracking metrics that reflect your direct actions and decisions.

    Removes External Events/Forces

    A well-chosen KPI should be insulated from external events and forces as much as possible. This isolation allows for a clearer assessment of performance, free from the noise of uncontrollable external factors. When considering a KPI, think about whether the number could rise or fall due to external events - if the answer is no, then you're on the right track.

    Reflects Your Scope of Work, but Contributes to Larger Goal

    Your KPI should mirror your scope of work while also contributing to the broader organizational goals. It’s crucial to ensure that your KPI is not only relevant to your immediate tasks but also feeds into the larger objectives of your company.

    Look for a Problem, Construct a Solution, Use a KPI to Measure

    Identify a problem area within your scope, devise a solution, and then select a KPI to track the effectiveness of your solution. This problem-solving approach ensures that your KPI has a clear purpose and provides actionable insights. Choosing the right KPI is a strategic exercise that can significantly impact the effectiveness of your performance measurement. By considering the factors mentioned above, you can select a KPI that provides valuable insights, drives improvement, and aligns with your business objectives. Remember, the goal is not just to have a KPI but to have the right KPI that propels your business forward. Reach out to Zilker Trail Consulting today to discuss choosing the right KPIs and how we can help grow them.

    Related video: Choosing a KPI
    Related video: Why Conversion Rate Is a Bad Metric
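    The first limitation of conversion rate - low-value transactions inflating it - is easy to see with numbers. The traffic volumes and order values below are hypothetical, chosen to show how revenue per visitor can tell the opposite story:

```python
# Illustrative comparison (made-up numbers): conversion rate vs.
# revenue per visitor as a measure of business performance.
def conversion_rate(orders, visitors):
    return orders / visitors

def revenue_per_visitor(orders, visitors, avg_order_value):
    return orders * avg_order_value / visitors

# Scenario A: 5% conversion rate, but low-value $20 orders.
cr_a  = conversion_rate(500, 10_000)          # 0.05
rpv_a = revenue_per_visitor(500, 10_000, 20)  # $1.00 per visitor

# Scenario B: only 3% conversion, on higher-value $60 orders.
cr_b  = conversion_rate(300, 10_000)          # 0.03
rpv_b = revenue_per_visitor(300, 10_000, 60)  # $1.80 per visitor

# B "loses" on conversion rate yet generates 80% more revenue per visitor.
```

    This is why the guide above recommends pairing conversion rate with KPIs that capture transaction value and long-term customer value.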
