You’ve built a business, traffic is flowing, and the funnel is selling, but you’re still not where you want to be. At this stage, even a 0.5–1% lift in conversion can unlock thousands of dollars in additional revenue. The question is ‘how?’
To reach these goals, you need to stop relying on gut feeling and start treating conversion rate optimization (CRO) as a growth lever to extract more value from your visitors.
Your focus should shift to data‑driven practices such as clean pre‑ and post‑experiment measurements and structured experimentation. That’s exactly what Olive 8 Group’s high‑impact CRO process delivers.
The largest digital brands operate with a CRO focus. Amazon now runs well over 10,000 experiments annually, and Booking.com reports thousands of concurrent tests, turning tiny uplifts into billions in additional revenue over time.
Olive 8 Group has found that high-impact experimentation is the most effective way to improve CRO. Rather than testing based on hunches, our structured, data-driven approach uses predefined methods to experiment with variables, determine which marketing strategies produce the best outcomes, and generate scalable results.
Our methods go beyond A/B testing, incorporating quantitative and qualitative research to develop hypotheses and prioritize the most effective frameworks.
Quantitative Research
Quantitative research uses numerical data and statistical analysis to understand behavior and performance and to test hypotheses. It can measure metrics such as conversion rate, cost per acquisition, revenue per user, or survey ratings on a 1-10 scale, and it plays a valuable role in assessing experimental data.
The main difference from qualitative research is that quantitative research produces objective, statistically reliable data points, while qualitative data is stronger at explaining the ‘why’ behind a result (e.g., users were confused by the navigation system, which is why they dropped off). We integrate the following tools to measure success rates:
Funnel Drop-Offs
Funnel drop-offs are a metric we closely monitor. Heat maps and analytics allow us to determine where we lose traction in the funnel, whether it’s on the product page, at checkout, or at another step in the journey.
Once identified, we can accurately measure conversion and drop-off rates to pinpoint where the largest losses occur and what’s causing them. We also gain insight into the audience and behaviors (e.g., engagement with paid search campaigns, returning visitors, visitors from city X or Z).
When we dive deeper into these data points, we ask questions like:
- What is our “Golden Path”? What specific pages, content, and touchpoints do our successful buyers interact with before purchasing?
- What are the three critical drop-off points? Is the drop-off tied more to specific browsers or devices, or to certain times of day, days of the week, seasons, or campaigns?
- Do specific countries, languages, or audience segments show disproportionately high drop-offs?
Our tools also allow us to segment users, enabling us to identify external factors that may impact engagement. We can monitor changes over time to see how implemented changes have improved or worsened the customer journey, and what to focus on next.
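The arithmetic behind this analysis is simple: each funnel step is compared with the step before it. The sketch below shows one way to compute step-to-step conversion and drop-off rates; the funnel steps and user counts are invented for illustration.

```python
# Hypothetical funnel: unique users reaching each step, in order.
funnel = [
    ("product_page", 10_000),
    ("add_to_cart", 2_400),
    ("checkout", 1_100),
    ("purchase", 700),
]

def drop_off_report(steps):
    """Per-step conversion and drop-off rates relative to the previous step."""
    report = []
    for (prev_name, prev_n), (name, n) in zip(steps, steps[1:]):
        conv = n / prev_n
        report.append({
            "step": f"{prev_name} -> {name}",
            "conversion_rate": round(conv, 3),
            "drop_off_rate": round(1 - conv, 3),
        })
    return report

for row in drop_off_report(funnel):
    print(row)
```

Here the largest loss is product page to cart (76% drop-off), which is where a heatmap or session-recording review would focus next.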
Performance & User Experience
Many companies wonder: Is my site overcoming technical friction and delivering a fast, intuitive, and delightful experience that builds user trust and drives conversion?
We help them answer this question by analyzing multiple data points. Our system considers the combined aspects of performance metrics, UX diagnostics, and behavioral data rather than looking at them in isolation.
Analyzing GA4
Google Analytics 4 (GA4) is Google’s newer analytics platform. We like it because it tracks events and users instead of page views and sessions. It revolutionizes how we view data and CRO, offering the following features.
- Event-Based vs Page-View-Based: GA4 treats everything that happens on your website as an event, such as page loads, scrolls, clicks, and purchases, with parameters like scroll percentage, product of interest, and currency. It’s a flexible system that allows you to monitor any interaction, from the biggest to the smallest, and determine which ones matter and what they mean.
- User Journey Focused: The platform lets you view every activity as part of the user journey rather than as individual sessions. You can set user IDs, Google signals, and device IDs to measure lifetime value, retention, and cross-device journeys.
- Greater Customization in Reporting: Initial GA4 reports seem limited and simple. However, they are designed to encourage you to use the Explore area for greater customization and insights, creating more tailored results.
- Flexibility in Conversion Identification: As a flexible system, GA4 lets you define your own events, such as sign-ups, purchases, and lead generation. You can use it to track how experiments impact business outcomes.
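GA4’s event model can be illustrated with its Measurement Protocol, which accepts events as JSON sent from your own servers. The sketch below only builds the payload rather than sending it; the measurement ID, API secret, client ID, and event parameters are all placeholders you would replace with your stream’s values.

```python
import json

# Placeholder credentials -- replace with your GA4 data stream's values.
MEASUREMENT_ID = "G-XXXXXXXXXX"
API_SECRET = "your-api-secret"

def build_ga4_event(client_id, name, params):
    """Build a GA4 Measurement Protocol payload for one event."""
    return {
        "client_id": client_id,  # anonymous browser/device identifier
        "events": [{"name": name, "params": params}],
    }

payload = build_ga4_event(
    client_id="555.12345",
    name="purchase",
    params={"currency": "USD", "value": 49.99},
)

# Not sent here: this JSON would be POSTed to
# https://www.google-analytics.com/mp/collect?measurement_id=...&api_secret=...
print(json.dumps(payload, indent=2))
```

Because every interaction is just a named event with parameters, custom conversions like sign-ups or lead submissions follow the exact same shape as the purchase event above.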
Heatmaps (Clarity/Hotjar)
Heatmaps are vital for experimentation, showing you areas of your website that get a lot of activity and those that don’t. We utilize both Hotjar and Microsoft Clarity.
- Clarity: Microsoft Clarity is more focused on user experience (UX), measuring things like rage clicks, dead clicks, JavaScript errors, and excessive scrolling. We can use it to identify UX problems that impact conversion.
- Hotjar: We value Hotjar as a behavioral analytics tool. It records individual visits, offers online surveys and feedback widgets, and basic funnel and form analytics, providing both quantitative and qualitative metrics.
Both platforms allow us to segment by device, browser, country, language, and traffic source, so we can determine how behaviors differ based on these demographics.
Qualitative Research
Qualitative research is less measurable, but it is just as important as quantitative research in improving CRO. It focuses on understanding people’s experiences, motivations, and perspectives through words, images, and observation. Combined with quantitative research, it helps us double down on important themes and explains the direction behind the numbers, showing why a trend is positive or negative.
Our qualitative testing combines user experience flows (information architecture and on-site behavior), exit surveys, customer interviews, and card sorting exercises. These methods provide story-driven insights that explain why users act the way they do, informing clearer navigation and information architecture that reduce friction and support higher conversion rates.
User Testing
As the name suggests, user testing involves observing real people use your product to identify pain points that may not be obvious from analytics alone. Various methods can be used, such as:
- Moderated Testing: Users try out products in a live setting, allowing moderators to ask questions and request specific actions.
- Remote Testing: Users try out products on their own devices, often on video, and may answer survey questions to provide feedback on their experience.
- Prototype Testing: Products are tested in the early stages, before they are brought to market, so changes can be made before widespread exposure to the public.
Exit Surveys
These short questionnaires are given to people at the end of an experience to determine why they are leaving. They may pop up when:
- People navigate away from a web page
- Someone cancels a subscription
- Employees leave a job
- A program, event, or service ends
The goal is to determine why they chose to discontinue the activity and what could have been done to change their minds.
Customer Interviews
Customer interviews are structured conversations with existing and potential customers conducted to better understand their needs, motivations, and experiences. They typically take place in one-on-one or small group settings. Rather than yes-or-no questions, the discussion is open-ended, encouraging the interviewee to share their experience with your product.
These interviews are typically used to assess:
- Pain Points: Identifying problems and their severity
- Feature Favorites: Determining which product features matter most
- Positioning and Messaging: The feedback helps brands determine their products’ benefits and drawbacks, informing more effective marketing campaigns
- Customer Journey Mapping: Brands can learn how the consumer experiences the customer journey to identify influential touchpoints
- UX & CRO Improvement: The interviews can help companies learn where consumers commonly drop off and determine sources of confusion and poor usability
Card Sorting
Card sorting asks users to group and label content in ways that make sense to them, revealing their mental models and informing clearer navigation and information architecture that support higher conversion.
The Hypothesis
We translate everything we’ve learned into a testable hypothesis using an if-then-because model. The structure can be broken down as follows:
- If: The proposed variation we plan to test
- Then: The primary metric we expect to move, ideally with an estimated effect size
- Because: The evidence behind the prediction, drawn from qualitative and quantitative research
The ‘because’ is crucial and should be pulled from multiple research methods, including:
Quantitative Research
- GA4/Shopify funnels can reveal where drop-offs spike
- Conversions, clicks, and bounce rates show what’s driving action and engagement
- Segment differences help brands determine how certain demographics impact activity
- Heatmaps reveal what’s keeping users engaged and what’s not
Qualitative Research
- User tests allow users to indicate your product’s advantages and disadvantages
- Customer interviews identify patterns among various users
- Exit surveys indicate why users are not converting
- Open-ended feedback, like support tickets and chat logs, can also be insightful
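The if-then-because structure can be captured as a simple record so every experiment carries its change, expected effect, and evidence together. A minimal sketch follows; the checkout-form scenario, the 5% estimate, and the 40% drop-off figure are all invented examples, not client data.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """If-then-because hypothesis; field values below are invented examples."""
    change: str    # "If": the proposed variation
    effect: str    # "Then": primary metric plus estimated effect size
    evidence: str  # "Because": supporting quantitative/qualitative research

    def statement(self):
        return (f"If we {self.change}, then {self.effect}, "
                f"because {self.evidence}.")

h = Hypothesis(
    change="shorten the checkout form from 12 fields to 6",
    effect="checkout completion rate will rise by roughly 5%",
    evidence="GA4 shows a 40% drop-off on the form step and exit surveys cite its length",
)
print(h.statement())
```

Writing the hypothesis this way forces the ‘because’ to be stated explicitly, so a test cannot be queued without its research backing.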
Prioritization
The process uses scoring models to determine which experiments to run first. We generally apply ICE or PIE as follows.
ICE (Impact, Confidence, Ease) Framework
- Impact: How much could the test improve the target metric, such as conversion rate, revenue, or lead volume, if successful?
- Confidence: How confident are you that the change will have the desired impact? This is typically based on data, qualitative research, and past experiment results.
- Ease: How easy is the change to implement, considering development, design, time, and risk?
Each factor should be rated on a scale from 1 to 10.
PIE (Potential, Importance, Ease) Framework
- Potential: Measures the potential for improvement based on current performance, usability issues, and the severity of problems
- Importance: How valuable is the potential change in terms of traffic volume, revenue, and overall value?
- Ease: Similar to ICE, this measures how easy the change is to implement
Though the two are similar, ICE tends to be more effective when you already have strong research or past data supporting high confidence that a change will work. PIE may be preferable in the early stages of experimentation, when you are unsure where to start: it weighs current performance against potential and traffic value, and it is less dependent on a projected impact.
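Both frameworks reduce to combining three 1-10 ratings into a single score and ranking the backlog by it. A minimal sketch, averaging the ratings (some teams multiply them instead), with a hypothetical backlog of experiment ideas:

```python
def ice_score(impact, confidence, ease):
    """Average of three 1-10 ratings; multiplying is a common alternative."""
    return round((impact + confidence + ease) / 3, 2)

def pie_score(potential, importance, ease):
    """PIE uses the same 1-10 scale with different criteria."""
    return round((potential + importance + ease) / 3, 2)

# Hypothetical backlog: (experiment idea, ICE score from 1-10 ratings).
backlog = [
    ("Simplify checkout form", ice_score(9, 7, 6)),
    ("Rewrite hero headline", ice_score(5, 6, 9)),
    ("Add trust badges", ice_score(4, 5, 8)),
]

# Run the highest-scoring experiments first.
ranked = sorted(backlog, key=lambda item: item[1], reverse=True)
for name, score in ranked:
    print(f"{score:>5}  {name}")
```

The scores are only as good as the ratings behind them, which is why the Confidence and Potential inputs should come from the quantitative and qualitative research described above rather than guesswork.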
Olive 8 Group Maturity Method
Olive 8 Group incorporates quantitative and qualitative research, heat maps, hypotheses, and prioritization into our maturity method. This four-stage process builds toward high-quality, responsible experimentation as follows:
- Stage 1: Ad Hoc: A/B testing is integrated occasionally, without shared processes or tracking
- Stage 2: Structured: Tests follow a basic template of hypothesis, primary metric, and guardrails
- Stage 3: Embedded: Testing is baked into product and campaign road maps; themes such as pricing and onboarding are subject to a series of tests
- Stage 4: Scaled: Multiple teams test in parallel, and impact is tracked in a leadership-reviewed test portfolio
We find our experimentation works best under the following circumstances:
- Velocity: The target is more tests, stronger tools, and results that leaders can understand, supported by standard mechanisms.
- Governance and Standards: Clear rules are established with sound hypotheses, defined metrics, and sensible segments, ensuring results are actionable.
- Impact From Portfolios: The biggest wins come from clear goals and themes, rather than random, one-off experimentation, supporting scalability.
- Accessible Testing: Encourage experimentation and support decision-making with efficient systems, effective tracking, and reliable dashboards.
- Embed Experimentation in the Product: The biggest benefits come when testing is built into the product itself, through apps, navigation, sign-ups, and pricing, going beyond ads and landing pages.
Want to learn more about our experimentation and effective marketing strategies? Contact us to set up a consultation. We will audit your company to identify pain points and typically uncover 3–5 high-impact experiments within the first 30 days, providing a clear roadmap for effective CRO.
