How to Judge a Demo When Everyone’s Saying the Same Damn Thing

The barrier to building a minimum viable product feature, slapping that feature on a webpage, and lighting up the “request demo” button is lower than ever, making buyers’ evaluation processes extremely tricky. Here are three tips to help you find the signal through the noise.

Didn’t I just see this yesterday? I took a long swig of coffee and checked my watch. 8:55am. I did stay up a bit later than usual last night to catch Lohan Beach Club. Maybe I’m just tired.

Deep breath. Focus on the demo. “So this feature is proprietary…”

Ok, I definitely think I saw something like this… yesterday? Maybe I just need more coffee (or less Lohan Beach Club).

These were the thoughts creeping through my mind as I watched a demo for a product I was evaluating recently. After checking with a few friends who also purchase B2B technology, my suspicions were confirmed.

Basically, tech product demos in 2019 are the professional version of mumble rap – everyone sounds the same, and they’re mostly annoying.

In every industry, but SaaS in particular, the barrier to building a minimum viable product feature, slapping that feature on a webpage, and lighting up the “request demo” button is lower than ever, making buyers’ evaluation processes extremely tricky.

Having been on both sides of this demo equation, I know how frustrating this can be, so I decided to lay out three tips to help you find the signal through the noise:

  1. Look for connections between features and strategic outcomes
  2. Account for end user needs to ensure adoption
  3. Weight your requirements with a scorecard

1. Look for connections between features and strategic outcomes

A strategic outcome is one that has a meaningful, measurable effect on a buyer’s business. It’s often broader than the point-problem you set out to solve, spanning multiple point-problems and opportunities for growth.

Strategic outcomes are ultimately what VP and C-Suite executives care about and how they evaluate the importance of the technology investments made by their directors and managers.

If a vendor doesn't demonstrate multiple ways you can use their product to impact a strategic outcome over time... then maybe you can't.

Say you’re searching for a home landscaper and ask for “demos” from two vendors.

Vendor A says “I can mow your lawn. I’ve got a great mower. Sharpest blade in town. Your lawn will look like Yankee Stadium when I’m done.”

Vendor B says, “Tell me about what you’re trying to accomplish.”

Vendor B uncovers that you’re preparing your home for sale, and that you’ve got open houses starting in a few months. Vendor B then shows you how they can mow your lawn. They also suggest that between now and the open houses, you get your mulch topped off, edging trimmed, and driveway resurfaced, because other homeowners looking to sell have used these tactics to increase their homes’ perceived value.

Sellers leading “feature-demos” connect with a point-problem (I need to get my lawn mowed) that buyers have defined.

Sellers leading value-demos not only show how they can help address the point-problem, but also help buyers broaden their scope to consider the strategic outcome (I need to maximize the sale price of my home, as quickly as possible).

[Image: the lawn care example, contrasting a feature-demo with a value-demo]

They’ll help you quantify how big the outcome is, and help you think through the many different components of achieving that outcome.

Lastly, they’ll offer solutions and help you create a roadmap for how you’ll get there over time.
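
To make “quantify how big the outcome is” concrete, here’s a minimal back-of-the-envelope sketch in Python using the home sale example above; every number in it is hypothetical, so substitute your own.

```python
# Back-of-the-envelope math for the home sale example (all numbers hypothetical).
expected_sale_price = 400_000   # baseline sale price
perceived_value_lift = 0.03     # assumed 3% lift in perceived value from landscaping
landscaping_cost = 2_500        # mowing + mulch + edging + driveway resurfacing

gross_impact = expected_sale_price * perceived_value_lift
net_gain = gross_impact - landscaping_cost
print(f"Gross impact: ${gross_impact:,.0f}, net gain: ${net_gain:,.0f}")
# -> Gross impact: $12,000, net gain: $9,500
```

A value-demo seller will walk through this kind of math with your numbers, which is what makes the resulting roadmap credible.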

2. Account for end user needs to ensure adoption

It’s very easy for competing vendors to say they have certain features. It’s much more difficult for them to demonstrate those features clearly, going into deep detail on exactly how end users operationalize those features day to day.

The last part of that statement, how end users operationalize those features day to day, is critical.

A mistake I’ve made in the past when evaluating new technologies is focusing primarily on what I (the evaluator and administrator of the technology) care about, and inadvertently focusing less on what the end users of the technology care about.

Vendors would spend a ton of time demonstrating what I needed and how I would benefit, and I’d optimize for what made me successful without accounting for all the ways the end users’ lives and performance could have been improved.

By not focusing enough on the ultimate end users, you risk making an investment that will not be widely adopted, regardless of how many training sessions you run or strongly worded emails you send.

You’ll often hear complaints (trust me) that will either escalate into loud arguments (bad) or end in a new vendor search a few short months later (not as bad, but still not good).

During your evaluation process, be sure to account for end user adoption:

  • Make the vendor demonstrate how their solution helps not only you, but also the end users. How many day-to-day tasks and responsibilities could be impacted? Just one, or many? The more useful the solution, the more it will be used.
  • Ask for data on adoption. KPIs such as MAU (monthly active users) and DAU/MAU (the percentage of monthly active users who actually use the technology daily) are great indicators of how much the product’s current users love it; a quick example of that math follows below.
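
To make that second bullet concrete, here’s a minimal sketch of the stickiness math you can run on whatever adoption figures a vendor shares; the numbers below are hypothetical.

```python
# Hypothetical adoption figures a vendor might share during an evaluation.
monthly_active_users = 10_000  # MAU: unique users active in the past month
daily_active_users = 4_000     # DAU: unique users active on a typical day

# DAU/MAU ("stickiness"): the share of monthly users who show up daily.
stickiness = daily_active_users / monthly_active_users
print(f"DAU/MAU: {stickiness:.0%}")  # -> DAU/MAU: 40%
```

The higher that ratio, the more the product has become part of its users’ daily routine rather than shelfware.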

3. Weight your requirements with a scorecard

As you consider the many ways in which you can impact a strategic outcome, you’ll build up a list of the features required to do so.

Because not all requirements are created equal, I’ve found it’s helpful to weight each one by its relative importance to the organization. Doing so will allow you to determine which features will have the greatest collective impact in the short and long term, and to rank how well each vendor has actually demonstrated how you’ll use each one.

This is a lot to consider, so here’s a handy-dandy scorecard you can use during your evaluation process:

[Scorecard: each requirement weighted by importance, with a demonstration score per vendor]
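
If you’d rather see the mechanics than a template, here’s a minimal sketch of that weighted tally in Python; the features, weights, and scores are all hypothetical, chosen to mirror the Vendor A vs. Vendor B discussion below.

```python
# Hypothetical requirements, weighted 1-5 by importance to the organization.
weights = {"Feature 1": 5, "Feature 2": 3, "Feature 3": 3, "Feature 4": 2}

# Hypothetical 1-5 scores for how well each vendor actually demonstrated
# (not just claimed) each feature during its demo.
demo_scores = {
    "Vendor A": {"Feature 1": 5, "Feature 2": 1, "Feature 3": 1, "Feature 4": 1},
    "Vendor B": {"Feature 1": 3, "Feature 2": 4, "Feature 3": 4, "Feature 4": 2},
}

# Weighted total = sum of (importance weight x demonstrated score) per feature.
for vendor, scores in demo_scores.items():
    total = sum(weights[feature] * scores[feature] for feature in weights)
    print(f"{vendor}: {total}")
# -> Vendor A: 33, Vendor B: 43. Vendor A wins the single most important
#    feature, but Vendor B's broader coverage wins the collective score.
```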

Collective impact is an important consideration in terms of future-proofing your investment. In the example scorecard above, the barrier to Vendor A building one key feature is likely much lower than the barrier to Vendor B building three of them.


By optimizing for Feature 1 in the short term, you could potentially miss out on a much more scalable solution over the long term.

Final thoughts

I’ve often found that when it seems like everyone is saying the same thing, if you peel back the onion, you’ll realize that “seems” is the operative word.

Sure, a Honda and a Mercedes can both check the box for “has four wheels and gets from point A to point B.” It’s only once you look under the hood and take each one for a test drive that you can see the real differences, and determine which one you’ll love driving for the next few years.

Hopefully these three tips help you do so in your next eval. What tactics have you used to help differentiate between competing demos? Share them in the comments so we can all learn.

