Selected Project:

Scrive


How to eliminate 10,000 technical support requests
by designing an algorithm that expects to fail.


Summary:

Result:
· 99.98% less technical support.
· 499,900% more scalability.

Type of Project:
Electronic signing of contracts, for retail stores. (Enterprise SaaS)

Delivered:
Product prototype and new production-ready admin tools.

Length of Project:
4 months

Team:
Joel Marsh, one developer, and the CPO. Marketing/branding work also included a UI designer.

Joel’s Roles:
Research, algorithm analysis, UX design, product marketing & branding strategy.

Fun fact: The CPO in this project handled all 10,000 user errors per year personally. And that was not in his official job description.

Context:

Scrive Go is an ingenious solution for signing digital contracts in retail stores. That might not sound like a hard thing to solve, but like most of the cases on this website, it’s not as easy as it might sound.

In fact, as of 2017, when we worked on this project, Scrive was the only company in the world that had solved it in a good way.

For stores that require a signed customer agreement — like phone contracts or car rentals — actually signing the damn thing is the worst part of the experience. It’s slow, it’s tedious, and it’s the legal document that holds the whole business together, so you have to consider fun things like international contract law and fraud.

Other consumer-driven services, like DocuSign or Hello Sign, would require a huge technical integration to handle signing in retail, and that kind of integration typically costs millions of dollars and takes years to complete properly.

Scrive can set it up in a day or two, essentially for free, using an enterprise retail product called Go.

Scrive Go works with any existing retail system, any existing contracts, and can be learned by an 18-year-old store employee in a few minutes. Clever!

However… although the first version of Scrive Go worked well for early customers (a few hundred thousand contracts per month), it required an unsustainable amount of manual support from Scrive, which made it impossible (and even scary) to try to sell more of it.

Our job was to make this clever, one-of-a-kind technical product scalable.

This project is an amazing example of a UX idea called heuristics (a concept that actually comes from cognitive science). By changing the way we looked at the problem (the heuristic we were using to solve it), we eliminated 99.98% of all technical support.

 


 

Problem:
10,000 Critical User
Errors Per Year

Solution:
Change the core algorithm
so it expects to fail.

Wait, so it expects to fail? Yes! We reduced user errors by 99.98% per year, by designing an algorithm that expects to fail.

Remember those notes, when you were in school, that said “Do you like me? YES / NO”? Did you ever see someone add a third option (“MAYBE”) and send it back?

That was the solution to this problem. Like schoolyard love, processing documents is not always so black and white.

To use Scrive Go, a retail employee opens a document and selects Print from the menu, as usual. But instead of being printed on paper, the document is “printed” to a fake printer (Scrive’s algorithm), and it shows up on a tablet instead, ready to be signed with your finger. Between printing and the tablet, the algorithm decides how to handle the document.
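Scrive’s actual implementation is proprietary, so the little Python sketch below is purely illustrative, with every name invented. It only shows the shape of that flow: the print job gets intercepted by the “fake printer”, and the algorithm decides where it goes.

```python
# Purely illustrative sketch of the print-to-sign flow described above.
# All names are hypothetical; this is not Scrive's actual implementation.
from dataclasses import dataclass

@dataclass
class PrintJob:
    filename: str
    text: str  # text extracted from the "printed" document

def looks_signable(job: PrintJob) -> bool:
    # Placeholder for the real evaluation step, which is confidential.
    return "agreement" in job.text.lower()

def route_print_job(job: PrintJob) -> str:
    """The 'fake printer' receives the job instead of paper and routes it."""
    if looks_signable(job):
        return "send to tablet for signing"
    return "not a signable document"

print(route_print_job(PrintJob("contract.pdf", "Customer Agreement ...")))
```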

In the first version, every time a user tried to sign something that wasn’t an approved contract, it failed, and a human had to check the problem to make sure a real contract wasn’t failing.

Although a genuine failure was rare, it could cost a customer thousands of dollars per hour, so it couldn’t be missed, and it was usually caused by people, not the software, so it couldn’t be predicted either. However, other random nonsense was printed 10,000 times per year, and there was no way to tell whether it was a picture of Batman being printed, or a new contract that wasn’t set up yet.

You can see how that might be annoying for the tech support people at Scrive.

As always, we started with user research. We needed to understand exactly what people were trying to print, and why they were making mistakes.


 

When you have eliminated the impossible, whatever remains, however improbable, must be the truth.

— Sherlock Holmes


 

We came to an unexpected conclusion: users weren’t making mistakes. We were. In real life, retail employees print a lot more than contracts, and the developers (who were excellent, smart people) just hadn’t considered that.

When this problem was first raised, we got two reactions from every technical person in the company: 1) it is too complex to handle “any” document, and 2) therefore machine learning is the answer. Both were wrong, but that wasn’t obvious to anyone except the people who had done the research.

There are two ways to fix user errors: change the users or change the errors. It was a human problem; we needed to do the latter.

Our deep research into the problem had given us a different intuition about the solution: we needed to change the heuristic that was used to evaluate the documents coming in.

The algorithm was asking the wrong question.

Originally, the algorithm was designed to ask “which contract is this?” when it should have been asking “is this a document we can sign?”

The algorithm assumed it was getting a contract, and that was the mistake.

What we really needed was an algorithm that could handle a contract when it got one, but didn’t automatically assume that it was getting one.

Huge difference.

The specific details of what we did are confidential, but after a deep dive into the design of customer agreements, the legal requirements behind them, and how those documents had changed over many years, we discovered that real contracts can only fail in certain ways, which depend on the customer’s business, not the document itself. By knowing that, we were able to design an algorithm that handled those types of failures and ignored the rest. In a way that machine learning never could.

It was a radical improvement.

Now the system is nearly foolproof. By allowing it to “fail” in a variety of expected ways, it only truly fails when something is “almost” a contract, which is the only scenario where a person should actually look at the problem manually.

A couple times per year. Maybe.
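If you want a concrete picture of the change: the real rules are confidential, so the sketch below is invented, but it shows the shape of the new heuristic. Instead of raising an error for everything it can’t match, the algorithm treats “not a contract” as a normal, expected outcome and only asks for a human when something is almost a contract.

```python
# Illustrative only: the markers and thresholds below are invented to show
# the shape of the new heuristic, not Scrive's actual rules.
from enum import Enum, auto

class Outcome(Enum):
    SIGNABLE = auto()          # a valid contract: send it to the tablet
    EXPECTED_FAILURE = auto()  # clearly not a contract: ignore it quietly
    NEEDS_HUMAN = auto()       # "almost" a contract: the only case worth escalating

REQUIRED_MARKERS = ("customer agreement", "signature", "terms")

def evaluate(text: str) -> Outcome:
    """New heuristic: ask 'is this a document we can sign?' and expect the answer to often be no."""
    matches = sum(marker in text.lower() for marker in REQUIRED_MARKERS)
    if matches == len(REQUIRED_MARKERS):
        return Outcome.SIGNABLE
    if matches == 0:
        return Outcome.EXPECTED_FAILURE  # e.g. a picture of Batman
    return Outcome.NEEDS_HUMAN           # close enough that a person should look

# The old heuristic asked "which contract is this?" and treated everything
# it could not match as an error for a human. 10,000 times per year.
print(evaluate("Picture of Batman"))                           # Outcome.EXPECTED_FAILURE
print(evaluate("Customer Agreement ... signature ... terms"))  # Outcome.SIGNABLE
```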


Problem:
Slow set-up and difficult troubleshooting.

Solution:
Design the admin to handle
data, not documents.

The old admin was designed to support the old algorithm. It provided everything you needed to make each contract unique — so the computer could tell the difference between Contract A and Contract B — and manage the errors when it wasn’t either of those contracts. 10,000 times per year.

However, that heuristic was wrong.

As we discovered during user research, that kind of thinking wasn’t a good fit for real life. Contract A and Contract B might be identical but have different stakeholders internally, or Contract B might be Contract A plus a weekly promotion that changes all the time, or Contract A and Contract B might be the same, but customised for different countries, languages, currencies, and so on.

In other words, trying to make contracts “unique” was insanely complex for a human admin.

In UX, “complex for a human” is unacceptable.

With the new algorithm, the admin didn’t have to worry about making documents unique anymore. It was about data now. By re-designing the admin so it would handle the data extracted from contracts (rather than comparing the documents themselves), the entire system became much simpler.

New heuristic, new UX.

Now, when tech support gets an error because something isn’t quite right, the admin only needs a few minutes to make it work properly, and all the contracts that “failed” in the same way can be instantly corrected — as if the errors never happened.
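Here too, the real admin is confidential, so this is only an invented sketch of the idea: the admin works on the data extracted from contracts, and one small fix covers every document that “failed” for the same reason.

```python
# Illustrative only: an invented data model showing the idea of an admin that
# works on extracted data instead of unique document templates.

# Jobs that "failed", described by the data that was missing, not by document.
failed_jobs = [
    {"customer": "Acme Telecom", "missing_field": "price_plan"},
    {"customer": "Acme Telecom", "missing_field": "price_plan"},
    {"customer": "Acme Telecom", "missing_field": "price_plan"},
]

field_mappings = {}

def fix_mapping(customer: str, field: str, label: str) -> None:
    """A few minutes of admin work: map the missing field to its label in the document."""
    field_mappings[(customer, field)] = label

def can_reprocess(job: dict) -> bool:
    """Every job that failed the same way is corrected by the same fix."""
    return (job["customer"], job["missing_field"]) in field_mappings

fix_mapping("Acme Telecom", "price_plan", "Monthly plan:")
print([can_reprocess(job) for job in failed_jobs])  # [True, True, True]
```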

The full-time tech support role has now become minutes of work, a couple times per year, making room for thousands of customers that could never have been supported before.

 
