February 3, 2022
Centaurs, Block Castles, and the Wisdom of Grown-Ups
Digital Workers: Substitution or Force Multiplier?
5 min read
I sat cross-legged on the floor in front of a house of blocks that had the structural integrity of a Jenga tower on the final turn of the game when I heard "Jelly, help me build the house taller." I shook my head and said "hmm... don't think that's gonna be possible." Sure enough, the next block on the tower sent the structure toppling, and it was time to start over.
"This time, why don't we start with a wider base and a narrower top, like a pyramid?" I said, drawing upon the wisdom of a grown-up. Unsurprisingly, the pyramid shape held up much better than the previous structure, and I watched as the next several towers, built on the same principle, held up relatively well. Could this lovable 3-year-old have eventually figured out the core tenets of castle engineering alone? Of course. But wasn't it faster to help her along through teaching? Definitely.
Over the past several years I've spent a good chunk of time training machine learning models to "learn" how to do the work of humans. From reading documents, to classifying images, to categorizing fraud, the process is usually pretty similar:
Find a labeled dataset with the correct answers already determined (the bigger the better!).
Tell the computer to crunch through these examples and "learn" how to do the task.
Test the machine's performance and see how well it learned on its own.
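The three steps above can be sketched in a few lines of code. This is a toy illustration only, not the actual system described in this post: the dataset, the nearest-centroid "model," and all names are made up for the example.

```python
# Minimal sketch of the workflow above: (1) a labeled dataset,
# (2) "learning" from it, (3) testing on held-out examples.
# A nearest-centroid classifier stands in for a real model.

def train(examples):
    """Step 2: 'learn' by averaging the features of each class."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Classify a new example by its closest class centroid."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

# Step 1: a labeled dataset with the correct answers already determined.
labeled = [([1.0, 1.0], "cat"), ([1.2, 0.9], "cat"),
           ([5.0, 5.0], "dog"), ([4.8, 5.2], "dog")]

# Step 3: test performance on examples the model has never seen.
model = train(labeled)
held_out = [([1.1, 1.0], "cat"), ([5.1, 4.9], "dog")]
accuracy = sum(predict(model, f) == y for f, y in held_out) / len(held_out)
```

The held-out test in step 3 is exactly where the problems described below show up: accuracy looks great until the model meets something outside its training distribution.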
Most of the time, machines actually do really, really well. But there are many cases where the "wisdom of a grown-up" can make a huge difference.
One problem with machine learning is that we rarely have a diverse enough set of training data to encompass the universe of things that the model will see in the real world. When a computer stumbles across something it has never seen before, errors start to creep in. In most cases, the humans who were previously performing this task wouldn't have batted an eye, drawing upon years of experience and generalized reasoning to arrive at the correct answer.
We recently received a number of mortgage loan packets from a client who was using our Digital Workforce product to automate the quality control process around mortgage lending. These packets usually run between 200 and 2,000 pages each and encompass a tremendously wide array of information. Home appraisals, tax returns, employment verification, military status, divorce paperwork, you name it: these packets are enormous and can have practically anything inside them. The digital workers' job is to classify the documents by type and then extract relevant information from specific documents to perform verification.
But what happens when you see a picture of a cat? Or a hand-written letter from grandma promising to give you money for your down-payment? Needless to say, the digital worker was as confused as a 3-year-old watching a block castle tumble to the ground.
It is in precisely these situations that a little human intervention goes a long way toward improving the performance of digital workers. In our case, the solution was to dramatically widen the training dataset by thinking through and including as many different kinds of documents as we could find. There are many techniques for accomplishing this, but some combination of minor variation (transforming and augmenting existing examples) and major variation (generating novel examples) can significantly improve performance. Increasing the diversity of a training set usually yields a model that generalizes more robustly, improving accuracy and making the process more resilient to unexpected scenarios. One tactic, and I'm not even joking, was to enlist our favorite 3-year-old to spill things on and scribble over documents.
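The "minor variation by transforming existing examples" idea can be sketched concretely. This is a hedged illustration, not the actual augmentation pipeline: the function names, noise levels, and the blank-patch trick (a stand-in for spills and scribbles) are all assumptions for the example, applied here to a toy grayscale page image.

```python
import numpy as np

# Illustrative sketch: widening a training set by transforming
# existing examples. Real document pipelines use richer transforms
# (rotation, skew, compression artifacts, etc.).

rng = np.random.default_rng(seed=42)

def augment(page, n_variants=3):
    """Return noisy variants of a grayscale page (2-D float array in [0, 1])."""
    variants = []
    for _ in range(n_variants):
        v = page.copy()
        # Minor variation: sensor-style noise, like a bad scan.
        v += rng.normal(0.0, 0.05, size=v.shape)
        # Major variation: blank out a random patch, standing in for
        # a coffee spill or a toddler's scribble covering the page.
        h, w = v.shape
        y, x = rng.integers(0, h // 2), rng.integers(0, w // 2)
        v[y:y + h // 4, x:x + w // 4] = 0.0
        variants.append(np.clip(v, 0.0, 1.0))
    return variants

page = np.ones((32, 32))           # a toy "blank document" image
augmented = augment(page)
training_set = [page] + augmented  # original plus three transformed copies
```

Each pass over the dataset multiplies the examples the model sees, so unusual inputs at inference time look less unusual.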
We can either watch the castle tumble a hundred times, or notice the problem and apply a nudge in the right direction.
Digital workers are only as effective as the training they've received. Yes, we do live in a time where machines outperform humans on many tasks, but often the collaboration between human and machine results in better outcomes than either operating alone. Recent years have seen an explosion of interest in chess, despite Deep Blue beating Garry Kasparov in 1997. Why? Because even though a machine can beat a human, "centaur teams" of one person and one computer have risen to prominence. It turns out that such a team is even better than either a human or a computer alone.
It's easy to see digital workers as a replacement for humans in the workforce, but this isn't the case. Digital workers are extremely powerful tools that we can use to multiply our output at a task. It is up to us to lead our digital workers and teach them the wisdom of grown-ups, so that our centaur team of human and machine outperforms either humans or machines alone.
The opinions expressed in this blog are those of the individual authors and do not represent the opinions of BRG or its other employees and affiliates. The information provided in this blog is not intended to and does not render legal, accounting, tax, or other professional advice or services, and no client relationship is established with BRG by making any information available in this publication, or from you transmitting an email or other message to us. None of the information contained herein should be used as a substitute for consultation with competent advisors.