Why ‘Good Enough’ Isn’t Enough for Data Entry

April 25, 2017 | Updated March 4, 2024
Data errors - even small ones - can prove costly.

No organization wants to make a costly mistake, least of all an enterprise handling large volumes of data; yet human error is a fact of life. With that in mind, what is an “acceptable” error rate for data entry? How many mistakes can your organization withstand before you endanger your bottom line?

These can be tricky questions, and the answers may vary depending on the type of enterprise.

Some insist that a 70 percent accuracy rate is the industry standard for data entry, which is essentially to say that entering data incorrectly 30 percent of the time is “good enough.”

But as ARDEM has observed, when it comes to businesses that rely heavily on data accuracy, 70 percent isn’t a passing grade – and “good enough” isn’t enough.

The Pervasiveness of Inaccuracy in Data Entry

Take, for instance, the FBI’s National Crime Information Center (NCIC). A centralized database that processed an average of 12.6 million transactions per day as of 2015, the NCIC is where police officers regularly pull background information – such as arrest records, outstanding warrants and other personal data – on persons suspected of committing crimes. A record of a warrant or past offense can be the difference between a suspect walking free and a life-changing arrest.

Yet in a 1986 study conducted for the Office of Technology Assessment and published in Communications of the ACM, researcher Kenneth Laudon concluded that, at the time, only 25.7 percent of the records held by the NCIC were “complete, accurate and unambiguous.” Perhaps most troubling, over 15 percent of the NCIC’s records for open arrest warrants were entered in error: they directly contradicted the records of the local courts where the warrants originated. This mismatch meant the subjects of those mistakes risked arrest every time police stopped them, and an arrest based on outdated or inaccurate information can have dire consequences.

A prominent case of the impact of data entry mistakes comes from a 2005 report by the Migration Policy Institute, which noted nearly 9,000 erroneous immigration hits from the NCIC database between 2002 and 2004. As a result, local law enforcement wasted resources on a large number of unnecessary stops and arrests; up to 42 percent of immigration queries produced false positives, and some jurisdictions saw error rates as high as 90 percent, well above the level Laudon found in his review of open arrest warrants. Not only did these errors leave police departments open to legal action, they also kept officers from directing time toward pressing public safety issues. Inaccuracies like these carry tangible costs.

The Cost of Errors  

The NCIC example underscores an important point: even though a 15 percent error rate is well below the supposedly “acceptable” threshold of 30 percent, the cost of those errors – both emotional and financial – can be enormous.

In another example, on Feb. 28, 2017, Amazon’s AWS S3 storage service suffered a major outage, causing many websites hosted on the platform to become unresponsive. After an internal investigation, Amazon concluded that during routine debugging, a member of the service team executed a command intended to take a small number of servers offline. The team member entered the command incorrectly, and a far larger set of servers was taken offline than intended.

The result? A nearly five-hour outage that rendered popular – and commercially successful – sites like Netflix, Tinder, Airbnb, Reddit and IMDb inaccessible. Although no data was lost and service was restored the same day, Data Center Knowledge, citing an email from a spokesperson for cyber-risk firm Cyence, reported that S&P 500 companies lost an estimated $150 million in revenue during the outage, while U.S. financial services companies lost another $160 million. If a single mistyped command can cause that much economic damage, imagine the cost of a system riddled with errors, like the NCIC database.

The 1-10-100 Rule of Escalation

Looking at the effects of errors at organizations like the FBI and Amazon, it can be easy to conclude that this challenge is only a high-level, high-stakes issue. But for smaller or privately owned businesses operating on narrow margins, the problem of data accuracy can be just as serious.

Many small-business owners feel that investing more resources in data entry accuracy is beyond their reach – or simply isn’t worth it. With budgets already stretched thin, they conclude that spending more on accuracy may not pay off. That belief is a common fallacy: as the 1-10-100 rule of escalation, defined by George Labovitz and Yu Sang Chang in their 1992 book “Making Quality Work: A Leadership Guide for the Results-Driven Manager,” makes clear, skimping on accuracy costs more in the end.

The 1-10-100 rule states that the cost of a data entry error multiplies by roughly an order of magnitude at each stage where it goes uncaught. If it costs $1 to verify a record at the point of entry, it costs about $10 to correct an error later as part of the larger batch. If the incorrect data slips through without correction, fixing the mistake once it reaches customers or the production line costs roughly tenfold more again – if the error can be fixed at all. In other words, every dollar that verification would have cost early in the process can become $100 in cleanup, along with a wide variety of issues related to customer retention, lost sales, reimbursement costs and damage to the brand.
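
To make the escalation concrete, here is a minimal sketch in Python of the arithmetic the rule implies. All of the figures (the 10,000-record batch, the 2 percent error rate and the per-error costs) are hypothetical, chosen only to illustrate how the same set of mistakes grows in cost at each stage.

```python
# A minimal sketch of the 1-10-100 rule of escalation.
# All figures below are hypothetical illustrations, not measured data.

RECORDS = 10_000        # assumed batch size
ERROR_RATE = 0.02       # assumed share of records entered incorrectly
errors = int(RECORDS * ERROR_RATE)

# Per-error cost at each stage, per the rule's $1 / $10 / $100 tiers.
cost_per_error = {
    "verified at the point of entry": 1,
    "corrected later in the batch": 10,
    "fixed after reaching customers": 100,
}

for stage, unit_cost in cost_per_error.items():
    print(f"{errors} errors {stage}: ${errors * unit_cost:,}")
```

With these assumed numbers, the same 200 mistakes cost $200 to catch at entry, $2,000 to correct in the batch and $20,000 once they reach customers, before counting any of the indirect damage.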

While the 1-10-100 rule illustrates just how quickly the cost of data errors escalates, the NCIC and Amazon examples show how pervasive the issue is and how difficult it can be to address. By the rule’s logic, correcting the potentially millions of errors in the NCIC database could cost hundreds of millions of dollars. Yet even that pales in comparison to what a single data entry error cost AWS users in revenue and the company itself in credibility.

Avoiding Escalation

Whether you’re Amazon or a local retailer, outsourcing your data entry can be critical to saving time, money and hassle. The money spent on getting data entered accurately the first time is significantly less than what you would spend running cleanup after a major error.

Your organization doesn’t need to limp by with “good enough” data entry and simply hope errors never catch up to you. Contact ARDEM today to learn how you can reduce processing costs and save money on the backend by outsourcing data entry services.
