Using benchmarking to gauge call center performance can significantly boost call center ROI over the long run. Measuring performance helps pinpoint the key metrics within a call center that can be improved by applying technology.
One such metric is the average amount of time a completed call takes. Reducing the average per-call time spent on the phone by even twenty to thirty seconds can result in tremendous cost savings across thousands or even millions of calls, and can contribute significantly to the bottom line in many ways.
For example, call center representatives often spend a good deal of time collecting data about customers or prospective customers while on a call. If this data collection component of a call can be reduced, the cost savings can be significant. Also, a representative then has more time to perform actual customer service or answer questions, resulting in happier customers.
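As a rough illustration of the savings math, here is a back-of-envelope sketch. All of the input figures below are hypothetical assumptions for illustration, not actual call center data:

```python
# Back-of-envelope estimate of savings from shaving seconds off each call.
# All inputs are hypothetical assumptions for illustration.
seconds_saved_per_call = 25          # midpoint of the 20-30 second range
calls_per_year = 1_000_000           # assumed annual call volume
loaded_cost_per_hour = 30.00         # assumed fully loaded rep cost, USD/hour

hours_saved = seconds_saved_per_call * calls_per_year / 3600
annual_savings = hours_saved * loaded_cost_per_hour

print(f"Hours saved per year: {hours_saved:,.0f}")
print(f"Estimated annual savings: ${annual_savings:,.0f}")
```

Even with modest assumptions, the per-call seconds compound into a six-figure annual number, which is why this metric is worth benchmarking.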
One way to reduce this time spent on the phone is by using an electronic data collection form that can be auto-filled using integrated Web services APIs within the call center application. The simplest example is using a caller-provided zip code or postal code to automatically populate
a data entry form with city, state, country and other pertinent geographical information. Other services such as a frequently updated reverse telephone number service
can provide more specific information such as name and address (and other consumer or business demographics) pre-populated from a Caller-ID mechanism.
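A minimal sketch of the auto-fill idea follows. In a production integration, the lookup would be a SOAP or REST call to a ZIP code Web service; here a small local table stands in for the parsed service response, and all names are hypothetical:

```python
# Hypothetical stand-in for a ZIP code Web service response; in a real
# integration this dictionary would be the parsed result of an API call.
ZIP_SERVICE_STUB = {
    "27709": {"city": "Durham", "state": "NC", "country": "US"},
    "10001": {"city": "New York", "state": "NY", "country": "US"},
}

def prefill_form(zip_code, form):
    """Auto-fill city/state/country fields from a caller-provided ZIP code."""
    geo = ZIP_SERVICE_STUB.get(zip_code)
    if geo:
        form.update(geo)  # the rep no longer has to ask for or type these fields
    return form

form = prefill_form("27709", {"zip": "27709", "name": ""})
print(form["city"], form["state"])  # Durham NC
```

The point is that a single caller-provided value can eliminate three or four fields of manual entry per call.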
These time savers not only deliver dramatic cost savings over a large number of calls, they also ensure more accurate information, as human error is significantly reduced. There are also other verification services available to check data in the same way in cases where manual data entry must be employed. Call center analytics are often much more accurate when comprehensive data about callers is compiled, rather than partial data that can leave significant business intelligence gaps.
These are basic examples, but they can go a long way in improving overall call center performance and ROI.
One of the many Web service APIs StrikeIron provides "out in the Cloud" is the ability to check if a telephone number is on the Federal "Do Not Call" list or one of the many similar state lists. This helps an organization that makes outbound telephone calls stay in compliance with Federal Trade Commission and state laws much more easily and cost-effectively. StrikeIron provides this service as an easy-to-integrate API. We also have done a slick integration into Salesforce.com, making compliance within Salesforce very easy as well.
If you are not familiar with the Do Not Call (DNC) list, you can simply register your telephone number to this national list, and by law, telemarketers may not call you pitching their various products and offers. There are exceptions of course (such as if you owe money to an organization), but in general companies face large fines if they do not adhere to this list with their outbound efforts. The largest fine levied so far has been in excess of $5 million, and over sixty organizations have faced legal action for violating the list since its inception.
When the StrikeIron service is invoked (either a SOAP or REST call from an application) to check a given number, typically from a call center application or a business process of some kind, it checks other lists as well, such as the Direct Marketing Association (DMA) list. It returns whether or not the provided number can be called, and if not, the reason it cannot be called.
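Here is a hedged sketch of how an application might interpret such a check. The response fields and values below are hypothetical, shaped like a typical REST verification payload rather than StrikeIron's actual API; the parsing and decision logic is the point:

```python
import json

# Hypothetical JSON payload, shaped like a typical REST verification response.
sample_response = json.dumps({
    "phone": "9195551234",
    "callable": False,
    "reason": "FEDERAL_DNC",  # could also be a state list, DMA list, or CELL_PHONE
})

def can_call(response_text):
    """Return (callable, reason) parsed from a do-not-call check response."""
    data = json.loads(response_text)
    return data["callable"], data.get("reason")

ok, why = can_call(sample_response)
if not ok:
    print(f"Skip this number: {why}")
```

An outbound dialer would run this check as a gate immediately before placing each call, so compliance is enforced in the process itself rather than by periodic list scrubbing.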
One of the possible reasons we indicate a number cannot be called is that the number is a cell phone. Federal regulations already prohibit most telemarketing targeted at cell phones.
We invariably get asked whether cell phones need to be separately registered with the DNC list. We also often get asked whether the viral emails circulating the world, claiming that cell phone numbers are about to be released to telemarketers, are true. The answer to both questions is no. There is no impending deluge of telemarketing to mobile devices, and there is no separate cell phone registry.
While you can register a cell phone number to the Federal DNC list without harm, laws are already in place prohibiting telemarketers from using auto-dialers to dial cell phones or from calling cell phone numbers without a consumer's consent. So there is no need for cell phones to be registered to the list.
And if you are using a service such as StrikeIron's subscription-based DNC Call List Verification to check numbers against the DNC list, you can rest assured that cell phone numbers that are checked will come back with the appropriate response.
At StrikeIron, we have hundreds and hundreds of customers that use our address verification capabilities to ensure impeccable address quality in various business systems. These uses include CRM, call centers, marketing systems, and various GIS systems, as well as drastically reducing customer service problems associated with incorrectly shipped products in e-commerce scenarios.
Each of these individual customers is verifying addresses "in the Cloud" using our subscription-based address verification
offerings (delivered as easy-to-integrate SOAP and REST APIs), often several times a second each, twenty-four hours per day, seven days per week.
The primary reasons customers work with StrikeIron are our high-performance delivery, the investments we have made in reliable infrastructure, and the zero data maintenance required to work with our products (we do all of the reference data updates in our own data center, so our customers don't have to worry about that time-consuming and costly process).
However, from time to time we get questions as to why a certain street address abbreviation gets applied, why a suite number gets standardized the way it does, and several other similar questions. Invariably, in the case of USA address verification for example (we also have Canadian and global address verification capabilities), we point people to the postal addressing standards documentation provided by the United States Postal Service (USPS). These guidelines define the correct abbreviations and address processing logic that is utilized within our systems. These rules also ensure the greatest level of address quality across multiple systems and use cases.
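To make the standardization concrete, here is a minimal sketch using a few of the street suffix and secondary unit abbreviations defined in USPS Publication 28. The table is a tiny illustrative excerpt, and the word-by-word logic is a simplification; a real implementation relies on the full USPS reference data and parsing rules:

```python
# A few abbreviations from USPS Publication 28 (illustrative excerpt only).
USPS_SUFFIXES = {
    "STREET": "ST",
    "AVENUE": "AVE",
    "BOULEVARD": "BLVD",
    "DRIVE": "DR",
    "SUITE": "STE",  # secondary unit designator, also standardized by Pub 28
}

def standardize(address_line):
    """Uppercase an address line and apply Pub 28 abbreviations word by word."""
    words = address_line.upper().split()
    return " ".join(USPS_SUFFIXES.get(w, w) for w in words)

print(standardize("123 Main Street Suite 400"))  # 123 MAIN ST STE 400
```

Applying one canonical form like this is what lets addresses match reliably across CRM, shipping, and analytics systems.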
If you'd like to know exactly what these standards are, the USPS guidelines can be found here: http://pe.usps.com/cpim/ftp/pubs/pub28/pub28.pdf
So in addition to all of the other great performance, reliability, and complexity-reduction reasons to integrate StrikeIron address verification into Websites, business processes, and other applications, another great reason is the assurance that we are utilizing the highest possible standard of address quality processing when we deliver verified, standardized, and enhanced addresses by the millisecond, anywhere in the world.
Making use of data sources available outside the organization can significantly improve both the efficiency and effectiveness of various processes within the organization. This is why almost every business utilizes outside data in some way.
* Showing prices in local currencies has been shown to increase global sales by as much as 25%. However, there needs to be a mechanism for keeping these exchange rates current, as they become out of date quickly if they are not updated frequently enough. Sadly, I've seen organizations reload their currency tables in large ERP applications as little as once a year, and one can imagine the accuracy problems this approach causes. In other words, using external data sources correctly is just as important as using them at all.
* Verifying shipping addresses at the point of an e-commerce sale (or any sale actually) utilizing postal reference data can prevent serious customer service problems.
* Building customer profiles of purchasers of a product or service using business demographic data sources can have a substantial impact on future marketing ROI.
* Using zip code and postal information data to pre-fill customer information forms in call center scenarios can significantly reduce per-call phone time for call center representatives, and enable more time for responsiveness to customer needs.
* Integrating state and local sales tax rates into e-commerce transactions can significantly reduce accounting efforts down the line.
These are just a few simple examples of how external data sources can be leveraged to improve business processes. However, what typically prevents outside data sources from being used as described above is the cost to acquire entire data sets, bring them in house, and maintain and update them regularly, plus the cost of the personnel to manage all of this.
Now, however, the traditional cost, effort, and complexity of leveraging these data sources can be dramatically reduced by integrating with them over the Internet via a Web services API on a per-use basis with a simple XML-based SOAP or REST call. Because of this architecture, these data sources can be integrated directly into any application or Website that makes use of the data. This approach also ensures that the data sources in use are always the most up-to-date available, since the updating process is centralized by the providing organization, eliminating data aging issues.
The concept of the Cloud provides the technology foundation for this to be a reality, and with a simple couple of lines of code, leveraging external data sources via API over the Web can be done by any organization, large or small.
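As a sketch of how little code such an integration can take, consider the currency rate example from the list above. The endpoint URL and response format here are hypothetical, and a canned payload stands in for the live HTTP response so the sketch runs without network access:

```python
import json
from urllib.request import Request

# Hypothetical REST endpoint for a currency exchange rate service.
API_URL = "https://api.example.com/rates?base=USD&symbols=EUR"
request = Request(API_URL, headers={"Accept": "application/json"})

# A canned payload stands in for urllib.request.urlopen(request).read(),
# so this sketch runs offline; a live call would fetch the current rate.
canned = '{"base": "USD", "rates": {"EUR": 0.92}, "timestamp": 1278979200}'
rate = json.loads(canned)["rates"]["EUR"]

price_usd = 49.99
print(f"Localized price: {price_usd * rate:.2f} EUR")
```

Because the rate is fetched per use rather than loaded into an ERP table once a year, the "stale currency table" problem described above simply disappears.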
Customer data is a corporate asset that has value and, if managed correctly, can provide an organization with a sizable return. The challenge often becomes how to measure the value of this electronic asset and quantify its internal rate of return. Imagine for a moment if we could considerably increase customer retention, increase per-customer value, and also increase the total number of customers we have. While different for each organization, calculating this kind of return can be a fun exercise, and I firmly believe one way to achieve these gains is through the customer data asset.
Now, we all know a company's success can often be determined from how well it manages its customer relationships. The raw material for managing these customer relationships exists in the form of an organization's customer data. In other words, accurate, comprehensive, and complete customer data can help considerably in achieving better customer relationships. After all, how well can you communicate with your customer base if you have poor data to do it with?
As an example, ongoing customer communication such as a newsletter can strengthen ties and loyalty, as well as keep customers informed of positive company developments that make them want to continue to be a customer. However, without having accurate and up-to-date information for this communication (such as accurate physical addresses and current email addresses), this is not possible to do effectively.
Also, simple post-order product delivery to the wrong place due to address typos can create considerable customer service problems. We have all experienced the pain of items not shipped correctly to us from an online vendor, and the maddening follow-up process which usually occurs in order to straighten things out. Simple data verification at the point of sale can help reduce the incidence of this drastically.
With better data, call center representatives can respond faster and more effectively to customers when they don't have to spend time correcting customer information. This of course helps them provide better service.
Amazingly, organizations have invested millions of dollars into sophisticated CRM systems such as Salesforce, NetSuite, Oracle, and others, and yet the overwhelming majority of these instances are still plagued by inaccurate, incomplete, and otherwise poor data quality that grinds away at the ROI potential of these kinds of systems.
While analysts' estimates of the cost of poor data quality to an organization range broadly, depending on what is included and the characteristics of the organization, the one thing they agree on is that the cost is large.
The good news is that on the customer data front, the key to better data is to perform validation and data enhancement at the point of data collection, whether it be from a Web form, a call center, a mailing response, or any other way customer information is collected. These days with XML-based customer data verification services
available via simple integration over the Web, this is quite easy to achieve.
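A minimal sketch of point-of-capture validation follows, checking fields as a Web form is submitted. The rules shown are deliberately simple stand-ins; a real deployment would call a verification service for each field rather than rely on local pattern checks:

```python
import re

def validate_customer_record(record):
    """Return a list of problems found in a customer record at capture time."""
    problems = []
    # Simple local pattern checks; a production system would invoke a
    # Web-based verification service here instead.
    if not re.fullmatch(r"\d{5}(-\d{4})?", record.get("zip", "")):
        problems.append("zip: expected 12345 or 12345-6789")
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", record.get("email", "")):
        problems.append("email: not a plausible address")
    return problems

record = {"zip": "2770", "email": "pat@example.com"}
for p in validate_customer_record(record):
    print("Reject:", p)  # fix at the point of collection, not downstream
```

Rejecting (or correcting) a bad value while the customer is still on the form or on the phone is far cheaper than cleansing it after it has propagated into CRM, billing, and shipping systems.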
Sure, good customer data can be augmented with back office processing by utilizing additional data sources, but this typically is only successful if the originally-captured data is complete and accurate.
So while the bottom-line impact of poor customer data is not always a straightforward calculation, trusting your intuition that better data equals better customer relationships, and addressing the issue of customer data quality early in the data life cycle, can pay significant dividends.
One thing that's clear as we pass the halfway point of 2010 is that the Cloud Computing movement is not only gaining momentum, but the usage trends of the Web that are driving Cloud Computing are only increasing in influence and contributing to its momentum at a faster pace than ever.
For example, Facebook's Chief Technical Officer reported last month that they were seeing as many as one million photos being served up per second
through the entirety of their Web-based social application, and that they expect this to increase ten-fold
over the next twelve months.
Also, how many of us watched some streaming World Cup soccer games over the past month as Spain proved supreme in South Africa? Or at least highlights on YouTube and various other video outlets? Currently, it is estimated that 50% of all Web traffic is video. That's not surprising, but with High Definition (HD) Web technology and the like emerging, video is expected to represent 90% of all traffic in just a few years. This is going to require bandwidth levels that were largely unthinkable years ago.
On another front, mobile infrastructure is not keeping pace with demand. Right now, some estimates have shown mobile infrastructure requirements growing at about 50% per year, while actual mobile network infrastructure capacity is only growing at 20% per year. This is going to be a real problem, and one of the reasons some mobile carriers such as AT&T have begun capping usage and introducing fees for premium levels of bandwidth that were standard issue up until now, and other carriers may likely follow suit. It's the only way to help curtail demand to meet capacity in their eyes.
So what does all of this mean?
One of the reasons we have Cloud Computing in the first place is that innovative Web companies such as Amazon and Google had to build out enough computing capacity to handle peak periods of Web traffic and activity, especially Amazon during its Christmas holiday crunch.
As a result, they found themselves not only experts at building out distributed computing capacity, load balancing, and data synchronization, but also found that most of the time the computing power they had invested in for peak periods sat "shelved" and unused, far from cost-optimized. This led them to think of ways to monetize this excess capacity (servers and disk space lying idle), which led to some of the early thinking and innovation around Web-based centralized computing. The same is true of Google and others with their excess Web computing power, as they looked for ways to monetize large amounts of excess capacity and leverage their expertise at building out server farms and developing highly distributed, yet high-performing, computing.
This same necessity-is-the-mother-of-invention phenomenon is playing out now as Facebook develops new technology to serve up its millions of photos per second, and is spawning new data storage and retrieval technology such as the NoSQL paradigm shift, with new non-SQL and "not only" SQL architecture approaches such as Cassandra, BigTable, Neptune, Google Fusion Tables, and Dynamo that are more finely tuned to the needs of Web-scale Cloud Computing.
In parallel, the bandwidth demands of video and mobile infrastructure are seeding new innovation around capacity and distribution of bandwidth as well, including much more efficient and easier-to-implement elastic computing capabilities to handle these variable demands, as much of mobile's computing workload moves to and is answered via the Web (which also makes smartphones ideal Cloud Computing clients, further pushing the paradigm).
These trends are not only mind-boggling and exciting; they are the cornerstones of a revolution already in progress. All of this demand-driven innovation is only causing more and more build-out of the foundation from which the future Internet and "Cloud" will emerge. A few years from now, we will look back and see how the Web computing demands of today, whether from Facebook, Google, Twitter, or others, enabled a whole new generation of Web applications to emerge. And of course, huge amounts of data were gobbled up in the process, a lot of which will have come from StrikeIron's own data delivery innovation in the Cloud.
No doubt about it, the Cloud is a good place to be.