A lead, as we all know, is an individual or company that has shown some level of interest in buying a product or service. This categorization is an important first stage of any sales process.
Technically, a lead is represented by data, and the potential value of that lead depends on the accuracy, currency, and comprehensiveness of the data that constitutes it. In other words, a lead is only as valuable as the data that comprises it within a CRM application or other sales-driving system.
Fortunately, the ROI of lead quality is fairly easy to obtain as a function of the sales line. It is very measurable, and one can see over time that the most accurate, complete, and timely leads drive the most revenue and hold the greatest value for the sales organization. Investing in the quality of lead data therefore has a measurable and rewarding payoff.
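To make that idea concrete, here is a minimal sketch of scoring a lead along those dimensions. The fields, the equal weighting, and the one-year staleness window are illustrative assumptions, not part of any particular CRM or scoring methodology:

```python
# Sketch: score lead-data quality on completeness and currency.
# Field names and the one-year freshness window are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class Lead:
    name: str
    email: str
    phone: str
    company: str
    last_verified: date  # when the contact info was last confirmed

def quality_score(lead: Lead) -> float:
    """Return a 0-1 score: completeness of fields, discounted by staleness."""
    fields = [lead.name, lead.email, lead.phone, lead.company]
    completeness = sum(1 for f in fields if f) / len(fields)
    age_days = (date.today() - lead.last_verified).days
    freshness = max(0.0, 1.0 - age_days / 365)  # fully stale after a year
    return completeness * freshness
```

A lead missing half its contact fields, or one verified a year ago, scores accordingly lower, which is exactly the intuition behind measuring lead value against the sales line.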
This is one of the primary reasons many organizations come to StrikeIron: to enhance the value of their lead data. Our real-time customer data quality offerings can validate email addresses, physical addresses, and phone numbers to ensure a lead is accurate, and can even correct missing or inaccurate contact data where possible.
We can also append a sizable list of additional data points, including company demographics, residential information (from sources such as the Census Bureau), latitude and longitude coordinates, and other attributes that add value to the lead and optimize it for the purposes of a sales organization.
Best of all, our easy-to-integrate APIs are available out in the Cloud (meaning no reference data maintenance and a flexible subscription model), making it easy to plug into just about any CRM, marketing automation, or other lead management system via the Internet.
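For illustration, a call to such a Cloud validation API might look something like the sketch below. The endpoint, parameters, and response fields here are assumptions made for the example, not StrikeIron's actual interface:

```python
# Hypothetical sketch of calling a cloud-based email validation API
# before accepting a lead. Endpoint and response shape are assumed.
import requests

def validate_email(address: str) -> bool:
    """Ask a cloud validation service whether an email address is valid."""
    resp = requests.get(
        "https://api.example.com/email/verify",  # placeholder endpoint
        params={"email": address},
        timeout=10,
    )
    resp.raise_for_status()
    # Assume the service answers with JSON like {"status": "valid"}.
    return resp.json().get("status") == "valid"

if validate_email("jane.doe@example.com"):
    print("Contact verified; safe to load into the CRM.")
```

The same pattern applies to address and phone validation: one HTTPS call per check, with no reference data to maintain on your side.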
Because our offerings sit squarely in the sweet spot of need for a large number of the organizations attending LeadsCon, we have found much success with this conference in the past. As a result, we will have a sizable contingent of folks at the event in New York City this week.
Drop by and say hello (booth 202) and learn more about how to rev up the value of your leads. Oh, and by the way, we will be giving away some iPad 2s. Hope to see you there!
I attended the Data 2.0 Conference this week in San Francisco. There is a lot to be excited about in this emerging, quickly accelerating industry. However, there are still some significant obstacles to overcome before the vision of the data-driven world and the “great data highway in the sky” can truly be realized.
First, there is the exciting stuff. New companies continue to emerge and grow in the space in multiple categories, including broad data sharing sites (FluidInfo, InfoChimps), purveyors of proprietary and hard-to-capture data (Navteq, Metamarkets), API infrastructure providers (WebServius, Apigee, Mashery), specific data category providers (SimpleGEO, Rapleaf, Socrata, DataSift), providers of API-based solutions (StrikeIron, Xignite) and slick data visualization tools (too numerous to list).
Also, the companies that have been in this space for five or more years are becoming larger and more sophisticated, and in many cases are continuing to raise significant amounts of capital from investors looking to capitalize on the data megatrend. Even stalwarts such as Microsoft, with its DataMarket, are eyeing future fruitful harvests in this space. Twitter also announced the commercial licensing of its entire “fire hose” of Tweets at the event, a move that providers of analytics tools are hailing.
More and more public, government-sourced data is coming online every day as fodder for this machine-to-machine information feeding frenzy. This data is coming from every level of government, too: cities such as San Francisco (datasf.org), state governments such as Oregon (data.oregon.gov, announced two weeks ago), and the federal government's data.gov initiative (rumors that its funding might be cut are false, according to keynote speaker and charismatic, sometimes controversial industry insider Vivek Wadhwa).
All of these government-sourced data assets are being made available to the general public in the hopes of civically engaging the creative minds among us to innovate and create public value without the traditional budgetary costs. This has already led to a proliferation of applications such as live online maps of San Francisco municipal transportation schedules (including for the iPhone and Android platforms), as well as sites like Chicago's citypayments.org, which puts municipal vendor contracts online for public discussion (it's amazing how many vendors that consistently come in late and over budget get rewarded with new contracts over and over).
Finally, the platforms (the fertile Cloud and all that is happening there with Google App Engine, Microsoft's Azure, and Amazon's various cloud offerings) and especially the devices (like smartphones and the iPad) that can make use of this data are also marching forward at a breathtaking pace.
This is all very exciting for those of us for whom data represents a livelihood.
However, the significant challenge around the accessibility and usability of these vast seas of data is that this is still largely a complex, IT-oriented developer's world. Most of the access to these data sources is either via API or in structures and formats as varied as the data itself. This limits the applicability of these valuable data sources to a very small group of dedicated engineers and leaves the rest of us with only a modicum of the true potential of this space.
Sure, there are API protocol standards such as REST and SOAP, but these only scratch the surface. Most single-API vendors introduce a new set of behaviors, data structures, response codes, and a new business model with each new API. This adds greatly to the complexity for anyone looking to put these data sources to use, both initially and on an ongoing basis. Until we can find a way to normalize the great data highway, it's going to be a bumpy road. Those who know me know I've been preaching this for years and have applied much of it to StrikeIron's various data and API offerings; however, it can be a difficult proposition to get adopted across the industry.
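To make the complexity concrete, here is a sketch of the kind of per-vendor adapter code this forces on developers. The two response shapes below are invented for illustration, not taken from any real vendor:

```python
# Two hypothetical vendors return the same company record in
# different shapes; each new API forces another one-off adapter.
def normalize_vendor_a(payload: dict) -> dict:
    # Vendor A style: {"Company": {"Name": ..., "Zip": ...}, "Code": 0}
    if payload.get("Code") != 0:
        raise ValueError(f"Vendor A error code: {payload.get('Code')}")
    rec = payload["Company"]
    return {"name": rec["Name"], "postal_code": rec["Zip"]}

def normalize_vendor_b(payload: dict) -> dict:
    # Vendor B style: {"ok": true, "results": [{"company_name": ..., "postal": ...}]}
    if not payload.get("ok"):
        raise ValueError("Vendor B request failed")
    rec = payload["results"][0]
    return {"name": rec["company_name"], "postal_code": rec["postal"]}
```

Multiply that by every data source an application touches, plus distinct error codes, authentication schemes, and billing models, and the integration burden becomes obvious.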
The consumption complexity issue is well demonstrated by the term "mashup". Several years ago, this was the term du jour of an industry claiming that non-developers could combine datasets in interesting and exciting new ways without the assistance of their IT organizations. The term, however, has all but shriveled and died. Why? Because non-developers couldn't do it. The tools were cumbersome and complex, they represented whole new learning curves that most people simply don't have the time or patience for, and the datasets themselves lacked standards. In fact, I never heard the term mentioned a single time at the event. A few years back, it would have been in nearly every discussion. May the term “mashup” rest in peace.
Until we surpass this hurdle of data consumption complexity, and until the vendors in the space do more than pay lip service to these challenges (an attitude prevalent on several of the panels at the event), the data-driven world will be only a shell of what it could be.
There is often a need within an organization to move data from point A to point B. One example is when user-submitted data, collected from a Web site, is moved into a CRM system, typically resulting in a "lead" being created. Contact data is collected when a user fills out a form requesting more information about an offering, asks a question, or takes some other action indicating interest in a company's products. All of these types of inquiries are of fundamental interest to sales professionals.
The actual move of the data into the CRM system could take the form of a nightly batch load or happen as each lead is collected. Either way, it is important to ensure that only valid, complete information is loaded into the CRM system, in order to optimize the time and increase the likelihood of success of the sales professional on the other end of the system.
A lot of time can be wasted by a sales organization following up on phantom leads or leads with incorrect information. Ongoing communication and lead nurturing can also be severely affected if contact information is not valid or current. And finally, expensive, time-consuming "data cleansing" activities might have to be initiated downstream if an organization waits until volumes of incomplete or inaccurate data collect and build within a CRM system over a long period of time.
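Conceptually, the load-time gate is simple. Here is a minimal sketch in which the validators and record fields are placeholders for whatever verification service and lead schema are actually in use:

```python
# Sketch: gate a web-to-lead batch so only records whose contact
# data passes validation reach the CRM; the rest are set aside
# for correction instead of polluting the system.
def load_leads(batch, is_valid_email, is_valid_phone, crm_insert, quarantine):
    for lead in batch:
        if is_valid_email(lead["email"]) and is_valid_phone(lead["phone"]):
            crm_insert(lead)   # clean lead goes straight to sales
        else:
            quarantine(lead)   # flag for correction or enrichment

# Toy usage with stand-in validators:
leads = [{"email": "a@example.com", "phone": "919-555-0100"},
         {"email": "not-an-email", "phone": ""}]
load_leads(leads,
           is_valid_email=lambda e: "@" in e,
           is_valid_phone=lambda p: bool(p),
           crm_insert=lambda l: print("load:", l),
           quarantine=lambda l: print("hold:", l))
```

The same gate works whether the batch arrives nightly or one lead at a time; the point is that validation happens before the CRM, not after.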
One way to prevent this from happening is by using Informatica's Cloud product in conjunction with StrikeIron's Contact Record Verification Suite. StrikeIron has developed a plug-in for Informatica to manage this data migration process. The Cloud product uses Informatica's classic data integration technology in a SaaS scenario, enabling data to be loaded from many different systems, including Web-to-lead data into Salesforce.com. StrikeIron's Contact Record Verification Suite plug-in performs the actual phone number, address, and email validation checks along the way. The joint offering is very easy to get up and running: no software to install, no hardware to prepare, and no reference data to acquire.
This Cloud-based load-and-validate approach ensures that more accurate, complete, and validated data actually gets into the CRM system with minimal effort, optimizing the time of sales executives. The process provides better communication and access to customers and prospective customers while preventing costly data cleansing activities from having to be performed down the road.
- Here is a video demonstration showing the joint solution: http://www.youtube.com/watch?v=c4-s6kRam6c
- Here is more information on the joint solution: http://www.strikeiron.com/Partners/PremierPartners/Informatica.aspx
Contact us at firstname.lastname@example.org for more information.
StrikeIron will be exhibiting at Oracle's Open World event at the Moscone Center in San Francisco next week, September 19th through the 23rd.
We will be demonstrating our Contact Record Verification Suite and other easy-to-integrate Web services offerings that help an organization maintain a base of high-quality, comprehensive, current customer data in a substantially easier way than ever before. Great customer data is a key component of success when working with the various products under the Oracle umbrella.
This will also be a great opportunity for existing and prospective customers to meet several members of our team, as well as to engage in deep-dive discussions about our technology, our products, and how better data within business processes, products, systems, and Web sites can provide substantial ROI.
In addition to our own demonstrations, there will also be partner demonstrations at the event.
We will also be discussing our plans around Oracle's forthcoming Fusion Applications launch.
Hope to see you there!