I had an opportunity to moderate a panel at the Data 2.0 Summit this week in San Francisco entitled "Why You Should Join the API Economy". The panel brought together a considerable amount of thought leadership, including Chris Moody, President of Gnip; Gaurav Dhillon, CEO of SnapLogic; Chris Lippi, VP of Products at Mashery; Peter Kirwan, Entrepreneur-in-Residence at Neustar; and Tim Milliron, Director of Engineering at Twilio.
We explored several topics, including where success is occurring now within various API ecosystems (what is working), where money is actually being made with APIs, what the adoption challenges are going forward, and how people can begin moving down an API path (both publishing APIs and finding relevant, valuable ones to consume). I plan to cover all of these topics in future blog entries.
However, one area we explored that I found especially interesting is the adoption of API-centric business models within larger enterprises. Sure, high-tech companies like Cisco and Salesforce have been using APIs as significant parts of their business models for years. But what is especially interesting, and what demonstrates that APIs are moving into the mainstream, is the traction of APIs and DaaS (data-as-a-service) in traditional vertical industries.
For example, on the publishing side of data and APIs, many government entities are now opening up data channels, such as San Francisco's open data portal, to enable citizens to create innovative applications. Opening this data to the masses can drive all sorts of innovation that brings benefits to entire communities.
On the consumption side, we discussed the inspirational data integration case study that Gartner published about Mohawk Paper (a company founded in the late 1800s) as evidence of an enterprise pulling data together from multiple third parties to create a custom solution in the Cloud. One of those services is StrikeIron's real-time foreign exchange rate API. And of course, among our 1,800 customers are several Fortune 500 companies leveraging our various APIs and DaaS products at increasing rates, all evidence of expanding adoption in the enterprise.
As we see API-centric and DaaS-centric business models emerge that find traction in the enterprise in addition to all of the smaller entrepreneurial innovators and startups, we know we are getting closer and closer to mainstream adoption, which is where some of the biggest opportunities are yet to be realized.
A lead, as we all know, is an individual or company that has indicated some level of interest in potentially buying a product or service. Identifying leads is an important first stage of any sales process.
Technically, a lead is represented by data, and its potential value depends on the accuracy, currency, and comprehensiveness of that data. In other words, a lead is only as valuable as the data that comprises it within a CRM application or other sales-driving system.
Fortunately, the ROI of lead value is relatively easy to measure against the sales line, and over time the more accurate, complete, and timely leads tend to drive the most revenue and carry the greatest value for the sales organization. Investing in the quality of lead data can therefore have a measurable and rewarding payoff.
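To make that idea concrete, here is a minimal sketch in Python of scoring a lead as a function of completeness and freshness. The fields, weights, and decay window are illustrative assumptions, not a real scoring model.

```python
# A minimal sketch of the idea that a lead's value is a function of the
# completeness and currency of its data. Fields and weights are illustrative.
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class Lead:
    name: Optional[str]
    company: Optional[str]
    email: Optional[str]
    phone: Optional[str]
    last_verified: Optional[date]


def quality_score(lead: Lead, today: date) -> float:
    """Score a lead from 0.0 to 1.0 based on completeness and freshness."""
    fields = [lead.name, lead.company, lead.email, lead.phone]
    completeness = sum(f is not None for f in fields) / len(fields)

    if lead.last_verified is None:
        freshness = 0.0
    else:
        age_days = (today - lead.last_verified).days
        freshness = max(0.0, 1.0 - age_days / 365)  # decays to zero over a year

    return 0.6 * completeness + 0.4 * freshness  # illustrative weights


lead = Lead("Jane Doe", "Acme Corp", "jane@acme.example", None, date(2011, 1, 15))
print(f"lead quality: {quality_score(lead, date(2011, 4, 8)):.2f}")
```

A real sales organization would tune the weights and decay against its own closed-won data; the point is simply that lead value can be expressed, and therefore measured, as a function of data quality.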
This is one of the primary reasons many organizations come to StrikeIron: to enhance the value of their lead data. Our real-time customer data quality offerings validate email addresses, physical addresses, and phone numbers to ensure a lead is accurate, and they can even correct missing or inaccurate contact data where possible.
We can also append a sizable list of additional data points, including company demographics, residential information (from sources such as the Census Bureau), and latitude and longitude coordinates, all of which add value to the lead and optimize it for a sales organization.
Best of all, our easy-to-integrate APIs are available out in the Cloud (meaning no reference data maintenance and a flexible subscription model), making it easy to plug into just about any CRM, marketing automation, or other lead management system via the Internet.
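As a hedged illustration of what that plug-in step can look like, the sketch below calls a generic email validation endpoint before a lead is saved. The URL, API key, parameters, and response field are hypothetical placeholders, not StrikeIron's actual interface.

```python
# Hypothetical example of calling a Cloud-based email validation API from a
# lead-capture workflow. The endpoint URL, credential, and response fields
# below are illustrative placeholders, not a real StrikeIron interface.
import requests

API_URL = "https://api.example.com/v1/email/verify"  # placeholder endpoint
API_KEY = "your-subscription-key"                    # placeholder credential


def verify_email(address: str) -> bool:
    """Return True if the validation service reports the address deliverable."""
    resp = requests.get(
        API_URL,
        params={"email": address},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("deliverable", False)  # assumed response field


# Gate CRM inserts on validation so bad contact data never enters the pipeline.
if verify_email("jane.doe@example.com"):
    print("save lead to CRM")
else:
    print("flag lead for review")
```

Because the validation happens over HTTP at capture time, the same call can sit behind a CRM web form, a marketing automation workflow, or a batch cleanup job.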
Because our offerings sit squarely in a sweet spot of need for many of the organizations attending LeadsCon, we have found much success with this conference in the past. As a result, we will have a sizable contingent at the event in New York City this week.
Drop by and say hello (booth 202) to learn more about how to rev up the value of your leads. Oh, and by the way, we will be giving away some iPad 2s. Hope to see you there!
I attended the Data 2.0 Conference this week in San Francisco. There is a lot to be excited about in this emerging, growing, and quickly accelerating industry. However, there are still some significant obstacles to overcome before the vision of the data-driven world and the “great data highway in the sky” can truly be realized.
First, there is the exciting stuff. New companies continue to emerge and grow in the space across multiple categories, including broad data sharing sites (FluidInfo, InfoChimps), purveyors of proprietary and hard-to-capture data (Navteq, Metamarkets), API infrastructure providers (WebServius, Apigee, Mashery), specific data category providers (SimpleGeo, Rapleaf, Socrata, DataSift), providers of API-based solutions (StrikeIron, Xignite), and slick data visualization tools (too numerous to list).
Also, the companies that have been in this space for five or more years are becoming larger and more sophisticated, and in many cases are continuing to raise significant amounts of capital from investors looking to capitalize on the data megatrend. Even stalwarts such as Microsoft, with its DataMarket, are eyeing future fruitful harvests in this space. Twitter also announced at the event the commercial licensing of its entire “fire hose” of Tweets, a move the providers of analytics tools are hailing.
More and more public, government-sourced data is coming online every day as fodder for this machine-to-machine information feeding frenzy. This data is coming from every level of government, too: cities such as San Francisco (datasf.org), state governments such as Oregon (data.oregon.gov, announced two weeks ago), and the federal government's data.gov initiative (rumors that its funding might be cut are false, according to keynote speaker Vivek Wadhwa, the charismatic and sometimes controversial industry insider).
All of these government-sourced data assets are being made available to the general public in the hope of engaging the creativity of citizens to innovate and create public value without the traditional budgetary costs. This has already led to a proliferation of applications, such as live online maps of San Francisco municipal transportation schedules (including for the iPhone and Android platforms) and sites like Chicago’s citypayments.org, which puts municipal vendor contracts online for public discussion (it's amazing how many vendors that consistently come in late and over budget are rewarded with new contracts over and over).
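For a sense of how approachable the consumption side of these portals can be for a developer, here is a small hypothetical sketch of fetching records from a city open-data feed over HTTP. The dataset id is a placeholder, and the JSON-resource URL pattern shown is the style used by Socrata-backed portals such as San Francisco's.

```python
# Hypothetical example of consuming a city open-data feed as JSON over HTTP.
# The dataset id below is a placeholder; real ids are listed on datasf.org.
import requests

BASE = "https://data.sfgov.org/resource"
DATASET_ID = "abcd-1234"  # placeholder: substitute a real dataset id

resp = requests.get(f"{BASE}/{DATASET_ID}.json", params={"$limit": 5}, timeout=10)
resp.raise_for_status()

for record in resp.json():  # each record is a plain dict of column -> value
    print(record)
```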
Finally, the platforms (the fertile Cloud and all that is happening there with Google App Engine, Microsoft’s Azure, and Amazon’s various cloud offerings) and especially the devices (like smartphones and the iPad) that can make use of this data are also marching forward at a breathtaking pace.
This is all very exciting for those of us for whom data represents a livelihood.
However, the significant challenge around the accessibility and usability of these vast seas of data is that it remains largely a complex, IT-oriented developer's world. Most of these data sources are accessible only via API, or are available in structures and formats as varied as the data itself. This limits their applicability to a very small group of dedicated engineers and leaves the rest of us with only a modicum of this space's true potential.
Sure, there are API protocol standards such as REST and SOAP, but these only scratch the surface. Most single-API vendors introduce a new set of behaviors, data structures, and response codes, plus a new business model, with each new API. This adds greatly to the complexity for anyone looking to put these data sources to use, both initially and over time. Until we can find a way to normalize the great data highway, it's going to be a bumpy road. Those who know me know I’ve been preaching this for years and have applied much of it to StrikeIron’s various data and API offerings; however, it can be a difficult proposition to get adopted across the industry.
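To make the normalization burden concrete, here is a sketch of the adapter code every consumer ends up writing. Both vendor payload shapes below are invented for illustration; the point is that the same kind of data arrives with different field names, status conventions, and error signaling from each provider.

```python
# Illustration of the normalization burden: two hypothetical vendors return
# the same kind of data in different shapes, so the consumer must write a
# separate adapter for each one just to get a common record.

def normalize_vendor_a(payload: dict) -> dict:
    # Vendor A nests its data and signals errors with a numeric status code.
    if payload["statusCode"] != 0:
        raise RuntimeError(f"vendor A error {payload['statusCode']}")
    result = payload["serviceResult"]
    return {"symbol": result["CurrencyPair"], "rate": float(result["Rate"])}


def normalize_vendor_b(payload: dict) -> dict:
    # Vendor B uses a flat structure, a string status, and a string rate.
    if payload["status"] != "OK":
        raise RuntimeError(f"vendor B error {payload['status']}")
    return {"symbol": payload["pair"], "rate": float(payload["price"])}


# Each new API means another adapter like these: new behaviors, new
# structures, new response codes, exactly the complexity described above.
a = {"statusCode": 0, "serviceResult": {"CurrencyPair": "EURUSD", "Rate": "1.44"}}
b = {"status": "OK", "pair": "EURUSD", "price": "1.44"}
print(normalize_vendor_a(a), normalize_vendor_b(b))
```

Multiply that adapter by dozens of data sources, each with its own billing model and support channel, and the integration cost quickly dwarfs the cost of the data itself.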
The consumption complexity issue is well demonstrated by the term "mashup". Several years ago, this was the term du jour of an industry claiming that non-developers could combine datasets in interesting and exciting new ways without the assistance of their IT organizations. The term, however, has all but shriveled and died. Why? Because non-developers couldn't do it: the tools were cumbersome and complex, they represented whole new learning curves that most people simply don't have the time or patience for, and the datasets themselves lacked standards. In fact, I never heard the term mentioned a single time at the event. A few years back, it would have been in nearly every discussion. May the term “mashup” rest in peace.
Until we surpass this hurdle of data consumption complexity, and as long as vendors in the space only pay lip service to these challenges (an attitude prevalent on several of the panels at the event), the data-driven world will remain only a shell of what it could be.