The basic premise of APIs (Application Programming Interfaces) is to make integration and customized usage possible, whether with a single application or between applications. These interfaces generally provide a pre-determined set of communication "methods" for sending messages to an application, receiving messages from it, and invoking various commands. Application APIs have been around for some time. However, Web-based APIs, which allow application integration to occur over the Internet, are more recent.
Traditionally, application-to-application integration was difficult. Custom code usually had to be written to make applications work together, and the applications typically had to be available on the same network, limiting the scope of integration. Not to mention that every time a new application was to be tied in, more custom code had to be written. Overall, integrating applications was a fairly tedious and expensive proposition, and the task was rarely a repeatable exercise.
Web APIs solve most of these challenges. For example, using text-based formats like XML or JSON enables platform independence - the concept that two applications can communicate and work together even though they are running on entirely different hardware and software platforms.
This is a powerful concept, especially now that the Internet has eliminated geographical constraints. For example, platform independence is what enables a Windows-based PC to request a piece of information, such as a customer number, from a mainframe - even if that mainframe is running in a data center on the other side of the world. More recently, Web APIs enable mobile devices such as an iPhone to retrieve information from servers in real-time - for example, currency rates that are being maintained and updated on a Linux server thousands of miles away.
So while many of the old challenges are being solved, we want to be very careful to ensure that an API-based integration approach does not introduce new complexities of its own, throwing a wrench into an otherwise well-thought-out integration strategy that an organization might currently be considering.
For instance, it is fairly common for an organization or a data provider to publish multiple APIs to its constituency, each one representing a certain function or type of data. While consistency generally makes sense, the challenge in practice is that the data sources being published might come from different places with varying data content standards and data structures, making normalization harder.
If an organization has five different APIs to publish, there might be cases where constituents want to integrate several, if not all, of these APIs. If each one requires a different integration approach due to inconsistent implementation across the APIs, the likelihood of adoption decreases, and all of the additional complexity introduced could make ongoing maintenance of the integration quite difficult - the exact opposite of what we want to achieve with API-based integration.
Years of experience both creating APIs and serving thousands of customers who have integrated these APIs into production have given StrikeIron a foundation of knowledge around the deployment and ongoing use of APIs that addresses these kinds of issues. When we help customers publish APIs through our IronCloud API Management platform, these are the typical areas of normalization we focus on as part of our best practices for APIs:
Data content normalization. This one might be the most difficult, as it could require some manipulation of large content datasets. However, as with any dataset and database, data content consistency makes analysis and reporting much easier. A simple example is to ensure gender data is always "m" or "f" rather than multiple variations of "male", "female", etc. Content normalization requirements can also be more complex, such as consistent product naming standards. (A short code sketch following this list illustrates a few of these normalization areas.)
Data structure normalization. It's important that APIs delivering data follow the same structural formats so that the resultant data is as usable as possible within client applications. For a basic example, if one API uses "Full Name" as a data parameter, and another API uses "Given Name" and "Last Name" as two separate parameters, the usefulness of the data can degrade because comparing data across the two becomes challenging.
Authentication. This is where utilizing third-party platforms, such as IronCloud, to publish APIs can be exceptionally helpful, as they typically provide a de-coupled authentication layer allowing many different types of authentication (and protocols) to be leveraged. Examples include SOAP header-based authentication, SOAP parameter-based authentication, certificate-based authentication and REST over HTTPS, WS-Security, and several others. Here, not only is consistency important, but flexibility as well. It is hard to predict what IDEs or other development tools will be used when a client application integrates with an API, and some of those tools might only support certain authentication approaches. The important thing is that once an authentication method is decided upon, that same method can be repeated across all APIs that have been published by an organization.
Response code consistency. Most APIs have a set of response codes associated with them to relay important information back to the calling application after an API method has been invoked. For example, an API might need to respond with a "password not valid" or "data not found" response. In these cases, utilizing a response code such as "404" might be appropriate for data not found. The key thing is to ensure that these response codes are consistent from one API to the next. Otherwise, a developer will have to understand and write additional code to handle the various responses from each different API. This creates more complexity with each new API that is integrated.
API behavior consistency. If mechanisms such as timeouts or API usage reporting capabilities are present as parameters in an API, it's a good idea to ensure that they are available in multiple APIs in the same way. This can prevent unnecessary coding and unexpected client application behavior when developers try to leverage multiple APIs from the same organization.
Business model consistency. It is rare that there is not a usage control mechanism in place governing the use of an externally published API. Whether credits, hits, daily maximums, monthly usage, or some other accounting mechanism is in place governing usage, be sure that it is consistent across all of the APIs that you publish to minimize the usage contingency code the developer needs to create. Inconsistency here can cause considerable challenges, as issues in usage governance code tend to surface during production use, and that is always undesirable. Foresight here can have long-term benefits in terms of adoption and overall stickiness of API usage.
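To make a few of these areas concrete, here is a minimal sketch in Python of the kind of normalization layer an API publisher might put in front of its data sources so that every published API returns the same value conventions, the same field structure, and the same response codes. This is not StrikeIron's actual implementation; the field names and code table are illustrative assumptions only.

# Illustrative normalization layer shared by all of an organization's APIs.

GENDER_MAP = {"m": "m", "male": "m", "f": "f", "female": "f"}

# One response-code table shared by every API, so client code that handles
# "data not found" for one API handles it identically for all of them.
RESPONSE_CODES = {"ok": 200, "not_found": 404, "invalid_credentials": 401}

def normalize_record(raw):
    """Apply content and structure normalization to a record before it is returned."""
    # Content normalization: collapse variants like "Male" / "FEMALE" to "m" / "f".
    gender = GENDER_MAP.get(str(raw.get("gender", "")).strip().lower(), "")

    # Structure normalization: always expose given_name / last_name, even if an
    # upstream source only provides a single full_name field.
    if "full_name" in raw:
        given_name, _, last_name = raw["full_name"].partition(" ")
    else:
        given_name = raw.get("given_name", "")
        last_name = raw.get("last_name", "")

    return {"given_name": given_name, "last_name": last_name, "gender": gender}

The specifics will differ from one organization to the next; the point is that a single shared layer like this keeps every published API consistent, so a developer integrating a second or third API can reuse the handling code written for the first.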
This is a basic model for API design best practices that StrikeIron has developed over the years while helping customers and partners design and deploy APIs that are in production in thousands of organizations around the globe. While our IronCloud API Hosting and Management Platform handles a lot of these details, foresight and good API design can make a big long-term impact on the success of an API strategy.
If you are considering publishing data or other business functionality in the form of a Web API for integration by others, our experience and track record of successful API deployment and integration can help you minimize complexity, accelerate speed to market, achieve scalability, control usage, and ultimately realize the benefits a well-thought-out API strategy can provide. Let us know of your plans and we will gladly provide some initial consultation to see if IronCloud, our API management platform, is right for your needs.
DataWeek 2012 is a conference happening this week in San Francisco. The theme of the conference is the data revolution occurring across businesses, including the growth of data-related technologies such as "big data", "data as a service", and the Cloud, and how they are creating paradigm changes in product engineering, marketing, and customer relationships. Sessions will discuss how "data science", "data analytics", "open data initiatives", and data platforms will be useful in 2013 and beyond as organizations increasingly recognize data as a strategic asset and a critical driver for business growth.
As businesses become more and more data-driven, to compete they will have to become more adept at understanding their customers and customer behavior, product usage patterns, and opportunities and risks that may not be apparent until operational, customer, and other enterprise data is leveraged to illuminate what is really happening within the business. This can provide the groundwork for insight and the strategic decision-making that follows. In other words, "how does data drive the business forward?"
StrikeIron's CTO Bob Brauer will be moderating a panel on "Making Data Products with Data-as-a-Service", where we will delve into what makes a successful data product, win-win business models, how high-quality data is at the core of any data-driven initiative, and what the future holds for data-as-a-service products. Tom Carlock from D&B, Stephane Dubois from Xignite, and Brian Wilcove from Sofinnova Ventures will also be on the panel.
Come join us for the session, Tuesday, September 25th at noon PDT at the DataWeek event.
Many of StrikeIron's direct customers integrate our various API-delivered data services into applications, Web sites, and business processes entirely on their own, usually with a single line of code or two - a testament to how easy this is to do. These Cloud-based product offerings can be integrated into anything that can consume a SOAP or REST-based Web service (which is just about anything).
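As an illustration of what "a line of code or two" can look like, here is a hedged sketch of calling a REST-based verification service from Python. The endpoint URL, parameters, and response fields below are hypothetical placeholders, not StrikeIron's actual interface.

import requests

# Hypothetical endpoint and parameters, shown only to illustrate the level of effort involved.
response = requests.get("https://api.example.com/email/verify",
                        params={"email": "jane@example.com", "key": "YOUR_API_KEY"},
                        timeout=10)
print(response.json())  # e.g. a status field indicating whether the address is valid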
However, StrikeIron has also developed technology integration partnerships with many of today's top software and Internet solutions platforms, all of which are enhanced by integrating Data-as-a-Service capabilities from StrikeIron.
Having these capabilities, such as real-time address verification, email verification, sales tax rates, foreign currency rates, SMS text messaging, and phone verification, pre-integrated into various other platforms that are already in use by large customers every day can be a very compelling solution. It is a win-win-win scenario for our customers, partners, and our technology.
One such partner is Informatica. Informatica has integrated several StrikeIron services for contact data validation within its Informatica Cloud platform, as data validation is a very important step in the integration of data between various platforms. These services can be used via the Informatica Cloud StrikeIron plug-in, or directly within the Informatica Cloud platform as part of our most recent partnership. In the latter case, some of our services are available for use simply by checking a box directly within Informatica's Cloud application. This makes it very easy to have high-quality, validated data arriving at a target destination, having been cleansed as an intermediate step while in transit from its source. You can view a recorded Webinar here.
There are many different kinds of batch data cleansing processes that can be performed against large databases of existing customer information. Standardizing inconsistent data, removing duplicate records, validating columns against up-to-date reference data, filling in missing data, and appending new data to existing data are all examples of customer data processing that can help improve the value of internal data assets.
When data assets undergo these kinds of processes their value increases and they enable business intelligence applications to be more useful, operations to be more efficient, and customer communication efforts to be more effective. These are worthwhile endeavors indeed.
However, it can often be a considerable effort to do large, after-the-fact database cleanup jobs - not to mention the considerable costs and complexity associated with offline data processing. Also, batch jobs are rarely a one-time effort, as the same problems begin to appear soon after a mass cleansing, and then begin to build to troublesome levels again, putting the data stewards of the organization back to square one.
An alternative can be to leverage real-time data quality mechanisms at the point of data collection. This means validating data, filling in missing data, appending data, standardizing data, and comparing it to existing data for duplicates in real-time, before it ever gets into the database. This can eliminate or dramatically reduce the cost and effort associated with downstream batch cleanup processes, enabling the benefits of clean, complete, accurate data to appear immediately across the organization. It also prevents the build-up of these kinds of data quality issues over time.
Real-time data quality can be achieved by integrating calls to data quality functions within business processes, Website data collection forms, customer-facing applications, call center applications where representatives speak with customers, and anywhere else that data is collected in real-time. Typically these programmatic calls are to Cloud-based APIs that are leveraging constantly refreshed reference data to ensure the highest possible data accuracy.
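To make the point-of-collection approach concrete, here is a minimal sketch, assuming a hypothetical Cloud-based validation endpoint and a MongoDB-style database handle; none of the names below are StrikeIron's actual API.

import requests

def save_contact(form_data, db):
    """Validate a submitted email in real-time, before it ever reaches the database."""
    # Call a (hypothetical) Cloud-based validation API at the moment of collection.
    check = requests.get("https://api.example.com/email/verify",
                         params={"email": form_data["email"]},
                         timeout=5).json()

    if check.get("status") != "valid":
        return False  # reject the record, or prompt the user to correct it

    # Duplicate check against existing data before inserting (pymongo-style calls).
    if db.contacts.find_one({"email": form_data["email"]}):
        return False  # already on file - do not create a duplicate

    db.contacts.insert_one(form_data)  # only clean, validated data enters the database
    return True

The same pattern applies to addresses, phone numbers, or any other field: validate, standardize, and de-duplicate while the data is in flight, rather than cleaning it up in batch later.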
Here more than ever, an ounce of prevention is worth a pound of cure.
One of the exciting things about SOAP and REST-based Web services protocols is that they are text-based, providing the platform independence necessary for broad machine-to-machine communication and open cloud computing models. In other words, describing data using a textual XML dialect allows iPhones to communicate with mainframes, as well as enabling Fortran-developed scientific instrumentation devices to communicate with Dell server applications in the Cloud written in Java.
As long as both machines are aware of the "rules" of a given XML-dialect and how data is described, they can communicate and more importantly pass data back and forth to perform certain functions based on the resultant data. This is powerful and has really helped lay the groundwork for the success of the Cloud.
To demonstrate this concept, here is an example of an "Input" SOAP message to StrikeIron's Sales and Use Tax Basic service. Remember that XML is not primarily meant to be human-readable, but rather to implement a set of XML dialect rules. However, if you look closely you can see the actual data elements passed within the XML message the calling entity sends to StrikeIron's data centers:
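(The message below is a simplified illustration of the shape of such a request; the element names are representative rather than the service's exact schema.)

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <!-- authentication credentials typically travel here -->
  </soap:Header>
  <soap:Body>
    <GetTaxRate>
      <ZipCode>27709</ZipCode>
    </GetTaxRate>
  </soap:Body>
</soap:Envelope>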
Our application servers, which are always listening, receive the request, perform user authentication, carry out the requested task, and return the resultant XML data message below. It can then be used however necessary by the calling entity (to process an ecommerce transaction, for example). Here is an example of the "Output" XML message:
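(Again, this is a simplified illustration of the response's shape rather than the exact schema.)

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetTaxRateResponse>
      <ZipCode>27709</ZipCode>
      <TotalSalesTaxRate>0.0675</TotalSalesTaxRate>
    </GetTaxRateResponse>
  </soap:Body>
</soap:Envelope>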
This communication and data transaction has occurred entirely without human intervention. It takes place between machines that could be located anywhere on the globe, each completely oblivious to the hardware and software that comprise the other entity.
Fortunately, humans rarely if ever need to interact at the XML level (though it can sometimes be useful for debugging). Instead, the creation, sending, receiving, and interpretation of these XML messages are handled by the software development environment one is working in, abstracting the developer or application user away from the XML-based data exchange.
This form of XML messaging is what makes companies like StrikeIron possible, making pre-built data processing, data validation, aggregated data sources, and other business functions available to the world. Regardless of what software and hardware environments a customer happens to be running, it's this approach that makes the ever-evolving "Great Data Highway" possible.
I had an opportunity to moderate a panel at the Data 2.0 Summit this week in San Francisco entitled "Why You Should Join the API Economy". There was a considerable amount of thought leadership on the panel, including Chris Moody, President of Gnip; Gaurav Dhillon, CEO of SnapLogic; Chris Lippi, VP Products of Mashery; Peter Kirwan, Entrepreneur-in-Residence of Neustar; and Tim Milliron, Director of Engineering at Twilio.
We explored several topics including where success is occurring now within various API ecosystems (what is working), where money is actually being made with APIs, what some of the adoption challenges are moving forward, and how people can begin moving down an API path (both publishing APIs and finding relevant and valuable ones to consume) - all of these topics I plan to cover in future blog entries.
However, one area we explored that I thought was especially interesting is the adoption of API-centric business models within larger enterprises. Sure, high-tech companies like Cisco and Salesforce have been utilizing APIs as significant parts of their business models for years. But what is becoming especially interesting, and what demonstrates APIs moving into the mainstream, is the traction of APIs and DaaS (Data-as-a-Service) in traditional vertical industries.
For example, on the publishing side of data and APIs, many government entities are now opening up data channels, such as San Francisco's open data portal, to enable citizens to create innovative applications. Opening up this data to the masses can drive all sorts of innovation that brings benefits to entire communities.
On the consumption side, Mohawk Paper (a company founded in the late 1800s) was discussed; the inspirational data integration case study that Gartner published about it is evidence of an enterprise pulling data together from multiple third parties to create a custom solution in the Cloud. One of these services is StrikeIron's real-time foreign exchange rate service API. And of course, among our 1800 customers there are several Fortune 500 companies that are leveraging our various APIs and DaaS products at increasing rates, all evidence of expanding adoption in the enterprise.
As we see API-centric and DaaS-centric business models emerge that find traction in the enterprise in addition to all of the smaller entrepreneurial innovators and startups, we know we are getting closer and closer to mainstream adoption, which is where some of the biggest opportunities are yet to be realized.
Aggregating Cloud services and adding value is not new. In fact, StrikeIron has been doing it since 2004, when we launched our Web Services Marketplace aimed at making it easy to integrate SOAP and REST-based APIs. What is new, however, is the term "Cloud Services Brokerage", which has come onto the scene over the past couple of years and is now used more and more commonly by analysts, vendors, and enterprise IT professionals. It has evolved to carry much more of a "Cloud" focus than earlier service brokerage concepts, but the general premise and benefits are still pretty much the same.
The key idea is that multiple services are aggregated from multiple sources of data, and then delivered via a single point of entry. The "brokerage" handles integrating, customizing, governing, and otherwise normalizing the access to these data sources, all in an effort to reduce end-user complexity. This normalization not only extends to the interfaces, but also the data structures, service behavior, service responses, and the business models that dictate service usage.
This is all very important because of the breadth of data and data-driven business functions that are available out on the Web that can be put to use. Many of these data sources are commercial, but some of them are also public, and others are created in real-time. If leveraged, much of this third-party data can provide a tremendous value to the organization that can figure out how to make use of it, including within operations, to aid decision-making, and as an important component of sales and marketing campaigns.
However, in raw form, the data available out on the Web typically exists in all kinds of shapes, sizes, and formats, with an equal variability in business models to match, making it a very complex exercise to harness any of it. If you are familiar with the demise of UDDI, you know how important it is to overcome these challenges; overcoming them was not among the tenets of UDDI upon its introduction, and as a result it receives very little consideration today.
Simplifying access to these rich data sources in a reliable, high-performance manner, on top of a multi-tenant delivery platform built to both manage and abstract the underlying complexity of the external data and data-related functions, is the purpose of a Cloud Services Brokerage - and exactly what StrikeIron delivers. Providing consistent, easy, plug-n-play access to a normalized set of high-value services, without the requirement of managing, updating, and otherwise maintaining the underlying data, is an important step in bringing the concept of "The Great Data Highway" into being. It is a modern approach to the distribution of data via the Cloud, and one that over 1600 StrikeIron customers can attest to.
StrikeIron was highlighted in a 451 Research Impact Report called "StrikeIron offers Web services for business use cases, focuses on data-quality roots." The report features StrikeIron's products, technology, customer/sales model, and competition. It also includes a SWOT analysis.
For a copy of the report, go to http://blog.strikeiron.com/451-research-download/
In his keynote yesterday at the Oracle OpenWorld event, Larry Ellison announced the Oracle Public Cloud and Oracle's move into Infrastructure-as-a-Service (IAAS) offerings, primarily geared towards Java developers and users of Oracle's Fusion Applications. The "Fusion Applications" brand represents a set of over 100 different modules (financials, HR, etc.) designed to run both on-premise and, now, in the Cloud, and it is launching after six years of development.
Clearly the Sun acquisition gave Oracle a lot of the Cloud technology to get to this point, but Salesforce's $2 billion in revenue, increasing penetration into enterprises, and launch of Database.com at the Dreamforce event might be pushing Oracle more quickly in this direction.
However, Ellison was quick to point out that Oracle's Cloud approach was an open one and would enable deployments to be moved to other Cloud environments such as Amazon.com (at least in theory) because of its Java roots, rather than a proprietary one like Salesforce.com's where applications are built with a proprietary language (Apex). Cost, however, was not discussed.
In addition to IAAS and Fusion Applications, Oracle will also have other hosted offerings available in its Public Cloud, such as its database platform, the Sun operating systems, Fusion Middleware, and its Enterprise Manager offering.
This move is more evidence that the industry is moving full steam ahead to Cloud-based deployments, where enterprises can consolidate legacy spending, have fewer servers and other hardware, fewer on-premise software deployments, and a greater reliance on SAAS applications and other service-oriented offerings such as data-as-a-service (DAAS).
One of the things you can see from the picture below is that the Cloud really lays the foundation for "data service" components (notice the distinction versus "database service"), enabling enterprises to quickly leverage third-party datasets and data-oriented business functions such as customer contact data validation. This would be more difficult to achieve in on-premise solutions because third-party data has to be acquired, stored, maintained, and managed - a costly and time-consuming process. With the Cloud, you can simply plug into these services and have all of the third party data managed for you.
So the Public Cloud has been announced, but when will it be launched? StrikeIron is eagerly waiting.
Oracle announced the Oracle CRM On Demand Release 19 Innovation Pack at the Oracle OpenWorld event today in San Francisco.
The release includes an enterprise marketing component enabling marketing professionals to build and manage campaigns, Web sites, and other customer-facing documents. It also includes lead management capabilities as well as role management and segment targeting.
In addition to the marketing features added to Oracle CRM on Demand, there is also a hosted contact center capability introduced in this release.
Presumably, many of these marketing features are the fruit of the Market2Lead acquisition last year. Oracle's expansion into a marketing automation platform demonstrates continued investment in SAAS and CRM On Demand, enabling it to compete more against entrenched SAAS stalwarts such as Salesforce.com.
With marketing automation being added to Oracle's CRM on Demand platform, high quality and comprehensive data at the foundation becomes an even greater imperative. Fortunately, StrikeIron's integration of its Contact Data Verification Suite into the Oracle CRM On Demand platform will play an even greater role in the success of Oracle's SAAS initiatives going forward with this new announcement.
In addition, StrikeIron's mobile messaging solutions can play a significant role with the new campaign capabilities introduced today, as it does with many customers currently using it on other marketing automation platforms as part of critical mobile campaigns.