Many of StrikeIron's direct customers integrate our various API-delivered data services into applications, Web sites, and business processes entirely on their own, usually with just a line or two of code - a testament to how easy this is to do. These Cloud-based product offerings can be integrated into anything that can consume a SOAP or REST-based Web service (which is just about anything).
However, StrikeIron has also developed technology integration partnerships with many of today's top software and Internet solution platforms, all of which are enhanced by integrating Data-as-a-Service capabilities from StrikeIron.
Having these capabilities, such as real-time address verification, email verification, sales tax rates, foreign currency rates, SMS text messaging, and phone verification, pre-integrated into platforms that large customers already use every day can be a very compelling solution. It is a win-win-win for our customers, our partners, and StrikeIron.
One such partner is Informatica, which has integrated several StrikeIron services for contact data validation within its Informatica Cloud platform, since data validation is a critical step when integrating data between platforms. These services can be used via the Informatica Cloud StrikeIron plug-in, or directly within the Informatica Cloud platform per our most recent partnership. In the latter case, some of our services are available simply by checking a box within Informatica's Cloud application. This makes it very easy to have high-quality, validated data arrive at a target destination, cleansed as an intermediate step while in transit from its source. You can view a recorded Webinar here.
Leveraging the Cloud to verify addresses is easy to do and can pay significant dividends. Incorrect shipping addresses, address typos in Web forms, "return to sender" stamps, prospects who never receive marketing materials, and customers who never hear from you can all be a thing of the past simply by integrating a single line of validation code wherever address data is collected, whether electronically or by humans. Address verification is one of the best examples of how to leverage a cloud service to provide value to an organization with very little effort.
The United States Postal Service, like other postal entities around the world, publishes updates to its master physical address database monthly. These updates capture new houses being built, new businesses emerging, the launch of new ZIP codes, and all of the other address additions, updates, and deletions that occur throughout the U.S. on an ongoing basis. Using this master reference data as a validation source ensures the accuracy of collected addresses by comparing each one to the master list in real-time (a millisecond across-the-Web check). This is something you definitely want to do as the address is collected, since waiting to fix bad addresses downstream can be far more costly.
For an organization that wants to put this verification reference data to use for maximum address accuracy, however, obtaining, managing, and updating it in raw form can be quite a difficult exercise to undertake on its own, and most often can only be done at considerable cost in hardware, software, personnel, and time. And shortcut approaches that try to leverage this data without the frequent updates can be an absolute disaster.
This is where StrikeIron and the Cloud come in. We acquire and update the reference data from various third parties as frequently as they allow, and maintain it in our data centers within our multi-tenant, high-performance delivery platform. We have built and refined intelligent algorithms to handle address-matching ambiguity (a straight, raw lookup won't work), and we provide interfaces to these verification algorithms in multiple protocols (SOAP, REST, HTTP Secure) so they can easily be integrated into any business process, application, Web site, or even mobile device. We pull all of this together, productize it, and deliver it as a simple, easy-to-use, plug-and-play Web services API. A single line of code from any platform is all it takes to tap into these algorithms living out on the Web. And like all good Cloud services, all of the details are abstracted away.
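To make the "single line of code" point concrete, here is a minimal Python sketch (standard library only) that calls the North American Address Verification REST endpoint shown in the browser-ready example elsewhere on this blog; the credentials are placeholders, and printing stands in for real response handling:

import urllib.parse
import urllib.request

# Verify an address at the point of collection via a REST call.
# The endpoint and parameter names follow the browser example elsewhere
# on this blog; substitute a real StrikeIron UserID and Password.
BASE = ("http://ws.strikeiron.com/NAAddressVerification6/"
        "NorthAmericanAddressVerificationService/NorthAmericanAddressVerification")

params = {
    "LicenseInfo.RegisteredUser.UserID": "YOUR_USER_ID",
    "LicenseInfo.RegisteredUser.Password": "YOUR_PASSWORD",
    "NorthAmericanAddressVerification.AddressLine1": "15501 Weston Parkway",
    "NorthAmericanAddressVerification.AddressLine2": "",
    "NorthAmericanAddressVerification.CityStateOrProvinceZIPOrPostalCode": "Cary NC",
    "NorthAmericanAddressVerification.Country": "US",
    "NorthAmericanAddressVerification.Casing": "UPPER",
}

# The standardized, validated address comes back as XML within milliseconds.
with urllib.request.urlopen(BASE + "?" + urllib.parse.urlencode(params)) as resp:
    print(resp.read().decode())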
Best of all, our solutions are usage-based in cost, so you don't have to bet the farm just to get started, and you can always start small. And, free trials are available, so we can gain your trust before you subscribe. Give it a try today!
Email validation, if done effectively, not only keeps organizations off of spam lists when communicating with customers and prospects, but can also surface new opportunities. Running a batch of email validation checks against an entire CRM contact database at regular intervals is likely to yield a subset of invalid email addresses within the contact data. Usually this is because people have left their organizations for one reason or another; it could also mean they changed their email address, or something else entirely.
Since these invalid email addresses most often represent contacts who are no longer at a customer or target company (and whose email addresses have been disabled), there is likely a replacement, or someone else within that organization, with whom it is important to create a new relationship. Recognizing this, and being alerted to it through an email validation process, helps savvy customer-facing employees get in front of replacement contacts sooner rather than later in order to maintain key existing relationships.
Of course, this process only works with real-time email validation solutions that do not rely on static database lookups against data that can age quickly. Only solutions that go out to the Internet with each check to recognize invalid email addresses in real-time are useful in these scenarios.
The identification of these invalid email addresses typically triggers an action for a customer service representative or salesperson to reach out to the company and begin building a new relationship. This contact action can spark new opportunities as well as strengthen existing customer relationships, since the departure of the previous point of contact provides a natural basis for communication with the customer, partner, or target company.
Fortunately, it is very easy to run these kinds of batch email validations using a Cloud-based email validation product such as StrikeIron's. A mechanism calls out to the StrikeIron platform via an API function call as each contact record is checked, simply reporting which email addresses are no longer valid. This creates a contact action list for customer service or sales teams to follow up on, potentially turning it into new business opportunities in addition to maintaining existing ones.
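Here is a minimal Python sketch of such a batch pass; the endpoint, parameter names, and response parsing are illustrative assumptions for this sketch, not StrikeIron's actual email verification interface:

import urllib.parse
import urllib.request

def email_is_valid(address, user_id, password):
    # Hypothetical wrapper around a Cloud email verification API; the
    # endpoint and parameter names here are illustrative assumptions.
    query = urllib.parse.urlencode({
        "UserID": user_id,
        "Password": password,
        "Email": address,
    })
    url = "http://ws.example.com/EmailVerification?" + query  # placeholder endpoint
    with urllib.request.urlopen(url) as resp:
        return b"<Valid>true</Valid>" in resp.read()

# Batch pass over CRM contact records, building a follow-up action list.
contacts = [
    {"name": "Jane Doe", "email": "jane.doe@formeremployer.example"},
    {"name": "John Roe", "email": "john.roe@currentemployer.example"},
]
action_list = [c for c in contacts
               if not email_is_valid(c["email"], "YOUR_USER_ID", "YOUR_PASSWORD")]
# action_list now drives customer service or sales follow-up.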
The appropriate time interval to run these types of mass email validation processes against a contact database could be every 30 days, every 90 days, or whatever seems ideal for a given business. Keeping contact data fresh and current can be a tremendous competitive advantage, especially when it's so easy.
I had an opportunity to moderate a panel at the Data 2.0 Summit this week in San Francisco entitled "Why You Should Join the API Economy". There was a considerable amount of thought leadership on the panel, including Chris Moody, President of Gnip; Gaurav Dhillon, CEO of SnapLogic; Chris Lippi, VP of Products at Mashery; Peter Kirwan, Entrepreneur-in-Residence at Neustar; and Tim Milliron, Director of Engineering at Twilio.
We explored several topics, including where success is occurring now within various API ecosystems (what is working), where money is actually being made with APIs, what some of the adoption challenges are moving forward, and how people can begin moving down an API path (both publishing APIs and finding relevant, valuable ones to consume) - all topics I plan to cover in future blog entries.
However, one area we explored that I thought was especially interesting is the adoption of API-centric business models within larger enterprises. Sure, high-tech companies like Cisco and Salesforce have been utilizing APIs as significant parts of their business models for years. But where it gets especially interesting, and demonstrates APIs moving into the mainstream, is the traction of APIs and DaaS (Data-as-a-Service) in traditional vertical industries.
On the publishing side, for example, many government entities are now opening up data channels, such as San Francisco's open data portal, to enable citizens to create innovative applications. Opening up this data to the masses can drive all sorts of innovation that benefits entire communities.
On the consumption side, the inspirational data integration case study that Gartner published about Mohawk Paper (a company founded in the late 1800s) was discussed as evidence of an enterprise pulling data together from multiple third parties to create a custom solution in the Cloud. One of those services is StrikeIron's real-time foreign exchange rate API. And among our 1800 customers there are several Fortune 500 companies leveraging our various APIs and DaaS products at increasing rates, all evidence of expanding adoption in the enterprise.
As we see API-centric and DaaS-centric business models emerge that find traction in the enterprise in addition to all of the smaller entrepreneurial innovators and startups, we know we are getting closer and closer to mainstream adoption, which is where some of the biggest opportunities are yet to be realized.
One of the features of StrikeIron's IronCloud platform is that it can accept invocations of Web services via multiple protocols, including both SOAP and REST. This maximizes the audience of potential users and provides a good deal of flexibility across IDEs, coding styles, and platform implementations.
In addition to supporting SOAP calls within the platform (including SOAP Headers, SOAP parameter-based authentication, and SOAP with HTTP Secure), we also support REST calls. This is achieved within the "Transformation" sub-system of our IronCloud platform: we translate the REST call to its equivalent SOAP call before it hits the actual Web service living within our data centers, then translate the response back to the REST format before it is sent back to the calling entity, all within milliseconds of course.
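Conceptually, the REST-to-SOAP translation looks something like the Python sketch below; the element names are purely illustrative, as the internals of the Transformation sub-system are not exposed:

def rest_to_soap(method, params):
    # Map REST query parameters onto the body of an equivalent SOAP
    # request, roughly as a transformation layer might.
    body = "".join(f"<{k}>{v}</{k}>" for k, v in params.items())
    return (
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        f"<soap:Body><{method}>{body}</{method}></soap:Body>"
        "</soap:Envelope>"
    )

print(rest_to_soap("VerifyEmail", {"Email": "someone@example.com"}))

The reverse translation, from SOAP response back to REST format, follows the same pattern in the other direction.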
Here is an example using REST with our North American Address Verification service, a Web API that validates the existence of any address in the United States or Canada and then standardizes it according to postal standards (as well as appending additional data such as county and latitude/longitude coordinates). The example below can be entered into any Web browser address bar as-is (with the appropriate authentication; sign up for a free trial or contact StrikeIron to get access) in order to get a response. You can then change the parameter values for different addresses to get the corresponding responses. You can also try other methods within any of our Web services following the same form (changing the parameters to match the method, of course).
http://ws.strikeiron.com/NAAddressVerification6/NorthAmericanAddressVerificationService/NorthAmericanAddressVerification?LicenseInfo.RegisteredUser.UserID=***********&LicenseInfo.RegisteredUser.Password=******&NorthAmericanAddressVerification.AddressLine1=15501 Weston Parkway&NorthAmericanAddressVerification.AddressLine2=&NorthAmericanAddressVerification.CityStateOrProvinceZIPOrPostalCode=Cary NC&NorthAmericanAddressVerification.Country=US&NorthAmericanAddressVerification.Casing=UPPER
Because a REST call contains parameters including UserID and Password, we recommend that these parameters be stored in a non-viewable configuration file rather than in the Web page source itself, or hidden by some other means (within server-side code or a database, for example).
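For example, a server-side script might load the credentials from a configuration file kept outside the Web root, as in this minimal Python sketch (the file name and section are illustrative):

import configparser
import urllib.parse
import urllib.request

# Credentials live in a server-side file that is never sent to the browser.
config = configparser.ConfigParser()
config.read("strikeiron.ini")  # illustrative name; keep it outside the Web root
user_id = config["strikeiron"]["user_id"]
password = config["strikeiron"]["password"]

# Build the same REST call as above, with credentials supplied at runtime.
base = ("http://ws.strikeiron.com/NAAddressVerification6/"
        "NorthAmericanAddressVerificationService/NorthAmericanAddressVerification")
query = urllib.parse.urlencode({
    "LicenseInfo.RegisteredUser.UserID": user_id,
    "LicenseInfo.RegisteredUser.Password": password,
    "NorthAmericanAddressVerification.AddressLine1": "15501 Weston Parkway",
    "NorthAmericanAddressVerification.CityStateOrProvinceZIPOrPostalCode": "Cary NC",
    "NorthAmericanAddressVerification.Country": "US",
})
with urllib.request.urlopen(base + "?" + query) as resp:
    xml = resp.read().decode()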
Have a REST-related question? Contact us at firstname.lastname@example.org. Would you like a free trial? Contact us at email@example.com.
Aggregating Cloud services and adding value is not new. In fact, StrikeIron has been doing it since 2004, when we launched our Web Services Marketplace aimed at making it easy to integrate SOAP and REST-based APIs. What is new, however, is the term "Cloud Services Brokerage", which has come onto the scene in the past couple of years and is now increasingly commonplace among analysts, vendors, and enterprise IT professionals. It has evolved to carry much more of a "Cloud" focus than earlier service brokerage concepts, but the general premise and benefits are still pretty much the same.
The key idea is that multiple services are aggregated from multiple sources of data, and then delivered via a single point of entry. The "brokerage" handles integrating, customizing, governing, and otherwise normalizing the access to these data sources, all in an effort to reduce end-user complexity. This normalization not only extends to the interfaces, but also the data structures, service behavior, service responses, and the business models that dictate service usage.
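As a rough Python sketch of the idea (the source names, formats, and response shapes are invented for illustration), a brokerage presents heterogeneous upstream services through one normalized interface:

def fetch_from_vendor_a(query):
    # Imagine an XML-over-SOAP source with its own response vocabulary.
    return {"status": "OK", "payload": {"result": query.upper()}}

def fetch_from_vendor_b(query):
    # Imagine a JSON-over-REST source with a different vocabulary.
    return {"code": 200, "data": query.lower()}

def broker(source, query):
    # Single point of entry: normalize interfaces, response shapes,
    # and status signaling across the underlying sources.
    if source == "vendor_a":
        raw = fetch_from_vendor_a(query)
        return {"ok": raw["status"] == "OK", "data": raw["payload"]["result"]}
    raw = fetch_from_vendor_b(query)
    return {"ok": raw["code"] == 200, "data": raw["data"]}

print(broker("vendor_a", "Hello"))  # {'ok': True, 'data': 'HELLO'}
print(broker("vendor_b", "Hello"))  # {'ok': True, 'data': 'hello'}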
This is all very important because of the breadth of data and data-driven business functions available out on the Web. Many of these data sources are commercial, some are public, and others are created in real-time. Much of this third-party data can provide tremendous value to an organization that figures out how to make use of it, whether within operations, to aid decision-making, or as an important component of sales and marketing campaigns.
However, in raw form, the data available out on the Web exists in all kinds of shapes, sizes, and formats, with an equal variability in business models to match, making it a very complex exercise to harness any of it. If you are familiar with the demise of UDDI, you know how important it is to overcome these challenges: addressing them was not among the tenets of UDDI at its introduction, and as a result UDDI receives very little consideration today.
Simplifying access to these rich data sources in a reliable, high-performance manner, on top of a multi-tenant delivery platform built to both manage and abstract the underlying complexity of the external data and data-related functions, is the purpose of a Cloud Services Brokerage, and exactly what StrikeIron delivers. Providing consistent, easy, plug-and-play access to a normalized set of high-value services, without the burden of managing, updating, and otherwise maintaining the underlying data, is an important step in bringing the concept of "The Great Data Highway" into being. It is a modern approach to the distribution of data via the Cloud, and one that over 1600 StrikeIron customers can attest to.
Master Data Management, also known as MDM, often comes up in conversation as a key information technology initiative for an enterprise, including considerations for leveraging the Cloud as an ideal environment for MDM. We can save the Cloud considerations for a later post; for now, a very basic MDM primer might be a good idea for those who scratch their heads at the mere mention of the term. It's actually a much simpler concept than descriptions of it often suggest.
MDM is simply about keeping non-transactional data, such as customer data, in a single place (logically or physically) to be shared across many different systems. When each system that uses this customer data, whether it is a CRM system, an accounting system, a support system, or a business intelligence system, updates or adds to data about a customer, all connected systems gain the benefit of that change.
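To make the idea concrete, here is a minimal Python sketch (the systems and fields are invented for illustration): every line-of-business system reads and writes through one master record, so a change made by any of them is immediately visible to all:

# One master store for customer records, shared by every connected system.
master = {}  # customer_id -> master record

def update_customer(customer_id, **changes):
    # Any connected system (CRM, accounting, support, BI) writes through
    # this single point, so every other system sees the change.
    master.setdefault(customer_id, {}).update(changes)

def get_customer(customer_id):
    # Every connected system reads the same master record.
    return master[customer_id]

# The CRM corrects a phone number...
update_customer(42, name="Acme Corp", phone="+1-919-555-0100")
# ...and the support system immediately sees the corrected value.
print(get_customer(42)["phone"])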
This "master data" approach also eliminates inconsistencies of things like contact information and customer notes across different systems where customer data may be captured, and does a far better job of keeping customer data current. To use a few timeworn adages, with MDM we are keeping all of our customer data eggs in one basket, with the idea that the whole is greater than the sum of the parts.
The other key benefit to maintaining and managing customer data in one place for use across multiple line-of-business systems is that customer data-specific activities can be performed with that data, benefitting each connected system that makes use of the data. This includes customer data quality initiatives, not only validating and correcting customer data to keep it as current as possible, but also using external third party data sources to create a more vivid, detailed view of an organization's customers. Other activities include customer data consolidation (matching and eliminating redundancy), data governance, easing the distribution of customer data (the Cloud), better customer communication, and a foundation for richer analytics that utilize customer data.
At the end of the day, customer data is one of a company's most important assets. The MDM approach helps maximize the value of that data for use.
Over the years, we have learned that providing flexibility within our Cloud-based data delivery platform helps better serve (and therefore expand) our customer base. In order to support the broadest number of coding styles, Cloud environments, IDEs, and other software development tools, we have provided multiple ways to invoke our various Web services APIs. This includes use of multiple protocols, as well as multiple ways to utilize each protocol.
For example, when using the SOAP protocol to invoke a StrikeIron Web service, you have the option of passing authentication parameters (your UserID and Password) either within SOAP headers, or as parameters as part of the service data payload.
Some developers prefer SOAP headers because it allows them to create reusable authentication code that can be leveraged with multiple StrikeIron services (all StrikeIron services share the same interface, authentication mechanisms, service behavior, and response codes). However, many IDEs do not support SOAP headers, so sending authentication by parameter, along with the rest of the data payload, is the only option in some cases.
Each authentication method requires a different service endpoint. The primary difference (other than the service call structure inside the XML, which is transparent in most cases) is the domain prefix: "ws" for the SOAP Header endpoint, and "wsparam" if you want to pass the UserID and Password as actual parameters to the service.
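For illustration, the two variants for North American Address Verification would look like this; the service path below is inferred from the REST example elsewhere on this blog, so treat it as a sketch rather than the definitive endpoint:

# SOAP Header authentication: "ws" domain prefix.
SOAP_HEADER_ENDPOINT = ("http://ws.strikeiron.com/NAAddressVerification6/"
                        "NorthAmericanAddressVerificationService")

# Parameter-based authentication: "wsparam" prefix, same path otherwise.
SOAP_PARAM_ENDPOINT = ("http://wsparam.strikeiron.com/NAAddressVerification6/"
                       "NorthAmericanAddressVerificationService")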
We also provide the ability to invoke our services using REST. Sometimes this is preferred, especially when services are being built into Web sites using PHP, Ajax, and other scripting technologies. Here is an example of our Email Verification service invoked via a REST call (if you have a StrikeIron UserID and Password, you can paste a URL of this form into a Web browser to invoke it):
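The URL below follows the same form as the address verification example; the service path, method name, and parameter names are illustrative assumptions rather than the exact production interface, and the credentials are placeholders:

http://ws.strikeiron.com/EmailVerification/VerifyEmail?LicenseInfo.RegisteredUser.UserID=YOUR_USER_ID&LicenseInfo.RegisteredUser.Password=YOUR_PASSWORD&VerifyEmail.Email=someone@example.com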
Finally, if you prefer to transmit the data of a Web service invocation securely, simply replace "http" with "https" in the service endpoint URL. This will encrypt the data as it travels from your application or Web site to the StrikeIron platform and data centers, and back again.
These authentication and invocation options are the ones our customer base has demanded most to date, which is why we have implemented them. If there are other protocols or authentication mechanisms you would like to see us support in our delivery platform, please let us know in the comments below.
Email is a great way to communicate with customers and leads. However, email campaigns can be put at risk if a company’s database has out-of-date or inactive email addresses.
Sending promotional materials to invalid or disabled email addresses, if done frequently enough, can put a company on spam lists. This means that important communications to legitimate email addresses will land in the spam folder and likely go unread.
How can this happen? People change email addresses all the time. They leave companies, create temporary email addresses for certain purposes, or simply start getting too much spam. Consequently, they create a new email address and disable the old one.
If you send email to invalid addresses frequently enough, the ISPs (Internet Service Providers) that host those addresses will put you on their spammer lists under the assumption that you are sending to random addresses. They will then block all of the email you send in the future to any other accounts they host. This can severely hamper your marketing and customer communication efforts, so maintaining clean email address databases is very important. And once you are on a spammer list, it can be very difficult to get off.
Simply checking the syntax of email addresses (is it missing a period? is it missing the @ symbol?) is not enough. It's also important to know whether the email address can receive messages without bouncing. After all, the syntax of a disabled email address is likely still correct. This is where a sophisticated email address verification process that employs multiple algorithms and real-time validity mechanisms becomes extremely important and useful.
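A quick Python sketch makes the point (the addresses are invented examples):

import re

# Naive syntax check: catches a missing "@" or a malformed domain, but
# says nothing about whether the mailbox actually accepts mail.
SYNTAX = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

print(bool(SYNTAX.match("jane.doeexample.com")))        # False: no "@", caught
print(bool(SYNTAX.match("jane.doe@disabled.example")))  # True: syntax is fine,
# yet this mailbox may have been disabled long ago; only a real-time
# deliverability check can tell the difference.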
Scanning all email addresses to verify validity prior to outbound communication (or at least at some regularly scheduled interval), and then removing those that can no longer receive email, is now an imperative for an effective communications program.
I attended the Data 2.0 Conference this week in San Francisco. There is a lot to be excited about in this emerging, growing, and quickly-accelerating industry. However, there are still some significant obstacles to overcome before the vision of the data-driven world and the "great data highway in the sky" can truly be realized.
First, there is the exciting stuff. New companies continue to emerge and grow in the space across multiple categories, including broad data sharing sites (FluidInfo, InfoChimps), purveyors of proprietary and hard-to-capture data (Navteq, Metamarkets), API infrastructure providers (WebServius, Apigee, Mashery), specific data category providers (SimpleGeo, Rapleaf, Socrata, DataSift), providers of API-based solutions (StrikeIron, Xignite), and slick data visualization tools (too numerous to list).
Also, the companies that have been in this space for five or more years are becoming larger and more sophisticated, and in many cases are continuing to raise significant amounts of capital from investors looking to capitalize on the data megatrend. Even stalwarts such as Microsoft, with its DataMarket, are eyeing future fruitful harvests in this space. Twitter also announced the commercial licensing of its entire "fire hose" of Tweets at the event, a move that providers of analytics tools are hailing.
More and more public, government-sourced data is coming online every day as fodder for this machine-to-machine information feeding frenzy. This data is coming from every level of government: cities such as San Francisco (datasf.org), state governments such as Oregon (data.oregon.gov, announced two weeks ago), and the federal government's data.gov initiative (rumors that its funding might be cut are false, according to keynote speaker and charismatic, sometimes controversial industry insider Vivek Wadhwa).
All of these government-sourced data assets are being made available to the general public in the hopes of civically engaging the creativity among us to innovate and create public value without the traditional budgetary costs. This has already led to a proliferation of applications such as live online maps of San Francisco municipal transportation schedules (including for the iPhone and Android platforms), as well as municipal vendor contracts posted online for public discussion (it's amazing how many vendors that consistently come in late and over budget get rewarded with new contracts over and over), like Chicago's citypayments.org site.
Finally, the platforms (the fertile Cloud and all that is happening there with Google App Engine, Microsoft's Azure, and Amazon's various cloud offerings) and especially the devices (like smartphones and the iPad) that can make use of this data are also marching forward at a breathtaking pace.
This is all very exciting for those of us for whom data represents a livelihood.
However, the significant challenge around the accessibility and usability of these vast seas of data is that this is still largely a complex, IT-oriented developer's world. Most access to these data sources is either via API, or in structures and formats as varied as the data itself. This limits the applicability of these valuable data sources to a small group of dedicated engineers and leaves us all with only a modicum of the true potential of this space.
Sure, there are API protocol standards such as REST and SOAP, but these only scratch the surface. Most single-API vendors introduce a new set of behaviors, data structures, response codes, and a new business model with each new API. This adds greatly to the complexity for anyone looking to put these data sources to use, both initially and over time. Until we can find a way to normalize the great data highway, it's going to be a bumpy road. Those who know me know I've been preaching this for years and have applied much of it to StrikeIron's various data and API offerings; however, it can be a difficult proposition to get adopted across the industry.
The consumption complexity issue is demonstrated by the term "mashup". Several years ago, this was the term-du-jour of an industry claiming that non-developers could combine datasets in interesting and exciting new ways without the assistance of their IT organizations. The term, however, has all but shriveled and died. Why? Because non-developers couldn't do it. The tools were cumbersome and complex and represented whole new learning curves that most people simply don't have the time or patience for, and the datasets themselves lacked standards. In fact, I never heard the term mentioned a single time at the event; a few years back, it would have been in nearly every discussion. May the term "mashup" rest in peace.
Until we clear this hurdle of data consumption complexity, and as long as vendors in the space only pay lip service to these challenges (an attitude prevalent on several of the panels at the event), the data-driven world will remain only a shell of what it could be.