One of the exciting things about SOAP and REST-based Web services protocols is that they are text-based, providing the platform independence necessary for broad machine-to-machine communication and open cloud computing models. In other words, describing data using a textual XML dialect allows iPhones to communicate with mainframes, and enables Fortran-developed scientific instrumentation devices to communicate with Java applications running on Dell servers in the Cloud.
As long as both machines are aware of the "rules" of a given XML dialect and how data is described, they can communicate and, more importantly, pass data back and forth to perform certain functions based on the resultant data. This is powerful, and it has helped lay the groundwork for the success of the Cloud.
To demonstrate this concept, here is an example of an "Input" SOAP message to StrikeIron's Sales and Use Tax Basic service. Remember that this XML is not primarily meant to be human readable; it is the implementation of a set of XML dialect rules. If you look closely, however, you can see the actual data elements that the calling entity passes within the XML message received by StrikeIron within our data centers:
Our application servers, which are always listening, receive the request, perform user authentication, carry out the requested task, and return the resultant XML data message below. The calling entity can then use it however necessary (to process an ecommerce transaction, for example). Here is an example of the "Output" XML message:
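Conceptually, an exchange like this can be sketched in a few lines of Python using only the standard library. The service namespace, the method name (GetTaxRate), and the field names here are assumptions for illustration only; the real service's WSDL defines the actual contract:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
# Hypothetical service namespace for illustration; not StrikeIron's actual schema.
SVC_NS = "http://ws.strikeiron.com"

def build_request(zip_code: str) -> str:
    """Build an 'Input' SOAP envelope carrying the request data as XML."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{SVC_NS}}}GetTaxRate")  # hypothetical method name
    ET.SubElement(call, f"{{{SVC_NS}}}ZIPCode").text = zip_code
    return ET.tostring(envelope, encoding="unicode")

def parse_response(xml_text: str) -> dict:
    """Pull the data elements back out of an 'Output' SOAP envelope."""
    root = ET.fromstring(xml_text)
    return {el.tag.split("}")[-1]: el.text
            for el in root.iter()
            if el.text and el.text.strip()}
```

In practice a developer would rarely write this by hand; as noted below, development environments generate and consume these envelopes automatically.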
This communication and data transaction has occurred entirely without human intervention. It takes place between machines that could be located anywhere on the globe, each completely oblivious to the hardware and software that comprise the other entity.
Fortunately, humans rarely, if ever, need to interact at the XML level (though it can sometimes be useful for debugging). Instead, the creation, sending, receiving, and interpretation of these XML messages are handled by the software development environment one is working in, abstracting the developer or application user away from the XML-based data exchange.
This form of XML messaging is what makes companies like StrikeIron possible, opening up pre-built data processing, data validation, aggregated data sources, and other business functions to the world. Regardless of what software and hardware environments a customer happens to be running, it's this approach that makes the ever-evolving "Great Data Highway" possible.
Email validation, if done effectively, not only keeps organizations off of spam lists when communicating with customers and prospective customers, but also can provide new opportunities. Running a batch of email validation checks against an entire CRM contact database at regular time intervals is likely to yield a subset of invalid email addresses within the contact data. Usually this is because people have left their organizations for one reason or another, though it could also mean they changed their email address, or something else entirely.
Most of the time, these invalid email addresses represent contacts who are no longer at customer or target companies (and whose email addresses have been disabled). Those contacts likely have a replacement, or there is someone else within the organization worth building a new relationship with. Being alerted to this via an email validation process helps savvy customer-facing employees get in front of replacement contacts sooner rather than later, in order to maintain key existing relationships.
Of course, this process only works if real-time email validation solutions are utilized that do not rely upon static database lookups against data that can age quickly. Only solutions that go out to the Internet with each email check to recognize invalid email addresses in real time will be useful in these scenarios.
The identification of these invalid email addresses in real time typically triggers an action for a customer service representative or a salesperson to reach out to the company and begin building the new relationship. This contact action can be a key trigger for new opportunities, as well as for strengthening existing relationships, since the departure of the previous point of contact provides a natural basis for communication with the customer, partner, or target company.
Fortunately, it is very easy to run these kinds of batch email validations using a Cloud-based email validation product such as StrikeIron provides. A mechanism is created that calls out to the StrikeIron platform via an API function call as each contact record is checked, simply reporting which email addresses are no longer valid. This creates a contact action list for customer service or sales teams to follow up on, potentially turning it into new business opportunities in addition to maintaining existing ones.
The appropriate time interval to run these types of mass email validation processes against a contact database could be every 30 days, every 90 days, or whatever seems ideal for a given business. Keeping contact data fresh and current can be a tremendous competitive advantage, especially when it's so easy.
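A periodic batch pass of this kind can be sketched in a few lines of Python. Here `validate_email` is only a placeholder standing in for a real-time API call; its name, the trivial syntax check, and the record shape are assumptions, not StrikeIron's actual interface:

```python
# Sketch of a periodic batch email-validation pass over CRM contacts.

def validate_email(address: str) -> bool:
    """Placeholder for a real-time validation API call; here, a trivial syntax test."""
    name, _, domain = address.partition("@")
    return bool(name) and "." in domain

def build_action_list(contacts):
    """Return contacts whose email failed validation, for sales/service follow-up."""
    return [c for c in contacts if not validate_email(c["email"])]

crm_contacts = [
    {"name": "A. Buyer", "email": "a.buyer@example.com"},
    {"name": "B. Left",  "email": "b.left@"},  # departed contact, dead address
]
action_list = build_action_list(crm_contacts)
```

The resulting action list is exactly the set of contacts worth reaching out about, whatever interval the batch runs on.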
One of the features of StrikeIron's IronCloud platform is that it can accept invocations of Web services via multiple protocols including both SOAP and REST. This maximizes the audience of potential users and provides for a good deal of flexibility with multiple IDEs, coding styles, and platform implementations.
In addition to the support for SOAP calls within the platform (including SOAP Headers, SOAP parameter-based authentication, and SOAP over HTTPS), there is also support for accepting REST calls. This is achieved within the "Transformation" sub-system of our IronCloud platform: we translate the REST call to its equivalent SOAP call before it hits the actual Web service living within our data centers, and then translate the response back to the REST format before it is sent back to the calling entity, all within milliseconds of course.
Here is an example using REST with our North American Address Verification service, a Web API that validates the existence of any address in the United States or Canada, and then standardizes the address according to postal standards (as well as appending additional data such as county and latitude/longitude coordinates). The example below can be entered into any Web browser address line as-is (with the appropriate authentication; click the Free Trials button to the right or contact StrikeIron to get access) in order to get a response. You can then change parameter values for different addresses to get the corresponding responses. You can also try other methods within any of our Web services following the same form (changing the parameters to match the method, of course).
http://ws.strikeiron.com/NAAddressVerification6/NorthAmericanAddressVerificationService/NorthAmericanAddressVerification?LicenseInfo.RegisteredUser.UserID=***********&LicenseInfo.RegisteredUser.Password=******&NorthAmericanAddressVerification.AddressLine1=15501 Weston Parkway&NorthAmericanAddressVerification.AddressLine2=&NorthAmericanAddressVerification.CityStateOrProvinceZIPOrPostalCode=Cary NC&NorthAmericanAddressVerification.Country=US&NorthAmericanAddressVerification.Casing=UPPER
Because a REST call contains parameters including UserID and Password, we recommend that these parameters be stored in a non-viewable configuration file rather than in the actual Web page source, or hidden by some other means (within non-viewable server-side code or within a database, for example).
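One way to do this, sketched in Python: keep the UserID and Password in environment variables (or a server-side config file) and assemble the REST URL on the server, so credentials never appear in page source. The environment variable names here are assumptions; the endpoint and parameter names follow the example above:

```python
import os
from urllib.parse import urlencode

BASE = ("http://ws.strikeiron.com/NAAddressVerification6/"
        "NorthAmericanAddressVerificationService/NorthAmericanAddressVerification")

def build_url(address1: str, city_state_zip: str) -> str:
    """Assemble the REST call server-side; credentials come from the environment."""
    params = {
        # SI_USER / SI_PASSWORD are illustrative variable names.
        "LicenseInfo.RegisteredUser.UserID": os.environ.get("SI_USER", ""),
        "LicenseInfo.RegisteredUser.Password": os.environ.get("SI_PASSWORD", ""),
        "NorthAmericanAddressVerification.AddressLine1": address1,
        "NorthAmericanAddressVerification.CityStateOrProvinceZIPOrPostalCode": city_state_zip,
        "NorthAmericanAddressVerification.Country": "US",
    }
    return BASE + "?" + urlencode(params)

url = build_url("15501 Weston Parkway", "Cary NC")
```

As a bonus, `urlencode` also percent-encodes the parameter values (spaces and the like) properly, which the hand-typed browser example glosses over.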
Have a REST-related question? Contact us at email@example.com. Would you like a free trial? Contact us at firstname.lastname@example.org.
Over the years, we have learned that providing flexibility within our Cloud-based data delivery platform helps better serve (and therefore expand) our customer base. In order to support the broadest number of coding styles, Cloud environments, IDEs, and other software development tools, we have provided multiple ways to invoke our various Web services APIs. This includes use of multiple protocols, as well as multiple ways to utilize each protocol.
For example, when using the SOAP protocol to invoke a StrikeIron Web service, you have the option of passing authentication parameters (your UserID and Password) either within SOAP headers, or as parameters as part of the service data payload.
Some developers prefer SOAP headers because they allow the creation of reusable authentication code that can be leveraged across multiple StrikeIron services (all StrikeIron services share the same interface, authentication mechanisms, service behavior, and response codes). However, many IDEs do not support SOAP headers, so sending authentication by parameter, along with the rest of the data payload, is the only option in some cases.
Each authentication method requires a different service endpoint. The primary difference (other than the service call structure inside the XML, which is transparent in most cases) is the domain prefix. With North American Address Verification, for example, the SOAP Header endpoint is:
If you want to pass the UserID and Password as actual parameters to the service, the endpoint is as follows (note "wsparam" versus "ws"):
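The pattern can be captured in a small helper. The "ws" versus "wsparam" hostname convention follows the description above, while the exact service path would come from the individual service's documentation:

```python
# Choose the endpoint host based on where authentication travels:
# "ws" for SOAP Header authentication, "wsparam" for parameter-based.

def endpoint(service_path: str, header_auth: bool) -> str:
    prefix = "ws" if header_auth else "wsparam"
    return f"http://{prefix}.strikeiron.com/{service_path}"
```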
We also provide the ability to invoke our services using REST. Sometimes this is preferred, especially when services are being built into Websites using PHP, Ajax, and other scripting technologies. Here is an example of our Email Verification service using a REST call (if you have a StrikeIron UserID and Password, you can paste this code into a Web browser to invoke it):
Finally, if you prefer to securely transmit the data of a Web service invocation, you can simply replace "HTTP" in the service endpoint with "HTTPS" in the URL. This will encrypt the data appropriately as it travels from your application or Website to the StrikeIron platform and data centers, and then back again to your application or Website.
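Since the change is purely in the URL scheme, the swap can be sketched as a one-line helper (assuming endpoints of the form shown earlier):

```python
def secure(endpoint_url: str) -> str:
    """Swap the scheme from http to https, leaving the rest of the URL intact."""
    if endpoint_url.startswith("http://"):
        return "https://" + endpoint_url[len("http://"):]
    return endpoint_url  # already https (or another scheme); leave unchanged
```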
These authentication and invocation options have been in the greatest demand from our customer base to date, resulting in their various implementations. If there are other protocols or authentication mechanisms you would like to see us support and build into our delivery platform, please let us know in the comments below.
Cloud computing is growing at a fast pace and will continue to do so for quite some time. The Gartner Group, for example, has projected a tripling of the market in the next five years, and almost everyone else is projecting some level of super-charged growth in the space. Of course, this all depends on what you do or don't include in your definition of cloud computing (Google Apps, for example). As long as you are consistent in your own definition, the growth ought to be of a similar magnitude.
The reasons for this growth are the advantages that cloud computing provides, including faster deployment, smoother scalability, pay-for-what-you-use business models, and no capital expenditure on the hardware and software that comprise the architecture. Amazon, Microsoft, IBM, Google, Opsource, and Rackspace are all companies offering public cloud infrastructure for rent; a myriad of vendors, such as RightScale, have lined up to add layers of capabilities on top of these offerings; and the ecosystems that can take advantage of these architectures, such as StrikeIron's, are continuing to invest in the space as well. Unfortunately, Sun's promising efforts in this space have been discontinued by Oracle for one reason or another.
This public computing resource trend has been great for startups, because new companies can launch on cloud infrastructure "virtually" overnight, without the traditional costs tied to software, hardware, and the management of those resources, costs that have traditionally forced them to spend time seeking private funding. Reducing this startup "start friction" has in turn created a bubbling sea of innovation of late.
However, there has been more reluctance in the enterprise space to move to the "Cloud" because of worries about security and losing control when utilizing these public resources. There are some highly valued sets of data and mission-critical business processes that many organizations simply don't want to put in the hands of a third party.
As a result, many of these companies are now building out their own "private cloud" infrastructure that mirrors the public clouds in functionality. This "member-only" infrastructure can then be shared across business units and geographies in an effort to eliminate IT redundancy, reduce costs, and increase efficiency, just as public clouds do for the masses.
Because of this trend, many of the cloud infrastructure providers are now offering virtual private capabilities. For example, Amazon's Virtual Private Cloud (Amazon VPC) is an effort to provide a "hybrid" solution for enterprises building out a private cloud, allowing some public computing resources to be utilized where it makes sense to do so.
What's still not clear, though, is how much actual separation of data occurs on the public cloud servers themselves, leading some to dismiss the concept as an exercise in marketing, at least so far. However, the enterprise market for cloud computing is potentially huge, so I am expecting a lot more to occur in this space.
There are definitely solid cases to be made for both public and private clouds (as well as hybrid solutions), so my guess is that the two will co-exist for quite some time, and that the line separating them will remain somewhat blurred (as usual). The end result is that whatever route, or combination of routes, companies employ in the new age of the Cloud, these efforts will leave more resources available for actual innovation rather than infrastructure management and repetitive IT exercises, and that can only be good for us all, right?