The general premise of data warehousing hasn't changed much over the years. The idea is still to aggregate as much relevant data as possible from multiple sources, centralize it in a repository of some kind, catalog it, and then utilize it for reporting and analytics to make better business decisions. An effective data warehousing strategy seamlessly enables trend analysis, predictive analytics, forecasting, decision support, and just about anything else we now categorize under the umbrella of "data science."
The premise hasn't changed; rather, it is the nature of the data sources the warehouse must draw from to capture useful information that has shifted. It's the data that's changed, not the goal.
First, there is the rapid proliferation of social-generated data in all of its unstructured forms, making the extraction and transformation stages of loading the warehouse more difficult than in the past. But this isn't really groundbreaking for 2013; social data, and the various Big Data technologies its growth has spawned, such as Hadoop, have been emerging for several years now.
Instead, what will likely be significantly different in 2013 is the accelerating deployment of a multitude of SaaS applications within the enterprise, especially in the larger, often slower-to-adopt companies that populate the Fortune 2000. As deal sizes grow, the SaaS footprint is clearly becoming significantly bigger.
This is where it becomes interesting. It's not just that an organization has several different SaaS applications such as Salesforce, Workday, and SuccessFactors in place across the enterprise, with a single instance of each shared by all. Because these applications are so easy to adopt, many of them have come in through the back door, department by department and at different times, rather than through a centralized, IT-controlled rollout. As a result, multiple instances of the same application are popping up everywhere.
For example, there are large enterprises that now have 10, 20, or even 50+ instances of Salesforce running across the organization. Each instance has its own customizations of data collection and storage, separate add-on applications installed, different data feeding those applications, and a unique implementation approach. In other words, the industry may be solving old problems while creating new ones.
This raises some questions. What kind of data collection and ETL challenges will this create for those pursuing a data warehousing strategy? Does the fact that the operational data from these SaaS applications is stored and maintained by different vendors, each incentivized to keep it that way, make things easier or harder for data warehousing and the analysis it enables? Will data fragmentation and the resulting data integration strategies scale across all of these SaaS instances? It will be interesting to see how organizations meet the "SaaS sprawl" challenge, especially as it relates to cross-enterprise data collection strategy.
Furthermore, SaaS applications have taken an ever-increasing hold on the enterprise of late, with larger and larger deals. With the Cloud and SaaS a major part of their 2013 strategies, Oracle, SAP, IBM, and the other traditional software vendors have taken notice. SAP's Business ByDesign, Oracle's Fusion Applications, and recent SaaS acquisitions will surely add to what could become a hodgepodge of SaaS applications across the enterprise.
To meet these challenges, cloud data warehousing offerings from companies like BitYota and Amazon's Redshift are beginning to emerge, with the cloud as the centralized data storage repository as their core theme. ETL and data integration solutions such as Informatica Cloud and Dell Boomi are racing to meet traditional data warehousing requirements in the cloud paradigm. The traditional data cleansing requirements of data warehousing are likewise being met by cloud-based counterparts, for better, more usable data in these new-age warehouses. One thing that will never change: bad data will always equal bad analysis, and the need to invest in data quality strategies will continue.
As the landscape of SaaS continues its rapid expansion, and the data within these applications continues to burgeon, 2013 will definitely be a pivotal year in the dawn of a new class of data warehousing technologies.
Organizations collect leads in many different ways. Leads come in when prospects respond to offers, fill out forms on websites, or send email inquiries. They can also arrive through word of mouth or other lead-capturing tools.
Once a database of leads has been created, what can be done to improve the value of this asset that helps drive all future sales? How can the data be leveraged to optimize the time of the sales organization before they receive these leads?
There are three main ways to improve the value of lead data:
The first is to eliminate bogus or phantom leads. One way to do this is to validate that each lead's email address exists. This not only removes obviously fake email addresses, but also filters out addresses at domains that are no longer operational or cannot accept email. Other data, such as phone numbers or mailing addresses, can also be used for lead validation. Filtering like this keeps the sales organization from wasting valuable time on phantom leads, reduces marketing campaign spend, and helps keep an organization off of spam lists.
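As a minimal illustration of this kind of filtering, here is a Python sketch that drops leads whose email addresses fail even a basic syntax check. A production validator would go much further (domain and mailbox checks), and the field names here are hypothetical:

```python
import re

# Hypothetical sketch: weed out leads whose email addresses are
# syntactically invalid before they ever reach the sales team.
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+(\.[\w-]+)+$")

def filter_phantom_leads(leads):
    """Return only leads whose 'email' field passes a basic syntax check."""
    return [lead for lead in leads if EMAIL_RE.match(lead.get("email", ""))]

leads = [
    {"name": "Ada Lovelace", "email": "ada@example.com"},
    {"name": "Bogus Entry", "email": "not-an-email"},
]
print(filter_phantom_leads(leads))  # keeps only the first lead
```

Syntax checking alone is only the first layer; the existence checks described above require looking up the domain and mailbox as well.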
Second, improving the quality of lead data is also important. This means ensuring that phone numbers are correct, that addresses are accurate and free of typos, that missing information like zip codes or area codes is added, and that anything else requiring members of the sales team to waste time chasing down a lead is corrected. Incomplete or inaccurate information, and the time lost to it, hinders a sales force from understanding and meeting the needs of actual prospects.
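To make the cleanup idea concrete, here is an illustrative sketch (not StrikeIron's actual logic) that normalizes US phone numbers to bare 10-digit strings and flags incomplete ones for review:

```python
import re

# Illustrative only: normalize US phone numbers so reps aren't
# chasing malformed or incomplete entries.
def normalize_us_phone(raw):
    digits = re.sub(r"\D", "", raw)        # drop punctuation and spaces
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]                # strip a leading country code
    return digits if len(digits) == 10 else None  # None flags it for review

print(normalize_us_phone("(919) 467-4545"))   # "9194674545"
print(normalize_us_phone("+1 919-467-4545"))  # "9194674545"
print(normalize_us_phone("467-4545"))         # None -> incomplete record
```

The same pattern (canonicalize, then flag anything that can't be canonicalized) applies to addresses and zip codes.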
Third and finally, appending additional information to each lead provides data points that can be used for segmentation, personalization, localization, and other forms of targeted marketing. For example, appending latitude and longitude data can help with market visualization and opportunity identification. Combining that data with census and other demographic information helps score leads so sales can prioritize opportunities.
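Here is a hypothetical sketch of the enrichment step: the small zip-to-coordinate table stands in for a real geocoding service, and leads are scored by rough distance to a sales office (all names and numbers are made up):

```python
import math

# Hypothetical enrichment: append coordinates from a lookup (a stand-in
# for a geocoding service) and score leads by distance to the nearest office.
GEOCODE = {"27560": (35.86, -78.83), "10001": (40.75, -73.99)}  # zip -> lat/lon
OFFICES = [(35.78, -78.64)]  # e.g. Raleigh, NC

def enrich_and_score(lead):
    lat, lon = GEOCODE[lead["zip"]]
    lead["lat"], lead["lon"] = lat, lon
    # crude flat-earth distance in degrees; fine for ranking near vs. far
    lead["distance"] = min(math.hypot(lat - olat, lon - olon)
                           for olat, olon in OFFICES)
    return lead

lead = enrich_and_score({"company": "Acme", "zip": "27560"})
print(round(lead["distance"], 2))  # 0.21
```

In practice the appended demographics would feed a richer scoring model; the point is that enrichment turns a bare contact record into something sales can prioritize.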
Sales leads are the lifeblood of revenue generation. Smart organizations recognize this and treat them as such.
In a report last week, the Open Data Center Alliance announced that its members plan to triple Cloud deployments over the next two years, according to a recent membership survey. This significantly outpaces the adoption forecasts from several analyst firms and is another indicator of where the I.T. industry is headed.
Of course, there are different ways to measure Cloud adoption, and while adoption rates may always be debated, there is little question of the Cloud's growing significance in I.T. Some Cloud forecasts combine Infrastructure-as-a-Service (IAAS) with Software-as-a-Service (SAAS) and others keep them separate, but in either case the trend is upward.
So here are four primary reasons why this trend is occurring and likely to continue for a long time to come:
- Cost. When deploying to the Cloud, one only has to deploy the I.T. resources needed at any given time. Capacity can be added or reduced whenever necessary. This elastic, cost-saving approach handles usage spikes as well as growing resource demand over time. It's the difference between renting a server by the minute and committing to a two-year contract with a data center provider sized for maximum capacity. The latter, traditional approach front-loads application costs and requires significant capital expenditure. These heavy up-front costs go away in pay-for-what-you-use Cloud scenarios, which also make it cheaper to get things up and running. Many startups deploying to the Cloud are spending less on hardware and software than they would have just a few years ago, and launching faster.
- Abstraction. Cloud deployments hide the details of the hardware, bandwidth resourcing, underlying software, load management, and ongoing maintenance of the given platform. This frees up resources to focus on one's own business rather than endless architecture meetings and decisions - unnecessary for a large majority of applications. This is why Salesforce.com has found success. Customers no longer have to deal with software upgrades for sales people, database choices, syncing data from laptops to servers, hardware deployment decisions, etc. It's just easier in a Cloud SAAS model.
- Innovation. An organization can leverage the innovation and expertise of those who specialize in a given Cloud-based platform, such as the data-as-a-service offerings StrikeIron provides. This continual innovation can be leveraged as the platform advances, without any effort from the organization's own resources. The platform improves daily, and these incremental improvements benefit customers immediately, without company-wide software upgrades and rollouts. Instead, they are built-in and essentially automatic in the Cloud model. Another example is Amazon's EC2, where an increasing number of new features and capabilities can be leveraged without application redeployment.
- Platform Independence. When deploying to the Cloud, many different types of devices and clients, from PCs and tablets to smart phones and other systems, can leverage the application via APIs or other interfaces, because all machine-to-machine communication happens over the ubiquitous Web, available just about anytime, anywhere. This makes interoperability easier, and the extensive "middleware" investments of the past to make things work together can be dramatically reduced. It's one of the primary reasons tablets such as the iPad have grown so much in adoption now versus ten years ago: they work with the Cloud and can access a broad array of useful applications from just about anywhere.
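The cost bullet above can be sketched with some back-of-the-envelope arithmetic; all prices and loads below are made-up illustrative numbers:

```python
# Pay-per-hour capacity that tracks demand versus a fixed contract
# sized for peak load. Rates and demand are hypothetical.
HOURLY_RATE = 0.10                    # $ per server-hour
hourly_demand = [2] * 20 + [10] * 4   # 20 quiet hours, then a 4-hour spike

# Elastic: pay only for the servers actually in use each hour.
cloud_cost = sum(h * HOURLY_RATE for h in hourly_demand)

# Traditional: provision for the peak around the clock.
peak_provisioned_cost = max(hourly_demand) * len(hourly_demand) * HOURLY_RATE

print(f"elastic: ${cloud_cost:.2f}, peak-provisioned: ${peak_provisioned_cost:.2f}")
```

Even in this toy day, peak provisioning costs three times the elastic approach, and the gap widens as spikes get rarer and sharper.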
These benefits of the Cloud aren't going away, and this is why the adoption trend is accelerating upward.
2011 has been the year of the Cloud database. The idea of shared database resources and the abstraction of the underlying hardware seems to be catching on. Just as with Web and application servers, paying as you go and eliminating unused database resources, licenses, hardware, and all the associated cost is proving attractive enough that the major vendors are betting on it in significant ways.
The recent excitement has not been limited to the fanfare around "big data" technologies. Lately, most of the major announcements have centered on the traditional relational, table-driven SQL environments that Web applications use far more widely than the key-value storage mechanisms "NoSQL" technology provides for Web-scale, data-intensive applications such as Facebook and Netflix.
Here are some of the new Cloud database offerings for 2011:
Salesforce.com has launched Database.com, enabling developers in other Cloud server environments, such as Amazon's EC2 and the Google App Engine, to utilize its database resources, not just users of Salesforce's CRM and Force.com platforms. You can also build applications in PHP or on the Android platform and utilize Database.com resources. The idea is to reach a broader set of developers and application types than just CRM-centric applications.
At Oracle Open World a couple of weeks ago, Oracle announced the Oracle Database Cloud Service, a hosted database offering running Oracle's 11gR2 database platform available in a monthly subscription model, accessible either via JDBC or its own REST API.
Earlier this month, Google announced Google Cloud SQL, a database service that will be available as part of its App Engine offering based on MySQL, complete with a Web-based administration panel.
Amazon, to complement its other Cloud services and highly used EC2 infrastructure, has made the Amazon Relational Database Service (RDS) available to enable SQL capabilities from Cloud applications, giving you a choice of underlying database technology to use such as MySQL or Oracle. It is currently in beta.
Microsoft also has its SQL Azure Cloud database offering, generally positioned for applications built on the Microsoft stack by developers who want to leverage some of the benefits of the Cloud.
Some of the above offerings have only been announced and not yet launched, or have limited preview access available now. In some cases the business models have not been completely divulged, and where they have, they are likely to change.
Clearly there is a considerable market-share land grab underway. All of the major vendors recognize that traditional-SQL Cloud storage infrastructure will be an important technology going forward. Adding a solid database layer to the Cloud architecture story seems like an important step in the continuing enterprise and commercial software move to the Cloud, and these new vendor offerings should in turn accelerate that move.
So, is this really the wave of the future? Some of the major questions that will have to be answered involve latency. When data requests have to hop from a client application to the application server, then to the database, and back to the server and client, even multiple times within a single request, it can result in quite a performance hit. These machines may be geographically far apart, which can really slow things down and annoy end-users with slow page loads. This is probably why most infrastructure providers realize they need corresponding database capabilities available and accessed natively to reduce this latency. Still, performance, along with security issues (perceived or otherwise), could be a significant barrier to mainstream adoption.
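Some rough arithmetic shows why the hop count matters. The round-trip times below are illustrative, not measurements:

```python
# Per-page latency when the app server and database are co-located
# versus in different regions. All millisecond figures are hypothetical.
def page_latency_ms(db_round_trips, client_to_app_ms, app_to_db_ms):
    # one client<->app round trip plus N app<->db round trips per page
    return client_to_app_ms + db_round_trips * app_to_db_ms

same_region  = page_latency_ms(db_round_trips=8, client_to_app_ms=60, app_to_db_ms=2)
cross_region = page_latency_ms(db_round_trips=8, client_to_app_ms=60, app_to_db_ms=80)
print(same_region, cross_region)  # 76 vs 700 ms for the same page
```

The database round trips dominate as soon as they cross a region boundary, which is exactly why providers want the database layer hosted next to the application tier.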
Also, most of the relational database environments in the Cloud support only a subset of SQL, and in some cases a quite limited one. For example, many of these Cloud SQL platforms don't support cross-table joins, at least not yet, even though joins are a very common requirement for SQL applications. The lack of support is primarily because joins can consume a lot of resources, another performance killer in shared environments.
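When the platform can't join across tables, the application has to do it in code after fetching each table separately. A minimal hash-join sketch over two result sets:

```python
# Client-side join: fetch customers and orders as separate result sets,
# then stitch them together in application code.
customers = [{"id": 1, "name": "Acme"}, {"id": 2, "name": "Globex"}]
orders = [{"cust_id": 1, "total": 250}, {"cust_id": 1, "total": 100},
          {"cust_id": 2, "total": 75}]

by_id = {c["id"]: c for c in customers}           # build side of the hash join
joined = [{"name": by_id[o["cust_id"]]["name"], "total": o["total"]}
          for o in orders]                         # probe side

print(joined)
```

This works, but it moves both tables over the network and burns application CPU on work the database engine would normally do, which is precisely the trade-off the missing join support forces.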
Once most of this storage and Cloud database infrastructure is in place, however, incorporating more content-oriented data services such as customer data verification will become commonplace and easy to leverage. We may even see such services incorporated into the database offerings themselves as vendors look to differentiate. Cloud-based database offerings have the advantage of making much larger libraries of data-oriented add-on capabilities available right out of the box, so the story here is much more than just cost.
While SQL Cloud offering announcements are all the rage in 2011, 2012 will undoubtedly tell the adoption tale. No doubt these offerings will be ideal and cost-effective for many use cases out there. But will demand be large enough quickly enough to support all of these vendors and drive the innovation at a speed that will make these platforms viable in the near future for enterprise and commercial applications? The answer is likely yes, but the next twelve months or so will give us a lot of the supporting data to measure the extent of the trend.
A lead, as we all know, is an individual or company that has indicated some level of interest in potentially buying a product or service. This is an important first stage categorization of any sales process.
Technically, a lead is represented by data, and the potential value of that lead is dependent on the level of accuracy, how current the contact information is, and the comprehensiveness of the data that constitutes the lead. In other words, a lead is only as valuable as the data that comprises it within a CRM application or other sales-driving system.
Fortunately, lead value ROI is generally easy to measure as a function of the sales line. Over time, one can see that the most accurate, complete, and timely leads are likely to drive the most revenue and have the greatest value to the sales organization. Therefore, investing in the quality of lead data can have a measurable and rewarding payoff.
This is one of the primary reasons many organizations come to StrikeIron: to enhance the value of their lead data. Our real-time customer data quality offerings can validate email addresses, physical addresses, and phone numbers to ensure a lead is accurate, and can even fill in missing or correct inaccurate contact data where possible.
We can also add a sizable list of additional data points, including company demographics, residential information (from sources such as the Census Bureau), latitude and longitude coordinates, and other data points that add value to the lead and optimize it for a sales organization.
Best of all, our easy-to-integrate APIs are available out in the Cloud (meaning no reference data maintenance and a flexible subscription model), making it easy to plug into just about any CRM, marketing automation, or other lead management system via the Internet.
LeadsCon attendees sit squarely in the sweet spot of need for what we offer, and we have found much success with this conference in the past. As a result, we will have a sizable contingent of folks at the event in New York City this week.
Drop by and say hello (booth 202) and learn more about how to rev up the value of leads. Oh, and by the way, we will be giving away some iPad 2s. Hope to see you there!
StrikeIron's Global Address Verification Web Service validates addresses for existence and accuracy in over 240 countries around the world. It is a very useful, easy-to-integrate API for checking shipping addresses, validating customer data, verifying identity, and several other use cases where address accuracy is important. A simple SOAP or REST call from a Website, business process, application, or mobile device (a single line of code in most development environments) is all that is needed to invoke the service and verify a global address via the Cloud. We take care of all the underlying reference data updates, so the most recent, accurate data is in use at all times.
One of the primary methods for validating a global address is "basic" verification. It requires only an address line, a country-specific locality line (such as city, city & state, city & province, etc.), and the country. Using that information, the appropriate country-specific postal reference data will be utilized to provide the result.
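As a sketch of what a "basic" verification request might look like, here is a Python snippet that builds the call from those three inputs. The endpoint URL and parameter names are illustrative assumptions, not StrikeIron's published API:

```python
from urllib.parse import urlencode

# Hypothetical request builder; the base URL and parameter names
# are illustrative, not the actual published API.
def build_verify_url(address_line, locality_line, country,
                     base="https://api.example.com/GlobalAddressVerification"):
    params = urlencode({"AddressLine": address_line,
                        "LocalityLine": locality_line,
                        "Country": country})
    return f"{base}?{params}"

url = build_verify_url("10 Downing Street", "London", "United Kingdom")
print(url)
# A real client would now issue an HTTP GET and parse the returned
# status code, postal code, and formatted address line.
```

The country value determines which country-specific postal reference data set the service consults on the server side.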
Just for demonstration (and fun), here are a few sample validations of some famous addresses around the world:
In each case, the three parameters are used to validate the existence of the address, provide the necessary postal codes, and also provide a formatted address line which could be used for mail delivery.
There are also some more advanced functions which provide the ability to localize an address for a given country ("Germany" versus "Deutschland" and "Milan" versus "Milano" as examples), perform in batch, and provide more advanced result codes such as confidence levels.
Here is a list of supported countries (and corresponding postal reference data granularity).
If you would like to see how it works with your data, please contact us at email@example.com (or call us, +1-919-467-4545) for a trial. It's also available fully integrated into Salesforce.com.
The "Cloud" has been seeing a lot of momentum this past year, and one place where that is readily apparent is in the stock price of companies making major strategic investments in Cloud technology and associated offerings, as well as aggressive go-to-market plans with those offerings.
To demonstrate this, take a look at the one-year stock price increase of eight major Cloud vendors versus the Dow Jones Industrial Average. These eight growth companies were selected for their software-as-a-service (SAAS) or infrastructure-as-a-service (IAAS) focus: Informatica (INFA), Salesforce.com (CRM), Amazon (AMZN), Netsuite (N), Rackspace (RAX), SuccessFactors (SFSF), Akamai (AKAM), and VMware (VMW). These securities have seen an average 81% price increase over the past year, versus a paltry 6% gain for the Dow Jones Industrial Average (which at least has gone up).
Will it continue? There is still a long way to go in this space, so probably so.
StrikeIron offers an Email Verification Web Service (easily integrated into applications, business processes, and Web sites via SOAP and REST) that identifies whether or not an email address is valid, meaning it exists and can receive email, without actually sending an email.
We use a series of "secret sauce" algorithms running in parallel across the Web, including domain name existence checking, MX record analysis, SMTP conversations, and several other redundant checks, to make this work.
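A much-simplified, hypothetical version of this layered approach is sketched below. The real service adds MX-record analysis and live SMTP conversations; here the domain check is injected as a function so the pipeline can run without network access:

```python
import re

# Hypothetical layered email check: syntax first, then domain existence.
# The resolver is injected; a real one would do DNS/MX lookups and SMTP.
EMAIL_RE = re.compile(r"^[\w.+-]+@([\w-]+(?:\.[\w-]+)+)$")

def verify_email(address, domain_exists):
    m = EMAIL_RE.match(address)
    if not m:
        return "invalid_syntax"
    if not domain_exists(m.group(1)):
        return "dead_domain"
    return "probably_deliverable"   # a real check would talk SMTP next

known = {"example.com"}
resolver = lambda d: d in known     # stand-in for a DNS lookup
print(verify_email("jane@example.com", resolver))
print(verify_email("jane@no-such-domain.zzz", resolver))
```

Running the cheap checks first means the expensive SMTP conversation only happens for addresses that have already passed the earlier layers.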
Email verification/validation is important not only to filter out bogus email addresses, but also to stop sending email (such as newsletters) to email addresses that no longer exist (typically because someone no longer works at a company).
There is a broad range of benefits to this kind of capability: staying off spam lists by not mass-emailing non-working addresses, optimizing the sales team's time by filtering out phantom leads, verifying identity, and triggering outreach to contact lists when an invalid email address indicates that a current contact is no longer with a company. This last one can be important for several departments, including accounting/collections, sales, and marketing.
We have also integrated this capability into Salesforce.com to make the email validation process within Salesforce seamless and very simple to turn on.
A simple concept, with a complex behind-the-scenes process, can be a big help on the path to success.
StrikeIron is going to be sending a large contingent of team members out to the Salesforce.com Dreamforce event December 6th-9th at the Moscone Center in San Francisco. It is being billed by Salesforce as the "Cloud Computing Event of the Year".
We will be showcasing our native Force.com applications, where we have seamlessly integrated several of our data verification offerings into the Salesforce.com CRM platform, including address verification, email verification, and the Do Not Call list (checking in real-time for outbound compliance).
We will also be showing our Informatica Cloud Contact Record Verification plug-in, which validates and enhances data as it is loaded into Salesforce.com from various sources (daily lead loads, for example). This can provide dramatically better data quality within Salesforce; poor data quality is often cited as the #1 problem with CRM ROI.
And then of course we have several other data-as-a-service and data verification offerings that are easy to integrate into any application. While the underlying technology for cloud-based name, address, email, and telephone verification is the same, there are of course many cases where you would want to do this outside of Salesforce, but still to the benefit of CRM and other applications.
We will have engineering (including our CTO), marketing, and business development folks (including myself) available for anyone who wants to explore our technology, ask questions, and discuss partnership opportunities.
We hope to see you there!
CRM systems are only as good as the data within them. That's especially true with Salesforce.com and their CRM solution products, including their premier sales force automation product Sales Cloud 2 and their call center solution Service Cloud 2.
For example, imagine how much more effective a sales team member can be with complete, accurate, and up-to-date information on the contacts he or she is about to reach out to. Optimizing the time and effort of the sales team can pay significant dividends to the bottom line.
Also, there is nothing more frustrating than typing a well-thought-out email only to have it returned as undeliverable fifteen minutes later. Bad mailing addresses and bad phone numbers can be just as agonizing.
When marketing to the gold mine of contacts in your CRM system, the quality of the data can be the difference between a successful campaign and a failed one. No matter how good the message or offer is, if it's not reaching the targets, it's not going to do well.
Because of this, a couple of years ago we integrated several of our data-as-a-service Web APIs into Salesforce.com, optimizing them for use within the platform. We are gradually improving them over time as customers put them to use in a broad range of industries.
The four services that are currently integrated, customized for the Force.com platform, and available on the Salesforce.com AppExchange include:
US Address Verification - ensuring accurate, validated addresses within your Salesforce system (including adding additional data like county and latitude and longitude)
Global Address Verification - includes address verification for hundreds of countries around the world
Email Address Verification - ensuring email addresses within Salesforce are valid and deliverable (without having to send an actual email)
Do Not Call List Compliance - checks phone numbers against the federal Do Not Call list to ensure compliance and avoid FTC fines