StrikeIron Blog

Data Warehousing 2013: A Changing Landscape

The general premise of data warehousing hasn't changed much over the years. The idea is still to aggregate as much relevant data as possible from multiple sources, centralize it in a repository of some kind, catalog it, and then utilize it for reporting and analytics to make better business decisions. An effective data warehousing strategy seamlessly enables trend analysis, predictive analytics, forecasting, decision support, and just about anything else we now categorize under the umbrella of "data science."
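
As a toy illustration of that premise, the sketch below centralizes rows from two hypothetical sources into one table and runs a simple report over them; every name and number here is made up for illustration only.

```python
import sqlite3

# Toy "warehouse": centralize rows from two sources and report on them.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
crm_rows = [("east", 120.0), ("west", 80.0)]      # e.g., from a CRM system
billing_rows = [("east", 45.0), ("west", 60.0)]   # e.g., from a billing system
conn.executemany("INSERT INTO sales VALUES (?, ?)", crm_rows + billing_rows)

# Analytics over the aggregated data: total sales by region.
for region, total in conn.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region"):
    print(region, total)
```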

Running Informatica Cloud on Amazon Web Services

As you may know, StrikeIron is an Informatica Cloud partner. We recently won another customer account that will be using the StrikeIron Contact Record Verification suite to clean their records as they move between Salesforce.com, a proprietary marketing database, and Eloqua via Informatica Cloud. To help this customer get started, we wanted to be able to run Informatica Cloud on a Mac as well as have a test platform that was remotely accessible from anywhere.

Amazon's NoSQL and Database Evolution: What Can Be Learned

Late last week, Amazon released an update to its DynamoDB service, a fully managed NoSQL offering for efficiently handling extremely large amounts of data in Web-scale (generally meaning very high user volume) application environments. The DynamoDB offering was originally launched in beta back in January, so this is its first update since then.

The update is a "batch write/update" capability, enabling multiple data items to be written or updated in a single API call. The idea is to reduce Internet latency by minimizing round trips between the calling application and Amazon's various physical data stores. According to Amazon, the feature was added in response to requests on its developer forums.
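
To make that concrete, here is a minimal sketch of a batch write using the present-day boto3 Python SDK; the table name, key schema, and item contents are illustrative assumptions, not details from Amazon's announcement.

```python
import boto3

# Hypothetical table with a string hash key named "pk".
table = boto3.resource("dynamodb").Table("example-table")

# batch_writer() buffers puts and flushes them as BatchWriteItem calls
# of up to 25 items each, so many writes share one network round trip.
with table.batch_writer() as batch:
    for i in range(100):
        batch.put_item(Item={"pk": f"item-{i}", "payload": i})
```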

That this first update addresses what was already a key selling point of DynamoDB tells us that latency remains a significant challenge for cloud-based storage. After all, one of DynamoDB's key attributes at launch was speed and performance consistency, something its NoSQL precursor, SimpleDB, was unable to deliver, at least according to some developers and users who reported data retrieval response times running unacceptably into the minutes. That may also have been a primary reason for SimpleDB's lower adoption rates. Amazon is well aware of these performance challenges, hence the significance of its first DynamoDB update.

Another key tenet of DynamoDB is that it is a managed offering, meaning the details of data management requirements, such as moving data from one distributed data store to another, are completely abstracted away from the developer. This is great news, as the complexity of cloud environments was proving too challenging for many developers trying to leverage cloud storage capabilities. The masses were scratching their heads over how to overcome storage performance bottlenecks, attain replication, achieve consistent response latency, and handle other operations-related data management challenges when those tasks were in their purview. Management complexity will likely remain a major challenge for the many other NoSQL vendors and "big data" startups in this category that do not offer the same level of abstraction DynamoDB does. It will be interesting to see whether DynamoDB becomes a significant threat to many of these startups.

We learned this complexity-reduction lesson at StrikeIron within our own niche offerings as well. We saw much greater uptake of our simpler, more granular Web services APIs, such as email verification, address verification, and reverse address and telephone lookups offered as single, individual services, than of complex services with many different methods and capabilities. This proved true even when the more complex services provided more advanced power within a single API. In other words, simplified remote controls are probably still the best idea for maximizing television adoption, as initial confusion and frustration tend to be inversely proportional to the adoption of any technology.

Another interesting point is that this is the fifth class of database product offering in Amazon's portfolio. Along with DynamoDB, there is still the aforementioned SimpleDB, a schemaless NoSQL offering for "smaller" datasets. There is also the original S3 offering, with a simple Web service interface for storing, retrieving, and deleting data objects in a straightforward key/value format. Next, there is Amazon RDS for managed relational database capabilities that use traditional SQL to manipulate data and are more applicable to traditional applications. Finally, there are the various Amazon Machine Image (AMI) offerings on EC2 (Oracle, MySQL, etc.) for those who don't want a managed relational database and would rather have complete control over their instances (without having to run their own hardware) and the RDBMSs that run on them.
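
As a small illustration of the S3 model described above, here is a sketch of storing, retrieving, and deleting an object through its Web service interface using boto3; the bucket and key names are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Store, retrieve, and delete a value under a simple string key.
s3.put_object(Bucket="my-bucket", Key="customers/42.json",
              Body=b'{"name": "Ada Lovelace"}')
obj = s3.get_object(Bucket="my-bucket", Key="customers/42.json")
print(obj["Body"].read())
s3.delete_object(Bucket="my-bucket", Key="customers/42.json")
```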

This tells us that the world is far from one-size-fits-all cloud database management systems, and we can all expect to be operating in hybrid storage environments that will vary from application to application for quite some time to come. I suppose that's good news for those who make a living on the operations teams of information technology.

And along with each new database offering from Amazon comes a different business model. In the case of DynamoDB, for example, Amazon has introduced the concept of "read and write capacity units," where charges are based on a combination of usage frequency and physical data size. This suggests the business models are still far from optimal and will likely change again in the future; the major vendors are clearly still figuring it all out, and business model adjustments in the Cloud are not limited to Amazon.
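
For a sense of how those units surface to developers, here is a sketch of creating a table with provisioned read and write capacity via boto3; the table name, key schema, and throughput numbers are arbitrary illustrations, and actual pricing is not shown.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Charges scale with the provisioned throughput plus stored data size.
dynamodb.create_table(
    TableName="example-table",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 10, "WriteCapacityUnits": 5},
)
```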

In summary, following Amazon's database release timeline over the years yields some interesting information: speed and latency, reduction of complexity, hybrid compute and storage environments that will be with us for some time to come, and ever-changing business models are the primary focus of cloud vendors responding to the needs of their users. And as any innovator knows, the challenges are where the opportunities are.


OpenStack - Open Cloud Operating System Gaining Momentum

As the "Cloud" has evolved and matured from its roots the past few years, the alternatives for deploying a cloud-based solution have been almost entirely proprietary and commercial. They typically have required at least a credit card to even get started "renting" servers and storage that might be needed for only short periods of time and to achieve more flexible scalability models. With the success and momentum of OpenStack, an open source cloud operating system for deploying, hosting, and managing public and private clouds within a data center, this appears to be changing.

The OpenStack project, launched initially with code contributions from Rackspace and NASA, provides the software components for making cloud management functionality available from within any data center, including one's own, similar to what Amazon, VMware, Microsoft, and other cloud vendors now offer commercially. Deploying OpenStack enables cloud-based applications and systems utilizing virtual capacity to be launched without the run-time fees the current slate of vendors require, as all of the software is freely distributable and accessible.
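
By way of illustration, here is a sketch of launching a virtual server against an OpenStack deployment using the present-day openstacksdk Python library; the cloud name and resource identifiers below are placeholders, not values from any real deployment.

```python
import openstack

# "mycloud" refers to a hypothetical entry in clouds.yaml; the image,
# flavor, and network identifiers below are placeholders as well.
conn = openstack.connect(cloud="mycloud")

server = conn.compute.create_server(
    name="demo-server",
    image_id="IMAGE-UUID",
    flavor_id="FLAVOR-ID",
    networks=[{"uuid": "NETWORK-UUID"}],
)
server = conn.compute.wait_for_server(server)
print(server.status)  # "ACTIVE" once the instance is up
```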

At first glance, this seems an ideal way for larger enterprise IT organizations to offer traditional cloud functionality, such as virtual servers and storage, to constituents within the organization without the fear of vendor lock-in and ever-increasing vendor costs. This approach also provides access to implementation details and the ability to customize based on specialized needs - also important in many scenarios and something not typically or easily offered by the larger commercial vendors. So the benefits to those who find it appropriate to build and manage their own private cloud environments are clear.

However, Rackspace itself has just announced public cloud services built on OpenStack, and others are likely to follow in the not-too-distant future, leveraging community-developed innovation in scalability, performance, and high availability that might ultimately be difficult for any single proprietary vendor to match. This should enable public service providers, especially in niche markets, to proliferate as well.

Major high tech vendors are also backing and aligning with OpenStack. In addition to Rackspace and NASA, Deutsche Telekom, AT&T, IBM, Dell, Cisco, and Red Hat all have much to gain from OpenStack's success and have signed on as partners, code contributors, and sources of funding. Commercial distributions such as StackOps have already emerged, venture funding for OpenStack-oriented companies has begun, and events such as this week's OpenStack Design Summit and Conference in San Francisco are getting larger and selling out quickly.

All of the foundational pieces are in place for OpenStack to have quite a run toward its goal of becoming the universal cloud platform of the future and the leader of the "open era" of the Cloud. This is an exciting development for companies like StrikeIron and our cloud-based data-as-a-service and real-time customer data validation offerings: the data layer of the Cloud will become even more promising and fertile as OpenStack accelerates organizations toward easier adoption of cloud computing models and all of their benefits.

Don’t be an aaS

Much of cloud computing terminology is based on the notion of ‘as a Service’ (or ‘aaS’).

Enterprise Cloud Adoption Accelerates - Four Reasons Why

In a report last week, the Open Data Center Alliance said its members plan to triple their Cloud deployments in the next two years, according to a recent membership survey. This significantly outpaces the adoption forecasts from several different analyst firms and is another indicator of where the I.T. industry is headed.

Of course, there are different ways to measure Cloud adoption, and while adoption rates may always be debated, there is little question of the Cloud's growing significance in I.T. Some Cloud forecasts combine Infrastructure-as-a-Service (IaaS) with Software-as-a-Service (SaaS) and others keep them separate, but in either case the trend is upward.

So here are four primary reasons why this trend is occurring and likely to continue for a long time to come:

- Cost. When deploying to the Cloud, one only has to deploy the I.T. resources needed at any given time. Capacity can be added or reduced whenever necessary, so this cost-saving "elastic" approach handles usage spikes as well as growing resource demand over time. It's the difference between renting a server by the minute and committing to a two-year contract with a data center provider at maximum capacity requirements. The latter, traditional approach front-loads application costs and requires significant capital expenditure. These heavy up-front costs go away in pay-for-what-you-use Cloud scenarios, which is why many startups deploying to the Cloud are spending less on hardware and software than just a few years ago and getting up and running faster.

- Abstraction. Cloud deployments hide the details of hardware, bandwidth resourcing, underlying software, load management, and ongoing maintenance of the given platform. This frees up resources to focus on one's own business rather than endless architecture meetings and decisions - unnecessary for the large majority of applications. This is why Salesforce.com has found success: customers no longer have to deal with software upgrades for sales people, database choices, syncing data from laptops to servers, hardware deployment decisions, and so on. It's just easier in a Cloud SaaS model.

- Innovation. An organization can leverage the innovation and expertise of those who specialize in a given Cloud-based platform, such as the data-as-a-service offerings StrikeIron provides. This continual innovation can be leveraged as the platform becomes more advanced, without any effort from the organization's own resources. The platform improves daily, and these incremental improvements are put to use immediately for the benefit of customers, without company-wide software upgrades and rollouts; they're built in and essentially automatic in the Cloud model. Another example is Amazon's EC2, where an increasing number of new features and capabilities can be leveraged without application redeployment.

- Platform Independence. When deploying to the Cloud, many different types of devices and clients can leverage the application via APIs or other interfaces - PCs, tablets, smartphones, and other systems - as all communication between machines happens over the ubiquitous Web, available just about anywhere at any time (see the short sketch after this list). This makes interoperability easier, and the extensive "middleware" investments of the past to make things work together can be dramatically reduced. It is one of the primary reasons tablets such as the iPad have grown considerably in adoption now versus ten years ago - they work with the Cloud and can access a broad array of useful applications from just about anywhere.
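
As a minimal sketch of that last point: any HTTP-capable client, from a phone app to a server job, can consume the same Cloud API in a few lines. The endpoint URL below is hypothetical.

```python
import requests

# The same request works from a PC, a tablet app backend, or a phone;
# the endpoint is a made-up example.
response = requests.get("https://api.example.com/v1/status", timeout=10)
response.raise_for_status()
print(response.json())
```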

These benefits of the Cloud aren't going away, and this is why the adoption trend is accelerating upward.

Cloud Landscape: Cloud Databases Emerging Everywhere

2011 has been the year of the Cloud database. The idea of shared database resources and the abstraction of the underlying hardware seems to be catching on. Just as with Web and application servers, paying as you go and eliminating unused database resources, licenses, hardware, and all of the associated cost is proving attractive enough that the major vendors are betting on it in significant ways.

The recent excitement has not been limited to the fanfare around "big data" technologies. Lately, most of the major announcements have centered on the traditional relational, table-driven SQL environments that Web applications use far more widely than the key-value storage mechanisms "NoSQL" technology provides for Web-scale, data-intensive applications such as Facebook and Netflix.

Here are some of the new Cloud database offerings for 2011:

Salesforce.com has launched Database.com, enabling developers in other Cloud server environments, such as Amazon's EC2 and the Google App Engine, to utilize its database resources, not just users of Salesforce's CRM and Force.com platforms. You can also build applications in PHP or on the Android platform and utilize Database.com resources. The idea is to reach a broader set of developers and application types than just CRM-centric applications.

At Oracle Open World a couple of weeks ago, Oracle announced the Oracle Database Cloud Service, a hosted offering running Oracle's 11gR2 database platform, available in a monthly subscription model and accessible either via JDBC or its own REST API.

Earlier this month, Google announced Google Cloud SQL, a database service that will be available as part of its App Engine offering based on MySQL, complete with a Web-based administration panel.

Amazon, to complement its other Cloud services and heavily used EC2 infrastructure, has made the Amazon Relational Database Service (RDS) available to enable SQL capabilities from Cloud applications, giving you a choice of underlying database technology such as MySQL or Oracle (a connection sketch follows this list). It is currently in beta.

Microsoft also has its SQL Azure Cloud Database offering available in the Cloud, generally positioned as suited for applications that use the Microsoft stack for developers that will want to leverage some of the benefits of the Cloud.
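
Since offerings like Amazon RDS and Google Cloud SQL expose standard MySQL endpoints, connecting looks just like talking to any MySQL server. Here is a sketch using the PyMySQL driver; the hostname, credentials, and table name are all hypothetical.

```python
import pymysql

# The hostname below is a made-up RDS-style endpoint; credentials
# and the "customers" table are placeholders as well.
conn = pymysql.connect(
    host="mydb.abc123.us-east-1.rds.amazonaws.com",
    user="admin",
    password="REPLACE_ME",
    database="appdb",
)
with conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM customers")
    print(cur.fetchone())
conn.close()
```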

Current State of the Data Ecosystem - Data 2.0 Conference

I attended the Data 2.0 Conference this week in San Francisco. There is a lot to be excited about in this emerging, growing, and quickly accelerating industry. However, there are still some significant obstacles to overcome before the vision of the data-driven world and the “great data highway in the sky” is truly realized.

Intelligent Use of Amazon's Simple Email Service (SES) Using StrikeIron Email Verification

Amazon's new SES (Simple Email Service) product is a scalable, transaction-based offering for programmatically sending large amounts of email. This is accomplished using Amazon's Web-scale architecture and is especially suited to applications that already use EC2 (server rental) and S3 (storage rental). By utilizing SES, you are essentially leveraging the "Cloud" to send email from applications and Web sites rather than investing in your own software and hardware infrastructure to do so. As with most Cloud services, this substantially reduces cost and complexity: sending a message requires only a simple API call, with no network configuration or email server setup.
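
Putting the two together, here is a sketch of verifying an address before handing it to SES via boto3. The verification step is a stub standing in for a call to a service such as StrikeIron's Email Verification (its real endpoint and response format are not reproduced here), and every address and region below is a placeholder.

```python
import boto3

def is_deliverable(address: str) -> bool:
    """Stub for an email-verification API call (e.g., StrikeIron's).
    The real service's endpoint and response shape are assumptions not
    shown here, so this performs only a trivial stand-in check."""
    return "@" in address

ses = boto3.client("ses", region_name="us-east-1")
recipient = "user@example.com"  # placeholder address

# Only spend an SES send (and sender reputation) on verified addresses.
if is_deliverable(recipient):
    ses.send_email(
        Source="noreply@example.com",  # placeholder verified sender
        Destination={"ToAddresses": [recipient]},
        Message={
            "Subject": {"Data": "Welcome"},
            "Body": {"Text": {"Data": "Thanks for signing up."}},
        },
    )
```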

Cloud Companies' Share Price Increase Dramatic Versus Dow

The "Cloud" has been seeing a lot of momentum this past year, and one place where that is readily apparent is in the stock price of companies making major strategic investments in Cloud technology and associated offerings, as well as aggressive go-to-market plans with those offerings.
