StrikeIron Blog

Don’t be an aaS

Posted by Justin Helmig on Wed, Nov 16, 2011

Much of cloud computing terminology is based on the notion of ‘as a Service’ (or ‘aaS’).

Read More

Cloud Landscape: Cloud Databases Emerging Everywhere

Posted by Bob Brauer on Tue, Oct 25, 2011

2011 has been the year of the Cloud database. The idea of shared database resources and the abstraction of the underlying hardware seems to be catching on. Just as with Web and application servers, paying as you go and eliminating unused database resources, licenses, hardware, and all of the associated costs is proving to be an attractive enough business model that the major vendors are betting on it in significant ways.

The recent excitement has not been limited to the fanfare around "big data" technologies. Lately, most of the major announcements have centered on the traditional relational, table-driven SQL environments that Web applications use far more widely than the key-value data storage mechanisms "NoSQL" technologies employ for Web-scale, data-intensive applications such as Facebook and Netflix.
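To make the distinction concrete, here is a minimal sketch of the two access patterns, using an in-memory Map as a stand-in for a distributed key-value store (illustration only, not any particular product's API):

```java
import java.util.HashMap;
import java.util.Map;

public class KeyValueSketch {
    public static void main(String[] args) {
        // Key-value access: an opaque value is addressed directly by its key,
        // the pattern used by stores such as Cassandra or Dynamo.
        Map<String, String> store = new HashMap<>();
        store.put("user:42:email", "jane.doe@example.com");
        System.out.println(store.get("user:42:email"));

        // A relational database would instead answer a declarative query
        // over typed rows and columns, e.g.:
        //   SELECT email FROM users WHERE user_id = 42;
    }
}
```

The key-value form trades query flexibility for horizontal scalability; the relational form keeps rich querying but is harder to spread across many machines.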

Here are some of the new Cloud database offerings for 2011:

Salesforce.com has launched Database.com, enabling developers in other Cloud server environments, such as Amazon's EC2 and the Google App Engine, to utilize its database resources, not just users of Salesforce's CRM and Force.com platforms. You can also build applications in PHP or on the Android platform and utilize Database.com resources. The idea is to reach a broader set of developers and application types than just CRM-centric applications.

At Oracle OpenWorld a couple of weeks ago, Oracle announced the Oracle Database Cloud Service, a hosted offering running Oracle's 11gR2 database platform, available on a monthly subscription and accessible either via JDBC or its own REST API (see the connection sketch after this list).

Earlier this month, Google announced Google Cloud SQL, a database service that will be available as part of its App Engine offering based on MySQL, complete with a Web-based administration panel.

Amazon, to complement its other Cloud services and its heavily used EC2 infrastructure, has made the Amazon Relational Database Service (RDS) available to bring SQL capabilities to Cloud applications, giving you a choice of underlying database technology such as MySQL or Oracle. It is currently in beta.

Microsoft also offers its SQL Azure Cloud database, generally positioned for developers building applications on the Microsoft stack who want to leverage some of the benefits of the Cloud.
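Most of these offerings expose standard interfaces, so at the connection level switching between them is largely a matter of changing the JDBC driver and URL. Here is a minimal connection sketch; the hostnames and credentials are placeholders, and the appropriate JDBC driver (Oracle's, or MySQL Connector/J for the MySQL-backed services) is assumed to be on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CloudDbPing {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and credentials -- substitute the values
        // provisioned with your own cloud database subscription.
        String url = "jdbc:oracle:thin:@//db.example-cloud.com:1521/mydb";
        // For a MySQL-backed service such as Amazon RDS or Google Cloud SQL,
        // only the driver and URL change, e.g.:
        //   jdbc:mysql://mydb.example-cloud.com:3306/appdb
        try (Connection conn = DriverManager.getConnection(url, "demo_user", "demo_pass");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1 FROM dual")) {
            while (rs.next()) {
                System.out.println("Connected, got: " + rs.getInt(1));
            }
        }
    }
}
```

The real differences between these services lie in pricing, administration, and scaling behavior rather than in the wire-level plumbing.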

Read More

Public Cloud Versus Private Cloud - Public Comes With Experience and Expertise

Posted by Bob Brauer on Thu, Nov 11, 2010

The debates rage on about "Public Clouds" and "Private Clouds" and which is more appropriate for serious computing efforts, from business systems to applications of every kind.

Most vendors, not surprisingly, line up behind the approach that best suits their product offerings.

For example, SaaS vendors (Salesforce, NetSuite, SuccessFactors) say that multi-tenant applications are the Cloud, arguing that shared, multi-tenant software resources, including databases, are needed to make the Cloud truly useful. Yet many of these vendors are often criticized for not providing "open" models, so some long-term questions remain. Yes, these Clouds are easy to get into, but how do you get out of them if necessary?

The infrastructure-as-a-service crowd (Amazon's EC2, Google App Engine, Rackspace) will suggest that only infrastructure is the "true" Cloud: essentially renting clean servers by the minute and storage by the byte represents the original "open" Cloud vision, enabling applications to be moved from Cloud to Cloud without difficulty. However, this is just servers and storage in the end (at least for now), so users still have to build everything themselves. Fine for some, but not entirely useful for most.

And of course the enterprise software folks (Oracle, SAP, IBM) often claim that the Cloud can and should be "Private" because it is a better security model and can be managed within the organization. This lets them capitalize on the hype of the Cloud without having to change too much of their actual offerings. The challenge with this model, of course, is that without sharing licenses or hardware across organizations it becomes quite expensive, and frankly we have had this model before under other names such as "mainframe," "client-server," and other "in-house" architectures. Sure, there is some incremental innovation and usefulness, but it is not much different from what has always been offered, just another iteration.

So while there are valid use cases for each of the above scenarios, there is one thing I want to point out in Public versus Private Cloud discussions when businesses are unsure which route to take. It goes all the way back to the birth of the Cloud as a concept.

The reason we even have the Cloud in the first place is that heavily-trafficked Web sites such as Google and Amazon found they had to build massive, high performance, scalable systems to be able to handle the processing load at peak times (Amazon at Christmas for example). This meant that during non-peak times, they found themselves with lots of excess, unused computing capacity.

This of course spawned the idea that they could leverage this excess capacity, as well as their expertise in managing high-performance, distributed, "Web scale" computing technology as an additional line of revenue, and possibly launching a brand new industry of opportunities. Hence, the Cloud was born.

The one key piece of this Cloud concept is "expertise." This is something that you get in Public Cloud environments that you don't get in Private Clouds. With a Private Cloud, you get all of the hardware and software (and the corresponding purchased licenses) that you need, but you don't have a team of experts who have been running that platform for years, monitoring, managing, and supporting it in real time while you use it, with full visibility into it as it runs. By definition, you therefore don't have engineers supporting the success of your application systems on a minute-by-minute basis.

This real-time team of experts, and their associated expertise developed over time, is something you get inherently in the Public Cloud scenario. The folks who run these systems have as their core mission in life to keep the platform up and running, battle test it over time, improve it, enhance it, test it, analyze operational data, review performance charts, improve and enhance it again, and on and on, day after day.

Although a bit overused, the electric generator is a good way to demonstrate the difference. If you have your own electrical generator powering your home, it doesn't matter that thousands of other people have one just like it in their homes. If it goes down, you are on your own, and it's your responsibility to keep the electricity flowing from room to room. But if you plug into the electric grid run by your local power company, and there is an outage while you are having dinner somewhere, it will likely be fixed before you even get home from the restaurant. You might not even notice there was a problem, since you weren't at home (you were out dining in the "Dinner Cloud" and outsourcing the washing of dishes). This is because the system was monitored, the problem was detected, and a team was ready to spring into action the moment the outage occurred.

How long would it have taken to call the generator repairman and get him scheduled to come out after an outage in your own generator? There's a reason electricity grids have evolved the way they have.

Oh, and all of the innovation occurring behind the scenes at the power company on a day-to-day basis? It comes to you automatically, often while you sleep, as opposed to a giant new chunk of hardware arriving every 18-24 months that you then have to figure out how to configure and get up and running again.

So how is this relevant to StrikeIron?

Well, the same is also true in our case. While we are more the Software-as-a-Service variety of Cloud Computing (and in our case "Data-as-a-Service"), we recognize that users have a choice in how to obtain the type of functionality we offer. Many of the powerful capabilities we provide, such as our Cloud-managed Contact Record Verification Suite with real-time telephone, address, and email verification, could also be purchased and brought in-house as software applications and raw data sources, and a similar result could be achieved in terms of better, more usable customer data assets. The approach would just be a heck of a lot different.

In the latter scenario, all of the verification reference data would have to be managed and maintained internally. One would have to acquire the software and data files and then get the functionality up and running. It would then have to be designed and delivered in such a way as to handle the varying loads of data verification arriving from different applications at different times, often in high-volume scenarios. All of the other expertise around availability, testing, updating, and the usual effort associated with in-house solutions would have to be developed internally as well.

With us, all we do day in and day out is focus on delivering our real-time data verification capabilities to thousands of applications simultaneously, with a very high level of performance at all times, 24x7x365. All you need to do, just as with the electric company, is plug into us. All of the data management, updating, software maintenance, and performance testing and tuning is done by us, with the heavy lifting abstracted away from you.
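In practice, "plugging in" amounts to little more than a Web service call from your application. Here is a minimal sketch; the endpoint, parameter names, and license key below are hypothetical placeholders for illustration, not StrikeIron's actual service contract:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EmailVerifySketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint, parameters, and key -- placeholders only;
        // consult the service documentation for the real interface.
        String url = "https://api.example.com/contact/verify-email"
                + "?licenseKey=DEMO_KEY&email=jane.doe%40example.com";
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        // The response would carry a deliverability verdict and supporting detail.
        System.out.println(response.statusCode() + ": " + response.body());
    }
}
```

Everything behind that call, the reference data, the updates, the capacity planning, stays on our side of the wire.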

Since we launched our system in 2005, we have constantly improved our finely-tuned delivery and fault-tolerant capabilities, including load-balancing, high speed data I/O, redundancy, external monitoring, and everything else we have to provide to be able to support our customers and their production applications. And we are getting smarter and better about how we go about it every day. This expertise is something that each and every one of our customers gets to leverage with every single call to our system. This is why we have only had minutes of downtime over the last four years.

So could in-house solutions provide the same end result? Maybe, in the sense that you could end up with good, clean customer data somehow on your own. But at what cost, with what effort, and with what missed opportunities? Focus on your core business, and leave the external data verification effort to us. We will keep the lights on. Guaranteed.

Read More

Driving Factors of Cloud Computing Only Becoming More Emphatic

Posted by Bob Brauer on Tue, Aug 03, 2010

One thing that's clear as we pass the halfway point of 2010 is that the Cloud Computing movement is not only gaining momentum, but the Web usage trends driving it are increasing in influence and contributing to that momentum faster than ever.

For example, Facebook's Chief Technical Officer reported last month that they were serving as many as one million photos per second across their Web-based social application, and that they expect this to increase ten-fold over the next twelve months.

Also, how many of us watched some streaming World Cup soccer games over the past month as Spain proved supreme in South Africa? Or at least highlights on YouTube and various other video outlets? Currently, an estimated 50% of all Web traffic is video. That's not surprising, but with High Definition (HD) Web technology and the like emerging, video is expected to represent 90% of all traffic within just a few years. This will require bandwidth levels that were largely unthinkable not long ago.

On another front, mobile infrastructure is not keeping pace with demand. Right now, some estimates show mobile infrastructure requirements growing at about 50% per year, while actual mobile network capacity is growing at only 20% per year. This is going to be a real problem, and it is one reason carriers such as AT&T have begun capping usage and charging fees for premium levels of bandwidth that were standard issue until now; other carriers will likely follow suit. In their eyes, it's the only way to curtail demand to meet capacity.
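To see how quickly those two curves diverge, here is a quick back-of-the-envelope calculation compounding the growth rates cited above (roughly 50% annual demand growth versus 20% annual capacity growth):

```java
public class CapacityGap {
    public static void main(String[] args) {
        double demand = 1.0;   // mobile infrastructure requirements, normalized
        double capacity = 1.0; // mobile network capacity, normalized
        for (int year = 1; year <= 5; year++) {
            demand *= 1.5;     // ~50% growth per year
            capacity *= 1.2;   // ~20% growth per year
            System.out.printf("Year %d: demand %.2fx, capacity %.2fx, gap %.1fx%n",
                    year, demand, capacity, demand / capacity);
        }
    }
}
```

At those rates the gap roughly triples within five years, which is exactly the pressure behind usage caps and tiered pricing.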

So what does all of this mean?

One of the reasons we have Cloud Computing in the first place is that innovative Web companies such as Amazon and Google had to build out enough computing capacity to handle peak periods of Web traffic and activity, especially Amazon during its Christmas holiday crunch.

As a result, they not only became experts at building out distributed computing capacity, load balancing, and data synchronization, but also found that, most of the time, much of the computing power they had invested in for peak periods sat "shelved" and unused, far from cost-optimized. This led them to think of ways to monetize that excess capacity (servers and disk space lying idle) and spurred some of the early thinking and innovation around Web-based centralized computing. The same is true of Google and others with excess Web computing power, as they looked for ways to monetize large amounts of spare capacity and leverage their expertise at building out server farms and running highly distributed yet high-performing computing.

This same necessity-is-the-mother-of-invention phenomenon is playing out now as Facebook develops new technology to serve up its millions of photos per second, spawning new data storage and retrieval technology such as the NoSQL paradigm shift, with non-SQL and "not only SQL" architectures such as Cassandra, BigTable, Neptune, Google Fusion Tables, and Dynamo that are more finely tuned to the needs of Web-scale Cloud Computing.

In parallel, the bandwidth demands of video and mobile infrastructure are seeding new innovation around capacity and bandwidth distribution as well, including much more efficient and easier-to-implement elastic computing capabilities to handle these variable demands as much of mobile's required computing moves to the Web (which also makes smartphones ideal Cloud Computing clients, further pushing the paradigm).

Not only are these trends mind-boggling and exciting, they are the cornerstones of a revolution already in progress. All of this demand-driven innovation is driving more and more build-out of the foundation from which the future Internet and "Cloud" will emerge. A few years from now, we will look back and see how the Web computing demands of today, whether from Facebook, Google, Twitter, or others, enabled a whole new generation of Web applications to emerge. And of course, huge amounts of data will have been gobbled up in the process, a lot of it coming from StrikeIron's own data delivery innovation in the Cloud.

No doubt about it, the Cloud is a good place to be.

Read More

Private Clouds More Likely Option in the Enterprise?

Posted by Bob Brauer on Mon, Jun 21, 2010

Cloud computing is growing at a fast pace and will continue to do so for quite some time. The Gartner Group, for example, has projected a tripling of the market in the next five years, and almost everyone else is projecting some level of super-charged growth in the space. Of course, this all depends on what you include or don't include in your definition of cloud computing (Google Apps, for example). As long as you are consistent in your definition, the growth ought to be of a similar magnitude.

The reasons for this growth are the advantages that cloud computing provides, including faster deployment, smoother scalability, pay-for-what-you-use business models, and no capital expenditure on the hardware and software that comprise the architecture. Amazon, Microsoft, IBM, Google, OpSource, and Rackspace all offer public cloud infrastructure for rent; a myriad of vendors such as RightScale have lined up to add layers of capabilities on top of these offerings; and ecosystems that can take advantage of these architectures, such as StrikeIron's, continue to invest in the space as well. Unfortunately, Sun's promising efforts in this space have been discontinued by Oracle for one reason or another.

This public computing resource trend has been great for startups, because new companies can launch on cloud infrastructure virtually overnight, without the traditional costs tied to software, hardware, and the management of those resources, costs that traditionally forced them to spend time seeking private funding. Reducing this startup friction has in turn created a bubbling sea of innovation of late.

However, there has been more reluctance in the enterprise space to move to the "Cloud" because of worries about security and losing control when utilizing these public resources. There are some highly valued sets of data and mission-critical business processes that many organizations simply don't want to put in the hands of a third party.

As a result, many of these companies are now building out their own "private cloud" infrastructure that mirrors the public clouds in functionality. This "member-only" infrastructure can then be shared across business units and geographies in an effort to eliminate IT redundancy, reduce costs, and increase efficiency, just as public clouds do for the masses.

Because of this trend, many of the cloud infrastructure providers are now offering virtual private capabilities. Amazon's Virtual Private Cloud (Amazon VPC), for example, is an effort to provide a "hybrid" solution for enterprises building out a private cloud, where some public computing resources can be utilized where it makes sense to do so.

What's still not clear, though, is how much actual separation of data occurs on the public cloud servers themselves, leading some to dismiss the concept as an exercise in marketing, at least so far. However, the enterprise market for cloud computing is potentially huge, so I expect a lot more to happen in this space.

There definitely are solid cases to be made for both public and private clouds (as well as hybrid solutions), so my guess is the two will co-exist for quite some time, and the line separating them will remain somewhat blurred (as usual). The end result is that whatever route or combination of routes companies take in the new age of the Cloud, these efforts will leave more resources available for actual innovation rather than infrastructure management and repetitive IT exercises, and that can only be good for us all, right?

Read More
