Amazon and Google trade price cuts. Again
The incumbent public cloud champ and its wanna-be rival took turns cutting prices again last week.
Amazon Web Services sliced the price of Windows on-demand EC2 instances by 26 percent, though the exact price still depends on region. That move came within hours of Google cutting prices of most of its GCE instances by an average of 4 percent, a tidbit buried in larger news that Google is opening up access to Google Compute Engine to any customer willing to pay $400 a month for Google Gold Support. Because the AWS price cuts were for Windows, the move may have been aimed more at Microsoft Windows Azure than at Google, but why quibble? NetworkWorld has more, as does the Motley Fool.
ProfitBricks, another cloud contender, extended its scale-up vs. scale-out cloud pitch last week as well, making its biggest instance bigger. The new super-duper instance weighs in at 62 cores and 240GB of RAM, up from 48 cores and 196GB of RAM.
“By offering variable instance sizes, which now tip the scales at 62 cores and 240GB of RAM, ProfitBricks continues to define Cloud Computing 2.0. ProfitBricks customers can now run massive computational processes at a lower cost while taking advantage of better speed and performance. It also enables users of databases and big data software to scale their virtual servers vertically rather than horizontally.”
Talkin’ Cloud has more here.
When is AWS not the cheapest option? More often than you think
Over the past week, several conversations with tech vendors have come around to the fact that, when it comes to actual production workloads, the most cost-effective deployment model, repeated price cuts notwithstanding, is not AWS at all.
For example, the venerable analytics company SAS Institute, when it was testing its new visual analytics tool, did so on AWS because it couldn’t deploy its own hardware fast enough. But that lasted about a month. “Amazon was way too expensive, so we brought it in-house,” SAS CEO and founder Jim Goodnight told me in a recent interview. “Amazon doesn’t give it away for free,” he said.
Once companies start deploying higher-end services and running advanced analytics, other options are cheaper, Goodnight and his CMO and SVP Jim Davis told me. The two execs were on a nationwide road show to show off the company’s new visual analytics service, which will be widely available within months and will eventually be available from SAS’s own data centers or via private clouds, as the New York Times reported.
If a company uses the vendor’s new visual analytics applications for six months or more, it’s cheaper to run on SAS infrastructure rather than AWS, they said.
I was talking about this conversation last week with Buzzient CEO Timothy Jones, and he agreed wholeheartedly with that assessment: AWS is fine for getting going, but less than optimal on price for actual production use. AWS is a “honey pot,” he noted. “You can get in cheap but pretty soon it’s not very cheap at all.”
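The break-even arithmetic behind that six-month claim can be sketched in a few lines. All of the dollar figures below are illustrative assumptions, not SAS’s or Amazon’s actual pricing:

```python
import math

def break_even_month(cloud_monthly, inhouse_upfront, inhouse_monthly):
    """First whole month at which cumulative in-house cost (upfront
    buy-in plus monthly running cost) drops to or below cumulative
    pay-as-you-go cloud spend. Returns None if the cloud option
    stays cheaper forever."""
    savings_per_month = cloud_monthly - inhouse_monthly
    if savings_per_month <= 0:
        return None  # in-house never catches up
    return math.ceil(inhouse_upfront / savings_per_month)

# Hypothetical numbers: $20k/month on the public cloud vs. $90k of
# hardware plus $5k/month to run it in-house crosses over at month 6.
print(break_even_month(20_000, 90_000, 5_000))  # → 6
```

The shape of the curve is the point: the bigger the upfront hardware cost, the longer the cloud “honey pot” stays sweet, but any steady production workload eventually crosses the line.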
I would love to hear from readers in the comment field about specific scenarios when the AWS public cloud goes from being a great cradle for new applications to a less-than-optimal site to run them.
OpenStack crowd gears up for summit
The new OpenStack Grizzly release was ready for download last week, two weeks before the OpenStack Summit kicks off in Portland, Ore. This, the seventh OpenStack release, adds better support for VMware and Hyper-V hypervisors; support for multiple storage options; and some software-defined networking (SDN) perks.
As Lew Tucker, VP of cloud computing for Cisco, told InformationWeek, Grizzly’s updated Quantum component lets networking companies create applications that programmatically control the underlying network based on rules and policies.
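To make “programmatically control the network” concrete, here is a minimal sketch of assembling a call to the Quantum (later renamed Neutron) v2.0 REST API. The endpoint host and auth token are placeholder assumptions; a real deployment would obtain both from Keystone. Only the request is built here, so the sketch runs without a live OpenStack cloud:

```python
import json

# Hypothetical controller address; Quantum conventionally listens on 9696.
QUANTUM_ENDPOINT = "http://controller:9696"

def build_create_network_request(name, token, admin_state_up=True):
    """Assemble the pieces of a Quantum v2.0 'create network' call
    as a (method, url, headers, body) tuple, ready for any HTTP client."""
    url = f"{QUANTUM_ENDPOINT}/v2.0/networks"
    headers = {
        "X-Auth-Token": token,          # Keystone-issued token (placeholder here)
        "Content-Type": "application/json",
    }
    body = json.dumps({"network": {"name": name,
                                   "admin_state_up": admin_state_up}})
    return ("POST", url, headers, body)

method, url, headers, body = build_create_network_request("demo-net", "TOKEN")
print(method, url)  # → POST http://controller:9696/v2.0/networks
```

A policy-driven application of the sort Tucker describes would generate requests like this in response to events, say, spinning up an isolated network whenever a new tenant is provisioned, rather than waiting for an operator to click through a console.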
Photo courtesy of Shutterstock user Brian A Jackson