On The Web

This is a good blog post from Gartner analyst Alessandro Perilli about some of the problems facing vendors selling OpenStack as private-cloud software. You should read it. My two cents: If OpenStack vendors really are at a loss for how to describe their products, perhaps they should look at how the Hadoop market has been able to (seemingly) thrive thanks to a strong community and clear product visions among the vendors involved, beyond the open source code.

Upcoming Events

In Brief

The days of the cold call might be gone for salespeople. Actually, the days of the not-too-promising call might soon be gone, too. On Tuesday, a company called InsideSales introduced a new capability that infuses neural network technology (the basis of deep learning) into its products to help identify the best leads and even the best ways to approach them. Indeed, scoring sales leads is becoming the new black. We recently covered a company called Infer that delivers a similar service, and companies such as Intel are even doing some of this internally.
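InsideSales hasn't published its model details, but the general idea of lead scoring is straightforward: turn each lead's signals into a propensity score and rank the call list by it. A minimal sketch, with made-up features and weights (none of this is InsideSales' actual model):

```python
import math

def score_lead(features, weights, bias=0.0):
    """Return a 0-to-1 propensity score via a logistic model.

    `features` and `weights` are parallel lists of numbers;
    the values used below are purely illustrative.
    """
    z = bias + sum(f * w for f, w in zip(features, weights))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features: [log company size, site visits, email replies]
weights = [0.4, 0.9, 1.5]

leads = {
    "lead_a": [2.0, 5.0, 1.0],   # engaged prospect
    "lead_b": [1.0, 0.0, 0.0],   # cold prospect
}

# Rank leads so reps call the highest-scoring ones first.
ranked = sorted(leads, key=lambda k: score_lead(leads[k], weights),
                reverse=True)
```

A real system would learn the weights from historical win/loss data rather than hand-setting them; a neural network replaces the single linear layer here with several nonlinear ones, but the output is still a score to sort by.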

On The Web

This post from the New York Times’ Open blog talks about the architecture and algorithms underpinning its content-personalization engine. Its experience speaks to some larger trends around companies moving from batch to stream processing and to cloud services overall. The Times’ recommendation engine used to rely on MapReduce jobs that ran every 15 minutes, but now relies on a homegrown real-time system. It used to run on Cassandra, but now runs on Amazon’s DynamoDB service.
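The Open post has the real details; as a generic illustration of the batch-to-stream shift it describes, the difference is between recomputing reader profiles on a 15-minute schedule and updating them per event as clicks arrive. A toy sketch of the streaming side, with a time-decay scheme and names that are my own invention, not the Times' system:

```python
import time

class StreamingAffinity:
    """Update a reader's topic affinities on every click event,
    instead of recomputing them in a periodic batch job.
    Scores decay with a half-life so recent reads count more.
    (Toy model for illustration only.)
    """
    def __init__(self, half_life_s=3600.0):
        self.half_life_s = half_life_s
        self.scores = {}  # topic -> (score, last_update_timestamp)

    def _decayed(self, topic, now):
        score, ts = self.scores.get(topic, (0.0, now))
        return score * 0.5 ** ((now - ts) / self.half_life_s)

    def on_click(self, topic, now=None):
        now = time.time() if now is None else now
        self.scores[topic] = (self._decayed(topic, now) + 1.0, now)

    def top_topics(self, n=3, now=None):
        now = time.time() if now is None else now
        return sorted(self.scores,
                      key=lambda t: self._decayed(t, now),
                      reverse=True)[:n]

# Usage: two politics clicks then one tech click leave
# "politics" as the top topic.
reader = StreamingAffinity()
reader.on_click("politics", now=0)
reader.on_click("politics", now=10)
reader.on_click("tech", now=20)
top = reader.top_topics(now=20)
```

The appeal is latency: a batch pipeline can only reflect behavior from the previous run, while the per-event version is current the moment the next recommendation is served.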

In Brief

It was a good day for anyone invested in the greater NoSQL market, as Riak creator Basho and Couchbase both announced big customer wins. Basho highlighted The Weather Company, which is running and replicating Riak across multiple global data centers, while travel-industry technology provider Amadeus is working with Couchbase to deploy that database across its customer-facing applications. It’s good news for the NoSQL space because every large company choosing a database other than MongoDB is validation that those alternatives matter and a sign they’ll be around for a while.

In Brief

Amazon Web Services is now offering up free access to three NASA datasets from the NASA Earth Exchange project about the world’s weather, geology and vegetation. The cloud is a natural place to house large datasets that many people or institutions might want to analyze, without requiring everyone to download, store and analyze the data locally. Scientific data has proven a particularly appealing early use case, with numerous cloud providers already hosting various datasets, often in the fields of genomics and biology.


In Brief

This survey from State Street and the Economist Intelligence Unit is a pretty good look at the opportunities and challenges of using data in the financial services industry. Many respondents noted the challenge of integrating lots of data sources, which is understandable and probably only going to get harder. It seems there’s a lot of promise in new services/data sources such as Dataminr and Premise Data, but they also represent a pretty big divergence from tradition.

In Brief

A Dallas-based startup called Servergy, which makes low-power servers about half the size of traditional servers, has raised a $20 million series C round of venture capital. The company’s servers run on 8-core 1.5 GHz Freescale Power Architecture processors and, although 1U high, are only 14 inches deep and 8.25 inches wide. Servergy appears to have raised just under $30 million so far, according to SEC filings, although it has not named its investors.

Correction: This post was corrected at 3:15 p.m. to correct the manufacturer of Servergy’s processors, which is Freescale and not IBM.

On The Web

Teradata’s CEO addressed the impact of Hadoop on its earnings call and, according to this report from ZDNet, downplayed its effect. In fact, he said only 4 to 8 percent of Teradata workloads might ever move to Hadoop. Even if that’s true for workloads, what about the data itself? It might not need to live in those pricey appliances.

In Brief

Dropbox has hired Kevin Park as its new head of technical operations and IT. Park was at Facebook from 2006 until 2011, where he was a director of technical operations. This isn’t the first time Dropbox has brought on former Facebook employees to help grow its engineering team — in 2012 it bought a startup called Cove that was started by Aditya Agarwal (now VP of engineering) and Ruchi Sanghvi (formerly VP of operations), who built Search and Newsfeed, respectively, during their time at Facebook.

Correction: This post has been updated to clarify that Ruchi Sanghvi is no longer with Dropbox. 

On The Web

This is a pretty interesting benchmark study, although the headline is a bit misleading because Hadoop isn’t really optimized for graph analysis. When you look at comparisons to Spark, GraphLab and other platforms, it seems the decision of what to choose might come down to data volume, acceptable latency and cost, especially when considered against the value of that graph workload. Projects like Giraph and other YARN-enabled engines might make Hadoop look better, too.
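For a sense of what such graph benchmarks actually run: PageRank is the canonical workload, and its core iteration is tiny. The engines being compared differ in how they distribute and schedule it, not in the math. A plain single-machine sketch (the linked study's exact setup may differ):

```python
def pagerank(edges, damping=0.85, iters=50):
    """Iterative PageRank over a dict of node -> list of out-neighbors.
    A miniature version of the workload that Hadoop, Spark and
    GraphLab are benchmarked on at much larger scale.
    """
    nodes = set(edges) | {v for outs in edges.values() for v in outs}
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        new = {node: (1.0 - damping) / n for node in nodes}
        for node, outs in edges.items():
            if outs:
                share = damping * rank[node] / len(outs)
                for v in outs:
                    new[v] += share
            else:  # dangling node: spread its rank evenly
                for v in nodes:
                    new[v] += damping * rank[node] / n
        rank = new
    return rank

# Tiny example graph: a -> b, a -> c, b -> c, c -> a.
edges = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(edges)
```

Each iteration touches every edge, which is why data volume and per-iteration latency, not algorithmic cleverness, tend to decide which platform wins.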

