Structure Data

Wise.io raises $2.5M for machine learning, simplified

Machine learning startup Wise.io, whose founders are University of California, Berkeley, astrophysicists, has raised a $2.5 million Series A round of venture capital. The company claims its software, which was built to analyze telescope imagery, can simplify the process of predicting customer behavior.

When machine learning takes a lesson from human learning

Declara’s co-founder and CEO knows a lot about overcoming adversity and about learning as an adult. She also knows a lot about algorithms. On the Structure Show podcast this week, she explained how the two have intersected in her company’s platform.

Repeat after me: ‘Google is not a proxy for big data’

Another study is reporting on the inaccuracy of the Google Flu Trends project, which predicts seasonal flu rates based on search data. However, Google’s algorithms don’t constitute the “big data” approach to this issue; they’re just one piece of a smart big data approach.

The state of private cloud

In this week’s Structure Show, Eucalyptus CEO Marten Mickos talks about the long and winding road to private cloud, an idea that arrived ahead of its time.

Olympians weren’t the only athletes in Sochi

RunKeeper tracked what its users were up to in Sochi during the Olympics and found they ran the equivalent of about 78 marathons. It’s an interesting nugget, but part of a much larger picture about learning how, when and where people exercise.

BeyondCore raises $9M to automate data analysis

Analytics startup BeyondCore has raised $9 million for its technology that can analyze complex data sets and automatically highlight the strongest correlations. It’s a promising capability assuming companies are willing to open up analytics across the organization.

3 lessons in big data from the Ford Motor Company

Data means a lot to Ford, informing everything from product design to business intelligence. In this interview from the Structure Show podcast, Ford’s top data scientist talks all about how Ford approaches everything from deploying Hadoop to hiring the right people.

Palantir: The worst-kept $9B secret in Silicon Valley

Quasi-secret intelligence-software startup Palantir is reportedly in the process of raising more than $100 million at a $9 billion valuation. That says a lot about the value of its technology, which isn’t cloud-based or consumerized, but does what it does very well.

How search can solve big data problems

LucidWorks CTO Grant Ingersoll made his case at Structure:Data on Thursday for why companies tackling big problems related to large sets of data should give search another look.

Data? What is it good for? Absolutely … something

It is fashionable these days either to champion big data or to malign it. Regardless of your personal feelings, the question has always been, and will always be: what is data good for? Here are three stories that illustrate that question.

Eventbrite: Hadoop isn’t for everybody

Open source data warehousing models have a lot of advantages, the ability to scale horizontally and cheaply among them, but traditional warehousing techniques have their strengths as well, said Vipul Sharma, principal software engineer and engineering manager at Eventbrite, at Structure:Data.

The online future is personal, and that requires big data

The problem for many companies is that user information is spread across hundreds or even thousands of different fields in various databases, and it’s difficult to compile it in real time. But doing that successfully is becoming increasingly important, says WiBiData at Structure:Data.

Mobile data a fascinating and scary opportunity

We’re walking around with sensors in our pockets: those of us carrying smartphones, anyway. As speakers said at Structure:Data, there are huge opportunities for companies to improve existing services and create new ones with the vast amount of data provided by mobile computers.

Flash, an option for big data performance?

Scott Metzger, VP of analytics at flash-memory array maker Violin Memory, argued at Structure:Data that putting flash memory at the heart of big-data infrastructure is a must for any business that is worried about how long it takes to get results from data analysis.

Another big obstacle to exascale computing: resilience

Los Alamos National Laboratory is trying to build an exascale computer, which could process a billion billion calculations per second. The man in charge of executing that vision, however, sees a big obstacle to building it. That problem, discussed at Structure:Data, is resilience.

What big data really needs is security

There are plenty of benefits to making data available in large repositories. But Trend Micro’s Dave Asprey said at Structure:Data that one thing holding enterprises back from putting their data in the cloud is a lack of security around what they’re sharing.

Never mind the hardware, it’s the algorithms

It’s easier to crunch massive amounts of data when you don’t have to reinvent the wheel for every scenario. Sultan Meghjji and his colleagues at Appistry are hoping to make this process run more smoothly, Meghjji explained at Structure:Data.

How Wordnik moved its database to the cloud

When running databases, how do you get the speed you want while offering the flexibility and cost savings of the cloud? At Structure:Data, Wordnik co-founder Tony Tam described how his company was able to move its relational database from dedicated hardware to the cloud.

Why big data needs real-time intelligent systems

“Anticipation denotes intelligence.” Zubin Dowlaty, VP and head of innovation and development at analytics-outsourcing firm Mu Sigma, said at Structure:Data that’s what companies need to be striving for, and that in this era of big data, the barriers to achieving it have fallen away.

How big data can be used to improve and predict sales

Data collected can be useful to retailers in many ways, but not necessarily in the ways that one might expect, as discussed at Structure:Data. For example, did you know that there’s a correlation between the music you listen to and the things you might buy?

NASA-style data tools drive digital ad campaigns

At Structure:Data, DataXu showed off its technology in the form of a writhing map of colors that reflected consumer sentiment to cell phone promotions. In practice, this means that a phone company’s ad campaign would automatically increase or decrease offers for contracts or free phones or pre-paid plans.

Is the debate over data and online privacy misguided?

Consumers have long been trading their personal data in return for access to Web sites like Facebook. The tradeoff has worked well for companies and consumers but, as the pool of data grows, so have privacy concerns. At Structure:Data, panelists said the current so-called solutions are misguided.

How to query Big Data quickly: Stream those queries!

Over time, we’re generating massive amounts of new data, and as those volumes grow, it becomes a challenge to gain insights through traditional database queries. At Structure:Data, SQLstream CEO Damian Black proposed a way to solve this problem.

Can LexisNexis build a Hadoop-killer?

Hadoop may be the current leader of the pack when it comes to handling big data, but LexisNexis said at Structure:Data that the system it developed for its own internal data use — and recently open-sourced — is a viable alternative and in some cases is superior.

Machine data is for people too

Machine-generated data, the unintelligible zeros and ones generated by sensors and other devices, is no longer just for geeks. While it looks like gibberish, forward-thinking consumers are already pressing that “gibberish” data into service, according to speakers at Structure:Data 2012.

How humans & machines can team up to solve big problems

In the same way a microscope helps augment the innate ability of the human eye, Quid is trying to create tools to augment how we as humans process unstructured data and visualize it, said Sean Gourley, co-founder and CTO of Quid at Structure:Data 2012.

Big data: Like Moneyball for your business

It’s tough to overcome some of the biases that have become second nature in most businesses. But ask John Lucker, a principal at Deloitte: overcoming the “human factor” can be critical to the success of driving organizational change.

Mo’ data mo’ problems: The future of the data center

The amount of data processed by companies big and small increases every day – and data centers have a hard time keeping up. Not only is scaling the physical infrastructure costly, it also consumes vast amounts of energy. A solution was discussed at Structure:Data.

How data mining leads to loans for the underbanked

Next-generation online banking service ZestCash uses data to help qualify people for short-term loans. The company explained at Structure:Data that it has been giving $300 to $800 loans to users based on thousands of variables, which are boiled down to 10 models.

Cloudera CEO: Come to me with apps, I’ll get you money

Hadoop is a great platform for storing and processing data, but it needs applications to make it truly valuable. At Structure:Data, Cloudera CEO Mike Olson discussed the dearth of professional analytics apps built on Hadoop, and invited the startups building them to come to him for money.

How to make data digestible for non-techies

Humans have to be part of the equation when it comes to interpreting and processing data if we want to get the most value out of it, argued Arnab Gupta, CEO and founder of Opera Solutions at Structure:Data. Gupta advocates for making data small.

Filtering the digital exhaust

If 80 percent of new data created is going to be unstructured, where is all that data coming from? It’s coming from consumers’ activities online, and it requires real-time processing, said Continuuity’s Todd Papaioannou at Structure:Data.

How Allstate uses data to calculate your premiums

Predicting risk is key to how insurance companies figure out your monthly rates and premiums, and they need data and time to do so. Allstate said at Structure:Data that the best use of this data was to give it to the 30,000 scientists competing on Kaggle.

How do you cram 1 trillion rows in a spreadsheet?

Robert Lefkowitz, director of web development at 1010data, argued at Structure:Data 2012 that desktop-based or browser-based spreadsheets aren’t the solution if you have tons of data you need to put to work…

Social: the new silo-busting tool for business

One of the oldest problems in business, getting different groups to communicate with each other, could be solved by one of the newest online phenomena: social tools. Two industry leaders explained at the Structure:Data conference how to break down these silos through social.

Google’s BigQuery: making data insights faster

Companies are grappling with how to make use of all their data, facing the challenge of teasing out insights quickly and with flexibility. Moving to the cloud opens up security and privacy questions. But the effort can be worth it, says Google’s Ju-kay Kwek at Structure:Data.

The beauty of machine learning? It never stops learning

Businesses not using machine learning to augment their products and services will find it difficult to compete in the future, according to panelists at GigaOM’s Structure:Data event on Wednesday. These companies’ competitive disadvantage will only worsen as machine-learning solutions gain more intelligence.

Structure:Data live coverage

On March 21 and 22, at Structure:Data, we’ll look at how companies like @WalmartLabs, IBM, and PayPal are using big data and how technologies like Hadoop are evolving to help them analyze that data. Watch the livestream and read our blogs of the event here.

Exclusive: EMC Buys Pivotal Labs

EMC Corp. (s EMC), the Hopkinton, Mass.-based storage and cloud hardware company, has bought Pivotal Labs, a San Francisco-based consulting firm well known for its tool Pivotal Tracker and for its pioneering work on agile development methodology. I first reported the deal earlier this week.

Can big data fix a broken system for software patents?

Legal scholars are always searching for ways to improve the patent system, sometimes via sweeping changes, but big data — especially techniques such as machine learning and natural-language processing — could help provide a technological fix to a big part of the problem.

Data without context is dirt

Data, I believe, is like plastic. You can use it to make wonderful things. However, like plastic, it can be a great polluter and wreak havoc on the environment. Or, as I like to say, data without context is dirt.
