
Summary:

Civitas Learning, an Austin-based startup, has raised $8.75 million to help colleges and universities make data-driven decisions.


Colleges and universities are sitting on mounds of data that could lead to all kinds of valuable insights – like which students are most at risk for failing or what course loads tend to lead to attrition. But, until recently, schools haven’t had the tools to aggregate and analyze those disparate datasets.

About a year ago, Civitas Learning launched to bring big data analytics to higher ed. On Tuesday, the Austin-based company said it raised an additional $8.75 million in venture capital.

The round, which follows $4.1 million closed last year, was led by Emergence Capital Partners and included existing investors Austin Ventures, First Round Capital and Floodgate, as well as new investors Felicis Ventures and New Markets Venture Partners. With the new funding, the company said it plans to expand to additional schools and build out its services.

Charles Thornburgh, CEO and founder of Civitas, launched the company after years of working in ed tech as an entrepreneur and executive at education company Kaplan. While focusing on higher education, he became acutely aware of the pressures facing universities – from rising student debt to falling completion rates. Data that could help schools address many of their problems existed in boatloads, but the tools to crunch that data did not, he said.

“There are these remarkably large and fast-flowing data sets about the students [schools are] serving everyday,” he said. “But the thousands of decisions by students, faculty and administrators – none of [those] are benefitted at all by the data in those datasets.”

Civitas helps schools aggregate their various data streams – including student information system data on enrollment and withdrawal patterns; learning management system data on how students interact with digital content, professors and peers; financial aid information; and even swipe-card data that reveals which campus resources students are using. Then it analyzes the entire universe of data (both historical and current) to pull out helpful patterns and insights.
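To make that aggregation step concrete, here is a minimal sketch in Python (pandas) of how such disparate streams might be joined on a common student identifier and rolled up into per-student features. The file names and columns are assumptions for illustration only, not Civitas's actual schema.

```python
# Illustrative only: file and column names are hypothetical, not Civitas's schema.
import pandas as pd

# Load the separate campus data streams.
sis = pd.read_csv("sis_enrollment.csv")    # one row per student: student_id, credits, withdrew
lms = pd.read_csv("lms_activity.csv")      # per-event: student_id, logins, posts
aid = pd.read_csv("financial_aid.csv")     # student_id, aid_amount
swipes = pd.read_csv("card_swipes.csv")    # per-swipe: student_id, location, timestamp

# Roll the activity streams up to one row per student.
lms_feats = lms.groupby("student_id").agg(
    logins=("logins", "sum"),
    posts=("posts", "sum"),
)
swipe_feats = swipes.groupby("student_id").size().rename("swipe_count")

# Join everything on the common student identifier.
students = (
    sis.set_index("student_id")
       .join(lms_feats, how="left")
       .join(swipe_feats, how="left")
       .join(aid.set_index("student_id"), how="left")
       .fillna(0)
)
print(students.head())
```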

While it looks at school-specific data to help each institution better understand its student populations and forecast patterns, it also analyzes data across schools to give them predictive models based on an even richer dataset. So far, Thornburgh said, the company’s “community of data” includes information for three million students and 15 million courses at six higher ed institutions.
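As a rough illustration of the kind of predictive model a pooled dataset like that could feed, the sketch below fits a simple logistic regression to estimate withdrawal risk from the per-student features above. The feature names and the `withdrew` label are assumptions carried over from the previous sketch; the article does not describe Civitas's actual modeling approach.

```python
# Illustrative sketch: a simple retention-risk model on pooled per-student features.
# Feature and label names are assumptions, not Civitas's actual model.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# `students` is the joined table from the aggregation sketch above;
# `withdrew` is a historical label (1 = withdrew, 0 = persisted).
X = students[["credits", "logins", "posts", "swipe_count", "aid_amount"]]
y = students["withdrew"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# The predicted probability of withdrawal can be surfaced as an "at-risk" score.
risk_scores = model.predict_proba(X_test)[:, 1]
print("held-out accuracy:", model.score(X_test, y_test))
```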

The company, which charges schools an annual fee for its data integration and analysis services, offers a limited number of applications developed in-house, including tools for helping students choose courses or for helping teachers identify at-risk students early on. Ultimately, Thornburgh said, the company plans to open up its API so that schools and developers can build additional apps on top of Civitas data.
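Purely as a hypothetical sketch of what a school developer might build once such an API opens up, the snippet below pulls at-risk scores for a course section and flags students for early outreach. The host, endpoint, and field names are invented for illustration; Civitas has not published this interface.

```python
# Purely hypothetical: the host, endpoint, and fields below are invented for
# illustration; this is not a documented Civitas API.
import requests

API_BASE = "https://api.example-civitas.test/v1"   # placeholder host
API_KEY = "YOUR_API_KEY"                           # placeholder credential

# Fetch at-risk scores for a course section and flag students for advisor outreach.
resp = requests.get(
    f"{API_BASE}/sections/BIO-101/risk-scores",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
resp.raise_for_status()

for student in resp.json().get("students", []):
    if student.get("risk_score", 0) > 0.7:
        print(f"Flag student {student['id']} for advisor outreach")
```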

The company’s growth comes amid increasing interest in companies focused on helping schools put student data to work. At the K-12 level, the Bill and Melinda Gates Foundation-backed inBloom is one of the biggest efforts to aggregate and analyze student data. A project led by WCET (the WICHE Cooperative for Educational Technologies), also supported with Gates funding, similarly aims to help institutions track and predict student outcomes in higher ed.


  1. bergstrommartin Tuesday, June 25, 2013

    For all their potential usefulness, these data sets will be worthless if the information gleaned from them is not interpreted intelligently and used effectively. That might seem like a relatively straightforward presumption, but data literacy is far from ubiquitous right now and, with the increasing popularity of “big data,” this can be especially problematic. “Big data” is certainly trendy, which has led to a wealth of new data collection and data sources, but these new sources of information have not been matched by new data analysts and thinkers. This means that many of these new data streams are either underused or misused, both of which are problematic.

    If we are to continue this data-oriented trend, and I believe we should, we must also look to educate more people in the intricacies and subtleties of correlation, causation, and sheer randomness. I do not mean to sound elitist, but handing huge new datasets to someone who does not know how to use them does little to improve anything and can actually be counterproductive. A friend of mine’s college “discovered” that student athletes were much less likely to drop out than other students and subsequently spent large sums of money recruiting student athletes from the demographics that were most likely to drop out. What happened? A quarter of the next year’s freshman football team dropped out. It wasn’t that student athletes were less likely to drop out, but that the type of student who had been likely to join a sport was also less likely to drop out. That might seem obvious to you and me, but with an old-school provost, nothing should be presumed.

