A few months ago I signed up for the FDA’s email alerts about food recalls, blissfully unaware that the agency sends out anywhere from three to six of those a day, covering everything from tainted chicken salad with chives in Iowa to undeclared nut allergens in your cranberry juice. So far I’ve yet to see a recall that affects me, but even if there were one, I might miss it in the deluge.
And anyway, who wants to track food recalls in their email inbox? There’s really only one place where food recall data becomes useful: at the point of sale, when you are getting ready to pick up a product, or perhaps as the cashier scans it at the checkout. Ideally, a recalled product would be flagged automatically by the grocery store’s own point-of-sale system.
But this problem illustrates a huge opportunity looming with the internet of things: making data available at the point where it’s most useful. This requires access to data, an ability to separate the signal from the noise in that data, and then the ability to deliver it at the time when it’s most useful. You might argue that it should be delivered in the format that’s most useful, as well (after all, those emails don’t help me at the meat counter).
Untangling the food chain
Ideally, a consumer never has to know about food recalls if the producers and distributors do their job, but based on the undeclared allergens that I see, a concerned parent of a highly allergic child might welcome an app that provides these recalls along with traditional and expected ingredients of a packaged food product. Imagine a parent with a mobile app that can store a specific allergy or allergies; after scanning food items, the app provides a Yes or No indication.
This app could pull in data from the food producer via an API, the FDA database and perhaps eventually from the manufacturing facility via sensors that understand what goes into each individual batch. Think this is nuts? Well, [company]McDonald’s[/company] has built an app for stores in Australia that does something similar:
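The matching logic such an app would need is simple once the data sources exist. Here is a minimal sketch in Python; the barcodes, product database, recall feed, and the `safe_to_buy` function are all invented stand-ins for what a real app would fetch from a producer API and the FDA’s recall data:

```python
# Hypothetical sketch of the allergen-check flow described above.
# All barcodes, products, and recall entries below are invented for
# illustration; a real app would pull them from live APIs.

# Stand-in for a producer's ingredient API, keyed by barcode
PRODUCT_DB = {
    "0123456789": {"name": "Cranberry Juice",
                   "ingredients": {"cranberries", "water", "sugar"}},
    "9876543210": {"name": "Trail Mix",
                   "ingredients": {"peanuts", "raisins", "chocolate"}},
}

# Stand-in for the FDA recall feed: barcodes mapped to undeclared allergens
RECALLED = {"0123456789": {"tree nuts"}}

def safe_to_buy(barcode: str, allergies: set) -> bool:
    """Return True only if neither the declared ingredients nor any
    recall-disclosed undeclared allergens overlap the stored allergies."""
    product = PRODUCT_DB[barcode]
    declared_hit = product["ingredients"] & allergies
    undeclared_hit = RECALLED.get(barcode, set()) & allergies
    return not (declared_hit or undeclared_hit)

print(safe_to_buy("0123456789", {"tree nuts"}))  # False: recall discloses undeclared tree nuts
print(safe_to_buy("9876543210", {"peanuts"}))    # False: peanuts are a declared ingredient
print(safe_to_buy("9876543210", {"tree nuts"}))  # True: no declared or recalled match
```

The Yes/No answer the parent sees is just the set intersection of stored allergies against two feeds: declared ingredients and recall disclosures. The hard part, as the rest of this piece argues, is getting those feeds at all.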
Where did the meat in your hamburger come from? In Australia, McDonald’s (which is known as “Macca’s”) has introduced the “TrackMyMacca’s” iPhone app. It uses your phone’s GPS to find out what restaurant you’re in, image recognition to see what you’re eating, and the date and time to track the exact ingredients that went into your food.
There are other apps trying to bring data to the consumer at the point of action, such as Buycott, an app that cross-references a database of products and the companies behind them with information about the causes those companies support. People can use it to avoid genetically modified foods or to eat only at restaurants that are run by pro-lifers. IBM is building a massive system to disclose which foods were produced sustainably by tracking everything about the food’s growing conditions and then compressing all that data into usable information for grocers, farmers or even the average shopper.
But so far most of these apps draw primarily from databases and APIs. What happens when sensor data gets added to the mix, as in the [company]IBM[/company] example? Parsing that information so it is both available and useful, while still delivering a world-class user experience, will become more expensive and more challenging.
Show me the money
For example, in the allergen app use case, why would a food manufacturer open up data about what ingredients go into its products? That’s likely highly confidential information. And getting FDA data through an API will require someone to manually access and clean the agency’s existing records and then host them online. Could we make this part of the government’s open data efforts? So in some cases there’s no incentive to share sensor data, and in others the data requires cleaning before it can be consumed in a useful format.
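That cleaning step is less glamorous than the app itself but just as necessary. The sketch below shows the kind of normalization involved; the raw field names, casing inconsistencies, and date formats are all invented for illustration, not taken from the actual FDA feed:

```python
# Hypothetical sketch of cleaning raw recall records into a consistent
# shape an API could serve. All records and field names are invented.
from datetime import datetime

raw_records = [
    {"Product": "CHICKEN SALAD W/ CHIVES", "State": "ia",
     "Date": "08/12/2014", "Reason": "Listeria contamination"},
    {"Product": "cranberry juice ", "State": "NY",
     "Date": "2014-08-20", "Reason": "Undeclared nut allergen"},
]

def parse_date(value: str) -> str:
    """Accept both date formats seen in the raw feed; emit ISO 8601."""
    for fmt in ("%m/%d/%Y", "%Y-%m-%d"):
        try:
            return datetime.strptime(value, fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"unrecognized date: {value!r}")

def clean(record: dict) -> dict:
    """Normalize casing, whitespace, and dates into one consistent schema."""
    return {
        "product": record["Product"].strip().title(),
        "state": record["State"].upper(),
        "date": parse_date(record["Date"]),
        "reason": record["Reason"],
    }

cleaned = [clean(r) for r in raw_records]
print(cleaned[0]["state"], cleaned[0]["date"])
```

Multiply this by every inconsistent field in a government dataset and you can see why someone has to be paid to do it, which is exactly the incentive problem at hand.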
For an allergen app, parents might pay because their kid’s life is at stake, so it could make sense to work through these issues: people will fund the data-cleaning effort and API access with the price of a download if the service provides information that could save their child’s life. But what about other situations?
The home health and wellness tracking market is a good example: consumers get a lot of data that can seem almost meaningless to anyone outside a few quantified-self buffs. So how do you build products that show average users what they actually want to see: a simple way to tell whether their sleep, movement, eating and vitals add up to a healthy lifestyle, and if not, where they should focus?
TicTrac, an app that grabbed my attention earlier this summer when Samsung showed off its platform for tracking fitness information, provides a perfect example. The company’s app was built as a consumer web platform for aggregating a ton of social, sensor and fitness data, but consumers don’t pay for it. Instead, big insurance firms and corporations do, according to Jeremy Jauncey, chief revenue officer and co-founder of [company]TicTrac[/company].
The company has found success letting other companies use its technology under their own brand, so you might end up with personalized insights from your employer or maybe an insurance provider. Eventually, your doctor might offer such a service. Businesses pay based on the customizations they ask for (your insurer might want to track preventative health measures like exercise, while your doctor wants to track nutritional information) and on a per-user basis.
The key here is that the consumer inputs data from their own devices and web services, but the platform also accepts data from other arenas. Your doctor might input your health data if you give her permission. Your employer, if it buys the service, might input stats relevant to your job performance. Combine your professional and personal data, and you might see that your ability to answer emails goes up when you listen to music.
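The email-and-music observation above is just a correlation across two data streams that normally live in separate silos. Once they sit in one platform, surfacing it is a few lines of arithmetic; in this sketch all the daily numbers are invented, and the Pearson correlation is computed by hand:

```python
# Illustrative sketch: correlating an employer-side stat with a
# personal-device stat. All numbers below are invented.
from math import sqrt

emails_answered = [22, 35, 18, 40, 31, 27, 38]    # per day, from employer data
music_minutes   = [30, 90, 10, 120, 75, 45, 100]  # per day, from a personal device

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(emails_answered, music_minutes)
print(f"correlation: {r:.2f}")  # strongly positive for this invented data
```

The insight itself is trivial to compute; the value a platform like TicTrac sells is getting both streams into one place with the user’s permission, which is exactly Jauncey’s point below.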
“Plenty of things get lost in the data points that only come from one device,” Jauncey said. But that level of integration is why, so far, only big businesses are buying from TicTrac. Eventually it might offer services to consumers as well. I, for one, would love to pay for a TicTrac backend on an app that tracks data related to migraines.
I might also get a discount from my doctor if I provided access to that data back to him for research purposes. “The next wave is the personal data economy where their data has value, and is an asset they can trade,” said Jauncey. That isn’t the case today, when consumers tend to hand their data to platforms for free, sometimes even giving up their ownership rights entirely.
So we’re still a long way off from getting easy-to-understand information at the point of making a decision. And even before we figure out the user interfaces and algorithms that will create those insights, there’s clearly a lot of work to do on the business models just to get the right data into the right place. We’ll be discussing these business models in depth at our Structure Connect event Oct. 21 and 22 with various executives representing companies in healthcare, retail and the smart home. It’s clear that, absent real standards, we need middleware just to make the internet of things work; we may also need data middlemen and escrow providers just to get the business models right.