
Summary:

FedEx has always dealt in big data, but its CIO Rob Carter isn’t worried about more. In a conversation with reporters he explained how FedEx has coped in the past and where he thinks the future of data storage is heading.

FedEx, SenseAware
photo: FedEx

Shipping powerhouse FedEx has been generating big data for years, but now it’s prepping for the future. By placing sensors inside its packages and attaching digital information to them, FedEx thinks it can bring together the digital and physical worlds to expand its customer service and its business.

FedEx CIO Robert Carter, who spoke at the IT Expo in Austin on Thursday, explained that this act of attaching bits to real-world atoms creates opportunities galore.

“The information we apply to the physical world creates an incredible opportunity for us,” Carter said. “When we apply more bits to the atoms, we create more opportunity for interactions, more opportunities to do business, and opportunities to change how you see the world.” In later conversation with a few reporters, he explained how this lofty vision affects IT, and how changes in IT bring about this vision.

Paving the way for “epic data”

FedEx’s home page in 1994

FedEx has a history of embracing technology that gets it closer to its customers. Its first web site, put in place in 1994, was just a basic HTML page that asked customers to enter a tracking number, then relayed that info back to a mainframe. The mainframe figured out where the package was and shot the answer back to the customer. Today, FedEx offers a native app on all platforms that’s essentially a thin “skin” talking back to myriad FedEx services to give customers the tools and information they need.

As FedEx has grown, the backend infrastructure to support the organization has adapted accordingly, with FedEx using gear from Teradata and Greenplum to handle today’s data warehousing and analytics. Carter didn’t say how much data the company generates a day, but noted that it has exabytes and exabytes of data generated from the 9 million shipments it averages daily. That so-called “epic data” is then stored indefinitely.

And as FedEx adds its SenseAware platform, it is adding real-time data and notifications to its infrastructure at a more granular level. That platform, which launched in 2009, pairs a variety of sensors with radio chips, allowing it to detect temperature, location and light, and to report back if a package (and its contents) hits a problem. The SenseAware device, which measures roughly 6 inches by 6 inches, gets dropped into packages carrying high-value contents like diamonds or human organs; it can proactively monitor the package for 96 hours and alert recipients and senders if something endangers or waylays the shipment.

Because the SenseAware device has a radio, it is constantly broadcasting information back to FedEx, and can generate a lot of data that must be acted on in real time. Without the right infrastructure, that might be overwhelming. Still, Carter notes that the bulk of FedEx’s exabytes of data are structured and sent from different services to the same message bus where decisions or further analytics can happen.
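To make the idea concrete, the kind of threshold monitoring Carter describes can be sketched in a few lines. This is purely illustrative: the field names, limits and alert logic here are assumptions, not FedEx’s actual SenseAware schema.

```python
# Illustrative sketch of SenseAware-style threshold monitoring.
# Field names and limit values are hypothetical, not FedEx's real schema.
from dataclasses import dataclass

@dataclass
class SensorReading:
    package_id: str
    temperature_c: float   # cold-chain shipments care about this
    light_lux: float       # light inside a sealed box suggests it was opened
    lat: float
    lon: float

# Hypothetical alert thresholds for a temperature-sensitive shipment.
LIMITS = {"temp_max_c": 8.0, "light_max_lux": 10.0}

def check_reading(r: SensorReading) -> list:
    """Return alert messages for any thresholds the reading violates."""
    alerts = []
    if r.temperature_c > LIMITS["temp_max_c"]:
        alerts.append(f"{r.package_id}: temperature {r.temperature_c} C exceeds limit")
    if r.light_lux > LIMITS["light_max_lux"]:
        alerts.append(f"{r.package_id}: light detected, package may have been opened")
    return alerts
```

In a real pipeline, each reading would arrive over the radio link and the resulting alerts would be published to the message bus for downstream decisions or analytics.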

In the future, databases are for archival purposes

“There is so much coming online that allows us to look at large data sets. From the technical mindset, what’s happening fundamentally is a shift from the reason databases even existed,” Carter explained. Databases were built because memory was precious and operations and IT staff had to allocate when and what made it into memory at any given time. But in modern data centers and computing architectures there’s plenty of addressable memory and cabinets of non-volatile flash memory available for applications. “Databases will become archival rather than a system of record,” Carter said.

Carter compares it to having a file cabinet versus having a more brain-like process with a matrix of information the computer can harness. But FedEx has some advantages in building that matrix. For example, much of its data comes from pre-determined processes such as existing routes or metrics affecting its business and, thus, is mostly structured. Carter says that only a few elements of data, such as the monitoring of Facebook and Twitter to talk to customers, are relatively unstructured.

He anticipates the future of his many exabytes of data as being dumped into a pool of storage with some metadata attached to the files so it can be analyzed. It sounds closer to a key-value store or even some of the NoSQL efforts, although he said FedEx isn’t using many of the new open source data stores or analytics tools out there, relying on Greenplum’s Hadoop distribution for analysis today.
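The “pool of storage with metadata attached” model can be sketched as a simple key-value store where each blob carries a small metadata dictionary that later analysis filters on. Everything here, from the keys to the metadata fields, is a made-up illustration of the pattern, not anything FedEx has described.

```python
# Minimal sketch of a metadata-tagged storage pool (key-value style).
# Keys, blobs and metadata fields are hypothetical illustrations.
store = {}  # key -> (blob, metadata dict)

def put(key, blob, metadata):
    """Store a blob alongside the metadata used for later analysis."""
    store[key] = (blob, metadata)

def find(**criteria):
    """Return keys whose metadata matches every given criterion."""
    return [k for k, (_, meta) in store.items()
            if all(meta.get(field) == value for field, value in criteria.items())]

# Example: two scan records tagged with hub and service level.
put("scan-001", b"...", {"hub": "Memphis", "service": "overnight"})
put("scan-002", b"...", {"hub": "Indianapolis", "service": "ground"})
```

With this shape, an analytics job doesn’t query a schema; it filters the pool by metadata, e.g. `find(hub="Memphis")`, which is the archival-pool role Carter sketches for databases.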

As companies seek to embrace and use big data, it’s clear that the way data is stored and analyzed is changing. But it shouldn’t be forgotten that applying data to physical items like packages can generate huge opportunities. Big data needs to be used to produce big (or even little) insights.


  1. David Colbourn Monday, October 8, 2012

    Data warehouses are archival: usually OLAP, dimensionalized historical repositories, and they are only one type of database. The OLTP transactional single-source-of-truth databases, closely tied to the system of record, will still be needed. These are apples and oranges: they are tuned for different objectives, process memory in different ways, and both will be needed in the future. Granted, real 64-bit addressability will greatly improve memory access, but that was not the only issue relational theory addressed when applied in an OLTP design. The whole issue of system-maintained referential integrity, or data consistency and synchronization across systems, will persist as long as data duplication exists. That data synchronization and data quality was one of the key points that sold OLTP RDBMSs. Historical archives, by their very nature, will not have a single source of truth; they will have many truths, each tied to a time and place. This largely precludes system-maintained referential integrity and forces application-level referential integrity, which will stay second-rate by its very complexity. Subscriptions, mirroring and stored procedures are not becoming part of a transactional unit of work in the newer systems I am seeing. Structured big data will still need a system of record.
