Podcast Episode

Voices in Innovation – A Roundtable Discussion on Persistent Memory

This week we pull together a great discussion on persistent memory with analysts Enrico Signoretti, Jason Collier, and Matt Leib.

GigaOm analysts and IT guests discuss the industry and emerging tech with host Johnny Baltisberger.

Enrico Signoretti

Enrico has 25+ years of industry experience in technical product strategy and management roles. He has advised mid-market and large enterprises across numerous industries and software companies ranging from small ISVs to large providers.

Enrico is an internationally renowned visionary author, blogger, and speaker on the topic of data storage. He has tracked the changes in the storage industry as a GigaOm Research analyst, independent analyst, and contributor to The Register.

Jason Collier

Jason Collier has over 20 years of experience as a serial entrepreneur in the technology industry. Jason was the Founder and Chief Evangelist at Scale Computing, one of the early pioneers in the hyperconvergence space. Previously, Jason was VP of Operations at Corvigo where he oversaw sales engineering, technical support, internal IT, and datacenter operations. Prior to Corvigo, Jason was VP of Information Technology and Infrastructure at Radiate. There he architected and oversaw the deployment of the entire Radiate ad-network infrastructure, scaling it from under one million transactions per month when he started to more than 300 million at its peak.

He has worked with companies such as Dell, Lenovo, Supermicro, Google, Schneider Electric, Acronis, Farm Bureau Insurance, Delhaize, Auburn University, Steel Dynamics, Sumitomo, Nippon, Coca Cola, Air Liquide, Menards, and Stena AB, as well as Benchmark Electronics. Jason has also lent his expertise and knowledge to many state and local government agencies, including fire, police, and SLED organizations, among many others.

Jason is one of the pioneers of hyperconvergence and is an expert in virtualization, data storage, networking, cloud computing, data centers, and edge computing. In his current role as an analyst, Jason provides distinctive technology and intelligent strategy solutions in both enterprise and startup settings.

Matt Leib

After a long career as an enterprise engineer, Matthew embarked on a new phase of his career as a pre-sales architect. At this point, he began blogging as well. In 2018, he was ranked one of the 50 storage bloggers to watch. He is a long-time contributor to many storage, cloud, and compute industry stalwarts, writing technical marketing and competitive analysis and performing research. He’s written for Oracle, EMC, Pure Storage, HPE, Dell/EMC, and many others.

He’s been recognized by VMware, Cisco, Pure, Dell/EMC, NetApp, and others for his contributions to the industry, and has become a recognized speaker, podcaster, and industry veteran. His courses have trained solutions architects and salespeople throughout the industry, and he has taught a disaster recovery class in the DePaul University Masters in MIS program in Chicago.

Matthew has been recognized as a knowledgeable speaker in multiple categories, with a firm grasp of the value proposition and ecosystem surrounding many differing approaches and diverse sets of solutions. In his spare time, Matthew plays guitar and has even built a couple. He lives in Skokie, IL, with his dog Stella, and has a daughter.

One Response to “Voices in Innovation – A Roundtable Discussion on Persistent Memory”

  1. Michael Delzer

    The short answer: putting flash memory chips in memory slots may increase the RAM range the CPU has access to.
    It gives the buyer the ability to buy either a server with a larger addressable memory range or a server with faster storage access.
    It improves SQL and NoSQL performance as well as some ML/AI workloads, and in niche high-performance computing use cases it can be a game changer.

    Long answer:

    When a CPU accesses memory, it does not need to translate the request to move between classic memory and persistent memory; it simply asks the memory controller for anything it did not have in cache. Using persistent memory as RAM allows a server with lots of memory slots to mix high-speed DRAM with high-density, slower persistent memory, pushing a server from 1-4 TB up to 32 TB of RAM (as chip density increases, this maximum will also increase).
    Because these memory requests do not need to be translated, the CPU and OS simply treat this as a slower tier of memory, as the sketch below illustrates.
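    A minimal sketch of that tiering idea in C, using the memkind library and assuming the persistent memory has been exposed to Linux as a KMEM DAX NUMA node; the allocation calls are real memkind APIs, but the platform setup is an assumption:

        /* Two-tier allocation sketch: DRAM vs. persistent memory as RAM.
           Build (assuming libmemkind is installed): gcc tiers.c -lmemkind */
        #include <memkind.h>
        #include <stdio.h>
        #include <string.h>

        int main(void) {
            size_t size = 64UL * 1024 * 1024;            /* 64 MiB */

            /* Fast tier: ordinary DRAM allocation. */
            char *fast = memkind_malloc(MEMKIND_DEFAULT, size);

            /* Slow tier: persistent memory exposed as system RAM.
               Same load/store semantics, just higher latency. */
            char *slow = memkind_malloc(MEMKIND_DAX_KMEM, size);
            if (!fast || !slow) {
                fprintf(stderr, "allocation failed\n");
                return 1;
            }

            memset(slow, 0, size);   /* plain stores, no I/O path involved */

            memkind_free(MEMKIND_DAX_KMEM, slow);
            memkind_free(MEMKIND_DEFAULT, fast);
            return 0;
        }

    The application code is identical for both tiers; only the latency differs, which is exactly why the CPU and OS can treat it as just another range of memory.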

    When persistent memory is used as block storage with a file system on top of it, the CPU must run through several steps in the OS to make a request for a file, which slows down access; it also forces the application to ask for a file (or storage block) rather than just a memory range.
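    The contrast between the two access paths can be sketched on Linux: the block-storage path goes through syscalls for every request, while a DAX mapping turns the same file into plain loads and stores. The file path here is hypothetical, and MAP_SYNC assumes a DAX-capable filesystem (e.g. ext4 or xfs mounted with -o dax):

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void) {
            int fd = open("/mnt/pmem/data.bin", O_RDWR);  /* hypothetical path */
            if (fd < 0) { perror("open"); return 1; }

            /* Block-storage path: each access is a request through the OS. */
            char buf[4096];
            if (pread(fd, buf, sizeof buf, 0) < 0) perror("pread");

            /* DAX path: map the file once, then use plain loads/stores that
               go straight to persistent memory, bypassing the page cache.
               MAP_SYNC requires MAP_SHARED_VALIDATE and a DAX filesystem. */
            char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
            if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

            p[0] = 'x';                     /* a direct store, no syscall */

            munmap(p, 4096);
            close(fd);
            return 0;
        }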

    Persistent memory in the DIMM slots on the motherboard has lower latency and higher bandwidth than even an M.2 drive directly attached to the CPU. When the flash memory sits in a PCIe slot, the bandwidth may be further limited, and it often can’t be treated as RAM but must instead be accessed through file or block storage requests.
    (Some vendors do sell PCIe products that can run as extended RAM or as block storage.)
    With the DIMM form factor, a memory channel must be either traditional RAM or persistent memory.
    With the PCIe form factor, the current upgrade from v3 to v4 roughly doubles the available throughput, as the arithmetic below shows.
    PCIe 5.0 servers and components should be easy to buy by around 2023.
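    A back-of-the-envelope comparison of PCIe throughput per generation (usable per-lane figures after encoding overhead; these are the commonly cited approximations, not vendor-specific specs):

        #include <stdio.h>

        int main(void) {
            /* Approximate usable GB/s per lane, one direction. */
            struct { const char *gen; double per_lane; } pcie[] = {
                { "PCIe 3.0", 0.985 },   /*  8 GT/s, 128b/130b encoding */
                { "PCIe 4.0", 1.969 },   /* 16 GT/s: double v3 */
                { "PCIe 5.0", 3.938 },   /* 32 GT/s: double v4 */
            };
            for (int i = 0; i < 3; i++)
                printf("%s  x4: %5.1f GB/s  x16: %5.1f GB/s\n",
                       pcie[i].gen, 4 * pcie[i].per_lane,
                       16 * pcie[i].per_lane);
            return 0;
        }

    Each generation roughly doubles the previous one, so a x4 device moves from about 3.9 GB/s on v3 to about 7.9 GB/s on v4 and about 15.8 GB/s on v5.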
