B2B Tech Talk with Ingram Micro

Episode · 6 months ago

Intel Optane Persistent Memory: The Best Kept Secret for Virtualization

ABOUT THIS EPISODE

Five years ago, Intel came up with a new chip, a silicon chip, to store data.

With this chip, users can expand their server's memory capacity far beyond what was previously possible.

Shelby Skrhak speaks with Ken Lloyd, Director of US SLED at Intel, about:

- How Optane splits the difference between NAND and DRAM

- Common uses for Optane

- How Optane works with virtualized workloads

- Why it’s great for public sector and education

For more information, contact Andrew Calabrese (andrew.calabrese@ingrammicro.com) or visit Intel Optane Persistent Memory.

To join the discussion, follow us on Twitter @IngramTechSol #B2BTechTalk

Listen to this episode and more like it by subscribing to B2B Tech Talk on Spotify, Apple Podcasts, or Stitcher. Or, tune in on our website.

...you're listening to B2B Tech Talk with Ingram Micro, the place to learn about new technology and technological advances before they become mainstream. This podcast is sponsored by Ingram Micro's Imagine Next. It's not about the destination, it's about going someplace you never thought possible. Go to imaginenext.ingrammicro.com to find out more. Let's get into it.

Welcome to B2B Tech Talk with Ingram Micro. I'm your host, Shelby Skrhak, and my guest today is Ken Lloyd, Intel's director of US SLED, or state, local and education. Ken, welcome.

Hello, Shelby. Thanks for having me.

Thank you. Well, today we're talking about virtualization with Intel Optane, but before we get into this new class of memory and what's in it for the public sector, can you give us an overview of Optane for those who aren't familiar?

I'm glad to, and I definitely hope this is helpful. There's been some confusion about what this Optane stuff is. About five years ago, Intel came up with a new type of chip, a silicon chip, to store data, ones and zeros. Up till now we've had NAND storage, which is our SSDs and the thumb drives that you know and love, and we've had memory based on DRAM. Memory is super fast and high endurance, and NAND is much, much slower and lower endurance. So we had slow but big and cheap, and then we had super fast but super expensive. Optane splits that very nicely: it provides performance on par with memory but endurance on par with NAND, and it's able to do that at a price point between those two, filling that gap nicely. And with those Optane chips, Intel is...

...making two different products. One, we make an SSD that's the world's fastest SSD, the Optane NVMe SSD. The other product, the one we're going to focus on today, is Optane persistent memory, which is a DDR4 memory stick that allows users to expand the memory component of their server much larger than ever before, with 128, 256 and 512 GB sticks.

So when we talk about the uses for persistent memory, there's the Optane SSD, and then there's the persistent memory we're talking about today. What are the most common uses, and what's driving the demand for this product that splits the difference between speed and cost?

That's exactly the perfect question, Shelby. We're at a point in time where users are demanding more memory than ever before, primarily for virtualization. They want to put more VMs on every server, and their VMs are getting bigger. They have databases, SQL Server databases, Oracle databases, SAP instances, CRM instances, and these take a lot of room. Where people used to want 512 GB, 768 GB, or potentially a terabyte of memory, I'm now talking to customers that want one to even four or eight terabytes of memory on their platform. Ironically, the most common use of Intel's persistent memory is as a non-persistent memory, just as a cheap way to get big volatile memory on the platform. And as such, the most common use is on VMware ESXi servers, loading them up with lots of memory, one...

...to four terabytes on a two-socket server. And it's not just the capability to deliver that big memory; it's that the memory is profoundly cheaper than traditional DRAM. So a customer that has big memory could get the same amount of memory at a much lower price point, or considerably more memory at the same price point. Once you get your head around what's possible, it's incredibly compelling, and very hard not to take a look at how Optane memory can help you expand your VMware footprint.

Right. Well, I don't want to gloss over this. I want to make sure I understand what you mean by persistent memory used as non-persistent memory.

Yes, and I apologize for the confusion. The product is called persistent memory, and it can be used in a persistent fashion, where if I reboot my server, the data is still in that memory. If I choose to use it in that persistent fashion, then on the server I have two distinct pools of memory: my traditional memory and my persistent memory. They're separate, so my applications have to decide where to put the data. If I flip the switch in the BIOS and treat that persistent memory as just part of my main system memory, then I only see one pool of memory, and the difference is invisible to me. You're not taking advantage of that persistent component, but you're getting large memory cheaply. And getting back to your question of what's the most common use: that is the most common use. Yes, we have customers using it in its persistent mode, but the most common use is just to get big memory with complete transparency. There are no code changes, no application changes, no operating system changes. It just looks like big memory...

...on the platform.

Something that struck me when I was reading about Optane: it's not brand new, it's been around for a few years, and in its first unveiling it was more consumer driven, while now it's more enterprise driven. Do I understand that right?

It's gone through an evolution. We came up with the chips, and the first application of them was in SSDs: SSDs for consumer devices and PCs, but also SSDs for the data center. It wasn't until our last generation of Xeon, the second-generation Xeon Scalable platform, that we introduced support for Optane persistent memory, but that was almost three years ago. And if your audience is wondering, wow, if this is so great, why did it take three years, and why are we only on our second generation of this Optane persistent memory now? It takes a long time to move the ecosystem. It's great to have a technology that changes things, but you have to have full support from the OEMs building the platforms, the operating systems running on them, and the software running on those operating systems. And that's where we are today: we have rich and complete support from the different operating systems, the different hardware manufacturers, and from one of our top customers, and that would be VMware.

Well, speaking of VMware, how does Optane work with virtualized workloads?

That's the magic here, Shelby. Using that Optane as memory, VMware does not know; it's just running on a system that happens to have more memory than it did before. If you bring up your system monitor or your tools within vSphere,...

...you'll see a system with, say, two terabytes of memory. It really doesn't know what flavor of memory that is, whether it's Optane or traditional memory. It just knows it has two terabytes to work with, and that allows you to host more VMs or bigger VMs.

When we talk about this use for the public sector, I'd love to hear some use cases or examples of how this capability has been a real game changer. It seems like an obvious question, but why is this so great for our audience, for public sector and education?

I will say right off that the value proposition transcends public sector. It's good for any customer that has a need for memory. In fact, my soundbite for this is: if memory matters, you probably want to look at what your PMM options are. If memory doesn't matter, if you don't need very much, then there might not be any opportunity here. Within public sector, what we're seeing is constant cost pressure and pressure to do more with fewer servers. So if, instead of having 25 VMware hosts in my environment, I can improve my consolidation and take that down to 15 or 20, the ROI is huge. Some of my customers in Colorado and California, state agencies, think of large water management agencies and employment agencies in the state of California, are looking at their VM density, and by improving that density they can be flat to down year on year in software licensing. I work for a hardware company, and it's great that people buy hardware, but the software is often much, much more expensive. So by reducing the number of licenses, or...

...at a minimum keeping them flat, they can tremendously reduce their budget. It's a given in our industry that the business always wants to do more with computing; it's why we all have jobs. Computers are a thousand times faster than they were just a few years ago, but the businesses want to do a thousand times more, so there's always demand. Finding ways to do more with the same or less money has been a huge advantage within the public sector.

Well, and that's it: doing more with less money. I think it's interesting, because for so long memory has been so expensive. It's been a key cost there, right?

It really has. You may have heard of Moore's law, where compute density doubles and processors get faster; things like a greeting card have more compute than the Apollo missions did. Compute power has gone up geometrically, but memory really hasn't kept pace. It hasn't dropped in cost or gotten bigger at that same rate. So now we look at servers, and memory can represent 60 percent or more of the cost of a server. Optane PMM directly addresses that need for large memory, and it reduces the cost of the memory component in that server more than any other lever you can pull, whether that's changing processors, traditional memory, network, or the storage on the box. Changing the cost of that memory component has a dramatic impact on the overall cost of the server.

Well, I like how you put that in your earlier soundbite. For so long, CTOs and technology departments have assumed that more is more expensive. If you were to speak directly to those people, and largely you are, what do you say to...

...them to make them understand just how much this has changed in the last few years, and what's possible?

I'm typically not a price seller. In my role I explain technology to people and they make a choice between technologies. But these last few months I have been going out to customers explaining the opportunity to save money, which is outside my usual role. Here's the difference, looking at list prices from one of our OEMs: 128 GB of traditional memory was almost $8,000, while 128 GB of Optane PMM was about $1,800. With that difference you can build a configuration that cuts costs so significantly that it's irresponsible to ignore. We had a design for a large agency, again here in California, that was buying 20 large-memory servers. The OEM put together two configurations, one with Optane PMM and one without. The price difference for 20 servers was nearly $1.6 million. So it's one of those things that's so big and so profound. Now, will it always be this big? It could change: if memory prices came way down, it becomes maybe less compelling, but prices would have to change a lot before it's not compelling.

And one last thing to add: look at the direction of products like VMware and Linux and Windows with regard to multiple memory flavors. VMware announced Project Capitola at VMworld last week, and feel free to google that, or use the search engine of your choice. But it's fundamentally...

...saying that in the future, we will look at systems, look at all of the available types of memory in the system, and then optimize the configuration of ESX across all those memory types. Their view of the world is that the future is going to have multiple types of memory. This is not a temporary situation, and they expect additional new types of memory to enter the market from other players as well. So we're really a bit on the front edge of this, but it's so compelling today that I want everyone to at least take a look at it: put together a couple of configurations and see what you find. I can add a couple of links in the follow-up, for VMware's post on support for Optane PMM and VMware's testing on performance. Those are two of the key concerns people have, and I want to be very up front about them. With the first generation of PMM, on the previous Xeon part, it did run slightly slower than native memory, though there was still virtually no impact on most VM workloads. But with this current generation, the throughput is the same with Optane and traditional memory, so you do get very good support, and VMware has acknowledged this in their articles.

Speaking of being on the front edge of things, we ask everyone the same question at the end of our podcast, and that's: where do you see technology going in the next year?

That's a big area, all of technology. But next year we will introduce a new generation of Xeon that will include DDR5 memory, the first Xeon with DDR5. And with that generation, you'll see the first Optane PMM DDR5 sticks, which will be profoundly faster, nearly doubling the performance of today's memory. So you'll see a big step function in memory performance...

...next year.

So for our listeners who want to find out more about what we talked about today, how can they reach out?

Well, a great place to start is with the people they're buying hardware from, as there's been a ton of training on PMM and the entire Intel Xeon product and memory family. Reach out to your Ingram rep and you can get the details. And we are there for you as well: your Intel reps in the public sector are scattered across the US and very willing to jump on a call with your rep and with you as end customers, to make sure you have the best and latest information on what technology is available.

Ken, thank you so much for joining me.

Thank you, Shelby. It was great to be here.

And thank you, listeners, for tuning in and subscribing to B2B Tech Talk with Ingram Micro. If you liked this episode or have a question, please join the discussion on Twitter with the hashtag #B2BTechTalk. Until next time, I'm Shelby Skrhak.

You've been listening to B2B Tech Talk with Ingram Micro. This episode was sponsored by Ingram Micro's Imagine Next. B2B Tech Talk is a joint production with Sweet Fish Media and Ingram Micro. Ingram Micro production handled by Laura Burton and Christine Fan. To not miss an episode, subscribe today on your favorite podcast platform.
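The list-price arithmetic Ken describes in the episode is easy to sanity-check. The sketch below uses the module prices he quotes (roughly $8,000 per 128 GB DRAM DIMM versus roughly $1,800 per 128 GB Optane PMM module); the 16-slot, 4-DRAM/12-PMM server population is a hypothetical configuration chosen only for illustration, not the actual bill of materials from the agency deal he mentions.

```python
# Assumed list prices from the episode: ~$8,000 per 128 GB of traditional
# DRAM, ~$1,800 per 128 GB of Optane persistent memory (PMM).
DRAM_PER_128GB = 8_000
PMEM_PER_128GB = 1_800

def memory_cost(dram_modules: int, pmem_modules: int) -> int:
    """Total list price of a server's 128 GB memory modules."""
    return dram_modules * DRAM_PER_128GB + pmem_modules * PMEM_PER_128GB

# Hypothetical 2 TB (16 x 128 GB) server, two ways to populate it:
all_dram = memory_cost(dram_modules=16, pmem_modules=0)   # DRAM only
mixed    = memory_cost(dram_modules=4,  pmem_modules=12)  # DRAM + PMM mix

per_server_savings = all_dram - mixed
fleet_savings = 20 * per_server_savings  # the 20-server purchase Ken cites

print(f"per server: ${per_server_savings:,}")
print(f"fleet of 20: ${fleet_savings:,}")
```

With these assumed figures the per-server difference is $74,400 and the 20-server difference is $1,488,000, the same ballpark as the nearly $1.6 million price gap Ken describes.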
