Every December, Marvell hosts its annual Industry Analyst Day – a chance for the company to resync everyone with the latest and greatest from the business, as well as to align its goals and products heading into the next year. It's a practice that I really like, especially with companies I interact with on an irregular basis. The 2024-into-2025 edition of Marvell's event was no different, showcasing a focus on fundamental technologies to help drive not only its traditional businesses, but also the business units currently in high demand.
In this instance, from my focal point, Marvell spoke a lot about its custom ASIC business. For those not familiar with this line of work, it's where a company with expertise in chip design and substantial IP acts as the design house for another business that wants to build a custom chip but doesn't have the resources. With the recent demand from hyperscalers to tap into this business model for their own designs, as well as future generations, companies like Marvell and its competitors are battling it out to offer value-add to these designs. For example, Marvell has been at the forefront of SerDes and connectivity IP, as well as Arm compute, making it an option for those wanting large Arm Neoverse designs or AI chips.
Marvell also had three major announcements on the day. Two related to 1.6 Tbps optics for scale-up compute fabrics and data-center connectivity, but the big one was around HBM. Demand for high-performance HBM, especially HBM3 and HBM3e, for compute-driven silicon has gone through the roof. Everyone is willing to pay, it seems, so those with the largest pockets are buying the biggest and fastest. However, with HBM4, something new comes into the mix – the base silicon on an HBM4 stack is transitioning to a leading-edge node (7nm equivalent) as it requires additional compute and functionality. This has led Marvell to announce a new custom HBM solution offering.
In conjunction with partners, Marvell can design an HBM interface that is either a superset of the JEDEC standard, i.e. one with new functionality, or a stripped-down version of the standard, i.e. one that is more efficient. Marvell states that one design it has put together serializes the I/O between the HBM and the ASIC while increasing speed, delivering greater performance with 70% lower interface power. The HBM itself still uses standard dimensions; however, with a custom compute die at the bottom of the stack, new ways of interfacing and new commands can be put into that chain for performance or power benefits.
Towards the end of the event, Marvell gave the room of 30-40 analysts a chance to put questions to the CEO, Matt Murphy. Here is a transcription of that Q&A, tidied up for readability.
Some of these questions focus on the financial performance of the business. The company had recently released its Q3 financial results, which were viewed very favourably by the street – the stock rose 23% on the back of impressive data center growth numbers, showing a 98% year-over-year revenue increase in the data center market.
Prepared Questions
The first few questions were teed up by Chris Koopmans, COO.
Q: It has been a big week for Marvell, first with earnings and now with today’s announcements. Given the market reaction, are you surprised?
A: Yeah, great question. I think this was clearly, from a content perspective and just all the stuff we're doing, really historic for Marvell. If you just look at where we've come from and where we've arrived, there's more to come.
I've been CEO for a little over eight years, as you guys know. Some of you guys run your own firms, and know it's a job where only problems come at you - you get kind of the crud beat out of you 99% of the time. Every once in a while you get a win. You get a good day or a good week, and it was clearly a good week last week.
It was a culmination, I think, of all the effort we put into this company. If you went back to our 2021 meeting we had here, and looked at the investor day we did around then, we started talking then about cloud optimized silicon. We talked about the way we believed customization was going to become a major industry trend. We had acquired Inphi at that point - we had closed that transaction earlier that year. We were poised for what we thought was going to be a massive opportunity for Marvell in the data center.
At that time, [data center] was 40% of our revenue, kind of pro forma with Inphi. What we showed you this morning, you see the growth, right? We're a 70% data center company going to 75% next year. With R&D indexed, probably 80%-plus of our OPEX next year is going to go to data center engineering. It was a big week and the share price really moved up - the valuation of the company is starting to reflect the IP and the potential.
But it didn't just happen last week, we've been investing through the curve. In this cycle, we've increased our R&D investment at Marvell every year - year in, year out, even with the big industry downturn. We didn't actually cut the R&D, [and for us] R&D has continued to grow. So the fact that we're able to sort of milestone a couple things - the first is announcing this very strategic, very impactful five year agreement with AWS.
With AWS, we're both a bigger and a more important supplier to them as a result of this custom AI silicon, as well as the widespread networking products - but they are also a key supplier to us, with Marvell using AWS as a strategic partner in the cloud for EDA. It's a huge deal and it's being done at the C-suite level. Matt Garman (CEO of AWS) has been a real partner for me and for Marvell - even when he was VP of sales at AWS, we worked with him a lot. So that was historic, and I think it's a testament to "Hey, this company's got real products, real IP". We've got real value, enough that somebody as important as them will partner with us.
Then on the earnings, it was a big beat and raise, but it's kind of what we've been saying, you know - I think when you're guiding as a public company CEO, you're not really guiding one quarter in advance - you're giving some color [to the future]. So we always felt like Q4 was going to be very strong if we could execute, with a big setup for next year - and I think that effectively just happened.
I was kind of surprised even at the end of our last quarter when our stock felt like it was underperforming. There were a lot of rumors and doubts, so I went and bought a bunch of stock on the open market. I was like "Cancel my trading plan? This is ridiculous!". We've been telling everybody what's going to happen. So it feels good, and I think from a semi-cycle perspective we're really well positioned for the next couple of years - especially with where the CapEx cycle is going and our portfolio investments.
That's a long-winded way to say that it has been a big week not only financially, but also from a product and technology standpoint. The AWS partnership is a big deal. Crossing a billion dollars in data center revenue, and shooting the lights out on the AI revenue. Everything's been going pretty much according to how we had hoped, with only a few little bumps along the way.
Q: You mentioned the AWS announcement - that obviously talked about some custom AI processors as well as our whole connectivity portfolio. That speaks to what the team talked about today. We've been talking about customization of silicon for a long time and now we're starting to see a custom roadmap and architecture for the whole data center for AI. Some of the questions that I've been hearing today are: Why now? And why is this trend happening? How do we see it projecting going forward?
A: Well the trend has been happening for a while. I'm going to reference our 2021 investor day - it was roughly slide three or something. We said something along the lines of 'Every cloud is unique'. Do you remember this? It's not just one cloud - there's a Meta cloud, there's a Microsoft cloud, there's the Amazon cloud. Within those, depending on the properties, those are going to be configured in different ways. At some point, the scale of these is such that you're going to want to drive some degree of optimization or customization to squeeze every last watt out of the system and drive the best possible TCO.
I think what's happened is the sheer scale of the CapEx driven by GenAI has made this an imperative. You just can't spend this much money - this much CapEx and OpEx on the energy side - just to feed the beast. So it's accelerating, but it's not like a new thing. This has been part of the three-to-five year trend that we've been talking about. I think what you heard from the team today is a lot about how we innovate and customize, and also innovate around scale-up and scale-out. There are even disruptions in the networking layer, which historically has kind of been a merchant type of market. That's going to have to get customized.
So I think it's been a trend that we've been on the right side of, and it probably just continues. But it's going to require very deep customer partnerships to have the trust to go ahead and do that. Obviously [it also needs] the best-in-class IP and kind of roadmap of where you're going so that it's worth that investment - because otherwise you will buy off the shelf and buy commodities and stitch it all together. I think people have figured out that's not the best way, and that's not the way you're going to solve the ultimate issue of how you monetize all this investment you made in terms of delivering AI as a service and delivering the value from GenAI. So those are some of the things at play. Nothing new, but it has definitely ramped up in its criticality.
Q: One of the things we talked about this morning was customizing the actual memory interface, and customizing HBM itself. It’s not a very common thing, and I don't know if everybody expected that today. So can you talk a little bit about the importance of this in the overall XPU market and how did Marvell take the pole position on this one in terms of leading the entire industry in this direction?
A: That was pretty cool to see. We had in-person discussions here with SK and Samsung, and I think Micron's in their flyout period. But all three have been great partners. Some of them have been working on this for a while. I think it's really born out of us being brought in by our customers as a real trusted thought partner to solve some of these problems. A clear one has been HBM and maximizing the performance of the particular configurations you're trying to build. Quite frankly, the off-the-shelf is just not going to get you there. So I think some of that was about the bottlenecks being seen, and then we say "Here's what you could do on the PHY layer, here's how we could partner on the base die, here's how we could put together the partnership". It's complicated, right?
But these are big, strong, independent companies that are driving their own thing. They're used to being a commodity. That's been their mindset, so how do you drive the commodity in one standard and sell as much as you can? That's been a shift. I like to think that, in our own way, we've been an important part of bringing that part of the ecosystem together with our customers - to think about how to solve the problem in a different way and actually get people to do something about it instead of just talking.
So all three of those partnerships I think are going very well, and I think it's going to have outsized benefits in areas right in our wheelhouse. Because it's at the I/O layer, we bring a lot to the table in terms of the strength of the IP and the company and the ability to integrate. Also with the custom accelerator engagements, being able to look at not just this roadmap, but the next-gen, and what's going to be required, and then how you pull all this together.
So while we're talking about it being a big deal, you guys in your field can understand that it really is a huge deal. It can actually make a big performance, cost, and TCO difference for our customers. So I thought that was a big deal, showcasing the importance of Marvell in this ecosystem. It's not just because we are telling you - three of the biggest guys doing the most critical memory, which is going to be one of the biggest spends, are saying we're a key thought partner on how to go together to these hyperscalers. So it's just another proof point, but also another validation point, which is very, very helpful when we're engaging at these design levels. That's what we're going after.
Q: It's another proof point, but it also shows how much things are changing. One of the questions that I've heard today is about where we are in this AI innovation cycle. Things are changing, and the data center architecture today looks so different from two, three years ago. So is it the first inning, the third inning? Where are we in this overall innovation cycle?
A: It's not just a canned statement. It really is very early. I've been doing this for a long time - I’ve been in this industry for 30 years. That’s 30 years in semiconductors. I can tell you you're at your busiest and most urgent when it's new. I remember all these different cycles, the networking cycle, the mobile phone cycle, the digital camera cycle, the cloud cycle. You name these different things that have hit, and the level of customer urgency and activity in a market that's still on the upswing is just through the roof. I'd say on this AI one, as much as we like to think like we're ahead and we're on this bleeding edge and we're doing all these things, I feel like we're behind every day, particularly in terms of the schedule the customer wants etc.
But that's great. As time goes on, you get qualified IP on XYZ process, and ask questions like how do we get TSMC to pull in that next wafer shuttle. It's a great problem to have, and I think having that massive urgency because our customers are pushing this is good. Then you have competitors - in our case it's a partner, NVIDIA. They're the big behemoth driving a very aggressive roadmap, so what Jensen and his team are doing is really good for the semiconductor industry. It's pulling everybody through to a very uncomfortable beat rate and cycle time. Getting there is absolutely exhilarating. When you work in technology and you get to work in semiconductors, and you work on something that's brand new and it arrives early? Yeah, it's stressful and you make sure that you're keeping up.
You can see what we unleashed today, a whole bunch of new stuff for you guys and all these innovations. There's more coming, wait till next year! But I think it's still early. It's not the end of Moore's Law, it's not the end of packaging, it's not the end of I/O. Maybe Moore's Law has slowed or whatever, but there's still advances going on there. But all these things you saw today, I think there's going to be a lot more to do, and that's part of the shift we make. We have seen this before, where some of the more mature markets, like enterprise networking or carrier telco 5G, or even automotive to some extent - these are all markets we focus on. But the new product cycle time beat rate is at four or five years. It's not 18 months. So it requires a different focus and a different urgency. But I feel like it's still very early relative to where we're going to be.
If you draw a line from here to like the end of the decade, I think there's still a lot more to go. There are still several more product cycles here before you even start to think about it levelling off, because the performance improvements are coming from the system. They're coming from how you integrate and think about optimizing all the components in the system. Compare that to things like mobile phones that you hold in your hand, where you can sort of do a node jump and integrate the AP and the modem, or there's these little things you can do, but it's a tiny amount of power. In this case, you're talking about kilowatts and kilowatts of power dissipation, and cabling and components. So there's just a lot of open greenfield to go after, which is exhilarating to work on. It's the hottest, coolest thing I've seen in forever.
Q: The way you know you're early is when everybody can't wait for the next one. Remember when we couldn't wait for our next PC, or our next phone? Now we're just not rushing to get it. That's where we are in this space.
A: Every time I pick this damn thing up (picks up laptop), it bluescreens. It says 'sorry, we have to reset'. I mean, it's 2024, people! I regret upgrading that PC. You know what I mean? It's still the same black honking thing. But in AI, there's no equivalent of the black piece of plastic. Everything in AI is about [speeding up time to market] – you have to know if you're behind or late, and if we gotta catch up, how do we go faster, how do we shave off weeks, how do we shave off days, do you guys have it in the fab, can we hop on it, can we save four days?
Are we serious? Yes! Sometimes it’s going to cost a few million bucks and we write the check. I mean, there's a real sense of massive urgency in this area.
Q: So you mentioned the pace being set by Nvidia. How do you feel about that? Are we competing for market share, or how do you think about the dynamics in that space?
A: No, [we're not competing] with them. They've been such a tremendous partner of Marvell for a long time on a number of different fronts, and they remain a great partner today. I've talked to Jensen about this multiple times. He's very aware of where we sit and he sits. First of all, the whole thing I think we both agree on - and you can ask him this too - is that we're both in the business of TAM creation. That's a lot different than, say, a market like PCs or something we were just talking about. The units have barely grown for decades - there's a certain amount that ships; they shipped a lot more with COVID and then they shipped a lot less, but basically the same number. So there, you're in a zero-sum game. Who's going to have what market share? You just kind of go up and down as one vendor has a better product, and Arm might take some from the x86 guys, but that's not TAM creation. That's just being in a box.
So when you're creating TAM and you're part of the growth, then you just have all this room. With what we do, even on the custom side, these hyperscalers may say they want to diversify or something, but that's for them to deal with. The reality is they have their own unique insights, they have their own unique needs. They can get better TCO if they [get custom silicon] and do it themselves. It's economically better. It makes sense for them to do it. It's not a refutation of anything that NVIDIA is doing. A lot of those custom chips that the hyperscalers make have to interoperate in the same networks as NVIDIA. So at least at the moment, with the explosion and the TAM growth, it's a very complementary relationship.
A lot of investors sometimes want to take the bait. There's a little bit of questioning about how much does NVIDIA have to lose for us to gain share in custom silicon. We respond by saying that the TAM is expanding - so the reality is some of that just by the sheer economics are going to cleave off and want to be unique and bespoke. That is what has happened in every major end market, and I think we both understand that. How much can that be? We're saying 20-25% of the accelerator market. If it does really well, maybe it's 25% and crushes it. There's still plenty of market for everybody, and again what I would commend them on is just absolutely throwing down the gauntlet on new product development and innovation cycles and beat rates.
I think it's been good for the industry and it's been something that we try to model here as well. You have to be at the leading edge, you have to focus on the technology two generations or three generations out, and get those in proof-of-concept. That silicon has to be proven so you can be ready and enable your customers to achieve their silicon ambitions. They have huge ambitions. We just want to be the guy behind the camera, you know. We're not trying to claim all the credit and pound our chest. We're happy to be the company that's the best possible partner for them, so they can hold the chip up. You can see from our financial results that if we do [the chips] well, we do [the financials] well.
We take pride in what we do. Our shareholders get rewarded. So what if somebody else got the credit? Big deal. It all depends on how you're wired. The way I'm wired, I'd rather let them take the credit. Just make sure that we feel like we actually accomplished something - and made an impact.
Audience Questions
Q: Matt, we know you're an automotive guy at heart. The Q3 numbers showed a 9% increase for automotive, but you're projecting single digit gains for the fourth quarter. Is there something going on? I mean a lot more cameras are coming in, we're going to a modular nodal architecture in the car that's going to demand much more Ethernet. 1% growth? What's going on?
A: I am an automotive guy at heart! When I was at Maxim, I ran that business, and was one of the product line guys when I was doing chips for that market in like 2002 or 2003, even before we had an automotive effort. So I've been involved in it a long time. What's happening in our business is pure inventory correction to be honest.
All those automotive companies, even on the new models, got in way over their skis, and you can still see that. Our numbers are hard to benchmark because they're small relative to the big auto semi guys, but they've all experienced to some degree a world of hurt. Some did better than others in managing the cycle. So from our standpoint, we're just looking through and asking where it ends up. We actually see very strong growth next year back into automotive. We still see line of sight to the half a billion dollar kind of bogey we gave. Actually it probably goes higher than that just because, to your point, these decisions have already been made, so it's really about how you implement them.
I would say that the big change we saw in the last few years wasn't just all the EV guys coming up, but it was the big “Detroit Auto” and European ICE vehicles moving to smart platforms. They started to really consider what’s important after the CPU and GPU, what's the next biggest critical decision we make on the architecture - and that is the networking. We said we've won 8 of the top 10 OEMs, and we're totally engaged. So the short term results, they are kind of what they are, but it's a small number, so it's a hard one to benchmark. Like if it's half a billion, then you say, okay, hey look, that's at least more of a real number at this point. But it should have very nice growth next year and the year after, for sure.
James Sanders, TechInsights: Hey Matt, a comment and a question.
First, I noticed in the report that you're happy to let your partners hold the chip up on stage and take the credit for it. Coming to this event just on the heels of AWS re:Invent, the reaction to that was quite positive, and I think you should be justifiably proud of that.
As for the question, drawing back to the start of this session, where you said CEOs are bludgeoned with problems. You've got 30 years of experience in this industry, eight years as the CEO here. What advice would you give to the future CEO of Intel?
A: I don't know who that's going to be. That was the weirdest thing, but what are you supposed to do? I actually don't know the answer to that question. I mean, I think there's a monster challenge in there, that's all I can say at this point, it's a really great question. If you guys want to grab a beer later, we can definitely do that! But the opportunity is there, and I am an optimist.
What I'll say is this. Some of you don't even remember now, but I left Maxim, which was a good company, and I was the CEO successor there. I spent 22 years there, and I loved the place. I had to make the tough decision, which I did at the time, personally and professionally - it was the toughest decision I ever made. It was to leave. I mean, Marvell was a dumpster fire at the time, you know - it was absolutely in really bad shape and we didn't have much. But we bootstrapped it and we did a whole bunch of things and then we got here. The mentality I had at that time, as a high-level adage, was that everything is fixable to a point. You just have to have that mindset, and you have to be able to really get in. I'm talking shoulder to shoulder, dig in, and micromanage the living hell out of everything. When I joined this company, I remember there were 5,000 people and me - I didn't know a single person. I didn't trust a single person. But Chris (Koopmans, COO) was here early on. I started to trust him pretty quickly.
I think one thing that served us well is that you have to have a point of view on what you're going to do. You personally. Not your team, not other people. You have to figure out what the company's problems are, and then literally create your own OS to solve them. What are you going to focus on? What are you going to KPI? What are you going to manage? What are you not going to manage? At the beginning I ran every meeting. Design review, engineering review. I wrote the earnings script, I had to get involved. There was no one to help me, and at some point you find the right people.
So, yeah, I think whoever takes that, it cannot be a corporate suit. You really need to be able and willing to get in there, meet the people. I used to do this chat with Matt every week. I met with employees every single week. I flew around the world for years meeting employees. What am I dealing with at this company? What do I need to go fix? And how do I do it? What are the people changes you need?
We went through a major turnaround. I know what it takes – it’s hard but it can be done. No problem is insurmountable. You have to make really tough decisions. In the end, when people say that, they're always the easiest, simplest decisions. What did we do with our WiFi? People told me we can't sell it, it's impossible. We sold it to NXP for a home run price. I used it to buy Avera, which got us this custom silicon $40bn TAM. I got Aquantia, it gave us the number one Ethernet PHY. There's all kinds of stuff you can go do.
So I'm an optimist. I think they'll find the right person, and they'll get in. That's my optimism. I want Intel to succeed. It's my hero company, I always looked up to it. Andy Grove was my legend, my mentor, the person I always aspired to be, and so I want this company to do well. I hope it does. But for me at Marvell, we're in the right spot now. I worked so hard to get us here - eight years grinding this thing to get to this $90-$100bn market cap - and I'm going to go drive it higher. It's motivated our team. We have a team that has worked our butts off. I'm just saying, I was super fired up at the start, and I don't know how you feel about the products and the technology and everything, but I hope the industry analysts here hear my confidence in the company, my confidence in the team, and in the market we're going after - and we'll see where it goes.
Malik Saadi, ABI Research: Do you see any business opportunity in partnering with incumbent OEMs, such as HP or Lenovo or Supermicro, or are you bypassing them and working directly with the hyperscalers? What is the opportunity for those customers?
A: Well, we're open for business, right? We have a custom team, we can go bid on things, we can go quote things - it's really up to the customer to decide if they want to take that on. I think at the moment you really have to be vertically integrated. I don't think it's going to be a matter of selling something as a service, or having something that's going to drive your own workload, to make it justified. I think it's just way too early for that. But I do think we're going to see interesting companies pop up that are going to want to do their own custom silicon. Not necessarily those OEMs, but because of the sovereignty issue - AI sovereignty and data sovereignty are going to drive a lot of this. But who's going to be next, and what that's going to look like, plays out over the next few years. I think the value prop is very compelling.
But for the traditional server manufacturers, they all very much have a legacy of what they do for a living: they put together the reference design, the reference design gets shipped, they box it up, they make it competitive. They go battle it out, and that's sort of the model. Going anything more custom just doesn't appear to be something they want to do.
Patrick Kennedy, Axautic Group: As people build these massive data centers, in order to reduce risk they're likely to pick regular components rather than go custom and take the risk of using new silicon. Does that change the business model?
A: I think it's not necessarily a custom silicon risk, so much as a question of whether there are going to be hiccups along the way. In terms of data center build-out and CapEx spend, because of the power grid limitations and because of these power density footprint issues, what we think that is driving is a lot more smaller, regional data centers. This whole DCI (Data Center Interconnect) thing you guys have followed for a long time - we pioneered DCI over a single module. I think that ends up propagating as the way people go 'I'm going to go deploy this', or 'I'm going to go spend my AI CapEx because I have to go meet the market demand', or 'I can't do it in these historical footprints because the power's too high, so I'm just going to regionalize'. Then there's the sovereignty issue as well, which is going to cause people to have more distributed data centers. So I think it's a "fits and starts" growing pain issue. But at the end of the day, if people see dollars on the other side in terms of monetizing AI, I think they're just going to figure out a way.
We got interviewed by the Wall Street Journal last week, and we get asked this question all the time about digestion, and whether that would be a potential issue. I said you've got to draw a line between here and the next ten years. We have a very strong conviction in this accelerated computing cycle. It's probably the biggest TAM creator we've seen in a long time. I very much believe in that. I think there could be things like power shortages, or maybe there are disruptions because of global conflict, or who knows what's going to pop up. But as we've shown, we're continuing to keep the R&D machine cranking, increase it every year on a consistent basis, and invest through any cycle we see - including a digestion cycle, which at the moment is the opposite of what we're seeing: we're following through on our revenue and we see a strong next year. So that's all great. If it were to slow down, okay, we'll just deal with it for a couple of quarters. But it's an up-and-to-the-right type of investment cycle.
Q: My question is related to automotive. You have all the key IP needed for an automotive SoC, especially looking at cars or vehicles that are going software defined, with a common chassis for multiple tiers and so on. This means there is enough volume for custom silicon with specific OEMs or sets of OEMs. Are you thinking of going beyond Ethernet and connectivity in automotive and into the SoC space somehow?
A: So we spent a lot of time looking at this, and in the 2020-2023 timeframe, there was a pretty big cycle of interest in RFQs, effectively custom SoCs for automotive. Arm based compute with whatever chipset solutions, their IP, our IP, all kinds of unique things. It was pretty exciting, and we were pretty fired up. We always had a risk management framework applied to those because those companies don't have the SoC people. It's really complex stuff, obviously.
They were doing it more because they didn't want to buy from Qualcomm or NVIDIA, so it's a little different than the hyperscalers doing it because they want the TCO and the benefit. It was a little bit of "we control our destiny", and capital was freer - back when it was 0% interest rates and the stock market was high and everybody was happy and cars were selling.
I'm just giving you all this background to say we were intimately involved in a number of these, engaged in a few, but never too serious. In the end, none of them really materialized, and quite frankly, a few of those that we either didn't win or didn't participate in, they've been cancelled. A lot of them, with what they were doing, they are now asking why they spent so much money on it. They’re getting nowhere, they don't have a team, they don't have the software, and that was the other part of it. People like Qualcomm and NVIDIA have really good full solutions.
So we were interested, it sounded cool. We had the compute with the Ethernet and the actual solution. We had a great pitch. But in the end, I don't think that's a real market. I don't think that market exists in any meaningful way. I think it's going to be dominated by vertically integrated merchants. Once these sort of petered out, I was like, "okay, we're kind of done with it, right?" But this is basically what happened: smart cars are going to happen, and all the people that invest in the full solution are going to do really well. Just the custom route was no bueno.
Many thanks to James Sanders for providing the audio recording for this Q&A.