More Than Moore

Interview with Jim Keller, Tenstorrent

RISC-V, Chiplets, IP, PCIe Cards

Dr. Ian Cutress
Feb 24
I covered the Tenstorrent CEO/CTO switch a couple of posts back, and on my recent trip to SF I managed to get some time with Jim at HQ. He was flying between what seemed like a number of technical and customer meetings. The last time I met Jim in person was almost six years ago at an Intel architecture event, and that was a group interview so we didn't have much direct interaction - so it was good to actually sit down and talk shop about Tenstorrent's direction.


If you're unaware of Tenstorrent, the company is one of the players looking to make a name in the machine learning hardware space. Tenstorrent sees itself as a design company (see below), not simply an AI hardware company. It has already taped out two chip designs, Grayskull and Wormhole, and has a roadmap built on current and future leading-edge process nodes, with chiplets and low-power die-to-die interfaces. The company is around 270 people, based in SF, Austin, Toronto, and Serbia. It was founded in 2016, with Jim Keller as an angel investor, and Crunchbase lists Tenstorrent at $235m in VC funding to date.

The Tenstorrent team is filling up with industry experts, for example co-founder Ljubisa Bajic (CTO, ex-AMD), Wei-Han Lien (RISC-V Lead Architect, ex-Apple), Keith Witek (COO, ex-Google, ex-SiFive), and others. Jim Keller is a renowned chip architect and leader, formerly of AMD (Ryzen), Apple (A5), Tesla (FSD), Intel, SiByte, and DEC, and now at Tenstorrent.

This interview is available in video format, with the transcription below the video. Apologies for the audio quality, we perhaps should have done it in a padded cell.

*This video was updated on Feb 26th with a better audio quality version.

If you enjoy this interview, please consider subscribing to help support the channel and newsletter.


Ian Cutress: The move from CTO to CEO – can we briefly touch on how that's been for you? Officially it's been a couple of weeks.

Jim Keller: Yeah, officially. It's probably really been a couple of months. It's fine. So I came to Tenstorrent to support Ljubisa in any way I could. We're starting to expand our business, we're talking to way more customers, and we're in the process of raising some money. We're talking to investors. The next year is going to have massive distractions, and so I've hired some additional staff – you've met David (David Bennett, CCO), you've met Bob (Bob Grim, Biz Dev), also Keith Witek (COO). We're bringing a few other people on board to expand our ability to do stuff. Ljubisa's basically been on fire working on software stuff. I think for both of us it's been good – we need to go make some money, we need to execute on a bunch of things, but the fundamental challenge of the company is technology development, and the software problem is mysteriously hard, as everybody knows. We think we're making real progress, so yeah, I'd say on the whole that's been pretty good.

IC: I speak to a lot of machine learning companies and obviously they have the hardware strategy, whether that's just SIMD engines in parallel or something a bit more esoteric, and they all say the software problem is hard. So what are you guys doing differently?

JK: So I was explaining this to somebody - it's like the software is almost too easy and it sucks you into it. So if you just look across the spectrum: modern CPUs are really hard, they're out-of-order execution machines and they take a lot of pretty experienced people. But at the bottom we run C programs essentially, no matter what you do, and the hardware/software contract is really clear and really bold, and that definition is pretty good. Then you go to GPUs, where the thread engines are pretty simple but you have a thousand of them - the genius of CUDA was you write what looks like a single threaded program per thread and then there's a coordination layer to do that (and sometimes it works great, sometimes it doesn't work so great), so the hardware is much simpler than a CPU at some level but the software contract is more difficult. Then you go to AI and you say "oh, it's going to run big matrix-multiply tensors, transformations, convolutions", and the hardware at some level is similar, but it's like the inverse proportionality – on a CPU the software is simple and the hardware is hard; in AI the hardware is simple enough and the software is hard. It's harder than it has any reason to be, because of the number of operators. I've given talks on this – in AI there are only five operators: matrix-multiply, convolution, tensor transformation, T-Low, SoftMax. You could argue about some other details. But how hard could that be? Well, it's running on thousands of processors, it's got local memory, global memory, it's got communications, it's got, you name it, every problem in the book, and all these things turn out to be stupidly hard to coordinate.
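For a rough sense of just how small that operator set is, here is a minimal NumPy sketch composing those few operators into a toy "model". ReLU stands in for the nonlinearity transcribed above as "T-Low", and the shapes and weights are invented for illustration - this is not anyone's production kernel library.

```python
# A toy composition of the handful of operators Jim lists. ReLU stands in
# for the nonlinearity; everything else (shapes, weights) is made up.
import numpy as np

def matmul(a, b):
    return a @ b

def conv2d(x, k):
    # naive 'valid' 2D convolution, single channel
    h, w = x.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def transform(x, shape):
    # tensor transformation: pure data movement, no math
    return x.reshape(shape)

def relu(x):
    return np.maximum(x, 0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
x, k, w = rng.random((8, 8)), rng.random((3, 3)), rng.random((36, 10))

feat  = relu(conv2d(x, k))          # (6, 6)
flat  = transform(feat, (1, 36))    # (1, 36)
probs = softmax(matmul(flat, w))    # (1, 10)
print(probs.sum())                  # ~1.0
```

The hard part, as Jim says, isn't these five functions - it's coordinating them across thousands of cores and memories.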

So what are we doing differently? Well, we've never put hundreds of people on hand-coding some benchmark. That's been a fail. Part of the reason there are so many pivots is that there's inference and training, there are language and vision models, and big/little models, and then there are generative models, and then stable diffusion models. They all have multiple features inside of them all at once. They've got image, language, and a backward pass. You know, it's pretty complicated if you start chasing one of those. Long before you're done with that model the new thing will be out, and the hand-tuning stuff doesn't work.

So our mission is: you write AI programs, and they compile performantly. We're starting to crack that properly, and our test is that we have a library of popular models, we're running most of them performantly, and we're on our way to getting all of them there. Then the other thing is we want to scale from a single chip to many chips, and we want the software to not have many layers between how you code your AI model and how it gets deployed. We've demonstrated some models on a large number of chips, and we're working to make that more productizable – and to make pretty good products too.
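As a hedged illustration of what "one program, many chips" means in the abstract (the names and the naive round-robin placement below are invented for this sketch, not Tenstorrent's compiler), the job is essentially to take a graph of operators and decide where each one runs:

```python
# Hypothetical sketch: a model is a graph of ops, and "compiling performantly"
# means placing those ops across chips. Real compilers balance compute, memory,
# and inter-chip traffic; round-robin here just shows the shape of the problem.
from dataclasses import dataclass

@dataclass
class Op:
    name: str
    inputs: list          # names of producer ops

def place(graph, num_chips):
    placement = {c: [] for c in range(num_chips)}
    for i, op in enumerate(graph):
        placement[i % num_chips].append(op.name)
    return placement

model = [
    Op("embed", []),
    Op("attention", ["embed"]),
    Op("mlp", ["attention"]),
    Op("softmax", ["mlp"]),
]

print(place(model, num_chips=2))
# {0: ['embed', 'mlp'], 1: ['attention', 'softmax']}
```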

IC: So I've spent some time with your team here, and despite there being industry layoffs, you are hiring. With that pivot to a more focused software mentality, do you find that people coming into the company have to switch their thinking a bit from what they're used to?

JK: No, not really. So first of all our software team is relatively small. We're hiring people who really get it. So it's interesting – there are high level programmers who write JavaScript and all kinds of things. There are lots of people like that; they're hardened in their own domain. Then you sort of have system-level programmers, who know operating systems in all kinds of detail. Then you have what we used to think of as low-level programmers, who program on the hardware. Then in the compiler stack there's the same kind of thing: there are the high level language features, then the mid part of the compiler, and then the low level details. The kind of people we need are people who understand system programming and compilers pretty much top to bottom. And when you find the right people, they like it.

Then the other funny thing is that Tenstorrent started with people with FPGA backgrounds, synthesis backgrounds, and HPC backgrounds, because AI is kind of a combination of those. They don't grow on trees, right, so people come in with different things and they have to sort of be into 'I'm going to build a software toolchain where part of it looks like synthesis, part of it looks like an HPC problem, and part of it looks like low-level driver code'. We hire what we think are good people and then they have to find their place in the software stack. It's a little different from the standard kind of delineations in the software stack, but we've had good luck - we like the people we've hired.

IC: So I’ve been speaking with Wei-Han Lien, your RISC-V Chief Architect. This is a two-part question - why have you got cores in your AI chip at all, and why are they RISC-V cores?

JK: There's a couple of answers. So, RISC-V processors - first, our Tensix processor has five RISC-V cores in it, we call them the 'Baby RISCs', and they do stuff like fetch data, execute math operations, push data around, manage the NOC, and do some other things. They're RISC-V partly because we could write them ourselves and do what we want – we don't have to ask anybody for permission and we can change the Baby RISC. They actually leave some stuff out - it's pretty simplified. In a future generation we're enhancing the math capabilities and fixing a whole bunch of stuff so we can talk to the local hardware control directly. So it's ours, we can do anything we want.
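Purely to illustrate that division of labor - this is a generic producer/consumer sketch, not Tenstorrent firmware or the actual role assignment of the baby cores - picture separate cores dedicated to moving data in, doing the math, and pushing results out, coordinating over queues:

```python
# Illustrative only: three "cores" with fixed roles - one fetches data, one
# does the math, one pushes results out - coordinating through queues, the
# way dedicated data-movement and compute cores divide up a kernel.
import queue
import threading

in_q, out_q = queue.Queue(), queue.Queue()

def unpacker(tiles):
    for t in tiles:              # fetch tiles and feed the math core
        in_q.put(t)
    in_q.put(None)               # end-of-stream marker

def math_core():
    while (t := in_q.get()) is not None:
        out_q.put(t * t)         # stand-in for the actual math
    out_q.put(None)

def packer(results):
    while (r := out_q.get()) is not None:
        results.append(r)        # push results back out

results = []
workers = [threading.Thread(target=unpacker, args=(range(8),)),
           threading.Thread(target=math_core),
           threading.Thread(target=packer, args=(results,))]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(results)                   # [0, 1, 4, 9, 16, 25, 36, 49]
```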

We put RISC-V in our next generation chip, which we're taping out soon, partly because we went and asked the other vendors to add some floating point formats for us (and they said no). We're keen on AI, on floating point formats and accuracy/precision stuff, and AI programs have to support that because you want to drive the small floating point data sizes but maintain the accuracy across billions of operations. The RISC-V guys said 'sure', so we called SiFive. So that's why RISC-V is in there.
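To see why those formats matter, here is a small NumPy sketch, with float16 standing in for the narrow AI formats such as bfloat16 or FP8: keeping the data narrow is fine, but accumulating in the narrow format drifts badly over many operations, which is the accuracy-across-billions-of-operations problem he is describing.

```python
# Narrow data is fine; narrow *accumulation* is not. float16 stands in here
# for the small AI floating point formats.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(100_000).astype(np.float32)

ref = float(x.astype(np.float64).sum())          # high-precision reference

acc = np.float16(0.0)                            # accumulate in float16
for v in x.astype(np.float16):
    acc = np.float16(acc + v)

mixed = float(x.astype(np.float16).astype(np.float32).sum())  # fp16 data, fp32 sum

print("fp16 accumulate error:", abs(float(acc) - ref) / ref)  # large (the sum stalls)
print("fp32 accumulate error:", abs(mixed - ref) / ref)       # tiny
```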

Now in the future, we think, AI and CPU integration is going to be interesting. We want to be able to do what we want and I don't want to have to ask somebody for permission to add a data type or add a port from the processor to this or change how the data movement engine works - I just want to be able to do it.

IC: You're also moving into chiplets - machine learning chiplets, CPU-type chiplets.

JK: There are a couple of drivers. One is that 3nm is really expensive to tape out, everybody knows that. The other is you want to drive forward on the compute in terms of density and power efficiency, but all the IO doesn't care that much. A big driver is that the packaging technology has moved to the point where you can get fine pitch, good low power die-to-die PHYs, and do what you want. Now the dream is that you have an AI chiplet, a CPU chiplet, an NPU chiplet, support for a couple of kinds of memory controllers, a couple of kinds of PCIe controllers, and then you can build a product out of an assemblage of different chips. You could say that's kind of like a board, but now it's in a package, where the wires between chips are really short and the power efficiency and bandwidth of chip-to-chip links are really good.

We suspect the next couple of years are going to be full of growing-pain problems! We've talked to quite a few people about it, and what I think is going to happen is: if you have a solution you really want, and you're building this chip, and somebody else has built another chip, and you work together and co-simulate, the odds are good they'll work together. The cool thing then is you can build more products with [fewer] tapeouts, and I also think it's going to lower the bar for some kinds of products.

So I think it's a really good idea. A couple of things are driving it - the pressure on cost, the package availability, the willingness of people to cooperate across these domains. But like anything, this is not going to be easy. This is going to be a complete bloody mess, and I predict a year from now I'll be thinking 'geez, why did we get into this, we should have waited for somebody else to take the arrows!'

IC: The way Tenstorrent currently has its roadmap - with Wormhole [2021], Black Hole [2022], Grendel [2023], then this chiplet design, and the announcement of Ascalon with multiple design points. We think of Tenstorrent as an AI machine learning hardware company, but if you've got chiplets and all different sorts of IP, are you actually selling products or are you selling IP? What's the business strategy?

JK: So I’ve thought about this, and to me it’s relatively clean. But you know, the world's complicated! I'll give you an example – if you make a C compiler, nobody asks you if you're in autonomous driving, or industrial controls, or data center. It's a C compiler. That's a technology. That technology can be used in anything. So we're a design company - like we design hardware, we design CPUs and AI software, that's what we do. That's what everybody comes in and does.

Now, we're going to sell products. The hardware product we're going to sell is AI computers. We'll sell a chip, a board, or a system. We also have a cloud so people can come try it out and onboard. Also some people don't really want to buy computers, they want to use clouds, so that's a product itself. But we can also sell our AI chiplet and our CPU chiplet as a product. Those are technologies we've designed that people can go use to do a bunch of stuff – stuff that maybe we're not going to do.

Now the funny thing is when we showed people our plan - like, here's this AI chip, and then we're going to put in RISC-V processors because we want local compute on the die. We started by licensing a processor but then we built a team who outperformed our targets. So we have this, you know, really great CPU team, and then people said 'hey, could I buy that CPU?'. Well, I didn't really want to sell CPUs as a product, that's a big complicated market, but other people wanted to, so we're going to license the CPU. We'll also sell that CPU on a chiplet that people can embed in their own product, and that's turned into really interesting conversations. For those companies, they're like 'well, I'm building this thing', and the last thing they want to do is pay a license fee for a CPU and then have to harden it, which takes a lot of expertise. Having a CPU chiplet that they can put in their package that does what they want is much easier. And by the way, there are other people doing it, and if they want the other people, that's great.

But we're a design company. With CPUs especially, when you build a complicated CPU you want multiple targets for it, because to get it right you simulate it yourself, then you partner with somebody and they simulate it, and I promise you they find bugs you never thought about. So yeah, we've been keen to find some good technical customers for that - or, you know, they're going to be partners while we're debugging it.

But Tenstorrent, we're a design company first. We have great architects, and then the thing that's growing that's really hard is AI. The thing that, well - I'll leave it there. Next question!

IC: From my perspective, the machine learning hardware companies we think of are just providing the device - it seems like you're going after what could be the host, the device, and then all the different combinations of 'how big do you want your chiplet, how many cores, what sort of cores do you want'. This is where I think Tenstorrent is differentiating compared to some of the others in this space, and that's just a function of the expertise you have on the team.

JK: Yeah, so you have to think about it. At some point customers want solutions. So if you go into the data center, there's a server - it's really well defined - and there's the top of rack switch that's defined, and there's the SAN or the LAN. There are storage computers and network computers, and sometimes they're all integrated in the same server, and sometimes there are servers for storage, servers for interfacing into a gateway, so there's a lot of differentiation in that. Today AI is mostly accelerator cards on the server - people want disaggregated AI. So we'll see how that develops.

Building chips with AI compute and general purpose compute right next to each other means that when the AI and those guys need to talk to each other, they don't have to go out somewhere with that latency or power overhead and everything else. How that develops is going to be complicated and organic. We could make these AI chiplets - CPU and AI - and they're really busy, but they're still sitting behind a server that's fronting some other pile of applications. But there could be another application where now we have a single chip solution: you put it on the board, you hook up some sensors, you have an edge server, and it doesn't require a quote-unquote host computer. How that develops is anybody's guess.

IC: At what point will you be drinking your own Kool-Aid, using Tenstorrent hardware to develop future Tenstorrent hardware?

JK: Soon. So everybody on the planet is aware of GPT now, ChatGPT, and GitHub Copilot. GitHub Copilot has been up for a while now, and it helps people write code. I've asked our engineers to start using it. Some of them think it's ridiculously good for helping you on the easy code. Andrej Karpathy said it writes 80% of his code 80% correct, which, if you're familiar with programmers, is not bad! You'd like it to be 99. But it'll get there, it's pretty young. Our HR group used ChatGPT to write some HR policies, and she said it was really funny and pretty good.

IC: I know it’s helped with some of Tenstorrent’s social posts.

JK: So you start to use it in general - it's going to be part of the toolkit. It's very obvious it's going to be good at writing test benches and test structures, and we're thinking about it. So we have a methodology for how to build and test hardware, and build and test software - you typically build frameworks to support all the tests, and then the tests are easy to generate and they have widgets in them that check things. So now, if we train the model so it knows about our framework and those widgets, it can generate tests - it's a fairly obvious path. But it's engineering work to do.
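A hypothetical sketch of that framework-plus-generated-tests structure (all names below are invented): the harness and the checking "widget" are written once by hand, and the individual test cases are the cheap-to-generate part, whether a script or a trained model produces them.

```python
# Hypothetical test framework sketch: the checker ("widget") and harness are
# hand-written; the test cases are the part that's cheap to generate.
import random

def dut_adder(a, b):
    """Stand-in for the design under test: a 16-bit adder."""
    return (a + b) & 0xFFFF

def check_add(a, b, result):
    """Checking widget: the golden model lives here, not in each test."""
    assert result == (a + b) & 0xFFFF, f"mismatch for {a}+{b}: got {result}"

def generate_tests(n, seed=0):
    """The generated part: a few directed corners plus random stimulus."""
    rng = random.Random(seed)
    corners = [(0, 0), (0xFFFF, 1), (0x7FFF, 0x7FFF)]
    return corners + [(rng.randrange(1 << 16), rng.randrange(1 << 16))
                      for _ in range(n)]

for a, b in generate_tests(100):
    check_add(a, b, dut_adder(a, b))
print("all tests passed")
```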

We think it's going to be able to generate RTL, and then it's going to start to generate code that's different from how humans write code. Every once in a while you see these things and you cannot unsee them. AI code generation is going to make programs that are different from people's, and the computers that accelerate that the best are going to be different from the ones we build today. So us having a really solid AI foundation, solid hardware and computers and CPUs, means that as we figure that out we're going to go build new computers that are better at running that software, and that's a positive feedback loop. We have the capability to do AI hardware, software, and CPUs, with all the collateral that goes around that – with the team using AI to do it, we think we're going to be well positioned to go make those next architectural steps.

That's a fairly big intellectual statement. We're getting into it, as far as I can tell. Ljubisa and I have talked a lot about this in the last two years, and as we have got into it, it seems like we're directionally correct and we're going to keep going.

IC: Does it worry you that the initial outputs of those models create designs that are uninterpretable? So when it comes to debugging, or edge cases, you can't actually go in and fix them?

JK: So this is one of the arguments – like in autonomous driving, ‘don't you want some part of the code to be written by humans?’. Because you could audit it, and I'm thinking ‘really?’. If you have five million lines of code, written by a hundred people over five years, most of whom no longer work at your company – you think that's auditable? It’s not.

With AI, this is the really funny thing - if you have a good dataset, and you have good training, you can actually train it to a known loss function. The weird thing about a big C program is it doesn't have a loss function. You have no idea where its sharp edges are, where its complete failures are. You know, how long have they been shipping Windows for? Everybody's using it and it blue-screens on a regular basis. That's the auditable software you know? You think AI is inferior to that?
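As a minimal sketch of what "train it to a known loss function" buys you: the loss below is an explicit, reproducible number you can drive down and re-check from the data and a seed, which is exactly the scalar summary a pile of hand-written code doesn't have. (Toy logistic regression on invented data, for illustration only.)

```python
# Toy training loop: the loss is a single measurable number that summarizes
# how wrong the model is, and the whole run is reproducible from data + seed.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)       # a learnable toy task

w, b, lr = np.zeros(2), 0.0, 0.1
for step in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))      # predictions
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    w -= lr * (X.T @ (p - y)) / len(y)          # gradient step on the loss
    b -= lr * np.mean(p - y)

print("final loss:", round(float(loss), 4))     # the auditable quantity
```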

The other mysterious thing is that the people who wrote the code - human beings, who also appear to mostly have intelligence - aren't auditable either. We don't live in an auditable world. That ship sailed with Adam and Eve.

IC: I guess you could say that even modern hardware, the hardware itself, isn't auditable - think of how many features exist that aren't documented.

JK: I've learned it painfully over the years. So the most important thing is abstraction layers with good clean boundaries, because we're not getting any smarter. I've said this many times - we're not getting any smarter. We build more complicated things, but those complicated things need to be built out of components that somebody understands, and what happens a lot is you have a really great design, and then you add stuff to it for 10 years, and people come and go. At some point it starts to get fragile and people say 'wow, it's getting complicated!'. No, it just turned into a mess. Like, you didn't do the right thing – at some point you should have stopped, put your pencils down, broken it into pieces, made sure each piece had an owner who understood it, and that the interfaces between them had a human-readable set of transactions you could verify.

So hardware that's well designed is fairly predictable and understandable – unlike hardware that's been growing like a furball.  There's a paper called ‘Big Ball of Mud’, I love it. Everybody should go Google it and look at the paper because it's literally the definition of disaster for software, and hardware does the same thing.


IC: I think that goes back to the points you've made previously about ripping everything up and starting again with a new baseline more frequently than people tend to want to.

JK: Yeah, I say four to five years. The right number is really three years, because it'll take you a year to get over the fact that you have to do it!

IC: Or the year that you're still behind your previous design!

JK: Yeah, you have to - if not redesign, then refactor, and be willing to do that, because you will slowly get to a point where it's so complicated and fragile you can't touch it. Then something goes wrong, and you've literally created something you can't control.

So back to the AI point - the weird thing about AI is you have a new model, you have a new dataset, you have new error functions, and you train it. You can regenerate what it does – for the big language models it's not reasonable, but for most models it is, in a reasonable amount of time. That's something you could never do with your 5 million lines of C code, or all your RTL, or all your historical infrastructure. So AI is pointing to a future that actually moves faster, not slower.

Steve Jobs had a great quote about that: ‘the future accelerates’. So your future has to move faster, so you can't be in a situation where the more successful you are, the more ‘legacy’ you have and then the slower you move.

IC: So does that mean the most comfortable we will ever be is today?

JK: No it's yesterday! Today is uncertain, yesterday was the most comfortable you know. People love the past because they selectively edit their memories. There's no uncertainty in what happened 10 years ago – you may be sad about it or something, but you're not going to be surprised. So the future is a daunting place. I hadn't thought about it like that.

IC: Bringing it back to Tenstorrent and the reason why I'm here – with the strategy, the last time we spoke you said something that resonated with me that I've actually used in presentations before: the fact that you've got to have one core that looks like one chip that looks like one system. Has anything changed?

JK: No. Well, I thought about this a lot. If, when the world of computing started, you'd just naturally had a thousand processors, all the software development would have been about how to coordinate across a thousand processors. But from 1950 or something, to 2005 or so, most people programmed on one computer - the computer was a single thing that literally executed instructions at the program counter, and for most of that time it was doing one instruction at a time.

So now the GPU guys - by the time it's programmable, they had a thousand threads, so they were starting to think about it. But their mindset in the programming model was still a single thread of execution, replicated lots of times. We call that a vector of scalar programs, which is a curious thing. But now you want to write one program that goes and engages thousands of processors, and to be honest you have to be willing to tolerate some serious inefficiency to make that happen.
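A rough sketch of that "vector of scalar programs" idea in plain Python - this mirrors the shape of the CUDA contract, it isn't CUDA itself: the kernel is written as if it owns exactly one element, and the launcher is the coordination layer that replicates it across the whole array.

```python
# The "scalar program": written from one thread's point of view - it only
# ever touches element i. The launcher plays the role of the thousand
# hardware threads and the coordination layer.
import numpy as np

def saxpy_kernel(i, a, x, y, out):
    out[i] = a * x[i] + y[i]      # one thread's entire job

def launch(kernel, n, *args):
    for i in range(n):            # a GPU would run these in parallel
        kernel(i, *args)

n = 1024
x = np.arange(n, dtype=np.float32)
y = np.ones(n, dtype=np.float32)
out = np.empty_like(x)

launch(saxpy_kernel, n, 2.0, x, y, out)
print(out[:4])                    # [1. 3. 5. 7.]
```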

If somebody's using ChatGPT to write stuff, it does wild things like produce paragraphs that are novel, but it also figures out where to put the punctuation. Now, the punctuation could be done with a simple C program - there are two spaces, you know, at the end of some number of words; put a period or a comma in. It's not rocket science. And using ChatGPT to do something a C program can do - that's literally using petaflops of compute for a second to put in a comma, as opposed to 100 lines of C.
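Taking the comma example literally, the "simple program" version really is a few lines - a deliberately crude rule that a giant model also happens to get right as a side effect of everything else it does:

```python
# A deliberately crude punctuation rule: comma every N words, period at the
# end. The point is only that this particular job doesn't need petaflops.
def naive_punctuate(words, clause_len=8):
    out = []
    for i, word in enumerate(words, 1):
        out.append(word)
        if i % clause_len == 0 and i != len(words):
            out[-1] += ","
    return " ".join(out) + "."

text = "this is a silly rule based way to place punctuation in a run of text"
print(naive_punctuate(text.split()))
```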

But the truth is, your brain does 10^18 or 10^20 operations per second, and you're willing to ponder for half a second where to put the comma. You don't think that's inefficient? We use these massive computers to do really simple things. You know, my daughter was standing at the refrigerator trying to decide whether to have orange juice or milk - that's 10^18 operations per second for three seconds. That's 3x10^18, a really large number of operations!

IC: it reminds me of that weird statistic about how a Google search consumes enough power to boiling a kettle for a cup of tea or something.

JK: Yeah! So AI, and intelligence in general, puts everything in a domain where you can think about things and make some kind of choices, and that's a really interesting phenomenon. The fact is that, yes, now we could build a computer - and we're building a computer with a thousand chips, with 200 cores in each – that's 200,000 processors. We want to program that from a simple program, through a software stack that lets you write a program as if you're executing it in one place, and then deploys it onto something that looks like hundreds of thousands of minions doing pieces of it. It's kind of stunning to think about, but your brain is tens of billions of neurons that are essentially doing the same thing - they're all little computers coordinating with each other. So it's a natural act.

Amazon's Lambda was a thing like that - program in a high level language, deploy to thousands of cores without having to think about cores. I'm sure it's absurdly inefficient 'per thread', but it's absurdly efficient from the point of view of intention to action, which is what we're optimized for.

IC: So you just said you're building a computer with 1000 chips, 200 cores per chip. Again, another thing you said when we last spoke was that you need to find customers who want to build systems with 100 chips to go after the customers who want 1,000, then build a system of a thousand to go after customers who want 10,000.

JK: So what we don't want to do is sell to somebody who sells to somebody who sells to somebody who has a problem they can't solve. What I want to do is build AI computers that are usable and programmable, and then in the short run sell them to people who want to program AI computers. They're going to start small, you know, 10, 100, 1000 chips. Then, if this is good, they're going to scale up, and we're going to scale up. Then we'll sell to more people, and at some point we'll sell to somebody who sells to somebody. We want to go progressively and do this, because the AI software stack, and the scaling of it in every dimension, is difficult. We think we have a good plan to do that, and we've engaged with a bunch of people about it.

Scaling in multiple dimensions is useful, but going from a single core to a million cores with a software stack that can do that flexibly is the mission. At some point in time that'll be a fairly regular thing to think about, but right now it's a technical problem.

I gave Jim a mug with the logo from my YouTube channel. If you’re interested in picking one up for yourself, head on over to the Store.

IC: Have you ever thought about selling PCIe cards with your chip at retail, like a GPU, just an accelerator that people can buy off the shelf? Because I've had a lot of requests for that.

JK: Yeah, we're going to do that. About a year and a half ago we started to rev up - we ordered thousands of cards. We had a reasonable number of models running, and we thought we were going to start to sell, you know, small volumes to people who wanted to take them, experiment with them. Then basically due to the supply chain, the cards didn't show up so we had nothing to sell.

Also, we did not move steadily from like 10 models supported to 20 to 30 to 40 - we kind of plateaued, because some of our software assumptions broke as we went into more models and more complexity. So we did a fairly big pivot and rewrote our software stack deeply. Now we're running the number of models we want, we've worked through the supply chain issues, and we now have a reasonable number of cards.

I hope pretty soon we'll be able to sell, let's say, 'retail AI cards'. But I really want to do that after we go through a reasonable number of engagements with people, and we are pretty close: they go 'well, we've got these models', we ran them, they bought the computers, they installed them, they worked, and you know, they iterated and changed their software such that they're relatively happy, or they throw us some bugs.

We're going to open up our software stack so people can look at it. The weirdest thing in the world is if you buy a product with software stack, you write a program, then get an error message from a piece of software you’d never heard about. So we need to go work through that with people to say ‘here's how the software stack works, here's how the error messages are funnelled out, here's how you successfully write complicated programs, and debug, and work on it’.

But we're on the way on that, and then we will sell cards to people. The cool thing about that is then people think of all kinds of crazy things to do with them. I'm a big believer that AI is going to show up in places that nobody expected. Like, I've been in places that said ‘oh we know the five strategics, we’re going to Target them!’. When I was at SiByte, we talked to hundreds of people, we had hundreds of design wins, and some of them were unexpected. The ones that did the most revenue at the end of the day weren't the obvious ones. So I believe in a little bit of luck.

IC: I always find the people who create the pickaxes are the most surprised at where the pickaxes end up being used.

JK: That's a fairly colorful image!

IC: Regarding the current customer base and engagements – Tenstorrent haven't made any formal announcements yet. Do you think you'll be able to, this year?

JK: We're going to. But again I don’t want to announce someone. Like we have a picture of two happy customers holding our card, you know like we can have a bunch of those. But I want them to use them for a while, and get legit feedback. We don't need the hype. What we're doing, we're doing something that's really hard. You know, if you go to the moon, you don't have to make up a story about it. You just climb up to the moon and come back.

IC: You still need to hire the right people to do it though!

JK: We're on a good path. I'm happy with what we're doing. We have really solid [discussions], people like this work and this work, these guys like it, these guys hated it, we talk about that.


Below are my previous interviews, one with Jim + Ljubisa about Tenstorrent, and one with just Jim about Jim.


More Than Moore, as with other research and analyst firms, provides or has provided paid research, analysis, advising, or consulting to many high-tech companies in the industry, which may include advertising on TTP. The companies that fall under this banner include AMD, Armari, Facebook, IBM, Infineon, Intel, Lattice Semi, Linode, MediaTek, NordPass, ProteanTecs, Qualcomm, SiFive, Tenstorrent. Unless otherwise stated, this content is not sponsored.
