Decentralised.co

Keone Hon from Monad

Monad was in the news in April 2024 for raising a little over $225 million. As of publishing, it is the largest venture round in crypto for the year. But what justifies that amount of money going into a new network? Why is it even needed?

Our latest episode is an hour-long conversation with Keone Hon. Prior to building Monad, he spent a decade as a quant working on high-frequency trading strategies. He and his crew have been building a blockchain from the ground up, with the intention of supporting large-scale financial markets. What would that take?
A custom state database, a whole new consensus mechanism and an alternative approach to transaction execution. 

Sounds like jargon? It likely does. Keone explains how these things work in fairly simple terms. Along the way, he shares notes for founders looking to raise, and on how to engage with communities while building a protocol.

Tune in for some alpha on what could be built on Monad and what it takes to make 10,000 transactions per second possible on a blockchain.

Speaker 1:

Hi, this is the Deco Podcast and I'm your host, Saurabh. Before we begin, views expressed here are personal opinions. None of this is any kind of advice, let alone financial or legal. It's a conversation about things we find interesting. This is Episode 9 with Keone Hon from Monad, co-hosted with Joel. Keone is the co-founder and CEO of Monad and has about a decade of experience in high-frequency trading. He is one of the best first-principles thinkers in our space. The discussion revolves around two broad themes: tech and community. Keone tells us about the technical innovations Monad brought about to improve performance compared to established networks like Ethereum and Solana. While talking about the tech, he also underscores the importance of building a strong community pre-product and how Monad is going about it. Hi Keone, welcome to the show.

Speaker 2:

Yeah, thanks for having me, Saurabh and Joel. My name is Keone, co-founder and CEO at Monad Labs. I've been working on Monad for over two years now, and prior to that was in the high-frequency trading space as a quant for about 10 years. When I was at Jump, our primary job was to build a really high-performance trading system from scratch that would take packets from exchanges, extract information from them, make trading decisions and turn all of that around in less than a microsecond. So there was a lot of work involved in building this really performant system, squeezing any bit of latency out of it, and then also doing so in a generalizable way, so the trading system could trade many, many different things with many different kinds of configurations.

Speaker 2:

So I did that for a number of years, joined the crypto team at Jump in 2021, spent a little bit of time there, mostly working on Solana DeFi, and while I was there I realized that there was a huge need for more performant EVM execution. And it seemed like no one was really working on this because, for various reasons, a lot of the scaling focus was around layer twos and data availability and other directions of scaling that are also quite important, but no one was working on making the execution system itself, as well as the layers below it and above it, much more efficient. So I ended up leaving Jump and then starting Monad at the beginning of 2022 with two other co-founders, and fast forward two years to now, and the team is well underway to launching the testnet and the mainnet of Monad. So Monad is a fully bytecode-EVM-compatible layer one with over 10,000 transactions per second of throughput, one-second block times and single-slot finality.

Speaker 2:

And the way that our team has been able to accomplish this is because we've rebuilt everything in Ethereum from scratch at three major layers, which are the storage layer, the execution layer and the consensus layer, and we've introduced a number of interesting optimizations, including parallel execution, a really performant state storage system which offers parallel state access, and a high-throughput consensus mechanism, which we call MonadBFT. At all the different layers of the stack, our team has spent a lot of time squeezing the latency out of the system in order to deliver a really performant system that's still fully backward compatible with Ethereum, both from a smart contract perspective and also from an RPC perspective, so that any infrastructure that's interfacing with the blockchain can also interface seamlessly. So I'll pause there. But yeah, just in summary: a really performant, fully Ethereum-compatible blockchain introducing parallel execution and other improvements.

Speaker 1:

Right. I mean, we'll get into the details of all that you mentioned, but before that, why don't we start with a 10,000-foot view of what Monad is and where it falls, without too many technical details? Where it falls in terms of the spectrum of L1s. I think you mentioned Solana and Ethereum, so we'll just use them to map where Monad falls.

Speaker 2:

Well, Monad is really a fusion of some aspects of Ethereum and some aspects of Solana. The number one priority has always been full Ethereum compatibility, because we're trying to solve problems for developers rather than introduce more headaches. So full bytecode EVM compatibility and full Ethereum RPC compatibility. In terms of similarities to Solana, I think Solana is just focused a lot on optimizing throughput and having higher performance and building things from scratch, so Monad is more similar to Solana in some of those regards, although there are a lot of differences between Monad and Solana in terms of how execution is parallelized, the format of transactions and a lot of other things. So, yeah, I think the summary is that Monad is kind of like, if Ethereum and Solana had a baby, the baby would be Monad.

Speaker 1:

Right. So just to set context, it takes EVM from Ethereum and it takes parallelization and performant execution capabilities from Solana. Would that be a decent rough start?

Speaker 2:

Yeah, that's true. Certainly the goal of parallelization, the high-level goal of utilizing the capacity of the hardware much more effectively.

Speaker 3:

Right, okay, fair enough. I'm just curious as to how your experience at the quant trading firm, at Jump, really fed into the idea that you need to have a high-performance chain that solves a lot of the issues that both the EVM and Solana have at this point, right?

Speaker 2:

Well, I joined Jump in 2013 as part of a trading team and then became head of the trading team in 2014, and then spent the next seven years constantly improving this really performant trading system. But like I said, the trading system's job was to take packets from the exchange and turn around a trading decision, whether to submit an order or cancel an order, in fractions of a microsecond. A microsecond means that you could do a million of those per second, so there's just a huge emphasis on squeezing out any bit of latency from that system. In addition, for any particular instrument that we were trading, the system would be taking in millions of packets per day, making trading decisions and sending orders out, and in aggregate I think Jump Trading overall was probably sending hundreds of millions of orders per day across all of these different exchanges. So the scale of the operation was large, and I think getting comfortable with that scale of operation, and building systems that would be robust to that scale, were overall lessons from this experience in quant trading. So that's the first thing, and then I think the second thing is being part of the crypto team at Jump.

Speaker 2:

You know, Solana was definitely a blockchain that people there were really excited about and felt made a lot of sense. And the reason why it made a lot of sense to a lot of people there was because Solana offered transactions that were fractions of a cent individually to submit and to execute, and that fee regime just makes a lot of sense to someone who, in their trading, would send, like I said, tens or hundreds of millions of orders per day, because it's only at that fee level that that scale of trading operations actually makes sense. And so, in particular, one example that was very compelling was order books on chain. Because if you were, you know, a market maker who was trying to make very tight markets, that means that you would be constantly adjusting your bids and your asks up and down as your fair value adjusted, and the tighter that market is, the more frequently you would need to adjust your quotes.

Speaker 3:

If I had to interpret off of what you said, the idea was that you wanted a high-performing chain that would allow you to trade at the scale and frequency that you could on a more centralized avenue, like, you know, the Binances of the world at that point in time, but in a sufficiently decentralized way. Is that right?

Speaker 2:

Yeah, that's exactly right. And think about the impact on the end user. For someone who was trading on Binance or, even more extreme, someone who is trading on, say, the Chicago Mercantile Exchange, which is the biggest futures exchange in the world, or NASDAQ, which is the biggest equities exchange in the world, the amount of slippage that they are used to experiencing is single-digit basis points, sometimes even less than that. Whereas in DeFi right now, if you go on Ethereum and you try to make a relatively small trade by traditional finance terms, like a $10,000 trade or a $50,000 trade, you'll end up in many markets experiencing 1% slippage or 2% slippage or even more than that. And so the experience when trading in DeFi right now is to pay a huge amount in terms of price impact in order to make a trade, and that's actually coming from the fact that there aren't market makers that are quoting really precisely and competing down the spread in order to give the end user the best possible quote.

Speaker 2:

That doesn't really exist in DeFi right now. The reason is that it'd be way too expensive for the market makers to constantly update their quotes. So I guess what I'm trying to say is, it's not just from an HFT perspective; the experienced liquidity on the taker side, for a normal retail trader trading in DeFi versus trading in CeFi, is very, very different. And we could see that the solution was to have much more efficient blockchains with much cheaper fees, ultimately to give end users a much better experience.

Speaker 3:

So I think just one last bit on this line of questions. Right, what's the end state here for a product like Monad? Do you see RWAs and stocks and more traditional assets going through this? Or is the intent to be crypto-native assets early on and sticking to that? What was the grand vision early on?

Speaker 2:

I mean, in the early days, the focus is definitely crypto-native apps and assets, but in the long term, I think that blockchain is a much more efficient method of settlement. There are a couple of things. Number one, more efficient settlement for any kind of financial transaction, whether that's equities or bonds or mortgages, what have you. Number two, more transparent. Number three, composable, so that people could build applications on top of these Lego building blocks. You could have an exchange, but then you could have another app on top of that that calls into the swap function on the exchange as a subroutine. And number four, self-custody. So all of those benefits would ultimately allow any kind of trading experience that people are doing right now on centralized exchanges to be a better experience on DeFi, as long as we can bridge the gap in terms of execution quality and cost.

Speaker 1:

Gotcha, I think that's good. Let's cover the major innovations that Monad brings to the table, and also contrast them with Ethereum and Solana. So, for example, if Monad is doing something differently, then let's just compare and contrast those things with Solana and Ethereum. Let's start with parallelization of transactions, something that can't be done in current EVMs.

Speaker 2:

So the background, in Ethereum and, I think, all other Ethereum-compatible blockchains today, is that transactions are executed one after the other, which is actually very different from how modern computers work. Modern computers have a bunch of processors and then run way more threads than they have processors, and that's what allows you to be browsing the internet with a whole bunch of tabs and also have Spotify in the background and a bunch of other things. So the fact that Ethereum is single-threaded and just processes transactions one after the other is, in many senses, a step backward relative to our understanding of how computers work today.

Speaker 1:

So Monad introduces parallel execution. Just a quick question: why did Ethereum choose serial execution in the first place? Because I'm assuming that these techniques were around when Ethereum was getting off the ground, right?

Speaker 2:

Right. So serial execution is simpler, and also the original intent of Ethereum was really to be deterministic in terms of the outcome. Determinism means that there's a block that all of the nodes in the system agree upon. That block has a linear list of transactions, like from 1 to 200 or from 1 to 300, whatever it is. And the job of all those nodes is to run all those transactions, one after the other, to get to the end state. And we want that operation to be deterministic so that the nodes continue to stay in sync with each other and agree upon all of the state.

Speaker 2:

So it's just simpler. When designing a system, you would probably first implement it in a single-threaded fashion to make sure that you have determinism, and then over time you might start to evolve it to be smarter about the execution while still being deterministic: the transactions are all still linearly ordered, and the true state is the end state as if you had run those transactions one after the other, but there are ways to be smarter about the execution, to actually parallelize while still preserving that. The way that Monad does it, the way of parallelizing that work under the hood, is more complex, and the builders of Ethereum or other Ethereum-compatible blockchains didn't really get around to that part yet. It wasn't as much of a focus as some of the other scaling directions.

Speaker 3:

Right. I think one of the things I'm just noticing here is how backgrounds feed into innovation. If my understanding is right, Keone, it's partly because you were at an HFT organization for a good while, and then you were at Jump, that you figured that this is the problem to solve, whereas a lot of the folks working on ETH in the early days just wanted to go from BTC to smart contract settlements, and then from there to faster settlements. There's always a continuity in how innovation evolves, so to speak.

Speaker 2:

Yeah, I think that's fair to say. I do think that Ethereum will definitely get there and, in fact, at Monad Labs, we want to help Ethereum get there and push improvements back to Ethereum. Ethereum development is highly decentralized. There are a lot of different parties contributing different directions of research, so I think this is all totally normal. But, yeah, it also is a significant effort to build this out and to address the bottlenecks with custom architecture.

Speaker 2:

I guess another thing to consider is, if you think about it, the huge demand for Ethereum transaction throughput only arose a couple of years ago.

Speaker 2:

It was only after DeFi summer that there was substantial capital on chain and really high usage; that was in 2020.

Speaker 2:

So you need to see the demand first, and then it takes years to actually respond to it and build something out that can address that bottleneck. So I think it's also just important to keep in mind that Ethereum is still a relatively young technology and, as everyone would agree, there's still a lot of opportunity to improve it. So, with Monad, we just ended up really going deeply down the route of improving execution, while a lot of the other focus was on other things that are orthogonal, that are also important, like layer twos and building really robust fraud-proof systems, which has turned out to be very complex and a lot of work. Or people building out validity proofs, like the ZK route, or people focusing on data availability. So there are a lot of other verticals in the scaling space that we think make a lot of sense as well. It's just that people were really focused on those for a couple of years.

Speaker 1:

Yeah, so I think we digressed a bit. Let's come back to parallelization. You were explaining to us how Monad tackles that.

Speaker 2:

Sure. So, just to emphasize, in Monad the blocks are still linear, and the transactions within each block are also still linear, and the job of the execution system is to get to the end state as if it had run those transactions one after the other, while under the hood doing so in a more intelligent way. That intelligent way we call optimistic parallel execution, and it's actually a really simple algorithm, which is basically to run a bunch of transactions all at the same time, starting from the same starting point, and keep track of the inputs to those transactions, basically any pieces of state that were read in the course of executing each transaction, as well as the outputs, i.e. any pieces of state that were mutated in the course of that transaction. And so when many transactions are run in parallel, what ends up getting generated is a bunch of pending results, one per transaction, where each pending result has a list of inputs and outputs for that transaction.

Speaker 2:

So that's the first stage: running a bunch of transactions in parallel, generating pending results. And then the second stage is stepping through those pending results in the original order of the transactions and, for each pending result, committing it if the inputs are all unchanged since that pending result was generated. If that's the case, then that pending result can immediately be committed, and if any of the inputs have changed, then that transaction immediately gets re-executed. So the thing to observe from this: every transaction gets run at most twice. If it gets run only once, it's because the execution the first time was correct and the inputs had not changed at the time of commitment, and if it's run twice, it's because there was a change to the inputs, and so that pending result, that transaction, has to get re-executed right at the time of commitment.
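The two stages Keone describes can be sketched in a few lines of Python. This is a toy illustration, not Monad's engine: the "world state" is a dict of balances, transactions declare the keys they read, and the parallel stage is simulated by running every transaction against the same starting snapshot.

```python
# Sketch of optimistic parallel execution (illustrative only).

def execute(tx, state):
    """Run one transaction; return (inputs read, outputs written)."""
    inputs = {k: state[k] for k in tx["reads"]}
    return inputs, tx["apply"](inputs)

def run_block(txs, state):
    # Stage 1: run every transaction against the same starting snapshot,
    # producing one pending result (inputs, outputs) per transaction.
    pending = [execute(tx, state) for tx in txs]

    # Stage 2: step through pending results in the original order.
    for tx, (inputs, outputs) in zip(txs, pending):
        if any(state[k] != v for k, v in inputs.items()):
            # An input changed since the pending result was generated:
            # re-execute (so any transaction runs at most twice).
            _, outputs = execute(tx, state)
        state.update(outputs)  # commit
    return state

def transfer(src, dst, amount):
    # Hypothetical helper: a balance transfer that reads both accounts.
    return {"reads": [src, dst],
            "apply": lambda inp: {src: inp[src] - amount,
                                  dst: inp[dst] + amount}}
```

Running two transfers that both spend from the same account commits the first directly and transparently re-executes the second, which is exactly the conflict case discussed below.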

Speaker 1:

Right. So these pending results, they're an intermediary step before the transactions are committed to a block, right?

Speaker 2:

Right. So the algorithm that I described is for executing a list of transactions, assuming that the list of transactions has already been determined. So that leads me, maybe, to take a step back for a second. There are four major improvements in Monad. The first one that I was just describing was the parallel execution, but the second one is what we call deferred execution, or asynchronous execution, which is the idea that nodes in Monad agree upon the official ordering of transactions, i.e. consensus, prior to executing. So the way that it works is like this.

Speaker 2:

First of all, let's talk about Ethereum.

Speaker 2:

Ethereum has 12 second block times, but the actual execution budget for each of those blocks is about 100 milliseconds, which is about 1% of the total block time.

Speaker 2:

Like, 100 milliseconds divided by 12 seconds is 1%, and so that's actually a very small portion of the block time, and the reason for that is that in Ethereum, execution and consensus are interleaved, and in Ethereum, the leader has to execute the list of transactions first before messaging out that proposal to all the other nodes, and then all the other nodes have to execute that list of transactions before voting and sending their results back, and so in an interleaved system, execution is only a small fraction of the total block time because realistically, consensus has a lot of overhead in it.

Speaker 2:

It's nodes all the way on opposite sides of the world talking to each other. So an interleaved system has this shrinking factor, where only a small fraction of the total block time can actually be allocated toward execution. In Monad, execution is moved out of the hot path of consensus into a separate swim lane. So in Monad, with this deferred execution style of organizing the work, consensus happens first: the nodes come to consensus about block number one, and then immediately two things happen in parallel. The first is that consensus starts on block two, and in parallel, execution happens over block one. The result is that we can massively raise the execution budget, because now we can take the full block time to execute, since execution is running in parallel with consensus on the next block.
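The budget arithmetic above can be written out directly. The figures are the ones quoted in the conversation (12-second Ethereum blocks, ~100 ms execution budget, 1-second Monad blocks); treating the whole slot as execution budget under pipelining is an idealization.

```python
# Execution budget: interleaved vs. pipelined (deferred) execution.

ETH_BLOCK_TIME_MS = 12_000   # Ethereum: 12-second blocks
ETH_EXEC_BUDGET_MS = 100     # ~100 ms of execution per block

# Interleaved: execution gets only a sliver of the block time.
interleaved_fraction = ETH_EXEC_BUDGET_MS / ETH_BLOCK_TIME_MS
assert round(interleaved_fraction, 4) == 0.0083   # ~1% of the block time

# Pipelined: execution of block N overlaps consensus on block N+1,
# so execution can use roughly the entire block time.
MONAD_BLOCK_TIME_MS = 1_000  # Monad: 1-second blocks
pipelined_budget_ms = MONAD_BLOCK_TIME_MS
```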

Speaker 1:

Right, but how do the nodes then understand what the transaction order should be?

Speaker 2:

You know, consensus is just getting a bunch of nodes that all have certain like stake weights or certain authorization to participate in this voting scheme.

Speaker 2:

Consensus is just those nodes coming to agreement over a payload. In Ethereum, that payload is the list of transactions, plus the Merkle root of the state tree post those transactions, plus some other information.

Speaker 2:

In Monad, the nodes are just coming to agreement about the official ordering of transactions, without the requirement that those nodes have to have executed. There's still consensus over the validity of each one of those transactions inside the block. They're still checking to make sure that transactions all have valid signatures and have the gas to pay for those transactions. All of that is still the same. But yeah, the nodes are just agreeing upon that official ordering, and it's a leader-based system. Similar to Ethereum or almost every other blockchain, the leader has authority to order the transactions, and all the other nodes are just voting on whether that ordering and the transactions inside it are valid.

Speaker 1:

Right, okay. Can you give us an example of how these parallel transactions work? In the sense that, what happens in case there is a conflicting transaction, and so on, with an example?

Speaker 2:

Sure, sure, yeah. So, like I was saying, just to lay out again the landscape of improvements: a second ago we were talking about improvement number two, which is deferred execution, the practice of splitting consensus and execution into separate swim lanes. But we're going back to improvement number one, which is parallel execution, right?

Speaker 2:

So, yeah, parallel execution is the job of taking a long list of transactions, subdividing the work onto separate threads that run in parallel, generating pending results, and then committing those pending results in the original order of the transactions: for each pending result, either committing it if there are no conflicts with the inputs, or rescheduling the work and re-executing if there was a conflict. So I think maybe the best way to explain this is just to do an example, if that's okay. Yeah, that's fine. Okay, yeah. So imagine that there are three transactions, and the starting state of the world prior to these three transactions is that my USDC account has 1,000 USDC. And then there are three transactions. Transaction one is me sending Saurabh 100 USDC. Maybe you started with zero USDC in your account. Oh, sorry.

Speaker 2:

So, yeah, the starting state of the world is, I start with 1,000, and then Saurabh and Joel both start with zero. Transaction one is me sending Saurabh 100 USDC. Transaction two is an unrelated thing, like someone minting an NFT. And then transaction three is me sending Joel 100 USDC. So, with parallel execution, what happens is all three of those transactions are run in parallel. And so the first transaction has inputs, my account at 1,000 and Saurabh's account at zero, and it has outputs, my account at 900 USDC and Saurabh's at 100. Does that make sense, right? Right, because it went from 1,000 to 900, and from zero to 100. Got it, cool. So that's fine. But then, because we're running these transactions in parallel, transaction number three will have inputs of my account at 1,000 and Joel's at zero, and then outputs of mine at 900 and Joel's at 100.

Speaker 1:

Right, whereas it should have been 800 and 100.

Speaker 2:

Right, yeah. So it should have had an input of 900 and zero, and an output of 800 and 100. It didn't have that because it wasn't aware of the fact that this other transaction had already happened and already changed the state of the world. So this is a good example of a conflict. We ran these transactions in parallel and generated these pending results. So what would happen is, we would step through those pending results. Pending result number one we would be able to immediately commit, because there are no conflicts. Transaction two, which is an unrelated thing, like someone minting an NFT, could be committed as well. And then, when we get to pending result three, we look at the inputs. That pending result started with Keone's account at a thousand and Joel's at zero. But when we compare that to the current state of the world, which is Keone's at 900, we would see that there's a conflict. So then we would have to reschedule that, right.
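The three-transaction example can be traced by hand in code. The numbers are the ones from the conversation; the dict-based "state" and the hardcoded re-execution of transaction three are toy stand-ins, just to make the conflict check concrete.

```python
# Hand-traced version of the 1,000-USDC example (illustrative only).

snapshot = {"keone": 1000, "saurabh": 0, "joel": 0}

# Stage 1: all three transactions run against the same snapshot,
# producing pending results of (inputs read, outputs written).
pending = [
    ({"keone": 1000, "saurabh": 0}, {"keone": 900, "saurabh": 100}),  # tx 1
    ({}, {"nft_minted": 1}),                              # tx 2 (unrelated)
    ({"keone": 1000, "joel": 0}, {"keone": 900, "joel": 100}),        # tx 3
]

# Stage 2: commit in order, comparing each pending result's recorded
# inputs against the now-current state.
state = dict(snapshot)
for i, (inputs, outputs) in enumerate(pending, start=1):
    if all(state.get(k) == v for k, v in inputs.items()):
        state.update(outputs)            # inputs unchanged: commit as-is
    else:
        print(f"tx {i}: conflict, re-executing")
        # Only tx 3 conflicts here, so its re-execution is hardcoded:
        # re-run the transfer against the *current* state.
        state["joel"] += 100
        state["keone"] -= 100

assert state["keone"] == 800 and state["saurabh"] == 100 and state["joel"] == 100
```

Transaction one commits directly, the unrelated mint commits directly (it read nothing that changed), and transaction three fails the input check (Keone's recorded 1,000 vs. the current 900) and is re-executed, landing at 800 / 100 / 100.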

Speaker 1:

Okay, just so that I understand correctly. So at the time of consensus, what the nodes agreed upon was that all these three transactions were valid. That is, you had enough USDC to make these two transactions. It's just that they were executed later on. Is that fair?

Speaker 2:

Yeah, so at consensus time, what the system would be checking is that all of the signatures for all three of those transactions were valid, meaning that the sender of each transaction had indeed generated a valid signature, and also that the sender has enough gas in their account, i.e. the native token, to pay for the execution, which is a slightly different distinction from whether the transaction itself will succeed. This can happen in Ethereum as well. Someone can submit a transaction that indeed has enough gas to pay for itself, but when that transaction is executed, the thing that the person was trying to do ends up failing. That's still normal. In Ethereum you see this all the time, like people trying to land a liquidation: they could pay for the gas, but the liquidation opportunity is no longer there, so the transaction failed. But in consensus, we're only commenting on the ability to pay for the gas.

Speaker 1:

Right, and it could have failed for various reasons. I mean, for example, the conditions no longer exist, or the slippage is exceeded, or whatever, right? But have we checked that you indeed have at least 200 USDC at the time of consensus, or is that being verified?

Speaker 2:

No, we don't, because USDC balances, all of that, is just from the perspective of the system. The system doesn't really know anything about USDC. It just knows about gas balances. USDC balances are just thought of as arbitrary state in a smart contract.

Speaker 1:

Got it. So all that is being checked later on at the time of execution. So, in case you did not have enough USDC, the transaction would just fail. Right, okay, that makes sense. So I think this leads us to the third and fourth innovations that Monad has talked about.

Speaker 2:

Yeah. So maybe one other thing to mention really fast on innovation number one. A common question that we get is, okay, what happens if there's a ton of transactions that are serially dependent upon each other? If you imagine the first 100 transactions are all Keone sending one USDC to another person, then, by extension of that example that I gave you, transaction one will succeed, but transactions two through 100 are all going to have to be re-executed, because every time, Keone's USDC balance at the time of the first execution is going to be different from the actual up-to-date USDC balance. So we get this question a lot, and I just want to mention that the reason why this isn't a major issue is actually because of caching.

Speaker 2:

So the single biggest bottleneck for execution is actually state access.

Speaker 2:

The actual amount of computation that is being done in a smart contract is actually very, very little and is very cheap because CPUs are quite fast.

Speaker 2:

The thing that's slow is looking up any state variables from SSD. So a thing to emphasize is that you can think of this parallel execution algorithm, which has two different steps, as follows. The first stage, where many transactions are being run in parallel, has the effect of surfacing a bunch of dependencies for all those transactions and pulling those dependencies from SSD into memory, so that when we're stepping through the transactions a second time, and we're either immediately committing each pending result or re-executing, the re-execution is much cheaper, because those dependencies are almost always in memory already, and memory lookups are much, much cheaper than looking up a value from SSD. So I just wanted to mention that, because it has a huge effect on the efficacy of optimistic parallel execution: even when you do have to re-execute, that re-execution is not a big deal, because the state is almost always in cache.
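A tiny sketch of why the re-execution is cheap: the first parallel pass pays the SSD cost to pull each dependency into a cache, and any re-execution then reads from memory. The latency figures here are illustrative orders of magnitude, not measurements.

```python
# Warm-cache effect of the first parallel pass (illustrative only).

SSD_READ_NS = 100_000   # assumed ~100 microseconds per SSD lookup
MEM_READ_NS = 100       # assumed ~100 nanoseconds per memory lookup

class CachedState:
    def __init__(self, disk):
        self.disk = disk       # stand-in for state stored on SSD
        self.cache = {}        # stand-in for state pulled into memory
        self.cost_ns = 0       # running total of lookup cost

    def read(self, key):
        if key in self.cache:
            self.cost_ns += MEM_READ_NS    # warm: served from memory
        else:
            self.cost_ns += SSD_READ_NS    # cold: served from SSD
            self.cache[key] = self.disk[key]
        return self.cache[key]

state = CachedState({"keone_usdc": 1000})

state.read("keone_usdc")       # first parallel pass: cold, pays SSD cost
first_pass = state.cost_ns
state.read("keone_usdc")       # re-execution: warm, pays memory cost
assert state.cost_ns - first_pass == MEM_READ_NS
```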

Speaker 1:

I think that's where MonadDB comes into the picture, right?

Speaker 2:

Yeah, that's right. So that's a good transition as well. The third area of optimization is the custom state database that our team has built, called MonadDB. So, again, the biggest bottleneck for execution is state access. State in Ethereum is stored inside of a Merkle tree, and in existing clients that Merkle tree is embedded inside of another database, like LevelDB or RocksDB.

Speaker 2:

And embedding that Merkle tree inside another data structure means that navigating to any node in the Merkle tree, which you can imagine as a tree traversal from the root down to one of the leaves, causes each of those node visits to itself trigger navigation inside another tree.

Speaker 2:

So you can think of it as a kind of quadratic lookup: every time you navigate down to a node on the way to the bottom, each of those node navigations triggers traversing another tree.

Speaker 2:

People refer to this as read amplification. The problem is basically that accessing a node in the Merkle tree is really inefficient, because it triggers a bunch of extra lookups on disk, since one data structure is stored inside another. With MonadDB, our team has devised a way to store the Merkle tree directly on SSD, and the effect is that there are far fewer lookups. More generally, we can utilize the capabilities of the SSD much, much more efficiently than other systems do. SSDs are actually pretty powerful; they have pretty high bandwidth. You can think of one as a bottle with a wide neck, but you need to actually use all that bandwidth. So when parallel execution is running and surfacing a bunch of dependencies that need to be pulled from SSD, we want to be able to utilize that bandwidth and make a whole bunch of queries into many pieces of state very efficiently.
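The read-amplification point can be made concrete with a back-of-the-envelope model. This is illustrative only: the tree depths below are made-up numbers, not measurements of any real client.

```python
# Toy model of read amplification. Assume a state lookup walks
# `trie_depth` Merkle-trie nodes, and that in an embedded design each of
# those node reads is itself a traversal of the backing database's own
# tree of depth `db_depth` (a LevelDB/RocksDB-style layout).

def embedded_reads(trie_depth, db_depth):
    # every trie node visited costs a full traversal of the backing DB
    return trie_depth * db_depth

def direct_reads(trie_depth):
    # trie nodes stored natively on SSD: one read per trie node
    return trie_depth

print(embedded_reads(8, 5))  # 40 disk reads for one state lookup
print(direct_reads(8))       # 8 disk reads for the same lookup
```

The multiplicative blow-up is why Keone calls it "kind of quadratic": the cost is the product of two tree depths rather than the depth of a single tree.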

Speaker 1:

And why are other chains not doing this currently, using SSDs this way?

Speaker 2:

Yeah, building a custom database from scratch is just a lot of work. The common admonishment in software engineering is: don't build your own database. And that's exactly what we did in this case, but for a very purpose-driven reason. The Merkle tree is extremely important to Ethereum, and it's really important to have really fast access to it. It's a very high-value system, so it actually is worth it to build this custom database. But it's a lot of work to do that.

Speaker 3:

Keone, I'm just curious: what would be an application that actually uses this database? Would it just be a market, like a perp DEX, or is there a specific application that you think would leverage such a database?

Speaker 2:

Yeah, I think that's a good example. Any application on Ethereum basically has some state associated with it. Think about Aave: people can deposit money, their account gets credited with whatever deposit they made, and over time they earn more and more interest. That balance needs to be stored local to the Aave smart contract, and when we talk about storing that information, that literally is the data being stored in the Merkle tree associated with the Aave smart contract's specific address. So in general, for any smart contract to work, it's stateful: it has some underlying memory of what its state of the world is in order to be useful. So any application really is utilizing this database under the hood.

Speaker 1:

Got it. And I think this leads us to the fourth innovation that you were talking about.

Speaker 2:

Sure. So the fourth innovation is at the highest layer of the stack: I think of storage as the lowest layer and execution as the middle. The highest-level piece is MonadBFT, a high-performance consensus mechanism that allows hundreds of globally distributed nodes to stay in sync with each other. MonadBFT is a derivative of the HotStuff consensus mechanism with some additional changes and improvements.

Speaker 2:

The thing to know about HotStuff is that it is a linear-communication-complexity algorithm, meaning that, in general, communication from the leader to all the other validating nodes is direct. The leader sends a block proposal to all the other validating nodes, and then all the validating nodes send their votes to the next leader. So it's a fan-out, fan-in kind of approach. That direct communication is good because it means the communication is linear in the number of nodes rather than quadratic, which is what Tendermint is: all the nodes have to message all the other nodes. Quadratic communication limits the size of the network, because if the communication complexity increases with the square of the number of nodes, it doesn't take many nodes before it becomes unwieldy.

Speaker 1:

Is that why Tendermint has, like, a 100-node limit?

Speaker 2:

Exactly, yeah.

Speaker 1:

So is there anything else that you want to share with us in terms of technical stuff about Monad? Because I want to move to other aspects as well.

Speaker 2:

So I was just explaining HotStuff as a linear-communication-complexity algorithm. Some of the other improvements MonadBFT has on top of HotStuff include pipelining. There are multiple stages of voting in HotStuff, and in MonadBFT there's piggybacking between those stages, so that, for example, stage B of block one can be piggybacked on top of stage A of block two. I don't think we need to get into all of that, but the point is that there are other improvements that matter for overall raising the throughput and lowering the latency of the system.
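The fan-out/fan-in point can be made concrete with a quick message count. This is a deliberate simplification that ignores retransmissions, timeouts, and vote-aggregation details; the node counts are arbitrary.

```python
# Per-round message counts under two communication patterns.

def fan_out_fan_in(n):
    # HotStuff-style: the leader sends the proposal to n-1 peers,
    # then each of the n-1 peers sends its vote to the next leader
    return (n - 1) + (n - 1)

def all_to_all(n):
    # Tendermint-style vote broadcast: every node messages every other
    return n * (n - 1)

for n in (100, 500):
    print(n, fan_out_fan_in(n), all_to_all(n))
# 100 nodes:  198 vs   9,900 messages per round
# 500 nodes:  998 vs 249,500 messages per round
```

The linear pattern grows gently as validators are added, while the quadratic one explodes, which is the intuition behind the network-size limit discussed above.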

Speaker 1:

I think, of all the new L1s, and even L2s, that I have seen, Monad has a pretty strong community; I've been hanging around in the Discord and on Twitter, for example. So, was that a conscious effort? How do you think of community? What even is a community to you? I ask because Joel takes this very seriously; he has grown a community to close to 6,000 people right now, so I understand that it takes a lot. I just want to understand what your thoughts are and how you went about building it.

Speaker 2:

Yeah, I think community is super important to crypto. I would even say that community is the superpower of crypto. In traditional tech, a startup will make an announcement and then two people will like it on Twitter, and it's the parents of the employees working at the startup. Whereas in crypto, people are working on new technology and there are hundreds or thousands of people cheering them on, excited about the fact that they're innovating and delivering new things to the space. So I think that's a superpower, and anyone building in crypto should acknowledge it, embrace it, and make it a core part of their strategy. We're building technology, but ultimately the technology has to expand its reach substantially, to get many more users than crypto has right now, and the way we can do that is through the community: through people who embrace the values of the mission and can transmit that throughout the world. With Monad, we valued that right from the start. Our team spent a lot of time thinking about what the most successful communities in crypto were and why that was the case, and the main insight I can share right now is that, in the early days, the community is the product. So we should have a very product-driven mindset with respect to delivering a really enjoyable experience for members of the community.
So we definitely don't want to give people a lot of work to do. A lot of times there are these questing platforms where it'll be like, write a thread about why project whatever is the greatest project ever and you'll get five points. You just end up with people doing a lot of work to write threads that no one is really going to read; they just spam everyone, and they feel like they did work, but the work wasn't really that useful. So the idea was to do the opposite of that: just make the community an enjoyable place.

Speaker 2:

You know, 2022 was a bear market. A lot of people were just looking for a home, a place to hang out and make friends, and we kept doubling down on that. Our team tried to give our community members a really good experience, while keeping out bots, keeping out spammers, and encouraging people to make genuine connections. We opened our Discord in November 2022, so fast forward 18 months, and the community now has really creative people who care a lot about the mission, care a lot about crypto, and have made a lot of friends. It's also an environment where creators can share their work and immediately have a huge audience that's having a lot of fun alongside them.

Speaker 2:

So I think it's ultimately snowballed into a movement even ahead of mainnet. It's an anecdote about the power of community and of encouraging people to make genuine connections. There's a common interest, which is that we all care about crypto and about seeing decentralization take over the world, and bringing that together in a positive environment where people can hang out, make friends, and be creative has ultimately resulted in the community's success.

Speaker 1:

Right. For those who are lurking in Discord and want the Discord to be opened up for them, do you want to share any tips?

Speaker 3:

How to join the community formally.

Speaker 2:

Yeah, yeah. How to join the community? So there's the Monad Twitter. There's the Monad meme library, which I think is linked from Twitter; it's a Telegram channel with a lot of really good memes, probably 20 memes a day.

Speaker 2:

There's the Discord.

Speaker 2:

Right now anyone can join the Discord, but some channels are gated, and the reason they're gated is just to make sure there aren't a bunch of bots or spammers in those channels detracting from the experience for all the community members.

Speaker 2:

I think there are 250,000 people in the Discord, and about 50,000 people have access to the more inner-circle channels. But at the end of the day, the goal is just to allow real people to join the Discord and keep spammers out. The way to get to that slightly more inner circle is just to be a human and hang out in what's currently called the newbie chat for a little bit. It usually doesn't take that long to get from there to full access to all the channels. But for people running into this problem, feel free to reach out to team members or any friends who are already in there, and we'll find a way to get you in. Joel, do you have any follow-up?

Speaker 3:

One of the things I was wondering: when you mentioned community, part of what you said is that these are people who truly care about decentralization. How do you actually see that? Is it because they communicate in a way that shows their interest is in decentralization? How do they signal that? How do you get a sense that this community truly cares about decentralization?

Speaker 2:

Yeah, well, I think it's people who care about our mission and are excited for updates, sharing information about the technology, about what our goal is and what we've accomplished so far. There's not a decentralization purity test or anything like that. More generally, it's people who are very passionate about crypto: they're in the space, they're on crypto Twitter all the time, they care a lot about this space, and there are different reasons for that. Some people love trading or collecting NFTs, or trading meme coins, or trying out the latest app, whatever it is. It's a really unique group of people with obviously some different interests, but who are obsessed with crypto. And what I've seen is that the Monad community has ended up attracting people who are very passionate about crypto overall and are also excited about Monad, perhaps because it's a way of substantially expanding the reach of crypto.

Speaker 3:

Understood. What happens when you come across community members who are not aligned with your vision of what the product should be? Do you have cases where there is an outspoken minority that just doesn't align with your vision?

Speaker 2:

Well, I think that's helpful as well, because there's not going to be homogeneous thought, and it's good for there to be critical feedback. When you look at the Solana community or the Ethereum community, there's a lot of passionate discussion and debate, both about technical things, like fee markets, and less technical things, like the rate of issuance or ultrasound money. I think all of that is really constructive and important to the health of the community: people care about the direction of the project and are actively debating what it should be. So I don't see that as a problem at all. If anything, it would be a sign that the project is on the right track and that the extent of decentralization and community growth is increasing.

Speaker 3:

Gotcha. I think, yeah, that's all I had on my side.

Speaker 1:

I have a couple of questions related to the broader crypto landscape and your raise. Every few days, there is a new L2 or an L1. How do you think that impacts the overall UX? And do you genuinely think there is always going to be room for new L1s in the market?

Speaker 2:

I think there's always going to be room for new L1s that meaningfully improve the overall technology surface area that we have. Right now, there's still a huge need for much more performant execution. There's a need for improvements to the consensus mechanism; people talk about how Ethereum's consensus is very overloaded right now because there are so many validators, over a million quote-unquote validators, which leads to overhead in signature processing. So there's still a lot of room for improvement in a lot of different domains, and the L1 space is a good place to make a meaningful contribution in at least one of those areas, hopefully many, to ultimately push the space forward. We're not at the edge of efficiency right now; the efficient frontier is still far from complete.

Speaker 2:

On the L2 side, since you're asking about both: my interpretation of the Ethereum community's description of the scaling roadmap is that there should be a lot of L2s.

Speaker 2:

It should actually be easy to spin up a purpose-built L2 that has a robust proof mechanism for inheriting some of the security of the L1.

Speaker 2:

So I think it is actually reasonable that there are a lot of L2s, but the economic value of each of those L2s might inherently be smaller, just because it's something you can spin up quickly that is very purpose-built and has a limited set of applications on it. Maybe the L2 doesn't have Chainlink on it, or doesn't have native USDC on it; that's okay, it's just there for a specific purpose. And as long as we have robust bridges between the L1 and the L2, and by bridges I mean the deposit contract and the mechanism for securing the assets in that deposit contract that are thus on the L2, then I think it's totally fine. So the proliferation of L2s we're seeing, and especially the fact that it's really easy to spin up an L2 using Conduit or Caldera, is a reflection of what the reality will be: a ton of L2s that are all very small, economically speaking.

Speaker 1:

Right, okay. And is there any advice you have for founders trying to raise? The recent round Monad raised was, I think, among the highest of late. Is there a playbook you can share with founders looking to raise at this moment?

Speaker 2:

Well, I think it depends on whether we're talking about the first raise or a subsequent raise. For the first raise, my advice would be to spend a lot of time mapping out the strategy: write out the different hurdles that need to be overcome and the major areas of work, and what each of those will entail, and try to be as detailed as possible. Basically, write a super detailed strategy document for the purposes of building the business, and then the pitch should be a distillation of that strategy document, a higher-level summary of what the plan is and why. A lot of times it can be tempting to focus only on the thing that's immediately ahead, and if the thing immediately ahead is raising money so you can start building the product and hiring people, then there's too much focus specifically on the pitch, when the pitch should actually just be a reflection of the reality, of the overall status quo and the plans. And the reflection will be stronger if the actual plans are more fleshed out. For subsequent raises, I think it's important to have built mindshare and to show that you can get people to care about your product.

Speaker 2:

I think that goes a long way. Any demonstrations of where things are right now, and where they can get you in the future, are obviously very helpful. Just showing traction, and there are different ways to show traction; it's not just about usage of the product. It can also be traction in the form of how strong a brand one has been able to build. For Monad, I feel like the brand is a combination of the technology and the incredible community. It depends on the product, but for a lot of crypto products a very crucial component is brand and occupying mindshare, so being able to show that you occupy a large amount of mindshare is quite important.

Speaker 1:

Makes sense. So what should we be looking forward to in terms of a public testnet, and are there any timelines on mainnet? And how would the TPS number differ later on? It's currently, I think, around 10,000. What should the expectations be around mainnet?

Speaker 2:

Yeah, well, our team is working as hard as we can to launch the public testnet, with a couple of familiar apps and also interesting new apps that could take advantage of the performance.

Speaker 1:

Right. Thanks a lot, Keone, for spending time with us. I learned a lot, and I hope the listeners do too.

Speaker 2:

Awesome. Thanks for having me.

Speaker 1:

Thank you.
