Decentralised.co

Austin Adams from Anagram

Over the last few quarters, Solana has become the preferred network to trade meme assets, tinker with DeFi and own NFTs. If you have used Solana over the past month, you have likely experienced dropped transactions. This can be frustrating for users and developers and a threat to Solana’s progress. 

In our quest to understand these performance issues better, we wanted to chat with someone who operates in the weeds of Solana.

Our guest for the episode is Austin Adams from Anagram - a crypto investment firm and think tank. He also led protocol development at Metaplex. We discuss the reasons for Solana’s congestion (separating the fact from the fiction), fixes that have already gone live, and other measures in the works that will further alleviate the problem. We also discuss the highly anticipated Firedancer client and the state of L2s on Solana. 

Austin also helps us keep up to speed on the development of ZK projects on Solana.

This is a technical discussion. If you’re new to the Solana ecosystem, we recommend reading our past articles on Solana to get adequate context. 

This episode is a must-listen if you’re a Solana user awaiting the return of smooth UX or a developer seeking to understand WTH is happening. Enjoy!


Speaker 1:

Hi, this is the Deco Podcast and I'm your host, Saurabh. Before we begin, views expressed here are personal opinions. None of this is any kind of advice, let alone financial or legal. It's a conversation about things we find interesting. Hi everybody, today I'm joined by Joel, who I hope is staying inside and keeping safe amidst the torrential rainfall in Dubai. Today we have a very special guest. We have Austin Adams from Anagram. He's a software engineer with over a decade of experience and has taken on various technical roles in his career. He's going to help normies like me try to understand many things about Solana. I'm excited to learn what exactly the issues are and what solutions are being worked on, either short-term or long-term. Yeah, we thought let's kick it off. Welcome, Austin. Thanks for being here. Yeah, thanks for having me, excited to get into it. Can we start with your journey or your role at Anagram? Because when Joel and I both took a look at LinkedIn, we didn't understand any of it.

Speaker 2:

Yeah, go ahead. Just understand the man and the mystery, right? Yeah, that's it, okay.

Speaker 3:

Yeah, I think, like a lot of devs, I think a lot of the differentiations between software engineers' job titles are pretty meaningless and it's just a collaborative meritocracy of just being able to build things, and so I always put snarky job titles on all of my things. But yeah, I worked in web applications, ran my own agency for a while, then got into the finance space. I was doing automated underwriting systems and stuff for alternative financing for home improvements in the US. It was a big energy efficiency push at that time. Then I went into IoT, where I sort of got first hands-on look at how people add the buzzword blockchain to their company's pitch deck and force their engineers to do something with blockchain. But that's kind of what got me sort of excited about it, not necessarily for the actual company I was working on, but for other things. And so when I was working there, that's where I found out about Solana in the very, very early days.

Speaker 3:

Eventually I left that company and went into stock market stuff, was there for a little while and just kind of got bored, and Solana was doing really well, and so I decided, hey, why don't I just go do stuff on Solana? So eventually people from Metaplex actually reached out to me on LinkedIn. Apparently they were able to understand my job history more than you guys and decided that I would be a good fit or whatever. And then I started leading the protocol at Metaplex, which was the biggest NFT standard on Solana. Now I think there are several standards, which is a net good for the whole chain. So here I am now. After leaving Metaplex, I'm at Anagram, where I build whatever I want and research different product experiments all over the crypto verse.

Speaker 2:

Awesome. Just so that there's context, right, Metaplex was crucial for the NFT standard on Solana, if I'm not mistaken. Do you want to share some color on the work around that and how that's transitioned since then?

Speaker 3:

Yeah, Solana started with fungible tokens and regular token accounts, which is how Solana attributes tokens to your wallet. Then some folks at the Solana Labs team decided, hey, we need to do NFTs like Ethereum has, and so they built an early prototype of that and it sort of became entrenched. It was the only thing to use, and so people used it. It was called Token Metadata and it was an extension of the basic token system on Solana. I'm not exactly sure how all of it happened, since I came on board right after some of that stuff, but it was decided to spin that out into a different company called Metaplex, which had a lot of Solana Labs or Solana Foundation people in it, but then they started hiring externally. So my main job there was to extend it, fix issues and think about what the future of the NFT protocol on Solana would be. Ultimately, we tried a lot of interesting things. We found success in some of them, and some of them we weren't able to get the community adoption for.

Speaker 3:

Some people might know about this new Metaplex standard called Core, which is a really awesome new standard. A lot of that, I won't say everything because the team has really improved on a lot of these ideas, was actually inspired by work that myself and my team of amazing protocol engineers were doing and trying to get the community to adopt maybe two years ago, to extend the utility of NFTs. At some point it turned into: how can we make these NFTs cheaper? We found that, as SOL was getting up to like 250 bucks, each NFT was costing like five bucks to mint, and while the standard itself needed to get cheaper, because of backward compatibility we obviously couldn't just change it. So the Solana Labs team and the Metaplex protocol team came together to build NFT compression and something called the Digital Asset Standard API, which is now pretty much ubiquitous all over Solana. Almost every NFT on Solana now is what's called a compressed NFT, which means it's virtually free for people to mint.
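For readers who want to see what the Digital Asset Standard API looks like in practice, here is a minimal sketch of a DAS query over JSON-RPC. It assumes your RPC provider exposes DAS methods such as getAssetsByOwner; the endpoint URL and owner address are placeholders.

```typescript
// Minimal sketch of a Digital Asset Standard (DAS) API call over JSON-RPC.
// Assumes the RPC provider exposes DAS methods such as getAssetsByOwner;
// the endpoint URL and owner address passed in are placeholders.
async function fetchAssetsByOwner(rpcUrl: string, owner: string) {
  const response = await fetch(rpcUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: "1",
      method: "getAssetsByOwner",
      params: { ownerAddress: owner, page: 1, limit: 100 },
    }),
  });
  const { result } = await response.json();
  // result.items lists the owner's assets, compressed NFTs included.
  return result.items;
}
```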

Speaker 1:

And how did that role transition into your current role at Anagram?

Speaker 3:

Yeah, I think, for whatever reason, NFTs are, I guess, full of drama, and at that point there were some reasons that I just didn't really want to continue at Metaplex. I think anybody there will tell you that the drama load was pretty high, not necessarily internally, but with Metaplex's relationships externally. I also didn't really love the use of NFTs. The reason I wanted to come to Metaplex was to see these real-world use cases represented on chain, even the boring use cases, right, receipts, purchase orders, house deeds, all these kind of things that we're kind of getting into now, but also fun, interesting experiences that developers create, like third-party extensions and stuff for games. But what we were seeing a lot was just 10K generative collections, rugs and a lot of kind of degenerate, as they say, stuff that I just didn't really care about as much.

Speaker 3:

I'm not saying it was net bad, I'm saying I didn't really care about it. But we had engineers that really loved that stuff, and so part of me stepping down and moving on was like, let's give these folks a chance to be in charge of the protocol, and I want to go build things in other ecosystems. I want to go try my hand at zero-knowledge things, of which I am a complete LARP, and I want to dive deeper into contributing to the core areas of blockchains, and then maybe eventually I'll surface back and see what we're doing with NFTs. Which is why I'm really happy about the new standards on Solana, like Nifty and Core, and Tiny SPL is kind of another new standard type of thing.

Speaker 1:

Okay, so do you currently work with the Solana Labs team with respect to developing the protocol?

Speaker 3:

No, not really.

Speaker 3:

I mean, while I'm friendly with most of those folks, I'm not directly contributing to Solana now, but I'm building in areas that Anagram, we have a team here, thinks are interesting product categories. An AI Telegram wallet that was intended to help new users onboard and try out trading strategies in a way that's just much more approachable, also testing out the Telegram distribution model, and then recently we just released an early version of a zero-knowledge proving network thing on Solana. So you have the wallet, an end-user thing, and the zero-knowledge one is definitely geared toward developers. But all along I'm trying to contribute to open source where I can, and some of this work has led into the reason Joel wanted me to even chat on this podcast, which was Solana L2s and kind of the reason for the Solana virtual machine getting ported to other areas. I think that's a good place to pivot into one of the first questions that we were discussing internally.

Speaker 2:

That is: what is wrong with Solana today? I can go to jup.ag and trade, I can buy Mad Lads, all of that is possible, right? What do you think is causing the congestion issue, and why is that such a big cause for concern?

Speaker 3:

Yeah. Well, no matter what I say, I'm sure I'll be slightly wrong, because there's a lot of nuance and a lot of things that are kind of harder to suss out, just from what Rex St. John, who is the Anza DevRel, kind of dev PR guy, seems like a great dude, has shared. But I guess boiling it down to very basic things: the transactions weren't necessarily getting to the computers that could actually pack them into a block. So the way that Solana works is you send a transaction to your RPC provider, whether that's yourself running an RPC or whether you are actually sending it straight to an RPC that also acts as a validator. So quick detour here: an RPC is like the API to Solana. A validator is someone who's actually checking the blocks, in a sense, and also has a chance at some point to become the person that packs the blocks and submits them into the actual ledger, which is called the leader on Solana. So on Solana the leader is who ultimately, there's a lot of nuance here, but ultimately puts the blocks in. And because Solana moves so fast, that leader is changing all the time, but the leader schedule is known ahead of time from the set of validators. So what happens is you send a transaction to your entry point to Solana, which, like I said, could be an RPC or an RPC that's attached or close to a validator, and then the job is for that RPC or validator to forward that to the current leader. And because of the amount of demand on Solana, what was happening is that those transactions were getting forwarded just like they normally would, but it was overwhelming the system, because the transport protocol that was being used, called QUIC, didn't have the right priorities set up. To make it more confusing, Solana had previously added this thing called priority fees, which to a kind of smooth-brain person like myself, you think, hey, if I add more fee I get a better chance of getting my transaction in, which it didn't actually work that way. That's going to be fixed soon, I believe, I think in the 1.18 releases.

Speaker 3:

But it kind of comes down to the fact that there was too much traffic and the Solana code base wasn't correctly prioritizing people with the fee. It was only really giving people that had a little bit higher than the median fee better access, but it was also sort of random; it wasn't operating the way it should. I think there are a number of other things that need to be considered here, like the connection that the RPC node or validator that's forwarding your transaction has to the leader. There wasn't as much, I guess for lack of a better term, prioritizing of those connections, and since a computer can only have so many open connections at a time, that computer needs to decide which ones it's going to accept and which ones it's going to drop.

Speaker 3:

And so part of the stability improvements that rolled out, I think, yesterday, which in my experience, which is qualitative, did make my transactions go through faster, which is nice, made it feel like the regular Solana again. Part of that was adding in stake-weighted quality of service, which, in smooth-brain terms, which are the only terms I know, essentially means the validators that are forwarding transactions to the leader with more stake are guaranteed that connection being held, in proportion to their stake. Yeah, I guess I'll just stop there, because I feel like I ranted and raved for quite a while. I think there's a lot to unpack.

Speaker 1:

So how do I know, as a user, which RPC to use, and whether that RPC has higher stake weight compared to the rest?

Speaker 3:

That's a good question.

Speaker 3:

I don't necessarily know if you do know that on the outset.

Speaker 3:

I think, as an end user, you're typically just using whatever RPC is configured in the wallet, and so the wallet team is actually forming business relationships with these RPC providers, usually multiple. For example, my two favorites, Triton and Helius, and then also another one that a lot of people use, which is ExtrNode, which is like a proxy between a bunch of them to increase the read performance and the write performance. There's a whole bunch of products in there, like Ironforge and others, but the wallet really maintains those relationships, which is kind of interesting. Some wallets allow you to put in your own RPC URL. So, for example, if you as an end user are maybe more technical and you go buy access to Triton or Helius or QuickNode yourself and you get a dedicated node, which is pretty expensive, to be honest, you could put that URL in and that may mean your transactions go through better. But the better answer to this is that it's really not transparent to the end user at all.
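For the more technical user Austin mentions, here is a minimal sketch of pointing a script at a dedicated RPC endpoint with @solana/web3.js. The URL is a placeholder, and the public endpoint is only a fallback.

```typescript
import { Connection, clusterApiUrl } from "@solana/web3.js";

// Placeholder: a dedicated RPC URL bought from a provider such as Triton,
// Helius or QuickNode; fall back to the public endpoint if none is set.
const rpcUrl = process.env.SOLANA_RPC_URL ?? clusterApiUrl("mainnet-beta");
const connection = new Connection(rpcUrl, "confirmed");

// Quick sanity check that the endpoint is reachable and reasonably fresh.
connection.getSlot().then((slot) => console.log("current slot:", slot));
```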

Speaker 1:

Okay, so users should just hope that the wallet that they are using does these things on their behalf.

Speaker 3:

Yeah, I can't say that that should be the solution for users. I'm just saying I think that's what's currently out there right now.

Speaker 1:

Yeah, that's what the user can hope for at this point, right, yeah?

Speaker 3:

Sorry, users, we love you.

Speaker 1:

I want to go back and ask you a little bit more about QUIC. I understand that it's a networking protocol, so why don't you tell us what a networking protocol is and about all the priorities that QUIC has kind of messed up at this point?

Speaker 3:

Yeah, okay. So, uh, I guess all typical disclaimers apply, which is, I've never implemented QUIC in my own code, just like I've never implemented TCP, which is another networking protocol, but I'll go for it. So a networking protocol is how two computers talk. There's something that the curious reader can go Google themselves; it's called the OSI layers. At specific layers there's the copper or the fiber optic, right, then there's the metal and silicon, and then there's the very low level, where raw electrical or analog signals are being turned into digital signals, and then on top of that there are protocols to actually handle the traffic as data from the perspective of the computer. And so TCP, which basically everyone is using, and another one called UDP, which is what we're using right now, are different strategies for handling the incoming data.

Speaker 3:

QUIC is another one of those strategies. QUIC, I think, was developed by Google, and it is a system built on that UDP strategy, which actually allows packets to not necessarily have to come in order. Packets can kind of be lost and sort of replayed, which can happen sometimes, but it was built in a way that is meant to be faster and able to handle short-circuiting those connections in the case of some sort of malicious attack on those computers. TCP, which is the other protocol, the one underlying HTTP, and I realize there are a lot of technical acronyms here, was, I think, developed not necessarily with those factors in mind, although you can achieve those same properties. The desire from Google's perspective when they built QUIC was to speed things up relative to TCP and make it better for mobile connections, which are on cell towers, dropping packets or losing signal or whatever, to allow higher-quality connections to be maintained. But there are some other properties of it that allow you to have a quality of service, which is how much you accept from one person relative to another person and how fast you feed data back and forth to those people, or to those servers that are connecting to you. So that's kind of what QUIC is, in the very smooth-brain Austin description.

Speaker 3:

Now, how that relates to Solana's issues is that with any software, you have to take it from the paper into code, and there are lots of different ways to take something from an idea into code, and different implementations of QUIC can suffer from different issues. So one part of this Solana issue was that the actual library the Solana team was using had some inefficiencies in how it handled connections and allocated memory. Another part of it is, like I said, how the Solana developers had extended that QUIC library to do the Solana functions, which is take transactions in under different priorities. So in the connection handling, like when to take a connection, how much data to allow and how much data to drop, there were just, I think, some bugs or some misses in how that related to the actual mechanisms of the Solana transaction priority system, the priority fee system.

Speaker 1:

So yeah, now that we are at the priority fee system. I understand that transactions go to the leader and only then are they sort of locally placed in parallel queues, and that is the only time when the leader has any opportunity to sort them based on priority. Am I correct so far?

Speaker 3:

I think so. Yeah, I mean, I could do a better job of following along with all the nuance of core block packing and things, but that's my understanding as it happens now, and things like Jito do change kind of how that works.

Speaker 1:

But in essence you have to be the leader in order to get the block in, right, and only at that point where the packets reach the leader are we able to determine how they are parallelized, right? And so I just want to understand how that relates to priority fees and why just increasing the priority fees does not help unclog the blockchain.

Speaker 3:

Yeah, I mean again, I think the current implementation of priority fees had some bugs in it. It just didn't work like people thought it would work. I don't know exactly how that managed to get into mainnet. It could have been some kind of mismatch between the testnet sort of traffic load and the mainnet traffic load and the way that their implementation didn't show the same properties in testnet.

Speaker 3:

But what's supposed to happen is the priority fee is supposed to determine your likelihood of getting packed into the block.

Speaker 3:

I don't believe it had any bearing on the likelihood of the validator forwarding your transaction actually being seen by the leader. So once it gets to the leader and the block is arranged, like you said, parallel queues, it's looking at which account is being read and which account is being written to. And, for those who don't know, accounts are essentially the blobs of state, or the database rows, of Solana, and so, like in traditional databases, if you have a bunch of things writing to one, someone's got to wait so that you can maintain consistency, because Solana is not an eventually consistent system; it's meant to be always consistent at the end of each transaction. So that is where I think priority fees come into play: once the transaction actually makes it to the leader, how the leader is reorganizing and rearranging the block to determine whether it goes into this block, or whether it needs to get forwarded to the next leader after that 400-millisecond window to be packed into that block.
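As a concrete anchor for the priority-fee discussion, here is a minimal sketch of how a client attaches a priority fee today, via the Compute Budget program in @solana/web3.js. The payer, recipient and fee level are placeholders, and whether the fee actually speeds up inclusion depends on the scheduler behaviour discussed above.

```typescript
import {
  ComputeBudgetProgram,
  Connection,
  Keypair,
  PublicKey,
  SystemProgram,
  Transaction,
  sendAndConfirmTransaction,
} from "@solana/web3.js";

// Illustrative only: the payer, recipient and fee level are placeholders.
async function sendWithPriorityFee(
  connection: Connection,
  payer: Keypair,
  recipient: PublicKey
) {
  const tx = new Transaction().add(
    // Price per compute unit, in micro-lamports; a higher value signals
    // higher priority to the leader's scheduler.
    ComputeBudgetProgram.setComputeUnitPrice({ microLamports: 10_000 }),
    SystemProgram.transfer({
      fromPubkey: payer.publicKey,
      toPubkey: recipient,
      lamports: 1_000,
    })
  );
  return sendAndConfirmTransaction(connection, tx, [payer]);
}
```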

Speaker 1:

Right. At this point, I think it's a good time to explain what dropped transactions are versus what failed transactions are. Yeah, because I see a lot of confusion. Maybe you can break that down for us.

Speaker 3:

Yeah, Twitter is a bombastic place and almost nothing you read there can be taken seriously, especially anything that I tweet. But dropped transactions: the leader never saw them, or maybe the packets actually hit the network card of the leader, but the software determined, I am too overloaded right now, I am not going to pick up this transaction.

Speaker 3:

A failed transaction is like business as usual. It went onto the chain, it actually ran the smart contract code, and the code determined: this is a failure mode. So it's like if I wrote a program that would fail if you sent in the number three but would succeed if you sent in the number one, and I sent in the number three. If it was a dropped transaction, I just have to resend it, or my RPC provider needs to resend it; there's a retry flow. If it actually is a failed transaction, it means it made it all the way to the leader.

Speaker 3:

The leader did all of its magical abacus stuff and it put that transaction into a block and executed the smart contract, and the smart contract was like, hey dude, you sent in a one, not a three, I'm going to fail. And the user is actually charged, you know, an entry fee for that, which is a small transaction fee. Back in the early days, before priority fees, or compute budget fees as the developers kind of see them, the fee was static. It was always like 0.0005-something SOL and it was very cheap. But obviously there's no disincentive against just outright spam at that point.
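From the client side, the distinction can be checked roughly like this: a dropped transaction never shows up on chain, while a failed one lands with an error recorded. A minimal sketch using @solana/web3.js; the exact retry policy is up to the application or RPC provider.

```typescript
import { Connection } from "@solana/web3.js";

// Rough client-side heuristic: a dropped transaction never lands on chain,
// while a failed transaction lands with an error recorded against it.
async function classifyTransaction(connection: Connection, signature: string) {
  const { value: status } = await connection.getSignatureStatus(signature, {
    searchTransactionHistory: true,
  });

  if (status === null) {
    // Never seen by the cluster (or expired before landing): safe to retry.
    return "possibly-dropped";
  }
  // Landed on chain; the program itself decided success or failure,
  // and the fee was charged either way.
  return status.err === null ? "succeeded" : "failed-on-chain";
}
```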

Speaker 1:

So you need to kind of have some economic levers to pull. So once the transaction is not dropped and it reaches the leader, basically transactions should be sorted by priority fee. Is that a correct understanding?

Speaker 3:

I think there's more nuance to that. I think there's a lot of sorting that happens. Priority fee comes into play, how much compute is being used, how big the transaction is, what accounts it's using. So all of these things have to be considered; it's a constraint-solving thing, right? You have a limited amount of compute per block, you have a limited size per block, you have a small amount of time to get that thing shoved into a good spot and sent out. So all of those things need to be taken into account.

Speaker 3:

I don't currently know, and I don't think it's the case, that, even with priority fees being fixed, if you YOLO'd some super fat transaction, meaning it took tons of compute, like 1.399 million compute units, and it had all 32 accounts, or it used all of the 1,232 bytes that you're allowed to use, even if you YOLO a huge priority fee, I don't necessarily know if that would get packed right away. It might have to be forwarded to a few leaders, or it might take a few blocks to get in there. It doesn't necessarily guarantee you'll get in right away. I mean, probably almost no transaction gets packed into the same block that's being built when you send it to the RPC. Unless you're doing some serious crazy MEV stuff, it's going to be kicked down the road until a leader can pack it into a block.

Speaker 3:

But I think all of those are constraints that need to be calculated over. But certainly there is some weight that the priority fee needs to have on your transaction. Right now people are kind of using them as, hey, if I YOLO this huge fee I'll get in, which is not the case until some of that stuff is fixed, if it's not already fixed with the latest update, which I didn't see anything about, but there's obviously a lot going on. People are just YOLOing these big fees and they're sort of wasting money.

Speaker 1:

It's 25% of the total compute right, the limit on a single transaction, if I'm not wrong.

Speaker 3:

I don't know, I don't have that number. I know that you can have 1.4 million max, and you do have to add an instruction that requests that; otherwise it's 200,000 compute units. Yeah, and I don't exactly remember what the max block compute limit is, sorry.
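The instruction Austin refers to is the compute-budget request. A minimal sketch using the numbers mentioned in the conversation (roughly 200,000 units by default, about 1.4 million as the per-transaction ceiling):

```typescript
import { ComputeBudgetProgram, Transaction } from "@solana/web3.js";

// Without this instruction the transaction gets the default compute budget
// (roughly 200,000 units per instruction); adding it requests up to the
// ~1.4 million unit per-transaction ceiling mentioned above.
const tx = new Transaction().add(
  ComputeBudgetProgram.setComputeUnitLimit({ units: 1_400_000 })
  // ...followed by the compute-heavy instructions themselves.
);
```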

Speaker 1:

Okay, the reason I ask this is that in many places I saw that when there is congestion on Solana, then everything is based on priority fees, and at that point Solana is basically like Ethereum's mempool. But I understand that, because there are limits on individual contracts as to how many compute units they can consume in a block, as well as on transactions, there will be a cap at that point where, whatever your priority fee is, you will not be able to get into a block, whereas that's not the case in Ethereum. I mean, you can have a very high gas fee and you're almost guaranteed to get into a block.

Speaker 3:

Yeah, and I think that Solana's team is working on probably not exactly replicating those economics. I mean, the two systems are different for a reason and have different strengths for a reason. But I think there's some degree of needing to allow this notion of fees, like you add more fee, you get a better chance of getting in to actually work, because that's sort of like the understanding that most people have of how that's going to work. Or they change the nomenclature and they change the user education around what priority fee means.

Speaker 1:

Right, and increasing the fee anyway doesn't help dropped transactions, right?

Speaker 3:

Not currently, no. Currently that has more to do with the stake that the validator who's forwarding your transaction to the leader has.

Speaker 1:

I think there is a fix on the way, isn't there? I don't know if it's already in or it's just a couple of days away. What are the main changes? And yeah, I think it's already in, because Joel said that his transactions are going through. So what were the couple of major changes in that fix? And yeah, I mean, we also need to talk about what Jito is doing, but first this.

Speaker 3:

Yeah, yeah, I have it pulled up. So version 1.17.31 is live, and the validators who want to upgrade are upgrading, right, they don't have to. And when you look at the percentages, you'll always see some people trailing that are more conservative. But I think at this point, I can get the numbers for you in a moment.

Speaker 3:

A good number of the validators have changed. So the fixes that are there are, one, kind of kicking out unstaked or low-staked connections, if that makes sense. So the thing that's determining which connections to keep and which connections not to is going to be like, hey, I need to have people with skin in the game, or people who are trustworthy in the ecosystem by virtue of the fact that many people have staked to them, right. Those connections I will prioritize. It doesn't mean every small staker will never get a transaction through; it just means those will have higher priority relative to their stake. I don't know the exact amounts for what's qualified as low and what's qualified as not low.

Speaker 3:

The original promise of stake-weighted QoS was that even small stakers would get their percentage of packets through, but I think that might have been abused, and so the team needed to change that, the team meaning the community, hopefully. There are a couple of other things. There's some forwarding filter stuff, which I don't really know what that is. I can only assume it's about how the validators actually send transactions on to the next leader. Maybe there's some tightening of restrictions there, or maybe there's some spam removal there, so they're not YOLOing the same stuff. And then there are just some optimizations to the actual QUIC implementation to use less memory, which means they can afford more connections and more packets coming in. I think they added some more metrics as well, to give the validators a better view into what's happening.
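To make the stake-weighted QoS idea concrete, here is an illustrative sketch only, not the actual Agave/Anza validator code: a node with a fixed connection budget grants each forwarding peer a share of connections proportional to its stake.

```typescript
// Illustrative sketch of stake-weighted QoS, not the actual validator code:
// a node with a fixed connection budget grants each forwarding peer a share
// of connections proportional to the stake attributed to it.
interface Peer {
  id: string;
  stake: number; // lamports staked to (or delegated through) this peer
}

function allocateConnections(peers: Peer[], maxConnections: number): Map<string, number> {
  const allocation = new Map<string, number>();
  const totalStake = peers.reduce((sum, p) => sum + p.stake, 0);
  if (totalStake === 0) return allocation; // nothing staked: no guaranteed slots

  for (const peer of peers) {
    // Higher-stake peers keep more simultaneous connections; unstaked peers
    // round down to zero and compete for whatever capacity is left over.
    allocation.set(peer.id, Math.floor((peer.stake / totalStake) * maxConnections));
  }
  return allocation;
}
```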

Speaker 1:

Yeah, that makes me understand, at least at a high level, what has happened. Can you walk us through what Jito's solution was and why they suspended the mempool, and the whole drama around that?

Speaker 3:

To be honest, I'm not super versed in the whole mempool thing with Jito, but I know Jito in general has a way, and it predated priority fees, I think, where you can specify the order of different transactions happening. Meaning, if you need two transactions to be right behind each other, because in the normal system you'd forward them and it's anybody's guess when this one's going to get in versus that one, but if you needed them to be right after each other in the block, essentially you could send a specific instruction,

Speaker 3:

or a specific type of transaction, a set of transactions called a bundle, that included a little tip for the Jito validator. And when a Jito validator, which is almost all of them now, I could be ignorant here, but from what I've seen almost everybody's using Jito because it's better for the validator, they make more money, when a Jito validator is the leader, it will actually take that above other constraint-solving systems and it will pack the block with those transactions in the right order. And I don't know how they handle conflicts in that. At that point I think there's got to be some priority, like whose tip was the highest, or whose tip was the earliest and highest, let's say you have two. I think there were some auctions.

Speaker 1:

Actually, at that point, yeah, there was an auction, yeah.

Speaker 3:

So I believe the mempool just had to do more with making that easier for professional trading systems. But honestly, I don't know a whole lot about that whole system, so sorry about that. Okay, no, no problem.

Speaker 1:

Yeah, and these are, I think, some of the changes that were sort of rushed because the chain was clogged and the market sort of demanded it, with meme coins and whatnot. But what's happening behind the curtains as far as fixing this on a larger time scale? I understand that Firedancer is one of those solutions, but yeah, can you just tell us how it gets better?

Speaker 3:

Yeah, I believe that 1.18 comes with some of those fixes to the priority fee mechanism. I believe it also comes with more fixes to connection handling on QUIC, and even a removal of QUIC is what I've kind of heard. It is a little bit, honestly, hard to know exactly what's happening unless you're deep in it. This is one of the things that I think, as a community, we can do better, and I know that Anza is trying to do better with their new Twitter account and DevRel. But to find out exactly what's happening, traditionally you needed to really read the PRs and hang out inside the Solana GitHub, and maybe the Discord.

Speaker 3:

But 1.18 is supposed to bring some scheduler fixes, which is the priority fee stuff, and some connection handling fixes to make things more stable. But some of these fixes will also include, like I said, new limits, as well as a relaxation of some limits. So one of the things that I saw coming down the pike, and I think this will be in 1.18, has to do with how the blocks are packed based on account conflicts. There were some overly aggressive restrictions there that, if loosened, mean, in my smooth-brain world, more blocks can be packed under normal mainnet circumstances with a lot of conflicts on accounts. I hope that makes sense. I think overall the team is highly focused on anything related to stake-weighted quality of service, so making that process more streamlined, how you handle packets from those versus people that are not staked, and then the priority fee scheduler fixes. I hope that 1.18 also includes some new syscalls for doing interesting cryptography things.

Speaker 1:

But that's just for pet projects of mine. And I mean, what I see on Twitter is that Firedancer seems like this magic potion which will change almost everything and make everything smooth. So, yeah, what really is different from the current client? And why is everyone going gaga over it? You know, like when Firedancer launches it will change everything.

Speaker 3:

I think people are really excited about Firedancer because it's another client for Solana. Of course, we also have the Zig client, meaning it's another code base that performs the same functions, which means one bug happening on one may not affect the rest. It's like when you plant potatoes and you plant all the same one, you have what happened in Ireland, you know, all the potatoes die. But if you have multiple species of potatoes, then one small issue or one disease to one species doesn't affect the others, and so I think that's a big reason that developers are excited about Firedancer. Another big reason people like me are excited is the promise of increased speed, the promise of an overall faster system. This is what I'm most excited about, and since you're asking me, I'm going to tell you: if Firedancer does go forward with this plugin system, then extending the validator might become... just a second.

Speaker 1:

What's a plugin system?

Speaker 3:

Oh, okay. One of the things that Firedancer is trying to do is take different stages of the whole Solana transaction life cycle.

Speaker 3:

There's the verification stage, there's the packing stage, there's the execution stage, there's all that connection stuff I was talking about, and make those pluggable. Meaning, instead of Jito having to take the Solana code and muck about with a bunch of it, they could have, if Firedancer was a thing at that point, just taken one block out, the default block-packing piece, and put in their own optimized block packer.

Speaker 3:

And so why I'm excited about that plugin system is that it may allow us to plug in different runtimes that can create the same sort of state transitions, and then let us move faster on innovative compute there, like cryptography stuff, or other large state modification stuff, maybe things like that. From the non-developer side of things, I think people are really excited because of the promise that Solana will have more capacity, Solana will be faster, and they won't have periodic congestion issues to deal with. And then for the maybe non-developers, but high-frequency traders, that are in the Solana world, they kind of see this as some of the table-stakes architecture that's needed to actually perform at NASDAQ level, or stock market level, trading frequency on one of these global atomic state machines, or blockchains as we know them.

Speaker 1:

How does it increase capacity exactly?

Speaker 2:

Sorry, I just had this one question. Two questions, actually. I know you guys are excited about the finance infrastructure angle, but would it ever have a token? How do you invest or speculate on that?

Speaker 3:

Will Firedancer ever have a token?

Speaker 2:

How can retail get involved?

Speaker 3:

No, I have no idea, honestly. I mean, anything's possible, but it kind of feels like the benefit that Jump and other high-frequency trading teams would get out of just being able to move SOL and USDC and all these other tokens faster might be enough for them. Whether launching another token might be interesting, I really don't know.

Speaker 3:

I think there's already an incentive to run a Firedancer validator, because their promise is lower compute requirements but higher performance, because they're optimizing at the CPU instruction level. Which also, you know, if we want to talk about the drawbacks: me, as a zero-knowledge LARPer and a somewhat-okay Rust dev, right, I can go into the Solana code base and know exactly what's happening, whereas if I jump into the Firedancer code, it's going to take me longer to figure out what's going on. And so the contribution aspect of Firedancer might just not really be there, because you may need to be a C++-level, you-dream-about-C-code type of person in order to get past the contribution gate. So those are maybe some of the negatives. There are other negatives, I think, but I wouldn't be surprised if they launch a token. It's definitely not on my bingo card, but I don't think anything surprises me in crypto at this point.

Speaker 2:

Understood, I think, which leads me to my second question, right? You were talking about NASDAQ scale. What comes closest to NASDAQ scale today? Was Solana at peak trading volume close to NASDAQ scale?

Speaker 3:

How do you quantify it?

Speaker 3:

How close are we to it?

Speaker 3:

It's like how much volume you can do in a second, or how much complicated arbitrage and how many composable financial products you can hit in a single transaction. I think all of those are helped by a complete re-architecture of the core system to make it more performant and higher throughput, because along with increasing transactions per second, it could also increase the bandwidth.

Speaker 3:

So you could maybe raise the block compute limit, or you could raise the transaction size limit, or you can add other optimizations, such as the thing where one Solana smart contract calls out to another one. Right now that's very high overhead, but there are ways to make it much lower overhead, and that's something Solana Labs, or Anza, is working on with Program Runtime V2, which, who knows if that's real anymore, I think it is. But with Firedancer, because they're rebuilding the smart contract execution system from scratch, they have a chance to make that also very low overhead, and so they could raise the limit on how much you could do there. So yeah, I hope that answers it. For me it's TPS; for other people it would be the amount of financial products you can hit in a single transaction, which has more to do with the bandwidth of the network.

Speaker 1:

Understood. How does this client increase the capacity?

Speaker 3:

I believe it just comes down to efficiency. So just take one small thing, and maybe it's not small, but one thing. On typical computers, like the ones we're all using right now to record this podcast, maybe 99% of the time the network card receives packets and those go into this thing called the kernel, and then the kernel has a way to do what's called an interrupt at the application level. So they're very separate. The kernel is this protected system inside the computer. The application level is what I can go code in my normal day-to-day, and that's where I want to code, because I don't want to write Linux kernel code, because it'll take me a thousand years to get it into the kernel, just like it'll take me a thousand years to get an EIP into Ethereum. Sorry, that was a small joke.

Speaker 1:

What's the kernel?

Speaker 3:

Okay, yeah, the kernel. It's like the real central nervous system of a computer, and it's what interfaces with applications at the very low level. So when you read a file, what you're doing is your application is actually talking to the kernel, which is then talking to the hard drive, or the NVMe, the chip that holds the data. Same with the network card. One thing that Linux, which is an open-source operating system kind of like Windows or Mac, has, and I think a lot of these operating systems have it, is called user-space networking, where the network card, which receives the signals from the internet, from the other computers in Solana's case, instead of going through the kernel, sends them directly to the application, and the application is in charge of deciding what to do with all that raw information, and that will create some speedups. Right, user-space networking will create much lower-overhead networking. It also means you have to really know what you're doing, which the Firedancer team does; they've been doing this for a while. But that's an example of just one of the areas where they're rethinking how the entire client is built. To the extent that, you know, think of the Anza client as like my Toyota Tundra, which is supposed to get like 30 miles to the gallon but actually gets 15.

Speaker 3:

Because when it's actually running, it's using more stuff than maybe it needs to. In some cases that may be good. In other cases that just may be because they were able to put the Tundra together faster, or something like that, or they were able to do it in a way where they didn't have to think as much about the safety, because they have these extra buffers which use extra stuff. Whereas the Firedancer client is maybe like a Lamborghini: it's slimmed down and they've squeezed every bit of efficiency out of it. I mean, there are definitely trade-offs there, but the fact that it uses fewer resources means more resources are available for that software to do more with the same hardware. I hope that makes sense.

Speaker 1:

Okay, that was probably one of the best explanations of Firedancer that I have heard, so thanks a lot for that, because whenever I've read about it or heard about it, it's just, you know, it bypasses the kernel, and that is all that I understood from it. But here I think you really nailed that one. I think this is one of my last questions: what's happening with ZK and Solana? I mean, I saw that there is some zero knowledge plus Solana work. So are we building an L2, or is it essentially being used just for compression? Please go ahead.

Speaker 3:

Okay, yeah, ZK on Solana is happening in lots of different places, and probably places I don't know about as well, although I feel pretty connected with a lot of the major teams doing ZK stuff. I'd say probably the most public and cool initiative that's happening is this idea of ZK compression. I'm not even fully read in on all of the details, but essentially it's: take any state or any data that's stored on Solana, and you are able to compress it, meaning take it sort of off-chain, temporarily or forever or whenever, and be able to apply modifications to it while posting what we call a commitment to that on-chain. And we do that in zero knowledge, and what I mean by zero knowledge here is not that there's no knowledge at all of the system, but that you can prove these transformations, you can prove the data is valid. You can prove a lot of state on Solana that you as a developer don't want to keep paying for, and also, as the network engineers or the Solana core engineers would love to see, a bunch of old, stale state being compressed. Although I don't know if there's a campaign to actually take state that hasn't been touched in a while and just sort of compress it without the permission of the developers.

Speaker 3:

I think there are some rumblings about that, but I'm not 100% sure.

Speaker 3:

That would be one of the bigger zero-knowledge things that's happening on Solana.

Speaker 3:

The other ones, all of these things, I would categorize more in a bucket called verifiable compute, where zero knowledge is more of a buzzword and a misnomer to indicate that in these verifiable compute schemes you don't have to push every bit of the data to the place where it's going to be verified. In this case, a lot of zero knowledge applications, including the one that we at Anagram built called Bonsol, which leverages RISC Zero on Solana. It's not an L2. It's just, I guess, like a relay, or think of it maybe as an AVS sort of pattern, where you can run some compute, verify that it operated correctly and over the specific inputs that you as the end user or developer were desiring, and then it's integrated into Solana to do further actions once that proof has been validated. So in this case, Solana is the place where this compute might be verified, and off-chain is where the stuff that you want to happen, that might be too complex or too heavy or too large to do on-chain, is actually being done and then verified on-chain.
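Here is a hypothetical sketch of that prove-off-chain, verify-and-act-on-chain flow. None of these names come from Bonsol or RISC Zero; requestProof, waitForProof and submitProofToVerifierProgram are stand-ins for whatever relay and on-chain verifier interface an actual integration exposes.

```typescript
// Hypothetical sketch of the "prove off-chain, verify on-chain, then act"
// pattern. These functions are stand-ins, not the Bonsol or RISC Zero API.
type ProofJob = { jobId: string };
type ProofResult = { proof: Uint8Array; publicOutput: Uint8Array };

declare function requestProof(args: { imageId: string; input: Uint8Array }): Promise<ProofJob>;
declare function waitForProof(job: ProofJob): Promise<ProofResult>;
declare function submitProofToVerifierProgram(result: ProofResult): Promise<string>;

async function runVerifiableJob(input: Uint8Array): Promise<string> {
  // 1. Ship the heavy computation to an off-chain prover network.
  const job = await requestProof({ imageId: "image-id-placeholder", input });
  // 2. The prover returns a succinct proof plus the public outputs.
  const result = await waitForProof(job);
  // 3. A Solana program verifies the proof and, only if it checks out,
  //    performs the follow-up action (state update, transfer, etc.).
  return submitProofToVerifierProgram(result);
}
```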

Speaker 1:

What's an AVS?

Speaker 3:

Okay, yeah, AVS. The AVS is this pattern that was publicized a lot by EigenLayer, which means you kind of extend the proof-of-stake security of a layer one out to some services, right? So right now, for example, let's talk about Bonsol. You, as anybody with a computer, can start running a Bonsol node, and because you have to run a specific thing called RISC Zero inside of that code, it creates some sort of verifiable compute system that can be proven on-chain. But let's say there's another use case where I want to run a service that listens to a blockchain. Let's say I'm a big anonymous person, this is a contrived example, and I want to post to Twitter through a blockchain. This is really dumb, why would you ever do it, but let's say you send it to some smart contract which has servers listening to the chain, and what they do is they pick up that request, then they actually go post onto Twitter under some bot account, and they give you back some confirmation on chain that they did that.

Speaker 3:

What an AVS, the AVS pattern, which means actively validated service, would do is that each one of those nodes of this stupid Twitter-posting thing would have a way to validate the actions of each other.

Speaker 3:

If they don't match, then they would get slashed, and while that might seem like hey, we're doing that already, the interesting bit is that the stake that's being slashed is actually not necessarily just the operator of that Twitter node.

Speaker 3:

It's actually a whole bunch of people that already have stake on the base L1, who are delegating their stake through this other system, which is then providing what people call economic security, to create an incentive for those node operators to run the correct code. So when I said Bonsol is maybe more like an AVS pattern, I said that because I thought the pattern was more ubiquitous than it actually is, so that's a good signal for me. But at the same time, when you have a system that's cryptographically verifiable and you can prove it on the L1, you may not need some of these actively-validated-service economic security sort of systems in order to make them behave. But these are very useful for things where you need the validator, or the specific off-chain software, to behave in a way that is beneficial for the user and itself. I hope that makes sense.

Speaker 1:

Yeah, that's fine. Joel, do you have anything? Because I mean, I can ask more questions.

Speaker 2:

I'm just listening in. It's a deeply technical conversation; I'm a fly on the wall. When I have something, I'll kick in. That's it, yeah.

Speaker 1:

Okay, I mean, these are some of the questions that I had. What is it that you are excited by? What occupies most of your time these days?

Speaker 3:

Yeah, I'm very excited to see how this new restaking sort of pattern that is emerging plays out. I'll just leave that there and you guys can explain that as much as you want. It comes out of the Ethereum community, which I think is doing an excellent job in trying to make that a reality, although I think some of the account abstraction things and some of the MEV problems there may harm users. I think they've gone way further than Solana has. So I'm very excited to see smart wallets, or smart contract wallets. On Solana we have one called Fuse. I think there was another one that a team called B&J Studios sort of released. But making that more ubiquitous, and wallet interfaces that bring new users onto the system, really excite me as well.

Speaker 3:

The kind of joke that I am always making internally at Anagram is: will this work for my neighbor who's a mechanic?

Speaker 3:

And most of the time the answer is no.

Speaker 3:

But when you find products that have the right combination of technology and user simplification and interaction, then those things might be able to take us further into getting a safe interaction for people who don't necessarily consider themselves crypto-native or whatever.

Speaker 3:

I guess the last thing I'd say I'm really, really excited about is the verifiable compute idea. I think there'll be a blog post we're launching soon on this topic, but the idea of being able to lower the burden of consensus mechanisms by proving parts of that consensus mechanism in some sort of verifiable compute system, and being able to allow hidden information and zero-knowledge kinds of interactions on Solana, is really interesting to me. I think a lot of different ecosystems have gone further than us in the Solana ecosystem, and so I'm excited to see how we pick that up and try to see if we can get our existing users and existing dApps using those things in ways that make sense, not just in ways that get a seed investment.

Speaker 1:

So verifiable compute will come through. I mean, I understand it as: there won't be a separate layer. It's just that some of the things that already happen on Solana's L1 will just be proved to the validators, right, and the cost of verifying those things will be non-trivial.

Speaker 3:

Yeah, I think currently, when we talk about verifiable compute systems, we're talking about systems that can run some kind of code, and that code gets turned into math, essentially, and then on the other side the verifier just needs to do the math, so to speak, and then you can be as sure as the cryptographic assumptions are that it's correct. Now, that does depend on the domain. It depends on a lot of the cryptographic properties, but there are some specific math functions that are really, really hard for anybody with a computer or a supercomputer, and in some cases even a quantum computer, to do. And since those are hard to do, if you build a mathematical system based on those things, then you can get some kind of security that someone isn't able to fake it, right? An example kind of escapes me, to try to stay out of the mathematical weeds of this, but how this could impact consensus mechanisms, and then after that how it will impact smart contracts, is that in a consensus mechanism, a lot of times you need to have a bunch of different nodes say, yes, this is correct, yes, this is correct, yes, this is correct. And to say yes, this is correct, they're having to rerun the same code as everyone else, let's say, for example, the Solana leader. They're rerunning that to make sure the block was packed correctly, or the transaction is actually in the block, and things like that, which can be inefficient. With verifiable compute, instead of having to rerun that compute, you can actually just take the proof and verify it to see if it's correct over the same public properties, and you won't have to rerun it. So it could make things more efficient. Now, how it could impact smart contracts is, when L1s get much more sophisticated verifiers, you can create more minimal-trust bridges, because one side can just post verified compute to the other side instead of having to have watcher nodes and different things like that. The other way it can impact smart contracts is by allowing you to have new authentication schemes, where authentication, meaning your access to specific money or access to specific data on the chain, is proven in a much more complicated way. Maybe there's some zero-knowledge system that looks at your passport to decide if you have access to certain funds, which hopefully never happens, or it looks at some real-world information that it can prove off-chain via some algorithm but can't necessarily prove on-chain due to compute restrictions, yet can verify on-chain. So you extend some of the smart contract things off-chain in a way that the chain can actually trust, because it's not trusting the person proving it, it's actually trusting the mathematics behind the system.

Speaker 3:

Now to end users, though, this is all just a huge black box.

Speaker 3:

So when developers are considering using zero knowledge or verifiable compute, in my opinion it's wiser to try to apply this in areas that make their lives easier, or lower their regulatory burden, or make it safer for them to do something safe for users, instead of thinking that end users, like my parents, my neighbor who's a mechanic, or people who are not onboarded to the very specifics of crypto and verifiable compute, will want it just because of buzzwords like private and zero knowledge. It's all a black box to them. They're trusting somebody who's trusting somebody who's written some code, and things like that. But when we get into the technical systems, it's lowering the amount of trust down to the proving system and the cryptographic assumptions behind the proving system, which is pretty common at this point, and we're doing it in a lot of other places where we trust cryptographic assumptions instead of trusting the actual other participants.

Speaker 1:

Right, so it may actually expand the use cases from whatever they are.

Speaker 3:

Yeah, I think it can. Now, the other sort of Achilles heel is that when you take compute and you turn it into these sorts of math functions, it makes it slower and more inefficient, and so there's a lot of research happening. And because I would consider myself more of a technical thinker than a product thinker, although I'm trying to learn how to think more on a product and user basis, I'm fascinated by the research that's going into taking compute and turning it into this big math function, but in a very fast way. To the extent that you can take traditional systems that require a lot of trust, you can remove some of that trust, or move the locus of trust over to the mathematical proving system, instead of trusting that some company won't be evil. There's always going to be some level of trust, and, like I said, to end users it's all a black box, and so, you know, whatever. But as developers we can push forward and make things safer for people. So these technologies are definitely worth a look.

Speaker 1:

Right, thanks a lot. I think developers would especially enjoy this episode. I think Joel and I may have to go back to it and listen a few more times so that we can get maximum out of it.

Speaker 2:

I'm going to add some text-based explainers to what Austin was saying. That's all.

Speaker 3:

I would say, do you want to talk about anything on Solana L2s, Joel?

Speaker 2:

I just want to know, how would you actually bet on an L2? There are going to be 50 different L2s, right? What are the variables that will make you bet on a particular L2?

Speaker 3:

Yeah. So on the topic of Solana L2s, what would make me want to take a bet on them, or the criteria that I have already developed with my colleagues at Anagram in order to form our opinion on companies coming to us: I guess there's some level of how far along they are now, how their go-to-market plan will work, all the typical investment things. But as a developer, when I'm looking at projects, I also have this "does this excite me" meter. Is this novel, and to what extent is it novel but also consumable? One thing that I'm looking into, and maybe even looking into building myself, is something that extends the Solana virtual machine with more functions, like these zero-knowledge and cryptography functions, or functions that allow you to operate over way larger state than fits in a transaction, or operations that allow you to pause some execution, do some other thing and then have that execution resume. Things that might seem simple, but developers on blockchains would just melt, because they want to create these sorts of rich interactions. So something that doesn't just port the Solana virtual machine over to some area, although I think when you have a domain and it's kind of new, you need to take a certain amount of bets on whatever's there.

Speaker 3:

I don't currently know how an SVM in the Ethereum community will do from the perspective of EVM devs.

Speaker 3:

That's a wait and see. I think SVM devs, or devs that are friendly to both ecosystems, might find that useful, and they can use liquidity from one in these smart contracts. But I just don't know. I think what might be more interesting, and what the L2s, or the Solana alt layers, or the Solana versions of EigenLayer, may allow, is letting developers do more, and more innovation at almost the L1 level, which can create some new experiences, which can create a more attractive space for new users to come in, or a space for users that have left the ecosystem to come back. So I guess that's where I would bet on it. I think there are a lot of these things coming out: Solana on EVM, SVM on Ethereum, SVM in the whole EVM landscape, SVM on Cosmos. But I'm kind of looking at something like, what's the SVM plus, what's a supercharged version of the SVM running on maybe an alt side chain of Solana that's still incentivized by SOL, or something.

Speaker 2:

Do you think that this is going to be value additive to SOL net-net, where you have 50 different L2s, and how does it actually differ from ETH L2s? That's just the last question I had. I think these are the two things that go on in my head.

Speaker 3:

I don't know how much it differs. I think one big decision, and even a decision I'm talking to some teams about right now, is: do they launch their own token, or do they just make their base token some kind of SOL manifestation?

Speaker 2:

No, the answer is simple: they'll all launch their own token. They're not going to do some SOL manifestation. For what, Austin? That's not a question at all.

Speaker 3:

At least in the groups that I'm in. What's that?

Speaker 1:

Right, Eclipse is an example. I'm pretty sure they'll launch their own.

Speaker 3:

Yeah, I mean, it's certainly more advantageous for them to launch their own token. I don't know much about the economics, but I think there'll be some temporal accrual, because people will want to get the token and they have to go through certain chains to get it. But whether or not it will actually accrue value back to the L1, I'm not sure. I do think, if someone builds some kind of L2 on Solana or on Ethereum in the whole restaking way, it may still add value back. Let's say you had to stake SOL in order to get that token, you couldn't just buy it. Then, yeah, I think that would obviously lock more SOL, make SOL more scarce and maybe add to the price of SOL. But I don't know if value and price are necessarily the same, because price is super temporal. But I think long term, if it adds more users, then it will be good for the system.

Speaker 2:

For sure, all right, I think that's about all the questions I had.

Speaker 1:

Yeah, thanks a lot Austin.

Speaker 3:

Yeah, no problem. We can chat about this anytime, for sure.

Speaker 2:

Thanks, Austin. Thanks for coming on at such short notice.
