This is a detailed interview with Andre Cronje, running about 1 hour and 20 minutes. It covers his review of his past career, along with many experiences, pieces of guidance, and personal views. The article is sourced from an interview on the Lightspeed YouTube channel, and was compiled, translated, and written up by PANews.
(Table of Contents)
1. Introduction
2. ICO Era: Andre’s Crypto Journey
3. Building Yearn Finance
4. Mistakes and Production Testing
5. Fantom L1: Making Software as Efficient as Possible
6. The Road to ETH Scalability
7. Fantom’s Market Plans
8. SVM is the Best Virtual Machine
9. Airdrops, Regulations, and Bull Market Predictions
As an OG across multiple areas of the industry, Andre Cronje gives here one of his most recent and in-depth interviews, well worth reading and referencing. The article is approximately 20,000 words long and divided into 9 sections.
Introduction
Host 1:
Hello everyone, welcome to the Lightspeed podcast! Today we are fortunate to have Andre Cronje as our guest. He is the founder of Yearn Finance, Fantom, and the Keeper Network, and also a key contributor to many DeFi projects. Andre, welcome to our program!
Andre Cronje:
Thank you. Uh, that introduction is a bit exaggerated; I'm just someone who enjoys coding.
Host 1:
I heard you on the "Extraordinary Core" program back in 2020, where you were introduced as a builder but described yourself as not a developer, just an integrator. I think you are too modest in your self-assessment. You have an interesting story, and there is a lot we can learn from it. Maybe we can start from 2017, when you first entered this field, which happened to be the ICO era. I'd love to hear how you got into the space and, for those who weren't around at the time, how crazy that era was.
Andre Cronje:
Yes. I mean, before I got into cryptocurrency I was a very typical crypto skeptic. I come from a traditional finance background; I was chief architect and chief technology officer of a small financial company, and at the time we were doing high-throughput work with Kafka and Scala. So that was my background: high-throughput financial solutions.
That era in 2017 was very similar to now in many ways, because there was so much noise. Many teams claimed to be solving industry-wide problems that traditional finance and traditional distributed systems had been struggling with for decades. Yet these 18- to 20-year-old guys, with no work experience, would release an ICO, raise 20 or 40 million, and claim to have solved distributed systems or some other hard problem.
So I initially entered this field just to test my skepticism and make sure I wasn't missing anything. You know, disruptive technologies that replace what came before do appear; it's not the first time that has happened, and it will happen again. My concern was that the blockchain field lacked solid research and strong evidence, while many people claimed to have achieved something. So I entered the field and started reading whitepapers. The whitepapers proved a lot in theory, and many of the proofs seemed reasonable. But this is still a problem today: there are plenty of proofs that sound good, and you say, "Makes sense, it can work." Yet when you implement it, there are hard constraints that don't allow it to work the way you expected.
Even if the theory is correct, even if the concept is correct, it may still not be feasible in practice. So after reading many whitepapers I started looking at a lot of code and doing my own code reviews. I didn't do these reviews from the perspective of value creation or due diligence; it was purely: I read this whitepaper, it says it solves problem X, then I look at the code and ask whether it actually solves problem X. It was more a record for myself.
So, you know, when I wrote about these processes on Medium, I just wrote down: well, this code doesn't match what they said here, this codebase has nothing to do with their claims. For some reason I made them public, and in the ICO era they became very popular, because there weren't many naysayers at the time, not many people saying, "This won't work, because your code proves you don't have what you claim." And that's when a problem arose, which is important. The reason I eventually stopped doing code reviews was that people started treating them as investment signals rather than code-based research; I had shared them so that others could learn and go on the same learning journey I was on.
So I did my own public reviews, and then eventually worked with a company called Crypto Briefing, collaborating with Hana and John and those people, who are still great and whom I still keep in touch with today, and started doing some reviews for them. But then it shifted in a direction I didn't like. I liked reviewing public code: if it's on GitHub, I can see the code and everyone can see the code, so people can verify whether what I'm saying is true or tell me if I got something wrong.
But as the influence grew, more and more teams wanted us to review their private code and then publish the results. That made me uncomfortable, because it was purely an investment signal. Anyway, that was a side track we can dig into another time. Going through all of this, you know, 99.9% of it is garbage, but there is that 1% of real value, and the noise ratio is obviously very high, but that 1% has always haunted me and drawn me in.
Looking back at that time, my focus shifted from trying to understand what was happening to catching up with where the rising industry was heading. I think I managed that in about two years, maybe by around 2019, or maybe even earlier, at the end of 2018. It's difficult to catch up in this field; there are new things coming out every day, and you have to read everything the other 98% release just to find the 1% to 2% where something actually happened.
At that time I started to focus on one thing: PoW (Proof of Work) was obviously a bottleneck. Looking at blockchain systems, you'd think, well, the speed is clearly limited; under Bitcoin's longest-chain rule back then, transactions took 10 to 30 minutes. Before that, I had been fascinated by cross-border payments, cross-border settlement, and real-time online payments.
I am from South Africa, where we weren't even on SWIFT and didn't have IBANs; we were restricted by exchange controls and limits on online spending. Our banking system was very limited, and it has always been a challenge. Seeing this kind of freedom, not controlled by any single entity, really attracted me, and it also matched my background.
So I started to focus on consensus research. During that time, the research and code reviews I was doing led me to Fantom and the team there, and I started getting more involved. They had been very hot in the fundraising market and had managed to raise about $40 million in ETH. It's worth mentioning that they held onto that ETH even through the bear market; I remember they eventually sold it when the price of Ether was around $300. But they had made many promises that sounded good and that they couldn't actually deliver. They seemed to realize this, but rather than simply winding down or burning through the funding, they eventually asked me whether they could use the research I was starting to release. I had been considering launching my own chain, and this fit well, because I had no experience dealing with venture capitalists, raising funds, or anything related. It's not my expertise; it's a skill I don't have.
You know, that's also why nothing I've launched, whether Yearn, Keeper, or anything else, has ever had VC investment or any of those things. Many people think it's some kind of statement I'm making about professional ethics, but that's not the case; I'm just not good at it, so I figured out a way to work around it, and that's it.
So in the end, they had the funding and a branded team, so I pushed my research into it, and the first thing was consensus. The original consensus was aBFT (Asynchronous Byzantine Fault Tolerance); they called it Lachesis, but it was really based on a paper from the early 1990s, "Concurrent Common Knowledge," and it was essentially an aBFT point-to-point communication system. We initially launched it in late 2019 or early 2020. The consensus itself was great, you know, one of the first aBFT solutions, and it was a huge jump from the roughly 7 TPS maximum at the time. We didn't have a virtual machine attached yet; we were just processing raw transactions, because it was a pure payment network, and we could easily hit pure payment rates of 30,000 to 50,000, depending on validator connectivity and participation.
But we wanted to support virtual machines, because smart contracts are powerful, and at the time we chose the EVM; it was our only viable choice. We had considered WASM, we had considered RISC-based compilers, and so on. But at that time, and even now, to make a blockchain truly viable and usable you need a lot of service providers building on top of you, and it's hard to get people to do anything on a new underlying chain. Everyone says, well, we're just doing EVM, people are just forking the EVM, so we'll stick with the EVM, and our consensus will sit underneath as the base layer. Because consensus is just an ordering system, that's all it is: it takes in transactions, orders them, and then those transactions can easily be handed over to the virtual machine and executed as state.
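To illustrate that "consensus is just an ordering system" framing, here is a minimal sketch (hypothetical names, nothing to do with Fantom's actual code): the consensus layer's only job is to turn pending transactions into one agreed order, and a pluggable executor, an EVM or anything else, folds that ordered list into state.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Tx:
    sender: str
    nonce: int
    payload: dict  # e.g. {"to": ..., "value": ...}

def order_transactions(pending: List[Tx]) -> List[Tx]:
    """Stand-in for the consensus layer: its only output is an agreed ordering.
    Here we just sort deterministically; a real aBFT protocol reaches the same
    order on every honest node without a leader."""
    return sorted(pending, key=lambda tx: (tx.sender, tx.nonce))

def execute(ordered: List[Tx], apply_tx: Callable[[Dict, Tx], None]) -> Dict:
    """Stand-in for the virtual machine: folds ordered transactions into state."""
    state: Dict = {}
    for tx in ordered:
        apply_tx(state, tx)
    return state

# A toy "VM" that only understands balance transfers.
def transfer_vm(state: Dict, tx: Tx) -> None:
    state[tx.sender] = state.get(tx.sender, 0) - tx.payload["value"]
    state[tx.payload["to"]] = state.get(tx.payload["to"], 0) + tx.payload["value"]

pending = [Tx("alice", 1, {"to": "bob", "value": 5}),
           Tx("bob", 1, {"to": "carol", "value": 2})]
print(execute(order_transactions(pending), transfer_vm))
```

The point of the separation is that the ordering layer never needs to understand the transactions it orders, which is why the VM underneath can, in principle, be swapped out.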
Then we noticed that our TPS would drop to a range of 180 to 200, purely because of EVM limitations. For the next three years we focused almost entirely on improving the EVM. We made some progress, but I have to say that if I could go back and change that decision, I definitely would.
I believe we took the easiest route at the time by choosing the EVM. It was a net-positive decision, because we didn't have the capacity to build our own wallets, set up our own RPC node providers, do instant deployments, and so on. But regardless, that's a topic we can dig into later.
Andre Cronje:
On the topic mentioned earlier: they raised $40 million and kept all the funds in ETH, but by the time it was eventually converted into USD, only about $2.5 million was left. I bring this up because it was the operational funding for our entire team. To manage this money, I started researching the lending protocols available at the time, such as Compound, bZx, Fulcrum, and so on. Apart from Compound, the others have disappeared. Ethereum transaction fees were only three to six cents back then, so I could move things around every day. Every morning I would check these websites to see which one had the highest Annual Percentage Yield (APY) and manually shift funds between the protocols. Over time I realized that checking these websites every day was tedious; their interest rates were all readable from on-chain smart contracts, so I could simply collect and display them myself.
The first smart contract I wrote and deployed on Ethereum was just an APY aggregator. It could fetch data from all these different places and display it. I did this because at the time I couldn't figure out the RPC infrastructure, web3.js or anything like that, to fetch data from nodes and execute operations. So for me, the easier way was to deploy it on-chain and read from there.
So that's how my journey with Solidity development started. With this smart contract, at least I could check every morning which rate was the highest and then move the funds. Then I realized, hey, I can actually write a smart contract to do this for me. That's the origin of Yearn. Later on it became far more sophisticated; the current system is rocket science compared to the code I wrote. But that was the foundation. What I wanted was to automate the manual steps I was doing every day, up to the point where it could manage the funds I was managing. Eventually I opened it up so others could use the same system. I no longer needed to click buttons every morning to reallocate funds between protocols, because whenever someone interacted with it, whether a deposit or a withdrawal, it would reallocate the funds. That ultimately automated the whole process, and that's the origin of Yearn.
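As a rough illustration of that early mechanism (a minimal sketch with made-up protocol names and rates, not Yearn's actual contracts): pooled funds always sit in whichever lending venue currently pays the most, and every deposit or withdrawal triggers a fresh check.

```python
# Minimal sketch of the early Yearn idea: pooled funds always sit in whichever
# lending protocol currently pays the highest APY, and every deposit/withdrawal
# triggers a rebalance. Protocol names and rates here are illustrative only.

class YieldRouter:
    def __init__(self, rate_feeds):
        # rate_feeds: protocol name -> zero-arg callable returning current APY
        self.rate_feeds = rate_feeds
        self.current_protocol = None
        self.total_funds = 0.0

    def _best_protocol(self):
        return max(self.rate_feeds, key=lambda name: self.rate_feeds[name]())

    def _rebalance(self):
        best = self._best_protocol()
        if best != self.current_protocol:
            if self.current_protocol is not None:
                # In the real system this would withdraw from one protocol's
                # contract and deposit into the other's; here we just log it.
                print(f"moving {self.total_funds} from {self.current_protocol} to {best}")
            self.current_protocol = best

    def deposit(self, amount):
        self.total_funds += amount
        self._rebalance()

    def withdraw(self, amount):
        self.total_funds -= amount
        self._rebalance()

router = YieldRouter({
    "compound": lambda: 0.032,
    "fulcrum": lambda: 0.041,
})
router.deposit(1000)   # funds land in whichever venue pays more right now
```

The key property Andre describes is that no one has to push a button: any user interaction doubles as the trigger for rebalancing.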
However, as Yearn developed, the token launch didn't go as planned. It wasn't intended as some grand "fair launch"; I was just mocking these worthless tokens. I said that as long as you provided liquidity, I'd give these trash tokens away for free. It seemed like the dumbest thing imaginable to me, but apparently I was wrong. It attracted a lot of attention, people started joining, and things got more complex, involving strategies, infrastructure, and so on.
As the strategies got deeper, we spent a lot of effort on harvesting; like any protocol, we were claiming and selling reward tokens, and it became a routine thing. I used to run those scripts manually. So I thought, there must be a way to do this in public, where anyone can call it and has an incentive to call it. That's where tasks and keepers came from. Eventually it evolved into the Keeper network, which works well for Yearn. So we decided to open it up so that anyone could plug in a task and keepers would execute it. I don't know who these keepers are, but they do the work. The first task I launched on-chain was fascinating, because we didn't advertise, didn't announce anything; we just activated the task and the bots started calling it. Watching these things happen on-chain was chaotic, and that's probably why it used to be called the dark forest; now I guess it's just the MEV (Miner Extractable Value) forest.
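A rough sketch of that pattern (hypothetical interfaces and numbers, not Keep3r's actual contracts): a task exposes a check for whether work is needed plus a function that performs it, and permissionless keepers poll registered tasks and call the work function whenever it pays them to do so.

```python
import time

# Minimal sketch of the keeper pattern: tasks publish "is there work?" plus
# "do the work", and anonymous keepers call them for a reward. The class and
# reward numbers are illustrative, not the real Keeper network interface.

class HarvestTask:
    def __init__(self, harvest_interval, reward):
        self.harvest_interval = harvest_interval
        self.reward = reward
        self.last_harvest = 0.0

    def workable(self, now):
        # Work is needed once enough time has passed since the last harvest.
        return now - self.last_harvest >= self.harvest_interval

    def work(self, keeper, now):
        if not self.workable(now):
            raise RuntimeError("not workable")
        self.last_harvest = now
        return {"keeper": keeper, "reward": self.reward}

def keeper_loop(tasks, keeper_id, rounds=3, tick=1.0):
    """A keeper simply polls every registered task and works the workable ones."""
    for _ in range(rounds):
        now = time.time()
        for task in tasks:
            if task.workable(now):
                print(task.work(keeper_id, now))
        time.sleep(tick)

keeper_loop([HarvestTask(harvest_interval=2.0, reward=0.1)], keeper_id="bot-1")
```

This is why the bots showed up without any announcement: anything that is callable and profitable on-chain gets found and called.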
Andre Cronje:
Then there were many... mistakes, for lack of a better word. Before Yearn, nobody really noticed me in this space; I had no public reputation, fame, or attention, so I had developed a lot of bad development habits. For example, I often tested in production, meaning I would put experimental things into the actual running system, you know, I did that. Another problem was that intent and perception became completely disconnected, because mixing testing and production is exactly where I thought the big risk was. It's like telling someone, "Hey, I'm testing in production; you shouldn't interact with this, because there's a high chance things go wrong." I said that as a warning: if you interact here, you need to understand the risk is high.
Testing in production eventually turned into people putting money in rather hastily and carelessly, although that was never my intention. Anyway, I was still following my old development practices, and I was building Eminence. At the time I was very unhappy with the NFT (Non-Fungible Token) culture. I think it has improved now, but back then people were using NFTs in very stupid ways: they turned a picture into an NFT and priced it at $100,000. I liked the idea of NFTs because I'm a passionate gamer, and I thought games were a perfect use case for them. So I obtained the IP license for Eminence, which came from another gaming company, and we planned to build some silly games to showcase how NFTs could work. I think IP for NFTs will always be an issue, because it can't live in just one game; so the whole plan was to build a series of different games that all use the same underlying layer.
But anyway, I deployed a bunch of tests, people interacted with them, there were serious vulnerabilities, and about $60 million was lost. I took a big step back, because I realized how dangerous this field actually is and how easily things go wrong without proper safeguards. At the same time, because of Yearn, I was under significant pressure from a number of regulators who classified it as a financial instrument, which I think is fair, but I also wanted to keep a bit of distance from it.

In the end I came back, because one thing had bothered me for a long time: how to improve AMM (Automated Market Maker) curves. At the time, you know, there was only one standard stable trading curve, which was Curve Finance, founded by Michael Egorov, an absolute genius developer, founder, and architect; I still think he's one of the smartest people I know in this space. But I was obsessed with it, and I wanted to make something as simple as Uniswap's x·y = k. So I eventually designed the whole x³y + y³x curve, and it worked really well: you can define the curve, and it stays simple.

At the same time I added a bunch of other things. Back then you had TWAP (Time-Weighted Average Price), and I added RWAP (Reserve-Weighted Average Price). Because of how these x·y = k pools work, I don't really need to explain the details; you just need to know that a TWAP is a price at a fixed point in time that completely ignores how much liquidity there is. It tells you, hey, you can sell a billion of this thing at this fixed price, and that's a big problem for me.
Note: Time-Weighted Average Price (TWAP) and Reserve-Weighted Average Price (RWAP) algorithms calculate asset prices using different methods and are integral parts of almost all DeFi protocols.
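Coming back to the curve itself for a moment, here is a minimal numerical sketch of the x³y + y³x = k invariant (illustrative Python, no fees, not the production Solidity): the output of a swap is found by holding k constant, and near-balanced reserves give far less slippage than a plain constant-product pool.

```python
def _k(x, y):
    # Stable-swap invariant of the form x^3*y + y^3*x
    return x * x * x * y + y * y * y * x

def get_amount_out(dx, reserve_in, reserve_out, iters=100):
    """Output amount for a trade of dx against the x^3*y + y^3*x = k curve,
    found by bisection on the new output-side reserve (fees ignored)."""
    k = _k(reserve_in, reserve_out)
    x_new = reserve_in + dx
    lo, hi = 0.0, reserve_out
    for _ in range(iters):
        mid = (lo + hi) / 2
        if _k(x_new, mid) < k:
            lo = mid          # invariant too small: output-side reserve must be larger
        else:
            hi = mid
    y_new = (lo + hi) / 2
    return reserve_out - y_new

def constant_product_out(dx, x, y):
    # Plain x*y = k output, for comparison.
    return y - (x * y) / (x + dx)

# Two stablecoins with balanced 1,000,000 / 1,000,000 reserves:
print(get_amount_out(10_000, 1_000_000, 1_000_000))       # very close to 10,000 (near 1:1)
print(constant_product_out(10_000, 1_000_000, 1_000_000))  # ~9,901, noticeably more slippage
```

That flatness around the peg is what makes the curve definable yet still simple, which was the goal Andre describes.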
Because many liquidation bots, liquidation engines, lending protocols, and even fully decentralized stablecoins need to account for slippage as part of their calculations. Take a liquidation bot as an example. Its logic is simple: it needs to check whether I can repay someone's debt, take their one million in ETH collateral, sell it into the Uniswap pool, and still make a profit. If I use a TWAP, my bot says no problem, the profit looks good, execute. But if the sale actually incurs large slippage, I take a loss. So what I need is a method that takes liquidity into account, so I can check realistically, and it also needs to be time-weighted, so you know a massive flash loan hasn't just been injected into the liquidity. I could sell into that, but it would also be an opportunity for bots to front-run me, so I need to look back in time and check everything, and it was built that way.

When it launched on Fantom there was also some chaos, because I left after a week or two. But Fantom aside, I've always believed that's what founders of decentralized protocols should do: if your protocol is completely immutable, with no updates and no changes, you need to step away, because you can't remain the figurehead associated with that thing. I think Yearn and Keeper have done well here, because they're managed in a very decentralized way; for both protocols you can't really pinpoint who owns them. On Fantom, though, it was definitely a huge mess. That design has since become the basis of the main AMMs on many newer chains, such as Velodrome and Aerodrome and many others I don't even know about. So it achieved the goal I wanted, even though it wasn't my own iteration that got there. After that I decided my development days were over, my smart contract days were over, and I didn't have the necessary infrastructure, so I went back to Fantom full-time. Sorry, this is a very long history, and I've been taking up a lot of time here.
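To make the liquidation example concrete, here is a small sketch (made-up numbers, a plain x·y = k pool standing in for the Uniswap venue): pricing the seized collateral with a fixed TWAP overstates the proceeds, while quoting against the pool's actual reserves accounts for the slippage of dumping it.

```python
# Toy liquidation-profitability check. The pool is a plain x*y = k pair
# (collateral vs. stablecoin); all numbers are made up for illustration.

def quote_constant_product(amount_in, reserve_in, reserve_out):
    """Actual proceeds of selling `amount_in` into an x*y = k pool (no fees)."""
    return reserve_out - (reserve_in * reserve_out) / (reserve_in + amount_in)

def is_profitable(debt_to_repay, collateral_seized, twap_price,
                  reserve_collateral, reserve_stable):
    naive_proceeds = collateral_seized * twap_price                 # ignores pool depth
    real_proceeds = quote_constant_product(collateral_seized,
                                           reserve_collateral, reserve_stable)
    return {
        "naive_says_profitable": naive_proceeds > debt_to_repay,
        "really_profitable": real_proceeds > debt_to_repay,
        "naive_proceeds": naive_proceeds,
        "real_proceeds": real_proceeds,
    }

# Repay 1.9M of debt to seize 1,000 ETH; the TWAP says ETH = 2,000, but the pool
# only holds 5,000 ETH, so dumping 1,000 ETH moves the price a lot.
print(is_profitable(
    debt_to_repay=1_900_000,
    collateral_seized=1_000,
    twap_price=2_000,
    reserve_collateral=5_000,
    reserve_stable=10_000_000,
))
```

In this example the TWAP-based check approves a liquidation that actually loses money, which is exactly the failure mode a reserve-aware, time-weighted quote is meant to prevent.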
Andre Cronje:
I think databases definitely have their uses, and I think the FVM (Fantom Virtual Machine) is currently the best standard; I don't think there's anything better. From a data-structure perspective, the situation is like this. Before Carmen, the new database, we went through the usual process: initially we used Badger, then we did a lot of research on various databases and switched to Pebble, which gave us a nice boost in throughput but not a dramatic change. All these existing databases share one problem: they are designed for generic data and can store anything in any shape. And if you put a structured query language (SQL) on top, it means there's a lot happening in the backend; they build their own indexes, their own B-trees, and so on, which adds a lot of extra overhead.
So even when you switch to key-value databases it's still not great, because they're not optimized for what we're doing. That's when I realized we needed a new database built specifically for this, and that's how Carmen came about. It's a database designed specifically for our decentralized-finance workload and highly optimized for our use cases; it doesn't need to build its own indexes or B-trees, because it's designed around the data structures we actually store.
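As a rough illustration of why a purpose-built layout helps (a hypothetical sketch, not the actual design of that database): if keys always have the same fixed shape, say an account address plus a storage slot, they can be packed into one flat, fixed-width key and looked up directly, with none of the generic indexing that a general-purpose or SQL database would maintain.

```python
# Hypothetical illustration: blockchain state access is almost always
# "give me the value at (address, slot)", so a fixed-width composite key into
# a flat map is enough: no SQL planner, no secondary indexes, no B-trees.

ADDRESS_BYTES = 20
SLOT_BYTES = 32

def state_key(address: bytes, slot: bytes) -> bytes:
    assert len(address) == ADDRESS_BYTES and len(slot) == SLOT_BYTES
    return address + slot          # fixed 52-byte key, directly comparable

class FlatStateStore:
    """A single flat map from packed keys to 32-byte values."""
    def __init__(self):
        self._kv = {}

    def put(self, address: bytes, slot: bytes, value: bytes) -> None:
        self._kv[state_key(address, slot)] = value

    def get(self, address: bytes, slot: bytes) -> bytes:
        return self._kv.get(state_key(address, slot), b"\x00" * 32)

store = FlatStateStore()
addr = bytes.fromhex("11" * 20)
slot = (0).to_bytes(32, "big")
store.put(addr, slot, (42).to_bytes(32, "big"))
print(int.from_bytes(store.get(addr, slot), "big"))   # 42
```

Knowing the exact shape of every key and value up front is what lets a specialized store skip the bookkeeping that makes generic databases slow for this workload.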