
  • The first stage of Vite really was like, let's just make things work and make it better than the status quo, but underneath there might be a lot of, you know, hacks and things we want to improve in the future.

  • And now it's the proper time to do it.

  • Hello, welcome to DevTools FM.

  • This is a podcast about developer tools and the people who make them.

  • I'm Andrew.

  • And this is my cohost, Justin.

  • Hey everyone.

  • We're really excited to have Evan You joining us again.

  • Evan, you were with us on episode 12 back a few years ago talking about Vue and Vite.

  • And we are now like 119 as of recording.

  • So over a hundred episodes ago.

  • Wow.

  • That's, that's a lot of episodes.

  • We've been doing it a while, but it's, it's so fantastic to have you back.

  • We are continually big fans of your work and how it shapes the ecosystem.

  • So excited to chat again.

  • And we're going to talk about what you're up to these days.

  • But before we dive into that, would you like to tell our listeners a little bit more about yourself?

  • Sure.

  • Hi, I'm Evan.

  • I have been an independent open source developer since 2016.

  • And I worked on Vue.

  • I actually still work on Vue.

  • I work on Vite.

  • And just recently we started a company called VoidZero, and the focus goes even deeper, into the full JavaScript toolchain: starting from parsers to linters, formatters, bundlers, transformers, everything that supports higher-level tooling.

  • So the bundler we're building, called Rolldown, is going to support Vite in the future.

  • And Vite is now supporting a lot of other frameworks.

  • So essentially we're trying to build this vertical, unified toolchain that can support all the frameworks that depend on Vite today and hopefully make everyone's development experience better.

  • So back on that old episode, long ago, you had actually just released Vite.

  • And since then it's really become like a pillar of the industry.

  • Like many a meta framework is based on it now.

  • And it's like the starter pack for everything.

  • What was the journey of going from Vite to, "oh, we actually have to rebuild most of what's below the surface level and form a company around that"?

  • Yeah.

  • I think the idea started at one point because when I first started Vite, I was just doing a prototype, honestly, right?

  • So I pretty much just took whatever was available out there and tried to fulfill the goals.

  • And there were a lot of trade-offs and nowadays I consider them kind of hacks because I didn't have the, you know, the bandwidth to do all the things myself.

  • So I have to use what other people have built.

  • And I think that's typically what we've been doing in the JavaScript ecosystem, because we feel like, okay, we don't want to necessarily reinvent the wheel.

  • Why not just use things people have already done?

  • So in the beginning, I was using Rollup because I've always liked Rollup's API, and then we ran into performance issues, because the first step of Vite was to have a native ESM dev server, right?

  • That sounds simple.

  • And then we use Rollup to try to handle the dependencies because some dependencies are in CJS.

  • We want to convert them to ESM so they can load in the browser.

  • But Rollup was actually quite slow if you have large dependencies.

  • And so we started using esbuild for that purpose.

  • And then we tried using esbuild for production bundling, and it was not satisfactory, because you have no control over how the chunks are split.

  • And the way esbuild splits code is just a bit counterintuitive if you're building applications.

  • So we're like, okay, now we need to think about it: we use esbuild for development and pre-bundling, but we use Rollup for production bundling.

  • And we kind of smoothed over the surface to make them work kind of the same way.

  • Right?

  • And later on, when people started building real applications with Vite, for example using Vite with React: previously, everyone was using Babel, because that's what was supported.

  • Interestingly, if you use Vite by default and you write JSX in TypeScript, they are transformed using esbuild, which is quite fast.

  • But the moment you want hot module replacement for React, now you need to use Babel, because esbuild does not support hot module replacement transforms.

  • So then Babel, again, made everything slow.

  • So people also came up with an SWC version of the React plugin.

  • So you see the problem here is there are great tools out there, but some of them do this, some of them do that.

  • And now some of the things they both do, they decide to do them differently.

  • And that's the reality that we're kind of dealing with in the JavaScript tooling ecosystem.

  • I'm pretty sure if you've worked with custom build stacks long enough, you understand what I'm saying, right?

  • So in a lot of ways, the reason people love Vite is because we kind of hide this complexity away from them and try to give you a very smooth and consistent entry point so that you don't need to think about these things.

  • For me, I think we achieved this goal with the initial version of Vite, but long-term, as people started putting more and more dependence on Vite, right, we've seen more and more frameworks moving over to use Vite as the base layer, and I kind of started to worry, because I feel like the internals are not as pretty as they should be.

  • And we kind of just swept all the deeper problems under the rug and pretend everything is great.

  • So I guess deep down, you know, along with Vite's growth and adoption, I've always had this, like, inner urge to say, is it really up to the task of being the future of all these, like, next-generation frameworks, serving as the infrastructure?

  • Will VEET be able to live up to that expectation?

  • And I don't think it will if we just keep using these sort of fragmented internals and try to stitch things together and smooth over all of the inconsistencies.

  • In a way, the toolchain we're building right now at VoidZero is an attempt to kind of attack this problem more fundamentally.

  • Let's say, like, if we want to solve all the problems we want to fix in Vite, what do we need? We need a bundler that actually is designed for it.

  • And we need this bundler to also be built on top of a toolchain that can, you know, handle all the different concerns, starting from the AST all the way to minification and production bundling, right, through a consistent system.

  • And at the same time, we also want to make each part of this tool chain individually usable.

  • So let's say you want to just take the parser and do something crazy with it.

  • You can totally, you should be able to do that, right?

  • This tool chain, although it's unified and it's a coherent system, it should not be a black box, right?

  • You should not have to say, like, you either take it all or you can never use it.

  • So I think these are the two main premises that we're centering the toolchain around: unification, but without sacrificing composability.

  • We'd like to stop and thank our sponsor for the week, Mux.

  • If you haven't heard of Mux, Mux is an awesome platform that makes adding video to your product as easy as adding a few libraries.

  • If you've never had to add video to a product, you don't know how many pits of failure there are, whether it's file formats, delivery, uploads, or even playback.

  • There's so much to worry about and so much that will bog your team down while trying to ship a stellar product.

  • So that's where Mux comes in.

  • They have a set of APIs and components that make adding video playback to your app or platform super easy.

  • And since they're a bunch of experts that know video inside and out, they have so many different features that your team would never have the time to get to.

  • One of those things being metrics, they have all sorts of different fancy metric dashboards to understand how people are consuming video in the apps that you ship.

  • Recently, they've been adding even more capabilities to their platform.

  • Now you can see when viewers dropped from your videos, so you can gain better insight into how people are consuming those videos.

  • So if you want to add video to your platform and you don't want to spend weeks to months doing it, head over to Mux.com.

  • And with that, let's get back to the episode.

  • So you've been an independent developer for a while, and probably one of the most successful, being able to work on and produce a lot of very successful open source projects while working mostly on your own.

  • And now you're going this route: you've raised some VC, you're forming VoidZero, and you have a few people coming to join you to work on this ecosystem tooling.

  • So why did you decide to make that transition from independent developer to starting a company?

  • And like, what about the timing or the circumstances made this a different choice?

  • So I think overall, I would actually consider myself a pretty risk averse person.

  • Some of the biggest decisions I've made in my life, I kind of feel like I just winged it. Like, going fully independent to work on Vue, I didn't really know what that would entail. Luckily, I think I built a kind of lifestyle business around Vue.

  • So that is enough to sort of support me and make my life sustainable.

  • So on top of that, right, I'm not starting a company because, like, oh, we need to make more money. It's not that situation.

  • It's more that starting the company is the more realistic way to essentially fulfill the vision that I'm trying to achieve.

  • So it's also partly based on the experience of working as an independent developer.

  • I kind of know the limits of how far the current model can go.

  • I think a lot of people kind of use Vue and use me as a success story for the sustainability of these kinds of projects.

  • But at the same time, if you consider the scale and scope of Vue: we have more than two million users, supported by, I think at max, three people working full time on Vue-related stuff.

  • Even now it's probably still around three people actually full time on Vue.

  • And then a bunch of part time contributors that we sponsor.

  • So it's sustainable.

  • But at the same time, we don't really see it growing, say, to the scale where we can have like a team of 10 people working on it full time, right?

  • Because I intentionally try to build the business around Vue to be as passive and carefree as possible as a lifestyle choice.

  • But also, because it's not a for-profit business, that means we don't do aggressive marketing and we don't push features to sort of drive profit or revenue.

  • So in a lot of ways, the conversion rate relative to the user base of Vue is extremely low, right?

  • And I'm not saying that's a bad thing.

  • For me, there's no sort of this or that in terms of open source sustainability or monetization.

  • I think it all comes down to the goal you're trying to achieve.

  • For me, Vue is a lifestyle business thing, and I think that's working very, very well for me, right?

  • I'm happy about that.

  • But on the other hand, when I think about where Vite is going, thinking about the potential that Vite has, and thinking about how we can actually maximize that potential and make it the thing, you know, we hope it can be.

  • And I don't see it happening with this sort of more passive model of just hoping people donate, hoping people sponsor, or hoping someone builds a business and decides to donate back to Vite, right?

  • That actually has a lot to do with, I think, partly with luck.

  • It takes time.

  • It also has a lot to do with what layer the project stands on.

  • For example, because Vue is a very user-facing framework, most of the income that's generated by Vue is due to the high exposure of the documentation.

  • Because when people use frameworks, they interact with the documentation very, very constantly.

  • Build tools are quite different, right?

  • They usually sit one layer below the framework.

  • And also, when you set up build tools, once you get them working, you're happy with them.

  • You don't really have to look at the docs every day.

  • So when we go even lower, say we're building parsers and toolchains like that, I've seen how projects like Babel struggle with funding despite such wide, almost universal adoption across the ecosystem.

  • So I don't think that model is going to work for the things we want to build.

  • But at the same time, this is quite an ambitious goal.

  • So I can't imagine it being done with part-time efforts from just contributors who are like, oh, we want to work on this together.

  • I don't think that's going to happen, or at least it's not going to happen soon enough.

  • So I think the only realistic way for us to actually do it is to have enough resources, capital, to get people paid properly to work on this full-time, and as a team, right?

  • So we have a common mission, a common goal, and it's much more serious than your "let's contribute to open source after work, on weekends," right?

  • It's different.

  • So that also inevitably brings in a question of expectations from investments and returns.

  • And I think one of the other goals of starting the company is that I always felt it's a bit sad that the JavaScript ecosystem relies on a lot of critical infrastructure that's maintained by people who are either underpaid or doing it as a labor of love, right?

  • In a way, the ideal situation is we hope, okay, like big companies using these open source projects, making a lot of money, they should donate back, they should sponsor.

  • And I think it's difficult, because there are so many logistics you have to go through to actually set up the proper organizations, like foundations, and align the incentives.

  • And there's a lot of lobbying, a lot of like just talking to people and getting people on the same page and to make things like that happen.

  • And smaller open source project authors really don't have the leverage or the time and the energy to make that kind of thing happen.

  • And it's an uphill battle until you've absolutely become kind of the critical infrastructure the entire ecosystem relies on, right?

  • So I think VoidZero is also a different attempt, where the end goal we hold is a sustainable business model, mostly coming from bigger companies, enterprises, paying money that allows us to keep improving the open source parts and keep them free and open source for individual users, for smaller companies, for startups, so that more JavaScript developers have free access to high-quality tools.

  • At the same time, the tools should also be well sustained and maintained.

  • So is your plan for monetization, like charging bigger companies to use the tools and then like letting it be open source free for everybody else?

  • Not exactly.

  • So we want to draw a line where if the code runs on your machine, it should be open source and free, right?

  • We don't want to do things like that; one thing we definitely won't do is ship something as open source and change the license later, hoping people pay for it.

  • That's just not the plan.

  • The plan, more likely, is to build associated services that tie into each step of the way, because when you have a toolchain, you have a natural tie-in to a lot of metadata at every step of the way.

  • And how do you get deeper insights from that metadata?

  • And how can we get higher-quality metadata for each task you're performing on your codebase, right?

  • Overall, I think there's a good opportunity to improve upon what we're currently doing. For example, how much deeper insights can we get from our bundles?

  • Like, is your tree shaking accidentally failing?

  • Is your chunk cache invalidation working consistently?

  • Are you bundling things that shouldn't really be in the production bundle?

  • And after your code is transformed by all these tools, is it still as secure as the source code appears to be?

  • There are a lot of questions that are hard to answer if you don't actually own the toolchain and can't look at the code from the source all the way to the minified production output that you actually ship to users.

  • So I think there's quite a bit of opportunity here, and we intend to essentially build services that center around this space.

  • I don't want to be too specific right now, because there's obviously a lot that can change.

  • There's obviously, you know, some details we're still working on.

  • But the idea here is there will be, if we make money, it's from services.

  • It's not going to be from the open source stuff that we built directly.

  • And the open source toolchain obviously serves as the funnel and the moat for the services that we build. And that's the ideal outcome.

  • I think that makes a lot of sense.

  • There is this like real thing of like big companies using open source tooling.

  • It usually doesn't scale super well.

  • And if you've worked in a semi-large company and you've used Webpack, for example, you know, like, oh, we have a five-to-ten-minute Webpack build.

  • Well, like most people don't experience that because their apps are too small.

  • But if you're a really large organization and you're bundling a lot of code and running a lot of transforms and doing a lot of custom stuff, you start hitting those things.

  • So I think it makes sense to a large degree to say, hey, you've just got more needs and we have tools to sort of solve those needs, whereas, you know, 80 percent of people won't ever hit that scaling point.

  • Totally.

  • Yeah.

  • Part of the reason we believe there is a market for this is that half the team has worked at ByteDance on their web infra team, and they support some of the largest-scale JavaScript applications we have ever heard of in the industry.

  • And, you know, some codebases with hundreds of thousands of lines of JavaScript code that take half an hour to build.

  • So that's a scale that not every developer will ever have to deal with.

  • But, you know, that's why with a lot of the tools we are building, starting from the OXC parser, we are obsessed with the highest level of performance, so that these tools can still handle that scale.

  • The biggest-scale application you can throw at it, it should still be able to handle with a decent development experience.

  • So speaking of the OXC parser, I kind of find it funny that it seems like that project itself started the same way, where you were just creating a thing for a side project.

  • And I think Boshen, the guy behind OXC, was just kind of creating a reference implementation of a parser in Rust.

  • So how do we get from there to "this is now that one little Lego block at the bottom of the big structure that is VoidZero"?

  • Yeah, I think a lot of it was me thinking, okay, we need to write our own bundler for Vite.

  • And what do we base the bundler on top of?

  • And there are multiple ways of thinking about this, right?

  • Writing it in JavaScript? No, because it's going to be too slow.

  • So we want to do it in a compiled, native language.

  • And we looked at Go; there's already esbuild, which is a great project.

  • I think the downside of esbuild is that in order to achieve the maximum performance, esbuild is architected in a way where many different concerns are layered across as few AST passes as possible.

  • So its minification logic is actually spread across multiple passes.

  • It's not like minification is its own isolated pass.

  • It's like, in the same AST pass, you would see some branches dealing with minification, some branches dealing with transforms.

  • And that makes external contributions, basically extending esbuild in a reasonable way, quite difficult, because you're going to be adding more branches in these AST passes.

  • And it's going to be very difficult for us to manage.

  • Like, Evan Wallace is obviously brilliant, and he has everything in his brain, and he can sort of improve esbuild with this architecture, because that's his intentional decision.

  • But we just feel it's not a good foundation for us if we ever want to extend on top of it.

  • And also, we do want to make each part sort of well isolated so that people can use it as individual dependencies instead of having to opt into the whole thing at once.

  • So then we turn to Rust.

  • And so Boshen actually also contributed to Rspack at ByteDance.

  • And there are some technical decisions in Rspack that made it essentially too late for them to switch to OXC, because it was already built on top of SWC for a very long time before OXC was even usable.

  • But I have been keeping an eye on OXC for a very long time.

  • And I think it is important for the new toolchain to be based on something that kind of learns from a lot of the things that we've done in the past.

  • Because when Boshen worked on OXC, he had contributed to Rome slash Biome in the past as well.

  • He also had to contribute to SWC, and deal with SWC, during Rspack development.

  • So the team at web infra has a lot of experience dealing with, you know, Rust language toolchains and systems.

  • And I think he distilled a lot of those learnings into the development of OXC, initially as a proof of concept.

  • And when it became a bit more production-ready, it showed that all these things did pay off: both SWC and OXC are written in Rust, but there is a pretty significant performance advantage that OXC has.

  • And there are some other design decisions that's a bit more detailed.

  • For example, when using this language toolchain to support the bundler, there is a lot of semantic analysis that we have to do: for example, determining whether a variable is referenced in the current scope, or is it, you know, shadowing an outer variable, or is it exported and used in another module?

  • For a lot of these kinds of things, you have to do the analysis, right?

  • So in JavaScript, most of the parsers just, like, stop at giving you the AST, and they're done.

  • I think Babel probably provides a bit more infrastructure for that.

  • But in my own work, for example, in the Vue compiler, we have to do a lot of this semantic analysis ourselves.

  • I think in Svelte, Rich Harris has also written quite a few tools around this.

  • But I believe that should be a first-party concern of a language toolchain.

  • So OXC actually comes with a semantic analysis API that allows you to query this information after the parsing is done, because as it parses, it also collects and stores this information.

  • So you don't have to do the traversal yourself to figure out this information.

  • You can just ask, right?
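To make this concrete, here's a minimal sketch, in plain JavaScript, of the "collect while parsing, query afterwards" idea. This is not the actual OXC API; the class and method names here are hypothetical, just to illustrate why precomputing scope facts turns questions like "is this variable shadowing an outer one?" into lookups instead of fresh AST traversals.

```javascript
// Hypothetical sketch of a precomputed semantic index (NOT the real OXC
// API): scope facts are recorded once, up front, so later queries are
// plain lookups instead of fresh AST traversals.
class SemanticIndex {
  constructor() {
    this.declarations = new Map(); // variable name -> scope ids declaring it
  }

  // Called by the parser as it encounters each declaration.
  declare(name, scopeId) {
    if (!this.declarations.has(name)) this.declarations.set(name, []);
    this.declarations.get(name).push(scopeId);
  }

  // Answered from stored facts: a name shadows an outer binding if it is
  // declared in more than one scope along the current scope chain.
  isShadowing(name, scopeChain) {
    const declaredIn = this.declarations.get(name) ?? [];
    return scopeChain.filter((id) => declaredIn.includes(id)).length > 1;
  }
}

// e.g. `let x` at module scope (0) and again inside a nested function (2):
const index = new SemanticIndex();
index.declare('x', 0);
index.declare('x', 2);
```

A consumer then just asks `index.isShadowing('x', [0, 1, 2])` rather than re-walking the tree, which is the shape of API Evan describes.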

  • So this is also slightly different from, say, you know, the way SWC works.

  • In a way, like, I don't want to bash SWC, because it was the first Rust toolchain for JavaScript, right?

  • And I think it serves its purpose really, really well.

  • A lot of people are still using it.

  • It's great.

  • But I think there are things we can learn from it.

  • Learn from the past efforts.

  • And we believe OXC is just a better foundation if we want to build more ambitious features on top of it.

  • So yeah, Rolldown essentially started out with OXC as the base.

  • And so far, we are happy that the performance is, you know, living up to our expectations.

  • Something I've always admired about your approach to projects is that very iterative style.

  • So I remember when I first discovered Vue, you were just making the transition from Vue 1 to Vue 2, introducing virtual DOM, learning a lot of lessons from React.

  • And that always struck me, and I feel like you've sort of had a pattern doing that over the years.

  • So I'm curious to tie into the sort of incremental approach that you're taking now.

  • What have you learned from projects like Biome and Rome, for example, which have tried to tackle somewhat similar problems, but maybe from a different angle?

  • And SWC probably is in the same category.

  • They're trying to tackle some performance problems.

  • What are the big lessons and takeaways and things that you're trying to do differently than those projects might have tackled?

  • I think in terms of the end vision, it's very obvious that VoidZero has a lot of similarity to what Rome wanted to do.

  • I think there are two major differences.

  • First is, we decided to work on the toolchain for VoidZero mostly because we already have Vite serving as sort of a point of convergence.

  • If we didn't have Vite as the leverage, then the chance of success would be much slimmer.

  • And Rome really didn't have anything like that.

  • They started out with something completely from scratch.

  • So for Rome, I think the biggest challenge was just going from zero to one.

  • How do you make people adopt it?

  • They started out with a formatter, which kind of makes sense, because a formatter is probably the least intrusive task in the overall development flow.

  • In a way, it's completely decoupled from everything else.

  • That makes it easier for people to switch to and adopt.

  • The downside of that is that it's also not strong leverage to have, because it's not really related to the rest of the tasks people are doing.

  • The angle where you get the adoption from, that's more of a strategic difference, I think.

  • Another, more technical difference is, I think Rome's, and now Biome's, Rust codebase was initially designed more for an IDE use case scenario.

  • They focused a lot on the IDE story.

  • So they used something called a CST, a Concrete Syntax Tree, because they want to preserve the shape of the original code as much as possible, and they want parsing to be resumable and more error-resilient.

  • A lot of these are great for IDE use cases, but not necessarily best if you want to do the other tasks, for example, get the fastest possible transforms, and also basically be able to use the AST for multiple tasks along a long pipeline.

  • I think Boshen could probably share more insights on this front, but I think the difference between an AST and a CST was also a major reason Boshen was like, we don't really want to do that in OXC.

  • But I think it's unfortunate that Rome didn't get to actually keep going beyond what it is now.

  • I think it still showed people that it's possible to write high-quality tooling for JavaScript in Rust, because a lot of people are happy with Biome as a formatter nowadays.

  • And that's also part of the reason why we're not in a hurry to work on a formatter, because Biome already kind of fills that gap.

  • We will probably eventually have an OXC-based formatter just for completeness' sake, but for us, that's just going to be down the road.

  • Your first point reminds me of the saying: make it work, make it right, make it fast.

  • Like, you made it work with Vite, you already have the grand vision, and all of this work now is truly making it right, and making sure the pipes make it fast.

  • Yeah.

  • Yeah.

  • In a lot of ways.

  • Yeah.

  • I think the first stage of Vite really was like, let's just make things work and make it better than the status quo.

  • But underneath, there might be a lot of, you know, hacks and things we want to improve in the future.

  • And now it's the proper time to do it.

  • So I've developed a few Vite plugins as I've gone along.

  • I've done a lot of static site generation and I've rebuilt Storybook a few different times.

  • And most of those things usually come down to like, I need to make a very intense plugin for the system.

  • And the one thing that kind of trips me up a lot of the time is that the plugin API for Vite is the same as Rollup's, but it only has a select few hooks.

  • And I feel like those hooks are probably excluded because of speed concerns.

  • With the advent of like Rolldown, will we see the plugin API start to like open up a little bit?

  • Will the speed unlock more power that we can give to plugin devs?

  • I'm curious, what are the hooks you were looking for that don't work in dev?

  • There's just like a handful, like four or five of them that like I've always want to use, but they just don't run in dev mode because they're not there.

  • So yeah, just wondering, will the new power expand to more stuff for us to do?

  • So this is an interesting one because, so first of all, with Rolldown and in a future version of Vite, dev and prod will become more consistent.

  • They will be using the same plugin pipeline and so the plugins will work exactly the same between dev and prod.

  • But the interesting part about having JavaScript plugins running in a Rust-based tool is there is the overhead of sending data back and forth between the two languages because they don't share memory by default.

  • So in most cases, when you send things across the wire, you have to clone them in memory.

  • And that's probably one of the biggest bottlenecks for speed.

  • So let's say if you use raw Rolldown without any JavaScript plugins to bundle 20,000 modules, you can do it in 600 milliseconds.

  • But if you add a couple of JavaScript plugins, you can slow it down by maybe two to three times.

  • And this is directly correlated to the number of modules, because every hook of every plugin has to be called once for every module.

  • So let's say you have a plugin with three hooks, then we're doing 60,000 Rust to JS calls.

  • And that's not cheap.

  • Even if you don't do anything in a hook, it's still quite a bit of a cost.

  • So we're looking for ways to optimize that.

  • So first of all, the base layer is compatibility: we want all the existing Vite plugins to be able to continue to work the same way.

  • It might compromise the ideal performance to a certain extent, but let's make things work first.

  • And then the next step is for Vite itself internally, we've actually already ported some of the Vite internal logic over to Rust.

  • So right now it's only for builds.

  • So when you do the production build, you can enable the native equivalent of some Vite internal plugins.

  • So that allows us to essentially get Vite build speed down to maybe two to two and a half times slower than raw Rolldown without any JavaScript plugins, which I think is actually decent.

  • And in fact, that's already kind of on par with other pure-Rust bundlers.

  • And then we are doing a few things to improve this further; there are two different ways you can think about it.

  • One is reduce unnecessary Rust to JS calls.

  • So in typical rollup plugins, we do a lot of things like, in the transform hook, we look at the ID.

  • If the ID ends with a certain extension, we do this, otherwise we just return early.

  • This is actually wasteful if you're using the plugin in a Rust bundler, because the bundler does a Rust to JS call, figures out it actually doesn't need to do anything, but has already paid the cost.

  • This is why esbuild's plugin API actually requires you to provide a filter upfront, before a hook is ever applied.

  • And we're going to essentially introduce something similar.

  • So it's going to be an extension on top of the current rollup syntax.

  • It's going to be backward compatible: when you use the object format for your hooks, you specify the logic in a handler property, and then you can have a filter property to say, only apply this hook if the ID matches this regex, or something like that.

  • So we can essentially determine whether this hook even needs to be called for a certain module before we even call it.

  • So we don't even need to cross the Rust to JS bridge.

  • That's one thing.
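
As a sketch of this hook-level filter idea (the exact filter syntax in Rolldown/Vite may differ; treat the field names here as illustrative), a plugin using the object format might look like this, contrasted with the early-return style described above:

```javascript
// Sketch of the object-format hook with a filter. The filter is plain
// data, so the bundler can evaluate it on the Rust side and skip the
// Rust-to-JS call entirely for modules that don't match.
const vueTransformPlugin = {
  name: 'example-vue-transform',
  transform: {
    filter: { id: /\.vue$/ },
    handler(code, id) {
      // Only ever invoked for ids matching the filter above.
      return { code: `/* compiled from ${id} */\n` + code, map: null };
    },
  },
};

// Contrast: the early-return style. The bundler must still cross the
// Rust-to-JS boundary for every module, only to find nothing to do.
const earlyReturnPlugin = {
  name: 'example-early-return',
  transform(code, id) {
    if (!/\.vue$/.test(id)) return null; // wasted round trip
    return { code, map: null };
  },
};
```

The point is that the filter is declarative data rather than code, so matching can happen before the JavaScript side is ever invoked.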

  • The other thing is we're seeing a lot of plugins in the wild doing very similar things.

  • For example, in the transform hook, a lot of plugins take the incoming source code, parse it using a JavaScript parser in the hook, do their own semantic analysis or AST traversal, then use something like magic-string to alter the code and generate new code, and also generate the source map and pass it back to the bundler.

  • So a lot of work done in JavaScript, not leveraging the Rust parts.

  • And then the Rust side now needs to take that source map and do the source map merging.

  • And source maps are also quite heavy to pass across the boundary, because they're bigger objects than the source code itself.

  • So we're trying to design APIs to make these kinds of standard AST-based transforms as efficient as possible.

  • So imagine instead of getting the string of the code, you actually get the pre-parsed AST directly.

  • And instead of manipulating the code and generating the source map in the JS side, you still do the same kind of magic string-like operations, say like append some code here, remove the code here.

  • But these operations are buffered and sent as very compact instructions back to Rust.

  • And the heavy lifting of code manipulation, string manipulation, and source map generation is actually done by Rust on the Rust side.

  • So the only work you're doing on the JS side really is looking at the AST and recording the operations you want, and then telling the Rust side to do them.

  • So I think this, in fact, can cover a very wide range of custom transform needs.

  • Because we were actually able to build Vue's single-file component transform entirely using this model in JavaScript.

  • And if we get this API natively from the bundler, then we can actually offload a lot of the heavy lifting to the Rust toolchain instead of doing it in JavaScript.

  • I don't even need to install a parser dependency myself.

  • So this is the second part of it.
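
As a toy model of this buffered-instruction idea (this is not the actual Rolldown API; the class and function names here are made up for illustration), the JS side records magic-string-like edits as plain serializable data, and a stand-in for the Rust side applies them in one pass:

```javascript
// Hypothetical recorder: the JS plugin never touches the string itself,
// it only accumulates compact edit instructions.
class BufferedMutations {
  constructor() {
    this.ops = [];
  }
  overwrite(start, end, content) {
    this.ops.push({ type: 'overwrite', start, end, content });
  }
  remove(start, end) {
    this.ops.push({ type: 'remove', start, end });
  }
  appendLeft(pos, content) {
    this.ops.push({ type: 'appendLeft', pos, content });
  }
}

// Stand-in for the native side: applies the instruction list to the
// source (a real implementation would also emit the source map here,
// keeping that heavy work out of JavaScript).
function applyMutations(source, ops) {
  // Apply edits from the end of the string backwards so that earlier
  // offsets stay valid as the string changes length.
  const sorted = [...ops].sort(
    (a, b) => (b.start ?? b.pos) - (a.start ?? a.pos)
  );
  let out = source;
  for (const op of sorted) {
    if (op.type === 'overwrite') {
      out = out.slice(0, op.start) + op.content + out.slice(op.end);
    } else if (op.type === 'remove') {
      out = out.slice(0, op.start) + out.slice(op.end);
    } else if (op.type === 'appendLeft') {
      out = out.slice(0, op.pos) + op.content + out.slice(op.pos);
    }
  }
  return out;
}
```

Because `ops` is just an array of small objects with numbers and short strings, it is far cheaper to ship across a language boundary than a full rewritten source string plus its source map.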

  • And then if we go a bit deeper, this is further down the line, we're also investigating whether it's possible to send ASTs from Rust to JavaScript with as little memory cost as possible.

  • So this is something called like zero cost AST transfer.

  • Theoretically, it's already possible using a shared array buffer.

  • And then we need a custom deserializer on the JavaScript side that understands the memory layout and can lazily read the AST from the flat array buffer as you need it.

  • One of our team members, Overlook Motel, actually has a proof of concept of this working already.

  • But getting this properly into OXC is going to be quite challenging.

  • So this is something we're eventually going to do down the road.

  • But the proof of concept shows that this is possible, right?
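
A heavily simplified sketch of the flat-buffer idea (the 10-byte node layout here is invented for illustration; OXC's real memory layout is far richer, and in the real design the buffer would be a SharedArrayBuffer written by Rust rather than filled from JS):

```javascript
// Each node: [type: u8][start: u32][end: u32][childCount: u8] = 10 bytes.
const NODE_SIZE = 10;
const TYPES = ['Program', 'VariableDeclaration', 'Identifier'];

// Stand-in for the Rust serializer: encode a node into the flat buffer.
function writeNode(view, index, type, start, end, childCount) {
  const off = index * NODE_SIZE;
  view.setUint8(off, TYPES.indexOf(type));
  view.setUint32(off + 1, start);
  view.setUint32(off + 5, end);
  view.setUint8(off + 9, childCount);
}

// Lazy accessor: no full JS object graph is materialized up front;
// each field is decoded from the buffer only when it's actually read.
function nodeAt(view, index) {
  const off = index * NODE_SIZE;
  return {
    get type() { return TYPES[view.getUint8(off)]; },
    get start() { return view.getUint32(off + 1); },
    get end() { return view.getUint32(off + 5); },
    get childCount() { return view.getUint8(off + 9); },
  };
}

// Fill a tiny two-node "AST" from JS just to exercise the reader.
const buf = new ArrayBuffer(2 * NODE_SIZE);
const view = new DataView(buf);
writeNode(view, 0, 'Program', 0, 9, 1);
writeNode(view, 1, 'VariableDeclaration', 0, 9, 0);
```

The saving comes from avoiding a full deserialization step: a plugin that only inspects a few nodes never pays for decoding the rest of the tree.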

  • And there are some exciting things in the JavaScript specs.

  • For example, there's the shared structs proposal.

  • That's quite new.

  • That's still stage one.

  • It's also kind of exciting if you can use shared structs to properly share state across worker threads and maybe Rust.

  • So what this unlocks is proper parallelization of JavaScript plugins.

  • Right now, when you use JavaScript plugins with a Rust bundler, because the JavaScript plugin still runs in Node.js process, it's still single threaded.

  • One thing we've done is trying to use multiple worker threads to parallelize the workload.

  • But the downside of this is, for example, if the plugin uses a heavy dependency like Babel, each worker thread needs to initialize Babel and allocate the memory for it.

  • And in a lot of cases, the gain is smaller than you might think.

  • Because the initializing cost of each worker thread just offsets so much of the performance gains you get.

  • It's challenging.

  • There are some things we played around with.

  • For example, instead of spawning the worker threads through the Node.js main process and then getting the data back and sending it to Rust, we let the worker threads send the data directly back to Rust.

  • I think some of this might be useful, but applying them blindly for every plugin may not end up being as beneficial as we think.

  • So there's still a lot of work that we're exploring in this area.

  • But I'm kind of optimistic. The long-term goal for us is to tackle this: still allow users to easily write JavaScript plugins, but without severely compromising the overall performance of the system.

  • Yeah, I do think that this is one of the hardest areas for all the reasons you've outlined.

  • And the temptation is real to just say, sorry, no more plugins in JavaScript.

  • It's also, you know, there's a big ecosystem churn cost there, which is a pretty big downside.

  • Yeah, I kind of want to mention: we do want to start with getting the plugins to work, right?

  • Then we start having a recommended subset, or recommended best practices, for writing performant JavaScript plugins for Rust-based bundlers.

  • So maybe we'll have lint rules to help guide you in writing these plugins, or we can have runtime warnings.

  • Like, one thing we actually did is we used OXC to implement a small tool that can look at your plugin source code, for example your transform hook, and notice that you're doing something like testing the ID against a regex and returning early, right?

  • So this early return is a sign that the transform hook is actually eligible for filter optimization.

  • So we detect such cases and give you a soft recommendation, saying this plugin hook can be refactored to use the filter API to make it more performant.
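
A toy, regex-based version of such a checker (the real tool parses the source with OXC rather than pattern-matching strings, and `suggestFilter` is a made-up name for illustration):

```javascript
// Heuristic check: does a hook's source contain an early return guarded
// by a regex test on the module id, e.g. `if (!/\.vue$/.test(id)) return;`?
// If so, the hook is likely eligible for the declarative filter API.
function suggestFilter(hookSource) {
  const earlyReturn = /if\s*\(\s*!.*\.test\(\s*id\s*\)\s*\)\s*return/;
  if (earlyReturn.test(hookSource)) {
    return 'hint: this transform hook looks eligible for the filter API';
  }
  return null;
}
```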

  • So it sounds like there's going to be a divide here at some point: there's a bunch of legacy rollup plugins that still work in the new architecture.

  • But then as we go on, kind of like a recommended V2 of all of those that use these new APIs to make things really fast.

  • Yeah, and in a lot of ways, we do also want to make most of the common things you need to do built in.

  • For example, if you use Rolldown today, you don't need the CommonJS plugin because it just works out of the box.

  • You don't need the node resolve plugin because it just works out of the box.

  • You don't need TypeScript transforms, you don't need JSX transforms.

  • All of these things just work without plugins.

  • So in a lot of ways, Rolldown's abstraction model is a bit more like esbuild's, right?

  • It's a bit more batteries-included, because that's also the most pragmatic way to get the best possible performance.

  • Yeah, it makes a lot of sense.

  • I'm really interested to see what y'all end up coming up with for the AST transforms, because I feel like this is a pretty common problem: if you need to do very performant AST transforms, you have the added problem of going across language boundaries.

  • This just reminds me of a random project that I saw the other day called like render from this guy named Eric Mann.

  • And it's like, it's a byte code that runs in JavaScript that like is a rendering engine or whatever.

  • And, I don't know, there are a lot of interesting things in this space when you start thinking about how to make marshalling and serialization very, very fast.

  • So this will be really cool.

  • I'm excited.

  • Well, maybe as we're getting close to wrapping up the episode, let's think about or talk a little bit about what the future looks like.

  • So you were saying earlier that Vite is already pretty prolific.

  • So as your starting point, you have sort of this broad baseline that, you know, say Rome was missing when it was starting.

  • But there's still a lot of work to do.

  • So what do you think the priorities going into the project will be?

  • And, you know, what is the time horizon you're anticipating for, say, some of your first product releases? What does that look like for you?

  • Yeah.

  • So this is obviously going to be a quite long process.

  • I think right now our priority is getting Rolldown to sort of a beta status.

  • There's a lot of alignment work that we need to do right now, because the goal is to be able to swap out esbuild and rollup, right? The mission of Rolldown is to unify the two.

  • So in terms of how they handle, for example, CJS/ESM interop, and how they handle a lot of the edge cases, they need to be as consistent as possible.

  • And we need to basically take the test cases of both bundlers, run Rolldown against those test cases, and try to pass as many of them as possible.

  • And then for the ones that aren't passing, we need to analyze each one and see whether it's just an output difference or more of a correctness issue, right?

  • So we kind of have to label them one by one.

  • And if there are inconsistencies between the two, we need to make a call on which one do we go with.

  • So this is just a lot of grunt work, but it's necessary before we can consider it, you know, production ready.

  • In parallel, OXC is also trying to finish the syntax down-leveling transforms.

  • Like some of the hardest ones are like the async generators and stuff like that.

  • But it's well underway.

  • So I think by end of this year, we're hoping to get rolldown to beta status and have the transforms also more or less completed.

  • So that's a good milestone to hit.

  • So that also primes the entire toolchain for general testing in Vite itself.

  • So rolldown-based Vite already has a work-in-progress branch where we pass over 90% of the tests.

  • Some of the tests that aren't passing are more or less blocked by things like legacy mode, which we've been kind of intentionally punting on, because I'm not sure how many people will still be using legacy mode by the time we actually ship a stable version.

  • So in a way, rolldown-Vite is somewhat usable.

  • Like, we are actually already able to use it to power VitePress to build the Vite docs.

  • But we want to wait until rolldown is ready.

  • We have all the transforms ready.

  • Then we'll have an alpha/beta release of rolldown-based Vite and have people test it.

  • So this version of rolldown-based Vite is strictly just replacing esbuild and rollup with rolldown.

  • So feature equivalence, not really many new things.

  • It's mostly like we want to make sure the transition is smooth, frameworks can move over to the new one.

  • So that will probably also take some time.

  • So in that same process, we do eventually want to move Vite over to a full bundle mode that's entirely powered by rolldown.

  • As nice as unbundled ESM is for smaller projects, we've run into scaling issues in larger projects.

  • Especially when you have, say, more than 3,000 unbundled modules in the dev server.

  • So we believe a fully bundled mode is still necessary.

  • And it also allows us to get rid of some of the issues. For example, optimizeDeps can be completely eliminated.

  • So all your dependencies and your source code will go through the exact same transform pipeline for dev and for production.

  • So consistency will improve greatly.

  • And for the metaframeworks, the one important API is ssrLoadModule.

  • In the new environment API, it's called environment.runModule, something like that.

  • So internally, it's currently still using a JavaScript-based transform.

  • That will also be replaced by a Rust-based implementation that's exported by rolldown.

  • So that you'd use the same bundling mechanism for your code running in the browser and for code running in other environments, for example Node.js SSR.

  • That also gets rid of the configuration needs for SSR externals.

  • So removing optimizeDeps and removing SSR externals are the two major goals of the full bundle mode, along with greatly improving page load performance in larger projects.

  • So that's kind of down the road, probably Q2 next year.

  • And we will likely kick off some product development in parallel once we get the rolldown-based Vite into beta status.

  • Well, that sounds like a whole mountain of work that you guys have to do.

  • So I wish you guys luck on that.

  • That also wraps it up for our questions for this week's episode.

  • Thanks for coming on, Evan.

  • It was a pleasure to have you back on.

  • And it's exciting to see how much progress both the projects have had and what the new project holds.

  • Thank you.

  • Over to Chad.

  • Yeah, super excited for what y'all do.

  • We had a few other episodes where we talked to people building infrastructure tooling.

  • We had the Biome team on a little while back.

  • We had Charlie Marsh talking about Ruff and uv in the Python ecosystem.

  • And of the bets that we've taken in this space, this really seems like the one that's most likely to succeed, in my opinion.

  • It's always a big bet.

  • But I have a lot of faith in y'all.

  • So really excited to see what you do and wish you all the best.

  • Thank you.
