Is it you, specifically, personally developing the Rust implementation for Hypercore?
As of now, I think I'm the main, I'm basically the only developer doing it.
There's been other people in the past, and now it's all been handed over to me.
Damn.
Yeah.
That's a lot.
There's been waves of interest.
And I think, yeah, going back a little further around 2020, and then there was a lot of implementation
work.
And a few years ago, whenever I picked it up, one person was working on it, and then I
was like, I think I can push this over the finish line, and that's what I've been doing.
Wow.
How long has the finish line been? Like, two years?
I didn't really know what I was getting into.
And I hear you.
Yeah.
Welcome to today's episode of Solarcast.
And in this episode, we will be talking to Blake from the Hypercore protocol.
And specifically, Blake is working on, as you might have heard already in the intro, a
Rust implementation of Hypercore.
And Hypercore is part of a big ecosystem.
And shout out to the previous podcast that was also touching on this, the Dat ecosystem
podcast.
And Dat is also a previous, older name for the same protocol.
Specifically, Blake also talks us through some, for me at least, and I think for many,
new use cases for the Hypercore protocol.
And I learned a lot in this podcast, and I hope you find it as interesting as I did.
With no further ado, let's dive in.
Thank you so much for joining me here, Blake.
Thank you.
And you are in a very soon to be snowy New York.
You just told me.
Yeah.
I'm here in Ridgewood, Queens, in New York City.
Cozy.
And we're here to talk about your project today, which now is all in the hands of you.
But historically has been carried by many different people.
I'm hearing from what you've shared.
And it is the Rust implementation of Hypercore.
Yeah.
I picked it up a few years ago whenever I was in between jobs.
And at the time there were a few other people working on it.
And since then I've become the main developer.
From having seen two different Rust implementations of Scuttlebutt started and not always finished,
I've understood that it's quite a, like, it's quite a tremendous task that you've
picked up.
But before we dive into a little bit of what this means, some of the listeners might not
even know about Rust as a programming language.
For those who don't know, it's a programming language.
And it's a very particular programming language.
Before we dive into the first questions, do you just want to tell us what differentiates
Rust from other programming languages?
Yeah.
I love Rust.
I think like a thing that's talked about a lot is that it's a very secure programming
language.
It eliminates a whole class of like memory problems, which is good.
But I think a thing that is often overlooked that developers love is like how ergonomic
it is for software developers.
It's very modern.
And when they designed it, they thought about things like testing and building the documentation
and cross compilation and things like that, that have been kind of like bolted on to previous
programming languages.
So it has some really first class tooling that makes a lot of things super easy.
And yet it's a systems programming language.
So you could use it to write very low level things like for microcontrollers or writing
operating systems and stuff like that.
But it's also used for everything up to web programming and backend web services.
I'm amazed to hear you use the word easy in the same sentence as describing Rust because
from what I've understood, not everyone finds it very easy to get into.
Yeah, that's true.
It has sort of some new concepts that aren't familiar from other programming languages.
So getting into it can be a little difficult.
But unlike with some other things, the complexity that's there, I think, is necessary.
And it highlights some things that could be bugs in other languages.
It definitely makes you think harder about how you use data.
But with that, it gives you a lot more guarantees about correctness, which I think makes things
easier in a certain way because there's fewer bugs and problems.
So there's a steep learning curve, but once you get there, it's smooth sailing.
Or easier sailing than a lot of other things.
Yeah, yeah.
I think it does require a lot more thoughtfulness in planning.
And for things like rapid prototyping, it's not always the best thing.
But if you're building something foundational, I think it's a really good choice.
Yeah.
And that's actually what you're doing right now.
Yeah.
With your project on rebuilding Hypercore.
So not everyone who's listening knows what Hypercore is, but it is, as you said, quite
foundational.
And it's been around for a decade now.
Yeah.
Yeah, Hypercore is really cool if people aren't familiar with it.
Maybe they are familiar with the BitTorrent protocol, or maybe people have torrented things.
It's not so common anymore, but it used to be the way that everybody got their music
before Spotify if you didn't go to the store and buy it.
So BitTorrent is just a way for people on the internet to share static files with each
other in a peer-to-peer way, meaning from one person's computer to another without
having a centralized server or data center in between you.
And Hypercore is different because instead of sharing a static file, you can share a
file that you can change.
So you can do things like share a directory that you could add files to or share a more
complicated data structure.
And those are the kind of things that we want to do with Hypercore and use those features
to build more rich peer to peer applications.
This is really also interesting to hear your take on Hypercore because I've encountered
it in so many different areas.
It's got a very rich history at this point.
And previously it was known as the Dat protocol.
So from your perspective, why Hypercore?
Why does it feel important to you?
Or why does it feel foundational for you?
That's a good question, especially I know that there's some other peer to peer things
in this kind of space like IPFS.
And I've talked to some people that say things like why not IPFS?
And the way I think about it is, you could think of IPFS as kind of like a hash table,
where you have key-value storage.
But with IPFS, the things that you're storing aren't mutable.
What does mutable mean?
Oh, like it's difficult.
If you want to store data in IPFS by default, it's like not easy to change that data or mutate
it.
And with Hypercore, it's more like a list of things.
And in Hypercore, you can have a list that you can add stuff to.
So with IPFS, if you have a thing and you're like, you're trying to build an application
and you want to have a way to say, give me newer things, like give me new things that
have been added, that's not really built into the core concept of IPFS.
It's more like you have a key and you want to go get another thing.
And maybe that has other keys that you can go to link to other things.
So it's good for things like that.
But for peer-to-peer data that changes that you want to be able to get newer stuff, Hypercore
is good for things like that.
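The difference can be sketched in a few lines of Rust. This is an illustrative toy, not the hypercore crate's actual API: an append-only log where a reader remembers how many entries it already holds and asks only for newer ones.

```rust
// Toy append-only log in the spirit of Hypercore: writes only append,
// old entries never change, and a reader can ask "what's newer than
// what I have?". Purely illustrative, not the real hypercore API.
struct Log {
    entries: Vec<String>,
}

impl Log {
    fn new() -> Self {
        Log { entries: Vec::new() }
    }

    // Appending is the only way to write.
    fn append(&mut self, entry: &str) {
        self.entries.push(entry.to_string());
    }

    // "Give me everything newer than the `have` entries I already hold."
    fn since(&self, have: usize) -> &[String] {
        &self.entries[have.min(self.entries.len())..]
    }
}

fn main() {
    let mut log = Log::new();
    log.append("post 1");
    log.append("post 2");
    // A reader that already holds the first entry fetches only the new one.
    let fresh = log.since(1);
    assert_eq!(fresh, ["post 2"]);
    println!("fetched {} new entries", fresh.len());
}
```

In a content-addressed store like IPFS this "give me newer things" query has to be built on top; here it falls out of the data structure itself.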
And for the applications that I was originally interested in whenever I started playing
around with Hypercore, which was building a peer-to-peer RSS protocol, it was a much
more natural fit.
I'm super curious about this peer-to-peer RSS protocol thing.
Also, in part because like starting to make podcasts and engaging more with RSS feeds,
it's become very clear how essential they are in a more community approach towards content
creation online.
Yeah.
First of all, what are RSS feeds?
Why do they matter to you?
And then let's go into it.
Yeah.
RSS feeds are, well, it's a web standard.
It stands for really simple syndication.
And I think it was standardized in the mid-2000s.
I know that Aaron Swartz was part of the standardization process, when he was
a child.
Rest in peace.
And basically it's a way to share a list of things that you add new things to.
Normally things like blog posts, where you write a blog post and it goes at the top, and
you want people to be able to get updates from your blog; or podcasts are
another good example.
Or anytime you want to share a feed of data, it's best used where you're just adding things
to it, but it is possible to change older entries in it.
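For reference, an RSS feed is just an XML document served from a web server, with the newest item conventionally at the top. A minimal sketch (all titles and URLs here are made-up placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Podcast</title>
    <link>https://example.org/</link>
    <description>A feed is a list; new items go on top.</description>
    <item>
      <title>Episode 2</title>
      <enclosure url="https://example.org/ep2.mp3" type="audio/mpeg" length="12345"/>
      <pubDate>Tue, 02 Jan 2024 00:00:00 +0000</pubDate>
    </item>
    <item>
      <title>Episode 1</title>
      <enclosure url="https://example.org/ep1.mp3" type="audio/mpeg" length="12345"/>
      <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
    </item>
  </channel>
</rss>
```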
I didn't actually know that it was possible to change older entries, but it makes sense
because I can upload new podcast episodes from my old ones.
Yeah.
Or how does it work?
I think it's handled by the different clients differently.
So they might not know that they need to fetch old things.
Yeah.
I think the way a lot of people do it is, if they have a podcast, maybe the file that
they share is like a link to a podcast, and then they'll change what that link points
to, or edit a file behind the link, things like that.
Is that similar then to how it would work in Hypercore?
With RSS, usually you have a server that you're putting your RSS feed on and a bunch
of people with RSS clients would periodically check that server.
They'd have like a URL for your server and check it for updates.
With Hypercore, you could have something that looks very similar to a user from a client
perspective.
However, there wouldn't be a server, and the way that this peer-to-peer RSS feed would
be addressed wouldn't be through a URL.
It would be through a public key, which a user wouldn't really have to care about necessarily.
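A quick sketch of that addressing idea: Hypercore derives a "discovery key" by hashing the feed's public key, so peers can rendezvous around a feed without the lookup itself exposing the key. Here std's DefaultHasher stands in for the real keyed cryptographic hash; this is an illustration of the concept, not the actual implementation.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// A feed is addressed by its public key. Peers announce a hash of that
// key instead, so finding each other doesn't reveal the key itself.
// DefaultHasher is a stand-in for the keyed hash Hypercore really uses.
fn discovery_id(public_key: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    public_key.hash(&mut h);
    h.finish()
}

fn main() {
    let key = b"ed25519-public-key-bytes"; // placeholder key material
    // Two peers hashing the same public key compute the same rendezvous id.
    assert_eq!(discovery_id(key), discovery_id(key));
    println!("rendezvous id: {:x}", discovery_id(key));
}
```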
Is it easy to care about it?
I care about it because it makes it functional.
But I do think it's good that a normal user shouldn't have to care about the small differences,
because it should be easy for people to use.
Yeah.
But yeah, from a publisher's perspective, I think you can have something that's very
similar to the same flow, where you write a blog post, publish it, and it updates your
RSS feed.
The only thing that might be different is for this centralized RSS thing, you have a server
that's running.
And with the peer-to-peer one, you want to make sure that your content is shared with
other peers.
And so if I were just to share my RSS feed off my computer, press publish, and turn it
off, somebody might not actually ever read it.
You want to start giving that data out to the swarm of peers because once it's out there
and people are reading it, then you can turn off your computer and it would be fine because
other people have it and they're able to share it with each other.
Yeah.
So does it become like, I'm hearing into this now, but does it become more resilient then?
Yeah.
It becomes more resilient for things where the content is more in high demand.
And if you want to be certain that your peer-to-peer thing is always available, you can
have a server, or you can have a seed box, which is what the equivalent would be in BitTorrent
land, where you have a server that's constantly seeding your thing, so you don't have to worry
about keeping the computer that you wrote your blog post on running all the time.
Once something becomes popular, you don't really have to worry about that because there
will be different people reading it.
And as long as one of those computers is on, then they'll be able to share with each other.
Yeah, because each computer contains its own copy of the media.
But it's not magic.
Somebody still needs to be online; the data needs to be alive somewhere for things to be shared with
each other.
So you can't always just count on it to have somebody store your data somewhere for free.
So, okay, let's go back to the RSS comparison then.
When I'm publishing something, or when someone is publishing something to RSS, they create
a copy on a server.
And that's the server that's hosting their RSS feed.
Is that how it works?
Yeah.
From there on, is that shared to multiple other RSS servers?
For regular RSS, they would publish it to an RSS server, which is just like a normal
web server.
And RSS is just an XML format served from a web server.
Yeah.
And other RSS clients, like someone on their phone or their computer, whenever it automatically
updates their feeds, it would pull in your new content.
But then those clients don't automatically share them with other clients.
But with like a peer-to-peer based RSS protocol, those clients would share the data with other
clients.
And so the central server would become less important.
And those clients would be any user I presume.
Yeah.
Like any listener, anyone who wants to engage with the content would be in server terms,
a client.
Yeah.
And that's where we come back to the peer-to-peer architecture.
And especially for things where you're sharing a lot more data, such as a podcast.
I don't know, your server, you wouldn't have to pay for as much bandwidth on your server
if you're serving your podcast in a peer-to-peer way.
That would all be distributed between other peers.
Let's go back a bit, because now we've delved into a little bit of the nitty-gritty of
how it works.
How did you find Hypercore?
How did you end up working on Hypercore?
And how did you end up building a Rust implementation and why?
Like, what was that journey for you?
I think I first heard about the Dat project in the late teens, whenever I was living in
San Francisco.
I used to hang out at a place called Noisebridge there, which is like a hacker space that I
love.
And people were there talking about it.
Someone explained it to me and I thought it was cool.
And I had seen it around.
I know that it was affiliated with like some Scuttlebutt people who I saw around Oakland
and went to Mozilla's conference in 2019.
And there was like a workshop on how to use Beaker browser, which is like a web browser
that was based on Hypercore.
And I thought that was really cool.
I just must ask, did we meet there?
Possibly.
We can be, right?
Yeah, I was.
Because I was there as well.
Okay, that's cool.
Yeah, I went.
I had a privacy browser extension called Privacy Possum that used heuristics for
blocking trackers.
And Mozilla invited a bunch of privacy-oriented web extension people to come to a little
mini conference there, because Google was changing the way their web extensions worked.
Ostensibly they said it was for security, but it also had this nice effect for them
where it made these extensions that blocked tracking harder to do.
So we all went there for a meeting.
But yeah, we could have met there.
It's kind of like a funny one, right?
Because like Firefox is what 70% funded by Google, right?
Yeah, something like that.
Because Firefox generates a lot of traffic for Google.
And I think that's how they get a lot of their money.
But as of now, Firefox has not switched over to those web extension APIs.
And that's why like you can still use nice ad blockers on Firefox that'll stop you from
seeing YouTube ads and things like that.
But you can't do that on Chrome.
Go Firefox.
It's amazing that somehow, although we're all tied into this web of funding, we can keep
privacy alive in some spaces of the web.
And that's also something that Hypercore does, I'm assuming.
Yeah, definitely.
With peer-to-peer applications, you are revealing your IP address to other peers that
you're connecting to; however, it is an encrypted connection between those other
peers.
So no one knows like what you're actually sending just by like watching your traffic
over the internet.
I think it's a different privacy story, because, back to the RSS thing, this RSS server
knows everybody that is downloading the data, at least by their IP address.
And in the peer-to-peer story, it's just the people that you're sharing the data
with.
So maybe one peer won't necessarily see everybody that's sharing the data.
You see your local environment where you're engaging.
But did you meet Paul then?
Because Paul, Paul Frazee, was also at the...
Yeah, I think he gave the workshop.
And then you ended up with Beaker browser. Is it defunct by now?
Or is it still going?
Um, I don't know actually.
I don't know.
I haven't used it since then.
I thought it was really interesting, but I didn't have like an immediate use case for
it.
And that's whenever I started working on Rust Hypercore. That gets back into why Hypercore
Rust stuff and not Hypercore JavaScript stuff, at least for me.
Which I don't know if I should talk about.
Please do dive into it.
Okay.
Yeah.
I implemented this peer-to-peer RSS protocol in JavaScript, since the main implementation
of Hypercore is in JavaScript.
I made a little client for it, and a script for ripping podcasts off of an existing RSS
feed and sharing them in a peer-to-peer way.
But I wanted people to be able to use this RSS protocol.
And I thought a good way to do that would be approaching existing RSS clients and giving
them a library that they could use to integrate, so that you could just get in your podcast
client or your RSS client and be able to get a peer-to-peer feed of podcasts or whatever
RSS content.
However, you can't just give a JavaScript library to an application like AntennaPod,
which is on my phone and written in Java, because of the way that JavaScript works;
you can't necessarily embed it easily within another language.
But with Rust you can, because Rust is a low-level language: you can compile it to
a C library, which is kind of the lingua franca of all programming languages; every
programming language has a way to call a C library.
And so if I could abstract this peer-to-peer protocol into a library in Rust, then I
could give that to people to integrate into things like a podcast client or an RSS
client.
And I think that's really important for adoption; trying to meet people where they're
at, with the tools that they have, is being able to integrate with them.
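As a sketch of what that integration boundary looks like: Rust can expose functions with the C calling convention, and a crate built as a `cdylib` becomes a shared library that Java, Swift, or Python can load. The function below is hypothetical, purely to show the shape of the ABI, and is not part of rust-hypercore's actual API.

```rust
// Hypothetical example of exposing Rust over the C ABI. Built with
// crate-type = ["cdylib"] in Cargo.toml, this becomes a shared library
// other languages can load. Name and signature are made up.
#[no_mangle]
pub extern "C" fn feed_has_block(length: u64, index: u64) -> bool {
    // A feed of `length` entries holds blocks 0..length.
    index < length
}

fn main() {
    // Callable from Rust too; a C caller would declare:
    //   bool feed_has_block(uint64_t length, uint64_t index);
    assert!(feed_has_block(5, 4));
    assert!(!feed_has_block(5, 5));
    println!("ok");
}
```

`#[no_mangle]` keeps the symbol name stable so a Java or Python binding can look it up in the compiled library.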
So yeah, and I think that the current JavaScript implementation is run by this organization
called Holepunch, and they're great.
And I think that they have a reasonable approach to this problem too, where they've built
sort of a runtime for the whole Hypercore ecosystem.
And it's really cool.
It's called Pear, and it's sort of like an application that you can build other Hypercore
applications inside of, and share a Pear URL, and run JavaScript Hypercore applications
within it.
And it also distributes the applications in a peer-to-peer way, and they get updated
automatically, in a peer-to-peer way; it's pretty cool.
And they have a chat app which is sort of like the premier application on it, called
Keet, like K-E-E-T.
And the runtime itself is called Pear, like the fruit.
I think they're riffing off "peer".
Yeah, it's very cute and sweet.
What's it called? Not a metaphor, but a pun.
I had a few conversations, one with Sarah Path a while ago, and then back in the day I
met some of the people from Hypercore, or no, from the ones developing Pear right now.
And what's the name of their company again?
Holepunch.
Yes, Holepunch.
So basically, as far as I've understood it, the Dat slash Hypercore ecosystem has had
a lot of different stages of evolution, of people, of focuses.
And as you're mentioning, now it's with the Pear protocol and with Holepunch,
and also your efforts on porting Hypercore to Rust.
And so I'm wondering, if you look out on the landscape of Hypercore slash Dat, what
is bubbling?
What is active?
What is the state of things?
That's interesting.
I think I have been in my own little area for a while.
I am on Keet, and I talk to some people in there; there's a Rust Hypercore channel where
I post updates, and there's occasionally some people that have been helping
recently.
But I'm not too tapped into what else is going on.
I have been somewhat single-minded in my focus of building this project.
As you need to be.
Yeah.
And I feel that I have a very clear vision of what I want to do.
And so I'm just plowing through and doing that, because I think that
Hypercore could be something very infrastructural for a lot of use cases where we're not
using peer-to-peer today but could.
I'm writing a blog post about this, but recently the Open Source Security Foundation and
a bunch of programming language foundations wrote a joint statement about how hosting
package registries, the things that, whenever you're using a programming language, you
download the libraries you're using from, is becoming so expensive, partially because of
the rise of AI and also continuous integration systems constantly downloading things from
them; hosting them is becoming really, really expensive.
And there's also new compliance and stuff that they're having to deal with.
So their problem is like they are sharing a whole bunch of content with a whole bunch
of people.
Taking the example from the post I'm writing: the Rust package registry,
the thing you download all of your libraries from, is called crates.io.
And there's billions of downloads for these things on there.
Basically, I'm saying that if there was
a peer-to-peer solution for this, it would dramatically reduce their costs.
And in the Open Source Security Foundation's proposal, a lot of the suggestions were
that we need to partner more with commercial entities, and have tiered access
and ways for commercial entities to pay for these things, which seems like a fix.
It would definitely like fix some things.
It also might bring up certain conflicts of interest.
These foundations are nonprofits, but not necessarily in the super benevolent sense;
they are sort of also industry consortiums.
But I think we shouldn't just let them become totally like paid for by big companies.
Using Hypercore to build a peer-to-peer package registry,
I think, is a really realistic application; it's a technical solution to this problem
that would dramatically reduce costs, and then, in Rust's case, they wouldn't
have to take a bunch of money from Microsoft, if we could do something like this.
I 100% agree, and I think that's an amazing use case specifically for Hypercore as
well, since it is a collaborative thing for one large database.
Yeah, and in the way that software libraries are versioned, they are
sort of like RSS, where you add new things to each one and they're linked together.
And this is a data structure that you could represent in Hypercore in a pretty straightforward way.
I think this also touches on a fascinating topic, because if we look at
peer-to-peer architectures as a whole, there's this concept in Scuttlebutt we used to
call singletons.
(I realized that I said "singularity" here; what I actually meant to say was "singletons",
and also the next time I say it.)
And with the DHT system, the distributed hash tables, you have the singleton, where if
you created a Dat or Hypercore network in one place, and then a Hypercore network in
another place, they wouldn't necessarily be able to communicate with each other.
And that's a very essential quality. There's a researcher called Shapiro, and I'm
probably butchering the pronunciation there, who talked about this from the perspective of
calling them grassroots networks, where one part of the network pops up in one
place, and then it meets another part of the network, and then they can merge and communicate.
With DHTs that's not quite possible, right? So in some ways my own internal critique of
DHT-based protocols has been that they're not suitable for communication networks per se,
the natural, organic, ever-flowing kind. But as you're pointing out here, there are
some key implementations that these networks are fantastic for,
and I had never heard or even thought about Hypercore being used for
RSS-like structures or for package managers, so it feels like a very fresh take on these
protocols, or on Hypercore specifically. Thank you. Maybe it's fresh because I have been
a little bit isolated in my focus; I just haven't been talking, so it's good we're talking about it.
It is good we're talking about it. And also, I mean, as you mentioned, you're communicating
with other people on Keet; a lot of people have built a lot of different things on Hypercore.
Yeah. One of those implementations, which is, as far as I know, the largest of all networks
or infrastructures building on peer-to-peer, offline-first communication protocols, is Mapeo.
Mapeo? I haven't heard of that. This is incredible to me, and this also just goes to show
how distributed networking development is: as distributed in the practice of developing it as it is
in architecture. Yeah. Okay, but if you haven't heard about it, I'll send it to you.
It's developed by Awana Digital. Maybe they're back to being called Digital Democracy, I don't
know; they were kind of switching names for a while. Basically, Mapeo runs over Hypercore. Okay.
And I think it's the biggest active implementation of Hypercore in the world. Okay, interesting.
What is it? What does it do? So Mapeo is super freaking fascinating.
It's basically... Oh, it's like a mapping thing. Exactly. It's a mapping tool, and it started
in Ecuador, where, I forgot the name of the indigenous tribe, but there's an indigenous tribe,
or maybe a collaboration between multiple ones, that were mapping their territories, because oil
companies were trying to claim that they could come in and drill oil and ruin
the ecology. And then by using Hypercore they could go around and map the geolocations
of where they were active, to show later in court that these were actually their territories.
Yeah. Yeah, and that has spread. So now Mapeo is used by like 400 different communities around
the world. Most of them indigenous. So yeah. But this is mind-blowing to me.
Because one of the things that I'm fascinated by in my studies is how distributed organizing
happens. And it's so cool to see it in action: that you're here developing something
whose use case could be anything that's using Hypercore, because you're
basically making it easier to interface with Hypercore. Right? Yeah. Yeah, definitely. That's
another reason I wanted to choose Rust. I'm interested. I'm going to check out their
implementation later. Yeah. And going back to writing in Rust, another aspect of implementing
this: I've been using a tool from Mozilla. Basically, because Rust can be
compiled down to a C library, and almost every language can call C, they have a tool for
taking your Rust code and generating libraries for other languages like Python and Kotlin and Swift.
Just those three cover backend programming, building apps for
iPhone, and building apps for Android. And with a solid Rust implementation, we would have a good
Android and iPhone implementation that could be used on phones. Having the same
implementation used by these libraries, I think, prevents fragmentation and other
effects of different implementations not being able to work together. Yeah, definitely.
And it sounds like the work you're doing really just scales up the possibilities for Hypercore.
And that kind of brings me to another question: why do you think that there's not more
emphasis from others, for example Holepunch, on building Hypercore in Rust?
Um, I have talked to Mafintosh a little bit. I know that they have their own priorities that
they're focusing on. And the last I talked to them about this, they were working on a
system-level implementation written in C. And I think that has to do with something about
the way they're running stuff in the Pear runtime. But I would like to talk with them. I'd be happy
to talk with them more about that. And I think part of the reason is that I haven't been
marketing my work very much, and I haven't been talking to many people
about it. And sometimes, you know, whenever I have some work to do on this project,
I'm like, I could either tell people about it, or write about it, or write more code. And I usually
end up writing more code, which isn't always the best solution.
I hear you so much on that one, not personally, because I'm not a programmer. But from working on
NGI, next generation internet, one of the key aspects that made NGI functional
was that they tore down the whole system of making it complicated to advocate for oneself.
And instead, the applications were super simple. So that people who were like you,
who wanted to focus on writing the code could do that without having to put in like a whole
month on making applications to apply for funding or something like that.
Yeah, that's cool.
I think so. And good on you for doing the work and good on you for being here talking to me about it.
Yeah.
Because both are important, because I think this, like as someone who's been in this ecosystem for a
long time, although I've never worked directly with Hypercore, like you've already opened quite
a few perspectives on the purpose of Hypercore as a protocol, one of the purposes,
there's probably multiple. But it combats my main internal critique. So I'm happy to hear.
Yeah. Thank you. Yeah. Well, maybe now is about the time to plug:
if anyone is interested in this kind of thing, they should definitely reach out to me.
My GitHub is just github.com slash cowlicks, or you could email me.
It's just email at cowlicks.website. Yeah. And I'd be happy to work with people.
You also mentioned that there was like a group chat where you sometimes write updates and people
could join. How would people start joining that group chat if they wanted to?
Yeah, there is a group chat on Keet, which is the Hypercore chat application. And the
channel is called Rust Hypercore. And I will have to share a link to it. Or I think that with
Keet, now you can search for different channel names. But it should be fairly straightforward
to find, or if you reach out, I can connect people.
communicate and using it actively as you're developing, it's commonly known as dogfooding,
right? So how's that going? Well,
Keet is using the JavaScript implementation, not the Rust one, but it
is useful. And I should say that the Rust implementation is still somewhat nascent. It is
currently pinned to a previous version of Hypercore. And there's a few aspects of it that
need to be completed before practical usage can be there. Whenever I picked up the project a few
years ago, some key pieces were missing, like peer discovery, which means,
if you want to download a Hypercore and find other people with it, the way that you actually do that.
And so for the past year, I was implementing the peer discovery part, and then things like
encryption and replication. So you find a peer, which is good; then you need to
talk to them, so you need to create an encrypted connection to them. And then you need
to chatter about what Hypercores you have with each other and start replicating data between them.
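That flow can be summarized as a toy message exchange. The message names below are illustrative, not Hypercore's actual wire protocol: find a peer, open an encrypted channel, exchange what each side has, then request the missing blocks.

```rust
// Toy model of the replication chatter described above. Illustrative
// only; the real protocol's messages and framing differ.
#[derive(Debug, PartialEq)]
enum Message {
    Hello { encrypted: bool }, // handshake: secure the channel
    Have { up_to: u64 },       // "I hold blocks 0..up_to"
    Request { index: u64 },    // "send me block `index`"
}

// Given how many blocks we hold and how many the peer holds,
// ask for the gap.
fn requests(ours: u64, theirs: u64) -> Vec<Message> {
    (ours..theirs).map(|i| Message::Request { index: i }).collect()
}

fn main() {
    // After the handshake, the peer announces it holds 5 blocks.
    let announced = Message::Have { up_to: 5 };
    assert_eq!(announced, Message::Have { up_to: 5 });

    // We hold 3 blocks, so we request blocks 3 and 4.
    let wanted = requests(3, 5);
    assert_eq!(
        wanted,
        vec![Message::Request { index: 3 }, Message::Request { index: 4 }]
    );
    println!("requesting {} blocks", wanted.len());
}
```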
And so those pieces are all basically done in isolation. And I'm at the point where I am
bringing them together, and demos of getting a Hypercore,
discovering another peer on the network, and replicating data with it are possible. But
right now some work needs to be done on the actual API to make this actually useful, but it is
very close. Exciting. Okay, shout out. Anyone looking for a project and wanting to learn more about
Rust, or wanting to learn more about Hypercore, and think that they can contribute, reach out to Blake.
Blake Griffith, I guess, because I just read your blog, where your name is as well. So I'm
guessing it's out in the public. So, if you look at a timeline, what do you
think is, okay, theoretically, if you had an infinite
amount of money, and it was you working on it only, when do you think this could be done?
Um, with one developer? Yes. So I guess there's several
different stages of a timeline, right? Yeah. There's
two libraries in the JavaScript ecosystem called Hyperswarm and Hypercore.
Yeah. Having just those two, and having them be able to replicate data with each other,
I think would happen in the next month, because that's what I'm wiring together right now.
Oh my gosh. And then there's other aspects on top of it, right? And there's a
key-value store library built on Hypercore called Hyperbee, like a B-tree. And I've implemented that.
But after I have those parts working, there's things like updating it to be at feature
parity with the latest version of Hypercore. And there's also things that I've
skipped over in the implementation that I would love to polish off, like congestion
control, where, whenever peers are chattering with each other, a real implementation
would need to say: hey, my packets didn't send, and I need to control how much
I'm sending. A lot of small technical things like that should get polished off. Um, but
then I also want to re-implement this peer-to-peer RSS protocol on these things.
The core data structure, I don't think, is so complicated, but what would really make it useful is
having more polished tools for publishing on it, and having a library that's
designed to be integrated with other tools, like an existing RSS client. And I think those
are more long-term goals, like a year away, if this was my full-time job. Yeah.
So, all right, well, cheers; I'm crossing my fingers and holding my thumbs.
Holding my thumbs is the Swedish way of saying I wish you luck.
I'm Swedish.
But, that said, I think we're on a timeline of our own as well here for the podcast. So I
think we're about perfectly timed to wrap. Is there anything else that you would like to
add, or make a shout-out for? Anything?
Yeah, just hit me up. I'd love to hear from people, even if you're not a developer;
soon I'll be asking people to try things and tinker with things. And yeah, try out Keet, the chat
application. You can find my Rust channel on there. And I think that's it.
Thank you so much for joining. And if you want to see more from Blake, you can check out
cowlicks.website, which is your website. And you've also got a blog. And I also have a GitHub
Sponsors page, if anybody wants to chip in five bucks a month; I'd appreciate it. Yes. And then
we can get the Rust implementation of Hypercore sooner. I hope so, because I'm convinced by these
use cases, and I'd really love a peer-to-peer package manager over Hypercore. So thank you
so much for joining. Okay, thank you, Zell. Have a great day. Bye. Nice talking to you.
Solarcast