![](https://zemmy.cc/pictrs/image/81d88be5-9a99-430f-a32b-676d1089df8e.png)
![](https://lemmy.ml/pictrs/image/a64z2tlDDD.png)
Because grocery stores don’t make that data accessible to third party developers, otherwise someone would do what you’re suggesting and they’d risk you shopping elsewhere.
Developer, 11 year reddit refugee
I completely gave up torrents for Usenet, also using the *arr apps to get content for Plex. I completely saturate my bandwidth with Usenet downloads, I’ve never once received an ISP letter, and I’ve been entirely without a VPN.
As someone who completely gave up torrenting for Usenet, what made you decide against Usenet?
To elaborate further on the other comment, it’s a person running a copy of the Lemmy software on their server. I, for example, am running mine (and seeing this thread) from https://zemmy.cc. Thanks to federation, all of our different servers are able to talk to each other, so we can have a shared experience rather than everyone being on one centralized instance managed by one set of administrators (like reddit is).
This provides resilience to the network. If reddit goes down, reddit is down. If lemmy.world goes down, you can still access the content of every community that isn’t on lemmy.world, and if other servers were subscribed to the content on a community from lemmy.world you could still see the content from before the server went offline (and it will resync once it’s back up).
If we put all of our eggs into a single basket, we have a single point of failure. If all of the major communities go to lemmy.world then lemmy.world is that single point of failure. Doing that is effectively just recreating the same issues we had with reddit but with extra steps. By spreading larger communities across servers we ensure that the outage (or permanent closure) of a single instance doesn’t take down half the active communities with it.
My friend’s instance, crystals.rest, is hosted on a $5/mo Linode with 1GB of RAM.
Putting all of the large communities on a single instance is just reddit with more steps. It’s good that one of the larger Lemmy communities is not also on the largest Lemmy instance. Lemmy.world suffers a lot of outages (in part because it’s so centralized), meanwhile this community remains available.
I’m not sure what you’re looking at there. I don’t use Edge; I’d recommend checking the tutorial on Greasyfork or checking YouTube.
It should be as simple as clicking the Tampermonkey icon, clicking the settings option, and entering some keywords to block:
I have a userscript for this purpose:
https://greasyfork.org/en/scripts/471718-lemmy-post-keyword-filter
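For the curious, the core of a keyword filter like that can be sketched as follows. This is a hypothetical simplification, not the actual Greasyfork script; the keywords and CSS selectors are assumptions for illustration:

```javascript
// Hypothetical sketch of keyword-based post filtering (NOT the actual
// Greasyfork script). KEYWORDS and the selectors below are assumptions.
const KEYWORDS = ["politics", "spoiler"]; // example blocklist

// Returns true when a post title contains any blocked keyword,
// matching case-insensitively.
function matchesKeyword(title, keywords) {
  const lower = title.toLowerCase();
  return keywords.some((k) => lower.includes(k.toLowerCase()));
}

// In a userscript, this would run over the rendered post list, e.g.:
// document.querySelectorAll(".post-title").forEach((el) => {
//   if (matchesKeyword(el.textContent, KEYWORDS)) {
//     el.closest(".post-listing").style.display = "none";
//   }
// });
```

The matching logic is kept as a pure function so it can be tested outside the browser; only the commented-out DOM portion depends on the page’s markup.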
You’re not paying to remove ads from Lemmy. You can continue using Lemmy ad-free on mobile via the mobile site or any of the other PWA’s or native apps. What you’re paying to remove ads from is Sync. The developer has decided that they need to be compensated to sustain the amount of effort developing and maintaining the app requires. If you don’t want to pay that price with cash or your eyeballs then don’t use it.
Nobody is forcing you to use Sync, nobody is forcing you to see ads. The beauty of a platform like Lemmy is you have the choice to use whatever client you want. That doesn’t mean you’re entitled to any of them.
There’s an expression I think about a lot: “You can’t think when you’re hungry.”
Unfortunately, principles and ideals are calorie-free.
We may not like it, but this is what progress looks like?
I’m a big fan of plant-based burgers, but the reality is that telling people “just eat plants” is not going to result in any change. They decided long ago that the inconvenience of switching protein sources is greater than the climate impact of not switching, so the only way we’re ever going to see change is to either ban cows or provide an alternative that the masses can and will adopt.
Assuming you’re referring to lab-grown meat, I think that’s also a great alternative. We should be exploring any and all options that can get us to stop relying on cows for protein.
Not sure what to take from this other than it being a really bad take. Insect protein is orders of magnitude more sustainable and eco-friendly than beef. We could reclaim all the land we destroyed to have cows standing around in their own shit and, with a fraction of the acreage, produce the same amount of protein and calories without massively contributing to climate change.
Static pages with hyperlinks have evolved into a certain horror we all know.
Why couldn’t this just be a webring of sites following a specific design philosophy?
This is a neat idea, but the requirement of installing a whole new piece of software just to decide if it’s worth exploring is already a non-starter.
Sure you could make the argument that HTML has too much going on, but you don’t have to use all of that. It is still at its core just as capable of rendering plaintext and hyperlinks as it was the day it was originally conceived.
Why couldn’t this just be a webring of sites that follow a specific design philosophy? I don’t understand the requirement of an entirely new language, protocol, and client. You’re not accomplishing anything that isn’t already possible, and you’re cutting yourself off from the vast majority of people by requiring them to install a whole new piece of software just to see if this idea is worth exploring.
How is this website so wrong?
I don’t have a static IP. I’m not in the United States. I’m not even in North America…
I’m literally on another continent which can be very easily verified using nothing more than a geoIP lookup, but they somehow place me somewhere 3,000+ miles away. And no, I’m not using a VPN.
This is neat, but this is decidedly a niche product with very limited application. I’m an old hat, and I can’t see the inherent value proposition in this: why is it better than static pages with hyperlinks? That doesn’t, and frankly shouldn’t, require a whole new protocol and client. That’s what HTTP and HTML were originally built for.
I never have been able to, even while wearing wired studio monitors
Go ahead and try scraping an arbitrary list of sites without an API and let me know how that goes. It would be a constant maintenance headache, especially for anything other than the larger chains with fairly standardized sites.
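To illustrate why that maintenance headache happens, here is a minimal sketch (hypothetical markup and class names) of a scraper whose extraction pattern is tied to today’s HTML and silently breaks after a trivial site redesign:

```javascript
// Hedged sketch: extracting a price without an API means pattern-matching
// the store's current markup. The class name "price" is an assumption.
function extractPrice(html) {
  const m = html.match(/<span class="price">([^<]+)<\/span>/);
  return m ? m[1] : null;
}

// Today's markup works...
const pageV1 = '<div><span class="price">$3.99</span></div>';
// ...but a redesign renames the class, and the scraper finds nothing.
const pageV2 = '<div><span class="product-price">$3.99</span></div>';

console.log(extractPrice(pageV1)); // "$3.99"
console.log(extractPrice(pageV2)); // null -- same data, scraper broken
```

Multiply that by every store on an arbitrary list, each redesigning on its own schedule, and the upkeep cost becomes clear.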