Same. Honestly I need to create a community just for this tool IMO. But I don’t have the time to moderate it.
https://www.search-lemmy.com/ ?
I’m open to feedback though, e.g. if the search results seem out of order etc…
Unless you have an account there’s no easy way to get access to the content on the page. Once you have an account there’s technically nothing stopping you from just saving the HTML file to your computer.
Something else you can try, assuming you don’t have an account, is to just turn off JavaScript. If the site lets you partially load the content and then asks you to create an account to read more, it usually just blocks the content by having JavaScript add an opaque overlay. With JavaScript disabled, the overlay never gets added and you’re able to keep reading.
That looks like 8.8.8.8 actually responded. The ::1 is IPv6’s localhost, which seems odd. As for the wrong IPv4, I’m not sure.
I normally see something like “requested 8.8.8.8 but 1.2.3.4 responded” if the router was forcing traffic to their DNS servers.
You can also specify the DNS server to use with nslookup, like: “nslookup www.google.com 1.1.1.1”, and see if you get any different answers from there. But what you posted doesn’t seem out of the ordinary other than the ::1.
Edit: just for shits and giggles, also try “nslookup xx.xx.xx.xx”, where xx.xx… is the wrong IP from the other side of the world, and see what domain it returns.
Another thing that can be happening is that the router or firewall is redirecting all port 53 traffic to their internal DNS servers. (I do the same thing at home to prevent certain devices from ignoring my router’s DNS settings cough Android cough)
One way you can check for this is to run “nslookup some.domain” from a terminal and see where the response comes from.
Speaking of Black & White I just want a reboot that uses a VR headset. Any other game I couldn’t care less about VR but Black & White…
The problem is telling the difference between repost bots and bots that are helpful, like automod and link redirectors.
Maybe. The 2nd idea I’ve got is: if no one replies to a post after, say, 24hrs, something like 75-80% of your posts are like that, and you have at least 100 such posts, you get added to the list?
Main concern I see about something like this is false positives and how someone real could end up getting blocked.
I definitely want to think on this some more but it might have some legs.
…I wonder if there’s a programmatic way to detect these bots? Some sort of analysis on their posting behavior?
If they’re playing nice they’ll have the bot flag checked in their profile, so maybe build a list of any bot that creates posts, since most of the “good” bots just reply to comments? Anyway, just thinking out loud, but I’m thinking I could easily add a public API to my search engine that just returns a list of “posting bots”…
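Just to make that concrete, here’s a very rough sketch in Rust of what the check could look like. The struct, field names, and thresholds (at least 100 posts, roughly 75-80% of them with no replies after 24 hours, per the idea above) are all made up for illustration and aren’t part of lemmy-search:

```rust
// Rough sketch of the "posting bot" heuristic described above.
// The struct, field names, and thresholds are placeholders for illustration,
// not the actual lemmy-search data model.

struct AccountStats {
    is_bot_flagged: bool,        // the "bot account" checkbox in the profile
    total_posts: u32,            // posts older than 24 hours
    posts_with_no_replies: u32,  // of those, how many never got a reply
}

fn looks_like_posting_bot(stats: &AccountStats) -> bool {
    if !stats.is_bot_flagged {
        return false; // only consider accounts that self-identify as bots
    }
    if stats.total_posts < 100 {
        return false; // not enough history to judge
    }
    let no_reply_ratio = stats.posts_with_no_replies as f64 / stats.total_posts as f64;
    no_reply_ratio >= 0.75 // ~75-80% of posts never got a reply
}

fn main() {
    let suspect = AccountStats {
        is_bot_flagged: true,
        total_posts: 250,
        posts_with_no_replies: 230,
    };
    println!("posting bot? {}", looks_like_posting_bot(&suspect));
}
```

The public API could then just run something like this over every bot-flagged account and return the ones that match.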
Cloudflare? Namecheap?
Not sure exactly what features you’re after but the vast majority of them support what you mentioned above.
Btw I appreciate the fediverse and decentralization as much as the next guy, heck I’m even writing software for the fediverse. But I feel like there’s a handful of people out there that want to try and apply the fediverse concept to everything. Similar to what happened with blockchain: everyone and everything had to be implemented via blockchain, even if it didn’t make sense in the end.
IMO though, GitHub is just one “instance” in an already decentralized system. Sure it may be the largest but it’s already incredibly simple for me to move and host my code anywhere else. GitHub’s instance just happens to provide the best set of tools and features available to me.
But back to my original concerns. Let’s assume you have an ActivityPub based git hosting system. For the sake of argument let’s assume that there are two instances in this federation today. Let’s just call them Hub and Lab…
Say I create an account on Hub and upload my repository there. I then clone it and start working… It gets federated to Lab… But the admin on Lab just decides to push a commit to it directly because reasons… Hub can now do a few things:
Similarly, say Hub was to go down for whatever reason. Let’s assume we have a system in place that effectively prevents the above scenario from happening… If I didn’t create an account on Lab prior to Hub going down, I no longer have the authorization to make changes to that repository. I’m now forced to fork my own repository and continue my work from the fork. But all of my users may still be looking for updates to the original repository, so telling everyone about the new location becomes a headache.
There’s also the issue of how you handle private repositories. This is something that the fediverse can’t solve, so all repos in the fediverse would HAVE to be public.
And yes, if GitHub went down today, I’d have similar issues, but that’s why you have backups. And git already has a solution for that outside the fediverse. Long story short, the solutions that the fediverse provides aren’t problems that exist for git and it raises additional problems that now have to be solved. Trying to apply the fediverse to git is akin to “a solution in search of a problem”, IMHO.
I don’t get what benefit hosting your own git brings to be honest
Just another level of backup. Personally I tend to have:
This way I should always have 2 copies of my code that are accessible at all times, so there’s a very slim chance that I’ll lose my code, even temporarily.
The target folder may be quite large. You can look at the dependencies for my project, but my end binary is only a few MB.
IMHO federation doesn’t bring any real benefits to git and introduces a lot of risks.
The git protocol, if you will, already allows developers to back up and move their repositories as needed. And the primary concern with source control is having a stable and secure place to host it. GitHub already provides that, free of charge.
Once you introduce federation, how do you control who can and cannot make changes to your codebase? How do you ensure you maintain access if a server goes down?
So while it’s nice that you can self host and federate git with GitLab, what value does that provide over the status quo? And how do those benefits outweigh the risks outlined above?
I don’t see anything wrong initially. Can you also test with something like Postman? It’ll allow you to control the headers etc… that you send.
You can also compare to what I have here and see if that helps: https://github.com/marsara9/lemmy-search/blob/b6c88355aba49abca52862473650526821ee165a/server/src/api/lemmy/fetcher.rs#L123
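Or, if you’d rather stay in Rust, something like this standalone sketch lets you control the headers directly; it assumes reqwest’s blocking client, and the endpoint and User-Agent are just placeholders to swap for whatever you’re actually requesting:

```rust
// Minimal standalone request with explicit headers, roughly what Postman would let you tweak.
// Assumes: reqwest = { version = "0.11", features = ["blocking"] } in Cargo.toml.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();

    let response = client
        .get("https://lemmy.world/api/v3/site") // placeholder: use the endpoint you're debugging
        .header("User-Agent", "my-debug-client/0.1") // placeholder UA
        .header("Accept", "application/json")
        .send()?;

    println!("status: {}", response.status());
    println!("body: {}", response.text()?);
    Ok(())
}
```

Comparing its output against what your code and Postman each get back should narrow down whether the problem is in the request you’re building or somewhere else.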
Let me introduce you to https://sense.com/ and help you create a new obsession.
P.S. it’s not perfect, as it uses machine learning to determine your appliances and it can’t find electronics like your computer or TV, but it’ll help you find what might be chipping away at your power bill.
Not sure if I entirely understand what you’re asking, but here’s my setup, which sounds similar-ish and might help.
I’ve got essentially 3 machines: a download machine, a NAS, and a Jellyfin machine.
The download machine has a network share to download directly to the NAS in a special /downloads/ folder. Once a download completes, Sonarr, etc… move it to its correct media folder.
Finally the Jellyfin machine is monitoring the media folders for changes.
I assume you could set up something similar with Plex instead of Jellyfin and then store the fully downloaded files on a separate machine with a network drive, so Plex can see them. Essentially the NAS for you would be two machines: one (the seedbox) for the partial downloads and a local NAS for the fully downloaded files?
Anyway, not sure if that’s what you’re looking for.