What I don't understand: these gains are already priced into the option premium, so how is this still attractive?
Basically betting that the line goes up faster (or earlier) than the seller predicted?
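To make my own guess concrete, here is a quick sketch with made-up numbers (strike, premium and prices are purely illustrative): the call buyer only ends up ahead if the price finishes above strike + premium, i.e. moves further or faster than the premium already implied.

```go
package main

import "fmt"

// Toy example: a call option with strike 100 bought for a premium of 5.
// The premium already prices in the expected move, so the buyer only
// profits if the underlying finishes above strike + premium, i.e. the
// realized move beats what the seller implied when pricing it.
func main() {
	strike, premium := 100.0, 5.0
	for _, spot := range []float64{100, 103, 105, 110, 120} {
		payoff := spot - strike // intrinsic value of the call at expiry
		if payoff < 0 {
			payoff = 0
		}
		fmt.Printf("price at expiry %6.2f -> profit %6.2f\n", spot, payoff-premium)
	}
}
```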
Bandwidth is one part… Storage is the other, and usually you have less storage than bandwidth anyway.
Seems like you don't know how the CLI actually works.
So if you ever have to touch the CLI (not sure when a normal user would need to), you complain that you don't know the CLI instead of learning the tools you want to use?
Lots of stuff on Windows didn't work for me when I wanted advanced things… very intuitive to fix that, if it was even possible.
Every system has its fair share, but if you are unwilling to learn and understand your tools before using them, you might drill yourself in the hand. Windows is just a bit over-protective before it lets you do so.
mhh… you might be correct.
I hadn't considered how easy it actually is to search for a comment and find the exact post.
The question is whether searching indexers like public search engines is enough to call the data easily re-identifiable.
Or whether this use of personal data is covered some other way, e.g. legitimate interest weighed against the freedoms of the data subjects, as you have listed above already.
Of course they are linked, but removing the username from the comments means they are mostly anonymized as far as the GDPR is concerned.
It is perfectly fine to unlink data and keep processing it, as long as it's considered anonymized under the GDPR.
Your post content here is also not considered personal data; it only shows up on a lookup request because it's currently linked. If I crawl the page and don't save the username, the resulting data can most likely be considered anonymized under the GDPR, as far as the current interpretation goes.
It only becomes a problem as soon as I become aware that the content did indeed contain personal data, or probably also if I could have expected it to with high probability.
And I'd have to make sure to remove obvious ways to re-link the content to your user (e.g. mentions of your username in comments).
Anything else requires precedent on ways to re-identify someone based on posts on a platform, weighed against the user's freedom and the difficulty of such re-identification.
Recital 26 discusses when something could be considered anonymous (or rather when the GDPR applies at all, and what it means to have anonymous data).
Now I don't want to defend Reddit here, but afaik most comments are not subject to the GDPR as long as you don't know they contain personal data and they have been detached from other personal data fields (like the username).
So by removing personal data fields, they most likely become “anonymized”.
Of course that's not the end of it; you have to consider the technology available to de-anonymize this data before it can legally be called anonymized.
But I don't think there has been any case where this was challenged yet… and I bet most supervisory authorities would discard such complaints as being “too hard to follow through”. (I got that reply from the Netherlands authority about checking a website's newsletter opt-in.)
And I certainly don't think Reddit or any operator will be forced to delete comments because they could be deanonymized depending on what the user wrote, when most comments probably cannot be deanonymized.
Having to check everything for potentially identifiable data in that regard would be ridiculous for website operators.
Maybe some light checks, sure, but not as deep as would be required to truly anonymize everything a user could have written to identify them.
A lot of that information becomes fragments as soon as you unlink it from the user. E.g. 12 people in a post wrote “I am gay”, great. But if you can't link that back to other comments by the same users somewhere else, it's not identifiable, just text.
In a more ideal world, getting less money because people tip less would push you to reconsider the job choice and ultimately switch to something more lucrative.
With fewer workers, the company would be forced to pay more to get employees at all.
The problem with this idealised scenario is that it doesn't work in the US, because workers are getting screwed so much and have so few choices at those low-paying jobs that they'd be the ones losing massively in the short term.
And with few support structures from the states and the federal government, they would fail… and the two-party system would fail them even harder; no one in government cares about them, too much is invested in fighting imaginary culture wars.
But then again, using fewer of the business's services leads to the same outcome in the end, so even that wouldn't work well.
The business will always win in the short-term.
So as it is inevitable, maybe it's better to think long term anyway.
And everyone wants tips these days; no longer just gratitude or a way to pay low-wage workers, but now also a ‘bid’… (sure, not every worker might like relying on tips, but well-paid servers especially prefer it since they make bank)
I don't see you getting out of tipping very well either way without government intervention… which I don't see happening, but you have other big issues too…
You can use that information not only for e.g. blackmail, but also to build material to manipulate you into doing things without you knowing.
Information is a powerful tool.
Yes, you need an organization which signs your certificate, so it is trusted by default. This is our trust anchor, so we know the presented certificate was validated and given only to the website owner.
There are numerous around the world for that.
And if that is no longer offered, you can simply not have your certificate signed, which means browsers will complain about it.
But you can trust your own certificate yourself. Or create your own certificate authority, which can then sign other certificates for the community as their new trust anchor.
I think we would very quickly rebuild something like the web of trust, but for certificates.
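As a minimal sketch of what "being your own trust anchor" means (the CA name and validity period below are just placeholders I picked): generate a key pair plus a self-signed CA certificate, then use that CA to sign server certificates and hand the CA cert to whoever chooses to trust it.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

// Generate a key and a self-signed CA certificate. Anyone who imports this
// certificate as trusted will then accept any server certificate signed by it.
func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	template := x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "My Community CA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0), // valid for 10 years
		IsCA:                  true,
		BasicConstraintsValid: true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
	}

	// Self-signed: the certificate acts as both subject and issuer.
	der, err := x509.CreateCertificate(rand.Reader, &template, &template, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```

Clients would have to import that CA cert manually, which is exactly the "trust it yourself" part.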
You could even go without certificates but keep a weak form of TLS (no idea if browsers support TLS_DH_anon_*); it's still encrypted and can only be broken by an active man-in-the-middle attack (which is theoretically detectable later on).
Diffie-Hellman is an awesome key-exchange.
What I have a problem with is the developer accessibility.
I want to build my own sensors into boards and use those, but the dev boards are so expensive it's not worth it.
A board with an ESP8266 costs just 1-2€; with Zigbee it's 20-25€.
Might as well go for the new ESP32 versions now and use Thread… and it's still cheaper.
(though that wasn't an option a few years back; the best option then was ESP-MESH, which kinda sucked)
Ideally the data would have been useless anyway, as it wasn't really necessary for automated contact tracing to keep it identifiable for government agencies.
See the DP-3T (Decentralized Privacy-Preserving Proximity Tracing) standard.
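Very roughly, the decentralized idea looks like this (a simplified sketch, not the actual DP-3T wire format; the key handling and sizes here are made up for illustration): phones only broadcast ephemeral IDs derived from a local day key, and the matching against keys published by infected users happens entirely on the device.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
)

// ephIDsForDay derives a handful of ephemeral broadcast IDs from a day key.
// Only these IDs ever leave the phone over BLE, never the key or an identity.
func ephIDsForDay(dayKey []byte, count int) [][]byte {
	ids := make([][]byte, 0, count)
	for i := 0; i < count; i++ {
		mac := hmac.New(sha256.New, dayKey)
		fmt.Fprintf(mac, "ephid-%d", i)
		ids = append(ids, mac.Sum(nil)[:16]) // truncate to a 16-byte broadcast ID
	}
	return ids
}

func main() {
	aliceDayKey := []byte("alice-secret-day-key") // would be random in practice

	// Alice broadcasts her IDs during the day; Bob's phone records what it hears.
	heardByBob := ephIDsForDay(aliceDayKey, 4)[2:3]

	// Alice tests positive and publishes only her day key.
	// Bob re-derives her ephemeral IDs locally and checks his own log for matches,
	// so no central authority ever learns who met whom.
	matches := 0
	for _, id := range ephIDsForDay(aliceDayKey, 4) {
		for _, heard := range heardByBob {
			if hmac.Equal(id, heard) {
				matches++
			}
		}
	}
	fmt.Println("local matches found on Bob's phone:", matches)
}
```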
smartctl
But 10,000 hours seems on the low side; I have 4 datacenter Toshiba 10TB disks with 40k hours and expect them to do at least 80k, but you can have bad luck and one fails prematurely.
If it's within warranty you can get it replaced; if not, tough luck.
Always have your stuff protected with RAID/ZFS and backed up if you value the data or don't want a weekend ruined because you now have to reinstall.
And with big disks, consider having more disks as redundancy, as another might get a bit error while you restore the failed one (check the statistical error rates in the disk's datasheet).
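Rough back-of-the-envelope for why that matters, assuming the common datasheet rating of 1 unrecoverable read error per 10^15 bits (check your own datasheet, consumer drives are often rated worse):

```go
package main

import (
	"fmt"
	"math"
)

// Probability of hitting at least one unrecoverable read error (URE) while
// reading back a given amount of data, e.g. during the rebuild of a failed
// disk. Assumes a datasheet rating of 1 URE per 1e15 bits.
func main() {
	const urePerBit = 1e-15
	const bitsPerTB = 8e12 // 1 TB = 8e12 bits (decimal TB)

	for _, readTB := range []float64{10, 30, 100} { // data read during a rebuild
		expectedErrors := readTB * bitsPerTB * urePerBit
		pAtLeastOne := 1 - math.Exp(-expectedErrors) // Poisson approximation
		fmt.Printf("read %5.0f TB -> ~%4.1f%% chance of at least one URE\n",
			readTB, 100*pAtLeastOne)
	}
}
```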
Async is good because threads are expensive; might as well do something else while you wait for something anyway.
But only having async and no other thread when you need some computation is obviously awful… (or when starting another thread is not easily manageable)
That's why I like Go: you just tell it you want to run something in parallel and it manages the rest… computational work? It shifts it to another thread. Just waiting for IO? Handled async.
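Roughly like this (a tiny sketch; the job counts and sleep times are arbitrary): the same `go` statement covers both cases, and the runtime decides how the goroutines get mapped onto OS threads.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var wg sync.WaitGroup

	// CPU-bound work: the scheduler spreads these across real threads/cores.
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			sum := 0
			for j := 0; j < 50_000_000; j++ {
				sum += j % (n + 2)
			}
			fmt.Println("compute job", n, "done:", sum)
		}(i)
	}

	// "IO"-bound work: thousands of these are fine, parked goroutines cost almost nothing.
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			time.Sleep(100 * time.Millisecond) // stand-in for a network or disk call
			fmt.Println("io job", n, "done")
		}(i)
	}

	wg.Wait()
}
```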
If it were hard to do and required bypassing DRM, yes, but it's actually similar to what the player already does.
A court here already ruled that downloading YouTube videos does not break the piracy laws, as long as you provide your own means of downloading and saving the unprotected data.
Of course that does not extend to the download feature of the client itself.
Downloading from YouTube is piracy? How? If it were a YouTube Red show, sure, but the normal videos everyone can see for free?
For me, piracy begins with acquiring things or features that usually cost money, while also taking into account whether it's obvious that such a thing should cost money in that environment (that's also how our piracy laws are worded here).
So our piracy laws also classify things as piracy if it was obvious the deal was too good to be true, like Windows for $2 on eBay or Chinese ROM cards with hundreds of games for $5.
Videos on YouTube, including music, are a normal occurrence. A full blockbuster movie usually is not.
DuckDuckGo search is certainly not open source.
Chrome also isn't open source.
Chromium is, but Google mainly uses it to gain market share and push standards that benefit them.
The license is clearly not Free, as it imposes restrictions on e.g. commercial vs. non-commercial usage and distribution. It also restricts usage of the name and logo, and terminates when legal action is taken against the provider.
While I can understand the reasoning, the license still stands against FOSS.
I believe you could have clearly separated the two, provider and software, like it's done in most cases. By wanting to protect their software, they had to restrict the license, so it's no longer Free to use in any form you'd want.
Not really a problem to put other stuff on it, apart from adhering to security standards. If you want to separate your personal stuff from the hosted stuff, go ahead, but just because it's torrenting doesn't make it much different.
Put it in a VM if you don't have a second machine, I guess.