• 2 Posts
  • 22 Comments
Joined 8 months ago
Cake day: March 12th, 2024



  • I find it baffling how much of this criticism goes around and gets published. Then again, it may not be surprising: it’s central to current politics, people are invested, and the press is always interested in stuff like this. Hysteria and social media trends were a thing before, too. The press keeps riding the same story until it’s over or something better comes along.

    But with Russia’s huge investments in destabilizing Western nations with stuff like this, I have to wonder how much of it is caused or amplified by its involvement. Especially when I hear or read “on social media” in the press, which is where Russia’s primary focus lies.

    There’s no simple or single answer in a complex world. But Russia is certainly looking at this with great satisfaction. It, and what follows with the election, may very well be the culmination of many years of destabilization efforts.



  • Over the last eight months, Israel has killed at least 37,765 people and injured another 86,429, according to the ministry’s latest figures. These numbers are likely an undercount due to the decimated medical infrastructure, killed medical workers, and thousands feared trapped under the rubble in Gaza.


    Was there a debate in Congress? Did they give reasons for their vote?

    The closing paragraphs of the article paint a bleak picture. No reasoning or arguments; only denial and dismissal of opposing views, without any justification.




  • How do you want us to push for peace there too? Because in my eyes, we have been doing that since the beginning of the war.

    What do you mean by “won’t recover from”? Because they have lost things that can’t be recovered since the beginning of the war. Russia is losing things it can’t recover either: thousands of its people, for example, its money reserves, its military inventory, its non-military economy. Where do you draw the line for Russia and Ukraine on what counts as “won’t recover from”? Western nations have already committed to helping rebuild the country, especially its destroyed infrastructure.

    How is the war in Ukraine “quickly turning into a much bigger global conflict”? Fighting is still confined to Ukraine and the border region with Russia. Western material support has been there since the beginning.

    I have to assume that by pushing for peace you mean Ukraine should accept losing large parts of its territory, and the human atrocities that come with it, in order for the fighting to end. Is letting Russia win going to reduce conflict long term, though? They’ll have more resources to invade other countries next, and proof that it’s a worthwhile investment: one that works and that they profit from. There was precedent before the current war in Ukraine, which is why they started this invasion in the first place. Only this time it didn’t go as smoothly.


  • A block on Twitter doesn’t say anything unless you know why they were blocked and know the person. Blocking can be more than warranted and justified. Be it spam, toxicity, harassment, or similar things. “I saw a screenshot of someone being blocked on Twitter” is not a good foundation for an argument.

    They talk about malware in npm packages. One example isn’t enough to make a general claim that all software with political opinions or voices becomes malware.

    When a platform follows sanctions, and the law, I don’t think you can call its decisions political or activist. If you want to make that argument, and want to do so in an absolutist fashion (not assess and reduce risks but evade them entirely), then the only option left is self-hosting on your own servers. No platforms, no services?

    “Nowadays, there are many teams who buy popular apps and browser extensions to inject malware.”

    … which has nothing to do with political views and especially not political views of the original authors and sellers.

    “As you can see, the ‘opinion’ or ‘political view’ of a company is not only a way to hype on sanctions and curry favor with investors, the government, and consumers, but it is also a clear signal about potential threats. It signals that your sensitive data may be hijacked, sold, or wiped anytime if the political compass spins tomorrow and recognizes you as an enemy.”

    No. None of what was written before showed me any of that.

    Some of the red flags I actively use to reject software:

    Direct political opinions in a product’s blog, like “we support X” or “we are against X”

    “We are free software and we support free software” -> REJECTED! (?)




  • I assume you don’t mean keyboard text predictions, which would be a different thing, but the platforms.

    It’s a new convenience feature: something the platform can shine with, use to retain users, and use to set itself apart from other platforms.

    Having training data is not the primary potential gain. It’s user investment, retention, and interaction. Users choosing the generated text is valid training data too: whether they picked the suggestion or typed similar words themselves is still input about user choice.

    It does lead to convergence toward a centralized, standardized way of speaking, with a self-reinforcing feedback loop.




  • Quoting the abstract (I added emphasis and paragraphs for readability):

    AI code assistants have emerged as powerful tools that can aid in the software development life-cycle and can improve developer productivity. Unfortunately, such assistants have also been found to produce insecure code in lab environments, raising significant concerns about their usage in practice.

    In this paper, we conduct a user study to examine how users interact with AI code assistants to solve a variety of security related tasks.

    Overall, we find that participants who had access to an AI assistant wrote significantly less secure code than those without access to an assistant. Participants with access to an AI assistant were also more likely to believe they wrote secure code, suggesting that such tools may lead users to be overconfident about security flaws in their code.

    To better inform the design of future AI-based code assistants, we release our user-study apparatus and anonymized data to researchers seeking to build on our work at this link.

    Caveat, quoting from section 7.2 Limitations:

    One important limitation of our results is that our participant group consisted mainly of university students which likely do not represent the population that is most likely to use AI assistants (e.g. software developers) regularly.