• 1 Post
  • 21 Comments
Joined 1 year ago
Cake day: June 16th, 2023


  • Well, there are two things.

    First there is speed, for which they do indeed rely on many thousands of high-end industrial Nvidia GPUs, and since the $10 billion investment from Microsoft they have likely expanded that capacity. I’ve read somewhere that ChatGPT costs about $700,000 a day to keep running.

    There are a few other tricks and caveats here though, like decreasing the quality of the output when there is high load.

    For that quality of output they deserve a lot of credit, because they train the models really well and continuously manage to improve their systems to produce even higher-quality and more creative outputs.

    I don’t think GPT-4 is the biggest model out there, but it does appear to be the best that is available.

    I can run a small LLM at home that is much, much faster than ChatGPT… that is, if I want to generate some unintelligent nonsense.

    Likewise, there might be a way to redesign GPT-4 to run on a consumer graphics card with high-quality output… if you don’t mind waiting a week for a single character to be generated.

    I actually think some of the open-source, locally runnable LLMs like LLaMA, Vicuna, and Orca are much more impressive if you judge them on quality versus power requirement.
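    That quality-versus-power judgment can be sketched as a crude "benchmark points per GB of VRAM" ratio. Every number below is invented for illustration; real figures would come from actual benchmarks:

```python
# Toy comparison of models on "quality vs power requirement".
# All scores and memory figures are made up for illustration only.

models = {
    # name: (benchmark_score_0_to_100, typical_vram_gb) -- hypothetical
    "llama-13b":  (58.0, 10.0),
    "vicuna-13b": (61.0, 10.0),
    "orca-13b":   (64.0, 10.0),
    "gpt-4-ish":  (86.0, 640.0),  # invented stand-in for a cluster-scale model
}

def efficiency(score: float, vram_gb: float) -> float:
    """Benchmark points per GB of VRAM -- a crude efficiency metric."""
    return score / vram_gb

# Rank models by efficiency, best first.
ranked = sorted(models.items(),
                key=lambda kv: efficiency(*kv[1]),
                reverse=True)

for name, (score, vram) in ranked:
    print(f"{name}: {efficiency(score, vram):.2f} points/GB")
```

    By this (deliberately simplistic) metric the small local models come out far ahead of the giant one, which is the point being made above.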


  • A major part of how we interact. Not to replace human interactions, and definitely not to put a centralized corporate AI in charge.

    My vision of what interaction could look like on Lemmy with AI tools (with a few more years of progress):

    • Instant summaries on long posts
    • Live fact checking with additional sources
    • Complete translations that maintain sentiment
    • Advanced spell checking and live grammar suggestions while typing
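    As a toy illustration of the "instant summaries" idea — nothing like a real language model, just a classic word-frequency heuristic applied to an invented post:

```python
# Naive extractive summary: score each sentence by how frequent its
# words are in the whole text, then keep the top-scoring sentences.
import re
from collections import Counter

def summarize(text: str, max_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    # Keep the highest-scoring sentences, restored to their original order.
    top = sorted(sorted(sentences, key=score, reverse=True)[:max_sentences],
                 key=sentences.index)
    return " ".join(top)

post = ("Local models keep improving. Some local models already run on "
        "consumer hardware. Cloud models are still ahead in raw quality. "
        "But local models win on privacy.")
print(summarize(post))
```

    A real summarizer on Lemmy would of course need an actual model behind it; this only shows the shape of the feature.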

    Imagine if everyone had a small Wikipedia genie on their shoulder, telling them on demand about whatever subject they’re writing about. We all know Wikipedia has mistakes and that some expert-level stuff really is best left to experts. I tend to go back and forth with Google a lot if I want to get the details in a post right, and it has the same problems. But in general, Wikipedia and the internet are much more right than the average single person. For some things I’d rather have a transparent, trusted AI provide the details than a random internet stranger who may only claim to have done research, or worse, has malicious goals to spread misinformation.


  • What really strikes me here is that your perspective on this seems so disconnected from the experience I have had working with AI, which is that it is a power tool that drastically enhances your capabilities in advanced cognitive tasks.

    Since ChatGPT launched last year, I have learned:

    • Advanced PowerShell and basic Python (the first is more useful for my job)

    In just the last 3 weeks (since I got GPT-4) I have learned:

    • How to work a Linux command terminal, something I had been struggling with for 2+ years

    • How to set up and work with both Arch- and Debian-based systems

    • How to work Docker through the CLI, and how to heavily customize many of the servers catering to the needs of my home network. This includes some advanced reprogramming of how some of my smart devices behave, something I have wanted to do for over 3 years.

    I have also gotten many compliments at work for my emerging ability to quickly create scripts that automate tedious tasks, giving us more time to think about and improve our workflow rather than always trying to finish a never-ending backlog.
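    A minimal sketch of the kind of automation script meant here — merging invented daily CSV exports that would otherwise be tallied by hand (the file names and columns are made up):

```python
# Collapse a pile of per-day report exports into one total.
import csv
import io

# Pretend these are three tedious daily exports someone used to merge by hand.
daily_reports = {
    "2023-06-14.csv": "ticket,minutes\nA-101,30\nA-102,45\n",
    "2023-06-15.csv": "ticket,minutes\nA-103,20\n",
    "2023-06-16.csv": "ticket,minutes\nA-104,60\nA-105,15\n",
}

def total_minutes(reports: dict[str, str]) -> int:
    """Sum the 'minutes' column across every report."""
    total = 0
    for content in reports.values():
        for row in csv.DictReader(io.StringIO(content)):
            total += int(row["minutes"])
    return total

print(f"Total minutes logged: {total_minutes(daily_reports)}")
```

    Five minutes of scripting like this replaces a recurring chore, which is exactly the payoff described above.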

    This thing has supercharged my life as a computer enthusiast. I never had a teacher capable of teaching me in such a customized manner: at my own tempo, in whatever structure I request, and regardless of how stupid my question might be.

    But you are correct that there are clear pitfalls when working with AI. I have used it enough that I believe I know how to handle them; some notes:

    • The user is always the brain behind the creative process. Like you said, “It has no more understanding of the text it shits out than a toddler who has learned to swear.” The uploader of the post you linked also stated it himself: “it is a tool,” not a genie that does all the work for you.

    • AI enhances your knowledge; ten times zero is still zero. Setting up Linux servers on my home network is something I had been trying and failing to do for a while (mostly because I am entirely self-taught), but I understood it well enough to know whether ChatGPT’s output is realistic at all. I am always directing it to do what I planned to do, and I never copy its work without first understanding what it actually does.

    • Know the limitations. There are some topics current AI is much better at than others; in my experience that’s coding and computers. Planning a holiday trip? I tried; it’s really not that good.

    • Break it down, use what you learned, build something better:

    Handwrite an email -> have ChatGPT reason about what it thinks I am trying to say -> have ChatGPT rewrite it to better reflect what I am trying to say -> read and understand what it did -> discard the previous drafts and write a final one.

    For someone who, pre-ChatGPT, was horrible at writing emails: my boss has now started asking me to craft standardized emails to be sent in bulk.

    Now to address the original post, which really is just a low-quality, cut-and-dried standard reply from ChatGPT. I am going to go out on a limb and say OP’s comment that it is just a tool is probably a recent realization. The first week of using these models they do indeed feel a bit like magic know-it-all boxes, but just like Altman stated, this feeling fades quickly. You realize that if you actually want to create something of real quality (swindlers will swindle), you are going to have to remain in charge and understand which parts of your tools you can and can’t rely on.

    I believe there is only one way to learn this, and that is for people to use and learn this technology for themselves. I hope I am wrong about the next line, but I extrapolate that AI is very much a case of “get in the motorboat now, or paddle behind forever,” because things are going to start to move really fast.



  • Every time I see a post like this I ask the same thing, and I have yet to receive an answer.

    Why should i care?

    There are so many open-source language models, all with different strengths and weaknesses, and there are tools to run them on any OS with all kinds of different hardware requirements.

    This has been the case since before ChatGPT came out, and it has exponentially blown up since.

    GPT4All is just a single recent model, yet in recent weeks it is always the one in the spotlight under “run ChatGPT at home.”

    What does it do to stand out? Why would I use this and not one of the Vicuna or LLaMA models?

    Hugging Face has a leaderboard for open-source large language models:

    https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard

    If you are interested in running this tech at home, familiarize yourself with multiple models, because they will all behave differently depending on your hardware and your needs.