AI = Actually Indonesians


NBVf7m
No.709
It was pinoys, but same thing tbh.
The CEO is getting charged with defrauding his investors by using humans while claiming it was AI.
AFZxCh
No.710
reply to my thread about running LLMs locally.


NBVf7m
No.711
>>710
you are not gonna be able to run a 67 billion parameter model on any laptop (i am not sure about macbooks).
Get an RTX 3060 or 4060 with as much VRAM as you can manage, and then maybe you have a chance.
However, smaller 7 billion parameter models can in theory be run.
I don't do it myself; another anon created a thread about it, so find that thread and ask him desu.
On that note, good idea, we should probably start a general dedicated to local models, what do you say sirs?
Start by grabbing bits of info from 4chan /g/ and LocalLLaMA.
On /g/ there's /aicg/ and /lmg/.
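For reference, here's roughly what running one locally looks like; a minimal sketch with llama-cpp-python, assuming you've already downloaded a quantized 7bn GGUF (the filename below is a placeholder):

    # minimal local inference sketch (pip install llama-cpp-python)
    # assumption: a quantized 7bn GGUF downloaded from HuggingFace; path is a placeholder
    from llama_cpp import Llama

    llm = Llama(
        model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # placeholder filename
        n_ctx=2048,         # context window size
        n_gpu_layers=-1,    # -1 offloads every layer to the GPU if VRAM allows
    )
    out = llm("Q: Name one use for a local LLM. A:", max_tokens=64, stop=["Q:"])
    print(out["choices"][0]["text"])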
7zK+rK
No.712
>>710
Just run them in Google Colab. It won't be possible on a laptop.
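If anyone wants a starting point on Colab, a minimal sketch; the model name is just an example of a small one, swap in whatever fits the free GPU:

    # Colab sketch (Runtime > Change runtime type > GPU)
    # assumption: torch and transformers come preinstalled on Colab
    import torch
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example small model
        torch_dtype=torch.float16,   # half precision to fit the free GPU
        device_map="auto",           # place the model on the GPU automatically
    )
    print(pipe("Explain in one line why VRAM matters for LLMs:", max_new_tokens=60)[0]["generated_text"])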
AFZxCh
No.713
AFZxCh
No.714
>>712
I want to actually use it for daily tasks.
I found this on Twitter saying that it can run models. If any anon knows, kindly elaborate on what this thing is.


NBVf7m
No.715
>>714
>https://vicharak.in/axon
I have been interested in them for a while; maybe I will get one of their products in the future.
I think it starts at 6k or something, which is not bad.
7zK+rK
No.716
7zK+rK
No.717
Seems more like a buffed-up Raspberry Pi than a device for running AI models.


NBVf7m
No.718
>>716
Not sure about the price, but there's one product that's like a Raspberry Pi; it's not their only one though.


NBVf7m
No.719
>>717
yeah


NBVf7m
No.720
if you have an RTX 3050, a decent chunk of RAM (i have like 64GB) and a decent processor, you can run many LLMs with around 7bn parameters or so.
maybe some image generation models like Stable Diffusion etc. too.
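For the image gen side, a rough sketch with the huggingface diffusers library, assuming pip install diffusers transformers accelerate and an fp16-capable card:

    # stable diffusion sketch using huggingface diffusers
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,   # fp16 roughly halves VRAM use vs fp32
    )
    pipe = pipe.to("cuda")
    image = pipe("a watercolor painting of the Himalayas").images[0]
    image.save("out.png")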
AFZxCh
No.721
>>720
well, i have 16GB DDR4, an 11th gen i5, and an RTX 2050 with 4GB VRAM. what can i even do with this? also i think my GPU will be the better option for anything like this.
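For scale, rough weights-only math on what a 7bn model needs (ignoring KV cache and other overhead):

    # back-of-envelope VRAM estimate for a 7bn parameter model, weights only
    params = 7e9
    for fmt, bytes_per_param in {"fp16": 2, "int8": 1, "int4": 0.5}.items():
        print(f"{fmt}: {params * bytes_per_param / 1e9:.1f} GB")
    # prints fp16: 14.0 GB, int8: 7.0 GB, int4: 3.5 GB
    # so only a 4-bit quantized 7bn model comes close to fitting a 4GB card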
AFZxCh
No.722
>>720
>Simpler or less detailed responses: While a 10B model can handle many tasks very well, it may sometimes lack the richer, more layered answers that come from a model with hundreds of billions of parameters.
>Reduced performance on very complex or highly technical queries: The distilled model might occasionally struggle with the most challenging reasoning tasks compared to its larger counterpart.
what do you think about this, or what has your experience been?


NBVf7m
No.723
>>721
try 7bn parameter models - Mistral, DeepSeek, Meta; all of them probably have one.
iirc DeepSeek has a 1bn parameter model but it's mostly useless.
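On a 4GB card you'd split the model between GPU and system RAM; a sketch with llama-cpp-python, where the GPU layer count is a guess you'd tune down until out-of-memory errors stop:

    # partial GPU offload sketch for a 4GB VRAM card (pip install llama-cpp-python)
    # assumption: a 4-bit quantized 7bn GGUF; 20 GPU layers is a guess, tune per card
    from llama_cpp import Llama

    llm = Llama(
        model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # placeholder filename
        n_ctx=2048,
        n_gpu_layers=20,   # remaining layers run on the CPU from system RAM
    )
    print(llm("Hello,", max_tokens=16)["choices"][0]["text"])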


NBVf7m
No.724
>>722
>what do you think about this or your experience on this?
Give me this week; i have a decent setup and can run these models.
I will try some LLMs and some image gen ones, and if it works out i will write about it. Probably kickstart a general.
AFZxCh
No.725
>>723
alright, should i do it on Ubuntu or Windows? i only have 40GB left on Ubuntu, so i'm asking about that.


NBVf7m
No.726
>>725
loonix, CUDA is better optimized for it and more up to date.
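Quick sanity check that CUDA is actually wired up after installing, assuming a CUDA build of PyTorch:

    # verify PyTorch can see the GPU on the Linux install
    import torch
    print(torch.cuda.is_available())       # should print True
    print(torch.cuda.get_device_name(0))   # e.g. the RTX 2050
    print(torch.version.cuda)              # CUDA version torch was built against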


NBVf7m
No.727
I have a separate, slightly older setup; if i can run 7bn models on it then ig it will be a breeze for you.
I will test there first. It has the same limitations as yours, like 16GB RAM etc., and a graphics card a bit worse than yours.