No.2983
How much electricity do you consume in your daily local model runs?
YMx7B4
No.2984
>>2983(OP)
I am poor, I don't have a GPU; the one in my laptop is a GTX 1650 (which is fine for general-purpose use, but not for running LLMs larger than 2B).
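A rough back-of-envelope for why ~2B is about the ceiling on a 4 GB card like the GTX 1650. The bytes-per-parameter and overhead figures below are assumptions (they depend on the quantization and on runtime KV-cache/activation costs), not measured values:

```python
# Back-of-envelope VRAM estimate for loading an LLM locally.
# bytes_per_param is ~0.5 for 4-bit quantization, ~2.0 for fp16;
# overhead_gb (KV cache, activations, runtime) is an assumed constant.
def vram_needed_gb(params_billions, bytes_per_param, overhead_gb=1.0):
    """Approximate VRAM in GB needed to run a model of the given size."""
    return params_billions * bytes_per_param + overhead_gb

# A 2B model at 4-bit: ~2.0 GB -> fits a 4 GB GTX 1650
print(vram_needed_gb(2, 0.5))
# A 7B model even at 4-bit: ~4.5 GB -> already over 4 GB
print(vram_needed_gb(7, 0.5))
```

So the 2B limit isn't arbitrary: the next common size class (7B) blows the 4 GB budget even under aggressive quantization.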
yvCyyh
No.2985
>>2983(OP)
I don't use local LLMs as they are useless on my limited hardware.
tGbf4Z
No.2986
> No GPU
> No life
tGbf4Z
No.2989
>>2983(OP)
I ran local models on my 13th-gen i5. 65 watts peak. OK for LLM use. Not fast. Image generation was not good.
I want to try those new AMD AI CPUs with built-in NPUs.
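To answer the OP's actual question with the 65 W peak figure above: the electricity used is just watts times hours. A minimal sketch, where the tariff (₹8/kWh) is an assumed example rate, not anyone's real bill:

```python
# Rough electricity use and cost of a local inference session.
# 65 W is the peak quoted in the post above; real average draw is lower,
# so this is an upper bound. price_per_kwh is an assumed example tariff.
def session_energy(watts, hours, price_per_kwh):
    """Return (energy in kWh, cost) for a session at constant draw."""
    kwh = watts * hours / 1000.0
    return kwh, kwh * price_per_kwh

kwh, cost = session_energy(watts=65, hours=2, price_per_kwh=8.0)
print(f"{kwh:.2f} kWh, cost {cost:.2f}")  # 0.13 kWh, cost 1.04
```

Even a daily two-hour session at peak draw is on the order of a rupee a day, so for CPU-only rigs the power bill is basically noise next to the hardware cost.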
tGbf4Z
No.2990
>>2983(OP)
And while I'd love a 12 GB graphics card, the costs even for a 3050 are high.