AI Impact Summit 2026


Zo526l
No.3229
Sarvam has been releasing new models since Feb 5. Some of these are interesting; their sarvam-1 foundation model is also interesting for translation etc. locally.
New models like Sarvam Edge are even more interesting. It seems the biggest release is coming up today in a few hours.
>Join us for the next drop. This time, in-person.
>Date: February 18, 2026
>Time: 12:30 PM
>Venue: Bharat Mandapam, Plenary Hall B
https://nitter.net/SarvamAI/status/2023755059813904708
Let's see.


Zo526l
No.3230
>>3229(OP)
Drop 10/14: Announcing Sarvam Edge, our dedicated effort to bring intelligence to run offline and on-device. Our goal is to make AI that is efficient, private, and accessible everywhere.
Today we are showcasing our speech recognition, speech synthesis, translation, and document digitisation models running on-device. This comes on the back of our research efforts to make models super small in memory and compute footprints, while being close to accuracy of much larger models.
We believe running models at the edge will truly unlock population scale AI in India. And we are working with partners to make this a reality. Read more in our blog here:


Zo526l
No.3231
Sarvam Kaze
>On the surface, Kaze does what you'd expect from smart glasses in 2026—it listens, responds to voice commands, and captures what the wearer sees through embedded cameras. Sarvam is also letting developers build custom apps on its platform, though it hasn't shared much detail on what that looks like in practice.
>Where things get more specific is the AI underneath. Kaze runs on Sarvam's own foundational models, which are trained for Indian languages and use cases like voice interfaces and document processing. Meta's Ray-Ban glasses recently launched in India but still rely largely on English-first interactions, so there's a clear gap Sarvam is trying to fill.


Zo526l
No.3232
I really like what Sarvam has been doing: small models, a focus on the ability to run them locally, and models that actually perform their intended tasks.
If Sarvam Edge etc. are made open source it would be fun to try these. Less than 500MB in size.


Zo526l
No.3233
I am working on creating my own front-end tool to handle various local models.
Pic rel, just a rudimentary example with sarvam-1, which is good at translation, summaries etc.
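For anons curious what such a front end looks like under the hood, here's a minimal sketch: a registry that dispatches prompts to pluggable local-model backends. The backend below is a placeholder, not a real sarvam-1 call — swap in whatever local inference library you use (llama.cpp, transformers, etc.).

```python
# Sketch of a tiny front end that juggles multiple local models.
# The registered backend is a stand-in; a real one would call a
# local inference library (llama.cpp, transformers, etc.).

class ModelRegistry:
    """Maps model names to callables that take a prompt and return text."""

    def __init__(self):
        self._backends = {}

    def register(self, name, backend):
        self._backends[name] = backend

    def run(self, name, prompt):
        if name not in self._backends:
            raise KeyError(f"unknown model: {name}")
        return self._backends[name](prompt)


registry = ModelRegistry()
# Placeholder standing in for an actual sarvam-1 inference call.
registry.register("sarvam-1", lambda prompt: f"[sarvam-1] {prompt}")

print(registry.run("sarvam-1", "translate this"))
```

The point of the registry is that adding a new local model is one `register()` call, so the UI code never cares which backend is behind a name.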


Zo526l
No.3234


Zo526l
No.3235
Lots of private cockroaches do bs like this. Because of these subhumans, genuinely good companies get clowned.


Zo526l
No.3236
Hmmm


Zo526l
No.3237
moment of truth


Zo526l
No.3238


Zo526l
No.3239
https://nitter.net/AryamanBharat/status/2024019448206663864 serial updates.


Zo526l
No.3240
Sarvam SAARas


Zo526l
No.3241
>>3240
This is the speech-to-text ASR model, released on 10th Feb.


Zo526l
No.3242
>>3240


Zo526l
No.3243
>>3242
There's Sarvam Dub


Zo526l
No.3244
>>3243
Sarvam Vision


Zo526l
No.3245
live event


Zo526l
No.3246
Sarvam Vision model OCR.
One of the best parts of these is probably their emphasis on Indian languages, which is great.
You can test it on their online platform.


Zo526l
No.3247
It's happeninggggggg
>30bn model
>1bn active MoE
>32k context length


Zo526l
No.3248
>lodu organizer pointing towards audience instead of the benchmarks


Zo526l
No.3249
>>3248


Zo526l
No.3250
pretty cool


Zo526l
No.3251
105bn model here.
128k context length


Zo526l
No.3252
>>3251


Zo526l
No.3253


Zo526l
No.3254
>>3253
codejeet benchmarks


Zo526l
No.3255
>>3254
>this 105bn model beats deepseek r1, which was a 600bn-param model, in most tasks


Zo526l
No.3256
This is one of the biggest whitepills. It should be flooding all of Indian social media, but it won't happen.


Zo526l
No.3257
>>3254
Live tomorrow.
WYX08o
No.3258
>>3232
Their translation model is really the best I have used for Indian languages so far.


Zo526l
No.3259
>>3258
This whole thing is big big big whitepill. Genuinely nice to see.


Zo526l
No.3260
IT'S GOING TO BE OPEN SOURCE 30BN AND 105BN BOTH


Zo526l
No.3261
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
LET'S FUCKING GGGOOOOOOOOOOOO


Zo526l
No.3266
I really like how their focus has been on applications.


Zo526l
No.3269
PS: if you buy API credits now, you get 2x credits.


Zo526l
No.3271
What a nice presentation. Amazing to see our AI mission being led by some really talented folks.
Chat goes live tomorrow; hopefully the models also release asap.


Zo526l
No.3272
>biggest tech-related indian subreddit
>zero posts about sarvam's new models, which by every measure should be the biggest news in all of India right now
>only the usa, china and maybe the french have such models
>threads upon threads filled with low iq bhosadpillers recycling twitter shit
>glazing deepseek 'day' while no news about sarvam


Zo526l
No.3273
Also, if you guys can, try to follow this dude Aryaman. While everyone was circlejerking over bhosadpilling, this dude was live-sharing the contents of the presentation. Thank you saaar.


Zo526l
No.3274
It will take days before it becomes mainstream how big this day was. Another feather in the cap of Ashwini Vaishnaw; he gets far more bad press than he deserves. One of the best ministers right now.


Zo526l
No.3275
This is L&T / xTerra Robotics' in-house, made-in-India robo dog.
>Witness Svan M2, India's first commercial quadruped robot at Booth 9, Hall 2 in India AI Impact Summit.
Due to the subhumanity of the galgotiachode, real products are not getting the limelight. Bhosadpiller baap is giving attention to chinese baits instead of genuinely appreciating the countless nice indian startups in the summit.


Zo526l
No.3276
>Think the GoI has done really well on its part with funding Sarvam and supporting it through initial risk phase to get Sarvam to where they are. Now with their potential clearly visible, can we see industry veterans like Adani(who eloquently spoke of need for sovereign AI being a necessary strategic capability) or Infosys/TCS to come forward and invest in likes of Sarvam in Billions? Maybe with credit hours on his GPU hyperscalers etc? Sarvam is no more “high risk” in the Indian big biz owner mindset sense. The potential and the market are clearly visible.


Zo526l
No.3277
S+PuUT
No.3278
NOTHING HAPPENS


Zo526l
No.3279
>>3278
achcha lodu


Zo526l
No.3280
>Sarvam 30B-A1B
>Sarvam 105B-A9B
model names just for the sake of it (the A1B/A9B suffix is the active parameter count per token).


Zo526l
No.3281
>Just to confirm all Sarvam models are foundational and trained from scratch… our models are not finetuned of any open source models


Zo526l
No.3286
>Presenting PARAM: India's most powerful indigenous robot dog. Not assembled, not bought, BUILT IN INDIA, built by INDIANS. For our nation, for our century, for our world!
https://nitter.net/GeneralAutonomy/status/2024086721755779419


Zo526l
No.3287
>At the India AI Summit, Larsen & Toubro (L&T) today announces a proposed venture under the India AI Mission to build sovereign, scalable GW-scale NVIDIA AI factory infrastructure to reinforce India’s position as a global AI powerhouse. This partnership is aimed at India’s enterprises, policymakers, industry leaders, global off-takers, and analysts seeking production-grade AI capacity anchored in India’s digital and industrial transformation.


Zo526l
No.3288
I attended the AI Impact Summit today.
5 made in India AI Products BLEW my mind:
>1. EkaScribe: A doctor in a busy rural clinic sees 5 patients without touching a keyboard. The AI handles the prescriptions, history, and filing.
>2. Ottobots: Made in India autonomous robots that can deliver medicines in hospitals, navigating elevators and corridors by themselves (no more family members rushing to pharmacies).
>3. Sarvam Kaze: Sleek, Made-in-India AI spectacles that "see" what you see. They don't just record - they explain the world to you in your local language via bone conduction like a genius translator whispering in your ear.
>4. Sarvam Edge: AI that runs on your phone with zero internet - translates 22 languages in real-time even while you're in a basement.
This is interesting; if it actually works and is open, I will integrate it with bhach.
>5. Mankomb’s "Chewie": AI native kitchen appliance that literally eats your waste. It converts wet waste - bones, meat, and fluids into nutrient rich "RegenSoil" in hours using real-time AI sensors.
https://nitter.net/ujjwalscript/status/2023771876380835960#m


Zo526l
No.3289


Zo526l
No.3290
>>3231
Caleb reviewing the glasses, interesting: https://nitter.net/caleb_friesen/status/2024090687546028228#m
ST1fit
No.3294


Zo526l
No.3295
>>3294
nice yara
There's more I guess; he is talking about chat going live tomorrow, could be more. Let's see.
P8KVX0
No.3296
Are they gonna release it today? Will try it out.


Zo526l
No.3297


Zo526l
No.3301
good talk
Coe7XE
No.3302
>>3260
I'm an artsfag with 0 knowledge about AI except using ChatGPT to ask basic questions.
Can you give me an idea of what an AI being open source means, and how AI models are integrated in the backend?
If you can, can you also please explain what parameters and tokens mean?


Zo526l
No.3308
>Yesterday, we released Sarvam 30B and Sarvam 105B. Built from scratch, both models leverage a Mixture of Experts (MoE) architecture, delivering stronger performance at scale while using compute more efficiently
>Sarvam 30B activates just 1B non-embedded parameters per token, so it runs far more efficiently while maintaining strong capability.
>The model was pretrained on 16 trillion tokens spanning code, web, multilingual, and mathematical data, and supports a 32K context window that enables long-running agentic interactions.
>It is ideal for real-time applications like conversational AI and high-throughput workflows where latency matters
>Sarvam 105B model follows the same MoE design, activating 9B parameters per token to combine large-scale capability with efficient execution.
>With a 128K context window, it is built for more demanding tasks including complex reasoning, agentic task completion, tool use, coding, mathematics, and science.
>This makes it well suited for enterprise and population-scale deployments that require deeper reasoning and structured problem solving
>Both models will be released as open weights on Hugging Face soon. API access and dashboard support to follow!
https://nitter.net/SarvamAI/status/2024493917539094539
>>3302
>what is ai being open source
Technically it's a misnomer, which is why the term 'open model' exists. Most LLMs we call 'open source' are really 'open model': after training, the company shares the weights with you. What are weights? They encode the learned relationships between tokens — small bits of text or phrases — and the model predicts the next token based on them.
If they were truly open source, we would get a lot more: the training data, the code, and so on. It depends; GPT-2 is one example.
>ai models are integrated in the backend
Through APIs. Most AI companies, including Sarvam, have APIs for their various models which let people use their services. On top of the API there are techniques like RAG, tools like LangChain, and so on. Many bots, customer support systems etc. use the APIs and these tools together.
Parameters: the higher the count, the more complex the patterns a model can capture, but it also means more resources. There are further developments like MoE which let even higher-parameter models run efficiently: instead of going through all the billions of parameters for each token prediction, they route to sub-groups of them.
Another thing about open models is that once the weights are out, people can fine-tune or quantize them, which means they can be adapted to specific use cases, or, if quantized, run on devices that couldn't run the original variants due to memory requirements.
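The parameter/quantization point is just arithmetic: weight memory is roughly parameter count times bytes per parameter, and with MoE the per-token compute tracks only the *active* parameters. Back-of-envelope sketch with illustrative numbers, not official figures:

```python
def weight_memory_gb(params_billion: float, bits_per_param: int) -> float:
    """Approximate weight memory in GB: parameters x bytes per parameter."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

# The same 30B-param model at fp16 vs 4-bit quantized: ~4x smaller footprint,
# which is what lets a quantized variant fit on smaller devices.
fp16_gb = weight_memory_gb(30, 16)  # 60.0 GB
int4_gb = weight_memory_gb(30, 4)   # 15.0 GB

# MoE: a "30B-A1B"-style model touches only ~1/30th of its expert
# weights per token, so compute per token scales with 1B, not 30B.
active_fraction = 1 / 30
```

This is why "105B beats a 600B dense model" headlines are less surprising than they sound: total parameter count alone doesn't determine either quality or per-token cost.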
iHQ8tp
No.3312
Have they said anything about image/voice/video generation? I have only heard about chat so far
NhROX6
No.3333
Saara
WYX08o
No.3337
>>3312
Maya ai