/g/ - Technology

Board dedicated to discussions related to everyday Technology!

Noticeboard

Going back to Basics: "As we discussed and shared earlier, I am going bac..."
Social Media Crossposts Policy and Threads on /b/: "Since anons bring up this issue repeatedly, let's ..."
Update on Threads which were deleted between 12 De...: "As you guys know that certain threads were randoml..."

BharatChan Disclaimer

Notice

Before proceeding, please read and understand the following:

1. BharatChan is a user-generated content platform. The site owners do not claim responsibility for posts made by users.

2. By accessing this website, you acknowledge that content may not be suitable for all audiences.

3. You must follow BharatChan’s community guidelines and rules. Failure to do so may result in a ban.

4. By using BharatChan, you agree to the use of cookies, mostly for user-session management.

A poster on BharatChan must abide by the following rules:

Sitewide Rules
You must be 18 or older to post.
Sharing personal details or engaging in doxing is strictly prohibited.
Political discussions should be confined to /pol/.
Off-topic discussions, thread derailment, or spam may result in a ban and IP blacklist.
Pornographic content is strictly prohibited.
Any activity violating local laws is not allowed.
If you are not Indian, you may only post in /int/, or create an account and request approval to post on other boards.

Recent Posts

Sarvam is now proven to be a disappointment
Androidfags zara idhar aana
XHDATA D-808 DX-ing setup, analogue modulation
i don't understand
RCE on Pocketbase possible?
Shifting to linux mint
AI Impact Summit 2026
/emacs/ general
Simple Linux General /slg/ - Useful Commands editi...
/wpg/ - Windows & Powershell General
Sarvam Apology Thread
Some cool tech in my college
the hmd touch 4g
Holy Shit
Saar american companies have best privacy saaar
/desktop thread/
Forking jschan to submit a PR for captcha logic
JEEFICATION OF GSOC
4Chan bypass?
/g/ related blogpost - backup thread
Android Hygiene
My favorite game rn
COOKED
Are we getting real 5g?
I want to create my own forum but I don't know how...
Is my psu not compatible with my mobo?
/i2pg/ - I2P general
ISP Throttling
zoomies jara idhar ana
Zerodha just donated 100,000 usd to FFMPEG
Jio Fiber
/compiler_develoment/ thread
what is computer science?
just installed Arch Linux bros
Sketch - A simple 2D graphics library in C
Gemini+Figma for UI
LEARNING SPREADSHEETS
/GDG/

Sarvam Apology Thread

Anonymous KA vG49v7 No.3262

>32bn parameters, 32k context length

>105bn parameters, 105k context length

>Great performance on multiple benchmarks

>Can browse the web, multimodal

>Bunch of small models - which can run locally, btfo'ing bigger models - in speech-to-text, ASR, OCR, etc.

>The 32bn and 105bn models are going to be open source.

>The platform is going to be live tomorrow or soon after.

All these models perform even better in native tongues. I kneel... their voices are some of the most natural I have ever heard.

Anonymous KA vG49v7 No.3263

>>3262(OP)

:animu_casual_dance_g::animu_casual_dance_g::animu_casual_dance_g::animu_casual_dance_g::animu_casual_dance_g::animu_casual_dance_g::animu_casual_dance_g::animu_casual_dance_g:

Whitepill Whitepill Whitepill day!!!

Anonymous GJ yaKeP0 No.3264

>>3262(OP)

sarvam is the shit. fucking brilliant and not too costly either. love it! sarvam khalu idam brahma! (Sanskrit: "all this is indeed Brahman")

Anonymous KA vG49v7 No.3265

>>3264

It's really a complete W in all terms. I wasn't expecting it, wasn't even hoping, but they really did it.

Anonymous GJ +++1QA No.3267

>>3265

i have used several models like ai4bharat and openai whisper for my personal transcription project and all i got from them was garbage devanagari text. i thought i was doing the chunking wrong. but sarvam gave me like 60% accurate results on the first try. wth
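To put a number on "60% accurate" instead of eyeballing it, here is a minimal word error rate (WER) sketch: plain word-level Levenshtein distance, no external libraries. The reference/hypothesis strings are made-up examples, not from any real transcript.

```python
# Minimal word error rate (WER): edit distance between the reference and
# hypothesis transcripts, normalised by reference length. Works on any
# whitespace-tokenised script, Devanagari included.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = word-level edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("मेरा नाम राम है", "मेरा नाम श्याम है"))  # 0.25 — one word wrong out of four
```

"60% accurate" would then mean roughly WER 0.4 against whatever reference text the transcript was compared to.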

Anonymous KA vG49v7 No.3268

>>3267

Thing is, the previous models were nothing; this is bigger than the DeepSeek R1 moment. I guess it will take at least 2 days for the news to hit the mainstream.

I can run the 30bn model locally. It's funny how it's beating the likes of 4o in some benchmarks. Scam Altman's gloating now feels funny.

We will see more attempts by foreign labs to poach these guys and cause issues; hopefully we persist.
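On running a ~30B model locally: MoE cuts the compute per token (only a few experts fire), but every expert's weights still have to fit in memory, so quantization is what decides whether it runs on your box. A back-of-the-envelope sketch, weights only, KV cache and runtime overhead ignored; the parameter count is from the thread, the quant levels are generic assumptions, not Sarvam's published formats.

```python
# Rough VRAM/RAM needed just to hold the weights of an N-billion-parameter
# model at a given precision. bits/8 = bytes per parameter; result in
# decimal gigabytes. Real usage adds KV cache and activations on top.

def weight_memory_gb(n_params_billion: float, bits_per_param: float) -> float:
    bytes_total = n_params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9  # decimal GB

for name, bits in [("fp16", 16), ("int8", 8), ("q4", 4)]:
    print(f"30B @ {name}: ~{weight_memory_gb(30, bits):.0f} GB")
# 30B @ fp16: ~60 GB
# 30B @ int8: ~30 GB
# 30B @ q4: ~15 GB
```

So a 4-bit quant of a 30B model lands around 15 GB of weights, which is the regime where a single consumer GPU plus system RAM offload becomes plausible.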

Anonymous GJ 1JKj36 No.3270

>>3268

oh yeah. just like they poached clawdbot guy. i can see that happening

Anonymous ARYA rRfQ/N No.3291

>>3262(OP)

>All these models perform even better in native tongues. I kneel... their voices are some of the most natural I have ever heard.

Hmmm, do we finally get Jeets who will do the bare minimum of excelling in native script and what not?

Anonymous KA vG49v7 No.3292

>>3291

>Hmmm, do we finally get Jeets who will do the bare minimum of excelling in native script and what not?

sorry didn't get you

Anonymous IN DwYIKL No.3293

>>3262(OP)

Based.

Hope the premium plan is affordable. I'd like to use this.

Anonymous IN pwivAn No.3303

>>3262(OP)

54 million as total investment. The main advantage is cleaner input data, as opposed to compute-maxxing.

Best part is it's more useful than your usual LLM for Indians.

Anonymous IN nN81mJ No.3304

>>3262(OP)

Just another psyop to save face after Galgotia disaster

Anonymous IN 6HpeFU No.3305

>>3304

The whore just needs an excuse to cry.

Nothing will ever come of people like you.

You just keep shitting everywhere.

As if anything would ever get started by you.

Anonymous KA vG49v7 No.3307

>Yesterday, we released Sarvam 30B and Sarvam 105B. Built from scratch, both models leverage a Mixture of Experts (MoE) architecture, delivering stronger performance at scale while using compute more efficiently

>Sarvam 30B activates just 1B non-embedded parameters per token, so it runs far more efficiently while maintaining strong capability.

>The model was pretrained on 16 trillion tokens spanning code, web, multilingual, and mathematical data, and supports a 32K context window that enables long-running agentic interactions.

>It is ideal for real-time applications like conversational AI and high-throughput workflows where latency matters

>Sarvam 105B model follows the same MoE design, activating 9B parameters per token to combine large-scale capability with efficient execution.

>With a 128K context window, it is built for more demanding tasks including complex reasoning, agentic task completion, tool use, coding, mathematics, and science.

>This makes it well suited for enterprise and population-scale deployments that require deeper reasoning and structured problem solving

>Both models will be released as open weights on Hugging Face soon. API access and dashboard support to follow!

https://nitter.net/SarvamAI/status/2024493917539094539
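The "activates just 1B of 30B parameters per token" line is the heart of MoE: a small router scores all experts for each token, but only the top-k actually run. A toy, stdlib-only sketch of that routing step; every size, the random router, and the stand-in experts here are illustrative assumptions, nothing to do with Sarvam's actual architecture.

```python
# Toy top-k Mixture-of-Experts routing. All NUM_EXPERTS experts exist in
# memory, but each token only executes TOP_K of them, weighted by a
# softmax over the router's scores for those winners.
import math
import random

random.seed(0)

NUM_EXPERTS, TOP_K, DIM = 8, 2, 4

# Each "expert" is just a distinct per-element scaling here, standing in
# for a full feed-forward block.
experts = [lambda x, s=s: [v * (1 + 0.1 * s) for v in x] for s in range(NUM_EXPERTS)]
router = [[random.gauss(0, 1) for _ in range(NUM_EXPERTS)] for _ in range(DIM)]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(token):
    # Router produces one logit per expert for this token.
    logits = [sum(token[d] * router[d][e] for d in range(DIM)) for e in range(NUM_EXPERTS)]
    # Keep only the TOP_K highest-scoring experts; the rest stay idle,
    # which is where the compute saving comes from.
    top = sorted(range(NUM_EXPERTS), key=lambda e: logits[e], reverse=True)[:TOP_K]
    gates = softmax([logits[e] for e in top])
    out = [0.0] * DIM
    for gate, e in zip(gates, top):
        for d, v in enumerate(experts[e](token)):
            out[d] += gate * v
    return out, top

out, used = moe_forward([0.5, -1.0, 0.3, 0.8])
print(f"used experts {used} of {NUM_EXPERTS}")
```

Scale the same idea up (experts become multi-billion-parameter FFNs, routing happens per layer) and you get the "30B stored, ~1B active per token" arithmetic from the announcement.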
