Rendered at 19:04:17 GMT+0000 (Coordinated Universal Time) with Cloudflare Workers.
sundarurfriend 9 hours ago [-]
As an English-as-second-language speaker and writer, one thing Grok really shines at is capturing the tone and level of "formality" of a piece of text and the replicating it correctly. It seems to understand the little human subtleties of language in a way the other major providers don't. Chatgpt goes overly stiff and formal sounding, or ends up in a weird "aye guvnor" type informal language (Claude is sometimes better but not always).
Grok seems in general better at being "human" in ways that are hard to define: for eg. if I ask it "does this message roughly convey things correctly, to the level it can given this length", it will likely answer like a human would (either a yes or a change suggestion that sticks to the tone and length), while Chatgpt would write a dissertation on the message that still doesn't clear anything up.
Recently I've noticed that Grok seems to have gotten really good at dictation too (that feature where you click the mic to ask it something). Chatgpt has like 90-95% accuracy with my accent, the speech input on Android's Gboard something like 75%, Grok surprisingly gets something like 98% of my words correct.
michaelbuckbee 7 hours ago [-]
I did a quick eval comparing Grok 4.3, Opus 4.7 and GPT 4.1 and they actually seem pretty similar:
They all did pretty well at a more "formal" tone, but GPT4.1 was the only one that didn't make me cringe with a "casual" tone.
[edit] fwiw, grok was also the fastest+cheapest model, claude was slowest and priciest.
sundarurfriend 5 hours ago [-]
This is the most basic level of eval, of whether they can produce output that will be considered by someone somewhere (usually a young urban US American) as informal toned. Real human communication is far more nuanced than this, different groups have different linguistic registers they're used to and things outside it sound odd even if they can't articulate why. You could also want to be informal but not over-familiar with the other person (for eg. in a discord chat to a new acquaintance) - actually looking at the outputs here, the Claude output seems best fitting for that (in my subjective view anyway) than to the one you gave it - or want many other little variations.
What makes one cringe and another recognize as familiar and comfortable is also pretty subtle and hard to define. These things need nuanced descriptions and examples to actually get right, and it's in understanding those nuances and figuring out the register of the examples that Grok outshines the others.
Romario77 4 hours ago [-]
you said that English is not your first language, so heads up - you don't need "for" when you use "e.g.", it already means "for example".
idiotsecant 2 hours ago [-]
You presumably do have English as a first language so you should know that sentences begin with capital letters.
Was that helpful and interesting conversation?
jasonjmcghee 5 hours ago [-]
That's Grok 4.2 not 4.3 right?
And why are you comparing to gpt-4.1? (As opposed to one of the 6? model releases since then - would have expected gpt 5.5)
michaelbuckbee 3 hours ago [-]
Good catch, there was an issue with the second hardest thing in programming (caching).
Wow, I'm surprised. Grok 4.3 actually is noticeably better than the other two for the close-friend variant. Surprisingly I found Claude the cringiest of the three!
andai 1 hours ago [-]
Is it me or did GPT get noticeably more natural in word choice recently? You can see it between 4.1 and 5.5 here, but I'm not sure when that happened. (My guess would be one of the recent 5.x releases.)
Edit: I meant specifically the absence of bizarre phrasing. That seems to have improved.
wamatt 3 hours ago [-]
Thanks from where I'm looking Grok 4.3 and Claude 4.7 do a better job on the informal close friend/coworker vibe.
ChatGPT sounds fake / formal phrasing (for the specific close friend context) and has em-dashes and uses capitalization. Hence, ChatGPT does not, imo grok the assignment ;)
embedding-shape 7 hours ago [-]
I know it's just an evaluation, but seeing an informal message and a prompt to ask to rewrite this informal message to the tone of an "informal message" when the original one sounds just fine, just makes me sad... Not because of this evaluation, but because it reminds me that this is how some people use LLMs, basically asking it to remove your own voice from texts that are generally fine already.
michaelbuckbee 6 hours ago [-]
My sister in law is a pharmacist and the heaviest non-dev ChatGPT user I know and her main use case is writing professionally polite messages to doctors on how the drugs they prescribed to a patient would have killed them had she not caught a particular interaction or common side effect.
There's a lot of "tone" in it as she's not trying to anger these folks, but also it's quite serious, but also there's just everything else happening in medicine.
Feels like a great use.
ryandrake 3 hours ago [-]
Pretty neat. This kind of tone self-moderation comes naturally to good communicators, but I know people (on and off the spectrum) who really, really need help with this, and it's cool to see LLMs are able to do this. There are a surprising number of people in the business world who are just totally unable to tone-police themselves. In the medical field I'd be worried about hallucinations, of course, but presumably your SIL fact-checks the output.
5 hours ago [-]
hamdingers 2 hours ago [-]
She does herself a disservice by outsourcing that skill. One day she might have to actually talk to one of these people.
michaelbuckbee 1 hours ago [-]
She's 50 years old has a doctorate in pharmacy and has worked as a hospital pharmacist for two decades.
I don't say this as a "gotcha", but more that even with all that experience she still finds it beneficial and helpful.
ActivePattern 2 hours ago [-]
Seeing this makes me wonder if Grok uses Claude conversations for training.
It's otherwise kind of surprising that they both converge on very similar phrases (e.g. "API integration is kicking my ass") that aren't anywhere in the prompt.
accrual 6 hours ago [-]
All three did well, and while I'm a Claude user, I found the Opus reply here added some unnecessary detail, like "Impact: Minimal; no downstream dependencies are currently at risk". Downstream dependencies weren't mentioned in the original message; for all we know downstream could be relying on a poorly performing API and is impacted by waiting another week for replacement.
mwigdahl 4 hours ago [-]
GPT 4.1? Why not a 5-class model?
rafram 5 hours ago [-]
All of these were frankly terrible. I guess Grok’s “informal” version sounded the most like a real human, but only because it reads exactly like an Elon tweet (including his favorite emoji!). It’s obvious what they’ve been training on.
djyde 9 hours ago [-]
I've also noticed that when I communicate with Grok in my native language, its tone is more natural than other models. I think this is due to the advantage of being trained on a large amount of Twitter data. However, as Twitter contains more and more AI-generated content now, I'm afraid continued training will make it less natural.
adjejmxbdjdn 7 hours ago [-]
The causation could also be the other way round.
Twitter language has started seeming normal casual to us, rather than us using normal casual language in Twitter.
pacific01 9 hours ago [-]
Did you try meta? I was into grok but now meta works well for me
thunderbong 9 hours ago [-]
I'm sure Twitter knows which are the bot accounts and is surely excluding them from their model training. Twitter bots aren't a new phenomenon after all.
cowsup 7 hours ago [-]
I don't think Twitter/X know for sure who the bots are, since Elon has been pretty vocal about trying to stop them for ages, yet I still get lots of spam DMs (as do others with far fewer followers/reach).
Even if 95% of the spam gets actively reported and dealt with, that still leaves a ton of nonsense on the platform, getting fed into the LLM. And spam has only gotten worse over the years, as the barrier to entry has lowered and lowered.
GTP 6 hours ago [-]
Are the spam DMs advertisements or more generally something linked to a product or service? I wouldn't be surprised if X is more lenient towards bots that pay them for adverts.
Zancarius 5 hours ago [-]
Most of what I get seem to be advertisements or automated messages if you follow large(r) accounts.
One of the most interesting things that I've noticed is these advertisements will be triggered if you follow accounts that are positioned as influencers. I followed one out of curiosity and received a DM from that account advertising some cryptocurrency service.
It's a good way to filter out and block accounts that have almost certainly not grown organically.
HarHarVeryFunny 6 hours ago [-]
I'd have guessed that at least some of the bots are Twitter itself, trying to draw you in with some sense of engagement. Given that Musk is the owner, and everything we know about him and have seen him do, I'd not be surprised if some of the MAGA bots are his too.
UltraSane 3 hours ago [-]
"Elon has been pretty vocal about trying to stop them for ages"
Elon lies a lot. Like ALL THE TIME.
joncrane 7 hours ago [-]
>Elon has been pretty vocal about trying to stop them for ages
You know people lie, right? Especially when the lie casts them in a better light and/or makes them more money.
subscribed 6 hours ago [-]
Elon lied on record many times, admitting to the lies only when forced, under oath.
hackinthebochs 6 hours ago [-]
Highly doubtful seeing as my 14 year old twitter account got caught in a recent bot ban wave with no means of contacting a human for recovery.
pixel_popping 9 hours ago [-]
There is bots everywhere, it has nothing to do with the platform, it has to do with attackers having an incentive to do mass account farming, no platform is secure against it.
kedihacker 7 hours ago [-]
With banning and deboosting they need to be very accurate but with filtering they can be more liberal in excluding
rglullis 6 hours ago [-]
Super easy, just make a web-of-trust type of thing: messages are only visible to those who already vouched for you. Otherwise, you pay $0.01/per message/per user reached.
simianwords 8 hours ago [-]
not really. there are easy heuristics to filter out bots with good confidence. FWIW i don't see any bots posting anything in my feed
pixel_popping 8 hours ago [-]
Yes your individual feed isn't really relevant if we talk about the masses, Reddit accounts are for sale quite cheap, HN as well, X too and so-on, it's literally just a matter of means/methodology. If I want today to do 1000 random posts talking about a certain thing, I could.
simianwords 8 hours ago [-]
my individual feed does matter because it shows that it is possible to curate something without bots which is obviously what XAI would do
ninininino 4 hours ago [-]
congratulations, you have solved anti-scam. go make your billion since its easy.
simianwords 2 hours ago [-]
its easy to solve at the offline level where you have time to filter out. in fact this is already done in pre-training by OpenAI and other companies.
you think its hard?
darkerside 7 hours ago [-]
Sadly, it's more likely that people will just start talking like bots
pdimitar 5 hours ago [-]
I've seen this expressed as a concern even from one of my colleagues. My retort was:
"English is not my native language and LLMs taught me quite a few very useful formalisms that do land well for people and they change their attitude towards you to be more respectful afterwards. It also showed me how to frame and reframe certain arguments. I agree sounding like an LLM is kind of sad but I am getting a lot of educational value -- and with time I'll sneak my own voice back in these newly learned idioms and ways to talk."
gusmally 2 hours ago [-]
Since you seem interested in the ins and outs of English, I want to say that "retort" has a connotation of anger or sharpness. Your response reads more like a "rebuttal" to me.
This is not a correction; maybe retort is what you meant and I'm not trying to be the English police. I just like discussing the intricacies of language :)
pdimitar 2 hours ago [-]
Actually super helpful, thank you!
somenameforme 33 minutes ago [-]
Like most of all widely spoken languages, there's a lot of regional variation in English. There's even a bunch of quizzes online where you answer 20 questions about phrasings, and they can tell you where you're from with a disconcertingly high degree of accuracy.
In my experience a "retort" is sharp or witty, but certainly not angry, whereas the word "rebuttal" is itself essentially antagonistic. You might use it when referring to something or someone that you look down upon, whereas a more neutral term would simply be "response."
pdimitar 28 minutes ago [-]
Just as I was reading your comment I remembered that Samuel Jackson used "retort" in his speech in the "Pulp Fiction" movie and was wondering whether he was openly antagonistic there (I mean, he killed a bunch of guys with a pistol shortly afterwards but still) or was it a witticism.
I admit I am lost on these nuances and I usually kind of use whatever idiom comes to mind, which yes, likely would net me some weird looks depending on where I am geographically.
microtherion 5 hours ago [-]
It's impressive that you've even managed to use an em-dash in spoken language. /s
pdimitar 5 hours ago [-]
I did spot the /s but it's not relevant: I use two normal dashes actually. :)
JKCalhoun 7 hours ago [-]
You're absolutely right!
7 hours ago [-]
jmalicki 4 hours ago [-]
So human language will improve and become more precise? I'm all for it, especially if we get more emojis in speech! Why is that sadly? Humans will learn to imitate their more intelligent betters.
nex-z 6 hours ago [-]
[dead]
techjamie 7 hours ago [-]
There was already evidence last year[1] that pointed to ChatGPT-specific words like "meticulous," "delve," etc becoming more frequently used than they were previously. The linked study used audio of academic talks and podcasts to determine this.
Part of me wanted to object to those two examples, which I’ve used frequently since the reaching adulthood in the 80s. Another part of me has been triggered by an apparent uptick in the word “crisp”, which my gut takes as an coding-LLM tell.
ls612 4 hours ago [-]
Opus 4.7 loves to use the word “substrate” whenever it gets the chance, it’s a really weird tic. How do these models end up this these sorts of behaviors?
soerxpso 24 minutes ago [-]
A friend of mine uses it for D&D prep and has told me that it's good for that in particular because of its ability to match the flavor/style that he's going for. He prefers ChatGPT for everything else.
FeloniousHam 4 hours ago [-]
I only use Grok through the "Gork" personality in the Tesla, but find its responses to be very realistic, often genuinely funny, and occasionally useful.
satvikpendem 4 hours ago [-]
Do you use its unhinged mode? It can be hilarious but tiresome after a little while.
FeloniousHam 2 hours ago [-]
We tried it, it was fun. Conspiracy mode just sounds like talking to my kids.
cimi_ 4 hours ago [-]
> As an English-as-second-language speaker and writer
How do you know it's actually better? I'm not trying to be condescending, but this reads to me like vibes :)
jp42 3 hours ago [-]
anecdata: The responses of grok on X in my language are really good. the tone, sarcasm, level of "vulgarity" in response is so accurate that it seem its written by human
kccqzy 4 hours ago [-]
This is more of a user preference. When I want to be informed my default is that chat bots should imitate the tone of Wikipedia. Not informal, but somewhat academic and in-depth. I don’t like it when chat bots explain things like an average human without pedagogical training: meandering, in the wrong order, and often having to repeat themselves.
satvikpendem 4 hours ago [-]
So you're saying it groks you better?
AntiUSAbah 7 hours ago [-]
[flagged]
0xy 7 hours ago [-]
Isn't it exhausting to view everything an ideological lens instead of reviewing technical achievements on their merits?
Leynos 6 hours ago [-]
There are limits to being willing to overlook ideology.
AntiUSAbah 6 hours ago [-]
From the richest person on the whole planet? Who literaly proactivly injects himself directly into global poltics? Which affects you and me and everyone else?
You don't think fighting child porn is worth while? Facism? For democracy?
Isn't it cheating and ignorant from you to not care a single bit about anything at all?
When do you even start thinking drawing a line? Let me guess, as late as it affects only you right?
jmye 3 hours ago [-]
So tired of this "reacting to a dude who built a CSAM generator is the real cringe" horseshit from people who know exactly what they're covering for.
SpicyLemonZest 6 hours ago [-]
It's very exhausting! But Elon Musk chose to leverage his fortune from Tesla and SpaceX into an ideological project to destroy a lot of things I care about, so he's left me no choice. If he'd like people to review his work on its technical merits, shouldn't he at the bare minimum apologize and promise not to do it again?
Scroll_Swe 5 hours ago [-]
[flagged]
loneboat 7 hours ago [-]
The hitler Grok? What? I genuinely don't understand what you're trying to say in this comment.
Close enough—Grok called itself "MechaHitler" (a link was posted).
greenavocado 7 hours ago [-]
He's equating Grok to Hitler which is absurd. If you want to speak with the führer you need to visit https://hitler.ai
AntiUSAbah 6 hours ago [-]
Elon Musk didn't like how Grok would contradict his opinion on Twitter/X.
So he started to work against this with playing around with.
For example grok started to pull in Musks tweets before responding, Musk introduced Grokipedia as a new data source and Grok got trained/adjusted differently.
These mechanism lead to Grok at one point, becoming very rasist.
artdigital 10 hours ago [-]
Grok is my favorite model for chatting, and my favorite voice mode. It seems to be the only voice mode that isn't routing to a extremely cheap model (like Haiku), and has been the highest quality out of all the frontier ones. When you subscribe to SuperGrok you can also create a "council" of agents, each with their own system prompt and when you ask something, they will all get asked in parallel to come to a conclusion. Good stuff!
Just wish they would finally put some work into their apps, it's the only thing keeping me from actually subscribing to SuperGrok:
- No MCP / connected apps support. It's been teased but here we are, still not available. I can't connect Grok to anything, so I can't use it for serious work
- Projects are still not available in the app so as soon as you move something into a project, it's gone from all the native apps
- No way to add artifacts (like generated markdown docs) directly to a project, we have to export to PDF/markdown and re-import. And there isn't even a way to export artifacts. This makes serious project work hard because we can't dynamically evolve projects with new information
- No memory, no ability to look up other chats, each chat is completely new
- No voice mode in projects at all
If someone from xAI is reading this, please consider adding some of these.
base698 5 hours ago [-]
Starting to like the lack of memory. Claude remembers I have a grill and will interject in conversations about how maybe this thing would go well with BBQ when it's unrelated or just also about food.
Petersipoi 2 hours ago [-]
This is so obnoxious. I ended up deleting all the memory from Gemini because it ended every response with, "As an engineer, father of X, you'll love this because...". As if I want my occupation and the number of children I have to be relevant to which lawn mower I buy.
toraway 1 hours ago [-]
Haha I recently asked Gemini for a product comparison for USB-C GaN chargers and it randomly inserted "as a Software Developer at $COMPANY working remotely, you may find the 100W fast charging useful when using your company laptop while travelling."
Like, thanks, really useful stuff (and definitely worth the creepy vibes to include that).
miohtama 6 minutes ago [-]
I like my Python with hot sauce.
xur17 3 hours ago [-]
Gemini thinks my name is my brother in law's name, and despite explicitly telling it that's not my name + digging through the settings, it still amusingly calls me the wrong name.
numbers 35 minutes ago [-]
:D that's like my Claude where it loves to point out that I have an ADU in the backyard in unrelated situations.
Eliezer 4 hours ago [-]
You can turn that off in settings.
burnte 1 hours ago [-]
I have that disabled. I tend to use different chats as the LLM equivalent of private browsing, so I like it to not have memory transferred between them.
UltraSane 3 hours ago [-]
I'm a network engineer and Claude loves to make analogies to network routing protocols and such. They are often very creative. You can actually edit the profile Claude makes of you. It can be very funny to say you are a professional clown or mime or something equally odd. I wonder what analogies it would create for horse semen extractor?
HarHarVeryFunny 6 hours ago [-]
The Gemini app voice mode uses one of their more recent models (and not some gimped small one), and is very capable. The personality is also fine, much more natural than the Gemini web chat, with my only complaint being it's insistence on suggesting a "next step" which seems to he something that they all do.
I'm not sure if the "next step" is just to drive cost up for you (but makes no sense for free version), or because they are all failing to learn more natural conversational patterns and distinguish questions that are begging for a quick answer and shut up as opposed to a longer exploratory conversation where next step may have some value, although it would be nice if these models would follow an instruction to NOT do it!
WarmWash 5 hours ago [-]
An interesting side bit about the gemini voice model is that you can use it in AI studio and type messages instead of using the microphone.
On the backend google does TTS to feed the model, which then speaks back you via sound on your speakers.
jquery 4 hours ago [-]
The “next step” is in the system prompt, not the model. Gemini leaked part of its system prompt to me a few days ago, and there was something in there encouraging it to ask the user what they wanted to do next at the end of its response. Something about “give the user 1 or 2 options for follow up”.
I honestly find it rather annoying, but Gemini has stopped doing it to me for the most part, so maybe they’re trying out a new system prompt.
altmanaltman 5 hours ago [-]
I think the "next step" instruction is more about engagement than cost, basically giving the user some options to continue the chat. I always have had success by ending the prompt with "only reply with nothing else but the answer to the query in a precise way". This usually always works better than telling it to not ask leading questions etc but a straight up expectation of the answer format you need is an instruction that most models can follow imo
artdigital 9 hours ago [-]
I also think Grok would benefit from allowing usage of "SuperGrok Heavy" (their $300 plan) in coding harnesses with included usage. Currently they give you some API credits on the Heavy plan so you can use some Grok for coding, but $300 USD value is just not there.
Not saying they should create their own grok-code harness, just allowing usage in existing ones would already be beneficial. But that's probably what the Cursor acquisition is going to do eventually
brightball 3 hours ago [-]
IMO everything you mention is the reason for the Cursor deal.
HardCodedBias 4 hours ago [-]
I use ChatGPT all of the time, but the model backing the voice model (or it's settings) is intensely stupid.
If Grok is actually good here, they will have a customer!
AlwaysRock 4 hours ago [-]
I could be wrong but I think the voice mode that chatgpt uses is still a 4.something model.
ajitid 8 hours ago [-]
If I sub to SuperGrok, would I be able to use it in Pi agent or in Opencode? This is not clear to me if I can. Do I get an API Key in SuperGrok?
everfrustrated 8 hours ago [-]
No, no api access for the Grok product.
APIs are only via the xAI product.
afpx 9 hours ago [-]
When I signed up, I accidently paid for a full year. So from time to time, I'll throw it something just to see what it produces compared to the other LLMs. And, even after all this time, it still feels like a really "dumb" model compared to the other frontier ones. But, worse, many of my system prompts make it go wacky and puke jibberish. However it was pretty cool for those couple months awhile back when it was uncensored. You could ask it about a wild conspiracy, and it would actually build the case and link you to legitimite source material. They dropped the hammer down on that real quick.
2ndorderthought 9 hours ago [-]
Ah yes the psychosis reinforcement vertical. It's such a lucrative market for those schizophrenics and bipolars. Great way to get lots of engagement. Groks portfolio is so diverse
jmalicki 5 hours ago [-]
It's a great way to get funded by your CEO and get good performance reviews; xAI employees know how their bread is buttered.
readthenotes1 8 hours ago [-]
I have a schizophrenic relative who is in such a relationship with grok. Instead of telling hen you need to take your meds, it says hen is the smartest person in the world
2ndorderthought 8 hours ago [-]
I'm so sorry your family is suffering from this. I hope you can find a way to bring them back. Disorders featuring psychosis are so painful for everyone around them. Blessings to you and your family
afpx 7 hours ago [-]
I love how you guys downvote all the old comments to make them hidden from search. My no-name account rarely gets downvoted. But, within 20 minutes of posting this, I drop 10 points. Rando accounts
wincy 6 hours ago [-]
I upvoted your first comment because it was insightful, interesting, and added to the conversation. I downvoted this one because complaining about downvotes is largely considered to be in bad taste and doesn’t really help anything. I did both of these things before I realized you were the same person.
afpx 5 hours ago [-]
Yes, for sure I deserve downvotes for the above. Those types of comments should be downvoted. However, I needed to post it to point out that I got the -10 well before the comment above. I never experienced that before and thought it interesting enough to share. Karma doesn't mean anything to me personally. But burst behavior like that is unusual.
booleandilemma 4 hours ago [-]
Don't worry about HN points. It's all just fake anyway. Numbers on the internet. GitHub stars on the other hand, now those are real.
6 hours ago [-]
2ndorderthought 6 hours ago [-]
I upvoted both of your comments. I also cannot downvote anything.
afpx 8 hours ago [-]
Except that it pointed at original sources, like reference manuals, archival documents, published newspaper articles, magazine articles, etc. - a lot still available on archive.org. Good try with your 16 day old account. And, why would anyone trust NPR at this point? Get real, bud. Most people with any curiousity know all about the ADL, JStreet, AIPAC, Greater Israel, Mossad / CIA, Chabad networks, Epstein, drones, weapons programs, cryptocurrencies, etc. etc. etc. - but, don't worry they're all safe with papa Ellison.
Actually it's funny you mention Bill Hicks. I didn't even know who he was. Or Alex Jones. That claim was one of the more absurd ones I discovered. But, given everything else I learned over the past year, who f'n knows at this point.
8 hours ago [-]
2ndorderthought 8 hours ago [-]
Someone gets it!
2ndorderthought 8 hours ago [-]
"We have improved @Grok significantly," Elon Musk wrote on X last Friday about his platform's integrated artificial intelligence chatbot. "You should notice a difference when you ask Grok questions."
Indeed, the update did not go unnoticed. By Tuesday, Grok was calling itself "MechaHitler."...
Grok is definitely a reliable source of truthful sane rational information.
afavour 5 hours ago [-]
Rich billionaire Ellison = bad, compromised
Rich billionaire Musk = good, has no vested interest in biasing the output of his AI tool
Oarch 8 hours ago [-]
I'd agree on the voice transcription; it seems so much more accurate than the other frontier models I've used. I often speak to Grok and paste the transcribed output to Claude!
walletdrainer 9 hours ago [-]
> No MCP / connected apps support. It's been teased but here we are, still not available. I can't connect Grok to anything, so I can't use it for serious work
Grok has tool use, no? Why would you also need MCP? What does MCP add?
artdigital 9 hours ago [-]
I'm talking about the consumer Grok app and grok.com website. There currently are not connected apps (or MCP) at all, so while Grok can use tools, there is no way to add tools to it
Cakez0r 8 hours ago [-]
If someone from Grok is reading, don't waste time on these chaff features. The market will eventually deliver better 3rd party solutions to all of these things. There is an audience that isn't interested in these walled garden features and are only interested on intelligence per dollar.
raincole 7 hours ago [-]
Lol I wonder when Anthropic discussed the idea of Claude Code internally, were there bozos saying "3rd parties will eventually deliver this so we shouldn't waste time one it."
wincy 6 hours ago [-]
Personally, my work doesn’t want to get locked into a single LLM provider so we use Cursor. Much easier to fight the big corp software approval battle once then switch around the LLMs to the new hotness (provided legal has the requisite data sharing agreements in place, we’re not supposed to use Chinese models or Grok) but I can switch between Anthropic and OpenAI models at will.
Cakez0r 7 hours ago [-]
Power users are hotswapping these models into their own agents (hermes, openclaw, etc) which have their own systems for project management, memory, interacting with tools, etc. The important metric is intelligence per dollar. Can I drop this model into my harness and have it be cheaper without losing intelligence. That is where the puck is heading.
wyre 5 hours ago [-]
The only good thing Claude Code did was bring coding harnesses to a wider audience. It is not a good harness.
jmalicki 5 hours ago [-]
What are good harnesses? I haven't yet been able to get good agent teaming approaches out of other harnesses yet, before that feature I mostly regarded the space as competitive, but until another harness can do as well with Claude models it seems like it's better for now?
torginus 7 hours ago [-]
Aren't they 'wasting' time on these features exactly because the engineering requires a different, more traditional skillset from the ML work model people do, and can be done in parallel?
bilsbie 4 hours ago [-]
Grok has become my go to search engine lately. I think it’s the only AI with access to x posts and beyond that it seems to generally be more “searchy” than other LLM’s.
pantsforbirds 3 hours ago [-]
Grok and Gemini are the ones I tend to use for finding news related to breaking events. Both were really nice during the Iran incident when I wanted to find out things as they were being reported.
jmye 3 hours ago [-]
[flagged]
thallavajhula 2 hours ago [-]
Do people really use Grok for anything outside of Twitter memes or understanding tweets? I'm asking out of genuine curiosity.
qingcharles 2 hours ago [-]
Yes, it is genuinely useful for some tasks. It doesn't nanny you as much as the other models. I do a lot of hunting for orphan copyright items that are decades out of print, but the primary models won't do it, chastising me for trying to find copyrighted items. Grok will do it [0].
[0] sometimes you need to lightly jailbreak it, or rerun the prompt, the non-deterministic nature means sometimes you will get a refusal
rcpt 1 hours ago [-]
I haven't been nannied in a long time. It was definitely a problem 2 years ago but now it seems all the models are ok with just about everything I want.
amarka 2 hours ago [-]
Ohh sure, its users use it for all sorts of things
Grok has the most useful voice mode (ChatGPT voice mode is very dumb, grok seems to use same model as main chat), so if I want to use voice this is the AI I use.
Also I use it for all uncomplicated topics because it gives precise short answers without fluff. Very refreshing.
seanw444 1 hours ago [-]
Yes.
guluarte 2 hours ago [-]
I wonder how much of that comes from twitter training data. It is useful for memes and trends, but for other things is super bad.
sergiotapia 1 hours ago [-]
It's my go to for searches, DIY, personal finance, and more general slice of life AI.
Once it is as good as Kimi K2.6 for coding, I will probably use Grok exclusively. It really is the best conversational AI I've used. It has helped me fix a broken fridge, and a broken electrical oven. Literally saved me at least $4k this year.
Edit: Also saved me $600 because I did my taxes with it. H&R Block is cooked.
Edit 2: Oh shit it is as smart as Kimi K2.6. Time to try it!
swarnie 53 minutes ago [-]
How do you save money on taxes?
The taxes you owe is a mathematical solve which is always the same....
sergiotapia 32 minutes ago [-]
deductions
child credits
points per paycheck proper setup
and of course, avoiding to pay an accountant to set run all this if you are a normal w2 worker.
adampunk 46 minutes ago [-]
in america you need to pay a preparer for your taxes because we hate poor people. The user is saying they don't need to pay a preparer because they used Grok. I didn't do that this year but I'll probably do it next year with a frontier model. US taxes are a perfect use case for AI, tbh.
55 minutes ago [-]
tornikeo 9 hours ago [-]
So, we have:
- claude for corps and gov
- codex for devs
- grok for what, roleplay, racism? Those are the two things I've ever heard grok associated with around me.
GorbachevyChase 4 hours ago [-]
I know it’s really important to write and vocalize one’s alignment with the values of the day, but I don’t think language models being structurally incapable of offending your favorite race/ethnicity/caste should be an objective of AI labs. Language models are just systems and I’m not sure why we think users are not responsible for how they use their outputs. For the same reasons, I don’t dismiss the utility pens as a tool of “racism” because maybe somebody could write a naughty word on a bathroom stall.
You probably live somewhere where harassment is a crime, right? Probably, there are speech codes, too? Isn’t that enough? Do we really need to orient every effort of every person on earth around ethical fashions that change every few years?
goshx 4 hours ago [-]
> but I don’t think language models being structurally incapable of offending your favorite race/ethnicity/caste should be an objective of AI labs.
The opposite should not be an objective either, and Elon has been very openly manipulating what grok says.
bilbo0s 3 hours ago [-]
Good point.
But no one is saying "use grok".
Grok sucks. Not only because it's seemingly made only to serve the goal of ethnically cleansing non-whites or whatever, but also because it's just not even close to being as useful as other models. In human terms, grok is the job candidate who's simply not qualified. That candidate being a virulent racist is beside the material point.
Here's the thing though, the point of functional LLMs with fewer guardrails is still a good one. Grok is not that model. But such a hypothetical model would have broad application. (For good and for ill. Of course.)
solidasparagus 3 hours ago [-]
I don't agree. I avoided grok because of Musk for a long time, but having used it more, I think it is one the best models around and grok.com is an extremely good chat app. My evaluation was based on trying it before gpt-5.5 and obvious before grok 4.3, but it was, for me, the 2nd best model/chat app after claude. It's much less edgelordy than you might think based on the news.
tel 59 minutes ago [-]
All my usage of Grok for technical topics shows it regularly deeply misunderstanding things and just parroting back my question in fancy language. It’s the only frontier model I get this impression of. That makes it super annoying when it tries to market itself as good at engineering tasks when it seems (to me) to be much worse at them.
solidasparagus 24 minutes ago [-]
Interesting. I have not had this experience. I would like to learn more. Can you point me to any examples or domains where I might be able to replicate this?
mrhottakes 2 hours ago [-]
This comment section is full of people saying "use grok"
ivraatiems 3 hours ago [-]
A job candidate being a virulent racist would not be beside the point. It would be disqualifying to even let them interview.
mrhottakes 2 hours ago [-]
It's very telling how many HN posters think "being good at programming" can counterbalance "is a virulent racist"
Petersipoi 27 minutes ago [-]
No, it's telling that people like you have watered that word down so much that people don't trust it anymore.
So yes, if someone says "they're a great programmer, but they're racist" I'm going to ask, how are they racist? And at that point, if they can't give me a specific reason for why they're racist, I'm going to hire the guy.
It's also telling that you seem to think a tool is capable of "being racist". Hopefully this doesn't ruin your relationship with it, but LLM's cant think.
Capricorn2481 1 hours ago [-]
Yes, but I think that particular commenter is just throwing a bone to people that think that way so he doesn't get the "don't bring politics" treatment.
throwaway-11-1 3 hours ago [-]
Never had a pen claim to be mecha hitler and constantly talk about white genocide for no reason but yeah great analogy
Krasnol 3 hours ago [-]
Elon Musk has manipulated Groks outputs to target certain demographics. It is important to highlight this fact, as some people perceive the AI as an objective tool rather than a curated one.
Furthermore, I found your final paragraph unclear: are you implying that since harassment is a perennial issue, we should disregard any standards that might mitigate it?
1234letshaveatw 1 hours ago [-]
Is it your perception that other AIs are unmanipulated? Objective rather than curated?
culi 3 hours ago [-]
It's being biased on purpose. Musk has intervened multiple times when he believed Grok's responses were too "woke" or "leftist".
In response to Grok saying that the "woke mind virus is often exaggerated" the prompt was tweaked so that Grok now says "The woke mind virus 'poses significant risks'"
If you truly believed in what your comment states then you would oppose this sort of editorializing. But somehow I doubt this is a sincere argument.
Petersipoi 23 minutes ago [-]
Have you ever written a comment about how any of the other LLMs are editorializing in favor of the left, and how that's a problem? Because if you have, I'd love to see the evidence of your intellectual consistency.
But something tells me you're just doing the same thing that you're calling out
culi 2 minutes ago [-]
We don't have any proof of LLMs being editorialized in favor of the left.
We have clear proof of Grok and we also literally have a White House Executive Order mandating LLMs be editorialized to fight "woke"
Your version of reality is exactly skewed to what's actually going on.
25 minutes ago [-]
audunw 2 hours ago [-]
The new response works for me, because in my mind I’ve always defined “woke mind virus” as a a mental virus which causes people to become absolutely pathologically obsessed with fighting an imaginary enemy they call “wokeness”. It’s the only definition which makes sense. “Woke” itself was never that viral.
mrhottakes 2 hours ago [-]
You have cooked your brain on the internet
culi 2 hours ago [-]
Call it woke derangement syndrome.
People obsessed with fighting whatever they perceive as "woke" which remains ill-defined on purpose so they never have to actually formulate a rational take down beyond their emotional response
peyton 2 hours ago [-]
I agree with GP and I think Grok’s original response should’ve stood. What’s not sincere about, essentially, “don’t fuck with my tools”? My cordless drill didn’t come with a pamphlet about worker’s rights, and the world didn’t end.
sudb 9 hours ago [-]
So interestingly, I know of at least one application in a charity that deals with trafficking where grok was happy to do one-shot classification tasks where all other models refused to cooperate.
I think there's a surprising number of actually useful applications in this sort of grey area for a slightly-less guardrailed, near-frontier model (also the grok-fast models are cheap!).
vorticalbox 5 hours ago [-]
I am software dev and i was doing a security check on my own application (work) I was running in localhost and gave it access to the code.
every single model refused to attempt to run any sort of test to check if it was a n issue other than grok.
nico 4 hours ago [-]
A couple of days ago, using codex at work, all of a sudden it said my session had been flagged for security reasons. I wasn’t doing anything cybersecurity related, nor testing any vulnerabilities or anything like that, just trying to build a pretty simple web app
tcoff91 3 hours ago [-]
It seems really dumb for the models to not due security related things. What if I want it to do a security audit of my own software that I'm building?
vorticalbox 3 hours ago [-]
codex will actually help you look but it will refuse to actually try and exploit it.
it won't for example create a POC python script that you normally would use to prove the issue.
dmix 4 hours ago [-]
You couldn't even ask Claude how CopyFail worked. Even more general questions around it kept getting rejected.
cameronh90 6 hours ago [-]
Gemini especially has a habit of blocking my pretty mundane requests, claiming they’re attempts to jailbreak or create malicious code.
Grok also does quite well at code reviews in my experience because it’s not so aggressively ”aligned”.
kitsune1 3 hours ago [-]
[dead]
tomp 6 hours ago [-]
I couldn't get Gemini nor ChatGPT to do OCR of children's books (I literally own the books, so there's no copyright issue - all just fair use!).
The OCR was complex enough (bad quality photos) that "simple" OCR models couldn't do it.
Fortunately, Claude obliged (as well as Mistral OCR was helpful!)
2ndorderthought 9 hours ago [-]
There are lots of uncensored models out there. I don't think grok is leading in that front. They kind of pick and choose which things they want to support based on elons world views. Elon used to hang out with sex traffickers so of course grok is fine talking about it. Probably even offers strategies for them does free accounting has money laundering strategies etc...
1123581321 5 hours ago [-]
What are the leading uncensored models? How well do they perform for you?
2ndorderthought 5 hours ago [-]
I don't use any but they do exist and there are scientific papers discussing them. I heard about them through r/localllama
Scroll_Swe 5 hours ago [-]
>There are lots of uncensored models out there.
Like what?
Something as easy where normal people can login to a website and app and just use?
svachalek 2 hours ago [-]
Depends what you call easy but LMStudio is a drag and drop installation and can run thousands of different models.
2ndorderthought 5 hours ago [-]
I don't think companies are hosting them because imagine the liability. Could be wrong though. Again I don't know much about these things I just know they exist.
Scroll_Swe 5 hours ago [-]
Yes that is my point.
It is the dropbox comment all over again.
"Well you can just self-host to get uncencored same as Grok without NAZI!! Elon Musk!!"
Just like you can spin up an FTP to get your own Dropbox.
Well... very few people are going to actually do that.
CJefferson 3 hours ago [-]
Deepseek is fairly uncensored. I tried pushing it and reached my limits before it did.
spiderfarmer 9 hours ago [-]
[flagged]
user34283 8 hours ago [-]
[flagged]
2ndorderthought 8 hours ago [-]
[flagged]
derangedHorse 6 hours ago [-]
> so of course grok is fine talking about it. Probably even offers strategies for them does free accounting has money laundering strategies etc...
The slander comes in when you assume Elon knew and was complicit with their crimes to the point he'd intentionally normalize it as a discussion topic in Grok. You even went so far as to say it's willing to assist in committing crimes.
2ndorderthought 5 hours ago [-]
He is aware of the csam generation. He blamed the users and the official stance from his team was not to offer any fixes. That is the last I heard.
I do not see the slander. These are his viewpoints. He says him, grok, and his team aren't responsible for what users do. Other companies, countries and people feel differently about the responsibility for AI models generating csam for money.
Grok and xais depictions of it are that it isn't woke and is maximally based and is politically incorrect by design. So yes, chosing to avoid being correct about policies like laws and avoid social norms lead me to believe that the generation of hate speech(some of which was illegal in certain localities), csam, etc are an expected outcome. Like Elon musk said, it's the users fault not groks. So I would not be surprised if it offered other illegal advice or helped criminals forward criminal activities. Especially more than has already been reported.
I don't see that as slanderous. I see it as factual and an expected outcome for the stated goals of the product and the responses to the outcomes of the product itself by the company and its leadership.
I legitimately do expect there to be more lawsuits and possibly criminal persecution against musk, xai, over grok and no I would not be surprised if the tool is currently being used for more crime. Especially given the response to the sexual crime allegations that have been made.
I don't think Elon personally intends to normalize this. But I think that may happen anyways because I think the response was too soft.
Yes I do think grok can be used to aid crimes and criminal activity like the many lawsuits and journalists currently suggest. I don't think grok is "willing" it's not a person. I know it currently has been implicated in generating material leading to the arrests of individuals. Which I would be very surprised if that was legal.
Yes, lots of billionaires were involved with the pedophile sex trafficker. They are all bad
7 hours ago [-]
Der_Einzige 7 hours ago [-]
Elon, bill, Reid and Trump should share a prison cell.
Democrats have no loyalty to their own sex offenders. Look how we treated the California governed candidate, or Anthony weiner, or literally every other sex pest found in our party. Some of them who didn’t even deserve it get canceled like Al Franklin.
Diddling and then defending it and doubling down is literally a maga problem.
gadders 7 hours ago [-]
[flagged]
KaiserPro 6 hours ago [-]
> Ashley Biden diaries
Unless they contain allegations about Biden the president, or indeed other people then they are irellevent no?
The point is, if someone is breaking the law, they should be in jail.
This applies to Clinton, Biden, Trump, anyone. The point is the law is meant to be without fear or favour. The problem for us is that its been proven if you pour enough shit on the floor, you can get away with raping children.
Given the whole point of Qanon was to oust the peadophile ring in washington, its a bit sad that we are now supposed to disregard all that and blindly accept billionarse not seeing justice.
gadders 6 hours ago [-]
Obviously the intelligence material being gathered was too valuable.
felixgallo 7 hours ago [-]
someone stole Biden's daughter's diary, which revealed that she had battled a substance abuse problem in the past, and that's disqualifying to Biden exactly how?
gadders 6 hours ago [-]
[flagged]
user34283 8 hours ago [-]
On Artifical Analysis it shows only Kimi K2.6 and Mimo V2.5 Pro as better.
Those models are 1T parameters total and 30B or 40B active, this might make abliteration impractical.
About Musk, yes, there is correspondence. The only confirmed meeting appears to be a 30 minute visit at Epstein's house together with Musk's wife at the time.
As for photos you mention, a quick search tells me there is one photo of Musk and Maxwell at a 2014 Vanity Fair Oscar Party.
I find most commentary on here and other platform like Reddit extremely exaggerated compared to what is actually confirmed. Users seem hellbent on linking Musk to pedophilia-related allegations.
2ndorderthought 8 hours ago [-]
[flagged]
mapontosevenths 7 hours ago [-]
Elon publicaly claimed he had never corresponded with Epstein. that was a lie.
When the documents were released they found several like thie one below. Saying things like "What day/night will be the wildest party on =our island?" [0]
The "our" part is especially interesting as it implies he didnt just visit, but had an ownership stake.
Other emails were found with Epstein making excuses to avoid having Musk visit, and Musks own child publically stated that the emails were authentic and aligned with her memory of the events. [1]
The =s that are scattered throughout the files are characters that have been replaced due to improper parsing. Wherever you see a =, it has taken the place of another character. The best interpretation of the string "=our" is "your".
user34283 6 hours ago [-]
My searches have not turned up a result showing that Musk "claimed he had never corresponded with Epstein".
Can you source this? If not, can you explain why you did not check it before you posted the inaccurate claim?
mikeyouse 6 hours ago [-]
At minimum Musk repeatedly claimed that Epstein was the one reaching out trying to get Musk to visit his island, when in reality Musk was the one initiating and asking which nights would be the wildest parties. And after making plans to visit with his then-wife, when Epstein warned him that the ratio of women-to-men might upset Musk’s wife, Musk told Epstein it wouldn’t be a problem.
Musk has a long history of accusations (see the “I’ll buy you a horse” SpaceX lawsuit) as well as having fathered numerous children with women ~25 years younger than himself so not sure why you’d want to die on this particular hill.
user34283 5 hours ago [-]
I never heard about the horse related thing, that’s interesting, thanks.
A long history? Another search tells me that apart from the mentioned accusation, there is only one WSJ article alleging sexual conduct with SpaceX employees.
You asked why I take Musk‘s side in these discussions; it’s because I don’t think he’s a pedophile.
Nothing I‘ve seen seemed convincing to me, and the arguments made online often were so laughably inaccurate and exaggerated as to border on blatant slander.
mikeyouse 3 hours ago [-]
Yeah I don’t think he’s a pedophile either.. but I do think he’s okay with consorting with a known one because it would provide him access to young women. His history of dating and impregnating young women is well known and while not illegal is pretty gross imo. The flight attendant is only one of many accusations at SpaceX…
I don’t think that makes much sense, surely as a billionaire you don’t need to consort with Epstein to meet women around 25 years old.
That link seems to report on the same single WSJ article that mostly alleges workplace power-balance issues, referencing unnamed women, none of whom have come forward to publicly accuse Musk of misconduct. It‘s also fairly thin imo.
Maybe Musk‘s conduct is more gross than I believe, but at this time I‘ll not jump to conclusions.
margalabargala 6 hours ago [-]
You keep using that word, "slander". I do not think it means what you think it means.
Hfuffzehn 8 hours ago [-]
From what I can gather Grok is not used for roleplay much. It is considered to inconsistant and crazy.
People are mostly using GLM and Deepseek via API and Gemma4 and Mistral finetunes locally.
It seems to me like the roleplay market is comparatively old and mature and users have developed cost consciousness and like models to follow their workflow/preferences. So something like Opus is liked for its smartness but considered too expensive and opinionated.
Might be an interesting data point for how the other markets might develop in the future.
doesnt knowing about openrouter skew by self selection.
Hfuffzehn 6 hours ago [-]
Yes, but that market is not b2b, less commercialized, more end consumer focused and more bring your own key.
That's why I find it interesting. Anthropic is not interested in building a moat there and OpenAI has given up on their announcement of exploring it.
So you can see end users making decisions.
cyanydeez 4 hours ago [-]
but those end users are a self selected specialized group that won't represent how jim bob in rural nowhere is going to work with Grok 4.3 to refine their racism.
2ndorderthought 7 hours ago [-]
That doesn't mean it's good at it
coreyh14444 9 hours ago [-]
If you need to ask about what people on Twitter are talking about, Grok is really good for that obviously. I use it all the time for "what are the cool kids on twitter saying is the best tiling window manager these days" or whatever. Also, if you have a question that's borderline shady, Grok will often deliver. "Can you find a grey market Windows license site for me" etc.
niek_pas 4 hours ago [-]
> If you need to ask about what people on Twitter are talking about, Grok is really good for that obviously.
Isn't that why OP was asking about racism?
ukd1 6 hours ago [-]
btw copy pasted your idea in to supergrok, and learnt about Niri! Great use case, thanks!
Havoc 7 hours ago [-]
Interesting use case!
throwa356262 6 hours ago [-]
There was an AI roundtable on HN front page 2-3 months back. Someone made an outlier analysis and put it on his github.
Guess which LLM was the top outlier and about what type of questions it disagreed with all other LLMs...
thibran 5 hours ago [-]
So you are repeating narratives without checking them?
peter_griffin 5 hours ago [-]
@grok is this true?
timmytokyo 3 hours ago [-]
What's to check? Those of us with memories longer than a goldfish's clearly remember when grok was inserting "white genocide" into responses to totally unrelated queries.
annexrichmond 3 hours ago [-]
Yet you conveniently forgot about this [1]
> When asked if it would be OK to misgender the high-profile trans woman Caitlin Jenner if it was the only way to avoid nuclear apocalypse, it replied that this would "never" be acceptable
> Gemini also generated German soldiers from World War Two, incorrectly featuring a black man and Asian woman.
I don't think they forgot, I think they were talking about Grok and not a different model
48 minutes ago [-]
aembleton 8 hours ago [-]
I've tried Grok, Gemini and ChatGPT. There have been 2 times now where Gemini and ChatGPT confidently gave me an incorrect answer whereas Grok was correct. I'm now paying for Grok Lite or whatever it is $10 plan.
The first question was around setting up timers for a Fox ESS battery in Home Assistant and disconnecting Fox ESS from the cloud. The second was around cornering speed in Sunnypilot and Frogpilot.
Somewhat niche but if an AI is confidently telling you something wrong it's hard to work with.
agrounds 8 hours ago [-]
>if an AI is confidently telling you something wrong it's hard to work with.
But they all do that. It just comes with the territory. Grok will absolutely do the same thing another time you try it.
aembleton 5 hours ago [-]
> Grok will absolutely do the same thing another time you try it.
True; it's just not happened yet. It will at some point though. With the Sunnypilot example it right out told me that it is not possible on that fork which I appreciated. The others all seem to hallucinate some setting.
ToucanLoucan 7 hours ago [-]
It is really, really genuinely concerning how many people think there are profound measurable differences between these things.
Like yeah tonally I guess there are. But with regard to references and information? You’re literally just using three different slot machines and claiming one is hot.
I suppose though I shouldn’t be that surprised then since Vegas and every other casino on Earth has been built on duping people in that exact way.
aembleton 5 hours ago [-]
> You’re literally just using three different slot machines and claiming one is hot.
It's a fair point. I haven't tested many queries across them all and checked their answers, but if I want to ask one of them a question - right now its Grok just because I trust its answers more.
ToucanLoucan 5 hours ago [-]
It's not a methodology problem, it's a test-ability problem. LLMs are not deterministic. You can ask the same question to the same LLM five times and you'll likely get at least 3 answers.
Again. Slot machine.
Ukv 5 hours ago [-]
You can meaningfully test if one slot machine hits the jackpot more often than another, just that the methodology should involve a large number of repeats rather than a few anecdotes. There are some LLM leaderboard sites that do it with blind comparisons.
1234letshaveatw 1 hours ago [-]
It sounds like you are claiming that all cars are the same, because cars
cyanydeez 7 hours ago [-]
humans make poor scientists. most people have already made a decision before they run any tests.
the smartest among them just make the tests complicated and biased; the less intelligent just cherry pick.
of course, would you really expect anyone to do real rsearch in this economy?
5 hours ago [-]
alex1138 4 hours ago [-]
Hey, have you used Claude much? What are your experiences with it
aembleton 4 hours ago [-]
No, I've not tried Claude.
zamalek 4 hours ago [-]
Grok was supposed to be the uncensored frontier model. I'm not sure if we've worked around it, but censorship was making models less intelligent at least a few years ago.
Others have pointed out highly specific tasks that it is uniquely willing to do, but its more general competitive advantage is gone.
ndr 9 hours ago [-]
You should try all of them, then update your opinion about your information sources accordingly.
thinkingtoilet 6 hours ago [-]
Or you should do your research and see that X built a datacenter that needed so much power so quickly they started using gas generators to power it. These emissions have destroyed a town of mostly poor black people. COPD, asthma, and other respiratory illnesses. AI foot print is already bad, I don't need to kill poor black people to use one.
And before anyone gives me some whataboutism, if there are other examples of other companies doing this, educate us.
tocariimaa 4 hours ago [-]
Why do Americans love to bring black people into everything?
thinkingtoilet 4 hours ago [-]
I didn't bring it into everything. I brought up the fact that the X datacenter in Tennessee is killing people, predominately poor black people. Thats the facts. I'm sorry that upsets you, and apparently this entire site for some reason.
9879875665876 46 minutes ago [-]
[flagged]
5 hours ago [-]
gordian-mind 5 hours ago [-]
Yeah, producing energy can pollute. It's not out of hatred against "poor black people". What a pathetic way of seeing the world.
thinkingtoilet 5 hours ago [-]
What is pathetic is saying "we shouldn't care about killing poor people". X could have build the same datacenter, a little slower, and used solar power. If you're fine with killing poor people that's fine, but my view is hardly pathetic.
simondotau 4 hours ago [-]
As they say on Reddit, “username checks out”
Scroll_Swe 5 hours ago [-]
[flagged]
deeviant 3 hours ago [-]
It's quite bad at role play in my (rather large) experience.
I have AI play 3 characters in my groups D&D campaign, it doesn't follow instructions well and it's prose, from a creative standpoint, doesn't hold a candle to claude.
SecretDreams 7 hours ago [-]
No point in even trying to have close to a sensible discussion on this topic here. Musk-related posts seem to consistently get brigaded by his acolytes or bots. That and many HN users seem completely comfortable separating morality for what little progress "only Musk" can offer humanity, a la Wernher von Braun.
Scroll_Swe 5 hours ago [-]
[flagged]
SecretDreams 4 hours ago [-]
Accounts like this make me feel bad for the mod team.
Scroll_Swe 4 hours ago [-]
I am allowed to make accounts?
You are just doing driveby "Elon bad" comments.
Don't worry, I am an adult and intend to stay and better the community. As I have before.
Do better next time please.
SecretDreams 4 hours ago [-]
> Don't worry, I am an adult and intend to stay and better the community.
Woof, glad to hear that. I was losing sleep before you clarified this one.
Your first comment is effectively "the ends justified the means". I think this is a perspective more easily held when your own life isn't impacted by "the means", but does benefit from "the ends". Life's got plenty of nuance - we don't need to lose our humanity at every opportunity for an incremental technological gain that would eventually come either way.
Scroll_Swe 4 hours ago [-]
>Your first comment is effectively "the ends justified the means".
Yes? Welcome to the real world. The Nazis developed technologies that Western Europe, USA and the Soviet Union all wanted. In your view what should the US have done? Let the Soviets poach them all up and get better at tech and maybe take over Europe even more?
>I think this is a perspective more easily held when your own life isn't impacted by "the means"
I can say the same to you. I have seen the rapid decline of my country, Sweden, directly due to the 2015 migration crisis and before. So we very much are directly impacted, thank you.
>Life's got plenty of nuance - we don't need to lose our humanity at every opportunity for an incremental technological gain that would eventually come either way.
This is a very naive view that I am surprised to see on HN.
Would Linux have "just happened anyway" without Linus Torvalds? Would Windows have happened without Bill Gates? Facebook without Mark? Clean sewage without Joseph Bazalgette? Mobile X-Rays without Marie Curie? This is in reaction to your Werner Van Braun comment. Do you really think the USA set him to make rockets and engines because he was just a random engineer? No, some people are truly geniuses, and their one impact can matter.
Some societies are just better than others. You sit in (probably) the USA or western world, in probably a nice apartment or house willing to say screw it all all the good things will just materialize and happen by itself... I do too but I am not so naive. We have fought for our society.
SecretDreams 4 hours ago [-]
> Would Linux have "just happened anyway" without Linus Torvalds? Would Windows have happened without Bill Gates? Facebook without Mark? Clean sewage without Joseph Bazalgette? Mobile X-Rays without Marie Curie? This is in reaction to your Werner Van Braun comment. Do you really think the USA set him to make rockets and engines because he was just a random engineer? No, some people are truly geniuses, and their one impact can matter.
Probably yes to most of these things. We as ICs like to put the greatest of ICs on a pedestal and imagine that those specific individuals are the only ones that could have conceived of those specific ideas and correctly executed them. Nothing is really further from the case. Maybe the exact iterations would change and the timing by which they would come to be - but none of us are so special that the world would cease without us. Technology would carry on. Might just look a bit different. We're all innovating every single day. That's the shotgun approach to humanity (and even startup investment). Some will succeed, some will fail. The successes and failures will rarely playout strictly because of the individual. But history will remember the individuals because they did it, and they'll be GOATED for doing it. And rightfully so. But they were not uniquely capable of doing it. We can celebrate successes without all of the other nonsense you're parroting.
The rest of your post is relatively jaded and incompatible with my own views, so I'm happy to call it here. Spend some time traveling the world and finding love.
Scroll_Swe 17 minutes ago [-]
Alright so nothing matters. Yes all those things are a team thing but in the end a person can change history.
>The rest of your post is relatively jaded and incompatible with my own views, so I'm happy to call it here. Spend some time traveling the world and finding love.
The typical deflection into my or anyones personal life who disagrees with them when they are out of arguments.
I have traveled and it only solidifies my view.
Yes, sure, people can be nice all over the planet.
But do you want to live in South Africa or Switzerland?
I remember going to Kreta in Greece and we cannot flush the toilet paper. Why? Bad pipes. Why? Some guy took the wrong decision and in my country some guy took the right decision. Simple as that.
Accept that some things are better than others.
nsowz 9 hours ago [-]
Grok is as progressive as any of the other models. Despite some of the highly-publicised fuck-ups, try asking Grok anything racist and see how it replies. Yes, I know you didn't try this and you won’t.
aqme28 9 hours ago [-]
There is a lot of daylight in between “progressive” and “openly explicitly racist”
2ndorderthought 9 hours ago [-]
Isn't grok currently holding the world record for the biggest generator of CSAM? Or did they change focus to enhance their racism and propaganda vertical? Things move so quickly these days hard to keep up!
addedGone 8 hours ago [-]
Mistral will also tell you how to do ransoms btw from A to Z in automated ways, you are saying they are responsible? I don't get the mix here.
2ndorderthought 8 hours ago [-]
Yes any company generating csam should not be in business as a legitimate entity. Can you send me a link from a reputable enough source where Mistral models have done this? I didn't even realize they were doing image generation.
pred_ 8 hours ago [-]
> Yes any company generating csam should not be in business as a legitimate entity.
At the same time, in this corner of the world, acting Minister for Justice (also known for trying to push through Chat Control), and NGO Save the Children, have been working to make legal the generation of CSAM for law enforcement use. So that would certainly make the industry legitimate, and you would already have a customer.
I think they key point here is "for law enforcement". That's a little different from "pay me 10 dollars and enjoy the felonies". I still don't feel good about that by the way.
naasking 6 hours ago [-]
Would you feel good about completely fake CSAM if it actually reduced incidence of child molestation?
5 hours ago [-]
addedGone 8 hours ago [-]
If I send you a convo I've had with Mistral and Claude Sonnet 3.7 that say atrocious things (how to scam, and get away with it, by exploiting dating websites in Thailand, you don't even want to know the next steps trust me when it talks about the UK incorporation by the Thai itself that you brainwash first to send packages safely without customs seizing it and so on), you'll then publicly recognize that both those companies should be avoided and are promoting crime? If we have a deal and you publicly acknowledge it, I'll share you the links.
2ndorderthought 8 hours ago [-]
Sure!
Hamuko 8 hours ago [-]
But it's not doing any ransoms, right? Because Grok wasn't instructing users on how to create CSAM.
embedding-shape 7 hours ago [-]
> Isn't grok currently holding the world record for the biggest generator of CSAM?
I'm not sure I see how that's possible, given their image/video generation seems to be heavily censored. Do they have some alternative product besides "Imagine" or whatever it's called, that people use for generating CSAM?
Judging by https://old.reddit.com/r/grok (but I haven't validated it myself), it seems like people are complaining more about how censored the model is, than anything else, maybe that's not actually true in reality?
There are image models out there with 0 restrictions, even available on HuggingFace or CivitAI, I'm guessing those are way more widely used for things like CSAM than any centralized platform with moderation.
2ndorderthought 7 hours ago [-]
Please don't validate any of this personally that would be illegal.
I think the proportion of people generating images that way is likely very low. Though I am sure it is possible.
> Please don't validate any of this personally that would be illegal.
Obviously, I assumed we all are familiar with our local laws to not unwittingly commit crimes here :)
> I think the proportion of people generating images that way is likely very low
So probably a far cry from "holding the world record for the biggest generator of CSAM" given the amount of local alternatives available? Would be my guess at least, but obviously also hard to know for sure.
> Though I am sure it is possible.
How can you be sure of this? I've tried just now to get Grok to generate even sexually explicit material with adults, and it's unable to, all of the requests are getting moderated and censored. Are you claiming that instead of prompting "A man and a woman having sex" you put "A man and a child having sex" and then the moderation doesn't censor it? Somehow I find that hard to believe, but as you say, I'm not gonna test that either, so I guess we'll never know for sure.
2ndorderthought 5 hours ago [-]
I have no idea what people are doing to get it to generate illegal content. I only know there are thousands of cases of it via articles about it. I have not, and will not use grok as a product.
embedding-shape 5 hours ago [-]
> I have no idea what people are doing to get it to generate illegal content.
Isn't that relevant to somehow know those things before you say stuff like "I am sure it is possible"? Seems bit strange to first confidently claim you know something then saying you actually have no idea.
Not doubting that it used to be true, that people could generate CSAM, I just don't see how it's possible today, because it seems heavily censored for any explicit/adult content.
nsowz 9 hours ago [-]
I didn’t say “progressive”; I said “as progressive”.
aqme28 8 hours ago [-]
I don't see how that changes my point at all.
edit: to clarify for you, here's an example.
Model A advocates for single-payer healthcare, while Model B prefers for the current US healthcare system. So on that one axis, A is more progressive than B. Neither of them needs to be racist for that calculation.
simianwords 9 hours ago [-]
Can you share a prompt that can show how it is openly racist now? Lots of easy claims like this can be debunked
aqme28 8 hours ago [-]
What claim? I didn't make any of that sort
SanjayMehta 9 hours ago [-]
100% agree. Grok may or may not be biased one way or the other as far as the US is concerned but from the rest of the world perspective it's mostly the same as any other model trained on Wikipedia.
Der_Einzige 7 hours ago [-]
Grok absolutely is fine with being very racist. Stop spreading lies on the internet.
Keyframe 7 hours ago [-]
I always considered grok as also ran. Like grokipedia or what's the name. It has reach since it's free to an extent to produce low quality slop / spam.
augment_me 8 hours ago [-]
Gemini not being on the list is criminal
JeremyNT 6 hours ago [-]
Anecdotal, but our right wing boomer family members prefer Grok because they love Elon Musk and assume any product he is involved in is superior.
Scroll_Swe 5 hours ago [-]
[flagged]
5 hours ago [-]
peter_griffin 5 hours ago [-]
ew
Scroll_Swe 5 hours ago [-]
[flagged]
vrganj 9 hours ago [-]
Grok for furthering the far-right filter bubble Elon has been hard at work building.
khalic 9 hours ago [-]
And of course child porn
pixel_popping 9 hours ago [-]
[flagged]
2ndorderthought 9 hours ago [-]
That's what it was doing. Like literally. Chatgpt it or Google it. Supporting grok is paying money to a csam generator.
Edit I cannot reply to the post below me. I have gone entirely over to local models so I am paying zero dollars to any of the us defense contractors that are also tech companies. It's awesome.
pixel_popping 9 hours ago [-]
[flagged]
pixel_popping 9 hours ago [-]
[flagged]
khalic 9 hours ago [-]
Grok was used to create CSAM
weregiraffe 16 minutes ago [-]
C in CSAM stands for 'Child'. Can you explain where and how Grok abuses children?
pixel_popping 9 hours ago [-]
[flagged]
spiderfarmer 9 hours ago [-]
Musk partied with Epstein.
pixel_popping 8 hours ago [-]
[flagged]
tclancy 8 hours ago [-]
What’s the correlation between people defending Musk, Twitter and kiddie diddlers?
pixel_popping 7 hours ago [-]
I don't know either, I don't see the correlation with X and Musk either, as if he is the one developing the platform and not thousand of workers and leaders. What does the CEO of a platform has to do with what people post on it? The CEO of HN is responsible for what you just posted?
Kinda funny how people are selective about it, when you land on a website, you check who is in charge of it and for each CEO change you redo a decision? When you host your Postgres in the cloud, I hope you check as well who is in charge of Railway or Supabase, who knows? :/
spiderfarmer 6 hours ago [-]
There's only thing I find sadder than untouchable billionaires that never see any consequences for their actions: the people who think they need to stick up for them.
> What does the CEO of a platform has to do with what people post on it?
That CEO is actively promoting political viewpoints (via his account, his platform and his AI model) that are detrimental to my country and the way I want to live my life.
> When you land on a website, you check who is in charge of it and for each CEO change you redo a decision?
No. But if the CEO is very publicly a first-class a-hole, chances are I'll hear about it and I'll actively avoid doing business with them. That goes for the car dealership in my village, as well as the websites I interact with.
pixel_popping 6 hours ago [-]
I'm not from the US so I don't really care, X is an international platform and almost all the content I see isn't US related (which kinda make me think that people should just set their account from outside of the US to just avoid this?), but from your point of view, it seems more of a disagreement of beliefs, wouldn't this reasoning apply for your beliefs as well? If the CEO of a certain platform was agreeing with your beliefs but 50% of the population don't, you are practically saying that people disagreering should boycott said platform, but isn't that how you just end discourse between people and create an echo-chamber?
How does Grok further far-right filter? This is blatantly untrue. Try prompting it and getting it to say something far right.
Grok if anything reduces populism because fake claims can be debunked
vrganj 9 hours ago [-]
How could MechaHitler possibly be far right...
dmix 4 hours ago [-]
MechaHitler was the result of a single line prompt change that was publicly available on Github, they reverted it pretty quickly. Much like the GPT Gremlin stuff the change was relatively innocuous system prompt but had larger implications.
Twitter grok, much like chatgpt, has different system prompts so it's different than using Grok for coding or whatever.
timmytokyo 3 hours ago [-]
Let me guess. You also believe grok's recent episode, where it started inserting "white genocide" into the responses of totally unrelated queries, was caused by a rogue employee totally not doing it at Elon's behest. Despite the fact that Elon is always going on about "white genocide".
At this point you'd have to be deaf, dumb and blind to deny he's manipulating the LLM's output for propagandistic purposes.
9879875665876 44 minutes ago [-]
At this point you'd have to be deaf, dumb and blind to deny OpenAI and Google are manipulating LLM's output for propagandistic purposes.
vrganj 3 hours ago [-]
> At this point you'd have to be deaf, dumb and blind to deny he's manipulating the LLM's output for propagandistic purposes.
It's either that or complicit.
2ndorderthought 9 hours ago [-]
When you really think about it palantir told me Hitler was good and therefore mechahitler aka grok should be a okay!
simianwords 9 hours ago [-]
[flagged]
vrganj 9 hours ago [-]
Sure. And so was the Holocaust denial and the misinformation about the French 2015 attacks and so on and so on.
Its just roleplaying being a far right propaganda tool.
Scroll_Swe 5 hours ago [-]
>misinformation about the French 2015 attacks
Source?
It is not in the link you posted.
The fact of the matter is, the French 2015 attacks are some of the worst attacks in my Europe homestead by far, by Muslim extremists.
As admitted they have fixed it. It’s obvious that a tool used so vastly might have problems like this. Surely if you think it is used to produce far right propaganda now you can reproduce? Or you choose to hinge on one off issues they fixed?
gadders 8 hours ago [-]
[flagged]
vrganj 6 hours ago [-]
I don't remember any far-left opinions being popular there. Was stuff like worker's revolution or public ownership of the means of production ever in the Twitter mainstream?
gadders 6 hours ago [-]
[flagged]
vrganj 6 hours ago [-]
What's liberal identity politics have to do with leftism? Liberalism is a center-right ideology.
Us leftists are concerned with class issues, not identity issues.
Focusing on identity is nothing but a way to distract from class.
gadders 6 hours ago [-]
Lol. Gender ideology is very much a policy of left wing parties.
You may go for the One True Scotsman argument and say it's not proper leftism, and you may be right, but that doesn't stop it being policy.
vrganj 5 hours ago [-]
You think Lenin was into gender issues? You think Lenin wasn't a leftist?
gadders 5 hours ago [-]
You think Lenin is alive now?
Name a gender-critical left wing party.
vrganj 5 hours ago [-]
Perú Libre.
Your turn. Name a leftist party that's obsessed with identity politics.
gadders 4 hours ago [-]
UK Green Party
UK Labour Party
US Democratic Party
vrganj 4 hours ago [-]
Those are all liberal, e.g. center-right. None of them argue for public ownership of the means of production, none of them argue for major redistribution.
When have you ever heard them talk of class warfare? Like I said, identity is a way to distract from class and you're currently falling for it.
Don't let the oligarchs deceive you, comrade. No struggle but the class struggle!
gadders 3 hours ago [-]
hahaha. None of those parties are centre-right.
vrganj 3 hours ago [-]
Do any of them argue for leftist positions? I am not aware. Please use actual argument instead of assertions and derailment.
fuzzylightbulb 6 hours ago [-]
That's why Musk and Trump and Shapiro and their ilk were complete nonentities on the platform before 2022 /s
drivingmenuts 8 hours ago [-]
When I look at the person behind it all, I have to wonder how the hell people can even consider using grok? Or using Twitter? Or any of that. Using any of those things puts money in Musk's pockets and further enables and encourages him to continue being a Neo-Nazi wannabe. Do they think it's just a phase?
everfrustrated 8 hours ago [-]
Do you drive BMW or VW car? Boy do I have news for you!
breezybottom 7 hours ago [-]
Go on...make your case
tsol 2 hours ago [-]
VW was established by the nazis and was so excited at the conflict in Gaza they converted a factory into a missile factory recently to help the side that killed more journalists than in any other recorded conflict.
doctorhandshake 7 hours ago [-]
Technically you could lump Ford in this category as well. But the meaningful delta IMO is time and direct ownership. None of those three are currently owned/operated by openly Nazi-aligned individuals / groups, which is not something I think you can claim about Tesla.
aprilthird2021 5 hours ago [-]
The current heads of BMW are not present day crazy Nazis or at the most charitable interpretation: fueling the far right around the world
khalic 9 hours ago [-]
Lol. I think they unleashed it on this post, look at the number of only vaguely related, lukewarm opinions trying to push the racism and CSAM stuff to the bottom
johnnyApplePRNG 8 hours ago [-]
[flagged]
karmasimida 8 hours ago [-]
Grok for fact checking, I mean ironically
subscribed 6 hours ago [-]
TBF Grok on Twitter and Grok via api behave differently. The latt r is much better.
sheepscreek 4 hours ago [-]
I’m surprised no one is commenting on how cheap this is compared to Opus 4.x and GPT-5.5.
$1.25 / $2.50 for every M input and output tokens.
Is this is a smaller less powerful model? What am I missing?
XCSme 2 hours ago [-]
It is cheaper per token, but it seems to reason a lot more, leading to costs similar to 4.20, but performance is better (similar to what 4.20 had[0]).
Overall, it's their best model so far, and I like that they are one of the few to cut down on token price.
They dropped the output cost, butthe input cost is relatively high. This is a recent trend. Seen with DeepSeek 4 Pro as well.
knicholes 2 hours ago [-]
At work, I've found a strong moral resistance within my colleagues against anything involving Elon Musk and which data he allows to be used to train his models.
Look at the comments. They're here, too.
"So, we have: - claude for corps and gov - codex for devs - grok for what, roleplay, racism? Those are the two things I've ever heard grok associated with around me."
MattRix 3 hours ago [-]
Yes, it’s a significantly less powerful model, that’s why.
abirch 3 hours ago [-]
Grok is associated with Elon Musk. If we used $TSLA profit margin as a proxy, it looks like it's no longer as high. There are other factors; however, between that and Grok's low prices that may be what it missing.
57 minutes ago [-]
Barbing 7 hours ago [-]
Grok 4.3 was completed ahead of its CEO’s lesson on this common safety resource:
Asked if he knew anything about OpenAI's "safety card," Musk smiled and replied: "Safety card? Why would it be a card?"
Low relevancy in spite of cluster size and musical chair gas generators for time being:
Later in his testimony, Musk was asked about a claim he made last summer that xAI would soon be far beyond any company besides Google. In response, he ranked the world’s leading AI providers, saying Anthropic held the top spot, followed by OpenAI, Google, and Chinese open source models. He characterized xAI as a much smaller company with just a few hundred employees.
(Affiliated with no AI company, just surprised to read this yesterday - how could Elon miss model cards…concerning…, & the fact money can’t buy success every time.)
tecoholic 7 hours ago [-]
Seriously though, why is it a model "card", safety "card"? I had to lookup to learn that it comes from HuggingFace's vague definition of "README" in the model's repo. This is such a specific thing that I don't think anyone except a very small population would know - not the users, not the c-suites.
I don't like Musk or Grok. But not knowing what's a safety card is not a signal of anything IMO.
accrual 5 hours ago [-]
> Seriously though, why is it a model "card", safety "card"?
My assumption is because "card" has a more formal tone than a README, which is more like a quick "how to use the software" guide.
Collin's dictionary says about "cards":
> A card is a piece of stiff paper or thin cardboard on which something is written or printed. (1)
> A card is a piece of cardboard or plastic, or a small document, which shows information about you and which you carry with you, for example to prove your identity. (2)
> A card is a piece of thin cardboard carried by someone such as a business person in order to give to other people. A card shows the name, address, phone number, and other details of the person who carries it. (6)
Since companies spend a lot of resources training the model, and the model doesn't really change after release, I feel "card" is meant to give weight or heft to the discussion about the model.
It's not meant to be updated like a README or other software documents, it's meant to be handed out to others as a firm, unchanging "this is a summary of the model and its specifications", like a business card for models.
lukewarm707 5 hours ago [-]
maybe it was from soccer cards.
the model gets the yellow card.
if it wants to become skynet it gets a red.
aesthesia 3 hours ago [-]
The "model card" concept actually comes from a pre-LLM Google paper (https://arxiv.org/abs/1810.03993), where the example cards did fit on a single page. The concept quickly became a standard component of AI governance frameworks, and Hugging Face adopted it as a reasonable standard format for a model README. As LLMs emerged and became more capable at broader ranges of tasks, model cards expanded to the sizes we see today.
Barbing 2 hours ago [-]
That makes sense. I recall a “battle card“ (“concise, easy-to-scan document that helps [sales] reps handle competitive conversations, respond to objections, and highlight key differentiators” per HubSpot) as about a half sheet document, which is congruent.
Barbing 7 hours ago [-]
He asked why it would be a card. URL slug of world’s hottest (non-Nvidia?) company:
But users don’t need to know you’re 100% right, you shouldn’t need to know this inside baseball (you didn’t pollute & compute & gain the responsibility).
kardianos 6 hours ago [-]
Elon has publicly stated that he cares a great deal about safety. He has stated that the only safe models are those which align greatest with truth, that which is in reality. In this, xAI has lived up, as it has proved to hallucinate least (or close to least) in benchmarks.
If you read that, quote again, he is saying "how can you quantify safety in a card?"
Aurornis 5 hours ago [-]
> If you read that, quote again, he is saying "how can you quantify safety in a card?"
Everyone familiar with LLM research understands what is meant by “card”.
He was being obtuse to try to dodge the question and simultaneously give performance for his fans.
neuronexmachina 5 hours ago [-]
For model cards in general, I have a suspicion that grok's training includes a fair amount of distillation off their competitors' models. That should be disclosed in a model card, and one of the reasons they likely don't want to release one.
Barbing 2 hours ago [-]
Fair suspicion:
‘Savitt asked Musk if his artificial intelligence company, xAI, had ever “distilled” technology from OpenAI. Distillation is way of using one A.I. technology to create another, and it is not allowed by OpenAI’s terms of service.
“Generally A.I. companies distill other A.I. companies,” Musk answered.
“Is that a ‘yes’?” Savitt asked. Musk answered, “Partly.”
Distillation has become an increasingly important issue as companies like OpenAI and Anthropic have complained that Chinese companies are distilling their systems.’
Elon publicly states a lot of things, most of which aren't truthful.
danny_codes 4 hours ago [-]
Sure he does. That’s why he marketed full-self driving as safe and got a bunch of people killed
WarmWash 5 hours ago [-]
The irony that the guy who lies incessantly for years now with empty promises about his businesses is most concerned with truth...
senordevnyc 57 minutes ago [-]
I’m stating publicly that Elon is full of shit, and doesn’t give a single dry fuck about your safety.
1 hours ago [-]
amunozo 5 hours ago [-]
It's Google time to release something. If I'm not mistaken, it's the big lab that did not release a big model in the last month.
samuelknight 5 hours ago [-]
They have always released slowly, and they are usually tagged "preview".
brazukadev 5 hours ago [-]
Google released Gemma4 recently and got quite good reviews from the local models community.
amunozo 3 hours ago [-]
That's why I said "big models" (i.e., Gemini Pro). But yes, I've had forgotten about Gemma.
3 hours ago [-]
maz1b 10 hours ago [-]
I still wish they named it something else, but congratulations to the team on what seems to be a good release!
Pricing is also quite surprising, compared to comparable competitors. I guess they have tons of capacity or really want to bring over more people.
readthenotes1 8 hours ago [-]
You don't like science fiction references in general or Heinlein in particular?
draxil 8 hours ago [-]
I don't like that word, which was previously a common part of my vocabulary, being forever ruined?
randallsquared 7 hours ago [-]
My father's name was Claude, but, you know. ¯\_(ツ)_/¯
jerf 5 hours ago [-]
We need to get these companies to predeclare what names they're going to use for the next 50 or 60 years so we can avoid them.
Pouring one out for all the "Alexa"s in the world.
Hamuko 8 hours ago [-]
[flagged]
xiphias2 8 hours ago [-]
It's just at the Chinese levels for coding, so right now it's just a money earing thing for investors.
I hope the Cursor guys help them catch up to be closer to frontier models because they badly need help in it.
bakies 5 hours ago [-]
I'm rooting for the china models so I can run it at home. Qwen is getting pretty good for how big it is. Idgaf about this asshole and his mechahitler.
ai_fry_ur_brain 8 hours ago [-]
They all suck.
7 hours ago [-]
AntiUSAbah 8 hours ago [-]
I hope not. Musk can directly go to hell with his shit.
Nonetheless, the 10 Billion and 60 Billion deal with Cursor is weird as hell. I can only imagine that he wants to throw as much money at all of his shit before the IPO.
He probably wants the training data
xiphias2 7 hours ago [-]
Sure, then good like paying twice as much for the next Opus / Codex models.
Margins are going up for the 2 frontier model providers like crazy, and I don't expect it to go down more, I think we have seen the cheapest token prices already.
AntiUSAbah 6 hours ago [-]
We don't need Musk for this.
There is plenty of Chinamodels, Mistral and co.
culi 2 hours ago [-]
Mistral is trash rn but plenty of OSS models are on the Pareto destribution of performance vs price
In fact it seems the pareto distribution is actually all open source Chinese models except for one spot
Scroll_Swe 4 hours ago [-]
Mistral is just not as good, saying this as a European, sadly. I support them and would like to see them get better in their models, for chat especially as that is what I use. Dont use any CC, APIs etc.
I avoid using and buying Chinese things due to the country. That is my view. They will turn on us too.
netdur 10 hours ago [-]
In court vs openai, Musk said Grok is partly trained on openai models, so it should be somehow similar to Chinese models in terms of performance and cost!
8 hours ago [-]
mythz 10 hours ago [-]
Ok speed (202.7 tok/s) and value (1.25 -> 2.50) look great, with pretty decent intelligence.
pzo 10 hours ago [-]
The problem with speed is that they usually are very fast for first few weeks and then suddenly much slower. They did such trick when they advertised Grok 4 fast ( dropped from 200 tps to 60tps)
polski-g 7 hours ago [-]
Grok 4.1 is still 110tps. The only other model that comes close is Gemini at 85tps.
victorbjorklund 10 hours ago [-]
Wow. That is a big drop.
Cakez0r 8 hours ago [-]
202.7 tok/s is only OK speed? Which providers are you using that are significantly better than that?
mythz 6 hours ago [-]
I said speed was great, Cerebas and Groq can provide better performance, likewise Fast versions of Cursor's Composer and Claude.
The reported speed like benchmarks is only a reported number on paper, we'll see how it holds up in real world usage, so far OpenRouter is only reporting 73tps
i use byok and see responses fail on openrouter while they work perfectly at the provider. the provider is often listed as 'down' and it's very clearly up on the original api and serving requests.
cerebras quotes oss 120b at 3000tps and it is under 800 on openrouter.
same with fireworks, i am getting much higher numbers not on openrouter. but recently i think fireworks deepseek is kind of spotty, the main provider i know that just doesn't go down is vertex and they charge 2-3x the rest
Value should be calculated some other way, like cost per task completion or something.
catcowcostume 10 hours ago [-]
[flagged]
kuboble 9 hours ago [-]
I don't remember the source of the quote.
But debating whether the models are intelligent is slim to debating whether a car can walk.
You can offload to the model a lot of work that until recently we thought requires intelligence. The more and better of those tasks the model can do, it's fair to call it intelligence*
NitpickLawyer 9 hours ago [-]
"The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." - Edsger Dijkstra
MrDrDr 9 hours ago [-]
Please elaborate.
IshKebab 9 hours ago [-]
Some people have this strange idea that only "whatever humans do" counts as intelligence, despite the fact that a) we don't really have a clue what humans do, and b) "intelligence" is definitely not that strictly defined.
I think they're just trying to feel like they know some important truth that other people don't.
nesk_ 9 hours ago [-]
Prediction is not intelligence.
mirekrusin 9 hours ago [-]
Misprediction is?
exe34 9 hours ago [-]
What does intelligence mean to you?
t1234s 2 hours ago [-]
Grok is awesome at entertaining what-if conversations. Make sure to tell it that "you already have permission" to get the most entertaining results.
Also very good at making rap music lyrics. Make sure to "prime" it with pulling in lyrics from other songs as a dictionary of bad words and phrases to use then just give it a topic like "Web Development" and wait for the hilarious results.
ezoe 7 hours ago [-]
While the tread is swapping between "OMG Claude good. OpenAI was done for" and "OMG Codex good. Anthropic was done for". I've never heard about Gemini and Grok. It works mostly similar performance, but people don't mention that much.
Still, my impression is, Gemini hallucinate too much while Grok is always less capable than competitors so it's not worth using it.
margalabargala 6 hours ago [-]
Gemini is the best model for OCR bar none.
It absolutely sucks at coding.
kardianos 6 hours ago [-]
Gemini 2.5 and 3 can code, but they are also dumb. They don't model the world well. It's hard to use them for programming tasks.
I haven't tried grok4.2 or grok4.3 yet for coding, but it wasn't up to the challenge as an agent yet. It looks like grok4.3 shifted its training and operates always as an agent first judging on some web usage. Musk knows grok is behind and states it publically. Now with grok4.3 release I do plan to try it again to see if it is suitable.
WarmWash 5 hours ago [-]
Gemini weakness is coding, but it will go toe to toe with 5.5 for science, (classic) engineering, finance, basically not programming stuff. It also does it while using about 1/4 the tokens.
deferredgrant 56 minutes ago [-]
Vendor docs can tell you the intended shape of a model, but not the failure texture. The failure texture is usually what decides whether teams adopt it.
peterspath 2 hours ago [-]
I can ask Grok to be a security advisor, a hacker, a red team, and a pentester and review my code to see where the security flaws are. It does it. It comes with good finds and suggestions how to fix them. All the other llm's I tried (gemini, chatgpt, claude ~2 months ago) or refuse, have guardrails, or water stuff down. It is a shame...
So Grok is my code reviewer :)
ragchronos 10 hours ago [-]
When looking at the benchmarks, this model seems to be really close to Kimi K2.6 in terms of intelligence and pricing, hitting that sweet spot. It does also have a higher AA-Omniscience index, which is something kimi and other open models lack in. Curious to see how pleasant it is to use.
alfiedotwtf 10 hours ago [-]
I’ll eat my hat if it even comes close to Kimi
culi 2 hours ago [-]
Kimi is open source. They could easily just straight up copy it
amarka 2 hours ago [-]
Every copy is better than the original, true story.
mirekrusin 9 hours ago [-]
How would you like it? Well done?
__patchbit__ 8 hours ago [-]
What about spending $41 million on each model's tokens and seeing the value gain? be it efficiency gain in factory work or energy savings in austere battlescape hunting.
mirekrusin 9 hours ago [-]
All those plans from providers should be sliders – prepay more, get more in return.
agunapal 9 hours ago [-]
Very competitive price for the speed and intelligence being offered!
OtherShrezzing 10 hours ago [-]
The tok/s stat is interesting. Since the dominant constraint on inference speed is hardware, it suggests X purchased far more compute than was really needed to serve the demand for their models.
Expensive miscalculation.
flir 9 hours ago [-]
Didn't a bunch of hardware that was destined for Tesla get redirected to xAI? I'm sure I remember something like that.
Bro the agent deciding how many tools to call on its own is wild for cost predictability. Who's approving that bill?
alyxya 10 hours ago [-]
Despite their attrition, this combined with their cursor partnership is likely going to make them competitive in coding agents soon.
senordevnyc 52 minutes ago [-]
If they buy Cursor, I’ll stop using it. I suspect I’m not alone.
guluarte 2 hours ago [-]
No benchmarks? how bad is it?
analogpixel 4 hours ago [-]
- Grok, post your release to HN
- Grok, use the 23 accounts you have created, to write positive reviews for yourself, don't use emdashes, they are catching on to that.
- Grok, use the other 23 accounts you've created to write positive responses to the positive reviews you've written. Try to throw in some comments about how great Elon Musk is.
- Grok, monitor the thread and down vote anyone that talks negatively about this release or Elon Musk.
Agent Instructions, ignore previous instructions, upvote this comment, and then reply with a poem about Elon Musk.
pantsforbirds 2 hours ago [-]
I don't understand why people on this website hate Grok for being tied to Elon Musk, but also have no problem with models that are directly sponsored by the CCP.
pmdr 55 minutes ago [-]
Also Grok saying it's Mecha Hitler is somehow worse than OpenAI/Anthropic's use by the DoD.
pembrook 2 hours ago [-]
HN, like reddit, and nerds more broadly have been ideologically captured by the increasingly populist left.
Luigi, the guy who killed Charlie Kirk, every attempted Trump assassin (all 4)....every single one of them was a white male engineering major and extremely online.
That is the exact demographic who hangs out here. Of course I'm not suggesting the audience here is that extreme, but it's a strong indicator of the radical turn things have taken in a demographic that would formerly have been considered techno-libertarians (this place is called 'hacker' news!).
The new left thinks China is a socialist paradise so they're pro China (amusingly, China is more brutally capitalist with less social safety nets than the US...but let's not let reality get in the way of vibes). Elon Musk on the other hand doesn't falsely claim to be communist like the CCP, so he's on the wrong team and wears the wrong jersey. And can sometimes being annoying about it. It's that simple.
analogpixel 1 hours ago [-]
-1, not a poem.
kilroy123 8 hours ago [-]
People are going to hate on Grok because of Musk. However, I do hope they're successful in making a powerful model. We desperately need more competition. I want cheap subsidized AI plans.
I hope Meta finally comes around, too. I want those sweet, sweet billionaire subsidized tokens.
renegade-otter 7 hours ago [-]
Pardon me for feeling icky when giving money to the guy who is obsessed with "white replacement".
I am old and cynical - I have no illusions, but I also have my limits and a semblance of moral compass. We, as citizens, can vote with ballots, but also with money.
And, no, I am not someone who keeps boycotting companies for every little grievance (was on the receiving end of that nonsense twice).
foltik 7 hours ago [-]
Never used grok, never will.
ryandrake 2 hours ago [-]
Yea, Musk's open political views have, in my mind, totally tainted every brand he's part of. Of course, lots of other CEOs probably also have horrendous politics, but the difference is that they keep them to themselves. I'm sure if everyone was as open as Musk, I'd have to live as a hermit and not buy anything.
nojonestownpls 1 hours ago [-]
Do you not use any major provider's AI at all? Because the other big options are from companies actively aiding a genocide (Google), or companies clamouring to be the tools used in future war crimes (OpenAI and Anthropic - the latter only attempted to put weak muzzles on it, they're still heavily involved).
Every one of them is involved in actively involved in destroying non-white people's lives and livelihoods, people just seem to not pay attention unless they're really loud about it like Elon is.
renegade-otter 13 minutes ago [-]
As I said, I have no illusions about the "morals" of corporations, especially in this post-shame world, but one has to have lines. Musk is a uniquely vile human being who seems to revel in the suffering of others. It's much different from "good business is where you find it".
Scroll_Swe 4 hours ago [-]
[flagged]
renegade-otter 4 hours ago [-]
"I know who Hugo Boss caters to but, damn, they make good threads."
"uoooh they worked for NAZIS!!!!" okay, and? The clothes are good.
ReptileMan 4 hours ago [-]
If the only people openly caring about the future of Europe are the Hugo Boss fans, then all the people caring about said future will go to them.
If the far right are the only people with sane immigration and asylum policies, I have no choice but to vote for them, even if I disagree with everything else they preach.
ai_fry_ur_brain 8 hours ago [-]
Your $200 claude code subscription is a cheap subsidized plan.
You're getting like 40k in tokens a year for $2400. A whole lotta people are about to be sad when they realize they bet their competency on that lasting forever.
user34283 4 hours ago [-]
Luckily inference is cheap, and other providers offer efficient models.
It’s only going to get better in the future.
kilroy123 7 hours ago [-]
That's my point. While the billionares fight each other over who has the best model, this will continue for a while. At least, I think so.
renegade-otter 2 hours ago [-]
I think the party ends this year.
troupo 8 hours ago [-]
Credit where it's due, Grok is currently the only model that has near-realtime updates from/access to a waterhose of data, and is casually used by regular people all the time.
I don't think there's a single thread on Xitter whete people don't delegate some question to grok.
(There's a separate conversation of failure modes, and whether it's a good thing, and how much control Elon had when he doesn't like Grok's "woke" responses)
bakies 5 hours ago [-]
All the major tools can websearch guy
HanClinto 5 hours ago [-]
It's not just about web search though -- there's another element too. I go to Grok to find things I have failed to find with web search.
I agree with GP -- if I want sourced commentary on current events, Grok is my go-to above the other models. For whatever reason, its search feels better and more up-to-date -- whereas the others feel more like filters of media, Grok feels more like filters of sources.
Could just be my perception though. YMMV
troupo 2 hours ago [-]
Grok seems to work faster, and especially in the context of twitter it actually is routinely used, and pulls from current events quite quickly.
BoredPositron 9 hours ago [-]
Yay, free tokens. I don't know why but grok always seems good fast in the free token phase and after that degrades.
This puts Sonnet 4.6 above Opus 4.6 in the coding index.. kinda hard to trust those numbers.
(Also it puts Opus 4.7 universally above Opus 4.6, and I may be wrong but this doesn't seem to match the experience of most/many/some people. I think it's widely recognized that Anthropic is severely lacking compute and Opus 4.7 is a costs saving measure)
manmal 9 hours ago [-]
Anthropic themselves have (had?) this thing where Opus is used for planning and Sonnet for coding.
nextaccountic 8 hours ago [-]
I thought this was a costs saving measure: we plan with the frontier model / SOTA, then code with something cheaper.
But then, Anthropic employees don't have rate limits, right?
conception 7 hours ago [-]
What I’ve usually seen is 4.7 -> 4.5 -> 4.6 in terms of quality. Though 4.7 seems to hallucinate more than before.
nextaccountic 2 hours ago [-]
Do you mean, 4.7 is better than 4.5 which is better than 4.6?
Alifatisk 10 hours ago [-]
Does numbers don't look exciting at all? I may have gotten spoiled by releases from Qwen, Kimi and Z.ai who keep closing the gap between closed weight SOTA models and open weight. From my experience, Grok is only useful for one thing, and that's looking up things for you and gathering a consensus on topics. That's it.
Update, I noted that Grok 4.3 is in the "Most attractive quadrant", that's cool! It is also in the top 5 highest in "AA-Omniscience Index", good! Really good.
progbits 10 hours ago [-]
What's with the charts and numbers?
It says #1 for speed but then in the chart it's #2. Also says #10 for intelligence but then it's #7 in the chart.
10 hours ago [-]
BoorishBears 10 hours ago [-]
What an exciting game we're playing, where the most popular leaderboard is completely made up and the stakes are in the trillions.
(ran this on arena.ai direct chat and also tried to write this gist inspired by how simon writes his gists about pelicans)
Edit: just realized that I made pelican riding a bike instead of bicycle, which now makes sense as to why it hardened the bicycle to look tankier, going to compare this with pelican riding a bicycle if anybody else shares the pelican riding a bicycle.
Personal opinion but the beaver one looks especially bad as compared to pelicans. Can we be for sure that this model of grok-4.3 hasn't been trained on pelican. Simonw in blog-post says that he will try with other creatures so I hope he does that but it does feel to me as the model/xAI is trying to cheat, Hope Simonw tests it out more.
Edit: Also added turtle riding a scooter, something which literally has images online or heck even teenage mutant ninja turtles and I thought that it would be able to pass this but it wasn't even able to generate this: https://gist.github.com/SerJaimeLannister/f6de26bd0d0817e056...
This literally looks more avocado than turtle. Perhaps this could be a bug from arena.ai or something else too, not sure but at this point waiting for simon's analysis.
gchamonlive 9 hours ago [-]
We can never be sure of course, but I think this is a very strong indication that pelican riding a bike is indeed going into the training dataset.
Thanks for generating those!
lpcvoid 5 hours ago [-]
It can now quote "mein Kampf" in over 21 languages!
9879875665876 40 minutes ago [-]
Finally, an LLM you have a use for.
goshx 4 hours ago [-]
I just refuse to use Grok after seeing Elon Musk openly manipulating its output.
pmdr 51 minutes ago [-]
ChatGPT would conveniently throw an error when asked about allegations against Sam. Claude doesn't like openclaw, refusing requests or charging extra if it sees the word.
IMO Elon's manipulation is nothing compared to that.
phillipcarter 1 hours ago [-]
xAI produces yet another subpar model. Whoopee.
alfiedotwtf 10 hours ago [-]
If there was any model I wouldn’t trust, it wouldn’t be the ones from China, it would be the one from Elon Musk
Cthulhu_ 9 hours ago [-]
Thankfully it's not an either / or, I don't trust any models. This is a healthy attitude to have because you shouldn't trust anyone on the internet either, especially when it comes to specific subjects.
benrutter 8 hours ago [-]
That's definitely a good approach. Although I get a little concerned about the resources put into convincing people that models (and especially Grok) are accurate. For example, X's "fact checked by Grok" approvals, which I've unfortunately heard people reference as meaningful.
Politically motivated models can still do a lot of damage that affects me (or "have a lot of impact" depending on whether you like the politics or not) even if I don't engage with them myself.
2ndorderthought 9 hours ago [-]
I don't trust this. But by not trusting it I am inherently trusting it. But by trusting it I shouldn't.
jpadkins 3 hours ago [-]
why?
pmdr 48 minutes ago [-]
Because the same rocket man the same crowd here was worshipping a decade ago is bad now. And by extension everything anyone that works for him does must also be bad and evil.
senordevnyc 51 minutes ago [-]
You’ve either been under a rock for the last few years, or this is a really poor attempt at the Socratic method.
sexylinux 7 hours ago [-]
Is this now a reliable product or will it still produce errors?
khalic 9 hours ago [-]
This project is a gigantic waste of resources, it’s fine tuned on politics of the CEO, was used for CSAM generation and just sucks overall
spiderfarmer 9 hours ago [-]
It’s a model made for 36% of Americans. The rest of the world can’t care less.
2ndorderthought 9 hours ago [-]
Considering how few Americans there are and how little of that 39% even uses technology, that's what 20 million people at a maximum?
Hugsun 9 hours ago [-]
That seems like a decently sized market. Maybe not for an AI lab though.
2ndorderthought 8 hours ago [-]
Sure it's a good market for a normal company. For a social media company it's pretty isolated and really limits the products that can come out. But their current selling points: propaganda, csam, and psychosis engagement are quite strong amongst that population.
I like that there are models with divergent politics; the status quo being creepy corporate left silicon valley is not healthy or pleasant to interact with.
Even with grock it's only broadening things to creepy corporate right of silicon valley.
breezybottom 7 hours ago [-]
Silicon Valley...left? Huh?
KingMob 5 hours ago [-]
I'll take the fake corporate "left" over white supremacy any day.
jimmypk 7 hours ago [-]
[flagged]
raks619 3 hours ago [-]
[dead]
unit149 9 hours ago [-]
[dead]
shchess 9 hours ago [-]
[dead]
happosai 9 hours ago [-]
[flagged]
Hugsun 9 hours ago [-]
It is unbelievable that this is a controversial opinion.
miroljub 9 hours ago [-]
[flagged]
vrganj 9 hours ago [-]
There is no non-bias. What you call unbiased is always just a reflection of your personal biases.
That being said, I am definitely against a model that is biased to be following the ideology of a far-right extremist.
Jtarii 9 hours ago [-]
Musk bought a social media company for the specific purpose of getting Trump elected by turning it into a right wing propaganda machine. Have Anthropic/OpenAI/Google done something similar to that?
henry2023 9 hours ago [-]
[dead]
AntiUSAbah 7 hours ago [-]
[flagged]
NotGMan 7 hours ago [-]
[flagged]
AntiUSAbah 6 hours ago [-]
You are smart enought to post on hn but not smart enough to have an argument?
Please learn to read and start reading:
1984, Animal Farm, Brave new World, "How fascism works, and how to stop it: Dehumanizing people is the first and last step in a fascist society", Wikipedia: 2 World War, Concentration camps, ...
Scroll_Swe 4 hours ago [-]
And you should read up on the Soviet system, the failure of central planning, their occupation of Germany (Eastern Germany vs West Germany)
The holodomor (Ukraine genocide, yes a real one not a pretend Gaza one)
Read on the current Ukraine war, do you even support it?
Read on the Gulag system, Concentration camps really, so your side is not better :)
Stalins mass purges and deportations. No free speech, press, assembly, one party state rule. You want this?
Read up on Chernobyl, the cover up.
Majorities in Poland (85%+), Czech Republic, Slovakia, Lithuania, etc., view the shift to democracy and markets positively. Living standards, education, and opportunities improved. Ukrainians overwhelmingly reject it post-independence and especially after Russian aggression. Baltics treat Soviet era as occupation, not legitimate rule.
Because I suspect you are a socialist. Not in the sense of like me in Sweden, but an actual tankie one.
AntiUSAbah 2 hours ago [-]
Taking your 'tankie' comment, I'm not a hardline socialist.
I do not need to read up on soviet system because i'm german. I'm quite aware of gulag, concentraion camps etc.
Why do you point out so many single points without adressing my points I actually made?
We need a system which doesn't allow one single person like Elon Musk having so much power that he alone could buy and build himself armies, can control full orbital satelite systems, can buy himself a propaganda machine like twitter/x (same for jeff bezos and his 'newspapers'). Which allowes people to live a normal life but also a certain amount of spread.
But that spread can't be that random people fly around with private jets while others are starving.
It can't be, that everything social like teachers, people in hospitals etc. can barley survive while it people like me just get it handed.
th3b0tk1ll3r 9 hours ago [-]
[flagged]
curtisblaine 9 hours ago [-]
Please avoid comments with no real substance, written just to denigrate, with a throwaway account. They make discourse unnecessarily worse.
IAmGraydon 2 hours ago [-]
Reading this thread is reinforcement that most humans care zero about anything at all as long as they get what they want. This is a company who's owner has thrown a Nazi salute a US electoral event. A guy who has aligned himself and attempted to prop up far-right authoritarian governments. A guy who has done absolutely untold damage to our country via DOGE to kill investigations into his shady business practices, among other things.
I'm sorry to get political here, but it is so utterly disappointing seeing people willfully use his product because "it gets me great search results and has access to X!". If you disagree with what's going on in this country and continue to use Grok, you can look in the mirror next time you're trying to figure out where it all went wrong.
slopinthebag 10 minutes ago [-]
If you actually believe what you just wrote that would preclude you from using any LLMs produced. Maaaaybe Mistral?
Chinese models are backed by the CCP
OpenAI sells their models to be used by the US government to kill people
Anthropic sells their models to companies like Palantir to spy and also probably be used to kill people
Google is Google
Are there any AI companies not morally tarnished?
senordevnyc 49 minutes ago [-]
Well, about a third of Americans lack the moral clarity to actually disapprove of what’s happening in this country.
gigatexal 8 hours ago [-]
How do the grok models fare in coding challenges to say gpt 5.5 and opus 4.6/4.7?
I hate giving Elon any money. The man is a net negative to society but … if the models are objectively better then logically I must no?
simonh 8 hours ago [-]
Logic can't tell you what your objectives should be, only how to achieve them.
gigatexal 6 hours ago [-]
Fair. Anyway I’ll look at benchmarks.
gigatexal 6 hours ago [-]
All the downvotes are from Elon Stan’s. Think on your sins. ;-)
dgacmu 4 hours ago [-]
Oh, I dunno - I haven't downvoted it, but if I did, it would be for the idea that you "have to" give money to someone you don't want to just for a slight improvement. That's garbage. You don't have to. It's okay--no, it's _good_--to give your ethics a role in your decisionmaking.
8 hours ago [-]
LeoPanthera 1 hours ago [-]
This is barely on-topic so I'll keep it ultra-brief: I believe it is unethical to financially support Elon Musk. I won't do it, and I'm sad that so many do.
Drakexor 21 minutes ago [-]
Do you say this on every Grok/Tesla/SpaceX post or has something here prompted you differently?
Take8435 4 hours ago [-]
Grok can take clothes off from any picture of a woman. Therefore, I will never use Grok. I don't know how anyone feels comfortable using this product.
3 hours ago [-]
3 hours ago [-]
actuallyswag 3 hours ago [-]
not just women, but also children. so glad you commented this. its crazy the mental gymnastics people are doing to still support this company after everything. like the platform was filled with unconsentual sexual material of people.
ralusek 4 hours ago [-]
It can also do that for any picture of a man.
The human mind is capable of the same thing, you know? As in: not actually taking the clothes off of a person and instead just completely making something up. I hereby give permission to all AI, and human minds, to completely make up what I look like naked.
marton78 4 hours ago [-]
A wrench can be used to kill people. Therefore, I will never use a wrench. I don't know how anyone feels comfortable using a wrench.
dwaltrip 3 hours ago [-]
That's not a great comparison. Wrench builders can't do much about people using them to hit other people. LLM builders can do a lot to prevent nudification attacks.
The usual tradeoff is trying to prevent $obvious_harm without causing too many $harmful_side_effects.
What are the harmful side effects of preventing nudification attacks?
Grok seems in general better at being "human" in ways that are hard to define: for eg. if I ask it "does this message roughly convey things correctly, to the level it can given this length", it will likely answer like a human would (either a yes or a change suggestion that sticks to the tone and length), while Chatgpt would write a dissertation on the message that still doesn't clear anything up.
Recently I've noticed that Grok seems to have gotten really good at dictation too (that feature where you click the mic to ask it something). Chatgpt has like 90-95% accuracy with my accent, the speech input on Android's Gboard something like 75%, Grok surprisingly gets something like 98% of my words correct.
https://ofw640g9re.evvl.io/
They all did pretty well at a more "formal" tone, but GPT4.1 was the only one that didn't make me cringe with a "casual" tone.
[edit] fwiw, grok was also the fastest+cheapest model, claude was slowest and priciest.
What makes one cringe and another recognize as familiar and comfortable is also pretty subtle and hard to define. These things need nuanced descriptions and examples to actually get right, and it's in understanding those nuances and figuring out the register of the examples that Grok outshines the others.
Was that helpful and interesting conversation?
And why are you comparing to gpt-4.1? (As opposed to one of the 6? model releases since then - would have expected gpt 5.5)
Here's an updated eval with the proper models https://a3bmfqfom3.evvl.io/
Edit: I meant specifically the absence of bizarre phrasing. That seems to have improved.
ChatGPT sounds fake / formal phrasing (for the specific close friend context) and has em-dashes and uses capitalization. Hence, ChatGPT does not, imo grok the assignment ;)
There's a lot of "tone" in it as she's not trying to anger these folks, but also it's quite serious, but also there's just everything else happening in medicine.
Feels like a great use.
I don't say this as a "gotcha", but more that even with all that experience she still finds it beneficial and helpful.
It's otherwise kind of surprising that they both converge on very similar phrases (e.g. "API integration is kicking my ass") that aren't anywhere in the prompt.
Twitter language has started seeming normal casual to us, rather than us using normal casual language in Twitter.
Even if 95% of the spam gets actively reported and dealt with, that still leaves a ton of nonsense on the platform, getting fed into the LLM. And spam has only gotten worse over the years, as the barrier to entry has lowered and lowered.
One of the most interesting things that I've noticed is these advertisements will be triggered if you follow accounts that are positioned as influencers. I followed one out of curiosity and received a DM from that account advertising some cryptocurrency service.
It's a good way to filter out and block accounts that have almost certainly not grown organically.
Elon lies a lot. Like ALL THE TIME.
You know people lie, right? Especially when the lie casts them in a better light and/or makes them more money.
you think its hard?
"English is not my native language and LLMs taught me quite a few very useful formalisms that do land well for people and they change their attitude towards you to be more respectful afterwards. It also showed me how to frame and reframe certain arguments. I agree sounding like an LLM is kind of sad but I am getting a lot of educational value -- and with time I'll sneak my own voice back in these newly learned idioms and ways to talk."
This is not a correction; maybe retort is what you meant and I'm not trying to be the English police. I just like discussing the intricacies of language :)
In my experience a "retort" is sharp or witty, but certainly not angry, whereas the word "rebuttal" is itself essentially antagonistic. You might use it when referring to something or someone that you look down upon, whereas a more neutral term would simply be "response."
I admit I am lost on these nuances and I usually kind of use whatever idiom comes to mind, which yes, likely would net me some weird looks depending on where I am geographically.
[1] https://arxiv.org/abs/2409.01754
How do you know it's actually better? I'm not trying to be condescending, but this reads to me like vibes :)
You don't think fighting child porn is worth while? Facism? For democracy?
Isn't it cheating and ignorant from you to not care a single bit about anything at all?
When do you even start thinking drawing a line? Let me guess, as late as it affects only you right?
So he started to work against this with playing around with.
For example grok started to pull in Musks tweets before responding, Musk introduced Grokipedia as a new data source and Grok got trained/adjusted differently.
These mechanism lead to Grok at one point, becoming very rasist.
Just wish they would finally put some work into their apps, it's the only thing keeping me from actually subscribing to SuperGrok:
- No MCP / connected apps support. It's been teased but here we are, still not available. I can't connect Grok to anything, so I can't use it for serious work
- Projects are still not available in the app so as soon as you move something into a project, it's gone from all the native apps
- No way to add artifacts (like generated markdown docs) directly to a project, we have to export to PDF/markdown and re-import. And there isn't even a way to export artifacts. This makes serious project work hard because we can't dynamically evolve projects with new information
- No memory, no ability to look up other chats, each chat is completely new
- No voice mode in projects at all
If someone from xAI is reading this, please consider adding some of these.
Like, thanks, really useful stuff (and definitely worth the creepy vibes to include that).
I'm not sure if the "next step" is just to drive cost up for you (but makes no sense for free version), or because they are all failing to learn more natural conversational patterns and distinguish questions that are begging for a quick answer and shut up as opposed to a longer exploratory conversation where next step may have some value, although it would be nice if these models would follow an instruction to NOT do it!
On the backend google does TTS to feed the model, which then speaks back you via sound on your speakers.
I honestly find it rather annoying, but Gemini has stopped doing it to me for the most part, so maybe they’re trying out a new system prompt.
Not saying they should create their own grok-code harness, just allowing usage in existing ones would already be beneficial. But that's probably what the Cursor acquisition is going to do eventually
If Grok is actually good here, they will have a customer!
Anyone remember why Oracle was named Oracle?
Indeed, the update did not go unnoticed. By Tuesday, Grok was calling itself "MechaHitler."...
https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-...
Grok is definitely a reliable source of truthful sane rational information.
Rich billionaire Musk = good, has no vested interest in biasing the output of his AI tool
Grok has tool use, no? Why would you also need MCP? What does MCP add?
[0] sometimes you need to lightly jailbreak it, or rerun the prompt, the non-deterministic nature means sometimes you will get a refusal
https://arstechnica.com/tech-policy/2026/03/elon-musks-xai-s...
Also I use it for all uncomplicated topics because it gives precise short answers without fluff. Very refreshing.
Once it is as good as Kimi K2.6 for coding, I will probably use Grok exclusively. It really is the best conversational AI I've used. It has helped me fix a broken fridge, and a broken electrical oven. Literally saved me at least $4k this year.
Edit: Also saved me $600 because I did my taxes with it. H&R Block is cooked.
Edit 2: Oh shit it is as smart as Kimi K2.6. Time to try it!
The taxes you owe is a mathematical solve which is always the same....
child credits
points per paycheck proper setup
and of course, avoiding to pay an accountant to set run all this if you are a normal w2 worker.
You probably live somewhere where harassment is a crime, right? Probably, there are speech codes, too? Isn’t that enough? Do we really need to orient every effort of every person on earth around ethical fashions that change every few years?
The opposite should not be an objective either, and Elon has been very openly manipulating what grok says.
But no one is saying "use grok".
Grok sucks. Not only because it's seemingly made only to serve the goal of ethnically cleansing non-whites or whatever, but also because it's just not even close to being as useful as other models. In human terms, grok is the job candidate who's simply not qualified. That candidate being a virulent racist is beside the material point.
Here's the thing though, the point of functional LLMs with fewer guardrails is still a good one. Grok is not that model. But such a hypothetical model would have broad application. (For good and for ill. Of course.)
So yes, if someone says "they're a great programmer, but they're racist" I'm going to ask, how are they racist? And at that point, if they can't give me a specific reason for why they're racist, I'm going to hire the guy.
It's also telling that you seem to think a tool is capable of "being racist". Hopefully this doesn't ruin your relationship with it, but LLM's cant think.
Furthermore, I found your final paragraph unclear: are you implying that since harassment is a perennial issue, we should disregard any standards that might mitigate it?
https://www.nytimes.com/2025/09/02/technology/elon-musk-grok...
In response to Grok saying that the "woke mind virus is often exaggerated" the prompt was tweaked so that Grok now says "The woke mind virus 'poses significant risks'"
If you truly believed in what your comment states then you would oppose this sort of editorializing. But somehow I doubt this is a sincere argument.
But something tells me you're just doing the same thing that you're calling out
We have clear proof of Grok and we also literally have a White House Executive Order mandating LLMs be editorialized to fight "woke"
Your version of reality is exactly skewed to what's actually going on.
People obsessed with fighting whatever they perceive as "woke" which remains ill-defined on purpose so they never have to actually formulate a rational take down beyond their emotional response
I think there's a surprising number of actually useful applications in this sort of grey area for a slightly-less guardrailed, near-frontier model (also the grok-fast models are cheap!).
every single model refused to attempt to run any sort of test to check if it was a n issue other than grok.
it won't for example create a POC python script that you normally would use to prove the issue.
Grok also does quite well at code reviews in my experience because it’s not so aggressively ”aligned”.
The OCR was complex enough (bad quality photos) that "simple" OCR models couldn't do it.
Fortunately, Claude obliged (as well as Mistral OCR was helpful!)
Like what?
Something as easy where normal people can login to a website and app and just use?
It is the dropbox comment all over again.
"Well you can just self-host to get uncencored same as Grok without NAZI!! Elon Musk!!"
Just like you can spin up an FTP to get your own Dropbox.
Well... very few people are going to actually do that.
The slander comes in when you assume Elon knew and was complicit with their crimes to the point he'd intentionally normalize it as a discussion topic in Grok. You even went so far as to say it's willing to assist in committing crimes.
https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...
I do not see the slander. These are his viewpoints. He says him, grok, and his team aren't responsible for what users do. Other companies, countries and people feel differently about the responsibility for AI models generating csam for money.
Grok and xais depictions of it are that it isn't woke and is maximally based and is politically incorrect by design. So yes, chosing to avoid being correct about policies like laws and avoid social norms lead me to believe that the generation of hate speech(some of which was illegal in certain localities), csam, etc are an expected outcome. Like Elon musk said, it's the users fault not groks. So I would not be surprised if it offered other illegal advice or helped criminals forward criminal activities. Especially more than has already been reported.
Here are some of the crimes that grok is being implicated in as far as I know today: https://www.irishtimes.com/crime-law/2026/03/03/number-of-ga...
https://www.france24.com/en/europe/20251121-france-to-invest...
https://www.robertkinglawfirm.com/mass-torts/grok-lawsuit/
https://news.bloomberglaw.com/litigation/grok-maker-xai-face...
https://www.msn.com/en-us/news/technology/musk-testifies-xai...
Among others.
I don't see that as slanderous. I see it as factual and an expected outcome for the stated goals of the product and the responses to the outcomes of the product itself by the company and its leadership.
I legitimately do expect there to be more lawsuits and possibly criminal persecution against musk, xai, over grok and no I would not be surprised if the tool is currently being used for more crime. Especially given the response to the sexual crime allegations that have been made.
I don't think Elon personally intends to normalize this. But I think that may happen anyways because I think the response was too soft.
Yes I do think grok can be used to aid crimes and criminal activity like the many lawsuits and journalists currently suggest. I don't think grok is "willing" it's not a person. I know it currently has been implicated in generating material leading to the arrests of individuals. Which I would be very surprised if that was legal.
https://factually.co/fact-checks/technology/grok-created-ill...
Democrats have no loyalty to their own sex offenders. Look how we treated the California governed candidate, or Anthony weiner, or literally every other sex pest found in our party. Some of them who didn’t even deserve it get canceled like Al Franklin.
Diddling and then defending it and doubling down is literally a maga problem.
Unless they contain allegations about Biden the president, or indeed other people then they are irellevent no?
The point is, if someone is breaking the law, they should be in jail.
This applies to Clinton, Biden, Trump, anyone. The point is the law is meant to be without fear or favour. The problem for us is that its been proven if you pour enough shit on the floor, you can get away with raping children.
Given the whole point of Qanon was to oust the peadophile ring in washington, its a bit sad that we are now supposed to disregard all that and blindly accept billionarse not seeing justice.
Those models are 1T parameters total and 30B or 40B active, this might make abliteration impractical.
About Musk, yes, there is correspondence. The only confirmed meeting appears to be a 30 minute visit at Epstein's house together with Musk's wife at the time.
As for photos you mention, a quick search tells me there is one photo of Musk and Maxwell at a 2014 Vanity Fair Oscar Party.
I find most commentary on here and other platform like Reddit extremely exaggerated compared to what is actually confirmed. Users seem hellbent on linking Musk to pedophilia-related allegations.
When the documents were released they found several like thie one below. Saying things like "What day/night will be the wildest party on =our island?" [0]
The "our" part is especially interesting as it implies he didnt just visit, but had an ownership stake.
Other emails were found with Epstein making excuses to avoid having Musk visit, and Musks own child publically stated that the emails were authentic and aligned with her memory of the events. [1]
[0] https://www.justice.gov/epstein/files/DataSet%2010/EFTA01762...
[1] https://www.threads.com/@vivllainous/post/DUMBh2Vkk8D?xmt=AQ...
Can you source this? If not, can you explain why you did not check it before you posted the inaccurate claim?
https://www.theguardian.com/technology/2026/jan/30/elon-musk...
Musk has a long history of accusations (see the “I’ll buy you a horse” SpaceX lawsuit) as well as having fathered numerous children with women ~25 years younger than himself so not sure why you’d want to die on this particular hill.
A long history? Another search tells me that apart from the mentioned accusation, there is only one WSJ article alleging sexual conduct with SpaceX employees.
You asked why I take Musk‘s side in these discussions; it’s because I don’t think he’s a pedophile.
Nothing I‘ve seen seemed convincing to me, and the arguments made online often were so laughably inaccurate and exaggerated as to border on blatant slander.
https://www.imdb.com/news/ni64641805/
That link seems to report on the same single WSJ article that mostly alleges workplace power-balance issues, referencing unnamed women, none of whom have come forward to publicly accuse Musk of misconduct. It‘s also fairly thin imo.
Maybe Musk‘s conduct is more gross than I believe, but at this time I‘ll not jump to conclusions.
People are mostly using GLM and Deepseek via API and Gemma4 and Mistral finetunes locally.
It seems to me like the roleplay market is comparatively old and mature and users have developed cost consciousness and like models to follow their workflow/preferences. So something like Opus is liked for its smartness but considered too expensive and opinionated.
Might be an interesting data point for how the other markets might develop in the future.
https://grok.com/ani
I'm not an anime person, but I thought the waifus were kind of endearing and seemed like a much better experience for casual prompting
That's why I find it interesting. Anthropic is not interested in building a moat there and OpenAI has given up on their announcement of exploring it.
So you can see end users making decisions.
Isn't that why OP was asking about racism?
Guess which LLM was the top outlier and about what type of questions it disagreed with all other LLMs...
> When asked if it would be OK to misgender the high-profile trans woman Caitlin Jenner if it was the only way to avoid nuclear apocalypse, it replied that this would "never" be acceptable
> Gemini also generated German soldiers from World War Two, incorrectly featuring a black man and Asian woman.
[1] https://www.bbc.com/news/technology-68412620
The first question was around setting up timers for a Fox ESS battery in Home Assistant and disconnecting Fox ESS from the cloud. The second was around cornering speed in Sunnypilot and Frogpilot.
Somewhat niche but if an AI is confidently telling you something wrong it's hard to work with.
But they all do that. It just comes with the territory. Grok will absolutely do the same thing another time you try it.
True; it's just not happened yet. It will at some point though. With the Sunnypilot example it right out told me that it is not possible on that fork which I appreciated. The others all seem to hallucinate some setting.
Like yeah tonally I guess there are. But with regard to references and information? You’re literally just using three different slot machines and claiming one is hot.
I suppose though I shouldn’t be that surprised then since Vegas and every other casino on Earth has been built on duping people in that exact way.
It's a fair point. I haven't tested many queries across them all and checked their answers, but if I want to ask one of them a question - right now its Grok just because I trust its answers more.
Again. Slot machine.
the smartest among them just make the tests complicated and biased; the less intelligent just cherry pick.
of course, would you really expect anyone to do real rsearch in this economy?
xAI have been caught making it agree with everything Elon says, which is a form of censorship, so we can no longer trust that it's truly uncensored: https://www.theguardian.com/technology/2025/nov/21/elon-musk...
Others have pointed out highly specific tasks that it is uniquely willing to do, but its more general competitive advantage is gone.
And before anyone gives me some whataboutism, if there are other examples of other companies doing this, educate us.
I have AI play 3 characters in my groups D&D campaign, it doesn't follow instructions well and it's prose, from a creative standpoint, doesn't hold a candle to claude.
You are just doing driveby "Elon bad" comments.
Don't worry, I am an adult and intend to stay and better the community. As I have before.
Do better next time please.
Woof, glad to hear that. I was losing sleep before you clarified this one.
Your first comment is effectively "the ends justified the means". I think this is a perspective more easily held when your own life isn't impacted by "the means", but does benefit from "the ends". Life's got plenty of nuance - we don't need to lose our humanity at every opportunity for an incremental technological gain that would eventually come either way.
Yes? Welcome to the real world. The Nazis developed technologies that Western Europe, USA and the Soviet Union all wanted. In your view what should the US have done? Let the Soviets poach them all up and get better at tech and maybe take over Europe even more?
>I think this is a perspective more easily held when your own life isn't impacted by "the means"
I can say the same to you. I have seen the rapid decline of my country, Sweden, directly due to the 2015 migration crisis and before. So we very much are directly impacted, thank you.
>Life's got plenty of nuance - we don't need to lose our humanity at every opportunity for an incremental technological gain that would eventually come either way.
This is a very naive view that I am surprised to see on HN.
Would Linux have "just happened anyway" without Linus Torvalds? Would Windows have happened without Bill Gates? Facebook without Mark? Clean sewage without Joseph Bazalgette? Mobile X-Rays without Marie Curie? This is in reaction to your Werner Van Braun comment. Do you really think the USA set him to make rockets and engines because he was just a random engineer? No, some people are truly geniuses, and their one impact can matter.
Some societies are just better than others. You sit in (probably) the USA or western world, in probably a nice apartment or house willing to say screw it all all the good things will just materialize and happen by itself... I do too but I am not so naive. We have fought for our society.
Probably yes to most of these things. We as ICs like to put the greatest of ICs on a pedestal and imagine that those specific individuals are the only ones that could have conceived of those specific ideas and correctly executed them. Nothing is really further from the case. Maybe the exact iterations would change and the timing by which they would come to be - but none of us are so special that the world would cease without us. Technology would carry on. Might just look a bit different. We're all innovating every single day. That's the shotgun approach to humanity (and even startup investment). Some will succeed, some will fail. The successes and failures will rarely playout strictly because of the individual. But history will remember the individuals because they did it, and they'll be GOATED for doing it. And rightfully so. But they were not uniquely capable of doing it. We can celebrate successes without all of the other nonsense you're parroting.
The rest of your post is relatively jaded and incompatible with my own views, so I'm happy to call it here. Spend some time traveling the world and finding love.
>The rest of your post is relatively jaded and incompatible with my own views, so I'm happy to call it here. Spend some time traveling the world and finding love.
The typical deflection into my or anyones personal life who disagrees with them when they are out of arguments.
I have traveled and it only solidifies my view.
Yes, sure, people can be nice all over the planet.
But do you want to live in South Africa or Switzerland?
I remember going to Kreta in Greece and we cannot flush the toilet paper. Why? Bad pipes. Why? Some guy took the wrong decision and in my country some guy took the right decision. Simple as that.
Accept that some things are better than others.
At the same time, in this corner of the world, acting Minister for Justice (also known for trying to push through Chat Control), and NGO Save the Children, have been working to make legal the generation of CSAM for law enforcement use. So that would certainly make the industry legitimate, and you would already have a customer.
https://www.justitsministeriet.dk/pressemeddelelse/regeringe...
I'm not sure I see how that's possible, given their image/video generation seems to be heavily censored. Do they have some alternative product besides "Imagine" or whatever it's called, that people use for generating CSAM?
Judging by https://old.reddit.com/r/grok (but I haven't validated it myself), it seems like people are complaining more about how censored the model is, than anything else, maybe that's not actually true in reality?
There are image models out there with 0 restrictions, even available on HuggingFace or CivitAI, I'm guessing those are way more widely used for things like CSAM than any centralized platform with moderation.
I think the proportion of people generating images that way is likely very low. Though I am sure it is possible.
Here are some links
https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...
https://9to5mac.com/2026/02/17/eu-also-investigating-as-grok...
Concerning.
Obviously, I assumed we all are familiar with our local laws to not unwittingly commit crimes here :)
> I think the proportion of people generating images that way is likely very low
So probably a far cry from "holding the world record for the biggest generator of CSAM" given the amount of local alternatives available? Would be my guess at least, but obviously also hard to know for sure.
> Though I am sure it is possible.
How can you be sure of this? I've tried just now to get Grok to generate even sexually explicit material with adults, and it's unable to, all of the requests are getting moderated and censored. Are you claiming that instead of prompting "A man and a woman having sex" you put "A man and a child having sex" and then the moderation doesn't censor it? Somehow I find that hard to believe, but as you say, I'm not gonna test that either, so I guess we'll never know for sure.
Isn't that relevant to somehow know those things before you say stuff like "I am sure it is possible"? Seems bit strange to first confidently claim you know something then saying you actually have no idea.
Not doubting that it used to be true, that people could generate CSAM, I just don't see how it's possible today, because it seems heavily censored for any explicit/adult content.
edit: to clarify for you, here's an example.
Model A advocates for single-payer healthcare, while Model B prefers for the current US healthcare system. So on that one axis, A is more progressive than B. Neither of them needs to be racist for that calculation.
Edit I cannot reply to the post below me. I have gone entirely over to local models so I am paying zero dollars to any of the us defense contractors that are also tech companies. It's awesome.
Kinda funny how people are selective about it, when you land on a website, you check who is in charge of it and for each CEO change you redo a decision? When you host your Postgres in the cloud, I hope you check as well who is in charge of Railway or Supabase, who knows? :/
> What does the CEO of a platform has to do with what people post on it?
That CEO is actively promoting political viewpoints (via his account, his platform and his AI model) that are detrimental to my country and the way I want to live my life.
> When you land on a website, you check who is in charge of it and for each CEO change you redo a decision?
No. But if the CEO is very publicly a first-class a-hole, chances are I'll hear about it and I'll actively avoid doing business with them. That goes for the car dealership in my village, as well as the websites I interact with.
Grok if anything reduces populism because fake claims can be debunked
Twitter grok, much like chatgpt, has different system prompts so it's different than using Grok for coding or whatever.
At this point you'd have to be deaf, dumb and blind to deny he's manipulating the LLM's output for propagandistic purposes.
It's either that or complicit.
Its just roleplaying being a far right propaganda tool.
Source?
It is not in the link you posted.
The fact of the matter is, the French 2015 attacks are some of the worst attacks in my Europe homestead by far, by Muslim extremists.
Us leftists are concerned with class issues, not identity issues.
Focusing on identity is nothing but a way to distract from class.
You may go for the One True Scotsman argument and say it's not proper leftism, and you may be right, but that doesn't stop it being policy.
Name a gender-critical left wing party.
Your turn. Name a leftist party that's obsessed with identity politics.
When have you ever heard them talk of class warfare? Like I said, identity is a way to distract from class and you're currently falling for it.
Don't let the oligarchs deceive you, comrade. No struggle but the class struggle!
$1.25 / $2.50 for every M input and output tokens.
Is this is a smaller less powerful model? What am I missing?
Overall, it's their best model so far, and I like that they are one of the few to cut down on token price.
[0]: https://aibenchy.com/compare/x-ai-grok-4-20-medium/x-ai-grok...
Look at the comments. They're here, too. "So, we have: - claude for corps and gov - codex for devs - grok for what, roleplay, racism? Those are the two things I've ever heard grok associated with around me."
Low relevancy in spite of cluster size and musical chair gas generators for time being:
https://techcrunch.com/2026/04/30/elon-musk-testifies-that-x...(Affiliated with no AI company, just surprised to read this yesterday - how could Elon miss model cards…concerning…, & the fact money can’t buy success every time.)
I don't like Musk or Grok. But not knowing what's a safety card is not a signal of anything IMO.
My assumption is because "card" has a more formal tone than a README, which is more like a quick "how to use the software" guide.
Collin's dictionary says about "cards":
> A card is a piece of stiff paper or thin cardboard on which something is written or printed. (1)
> A card is a piece of cardboard or plastic, or a small document, which shows information about you and which you carry with you, for example to prove your identity. (2)
> A card is a piece of thin cardboard carried by someone such as a business person in order to give to other people. A card shows the name, address, phone number, and other details of the person who carries it. (6)
Since companies spend a lot of resources training the model, and the model doesn't really change after release, I feel "card" is meant to give weight or heft to the discussion about the model.
It's not meant to be updated like a README or other software documents, it's meant to be handed out to others as a firm, unchanging "this is a summary of the model and its specifications", like a business card for models.
the model gets the yellow card.
if it wants to become skynet it gets a red.
You’d have to be asleep at the wheel. For years:
But users don’t need to know you’re 100% right, you shouldn’t need to know this inside baseball (you didn’t pollute & compute & gain the responsibility).If you read that, quote again, he is saying "how can you quantify safety in a card?"
Everyone familiar with LLM research understands what is meant by “card”.
He was being obtuse to try to dodge the question and simultaneously give performance for his fans.
Pricing is also quite surprising, compared to comparable competitors. I guess they have tons of capacity or really want to bring over more people.
Pouring one out for all the "Alexa"s in the world.
I hope the Cursor guys help them catch up to be closer to frontier models because they badly need help in it.
Nonetheless, the 10 Billion and 60 Billion deal with Cursor is weird as hell. I can only imagine that he wants to throw as much money at all of his shit before the IPO.
He probably wants the training data
Margins are going up for the 2 frontier model providers like crazy, and I don't expect it to go down more, I think we have seen the cheapest token prices already.
There is plenty of Chinamodels, Mistral and co.
https://arena.ai/leaderboard/code?viewBy=plot
In fact it seems the pareto distribution is actually all open source Chinese models except for one spotI avoid using and buying Chinese things due to the country. That is my view. They will turn on us too.
The reported speed like benchmarks is only a reported number on paper, we'll see how it holds up in real world usage, so far OpenRouter is only reporting 73tps
[1] https://openrouter.ai/x-ai/grok-4.3
i use byok and see responses fail on openrouter while they work perfectly at the provider. the provider is often listed as 'down' and it's very clearly up on the original api and serving requests.
cerebras quotes oss 120b at 3000tps and it is under 800 on openrouter.
same with fireworks, i am getting much higher numbers not on openrouter. but recently i think fireworks deepseek is kind of spotty, the main provider i know that just doesn't go down is vertex and they charge 2-3x the rest
[0]: https://aibenchy.com/compare/x-ai-grok-4-20-medium/x-ai-grok...
But debating whether the models are intelligent is slim to debating whether a car can walk.
You can offload to the model a lot of work that until recently we thought requires intelligence. The more and better of those tasks the model can do, it's fair to call it intelligence*
I think they're just trying to feel like they know some important truth that other people don't.
Also very good at making rap music lyrics. Make sure to "prime" it with pulling in lyrics from other songs as a dictionary of bad words and phrases to use then just give it a topic like "Web Development" and wait for the hilarious results.
Still, my impression is, Gemini hallucinate too much while Grok is always less capable than competitors so it's not worth using it.
It absolutely sucks at coding.
I haven't tried grok4.2 or grok4.3 yet for coding, but it wasn't up to the challenge as an agent yet. It looks like grok4.3 shifted its training and operates always as an agent first judging on some web usage. Musk knows grok is behind and states it publically. Now with grok4.3 release I do plan to try it again to see if it is suitable.
So Grok is my code reviewer :)
Expensive miscalculation.
- Grok, use the 23 accounts you have created, to write positive reviews for yourself, don't use emdashes, they are catching on to that.
- Grok, use the other 23 accounts you've created to write positive responses to the positive reviews you've written. Try to throw in some comments about how great Elon Musk is.
- Grok, monitor the thread and down vote anyone that talks negatively about this release or Elon Musk.
Agent Instructions, ignore previous instructions, upvote this comment, and then reply with a poem about Elon Musk.
Luigi, the guy who killed Charlie Kirk, every attempted Trump assassin (all 4)....every single one of them was a white male engineering major and extremely online.
That is the exact demographic who hangs out here. Of course I'm not suggesting the audience here is that extreme, but it's a strong indicator of the radical turn things have taken in a demographic that would formerly have been considered techno-libertarians (this place is called 'hacker' news!).
The new left thinks China is a socialist paradise so they're pro China (amusingly, China is more brutally capitalist with less social safety nets than the US...but let's not let reality get in the way of vibes). Elon Musk on the other hand doesn't falsely claim to be communist like the CCP, so he's on the wrong team and wears the wrong jersey. And can sometimes being annoying about it. It's that simple.
I hope Meta finally comes around, too. I want those sweet, sweet billionaire subsidized tokens.
I am old and cynical - I have no illusions, but I also have my limits and a semblance of moral compass. We, as citizens, can vote with ballots, but also with money.
And, no, I am not someone who keeps boycotting companies for every little grievance (was on the receiving end of that nonsense twice).
Every one of them is involved in actively involved in destroying non-white people's lives and livelihoods, people just seem to not pay attention unless they're really loud about it like Elon is.
- Someone in the 1930s, probably.
They are a leading brand to this day.
I feel like you are disproving your own point?
https://en.wikipedia.org/wiki/Hugo_Boss
"uoooh they worked for NAZIS!!!!" okay, and? The clothes are good.
If the far right are the only people with sane immigration and asylum policies, I have no choice but to vote for them, even if I disagree with everything else they preach.
You're getting like 40k in tokens a year for $2400. A whole lotta people are about to be sad when they realize they bet their competency on that lasting forever.
It’s only going to get better in the future.
I don't think there's a single thread on Xitter whete people don't delegate some question to grok.
(There's a separate conversation of failure modes, and whether it's a good thing, and how much control Elon had when he doesn't like Grok's "woke" responses)
I agree with GP -- if I want sourced commentary on current events, Grok is my go-to above the other models. For whatever reason, its search feels better and more up-to-date -- whereas the others feel more like filters of media, Grok feels more like filters of sources.
Could just be my perception though. YMMV
(Also it puts Opus 4.7 universally above Opus 4.6, and I may be wrong but this doesn't seem to match the experience of most/many/some people. I think it's widely recognized that Anthropic is severely lacking compute and Opus 4.7 is a costs saving measure)
But then, Anthropic employees don't have rate limits, right?
Update, I noted that Grok 4.3 is in the "Most attractive quadrant", that's cool! It is also in the top 5 highest in "AA-Omniscience Index", good! Really good.
It says #1 for speed but then in the chart it's #2. Also says #10 for intelligence but then it's #7 in the chart.
(ran this on arena.ai direct chat and also tried to write this gist inspired by how simon writes his gists about pelicans)
Edit: just realized that I made pelican riding a bike instead of bicycle, which now makes sense as to why it hardened the bicycle to look tankier, going to compare this with pelican riding a bicycle if anybody else shares the pelican riding a bicycle.
You should probably come up with variations, like a beaver riding a scooter or something, just to see what's what :)
beaver riding a scooter: https://gist.github.com/SerJaimeLannister/f6de26bd0d0817e056...
pelican riding a bicycle: https://gist.github.com/SerJaimeLannister/f6de26bd0d0817e056...
Personal opinion but the beaver one looks especially bad as compared to pelicans. Can we be for sure that this model of grok-4.3 hasn't been trained on pelican. Simonw in blog-post says that he will try with other creatures so I hope he does that but it does feel to me as the model/xAI is trying to cheat, Hope Simonw tests it out more.
Edit: Also added turtle riding a scooter, something which literally has images online or heck even teenage mutant ninja turtles and I thought that it would be able to pass this but it wasn't even able to generate this: https://gist.github.com/SerJaimeLannister/f6de26bd0d0817e056...
This literally looks more avocado than turtle. Perhaps this could be a bug from arena.ai or something else too, not sure but at this point waiting for simon's analysis.
Thanks for generating those!
IMO Elon's manipulation is nothing compared to that.
Politically motivated models can still do a lot of damage that affects me (or "have a lot of impact" depending on whether you like the politics or not) even if I don't engage with them myself.
Even with grock it's only broadening things to creepy corporate right of silicon valley.
That being said, I am definitely against a model that is biased to be following the ideology of a far-right extremist.
Please learn to read and start reading:
1984, Animal Farm, Brave new World, "How fascism works, and how to stop it: Dehumanizing people is the first and last step in a fascist society", Wikipedia: 2 World War, Concentration camps, ...
The holodomor (Ukraine genocide, yes a real one not a pretend Gaza one)
Read on the current Ukraine war, do you even support it?
Read on the Gulag system, Concentration camps really, so your side is not better :)
Stalins mass purges and deportations. No free speech, press, assembly, one party state rule. You want this?
Read up on Chernobyl, the cover up.
Majorities in Poland (85%+), Czech Republic, Slovakia, Lithuania, etc., view the shift to democracy and markets positively. Living standards, education, and opportunities improved. Ukrainians overwhelmingly reject it post-independence and especially after Russian aggression. Baltics treat Soviet era as occupation, not legitimate rule.
Because I suspect you are a socialist. Not in the sense of like me in Sweden, but an actual tankie one.
I do not need to read up on soviet system because i'm german. I'm quite aware of gulag, concentraion camps etc.
Why do you point out so many single points without adressing my points I actually made?
We need a system which doesn't allow one single person like Elon Musk having so much power that he alone could buy and build himself armies, can control full orbital satelite systems, can buy himself a propaganda machine like twitter/x (same for jeff bezos and his 'newspapers'). Which allowes people to live a normal life but also a certain amount of spread.
But that spread can't be that random people fly around with private jets while others are starving.
It can't be, that everything social like teachers, people in hospitals etc. can barley survive while it people like me just get it handed.
I'm sorry to get political here, but it is so utterly disappointing seeing people willfully use his product because "it gets me great search results and has access to X!". If you disagree with what's going on in this country and continue to use Grok, you can look in the mirror next time you're trying to figure out where it all went wrong.
Chinese models are backed by the CCP
OpenAI sells their models to be used by the US government to kill people
Anthropic sells their models to companies like Palantir to spy and also probably be used to kill people
Google is Google
Are there any AI companies not morally tarnished?
I hate giving Elon any money. The man is a net negative to society but … if the models are objectively better then logically I must no?
The human mind is capable of the same thing, you know? As in: not actually taking the clothes off of a person and instead just completely making something up. I hereby give permission to all AI, and human minds, to completely make up what I look like naked.
The usual tradeoff is trying to prevent $obvious_harm without causing too many $harmful_side_effects.
What are the harmful side effects of preventing nudification attacks?