Hacker News
Mistral's Le Chat tops 1M downloads in just 14 days (techcrunch.com)
69 points by oliverchan2024 9 months ago | hide | past | favorite | 22 comments


Mistral's partnership with Cerebras for inference hardware has received less commentary than I expected. They're basically blowing the competition out of the water, with Le Chat getting 1,100+ tokens per second of per-user throughput.
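Back-of-envelope, that throughput mostly matters for perceived latency. A sketch of what 1,100 tokens/s means for a single answer (the 500-token answer length and the 60 tok/s "typical GPU" figure are illustrative assumptions, not from the article):

```python
# Rough feel for per-user throughput: seconds to stream one answer.
def stream_time(tokens: int, tok_per_s: float) -> float:
    """Seconds to stream a response of `tokens` length at `tok_per_s`."""
    return tokens / tok_per_s

answer_tokens = 500  # a medium-length chat answer (assumption)

for name, rate in [("Cerebras (Le Chat)", 1100.0), ("typical GPU serving", 60.0)]:
    print(f"{name}: {stream_time(answer_tokens, rate):.2f}s for {answer_tokens} tokens")
```

At those numbers the whole answer arrives in under half a second instead of several seconds, which is why the difference is so visible in the UI.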


Yes, I'm really impressed by the speed as well.

A bit more about the collaboration can be found here:

https://cerebras.ai/blog/mistral-le-chat


For those who haven't tried it yet, it's best to see for yourself - it is visibly, significantly faster:

https://chat.mistral.ai/chat


That's just crazy.

I'm curious when someone will run the right experiment: an LLM on Cerebras reasoning so well, at such scale and speed, that it produces something genuinely novel.


It should be noted that as a customer of the French ISP Free you get a one-year free subscription to Le Chat Pro (Free CEO Xavier Niel is an investor in Mistral).

That probably helped downloads.


The article's content doesn't add much beyond its title.

I feel I wasted my time reading it. Just my opinion.


[flagged]


Separately, Gemini Flash is $0.10/1M tokens, which is bonkers compared to Le Chat's $3.50/1M.

There's a quality gap for sure, but a 35x price gap? Definitely worth testing for your use case.
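To make the gap concrete, here's the arithmetic at the two quoted prices; the 50M-token monthly workload is a made-up example volume:

```python
# Cost comparison at the quoted per-1M-token prices.
PRICES = {"gemini-flash": 0.10, "mistral-large": 3.50}  # USD per 1M tokens

def cost(model: str, tokens: int) -> float:
    """Dollar cost of processing `tokens` tokens on `model`."""
    return PRICES[model] * tokens / 1_000_000

monthly_tokens = 50_000_000  # hypothetical batch workload
for model in PRICES:
    print(f"{model}: ${cost(model, monthly_tokens):.2f}/month")

print(f"price ratio: {PRICES['mistral-large'] / PRICES['gemini-flash']:.0f}x")
```

At that volume it's $5/month versus $175/month, so even a modest quality edge has to justify a lot.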


Gemini is so bad it's literally unusable. I can't think of any situation in which I could accept the quality of the output of Gemini, no matter how low the cost.


Is this Gemini 2?

Because we've been testing it, and for our tagging use case it's been quite good. 4o still outperforms it by four points (90% vs. 86%), but that's acceptable for us given the 35x decrease in cost. We got 88% with gemini-pro, so we're still debating which one to finalize on, weighing speed, cost, and accuracy.
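For anyone running a similar comparison, the harness can be tiny; everything here (the `predict` stand-in and the toy labeled set) is hypothetical, not our actual eval data:

```python
# Minimal accuracy harness for comparing taggers on a labeled set.
def accuracy(predict, examples):
    """Fraction of (text, gold_tag) pairs that `predict` tags correctly."""
    hits = sum(1 for text, gold in examples if predict(text) == gold)
    return hits / len(examples)

# Toy labeled examples standing in for a real eval split.
examples = [
    ("great product",     "positive"),
    ("broken on arrival", "negative"),
    ("does the job",      "neutral"),
]

baseline = lambda text: "positive"  # trivial majority-class stand-in
print(f"baseline accuracy: {accuracy(baseline, examples):.0%}")
```

Swap `baseline` for a call into each model's API and the same few lines give you comparable percentages per model.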


Don't post AI slop in Hacker News comments.


The Le Chat web UI, after generating some code and text, slowed down to unusable levels for me (the UI itself; it probably has some JS that walks the whole DOM on every update). That's why I downloaded the app.

Generally, I feel like all the AI models are about the same at this point. Grok on Twitter has the ability to access real-time event information, but the rest seem interchangeable.

I pay for ChatGPT for the higher usage limits, and use the rest for different tasks to keep the histories separated (not because one is smarter than another).


Why is it not on the LLM leaderboard?

https://lmarena.ai/?leaderboard

Do they not take part, or is the list not complete?


They're on there, but under the model name rather than the service name. For instance, "Mistral-Large-2407" has an Elo of 1252 at the time of writing.


I've found that testing coding prompts in both Mistral and Claude lets me pick between them; they differ in some details of how they implement my goals (Python 3, numpy, matplotlib, JSON, data fetched with requests, CSV handling, linear regression).

They're similar in speed. I'm probably travelling a well-worn road, so I'm likely hitting some equivalent of an LRU cache.
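For reference, the kind of small task I mean is on this order; the data points here are made up:

```python
# A typical test prompt result: least-squares linear fit with numpy.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])  # roughly linear toy data

# Fit y ~ m*x + b by least squares (np.polyfit(x, y, 1) is equivalent).
A = np.vstack([x, np.ones_like(x)]).T
(m, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"slope={m:.2f} intercept={b:.2f}")
```

Both models handle tasks like this fine; the differences show up in choices like lstsq vs. polyfit, error handling, and plotting style.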


Just switched my paid plan over from ChatGPT to Mistral for the warm fuzzy feelings. C'est génial! ("It's great!")


I stopped doing business with Mistral when I got an API subscription and then watched one of their devs break, and try to fix, their OAuth live over several hours - for something they clearly hadn't tried in a non-prod environment.


Their websearch is bad.

See https://chat.mistral.ai/chat/01a9ee32-a8fe-4305-8f74-a5af959... as an example.

Try the same on other chats with websearch.


It's funny: Le Chat is gaslighting me in French. It claims Mistral develops ChatGPT when answering in French, but says OpenAI when answering in English.

For your amusement too: https://imgur.com/EgmQ0Ph


Mistral is great; I love their image generation and the speed at which it replies. They don't get as much hype as the other contenders, but it feels like they're quietly overtaking everyone.


Yes, Flux Ultra is great; too bad they don't allow access to the "raw" mode.

Here is me trying (and finally succeeding) to persuade Le Chat to generate an image using a filename as the prompt...

https://chat.mistral.ai/chat/9940f6bf-b2e5-4db2-bb64-adcbd9f...

I mean... "pretty please" as a debugging technique. I'm not exactly looking forward to my future conversations with my tea kettle and door knob.


Great app


0.1% market share.

Cute.



