Hacker News | new | past | comments | ask | show | jobs | submit | harisec's comments

I asked a few "hard" questions and compared o1 with Claude: https://github.com/harisec/o1-vs-claude


If you use DeepSeek Coder V2 0724 (which is #2 after Claude 3.5 Sonnet on the Aider leaderboard), the costs are very small. https://aider.chat/2024/07/25/new-models.html


Aider is great, I also use it almost daily. Thanks for writing it, Paul!


I agree, I suspect the HuggingFace dataset I've used is not that randomly distributed and mostly contains prompts related to those themes. How it works: I randomly select 5 prompts from the dataset and use them as seeds for new prompts, as sketched below. The complete DeepSeek prompt used to generate new prompts can be found here: https://github.com/harisec/llm-dreams
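A minimal sketch of that seeding step, assuming the Hugging Face datasets library and DeepSeek's OpenAI-compatible chat endpoint; the "Prompt" column name and the "deepseek-chat" model id are assumptions, not taken from the repo:

    import random
    from datasets import load_dataset
    from openai import OpenAI

    # Load the Stable Diffusion prompt dataset from Hugging Face
    ds = load_dataset("Gustavosta/Stable-Diffusion-Prompts", split="train")

    # Randomly pick 5 existing prompts to use as seeds
    seeds = random.sample(list(ds["Prompt"]), 5)

    # Ask DeepSeek (OpenAI-compatible endpoint) to brainstorm new prompts from the seeds
    client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")
    reply = client.chat.completions.create(
        model="deepseek-chat",  # assumed model id
        messages=[{
            "role": "user",
            "content": "Here are 5 example image prompts:\n"
                       + "\n".join(f"- {p}" for p in seeds)
                       + "\nBrainstorm 5 new, original image prompts in a similar style.",
        }],
    )
    print(reply.choices[0].message.content)

The actual system prompt used by the project is the one linked above; the message here is only a placeholder.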


All images in the LLM Dreams gallery are generated using the Flux model from Black Forest Labs, via the fal.ai API. The prompts for these images were brainstormed by DeepSeek, drawing inspiration from existing Stable Diffusion prompts found in the Gustavosta/Stable-Diffusion-Prompts dataset on Hugging Face. All the code was written by Claude.
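For reference, generating one such image through fal.ai might look roughly like this (a sketch assuming the fal-client Python package; the endpoint id "fal-ai/flux/dev", the example prompt, and the result fields are assumptions, not taken from the gallery's code):

    import fal_client

    # Render one image with Flux via the fal.ai API
    result = fal_client.subscribe(
        "fal-ai/flux/dev",  # assumed Flux endpoint id; check fal.ai's model catalog
        arguments={"prompt": "a surreal dreamscape of floating libraries at dusk"},
    )
    print(result["images"][0]["url"])  # URL of the generated image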


The article is wrong; CVE-2022-21587 is an Oracle vulnerability, not a Solaris one.


Unfortunately it is both: the CVE is for Oracle software, but they're running it on Solaris servers.


Cox has a vulnerability disclosure program. https://www.cox.com/aboutus/policies/cox-security-responsibl...


You still don't know the exact date of the potential acquisition, so you cannot make a successful trade. I would say it's not insider trading.


Researchers from Carnegie Mellon University found that it's possible to automatically construct adversarial attacks on LLMs that force them to answer any question, and that an unlimited number of such attacks can be generated, making them very hard to protect against.


Same here, this is ridiculous

