I have never used HARO, though it's being promoted as a way for SEOs to get backlinks by being cited by journalists. That has always made me wonder how expert status is being vetted. I suppose it is up to the individual journalist to do so.
Well, it's fundamentally inspired by what these companies are trying to do. However, we are an SDK, so we want to offer a toolset that other apps can use to build similar experiences.
Hey, yes, that's somewhat unavoidable given that it simply takes OpenAI that long to generate images. I think most users expect that. We tried to be as friendly as possible to parallelization so users aren't blocked while working in the editor.
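To illustrate what "friendly to parallelization" can look like in practice, here is a minimal sketch that fires several image generation requests concurrently instead of one at a time. It assumes direct use of the OpenAI Python SDK (`AsyncOpenAI`, `images.generate`) rather than our actual SDK internals, and the prompts are made up for the example.

```python
import asyncio
from openai import AsyncOpenAI  # assumes the OpenAI Python SDK is installed

client = AsyncOpenAI()

async def generate_image(prompt: str) -> str:
    # Each generation takes a while on OpenAI's side, so we await it
    # without blocking any other work happening in the editor.
    result = await client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    return result.data[0].url

async def main() -> None:
    prompts = ["a red fox in watercolor", "a city skyline at dusk"]
    # Kick off all generations concurrently; the slow part runs in
    # parallel instead of serially per image.
    urls = await asyncio.gather(*(generate_image(p) for p in prompts))
    print(urls)

asyncio.run(main())
```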
There was recently a post by Harper Reed that goes into how to prepare the right prompts before you start coding with LLMs and how to break the work down into smaller, debuggable steps. It might kill the vibe, but it keeps you in the driver's seat.
https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/
I find it funny that this message has been resurfacing on the front page once or twice a year for at least 10 years now.
Product quality is often not the main argument when deciding on a tech stack; it enters only indirectly. Barring any special technical requirements, what matters in the beginning is:
- Can we build quickly without making a massive mess?
- Will we find enough of the right people who can and want to work with this stack?
- Will this tech stack continue to serve us in the future?
Imagine it's 2014 and you're deciding between two hot new frameworks, Ember and React; this is not just a question of what is shiny and new.
Great idea! I would totally swipe through. One small recommendation: consider having the title and a short description written by an LLM, making them as dumb and sensational as possible, for a true TikTok-like experience. For instance, this title and abstract could be transformed into the following.
Instead of:
"Benchmarking Multimodal RAG through a Chart-based Document Question-Answering Generation Framework
Multimodal Retrieval-Augmented Generation (MRAG) enhances reasoning capabilities by integrating external knowledge. However, existing benchmarks primarily focus on simple image-text interactions, overlooking complex visual formats like charts that are prevalent in real-world applications. In this work, we introduce a novel task, Chart-based MRAG, to address this limitation. To semi-automatically generate high-quality evaluation samples, we propose CHARt-based document question-answering GEneration (CHARGE), a framework that produces evaluation data through structured keypoint extraction, crossmodal verification, and keypoint-based generation. By combining CHARGE with expert validation, we construct Chart-MRAG Bench, a comprehensive benchmark for chart-based MRAG evaluation, featuring 4,738 question-answering pairs across 8 domains from real-world documents. Our evaluation reveals three critical limitations in current approaches: (1) unified multimodal embedding retrieval methods struggles in chart-based scenarios, (2) even with ground-truth retrieval, state-of-the-art MLLMs achieve only 58.19% Correctness and 73.87% Coverage scores, and (3) MLLMs demonstrate consistent text-over-visual modality bias during Chart-based MRAG reasoning. The CHARGE and Chart-MRAG Bench are released at https://github.com/Nomothings/CHARGE.git."
Give me:
"
"AI Fails Hard at Reading Charts: New Study Exposes Shocking Weaknesses!"
A groundbreaking study reveals that even the smartest AI models struggle with charts, scoring just 58% accuracy—proving your brain might still be better than AI at decoding data!
"