A "normal" ocean-going Ro-Ro ship would be over 200m long and quite a bit higher. Typically only the lowest deck can carry trucks and heavy loads, and the upper decks carry cars. Their solution of only having the heavy-load deck is sensible, since they need a low center of gravity to counteract the sails, and any added deck height hurts the sailing characteristics.
With all that said, there is not much room to scale it up. They could make it a bit longer, but probably not easily: risk in bad weather, structural integrity demands from the added sails, and so on may quickly add up.
I did some searching around for this (not a subject matter expert).
Basically the sail area grows with length squared, but ship mass and resistance grow roughly with length cubed. So propulsion gets relatively weaker as the ship gets bigger.
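The square-cube argument can be sketched with toy numbers. Assuming geometric similarity (sail area proportional to L², displacement proportional to L³) and purely illustrative constants, the sail-area-per-tonne ratio falls off as 1/L:

```python
# Square-cube scaling sketch. The constants k_area and k_mass are
# made up for illustration; only the exponents matter for the argument.
def sail_area(L, k_area=0.5):
    """Sail area in m^2, assuming it scales with length squared."""
    return k_area * L**2

def displacement(L, k_mass=0.05):
    """Displacement in tonnes, assuming it scales with length cubed."""
    return k_mass * L**3

for L in (50, 100, 200, 400):
    ratio = sail_area(L) / displacement(L)
    print(f"L={L:3d} m  sail area per tonne = {ratio:.3f} m^2/t")
```

Doubling the length halves the sail area available per tonne of ship, which is why a full-sized sailing freighter would need absurdly tall rigs to compensate.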
To move a full sized freighter, you would need a mast the size of a skyscraper and we don’t currently have materials that can support that.
If we did have a material that supported sails that large, it would still be a problem, because you are functionally making the ship top-heavy (when wind is applied), increasing the likelihood that the freighter rolls over.
It scales horribly. Essentially, as the ship gets larger, the sail area has to be even larger (proportionally) to maintain the same speed. Further, the larger the sail, the more susceptible it is to damage and the harder it is to control.
Since you specifically were wondering if something like this exists, I feel okay mentioning my own tool https://keenious.com since I think it might fit your needs.
Basically we are trying to combine the benefits of chat with normal academic search results using semantic search and keyword search. That way you get the benefit of LLMs but you’re actually engaging with sources like a normal search.
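For anyone curious what "semantic search plus keyword search" means in practice: this is not Keenious's actual implementation (I don't know their internals), just a minimal sketch of the general hybrid-search idea, with a crude term-overlap stand-in for keyword scoring and toy vectors standing in for embeddings:

```python
# Minimal hybrid-search sketch (illustrative only, not Keenious's code).
# Real systems would use BM25 for keywords and learned embeddings for
# the semantic side; here both are toy versions.
import math

def keyword_score(query, doc):
    """Crude keyword relevance: fraction of query terms present in the doc."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def cosine(a, b):
    """Cosine similarity between two vectors (stand-in for embeddings)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_score(query, doc, q_vec, d_vec, alpha=0.5):
    """Weighted blend of keyword and semantic relevance."""
    return alpha * keyword_score(query, doc) + (1 - alpha) * cosine(q_vec, d_vec)
```

The point of blending is that keyword matching keeps results grounded in what the user literally asked for, while the semantic side catches relevant documents that use different wording.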
Claude with location access is now an amazing traveling companion. Sharing my experience of using it in Hong Kong, in an area I knew nothing about. The best thing is that it doesn't ruin the experience with images etc. that, in my opinion, spoil the locations.
Isn’t this supposed to be a short technical blog? Why does it seem like they’re a salesman and it’s a sales pitch?
> "We are generating more code than ever. With LLMs like Claude already writing most of Anthropic’s code, the challenge is no longer producing code, it is understanding it."
The first sentence is already obviously AI generated, and reading through it, it is obviously completely written by AI, to the point of being distracting.
I understand the author probably feels that AI is better at writing than they are, but I would heavily recommend they use their own voice.
I’ve personally started trying to pick out the points someone prompted an AI with to generate the text (the actual thoughts of the author), so that I can more easily skim past the AI-generated slop such as: "… you’ll get the env setup, required services, and dependency graph with citations to README, Dockerfile, and scripts, so you can hit the ground running".
All of us will be forgotten eventually, after your great-grandkids forget about you. What's the point in trying to keep your name alive when you'll be too dead to care? Focus on the life you live, not the one after your death.
Because it has many of the typical 4o stylistic tics like 'it's not X, it's Y' or enumeration or the em dashes, or the twist ending.
It's not 100% unedited ChatGPT and far from the most blatant instance that has caught my eye (they've started showing up in the New York Times and New Yorker as well, have you noticed that?), but certainly sounds like that was used: "Writing compels us to think — not in the chaotic, non-linear way our minds typically wander, but in a structured, intentional manner." "This is not merely a philosophical observation; it is backed by scientific evidence." "Importantly, if writing is thinking, are we not then reading the ‘thoughts’ of the LLM rather than those of the researchers behind the paper?" "overcoming writer’s block, provide alternative explanations for findings or identify connections between seemingly unrelated subjects."
(Note that this is particularly ironic because as the op-ed notes, if they did use it, they are required by Nature to disclose this... https://www.nature.com/articles/d41586-023-00191-1 But of course, how would anyone ever prove they did so? You know how difficult it is to get Nature to retract even blatant fraud.)
I think it really depends on the use case. It is well known that most users only look at and engage with the top few (1-3) results in a search. If you can move the most relevant result from, let’s say, position 7 to position 2, that can have a big impact on the user experience. And I know they market this for RAG, but I think that’s just marketing and this is just as relevant for traditional search.
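A quick way to quantify why that rank jump matters: under reciprocal-rank-style metrics (and real click-through curves are even steeper at the top), position 2 is worth several times position 7. Toy numbers, not real click data:

```python
# Illustrative only: reciprocal rank as a crude proxy for how much
# attention a result gets at a given position.
def reciprocal_rank(rank):
    """1/rank: a standard per-query relevance metric (the basis of MRR)."""
    return 1.0 / rank

before = reciprocal_rank(7)  # relevant result buried at position 7
after = reciprocal_rank(2)   # same result reranked to position 2
print(f"RR before: {before:.3f}, after: {after:.3f}, gain: {after / before:.1f}x")
```

By this proxy the 7-to-2 move is a 3.5x improvement for that query, which is why reranking a handful of positions near the top can be worth more than broad recall gains further down.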
Spending too much time on HN and other spaces (including offline) where people talk about what they're doing. Making LLM-based things has also been my job since pretty much the original release of GPT-3.5, which kicked off the whole industry, so I have an excuse.
The big giveaway is that everyone who has tried it agrees that it's clearly the best agentic coding tool out there. The very few who switch back to whatever they were using before (whether an IDE fork, extension, or terminal agent) do so because of the costs.
Relevant post on the front page right now: A flat pricing subscription for Claude Code [0]. The comment section supports the above as well.
I personally tried it and I found it way more confusing to use compared to Cursor with Claude 3.7 Sonnet. The CLI interface seems to me to lend itself more to «vibe coding», where you never actually work with or look at the actual code. That is why I think Cursor and IDEs are more popular than CLI-only tools.
Together with 3.7 Sonnet. And the claim was that it is rapidly gaining ground, not that it sparked initial interest. I still don’t see much proof of adoption. This is actually the first I’ve heard of anyone actively using it since its launch.
>This is actually the first I’ve heard about anyone actually actively using it
I've been reaching for Claude Code first for the last couple of weeks. They had offered me a $40 credit after I tried it and didn't really use it, maybe 6 weeks ago, but since then I've been using it a lot. I've spent that credit and another $30, and it's REALLY good. One thing I like about Claude Code is that you can run "/init" and it will create a "CLAUDE.md" that saves off its understanding of the code, and then you can modify it to give it some working knowledge.
I've also tried Codex from OpenAI with o4-mini, and it works very well too, though I have had it crash on me, which Claude has not.
I did try Codex with Gemini 2.5 Pro Preview, but it is really weird. It seems to not be able to do any editing, it'll say "You need to make these edits to this file (and describe high level fixes) and then come back when you're done and I'll tell you the edits to do to this other file." So that integration doesn't seem to be complete. I had high hopes because of the reviews of the new 2.5 Pro.
I also tried some Claude-like use in the AI panel in Zed yesterday and made a lot of good progress; it seemed to work pretty well, but then at some point it zeroed out a couple of files. I think I might have reached a token limit: it was saying "110K out of 200K" but then something else said "120K", and I wonder if that confused it. With Codex you can compact the history; I didn't see that in Zed. Then at some point my Zed switched from editing to needing me to accept every change. I used nearly the entire trial Zed allowance yesterday asking it to implement a Galaga-inspired game, with varying success.