maccam912's comments

If I am understanding correctly you are seeing https://en.wikipedia.org/wiki/Crater_chain which are craters caused by debris blasted out when another crater is formed.

Thank you! Glad to see there's a Wikipedia article about it. And good to learn the formal name for this formation: "catena" (plural "catenae").

I don't think Anthropic has a problem with you using a regular pay-per-token API key with opencode. The issue is letting someone use their "Log in with Claude" as if it were a regular API key.


I've had a good experience with https://github.com/obra/superpowers. At first glance this looks similar. Has anyone tried both who can offer a comparison?


I've used both. From my experience, gsd is a highly overengineered piece of software that unfortunately does not get shit done, burns limits and takes ages while doing so. Quick mode does not really help, because it kills the point of gsd; you can't build full software on ad-hocs. I've used plain markdown planning before, but it was limiting and not very stable. Superpowers looks like a good middle ground.


> gsd is a highly overengineered piece of software that unfortunately does not get shit done, burns limits and takes ages while doing so

That was my impression of superpowers as well. Maybe not highly overengineered but definitely somewhat. I ended up stripping it back to get something useful. Kept maybe 30%.

There's a kernel of a good idea in there but I feel it's something that we're all gradually aligning on independently, these shared systems are just fancy versions of a "standard agentic workflow".


My instinct is to blame these agent frameworks as well but at some point we have to start blaming Claude or Claude Code for engaging in these endless planning loops which burn tokens with no regard. The future of these coding models will eventually need to start factoring in how to use and engage with these skills more competently (assuming that's possible and they aren't always just aimless yesmen).


It's one of those things where having a structure is really helpful - I've used some similar prompt scaffolds, and the difference is very noticeable.

Another great technique is to use one of these structures in a repo, then task your AI with overhauling the framework using best practices for whatever your target project is. It works great for creative writing, humanizing, songwriting, technical/scientific domains, and so on. In conjunction with agents, these are excellent to have.

I think they're going to be a temporary thing - a hack that boosts utility for a few model releases until there's sufficient successful use cases in the training data that models can just do this sort of thing really well without all the extra prompting.

These are fun to use.


I've tried both. Each has pros and cons. Two things I don't like about superpowers: it writes all the code into the implementation plan at the plan step, and then the subagents basically just rewrite that code back into the files. And I have to ask Claude to create a progress.md file to track progress if I want to work across multiple sessions. GSD pretty much solved these problems for me, but the downside of GSD is that it takes too many turns to get something done.


There is a fork that uses Claude Code-native features and tracks progress and task dependencies natively: https://github.com/pcvelz/superpowers


If you use it I'm curious if you find it limited at all from lagging behind superpowers? For instance I opened up one skill at random and they haven't yet pulled in the latest commit from last week.

I doubt any hot off the press features are *that* important, but am curious if the customizations of the fork are a net positive considering this.


I tried Superpowers for my current project - migrating my blog from Hugo to Astro (with AstroPaper theme). I wrote the main spec in two ways - 1) my usual method of starting with a small list of what I want in the new blog and working with the agent to expand on it, ask questions and so on (aka Collaborative Spec) and 2) asked Superpowers to write the spec and plan. I did both from the working directory of my blog's repo so that the agent has full access to the code and the content.

My findings:

1. The spec created by Superpowers was very detailed (described the specific fonts and color palette) and included the exact content of config files, commit messages, etc. But it missed a lot of things, like analytics and the RSS feed.

2. Superpowers wrote the spec and plan as two separate documents which was better than the collaborative method, which put both into one document.

3. Superpowers recommended an in-place migration of the blog whereas the collaborative spec suggested a parallel branch so that Hugo and Astro can co-exist until everything is stable.

A few more differences are written up in [0].

In general, I liked the aspect of developing the spec through discussion rather than one-shotting it; it let me add things to the spec as I remembered them. It felt like a more iterative discovery process, versus needing to get everything right the first time. That might just be a personal preference though.

At the end of this exercise, I asked Claude to review both specs in detail, it found a few things that both specs missed (SEO, rollback plan etc.) and made a final spec that consolidates everything.

[0] https://annjose.com/redesign/#two-specs-one-project


I usually ask Gemini to review the spec as well. Sometimes it catches things I missed even after going through a few times.


I'm a big fan of Research → Plan → Implement, like this peak build-in-public, multi-foundation-model cross-check approach:

https://x.com/i/status/2033368385724014827


I don't get why people need a cli wrapper for this. Can't you just use Claude skills and create everything you need?


What do you mean by cli wrapper?

Superpowers and gsd are Claude Code plugins (providing skills).


Superpowers is literally a bunch of skills packaged in a Claude plugin


Right on, I was going off the OP's GSD link, which looks like the def of a cli wrapper to me. Hadn't seen superpowers before, seems way too deterministic and convoluted, but you're right, not a cli wrapper.


There's a CLI tool that writes the agent skills into the right folder. The other option would be to have everybody manually unzip a download into a folder which they might not remember.


Yes, and IMO Superpowers is better when you want to Get Not-Shit Done.

Get Shit Done is best when you're an influencer and need to create a Potemkin SaaS overnight for tomorrow's TikTok posts.


I don't have one going but I do get the appeal. One example might be that it is prompted behind the scenes every time an email comes in and it sorts it, unsubscribes from spam, other tedious stuff you have to do now that is annoying but necessary. Well that is something running in the background, not necessarily continuously in the sense that it's going every second, but could be invoked at any point in time on an incoming email. That particular use case wouldn't sit well with me with today's LLMs, but if we got to a point where I could trust one to handle this task without screwing up then I'd be on board.
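The triggered-per-email setup described above can be sketched as an event handler. This is a minimal illustration, not any real product's API: `Email`, `classify`, and `on_email_received` are all hypothetical names, and the classifier is a rule-based stub standing in for what would be an LLM call.

```python
# Sketch of an event-driven email triage hook (hypothetical names throughout).
# In a real setup, classify() would call an LLM; here it is a keyword stub
# so the control flow is runnable.

from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

def classify(email: Email) -> str:
    """Stand-in for an LLM call that labels an incoming email."""
    text = (email.subject + " " + email.body).lower()
    if "unsubscribe" in text:
        return "spam"
    if "invoice" in text or "receipt" in text:
        return "finance"
    return "inbox"

def on_email_received(email: Email) -> str:
    """Invoked once per incoming message -- not a continuous polling loop."""
    label = classify(email)
    if label == "spam":
        # e.g. follow the unsubscribe link, then archive
        return f"unsubscribed:{email.sender}"
    return f"filed:{label}"
```

The point is the trigger model: nothing runs every second, but the handler can fire at any moment an email arrives.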


I liked the part about attention being the scarce resource now. Everyone is competing for your attention. But then I see a world in which openclaw is managing emails for people, searching the internet for them and shopping on their behalf. How long until we start seeing advertising specifically targeting AI instead of humans?


It's often used as a way to verify identity. Historically it's been one of the more secret pieces of information about someone, so while name and birthday are not very secret, if someone wanted to steal an identity, it's generally the SSN that is hardest to figure out. As a result though, I think a lot of places treat it as "If you know the SSN, then you are who you say you are."


WSPR on HF makes sense down here on the surface of the planet because certain ranges of frequencies (not always the same range, but generally always within HF) can bounce off of upper atmosphere layers and pinball back and forth to get signals to or from someone who couldn't be seen line-of-sight because of the curvature of the Earth. For line-of-sight work, 2.4 GHz would in theory work as well as anything, but another trick WSPR has is that it doesn't allow for arbitrary data to be sent. Sender and receiver encode the limited information in an agreed-upon way, and then it takes a long time, like minutes, to send that little bit of data. Very high redundancy.


Yeah, our balloon was recorded by WSPR receivers thousands of kilometers away when it crossed the Arctic Circle for a day - we would have no data if it were dependent on line of sight, or even just on flying over inhabited territory.

And indeed, the transmissions take minutes to send a couple dozen bits of data. But the modulation is done in such a clever way that it does not really matter - you know where the probe with your callsign was, how high, its ground speed, temperature and panel voltage. There are quite aggressive heuristics applied (e.g. different precision for different altitudes, as you don't really expect it to stay low for any amount of time and survive, and position via grid squares, with a coarse position still available even if you have incomplete data from a transmission), so the few dozen bits are enough. :)

It is all super clever and hats off for those who developed this system. :)
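To give a feel for how far a few dozen bits can stretch, here is a toy bit-packing sketch. To be clear, this is NOT the actual WSPR telemetry encoding - it's just an illustration of the idea that a 4-character Maidenhead grid square plus a deliberately coarse altitude fits in about 20 bits.

```python
# Illustrative bit-packing in the spirit of balloon telemetry over WSPR.
# NOT the real protocol -- just a demo of how ~20 bits carry a coarse
# position plus altitude when precision is chosen to match what matters.

def encode(grid: str, altitude_m: int) -> int:
    """Pack a 4-character Maidenhead grid square and altitude into one int.

    Grid: field pair (18x18) + square pair (10x10) -> 32,400 states (~15 bits).
    Altitude: 500 m steps, 0..31 -> covers 0-15.5 km in 5 bits (coarse on purpose).
    """
    field = (ord(grid[0]) - ord('A')) * 18 + (ord(grid[1]) - ord('A'))
    square = int(grid[2]) * 10 + int(grid[3])
    grid_code = field * 100 + square          # 0..32399
    alt_code = min(altitude_m // 500, 31)     # saturate at the top band
    return grid_code * 32 + alt_code          # fits in 20 bits

def decode(code: int) -> tuple[str, int]:
    """Recover the grid square and the quantized altitude."""
    grid_code, alt_code = divmod(code, 32)
    field, square = divmod(grid_code, 100)
    f1, f2 = divmod(field, 18)
    s1, s2 = divmod(square, 10)
    grid = chr(ord('A') + f1) + chr(ord('A') + f2) + str(s1) + str(s2)
    return grid, alt_code * 500
```

The real system's tricks (altitude-dependent precision, partial decodes) are cleverer still, but the trade is the same: throw away precision you don't need and the payload shrinks dramatically.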


You know that and I know that, it was a Socratic question aimed at OP ;-)

In the olden days we did QRSS, FSK Morse with a dot rate on the order of minutes.


I essentially do this, but on a state surplus auction site. It's just a scheduled action that searches for something, e.g. old Lego kits, once a week. Usually nothing comes up, but when kits do appear, I know about it.
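The whole scheduled-search idea fits in a few lines. This is a hypothetical sketch: `fetch_listings` and the notification step would be whatever scraper and alert channel you actually use; only the filtering is shown.

```python
# Sketch of a weekly scheduled search (run it from cron, GitHub Actions,
# etc.). The listing source and notification channel are hypothetical;
# only the keyword filter is concrete here.

def match_listings(listings: list[dict], keywords: list[str]) -> list[dict]:
    """Return listings whose title mentions any keyword, case-insensitively."""
    kws = [k.lower() for k in keywords]
    return [l for l in listings if any(k in l["title"].lower() for k in kws)]

def weekly_check(listings: list[dict]) -> list[dict]:
    """One scheduled run: filter, and (in a real setup) notify on hits."""
    hits = match_listings(listings, ["lego"])
    # if hits: send_email(hits)  # hypothetical alert step
    return hits
```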


But then brands could buy their own products back for cheaper and just get a real life infinite money glitch?


This actually happened to some restaurants that found their menus listed on DoorDash. The restaurant owners were able to make a fine profit off DoorDash's arbitrage scheme.


Indeed! Discussed at the time: https://news.ycombinator.com/item?id=23216852


It's worth pointing out that it only worked because DoorDash scraped their menu incorrectly (using AI, maybe?) and used the price of a plain pizza for specialty pizzas. Also, it was a trial period where they waived all their usual fees.


Don’t worry. If Amazon decided to undercut by selling at a loss, they would absolutely put it in their ToS that retailers cannot exploit this loophole and they would sue to enforce their ToS.


Retailers could put into their own ToS that they are exempt from such clauses when buying back products that were originally bought from them.


I like this. “By purchasing from us you agree that you cannot enforce your ridiculous terms of service and if you try, you also owe us a pony.”


These manufacturers never signed any ToS, and the most Amazon could do to retaliate would be to de-list the product that they never asked to be listed in the first place.


When the manufacturer buys their own product via Amazon’s service they would become subject to their TOS as a buyer.


I guess they'll just have to use some service to buy for them instead. ;)


It looks like a good idea; this works better for refrigerators than for pizza.


To answer the question a different way, I think you are asking how we know the proof actually matches the description the human provided. And I'd say we can't know for sure, but the idea is that you can write the problem statement pretty concisely and check yourself that it is accurate, i.e. "There are an infinite number of primes" or whatever. Then even if an LLM goes off and makes up a Lean proof wildly different from your description, if Lean says the proof is valid, you have proven the original statement. I guess in theory the actual proof could be way different than what you thought it would be, but ultimately all the logic will still check out.
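Concretely, the statement/proof split looks like this. A sketch assuming Lean 4 with Mathlib, where `Nat.exists_infinite_primes` happens to be the library lemma; the point is that only the line before `:=` needs human eyeballs.

```lean
-- The theorem statement is the part a human checks by eye. The proof term
-- after `:=` could be arbitrarily strange or machine-generated, but if
-- Lean accepts it, the statement as written is proved.
import Mathlib

theorem infinitely_many_primes : ∀ n : ℕ, ∃ p, n ≤ p ∧ p.Prime :=
  Nat.exists_infinite_primes
```

So an LLM could replace the proof term with anything it likes; as long as the checker accepts it against that exact statement, your concise claim stands.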

