> No human could read all of this in a lifetime. AI consumes it in seconds.

And therefore it's impossible to test the accuracy if it's consuming your own data. AI can hallucinate on any data you feed it, and it's been proven that it doesn't summarize, but rather abridges and abbreviates data.

In the author's example:

> "What patterns emerge from my last 50 one-on-ones?" AI found that performance issues always preceded tool complaints by 2-3 weeks. I'd never connected those dots.

Maybe that's a pattern from 50 one-on-ones. Or maybe it's only in the first two and the last one.

I'd be wary of using AI to summarize like this and expecting accurate insights.


> it's been proven that it doesn't summarize, but rather abridges and abbreviates data

Do you have more resources on that? I'd love to read about the methodology.

> And therefore it's impossible to test the accuracy if it's consuming your own data.

Isn't that only the case if the result is hard to verify? If it's a result that's hard to produce but easy to verify, a class many problems fall into, you'd just need to check the synthesized output.

If you ask it "given these arbitrary metrics, what is the best business plan for my company?" It'd be really hard to verify the result. It'd be hard to verify the result from anyone for that matter, even specialists.

So I think it's less about expecting the LLM to do autonomous work and more about using LLMs to more efficiently help you search the latent space for interesting correlations, so that you and not the LLM come up with the insights.


Look into the emerging literature around "needle-in-a-haystack" tests of LLM context windows. You'll see part of what the poster you're replying to is describing. This can also be described as testing "how lazy is my LLM being when it comes to analyzing the input I've provided to it?" Hint: they can get quite lazy! I agree with the poster you replied to that "RAG my Obsidian"-type experiments with local models are middling at best. I'm optimistic things will get a lot better in the future, but it's hard to trust a lot of the 'insights' this blog post talks about without intense QA-ing (if the author did it, which I doubt, considering their writing is mostly AI-assisted as well).

> If you ask it "given these arbitrary metrics, what is the best business plan for my company?" It'd be really hard to verify the result. It'd be hard to verify the result from anyone for that matter, even specialists.

Hard to verify something so subjective, for sure. But a specialist will be applying intelligence to the data. An LLM is just generating random text strings that sound good.

The source for my claim about LLMs not summarizing but abbreviating is on hn somewhere, I'll dig it out

Edit: sorry, I tried but couldn't find the source.


> But a specialist will be applying intelligence to the data. An LLM is just generating random text strings that sound good.

I'd only make such a claim if I could demonstrate that human text is a result of intelligence and LLM text is not, because really, what's the actual difference? How isn't an LLM "intelligent" when it can clearly help me make sense of information? Note that this isn't to say that it's conscious or not. But it's definitely intelligent. The text output is not only coherent, it's right often enough to be useful.

Curiously, I'm human, and I'm wrong a lot, but I'm right often enough to be a developer.


"AI can hallucinate on any data you feed it, and it's been proven that it doesn't summarize, but rather abridges and abbreviates data."

Have you ever met a human? I think one of the biggest reasons people become bearish on AI is that their measure of whether it's good/useful is that it needs to be absolutely perfect, rather than simply superior to human effort.


> one of the biggest reasons people become bearish on AI is that their measure of whether it's good/useful is that it needs to be absolutely perfect, rather than simply superior to human effort.

Meanwhile people bullish on AI don't care if it's perfect or even vastly inferior to human effort, they just want it to be less expensive/troublesome and easier to control than a human would be. Plenty of people would be fine knowing that AI fucks up regularly and ruins other people's lives in the process as long as in the end their profits go up or they can still get what they want out of it.


I'm not saying it needs to be perfect, but the guy in this article is putting a lot of blind faith in an algorithm that's proven time and time again to make things up.

The reason I have become "bearish" on AI is because I see people repeatedly falling into a trap of believing LLMs are intelligent and actively thinking, rather than just very, very finely tuned random noise. We should pay attention to the A in AI more.


> putting a lot of blind faith in an algorithm that's proven time and time again to make things up

Don't be ridiculous. Our entire system of criminal justice relies HEAVILY on the eyewitness testimony of humans, which has been demonstrated time and again to be entirely unreliable. Innocents routinely rot in prison and criminals routinely go free because the human brain is much better at hallucinating than any SOTA LLM.

I can think of no more critical institution that ought to require fidelity of information than criminal justice, and yet we accept extreme levels of hallucination even there.

This argument is tired, played out, and laughable on its face. Human honesty and memory reliability are a disgrace, and if you wish to score points against LLMs, comparing their hallucination rates to those of humans is likely going to result in exactly the opposite conclusion that you intend others to draw.


> the human brain is much better at hallucinating than any SOTA LLM

Aren't the models trained on human content and human intervention? If humans hallucinated that content, and LLMs then hallucinate even slightly on top of that fallible human content, wouldn't the LLMs' hallucinations still be, if only slightly, greater than humans'? Or am I missing something here, where LLMs somehow correct the original human hallucinations and thus produce less hallucinated content?


Right now AI is inferior, not superior, to human effort. That's precisely why people are bearish on it.

I don't think that's obvious. In 20 minutes, for example, deep research can write a report on a given topic much better than an analyst can produce in a day or two. It's literally cheaper, better, and faster than human effort.

Faster? Yes. Cheaper? Probably, but you need to amortize in all the infrastructure and training and energy costs. Better? Lol no.

> but you need to amortize in all the infrastructure and training and energy costs

The average American human consumes 232 kWh of all-in energy (food, transport, HVAC, construction, services, etc.) daily.

If humans want to get into a competition over lower energy input per unit of cognitive output, I doubt you'd like the result.

> Better? Lol no

The "IQ equivalent" of the current SOTA models (Opus 4.5, Gemini 3 Pro, GPT 5.2, Grok 4.1) is already a full 1SD above the human mean.

Nations and civilizations have perished or been conquered all throughout history because they underestimated and laughed off the relative strength of their rivals. By all means, keep doing this, but know the risks.


What do you mean by "better" in this context?

It synthesizes a more comprehensive report, using more sources, more varied sources, more data, and broader insights than a human analyst can produce in 1-2 days of research and writing.

I'm not confused about this. If you don't agree, I will assume it's probably because you've never employed a human to do similar work in the past. Because it's not particularly close. It's night and day. *Note that I'm not saying 20 minutes of deep research beats 9 months of investigative journalism with private interviews with primary sources or anything like that. I'm talking about asking an analyst on your team to do a deep dive into XYZ and have something on your desk tomorrow EOD.


Weird, I'm an attorney and no one is getting rid of associates in order to have LLMs do the research, much less so when they actually hallucinate sources (something associates won't do). I can't imagine that being significantly different in other domains.

> I can't imagine that being significantly different in other domains.

It’s not. There is no industry where AI performs “better” than humans reliably without torturing the meaning of the word (for example, OP says AI is better at analysis iff the act of analysis does not include any form of communication to find or clarify information from primary sources)


> It synthesizes a more comprehensive report, using more sources, more varied sources, more data, and broader insights than a human analyst can produce in 1-2 days of research and writing.

> Note that I'm not saying 20 minutes of deep research beats 9 months of investigative journalism with private interviews with primary sources or anything like that.

I like the idea that AI is objectively better at doing analysis if you simply assume that it takes a person nine months to make a phone call


It has more words put together in seemingly correct sentences, so it's long enough his boss won't actually read it to proof it.

Similar to P/NP, verification can often be faster than solving. For example, you can then ask the AI to give you the list of tool complaints and the performance issues. Then a text search can easily validate the claim.

AI is a new kind of bulk tool; you need to know how to use it well, and context management is a huge part of that. For that 1-1 example, you would loop over the notes with a fresh context each time, either via subagents or a literal for loop, to prevent the 'first two and last one' issue. Then you'd look at those 1-1 summaries to make the determination (see the sketch at the end of this comment).

Humanity has gotten amazing results from unreliable stochastic processes; managing humans in organizations is an example of that. Something new that is not completely deterministic can still be incredibly useful.
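
A minimal sketch of that literal for loop in Python, assuming plain-text notes in a one-on-ones/ directory and a hypothetical llm_complete() wrapper standing in for whatever model API you actually use:

    # Sketch only: summarize each 1:1 note in its own fresh context, then
    # analyze just the summaries. llm_complete() is a hypothetical stand-in
    # for your model provider's completion call.
    from pathlib import Path

    def llm_complete(prompt: str) -> str:
        """Hypothetical wrapper around an LLM API call."""
        raise NotImplementedError

    summaries = []
    for note in sorted(Path("one-on-ones").glob("*.md")):
        # Fresh prompt per note, so early notes can't crowd out later ones.
        summaries.append(note.name + ": " + llm_complete(
            "Summarize this 1:1 note in 2-3 sentences, quoting any tool "
            "complaints or performance issues verbatim:\n\n" + note.read_text()
        ))

    # Only now ask for cross-note patterns, over the much smaller summaries,
    # and demand citations you can check with a plain text search.
    print(llm_complete(
        "Here are per-meeting summaries. List recurring patterns and cite the "
        "file names supporting each claim:\n\n" + "\n".join(summaries)
    ))

Because each cross-note claim has to cite file names, the 'first two and last one' failure mode is a plain text search away from being caught.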


I think as long as you keep a skeptical loop and force the model to cite or surface raw notes, it can still be useful without being blindly trusted

> “I'd be wary of using AI to summarize like this and expecting accurate insights.”

Sure, but when do you ever get accurate results from an iterative process? It can happen at the beginning, or at the end when you're bored or have exhausted your powers of interrogation. Nevertheless, your own reasoning will tell you if the AI result is good, great, acceptable, or trash.

For example, you can ask Chat to summarize all 50 with names, dates, 2-3 sentence summaries, and 2-3 pull quotes. That can be sufficient to jog your memory, and therefore to validate or invalidate Chat's conclusion.

That’s the tool, and its accuracy is still TBD. I for one am not ready to blindly trust our AI overlords, but darn if a talking dog isn’t worth my time if it can make an argument with me.


> ...and it's been proven that it doesn't summarize, but rather abridges and abbreviates data.

I don't really know what this means, or if the distinction is meaningful for the majority of cases.


Your colleagues using the tech will be far ahead of you soon, if they aren’t already.

Far ahead in producing bugs, far ahead in losing their skills, far ahead in becoming irrelevant, far ahead in being unable to think critically, that's absolutely right.

The new tools have sets of problems they are very good at, sets they are very bad at, and they are generally mediocre at everything else. Learning those lessons isn't easy, takes time, and will produce bugs. If you aren't making those mistakes now with everyone else, you'll be making them later when you do decide to start catching up, and it will be more noticeable then.

Disagree. For the tools to become really useful (and fulfill the expectations of the people funding them) they will need to produce good results without demanding years of experience understanding their foibles and shortcomings.

I think there's a chance the people funding this make the returns they hope for, but it'll be a new business model that gets them there, not producing better results. The quality of results has been roughly stable for too long to expect regular, meaningful increases anymore.

The AI hucksters promise us that these tools are getting exponentially better (lol) so the catch up should be exponentially reduced.

I see the sarcasm and agree with it, but just in case anyone sees this: we were getting exponentially better back in the early days, but we're very much hitting diminishing returns now. We're probably not going to see any more large improvements with this tech.

And all of those things (good at, bad at, the lessons learned on current models' current implementations) can change arbitrarily with model changes, nudges, guardrails, etc. Not sure that outsourcing your skillset on the current foundation of sand is long-term smart, even if it's great for a couple of months.

It may be those who have to un-learn the previous iteration's interactions once something stable arrives who are at a disadvantage?


The tools have been very stable for the past year or so. The biggest change I can think of is how much MCP servers have fallen off; I think they're generally considered not worth the cost in context tokens now. The scope of changes needed to unlearn now, with model changes or whatever else, is on par with the normal language/library updates we've been doing for decades. We've plateaued, and it's worth jumping in now if you're still on the fence.

Why would the AI skeptics and curmudgeons today not continue to dismiss the "something stable" in the future?

"The market can stay irrational longer than you can stay solvent" feels relevant here.

... I mean, which tools one is supposed to be using, according to the advocates, seems to change completely every six months (in particular, the go-to excuse when it doesn't work well is "oh, you used foo? You should have used bar, which came out three weeks ago!"), so I'm not sure that _experience_ is particularly valuable even if these things ever turn out to be particularly useful.

I'll be investigating gitlab tomorrow

Have used all of the big 4 forges in anger over the last decade. GitLab isn't perfect, but I'd take it over GitHub any day of the week.

This seems backwards. Why charge for me to run the thing myself instead of them?

GitHub has still been managing the orchestration and monitoring of runs that you run on your own (or other cloud) hardware. They have just decided that they are no longer going to do this for free.

So the question becomes: is $0.002/minute a good price for this? I have never run GitHub Actions, so I am going to assume that experience on other, similar systems applies.

So if your job takes an hour to build and run through all tests (a bit on the long side, but I have some tests that run for days), then you are going to pay GitHub $0.12 for that run. You are probably going to pay significantly more for the compute to run it (especially if you are running on multiple testers simultaneously). So this does not seem to be too bad.

This is probably going to push a lot of people to invest more in parallelizing their workloads, and/or putting them on faster machines in order to reduce the number of minutes they are billed for.
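
For a rough sense of scale, the same back-of-envelope in a few lines of Python (the $0.002/minute rate is from the announcement; the usage levels are just illustrative):

    # Back-of-envelope: monthly orchestration fee at $0.002/minute
    # for a few illustrative self-hosted usage levels.
    RATE_PER_MINUTE = 0.002  # USD, per the announcement

    for minutes_per_month in (1_000, 10_000, 70_000, 500_000):
        fee = minutes_per_month * RATE_PER_MINUTE
        print(f"{minutes_per_month:>9,} min/month -> ${fee:,.2f}/month")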

I should note that if you are doing something similar in AWS using SSM (AWS Systems Manager), I found that if you are running small jobs on lots of systems, the charges can add up very quickly. I had to abandon a monitoring-system idea I had for our fleet (~800 systems) because the per-hit cost of just a monitoring ping was $1.84 (I needed a small amount of data from an on-worker process). Running that every 10 minutes was going to be more than $250/day. Writing and running my own monitoring system was much cheaper.


As a solo Founder who recently invested in self-hosted build infrastructure because my company runs ~70,000 minutes/month, this change is going to add an extra $140/month for hardware I own. And that's just today; this number will only go up over time.

I am not open to GitHub extracting usage-based rent for me using my own hardware.

This is the first time in my 15+ years of using GitHub that I'm seriously evaluating alternative products to move my company to.


But it is not for hardware you own. It is for the use of GitHub's coordinators, which they have been providing to you for free. They have now decided that that service is something they are going to charge for. Your objection to GitHub "extracting usage-based rent from me" seems to ignore that you have been getting usage of their hardware for free up to now.

So, like I said, the question for you is whether that $140/month of service is worth that money to you, or can you find a better priced alternative, or build something that costs less yourself.

My guess is that once you think about this some more you will decide it is worth it, and probably spend some time trying to drive down your minutes/month a bit. But at $140 a month, how much time is that worth investing?


No. It is not worth a time-scaled cost each month for them to start a job on my machines and store a few megabytes of log files.

I'd happily pay a fixed monthly fee for this service, as I already do for GitHub.

The problem here is that this is like a grocery store charging me money for every bag I bring to bag my own groceries.

> But at $140 a month, how much time is that worth investing?

It's not $140/month. It's $140/month today, when my company is still relatively small and it's just me. This cost will scale as my company scales, in a way that is completely bonkers.


> The problem here is that this is like a grocery store charging me money for every bag I bring to bag my own groceries.

Maybe they can market it as the Github Actions corkage fee


> It is not worth a time-scaled cost each month for them to start a job on my machines and store a few megabytes of log files

If it is so easy why don’t you write your own orchestrator to run jobs on the hardware you own?


> The problem here is that this is like a grocery store charging me money for every bag I bring to bag my own groceries.

This is an odd take because you're completely discounting the value of the orchestration. In your grocery store analogy, who's the orchestrator? It isn't you.


Do you feel that orchestration runs on a per-minute basis?

As long as they're reserving resources for your job during the period of execution, it does.

Charging people to maintain a row in a database by the minute is top-tier, I agree.

If you really think that's all it is, I would encourage you to write your own.

It would be silly to write a new one today. There are plenty of open source and indie options to invest in instead.

For scheduled work, cron + a log sink is fine, and for pull request CI there's plenty of alternatives that don't charge by the minute to use your own hardware. The irony here, unfortunately, is that the latter requires I move entirely off of GitHub now.


so they are selling a cent's worth of their CPU time for a minute's worth of billing

> My guess is that once you think about this some more you will decide it is worth it, and probably spend some time trying to drive down your minutes/month a bit. But at $140 a month, how much time is that worth investing?

It's $140 right now. And if they want to squeeze you for cents worth of CPU time (because for artifact storage you're already paying separately), they *will* squeeze harder.

And more importantly, *RIGHT NOW* it costs more per minute than running a decent-sized runner!


I get the frustration, and I'm no GitHub apologist either. But you're not being charged for hardware you own. You're being charged for the services surrounding it (the action runner/executor binary you didn't build, the orchestrator configured by the DSL you write, the artefact and log retention you're getting, the plug-n-play with your repo, etc.). Whether or not you think that is a fair price is beside the point.

That value to you is apparently less than $140/mo. Find the number you’re comfortable with and then move away from GH Actions if it’s less than $140.

More than 10 years of running my own CI infra with Jenkins on top. In 2023 I gave up Jenkins and paid for BuildKite. It’s still my hardware. BuildKite just provides the “services” I described earlier. Yet I paid them a lot of money to provide their services for me on my own hardware. GH actions, even while free, was never an option for me. I don’t like how it feels.

This is probably bad for GitHub but framing it as “charging me for my hardware” misses the point entirely.


feels like a new generation is learning what life is like when microsoft has a lot of power. (tl;dr: they try to use it.)

I was born in 1993. I kind of heard lots of rumbling about Microsoft being evil as I grew up, but I didn't fully understand the antitrust thing.

It used to surprise me that people would see cool tech from Microsoft (like VSCode) and complain about it.

I now see the first innings of a very silly game Microsoft are going to start playing over the next few years. Sure, they are going to make lots of money, but a whole generation of developers are learning to avoid them.

Thanks for trying to warn us old heads!


ABuse it.

Feels like listening to the Halo generation being surprised MS fucks them over, because they thought they were the Good Guys, coz they Made the Thing They Like.

Yeah, I'm no GitHub apologist, but I'll be one in this context. This is actually a not-unreasonable thing to charge for. And a price point that's not-unreasonable.

It makes sense to do usage-based pricing with a generously-sized free tier, which seems to be what they're doing? Offering the entire service for free at any scale would imply that you're "paying" for/subsidizing this orchestration elsewhere in your transactions with GitHub. This is more-transparent pricing.

Although, this puts downward pressure on orgs' willingness to pay such a large price for GH enterprise licenses, as this service was hitherto "implicitly" baked into that fee. I don't think the license fees are going to go down any time soon, though :P


I run about 1 action a day, taking 18h, across 2 runners: one a self-hosted 24 GB RAM, 8-core ARM VPS, and one a 64 GB 13900K x86 dedicated server.

Now the GitHub pricing change definitely costs more per month than both servers combined... (they cost about $60 together).

The 3-step GitHub Action builds around 1,200 Nix packages and derivations, but produces only around 50 lines of logs in total if successful, and maybe 200 lines when a failure occurs. And I'm supposed to pay $4 a day for that? I wonder what kind of actual costs are involved on their side in waiting for a runner to complete and storing 50 lines of log.


It sounds like you'd be better off self-hosting Jenkins. The other issue with GHA is they cap all runs at 6 hours.

Despite what people say about "maintaining" Jenkins (whatever that means to them personally), you can set it up in an IaC way, including the jobs. You can migrate/create jobs en masse via its API (I did this about 10 years ago for a large US company converting from what was then called TFS).


I'll likely check out buildbot or just switch to gitlab

What problem does Jenkins solve? When we got Jenkins working how we wanted, it was a giant Groovy script that handled checkout manually.

Somewhere around $0.00004, probably.

Nice profit margin…


You know, one might ask what the base fee of $4k/mo (in my org's case) is covering, if not the control plane?

Unless you're on the free org plan, they're hardly doing it "for free" today…


Exactly this. It’s not like they don’t have plenty of other fees and charges. What’s next, charging mil rates for webhook deliveries?

> They have just decided that they are no longer going to do this for free.

Right; instead, they now charge the full price of orchestration plus runner for just the orchestration part, making their basic hosted runner effectively free.

(Considering that compute for "self-hosted" runners is often also rented from some party that isn't Microsoft, this is arguably leveraging the market power in CI orchestration that is itself derived from their market power in code hosting to create/extend market power in compute for runners, which sounds like a potential violation of both the Sherman Act and the Clayton Act.)


Sure, but that shouldn't be a time-dependent charge. If my build takes an hour on GH's hardware, sure thing, charge me for that time. But if my build takes an hour on _my_ hardware, then why am I paying GH for that hour?

I get being charged per-run, to recoup the infra cost, but what about my total runtime on my machine impacts what GH needs to spend to trigger my build?


> is $0.002/minute a good price for this

It was free, so anything other than free isn't really a good price. It's hard to estimate the cost on github's side when the hardware is mine and therefore accept this easily.

(GitHub is already polling my agent to know its status, so whether it is "idle" or "running an action" shouldn't really change a lot on their side.)

...And we already pay a monthly subscription for team members and Copilot.

I have a self-hosted runner because I must have many tools installed for my builds, and I find it kind of counterproductive to always reinstall those tools for each build, as that takes a long time. (Yeah, I know, "reproducible builds" and so on, but I only have 24h in most of my days.)

Even at a few hundred minutes a month, we're still under a few dollars, so it's not worth spending two days to improve anything... yet.


Is it polling the runner, or is the runner sending it progress?

The runner sends progress info, polls for jobs, and so on. The runners don't have to be accessible from GitHub; they just need general internet access (like through a NAT device).

> is $0.002/minute a good price for this

Absolutely not, since it's the same price as their cheapest hosted option. If all they're doing is orchestration, why the hell are they charging per-minute instead of per-action or some other measure that recognizes the difference in their cost between self-hosted and github-hosted?


> is $0.002/minute a good price for this

I think a useful framing of this question is: would you run a c7gn.large instance just to do this orchestration?


Additionally, they could just self-host their code, since code is data and data is a moat.

> GitHub has still been managing the orchestration and monitoring of runs that you run on your own (or other cloud) hardware. They have just decided that they are no longer going to do this for free.

This argument is disingenuous. Companies pay GitHub per seat for access to PR functionality etc. What's next, charging per repository? Because of a decision to no longer provide the repositories "for free"? It's not for free, you're paying already, it's included in the per-seat pricing. If you charge per seat then sometimes there are users who hardly use it and sometimes there are users who use it a lot. The per-seat pricing model is supposed to make the service profitable overall regardless of the usage levels of individual users.


> is $0.002/minute a good price for this?

It is not just not good; it is outrageous. The amount of compute required for orchestration is small (async operations), and they already charge you for artifact storage. You need to understand that the orchestration just receives details (inbound) from the runner. It needs very few resources.


Because they know Forgejo is starting to get attention from major players and is thus becoming competitive, and hosting your own CI infrastructure will make completely moving away from GitHub all the easier. If you don't really care about the metadata all it pretty much takes is moving git repositories with their history.

Or shortly summarized: lock in through pricing.

Pretty sure this will explode straight in their faces though. And pretty damn hard.


How can you lock people in by charging money? It seems like the opposite: they are charging because people are already locked in and they can. Or am I misreading your comment?

Microsoft "suddenly" does not seem to want you to run your own CI, which is a key part of running your own SCM. And this decision miraculously happens the moment a lot of big orgs are looking at self-hosting a cost effective (because open source) near 1:1 alternative to GitHub (=Forgejo).

So they make CI a bit cheaper but a future migration to Forgejo harder.

In fact they could easily pull off some typical sleazy Microsoft bullshit and eventually make it a shit ton harder to migrate out of GitHub once you migrated back in.


The idea is that they let you stay locked in for free. They dissuade people from making their CI pipeline forge-agnostic by charging you if you take steps to not be dependent on them. This means they can keep charging in other areas, and keep people on GitHub so that it stays dominant. Dominance is something that can be used to keep people in the Microsoft ecosystem, keep GitHub as the place where code goes so they have training data for LLMs, and dominance can simply be cashed in down the line.

I don’t know if that’s actually why they’re doing this, but it sounds plausible.


If you make running your own runners as expensive as running on Github's runners, on top of the cost of actually hosting the runners, then if you are currently on Github and not able to migrate off immediately, the price-conscious decision is to migrate runners into Github. But then, it's even harder if you ever decide to migrate your whole operation out.

Now, if you are already looking at migrating, it's also potentially a kick in the butt to do it now. But if you aren't, the path of least resistance, or at least the path of least present recurring cost, is a path to a greater degree of lock-in.


I don't think Forgejo is competitive in the markets GitHub makes most of their money from, nor does it seem Forgejo developers want it to be.

Where does GitHub even make most of their money? Their compliance posture makes them a non-starter for any regulated industries (which is atypical for a Microsoft property, generally MS is the market leader for compliance in all of their products).

Places might be officially regulated, but government agencies, healthcare, finance, and defense industries aren't as strict as you think. People have to get stuff done, and most in these protected industries are usually quite incompetent.

Microsoft’s sales reps know this.


Given that a lot of places that deal with money use them, I find your comment quite interesting and would like to learn more :)

The easiest way is to compare GitHub's compliance report list with, say, Atlassian Bitbucket.

https://docs.github.com/en/enterprise-cloud@latest/organizat...

https://www.atlassian.com/trust/compliance/resources


Representatives from the Dutch government recently had a chat with representatives from Forgejo because they are quite interested in migrating their SCM infrastructure from Github to Forgejo.

And trust me, they are running a lot of public and private repositories.

And there are many more orgs and govs throughout Europe doing similar things, because there's a (growing) zeitgeist here that neither the Trump administration nor any American SaaS company can be trusted. This started, by the way, after Microsoft suspended the ICC chief prosecutor's access to Microsoft 365 on orders from the White House.


Can confirm.

I have seen this sentiment more and more, which is welcome to me as it’s a drum I have been banging for 15 years.

I have never had so many empathetic conversations than I have recently.


Sounds familiar!

Everybody now is like "Hey, we can take something like Kubernetes which is open source and is backed by a worldwide community, and you know like OpenStack which is open source and is backed by a worldwide community and we can build our own computing platform and deploy services and online communities and stuff on top of that"

And I was like "Wait, you guys are realizing that NOW?!? I've been an activist and part of a movement urging you all to try and be less dependent on US Big Tech and focus more on decentralization for YEARS"

Like you I am really happy things seem to get rolling now, though :)


The Dutch government representative mentioned contacts with French colleagues about this as well.

Not sure why you think Forgejo is the competition and not GitLab.

> Or shortly summarized: lock in through pricing.

how would increasing the price make you more locked in?

> If you don't really care about the metadata all it pretty much takes is moving git repositories with their history.

moving the PR/CI/CD/ticket flow is a very significant effort, as in most companies that stuff is referenced everywhere. Having your commits refer to ticket IDs from a system that no longer exists is a royal PITA


> Having your commits refer to ticket IDs from a system that no longer exists is a royal PITA

just rewrite the short links in your front-end to point to the migrated issues/PRs. write a redirect rule for each migrated issue/PR, easy

hard-coded links in commit messages are annoying, you can redirect in the front-end too but locally you'd have to smudge/clean them on local checkout/commit


I would keep repos on GH but use Jenkins though.

[flagged]


Democratic organization is a strike?

Where do you live that that seems like a bad idea?


Inclusivity and democratic governance of a project is a strike to you? Seems like perhaps your hat is showing...

Inclusive is strike 1?

What color are you?

I'm sure I can find a company that supports ethnostates if you need that for your next project.


Because GHA was stagnant and expensive and multiple services like https://www.warpbuild.com/ popped up, with better performance and much lower price. Looks like they ate enough of GH’s lunch…

Hey, WarpBuild founder here. While it makes it harder for us to communicate this, we're still faster and cheaper even after the $0.002/min self-hosting tax.

Overall costs go up for everyone but we remain the better option.


Because they make money from charging way over cost price for per-minute CI runners, and they don't want people using much much cheaper alternative providers.

They don't care about people actually self-hosting. They care about people "self hosting" with these guys:

https://github.com/neysofu/awesome-github-actions-runners


They still run the whole orchestration.

If you don't want to pay, you'd have to not use GitHub Actions at all, maybe by using their API to test new commits and PRs and mark them as failed or passed.


One problem is that GitHub Actions isn't good. It's not like you're happily paying for some top tier "orchestration". It's there and integrated, which does make it convenient, but any price on this piece of garbage makes switching/self-hosting something to seriously consider.

Github being a single pane of glass for developers with a single login is pretty powerful. Github hosting the runners is also pretty useful; ask anyone who has had to actually manage/scale them what their opinion of Jenkins is. Being a "Jenkins Farmer" is a thankless job that means a lot of on-call work to fix the build system in the middle of the night at 2am on a Sunday. Paying a small monthly fee is absolutely worth it to rescue the morale of your infra/platform/devops/sre team.

Nothing kills morale faster than wrenching on the unreliable piece of infrastructure everyone hates. Every time I see an alert in Slack that GitHub is having issues with Actions (again), all I think is, "I'm glad that isn't me," and I go about my day.


I run Jenkins (have done so at multiple jobs) and it's totally fine. Jenkins, like other super customizable systems, is as reliable or crappy as you make it. It's decent out of the box, but if you load it down with a billion plugins and whatnot then yeah it's going to be a nightmare to maintain. It all comes down to whether you've done a good job setting it up, IMO.

Lots of systems are "fine" until they aren't. As you pointed out, Jenkins being super-customizable means it isn't strongly opinionated, and there is plenty of opportunity for a well-meaning developer to add several foot-guns with some simple point-and-click in the GUI. Or the worst-case scenario: cleaning up someone else's Jenkins mess after they leave the company.

Contrast with a declarative system like github actions: "I would like an immutable environment like this, and then perform X actions and send the logs/report back to the centralized single pane of glass in github". Google's "cloud run" product is pretty good in this regard as well. Sure, developers can add foot guns to your GHA/Cloud Run workflow, but since it is inherently git-tracked, you can simply revert those atomically.

I used Jenkins for 5-7 years across several jobs and I don't miss it at all.


Yeah, it seems like a half-assed version of what Jenkins and other tools have been doing for ages. Not that Jenkins is some magical wonderful tool, but I still haven't found a reasonable way to test my actions outside of running them on real Github.

Everyone who has Actions built into their workflow now has to go change it. Microsoft just conned a bunch more people with the same classic tech lock-in strategy they've always pursued, people are right to be pissed. The only learning to take away is never ever use anything from the big tech companies, even if it seems easier or cheaper right now to do so, because they're just waiting for the right moment to try and claw it back from you.

> Microsoft just conned a bunch more people with the same classic tech lock-in strategy they've always pursued, people are right to be pissed

People would be better served by not expecting anything different from Microsoft. As you say yourself, this is how they roll.

> The only learning to take away is never ever use anything from the big tech companies

Do you even believe in this yourself? Not being dependent on them would be a good start.


Can someone share a Github bot that doesn't depend on actions?

I mean maybe https://github.com/rust-lang/bors is enough to fully replace Github Actions? (not sure)


You can use webhooks to replace Github Actions: https://docs.github.com/en/webhooks/about-webhooks

Listen to webhooks for new commits + PRs, and then use the commit status API to push statuses: https://docs.github.com/en/rest/commits/statuses?apiVersion=...
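
A minimal sketch of that wiring, assuming Flask and requests, a shared webhook secret and an API token in environment variables, and a hypothetical run_ci() placeholder where your actual build/test logic would go:

    # Sketch only: receive GitHub push webhooks and report a commit status.
    import hashlib, hmac, json, os

    import requests
    from flask import Flask, abort, request

    app = Flask(__name__)
    WEBHOOK_SECRET = os.environ["WEBHOOK_SECRET"].encode()
    GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]

    def run_ci(repo: str, sha: str) -> bool:
        """Hypothetical placeholder: check out `sha` and run the tests here."""
        return True

    @app.route("/webhook", methods=["POST"])
    def webhook():
        # Verify X-Hub-Signature-256 so only GitHub can trigger builds.
        expected = "sha256=" + hmac.new(WEBHOOK_SECRET, request.data,
                                        hashlib.sha256).hexdigest()
        sig = request.headers.get("X-Hub-Signature-256", "")
        if not hmac.compare_digest(sig, expected):
            abort(401)
        if request.headers.get("X-GitHub-Event") != "push":
            return "", 204

        event = json.loads(request.data)
        repo = event["repository"]["full_name"]   # e.g. "org/repo"
        sha = event["after"]                      # head commit of the push
        state = "success" if run_ci(repo, sha) else "failure"

        # Commit status API: POST /repos/{owner}/{repo}/statuses/{sha}
        requests.post(
            f"https://api.github.com/repos/{repo}/statuses/{sha}",
            headers={"Authorization": f"Bearer {GITHUB_TOKEN}",
                     "Accept": "application/vnd.github+json"},
            json={"state": state, "context": "self-hosted-ci"},
            timeout=30,
        )
        return "", 201

In practice you'd queue the build rather than run it inline in the handler, and watch out for status-API rate limits and occasionally dropped webhook deliveries.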


Yep, this mostly works fine (and can already be necessary in some setups anyway). The main issues are that each status update requires an API call (over v3; AFAIK updating statuses was never added to v4), so if you have a lot of statuses and PR traffic you can hit rate limits annoyingly quickly, and GitHub will regularly fail to deliver or forward webhooks (with no ordering guarantees either).

I mean, is there some open source project that already uses webhooks to replace Github Actions?

Rather than having to write some ad hoc code to do this


We have internal integrations with GitHub webhooks that will hit our server to check out a branch, run some compute, and then post a comment on the thread. Not sure if you can integrate something like that to help block a PR from being merged like Actions CI checks, but you can receive webhooks and make API calls for free (for now). It would definitely result in some extra overhead to implement some tasks outside of Actions.

> Not sure if you can integrate something like that to help block a PR from being merged like Actions CI checks

Post statuses, and add rulesets to require those statuses before a PR can be merged. The step after that is to lock out pushing to the branch entirely and perform the integration externally but that has its own challenges.


Because charging you brings more profits than not charging you.

Because they host the artifacts, logs, and schedule jobs which run on your runners, I assume.

Then why do they charge by the minute instead of gigabytes and number of events?

Ask them. I don’t set the policy at a company I don’t work at.

Their announcement gives a clue, and it’s to do with job orchestration.


they charge you for artifacts and logs separately, already

Yep and the sky is blue and GitHub can charge for that too if they want to.

I don’t make policy at GitHub and I don’t work at GitHub so go ask GitHub why they charge for infrastructure costs like any other cloud service. It has to do with the queueing and assignment of jobs which is not free. Why do they charge per minute? I have no idea, maybe it was easiest to do that given the billing infrastructure they already have. Maybe they tried a million different ways and this was the most reasonable. Maybe it’s Microsoft and they’re giving us all the middle finger, who knows.


I don't think you're responsible for anything more than your own comments.

I added some context that contradicts your assumption that the increased fees were to cover hosting/storage/scheduling costs.


The scheduler isn’t free, I always wondered how the financials work on this one. Turns out they didn’t ;)

Anyway, GitHub actions is a dumpster fire even without this change.


I develop software, I also test and run it. All in my machines.

But you (yes, you personally) have to collect the results and publish them to a webpage for me. For free.

Would you make this deal?


It sounds like a bad deal right?

Except the alternative is I do this for free but also I'm doing all the testing and providing the hardware.

I'm only going to charge you if you do most of the work yourself


If you do it all, you can optimize the whole supply chain. Maybe you can put some expensive capacity you built to use and leverage it when otherwise impossible, etc.

Maybe it's bad business dealing with lots of non-standardized external hosts, and it drags you down.

Maybe people are abusing the free orchestration to do non-CI stuff and they're compromising legitimate users.

Look, I understand it's frustrating to some consumers. However, it's not irrational from GitHub's point of view.


This is actually about abusing Microsoft's market position to eliminate competitors in related markets, plain & simple.

if you were paying me a monthly license fee for each developer working on your repos, I'd probably consider it

What happens if I am, and now my developers suddenly start to produce changes much faster? Like, one developer now produces the volume of five.

Would you keep charging the same rate per head?


why wouldn't you? these are easily compressible text files. storing even 100x that for 400 days (at most; the default for GH is 90) is downright cheap to do even at massive scale.

it's 2025, for log files and a spicy cron daemon (you pay for the artifact storage), it's practically free to do so. this isn't like the days of Western Union where paying $0.35 to send some data across the world is a good deal


If that's the case, why all the fuss?

All the people complaining can just tap into this almost-free, accessible, cheap resource you are referring to instead.


we don't need it. we need to run our CI jobs on resources we manage ourselves, and GitHub have started charging per-minute for it. apples and cannonballs

no, I'd cut the monthly seat cost and grow my user base to include more low-volume devs

but realistically, publishing a web page is practically free. you could be sending 100x as much data and I would still be laughing all the way to the bank


Publishing the page is only the last step. It's orchestrating the stuff THEN publishing it.

If you think that's easy, do it for me. I have some projects to migrate, give me the link of your service.


> If you think that's easy

I think it's cheap to maintain. let me know how many devs you have, how many runs you do, and how many tests (by suite) you have, and I can do you up a quote for hosting some Allure reports. can spread the up-front costs over the 3-year monthly commitment if it helps


There are several services I know of that offer this for free for open source software, and I really doubt any commercial offering of that software would charge you extra for what is basic API usage.

But I get to read all your code and use it for training my AI, right?

My projects are public anyway. If you respect the license and make the AI comply to valid license reuse, I'm game.

> My projects are public anyway.

My point was that they profit from accessing your code, which is why they made it free in the first place. Now they make you pay because they believe they will make more profit. But they certainly weren't losing money before.

> If you respect the license and make the AI comply to valid license reuse

I think that the de facto situation is that AI does not have to know about licences or copyright at all. If they hack your computer to train their AI, the illegal part is that they hacked your computer, not that they trained their AI with the stolen data.


> I think that the de facto situation is that AI does not have to know about licences or copyright at all.

That is simply not true.

Companies can get into legal trouble if they don't.

Copilot does that bookkeeping:

https://docs.github.com/en/copilot/how-tos/get-code-suggesti...


> Companies can get into legal trouble if they don't.

Heard of Meta torrenting copyrighted material? What kind of trouble did they get into?


What if they lost?

Open source license litigation is a thing:

https://en.wikipedia.org/wiki/Open_source_license_litigation


Not sure what you are trying to say. What I see is that in practice, TooBigTech can do their training with everything they want without any meaningful consequence.

is an underwater drone not just a torpedo? What's the technical difference? Torpedoes can be guided after launch

typically the differentiators are

1. long endurance / loitering

2. remote control / streams

For underwater drone vs torpedo, agility and propulsion method, torpedoes are typically built for speed and don't need the same turning radius

The spectrum is becoming increasingly gray as the variety of unmanned vehicles and projectiles increases. You can see the same thing in UAVs vs missiles. Apparently Ruzzia is now putting air-to-air missiles on their Shahed drones; what even is that now... a missile with self-defense missiles?


I'm guessing here, but torpedoes are normally fired from a boat or sub a few miles away.

I imagine here they've modified one of their naval drones to travel on its own to near the port running on an internal combustion engine, then dive a few feet and go underwater on battery power for the last bit.

Or maybe just battery rather than diesel electric? Batteries are quite good these days.


I haven't seen much of the "Sub Sea Baby" drones, as they call them here, but the regular Sea Babys are more like boats. They move across the surface. They are remotely controlled. They have weapons besides kamikaze attacks. I would guess this is the same thing.

I don't think they are getting "launched" and relying on maintaining that initial velocity the way one imagines torpedoes do, either.


If you're looking for a semantic argument, Montreal is in America

I wonder: when they get tired of thinking about the problem, do they get more motivated?

Are you thinking of some kind of “hungry intelligence” but for fatigue?

It's on the border with Belarus, one of Russia's major allies. Any leak would affect them

I don't think the Russians care about this. This whole invasion can be summed up as 'Russians don't care about trivialities like human life or impact to nature'.

They care about keeping their tiny number of allies happy though.

This wasn't buying a cake from a baker, this was a bakery buying 40% of the flour in the world so nobody else can sell wedding cakes, but now there's gonna be no bread for a year

Similar situation here. I have some old 32-bit machines that I'm turning into writer decks. Most Linux distros have left 32-bit behind, so you can't just use Debian or Ubuntu, and a lot of distros that aim to run on lower-end hardware are Ubuntu derivatives.

Same situation but I'm using NetBSD instead. I'm betting it'll still be supporting 32-bit x86 long after the linux kernel drops it.

Personally, I think that dropping 32-bit support from Linux is a mistake. There is a vast number of people in developing countries on 32-bit platforms, as well as many low-cost embedded platforms, and this move feels more than a little insensitive.

won't there still be the older kernels?

Nice work, thanks for making this! Would be great if music apps etc could just auto pause when you go into a call


Currently, on Mac, your music's volume reduces when you get on a call.


Ah cool. Didn't know that. I usually turn off the music before I open zoom anyway so I don't know why I'm asking for that
