If an AI would truly love it, then that seems like the best-case ATS-optimized resume to me. The right tool for the job. Imagine using real people to review applicants - what is this, the 1800s?
It's funny how Japanese contributors often go nuclear when things don't go their way instead of communicating. Bugs happen; this is tech. Open communication is always the way to go before escalation. We see the same phenomenon in Git, Ruby, etc.
Ironically, Japanese work culture encourages over-communication. It seems open source is treated as a counter-culture through which they can escape Japanese work culture.
> In October 22, the sumobot was introduced to Japanese KB articles. I cannot accept its behavior and no words. [...] It has been working now without our acceptance, without controls, without communications.
How exactly did you manage to place the blame for no communication on Japanese contributors here given the actual complaint in question?
Because, given the nature and scope of the complaint, they should have disclosed any prior communication. It seems they attempted, poorly, to communicate, but ultimately chose to nuke the relationship instead, claiming a monopoly over Japanese translations.
I live in Japan so I have seen what danieltanfh95 describes many times in person. Japanese people at work seem to have infinite patience with whatever random bullshit their company will ask for, but outside of work they get easily frustrated when things don't go the way they expect.
It is really just BS. This is just basic DSA stuff. We deployed a real-world solution by doing all of that on our side. It's not magic. It's engineering.
This is honestly not better than the LLM-powered double-entry bookkeeping system where I just record in plain text and have the LLM convert it into journal entries. The entire thing is plain text and backed by git plus a remote. Recording finances is the hardest part of tracking finances, because you tend to forget, get lazy, need to track across multiple family members, etc.
This is solved via LLMs that can transcribe, categorise, and convert messy records into structured bookkeeping.
I remain unconvinced that anything more rigid is more useful.
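For concreteness, a minimal sketch of the shape of it, assuming a ledger/hledger-style plain-text journal; ask_llm() is a hypothetical stand-in for whatever model API you use, not a real library call:

    import subprocess

    def ask_llm(prompt: str) -> str:
        """Placeholder for whatever model API you use (hypothetical)."""
        raise NotImplementedError("wire up your LLM of choice here")

    PROMPT = (
        "Convert this note into a double-entry, ledger-style posting.\n"
        "Use accounts like Expenses:*, Assets:Cash, Liabilities:CreditCard.\n"
        "Note: {note}"
    )

    def record(note: str, journal: str = "journal.ledger") -> None:
        entry = ask_llm(PROMPT.format(note=note))
        # the books stay plain text: just append to the journal file
        with open(journal, "a") as f:
            f.write(entry.strip() + "\n\n")
        # and the history is just git
        subprocess.run(["git", "add", journal], check=True)
        subprocess.run(["git", "commit", "-m", note], check=True)

    # record("lunch at 7-eleven, 1200 yen, cash") might append:
    #
    #   2026-02-03 lunch at 7-eleven
    #       Expenses:Food:Lunch    1200 JPY
    #       Assets:Cash           -1200 JPY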
> It reports that, in a large national survey, 65 percent of Americans expressed the belief that they are smarter than a typical person.
...
> Looking at education level, 73 percent of people with college degrees asserted they were more intelligent than average.
> “Given that the average college graduate has an IQ of approximately 13 to 15 points above the population mean,” the researchers write, “college graduates in our sample actually slightly underestimated their relative intelligence,”
Is it really that shocking a result that 15% of people below average intelligence would overestimate their ability? However, as the research you linked points out, the ones most likely to underestimate are college graduates, which includes the American CS graduates we're talking about in this thread.
Maybe, just maybe, the American CS grads we're being told are "overestimating" their skills are actually underestimating them, as this research implies.
I won't bother wasting a lot of words on how people perceive Americans, nor on how obviously not all Americans are "Trump & co."
Hopefully, I'm not overestimating my reading comprehension ;-)
> Hopefully, I'm not overestimating my reading comprehension ;-)
I am afraid you have :-) Where did you get your 15%? It is actually 65% of the population that overestimates itself.
If you wanted to talk actual numbers, you would have read the study referred to in the article. Here it is: 65% of Americans believe they are above average in intelligence: Results of two nationally representative surveys - https://journals.plos.org/plosone/article?id=10.1371/journal... In particular, read carefully the section "Education: Are beliefs calibrated?" since that clarifies your misunderstanding.
Like all statistical studies, there can be discussion of sampling/methodology/distributions/etc., but the overall conclusion seems definite; viz. the last paragraph:
Despite these limitations, we conclude that Americans’ self-flattering beliefs about intelligence are alive and well several decades after their discovery was first reported. Our results update the textbook phenomenon of intelligence overconfidence by (1) replicating the effect using large, representative, contemporary samples and two distinct survey methods, (2) demonstrating a degree of calibration across levels of education, and (3) showing moderation based on sex and age. The endurance of the smarter-than-average effect is consistent with the possibility that a tendency to overrate one’s own abilities is a stable feature of human psychology.
And I might add, more pronounced in American culture than in others. For more on this, see Richard Nisbett's The Geography of Thought: How Asians and Westerners Think Differently...and Why - https://en.wikipedia.org/wiki/The_Geography_of_Thought
Unless your application is relatively trivial, you always want behaviour that is as consistent as possible, rather than some random metric used as a proxy for "performance"; routing is NOT the solution.
It's bad because they are mixing what were supposed to be just execution boundaries into the overall runtime engine, without making it explicit how to bridge from one to the other.
This only holds if the data being accessed is less valuable than the computational cost. In this case that is false, and spending a few dollars to scrape the data is more than worth it.
Reducing the problem to a cost issue is bound to be short-sighted.
This is not about preventing crawling entirely; it's about finding a way to stop crawlers from re-crawling everything far too frequently just because crawling is very cheap. Of course it will always be worth it to crawl the Linux kernel mailing list, but maybe with a high enough cost per crawl, crawlers will learn to be fine with crawling it only once per hour, for example.
My comment is not about preventing crawling; it's that, with how much revenue AI is bringing in (real or not), the value of crawling repeatedly >>> the cost of solving these flimsy coin-mining puzzles.
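Back-of-the-envelope, with numbers I'm assuming rather than measuring: say each challenge burns about one CPU-second and a vCPU costs roughly $0.04/hour:

    # Assumed numbers, not measurements.
    pow_cpu_seconds = 1.0        # work per proof-of-work challenge
    vcpu_cost_per_hour = 0.04    # rough cloud spot price
    pages = 10_000_000           # a large recurring crawl

    cost = pages * pow_cpu_seconds / 3600 * vcpu_cost_per_hour
    print(f"${cost:,.0f} to solve {pages:,} challenges")  # ~$111

Pocket change for a funded lab, which is exactly the point.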
A CAPTCHA at least tries to make the human/AI distinction, but these schemes are purely about making crawling "expensive". If it's just a capital problem, then it's not a problem for the big corporations, who are the ones incentivized to crawl in the first place!
Even if human CAPTCHA solvers are involved, at least that provides society with some jobs (useless as they may be); these mining algorithms do society no good and waste compute for nothing!
> looks at resume
> garbage formatting that only an AI would love, with little substantial content beyond what the sea of other candidates would offer.
All that talk about humans, and yet they produce a piece of paper that doesn't respect human time.