Hacker News | jkukul's comments

This! It favors established businesses with legal teams and (maybe more importantly) with connections.

The EU is also great at creating a heavy regulatory environment, which entrenches existing incumbents. So the EU creates barriers that favor big companies, then tries to fix that with grants that... also favor big companies.

And then everyone's surprised that there's no innovation in Europe.

Of all the world's companies worth over $100B, only one is European: SAP, founded 50 years ago. [1]

[1] https://www.economist.com/briefing/2021/06/05/once-a-corpora...


>From all the world's companies worth over 100B$ there's only one European company - SAP

Would you rather have an economy of SAPs or an economy of Teslas?


I'd rather have no economy at all


I have seen a lot of successful European startups over the years. They just get bought by US companies eventually.

In part probably because it's harder to become a monopoly in the EU.


Sounds like it does not favour mega-corporations at least then? That sounds like a desirable outcome to me.


>this! it favors established business with legal teams and (maybe more importantly) with connections.

This x2. A close friend of mine works at a major EU hardware tech company, and his job is wearing a suit, going to dinner parties, and rubbing shoulders with high-level local and national bureaucrats to convince them to fund X, Y, Z projects. None of these result in any major commercial success or ROI for the governments, because the money they get isn't enough to make successful new products. But hey, he loves his job: it gives him amazing job security against the waves of layoffs the company went through due to falling sales, and the networking he gets out of it is invaluable.

So at least some people are enjoying the gravy train while it lasts. But that's why a lot of EU tech companies go to the US first before opening up to the EU market. US VCs are more generous with their cheque books than EU governments and investors, plus there's the single consumer market of 300+ million people with English as the common language, and all that.


I don't understand why your comment is downvoted.

The comment you're replying to is tainted by survivorship bias. We see the successful companies that got government funding, but not the failures. Maybe we'd have more innovation and competition without the government picking these specific winners.

Ironically, one of the companies you mentioned (Apple) now operates in an environment with very little competition and regularly faces antitrust claims.

Government picking winners may actually reduce competition in the long run. The key difference: when private money picks wrong, it's their loss. When government picks wrong, it's taxpayer money.


Google actually did make their own DocuSign; it's called eSignature [1] and it's built into Google Workspace.

[1] https://workspace.google.com/resources/esignature/


And in true Google fashion, it only works with Google accounts; if you send a signature request to a non-Google account, it says it's sent but doesn't work...


There's a specific reason why Google doesn't promote a DocuSign-like product even though they have superior technical abilities.

It probably comes down to the fact that the code isn't the crucial part; it's the non-technical aspects like distribution, supplier relations, and marketing that make a product.

Maybe LLM wrappers turn out to be that way. The model may not matter, but distribution, customer relations, etc. would matter more.


> and other than a bit of open source (PyTorch and React are nice, I guess) as far as I can tell it's never really had any mission other than getting big.

I sometimes wonder what motivations these orgs have in contributing to open source.

My cynical side refuses to believe that the reasons are altruistic (although I'm sure there are altruistic individuals in those orgs!).

I think that the decisions to contribute to open source are calculated business decisions made to benefit the organization by:

* Getting outside contributions to the software that's widely used inside an organization

* Getting more people familiar with the software so that when they're hired they are already up to speed

* Attracting talent

* Improving PR

* Undermining competition (Llama?)

Regardless of the reasons, I think that there's a huge net benefit to society from large companies open-sourcing their software. I just don't think that's an argument to view these companies more favorably.


Commoditizing your complements.

In other words, wiping out your competitor's business moats. If their cashflow is dependent on selling phones, open source your phone operating system to lower the value of the proprietary system.

It can also be used to quickly gain market share where you previously had none and want to catch up with your competitors. You're bleeding money anyway trying to pry open an established market, and open source might be the cheaper route. The most famous examples are perhaps Apple (WebKit, CUPS) and Facebook (AI).


Agree 100%. I noticed that on the Copilot settings page [1] you can switch to a Claude Sonnet model (instead of a model trained by GitHub, I assume?). In my experience this improves things.

[1] https://github.com/settings/copilot


I use iOS's built in Screen Time settings. For "bad" apps (Reddit, TikTok, etc) and "bad" websites ("hackernews", etc) I set a daily time limit of, let's say, 15 minutes.

I configure a random password for Screen Time so that it's a real hassle to circumvent the daily limit when I go over it.


Yes, you can pre-fill the assistant's response with "```json {" or even "{", and that should increase the likelihood of getting proper JSON in the response, but it's still not guaranteed. It's not nearly reliable enough for a production use case, even on a bigger (8B) model.

I'd recommend using the ollama or vLLM inference servers. They support a `response_format="json"` parameter (implemented as a grammar on top of the base model), which makes this reliable for production use, though in my experience the quality of the response decreases slightly when a grammar is applied.
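For reference, a minimal sketch of what the ollama route looks like. This only builds the request payload and validates the reply; the payload shape follows ollama's REST chat API (where the parameter is called "format"), and the model name and prompt are placeholders:

```python
import json

def build_json_request(prompt: str, model: str = "llama3:8b") -> dict:
    # Payload for POST /api/chat on an ollama server.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # "format": "json" asks the server to constrain decoding with a
        # JSON grammar, so the reply's "content" field should always parse.
        "format": "json",
        "stream": False,
    }

def parse_reply(raw_content: str) -> dict:
    # Even with the grammar, validate before trusting the output.
    return json.loads(raw_content)
```

Even then I'd keep the `json.loads` step and a retry path, rather than trusting the model blindly.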


Grammars are best but if you read their comment they're apparently using ollama in a situation that doesn't support them.


I think in the era of LLMs, good docs/FAQs are even more valuable.

You can write a support bot that sends a user's question plus the docs/FAQ to an LLM to automatically deal with basic questions, and only involve a human in the loop once a question goes beyond what's in the docs.
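A rough sketch of that loop. Here `call_llm` is a stand-in for whatever chat-completion API you use, and the "ESCALATE" sentinel is my own convention, not anything standard:

```python
ESCALATE_TOKEN = "ESCALATE"

SYSTEM_PROMPT = (
    "Answer the user's question using ONLY the docs below. "
    f"If the docs don't cover it, reply exactly '{ESCALATE_TOKEN}'.\n\nDocs:\n"
)

def answer_or_escalate(question: str, docs: str, call_llm) -> tuple[str, bool]:
    # call_llm(system_prompt, question) -> str; any chat API fits here.
    reply = call_llm(SYSTEM_PROMPT + docs, question).strip()
    if reply == ESCALATE_TOKEN:
        # Question goes beyond the docs: hand off to a human.
        return ("A human agent will follow up shortly.", True)
    return (reply, False)
```

The nice property is that improving the docs improves the bot for free, with no retraining involved.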


For a side project?

Not even the big giants manage to create LLM bots that work.


A leet-code test would be much more standardized if candidates could solve it at home. Just send me a link to the quiz and let me solve it within a specified time frame.

I've done tests like this for some companies. It felt a lot fairer and more closely resembled the actual work environment than live leet-code interviews, with biased interviewer(s) and a stress factor that isn't part of the actual job.


As a hiring manager I HATE leet-code tests; they do nothing to differentiate candidates. But a take-home, in an era where people run ChatGPT beside the interview window or have someone else do the interview for them? Not a chance. You're 100% correct that it's way more representative, but the prevalence of cheating is ridiculous.


I totally understand you, but want to offer a different perspective.

They will also be able to use ChatGPT on the job. And StackOverflow. And Google. If they know how to use tools available to solve a problem, that will benefit them on the job.

If you're testing them for what ChatGPT can already solve, then are the skills being tested worth anything, in this day and age?

Take-home LeetCode, even with cheating, will still filter out a good chunk of candidates: those who aren't motivated enough, or those who don't even know how to use the available help. You'll still be able to rank those who solved the task. You'll still see the produced code and be able to judge it.

As another commenter points out, you can always follow up on the take-home LeetCode. Usually it becomes apparent really quickly whether a candidate solved it on their own.


This does seem like a vexing problem, especially when interviews are conducted remotely.

I wonder if either of the following could be cost-effective:

(a) Fly the candidate to a company office, where their compute usage could be casually monitored by an employee.

(b) Use a high-quality proctoring service near the candidate. E.g., give them 1-2 days in a coworking space, and hire a proctor to verify that they're not egregiously using tools like ChatGPT.

Or alternatively, would it suffice to just have a long conversation with the candidate about their solution? E.g. what design trade-offs they considered, or how might they adapt their solution to certain changes in the requirements.


Before Covid, on-sites were, well, on site, and flying the candidate in for a day was just accepted practice.


Take home is fine if you discuss it later in the interview. But also there should be some pre-screening to keep the number of interviewees reasonable.


The better/new interview question, then, is: "here is code that ChatGPT generated for $PROBLEM; what's wrong with it?"


In theory, yes. Like you’re saying we’re still not quite there yet.

In practice, there are constraints that limit job markets geographically, e.g. time zone differences or legal obstacles to hiring foreigners.

