throwawaybbq1's comments | Hacker News

Does this mean a huge hiring uptick in the US/layoff reversal? I do think this law caused some of the bad market. Will undoing it get us back to where we were?


Definitely not. Repealing Section 174 (or not extending it, as it were) helped push us into a new normal for the market. Adding it back doesn't, in and of itself, push the market into another new normal; we'd need a lot more. It might take the edge off, though, hopefully.


Agreed. Consider that we're in a big tech bubble right now (AI) and have been for at least a couple years. And yet tech layoffs have been way up, and hiring way down. Part of that could be attributable to 174, but there are other issues that contribute more. One is that there are vanishingly few people with actual experience in this narrow part of AI (LLMs) - I know people working in AI who have been laid off in the last couple of years because they were in the wrong area of AI (vision & CNNs). Secondly, it turns out that not that many people are needed to work on this stuff (mostly concentrated in large companies like Meta, Google, Microsoft & Amazon). And thirdly, folks in the C-suite became convinced that AI is going to replace software engineers, so they've quit hiring them.


> And thirdly, folks in the C-suite became convinced that AI is going to replace software engineers, so they've quit hiring them.

I think this is the real reason for much of the layoffs.

The other reason is simply that the market isn't punishing layoffs. You get rewarded as a CEO for laying off employees and saying "It's because AI makes them obsolete"


A huge uptick / reversal? I'm not sure; that's ultimately driven far more by actual profits and market conditions than by taxes.

But as pointed out in the article, US devs now have a tax advantage vs foreign devs. That may lead to some "nearshoring" especially from foreign markets where dev salaries have been jumping up (India, Europe, etc.)


I pray this is true. How can an experienced dev in the Bay Area compete with someone in India who will work for $10 an hour with ChatGPT?

Now that I write this, it's still a hard decision for big companies.


Same thing happened in the '90s...

I think what we're seeing is the fossilization of the newest batch of mega tech companies looking to rest on their laurels and prioritize profits over innovation.

They won't die, they are just the next IBM.


Because not just anyone in India will work for $10/hr, and ChatGPT doesn't make much difference in enormous big-tech codebases built on all in-house technologies.


Just be patient, wait for the offshore contractor's vibe-coded shit pile to burst into flames, then come in and fix it for a big premium, while being able to communicate with the customer in their preferred time zone and in the local dialect.

simple.


Most people cannot just wait around until this happens, there are bills to pay.


Most US devs are paying their bills. If you can't get a job even freelancing, do something else instead of waiting for the government to give you a handout job. Millions and millions of US devs are worth their salary over foreign ones (in fact, many of those millions ARE foreign devs, where it was worth bringing them here and paying them higher wages).


Can you clarify this? Are you saying non-H1Bs would not pay the crazy high rents of SV housing?


I once asked an (American) director where I worked just this question.

He said what you surmised: while there are plenty of qualified American engineers in the United States, citing Motorola headcount reductions as an example, it was a challenge to convince them to move to California's cost of living.


Curious where you sourced the parts? In Canada, shipping kills it for me. When I priced out the robot + electronics + $100 in shipping, I was at around $700 - a far cry from the $100 "sticker" price.


My taste buds have become extremely muted after covid bouts and other sinus issues. Wonder if this would be helpful.


In the limit, this is like arguing against the use of wheels. Automation improves labor productivity. Economies that have not invested in capital have seen labor productivity and incomes stagnate. This is a current debate going on in Canada (especially as compared to the productivity and income gains elsewhere in the last decade). Canada does have strong unions, so I wonder if this is related.

Another thing that seems troubling is how a small group of people can hold a majority of the country by the bXlls. Given that this is an election year, I can see this turning into a huge fiasco. The rest of the economy is collateral damage.


We've had stagnant incomes for the last 50 years. The fruits of automation are not shared with the workers.

The small minority that keeps a country by the balls is not the unions but the owning class. The 2008 crash that put the whole world in a decade-long recession is collateral damage.


In the US? I don't think what you are saying is supported by real data. My understanding is that US workers did see an improvement in incomes in the last decade, but Canadian workers did not.

What makes life better for everyone is competition. Canada's stagnation can be summed up in a single phrase: lack of competition. Generally, the US has been a free-for-all when it comes to competition, and hence its populace enjoys some of the best living standards.

I'll also relate my experience riding the subway in Asia vs. Manhattan. Asian transit seems space-age compared to what we have in the West. I think UBI won't save us, as the income must come from somewhere. Hiking taxes kills incentives. The better way is to have more freedom/efficiency, in my humble opinion.


North America is very car-focused. Transit in Europe is also much better, although your experience will vary from country to country.

Even with all that oil money gushing through Alberta, it still takes 9.5 hours to drive from Medicine Hat to Grande Prairie - which is the same distance as Barcelona to Seville, a train journey of 5.5 hours including the change of trains in Madrid.


It has not been stagnant for 50 years: https://fred.stlouisfed.org/graph/?g=1uDSE


It has been: https://www.statista.com/statistics/185369/median-hourly-ear...

I guess we're at a stalemate now?


President needs to send in the national guard to maintain flow, because this is a domestic national security issue. I'm surprised he hasn't done it yet.


(You don't need to censor yourself like that on HN. You can say "balls." Heck, you can say "testicles"!)


I've also been trying to figure out GGUF and the other model formats going around. I'm horrified to see there are no model architecture details in the file! As you say, it seems they are hard-coding the above architectures as constants. If a new hot model comes out, one would need to update the reader code (which has the new model arch implemented). Am I understanding this right?

I'm also a bit confused by the quantization aspect. This is a pretty complex topic. GGML seems to use 16-bit as per the article. If I pushed it to 8-bit, I reckon I'd see no size improvement in the GGML file? The article says they encode quantization versions in that file. Where are they defined?
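For what it's worth, the fixed part of a GGUF file is easy to poke at yourself. Here's a minimal sketch of parsing just the header, assuming the layout from the GGUF spec (4-byte magic, uint32 version, uint64 tensor count, uint64 metadata KV count, little-endian); the counts below are synthetic, not from a real model:

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF header: magic, version, and the two
    counts that precede the metadata KV pairs and tensor infos."""
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensor_count": n_tensors, "metadata_kv_count": n_kv}

# Build a synthetic header to demonstrate the layout (values made up).
fake = struct.pack("<4sIQQ", b"GGUF", 3, 291, 24)
print(read_gguf_header(fake))  # {'version': 3, 'tensor_count': 291, 'metadata_kv_count': 24}
```

The architecture-specific logic (attention layout, rope parameters, etc.) still lives in the reader, which is the hard-coding being discussed above.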


Why are you horrified?

In designing software, there's often a trade off between (i) generality / configurability, and (ii) performance.

llama.cpp is built for inference, not for training or model architecture research. It seems reasonable to optimize for performance, which is what ~100% of llama.cpp users care about.


GGUF files seem to be proliferating. I think some folks (like myself) make the incorrect assumption that the format has more portability/generalizability than it appears to have. Hence, the horror!


I assume you are referring to Llama 2? Is there a way to compare models? e.g. what is Llama-7b equivalent to in OpenAI land? Perplexity scores?

Also, does ChatGPT use GPT 4 under the hood or 3.5?


Actually, there have been new model releases after LLaMA 2. For example, for small models Mistral 7B is simply unbeatable, with a lot of good fine-tunes available for it.

Usually people compare models with all the different benchmarks, but of course sometimes models get trained on benchmark datasets, so there's no true way of knowing except if you have a private benchmark or just try the model yourself.

I'd say that Mistral 7B is still short of gpt-3.5-turbo, but Mixtral 8x7B (the Mixture-of-Experts one) is comparable. You can try them all at https://chat.lmsys.org/ (choose Direct Chat, or Arena side-by-side).

ChatGPT is a web frontend - they use multiple models and switch them as they create new ones. Currently, the free ChatGPT version is running 3.5, but if you get ChatGPT Plus, you get (limited by messages/hour) access to 4, which is currently served with their GPT-4-Turbo model.


I agree with your comments and want to add re: benchmarks: I don’t pay too much attention to benchmarks, but I have the advantage of now being retired so I can spend time experimenting with a variety of local models I run with Ollama and commercial offerings. I spend time to build my own, very subjective, views of what different models are good for. One kind of model analysis that I do like are the circle displays on Hugging Face that show how a model benchmarks for different capabilities (word problems, coding, etc.)


> Is there a way to compare models?

This is what I like to use for comparing models: https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboar...

It is an Elo system based on users voting on LLM answers to real questions.
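The Elo mechanics behind a leaderboard like that can be sketched in a few lines. (K=32 and the 400-point scale are the conventional chess defaults; I'm not claiming these are lmsys's exact parameters.)

```python
def elo_update(r_a: float, r_b: float, a_wins: bool, k: float = 32.0) -> tuple[float, float]:
    """One Elo update after a head-to-head vote: the winner takes points
    from the loser in proportion to how surprising the result was."""
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    score_a = 1.0 if a_wins else 0.0
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# An upset (the lower-rated model wins the vote) moves ratings a lot;
# an expected win moves them only a little.
print(elo_update(1000, 1200, a_wins=True))
```

Total rating is conserved in each update, which is why a model's position on the board only shifts when it keeps winning (or losing) matchups it "shouldn't."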

> what is Llama-7b equivalent to in OpenAI land?

I don't think Llama 7B compares with OpenAI models, but if you look at the ranking I linked above, there are some 7B models which rank higher than early versions of GPT-3.5. Those models are Mistral 7B fine-tunes.


Miqu (the leaked large Mistral model) and its finetunes seem to be the most coherent currently, and I'd say they beat GPT-3.5 handily.

There are no models comparable to GPT-4, open source or not. Not even close.


No, it's Mistral: Mistral 7B, and the Mixtral 8x7B MoE, which is almost on par with (or better than) ChatGPT 3.5. Mistral 7B itself packs a punch as well.


Mixtral 8x7B continues to amaze me, even though I have to run it with 3-bit quantization on my Mac (I only have 32 GB of memory). When I run this model on commercial services with 4 or more bits of quantization, I definitely notice, subjectively, better results.

I like to play around with smaller models and regular app code in Common Lisp or Racket, and Mistral 7b is very good for that. Mixing and matching old fashioned coding with the NLP, limited world knowledge, and data manipulation capabilities of LLMs.
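A back-of-envelope calculation shows why 3-bit is about the ceiling on a 32 GB machine. (The ~46.7B total parameter count for Mixtral 8x7B and the 10% overhead factor are rough assumptions, not exact figures.)

```python
def model_size_gb(n_params_billion: float, bits_per_weight: float, overhead: float = 1.1) -> float:
    """Rough model footprint: parameters * bits / 8 bytes each, plus ~10%
    for metadata, quantization scales, and a little KV-cache headroom."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return round(bytes_total * overhead / 1e9, 1)

# Mixtral 8x7B has roughly 46.7B total parameters (the experts share
# the attention layers, so it's less than a naive 8 * 7 = 56B).
print(model_size_gb(46.7, 3))   # ~19 GB: fits in 32 GB with room for the OS
print(model_size_gb(46.7, 4))   # ~26 GB: very tight on a 32 GB Mac
print(model_size_gb(46.7, 16))  # ~103 GB: needs server-class RAM
```

That gap between 3-bit and 4-bit is exactly where the subjective quality difference mentioned above shows up.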


There is also MiQu (stands for Mi(s|x)tral Quantized, I think?), which is a leaked, older Mistral Medium model. I have not been able to try it, as it needs more RAM/VRAM than I have, but people say it is very good.


This is neat to know. On Ollama, I see mistral and mixtral. Is the latter one the MoE model?


yes, mixtral is the MoE model.


llama 2 isn't open source


I am not good at investing (lost money every single time I've tried). Liquid cash gets spent. I am paying off my 5 year fixed mortgage as fast as I can, as I will be up for renewal at the end of next year.

Some people don't have the financial savvy or time to optimize. One size does not fit all.


And that's why normie investors shouldn't be actively investing (i.e. picking individual stocks).

Dump your money into an all-market index fund and forget about it for 25 years. This requires zero "savvy" and not a lot of time. It does require a bit of research to develop some essential financial knowledge, but that's something everyone can benefit from.

If you can't do that much and you're in Canada, look at Wealthsimple, a robo-advisor that does this all for you. If you're not Canadian, there may be similar robo-advisors that automate passive investing worth looking at.

One size may not fit all, but it absolutely fits most, and my bet is, no offense, you're actually not that special (I know I'm not).
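The "forget about it for 25 years" math, as a quick sketch (the 7% average annual return is a common long-run assumption for broad equity indexes, not a guarantee):

```python
def future_value(principal: float, annual_return: float, years: int) -> float:
    """Compound growth of a lump sum left untouched."""
    return principal * (1 + annual_return) ** years

# A $10k lump sum at an assumed 7%/year grows roughly 5.4x over 25 years.
print(round(future_value(10_000, 0.07, 25)))
```

The point is that the exponent does the work, not stock-picking skill, which is why the passive approach needs so little "savvy."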


Immigrants. 95% (made-up stat) may be poor, but the 5% who are not can afford to. In Canada, there is a recent narrative, which I think is correct, that immigrants are driving our inflation. We expanded our population by about a million in the last year. That's a million new household formations, and some of those people brought a buttload of cash.


This was actually a very nice article! Thanks!

I liked the last line the most. I think we've all gotten accustomed to zero interest rates, and this has changed our behavior. My father, to a large extent, screwed up his (and our family's) life because he lived in an era of high rates. He did not understand the world had changed. We could have bought a house for cash in 1995, but he chose to rent. After that, it was always a bubble, with no buying opportunity like 1995 ever again.

It makes me wonder if all of us have similarly not realized the world is a different place post rate hikes. There is this inevitable dogma that "rates will go back down". A lot of people are making that bet and I wonder if it is just history again.

What I don't understand is what is driving the US economy today. It seems to be firing on all if not most cylinders. People I know who got laid off are finding work (I hear negative experiences too and feel for those people). Hiring in tech seems like it is picking up.


What do you mean by “screwed up”?

