>> It’s more than just a fad; we’re in the craze phase, where putting "AI" in your company’s name gets VCs to just blatantly throw money your way.
It's not something new and predates ChatGPT by a long shot.
Some 10 years ago I was trying to market my trading system and the backtesting/simulation software that produced it, and there was one and only one question I was asked by a VC I got in contact with: "Is it AI?". I made the mistake of saying "well, no.. plain mathematics". Never heard from them afterwards.
Now I'm soon going to market a new trading system, and this time you know what I'll say it consists of ;)
Incidentally, I did work with "AI guys" trying to somehow drain some money out of the stock market using some form of magic. The approach is very "Lord of the Rings"-like: you don't need to know fundamentals, don't need a keen understanding of the processes and various instruments, because "the ring" knows. All you need to know is how to spin the ring around your finger and make the wish "make money".
I never saw anything but bullshit come out of this "AI in finance" crowd, but at the same time never saw anything but "great successes" reported by these guys, twisting and twirling and tormenting the numbers in desperation to make them look profitable.
In the 90’s you could prefix your company name with “e-“ or suffix it with “web”, or actually have the “.com” in the registered company name and your stock would immediately skyrocket.
(I’m not kidding or exaggerating - things got that OTT before the bubble burst).
“As a result of these factors, many investors were eager to invest, at any valuation, in any dot-com company, especially if it had one of the Internet-related prefixes or a ‘.com’ suffix in its name.”
Similarly, it reminded me of the blockchain hype, which you knew was getting out of control when "On-Line PLC", simply by adding said word, saw a 4x increase in its share value overnight (it has since fallen back to its pre-2017 value range).
> The approach is very "Lord of the Rings"-like: you don't need to know fundamentals, don't need a keen understanding of the processes and various instruments, because "the ring" knows. All you need to know is how to spin the ring around your finger and make the wish "make money".
What's the similarity to Lord of the Rings? That sounds more like Aladdin.
I recall (very vaguely) having read some reflection by Carl Sagan on this, and how the fact is that the majority of people hold some form of "magical thinking" which is completely at odds with rationality and the scientific method. Basically, all you need is to acquire some form of "magic ring" and you can be the "magician". When in fact you are not: you are only the magician if you can forge the magic ring, and diagnose and repair it when it malfunctions.
But that's not what "AI finance" is about, that's classical finance. AI is the magic ring here, somehow magically "it" thinks and does so that you don't have to. You only spin the ring and enjoy the riches.
The only thing the examples there have in common is that they're all rings.
They do different things; what makes "magic rings" a leitmotif when "magic hats" (also very common) aren't? The rule in fairy tales is very simple: any object can be magical.
You have magic swords, magic knives, magic bags, magic shoes, magic dolls, magic paintings, magic lakes, magic trees, magic stoves......
> what makes "magic rings" a leitmotif when "magic hats" (also very common) aren't? The rule in fairy tales is very simple: any object can be magical.
But this is not just about fairy tales, it's also about folklore. Things that people in real life sometimes actually believe in.
The linked Wikipedia article says:
> A finger ring is a convenient choice for a magic item: It is ornamental, distinctive and often unique, a commonly worn item, of a shape that is often endowed with mystical properties (circular), can carry an enchanted stone, and is usually worn on a finger, which can be easily pointed at a target.
Seems reasonable that in real life a ring might make a more convincing magical item than a hat.
Hats are ornamental, distinctive, and often unique, they are commonly worn, they are circular, they can carry a stone, and they can be easily pointed at a target.
People have been wearing special hats for symbolism for several thousand years.
And on top of all of that, the specific claim was that rings have a special presence in fairy tales, which is definitely not true.
Big Data. Knowledge Engine. AI. Blockchain. Crypto. IoT. The only industry more driven by fashion than IT is .. the fashion industry. And even there it's a close race.
There are always hustlers, but the current wave of AI is undeniable.
I'm an ML Engineer, and for the past few months I've been getting as much activity on LinkedIn as at the peak of the 2022 job market. In the startup space, I see 2 major trends:
1. Startups with the mission to solve problem X using AI.
2. Startups that solve problem X, but now are pivoting to AI.
The funniest case was this startup that was hiring for their founding ML team, and the recruiter simply couldn't tell me what this team was supposed to do.
Just say the word "AI" and the ring may render you invisible. Put something in front to give it a "one ring to rule them all" ring, e.g. "Hyper AI".
I have been in conversations with someone who wants to do something just like this. I shared with him this recent paper that suggests LSTM can produce better returns than the market (https://onlinelibrary.wiley.com/doi/10.1002/for.3021?af=R). He will need a simulation/back-testing system. Will yours be cloud-hosted / self-hosted / open-source / subscription-based?
Open-source, self-hosted. It needs a computer, Java, PostgreSQL, and some historical market data (which you usually need to purchase, at least for options data).
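For anyone curious what the core loop of such a back-testing system looks like, here is a minimal Python sketch. The price series and the momentum signal are made up purely for illustration; a real system would plug in purchased historical data and the actual strategy being tested.

```python
# Toy backtest: walk forward through a price series, asking a signal
# function at each step whether to be long or flat, and mark to market
# at the end. Prices and the signal rule are hypothetical.
prices = [100.0, 101.5, 99.8, 102.3, 103.1, 101.9, 104.0]

def signal(history):
    """Toy momentum rule: go long if the last move was up."""
    return 1 if len(history) >= 2 and history[-1] > history[-2] else 0

def backtest(prices):
    cash, shares = 1000.0, 0.0
    for i in range(1, len(prices)):
        pos = signal(prices[:i + 1])
        price = prices[i]
        if pos == 1 and shares == 0:      # enter long
            shares = cash / price
            cash = 0.0
        elif pos == 0 and shares > 0:     # exit to cash
            cash = shares * price
            shares = 0.0
    return cash + shares * prices[-1]     # final equity, marked to market

final = backtest(prices)
print(f"final equity: {final:.2f}")
```

The real work in such systems is everywhere this sketch cheats: realistic fills, fees and slippage, avoiding look-ahead bias, and handling the messy historical data you had to buy.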
I have to admit. This is marketing done right. Same as the retool post.
Also, in the next few years all the sauce and moat of these kinds of systems will be in the retrieval architecture. I wish they would go into further detail (there's just a little bit at the end).
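For a rough feel of what the retrieval step does, here is a toy sketch: rank code chunks against a query by bag-of-words cosine similarity and hand the top-k to the LLM as prompt context. Real systems like Cody presumably use learned embeddings and a vector index; the embed function and sample chunks here are illustrative, not their actual architecture.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words term-count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]  # the top-k chunks get prepended to the LLM prompt

chunks = [
    "def parse_config(path): load the yaml config file",
    "def send_email(to, body): smtp helper",
    "class ConfigError(Exception): raised on bad config",
]
context = retrieve("how do I load the config file", chunks)
```

The interesting engineering questions, which is presumably where the moat lives, are how chunks are split, how code structure (symbols, call graphs) is folded into the ranking, and how much context fits the model's window.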
Even though 'context' is mentioned many times in that post, I am still unsure whether the power of Cody comes only if you import your repo into Sourcegraph. It does not seem like using it within VSCode in your local source directory lets it understand and use everything there, but I am happy to be corrected.
Of course it works with local files; maybe I phrased it awkwardly. I am just not sure whether it gives you full context if you do that. I have used Cody a few times in the past few months and it's good, but at least its chat feature kept saying it does not have access to the local code. Maybe that's Claude/GPT being honest, but when and which local files it picked seemed unpredictable.

There's a setting for whether you want Sourcegraph context or local context only, and from one of the previous announcements I understood it can work with public GitHub repos. Hence my confusion about whether it can truly and fully use your local folder by scanning it locally and extracting embeddings, or whether you need to go through Sourcegraph to get the most out of it.
That was too long. Had AI summarize the article. Is this about right?
Here is a summary with some additional context:
1. AI competition is heating up, with companies like OpenAI, Anthropic, Google, Meta, etc investing heavily and releasing new models. There is also a proliferation of AI startups raising large funding rounds. OpenAI fired then quickly rehired its CEO Sam Altman, causing confusion.
2. Machine learning engineers are in extremely high demand in the job market, receiving lavish recruiting offers. Meanwhile, other engineers like backend developers or mobile devs are struggling to move jobs since companies are mainly focused on security and ML hires. ML people get showered with "bear spray" level recruiting interest.
3. Most companies, even big Fortune 1000 brands, are still only in the initial stages of developing a concrete AI strategy and execution plans. There's a lot of "we're working on it" talk but not much to show yet. Companies are slow to focus on this due to inertia and competing internal priorities.
4. Metrics like "completion acceptance rate" (CAR) for coding assistants don't actually quantify developer productivity gains. However, some companies wrongly want to use CAR to closely monitor developers.
5. Cody, a new coding assistant from Sourcegraph using retrieval-augmented generation, leverages Sourcegraph's code search and understanding strengths. It is now generally available. Comments in code can improve Cody's understanding. Sourcegraph has developed Cody quickly thanks to its existing infrastructure.
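On point 4: CAR, as commonly defined, is simply the fraction of shown completions that developers accept. A minimal sketch of how it might be computed from assistant telemetry (the event fields here are hypothetical, not Sourcegraph's actual schema):

```python
# Hypothetical completion telemetry: one event per completion shown.
events = [
    {"user": "alice", "shown": True, "accepted": True},
    {"user": "alice", "shown": True, "accepted": False},
    {"user": "bob",   "shown": True, "accepted": True},
    {"user": "bob",   "shown": True, "accepted": True},
]

def car(events):
    """Completion acceptance rate: accepted / shown."""
    shown = sum(1 for e in events if e["shown"])
    accepted = sum(1 for e in events if e["accepted"])
    return accepted / shown if shown else 0.0

rate = car(events)  # 3 of 4 completions accepted
```

Which illustrates the article's point: a 75% CAR says nothing about whether the accepted code was kept, edited away, or shipped a bug, so it measures engagement with the tool rather than developer productivity.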
The proposition coding assistants like Copilot and Sourcegraph make is that they can help write code in every programming language and greatly speed up the process of software development. While this is true, I consider a far more relevant discussion to be the end of software development [1]. The end, as in 98 percent of it will be automated very soon.
Every time I try to generate Rust code, all LLMs perform very well, while when generating untyped code like Scheme, all of them perform okay-ish at best.
Better training data can be produced for LLMs: for example, using cargo-public-api to teach the machine about the exposed API before the code itself, or cargo-modules to teach it about the module hierarchy before the code, and so on.
Tricks like that, plus mixture-of-experts architectures that let LLMs second-guess their responses, will put an end to coding soon. My point is that coding assistants will not exist for long, because coding itself will be a thing of the past.
This is written by an LLM pretrained on Steve's writings and then fine-tuned and made safe. A single instance of the F-word is the strongest it could muster.
Is this AI-generated blogspam? It doesn't seem to have a point, it hallucinates things, and it's seemingly endless. This is nothing like their other blog posts.
Unfortunately, this is how Steve Yegge communicates both in writing and real life. As someone who worked at Sourcegraph, it's very difficult to tolerate.
I don't find it funny, and it is a long-winded and confusing way of saying that their code suggestions use other code and documentation, and that people should try it.
If you're not familiar with Steve Yegge, this is his classic rant style. It might not be your cup of tea, but it's definitely more substantial than "AI generated blogspam" - and if you think it's hallucinating, you should point out exactly where, for everyone's benefit.
It's not hallucinated, it's not written by AI; it's creative, tongue-in-cheek writing by a person who has literally decades of prior art in this very style on his blog. Try reading it with these statements held as true and see how much sense your post makes then.
>>Nvidia buying all their employees solid gold full-size Wall Street bull statues as a little holiday thank-you gift.
This is classic Steve Yegge hyperbole. He's been ranting like this for decades[0].
His (accidentally public) rant comparing Google and Amazon business practices is a classic[1]. In it he says, "Jeff Bezos is an infamous micro-manager. He micro-manages every single pixel of Amazon’s retail site." Surely no one would take that literally.
There was a surreal scandal recently driven by the "revelation" that the events described in a standup comic's routine were all made up.
This is of course also true of every other standup comedy routine in history, but it was only a scandal for Hasan Minhaj.
There appears to be a breed of modern people who aren't capable of understanding the concept of humor. Unfortunately for Minhaj, those people were drawn to his act as an expression of piety, which isn't what it was.
After downvoting the parent, it occurred to me that I was basically picking on a handicapped person. So I felt compelled to vote it up instead, to help redress the heartlessness of my fellow HN'ers as well as my own. "Be the light you want to see in the world," that sort of thing.
That doesn't mean my initial vote was wrong, though; it just means I felt bad for the author.
maybe it's a matter of taste, but I haven't read a better-written piece all year
it couldn't be further from LLM style - not that LLMs don't write good text, but theirs is bland and has that unmistakable AI feel: almost always {intro, 3 paragraphs, conclusion}, always making sure to name opposing views or warnings
The only question I have after reading this blog is whether they are hiring. Also, please dial down the John Oliver weights in your LLM settings unless you generate audio for your post with JohnAI narrating.
Completion Acceptance Rate may not be sufficient for measuring productivity, but it does seem like a valuable measure of usefulness - although a completion can be accepted because it is accurate, or merely out of laziness, carelessness, or ignorance. A manager who wants to reduce head count can interpret high-CAR programmers as among the lazy, careless, or ignorant if so inclined.
It must have crushed your soul to have Neovim in the support list and no Emacs to be seen anywhere. Hilarious post, though... Don't let the Marketing folks keep you down. :)
I have been trying it with WebStorm. Pretty solid. The autocomplete is a little more accurate compared to Copilot. Not a fan of the chat, and the VS Code extension has more features, but whatever, it's free for now.
Extremely funny until it reached the product announcement part. Then it turns out Sourcegraph has their own code assistant with a nice inoffensive name, which is of course better than anything that has come before. I haven't tried it and can't comment on it; it just pulled the rug out from under the satire I was enjoying up to that point.
I mean, I applied to SourceGraph for a Machine Learning Engineer role on Thursday, and all they gave me was a rejection, on Friday. Where's mah boat? Also, I have this awesome (aspirational) assistant / agent / coding helper thing that's open source and even has some comments so ... can I have a job?
> In Korea, we found that in contrast to the West, they have a rich and vibrant hierarchical society that celebrates rank and status, and glorifies managers, which by the way is completely unrelated to our leadership team’s recent decision to hire only in Korea.