> It mostly boils down to this: delight customers and iterate as fast as you can.
This is maybe the second phase after AWS found a fit and built a consumer base? Once I am in the housing market, I have a need for everything (mortgage, building contractor, construction materials, designer, hardware and accessories, upholstery, decor, etc).
My entry to AWS started with EC2 in the very early days because of its commodity nature (any size and shape, for however long, with per-minute billing) and instant availability. The elastic nature solved scale. A lot of people didn't move to RDS until later, but it was inevitable.
Everything else followed on from there, cross-sells and up-sells for reliability and convenience were always a click away for captive consumers who were already onboarded.
The only thing that didn't sit well with a lot of people about the leaked memo is that it ignored the quality gap between GPT-4 and GPT-3 and claimed that all LLMs were poised to be on par, which still isn't true.
What it also ignored (along with some of the comments here) is data ranking. Google didn't just build a search engine by crawling more of the web -- many search engines before it had already done that. Google managed to rank what's relevant and what isn't. Relevancy is hard. Similarly, not all scientific publications are ranked equally. Or for that matter, even publications with a lot of peer reviews or citations can become obsolete through new discoveries.
Reddit's data has value in that it can fill in a lot of the gaps left by more qualitative sources and furthermore the data is user-ranked by a trusted community. This also has implications for specialised querying, for example training on just r/fitness could be fairly useful for that community.
As a side note, other valuable data stores are not just text but voice/video as well. YouTube and podcast transcripts are readily available to Google, for example. Data and ranking are valuable all over again.
Most likely, just as happened with Stable Diffusion, a community of models will emerge. You could use a pre-trained model for writing Chrome extensions, or a model for writing Material UI components with Tailwind CSS, or a very specific model for writing 2D games for Android.
Since a lot of this is trial and error, improving with each iteration, the feedback loop (compile, deploy, run, etc.) will matter a lot more. Real-time development workflows like React's hot reload should be interesting at the least. Exciting stuff, truly.
code2prompt.py has some interesting implications for "forking" github projects into the base of your own side project.
Once this stuff matures it will be fascinating to see how the 2030 version of 2010 Rails scaffolding looks. What will the DHH "make a blog in 15 minutes" video look like? (Edit: which is apparently 17 years old now, aka 2006.)
Interesting - I predict the opposite effect. Ultimately the use of AI for programming is still a human-computer interface, and both sides will need a coordinate system to communicate, which is the framework. I mean, until we get super AGI, in which case we can tell it "make me the perfect website" and it does. However, I don't worry about that case because it's equally likely it will tell us "go to work at the chip factory or die." The in-between case is to tell an AI "make a controller" or "make a react component" and things like that. And ChatGPT is very good at doing things like that.
One thing that I’m excited about is the prospect of not having to design a library as a black box with an API on top. That’s the best way we’ve had previously for re-using code, but it’s an enormous effort to go from a working piece of code to a well-designed, well-documented library, and I think we have all experienced the frustration of discovering that a library you’re using doesn’t support a specific use case that is critical to you.
LLMs can potentially allow us to bring the underlying implementation directly in to our source code. Then we can talk to the LLM to adapt it to the specific needs of our project. Instead of a library you would install essentially a well-written prompt that tells the LLM how to guide you through setting up a tailor-made implementation, with tests and docs.
The benefits should be obvious: you’re not artificially restricted by the mental model encoded in the API, you’re not taking on a dependency where the author suddenly decides to release breaking changes or deprecate functionality you’re depending on, and you don’t risk “growing out of” a library that is used all over your codebase, as you can simply ask the LLM to patch the code with any changes you need in the future. The prompt itself could still be versioned so you can opt in to future improvements in security, performance or compatibility.
TLDR: let’s start writing tutorials for bots, rather than libraries.
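As a rough illustration of the idea, here is what an installable "prompt package" might look like, sketched as a plain Python dict. Everything here is hypothetical - there is no real tool or format behind these names:

```python
# Purely hypothetical sketch of a "prompt package" that replaces a library
# dependency: instead of shipping code behind an API, you ship a versioned
# prompt that guides an LLM through generating a tailor-made implementation.
RATE_LIMITER_PROMPT = {
    "name": "rate-limiter-guide",
    "version": "1.2.0",  # versioned, so you can opt in to future improvements
    "prompt": (
        "Walk the user through generating a token-bucket rate limiter "
        "tailored to their codebase. Ask for: language, storage backend, "
        "and burst policy. Emit the implementation, tests, and docs inline."
    ),
    # Acceptance criteria the generated code (plus its tests) should satisfy.
    "acceptance_tests": [
        "allows N requests per window",
        "rejects bursts over the limit",
    ],
}
```

The point of the version field and acceptance tests is that the prompt, not the code, becomes the stable, reviewable artifact you depend on.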
> you’re not artificially restricted by the mental model encoded in the API
Most of the time I want a restricted mental model, because I have so many APIs to deal with that if they are not restricted my "mental model" breaks down. Suppose I am using a sockets library. I want to use it like a black box. I don't want that code arbitrarily mixed in with my own. I want to be able to debug my code separately, because I assume that 99% of the time the bug is in my code and not in the sockets lib, etc.
Even when most of the code is my own I will still split it into modules and try to make those as black box as possible in order to manage complexity.
I tend to write facades for many libraries/APIs I use and use the facades, not the actual APIs throughout the project. The facades, aside from being simpler to replace in case I need to switch dependencies, also use a simpler mental model suitable for the project (and me).
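A minimal sketch of that facade idea, assuming a hypothetical project that only ever needs to send newline-terminated messages: the project talks to `SocketFacade`, never to the socket module directly, so the dependency (and its mental model) stays swappable.

```python
import socket

class SocketFacade:
    """Project-specific wrapper exposing only what this codebase needs.

    Illustrative names; the real facade would mirror your project's vocabulary.
    """

    def __init__(self, host: str, port: int, timeout: float = 5.0):
        self._addr = (host, port)
        self._timeout = timeout
        self._sock = None  # connect lazily, on first send

    def send_line(self, text: str) -> None:
        # Lazily open the connection so construction stays cheap and testable.
        if self._sock is None:
            self._sock = socket.create_connection(self._addr, self._timeout)
        self._sock.sendall(text.encode() + b"\n")

    def close(self) -> None:
        if self._sock is not None:
            self._sock.close()
            self._sock = None
```

Swapping the underlying transport later (TLS, a mock for tests, a different library) only touches this one class, not the rest of the project.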
oh my. i love the sound of making a blog in 1.5min with smol developer. but feel like that is more of a web framework question than an ai developer question. maybe the qtn is can i spin up the same blog in django rails and nextjs with just prompts.
(think need to give it the ability to install deps before i do this, which is on the “roadmap”)
I think you may have just stumbled onto the new Turing test: when confronted with the confusing world of JavaScript frameworks, does it make a choice, or collapse into a state of complaining about JavaScript frameworks?
I’m not sure which would be the most human response though…
They need to take this and similar AI and come up with better dubbing for movies in other languages. Netflix should really lead the way here with the amount of dubbed content that they currently possess.
If dubbing is where you are going... does that mean you're also going to pair it with deepfaking the videos to make the facial movements match the new vocalizations? Because that'd be a wild product.
A bit late but I have a problem with whichever icon search sites I've come across: it's hard to find a set.
For example if I search Facebook, I get dozens of icons. Now how do I find the same style of icons for Twitter, etc. If you can fix that, it'll truly make you awesome.
Three worst addictions: Heroin, Carbohydrates and a monthly salary - Nassim Taleb
It's more common for people who are in between jobs to take on things that would otherwise compromise their monthly salary income. It's less common for someone to quit a high-paying job and take on a risky endeavour.
Relatedly, the Tarzan strategy is another way to mitigate this risk: side projects, or finding your next gig before quitting the current one. It's called Tarzan because you grab the next rope before letting go of the current one.
> Three worst addictions: Heroin, Carbohydrates and a monthly salary - Nassim Taleb
Nassim Taleb is a bit wack.
This sentence doesn't make sense. I really hope it was taken out of context, because otherwise, there's absolutely no value to it other than glorifying risk for the sake of it. May as well be talking about gambling money away.
yes, describing serfdom as "self-employment" reminds me of how some in the US describe the slaves brought here back then as "immigrants" or "labor migrants".
Being a subsistence farmer was incredibly risky. One bad year and your whole family starves to death. People only did that because there was no alternative. As soon as the industrial revolution came, people left their "self employment" en masse to work at a company.
Not even "work at a company". Most of the dairy farmers here where I'm from are part of a collective called Arla where the independent dairy farmers collaborate, effectively building their own safety net with an organization that could support them if they had a bad year.
Companies are not required, but social safety nets are hugely important for modern systems of production.
At least in the case of England, it wasn't initially voluntary. The enclosure acts removed their ability to farm and feed themselves, removing the last benefit they received from feudalism. It was from lack of means to farm anymore they moved to the city.
That's because it wasn't risky to get a job as a helper to a working professional: you learned their trade while doing the worst/easiest part of the work, transitioned to doing skilled work while your own helpers did the crap work and the professional did inspection and finishing, then either took over the shop from the professional, partnered with them, or opened your own shop with your already-established customers.
Self employed in the sense of being part of the gig economy (serfdom), working all day for a modest living under Uber (local ruler of the day). The risk was dying when the next war broke out or it stopped raining for a year.
I take "heroin" here to really mean all narcotics, including alcohol. I think crystal meth would be worse than heroin anyway, but I have no direct or indirect experience.
A low-carb diet and giving up alcohol completely: I recommend people try both.
Many very smart people have tried and failed. The state of the art remains very basic supervised models with hand-engineered features. In the markets, data is permanently scarce, so these methods don't work well. In the RL problems that DeepMind is solving, data is literally unlimited, and that's the problem space these methods were designed for.
It's not so clear to me how you would train a reinforcement learning agent for the stock market. You have historic data for prices etc., but that's more of a supervised learning thing. You could set it loose on one of those real-time market simulators, but the agent's actions wouldn't have any impact on the simulation, right?
There are two problems in markets: price prediction and execution (i.e. what to do with your prediction). The former is a supervised learning problem, but the latter is an action-space problem, i.e. an RL problem. Nobody in industry has gotten RL methods to work, though; they overfit to the incredibly small data sets.
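A toy sketch of that split, on made-up random-walk prices (this is an illustration of the two problem framings, not a trading system): the prediction side becomes a labeled dataset, while the execution side is a policy whose reward depends on transaction costs - the part where RL would live.

```python
import random

random.seed(0)
# Fabricated random-walk prices, purely for illustration.
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] * (1 + random.gauss(0, 0.01)))

# Supervised framing: feature (last return) -> label (does price rise next step?).
dataset = [
    ((prices[t] - prices[t - 1]) / prices[t - 1], prices[t + 1] > prices[t])
    for t in range(1, len(prices) - 1)
]

# Execution framing: a *policy* maps a prediction to an action, and the reward
# depends on trading costs and the current position -- the action-space problem.
def execute(prediction_up: bool, position: int, cost: float = 0.001):
    """Naive hand-written policy: go long on an 'up' prediction, flat otherwise."""
    target = 1 if prediction_up else 0
    trade_cost = cost * abs(target - position)  # pay cost only when we trade
    return target, -trade_cost

position, total_cost = 0, 0.0
for _, label in dataset:  # feed the policy perfect predictions, for simplicity
    position, reward = execute(label, position)
    total_cost += -reward
```

Even with perfect predictions, the policy's value depends on how often it trades - which is why execution is its own optimization problem rather than a byproduct of prediction.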
> This guy has gone to the zoo and interviewed all the animals. The tiger says that the secret to success is to live alone, be well disguised, have sharp claws and know how to stalk. The snail says that the secret is to live inside a solid shell, stay small, hide under dead trees and move slowly around at night. The parrot says that success lies in eating fruit, being alert, packing light, moving fast by air when necessary, and always sticking by your friends.
His conclusion: These animals are giving contradictory advice! And that's because they're all "outliers".
> But both of these points are subtly misleading. Yes, the advice is contradictory, but that's only a problem if you imagine that the animal kingdom is like a giant arena in which all the world's animals battle for the Animal Best Practices championship [1], after which all the losing animals will go extinct and the entire world will adopt the winning ways of the One True Best Animal. But, in fact, there are a hell of a lot of different ways to be a successful animal, and they coexist nicely. Indeed, they form an ecosystem in which all animals require other, much different animals to exist.
Great analogy, but analogies can be deceptive: while the snail can coexist with the tiger, for every two animals that coexist, hundreds still have to fail the natural-selection process.
We like to view the world through an idealistic narrow lens of an analogy but the truth is often far more complex.
I would go further and say that analogies and quotations are dangerous and deceptive. These quotations don't actually offer any new information. You usually only like an analogy because you already agree with it; no new information or insight is being offered beyond the comparison that makes up the analogy itself.
I have to agree with the dead comment by @leafboi: it is not that every animal can live harmoniously. Indeed, every animal is in competition under the laws of natural selection, and each animal can be seen as the outlier of its species. In the case of humans, there may be multiple paths to success, but many of them may lead to failure while producing certain outliers.
Unrelatedly, does anyone know why certain comments immediately become dead? They don't really seem to break any HN rules, but I often see comments die quickly.
As human beings, moderators have their own preferences which exist beyond the HN guidelines, and are not meta moderated, except in the aggregate.
When a comment becomes dead, it's because some mods choose to vote it down, and fewer mods choose to revive it. The net effect is an expression of the prevailing culture.
Your question about why this sometimes happens "immediately" is an interesting part of the dynamic.
The problem, as with all things in Karachi, is systemic.
It results from a lack of ownership, which makes accountability hard (politicized institutions with diverging agendas). During these rains, an entire township (Naya Nazimabad) sank underwater, and many areas are still waterlogged. This township was built on top of a low-lying lake that was reclaimed. The approval of such projects involves dozens of authorities, each of whom charges an "expediency fee", and corruption is deeply rooted within them. An officer who comes in for a 2-4 year tenure on a low government salary is incentivized to maximize his earnings during his short stay.
The second problem is the lack of engineering involvement. The government sector doesn't exactly attract top talent at good salaries. Tenders, on the other hand, are awarded based on nepotism and personal gain. There is a long list of botched projects in the civil sector: none of the desalination and water-treatment plants in the entire city are operational, for example, and they haven't been for years now.
The sewerage infrastructure in many parts of the city, as it exists right now, is worse than the French sewers built in the 14th century[1]. These sewers are open-top and become dumping grounds for garbage due to the lack of a garbage-collection infrastructure. The encroachments around and above these sewers fall victim to the tragedy of the commons[2]. The whole thing is a mess with no easy solution, and if this year's heavy rainfalls become a future trend, the situation will be unsustainable. Many homes were waterlogged for days or weeks with no power or connectivity (cell towers have around 24-48 hours of standby power, after which they went down).
What made it different this time is that chic housing in DHA and Clifton got flooded, and people living there can shout loudly enough for the establishment to care.
If not for that, common people would be quite happy seeing DHA residents swimming.