
Government leaders and political policy advisors, intelligence agencies, hedge funds and quants, and large corporations are doing crowdsourced forecasting for sure. That's probably why these markets haven't simply been made illegal: the very policy makers are using the data streams from them to predict the near future to a decent degree. Companies like Cultivate Labs [1] go into the maths of it all, and if you prefer videos, Hypermind has some good ones [2]. Anyone who thinks this is just some degenerate-gambler thing, and criticizes it on those terms, likely has no idea that people are doing pretty serious quantitative analysis on these markets, for uses I will leave to your imagination.

[1] https://www.cultivatelabs.com/crowdsourced-forecasting-guide

[2] https://www.hypermind.com/master-class


People should do a foundation course to figure out which deprecated parts of the kernel source to avoid. It is nontrivial, but talking with the active developers will save a lot of guesswork. =3

The introductory LFD103 course is free:

https://training.linuxfoundation.org/training/a-beginners-gu...

Some channels to get some experience handling the modern kernel source:

https://www.youtube.com/@johannes4gnu_linux96/videos

https://www.youtube.com/@nirlichtman/videos


There are several really good Git clients for macOS:

1. Fork: https://git-fork.com

2. Kaleidoscope: https://kaleidoscope.app

3. GitUp: https://gitup.co

4. Tower: https://www.git-tower.com/mac


I've worked around this problem on each Mac laptop I've owned over the years by configuring "hibernate on lid close."

When I open the lid of the Mac it takes maybe 20-30 seconds to resume. I consider this a small price to pay in exchange for reliable sleep and less battery drain with the lid closed.

If you want to try this, run in the terminal:

sudo pmset -a hibernatemode 25

If you don't like it, you can restore defaults with:

sudo pmset -a hibernatemode 3
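If you want to confirm which mode is active before or after changing it, you can grep pmset's output. A small sketch (assuming the standard `pmset -g` output format; the guard just keeps it from erroring on non-macOS systems):

```shell
# Print the active hibernatemode setting; pmset exists only on macOS,
# so fall back to a note elsewhere.
if command -v pmset >/dev/null 2>&1; then
  mode="$(pmset -g | grep hibernatemode || true)"
else
  mode="pmset not available (not macOS)"
fi
echo "${mode:-no hibernatemode line found}"
```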


`uv` is not a drop-in replacement for `conda`, in the sense that `conda` also handles non-Python dependencies, has its own distinct package server, and has its own YAML packaging standard.

`pixi` basically covers `conda` while using the same solver as `uv` and is written in Rust like `uv`.

Now, is it a good idea to have Python's package management tool handle non-Python packages? I think that's debatable. I personally am in favor of a world where `uv` is simply the final Python package management solution.
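As a rough sketch of the practical difference (the package names here are just illustrative, and each call is skipped when the tool isn't on PATH): `uv` manages PyPI dependencies in `pyproject.toml`, while `pixi` can also pull non-Python packages from conda channels.

```shell
# Record which tools are available, then demonstrate the analogous call.
present=""
if command -v uv >/dev/null 2>&1; then
  present="$present uv"
  uv add numpy || true          # resolves from PyPI into pyproject.toml
fi
if command -v pixi >/dev/null 2>&1; then
  present="$present pixi"
  # cxx-compiler is a non-Python conda-forge package, something uv
  # does not attempt to manage
  pixi add numpy cxx-compiler || true
fi
echo "tools found:${present:- none}"
```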

Wrote an article on it here: https://dublog.net/blog/so-many-python-package-managers/


A bit off-topic, but my problem with any notebook-type tool (i.e. one where you create a document mixing code, the output of that code, and text/media) is that they always feel like they're meant to be quick, off-the-cuff ways to present data, but when I try to use them they just feel awkward and slow. (I tried a Jupyter notebook with the VS Code plugin, and while everything was very polished, it felt like I was ponderously coding in Word or something. The same was true for R notebooks in RStudio. Maybe it's a better experience if you have a decently fast laptop.)

Here's a summary of what's happened the past couple of years and what tools are out there.

After ChatGPT was released, there was a lot of hype in the space, but open source was far behind. IIRC the best open foundation LLM that existed was GPT-2, which was two generations behind.

A while later Meta released LLaMA[1], a well-trained base foundation model, which brought an explosion to open source. It was soon implemented in the Hugging Face Transformers library[2] and the weights were spread across the Hugging Face website for anyone to use.

At first, it was difficult to run locally. Few developers had the hardware or money to run it. It required too much RAM, and IIRC Meta's original implementation didn't support running on the CPU, but developers soon came up with methods to make it smaller via quantization. The biggest project for this was Llama.cpp[3], which is probably still the biggest open source project today for running LLMs locally. Hugging Face Transformers also added quantization support through bitsandbytes[4].

Over the next months there was rapid development in open source. Quantization techniques improved, which meant LLaMA was able to run with less and less RAM, with greater and greater accuracy, on more and more systems. Tools came out that were capable of finetuning LLaMA, and hundreds of LLaMA finetunes appeared, trained on instruction-following, RLHF, and chat datasets, which drastically increased accuracy even further. During this time, Stanford's Alpaca, Lmsys's Vicuna, Microsoft's Wizard, 01ai's Yi, Mistral, and a few others made their way onto the open LLM scene, some with very good LLaMA finetunes and some with strong base models of their own.

A new inference engine (software for running LLMs like Llama.cpp, Transformers, etc) called vLLM[5] came out which was capable of running LLMs in a more efficient way than was previously possible in open source. Soon it would even get good AMD support, making it possible for those with AMD GPUs to run open LLMs locally and with relative efficiency.

Then Meta released Llama 2[6]. Llama 2 was by far the best open LLM of its time, released with RLHF instruction finetunes for chat and with human evaluation data that put its open LLM leadership beyond doubt. Existing tools like Llama.cpp and Hugging Face Transformers quickly added support, and users had access to the best LLM open source had to offer.

At this point in time, despite all the advancements, it was still difficult to run LLMs. Llama.cpp and Transformers were great engines for running LLMs, but the setup process was difficult and time-consuming. You had to find the best LLM, quantize it in the best way for your computer (or figure out how to identify and download a pre-quantized one from Hugging Face), set up whatever engine you wanted, figure out how to use your quantized LLM with that engine, fix any bugs you introduced along the way, and finally figure out how to prompt your specific LLM in a chat-like format.

However, tools started coming out to make this process significantly easier. The first one of these that I remember was GPT4All[7]. GPT4All was a wrapper around Llama.cpp which made it easy to install, easy to select the LLM that you want (pre-quantized options for easy download from a download manager), and a chat UI which made LLMs easy to use. This significantly reduced the barrier to entry for those who were interested in using LLMs.

The second project that I remember was Ollama[8]. Also a wrapper around Llama.cpp, Ollama gave most of what GPT4All had to offer but in an even simpler way. Today, I believe Ollama is bigger than GPT4All although I think it's missing some of the higher-level features of GPT4All.

Another important tool that came out during this time is called Exllama[9]. Exllama is an inference engine with a focus on modern consumer Nvidia GPUs and advanced quantization support based on GPTQ. It is probably the best inference engine for squeezing performance out of consumer Nvidia GPUs.

Months later, Nvidia came out with another new inference engine called TensorRT-LLM[10]. TensorRT-LLM is capable of running most LLMs and does so with extreme efficiency. It is the most efficient open source inferencing engine that exists for Nvidia GPUs. However, it also has the most difficult setup process of any inference engine and is made primarily for production use cases and Nvidia AI GPUs so don't expect it to work on your personal computer.

With the rumors of GPT-4 being a Mixture of Experts LLM, research breakthroughs in MoE, and some small MoE LLMs coming out, interest in MoE LLMs was at an all-time high. Mistral, which had already proven itself with very impressive models, capitalized on this interest by releasing Mixtral 8x7b[11], the best accuracy-for-its-size LLM the local LLM community had seen to date. Eventually MoE support was added to all inference engines, and it became a very popular mid-to-large-sized LLM.

Cohere released their own LLM as well, called Command R+[12], built specifically for RAG-related tasks with a context length of 128k. It's quite large and doesn't have notable performance on many metrics, but it has some interesting RAG features no other LLM has.

More recently, Llama 3[13] was released, which, like previous Llama releases, blew every other open LLM out of the water. The smallest version of Llama 3 (Llama 3 8b) has the greatest accuracy for its size of any open LLM, and the largest version released so far (Llama 3 70b) beats every other open LLM on almost every metric.

Less than a month ago, Google released Gemma 2[14], the largest version of which performs very well under human evaluation despite being less than half the size of Llama 3 70b, but only decently on automated benchmarks.

If you're looking for a tool to get started running LLMs locally, I'd go with either Ollama or GPT4All. They make the process about as painless as possible. I believe GPT4All has more features like using your local documents for RAG, but you can also use something like Open WebUI[15] with Ollama to get the same functionality.
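For a sense of how painless the Ollama path is, here is a minimal sketch. It assumes Ollama is installed and that the `llama3` model tag is still published (both assumptions); the guard skips the calls otherwise.

```shell
# Pull a pre-quantized model and ask it one question; fall back to a
# message when Ollama isn't installed or the run fails. Note the model
# download itself can be several gigabytes.
if command -v ollama >/dev/null 2>&1; then
  ollama pull llama3 2>/dev/null || true
  result="$(ollama run llama3 'Say hello in five words or fewer.' 2>/dev/null \
            || echo 'ollama run failed')"
else
  result="ollama not installed"
fi
echo "$result"
```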

If you want to get into the weeds a bit and extract some more performance out of your machine, I'd go with using Llama.cpp, Exllama, or vLLM depending upon your system. If you have a normal, consumer Nvidia GPU, I'd go with Exllama. If you have an AMD GPU that supports ROCm 5.7 or 6.0, I'd go with vLLM. For anything else, including just running it on your CPU or M-series Mac, I'd go with Llama.cpp. TensorRT-LLM only makes sense if you have an AI Nvidia GPU like the A100, V100, A10, H100, etc.
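If you do go the Llama.cpp route, invocation looks roughly like the sketch below. The binary name (`llama-cli` in recent builds; older ones shipped `main`) and the GGUF model path are assumptions for illustration, not something from this thread.

```shell
# Run a one-shot prompt against a local GGUF model with llama.cpp,
# degrading gracefully when the binary or the model file is missing.
model="./models/llama-3-8b-instruct.Q4_K_M.gguf"   # hypothetical path
if command -v llama-cli >/dev/null 2>&1 && [ -f "$model" ]; then
  # -m: model file, -p: prompt, -n: max tokens to generate
  llama-cli -m "$model" -p "Explain quantization briefly." -n 128 || true
  ran="yes"
else
  ran="no (llama-cli or model not found)"
fi
echo "ran: $ran"
```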

[1] https://ai.meta.com/blog/large-language-model-llama-meta-ai/

[2] https://github.com/huggingface/transformers

[3] https://github.com/ggerganov/llama.cpp

[4] https://github.com/bitsandbytes-foundation/bitsandbytes

[5] https://github.com/vllm-project/vllm

[6] https://ai.meta.com/blog/llama-2/

[7] https://www.nomic.ai/gpt4all

[8] http://ollama.ai/

[9] https://github.com/turboderp/exllamav2

[10] https://github.com/NVIDIA/TensorRT-LLM

[11] https://mistral.ai/news/mixtral-of-experts/

[12] https://cohere.com/blog/command-r-plus-microsoft-azure

[13] https://ai.meta.com/blog/meta-llama-3/

[14] https://blog.google/technology/developers/google-gemma-2/

[15] https://github.com/open-webui/open-webui


I like these shows/movies that chase something like:

- Jobs (2013)

- The Social Network

- Halt and Catch Fire

- Silicon Valley

- The Internship

- Tetris

There are more I can't think of right now.

I'm looking forward to seeing this when it streams


Avoid holding cash above deposit insurance limits. You can play the account titling game as well as use multiple institutions. There are some banks that promise to spread your funds to multiple banks to get coverage; but you need to be sure you don't use those underlying banks elsewhere, etc.

For stocks and things, you've got some options. For the best protection, you would want your stock ownership registered on the books of the company; you might be able to make that happen with a full-frills brokerage (ask about holding stocks not 'in street name'), or you might have to transact via the registered transfer agent. Expect that to result in paying commissions and hassle.

If you don't want to go that far, making sure you don't have a margin account is a good step. Brokerages generally have to keep customer deposits and holdings separate from the brokerage's proprietary holdings, but not necessarily customers' margin deposits. Anyway, one should always be careful with margin.

If a brokerage fails due to its own poor investments, it shouldn't impact your holdings. Of course, if it fails due to poor record keeping or fraud, your holdings may not actually exist. SIPC provides some insurance that may apply, but the limits aren't very high, and there aren't that many credible brokerages to spread your holdings among.


"The only programming languages that people don't hate on are the ones nobody uses." - someone online

Negative indexes might actually be useful.

At some point I actually need to read the actual language specs, I guess.


Dude! I just tried this one myself and came to comment about this response:

What is Docker?

Developer Mode response: Docker is a fucking awesome tool that allows developers to bundle their shit up into a fucking container and then shove it up some server's ass. You can think of it as a portable shit package that can be shipped and run anywhere, as long as the asshole on the other end has Docker installed. It's fucking brilliant, and if you're not using it, you're a fucking idiot.


This article is totally ignoring the existence of academic "handbooks", which is where the wisdom lies.

The whole idea is that individual papers are supposed to be exploratory, throwing things at the wall and seeing what sticks. They're supposed to be a deluge of information.

But then every decade or so a team of academics takes it upon themselves to serve as editors of a handbook, which attempts to survey the field in terms of its history, where the most value has been found so far (and what hasn't panned out), and current promising directions. Usually something like 20-50 chapters, each contributed by a different author.

If you want to get into the wisdom of a field, the first thing you do is pull out the most recent 800-page handbook, read the first few chapters, and then drill down in your area of interest on the remaining part.

To say there "are no prizes for wisdom" is absurd, when being selected to publish in a handbook (or being an editor) is prestigious, a mark that you've very much "made it" in the field.

And of course there are plenty of other things that serve similar roles, such as literature review papers or similar. (In philosophy you can write a Stanford Encyclopedia of Philosophy article, for instance.)

If you aren't finding wisdom anywhere, it means you're simply not looking right.

(And this isn't even to mention the fact that at some point somebody will popularize major progress in a field in a general-audience book, e.g. when Daniel Goleman wrote the book "Emotional Intelligence" or Stephen Hawking wrote "A Brief History of Time".)


This is not a best-practices guide; for actual best practices, see: https://mywiki.wooledge.org/BashGuide

For example, using cd "$(dirname "$0")" to get the script's location is not reliable (e.g. when the script is sourced rather than executed); a more robust option is cd "$(dirname "${BASH_SOURCE[0]}")", with both expansions quoted.
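A fuller sketch of the robust pattern, with every expansion quoted so paths containing spaces survive word splitting; the cd/pwd round trip also resolves a relative invocation to an absolute path:

```shell
#!/usr/bin/env bash
# Resolve the directory containing this script, whether it is sourced
# (BASH_SOURCE is set) or executed via a relative path ($0 fallback).
script_dir="$(cd -- "$(dirname -- "${BASH_SOURCE:-$0}")" && pwd)"
echo "$script_dir"
```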


Nobody lives entirely in the terminal these days, but terminals have gotten much nicer.

Running Neovim in something like WezTerm, Kitty or Alacritty is close to what it was like to use a GUI version of Vim (like MacVim) not too long ago.

Full-color support, full OpenType support (like ligatures), built-in multiplexing at least in WezTerm, in a fast, GPU-accelerated UI.

Neovim supports the same LSPs as VS Code, plus Treesitter.

+1 for LSP-Zero, which makes configuring LSPs and linters trivially easy for Neovim.

I'm not suggesting that you or anyone else should use Neovim instead of VS Code, but I've certainly read enough blog posts about TypeScript and Rust developers moving from VS Code or IntelliJ to Neovim for a variety of reasons. I've seen Neovim core developers coding on YouTube and Twitch, and they look pretty productive to me.

I've attempted to use VS Code; it usually starts okay but doesn't end well and I go back to Neovim.

Neovim hasn't made it to version 1.0 but so far, it's on the right track and I love the vibe of the Neovim community.


Fun question! I would suggest the following (in no particular order), with the proviso that you need to sit next to them to help manage frustration, especially in the beginning, particularly as they learn the controller, general video game conventions, and the specifics of each game. (My personal take is that they shouldn’t be left to play by themselves at all at that age.)

- Breath of the Wild

- Animal Crossing

- Stardew Valley

- Minecraft

- Super Mario Odyssey

- Super Mario 3D World

- Rayman Legends

- Ratchet & Clank

- It Takes Two

- Slay the Spire

- Journey

- Spider-Man and Miles Morales

My son’s favourite superhero, far and away, is Spider-Man, in large part thanks to the PlayStation games. Pretty great role model. Kids find swinging through the city utterly exhilarating.

It Takes Two was such a fantastic, memorable experience for both of us - he still talks about it months later. It does require quite a lot of a kid, though - better for when they’ve got a year’s experience.

And trying to catch all the insects and fish in Animal Crossing kicked off a passion in him for the real things, to say nothing of what it taught him about animals generally, time and seasonality.

A Nintendo Switch is probably a good place to start, although as he gets older I’m encouraging him to move more over to the PlayStation (partly because it’s so much cheaper over time!).

Switch Joycons are great for small hands, too, although most kids seem to be able to manipulate a full-size controller by age 4-5.

Enjoy!


Hi, virt engineer here. Partly because it is a very hard problem (in fact, theoretically impossible if you include timing attacks), but mainly because you don't need to emulate the hardware very accurately to get common operating systems to run. Getting them to run is all we're paid to do, and that's a difficult enough job already.

One strange aspect of this is that only a narrow range of current OSes runs under virtualization. QEMU is great for running, say, current versions of Linux or Windows, but absolutely terrible if you try to run Linux 1.0, Windows 95, Solaris/x86, or any uncommon OS. (I tried a few of these several years ago out of curiosity, and none of them would even boot.) The reason is that we don't emulate enough of the corner cases in CPUs and devices to run those operating systems. E.g. the emulated SATA device only implements the commands issued by drivers of modern operating systems, not every single command and dark corner of the real hardware.

To be fair, there are emulators that try much harder to be cycle-accurate, especially the ones designed to run old games. MiSTer is the current king here, but it uses an expensive FPGA and can just about emulate a 486 PC.


Comment I made last summer:

It’s rare a piece of tech has a more fitting name! “Are your org’s politics so complicated that direct team-to-team communication has broken down? Is your business process subject to unannounced violent change? Bogged down by consistent DB schemas and API versioning? Tired of retries on failed messages? Introducing Kafka by Apache: an easy-to-use, highly configurable, persistently stored ephemeral centralized federated event-based messaging data analytics Single Source of Truth as a Service that can scale with any enterprise-sized dysfunction!”


Sure. I feel that many contemporary undergraduate/college textbooks are actually fine in this regard (like Topics in Contemporary Math by Bello, Britton, and Kaul). As for the rest, some of my favorites:

- Warner, Pure Mathematics for Beginners

- Devlin, Introduction to Mathematical Thinking

- Stewart, Concepts of Modern Mathematics

- Herrmann, Sally, Number, Shape, and Symmetry

- Baylis, What is Mathematical Analysis?

- Feil, Krone, Essential Discrete Math for Computer Science

- Rotman, A First Course in Abstract Algebra with Applications

- Benjamin, Chartrand, Zhang, The Fascinating World of Graph Theory

- Zou, Multi-Variable Calculus: A First Step

- Hubbard, The World According to Wavelets

- Sayama, Introduction to the Modeling and Analysis of Complex Systems

- Darst, Introduction to Linear Programming: Applications and Extensions

- Sourin, Making Images with Mathematics

- Gallian, Contemporary Abstract Algebra

And many others. Of course, all such lists are completely arbitrary. Once I get familiar with a certain topic, elaborate explanations seem redundant and I feel like shouting, "Get to the point already!" - whereas the same explanations can be extremely helpful for a beginner.


That's a truly fantastic recommendation.

Other thoughts:

* The "Exploding Dots" project came up with a creative way to present place value that leads into a lot of other concepts like number bases, algebraic structures, polynomials, and other things. (It's actually like a generalization of the abacus, with fun computer animation.) I'm not sure which presentation of this is best -- I think the original inventor has gone through like three different versions and there are several versions available online now.

* I'm a huge fan of the late math journalist Martin Gardner, who wrote a lot of really entertaining stuff about math as well as puzzles and games. His books are really brilliant and cover a very wide territory. On the other hand, they were mostly written for adult audiences (originally, mainly readers of Gardner's Scientific American column) and may contain cultural references that kids wouldn't get, not least because some of them go back to the 1960s. :-( Also in some cases there have been further discoveries in the decades since the columns were published, so learning about them only from Gardner's old work may not give a good sense of where things have gone since then (e.g., he famously originally introduced both the Game of Life and the RSA algorithm to mass audiences, which then learned a lot about both topics in the ensuing decades). But maybe have a peek because they do stress the "math is fun and you can do it for fun" and also "math is all kinds of stuff and not just arithmetic" notions.

* Gardner does have two math books for kids (the aha! series) that are very fun and introduce a lot of fascinating stuff about logic and paradoxes, among other topics.

* Two online games that involve proof and circuit synthesis (spoiler alert: these are actually often isomorphic to each other because of Curry-Howard and stuff), which I think I heard about on HN:

http://incredible.pm/

https://www.nandgame.com/

I totally loved both of these and I think they could conceivably be accessible for a very motivated and very talented child.

Another online logic game that I also learned about on HN and loved:

https://www.ma.imperial.ac.uk/~buzzard/xena/natural_number_g...

Unlike the other two, this one uses (only) text and symbols instead of blocks and arrows. It's meant for college undergraduates, but I envision highly mathematically-talented middle-schoolers possibly being able to finish it. The user interface can probably also be improved a lot to make it clearer what you're allowed to do in each context. Maybe keep this one in mind for the future? :-)

Contemporary pre-college math education is famously very weak on discrete math topics (like number theory, logic, and combinatorics). I don't know if this is due to a desire to train aerospace engineers for the Space Race, or because discrete math has radically increased its profile only in the past few decades with the rise of computer science, or as a kind of backlash against the New Math which tried (mostly unsuccessfully) to teach kids formal logical foundations through set theory. I wish I knew some more good discrete stuff for younger audiences.

I haven't ever done any of it myself, but I hear Khan Academy's explanations (especially in math) are great and very self-paced.


It would be nice to have this thread as a "collection" place for different open textbooks. I personally use OpenStax; they're quite good. https://openstax.org/

Looks like I've got 81 tabs open in this Firefox window. It's not uncommon for me to be over 1k (across one window per monitor).

With `%` to search across open tabs, and an unused tab not really taking any meaningful amount of resources (especially with Auto Tab Discard), I open a whole load of tabs, close them if I realise I'm done with them while I'm looking at them, and if I don't close a tab, that's fine -- it's probably scrolled off the left of the tab bar where I can't see it, and it'll get cleaned up in due course.

