The most frustrating part here is that this car crash of a policy had cross-party support, so there wasn't even a way for people in the UK like me to vote against it.
Even in Reform, I get the sense that Zia Yusuf was the only person campaigning seriously against it, going on a one-man crusade to force Farage to criticise it and to put fully repealing the act in their manifesto.
Yes, but she is now allied with a party that has committed to repealing it. This seems to be the result of pure political calculation by all involved. Dorries is unfortunately quite popular with certain demographics that aren't natural Reform supporters (i.e. the sort who love the OSA), so bringing her in probably does help them. But they know they can't absorb too many former Tories without undermining the justification for their own existence.
Yeah, they really shouldn't have let Nadine Dorries into the party. For the sake of online freedom, thankfully Zia outranks her (as she holds no official position), but even so it's not an ideal situation.
Surprising there’s no matte-black iPhone 17 Pro - dark, low-reflectance finishes are standard in pro video kit because they minimise specular reflections and stray highlights; keeping a shiny silver finish and skipping a subdued matte black feels like a strange choice and undercuts the “Pro” claim.
It's not a strange choice at all when you realize that the majority of people use phone cases, and that it's more difficult to make matte finishes "pop" in promotional content.
Movie people don't normally care about the finish of the iPhone they are using. And the ones that do, use a case.
I've seen all sorts of non-black (let alone matte-black) iPhone rigs used for motion pictures, including white and natural-titanium colors. E.g., 28 Years Later used a variety of iPhone configurations and colors.
But yeah, I'm surprised there's no black/space gray option this year. Some consumers won't buy any other color.
I wonder if someone will come up with the idea of a vinyl wrap to protect your phone, rather than using a slip-on case. Then you could have your phone be thin and still get that matte finish. Couple that with a matte screen protector and I think the result would be pretty nice.
These have never been actual pro devices. Arguably not even prosumer. As a pro, you probably don't want scorched-earth AI processing done on your photos, but that is what iPhones have been doing of late. Most damning: there is no way to turn it off.
There is no such thing as a digital camera without processing. But third party camera apps can get images as raw as they want them and it supports professional video standards.
Try Halide with "Process Zero" if you want that, but I'm pretty sure the most popular 3p camera apps are Asian beauty apps that do far more and far worse quality processing.
Sure there is: shoot in raw format. You get a file representing a matrix of the sensor readout at each photosite. Your post-processing software of choice then handles interpolation using the method of your choice.
There is a big difference between: interpolation (dealing with the Bayer or X-Trans array and delivering a three-channel image file in your choice of format and bit depth, using your choice of algorithms); shooting with a color card and a calibrated monitor for white balance or tone mapping, if you care about that level of accuracy; and what Apple is doing, which is black-box ML subtly yassifying your images and garbling small printed text. Especially when the commenter's use case is building out the family archive, not posting selfies on Instagram.
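For the curious, the interpolation step being described can be sketched naively, assuming an RGGB Bayer layout (a real demosaicer interpolates at full resolution with far better algorithms; this just shows what "dealing with the array" means):

```python
import numpy as np

def demosaic_naive(raw):
    """Naive demosaic of an RGGB Bayer mosaic: each 2x2 cell
    becomes one RGB pixel (R, mean of the two Gs, B).
    A real demosaicer interpolates at full resolution."""
    h, w = raw.shape
    r = raw[0:h:2, 0:w:2]                               # top-left of each cell
    g = (raw[0:h:2, 1:w:2] + raw[1:h:2, 0:w:2]) / 2.0  # the two greens
    b = raw[1:h:2, 1:w:2]                               # bottom-right
    return np.dstack([r, g, b])

# 4x4 sensor readout -> 2x2 three-channel image
raw = np.arange(16, dtype=float).reshape(4, 4)
rgb = demosaic_naive(raw)
print(rgb.shape)  # -> (2, 2, 3)
```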
> shooting for white balance or tone mapping with a color card and calibrated monitor if you care about that level of accuracy
You need to do this if you want to see the image at all, and it involves a lot of subjective choices. The supposedly objective auto white balance algorithm usually described is in fact quite bad; for instance, it's always described as a single transformation applied to the whole image, which doesn't make sense if there are multiple light sources.
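To make the "single transformation" point concrete, here is the textbook gray-world algorithm: one global gain per channel for the entire frame, which is exactly what breaks down under mixed lighting:

```python
import numpy as np

def gray_world(img):
    """Gray-world AWB: scale each channel so the channel means
    become equal. One global gain per channel - a single transform
    applied to the whole image, no matter how many light sources
    are actually in the scene."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means
    return img * gains  # per-channel gains broadcast over all pixels

# A warm-tinted flat gray patch gets neutralized...
img = np.full((2, 2, 3), [0.6, 0.5, 0.4])
balanced = gray_world(img)
print(balanced[0, 0])  # -> [0.5 0.5 0.5]
```

If half the scene were lit by tungsten and half by daylight, the same single set of gains would be applied to both halves, correcting neither.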
The reason you'd want to render humans differently in the image is that a) if you don't get skin tones just right, they'll look like corpses, and b) in real life you can choose to focus on a subject in a scene, and this will cause them to appear brighter (because your eyes adapt to them), but in an image there isn't that flexibility, so it helps to guess what the foreground of the image is and expose for that.
I forgot to say that recent iPhone cameras let you turn off the sharpening effects anyway: just move the photographic-style control down to Natural. It is true that the sharpening is kind of bad. This is because everyone was taught that digital images are band-limited, so frequency-based sharpening algorithms get used; but the images aren't band-limited, so those algorithms just give you ringing artifacts. For some reason nobody knows about warp sharpening anymore.
Thanks so much for this! I’d really appreciate a more consumer oriented subscription offering, similar to Claude Max, that combines Gemini CLI (with IP compliance) and the Gemini app (extra points for API access too!).
From what I can tell, that was a bug and it has been fixed. I regularly use data with much more than 32k tokens in Gemini with my Workspace account, and context window issues are not a thing anymore.
What sets Gemini via Workspace apart from other offerings like AI Studio is the nerfed output limit and the safety filters. Also, I never got Gemini to ground replies in Google Search, except in Deep Research, or to execute code. Finally, Workspace users of Gemini either cannot keep their chat history at all, or have to keep the entire history for a predetermined period (deleting individual chats is not allowed).
I was about to post something similar. While the research is interesting, it doesn’t offer any advantages over 3- or 4-bit quantization. I also have to assume they explored using longer tiles but found it to be ineffective — which would make sense to me from an information theory perspective.
> it doesn’t offer any advantages over 3- or 4-bit quantization.
"zero-shot accuracy retention at 4- and 3-bit compression to be on par with or better than state-of-the-art methods, while maintaining performance comparable to FP16 baselines."
My reading of that says FP16 accuracy at Q3 or Q4 size / memory bandwidth. Which is a huge advantage.
I don't see any comparable numbers on the page you linked. Seems to only have numbers for 1B and 3B parameter models. Comparisons to AWQ and OmniQuant in Table 3 seem quite favorable with SeedLM showing 10% - 50% better performance.
Also seems like the techniques may be possible to combine.
As a rule of thumb, the bigger the model, the more gracefully it degrades under quantisation. So you can assume the performance loss for an 8B model would be lower than for a 3B model. (I know that doesn't make up for the missing numbers in the link, just FYI.)
I think the main advantage is that you can compute the extra parameters (the PRNG seeds) from the network weights alone, whereas most other quantization methods require simulating the quantization procedure at training time (Quantization-Aware Training) or setting parameters from a calibration dataset (Post-Training Quantization).
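A toy sketch of that idea as I understand it, storing just one seed and one scale per weight block; the actual method uses LFSR-generated bases and a few low-bit coefficients, so this is only illustrative. Note that nothing here needs training or calibration data, only the weights themselves:

```python
import numpy as np

def seed_quantize(block, n_seeds=4096):
    """Search PRNG seeds for the pseudo-random vector whose
    optimally scaled version best reconstructs the weight block.
    Only (seed, scale) need to be stored; the vector itself is
    regenerated on the fly at inference time. (Toy version.)"""
    best = None
    for seed in range(n_seeds):
        basis = np.random.default_rng(seed).standard_normal(block.size)
        scale = basis @ block / (basis @ basis)   # least-squares scale
        err = np.sum((block - scale * basis) ** 2)
        if best is None or err < best[0]:
            best = (err, seed, scale)
    return best[1], best[2]

def seed_dequantize(seed, scale, size):
    """Regenerate the block from the stored seed and scale."""
    return scale * np.random.default_rng(seed).standard_normal(size)

w = np.random.default_rng(0).standard_normal(8)   # one weight block
seed, scale = seed_quantize(w)
w_hat = seed_dequantize(seed, scale, w.size)
```

The projection guarantees the reconstruction error never exceeds the block's own energy; with enough seeds (and, in the real method, several coefficients per block) the error becomes competitive with low-bit quantization.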
This technique has three significant advantages over popular low bit quantization: 1) it retains more accuracy, 2) it does not require calibration data, 3) it's easier to implement in hardware.
A Mac mini with an M4 Pro and 64GB of memory has the same bandwidth and costs £1,999, compared to £1,750 for the Framework Desktop when factoring in the minimum costs for storage, tiles, and necessary expansion cards.
One thing to note on the more RAM: for the 128GB option, my understanding is that the GPU is limited to using only 96GB [1]. In contrast, on Macs, you can safely increase this to, for example, 116GB using `sysctl`.
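For reference, the Mac-side tweak being described is, I believe, the `iogpu.wired_limit_mb` sysctl on recent Apple Silicon macOS (older versions used `debug.iogpu.wired_limit`):

```shell
# Raise the GPU-addressable ("wired") memory limit on an Apple
# Silicon Mac. Assumed sysctl name; the setting resets on reboot,
# and raising it too far can starve the OS of memory.
sudo sysctl iogpu.wired_limit_mb=118784   # ~116GB on a 128GB machine
```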
It can go higher, actually; it's just that when I set up my test devices I had an "ought to be enough for everyone" moment when typing `options amdgpu gttsize=110000`. I guess this number spread too far, heh.
Apologies, I stand corrected. Do you have a reference for this? I'm genuinely curious why the 96GB "limit" is so frequently cited - I assumed it must be a hardware limitation.
Dual-layer LCD is best thought of as a separate technology from tandem OLED, due to being transmissive rather than emissive. In many respects, it surpasses OLED, which is why mastering monitors used in Hollywood still employ this technology. Unfortunately, the poor efficiency and excessive energy consumption/heat output have hindered its adoption in the consumer market.
> In many respects, it surpasses OLED, which is why mastering monitors used in Hollywood still employ this technology.
Flanders Scientific were the main champions of dual-layer LCDs in that market, and even they have phased out all of their dual-LCD models in favor of QD-OLED ones now. I think they just brute-force through OLED's usual brightness limits by actively cooling the panel.
Also, it costs $25,000. But that might not be related to it being a dual-layer LCD, but rather other exacting tolerances for professional mastering use.
Those rules and regulations only apply to consumer products, business/professional products are exempt. This product is obviously not a consumer product.
Dual-layer LCDs also have poor brightness, since a lot less light gets through the two layers. That can be overcome with a much stronger backlight, which produces tons of heat, requiring active cooling.
And speaking of high-brightness LED panels: I swear I got a tan from one during a conference where the speaker panel sat with their backs towards the LED wall.
I struggle to see the utility of projects like this. For tabular data in active use, decompression will still require the same peak memory, so optimizing data types (e.g., reducing float and integer precision, using categorical columns) is more effective. For storage or unused data, a more portable and supported solution like Apache Parquet, which offers native compression, or simply gzipping a CSV, seems more practical.
I don't think you understand what this project is doing. I skimmed through the doc, and it seems to be neatly summarized by the first paragraph:
> The library's main goal is to compress data frames, excel and csv files so that they consume less space to overcome memory errors. Also to enable dealing with large files that can cause memory errors when reading them in python or that cause slow operations. With lzhw, we can read compressed files and do operations column by column and on specific rows only on chunks that we are interesred in.
If I understand it correctly, the goal is to keep a losslessly compressed copy of the data in memory, and provide ways to work with it column by column or even in chunks, to reduce the amount of memory needed to complete an operation. And it handles data generally (not just categorical data) and losslessly (you cannot impose lossiness arbitrarily).
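The general pattern can be sketched like this, using zlib and pickle rather than lzhw's own Huffman/LZ78 coders, so it illustrates the idea but is not the library's API:

```python
import pickle
import zlib

import pandas as pd

class CompressedFrame:
    """Hold each column compressed in memory and decompress one
    column at a time, so peak memory is roughly one live column
    plus the compressed remainder. Illustrative sketch only."""
    def __init__(self, df):
        self._cols = {c: zlib.compress(pickle.dumps(df[c])) for c in df}

    def column(self, name):
        # Only this column is materialized at full size.
        return pickle.loads(zlib.decompress(self._cols[name]))

df = pd.DataFrame({"a": range(1000), "b": ["x"] * 1000})
cf = CompressedFrame(df)
print(cf.column("b").iloc[0])  # -> x
```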
But this library seems to be designed for a very niche purpose. It mentions laptops here and there in the doc, and the use case is datasets just above what your laptop's memory can hold, where the losslessly compressed data still fits. That makes it hard to write production code with, as the benefit of compression is unpredictable. Even for exploratory data analysis, it adds a burden to the mental model, as you really need to be in just the right spot for this to be useful. (There are also techniques that stream data from disk to handle data bigger than available memory.)
Thanks for the clarification! In that case, the README could definitely benefit from at least one example illustrating this functionality. That said, as you pointed out, selectively loading relevant columns, which pandas supports for both Parquet and CSV, would still be a more straightforward approach for most use cases.
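For instance, the pandas column pruning being referred to, via the standard `read_csv`/`read_parquet` parameters:

```python
import io

import pandas as pd

csv = io.StringIO("a,b,c\n1,2,3\n4,5,6\n")

# Only the requested columns are parsed and held in memory.
df = pd.read_csv(csv, usecols=["a", "c"])
print(list(df.columns))  # -> ['a', 'c']

# The Parquet equivalent: pd.read_parquet(path, columns=["a", "c"]),
# which also skips reading the pruned columns from disk entirely.
```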